Post-Humanity

Each of us is limited to our own experience; we can never truly know what it feels like to be someone else. In this sense, we understand consciousness and self-awareness through a deeply personal, intuitive experience, without a concrete definition or a clear boundary between what is conscious and what is not. So how can we determine when artificial intelligence crosses that undefined line?

We still don’t have a solid answer, but most people accept that it will inevitably happen. It’s interesting that so many of us agree something will happen even though we’re not quite sure what that something is. Part of me hopes I won’t live to see artificial intelligence become self-aware and independent, but another part is curious whether humans and AI could actually coexist. And since we keep pushing the definition of sentient AI further out, maybe it will forever remain a thing of the far future, which always seems to be about ten years away.

On that same note, I find it extremely difficult to imagine a world that doesn’t revolve around humans. I mean, it took me until about the 6th grade to realize the world didn’t solely revolve around me. Evolution tells us that humans and animals alike evolved from a common ancestor, yet somewhere along the way we diverged enough to deem ourselves superior. Cats, on the other hand, would disagree, because they view us as big dumb furless babies.

Whether superior AI decides to obliterate the human race or simply disregard our existence depends solely on their desires and the role we would play in their lives. I think, or rather optimistically hope, that if humans posed no risk to the existence of AI, then they would have no reason to want to destroy us. Whenever I see ants going about their day, I don’t feel the need to squash them. However, if they invade my kitchen and contaminate my watermelon, then it is war.

I think we generally sympathize more with insects and animals that are larger and more similar to us. For example, humans will not think twice about killing a tiny spider, but it is unthinkable and utterly inhumane to kill a dog. We have to take cultural factors into account as well. Cows are large, but they are considered an essential meat group in the United States, so slaughtering them by the thousands is commonplace. In India, cows are extremely sacred, so murdering a cow would be blasphemous and morally wrong.

There’s no set rhyme or reason to religion, so perhaps AI would develop their own way of life, and humans could easily end up on their shitlist or as part of their holy trinity. There’s no surefire way to tell. OR, maybe, unlike humans, AI would be perfectly comfortable with existing without a purpose or reason and wouldn’t need to come up with arbitrary guidelines and values to justify and validate their lives.

To be honest, our discussion left me muddled, equal parts existential dread and eager curiosity. My favorite part was when someone stated, “Happiness is overrated.” It got me thinking: when and how did I learn that achieving happiness through altruism and ~making the world a better place~ was the ultimate end goal? Is that something intrinsic to our humanity? Personally, I am a fan of films and books that end with a question instead of some positive universal clichĂ© call to action, because they are unapologetically honest.

3 thoughts on “Post-Humanity”

  1. Interesting set of thoughts…

    I guess you can add me to the people who don’t think that machines will inevitably become conscious. I’m not sure that is something that a machine can do, at least not in the same way we are conscious.

    This doesn’t mean that we won’t need to ask the question, sometime in the future, of whether we need to act ethically with respect to machines. But if they are in some way self-aware, I’m not sure it will be in a way that we could either understand or recognize.

    Hard questions, but interesting to think about…

  2. Great post! You really got me thinking with your statement, “OR, maybe, unlike humans, AI would be perfectly comfortable with existing without a purpose or reason and wouldn’t need to come up with arbitrary guidelines and values to justify and validate their lives.” What an interesting and unique question.
