Does s/he think?

This week’s discussion managed to bridge two of my interests: computer science and the comparative study of religions. Entering college, I have been considering a concentration in computer science with a secondary in the comparative study of religions. Both fields challenge the way I think, but in different ways. While contemplating these choices of study, I never really tried to find a connection between the two. Monday’s discussion about Artificial Intelligence revealed a relationship between them, because questions of purpose and morality kept appearing. The concept of “The Singularity,” the moment when machine intelligence surpasses that of humanity, leads to the question of whether humanity will still be needed at all. In the malevolent scenario, where humans are no longer needed and are attacked or killed, what becomes the purpose of life for the machines? Artificial Intelligence has been created to help humans with meticulous tasks and tasks beyond human ability, so if humans are gone, what will machines essentially “live” for? This made me wonder whether robots would create religions. A religion helps add guidance, purpose, and meaning to a life. However, this makes me further question whether these AI machines will even have a “life.”

One dictionary definition of life is “the condition that distinguishes animals and plants from inorganic matter, including the capacity for growth, reproduction, functional activity, and continual change preceding death.” If these robots gain the capacity to think for themselves, are they still inorganic matter? Personally, I do not think robots would fit into the (current) definition of life, but they might achieve consciousness.

Consciousness is defined as “the state of being awake and aware of one’s surroundings.” To me, it seems like it would be easy to claim a computer is conscious by this standard, because we program computers to be aware of their surroundings. Judging whether a computer is “awake” or not is the challenging part. Sentience is defined as “the capacity to feel, perceive, or experience subjectively.” This definition will be harder to corroborate with machines because all of these actions are personal. This reminds me of Descartes’s “I think, therefore I am.” Each individual can know that they themselves are conscious, but how do we know that anyone else is? I find it interesting that the definition of sentience includes the ability to experience subjectively, because, thinking about it, this is a very human trait. We all form opinions and treat each situation in our lives with bias, whether we like it or not. This means that the people creating Artificial Intelligence influence their work with their own biases, both subconsciously and consciously, so the machines may have a sense of subjectivity, but it will have been programmed into them. This isn’t much different from humans, though, because we pick up opinions and our own subjective definitions from the people we interact with and the experiences we have.

This raises many ethical questions regarding Artificial Intelligence. We know that we are going to have to encode morality, so how do we decide which opinions are “right” and which are “wrong”? Even in our seminar, people held a wide range of views. I think this will prove to be one of the most difficult questions in developing AI.

Finally, while reading articles about the Facebook AI that created its own language, I came across this article, which also discussed Google’s AI. I learned that Google Translate’s AI started translating into an intermediate language of its own, and then from that language into the requested one. This turned out to be a useful tool, so Google did not shut the program down the way Facebook did with its AI. I also found the article interesting because there was a fact check at the end clearing up why Facebook shut down its AI. The Internet had skewed the story into Facebook shutting down the AI out of fear of the bots knowing their own language, but in reality, the program was shut down because it was not serving the purpose it was created for.
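To make the Google Translate story concrete: the “own language” being described is what researchers call an interlingua, a shared intermediate representation that every language is encoded into and decoded out of. Below is a minimal toy sketch of the idea in Python, with a tiny made-up vocabulary; it bears no resemblance to Google’s actual neural network, but it shows why a shared representation is so useful:

    # Toy illustration of an "interlingua": every language is mapped to a
    # shared set of concept IDs instead of directly to another language.
    # (All vocabularies here are made up; this is nothing like Google's
    # actual neural system, just the idea behind it.)
    TO_CONCEPT = {
        "en": {"hello": "GREETING", "world": "WORLD"},
        "es": {"hola": "GREETING", "mundo": "WORLD"},
        "fr": {"bonjour": "GREETING", "monde": "WORLD"},
    }

    # Invert each vocabulary so concepts can be decoded into any language.
    FROM_CONCEPT = {
        lang: {concept: word for word, concept in vocab.items()}
        for lang, vocab in TO_CONCEPT.items()
    }

    def translate(text, src, dst):
        """Encode the source words into concepts, then decode into dst."""
        concepts = [TO_CONCEPT[src][word] for word in text.lower().split()]
        return " ".join(FROM_CONCEPT[dst][c] for c in concepts)

    # Spanish-to-French works even though no es->fr table was ever written:
    # both languages only ever talk to the shared representation.
    print(translate("hola mundo", "es", "fr"))  # prints "bonjour monde"

The striking part is the last line: translation between a pair of languages that were never directly connected falls out for free, which is roughly what researchers mean when they describe Google’s system as capable of “zero-shot” translation.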

I am looking forward to seeing how AI will develop in the coming years. I wonder if this will create a new sort of arms race between countries and governments. This also makes me realize that different types of AI will be created to serve different purposes. In a dystopian future, I wonder if this will create different “races” of AI.

2 thoughts on “Does s/he think?”

  • October 22, 2017 at 5:54 pm

    The ethical questions of how to distinguish right from wrong are coming up in more and more technology discussions, not just those having to do with AI (although they are particularly pressing there). We (the CS program) are starting to work with the Philosophy department to include more ethics training in the CS courses we teach. What the instructors agree on is that we aren’t trying to teach what is right, but rather how to recognize that there is an ethical dimension to a problem and how to think about it.

    The combination of comparative religion and CS will lead to some interesting topics. I’d love to hear what this leads to…

  • October 22, 2017 at 8:40 pm

    Like you and Jim, I feel that ethical questions about what to do in particular, real-life situations are the most interesting questions of our generation. As computing systems infiltrate every aspect of our lives (and not just the strictly computational aspects), the design of these systems has become much more complicated. It will be people like you, with your dual interests, who will lead us through this complexity. I enjoyed your thoughtful post, and I look forward to where you lead us.
