Defining Ourselves

Last week, my friends and I had one of those classic late-night, deep philosophical dorm room conversations that left us all mind-blown and starry-eyed. And it was all prompted by some thoughts I came away with after our seminar on the Intelligence Singularity on Monday.

I had been reflecting on something a classmate mentioned: a couple of billionaires plan to pay to have themselves preserved so that, if and when the technology exists, their minds can be transferred to automated platforms and they can, in a sense, live forever (you can read more here). I asked my friends if they would be willing to do this, were the technology offered to them.

I was surprised to hear that very few of them would. Even with the guarantee that their family would also be automated in this hypothetical situation, they didn’t believe they would still be human. It made me wonder whether I would live my life differently without the subconscious urgency that comes from knowing our time here is limited. Another friend told us that he believes death shouldn’t be feared: “When your time has come, there’s nothing you can or should do about it,” he argued. He reckoned that those billionaires were ultimately motivated by an unnecessary fear of death.

I had difficulty agreeing with that argument, but I tried to wrap my mind around what life would be like if I were just a network within a machine. Would it be comparable, or even tolerable, without the senses and mechanics of life in my body? How closely is our conception of our lives and our consciousness tied to being in our bodies?

Of course, this all gets at what it means to be human, a question we’ve tackled previously in this course as we’ve tried to determine what the true Turing Test, or benchmark for artificial intelligence, should be. One of our proposed definitions was whether you can be in love with AI. The 2013 movie Her comes to mind, in which the main character falls in love with his operating system and later finds that she is simultaneously talking to, and in love with, thousands of other humans around the world. Is that really love? And if it is, is that a good benchmark, or can AI achieve superior intelligence without emotional intelligence?

Ultimately, these are a jumble of questions to which we may very well never have concrete answers. But as Professor Smith wrote in a recent post, worrying about the future may in some ways be less important than attending to the present. It seems that one of the main outcomes of considering the future is pushing the boundaries of how we define ourselves.

2 Comments

  1. Thank you for sharing an outline of your late-night, philosophical conversation with your friends. It is too easy to forget that others may not view the world the way I do, and, more importantly, to overlook why they hold their views. That curiosity and courage to walk in another person’s shoes is critical, in my humble opinion. Your post has given me new things to consider.

  2. This is a terrific post. But let me see if I can start a different late-night session.

    You talk about the AI in “Her” as being in love with thousands of people, and you wonder if it can be real love. Perhaps it can; what is being violated here is our human notion that love needs to be limited. But if the AI can truly love all of those people, isn’t the problem ours, not hers (its?)?

    The question here is really about the ethical expectations we have of other entities we consider intelligent. If an AI is smart like us, it should love like us, but I’m not sure that the connection is right.

    Hmm, this may keep me up late tonight…
