Last week, my friends and I had one of those classic late-night, deep philosophical dorm room conversations that left us all mind-blown and starry-eyed. And it was all prompted by some thoughts I came away with after our seminar on the Intelligence Singularity on Monday.
I had been reflecting on something someone mentioned in class: a couple of billionaires plan to pay to have themselves preserved so that, if and when the technology exists, they can have their minds transferred to automated platforms and, in a sense, live forever (you can read more here). I asked my friends if they would be willing to do this, were the technology offered to them.
I was surprised to hear that very few of them would. Even with the guarantee that their families would also be automated in this hypothetical scenario, they didn't believe they would still be human. I wondered whether the way I live my life today would be different without the subconscious urgency that comes from knowing our time here is limited. Another friend told us that he believes death shouldn't be feared: "When your time has come, there's nothing you can or should do about it," he argued. He reckoned that those billionaires were ultimately motivated by an unnecessary fear of death.
I had difficulty agreeing with that argument, but I tried to wrap my mind around what life would be like if I were just a network within a machine. Would it be comparable, or even tolerable, without the senses and mechanics of a bodily life? How closely linked is our conception of our lives and our consciousness with being in our bodies?
Of course, this all gets at what it means to be human, a question we've tackled previously in this course as we've tried to determine what the true Turing Test, or benchmark for Artificial Intelligence, should be. One of our proposed definitions was whether you can be in love with AI. The 2013 movie Her comes to mind, in which the main character falls in love with his operating system and later discovers that she is simultaneously talking to, and in love with, thousands of other humans around the world. Is that really love? And if it is, is it a good benchmark, or can AI achieve superior intelligence without emotional intelligence?
Ultimately, these are questions we may very well never answer concretely. But as Professor Smith wrote in a recent post, worrying about the future may in some ways be less important than attending to the present. It seems that one of the main outcomes of considering the future is pushing the boundaries of how we define ourselves.