
Projection and Confession


I increasingly find myself in the middle of discussions about what the machines are going to do when they become smarter than we are. This worry has fueled science fiction for as long as there has been science fiction (I have an anthology from the 1940s and early 1950s in which the theme shows up). But the conversation has taken on a new immediacy since the deep learning advances of the past couple of years. Machines now play go at the highest level, machine vision is getting much better, and there seem to be new breakthroughs all the time. So it’s just a matter of time, right?

I’m not so sure.

My first area of skepticism is whether, as the AIs get better and better at what they do, they come any closer to being sentient or thinking. Computers play chess in a very different way than people play chess, and I suspect that the new silicon go champions are not approaching the game the way their human counterparts do. I’m always reminded of Dijkstra’s comment that “The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.” Submarines certainly move through the water well, but it isn’t what I would call swimming. And while computers can do lots of interesting tasks, I’m not sure it makes sense to say that they think.

Saying that they think projects our own model of what it takes to solve a problem or do a task onto (or into) the machines. We build these kinds of anthropomorphizing models of non-human things all the time. We even do it with machines: we talk about how our computers are out to get us, or about the personalities of our cars. Of course, we also project our internal life onto other people, where all the evidence we have is that they act and react as we do. But we share a lot more with other humans (evolution, biology, and the like), which makes that projection seem a bit more reasonable (although not provable, since the problem of other minds is still active in philosophy).

So I tend to be a bit skeptical of the claim that, because machines can do the things we do, they are therefore able to think and be conscious in the same way we do.

But even if I were willing to grant that, at some point of complexity and with the ability to learn, the computers of the future will become sentient and self-aware, I’m not sure that the worries of many who talk about the singularity are warranted. My skepticism here concerns the unstated assumption that if the machines become sentient, they will also behave the way people behave. The worriers seem to jump from the singularity to the conclusion that the new, super-intelligent machines will keep us as pets, or try to destroy us, or set themselves up as gods, or simply not need us and treat us as ants.

Maybe this too is projection: if the machines are sentient the way we are, they will act the way we do. I tend to see this as more a case of confession on the part of the people doing the worrying; it tells us what they would do if they were the super-intelligent beings. But human motivations are much more complex than what sentience or even intelligence dictates. We are still wired with desires for food, reproduction, and all sorts of other things that have nothing to do with learning or being intelligent (if you think human behavior is driven by intelligence, you haven’t been paying attention).

So I’m not at all sure that machines will ever be intelligent or sentient, but if they ever are, I’m even less sure I know what will drive their actions. A super-intelligent machine might decide to go all Skynet on us, but I think it is just as likely to ignore us completely. And just as we don’t understand how many of the current machine algorithms actually work, we might not understand much about a super-intelligent machine. Because of this, on my list of worries, the singularity doesn’t make the cut…


1 Comment

  1. cindizzle4

    October 22, 2017 @ 10:38 pm


    I personally am also not very worried about the singularity. I don’t fear machines going Skynet on us, because if machines become “smarter” than humans, they wouldn’t behave in ways we could have predetermined.

    The fact that humans are connected to their biology is also a very important point to consider when “recreating” human intelligence. Human actions, even when it is not obvious, are often determined by biological needs, like the desire to survive and reproduce. This heavily influences how human beings act and think, and it is an integral part of how I believe we should define “intelligence”. How can artificial intelligence be like human intelligence without this common trait? There are too many questions 🙁
