
The Singularity, and Confessional Language


In our seminar this past week, we talked about the Singularity, the point at which machines become smarter than people and start designing machines that are smarter still, so that the gap between humans and computers running AI programs grows ever larger. Depending on whom you listen to, this could happen in 2045 (when the computing power of a CPU will, if current trends continue, exceed that of the human brain), or sooner, or later. There are people who worry about this a lot, and in the past couple of weeks there have even been a couple of Presidential studies that address the issue.

I always find these discussions fascinating, as much for what is presupposed in the various positions as for the content of the discussion. The claim that machines will be as “smart” as humans when the complexity of the chips equals the complexity of the human brain assumes a completely reductionist view of human intelligence, in which intelligence is just a function of the number of connections. This may be true, but whether it is or not is a philosophical question that has been under discussion at least since Descartes. Consciousness is not something we understand well; while it might be a simple function of the number of connections, it might be something else again. In that case, creating a computer with the same level of complexity as the human brain would not be the same as creating a conscious computer, although it might be a step in that direction.

Then there is the assumption that when we have a conscious computer, we will be able to recognize it. I’m not at all sure what a conscious computer would think about, or even how it would think. It doesn’t have the kinds of inputs that we have, nor the millions of years of evolution built into the hardware. We have trouble really understanding other humans who don’t live as we do (that is the study of anthropology), and this goes back to Wittgenstein’s dictum that “to understand a language is to understand a way of life.” How could we understand the way of life of a computer, and how would it understand ours? For all we know, computers are in some way conscious now, but in a way so different from ours that we can’t recognize it as consciousness. Perhaps the whole question is beside the point; Dijkstra’s aphorism that “the question of whether machines can think is about as relevant as the question of whether submarines can swim” seems apt here.

Beyond the question of whether machines will become more intelligent than humans, I find that the assumptions about what the result of such a development would be tell us something about the person doing the speculating. There are some (like Bill Joy) who think that the machines won’t need us, and so will become a threat to our existence. Others, like Ray Kurzweil, believe we will merge with the machines and become incredibly more intelligent (and immortal). Some think the intelligent machines will become benevolent masters, others that we are implementing Skynet.

I do wonder whether all of these speculations aren’t more in the line of what John L. Austin talked about as confessional language. While they appear to be talking about the Singularity, in fact each of these authors is telling us about himself: how he would react to being a “superior” being, or his fears or hopes of what such a being would be like. These things are difficult to gauge, but the discussion was terrific…
