Superhuman Intelligence

Ramblings of a college student who thinks he’s smarter than he is; mostly in response to this piece by Vernor Vinge.

I’m not in a position to criticize a whole field, or even a small subset of a field, but I find the general approach to finding “superhuman intelligence” quite unconvincing. It seems the limitations are taken to be largely computational, that the human brain is somehow better than modern hardware. I’m not sure something that runs on less than 100 watts is computationally more powerful than a cluster of computers consuming several kilowatts. From what I understand, the strength of our brains seems to come from some innate structure or features that are dictated genetically. This allows any given human infant to learn any given human language, even with the so-called “poverty of stimulus.”[1]

More and more, we are finding unexpected characteristics in our “AI” programs, notably those that exhibit human bias in the form of racial prejudice. Surely, if these algorithms could learn and “think” just as well as humans, then given enough time even the racist ones would conclude that racism has no biological basis. I think, rather, that any intelligent being can only learn if it has been “pre-programmed” to be “narrow minded” in some sense. Just as humans may have some sort of innate understanding of grammar, so too must machines.

Even if we build our AI using methods that simulate evolutionary processes, we would have to constrain them initially, since the space of all possible genetic combinations (or the space of all possible algorithms) of a reasonable length is far, far larger than what can reasonably be explored (even by evolution in the “natural” world, even given billions of years…with or without accelerating returns). Assuming a static fitness landscape, where exactly we begin determines which local maxima (in terms of fitness, i.e. intelligence) can plausibly be reached. There is no guarantee of getting anywhere unless we constrain, and if we do constrain, we are not guaranteed anything close to the global maximum. Also, if we constrain, then we will be the creators and principal designers of the resulting super-intelligence. Somehow this sounds much less exciting: assuming we are pretty dumb, human super-intelligence doesn’t sound nearly as exciting as “superhuman intelligence.”
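To make the local-maximum point concrete, here is a minimal sketch of greedy hill climbing on a made-up one-dimensional fitness landscape. Everything in it (the landscape function, the step size, the starting points) is a hypothetical choice for illustration, not a model of anything real; the point is only that where the search starts decides which peak it ends on.

```python
import math

# A made-up, multimodal "fitness landscape" (purely illustrative):
# several peaks of different heights.
def fitness(x):
    return math.sin(x) + 0.6 * math.sin(3 * x)

def hill_climb(x, step=0.01, iters=10_000):
    """Greedy local search: move to a neighbour only while fitness improves."""
    for _ in range(iters):
        best = max((x - step, x, x + step), key=fitness)
        if best == x:  # no neighbour is better: stuck on a local maximum
            break
        x = best
    return x

# Three different starting points settle on three different peaks,
# and only one of them is the tallest.
for start in (-4.0, 0.0, 4.0):
    peak = hill_climb(start)
    print(f"start {start:+.1f} -> peak at x = {peak:+.2f}, fitness {fitness(peak):.2f}")
```

A real evolutionary search, with populations and mutation, can jump around more than this caricature, but the basic dependence on the starting region (i.e. on the initial constraints) remains.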

Another potential way to attain “superhuman intelligence” is by augmenting human intelligence with machines. Vinge discusses a mild form of transhumanism in which humans interact with computers to collaborate on tasks, rather than uploading our brains to silicon chips and the like. His argument is that humans augmented with machines can reach a higher intelligence by moving beyond what just a human can do. High school algebra is certainly easier with a calculator, but I think that many of the advances in human–machine interaction and collaboration have not created more intelligent beings. Cellphone distraction causes accidents, for motorists and pedestrians alike. Studies show that a phone’s mere presence, even when it is turned completely off, distracts us and makes us perform worse on human-only tasks. Yes, we can communicate far more quickly and with far fewer constraints, and yes, we have the world at our fingertips, but most of us just end up opening and closing the same programs in cycles, scrolling through Facebook news feeds consuming adverts (Russian or otherwise) and arguably gaining nothing but the comfort of dull complacency. All while risking death by distracted street-crossing.[2]

Moving on to stronger transhumanism (real physical integration), I expect Facebook, Apple, and co. will make it just as distracting (even if it increases productivity, as cellphones certainly can), and again it won’t actually be a significant step towards “superhuman intelligence.” At any rate, I think we should first figure out what makes us intelligent (or not) and what intelligence even is (it’s not chess anymore) before claiming the singularity is near.


[1] Some argue that the poverty of stimulus and universal grammar are not correct theories, but when in doubt I’ll side with Chomsky.

[2] I haven’t found any real evidence that this has happened, but sadly I find it more “inevitable” than super-intelligence.
