What the Heck is Intelligence?

Hi everyone! I doubt anyone outside of my class reads this (in fact, with what someone in class said today in mind, I doubt anyone besides Professor Waldo and Dean Smith reads this), but if you exist: I didn’t post last week because Columbus Day is a Harvard holiday, so we had no class. But now we’re back and better than ever, moving into artificial intelligence and the Singularity.

If you’re unfamiliar with the Singularity, think about it like this: humans manage to create superhumanly intelligent beings. These beings create beings who are even more intelligent and who create beings… I hope you see where this is going. The resulting intelligence explosion is commonly called the Singularity, as it will propel us into a completely new era, one where human intelligence is virtually obsolete.

In class, we had a big debate about whether we could create a superhuman intelligence that was like a human in every respect. I argue it’s possible (there’s nothing magical about the brain), but it seems like an absolutely awful idea. One person in class raised the point that we worry more intelligent beings will want to subjugate us, just as we have subjugated all other life on this planet. He went on to ask, “Does that speak to the beings we will create or to ourselves?” Maybe that’s what we fear because that’s what we do. And in that case, why in the world would we want to create a human-like superintelligence? It would be forging our own shackles. Perhaps more intelligent beings will necessarily subjugate less intelligent ones. But we KNOW human-like intelligent beings do that. Why would we create them? People often explore new technological territory by recreating something familiar within it, but with stakes this high, we can’t afford to mess up.

Of course, if we’re not creating human-like intelligence, what exactly is “intelligence”? I would define intelligence as the ability of an entity to function on its own, carrying out various input-to-output functions that self-optimize as the entity takes in more input. For example, as humans we take in input (the stove is hot) and produce output (take hand off stove). In the future, our decision-making functions change to prevent us from putting our hand on the stove in the first place. You could take issue with this definition; indeed, roughly half of class today was spent trying to nail down a definition of intelligence. But I like it. Intelligence in this sense is not restricted to things like playing chess or solving math problems. It can apply to social interaction, emotions, literally anything you can imagine an intelligent being having to do. It also fits well with the idea of “machine learning”: computers changing their algorithms based on the input they get.
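To make this concrete, here’s a minimal sketch of the stove example under that definition: a single input-to-output function whose internals change as feedback comes in. The class, threshold, and learning rate are all invented for illustration (nothing from class or from any real ML library):

```python
class StoveLearner:
    """A toy agent under the post's definition of intelligence: one
    input-to-output function that self-optimizes as input comes in."""

    def __init__(self, learning_rate=0.5):
        self.expected_pain = 0.0   # initial belief: touching the stove is harmless
        self.learning_rate = learning_rate

    def act(self):
        # Output: touch the stove only while it still seems harmless.
        return "touch" if self.expected_pain < 0.5 else "avoid"

    def learn(self, pain):
        # Input: nudge the pain estimate toward what was actually felt,
        # which changes what act() will output next time.
        self.expected_pain += self.learning_rate * (pain - self.expected_pain)


agent = StoveLearner()
for step in range(4):
    action = agent.act()
    if action == "touch":
        agent.learn(pain=1.0)   # the stove is, in fact, hot
    print(f"step {step}: {action} (expected pain: {agent.expected_pain:.2f})")
```

After one burn, the same input produces a different output, which is all the definition asks for; nothing chess-specific or math-specific is built in.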

Of course, this definition opens the door to a discussion of free will, because if you believe in free will, you probably don’t accept my framing of decision-making, empathy, emotions, and so on as just input-output functions. But they certainly seem to be. The human brain is just a collection of neurons firing, or at a lower level, just a collection of chemical reactions. There is nothing magical about it. That isn’t to say it isn’t a beautiful piece of machinery, just that it’s deterministic: put in some input and you will get a predetermined output. (Note again that I don’t think recreating human brains is a good idea; this is just an example of a well-known intelligence.) So while free will is a white lie I tell myself to make my head hurt less when grappling with decision theory, it is certainly not true and shouldn’t enter discussions of intelligence.

So now we have this definition of intelligence. How do we build God with it? How do we not mess this up? I don’t know, but I believe these will be the most important questions of our generation.

3 Comments

  1. Allison Lee

    October 19, 2016 @ 8:40 pm


    Hi Duncan,

    Thanks for your thoughts! I actually have been reading your blog posts, and they’re really great. 🙂
    With regard to your definition of intelligence, I definitely agree, but then my question is: do you think that the people behind our technology are considering this same definition? And if not, what other “human-like” qualities do you think will be implemented into our self-determined machines as a facet of intelligence? Or do you think that innovators are currently fixated on giving our machines autonomy? I, not unlike a lot of people, fear the rise of an apathetic (perhaps even antipathetic, at worst) superbeing. What do you think?

  2. duncanryoo

    October 20, 2016 @ 7:05 am


    Hey Allison, thanks for commenting! Your questions are really good. To be perfectly honest, my knowledge of how actual researchers approach artificial intelligence is very spotty, limited to a smattering of posts on Less Wrong by people like Eliezer Yudkowsky. I imagine so: my general impression is that the people who do this sort of work think like me, just better. But I’m not certain.

    With regard to human-like qualities that could find their way into machines, I imagine people will try to include emotion. I can sense the elephant trudging into the room: “Well, what is emotion?” That’s a discussion for another time. But emotional intelligence is one of the things many people think will be hardest to recreate and most vital to include, because if a computer cannot feel emotions, how could it empathize with us? Why would it help us? Personally, I don’t think this is necessary (though neither is it necessarily bad): algorithms can optimize without any sense of empathy. But I’m not really certain what a brain without emotion would look like, so it’s hard to say.

    With regard to risk, there’s certainly plenty to fear about creating superintelligence. If we mess it up, we’re all dead, or worse. But I tend to accept Vernor Vinge’s argument that it will happen no matter what (we live in a competition-driven world, and the pre-Singularity world is an unstable equilibrium). And if it’s going to happen, we might as well be pouring all of our efforts into getting it right.

  3. school of applied science

    November 3, 2016 @ 11:22 am


    I think you have raised some very interesting points; thanks for the post.
