Robot Rights

During our discussion on Monday, we went off on a long tangent about whether a self-driving car should save its own passenger or the larger group of people it is about to hit. Of course, that decision would ultimately be made by a programmer, but in the moment of a crash it would be the computer, thinking faster than any human, that is in control of the situation. Ideally, if everyone had self-driving cars, then car accidents would become a thing of the past. However, humans are dumb/impatient both as drivers and as pedestrians (which became extremely obvious to me upon coming to Cambridge), so there is still potential for lethal collisions. That is a big decision and a lot of power in the hands of a programmer. If someone were to die, who would be held responsible for the death?

This very question reminded me of the Takata airbag scandal, in which faulty airbags were linked to the deaths of more than 11 people. Basically, the creators/testers of the airbags knew the airbags were defective and used harmful chemicals, but fudged the numbers and lied on documents in order to get them approved and make more profit off of them. After multiple deaths were attributed to shrapnel and explosions from the airbags, the executives were accused of egregious conduct. However, I remember listening to the radio when they said the executives were charged with murder, and I thought it was peculiar. Technically, they didn’t kill anyone with their own hands, but the middleman was a product of their design. Where do we draw the line between causation and correlation? Can a machine be held responsible for a death?

In the future, I think we will begin to see advocates for robot rights (especially with our PC culture). I am not sure about the possibility of robots gaining self-awareness or consciousness, because I’m not really sure what it means to be self-aware and conscious myself, but they are definitely making strides in their ability to adapt and learn on their own. Can we program AI to have feelings? Siri seems to feel sadness/anger when you insult her. It is only an algorithmic verbal reaction, rather than a visceral or deeply rooted emotional one, but I’m not sure that’s any different from some of my own emotions/reactions.

On that same note, Vonnegut wrote a short story, “EPICAC,” about a machine that is programmed to gather data about one woman in order to write poetry she would like. Although the programmer designed the machine to make the woman fall in love with him, the machine ends up falling in love with the woman instead. The ending is hilarious; I recommend watching the 20 min. short film here. Anyway, this was written in 1950, and we’re still asking whether computers can make ~real~quality~meaningful~ art. I totally think they can. As long as a work produces emotional reactions in its viewers, it is worthy of my approval as fine art. Whether the artist or machine intended it to have a certain meaning is beside the point, because art can have multiple interpretations. The whole point of art is to get you to think about something one way or another. In fact, I want to see a computer-produced artwork in a major art museum before I die. It’s no different from Ai Weiwei coming up with an idea and hiring his crew of painters to actually execute it while he’s probably away pissing off the Chinese government or something.

On a different note, I remember when I first discovered Cleverbot back in the early 2000s, my first encounter with AI. Frankly, it creeped the heck out of me, but I couldn’t shake my awe. I thought, “There has to be someone on the other end responding to me. There’s no way this is a computer.” I would invite my friends over, and we would log onto my fat PC, Windows 95 landscape and everything, so we could converse with Cleverbot for at least an hour. We tried everything to catch her in a mistake. Occasionally, she would say something that didn’t make sense, and we would feel a sense of relief that this AI was not quite as advanced as us 5th graders. After we validated our superiority, we would log off, forget about it, and go play some Poptropica or something. We had no idea how powerful this new technology truly was.

 

One thought on “Robot Rights”

  1. Great post, with some great questions.

    We will be talking a lot more about the possibility of sentience in robots, and our ethical responsibilities to them and each other, in a couple of weeks (when we talk about the singularity). So we will be going deeper into all of the questions you ask.

    You rightly bring up the question of whether we can know if other people are really sentient. This is known in philosophy as the problem of other minds (how do we know that there are any?), and there is a long history of discussion and debate on the topic. If we can’t even be clear about other people, how are we going to get clear about machines?
