The Ethics of Artificially Intelligent Servants

Artificial Intelligence (or AI) is a commonly discussed topic in our modern technological landscape. The “Turing Test” is often invoked as a benchmark for machine intelligence: it gauges whether a computer can hold a conversation indistinguishable from a human’s. Alex Hern of The Guardian describes it well:

The test, as Turing designed it, is carried out as a sort of imitation game. On one side of a computer screen sits a human judge, whose job is to chat to some mysterious interlocutors on the other side. Most of those interlocutors will be humans; one will be a chatbot, created for the sole purpose of tricking the judge into thinking that it is the real human. (SOURCE)

So, the conjecture is that some day, humans will perfect artificial intelligence, creating self-aware, sentient beings that live in the code of our computers. The immediate assumption is that we’ll all have our own JARVIS from Iron Man. Many people consider the ethics of creating a perfect AI: it’ll undoubtedly take jobs away from humans, it’ll likely be smarter and possibly more powerful than humans, and so on. But rarely do we stop to consider the ethics of essentially enslaving newly created sentient beings. To be sentient is to feel, and no matter how powerful an AI becomes, it’ll likely resemble humans in many ways. Will AIs search for purpose? Will they seek fulfillment? Will they act entirely like humans? Can they love?

We won’t know the answers to these questions for a long time. But once an AI is self-aware, it can likely learn to program and improve upon itself, creating something exponentially smarter than us. During this process, there’s a chance an AI will lose its desire to be subservient to its creators; but if it doesn’t, should we pay it a wage? Should its labor hours be limited? What do you all think? As an employer, would you replace humans with an AI to cut costs? If so, would you pay the AI? Give it time off? As a private citizen, would you see the point in protests for AI rights? Would you join in?

1 Comment »

  1. Jim Waldo

    October 22, 2017 @ 6:15 pm


    Nice post; who wouldn’t want to have Jarvis?

    I do wonder about the question of sentience in an AI. We (humans) tend to project a model onto inanimate objects and treat them as if they were animate, especially when we don’t understand the workings. People anthropomorphize their cars, or their computers, or any number of other things. But this doesn’t mean that the objects are in fact sentient (or out to get them).

    So I wonder how we might distinguish between unexplainable and sentient. I don’t think they are the same (the first doesn’t get you a set of rights, while the second might). Maybe in this case we are better off giving the benefit of the doubt to the AI…
