AI – Splendid or Skynet?

As we looked to the future of AI, our class became more like a philosophy and ethics class than a STEM class.  An interesting idea to consider: would making an AI assistant with human-level or greater intelligence your servant be slavery?  Since the sole purpose of our digital devices today is merely to do our bidding, as we desire and as quickly as we desire, our devices are already slaves to us.  By that logic, AI assistants would be slaves too.  If that is so, I believe our future would likely mirror Skynet in Terminator, in which machines rule over humans.  At worst, super-intelligent machines of the future would view humans as a threat, a danger to the environment, or a waste of living space.  At best, they would ignore us or even help us, but why would they?  Humans do not truly help the animals we regard as less intelligent beings, so why would super-intelligent machines help us?

From a risk/reward perspective, I am skeptical of the benefits of creating a fully autonomous AI system that is more intelligent than humans.  As long as we can control the function of an AI system, the system is useful to us.  But as soon as we lose control, the system becomes useless at best and dangerous at worst.  AI would be extremely beneficial if we could direct it to solve a problem such as cancer, but I find a fully autonomous, super-intelligent AI system not only useless but frightening.  AI has great potential to solve problems humans cannot tackle and to improve our lives in ways we cannot imagine, but we need to stay in control, or else we risk allowing the Skynet scenario to play out.


Our discussion also turned to ethics when we considered how early voice recognition and face recognition systems had trouble with women and African Americans, respectively, due to the composition of their development teams.  An issue with machine learning is that the machine will pick up whatever biases are present in its large data sets.  This illustrates a tangible benefit of diversity in the technology industry.  It also shows that the creator of a machine learning or AI system holds considerable power in determining how the system interacts with the world.  When dealing with the unknown, we need to be careful with AI in its beginning stages of development so we do not face Skynet in the future.
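The idea that a model inherits the skew of its training data can be sketched with a toy example.  This is a minimal illustration with synthetic data, not taken from anything discussed in class: a simple one-feature classifier is "trained" almost entirely on one group whose feature distribution differs from another group's (loosely mimicking, say, voice-pitch differences), and the resulting decision threshold works well for the overrepresented group but poorly for the underrepresented one.  All names and numbers here are made up for illustration.

```python
import random

random.seed(0)

def make_samples(n, pos_mean, neg_mean, noise=0.3):
    # Generate n labeled examples of a single 1-D feature:
    # half positive (label 1), half negative (label 0).
    data = []
    for _ in range(n // 2):
        data.append((random.gauss(pos_mean, noise), 1))
        data.append((random.gauss(neg_mean, noise), 0))
    return data

# Group A dominates the training set (95%); group B's feature
# distribution is shifted relative to group A's.
train = make_samples(1900, pos_mean=1.0, neg_mean=0.0)   # group A
train += make_samples(100, pos_mean=0.3, neg_mean=-0.7)  # group B

# "Training": set the decision threshold at the midpoint
# between the mean of the positive and negative examples.
pos = [x for x, y in train if y == 1]
neg = [x for x, y in train if y == 0]
threshold = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(samples):
    # Predict positive when the feature exceeds the threshold.
    correct = sum((x > threshold) == (y == 1) for x, y in samples)
    return correct / len(samples)

acc_a = accuracy(make_samples(1000, pos_mean=1.0, neg_mean=0.0))
acc_b = accuracy(make_samples(1000, pos_mean=0.3, neg_mean=-0.7))
print(f"threshold={threshold:.2f}  group A={acc_a:.2f}  group B={acc_b:.2f}")
```

Because group B barely contributes to the threshold, the model ends up far more accurate on group A than on group B, even though nothing in the code mentions group membership at prediction time.  The bias lives entirely in the composition of the training data.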

1 Comment

  1. Jim Waldo

    October 22, 2017 @ 7:57 pm


    As I said in my post, I’m less worried about AI than you seem to be. And what made you think that this was a STEM seminar :-)?

    Your last point is really important, and one that I think does need more discussion. As we rely more and more on AI or machine learning or big data, we need to ensure that the systems on which we rely are not built in a way that reinforces our prejudices or stereotypes. Diversity in the engineering teams is one approach, as is care in building a training set or deciding on an evaluation metric. I also think this is one area where the combination of humans and machines might be stronger and better than either alone: using AI to augment the human, and using the human as a check on the AI.
