Google & DeepMind: Society, too, must ask ethical questions


Google just bought a new Artificial Intelligence firm, DeepMind. Not a surprising move, but every step Google takes in the robotics / AI direction makes the need to consider the ethical and legal implications of these activities more urgent.

In a nutshell, DeepMind is a London-based, Singularity-inspired AI firm (see co-founder Shane Legg’s talk at the 2010 Singularity Summit). This is reported to be a talent acquisition.

Founded by neuroscientists, DeepMind’s goal is to create computers that can function as human brains. Legg sees this happening by 2013 – of course, in the process of making intelligent machines, they also wonder what exactly intelligence is, and how to measure it (see Legg’s paper here).

In AI lingo, this is called “strong AI”. The following short description stresses that strong AI is about replicating human intelligence in general, not solving specific problems (like: how can Google’s search engine give you better ads based on what it already knows about you from your emails?):

Strong AI is a hypothetical artificial intelligence that matches or exceeds human intelligence — the intelligence of a machine that could successfully perform any intellectual task that a human being can.  It is a primary goal of artificial intelligence research and an important topic for science fiction writers and futurists. Strong AI is also referred to as “artificial general intelligence” or as the ability to perform “general intelligent action.” Strong AI is associated with traits such as consciousness, sentience, sapience and self-awareness observed in living beings.

Some references emphasize a distinction between strong AI and “applied AI”: the use of software to study or accomplish specific problem solving or reasoning tasks. Weak AI, in contrast to strong AI, does not attempt to simulate the full range of human cognitive abilities.


Why does this matter? DeepMind’s three top talents will join ranks with other brilliant AI inventors at Google, including Singularity pioneer Ray Kurzweil, who joined in 2012 as Engineering Director. Google has a secret lab on campus, Google X, that deals with moonshot ideas and has already given us the self-driving Google car. Google’s Andy Rubin has carte blanche to create a robotics revolution. Google bought Boston Dynamics, a firm that builds robots and that used to be primarily funded by grants from the US military (via DARPA). Regina Dugan, DARPA’s former Director, also joined in 2012. Google also bought Meka, a firm that builds cute humanoid robots; since the acquisition, their website just says: “We have been acquired by Google and are busy building the robot revolution.” These are a few items on a much longer list.

Are all of these people working together to build a giant Skynet-like organization? Probably not. Are these completely unrelated acquisitions from Google executives seeking to invest in tomorrow’s exponential businesses? Even if Google has a track record of having people work in silos, it’s tough to assume there won’t be synergies across these domains.

Google’s intent with these acquisitions rather seems to mimic DARPA’s core purpose: “To work in vigorous pursuit of [one] mission: making the pivotal early technology investments that ultimately create and prevent decisive surprise.” (April 2013 DARPA letter from the Office of the Director). Except that with Google, it is not about preventing surprises “for U.S. National Security” but for Google’s business. That makes things quite different.

It doesn’t have to go wrong, but the move raises legitimate ethical and legal concerns. This new ecosystem that Google is building, bringing together the best minds in robotics and AI and providing them enough budgetary leeway to make all fantasies come true, more or less behind closed and opaque doors, deserves an open debate.

The most interesting comment on Google’s acquisition of DeepMind is that DeepMind has reportedly asked for an Ethics Board to be set up within Google in order to evaluate how Google could/should work on AI. In 2011, DeepMind’s Shane Legg was already judging as “too low” the “current level of awareness of possible risks from AI”. He warned: “it could well be a double edged sword: by the time the mainstream research community starts to worry about this issue, we might be risking some kind of arms race if large companies and/or governments start to secretly panic. That would likely be bad.”

It is good news that Google is taking steps to set up an Ethics Board to think about these questions, but society should also take the hint. Google’s behavior, its leaders’ declarations, and its recent acquisitions tell us: it is time for society, too, to ramp up its ethical and legal thinking on these questions.


* Jan. 29th update – More in that direction: The Verge reported that as Google sells Motorola to Lenovo, its “Advanced Technology and Projects” division of about 100 people, led by Dugan, will remain at Google and join the Android teams.

Photo credit – http://www.flickr.com/photos/fallentomat…



  1. lucychili

    January 31, 2014 @ 11:56 pm


    Legg sees this happening by 2013? Is the date correct?

  2. Arthur Klein

    February 26, 2014 @ 6:50 am


    IMHO one of the most fascinating aspects of this evolution of AI & our journey to singularity is how society will adjust economically to valuing an individual’s worth to the community when it has to be based on something other than a measure of value to a traditional workforce, since most jobs could be left to artificial intelligence and automation.

    I would imagine the only survival for humans will be to value our existence and benefit to each other differently than we have since the Magna Carta and “the Rights of Man” doctrine. Big decisions are ahead and if they are based on greed and old paradigms we humans are probably doomed to extinction very soon unless we can re-balance our priorities as a society to take care of each other, our environment and ourselves by loving the uniqueness of what we together have created since 4 ever…

    Om Mani Padme Hum,

    Love and pranams,

  3. Arod

    March 3, 2014 @ 8:52 pm


    Wow. This is perhaps one of the more interesting interpretations I’ve read so far on the matter. Overall, thought I’d say that this is one of the more interesting (and underappreciated) blogs out there, in my opinion. Keep it up. I’d be happy to share my thoughts on the matter (am a law student myself) if you’re interested (drop me an email).

    Anyway, way to go!
