On 25 March 2019, I enjoyed attending a very interesting workshop, “The Future of Artificial Intelligence: Language, Ethics, Technology”, at the University of Cambridge.
The Future of Artificial Intelligence: Language, Ethics, Technology
25 March 2019, 10:00 – 17:00
Room SG1, The Alison Richard Building, 7 West Road, Cambridge, CB3 9DT
This is the inaugural workshop of Giving Voice to Digital Democracies: The Social Impact of Artificially Intelligent Communications Technology, a research project which is part of the Centre for the Humanities and Social Change, Cambridge and funded by the Humanities and Social Change International Foundation.
The workshop will bring together experts from politics, industry, and academia to consider the social impact of Artificially Intelligent Communications Technology (AICT). The talks and discussions will focus on different aspects of the complex relationships between language, ethics, and technology. These issues are of particular relevance in an age when we talk to Virtual Personal Assistants such as Siri, Cortana, and Alexa ever more frequently, when the automated detection of offensive language is bringing free speech and censorship into direct conflict, and when there are serious ethical concerns about the social biases present in the training data used to build influential AICT systems.
Professor Emily M. Bender, University of Washington
Baroness Grender MBE, House of Lords Select Committee on AI
Dr Margaret Mitchell, Google
Dr Melanie Smallman, UCL, Alan Turing Institute
Dr Marcus Tomalin, University of Cambridge
Dr Adrian Weller, University of Cambridge, Alan Turing Institute, The Centre for Data Ethics and Innovation
Giving Voice to Digital Democracies explores the social impact of Artificially Intelligent Communications Technology – that is, AI systems that use speech recognition, speech synthesis, dialogue modelling, machine translation, natural language processing, and/or smart telecommunications as interfaces. Due to recent advances in machine learning, these technologies are already rapidly transforming our modern digital democracies. While they can certainly have a positive impact on society (e.g. by promoting free speech and political engagement), they also offer opportunities for distortion and deception. Unbalanced data sets can reinforce problematic social biases; automated Twitter bots can drastically increase the spread of misinformation and hate speech online; and the responses of automated Virtual Personal Assistants during conversations about sensitive topics (e.g. suicidal tendencies, religion, sexual identity) can have serious consequences.
Responding to these increasingly urgent concerns, this project brings together experts from linguistics, philosophy, speech technology, computer science, psychology, sociology, and political theory to develop design objectives for the creation of AICT systems that are more ethical, trustworthy, and transparent. Such technologies have the potential to influence more positively the kinds of social change that will shape modern digital democracies in the immediate future.
9.30 – 10.00    Registration
10.00 – 10.30   Marcus Tomalin (University of Cambridge): Welcome and Introduction
10.30 – 11.15   Baroness Grender MBE (House of Lords Select Committee on AI): ‘AI Ready, Willing and Able? What Can the Government Do?’
11.15 – 11.30   Break
11.30 – 12.15   Melanie Smallman (University College London/Alan Turing Institute): ‘Fair, Diverse and Equitable Technologies: The Need for Multiscale-Ethics’
12.15 – 13.00   Adrian Weller (University of Cambridge/Alan Turing Institute/The Centre for Data Ethics and Innovation): ‘Can We Trust AI Systems?’
13.00 – 14.00   Lunch
14.00 – 14.45   Marcus Tomalin (University of Cambridge): ‘The Ethics of Language and Algorithmic Decision-making’
14.45 – 15.30   Emily M. Bender (University of Washington): ‘A Typology of Ethical Risks in Language Technology with an Eye Towards Where Transparent Documentation Can Help’
15.30 – 15.45   Break
15.45 – 16.30   Margaret Mitchell (Google): ‘Bias in the Vision and Language of Artificial Intelligence’
16.30 – 17.00   Round Table Discussion