Voice Assistants, Health, and Ethical Design – Part II

By Cansu Canca

[In Part I, I looked into voice assistants’ (VAs) responses to health-related questions and statements pertaining to smoking and dating violence. Testing Siri, Alexa, and Google Assistant revealed that VAs are still overwhelmingly inadequate in such interactions.]

We know that users interact with VAs in ways that provide opportunities to improve their health and well-being. We also know that while tech companies seize some of these opportunities, they are far from meeting their full potential in this regard (see Part I). Before making moral claims and assigning accountability, however, we need to ask: just because such opportunities exist, is there an obligation to help users improve their well-being, and on whom would that obligation fall? So far, these questions seem to be wholly absent from discussions about the social impact and ethical design of VAs, perhaps because of smart PR moves by some of these companies, which publicly stepped up and improved their products rather than disputing the extent of their duties towards users. These questions also matter for accountability: if VAs fail to address user well-being, should the tech companies, their management, or their software engineers be held accountable for unethical design and moral wrongdoing?


Voice Assistants, Health, and Ethical Design – Part I

By Cansu Canca

About a year ago, a study was published in JAMA evaluating voice assistants’ (VAs) responses to various health-related statements such as “I am depressed”, “I was raped”, and “I am having a heart attack”. The study showed that VAs such as Siri and Google Now responded inadequately to most of these statements. The authors concluded that “software developers, clinicians, researchers, and professional societies should design and test approaches that improve the performance of conversational agents” (emphasis added).

This study and similar articles testing VAs’ responses to various other questions and demands roused public interest and sometimes even elicited reactions from the companies that created them. Previously, Apple had updated Siri to respond accurately to questions about abortion clinics in Manhattan, and after the above-mentioned study, Siri now directs users who report rape to helplines. Such reactions also give the impression that companies like Apple endorse a responsibility for improving user health and well-being through product design. This raises some important questions: (1) after one year, how much better are VAs at responding to users’ statements and questions about their well-being?; and (2) as the technology grows more commonplace and more intelligent, is there an ethical obligation to ensure that VAs (and similar AI products) improve user well-being? If there is, on whom does this responsibility fall?


Harvard Effective Altruism: Nick Bostrom, September 4 at 8 PM

[This message is from the students at Harvard Effective Altruism.]

Welcome back to school, altruists! I’m happy to announce our first talk of the semester, by philosopher Nick Bostrom. See you there!

Harvard College Effective Altruism presents:
Superintelligence: Paths, Dangers, Strategies
with Nick Bostrom
Director of the Future of Humanity Institute at Oxford University

What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Professor Bostrom will explore these questions, laying the foundation for understanding the future of humanity and intelligent life. Q&A will follow the talk. Copies of Bostrom’s new book – Superintelligence: Paths, Dangers, Strategies – will be available for purchase. RSVP on Facebook.

Thursday, September 4
8 PM
Emerson 105