Just published: my article entitled “The Opportunities and Risks of AI” in the WHITE PAPER Information and Communications in Japan by the Ministry of Internal Affairs and Communications

On July 9th, 2019, my article entitled “The Opportunities and Risks of AI” was published in the WHITE PAPER Information and Communications in Japan by the Ministry of Internal Affairs and Communications (in Japanese).

 


Interviewed by Stephen Ibaraki, ACM

 

On June 20th, 2019, I was interviewed by Stephen Ibaraki, a founder of the UN ITU AI for Good Global Summit.

I talked about my recent interview with Prof. Yoshua Bengio of MILA, and about how I think about AGI and the future of AI.

Stephen also asked me about my childhood, about the higher education I received across Japan, the US and the UK, and about how I adapt to cultural differences between Japan and the West.

I also talked about my current cross-cultural and cross-disciplinary projects on the social impact of AI and the potential of AI for Good.  I have been working on several collaborative projects in the US and UK, with institutions such as the CFI at the University of Cambridge, Columbia University and the Harvard Berkman Klein Center for Internet & Society.  Finally, I talked about future work.

In the end, the interview ran to over 40 minutes, but I really enjoyed talking with Stephen and I really appreciate his great support!!

 http://stephenibaraki.com/acm/interviews…

 

 


Interview with Yoshua Bengio, Pioneer of AI

Prof. Yoshua Bengio at the MILA

On June 7th, 2019, at the MILA (Montreal Institute for Learning Algorithms) in Montreal, Canada, I conducted an interview with Professor Yoshua Bengio, one of the pioneers of AI (Artificial Intelligence).  He is widely known as one of the “fathers of AI” for his great contribution to the development of so-called deep learning.  He received the 2018 ACM A.M. Turing Award, together with Geoffrey Hinton and Yann LeCun, for major breakthroughs in AI.

In my interview, I asked him about the possibilities of AGI (Artificial General Intelligence), biased data, people’s concerns about GAFA (Google, Amazon, Facebook, Apple) and China, the opportunities and risks of AI, and the future of AI.  All these questions are based on my previous experiences at the University of Cambridge, as well as at the many international summits and conferences on AI to which I have recently been invited.

Bengio is also noteworthy because he has chosen to remain an academic, staying at the University of Montreal as head of the MILA, while other AI leaders such as Geoffrey Hinton have left academia and now work for Google.  Bengio continues to teach students and to engage with local communities. He believes that the education of future generations and people’s engagement with AI are crucial for creating a better society with AI, because he is aware not only of the opportunities but also of the risks of AI.  Through his startup, Element AI, he is also instrumental in building a bridge between academia and the business world.

This is my interview with Yoshua.

The Road to AGI

Yoshua Bengio           Did you have some questions for me?

Toshie Takahashi        Yes, of course.  Thank you for taking the time.  I’d like to ask you about AGI.

YB             Okay.

TT              I watched some of your videos and I understand you are very positive about AGI.

YB             No.

TT             No? I thought you showed a…

YB             I’m positive that we can build machines as intelligent as humans, but completely general intelligence is a different story. I’m not positive as to how humans might use it because we’re not very wise.

TT             Okay. So can you show a road map of how you could create AGI?

YB             Yes, the one I have chosen to explore.

TT             I spent some time in Cambridge, and some scholars there, for example Professor John Daugman, the head of the Artificial Intelligence Group at the University of Cambridge, said that AGI is an illusion created by science fiction, because we don’t even understand a single neuron, so how could we create AGI?

YB             Yes, I disagree with him. 

TT             Okay, so could you tell me about that?

YB             Sure. Having worked for decades on AI and machine learning I feel strongly that we have made very substantial progress, and in particular we have uncovered some principles, which today allow us to build very powerful systems. I also recognise that there’s a long way towards human level AI, and I don’t know how long it’s going to take. So I didn’t say we’ll find human level AI in five years, or ten years or 50 years. I don’t know how much time it’s going to take, but the human brain is a machine. It’s a very complex one and we don’t fully understand it, but there’s no reason to believe that we won’t be able to figure out those principles.

TT              I see. Mr. Tom Everitt at DeepMind said that AGI could be created in a couple of decades, maybe 20 or 30 years.  Not too far in the future.

YB             How does he know?

TT              I don’t know. I asked him but he didn’t answer it.

YB             Nobody knows.

TT              Nobody knows. Yes, of course. When I met Professor Sheldon Lee Glashow, a Nobel Prize-winning American theoretical physicist, he told us that we won’t have AGI. Or even if we have it, it’d be very far away.

YB             Possibly, so we don’t know. It could be ten years, it could be 100 years.

TT              Oh really?

YB             Yes.

TT              Okay.

YB             It’s impossible to know these things. There’s a beautiful analogy that I heard my friend Yann LeCun mention first. As researchers, our progress is like climbing a mountain, and as we approach the peak of that mountain we realise there are some other mountains behind it.

TT              Yes, exactly.

YB             And we don’t know what other, higher peak is hidden from our view right now.

TT              I see.

YB             So it might be that the obstacles we’re currently working on are going to be the last ones before we reach human-level AI, or maybe there will be ten more big challenges that we don’t even perceive right now. So I don’t think it’s plausible that we could really know when, how many years, how many decades, it will take to reach human-level AI.

TT              I see.  But some people also say that we need a different kind of breakthrough to create AGI, a paradigm shift from our current approach.  Do you think you can see the road ahead if you keep on with deep learning? Is this the right road?

YB             As I said, we have understood some very important principles through our work on deep learning, and I believe those principles are here to stay, but we need additional advances that will be combined with the things we have already figured out. I think deep learning is here to stay, but as it is, it’s obviously not sufficient for, for example, the higher-level cognition that humans are doing. We’ve made a lot of progress on what psychologists call System 1 cognition, which is everything to do with intuitive tasks. Here is an example of what we’ve discovered, in fact one of the central ideas in deep learning: the notion of distributed representation. I’m very, very sure that this notion will stay because it’s so powerful.

TT              Wonderful! I’m happy to hear that.

YB             Yes.


Invitation to the An[O]ther {AI} in Art Summit 2019 @New Museum in NY

From April 24 to 27, 2019, I was invited to the An[O]ther {AI} in Art Summit 2019 @New Museum in NY.

My lightning talk, entitled “People’s Engagement: Key to Understanding AI’s Social Impact”

I enjoyed giving my lightning talk entitled “People’s Engagement: Key to Understanding AI’s Social Impact” to these great participants, all of whom are AI and art professionals.

We discussed AI and the future of art in terms of Aesthetics, Authorship, Audience Engagement, Pedagogies and Funding. I hope the important conversations we had and the conclusions we drew throughout this four-day summit, both formally and informally, will be made public soon, and that we can keep discussing how to create a better AI society together after the summit!

Many thanks to Amir Baradaran, Founder and Lead Organizer of An[O]ther {AI} in Art, and to the Columbia University School of Engineering and Applied Sciences for the kind invitation!

https://www.anotherai.art

Press Release: Another AI in Art Summit

+++
Programming
Opening Kickoff Event: Wednesday, April 24th
6:30pm–9:30pm: The New Museum Theatre (Open to the Public)

Summit Day One: Thursday, April 25th
8:30am–6:30pm: The New Museum (Summit Participants Only)
7:00pm–9:00pm: Evening Partner Activations (Summit Participants + Partners)

Summit Day Two: Friday, April 26th
8:30am–6:30pm: The New Museum (Summit Participants Only)
7:00pm–9:00pm: Evening Partner Activations (Summit Participants + Partners)

Farewell Brunch, Saturday, April 27th (TBC)
11:00am–1:30pm: (Summit Participants + Partners)

with George Zarkadakis

Speakers
Alberto Ibargüen (President and CEO, Knight Foundation)
Anne del Castillo (Commissioner, New York City Mayor’s Office of Media and Entertainment)
Cathy O’Neil (Mathematician and Author)
George Zarkadakis (Author, AI engineer, and entrepreneur)
Victoria Vesna (Director, Art and Science Center, UCLA)
Dr. Zia Khan (Vice President, Innovation, The Rockefeller Foundation)
Kamal Sinclair (Director, Future of Culture Initiative & New Frontier Lab, Sundance Institute)
Karen Wong (Deputy Director, New Museum)
William Uricchio (Principal Investigator, Open Documentary Lab, MIT)
Toshie Takahashi (Leverhulme Centre for the Future of Intelligence, University of Cambridge, Professor, Waseda University, Tokyo)
Eva Kozanecka (Google AI + Art)
Loretta Todd (Metis Cree Canadian film director, producer, activist, storyteller, and writer)
Justin Hendrix (CEO, NYC Media Lab)
Jose Diaz (Chief Curator, Warhol Museum)
Steve Feiner (Director of CGUI Lab, Department of Computer Science, School of Engineering and Applied Sciences, Columbia University)
Newman (Principal, metaLAB, Harvard University)
Stephanie Dinkins (Artist and Professor, Data & Society)
Gideon Mann (Head of Data Science, Bloomberg)
Tomas Garcia (VP, Technology and Digital Media, LACMA)
Caspar Sonnen (New Media Coordinator, IDFA)

Complete participant list available here

+++


Article: “Japan UK Technology and Humanity in Education 2019”, organized by the Embassy of Japan in the UK, in Nikkei Veritas magazine (in Japanese)

This is the article about “Japan UK Technology and Humanity in Education 2019”, organized by the Embassy of Japan in the UK, published in Nikkei Veritas magazine (in Japanese).


Invitation to the “Should Robots Be Our Friends?” conference @Boston University

I enjoyed giving my talk entitled “The Complexity Model of Communication in the AI Age: the Case of Japanese Engagement with Artificial Intelligence and Robots in Everyday Life” at the “Should Robots Be Our Friends?” conference @Boston University.

The Aim of the conference

Artificial intelligence is increasingly prevalent in our work, social, and civic lives. From voice-enabled personal assistants such as Amazon’s Alexa and Apple’s Siri to autonomous vehicles and robotic elder care, AI permeates contemporary life; it is critical that researchers explore what it means to be human in a world of AI. To that end, Boston University presents two international symposia, inviting scholars, policy-makers, and analysts to collaboratively investigate artificial intelligence in relationship to society, specifically exploring issues such as labor, ethics, emotions, and identity. These events are particularly timely as 2019 marks twenty years since the mobile turn, when we began the move away from the telephone toward a culture of perpetual contact via portable electronic devices. Through the April workshops, we aim to explore the future of technology and humanity, with a lens toward our past. Learn more about April 10th: Human Community and Perpetual Contact and April 11th: Should Robots Be Our Friends?

From http://sites.bu.edu/emsconf/

Many thanks to Prof. James Katz for his kind invitation!

Nobel Prize-winning American theoretical physicist Prof. Sheldon Lee Glashow

It was a great honor to have a reception dinner with the Nobel Prize-winning American theoretical physicist, Prof. Sheldon Lee Glashow.


Talked on “AI Narratives and Robotics in Japan” on the Global AI Narratives Panel at BSLS

On April 4th, 2019, I enjoyed giving a talk on “AI Narratives and Robotics in Japan: the Complexity Model of Communication” in the AI Narratives panel at the British Society for Literature and Science conference @ Royal Holloway.

Global AI Narratives Panel: Kanta Dihal, Beth Singler and Toshie Takahashi

Abstract

Arguably, we are seeing the dawn of “the fourth industrial revolution”. With the disruptive potential of new and emerging technologies such as Artificial Intelligence (AI) and robots come both a slew of risks and opportunities, locally and globally.  Technological developments in AI and robots have been discussed within a dichotomy between utopia and dystopia.  European views tend toward dystopia, with fears such as unemployment and an AI divide, while Japanese views tend toward utopia, emphasizing social benefits in a super-aging society.  The Japanese embrace of AI and robots has often been caricatured within the long history of techno-orientalism in Western portrayals of the Japanese. But how are the Japanese different from Westerners, and why?

In my talk, I shall begin by briefly introducing the theoretical framework, “the complexity model of communication” (Takahashi, 2016), which I have developed for a deeper understanding of the social impact of AI and robots by uniting the sciences and the humanities.  Secondly, I will share some observations on AI narratives within a Japanese context.  I will introduce some manga and TV anime featuring AI/robots from the 1950s and 1960s within their historical and social contexts, works which have greatly influenced robotics in today’s Japanese society.

Program: BSLS-Timetable-2019

Beautiful Royal Holloway Campus

 


Podcast recording @CFI : AI narratives in the West and Japan

On April 1st, 2019, I took on the challenge of recording a podcast about AI Narratives in the West and Japan with Beth Singler at the CFI, University of Cambridge.

Why are Japanese AI narratives different from Western ones? The consensus so far seems to point to the influence of Christianity in the West and of techno-animism in Japan.

However, techno-animism has also been criticised, because it is seen as reinforcing stereotypical images of Japan that have fed a long tradition of techno-orientalism in the West.

Therefore, I try to explain Japanese AI narratives from Japanese historical, social, political and economic perspectives.

 


Publication: Academic Article entitled, “Artificial Intelligence/Robots and Social Impacts: Is Human First Innovation Wishful Thinking? ” (in Japanese)

I’m delighted to announce that my article entitled “Artificial Intelligence/Robots and Social Impacts: Is Human First Innovation Wishful Thinking?” has just been published in the Journal of Information Systems Society of Japan, Vol. 14, No. 2 (2019), in Japanese.

Abstract

The aim of this paper is to understand the social impact of Artificial Intelligence (AI) and robots both theoretically and empirically. Firstly, I introduce the theoretical framework I have developed for a deeper understanding of the social impact of AI and robots. Secondly, in order to address the differences between Western and Japanese perceptions of and engagement with AI and robots, I examine AI narratives within Japanese historical and social contexts since the 1920s. This work is done in collaboration with the Global AI Narratives project at the Leverhulme Centre for the Future of Intelligence, University of Cambridge. Thirdly, I investigate Japanese engagement with AI and robots using the results of both qualitative and quantitative research, introducing my two ongoing projects: the “Youth and AI” project and the “Robots Engagement” project. Finally, I give some suggestions, which I call “Human First Innovation”, regarding the future of this field. I hope this study will help to create a better AI society together, beyond techno-orientalism and the dichotomy between the West and the Rest.


“The Future of Artificial Intelligence: Language, Ethics, Technology” @ the University of Cambridge.

On March 25th, 2019, I enjoyed attending a very interesting workshop, “The Future of Artificial Intelligence: Language, Ethics, Technology”, at the University of Cambridge.

The Future of Artificial Intelligence: Language, Ethics, Technology
25 March 2019, 10:00 – 17:00
Room SG1, The Alison Richard Building, 7 West Road, Cambridge, CB3 9DT

This is the inaugural workshop of Giving Voice to Digital Democracies: The Social Impact of Artificially Intelligent Communications Technology, a research project which is part of the Centre for the Humanities and Social Change, Cambridge, and is funded by the Humanities and Social Change International Foundation.

The workshop will bring together experts from politics, industry, and academia to consider the social impact of Artificially Intelligent Communications Technology (AICT). The talks and discussions will focus on different aspects of the complex relationships between language, ethics, and technology. These issues are of particular relevance in an age when we talk to Virtual Personal Assistants such as Siri, Cortana, and Alexa ever more frequently, when the automated detection of offensive language is bringing free speech and censorship into direct conflict, and when there are serious ethical concerns about the social biases present in the training data used to build influential AICT systems.
