THE FUTURE IS HERE

Moral Math of Robots: Can Life and Death Decisions Be Coded?

A self-driving car has a split second to decide whether to turn into oncoming traffic or hit a child who has lost control of her bicycle. An autonomous drone needs to decide whether to risk the lives of a busload of civilians or lose a long-sought terrorist. How does a machine make an ethical decision? Can it “learn” to choose in situations that would strain human decision making? Can morality be programmed? We will tackle these questions and more as leading AI experts, roboticists, neuroscientists, and legal experts debate the ethics and morality of thinking machines.

This program is part of the Big Ideas Series, made possible with support from the John Templeton Foundation.

Subscribe to our YouTube Channel for all the latest from WSF.
Visit our Website: http://www.worldsciencefestival.com/
Like us on Facebook: https://www.facebook.com/worldsciencefestival
Follow us on Twitter: https://twitter.com/WorldSciFest

Original Program Date: June 4, 2016
MODERATOR: Bill Blakemore
PARTICIPANTS: Fernando Diaz, Colonel Linell Letendre, Gary Marcus, Matthias Scheutz, Wendell Wallach

Can Life and Death Decisions Be Coded? 00:00

Siri… What is the meaning of life? 1:49

Participant introductions 4:01

Asimov’s Three Laws of Robotics 6:22

ELIZA (1966), one of the first artificial intelligence systems 10:20

What is AlphaGo? 15:43

Tay, Microsoft’s AI Twitter bot 19:25

Can you test learning systems? 26:31

Robots and automated reasoning demonstration 30:31

How do driverless cars work? 39:32

What is the trolley problem? 49:00

What is autonomy in military terms? 56:40

Were landmines the first automated weapons? 1:10:30

Defining how artificial intelligence learns 1:16:03

Using Minecraft to teach AI about humans and their interactions 1:22:27

Should we be afraid that AI will take over the world? 1:25:08