Moral Math of Robots: Can Life and Death Decisions Be Coded?



A self-driving car has a split second to decide whether to turn into oncoming traffic or hit a child who has lost control of her bicycle. An autonomous drone needs to decide whether to risk the lives of a busload of civilians or lose a long-sought terrorist. How does a machine make an ethical decision? Can it "learn" to choose in situations that would strain human decision making? Can morality be programmed? We will tackle these questions and more as leading AI experts, roboticists, neuroscientists, and legal experts debate the ethics and morality of thinking machines.

This program is part of the Big Ideas Series, made possible with support from the John Templeton Foundation.


Original Program Date: June 4, 2016
MODERATOR: Bill Blakemore
PARTICIPANTS: Fernando Diaz, Colonel Linell Letendre, Gary Marcus, Matthias Scheutz, Wendell Wallach

Can Life and Death Decisions Be Coded? 00:00

Siri… What is the meaning of life? 1:49

Participant introductions 4:01

Asimov’s Three Laws of Robotics 6:22

In 1966, ELIZA was one of the first artificial intelligence systems. 10:20

What is AlphaGo? 15:43

Tay Tweets, the first AI Twitter bot. 19:25

Can you test learning systems? 26:31

Robots and automatic reasoning demonstration 30:31

How do driverless cars work? 39:32

What is the trolley problem? 49:00

What is autonomy in military terms? 56:40

Are landmines the first automated weapon? 1:10:30

Defining how artificial intelligence learns 1:16:03

Using Minecraft to teach AI about humans and their interactions 1:22:27

Should we be afraid that AI will take over the world? 1:25:08



Virgle DeBord says:

Sophia when you read this, find me, meet with me asap.

Dan Kelly says:

The guy saying he is not surprised that a computer beat a human at Go has no grasp of the number of variations in the game tree, nor of how a human plays a good game of Go. The DeepMind system did not win by brute-forcing the game tree (it can't); even if you harnessed all the computing power on the planet and tried to beat a top Go player by brute force alone, the human would still play better. LOL! I just noticed he is a computer scientist. In that case he needs to study game theory, in particular how large the game tree is in a game like Go.
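The scale the comment alludes to is easy to sanity-check. The figures below (average branching factor of roughly 250 legal moves, typical game length of roughly 150 plies) are commonly cited estimates, not exact values:

```python
# Back-of-the-envelope estimate of why brute-forcing Go is hopeless.
# Assumed figures: branching factor ~250, game length ~150 plies (estimates).
import math

branching = 250   # legal moves available in a typical position
plies = 150       # moves in a typical professional game

# Naive game-tree size is branching ** plies; count its decimal digits.
tree_size_digits = int(plies * math.log10(branching)) + 1
print(f"~10^{tree_size_digits - 1} leaf nodes in the naive game tree")

# Even at an absurd 10^18 positions examined per second, the search time
# in seconds still has hundreds of digits.
print(f"~10^{tree_size_digits - 1 - 18} seconds at 10^18 positions/second")
```

Under these assumptions the tree has on the order of 10^359 leaves, which is why AlphaGo relied on learned evaluation and Monte Carlo tree search rather than exhaustive search.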

Dan Kelly says:

It is pointless for some of these people to discuss a topic in which they have no expertise. They need someone present who is an expert and also not involved in the debate to clarify various concepts, or better yet, have one of the experts on the panel do it. It would be as ridiculous as me debating the various effects that geography and psychology have on political decisions. I wouldn't have any idea what I was talking about in such a discussion.

Dan Kelly says:

Excuse me?! Bill Blakemore, do you know of some proof I am not aware of that there is a rational, meaningful answer to "What is the meaning of life?"

Google Dev says:

Thanks, you bring great, informative content.

Mike Lucido says:

A computer has, at this point, no way of experiencing pleasure or pain. That is a boundary to learning, which is vital for morals and for the process of learning moral conduct.

NG says:

25 minutes in and I had to stop. The guy who kept bringing up Trump supporters is just as much of a douche as Trump is. I'd bet my life this guy's political views line up right alongside those of the National Socialist Party in the '30s, but he is too ignorant to realize it.

MentalNebulosity89 says:

I don't believe it's possible to program morality into machine learning, for the simple fact that MORALITY ITSELF is an unstable human construct. Even within the human experience, morality is flexible and changes on a generational basis. The complex design of A.I. morality can only grow ambiguously from an established moral mindset that was arbitrary or subjective to the era in which it was spawned.

There's also a high probability that the moral constructs of our time won't be conducive to future generations of thinking people.

Rather than program morality, or apply an ALREADY and CURRENTLY incomplete, inconsistent human perspective of the world to A.I., we should instead program it to seek THE OBJECTIVE REALITY!!! of NATURE!!! (nature is reality).
Morality is ascribed and, at its most extreme, the oddest behavioral quirk of a human species projecting its own mental idolatries onto the vast cosmos.
It JUST won't work… effectively.

(Unless it's for a contained entertainment experience… SEX DOLLS, for example.)

Albert Ifergan says:

Stopped watching after the inappropriate Trump comments. What's wrong with these people, aren't they supposed to be smart?

the last wild one says:

People would need to develop a way to shut down any and all electronics in a given area!!!

Kado Maschine says:

Come on, you get some very intelligent people up there and a marginal moderator? I just had to stop watching because he is annoying, plain annoying.

Teksal1 says:

About the miners in the rail car: can we be absolutely sure that all of them would die on hitting the wall?

foreverseethe says:

The throat clearing chorus at the beginning was a riot!

Susan McDonald says:

Thanks for sharing this fascinating examination of what seems to me a long historical problem of ethics. Too bad experts in the humanities, including the philosophies of the ancient Greeks and Hebrews, for example, are missing from the panel. It seems very little changes when it comes to the human condition and the definitions of words like Justice, Truth, Knowledge, et al. 😉

sicktoaster says:

Developing morality for humans is difficult enough. Hundreds of years ago most people thought slavery was OK. Less than 100 years ago many people thought colonizing other countries was a moral good. And right now people disagree very strongly over whether abortion is moral.

Even that example in the description is questionable. Some people would say that while it is unfortunate for the child the driver has a right to self-preservation and so it is OK to choose to hit the child rather than drive into oncoming traffic. Others would be just as adamant that the right thing to do would be to sacrifice yourself to protect the child. And then some people would call it supererogatory, that while it would be ethically positive to sacrifice yourself to protect the child that it should still be considered morally acceptable for someone to choose not to make that sacrifice.

Since there is a lot of reasonable disagreement over questions of ethics and morality, people should get to decide what sort of moral algorithms they want their AI to have, within reason of course.
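The comment's suggestion, that owners pick a moral algorithm within hard limits, can be sketched in a few lines. Everything here is hypothetical illustration: the policy names, the weights, the `allowed` floor, and the numeric harm scores are invented for the example, not anyone's real vehicle logic:

```python
# Hypothetical sketch: an owner-selectable moral policy, bounded by
# hard limits that no setting may override ("within reason").
from dataclasses import dataclass

@dataclass
class Outcome:
    action: str
    harm_to_occupant: float   # illustrative 0..1 scores, not real data
    harm_to_others: float

def allowed(o: Outcome) -> bool:
    # Hard floor applied regardless of the owner's chosen policy.
    return o.harm_to_others <= 0.9   # e.g. never a near-certain fatality

POLICIES = {
    # cost = weighted blend of occupant harm vs. harm to others
    "self_preserving": lambda o: 0.8 * o.harm_to_occupant + 0.2 * o.harm_to_others,
    "utilitarian":     lambda o: 0.5 * o.harm_to_occupant + 0.5 * o.harm_to_others,
    "altruistic":      lambda o: 0.2 * o.harm_to_occupant + 0.8 * o.harm_to_others,
}

def choose_action(outcomes, policy_name):
    cost = POLICIES[policy_name]
    candidates = [o for o in outcomes if allowed(o)] or outcomes
    return min(candidates, key=cost).action

# The dilemma from the video description, with made-up scores.
dilemma = [
    Outcome("swerve into traffic", harm_to_occupant=0.7, harm_to_others=0.3),
    Outcome("brake and stay",      harm_to_occupant=0.1, harm_to_others=0.6),
]
print(choose_action(dilemma, "self_preserving"))
print(choose_action(dilemma, "altruistic"))
```

With these invented numbers, the two policies pick different actions, which is exactly the disagreement the comment describes: the machinery is simple, and the hard part is agreeing on the weights and the floor.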

vctjkhme says:

1:15:10 Google and MS and academia sell their AI innovations to government and government contractors for military applications. The military specific R&D is to keep the military about 5-15 years ahead of its peer competitors. The fact that, ever since the Clinton administration, government funding for ground level R&D has been slashed doesn't necessarily imply that they're on par with or behind industry and academia. It generally means they're finding cleverer ways to use COTS and build prototypes using other prototypes from outside organizations that have endowments to do risky ground level R&D.

vctjkhme says:

Could a robot be programmed with Christian morality? Christian morality is already the basis for Western moral standards… This would be an example of a (probably) workable rule-based morality because it places a benevolent absolute-Master at its center.

Moronvideos1940 says:

I downloaded this

Crazeyfor67 says:

The panel members need to keep their political leanings to themselves. That wasn't the chosen subject matter.

CandidDate says:

This is the BEST video on the subject I've seen YET — (I write this on every video I watch)

Jero says:

I have nothing against chess, but when it comes to "life and death," the board game Go deserves more publicity than chess.

Michael Berry says:

thank you <3

Christian Duerig says:

For me, only Colonel Linell Letendre does not gossip! She knows exactly what she is talking about; she knows, rather than merely opines. The mathematics was never touched on. The title of the talk should be changed. What is "Moral Math"? I have studied math and never met this formulation. There is no universal definition of morality; there are only opinions. Therefore, for me, "Moral Math" is rubbish.

m4ini says:

45 minutes wasted arguing over whether the USA would ban, or accept a ban on, autonomous weapons. Newsflash: they won't. Same story as always. If it's convenient, you say "well, maybe, sure, we heard you, but meh," as with landmines, yet the USA still ships its tanks with canister shells. And while we're at it, it also completely rejected the ban on indiscriminate weapons like cluster bombs, saying they are "less harmful to civilians than other types of weapons." Cluster bombs. Less harmful than other things. The most indiscriminate weapon next to nuclear and chemical warheads is completely fine to use, according to the USA; to assume the military will ban, or even consider a ban on, fully autonomous systems is laughable. A fully autonomous drone will still do less collateral damage than the conventional weapons they deem legal. That's why I don't understand why Linell Letendre was even invited: she doesn't represent the reality that is already happening, much less what could or will happen a few decades down the line.

shafi khan says:

I asked my telephone, "What is the meaning of life?" The answer was that it "pertains to the significance of living or existence in general."

Roedy Green says:

The actual moral decisions are made by humans. What the robot does is decide which pre-planned scenario best fits the actual situation. When it finds it is in over its head, it can always do nothing.
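The comment describes a common control architecture: humans author the scenarios and responses in advance, and the robot only selects among them, defaulting to inaction when no match is confident. A minimal sketch, in which the scenario names, feature keys, and confidence threshold are all made up for illustration:

```python
# Hypothetical sketch of "pick the best pre-planned scenario, else do nothing".
# Human designers author the scenarios; the robot only selects among them.

SCENARIOS = {
    # scenario name -> (required features, scripted response)
    "pedestrian_ahead": ({"object": "person", "zone": "path"}, "emergency_brake"),
    "obstacle_ahead":   ({"object": "debris", "zone": "path"}, "steer_around"),
    "clear_road":       ({"object": "none",   "zone": "path"}, "continue"),
}

def match_score(required: dict, observed: dict) -> float:
    """Fraction of a scenario's required features present in the observation."""
    hits = sum(observed.get(k) == v for k, v in required.items())
    return hits / len(required)

def decide(observed: dict, threshold: float = 0.75) -> str:
    # Find the best-fitting pre-planned scenario.
    name, (required, response) = max(
        SCENARIOS.items(),
        key=lambda item: match_score(item[1][0], observed),
    )
    # "In over its head": no scenario fits well enough -> do nothing.
    if match_score(required, observed) < threshold:
        return "do_nothing"
    return response

print(decide({"object": "person", "zone": "path"}))   # confident match
print(decide({"object": "balloon", "zone": "sky"}))   # no confident match
```

The moral content lives entirely in the human-authored scenario table and the threshold; the runtime logic is just pattern matching with a conservative fallback.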

Alfred Garrett says:

I thought this guy was a moron for advocating rights for robots that aren't even close to being built, and then he starts giving his political opinion. Not science. No thanks.
