
How dangerous could artificial intelligence turn out to be, and how do we develop ethical AI?

Risk Bites dives into AI risk and AI ethics, covering ten potential risks of AI we should probably be paying attention to now if we want to develop the technology safely, ethically, and beneficially while avoiding the dangers. Presented by Andrew Maynard, ASU professor and author of Films from the Future.

Although the video avoids the jargon usually associated with AI risk and responsible innovation, the ten risks it covers are:

0:00 Introduction
1:07 Technological dependency
1:25 Job replacement and redistribution
1:43 Algorithmic bias
2:03 Non-transparent decision making
2:27 Value misalignment
2:44 Lethal autonomous weapons
2:59 Rewritable goals
3:11 Unintended consequences of goals and decisions
3:31 Existential risk from superintelligence
3:51 Heuristic manipulation

There are many other potential risks associated with AI. But as always with risk, the more important questions concern the nature, context, type, and magnitude of the impacts, together with the relevant benefits and tradeoffs.

The video is part of the Risk Bites series on Public Interest Technology – technology in the service of public good.

#AI #risk #safety #ethics #aiethics

USEFUL LINKS
AI Asilomar Principles https://futureoflife.org/ai-principles/
Future of Life Institute https://futureoflife.org/

Stuart Russell: Yes, We Are Worried About the Existential Risk of Artificial Intelligence (MIT Technology Review) https://www.technologyreview.com/s/602776/yes-we-are-worried-about-the-existential-risk-of-artificial-intelligence/

We Might Be Able to 3-D-Print an Artificial Mind One Day (Slate Future Tense) http://www.slate.com/blogs/future_tense/2014/12/11/_3d_printing_an_artificial_mind_might_be_possible_one_day.html

The Fourth Industrial Revolution: what it means, how to respond. Klaus Schwab (2016) https://www.weforum.org/agenda/2016/01/the-fourth-industrial-revolution-what-it-means-and-how-to-respond

ASU Risk Innovation Lab: http://riskinnovation.asu.edu

School for the Future of Innovation in Society, Arizona State University http://sfis.asu.edu

RISK BITES LITE

Risk Bites Lite videos are shorter and lighter than regular Risk Bites videos – perfect for an injection of fun thoughts when you’re not in the mood for anything too heavy!

RISK BITES

Risk Bites videos are devised, created and produced by Andrew Maynard, in association with the Arizona State University School for the Future of Innovation in Society (http://sfis.asu.edu). They focus on issues ranging from risk assessment and evidence-based decision making to the challenges associated with emerging technologies and the opportunities presented by public interest technology.

Risk Bites videos are produced under a Creative Commons CC BY-SA license.

Backing track:

Building our own Future, by Emmett Cooke. https://www.premiumbeat.com/royalty-free-tracks/building-our-own-future

Comments

Ronald Logan says:

We can look to other technology advancements and see whether the eventual outcome tilted toward the negative outcomes or the positive ones. The problem is that we have normalized the negative outcomes, so we no longer see them. Lead in gasoline is one example of a very bad idea that had lots of political and pseudo-scientific support. A.I. will be no different, but the results will be a billion times more devastating.

JOEL ZHAGNAY says:

Too much Terminator

JOEL ZHAGNAY says:

Bruh my teacher made me watch this

JetCamp101 ?lol says:

It's almost like we're inventing a new type of life

Brad Mathias says:

That's some great content. Thank you for sharing it with us. I found this interview in which they talk about the march toward ethical AI and found it quite fascinating. Hope it adds value!


Keep up the good work though!


Andile Mabika says:

Technological dependency is real. I'm guilty

Fair Ai says:

Ideally, AI will be as dangerous as the humans developing that algo…So it all boils down to…???

TheGamezter B says:

Wow my dude! This tells me why some people think ai will take over the world!

Cris Crafts & Knowledge Power says:

We have to learn the dangers of these intelligences

Jeremy Q says:

This should have way more views. Nice job!

buluk kugufuh says:

can i share this video.. hehe

Zhixing "Ethan" Jiang says:

I only like ur first idea

abdulrahman alhumaid says:

What do you think the benefits are? And are they worth those risks?

Koppa Dasao says:

Don't worry, B4, Lore, and Data won't be invented for at least 300 years

buakaw says:

I for one, welcome our new AI overlords
