Steven Pinker and Stuart Russell on the Foundations, Benefits, and Possible Existential Threat of AI


Over the past several centuries, the human condition has been profoundly changed by the agricultural and industrial revolutions. With the creation and continued development of AI, we stand in the midst of an ongoing intelligence revolution that may prove far more transformative than the previous two. How did we get here, and what were the intellectual foundations necessary for the creation of AI? What benefits might we realize from aligned AI systems, and what are the risks and potential pitfalls along the way? In the longer term, will superintelligent AI systems pose an existential risk to humanity? Steven Pinker, best-selling author and Professor of Psychology at Harvard, and Stuart Russell, UC Berkeley Professor of Computer Science, join us on this episode of the AI Alignment Podcast to discuss these questions and more.

Topics discussed in this episode include:

-The historical and intellectual foundations of AI
-How AI systems achieve or do not achieve intelligence in the same way as the human mind
-The rise of AI and what it signifies
-The benefits and risks of AI in both the short and long term
-Whether superintelligent AI will pose an existential risk to humanity

You can find the page for this podcast here: https://futureoflife.org/2020/06/15/steven-pinker-and-stuart-russell-on-the-foundations-benefits-and-possible-existential-risk-of-ai/

You can take a survey about the podcast here: https://www.surveymonkey.com/r/W8YLYD3

You can submit a nominee for the Future of Life Award here: https://futureoflife.org/future-of-life-award-unsung-hero-search/

Timestamps:

0:00 Intro
4:30 The historical and intellectual foundations of AI
11:11 Moving beyond dualism
13:16 Regarding the objectives of an agent as fixed
17:20 The distinction between artificial intelligence and deep learning
22:00 How AI systems achieve or do not achieve intelligence in the same way as the human mind
49:46 What changes to human society does the rise of AI signal?
54:57 What are the benefits and risks of AI?
01:09:38 Do superintelligent AI systems pose an existential threat to humanity?
01:51:30 Where to find and follow Steve and Stuart

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

Comments

James More says:

I think it is fascinating that the issues involved in training the objectives of AI are the same problems I see in getting society to take action to accomplish the greater good. Our individual goals are selfish and antagonistic to achieving happiness, self-worth, and freedom. When we figure out what, say, the three fundamental rules are to keep AI from turning on us, maybe we should apply them to Congress, politics, and all social interactions ;> Even with similar suffering worldwide due to Covid, we can't seem to find common positive ground. They discuss much about how humans learn and generalize. How do I program myself as a member of society to maximize social goodwill, usefulness, and contentment for a population as varied as we are… I'm guessing it should start in the womb. Keeping AI subservient suddenly seems tame by comparison. If intelligence is defined as achieving our objectives… well, since the objectives are clear, we must be dumb or easily distracted.

ХОРОШО says:

I didn't get the explanation of the differences between AlphaGo and human processing. AlphaGo thinking through steps to the end of the game is exactly what people do when they're trying to achieve any goal: they constantly keep it in mind and calculate steps toward achieving it. The only difference with AlphaGo here is that AlphaGo does it much more deeply.

ХОРОШО says:

They are so wrong about deep learning.

ХОРОШО says:

I don't think there are many people who have saved the world recently, so you can give the reward practically at random.

Diego !Djee-ae-gu Caleiro !Kah-lay-ru says:

Pinker tries.

It's good to have someone playing the role he's playing.

Mackenzie Karkheck says:

The founders' list should include Doug Lenat from cyc.com. The out-of-fashion classical AI is mind-bending.

Ben Nguyen says:

Sam Harris suggests you only have to believe that AI will continue to advance in order to buy into the existential AI threat. However, as in (modern) physics, isn't it possible for progress to be asymptotic, where tangible improvements level off?

It seems most researchers actively involved in AI don't share his concern… for example, I would love to hear Dr. Ken Ford and Stuart have a discussion!

Humble House says:

Fascinating discussion. Thank you.

Leighton Dawson says:

I think focusing on the existential-risk/WMD aspect of autonomous weapons sidestepped another important part of that discussion: the fact that we're significantly more trigger-happy from behind a screen. Loved the episode, though! I like some of Pinker's thinking, but I feel he doesn't take the risks associated with superintelligence seriously.

RazorbackPT says:

Pinker's arguments regarding AI safety are incredibly poor. It was really satisfying to get to listen to Stuart Russell inform him of that to his face. Many thanks for bringing these two together!

