Episode #26 – “Pause AI Or We All Die” Holly Elmore Interview, For Humanity: An AI Safety Podcast
Please Donate Here To Help Promote This Show
https://www.paypal.com/paypalme/forhumanitypodcast
FULL INTERVIEW STARTS AT (00:09:55)
In episode #26, host John Sherman and Pause AI US Founder Holly Elmore talk about AI risk. They discuss how AI surprised everyone by advancing so fast, what it’s like for employees at OpenAI working on safety, and why it’s so hard for people to imagine what they can’t imagine.
This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable, probable outcome: the end of all life on Earth.
For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.
Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
TIMESTAMPS:
**Progress in Artificial Intelligence (00:00:00)**
Discussion about the rapid progress in AI, its impact on AI safety, and revisiting assumptions.
**Introduction to AI Safety Podcast (00:00:49)**
Introduction to the "For Humanity: An AI Safety Podcast" show, its focus on the threat of human extinction from AI, and revising AI risk percentages.
**Need for Compute Cap Regulations (00:04:16)**
Discussion about the need for laws to cap compute power used by big AI companies, ethical implications, and the appointment of Paul Christiano to a new AI safety governmental agency.
**Personal Journey into AI Risk Awareness (00:15:26)**
Holly Elmore's personal journey into AI risk awareness, understanding AI risk, humility, and the importance of recognizing the potential impact of events no one has yet experienced.
**The Overton Window Shift and Imagination Limitation (00:22:05)**
Discussion on societal reactions to dramatic changes and the challenges of imagining the potential impact of artificial intelligence.
**OpenAI's Approach to AI Safety (00:25:53)**
Discussion on OpenAI's strategy for creating AI, the mindset at OpenAI, and the internal dynamics within the AI safety community.
**The History and Evolution of AI Safety Community (00:41:37)**
Discussion of the origins and evolution of the AI safety community, engaging the public, and ethical considerations in AI safety decision-making.
**Impact of Technology on Social Change (00:51:47)**
Explores differing perspectives on the role of technology in driving social change, perception of technology, and progress.
**Challenges and Opportunities in AI Adoption (01:02:42)**
Explores the possibility of a third way in AI adoption, the effectiveness of protests, and concerns about AI safety.
Resources:
Azeem Azhar + Connor Leahy Podcast
Debating the existential risk of AI, with Connor Leahy
Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord Thursdays at 3pm EST
https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk