Why OpenAI is playing with Fire developing Artificial Intelligence, with Olle Häggström

In this episode I look at serious AI risks to humanity and OpenAI's Preparedness Framework with AI expert Olle Häggström, professor of mathematical statistics!

Why is OpenAI not taking AI risks seriously in its Preparedness Framework? What would they need to do to make AI safe, or at least less risky? This and much more!

Join us for an important conversation, and don't miss part 1 below, where we look at AI risks that could pose a threat to humanity in just 7 years, as well as AI risks in general: how they could play out and how they could be prevented!

Don't forget to give a thumbs up, and consider subscribing to help support the Evolution Show!


Johan Landgren, host of the Evolution Show

Part 1 “Artificial Intelligence End to Humanity in 7 years? Prof. Olle Häggström explains AI Risks”

Link to OpenAI's Preparedness Framework:


0:00 – Intro

0:30 – Is open-sourcing AI dangerous?

8:10 – Transparency of AI breakthroughs

12:58 – OpenAI's Preparedness Framework

26:09 – OpenAI building AI to protect us from AI

30:53 – Deleting AIs that are too dangerous

35:27 – Black swans and unknown unknowns