THE FUTURE IS HERE

Nick Bostrom: Superintelligence & the Simulation Hypothesis

Nick Bostrom is a Swedish-born philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, and the reversal test. In 2011, he founded the Oxford Martin Program on the Impacts of Future Technology, and is the founding director of the Future of Humanity Institute at Oxford University. In 2009 and 2015, he was included in Foreign Policy’s Top 100 Global Thinkers list.

Bostrom is the author of over 200 publications, and has written two books and co-edited two others. The two books he has authored are Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002) and Superintelligence: Paths, Dangers, Strategies (2014). Superintelligence was a New York Times bestseller, was recommended by Elon Musk and Bill Gates among others, and helped to popularize the term “superintelligence”.

Bostrom believes that superintelligence, which he defines as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest,” is a potential outcome of advances in artificial intelligence. He views the rise of superintelligence as potentially highly dangerous to humans, but nonetheless rejects the idea that humans are powerless to stop its negative effects.

In his book Superintelligence, Professor Bostrom asks: What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? He lays the foundation for understanding the future of humanity and intelligent life.

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful – possibly beyond our control. As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence.

But we have one advantage: we get to make the first move. Will it be possible to construct a seed Artificial Intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?

00:00:00 Intro
00:01:30 Judging Nick’s book by its cover. Can you find the Easter Egg on the cover?
00:06:38 How could an AI have emotions and be creative?
00:08:11 How could a computing device / AI feel pain?
00:13:28 The Turing Test.
00:15:00 Will the year 2100 be when the Turing Test is really passed by an AI?
00:17:55 Could I create an AI Galileo?
00:20:07 How does Nick describe the simulation hypothesis for which he is famous?
00:22:34 Is there a “Drake Equation” for the simulation hypothesis?
00:26:50 What do you think of the Penrose–Hameroff orchestrated objective reduction (Orch OR) theory of consciousness and Roger’s objection to the simulation hypothesis?
00:34:41 Is our human history typical? How would we know?
00:35:50 SETI and the prospect of extraterrestrial life. Should we be afraid?
00:48:53 Are computers really getting “smarter”?
00:49:48 Is compute power reaching an asymptotic saturation?
00:53:43 Audience questions: global risk, world order, and should we kill the “singleton” if it should arise?

(If you like my collection of lectures about the Singularity, feel free to make a small contribution or donation to support the channel via PayPal: paypal.me/Joachimvdh83)