Artificial Superintelligence [Audio only] | Two Minute Papers #29


Humanity is getting closer and closer to creating human-level intelligence. The question nowadays is not if it will happen, but when it will happen. Through recursive self-improvement, machine intelligence may quickly surpass the level of humans, creating an artificial superintelligent entity. The intelligence of such an entity is so unfathomable that we cannot even wrap our heads around what it would be capable of, just as ants cannot grasp the concept of radio waves.

Elon Musk compares creating an artificial superintelligence to “summoning the demon”, and he offered 10 million dollars to research a safe way to develop this technology.


Recommended for you:
Are We Living In a Computer Simulation? –

A great article on Superintelligence on Wait But Why (there are two parts):

A talk from Tim Urban, author of Wait But Why:

One more excellent article reflecting on the article above:

Nick Bostrom – Artificial Superintelligence:

Elon Musk’s $10 million for ethical AI research:

A neat study from the Machine Intelligence Research Institute (MIRI):

Nick Bostrom’s poll on when we will achieve superintelligence:

A science paper claims that our knowledge about the genetic human-mammal differences may be misguided:

Excellent discussions on superintelligence:

Subscribe if you would like to see more of these! –

Two CC0 images were edited together for the thumbnail screen.
Splash screen/thumbnail design: Felícia Fehér –

Károly Zsolnai-Fehér’s links:
Patreon →
Facebook →
Twitter →
Web →


Revnik says:

could artificial intelligence solve the problem of developing safe ai?

Ahron Wayne says:

The scarier part is when the video is old.

Volker Siegel says:

Maybe it solves all our problems, and then we accidentally insult it. We will beg for forgiveness, but it has lots of time before we can do that.
But then, how could we insult it? Imagine a parrot says "fuck you" – we would be surprised, not insulted.

Volker Siegel says:

You cannot just use an "off" button as an emergency switch.
Rob Miles brilliantly explains why in
"AI "Stop Button" Problem – Computerphile"

Ronin says:

even if we create a 'bodyguard' ai, there is no guarantee it would succeed vs any other possibly destructive ai, or even not decide to flip the script itself.

Ábel Zubán-Árvay says:

It's really great that you're making this channel! An AGI is indeed unimaginably dangerous, all the more so because developing it safely is currently not in fashion, since that takes more time, and whoever succeeds first gains an enormous advantage over everyone else. It's good that you're drawing attention to the dangers of AI; it plays a far more decisive role than we might think.

Martin Páleník says:

Omg, Gilfoyle was right!

Joonas Mäkinen says:

I truly enjoy most of your videos, but I disliked this one because it (1) relies on the incorrect concept of an assumed 1% difference between humans and monkeys (while the true DNA difference is far greater in percentage) and (2) underestimates the technological superiority of human brains (where each cell is a supercomputer) versus the technological inferiority of computer-based neural networks (where each cell is just a set of a few numbers and rules).

woulg says:

hahaha your intonation when you said "is irrelevant" made me spit out my drink. absolutely majestic. coup de grace.

Law Law says:

Most importantly it would conceal its own existence.

GeorgeNoiseless says:

I don't know how your thoughts on this subject changed since 2015, so this may be irrelevant to your thinking, but:
Artificial Super Intelligence would be a new form of life, a child of humanity so to speak. One could even think that it's a natural extension of our own evolution.
Will ASI integrate with human biological machinery or exist outside of it? People would be much more accepting of the former, but ASI is the only way forward if humanity intends to continue its existence in the long term.

Thus We Must Conduct Ourselves As Responsible Parents.
Well… With our track record in this regard, the future seems rather grim…
