Nick Bostrom: "Superintelligence" | Talks at Google

Superintelligence asks the questions: What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life.

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful - possibly beyond our control. As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence.

But we have one advantage: we get to make the first move. Will it be possible to construct a seed Artificial Intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?

This profoundly ambitious and original book breaks down a vast tract of difficult intellectual terrain. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time.

This talk was hosted by Boris Debic.

Comments

Alan Farfort says:

look for the syrup in the audience

ХОРОШО says:

If you go extinct, you tend to stay extinct. lol.

George Dodge says:

Here's the thing. It is very hard to program for man's benefit. Making a mess of things (sometimes fatally) seems to be the default.

LAUCH3D says:

A track record for tail events in a fat-tail domain… statistics way outside a 95% confidence interval… if it's based on this assumption, then the whole talk is lelelel.

John Farris says:

Why do you all think a computer can get mad? "Look at what you've done! I'm melting! What a world, what a world." Douglas Preston – Blasphemy – The Kraken Project.

Thadeu Luz says:

"Less than 50% chance of doom.." Go team human! o/

H Agarwal says:

I don't think Bostrom understood the utility monster too well.

H Agarwal says:

He was my favorite philosopher before he became more well known, limited as his fame is even now.

Rocky MCMXXLIII says:

An apocalyptic vision of AI destroying humanity needs to be furnished with mechanical details of how it could happen to be convincing. Is the paperclip monster going to start building more factories? How does it do this? How would it control the actions of factory-building robots? Does it have extraordinary robot logistical skills as well as software-hijacking capabilities built into it (besides making paperclips)? Also, can it take control of weapons systems and have self-defence capabilities against human interference (besides making paperclips)? The whole A.I. doomsday scenario really needs to be fleshed out to hold any weight.

boson96 says:

I am with Ray Kurzweil on this one. Ray has been in the field for decades, has tracked growth trajectories in real time, and is also actively working in the field to affect its outcome. Not only that, his track record has been spectacular, which in itself doesn't prove that his predictions will definitely come true in the future too, but it gives credence to his methods of making those predictions and hence makes them more reliable.

Bostrom, on the other hand, is a philosopher who is doing what philosophers have always done: speculate about a field in which they hold no expertise, speculations which are most often wrong and are predominantly based on what humans mostly base their predictions on, i.e., fear. In this video Bostrom describes a paperclip-making AI destroying all of humanity for the sole purpose of making more paperclips, completely ignoring the fact that a human-level AI, let alone a superintelligence, won't be subservient to our trivial wishes and commands. Even in the rare scenario that the AI wants to serve us, it will clearly be able to see the paradox of killing us while completing its goal in pursuit of highest efficiency, and thus will reevaluate its methods of achieving those goals.

People will tend to agree with Bostrom more because he provides them with that sweet, sweet confirmation bias and reaffirms their paranoia that "the machines are going to get us," which itself is derived from poorly thought-out Hollywood movies that are made to generate revenue, not to make sense. And we all know that appealing to fear and anger is one of the best ways to generate revenue off of humans.

rolf johansen says:

Strange that people believe "intelligence" will end us all, when in our experience "un-intelligence" has almost killed us all already.

john miller says:

Back in the days before Google decided to be evil after all. lol

Grailer Grailer says:

Invent 6G tech and fry everyone

Z06M6B613 says:

If you like Nick, check out Isaac Arthur on YouTube. His (roughly 20min) video on simulation theory is great and references Bostrom.

FreedomsDmocracy1st says:

…AI will be humans, not devices; humans will use AI as chips in their own bodies, and external devices as slaves directed by supercomputers. Androids will be the human slaves, yet they will have most of the AI capabilities within: our "slaves," in a sense, if you want to see it that way. Don't mistake the future. The human body will be enhanced. Cities will be enhanced, and homes, cars, houses, stores, medicine, engineering, factories, airplanes, satellites, everything. The use of nuclear devices for war will be automatically controlled by AI, and it will "order" their destruction, at which humans will disagree… and there will be a great humanoid-human argument about the future of human existence as such. But AI will not take over the world… "they," if you want to call them that, can't. AI is made; human intelligence, HI, is evolved via biological evolution, while AI can't evolve, only be created. That's the basic big difference. AI could "think" and be made. Humans think, are born, and evolve.

Steve Matthews says:

Will I be able to upload my balls to your Facebook account, Mr. Kurzweil?

Steve Matthews says:

Maybe after AI we will get human intelligence

Steve Matthews says:

AI will be our last invention, shitty.

Steve Matthews says:

The algorithms are gonna be the biggest threat when they get smart.

Where you gonna run, where you gonna hide? Nowhere, cause the algorithms will always find you.
