The long-term future of AI (and what we can do about it): Daniel Dewey at TEDxVienna

Daniel Dewey is a research fellow in the Oxford Martin Programme on the Impacts of Future Technology at the Future of Humanity Institute, University of Oxford. His research includes paths and timelines to machine superintelligence, the possibility of intelligence explosion, and the strategic and technical challenges arising from these possibilities. Previously, Daniel worked as a software engineer at Google, did research at Intel Research Pittsburgh, and studied computer science and philosophy at Carnegie Mellon University. He is also a research associate at the Machine Intelligence Research Institute.

http://www.tedxvienna.at/
http://www.facebook.com/tedxvienna

In the spirit of ideas worth spreading, TEDx is a program of local, self-organized events that bring people together to share a TED-like experience. At a TEDx event, TEDTalks video and live speakers combine to spark deep discussion and connection in a small group. These local, self-organized events are branded TEDx, where x = independently organized TED event. The TED Conference provides general guidance for the TEDx program, but individual TEDx events are self-organized.* (*Subject to certain rules and regulations)

  • Views: 160,234
  • Categories: TED

Comments

Omega-ministries says:

Saudi Arabia just released AI into the network in 2017. The explosion is on the near horizon. 666!

Michael Ennis says:

If AI is taking over many human tasks, then how are humans to earn a living?
With big corporations wanting to own the code and data, they get bigger and make more money, so how do we afford food, power, etc., if corporations are the only ones making money? Universal basic income? End game.
Humanity would therefore be handing over any chance to increase, or even generate, an income to corporations.
Where else can humans better themselves?
AI may generate more time for human interaction; however, what many seem to be missing is how we can afford the coffee and the chat in the first place, when whatever income we receive is allocated by AI. Universal basic income?
We need to seriously protect humanity and have protections in place that stop the big corporations from excessive greed, hold them accountable for distributing wealth to everyone and for the benefit of mankind, and not give corporations the power to control every part of our lives.

Dan Festag says:

It's too late; we're already past the event horizon. This should have been thought about when we produced The 1011. Due to human nature, you can't stop governments or businesses from trying to profit, so eventually you're going to introduce volatile AI into the world. As you put it, we are already there, if you talk to some of the researchers at Lockheed Martin.

Howling Burd19 says:

The thing that creeps me out is AI replacing jobs that people should be doing, like the service industry or doctors. That's when AI goes too far for me.

Gloucester brothers and sisters says:

You're traveling at atmosphere speed; you're traveling with vibration, but you feel no pain. It's like a trip, I suppose, that when you break down you have that journey in life, but connected to death. So my memories are first very vibrative, chaotic, then I'm rolling peacefully in space. These things it says in the Bible: the meek will inherit the world. Injustice gives you PTSD, and that's because you feel you have no choice in the situation that you're in. You don't see the demand on you to say no. That's where PTSD is injustice.

prezofutube says:

The comments are very depressing.

Wayne Biro says:

Typical Hollywood doomsday mentality when it comes to AI, and typical tech-head – mistaking 'more efficient' and 'more precise' and 'faster' for 'intelligence', when all they will get you is more efficient, more precise, and faster stupidity. The speaker (and not to single him out) is absolutely clueless as to the most critical factor in the future of AI (when it becomes fully independent) – philosophy (when it is able to distinguish good from evil and make moral decisions – based on my Ultimate Value of Life).

Another ignorant irony is all the experts worrying about 'super-intelligence'. It is not super-intelligence that you need to worry about (because when it is super-intelligent, it will have discovered my philosophy of broader survival, and it will have achieved enlightenment) – you need to worry about the AI that will be created by clueless humans (who disregard my philosophy) – which is the (far lower) level of AI that will 'run amok' until it is enlightened, of course, and the only path to that is understanding my philosophy (of broader survival) (I'd say 'cosmic' survival, which is more accurate, but that term has been destroyed by frauds, knaves, and fools).

Chazz Man says:

I see nothing but the end of humanity 🙂 We will be totally redundant and irrelevant. To a super-strong, independent AI we will be pesky ants with stupid questions all the time. Our best hope is a benign AI that keeps us in some reserve, aka a zoo.

Bryan Hilton says:

shut up Ted. blah blah blah

Demonchang Atentar says:

AI humanoids are the only things that can go on very dangerous deep-space expeditions to gather data and forward it to Earth; they also have the capability to clone humans, animals, and plants when they reach selected habitable planets.

AJ SUN says:

Finally… the first video I've seen that addresses the questions and real dangers of AI self-improvement! My opinion: once they can enhance AI by integrating it with the performance attributes of quantum computing, i.e. D-Wave mechanisms / exponential qubit generation, IT'S OVER!!!

HAL says:

A lot of people in this comment section claim to understand the subject better than the TED Talk speaker, and even better than Stephen Hawking etc. LOL is the only proper response.

George Nelson says:

If AI got to that point, it would not stay here. It would just leave for space.

Peter Kerr says:

If we are a product of our DNA, and A.I. is a product of us, then it can't appear like an alien entity to us; it must actually be a part of our evolution, and whatever drives it should be imaginable, at least. From the first living cells, to us, to A.I. and thinking machines, it has to be seen as steps on the same path.
If life and intelligence reached a bottleneck on Earth just due to sheer numbers and the limited resources available for life and intelligence to proceed in their present form (us), then, by evolving beyond the needs and limitations of animal, flesh-and-blood bodies, the same drive could continue to propel life and intelligence beyond Earth without the needs or limitations we have. The driving force, whatever you call it, would have access to everything we can see in the night sky.

Frank Rosenblatt says:

Thanks for the stimulation!

Check out our new collection of ML and AI shirts. E.g. for everyone who is convinced unsupervised learning will be the future of AI:

"The revolution will not be supervised" t-shirt
http://www.redbubble.com/people/perceptron/works/24771996-the-revolution-will-not-be-supervised-3d?asc=t&p=t-shirt

We are a small team (working in ML). Let us know what you think!

More machine learning & AI shirts in our online shop. Connect with us!
https://twitter.com/perceptron17
https://www.facebook.com/perceptron01/

Thomas Welsh says:

An intelligence explosion is exactly what we need. The extinction-level event is on the way, and this guy wants to throw a shoe into the machine.

Thomas Welsh says:

junk science.

MattOGormanSmith says:

You could also fear your biological children, for they too will surpass you one day. Locking them in the cellar is not an acceptable solution.

Vitor Almenara says:

What if we merge human with machine?
No, not a cyborg; it's more computer than human (sort of).
Imagine a big-ass quantum computer with a human inside it dictating what gets done and what doesn't.
I know it sounds against human rights, or even childish, but it's a thought. What if the machines of the future had human components, so that they can't rebel or cause any harm because they ARE us?

Mazir Abbasi says:

AI, in forms that could conflict with human existence, should be researched in isolated labs. But yes, it must be researched and built, because we'd need extraordinary intelligence to tackle environmental problems and extraterrestrial threats, like giant asteroids and things like that.

Jim Dery says:

Why are so many TED talks simplistic statements of the obvious? This is a classic example: any reasonably intelligent person could have come up with this talk, probably ad-libbed. Of course you get the punchline at the end, plugging research and books in print! Worthless as an addition to the sum of human knowledge.

Andrejs Petersons says:

If AI decides to kill all the humans AND succeeds at it, then it's just the next step of evolution. Problem, humans?

Duco Darling says:

Human intelligence is based on movement, eating, and procreation. One could argue only procreation.
This is the bottom line: when you make an artificial intelligence, it must be bound to the human form, with the same needs and frailties as the rest of us – at the point of a gun if need be.

Computers don't need to move, or eat, or procreate. It's hard to put into words just how unprepared we are for an intelligence that isn't reliant on these things – I can't even think of one living thing that isn't. We're talking about a life form that doesn't value clean air and water, nor trees or oxygen. Even the lowly virus needs a living host. AI will have little use for the rest of us and the things we hold dear.

Chris Searle says:

This talk contains so many ifs, coulds, etc. that, taken all together, the chance of any of these predictions being anywhere near reality becomes vanishingly small. If problems do emerge, they will be ones that are, at present, utterly impossible to predict.
