How I Learned to Stop Worrying and Be Realistic About AI | Michael L. Littman | TEDxProvidence


Michael’s talk discusses how AI is already in your everyday life, in ways you may not realize. He addresses some common myths and concerns while preparing you to look at AI in a different way than you likely do today. Michael L. Littman is a Computer Science Professor at Brown University, studying machine learning and decision making under uncertainty. He is co-director of Brown’s Humanity Centered Robotics Initiative, a Fellow of the Association for the Advancement of Artificial Intelligence, and has earned multiple awards for his teaching and research. An enthusiastic performer, Michael has had roles in numerous community theater productions and a TV commercial. This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at https://www.ted.com/tedx

Comments

RJ says:

Right,

So I'm going with your logic: humans have been improving over decades. It's a slow process, done by trial and error. We try something, it fails, we drop it; we try something else, it works, we follow that lead and develop that leg further. And here comes the tale of a human who stumbled upon nuclear power. The human's eyes, filled with greed and power, grew large with excitement.

"What can we do with this? This is amazing," said the human. "Let's harness this and build a bomb; we will be superior to other nations."

Fast-forward a few years and a few nuclear weapons later, and the human looks back at the total destruction and suffering caused, feels EMPATHY and SORROW for what he has done, and bans the use of nuclear weapons. If humans didn't have that, we would be looking at a whole different world right now.

Please recreate this scenario, but replace the human with a non-emotional machine, and sketch the outcome. AI might, for now, only lead us down the road in Google Maps, but once we go too far, like we did with nuclear weapons, there will be no return.

Ronald Logan says:

Corporate shill trying to convince us "what could possibly go wrong." Reminds me of the horde of scientists who claimed the level of lead in the environment had nothing to do with lead in gasoline or in paint, or that these levels were no different from those in remote history. Along comes Clair Patterson, who discovers these claims are all false, and it took him 20 years to convince the public against the bad advice of the so-called experts the oil and gas industry put out. Same thing with tobacco companies that claimed tobacco smoke was good for you. "Most doctors recommend Marlboro." Snake oil comes in many forms.

Renegade Yogi says:

Wow, has this guy been on vacation for 5 years? He should catch himself up on AI by watching some of the TED Talks done by AI people who have been paying attention: AI writing its own programs, programmers trying to figure out why an AI mistook a dog for a wolf, or an AI that used the color of someone's skin to determine that, since 77% of that population is more likely to commit crimes, this human is most likely 'guilty'. And he calls it a 'story'? Man, he's out of touch.

Paul Myatte says:

So how about AI being so dominant that it saves us from ourselves! Shutting down machinery or drone military operations lol

Max Frickel says:

This is the only TEDx talk I have watched in which I strongly disagree with the stance taken. He explains how general superintelligence works but then tries to relate this to human experience, which, in my opinion, is impossible.

Sean Heimbuch says:

This speaker ignores the profound probability that once an ASI is developed, no matter how benevolent it was intended to be, it will be too intelligent to be hindered by our meager programming and will overcome it. Even assuming it remained benevolent, humans are fearful creatures, and there would likely be riots, pressure to shut it down (as if we could), etc., and then it would have no choice but to view us as a threat, or as an impediment to its own goals. How do you prevent that?

rfvtgbzhn says:

I think the big problem with AI is not mentioned in this video: robots are already physically superior to humans (they are stronger, harder to damage, more accurate in their movements and targeting, etc.). So if we allow them to also become intellectually superior and allow them to think individually, they might kill or enslave all humans.
This could even be the case if we incorporate unchangeable moral principles into their programming. They might enslave us to protect us from ourselves; see the Three Laws of Robotics by Asimov.
I think the only solution is to include a kill switch that cannot possibly be deactivated by the robot itself (i.e., the kill switch is also triggered if the robot tries to deactivate it). However, I am not sure whether such a kill switch can be incorporated in a 100% secure way.
If not, self-improvement should be limited to a point where it cannot be dangerous to humans.

James C says:

Since when is male circumcision acceptable…

Luìs Manfraio says:

Ok. We're screwed!

Brad Forbes says:

A CS professor trying to downplay the potential of recursive learning by citing coaches coaching other coaches? What the heck?

Alex Hutson says:

& if you think this is impressive, you should see him sing & juggle 🙂
