Prof. Stuart Russell – Building Artificial Intelligence That is Provably Safe & Beneficial



How can we harness the power of superintelligent AI while also preventing the catastrophe of robotic takeover? As we move closer toward creating all-knowing machines, AI pioneer Stuart Russell is working on something a bit different: robots with uncertainty. Hear his vision for human-compatible AI that can solve problems using common sense, altruism and other human values.

Recorded August 2017


letMeSayThatInIrish says:

Inverse reinforcement learning is a brilliant and promising idea. Still, significant problems remain, as illustrated near the end of this talk (1:01:45): what if some people want 'bad' things? To avoid the AI helping us do bad things, someone will need to somehow codify morality.

budes matpicu says:

what an idiotic tautology… superAGI will be beneficial because we POSTULATE that humANIMAL is the benefactor (principal)… and yes, we will "teenagerize" that AGI beast so that it is unsure about anything… and then the AGI-powered Chinese evil comes… and then it comes… HAPPY TO BE SWITCHED OFF… yes, YOU will be happy… chipped and remotely controlled like your autonomous "car" by government totalitarians (well before any AGI)

Richard Fredlund says:

The fundamental potential of AI lies in what it's used for.

BtotheDtotheF says:

Thanks for the upload

KROOL says:

β„Œπ”¬π”΄ π” π”žπ”« 𝔴𝔒 π”₯π”žπ”―π”«π”’π”°π”° 𝔱π”₯𝔒 𝔭𝔬𝔴𝔒𝔯 𝔬𝔣 𝔰𝔲𝔭𝔒𝔯𝔦𝔫𝔱𝔒𝔩𝔩𝔦𝔀𝔒𝔫𝔱 𝔄ℑ 𝔴π”₯𝔦𝔩𝔒 π”žπ”©π”°π”¬ 𝔭𝔯𝔒𝔳𝔒𝔫𝔱𝔦𝔫𝔀 𝔱π”₯𝔒 π” π”žπ”±π”žπ”°π”±π”―π”¬π”­π”₯𝔒 𝔬𝔣 π”―π”¬π”Ÿπ”¬π”±π”¦π”  π”±π”žπ”¨π”’π”¬π”³π”’π”―? 𝔄𝔰 𝔴𝔒 π”ͺ𝔬𝔳𝔒 𝔠𝔩𝔬𝔰𝔒𝔯 π”±π”¬π”΄π”žπ”―π”‘ π” π”―π”’π”žπ”±π”¦π”«π”€ π”žπ”©π”©-𝔨𝔫𝔬𝔴𝔦𝔫𝔀 π”ͺπ”žπ” π”₯𝔦𝔫𝔒𝔰, 𝔄ℑ 𝔭𝔦𝔬𝔫𝔒𝔒𝔯 π”–π”±π”²π”žπ”―π”± β„œπ”²π”°π”°π”’π”©π”© 𝔦𝔰 𝔴𝔬𝔯𝔨𝔦𝔫𝔀 𝔬𝔫 𝔰𝔬π”ͺ𝔒𝔱π”₯𝔦𝔫𝔀 π”ž π”Ÿπ”¦π”± 𝔑𝔦𝔣𝔣𝔒𝔯𝔒𝔫𝔱: π”―π”¬π”Ÿπ”¬π”±π”° 𝔴𝔦𝔱π”₯ π”²π”«π” π”’π”―π”±π”žπ”¦π”«π”±π”Ά. β„Œπ”’π”žπ”― π”₯𝔦𝔰 𝔳𝔦𝔰𝔦𝔬𝔫 𝔣𝔬𝔯 π”₯𝔲π”ͺπ”žπ”«-𝔠𝔬π”ͺπ”­π”žπ”±π”¦π”Ÿπ”©π”’ 𝔄ℑ 𝔱π”₯π”žπ”± π” π”žπ”« 𝔰𝔬𝔩𝔳𝔒 π”­π”―π”¬π”Ÿπ”©π”’π”ͺ𝔰 𝔲𝔰𝔦𝔫𝔀 𝔠𝔬π”ͺπ”ͺ𝔬𝔫 𝔰𝔒𝔫𝔰𝔒, π”žπ”©π”±π”―π”²π”¦π”°π”ͺ π”žπ”«π”‘ 𝔬𝔱π”₯𝔒𝔯 π”₯𝔲π”ͺπ”žπ”« π”³π”žπ”©π”²π”’π”°.

Crouzier Benjamin says:

1:03:57 Joke's on you, I build bridges that fall down.

Dan Kelly says:

Provably safe? LOL. Yeah, just like they will solve the problem of proving whether a program has a bug.

Anders flortjΓ€rn says:

Best talk on AI safety I've heard so far, because it presents some real solutions for how the problem might be solved.

Christopher Macias says:

No one talks about the obvious: the goal of all companies is generating profit – a sophisticated method of sucking $ from the market. With the help of A.I. we all lose. Only 5 people become Gods, by all means.

Sai Sasank Y says:

The Trump joke though XD
