Mindscape 94 | Stuart Russell on Making Artificial Intelligence Compatible with Humans

Blog post with audio player, show notes, and transcript: https://www.preposterousuniverse.com/podcast/2020/04/27/94-stuart-russell-on-making-artificial-intelligence-compatible-with-humans/

Mindscape Podcast playlist: https://www.youtube.com/playlist?list=PLrxfgDEc2NxY_fRExpDXr87tzRbPCaA5x

Patreon: https://www.patreon.com/seanmcarroll

#podcast #ideas #science #philosophy #culture #ai #artificialintelligence

Artificial intelligence has made great strides of late, in areas as diverse as playing Go and recognizing pictures of dogs. We still seem to be a ways away from AI that is intelligent in the human sense, but it might not be too long before we have to start thinking seriously about the “motivations” and “purposes” of artificial agents. Stuart Russell is a longtime expert in AI, and he takes extremely seriously the worry that these motivations and purposes may be dramatically at odds with our own. In his book Human Compatible, Russell suggests that the secret is to give up on building our own goals into computers, and instead to program them to figure out our goals by actually observing how humans behave.

Stuart Russell received his Ph.D. in computer science from Stanford University. He is currently a Professor of Computer Science and the Smith-Zadeh Professor in Engineering at the University of California, Berkeley, as well as an Honorary Fellow of Wadham College, Oxford. He is a co-founder of the Center for Human-Compatible Artificial Intelligence at UC Berkeley. He is the author of several books, including (with Peter Norvig) the classic text Artificial Intelligence: A Modern Approach. Among his numerous awards are the IJCAI Computers and Thought Award, the Blaise Pascal Chair in Paris, and the World Technology Award. His new book is Human Compatible: Artificial Intelligence and the Problem of Control.

Comments

MyOther Soul says:

The big problem with creating "human"-level intelligent A.I. is operationalization. Sure, a computer program can be given a set of procedures that will minimize or maximize some particular measured value, and we can call that a "goal". But that operationalization of "goal" is different from what we mean when we speak of humans having goals.

When we say a human has a goal it implies some desire, motivation or emotional content. When people talk about AI they often conflate some technically operationalized term like "goal" with the everyday meaning of the word. What we should worry about is whether AI can help humans to achieve human goals and what goals humans will use AI to achieve.
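The operationalized sense of "goal" the comment describes can be made concrete: a measured value plus a procedure that reduces it, with no desire or motivation anywhere (a toy illustration, not any particular AI system):

```python
# Toy illustration: a "goal" operationalized as minimizing a measured value.
# The program has no desire or emotion; it just makes a number smaller.

def loss(x):
    # Measured value: squared distance from 5.
    return (x - 5.0) ** 2

def pursue_goal(x, steps=100, lr=0.1):
    # Gradient descent on the loss; the gradient of (x-5)^2 is 2*(x-5).
    for _ in range(steps):
        x -= lr * 2 * (x - 5.0)
    return x

result = pursue_goal(0.0)
print(round(result, 3))  # converges toward 5.0
```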

Daniel Williams says:

Given that AI will evolve in ways that none of us can accurately predict, and given that its evolution will (at least initially) be driven by the perceived interests of humans (or, on the darker side, some subset of humans), we had better hope that those perceived interests come from a good place. We need to fix ourselves before that happens. We need to go vegan so that the next-generation intelligence has good ethics “baked in”, in such a way that it will not want to use and abuse sentient life the way we humans currently do. If this is not “baked in”, what will prevent a future descendant AI from simply disregarding our individual best interests, just as we currently do with non-human animals? If our current actions were used as the example, we would not fare well.

H.İ. Iskender says:

👏👏👏👏👏👍👌

Godfrey of Bouillon says:

It's so weird: this hypothetical AI is superintelligent when it comes to pursuing its goals, but at the same time it's less intelligent than a child, who still probably wouldn't come up with the solution of killing all of humanity to solve the climate change problem. I guess its intelligence is ever-changing, depending on what this horror-movie plot requires at a particular time.

Paul Rite says:

Machine Qualia?

War Peace says:

Stuart has many interesting things to say, but he seems to underestimate the threat that a conscious artificial general intelligence would pose. I added the word "conscious" even though he did not, because I do not know how it could be intelligent without consciousness. The idea that we (humans) would not understand such an entity and its actions does not seem to bother him at all, even if he did consider it.
I find it very amusing when a person agrees with themselves (1:10:52). To me it undermines their point.
Thank you for your content, Sean.

Thomas Soliton says:

Dr. Carroll, great quantum-physics presentations. As I understand it, there are no particles, only ripples in fields. So, like ripples in a pond, you may not be able to specify the position of a "disturbance" in a field (e.g. the "position" of a "particle") exactly. However, particles like electrons are quantized, unlike ripples in a pond, so it seems they must be in a state of resonance with forces from other fields. A particle can then be in different states depending on the forces it is subjected to, i.e. different states of resonance. Can this explain how a photon's or electron's quantum state can "collapse" from a wave to a particle when it is measured, i.e. interacts with something that changes its resonance state? It seems that considering particles in terms of resonance states, i.e. spatio-temporal patterns of energy, avoids some of the confusion arising from thinking in terms of space and time. And what the heck is gravity?

SunRoad says:

AI is a by-product of fossil fuels.

No energy system, including technology, AI, IoT, quantum computers, nuclear power, etc., can produce a sum of useful energy in excess of the total energy put into constructing it.

Since Sadi Carnot, physics has unconsciously hated the Arrow of Energy: the fact that energy, like time, flows from past to future.

Physics now awaits its 'Planet of the Humans' moment: in that documentary, released a few days ago, Michael Moore announced that renewable energy is over, having at last been found to be a by-product of fossil fuels!
Twentieth-century physics is now challenged like never before.

Kyle Pooley says:

12:21 lol

Dan Kortebein says:

Who controls the AI?
That is my biggest issue with AI. We know who is going to be in control of it: the billionaire class, and that idea should frighten anyone.

Petra Kann says:

I wonder if military drones have off-switches attached to the back?

Shy Tamir says:

I get the sense that what we're "afraid" of here isn't AI with no morality, but AI with better morality than ours, which we don't agree with.

eteppo says:

One possible concrete form for that future superintelligence is a bot on a blockchain with standard internet-user actuators. Ethereum and similar programs are designed to be basically unstoppably, provably autonomous.

Shy Tamir says:

I don't understand the distinction between machines taking in a lot of data and implementing heuristics to learn how to improve, then implementing the resultant model and updating it based on new results, and human reasoning. Aren't we performing the exact same processes? And why is it knowledge when we store it biologically and not knowledge when we store it digitally?

Shy Tamir says:

Around 24:00 – Does AlphaGo "look 50 moves into the future"? I don't think it does. I think it takes some parameters, like the present position of the stones and maybe some heuristics about past moves that led to that position, and decides which move to make next. It's not looking ahead at all, as far as I know.
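For reference, the published AlphaGo system does perform explicit lookahead, via Monte Carlo tree search guided by neural networks. Here is a toy sketch of what search-based lookahead means (plain exhaustive game-tree search on the game of Nim, not AlphaGo's actual method):

```python
# Toy lookahead: exhaustive game-tree search on Nim.
# Each player removes 1 or 2 stones; taking the last stone wins.
# "Looking ahead" means recursively evaluating future positions.

def best_move(stones):
    # Returns (move, wins): wins is True if the mover can force a win.
    for move in (1, 2):
        if move > stones:
            continue
        if move == stones:
            return move, True          # taking the last stone wins outright
        _, opponent_wins = best_move(stones - move)
        if not opponent_wins:
            return move, True          # leave the opponent a losing position
    return 1, False                    # every line loses; play anything

move, wins = best_move(4)
print(move, wins)  # 1 True: take one stone, leaving the losing position 3
```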

Shy Tamir says:

Around 9:50 – Feeding a ready-made algorithm to a computer in code form is not AI in any way. AI is defined as computers learning, not being told, how to achieve a goal. Just because the shortest-path algorithm was unknown until some point in time doesn't make writing a program that executes it AI, even if it requires a lot of computational power. Unless you simply fed data to the computer and it came up with the algorithm on its own using learning algorithms, that's simply NOT AI.
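For context, the hand-coded shortest-path algorithm the comment contrasts with learning is classically Dijkstra's; a minimal version follows (the example graph and weights are invented for illustration):

```python
import heapq

def dijkstra(graph, start):
    # graph: {node: [(neighbor, weight), ...]}
    # Returns the shortest distance from start to every reachable node.
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for neighbor, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 1, 'c': 3}
```

Every step here is explicitly specified by the programmer, which is exactly the comment's point: nothing is learned from data.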

Adam Mangler says:

Well, "artificial intelligence" is a really BAD expression! What we are really talking about is the ability of humans to create the mechanisms, the databases, and the clever algorithms that give machines the means to explore new ways of using our intelligence to solve real-world problems.

Raymond Luxury-Yacht says:

I'm certainly not complaining about your talks lasting an hour, Sean. I'm grateful that you're sharing your knowledge.

DarlEng says:

The way neoclassicists see utility is so absurd every time I hear it. Russell is great; he just needs to drink less of the neoclassical Kool-Aid.

Michael N/A says:

So why not tell the computer: assist biological organisms with the catalysis of information, and propagate life as indefinitely as possible with the intent of assisting in the catalysis of information. Teach the computer social wave functions (plottable actions and their effects on the environment, converted into a 3D wave function) and let it learn like human beings do.

Dustin King says:

Robert Miles (youtube channel of the same name, also seen on Computerphile) would be a good person to talk to about AI safety/alignment.

Soul DFS says:

I am in Thailand right now, so I know why people despise durians. The people who despise durians have not tasted them; they have only smelled them. The smell is horrendous! However, the taste is extraordinary, and once a person tries it they will usually love it. Perhaps people can relate this to teaching AI: basically, AI should test both ends of a spectrum before even considering a possible conclusion.

bruce fischetti says:

Behind, in front of, or plotting the curve as we go? 🐒🤗🙏

Aaron B says:

With everyone in masks all over here in San Fran, it looks like the streets are full of gangsters. 🙂 There are some very imaginative designs on many of them. Again, though, thank you very much for these podcasts.
