Turing Lecture: Provably beneficial AI

Is it reasonable to expect that AI capabilities will eventually exceed those of humans across a range of real-world decision-making scenarios? Should this be a cause for concern, as Elon Musk, Stephen Hawking, and others have suggested? While some in the mainstream AI community dismiss the issue, Professor Russell will argue instead that a fundamental reorientation of the field is required. Instead of building systems that optimise arbitrary objectives, we need to learn how to build systems that will, in fact, be beneficial for us.

In this talk, he will show that it is useful to imbue systems with explicit uncertainty concerning the true objectives of the humans they are designed to help. This uncertainty causes machine and human behaviour to be inextricably (and game-theoretically) linked, while opening up many new avenues for research. The ideas in this talk are described in more detail in his new book, “Human Compatible: AI and the Problem of Control” (Viking/Penguin, 2019).
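One concrete instance of this idea is the "off-switch game" analysed by Russell and colleagues (Hadfield-Menell et al., 2017) and discussed in the book: a robot that is uncertain about the human's utility for its proposed action has a positive incentive to defer to a human who can switch it off. The sketch below is a minimal numerical illustration of that argument, not material from the talk itself; the Gaussian belief and its parameters are assumptions chosen for the example.

```python
import numpy as np

# Toy "off-switch game": the robot is uncertain about the human's utility u
# for its proposed action. A rational human, if consulted, lets the action
# proceed only when u > 0. The Gaussian belief below is an illustrative
# assumption, not anything specified in the talk or the paper.
rng = np.random.default_rng(0)
u = rng.normal(loc=0.5, scale=1.0, size=100_000)  # robot's belief over u

ev_act = u.mean()                     # act unilaterally: E[u]
ev_off = 0.0                          # switch itself off: 0
ev_defer = np.maximum(u, 0.0).mean()  # defer to the human: E[max(u, 0)]

print(f"E[act]   = {ev_act:+.3f}")
print(f"E[off]   = {ev_off:+.3f}")
print(f"E[defer] = {ev_defer:+.3f}")  # never worse than the other two
```

Since E[max(u, 0)] >= max(E[u], 0), deferring is never worse than acting or shutting down, and it is strictly better whenever the robot's belief puts probability on both signs of u. If the robot were certain about u, the incentive to defer would vanish, which is why the explicit uncertainty matters.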

About the speaker:

Stuart Russell received his BA with first-class honours in physics from Oxford University in 1982 and his PhD in computer science from Stanford in 1986. He then joined the faculty of the University of California at Berkeley, where he is Professor (and formerly Chair) of Electrical Engineering and Computer Sciences, holder of the Smith-Zadeh Chair in Engineering, and Director of the Center for Human-Compatible AI. He has served as an Adjunct Professor of Neurological Surgery at UC San Francisco and as Vice-Chair of the World Economic Forum’s Council on AI and Robotics.

His book “Artificial Intelligence: A Modern Approach” (with Peter Norvig) is the standard text in AI; it has been translated into 14 languages and is used in over 1,400 universities in 128 countries.

His current concerns include the threat of autonomous weapons and the long-term future of artificial intelligence and its relation to humanity.

Comments

Jeremy Helm says:

Folder of Time

hellbroth says:

I wonder how we would deal with self-reflection. Marvin Minsky, in his book The Society of Mind, talks a lot about self-reflection in terms of decision making.
So we could give a robot either an objective or a set of preferences, but would the robot be able to ask: if I optimise or satisfy this request, then:
Would it cause an issue?
Would it be better to suggest a different objective or set of preferences?
Would it be able to refuse, or report that it lacks the resources to succeed?
Also, why should it do this, and what good comes of satisfying such a request?

Rolf Nelson says:

Not to be confused with the ACM's A.M. Turing Award Lecture.

Mac MacPherson says:

Thanks Stuart… for a bit of an interesting side hustle, take a quick look at Zeb's little video on the Game of Life, here – https://www.youtube.com/watch?v=CuYR5CXAeGA
