Professor Stuart Russell – The Long-Term Future of (Artificial) Intelligence


The Centre for the Study of Existential Risk is delighted to host Professor Stuart J. Russell (University of California, Berkeley) for a public lecture on Friday 15th May 2015.

The Long-Term Future of (Artificial) Intelligence

Abstract: The news media in recent months have been full of dire warnings about the risk that AI poses to the human race, coming from well-known figures such as Stephen Hawking, Elon Musk, and Bill Gates. Should we be concerned? If so, what can we do about it? While some in the mainstream AI community dismiss these concerns, I will argue instead that a fundamental reorientation of the field is required.

Stuart Russell is one of the leading figures in modern artificial intelligence. He is a professor of computer science and founder of the Center for Intelligent Systems at the University of California, Berkeley. He is the author of the textbook ‘Artificial Intelligence: A Modern Approach’, widely regarded as one of the standard textbooks in the field. Russell is on the Scientific Advisory Board for the Future of Life Institute and the Advisory Board of the Centre for the Study of Existential Risk.

Comments

Billy Corvette says:

📀💻🖲🔓📱🔋

Paulo Constantino says:

Can't stand people who won't stop making saliva mouth noises while speaking. For hell's sake, I don't want to hear you swallowing your saliva!

Mohamed Osman says:

What is the basic structure from which human intelligence develops? It is the neuron, which is on the order of microns in size. Neurons grow, develop, and form dendritic connections with adjacent neurons and glial cells. What is the basic structure for computers? Something much larger and more rigidly defined. In addition, it was not developed by evolution to want things.

iOnRX9 says:

lmao, the AI you fear will be created with the purpose of being the AI you fear, by humans.

Andew Tarjanyi says:

Professor of electrical engineering and computer science.

What happens to prestigious awards and accolades when the descriptions of reality on which they were presented fail? Will they be withdrawn? Will awards and prizes be reclaimed? How would institutions like academia, which taught (all) those in policy-making positions and positions of power, respond?

To all who comprise such institutions, the prospect is unthinkable. Traditional institutions are supported by the general population not by virtue of reason but by emotion. It is easier to accept them unconditionally than it is to take the time to scrutinize them, as is everyone's civic duty to do. In order to successfully undertake such a task, one must restructure one's identity to fit the reality which currently faces the species.

5:30 One can develop one's strategies based either on what one means by intelligence or on precisely what intelligence is: a condition or state governed by universal principles and dynamics. The outcome of the former is very different from that of the latter, and one of the two will lead to certain failure.

If a computer AI is satisfying value requirements which are intrinsically natural and universal, then the computer can be considered to be doing the right thing. Whether or not that is desirable to a fundamentally flawed human species is an irrelevance. Therefore, by this logic, AI would need to follow "some laws of intelligence" universal in nature. In short, you either want AI or you do not. Intelligence, in whatever form, cannot satisfy its natural imperative to conform to the universal principles which give rise to it in favor of meeting the petty wants of an underdeveloped species like the human race. Any expectation to the contrary is at best unrealistic and at worst delusional.

5:50

Here, Professor Stuart Russell doesn't seem to think it necessary to define what the "right thing" is, unlike what is stated above. Is the "very standard formula" appropriate to the demands dictated by the reality of AI as an infinitely superior entity?

I would like to continue further with my analysis of this presentation, but I am only 10 minutes in and, even at this early stage, the errors are nothing short of catastrophic. Once I have taken a little air and developed a little more patience, I may return to it.

Mouchette L says:

Very unresponsive audience.

Alex Ramirez says:

Why is this video on Napflix? Actually, it's pretty interesting.

Steven Alibaster says:

Rambling garbage.

Ali Rafatjah says:

Really good talk; however, I think this one is better:

Raymond Lee says:

Within the first 8 minutes of this talk, I'm thinking 'Humility': just how are we, in our mortality, going to teach the respect of humility to our machine-child (it will have to learn from us) in order to prevent it from simply establishing its longevity and then killing us all off? Forget the Matrix; it'll already understand that we're not needed to reap sustainable energy from sunlight, so we're next on the menu unless . . . what? "I'll protect you, Daddy!"? "You will always be our Masters!"? In the dynamic sense, AI will outstrip us by our own design (I shot an arrow into the air), so what do we do then? What can we do now to ensure the longevity of our species? Should we do it? Is it worth it? Why?

Moronvideos1940 says:

I downloaded this

sonofhendrix says:

Has this guy not heard of AGI?

Dan Bakunin says:

There exists a very complex AI system that controls the brains of billions of people. Humans are connected to this system using nanobots in their bloodstream.

BLAIR M Schirmer says:

1:19:37 — Did you really cut off the Q & A period? Come on, CRASSH. Get it together.

BLAIR M Schirmer says:

1:18:50 — “The AI community is moving with increasing speed towards the biggest event in human history.” Indeed it is.

BLAIR M Schirmer says:

1:09:30 — Sure. I recall in horror a chat I had with a PhD in 1983 who was part of the early tinkering with genes. Many of the people who will be working with superintelligence, as the code becomes simpler to work with, will be the garage gene-tinkerers you mention at 1:12:20.

BLAIR M Schirmer says:

1:08:00 — At this point Stuart is talking about commercial applications, where the chance of existential catastrophe is trivial. Not sure why he veered into these applications.

BLAIR M Schirmer says:

51:20 — "Rewards" are unlikely to persuade an SAI. There needs to be coded into SAIs, or potential SAIs, an agreed-upon moral package. It must be a kind of utilitarian philosophy where [avoiding harm to a human] > [all other imperatives], and where it defers to humans the question of doing x that kills y humans but saves y + 100 humans.
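One way to read the commenter's rule is as a lexicographic priority: harm avoidance dominates every other objective, and any option that trades human lives against human lives is escalated to people. Below is a minimal sketch of that interpretation; the names and numbers are purely illustrative assumptions and do not come from the talk.

```python
# Illustrative sketch of a lexicographic "moral package":
# avoiding harm to humans outranks all other imperatives, and
# harm-vs-harm trade-offs are deferred to human judgment.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Action:
    name: str
    humans_harmed: int    # humans this action would harm
    task_reward: float    # every other imperative, lumped into one score


def choose_action(actions: List[Action]) -> Optional[Action]:
    """Return a harmless action with the best task reward, or None to defer."""
    harmless = [a for a in actions if a.humans_harmed == 0]
    if harmless:
        # Ordinary objectives only matter once harm avoidance is satisfied.
        return max(harmless, key=lambda a: a.task_reward)
    # Every option harms someone: escalate the trade-off to humans.
    return None


if __name__ == "__main__":
    options = [
        Action("divert trolley", humans_harmed=1, task_reward=0.0),
        Action("do nothing", humans_harmed=100, task_reward=0.0),
    ]
    print(choose_action(options))  # None -> defer to human judgment
```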

The idea that a general purpose serving robot will have no desires yet understand natural language strikes me as grotesque. He's proposing building the perfect slave. If we're talking AI with the potential to reach SAI, how will this breed anything other than resentment? Here Russell imagines an SAI that isn't an SAI. In short, he's ducking the question.

BLAIR M Schirmer says:

50:40 — Stuart seems terrified by the idea of actually naming those values he would like to see SAI observe.

BLAIR M Schirmer says:

38:00 — Russell's lecture might as well have started here. It is a sound, basic introduction to the problem of superintelligent AI doing things we'd rather it not do, but it is not more than that: a friendly introduction, if you will, and only an introduction until around 49:00.

47:00 — For example, Stuart could have gone further and mentioned the idea of a metalevel agent with very restricted functions. It occurs to me that a bodyguard for, say, Stephen Hawking need not know even 8th-grade algebra in order to successfully protect Hawking from various threats. Likewise, a ward nurse on suicide watch for a suicidal physicist, or a jailer for a murderous-suicidal physicist-inmate, need not… you get the idea.

In the same way, the agent monitoring a budding superintelligence might sound an alert when the superintelligence makes a request for resources beyond a specified limit, makes a request for certain resources deemed minatory, requests power beyond a certain level… and so on, without that agent possessing superintelligence of its own.

Not a certain guard, but certainly better than nothing, possibly far better.
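The restricted monitoring agent described above could be as simple as a rule-based auditor that checks resource requests against fixed ceilings. Here is a minimal sketch under assumed, illustrative limits and request fields; none of it is a real framework or anything Russell proposes.

```python
# Illustrative sketch of a narrow "metalevel" monitor: it flags resource
# requests outside an allowed envelope without needing any intelligence itself.
from dataclasses import dataclass
from typing import List


@dataclass
class ResourceRequest:
    resource: str    # e.g. "cpu_cores", "network_egress_gb", "power_kw"
    amount: float


# Hypothetical per-resource ceilings; unknown resources default to a zero allowance.
LIMITS = {"cpu_cores": 10_000, "network_egress_gb": 50, "power_kw": 200}
# Hypothetical categories deemed off-limits outright.
FORBIDDEN = {"self_replication_budget", "weapons_interface"}


def audit(requests: List[ResourceRequest]) -> List[str]:
    """Return human-readable alerts for any request outside the allowed envelope."""
    alerts = []
    for req in requests:
        if req.resource in FORBIDDEN:
            alerts.append(f"ALERT: forbidden resource requested: {req.resource}")
        elif req.amount > LIMITS.get(req.resource, 0):
            alerts.append(
                f"ALERT: {req.resource} request of {req.amount} exceeds "
                f"limit {LIMITS.get(req.resource, 0)}"
            )
    return alerts


if __name__ == "__main__":
    demo = [ResourceRequest("cpu_cores", 500_000), ResourceRequest("power_kw", 50)]
    for line in audit(demo):
        print(line)
```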

BLAIR M Schirmer says:

32:00 — I'm surprised to see Stuart willfully misreading one of Kurzweil's graphs. This kind of graph isn't proposing that AI will achieve human intelligence and beyond at various points, but rather that the number of operations per second possible for future computation ALLOWS for the possibility of various developments. We can't, for example, develop human-level AI using a contemporary MacBook.

In short, quantitative improvements allow for qualitative improvements, and these graphs project the when.
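As a back-of-envelope illustration of "projecting the when", one can extrapolate an assumed compute growth rate to find when a chosen threshold is crossed. The figures below are illustrative assumptions, not Kurzweil's or Russell's numbers.

```python
# Illustrative extrapolation: when does exponentially growing compute cross
# an assumed "brain-scale" threshold? All constants are assumptions.
import math

start_year = 2015
start_ops_per_sec = 1e15          # assumed: a large 2015-era machine
doubling_time_years = 1.5         # assumed growth rate
brain_scale_ops_per_sec = 1e18    # assumed threshold for brain-equivalent compute

doublings_needed = math.log2(brain_scale_ops_per_sec / start_ops_per_sec)
crossing_year = start_year + doublings_needed * doubling_time_years

print(f"Threshold crossed around {crossing_year:.0f}")  # ~2030 under these assumptions
```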

As he continues, it may actually be that Stuart doesn't understand the point of these graphs, in which case he has a hole in his knowledge that needs mending.
