Stuart Russell – AI: The Story So Far – CSRBAI 2016



Presented at the 2016 Colloquium Series on Robust and Beneficial AI (CSRBAI) hosted by the Machine Intelligence Research Institute (MIRI) and Oxford's Future of Humanity Institute (FHI).


judgeomega says:

Human values are injected subliminally into our children through our environment, schools, and mass media. Morality is not some transcendental truth that simply exists for us to learn. It is manufactured by you and me. Whether we know it or not, every action we take and every word we say influences the world.

John Champagne says:

Can we get a ranking from AI entities of the proposals that promise an end to poverty AND a limit to humans' impacts on the environment consistent with the will of the people at large? What are the best proposals for creating a sustainable and just society?

As a point of comparison, there is a call for equal sharing of natural wealth that would involve industries paying money to the people whenever they take resources or emit pollution. They would pay more when random surveys show that most people want more effort put toward reducing impacts of this or that kind.

Equal sharing of natural resources promotes justice and sustainability.

David Gelperin says:

If a major problem for beneficial AI is "value system alignment", there are several questions: (1) Is there a single human "value system" with strong consensus or are there multiple human "value systems"? (2) If multiple systems, will alignment with any one lead to beneficial AI? (3) Can any "value function" reflect any human "value system"? (4) Can we "know" that a value function accurately reflects its associated value system? (5) Are all conditions in a reflecting value function determinable (alive vs. dead)?

Science, Technology & the Future says:

Awesome to have Stuart Russell discussing AI safety, a very important topic. For too long people have been associating the idea of AI safety with the Terminator. Unfortunately, the human condition is such that people often don't give themselves permission to take non-mainstream ideas seriously unless they see a tip of the hat from an authority figure.

Ben Sibree-Paul says:

Thousands of people will get the chance to see this, so thanks for uploading. Great talk, too.

Christopher Macias says:

If we don't find an answer about human "values", then AI will do it for us based on the average values of all 7 billion humans. Would those be your values right now?

Luis Guillermo Restrepo Rivas says:

One of the many errors in the automatic subtitles: the coauthor of "AI: A Modern Approach" is Peter Norvig, not some Pierre Norfolk.

Nobody says:

Nice to see you updating again.
