Shane Legg, Nate Soares, Richard Mallah, Dario Amodei, Viktoriya Krakovna, Bas Steunebrink, and Stuart Russell explore technical research we can do now to maximize the chances of safe and beneficial AI. The Beneficial AI 2017 Conference: In our sequel to the 2015 Puerto Rico AI conference, we brought together an amazing group of AI researchers from academia and industry, and thought leaders in economics, law, ethics, and philosophy for five days dedicated to beneficial AI. We hosted a two-day workshop for our grant recipients and followed that with a 2.5-day conference, in which people from various AI-related fields hashed out opportunities and challenges related to the future of AI and steps we can take to ensure that the technology is beneficial. For more information on the BAI ‘17 Conference: https://futureoflife.org/ai-principles/ Beneficial AI 2017 A Principled AI Discussion in Asilomar
Joscha Bach, Cognitive Scientist and AI researcher, as well as Anthony Aguirre, UCSC Professor of Physics, join us to explore the world through the lens of computation and the difficulties we face on the way to beneficial futures.

Topics discussed in this episode include:
- Understanding the universe through digital physics
- How human consciousness operates and is structured
- The path to aligned AGI and bottlenecks to beneficial futures
- Incentive structures and collective coordination

Find the page for this podcast here: https://futureoflife.org/2021/03/31/joscha-bach-and-anthony-aguirre-on-digital-physics-and-moving-towards-beneficial-futures/

Apply to be the FLI Podcast Producer here: https://futureoflife.org/job-postings/

Follow the podcast on:
Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP
Apple Podcasts: https://podcasts.apple.com/us/podcast/future-of-life-institute-podcast/id1170991978?mt=2
SoundCloud: https://soundcloud.com/futureoflife

Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT

Timestamps:
0:00 Intro
1:58 What is truth and knowledge?
11:39 What is subjectivity and objectivity?
15:13 What is the universe ultimately?
20:32 Is the universe a cellular automaton? Is the universe ultimately digital or analogue?
25:59 Hilbert’s hotel from the point of view of computation
39:14 Seeing the world as a fractal
43:00 Describing human consciousness
57:46 Meaning, purpose, and harvesting negentropy
1:02:30 The path to aligned AGI
1:05:13 Bottlenecks to beneficial futures and existential security
1:16:01 A future with one, several, or many AGI systems? How do we maintain appropriate incentive structures?
1:30:39 Non-duality and collective coordination
1:34:16 What difficulties are there for an idealist worldview that involves computation?
1:37:19 Which features of mind and consciousness are necessarily coupled and which aren’t?
1:47:47 Joscha’s final thoughts on AGI

This podcast is possible because of the support [More]
Is it reasonable to expect that AI capabilities will eventually exceed those of humans across a range of real-world decision-making scenarios? Should this be a cause for concern, as Elon Musk, Stephen Hawking, and others have suggested? While some in the mainstream AI community dismiss the issue, Professor Russell will argue instead that a fundamental reorientation of the field is required. Instead of building systems that optimise arbitrary objectives, we need to learn how to build systems that will, in fact, be beneficial for us. In this talk, he will show that it is useful to imbue systems with explicit uncertainty concerning the true objectives of the humans they are designed to help. This uncertainty causes machine and human behaviour to be inextricably (and game-theoretically) linked, while opening up many new avenues for research. The ideas in this talk are described in more detail in his new book, “Human Compatible: AI and the Problem of Control” (Viking/Penguin, 2019). About the speaker: Stuart Russell received his BA with first-class honours in physics from Oxford University in 1982 and his PhD in computer science from Stanford in 1986. He then joined the faculty of the University of California at Berkeley, where he is Professor (and formerly Chair) of Electrical Engineering and Computer Sciences, holder of the Smith-Zadeh Chair in Engineering, and Director of the Center for Human-Compatible AI. He has served as an Adjunct Professor of Neurological Surgery at UC San Francisco and as Vice-Chair of the World Economic Forum’s Council on [More]
General Chair: Fahiem Bacchus, University of Toronto Program Chair: Carles Sierra (卡尔.谢拉), IIIA of the Spanish Research Council Keynote Speaker: Stuart Russell, UC Berkeley Plenary, Melbourne Convention Centre, Tue, Aug 22 2017
EECS Colloquium Wednesday, October 16, 2019 306 Soda Hall (HP Auditorium) 4-5p Caption available upon request
Panel Discussion: https://www.youtube.com/watch?v=LShKHZkc34M Stuart Russell is a computer scientist known for his contributions to artificial intelligence. He is a Professor of Computer Science at the University of California, Berkeley and Adjunct Professor of Neurological Surgery at the University of California, San Francisco.
Stuart Russell – Provably Beneficial AI. Presentation given at CogX 2019, on the Alan Turing Research Stage. Stuart Russell, Professor of Computer Science, UC Berkeley.
[Subtitles included] Turn on captions [CC] to enable them 🙂 This video explains AI concepts, the types of AI, how AI works, the benefits and disadvantages of artificial intelligence, what will happen if AI surpasses human intelligence, machine learning, the technological singularity, artificial neural networks, narrow artificial intelligence, weak AI, strong AI, artificial general intelligence, superintelligence, etc. Music credits: Epic Mountain https://soundcloud.com/epicmountain/war-on-drugs Video clips are from the Terminator movies. Time Travel: Explained in a nutshell | Can we time travel? | 5 possible ways including limitations https://www.youtube.com/watch?v=ZJoGoH3B0gs&t=61s All about Quasar: The brightest thing of the universe https://www.youtube.com/watch?v=cR2ni… Black hole, White hole and Wormhole Explained as fast as possible https://www.youtube.com/watch?v=huqwH… Top 6 certain astronomical events in 21st century & another top 5 events in the future beyond that https://www.youtube.com/watch?v=aNFih… 5 Mysterious and Unknown Things of the Universe [Subtitles] https://www.youtube.com/watch?v=k_onv… My email: bandhanislam@yahoo.com My Facebook ID: https://www.facebook.com/bandhan.islam.1 Facebook page: https://www.facebook.com/theodd5sstudio/
Exclusive interview with Stuart Russell. He discusses the importance of achieving friendly AI: strong AI that is provably (probably approximately) beneficial. Points of discussion: A clash of intuitions about the beneficiality of strong artificial intelligence – Alan Turing raised the concern that if we were to build an AI smarter than we are, we might not be happy about the results, while there is a general notion among AI developers that building smarter-than-human AI would be good. – It’s not clear why the objectives of superintelligent AI would be inimical to our values, so we need to solve what some people call the value alignment problem. – We as humans learn values in conjunction with learning about the world. The value alignment problem. Basic AI drives: any objective generates sub-goals. – Designing an AI that does not want to disable its off switch – 2 principles: 1) its only objective is to maximise your reward function (this is not an objective programmed into the machine, but a kind of non-observed latent variable); 2) the machine has to be explicitly uncertain about what that objective is. – If the robot thinks it knows what your objective function is, then it won’t believe that it will make you unhappy and therefore has an incentive to disable the off switch; the robot will only want to be switched off if it thinks it will make you unhappy. – How will the machines [More]
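The two principles above are formalised in the "off-switch game" analysed by Russell's group (Hadfield-Menell et al., 2017). As a minimal illustration only (not the model from the interview, and with hypothetical function and variable names), the following Monte Carlo sketch compares the robot's expected value for acting immediately, switching itself off, and deferring to a human who blocks the action whenever its utility is negative:

```python
import random

def off_switch_values(utility_samples):
    """Estimate the robot's three options in a simplified off-switch game.

    The robot is uncertain about the human's utility U for its proposed
    action; `utility_samples` are draws from the robot's belief over U.
    """
    n = len(utility_samples)
    act = sum(utility_samples) / n                         # act now: E[U]
    off = 0.0                                              # switch itself off
    # Defer to the human, who permits the action only when U >= 0:
    defer = sum(max(u, 0.0) for u in utility_samples) / n  # E[max(U, 0)]
    return act, off, defer

random.seed(0)
belief = [random.gauss(0.5, 1.0) for _ in range(100_000)]
act, off, defer = off_switch_values(belief)
# Deferring weakly dominates both alternatives: E[max(U, 0)] >= max(E[U], 0),
# and strictly so whenever the robot's belief puts probability on both signs
# of U. This is why explicit uncertainty gives the robot an incentive to
# leave the off switch enabled.
assert defer >= max(act, off)
```

The point of the sketch is the inequality in the final assertion: a robot that is certain about the objective sees no gain in deferring, while an uncertain robot values the human's ability to switch it off.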
Stuart Russell gives an introduction to the problem of AI alignment as we walk the path towards provably beneficial AI. The Beneficial AGI 2019 Conference: https://futureoflife.org/beneficial-agi-2019/ After our Puerto Rico AI conference in 2015 and our Asilomar Beneficial AI conference in 2017, we returned to Puerto Rico at the start of 2019 to talk about Beneficial AGI. We couldn’t be more excited to see all of the groups, organizations, conferences and workshops that have cropped up in the last few years to ensure that AI today and in the near future will be safe and beneficial. And so we now wanted to look further ahead to artificial general intelligence (AGI), the classic goal of AI research, which promises tremendous transformation in society. Beyond mitigating risks, we want to explore how we can design AGI to help us create the best future for humanity. We again brought together an amazing group of AI researchers from academia and industry, as well as thought leaders in economics, law, policy, ethics, and philosophy for five days dedicated to beneficial AI. We hosted a two-day technical workshop to look more deeply at how we can create beneficial AGI, and we followed that with a 2.5-day conference, in which people from a broader AI background considered the opportunities and challenges related to the future of AGI and steps we can take today to move toward an even better future.
How can we harness the power of superintelligent AI while also preventing the catastrophe of robotic takeover? As we move closer toward creating all-knowing machines, AI pioneer Stuart Russell is working on something a bit different: robots with uncertainty. Hear his vision for human-compatible AI that can solve problems using common sense, altruism and other human values. Recorded August, 2017
Stuart Russell explores methods by which we might be able to ensure that AI is robust and beneficial. Recorded at the Beneficial AI 2017 Conference (see the conference description above). For more information on the BAI ‘17 Conference: AI Principles | Beneficial AI 2017 | A Principled AI Discussion in Asilomar