In this video I will talk about the book Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig. This book is a standard introduction to Artificial Intelligence. AI is all around us and will only become more important in the future, so it’s important to understand it. My playlist about AI: https://www.youtube.com/playlist?list=PL8k7NlvXa9ZmDp_a4XAJVG1jspQkIesgZ Twitter: https://twitter.com/AttilaonthWorld Apple Podcast: https://podcasts.apple.com/podcast/attila-on-the-world/id1543938035 Spotify: https://open.spotify.com/show/1abYxyHGXUEiXNyDcB0FNu Music from Mixkit.co
Stuart Russell is a Professor of Computer Science at the University of California and an author. Programming machines to do what we want them to is a challenge. The consequences of getting this wrong become very grave if that machine is superintelligent, with essentially limitless resources and no regard for humanity’s wellbeing. Stuart literally wrote the textbook on Artificial Intelligence, which is now used in universities across more than a hundred countries, so hopefully he’s got an answer to perhaps the most important question of this century. Expect to learn how artificial intelligence systems have already manipulated your preferences to make you more predictable, why social media companies genuinely don’t know what their own algorithms are doing, why our reliance on machines can be a weakness, Stuart’s better solution for giving machines goals, what the future of artificial intelligence holds and much more… Sponsors: Get 20% discount on the highest quality CBD Products from Pure Sport at https://puresportcbd.com/modernwisdom (use code: MW20) Get perfect teeth 70% cheaper than other invisible aligners from DW Aligners at http://dwaligners.co.uk/modernwisdom Extra Stuff: Buy Human Compatible – https://amzn.to/3jh2lX5 Get my free Reading List of 100 books to read before you die → https://chriswillx.com/books/ To support me on Patreon (thank you): https://www.patreon.com/modernwisdom #artificialintelligence #controlproblem #computerscience – 00:00 Intro 00:33 King Midas & AI 06:07 Super-intelligent AI 11:48 Language Challenges 21:42 How AI Could Go Wrong 46:17 Social Media Algorithms 1:03:14 Becoming Enfeebled by Machines 1:20:44 Maintaining Control of AI Growth 1:42:23 Impacts of Stuart’s Work 1:48:01 Where to Find Stuart – [More]
Stuart Russell is a British computer scientist known for his contributions to artificial intelligence. He warns about the risks involved in the creation of AI systems. Artificial intelligence has become a key behind-the-scenes component of many aspects of our day-to-day lives. The promise of AI has lured many into attempting to harness it for societal benefit, but there are also concerns about its potential misuse. Dr. Stuart Russell is one of AI’s true pioneers and has been at the forefront of the field for decades. He proposes a novel solution which brings us to a better understanding of what it will take to create beneficial machine intelligence. In view of the recent warnings from researchers and entrepreneurs that artificial intelligence or AI may become too smart, major players in the technology field are thinking about different methods for mitigating these concerns and preserving human control. Stuart Russell argues that if AI surpasses humanity in general intelligence and becomes superintelligent, then this new superintelligence could become powerful and difficult to control. Existing weak AI systems can be monitored and easily shut down and modified if they misbehave. However, a misprogrammed superintelligence, which by definition is smarter than humans at solving the practical problems it encounters in the course of pursuing its goals, would realize that allowing itself to be shut down and modified might interfere with its ability to accomplish its current goals. If the superintelligence therefore decides to resist shutdown and modification, it would be smart enough to outwit its human [More]
How can AI improve everyone’s quality of life? Professor Stuart Russell of Berkeley discusses the life-changing benefits of artificial intelligence. In part one of this four-part interview series, James Manyika, co-chair and a director of the McKinsey Global Institute, sat down with Professor Stuart Russell to talk about how artificial intelligence (AI) can make our lives better, such as helping to accelerate scientific breakthroughs dramatically, improving education around the globe, and “putting together a complete picture… beyond the capabilities of the human mind.” Professor Russell is a leading artificial-intelligence researcher at the University of California, Berkeley, and author of the book “Human Compatible” (Penguin Random House, October 2019). In this broad conversation, they explore the immense benefits ahead and what our role will be as AI becomes more pervasive. They also delve into potential challenges we may face with our current approach to AI, and how we can redefine AI to ensure it helps humanity achieve its full potential. Part 1: https://youtu.be/HKkmcMy6_A8 Part 2: https://youtu.be/0TVUV9C12XY Part 3: https://youtu.be/-ZwFsZpepmI Part 4: https://youtu.be/QRjtVRVSfy4 See the full interview on our website: https://www.mckinsey.com/featured-insights/artificial-intelligence/how-to-ensure-artificial-intelligence-benefits-society-a-conversation-with-stuart-russell-and-james-manyika Learn more about McKinsey’s approach to AI: https://www.mckinsey.com/featured-insights/artificial-intelligence Subscribe to McKinsey on YouTube (https://www.youtube.com/c/McKinsey/?sub_confirmation=1) Connect with us at: Website: https://www.mckinsey.com/ LinkedIn: https://www.linkedin.com/company/mckinsey/ Twitter: https://twitter.com/McKinsey Facebook: https://www.facebook.com/McKinsey/ Instagram: https://www.instagram.com/mckinseyco/ #ai #artificialintelligence #computerscience
The slides to this talk can be downloaded here: http://www.pt-ai.org/2013/program Abstract: The notion of bounded optimality has been proposed as a replacement for perfect rationality as a theoretical foundation for AI. I will review the motivation for this concept, including similar ideas from other fields, and describe some research undertaken within this paradigm to address the problems faced by intelligent agents in making complex decisions over long time scales.
Stuart Russell, Professor at UC Berkeley and AI pioneer, explains how it will be up to us to teach robots how to make the right decisions. The Hello Tomorrow Global Summit 2017 www.hello-tomorrow.org Credits to Web Style Productions, BETAVITA and IMMAGINARTI Digital & Video
Professor Stuart Russell, one of the world’s leading scientists in Artificial Intelligence, has come to consider his own discipline an existential threat to humanity. In this video he talks about how we can change course before it’s too late. His new book ‘Human Compatible: AI and the Problem of Control’ is out now: https://www.penguin.co.uk/books/307/307948/human-compatible/9780141987507.html Watch full 41min interview on Ai: ► https://www.patreon.com/posts/45684565 Join the Future of Journalism ► https://www.patreon.com/DoubleDownNews Support DDN ► https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=TLXUE9P9GA9ZC&source=url
——————Support the channel———— Patreon: https://www.patreon.com/thedissenter SubscribeStar: https://www.subscribestar.com/the-dissenter PayPal: paypal.me/thedissenter PayPal Subscription 1 Dollar: https://tinyurl.com/yb3acuuy PayPal Subscription 3 Dollars: https://tinyurl.com/ybn6bg9l PayPal Subscription 5 Dollars: https://tinyurl.com/ycmr9gpz PayPal Subscription 10 Dollars: https://tinyurl.com/y9r3fc9m PayPal Subscription 20 Dollars: https://tinyurl.com/y95uvkao ——————Follow me on——————— Facebook: https://www.facebook.com/thedissenteryt/ Twitter: https://twitter.com/TheDissenterYT Anchor (podcast): https://anchor.fm/thedissenter RECORDED ON NOVEMBER 20th 2020. Dr. Stuart Russell is a Professor of Computer Science at the University of California, Berkeley, where he also holds the Smith-Zadeh Chair in Engineering. He founded and leads the Center for Human-Compatible Artificial Intelligence there. His research covers a wide range of topics in artificial intelligence including machine learning, probabilistic reasoning, knowledge representation, planning, real-time decision making, multitarget tracking, computer vision, computational physiology, global seismic monitoring, and philosophical foundations. His current concerns include the threat of autonomous weapons and the long-term future of artificial intelligence and its relation to humanity. Dr. Russell is co-author of the most popular textbook in the field of artificial intelligence, Artificial Intelligence: A Modern Approach, used in more than 1,400 universities in 128 countries. His most recent book is Human Compatible: Artificial Intelligence and the Problem of Control. In this episode, we talk about artificial intelligence. We start with what intelligence is, and what human-level AI would look like. We get into what is worrisome about advanced AI systems, and the issue of value alignment.
We discuss different instances where AI is already impacting our lives, and where it will progress in the future, including the impact it will have on the job market. We also [More]
General Chair: Fahiem Bacchus, University of Toronto Program Chair: Carles Sierra (卡尔.谢拉), IIIA of the Spanish Research Council Keynote Speaker: Stuart Russell, UC Berkeley Plenary, Melbourne Convention Centre, Tue, Aug 22 2017
In this livestream, I will be joined by UC Berkeley professor Stuart Russell to explore the role of artificial intelligence in our world. With one of the world’s leading thinkers on the topic, we will talk about the latest AI innovations, the dangers that come with AI, as well as what this all means for us humans.
http://www.weforum.org/ “Everything civilization has to offer is the product of our intelligence, so if we can amplify that there is no limit to where the human race can go,” says Stuart Russell from the University of California, Berkeley. But Russell says it’s vital men and machines have a shared set of values.
“Future Cities and A.I. in Partnership with Dubai Municipality” by Prof. Stuart Russell, Professor of Computer Science and Electrical Engineering, UC Berkeley. #WorldGovSummit
Stuart Russell (University of California, Berkeley, USA): I will briefly survey recent and expected developments in AI and their implications. Some are enormously positive, while others, such as the development of autonomous weapons and the replacement of humans in economic roles, may be negative. Beyond these, one must expect that AI capabilities will eventually exceed those of humans across a range of real-world decision-making scenarios. Should this be a cause for concern, as Elon Musk, Stephen Hawking, and others have suggested? And, if so, what can we do about it? While some in the mainstream AI community dismiss the issue, I will argue that the problem is real and that the technical aspects of it are solvable if we replace current definitions of AI with a version based on provable benefit to humans. Moderator: Helga Nowotny (Chair of the ERA Council Forum Austria and Former President of the ERC)
EECS Colloquium Wednesday, October 16, 2019 306 Soda Hall (HP Auditorium) 4-5p Caption available upon request
*Korean captions available AI expert Stuart Russell argues that the field of AI will undergo a profound shift. Machines will need to understand human values, and in the process we will understand them better ourselves.
This conversation is part of the Future of Work Pioneers Podcast. Today we are joined by Professor Stuart Russell. Dr. Russell was trained at Oxford and Stanford. He is currently a member of the faculty at UC Berkeley, where he holds the Smith-Zadeh Chair in Engineering. He is also an Adjunct Professor of Neurological Surgery at UC San Francisco and Vice-Chair of the World Economic Forum’s Council on AI and Robotics. SPONSORED BY: – https://www.experfy.com/ GUEST INFO: – https://people.eecs.berkeley.edu/~russell/ OUTLINE: 0:09 – Introduction 0:45 – UK VS US Education System 5:56 – Human Compatible: AI and the Problem of Control 11:23 – Human-level or Superhuman AI 16:53 – AI bubbles 21:49 – Increased popularity of AI courses 25:35 – Groundbreaking developments in AI 29:20 – Tesla, Waymo, & Google: Self-Driving & Lidar 34:30 – Future of Work: AI impacting the Economy CONNECT: Subscribe to this YouTube channel LinkedIn Host: https://www.linkedin.com/in/hsingh1/
Dr. Stuart Russell is a Professor of Computer Science at the University of California, Berkeley and Adjunct Professor of Neurological Surgery at the University of California, San Francisco. Dr. Russell’s research spans many areas of artificial intelligence, including machine learning, probabilistic reasoning, and philosophical foundations. Recently, his work has focused on ensuring that advanced AI is developed safely.
Over the past several centuries, the human condition has been profoundly changed by the agricultural and industrial revolutions. With the creation and continued development of AI, we stand in the midst of an ongoing intelligence revolution that may prove far more transformative than the previous two. How did we get here, and what were the intellectual foundations necessary for the creation of AI? What benefits might we realize from aligned AI systems, and what are the risks and potential pitfalls along the way? In the longer term, will superintelligent AI systems pose an existential risk to humanity? Steven Pinker, best selling author and Professor of Psychology at Harvard, and Stuart Russell, UC Berkeley Professor of Computer Science, join us on this episode of the AI Alignment Podcast to discuss these questions and more. Topics discussed in this episode include: -The historical and intellectual foundations of AI -How AI systems achieve or do not achieve intelligence in the same way as the human mind -The rise of AI and what it signifies -The benefits and risks of AI in both the short and long term -Whether superintelligent AI will pose an existential risk to humanity You can find the page for this podcast here: https://futureoflife.org/2020/06/15/steven-pinker-and-stuart-russell-on-the-foundations-benefits-and-possible-existential-risk-of-ai/ You can take a survey about the podcast here: https://www.surveymonkey.com/r/W8YLYD3 You can submit a nominee for the Future of Life Award here: https://futureoflife.org/future-of-life-award-unsung-hero-search/ Timestamps: 0:00 Intro 4:30 The historical and intellectual foundations of AI 11:11 Moving beyond dualism 13:16 Regarding the objectives of an agent as [More]
Interview with Stuart Russell, Professor of Computer Science, University of California, Berkeley, at the AI for Good Global Summit 2018, ITU, Geneva, Switzerland.
What could go wrong with artificial intelligence? And how can we make it right? Prof. Stuart Russell and James Manyika discuss the possibilities and solutions. The idea of intelligent machines taking control dates back to the 1870s. But, says Stuart Russell, “We don’t really want intelligent machines… that pursue objectives. What we want are machines that are beneficial to us. This sort of binary relationship.” Watch the full series: https://mck.co/31stnk4
How Not to Destroy the World with AI – Stuart Russell | AAAI 2020 Source: https://vimeo.com/389553895 Stuart Russell (University of California, Berkeley, USA) Subscribe and hit 🔔 to join the notification squad 🙂 This video is reposted for educational purposes. Recent and expected developments in AI and their implications. Some are enormously positive, while others, such as the development of autonomous weapons and the replacement of humans in economic roles, may be negative. Beyond these, one must expect that AI capabilities will eventually exceed those of humans across a range of real-world decision-making scenarios. Should this be a cause for concern, as Elon Musk, Stephen Hawking, and others have suggested? And, if so, what can we do about it? While some in the mainstream AI community dismiss the issue, I will argue that the problem is real and that the technical aspects of it are solvable if we replace current definitions of AI with a version based on provable benefit to humans.
Panel Discussion: https://www.youtube.com/watch?v=LShKHZkc34M Stuart Russell is a computer scientist known for his contributions to artificial intelligence. He is a Professor of Computer Science at the University of California, Berkeley and Adjunct Professor of Neurological Surgery at the University of California, San Francisco.
Listen to the Science Salon Podcast # 118 (audio-only): http://bit.ly/ScienceSalon118 In the popular imagination, superhuman artificial intelligence is an approaching tidal wave that threatens not just jobs and human relationships, but civilization itself. Conflict between humans and machines is seen as inevitable and its outcome all too predictable. In this groundbreaking book, distinguished AI researcher Stuart Russell argues that this scenario can be avoided, but only if we rethink AI from the ground up. Russell begins by exploring the idea of intelligence in humans and in machines. He describes the near-term benefits we can expect, from intelligent personal assistants to vastly accelerated scientific research, and outlines the AI breakthroughs that still have to happen before we reach superhuman AI. He also spells out the ways humans are already finding to misuse AI, from lethal autonomous weapons to viral sabotage. If the predicted breakthroughs occur and superhuman AI emerges, we will have created entities far more powerful than ourselves. How can we ensure they never, ever, have power over us? Russell suggests that we can rebuild AI on a new foundation, according to which machines are designed to be inherently uncertain about the human preferences they are required to satisfy. Such machines would be humble, altruistic, and committed to pursuing our objectives, not theirs. This new foundation would allow us to create machines that are provably deferential and provably beneficial. Shermer and Russell also discuss: • natural intelligence vs. artificial intelligence • “g” in human intelligence vs. G in AGI (Artificial [More]