Underground Q&A session with Nick Bostrom (http://www.nickbostrom.com) on existential risks and artificial intelligence with the Oxford Transhumanists (recorded 3rd November 2011).
Discuss this talk on the Effective Altruism Forum: https://forum.effectivealtruism.org/posts/6uiXHHJQEtMaYQrti/max-tegmark-effective-altruism-existential-risk-and
Elon Musk considers the advent of digital superintelligence a far more dangerous threat to humanity than nuclear weapons, and he argues that the field of AI research must be subject to government regulation. The dangers of advanced artificial intelligence were popularized in the late 2010s by Stephen Hawking, Bill Gates, and Elon Musk, with Musk probably the most famous public figure to express concern about artificial superintelligence. Existential risk from advanced AI is the hypothesis that substantial progress in artificial general intelligence could someday result in human extinction or some other unrecoverable global catastrophe. One of many concerns regarding AI is that controlling a superintelligent machine, or instilling it with human-compatible values, may prove a much harder problem than previously thought; many researchers believe that a superintelligence would naturally resist attempts to shut it off or change its goals. An existential risk is any risk with the potential to eliminate all of humanity or, at the very least, endanger or even destroy modern civilization. Such risks come in the form of natural disasters, like supervolcanoes or asteroid impacts, but an existential risk can also be self-induced or man-made, like weapons of mass destruction, which many experts consider by far the most dangerous threat to humanity. Elon Musk thinks otherwise: he regards superintelligent AI as a far greater threat to humanity than nukes. Some AI and AGI researchers may be reluctant to discuss risks, worrying that policymakers do not have sophisticated [More]
Over the past several centuries, the human condition has been profoundly changed by the agricultural and industrial revolutions. With the creation and continued development of AI, we stand in the midst of an ongoing intelligence revolution that may prove far more transformative than the previous two. How did we get here, and what were the intellectual foundations necessary for the creation of AI? What benefits might we realize from aligned AI systems, and what are the risks and potential pitfalls along the way? In the longer term, will superintelligent AI systems pose an existential risk to humanity? Steven Pinker, bestselling author and Professor of Psychology at Harvard, and Stuart Russell, UC Berkeley Professor of Computer Science, join us on this episode of the AI Alignment Podcast to discuss these questions and more.

Topics discussed in this episode include:
- The historical and intellectual foundations of AI
- How AI systems achieve or do not achieve intelligence in the same way as the human mind
- The rise of AI and what it signifies
- The benefits and risks of AI in both the short and long term
- Whether superintelligent AI will pose an existential risk to humanity

You can find the page for this podcast here: https://futureoflife.org/2020/06/15/steven-pinker-and-stuart-russell-on-the-foundations-benefits-and-possible-existential-risk-of-ai/
You can take a survey about the podcast here: https://www.surveymonkey.com/r/W8YLYD3
You can submit a nominee for the Future of Life Award here: https://futureoflife.org/future-of-life-award-unsung-hero-search/

Timestamps:
0:00 Intro
4:30 The historical and intellectual foundations of AI
11:11 Moving beyond dualism
13:16 Regarding the objectives of an agent as [More]
Many people view the triumph of the computer Watson over the world’s best Jeopardy players in 2011 as the leading edge of a new age of smart machines. Since then, we have seen self-driving cars, smart search engines, and increasingly able robots moving from the realm of science fiction to deployed technology. As this occurs, we increasingly hear dystopian futurists bemoan a world where drones will replace pilots, computers will replace doctors, and scientists will be put out of work as intelligent computers increasingly replace the knowledge workers in modern society. They worry that, as scientist Stephen Hawking stated, “The development of full artificial intelligence could spell the end of the human race.” In this talk, Dr. Hendler will explore that position and contrast it with another approach — that “social machines,” which bring together humans and increasingly intelligent computers, may not be something to fear, but rather the best hope to solve the complex problems facing our world.
AI expert Joanna Bryson discusses the real existential threat of AI.