Nick Bostrom is a Swedish-born philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, and the reversal test. In 2011, he founded the Oxford Martin Programme on the Impacts of Future Technology, and he is the founding director of the Future of Humanity Institute at Oxford University. In 2009 and 2015, he was included in Foreign Policy’s Top 100 Global Thinkers list. Bostrom is the author of over 200 publications and has written two books and co-edited two others. The two books he has authored are Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002) and Superintelligence: Paths, Dangers, Strategies (2014). Superintelligence was a New York Times bestseller, was recommended by Elon Musk and Bill Gates among others, and helped to popularize the term “superintelligence”. Bostrom believes that superintelligence, which he defines as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest,” is a potential outcome of advances in artificial intelligence. He views the rise of superintelligence as potentially highly dangerous to humans, but he rejects the idea that humans are powerless to stop its negative effects. In his book Superintelligence, Bostrom asks: What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? He lays the foundation for understanding the future of humanity and intelligent life. The human brain has some capabilities that the brains of other animals lack; it is to these distinctive capabilities that our species owes its dominant position.
Protocol Labs founder Juan Benet speaks with Nick Bostrom, a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is also the most-cited professional philosopher in the world under the age of 50. Breakthroughs in Computing is a speaker series focused on how technology will shape society in the next 5-25 years. Join us in person https://breakthroughs-in-computing.labweek.io/ or sign up for future livestreams this week: https://www.youtube.com/c/ProtocolLabs
#simulationhypothesis #artificialintelligence #nickbostrom Nick Bostrom is a Swedish-born philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, and the reversal test.
How should we prepare for the time when machines surpass humans in intelligence? Professor Nick Bostrom explores the frontiers of thinking about the human condition and the future of intelligent life.
For highlights from the video, please visit: https://www.youtube.com/watch?v=WWpdZ5-5VZE Monday, September 8, 2014 at Northwestern Hughes Auditorium. Superintelligence asks the questions: What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life. The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful—possibly beyond our control. As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence. But we have one advantage: we get to make the first move. Will it be possible to construct a seed Artificial Intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation? This profoundly ambitious and original book breaks down a vast tract of difficult intellectual terrain. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom’s work nothing less than a reconceptualization of the essential task of our time. Speaker: Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute.
Like this video about the future of AI and subscribe here: https://freeth.ink/youtube-subscribe-futureofai Up next: Can Artificial Intelligence Solve Our Biggest Problems? https://youtu.be/omiibU3oi9I Beyond the brains of even the most intelligent human beings lies artificial superintelligence, which will have the potential to grow ever more intelligent at an unbelievable rate. Most experts agree that we could see the development of superintelligence in our lifetimes, and we’re hopeful for positive outcomes, such as a cure for cancer. On the flip side, though, the future of AI also has the potential to pose serious threats to humanity. Nick Bostrom, a philosopher and expert on AI ethics, is attempting to fathom the unfathomable so the human race can be ready. See the full article on the dangers and future of AI here: https://www.freethink.com/shows/uprising/future-of-ai-superintelligence Check out our other popular videos on robots: -The Search & Rescue Robots That Could Save Your Life: https://youtu.be/qqZJci3C8HQ -Meet Your Future Caretaker Bot: https://youtu.be/KypYVagpYBk -Robots Are Stealing Our Jobs, But It Might Be a Good Thing: https://youtu.be/-jeSitHw-lk Follow Freethink. -Facebook: https://www.facebook.com/freethinkmedia -Twitter: https://twitter.com/freethinkmedia -Instagram: https://www.instagram.com/freethink -Website: http://www.freethink.com Join the Freethink forum: http://www.facebook.com/groups/freethinkforum
Full episode with Nick Bostrom (Mar 2020): https://www.youtube.com/watch?v=rfKiTGj-zeQ Clips channel (Lex Clips): https://www.youtube.com/lexclips Main channel (Lex Fridman): https://www.youtube.com/lexfridman (more links below) Podcast full episodes playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4 Podcast clips playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41 Podcast website: https://lexfridman.com/ai Podcast on Apple Podcasts (iTunes): https://apple.co/2lwqZIr Podcast on Spotify: https://spoti.fi/2nEwCF8 Podcast RSS: https://lexfridman.com/category/ai/feed/ Nick Bostrom is a philosopher at the University of Oxford and the director of the Future of Humanity Institute. He has worked on fascinating and important ideas in existential risk, the simulation hypothesis, human enhancement ethics, and the risks of superintelligent AI systems, including in his book Superintelligence. I can see talking to Nick multiple times on this podcast, many hours each time, but we have to start somewhere. Subscribe to this YouTube channel or connect on: – Twitter: https://twitter.com/lexfridman – LinkedIn: https://www.linkedin.com/in/lexfridman – Facebook: https://www.facebook.com/lexfridman – Instagram: https://www.instagram.com/lexfridman – Medium: https://medium.com/@lexfridman – Support on Patreon: https://www.patreon.com/lexfridman
CeBIT Global Conferences – 17 March 2016: Keynote Dr. Nick Bostrom, Director, Future of Humanity Institute, University of Oxford
Author & Founding Director of Oxford University’s Future of Humanity Institute, Nick Bostrom, discusses the implications and potential effects of artificial intelligence and developments in technology at IP EXPO Europe 2016
Nick Bostrom is a philosopher at the University of Oxford and the director of the Future of Humanity Institute. He has worked on fascinating and important ideas in existential risk, the simulation hypothesis, human enhancement ethics, and the risks of superintelligent AI systems, including in his book Superintelligence. I can see talking to Nick multiple times on this podcast, many hours each time, but we have to start somewhere. Support this podcast by signing up with these sponsors: – ExpressVPN at https://www.expressvpn.com/lexpod – MasterClass: https://masterclass.com/lex – Cash App – use code “LexPodcast” and download: – Cash App (App Store): https://apple.co/2sPrUHe – Cash App (Google Play): https://bit.ly/2MlvP5w EPISODE LINKS: Nick’s website: https://nickbostrom.com/ Future of Humanity Institute: – https://twitter.com/fhioxford – https://www.fhi.ox.ac.uk/ Books: – Superintelligence: https://amzn.to/2JckX83 Wikipedia: – https://en.wikipedia.org/wiki/Simulation_hypothesis – https://en.wikipedia.org/wiki/Principle_of_indifference – https://en.wikipedia.org/wiki/Doomsday_argument – https://en.wikipedia.org/wiki/Global_catastrophic_risk PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ Full episodes playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4 Clips playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41 OUTLINE: 0:00 – Introduction 2:48 – Simulation hypothesis and simulation argument 12:17 – Technologically mature civilizations 15:30 – Case 1: if something kills all possible civilizations 19:08 – Case 2: if we lose interest in creating simulations 22:03 – Consciousness 26:27 – Immersive worlds 28:50 – Experience machine 41:10 – Intelligence and consciousness 48:58 – Weighing probabilities of the simulation argument 1:01:43 – Elaborating on Joe Rogan conversation 1:05:53 – Doomsday argument and anthropic reasoning 1:23:02 – Elon Musk 1:25:26 – What’s outside the simulation? 1:29:52 – Superintelligence 1:47:27 – AGI utopia 1:52:41 – Meaning of life
Artificial Superintelligence, or ASI, sometimes referred to as digital superintelligence, is the advent of a hypothetical agent that possesses intelligence far surpassing that of the smartest and most gifted human minds. AI is a rapidly growing field of technology with the potential to make huge improvements in human wellbeing. However, the development of machines with intelligence vastly superior to humans will pose special, perhaps even unique risks. Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on when or how this will happen. One only needs to accept three basic assumptions to recognize the inevitability of superintelligent AI:
– Intelligence is a product of information processing in physical systems.
– We will continue to improve our intelligent machines.
– We do not stand on the peak of intelligence, or anywhere near it.
Philosopher Nick Bostrom expressed concern about what values a superintelligence should be designed to have. Any type of AI superintelligence could proceed rapidly to its programmed goals, with little or no distribution of power to others. It may not take its designers into account at all. The logic of its goals may not be reconcilable with human ideals. The AI’s power might lie in making humans its servants rather than vice versa. If it were to succeed in this, it would “rule without competition under a dictatorship of one”. Elon Musk has also warned that the global race toward AI could result in a third world war.
Link to TechEmergence article: http://techemergence.com/podcast-62-nick-bostrom/ Episode Summary: In our exclusive interview with Dr. Bostrom (below), we explore the topic of identifying “existential” human risks (those that could wipe out life forever), and how individuals and groups might mitigate these risks on a grand scale to better secure the flourishing of humanity in the coming decades and centuries.
Taken from JRE #1350 w/Nick Bostrom: https://youtu.be/5c4cv7rVlE8
Artificial Intelligence keeps getting smarter. At some point it will catch up with us, and then surpass us. “AI will be humanity’s last invention…” A talk by the philosopher and technologist Nick Bostrom. Dubbed from the video: What happens when our computers get smarter than we are? | Nick Bostrom – YouTube https://www.youtube.com/watch?v=MnT1xgZgkpk
What happens when our computers get smarter than we are? | Nick Bostrom * About the talk: … * About the speaker: … — The “TED TALKS VIETSUB HAY NHẤT” YouTube channel: – A curated selection of the best TED videos, bringing you interesting ideas – Bilingual English–Vietnamese subtitles – New videos published at 7 p.m. every day — Please SUBSCRIBE, LIKE, COMMENT, and SHARE to support the channel and not miss new videos 🙂 — Thank you, TED! Website: www.ted.com Youtube: https://www.youtube.com/user/TEDtalksDirector — THANKS FOR WATCHING 🙂