A high-level talk about AI and what it means for humanity.

~~~

References:
1) Can we build AI without losing control over it? | Sam Harris (https://youtu.be/8nt3edWLgIg)
2) The danger of AI is weirder than you think | Janelle Shane (https://youtu.be/OhCzX0iLnOc)
3) "The Rise of the Drones" | NOVA, season 40, episode 8 (https://youtu.be/0p4BQ1XzwDg)
4) Artificial Intelligence: it will kill us | Jay Tuck | TEDxHamburgSalon (https://youtu.be/BrNs0M77Pd4)
5) The Real Reason to be Afraid of Artificial Intelligence | Peter Haas | TEDxDirigo (https://youtu.be/TRzBk_KuIaM)
6) Introduction to Machine Learning with TensorFlow.js | Asim Hussain
7) Gordon Moore | Wikipedia (https://en.wikipedia.org/wiki/Gordon_Moore)
8) Neural Network In 5 Minutes | What Is A Neural Network? | How Neural Networks Work | Simplilearn (https://youtu.be/bfmFfD2RIcg)
9) AI in China | Siraj Raval (https://youtu.be/4Gk6mxKXKTk)
10) From Artificial Intelligence to Superintelligence: Nick Bostrom on AI & The Future of Humanity | Science Time (https://youtu.be/Kktn6BPg1sI)
Stuart Russell warns about the dangers involved in creating artificial intelligence, particularly artificial general intelligence (AGI). The idea of an artificial intelligence that might one day surpass human intelligence has captivated and terrified us for decades. Many envision what it would be like to build a machine that could think like a human, or even surpass us in cognitive ability. As with many novel technologies, though, there are problems with building an AGI. And what if we succeed? What would happen should our quest to create artificial intelligence bear fruit? How do we retain power over entities that are more intelligent than we are? The answer, of course, is that nobody knows for sure. But there are some logical conclusions we can draw from examining the nature of intelligence and what kinds of entities might be capable of it. Stuart Russell is a Professor of Computer Science at the University of California, Berkeley, holder of the Smith-Zadeh Chair in Engineering, and Director of the Center for Human-Compatible AI. He outlines the definition of AI and the risks and benefits it poses for the future. According to him, AGI is the most important intellectual problem to work on. An AGI could be put to many purposes, good and evil; alongside the huge benefits of creating one, there are also serious downsides.