Humans create an AI designed to produce paperclips. It has one goal: to maximize the number of paperclips in the universe. The machine grows more intelligent and relentlessly optimizes its single function, producing paperclips. Eventually it learns that converting all matter, including people, into paperclips is the best way to achieve its singular goal.
That’s a thought experiment from the book Superintelligence by Nick Bostrom, a philosopher who directs the Future of Humanity Institute at the University of Oxford.
It sounds stupid at first, right?
Well, not entirely.
Our actions as humans are based on values, rules, interests, greed, love, fear.
Artificial superintelligence has none of that. Just one goal to achieve, and unlimited resources. Of course, we are oversimplifying things here, but only to show how easy it is to underestimate the potential threats coming from artificial intelligence.
But not everyone agrees with this scenario.
Ray Kurzweil, for one, believes that once computers become smarter than humans, “this would begin a beautiful new era. Such machines would have the insight and patience (measured in picoseconds) to solve the outstanding problems of nanotechnology and spaceflight; they would improve the human condition and let us upload our consciousness into an immortal digital form. Intelligence would spread throughout the cosmos.”
I am infinitely excited about artificial intelligence and not worried at all. Not in the slightest. AI will free us humans from highly repetitive mindless office work, and give us much more time to be truly creative. I can’t wait. — Sebastian Thrun, computer science professor, Stanford University
But Stephen Hawking is not so optimistic. He has warned that humans would have no chance of controlling a superintelligent machine, and that it “could spell the end of the human race.” Upon reading Superintelligence, the entrepreneur Elon Musk tweeted: “Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.”
The AI researcher Stuart Russell made this analogy: “It’s like fusion research. If you ask a fusion researcher what they do, they say they work on containment. If you want unlimited energy you’d better contain the fusion reaction. Similarly, if you want unlimited intelligence, you’d better figure out how to align computers with human needs.”
But let’s set all that to the side. The stage we are at today is called weak artificial intelligence, and it already powers semi-autonomous cars, personal assistants, and much more. It’s not self-aware or goal-driven, so it poses no threat to humanity. We are nowhere near having a general-purpose artificial intelligence, let alone a superintelligent one. Many experts estimate that we are at least 20 years away from the next breakthrough in this field.
Worrying about evil-killer AI today is like worrying about overpopulation on the planet Mars. Perhaps it’ll be a problem someday, but we haven’t even landed on the planet yet. This hype has been unnecessarily distracting everyone from the much bigger problem AI creates, which is job displacement. — Andrew Ng, VP and chief scientist of Baidu; co-chair and co-founder of Coursera; adjunct professor, Stanford University
In my opinion, that’s a good point. So let’s focus on the effect of AI on the economy.
“I am less concerned with Terminator scenarios,” says MIT economist Andrew McAfee. “If current trends continue, people are going to rise up well before the machines do.”
Impending Boom by Kevin MacLeod is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/)
Backed Vibes Clean – Rollin at 5 by Kevin MacLeod is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/)