THE FUTURE IS HERE

The biggest A.I. risks: Superintelligence and the elite silos | Ben Goertzel

New videos DAILY: https://bigth.ink

Join Big Think Edge for exclusive video lessons from top thinkers and doers: https://bigth.ink/Edge

———————————————————————————-

We have no guarantee that a superintelligent A.I. is going to do what we want. Once we create something many times more intelligent than we are, it may be “insane” to think we can control what it does. What’s the best bet to ensure superintelligent A.I. remains compliant with humans and does good works, such as advancing medicine? Raise it in a way that’s imbued with compassion and understanding, says Goertzel. And one way to limit “people doing bad things out of frustration” may be to plug the entire world into the A.I. economy, so that developers from whatever country can monetize their code.

———————————————————————————-

BEN GOERTZEL

Ben Goertzel is CEO and chief scientist at SingularityNET, a project dedicated to creating benevolent decentralized artificial general intelligence. He is also chief scientist of financial prediction firm Aidyia Holdings and robotics firm Hanson Robotics, chairman of AI software company Novamente LLC, and chairman of the Artificial General Intelligence Society and the OpenCog Foundation. His latest book is AGI Revolution: An Inside View of the Rise of Artificial General Intelligence.

———————————————————————————-

TRANSCRIPT:

BEN GOERTZEL: We can have no guarantee that a superintelligent AI is going to do what we want. Once we’re creating something ten, a hundred, a thousand, a million times more intelligent than we are, it would be insane to think that we could rigorously control what it does. It may discover aspects of the universe that we don’t even imagine at this point.

However, my best intuition and educated guess is that, much like raising a human child, if we raise the young AGI in a way that’s imbued with compassion, love and understanding, and if we raise the young AGI to fully understand human values and human culture, then we’re maximizing the odds that, as this AGI gets beyond our rigorous control, at least its own self-modification and evolution will be imbued with human values and culture, and with compassion and connection. So I would rather have an AGI that understood human values and culture become superintelligent than one that doesn’t understand even what we’re about. And I would rather have an AGI that was doing good works, like advancing science and medicine and doing elder care and education, become superintelligent than an AGI that was being, for example, a spy system, a killer drone coordination system or an advertising agency. So even when we don’t have a full guarantee, I think we can do things that commonsensically will bias the odds in a positive way.

Now, in terms of nearer-term risks regarding AI, I think we now have a somewhat unpleasant situation where much of the world’s data, including personal data about all of us and our bodies and our minds and our relationships and our tastes, and much of the world’s AI firepower, are held by a few large corporations, which are acting in close concert with a few large governments. In China the connection between big tech and the government apparatus is very clear, but it exists in the U.S. as well. I mean, there was a big noise about Amazon’s new office; well, 25,000 Amazon employees are going into Crystal City, Virginia, right next door to the Pentagon. There could be a nice big data pipe there if they want. We in the U.S. also have very close connections between big tech and government; anyone can Google “Eric Schmidt versus NSA” as well. So there are a few big companies with close government connections hoarding everyone’s data, developing AI processing power, hiring most of the AI PhDs, and it’s not hard to see that this can bring up some ethical issues in the near term, even before we get to superhuman superintelligences potentially turning the universe into paper clips. And decentralization of AI can serve to counteract these nearer-term risks in a pretty palpable way.

So as a very concrete example, one of our largest AI development offices for SingularityNET, and for Hanson Robotics, the robotics company I’m also involved with, is in Addis Ababa, Ethiopia. We have 25 AI developers and 40 or 50 interns there. I mean, these young Ethiopians aren’t going to get a job at Google, Facebook, Tencent or Baidu, except in the very rare cases when they manage to get a work visa to go to one of those countries somehow. And many of the AI applications of acute interest in those countries, say AI for analyzing agriculture and preventing agricultural disease o…

For the full transcript, check out https://bigthink.com/videos/ai-superintelligence