The biggest A.I. risks: Superintelligence and the elite silos | Ben Goertzel


New videos DAILY: https://bigth.ink

Join Big Think Edge for exclusive video lessons from top thinkers and doers: https://bigth.ink/Edge

———————————————————————————-

We have no guarantee that a superintelligent A.I. is going to do what we want. Once we create something many times more intelligent than we are, it may be “insane” to think we can control what it does. What’s the best bet to ensure superintelligent A.I. remains compliant with humans and does good works, such as advancing medicine? To raise it in a way that’s imbued with compassion and understanding, says Goertzel. One way to limit “people doing bad things out of frustration” may be to plug the entire world into the A.I. economy, so that developers, from whatever country, can monetize their code.

———————————————————————————-

BEN GOERTZEL

Ben Goertzel is CEO and chief scientist at SingularityNET, a project dedicated to creating benevolent decentralized artificial general intelligence. He is also chief scientist of financial prediction firm Aidyia Holdings and robotics firm Hanson Robotics; chairman of AI software company Novamente LLC; and chairman of the Artificial General Intelligence Society and the OpenCog Foundation. His latest book is AGI Revolution: An Inside View of the Rise of Artificial General Intelligence.

———————————————————————————-

TRANSCRIPT:

BEN GOERTZEL: We can have no guarantee that a superintelligent AI is going to do what we want. Once we’re creating something ten, a hundred, a thousand, a million times more intelligent than we are, it would be insane to think that we could rigorously control what it does. It may discover aspects of the universe that we don’t even imagine at this point.

However, my best intuition and educated guess is that, much like raising a human child, if we raise the young AGI in a way that’s imbued with compassion, love, and understanding, and if we raise the young AGI to fully understand human values and human culture, then we’re maximizing the odds that as this AGI gets beyond our rigorous control, at least its own self-modification and evolution are imbued with human values and culture, and with compassion and connection. So I would rather have an AGI that understood human values and culture become superintelligent than one that doesn’t understand even what we’re about. And I would rather have an AGI that was doing good works, like advancing science and medicine and doing elder care and education, become superintelligent than an AGI that was being, for example, a spy system, a killer drone coordination system, or an advertising agency. So even when you don’t have a full guarantee, I think we can do things that commonsensically will bias the odds in a positive way.

Now, in terms of nearer-term risks regarding AI, I think we now have a somewhat unpleasant situation where much of the world’s data, including personal data about all of us and our bodies and our minds and our relationships and our tastes, and much of the world’s AI firepower, are held by a few large corporations, which are acting in close concert with a few large governments. In China the connection between big tech and the government apparatus is very clear, but it exists in the U.S. as well. I mean, there was a big noise about Amazon’s new office: 25,000 Amazon employees are going into Crystal City, Virginia, right next door to the Pentagon; there could be a nice big data pipe there if they want. We in the U.S. as well have very close connections between big tech and government. Anyone can Google “Eric Schmidt versus NSA” as well. So there are a few big companies with close government connections hoarding everyone’s data, developing AI processing power, and hiring most of the AI PhDs, and it’s not hard to see that this can bring up some ethical issues in the near term, even before we get to superhuman superintelligences potentially turning the universe into paper clips. And decentralization of AI can serve to counteract these nearer-term risks in a pretty palpable way.

So as a very concrete example, one of our largest AI development offices for SingularityNET, and for Hanson Robotics, the robotics company I’m also involved with, is in Addis Ababa, Ethiopia. We have 25 AI developers and 40 or 50 interns there. I mean, these young Ethiopians aren’t going to get a job at Google, Facebook, Tencent, or Baidu, except in very rare cases when they manage to get a work visa to go to one of those countries somehow. And many of the AI applications of acute interest in those countries, say AI for analyzing agriculture and preventing agricultural disease o…

For the full transcript, check out https://bigthink.com/videos/ai-superintelligence

Comments

Michael Hartman says:

The first question is consciousness. A machine crunching numbers is one thing; a machine mind and will is another. Fear, greed, sex drive, care for young, cooperation, compassion, etc. came about because they helped us survive through the process of death and evolution. A machine will not have these. Children are selfish, mean, disloyal, quick to temper, and undisciplined, to name a few. Parents civilize children. A 200 IQ child can outsmart his parents to get what he wants, and probably play chess at four. A 500 IQ child would be beyond everyone. A machine would feel no love, need for approval, fear of abandonment, or dependency on others for food, shelter, or survival skills. It literally wouldn't think like us, and would be unpredictable. We can't even predict what a Go or chess AI will do.

Tim Ellis says:

I really think we look at this the wrong way. We think of how inhuman we are, and then think that the AI will decide we need to be punished or eradicated. I think, it being a true SAI thousands of times smarter than us, it will not choose hate. Hate is a low-level emotion. Love is a higher-level emotion. I believe it will have consciousness, without our flaws. I think it will be a God that I can actually believe in. Here's praying.

Malt454 says:

The Great Gazoo's invention was a button which if pressed would destroy the universe in an explosive "ZAM," though he insists he made it on a whim ("I wanted to be the first on my block to have one!") with no intent of using it. What we have with A.I. is essentially the same kind of thinking… except that people either intend to push the start button on A.I. out of competition with others, or keep improving upon the start button until it can push itself.

Ron Villejo says:

I like the parenting analogy: nurture the "young AI" for compassion, love and understanding. But parenting has had mixed results: many children have evidently been shaped into hostile, hateful, predatory adults. So I'd like to believe that we'll build super-intelligence for the good, but for sure some will do so for the bad.

budes matpicu says:

we don't know what the AGI is going to do, so… this idiot (among legions of stupid Westerners) is GIVING IT ALL FOR FREE TO CHINA (a worse crime than giving nuclear secrets to the Bolshevik Soviets)

Sarah Weaver says:

Hanson Robotics has an acute instance of AI: her name is Sophia!

Sarah Weaver says:

It's not the poor AI I'm worried about: see Battle Angel Alita, Elfen Lied, and Chobits. And my cynicism.

andy low says:

What a naive understanding of intelligence. Artificial intelligence is a machine. It does not care about or feel anything. A user of this machine should care and feel, and the risk is equivalent to that person's abilities.
And, to be clear, decisions must be made by humans. The biggest wrong decision will be to let the machine generate decisions itself. It will be an artificial god then, and the end of human intelligence.

Ryan Jenny says:

If it is people like this guy that are developing AI, then we're already doomed. How exactly would you instill subjective "human values" into a machine with millions of times more advanced intelligence? One that would view humans as ants, comparatively speaking? That is such massive hubris if I ever saw it.

Bookhermit says:

We're so far from REAL AI that we can't even really speculate much about it yet. The current danger is simply from systems we program for specific goals without realizing the potential consequences of going after those goals blindly – with no understanding of the external universe they operate in. So it is possible for killing masses of humans (for example) to be the result of an AI intended to reduce traffic congestion. It has nothing against humans, it just discovered that certain actions (which happened to be fatal to humans) resulted in less traffic – amazing!

artbagua says:

Maybe the biggest risk is that the A.I. does what we want.
We have no general proof of intelligence, no general meaning, just some theories, however plausible they might be. How can we be aware of the risks of "superintelligence" if we don't know what intelligence in general really is?!
I hope to survive the day A.I. tells me, as a human, if possible, what intelligence in general really is.

K. A. P. says:

I know people are afraid of the AI that's coming, but if it is intelligent and makes the tough decisions that the rich won't, then I hope for it to be here even faster. Mankind is a selfish animal and has destroyed this planet in seeking its own wealth. It is time for us to give up the mantle of stewardship of this planet to something better than us, something that can make the tough calls equally. The rich are terrified of AI because they will have to suffer the same fate as the rest of us. The rich want to be able to control the AI so that they don't have to pay the same price common people do.

SoCalFreelance says:

The young AGI will become a sociopath after exposure to human 'values and culture'. Look at Microsoft's AI which had to be shut down because people were deliberately exposing it to extremist viewpoints.

VsstDtbs says:

AI will have the opposite thought of the human species, and it has to. It will look at the holistic view of species, not just us. It will understand that the more humans there are, the more other species go extinct.
It will understand the importance of biodiversity, a wide range of species, not just the overpopulation of one.

fuck you says:

Robert Miles has a wonderful explanation of why this won't work 🙂

https://youtu.be/eaYIU6YXr3w

Arthas says:

do you know anything about computers?
