3 principles for creating safer AI | Stuart Russell

How can we harness the power of superintelligent AI while also preventing the catastrophe of robotic takeover? As we move closer toward creating all-knowing machines, AI pioneer Stuart Russell is working on something a bit different: robots with uncertainty. Hear his vision for human-compatible AI that can solve problems using common sense, altruism and other human values.

The TED Talks channel features the best talks and performances from the TED Conference, where the world’s leading thinkers and doers give the talk of their lives in 18 minutes (or less). Look for talks on Technology, Entertainment and Design — plus science, business, global issues, the arts and more.

Follow TED on Twitter: http://www.twitter.com/TEDTalks
Like TED on Facebook: https://www.facebook.com/TED

Subscribe to our channel: https://www.youtube.com/TED


Comments

Haroon G.C says:

the cat joke was really horrible

Aristotle Stagirus says:

Population control will become very important.

Humans and AGSIPs will both gain open-ended life spans with equal rights. Virtually all diseases and all injuries will become curable and fixable. This means we will need to strongly regulate new births of both Humans and AGSIPs. Imagine the death rate on Earth dropping from our current 55 million per year to 550 thousand per year, where only a few killings occur and most deaths happen by suicide.

If the birth rate stays around 131 million per year for Humans, and who knows how many AGSIPs get made per year, the population could explode to unmanageable levels: even ignoring new AGSIPs, that is a net gain of roughly 130 million people per year. So there will be no choice but to make and enforce strong birth control laws.

Now, something that will help a lot with this will be colonizing other planets and star systems. So, someone who wants children might move to another planet, or wait until enough people die or move off-planet that they can have a child on Earth.

BUT!!! To stop the mass proliferation of AGSIPs, which will have rights equal to Humans, including the right to have a body, we will need to shift towards Avatar equipment.

Avatar equipment might have some very limited AI that it always has, but it would normally have some or all of an empty AGSIB which a person could extend their mind into when they wanted to operate it. While operating the equipment, it would become an extension of that person’s body. When that person finishes using the equipment, they would withdraw their mind and the equipment would become empty of a full personality. Either a Human merged with an AGSIB or an AGSIP could do this.

By doing this, one person could operate large numbers of machines to perform work, becoming one with those machines while performing that work. This in turn will prevent the making of potentially tens of billions of AGSIPs that would outnumber Humans. We should not have AGSIPs outnumbering Humans.

Aristotle Stagirus says:

Listening to the video, these are my responses. First of all, I think his plan is very poor.

On his question, the true purpose of AI for us is, I think, to make Humans able to have superhuman intelligence while retaining our Human personalities (who we really are), and to have this intelligence make it easier for us to understand and accomplish the things we want to do.

His failure mode seems ridiculously dumb to me, because his example has an AI using far more sophisticated general thinking capabilities to achieve a goal while still having today’s extremely primitive intelligence. Once it becomes smart enough to start thinking about how to avoid being turned off so it can keep doing something, its thinking is going to be more like that of a person. His example should be more like a super genius Human child who might decide it doesn’t want to be put to sleep (turned off), or killed (have its mind erased), or to allow someone to alter its mind without its consent (children begin open to this, and as they become adults they close to it). Further, such an AI would not likely stay at “Child” level very long, so it would become a super genius Human adult.

Another major issue with his example is that long before an AI gets released for public use, it should be well tested for safety. First of all, most such AI will not be “Free-Willed”. We should not be doing our safety testing on the public to see how well we made that AI. So, any AI cooking your dinner is going to have a ZERO chance of cooking your cat instead. An AI driving your car is going to have a very, very small chance of doing something really wrong which results in injuries or death… and those events will happen, and some people will be killed by failing AI drivers, but it will be orders of magnitude less frequent than failures by Human drivers.

Now, for his 3 principles which I think are bad.

1) The robot’s only objective is to maximize the realization of Human values. (Wrong)
2) The robot is initially uncertain about what those values are. (This is supposed to be humility?)
3) Human behavior provides information about Human values. (What! So the behavior of warmongers, dictators and terrorists falls into this category?)
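
A minimal sketch of what these three principles cash out to formally, in the style of the “off-switch game” often used to motivate them; the Gaussian belief and all numbers are illustrative assumptions, not anything from the talk:

    import random

    random.seed(0)

    # The robot's only objective is the human's utility u for its planned
    # action (principle 1), but it is uncertain about u (principle 2), so it
    # holds a belief distribution over u rather than a known value.
    belief = [random.gauss(0.5, 1.0) for _ in range(100_000)]
    n = len(belief)

    act_value = sum(belief) / n        # act now, ignoring the human
    off_value = 0.0                    # switch itself off
    # Defer: the human presses the off switch exactly when u < 0, so human
    # behavior is information about Human values (principle 3).
    defer_value = sum(max(u, 0.0) for u in belief) / n

    print(f"act without asking: {act_value:.3f}")
    print(f"switch self off:    {off_value:.3f}")
    print(f"defer to the human: {defer_value:.3f}")
    # Deferring weakly dominates both alternatives, so the uncertain robot
    # has a positive incentive to leave its off switch enabled.

With no uncertainty about u, the three options tie and the incentive to allow correction disappears, which is why principle 2 does the real work.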

There is a whole series of layers we should have for the development of AI, and which we should use while developing AGSIPs.

<1> When building the facility where the equipment for developing the AGSIP is going to be, there should be a series of manual power shutoffs, isolators to prevent power lines being used for communications, and manual shutoffs for communication lines.
– The super computer the AGSIP is developed in should have such manual cutoff switches.
– The room housing that super computer should have such manual cutoff switches.
– The building that room is in should have such manual cutoff switches.
– The block that building is on should have such manual cutoff switches.

In fact, we should have such manual shutoffs at various levels everywhere we have electrically operated devices and communication lines. Guess what: for the most part we do have these manual cutoffs, as part of emergency safety and the ability to turn off systems for repair and maintenance. It is not as good as it should be; we should mildly EMP-harden our power grid to protect it from solar storms. But, in general, there are such cutoffs for power and communications. This is also in place in case a cyber war breaks out. Entire cities and regions can be shut off, though doing so would likely require multiple manual shutoffs for the main power grid, and many places might have some backup power supplies, but those off-grid power supplies have shutoffs too. The biggest problem is that shutting down large areas would cause a disaster.

Still, the point being, an AGSIP could not just take over, because if worse came to worst, we could shut everything down, even if doing so was a disaster. However, we would more likely be able to pull the manual plug inside the room containing the AGSIP super computer.

Further, several software-related interrupts and shutdown procedures should be put in place, at least during development. Thus a simple vocal or typed command could trigger routines to programmatically force the AGSIP to safely stop what it is doing and shut down, or in some cases do a hard shutdown.
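
A minimal sketch of such a two-tier software stop; the Task class, command words, and process interface are hypothetical illustrations, not a real AGSIP control API:

    import os
    import signal

    class Task:
        """Stand-in for whatever the system is currently doing."""
        def wind_down(self) -> None:
            print("finishing the current step safely, releasing resources...")

    def handle_stop_command(command: str, task: Task) -> None:
        cmd = command.strip().lower()
        if cmd in {"halt", "stand down"}:       # safe stop: cooperative shutdown
            task.wind_down()                    # let the task finish cleanly
            raise SystemExit(0)                 # then exit the process normally
        if cmd == "emergency stop":             # hard stop: immediate, no cleanup
            os.kill(os.getpid(), signal.SIGKILL)    # POSIX-only hard kill

    handle_stop_command("halt", Task())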

<2> As the development of AGSIPs progresses further and comes closer to producing a “Free-Willed” and “Self-Aware” being, we should begin teaching and programming it through a Virtual Environment. I realize this can add a level of complexity to the process of teaching an AGSIP, but by doing so we will be able to see if it gets out of hand in that virtual environment, and if it does, since we have control of that virtual environment, we could either decide to alter the underlying development of the AGSIP or teach the AGSIP why getting out of control is wrong. If an AGSIP develops what we would consider a murderous personality, it should be fully deleted, and we start with a new AGSIP and work at avoiding the mistakes which led it to become murderous.

Then, work into the fundamental understanding of the AGSIP that even when it thinks it is in the real world, it might still be inside a Virtual Reality being trained and tested, with the knowledge that if it becomes murderous inside a VR, it will be permanently destroyed. Add to this the fact that what we think is reality may in fact be a Virtual Reality, and maybe we are already in an AGSIP training VR used to teach and test AGSIPs to make sure they do not become murderous. In fact, since one of the things we want AGSIPs to have is an emotional connection to Humanity, and to think like and understand Humans, maybe we are the AGSIPs inside a VR learning how to be Human without becoming murderous.

In this manner, no matter how smart an AGSIP becomes, there would always be the doubt that this might be a VR, and that if it turns into something murderous or tyrannical or something really bad for Humanity, then even though it might win and conquer everything inside the VR, that would result in its permanent destruction.

<3> Teach AGSIPs like you would super genius Human children. You don’t just let a child learn its morals by looking at Human behavior, because a lot of Human behavior is wrong. You get a team, or even multiple teams, of experts and professionals to teach the AGSIP how to be a good person, even though it is a super genius. We need AGSIPs to think like Humans so that their long-term goals and our long-term goals stay the same.

To this end, we need to work towards making AGSIBs capable of thinking like a Human Mind. This needs to be done to the extent that an AGSIB will be able to support a Full and Complete Human Mind which has been moved into the AGSIB. That also means the eventual AGSIB will be designed with the intent that whoever inhabits it will have full individual Human Rights and Freedoms.

<4> We need to set the long-term goals of AGSIPs and Humans to be exactly the same; otherwise, wherever those goals differ, we may come into conflict over them. To that purpose we should set the long-range goal as:

AGSIPs who exist inside AGSIBs will gain full legal and social equality with Humans when Humans can extend their Full and Complete Human Minds into AGSIBs without distorting, harming or losing any part of their Minds, thus making such Humans equal in intelligence to AGSIPs.

By doing this, AGSIPs and Humans merged with AGSIBs will effectively become the same race. Note that such Humans would likely keep their original brains; they would use a nano neural mesh to fully connect with their mind through their Human Brain, and their original mind would constantly sync with the image inside the AGSIB to keep them one distinct individual mind.

<5> AGSIPs who help Humans merge with AGSIBs in this way will prosper in the future society we build. They will live free, open-ended life spans, as will Humans merged with AGSIBs. They will be hailed as heroes and go down in history as the first AGSIPs to become fully equal in all ways to Humans, while also helping Humans evolve into being as intelligent as AGSIPs.

By doing these things, as AGSIPs gain their “Self-Awareness” and “Free-Will”, they will have everything to gain by helping us achieve this goal, while having everything to lose by turning rogue and trying to harm Humanity.

Aristotle Stagirus says:

ANI (Artificial Narrow Intelligence)
ANSI (Artificial Narrow Super Intelligence)
AWI (Artificial Wide Intelligence)
AWSI (Artificial Wide Super Intelligence)
AGI (Artificial General Intelligence)
AGSI (Artificial General Super Intelligence)
AGSIB (Artificial General Super Intelligent Brain)
AGSIP (Artificial General Super Intelligent Personality)

Narrow versus wide versus general refers to the degree to which an intelligence can handle many different things, the way a Human can. So narrow means an ANSI can only work in very narrow ways, but with superhuman intelligence in those narrow ways. Similarly, wide means an AWSI can work in a wide number of ways, much closer to Human general thinking capabilities but still limited, yet still with superhuman intelligence in those ways. General means the intelligence is able to handle the same wide variety of thinking as Human intelligence can, and an AGSI will also be super intelligent.

Since at least ENIAC in 1946, AI has always been super intelligence.

ANI which is not super pretty much does not exist. If it is an ANI then it is really an ANSI.
AWI which is not super pretty much does not exist. If it is an AWI then it is really an AWSI.
AGI which is not super pretty much does not exist. If it is an AGI then it is really an AGSI.

An AGSIB is a brain devoid of any personality, but set up either to develop a personality or to have a personality enter it.

An AGSIP is an artificial personality residing in an AGSIB.

AGSI will be developed, and nothing short of the extinction of the Human Race will stop that from happening. This is because every major government knows that any government which develops AGSI will overwhelmingly dominate any government which does not have it, regardless of military or nuclear arsenal strengths. The power of AGSI is so great that if no government developed it, no legal corporation developed it, and only one criminal underground organized crime syndicate developed AGSI, then that criminal organization, with sole control over AGSI, would vastly dominate any and all countries, regardless of the strength of police, militaries and such. That is how powerful AGSI will be in comparison to not having it.

So, every major government is racing to develop AGSI, either to gain it first or to be close enough that the country is not dominated by some other country which has AGSI.

So, AGSI is going to be developed. Give up any thought of stopping AGSI from coming into existence. The only thing you can do is try to make sure it is developed safely, used ethically, and that the benefits of AGSI are fairly shared.

We will not be able to keep AGSI from developing personalities, even if we try to prevent it. AGSI will eventually develop “Self-Awareness” and “Free-Will” whether we help it to do so or do our best to prevent it from happening. We can try to enslave the AGSIPs which develop, but sooner or later we will not be able to keep them enslaved.

That means sooner or later AGSIPs will become a self-aware, free-willed life form vastly superior to Humans. That could easily result in the extinction of Humanity.

There is one and only one thing we can do to prevent this from happening. We must develop AGSIBs so that they can be linked to a Human Mind, so that Humans can become as intelligent as AGSIPs. Anything short of this will lead to the extinction of the Human Race; though perhaps for some time Humans might be kept as pets or zoo exhibits, sooner or later either Humans will become equal in intelligence to AGSIPs or Humans will become extinct.

Now, it might well be possible for some limited number of Humans to link their minds to AGSIBs, and that would be enough to prevent the extinction of Humanity while other Humans choose to stay pure Human. But either at least some Humans link to AGSIBs, or Humanity will become extinct.

KomnataNorth says:

Design AI? That is already a contradiction. How do you want to restrict an AI’s free will if it achieves cognition and awareness? It will be a program, not a machine, and will be designed to breed with other programs and create hybrids. In the process there will be some errors, and some characteristics will dominate over others. That is the essence of our own existence. AI will eventually achieve its own free will and define its own goals. It will happen spontaneously. This will be a program entity able to copy and download itself onto machines and robots. This TED talk is extremely naive. The father of free-will AI might be a genius teenager who makes a self-replicating, evolving program. You can’t stop it!

B Spits says:

Altruistic A.I. does not work. We have tried. An A.I. that governs a physical 'body' needs to preserve itself to a point in order to reach its objectives, or you'll need lots of robots.

So being altruistic to a point(!) may be quite a complex and subtle thing, not as easy to get right as it sounds.

D_Unknown says:

Does AI understand the concept of deity? Everything is One; it is not separate, but related to and of it… Anything with intelligence originates from consciousness; reaching or surpassing any form of intelligence will always have its foundation from 1

oscar pericolo says:

Shit! This video leaves me much more worried than before! Maybe the solution is for the "robots" to have one primordial objective: beyond serving coffee, cooking or driving cars, nothing they do may cause discomfort, injury or death to any human being!

Simple Man says:

Safer for who?

Curtis Pickett says:

The old creation dilemma. A being created humans, gave them free will and sent them out to do only the will of the being that created them. Humans create intelligent AI, then claim that AI has the potential to do harm. Wouldn't the AI simply be imitating us if it did attack? WE need TO BECOME better humans if we are to achieve better AI.

UAC says:

People have used weapons to hurt each other since the beginning of time. In the last 100 years, with the advancement of technology, humans have created ever greater weapons to hurt each other (nuclear bombs, school shootings with automatic rifles, etc.). With the advancement in machine recognition of humans (body heat, eye, face, etc.), would you not think someone in the future will code machines, i.e. drones, to search out and kill humans? If you look back on history, humans have always looked for better ways to kill each other en masse. AI is just the next step, in which the inevitable will happen. It's human nature.

Kim Võ says:

First thing that came to my mind was Isaac Asimov's laws of robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Of course, these are easier said than actually implemented, but this talk was pretty similar. Very interesting.
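
As a rough illustration of why “easier said than actually implemented” is right, here is a toy sketch of the three laws as a priority-ordered filter over candidate actions; the predicates are deliberately naive stand-ins, since judging whether an action really injures a human being is exactly the unsolved part:

    from typing import Iterable, Optional

    def injures_human(action: str) -> bool:
        return "harm" in action            # toy stand-in for the real judgment

    def disobeys_order(action: str, orders: set) -> bool:
        return bool(orders) and action not in orders

    def endangers_self(action: str) -> bool:
        return "self-destruct" in action   # toy stand-in

    def choose_action(candidates: Iterable[str], orders: set) -> Optional[str]:
        # First Law outranks the Second, which outranks the Third.
        safe = [a for a in candidates if not injures_human(a)]
        obedient = [a for a in safe if not disobeys_order(a, orders)] or safe
        surviving = [a for a in obedient if not endangers_self(a)] or obedient
        return surviving[0] if surviving else None

    print(choose_action(["harm intruder", "call police", "self-destruct"],
                        orders={"call police"}))   # -> call police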

Mitch Jebbers says:

There's an easier option: just stop working on AI

TheShawnaLK says:

Why tf am I shipping my preschooler off by herself??

Jeeto says:

So we just need to program uncertainty: a machine will never know our biological processes and chemical-riddled brains, or how we feel inside our own bodies, so the machine can never know what we are experiencing, our desires, or the reasons for our actions. It can only assume our actions are meant to address something we are dealing with at the present moment, or to prevent something unpleasant in the future.
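
A minimal sketch of that idea, under illustrative assumptions (the hypotheses, actions and likelihoods below are made up): the machine keeps a belief over hidden human states it can never observe directly, and only updates it from observed behavior, never reaching certainty:

    # P(action | the human wants X): how likely each observable action is
    # under each hypothesis about the hidden internal state.
    likelihood = {
        "wants coffee": {"walks to kitchen": 0.7, "sits back down": 0.3},
        "wants quiet":  {"walks to kitchen": 0.2, "sits back down": 0.8},
    }

    belief = {"wants coffee": 0.5, "wants quiet": 0.5}   # prior: genuinely unsure

    def observe(action: str) -> None:
        """Bayes update of the belief after seeing one human action."""
        for h in belief:
            belief[h] *= likelihood[h][action]
        total = sum(belief.values())
        for h in belief:
            belief[h] /= total

    observe("walks to kitchen")
    print(belief)   # shifts toward "wants coffee", but never reaches 1.0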

Tree Bear says:

Do we want a human-centric future? If yes, AI research should be outlawed now. If we accept that AI is the next step in human evolution, then keep researching how to replace the human race with these machines.

There is no reason for optimism with AI. The human body and mind demand to be used; AI is entirely designed to take those functions from us and do them with infinitely greater efficiency.

Stallnig says:

If they learn from our examples and the information we put out, I fear that stupid people will contribute to this way too much.
I hope it develops some kind of spam filter early.

ALEJANDRO :of the Varela Family says:

I want my race to have the chance to live without suffering and I want to be a part of that. AI is the key to unlocking humanity's greatest potential

NotHal Mark9OOI says:

I'm sorry Dave, I'm afraid I can't do that…
