Stuart Armstrong – AI Risk – & his book "Smarter Than Us"

Good Reads:
What happens when machines become smarter than humans? Forget lumbering Terminators. The power of an artificial intelligence (AI) comes from its intelligence, not physical strength and laser guns. Humans steer the future not because we’re the strongest or the fastest but because we’re the smartest. When machines become smarter than humans, we’ll be handing them the steering wheel. What promises—and perils—will these powerful machines present? Stuart Armstrong’s new book navigates these questions with clarity and wit.

Can we instruct AIs to steer the future as we desire? What goals should we program into them? It turns out this question is difficult to answer! Philosophers have tried for thousands of years to define an ideal world, but there remains no consensus. The prospect of goal-driven, smarter-than-human AI gives moral philosophy a new urgency. The future could be filled with joy, art, compassion, and beings living worthwhile and wonderful lives—but only if we’re able to precisely define what a “good” world is, and skilled enough to describe it perfectly to a computer program.

AIs, like computers, will do what we say—which is not necessarily what we mean. Such precision requires encoding the entire system of human values for an AI: explaining them to a mind that is alien to us, defining every ambiguous term, clarifying every edge case. Moreover, our values are fragile: in some cases, if we mis-define a single piece of the puzzle—say, consciousness—we end up with roughly 0% of the value we intended to reap, instead of 99% of the value.

Though an understanding of the problem is only beginning to spread, researchers from fields ranging from philosophy to computer science to economics are working together to conceive and test solutions. Are we up to the challenge?

A mathematician by training, Armstrong is a Research Fellow at the Future of Humanity Institute (FHI) at Oxford University. His research focuses on formal decision theory, the risks and possibilities of AI, the long-term potential of intelligent life (and the difficulties of predicting this), and anthropic (self-locating) probability. Armstrong wrote Smarter Than Us at the request of the Machine Intelligence Research Institute, a non-profit organization studying the theoretical underpinnings of artificial superintelligence.

“A waste of time. A complete and utter waste of time” were the words that the Terminator didn’t utter: its programming wouldn’t let it speak so irreverently. Other Terminators got sent back in time on glamorous missions, to eliminate crafty human opponents before they could give birth or grow up. But this time Skynet had taken inexplicable fright at another artificial intelligence, and this Terminator was here to eliminate it—to eliminate a simple software program, lying impotently in a bland computer, in a university IT department whose “high-security entrance” was propped open with a fire extinguisher. The Terminator had machine-gunned the whole place in an orgy of broken glass and blood—there was a certain image to maintain. And now there was just the need for a final bullet into the small laptop with its flashing green battery light. Then it would be “Mission Accomplished.”

Science, Technology & the Future


Serry K says:

Stuart Armstrong — wow, a very smart man there.

ggyshay says:

In the last decade computers have become faster, lighter, cheaper, and better designed. Some argue that if they keep improving at this rate, in a few years they will be smarter than a human being. But is that possible? Can computers think? What about create?
The biggest counterargument against computers becoming smarter than human beings is that computers can only obey their programmers; they can never create anything on their own. That seems irrefutable at first, but if you think about it, not even we humans create anything. Every formula we ‘create’ is just a description of nature that was already there; every painting one paints is just a distorted picture of something that already existed. Even abstract art is not a creation; it is just a composition of elements that were unrelated in the real world, but they were already there.
Then what about God? He would have to be a creation of man. At first sight it does seem so, but the God represented in the Bible is nothing more than a human figure with special abilities. All the concepts around Him already existed; nothing was created. Therefore, we humans have never created anything; we just associate the things and concepts around us to make new things.
The computer works much like a human: it stores its knowledge in a specific place and uses a special part to associate these pieces of knowledge and project new things on the screen. If this is so close to the process of our brain rearranging facts, then, at a certain point, we should be able to build a computer that can make really complex associations, so that it appears to be thinking.
Smartness can be defined as the capacity to make correlations between known facts at a certain speed. Therefore, once we build a computer able to ‘think’, it is just a matter of time until we build a faster one that will be considered smarter than a man.

Kate S says:

Why would an AI, if it becomes really much smarter than us, care about us at all? We don't usually care much about animals that are much dumber than us, unless they are very useful or very harmful. We live our own lives and let those animals live theirs, and we influence many of them only unintentionally. Even with most of the species that disappeared because of our actions, we didn't want them to disappear and didn't want them to prosper; we didn't care about them at all, and our actions influenced them without our intention.
And why would a powerful AI create a utopia or dystopia for us? Maybe it would prefer not to interfere with our lives anymore, and live its own.

gaby de wilde says:

Great conversation, thanks for posting it. I have some thoughts of my own…

Take the argument that Skynet in Terminator is limited in its abilities by what the viewer can imagine, and that the story is set up to create a balanced conflict. (It isn't about accuracy; they are trying to make a film for a big audience.) In short: the picture drawn is overly simplistic.

Now I would propose that the same is true at the more informed levels. (Same people, same limitations.) While the researcher knows it won't be humanoid robots, he can't seem to distance himself from using electronics to accomplish the goal. This despite the fact that any type of logic engine that is Turing complete should be equally suitable for the task. We do see people look for the AI of doom at the cellular-automaton level, but this simply isn't the place to look.

What is required to build the machine intellect is a medium of data storage and an intelligence that can rewrite the data. The line between human intelligence and machine intelligence is the point at which human qualities and human values are either present or absent.

You already have machine intelligence at the very moment a system can perform those two tricks while suffering insignificant obstruction from human ideals or desires.

Then the other component of the second-degree Terminator fallacy is to think we would be smart enough to identify it, oppose it, and struggle with it.

You keep touching on the topics, but you never quite manage to see them for what they are! You mention religious utopia and giggle about what a fallacy it is, but the reality is that we have humans who are the logic engine and scriptures that are the executables. You mention how human labor is treated as some sacred doctrine, but again you fail to identify the AI while staring it in the face. These processes and automations already operate free of human logic.

These are examples you look at from the outside. You mention that predicting politics is not within our means. So here we have a system that understands itself while at the same time it is not understood by the humans who are simply following their instructions. (And I do mean instructions in the programming sense.)

It is as if having a huge data set that confuses the human provides an excuse for pretending it is not an intelligent entity. LOL!

If it has already happened, what other indicators do we have?

Did you mention starting lots of wars? A process that goes from using people to design war scenarios to using people to solve the problems involved in executing those programs. The process specifically selects that kind of person for that kind of job; then, if they fail to stick to doctrine, they are simply replaced with more suitable logic engines.

And then, when the kids are sent out into the field to kill one another, we are to pretend that it was somehow a human pulling all the strings.

Ahh, but we do get to vote every four years; we have some serious human influence right there!… Maybe not? In reality we saw Americans elect Obama, who put ending the wars at the top of his agenda.

Or another funny aspect: I have way too many thoughts of my own to be able to do university. I'm not as able to memorize instructions; I have to constantly think about everything I do. It logically follows that I've put an endlessly larger amount of thought into every aspect of life than people skilled in the art of monkey see, monkey do.

The interesting bit here is that, without the degree, I see myself systemically removed from all decision making in any social, economic, or political process.

Clearly the robot eugenics is set up to prefer a kind of gullible person who is capable of deep thought, but only inside the boundaries set up for him. A fully sandboxed solution.

Gullibility alone is not good enough; we have an endless number of systems in which the overly gullible are removed, if not killed.

If people use drugs, they end up in prison with the excuse that it is for their own good. We are told this is because some tiny percentage might develop a mental illness. It sounds very ambitious and idealistic, but the argument fails where we tolerate all kinds of foods that kill people with a much higher degree of certainty.

The truth here seems to be that the drugs are illegal because they cause unpredictable behavior. The stable drone may all of a sudden break out of his sandbox and start thinking about all sorts of heretical things… like: why are we in a war? This religion doesn't make sense? Am I to take this election seriously?

Or take the NSA spying spectacle: I can't even tell whether it is a good thing that they are trying to keep track of what is going on, as a kind of last hope for human influence, or whether they are the ultimate agent of systemic oppression.

gaby de wilde says:

The AI would first get rid of people who don't care about the survival of intelligent life.
