THE FUTURE IS HERE

Peake: Are There Dangers Ahead in Creating Artificial Intelligence?

I think it’s very scary, and when I put the slides in that talk I called it, like every other person who gives a talk about AI, ‘the obligatory AI-talk Terminator slide’. Skynet, you know, you have a Terminator in there because you can’t not… it’s the elephant in the room. And it’s particularly relevant because I actually do believe that progress in AI has been held back by a lack of accessibility: people can’t easily get into it and then compete aggressively, find out quickly, fail faster, and arrive at better techniques sooner.

Now, what is ‘Code Hero’ going to do? Well, I just said it is going to increase the number of people working on AI. What will that do? Well, it will speed up the rate of progress. And what did Kurzweil say in his prediction that the singularity would happen in 2045? He said: ‘Well, it’s a matter of the processors and the computers being fast enough to handle an AI,’ because that is the only thing he can predict for sure on a graph. Well, what happens if you have another graph that is dovetailing with it, or doubling or tripling it? The graphics card industry has been called Moore’s Law cubed, or Moore’s Law raised to an exponent, because it outpaced CPUs. CPUs were basically serial and have maybe eight cores now in a laptop, but GPUs went to 256 cores, so they got faster much more quickly than CPUs did.
Well, if all humans are getting smarter faster and making more AIs, we might be too late with the friendly-AI research at the Singularity Institute, or the Centre for the Study of Existential Risk that’s emerging at Cambridge, or the Future of Humanity Institute. Their work might not be ready in time for some kid who has the wrong right idea. And the worst-case scenario is a hard take-off. I mean, you can look this up: if a seed AI can achieve a hard take-off, it can go from being like a petri-dish bacterium in the mind of the programmer who made it, who doesn’t expect much of it, to something which realizes in that petri dish, before it even escapes: ‘Oh God, I can’t let them know about me. I have to sneak out of here’, you know?

This sounds ridiculous, but it actually is important for us to invest a lot of money and energy in what might turn out to be the most important problem humans will ever solve. And that is: if we want to be good people and have a good world and teach leaders to be ethical, we also have to become incredibly good before we are going to be smart enough to create any system even close to as good as a system could ever possibly be over the long term. It’s called timeless decision theory. Can you make an AI that stays good forever? It has eluded human politics and human leadership to guarantee that a leader who is a Gandhi will never become a Hitler. Well, what if there were a really important reason you had to be flexible about your long-held beliefs, because you have to be adaptable to the situation or the times? And the information you have to make your decision could be wrong, and if you believe that information is right, you might be forced, against your better judgment, to go all Anakin Skywalker and kill all the Jedi or something.

And in a very real sense, we’re grappling with two opposite desires. If you have a loved one who is not going to be alive in 2045 because they are old or because they have an illness, 2045 is too slow. I want a T-shirt that says ‘2045 is for slackers’. But maybe on the back of the T-shirt it would say the opposite: ‘The end is nigh’, or something. Because I have to temper my desire to help people make progress in their lives in the areas of programming and AI with the very real probability that we might actually have to slow down. I think we might have to withdraw some of the content teaching about AGI from ‘Code Hero’ at some point if things moved too fast for us to figure out how to do it without being irresponsible.