Educator and entrepreneur Sebastian Thrun wants us to use AI to free humanity of repetitive work and unleash our creativity. In an inspiring, informative conversation with TED Curator Chris Anderson, Thrun discusses the progress of deep learning, why we shouldn’t fear runaway AI and how society will be better off if dull, tedious work is done with the help of machines. “Only one percent of interesting things have been invented yet,” Thrun says. “I believe all of us are insanely creative … [AI] will empower us to turn creativity into action.”

Check out more TED Talks:

The TED Talks channel features the best talks and performances from the TED Conference, where the world’s leading thinkers and doers give the talk of their lives in 18 minutes (or less). Look for talks on Technology, Entertainment and Design — plus science, business, global issues, the arts and more.




M Chapman says:

Thrun talks about an algorithm that can recognise dog and cat faces. Can this algorithm, or another one, distinguish between something that merely looks like a cat's face and something that actually is a cat's face?

SnoopyDoo says:

Nice boots.

Konstantinos Kappa says:

If you can't understand it, let me brief you on it.

Neural networks are networks of artificial 'neurons' – practically programming functions that take a list of numbers (input), process those numbers and produce a new number (output). A feed-forward neural network consists of many such functions arranged in layers. The first layer takes the initial inputs – the things that are going to affect the computer's decisions (in this case the position of the AI Shadow Fiend, the position of the enemy Shadow Fiend, all the creeps, towers, etc.). These inputs then get processed through multiple layers, and the last neurons, called 'output neurons', produce the network's outputs (in this case movement, attacking, etc.). Every layer consists of a number of neurons that receive as inputs the outputs of the neurons in the previous layer. In this way a 'chain reaction' is formed: the initial inputs pass through many intermediate layers, called 'hidden layers', before they produce an output. (Hence the name Deep Learning – only neural networks with hidden layers count as Deep Learning, because without them a network can't solve non-linear problems, but you can google that.) This allows the computer to make extremely complex calculations.
You might wonder how exactly the neurons 'process' the inputs. Well, the inputs are always translated into numbers (computers are really good at numbers), so a position would probably become X and Y coordinates, etc. Every neuron multiplies each input it receives by its own unique number for that input, called a weight. This determines the intensity of the neuron's signal. The weights are what is actually 'trained' in the AI: initially they are random values between -1 and 1, and they have to be adjusted (trained) properly in order to create intelligent behavior. Neurons also have a bias – one additional 'weight' that is simply added to the neuron's sum. It is hard to explain why this is necessary, but roughly: it is much easier for the AI to solve a function that looks like this:
a*b*c + d = 0;
instead of like this:
a*b*c = -d,
where d is the bias.

After it has multiplied every input by its weight, the neuron sums everything together and sends the sum through an 'activation function'. This function differs from network to network, but 99.9% of the time it's one of these four: a hyperbolic tangent, a logistic sigmoid, a rectifier or a step function. These functions scale the sum into a more manageable number. For example, the sigmoid always returns values between 0 and 1; the hyperbolic tangent, between -1 and 1. That way the output of a neuron can determine a decision (for example, the output neuron that corresponds to attacking a creep can trigger when its output is above 0 and stay quiet when it is below 0). Every neuron does this same thing, forming an input-output chain reaction.
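To make the mechanics above concrete, here is a minimal sketch in Python of a feed-forward pass: each neuron computes a weighted sum plus bias and squashes it with tanh. The network shape (2 inputs, 3 hidden neurons, 1 output) and the "attack if positive" reading are my own illustrative assumptions, not the actual bot's architecture.

```python
import math
import random

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus the bias, squashed by a tanh activation.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return math.tanh(total)  # always between -1 and 1

def layer(inputs, weight_matrix, biases):
    # Every neuron in the layer sees all outputs of the previous layer.
    return [neuron(inputs, w, b) for w, b in zip(weight_matrix, biases)]

# Tiny 2-input -> 3-hidden -> 1-output network with random initial weights,
# each drawn from [-1, 1] as described above.
random.seed(0)
hidden_w = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
hidden_b = [random.uniform(-1, 1) for _ in range(3)]
out_w = [[random.uniform(-1, 1) for _ in range(3)]]
out_b = [random.uniform(-1, 1)]

x = [0.5, -0.2]                       # e.g. a scaled (x, y) position
hidden = layer(x, hidden_w, hidden_b)  # the hidden layer
output = layer(hidden, out_w, out_b)[0]
print(output)  # a value in (-1, 1); above 0 could mean "attack"
```

With untrained random weights the output is meaningless noise, of course – training (below) is what makes it intelligent.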

After the network is created, it has to be trained. This is usually the hardest part. The easiest way is called Supervised Learning. It is used when you know the answer to the problem – for example, when training an AI to recognize faces, you know who each face belongs to. After each guess you can tell the network whether its answer is wrong; if it is, it computes its error (the correct answer minus its answer) and adjusts all the weights throughout the network accordingly, using an algorithm called backpropagation – but that's beyond the scope of this comment.
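The "error = correct answer minus its answer, then adjust the weights" loop can be shown on the simplest possible case: a single tanh neuron trained by gradient steps. This is only the one-neuron special case of what backpropagation does across a whole network; the toy dataset and learning rate are my own illustrative choices.

```python
import math

def predict(inputs, weights, bias):
    return math.tanh(sum(w * x for w, x in zip(weights, inputs)) + bias)

# Toy supervised dataset: (input, correct answer) pairs.
data = [([1.0, 1.0], 1.0), ([-1.0, -1.0], -1.0),
        ([1.0, -1.0], 0.0), ([-1.0, 1.0], 0.0)]

weights, bias, lr = [0.1, -0.1], 0.0, 0.1
for epoch in range(200):
    for x, target in data:
        out = predict(x, weights, bias)
        error = target - out            # "correct answer minus its answer"
        # Scale by the activation's slope: d/dz tanh(z) = 1 - tanh(z)^2
        grad = error * (1 - out * out)
        weights = [w + lr * grad * xi for w, xi in zip(weights, x)]
        bias += lr * grad

print(round(predict([1.0, 1.0], weights, bias), 2))  # approaches the target 1.0
```

After a couple hundred passes over the data, the neuron's guesses land close to the known answers – the same principle backpropagation applies layer by layer in a deep network.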

The other type of training is called Reinforcement Learning, which is harder to program than Supervised Learning. It is necessary when there is no obvious 'right answer' for a situation, and it works on a punishment-reward principle. The network knows what counts as a good outcome and what counts as a bad one, so it can retrace its calculations, determine which of them led to good decisions and which to bad ones, and adjust its weights to perform better next time. That is what is used here, combined with a genetic algorithm – something that works similarly to natural selection. In a genetic algorithm, a 'population' of creatures is created (which is why they had to run many games at the same time to train the AI), and when a generation dies out, the creatures that performed best have a much higher chance to pass on their 'genes' to the next generation. In this case, the 'genes' are the weights of the neural network controlling the AI. That way, every generation gets better and better.
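The genetic-algorithm loop described above – population, fitness, selection, crossover, mutation – fits in a few lines. In the real case the fitness function would be "how well did this set of network weights play a game"; here I substitute a stand-in fitness (distance to a hidden target vector), and all the numbers (population size, mutation rate, survivor fraction) are illustrative assumptions.

```python
import random

def fitness(genes):
    # Stand-in for "how well did this bot play": closeness of the gene
    # vector to a hidden target. In reality you'd run a full game here.
    target = [0.5, -0.3, 0.8]
    return -sum((g - t) ** 2 for g, t in zip(genes, target))

def evolve(pop_size=50, n_genes=3, generations=100, mutation=0.1):
    random.seed(1)
    # Each individual is a list of weights ("genes") in [-1, 1].
    pop = [[random.uniform(-1, 1) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 5]   # the fittest fifth gets to breed
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            # Crossover: each gene comes from one parent, then a small
            # Gaussian mutation is added.
            child = [random.choice(pair) + random.gauss(0, mutation)
                     for pair in zip(a, b)]
            children.append(child)
        pop = children                   # the old generation dies out
    return max(pop, key=fitness)

best = evolve()
print([round(g, 1) for g in best])  # the surviving genes, near the target
```

Generation after generation, the surviving 'genes' cluster around whatever scores well – exactly the "better and better" dynamic the comment describes, just with game performance as the score.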

So yes, we understand perfectly how it works, and it's absolutely fascinating – and completely different from how WE learn or make decisions. A learning computer does thousands of extremely complicated calculations within MILLISECONDS. Every frame. A neural network with as few as 20 neurons can perform simple tasks that would still take humans literally MILLIONS of neurons to solve.

This is why it is so dangerous.

Bin Han says:

This is making me cry. Yes, people shouldn't have the endless fear we've had with every revolution. What's wrong with people losing jobs when those jobs can be done more economically? Think about globalization – the "Made in China" that everyone complains about made nicely designed furniture (IKEA) and devices such as iPhones affordable for most people. The Americans who lost their jobs because of it mostly moved to other fields and are better off. What if there were simply no need for such jobs? What if people could just enjoy life without work, once food can be produced so, so cheaply?

sunflower says:

We haven't cured cancer because cancer is caused by the very many things we cherish… phones… computers… microwaves… processed food… alcohol… the list is long.

sunflower says:

Chris hasn't been convinced 😉 … we will miss some things and gain on other levels… nothing is ever perfect.

Mihai - Stefan Brighiu says:

Did you guys not hear of SkinVision?

Ashok Nayar says:

He's also wrong about AlphaGo's brittleness. AlphaGo was trained on existing datasets, but AlphaGo Zero learned with no dataset at all, and AlphaZero can play several very different games at superhuman strength – so it has exactly the form of generalized expertise he claims there has been no progress on. Nothing to be too scared about, but it's worth being part of the conversation.

Yaboku Last says:

Terran buffs incoming…

paatonratsumies says:

shoes and armpit nuff said

bene2929 says:

15:08 Now it can?


