What’s actually happening to a neural network as it learns?
Next chapter: https://youtu.be/tIeHLnjs5U8
Help fund future projects: https://www.patreon.com/3blue1brown
An equally valuable form of support is to simply share some of the videos.
Special thanks to these supporters: http://3b1b.co/nn3-thanks

And by CrowdFlower: http://3b1b.co/crowdflower
Home page: https://www.3blue1brown.com/

The following video is sort of an appendix to this one. The main goal with the follow-on video is to show the connection between the visual walkthrough here and the representation of these “nudges” in terms of partial derivatives that you will find when reading about backpropagation in other resources, like Michael Nielsen’s book or Chris Olah’s blog.
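For readers who want a preview of that notation, here is a minimal sketch, assuming the single-neuron-per-layer simplification used in the video and the standard symbols from Nielsen’s book (cost C, weight w, bias b, weighted input z, activation a = σ(z)); the specific layout below is my own summary, not taken from the video:

\[
z = w\,a_{\text{prev}} + b, \qquad a = \sigma(z), \qquad
\frac{\partial C}{\partial w}
= \frac{\partial z}{\partial w}\,\frac{\partial a}{\partial z}\,\frac{\partial C}{\partial a}
= a_{\text{prev}}\,\sigma'(z)\,\frac{\partial C}{\partial a}.
\]

In words: the “nudge” to a weight is proportional to the activation feeding into it and to how sensitive the cost is to the neuron’s own activation, which is the intuition the visual walkthrough builds.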

Video timeline:
0:00 – Introduction
0:23 – Recap
3:07 – Intuitive walkthrough example
9:33 – Stochastic gradient descent
12:28 – Final words

Comments

Mayank Mishra says:

Experienced countless EUREKA moments in this series. Grant Sanderson is a legend.

Amrish Kelkar says:

Is it correct that backpropagation also adjusts the activation function (say, the 'filters' in a CNN)? Grant seems to say here that only the weights and biases are being adjusted, maybe just to keep things simple, but I'm interested in knowing whether the filters themselves are modified as the model is tuned.

Ali Al Shammaa says:

8:31 How do you suddenly switch from changes in activation to changes in weights? @3blue1brown

A says:

Great, but could you turn the distracting background music off? I'm forever bemused by good presenters such as yourself doing this, as I can't recall a single occasion when I was in a lecture theatre where they had background music playing.

thetruereality says:

Wonderful, beautiful presentation, really helps with understanding for someone like me who is pursuing AIML.
I just have this one question.
7:33 Why? Why do we want the other outputs to have lesser activation? Won't that reduce the network's ability to identify numbers other than 2? Are you just giving this one specific example of 2 to explain backpropagation?

Ceder Veltman says:

The graphic at 8:00 is mind-blowingly good at explaining what you mean.

James Mosher says:

How do neural networks handle catastrophic failure? Namely, say incorrectly assigning a zero results in death (like, say, a car ramming full speed into a barrier, something no non-suicidal, undistracted human would do intentionally)? Is there a meaningful way to encode this? I bring this up because, despite autonomous machines being safer “on average”, I don’t see human beings tolerating a system whereby, even if on average they are safer, the system occasionally “intentionally” engages in catastrophic failure. Such thinking is a hallmark of safe and reliable engineering: bend before breaking, leak before bursting, etc.

says:

Deep learning of a regimented framework. How excitingly boring next…..

gaming snake says:

This has been better than a year of uni.

william cipto says:

Great content!

Vaddagani Shiva says:

Mini Enlightenment starts @ 8:05

Tymothy Lim says:

Thank you very much for this video! It is very educational and intuitive to understand! 🙂

Wirannauxa says:

I may have enrolled in a course that may have violated your copyright and content.
Please check it out.
They may have cheated me by plagiarizing your content.

Kevin Connolly says:

This series is totally brilliant. I am 73 years old and used to teach mathematics. I am still learning stuff, and with the help of sites like yours it makes it so much easier. Have you thought of doing any videos on the really complex subject of real analysis? Keep up the good work. Kevin Connolly

Giacomo Miola says:

Yes, he makes it look easy to understand, but that's because he's scratching the surface. Real deep learning is not for us average humans.

anotherplatypus says:

You sounded so jazzed when you brought up getting a free shirt from that sponsor. = )

Fahd Baba says:

Why do you need two 16-neuron layers? Why not just have one 784-neuron input layer, then a 10-neuron output layer?

Nicolai Matthew says:

Interesting stuff!

Marcos Mohareb says:

I wish I had the option to pay my college tuition to you instead of my university. Well done mate!

Buck Rothschild says:

Is it called three blue one brown because of the ratio of sea versus land on the Earth?
