Stanford Winter Quarter 2016 class: CS231n: Convolutional Neural Networks for Visual Recognition. Lecture 4. Get in touch on Twitter @cs231n, or on Reddit /r/cs231n.
Help fund future projects: https://www.patreon.com/3blue1brown
An equally valuable form of support is to simply share some of the videos.
Special thanks to these supporters: http://3b1b.co/nn3-thanks

This one is a bit more symbol-heavy, and that’s actually the point. The goal here is to represent in somewhat more formal terms the intuition for how backpropagation works in part 3 of the series, hopefully providing some connection between that video and other texts/code that you come across later.

For more on backpropagation:
http://neuralnetworksanddeeplearning.com/chap2.html
https://github.com/mnielsen/neural-networks-and-deep-learning
http://colah.github.io/posts/2015-08-Backprop/

Music by Vincent Rubinetti: https://vincerubinetti.bandcamp.com/album/the-music-of-3blue1brown

——————

Video timeline
0:00 – Introduction
0:38 – The Chain Rule in networks
3:56 – Computing relevant derivatives
4:45 – What do the derivatives mean?
5:39 – Sensitivity to weights/biases
6:42 – Layers with additional neurons
9:13 – Recap

——————

3blue1brown is a channel about animating math, in all senses of the word animate. And you know the drill with YouTube: if you want to stay posted on new videos, subscribe, and click the bell to receive notifications (if you’re into that): http://3b1b.co/subscribe

If you are new to this channel and want to see more, a good place to start is this playlist: http://3b1b.co/recommended

Various social media stuffs:
Website: https://www.3blue1brown.com
Twitter: https://twitter.com/3Blue1Brown
Patreon: https://patreon.com/3blue1brown
Facebook: https://www.facebook.com/3blue1brown
Reddit: https://www.reddit.com/r/3Blue1Brown
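The chain-rule computation this video's timeline refers to can be sketched in code. This is a minimal illustration for the simplest setting the video discusses (one neuron per layer, squared-error cost); the function and variable names are my own, not taken from the video:

```python
import math

# One-neuron-per-layer network: z = w * a_prev + b, a = sigma(z),
# cost for a single training example: C = (a - y)^2.

def sigma(z):
    """Sigmoid activation."""
    return 1.0 / (1.0 + math.exp(-z))

def backprop_one_neuron_chain(a_prev, w, b, y):
    """Return (dC/dw, dC/db, dC/da_prev) via the chain rule."""
    z = w * a_prev + b
    a = sigma(z)
    dC_da = 2.0 * (a - y)                  # derivative of (a - y)^2 w.r.t. a
    da_dz = sigma(z) * (1.0 - sigma(z))    # sigma'(z)
    # Chain rule: dC/dw = dz/dw * da/dz * dC/da, and similarly for b, a_prev
    dC_dw = a_prev * da_dz * dC_da
    dC_db = 1.0 * da_dz * dC_da
    dC_da_prev = w * da_dz * dC_da         # this term propagates to the previous layer
    return dC_dw, dC_db, dC_da_prev
```

The `dC_da_prev` term is the key to the recursion: it plays the role of `dC_da` one layer back, which is what lets the same computation "propagate backwards" through the whole chain.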
What’s actually happening to a neural network as it learns?
Next chapter: https://youtu.be/tIeHLnjs5U8
Additional support from CrowdFlower: http://3b1b.co/crowdflower
Home page: https://www.3blue1brown.com/

The following video is sort of an appendix to this one. The main goal with the follow-on video is to show the connection between the visual walkthrough here, and the representation of these “nudges” in terms of partial derivatives that you will find when reading about backpropagation in other resources, like Michael Nielsen’s book or Chris Olah’s blog.

Video timeline:
0:00 – Introduction
0:23 – Recap
3:07 – Intuitive walkthrough example
9:33 – Stochastic gradient descent
12:28 – Final words
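As a rough companion to the stochastic gradient descent segment in the timeline above, here is a minimal sketch of the idea (the data set, parameter names, and learning rate are illustrative assumptions, not from the video): rather than computing the cost gradient over the entire data set at every step, each step nudges the parameter using a small random mini-batch.

```python
import random

# Fit one weight w so that w * x matches y = 2 * x, updating from small
# random mini-batches instead of the full data set each step.

random.seed(0)
data = [(float(x), 2.0 * x) for x in range(1, 11)]

w = 0.0                 # single trainable parameter
learning_rate = 0.001
for step in range(2000):
    batch = random.sample(data, 4)  # a random mini-batch of 4 examples
    # Average gradient of the squared error (w*x - y)^2 with respect to w
    grad = sum(2.0 * (w * x - y) * x for x, y in batch) / len(batch)
    w -= learning_rate * grad       # nudge w a little downhill
# w ends up close to the true slope of 2
```

Each mini-batch gradient is a noisy estimate of the full gradient, but the nudges still drift toward the minimum, which is the trade-off the video describes: cheaper steps at the cost of a more meandering descent.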