THE FUTURE IS HERE

Training Neural Networks: Crash Course AI #4

Today we’re going to talk about how the neurons in a neural network learn by having their weights adjusted, a process called backpropagation, and how we can optimize a network by finding the combination of weights that minimizes its error. Then we’ll send John Green Bot into the metaphorical jungle to find where this error is smallest, known as the global optimal solution, as opposed to spots where it’s only relatively small, called local optimal solutions, and we’ll discuss some strategies that help neural networks find these optimized solutions more quickly.
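If you want to see the idea from the episode in code, here is a minimal sketch of backpropagation plus gradient descent, assuming a tiny one-hidden-layer network trained on the XOR problem with NumPy. The names and numbers here (learning_rate, n_hidden, the step count) are illustrative choices, not something from the episode itself.

import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR inputs and target outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

n_hidden = 4
learning_rate = 0.5

# Randomly initialized weights -- the numbers backpropagation will adjust.
W1 = rng.normal(size=(2, n_hidden))
b1 = np.zeros((1, n_hidden))
W2 = rng.normal(size=(n_hidden, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: compute the network's predictions.
    h = sigmoid(X @ W1 + b1)
    pred = sigmoid(h @ W2 + b2)

    # Error (mean squared error) -- the "terrain" we want the lowest point of.
    error = np.mean((pred - y) ** 2)

    # Backward pass: propagate the error back to get a gradient for each weight.
    d_pred = 2 * (pred - y) / len(X) * pred * (1 - pred)
    dW2 = h.T @ d_pred
    db2 = d_pred.sum(axis=0, keepdims=True)
    d_h = d_pred @ W2.T * h * (1 - h)
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0, keepdims=True)

    # Gradient descent: nudge every weight a little bit "downhill" in error.
    W1 -= learning_rate * dW1
    b1 -= learning_rate * db1
    W2 -= learning_rate * dW2
    b2 -= learning_rate * db2

print("final error:", error, "predictions:", pred.round(2).ravel())

Running the sketch shows the error shrinking step by step; whether it lands in the global optimum or just a local one depends on the random starting weights, which is exactly the jungle-exploration problem the episode describes.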

Crash Course is produced in association with PBS Digital Studios
https://www.youtube.com/pbsdigitalstudios

Crash Course is on Patreon! You can support us directly by signing up at http://www.patreon.com/crashcourse

Thanks to the following patrons for their generous monthly contributions that help keep Crash Course free for everyone forever:

Eric Prestemon, Sam Buck, Mark Brouwer, Timothy J Kwist, Brian Thomas Gossett, Haxiang N/A Liu, Jonathan Zbikowski, Siobhan Sabino, Zach Van Stanley, Bob Doye, Jennifer Killen, Nathan Catchings, Brandon Westmoreland, dorsey, Indika Siriwardena, Kenneth F Penttinen, Trevin Beattie, Erika & Alexa Saur, Justin Zingsheim, Jessica Wode, Tom Trval, Jason Saslow, Nathan Taylor, Khaled El Shalakany, SR Foxley, Sam Ferguson, Yasenia Cruz, Eric Koslow, Caleb Weeks, Tim Curwick, David Noe, Shawn Arnold, William McGraw, Andrei Krishkevich, Rachel Bright, Jirat, Ian Dundore

Want to find Crash Course elsewhere on the internet?
Facebook – http://www.facebook.com/YouTubeCrashCourse
Twitter – http://www.twitter.com/TheCrashCourse
Tumblr – http://thecrashcourse.tumblr.com
Support Crash Course on Patreon: http://patreon.com/crashcourse

CC Kids: http://www.youtube.com/crashcoursekids

#CrashCourse #ArtificialIntelligence #MachineLearning