12b: Deep Neural Nets


*NOTE: These videos were recorded in Fall 2015 to update the Neural Nets portion of the class.
MIT 6.034 Artificial Intelligence, Fall 2010
View the complete course: http://ocw.mit.edu/6-034F10
Instructor: Patrick Winston

In this lecture, Prof. Winston discusses BLANK and modern breakthroughs in neural net research.

More courses at http://ocw.mit.edu

Julian G says:

With regard to gesture and voice at 15:28: the question of why exactly this works is just amazing, hence inspiring 🙂 Great lecture!

robin jacobsson says:

For some weird reason, the way he acts and talks makes him really funny and interesting to listen to. I have no idea why but it's awesome!

Rasheed Zayid says:

Well done, Prof. Patrick H. Winston, for providing us with these great videos.

seanmchughinfo says:

When it guesses the wrong thing (school bus on black and yellow stripes) isn't the "real problem" there that it doesn't have enough data or good enough data?

Pedro Santos says:

Thanks for the class!

Nikhil D'Souza says:

Can someone explain what he is doing with -1 and the threshold value at 25:49? I watched his previous lecture 12a, but I still don't really understand how he can get rid of thresholds by doing that.
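For what it's worth, the move the comment asks about sounds like the standard "bias trick" (my reading, not a transcript of the lecture): a neuron that fires when the weighted sum exceeds a threshold T can be rewritten to fire when the sum crosses 0, by adding one extra input fixed at -1 whose weight is T. The threshold then stops being a special quantity and becomes an ordinary weight that gradient descent can learn. A minimal sketch:

```python
import numpy as np

def neuron_with_threshold(w, x, T):
    # Original form: fire when the weighted sum exceeds threshold T.
    return float(np.dot(w, x) > T)

def neuron_with_bias_input(w, x, T):
    # Bias trick: append a constant -1 input whose weight equals T.
    # The comparison is now against 0, and the threshold is just
    # another weight the learning rule can adjust.
    w_aug = np.append(w, T)      # extra weight = old threshold
    x_aug = np.append(x, -1.0)   # extra input fixed at -1
    return float(np.dot(w_aug, x_aug) > 0.0)

w = np.array([0.5, -0.3])
x = np.array([1.0, 2.0])
for T in (-1.0, 0.0, 0.25):
    assert neuron_with_threshold(w, x, T) == neuron_with_bias_input(w, x, T)
```

The two forms are algebraically identical: w·x > T is the same inequality as w·x + T·(-1) > 0.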

shpluk says:

no vampire involved here ))

Bacher Alsaffar says:

Amazing visualization! Though I don't know whether to find his *sigh*s funny or to worry about his health.

Longshot says:

I can't get auto coding working. https://youtu.be/VrMHA3yX_QI?t=19m21s I keep getting an RMS of 2+ on 8 inputs/outputs and 3 hidden units after training it 5000 times. I use 256 values to train it. Logic gates are trained correctly and quickly, with an RMS of 0.02 in around 100 training samples, so I do think my neural net works. I'm puzzled.

I also wonder why it still works after shutting down some of the neurons, leaving only 2 of them.
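The setup the comment describes is the classic 8-3-8 autoencoder exercise. One plausible explanation for the stuck RMS (a guess, not something the lecture states): training on all 256 8-bit patterns asks 3 hidden units to code 256 distinct inputs, which the bottleneck cannot do; the architecture is sized for the 8 one-hot patterns. A minimal NumPy sketch of the one-hot version (my own code, not the commenter's or the lecture's):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 8-3-8 autoencoder: learn to reproduce the 8 one-hot patterns
# through a 3-unit hidden bottleneck.
X = np.eye(8)
W1 = rng.normal(0, 0.5, (8, 3)); b1 = np.zeros(3)   # input -> hidden
W2 = rng.normal(0, 0.5, (3, 8)); b2 = np.zeros(8)   # hidden -> output
lr = 0.5

for step in range(20000):
    H = sigmoid(X @ W1 + b1)      # 3-unit code for each pattern
    Y = sigmoid(H @ W2 + b2)      # reconstruction
    dY = (Y - X) * Y * (1 - Y)    # backprop through output sigmoid
    dH = (dY @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(axis=0)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(axis=0)

Y = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
rms = np.sqrt(np.mean((Y - X) ** 2))
print(f"final RMS: {rms:.3f}")
```

A well-trained code also tends to be somewhat redundant, which may be why the commenter sees the net keep working with some hidden units shut down.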

Anton Panchishin says:

Nice update, Prof. Winston. It is a challenge to keep videos up to date with the changing times. Just 7 years ago you stated that people who used neural nets were overly fascinated with that toolset and that NNs weren't going anywhere. The future is certainly hard to predict.

Dmitri Nosovicki says:

Autocoding: But it doesn't have to look familiar to be a valid representation. What matters is that there exist an encoder and a decoder that can compress further input into that representation and decompress it back with acceptable loss. Fascinating! The compression rate can be extremely high, practically arbitrary. It is a language, not an entropy coding.

Amita Kapoor says:

Nice video and good information, but every time Prof. Winston breathes, I get concerned about his heart health…

Samarth Singal says:

Great lecture! Haven't seen a more concise explanation of NNs anywhere.

TripedalTroductions says:

6:15 mother of God….

Rodrigo Loza says:

Haha, they had to upgrade it! Is it just me, or does he not like CNNs very much?

Viktor Gorchev says:

Thank you for this brilliant lecture on modern AI technology

Gleiry Agustín Serrano Wong says:

I laughed so hard with the initial song haha.

Берзин Григорий says:

I was feeling bad for the curve at 31:45.

q zorn says:

Very nice AI information; this will help with the Raspberry Pi 3's limited deep learning capabilities. Thanks.

Tilex says:

For those looking for the first lecture (12a: Neural Nets):