Deep Learning State of the Art (2020) | MIT Deep Learning Series

A lecture on the most recent research and developments in deep learning, and hopes for 2020. It is not intended as a list of SOTA benchmark results, but rather as a set of highlights of machine learning and AI innovation and progress in academia, industry, and society in general. This lecture is part of the MIT Deep Learning Lecture Series.

Website: https://deeplearning.mit.edu
Slides: http://bit.ly/2QEfbAm
References: http://bit.ly/deeplearn-sota-2020
Playlist: http://bit.ly/deep-learning-playlist

OUTLINE:
0:00 – Introduction
0:33 – AI in the context of human history
5:47 – Deep learning celebrations, growth, and limitations
6:35 – Deep learning early key figures
9:29 – Limitations of deep learning
11:01 – Hopes for 2020: deep learning community and research
12:50 – Deep learning frameworks: TensorFlow and PyTorch
15:11 – Deep RL frameworks
16:13 – Hopes for 2020: deep learning and deep RL frameworks
17:53 – Natural language processing
19:42 – Megatron, XLNet, ALBERT
21:21 – Write with transformer examples
24:28 – GPT-2 release strategies report
26:25 – Multi-domain dialogue
27:13 – Commonsense reasoning
28:26 – Alexa prize and open-domain conversation
33:44 – Hopes for 2020: natural language processing
35:11 – Deep RL and self-play
35:30 – OpenAI Five and Dota 2
37:04 – DeepMind Quake III Arena
39:07 – DeepMind AlphaStar
41:09 – Pluribus: six-player no-limit Texas hold’em poker
43:13 – OpenAI Rubik’s Cube
44:49 – Hopes for 2020: deep RL and self-play
45:52 – Science of deep learning
46:01 – Lottery ticket hypothesis
47:29 – Disentangled representations
48:34 – Deep double descent
49:30 – Hopes for 2020: science of deep learning
50:56 – Autonomous vehicles and AI-assisted driving
51:50 – Waymo
52:42 – Tesla Autopilot
57:03 – Open question for Level 2 and Level 4 approaches
59:55 – Hopes for 2020: autonomous vehicles and AI-assisted driving
1:01:43 – Government, politics, policy
1:03:03 – Recommendation systems and policy
1:05:36 – Hopes for 2020: politics, policy, and recommendation systems
1:06:50 – Courses, tutorials, and books
1:10:05 – General hopes for 2020
1:11:19 – Recipe for progress in AI
1:13:11 – Q&A: Limitations / road-blocks of deep learning
1:14:15 – Q&A: What made you interested in AI?
1:15:21 – Q&A: Will machines ever be able to think and feel?
1:18:20 – Q&A: Is RL a good candidate for achieving AGI?
1:21:31 – Q&A: Are autonomous vehicles responsive to sound?
1:22:43 – Q&A: What does the future with AGI look like?
1:25:50 – Q&A: Will AGI systems become our masters?

CONNECT:
– If you enjoyed this video, please subscribe to this channel.
– Twitter: https://twitter.com/lexfridman
– LinkedIn: https://www.linkedin.com/in/lexfridman
– Facebook: https://www.facebook.com/lexfridman
– Instagram: https://www.instagram.com/lexfridman

Comments

Lex Fridman says:

This is the opening lecture on recent developments in deep learning and AI, and hopes for 2020. It's humbling beyond words to have the opportunity to lecture at MIT and to be part of the AI community. (See the full outline with timestamps above.)

Apurva Kokate says:

Huge fan. Thanks a lot; this helped me get back on track with ML research in 2020.

Christina Marie Hicks says:

Bum of YouTube..

DukeLukeProd says:

1:15 my heart stopped

xsor says:

How do I give a double like?

Canal do Marcio says:

The giraffe perturbation 😂😂😂😂😂

Grand_Wizard of_GoW says:

Real-Life Hitman is a really big inspiration to me.
I appreciate your work, Lex.

Franklin He says:

Thank you, brotha!!!! I got a psych/neuro degree but I'm too poor to go on, hahaha. But fuck it, my journey in coding AI and building robots starts today, lol. Like you said, maybe multidisciplinary research will help the advancement of AI.

MeEstYou YouEstMe says:

Thank you, Mr. Luther!

Allen Altiner says:

Thank you so much for sharing this!

グールにも愛が必要 says:

"How to ruin all of your intellectual self esteem in an hour and a half by Lex Fridman"

Long Nguyen-Vu says:

Well done. While I'm struggling to read and understand a single paper, Lex delivers ~30 at once.

Bruce Liu says:

It might be worth forcing an AI to adapt to another AI that has a greater response time, because in the human realm the person with the slower reaction speed generally develops better tactics to adapt. Then invert the speed weights and repeat. It would be a worthy experiment.
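
A minimal sketch of how that alternating-handicap self-play loop could look, purely as an illustration of the comment above: the Agent class, the toy matching game, the handicap of 4 ticks, and the round count are all hypothetical stand-ins, not any published method.

import random

class Agent:
    def act(self, tick):
        return random.choice([0, 1])  # placeholder policy (illustrative only)

    def learn(self, reward):
        pass  # placeholder policy update

def play_episode(fast, slow, handicap=4, ticks=100):
    # The fast agent acts every tick; the handicapped agent only every
    # `handicap` ticks, so it must commit to actions further ahead.
    score, slow_action = 0, 0
    for t in range(ticks):
        fast_action = fast.act(t)
        if t % handicap == 0:
            slow_action = slow.act(t)
        score += 1 if slow_action == fast_action else -1  # toy matching game
    return score

a, b = Agent(), Agent()
for round_ in range(10):
    # "Invert the speed weights": alternate which agent is handicapped.
    fast, slow = (a, b) if round_ % 2 == 0 else (b, a)
    slow.learn(play_episode(fast, slow))  # the slower agent adapts, then swap

The open question such an experiment would probe is whether the tactics the handicapped agent learns carry over once the handicap is lifted, which is what the inversion step tests.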

Mike Rutecky says:

Thank you, Lex, for the lecture from MIT. I have a love of science and actually scored in the top 25% with literally no education, so in life I just get by with trash jobs. But I know for a fact I should be doing more, and I know I can be doing more, but I was never given the chance (I need to work for a living). Long backstory short.
