THE FUTURE IS HERE

AAAI-20 / AAAI 2020 Keynotes: Turing Award Winners Event / Geoffrey Hinton, Yann LeCun, Yoshua Bengio

**Highlighted Topics**
02:52 [Talk: Stacked Capsule Autoencoders by Geoffrey Hinton]
36:04 [Talk: Self-Supervised Learning by Yann LeCun]
1:09:37 [Talk: Deep Learning for System 2 Processing by Yoshua Bengio]
1:41:06 [Panel Discussion]

Auto-chaptering powered by VideoKen (https://videoken.com/)
For the indexed video, see https://conftube.com/video/vimeo-390347111

**All Topics**
03:09 Two approaches to object recognition
03:53 Problems with CNNs: Dealing with viewpoint changes
04:42 Equivariance vs Invariance
05:25 Problems with CNNs
10:04 Computer vision as inverse computer graphics
11:55 Capsules 2019: Stacked Capsule Auto-Encoders
13:21 What is a capsule?
14:58 Capturing intrinsic geometry
15:37 The generative model of a capsule auto-encoder
20:28 The inference problem: Inferring wholes from parts
21:44 A multi-level capsule auto-encoder
22:30 How the set transformer is trained
23:14 Standard convolutional neural network for refining word representations based on their context
23:41 How transformers work
24:43 Some difficult examples of MNIST digits
25:20 Modelling the parts of MNIST digits
27:03 How some of the individual part capsules contribute to the reconstructions
28:37 Unsupervised clustering of MNIST digits using stacked capsule autoencoders
31:25 The outer loop of vision
31:36 Dealing with real 3-D images
32:51 Conclusion
36:04 *[Talk: Self-Supervised Learning by Yann LeCun]*
36:25 What is Deep Learning?
38:37 Supervised Learning works but requires many labeled samples
39:25 Supervised DL works amazingly well, when you have data
40:05 Supervised Symbol Manipulation
41:50 Deep Learning Saves Lives
43:40 Reinforcement Learning: works great for games and simulations
45:12 Three challenges for Deep Learning
47:39 How do humans and animals learn so quickly?
47:43 Babies learn how the world works by observation
48:43 Early Conceptual Acquisition in Infants [from Emmanuel Dupoux]
49:33 Prediction is the essence of Intelligence
50:28 Self-Supervised Learning = Filling in the Blanks
50:53 Natural Language Processing: works great!
51:55 Self-Supervised Learning for Video Prediction
52:09 The world is stochastic
52:43 Solution: latent variable energy-based models
53:55 Self-Supervised Adversarial Learning for Video Prediction
54:12 Three Types of Learning
55:30 How Much Information is the Machine Given during Learning?
55:54 The Next AI Revolution
56:23 Energy-Based Models
56:32 Seven Strategies to Shape the Energy Function
57:02 Denoising AE: discrete
58:44 Contrastive Embedding
1:00:39 MoCo on ImageNet
1:00:52 Latent-Variable EBM for inference & multimodal prediction
1:02:07 Learning a (stochastic) Forward Model for Autonomous Driving
1:02:26 A Forward Model of the World
1:04:42 Overhead camera on a highway; vehicles are tracked
1:05:00 Video Prediction: inference
1:05:15 Video Prediction: training
1:05:30 Actual, Deterministic, VAE+Dropout Predictor/encoder
1:05:57 Adding an Uncertainty Cost (doesn’t work without it)
1:06:01 Driving an Invisible Car in “Real” Traffic
1:06:51 Conclusions
1:09:37 *[Talk: Deep Learning for System 2 Processing by Yoshua Bengio]*
1:10:10 No-Free-Lunch Theorem, Inductive Biases, Human-Level AI
1:15:03 What's Missing to Extend Deep Learning to Reach Human-Level AI
1:16:48 Hypotheses for Conscious Processing by Agents, Systematic Generalization
1:22:02 Dealing with Changes in Distribution
1:25:13 Contrast with the Symbolic AI Program
1:28:07 System 2 Basics: Attention and Conscious Processing
1:28:19 Core Ingredient for Conscious Processing: Attention
1:29:16 From Attention to Indirection
1:30:35 From Attention to Consciousness
1:31:59 Why a Consciousness Bottleneck?
1:33:07 Meta-Learning: End-to-End OOD Generalization, Sparse Change Prior
1:33:21 What Causes Changes in Distribution?
1:34:56 Meta-Learning Knowledge Representation for Good OOD Performance
1:35:14 Example: Discovering Cause and Effect
1:36:49 Operating on Sets of Pointable Objects with Dynamically Recombined Modules
1:37:36 RIMs: Modularize Computation and Operate on Sets of Named and Typed Objects
1:39:42 Results with Recurrent Independent Mechanisms
1:40:17 Hypotheses for Conscious Processing by Agents, Systematic Generalization
1:40:46 Conclusions
1:41:06 *[Panel Discussion]*
1:41:59 Connection between neural networks as a computer science concept and as a machine learning concept – natural competition
1:45:35 Idea of Differentiation: Representation and Listening
1:49:36 Alternatives to Gradient-Based Learning
1:51:04 What is the role of universities when Facebook and Google can run these enormous projects?
1:53:50 What do you think students should read?
1:54:50 Mechanisms for Human-Level AI
1:57:59 Where do new ideas come from? How do you decide which ones will work out?
1:59:54 How should I proceed when reviewers don't like my research?
2:01:53 The effect of publications on the field
2:05:36 Can we still write code while AI does science?
2:06:52 What general intelligence is not, how to measure it, and neural architecture
2:08:44 Disagreements