**Highlighted Topics**
02:52 [Talk: Stacked Capsule Autoencoders by Geoffrey Hinton]
36:04 [Talk: Self-Supervised Learning by Yann LeCun]
1:09:37 [Talk: Deep Learning for System 2 Processing by Yoshua Bengio]
1:41:06 [Panel Discussion]

Auto-chaptering powered by VideoKen (https://videoken.com/)
For indexed video: https://conftube.com/video/vimeo-390347111

**All Topics**
03:09 Two approaches to object recognition
03:53 Problems with CNNs: Dealing with viewpoint changes
04:42 Equivariance vs Invariance
05:25 Problems with CNNs
10:04 Computer vision as inverse computer graphics
11:55 Capsules 2019: Stacked Capsule Auto-Encoders
13:21 What is a capsule?
14:58 Capturing intrinsic geometry
15:37 The generative model of a capsule auto-encoder
20:28 The inference problem: Inferring wholes from parts
21:44 A multi-level capsule auto-encoder
22:30 How the set transformer is trained
23:14 Standard convolutional neural network for refining word representations based on their context
23:41 How transformers work
24:43 Some difficult examples of MNIST digits
25:20 Modelling the parts of MNIST digits
27:03 How some of the individual part capsules contribute to the reconstructions
28:37 Unsupervised clustering of MNIST digits using stacked capsule autoencoders
31:25 The outer loop of vision
31:36 Dealing with real 3-D images
32:51 Conclusion
36:04 *[Talk: Self-Supervised Learning by Yann LeCun]*
36:25 What is Deep Learning?
38:37 Supervised Learning works but requires many labeled samples
39:25 Supervised DL works amazingly well, when you have data
40:05 Supervised Symbol Manipulation
41:50 Deep Learning Saves Lives
43:40 Reinforcement Learning: works great for games and simulations
45:12 Three challenges for Deep Learning
47:39 How do humans and animals learn so quickly?
47:43 Babies learn how the …