Building Intelligent Systems: A Guide to Machine Learning Engineering https://amzn.to/2Vk0fuB

In this Conversation in Artificial Intelligence, Geoff Hulten discusses how to think systemically about building artificial intelligence products.

Subscribe to Conversations in Artificial Intelligence: https://youtu.be/To9ExvqY4t0
https://medium.com/how-intelligent-are-intelligent-machines
https://www.facebook.com/AIconversations/

More about Geoff Hulten: https://www.linkedin.com/in/geoff-hulten-58136a1/
More about the interviewer, Paolo Messina: https://www.linkedin.com/in/paolo-messina/

Become an AI manager with our courses:
https://innodemia.com
https://go.innodemia.com/download-course-brochure
https://go.innodemia.com/the-perfect-artificial-intelligence-guide-for-managers-and-innovators
https://go.innodemia.com/webinar-registration

Check out our document intelligence service at http://pdfextractoronline.com
Download my free AI productivity app: https://snapchart.co
**Highlighted Topics**
02:52 [Talk: Stacked Capsule Autoencoders by Geoffrey Hinton]
36:04 [Talk: Self-Supervised Learning by Yann LeCun]
1:09:37 [Talk: Deep Learning for System 2 Processing by Yoshua Bengio]
1:41:06 [Panel Discussion]

Auto-chaptering powered by VideoKen (https://videoken.com/). For the indexed video, see https://conftube.com/video/vimeo-390347111

**All Topics**
03:09 Two approaches to object recognition
03:53 Problems with CNNs: Dealing with viewpoint changes
04:42 Equivariance vs Invariance
05:25 Problems with CNNs
10:04 Computer vision as inverse computer graphics
11:55 Capsules 2019: Stacked Capsule Auto-Encoders
13:21 What is a capsule?
14:58 Capturing intrinsic geometry
15:37 The generative model of a capsule auto-encoder
20:28 The inference problem: Inferring wholes from parts
21:44 A multi-level capsule auto-encoder
22:30 How the set transformer is trained
23:14 Standard convolutional neural network for refining word representations based on their context
23:41 How transformers work
24:43 Some difficult examples of MNIST digits
25:20 Modelling the parts of MNIST digits
27:03 How some of the individual part capsules contribute to the reconstructions
28:37 Unsupervised clustering of MNIST digits using stacked capsule autoencoders
31:25 The outer loop of vision
31:36 Dealing with real 3-D images
32:51 Conclusion
36:04 *[Talk: Self-Supervised Learning by Yann LeCun]*
36:25 What is Deep Learning?
38:37 Supervised Learning works but requires many labeled samples
39:25 Supervised DL works amazingly well, when you have data
40:05 Supervised Symbol Manipulation
41:50 Deep Learning Saves Lives
43:40 Reinforcement Learning: works great for games and simulations
45:12 Three challenges for Deep Learning
47:39 How do humans and animals learn so quickly?
47:43 Babies learn how the
Catch up on the live feed from our unedited webcast!

A new field of collective intelligence has emerged in recent years, supported by a wave of new digital technologies that make it possible for organizations and societies to think at large scale. But why do smart technologies not always automatically lead to smart results? Geoff Mulgan, Chief Executive of Nesta, shows how this intelligence has to be carefully organized and orchestrated in order to fully harness and direct its powers.

SUBSCRIBE to our channel!
Follow the RSA on Twitter: https://twitter.com/RSAEvents
Like RSA Events on Facebook: https://www.facebook.com/RSAEventsoff…
Listen to RSA podcasts: https://soundcloud.com/the_rsa
See RSA Events behind the scenes: https://instagram.com/rsa_events/