In this episode of Machine Learning Street Talk, Tim Scarfe, Connor Shorten and Yannic Kilcher react to Yoshua Bengio's ICLR 2020 keynote "Deep Learning Priors Associated with Conscious Processing". Bengio surveys many future directions for deep learning research, such as the role of attention in consciousness (a toy sketch of this attention-bottleneck idea follows below), sparse factor graphs and causality, and the study of systematic generalization. Bengio also presents big ideas in intelligence that sit on the border between philosophy and practical machine learning, including consciousness in machines and the System 1 / System 2 distinction described in Daniel Kahneman's book "Thinking, Fast and Slow". Like Yann LeCun's half of the 2020 ICLR keynote coverage, this talk takes on many challenging ideas, and hopefully this video helps you get a better understanding of some of them! Thanks for watching! Please subscribe for more videos!

Paper links:
Link to talk: https://iclr.cc/virtual_2020/speaker_7.html
The Consciousness Prior: https://arxiv.org/abs/1709.08568
Thinking, Fast and Slow: https://www.amazon.com/Thinking-Fast-Slow-Daniel-Kahneman/dp/0374533555
Systematic Generalization: https://arxiv.org/abs/1811.12889
CLOSURE: Assessing Systematic Generalization of CLEVR Models: https://arxiv.org/abs/1912.05783
Neural Module Networks: https://arxiv.org/abs/1511.02799
Experience Grounds Language: https://arxiv.org/pdf/2004.10151.pdf
Benchmarking Graph Neural Networks: https://arxiv.org/pdf/2003.00982.pdf
On the Measure of Intelligence: https://arxiv.org/abs/1911.01547

Please check out our individual channels as well!
Machine Learning Dojo with Tim Scarfe: https://www.youtube.com/channel/UCXvHuBMbgJw67i5vrMBBobA
Yannic Kilcher: https://www.youtube.com/channel/UCZHmQk67mSJgfCCTn7xBfe
Henry AI Labs: https://www.youtube.com/channel/UCHB9VepY6kYvZjj0Bgxnpbw

00:00:00 Tim and Yannic's takes
00:01:37 Intro to Bengio
00:03:13 System 2, language and Chomsky
00:05:58 Christof Koch on consciousness
00:07:25 Francois Chollet on intelligence and consciousness
00:09:29 Meditation and Sam Harris on consciousness
00:11:35 Connor intro
00:13:20 Show main intro
00:17:55 …
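As a toy illustration of the attention-bottleneck intuition behind "The Consciousness Prior" (our own sketch, not code from the paper or the talk): soft attention scores every element of a high-dimensional "unconscious" state, and a hard top-k selection keeps only a handful of elements as the low-dimensional "conscious" state. The dimensions and the scoring network here are illustrative assumptions.

```python
# Toy sketch (illustration only, not the paper's code) of the attention
# bottleneck behind "The Consciousness Prior": soft attention scores every
# element of a high-dimensional "unconscious" state, and a hard top-k
# selection keeps a handful of elements as the "conscious" state.
import torch
import torch.nn as nn

class ConsciousBottleneck(nn.Module):
    def __init__(self, state_dim=512, n_selected=4):
        super().__init__()
        self.n_selected = n_selected
        self.score = nn.Linear(state_dim, state_dim)  # per-element relevance

    def forward(self, h):
        # h: (batch, state_dim) full "unconscious" representation.
        weights = torch.softmax(self.score(h), dim=-1)
        # Keep only the few most-attended elements.
        idx = weights.topk(self.n_selected, dim=-1).indices
        conscious = torch.gather(h, -1, idx)
        return conscious, idx  # (batch, n_selected) and which dims were kept

h = torch.randn(2, 512)
c, idx = ConsciousBottleneck()(h)
print(c.shape)  # torch.Size([2, 4])
```

The bottleneck is the point: forcing predictions to flow through a few attended variables is what the prior says conscious processing looks like.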
Yoshua Bengio and Gary Marcus on the best way forward for AI, moderated by Vincent Boucher.

ORIGINAL LIVE STREAM | Monday, 23 December 2019, 6:30 PM to 8:30 PM (EST) at Mila: https://www.facebook.com/MontrealAI/videos/498403850881660/
Transcript of the AI Debate: https://medium.com/@Montreal.AI/transcript-of-the-ai-debate-1e098eeb8465
Gary Marcus, slides: https://montrealartificialintelligence.com/aidebate/slidesmarcus.pdf
Yoshua Bengio, slides: https://montrealartificialintelligence.com/aidebate/slidesbengio.pdf
AI Debate official web page: https://montrealartificialintelligence.com/aidebate/
To take part in the conversation on social media: #AIDebate

Agenda:
6:30:00 PM EST: Opening address | Vincent Boucher (3 min.)
6:33:00 PM EST: Opening statement | Gary Marcus (20 min.)
6:53:00 PM EST: Opening statement | Yoshua Bengio (20 min.)
7:13:00 PM EST: Response | Gary Marcus (7.5 min.)
7:20:30 PM EST: Response | Yoshua Bengio (7.5 min.)
7:28:00 PM EST: Interview | Vincent Boucher with Yoshua Bengio & Gary Marcus (15 min.)
7:43:00 PM EST: Public questions | Yoshua Bengio & Gary Marcus (22.5 min.)
8:05:30 PM EST: International audience questions | Yoshua Bengio & Gary Marcus (22.5 min.)
8:28:00 PM EST: Closing remarks | Vincent Boucher (2 min.)

By MONTREAL.AI | Kévin Ka, Patrick Taillon & Mila
The General Secretariat of MONTREAL.AI
Email: secretariat@montreal.ai
**Highlighted Topics**
02:52 [Talk: Stacked Capsule Autoencoders by Geoffrey Hinton]
36:04 [Talk: Self-Supervised Learning by Yann LeCun]
1:09:37 [Talk: Deep Learning for System 2 Processing by Yoshua Bengio]
1:41:06 [Panel Discussion]

Auto-chaptering powered by VideoKen (https://videoken.com/). For the indexed video, see https://conftube.com/video/vimeo-390347111

**All Topics**
03:09 Two approaches to object recognition
03:53 Problems with CNNs: Dealing with viewpoint changes
04:42 Equivariance vs Invariance
05:25 Problems with CNNs
10:04 Computer vision as inverse computer graphics
11:55 Capsules 2019: Stacked Capsule Auto-Encoders
13:21 What is a capsule?
14:58 Capturing intrinsic geometry
15:37 The generative model of a capsule auto-encoder
20:28 The inference problem: Inferring wholes from parts
21:44 A multi-level capsule auto-encoder
22:30 How the set transformer is trained
23:14 Standard convolutional neural network for refining word representations based on their context
23:41 How transformers work
24:43 Some difficult examples of MNIST digits
25:20 Modelling the parts of MNIST digits
27:03 How some of the individual part capsules contribute to the reconstructions
28:37 Unsupervised clustering of MNIST digits using stacked capsule autoencoders
31:25 The outer loop of vision
31:36 Dealing with real 3-D images
32:51 Conclusion
36:04 *[Talk: Self-Supervised Learning by Yann LeCun]*
36:25 What is Deep Learning?
38:37 Supervised Learning works but requires many labeled samples
39:25 Supervised DL works amazingly well, when you have data
40:05 Supervised Symbol Manipulation
41:50 Deep Learning Saves Lives
43:40 Reinforcement Learning: works great for games and simulations
45:12 Three challenges for Deep Learning
47:39 How do humans and animals learn so quickly?
47:43 Babies learn how the …
Yoshua Bengio is one of the most prominent names in deep learning. What does he think about the future of artificial intelligence?

This video was produced by The Tesseract Academy (http://tesseract.academy). We help decision makers understand data science, AI and blockchain.

📈 FOLLOW US ON INSTAGRAM: https://www.instagram.com/thetesseractacademy/
🤖 FOLLOW US ON TWITTER: https://twitter.com/tesseractacade1
Yoshua Bengio, considered one of the 'Godfathers of Artificial Intelligence', discusses recurrent independent mechanisms, sample complexity, end-to-end adaptation, multivariate categorical MLP conditionals and more. Summarising his talk, Professor Bengio gave three key points to keep in mind when 'looking forward':
– We must build a world model that meta-learns causal effects in an abstract space of causal variables; this requires the ability to adapt quickly to change and to generalize out-of-distribution by sparsely recombining modules (see the sketch after this list).
– We must acquire knowledge in ways that encourage exploratory behaviour.
– We must bridge the gap between the aforementioned System 1 and System 2 ways of thinking, taking into account both existing neural networks and conscious reasoning.
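To make the "sparsely recombining modules" point concrete, here is a minimal, hypothetical sketch in the spirit of Recurrent Independent Mechanisms: a set of small recurrent modules compete through attention, and only the top-k modules most relevant to the current input update their state at each step, while the rest carry their state forward unchanged. The module count, dimensions and scoring scheme are illustrative assumptions, not code from the talk.

```python
# Minimal sketch of "sparsely recombining modules" in the spirit of
# Recurrent Independent Mechanisms (RIMs): small recurrent modules compete
# via attention, and only the top-k input-relevant modules update per step.
# All sizes and the scoring scheme are illustrative assumptions.
import torch
import torch.nn as nn

class SparseModules(nn.Module):
    def __init__(self, n_modules=6, k_active=2, input_dim=16, hidden_dim=32):
        super().__init__()
        self.k_active = k_active
        # One small GRU cell per independent module.
        self.cells = nn.ModuleList(
            nn.GRUCell(input_dim, hidden_dim) for _ in range(n_modules))
        # Maps each module's state to a query over the input space.
        self.query = nn.Linear(hidden_dim, input_dim, bias=False)

    def forward(self, x_seq):
        # x_seq: (seq_len, batch, input_dim)
        batch = x_seq.size(1)
        h = x_seq.new_zeros(len(self.cells), batch, self.cells[0].hidden_size)
        for x in x_seq:
            # Relevance score of each module for the current input.
            scores = torch.einsum('mbd,bd->mb', self.query(h), x)
            # Binary mask selecting the top-k modules per example.
            topk = scores.topk(self.k_active, dim=0).indices
            mask = torch.zeros_like(scores).scatter_(0, topk, 1.0).unsqueeze(-1)
            # Active modules update; inactive modules keep their state.
            new_h = torch.stack(
                [cell(x, h[m]) for m, cell in enumerate(self.cells)])
            h = mask * new_h + (1.0 - mask) * h
        return h  # per-module final states: (n_modules, batch, hidden_dim)

# Usage: a sequence of 10 steps, batch of 4, 16-dim observations.
states = SparseModules()(torch.randn(10, 4, 16))
print(states.shape)  # torch.Size([6, 4, 32])
```

The intent of the design is that each module specializes in a distinct mechanism, so a change in the environment perturbs only the few modules attending to it, which is what supports quick adaptation and out-of-distribution recombination.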
In this causalcourse.com guest talk, Yoshua Bengio discusses causal representation learning. Playlist link: https://www.youtube.com/watch?v=rKZJ0TJWvTk&list=PLoazKTcS0Rzb6bb9L508cyJ1z-U9iWkA0&index=80
Yoshua Bengio, director of the Montreal Institute for Learning Algorithms at the Université de Montréal, outlines some of the progress made in AI. Martin Ferguson-Pell, a professor at the University of Alberta, discusses the use of technology to address problems in rehabilitation medicine and chronic disease management. The committee holds another hearing on the role of robotics, 3D printing and artificial intelligence in the healthcare system. May 3, 2017
Yoshua Bengio is a Canadian computer scientist, most noted for his work on artificial neural networks and deep learning. Bengio received his Bachelor of Science, Master of Engineering and PhD from McGill University. Recorded: September 8, 2017
The talks at the Deep Learning School on September 24-25, 2016 were amazing. I clipped out individual talks from the full live streams and provided links to each below in case that's useful for people who want to watch specific talks several times (like I do). Please check out the official website (http://www.bayareadlschool.org) and full live streams below. Having read, watched, and presented deep learning material over the past few years, I have to say that this is one of the best collections of introductory deep learning talks I've yet encountered.

Here are links to the individual talks and the full live streams for the two days:
1. Foundations of Deep Learning (Hugo Larochelle, Twitter) – https://youtu.be/zij_FTbJHsk
2. Deep Learning for Computer Vision (Andrej Karpathy, OpenAI) – https://youtu.be/u6aEYuemt0M
3. Deep Learning for Natural Language Processing (Richard Socher, Salesforce) – https://youtu.be/oGk1v1jQITw
4. TensorFlow Tutorial (Sherry Moore, Google Brain) – https://youtu.be/Ejec3ID_h0w
5. Foundations of Unsupervised Deep Learning (Ruslan Salakhutdinov, CMU) – https://youtu.be/rK6bchqeaN8
6. Nuts and Bolts of Applying Deep Learning (Andrew Ng) – https://youtu.be/F1ka6a13S9I
7. Deep Reinforcement Learning (John Schulman, OpenAI) – https://youtu.be/PtAIh9KSnjo
8. Theano Tutorial (Pascal Lamblin, MILA) – https://youtu.be/OU8I1oJ9HhI
9. Deep Learning for Speech Recognition (Adam Coates, Baidu) – https://youtu.be/g-sndkf7mCs
10. Torch Tutorial (Alex Wiltschko, Twitter) – https://youtu.be/L1sHcj3qDNc
11. Sequence to Sequence Deep Learning (Quoc Le, Google) – https://youtu.be/G5RY_SUJih4
12. Foundations and Challenges of Deep Learning (Yoshua Bengio) – https://youtu.be/11rsu_WwZTc

Full-day live streams:
Day 1: https://youtu.be/eyovmAtoUx0
Day 2: https://youtu.be/9dXiAecyJrY

Go to http://www.bayareadlschool.org for more information on the event and speakers.
This is a combined slide/speaker video of Yoshua Bengio's talk at NeurIPS 2019. A slide-synced non-YouTube version is here: https://slideslive.com/neurips/neurips-2019-west-exhibition-hall-c-b3-live

This is a clip on the Lex Clips channel, which I mostly use to post video clips from the Artificial Intelligence podcast, but occasionally I post clips from other lectures by me or others. I hope you find these interesting, thought-provoking, and inspiring. If you do, please subscribe, click the bell icon, and share.

Lex Clips channel: https://www.youtube.com/lexclips
Lex Fridman channel: https://www.youtube.com/lexfridman
Artificial Intelligence podcast website: https://lexfridman.com/ai
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/category/ai/feed/

Connect with Lex on social media:
– Twitter: https://twitter.com/lexfridman
– LinkedIn: https://www.linkedin.com/in/lexfridman
– Facebook: https://www.facebook.com/lexfridman
– Instagram: https://www.instagram.com/lexfridman
– Medium: https://medium.com/@lexfridman
When Microsoft acquired deep learning startup Maluuba in January, Maluuba’s highly respected advisor, the deep learning pioneer Yoshua Bengio, agreed to continue advising Microsoft on its artificial intelligence efforts. Bengio, head of the Montreal Institute for Learning Algorithms, recently visited Microsoft’s Redmond, Washington, campus, and took some time for a chat. Read the full conversation with Harry Shum and Yoshua Bengio: http://www.aka.ms/ja00tl
Presented at Cognitive Computational Neuroscience (CCN) 2017 (http://www.ccneuro.org) held September 6-8, 2017.
Moving beyond supervised deep learning
Watch Yoshua Bengio, Professor of Computer Science and Operations Research at Université de Montréal, on stage at World Summit AI Americas 2019. americas.worldsummit.ai
World-renowned pioneer Yoshua Bengio (MILA) discusses the challenges ahead for deep learning on the way toward artificial intelligence. #FranceisAI
Web: https://franceisai.com/
Twitter: https://twitter.com/franceisai
LinkedIn: https://linkedin.com/company/franceisai/
Video credit: VLAM
Laureates at the 7th HLF sat down with Tom Geller, Tom Geller Productions, to discuss their careers, mentoring and their experience at the Heidelberg Laureate Forum (HLF). These renowned scientists have been honored with the most prestigious awards in mathematics and computer science: the Abel Prize, the ACM A.M. Turing Award, the ACM Prize in Computing, the Fields Medal and the Nevanlinna Prize. The opinions expressed in this video do not necessarily reflect the views of the Heidelberg Laureate Forum Foundation or any other person or associated institution involved in the making and distribution of the video.

Background: The Heidelberg Laureate Forum Foundation (HLFF) annually organizes the Heidelberg Laureate Forum (HLF), a networking event for mathematicians and computer scientists from all over the world. The HLFF was established and is funded by the German foundation the Klaus Tschira Stiftung (KTS), which promotes natural sciences, mathematics and computer science. The HLF is strongly supported by the award-granting institutions: the Association for Computing Machinery (ACM: ACM A.M. Turing Award, ACM Prize in Computing), the International Mathematical Union (IMU: Fields Medal, Nevanlinna Prize), and the Norwegian Academy of Science and Letters (DNVA: Abel Prize). The Scientific Partners of the HLFF are the Heidelberg Institute for Theoretical Studies (HITS) and Heidelberg University.

More information about the Heidelberg Laureate Forum:
Website: http://www.heidelberg-laureate-forum….
Facebook: https://www.facebook.com/HeidelbergLa…
Twitter: https://twitter.com/hlforum
Flickr: https://www.flickr.com/hlforum
More videos from the HLF: https://www.youtube.com/user/Laureate…
Blog: https://scilogs.spektrum.de/hlf/
Challenges for deep learning towards AI
Yoshua Bengio, Professor of Computer Science and Operations Research at the University of Montreal.
Interview between Susan Dumais and Yoshua Bengio. See more at https://www.microsoft.com/en-us/research/videos/ai-distinguished-lecture-series/
Inaugural AI Research Week, hosted by the MIT-IBM Watson AI Lab. Yoshua Bengio, full professor and head of the Montreal Institute for Learning Algorithms (MILA), University of Montreal, presents research on learning to understand language.

Keynote speaker: Yoshua Bengio, Head of the Montreal Institute for Learning Algorithms (MILA)
Introduction by Lisa Amini, Lab Director, IBM Research Cambridge
This interview took place at the RE•WORK Deep Learning Summit in Boston, on 12-13 May 2016. Yoshua Bengio (PhD in CS, McGill University, 1991), post-docs at M.I.T. (Michael Jordan) and AT&T Bell Labs (Yann LeCun), CS professor at Université de Montréal, Canada Research Chair in Statistical Learning Algorithms, NSERC Chair, CIFAR Fellow, member of NIPS foundation board and former program/general chair, co-created ICLR conference, authored two books and over 300 publications, the most cited being in the areas of deep learning, recurrent networks, probabilistic learning, natural language and manifold learning. He is among the most cited Canadian computer scientists and is or has been associate editor of the top journals in machine learning and neural networks.
Yoshua Bengio (MILA) discusses the obstacles we are likely to face on the path to beneficial artificial general intelligence. The Beneficial AGI 2019 Conference: https://futureoflife.org/beneficial-agi-2019/ After our Puerto Rico AI conference in 2015 and our Asilomar Beneficial AI conference in 2017, we returned to Puerto Rico at the start of 2019 to talk about Beneficial AGI. We couldn’t be more excited to see all of the groups, organizations, conferences and workshops that have cropped up in the last few years to ensure that AI today and in the near future will be safe and beneficial. And so we now wanted to look further ahead to artificial general intelligence (AGI), the classic goal of AI research, which promises tremendous transformation in society. Beyond mitigating risks, we want to explore how we can design AGI to help us create the best future for humanity. We again brought together an amazing group of AI researchers from academia and industry, as well as thought leaders in economics, law, policy, ethics, and philosophy for five days dedicated to beneficial AI. We hosted a two-day technical workshop to look more deeply at how we can create beneficial AGI, and we followed that with a 2.5-day conference, in which people from a broader AI background considered the opportunities and challenges related to the future of AGI and steps we can take today to move toward an even better future.