Yoshua Bengio: “Deep Learning for AI”

This lecture looks back at some of the principles behind the recent successes of deep learning, acknowledges current limitations, and proposes research directions that build on this progress toward human-level AI.
Notions of distributed representations, the curse of dimensionality, and compositionality with neural networks will be discussed, along with fairly recent advances that have changed neural networks from pattern-recognition devices into systems that can process any data structure thanks to attention mechanisms, and that can imagine novel but plausible configurations of random variables through deep generative networks.

At the same time, analyzing the mistakes made by these systems suggests that the dream of learning a hierarchy of representations which disentangle the underlying high-level concepts (of the kind we communicate with language) is far from achieved. This suggests new research directions for deep learning, in particular from the agent perspective: grounded language learning, discovering causal variables and causal structure, and the ability to explore in an unsupervised way to understand the world and quickly adapt to changes in it.

The opinions expressed in this video do not necessarily reflect the views of the Heidelberg Laureate Forum Foundation or any other person or associated institution involved in the making and distribution of the video.

More information about the Heidelberg Laureate Forum:

More videos from the HLF:
