With increasing regularity we see stories in the news about machine learning algorithms causing real-world harm. People’s lives and livelihoods are affected by decisions made by machines. Learn how bias can take root in machine learning algorithms and ways to overcome it. From the power of open source to tools built to detect and remove bias in machine learning models, there is a vibrant ecosystem of contributors working to build a digital future that is inclusive and fair. Now you can become part of the solution.
Pattie Maes is a professor in MIT’s Program in Media Arts and Sciences and until recently served as its academic head. She runs the Media Lab’s Fluid Interfaces research group, which aims to radically reinvent the human-machine experience. Pattie is a distinguished academic who has published more than 400 papers in premier human-computer interaction (HCI) venues and has over 40,000 citations.
One of the main challenges for AI remains unsupervised learning, at which humans are much better than machines, and which we link to another challenge: bringing deep learning to higher-level cognition. We review earlier work on the notion of learning disentangled representations and deep generative models and propose research directions towards learning high-level abstractions. This follows the ambitious objective of disentangling the underlying causal factors explaining the observed data. We argue that in order to efficiently capture these, a learning agent can acquire information by acting in the world, moving our research from traditional deep generative models of given datasets to that of autonomous learning or unsupervised reinforcement learning. We propose two priors which could be used by an agent acting in its environment in order to help discover such high-level disentangled representations of abstract concepts. The first is based on the discovery of independently controllable factors, i.e., jointly learning policies and representations such that each of these policies can independently control one aspect of the world (a factor of interest) computed by the representation while keeping the other uncontrolled aspects mostly untouched. This idea naturally brings to the fore the notions of objects (which are controllable), agents (which control objects) and self. The second prior is called the consciousness prior and is based on the hypothesis that our conscious thoughts are low-dimensional objects with a strong predictive or explanatory power (or are very useful for planning). A conscious thought thus selects a few abstract factors (using the attention …)
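The "independently controllable factors" idea in the abstract can be illustrated with a toy sketch: each policy should account for the change in exactly one coordinate of the representation while leaving the others untouched. The grid-world state, the identity representation, and the `selectivity` score below are illustrative assumptions, not the paper's actual training objective or implementation.

```python
# Toy sketch (assumed setup, not the authors' implementation): measure how
# "selective" a policy is, i.e. what fraction of the total representation
# change it causes is concentrated in the factor it is meant to control.

def selectivity(delta, k):
    """Fraction of the total representation change attributable to factor k."""
    total = sum(abs(d) for d in delta)
    return abs(delta[k]) / total if total else 0.0

# State = (x, y) in a grid world; for simplicity the representation f(s) = s,
# so the two coordinates are the two candidate factors.
state = (2, 3)

# Hand-crafted "policies": each one changes exactly one factor of the state.
policies = {
    0: lambda s: (s[0] + 1, s[1]),  # intended to control factor 0 (x)
    1: lambda s: (s[0], s[1] + 1),  # intended to control factor 1 (y)
}

for k, policy in policies.items():
    nxt = policy(state)
    delta = [nxt[i] - state[i] for i in range(2)]
    print(k, selectivity(delta, k))  # each policy is fully selective -> 1.0
```

In the actual research direction described above, both the policies and the representation are learned jointly (e.g. by gradient ascent on such a selectivity objective) rather than hand-crafted as here.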
Yoshua Bengio, considered one of the ‘Godfathers of Artificial Intelligence’, discusses recurrent independent mechanisms, sample complexity, end-to-end adaptation, multivariate categorical MLP conditionals and more. Summarising his talk, Professor Bengio gave three key points to keep in mind when ‘looking forward’: – We must build a world model which meta-learns causal effects in an abstract space of causal variables; this requires the ability to quickly adapt to change and generalize out-of-distribution by sparsely recombining modules – The necessity to acquire knowledge and encourage exploratory behaviour – The need to bridge the gap between the aforementioned system 1 and system 2 ways of thinking, with neural networks and conscious reasoning taken into account
Noam Chomsky is one of the greatest minds of our time and one of the most cited scholars in history. He is a linguist, philosopher, cognitive scientist, historian, social critic, and political activist. He has spent over 60 years at MIT and recently also joined the University of Arizona. This conversation is part of the Artificial Intelligence podcast. As explained in the introduction, due to an unfortunate mishap, this conversation is audio-only. Podcast website: https://lexfridman.com/ai
OUTLINE:
0:00 – Introduction
3:59 – Common language with an alien species
5:46 – Structure of language
7:18 – Roots of language in our brain
8:51 – Language and thought
9:44 – The limit of human cognition
16:48 – Neuralink
19:32 – Deepest property of language
22:13 – Limits of deep learning
28:01 – Good and evil
29:52 – Memorable experiences
33:29 – Mortality
34:23 – Meaning of life