Harri Valpola: System 2 AI and Planning in Model-Based Reinforcement Learning


In this episode of Machine Learning Street Talk, Tim Scarfe, Yannic Kilcher and Connor Shorten interviewed Harri Valpola, CEO and Founder of Curious AI. We continued our discussion of System 1 and System 2 thinking in Deep Learning, along with a range of topics around model-based Reinforcement Learning. Dr. Valpola describes some of the challenges of modelling industrial control processes, such as sewage treatment plants and paper mills, with model-based RL. Dr. Valpola and his collaborators recently published "Regularizing Trajectory Optimization with Denoising Autoencoders", which addresses the problem of planning algorithms exploiting inaccuracies in their world models!
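For context on that regularization idea, here is a minimal, hypothetical sketch (not the authors' code) of how a denoising-autoencoder penalty can be folded into a trajectory-optimization objective. The DAE architecture, the `dynamics` and `reward_fn` callables, and the `reg_weight` value are all illustrative assumptions, not details taken from the paper.

```python
# Sketch: regularizing a planning objective with a denoising autoencoder (DAE).
# The DAE is trained on real (state, action) data; planned trajectories that it
# reconstructs poorly lie off the training distribution, where the learned
# dynamics model cannot be trusted, and are therefore penalized.
import torch
import torch.nn as nn

class DAE(nn.Module):
    """Denoising autoencoder over concatenated (state, action) vectors."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x):
        return self.net(x)

def dae_loss(dae, x, noise_std=0.1):
    # Train the DAE to undo Gaussian corruption of real transition data.
    noisy = x + noise_std * torch.randn_like(x)
    return ((dae(noisy) - x) ** 2).mean()

def regularized_plan_cost(dynamics, reward_fn, dae, state, actions, reg_weight=1.0):
    """Roll a candidate action sequence through the learned dynamics model and
    add a DAE familiarity penalty to the usual negative return.

    For a DAE trained with corruption scale sigma, g(x) - x approximates
    sigma^2 * grad log p(x), so the squared reconstruction error is small on
    familiar (state, action) pairs and grows off-distribution."""
    total_reward, penalty = 0.0, 0.0
    for a in actions:                       # actions: sequence of action tensors
        x = torch.cat([state, a], dim=-1)
        penalty += ((dae(x) - x) ** 2).sum()
        state = dynamics(state, a)          # learned one-step dynamics model
        total_reward += reward_fn(state, a)
    return -total_reward + reg_weight * penalty
```

A planner such as CEM would then minimize `regularized_plan_cost` over candidate action sequences; the penalty discourages the optimizer from steering into regions where the dynamics model is extrapolating and its predicted rewards are unreliable.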

00:00:00 Intro to Harri and Curious AI, System 1/System 2
00:04:50 Background on model-based RL challenges from Tim
00:06:26 Other interesting research papers on model-based RL from Connor
00:08:36 Intro to Curious AI's recent NeurIPS paper on model-based RL and denoising autoencoders from Yannic
00:21:00 Main show kick-off, System 1/System 2
00:31:50 Where does the simulator come from?
00:33:59 Evolutionary priors
00:37:17 Consciousness
00:40:37 How does one build a company like Curious AI?
00:46:42 Deep Q Networks
00:49:04 Planning and Model based RL
00:53:04 Learning good representations
00:55:55 Typical problem Curious AI might solve in industry
01:00:56 Exploration
01:08:00 Their paper – Regularizing Trajectory Optimization with Denoising Autoencoders
01:13:47 What is epistemic uncertainty?
01:16:44 How would Curious AI develop these models?
01:18:00 Explainability and simulations
01:22:33 How system 2 works in humans
01:26:11 Planning
01:27:04 Advice for starting an AI company
01:31:31 Real world implementation of planning models
01:33:49 Publishing research and openness

We really hope you enjoy this episode. Please subscribe!

Regularizing Trajectory Optimization with Denoising Autoencoders: https://papers.nips.cc/paper/8552-regularizing-trajectory-optimization-with-denoising-autoencoders.pdf
Pulp, Paper & Packaging: A Future Transformed through Deep Learning: https://thecuriousaicompany.com/pulp-paper-packaging-a-future-transformed-through-deep-learning/
Curious AI: https://thecuriousaicompany.com/
Harri Valpola Publications: https://scholar.google.com/citations?user=1uT7-84AAAAJ&hl=en&oi=ao
Some interesting papers on model-based RL:
GameGAN: https://cdn.arstechnica.net/wp-content/uploads/2020/05/Nvidia_GameGAN_Research.pdf
Plan2Explore: https://ramanans1.github.io/plan2explore/
World Models: https://worldmodels.github.io/
MuZero: https://arxiv.org/pdf/1911.08265.pdf
PlaNet: A Deep Planning Network for RL: https://ai.googleblog.com/2019/02/introducing-planet-deep-planning.html
Dreamer: Scalable RL using World Models: https://ai.googleblog.com/2020/03/introducing-dreamer-scalable.html
Model Based RL for Atari: https://arxiv.org/pdf/1903.00374.pdf

Comments

Bianca A. - There's art to data science says:

I usually watch your videos during lunch breaks. I thought it was a good idea to use lunch time and consume some food for the brain also (not just for the body). Today you made me reconsider my idea. That sewage plant example was a bit of a challenge for my digestion. 😁

Interesting talk. I feel you're getting better and better at finding the strengths and weaknesses in the models you're discussing. It's the kind of conversation I find very useful. Keep up the great work.

Peter Ott says:

Great job on this podcast you guys! I also think you hit a good balance of cool editing and background music levels without it being overwhelming. Keep it up!

Ali Baheri says:

I would argue that what he calls System 2 can easily be viewed as System 1; what he describes as System 2 already falls within System 1. The real System 2 is System 1 plus what is beyond our imagination. The real System 2 is something like POET/Enhanced POET, which can "really" surprise us by creating something beyond our imagination. Is it fair to call our research a new paradigm when these concepts were already introduced a couple of decades ago by Schmidhuber, around 1990?
