Reinforcement Learning Course by David Silver
Lecture 1: Introduction to Reinforcement Learning
Slides and more info about the course: http://goo.gl/vUiyjq
(Pieter Abbeel, UC Berkeley | Covariant) Pieter Abbeel is a Professor at UC Berkeley, where he is Director of the Berkeley Robot Learning Lab and Co-Director of the Berkeley Artificial Intelligence Research (BAIR) Lab. Abbeel’s research strives to build ever more intelligent systems, with a main emphasis on deep reinforcement learning and meta-learning. His lab also investigates how AI could advance other science and engineering disciplines. Abbeel has founded several companies, including Gradescope (AI to help instructors with grading homework and exams) and Covariant (AI for robotic automation of warehouses and factories). Abbeel is also the host of The Robot Brains Podcast. Abbeel has received many awards and honors, including the PECASE, NSF CAREER, ONR-YIP, DARPA-YFA, and TR35. His work is frequently featured in the press, including the New York Times, Wall Street Journal, BBC, Rolling Stone, Wired, and MIT Technology Review.
Jeroen explains digital twins and reinforcement learning, and how these techniques can be combined to optimize real-world systems.
___
We are an applied artificial intelligence engineering service provider, headquartered in Antwerp (Belgium). We serve clients with a vision and budget for a future where machine learning and deep learning continue to drive their core business in a rapidly transforming economy. Our in-depth knowledge of artificial intelligence, natural language, image classification, profiling, and sensor data has made us the first choice for some of the most innovative companies in the world. We can advance your strategic goals by imagining, developing, and implementing the technology stack you need to stay ahead.
___
►STAY CONNECTED
www.faktion.com
www.linkedin.com/company/faktion
www.facebook.com/teamfaktion
www.instagram.com/teamfaktion
www.twitter.com/faktion
In this episode of Machine Learning Street Talk, Tim Scarfe, Yannic Kilcher and Connor Shorten interviewed Harri Valpola, CEO and Founder of Curious AI. We continued our discussion of System 1 and System 2 thinking in deep learning, as well as miscellaneous topics around model-based reinforcement learning. Dr. Valpola describes some of the challenges of modelling industrial control processes, such as sewage water filtering and paper mills, with model-based RL. Dr. Valpola and his collaborators recently published “Regularizing Trajectory Optimization with Denoising Autoencoders”, which addresses a known failure mode of planning algorithms: exploiting inaccuracies in their learned world models!

00:00:00 Intro to Harri and Curious AI, System 1/System 2
00:04:50 Background on model-based RL challenges from Tim
00:06:26 Other interesting research papers on model-based RL from Connor
00:08:36 Intro to Curious AI's recent NeurIPS paper on model-based RL and denoising autoencoders from Yannic
00:21:00 Main show kick off, System 1/2
00:31:50 Where does the simulator come from?
00:33:59 Evolutionary priors
00:37:17 Consciousness
00:40:37 How does one build a company like Curious AI?
00:46:42 Deep Q Networks
00:49:04 Planning and model-based RL
00:53:04 Learning good representations
00:55:55 Typical problem Curious AI might solve in industry
01:00:56 Exploration
01:08:00 Their paper – regularizing trajectory optimization with denoising autoencoders
01:13:47 What is epistemic uncertainty?
01:16:44 How would Curious develop these models
01:18:00 Explainability and simulations
01:22:33 How System 2 works in humans
01:26:11 Planning
01:27:04 Advice for starting an AI company
01:31:31 Real-world implementation of planning models
01:33:49 Publishing research
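The core idea behind the paper is easy to sketch: when optimizing an action sequence against a learned dynamics model, add a penalty based on a denoising autoencoder's reconstruction error over each trajectory segment, so the planner stays on the data manifold where the model is trustworthy. Below is a minimal illustrative sketch of that idea in Python; every component here (`dynamics_model`, `reward_fn`, `dae_recon_error`, the random-shooting planner, and all constants) is a stand-in assumption, not Curious AI's implementation — the paper derives its penalty from the DAE's connection to the data density and uses stronger optimizers.

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM, HORIZON = 4, 2, 10

# Stand-in learned one-step dynamics model f(s, a) -> s' (assumption).
def dynamics_model(s, a):
    return s + 0.1 * np.tanh(np.concatenate([a, a]))

# Stand-in task reward (assumption): drive the state toward the origin.
def reward_fn(s, a):
    return -float(np.sum(s ** 2))

# Stand-in for the DAE's reconstruction error on a (s, a, s') window:
# low on data resembling the training set, high off-manifold (assumption).
def dae_recon_error(window):
    return 0.01 * float(np.sum(window ** 2))

def plan(s0, n_candidates=256, lam=1.0):
    """DAE-regularized random-shooting planner (illustrative only)."""
    best_score, best_actions = -np.inf, None
    for _ in range(n_candidates):
        actions = rng.normal(size=(HORIZON, ACTION_DIM))
        s, score = s0, 0.0
        for a in actions:
            s_next = dynamics_model(s, a)
            # Penalize segments the DAE reconstructs poorly: they lie off
            # the training distribution, where the learned model is
            # unreliable and easy for the optimizer to exploit.
            penalty = dae_recon_error(np.concatenate([s, a, s_next]))
            score += reward_fn(s, a) - lam * penalty
            s = s_next
        if score > best_score:
            best_score, best_actions = score, actions
    return best_actions

first_action = plan(np.zeros(STATE_DIM))[0]  # execute, then replan (MPC-style)
```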
An AI learns to park a car in a parking lot in a 3D physics simulation. The simulation was implemented using Unity’s ML-Agents framework (https://unity3d.com/machine-learning). The AI consists of a deep neural network with 3 hidden layers of 128 neurons each. It is trained with the Proximal Policy Optimization (PPO) algorithm, a reinforcement learning approach. The inputs of the neural network are the readings of eight depth sensors, the car’s current speed and position, and its position relative to the target. The outputs of the neural network are interpreted as engine force, braking force and turning force. These outputs can be seen at the top right corner of the zoomed-out camera shots. The AI starts off with random behaviour, i.e. the neural network is initialized with random weights. It then gradually learns to solve the task by reacting to environment feedback. The environment tells the AI whether it is doing well or badly with positive or negative reward signals. In this project, the AI receives small positive signals for getting closer to the parking spot, which is outlined in red, and a larger reward when it actually reaches the parking spot and stops there. The final reward for reaching the parking spot depends on how parallel the car stops relative to the actual parking orientation. If the car stops at a 90° angle to the actual parking direction, for instance, the AI receives only a very small final reward.
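As a rough illustration of the reward scheme just described, here is a minimal Python sketch; the function name, constants, and angle convention are all assumptions (the original project is built on Unity ML-Agents, which is C#-based).

```python
import numpy as np

# Minimal sketch of the reward scheme described above (assumed names and
# constants, not the project's actual code).

def step_reward(prev_dist, curr_dist, reached, car_heading, slot_heading,
                shaping_scale=0.1, parked_reward=1.0):
    """Reward for one simulation step.

    prev_dist / curr_dist     : distance to the parking spot before/after the step
    reached                   : True once the car has stopped inside the red outline
    car_heading, slot_heading : heading angles in radians
    """
    # Small positive signal for getting closer to the parking spot.
    reward = shaping_scale * (prev_dist - curr_dist)
    if reached:
        # Larger terminal reward, scaled by how parallel the car is to the
        # slot: 1 when perfectly aligned, approaching 0 at a 90° angle —
        # matching the "very small reward" behaviour described above.
        alignment = abs(np.cos(car_heading - slot_heading))
        reward += parked_reward * alignment
    return reward
```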
In Lecture 14 we move from supervised learning to reinforcement learning (RL), in which an agent must learn to interact with an environment in order to maximize its reward. We formalize reinforcement learning using the language of Markov Decision Processes (MDPs), policies, value functions, and Q-value functions. We discuss different algorithms for reinforcement learning, including Q-learning, policy gradients, and actor-critic. We show how deep reinforcement learning has been used to play Atari games and to achieve super-human Go performance in AlphaGo.

Keywords: Reinforcement learning, RL, Markov decision process, MDP, Q-learning, policy gradients, REINFORCE, actor-critic, Atari games, AlphaGo
Slides: http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture14.pdf

Convolutional Neural Networks for Visual Recognition
Instructors:
Fei-Fei Li: http://vision.stanford.edu/feifeili/
Justin Johnson: http://cs.stanford.edu/people/jcjohns/
Serena Yeung: http://ai.stanford.edu/~syyeung/

Computer vision has become ubiquitous in our society, with applications in search, image understanding, apps, mapping, medicine, drones, and self-driving cars. Core to many of these applications are visual recognition tasks such as image classification, localization and detection. Recent developments in neural network (aka “deep learning”) approaches have greatly advanced the performance of these state-of-the-art visual recognition systems. This lecture collection is a deep dive into the details of the deep learning architectures, with a focus on learning end-to-end models for these tasks, particularly image classification. From this lecture collection, students will learn to implement, train and debug their own neural networks and gain a detailed understanding of cutting-edge research in computer vision.

Website: http://cs231n.stanford.edu/
For additional learning opportunities please visit: http://online.stanford.edu/
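Of the algorithms this lecture covers, tabular Q-learning is the easiest to write down. The sketch below is purely illustrative (not the lecture's code) and assumes a classic Gym-style environment where `reset()` returns a state and `step(a)` returns `(next_state, reward, done, info)`.

```python
import numpy as np

def q_learning(env, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning with epsilon-greedy exploration (illustrative)."""
    Q = np.zeros((n_states, n_actions))
    rng = np.random.default_rng(0)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # Epsilon-greedy action selection: explore with prob. epsilon,
            # otherwise act greedily with respect to the current Q-table.
            a = rng.integers(n_actions) if rng.random() < epsilon \
                else int(np.argmax(Q[s]))
            s_next, r, done, _ = env.step(a)
            # Q-learning update: bootstrap from the greedy next-state value,
            # with no bootstrapping on terminal transitions.
            target = r + (0.0 if done else gamma * np.max(Q[s_next]))
            Q[s, a] += alpha * (target - Q[s, a])
            s = s_next
    return Q
```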
CS547: Human-Computer Interaction Seminar
Human in the Loop Reinforcement Learning
Speaker: Emma Brunskill