MOReL: Model-Based Offline Reinforcement Learning with Aravind Rajeswaran - #442
Publisher |
Sam Charrington
Media Type |
audio
Categories Via RSS |
Tech News
Technology
Publication Date |
Dec 28, 2020
Episode Duration |
00:38:01
Today we close out our NeurIPS series joined by Aravind Rajeswaran, a PhD student in machine learning and robotics at the University of Washington. At NeurIPS, Aravind presented his paper MOReL: Model-Based Offline Reinforcement Learning. In our conversation, we explore model-based reinforcement learning and whether models are a "prerequisite" for achieving something analogous to transfer learning. We also dig into MOReL and recent progress in offline reinforcement learning, the differences between developing MOReL models and traditional RL models, and the theoretical results emerging from this research. The complete show notes for this episode can be found at twimlai.com/go/442.
