CURL: Contrastive Unsupervised Representations for Reinforcement Learning
Media Type | audio
Categories Via RSS | Technology
Publication Date | May 02, 2020
Episode Duration | 01:14:42

According to Yann LeCun, the next big thing in machine learning is unsupervised learning. Self-supervision has changed the entire game in deep learning over the last few years, first transforming the language world with word2vec and BERT, and now it is turning computer vision upside down.

This week Yannic, Connor and I spoke with Aravind Srinivas, who recently co-led the hot-off-the-press CURL: Contrastive Unsupervised Representations for Reinforcement Learning alongside Michael (Misha) Laskin. CURL has had an incredible reception in the ML community over the last month or so. Remember the DeepMind paper which solved the Atari games from raw pixels? Aravind's approach uses contrastive unsupervised learning to featurise the pixels before applying RL. CURL is the first image-based algorithm to nearly match the sample efficiency and performance of methods that use state-based features! This is a huge step forward in being able to apply RL in the real world.
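The contrastive objective at the heart of CURL is an InfoNCE-style loss: two random crops of the same observation are encoded as a "query" and a "key" (the paper uses a momentum-updated key encoder), and the query must pick out its matching key from a batch of negatives via a learned bilinear similarity. As a rough illustration only, here is a minimal NumPy sketch of that loss; the function name and the plain NumPy setting are my own simplifications, not the paper's implementation:

```python
import numpy as np

def info_nce_loss(queries, keys, W):
    """Minimal sketch of a CURL-style InfoNCE contrastive loss.

    queries: (B, D) encodings of one augmented crop per observation
    keys:    (B, D) encodings of a second crop (CURL encodes these with a
             momentum encoder; here they are just given)
    W:       (D, D) learned bilinear matrix; the positive key for
             queries[i] is keys[i], all other rows act as negatives.
    """
    logits = queries @ W @ keys.T                 # (B, B) similarity scores
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    exp = np.exp(logits)
    probs = exp / exp.sum(axis=1, keepdims=True)
    # Cross-entropy with the matching (diagonal) key as the correct class
    idx = np.arange(len(queries))
    return -np.log(probs[idx, idx]).mean()
```

When query and key encodings of the same observation agree strongly, the diagonal dominates and the loss approaches zero; training the encoder (and W) to minimise it is what produces the features the RL agent then consumes.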

We explore RL and self-supervision for computer vision in detail and find out about how Aravind got into machine learning. 

Original YouTube Video: https://youtu.be/1MprzvYNpY8

Paper:

CURL: Contrastive Unsupervised Representations for Reinforcement Learning

Aravind Srinivas, Michael Laskin, Pieter Abbeel

https://arxiv.org/pdf/2004.04136.pdf

Yannic's analysis video: https://www.youtube.com/watch?v=hg2Q_O5b9w4 

#machinelearning #reinforcementlearning #curl #timscarfe #yannickilcher #connorshorten

Music credit: https://soundcloud.com/errxrmusic/in-my-mind
