#032 - Simon Kornblith / Google AI - SimCLR and Paper Haul!
Media Type | audio
Categories Via RSS | Technology
Publication Date | Dec 06, 2020
Episode Duration | 01:30:29

This week Dr. Tim Scarfe, Sayak Paul and Yannic Kilcher speak with Dr. Simon Kornblith from Google Brain (Ph.D. from MIT). Simon is trying to understand how neural nets do what they do. Simon was the second author on the seminal Google AI SimCLR paper. We also cover "Do Wide and Deep Networks Learn the Same Things?", "What's in a Loss Function for Image Classification?", and "Big Self-Supervised Models are Strong Semi-Supervised Learners". Simon used to be a neuroscientist and also gives us the story of his unique journey into ML.

00:00:00 Show teaser ("short version")

00:18:34 Show intro

00:22:11 Relationship between neuroscience and machine learning

00:29:28 Similarity analysis and evolution of representations in Neural Networks

00:39:55 Expressivity of NNs

00:42:33 What's in a loss function for image classification?

00:46:52 Loss function implications for transfer learning

00:50:44 SimCLR paper 

01:00:19 Contrasting SimCLR with BYOL

01:01:43 Data augmentation

01:06:35 Universality of image representations

01:09:25 Universality of augmentations

01:23:04 GPT-3

01:25:09 GANs for data augmentation?

01:26:50 Julia language

@skornblith

https://www.linkedin.com/in/simon-kornblith-54b2033a/

https://arxiv.org/abs/2010.15327

Do Wide and Deep Networks Learn the Same Things? Uncovering How Neural Network Representations Vary with Width and Depth

https://arxiv.org/abs/2010.16402

What's in a Loss Function for Image Classification?

https://arxiv.org/abs/2002.05709

A Simple Framework for Contrastive Learning of Visual Representations

https://arxiv.org/abs/2006.10029

Big Self-Supervised Models are Strong Semi-Supervised Learners
