Rethinking Model Size: Train Large, Then Compress with Joseph Gonzalez - #378
Publisher |
Sam Charrington
Media Type |
audio
Categories Via RSS |
Tech News
Technology
Publication Date |
May 25, 2020
Episode Duration |
00:52:06
Today we’re joined by Joseph Gonzalez, Assistant Professor in the EECS department at UC Berkeley. In our conversation, we explore Joseph’s paper “Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers,” which examines compute-efficient training strategies for models. We discuss the two main problems the paper addresses: 1) How can we rapidly iterate on variations in architecture? And 2) If we make models bigger, does that actually improve efficiency?
