Neural Network Quantization and Compression with Tijmen Blankevoort - TWIML Talk #292
Publisher | Sam Charrington
Media Type | audio
Categories Via RSS | Tech News, Technology
Publication Date | Aug 19, 2019
Episode Duration | 00:50:17
Today we’re joined by Tijmen Blankevoort, a staff engineer at Qualcomm, where he leads the compression and quantization research teams. In our conversation with Tijmen we discuss: • The ins and outs of compression and quantization of ML models, specifically neural networks, • How much models can actually be compressed, and the best ways to achieve compression, • A few recent papers, including “The Lottery Ticket Hypothesis.”
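For listeners unfamiliar with the topic, quantization typically means mapping a network's floating-point weights onto a small set of integers. The sketch below illustrates one common scheme, uniform affine (asymmetric) quantization to 8 bits; it is a generic illustration of the idea, not Qualcomm's or the episode's specific method, and the function names are ours.

```python
import numpy as np

def quantize_uniform(w, num_bits=8):
    # Uniform affine quantization: map floats in [w.min(), w.max()]
    # onto the integer grid [0, 2**num_bits - 1].
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (w.max() - w.min()) / (qmax - qmin)
    zero_point = int(round(qmin - w.min() / scale))
    q = np.clip(np.round(w / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    # Recover an approximation of the original floats.
    return scale * (q.astype(np.float32) - zero_point)

w = np.random.randn(4, 4).astype(np.float32)
q, scale, zero_point = quantize_uniform(w)
w_hat = dequantize(q, scale, zero_point)
# Per-element reconstruction error is on the order of the scale,
# and storage drops from 32 bits to 8 bits per weight.
```

Storing `q` plus the two constants `scale` and `zero_point` instead of `w` is what yields the 4x memory reduction of int8 quantization discussed in the episode.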
