650: SparseGPT: Remove 100 Billion Parameters but Retain 100% Accuracy
Publication Date: Feb 03, 2023
Episode Duration: 00:07:47
SparseGPT is a one-shot pruning technique that can remove at least half of the parameters of large language models such as GPT-3 without a meaningful loss of accuracy. In this episode, Jon Krohn provides an overview of the development and explains its commercial and environmental implications.
Additional materials:
www.superdatascience.com/650
Interested in sponsoring a SuperDataScience Podcast episode? Visit JonKrohn.com/podcast for sponsorship information.