670: LLaMA: GPT-3 performance, 10x smaller
Publication Date: Apr 14, 2023
Episode Duration: 00:13:26
How does Meta AI's large language model, LLaMA, compare to the rest? Following the Chinchilla scaling laws, LLaMA is designed to be smaller yet still highly performant. How exactly does it achieve this feat? By training a smaller model on far more tokens, i.e., for a longer period of time. Discover how LLaMA compares to its competition, including GPT-3, in this week's episode.
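The episode's core claim tracks the Chinchilla scaling result: a smaller model trained on more tokens can approach the loss of a much larger one. As a minimal sketch, the snippet below plugs GPT-3's and LLaMA-13B's parameter and token counts into the parametric loss fit published by Hoffmann et al. (2022); the fitted constants and the model sizes come from the respective papers, and the printed losses are rough estimates from that fit, not benchmark results.

```python
# Rough illustration of the Chinchilla scaling-law intuition behind LLaMA.
# Loss fit L(N, D) = E + A / N**alpha + B / D**beta with the parametric
# constants published in Hoffmann et al. (2022). Parameter and token counts
# below are from the GPT-3 and LLaMA papers.

E, A, B = 1.69, 406.4, 410.7
ALPHA, BETA = 0.34, 0.28

def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss for a model with n_params parameters
    trained on n_tokens tokens, per the Chinchilla parametric fit."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA

models = {
    "GPT-3 (175B params, 0.3T tokens)": (175e9, 300e9),
    "LLaMA-13B (13B params, 1.0T tokens)": (13e9, 1.0e12),
}

for name, (n, d) in models.items():
    print(f"{name}: predicted loss ~ {chinchilla_loss(n, d):.3f}")
```

Under this fit, the two configurations land at roughly comparable predicted losses, which is the intuition behind training a 13x-smaller model on over 3x the data.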
Additional materials:
www.superdatascience.com/670
Interested in sponsoring a SuperDataScience Podcast episode? Visit JonKrohn.com/podcast for sponsorship information.