Large Language Model (LLM) capabilities have reached new heights and are nothing short of mind-blowing! But with so many advancements happening at once, it can be overwhelming to keep up with the latest developments. To help us navigate this complex terrain, we've invited Raj, one of the people most adept at explaining state-of-the-art (SOTA) AI in practical terms, to join us on the podcast.
Raj discusses several intriguing topics such as in-context learning, reasoning, LLM options, and related tooling. But that’s not all! We also hear from Raj about the rapidly growing data science and AI community on TikTok.
Changelog++ members support our work, get closer to the metal, and make the ads disappear. Join today!
Sponsors:
Featuring:
Show Notes:
Something missing or broken? PRs welcome!
Timestamps:
(00:00) - Welcome to Practical AI
(00:43) - Rajiv Shah
(01:55) - AI on TikTok?
(03:31) - Community engagement on TikTok
(04:49) - Ever-growing mind-blowing moments
(06:24) - Reaching different audiences
(07:57) - What is in-context learning?
(10:52) - Prompt engineering with better models
(13:01) - Growing productive users
(14:52) - The landscape of large language models
(18:16) - Sorting through this delightful mess
(19:46) - Hugging Face highlights
(23:06) - Practical fine-tuning
(26:00) - What are we talking about?
(28:29) - Where does AI fit into education?
(30:20) - A different kind of consumer
(31:54) - Talking to the average Joe about AI
(34:02) - What do you see through the looking glass?
(36:06) - Great Hugging Face resources
(37:08) - Outro