Large Language Model (LLM) capabilities have reached new heights and are nothing short of mind-blowing! However, with so many advancements happening at once, it can be overwhelming to keep up with all the latest developments. To help us navigate this complex terrain, we’ve invited Raj, one of the most adept at explaining state-of-the-art (SOTA) AI in practical terms, to join us on the podcast.
Raj discusses several intriguing topics such as in-context learning, reasoning, LLM options, and related tooling. But that’s not all! We also hear from Raj about the rapidly growing data science and AI community on TikTok.
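In-context learning, one of the topics Raj digs into, is the idea that an LLM can pick up a task from a handful of labeled examples placed directly in the prompt, with no fine-tuning at all. A minimal sketch of what that looks like in practice (the task, examples, and `build_prompt` helper are all hypothetical illustrations, not anything from the episode):

```python
# Sketch of few-shot prompt construction for in-context learning.
# An LLM given these labeled examples will usually continue the
# pattern and label the final, unlabeled input itself.

def build_prompt(examples, query):
    """Format labeled examples plus one unlabeled query as a prompt."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")  # the model completes this line
    return "\n".join(lines)

examples = [
    ("The plot dragged and the acting was wooden.", "negative"),
    ("A delightful film from start to finish.", "positive"),
]
prompt = build_prompt(examples, "I would happily watch it again.")
print(prompt)
```

The string this produces would be sent to any completion-style LLM; the trailing `Sentiment:` invites the model to emit the label for the new review.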
Changelog++ members support our work, get closer to the metal, and make the ads disappear. Join today!
Sponsors:
Featuring:
Show Notes:
Something missing or broken? PRs welcome! (ai-219.md)
Timestamps:
(00:00) - Welcome to Practical AI
(00:43) - Rajiv Shah
(01:55) - AI on TikTok?
(03:31) - Community engagement on TikTok
(04:49) - Ever-growing mind-blowing moments
(06:24) - Reaching different audiences
(07:57) - What is in-context-learning?
(10:52) - Prompt engineering with better models
(13:01) - Growing productive users
(14:52) - The landscape of large language models
(18:16) - Sorting through this delightful mess
(19:46) - Hugging Face highlights
(23:06) - Practical fine-tuning
(26:00) - What are we talking about?
(28:29) - Where does AI fit into education?
(30:20) - A different kind of consumer
(31:54) - Talking to the average Joe about AI
(34:02) - What do you see through the looking glass?
(36:06) - Great Hugging Face resources
(37:08) - Outro
What can art historians and computer scientists learn from one another? Actually, a lot! Amanda Wasielewski joins us to talk about how she discovered that computer scientists working on computer vision were actually acting like rogue art historians, and how art historians have found machine learning to be a valuable tool for research, fraud detection, and cataloguing. We also discuss the rise of generative AI and how this technology might cause us to ask new questions like: “What makes a photograph a photograph?”
Changelog++ members support our work, get closer to the metal, and make the ads disappear. Join today!
Sponsors:
Featuring:
Show Notes:
Something missing or broken? PRs welcome! (ai-218.md)
Timestamps:
(00:00) - Welcome to Practical AI
(00:42) - Amanda Wasielewski
(04:28) - What is art history?
(10:00) - Integrating artworks for ML?
(13:47) - How are art historians adapting to ML?
(19:16) - Art models and the Tank Classifier
(24:27) - What ML devs can learn from art history
(32:06) - Deep learning paradoxes
(38:12) - Where is the field going?
(42:21) - Outro
Daniel and Chris explore the intersection of Kaggle and real-world data science in this illuminating conversation with Christof Henkel, Senior Deep Learning Data Scientist at NVIDIA and Kaggle Grandmaster. Christof offers a lucid explanation of how participation in Kaggle can positively impact a data scientist’s skills and career aspirations. He also shares some of his insights and his approach to maximizing AI productivity using GPU-accelerated tools like RAPIDS and DALI.
Changelog++ members save 2 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
Featuring:
Show Notes:
Something missing or broken? PRs welcome! (ai-217.md)
Timestamps:
(00:00) - Welcome to Practical AI
(00:41) - Christof Henkel
(01:57) - What is Kaggle?
(07:34) - How has Kaggle helped?
(09:38) - Deep Learning 5-6 years ago
(11:05) - What were the changes like?
(14:10) - Sponsor: Changelog++
(15:09) - How Kaggle compares to real life
(21:56) - Any Kaggle highlights?
(24:04) - How to climb the Kaggle ladder?
(26:39) - Accelerated GPUs in Kaggle
(30:25) - Speeding up my process
(34:37) - What comes up in the discussions?
(37:36) - Getting started
(41:06) - What's next for Christof
(42:57) - Outro
We are seeing an explosion of AI apps that are (at their core) a thin UI on top of calls to OpenAI generative models. What risks are associated with this sort of approach to AI integration, and are explainability and accountability something that can be achieved in chat-based assistants?
Beth Rudden of Bast.ai has been thinking about this topic for some time and has developed an ontological approach to creating conversational AI. We hear more about that approach and related work in this episode.
Changelog++ members support our work, get closer to the metal, and make the ads disappear. Join today!
Sponsors:
Featuring:
Show Notes:
Something missing or broken? PRs welcome! (ai-216.md)
Timestamps:
(00:00) - Welcome to Practical AI
(00:42) - Beth Rudden
(07:30) - Bringing in ontology
(16:27) - Don't infer consciousness
(22:18) - Dealing with bias
(25:59) - How to create access
(31:35) - Using AI responsibly
(38:31) - Implementing NLG to more modalities
(42:14) - Will AI make you learn better?
(44:19) - Wrap up
(44:50) - Outro
Neural search and chat-based search are all the rage right now. However, You.com has been innovating in this space since long before ChatGPT. In this episode, Bryan McCann from You.com shares insights related to our mental model of Large Language Model (LLM) interactions and practical tips related to integrating LLMs into production systems.
Changelog++ members support our work, get closer to the metal, and make the ads disappear. Join today!
Sponsors:
Featuring:
Show Notes:
Something missing or broken? PRs welcome! (ai-215.md)
Timestamps:
(00:00) - Welcome to Practical AI
(00:42) - Bryan McCann
(01:47) - What is You.com?
(06:21) - What's different in these new algorithms
(09:28) - How has the public view changed You.com?
(11:53) - Will this change search engines?
(15:19) - How will You.com enhance tooling?
(17:41) - You and multi-modality
(21:17) - AI tools for the next generation
(26:28) - Any wisdom worth sharing?
(29:49) - Our future relationship with models
(34:11) - Practical tips for practitioners
(38:24) - How to get started on You.com
(41:06) - Outro
We’ve all experienced the pain of moving from local development, to testing, and then on to production. This cycle can be long and tedious, especially as AI models and datasets are integrated. Modal is trying to make this development loop as seamless as possible for AI practitioners, and their platform is pretty incredible!
Erik from Modal joins us in this episode to help us understand how we can run or deploy machine learning models, massively parallel compute jobs, task queues, web apps, and much more, without our own infrastructure.
Changelog++ members save 1 minute on this episode because they made the ads disappear. Join today!
Sponsors:
Featuring:
Show Notes:
Something missing or broken? PRs welcome! (ai-214.md)
Timestamps:
(00:00) - Welcome to Practical AI
(00:42) - Erik Bernhardsson
(02:29) - What got Modal started
(08:55) - What makes Modal different?
(12:13) - Pros and cons of this workflow
(15:36) - What it's like in my experience
(21:37) - Most unexpected uses for Modal
(23:57) - The classic Modal workflow
(31:04) - Tips for migrating into Modal
(34:35) - How will Modal grow?
(37:56) - Will you take Modal to the edge?
(40:03) - Wrap up
(43:24) - Outro
With the recent proliferation of generative AI models (from OpenAI, co:here, Anthropic, etc.), practitioners are racing to come up with best practices around prompting, grounding, and control of outputs.
Chris and Daniel take a deep dive into the kinds of behavior we are seeing with this latest wave of models (both good and bad) and what leads to that behavior. They also dig into some prompting and integration tips.
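One prompting-and-grounding pattern that has emerged from this race: paste retrieved context into a template and instruct the model to answer only from that context, which gives you a lever over hallucinated output. A minimal sketch of the idea (the template wording and `ground` helper are illustrative assumptions, not anything quoted from the episode):

```python
# Sketch of a "grounded" prompt template: retrieved passages are
# injected into the prompt, and the instructions constrain the model
# to answer only from that supplied context.

GROUNDED_TEMPLATE = """Answer the question using ONLY the context below.
If the context does not contain the answer, say "I don't know."

Context:
{context}

Question: {question}
Answer:"""

def ground(question, retrieved_passages):
    """Fill the template with retrieved passages and the user question."""
    context = "\n".join(f"- {p}" for p in retrieved_passages)
    return GROUNDED_TEMPLATE.format(context=context, question=question)

prompt = ground(
    "What does the episode discuss?",
    ["Episode 213 of Practical AI covers generative model behavior."],
)
print(prompt)
```

The filled template would then be sent to whichever generative model you are integrating; the explicit "I don't know" escape hatch is what keeps the output controllable when retrieval comes back empty.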
Changelog++ members save 2 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
Featuring:
Show Notes:
Generative AI model behavior in the news:
Useful guides related to prompt engineering:
Something missing or broken? PRs welcome! (ai-213.md)
Timestamps:
(00:00) - Welcome to Practical AI
(00:43) - Best time for an AI podcast?
(05:19) - What makes output good or bad?
(15:36) - Sponsor: Changelog++
(16:36) - Behind the behavior
(19:23) - What can we reliably expect?
(29:14) - Prompt Engineering
(35:11) - Tips on engineering prompts
(43:05) - Outro
We’re super excited to welcome Jay Alammar to the show. Jay is a well-known AI educator, applied NLP practitioner at co:here, and author of the popular blog, “The Illustrated Transformer.” In this episode, he shares his ideas on creating applied NLP solutions, working with large language models, and creating educational resources for state-of-the-art AI.
Changelog++ members support our work, get closer to the metal, and make the ads disappear. Join today!
Sponsors:
Featuring:
Show Notes:
Something missing or broken? PRs welcome! (ai-212.md)
Timestamps:
(00:00) - Welcome to Practical AI
(00:42) - Jay Alammar
(03:33) - Jay's origin story
(08:49) - Tips for educational content creators
(12:26) - Applied NLP
(16:34) - Are data scientists becoming software engineers?
(21:47) - When should I be using these tools?
(25:55) - co:here
(28:39) - How is this landscape changing?
(32:41) - Multi-modality: How do I use this?
(35:56) - Jay, what do you want to explore?
(37:42) - Outro
We’ve been hearing about “serverless” CPUs for some time, but it’s taken a while to get to serverless GPUs. In this episode, Erik from Banana explains why it’s taken so long, and he helps us understand how these new workflows are unlocking state-of-the-art AI for application developers. Forget about servers, but don’t forget to listen to this one!
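The workflow Erik describes generally follows the serverless-inference pattern: pay the expensive model-load cost once at cold start, then serve many cheap per-request calls against the cached model. A generic, self-contained sketch of that pattern (the `load_model`/`handler` names and the stand-in "model" are hypothetical, not Banana's actual SDK):

```python
# Generic sketch of the serverless-inference pattern: the model is
# loaded once on the first (cold) invocation and cached at module
# level, so subsequent (warm) invocations skip the expensive setup.

_model = None  # module-level cache survives across warm invocations

def load_model():
    """Expensive one-time setup (e.g. pulling weights onto the GPU)."""
    return {"name": "demo-model", "loaded": True}  # stand-in for a real model

def handler(event):
    """Per-request entry point: reuse the cached model if it's warm."""
    global _model
    cold_start = _model is None
    if cold_start:
        _model = load_model()
    text = event.get("prompt", "")
    # Stand-in "inference": report the input token count.
    return {"cold_start": cold_start, "tokens_in": len(text.split())}

first = handler({"prompt": "hello serverless gpus"})
second = handler({"prompt": "warm now"})
print(first, second)
```

The interesting engineering (which the episode gets into) is how the platform keeps cold starts short even when the one-time setup involves multi-gigabyte model weights.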
Changelog++ members save 2 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
Featuring:
Show Notes:
Banana - Scale your machine learning inference and training on serverless GPUs.
Something missing or broken? PRs welcome! (ai-211.md)
Timestamps:
(00:07) - Welcome to Practical AI
(00:43) - Erik Dunteman
(02:34) - What does serverless mean to you?
(07:17) - What's the secret sauce?
(09:30) - How does serverless affect our workflows?
(13:21) - Sponsor: Changelog++
(14:21) - What languages do you prefer?
(17:20) - The Banana workflow
(20:33) - The necessary minimum skills to use Banana
(24:51) - A typical win
(26:20) - Incompatible workflows
(30:11) - Future Banana-AWS compatibility?
(32:53) - Tips to choose your GPU
(35:23) - What's the future of serverless?
(37:46) - Outro
Worlds are colliding! This week we join forces with the hosts of the MLOps.Community podcast to discuss all things machine learning operations. We talk about how the recent explosion of foundation models and generative models is influencing the world of MLOps, and we discuss related tooling, workflows, perceptions, etc.
Changelog++ members save 2 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
Featuring:
Show Notes:
Something missing or broken? PRs welcome! (ai-210.md)
Timestamps:
(00:07) - Welcome to Practical AI
(00:43) - Demetrios and Mihail
(01:54) - What is the MLOps Community?
(05:14) - Chris throws a hand grenade into the convo
(09:44) - MLOps vs DevOps when deploying
(15:30) - Ops vs experiment tracking
(16:43) - Where is MLOps going?
(25:57) - Is this even legal??
(32:22) - Sponsor: Changelog++
(33:21) - Is generative model bloat an issue?
(36:35) - MLOps' diverse uses
(42:52) - You are not Google
(54:07) - Wrap up
(56:07) - Outro