Deploying models on GPU with Kyle Morris
Podcast | MLOps Live
Publisher | neptune.ai
Media Type | audio
Categories Via RSS | Technology
Publication Date | May 25, 2022
Episode Duration | 00:55:55
In this episode of MLOps Live, Sabine and Stephen are joined by Kyle Morris, Co-Founder of Banana ML. They discuss running ML in production on GPUs, delving into GPU performance optimization, common approaches, and the infrastructure and memory implications, among other topics. With the growing interest in building production-ready, end-to-end ML pipelines, there is an increasing need for a toolset that can scale quickly. Modern commodity PCs have a multi-core CPU and at least one GPU, providing a low-cost, easily accessible heterogeneous environment for high-performance computing; however, due to physical constraints, hardware development now yields greater parallelism rather than improved performance for sequential algorithms. Machine learning build/train and production execution frequently employ disparate controls, management, runtime platforms, and sometimes languages. As a result, understanding the hardware you are running on is critical for taking advantage of whatever optimization is feasible.
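As an illustration of that last point, here is a minimal sketch, assuming Python with PyTorch installed (a toolset the episode does not prescribe), of inspecting the hardware a model will run on before placing it. The describe_hardware helper is purely illustrative and not part of any library discussed in the episode.

import torch

def describe_hardware() -> torch.device:
    """Print basic CPU/GPU information and return the device to use."""
    if torch.cuda.is_available():
        device = torch.device("cuda")
        props = torch.cuda.get_device_properties(0)
        print(f"GPU: {props.name}, {props.total_memory / 1e9:.1f} GB memory")
    else:
        device = torch.device("cpu")
        print(f"No GPU found; using {torch.get_num_threads()} CPU threads")
    return device

if __name__ == "__main__":
    device = describe_hardware()
    # A model would then be placed on whichever device was found, e.g.
    # model = MyModel().to(device)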
Visit our YouTube channel to watch this episode!
Learn more about Kyle Morris:
If you enjoyed this episode, then please either:
Register for the new live event
To learn more, visit Neptune.ai
Previous guests include: Andy McMahon of NatWest Group, Jacopo Tagliabue of Coveo, Adam Sroka of Origami, Amber Roberts of Arize AI, Michal Tadeusiak of deepsense.ai, Danny Leybzon of WhyLabs, Kyle Morris of Banana ML, Federico Bianchi of Università Bocconi, Mateusz Opala of Brainly, Kuba Cieslik of tuul.ai, Adam Becker of Telepath.io, and Fernando Rejon & Jakub Zavrel of Zeta Alpha Vector.
Check out our three most downloaded episodes:
MLOps Live is handcrafted by our friends over at: fame.so