Ronen Dar on GPU Orchestration for Building ML Models
Podcast | Data Driven
Publisher | Data Driven
Media Type | Audio
Categories Via RSS | Life Sciences, Mathematics, Science, Technology
Publication Date | Feb 05, 2024
Episode Duration | 00:44:59

In this episode, hosts Andy Leonard and Frank La Vigne sit down with Ronen Dar, co-founder and CTO of Run AI, to explore the world of artificial intelligence and GPU orchestration for machine learning models.

Ronen shares insights into the challenges of utilizing GPUs in AI research and how Run AI's platform addresses these issues by optimizing GPU usage and providing tools for easier and faster model training and deployment. The conversation delves into the concept of fractional GPU usage, allowing multiple workloads to run on a single GPU, making expensive GPUs more accessible and cost-effective for organizations.
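To make the fractional-GPU idea concrete, here is a minimal, illustrative sketch (not Run AI's actual API or scheduler): a toy allocator that hands out fractions of one GPU's memory to several workloads, so an expensive device can be shared instead of sitting idle under a single job. All class and workload names are hypothetical.

```python
# Toy model of fractional GPU sharing: several workloads each reserve a
# fraction of one physical GPU. This only illustrates the bookkeeping idea;
# real systems also enforce isolation at the driver/runtime level.

class FractionalGPU:
    def __init__(self, total_mem_gb: float):
        self.total_mem_gb = total_mem_gb
        self.allocations = {}  # workload name -> fraction of the GPU

    def allocate(self, workload: str, fraction: float) -> bool:
        """Reserve a fraction of the GPU for a workload, if capacity remains."""
        if fraction <= 0 or sum(self.allocations.values()) + fraction > 1.0:
            return False
        self.allocations[workload] = fraction
        return True

    def memory_gb(self, workload: str) -> float:
        """Memory (in GB) backing a workload's reserved fraction."""
        return self.total_mem_gb * self.allocations.get(workload, 0.0)

gpu = FractionalGPU(total_mem_gb=40)       # e.g. one 40 GB GPU
gpu.allocate("inference-service", 0.25)    # small serving job
gpu.allocate("notebook", 0.25)             # interactive research session
gpu.allocate("training-run", 0.5)          # larger training job
print(gpu.memory_gb("inference-service"))  # 10.0
```

Once the GPU is fully subscribed, further `allocate` calls return `False`, which is the same pressure that quota-and-sharing schemes (discussed around the 28-minute mark) are designed to manage fairly across teams.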

Show Notes

04:40 GPU technology enabled for cloud AI workloads.

07:00 RunAI enables sharing expensive GPU resources for all.

11:59 As enterprise AI matures, organizations become more savvy.

15:35 Deep learning, GPUs for speed, CPUs backup.

16:54 LLMs running on GPUs, exploding in the market.

23:29 NVIDIA created CUDA to simplify GPU use.

26:21 NVIDIA's success lies in accessible technology.

28:25 Solve GPU hogging with quotas and sharing.

31:15 Team lead manages GPU quotas for researchers.

35:51 Rapid changes in business and innovation.

40:34 Passionate problem-solver with diverse tech background.

43:38 Thanks for tuning in, subscribe and review.
