Steve Orrin on the Importance of Hardware in AI Development
Podcast | Data Driven
Publisher | Data Driven
Media Type | audio
Categories Via RSS | Life Sciences, Mathematics, Science, Technology
Publication Date | Jun 27, 2023
Episode Duration | 01:02:40

On this episode of Data Driven, the focus is on hardware, from AI-optimized chips to edge computing.

Frank and Andy interview Steve Orrin, the CTO of Intel Federal.

Intel has developed new CPU instructions to accelerate AI workloads, and FPGAs allow for faster development of custom applications with specific needs. Steve emphasizes the importance of data curation and wrangling before jumping into machine learning and AI.
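To make the CPU-instruction point concrete, here is a minimal sketch of CPU inference using Intel's Extension for PyTorch, which routes work to AI-oriented instructions such as AVX-512 VNNI and AMX when the processor supports them. The episode doesn't walk through code; the model choice and package usage below are illustrative assumptions.

```python
# Minimal sketch: bfloat16 CPU inference with Intel Extension for PyTorch.
# Assumes intel-extension-for-pytorch and torchvision are installed;
# ResNet-50 is an arbitrary illustrative model choice.
import torch
import torchvision.models as models
import intel_extension_for_pytorch as ipex

model = models.resnet50(weights=None).eval()

# ipex.optimize rewrites operators to kernels that exploit AI-oriented
# CPU instructions (e.g., AVX-512 VNNI, AMX) where the hardware has them.
model = ipex.optimize(model, dtype=torch.bfloat16)

x = torch.randn(1, 3, 224, 224)
with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    logits = model(x)
print(logits.shape)  # torch.Size([1, 1000])
```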


Moments

00:01:59 Hardware and software infrastructure for AI.

00:07:18 AI benchmarks show the importance of GPUs and CPUs.

00:14:08 Habana is Intel's two-chip AI accelerator strategy: Goya for inferencing and Gaudi for training. The chips are geared for large-scale training and inference tasks, scale with the architecture, and are available in the Amazon cloud and in data centers. Intel also offers CPUs with added instructions for AI workloads, as well as GPUs for specialized tasks. Custom approaches like FPGAs and ASICs are gaining popularity, especially for edge computing, where low power and performance are essential. (A minimal Gaudi code sketch follows this list.)

00:19:47 Intel's diverse team stays ahead of AI trends by collaborating with specialists and responding to industry needs. They have a large number of software engineers focused on optimizing software for Intel architecture, contributing to open source, and providing resources to help companies run their software efficiently. Intel's goal is to ensure that everyone's software runs smoothly and continues to raise the bar for the industry.

00:25:24 Moore's Law drives compute by reducing size. Cloud enables cost-effective edge use cases. Edge brings cloud capabilities to devices.

00:31:40 FPGAs are programmable hardware that allow customization, with applications in AI, neuromorphic processing, and cellular and RF communications. They can be rapidly prototyped and deployed in the cloud.

00:41:09 Started in biology, became a hacker, joined Intel.

00:48:01 Coding as a viable and well-paying career.

00:55:50 Looking forward to image-to-code and augmented reality integration in daily life.

01:00:46 Tech show, similar to Halt and Catch Fire.
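As referenced in the 00:14:08 moment above, here is a minimal sketch of one PyTorch training step on a Habana Gaudi device via the Habana PyTorch bridge. The episode doesn't show code; the tiny model and random data are illustrative stand-ins.

```python
# Minimal sketch: one training step on a Habana Gaudi accelerator.
# Assumes the habana_frameworks PyTorch bridge is installed.
import torch
import torch.nn as nn
import habana_frameworks.torch.core as htcore  # registers the "hpu" device

device = torch.device("hpu")  # Gaudi devices appear as "hpu" in PyTorch

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 64).to(device)           # stand-in input batch
y = torch.randint(0, 10, (32,)).to(device)   # stand-in labels

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
htcore.mark_step()  # in lazy mode, flushes the accumulated graph to the device
optimizer.step()
htcore.mark_step()
print(loss.item())
```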

Topics Covered:

- The role of infrastructure in AI

- Hardware optimization for training and inferencing

- Intel's range of hardware solutions

- Importance of software infrastructure and collaboration with the open source community

- Introduction to Habana AI accelerator chips

- The concept of collapsing data into a single integer level

- Challenges and considerations in data collection and storage

- Explanation and future of FPGAs

- Moore's Law and its impact on compute

- The rise of edge computing and its benefits

- Bringing cloud capabilities to devices

- Importance of inference and decision-making on the device

- Challenges in achieving high performance and energy efficiency in edge computing

- The role of diverse teams in staying ahead in the AI world

- Overview of Intel Labs and their research domains

- Intel's software engineering capabilities and dedication to open source

- Intel as collaborators in the industry

- Importance of benchmarking across different AI types and stages

- The role of CPUs and GPUs in AI workloads

- Optimizing workloads from software down to hardware

- Importance of memory in memory-intensive activities

- Security mechanisms in FPGAs

- Programming and development advantages of FPGAs

- Resurgence of FPGAs in AI and other domains

On this episode of Data Driven, hosts Andy Leonard, BAILeY, and Frank La Vigne are joined by guest Steve Orrin, an expert in software and hardware innovation at Intel. The episode dives into the crucial role that hardware plays in AI development, from data curation to training and inferencing. Steve emphasizes the importance of hardware optimization for specific workloads to achieve powerful and timely training, and the group explores the impact of hardware on inferencing, particularly in real-time applications like autonomous driving. Intel, as Steve explains, provides a diverse set of hardware architectures, including CPUs, AI accelerators, and edge AI chips, to address various AI workloads.

Beyond hardware, the conversation covers several other topics. Steve discusses the concept of collapsing data related to planes or vehicles into a single integer level or bit, highlighting the value of data while cautioning against excessive data collection. The group also touches on the future of edge computing, the challenges of achieving high performance and energy efficiency, the advantages of a diverse team, and Intel's commitment to collaboration and open source.

Overall, this episode provides valuable insight into the importance of hardware in AI development and a fascinating glimpse into Intel's approach to providing hardware solutions for different AI workloads. Join us on this episode of Data Driven to learn more about the role of hardware in AI and Intel's contributions to the field.

Key Facts about the Speaker:

- Background in molecular biology research

- Transitioned to hacking and coding

- Started first company in 1995

- Mentored by Bruce Schneier

- Joined Intel in 2005

- Worked on projects related to antimalware technologies, cloud security, web security, and data science

- Transitioned to the federal team at Intel
