On this episode of Data Driven, the focus is on hardware from AI optimized chips to edge computing.
Frank and Andy interview Steven Orrin, the CTO of Intel Federal.
Intel has developed new CPU instructions to accelerate AI workloads, and FPGAs allow for faster development of custom applications with specific needs. Orrin emphasizes the importance of curating and wrangling data before jumping into machine learning and AI.
00:01:59 Hardware and software infrastructure for AI.
00:07:18 AI benchmarks show the importance of GPUs and CPUs.
00:14:08 Habana is Intel's two-chip strategy offering AI accelerator chips for training workflows and inferencing workloads: Goya handles inferencing, while Gaudi handles training. The chips are available in the Amazon cloud and in data centers, are geared for large-scale training and inference tasks, and scale with the architecture. Intel also offers CPUs with added instructions for AI workloads, as well as GPUs for specialized tasks. Custom approaches such as FPGAs and ASICs are gaining popularity, especially for edge computing, where low power and high performance are essential.
00:19:47 Intel's diverse team stays ahead of AI trends by collaborating with specialists and responding to industry needs. They have a large number of software engineers focused on optimizing software for Intel architecture, contributing to open source, and providing resources to help companies run their software efficiently. Intel's goal is to ensure that everyone's software runs smoothly and continues to raise the bar for the industry.
00:25:24 Moore's Law drives compute by reducing size. Cloud enables cost-effective edge use cases. Edge brings cloud capabilities to devices.
00:31:40 An FPGA is programmable hardware that allows customization. It has applications in AI and neuromorphic processing and is used in cellular and RF communications. FPGAs can be rapidly prototyped and deployed in the cloud.
00:41:09 Orrin started in biology, became a hacker, and joined Intel.
00:48:01 Coding as a viable and well-paying career.
00:55:50 Looking forward to image-to-code and augmented reality integration in daily life.
01:00:46 Tech show, similar to Halt and Catch Fire.
Topics Covered:
- The role of infrastructure in AI
- Hardware optimization for training and inferencing
- Intel's range of hardware solutions
- Importance of software infrastructure and collaboration with the open source community
- Introduction to Habana AI accelerator chips
- The concept of collapsing data into a single integer level
- Challenges and considerations in data collection and storage
- Explanation and future of FPGAs
- Moore's Law and its impact on compute
- The rise of edge computing and its benefits
- Bringing cloud capabilities to devices
- Importance of inference and decision-making on the device
- Challenges in achieving high performance and energy efficiency in edge computing
- The role of diverse teams in staying ahead in the AI world
- Overview of Intel Labs and their research domains
- Intel's software engineering capabilities and dedication to open source
- Intel as collaborators in the industry
- Importance of benchmarking across different AI types and stages
- The role of CPUs and GPUs in AI workloads
- Optimizing workload through software to hardware
- Importance of memory in memory-intensive activities
- Security mechanisms in FPGAs
- Programming and development advantages of FPGAs
- Resurgence of FPGAs in AI and other domains
Key Facts about the Speaker:
- Background in molecular biology research
- Transitioned to hacking and coding
- Started first company in 1995
- Mentored by Bruce Schneier
- Joined Intel in 2005
- Worked on projects related to antimalware technologies, cloud security, web security, and data science
- Transitioned to the federal team at Intel