Building Real-Time Data Platforms For Large Volumes Of Information With Aerospike
Publisher | Tobias Macey
Media Type | Audio
Podknife Tags | Data Science, Interview, Technology
Categories Via RSS | Technology
Publication Date | Oct 02, 2021
Episode Duration | 01:07:38

Summary

Aerospike is a database engine designed to provide millisecond response times for queries across terabytes or petabytes of data. In this episode, Chief Strategy Officer Lenley Hensarling explains how the ability to process these large volumes of information in real time allows businesses to unlock entirely new capabilities. He also discusses the technical implementation that enables such extreme performance and how the data model contributes to the scalability of the system. If you need to work with massive data, at high velocities, in milliseconds, then Aerospike is definitely worth learning about.

Announcements

  • Hello and welcome to the Data Engineering Podcast, the show about modern data management
  • When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
  • Atlan is a collaborative workspace for data-driven teams, like GitHub for engineering or Figma for design teams. By acting as a virtual hub for data assets ranging from tables and dashboards to SQL snippets & code, Atlan enables teams to create a single source of truth for all their data assets, and collaborate across the modern data stack through deep integrations with tools like Snowflake, Slack, Looker, and more. Go to dataengineeringpodcast.com/atlan today and sign up for a free trial. If you’re a data engineering podcast listener, you get credits worth $3000 on an annual subscription.
  • Modern data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days or even weeks. By the time errors have made their way into production, it’s often too late and damage is done. Datafold’s proactive approach to data quality helps data teams gain visibility and confidence in the quality of their analytical data through data profiling, column-level lineage and intelligent anomaly detection. Datafold also helps automate regression testing of ETL code with its Data Diff feature that instantly shows how a change in ETL or BI code affects the produced data, both on a statistical level and down to individual rows and values. Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows. Visit dataengineeringpodcast.com/datafold today to book a demo with Datafold.
  • Your host is Tobias Macey and today I’m interviewing Lenley Hensarling about Aerospike and building real-time data platforms

Interview

  • Introduction
  • How did you get involved in the area of data management?
  • Can you describe what Aerospike is and the story behind it?
    • What are the use cases that it is uniquely well suited for?
    • What are the use cases that you and the Aerospike team are focusing on, and how does that influence your priorities for feature development and user experience?
  • What are the driving factors for building a real-time data platform?
  • How is Aerospike being incorporated in application and data architectures?
  • Can you describe how the Aerospike engine is architected?
    • How have the design and architecture changed or evolved since it was first created?
    • How have market forces influenced the product priorities and focus?
  • What are the challenges that end users face when determining how to model their data given a key/value storage interface? (see the data modeling sketch after this list)
    • What are the abstraction layers that you and/or your users build to manage relational or hierarchical data architectures?
  • What are the operational characteristics of the Aerospike system? (e.g. deployment, scaling, CP vs AP, upgrades, clustering, etc.)
  • What are the most interesting, innovative, or unexpected ways that you have seen Aerospike used?
  • What are the most interesting, unexpected, or challenging lessons that you have learned while working on Aerospike?
  • When is Aerospike the wrong choice?
  • What do you have planned for the future of Aerospike?
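
Since several of these questions touch on modeling data over a key/value interface, here is a minimal, hypothetical sketch of what that looks like with the Aerospike Python client. The host address, namespace, set, and bin names are illustrative assumptions, not details taken from the episode.

```python
# Minimal sketch: key/value data modeling with the Aerospike Python client.
# Assumes a local Aerospike node on port 3000 and a namespace named "test";
# every key, set, and bin name below is illustrative.
import aerospike

config = {"hosts": [("127.0.0.1", 3000)]}
client = aerospike.client(config).connect()

# A record is addressed by a (namespace, set, user key) tuple.
key = ("test", "users", "user-1001")

# Bins behave like columns, but each record carries its own set of bins.
# Nested maps and lists are one common way to fold hierarchical or
# denormalized relational data into a single record.
client.put(key, {
    "name": "Ada Lovelace",
    "signup_ts": 1633132800,
    "prefs": {"theme": "dark", "notifications": True},
})

# Reads return a (key, metadata, bins) tuple.
_, meta, bins = client.get(key)
print(bins["name"], bins["prefs"]["theme"])

client.close()
```

The tradeoff the questions above hint at is that, without joins, relational structure is typically denormalized into nested bins like the prefs map here, or split across multiple records whose keys the application manages itself.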

Contact Info

Parting Question

  • From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Support Data Engineering Podcast
