Data Engineering Podcast Tobias Macey
- Technology
This show goes behind the scenes for the tools, techniques, and difficulties associated with the discipline of data engineering. Databases, workflows, automation, and data manipulation are just some of the topics that you will find here.
Being Data Driven At Stripe With Trino And Iceberg
Summary
Stripe is a company that relies on data to power their products and business. To support that functionality they have invested in Trino and Iceberg for their analytical workloads. In this episode Kevin Liu shares some of the interesting features that they have built by combining those technologies, as well as the challenges that they face in supporting the myriad workloads that are thrown at this layer of their data platform.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst is an end-to-end data lakehouse platform built on Trino, the query engine Apache Iceberg was designed for, with complete support for all table formats including Apache Iceberg, Hive, and Delta Lake. Trusted by teams of all sizes, including Comcast and Doordash. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.
Your host is Tobias Macey and today I'm interviewing Kevin Liu about his use of Trino and Iceberg for Stripe's data lakehouse
Interview
Introduction
How did you get involved in the area of data management?
Can you describe what role Trino and Iceberg play in Stripe's data architecture?
What are the ways in which your job responsibilities intersect with Stripe's lakehouse infrastructure?
What were the requirements and selection criteria that led to the selection of that combination of technologies?
What are the other systems that feed into and rely on the Trino/Iceberg service?
What kinds of questions are you answering with table metadata? (a brief sketch of querying these metadata tables appears after this question list)
What use cases and teams does that support?
What is the comparative utility of the Iceberg REST catalog?
What are the shortcomings of Trino and Iceberg?
What are the most interesting, innovative, or unexpected ways that you have seen Iceberg/Trino used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Stripe's data infrastructure?
When is a lakehouse on Trino/Iceberg the wrong choice?
What do you have planned for the future of Trino and Iceberg at Stripe?
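As referenced in the question list above, Trino and Iceberg expose table metadata that can be queried directly. The sketch below is a generic illustration rather than Stripe's code: the hosts, credentials, schema, and table names are placeholders. It shows Trino's hidden Iceberg metadata tables (for example "payments$snapshots") alongside PyIceberg connecting to an Iceberg REST catalog.

```python
# Hypothetical sketch: inspecting Iceberg table metadata two ways.
# Hosts, ports, schema, and table names are placeholders, not Stripe's setup.
import trino
from pyiceberg.catalog import load_catalog

# 1) Trino's Iceberg connector exposes hidden metadata tables ("$snapshots", "$files", ...)
conn = trino.dbapi.connect(
    host="trino.example.com", port=8080, user="analyst",
    catalog="iceberg", schema="analytics",
)
cur = conn.cursor()
cur.execute('SELECT snapshot_id, committed_at, operation FROM "payments$snapshots"')
for snapshot_id, committed_at, operation in cur.fetchall():
    print(snapshot_id, committed_at, operation)

# 2) PyIceberg can read the same table metadata straight from an Iceberg REST catalog
catalog = load_catalog("rest", **{"type": "rest", "uri": "http://iceberg-rest.example.com:8181"})
table = catalog.load_table("analytics.payments")
print(table.schema())
print(table.current_snapshot())  # most recent snapshot, or None for an empty table
```

Metadata queries like the "$snapshots" one answer operational questions (what changed, when, and via which operation) without scanning the underlying data files.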
Contact Info
Substack
LinkedIn
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
Links
Trino
Iceberg
Stripe
Spark
Redshift
Hive Metastore
Python Iceberg
Python Iceberg REST Catalog
Trino Metadata Table
Flink
Podcast Episode
Tabular
Podcast Episode
Delta Table
Podcast Episode
Databricks Unity Catalog
Starburst
AWS Athena
Kevin Trinofest Presentation
Alluxio
Podcast Episode
Parquet
Hudi
Trino Project Tardigrade
Trino On Ice
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Sponsored By:
Starburst: ![Starburst Logo](https://files.fireside.fm/file/fireside-uploads/images/c/c6161a3f-a67b-48ef-b087-52f1f1573292/UpvN7wDT.png)
This episode is brought to you by Starburst - an end-to-end data lakehouse platform for data engineers who are battling to build and scale high quality data pipelines on the data lake. Powered by Trino, the query engine Apache Iceberg was designed for, Starburst is an open platform with support for all table formats including Apache Iceberg, Hive, and Delta Lake.
Trusted by the teams at Comcast and Doordash.
X-Ray Vision For Your Flink Stream Processing With Datorios
Summary
Streaming data processing enables new categories of data products and analytics. Unfortunately, reasoning about stream processing engines is complex and lacks sufficient tooling. To address this shortcoming Datorios created an observability platform for Flink that brings visibility to the internals of this popular stream processing system. In this episode Ronen Korman and Stav Elkayam discuss how the increased understanding provided by purpose built observability improves the usefulness of Flink.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
This episode is supported by Code Comments, an original podcast from Red Hat. As someone who listens to the Data Engineering Podcast, you know that the road from tool selection to production readiness is anything but smooth or straight. In Code Comments, host Jamie Parker, Red Hatter and experienced engineer, shares the journey of technologists from across the industry and their hard-won lessons in implementing new technologies. I listened to the recent episode "Transforming Your Database" and appreciated the valuable advice on how to approach the selection and integration of new databases in applications and the impact on team dynamics. There are 3 seasons of great episodes and new ones landing everywhere you listen to podcasts. Search for "Code Comments" in your podcast player or go to dataengineeringpodcast.com/codecomments today to subscribe. My thanks to the team at Code Comments for their support.
Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst is an end-to-end data lakehouse platform built on Trino, the query engine Apache Iceberg was designed for, with complete support for all table formats including Apache Iceberg, Hive, and Delta Lake. Trusted by teams of all sizes, including Comcast and Doordash. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.
Your host is Tobias Macey and today I'm interviewing Ronen Korman and Stav Elkayam about pulling back the curtain on your real-time data streams by bringing intuitive observability to Flink streams
Interview
Introduction
How did you get involved in the area of data management?
Can you describe what Datorios is and the story behind it?
Data observability has been gaining adoption for a number of years now, with a large focus on data warehouses. What are some of the unique challenges posed by Flink?
How much of the complexity is due to the nature of streaming data vs. the architectural realities of Flink?
How has the lack of visibility into the flow of data in Flink impacted the ways that teams think about where/when/how to apply it? (see the minimal job sketch after this list)
How have the requirements of generative AI shifted the demand for streaming data systems?
What role does Flink play in the architecture of generative AI systems?
Can you describe how Datorios is implemented?
How has the design and goals of Datorios changed since you first started working on it?
How much of the Datorios architecture and functionality is specific to Flink and how are you thinking about its potential application to other streaming platforms?
Can you describe how Datorios is used in a day-to-day workflow for someone building streaming applications on Flink?
What are the most interesting, innovative, or unexpected ways that you have seen Datorios used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Datorios?
When is Datorios the wrong choice?
What do you have planned for the future of Datorios?
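As referenced in the question list, here is a deliberately small PyFlink job, included only to make the visibility gap concrete; it is unrelated to how Datorios is implemented, which is not public. Once submitted, the records moving between the map, key-by, and reduce operators live inside Flink's runtime and are not observable from the outside without additional instrumentation, which is the gap purpose-built observability tooling aims to close.

```python
# Minimal PyFlink job used only to illustrate where visibility is lost.
# Everything between the source and the sink is internal to Flink's operators.
from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()
env.set_parallelism(1)

events = env.from_collection([("checkout", 1), ("search", 1), ("checkout", 1)])

counts = (
    events
    .map(lambda e: (e[0], e[1]))               # per-record transformation
    .key_by(lambda e: e[0])                    # state is partitioned by key here
    .reduce(lambda a, b: (a[0], a[1] + b[1]))  # running count held in operator state
)

counts.print()  # only the final sink output is visible by default
env.execute("visibility_demo")
```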
Contact Info
Ronen
LinkedIn
Stav
LinkedIn
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning.
Practical First Steps In Data Governance For Long Term Success
Summary
Modern businesses aspire to be data driven, and technologists enjoy working through the challenge of building data systems to support that goal. Data governance is the binding force between these two parts of the organization. Nicola Askham found her way into data governance by accident, and stayed because of the benefit that she was able to provide by serving as a bridge between the technology and business. In this episode she shares the practical steps to implementing a data governance practice in your organization, and the pitfalls to avoid.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst is an end-to-end data lakehouse platform built on Trino, the query engine Apache Iceberg was designed for, with complete support for all table formats including Apache Iceberg, Hive, and Delta Lake. Trusted by teams of all sizes, including Comcast and Doordash. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.
This episode is supported by Code Comments, an original podcast from Red Hat. As someone who listens to the Data Engineering Podcast, you know that the road from tool selection to production readiness is anything but smooth or straight. In Code Comments, host Jamie Parker, Red Hatter and experienced engineer, shares the journey of technologists from across the industry and their hard-won lessons in implementing new technologies. I listened to the recent episode "Transforming Your Database" and appreciated the valuable advice on how to approach the selection and integration of new databases in applications and the impact on team dynamics. There are 3 seasons of great episodes and new ones landing everywhere you listen to podcasts. Search for "Code Comments" in your podcast player or go to dataengineeringpodcast.com/codecomments today to subscribe. My thanks to the team at Code Comments for their support.
Your host is Tobias Macey and today I'm interviewing Nicola Askham about the practical steps of building out a data governance practice in your organization
Interview
Introduction
How did you get involved in the area of data management?
Can you start by giving an overview of the scope and boundaries of data governance in an organization?
At what point does a lack of an explicit governance policy become a liability?
What are some of the misconceptions that you encounter about data governance?
What impact has the evolution of data technologies had on the implementation of governance practices? (e.g. number/scale of systems, types of data, AI)
Data governance can often become an exercise in boiling the ocean. What are the concrete first steps that will increase the success rate of a governance practice?
Once a data governance project is underway, what are some of the common roadblocks that might derail progress?
What are the net benefits to the data team and the organization when a data governance practice is established, active, and healthy?
What are the most interesting, innovative, or unexpected ways that you have seen data governance applied?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on data governance/training/coaching?
What are some of the pitfalls in data governance?
What are some of the future trends in data governance that you are excited by?
Are there any trends that concern you?
Contact Info
Website
LinkedIn
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning.
Data Migration Strategies For Large Scale Systems
Summary
Any software system that survives long enough will require some form of migration or evolution. When that system is responsible for the data layer the process becomes more challenging. Sriram Panyam has been involved in several projects that required migration of large volumes of data in high traffic environments. In this episode he shares some of the valuable lessons that he learned about how to make those projects successful.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst is an end-to-end data lakehouse platform built on Trino, the query engine Apache Iceberg was designed for, with complete support for all table formats including Apache Iceberg, Hive, and Delta Lake. Trusted by teams of all sizes, including Comcast and Doordash. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.
This episode is supported by Code Comments, an original podcast from Red Hat. As someone who listens to the Data Engineering Podcast, you know that the road from tool selection to production readiness is anything but smooth or straight. In Code Comments, host Jamie Parker, Red Hatter and experienced engineer, shares the journey of technologists from across the industry and their hard-won lessons in implementing new technologies. I listened to the recent episode "Transforming Your Database" and appreciated the valuable advice on how to approach the selection and integration of new databases in applications and the impact on team dynamics. There are 3 seasons of great episodes and new ones landing everywhere you listen to podcasts. Search for "Code Comments" in your podcast player or go to dataengineeringpodcast.com/codecomments today to subscribe. My thanks to the team at Code Comments for their support.
Your host is Tobias Macey and today I'm interviewing Sriram Panyam about his experiences conducting large scale data migrations and the useful strategies that he learned in the process
Interview
Introduction
How did you get involved in the area of data management?
Can you start by sharing some of your experiences with data migration projects?
As you have gone through successive migration projects, how has that influenced the ways that you think about architecting data systems?
How would you categorize the different types and motivations of migrations?
How does the motivation for a migration influence the ways that you plan for and execute that work?
Can you talk us through one or two specific projects that you have taken part in?
Part 1: The Triggers
Section 1: Technical Limitations triggering Data Migration
Scaling bottlenecks: Performance issues with databases, storage, or network infrastructure
Legacy compatibility: Difficulties integrating with modern tools and cloud platforms
System upgrades: The need to migrate data during major software changes (e.g., SQL Server version upgrade)
Section 2: Types of Migrations for Infrastructure Focus
Storage migration: Moving data between systems (HDD to SSD, SAN to NAS, etc.)
Data center migration: Physical relocation or consolidation of data centers
Virtualization migration: Moving from physical servers to virtual machines (or vice versa)
Section 3: Technical Decisions Driving Data Migrations
End-of-life support: Forced migration when older software or hardware is sunsetted
Security and compliance: Adopting new platforms with better security postures
Cost Optimization: Potential savings of cloud vs. on-premise data centers
Part 2: Challenges (and Anxieties)
Section 1: Technical Challenges
Data transformation challenges: Schema changes, complex data mappings
Network bandwidth and latency: Transferring large datasets efficiently (see the backfill sketch after this outline)
Performance -
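As a generic illustration of the "transferring large datasets" and "schema changes" points above (this is not Sriram's tooling, and sqlite3 only stands in for the real source and target engines, with hypothetical table and column names), large-table moves are often done as a resumable, keyset-paginated backfill so that a failure midway does not force a restart from zero:

```python
# Hedged sketch of a resumable, batched backfill with keyset pagination.
# sqlite3 is a stand-in; the "events" table and its columns are hypothetical.
import sqlite3

BATCH_SIZE = 1_000

def backfill(source: sqlite3.Connection, target: sqlite3.Connection) -> int:
    """Copy rows in id order; the last copied id doubles as a resume checkpoint."""
    copied = 0
    last_id = 0  # persist this checkpoint externally in a real migration
    while True:
        rows = source.execute(
            "SELECT id, payload FROM events WHERE id > ? ORDER BY id LIMIT ?",
            (last_id, BATCH_SIZE),
        ).fetchall()
        if not rows:
            return copied
        target.executemany(
            "INSERT OR REPLACE INTO events (id, payload) VALUES (?, ?)", rows
        )
        target.commit()  # small commits keep locks and retries cheap on both sides
        last_id = rows[-1][0]
        copied += len(rows)
```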
Zenlytic Is Building You A Better Coworker With AI Agents
Summary
The purpose of business intelligence systems is to allow anyone in the business to access and decode data to help them make informed decisions. Unfortunately this often turns into an exercise in frustration for everyone involved due to complex workflows and hard-to-understand dashboards. The team at Zenlytic have leaned on the promise of large language models to build an AI agent that lets you converse with your data. In this episode they share their journey through the fast-moving landscape of generative AI and unpack the difference between an AI chatbot and an AI agent.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
This episode is supported by Code Comments, an original podcast from Red Hat. As someone who listens to the Data Engineering Podcast, you know that the road from tool selection to production readiness is anything but smooth or straight. In Code Comments, host Jamie Parker, Red Hatter and experienced engineer, shares the journey of technologists from across the industry and their hard-won lessons in implementing new technologies. I listened to the recent episode "Transforming Your Database" and appreciated the valuable advice on how to approach the selection and integration of new databases in applications and the impact on team dynamics. There are 3 seasons of great episodes and new ones landing everywhere you listen to podcasts. Search for "Code Comments" in your podcast player or go to dataengineeringpodcast.com/codecomments today to subscribe. My thanks to the team at Code Comments for their support.
Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst is an end-to-end data lakehouse platform built on Trino, the query engine Apache Iceberg was designed for, with complete support for all table formats including Apache Iceberg, Hive, and Delta Lake. Trusted by teams of all sizes, including Comcast and Doordash. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.
Your host is Tobias Macey and today I'm interviewing Ryan Janssen and Paul Blankley about their experiences building AI powered agents for interacting with your data
Interview
Introduction
How did you get involved in data? In AI?
Can you describe what Zenlytic is and the role that AI is playing in your platform?
What have been the key stages in your AI journey?
What are some of the dead ends that you ran into along the path to where you are today?
What are some of the persistent challenges that you are facing?
So tell us more about data agents. Firstly, what are data agents and why do you think they're important?
How are data agents different from chatbots? (see the sketch after this list)
Are data agents harder to build? How do you make them work in production?
What other technical architectures have you had to develop to support the use of AI in Zenlytic?
How have you approached the work of customer education as you introduce this functionality?
What are some of the most interesting misconceptions that you have heard about what the AI can and can't do?
How have you balanced accuracy/trustworthiness with user experience and flexibility in the conversational AI, given the potential for these models to create erroneous responses?
What are the most interesting, innovative, or unexpected ways that you have seen your AI agent used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on building an AI agent for business intelligence?
When is an AI agent the wrong choice?
What do you have planned for the future of AI in the Zenlytic product?
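One way to picture the chatbot-versus-agent distinction raised in the questions above: a chatbot returns a single response, while an agent loops, calling tools such as SQL execution and feeding the results back until it can answer or gives up. The sketch below is a generic illustration, not Zenlytic's architecture, and `call_llm` and `run_sql` are hypothetical stand-ins for a model API and a warehouse client.

```python
# Illustrative agent loop; not Zenlytic's implementation.
# call_llm and run_sql are hypothetical callables supplied by the caller.
from typing import Callable

def run_data_agent(
    question: str,
    call_llm: Callable[[str], dict],  # returns e.g. {"action": "sql", "query": ...} or {"action": "answer", "text": ...}
    run_sql: Callable[[str], list],   # executes a query against the warehouse
    max_steps: int = 5,
) -> str:
    context = f"User question: {question}"
    for _ in range(max_steps):        # bound the loop so a confused model cannot spin forever
        step = call_llm(context)
        if step["action"] == "answer":
            return step["text"]       # a chatbot would stop after one of these
        rows = run_sql(step["query"])  # agent behavior: act, observe the result, continue
        context += f"\nQuery: {step['query']}\nResult: {rows!r}"
    return "Could not answer within the step budget."
```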
Contact Info
Ryan
LinkedIn
Paul
LinkedIn
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Release Management For Data Platform Services And Logic
Summary
Building a data platform is a substantial engineering endeavor. Once it is running, the next challenge is figuring out how to address release management for all of the different component parts. The services and systems need to be kept up to date, but so does the code that controls their behavior. In this episode your host Tobias Macey reflects on his current challenges in this area and some of the factors that contribute to the complexity of the problem.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
This episode is supported by Code Comments, an original podcast from Red Hat. As someone who listens to the Data Engineering Podcast, you know that the road from tool selection to production readiness is anything but smooth or straight. In Code Comments, host Jamie Parker, Red Hatter and experienced engineer, shares the journey of technologists from across the industry and their hard-won lessons in implementing new technologies. I listened to the recent episode "Transforming Your Database" and appreciated the valuable advice on how to approach the selection and integration of new databases in applications and the impact on team dynamics. There are 3 seasons of great episodes and new ones landing everywhere you listen to podcasts. Search for "Code Comments" in your podcast player or go to dataengineeringpodcast.com/codecomments today to subscribe. My thanks to the team at Code Comments for their support.
Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst is an end-to-end data lakehouse platform built on Trino, the query engine Apache Iceberg was designed for, with complete support for all table formats including Apache Iceberg, Hive, and Delta Lake. Trusted by teams of all sizes, including Comcast and Doordash. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.
Your host is Tobias Macey and today I want to talk about my experiences managing the QA and release management process of my data platform
Interview
Introduction
As a team, our overall goal is to ensure that the production environment for our data platform is highly stable and reliable. This is the foundational element of establishing and maintaining trust with the consumers of our data. In order to support this effort, we need to ensure that only changes that have been tested and verified are promoted to production.
Our current challenge is one that plagues all data teams. We want an environment that mirrors production and is available for testing, but it's not feasible to maintain a complete duplicate of all of the production data. Compounding that challenge is the fact that each component of our data platform interacts with data in slightly different ways and needs different processes for ensuring that changes are being promoted safely.
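One common workaround, sketched here purely as an illustration (the catalog, schema, and table names are placeholders, and this is not necessarily how my platform does it), is to materialize a small random sample of each production table into the test environment rather than copying everything:

```python
# Hypothetical sketch: build a 1% sampled copy of a production table for testing.
# Trino's TABLESAMPLE makes this far cheaper than duplicating the full dataset.
import trino

conn = trino.dbapi.connect(
    host="trino.example.com", port=8080, user="platform",
    catalog="iceberg", schema="test",
)
cur = conn.cursor()
cur.execute(
    """
    CREATE TABLE IF NOT EXISTS test.orders_sample AS
    SELECT * FROM prod.orders TABLESAMPLE BERNOULLI (1)
    """
)
cur.fetchall()  # the trino client fetches results lazily; fetching completes the CTAS
```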
Contact Info
LinkedIn
Website
Closing Announcements
Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
Links
Data Platforms and Leaky Abstractions Episode
Building A Data Platform From Scratch
Airbyte
Podcast Episode
Trino
dbt
Starburst Galaxy
Superset
Dagster
LakeFS
Podcast Episode
Nessie
Podcast Episode
Iceberg
Snowflake
LocalStack
DSL == Domain Specific Language
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA