427 episodes

This show goes behind the scenes for the tools, techniques, and difficulties associated with the discipline of data engineering. Databases, workflows, automation, and data manipulation are just some of the topics that you will find here.

Data Engineering Podcast
Tobias Macey

    • Technology
    • 4.7 • 127 Ratings

    Zenlytic Is Building You A Better Coworker With AI Agents

    Summary

    The purpose of business intelligence systems is to allow anyone in the business to access and decode data to help them make informed decisions. Unfortunately this often turns into an exercise in frustration for everyone involved due to complex workflows and hard-to-understand dashboards. The team at Zenlytic have leaned on the promise of large language models to build an AI agent that lets you converse with your data. In this episode they share their journey through the fast-moving landscape of generative AI and unpack the difference between an AI chatbot and an AI agent.


    Announcements


    Hello and welcome to the Data Engineering Podcast, the show about modern data management
    This episode is supported by Code Comments, an original podcast from Red Hat. As someone who listens to the Data Engineering Podcast, you know that the road from tool selection to production readiness is anything but smooth or straight. In Code Comments, host Jamie Parker, Red Hatter and experienced engineer, shares the journey of technologists from across the industry and their hard-won lessons in implementing new technologies. I listened to the recent episode "Transforming Your Database" and appreciated the valuable advice on how to approach the selection and integration of new databases in applications and the impact on team dynamics. There are 3 seasons of great episodes and new ones landing everywhere you listen to podcasts. Search for "Code Comments" in your podcast player or go to dataengineeringpodcast.com/codecomments today to subscribe. My thanks to the team at Code Comments for their support.
    Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst is an end-to-end data lakehouse platform built on Trino, the query engine Apache Iceberg was designed for, with complete support for all table formats including Apache Iceberg, Hive, and Delta Lake. Trusted by teams of all sizes, including Comcast and Doordash. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.
    Your host is Tobias Macey and today I'm interviewing Ryan Janssen and Paul Blankley about their experiences building AI powered agents for interacting with your data


    Interview


    Introduction
    How did you get involved in data? In AI?
    Can you describe what Zenlytic is and the role that AI is playing in your platform?
    What have been the key stages in your AI journey?


    What are some of the dead ends that you ran into along the path to where you are today?
    What are some of the persistent challenges that you are facing?

    Tell us more about data agents. First, what are data agents and why do you think they're important?
    How are data agents different from chatbots?
    Are data agents harder to build? How do you make them work in production?
    What other technical architectures have you had to develop to support the use of AI in Zenlytic?
    How have you approached the work of customer education as you introduce this functionality?
    What are some of the most interesting or persistent misconceptions that you have heard about what the AI can and can't do?
    How have you balanced accuracy/trustworthiness with user experience and flexibility in the conversational AI, given the potential for these models to create erroneous responses?
    What are the most interesting, innovative, or unexpected ways that you have seen your AI agent used?
    What are the most interesting, unexpected, or challenging lessons that you have learned while working on building an AI agent for business intelligence?
    When is an AI agent the wrong choice?
    What do you have planned for the future of AI in the Zenlytic product?


    Contact Info


    Ryan


    LinkedIn

    Paul


    LinkedIn



    Parting Question


    From your perspective, what is the biggest gap in the tooling or technology for data management today?


    Closing Announcements

    • 54 min
    Release Management For Data Platform Services And Logic

    Summary

    Building a data platform is a substantial engineering endeavor. Once it is running, the next challenge is figuring out how to address release management for all of the different component parts. The services and systems need to be kept up to date, but so does the code that controls their behavior. In this episode your host Tobias Macey reflects on his current challenges in this area and some of the factors that contribute to the complexity of the problem.


    Announcements


    Hello and welcome to the Data Engineering Podcast, the show about modern data management
    This episode is supported by Code Comments, an original podcast from Red Hat. As someone who listens to the Data Engineering Podcast, you know that the road from tool selection to production readiness is anything but smooth or straight. In Code Comments, host Jamie Parker, Red Hatter and experienced engineer, shares the journey of technologists from across the industry and their hard-won lessons in implementing new technologies. I listened to the recent episode "Transforming Your Database" and appreciated the valuable advice on how to approach the selection and integration of new databases in applications and the impact on team dynamics. There are 3 seasons of great episodes and new ones landing everywhere you listen to podcasts. Search for "Code Comments" in your podcast player or go to dataengineeringpodcast.com/codecomments today to subscribe. My thanks to the team at Code Comments for their support.
    Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst is an end-to-end data lakehouse platform built on Trino, the query engine Apache Iceberg was designed for, with complete support for all table formats including Apache Iceberg, Hive, and Delta Lake. Trusted by teams of all sizes, including Comcast and Doordash. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.
    Your host is Tobias Macey and today I want to talk about my experiences managing the QA and release management process of my data platform


    Interview


    Introduction
    As a team, our overall goal is to ensure that the production environment for our data platform is highly stable and reliable. This is the foundational element of establishing and maintaining trust with the consumers of our data. In order to support this effort, we need to ensure that only changes that have been tested and verified are promoted to production.
    Our current challenge is one that plagues all data teams. We want to have an environment that mirrors our production environment that is available for testing, but it’s not feasible to maintain a complete duplicate of all of the production data. Compounding that challenge is the fact that each of the components of our data platform interact with data in slightly different ways and need different processes for ensuring that changes are being promoted safely.


    Contact Info


    LinkedIn
    Website


    Closing Announcements


    Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning.
    Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
    If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.


    Links


    Data Platforms and Leaky Abstractions Episode
    Building A Data Platform From Scratch
    Airbyte


    Podcast Episode

    Trino
    dbt
    Starburst Galaxy
    Superset
    Dagster
    LakeFS


    Podcast Episode

    Nessie


    Podcast Episode

    Iceberg
    Snowflake
    LocalStack
    DSL == Domain Specific Language


    The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC B

    • 20 min
    Barking Up The Wrong GPTree: Building Better AI With A Cognitive Approach

    Summary

    Artificial intelligence has dominated the headlines for several months due to the successes of large language models. This has prompted numerous debates about the possibility of, and timeline for, artificial general intelligence (AGI). Peter Voss has dedicated decades of his life to the pursuit of truly intelligent software through the approach of cognitive AI. In this episode he explains his approach to building AI in a more human-like fashion and the emphasis on learning rather than statistical prediction.


    Announcements


    Hello and welcome to the Data Engineering Podcast, the show about modern data management
    Dagster offers a new approach to building and running data platforms and data pipelines. It is an open-source, cloud-native orchestrator for the whole development lifecycle, with integrated lineage and observability, a declarative programming model, and best-in-class testability. Your team can get up and running in minutes thanks to Dagster Cloud, an enterprise-class hosted solution that offers serverless and hybrid deployments, enhanced security, and on-demand ephemeral test deployments. Go to dataengineeringpodcast.com/dagster today to get started. Your first 30 days are free!
    Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.
    Your host is Tobias Macey and today I'm interviewing Peter Voss about what is involved in making your AI applications more "human"


    Interview


    Introduction
    How did you get involved in machine learning?
    Can you start by unpacking the idea of "human-like" AI?


    How does that contrast with the conception of "AGI"?

    The applications and limitations of GPT/LLM models have been dominating the popular conversation around AI. How do you see that impacting the overall ecosystem of ML/AI applications and investment?
    The fundamental/foundational challenge of every AI use case is sourcing appropriate data. What are the strategies that you have found useful to acquire, evaluate, and prepare data at an appropriate scale to build high quality models?
    What are the opportunities and limitations of causal modeling techniques for generalized AI models?
    As AI systems gain more sophistication there is a challenge with establishing and maintaining trust. What are the risks involved in deploying more human-level AI systems and monitoring their reliability?
    What are the practical/architectural methods necessary to build more cognitive AI systems?


    How would you characterize the ecosystem of tools/frameworks available for creating, evolving, and maintaining these applications?

    What are the most interesting, innovative, or unexpected ways that you have seen cognitive AI applied?
    What are the most interesting, unexpected, or challenging lessons that you have learned while working on designing/developing cognitive AI systems?
    When is cognitive AI the wrong choice?
    What do you have planned for the future of cognitive AI applications at Aigo?


    Contact Info


    LinkedIn
    Website


    Parting Question


    From your perspective, what is the biggest barrier to adoption of machine learning today?


    Closing Announcements


    Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning.

    • 54 min
    Build Your Second Brain One Piece At A Time

    Summary

    Generative AI promises to accelerate the productivity of human collaborators. Currently the primary way of working with these tools is through a conversational prompt, which is often cumbersome and unwieldy. In order to simplify the integration of AI capabilities into developer workflows Tsavo Knott helped create Pieces, a powerful collection of tools that complements the tools that developers already use. In this episode he explains the data collection and preparation process, the collection of model types and sizes that work together to power the experience, and how to incorporate it into your workflow to act as a second brain.


    Announcements


    Hello and welcome to the Data Engineering Podcast, the show about modern data management
    Dagster offers a new approach to building and running data platforms and data pipelines. It is an open-source, cloud-native orchestrator for the whole development lifecycle, with integrated lineage and observability, a declarative programming model, and best-in-class testability. Your team can get up and running in minutes thanks to Dagster Cloud, an enterprise-class hosted solution that offers serverless and hybrid deployments, enhanced security, and on-demand ephemeral test deployments. Go to dataengineeringpodcast.com/dagster today to get started. Your first 30 days are free!
    Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.
    Your host is Tobias Macey and today I'm interviewing Tsavo Knott about Pieces, a personal AI toolkit to improve the efficiency of developers


    Interview


    Introduction
    How did you get involved in machine learning?
    Can you describe what Pieces is and the story behind it?
    The past few months have seen an endless series of personalized AI tools launched. What are the features and focus of Pieces that might encourage someone to use it over the alternatives?
    Model selection
    Architecture of the Pieces application
    Local vs. hybrid vs. online models
    Model update/delivery process
    Data preparation/serving for models in the context of the Pieces app
    Application of AI to developer workflows
    Types of workflows that people are building with Pieces
    What are the most interesting, innovative, or unexpected ways that you have seen Pieces used?
    What are the most interesting, unexpected, or challenging lessons that you have learned while working on Pieces?
    When is Pieces the wrong choice?
    What do you have planned for the future of Pieces?


    Contact Info


    LinkedIn


    Parting Question


    From your perspective, what is the biggest barrier to adoption of machine learning today?


    Closing Announcements


    Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning.
    Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
    If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.


    Links


    Pieces
    NPU == Neural Processing Unit
    Tensor Chip
    LoRA == Low Rank Adaptation
    Generat

    • 50 min
    Making Email Better With AI At Shortwave

    Summary

    Generative AI has rapidly transformed everything in the technology sector. When Andrew Lee started work on Shortwave he was focused on making email more productive. When AI started gaining adoption he realized that there was even greater potential for a transformative experience. In this episode he shares the technical challenges that he and his team have overcome in integrating AI into their product, as well as the benefits and features that it provides to their customers.


    Announcements


    Hello and welcome to the Data Engineering Podcast, the show about modern data management
    Dagster offers a new approach to building and running data platforms and data pipelines. It is an open-source, cloud-native orchestrator for the whole development lifecycle, with integrated lineage and observability, a declarative programming model, and best-in-class testability. Your team can get up and running in minutes thanks to Dagster Cloud, an enterprise-class hosted solution that offers serverless and hybrid deployments, enhanced security, and on-demand ephemeral test deployments. Go to dataengineeringpodcast.com/dagster today to get started. Your first 30 days are free!
    Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.
    Your host is Tobias Macey and today I'm interviewing Andrew Lee about his work on Shortwave, an AI powered email client


    Interview


    Introduction
    How did you get involved in the area of data management?
    Can you describe what Shortwave is and the story behind it?


    What is the core problem that you are addressing with Shortwave?

    Email has been a central part of communication and business productivity for decades now. What are the overall themes that continue to be problematic?
    What are the strengths that email maintains as a protocol and ecosystem?
    From a product perspective, what are the data challenges that are posed by email?
    Can you describe how you have architected the Shortwave platform?


    How have the design and goals of the product changed since you started it?
    What are the ways that the advent and evolution of language models have influenced your product roadmap?

    How do you manage the personalization of the AI functionality in your system for each user/team?
    For users and teams who are using Shortwave, how does it change their workflow and communication patterns?
    Can you describe how I would use Shortwave for managing the workflow of evaluating, planning, and promoting my podcast episodes?
    What are the most interesting, innovative, or unexpected ways that you have seen Shortwave used?
    What are the most interesting, unexpected, or challenging lessons that you have learned while working on Shortwave?
    When is Shortwave the wrong choice?
    What do you have planned for the future of Shortwave?


    Contact Info


    LinkedIn
    Blog


    Parting Question


    From your perspective, what is the biggest gap in the tooling or technology for data management today?


    Closing Announcements


    Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning.

    • 53 min
    Designing A Non-Relational Database Engine

    Summary

    Databases come in a variety of formats for different use cases. The default association with the term "database" is relational engines, but non-relational engines are also used quite widely. In this episode Oren Eini, CEO and creator of RavenDB, explores the nuances of relational vs. non-relational engines, and the strategies for designing a non-relational database.


    Announcements


    Hello and welcome to the Data Engineering Podcast, the show about modern data management
    This episode is brought to you by Datafold – a testing automation platform for data engineers that prevents data quality issues from entering every part of your data workflow, from migration to dbt deployment. Datafold has recently launched data replication testing, providing ongoing validation for source-to-target replication. Leverage Datafold's fast cross-database data diffing and Monitoring to test your replication pipelines automatically and continuously. Validate consistency between source and target at any scale, and receive alerts about any discrepancies. Learn more about Datafold by visiting dataengineeringpodcast.com/datafold.
    Dagster offers a new approach to building and running data platforms and data pipelines. It is an open-source, cloud-native orchestrator for the whole development lifecycle, with integrated lineage and observability, a declarative programming model, and best-in-class testability. Your team can get up and running in minutes thanks to Dagster Cloud, an enterprise-class hosted solution that offers serverless and hybrid deployments, enhanced security, and on-demand ephemeral test deployments. Go to dataengineeringpodcast.com/dagster today to get started. Your first 30 days are free!
    Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.
    Your host is Tobias Macey and today I'm interviewing Oren Eini about the work of designing and building a NoSQL database engine


    Interview


    Introduction
    How did you get involved in the area of data management?
    Can you describe what constitutes a NoSQL database?


    How have the requirements and applications of NoSQL engines changed since they first became popular ~15 years ago?

    What are the factors that convince teams to use a NoSQL vs. SQL database?


    NoSQL is a generalized term that encompasses a number of different data models. How does the underlying representation (e.g. document, K/V, graph) change that calculus?

    How have the evolution in data formats (e.g. N-dimensional vectors, point clouds, etc.) changed the landscape for NoSQL engines?
    When designing and building a database, what are the initial set of questions that need to be answered?


    How many "core capabilities" can you reasonably design around before they conflict with each other?

    How have you approached the evolution of RavenDB as you add new capabilities and mature the project?


    What are some of the early decisions that had to be unwound to enable new capabilities?

    If you were to start from scratch today, what database would you build?
    What are the most interesting, innovative, or unexpected ways that you have seen RavenDB/NoSQL databases used?
    What are the most interesting, unexpected, or challenging lessons that you have learned while working on RavenDB?

    • 1 hr 16 min

Customer Reviews

4.7 out of 5
127 Ratings

127 Ratings

Googleduser,

Interesting topics, guests

Tobias does a great job covering the future of data engineering - practical tips, the future of the industry with the founders of new tools, and no-nonsense advice on how to build data pipelines, viz, and process that will scale.

Fkn2013,

Azure

I really enjoy this podcast and learn a lot from it. I wonder why none of the data tools in Azure are ever mentioned.

Thanks

SteveT3ch,

Best Data Engineering Podcast

Found this podcast by accident and now can’t do without it. Very knowledgeable host and guests.

Top Podcasts In Technology

Acquired
Ben Gilbert and David Rosenthal
No Priors: Artificial Intelligence | Technology | Startups
Conviction | Pod People
Lex Fridman Podcast
Lex Fridman
All-In with Chamath, Jason, Sacks & Friedberg
All-In Podcast, LLC
Hard Fork
The New York Times
TED Radio Hour
NPR

You Might Also Like

Talk Python To Me
Michael Kennedy (@mkennedy)
DataFramed
DataCamp
Software Engineering Daily
Software Engineering Daily
Super Data Science: ML & AI Podcast with Jon Krohn
Jon Krohn
Data Skeptic
Kyle Polich
The Real Python Podcast
Real Python