Data Engineering Podcast
Tobias Macey

    • Technology
    • 5.0 • 2 ratings
    • 423 episodes

This show goes behind the scenes for the tools, techniques, and difficulties associated with the discipline of data engineering. Databases, workflows, automation, and data manipulation are just some of the topics that you will find here.

    Making Email Better With AI At Shortwave

    Summary

    Generative AI has rapidly transformed the technology sector. When Andrew Lee started work on Shortwave he was focused on making email more productive. When AI started gaining adoption he realized that there was even greater potential for a transformative experience. In this episode he shares the technical challenges that he and his team have overcome in integrating AI into their product, as well as the benefits and features that it provides to their customers.
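
    A common architectural pattern for this kind of AI email assistant is retrieval-augmented generation: rank the user's messages by relevance to a question and feed the best matches to a language model as context. The sketch below is a deliberately toy Python illustration of that retrieval step, with bag-of-words similarity standing in for real embeddings; it is not Shortwave's actual implementation, and all data is invented.

        # Toy retrieval over an email corpus: rank messages by cosine similarity
        # of bag-of-words vectors to a query. A production assistant would use
        # learned embeddings and a vector index; this only shows the shape of
        # the retrieval step that selects context for a language model.
        import math
        from collections import Counter

        def vectorize(text: str) -> Counter:
            """Bag-of-words term counts, lowercased."""
            return Counter(text.lower().split())

        def cosine(a: Counter, b: Counter) -> float:
            dot = sum(a[t] * b[t] for t in a)
            norm = math.sqrt(sum(v * v for v in a.values())) \
                 * math.sqrt(sum(v * v for v in b.values()))
            return dot / norm if norm else 0.0

        emails = [
            "Podcast episode draft is ready for review",
            "Invoice for March hosting costs attached",
            "Schedule for next week's interview recording",
        ]

        query = "when is the interview recording"
        qv = vectorize(query)
        ranked = sorted(emails, key=lambda e: cosine(qv, vectorize(e)), reverse=True)
        print(ranked[0])  # the most relevant email becomes context for the model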


    Announcements


    Hello and welcome to the Data Engineering Podcast, the show about modern data management
    Dagster offers a new approach to building and running data platforms and data pipelines. It is an open-source, cloud-native orchestrator for the whole development lifecycle, with integrated lineage and observability, a declarative programming model, and best-in-class testability. Your team can get up and running in minutes thanks to Dagster Cloud, an enterprise-class hosted solution that offers serverless and hybrid deployments, enhanced security, and on-demand ephemeral test deployments. Go to dataengineeringpodcast.com/dagster today to get started. Your first 30 days are free!
    Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.
    Your host is Tobias Macey and today I'm interviewing Andrew Lee about his work on Shortwave, an AI powered email client


    Interview


    Introduction
    How did you get involved in the area of data management?
    Can you describe what Shortwave is and the story behind it?


    What is the core problem that you are addressing with Shortwave?

    Email has been a central part of communication and business productivity for decades now. What are the overall themes that continue to be problematic?
    What are the strengths that email maintains as a protocol and ecosystem?
    From a product perspective, what are the data challenges that are posed by email?
    Can you describe how you have architected the Shortwave platform?


    How have the design and goals of the product changed since you started it?
    What are the ways that the advent and evolution of language models have influenced your product roadmap?

    How do you manage the personalization of the AI functionality in your system for each user/team?
    For users and teams who are using Shortwave, how does it change their workflow and communication patterns?
    Can you describe how I would use Shortwave for managing the workflow of evaluating, planning, and promoting my podcast episodes?
    What are the most interesting, innovative, or unexpected ways that you have seen Shortwave used?
    What are the most interesting, unexpected, or challenging lessons that you have learned while working on Shortwave?
    When is Shortwave the wrong choice?
    What do you have planned for the future of Shortwave?


    Contact Info


    LinkedIn
    Blog


    Parting Question


    From your perspective, what is the biggest gap in the tooling or technology for data management today?


    Closing Announcements


    Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning.

    • 53 min.
    Designing A Non-Relational Database Engine

    Summary

    Databases come in a variety of formats for different use cases. The default association with the term "database" is relational engines, but non-relational engines are also used quite widely. In this episode Oren Eini, CEO and creator of RavenDB, explores the nuances of relational vs. non-relational engines, and the strategies for designing a non-relational database.
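
    To make the relational vs. non-relational contrast concrete, the sketch below is a minimal in-memory document store in Python: documents are schemaless dictionaries keyed by id, and queries are predicates over their contents. This is purely illustrative and says nothing about how RavenDB is actually implemented.

        # Minimal in-memory document store: each document is a schemaless dict
        # keyed by id, unlike a relational table where every row shares one
        # fixed schema. A real engine adds persistence, indexes, and
        # transactions; queries here are full scans for simplicity.
        import uuid

        class DocumentStore:
            def __init__(self) -> None:
                self._docs: dict[str, dict] = {}

            def put(self, doc: dict, doc_id: str | None = None) -> str:
                doc_id = doc_id or str(uuid.uuid4())
                self._docs[doc_id] = doc
                return doc_id

            def get(self, doc_id: str) -> dict | None:
                return self._docs.get(doc_id)

            def query(self, predicate) -> list[dict]:
                # Full scan; a production engine would answer from an index.
                return [d for d in self._docs.values() if predicate(d)]

        store = DocumentStore()
        store.put({"name": "Ada", "roles": ["engineer", "author"]})    # nested values,
        store.put({"name": "Linus", "email": "linus@example.com"})     # no shared schema
        print(store.query(lambda d: "engineer" in d.get("roles", [])))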


    Announcements


    Hello and welcome to the Data Engineering Podcast, the show about modern data management
    This episode is brought to you by Datafold – a testing automation platform for data engineers that prevents data quality issues from entering every part of your data workflow, from migration to dbt deployment. Datafold has recently launched data replication testing, providing ongoing validation for source-to-target replication. Leverage Datafold's fast cross-database data diffing and monitoring to test your replication pipelines automatically and continuously. Validate consistency between source and target at any scale, and receive alerts about any discrepancies. Learn more about Datafold by visiting dataengineeringpodcast.com/datafold.
    Dagster offers a new approach to building and running data platforms and data pipelines. It is an open-source, cloud-native orchestrator for the whole development lifecycle, with integrated lineage and observability, a declarative programming model, and best-in-class testability. Your team can get up and running in minutes thanks to Dagster Cloud, an enterprise-class hosted solution that offers serverless and hybrid deployments, enhanced security, and on-demand ephemeral test deployments. Go to dataengineeringpodcast.com/dagster today to get started. Your first 30 days are free!
    Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.
    Your host is Tobias Macey and today I'm interviewing Oren Eini about the work of designing and building a NoSQL database engine


    Interview


    Introduction
    How did you get involved in the area of data management?
    Can you describe what constitutes a NoSQL database?


    How have the requirements and applications of NoSQL engines changed since they first became popular ~15 years ago?

    What are the factors that convince teams to use a NoSQL vs. SQL database?


    NoSQL is a generalized term that encompasses a number of different data models. How does the underlying representation (e.g. document, K/V, graph) change that calculus?

    How have the evolution in data formats (e.g. N-dimensional vectors, point clouds, etc.) changed the landscape for NoSQL engines?
    When designing and building a database, what are the initial set of questions that need to be answered?


    How many "core capabilities" can you reasonably design around before they conflict with each other?

    How have you approached the evolution of RavenDB as you add new capabilities and mature the project?


    What are some of the early decisions that had to be unwound to enable new capabilities?

    If you were to start from scratch today, what database would you build?
    What are the most interesting, innovative, or unexpected ways that you have seen RavenDB/NoSQL databases used?
    What are the most interesting, unexpected, or challenging lessons that you have learned while working on RavenDB?

    • 1 hr 16 min.
    Establish A Single Source Of Truth For Your Data Consumers With A Semantic Layer

    Summary

    Maintaining a single source of truth for your data is one of the biggest challenges in data engineering. Different roles and tasks in the business need their own ways to access and analyze the data in the organization. In order to enable this while maintaining a single point of access, the semantic layer has evolved as a technological solution to the problem. In this episode Artyom Keydunov, creator of Cube, discusses the evolution and applications of the semantic layer as a component of your data platform, and how Cube provides speed and cost optimization for your data consumers.
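
    The core idea of a semantic layer is that a metric is defined once, in one place, and every consumer queries it by name instead of re-deriving the SQL. The sketch below is a hypothetical Python toy that compiles named metrics to SQL; Cube's real modeling language is considerably richer, and all names here are invented.

        # Toy semantic layer: metric definitions live in a single registry and
        # compile to SQL, so every consumer (BI tool, notebook, application)
        # gets the same definition of "revenue".
        METRICS = {
            "revenue": {"sql": "SUM(amount)", "table": "orders"},
            "order_count": {"sql": "COUNT(*)", "table": "orders"},
        }

        def compile_query(metric: str, group_by: str | None = None) -> str:
            m = METRICS[metric]
            select = f"{m['sql']} AS {metric}"
            if group_by:
                return (f"SELECT {group_by}, {select} "
                        f"FROM {m['table']} GROUP BY {group_by}")
            return f"SELECT {select} FROM {m['table']}"

        print(compile_query("revenue", group_by="region"))
        # SELECT region, SUM(amount) AS revenue FROM orders GROUP BY region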


    Announcements


    Hello and welcome to the Data Engineering Podcast, the show about modern data management
    This episode is brought to you by Datafold – a testing automation platform for data engineers that prevents data quality issues from entering every part of your data workflow, from migration to dbt deployment. Datafold has recently launched data replication testing, providing ongoing validation for source-to-target replication. Leverage Datafold's fast cross-database data diffing and monitoring to test your replication pipelines automatically and continuously. Validate consistency between source and target at any scale, and receive alerts about any discrepancies. Learn more about Datafold by visiting dataengineeringpodcast.com/datafold.
    Dagster offers a new approach to building and running data platforms and data pipelines. It is an open-source, cloud-native orchestrator for the whole development lifecycle, with integrated lineage and observability, a declarative programming model, and best-in-class testability. Your team can get up and running in minutes thanks to Dagster Cloud, an enterprise-class hosted solution that offers serverless and hybrid deployments, enhanced security, and on-demand ephemeral test deployments. Go to dataengineeringpodcast.com/dagster today to get started. Your first 30 days are free!
    Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.
    Your host is Tobias Macey and today I'm interviewing Artyom Keydunov about the role of the semantic layer in your data platform


    Interview


    Introduction
    How did you get involved in the area of data management?
    Can you start by outlining the technical elements of what it means to have a "semantic layer"?
    In the past couple of years there was a rapid hype cycle around the "metrics layer" and "headless BI", which has largely faded. Can you give your assessment of the current state of the industry around the adoption/implementation of these concepts?
    What are the benefits of having a discrete service that offers the business metrics/semantic mappings as opposed to implementing those concepts as part of a more general system? (e.g. dbt, BI, warehouse marts, etc.)


    At what point does it become necessary/beneficial for a team to adopt such a service?
    What are the challenges involved in retrofitting a semantic layer into a production data system?

    evolution of requirements/usage patterns
    technical complexities/performance and cost optimization
    What are the most interesting, innovative, or unexpected ways that you have seen Cube used?
    What are the most interesting, unexpected, or challenging lessons that you have learned while working on Cube?

    • 56 min.
    Adding Anomaly Detection And Observability To Your dbt Projects Is Elementary

    Summary

    Working with data is a complicated process, with numerous chances for something to go wrong. Identifying and accounting for those errors is a critical piece of building trust across the organization that your data is accurate and up to date. While there are numerous products available to provide that visibility, they all have different technologies and workflows that they focus on. To bring observability to dbt projects the team at Elementary embedded themselves into the workflow. In this episode Maayan Salom explores the approach that she has taken to bring observability, enhanced testing capabilities, and anomaly detection into every step of the dbt developer experience.
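
    For a flavor of the anomaly detection discussed here: a common technique is to compare the latest value of a pipeline metric (row volume, freshness) against a z-score band derived from its recent history. The sketch below is a toy Python version of that idea, not Elementary's actual implementation.

        # Toy volume anomaly check: flag the latest row count if it deviates
        # more than 3 standard deviations from the trailing history.
        # Observability tools apply tests like this to row counts, freshness
        # timestamps, and column distributions.
        import statistics

        def is_volume_anomaly(history: list[int], latest: int,
                              z_threshold: float = 3.0) -> bool:
            mean = statistics.mean(history)
            stdev = statistics.stdev(history)
            if stdev == 0:
                return latest != mean
            return abs(latest - mean) / stdev > z_threshold

        daily_row_counts = [10_120, 9_980, 10_340, 10_050, 10_210]
        print(is_volume_anomaly(daily_row_counts, 4_300))  # True: likely a broken load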


    Announcements


    Hello and welcome to the Data Engineering Podcast, the show about modern data management
    Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.
    Dagster offers a new approach to building and running data platforms and data pipelines. It is an open-source, cloud-native orchestrator for the whole development lifecycle, with integrated lineage and observability, a declarative programming model, and best-in-class testability. Your team can get up and running in minutes thanks to Dagster Cloud, an enterprise-class hosted solution that offers serverless and hybrid deployments, enhanced security, and on-demand ephemeral test deployments. Go to dataengineeringpodcast.com/dagster today to get started. Your first 30 days are free!
    This episode is brought to you by Datafold – a testing automation platform for data engineers that prevents data quality issues from entering every part of your data workflow, from migration to dbt deployment. Datafold has recently launched data replication testing, providing ongoing validation for source-to-target replication. Leverage Datafold's fast cross-database data diffing and monitoring to test your replication pipelines automatically and continuously. Validate consistency between source and target at any scale, and receive alerts about any discrepancies. Learn more about Datafold by visiting dataengineeringpodcast.com/datafold.
    Your host is Tobias Macey and today I'm interviewing Maayan Salom about how to incorporate observability into a dbt-oriented workflow and how Elementary can help


    Interview


    Introduction
    How did you get involved in the area of data management?
    Can you start by outlining what elements of observability are most relevant for dbt projects?
    What are some of the common ad-hoc/DIY methods that teams develop to acquire those insights?


    What are the challenges/shortcomings associated with those approaches?

    Over the past ~3 years numerous data observability systems/products have been created. What are some of the ways that the specifics of dbt workflows are not covered by those generalized tools?


    What are the insights that can be more easily generated by embedding into the dbt toolchain and development cycle?

    Can you describe what Elementary is and how it is designed to enhance the development and maintenance work in dbt projects?
    How is Elementary designed/implemented?


    How have the scope and goals of the project changed since you started working on it?
    What are the engineering

    • 50 min.
    Ship Smarter Not Harder With Declarative And Collaborative Data Orchestration On Dagster+

    Summary

    A core differentiator of Dagster in the ecosystem of data orchestration is their focus on software-defined assets as a means of building declarative workflows. With their launch of Dagster+ as the redesigned commercial companion to the open source project they are investing in that capability with a suite of new features. In this episode Pete Hunt, CEO of Dagster Labs, outlines these new capabilities, how they reduce the burden on data teams, and the increased collaboration that they enable across teams and business units.
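
    For readers new to the software-defined assets model referenced here: instead of wiring tasks together imperatively, you declare the data assets you want and their dependencies, and the orchestrator derives the execution graph and lineage. A minimal sketch against Dagster's open source @asset API follows (simplified for illustration; consult the current Dagster docs for exact signatures).

        # Software-defined assets: each function declares a data asset, and a
        # dependency is expressed by naming a parameter after the upstream
        # asset, so the orchestrator derives the DAG and lineage for you.
        from dagster import asset, materialize

        @asset
        def raw_logs() -> list[str]:
            # In practice this would pull from object storage or an API.
            return ["GET /home 200", "GET /missing 404", "POST /login 200"]

        @asset
        def error_count(raw_logs: list[str]) -> int:
            # Depends on raw_logs purely by naming the parameter after it.
            return sum(1 for line in raw_logs if " 404" in line)

        if __name__ == "__main__":
            result = materialize([raw_logs, error_count])
            print(result.success)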


    Announcements


    Hello and welcome to the Data Engineering Podcast, the show about modern data management
    Dagster offers a new approach to building and running data platforms and data pipelines. It is an open-source, cloud-native orchestrator for the whole development lifecycle, with integrated lineage and observability, a declarative programming model, and best-in-class testability. Your team can get up and running in minutes thanks to Dagster Cloud, an enterprise-class hosted solution that offers serverless and hybrid deployments, enhanced security, and on-demand ephemeral test deployments. Go to dataengineeringpodcast.com/dagster today to get started. Your first 30 days are free!
    Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.
    Your host is Tobias Macey and today I'm interviewing Pete Hunt about how the launch of Dagster+ will level up your data platform and orchestrate across language platforms


    Interview


    Introduction
    How did you get involved in the area of data management?
    Can you describe what the focus of Dagster+ is and the story behind it?


    What problems are you trying to solve with Dagster+?
    What are the notable enhancements beyond the Dagster Core project that this updated platform provides?
    How is it different from the current Dagster Cloud product?

    In the launch announcement you tease new capabilities that would be great to explore in turn:


    Make data a team sport, enabling data teams across the organization
    Deliver reliable, high quality data the organization can trust
    Observe and manage data platform costs
    Master the heterogeneous collection of technologies—both traditional and Modern Data Stack

    What are the business/product goals that you are focused on improving with the launch of Dagster+?
    What are the most interesting, innovative, or unexpected ways that you have seen Dagster used?
    What are the most interesting, unexpected, or challenging lessons that you have learned while working on the design and launch of Dagster+?
    When is Dagster+ the wrong choice?
    What do you have planned for the future of Dagster/Dagster Cloud/Dagster+?


    Contact Info


    Twitter
    LinkedIn


    Parting Question


    From your perspective, what is the biggest gap in the tooling or technology for data management today?


    Closing Announcements


    Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning.
    Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
    I

    • 55 min.
    Reconciling The Data In Your Databases With Datafold

    Summary

    A significant portion of data workflows involve storing and processing information in database engines. Validating that the information is stored and processed correctly can be complex and time-consuming, especially when the source and destination speak different dialects of SQL. In this episode Gleb Mezhanskiy, founder and CEO of Datafold, discusses the different error conditions and solutions that you need to know about to ensure the accuracy of your data.
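
    A common way to verify replication without hauling every row across the network is to compare fingerprints on both sides: checksums over key ranges first, then row-level hashes only where the ranges disagree. The sketch below is a toy Python illustration of the row-level step; Datafold's actual diffing algorithms are more sophisticated, and all data here is invented.

        # Toy cross-database diff: hash each row keyed by primary key on both
        # sides, then compare the maps to find missing and drifted rows. Real
        # tools checksum whole key ranges first so only mismatched segments
        # need row-level comparison.
        import hashlib

        def row_fingerprints(rows: list[tuple]) -> dict:
            """Map primary key (first column) to a hash of the other columns."""
            return {
                row[0]: hashlib.sha256(
                    "|".join(map(str, row[1:])).encode()
                ).hexdigest()
                for row in rows
            }

        source = [(1, "alice", 10.0), (2, "bob", 20.0), (3, "carol", 30.0)]
        target = [(1, "alice", 10.0), (2, "bob", 21.5)]  # drifted value, missing row

        src_fp, tgt_fp = row_fingerprints(source), row_fingerprints(target)
        missing = src_fp.keys() - tgt_fp.keys()
        changed = {k for k in src_fp.keys() & tgt_fp.keys() if src_fp[k] != tgt_fp[k]}
        print(f"missing in target: {sorted(missing)}, changed: {sorted(changed)}")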


    Announcements


    Hello and welcome to the Data Engineering Podcast, the show about modern data management
    Dagster offers a new approach to building and running data platforms and data pipelines. It is an open-source, cloud-native orchestrator for the whole development lifecycle, with integrated lineage and observability, a declarative programming model, and best-in-class testability. Your team can get up and running in minutes thanks to Dagster Cloud, an enterprise-class hosted solution that offers serverless and hybrid deployments, enhanced security, and on-demand ephemeral test deployments. Go to dataengineeringpodcast.com/dagster today to get started. Your first 30 days are free!
    Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.
    Join us at the top event for the global data community, Data Council Austin. From March 26-28th 2024, we'll play host to hundreds of attendees, 100 top speakers and dozens of startups that are advancing data science, engineering and AI. Data Council attendees are amazing founders, data scientists, lead engineers, CTOs, heads of data, investors and community organizers who are all working together to build the future of data and sharing their insights and learnings through deeply technical talks. As a listener to the Data Engineering Podcast you can get a special discount off regular priced and late bird tickets by using the promo code dataengpod20. Don't miss out on our only event this year! Visit dataengineeringpodcast.com/data-council and use code dataengpod20 to register today!
    Your host is Tobias Macey and today I'm welcoming back Gleb Mezhanskiy to talk about how to reconcile data in database environments


    Interview


    Introduction
    How did you get involved in the area of data management?
    Can you start by outlining some of the situations where reconciling data between databases is needed?
    What are examples of the error conditions that you are likely to run into when duplicating information between database engines?


    When these errors do occur, what are some of the problems that they can cause?

    When teams are replicating data between database engines, what are some of the common patterns for managing those flows?


    How does that change between continual and one-time replication?

    What are some of the steps involved in verifying the integrity of data replication between database engines?
    If the source or destination isn't a traditional database engine (e.g. data lakehouse) how does that change the work involved in verifying the success of the replication?
    What are the challenges of validating and reconciling data?


    Sheer scale and cost of pulling data out, have to do in-place
    Performance. Pushing databases to the limit

    • 58 min.

Customer Reviews

5.0 out of 5
2 ratings

Top Podcasts in Technology

Lex Fridman Podcast
Lex Fridman
Digital Podcast
Schweizer Radio und Fernsehen (SRF)
Flugforensik - Abstürze und ihre Geschichte
Flugforensik
Acquired
Ben Gilbert and David Rosenthal
Dwarkesh Podcast
Dwarkesh Patel
All-In with Chamath, Jason, Sacks & Friedberg
All-In Podcast, LLC

You Might Also Like

DataFramed
DataCamp
Talk Python To Me
Michael Kennedy (@mkennedy)
Data Skeptic
Kyle Polich
Software Engineering Daily
Software Engineering Daily
Super Data Science: ML & AI Podcast with Jon Krohn
Jon Krohn
The Real Python Podcast
Real Python