An Opinionated Look At End-to-End Code-Only Analytical Workflows With Bruin

Data Engineering Podcast

Summary
The challenges of integrating all of the tools in the modern data stack have led to a new generation of tools that focus on a fully integrated workflow. At the same time, there have been many approaches to how much of the workflow is driven by code vs. not. Burak Karakan is of the opinion that a fully integrated workflow driven entirely by code offers a beneficial and productive means of generating useful analytical outcomes. In this episode he shares how Bruin builds on those opinions and how you can use it to build your own analytics without having to cobble together a suite of tools with conflicting abstractions.


Announcements

  • Hello and welcome to the Data Engineering Podcast, the show about modern data management
  • Imagine catching data issues before they snowball into bigger problems. That’s what Datafold’s new Monitors do. With automatic monitoring for cross-database data diffs, schema changes, key metrics, and custom data tests, you can catch discrepancies and anomalies in real time, right at the source. Whether it’s maintaining data integrity or preventing costly mistakes, Datafold Monitors give you the visibility and control you need to keep your entire data stack running smoothly. Want to stop issues before they hit production? Learn more at dataengineeringpodcast.com/datafold today!
  • Your host is Tobias Macey and today I'm interviewing Burak Karakan about the benefits of building code-only data systems
Interview
  • Introduction
  • How did you get involved in the area of data management?
  • Can you describe what Bruin is and the story behind it?
    • Who is your target audience?
  • There are numerous tools that address the ETL workflow for analytical data. What are the pain points that you are focused on for your target users?
  • How does a code-only approach to data pipelines help in addressing the pain points of analytical workflows?
    • How might it act as a limiting factor for organizational involvement?
  • Can you describe how Bruin is designed?
    • How have the design and scope of Bruin evolved since you first started working on it?
  • You call out the ability to mix SQL and Python for transformation pipelines. What are the components that allow for that functionality?
    • What are some of the ways that the combination of Python and SQL improves ergonomics of transformation workflows?
  • What are the key features of Bruin that help to streamline the efforts of organizations building analytical systems?
  • Can you describe the workflow of someone going from source data to warehouse and dashboard using Bruin and Ingestr?
  • What are the opportunities for contributions to Bruin and Ingestr to expand their capabilities?
  • What are the most interesting, innovative, or unexpected ways that you have seen Bruin and Ingestr used?
  • What are the most interesting, unexpected, or challenging lessons that you have learned while working on Bruin?
  • When is Bruin the wrong choice?
  • What do you have planned for the future of Bruin?
Contact Info
  • LinkedIn
Parting Question
  • From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
  • Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
  • Visit the site
