The Data Flowcast: Mastering Airflow for Data Engineering & AI

Astronomer

Welcome to The Data Flowcast: Mastering Airflow for Data Engineering & AI — the podcast where we keep you up to date with insights and ideas propelling the Airflow community forward. Join us each week, as we explore the current state, future and potential of Airflow with leading thinkers in the community, and discover how best to leverage this workflow management system to meet the ever-evolving needs of data engineering and AI ecosystems. Podcast Webpage: https://www.astronomer.io/podcast/

  1. The Intersection of AI and Data Management at Dosu with Devin Stein

    October 4


    Unlocking engineering productivity goes beyond coding — it’s about managing knowledge efficiently. In this episode, we explore the innovative ways in which Dosu leverages Airflow for data orchestration and supports the Airflow project. Devin Stein, Founder of Dosu, shares his insights on how engineering teams can focus on value-added work by automating knowledge management. Devin dives into Dosu’s purpose, the significance of AI in their product, and why they chose Airflow as the backbone for scheduling and data management.

    Key Takeaways:
    (01:33) Dosu’s mission to democratize engineering knowledge.
    (05:00) AI is central to Dosu’s product for structuring engineering knowledge.
    (06:23) The importance of maintaining up-to-date data for AI effectiveness.
    (07:55) How Airflow supports Dosu’s data ingestion and automation processes.
    (08:45) The reasoning behind choosing Airflow over other orchestrators.
    (11:00) Airflow enables Dosu to manage both traditional ETL and dynamic workflows.
    (13:04) Dosu assists the Airflow project by auto-labeling issues and discussions.
    (14:56) Thoughtful collaboration with the Airflow community to introduce AI tools.
    (16:37) The potential of Airflow to handle more dynamic, scheduled workflows in the future.
    (18:00) Challenges and custom solutions for implementing dynamic workflows in Airflow.

    Resources Mentioned:
    Apache Airflow - https://airflow.apache.org/
    Dosu Website - https://dosu.dev/

    Thanks for listening to The Data Flowcast: Mastering Airflow for Data Engineering & AI. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss any of the insightful conversations.

    #AI #Automation #Airflow #MachineLearning

    20 min
  2. AI-Powered Vehicle Automation at Ford Motor Company with Serjesh Sharma

    September 12


    Harnessing data at scale is the key to driving innovation in autonomous vehicle technology. In this episode, we uncover how advanced orchestration tools are transforming machine learning operations in the automotive industry. Serjesh Sharma, Supervisor of ADAS Machine Learning Operations (MLOps) at Ford Motor Company, joins us to discuss the challenges and innovations his team faces in working to enhance vehicle safety and automation. Serjesh shares insights into the intricate data processes that support Ford’s Advanced Driver Assistance Systems (ADAS) and how his team leverages Apache Airflow to manage massive data loads efficiently.

    Key Takeaways:
    (01:44) ADAS involves advanced features like pre-collision assist and self-driving capabilities.
    (04:47) Ensuring sensor accuracy and vehicle safety requires extensive data processing.
    (05:08) The combination of on-prem and cloud infrastructure optimizes data handling.
    (09:27) Ford processes around one petabyte of data per week, using both CPUs and GPUs.
    (10:33) Implementing software engineering best practices to improve scalability and reliability.
    (15:18) GitHub Issues streamline onboarding and infrastructure provisioning.
    (17:00) Airflow’s modular design allows Ford to manage complex data pipelines.
    (19:00) Kubernetes pod operators help optimize resource usage for CPU-intensive tasks.
    (20:35) Ford’s scale challenges led to customized Airflow configurations for high concurrency.
    (21:02) Advanced orchestration tools are pivotal in managing vast data landscapes in automotive innovation.

    Resources Mentioned:
    Serjesh Sharma - www.linkedin.com/in/serjeshsharma/
    Ford Motor Company - www.linkedin.com/company/ford-motor-company/
    Apache Airflow - airflow.apache.org/
    Kubernetes - kubernetes.io/

    Thanks for listening to The Data Flowcast: Mastering Airflow for Data Engineering & AI. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss any of the insightful conversations.

    #AI #Automation #Airflow #MachineLearning

    26 min
  3. From Task Failures to Operational Excellence at GumGum with Brendan Frick

    September 6


    Data failures are inevitable, but how you manage them can define the success of your operations. In this episode, we dive deep into the challenges of data engineering and AI with Brendan Frick, Senior Engineering Manager, Data at GumGum. Brendan shares his unique approach to managing task failures and DAG issues in a high-stakes ad-tech environment. He discusses how GumGum leverages Apache Airflow to streamline data processes, ensuring efficient data movement and orchestration while minimizing disruptions in their operations.

    Key Takeaways:
    (02:02) Brendan’s role at GumGum and its approach to ad tech.
    (04:27) How GumGum uses Airflow for daily data orchestration, moving data from S3 to warehouses.
    (07:02) Handling task failures in Airflow using Jira for actionable, developer-friendly responses.
    (09:13) Transitioning from email alerts to a more structured system with Jira and PagerDuty.
    (11:40) Monitoring task retry rates as a key metric to identify potential issues early.
    (14:15) Utilizing Looker dashboards to track and analyze task performance and retry rates.
    (16:39) Transitioning from the Kubernetes operator to a more reliable system for data processing.
    (19:25) The importance of automating stakeholder communication with data lineage tools like Atlan.
    (20:48) Implementing data contracts to ensure SLAs are met across all data processes.
    (22:01) The role of scalable SLAs in Airflow to ensure data reliability and meet business needs.

    Resources Mentioned:
    Brendan Frick - https://www.linkedin.com/in/brendan-frick-399345107/
    GumGum - https://www.linkedin.com/company/gumgum/
    Apache Airflow - https://airflow.apache.org/
    Jira - https://www.atlassian.com/software/jira
    Atlan - https://atlan.com/
    Kubernetes - https://kubernetes.io/

    Thanks for listening to The Data Flowcast: Mastering Airflow for Data Engineering & AI. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss any of the insightful conversations.

    #AI #Automation #Airflow #MachineLearning

    24 min
  4. From Sensors to Datasets: Enhancing Airflow at Astronomer with Maggie Stark and Marion Azoulai

    August 29


    A 13-point drop in failure rates, from 16% to 3% — this is how two data scientists at Astronomer revolutionized their data pipelines using Apache Airflow. In this episode, we enter the world of data orchestration and AI with Maggie Stark and Marion Azoulai, both Senior Data Scientists at Astronomer. Maggie and Marion discuss how their team re-architected their use of Airflow to improve scalability, reliability and efficiency in data processing. They share insights on overcoming challenges with sensors and how moving to datasets transformed their workflows.

    Key Takeaways:
    (02:23) The data team’s role as a centralized hub within Astronomer.
    (05:11) Airflow is the backbone of all data processes, running 60,000 tasks daily.
    (07:13) Custom task groups enable efficient code reuse and adherence to best practices.
    (11:33) Sensor-heavy architectures can lead to cascading failures and resource issues.
    (12:09) Switching to datasets has improved reliability and scalability.
    (14:19) Building a control DAG provides end-to-end visibility of pipelines.
    (16:42) Breaking down DAGs into smaller units minimizes failures and improves management.
    (19:02) Failure rates improved from 16% to 3% with the new architecture.

    Resources Mentioned:
    Maggie Stark - https://www.linkedin.com/in/margaretstark/
    Marion Azoulai - https://www.linkedin.com/in/marionazoulai/
    Astronomer | LinkedIn - https://www.linkedin.com/company/astronomer/
    Apache Airflow - https://airflow.apache.org/
    Astronomer | Website - https://www.astronomer.io/

    Thanks for listening to The Data Flowcast: Mastering Airflow for Data Engineering & AI. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss any of the insightful conversations.

    #AI #Automation #Airflow #MachineLearning

    22 min
  5. Mastering Data Orchestration with Airflow at M Science with Ben Tallman

    August 26


    Mastering the flow of data is essential for driving innovation and efficiency in today’s competitive landscape. In this episode, we explore the evolution of data orchestration and the pivotal role of Apache Airflow in modern data workflows. Ben Tallman, Chief Technology Officer at M Science, joins us and shares his extensive experience with Airflow, detailing its early adoption, evolution and the profound impact it has had on data engineering practices. His insights reveal how leveraging Airflow can streamline complex data processes, enhance observability and ultimately drive business success.

    Key Takeaways:
    (02:31) Ben’s journey with Airflow and its early adoption.
    (05:36) The transition from legacy schedulers to Airflow at Apigee and later Google.
    (08:52) The challenges and benefits of running production-grade Airflow instances.
    (10:46) How Airflow facilitates the management of large-scale data at M Science.
    (11:56) The importance of reducing time to value for customers using data products.
    (13:32) Airflow’s role in ensuring observability and reliability in data workflows.
    (17:00) Managing petabytes of data and billions of records efficiently.
    (19:08) Integration of various data sources and ensuring data product quality.
    (20:04) Leveraging Airflow for data observability and reducing time to value.
    (22:04) Ben’s vision for the future development of Airflow, including audit trails for variables.

    Resources Mentioned:
    Ben Tallman - https://www.linkedin.com/in/btallman/
    M Science - https://www.linkedin.com/company/m-science-llc/
    Apache Airflow - https://airflow.apache.org/
    Astronomer - https://www.astronomer.io/
    Databricks - https://databricks.com/
    Snowflake - https://www.snowflake.com/

    Thanks for listening to The Data Flowcast: Mastering Airflow for Data Engineering & AI. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss any of the insightful conversations.

    #AI #Automation #Airflow #MachineLearning

    25 min
  6. Enhancing Business Metrics With Airflow at Artlist with Hannan Kravitz

    August 15


    Data orchestration is revolutionizing the way companies manage and process data. In this episode, we explore the critical role of data orchestration in modern data workflows and how Apache Airflow is used to enhance data processing and AI model deployment. Hannan Kravitz, Data Engineering Team Leader at Artlist, joins us to share his insights on leveraging Airflow for data engineering and its impact on their business operations.

    Key Takeaways:
    (01:00) Hannan introduces Artlist and its mission to empower content creators.
    (04:27) The importance of collecting and modeling data to support business insights.
    (06:40) Using Airflow to connect multiple data sources and create dashboards.
    (09:40) Implementing a monitoring DAG for proactive alerts within Airflow.
    (12:31) Customizing Airflow for business metric KPI monitoring and setting thresholds.
    (15:00) Addressing decreases in purchases due to technical issues with proactive alerts.
    (17:45) Customizing data quality checks with dynamic task mapping in Airflow.
    (20:00) Desired improvements in Airflow UI and logging capabilities.
    (21:00) Enabling business stakeholders to change thresholds using Streamlit.
    (22:26) Future improvements desired in the Airflow project.

    Resources Mentioned:
    Hannan Kravitz - https://www.linkedin.com/in/hannan-kravitz-60563112/
    Artlist - https://www.linkedin.com/company/art-list/
    Apache Airflow - https://airflow.apache.org/
    Snowflake - https://www.snowflake.com/
    Streamlit - https://streamlit.io/

    Thanks for listening to The Data Flowcast: Mastering Airflow for Data Engineering & AI. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss any of the insightful conversations.

    #AI #Automation #Airflow #MachineLearning

    24 min
  7. Cutting-Edge Data Engineering at Teya with Alexandre Magno Lima Martins

    August 8


    Data engineering is constantly evolving, and staying ahead means mastering tools like Apache Airflow. In this episode, we explore the world of data engineering with Alexandre Magno Lima Martins, Senior Data Engineer at Teya. Alexandre talks about optimizing data workflows and the smart solutions they’ve created at Teya to make data processing easier and more efficient.

    Key Takeaways:
    (02:01) Alexandre explains his role at Teya and the responsibilities of a data platform engineer.
    (02:40) The primary use cases of Airflow at Teya, especially with dbt and machine learning projects.
    (04:14) How Teya creates self-service DAGs for dbt models.
    (05:58) Automating DAG creation with CI/CD pipelines.
    (09:04) Switching to a multi-file method for better Airflow performance.
    (12:48) Challenges faced with Kubernetes Executor vs. Celery Executor.
    (16:13) Using Celery Executor to handle fast tasks efficiently.
    (17:02) Implementing KEDA autoscaler for better scaling of Celery workers.
    (19:05) Reasons for not using Cosmos for DAG generation and cross-DAG dependencies.
    (21:16) Alexandre’s wish list for future Airflow features, focusing on multi-tenancy.

    Resources Mentioned:
    Alexandre Magno Lima Martins - https://www.linkedin.com/in/alex-magno/
    Teya - https://www.linkedin.com/company/teya-global/
    Apache Airflow - https://airflow.apache.org/
    dbt - https://www.getdbt.com/
    Kubernetes - https://kubernetes.io/
    KEDA - https://keda.sh/

    Thanks for listening to The Data Flowcast: Mastering Airflow for Data Engineering & AI. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss any of the insightful conversations.

    #AI #Automation #Airflow #MachineLearning

    24 min
Rating: 5 out of 5 (20 ratings)
