Engineering Enablement by Abi Noda

DX

This is a weekly podcast focused on developer productivity and the teams and leaders dedicated to improving it. Topics include in-depth interviews with Platform and DevEx teams, as well as the latest research and approaches for measuring developer productivity. The Engineering Enablement podcast is hosted by Abi Noda, founder and CEO of DX (getdx.com) and a published researcher focused on developing measurement methods that help organizations improve developer experience and productivity.

  1. Measuring AI code assistants and agents with the AI Measurement Framework

    AUG 15


    In this episode of Engineering Enablement, DX CTO Laura Tacho and CEO Abi Noda break down how to measure developer productivity in the age of AI using DX’s AI Measurement Framework. Drawing on research with industry leaders, vendors, and hundreds of organizations, they explain how to move beyond vendor hype and headlines to make data-driven decisions about AI adoption. They cover why some fundamentals of productivity measurement remain constant, the pitfalls of over-relying on flawed metrics like acceptance rate, and how to track AI’s real impact across utilization, quality, and cost. The conversation also explores measuring agentic workflows, expanding the definition of “developer” to include new AI-enabled contributors, and avoiding second-order effects like technical debt and slowed PR throughput. Whether you’re rolling out AI coding tools, experimenting with autonomous agents, or just trying to separate signal from noise, this episode offers a practical roadmap for understanding AI’s role in your organization—and ensuring it delivers sustainable, long-term gains. 
    Where to find Laura Tacho:
    • X: https://x.com/rhein_wein
    • LinkedIn: https://www.linkedin.com/in/lauratacho/
    • Website: https://lauratacho.com/
    Where to find Abi Noda:
    • LinkedIn: https://www.linkedin.com/in/abinoda
    • Substack: https://substack.com/@abinoda
    In this episode, we cover:
    (00:00) Intro
    (01:26) The challenge of measuring developer productivity in the AI age
    (04:17) Measuring productivity in the AI era — what stays the same and what changes
    (07:25) How to use DX’s AI Measurement Framework
    (13:10) Measuring AI’s true impact, from adoption rates to long-term quality and maintainability
    (16:31) Why acceptance rate is flawed — and DX’s approach to tracking AI-authored code
    (18:25) Three ways to gather measurement data
    (21:55) How Google measures time savings and why self-reported data is misleading
    (24:25) How to measure agentic workflows and a case for expanding the definition of developer
    (28:50) A case for not overemphasizing AI’s role
    (30:31) Measuring second-order effects
    (32:26) Audience Q&A: applying metrics in practice
    (36:45) Wrap up: best practices for rollout and communication
    Referenced:
    • DX Core 4 Productivity Framework
    • Measuring AI code assistants and agents
    • AI is making Google engineers 10% more productive, says Sundar Pichai - Business Insider

    41 min
  2. How to cut through the hype and measure AI’s real impact (Live from LeadDev London)

    AUG 8


    In this special episode of the Engineering Enablement podcast, recorded live at LeadDev London, DX CTO Laura Tacho explores the growing gap between AI headlines and the reality inside engineering teams—and what leaders can do to close it. Laura shares data from nearly 39,000 developers across 184 companies, highlights the Core 4, introduces the AI Measurement Framework, and offers a practical playbook for using data to improve developer experience, measure AI’s true impact, and build better software without compromising long-term performance.
    Where to find Laura Tacho:
    • X: https://x.com/rhein_wein
    • LinkedIn: https://www.linkedin.com/in/lauratacho/
    • Website: https://lauratacho.com/
    In this episode, we cover:
    (00:00) Intro: Laura’s keynote from LDX3
    (01:44) The problem with asking “how much faster can we go with AI?”
    (03:02) How the disappointment gap creates barriers to AI adoption
    (06:20) What AI adoption looks like at top-performing organizations
    (07:53) What leaders must do to turn AI into meaningful impact
    (10:50) Why building better software with AI still depends on fundamentals
    (12:03) An overview of the DX Core 4 Framework
    (13:22) Why developer experience is the biggest performance lever
    (15:12) How Block used Core 4 and DXI to identify 500,000 hours in time savings
    (16:08) How to get started with Core 4
    (17:32) Measuring AI with the AI Measurement Framework
    (21:45) Final takeaways and how to get started with confidence
    Referenced:
    • LDX3 by LeadDev | The Festival of Software Engineering Leadership | London
    • Software engineering with LLMs in 2025: reality check
    • SPACE framework, PRs per engineer, AI research
    • The AI adoption playbook: Lessons from Microsoft's internal strategy
    • DX Core 4 Productivity Framework
    • Nicole Forsgren
    • Margaret-Anne Storey
    • Dropbox.com
    • Etsy
    • Pfizer
    • Drew Houston - Dropbox | LinkedIn
    • Block
    • Cursor
    • Dora.dev
    • Sourcegraph
    • Booking.com

    23 min
  3. Unpacking METR’s findings: Does AI slow developers down?

    AUG 1


    In this episode of the Engineering Enablement podcast, host Abi Noda is joined by Quentin Anthony, Head of Model Training at Zyphra and a contributor at EleutherAI. Quentin participated in METR’s recent study on AI coding tools, which revealed that developers often slowed down when using AI—despite feeling more productive. He and Abi unpack the unexpected results of the study, which tasks AI tools actually help with, and how engineering teams can adopt them more effectively by focusing on task-level fit and developing better digital hygiene.
    Where to find Quentin Anthony:
    • LinkedIn: https://www.linkedin.com/in/quentin-anthony/
    • X: https://x.com/QuentinAnthon15
    Where to find Abi Noda:
    • LinkedIn: https://www.linkedin.com/in/abinoda
    In this episode, we cover:
    (00:00) Intro
    (01:32) A brief overview of Quentin’s background and current work
    (02:05) An explanation of METR and the study Quentin participated in
    (11:02) Surprising results of the METR study
    (12:47) Quentin’s takeaways from the study’s results
    (16:30) How developers can avoid bloated code bases through self-reflection
    (19:31) Signs that you’re not making progress with a model
    (21:25) What is “context rot”?
    (23:04) Advice for combating context rot
    (25:34) How to make the most of your idle time as a developer
    (28:13) Developer hygiene: the case for selectively using AI tools
    (33:28) How to interact effectively with new models
    (35:28) Why organizations should focus on tasks that AI handles well
    (38:01) Where AI fits in the software development lifecycle
    (39:40) How to approach testing with models
    (40:31) What makes models different
    (42:05) Quentin’s thoughts on agents
    Referenced:
    • DX Core 4 Productivity Framework
    • Zyphra
    • EleutherAI
    • METR
    • Cursor
    • Claude
    • LibreChat
    • Google Gemini
    • Introducing OpenAI o3 and o4-mini
    • METR’s study on how AI affects developer productivity
    • Quentin Anthony on X: "I was one of the 16 devs in this study."
    • Context rot from Hacker News
    • Tracing the thoughts of a large language model
    • Kimi
    • Grok 4 | xAI

    44 min
  4. CarGurus’ journey building a developer portal and increasing AI adoption

    JUL 11


    In this episode, Abi Noda talks with Frank Fodera, Director of Engineering for Developer Experience at CarGurus. Frank shares the story behind CarGurus’ transition from a monolithic architecture to microservices, and how that journey led to the creation of their internal developer portal, Showroom. He outlines the five pillars of the IDP, how it integrates with infrastructure, and why they chose to build rather than buy. The conversation also explores how CarGurus is approaching AI tool adoption across the engineering team, from experiments and metrics to culture change and leadership buy-in.
    Where to find Frank Fodera:
    • LinkedIn: https://www.linkedin.com/in/frankfodera/
    Where to find Abi Noda:
    • LinkedIn: https://www.linkedin.com/in/abinoda
    In this episode, we cover:
    (00:00) Intro: IDPs (Internal Developer Portals) and AI
    (02:07) The IDP journey at CarGurus
    (05:53) A breakdown of the people responsible for building the IDP
    (07:05) The five pillars of the Showroom IDP
    (09:12) How DevX worked with infrastructure
    (11:13) The business impact of Showroom
    (13:57) The transition from monolith to microservices and struggles along the way
    (15:54) The benefits of building a custom IDP
    (19:10) How CarGurus drives AI coding tool adoption
    (28:48) Getting started with an AI initiative
    (31:50) Metrics to track
    (34:06) Tips for driving AI adoption
    Referenced:
    • DX Core 4 Productivity Framework
    • Internal Developer Portals: Use Cases and Key Components
    • Strangler Fig Pattern - Azure Architecture Center | Microsoft Learn
    • Spotify for Backstage
    • The AI adoption playbook: Lessons from Microsoft's internal strategy

    39 min
  5. Snowflake’s playbook for operational excellence

    JUN 20


    In this episode, Abi Noda speaks with Gilad Turbahn, Head of Developer Productivity, and Amy Yuan, Director of Engineering at Snowflake, about how their team builds and sustains operational excellence. They break down the practices and principles that guide their work—from creating two-way communication channels to treating engineers as customers. The conversation explores how Snowflake fosters trust, uses feedback loops to shape priorities, and maintains alignment through thoughtful planning. You’ll also hear how they engage with teams across the org, convert detractors, and use Customer Advisory Boards to bring voices from across the company into the decision-making process.
    Where to find Amy Yuan:
    • LinkedIn: https://www.linkedin.com/in/amy-yuan-a8ba783/
    Where to find Gilad Turbahn:
    • LinkedIn: https://www.linkedin.com/in/giladturbahn/
    Where to find Abi Noda:
    • LinkedIn: https://www.linkedin.com/in/abinoda
    In this episode, we cover:
    (00:00) Intro: an overview of operational excellence
    (04:13) Obstacles to executing with operational excellence
    (05:51) An overview of the Snowflake playbook for operational excellence
    (08:25) Who does the work of reaching out to customers
    (09:06) The importance of customer engagement
    (10:19) How Snowflake does customer engagement
    (14:13) The types of feedback received and the two camps (supporters and detractors)
    (16:55) How to influence detractors and how detractors actually help
    (18:27) Using insiders as messengers
    (22:48) An overview of Snowflake’s customer advisory board
    (26:10) The importance of meeting in person (learnings from Warsaw and Berlin office visits)
    (28:08) Managing up
    (30:07) How planning is done at Snowflake
    (36:25) Setting targets for OKRs, and Snowflake’s philosophy on metrics
    (39:22) The annual plan and how it’s shared
    Referenced:
    • CTO buy-in, measuring sentiment, and customer focus
    • Snowflake
    • Benoit Dageville - Snowflake Computing | LinkedIn
    • Thierry Cruanes - Snowflake Computing | LinkedIn

    45 min
  6. The biggest obstacles preventing GenAI adoption — and how to overcome them

    JUN 6


    In this episode, Abi Noda speaks with DX CTO Laura Tacho about the real obstacles holding back AI adoption in engineering teams. They discuss why technical challenges are rarely the blocker, and how fear, unclear expectations, and inflated hype can stall progress. Laura shares practical strategies for driving adoption, including how to model usage from the top down, build momentum through champions and training programs, and measure impact effectively—starting with establishing a baseline before introducing AI tools.
    Where to find Laura Tacho:
    • LinkedIn: https://www.linkedin.com/in/lauratacho/
    • Website: https://lauratacho.com/
    Where to find Abi Noda:
    • LinkedIn: https://www.linkedin.com/in/abinoda
    In this episode, we cover:
    (00:00) Intro: The full spectrum of AI adoption
    (03:02) The hype of AI
    (04:46) Some statistics around the current state of AI coding tool adoption
    (07:27) The real barriers to AI adoption
    (09:31) How to drive AI adoption
    (15:47) Measuring AI’s impact
    (19:49) More strategies for driving AI adoption
    (23:54) The methods companies are actually using to drive impact
    (29:15) Questions from the chat
    (39:48) Wrapping up
    Referenced:
    • DX Core 4 Productivity Framework
    • The AI adoption playbook: Lessons from Microsoft's internal strategy
    • Microsoft CEO says up to 30% of the company's code was written by AI | TechCrunch
    • Viral Shopify CEO Manifesto Says AI Now Mandatory For All Employees
    • DORA | Impact of Generative AI in Software Development
    • Guide to AI assisted engineering
    • Justin Reock - DX | LinkedIn

    42 min
  7. DORA’s latest research on AI impact

    MAY 23


    In this episode, Abi Noda speaks with Derek DeBellis, lead researcher at Google’s DORA team, about their latest report on generative AI’s impact on software productivity. They dive into how the survey was built, what it reveals about developer time and “flow,” and the surprising gap between individual and team outcomes. Derek also shares practical advice for leaders on measuring AI impact and aligning metrics with organizational goals.
    Where to find Derek DeBellis:
    • LinkedIn: https://www.linkedin.com/in/derekdebellis/
    Where to find Abi Noda:
    • LinkedIn: https://www.linkedin.com/in/abinoda
    In this episode, we cover:
    (00:00) Intro: DORA’s new Impact of Gen AI report
    (03:24) The methodology used to put together the surveys DORA used for the report
    (06:44) An example of how a single word can throw off a question
    (07:59) How DORA measures flow
    (10:38) The two ways time was measured in the recent survey
    (14:30) An overview of experiential surveying
    (16:14) Why DORA asks about time
    (19:50) Why Derek calls survey results ‘observational data’
    (21:49) Interesting findings from the report
    (24:17) DORA’s definition of productivity
    (26:22) Why a 2.1% increase in individual productivity is significant
    (30:00) The report’s findings on decreased team delivery throughput and stability
    (32:40) Tips for measuring AI’s impact on productivity
    (38:20) Wrap up: understanding the data
    Referenced:
    • DORA | Impact of Generative AI in Software Development
    • The science behind DORA
    • Yale Professor Divulges Strategies for a Happy Life
    • Incredible! Listening to ‘When I’m 64’ makes you forget your age
    • Slow Productivity: The Lost Art of Accomplishment without Burnout
    • DORA, SPACE, and DevEx: Which framework should you use?
    • SPACE framework, PRs per engineer, AI research

    40 min
  8. Setting targets for developer productivity metrics

    MAY 9


    In this episode, Abi Noda is joined by Laura Tacho, CTO at DX, engineering leadership coach, and creator of the Core 4 framework. They explore how engineering organizations can avoid common pitfalls when adopting metrics frameworks like SPACE, DORA, and Core 4. Laura shares a practical guide to getting started with Core 4—beginning with controllable input metrics that teams can actually influence. The conversation touches on Goodhart’s Law, why focusing too much on output metrics can lead to data distortion, and how leaders can build a culture of continuous improvement rooted in meaningful measurement.
    Where to find Laura Tacho:
    • LinkedIn: https://www.linkedin.com/in/lauratacho/
    • Website: https://lauratacho.com/
    Where to find Abi Noda:
    • LinkedIn: https://www.linkedin.com/in/abinoda
    In this episode, we cover:
    (00:00) Intro: Improving systems, not distorting data
    (02:20) Goal setting with the new Core 4 framework
    (08:01) A quick primer on Goodhart’s law
    (10:02) Input vs. output metrics—and why targeting outputs is problematic
    (13:38) A health analogy demonstrating input vs. output
    (17:03) A look at how the key input metrics in Core 4 drive output metrics
    (24:08) How to counteract gamification
    (28:24) How to get developer buy-in
    (30:48) The number of metrics to focus on
    (32:44) Helping leadership and teams connect the dots to how input goals drive output
    (35:20) Demonstrating business impact
    (38:10) Best practices for goal setting
    Referenced:
    • DX Core 4 Productivity Framework
    • Engineering Enablement Podcast
    • DORA’s software delivery metrics: the four keys
    • The SPACE of Developer Productivity: There’s more to it than you think
    • DevEx: What Actually Drives Productivity
    • DORA, SPACE, and DevEx: Which framework should you use?
    • Goodhart's law
    • Nicole Forsgren - Microsoft | LinkedIn
    • Campbell's law
    • Introducing Core 4: The best way to measure and improve your product velocity
    • DX Core 4: Framework overview, key design principles, and practical applications
    • DX Core 4: 2024 benchmarks - by Abi Noda

    43 min
5 out of 5 (38 ratings)

