Developers Who Test

Testery, Inc

A podcast for developers who ship better software. We talk about all things software testing.

  1. 1D AGO

    The Economics of Testing: Making the Business Case for Quality with Vitaly Sharovatov

    In this episode, Chris Harbert sits down with Vitaly Sharovatov, a seasoned developer and engineering manager with over 22 years of experience. Vitaly serves as a developer advocate at Qase, a test case management platform, and has written extensively about AI, testing methodology, and the economics of software quality. The conversation tackles a question every quality advocate faces: how do you convince leadership to invest in testing? Vitaly shares practical frameworks for quantifying the business value of quality and making the case for prevention over firefighting.

    Key topics covered:
    - Why developers implicitly do testing already, and why they should understand it deeply
    - A simpler approach: quantifying the costs of bad quality you're already paying (support calls, lost sales, maintenance overhead)
    - The social dynamics of selling quality ideas: finding allies and helping managers "show off" cost savings
    - When to automate vs. when to test manually: understanding the economic inflection point
    - The hidden costs of poor quality on team morale, burnout, and employee retention

    Vitaly shares real-world examples, including a dating app where automated tests passed but a critical button was hidden below the viewport, and an insurance company that staffed 300 people for quarters to work around a poorly tested API. The episode wraps up with a key insight: most quality problems have social roots within organizations. Success requires not just good testing practices, but the ability to win allies, understand incentives, and sell ideas to stakeholders who aren't always rational economic actors. Whether you're trying to justify a testing initiative to leadership, optimize your team's approach to quality, or simply understand the true cost of defects, this episode provides a practical economic lens for thinking about software testing. Find Vitaly at beyondquality.org, a non-commercial community focused on collaborative research into testing economics, or connect with him on LinkedIn.

    44 min
  2. FEB 10

    From Broadway Drummer to Senior SDET: Angel Williams on AI-Assisted Testing, Flaky Tests, and the QA Mindset

    In this episode of Developers Who Test, host Chris Harbert sits down with Angel Williams, Senior SDET at CHG Healthcare, one of the largest healthcare staffing companies in the US. Angel's journey into software quality is unlike any other: she started as a percussionist trying to make it on Broadway before discovering a knack for debugging deployment scripts during IT contract work.

    The conversation explores the unique personality traits that draw people to quality engineering. Chris shares his discovery that every member of one of his QA teams scored high on "restorative" in StrengthsFinder, the same trait that had Angel taking apart the family stereo as a kid just to understand how it worked.

    Angel provides insight into testing in healthcare, where privacy and security aren't just nice-to-haves; they're essential. She explains how protecting both provider and patient data shapes testing strategies at CHG, from scrubbing logs to ensuring sensitive information never travels over live wires.

    The discussion takes a deep dive into AI-assisted testing. Angel shares practical examples of using Claude Code with Playwright's MCP integration to build performance dashboards and analyze code for risks. She emphasizes that AI shines brightest not when writing tests, but when helping SDETs understand unfamiliar code, identify risks, and, perhaps most valuably, keep documentation up to date. "Every time I look at a PR with major changes, I ask AI if the README reflects the new code," she explains.

    Chris and Angel swap war stories about flaky tests, including Angel's mysterious 5 PM failures that turned out to be a timezone shift issue, exactly matching one of the patterns in Chris's "14 Reasons for Flaky Tests" presentation. They discuss infrastructure-related flakiness, load balancer issues, and the critical importance of running tests before merge rather than after.

    The episode wraps with a thought-provoking discussion about leveraging MCP servers not just for automation, but for asking questions about quality itself: combining data from Jira, test results, and documentation to get a complete picture of project health.

    Key Topics:
    - The "restorative" personality trait and QA professionals
    - Testing in healthcare: privacy, security, and compliance
    - Practical AI applications for SDETs
    - Running tests before merge vs. after
    - MCP servers as a new layer for quality insights
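    The 5 PM failure pattern mentioned above maps to a classic cause: date logic that mixes local time and UTC only misbehaves during the hours when the two clocks sit on different calendar days. A minimal sketch of the idea (my own illustration, not code from the episode; the UTC-7 CI runner timezone is an assumption):

```python
from datetime import datetime, timedelta

# Illustrative only: a "same day" check that mixes UTC and local dates.
# In a UTC-7 zone, UTC rolls over to the next day at 5 PM local time,
# so this check starts failing every day at exactly 5 PM.
UTC_OFFSET = timedelta(hours=-7)  # assumed CI runner timezone

def dates_agree(now_utc: datetime) -> bool:
    # BUG: compares a UTC calendar date with a local calendar date.
    # The fix is to do all date arithmetic in a single zone (e.g. UTC).
    local_now = now_utc + UTC_OFFSET
    return now_utc.date() == local_now.date()

morning = datetime(2025, 1, 13, 16, 0)  # 9:00 AM local, 4:00 PM UTC
evening = datetime(2025, 1, 14, 0, 30)  # 5:30 PM local, 0:30 UTC next day
print(dates_agree(morning))  # True: test passes all morning
print(dates_agree(evening))  # False: the "mysterious" 5 PM failure
```

    Running the suite before merge, as discussed in the episode, is what surfaces this kind of time-of-day dependence before it lands on a shared branch.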

    46 min
  3. JAN 13

    Developer Productivity Metrics: DORA, SPACE, and What Really Drives Team Performance with Martijn Goossens

    Martijn Goossens is Director of Advisory Services at Cerios, a Dutch QA company with approximately 450 employees. Martijn has about 20 years of experience helping teams improve their quality and implement test automation. He is a regular speaker at developer and software quality conferences.

    In this episode, Chris talks with Martijn Goossens about developer experience, productivity metrics, and what actually drives team performance. Martijn shares insights from his recent conference talk at Hustef and breaks down the key frameworks teams use to measure their effectiveness. The conversation explores the DORA metrics (deployment frequency, lead time for changes, change failure rate, and mean time to recovery) and the SPACE framework (satisfaction and well-being, performance, activity, communication and collaboration, and efficiency and flow). Martijn explains why he prefers DORA for its practical, quantifiable nature, while SPACE tends to be more subjective and developer-focused.

    Key topics include:
    - The Dutch testing community: why the Netherlands has become a hub for software testing innovation and how strong community connections accelerate professional growth
    - Meeting culture and productivity: the value of no-meeting days, the danger of "Swiss cheese calendars," and how to prepare teams for focused work time
    - Hackathons and innovation: different approaches to fostering creativity, from quarterly hackathons to dedicated innovation time, plus Chris's "hackcation" concept
    - Individual vs. team metrics: why metrics should be treated as sensors providing information rather than judgment tools, and the cautionary tale of the "Cobra problem," where rewarding the wrong behaviors leads to perverse outcomes
    - The flight level concept: how management can monitor high-level metrics and only drill down when signals indicate a problem

    Martijn emphasizes that metrics don't tell the whole story; they help you know what questions to ask and who to ask them to. A developer with fewer commits might be the team's primary reviewer or architect, while someone with many commits might just be making small edits. Context matters. The episode wraps up with Martijn's experience speaking at Hustef in Hungary (held in a train museum, complete with miniature train rides) and his upcoming keynote in Tokyo.
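    The four DORA metrics named above can be computed directly from a deployment log. A rough sketch of the arithmetic (my own illustration with made-up data, not material from the episode):

```python
from datetime import datetime, timedelta

# Each record: (deployed_at, commit-to-deploy lead time,
#               caused a failure?, time to restore if it did)
deployments = [
    (datetime(2025, 1, 6),  timedelta(hours=20), False, None),
    (datetime(2025, 1, 8),  timedelta(hours=30), True,  timedelta(hours=2)),
    (datetime(2025, 1, 10), timedelta(hours=10), False, None),
    (datetime(2025, 1, 13), timedelta(hours=16), True,  timedelta(hours=4)),
]

days = (deployments[-1][0] - deployments[0][0]).days or 1
deployment_frequency = len(deployments) / days  # deploys per day
lead_time = sum((d[1] for d in deployments), timedelta()) / len(deployments)
failures = [d for d in deployments if d[2]]
change_failure_rate = len(failures) / len(deployments)
mttr = sum((d[3] for d in failures), timedelta()) / len(failures)

print(f"deployment frequency: {deployment_frequency:.2f}/day")  # 0.57/day
print(f"lead time for changes: {lead_time}")                    # 19:00:00
print(f"change failure rate: {change_failure_rate:.0%}")        # 50%
print(f"mean time to recovery: {mttr}")                         # 3:00:00
```

    This is also where the "sensor, not judgment" framing applies: the numbers flag where to look, and the flight-level idea is to drill into the underlying records only when a metric moves.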

    44 min
