The Vernon Richard Show

Vernon Richards and Richard Bradshaw

Vernon Richards and Richard Bradshaw discuss all things software testing, quality engineering and life in the world of software development. Plus our own personal journeys navigating our careers and lives.

  1. 6 AI Tool Ideas That Will Transform How You Test

    MAR 2

    6 AI Tool Ideas That Will Transform How You Test

    In this episode, Richard and Vernon explore the evolving concept of automation in quality, especially in the context of AI and Gen AI. They discuss how new technologies are blurring the lines between testing and quality, and what this means for the future of software development and testing practices.

    00:00 - Intro
    00:52 - Welcome and weekly catch-up
    01:11 - Vern's deep dive into the AI rabbit hole
    02:39 - Rich's quiet(er) work week, new threads, and dentists
    04:15 - Richard buys a domain and we start the pod proper
    06:09 - Tool idea #1: Using an LLM to evaluate user stories and acceptance criteria automatically (a rough sketch follows these show notes)
    07:35 - Is analysing a story "testing" or "quality"? The ISTQB static analysis debate
    10:27 - Vernon's diabetes analogy: AI is forcing us to finally do what we always said we should
    12:19 - Better stories = better testing: how quality work amplifies everything downstream
    13:11 - Tool idea #2: "If we made this change, what areas of the system would be impacted?"
    14:23 - Distilling years of system knowledge into 5–10 questions an agent could ask
    18:37 - Tool idea #3: The PR Analyser — summarising code changes through a testing and quality lens
    21:45 - Vernon's "1 unit of effort, 5 units of testing" — the quality multiplier effect
    23:29 - Comparing story analysis to actual implementation: where did understanding diverge?
    24:43 - Tool idea #4: Dynamic test selection — cherry-picking the right tests to run first
    27:05 - Tool idea #5: An agent that analyses failed builds and attempts to fix them
    27:28 - Why Richard's first attempt always "fixed" the test instead of the code (and what was missing)
    29:21 - Dan's AI agents: one thinking partner, one employee monitoring production
    32:42 - The documentation goldmine: why AI-generated RCA notes might matter more than the fix
    33:39 - Tool idea #6: A holistic quality dashboard pulling insights across stories, code, tests, and process
    36:43 - John Cutler on context: it's not data you pass around — it's formed through interaction
    40:43 - More options than ever: whether it's testing, quality, or static analysis — you can do it differently now
    41:56 - The real skill: spotting the opportunity to make yourself more effective
    42:30 - GeePaw Hill's Lump of Code Fallacy and why task analysis matters
    43:34 - Why Richard got into automation: efficiency, not because he was told to
    45:03 - Vernon's big question: in a world where agents can do everything, what's your performance review about?
    46:52 - Context, craft, and product knowledge can't be delegated to tools yet
    48:29 - Call to action: What are you building? What tools couldn't you build before that you can now?
    49:29 - Upcoming: Test Automation Days and PeersCon Live in Nottingham

    Links to stuff we mentioned during the pod:

    04:15 - Automation in Quality: Richard bought the automationinquality.com domain! The concept explored throughout this episode.
    05:28 - Kalpesh Sodha aka Kalps: Shout out to Richard's colleague who played devil's advocate on the "is it testing or quality?" question
    07:31 - Static analysis
    29:44 - Dan "The Agile Guy" Elliott: His post about how he uses AI agents as a "thinking partner" and an "employee" with different missions and capabilities. Dan's website, Dan's LinkedIn
    36:52 - John Cutler: John Cutler's piece on how context isn't just data you move around — it's formed through interaction between people. John's newsletter, John's LinkedIn
    42:37 - Rob Sabourin: My quick Perplexity search for Rob's public material on Task Analysis. Rob's LinkedIn
    42:45 - Michael "GeePaw" Hill: His Lump of Code Fallacy, the idea that coding isn't just one activity — there are three flavours of work that occur when you code. Michael's website, Michael's Mastodon
    49:35 - Test Automation Days: Richard will be keynoting at Test Automation Days. Make sure you say hi if you're there!
    50:10 - PeersCon: Vernon and Richard will be recording a live episode at PeersCon! If you're there, come say hi and grab a mic 🎙️
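    As a purely illustrative aside, here is what Tool idea #1 might look like as a first experiment. This is a minimal sketch, not anything built on the show: the model name, prompt wording, and example story are all assumptions, and it presumes the official openai Python package with an OPENAI_API_KEY set in your environment.

    ```python
    # Hypothetical sketch of Tool idea #1: asking an LLM to review a user story.
    # Assumes the official `openai` package and an OPENAI_API_KEY env var;
    # the model name, prompt, and story are illustrative placeholders.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment

    story = """As a customer, I want to reset my password
    so that I can regain access to my account.
    Acceptance criteria:
    - A reset link is emailed to the registered address
    - The link expires after 24 hours
    """

    prompt = (
        "Review this user story and its acceptance criteria from a testing and "
        "quality perspective. List ambiguities, missing edge cases, and "
        "anything that is untestable as written:\n\n" + story
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    ```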

    51 min
  2. Six Principles of Automation in Testing: Still Relevant in 2026?

    FEB 23

    Six Principles of Automation in Testing: Still Relevant in 2026?

    In this episode, Richard Bradshaw and Vernon discuss the relevance and application of the six principles of automation in testing in the context of AI advancements. They explore how these principles hold up in 2026, the challenges faced in automation, and the future of testing strategies.

    00:00 - Intro
    01:47 - Welcome (Richard is not at home 👀)
    02:07 - Ramadan, cooking without tasting, and plastic teeth 🦷
    04:01 - Today's topic: revisiting the AiT principles ahead of a keynote
    04:58 - What is Automation in Testing (AiT)?
    06:49 - Principle 1: Supporting Testing over Replicating Testing
    07:01 - Vernon's take: testing is a performance, not a click sequence
    08:22 - What the industry promised vs what automation actually does
    08:49 - The serendipity you lose when a human isn't testing
    09:59 - Agentic testing: observing more, but still not replicating humans
    10:56 - The danger of anthropomorphising AI output
    12:10 - LLMs always give an answer — and that's the problem
    13:03 - Principle 2: Testability over Automatability
    13:14 - Vernon's take: narrow vs broad — operate, control, observe
    14:38 - Making apps automatable for the robots but not the humans
    15:37 - The shiniest framework in a broken testing context
    16:40 - If it's testable, it's probably automatable — but not vice versa
    16:55 - Automation strategy vs testing strategy: when they compete, everyone loses
    17:46 - The problem has always been testing, not automation
    19:57 - Principle 3: Testing Expertise over Coding Expertise
    20:18 - Vernon's take: testing expertise lets you leverage the tools
    21:47 - The spoonfed tests problem: great at automating, lost without guidance
    22:36 - The "code school" era: everyone told to learn to code
    22:51 - Coding agents have changed the maths on this
    26:01 - The new nuance: test design and framework knowledge over writing the code
    28:44 - Evaluating code is a testing problem — and LLMs can help you do it
    30:43 - Are agents as good as a junior developer?
    31:42 - Outcome Engineering (O16G) and the race to write the AI principles
    32:13 - Simon Wardley: we're in the wild west again
    33:22 - Principle 4: Problems over Tools
    33:29 - Vernon's take: the hammer and the nail
    34:07 - Don't let your problems be shaped by the framework you have
    34:36 - New automation opportunities beyond testing: PRs, logs, story review
    35:30 - Principle 5: Risk over Coverage
    36:12 - Vernon's take: 100% coverage ≠ 100% risk coverage
    38:00 - The one test case, one automated test fallacy
    39:04 - Where in the system is the risk? Do you even know your layers?
    39:49 - Probabilistic vs non-deterministic: refining the language around AI
    40:53 - Coverage as intentional vs coverage as a number someone picked once
    43:15 - Principle 6: Observability over Understanding
    43:24 - Vernon's take: just-in-time understanding vs reading everything upfront
    44:12 - What the principle was actually about: making automation results observable
    47:00 - Does this principle belong in testing, or has it grown into quality?
    49:00 - So... what's missing?
    50:00 - The four pillars: Strategy, Creation, Usage, and Education
    57:05 - Automation in Quality: the bigger opportunity
    01:01:00 - Wrap up + Vern's Lead Dev panel

    Links to stuff we mentioned during the pod:

    04:00 - Automation in Testing (AiT): The principles live at automationintesting.com. AiT was co-created by Richard Bradshaw and Mark Winteringham
    04:00 - Test Automation Days: The conference where Richard is giving his keynote — testautomationdays.com
    24:48 - James Thomas: The "kid in a candy shop" himself — James's blog and LinkedIn
    31:42 - Outcome Engineering (O16G): The article Richard shared before recording — worth tracking down if you're interested in where agentic development practices are heading
    32:13 - Simon Wardley: If you're not following Simon Wardley, please follow Simon Wardley! His work on Wardley Maps and situational awareness in strategy is essential reading. Simon's LinkedIn
    43:30 - Abby Bangser: Vern's go-to person for all things observability. Abby's LinkedIn
    46:04 - Noah Sussman: As it turns out, the quote Vern references (advanced monitoring as "indistinguishable from testing") was not by Noah! It was Ed Keyes at GTAC 2007. Noah's blog and LinkedIn
    59:30 - Angie Jones: Vern's been reading Angie's work on testing AI-enabled applications here and here. Angie's website and LinkedIn
    01:01:30 - The Lead Dev panel Vernon will be part of: "How to Measure the Business Impact of AI" — happening 25th February, free to sign up
    01:02:00 - Richard's Selenium Conf talk: "Redefining Test Automation" — the talk that the Test Automation Days keynote is shaping up to be a spiritual successor to.

    1h 3m
  3. This Was Supposed to Be About Testing

    JAN 26

    This Was Supposed to Be About Testing

    This was supposed to be about testing. Instead, it turned into a conversation about burnout, money, leadership, community, AI, and what it actually takes to build a sustainable life in tech. Richard and Vernon kick off 2026 reflecting on what they're changing, what they're rebuilding, and how testing and quality fit into a future shaped by intention rather than hustle.

    Links to stuff we mentioned during the pod:

    05:19 - The Malazan Book of the Fallen by Steven Erikson
    14:59 - The $1k Challenge by Ali Abdaal that Vernon took part in last year
    17:23 - The video from Daniel Pink on how to have a successful year. Here's where Daniel talks about having a Challenger Network (but the whole video is 😙🤌🏾)
    18:46 - Toby Sinclair: Toby's website, Toby's LinkedIn
    19:24 - Keith Klain: Keith's blog, Keith's podcast, Keith's LinkedIn
    19:25 - Agile Testing Days conference
    35:45 - What is Model Drift?
    41:06 - Glue work: Tanya's Glue Work presentation, which you can read or watch. Vernon's talk about how glue work impacts Quality Engineers, Testers, etc.
    48:06 - Gary "GaryVee" Vaynerchuk: Gary's website, Gary's YouTube

    00:00 - Intro
    00:54 - Greetings & where have we been?
    01:32 - The holidays
    02:34 - Rest & mood
    04:00 - Routines for success
    05:59 - Push-up challenge!
    08:35 - Dopamine detox
    10:28 - THE EPISODE BEGINS!
    10:29 - What are our personal 2026 themes (rather than resolutions)?
    10:59 - Rich's 2026 themes
    13:10 - Vern's themes
    17:58 - Friendship, loneliness, and being the initiator
    21:28 - Rich has two itches. One about writing...
    21:56 - ...and another about hats
    25:23 - Vern's leadership focus and testing foundations
    31:06 - AI work: data mindset, agents, and the vibe coding divide
    40:11 - Rant about AI testing being stuck in the past
    46:37 - Do "cool" shit and "talk" about it. How to stand out from AI slop
    50:10 - Our podcast themes for 2026

    54 min
  4. Shifting Left: Agile vs. Waterfall in QA

    10/21/2025

    Shifting Left: Agile vs. Waterfall in QA

    In this episode of The Vernon Richard Show, the hosts engage in light-hearted banter about football before diving into a deep discussion on QA, QE, and testing. They explore the concept of 'shift left' in software development, comparing its application in agile versus waterfall methodologies. The conversation shifts to the evolving roles of QA and QE in the context of AI's impact on the industry, emphasizing the importance of task analysis and building a quality culture within teams. The episode concludes with reflections on managing expectations in QA roles and the future of jobs in the field.

    00:00 - Intro
    00:48 - Welcome and "Hey" (may contain traces of ⚽️)
    04:45 - Olly's first question: Does shift left lend itself more to waterfall (than other methodologies)?
    14:41 - Olly's second question: Does this limit how much agile can be used? Is there potentially a new methodology that can emerge from this?
    22:31 - Olly's third question (remixed by Rich a little): "...is it more now a case of making people aware that they can, should be considering things ahead of development?"
    34:24 - Olly's fourth question: How far can you shift left before it becomes overstepping?
    51:53 - Olly's... which question is this now?! Next question! That works!: Where does the QA role end?

    Links to stuff we mentioned during the pod:

    04:26 - Olly Fairhall: Olly's LinkedIn. Here's a link to what Olly sent us
    04:45 - Waterfall (in software development): Wikipedia article about the history of the term. This article goes into a little more detail about the different phases and characteristics of the model
    07:29 - Dan Ashby's (yes, DAN'S!) famous diagram is part of his often-cited "Continuous Testing" post
    07:50 - For folks who don't understand that reference, it's... a taken (🥁) scene from the movie Taken
    08:10 - Rich's whiteboard used to get a lot more love 😞
    22:31 - Olly's questions and thoughts that are guiding our conversation. Thanks Olly!
    44:12 - The book "Who Not How" by Dan Sullivan and Dr. Benjamin Hardy
    46:33 - Elisabeth Hendrickson: Get Elisabeth's excellent book Explore It! Elisabeth's LinkedIn
    46:49 - Alan Page: Alan's newsletter, Alan and Brent's podcast, Alan's LinkedIn
    51:53 - Kelsey Hightower: Kelsey did a Q&A at Cloud Native PDX and you can listen to the question and answer I was trying to describe here. I urge you to listen to the whole thing. Kelsey is an excellent orator, storyteller, and all-around human ❤️
    55:33 - Rob Sabourin: My quick Perplexity search for Rob's public material on Task Analysis. Rob's LinkedIn
    56:59 - Vernon's newsletter "Yeah But Does it Work?!": The issue mentioned is called "What Is The Vaughn Tan Rule and How Does It Impact Testing?" and talks about where we might start with unbundling

    1 hr
  5. Measuring Software Testing When The Labels Don’t Fit

    10/01/2025

    Measuring Software Testing When The Labels Don’t Fit

    This episode is about the struggle to explain, measure, and name the work testers and quality advocates actually do — especially when traditional labels and metrics fall short.

    Links to stuff we mentioned during the pod:

    05:05 - Defect Detection Rate (DDR): The rate at which bugs are detected per test case (automated or manual): (No. of defects found by the test team / No. of test cases executed) × 100. A worked example follows these notes.
    15:06 - David Evans' LinkedIn
    24:57 - Janet Gregory: Janet's website, Janet's LinkedIn
    26:01 - Defect Prevention Rate: Perplexity search results here
    28:28 - Jerry Weinberg: Jerry's Wikipedia page (his books are highly recommended)
    49:33 - Shift-Left: The concept of moving testing activities earlier in the software development lifecycle. Some resources explaining the Shift-Left concept (Perplexity link)

    00:00 - Intro
    01:11 - Welcome & "woke" testing 😳
    03:15 - QA, QE, Testing… whatever we call it, how do we measure if we're doing a good job?
    03:44 - Vernon's first experience with testing metrics: more = better?
    05:00 - Defect Detection Rate enters the chat
    06:41 - Rich reverse engineers quality skills needed in the AI era
    10:54 - How do we know if we're doing any of this well?
    12:40 - Trigger warning: the topic of coverage is incoming 😅
    16:54 - Bugs in production
    21:09 - Automation metrics: flakiness, pass rates, and execution time
    24:29 - Can you measure something that didn't happen? (Prevention metrics)
    27:43 - Do DORA metrics actually measure prevention?
    32:03 - Here comes Jerry!
    33:50 - The one metric the business cares about...
    36:23 - QA vs QE: whose "quality" are we "assuring"?
    39:25 - What's the story behind the numbers?
    48:29 - Rich brings in Shift Left Testing
    50:14 - Metrics that reach beyond engineering
    53:14 - Rich gets a new perspective on QE and the business
    56:50 - Who does this work? Testers? QEs? Or someone else?
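    To make the DDR formula above concrete, here is a minimal worked example. The numbers are made up purely for illustration.

    ```python
    # Defect Detection Rate as defined in the show notes:
    # (defects found by the test team / test cases executed) * 100
    def defect_detection_rate(defects_found: int, test_cases_executed: int) -> float:
        if test_cases_executed == 0:
            raise ValueError("test_cases_executed must be greater than zero")
        return (defects_found / test_cases_executed) * 100

    # Illustrative numbers only: 12 defects found across 300 executed test cases.
    print(f"DDR: {defect_detection_rate(12, 300):.1f}%")  # -> DDR: 4.0%
    ```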

    1 hr
  6. When Everything Sounds Like Testing… How Do You Explain What You Really Do?

    09/09/2025

    When Everything Sounds Like Testing… How Do You Explain What You Really Do?

    In this episode, Richard and Vernon delve into the complexities of Quality Assurance (QA), Quality Engineering (QE), and testing in software development. They explore the evolution of these concepts, their interrelations, and the importance of metrics in assessing quality. The conversation highlights the need for a holistic approach to quality, emphasizing that both prevention and detection of bugs are essential. The hosts also discuss the challenges of defining these terms and the future of quality in the industry.

    Links to stuff we mentioned during the pod:

    08:50 - Dan Ashby: We're referring to Dan's excellent post called "Continuous Testing" (featuring his famous diagram!)
    17:13 - Jit Gosai: Jit's blog, Jit's Quality Engineering Newsletter, Jit's LinkedIn
    19:24 - Quality Talks Podcast: Stu's Quality Talks podcast that he co-hosts with Chris Henderson. Stu's LinkedIn, Chris's LinkedIn
    19:55 - The Testing Peers podcast
    22:00 - DORA Metrics: DORA metrics are a set of key performance indicators developed by Google's DevOps Research and Assessment team to measure the effectiveness of software delivery and DevOps processes, focusing on both throughput and stability
    26:13 - A link from Episode 10 where Vern discusses Glue Work (be sure to check out the show notes on that episode). Quick overview of DORA metrics
    34:43 - The Credibility Playbook: A video course by Vernon as he experiments with building digital products. Check it out and let him know what you think of it! 😊
    46:24 - Ali Abdaal: Ali's website, Ali's YouTube

    00:00 - Intro
    01:36 - Welcome
    02:40 - Today's topic: What the hell is QA? QE? Testing? And is it all changing?
    03:00 - Why is this bugging Rich?
    05:11 - Fruit fly tangent 🍌🍊🍎🪰🐝🦋
    06:27 - Rich's take on QA, QE, and Testing
    08:31 - Vern's take on QA, QE, and Testing
    11:15 - Is shift-left testing the same as QE?
    13:05 - When the team tests early... is that QE then?!
    16:18 - What's the big deal if we can't define QE clearly?
    19:27 - Why the Efficiency Era makes this even harder
    22:55 - Trying to draw the Testing, QA, QE Venn diagram
    27:24 - Getting the QA, QE, Testing blend just right. What's the right mix?
    29:52 - The kinds of work we take on as our careers grow
    34:08 - What Testers get rewarded for
    45:34 - How Ali Abdaal helped Vern think differently about quality
    48:18 - Rich talks measurement

    54 min
  7. Embedding Quality Using AI

    08/26/2025

    Embedding Quality Using AI

    In this conversation, Vernon and Richard explore the evolving role of AI in quality engineering and software development. They discuss how AI can enhance quality control processes, the importance of embedding quality early in the development cycle, and the potential challenges and opportunities that arise from integrating AI tools. The conversation also touches on the need for skill development and community engagement in adapting to these changes, as well as the implications for roles within the industry. Description and thumbnail made with AI to assess the quality — we had to!

    00:00 - Intro
    01:02 - Welcome and footy ⚽️
    02:15 - Today's topic: The impact that AI may or may not have on Quality Engineering
    03:22 - Rich's wild idea about AI and software quality
    14:10 - Vern asks a clarifying question
    22:45 - Communities of excellence… for machines?!
    24:03 - Vern thinks there's an obvious risk that follows from this idea...
    31:31 - Rich addresses the risk (Oracles, prompts, and tester superpowers)
    36:13 - Reflection: the hidden skill AI forces on us
    41:40 - Shifting in all directions (not just left)
    43:04 - Feeding your past self into an AI: smart or scary?
    45:53 - Operation 400 subscribers (and bot listeners)
    47:13 - Tony Bruce calls us out on sloppy show notes and outro

    Links to stuff we mentioned during the pod:

    04:18 - Shift-Left: The concept of moving testing activities earlier in the software development lifecycle. Some resources explaining the Shift-Left concept (Perplexity link)
    25:35 - Rob Bowley: Rob's LinkedIn. The post Vernon referred to... ...and a follow-up post not long after that one too!
    26:40 - Alan Page: Alan and Brent's podcast, Alan's LinkedIn
    34:43 - Saskia Coplans: Digital Interruption, Saskia's cybersecurity consultancy. REXscan, Saskia's automated mobile application vulnerability scanner. Saskia's LinkedIn (highly recommended follow)
    41:49 - Paul Coles: Paul Coles published 3 of his 4-part series "The Subtle Art of Herding Cats" over on Dev.to. Recommended reading! Paul's LinkedIn
    43:09 - Maaret Pyhäjärvi: Maaret's website, Maaret's blog, Maaret's LinkedIn

    48 min
  8. Six Hard Lessons From Building With AI Agents

    08/04/2025

    Six Hard Lessons From Building With AI Agents

    In this episode of The Vernon Richard Show, the hosts discuss their experiences with AI tools and agents, focusing on the challenges and lessons learned from using these technologies in coding and software engineering. They explore best practices for utilizing AI effectively, the importance of context in interactions with AI, and the future of AI agents in the workplace. The conversation highlights the balance between leveraging AI for efficiency while maintaining control and understanding of the underlying processes.

    Links to stuff we mentioned during the pod:

    09:16 - The LinkedIn post talking about Replit messing with someone's production code 😳. And the link to the thread from the person who went through it. The tool in question, Replit
    13:01 - Rich's LinkedIn post with his tips
    14:21 - GitHub Copilot
    18:09 - VS Code
    29:01 - Folks at different ends of the "AI Enthusiasm Spectrum". On the enthusiastic end: Jason Arbon is on the positive side and is always creating something interesting like... testers.ai. On the unenthusiastic end: Keith Klain has created a reading list to help get us up to speed... Keith's AI reading list. You can see his full resources list here. Maaike Brinkhof has a bunch of thought-provoking posts on the topic... like this one and this one
    34:44 - Want to know what "conflabulation" means? Listen to Martin explain it on the Ghost in th code podcast (that's not a typo!)
    37:24 - What is Context Engineering? Perplexity has answers!
    46:38 - The legendary Lt. Geordi La Forge from Star Trek: The Next Generation
    51:48 - After recording, the very cool Paul Coles published his article "The Subtle Art of Herding Cats: Why AI Agents Ignore Your Rules" (Part 1 of 4), explaining the topic of Context Engineering. It's brilliant!
    59:04 - The promises of technology over the years...
    60:50 - The always insightful Meredith Whittaker of Signal fame, where she is the president and serves on its board of directors, explains the privacy and security concerns with agentic technology. Watch the clip, then go back and watch the whole thing!

    00:00 - Intro
    01:17 - Welcome
    01:30 - TANGENT BEGINS... All kinds of egregious waffling follows. Skip to the actual content at 08:34
    01:31 - Rich vs Tree Stump
    01:57 - What on earth did Rich need the pulley for?
    02:26 - Vern's nerdy confession and pulley confusion
    02:52 - Does Rich live next door to Tony Stark?!
    03:22 - What to do when you need a steel RSJ
    03:35 - We admit defeat.
    03:36 - Welcome to Rich's Garden Adventures Podcast!
    07:25 - What has Vern been up to?
    08:34 - We attempt to segue into the episode at last!
    08:35 - TANGENT ENDS...
    08:51 - Rich's POC: using agents to help build AI tools
    09:45 - The Replit disaster: vibe coding meets deleted production data
    11:12 - Sociopathic assistants and the case for AI gaslighting
    11:55 - Vernon wants his team experimenting with AI tools
    12:50 - Rich explains the context for his latest AI adventures
    13:18 - Rich's bench project and "putting the engineering hat on"
    15:22 - Setting up the stack and staying in control
    16:53 - A familiar story: things were going fine until they weren't
    17:00 - Ask vs Edit vs Agent mode in Copilot explained
    19:06 - The innocent linting error that spiralled out of control
    21:16 - Stuck in a loop: "I didn't know what it was doing, but I let it keep going"
    22:11 - The fateful click: "I'm going to reset the DB"
    23:10 - The aftermath: no data, no damage… but very nearly
    23:33 - Security wake-up call: agents are acting as you
    24:39 - You can't fix what you don't know it broke
    25:52 - Can you interrupt an agent mid-task?
    27:14 - When agents get "are you sure?" moments
    28:15 - Tea breaks as a dev strategy: outsourcing work to agents
    29:24 - Jason Arbon vs Keith & Maaike: where Rich sits on the AI enthusiasm spectrum
    30:41 - Tip 1: The first of Rich's 6 agent tips: commit after every interaction
    32:12 - Why trusting the "keep all" button is risky
    34:01 - Writing your own commits vs letting the agent do it
    35:26 - When agents lose the plot: reset instead of fixing
    36:55 - "You're insane now, GPT. I'm giving you a break."
    37:54 - Tip 2: Make the task as small as possible
    39:59 - The middle ground between 'ask' and full agent delegation
    41:12 - Tip 3: Ask the agent to break the task down for you
    43:36 - The order matters: why you shouldn't start with the form UI
    44:33 - Vernon compares it to shell command pipelines
    45:09 - It can now open browsers and run Playwright tests (!)
    46:23 - Star Trek and the rise of the engineer-agent hybrid
    47:57 - Tips 4–6: Test often, review the code, use other models
    49:39 - Pattern drift and the importance of prompt templates
    50:51 - Vernon's nemesis: m dashes, emojis, and being ignored by GPT
    51:48 - Context engineering vs prompt engineering
    52:43 - When codebases get too big for agents to cope
    53:40 - Why agents sometimes act dumber than your IDE
    54:32 - The danger of outsourcing good practices to AI
    54:48 - Spoilers: Rich's upcoming keynote at TestIt
    55:01 - Agents don't ask why — they just keep going
    56:42 - Goals vs loops: when failure isn't part of the plan
    58:32 - The question of efficiency: is training agents worth it?
    59:47 - Rich's take: we'll buy agents like we buy SaaS
    61:08...

    1h 8m

About

Vernon Richards and Richard Bradshaw discuss all things software testing, quality engineering and life in the world of software development. Plus our own personal journeys navigating our careers and lives.
