Embedded AI Podcast

A podcast about using AI in embedded systems -- either as part of your product, or during development.

  1. 2 HR AGO

    E13 Ryan visits Luca (and they talk about spec-driven development)

    For E13 we recorded live in Luca's garden in Munich, with Ryan dropping by ahead of Embedded World week. Ryan and Luca talk about spec-driven development in the AI era: where the discipline came from, what changes when an LLM is doing the typing, and the failure modes that show up over and over again in trainings. The short version: vibe coding will get you something that demos beautifully, but the moment a stranger asks "what does this button do?", it tends to expose how little was actually thought through. The conversation circles around a few recurring themes — the iterative loop you cannot skip even when the AI lets you, the temptation to one-shot whole projects, and the awkward fact that the AI itself seems to actively prefer working in waterfall mode. We also get into why requirements engineering and product ownership matter more (not less) with AI in the picture, why TDD doubles as a way of describing the goal to your assistant, and why the engineer staying in the loop — with that loop running tighter and faster — is what actually makes this work in practice. Plus an honest digression about all the ditches Luca has fallen into building Claude Code skills around his daily workflow.

    Key Topics:
    [02:24] Three artifacts of spec-driven development — and what each one means in the AI era
    [05:45] Trainings, vibe-coded games, and the "what does this button do?" moment
    [11:27] Long-lived branches, three months of code in an hour, and why both fail for the same reason
    [22:25] Beningo's multiplier metaphor: what if the engineer's value is between -1 and 1?
    [24:25] Engineering as conversations; curly brackets as a side effect
    [27:17] Why requirements engineering and product ownership become more important with AI
    [30:59] The AI wants waterfall — and you need to fight it
    [33:54] One-shot prompts for whole projects: lying to yourself in five lines
    [37:27] Staying in control: one unit of work, AI in the loop, integrate, done
    [41:32] Filling ditches one at a time: Luca's Claude Code skills setup
    [43:06] TDD as the act of describing the goal — to yourself and to the AI

    Notable Quotes:
    "If you've got a five-line prompt that generates 10,000 lines of code for you, then there's just going to be a lot of blank spots in there, a lot of ambiguity in there. That can't be good." — Luca Ingianni
    "I've had teammates create a long-lived branch and tell me 'I'll see you in a few months.' And I'm like — no. They don't understand how this is going to interact with the rest of the system. And you're basically doing that same thing — writing three months worth of code in an hour. Cool, now what?" — Ryan Torvik
    "Engineering is what happens when engineers talk to one another, and the differential equations and the C++ code are just side effects of those conversations." — Luca Ingianni

    Resources Mentioned:
    embeddedai.academy — Luca's AI trainings for embedded teams
    Agile Embedded Podcast — sister show, more on agile in the embedded world
    Embedded World — annual embedded systems trade fair in Nuremberg, where Ryan was heading next
    Claude Code — the AI coding tool Luca built his "skills" workflow around
    Jacob Beningo's "multiplier" framing for AI in development teams (referenced from E12)

    45 min
  2. 3 APR

    E12 Learning AI-powered development with Jacob Beningo

    We sit down with Jacob Beningo, a real-time embedded systems consultant with 20 years of experience, to talk about what we've learned teaching engineers to use AI in their development workflows. Turns out, the hard part isn't getting AI to write code—it's all the systems engineering that comes before and after. We discuss common mistakes people make when starting out, like treating AI as a magic code generator instead of a pair programming partner, and why you absolutely cannot skip requirements, architecture, and critical thinking just because the AI can type faster than you. Jacob shares stories from his training sessions, including an AI that refused to follow test-driven development because "that would take too long." We explore why AI actually forces you to become a better engineer by taking away the dopamine hit of typing code yourself, and why IDE plugins might be leading people astray by keeping them at the wrong level of abstraction. The conversation gets real about costs—both in tokens and electricity bills—and why the "set it and forget it" YouTube hype doesn't match reality. If you're skeptical about AI in embedded systems, good—keep that skepticism. You're going to need it. 
    Key Topics:
    [03:15] The real challenge: systems engineering, not code generation
    [08:45] Why requirements engineering skills matter more than ever with AI
    [14:20] The push-button module exercise: spending a full day on design before any code
    [16:30] When AI refuses to follow TDD: "That would take too long"
    [22:40] The temperature sensor exercise: when tests pass but the code isn't production-ready
    [28:15] Code is cheap, but experiments aren't free: finding the balance
    [35:50] The hidden costs of AI: token budgets and rising electricity bills
    [42:10] Why IDE plugins might be the wrong interface for AI-assisted development
    [48:30] Using AI as a pair programming partner, not a code completion tool
    [53:20] Keep your skepticism: why critical thinking is more important than ever

    Notable Quotes:
    "The AI finished writing the code and all the tests in like four seconds. I'm like, how was that so fast? Well, I just wrote all the code. You didn't follow the TDD process? No, that would take too long." — Jacob Beningo
    "If you throw in such a vague requirement, the thing can't read their mind and neither can they read its mind. So it's really just a matter of luck what you're going to get." — Luca Ingianni
    "The AI is the best puppy you'll ever have. It'll go pick up the stick for you. And you're like, no, not that stick. What do you mean not that stick? You didn't tell me which stick. I grabbed you the stick." — Ryan Torvik

    Resources Mentioned:
    Jacob Beningo on LinkedIn - Daily posts about embedded systems development and modernization
    Beningo.com - Jacob's consulting and training services for embedded systems
    Embedded Software Academy - Training courses including AI for embedded systems development
    Embedded Online Conference - Annual May conference for embedded systems education and community
    Agile Embedded Podcast Slack - Community discussion channel mentioned by the hosts

    55 min
  3. 20 MAR

    E11 Debugging embedded systems using AI

    Ryan and Luca explore practical techniques for using AI to debug embedded systems -- from analyzing breadboard photos to parsing UART output and managing complex debugging workflows. LLMs work best as force multipliers rather than replacements for engineering expertise: they handle tedious tasks like adding printf statements, analyzing logs, and decoding resistor color codes, while the engineer guides the process and catches mistakes. A key theme is context management: balancing deterministic scripts for repeatable tasks with non-deterministic AI analysis, and using separate sessions to keep debugging focused. We share cautionary tales of LLMs getting stuck in loops or reverting to common patterns despite specific instructions -- human oversight remains essential. Experienced engineers benefit most because they can effectively steer the LLM and recognize when it goes off track.

    Key Topics:
    [02:30] Debugging hardware via photos -- having LLMs identify wiring errors on breadboards
    [06:45] The rubber duck effect -- LLMs as interactive debugging partners
    [11:20] Printf debugging with AI -- adding debug statements and analyzing UART output
    [15:40] Context management -- separate sessions, distilled datasheet summaries
    [22:15] LLM failure modes -- loops, pattern reversion, ignoring specific instructions
    [28:30] Force multiplier vs. replacement -- why experience matters more with AI tools
    [33:50] Deterministic scripts + non-deterministic AI analysis
    [38:20] Future: webcams and oscilloscope screenshots for real-time hardware debugging

    Notable Quotes:
    "It's like the over-eager intern -- it's a little naive, but it can type like the devil." -- Luca Ingianni
    "You cannot use an LLM as a replacement for your brain or for your experience, but you can use it as a force multiplier." -- Luca Ingianni
    "The LLM does not have that emotional connection to the code. I don't think people understand how emotional that connection is." -- Ryan Torvik

    Resources Mentioned:
    Claude Code (Anthropic) -- AI coding assistant used for debugging, image analysis, and code generation
    Arduino/Elegoo Development Boards -- hardware platforms discussed in context of voltage compatibility debugging
    Agile Embedded Podcast -- Luca's podcast on agile practices in embedded development
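The "deterministic scripts + non-deterministic AI analysis" split the hosts describe can be sketched in a few lines of host-side Python. The UART log format, field names, and anomaly threshold below are invented for illustration, not from the episode; the point is that extraction stays fully deterministic and repeatable, and only the flagged readings would be handed to an LLM for interpretation.

```python
import re

# Hypothetical printf-style UART log line: "[00123 ms] temp=24.7 C".
# The format is an assumption for this sketch; adapt the regex to your firmware's output.
LINE_RE = re.compile(r"\[(?P<ms>\d+) ms\] (?P<key>\w+)=(?P<val>-?\d+(?:\.\d+)?)")

def parse_uart_log(lines):
    """Deterministically extract (timestamp_ms, name, value) samples, skipping noise lines."""
    samples = []
    for line in lines:
        m = LINE_RE.search(line)
        if m:
            samples.append((int(m.group("ms")), m.group("key"), float(m.group("val"))))
    return samples

def find_anomalies(samples, limit=30.0):
    """Deterministic pre-filter: only readings over the limit go to the LLM for analysis."""
    return [s for s in samples if s[2] > limit]
```

Keeping extraction in a plain script makes the debugging loop repeatable, and the LLM sees a handful of suspicious samples instead of megabytes of raw serial output.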

    44 min
  4. 6 MAR

    E10 TDD and AI

    Is test-driven development still relevant when AI can generate thousands of lines of code from a prompt? Ryan argues TDD was designed for human limitations -- if AI can generate complete systems, why write tests first? Luca pushes back: tests are your only defense against AI assumptions. Five lines of prompt becoming 10,000 lines of code means 9,995 lines of hidden assumptions that need to be made explicit and verifiable. Luca presents a systematic approach: start with test ideas (behaviors to verify), progress to test outlines (properties and steps in comments), then implement test code before letting AI write production code. This isn't about micromanaging class hierarchies -- it's about maintaining engineering responsibility. TDD becomes even more crucial in the AI era: it's how you communicate intent, capture assumptions, and keep AI-generated code on track.

    Key Topics:
    [02:30] Is TDD obsolete in the age of AI?
    [05:45] Ryan's argument: TDD was designed for human limitations
    [08:20] Tests as defense against AI assumptions and 'success theater'
    [12:15] Hidden assumptions: 5 lines of prompt becoming 10,000 lines of code
    [16:40] Agentic AI coding and the Swiss cheese model of reliability
    [21:30] Systematic approach: test ideas, test outlines, test implementation
    [28:45] Spec-driven development (SDD) and PRDs in AI-assisted coding
    [35:20] Unit tests vs. BDD in the AI context
    [42:10] Why you shouldn't fight the AI over class hierarchies
    [48:30] Weekend projects that become production systems
    [52:45] Building features in an unfamiliar language (Kotlin) using TDD

    Notable Quotes:
    "The AI writing tests for me is the last thing that I want, because that is my only line of defense against the AI doing stupid things." -- Luca Ingianni
    "To go from five lines of prompt to 10,000 lines of code means there are 9,995 lines worth of assumptions in there. And sometimes they are correct and sometimes they are not." -- Luca Ingianni
    "The problem is that weekend projects turn into airplanes. Prototypes always live on." -- Ryan Torvik
    "You are responsible for this code no matter whether you type the curly brackets or the LLM." -- Luca Ingianni

    Resources Mentioned:
    Spec-driven Development (SDD) -- systematic AI-assisted development with three layers: requirements, plan, and tasks
    Behavior-Driven Development (BDD) -- system-level testing through executable specifications (Gherkin language)
    Unciv -- open-source Civilization clone in Kotlin, used as example project for TDD with AI
    Agile Embedded Podcast -- Luca's podcast on agile practices and TDD in embedded systems
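As a concrete illustration of the test ideas, test outlines, test implementation progression Luca describes, here is a minimal Python sketch for a hypothetical ADC-clamping routine. The function name, the 12-bit range, and the behaviors are assumptions made up for this example, not something from the episode; the tests come first, and the implementation is what you would then ask the AI to produce against them.

```python
# Step 1 -- test ideas: behaviors to verify, in plain words.
#   "A raw ADC reading outside the valid range is clamped to the range limits."
#
# Step 2 -- test outline: properties and steps, still only comments.
#   - a reading below ADC_MIN comes back as ADC_MIN
#   - a reading above ADC_MAX comes back as ADC_MAX
#   - in-range readings pass through unchanged
#
# Step 3 -- test code, written before any production code exists.

ADC_MIN, ADC_MAX = 0, 4095  # hypothetical 12-bit ADC range

def clamp_adc(raw):
    """The implementation the AI would be asked to generate against the tests below."""
    return max(ADC_MIN, min(ADC_MAX, raw))

def test_clamp_adc():
    assert clamp_adc(-17) == ADC_MIN   # below range clamps up
    assert clamp_adc(5000) == ADC_MAX  # above range clamps down
    assert clamp_adc(2048) == 2048     # in range passes through
```

The outline comments survive into the final file, so the intent stays visible right next to the assertions the AI-generated code has to satisfy.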

    53 min
  5. 20 FEB

    E09 AI Systems Engineering with Darwin Sanoy

    In this episode, Ryan and Luca sit down with Darwin Sanoy from GitLab to explore the intersection of systems engineering, embedded development, and AI. Darwin brings a wealth of experience from his work in ISO 26262 certification and MIT Systems Architecture, helping us understand why embedded systems development differs fundamentally from pure software development. We dig into the core challenge: in embedded systems, software is never the whole product—it's always a part number in a larger physical system. This reality shapes everything from development cycles to how we can apply Agile and DevOps practices. Darwin walks us through the spectrum of embedded systems, from smart machines that benefit from frequent updates to safety-critical systems where the optimal number of software updates is exactly one. The conversation takes a practical turn as we explore how AI can help with systems engineering challenges, particularly around extracting architecture from legacy codebases and working with model-based systems engineering tools like SysML v2. Darwin shares concrete examples of using AI to generate embedded code from systems models and discusses GitLab's approach to making DevOps more accessible to embedded engineers through AI-powered explanations and context-aware tooling. We also touch on the limitations of current AI—it's great at small scope but struggles with broad architectural understanding, much like asking a new person off the street to understand your entire codebase every single time.

    Key Topics:
    [03:15] Why embedded systems development differs from pure software: software as a part number vs. software as the whole product
    [08:45] Three categories of embedded systems: smart machines, stable/boring machines, and safety-critical machines
    [12:30] The role of systems engineering in managing complex physical products and supplier ecosystems
    [18:20] SysML v2 and storing systems engineering models as code for AI accessibility
    [24:10] Generating embedded code from systems engineering models using AI: the flashlight example
    [28:45] The importance of architecture documentation and using AI to extract architecture from legacy code
    [35:50] Limitations of AI: excellent at small scope but struggles with broad architectural understanding
    [42:15] GitLab's approach to embedded DevOps: AI-powered explanations, context-aware tooling, and MCP servers
    [48:30] Practical AI applications: using imperative AI to create declarative code for model validation

    Notable Quotes:
    "Software as it's talked about in the lean startup and in agile and in DevOps is essentially limited to when software is the whole product. As soon as you get into embedded systems, by definition, the software is embedded into a physical product. Software is always a part number." — Darwin Sanoy
    "You are settling into a roller coaster and the operator gets on and says I'm sorry we're going to be an extra 10 minutes while a software update finishes. You're a DevOps pro so you stay seated right? Because you know that releases are only going to improve your experience, right?" — Darwin Sanoy
    "AI researchers put up a diagram and said these 169 boxes are roughly the functions that scientists and psychologists agree the human mind has. Next slide: these are the four that AI emulates and it doesn't do a very good job compared to what you do." — Darwin Sanoy
    "At least if you ask a human expert 'do you know about this?' they'll say 'yeah I'm not certain, ask somebody else.' AI literally cannot know how solid its knowledge is—it has no concept of vagueness or uncertainty." — Luca Ingianni

    54 min
  6. 6 FEB

    E08 - AI-Powered Pipelines with Joe Schneider

    In this episode, we sit down with Joe Schneider, founder of Dojo5 and creator of the EmbedOps framework, to explore how AI is transforming embedded development pipelines. We discuss the practical applications of AI in CI/CD workflows—from summarizing build outputs and triaging static analysis results to enabling smarter hardware-in-the-loop testing through visual analysis. Joe shares his perspective on where AI adds real value: condensing complex data, identifying anomalies, and helping teams move faster without sacrificing quality. We also tackle the challenges: the brittleness of traditional testing approaches, the difficulty of tracking dependencies in embedded systems, and the risks of over-automation. Throughout the conversation, we explore the balance between deterministic tools and AI-assisted workflows, and why human judgment remains essential—especially when it comes to security updates and edge cases that no test script would catch. Whether you're skeptical about AI hype or curious about practical applications, this episode offers a grounded look at how AI can strengthen your development pipeline without replacing the engineers who build it. 
    Key Topics:
    [00:00] Introduction: Meet Joe Schneider and the focus on AI in embedded DevOps
    [02:30] The complexity challenge: Why modern embedded development needs better pipelines
    [05:15] Bob's machine and the problem with manual, hero-driven development
    [08:00] What AI is good at: Summarization, classification, and expansion
    [12:45] Exploratory testing: Can AI fish for bugs more effectively than random testing?
    [18:20] Visual analysis in hardware-in-the-loop testing: Using AI to evaluate screens and physical behavior
    [24:00] Walking through the pipeline: Build stage, static analysis, and AI-assisted triage
    [30:15] Compiler flags and configuration: Where AI can help optimize and catch mistakes
    [35:40] PR review automation: AI as a code reviewer—benefits and limitations
    [42:00] Self-healing pipelines: Automatic dependency updates and security patching
    [48:30] The human-in-the-loop debate: When automation goes too far
    [52:15] Hardware testing challenges: From pixel-perfect comparisons to AI-based visual validation
    [58:00] War story: Debugging a silicon bug that only appeared under specific conditions

    Notable Quotes:
    "In 2025, there are still many companies that build and release firmware from a folder on a share drive somewhere that says V1.2, or it's Bob's machine. Somebody's literally clicking the build button, and that's just very sad." — Joe Schneider
    "If your product is destined for a human user, then you need a human to test it at some point in your stack. Humans are not good at following a test script for the 50th time, but they're great at finding the things you didn't think to test for." — Joe Schneider
    "AI is very helpful when you have a bunch of different situations and you can ask it: does this fall into this bucket or that bucket? That classification capability can be extremely useful in analyzing what's happening in your system or pipeline." — Luca Ingianni
    "I don't trust the scripts that I write. There's still people clicking buttons and mashing screens to make sure that things are working correctly, because we don't even trust the people that were downstream of us doing this work before." — Ryan Torvik
    "I think testing is a lot like fishing. You don't just drive your boat a random amount in a random direction and drop the line. Fishermen know where the fish are—over in the weeds, under the dock. AI can learn those signals too." — Joe Schneider

    Resources Mentioned:
    Dojo5 - Custom firmware development company founded by Joe Schneider
    EmbedOps - Industry-leading embedded development DevOps framework created by Dojo5
    Zephyr RTOS Security Tool - Tool within Zephyr that evaluates compiler flags and security posture—often underutilized
    PC-Lint - Traditional static analysis tool for C/C++, known for verbose output

    1hr 7min
  7. 9 JAN

    E07 - Embedd, and using AI safely, with Michael Lazarenko

    In this episode, Ryan and Luca sit down with Michael Lazarenko, co-founder of Embedd, to discuss the real-world challenges of using AI in embedded systems development. Michael shares his journey from manufacturing physical devices to building AI-powered tools that parse datasheets and generate hardware abstraction layers. The conversation dives deep into when AI should—and critically, shouldn't—be used in embedded development. Michael offers a refreshingly pragmatic perspective on AI adoption, explaining how Embedd uses AI to extract information from messy, unstandardized PDFs and technical manuals, while deliberately avoiding AI where deterministic approaches work better. The discussion covers the technical challenges of building RAG systems for embedded documentation, the importance of creating stable intermediate representations, and why accuracy matters more than speed when generating safety-critical code. The episode also explores broader themes around AI adoption in conservative industries like automotive and aerospace, the gap between AI hype and reality in embedded development, and Michael's vision for a unified embedded development platform. Throughout, the conversation maintains a healthy skepticism about AI's current capabilities while acknowledging its potential—a balanced perspective that's rare in today's overheated AI discourse.

    Key Topics:
    [02:30] The problem of hardware-software coupling and why embedded documentation is such a mess
    [08:45] When NOT to use AI: deterministic parsing vs. probabilistic approaches
    [15:20] Building RAG systems for technical documentation: chunking, context windows, and accuracy challenges
    [22:10] Creating stable intermediate representations (digital twins) for hardware components
    [28:40] The verification problem: why AI-generated embedded code is harder to validate than web applications
    [35:15] AI adoption in conservative industries: automotive, aerospace, and defense taking risks
    [42:30] The gap between AI hype and reality in embedded development workflows
    [48:20] How AI forces better testing and requirements engineering practices
    [54:00] The future of embedded development: unified platforms and model-based design

    Notable Quotes:
    "I would be slightly contrary and say at this point at least, I probably wouldn't use it in every possible place, especially in this specific problem set, given the context size that we're facing. AI performs best when the results are as defined and as limited as possible." — Michael Lazarenko
    "If there is a register list in an SVD file that I can parse with 0% chance of probabilistic error, why would I use RAG? If there isn't one, then I have to use it, and then I need to find a way of using it that gives me the highest possible accuracy." — Michael Lazarenko
    "The number of VCs that I've talked to in the past year who have told me that they don't need testing frameworks because the AI is just going to generate all the code for us. That's exactly why you need more thorough testing. That's why you need more guardrails." — Ryan Torvik
    "Since I've been using AI seriously to generate code, I've become such a stickler for tests. It's quite remarkable. AI can be a forcing function to really force you to get your development processes in order." — Luca Ingianni
    "I'm seeing dumps of AI code going in that no one read. People are outputting requirements and then code that AI spits out, and it's really soul destroying for those who actually review the code." — Michael Lazarenko

    Resources Mentioned:
    Embedd - Michael's company that creates stable representations of embedded hardware and generates hardware abstraction layers using AI

    54 min
  8. 19 DEC

    E06 Integrating AI into embedded products with Souvik Pal

    In this episode, Ryan and Luca welcome their first proper guest, Souvik Pal, Chief Product Officer at FyeLabs. Souvik shares his eight years of experience helping customers bring embedded AI projects to life, walking us through two fascinating case studies that highlight the real challenges of deploying AI in resource-constrained environments. We explore a wearable safety device that needed to run computer vision on an ESP32 (spoiler: it didn't work), and a smart door system that had to juggle facial recognition, voice authentication, gesture detection, and 4K video streaming—all while fitting behind a door frame. Souvik breaks down the practical considerations that drive hardware selection, from power budgets and thermal management to the eternal struggle with Bluetooth connectivity. The conversation reveals how different constraints—whether it's battery life, space, or compute power—fundamentally shape what's possible with embedded AI. Beyond the technical war stories, we discuss what makes AI products actually useful rather than just technically impressive. Souvik emphasizes the importance of keeping humans in control, building trust through transparency, and understanding your power budget before anything else. Whether you're working with microcontrollers or mini PCs, this episode offers practical insights into the messy reality of bringing AI-enabled embedded products from concept to reality. 
    Key Topics:
    [00:00] Introduction and welcoming first guest Souvik Pal from FyeLabs
    [02:30] Evolution of embedded AI: from cloud-based processing to edge computing
    [04:00] Case study: Wearable safety device with rear-facing camera for threat detection
    [08:00] Attempting to run object detection on ESP32: memory constraints and quantization challenges
    [12:00] Moving to Raspberry Pi Zero: trade-offs between power consumption and capability
    [15:00] Model selection: working with COCO dataset and YOLO for embedded environments
    [20:00] Case study: Smart door system with 4K display, facial recognition, and voice authentication
    [25:00] Running multiple AI models concurrently: video streaming, object detection, voice recognition, and gesture detection
    [30:00] Wake word detection and voice command processing without full transcription
    [35:00] Hardware selection: from ESP32 to Raspberry Pi to mini PCs and thermal management
    [40:00] Linux audio challenges and managing concurrent AI pipelines
    [45:00] Building good AI products: user experience, trust, and keeping humans in control
    [50:00] Design process for AI-enabled products: power budget as the primary consideration
    [55:00] Hardware progression: ESP32, Raspberry Pi Zero, Pi 5, Jetson, and when to use each

    Notable Quotes:
    "The way I define embedded is where we have constraints, either cost in space or compute or power. And that's where it becomes really challenging to deploy any sort of advanced algorithmic solutions." — Souvik Pal
    "A good AI would strike a balance between what it enables the user to do and what it does for itself. I think we should let the human know that they're interacting with an AI, however smart that AI might be." — Souvik Pal
    "When I think of an AI solution, it starts with power. That's number one consideration. What is your power budget? That immediately restricts you in terms of what you can do." — Souvik Pal
    "You know people worried about AGI... the amount of work you've had to do to replace a doorman in this situation." — Ryan Torvik

    Resources Mentioned:
    COCO Dataset - Common Objects in Context dataset - a go-to dataset for object detection with 50+ pre-trained classes
    YOLO (You Only Look Once) - Object detection model well-suited for compute-constrained embedded environments, with recent versions showing promise for edge deployment
    Open Wake Word - Wake word detection engine used for voice-activated systems

    47 min
