In this episode, Jeff interviews Luca about his intensive experience presenting at five conferences in two and a half days, including the Embedded Online Conference and a German conference where he delivered a keynote on AI-enhanced software development. Luca shares practical insights from running an LLM-only hackathon where participants were prohibited from manually writing any code that entered version control, forcing them to rely entirely on AI tools. The conversation explores technical challenges in AI-assisted embedded development, particularly the importance of context management when working with LLMs. Luca reveals that effective AI-assisted coding requires treating prompts like code itself: version controlling them, refining them iteratively, and building project-specific prompt libraries. He discusses the economics of LLM-based development (approximately one cent per line of code), the dramatic tightening of feedback loops from days to minutes, and how this fundamentally changes agile workflows for embedded teams. The episode concludes with a discussion of the evolving role of embedded developers, from code writers to AI supervisors and eventually to product owners with deep technical skills. Luca and Jeff address concerns about maintaining core software engineering competencies while embracing these powerful new tools, emphasizing that understanding the craft remains essential even as the tools evolve.

Key Topics

[02:15] LLM-only hackathon constraints: No human-written code in version control
[04:30] Context management as the critical skill for effective LLM-assisted development
[08:45] Explicit context control: Files, directories, API documentation, and web content integration
[11:20] LLM hallucinations: When AI invents file contents and generates diffs against phantom code
[13:00] Economics of AI-assisted coding: Approximately $0.01 per line of code
[15:30] Tightening feedback loops: From day-long iterations to minutes in agile embedded workflows
[17:45] Rapid technical debt accumulation: How LLMs can create problems faster than humans notice
[19:30] The essential role of comprehensive testing in AI-assisted development workflows
[22:00] Challenges with TDD and LLMs: Getting AI to take small steps and wait for feedback
[26:15] Treating prompts like code: Version control, libraries, and project-specific prompt management
[29:40] External context management: Coding style guides, plan files, and todo.txt workflows
[32:00] LLM attention patterns: Beginning and end of context receive more focus than middle content
[34:30] The evolving developer role: From coder to prompt engineer to AI supervisor to technical product owner
[38:00] Code wireframing: Rapid prototyping for embedded systems using AI-generated implementations
[40:15] Maintaining software engineering skills in the age of AI: The importance of manual practice
[43:00] Software engineering vs. software carpentry: Architecture and goals over syntax and implementation

Notable Quotes

"One of the hardest things to get an LLM to do is nothing. Sometimes I just want to brainstorm with it and say, let's look at the code base, let's figure out how we're going to tackle this next piece of functionality. And then it says, 'Yeah, I think we should do it like this. You know what? I'm going to do it right now.' And it's so terrible. Stop. You didn't even wait for me to weigh in." — Luca Ingianni
"LLMs making everything faster also means they can create technical debt at a spectacular rate. And it gets a little worse because if you're not paying close attention and if you're not disciplined, then it kind of passes you by at first. It generates code and the code kind of looks fine. And you say, yeah, let's keep going. And then you notice that actually it's quite terrible." — Luca Ingianni

"I would not trust myself to review an LLM's code and be able to spot all of the little subtleties that it gets wrong. But if I at least have tests that express my goals and maybe also my worries in terms of robustness, then I can feel a lot safer to iterate very quickly within those guardrails." — Luca Ingianni

"Roughly speaking, the way I was using the tool, I was spending about a cent per line. Which is about two orders of magnitude below what a human programmer roughly costs. It really is a fraction. So that's nice because it makes certain things approachable. It changes certain build versus buy decisions." — Luca Ingianni

"You can tighten your feedback loops to an absurd degree. Maybe before, if you had a really tight feedback loop between a product owner and a developer, it was maybe a day long. And now it can be minutes or quarters of an hour. It is so much faster. And that's not just a quantitative step. It's also a qualitative step." — Luca Ingianni

"Some of my best performing prompts came from a place of desperation where one of my prompts is literally 'wait wait wait you didn't do what we agreed you would do you did not read the files carefully.' And I'd like to use this prompt now, even before it did something wrong. And then it apologizes as the first step. And I feel terrible because I hurt the LLM's feelings. But it is very effective." — Luca Ingianni

"As you tighten your feedback loops, quality must be maintained through code review and tests. Test first, new feature, review, passing tests—you need to go through that red-green-refactor loop. You can just hopefully do it much more quickly, and maybe in slightly bigger steps than you did before manually." — Jeff Gable

"A lot of what I'm doing is really intended to rein in an LLM's propensity to sort of ramble. It's very hard to get them to practice TDD because you can ask them to write the test first, then they will. And then they will just trample on and write the implementation right with it without stopping and returning control back to you." — Luca Ingianni

"Those prompts tend to be to some degree specific to the particular code base or the particular problem domain. Every now and then you stumble across ways of making an LLM do exactly what you want it to do within the context of the particular code base. And once you find a nugget like this, you keep it. You don't just keep it in the generic library. Some of those tricks will be very specific to a particular code base." — Luca Ingianni

"Just like humans, LLMs tend to pay more attention to the stuff at the beginning of the context and at the end, and the middle sort of gets not quite forgotten but kind of fuzzy. You really need to have a way to extract all of that before it becomes fuzzy and store it in a safe place where it can't be damaged, like a file." — Luca Ingianni

"I think we will hit this weird valley in the coming five years where everyone's just using LLMs and no one knows how to write code anymore. And there will be a need for people who can leverage the tools, but still have the skills that serve as the solid foundation." — Jeff Gable
"Maybe this is essentially software engineering finally becoming true to its name. At the moment, software engineering is sort of more like software carpentry. You're really doing the craft. You're laboring to put the curly brackets at the right places. And maybe now it's more about taking a step back and thinking in terms of architecture, and thinking in terms of goals, as opposed to knowing how to swing a hammer." — Luca Ingianni

Resources Mentioned

Embedded Online Conference - Premier online conference for embedded systems professionals featuring talks on AI integration, development practices, and cutting-edge embedded technologies. All sessions are recorded and available for on-demand viewing.

Aider - AI pair programming tool mentioned for its ability to integrate web content into context using commands like '/web [URL]' to incorporate API documentation and other online resources directly into the development workflow.

GitHub Copilot - AI-powered code completion tool integrated with VS Code and other IDEs, enabling context-aware code generation and assistance for embedded development workflows.

You can find Jeff at https://jeffgable.com.
You can find Luca at https://luca.engineer.
Want to join the Agile Embedded Slack? Click here.
Are you looking for embedded-focused trainings? Head to https://agileembedded.academy/
Ryan Torvik and Luca have started the Embedded AI podcast; check it out at https://embeddedaipodcast.com/