Waves of Innovation by re:cinq

Deejay

A monthly podcast about the shift from Cloud Native to AI Native. Hosted by Deejay, each episode features a guest picked by our community—engineers, leaders, and thinkers sharing how they’re adapting, experimenting, and figuring it out as they go. Real stories, practical lessons, and things we’re all still learning.

  1. 9H AGO

    Scaling Code Review When AI Writes the Software

    The episode begins by addressing a stark new reality for engineering teams: AI agents are writing code at an unprecedented pace, leading to pull requests that are 150 percent larger and review times that have doubled. Deejay and Jaime Jorge unpack this sudden shift, noting how the friction of software development is being removed faster than ever before. However, this frictionless environment introduces a dangerous side effect known as automation bias, where developers might blindly merge massive blocks of AI-generated code simply because a machine wrote it.

    The Technical Core: A significant pivot occurs when the conversation moves from identifying the problem to exploring architectural solutions. Jaime introduces the cyborg approach to code analysis. He explains that while AI models are incredibly powerful, their non-deterministic nature means they cannot reliably enforce consistent coding standards. To counter this, engineering teams must maintain deterministic rules as a structural backbone. The duo explores how tools like Model Context Protocol servers allow AI agents to run local static analysis and security checks before a pull request is ever created. Instead of discarding traditional CI/CD pipelines, Jaime argues that these deterministic gates are becoming even more critical, acting as necessary friction to ensure that AI-generated software is actually secure.

    Philosophical and Human Implications: The heart of the episode explores the evolving role of the software developer. Jaime likens managing modern AI coding tools to opening loot boxes, where the output is a gamble that requires constant supervision and orchestration. As coding becomes less about typing syntax and more about acting as an agent herder, the fundamental principles of software engineering—like rigorous test coverage and clear specifications—are proving more vital than ever. The discussion also touches on the anxiety surrounding software as a defensive moat. If anyone can spin up a prototype over a weekend, the true differentiator for a business becomes trust and reliability, rather than just the codebase itself.

    Future Outlook: Looking ahead, the conversation shifts toward the concept of software factories and autonomous agents operating in isolated environments. Jaime anticipates a future where systemic failures in code quality will no longer be blamed on individual human error, but on poorly designed automated workflows. The episode concludes with a grounding reminder for tech leaders: while the pace of AI innovation is relentless and impossible to track hourly, embracing the change and implementing robust, automated guardrails will be the key to surviving and thriving in this new era.

    Key Themes Explored:
    The Cyborg Approach to Analysis: Combining deterministic security rules with non-deterministic AI models ensures consistent code quality without sacrificing development speed.
    The Danger of Automation Bias: As AI generates massive pull requests, developers risk blindly trusting machine output, making rigorous and automated review gates essential.
    Coding as Agent Orchestration: The developer role is shifting from writing syntax to guiding multiple AI agents, requiring a renewed focus on strict testing and clear specifications.
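    The cyborg split described in this summary (deterministic gates first, a probabilistic reviewer second) can be sketched in a few lines. This is a minimal illustration, not any tool from the episode: the rule names, `deterministic_gate`, and the `ai_review` stub are all invented for the example.

```python
# Sketch of a "cyborg" review gate: deterministic rules run first and can
# hard-fail a change; only clean changes proceed to the (non-deterministic)
# AI reviewer. All rule names and the ai_review stub are hypothetical.

DETERMINISTIC_RULES = [
    ("no hard-coded secrets", lambda diff: "AWS_SECRET" not in diff),
    ("diff size under 400 lines", lambda diff: diff.count("\n") < 400),
]

def deterministic_gate(diff: str) -> list[str]:
    """Return the list of violated rules (an empty list means the gate passes)."""
    return [name for name, check in DETERMINISTIC_RULES if not check(diff)]

def review(diff: str, ai_review=lambda d: "looks fine") -> str:
    violations = deterministic_gate(diff)
    if violations:
        # Hard failure: never ask the model to approve a change that breaks policy.
        return "blocked: " + "; ".join(violations)
    # Only now spend tokens on the probabilistic reviewer.
    return "ai: " + ai_review(diff)

print(review("AWS_SECRET = 'abc123'"))   # blocked by a deterministic rule
print(review("def add(a, b):\n    return a + b\n"))
```

    The point of the ordering is the one Jaime makes: the deterministic layer is cheap, repeatable friction, so the non-deterministic model never becomes the only line of defense.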

    55 min
  2. MAR 25

    From Telemetry to Empathy: Measuring AI in Your Teams

    The Opening Context: The conversation begins with a critical look at how engineering organizations measure success in the era of AI. Lauren Peate, CEO of Multitudes, joins Deejay to unpack their recent mixed-methods research on the real-world impact of agentic coding on software delivery. The duo quickly establishes that the headline-grabbing promises of AI often clash with the lived realities of developers. Lauren emphasizes that relying solely on telemetry data can paint a misleading picture, necessitating a blend of quantitative metrics and qualitative interviews to uncover what is actually happening within engineering teams.

    The Technical Core: A significant pivot occurs when Lauren reveals one of the most surprising findings of the study: a 19.6 percent increase in out-of-hours commits among developers using AI tools. While many leaders assumed this spike was driven by the sheer joy of using new technology, the qualitative data told a different story. Developers are struggling to balance existing delivery pressures with the steep learning curve of rapidly evolving AI models. The heart of the episode explores how organizations can mitigate this burnout. Lauren advocates for structured peer-to-peer learning, noting that engineers trust AI demos from their codebase peers far more than top-down mandates or static playbooks. Deejay parallels this with the military concept of commander's intent, arguing that leaders must clearly communicate the why behind AI adoption rather than just dictating the how.

    Philosophical and Human Implications: The discussion deepens as they examine the psychological toll of AI mandates and the shifting dynamics of psychological safety. They lament the disconnect between senior leadership, who often view AI as a sheer productivity multiplier, and the individual contributors on the ground, who feel their job security is threatened. Lauren points out that unaddressed fears stifle adoption, making it crucial for leaders to be authentic about economic realities and organizational goals. Deejay highlights the importance of creating spaces for developers to voice their anxieties and share their AI failures, transforming skepticism into collective problem solving.

    Future Outlook: Looking ahead, the conversation addresses the precarious future of the junior developer pipeline. Lauren shares upcoming research indicating that while senior leaders rarely mention the impact of AI on entry-level talent, individual contributors are deeply worried about who will mentor the next generation. Deejay voices a dystopian concern: if AI abstracts away the foundational coding work, we risk building critical societal infrastructure on layers of digital cruft that no human truly understands. Ultimately, they conclude that the successful integration of AI relies not on the tools themselves but on preserving the human elements of mentorship, intentional leadership, and community learning.

    1h 9m
  3. MAR 4

    Bridging the Skills Gap: Insights from Agentic Coding Training

    The conversation begins by tracing the professional trajectories of Daniel Jones and Benedict Stemmelt, two practitioners who found common ground in the shared Slack channels of the AI-native movement. The opening context establishes a relatable pivot for senior leaders: the rediscovery of the joy of creation. Both hosts describe how agentic tools allow architects and CIOs to bypass the friction of environment setup and syntax memory, returning to the core act of building. However, this initial excitement quickly shifts into a more rigorous technical analysis of the state of the art in early 2026.

    The technical core of the episode centers on the transition from individual productivity to systemic organizational efficiency. Benedict laments the loss of focus when teams treat AI tools as mere copy-paste assistants rather than integrated agents. A significant pivot occurs when the duo discusses the Paradox of Detail in context management. They debunk the common advice of stuffing every instruction into an agents.md file, noting that reasoning capabilities often hit a cliff after 30,000 tokens. Daniel highlights research showing that overloading context actually confuses models, making aggressive context curation a more vital skill than prompt engineering.

    The heart of the episode explores the human and behavioral implications of non-deterministic development. The duo discusses the Ralph Wiggum loop—an experiment in unattended programming—to illustrate how agents can shake themselves out of local maxima through iterative failure. Benedict likens the process of steering an agent to reverse engineering; the developer must understand the model's default training path to effectively nudge it toward a specific architectural vision. This requires a fundamental behavior change: the willingness to throw away agent-generated code and reset the slate rather than manually fixing every hallucination.

    The future outlook presented is one of Software Factories. The conversation concludes with a vision of engineers moving from manual labor to machine design. They argue that the job of an engineering leader is no longer just shipping features, but building the machine that ships the features. They warn that, according to DORA 2025 data, this transition will widen the gap between high-maturity teams and those struggling with legacy bottlenecks. The episode ends as a call to action for leaders to treat AI adoption not as a tool purchase, but as a total organizational redesign centered on flow efficiency and automated throughput.

    Key Themes Explored:
    The Shift to Software Factories: Engineers are transitioning from writing individual lines of code to designing autonomous systems that manage feature production. This requires a mindset shift where the primary product is the factory itself rather than the code it produces.
    The Context Reasoning Cliff: Reasoning capabilities often degrade significantly once a context window exceeds 30,000 tokens, regardless of the theoretical maximum limit. Technical leaders must focus on context pruning and relevance rather than simply increasing the volume of provided data.
    Behavioral Reverse Engineering: Success with agents depends on identifying a model's default behaviors and intentionally steering them toward project-specific requirements. This iterative process uses non-determinism as a feature, allowing agents to find creative solutions through multiple loops.
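    The aggressive context curation discussed in this episode can be sketched as a simple budgeted packer: rank candidate snippets by relevance and stop before the window crosses the point where reasoning degrades. This is an illustrative sketch only; the 4-characters-per-token estimate stands in for a real tokenizer, and the scores for a real retriever.

```python
# Sketch of context curation under a hard token budget: keep the most
# relevant snippets and drop the rest, rather than concatenating everything.
# The budget, token estimate, and scoring are illustrative assumptions.

TOKEN_BUDGET = 30_000  # stay under the point where reasoning quality drops off

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # rough heuristic, not a real tokenizer

def curate(snippets: list[tuple[float, str]], budget: int = TOKEN_BUDGET) -> list[str]:
    """Greedily keep the highest-relevance snippets that fit inside the budget."""
    chosen, used = [], 0
    for score, text in sorted(snippets, key=lambda s: s[0], reverse=True):
        cost = estimate_tokens(text)
        if used + cost <= budget:
            chosen.append(text)
            used += cost
    return chosen

# Hypothetical retrieval results: (relevance score, snippet)
context = curate([
    (0.9, "API docs " * 2000),
    (0.2, "old changelog " * 9000),   # too big and too stale to justify its cost
    (0.7, "schema " * 500),
])
```

    The design choice mirrors the episode's point: pruning for relevance beats raising the volume of provided data, because past the cliff more context makes the model worse, not better.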

    1h 13m
  4. FEB 18

    Software Factories: From Outputs to Business Outcomes

    The episode opens with urgency as Daniel Jones and Mike Gehard reflect on a fortnight of agentic breakthroughs—specifically "dark factories" where humans are barred from the inner workings of code production. Daniel cites milestones from OpenAI and StrongDM, noting the industry has moved past simple completion tools into autonomous, multi-agent systems. Mike connects his chemical engineering roots to the current AI landscape, suggesting software is finally colliding with the mature feedback loops of physical refineries and the Toyota Production System.

    The technical core focuses on the shifting bottleneck of software production. Applying the Theory of Constraints, they argue that because LLMs have solved the "output problem"—generating code faster than any human—the constraint has moved upstream to specification and downstream to validation. Mike shares experiments building a handcrafted software factory, using agents to retrospect on their own traces and PRs. They dismantle traditional reliance on unit tests, highlighting the "holdback set" approach: keeping a human-language specification hidden from the coding agent as a blind validator. This shifts focus from "transmogrifying widgets" to measuring real-world outcomes and user behavior.

    The dialogue explores the human implications of this transition, discussing the "death of legacy lore"—whether TDD and complex architectural patterns remain relevant when an agent can refactor an entire codebase in seconds. Mike introduces Minimum Viable Architecture, positing that while agents need structure to stay within context windows, the mental overhead of traditional architecture is shrinking. They analyze the addictive nature of "vibe coding" and the psychological relief of staying present with family while agents churn through tasks in the background.

    The future outlook envisions radical software abundance—a world where software has zero market value because it's instantly reproducible, shifting the corporate moat to data, networks, and relationships. They foresee democratization where non-technical domain experts express business logic without the gatekeeping of a "priestly developer class." The episode concludes with a call to abandon dogmatic practices, embrace the role of the Editor, and use these tools to solve persistent human problems like hunger and housing through frictionless, bespoke creation.

    Key Themes:
    The Industrialization of Logic: Software is moving from an artisanal process to a closed-loop system modeled after chemical refining, requiring engineers to act as systems designers managing automated loops.
    The Theory of Relocated Constraints: With code generation solved, the primary hurdles are clarity of specification and rigor of validation—encoding human intent into high-fidelity prompts without ambiguity.
    Architecture as Context Management: Traditional architecture managed human mental limits; in the agentic era, it prevents LLMs from getting lost mid-file. Structure optimizes the agent's attention.
    The Economic Collapse of Software Value: As software becomes a commodity generated for token costs, proprietary codebases lose competitive advantage. Future value resides in proprietary data and human relationships.
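    The "holdback set" idea (a spec the coding agent never sees, used by a separate validator to judge its output blind) can be sketched roughly as follows. The task, the acceptance checks, and the stand-in agent output are all invented for illustration and are not from Mike's actual factory.

```python
# Sketch of "holdback" validation: the coding agent sees only the public task
# description, while a blind validator checks its output against acceptance
# criteria the agent was never shown. All names and checks are hypothetical.

PUBLIC_TASK = "Write a function discount(price, pct) applying a percentage discount."

# Hidden spec: the coding agent is never shown these checks.
HOLDBACK_CHECKS = [
    ("applies the discount", lambda f: f(100.0, 10) == 90.0),
    ("zero pct is a no-op", lambda f: f(42.0, 0) == 42.0),
    ("never returns negative", lambda f: f(10.0, 150) >= 0.0),
]

def blind_validate(candidate) -> dict[str, bool]:
    results = {}
    for name, check in HOLDBACK_CHECKS:
        try:
            results[name] = bool(check(candidate))
        except Exception:
            results[name] = False  # a crash counts as a failed criterion
    return results

# Stand-in for agent output: a naive implementation that ignores the floor at 0.
def agent_output(price, pct):
    return price * (1 - pct / 100)

report = blind_validate(agent_output)  # flags the missing edge-case handling
```

    Because the checks stay hidden, the agent cannot overfit to them the way it can to visible unit tests, which is what makes the validation blind.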

    1h 13m
  5. JAN 29

    Evals, reducing hallucinations, & AI-native development

    The episode opens with Amy Heineike outlining Tessl's core mission: building documentation registries optimized for coding agents. Daniel Jones notes the pervasive frustration of API hallucinations, where models invent idealized but non-existent methods that waste developer cycles. Amy explains that models often struggle with APIs too new or too old for their training sets, creating a critical need for external grounding. The duo laments lost efficiency when agents trawl through bloated web pages or unoptimized node modules. Amy introduces the Registry as a version-locked context provider that prevents agents from polluting context windows with raw text. Using an MCP server, agents access summary documentation, staying grounded without token-heavy web crawls.

    The discussion pivots to verification methodology. Amy likens the shift from unit testing to evaluations as moving from hard logic to biological science. In traditional engineering, a unit test fix remains fixed, but in agentic systems, success is measured across a basket of scenarios. This requires developers to think like statisticians, examining success averages and variance rather than binary pass-fail states. The episode explores the paradox of detail: providing more task instructions can cause agents to ignore broader system-level steering. Amy shares research showing that as task prescriptiveness increases, agents weigh local context over global rules.

    The conversation deepens around non-deterministic high-performing systems. They discuss the Ralph Wiggum loop and Steve Yegge's Gastown framework, illustrating how agentic head-banging against errors can lead to superior, anti-fragile outcomes. Daniel introduces the Van Halen Brown M&M feedback loop as a psychological steering mechanism, where developers can use emoji triggers to verify if a model respects the context window.

    The dialogue concludes with forward-looking organizational analysis. As AI capabilities coalesce, rigid boxes of product, design, and engineering begin to merge. Amy and Daniel envision the rise of the Product Engineer, a role focused on intentionality and outcomes rather than syntax. They argue that defining what a good outcome looks like becomes the primary lever of control. Amy encourages embracing the chaos of transition, suggesting stability is found in accepting variability rather than fighting for perfect determinism.

    Key Themes Explored:
    Machine-Optimized Contextual Grounding: Tessl provides unpolluted, machine-ready registries that prevent token-heavy hallucinations in cutting-edge or legacy APIs.
    Probabilistic Verification: Engineering is shifting from binary unit tests toward statistical evaluation modeling, treating systems as biological entities requiring constant observation.
    The Paradox of Detailed Steering: Hyper-prescriptive prompts often cause loss of global instruction adherence. Architects must balance task detail with system steering.
    Anti-Fragility via Non-Determinism: Embracing non-deterministic loops allows systems to escape local maxima and discover stable solutions through learning from failures.
    Outcome-Focused Engineering: AI is merging product management and development into a single outcome-oriented discipline focused on defining intentionality.
    Multi-Pass Agentic Architectures: Breaking logic, security, and performance into specialized sequential passes prevents cognitive overload and improves reliability.
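    The "think like statisticians" framing of evals can be sketched as a tiny harness: run the same task across a basket of scenarios several times and report pass rate and spread instead of a single pass/fail bit. The simulated runner below is a placeholder assumption; a real harness would invoke an agent and score its output.

```python
# Sketch of eval-as-statistics: measure success averages and variance across
# a basket of scenarios rather than a binary pass/fail. The simulated task
# is a stand-in for a real agent call.
import random
import statistics

def run_eval(task, scenarios, trials=20, seed=7):
    rng = random.Random(seed)  # fixed seed so the report is reproducible
    rates = []
    for scenario in scenarios:
        passes = sum(task(scenario, rng) for _ in range(trials))
        rates.append(passes / trials)
    return {
        "mean_pass_rate": statistics.mean(rates),
        "stdev": statistics.pstdev(rates),
        "per_scenario": rates,
    }

# Stand-in task: "succeeds" with a per-scenario probability instead of
# actually running an agent and judging its output.
def simulated_task(scenario, rng):
    return rng.random() < scenario["p_success"]

report = run_eval(simulated_task,
                  [{"p_success": 0.9}, {"p_success": 0.6}],
                  trials=50)
```

    The variance matters as much as the mean: a system that passes 80 percent of the time everywhere behaves very differently from one that passes 100 percent on easy scenarios and collapses on hard ones.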

    1h 1m
  6. 12/23/2025

    DORA 2025, the Psychology of Agentic Coding, and Value Stream Management

    In this week's episode of Waves of Innovation, host Daniel Jones reconnects with "Big" Rob Edwards, a Google Cloud expert, DORA contributor, and long-time collaborator. Their history goes back a decade to the trenches of early cloud platform delivery, giving them a shared language for the massive shifts occurring in the industry today. Rob brings a rare dual perspective to the podcast. By day, he works with enterprises across North America to optimize their software delivery using Google Cloud. By night, he has recently completed a Master's degree in Psychology, where his thesis focused specifically on "Developer Productivity in the Age of Generative AI."

    The conversation kicks off with a deep dive into Rob's contributions to the upcoming DORA report. While the industry obsesses over code generation, Rob and Daniel argue that "writing code faster" is rarely the bottleneck. They explore the critical importance of Value Stream Management (VSM). Rob shares real-world anecdotes—including drawing process maps on glass windows in major banks—to illustrate how invisible friction points kill velocity. They discuss a specific case study where a customer thought they had a CI/CD problem, but VSM revealed they had five manual merges on the critical path to production. The key takeaway from the DORA research? AI is an amplifier. If applied to a bad process, it simply creates a larger pile of inventory at your bottlenecks. VSM is the "force multiplier" that allows AI teams to actually ship value rather than just generating PRs.

    The heart of the episode is a fascinating exploration of Rob's academic thesis. Interviewing senior engineers, he uncovered that the identity of a developer is fundamentally changing from a "Coder"—measured by syntax and output—to a "Conductor." Rob explains the concept of "metacognition" (thinking about how we think). As developers move to agentic workflows, they are forced to stop thinking about the for loop and start thinking about system architecture and intent. Rob notes that participants in his study stopped reading technical manuals and started reading architectural books to better direct their AI agents.

    Key Themes Explored:
    The "Safe Space" of AI: Rob reveals a surprising psychological benefit: introverted developers are using AI as a non-judgmental "cognitive partner" to validate their ideas. This pre-validation gives them the confidence to speak up in group settings, leading to better team outcomes.
    The Baseline Trap & Burnout: The discussion takes a serious turn regarding mental health. As AI handles the grunt work, the "baseline" for productivity shifts upward. If you aren't doing 3x the work, you feel unproductive. Rob and Daniel discuss the dangers of this "new normal" and the "J-Curve of Learning," where productivity temporarily dips as we adjust to new tools.
    The Junior Developer Crisis: If senior engineers are operating at a high level of abstraction, how do juniors learn? The duo laments the loss of "learning by osmosis"—sitting next to a senior dev and watching them work—and questions whether the next generation will miss out on foundational struggles that build resilience.
    Agentic Exhaustion: Daniel shares anecdotes about the mental load of managing multiple AI agents, comparing it to the intense focus required for pair programming. It's not necessarily easier; it's just different work.

    Whether you are a CTO looking to interpret the DORA metrics or a developer trying to navigate your changing identity in an AI world, this episode offers a blend of hard data and human insight you won't find anywhere else.
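    The VSM framing (find where time is actually spent, not where code is written) reduces to comparing active work time against total lead time. A toy flow-efficiency calculation, with stage names and hours invented to echo the "five manual merges" style of hidden bottleneck:

```python
# Sketch of a value-stream calculation: flow efficiency is the share of lead
# time spent on value-adding work versus waiting in queues. The stages and
# durations below are invented for illustration.

STAGES = [
    # (name, active_hours, waiting_hours)
    ("write code (with AI)", 2, 0),
    ("manual merge approvals", 1, 38),
    ("QA / UAT", 6, 20),
    ("deploy", 1, 4),
]

def flow_efficiency(stages):
    active = sum(a for _, a, _ in stages)
    total = sum(a + w for _, a, w in stages)
    return active / total

def worst_wait(stages):
    """Name the stage with the longest queue, i.e. where AI speed-ups pile up."""
    return max(stages, key=lambda s: s[2])[0]

print(f"flow efficiency: {flow_efficiency(STAGES):.0%}")
print("biggest queue:", worst_wait(STAGES))
```

    With these invented numbers, coding is a tiny slice of lead time, which is exactly the DORA point: speeding it up with AI just grows the inventory waiting at the merge-approval queue.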

    1h 4m
  7. 12/05/2025

    From Tech Stacks to Mindsets: The Psychology of Transformation

    Why do so many digital transformations hit a wall? You can have the fastest cloud platform or the most advanced AI agents, but if you ignore the humans responsible for using them, you are destined to struggle. In this episode of Waves of Innovation, host Daniel Jones reconnects with his former business partner and long-time collaborator, Dan Young. Ten years ago, they set out to transform companies using Cloud Native technology. They quickly realized that the biggest blockers weren't technical—they were psychological. As the industry shifts from Cloud Native to AI Native, the lessons they learned are more relevant than ever. DJ and Dan dive deep into the messy reality of organizational change, exploring why mandates fail and why "invitation" is the secret weapon of successful modern leadership.

    In this episode, we cover:
    The Identity Crisis of Change: Why developers and managers resist new workflows—not because they are stubborn, but because it threatens their professional identity.
    Invitation-Based Change: How to stop dragging people to meetings and start creating spaces where people choose to engage using Open Space Technology.
    Liberating Structures: Practical tools (like TRIZ and 1-2-4-All) you can use tomorrow to break down power dynamics and let the quietest voices in the room solve your biggest problems.
    The "Make It Fail" Exercise: A counterintuitive strategy to identify organizational weaknesses by asking how to ensure a project fails spectacularly.
    Leading with Vulnerability: Why the "Iron Triangle" of certainty kills innovation, and how listening can be a leader's most powerful tool.

    About the Guest: Dan Young is a technologist turned organizational change expert. Formerly the co-founder of Cloud Native consultancy EngineerBetter, he now runs When & How Studios. He specializes in the human side of technology, helping organizations navigate the complex web of motivations, identities, and power dynamics to create healthier, more effective teams.

    Resources & Links Mentioned:
    Inviting Leadership: https://www.amazon.co.uk/Inviting-Leadership-Invitation-Based-ChangeTM-World/dp/0984875352
    Liberating Structures: https://www.liberatingstructures.com/
    Open Space Technology: https://en.wikipedia.org/wiki/Open_space_technology
    When & How Studios - Book Of Prompts: https://www.whenandhowstudios.com/book-of-prompts
    Article: Are you growing something that matters? (The pandemic/Telco story): https://medium.com/needs-workshop/are-you-growing-something-that-matters-78a7f3a79838
    Article: Helping the NHS take care of itself (The shielding story): https://medium.com/needs-workshop/helping-the-nhs-take-care-of-itself-4643c0d3f20f

    1h 7m
  8. 11/13/2025

    From Coding to Context Switching: An AI Retrospective

    What happens after you go "all in" on AI coding assistants? In this episode, Deejay catches up with Elliott Beatty, host of the Agentic CTO podcast and VP of Engineering at Fruition, to review his organization's aggressive adoption of agentic AI. Several months ago, the goal was full automation. Today, the reality is more nuanced. While velocity is up tremendously, the team has hit a new ceiling: the human element. Elliott pulls back the curtain on the unintended consequences of hyper-productivity, including developer burnout, "context switching" fatigue, and the massive bottlenecks created in QA and User Acceptance Testing (UAT) when code is written faster than it can be checked.

    In this episode, we cover:
    The Human Cost of Velocity: Why running multiple agents simultaneously led to engineer burnout and mandatory time off.
    Frontend vs. Backend: Why AI agents excel at React and Flutter "monkey-see-monkey-do" tasks but struggle with complex backend microservices and scalability logic.
    The "Logjam": How a 100% increase in coding speed exposed critical weaknesses in QA, UAT, and stakeholder approval processes.
    Tooling Shifts: Why the team ditched Jira for Linear, embraced Model Context Protocol (MCP) servers, and the critical importance of Feature Flags (LaunchDarkly) in an AI-driven workflow.
    Leadership Advice: Why you need an "AI Quarterback" to manage the friction between engineering, product, and marketing.

    Tools & Resources Mentioned:
    Qase: The test management tool mentioned by Elliott (Qase.io).
    Linear: For issue tracking with AI integrations.
    Granola: For AI note-taking and meeting summaries.
    N8n: For workflow automation in QA.
    Cursor / Windsurf / Claude Code: The current stack of coding assistants.

    Contact & Feedback: Have you experienced AI burnout in your team? Let us know.
    Email: wavesofinnovation@re-cinq.com
    Website: re-cinq.com
    Don't forget to subscribe to catch the next wave of innovation.
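    The role feature flags play in an AI-driven workflow can be sketched in miniature: agent-written paths ship "dark" behind a flag, so merging fast never means exposing users fast. The in-memory flag store below is a stand-in assumption for a real service such as LaunchDarkly, and all names are invented.

```python
# Sketch of flag-gating AI-generated code: a new agent-written path is merged
# but only served to a small rollout percentage, and can be switched off
# without a revert. The dict stands in for a real flag service.

FLAGS = {"new-ai-checkout": {"enabled": True, "rollout_pct": 10}}

def flag_on(name: str, user_id: int, flags=FLAGS) -> bool:
    flag = flags.get(name)
    if not flag or not flag["enabled"]:
        return False
    # Deterministic percentage rollout: the same user always gets the same answer.
    return user_id % 100 < flag["rollout_pct"]

def checkout(user_id: int) -> str:
    if flag_on("new-ai-checkout", user_id):
        return "agent-written path"   # easy to disable if QA finds a logjam
    return "battle-tested path"
```

    This is the decoupling that matters when code is written faster than it can be checked: deployment (merging) and release (exposure) become independent decisions.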

    1h 1m
