The Reasoning Show

Massive Studios

AI moves fast. Thinking clearly matters more. The Reasoning Show cuts through the hype to explore how the smartest people in enterprise AI actually make decisions — the strategy, the tradeoffs, and the hard lessons no press release mentions. Every week, hosts Aaron Delp and Brian Gracely sit down with the founders building the tools, investors funding the shift, and operators running AI in the real world. Not hype. Not panic. Just clear-headed conversations with people who have to make actual decisions. Because the AI revolution isn't just happening. It's being reasoned through.

New shows every Wednesday and Sunday.

Topics: Enterprise AI strategy · LLMs in production · AI leadership · Agentic AI · Digital Sovereignty · Machine Learning · AI startups · Cloud Computing

  1. 1 DAY AGO

    Is Coding a Solved Problem?

    SUMMARY: Have we reached a point where coding is a solved problem? And if so, what are the downstream effects on companies that need software to differentiate their business?
    GUEST: Brandon Whichard, Co-Host of Software Defined Talk
    SHOW: 1019
    SHOW TRANSCRIPT: The Reasoning Show #1019 Transcript
    SHOW VIDEO: https://youtu.be/q0mksIKcBzk
    SHOW SPONSORS:
    - ShareGate - ShareGate Protect. Microsoft 365 Governance, we got this!
    - Nasuni - Activate your data for AI and request a demo
    SHOW NOTES:
    - The New Kingmakers (Stephen O’Grady - 2014)
    - Developer Growth Rates
    - [Via ChatGPT] A useful way to think about it:
      - Typing code → mostly commoditized
      - Designing systems → partially assisted
      - Owning outcomes → still very human
    Topic 1 - How many years into Public Cloud did we assume that Cloud had solved the IT problem?
    Topic 2 - Developers - what are we solving for?
    - 10% of time coding, mostly on the last 10-15%
    - Lots of time in planning meetings (decoding requirements, resource planning, updates, etc.)
    - Decent amount of time fixing, troubleshooting, technical debt reduction
    Topic 2a - Business people have unlimited ideas, and most ideas are money + tech.
    - What would be their interface to problem solving without developers? (Is this just a shift to consultants?)
    - Is this a massive opportunity for a great PaaS 3.0 company? (e.g. is Vercel an example?)
    Topic 3 - [Hypothetical] Let’s assume a fairly normal company fired all their software developers tomorrow. How long before they could get a moderately complex new application or integration into production?
    Topic 4 - Nobody likes to work on legacy code - missing source, missing engineers, etc. What do we call any code written by AI that was abandoned within the last 6-12 months?
    FEEDBACK?
    Email: show @ reasoning dot show
    Bluesky: @reasoningshow.bsky.social
    Twitter/X: @ReasoningShow
    Instagram: @reasoningshow
    TikTok: @reasoningshow

    33 min
  2. 4 DAYS AGO

    Understanding RAG Systems

    SUMMARY: The RAG (Retrieval Augmented Generation) pattern is one of the most common ways to augment LLMs with context-specific information. Let’s explore RAG.
    GUEST: Roie Schwaber-Cohen, Head of Developer Relations at Pinecone
    SHOW: 1018
    SHOW TRANSCRIPT: The Reasoning Show #1018 Transcript
    SHOW VIDEO: https://youtu.be/-kZZEMR341Q
    SHOW SPONSORS:
    - Nasuni - Activate your data for AI and request a demo
    - ShareGate - ShareGate Protect. Microsoft 365 Governance, we got this!
    SHOW NOTES:
    Topic 1 - Welcome to the show. Tell us a little bit about your background, and what you focus on these days at Pinecone.
    Topic 2 - Let’s begin by talking about RAG systems. What are they? Why do companies choose to use them? What benefits do they provide in AI systems?
    Topic 3 - At a high level, RAG sounds straightforward—retrieve relevant context, generate an answer. But in practice, where does it break first as systems scale?
    Topic 4 - I’ve heard that RAG systems can return answers that are technically correct but fundamentally wrong. What’s a concrete example of that happening in production—and why does it slip past most teams?
    Topic 5 - In traditional systems, we assume there’s a single source of truth. But in enterprise environments, ‘truth’ is often versioned, contextual, and conflicting. How should teams rethink ‘truth’ when building AI systems?
    Topic 6 - A lot of teams assume their knowledge base is ‘good enough’ for RAG. What do they usually underestimate about the messiness of real enterprise data?
    Topic 7 - There’s a growing narrative that better reasoning models can compensate for weaker retrieval. From what you’ve seen, where does that idea fall apart?
    Topic 8 - If correctness depends on things like timing, policy scope, or configuration, how should teams design systems that understand context—not just content?
    Topic 9 - Looking ahead, what replaces today’s RAG architectures? What patterns are emerging among teams that are actually getting this right?
    FEEDBACK?
    Email: show @ reasoning dot show
    Bluesky: @reasoningshow.bsky.social
    Twitter/X: @ReasoningShow
    Instagram: @reasoningshow
    TikTok: @reasoningshow
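    The retrieve-then-generate flow this episode discusses can be sketched in a few lines. This is a toy illustration only: a real system would use embeddings and a vector database such as Pinecone, whereas here retrieval is faked with naive word-overlap scoring, and all document text is made up for the example.

    ```python
    # Minimal sketch of the RAG pattern: retrieve relevant context,
    # then augment the LLM prompt with it. Retrieval here is a naive
    # word-overlap ranking standing in for real vector search.

    def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
        """Rank documents by how many query words they share, keep the top_k."""
        q_words = set(query.lower().split())
        ranked = sorted(
            documents,
            key=lambda d: len(q_words & set(d.lower().split())),
            reverse=True,
        )
        return ranked[:top_k]

    def build_prompt(query: str, documents: list[str]) -> str:
        """Assemble an augmented prompt: retrieved context first, question last."""
        context = "\n".join(retrieve(query, documents))
        return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

    # Hypothetical knowledge-base snippets for illustration.
    docs = [
        "Refunds are processed within 5 business days.",
        "Support hours are 9am to 5pm Eastern, Monday through Friday.",
        "Enterprise plans include a dedicated account manager.",
    ]
    prompt = build_prompt("When are support hours?", docs)
    print(prompt)
    ```

    The generate step would then send this prompt to an LLM; the interesting engineering (chunking, embedding quality, freshness, conflicting sources) all lives in what this sketch hides inside `retrieve`.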

    29 min
  3. 8 APR

    How AI is Transforming Software Development

    SUMMARY: Discover how AI is transforming software development and what it means for engineering leaders.
    GUEST: Jeff Keyes, Field CTO at AllStacks
    SHOW: 1017
    SHOW TRANSCRIPT: The Reasoning Show #1017 Transcript
    SHOW VIDEO: https://youtu.be/cXPu8iWeB0k
    SHOW SPONSORS:
    - ShareGate - ShareGate Protect. Microsoft 365 Governance, we got this!
    - Nasuni - Activate your data for AI and request a demo
    SHOW NOTES:
    Topic 1 - Welcome to the show. Tell us a little bit about your background, and what you focus on these days at AllStacks.
    Topic 2 - You’ve been talking to a lot of engineering leaders using AI coding tools—what’s the most surprising gap you’re seeing between increased code generation and actual delivery outcomes?
    Topic 3 - Why does increasing developer output with AI often lead to more debugging, duplication, or cleanup instead of faster delivery?
    Topic 4 - You’ve described an ‘invisible rework loop’—can you walk us through what that looks like inside a modern engineering team?
    Topic 5 - As code generation gets easier, where does the real bottleneck shift in the software delivery lifecycle?
    Topic 6 - How do unclear product or engineering specifications get amplified in an AI-assisted development environment?
    Topic 7 - If traditional metrics like lines of code or velocity are becoming misleading, what should engineering leaders actually measure to know if AI is improving delivery?
    Topic 8 - What does a ‘healthy’ AI-assisted development workflow look like 12–18 months from now?
    FEEDBACK?
    Email: show @ reasoning dot show
    Bluesky: @reasoningshow.bsky.social
    Twitter/X: @ReasoningShow
    Instagram: @reasoningshow
    TikTok: @reasoningshow

    34 min
  4. 5 APR

    AI SRE for Complex Systems

    SUMMARY: With the explosion of AI-generated code and applications, the modern SRE requires an AI-native approach to managing complex systems.
    GUEST: Anish Agarwal, CEO/Cofounder of Traversal
    SHOW: 1016
    SHOW TRANSCRIPT: The Reasoning Show #1016 Transcript
    SHOW VIDEO: https://youtu.be/hF3MCRDhMno
    SHOW SPONSORS:
    - Nasuni - Activate your data for AI and request a demo
    - ShareGate - ShareGate Protect. Microsoft 365 Governance, we got this!
    SHOW NOTES:
    - Traversal (homepage)
    Topic 1 - Welcome to the show. Tell us a little bit about your background, and what you focus on these days at Traversal.
    Topic 2 - AI is dramatically accelerating code generation, but not improving production outcomes. What’s fundamentally breaking in the traditional SRE model—and where do you see the biggest friction between speed and reliability?
    Topic 3 - What are the most common failure patterns or mistakes you’re seeing in production from AI-generated code—and what’s driving them?
    Topic 4 - AI can generate functional code, but it often lacks context about how systems behave in production. How is this changing what ‘good observability’ needs to look like?
    Topic 5 - How do you see SRE evolving in an AI-first world? Does it become more automated, more policy-driven, or even partially autonomous?
    Topic 6 - For organizations that want to embrace AI-assisted development but avoid production chaos, what are the most important guardrails they should put in place?
    Topic 7 - If we fast-forward 2–3 years, what does a ‘modern’ production stack look like in a world where most code is AI-generated? What capabilities become absolutely essential? In one sentence—what’s the #1 thing a CTO should do right now?
    FEEDBACK?
    Email: show @ reasoning dot show
    Bluesky: @reasoningshow.bsky.social
    Twitter/X: @ReasoningShow
    Instagram: @reasoningshow
    TikTok: @reasoningshow

    33 min
  5. 1 APR

    The Future of Service belongs to Self-Improving AI

    SUMMARY: Today’s episode is all about a transformation happening in customer service—one that’s moving us from static systems and scripted workflows into something far more dynamic: AI systems that can actually learn and improve over time.
    GUEST: Shashi Upadhyay, President of Product, Engineering, and AI at Zendesk
    SHOW: 1015
    SHOW TRANSCRIPT: The Reasoning Show #1015 Transcript
    SHOW VIDEO: https://youtu.be/IQaxE-DjIpo
    SHOW SPONSORS:
    - ShareGate - ShareGate Protect. Microsoft 365 Governance, we got this!
    - Nasuni - Activate your data for AI and request a demo
    SHOW NOTES:
    - The future of service belongs to self-improving AI
    Topic 1 - Welcome to the show. Tell us a bit about your background and your focus today.
    Topic 2 - You describe this moment as a shift from systems of record to intelligent systems of action. What’s fundamentally broken in today’s customer service model that’s forcing this transition now? What changed in the last 2–3 years to make this possible?
    Topic 3 - There’s been a lot of AI in customer service that overpromised and underdelivered. What are the biggest gaps between what customers actually need—like resolution—and what legacy automation has been delivering?
    Topic 4 - The concept of a “self-improving” system is really powerful. What’s actually new here—what enables AI to improve with every interaction without constant human tuning?
    Topic 5 - You’ve moved from assistive copilots to what you call “agentic AI” that can resolve issues end-to-end. Where are we today on that journey—and what still requires human involvement?
    Topic 6 - Voice has historically been one of the hardest channels to automate. What changes with this new generation of AI that makes even complex, multi-step voice interactions solvable?
    Topic 7 - If we fast-forward 2–3 years, what does a “best-in-class” customer service experience look like in an AI-first world?
    FEEDBACK?
    Email: show @ reasoning dot show
    Bluesky: @reasoningshow.bsky.social
    Twitter/X: @ReasoningShow
    Instagram: @reasoningshow
    TikTok: @reasoningshow

    34 min
  6. 25 MAR

    Living the Claude-centric Life

    SUMMARY: With @bwhichard, we dig into how daily work-life changes when you make @AnthropicAI @claudeai the center of all workflow activities.
    SHOW: 1013
    SHOW TRANSCRIPT: The Reasoning Show #1013 Transcript
    SHOW VIDEO: https://youtu.be/zEmEH0t67js
    SHOW SPONSORS:
    - VENTION - Ready for expert developers who actually deliver? Visit ventionteams.com
    SHOW NOTES:
    Topic 1 - How long have you been living the Claude-life, and when did it dawn on you to make this central to your day-to-day activities?
    Topic 2 - What were the biggest hurdles you had to overcome before you trusted the system and started letting it have ownership over tasks and workflows?
    Topic 3 - What are some of your best practices in terms of machine setup, how or where you store data, and how you decide what to give it access to? Walk me through your thoughts on keeping things simple, where to be complex, how you think about security, etc.
    Topic 4 - How are you learning to give it more responsibilities, or just figuring out new ways to be productive with it?
    - Good resources you’re pulling from?
    - Any tips to make it use fewer tokens?
    - Skills marketplaces?
    Topic 5 - What have been some of the biggest barriers to successful adoption, or just areas where you’re still struggling to get it to do the things you want? Or are you still in the learning-curve stage, where things just keep building on one another?
    Topic 6 - If you took the knowledge and skills you have now in Claude-life into your day job, how do you see yourself working, as well as working with the rest of your team/teams? Would it bother you if you didn’t think they were using AI tools as much?
    FEEDBACK?
    Email: show @ reasoning dot show
    Bluesky: @reasoningshow.bsky.social
    Twitter/X: @ReasoningShow
    Instagram: @reasoningshow
    TikTok: @reasoningshow

    37 min
  7. 22 MAR

    Three Thoughts from NVIDIA GTC 2026

    SUMMARY: We dig into the NVIDIA GTC keynote and highlight three things - accelerated computing for everything, the complexity of the new inference stack, and NVIDIA’s “open” software stack, including NemoClaw.
    SHOW: 1012
    SHOW TRANSCRIPT: The Reasoning Show #1012 Transcript
    SHOW VIDEO: https://youtu.be/aXOr91q76yM
    SHOW SPONSORS:
    - VENTION - Ready for expert developers who actually deliver? Visit ventionteams.com
    SHOW NOTES:
    - NVIDIA GTC 2026 (Keynote)
    - NVIDIA NemoClaw - OpenClaw + OpenShell + NVIDIA Agent Toolkit
    - NVIDIA adds Groq LPU to their rack systems
    - NVIDIA to invest $26B in Open Weight Models
    - Interview with Jensen about Accelerated Computing (Stratechery)
    Topic 1 - Jensen’s trying to paint the bigger picture of accelerated computing everywhere (robotics, autonomous driving, gen-AI, physical AI - but also just everyday enterprise apps). Everything is about keeping the stock price up and margins high. The stock price provides the warchest to fight off all foes.
    Topic 2 - The inference architecture is a complex mix of GPUs, CPUs, ASICs/LPUs, and high-speed networking, and seems very different from the training architecture. How big is the burden on data center providers? What are the inference alternatives emerging?
    Topic 3 - Jensen talked a lot about OpenClaw and eventually about NVIDIA’s NemoClaw. How does his interest in Agentic AI tie into his interest in building NVIDIA’s own frontier model?
    FEEDBACK?
    Email: show @ reasoning dot show
    Bluesky: @reasoningshow.bsky.social
    Twitter/X: @ReasoningShow
    Instagram: @reasoningshow
    TikTok: @reasoningshow

    28 min
