The Reasoning Show

Massive Studios

AI moves fast. Thinking clearly matters more. The Reasoning Show cuts through the hype to explore how the smartest people in enterprise AI actually make decisions — the strategy, the tradeoffs, and the hard lessons no press release mentions. Every week, hosts Aaron Delp and Brian Gracely sit down with the founders building the tools, investors funding the shift, and operators running AI in the real world. Not hype. Not panic. Just clear-headed conversations with people who have to make actual decisions. Because the AI revolution isn't just happening. It's being reasoned through.

New shows every Wednesday and Sunday.

Topics: Enterprise AI strategy · LLMs in production · AI leadership · Agentic AI · Digital Sovereignty · Machine Learning · AI startups · Cloud Computing

  1. 1D AGO

    The Grid’s Breaking Point: Can AI Save the Infrastructure It’s About to Crash?

    SUMMARY: How real-time power flow optimization at the edge is helping data centers and the electrical grid handle surging AI energy demands more efficiently. By unlocking hidden capacity and dynamically managing power systems, we explain how existing infrastructure can support significantly more compute without massive new buildouts.
    GUEST: Marissa Hummon, CTO, Utilidata
    SHOW: 1020
    SHOW TRANSCRIPT: The Reasoning Show #1020 Transcript
    SHOW VIDEO: https://youtu.be/ItcpU8UjOFE
    SHOW SPONSORS:
    - Nasuni - Activate your data for AI and request a demo
    - ShareGate - ShareGate Protect. Microsoft 365 Governance, we got this!
    SHOW NOTES:
    - Utilidata (homepage)
    - AI Data Center to Receive 50% Capacity Boost with AI Power Orchestration
    KEY TOPICS:
    - Differences between grid power dynamics vs. AI workloads
    - Edge AI for real-time power flow optimization
    - Unlocking stranded capacity in existing infrastructure
    - "4-to-make-3" vs. "4-to-make-4" data center design
    - AI training vs. inference power consumption patterns
    - Role of NVIDIA-powered edge compute modules
    - Grid modernization and coordination with utilities
    - Security and resilience in critical infrastructure
    KEY MOMENTS:
    - From centralized AI models to edge-based decision-making
    - Defining efficiency: utilization vs. thermal performance
    - Why AI workloads aren't as constant as they seem
    - NVIDIA partnership and edge compute in power systems
    - Using redundancy to increase usable capacity
    - Increasing density of AI compute and hidden capacity
    - Data center vs. utility responsibilities
    - Addressing data center bottlenecks and scaling challenges
    - Customer landscape: hyperscalers to enterprise
    - Security, resilience, and critical infrastructure
    KEY INSIGHTS:
    - AI workloads are dynamic, not constant: training and inference create fluctuating power demands that can be optimized.
    - Edge intelligence is critical: real-time sensing and decision-making at the edge unlock efficiency gains not possible with centralized models.
    - Hidden capacity exists: many data centers have up to 2x unused power capacity due to lack of visibility and control.
    - Software-defined power is the future: faster control loops allow systems to safely exceed traditional design limits.
    - Efficiency = utilization: the biggest gains come from better use of existing infrastructure, not just improving hardware efficiency.
    TAKEAWAYS:
    - AI infrastructure growth is as much an energy challenge as a compute challenge
    - Real-time, edge-based control systems are key to scaling sustainably
    - Existing grid and data center investments can go further with smarter orchestration
    - The future of AI scaling depends on aligning compute innovation with energy intelligence
    FEEDBACK?
    Email: show @ reasoning dot show
    Bluesky: @reasoningshow.bsky.social
    Twitter/X: @ReasoningShow
    Instagram: @reasoningshow
    TikTok: @reasoningshow
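The "4-to-make-3" vs. "4-to-make-4" design tradeoff mentioned in the key topics can be sketched with simple arithmetic. This is a hypothetical illustration only — the feed count, ratings, and numbers are invented for the sketch, not figures from the episode:

```python
# Hypothetical sketch of "4-to-make-3" vs. "4-to-make-4" power design.
# All numbers are illustrative assumptions, not from the episode.

FEED_RATING_MW = 1.0   # rating of each of the 4 power feeds (assumed)
NUM_FEEDS = 4

# "4-to-make-3": one feed's worth of capacity is held in reserve so the
# site survives a single feed failure; only 3 feeds' worth of power is
# ever offered as usable load.
usable_4_make_3 = (NUM_FEEDS - 1) * FEED_RATING_MW   # 3.0 MW

# "4-to-make-4": if a fast, software-defined control loop can shed or
# shape load the instant a feed fails, all 4 feeds can carry revenue
# load in normal operation.
usable_4_make_4 = NUM_FEEDS * FEED_RATING_MW          # 4.0 MW

gain = usable_4_make_4 / usable_4_make_3 - 1
print(f"Usable capacity: {usable_4_make_3:.1f} MW -> "
      f"{usable_4_make_4:.1f} MW (+{gain:.0%})")
```

Under these assumed numbers, the same installed infrastructure yields about a third more usable capacity — the "hidden capacity" framing from the episode, recovered from redundancy rather than new buildout.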

    25 min
  2. 4D AGO

    Shadow AI is Faster Than Your Governance: Why Guardrails are Failing

    SUMMARY: Shadow AI is growing much faster than known AI adoption across businesses. How can IT teams get Shadow AI under control?
    GUEST: Uri Haramati, CEO at Torii
    SHOW: 1020
    SHOW TRANSCRIPT: The Reasoning Show #1020 Transcript
    SHOW VIDEO: https://youtu.be/AUrh_xICPzM
    SHOW SPONSORS:
    - ShareGate - ShareGate Protect. Microsoft 365 Governance, we got this!
    - Nasuni - Activate your data for AI and request a demo
    SHOW NOTES:
    - Torii (homepage)
    Topic 1 - Welcome to the show. Tell us about your background and your focus at Torii.
    Topic 2 - Is Shadow AI really a security problem—or is it a product-market fit problem inside the enterprise?
    Topic 3 - Why does Shadow AI spread faster—and become more dangerous—than traditional Shadow IT?
    Topic 4 - What's the first signal a company should look for to know Shadow AI is already happening?
    Topic 5 - How do you balance visibility vs. control without killing the productivity gains that drove Shadow AI in the first place?
    Topic 6 - How should organizations rethink 'data loss prevention' in a world where the leak is a prompt, not a file?
    Topic 7 - What does a 'well-governed' AI environment actually look like in practice—day-to-day for an employee?
    Topic 8 - Do you think Shadow AI ever fully goes away—or does it become a permanent operating model that companies need to design around?
    FEEDBACK?
    Email: show @ reasoning dot show
    Bluesky: @reasoningshow.bsky.social
    Twitter/X: @ReasoningShow
    Instagram: @reasoningshow
    TikTok: @reasoningshow

    29 min
  3. APR 15

    The Junior Dev Crisis: Who Inherits the Code When AI Does the Work?

    SUMMARY: Have we reached a point where coding is a solved problem? And if so, what are the downstream effects on companies that need software to differentiate their business?
    GUEST: Brandon Whichard, Co-Host of Software Defined Talk
    SHOW: 1019
    SHOW TRANSCRIPT: The Reasoning Show #1019 Transcript
    SHOW VIDEO: https://youtu.be/q0mksIKcBzk
    SHOW SPONSORS:
    - ShareGate - ShareGate Protect. Microsoft 365 Governance, we got this!
    - Nasuni - Activate your data for AI and request a demo
    SHOW NOTES:
    - The New Kingmakers (Stephen O'Grady - 2014)
    - Developer Growth Rates
    [Via ChatGPT] A useful way to think about it:
    - Typing code → mostly commoditized
    - Designing systems → partially assisted
    - Owning outcomes → still very human
    Topic 1 - How many years into Public Cloud did we assume that Cloud had solved the IT problem?
    Topic 2 - Developers - what are we solving for?
    - 10% of time coding, mostly on the last 10-15%
    - Lots of time in planning meetings (decoding requirements, resource planning, updates, etc.)
    - Decent amount of time fixing, troubleshooting, technical debt reduction
    Topic 2a - Business people have unlimited ideas, and most ideas are money + tech
    - What would be their interface to problem solving without developers? (Is this just a shift to consultants?)
    - Is this a massive opportunity for a great PaaS 3.0 company (e.g., is Vercel an example?)
    Topic 3 - [Hypothetical] Let's assume a fairly normal company fired all their software developers tomorrow. How long before they could get a moderately complex new application or integration into production?
    Topic 4 - Nobody likes to work on legacy code - missing source, missing engineers, etc. What do we call any code written by AI that was abandoned within the last 6-12 months?
    FEEDBACK?
    Email: show @ reasoning dot show
    Bluesky: @reasoningshow.bsky.social
    Twitter/X: @ReasoningShow
    Instagram: @reasoningshow
    TikTok: @reasoningshow

    33 min
  4. APR 12

    RAG Won’t Save Your Messy Data: The Brutal Truth About AI Reliability

    SUMMARY: The RAG (Retrieval Augmented Generation) pattern is one of the most frequently used to augment LLMs with context-specific information. Let's explore RAG.
    GUEST: Roie Schwaber-Cohen, Head of Developer Relations at Pinecone
    SHOW: 1018
    SHOW TRANSCRIPT: The Reasoning Show #1018 Transcript
    SHOW VIDEO: https://youtu.be/-kZZEMR341Q
    SHOW SPONSORS:
    - Nasuni - Activate your data for AI and request a demo
    - ShareGate - ShareGate Protect. Microsoft 365 Governance, we got this!
    SHOW NOTES:
    Topic 1 - Welcome to the show. Tell us a little bit about your background, and what you focus on these days at Pinecone.
    Topic 2 - Let's begin by talking about RAG systems. What are they? Why do companies choose to use them? What benefits do they provide in AI systems?
    Topic 3 - At a high level, RAG sounds straightforward—retrieve relevant context, generate an answer. But in practice, where does it break first as systems scale?
    Topic 4 - I've heard that RAG systems can return answers that are technically correct but fundamentally wrong. What's a concrete example of that happening in production—and why does it slip past most teams?
    Topic 5 - In traditional systems, we assume there's a single source of truth. But in enterprise environments, 'truth' is often versioned, contextual, and conflicting. How should teams rethink 'truth' when building AI systems?
    Topic 6 - A lot of teams assume their knowledge base is 'good enough' for RAG. What do they usually underestimate about the messiness of real enterprise data?
    Topic 7 - There's a growing narrative that better reasoning models can compensate for weaker retrieval. From what you've seen, where does that idea fall apart?
    Topic 8 - If correctness depends on things like timing, policy scope, or configuration, how should teams design systems that understand context—not just content?
    Topic 9 - Looking ahead, what replaces today's RAG architectures? What patterns are emerging among teams that are actually getting this right?
    FEEDBACK?
    Email: show @ reasoning dot show
    Bluesky: @reasoningshow.bsky.social
    Twitter/X: @ReasoningShow
    Instagram: @reasoningshow
    TikTok: @reasoningshow
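For listeners new to the pattern discussed in this episode, the RAG loop really is "retrieve relevant context, then generate an answer." The sketch below stands in for a real system with naive word-overlap scoring in place of a vector database, and a prompt template in place of an LLM call — every name here is illustrative, not Pinecone's API:

```python
# Minimal sketch of the RAG pattern: retrieve relevant documents,
# then assemble the augmented prompt a generator would receive.
# Word-overlap scoring is a toy stand-in for real vector retrieval.

def tokens(text: str) -> set[str]:
    """Lowercase words with trailing punctuation stripped."""
    return {w.strip(".,!?").lower() for w in text.split()}

def score(query: str, doc: str) -> int:
    """Toy relevance score: count of shared words."""
    return len(tokens(query) & tokens(doc))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the top-k documents by overlap score."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble the context-augmented prompt for the LLM."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

knowledge_base = [
    "VPN access requires hardware tokens as of March.",
    "The cafeteria opens at 8am on weekdays.",
    "Expense reports are filed through the finance portal.",
]
print(build_prompt("How do I get VPN access?", knowledge_base))
```

Production systems swap the overlap score for embedding similarity over a vector index — which is exactly where the episode's failure modes (stale documents, conflicting "truths," context-free retrieval) creep in.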

    29 min
  5. APR 8

    The Productivity Paradox: Why More AI Code is Slowing Down Ship Times

    SUMMARY: Discover how AI is transforming software development and what it means for engineering leaders.
    GUEST: Jeff Keyes, Field CTO at AllStacks
    SHOW: 1017
    SHOW TRANSCRIPT: The Reasoning Show #1017 Transcript
    SHOW VIDEO: https://youtu.be/cXPu8iWeB0k
    SHOW SPONSORS:
    - ShareGate - ShareGate Protect. Microsoft 365 Governance, we got this!
    - Nasuni - Activate your data for AI and request a demo
    SHOW NOTES:
    Topic 1 - Welcome to the show. Tell us a little bit about your background, and what you focus on these days at AllStacks.
    Topic 2 - You've been talking to a lot of engineering leaders using AI coding tools—what's the most surprising gap you're seeing between increased code generation and actual delivery outcomes?
    Topic 3 - Why does increasing developer output with AI often lead to more debugging, duplication, or cleanup instead of faster delivery?
    Topic 4 - You've described an 'invisible rework loop'—can you walk us through what that looks like inside a modern engineering team?
    Topic 5 - As code generation gets easier, where does the real bottleneck shift in the software delivery lifecycle?
    Topic 6 - How do unclear product or engineering specifications get amplified in an AI-assisted development environment?
    Topic 7 - If traditional metrics like lines of code or velocity are becoming misleading, what should engineering leaders actually measure to know if AI is improving delivery?
    Topic 8 - What does a 'healthy' AI-assisted development workflow look like 12–18 months from now?
    FEEDBACK?
    Email: show @ reasoning dot show
    Bluesky: @reasoningshow.bsky.social
    Twitter/X: @ReasoningShow
    Instagram: @reasoningshow
    TikTok: @reasoningshow

    34 min
  6. APR 5

    The Production Chaos: Why AI-Generated Code is Breaking Traditional SRE

    SUMMARY: With the explosion of AI-generated code and applications, the modern SRE requires an AI-native approach to managing complex systems.
    GUEST: Anish Agarwal, CEO/Co-founder of Traversal
    SHOW: 1016
    SHOW TRANSCRIPT: The Reasoning Show #1016 Transcript
    SHOW VIDEO: https://youtu.be/hF3MCRDhMno
    SHOW SPONSORS:
    - Nasuni - Activate your data for AI and request a demo
    - ShareGate - ShareGate Protect. Microsoft 365 Governance, we got this!
    SHOW NOTES:
    - Traversal (homepage)
    Topic 1 - Welcome to the show. Tell us a little bit about your background, and what you focus on these days at Traversal.
    Topic 2 - AI is dramatically accelerating code generation, but not improving production outcomes. What's fundamentally breaking in the traditional SRE model—and where do you see the biggest friction between speed and reliability?
    Topic 3 - What are the most common failure patterns or mistakes you're seeing in production from AI-generated code—and what's driving them?
    Topic 4 - AI can generate functional code, but it often lacks context about how systems behave in production. How is this changing what 'good observability' needs to look like?
    Topic 5 - How do you see SRE evolving in an AI-first world? Does it become more automated, more policy-driven, or even partially autonomous?
    Topic 6 - For organizations that want to embrace AI-assisted development but avoid production chaos, what are the most important guardrails they should put in place?
    Topic 7 - If we fast-forward 2–3 years, what does a 'modern' production stack look like in a world where most code is AI-generated? What capabilities become absolutely essential? In one sentence—what's the #1 thing a CTO should do right now?
    FEEDBACK?
    Email: show @ reasoning dot show
    Bluesky: @reasoningshow.bsky.social
    Twitter/X: @ReasoningShow
    Instagram: @reasoningshow
    TikTok: @reasoningshow

    33 min
  7. APR 1

    The Future of Service Belongs to Self-Improving AI

    SUMMARY: Today's episode is all about a transformation happening in customer service—one that's moving us from static systems and scripted workflows into something far more dynamic: AI systems that can actually learn and improve over time.
    GUEST: Shashi Upadhyay, President of Product, Engineering, and AI at Zendesk
    SHOW: 1015
    SHOW TRANSCRIPT: The Reasoning Show #1015 Transcript
    SHOW VIDEO: https://youtu.be/IQaxE-DjIpo
    SHOW SPONSORS:
    - ShareGate - ShareGate Protect. Microsoft 365 Governance, we got this!
    - Nasuni - Activate your data for AI and request a demo
    SHOW NOTES:
    - The future of service belongs to self-improving AI
    Topic 1 - Welcome to the show. Tell us a bit about your background and your focus today.
    Topic 2 - You describe this moment as a shift from systems of record to intelligent systems of action. What's fundamentally broken in today's customer service model that's forcing this transition now? What changed in the last 2–3 years to make this possible?
    Topic 3 - There's been a lot of AI in customer service that overpromised and underdelivered. What are the biggest gaps between what customers actually need—like resolution—and what legacy automation has been delivering?
    Topic 4 - The concept of a "self-improving" system is really powerful. What's actually new here—what enables AI to improve with every interaction without constant human tuning?
    Topic 5 - You've moved from assistive copilots to what you call "agentic AI" that can resolve issues end-to-end. Where are we today on that journey—and what still requires human involvement?
    Topic 6 - Voice has historically been one of the hardest channels to automate. What changes with this new generation of AI that makes even complex, multi-step voice interactions solvable?
    Topic 7 - If we fast-forward 2–3 years, what does a "best-in-class" customer service experience look like in an AI-first world?
    FEEDBACK?
    Email: show @ reasoning dot show
    Bluesky: @reasoningshow.bsky.social
    Twitter/X: @ReasoningShow
    Instagram: @reasoningshow
    TikTok: @reasoningshow

    34 min
4.6 out of 5 · 150 Ratings

