The Enterprise AI Show

Massive Studios

The Enterprise AI Show explores the AI journey for Enterprise companies around the world. As the AI revolution moves from experimentation to execution, The Enterprise AI Show provides the clarity needed to lead. Join Aaron Delp and Brian Gracely as they explore the intersection of generative AI, enterprise systems, and global business strategy. Each episode features clear-headed conversations with the people making actual decisions (founders, investors, and practitioners), focusing on the technical architectures and business models that drive real-world ROI. New shows every Wednesday and Sunday. Topics: Enterprise AI strategy · The AI Economy · LLMs in production · AI leadership · Agentic AI · Digital Sovereignty · Machine Learning · AI startups · Cloud Computing

  1. 1 HR AGO

    AI, Data Centers, and the Power Crunch

    SUMMARY: We explore one of the most overlooked bottlenecks in the AI boom, energy and infrastructure, and why power availability is becoming the limiting factor.

    GUEST: Wannie Park, Founder/CEO of PADO AI
    SHOW: 1026
    SHOW TRANSCRIPT: The Reasoning Show #1026 Transcript
    SHOW VIDEO: https://youtu.be/satMQRxKQC8
    SHOW SPONSORS:
    - ShareGate - ShareGate Protect. Microsoft 365 Governance, we got this!
    - Nasuni - Activate your data for AI and request a demo

    SHOW NOTES:
    1. AI’s Hidden Constraint: Power
    - AI growth is no longer limited only by GPUs and compute
    - Power generation, cooling, and grid interconnects are emerging as major bottlenecks
    - Data centers could account for 10–12% of North American power demand in coming years
    2. Why Data Centers Are Being Reimagined
    - Traditional data centers were built for enterprise IT, not AI-scale workloads
    - AI infrastructure introduces massive power density needs and advanced cooling challenges
    3. The Grid Wasn’t Built for AI
    - Utilities are designed around peak demand scenarios
    - Most grids run well below peak capacity most of the time
    - AI workloads create volatile and unpredictable consumption patterns
    - Long interconnection timelines are pushing companies toward alternative infrastructure models
    4. GPU Utilization Is Surprisingly Low
    - GPU clusters are often underutilized because of scheduling inefficiencies, cooling limitations, and SLA constraints
    - Effective GPU utilization may be as low as 12–13% in some environments
    5. Cooling as a Major Optimization Layer
    - Legacy data centers often cool entire zones inefficiently
    - Pado AI aligns AI workloads, cooling systems, and power allocation
    - Workload-aware orchestration helps optimize cooling and compute efficiency
    6. The Rise of “Compute Forecasting”
    - Pado forecasts compute demand instead of energy demand
    - The platform models GPU workloads, power consumption, cooling requirements, and SLA priorities
    - Goal: maximize “compute per megawatt”
    7. AI Workloads Become Time-Aware
    - AI providers may increasingly shift workloads to off-peak periods, incentivize delayed non-urgent jobs, and dynamically balance compute demand
    - Users are already seeing variable inference latency in real-world AI systems
    8. Sustainability vs. Reliability vs. Profitability
    - Operators must balance uptime expectations, infrastructure costs, and sustainability goals
    - Renewable adoption is growing, but reliability still drives investment in natural gas and battery-backed systems
    9. Brownfield vs. Greenfield Opportunities
    - Pado AI is focused primarily on existing (“brownfield”) data centers
    - Existing enterprise infrastructure can often be extended and optimized instead of rebuilt
    - Enterprises may gain significant AI capability without hyperscale GPU deployments

    FEEDBACK?
    Email: show @ reasoning dot show
    Bluesky: @reasoningshow.bsky.social
    Twitter/X: @ReasoningShow
    Instagram: @reasoningshow
    TikTok: @reasoningshow
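The "compute per megawatt" framing in the show notes above can be made concrete with simple arithmetic. This is a hypothetical sketch with invented numbers (GPU count, per-GPU TFLOPS and power draw, cooling overhead), not Pado AI's actual model:

```python
# Hypothetical illustration of "compute per megawatt": delivered throughput
# depends as much on utilization as on installed capacity.
# All figures below are invented for illustration.

def compute_per_megawatt(num_gpus, tflops_per_gpu, utilization,
                         gpu_power_kw, cooling_overhead):
    """Delivered TFLOPS per megawatt of facility power."""
    delivered_tflops = num_gpus * tflops_per_gpu * utilization
    facility_mw = num_gpus * gpu_power_kw * (1 + cooling_overhead) / 1000
    return delivered_tflops / facility_mw

# A hypothetical 1,000-GPU cluster at the ~13% effective utilization
# mentioned in the episode, versus the same hardware at 50% utilization.
low = compute_per_megawatt(1000, 1000, 0.13, 1.0, 0.5)
high = compute_per_megawatt(1000, 1000, 0.50, 1.0, 0.5)
print(f"{low:.0f} vs {high:.0f} TFLOPS/MW")
```

Facility power is identical in both cases; only utilization changes, which is why the episode frames utilization, scheduling, and cooling as the levers for getting more compute out of the same megawatts.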

    34 min
  2. 29 APR.

    Halt & Retool: Rewriting Software Development in the Age of AI Agents

    SUMMARY: Exploring how to fully embrace AI-driven, agent-based software development, resulting in dramatically increased productivity and faster feature delivery. The episode highlights a broader shift in engineering, from writing code to orchestrating AI agents.

    GUEST: Sam Ramji, CEO/Co-founder at Sailplane
    SHOW: 1023
    SHOW TRANSCRIPT: The Reasoning Show #1023 Transcript
    SHOW VIDEO: https://youtu.be/q50s0oL37pQ
    SHOW SPONSORS:
    - Nasuni - Activate your data for AI and request a demo
    - ShareGate - ShareGate Protect. Microsoft 365 Governance, we got this!

    SHOW NOTES:
    - Halt and Retool (presentation)
    - OpenAI Harness Engineering
    - Anthropic Harness Engineering
    1. The “Halt and Retool” Moment
    - A single-day build and deployment of a production feature triggered a company-wide realization
    - The company paused all development to reassess how AI fundamentally changes engineering workflows
    - Creating “shock moments” (like stopping work) is key to driving mindset shifts
    2. From Coding to Agent Orchestration
    - Developers are shifting from writing code to managing AI agents
    - Work resembles “multi-boxing” or conducting an orchestra of parallel agents
    - Success depends on coordinating tasks, not executing them directly
    3. The Rise of Harness Engineering
    - Defined as everything between raw AI prompts and production-ready output
    - Focus: eliminating friction across the software development lifecycle
    - Key practices: logging agent errors and friction points, continuously refining workflows and tooling, and letting AI reflect on and improve its own mistakes
    4. Spec-Driven Development Becomes Critical
    - Poor specifications lead to exponential inefficiencies
    - Teams now spend significantly more time on design and specs than on coding
    5. Measuring the Impact
    - ~3x increase in code velocity
    - Near-zero “bit rot”
    - Faster feature delivery, sometimes within 24 hours
    6. Token Maxing & Developer Fitness
    - Higher token usage often signals better workflows and deeper integration with AI
    - Performance becomes about system design, not efficiency constraints
    7. New Tools & Interfaces
    - Increased use of voice interfaces over typing
    - Terminal-first workflows replacing traditional IDE-centric approaches
    - AI-accessible knowledge bases becoming standard
    8. The Future of Software Engineering
    - Within ~6 months: developers may stop writing code
    - Within ~12 months: developers may stop reading code
    - Focus shifts to intent, design, and orchestration, plus domain expertise and problem modeling

    FEEDBACK?
    Email: show @ reasoning dot show
    Bluesky: @reasoningshow.bsky.social
    Twitter/X: @ReasoningShow
    Instagram: @reasoningshow
    TikTok: @reasoningshow
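The "multi-boxing" shift described in the notes above, dispatching many parallel tasks and reviewing results rather than doing each task directly, can be sketched in a few lines. The agent here is a stub and all task names are invented; this is not Sailplane's tooling:

```python
# Minimal sketch of agent orchestration: the developer fans scoped tasks
# out in parallel and reviews the results. `run_agent` is a hypothetical
# stand-in; in practice each call would drive a real coding agent.

from concurrent.futures import ThreadPoolExecutor

def run_agent(task):
    """Hypothetical stand-in for handing one scoped task to an AI agent."""
    return f"done: {task}"

tasks = ["write migration", "update docs", "fix flaky test"]

# Fan the tasks out in parallel ("multi-boxing"), preserving input order.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(run_agent, tasks))

for r in results:
    print(r)
```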

    35 min
  3. 26 APR.

    The Zero-CVE Mirage: Hardening Software in the Age of AI Attacks

    SUMMARY: How software development is rapidly evolving in the age of AI and automation. Matt Moore shares how his team is rethinking secure software supply chains, scaling infrastructure, and safely integrating AI agents into development workflows.

    GUEST: Matt Moore, CTO at Chainguard
    SHOW: 1022
    SHOW TRANSCRIPT: The Reasoning Show #1022 Transcript
    SHOW VIDEO: https://youtu.be/9Q0kWkTYRs8
    SHOW SPONSORS:
    - ShareGate - ShareGate Protect. Microsoft 365 Governance, we got this!
    - Nasuni - Activate your data for AI and request a demo

    SHOW NOTES:
    - Chainguard Factory 2.0
    - DriftlessAF
    Scaling Challenges & “Factory” Evolution
    - Early automation relied on tools like GitHub Actions
    - At scale, simple systems broke due to massive event volumes, API rate limits (e.g., GitHub quotas), and exponential fan-out effects
    - Key innovation: a custom work queue + reconciliation model, delivering ~90% event deduplication, controlled throughput and backpressure, and improved reliability and system stability
    Introducing Driftless
    - Built on reconciliation principles (inspired by Kubernetes): compare desired vs. actual state, then continuously reconcile differences
    - Benefits: resilience to missed events, automatic retries and recovery, and better scaling than purely event-driven systems
    AI Agents in Software Development
    - AI is dramatically accelerating development workflows
    - Chainguard uses agents to remediate vulnerabilities (CVEs), update dependencies, fix failing tests, and adapt to upstream changes
    Key Design Philosophy
    - Least privilege → “least tool call”
    - Avoid giving agents full system access
    - Provide narrowly scoped tools for specific tasks
    - Delegate execution to sandboxed systems (e.g., CI pipelines)
    - Focus on safe, controlled automation
    Industry Shift: Velocity vs. Security
    - Explosion of AI-driven tools (e.g., autonomous PR generation)
    - Massive increase in development velocity
    - New risks: poorly secured agent frameworks and malicious or unsafe automation patterns
    Key Takeaways
    - Scale changes everything: simple systems break under massive workloads, and purpose-built infrastructure becomes necessary
    - Reconciliation beats purely event-driven systems at scale: more resilient, predictable, and controllable
    - AI is a force multiplier, but it requires guardrails: unrestricted agents introduce serious risk, while constrained, purpose-built agents are safer and more effective
    - Continuous learning is mandatory: AI tooling is evolving too fast for static skillsets, so teams must actively experiment and adapt

    FEEDBACK?
    Email: show @ reasoning dot show
    Bluesky: @reasoningshow.bsky.social
    Twitter/X: @ReasoningShow
    Instagram: @reasoningshow
    TikTok: @reasoningshow
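The work-queue-plus-reconciliation pattern in the notes above (compare desired vs. actual state, converge, and collapse duplicate events under one queue key) can be sketched minimally. All names and states here are invented for illustration; this is not the Driftless implementation:

```python
# Minimal sketch of a Kubernetes-style reconciliation loop with a keyed
# work queue. Duplicate events for the same object collapse to one queue
# entry, which is the mechanism behind heavy event deduplication.

import queue

def reconcile(key, desired, actual):
    """Return the actions needed to move `actual[key]` toward `desired[key]`."""
    want, have = desired.get(key), actual.get(key)
    if want == have:
        return []                      # already converged, nothing to do
    actual[key] = want                 # apply the change
    return [f"update {key} -> {want}"]

pending, enqueued = queue.Queue(), set()

def enqueue(key):
    # Re-enqueueing a key already in flight is a no-op (deduplication).
    if key not in enqueued:
        enqueued.add(key)
        pending.put(key)

desired = {"pkg-a": "v2", "pkg-b": "v1"}   # hypothetical package versions
actual = {"pkg-a": "v1", "pkg-b": "v1"}

# A burst of duplicate events for the same object collapses to one item.
for _ in range(5):
    enqueue("pkg-a")
enqueue("pkg-b")

actions = []
while not pending.empty():
    key = pending.get()
    enqueued.discard(key)
    actions += reconcile(key, desired, actual)

print(actions)  # only pkg-a needed work; pkg-b was already converged
```

Because the loop compares state rather than replaying events, a missed or duplicated event does no harm: the next reconciliation pass converges to the same result, which is the resilience property the episode highlights.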

    35 min
  4. 22 APR.

    The Grid’s Breaking Point: Can AI Save the Infrastructure It’s About to Crash?

    SUMMARY: How real-time power flow optimization at the edge is helping data centers and the electrical grid handle surging AI energy demands more efficiently. By unlocking hidden capacity and dynamically managing power systems, we explain how existing infrastructure can support significantly more compute without massive new buildouts.

    GUEST: Marissa Hummon, CTO, Utilidata
    SHOW: 1021
    SHOW TRANSCRIPT: The Reasoning Show #1021 Transcript
    SHOW VIDEO: https://youtu.be/ItcpU8UjOFE
    SHOW SPONSORS:
    - Nasuni - Activate your data for AI and request a demo
    - ShareGate - ShareGate Protect. Microsoft 365 Governance, we got this!

    SHOW NOTES:
    - Utilidata (homepage)
    - AI Data Center to Receive 50% Capacity Boost with AI Power Orchestration

    KEY TOPICS:
    - Differences between grid power dynamics vs. AI workloads
    - Edge AI for real-time power flow optimization
    - Unlocking stranded capacity in existing infrastructure
    - “4-to-make-3” vs. “4-to-make-4” data center design
    - AI training vs. inference power consumption patterns
    - Role of NVIDIA-powered edge compute modules
    - Grid modernization and coordination with utilities
    - Security and resilience in critical infrastructure

    KEY MOMENTS:
    - From centralized AI models to edge-based decision-making
    - Defining efficiency: utilization vs. thermal performance
    - Why AI workloads aren’t as constant as they seem
    - NVIDIA partnership and edge compute in power systems
    - Using redundancy to increase usable capacity
    - Increasing density of AI compute and hidden capacity
    - Data center vs. utility responsibilities
    - Addressing data center bottlenecks and scaling challenges
    - Customer landscape: hyperscalers to enterprise
    - Security, resilience, and critical infrastructure

    KEY INSIGHTS:
    - AI workloads are dynamic, not constant: training and inference create fluctuating power demands that can be optimized.
    - Edge intelligence is critical: real-time sensing and decision-making at the edge unlock efficiency gains not possible with centralized models.
    - Hidden capacity exists: many data centers have up to 2x unused power capacity due to lack of visibility and control.
    - Software-defined power is the future: faster control loops allow systems to safely exceed traditional design limits.
    - Efficiency = utilization: the biggest gains come from better use of existing infrastructure, not just from improving hardware efficiency.

    TAKEAWAYS:
    - AI infrastructure growth is as much an energy challenge as a compute challenge
    - Real-time, edge-based control systems are key to scaling sustainably
    - Existing grid and data center investments can go further with smarter orchestration
    - The future of AI scaling depends on aligning compute innovation with energy intelligence

    FEEDBACK?
    Email: show @ reasoning dot show
    Bluesky: @reasoningshow.bsky.social
    Twitter/X: @ReasoningShow
    Instagram: @reasoningshow
    TikTok: @reasoningshow
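The "4-to-make-3" vs. "4-to-make-4" distinction discussed above is, at bottom, a capacity calculation: with four power feeds sized for redundancy, a traditional design only ever loads three feeds' worth, while fast software control aims to make the fourth safely usable. A back-of-envelope sketch with an invented feed size (this deliberately simplifies away the control engineering that makes the extra feed safe to use):

```python
# Back-of-envelope comparison of "4-to-make-3" (one feed held in reserve)
# vs. "4-to-make-4" (all feeds usable under active control).
# The 10 MW feed size is invented for illustration.

FEED_MW = 10  # hypothetical capacity of each power feed

def usable_capacity(feeds, design):
    if design == "4-to-make-3":   # one feed kept as static redundancy
        return (feeds - 1) * FEED_MW
    if design == "4-to-make-4":   # all feeds usable with real-time control
        return feeds * FEED_MW
    raise ValueError(design)

base = usable_capacity(4, "4-to-make-3")
full = usable_capacity(4, "4-to-make-4")
print(f"gain: {100 * (full - base) / base:.0f}%")  # one extra feed out of three in use
```

The same installed hardware yields a third more usable power, which is the "stranded capacity" argument: the headroom already exists, and software-defined control is what unlocks it.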

    25 min
  5. 19 APR.

    Shadow AI is Faster Than Your Governance: Why Guardrails are Failing

    SUMMARY: Shadow AI is growing much faster than known AI adoption across businesses. How can IT teams get Shadow AI under control?

    GUEST: Uri Haramati, CEO at Torii
    SHOW: 1020
    SHOW TRANSCRIPT: The Reasoning Show #1020 Transcript
    SHOW VIDEO: https://youtu.be/AUrh_xICPzM
    SHOW SPONSORS:
    - ShareGate - ShareGate Protect. Microsoft 365 Governance, we got this!
    - Nasuni - Activate your data for AI and request a demo

    SHOW NOTES:
    - Torii (homepage)
    Topic 1 - Welcome to the show. Tell us about your background and your focus at Torii.
    Topic 2 - Is Shadow AI really a security problem, or is it a product-market fit problem inside the enterprise?
    Topic 3 - Why does Shadow AI spread faster, and become more dangerous, than traditional Shadow IT?
    Topic 4 - What’s the first signal a company should look for to know Shadow AI is already happening?
    Topic 5 - How do you balance visibility vs. control without killing the productivity gains that drove Shadow AI in the first place?
    Topic 6 - How should organizations rethink “data loss prevention” in a world where the leak is a prompt, not a file?
    Topic 7 - What does a “well-governed” AI environment actually look like in practice, day-to-day, for an employee?
    Topic 8 - Do you think Shadow AI ever fully goes away, or does it become a permanent operating model that companies need to design around?

    FEEDBACK?
    Email: show @ reasoning dot show
    Bluesky: @reasoningshow.bsky.social
    Twitter/X: @ReasoningShow
    Instagram: @reasoningshow
    TikTok: @reasoningshow

    29 min
  6. 15 APR.

    The Junior Dev Crisis: Who Inherits the Code When AI Does the Work?

    SUMMARY: Have we reached a point where coding is a solved problem? And if so, what are the downstream effects on companies that need software to differentiate their business?

    GUEST: Brandon Whichard, Co-Host of Software Defined Talk
    SHOW: 1019
    SHOW TRANSCRIPT: The Reasoning Show #1019 Transcript
    SHOW VIDEO: https://youtu.be/q0mksIKcBzk
    SHOW SPONSORS:
    - ShareGate - ShareGate Protect. Microsoft 365 Governance, we got this!
    - Nasuni - Activate your data for AI and request a demo

    SHOW NOTES:
    - The New Kingmakers (Stephen O’Grady, 2014)
    - Developer Growth Rates [via ChatGPT]
    A useful way to think about it:
    - Typing code → mostly commoditized
    - Designing systems → partially assisted
    - Owning outcomes → still very human
    Topic 1 - How many years into Public Cloud did we assume that Cloud had solved the IT problem?
    Topic 2 - Developers: what are we solving for?
    - 10% of time coding, mostly on the last 10-15%
    - Lots of time in planning meetings (decoding requirements, resource planning, updates, etc.)
    - A decent amount of time fixing, troubleshooting, and reducing technical debt
    Topic 2a - Business people have unlimited ideas, and most ideas are money + tech.
    - What would be their interface to problem solving without developers? (Is this just a shift to consultants?)
    - Is this a massive opportunity for a great PaaS 3.0 company (e.g., is Vercel an example?)
    Topic 3 - [Hypothetical] Let’s assume a fairly normal company fired all their software developers tomorrow. How long before they could get a moderately complex new application or integration into production?
    Topic 4 - Nobody likes to work on legacy code: missing source, missing engineers, etc. What do we call any code written by AI that was abandoned within the last 6-12 months?

    FEEDBACK?
    Email: show @ reasoning dot show
    Bluesky: @reasoningshow.bsky.social
    Twitter/X: @ReasoningShow
    Instagram: @reasoningshow
    TikTok: @reasoningshow

    33 min
4.6 out of 5 (150 ratings)
