The New Stack Podcast

The New Stack

The New Stack Podcast is all about the developers, software engineers and operations people who build at-scale architectures that change the way we develop and deploy software. For more content from The New Stack, subscribe on YouTube at: https://www.youtube.com/c/TheNewStack

  1. 7H AGO

    Edge-forward: Akamai eyes sweet spot between centralized & decentralized AI inference

    At KubeCon + CloudNativeCon Europe 2026, Lena Hall and Thorsten Hans of Akamai outlined how the company is evolving from a CDN provider into a developer-focused cloud platform for AI. Akamai’s strategy centers on low-latency, distributed computing, combining managed Kubernetes, serverless functions, and a distributed AI inference platform to support modern workloads. With a global footprint of core and “distributed reach” datacenters, Akamai aims to bring compute closer to users while still leveraging centralized infrastructure for heavier processing. This hybrid model enables the fast feedback loops critical for applications like fraud detection, robotics, and conversational AI. To address concerns about complexity, Akamai emphasizes managed infrastructure and self-service tools that abstract away integration challenges. Its platform supports open source through managed Kubernetes and pre-packaged tools, simplifying deployment. Akamai also invests in serverless technologies like WebAssembly-based functions, enabling developers to build and deploy globally distributed applications quickly. Overall, the company prioritizes developer experience, allowing teams to focus on application logic rather than infrastructure management.

    Learn more from The New Stack about how Akamai is transforming into a developer-focused cloud platform for AI:
    - Akamai Picks Up Hosting for Kernel.org
    - Should You Care About Fermyon Wasm Functions on Akamai?

    Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
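The hybrid model described above can be sketched as a simple routing decision: latency-sensitive, lightweight requests go to an edge region near the user, while heavier jobs fall back to centralized compute. This is an illustrative sketch only; the region names, token threshold, and function are invented for the example, not Akamai APIs.

```python
# Hypothetical hybrid edge/core inference router. Regions and the
# token threshold are illustrative assumptions, not Akamai's platform.

EDGE_REGIONS = {"fra", "lon", "nyc"}   # assumed edge points of presence
CORE_REGION = "us-core-1"              # assumed centralized datacenter

def route_inference(request_tokens: int, user_region: str,
                    edge_limit: int = 512) -> str:
    """Pick a target region: small prompts run at the edge near the
    user for low latency; large ones go to centralized compute."""
    if request_tokens <= edge_limit and user_region in EDGE_REGIONS:
        return user_region      # low-latency edge inference
    return CORE_REGION          # heavier processing runs centrally

print(route_inference(128, "fra"))    # fra
print(route_inference(4096, "fra"))   # us-core-1
```

The point of the sketch is that the routing policy, not the model, decides where inference runs, which is what lets the same application serve both fast conversational loops and heavy batch work.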

    22 min
  2. MAR 24

    Kubernetes co-founder Brendan Burns: AI-generated code will become as invisible as assembly

    In this episode of The New Stack Makers, Microsoft Corporate Vice President and Technical Fellow Brendan Burns discusses how AI is reshaping Kubernetes and modern infrastructure. Originally designed for stateless applications, Kubernetes is evolving to support AI workloads that require complex GPU scheduling, co-location, and failure sensitivity. Features like Dynamic Resource Allocation and projects such as KAITO introduce AI-specific capabilities while maintaining Kubernetes’ core strength: vendor-neutral extensibility. Burns highlights that AI also changes how systems are monitored. Success is no longer binary; it depends on answer quality, user feedback, and large-scale testing using thousands of prompts and even AI evaluators. On software development, Burns argues that the industry’s focus on reviewing AI-generated code is temporary. Just as developers stopped inspecting compiler output, AI-generated code will become a disposable artifact validated by tests and specifications. This shift will redefine engineering roles and may lead to programming languages designed for machines rather than humans, signaling a fundamental transformation in how software is built and maintained.

    Learn more from The New Stack about how AI is reshaping Kubernetes and modern infrastructure:
    - How To Use AI To Design Intelligent, Adaptable Infrastructure
    - The AI Infrastructure crisis: When ambition meets ancient systems
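The non-binary evaluation Burns describes can be sketched as a scoring loop over a prompt suite: each answer gets a quality score rather than a pass/fail, and the suite average is what you monitor. The keyword-based grader below is a stand-in for the AI evaluators mentioned in the episode; all names and the toy model are invented for illustration.

```python
# Minimal sketch of quality-scored evaluation over a prompt suite.
# The keyword grader is an assumed stand-in for an AI evaluator.

def grade(answer: str, must_mention: list[str]) -> float:
    """Return a 0..1 quality score: fraction of required facts mentioned."""
    hits = sum(1 for fact in must_mention if fact.lower() in answer.lower())
    return hits / len(must_mention)

def evaluate(cases: list[tuple[str, list[str]]], model) -> float:
    """Average quality over a prompt suite (thousands, in practice)."""
    scores = [grade(model(prompt), facts) for prompt, facts in cases]
    return sum(scores) / len(scores)

# Toy "model" and a two-prompt suite, purely for demonstration.
fake_model = lambda prompt: "Kubernetes schedules pods onto nodes."
suite = [("What does the scheduler do?", ["pods", "nodes"]),
         ("Name the orchestrator.", ["kubernetes"])]
print(evaluate(suite, fake_model))   # 1.0
```

In a real setup the grader would itself be a model call and the suite average would be tracked over time like any other production metric.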

    44 min
  3. MAR 20

    AI can write your infrastructure code. There's a reason most teams won't let it.

    In this episode of The New Stack Agents, Marcin Wyszynski, co-founder of Spacelift and OpenTofu, explains how AI is transforming infrastructure as code (IaC). Originally built for individual operators, tools like Terraform struggled to scale across teams, prompting Wyszynski to help launch OpenTofu after HashiCorp’s 2023 license change. Now the bigger shift is AI: engineers no longer write configuration languages like HCL by hand, as AI tools generate them, dramatically lowering the barrier to entry. However, this creates a dangerous gap between generating infrastructure and truly understanding it—like using a phrasebook to ask questions in a foreign language but not understanding the response. In infrastructure, that lack of comprehension can lead to serious risks. To address this, Spacelift introduced Intent, which allows AI to directly interact with cloud systems in real time while enforcing deterministic guardrails through policy controls. The broader challenge remains balancing speed with control—enabling faster experimentation without sacrificing safety. Wyszynski argues that, like humans, AI can be trusted when constrained by strong guardrails.

    Learn more from The New Stack about how AI is transforming infrastructure as code (IaC):
    - The Maturing State of Infrastructure as Code in 2025
    - Generative AI Tools for Infrastructure as Code
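The deterministic-guardrail idea can be sketched as a fixed policy check that every AI-proposed change must pass before anything is applied. The rules and change format below are invented for illustration; they are not Spacelift's actual policy engine or API.

```python
# Hypothetical policy gate for AI-generated infrastructure changes.
# Rules and the change schema are assumptions for this sketch.

FORBIDDEN_ACTIONS = {"delete_database", "open_security_group_to_world"}
MAX_INSTANCES = 10

def policy_gate(change: dict) -> tuple[bool, str]:
    """Deterministically allow or reject a proposed change, with a reason."""
    if change["action"] in FORBIDDEN_ACTIONS:
        return False, f"action {change['action']!r} is never allowed"
    if change.get("instance_count", 0) > MAX_INSTANCES:
        return False, "instance count exceeds policy limit"
    return True, "ok"

print(policy_gate({"action": "scale", "instance_count": 3}))
print(policy_gate({"action": "delete_database"}))
```

Because the gate is ordinary code rather than another model, its decisions are reproducible, which is what makes the speed-versus-control trade-off tractable.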

    29 min
  4. MAR 6

    OutSystems CEO on how enterprises can successfully adopt vibe coding

    Woodson Martin, CEO of OutSystems, argues that successful enterprise AI deployments rarely rely on standalone agents. Instead, production systems combine AI agents with data, workflows, APIs, applications, and human oversight. While claims that “95% of agent pilots fail” are common, Martin suggests many of those pilots were simply low-commitment experiments made possible by the low cost of testing AI. Enterprises that succeed typically keep humans in the loop, at least initially, to review recommendations and maintain control over decisions. Current enterprise use cases for agents include document processing, decision support, and personalized outputs. When integrated into broader systems, these applications can deliver measurable productivity gains. For example, Travel Essence built an agentic system that reduced a two-hour customer planning process to three minutes, allowing staff to focus more on sales and helping drive 20% top-line growth. Martin also believes AI will pressure traditional SaaS seat-based pricing and accelerate custom software development. In this environment, governed platforms like OutSystems can help enterprises adopt “vibe coding” while maintaining compliance, security, and lifecycle management.

    Learn more from The New Stack about enterprise adoption of vibe coding:
    - How To Use Vibe Coding Safely in the Enterprise
    - 5 Challenges With Vibe Coding for Enterprises
    - Vibe Coding: The Shadow IT Problem No One Saw Coming
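The human-in-the-loop pattern Martin describes can be sketched as a review queue: agent output lands as a pending recommendation and is only acted on after explicit approval. The class and method names are invented for this sketch; they do not reflect OutSystems' product.

```python
# Hypothetical review queue separating agent proposals from actions.
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)
    applied: list = field(default_factory=list)

    def propose(self, recommendation: str) -> None:
        """Agent output lands here instead of executing immediately."""
        self.pending.append(recommendation)

    def approve(self, index: int) -> str:
        """A human reviews and releases one recommendation."""
        rec = self.pending.pop(index)
        self.applied.append(rec)
        return rec

q = ReviewQueue()
q.propose("Refund order #123 per policy")
print(len(q.applied))   # 0 until a human approves
q.approve(0)
print(q.applied)
```

The key design choice is that the agent never holds the authority to act; it can only add to `pending`, so control over decisions stays with the reviewer.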

    44 min
  5. FEB 20

    NanoClaw's answer to OpenClaw is minimal code, maximum isolation

    On The New Stack Agents, Gavriel Cohen discusses why he built NanoClaw, a minimalist alternative to OpenClaw, after discovering security and architectural flaws in the rapidly growing agentic framework. Cohen, co-founder of AI marketing agency Qwibit, had been running agents across operations, sales, and research using Claude Code. When Clawdbot (later OpenClaw) launched, it initially seemed ideal. But Cohen grew concerned after noticing questionable dependencies—including his own outdated GitHub package—excessive WhatsApp data storage, a massive AI-generated codebase nearing 400,000 lines, and a lack of OS-level isolation between agents. In response, he created NanoClaw with radical minimalism: only a few hundred core lines, minimal dependencies, and containerized agents. Built around Claude Code “skills,” NanoClaw enables modular, build-time integrations while keeping the runtime small enough to audit easily. Cohen argues AI changes coding norms—favoring duplication over DRY, relaxing strict file limits, and treating code as disposable. His goal is simple, secure infrastructure that enterprises can fully understand and trust.

    Learn more from The New Stack about the latest around personal AI agents:
    - Anthropic: You can still use your Claude accounts to run OpenClaw, NanoClaw and Co.
    - It took a researcher fewer than 2 hours to hijack OpenClaw
    - OpenClaw is being called a security “Dumpster fire,” but there is a way to stay safe
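The OS-level isolation Cohen describes can be sketched as running each agent in its own container with no network, a dropped capability set, and only its own working directory mounted. The image name, command, and flag choices below are assumptions for illustration, not NanoClaw's actual setup; the function only builds the `docker run` invocation, so Docker is not required to run it.

```python
# Hypothetical per-agent container command for OS-level isolation.
# Image name and entrypoint are placeholders, not NanoClaw's.

def agent_container_cmd(agent_name: str, workdir: str) -> list[str]:
    """Build a docker run command that gives the agent only its own
    volume, no network by default, and minimal kernel capabilities."""
    return [
        "docker", "run", "--rm",
        "--name", f"agent-{agent_name}",
        "--network", "none",        # no network unless a skill needs it
        "--cap-drop", "ALL",        # drop all kernel capabilities
        "-v", f"{workdir}:/work",   # agent sees only its own directory
        "agent-base:latest",        # placeholder image
        "run-agent", agent_name,    # placeholder entrypoint
    ]

print(" ".join(agent_container_cmd("research", "/tmp/research")))
```

A compromised agent in this model can corrupt its own mounted directory but cannot reach the host, the network, or sibling agents, which is the property a few hundred auditable lines can actually guarantee.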

    52 min
  6. FEB 19

    The developer as conductor: Leading an orchestra of AI agents with the feature flag baton

    A few weeks after Dynatrace acquired DevCycle, Michael Beemer and Andrew Norris discussed on The New Stack Makers podcast how feature flagging is becoming a critical safeguard in the AI era. By integrating DevCycle’s feature flagging into the Dynatrace observability platform, the combined solution delivers a “360-degree view” of software performance at the feature level. This closes a key visibility gap, enabling teams to see exactly how individual features affect systems in production. As “agentic development” accelerates—where AI agents rapidly generate code—feature flags act as a safety net. They allow teams to test, control, and roll back AI-generated changes in live environments, keeping a human in the loop before full releases. This reduces risk while speeding enterprise adoption of AI tools. The discussion also highlighted support for the Cloud Native Computing Foundation’s OpenFeature standard to avoid vendor lock-in. Ultimately, developers are evolving into “conductors,” orchestrating AI agents with feature flags as their baton.

    Learn more from The New Stack about the latest around AI enterprise development:
    - Why You Can't Build AI Without Progressive Delivery
    - Beyond automation: Dynatrace unveils agentic AI that fixes problems on its own
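The safety-net pattern above can be sketched in a few lines: the AI-generated code path ships dark behind a flag, is enabled for a fraction of traffic, and rolls back instantly by flipping the flag. The in-memory dict here stands in for a flag service like DevCycle; flag names and the bucketing scheme are invented for the sketch.

```python
# Hypothetical feature-flag gate for an AI-generated code path.
# The flags dict stands in for a real flag service; names are assumed.
import hashlib

flags = {"ai_generated_checkout": 0.10}   # rollout fraction, 0.0..1.0

def enabled(flag: str, user_id: str) -> bool:
    """Deterministically bucket a user into the rollout percentage."""
    h = int(hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest(), 16)
    return (h % 100) / 100 < flags.get(flag, 0.0)

def checkout(user_id: str) -> str:
    if enabled("ai_generated_checkout", user_id):
        return "new AI-generated path"
    return "stable hand-written path"

flags["ai_generated_checkout"] = 0.0   # instant rollback: no redeploy
print(checkout("user-42"))             # stable hand-written path
```

Hash-based bucketing keeps each user on the same side of the flag across requests, so a partial rollout behaves consistently while the team watches feature-level telemetry.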

    20 min
  7. FEB 13

    The reason AI agents shouldn’t touch your source code — and what they should do instead

    Dynatrace is at a pivotal point, expanding beyond traditional observability into a platform designed for autonomous operations and security powered by agentic AI. In an interview on The New Stack Makers, recorded at the Dynatrace Perform conference, Chief Technology Strategist Alois Reitbauer discussed his vision for AI-managed production environments. The conversation followed Dynatrace’s acquisition of DevCycle, a feature-management platform. Reitbauer highlighted feature flags—long used in software development—as a critical safety mechanism in the age of agentic AI. Rather than allowing AI agents to rewrite and deploy code, Dynatrace envisions them operating within guardrails by adjusting configuration settings through feature flags. This approach limits risk while enabling faster, automated decision-making. Customers, Reitbauer noted, are increasingly comfortable with AI handling defined tasks under constraints, but not with agents making sweeping, unsupervised changes. By combining AI with controlled configuration tools, Dynatrace aims to create a safer path toward truly autonomous operations.

    Learn more from The New Stack about the latest in progressive delivery:
    - Why You Can’t Build AI Without Progressive Delivery
    - Continuous Delivery: Gold Standard for Software Development
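The guardrail Reitbauer describes, agents adjusting configuration rather than code, can be sketched as a whitelist of agent-adjustable settings where every proposed value is clamped to a safe range. The setting names and ranges are invented for illustration, not Dynatrace's or DevCycle's actual configuration model.

```python
# Hypothetical bounded configuration surface for an AI agent.
# Setting names and safe ranges are assumptions for this sketch.

SAFE_RANGES = {
    "cache_ttl_seconds": (30, 3600),
    "sampling_rate": (0.01, 1.0),
}

config = {"cache_ttl_seconds": 300, "sampling_rate": 0.5}

def agent_set(key: str, value: float) -> float:
    """Apply an agent-proposed change only within the guardrails."""
    if key not in SAFE_RANGES:
        raise PermissionError(f"{key!r} is not agent-adjustable")
    lo, hi = SAFE_RANGES[key]
    config[key] = max(lo, min(hi, value))   # clamp into the safe range
    return config[key]

print(agent_set("cache_ttl_seconds", 10_000))   # clamped to 3600
```

The worst an agent can do here is push a setting to the edge of its approved range; rewriting and deploying code is simply not part of its interface.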

    23 min
4.3 out of 5 (31 Ratings)

