The New Stack Podcast

The New Stack

The New Stack Podcast is all about the developers, software engineers and operations people who build at-scale architectures that change the way we develop and deploy software. For more content from The New Stack, subscribe on YouTube at: https://www.youtube.com/c/TheNewStack

  1. 7H AGO

    Why the Linux Foundation adopted MCP, with Jim Zemlin and Mazin Gilbert

    Agentic AI is advancing rapidly, with open source projects racing to keep pace with real-world deployment. To accelerate progress, the Linux Foundation consolidated key technologies—Model Context Protocol (MCP), Goose, and AGENTS.md—under the newly formed Agentic AI Foundation (AAIF) in late 2025. At the MCP Dev Summit in New York City, Linux Foundation CEO Jim Zemlin and newly appointed AAIF executive director Mazin Gilbert discussed this transition. Zemlin explained that leading both organizations was unsustainable, prompting a careful search for a leader with both technical expertise and collaborative leadership skills. Gilbert now takes on the challenge of guiding AAIF as it shapes the emerging agentic AI ecosystem. While the foundation currently oversees three projects, its broader mission involves defining the future architecture of agent-driven systems—deciding what to build, when, and why. These decisions will influence the trajectory of open source AI development. The conversation also highlights the importance of open collaboration, funding dynamics, and early adopters in shaping the agentic stack’s evolution.

    Learn more from The New Stack about the latest in open source projects and The Linux Foundation:
    - Anthropic Donates the MCP Protocol to the Agentic AI Foundation
    - SAFE-MCP, a Community-Built Framework for AI Agent Security
    - Google Donates the Agent2Agent Protocol to the Linux Foundation

    Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

    33 min
  2. 4D AGO

    Fresh data has us asking, does AI demand Kubernetes?

    Kubernetes is rapidly emerging as the de facto operating system for AI, with two-thirds of organizations using it for generative AI inference and 82% adopting it in production. Its ecosystem — including tools like Kubeflow — enables organizations to build, scale, and retain control of AI systems through open, community-driven infrastructure. Bob Killen of CNCF and Liam Bollmann-Dodd of SlashData shared insights from recent reports showing that AI success still hinges on strong engineering fundamentals—especially internal developer platforms and overall developer experience. While AI-generated code accelerates development, it shifts bottlenecks to DevOps, reliability, and security, increasing operational complexity. As a result, operator experience and well-defined guardrails have become critical to safely scaling AI. These controls help constrain both human and AI developers, reducing risk while enabling speed. At the same time, organizations are evolving team structures, expanding platform engineering groups to support internal users more effectively. Despite growing complexity, the core lesson remains consistent: open source innovation thrives on people, processes, and collaboration as much as on technology itself.

    Learn more from The New Stack about the latest in Kubernetes and its emergence as an operating system for AI:
    - Kubernetes and AI: Are They a Fit?
    - How AI Is Pushing Kubernetes Storage Beyond Its Limits
    - Kubernetes and AI Are Shaping the Next Generation of Platforms

    Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

    23 min
  3. 6D AGO

    Cut AI token usage by 96%? Here’s how AWS Strands Agents does it.

    In this episode of The New Stack Makers, AWS developer advocate Morgan Willis demonstrates Strands Agents, an open source agentic framework that has seen rapid adoption since its launch. Using a simple accounting API, she walks through three approaches to retrieving a customer’s latest invoice, highlighting how design choices dramatically impact efficiency. The initial method maps each API endpoint to a separate tool, requiring five chained calls and consuming about 52,000 tokens. By shifting to intent-based tools—focused on outcomes rather than individual data operations—the same task is completed in a single call using just 2,000 tokens, improving both efficiency and reasoning. In a third iteration, tools are hosted on a remote MCP server via AWS Agent Core Gateway, with semantic search limiting the agent’s toolset to only what’s relevant per query, further reducing token usage. Willis emphasizes that narrowly scoped agents outperform general-purpose ones, delivering better speed, accuracy, and context efficiency. Designing smaller, specialized agents with tailored tools is key as tool ecosystems expand.

    Learn more from The New Stack about the latest with Strands and MCP:
    - AWS Launches Its Take on an Open Source AI Agents SDK
    - What Is MCP? Game Changer or Just More Hype?
    - MCP’s biggest growing pains for production use will soon be solved

    Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
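The endpoint-per-tool vs. intent-based-tool contrast can be sketched in framework-agnostic Python. This is a hypothetical stand-in, not the actual Strands Agents demo code: the accounting API, tool names, and data are invented to show why one intent-shaped call replaces a chain of endpoint calls.

```python
# Hypothetical mock of an accounting API (stand-in for the episode's demo).
CUSTOMERS = {"Acme Corp": "c-001"}
ACCOUNTS = {"c-001": ["a-100"]}
INVOICES = {"a-100": [
    {"id": "inv-7", "date": "2026-01-15", "total": 1200.0},
    {"id": "inv-9", "date": "2026-03-02", "total": 950.0},
]}

# Design 1: one tool per endpoint. The agent must plan and chain the
# calls itself (find customer -> list accounts -> list invoices -> pick
# latest), spending tokens reasoning about every intermediate step.
def find_customer(name):        return CUSTOMERS[name]
def list_accounts(customer_id): return ACCOUNTS[customer_id]
def list_invoices(account_id):  return INVOICES[account_id]

def agent_with_endpoint_tools(name):
    cid = find_customer(name)                       # call 1
    invoices = []
    for acct in list_accounts(cid):                 # call 2
        invoices.extend(list_invoices(acct))        # calls 3..n
    return max(invoices, key=lambda i: i["date"])   # agent-side reasoning

# Design 2: one intent-based tool. The chaining moves inside the tool,
# so the agent makes a single call that matches the user's actual goal.
def get_latest_invoice(name):
    cid = CUSTOMERS[name]
    invoices = [i for a in ACCOUNTS[cid] for i in INVOICES[a]]
    return max(invoices, key=lambda i: i["date"])

def agent_with_intent_tool(name):
    return get_latest_invoice(name)                 # one call, one result

print(agent_with_endpoint_tools("Acme Corp")["id"])  # inv-9
print(agent_with_intent_tool("Acme Corp")["id"])     # inv-9
```

Both designs return the same invoice; the difference is how many tool invocations (and hence how much reasoning context) the agent must carry to get there.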

    28 min
  4. APR 28

    Why Broadcom is betting on a private cloud comeback

    Broadcom’s VMware Cloud Foundation (VCF) is evolving from a turnkey infrastructure stack into a modern application platform, balancing simplicity with the flexibility demanded by Kubernetes-driven environments. At KubeCon + CloudNativeCon Europe 2026, Broadcom leaders highlighted how VCF is adapting to support platform engineering teams, cloud-native workloads, and large-scale operations. A key industry shift is the return to private cloud, driven by data sovereignty concerns and the growing impact of AI. Enterprises are bringing workloads back on-premises while still expecting a cloud-like operating model. Broadcom is responding by prioritizing on-prem stability and aligning closely with open source, reflecting its strong contributions to Kubernetes and related projects. Kubernetes is no longer a bolt-on but the core control plane of VCF, enabling unified management of compute, storage, and networking through declarative APIs. At the same time, the distinction between virtual machines and containers is fading. The focus is shifting toward application-centric platforms, where developers interact through consistent abstractions, allowing infrastructure to be provisioned seamlessly behind the scenes.

    Learn more from The New Stack about the latest from Broadcom:
    - Broadcom ‘Doubles Down’ on Open Source, Donates Kubernetes Tool to CNCF
    - Why Broadcom gave Velero to the CNCF Sandbox — and what it means for Kubernetes data protection

    Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

    24 min
  5. APR 25

    Why Broadcom gave Velero to the CNCF Sandbox — and what it means for Kubernetes data protection

    Broadcom continues to expand its role as a major contributor to cloud-native open source, particularly within the Cloud Native Computing Foundation (CNCF) ecosystem. Its recent donation of Velero—originally developed by VMware—to the CNCF Sandbox reflects a strategic move to foster broader community trust and collaboration. By shifting governance away from vendor control, Broadcom aims to position Velero as a truly community-driven data protection standard for Kubernetes environments, encouraging wider adoption and contribution. At the same time, the company is reinforcing its position as a full-stack Kubernetes provider across both cloud-native and private cloud environments. Despite Kubernetes’ dominance, many organizations still struggle with its complexity. Broadcom is addressing this by focusing on lifecycle management, long-term support, and deep integration with existing infrastructure like vSphere. In a podcast recorded at KubeCon + CloudNativeCon Europe 2026, Dilpreet Bindra emphasized that open source success comes not just from code contributions, but also from relinquishing control to empower the broader ecosystem and drive sustainable innovation.

    Learn more from The New Stack about the latest developments around Velero:
    - Broadcom donates Velero to CNCF — and it could reshape how Kubernetes users handle backup and disaster recovery
    - How AI Search Is Supporting Artistic Freedom

    Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

    23 min
  6. APR 24

    Why AI engineering needs old-school discipline

    In this episode of The New Stack Makers, Nimisha Asthagiri of Thoughtworks explores why many AI initiatives stall between proof of concept and production. A key issue is that organizations focus on speed—asking how to move faster—rather than rethinking what new capabilities AI actually enables. Successful companies take a systems-thinking approach, investing in organizational literacy and aligning teams around meaningful use cases instead of retrofitting AI into existing workflows. Asthagiri highlights that core engineering practices are returning to prominence. As AI-generated code increases, so does the risk of “cognitive debt,” where developers lose understanding of their own systems. To counter this, teams are reviving fundamentals like test-driven development, mutation testing, observability, and zero-trust security, especially as autonomous agents contribute to production code. She also introduces the concept of “dark code”—AI-generated code that may never be used—and argues for more intentional lifecycle management, including ephemeral code. Ultimately, the focus shifts from code itself to specifications, context management, and disciplined engineering practices.

    Learn more from The New Stack about the latest in systems-thinking approaches:
    - System Two AI: The Dawn of Reasoning Agents in Business
    - A practical systems engineering guide: Architecting AI-ready infrastructure for the agentic era

    Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

    24 min
  7. APR 23

    Jim Bugwadia on why finding a Kubernetes problem is only half the battle for Kyverno users

    Graduating within the CNCF marks a major milestone for an open source project, signaling not just technical maturity but strong governance, security practices, and widespread adoption. Kyverno, a Kubernetes policy engine, reached this stage after five years — becoming only the 35th project to progress from sandbox to graduation. As co-founder Jim Bugwadia explains, incubation reflects production readiness and adoption, while graduation validates the project’s long-term sustainability and governance rigor. Originally built to help teams manage Kubernetes complexity through declarative policies, Kyverno has evolved alongside the ecosystem. Its shift to the Kubernetes-native Common Expression Language (CEL) and rising demand driven by AI workloads have expanded its user base beyond regulated industries to mainstream enterprises. With over three billion downloads, it underscores the growing need for automated policy enforcement across development, security, and operations teams. Commercially, Nirmata maintains a clear boundary between open source and enterprise offerings, focusing on remediation and advanced management. While only 2–5% of users convert, that small percentage becomes meaningful at Kyverno’s scale.

    Learn more from The New Stack about the latest on Kyverno:
    - Simplify Kubernetes Security With Kyverno and OPA Gatekeeper
    - Using the Kyverno CLI to Write Policy Test Cases

    Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
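The kind of declarative check a policy engine like Kyverno enforces at admission time can be sketched generically. This is a simplified, hypothetical illustration in plain Python — not Kyverno's actual policy format or CEL syntax — showing the pattern: a declarative rule (an image registry allow-list) evaluated against an incoming pod spec, producing either admission or a list of violations.

```python
# Hypothetical allow-list; the registry name is invented for this sketch.
ALLOWED_REGISTRIES = ("registry.example.com/",)

def validate_pod(pod):
    """Return a list of violations; an empty list means the pod is admitted.

    Mirrors the shape of an admission-time validate rule: iterate the
    containers in the pod spec and flag any image pulled from a registry
    outside the allow-list.
    """
    violations = []
    for c in pod["spec"]["containers"]:
        if not c["image"].startswith(ALLOWED_REGISTRIES):
            violations.append(
                f"{c['name']}: image {c['image']} is not from an allowed registry"
            )
    return violations

# A compliant pod and a non-compliant one (simplified pod specs).
good_pod = {"spec": {"containers": [
    {"name": "app", "image": "registry.example.com/team/app:1.2"},
]}}
bad_pod = {"spec": {"containers": [
    {"name": "app", "image": "docker.io/library/nginx:latest"},
]}}

print(validate_pod(good_pod))       # [] -> admitted
print(len(validate_pod(bad_pod)))   # 1  -> denied with one violation
```

In a real cluster this logic lives in an admission webhook and the rule is written declaratively (in Kyverno's case, increasingly in CEL), so policy authors never write the evaluation loop themselves.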

    23 min

