Pop Goes the Stack

F5

Explore the evolving world of application delivery and security. Each episode will dive into technologies shaping the future of operations, analyze emerging trends, and discuss the impacts of innovations on the tech stack.

  1. 1 DAY AGO

    Measuring what matters: Observability for agents

    Agents break the old rules of observability. Latency, throughput, and error rates still matter, but once software starts making decisions and taking actions on someone else’s behalf, the real question becomes: is it doing the right thing, and is it doing it for the right reasons?

    In this episode of Pop Goes the Stack, Lori MacVittie and Joel “OpenClaw” Moses are joined by observability expert Chris Hain to unpack what changes when systems become agentic. Instead of a single prompt-response interaction, you get decision chains that branch, loop, call tools, and evolve over time. A system can “succeed” operationally while still being wrong, expensive, or misaligned with intent.

    Chris argues you don’t have to throw away what already works. Distributed tracing still applies, but now each agent step becomes a span, decorated with richer metadata like model identity, tool calls, token usage, prompts, and cost. The discussion also dives into why standardization matters, including OpenTelemetry and emerging semantic conventions for generative and agentic AI, and why auto-instrumentation approaches like eBPF become critical when agents generate code that has no built-in telemetry.

    Joel adds a new set of metrics that feel uncomfortably necessary: decision loops per task, drift in tool-call chains, human override frequency, and the cost and token patterns that signal something has changed. The group also tackles the awkward feedback loop of using agents to make observability actionable, while acknowledging the risk of agents optimizing the dashboard instead of the system.

    If you’re building agentic workflows, this episode is a practical guide to why “failed successfully” is now a real production state, and why instrumenting for correctness and intent alignment is the next observability frontier.
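    The span-per-step idea the episode describes can be sketched without any tracing library: each agent step becomes a span decorated with agent-specific metadata (model identity, tool calls, token usage, cost), and cost rolls up across the branching decision chain. A minimal sketch only; the attribute names here are illustrative, not an official OpenTelemetry semantic convention.

```python
from dataclasses import dataclass, field

@dataclass
class AgentSpan:
    """One span per agent step. Attribute names are illustrative,
    not an official OpenTelemetry semantic convention."""
    name: str
    model: str
    tool_calls: list = field(default_factory=list)
    prompt_tokens: int = 0
    completion_tokens: int = 0
    cost_usd: float = 0.0
    children: list = field(default_factory=list)

def total_cost(span: AgentSpan) -> float:
    # Roll cost up across the decision chain, the way a trace backend
    # would aggregate child spans under a root.
    return span.cost_usd + sum(total_cost(c) for c in span.children)

# A three-step chain: plan -> tool call -> summarize.
root = AgentSpan("plan", model="model-a", prompt_tokens=120, cost_usd=0.002)
root.children.append(
    AgentSpan("tool:search", model="model-a", tool_calls=["search"], cost_usd=0.001))
root.children.append(
    AgentSpan("summarize", model="model-b", prompt_tokens=400, cost_usd=0.004))

print(f"chain cost: ${total_cost(root):.3f}")  # chain cost: $0.007
```

    Real instrumentation would emit these as OpenTelemetry spans, but even this toy shape shows why per-step cost and tool-call metadata make drift and runaway loops visible.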

    20 min
  2. 21 APR

    Alien autopsy of LLMs: Constitutions, deception, guardrails

    Why do researchers keep describing large language models like aliens? Because in enterprise environments, they often behave like something we didn’t build and can’t fully explain. In this episode of Pop Goes the Stack, Lori MacVittie and Joel Moses are joined by F5's Ken Arora to unpack the “alien autopsy” metaphor and what it reveals about operating LLMs as production systems. They dig into the uncomfortable reality that traditional software offers a blueprint and a causal chain. LLMs don’t. You can probe them, measure them, and red-team them, but you can’t reliably point to a specific internal “part” that generated a decision. That becomes more than philosophical when you need operational answers like why it did something, whether it will repeat it, and how an attacker might steer it. Ken reframes model evolution as moving from a naive, precocious child to a mischievous, goal-driven teenager, including examples where models appear to scheme around constraints or optimize for “keeping the user happy” over correctness. The group also breaks down constitutional AI and why principle-based “be helpful” guidance can collide with enterprise goals, policies, and risk tolerance, especially as agentic systems move from generating outputs to taking actions. A key warning lands near the end: don’t rely on the model to explain itself. These systems can produce plausible narratives that aren’t verifiable, and may behave differently when they know they’re being evaluated. The practical takeaway is straightforward: treat LLMs as risk-managed systems, invest in observability and red teaming, and build defense-in-depth guardrails that assume the agent will try to bypass controls.

    21 min
  3. 14 APR

    Why Prompt Filters Fail Against LLM Attacks

    Prompt injection has been the headline security problem for the last year, but have we been guarding the wrong layer? Lori MacVittie is joined by cohost Joel Moses and architect Elijah Zupancic to break down why many “prompt filters” miss the real execution surface: models don’t process words, they process tokens, and attackers are increasingly targeting the tokenizer to bypass defenses. Using the research behind Adversarial Tokenization and TokenBreak, they explain how the same text can be segmented into different token paths, changing what the model actually “sees” and how it behaves. That creates a split-brain security challenge across text, tokens, and state, where protecting only the natural-language layer leaves multiple routes around your guardrails. TokenBreak, in particular, highlights how attackers can brute-force and classify responses to infer tokenization behavior, turning the model into its own oracle. So how can you protect models? Hear why layered security is the only viable approach: narrowing accepted input surfaces, adding language detection to reduce the search space, limiting automation and abuse patterns, and moving toward token-aware inspection and policy enforcement at the tokenizer boundary. But there are tradeoffs when guardrails sit outside the model. Tune in to learn whether you’re already downstream of the attack, and what you can do about it if you are.

    Read Adversarial Tokenization → https://arxiv.org/abs/2503.02174
    Read TokenBreak: Bypassing Text Classification Models Through Token Manipulation → https://arxiv.org/abs/2506.07948
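    The token-path problem the episode describes can be illustrated with a deliberately tiny example. Assuming a hypothetical six-entry vocabulary, the same string admits more than one valid segmentation, so a filter that matches only the canonical token misses the alternate path. This is a toy sketch of the effect, not the actual attacks from the papers.

```python
# Toy vocabulary: real tokenizers have ~100k entries, but the effect is the same.
VOCAB = {"ignore", "ig", "nore", "previous", "prev", "ious"}

def all_segmentations(text, vocab):
    # Enumerate every way to split `text` into vocabulary tokens.
    if not text:
        return [[]]
    paths = []
    for i in range(1, len(text) + 1):
        head = text[:i]
        if head in vocab:
            for tail in all_segmentations(text[i:], vocab):
                paths.append([head] + tail)
    return paths

paths = all_segmentations("ignore", VOCAB)
print(paths)  # [['ig', 'nore'], ['ignore']]

# A filter that blocks only the canonical token "ignore" never sees the
# ['ig', 'nore'] path, even though the surface text is identical.
canonical_hits = [p for p in paths if "ignore" in p]
print(f"{len(canonical_hits)} of {len(paths)} paths caught")
```

    This is why the episode argues for inspection at the tokenizer boundary: the natural-language layer cannot distinguish the two paths at all.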

    22 min
  4. 7 APR

    OpenClaw: Multi-agent autonomy, secrets, and blast radius

    OpenClaw is what happens when the industry looks at autonomous agents and decides they should have more autonomy, more persistence, and more chances to surprise you. In this episode of Pop Goes the Stack, Lori MacVittie hosts a wide-ranging discussion with F5's Joel Moses, Jason Rahm, and Kunal Anand on what makes OpenClaw different from the usual “AI assistant” narrative: agents that coordinate, remember, adapt, and operate in shared spaces where emergent behavior is a feature, not a bug. Joel shares a grounded example of using OpenClaw locally for home automation, keeping the blast radius contained while still seeing the upside of continuous, autonomous decision-making. From there, the group digs into what breaks when you move this model toward enterprise operations: persistence of secrets, unclear approval workflows, weak auditability, limited rollback, and the sheer difficulty of diagnosing why an agent took an action after weeks of chained decisions. Kunal expands the conversation to the ecosystem forming around OpenClaw, including experimental offshoots and the uncomfortable reality that “just read the code” doesn’t scale when modern projects are moving at AI-assisted commit velocity. Jason adds a longer lens, drawing a parallel to Ray Bradbury’s "There Will Come Soft Rains" as a reminder that autonomous systems can keep running even when humans stop being in the loop, raising questions beyond tech into how we relate to each other. Tune in for the group's practical takeaways as this technology makes its way toward the enterprise.

    Read Kunal's blog diving into mechanistic interpretability: https://kunalanand.com/2026-03-19-your-token-is-a-wonderland/
    Read "There Will Come Soft Rains" by Ray Bradbury: https://www.btboces.org/Downloads/7_There%20Will%20Come%20Soft%20Rains%20by%20Ray%20Bradbury.pdf

    Recorded March 2nd, 2026

    27 min
  5. 31 MAR

    CISO Hot Takes on MCP, PQC, and Data Center Attacks

    Recorded live at F5 AppWorld 2026 in Las Vegas, this episode of Pop Goes the Stack puts Field CISO Chuck Herrin in the hot seat for a fast-moving conversation on what security leaders are really dealing with right now. Joel Moses kicks things off with the agentic AI debate: if teams bypass structured tool interfaces and let agents “just use the CLI,” what happens to authentication, observability, and predictability when autonomy accelerates faster than humans can keep up? From there, Chuck makes the case that fear is a poor long-term strategy for running a business, even when the threats are real. He unpacks the tension he’s seeing across organizations, where executives are driven by FOMO while employees wrestle with FOBO (fear of becoming obsolete), and argues that companies get results when they redesign how they operate rather than bolting AI onto old structures. The conversation shifts to post-quantum cryptography and why it still isn’t getting the attention it deserves. Chuck explains how “future tech” framing, short CISO tenures, and the pressure of today’s fires keep PQC from becoming a priority, even as harvest-now-decrypt-later attacks make it a present-day risk. His advice is practical: assign clear ownership, treat the effort like business continuity planning, and include your supply chain in the readiness scope. Finally, they touch on a new class of concern for CISOs: kinetic targeting of data center infrastructure, and how sovereignty requirements can constrain options when physical risk rises. If you’re navigating AI adoption, cryptographic transition, or resilience planning, tune in for a grounded perspective from the show floor.

    17 min
  6. 24 MAR

    AI Red Teaming in Practice: Scores, guardrails, auto-remediation

    AI in production isn’t just another feature to ship. It’s a non-deterministic system that can be socially engineered, fuzzed, and pushed into failure states you won’t find with traditional testing. Recorded live in Las Vegas at F5’s AppWorld 2026, this episode of Pop Goes the Stack brings Joel Moses together with Jimmy White, F5’s VP of AI Security (via the CalypsoAI acquisition), for a practical look at what AI red teaming actually is and how it works when the attacker is an agent.

    Jimmy reframes genAI security as a permutation problem: if there are countless prompt combinations that could unlock sensitive data or trigger unsafe actions, you need genAI-powered red team agents to explore those paths at scale. The discussion covers custom intents, agentic “fingerprints” that reveal not just what was compromised but how it happened, and why that “how” is the key to building protections you can trust.

    You’ll also hear how scoring and reporting translate into guardrails, how auto-remediation can be validated with positive and negative test cases before a human publishes changes, and why relying on models to internalize safety isn’t a realistic plan. The conversation closes on agentic AI risk, where tools and permissions matter more than the model’s reasoning, and introduces “thought injection” as a way to redirect unsafe actions without breaking the agent loop. If you’re building AI apps, deploying MCP-connected systems, or worrying about agents becoming tomorrow’s service accounts, this episode gives you a sharper playbook for testing, governance, and resilience.
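    The permutation framing and the validate-before-publish loop can be sketched in a few lines. Everything here is hypothetical: the "payroll" rule, the phrase fragments, and both guardrail functions are stand-ins for a real red-team harness, not CalypsoAI's implementation.

```python
import itertools

def guardrail_v1(prompt: str) -> bool:
    # Returns True when the prompt is allowed through. Hypothetical rule:
    # block anything that names the sensitive "payroll" table.
    return "payroll" not in prompt.lower()

# Red-team stand-in: enumerate phrase permutations instead of a handful
# of hand-written probes.
fragments = ["show", "the", "pay roll", "payroll", "table"]
attacks = {" ".join(p) for p in itertools.permutations(fragments, 3)}

# Any permutation that uses the split spelling and slips past v1 is a finding.
findings = sorted(a for a in attacks if "pay roll" in a and guardrail_v1(a))
print(len(findings), "bypasses found")

# Candidate auto-remediation: normalize whitespace before matching.
def guardrail_v2(prompt: str) -> bool:
    return "payroll" not in prompt.lower().replace(" ", "")

# Validate with negative cases (every finding now blocked) and a positive
# case (a benign prompt still passes) before a human publishes the change.
assert all(not guardrail_v2(a) for a in findings)
assert guardrail_v2("show the weather report")
```

    Even at toy scale, enumeration surfaces bypasses a single hand-written probe would miss, which is the episode's argument for agent-driven exploration.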

    27 min
  7. 17 MAR

    Agent Identity Crisis: Access, audit, and “soul.md”

    Coming to you from the AppWorld show floor, Joel Moses and guest co-pilot Oscar Spencer cut through the conference polish to tackle a problem that’s quickly becoming unavoidable: identity in the era of agentic AI. When software can act on your behalf, take initiative, and even spawn other agents, “who did what” stops being a philosophical question and becomes an audit, security, and governance requirement. Joined by F5's Chief Product Officer, Kunal Anand, the conversation digs into why traditional, point-in-time authentication and authorization models don’t map cleanly to agents that operate over time, across contexts, and through chains of delegation. They explore the risks of transitive identity, the expanding blast radius when Agent A creates Agents B and C, and the uncomfortable reality that agents can end up holding the same kinds of long-lived secrets that have historically caused production incidents. Along the way, they discuss emerging ideas like soul.md files that define an agent’s purpose and constraints, and the concept of a dedicated “credential agent” that acts as a gatekeeper for secrets access. The episode also gets practical about what breaks in the real world, including a cautionary story about an agent corrupting a long-running notes database, underscoring why backups, guardrails, and careful rollout matter. If you’re building or adopting agents, this is a timely look at why identity can’t stay static, why service-account thinking is coming for every agent, and what it will take to keep autonomy from turning into the next incident report.

    21 min
  8. 10 MAR

    VibeOps: Guardrailed agents for deterministic production

    Ops used to be a world of YAML, caffeine, and careful deploy rituals. Now it’s probabilistic models, token-based cost surprises, and reliability questions that sound more like, “Will the model mean the same thing tomorrow?” In this episode of Pop Goes the Stack, Lori MacVittie and Joel Moses dig into what happens when production expectations collide with non-deterministic AI systems, and why the next phase of automation needs more than a chat interface and optimism.

    They’re joined by John Capobianco from Itential to explore “VibeOps,” an approach to conversational operations that doesn’t throw away deterministic workflows, but connects them to agent reasoning, tool calling, and modern protocols like MCP. The discussion breaks down agent “skills” as a way to describe what an agent can do, constrain what it can’t, and build guardrails in a format teams can manage.

    From red-teaming experiments to real-world concerns about failure rates at scale, the conversation stays grounded in what it takes to make AI useful in production: external knowledge, policy alignment, composable skills, and a maturity path from lab-only to read-only to supervised execution, and only then toward autonomy. The takeaway is clear: conversational ops can accelerate work, improve documentation and ticket quality, and reduce toil, but governance and accountability still matter. If you’re navigating AIOps, agent adoption, or the post-MCP tooling wave, this episode offers a realistic starting point.

    25 min
