The DevSecOps Talks Podcast

Mattias Hemmingsson, Julien Bisconti and Andrey Devyatkin

This is the show by and for DevSecOps practitioners who are trying to survive information overload, cut through marketing nonsense, make the right technology bets, help their organizations deliver value, and, last but not least, have some fun. Tune in for talks about technology, ways of working, and news from DevSecOps. This show is not sponsored by any technology vendor and tries to be as unbiased as possible. We talk like no one is listening! For good or bad :) For more info, show notes, and discussion of past and upcoming episodes visit devsecops.fm

  1. 1 APR.

    #96 - Keeping Platforms Simple and Fast with Joachim Hill-Grannec

    This episode with Joachim Hill-Grannec asks: How do platforms bloat, and how do you keep them simple and fast with trunk-based dev and small batches? Which metrics prove it works—cycle time, uptime, or developer experience? Can security act as a partner that speeds delivery instead of a gate?

    We are always happy to answer any questions, hear suggestions for new episodes, or hear from you, our listeners: DevSecOps Talks podcast LinkedIn page, DevSecOps Talks podcast website, DevSecOps Talks podcast YouTube channel.

    Summary

    In this episode of DevSecOps Talks, Mattias speaks with Joachim Hill-Grannec, co-founder of Peltek, a boutique consulting firm specializing in high-availability, cloud-native infrastructure. Following up on a previous episode where Steve discussed cleaning up bloated platforms, Mattias and Joachim dig into why platforms get bloated in the first place and how platform teams should think when building from scratch. Their conversation spans cloud provider preferences, the primacy of cycle time, the danger of adding process in response to failure, and a strong argument for treating security and quality as enablers rather than gatekeepers.

    Key Topics

    Platform Teams Should Serve Delivery Teams

    Joachim frames the core question of platform engineering around who the platform is actually for. His answer is clear: the delivery teams are the client. Platform engineers should focus on making it easier for developers to ship products, not on making their own work more convenient. He connects this directly to platform bloat. In his experience, many platforms grow uncontrollably because platform engineers keep adding tools that help the platform team itself: "Look, I spent this week to make my job this much faster."
    But Joachim pushes back on this instinct — the platform team is an amplifier for the organization, and every addition should be evaluated by whether it helps a product get to production faster and gives developers better visibility into what they are working on.

    Choosing a Cloud Provider: Preferences vs. Reality

    The conversation briefly explores cloud provider choices. Joachim says GCP is his personal favorite from a developer perspective because of cleaner APIs and faster response times, though he acknowledges Google's tendency to discontinue services unexpectedly. He describes AWS as the market workhorse — mature, solid, and widely adopted, comparing it to "the Java of the land." Azure gets the coldest reception; both acknowledge it has improved over time, but Joachim says he still struggles whenever he is forced to use it. They observe that cloud choices are frequently made outside engineering. Finance teams, investors, and existing enterprise agreements often drive the decision more than technical fit. Joachim notes a common pairing: organizations using Google Workspace for productivity but AWS for cloud infrastructure, partly because the Entra ID (formerly Azure AD) integration with AWS Identity Center works more smoothly via SCIM than the equivalent Google Workspace setup, which requires a Lambda function to sync groups.

    Measuring Platform Success: Cycle Time Above All

    When Mattias asks how a team can tell whether a platform is actually successful, Joachim separates subjective and objective measures. On the subjective side, he points to developer happiness and developer experience (DX). Feedback from delivery teams matters, even if surveys are imperfect. On the objective side, his favorite metric is cycle time — specifically, the time from when code is ready to when it reaches production. He also mentions uptime and availability, but keeps returning to cycle time as the clearest indicator that a platform is helping teams deliver faster.
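Joachim's cycle-time metric is straightforward to compute once deploy events are recorded. A minimal Python sketch; the event shape and the timestamps are invented for illustration:

```python
# Cycle time: hours from "code ready" to "running in production".
# The (ready_at, live_at) pairs below are made-up sample data.
from datetime import datetime
from statistics import median

def cycle_times_hours(deploys):
    """deploys: list of (ready_at, live_at) ISO-8601 timestamp pairs."""
    out = []
    for ready, live in deploys:
        t0 = datetime.fromisoformat(ready)
        t1 = datetime.fromisoformat(live)
        out.append((t1 - t0).total_seconds() / 3600)
    return out

deploys = [
    ("2025-04-01T09:00:00", "2025-04-01T09:45:00"),
    ("2025-04-01T13:10:00", "2025-04-01T16:10:00"),
    ("2025-04-02T08:00:00", "2025-04-02T08:30:00"),
]
hours = cycle_times_hours(deploys)
print(f"median cycle time: {median(hours):.2f}h")
```

Tracking the median (or a percentile) rather than the mean keeps one slow release from masking the typical developer experience.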
    This aligns with DORA research, which has consistently shown that deployment frequency and lead time for changes are strong predictors of overall software delivery performance.

    Start With a Highway to Production

    A major theme of the episode is that platforms should begin with the shortest possible route to production. Mattias calls this a "highway to production," and Joachim strongly agrees. For greenfield projects, Joachim favors extremely fast delivery at first — commit goes to production, commit goes to production — even with minimal process. As usage and risk increase, teams can gradually add automation, testing, and safeguards. The critical thing is to keep the flow and then ask "how do we make those steps faster?" as you add them, rather than letting each new step slow down the pipeline unchallenged. He also makes a strong case for tags and promotions over branch-based deployment, noting his instinctive reaction when someone asks "which branch are we deploying from?" is: "No branches — tags and promotions."

    The Trap of Slowing Down After Failure

    Joachim warns about a common and dangerous pattern: when a bug reaches production, the natural organizational reaction is not to fix the pipeline, but to add gates. A QA team does a full pass, a security audit is inserted, a manual review step appears. Each gate slows delivery, which leads to larger batches, which increases risk, which triggers even more controls. He sees this as a vicious cycle. Organizations that respond to incidents by slowing delivery actually get worse security, worse quality, and worse throughput over time. He references a study — likely the research behind the book Accelerate by Nicole Forsgren, Jez Humble, and Gene Kim — showing that faster delivery correlates with better security and quality outcomes.
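The "tags and promotions" approach Joachim argues for above can be sketched as a small helper: every release is an immutable tag, and promotion simply re-points an environment at one. The tag format and function names are illustrative assumptions, not from the episode:

```python
# Tag-based promotion instead of branch-based deployment: pick the
# highest release tag and point environments at it. The v<x>.<y>.<z>
# tag scheme is an assumption for this example.
import re

TAG_RE = re.compile(r"^v(\d+)\.(\d+)\.(\d+)$")

def latest_release_tag(tags):
    """Pick the highest semver release tag; ignore non-release tags."""
    releases = []
    for tag in tags:
        m = TAG_RE.match(tag)
        if m:
            releases.append((tuple(int(x) for x in m.groups()), tag))
    return max(releases)[1] if releases else None

def promote(env, tag, state):
    """Promotion = re-pointing an environment at an already-built tag."""
    state[env] = tag
    return state

state = {}
tag = latest_release_tag(["v1.2.0", "v1.10.1", "feature-x", "v1.9.9"])
promote("staging", tag, state)
promote("production", tag, state)
print(state)
```

In a real pipeline the "state" would be a GitOps manifest or deployment record; the point is that no environment ever tracks a branch, only a tag that has already been built and tested.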
    The organizations adding Engineering Review Boards (ERBs) and Architecture Review Boards (ARBs) in the name of safety often do not measure the actual impact, so they never see that the controls are making things worse. Mattias connects this to AI-assisted development, where developers can now produce changes faster than ever. If the pipeline cannot keep up, the pile of unreleased changes grows, making each release riskier.

    Getting Buy-In: Start With Small Experiments

    Joachim does not recommend that a slow, process-heavy organization throw everything out overnight. Instead, he suggests starting with small experiments. Code promotions are a good entry point: teams can start producing artifacts more rapidly without changing how those artifacts are deployed. Once that works, the conversation shifts to delivering those artifacts faster. He finds starting on the artifact pipeline side produces quicker wins and more organizational buy-in than starting with the platform deployment side, which tends to be more intertwined and higher-risk to change.

    Guiding Principles Over a Rigid Golden Path

    Mattias questions the idea of a single "golden path," saying the term implies one rigid way of working. Joachim leans toward guiding principles instead. His strongest principle is simplicity — specifically, simplicity to understand, not necessarily simplicity to create. He references Rich Hickey's influential talk Simple Made Easy (from Strange Loop 2011), which distinguishes between things that are simple (not intertwined) and things that are easy (familiar or close at hand). Creating simple systems is hard work, but the payoff is systems that are easy to reason about, easy to change, and easy to secure. His second guiding principle is replaceability. When evaluating any tool in the platform, he asks: "How hard would it be to yank this out and replace it?" If swapping a component would be extremely difficult, that is a smell — it means the system has become too intertwined.
    Even with a tool as established as Argo CD, his team thinks about what it would look like to switch it out.

    Tooling Choices and Platform Foundations

    Joachim outlines the patterns his team typically uses when building platforms, organized into two paths.

    Delivery pipeline (artifact creation):
    - Trunk-based development over GitFlow
    - Release tags and promotions rather than branch-based deployment
    - Containerization early in the pipeline
    - Release Please for automated release management and changelogs
    - Renovate for dependency updates (used for production environment promotions from Helm charts and container images)

    Platform side (environment management):
    - Kubernetes-heavy, typically EKS on AWS
    - Karpenter for node scaling
    - AWS Load Balancer Controller only as a backing service for a separate ingress controller (not using ALB Ingress directly, due to its rough edges)
    - Argo CD for GitOps synchronization and deployment
    - Argo Image Updater for lower environments to pull latest images automatically
    - Helm for packaging, despite its learning curve

    He notes that NGINX Ingress Controller has been deprecated, so teams need to evaluate alternatives for their ingress layer.

    Developers Should Not Be Fully Shielded From Operations

    One of the more nuanced parts of the conversation is how much operational responsibility developers should have. Joachim rejects both extremes. He does not think every developer needs to know everything about infrastructure, but he has seen too many cases where developers completely isolated from runtime concerns make poor decisions — missing simple code changes that would make a system dramatically easier to deploy and operate. He advocates for transparency and collaboration. Platform repos should be open for anyone on the dev team to submit pull requests. When the platform team makes a change, they should pull in developers to work alongside them. This way, the delivery team gradually builds a deeper understanding of how the whole system works.
Joachim loves the open-source maintainer model applied inside organizations: platform teams are maintainers of their areas, but anyone in the organization should be able to introduce change. He warns against building custom CLIs or heavy abstractions that create dependencies — if a developer wants to do something the CLI does not support, the platform team becomes a bottleneck. Mattias adds that opening up the platform

    49 min
  2. 23 MAR.

    #95 - From Platform Theater to Golden Guardrails with Steve Wade

    Is your Kubernetes stack bloated, slow, and hard to explain? Steve Wade shares simple checks—the hiring treadmill, onboarding time, and the acronym test—to spot platform theater fast. What would a 30-day deletion sprint cut, save, and secure?

    Summary

    In this episode of DevSecOps Talks, Mattias and Paulina speak with Steve Wade, founder of Platform Fix, about why so many Kubernetes and platform initiatives become overcomplicated, expensive, and painful for developers. Steve has helped simplify over 50 cloud-native platforms and estimates he has removed around $100 million in complexity waste. The conversation covers how to spot a bloated platform, why "free" tools are never really free, how to systematically delete what you don't need, and why the best platform engineering is often about subtraction rather than addition.

    Key Topics

    Steve's Background: From Complexity Creator to Strategic Deleter

    Steve introduces himself as the founder of Platform Fix — the person companies call when their Kubernetes migration is 18 months in, millions over budget, and their best engineers are leaving. He has done this over 50 times, and he is candid about why it matters so much to him: he used to be this problem. Years ago, Steve led a migration that was supposed to take six months. Eighteen months later, the team had 70 microservices, three service meshes (they kept starting new ones without finishing the old), and monitoring tools that needed their own monitoring. Two senior engineers quit. The VP of Engineering gave Steve 90 days or the team would be replaced. Those 90 days changed everything. The team deleted roughly 50 of the 70 services, ripped out all the service meshes, and cut deployment time from three weeks of chaos to three days, consistently.
    Six months later, one of the engineers who had left came back. That experience became the foundation for Platform Fix. As Steve puts it: "While everyone's collecting cloud native tools like Pokemon cards, I'm trying to help teams figure out which ones to throw away and which ones to keep."

    Why Platform Complexity Happens

    Steve explains that organizations fall into a complexity trap by continuously adding tools without questioning whether they are actually needed. He describes walking into companies where the platform team spends 65–70% of their time explaining their own platform to the people using it. His verdict: "That's not a team, that's a help desk with infrastructure access." People inside the complexity normalize it. They cannot see the problem because they have been living in it for months or years. Steve identifies several drivers: conference-fueled recency bias (someone sees a shiny tool at KubeCon and adopts it without evaluating the need), resume-driven architecture (engineers choosing tools to pad their CVs), and a culture where everyone is trained to add but nobody asks "what if we remove something instead?" He illustrates the resume-driven pattern with a story from a 200-person fintech. A senior hire — "Mark" — proposed a full stack: Kubernetes, Istio, Argo, Crossplane, Backstage, Vault, Prometheus, Loki, Tempo, and more. The CTO approved it because "Spotify uses it, so it must be best practice." Eighteen months and $2.3 million later, six engineers were needed just to keep it running, developers waited weeks to deploy, and Mark left — with "led Kubernetes migration" on his CV. When Steve asked what Istio was actually solving, nobody could answer. It was costing around $250,000 to run, for a problem that could have been fixed with network policies. He also highlights a telling sign: he asked three people in the same company how many Kubernetes clusters they needed and got three completely different answers. "That's not a technical disagreement.
    That's a sign that nobody's aligned on what the platform is actually for."

    The AI Layer: Tool Fatigue Gets Worse

    Paulina observes that the same tool-sprawl pattern is now being repeated with AI tooling — an additional layer of fatigue on top of what already exists in the cloud-native space. Steve agrees and adds three dimensions to the AI complexity problem: choosing which LLM to use, learning how to write effective prompts, and figuring out who is accountable when AI-written code does not work as expected. Mattias notes that AI also enables anyone to build custom tools for their specific needs, which further expands the toolbox and potential for sprawl.

    How Leaders Can Spot a Bloated Platform

    One of the most practical segments is Steve's framework for helping leaders who are not hands-on with engineering identify platform bloat. He gives them three things to watch for:
    - The hiring treadmill: headcount keeps growing but shipping speed stays flat, because all new capacity is absorbed by maintenance.
    - The onboarding test: ask the newest developer how long it took from their first day to their first production deployment. If it is more than a week, "it's a swamp." Steve's benchmark: can a developer who has been there two weeks deploy without asking anyone? If yes, you have a platform. If no, "you have platform theater."
    - The acronym test: ask the platform team to explain any tool of their choosing without using a single acronym. If they cannot, it is likely resume-driven architecture rather than genuine problem-solving.

    The Sagrada Familia Problem

    Steve uses a memorable analogy: many platforms are like the Sagrada Familia in Barcelona — they look incredibly impressive and intricate, but they are never actually finished. The question leaders should ask is: what does an MVP platform look like, what tools does it need, and how do we start delivering business value to the developers who use it?
    Because, as Steve says, "if we're not building any business value, we're just messing around."

    Who the Platform Is Really For

    Mattias asks the fundamental question: who is the platform actually for? Steve's answer is direct — the platform's customers are the developers deploying workloads to it. A platform without applications running on it is useless. He distinguishes three stages:
    - Vanilla Kubernetes: the out-of-the-box cluster
    - Platform Kubernetes: the foundational workloads the platform needs to function (secret management, observability, perhaps a service mesh)
    - The actual platform: only real once applications are being deployed and business value is delivered

    The hosts discuss how some teams build platforms for themselves rather than for application developers or the business, which is a fast track to unnecessary complexity.

    Kubernetes: Standard Tool or Premature Choice?

    The episode explores when Kubernetes is the right answer and when it is overkill. Steve emphasizes that he loves Kubernetes — he has contributed to the Flux project and other CNCF projects — but only when it is earned. He gives an example of a startup with three microservices, ten users, and five engineers that chose Kubernetes because "Google uses it" and the CTO went to KubeCon. Six months later, they had infrastructure that could handle ten million users while serving about 97. "Google needs Kubernetes, but your Series B startup needs to ship features." Steve also shares a recent on-site engagement where he ran the unit economics on day two: the proposed architecture needed four times the CPU and double the RAM for identical features. One spreadsheet saved the company from a migration that would have destroyed the business model. "That's the question nobody asks before a Kubernetes migration — does the maths actually work?" Mattias pushes back slightly, noting that a small Kubernetes cluster can still provide real benefits if the team already has the knowledge and tooling.
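Steve's day-two unit-economics check amounts to simple arithmetic. A back-of-the-envelope sketch; the prices and baseline sizes below are invented, and only the 4x CPU / 2x RAM ratio comes from the episode:

```python
# "Does the maths actually work?" — compare monthly compute cost of the
# current footprint vs a proposed architecture that needs 4x the CPU
# and 2x the RAM for identical features. Prices are hypothetical.
def monthly_cost(vcpus, ram_gb, price_vcpu=25.0, price_gb=3.5):
    return vcpus * price_vcpu + ram_gb * price_gb

current = monthly_cost(vcpus=32, ram_gb=128)
proposed = monthly_cost(vcpus=32 * 4, ram_gb=128 * 2)  # 4x CPU, 2x RAM
print(f"current:  ${current:,.0f}/month")
print(f"proposed: ${proposed:,.0f}/month ({proposed / current:.1f}x)")
```

One spreadsheet (or ten lines of code) like this, run before a migration, is exactly the kind of check Steve says nobody does.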
    Paulina adds an important caveat: even if a consultant can deploy and maintain Kubernetes, the question is whether the customer's own team can realistically support it afterward. The entry skill set for Kubernetes is significantly higher than, say, managed Docker or ECS.

    Managed Services and "Boring Is Beautiful"

    Steve's recommendation for many teams is straightforward: managed platforms, managed databases, CI/CD that just works, deploy on push, and go home at 5 p.m. "Boring is beautiful, especially when you call me at 3 a.m." He illustrates this with a company that spent 18 months and roughly $850,000 in engineering time building a custom deployment system using well-known CNCF tools. The result was about 80–90% as good as GitHub Actions. The migration to GitHub Actions cost around $30,000, and the ongoing maintenance cost was zero. Paulina adds that managed services are not completely zero maintenance either, but the operational burden is orders of magnitude less than self-managed infrastructure, and the cloud provider takes on a share of the responsibility.

    The New Tool Tax: Why "Free" Tools Are Never Free

    A central theme is that open-source tools carry hidden costs far exceeding their license fee. Steve introduces the new tool tax framework with four components, using Vault (at a $40,000 license) as an example:
    - Learning tax (~$45,000): three engineers, two weeks each for training, documentation, and mistakes
    - Integration tax (~$20,000): CI/CD pipelines, Kubernetes operators, secret migration, monitoring of Vault itself
    - Operational tax (~$50,000/year): on-call, upgrades, tickets, patching
    - Opportunity tax (~$80,000): while engineers work on Vault, they are not building things that could save hundreds of hours per month

    Total year-one cost: roughly $243,000 — a 6x multiplier over the $40,000 budget. And as Steve points out, most teams never present this full picture to leadership.
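The tool tax components above can be tallied directly. Note that the itemized figures sum to about $235,000, in the same ballpark as the "roughly $243,000 / 6x" total quoted in the episode:

```python
# Tallying Steve's "new tool tax" example for a $40k Vault license.
# The line items come from the episode; they sum to ~$235k, close to
# the ~$243k / ~6x figure he quotes for the full year-one cost.
license_fee = 40_000
taxes = {
    "learning": 45_000,      # 3 engineers x 2 weeks of training and mistakes
    "integration": 20_000,   # CI/CD, operators, secret migration, monitoring
    "operational": 50_000,   # year one of on-call, upgrades, patching
    "opportunity": 80_000,   # what those engineers did not build instead
}
total = license_fee + sum(taxes.values())
print(f"year-one cost: ${total:,} ({total / license_fee:.1f}x the license)")
```

Presenting this breakdown to leadership before adoption is the whole point of the framework: the license fee is the smallest line item.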
Mattias extends the point to tool documentation complexity, noting that anyone who has worked with Envoy's configuration knows ho

    46 min
  3. 11 MAR.

    #94 - Small Tasks, Big Wins: The AI Dev Loop at System Initiative

    We bring Paul Stack back to cover the parts we skipped last time. What changed when the models got better and we moved from one-shot Gen AI to agentic, human-in-the-loop work? How do plan mode and tight prompts stop AI from going rogue? Want to hear how six branches, git worktrees, and a TypeScript CLI came together?

    Summary

    In this episode, Mattias, Andrey, and Paulina welcome back returning guest Paul from System Initiative to continue a conversation that started in the previous episode about their project Swamp. The discussion digs into how AI-assisted software development has changed over the past year, and why the real shift is not "AI writes code" but humans orchestrating multiple specialized agents with strong guardrails. Paul walks through the practical workflows, multi-layered testing, architecture-first thinking, cost discipline, and security practices his team has adopted — while the hosts push on how this applies across enterprise environments, mentoring newcomers, and the uncomfortable question of who is responsible when AI-built software fails.

    Key Topics

    The industry crossroads: layoffs, fear, and a new reality

    Before diving into technical specifics, Paul acknowledges that the industry is at "a real crazy crossroads." He references Block (formerly Square) cutting roughly 40% of their workforce, citing uncertainty about what AI means for their teams. He wants to be transparent that System Initiative also shrank — but clarifies the company did not cut people because of AI. The decision to reduce headcount came before they even knew what they were going to build next, let alone how they would build it. AI entered the picture only after they started prototyping the next version of their product.
    Block's February 2026 layoffs, announced by CEO Jack Dorsey, eliminated over 4,000 positions. The move was framed as an AI-driven restructuring, making it one of the most visible examples of AI anxiety playing out in real corporate decisions.

    From GenAI hype to agentic collaboration

    Paul explains that AI coding quality shifted significantly around October–November of the previous year. Before that, results were inconsistent — sometimes impressive, often garbage. Then the models improved dramatically in both reasoning and code generation. But the bigger breakthrough, in his view, was not the models themselves. It was the industry's shift from "Gen AI" — one-shot prompting where you hand over a spec and accept whatever comes back — to agentic AI, where the model acts more like a pair programmer. In that setup, the human stays in the loop, challenges the plan, adds constraints, and steers the result toward something that fits the codebase. He gives a concrete early example: System Initiative had a CLI written in Deno (a TypeScript runtime). Because the models were well-trained on TypeScript libraries and the Deno ecosystem, they started producing decent code. Not beautiful, not perfectly architected — but functional. When Paul began feeding the agent patterns, conventions, and existing code to follow, the output became coherent with their codebase. This led to a workflow where Paul would open six Claude Code sessions at once in separate Git worktrees — isolated copies of the repository on different branches — each building a small feature in parallel, feeding them bug reports and data, and continuously interacting with the results rather than one-shotting them.

    Git worktrees let you check out multiple branches of the same repository simultaneously in separate directories. Each worktree is independent, so you can work on several features at once and merge them back via pull requests.
    He later expanded this by running longer tasks on a Mac Mini accessible via Tailscale (a mesh VPN), while handling shorter tasks on his laptop — effectively distributing AI workloads across machines.

    Why architecture matters more than ever

    One of Paul's strongest themes is that AI shifts engineering attention away from syntax and back toward architecture. He argues that AI can generate plenty of code, but without design principles and boundaries it will produce spaghetti on top of existing spaghetti. He introduces the idea of "the first thousand lines" — an anecdote he read recently claiming that the first thousand lines of code an agent helps write determine its path forward. If those lines are well-structured and follow clear design principles, the agent will build coherently on top of them. If they are messy and unprincipled, everything after will compound the mess. Paul breaks software development into three layers:
    - Architecture — design patterns like DDD (Domain-Driven Design), CQRS (Command Query Responsibility Segregation)
    - Patterns — principles like DRY (Don't Repeat Yourself), YAGNI (You Aren't Gonna Need It), KISS (Keep It Simple)
    - Taste — naming conventions, module layout, project structure, Terraform module organization

    He argues the industry spent the last decade obsessing over "taste" while often mocking "ivory tower architects" — the people who designed systems but didn't write code. In an AI-driven world, those architectural concerns become critical again because the agent needs clear boundaries, domain structure, and intent to produce coherent output. Paulina agrees and observes that this trend may also blur traditional specialization lines, pushing engineers toward becoming more general "software people" rather than narrowly front-end, back-end, or DevOps specialists.

    Encoding design docs, rules, and constraints into the repo

    Paul describes how his team makes architecture actionable for AI by encoding system knowledge directly into the repository.
    Their approach has several layers:
    - Design documents — Detailed docs covering the model layer (the actual objects, their purposes, how they relate), workflow construction (how models connect and pass data), and expression language behavior. These live in a /design folder in the open-source repo and describe the intent of every part of the system.
    - Architectural rules — The agent is explicitly told to follow Domain-Driven Design: proper separation between domains, infrastructure, repositories, and output layers. The DDD skill is loaded so the agent understands and maintains bounded contexts.
    - Code standards — TypeScript strict mode, no any types, named exports, passing lint and format checks. License compliance is also enforced: because the project is AGPL v3, the agent cannot pull in dependencies with incompatible licenses.
    - Skills — A newer mechanism for lazy-loading contextual information into the AI agent. Rather than stuffing everything into one enormous prompt, skills are loaded on demand when the agent encounters a specific type of task. This keeps context windows lean and focused.

    AGPL v3 (GNU Affero General Public License) is a copyleft license that requires anyone who runs modified software over a network to make the source code available. This creates strict constraints on what dependencies can be used.

    Multi-agent development: the full chain

    A major part of the discussion centers on how Paul's team works with multiple specialized AI agents rather than a single all-knowing assistant. The chain looks like this:
    - Issue triage agent — When a user opens a GitHub issue, an agent evaluates whether it is a legitimate feature request or bug report. The agent's summary is posted back to the issue immediately, creating context for later stages.
    - Planning agent — If the issue is legitimate, the system enters plan mode. A specification is generated and posted for the user to review.
    Users can push back ("that's not how I think it should work"), and the plan is revised until everyone agrees.
    - Implementation agent — The code is written based on the approved plan, with all the design docs, architectural rules, and skills loaded as context.
    - Happy-path reviewer — A separate agent reviews the code against standards, checking that it loads correctly and appears to function.
    - Adversarial reviewer — Added just days before the recording, this agent is told: "You are a grumpy DevOps engineer and I want you to pull this code apart." It looks for security injection points, failure modes, and anything the happy-path reviewer might miss.

    Both review agents write their findings as comments on the pull request, creating a visible audit trail. The PR only merges when both agents approve. If the adversarial agent flags a security vulnerability, the implementation goes back for changes. Paul says this "Jekyll and Hyde" review setup caught a path traversal bug in their CLI during its first week. While the CLI runs locally and the risk was limited, it proved the value of adversarial review.

    Path traversal is a vulnerability where an attacker can access files outside the intended directory by manipulating file paths (e.g., using ../ sequences). Even in CLI tools, this can expose sensitive files on a user's machine.

    Mattias compares the overall process to a modernized CI/CD pipeline — the same stages exist (commit, test, review, promote, release), but AI replaces some of the manual implementation steps while humans stay focused on architecture, review, and acceptance.

    Why external pull requests are disabled

    One of the more provocative decisions Paul describes: the open-source Swamp project does not accept external pull requests. GitHub recently added a feature to disable PR creation from non-collaborators entirely, and the team turned it on immediately. The reasoning is supply chain control.
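The path traversal class of bug described above is typically defeated by a containment check: resolve the requested path and confirm it still lives under the intended base directory. A minimal sketch, not the Swamp CLI's actual fix:

```python
# Containment check against path traversal: resolve the combined path
# and verify it is still inside the base directory. Paths used in the
# demo are illustrative.
from pathlib import Path

def safe_join(base: str, user_path: str) -> Path:
    """Join user input onto base, refusing paths that escape it."""
    base_dir = Path(base).resolve()
    target = (base_dir / user_path).resolve()
    if base_dir != target and base_dir not in target.parents:
        raise ValueError(f"path escapes {base_dir}: {user_path}")
    return target

# "docs/readme.md" stays inside the base; "../etc/passwd" does not.
print(safe_join("/srv/app/data", "docs/readme.md"))
```

Resolving before comparing is what makes this robust: string prefix checks alone can be fooled by `..` sequences and symlinks.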
Because the project's code is 100% AI-generated within a tightly controlled context — design docs, architectural rules, skills, adversarial review — they want to ensure that all code en

    53 min
  4. 5 MAR.

    #93 - The DevSecOps Perspective: Key Takeaways From Re:Invent 2025

    Andrey and Mattias share a fast re:Invent roundup focused on AWS security. What do VPC Encryption Controls, post-quantum TLS, and org-level S3 block public access change for you? Which features should you switch on now, like ECR image signing, JWT checks at ALB, and air-gapped AWS Backup? Want simple wins you can use today?

    Summary

    In this episode, Andrey and Mattias deliver a security-heavy recap of AWS re:Invent 2025 announcements, while noting that Paulina is absent and wishing her a speedy recovery. Out of the 500+ releases surrounding re:Invent, they narrow the list down to roughly 20 features that security-conscious teams can act on today — covering encryption, access control, detection, backups, container security, and organization-wide guardrails. Along the way, Andrey reveals a new AI-powered product called Boris that watches the AWS release firehose so you don't have to.

    Key Topics

    AWS re:Invent Through a Security Lens

    The hosts frame the episode as the DevSecOps Talks version of a re:Invent recap, complementing a FivexL webinar held the previous month. Despite the podcast's name covering development, security, and operations, the selected announcements lean heavily toward security. Andrey is upfront about it: if security is your thing, stay tuned; otherwise, manage your expectations. At the FivexL webinar, attendees were asked to prioritize areas of interest across compute, security, and networking. AI dominated the conversation, and people were also curious about Amazon S3 Vectors — a new S3 storage class purpose-built for vector embeddings used in RAG (Retrieval-Augmented Generation) architectures that power LLM applications. It is cost-efficient but lacks hybrid search at this stage.
VPC Encryption and Post-Quantum Readiness

One of the first and most praised announcements is VPC Encryption Control for Amazon VPC, a pre-re:Invent release that lets teams audit and enforce encryption in transit within and across VPCs. The hosts highlight how painful it used to be to verify internal traffic encryption — typically requiring traffic mirroring, spinning up instances, and inspecting packets with tools like Wireshark. The feature offers two modes: monitor mode to audit encryption status via VPC flow logs, and enforce mode to block unencrypted resources from attaching to the VPC.

Mattias adds that compliance expectations are expanding. It used to be enough to encrypt traffic over public endpoints, but the bar is moving toward encryption everywhere, including inside the VPC. The hosts also call out a common pattern: offloading SSL at the load balancer and leaving traffic to targets unencrypted. VPC Encryption Control helps catch exactly this kind of blind spot.

The discussion then shifts to post-quantum cryptography (PQC) support rolling out across AWS services including S3, ALB, NLB, AWS Private CA, KMS, ACM, and Secrets Manager. AWS now supports ML-KEM (Module-Lattice-Based Key Encapsulation Mechanism), a NIST-standardized post-quantum algorithm, along with ML-DSA (Module-Lattice-Based Digital Signature Algorithm) for Private CA certificates. The rationale: state-level actors are already recording encrypted traffic today in a "harvest now, decrypt later" strategy, betting that future quantum computers will crack current encryption. Andrey notes that operational quantum computing feels closer than ever, making it worthwhile to enable post-quantum protections now — especially for sensitive data traversing public networks.

S3 Security Controls and Access Management

Several S3-related updates stand out. Attribute-Based Access Control (ABAC) for S3 allows access decisions based on resource tags rather than only enumerating specific resources in policies.
This is a powerful way to scope permissions — for example, granting access to all buckets tagged with a specific project — though it must be enabled on a per-bucket basis, which the hosts note is a drawback even if necessary to avoid breaking existing security models.

The bigger crowd-pleaser is S3 Block Public Access at the organization level. Previously available at the bucket and account level, this control can now be applied across an entire AWS Organization. The hosts call it well overdue and present it as the ultimate "turn it on and forget it" control: in 2026, there is no good reason to have a public S3 bucket.

Container Image Signing

Amazon ECR Managed Image Signing is a welcome addition. ECR now provides a managed service for signing container images, leveraging AWS Signer for key management and certificate lifecycle. Once configured with a signing rule, ECR automatically signs images as they are pushed. This eliminates the operational overhead of setting up and maintaining container image signing infrastructure — previously a significant barrier for teams wanting to verify image provenance in their supply chains.

Backups, Air-Gapping, and Ransomware Resilience

AWS Backup gets significant attention. The hosts discuss air-gapped AWS Backup Vault support as a primary backup target, positioning it as especially relevant for teams where ransomware is on the threat list. These logically air-gapped vaults live in an Amazon-owned account and are locked by default with a compliance vault lock to ensure immutability. The strong recommendation: enable AWS Backup for any important data, and keep backups isolated in a separate account from your workloads. If an attacker compromises your production account, they should not be able to reach your recovery copies.
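    The tag-based S3 access pattern described earlier can be sketched as a policy condition plus a toy evaluator. Everything here is illustrative: `aws:ResourceTag` is the long-established condition key for tag-based access, but the exact keys the new S3 ABAC feature uses are not confirmed here, and the tag key/value (`project`/`apollo`) and the `allows` helper are invented for the example.

    ```python
    # Toy sketch of tag-based (ABAC) S3 access, as discussed in the episode.
    # Tag key/value and the evaluator are illustrative, not a real AWS policy test.
    abac_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowProjectTaggedBuckets",
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": "*",
                # Access is scoped by tag, not by listing bucket ARNs
                "Condition": {"StringEquals": {"aws:ResourceTag/project": "apollo"}},
            }
        ],
    }

    def allows(policy: dict, resource_tags: dict) -> bool:
        """Toy evaluator: does any Allow statement's tag condition match?"""
        for stmt in policy["Statement"]:
            if stmt["Effect"] != "Allow":
                continue
            conditions = stmt.get("Condition", {}).get("StringEquals", {})
            if all(
                resource_tags.get(key.split("/", 1)[1]) == value
                for key, value in conditions.items()
            ):
                return True
        return False

    print(allows(abac_policy, {"project": "apollo"}))   # bucket tagged project=apollo
    print(allows(abac_policy, {"project": "mercury"}))  # different project: no match
    ```

    The point of the pattern is visible in the policy: permissions follow the tag, so new buckets tagged into the project are covered without touching the policy.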
Related backup updates include KMS customer-managed key support for air-gapped vaults for better encryption flexibility, and GuardDuty Malware Protection for AWS Backup, which can scan backup artifacts for malware before restoration.

Data Protection in Databases

Dynamic data masking in Aurora PostgreSQL draws praise from both hosts. Using the new pg_columnmask extension, teams can configure column-level masking policies so that queries return masked data instead of actual values — for example, replacing credit card numbers with wildcards. The data in the database remains unmodified; masking happens at query time based on user roles. Mattias compares it to capabilities already present in databases like Snowflake and highlights how useful it is when sharing data with external partners or other teams. When the idea of using masked production data for testing comes up, the hosts gently push back — don't do that — but both agree that masking at the database layer is a strong control because it reduces the risk of accidental data exposure through APIs or front-end applications.

Identity, IAM, and Federation Improvements

The episode covers several IAM-related features. AWS IAM Outbound Identity Federation allows federating AWS identities to external services via JWT, effectively letting you use AWS identity as a platform for authenticating to third-party services — similar to how you connect GitHub or other services to AWS today, but in the other direction. The AWS Login CLI command provides short-lived credentials for IAM users who don't have AWS IAM Identity Center (SSO) configured. The hosts see it as a better alternative to storing static IAM credentials locally, but also question whether teams should still be relying on IAM users at all — their recommendation is to set up IAM Identity Center and move on. The AWS Source VPC ARN condition key gets particular enthusiasm.
It allows IAM policies to check which VPC a request originated from, enabling conditions like "allow this action only if the request comes from this VPC." For teams doing attribute-based access control in IAM, this is a significant addition.

AWS Secrets Manager Managed External Secrets is another useful feature that removes a common operational burden. Previously, rotating third-party SaaS credentials required writing and maintaining custom Lambda functions. Managed external secrets provides built-in rotation for partner integrations — Salesforce, BigID, and Snowflake at launch — with no Lambda functions needed.

Better Security at the Network and Service Layer

JWT verification in AWS Application Load Balancer simplifies machine-to-machine and service-to-service authentication. Teams previously had to roll their own Lambda-based JWT verification; now it is supported out of the box. The recommendation is straightforward: drop the Lambda and use the built-in capability.

AWS Network Firewall Proxy is in public preview. While the hosts have not explored it deeply, their read is that it could help with more advanced network inspection scenarios — not just outgoing internet traffic through NAT gateways, but potentially also traffic heading toward internal corporate data centers.

Developer-Oriented: REST API Streaming

Although the episode is mainly security-focused, the hosts include REST API streaming in Amazon API Gateway as a nod to developers. This enables progressive response payload streaming, which is especially relevant for LLM use cases where streaming tokens to clients is the expected interaction pattern. Mattias notes that applications are moving beyond small JSON payloads — streaming is becoming table stakes as data volumes grow.
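    The "allow only from this VPC" condition discussed earlier follows the same shape as the long-standing `aws:SourceVpc` global condition key, shown in this sketch; the new ARN-based key from the episode works the same way, but its exact name is not reproduced here, and the bucket name and VPC ID are placeholders.

    ```python
    import json

    # Hedged sketch of an S3 bucket policy that denies requests arriving from
    # outside a given VPC. `aws:SourceVpc` is evaluated for requests that come
    # in via a VPC endpoint; bucket name and VPC ID are placeholders.
    deny_outside_vpc = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyRequestsFromOutsideTheVpc",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:*",
                "Resource": [
                    "arn:aws:s3:::example-bucket",
                    "arn:aws:s3:::example-bucket/*",
                ],
                # Matches any request that did NOT arrive through this VPC
                "Condition": {
                    "StringNotEquals": {"aws:SourceVpc": "vpc-0123456789abcdef0"}
                },
            }
        ],
    }

    print(json.dumps(deny_outside_vpc, indent=2))
    ```

    The Deny-with-StringNotEquals shape is the usual way to express "only from here" in IAM: everything that fails the condition is denied, regardless of what other statements allow.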
Centralized Observability and Detection

CloudWatch unified management for operational, security, and compliance data promises cross-account visibility from a single pane of glass, without requiring custom log aggregation pipelines built from Lambdas and glue code. The hosts are optimistic but immediately flag the cost: CloudWatch data ingest pricing can escalate quickly when dealing with high-volume sources like access logs. Deep pockets may be required. Detection is a recurring theme throughout the episode. The hosts discuss CloudTrai

    28 min
  5. 20 FEB.

    #92 - From System Initiative to SWAMP: Agent-Native Infra with Paul Stack

    What can you automate with SWAMP today, from AWS to a Proxmox home lab? How do skills, scripts, and reusable workflows plug into your stack? Could this be your agent’s missing guardrails?

    Summary

    System Initiative has undergone a dramatic transformation: from a visual SaaS infrastructure platform with 17 employees to Swamp, a fully open-source CLI built for AI agents, maintained by a five-person team whose initials literally spell the product name. Paul Stack returns for his third appearance on the show to explain why the old model failed — and why handing an AI agent raw CLI access to your cloud is, as Andrey puts it, just "console-clicking in the terminal." The conversation gets sharp when the hosts push on what problem Swamp actually solves, whether ops teams are becoming the next bottleneck in AI-era delivery, and why Paul believes the right move is not replacing Terraform but giving AI a structured system it can reason about. Paul also drops a parting bombshell: he hasn't written a single line of code in four weeks.

    Key Topics

    System Initiative's pivot from visual editor to AI-first CLI

    Paul Stack explains that System Initiative spent over five years iterating on a visual infrastructure tool where users could drag, drop, and connect systems. Despite the ambition, the team eventually concluded that visual composition was too slow, too cumbersome, and too alien for practitioners accustomed to code, artifacts, and reviewable changes. The shift started in summer 2025 when Paul spiked a public OpenAPI-spec API. A customer then built an early MCP (Model Context Protocol) server on top of it — a prototype that worked but gave no thought to token usage or tool abstraction.
System Initiative responded by building its own official MCP server and pairing it with a CLI. The results were dramatically better: customers could iterate easily from the command line or through AI coding tools like Claude Code. By Christmas 2025 the writing was on the wall. The CLI-plus-agent approach was producing better outcomes, while the company was still carrying hundreds of thousands of lines of code for a distributed SaaS platform built for a previous product direction. In mid-January 2026, the company made the call to rethink everything from first principles.

The team behind the name

The restructuring was painful. System Initiative went from 17 people to five. Paul explains the reasoning candidly: when you don't know what the tool is going to be, keeping a large team around is unfair to them, bad for their careers, and expensive. The five who stayed were the CEO, VP of Business, COO, Paul (who ran product), and Nick Steinmetz, the head of infrastructure — who also happened to be System Initiative's most active internal user, having used the platform to build System Initiative itself. Those five people's initials spell SWAMP. The name was unintentional but stuck — and Paul notes with a grin that if they ever remove the "P," it becomes "SWAM," so he's safe even if he leaves. Beyond the joke, the name fits: Swamp stores operational data in a local .swamp/ directory — not a neatly formatted data lake, but a structured store that AI agents can pull from to reason about infrastructure state and history.

Why raw AI agent access to infrastructure is dangerous

A major theme in the conversation is that letting an AI agent operate infrastructure directly — through the AWS CLI or raw API calls — is fundamentally unreliable. Andrey lays out the problem clearly: this kind of interaction is equivalent to clicking around the cloud console, just automated through a terminal. It is not repeatable, not reviewable, and inherits the non-deterministic behavior of LLMs.
If the agent's context window fills up, it starts to forget earlier decisions and improvises — a terrifying prospect for production infrastructure. What made System Initiative's earlier MCP-based direction compelling, in Andrey's view, was the combination of guardrails, repeatability, and human review. The agent generates a structured specification, a human reviews it, and only then is it applied. Paul agrees and calls this the "agentic loop with the human loop" — the strongest pattern they found.

Token costs and the case for local-first architecture

Paul shares a hard-won lesson from building MCP integrations: a poorly designed MCP server burns enormous amounts of tokens and creates unnecessary costs for users. He spent three weeks in December reworking the server to use progressive context reveal rather than flooding the model with data. Even so, the fundamental problem with a SaaS-first architecture remained — constantly transmitting context between a central API and the user's agent was expensive regardless of optimization. That experience pushed the team toward a local-first design. Swamp keeps data on the user's machine, close to where the agent operates, giving AI the context it needs without the round-trip overhead and cost of a remote service.

What Swamp actually is

Swamp is a general-purpose, open-source CLI automation tool — not just another infrastructure-as-code framework. Its core building blocks are:

Models: typed schemas with explicit inputs, outputs, and methods. Unlike traditional IaC resource definitions limited to CRUD operations, Swamp models can have methods like analyze or do_next, with the procedural logic living inside the method itself.

Workflows: the orchestration layer that interacts with APIs, CLIs, or any external system. Workflows take inputs, can be composed (a workflow can orchestrate other workflows), and produce artifacts that the AI agent can inspect over time.
Skills: Claude Code markdown files and shell scripts that teach the AI agent how to build models and workflows within Swamp's architecture.

Critically, Swamp ships with zero built-in models — no pre-packaged AWS EC2, VPC, or GCP resource definitions. Instead, the AI agent uses installed skills to generate models on the fly. Paul describes a user who joined the Discord that very morning, asked Swamp to create a schema for managing Let's Encrypt certificates, and it worked on the first attempt without writing any code. Nick Steinmetz provides another example: he manages his homelab Proxmox hypervisor entirely through Swamp — creating and starting VMs, inspecting hypervisor state, and monitoring utilization. He recently connected it to Discord so friends can run commands like @swamp create vm to spin up Minecraft and gaming servers on demand.

How Swamp fits with AI coding tools

The hosts spend significant time pinning down where Swamp sits relative to tools like Claude Code, bash access, and existing automation. Paul is clear: Swamp is not an AI wrapper or chatbot. It is a structured runtime that gives agents guardrails and reusable patterns. Mattias works through several analogies to help frame it — is it like n8n or Zapier for the CLI? A CLI-based Jenkins where jobs are agents? Paul settles on this: it is a workflow engine driven by typed models, where data can be chained between steps using CEL (Common Expression Language) expressions — the same dot-notation referencing used in Kubernetes API declarations. A simple example: create a VPC in step one, then reference VPC.resource.attributes.vpcid as input to a subnet model in step two. In Paul's personal workflow, he uses Claude Code to generate models and workflows, checks them into Git for peer review, and then runs them manually or through CI at a time of his choosing. He has explicitly configured Claude with a permission deny on workflow run — the agent helps build automation but never executes it.
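    The dot-notation chaining Paul describes, where step two reads VPC.resource.attributes.vpcid from step one, can be mimicked with a tiny path resolver. This is a sketch of the idea only, not Swamp's implementation; the output structure and values are invented.

    ```python
    def resolve(path: str, scope: dict):
        """Walk a dot-separated reference (e.g. 'VPC.resource.attributes.vpcid')
        through nested workflow outputs."""
        node = scope
        for part in path.split("."):
            node = node[part]
        return node

    # Invented step-one output for a VPC model:
    step_outputs = {"VPC": {"resource": {"attributes": {"vpcid": "vpc-0abc123"}}}}

    # Step two feeds the resolved value into a subnet model's inputs:
    subnet_inputs = {"vpc_id": resolve("VPC.resource.attributes.vpcid", step_outputs)}
    print(subnet_inputs)  # → {'vpc_id': 'vpc-0abc123'}
    ```

    Real CEL is a full expression language, but the structural idea is the same: typed step outputs become an addressable namespace that later steps reference instead of re-querying the cloud.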
The same CLI works whether a person or an agent runs it; the difference is timing and approval.

Reusability, composition, and Terraform interop

Swamp workflows are parameterized and reusable across environments. If they grow unwieldy, workflows can orchestrate other workflows, collect outputs, and manage success conditions — similar to GitHub Actions calling other actions. Paul also demonstrates that Swamp can sit alongside existing tooling rather than replacing it. In a live Discord session, he built infrastructure models in Swamp and then asked the AI agent to generate the equivalent Terraform configuration. Because the agent had typed models with explicit relationships, it produced correct Terraform with proper resource dependencies. This positions Swamp less as a replacement mandate and more as a reasoning and control layer that can output to whatever format teams already use.

When one of the hosts compares Swamp to general build systems like Gradle, Paul draws a key distinction: traditional tools were designed for humans to write, review, and debate. Swamp is designed for AI agents to inspect and operate within. He references Anton Babenko's widely used terraform-aws-vpc module — with its 237+ input variables — as an example of a human-centric design that agents struggle with due to version dependencies, module structure complexity, and stylistic decisions baked in over years. Swamp instead provides the agent with structured context, explicit typing, and historical artifacts it can query.

Open source, AGPL v3, and monetization

Paulina asks the natural question: if Swamp is fully open source under AGPL v3, how does the company make money? Paul is candid that monetization is not the immediate priority — the focus is building a tool that resonates with users first.
But he outlines a potential model: a marketplace-style ecosystem where users can publish their own models and workflows, while System Initiative offers supported, maintained, and paid-for versions of commonly needed building blocks. He draws a loose comparison to Docker Hub's model of communit

    48 min

Ratings and Reviews

4.2
out of 5
5 Ratings

