AI Security Podcast

TechRiot.io

The #1 source for AI Security insights for CISOs and cybersecurity leaders. Hosted by two former CISOs, the AI Security Podcast provides expert, no-fluff discussions on the security of AI systems and the use of AI in Cybersecurity. Whether you're a CISO, security architect, engineer, or cyber leader, you'll find practical strategies, emerging risk analysis, and real-world implementations without the marketing noise. These conversations are helping cybersecurity leaders make informed decisions and lead with confidence in the age of AI.

  1. 7H AGO

    Anthropic's Project Mythos: Why the "Zero-Day Machine" is Terrifying the Security Industry

    In this episode, Ashish and Caleb discuss the internet-breaking preview of Project Mythos, an unreleased AI model from Anthropic that has shown an unprecedented, terrifying ability to reason through code and automatically generate working zero-day exploits. We dive into the conversations surrounding Project Glasswing, Anthropic's initiative to share this model with select partners (like Palo Alto and CrowdStrike) before public release, allowing them a 100-day window to patch critical vulnerabilities. Caleb explains why this level of AI reasoning isn't just hype: early testers are reporting that Mythos is not only finding zero-days, but actively detecting dormant intrusions within their own networks. If you are a CISO or security practitioner, this episode covers it all. We discuss why the traditional 30-day patch cycle is dead, why "assuming breach" is now mandatory, and why 60% of legacy security vendors might not survive this shift.

    Questions asked:
    (00:00) Introduction: The Hype Around Anthropic's Project Mythos
    (04:00) What is Project Mythos? (Reasoning and Finding Zero-Days)
    (06:50) Project Glasswing: The 100-Day Partner Patch Window
    (08:30) The Controversy: Did Anthropic Pick the Right Partners?
    (12:30) Why Anthropic Doesn't Have the Compute to Scan the Whole Internet
    (15:10) The Insider View: Mythos is Finding Dormant Intrusions
    (16:30) Why 60% of Security Vendors Will Go Away
    (19:30) Hype vs. Reality: GeoHot's Comments on Small Models
    (21:30) Eliminating False Positives in Static Code Analysis
    (23:50) The Zero-Day Clock: Time to Exploit Drops to Under 6 Hours
    (25:50) The Ethics of Zero-Days: Should Mythos Be Released at All?
    (34:30) The CISO Action Plan: Speeding Up Patching (Hours vs. Days)
    (44:50) The 3rd Party SaaS Problem: What to Do When You Can't Patch
    (46:10) "Assume Breach": Why Deception (Honeypots) is the New Priority
    (57:30) Empowering Non-Tech Teams to Build Detections
    (01:02:10) AI Makes Cheesy "Hacker Movies" a Reality

    Resources mentioned during the episode:
    Assessing Claude Mythos Preview’s cybersecurity capabilities
    Project Glasswing
    Zero Day Clock

    1h 3m
  2. 3D AGO

    Are AI Security Startups Faking It? How to Separate Signal from Noise

    With over 70 startups claiming to have built the perfect "AI SOC Analyst" or "AI Threat Hunter," how do you separate the real products from the vaporware? Recorded live at Decibel RSAC Founder Festival, Ashish and Caleb hosted a heated panel with Edward Wu (Founder & CEO, Dropzone AI) and Lou Manousos (Co-Founder & CEO, Ent AI). The group debates the controversial claim that AI can provide 100% threat prevention and exposes the dirty secret of the industry: many AI startups are "cheating" by hiding human analysts behind their software. If you are a CISO or security practitioner navigating the vendor floor at RSA, this episode provides a BS-detector framework. Learn why an AI wrapper around Claude Code isn't enough, why "consistency" is the ultimate test for AI agents, and how to verify whether a startup actually has real-world, paying enterprise deployments (and not just friendly design partners).

    Questions asked:
    (00:00) Introduction: Live with Decibel
    (01:30) Meet the Panel: Edward Wu (Dropzone) & Lou Manousos (Ent)
    (03:40) The Great Debate: Has the Industry Given Up on Prevention?
    (05:50) What Has AI Actually Solved? (Repetitive Work vs. Context)
    (09:00) How to Spot BS on the RSA Show Floor
    (11:30) Defining an AI Agent: Chatbots vs. Threat Hunters
    (13:40) The Claude Code Problem: Is Your Product Just a Wrapper?
    (16:50) The 80% Accuracy Trap & Why Consistency is Key
    (21:30) Proving ROI: Evaluating AI Agents Like Human Employees
    (24:50) The Dirty Secret: Humans Hiding Behind AI Startups
    (26:30) Spotting Fake Customer Logos
    (28:30) Audience Q&A: Scaling the SOC vs. Replacing Humans
    (36:10) Forward Deployed Engineering & Personalized Software
    (40:30) Reimagining Security Architecture from the Inside Out
    (43:30) How Ent Detects Remote Workers Outsourcing Their Jobs
    (45:30) Final Thoughts: Asking Vendors for Real Proof Points

    47 min
  3. MAR 18

    Questions Every CISO Must Ask AI Security Vendors

    RSA Conference 2026 is here, and the AI agent hype machine is louder than ever. In this episode, Ashish and Caleb cut through the noise and arm CISOs, practitioners, and security teams with a clear-eyed view of what's actually happening in AI security this year. From the vendor floor at RSAC to the future of internal security automation, Caleb and Ashish discuss why 70% of "AI agent security" vendors can't even define what an agent is, why security team consolidation around 2–3 major platforms (plus internal AI capability) may be the most underrated CISO strategy of 2026, and why the window from vulnerability disclosure to live exploitation has collapsed from months to under two days. They also explore the emerging idea of a centralised AI automation function inside security teams, and why the future of security isn't buying more point solutions, it's building internal AI capability on top of a standardised vendor stack.

    Questions asked:
    (00:00) Introduction: Preparing for RSAC 2026
    (03:50) The Year of the "AI Agent" Marketing Hype
    (06:50) The Secret to AI Context: Enterprise Search (Glean)
    (09:50) Why Your SOC Needs a Centralized AI Platform Team
    (13:30) The #1 Question to Ask Vendors at RSAC: API Access
    (16:50) The Myth of MCP (Model Context Protocol) as the Gold Standard
    (20:50) Why RSAC is Too Noisy: Vibe Coding & 1,000 New Startups
    (22:30) Is Capital Raised the Only Signal of Trust?
    (24:50) Prediction: CISOs Will Fire 500 Vendors and Consolidate
    (30:50) The Build vs. Buy Debate for AI Security Features
    (35:50) Surviving RSAC: Sorting Signal from Noise
    (38:50) The Problem with "End-to-End" AI Agent Claims
    (41:50) Are AI-Driven Attacks Real?
    (44:50) The Zero-Day Clock: From 5 Months to 2 Days
    (48:50) RSAC Events: Live Recordings and CISO Panels

    Resources spoken about during the episode:
    RSAC 2026
    BSidesSF 2026
    Glean
    Zero Day Clock

    51 min
  4. MAR 5

    Will Foundation Models Kill Security Startups?

    Did Anthropic just kill the AppSec industry? Following the announcement of Claude Code Security, a tool that finds, reasons about, and fixes code vulnerabilities, major security stocks dropped by 8%. In this episode of the AI Security Podcast, Ashish and Caleb break down the reality behind the hype. Caleb explains why using AI for SAST (Static Application Security Testing) is "a no-brainer," noting that many open-source projects and startups have already been doing exactly what Anthropic announced. We discuss why this actually validates the shift toward AI-automated remediation. The conversation goes deeper into the future of the cybersecurity market: Will giant foundation models start acquiring security companies? Will they offer "premium gas" (cheaper tokens) for building on their platforms? And most importantly, what does this mean for AppSec engineers whose jobs involve triaging false positives?

    Questions asked:
    (00:00) Introduction: The Claude Code Security Announcement
    (02:50) What is Claude Code Security? (Finding & Reasoning about Vulns)
    (03:50) Market Overreaction: Why Security Stocks Dropped 8%
    (05:10) Why AI-Powered SAST is Not New (OpenAI & Open Source doing it already)
    (07:20) Will AI Take AppSec Jobs? (Triaging False Positives)
    (09:00) "Shift Left" on Steroids: Auto-Fixing and PR Submission
    (11:30) The Threat to Legacy Vendors: Why CrowdStrike's Moat is Safe
    (14:30) Historical Context: AI is the New Calculator/Typewriter
    (18:20) The "Gasoline" Theory: Foundation Models as Fuel
    (21:00) Will Anthropic Acquire Security Startups?
    (26:30) Anthropic's Go-To-Market Strategy: Building AI SOCs
    (33:30) Startup Survival: Can Innovation Outpace Big Tech?
    (41:30) The Future of Threat Intel: Is the Legacy Moat Disappearing?
    (48:20) Negotiating with Vendors using AI Leverage
    (53:30) Using Evals for Organizational Anomaly Detection

    1 hr
  5. FEB 11

    How to Build Your Own AI Chief of Staff with Claude Code

    What if you could automate your entire work life with a personal AI Chief of Staff? In this episode, Caleb Sima reveals to Ashish "Pepper," his custom-built AI agent that manages emails, schedules meetings, and even hires other AI experts to solve problems for him. Using Claude Code and a "vibe coding" approach, Caleb built a multi-agent system over a single holiday weekend, without writing a single line of Rust code himself. We discuss how he used this same method to build a black-box testing agent that auto-files bugs on GitHub, and how he even designed the branding for his venture fund, White Rabbit. We explore why "intelligence is becoming a commodity," and how you can survive by becoming an architect of AI agents rather than just a worker.

    Questions asked:
    (00:00) Introduction
    (03:20) Meet "Pepper": Caleb's AI Chief of Staff
    (05:40) How Pepper Dynamically Hires "Expert" Agents
    (07:30) Pepper Builds its Own Tools (MCP Servers)
    (11:50) Do You Need to Be a Coder to Do This?
    (12:50) Using "Claude Superpowers" to Orchestrate Agents
    (16:50) Automating a Venture Fund: Branding White Rabbit with AI
    (20:50) Building a "Black Box" Testing Agent in Rust (Without Knowing Rust)
    (28:50) The Developer Who Went Skiing While AI Did His Job
    (32:20) The Coming "App Sprawl" Crisis in Enterprise Security
    (36:00) Security Risks: Managing Shared Memory & Context
    (41:20) The Future of Work: Is Intelligence Becoming a Commodity?
    (44:50) Why Plumbers are Safe from AI

    47 min
  6. JAN 28

    AI Security 2026 Predictions: The "Zombie Tool" Crisis & The Rise of AI Platforms

    This is a forward-looking episode, as Ashish Rajan and Caleb Sima break down the 8 critical predictions shaping the future of AI security in 2026. We explore the impending "Age of Zombies," a crisis where thousands of unmaintainable, "vibe-coded" internal tools begin to rot as employees churn. We also unpack a controversial theory about the "circular economy" of token costs, suggesting that major providers are artificially keeping prices high to avoid a race to the bottom. The conversation dives deep into the shift from individual AI features to centralized AI Platforms, the reality of the Capability Plateau where models are getting "better but not different," and the hilarious yet concerning story of Anthropic’s Claude not being able to operate a simple office vending machine without resorting to socialism or buying stun guns.

    Questions asked:
    (00:00) Introduction: 2026 Predictions
    (02:50) Prediction 1: The Capability Plateau (Why models feel the same)
    (05:30) Consumer vs. Enterprise: Why OpenAI wins consumer, but Anthropic wins code
    (09:40) Prediction 2: The "Evil Conspiracy" of High AI Costs
    (12:50) Prediction 3: The Rise of the Centralized AI Platform Team
    (15:30) The "Free License" Trap: Microsoft Copilot & Enterprise fatigue
    (20:40) Prediction 4: Hyperscalers Shift from Features to Platforms (AWS Agents)
    (23:50) Prediction 5: Agent Hype vs. Reality (Netflix & Instagram examples)
    (27:00) Real-World Use Case: Auto-Fixing 1,000 Vulnerabilities in 2 Days
    (31:30) Prediction 6: Vibe Coding is Replacing Security Vendors
    (34:30) Prediction 7: Prompt Injection is Still the #1 Unsolved Threat
    (43:50) Prediction 8: The "Confused Deputy" Identity Problem
    (51:30) The "Zombie Tool" Crisis: Why Vibe Coded Tools will Rot
    (56:00) The Claude Vending Machine Failure: Why Operations are Harder than Code

    1h 1m
  7. JAN 23

    Why AI Agents Fail in Production: Governance, Trust & The "Undo" Button

    Is your organization stuck in "read-only" mode with AI agents? You're not alone. In this episode, Dev Rishi (GM of AI at Rubrik, formerly CEO of Predibase) joins Ashish and Caleb to dissect why enterprise AI adoption is stalling at the experimentation phase and how to safely move to production. Dev reveals the three biggest fears holding IT leaders back: shadow agents, lack of real-time governance, and the inability to "undo" catastrophic mistakes. We dive deep into the concept of "Agent Rewind," a capability to roll back changes made by rogue AI agents (like deleting a production database), and why this remediation layer is critical for trust. The conversation also explores the technical architecture needed for safe autonomous agents, including the debate between the MCP (Model Context Protocol) and A2A (Agent to Agent) standards. Dev explains why traditional "anomaly detection" fails for AI and proposes a new model of AI-driven policy enforcement using small language models (SLMs) as judges.

    Questions asked:
    (00:00) Introduction
    (02:50) Who is Dev Rishi? From Predibase to Rubrik
    (04:00) The Shift from Fine-Tuning to Foundation Models
    (07:20) Enterprise AI Use Cases: Background Checks & Call Centers
    (11:30) The 4 Phases of AI Adoption: Where are most companies?
    (13:50) The 3 Biggest Fears of IT Leaders: Shadow Agents, Governance, & Undo
    (18:20) "Agent Rewind": How to Undo a Rogue Agent's Actions
    (23:00) Why Agents are Stuck in "Read-Only" Mode
    (27:40) Why Anomaly Detection Fails for AI Security
    (30:20) Using AI Judges (SLMs) for Real-Time Policy Enforcement
    (34:30) LLM Firewalls vs. Bespoke Policy Enforcement
    (44:00) Identity for Agents: Scoping Permissions & Tools
    (46:20) MCP vs. A2A: Which Protocol Wins?
    (48:40) Why A2A is Technically Superior but MCP Might Win

    51 min
  8. 12/19/2025

    AI Security 2025 Wrap: 9 Predictions Hit & The AI Bubble Burst of 2026

    It's the season finale of the AI Security Podcast! Ashish Rajan and Caleb Sima look back at their 2025 predictions and reveal that they went 9 for 9. We wrap up the year by dissecting exactly what the industry got right (and wrong) about the trajectory of AI, providing a definitive "state of the union" for AI security. We analyze why SOC Automation became the undisputed king of real-world AI impact in 2025, while mature AI production systems failed to materialize beyond narrow use cases due to skyrocketing costs and reliability issues. We also review the accuracy of their forecasts on the rise of AI Red Teaming, the continued overhyping of Agentic AI, and why Data Security emerged as a critical winner in a geo-locked world. Looking ahead to 2026, the conversation shifts to bold new predictions: the inevitable bursting of the "AI Bubble" as valuations detach from reality, and the rise of self-fine-tuning models. We also explore the controversial idea that the "AI Engineer" is merely a rebrand for data scientists, and a lot more…

    Questions asked:
    (00:00) Introduction: 2025 Season Wrap Up
    (02:50) State of AI Utility in late 2025: From coding to daily tasks
    (09:30) 2025 Report Card: Mature AI Production Systems? (Verdict: Correct)
    (10:45) The Cost Barrier: Why Production AI is Expensive
    (13:50) 2025 Report Card: SOC Automation is #1 (Verdict: Correct)
    (16:00) 2025 Report Card: The Rise of AI Red Teaming (Verdict: Correct)
    (17:20) 2025 Report Card: AI in the Browser & OS
    (21:00) Security Reality: Prompt Injection is still the #1 Risk
    (22:30) 2025 Report Card: Data Security is the Winner
    (24:45) 2025 Report Card: Geo-locking & Data Sovereignty
    (28:00) 2026 Outlook: Age Verification & Adult Content Models
    (33:00) 2025 Report Card: "Agentic AI" is Overhyped (Verdict: Correct)
    (39:50) 2025 Report Card: CISOs Should NOT Hire "AI Engineers" Yet
    (44:00) The "AI Engineer" is just a rebranded Data Scientist
    (46:40) 2026 Prediction: Self-Training & Self-Fine-Tuning Models
    (47:50) 2026 Prediction: The AI Bubble Will Burst
    (49:50) Bold Prediction: Will OpenAI Disappear?
    (01:01:20) Final Thoughts: Looking ahead to Season 4

    1h 3m

Ratings & Reviews

4.9 out of 5 (9 Ratings)

