The Boring AppSec Podcast

The Boring AppSec Podcast

In this podcast, we will talk about our experiences having worked at different companies - from startups to big enterprises, from tech companies to security companies, and from building side projects to building startups. We will talk about the good, the bad, and everything in between. So join us for some fun, some real, and some super hot takes about all things Security in the Boring AppSec Podcast.

  1. 12/15/2025

    The Future of Developer Security with Travis McPeak

    In this episode, we sit down with Travis McPeak, one of the most prominent thinkers in the space of developer security. Travis, who built his career at the intersection of security automation and developer productivity, shares his philosophy on achieving security at scale in the AI era. His career spans security leadership roles at major tech companies, including Symantec, IBM, Netflix, and Databricks. Most recently, he founded and served as CEO of Resourcely, a startup built on the idea of making cloud infrastructure secure by default, before being "acqui-hired" by Cursor, the rapidly growing AI-powered code editor, to lead security and enterprise readiness. Key Takeaways AI for Secure by Default: AI tools provide the best injection point to shift security "all the way left" and move past the reactive "whack-a-mole" approach, because developers are already motivated to use these highly effective tools.Changing AppSec Strategy: AI dramatically changes the nature of AppSec by making previously unscalable strategies, such as threat modeling, applicable. AI can generate architecture diagrams on demand by tracing through code.The Compliance Bottleneck: The dramatic consolidation of cloud security vendors reflects how compliance-minded the security industry remains. Critical infrastructure misconfigurations (like public databases being left open) often go unaddressed because they are not measured by compliance standards.Platform vs. Point Solutions: Travis argues against platforms that are often amalgamations of poorly integrated acquired tools. He suggests buying the single best point solution for a high-leverage problem and using AI capabilities to operationalize and wire it into internal systems, thereby simplifying integrations that platforms traditionally provide.The Skeptical Coder: A fundamental limitation of Large Language Models (LLMs) is their desire to "make you happy," causing them to provide answers even if they are incorrect. Therefore, engineers must use AI output only as a starting point and only consider the code finished when they understand it fully end to end.Prompt Injection Defined: Prompt injection is confirmed as a legitimate vulnerability, essentially a rehash of old issues like cross-site scripting and SQL injection, arising from the improper separation between the LLM instruction and the user instruction. Tune in for a deep dive! Contacting Travis * LinkedIn: https://www.linkedin.com/in/travismcpeak/ * Company Website: https://www.cursor.com Contacting Anshuman * LinkedIn: ⁠⁠⁠⁠https://www.linkedin.com/in/anshumanbhartiya/ * X: ⁠⁠⁠⁠https://x.com/anshuman_bh * Website: ⁠⁠⁠⁠https://anshumanbhartiya.com/ * ⁠⁠⁠⁠Instagram: ⁠⁠⁠https://www.instagram.com/anshuman.bhartiya Contacting Sandesh * LinkedIn: ⁠⁠⁠⁠https://www.linkedin.com/in/anandsandesh/ * X: ⁠⁠⁠⁠https://x.com/JubbaOnJeans * Website: ⁠⁠⁠⁠https://boringappsec.substack.com/

    52 min
  2. 12/04/2025

    Scaling Product Security In The AI Era with Teja Myneedu

    In this episode, we sit down with Teja Myneedu, Sr. Director, Security and Trust at Navan. He shares his philosophy on achieving security at scale, discussing some challenges and approaches specially in the AI era. Teja's career spans over two decades on the front lines of product security at hyper-growth companies like Splunk. He currently operates at the complex intersection of FinTech and corporate travel, where his responsibilities include securing financial transactions and ensuring the physical duty of care for global travelers. Key Takeaways • Scaling Security Philosophy: Security programs should be built on developer empathy and innovative solutions, scaling with context and automation. • Pragmatic Protection: Focus on incremental, practical improvements (like WAF rules) to secure the enterprise immediately, instead of letting the pursuit of perfection delay necessary defenses; security by obscurity is not always bad. • Flawed Prioritization: Prioritization frameworks are often flawed because they lack organizational and business context, which security tools fail to provide. • AI and Code Fixes: AI is changing the application security field by reducing the cognitive load on engineers and making it easier for security teams to propose vulnerability fixes (PRs). • The Authorization Dilemma: The biggest novel threat introduced by LLMs is the complexity of identity and authorization, as agents require delegate access and dynamically determine business logic. Tune in for a deep dive! Contacting Teja * LinkedIn: https://www.linkedin.com/in/myneedu/ * Company Website: https://www.navan.com Contacting Anshuman * LinkedIn: ⁠⁠⁠⁠https://www.linkedin.com/in/anshumanbhartiya/ * X: ⁠⁠⁠⁠https://x.com/anshuman_bh * Website: ⁠⁠⁠⁠https://anshumanbhartiya.com/ * ⁠⁠⁠⁠Instagram: ⁠⁠⁠https://www.instagram.com/anshuman.bhartiya Contacting Sandesh * LinkedIn: ⁠⁠⁠⁠https://www.linkedin.com/in/anandsandesh/ * X: ⁠⁠⁠⁠https://x.com/JubbaOnJeans * Website: ⁠⁠⁠⁠https://boringappsec.substack.com/

    52 min
  3. 11/24/2025

    Architecting AI Security: Standards and Agentic Systems with Ken Huang

    In this episode, we sit down with Ken Huang, a core architect behind modern AI security standards, to discuss the revolutionary challenges posed by agentic AI systems. Ken, who chairs the OWASP AIVSS project and co-chairs the AI safety working groups at the Cloud Security Alliance, breaks down how security professionals are writing the rulebook for a future driven by autonomous agents. Key Takeaways AIVSS for Non-Deterministic Risk: The OWASP AIVSS project aims to provide a quantitative measure for core agent AI risks by applying an agent AI risk factor on top of CVSS, specifically addressing the autonomy and non-deterministic nature of AI agents. Need for Task-Scoped IAM: Traditional OAuth and SAML are inadequate for agentic systems because they provide coarse-grained, session-scoped access control. New authentication standards must be task-scoped, dynamically removing access once a specific task is complete, and driven by verifying the agent’s intent. A2A Security Requires New Protocols: Agent-to-Agent communication (A2A) introduces security issues beyond traditional API security (like BOLA). New systems must utilize protocols for Agent Capability Discovery and Negotiation—validated by digital signatures—to ensure the trustworthiness and promised quality of service from interacting agents. Goal Manipulation is a Critical Threat: Sophisticated attacks often utilize context engineering to execute goal manipulation against agents. These attacks include gradually shifting an agent's objective (crescendo attack), using prompt injection to force the agent to expose secrets (malicious goal expansion), and forcing endless processing loops (exhaustion loop/denial of wallet). Tune in for a deep dive! Contacting Ken * LinkedIn: https://www.linkedin.com/in/kenhuang8/ * Company Website: https://distributedapps.ai/ * Substack: https://kenhuangus.substack.com/ * Paper (Agent Capability Negotiation and Binding Protocol): https://arxiv.org/abs/2506.13590 * Book (Securing AI Agents): https://www.amazon.com/Securing-Agents-Foundations-Frameworks-Real-World/dp/3032021294 * AIVSS: https://aivss.owasp.org/ Contacting Anshuman * LinkedIn: ⁠⁠⁠⁠https://www.linkedin.com/in/anshumanbhartiya/ * X: ⁠⁠⁠⁠https://x.com/anshuman_bh * Website: ⁠⁠⁠⁠https://anshumanbhartiya.com/ * ⁠⁠⁠⁠Instagram: ⁠⁠⁠https://www.instagram.com/anshuman.bhartiya Contacting Sandesh * LinkedIn: ⁠⁠⁠⁠https://www.linkedin.com/in/anandsandesh/ * X: ⁠⁠⁠⁠https://x.com/JubbaOnJeans * Website: ⁠⁠⁠⁠https://boringappsec.substack.com/

    46 min
  4. 10/01/2025

    The Attacker's Perspective on AI Security with Aryaman Behera

    In this episode, hosts Sandesh and Anshuman chat with Aryaman Behera, the Co-Founder and CEO of Repello AI. Aryaman shares his unique journey from being a bug bounty hunter and the captain of India's top-ranked CTF team, InfoSec IITR, to becoming the CEO of an AI security startup. The discussion offers a deep dive into the attacker-centric mindset required to secure modern AI applications, which are fundamentally probabilistic and differ greatly from traditional deterministic software. Aryaman explains the technical details behind Repello's platform, which combines automated red teaming (Artemis) with adaptive guardrails (Argus) to create a continuous security feedback loop. The conversation explores the nuanced differences between AI safety and security, the critical role of threat modeling for agentic workflows, and the complex challenges of responsible disclosure for non-deterministic vulnerabilities. Key Takeaways - From Hacker to CEO: Aryaman discusses the transition from an attacker's mindset, focused on quick exploits, to a CEO's mindset, which requires patience and long-term relationship building with customers. - A New Kind of Threat: AI applications introduce a new attack surface built on prompts, knowledge bases, and probabilistic models, which increases the blast radius of potential security breaches compared to traditional software. - Automated Red Teaming and Defense: Repello’s platform consists of two core products: Artemis, an offensive AI red teaming platform that discovers failure modes , and - Argus, a defensive guardrail system. The platforms create a continuous feedback loop where vulnerabilities found by Artemis are used to calibrate and create policies for Argus. - Threat Modeling for AI Agents: For complex agentic systems, a black-box approach is often insufficient. Repello uses a gray-box method where a tool called AgentWiz helps customers generate a threat model based on the agent's workflow and capabilities, without needing access to the source code. - The Challenge of Non-Deterministic Vulnerabilities: Unlike traditional software vulnerabilities which are deterministic, AI exploits are probabilistic. An attack like a system prompt leak only needs to succeed once to be effective, even if it fails nine out of ten times. - The Future of Attacks is Multimodal: Aryaman predicts that as AI applications evolve, major new attack vectors will emerge from new interfaces like voice and image, as their larger latent space offers more opportunities for malicious embeddings. Tune in for a deep dive! Contacting Aryaman * LinkedIn: https://www.linkedin.com/in/aryaman-behera/ * Company Website: https://repello.ai/ Contacting Anshuman * LinkedIn: ⁠⁠⁠⁠https://www.linkedin.com/in/anshumanbhartiya/ * X: ⁠⁠⁠⁠https://x.com/anshuman_bh * Website: ⁠⁠⁠⁠https://anshumanbhartiya.com/ * ⁠⁠⁠⁠Instagram: ⁠⁠⁠https://www.instagram.com/anshuman.bhartiya Contacting Sandesh * LinkedIn: ⁠⁠⁠⁠https://www.linkedin.com/in/anandsandesh/ * X: ⁠⁠⁠⁠https://x.com/JubbaOnJeans * Website: ⁠⁠⁠⁠https://boringappsec.substack.com/

    48 min

About

In this podcast, we will talk about our experiences having worked at different companies - from startups to big enterprises, from tech companies to security companies, and from building side projects to building startups. We will talk about the good, the bad, and everything in between. So join us for some fun, some real, and some super hot takes about all things Security in the Boring AppSec Podcast.