Chasing Entropy Podcast by 1Password

Dave Lewis, 1Password

This podcast is an interview series with career professionals in cybersecurity, getting their takes on shadow IT, extended access control, agentic AI, and how they arrived at this point in their careers.

  1. 6D AGO

    Chasing Entropy Podcast [Season 2 episode 002]: Allie Mellen on Code War and The Real Logic Behind Cyber Conflict

    Cyber conflict makes more sense when you stop treating it like a technical sideshow and start looking at history, doctrine, and political intent. In this episode of Chasing Entropy, Dave Lewis sits down with analyst and author Allie Mellen to discuss the ideas behind her book Code War, and why the cyber strategies of the United States, China, and Russia reflect much older national patterns.

    Mellen’s central argument is clear: cyber attacks are powerful, but not because they replace conventional force. They matter most when they are coordinated with military action, intelligence work, and influence campaigns. That thread runs through the whole conversation, from the Gulf War to Russia’s war in Ukraine. The point is not that cyber stands alone; it is that cyber becomes far more effective when it is part of a larger campaign with a defined objective.

    That framing leads to one of the episode’s strongest ideas: history still shapes how nations operate online. Mellen traces the US approach back to a culture of experimentation and technical tinkering. China’s cyber ecosystem grew out of hacktivism and state-linked talent pipelines. Russia’s path was shaped by post-Soviet collapse, where cybercrime became tied to survival and later overlapped with state interests. Those origins still show up in how these countries organize teams, define targets, and pursue advantage.

    The conversation also pushes back on the way cyber conflict is usually portrayed. Pop culture tends to reduce it to a screen full of code and a few elite operators. Mellen argues that this misses the real story. Cybersecurity is technical, but the motivations behind cyber campaigns are understandable: power, leverage, coordination, survival, influence. Those are not obscure concepts. They are the same forces that shape conflict everywhere else.

    One of the more memorable examples in the episode is her explanation of how WarGames helped push US policymakers to take computer security seriously in the 1980s. Public narratives matter, even when they get the details wrong.

    Another key theme is attribution. Mellen argues that defenders need to understand who is behind an operation, not just what malware was used. Attribution helps explain motivation, likely targets, and what may come next. That matters for governments, but it also matters for enterprises building realistic threat models. If you understand how a group operates and what it wants, you can make better decisions before the next incident lands.

    The final stretch of the episode focuses on AI, and the tone is sober. Mellen sees real value in automation, especially where AI can speed up workflows and reduce manual effort. She also sees a harder problem taking shape: AI lowers the cost of deception, makes false flag activity easier, and complicates attribution. Add that to a more fragmented internet and a more unstable geopolitical environment, and the result is a tougher operating environment for defenders.

    This episode is a strong listen for anyone trying to understand how cyber power actually works in practice. Listen to the full conversation, pick up Code War, and then review whether your threat model still treats cyber as a stand-alone technical problem. That assumption is getting harder to defend.

    37 min
  2. MAR 10

    Chasing Entropy Podcast [Season 2 episode 001]: Bob Lord on Hacklore, Secure By Design, and Why Incentives Matter

    SEASON TWO HAS LANDED! Bob Lord has spent decades building and leading security programs, from early internet crypto work at Netscape to roles at Twitter, Yahoo, the Democratic National Committee, and CISA. In this episode, he and host Dave Lewis get practical about a simple problem: the security advice most people hear does not match how real compromises happen. We start with the myths Bob tracks on Hacklore, then move into what “secure by design” looks like when you treat software security as an outcomes and incentives problem, not a checklist problem. The conversation closes with AI, dependency chains, and the career advice Bob gives to people trying to break into security.

    “Secure by design” is an incentives problem, not a technology problem

    When Bob talks about secure by design, he is deliberately not trying to write another technical framework. Plenty exist. His question is different: if we already know how to prevent a long list of common issues, why do we keep shipping the same defects? His answer is uncomfortable and practical: incentives. He draws a line to quality and safety movements outside software, especially automotive safety. Car companies used to compete on lifestyle and appearance, not safety. Customers did not know what to ask for. Manufacturers had little reason to prioritize safety until norms, regulation, and accountability shifted. Software, in his view, is still in the pre-seatbelt era. We have normalized shipping unsafe components, building with unsafe processes, and delivering unsafe defaults. Then we act as if customers should be able to configure their way out of systemic risk. From that lens, CISA’s Secure by Design work focuses on three principles:

    - Take ownership of customer security outcomes. Shipping a patch is not enough if you do not know whether customers update; measure adoption and remove friction.
    - Embrace radical transparency. Make vulnerability handling easier, not adversarial, and build real safe harbor for good-faith research.
    - Lead from the top. Meaningful change is driven by senior business leadership. You do not delegate quality to the quality team, and you do not delegate security outcomes to security teams alone.

    AI: the risk is permission amplification, not “AI is spooky”

    The AI section lands because it stays concrete. Dave shares a story where an internal LLM was asked, “Who at the company doesn’t like me?” The system reportedly queried HR data and responded. Bob uses that to highlight a predictable failure mode: agentic systems can become permission amplifiers. In many organizations, no single person has the ability to pull data from email, chat, and HR systems, then fuse it into a targeted answer. But companies are increasingly giving AI systems broad access paths without mature roles, rights, and auditing. Then we try to patch over it with soft instructions like “don’t be evil.” Bob’s point is not anti-AI. It’s pro-accountability. If the system can take actions and surface sensitive conclusions, you need guardrails that reflect that power.

    Supply chain reality: “It’s upstream” is not a defense

    Open source comes up in the context of underfunded teams who cannot afford premium tooling. Bob agrees the constraint is real, but he pushes back on the industry habit of outsourcing responsibility. If a defect ships in your product, it’s yours, even if it came from upstream. He also calls out a common failure pattern: vendors using unmaintained dependencies for years, and not giving customers visibility into what is actually inside the product. SBOM practices exist. Some companies do this well. Many do not.

    Mentioned in the episode: https://hacklore.org and https://pwn.college
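    The permission-amplification failure mode is easy to see in code. Here is a minimal sketch, with entirely hypothetical names (this is not a real 1Password or vendor API): an agent gateway authorizes each data-source lookup against the entitlements of the human who asked, not against the agent's own broad service account.

```python
# Hypothetical sketch: gate an agent's data-source access on the *requesting
# user's* entitlements, not the agent's service account. All names here are
# illustrative assumptions, not a real product API.

USER_ENTITLEMENTS = {
    "alice": {"email", "chat"},           # Alice may search her own email/chat
    "hr-admin": {"email", "chat", "hr"},  # only HR admins may touch HR records
}

def agent_fetch(requesting_user: str, source: str, query: str) -> str:
    """Refuse any lookup the human behind the prompt couldn't do themselves."""
    allowed = USER_ENTITLEMENTS.get(requesting_user, set())
    if source not in allowed:
        # Deny and audit, rather than relying on soft prompt instructions.
        return f"DENIED: {requesting_user} lacks access to {source}"
    return f"results for {query!r} from {source}"
```

    With this shape, the “who doesn’t like me?” prompt fails closed: `agent_fetch("alice", "hr", ...)` is denied because Alice herself has no HR access, which is the accountability Bob is arguing for.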

    34 min
  3. 10/28/2025

    Chasing Entropy Podcast 027: Building Zero Trust and Human-Centric Security with Kane Narraway

    In this episode of Chasing Entropy, I sit down with Kane Narraway, a security leader who has built and scaled Zero Trust environments at companies like Atlassian, Shopify, and Canva. Together, we explore the evolution of cybersecurity, from digital forensics to agentic AI, and the ongoing tension between innovation and control.

    From Forensics to Frameworks

    Kane’s journey into cybersecurity began with a fascination for hardware, inspired by tinkering with spare computer parts from his grandfather. That curiosity led him into networking, digital forensics, and ultimately enterprise security, laying the foundation for a pragmatic approach to defense. He recalls the early days of building Zero Trust architectures before the term became an industry buzzword, emphasizing how early implementations were often “collections of Python scripts” long before robust vendor solutions emerged.

    The Last Mile of Zero Trust

    Kane and I discuss the progress and pitfalls of Zero Trust adoption. While modern identity and access systems have made implementation easier, Kane argues that the industry still leans too heavily on network-level controls. “The point of Zero Trust was to stop relying on networks,” he notes, describing lingering issues like single-factor API keys and limited endpoint-level enforcement. His team’s experiments with proxy-based access models highlight how innovation often means rethinking, not just reinforcing, old ideas.

    The AI Security Dilemma

    The conversation turns to agentic AI: autonomous systems capable of acting on credentials and data. Both Kane and I express concern that current security strategies, built for humans, are ill-suited for bots. “We’ve spent so long protecting human users,” Kane warns, “but now service accounts and AI agents are our weakest link.” We explore real-world examples, including AI prompt injection attacks, and question how organizations can extend Zero Trust principles to these new autonomous entities.

    Governance, Responsibility, and “Bot Jail”

    As AI governance becomes a boardroom topic, Kane and I tackle the thorny question of accountability: when an AI system goes rogue, who’s to blame? We muse about the idea of a “bot jail,” underscoring that explainability and traceability, not just prevention, are essential in the age of automation.

    Building Security Cultures that Fit

    Beyond technology, Kane offers insights into building effective security teams that align with company culture. At Shopify, for instance, strong platform alignment meant setting clear principles and empowering teams to work autonomously. His advice for leaders: build around your organization’s DNA, not against it.

    Measuring What Matters

    Security impact can be hard to quantify. Kane recommends balancing operational metrics with threat intelligence and industry trend data, using reports like Verizon’s DBIR as directional guides. As credential-stuffing attacks decline and software supply chain threats rise, he stresses the importance of adapting defenses to real-world attacker behavior.

    Advice for the Next Generation

    For newcomers to cybersecurity, Kane’s advice is simple but grounded: “Do whatever you have to do to get in, and then find your passion.” Not everyone needs to start in red teaming; roles in governance, blue teams, or compliance can open doors and build transferable skills.

    Closing Notes

    After a wide-ranging discussion, I close with this question: coffee or tea? For Kane, it’s coffee at heart, but tea in practice. The perfect metaphor, perhaps, for the compromises every security leader makes between passion and practicality.

    Listen to the full episode of the Chasing Entropy Podcast on YouTube or your favourite podcast platform. Be sure to like and subscribe! Hosted by Dave Lewis, Global Advisory CISO at 1Password.
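    Kane’s point about proxy-based access models can be sketched in a few lines. This is a hypothetical toy, not any vendor’s implementation: every request is judged on identity, MFA, and device posture, never on network location, and unknown resources are denied by default.

```python
# Minimal Zero Trust, proxy-style access decision (illustrative only).
# Trust comes from the request itself: who is asking, how they authenticated,
# and what shape their device is in. Network location never grants access.

from dataclasses import dataclass

@dataclass
class Request:
    user: str
    mfa_passed: bool
    device_compliant: bool   # e.g., disk encrypted, OS patched
    resource: str

POLICY = {
    # resource -> conditions required by a hypothetical access policy
    "payroll-api": {"mfa": True, "compliant_device": True},
    "wiki":        {"mfa": True, "compliant_device": False},
}

def authorize(req: Request) -> bool:
    """Default-deny; every allow must be earned per request."""
    rules = POLICY.get(req.resource)
    if rules is None:
        return False                      # unknown resource: deny
    if rules["mfa"] and not req.mfa_passed:
        return False                      # no single-factor access, API keys included
    if rules["compliant_device"] and not req.device_compliant:
        return False
    return True
```

    The interesting property is what is absent: there is no check on source IP or network segment, which is exactly what “stop relying on networks” means in practice.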

    36 min
  4. 10/21/2025

    Chasing Entropy Podcast 026: Identity, AI, and the Future of Trust with Joseph Carson

    In this episode of the Chasing Entropy Podcast, I am joined by Joe Carson, Chief Security Evangelist and Advisory CISO at Delinea (formerly Thycotic), to explore how personal history, technology evolution, and emerging AI challenges shape the cybersecurity landscape.

    From Gaming to Global Security

    Joe shares his journey from growing up in Belfast with an early passion for gaming and coding, to building a decades-long career in IT and security. His path included pivotal moments, like responding to a massive DDoS attack in the early 2000s, that transformed his focus from systems administration to dedicated security research and identity protection.

    Identity as the New Perimeter

    Together, we examine how identity has evolved: from managing devices and offices to today’s world of bring your own identity and now bring your own agent. With AI agents increasingly requiring credentials and access, we emphasize the urgent need to rethink identity governance, not just for humans, but also for machines and autonomous systems.

    AI, Governance, and Regulation

    The conversation dives into the EU AI Act, GDPR, and the risks of poorly governed AI adoption. Joe highlights the importance of a risk-based approach to regulation, transparency in AI decision-making, and the critical role of explainability as the foundation of digital trust in the coming years.

    Practical Analogies and Lessons

    Using the metaphor of an alarm clock evolving from simple to “agentic,” Joe illustrates how seemingly harmless technologies can become critical risk points as they accumulate access to health, financial, and personal data. The discussion reinforces why privilege management and least-access principles are more crucial than ever.

    Key Takeaways

    - Identity is central: securing human and non-human access alike is now a strategic priority.
    - AI needs governance: explainability and accountability must be built in from the start.
    - Community matters: cybersecurity is sustained not just by technology, but by mentorship, collaboration, and shared experience.

    🔗 Be sure to like, subscribe, and share the Chasing Entropy podcast. And if you’re attending a security conference soon, keep an eye out for Joe Carson; he’ll probably be there.
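    The least-access idea for non-human identities can be sketched as short-lived, narrowly scoped credentials. This is an illustrative toy under assumed names, not Delinea’s or anyone’s implementation: the “agentic alarm clock” gets a token for exactly one resource, and only briefly, so it never accumulates standing access to health or financial data.

```python
import time

# Toy illustration of least access for machine/agent identities: credentials
# are scoped to a single resource and expire quickly. Names are assumptions.

def issue_token(identity: str, scope: str, ttl_seconds: int = 300) -> dict:
    """Mint a narrowly scoped credential that expires after ttl_seconds."""
    return {"sub": identity, "scope": scope, "exp": time.time() + ttl_seconds}

def check(token: dict, resource: str) -> bool:
    """Valid only for the exact scope it was minted with, and only until expiry."""
    return token["scope"] == resource and time.time() < token["exp"]

calendar_token = issue_token("alarm-agent", scope="calendar:read")
assert check(calendar_token, "calendar:read")    # allowed: matching scope
assert not check(calendar_token, "health:read")  # denied: scope mismatch
```

    The design choice matters more than the code: when agent credentials expire by default, the blast radius of a compromised agent is bounded in both scope and time.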

    33 min
  5. 10/14/2025

    Chasing Entropy Podcast 025: Heidi Potter on Building Community and Leading with Kindness

    In this episode of Chasing Entropy, I sit down with Heidi Potter, longtime organizer of ShmooCon and now CEO of Turngate, for a heartfelt conversation about community, chaos, and legacy in cybersecurity.

    From ShmooCon to What’s Next

    For 20 years, Heidi helped shape ShmooCon into one of the most influential community-driven conferences in the industry. She reflects on the decision to sunset the event, sharing stories of the unexpected impact it had: first talks that launched careers, lifelong friendships, even marriages that began at the con. What started as a grassroots gathering became a cornerstone of hacker culture, thanks to her team’s dedication and her philosophy of “happy staff, happy event.”

    Lessons in Transparency and Leadership

    Heidi shares how ShmooCon embraced radical transparency through its Own the Con sessions, revealing the financial realities, challenges, and choices behind running a conference. She explains why building the right team, and treating the venue itself as part of that team, are essential to success. Her guiding principle of “lead with kindness” underscores both her event leadership style and her approach to life.

    Stories, Chaos, and Community Magic

    From snowstorms that stranded attendees for days, to the legendary “Shmoo Bus,” to the serendipity of LobbyCon, Heidi and I trade stories that highlight the humor, chaos, and magic that defined the event. For Heidi, coordinating chaos isn’t just a skill; it’s a way of finding order, meaning, and connection in unpredictable moments.

    Looking Forward

    While ShmooCon has closed its doors, Heidi isn’t done building community. She’s already laying the groundwork for new events under her Moose Meat initiative, with plans to create smaller, more flexible gatherings in the future. Above all, her focus remains on giving back to the community and leading with kindness.
Listen now to hear Heidi’s reflections on two decades of ShmooCon, her insights on building inclusive communities, and why the stories we create together matter just as much as the code we write.

    36 min
  6. 10/08/2025

    Chasing Entropy Podcast 025: “Agents, the Legacy Web, and Logins that Don’t Leak” with Paul Klein IV

    In this episode of the Chasing Entropy Podcast, I spoke with Paul Klein IV about the emerging “agentic web,” where AI agents perform real-world digital tasks on our behalf. Paul shares how Browserbase builds secure infrastructure for these agents to interact with websites safely, and how new integrations with 1Password’s Agentic Autofill enable secure, human-approved credential use without exposing secrets to AI models. Together, we explore how this evolution of automation can make the web more useful, while keeping it secure, observable, and aligned with human intent.

    Key takeaways

    1. The rise of the “agentic web.” The internet still runs on legacy systems with no APIs; think DMV forms and government portals. Browserbase enables AI agents to safely automate tasks on these sites using headless browsers (full browsers without a GUI). These agents can perform structured, repetitive workflows, like procurement, compliance checks, or data lookups, without human micromanagement.

    2. Automation that works like an intern. AI isn’t magic; it needs structure. Klein compares AI agents to interns: they’re capable but need clear instructions, context, and defined steps. Repetitive “SOP-style” tasks are ideal; vague one-line prompts aren’t.

    3. Stagehand and Director: building automation for everyone. Stagehand (open source) allows natural-language automation using “fuzzy selectors” like “click the login button” instead of brittle scripts. Director lets anyone prompt AI to build web workflows, see the generated code in real time, and reuse it in production environments.

    4. Guardrails: observability before autonomy. Browserbase includes live session replay; you can literally watch what your AI agent is doing in a headless browser. Observability ensures safety and accountability, and cached workflows reduce dependency on LLMs over time. Governance best practice: treat AI tool use as remote code execution. Sandbox it, restrict tool access, and monitor every action.

    5. Secure authentication for agents. 1Password Agentic Autofill now works in Director, allowing agents to securely log in with stored credentials. The human stays in the loop: every login request is approved (or denied) in real time. Passwords are never shared with the model; 1Password fills them directly into the browser.

    The pragmatic future of AI automation

    Paul sees agentic browsing not as a replacement for humans, but as a relief valve for digital drudgery. AI can handle the tedious work, checking orders, renewing passports, filling government forms, so humans can focus on creative and strategic thinking. “We’ve automated the equivalent of a couple thousand human lifetimes of browsing,” Klein notes. “That’s time people get back.”

    For CISOs and security leaders

    Paul’s advice:

    - Treat AI agents like RCE: lock down execution environments, sandbox them, and validate every dependency.
    - Constrain tool access: only approved connectors or MCPs should be callable.
    - Start with observability: log every action and enable real-time oversight before allowing automation to run at scale.

    Memorable quote: “AI is your intern. Give it the shopping list and the steps.” ~ Paul Klein

    Listen to this episode of Chasing Entropy wherever you get your podcasts: no hype, no FUD, just the humans behind the next wave of cybersecurity and AI automation. Also on YouTube: https://www.youtube.com/watch?v=o4tgJz_4WcM
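    The “constrain tool access” and human-in-the-loop ideas can be sketched together. This is a hypothetical illustration, not the actual Browserbase or 1Password API: the agent may invoke only allowlisted tools, and any credential use blocks until a human explicitly approves it.

```python
# Hypothetical sketch of constrained tool access for an agent. None of these
# names are real Browserbase or 1Password APIs; the shape is the point:
# default-deny tools, and route credential use through a human decision.

APPROVED_TOOLS = {"fetch_page", "fill_form", "login"}
REQUIRES_HUMAN_APPROVAL = {"login"}  # credential use always prompts a person

def call_tool(name: str, human_approved: bool = False) -> str:
    """Run a tool only if allowlisted; gate sensitive tools on a human."""
    if name not in APPROVED_TOOLS:
        raise PermissionError(f"tool {name!r} is not on the allowlist")
    if name in REQUIRES_HUMAN_APPROVAL and not human_approved:
        return "pending human approval"   # agent waits until someone decides
    return f"executed {name}"

print(call_tool("fetch_page"))                   # executed fetch_page
print(call_tool("login"))                        # pending human approval
print(call_tool("login", human_approved=True))   # executed login
```

    Note what the model never sees in this shape: the password itself. Approval releases the action, not the secret, which mirrors the fill-directly-into-the-browser design described above.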

    35 min
  7. 10/07/2025

    Chasing Entropy Podcast 024: Dhillon of Hack in the Box on Conferences, Chaos, and the Future of Security

    In this episode of Chasing Entropy, I sit down with Dhillon Kannabhiran, the founder of the long-running Hack in the Box (HITB) Security Conference, to explore the origins, evolution, and impact of one of the world’s most influential hacker gatherings.

    From Kuala Lumpur to Global Stages

    Dhillon shares the unlikely beginnings of HITB in Malaysia, which started as a scrappy, accessible alternative to high-cost events like Black Hat. Against all odds, and skepticism that “nobody would come to Malaysia,” HITB attracted global speakers and quickly became a fixture in Asia, the Middle East, and Europe. Along the way came wild stories of last-minute chaos, cultural exchanges, and the conference’s deliberate focus on building community through face-to-face connections.

    Curating Talks and Building Community

    The conversation dives into how talks are chosen, balancing technical depth with accessibility, and ensuring new voices get a platform. Dhillon emphasizes that HITB isn’t just about the talks you can rewatch later; it’s about hallway conversations, TCP/IP networking sessions, and serendipitous encounters that spark startups, collaborations, and lifelong friendships.

    Security Lessons (and Non-Lessons)

    Looking back at two decades of research presented at HITB, Dhillon is candid: many of the same problems persist, only shifted into new technologies. From classic exploits to today’s “vibe coding” and AI-assisted development, human error and misunderstanding remain the root causes of vulnerabilities. Still, this constant reinvention ensures hackers, and defenders, will never run out of work.

    AI, Translation, and the Future of Conferences

    The discussion expands to how AI is reshaping both hacking and events. From bug-hunting orchestration with AI agents to real-time language translation devices, the tools are changing fast. Dhillon warns of risks like AI-generated deepfakes but also highlights opportunities for accessibility, inclusivity, and global collaboration.

    Words to Hack By

    Dhillon closes with advice for hackers and builders alike: “Try stuff out. Don’t hold back. Don’t think there’s going to be a tomorrow. Do whatever you can today. Keep hacking, bro.”

    40 min
  8. 09/30/2025

    Chasing Entropy Podcast 023: Cybersecurity Meets M&A with Cole Grolmus

    In this episode of Chasing Entropy, I sit down with Cole Grolmus, founder of Strategy of Security, to explore the often-overlooked world where cybersecurity and mergers & acquisitions (M&A) collide.

    The Journey to Strategy of Security

    Cole shares his path from early sysadmin roles in Iowa to a decade at PwC, where he worked on large-scale cybersecurity transformations. Along the way, he blended business acumen with technical expertise, ultimately founding Strategy of Security to bridge the gap between practitioners and the commercial side of the industry.

    M&A and Cybersecurity: Where Risk Meets Value

    The conversation dives deep into the realities of cybersecurity in M&A:

    - The real “gotchas”: deals rarely fall apart solely due to security issues, but identifying problems early can shape budgets and integration strategies.
    - Integration challenges: from identity platforms to logging, customer management systems, and vendor contracts, successful acquisitions depend on planning for forward-looking integration, not just current posture.
    - Reasonable assurance: much like audits, due diligence can only go so far. Complete certainty is impossible, and security leaders must manage risk with contingencies like holdbacks and clawbacks.

    The AI Wild West

    Cole and I touch on the rising role of agentic AI in enterprises. Whether it’s ephemeral developer tools or standing customer-facing agents, the lack of maturity and consistency makes integration during M&A even more complex.

    Advice for Security Leaders

    For CISOs facing M&A, Cole emphasizes:

    - Have a playbook: not all M&A is bad, but leaders must prepare to handle inherited risks.
    - Factor M&A into your vendor strategy: the cybersecurity industry itself is consolidating rapidly, with billion-dollar deals becoming common. Vendor stability (or lack thereof) is now a core risk to manage.
    - Pay attention to the business side: as careers progress, understanding the industry landscape matters as much as technical defenses.

    Key Takeaway

    M&A in cybersecurity isn’t just about dollars and deals; it’s about managing complexity, risk, and people. Whether you’re a CISO preparing for an acquisition or a practitioner navigating vendor shakeups, the ability to translate between business imperatives and technical realities is critical.

    36 min
