Exploring Information Security

Timothy De Block

The Exploring Information Security podcast interviews a different professional each week exploring topics, ideas, and disciplines within information security. Prepare to learn, explore, and grow your security mindset.

  1. MAR 3

    Exploring the Bad Advice Cybersecurity Professionals Provide the Public

    Summary: In this episode, Timothy De Block sits down with cybersecurity expert Bob Lord to discuss the dangerous impact of "Hacklore": obsolete, excessive, and fear-based cybersecurity advice. They explore how bombarding everyday users with spy-thriller scenarios (like juice jacking and evil baristas) leads to security fatigue and inaction. Instead, they advocate for shifting the burden of security away from the user and onto tech companies, while narrowing consumer advice down to the absolute basics: multi-factor authentication (MFA), password managers, and credit freezes.

    Key Topics Discussed

    - The Origins of Hacklore: Bob Lord started the Hacklore website after a CISO friend emailed him a "trifecta" of problematic security advice concerning public Wi-Fi, juice jacking, and restaurant QR codes. The initiative serves as an expert-backed resource that debunks common myths and promotes better, actionable security guidance.
    - Rethinking Security Advice: Giving users excessive or overly complex advice often results in them ignoring it entirely. Security advice needs to be constantly reevaluated so that it addresses actual, common crimes rather than unlikely scenarios like an "evil barista" intercepting data.
    - Shifting the Security Burden: Responsibility for digital safety should move away from the end user and toward internet service providers and tech companies. Companies must adopt "secure by design" practices, such as requiring password changes upon installation or shipping routers with unique default passwords.
    - The Power of MFA: Multi-factor authentication is essential for protecting vulnerable populations, such as seniors who are frequently targeted by organized fraud. Even SMS-based MFA is far better than no MFA at all; according to a Microsoft study, it degrades most common attacks.
    - The Hidden Benefit of Password Managers: A major, underappreciated benefit of password managers is their built-in phishing resistance. If a user is tricked into visiting an imposter website, the password manager will not fill in the credentials, effectively stopping the attack in its tracks.
    - Freezing Credit: Implementing a credit freeze is another highly recommended, fundamental security measure that builds directly on the basic practices promoted by the Hacklore initiative.
    - Learning from Near Misses: At the upcoming RSA Conference, Bob Lord will discuss the concept of cybersecurity "near misses". He argues that the field should learn from incidents that almost went wrong, much like the safety approach used in aviation.

    Memorable Insights

    - Sharing obsolete security advice can be considered an "act of harm" because it distracts people from effective measures and can create a fatalistic mindset that no security action will help.
    - Since most people will only dedicate a few minutes a year to security, recommendations must be strictly limited to what is truly feasible for them to implement.
    - Getting a friend or family member to make just one security change, like enabling MFA on their primary email account, is a significant victory.

    Resources Mentioned

    - Hacklore Initiative: A non-commercial website aimed at replacing obsolete cybersecurity advice with expert-backed guidance (hacklore.org).
    - Hacklore on Bluesky: Follow the movement and join the conversation at @hacklore.bsky.social.
    - "How effective is multifactor authentication at deterring cyberattacks?": The Microsoft research paper referenced by Bob Lord on the real-world efficacy of MFA (arXiv:2305.00945): https://arxiv.org/abs/2305.00945.
    - Bob Lord's Updated Cyber Guidance for Small Businesses: Originally written during his time at CISA; Bob has updated this practical security guide on his personal blog (read on Medium).
    - Methods of Delivery vs. Intrusion (The Hacklore Edition): A blog post explaining why the security industry shouldn't over-index on flashy threats like parking meter QR codes (read on Medium).
    - PSA: Elevator (un)safety: In addition to his popular seatbelt analogy, Bob explores the concept of built-in safety in this blog post about elevators (read on Medium).
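
    The password-manager point above has a concrete mechanism behind it: saved credentials are bound to an exact origin, so a look-alike phishing domain simply gets nothing. A minimal sketch of that idea (toy code, not any real password manager's implementation; the vault contents and domains are invented):

```python
# Toy illustration of a password manager's phishing resistance via
# strict origin matching. The vault contents and domains are invented.

from urllib.parse import urlparse

# Hypothetical saved vault: hostname -> (username, password)
VAULT = {
    "accounts.example.com": ("alice", "correct horse battery staple"),
}

def autofill(url: str):
    """Offer credentials only when the page's hostname exactly matches
    the hostname the credential was saved for; otherwise refuse."""
    host = urlparse(url).hostname
    return VAULT.get(host)  # None for any non-matching host

# The legitimate site gets its credentials...
assert autofill("https://accounts.example.com/login") is not None
# ...while a convincing look-alike domain gets nothing at all.
assert autofill("https://accounts-example.com.evil.io/login") is None
```

    Unlike a human, the lookup never "squints" at a similar-looking domain, which is exactly why the attack stops even when the user is fooled.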

    37 min
  2. FEB 24

    Inside Cambodia's Scam Compounds: Pig Butchering, Organized Crime, and Protecting Your Life Savings

    Summary: Timothy De Block sits down with former FBI agent Scott Augenbaum to discuss his eye-opening trip to Cambodia, which has become the "online scam capital of the world". They dive into the terrifying evolution of "pig butchering" scams, how Chinese organized crime and geopolitical investments have fueled a massive criminal ecosystem, and why the ultimate vulnerability is still human psychology. Scott explains the massive scale of these operations and shares the single most important step you can take to avoid losing your money to these syndicates.

    Key Topics Discussed

    - The Ground Zero of Scams: Scott discusses his trip to Sihanoukville, Cambodia, a city filled with scam compounds hiding in plain sight behind casino facades and fortress-like buildings with their backs facing the street.
    - The Pivot to "Pig Butchering": How China's 2018 ban on online gambling and the 2020 COVID-19 casino shutdowns forced organized crime to pivot to massive, highly organized cryptocurrency and romance advance-fee scams.
    - A Geopolitical Nightmare: The complexity of combating these compounds when they are backed by Chinese investment and infrastructure (such as a highway built using Huawei routers). This dynamic leaves local law enforcement hesitant to intervene and limits the FBI's power.
    - The Anatomy of a $5.2 Million Scam: Scott breaks down a devastating case of "pig butchering", detailing how scammers use fake simulated trading apps, "spot gold trading", and artificial intelligence to fatten victims up before stealing millions.
    - The Double Crisis: The conversation acknowledges the horrifying human trafficking of compound workers, often lured from underdeveloped nations with fake job offers, while also focusing on the victims in the US and globally who are losing billions.
    - The "Cancer Drug" Problem: Why organizations and individuals often only invest in security after they've been breached, or merely to meet compliance requirements.
    - One Essential Tip: The absolute necessity of understanding social engineering and enabling two-factor authentication (2FA) on all mission-critical accounts, such as home routers, cellular providers, iCloud, and Gmail.

    Memorable Quotes

    - "If you're not going to make money through gambling, you're going to make money through the old-fashioned way, scamming." — Scott Augenbaum
    - "We don't need to make information security people smarter... We need to get the end users up to taking it seriously." — Scott Augenbaum
    - "I deal with people who want to buy the cancer drug after they had cancer. They don't want to buy it before because, well, that's too much work." — Scott Augenbaum

    Resources Mentioned

    - Book: The Secret to Cyber Security by Scott Augenbaum.
    - Special Offer: Scott is generously offering a free audio or electronic copy of his book to listeners. Reach out to him directly to claim it.
    - Contact Scott: scott@cybersecuremindset.com.

    40 min
  3. FEB 17

    What are the AI Vulnerabilities We Need to Worry About?

    Episode Summary: Timothy De Block sits down with Keith Hoodlet, security researcher and founder of Securing.dev, to navigate the chaotic and rapidly evolving landscape of AI security. They discuss why "learning" is the only vital skill left in security, how Large Language Models (LLMs) actually work (and how to break them), and the terrifying rise of AI agents that can access your email and bank accounts. Keith explains the difference between inherent AI vulnerabilities (like model inversion) and the reckless implementation of AI agents that leads to "free DoorDash" exploits. They also dive into the existential risks of disinformation, where bots manipulate human outrage and poison the very data future models will train on.

    Key Topics

    - Learning in the AI Era: The "Zero to Hero" approach: how Keith uses tools like Claude to generate comprehensive learning plans and documentation for his team, and why accessible tools like YouTube and AI make learning technical concepts easier than ever.
    - Understanding the "Black Box": How LLMs work: Keith breaks down LLMs as a "four-dimensional array of numbers" (weights) where words are converted into tokens and calculated against training data. Open weights give users the ability to manipulate those weights to reinforce specific data (e.g., European history vs. Asian Pacific history).
    - AI Vulnerabilities vs. Attacks:
      - Prompt Injection: "Social engineering" the chatbot into performing unintended actions.
      - Membership Inference: Determining whether specific data (like yours) is in a training set, which has massive implications for GDPR and the "right to be forgotten".
      - Model Inversion: Stealing weights and training data. Keith cites speculation that Chinese espionage used this technique to "shortcut" their own model training using US labs' data.
      - Evasion Attacks: A technique rather than a vulnerability. Example: Jason Haddix bypassing filters to generate an image of Donald Duck smoking a cigar by describing the character's attributes rather than naming the character.
    - The "Agent" Threat:
      - Running with Katanas: Giving AI agents access to browsers, file systems (~/.ssh), and payment methods is a massive security risk.
      - The DoorDash Exploit: A real-world example where a user tricked a friend's email-connected AI bot into ordering them free lunch for a week.
    - Supply Chain & Disinformation:
      - Hallucination Squatting: AI-generated code that pulls in non-existent packages, which attackers can then register to inject malware.
      - The Cracker Barrel Outrage: How a bot-driven disinformation campaign manufactured fake outrage over a logo change, fooling a major company and the news media.
      - Data Poisoning: The "Russian Pravda network" seeding false information to shape the training data of future US models.

    Memorable Quotes

    - "It’s like we’re running with... not just scissors, we’re running with katanas. And the ground that we're on is constantly changing underneath our feet." — Keith Hoodlet
    - "We never should have taught runes to sand and allowed it to think." — Keith Hoodlet
    - "The biggest bombshell here is that we are the vulnerability. Because we're going to get manipulated by AI in some form or fashion." — Timothy De Block

    Resources Mentioned

    - Books: Active Measures: The Secret History of Disinformation and Political Warfare by Thomas Rid; The Intelligent Investor by Benjamin Graham; Thinking, Fast and Slow by Daniel Kahneman; Churchill: A Life by Martin Gilbert.
    - Videos & Articles: 3Blue1Brown (YouTube): "But what is a neural network?" (Deep Learning series); Keith's blog: "Life After the AI Apocalypse".

    About the Guest

    Keith Hoodlet is a Security Researcher at Trail of Bits and the creator of Securing.dev. A self-described "technologist who wants to move to the woods," Keith specializes in application security, threat modeling, and deciphering the complex intersection of code and human behavior.

    Website: securing.dev
    Mastodon: Keith on Infosec.Exchange
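
    The "hallucination squatting" risk above suggests a simple, boring mitigation: vet AI-suggested dependencies against a known-good list before anything gets installed. A minimal sketch of that gate (the allowlist and package names are invented for illustration; a real pipeline would also check registry age, download history, and maintainers):

```python
# Toy gate against hallucination squatting: quarantine any AI-suggested
# dependency that isn't on a vetted internal allowlist. Names invented.

KNOWN_GOOD = {"requests", "numpy", "cryptography"}  # vetted by a human

def vet_suggested_packages(suggested):
    """Split AI-suggested dependencies into approved and quarantined lists."""
    approved = [p for p in suggested if p in KNOWN_GOOD]
    quarantined = [p for p in suggested if p not in KNOWN_GOOD]
    return approved, quarantined

ok, held = vet_suggested_packages(["requests", "requessts-auth-helper"])
assert ok == ["requests"]
assert held == ["requessts-auth-helper"]  # plausible-looking hallucination
```

    The quarantined names are exactly the ones an attacker could have registered, so they get human review instead of a blind `pip install`.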

    52 min
  4. JAN 27

    How to Build an AI Governance Program

    Summary: Timothy De Block sits down with Walter Haydock, founder of StackAware, to break down the complex world of AI governance. Walter moves beyond the buzzwords to define AI governance as the management of risk related to non-deterministic systems: systems where the same input doesn't guarantee the same output. They explore why the biggest AI risk facing organizations today isn't necessarily a rogue chatbot or a sophisticated cyber attack, but rather HR systems (like video interviews and performance reviews) that are heavily regulated and often overlooked. Walter provides a practical, three-step roadmap for organizations to move from chaos to calculated risk-taking, emphasizing the need for quantitative risk measurement over vague "high/medium/low" assessments.

    Key Topics & Insights

    - What is AI Governance? Walter defines it as measuring and managing the risks (security, reputation, contractual, regulatory) of non-deterministic systems.
    - The 3 Buckets of AI Security: AI for security (AI-powered SOCs, fraud detection); AI for hacking (automated pentesting, generating phishing emails); and security for AI (the governance piece: securing the models and data themselves).
    - The "Hidden" HR Vulnerability: While security teams focus on hackers, the most urgent vulnerability is often in human resources. Tools for hiring, firing, and performance evaluation are highly regulated (e.g., NYC Local Law 144, the Illinois AI Video Interview Act) yet frequently lack proper oversight.
    - How to Build an AI Governance Program (The First 3 Steps):
      1. Establish a Policy: Define your risk appetite (what is okay vs. not okay).
      2. Inventory Systems (with Amnesty): Ask employees what they are using, without fear of punishment, to get an accurate picture.
      3. Risk Assessment: Assess the inventory against your policy. Use a tiered approach: prioritize regulated and cyber-physical systems first, then confidential data, then public data.
    - Quantitative Risk Management: Move away from "High/Medium/Low" charts. Walter advocates measuring risk in dollars of loss expectancy using methodologies like FAIR (Factor Analysis of Information Risk) or the Hubbard-Seiersen method.
    - Emerging Threats: Agentic AI. The next 3-5 years will be defined by "non-deterministic systems interacting with other non-deterministic systems," creating complex governance challenges.
    - Regulation Roundup: Companies are largely unprepared for the wave of state-level AI laws coming online in places like Colorado (SB 205), California, Utah, and Texas.

    Resources Mentioned

    - ISO 42001: The global standard for building AI management systems (similar to ISO 27001 for information security).
    - Cloud Security Alliance (CSA): Recommended for their AI Controls Matrix.
    - Book: How to Measure Anything in Cybersecurity Risk by Douglas Hubbard and Richard Seiersen.
    - StackAware Risk Register: A free template combining the Hubbard-Seiersen and FAIR methodologies.
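
    The quantitative approach Walter describes can be sketched in a few lines: instead of a "High/Medium/Low" label, simulate a year of loss events and report a dollar figure. This is a FAIR-flavored toy model, not Walter's actual methodology; the event rate and loss range below are invented placeholders, not figures from the episode:

```python
# FAIR-flavored toy model: yearly loss = (number of loss events) x
# (loss magnitude per event), Monte Carlo averaged into an annualized
# loss expectancy (ALE) in dollars. All inputs are invented placeholders.

import random

random.seed(42)  # reproducible sketch

def simulate_ale(event_rate, loss_low, loss_high, trials=100_000):
    """Monte Carlo estimate of annualized loss expectancy in dollars."""
    total = 0.0
    for _ in range(trials):
        events = random.randint(0, 2 * event_rate)  # crude frequency model
        total += sum(random.uniform(loss_low, loss_high) for _ in range(events))
    return total / trials

ale = simulate_ale(event_rate=1, loss_low=50_000, loss_high=250_000)
print(f"Estimated ALE: ${ale:,.0f}")  # one dollar figure, not a color on a chart
```

    A dollar figure like this can be compared directly against the cost of a control, which is the whole point of moving past qualitative heat maps.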

    31 min
  5. JAN 20

Exploring Cribl: Sifting Gold from Data Noise

    Summary: Timothy De Block and Ed Bailey, a former customer and current Field CISO at Cribl, discuss how the company is tackling the twin problems of data complexity and AI integration. Ed explains that Cribl's core mission, derived from the French "cribler" (to screen or sift), is to provide data flexibility and cost management by routing the most valuable data to expensive tools like SIEMs and everything else to cheap object storage. The conversation covers the 40x productivity gains from their "human in the loop" AI, Cribl Co-Pilot, and their expansion into "agentic AI" to fight back against sophisticated threats.

    Cribl's Core Value Proposition

    - Data Flexibility & Cost Management: Cribl's primary value is giving customers the flexibility to route data from "anywhere to anywhere". This allows organizations to manage costs by classifying data: valuable data goes to high-value, high-cost platforms like SIEMs (Splunk, Elastic), while retention data goes to inexpensive object storage (3 to 5 cents per gigabyte).
    - Matching Cost and Value: This approach ensures the most valuable data gets the premium analysis while retaining all data necessary for compliance, addressing the CISO's fear of missing a critical event.
    - SIEM Migration and Onboarding: Cribl mitigates the risk of disruption during SIEM migration, a major concern for CISOs, by acting as an abstraction layer. This can dramatically accelerate migration time; one large insurance company migrated to a next-gen SIEM in five months, a process their CISO projected would otherwise have taken two years.
    - Customer Success Story (UBA): Ed shared a story where his team used Cribl Stream to integrate an expensive User and Entity Behavior Analytics (UBA) tool with their SIEM in two hours for a proof of concept. This saved 9-10 months and the deployment of 100,000 agents, delivering 100% of the UBA tool's value in just two weeks.

    AI Strategy and Productivity Gains

    - "Human in the Loop" AI: Cribl's initial AI focus is on Co-Pilot, which helps people use the tools better. This approach prioritizes accuracy and addresses the fact that enterprise tooling is often difficult to use.
    - 40x Productivity Boost: Co-Pilot Editor automates the process of mapping data into complex, esoteric data schemas (for tools like Splunk and Elastic). This reduced the time to create a schema for a custom data type from approximately a week to about one hour, a massive gain in workflow productivity.
    - Roadmap Shift to Agentic AI: Following CriblCon, the roadmap is shifting toward "agentic AI" that operates in the background, focused on building trust through carefully controlled and validated value.
    - AI in Search: The Cribl Search product has built-in AI that suggests better ways for users to write searches and use features, addressing the fact that many organizations fail to get full value from their search tools because users don't know how to use them efficiently.

    Challenges and Business Model

    - Data Classification Pain Point: The biggest challenge during deployment is that many users "have never really looked at their data". This leads to time spent classifying data and defining the "why" (the end goal) before working on the "how".
    - Vendor Pushback and MSSP Engagement: Splunk previously sued Cribl over cost management, though the resulting damages were only one dollar, demonstrating that some vendors initially get upset. Meanwhile, Cribl is highly engaged with MSSP/MDR providers because its flexibility dramatically lowers their integration costs and time, letting them get paid faster and offer a wider suite of services.
    - Pricing Models: Cribl offers two main models. Self-Managed (Stream & Edge) uses a topline license based on capacity (terabytes purchased); Cloud (Lake & Search) uses a consumption model based on credits (what is actually used).
    - Empowering the Customer: Cribl's mission is to empower customers by opening up choices and enabling their goals, in contrast with vendors where it's "easy to get in, the data never gets out".
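
    The routing model described above boils down to a classification decision per event: premium analysis for high-value data, cheap retention for the rest. A minimal sketch of the idea (this is not Cribl code; the source names and classification rule are invented for illustration):

```python
# Toy version of value-based log routing: high-value sources go to the
# expensive analytics tier, everything else to cheap object storage.
# Source names and the rule itself are invented for illustration.

HIGH_VALUE_SOURCES = {"auth", "edr", "firewall-deny"}

def route(event: dict) -> str:
    """Return the destination tier for a single log event."""
    if event.get("source") in HIGH_VALUE_SOURCES:
        return "siem"            # premium analysis (Splunk, Elastic, ...)
    return "object-storage"      # pennies per gigabyte, kept for compliance

assert route({"source": "auth", "msg": "login failed"}) == "siem"
assert route({"source": "debug-trace", "msg": "heartbeat"}) == "object-storage"
```

    The point is that every event is still retained somewhere, so cost control doesn't come at the price of missing a critical record at investigation time.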

    33 min
