She Said Privacy/He Said Security

Jodi and Justin Daniels

This is the She Said Privacy / He Said Security podcast with Jodi and Justin Daniels. Like any good marriage, Jodi and Justin will debate, evaluate, and sometimes quarrel about how privacy and security impact business in the 21st century.

  1. 2 DAYS AGO

    GPC and UOOMS: Do Consumers Want an On/Off Switch or a Dimmer?

    Andy Hepburn is the Co-founder of NEOLAW LLC and General Counsel at SafeGuard Privacy. He is a privacy lawyer with deep experience helping clients in the digital advertising industry navigate complex privacy laws.

    In this episode…

    Global Privacy Control (GPC) is transforming the way companies approach consumer consent. The rise of state privacy laws has fueled an explosion of cookie consent banners and other consent mechanisms that tend to confuse consumers about what they’re agreeing to. GPC, also known as a universal opt-out mechanism, offers a simpler alternative by allowing consumers to set their privacy permissions once for electronic tracking at the browser level. Yet its current all-or-nothing design raises the question: Does a single switch reflect what consumers really want?

    Some consumers want to block all digital tracking, while others are open to targeted ads in specific situations, like shopping for a car or clothing. Most consumers fall somewhere in between. Earlier attempts, like the Do Not Track initiative, received pushback from the advertising industry, which argued that a simple on/off switch was too limited to capture the diversity of consumer privacy preferences. A more nuanced approach would let individuals accept targeted ads in some areas while blocking them in others. Industry standards, such as the Interactive Advertising Bureau's Global Privacy Platform and the Multi-State Privacy Agreement, are designed to help companies ensure that consumer privacy preferences are consistently applied across publishers, advertisers, and the numerous intermediaries in the ad ecosystem. As consumer pressure and regulatory enforcement actions intensify, adoption of these standards may accelerate across industries.
In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels chat with Andy Hepburn, Co-founder of NEOLAW LLC and General Counsel at SafeGuard Privacy, about whether universal opt-out mechanisms meet the needs of today’s consumers. Andy explains why a single opt-out switch falls short of consumer needs and what more flexible models could enable. He highlights how industry standards can help companies and their vendors transmit privacy preferences across the ad ecosystem and why adoption will depend on consumer pressure and regulatory enforcement actions. Andy also explores the challenges smaller companies face in meeting privacy compliance requirements and how cooperation among regulators could shape the next phase of privacy enforcement.
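    The browser-level GPC signal discussed above reaches websites as an HTTP request header (`Sec-GPC: 1`) and as a JavaScript property (`navigator.globalPrivacyControl`). As a rough illustration only — the helper function and the decision to treat the signal as an opt-out of sale/sharing are assumptions for this sketch, not anything described by the guest — server-side detection might look like:

```python
def gpc_opt_out(headers: dict[str, str]) -> bool:
    """Return True if the request carries a Global Privacy Control signal.

    Per the GPC specification, the signal is the request header `Sec-GPC: 1`;
    any other value, or its absence, means no opt-out preference was expressed.
    """
    # HTTP header names are case-insensitive, so normalize keys before lookup.
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("sec-gpc") == "1"


# Hypothetical usage: honor the signal as an opt-out of sale/sharing.
if gpc_opt_out({"Sec-GPC": "1", "User-Agent": "ExampleBrowser"}):
    pass  # e.g., suppress third-party ad trackers for this request
```

    Note that what a site must do upon receiving the signal depends on the applicable state law, which is part of the ambiguity the episode explores.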

    38 min
  2. 25 SEPT.

    Navigating the New Rules of Healthcare Advertising

    Jeremy Mittler is the Co-founder and CEO of Blueprint Audiences. With nearly two decades in healthcare, advertising, and privacy, Jeremy has shaped how marketers reach patients and providers. At Blueprint, he is creating a new, privacy-safe way to build health audiences that ensures compliance with HIPAA and state privacy laws.

    In this episode…

    Healthcare marketers face mounting pressure to deliver personalized ads while ensuring compliance with the Health Insurance Portability and Accountability Act (HIPAA) and the growing list of state privacy laws, where gray areas around sensitive and consumer health information make compliance especially complex. Marketers who rely on broad targeting and legacy ad tech tools are finding that old methods no longer meet legal requirements. So, how can companies target health audiences in a way that is both effective and aligned with privacy obligations?

    Rather than treating privacy as a trade-off with precision, healthcare marketers can start by building a privacy-safe experience for the consumers who see their ads, then optimize for business goals from there. Proven methods, such as contextual advertising and using consented, opt-in data and aggregated insights derived from personal information, enable effective and privacy-forward campaigns. Yet these methods alone are not enough. Marketers and companies alike need to perform due diligence on their vendors and third-party ad tech platforms, especially as AI introduces new risks. Marketers can take simple steps, such as testing consumer opt-outs and exercising their privacy rights on vendor sites, to confirm the technology works as intended.

    In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Jeremy Mittler, Co-founder and CEO of Blueprint Audiences, about how companies can create privacy-safe healthcare audience segments.
Jeremy explains why relying solely on HIPAA is no longer sufficient to meet compliance obligations and outlines the challenges companies face while navigating the patchwork of requirements under evolving state privacy laws. He details practical methods that allow marketers to reach the right audiences without compromising privacy and describes why vendor due diligence must go beyond checklists, urging marketers to test vendor ad tech platforms and to think like consumers when assessing ad experiences. Jeremy also discusses how AI complicates the boundary between aggregated and personal data and how emerging regulatory trends are reshaping healthcare advertising.

    27 min
  3. 18 SEPT.

    How Companies Can Prevent Identity-Based Attacks 

    Jasson Casey is the CEO and Co-founder of Beyond Identity, the first and only identity security platform built to make identity-based attacks impossible. With over 20 years of experience in security and networking, Jasson has built enterprise solutions that protect global organizations from credential-based threats.

    In this episode…

    Identity system attacks are on the rise and continue to be a top source of security incidents. Threat actors are using AI deepfakes, stealing user credentials, and taking advantage of weaknesses in the devices people use to connect to company systems. As threat actors become more sophisticated, companies need to find new ways to prevent these incidents rather than just detecting and responding to them. So, what can companies do differently to protect their data and systems?

    Most authentication methods still rely on shared credentials like passwords or codes that travel across systems. Any data that moves can be intercepted and stolen by malicious actors. That’s why companies like Beyond Identity are helping businesses strengthen their security posture with a platform that eliminates shared credentials by replacing them with device-bound, hardware-backed cryptography. By leveraging the same secure enclave technology used in mobile payment systems, the platform produces unique, unforgeable signatures tied to each user and device. This approach prevents AI impersonation attacks, phishing, and credential theft, whether users are on company devices or their own under a Bring Your Own Device (BYOD) policy.

    In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels sit down with Jasson Casey, CEO and Co-founder of Beyond Identity, to discuss how businesses can prevent identity-based attacks. Jasson explains why chasing AI deepfake detection is less effective than verifying the user and device behind communications.
He also shares how Beyond Identity's platform works with existing identity software, enables secure authentication, and provides companies with certainty about user access, including the device used and the conditions under which users log in. Additionally, Jasson highlights how cryptographic signing tools can verify the authenticity of emails, meetings, and other content, helping businesses defend against AI deepfakes.

    28 min
  4. 4 SEPT.

    New CCPA Rules: What Businesses Need to Know 

    Daniel M. Goldberg is a Partner and Chair of the Data Strategy, Privacy & Security Group at Frankfurt Kurnit Klein & Selz PC. He advises on a wide range of privacy, security, and AI matters. His expertise spans from handling high-stakes regulatory enforcement actions to shaping the application of privacy and AI laws. Earlier this year, the California Privacy Lawyers Association named him the "California Privacy Lawyer of the Year."

    In this episode…

    California is reshaping privacy compliance with its latest updates to the California Consumer Privacy Act (CCPA). These sweeping changes introduce new obligations for businesses operating in California, notably in the areas of Automated Decision-Making Technology (ADMT), cybersecurity audits, and risk assessments. So, what can companies do now to get ahead?

    Companies can prepare by understanding the scope of the new rules and whether they apply to their business. The regulations are set to take effect on October 1, 2025, if they are filed with the Secretary of State by August 31; if that filing happens later, the effective date shifts to January 1, 2026. The rules around ADMT are especially complex, with broad definitions that could apply to any tool or system that processes personal data to make significant decisions about consumers. Beyond ADMT, certain companies will also need to conduct comprehensive cybersecurity audits through an independent auditor, a process that may be challenging for smaller organizations. Risk assessments impose an additional obligation, requiring reviews of activities such as processing, selling, or sharing sensitive data and using ADMT for significant decision-making, with attestations submitted to regulators. The new rules make clear that California regulators also expect companies to maintain detailed documentation and demonstrate accountability through governance.
In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Daniel Goldberg, Partner and Chair of the Data Strategy, Privacy & Security Group at Frankfurt Kurnit Klein & Selz PC, about how companies can navigate the CCPA’s new requirements. From ADMT to mandatory cybersecurity audits and risk assessments, Daniel provides a detailed overview of the complex requirements, explaining their scope and impact on companies. He also outlines how these new rules set the tone for future privacy and AI regulations, explains why documentation and governance are central to compliance, and shares practical tips, including reviewing AI tool settings to ensure sensitive data and confidential information are not used for AI model training.

    32 min
  4. 28 AUG.

    How AI Is Rewriting the Rules of Cybersecurity

    John Graves is an innovative legal leader and Senior Counsel at Nisos Holdings, Inc. He has a diverse legal background at the intersection of law, highly regulated industry, and technology. John has over two decades of legal experience advising business leaders, global privacy teams, CISOs and security teams, product groups, and compliance functions. He is a graduate of the University of Oklahoma.

    In this episode…

    AI is fundamentally changing the cybersecurity landscape. Threat actors are using AI to move faster, scale attacks, and create synthetic identities that are difficult for companies to detect. At the same time, defenders rely on AI to sift through large amounts of data and separate signal from noise to determine whether usernames and email addresses are tied to legitimate users or malicious actors. As businesses rush to adopt AI, how can they do so without creating gaps that leave them vulnerable to risks and cyber threats?

    To stay ahead of evolving cyber risks, organizations should conduct tabletop exercises with security and technical teams. These exercises help business leaders understand risks like prompt injection, poisoned data, and social engineering by walking through how AI systems operate and asking what would happen in specific attack scenarios. They are most effective when conducted early in the AI lifecycle, giving companies the chance to simulate attacks and identify risks before systems are deployed. Companies also need to establish AI governance because, without oversight of inputs, processes, and outputs, AI adoption carries significant risk.

    In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels chat with John Graves, Senior Counsel at Nisos Holdings, Inc., about how AI is reshaping cyber threats and defenses.
John shares how threat actors leverage AI to scale ransomware, impersonate real people, and improve social engineering tactics, while defenders use the technology to analyze data and uncover hidden risks. He explains why public digital footprints of executives and their families are becoming prime targets for attackers and why companies must take human risk management seriously. John also highlights why establishing governance and conducting tabletop exercises are essential for identifying vulnerabilities and preparing leaders to respond to real-world challenges.

    28 min
  6. 21 AUG.

    The Blueprint for a Global Privacy and Security Program

    Robert S. Jett III (“Bob”) serves as the first Global Chief Data Privacy Officer at Bunge, where he leads global privacy initiatives and supports key projects in digital transformation, AI, and data management. With over 30 years of legal and in-house counsel experience across manufacturing, insurance, and financial services, he has built and managed global programs for compliance, data privacy, and incident response. Bob has worked extensively across IT, cybersecurity, information security, and corporate compliance teams. He holds a BA in international relations and political science from Hobart College and a JD from the University of Baltimore School of Law. Bob is active in the ACC, IAPP, Georgia Bar Privacy & Law Section, and the Maryland State Bar Association.

    In this episode…

    Managing privacy and security across multiple jurisdictions has never been more challenging for global companies, as regulations evolve and privacy, security, and AI risks accelerate at the same time. The challenge becomes particularly acute for businesses managing supply chains that span dozens of countries, where they must navigate geopolitical shifts and comply with strict employee data regulations that differ by region. These organizations also face the added complexity of governing AI tools to protect sensitive data. Navigating these challenges requires close coordination between privacy, security, and operational teams so risks can be identified quickly and addressed in real time.

    A simple way global companies can address these challenges is by embedding privacy leaders into operational teams. For global companies like Bunge, regular communication between privacy, IT, and cybersecurity teams keeps threats visible in real time, while cross-team collaboration helps identify vulnerabilities and mitigate weak points.
The company also incorporates environmental, social, and governance (ESG) principles into its privacy framework, using traceability to validate supply chain data and meet regulatory requirements. When it comes to managing emerging technologies like AI, foundational privacy principles apply. Companies need to establish governance for data quality, prompt management, third-party vendors, and automated tools, such as AI notetakers. These steps build transparency, reduce risk, and strengthen trust across the organization.

    In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Robert “Bob” Jett, Global Chief Data Privacy Officer at Bunge, about building and leading a global privacy program. Bob emphasizes the importance of embedding privacy leadership into operational teams, like IT departments, to enable collaboration and build trust. He discusses strategies for adhering to ESG principles, managing global employee data privacy, and applying privacy fundamentals to AI governance. Bob also provides tips for responsible AI use, including the importance of prompt engineering oversight, and explains why relationship-building and transparency are essential for effective global privacy and security programs.

    31 min
  7. 14 AUG.

    Navigating Privacy Compliance When AI Changes Everything

    Mason Clutter is a Partner and Privacy Lead at Frost Brown Todd Attorneys, having previously served as Chief Privacy Officer for the US Department of Homeland Security. Mason’s practice sits at the intersection of privacy, security, and technology. She works with clients to operationalize privacy and security, helping them achieve their goals and build and maintain trust with their clients.

    In this episode…

    Companies are facing new challenges trying to build privacy programs that keep up with evolving privacy laws and new AI tools. Laws like Maryland’s new privacy law are adding pressure with strict data minimization requirements and expanded protections for sensitive and children’s data. These shifts are driving companies to reconsider how and when privacy is built into operations. So, how can companies effectively design privacy programs that address regulatory, operational, and AI-driven risks?

    Companies can start by embedding privacy and security measures into their products and services from the outset. AI adds another layer of complexity: while organizations are trying to use AI for efficiency, confidential or personal information is often entered into AI tools without knowing how it will be used or where it will go. Vague third-party vendor contract terms and downstream data sharing compound the risk. Staying compliant means understanding each AI use case, reviewing vendor contracts closely, and choosing AI tools that reflect a company’s risk tolerance and privacy and security practices.

    In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels chat with Mason Clutter, Partner and Privacy Lead at Frost Brown Todd Attorneys, about how companies can navigate complex privacy, security, and AI challenges. Mason shares practical insights on navigating Maryland’s new privacy law, managing vendor contracts, and mitigating downstream AI risks.
She explores common privacy misconceptions, including why privacy should not be a one-size-fits-all or check-the-box compliance exercise. Mason also addresses growing concerns around AI deepfakes and why regulation alone is not enough without broader public education.

    36 min
  8. 7 AUG.

    How Privacy is Reshaping the Ad Tech Industry

    Allison Schiff is the Managing Editor at AdExchanger, where she covers mobile, Meta, measurement, privacy, and the app economy. Allison received her MA in journalism from the Dublin Institute of Technology in Ireland (her favorite place) and a BA in history and English from Brandeis University in Waltham, Mass.

    In this episode…

    Ad tech companies are under increasing pressure to evolve their privacy practices. What was once a loosely regulated “wild west” is now being reshaped by regulatory enforcement actions and shifting consumer expectations. After years of unchecked data collection, many companies are becoming more selective about their vendors, implementing privacy by design, and embracing data minimization. At the same time, many ad tech companies are rushing to position themselves as AI companies, often without a clear understanding of the risks or of how these claims align with consumer trust.

    To meet rising regulatory and consumer expectations, some ad tech companies are taking concrete steps to improve their privacy posture. This includes auditing third-party tools, removing unnecessary tracking pixels from websites, and gaining more visibility into how data flows through partner systems. On the AI front, research shows that consumer trust drops when AI-generated content is not clearly labeled and that marketing products as AI-powered makes them less appealing. These findings point to the need for greater transparency in data collection practices, marketing, and the use of AI.

    In this episode of the She Said Privacy/He Said Security podcast, Jodi and Justin Daniels speak with Allison Schiff, Managing Editor at AdExchanger, about how ad tech companies are adapting to regulatory scrutiny and evolving consumer privacy expectations.
Allison shares how the ad tech industry’s approach to privacy is maturing, and explains how companies are implementing privacy by design, reassessing vendor relationships, and using consent tools more intentionally. She offers insight into how journalists utilize AI while maintaining editorial judgment and presents concerns about AI’s impact on critical thinking. Allison also describes the disconnect between AI marketing hype and consumer preferences, and the need for companies to disclose the use of AI-generated content to maintain trust.

    38 min
