She Said Privacy/He Said Security

Jodi and Justin Daniels

This is the She Said Privacy / He Said Security podcast with Jodi and Justin Daniels. Like any good marriage, Jodi and Justin will debate, evaluate, and sometimes quarrel about how privacy and security impact business in the 21st century.

  1. 3 DAYS AGO

    How AI Is Transforming the General Counsel Role 

    Eric Greenberg is the Executive Vice President, General Counsel, and Corporate Secretary of Cox Media Group, a multi-platform media company based in Atlanta that serves major US media markets. CMG is a portfolio company of the private equity firm Apollo Global Management.

    In this episode…

    AI is transforming how general counsels and legal teams approach their work, with efficiency being just the beginning. For general counsels, the real opportunity lies in using technology to strengthen strategic thinking and decision-making, not replace it. Large language models enable lawyers to analyze complex issues and identify patterns across vast amounts of information, yet they still need to apply critical thinking to interpret the results. So, how can legal professionals leverage AI to elevate their roles without compromising the judgment that defines their value?

    Legal professionals should approach AI as a strategic collaborator rather than a simple efficiency tool. Prompt engineering is emerging as a critical skill that bridges tech-savvy younger lawyers with seasoned attorneys who bring deep judgment and experience. Together, they can build more collaborative, strategic teams. Inside companies, AI is changing how legal departments and outside counsel work together by enhancing efficiency and fostering opportunities for shared learning across systems. Embedding institutional knowledge into AI systems offers benefits for consistency and strategic alignment, yet it also carries risk if general counsel and legal teams rely too heavily on its static outputs instead of applying their own judgment. And as AI evolves, organizations also need to prepare for fast-moving threats like deepfakes, building plans that allow them to respond within minutes, not days.

    In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Eric Greenberg, Executive Vice President, General Counsel, and Corporate Secretary of Cox Media Group, about how general counsels can effectively use AI. Eric discusses how AI tools are reshaping due diligence and decision-making, why developing strong prompt engineering skills can strengthen collaboration between junior and senior lawyers, and how in-house and outside counsel can work more effectively through interoperable AI systems. He shares insights from his Bloomberg Law article series on AI’s impact, emphasizing the importance of continuous learning and staying open-minded as technology evolves. Eric also explains the benefits and risks of embedding institutional knowledge into AI systems and offers practical ways legal professionals can experiment with AI tools.

    38 min
  2. 9 OCT

    Why Security Awareness Training Matters

    Dan Thornton is the Co-founder and CEO of Goldphish. He is a former Royal Marine Commando who channeled his operational expertise into cybersecurity. Today, Dan leads a security awareness training company, helping organizations turn their people into their strongest defense, with over 2.1 million learners trained worldwide.

    In this episode…

    Threat actors don’t just target large corporations. Small and medium-sized businesses (SMBs) are finding themselves in the crosshairs of attackers who use automation, AI, and social engineering to cast a wide net of cyber threats. From convincing phishing scams that capture credentials to AI deepfakes that mimic trusted voices, the methods used to manipulate and exploit unsuspecting employees are becoming more sophisticated. So how can organizations protect themselves when even the most vigilant staff can be fooled?

    Organizations that believe they are too small to be targeted by threat actors often learn the hard way that a single mistake can have devastating consequences. Yet improving cybersecurity posture and building awareness doesn’t have to be overwhelming or costly. SMBs can take simple steps, such as enabling multifactor authentication (MFA) for all business accounts, updating software and systems, and maintaining regular backups. Security training is also critical because it helps employees recognize threats and avoid mistakes that often lead to incidents. By combining basic security measures with security awareness training, businesses can foster a culture that strengthens their defenses against cyber threats.

    In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels chat with Dan Thornton, Co-founder and CEO of Goldphish, about how small and medium-sized businesses can enhance their cybersecurity defenses. Dan emphasizes that attackers do not discriminate based on company size and that common blind spots, such as over-relying on technology, neglecting incident planning, and staying silent after mistakes, can leave organizations vulnerable. He explains why steps like enabling multifactor authentication, performing regular backups, and conducting employee security training make a big difference in reducing risk. Dan also shares insights on how companies can counter the growing threat of AI deepfakes and why business email compromise (BEC) remains one of the most effective scams.

    33 min
  3. 2 OCT

    GPC and UOOMS: Do Consumers Want an On/Off Switch or a Dimmer?

    Andy Hepburn is the Co-founder of NEOLAW LLC and General Counsel at SafeGuard Privacy. He is a privacy lawyer with deep experience helping clients in the digital advertising industry navigate complex privacy laws.

    In this episode…

    Global Privacy Control (GPC) is transforming the way companies approach consumer consent. The rise of state privacy laws has fueled an explosion of cookie consent banners and other consent mechanisms that tend to confuse consumers about what they’re agreeing to. GPC, also known as a universal opt-out mechanism, offers a simpler alternative by allowing consumers to set their privacy permissions once for electronic tracking at the browser level. Yet its current all-or-nothing design raises the question: Does a single switch reflect what consumers really want?

    Some consumers want to block all digital tracking, while others are open to targeted ads in specific situations, like shopping for a car or clothing. Most consumers fall somewhere in between. Earlier attempts, like the Do Not Track initiative, received pushback from the advertising industry, which argued that a simple on/off switch was too limited to capture the diversity of consumer privacy preferences. A more nuanced approach would let individuals accept targeted ads in some areas while blocking them in others. Industry standards, such as the Interactive Advertising Bureau's Global Privacy Platform and the Multi-State Privacy Agreement, are designed to help companies ensure that consumer privacy preferences are consistently applied across publishers, advertisers, and the numerous intermediaries in the ad ecosystem. Intensifying consumer pressure and regulatory enforcement may accelerate the adoption of these standards across industries.

    In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels chat with Andy Hepburn, Co-founder of NEOLAW LLC and General Counsel at SafeGuard Privacy, about whether universal opt-out mechanisms meet the needs of today’s consumers. Andy explains why a single opt-out switch falls short of consumer needs and what more flexible models could enable. He highlights how industry standards can help companies and their vendors transmit privacy preferences across the ad ecosystem and why adoption will depend on consumer pressure and regulatory enforcement actions. Andy also explores the challenges smaller companies face in meeting privacy compliance requirements and how cooperation among regulators could shape the next phase of privacy enforcement.

    38 min
  4. 25 SEPT

    Navigating the New Rules of Healthcare Advertising

    Jeremy Mittler is the Co-founder and CEO of Blueprint Audiences. With nearly two decades in healthcare, advertising, and privacy, Jeremy has shaped how marketers reach patients and providers. At Blueprint, he is creating a new, privacy-safe way to build health audiences that ensures compliance with HIPAA and state privacy laws.

    In this episode…

    Healthcare marketers face mounting pressure to deliver personalized ads while ensuring compliance with the Health Insurance Portability and Accountability Act (HIPAA) and the growing list of state privacy laws, where gray areas around sensitive and consumer health information make compliance especially complex. Marketers who rely on broad targeting and legacy ad tech tools are finding that old methods no longer meet legal requirements. So, how can companies target health audiences in a way that is effective and aligns with privacy obligations?

    Rather than treating privacy as a trade-off with precision, healthcare marketers can start by building a privacy-safe experience for consumers who see their ads, and optimizing for business goals from there. Proven methods, such as contextual advertising and using consented, opted-in data and aggregated insights on personal information, support effective and privacy-forward campaigns. Yet these methods alone are not enough. Marketers and companies alike need to perform due diligence on their vendors and third-party ad tech platforms, especially as AI introduces new risks. Marketers can take simple steps, such as testing consumer opt-outs and exercising their privacy rights on vendor sites, to ensure the technology works as intended.

    In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Jeremy Mittler, Co-founder and CEO of Blueprint Audiences, about how companies can create privacy-safe healthcare audience segments. Jeremy explains why relying solely on HIPAA is no longer sufficient for meeting compliance obligations and outlines the challenges companies face while navigating the patchwork of evolving state privacy laws. He details practical methods that allow marketers to reach the right audiences without compromising privacy and describes why vendor due diligence must go beyond checklists, urging marketers to test vendor ad tech platforms and to think like consumers when assessing ad experiences. Jeremy also discusses how AI complicates the boundary between aggregated and personal data and how emerging regulatory trends are reshaping healthcare advertising.

    27 min
  5. 18 SEPT

    How Companies Can Prevent Identity-Based Attacks 

    Jasson Casey is the CEO and Co-founder of Beyond Identity, the first and only identity security platform built to make identity-based attacks impossible. With over 20 years of experience in security and networking, Jasson has built enterprise solutions that protect global organizations from credential-based threats.

    In this episode…

    Identity system attacks are on the rise and continue to be a top source of security incidents. Threat actors are using AI deepfakes, stealing user credentials, and taking advantage of weaknesses in the devices people use to connect to company systems. As threat actors become more sophisticated, companies need to find new ways to prevent these incidents rather than just detecting and responding to them. So, what can companies do differently to protect their data and systems?

    Most authentication methods still rely on shared credentials like passwords or codes that travel across systems. Any data that moves can be intercepted and stolen by malicious actors. That’s why companies like Beyond Identity are helping businesses strengthen their security posture with a platform that eliminates shared credentials by replacing them with device-bound, hardware-backed cryptography. By leveraging the same secure enclave technology used in mobile payment systems, the platform produces unique, unforgeable signatures tied to each user and device. This approach prevents AI impersonation attacks, phishing, and credential theft, whether users are on company devices or their own under a bring-your-own-device (BYOD) policy.

    In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels sit down with Jasson Casey, CEO and Co-founder of Beyond Identity, to discuss how businesses can prevent identity-based attacks. Jasson explains why chasing AI deepfake detection is less effective than verifying the user and device behind communications. He also shares how Beyond Identity's platform works with existing identity software, enables secure authentication, and provides companies with certainty about user access, including the device used and the conditions under which users log in. Additionally, Jasson highlights how cryptographic signing tools can verify the authenticity of emails, meetings, and other content, helping businesses defend against AI deepfakes.

    28 min
  6. 4 SEPT

    New CCPA Rules: What Businesses Need to Know 

    Daniel M. Goldberg is Partner and Chair of the Data Strategy, Privacy & Security Group at Frankfurt Kurnit Klein & Selz PC. He advises on a wide range of privacy, security, and AI matters. His expertise ranges from handling high-stakes regulatory enforcement actions to shaping the application of privacy and AI laws. Earlier this year, the California Privacy Lawyers Association named him the "California Privacy Lawyer of the Year."

    In this episode…

    California is reshaping privacy compliance with its latest updates to the California Consumer Privacy Act (CCPA). These sweeping changes introduce new obligations for businesses operating in California, notably in the areas of Automated Decision-Making Technology (ADMT), cybersecurity audits, and risk assessments. So, what can companies do now to get ahead?

    Companies can prepare by understanding the scope of the new rules and whether they apply to their business. The regulations are set to take effect on October 1, 2025, if they are filed with the Secretary of State by August 31; if that filing happens later, the effective date shifts to January 1, 2026. The rules around ADMT are especially complex, with broad definitions that could apply to any tool or system that processes personal data to make significant decisions about consumers. Beyond ADMT, certain companies will also need to conduct comprehensive cybersecurity audits through an independent auditor, a process that may be challenging for smaller organizations. Risk assessments add a further obligation, requiring reviews of activities such as processing, selling, or sharing sensitive data and using ADMT for significant decision-making, with attestations submitted to regulators. The new rules make clear that California regulators also expect companies to maintain detailed documentation and demonstrate accountability through governance.

    In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Daniel Goldberg, Partner and Chair of the Data Strategy, Privacy & Security Group at Frankfurt Kurnit Klein & Selz PC, about how companies can navigate the CCPA’s new requirements. From ADMT to mandatory cybersecurity audits and risk assessments, Daniel provides a detailed overview of the complex requirements, explaining their scope and impact on companies. He also outlines how these new rules set the tone for future privacy and AI regulations and why documentation and governance are central to compliance, and shares practical tips on reviewing AI tool settings to ensure sensitive data and confidential information are not used for AI model training.

    32 min
  7. 28 AUG

    How AI Is Rewriting the Rules of Cybersecurity

    John Graves is an innovative legal leader and Senior Counsel at Nisos Holdings, Inc. He has a diverse legal background at the intersection of law, highly regulated industries, and technology. John has over two decades of legal experience advising business leaders, global privacy teams, CISOs and security teams, product groups, and compliance functions. He is a graduate of the University of Oklahoma.

    In this episode…

    AI is fundamentally changing the cybersecurity landscape. Threat actors are using AI to move faster, scale attacks, and create synthetic identities that are difficult for companies to detect. At the same time, defenders rely on AI to sift through large amounts of data and separate signal from noise to determine whether usernames and email addresses are tied to legitimate users or malicious actors. As businesses rush to adopt AI, how can they do so without creating gaps that leave them vulnerable to risks and cyber threats?

    To stay ahead of evolving cyber risks, organizations should conduct tabletop exercises with security and technical teams. These exercises help business leaders understand risks like prompt injection, poisoned data, and social engineering by walking through how AI systems operate and asking what would happen if certain situations occurred. They are most effective when conducted early in the AI lifecycle, giving companies the chance to simulate attack scenarios and identify risks before systems are deployed. Companies also need to establish AI governance because, without oversight of inputs, processes, and outputs, AI adoption carries significant risk.

    In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels chat with John Graves, Senior Counsel at Nisos Holdings, Inc., about how AI is reshaping cyber threats and defenses. John shares how threat actors leverage AI to scale ransomware, impersonate real people, and improve social engineering tactics, while defenders use the technology to analyze data and uncover hidden risks. He explains why the public digital footprints of executives and their families are becoming prime targets for attackers and why companies must take human risk management seriously. John also highlights why establishing governance and conducting tabletop exercises are essential for identifying vulnerabilities and preparing leaders to respond to real-world challenges.

    28 min
  8. 21 AUG

    The Blueprint for a Global Privacy and Security Program

    Robert S. Jett III (“Bob”) serves as the first Global Chief Data Privacy Officer at Bunge, where he leads global privacy initiatives and supports key projects in digital transformation, AI, and data management. With over 30 years of legal and in-house counsel experience across manufacturing, insurance, and financial services, he has built and managed global programs for compliance, data privacy, and incident response. Bob has worked extensively across IT, cybersecurity, information security, and corporate compliance teams. He holds a BA in international relations and political science from Hobart College and a JD from the University of Baltimore School of Law. Bob is active in the ACC, IAPP, Georgia Bar Privacy & Law Section, and the Maryland State Bar Association.

    In this episode…

    Managing privacy and security across multiple jurisdictions has never been more challenging for global companies, as regulations evolve and privacy, security, and AI risks accelerate at the same time. The challenge becomes particularly acute for businesses managing supply chains that span dozens of countries, where they must navigate geopolitical shifts and comply with strict employee data regulations that differ by region. These organizations also face the added complexity of governing AI tools to protect sensitive data. Navigating these challenges requires close coordination between privacy, security, and operational teams so risks can be identified quickly and addressed in real time.

    A simple way global companies can address these challenges is by embedding privacy leaders into operational teams. For global companies like Bunge, regular communication between privacy, IT, and cybersecurity teams keeps threats visible in real time, while cross-collaboration helps identify vulnerabilities and mitigate weak points. The company also incorporates environmental, social, and governance (ESG) principles into its privacy framework, using traceability to validate supply chain data and meet regulatory requirements. When it comes to managing emerging technologies like AI, foundational privacy principles still apply: companies need to establish governance for data quality, prompt management, third-party vendors, and automated tools such as AI notetakers. These steps build transparency, reduce risk, and strengthen trust across the organization.

    In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Robert “Bob” Jett, Global Chief Data Privacy Officer at Bunge, about building and leading a global privacy program. Bob emphasizes the importance of embedding privacy leadership into operational teams, like IT departments, to enable collaboration and build trust. He discusses strategies for adhering to ESG principles, managing global employee data privacy, and applying privacy fundamentals to AI governance. Bob also provides tips for responsible AI use, including the importance of prompt engineering oversight, and explains why relationship-building and transparency are essential for effective global privacy and security programs.

    31 min

