Future of Data Security

Qohash

Welcome to Future of Data Security, the podcast where industry leaders come together to share their insights, lessons, and strategies on the forefront of data security. Each episode features in-depth interviews with top CISOs and security experts who discuss real-world solutions, innovations, and the latest technologies that are shaping the future of cybersecurity across various industries. Join us to gain actionable advice and stay ahead in the ever-evolving world of data security.

  1. EP 35 — Snyk's Kate Helin on Governing Agentic AI before the Regulatory Guidance Catches Up

    MAY 5

    Kate Helin, Legal Director of Privacy & Data Security at Snyk, argues that agents have already become the biggest security risk in most enterprise tech stacks, and that most organizations are not set up to address it. The core problem is not a lack of controls. It is that no single function has full visibility into how agents behave. Kate's approach is to convene legal, security, R&D, and GRC before any mitigation decision is made, because legal cannot counsel on obligations until the technical teams explain how the technology actually works. The composition of that conversation determines whether the resulting control is technical, human, or both. Kate also draws a direct line from GDPR implementation to today's AI governance challenges. She describes how building privacy programs under early GDPR, when implementation details were absent and community norms had to substitute for regulatory guidance, prepared her to operate in the same conditions now present in AI. Her operating principle is to meet the spirit of the law when the prescriptive details have not been written yet.

    Topics discussed:
    - Why agentic AI has become the biggest current security risk across most enterprise tech stacks
    - Structuring cross-functional roundtables across legal, security, R&D, and GRC before agentic risk controls are selected
    - How early GDPR implementation under regulatory ambiguity prepared privacy counsel for today's AI governance challenges
    - Applying the spirit of the law when prescriptive AI regulation has not yet been written or enforced
    - Why technology consistently outpaces regulation and what that means for security teams building compliant programs today
    - Using AI as a distillation tool for complex legal and security analysis while maintaining human-in-the-loop validation
    - Why junior lawyers and engineers still need mentorship to develop judgment that AI-generated outputs cannot replace

    26 min
  2. EP 34 — Cyderes’ Stephen Fridakis on Ephemeral Credentials and Just-in-Time Access

    APR 21

    Stephen Fridakis, CISO in Residence at Cyderes, comes to this conversation with a framework that cuts against how most security teams still operate: stop thinking about perimeters, start thinking about consequences. His argument is that the question of "are we secure or not" is not just unhelpful; it is the wrong unit of measurement entirely, and he offers a more honest alternative built around what an organization can afford to lose versus what must never leave. Stephen makes a precise and underappreciated case for why shadow AI is fundamentally different from every other control problem a CISO has faced. Once sensitive data is submitted to a public model, it is embedded, transformed, and learned from. There is no rollback. The most effective response is not detection after the fact but building organizational awareness before the decision to submit is ever made. He also breaks down why static trust models have collapsed under AI, arguing that just-in-time data access and ephemeral credentials are no longer aspirational but necessary, and that past behavior can no longer serve as a proxy for future safety.

    Topics discussed:
    - Reframing CISO governance around consequence management rather than perimeter defense or binary secure/not-secure assessments
    - Applying the afford-to-lose framework to prioritize finite security budgets against the data that matters most
    - Understanding AI irreversibility as a distinct control problem where sensitive data submitted to public models cannot be retrieved
    - Shifting shadow AI strategy from post-submission detection to pre-decision awareness building across the organization
    - Replacing static role-based trust models with context-driven identity evaluation that accounts for data stage and purpose
    - Moving toward ephemeral credentials and just-in-time data access as the foundation of modern security architecture
    - Evaluating where AI delivers real operational value versus where uncontrolled use produces unreliable and unexplainable outputs
    - Advising new CISOs to build both technical depth and business fluency to avoid the most common leadership failure points
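
    To make the just-in-time idea concrete, here is a minimal hypothetical sketch, not Cyderes' implementation: an access request is evaluated against its current context (dataset, declared purpose, sensitivity), and an approved request yields a short-lived, narrowly scoped credential instead of a standing role grant. All names, purposes, and thresholds below are invented for illustration.

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
import secrets

@dataclass
class AccessRequest:
    principal: str    # who is asking (a person or an agent)
    dataset: str      # what they want to touch
    purpose: str      # declared purpose for this specific request
    sensitivity: str  # dataset classification, e.g. "low" or "restricted"

# Illustrative policy: restricted data is released only for an approved purpose,
# and every grant is scoped to one dataset and expires quickly.
APPROVED_PURPOSES = {"fraud-review", "customer-support"}

def issue_ephemeral_credential(req: AccessRequest, ttl_minutes: int = 15):
    """Evaluate the request in context and mint a short-lived, single-dataset token."""
    if req.sensitivity == "restricted" and req.purpose not in APPROVED_PURPOSES:
        return None  # no standing trust to fall back on: deny by default
    return {
        "token": secrets.token_urlsafe(32),
        "principal": req.principal,
        "scope": req.dataset,  # scoped to exactly one dataset
        "expires_at": datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
    }

# An agent asking for restricted data outside an approved purpose gets nothing,
# regardless of how it behaved in the past.
print(issue_ephemeral_credential(AccessRequest("etl-agent-7", "cardholder-db", "model-training", "restricted")))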

    29 min
  3. EP 33 — TELUS’ Jesslyn Dymond on the Gap between AI Use and AI Literacy in Enterprise Adoption

    APR 7

    TELUS didn't wait for generative AI to arrive before building governance infrastructure. Jesslyn Dymond, Director of AI Governance & Data Ethics, joined the company in 2019 to stand up responsible AI practices alongside the machine learning teams building them, which meant that when generative AI hit, the governance scaffolding was already there. Jesslyn walks through the specific structures TELUS uses to govern AI at scale: a CEO-led AI board that includes the CIO, Chief AI Officer, and Chief Data and Trust Officer; a network of hundreds of data stewards embedded across business units and appointed by VPs; and a unified intake process called a Data Enablement Plan that consolidates privacy, security, and responsible AI review into a single workflow instead of separate forms and sign-offs. Jesslyn also shares how TELUS certified its first generative AI customer support tool to the international Privacy by Design standard and then had it independently audited, and what that process required the team to work through on transparency and user experience. She makes a pointed case for why shadow AI is best addressed with access to better internal tools rather than policy restriction alone, explains how her team grades levels of agency within their agentic AI framework to determine what controls need to be in place before approving systems, and describes how TELUS took the concept of purple teaming out of the security world and applied it to AI governance, including running those sessions with students and the general public.

    Topics discussed:
    - Building proactive AI governance infrastructure before adoption by embedding responsible AI practices alongside ML development teams
    - Structuring enterprise AI oversight through a CEO-led board including the CIO, Chief AI Officer, and Chief Data and Trust Officer
    - Deploying VP-appointed data stewards across business units to connect governance policy with on-the-ground AI implementation
    - Consolidating privacy, security, and responsible AI review into a single Data Enablement Plan to reduce friction and improve compliance
    - Certifying a generative AI customer support tool to the international Privacy by Design standard and navigating external audit requirements
    - Grading levels of agency within an agentic AI framework to determine appropriate controls
    - Countering shadow AI by prioritizing internal tool access and functionality over policy restriction alone
    - Applying purple teaming from security practice to AI governance to test systems collaboratively across various teams
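
    As a rough illustration of the "levels of agency" idea described above, here is a hypothetical sketch, not TELUS' actual framework: each agency level maps to the controls that must be in place before a system is approved. The level names and control names are invented for illustration.

# Hypothetical agency levels and the controls each one requires before approval.
REQUIRED_CONTROLS = {
    "suggests": {"human_review_of_outputs"},
    "acts_with_approval": {"human_review_of_outputs", "action_logging"},
    "acts_autonomously": {"human_review_of_outputs", "action_logging",
                          "scoped_credentials", "kill_switch"},
}

def approval_gaps(agency_level: str, controls_in_place: set[str]) -> set[str]:
    """Return the controls still missing for a system at the given agency level."""
    return REQUIRED_CONTROLS[agency_level] - controls_in_place

# Example: an autonomous agent with only logging in place still has three gaps to close.
print(approval_gaps("acts_autonomously", {"action_logging"}))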

    49 min
  4. EP 32 — Polymer's Yasir Ali on Team Composition over Talent When Scaling Interdependent Platforms

    MAR 24

    Polymer's runtime security approach operates at the file and message level, intercepting content in real time within workflows like Slack and Zendesk to redact, block, or grant granular access based on specific entities found inside documents. This contrasts with traditional perimeter-based security, where access is binary: you're either in the club or out. Yasir Ali, Founder & CEO of PolymerHQ DLP, explains how financial services has operated under workflow-level distrust for over a decade, with every file interaction requiring labeling and ethical wall policies between trading and investment banking divisions, and why the rest of the enterprise world is finally moving toward this model. Yasir also touches on a critical gap in current security architectures: control planes across network, identity, and content layers don't communicate with each other. His team works to triangulate telemetric data from tools like Zscaler with Polymer's ground-level content controls, creating unified policy layers without forcing organizations into single-vendor platforms. He also addresses a tension in AI-powered security: probabilistic detection models work well for entity recognition, but policy enforcement must remain deterministic. You can't have AI deciding to block sensitive data some days and let it through on others.

    Topics discussed:
    - Implementing runtime security at the file and message level to enable partial document sharing based on entity-level access policies
    - Solving the binary sharing problem in unstructured datasets where traditional security forces all-or-nothing file access
    - Adopting the financial services workflow-level distrust model that requires labeling and ethical wall policies for all file interactions
    - Addressing enterprise AI adoption barriers through proper identity modeling for non-human agents and machine-to-machine interactions within IAM systems
    - Triangulating telemetric data across network, identity, and content control planes to create unified policy layers without vendor lock-in
    - Balancing probabilistic AI detection models for entity recognition with deterministic policy enforcement to maintain response certainty
    - Building enterprise software teams by prioritizing cultural fit and collaboration ability over hiring 10x engineers
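
    A minimal sketch of that probabilistic-detection, deterministic-enforcement split, with invented names, patterns, and thresholds rather than Polymer's code: a detector proposes entities with confidence scores, while the enforcement rule is a fixed policy table, so the same finding always produces the same action.

import re

# Probabilistic layer (stand-in): in practice this would be an ML entity recognizer
# returning spans with confidence scores; a toy regex with a fixed score keeps it runnable.
def detect_entities(text: str) -> list[dict]:
    findings = []
    for match in re.finditer(r"\b\d{3}-\d{2}-\d{4}\b", text):  # toy SSN-like pattern
        findings.append({"type": "ssn", "span": match.span(), "confidence": 0.93})
    return findings

# Deterministic layer: a fixed table, so identical findings always get identical actions,
# never "block today, allow tomorrow".
POLICY = {"ssn": ("redact", 0.80)}  # entity type -> (action, minimum confidence)

def enforce(text: str) -> str:
    # Apply redactions right-to-left so earlier spans are not shifted by replacements.
    for finding in sorted(detect_entities(text), key=lambda f: f["span"][0], reverse=True):
        action, threshold = POLICY.get(finding["type"], ("allow", 1.0))
        if action == "redact" and finding["confidence"] >= threshold:
            start, end = finding["span"]
            text = text[:start] + "[REDACTED]" + text[end:]
    return text

print(enforce("Customer SSN is 123-45-6789, please update the ticket."))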

    28 min
  5. EP 31 — Arbor Memorial's Teij Janki on Why Adding AI before Fixing Process Amplifies Weaknesses

    MAR 10

    Teij Janki, CISO & Director of IT Governance Risk & Compliance at Arbor Memorial, has spent 30 years moving through the full stack of security, and his view is that the sequencing most teams follow is backwards. His principle is that technology does not solve processes; it amplifies them. That means deploying a tool before fixing the underlying process weakness just scales the problem. The implication for AI adoption is direct and worth hearing spelled out. On the budget side, Teij makes the case that privacy legislation is a more reliable governance lever than cybersecurity risk alone, because privacy laws carry consequences that executive teams will actually act on. He also walks through the gating sequence his team built for AI tool adoption, in which sensitive data gets slowed down and scrutinized, lower-sensitivity use cases move through faster, and staff have a service catalog to work from rather than a blanket ban.

    Topics discussed:
    - Applying a people-process-technology sequence to security programs before introducing AI or automation tooling
    - Using privacy legislation as an executive governance lever when cybersecurity risk alone fails to drive budget decisions
    - Building a gating sequence for AI tool adoption that separates sensitive from low-sensitivity data use cases
    - Replacing blanket AI bans with a structured service catalog that lets staff self-select and move tools through approval
    - Identifying process weaknesses before deploying technology to avoid amplifying existing security vulnerabilities at scale
    - Progressing security from a technical cost center to a strategic business enabler using the CMMI maturity model
    - Applying martial arts principles of discipline, clear expectations, and target-setting to cybersecurity team leadership
    - Evaluating where generative AI delivers in security operations versus where magical thinking still outpaces real-world performance
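
    To show what a gating sequence like the one described above could look like, here is a hypothetical sketch, not Arbor Memorial's process: requests touching sensitive data take the slow, scrutinized path, low-sensitivity use cases move faster, and pre-approved catalog tools are available for self-service. The tiers, tools, and review steps are invented.

# Hypothetical gating sequence for AI tool adoption, keyed on data sensitivity.
SERVICE_CATALOG = {"meeting-summarizer", "code-assistant"}  # pre-approved tools staff can self-select

def route_ai_request(tool: str, data_sensitivity: str) -> str:
    if tool in SERVICE_CATALOG and data_sensitivity == "low":
        return "approved: use via service catalog"
    if data_sensitivity == "low":
        return "fast lane: lightweight review"
    return "slow lane: privacy, security, and legal review before any data is shared"

print(route_ai_request("meeting-summarizer", "low"))
print(route_ai_request("new-vendor-llm", "high"))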

    24 min
  6. EP 30 — Postman's Sam Chehab on Three Unteachable Traits He Hires For

    FEB 24

    At Postman's scale of 40 million developers generating billions of API requests, Sam Chehab, Head of Security & IT, centers his program on three enforcement domains: authenticated and encrypted data paths, zero-trust inter-service communication, and runtime instrumentation. His vendor evaluation is just as precise, cutting past feature lists to one demand: show me the architecture diagram and walk through exactly how your solution addresses my threat models. Sam identifies why generative AI creates fundamentally new risk: the combination of private data access, untrusted content processing, and external communication capability. This trifecta explains why browser-based AI is nearly impossible to contain; it touches local machines, queries the open web, and executes actions on your behalf. Sam also covers how he screens for three traits he can't train: initiative to self-direct research, attitude to absorb constant setbacks, and aptitude to process how rapidly this field moves.

    Topics discussed:
    - Implementing data path integrity, zero-trust inter-service authentication, and runtime instrumentation with immutable logs
    - Evaluating cybersecurity vendors by demanding architecture diagrams and specific threat model solutions rather than feature lists
    - Managing freemium platform security with anomaly detection, rate limiting, and abuse prevention across 40 million developers
    - Identifying AI security's dangerous trifecta: private data access, untrusted content processing, and external communication capabilities
    - Building MCP generators that enable least-privilege API servers by allowing developers to select only required methods before deployment
    - Using AI agents to generate security tests during development, shifting validation from security teams to automated testing
    - Applying security hygiene fundamentals before adopting specialized vendor solutions
    - Hiring security teams based on three unteachable traits: initiative, attitude, and aptitude
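
    To illustrate the least-privilege generator idea mentioned above, here is a hypothetical sketch, not Postman's product or the MCP specification: a developer selects only the methods an agent needs, and the generated server definition exposes nothing else. The API surface, method names, and output format are invented.

# Hypothetical least-privilege generation: start from a full API surface and
# emit a server definition containing only the methods the developer selected.
FULL_API = {
    "listInvoices": {"method": "GET", "path": "/invoices"},
    "getInvoice": {"method": "GET", "path": "/invoices/{id}"},
    "deleteInvoice": {"method": "DELETE", "path": "/invoices/{id}"},
    "refundInvoice": {"method": "POST", "path": "/invoices/{id}/refund"},
}

def generate_server_spec(selected: set[str]) -> dict:
    unknown = selected - FULL_API.keys()
    if unknown:
        raise ValueError(f"unknown methods requested: {unknown}")
    # Anything not explicitly selected simply does not exist in the generated server.
    return {name: FULL_API[name] for name in selected}

# A read-only support agent gets only the two read methods; delete and refund are never exposed.
print(generate_server_spec({"listInvoices", "getInvoice"}))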

    28 min
  7. EP 29 — Age of Learning's Carl Stern on Why Certifications Are Side Effects, Not Final Goals

    FEB 10

    Carl Stern, VP of Information Security at Age of Learning, explains why forcing controls into place without executive alignment guarantees you'll fight uphill battles every single day, as people begin to see security as a blocker rather than a business enabler. Instead, he starts with identifying crown jewels and acceptable risk levels before selecting any frameworks or tools, ensuring the program fits company culture instead of working against it. He also asserts that certifications like HITRUST and SOC 2 validate you're already operating securely; the real program is the daily processes people follow because they understand why, not compliance theatre. Carl also argues the cybersecurity industry exists at its current scale because of a systemic failure: companies ship insecure software without liability, pushing security costs downstream. Most breaches exploit preventable defects that should never reach production, not sophisticated zero-days.

    Topics discussed:
    - Building security programs from scratch versus inheriting existing programs and why executive alignment prevents daily uphill battles
    - Treating certifications as validation of operational security rather than the primary program goal
    - Pairing administrative controls with technical monitoring to establish baselines before enforcement for unstructured data security policies
    - Applying three-part investment calculus for lean teams: measurable risk reduction, manual work automation, and crown jewel protection
    - Calculating the true cost of 24/7 internal SOC coverage, including shift staffing, turnover, training, and tooling, versus managed services
    - Why attack patterns remain consistent across healthcare, education, gaming, and retail despite different compliance requirements
    - Explaining how AI lowers the barrier for exploit development and expands zero-day risk beyond traditional high-value enterprise targets
    - Arguing that the cybersecurity industry exists at current scale because companies ship insecure software without liability, pushing costs downstream

    30 min
  8. EP 28 — National Bank's Andre Boucher on Managing AI without Shadow IT Friction

    JAN 27

    André Boucher, SVP Technology and Information Security (CTO/CISO) at National Bank of Canada, managed the transition from commanding Canadian Forces Cyber Command to leading security at a systemically important financial institution by recognizing that governance expertise matters more than technical depth at scale. His approach to shadow AI involves enabling experimentation early with secure platforms that business teams actually prefer, reducing the appeal of unauthorized tools. Rather than aggressive detection that drives behavior underground, his teams created environments where innovation happens within guardrails. This shifts security from adversarial to collaborative, treating 31,000 employees as team participants rather than risks to manage. André emphasizes that data inventory across structured and unstructured environments remains the hardest unsolved problem, not because organizations lack tools but because they haven't achieved ecosystem maturity around taxonomy and classification. He explains why third-party risk management is reaching crisis levels as major vendors embed AI features without notice or transparency, creating blind spots in supply chains that regulatory frameworks can't yet address.

    Topics discussed:
    - Translating military governance and strategy frameworks into private sector security at systemically important financial institutions
    - Managing shadow AI through platform enablement and secure experimentation rather than detection and prevention tactics
    - Data inventory and classification as the foundational challenge most organizations underestimate despite its criticality for AI governance
    - The board strategy mandate versus grassroots adoption pressure dynamic and how platform teams bridge the gap without creating friction
    - Third-party risk amplification as vendors embed AI features without transparency, notice, or updated contractual language
    - How awareness training reaches its limits when synthetic actors become indistinguishable from humans in video communications
    - AI use cases in security tooling focused on modeling normal behavior and reducing triage burden rather than autonomous response
    - Building high-performing security teams around ethics, mission, and non-linear career experience rather than purely technical credentials
    - Treating employees as security team participants at scale and how that shifts organizational dynamics from adversarial to collaborative

    39 min
