Future of Data Security

Qohash

Welcome to Future of Data Security, the podcast where industry leaders come together to share their insights, lessons, and strategies on the forefront of data security. Each episode features in-depth interviews with top CISOs and security experts who discuss real-world solutions, innovations, and the latest technologies that are shaping the future of cybersecurity across various industries. Join us to gain actionable advice and stay ahead in the ever-evolving world of data security.

  1. EP 29 — Age of Learning's Carl Stern on Why Certifications Are Side Effects, Not Final Goals

    2D AGO

    Carl Stern, VP of Information Security at Age of Learning, explains why forcing controls into place without executive alignment guarantees you'll fight uphill battles every single day, as people begin to see security as a blocker rather than a business enabler. Instead, he starts by identifying crown jewels and acceptable risk levels before selecting any frameworks or tools, ensuring the program fits company culture instead of working against it. He also asserts that certifications like HITRUST and SOC 2 validate that you're already operating securely; the real program is the daily processes people follow because they understand why, not compliance theatre. Carl also argues the cybersecurity industry exists at its current scale because of a systemic failure: companies ship insecure software without liability, pushing security costs downstream. Most breaches exploit preventable defects that should never reach production, not sophisticated zero-days.

    Topics discussed:

    - Building security programs from scratch versus inheriting existing programs, and why executive alignment prevents daily uphill battles
    - Treating certifications as validation of operational security rather than the primary program goal
    - Pairing administrative controls with technical monitoring to establish baselines before enforcing unstructured data security policies
    - Applying a three-part investment calculus for lean teams: measurable risk reduction, manual work automation, and crown jewel protection
    - Calculating the true cost of 24/7 internal SOC coverage, including shift staffing, turnover, training, and tooling, versus managed services
    - Why attack patterns remain consistent across healthcare, education, gaming, and retail despite different compliance requirements
    - How AI lowers the barrier for exploit development and expands zero-day risk beyond traditional high-value enterprise targets
    - Why the cybersecurity industry exists at its current scale: companies ship insecure software without liability, pushing costs downstream
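    The 24/7 SOC cost comparison Carl raises can be made concrete with a back-of-the-envelope staffing model. All figures below (salaries, turnover rate, tooling spend, managed-service quote) are illustrative assumptions, not numbers from the episode; the point is the structure of the calculation, in which around-the-clock coverage requires roughly five analysts per seat once weekends, PTO, and training are counted.

```python
# Back-of-the-envelope model of the 24/7 internal SOC cost calculation.
# Every dollar figure here is a hypothetical assumption for illustration.

HOURS_PER_YEAR = 24 * 365            # hours of coverage a 24/7 seat requires
FTE_HOURS = 40 * 52 * 0.85           # productive hours per analyst (~15% lost to PTO/training)

def internal_soc_annual_cost(seats=2, salary=120_000, turnover_rate=0.25,
                             replacement_cost=30_000, tooling=150_000):
    """Estimate the yearly cost of staffing `seats` around-the-clock positions."""
    analysts_per_seat = HOURS_PER_YEAR / FTE_HOURS   # ~5 FTEs to fill one 24/7 seat
    headcount = seats * analysts_per_seat
    staffing = headcount * salary
    churn = headcount * turnover_rate * replacement_cost  # rehiring and retraining
    return staffing + churn + tooling

internal = internal_soc_annual_cost()
managed = 400_000  # hypothetical managed-service quote for equivalent coverage
print(f"internal ~ ${internal:,.0f}/yr vs managed ~ ${managed:,.0f}/yr")
```

    Even with modest assumptions, the hidden multiplier (nearly five salaries per seat, plus churn) is what makes the internal option far more expensive than the sticker price of a single shift suggests.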

    30 min
  2. EP 28 — National Bank's André Boucher on Managing AI without Shadow IT Friction

    JAN 27

    André Boucher, SVP Technology and Information Security (CTO/CISO) at National Bank of Canada, managed the transition from commanding Canadian Forces Cyber Command to leading security at a systemically important financial institution by recognizing that governance expertise matters more than technical depth at scale. His approach to shadow AI involves enabling experimentation early with secure platforms that business teams actually prefer, reducing the appeal of unauthorized tools. Rather than aggressive detection that drives behavior underground, his team created environments where innovation happens within guardrails. This shifts security from adversarial to collaborative, treating 31,000 employees as team participants rather than risks to manage. André emphasizes that data inventory across structured and unstructured environments remains the hardest unsolved problem, not because organizations lack tools but because they haven't achieved ecosystem maturity around taxonomy and classification. He explains why third-party risk management is reaching crisis levels as major vendors embed AI features without notice or transparency, creating blind spots in supply chains that regulatory frameworks can't yet address.

    Topics discussed:

    - Translating military governance and strategy frameworks into private-sector security at a systemically important financial institution
    - Managing shadow AI through platform enablement and secure experimentation rather than detection and prevention tactics
    - Data inventory and classification as the foundational challenge most organizations underestimate despite its criticality for AI governance
    - The tension between board strategy mandates and grassroots adoption pressure, and how platform teams bridge the gap without creating friction
    - Third-party risk amplification as vendors embed AI features without transparency, notice, or updated contractual language
    - The limits of awareness training when synthetic actors become indistinguishable from humans in video communications
    - AI use cases in security tooling focused on modeling normal behavior and reducing triage burden rather than autonomous response
    - Building high-performing security teams around ethics, mission, and non-linear career experience rather than purely technical credentials
    - Treating employees as security team participants at scale, and how that shifts organizational dynamics from adversarial to collaborative

    39 min
  3. EP 27 — Turntide's Paul Knight on Zero Trust for Unpatchable Production Systems

    JAN 15

    When a manufacturer discovers its IP and other valuable data have been encrypted or deleted, the company faces existential risk. Paul Knight, VP Information Technology & CISO at Turntide, explains why OT security operates under fundamentally different constraints than IT: you can't patch legacy systems when regulatory requirements lock down production lines, and manufacturer obsolescence means the only "upgrade" path is a pricey machine replacement. His zero trust implementation focuses on compensating controls around unpatchable assets rather than attempting wholesale modernization, and his crown jewel methodology starts with regulatory requirements and threat actor motivations specific to manufacturing. Paul also touches on how AI testing delivered 300-400% speed improvements analyzing embedded firmware logs and identifying real-time patterns in test data, eliminating the Monday-morning bottleneck of manual log review. Their NDA automation failed on consistency, revealing the current boundary: AI handles quantitative pattern detection but can't replace judgment-dependent tasks. Paul warns that the security industry remains in the "sprinkling stage," where vendors add superficial AI features, while the real shift comes when threat actors weaponize sophisticated models, creating an arms race in which defensive operations must match offensive AI processing power.

    Topics discussed:

    - Implementing zero trust architecture around unpatchable legacy OT systems when regulatory requirements prevent upgrades
    - Identifying manufacturing crown jewels through threat actor motivation analysis, like production stoppage and CNC instruction sets
    - Achieving 300-400% faster embedded firmware testing cycles using AI for real-time log analysis and pattern detection in test data
    - Understanding AI consistency failures in legal document automation, where 80% accuracy creates liability rather than delivering value
    - Applying compensating security controls when manufacturer obsolescence makes the only upgrade path a costly replacement
    - Navigating the current "sprinkling stage" of security AI, where vendors add superficial features rather than reimagining defensive operations
    - Preparing for an AI-driven threat landscape where offensive operations force defensive systems to match sophisticated model processing power
    - Building trust frameworks for AI adoption when executives question data exposure risks from systems requiring high-level access

    26 min
  4. EP 26 — Handshake's Rupa Parameswaran on Mapping Happy Paths to Catch AI Data Leakage

    12/19/2025

    Rupa Parameswaran, VP of Security & IT at Handshake, tackles AI security by starting with mapping happy paths: document every legitimate route for accessing, adding, moving, and removing your crown jewels, then flag everything outside those paths. When vendors like ChatGPT inadvertently get connected to an entire workspace instead of individual accounts (scope creep she's witnessed firsthand), these baselines become your detection layer. She suggests building lightweight apps that crawl vendor sites for consent and control changes, addressing the reality that nobody reads those policy update emails.

    Rupa also reflects on the data labeling bottlenecks that block AI adoption at scale. Most organizations can't safely connect AI tools to Google Drive or OneDrive because they lack visibility into what sensitive data exists across their corpus. Regulated industries handle this better, not because they're more sophisticated but because compliance requirements force the discovery work. Her recommendation for organizations hitting this wall is self-hosted solutions contained within a single cloud provider rather than reverting to bare metal infrastructure. The shift treats security as quality engineering, making just-in-time access and audit trails the default path, not an impediment to velocity.

    Topics discussed:

    - Mapping happy paths for accessing, adding, moving, and removing crown jewels to establish baselines for anomaly detection systems
    - Building lightweight applications that crawl vendor websites to automatically detect consent and control changes in third-party tools
    - Understanding why data labeling and discovery across unstructured corpora blocks AI adoption beyond pilot-stage deployments
    - Implementing just-in-time access controls and audit trails as default engineering paths rather than friction points for development velocity
    - Evaluating self-hosted AI solutions within a single cloud provider versus bare metal infrastructure for containing data exposure risks
    - Preventing inadvertent workspace-wide AI integrations when individual account connections get accidentally expanded during rollouts
    - Treating security as a pillar of quality engineering, making secure options easier than insecure alternatives for teams
    - Addressing authenticity and provenance challenges in AI-curated data, where validating truthfulness is currently nearly impossible
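    The happy-path idea reduces to a simple allowlist-plus-diff: enumerate every sanctioned (asset, action, actor) route to a crown jewel, then flag any observed access event outside that set. A minimal sketch, in which the asset, action, and actor names are hypothetical examples rather than anything from the episode:

```python
# Minimal sketch of happy-path baselining for anomaly detection.
# The specific assets, actions, and actors below are hypothetical.

HAPPY_PATHS = {
    # (asset, action, actor) tuples documenting every legitimate route
    ("customer_db", "read",  "billing-service"),
    ("customer_db", "write", "signup-service"),
    ("customer_db", "read",  "oncall-engineer"),  # break-glass path, audited
}

def flag_anomalies(events):
    """Return every access event that falls outside the documented happy paths."""
    return [e for e in events
            if (e["asset"], e["action"], e["actor"]) not in HAPPY_PATHS]

events = [
    {"asset": "customer_db", "action": "read", "actor": "billing-service"},
    # a vendor connector granted workspace-wide scope instead of one account:
    {"asset": "customer_db", "action": "read", "actor": "chatgpt-connector"},
]
anomalies = flag_anomalies(events)
print(anomalies)  # only the unexpected workspace-wide connector is flagged
```

    In practice the baseline would be fed by access logs and the happy-path set maintained as reviewed configuration, but the detection logic itself stays this simple: anything not explicitly documented is worth a look.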

    25 min
  5. EP 25 — Cybersecurity Executive Arvind Raman on Hand-in-Glove CDO-CISO Partnership

    12/02/2025

    Arvind Raman, a board-level cybersecurity executive who held CISO roles at BlackBerry and Mitel, rebuilt cybersecurity from a compliance function into a business differentiator. His approach reveals why organizations focusing solely on tools miss the fundamental issue: without clear data ownership and accountability, no technology stack solves visibility and control problems. He identifies the critical blind spot that too many enterprises overlook in their rush to adopt AI and cloud services without proper governance frameworks, particularly well-meaning employees who create insider risk through improper data usage rather than malicious intent.

    The convergence of cyber risk and resilience is reshaping CISO responsibilities beyond traditional security boundaries. Arvind explains why quantum readiness requires faster encryption agility than most organizations anticipate, and how machine-speed governance will need to operate in real time, embedded directly into tech stacks and business objectives by 2030.

    Topics discussed:

    - How cybersecurity evolved from compliance checkboxes to business enablement and resilience strategies that boards actually care about
    - The critical blind spots in enterprise data security, including unclear data ownership, accountability gaps, and insider risks
    - How shadow AI creates different risks than shadow IT, requiring governance committees and internal alternatives, not prohibition
    - Strategies for balancing security with innovation speed by baking security into development pipelines and business objectives
    - Why AI functions as both threat vector and defensive tool, particularly in detection, response, and autonomous SOC capabilities
    - The importance of data governance frameworks that define what data can enter AI models, with proper versioning, testing, and monitoring
    - How quantum computing readiness requires encryption agility much faster than organizations anticipate
    - The emerging convergence of cyber risk and resilience, eliminating silos between IT security and business continuity
    - Why optimal CISO reporting structures depend on organizational maturity and industry
    - The rise of Chief Data Officers and their partnerships with CISOs for managing data sprawl, ownership, and holistic risk governance

    22 min
  6. EP 24 — Apiiro's Karen Cohen on Emerging Risk Types in AI-Generated Code

    10/30/2025

    AI coding assistants are generating pull requests with 3x more commits than human developers, creating a code review bottleneck that manual processes can't handle. Karen Cohen, VP of Product Management at Apiiro, warns that AI-generated code introduces different risk patterns, particularly around privilege management, that are harder to detect than traditional syntax errors. Her research shows a shift from surface-level bugs to deeper architectural vulnerabilities that slip through code reviews, making automation not just helpful but essential for security teams.

    Karen's framework for contextual risk assessment evaluates whether vulnerabilities are actually exploitable by checking if they're deployed, internet-exposed, and tied to sensitive data, moving beyond generic vulnerability scores to application-specific threat modeling. She argues developers overwhelmingly want to ship quality code, but security becomes another checkbox when leadership doesn't prioritize it alongside feature delivery.

    Topics discussed:

    - AI coding assistants generating 3x more commits per pull request, overwhelming manual code review processes and security gates
    - The shift from syntax-based vulnerabilities to privilege management risks in AI-generated code that are harder to identify during reviews
    - Implementing top-down and bottom-up security strategies to secure executive buy-in while building grassroots developer credibility and engagement
    - A contextual risk assessment framework evaluating deployment status, internet exposure, and secret validity to prioritize app-specific vulnerabilities beyond CVSS scores
    - Transitioning from siloed AppSec scanners to unified application risk graphs that connect vulnerabilities, APIs, PII, and AI agents
    - Developer overwhelm driving security deprioritization when leadership doesn't communicate how vulnerabilities impact real end users and business outcomes
    - The future of code security involving agentic systems that continuously scan using architecture context and real-time threat intelligence feeds
    - Balancing career growth by choosing scary positions with psychological safety and gaining experience as both individual contributor and team player
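    The contextual assessment described above can be sketched as a simple re-scoring pass: start from a generic severity score, zero it out if the vulnerable code isn't deployed anywhere, and weight it up for internet exposure and sensitive-data proximity. This is an illustrative sketch, not Apiiro's actual scoring model; the field names, weights, and example findings are all assumptions.

```python
# Hedged sketch of contextual vulnerability triage: scale a generic
# severity score by exploitability context. Weights are illustrative.

from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    cvss: float                  # generic severity, 0-10
    deployed: bool               # is the vulnerable code actually running?
    internet_exposed: bool
    touches_sensitive_data: bool

def contextual_priority(f: Finding) -> float:
    """Re-score a finding by context; undeployed code gets priority zero."""
    if not f.deployed:
        return 0.0
    score = f.cvss
    score *= 2.0 if f.internet_exposed else 1.0       # reachable from outside
    score *= 1.5 if f.touches_sensitive_data else 1.0  # blast radius includes PII
    return score

findings = [
    Finding("sql-injection (dead branch)", 9.8, deployed=False,
            internet_exposed=False, touches_sensitive_data=True),
    Finding("over-privileged service role", 6.5, deployed=True,
            internet_exposed=True, touches_sensitive_data=True),
]
ranked = sorted(findings, key=contextual_priority, reverse=True)
print([f.name for f in ranked])  # the live privilege issue outranks the CVSS 9.8
```

    The inversion in the example is the whole point: a critical-rated bug in undeployed code drops below a moderate but live, exposed privilege issue, which matches the shift Karen describes from generic scores to application-specific threat modeling.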

    20 min
  7. EP 23 — IBM's Nic Chavez on Why Data Comes Before AI

    10/14/2025

    When IBM acquired DataStax, it inherited an experiment that proved something remarkable about enterprise AI adoption. Project Catalyst gave everyone in the company — not just engineers — a budget to build whatever they wanted using AI coding assistants. Nic Chavez, CISO of Data & AI, explains why this matters for the 99% of enterprise AI projects currently stuck in pilot purgatory: the technical barriers to creating useful tools have collapsed.

    As a member of the World Economic Forum's CISO reference group, Nic has visibility into how the world's largest organizations approach AI security. The unanimous concern is that employees are accidentally exfiltrating sensitive data into free LLMs faster than security teams can deploy internal alternatives. The winning strategy isn't blocking external AI tools but deploying better internal options that employees actually want to use.

    Topics discussed:

    - Why less than 1% of enterprise AI projects move from pilot to production
    - How vendor-push versus customer-pull dynamics create misalignment with overall enterprise strategy
    - The emergence of accidental data exfiltration as the primary AI security risk when employees dump confidential information into free LLMs
    - How Project Catalyst democratized AI development by giving non-technical employees budgets to build with coding assistants, proving the technical barrier to useful tool creation has dropped dramatically
    - The strategy of making enterprise AI "the cool house to hang out at" by deploying internal tools better than external options
    - Why the velocity gap between attackers and enterprises in AI deployment comes down to procurement cycles versus instant hacker decisions for deepfake creation
    - How the World Economic Forum's Chatham House Rule enables CISOs from the world's largest companies to freely exchange ideas about AI governance without attribution concerns
    - The role of LLM optimization in preventing superintelligence trained on poisoned data by establishing data provenance verification
    - Why Anthropic's copyright settlement signals the end of the "ask forgiveness, not permission" approach to training data sourcing
    - How edge intelligence versus cloud centralization decisions depend on data freshness requirements and whether streaming updates from vector databases can supplement local models

    32 min
  8. EP 22 — Databricks' Omar Khawaja on Why Inertia Is Security's Greatest Enemy

    09/18/2025

    What if inertia — not attackers — is security's greatest enemy? At Databricks, CISO Omar Khawaja transformed this insight into a systematic approach that flips traditional security thinking on its head and treats employees as assets rather than threats.

    Omar offers his T-junction methodology for breaking organizational inertia: instead of letting teams default to existing behaviors, he creates explicit decision points where continuing the status quo becomes impossible. This approach drove thousands of employees to voluntarily take optional security training in a single year.

    There's also Databricks' systematic response to AI security chaos. Rather than succumb to "top five AI risks" thinking, Omar's team catalogued 62 specific AI risks across four subsystems: data operations, model operations, the serving layer, and unified governance. Their public Databricks AI Security Framework (DASF) provides enterprise-ready controls for each risk, moving beyond generic guidance to actionable frameworks that work whether or not you're a Databricks customer.

    Topics discussed:

    - The T-junction framework to systematically break organizational inertia by eliminating default paths and forcing explicit decision-making
    - A human risk management strategy of moving to behavior-driven programs that convert employees from liabilities to champions
    - The 62-risk AI security classification of data operations, model operations, serving layer, and governance risks, with specific controls for each
    - Methods for understanding true organizational risk appetite across business units, including the "double-check your math" approach
    - A four-component agent definition and the specific risks emerging from chain-of-thought reasoning and multi-system connectivity
    - Why an "AI strategy" creates shiny object syndrome, and how to instead use AI to accelerate existing business strategy

    32 min
