The AI Data Fabric Show

Promethium

Welcome to The AI Data Fabric Show where we explore what it means to create a modern data experience for everyone in an organization, from data analysts to non-technical business users. Each episode features interviews with data leaders, practitioners, and experts who share their insights and strategies for designing and delivering exceptional data experiences that drive business value.

  1. EP 13 – From Half the Workforce to 1.5%: What Agriculture Teaches Us About AI

    1 day ago

    What does a 175-year-old farm equipment company have to teach us about the agentic era? More than most expect. Mano Mannoochahr, former CDAIO at Verizon and Travelers and a top 100 AI leader, spent 17 years at John Deere watching petabytes of farm data reshape everything from yield predictions to fuel tank sizing. His argument is simple and grounded: data transformation is not about technology, it is about the decisions it makes possible for the first time. In this episode, Prat Moghe talks with Mano about three hard-won lessons from three decades of data leadership. At John Deere, precision agriculture data turned counterintuitive insights into real business models. At Travelers, becoming the first formal CDO meant training 20,000 people and using aerial imagery to close the gap between recorded data and ground truth. And on the agentic shift, Mano points to the agricultural revolution as the most instructive parallel: in the early 1900s over 50% of the US workforce was in agriculture; today that number is 1.5%, and the overall impact on jobs and productivity was positive. His case for why AI follows the same arc is one of the most grounded takes you will hear from a leader at this level.

    Topics discussed:

    - GPS-precision agriculture and petabyte-scale farm data
    - Counterintuitive field insights and new farmer business models
    - Optimizing fuel tanks and product engineering from fleet data
    - The "show me, advise me, do it for me" customer trust framework
    - Building a 20,000-person data culture at Travelers
    - Aerial imagery and AI closing the ground-truth gap in insurance
    - Agriculture as a historical lens for workforce displacement and AI
    - Work AI can do that humans could never do at scale
    - How the CDAIO role has evolved from governance to transformation

    33 min
  2. EP 12 — How blending cell phone data with distribution systems drove 4% sales growth without changing products

    3 FEB

    Kjersten Margaret Moody's 4% sales uplift in Southeast Asia came from a data combination most CDOs overlook: cell phone movement patterns merged with internal distribution data. The insight was brutally simple yet invisible until the data revealed it. Convenience stores next to elementary schools were being restocked for night shift workers instead of parents picking up kindergarteners. Kjersten Moody, now CEO of North America at Elai and former CDO at Unilever, State Farm, and Prudential, walks through the mechanics of this win and why it represents a broader problem: most business strategies don't actually require AI to succeed, which means your data initiatives lack the forcing function of incentive alignment. Her governance framework from State Farm cuts through the bureaucracy paralysis: cars have brakes so they can go fast, not slow down.

    Topics discussed:

    - Merging cell phone movement data with internal distribution systems to identify restocking mismatches by location type
    - Structuring data office charters with explicit "what we don't do" boundaries to prevent scope creep and mission drift
    - Aligning AI initiatives to business strategy through incentive structures rather than relying on voluntary adoption
    - Building cross-functional teams that balance institutional knowledge with external thinking while screening for collaboration over brilliance
    - Establishing shared KPIs with CFOs and P&L owners instead of data office vanity metrics
    - Implementing governance frameworks that define clear operational boundaries to enable faster deployment cycles
    - Replacing passive-aggressive meeting culture where verbal agreement doesn't translate into resource commitment or execution
    - Accelerating business domain expertise by attending customer industry conferences instead of data and analytics technical events

    38 min
  3. EP 11 — Generac's Neil Bhandar On Speaking Data Fluently Across Marketing, HR & Supply Chain

    10/12/2025

    At Generac, Neil Bhandar expands his 20+ year career into IT. Previously at P&G, JPMorgan, and other companies, he operated within business functions. He's a business executive who works in data, not the other way around. That perspective explains how he sees the same sawtooth pattern in inventory replenishment and credit card balances, and how he launched Tide Coldwater to a 4% market share gain in under four weeks. At P&G during the second Bush administration, oil jumped from $45 to $105 per barrel. His team positioned an enzyme-activated detergent as consumer cost savings, combined it with down-counting to convert Tide from a retail loss leader into a profitable product, then targeted only retailers with high category close rates and golden households. Four weeks later: a 4% market share gain for a $7 billion brand. At JPMorgan, he used organizational network analysis to find which business units weren't communicating. When he tested whether "happy employees make happy customers," he found no data that validated it. His book The Cost of Curiosity addresses analytics' broken economics, where each follow-up question costs as much as the first. His biggest AI concern isn't hallucination but the sea of sameness when competitors use identical foundation models.

    Topics discussed:

    - Three-pronged retail strategy delivering 4% market share gains in four weeks for a $7 billion brand
    - Down-counting technique converting retail loss leader products into profitable SKUs with built-in retailer margin
    - Targeting golden households doing multiple loads per day through high close-rate retail channels
    - Applying sawtooth geometric patterns across inventory optimization and credit card balance modeling domains
    - Using organizational network analysis to surface invisible communication gaps at 250,000-person institutional scale
    - Testing whether the "happy employees make happy customers" assumption holds across a full employee survey dataset
    - Moving between supply chain, marketing, financial services, and HR by modulating technical accent per audience
    - Sea of sameness risk when competitors deploy identical pre-trained foundation models without differentiation
    - Resonance effects from multiple autonomous AI agents built on the same models making independent decisions
    - Reducing the marginal cost of answering analytical questions to approach zero through an Alexa-like interaction model

    31 min
  4. EP 10 — Rubrik's Ajay Sabhlok on Embedding Architecture as The IT Decision Layer

    16/10/2025

    Ajay Sabhlok made architecture the gatekeeper for every IT decision at Rubrik, not a review function that validates choices after they're made. This structural shift meant architects evaluated tools, processes, and vendor capabilities before budget conversations happened. The outcome: scaling from 700 employees to IPO without the shadow IT proliferation or rationalization projects that typically emerge when business units control procurement. The execution reveals how it works. At VMware, he performance tested Salesforce Service Cloud despite enterprise customers not doing that, found telecom provider bottlenecks, and co-developed DR capabilities with their product team. At Rubrik, every enterprise tool passes through a quarterly board with the CFO, CRO, and co-founder before purchase approval. When sales requested territory management software, the CRO delayed it three years until manual processes actually broke at scale. His "comprehensive thinking" framework maps failure scenarios during discovery, not after implementation. It's why Rubrik declared cloud-only infrastructure early and centralized all software budget authority under IT.

    Topics discussed:

    - Embedding architecture as gatekeeper versus post-decision review function
    - Quarterly C-suite Project Review Boards that control enterprise tool procurement
    - Performance testing SaaS vendors at enterprise scale despite vendor resistance
    - Co-developing missing product capabilities with vendor engineering teams
    - Timing tool purchases to operational breaking points versus business requests
    - Comprehensive thinking framework for mapping failure modes in discovery
    - Declaring cloud-only infrastructure before hitting organizational scale
    - Centralizing software budget authority to eliminate shadow IT formation

    35 min
  5. EP 9 — Fidelity’s Mihir Shah On The Data Revolution: 3 Decades of Architecture Lessons

    08/08/2025

    How do you convince executives to fund 14-month foundational builds when CDOs typically have quarters to prove value? After orchestrating multiple enterprise transformations at Fidelity—including bringing all enterprise data into a single cataloged, modeled platform—Mihir Shah reveals the tactical playbook that enabled 30 years of technology migrations without losing architectural coherence. Shah's approach stems from hard-won lessons at Churchill Insurance in the late 80s, where his team built real-time underwriting systems that processed "billions and billions of rows" while customers waited on calls. That foundational discipline—investing 14 months in architecture before launching—became his template for navigating technology shifts from mainframes to cloud-native AI platforms. His counterintuitive strategy: aggregate all planned data initiatives across business units to reveal the true cost of fragmented execution, then redirect that same budget toward unified foundations. The result? Technology-portable architectures that survive decades of vendor transitions while delivering immediate business value.

    Topics discussed:

    - Shah's enterprise aggregation framework: map all planned data warehouses, hardware refresh cycles, and modernization costs across business units over five years; this identical budget funds unified architecture while eliminating the compromise of delayed use case delivery.
    - Breaking the CDO death spiral of vertical use case execution that creates technical debt: aggregate requirements at enterprise or business unit level first, then execute against a unified blueprint rather than chasing individual departmental priorities.
    - The Churchill discipline applied to enterprise scale: 14-month foundation builds that process "billions and billions of rows" in real time, translating to modern architectures that handle enterprise query loads while maintaining sub-second response times.
    - Technology portability strategy for stateful data layers: accept vendor lock-in as inevitable for databases while maintaining CSP independence (Azure, GCP, AWS, Oracle) through deliberate technology selection across the entire analytics stack.
    - Data model invariance as competitive moat: while business processes change quarterly with management shifts, properly designed business data models remain stable across decades, making them the core IP worth protecting through technology transitions.
    - Structured data renaissance beyond the LLM hype: combining first-party transactional intelligence (general ledger, trades, sales) with unstructured processing rather than abandoning traditional analytics for generative AI completely.
    - Self-service at enterprise scale: enabling power users within departments to create curated views for finance, risk, compliance, and security functions rather than expecting end users to navigate 5,000-table analytics warehouses directly.
    - Role-specific data independence: eliminating cross-departmental data dependencies through persona-driven data design, so project managers access project status without finance teams, and finance reviews project health without project managers.

    27 min
  6. EP 8 — PuppyGraph’s Weimo Liu on Unlocking Graph Database Value Without Data Migration

    01/04/2025

    Ever wondered why graph databases promise so much but often fail to reach production? In this episode of The Data Fabric Show, Kaycee talks with Weimo Liu, CEO of PuppyGraph, about creating the "Trino of graph databases." Drawing from his experience at a previous graph startup and Google's F1 team, Weimo describes how PuppyGraph enables organizations to visualize complex relationships like money laundering patterns or supply chain dependencies without moving data from existing platforms like Snowflake, Oracle, or Databricks. By treating data logically as a graph while physically remaining as tables, PuppyGraph has broken through the adoption barrier that has plagued graph database implementations for years.

    Topics discussed:

    - How PuppyGraph enables organizations to leverage graph capabilities without moving data from existing sources like Oracle, Snowflake, or Databricks.
    - The critical insight that, despite strong interest and clear use cases, many graph database projects fail to reach production due to ETL complexity and data migration costs.
    - Why financial fraud detection benefits from graph visualization when tracing money through multiple fake accounts, offering clear relationship mapping that SQL queries can't match.
    - The strategic decision to partner rather than compete with established graph database vendors like Neo4j by developing complementary technology that expands the overall market.
    - How graph relationships provide enhanced semantic context for large language models compared to raw SQL tables, improving AI understanding of complex data relationships.
    - The development of multi-stage analytics pipelines where SQL processes initial data, graph visualizes relationships, and results feed back to data lakes for machine learning.
    - Why modern data lake architectures enable organizations to solve complex problems using specialized tools for each phase, eliminating the vendor lock-in that previously made integrated workflows prohibitively expensive.

    Check out PuppyGraph on LinkedIn, YouTube, and on X!

    34 min
  7. EP 7 — MTSI’s Joyce L. Myers on The Emotional Dimension of Data for Technical Teams

    27/02/2025

    Joyce L. Myers, CDO of MTSI, considers herself more of a "builder" digital officer who's architecting additions to an already successful foundation. In this episode of The Data Fabric Show, Joyce walks Kaycee through her approach to building upon the strong foundation of an employee-owned defense contractor with a 31-year history. She explains how MTSI recognized the need for better data organization as it continued to grow, comparing its situation to an entrepreneur with too many files in cabinets who can't find what they need. Joyce discusses the unique challenges of implementing data governance and AI solutions within the constraints of the public sector and defense industry, where security and protection of proprietary, sensitive, and classified information are paramount concerns. This security context creates additional complexity when adopting GenAI tools, requiring careful consideration before implementing solutions that might expose sensitive data.

    Topics discussed:

    - The architectural challenges of integrating data governance as the first CDO in a 31-year-old employee-owned defense contractor, requiring a "builder" mentality to preserve institutional knowledge while implementing modern practices.
    - How defense-specific security constraints create unique implementation pathways for GenAI, requiring careful validation processes and air-gapped environments that balance innovation against protection of classified information.
    - The implementation of a trust-driven data quality framework that transforms "garbage in, garbage out" to "goodness in, goodness out."
    - Strategies for enabling secure self-service analytics within highly regulated environments, emphasizing documented iterative processes that capture decision context for future governance and compliance requirements.
    - The evolution from tightly controlled ETL processes toward a more agile data fabric approach that creates a sandbox environment for experimentation before formalizing pipelines.
    - The comprehensive metadata context needed to create effective data products in defense applications, capturing not just data lineage but decision authority, classification levels, and intended analytical purpose.
    - Applying NASA's mission-focused organizational alignment to defense data teams, where every contributor understands how their component supports operational objectives regardless of technical specialty.
    - The development of an iterative data discovery workflow for security-conscious environments that preserves context while enabling business users to safely explore and refine analytical questions.

    27 min
  8. EP 6 — Penn State’s Bruce Desmarais on Social Data Analytics and Combating Misinformation

    07/01/2025

    In this episode of The Data Fabric Show, Kaycee speaks with Bruce Desmarais, PhD, Professor & Director of the Center for Social Data Analytics, Penn State University. Together, they dive into the role of advanced data science tools in analyzing social media data to address issues like misinformation and online bullying. Bruce shares insights on the collaboration between academia and tech companies. The conversation also explores the transformative potential of generative AI in social research, including the creation of synthetic social systems for experimentation.

    Topics discussed:

    - The significance of social data analytics in understanding human behavior and communication through digital traces left on social media.
    - Challenges researchers face in accessing and integrating diverse data sources, especially with changing data formats and proprietary data restrictions from social media companies.
    - The importance of collaboration between academia and industry in addressing real-world problems, with examples of successful partnerships that enhance data analytics efforts.
    - The role of generative AI in streamlining data analysis processes, including its potential to automate data tagging and quality checks in research.
    - Ethical considerations in social research, particularly when experimenting with interventions on social media platforms and the implications for freedom of speech.
    - The impact of generative AI on survey design and data collection, allowing researchers to generate realistic synthetic responses for more efficient analysis.
    - The need for data validation and quality checking in research, emphasizing the importance of unit testing to ensure accurate results from AI-generated outputs.
    - The future of social data analytics and the potential for creating synthetic social systems to explore complex interactions without ethical concerns or real-world consequences.
    - The ongoing challenges of data governance and security in enterprise analytics, highlighting why many organizations are hesitant to fully adopt generative AI technologies.

    27 min
