DatAInnovators & Builders

Nexla

DatAInnovators & Builders features Chief Data Officers and data leaders sharing real strategies for conquering data complexity and building AI solutions that work. Host Saket Saurabh, CEO of Nexla, delivers practical insights on tackling data variety, moving AI from pilot to production, and making transformation actually happen.

Episodes

  1. The delegation test for AI: If an intern can't succeed with your context, neither will your model

    JAN 27

    Marcel Santilli built GrowthX to $13 million ARR without hiring an AE until three months ago. His approach: 170 paid workshops at $500+ each validated exactly what the market needed before writing a line of platform code. The methodology behind it came from his time as Deepgram's CMO, where AI workflows plus human judgment generated 3,000 continuously improving pages and helped 4x revenue in three months. His framework for AI implementation challenges the tools-first mentality plaguing most data teams.

    Topics discussed:
    - Three-role framework replacing traditional GTM engineers: process architects, input calibrators, output bar raisers
    - Context engineering as delegation test: if an intern can't succeed with your inputs, neither will AI
    - Deep research architecture: internal knowledge bases combined with external signal processing before drafting
    - Workshop revenue as product validation: charging $500 to teach eliminates assumption risk
    - Cohort analysis through narrative transformation: converting raw session data into plain English before pattern analysis
    - Planning layer between research and execution: evaluating task requirements against available context
    - Post-processing with human bar raisers: calibrating system improvements rather than fixing individual outputs
    - Forward-deployed delivery model: solving customer problems directly reveals what to automate
    - Output-first engineering: working backwards from deliverable to determine technical requirements

    43 min
  2. Ortecha's Stephen Gatchell On The Data Governance Gap That's Blocking Your AI Production Deployment

    JAN 13

    Companies turn on Microsoft Copilot or Glean, then shut them off a month later after discovering sensitive data exposure across their environment. Stephen Gatchell, Partner and Head of AI Strategy at Ortecha, explains why this pattern keeps repeating and what it takes to actually get enterprise AI into production safely. Stephen breaks down the real blockers: unstructured data at petabyte scale that organizations never cataloged, duplicate files spreading PII across 15 different locations, and retention policies that exist on paper but never get enforced. He worked with EDM Associates to formally define what a data product actually is, and explains why most companies assign data owners without ever telling them what their responsibilities are. His framework for moving AI from lab to production starts with cross-functional governance committees and ends with treating AI models as measurable assets with clear ROI criteria.

    Topics discussed:
    - Why companies turn on enterprise copilots then shut them off within weeks
    - The shift from structured to unstructured data as AI's primary governance challenge
    - Shadow AI risks from employees uploading sensitive data to public LLMs
    - Building cross-functional governance committees across security, privacy, and data teams
    - Defining data products with owners, purpose of use, and lifecycle management
    - Using generative AI to automate semantic layer creation and business glossary mapping
    - Moving from large language models to small language models for specific agent tasks
    - The production deployment framework from assessment through continuous monitoring
    - New attack surfaces in RAG pipelines including vector databases and prompt storage
    - Why scanning techniques evolved from metadata reading to actual data classification at scale

    This conversation was recorded while Stephen was Vice President, Data and AI Strategy at BigID.

    47 min
  3. BigPanda's Alexander Page On Building AI Agents That Internalize Corrections

    12/23/2025

    Most AI agent demos still look great but fall apart in production. At BigPanda, Alexander Page's team solved this by building systems that internalize user corrections and improve without requiring source data fixes. The Engineering Director of Applied AI shares with Saket how his team designs production-grade AI agents for IT operations. When a user flags that step seven of a retrieved runbook is outdated, the system internalizes that correction with appropriate weighting and handles conflicts on future retrievals, even when nobody updates the original Confluence page. He argues this capability is becoming a baseline expectation: users accept that AI systems won't be perfect, but they increasingly expect systems to learn when shown the right answer. Page also breaks down multi-agent architecture decisions. When you have 100 tools, giving them all to one agent degrades tool selection. His team isolates decision-making by domain, spinning up specialized sub-agents at runtime based on user intent. For evals, they focus on tool call sequences rather than final outputs, making it easier to pinpoint where agent chains break down.

    Topics discussed:
    - Internalizing user corrections when source data stays outdated
    - Why correction capability is becoming a baseline user expectation
    - Evaluating agent chains by tool call sequences, not outputs
    - Breaking large tool sets into domain-specific agents
    - MCP security tradeoffs and when A2A fits better
    - Runtime decisions on which sub-agents to spin up
    - Maintaining a prototype shelf for future foundation model capabilities
    - Context engineering over expanding context windows

    41 min
  4. “AI build, human verify, AI refine”: How CurieTech flipped the IT engineering workflow, with Ashish Thusoo

    12/09/2025

    Most IT teams burn months integrating business systems. Ashish Thusoo's agents at CurieTech AI deliver 70-80% productivity improvements by changing one thing: the loop shifts from human build, human verify, human refine to AI build, human verify, AI refine. That compression happens because machines can now build and refine while humans focus their verification effort where it matters. From co-creating Apache Hive at Facebook to serving as General Manager of AI at AWS, Ashish brings 25 years of infrastructure experience to automating IT engineering. CurieTech targets the reality most companies face: not building software products but making CRM, ERP, and financial systems talk to each other across 1,000+ business systems. His method treats production agent development as a data problem first. Build benchmarks, run systematic error analysis across every failure, then decide whether fixes need more context, fine-tuning, or expanded knowledge bases.

    Topics discussed:
    - Shifting from human build/verify/refine loops to an AI build/human verify/AI refine workflow
    - Building benchmarks and eval sets before prototyping agents for production quality
    - Running painstaking error analysis on every agent mistake to classify root causes
    - Choosing between fine-tuning and RAG based on knowledge stability and response speed requirements
    - Creating synthetic datasets with statistical sampling methods for human verification loops
    - Handling multimodal enterprise data quality with task-specific metrics per workflow
    - Hiring engineers based on how they guide AI through problem decomposition
    - Automating version upgrades across 1,000+ business systems with reasoning-capable agents
    - Applying SaaS-era governance patterns to agent proliferation in enterprises
    - Maintaining speed as a core entrepreneurial skill when technology shifts monthly, not yearly

    56 min
  5. Databricks' Robin Sutara On Why AI Training Fails - And Persona-Based Enablement That Works

    11/25/2025

    Robin Sutara is Field Chief Data Strategy Officer at Databricks, where she works with organizations facing a common problem: employees sit through AI training, check the box, then nothing changes. The issue isn't awareness. It's that a store manager, plant floor worker, and data scientist need completely different capabilities, but most organizations treat them identically. Robin breaks down why generic AI literacy programs fail to drive behavior change and how to build persona-specific enablement instead. She explains why your data teams need to sit with domain users (like riding in trucks with electric utility line workers) to understand their actual workflows, how to update performance KPIs to reinforce new behaviors, and why organizations should study their pandemic response as a template for AI transformation speed. The conversation covers JetBlue's production agentic systems, JP Morgan's executive-level AI representation, the Databricks AI Security Framework's 62 risk factors for prioritization, and the specific criteria for choosing which use cases to ship when data isn't perfect across your entire estate.

    Topics discussed:
    - Persona-based enablement replacing one-size-fits-all AI literacy programs
    - Sitting with domain users to translate AI capabilities into changed workflows
    - Updating performance KPIs and organizational processes to reinforce AI behaviors
    - Defining acceptable failure rates and safe experimentation spaces for pilots
    - Applying pandemic-era instant transformation tactics to AI adoption cycles
    - Prioritizing use cases where data foundations are ready while modernizing the rest
    - JetBlue's agentic systems aggregating weather, maintenance, staffing, and customer data
    - JP Morgan Chase's approach to AI representation at the executive level
    - Databricks AI Security Framework's 62 risk factors for balancing innovation and controls

    43 min
