Beyond The Pilot: Enterprise AI in Action

VentureBeat

AI gets real here. On “Beyond the Pilot,” top business execs share what actually happens after the AI proof of concept — from infrastructure and org design to wins, failures, and ROI. Not theory, but deep dives into how they scaled AI that works.

  1. 100M Agents: Scaling the New Execution Stack with Intuit

    4D AGO

    100M Agents: Scaling the New Execution Stack with Intuit

    A QuickBooks customer discovered significant fraud by asking their AI assistant follow-up questions about transaction amounts that didn't add up. This isn't a demo — it's one of 3 million customers now using Intuit's AI agents in production, with 80.5% returning to use them again. Marianna Tessel, EVP and GM of QuickBooks (formerly CTO of Intuit), walks through the architecture decisions behind one of the first enterprise AI deployments at true scale. Intuit's "done-for-you" agents now automate book closing, reconciliation, transaction categorization, and payroll — but the breakthrough came when they realized chatbots alone weren't enough. Businesses wanted human experts integrated directly into AI workflows, creating what Intuit calls the "AI + HI" model (artificial intelligence + human intelligence). The results: invoices paid 5 days faster, 90% more paid in full, 30% reduction in manual work, and 62% of users reporting bookkeeping is easier. Tessel reveals the technical evolution: moving from monolithic agents to a dynamic orchestration layer that routes queries across multiple LLMs (including Intuit's proprietary FinLM built on open-source), 24,000 bank connections, and 600,000 customer attributes. The system now handles proactive anomaly detection, benchmarking against similar businesses, and even nascent vibe coding — all without requiring users to understand they're essentially programming workflows through natural language. She also addresses the "SaaS apocalypse" narrative head-on, explaining why QuickBooks saw 18% growth last quarter while competitors faced market pressure: durable data advantages and customer trust in financial accuracy matter more than ever when AI enters the mix. For enterprise builders navigating agent architecture, data grounding, and human-in-the-loop design, this is a rare look inside a working system serving millions. 
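The dynamic orchestration layer Tessel describes, routing each query to whichever model suits it rather than hard-wiring one monolithic agent, can be sketched in a few lines. Everything below (model names, routing rules) is a hypothetical illustration, not Intuit's actual Genos implementation:

```python
# Minimal sketch of a model-routing layer: a cheap classifier decides which
# model handles each query. Names and rules are illustrative only.

ROUTES = {
    "finance": "finlm",        # domain-tuned model for accounting questions
    "general": "general-llm",  # fallback general-purpose model
}

FINANCE_TERMS = {"invoice", "payroll", "reconcile", "transaction", "ledger"}

def classify(query: str) -> str:
    """Keyword heuristic standing in for a learned intent classifier."""
    words = set(query.lower().split())
    return "finance" if words & FINANCE_TERMS else "general"

def route(query: str) -> str:
    """Return the model that should handle this query."""
    return ROUTES[classify(query)]

print(route("process payroll for March"))   # finlm
print(route("Write a welcome email"))       # general-llm
```

In a production router the classifier itself is usually a small model, and the routing table carries cost and latency metadata per target, but the control flow is the same.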
    🎙️ GUEST: Marianna Tessel | EVP & GM, QuickBooks (Intuit)
    🎙️ HOSTS: Matt Marshall | VentureBeat, Sam Witteveen | VentureBeat
    00:00 Intro — Customer discovers fraud using QuickBooks AI
    03:26 Intuit Intelligence: Agents, BI, and human expertise integration
    05:20 First-time AI users and going beyond chatbots
    08:02 How Intuit decides which workflows to automate
    10:16 Sponsor: Outshift by Cisco
    10:38 Human-in-the-loop: When to insert experts vs. full automation
    13:00 The AI + HI model: Why customers want human verification
    15:24 Human expertise as confidence layer, not just AI check
    16:14 Proprietary data advantage: 24K bank connections, 600K attributes
    18:39 Benchmarking: "Businesses like me" — using aggregate data for competitive insights
    19:52 First-party vs. third-party data strategy
    21:38 Addressing the "SaaS apocalypse" narrative — why Intuit grew 18% last quarter
    24:39 Proactive AI: Anomaly detection for marketing expense spikes
    25:20 Builder perspective: Leaning on LLM orchestration, not use-case-by-use-case builds
    27:32 Architecture evolution: From monolithic agents to dynamic tools and skills
    29:10 Composite UX: Chat side-by-side with traditional workflows
    30:35 Multi-model strategy: Genos platform, FinLM, and model routing
    31:16 Vibe coding and actions: Letting users automate without realizing they're coding
    32:47 Personalization wave: Memory, persistence, and user-defined workflows
    35:08 Docker background and primitives that survive disruption
    36:00 OpenClaw and agent automation: Real revolution or risky experimentation?
    #EnterpriseAI #AIAgents #QuickBooks #Intuit #LLMOrchestration #AgenticAI
    Presented by Outshift by Cisco
    Outshift is Cisco’s emerging tech incubation engine and driver of Agentic AI, quantum, and next-gen infrastructure. Learn more at outshift.cisco.com.
    About VentureBeat: VentureBeat equips enterprise technology leaders with the clearest, expert guidance on AI – and on the data and security foundations that turn it into working reality.
    🔗 CONNECT WITH US
    Subscribe to our Newsletters for technical breakdowns: https://venturebeat.com/newsletters
    Visit VentureBeat: https://venturebeat.com
    Subscribe to VentureBeat: https://www.youtube.com/@VentureBeat
    Subscribe to the full podcast here:
    Apple: https://podcasts.apple.com/us/podcast/venturebeat/id1839285239
    Spotify: https://open.spotify.com/show/4Zti73yb4hmiTNa7pEYls4
    YouTube: https://www.youtube.com/VentureBeat
    Learn more about your ad choices. Visit megaphone.fm/adchoices

    38 min
  2. The AI War For Your Personal Context

    MAR 18

    The AI War For Your Personal Context

    Major SaaS companies including Salesforce, Intuit, and ServiceNow saw stock drops of 45-50% as enterprises shift from bloated software suites to personalized AI agents that users can control directly. Microsoft just capitulated this week, opening Copilot to allow Claude Cowork-style functionality — a clear signal that the "build vs. buy" calculus for enterprise software has fundamentally changed. Matt Marshall and Sam Witteveen break down why personalization is no longer optional for enterprise products. Companies like Zoom now offer personalized workflows that access your conversation history and profile context. Infrastructure decisions are moving fast: token budgets must account for per-user context, identity management has become the biggest technical challenge for agent deployments, and "skills" (not just MCP) are emerging as the key abstraction layer. Zoom's Li Juan explains how their AI Companion moved beyond generic templates to user-controlled personalization: tracking opinion divergence in meetings, generating follow-up emails with specific context controls, and giving users explicit prompt examples instead of "good luck with your prompt." This is the new standard. If your product can't reason over which tools to use, which skills to apply, and which context to pull — all personalized to the individual user — you're competing with something that can be built in 10 days (Cowork's timeline). The agents-are-taking-over reality is here: multi-user agent architectures require thinking about context contamination, security postures for computer-use capabilities, and whether you're building internal agents or buying SaaS that will adapt. Sam's take: "AGI is agentic, and we're well along that continuum now." 
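One concrete consequence of "token budgets must account for per-user context" is that each user's agent has to pack its own history into a fixed prompt budget. A minimal sketch of that packing step, with invented fields and budget numbers, might look like:

```python
# Sketch of per-user context assembly under a token budget: candidate context
# pieces are ranked and greedily packed until the budget is spent, so each
# user's agent sees their own history without blowing up the prompt.
# All fields and numbers here are illustrative.
def pack_context(pieces: list[dict], budget_tokens: int) -> list[str]:
    """Keep the highest-priority pieces that fit within the budget."""
    chosen, used = [], 0
    for piece in sorted(pieces, key=lambda p: p["priority"], reverse=True):
        if used + piece["tokens"] <= budget_tokens:
            chosen.append(piece["text"])
            used += piece["tokens"]
    return chosen

user_context = [
    {"text": "profile: sales lead, EMEA", "tokens": 20,  "priority": 3},
    {"text": "last meeting summary",      "tokens": 400, "priority": 2},
    {"text": "full conversation history", "tokens": 900, "priority": 1},
]
print(pack_context(user_context, budget_tokens=500))
# ['profile: sales lead, EMEA', 'last meeting summary']
```

Real systems rank by relevance scores rather than static priorities, but the budget constraint works the same way.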
    🎙️ HOSTS: Matt Marshall | CEO, VentureBeat & Sam Witteveen | VentureBeat
    📺 CHAPTERS:
    00:00:00 Intro — The SaaS Apocalypse
    00:01:00 The Personalization Imperative
    00:02:00 Microsoft Copilot Capitulates to Cowork
    00:03:00 From Template Selection to Skill Generation
    00:04:00 The Land Grab for User Context
    00:05:00 Zoom's Li Juan on Personalized Meeting Intelligence
    00:06:00 Why Context = Magic in Enterprise AI
    00:07:00 Product-Market Fit in the Agent Era
    00:08:00 Metrics That Matter: JP Morgan's 30,000 Agents
    00:09:00 Build vs. Buy: The New Calculus
    00:10:00 Why Slack Might Win on Agent Identity Management
    00:11:00 Zoom's AI Companion: Control Over Randomness
    00:13:00 Li Juan on Purposeful Prompts and Reference Control
    00:15:00 Multi-Agent vs. Multi-User: The Critical Distinction
    00:16:00 LinkedIn's GPU Optimization Strategy
    00:17:00 AGI Is Agentic: Where

    21 min
  3. LangChain: What OpenClaw Got Right (And Why Enterprises Can't Have It)

    MAR 4

    LangChain: What OpenClaw Got Right (And Why Enterprises Can't Have It)

    LangChain told employees they cannot install OpenClaw on company laptops due to "massive security risk" — yet this unhinged approach is exactly what makes it work. Harrison Chase unpacks why OpenClaw succeeds where AutoGPT failed, and why context engineering, not just smarter models, separates demo agents from production-ready systems. The shift is architectural: Modern agent harnesses like Claude Code now dump 40,000-token API responses to file systems instead of cramming them into message history. LangChain's Deep Agents framework emerged from reverse-engineering Claude Code, Codex, and Deep Research — discovering they all use planning via to-do lists, subagents for focused work, file systems for context control, and 2000-line system prompts. Harrison explains why coding agents make surprisingly good general-purpose agents, how prompt caching creates accuracy trade-offs, and why "context engineering" — bringing the right information in the right format to the LLM at the right time — matters more than framework choice. For enterprise teams: Harrison breaks down LangGraph (agent runtime with durable execution), LangChain (unopinionated agent framework), and Deep Agents (batteries-included harness). The conversation covers when to use graphs vs. loops, how skills differ from tools and subagents, and why nine months ago marked the inflection point where models could finally run reliably in autonomous loops. 
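The file-system pattern Harrison describes, writing a huge tool response to disk and giving the model only a short pointer, can be sketched directly. The names below are illustrative stand-ins, not the Deep Agents API:

```python
# Sketch of file-system context control: instead of appending a 40,000-token
# API response to the message history, write it to disk and keep only a
# pointer plus a preview in the context the LLM actually sees.
import tempfile
from pathlib import Path

workdir = Path(tempfile.mkdtemp())
messages = []  # the context sent to the LLM

def record_tool_result(name: str, payload: str, inline_limit: int = 2000) -> None:
    """Keep small results inline; offload large ones to a file."""
    if len(payload) <= inline_limit:
        messages.append({"role": "tool", "content": payload})
    else:
        path = workdir / f"{name}.txt"
        path.write_text(payload)
        # The model sees a pointer and a preview, not the full dump.
        messages.append({
            "role": "tool",
            "content": f"[saved {len(payload)} chars to {path.name}; "
                       f"preview: {payload[:200]}...]",
        })

record_tool_result("search", "short answer")
record_tool_result("api_dump", "x" * 50_000)
print(len(messages[1]["content"]) < 300)  # True: context stays small
```

The agent can then read slices of the saved file on demand, which is what makes this cheaper and more accurate than cramming everything into history.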
    🎙️ GUEST: Harrison Chase | Co-founder & CEO, LangChain
    🎙️ HOSTS: Matt Marshall | CEO, VentureBeat | Sam Witteveen | VentureBeat
    CHAPTERS:
    00:00 Intro — OpenClaw security warning
    01:00 LangChain's origin story: From open source library to company
    03:00 Early LLM patterns: RAG and SQL agents before ChatGPT
    05:00 Why OpenClaw works where AutoGPT failed
    08:00 Step change in agent capability: The summer 2024 inflection
    11:00 Deep Agents unpacked: Planning, subagents, file systems, prompting
    14:00 Skills vs tools vs subagents
    16:00 LangGraph, LangChain, and Deep Agents architecture
    19:00 Context engineering: What the LLM sees vs what developers see
    21:00 File systems for context management vs AutoGPT's approach
    #EnterpriseAI #AIAgents #LangChain #AgenticAI #LLMInfrastructure

    56 min
  4. LexisNexis on Why Standard RAG Fails in Law

    FEB 18

    LexisNexis on Why Standard RAG Fails in Law

    On February 2nd, a single plugin wiped nearly $800 billion off the enterprise software market. Wall Street is terrified that AI agents are about to eat the legal industry's lunch. But LexisNexis isn't scared—they're building the moat. In this episode of Beyond the Pilot, Min Chen (Chief AI Officer, LexisNexis) reveals the architecture they built to counter the "LLM wrapper" revolution. Moving beyond standard RAG, Min breaks down their move to GraphRAG, their deployment of agentic workflows (using Planner and Reflection agents), and why they created a proprietary "Usefulness Score" because standard accuracy metrics weren't good enough for lawyers. AI Gets Real Here. No theory, just the execution roadmap for deploying AI in a zero-error environment.
    In this episode, we cover:
    - The "Dangerous RAG" Problem: Why semantic search fails in professional domains (retrieving "relevant" but overruled cases) and how "Point of Law" knowledge graphs fix it.
    - The "Usefulness" Metric: The 8 sub-metrics LexisNexis uses (including Authority, Comprehensiveness, and Fluency) to grade AI quality.
    - Agentic ROI: How deploying a "Planner Agent" to break down complex questions increased answer usefulness by 20%.
    - The "Reflection Agent": Using a secondary agent to critique and refine drafts in real time.
    - Hallucination Detection: Why you should never rely on an LLM to judge its own hallucinations (and the deterministic code they use instead).
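A composite score built from graded sub-metrics is straightforward to sketch. Only Authority, Comprehensiveness, and Fluency are named in the episode; the fourth metric and all the weights below are invented for illustration and are not LexisNexis's actual formula:

```python
# Hedged sketch of a composite "usefulness" score: per-metric grades in [0, 1]
# combined by a weighted average. Weights and the citation_validity metric
# are hypothetical placeholders.
WEIGHTS = {
    "authority": 0.3,          # named in the episode
    "comprehensiveness": 0.3,  # named in the episode
    "fluency": 0.1,            # named in the episode
    "citation_validity": 0.3,  # hypothetical placeholder metric
}

def usefulness(scores: dict[str, float]) -> float:
    """Weighted average of per-metric grades in [0, 1]."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return sum(WEIGHTS[m] * scores[m] for m in WEIGHTS)

answer = {"authority": 0.9, "comprehensiveness": 0.8,
          "fluency": 1.0, "citation_validity": 0.5}
print(round(usefulness(answer), 2))  # 0.76
```

The point of splitting the metric this way is that each sub-grade can come from a different judge (deterministic code for citations, a rubric-following model for fluency) instead of one opaque "accuracy" number.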
    ⏱️ TIMESTAMPS
    00:00 - Intro: The $800 Billion AI Threat to Legal Tech
    02:18 - Min Chen’s Journey: From Feature Engineering to Chief AI Officer
    05:55 - Why Standard RAG Fails in Law (and How GraphRAG Fixes It)
    10:40 - "Accuracy" is a Vanity Metric: The 8-Point Usefulness Score
    14:20 - The "Auto-Eval" Framework: Human-in-the-Loop at Scale
    16:40 - The Secret Sauce: Don't Use LLMs to Detect Hallucinations
    21:15 - Agentic AI: How "Planner Agents" Drove a 20% Gain
    22:00 - The "Reflection Agent": Self-Critique Loops for Drafting
    30:30 - Distillation: Balancing Cost, Speed, and Quality
    32:45 - Min’s Advice: Don't Build the Product First (Build the Metrics)

    36 min
  5. Mastercard's 160 Billion Transactions: AI's Biggest Test

    FEB 4

    Mastercard's 160 Billion Transactions: AI's Biggest Test

    While most of the world is still running GenAI pilots, Mastercard is running AI inference on 160 billion transactions a year—with a hard latency limit of 50 milliseconds per score. In this episode of Beyond the Pilot, Johan Gerber (EVP of Security Solutions) and Chris Merz (SVP of Data Science) open the hood on one of the world's largest production AI systems: Decision Intelligence Pro. They reveal how they moved beyond legacy rules engines to build Recurrent Neural Networks (RNNs) that act as "inverse recommenders"—predicting legitimate behavior faster than the blink of an eye.
    AI Gets Real Here. This isn't just about defense. Johan and Chris detail how they are taking the fight to criminals by leveraging Generative AI to engage scammers with "honeypots," expose mule accounts, and map fraud networks globally.
    In this episode, we cover:
    - The 50ms Inference Challenge: How Mastercard optimized their RNNs to score transactions at a peak rate of 70,000 per second.
    - "Scamming the Scammers": How GenAI agents are being used to automate honeypot conversations and extract mule account data.
    - The "Inverse Recommender" Architecture: Why Mastercard treats fraud detection as a recommendation problem (predicting the next likely merchant).
    - Org Design for Scale: The "Data Science Engineering Requirements Document" (DSERD) Chris used to align four separate engineering teams.
    - The Hybrid Infrastructure: Why moving to Databricks and the cloud was necessary to cut innovation cycles from months to hours.
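The "inverse recommender" framing is easy to make concrete: a sequence model predicts a distribution over the cardholder's next likely merchant, and a transaction is suspicious when the merchant it actually hits has low predicted probability. The toy distribution below stands in for the RNN output; it is an illustration, not Mastercard's scoring logic:

```python
# Sketch of fraud-as-recommendation: score a transaction by how *unlikely*
# its merchant is under the model's prediction of legitimate next behavior.
predicted_next_merchant = {      # stand-in for RNN output probabilities
    "grocery_store": 0.55,
    "gas_station": 0.30,
    "coffee_shop": 0.10,
}

def risk_score(merchant: str) -> float:
    """1 - P(merchant): high when the merchant was not expected."""
    return 1.0 - predicted_next_merchant.get(merchant, 0.0)

print(round(risk_score("grocery_store"), 2))  # 0.45 -> looks normal
print(risk_score("offshore_casino"))          # 1.0  -> flag for review
```

The hard part in production is not this arithmetic but serving the sequence model within the 50ms budget at 70,000 transactions per second.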
    🚀 CHAPTERS
    00:00 - Intro: 160 Billion Transactions & 50ms Decisions
    02:08 - Thinking Like a Criminal: Johan’s Law Enforcement Background
    06:22 - Org Design: Why AI is the "Middle Lane" of Engineering
    11:00 - The Scale: 70k Transactions Per Second
    15:47 - Decision Intelligence Pro: The "Inverse Recommender" RNN
    23:00 - The "Lego Block" Strategy: Aligning Data Science & Engineering
    33:00 - Infrastructure: Why Cloud/Databricks was Non-Negotiable
    37:00 - GenAI Offensive: Threat Hunting & "Scamming the Scammers"
    46:40 - "Honeypots" and Detecting Mule Accounts
    52:00 - Advice for Technical Leaders: Talent & Prioritization

    56 min
  6. Inside LinkedIn’s AI Engineering Playbook

    JAN 21

    Inside LinkedIn’s AI Engineering Playbook

    While the rest of the industry chases massive models, LinkedIn quietly achieved a major engineering breakthrough by going small. In this episode of Beyond the Pilot, Erran Berger (VP of Product Engineering, LinkedIn) opens the "cookbook" on how they distilled 7B-parameter models down to ultra-efficient 600M-parameter "student" models—scaling AI to 1.2 billion users without breaking the bank.
    AI Gets Real Here. This isn't theory. Erran details the exact architecture, the "Multi-Teacher" distillation process, and the organizational shift that forced Product Managers to write evals instead of specs.
    In this episode, we cover:
    - The Distillation Pipeline: How to train a 7B "Teacher" and distill it to a 1.7B intermediate and 0.6B "Student" for production.
    - Synthetic Data Strategy: Using GPT-4 to generate the "Golden Dataset" for training.
    - Multi-Teacher Architecture: Why they separated "Product Policy" and "Click Prediction" into different teacher models to solve alignment issues.
    - 10x Efficiency Hacks: Specific techniques (pruning, quantization, context compression) that slashed latency.
    - Org Design: Why the "Eval First" culture is the new requirement for AI engineering teams.
    🚀 CHAPTERS
    00:00 - Intro: LinkedIn's Massive "Small Model" Feat
    04:00 - Why Commercial Models Failed at LinkedIn Scale
    08:00 - The "Product Policy" Funnel & Synthetic Data Generation
    12:00 - The Pipeline: 7B → 1.7B → 600M Parameters
    19:00 - The "Multi-Teacher" Breakthrough (Relevance vs. Clicks)
    23:00 - How They Achieved 10x Latency Reduction (Pruning/Compression)
    31:00 - Changing the Culture: Why PMs Must Write Evals
    35:00 - The "Bright Green Matrix": Measuring Success & Future Roadmap
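The core mechanic of teacher-student distillation is that the student is trained to match the teacher's softened output distribution, not just the hard labels. A toy, pure-Python version of that loss, standing in for what a 7B-to-0.6B pipeline computes over billions of tokens, looks like this:

```python
# Toy sketch of knowledge distillation: minimize KL divergence between the
# teacher's and student's temperature-softened output distributions.
# Illustrative only; real pipelines use frameworks, not lists of floats.
import math

def softmax(logits: list[float], temperature: float = 2.0) -> list[float]:
    """Higher temperature softens the distribution, exposing 'dark knowledge'."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(teacher: list[float], student: list[float]) -> float:
    """KL(teacher || student): the loss the student minimizes."""
    return sum(t * math.log(t / s) for t, s in zip(teacher, student) if t > 0)

teacher_logits = [4.0, 1.0, 0.2]   # large model's raw scores for 3 classes
student_logits = [3.5, 1.2, 0.5]   # smaller model, roughly aligned

loss = kl_divergence(softmax(teacher_logits), softmax(student_logits))
print(loss < 0.1)  # True: the distributions are already close
```

Pruning, quantization, and context compression then attack latency on top of the already-small student, which is how compounding 2-3x wins add up to the 10x reductions described here.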
    #EnterpriseAI #LLMDistillation #LinkedInEngineering #SmallLanguageModels #AIArchitecture #TechLeadership

    41 min
  7. Most enterprise AI agents are Slop - here’s why they fail

    JAN 7

    Most enterprise AI agents are Slop - here’s why they fail

    The "TAM" for AI agents isn't software. It's a $10 trillion opportunity in labor. In this episode, Replit CEO Amjad Masad reveals why 99% of today's enterprise AI agents are just "slop"—unreliable, generic toys that fail in production. We dive deep into the engineering reality of building autonomous agents that actually work, moving beyond simple chatbots to systems that can navigate the messy reality of enterprise infrastructure. Amjad breaks down Replit’s "computer use" hack that makes agents 10x cheaper than generic models, explains why "vibe coding" is the future of the C-suite, and issues a warning to technical leaders: if you want to ship fast in the AI era, you need to kill your product roadmap.
    In this episode, we cover:
    - The "Slop" Problem: Why most LLM outputs are generic and how to inject "taste" back into software.
    - The Computer Use "Hack": How Replit built a programmatic verifier loop that outperforms vision-based models.
    - Vibe Coding: Why non-technical domain experts (HR, Sales, Marketing) will build the next generation of enterprise software.
    - The $10T Market: Why the Junior Developer role is disappearing and being replaced by the "Manager of Agents."
    🚀 CHAPTERS
    00:00 - Intro: Why most AI Agents are "Toys"
    03:02 - The only 2 AI use cases making money right now
    06:00 - The "Crappy Product" Strategy (Shipping fast)
    10:00 - What is "AI Slop"? (And how to fix it)
    14:30 - The "Deleted Database" Incident: Solving Reliability
    18:00 - The "Squishy" Divide: Why Marketing Agents fail
    21:45 - Vibe Coding in the Enterprise
    26:00 - Model Wars: Claude Opus vs. Gemini vs. OpenAI
    28:10 - The "Computer Use" Hack (10x Cheaper, 3x Faster)
    36:00 - Why Product Roadmaps are Dead
    43:00 - Replit is the #1 Software Vendor (Ramp Data)
    49:00 - The Unit Economics of Agents (Token Costs vs. Value)
    53:00 - Open Source vs. Closed: The "Cathedral of Bazaars"
    59:00 - The $10 Trillion Opportunity: Replacing Labor
    #AgenticAI #Replit #VibeCoding #EnterpriseAI #LLM #SoftwareEngineering #FutureOfWork #AmjadMasad #ArtificialIntelligence #DevOps
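The "programmatic verifier loop" idea from the episode, running cheap deterministic checks on the agent's output and retrying instead of asking a vision model whether things look right, can be sketched as follows. The generator and the check are hypothetical stand-ins, not Replit's implementation:

```python
# Sketch of a programmatic verifier loop: generate, run deterministic checks,
# retry until they pass or attempts run out. Illustrative stand-ins only.
def generate_code(attempt: int) -> str:
    """Stand-in for an LLM call; improves on later attempts given feedback."""
    return "def add(a, b):\n    return a + b" if attempt > 0 else "def add(a, b): pass"

def verify(source: str) -> bool:
    """Deterministic check: does the generated function actually work?"""
    namespace: dict = {}
    try:
        exec(source, namespace)
        return namespace["add"](2, 3) == 5
    except Exception:
        return False

def agent_loop(max_attempts: int = 3):
    for attempt in range(max_attempts):
        candidate = generate_code(attempt)
        if verify(candidate):
            return candidate
    return None

result = agent_loop()
print(result is not None)  # True: the second attempt passes verification
```

Because the check is ordinary code rather than a model call, each iteration costs almost nothing, which is where the claimed cost and speed advantage over vision-based verification comes from.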

    1h 3m
  8. How JPMorgan Engineered a 30K AI Agent Economy

    12/17/2025

    How JPMorgan Engineered a 30K AI Agent Economy

    Inside the "Agent Economy": How 30,000 AI Assistants Took Over JPMorgan
    While most enterprises were scrambling after ChatGPT launched, JPMorgan Chase was already two years ahead. 🚀 In this episode of Beyond the Pilot, we sit down with Derek Waldron, Chief Analytics Officer at JPMorgan Chase, to reveal how the world’s largest bank built an internal AI platform now used daily by 1 in 2 employees. Derek shares the contrarian insight that drove their strategy: AI models are commodities; the real moat is connectivity. Learn how they scaled from zero to 250,000+ users, why they empowered employees to build 30,000+ of their own "Personal Agents," and how they are solving the data privacy challenge at enterprise scale.
    🔥 IN THIS EPISODE:
    - The "Super Intelligence" Thought Experiment: Why raw intelligence is useless without enterprise connectivity.
    - The Agent Economy: How JPM enabled non-technical staff to build 30,000 custom AI assistants.
    - The Adoption Playbook: How to break through the "30% wall" and get the majority of your workforce using AI.
    - Build vs. Buy: Why JPM built their own "LLM Suite" instead of waiting for vendors.
    ⏳ CHAPTERS:
    00:00 - Introduction: The JPMorgan AI Story
    01:45 - The 3 Core Principles Behind JPM’s Strategy
    03:25 - The "Super Intelligence" Thought Experiment
    05:00 - Data Privacy: Why JPM Doesn't Train Public Models
    06:00 - Viral Adoption: From 0 to 250k Users
    09:20 - Evolution of LLM Suite: From RAG to Ecosystem
    14:00 - The "Moat" is Connectivity, Not the Model
    23:00 - The Agent Economy: 30,000 Employee-Built Assistants
    31:00 - Governance & Guardrails for AI Agents
    33:00 - Crossing the Chasm: Getting to 60% Adoption
    40:00 - The "Product" Mindset: Solving Business Problems First
    42:30 - The Future: End-to-End Process Transformation
    46:25 - The "Unsolved" Problem Derek Wants to Fix
    🙏 SPECIAL THANKS TO OUR SPONSOR: This episode is presented by Outshift by Cisco. Learn more about their work on the Internet of Agents and the open-source Linux Foundation project: https://www.agentcy.org
    🎙️ GUEST: Derek Waldron | Chief Analytics Officer, JPMorgan Chase
    HOSTS: Matt Marshall | VentureBeat, Sam Witteveen | VentureBeat
    #EnterpriseAI #JPMorgan #GenerativeAI #AgenticAI #FinTech #ArtificialIntelligence #Innovation #BeyondThePilot

    46 min
