AI 2030

Cadre AI

Conversations about the future of AI, with the builders building it. By CadreAI.com

Episodes

  1. Is Physical AI the Next Frontier for Enterprise? [Ft. Sud Bhatija, Spot AI]

    4D AGO

    Over a million security guards in the US spend their days watching things happen. Sud Bhatija, Co-Founder and COO at Spot AI, is building the system that makes most of that unnecessary. In this episode, he breaks down how physical AI works at enterprise scale — from the edge-cloud architecture that enables real-time video analysis, to a three-tier multi-agent system that cuts false positives down to the point where automated responses via speakers and lights resolve security incidents 90% of the time with no on-site human intervention required. Sud also gets specific on why having 1,000+ customers before the LLM wave gave Spot AI a structural advantage when models inflected — and why the organizations seeing the highest AI adoption aren't the ones with the best technology. They're the ones paying workers more for learning to use it.

    Topics discussed:
    - The "small brain / big brain" edge-cloud architecture for low-latency video analysis
    - Three-tier multi-agent system: detection, false-positive removal, and cloud-based SOP evaluation
    - Automated speaker and light response that resolves security incidents 90% of the time without on-site intervention
    - Why 600,000+ manufacturing line observers represent the clearest near-term target for video AI
    - How 1,000+ pre-LLM customers shaped which use cases Spot AI prioritized when models inflected
    - Tying pay increases directly to AI adoption: the incentive model driving ground-level buy-in
    - Why AI becomes the only entity that holds the full "physical ontology" of a multi-site enterprise
    - The coming need for physical-world consent frameworks equivalent to digital cookies and permissions
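    The three-tier flow described in the episode can be sketched as a simple pipeline. This is an illustrative sketch only — the function names, confidence threshold, and response strings are assumptions, not Spot AI's actual system.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float

def edge_detect(frame_events):
    """Tier 1 ("small brain"): cheap on-camera detection at the edge."""
    return [Detection(e["label"], e["conf"]) for e in frame_events]

def filter_false_positives(detections, threshold=0.8):
    """Tier 2: drop low-confidence hits before they trigger a response."""
    return [d for d in detections if d.confidence >= threshold]

def evaluate_sop(detection):
    """Tier 3 ("big brain"): cloud-side standard-operating-procedure check.
    Returns an automated response (speakers/lights) or a human escalation."""
    if detection.label == "intruder":
        return "activate_speaker_and_lights"
    return "escalate_to_human"

def pipeline(frame_events):
    """Run all three tiers and collect the responses."""
    return [evaluate_sop(d)
            for d in filter_false_positives(edge_detect(frame_events))]
```

    The point of the layering is that most events die at tier 2, so only high-confidence incidents ever reach the (slower, costlier) cloud evaluation.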

    22 min
  2. How Will AI Transform Executive Search? [Ft. Alex Bates, HelloSky]

    APR 28

    Alex Bates studied artificial neural networks in middle school, founded Mtell to predict equipment failures at oil rigs and power plants, and has now applied that same thinking to executive search at HelloSky. His core argument cuts against the prevailing AI narrative: as LLMs scale, domain expertise and operating experience become more valuable, not less, because the decisions that actually move companies have never appeared anywhere on the internet for a model to train on. In this episode, Alex gets specific on where executive search breaks down at the data layer — including how HelloSky reconstructs track records of executives whose companies were acquired and scrubbed from the internet entirely. He draws a hard line on where AI belongs in the hiring process (targeting, stack ranking, pre-assessment) and where it doesn't (culture fit, team dynamics, the sixth sense a seasoned operator has about CEO personality). He also makes a pointed case for why the industry's biggest structural failure isn't candidate pipeline — it's that criteria collapse under urgency pressure by month six, and most firms aren't solving for that early enough.

    Topics discussed:
    - Reconstructing point-in-time company track records erased by acquisitions
    - Scoring weighted relationship ties beyond raw LinkedIn connections
    - Why month-six urgency mode is where hiring criteria collapse
    - AI pre-assessment as a workaround to psychographic survey opt-in failure
    - Back-testing operator outcomes to identify first-time CEO success predictors
    - The "memory of a goldfish" problem in LLM-driven coding at scale
    - Domain expertise becoming more valuable as LLMs scale, not less
    - Why AI still hasn't solved executive interrupt triage

    32 min
  3. Is MCP Actually Broken? The Truth About AI Agent Data Access [Ft. Gil Feig, Merge]

    APR 21

    Most teams building AI agents are blaming MCP when their integrations fall flat. Gil Feig, co-founder and CTO of Merge, says that's the wrong diagnosis entirely — and he built the infrastructure layer that connects agents to enterprise systems to prove it. Gil makes the case that MCP is a thin wrapper around API endpoints, and the actual failure point is the access pattern underneath it. He lays out a clear framework for when synced-and-stored data is required versus when live connectors are sufficient, explains why the "talk to your data" promise keeps breaking in practice, and shares how Merge approached agent guardrails from day one — including why prompt-based soft restrictions are already being exploited and why temporary tokens are emerging as a hard security primitive for scoping what an agent can touch and for how long. He also argues that a world where all enterprise data flows into centralized AI-queryable lakes is economically flawed and probably not where the market lands.

    Topics discussed:
    - MCP as a thin API wrapper and why the access pattern is the real failure point
    - Sync-and-store vs. live connectors: the decision framework for each
    - Hard vs. soft agent guardrails and where soft blocks break down
    - Temporary tokens as a scoped-access security primitive for agents
    - Why "talk to your data" implementations fail without structured local data stores
    - The true cost of full data replication, vectorization, and embedding at scale
    - Enterprise vs. mid-market governance requirements for LLM data routing
    - Why the all-roads-lead-to-data-lake future is economically unlikely
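    The hard-guardrail idea Gil describes — temporary tokens that scope what an agent can touch and for how long — can be sketched in a few lines. The names and structure below are assumptions for illustration, not Merge's actual implementation; the point is that the restriction is enforced in code, not in a prompt the agent could be talked out of.

```python
import secrets
import time

# In-memory grant store; a real system would use a shared, persistent store.
_tokens = {}

def issue_token(scopes, ttl_seconds=300):
    """Mint a short-lived token limited to an explicit set of endpoints."""
    token = secrets.token_urlsafe(16)
    _tokens[token] = {"scopes": set(scopes), "expires": time.time() + ttl_seconds}
    return token

def authorize(token, endpoint):
    """Hard check: unknown, expired, or out-of-scope requests are refused."""
    grant = _tokens.get(token)
    if grant is None or time.time() > grant["expires"]:
        return False  # expired or unknown token: hard block, not a suggestion
    return endpoint in grant["scopes"]
```

    Contrast this with a soft guardrail ("please only read CRM data") baked into a prompt: here the agent physically cannot reach `crm.write`, and the grant disappears on its own when the TTL lapses.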

    22 min
  4. How Do You Actually Ship AI-Generated Code to Production? [Ft. Chris Kelly, Augment Code]

    APR 14

    The engineers most resistant to AI coding tools are not the junior ones. Chris Kelly, Head of Product at Augment Code, has watched senior engineers — people with 20 years of experience shipping production systems — be the last to adopt. The reason is not fear of job loss. They were trained to build deterministic systems where A plus B always equals C, and a non-deterministic model that occasionally writes wrong code breaks a contract they have never had to question. The quality gap is not a model problem; it is a context problem. Most teams point agents at a codebase without giving them the same linters, test suites, and tooling a human engineer relies on, then wonder why the output does not hold up in production. Chris breaks down the exact daily workflow he runs, where he writes almost no individual lines of code himself, how semantic retrieval changes what an agent actually understands about your codebase versus basic file search, and why the bottleneck for non-technical leaders compressing dev timelines was never the coding to begin with.

    Topics discussed:
    - Why senior engineers are last to adopt AI coding tools
    - Giving agents linters and test suites to close the production quality gap
    - Semantic codebase retrieval vs. grepping as a context strategy
    - Chris's continuous code review workflow replacing individual code writing
    - Why coding was never the long part and what actually compresses with AI
    - Skill atrophy risk for engineers skipping hands-on coding experience
    - Code review as the highest-leverage engineering skill to hire for now
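    The "give agents the same linters and test suites" point can be sketched as a feedback loop: generated code is gated by the project's own quality checks, and failures are fed back to the agent instead of shipping unchecked output. The gate commands and `generate_patch()` below are illustrative stand-ins, not Augment Code's actual API.

```python
import subprocess

def shell_gate(cmd):
    """Wrap a quality-gate command (linter, test runner) as a callable."""
    def gate():
        proc = subprocess.run(cmd, capture_output=True, text=True)
        return proc.returncode == 0, proc.stdout + proc.stderr
    return gate

def agent_loop(generate_patch, gates, max_rounds=3):
    """Let the agent iterate until every gate passes or rounds run out."""
    feedback = ""
    for _ in range(max_rounds):
        generate_patch(feedback)                 # agent writes or edits code
        results = [g() for g in gates]
        if all(ok for ok, _ in results):
            return True                          # gates pass: ready for review
        feedback = "\n".join(out for ok, out in results if not ok)
    return False                                 # still failing: hand to a human

# Example gates a team might plug in (tool choices are hypothetical):
# gates = [shell_gate(["ruff", "check", "."]), shell_gate(["pytest", "-q"])]
```

    The agent never gets a free pass the human wouldn't: the same checks that gate a human PR gate the model's output.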

    26 min
  5. Are Specialized Language Models the Future of AI? [Ft. Iddo Gino, Datawizz]

    MAR 27

    Enterprise AI spend on LLM APIs hit $8 billion in just the first half of 2025 — double all of 2024. Most of that spend is going to the same general-purpose models, and Iddo Gino thinks companies are building on a foundation that won't hold. Iddo founded his first company at 17, scaled it to unicorn status by 24, and spent nearly a decade in the API integration space. His position is straightforward: trillion-parameter models optimized to do everything are expensive, slow, and ill-suited for the narrow, repeatable tasks that make up the majority of production AI workloads. The companies gaining ground are decomposing their systems into specialized models — each trained for one specific task, orders of magnitude smaller, and meaningfully more accurate than any general-purpose model in that lane. He also gets specific about why most fine-tuning efforts quietly fail, why model capability is no longer what's slowing agentic systems down, and what the market actually looks like by 2030 when this plays out.

    Topics discussed:
    - $8B in enterprise LLM API spend in H1 2025 and what's actually driving it
    - Decomposing agentic systems into narrow subtasks vs. single general-purpose model approaches
    - Why fine-tuned models have a shelf life and the case for continuous weekly retraining cycles
    - The integration and data access layer as the real production bottleneck in agentic systems
    - MIT study: 90% enterprise AI initiative failure rate and what separates the 10% that work
    - Iddo's 2030 prediction: 50-60% of tokens flowing to specialized models, not large labs
    - Model agnosticism as a structural hedge against LLM provider lock-in
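    The decomposition argument reduces to a routing decision: send each narrow, repeatable subtask to a small specialized model and reserve the expensive general-purpose model for everything else. The registry, model names, and costs below are invented for illustration only.

```python
# Hypothetical registry mapping known subtasks to small specialized models.
SPECIALISTS = {
    "classify_ticket": {"model": "ticket-classifier-0.3b", "cost_per_1k_tokens": 0.01},
    "extract_invoice": {"model": "invoice-extractor-0.5b", "cost_per_1k_tokens": 0.02},
}

# Large general-purpose fallback for tasks with no specialist.
FALLBACK = {"model": "general-llm-1t", "cost_per_1k_tokens": 5.00}

def route(task):
    """Pick the cheapest adequate model for a given subtask."""
    return SPECIALISTS.get(task, FALLBACK)
```

    With illustrative prices like these, a workload dominated by registry hits runs at a small fraction of the all-general-purpose cost — which is the economic case for decomposition in miniature.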

    25 min
  6. Why Can't AI Agents Pay for the Tools They Need? [Ft. Jim Nguyen, CEO, InFlow]

    MAR 13

    Every trust and security system in payments was built around one assumption: a human is on the other end. CAPTCHA, email confirmation, UI-based checkout flows — none of it works when the buyer is headless. That's the wall AI agents are running into right now, and it's not a model problem. It's an infrastructure problem. Jim Nguyen, Co-Founder and CEO of InFlow, has watched payments evolve through the online, mobile, and crypto shifts. Each one introduced a new buyer type and a new interaction model — and the payment layer always had to be rebuilt to match. This shift follows the same arc. The difference is that agents transact through APIs, not interfaces, which means the entire onboarding and trust layer has to be rearchitected from scratch. The bottleneck isn't just payment execution. An agent can do the work — build the site, write the code, complete the output — and then stall completely when it hits a third-party service it can't access because it can't open an account, enter a credit card, or verify an email. Jim's proposed fix is deliberate: the human sets the policy upfront, assigns a spend limit per site, and the agent executes within those bounds. Anything outside that threshold comes back for approval. Wallet solutions don't solve this on their own. If the agent can pay but still can't self-onboard to the service, you've only solved half the problem. 
    Topics discussed:
    - AI agents as the third buyer type requiring a new commerce interaction model
    - Why headless API-only transactions break existing payment trust infrastructure
    - The onboarding gap that wallet-only solutions fail to address
    - Human-defined spend policies as the guardrail model for agent transactions
    - How sellers need to support a third buyer flow alongside mobile and browser to enable agent transactions
    - Jim's vision for agent-managed travel, purchases, and bill management by 2030
    - Trust and compliance as the ceiling on how fast this gets deployed at scale

    Listen more at: https://www.cadreai.com/ai-2030-podcast
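    Jim's guardrail model — human-defined policy upfront, per-site spend limits, escalation above threshold — can be sketched in a few lines. Class and method names are assumptions for illustration, not InFlow's actual API.

```python
class SpendPolicy:
    """Human-authored policy an agent transacts within."""

    def __init__(self, limits):
        self.limits = dict(limits)   # site -> per-transaction limit in dollars
        self.pending = []            # purchases queued for human approval

    def attempt_purchase(self, site, amount):
        limit = self.limits.get(site, 0.0)   # unknown sites get zero budget
        if amount <= limit:
            return "approved"                # within policy: agent executes
        self.pending.append((site, amount))  # outside policy: back to the human
        return "needs_human_approval"
```

    The key property is that the agent never decides its own limits: anything outside the human-set threshold stalls in `pending` until a person signs off.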

    17 min
  7. How Does AI Eliminate Moats and Transform Competitive Advantage? [Ft. Dmitry Shapiro, CEO, MindStudio]

    FEB 27

    Traditional competitive moats collapse when anyone can replicate your product in an afternoon. Dmitry Shapiro, CEO of MindStudio and former Google executive, explains why the ability to continuously refactor operations in 5-30 minutes matters more than what you initially build, and why the companies resisting this shift are already behind. Over 400,000 agents deployed on MindStudio reveal a pattern enterprises miss: organizations aren't people using tools, they're poorly integrated tech stacks (averaging 130 SaaS products for mid-market) held together by humans acting as connective tissue. Dmitry calls this "AI duct tape" — the intelligence layer that bridges system gaps without traditional integration work. A newspaper holding company deployed 400+ agents built by one non-technical person, automating court case monitoring that previously consumed an hour per journalist daily. When Claude Code can rebuild your entire stack overnight, organizational velocity becomes the only defensible advantage. Companies refactoring in 30 minutes rather than quarters are pulling ahead. The real constraint isn't model capability. Chat works for simple queries, but complex workflows need custom UIs with draggable elements for multi-dimensional control that can't be articulated through prompts. Product managers now outperform engineers because communication mastery beats technical skills when AI writes the code.

    Topics discussed:
    - AI duct tape bridging 130 SaaS products without data warehouse infrastructure
    - 5-30 minute refactoring cycles as competitive advantage over quarterly roadmaps
    - Product manager communication skills outperforming engineering technical depth
    - Custom UI requirements for high-fidelity AI instruction beyond chat limits
    - Newspaper company: 400+ agents from one non-technical builder
    - Agentic approach replacing data warehouse investments for smaller companies
    - Organizational velocity determining winners when code becomes commodity
    - Ray Kurzweil's 2029 singularity prediction and exponential thinking gaps

    48 min
  8. Why Do 90% of Enterprise AI Implementations Fail? [Ft. Eva Nahari, Former CPO, Vectara]

    FEB 4

    RAG isn't just another AI buzzword; it's the architectural foundation that determines whether enterprise AI delivers value or burns budget. Eva Nahari, former Chief Product Officer at Vectara and four-year venture investor, explains why separating data from models matters more than the models themselves, and why 90% of AI implementations fail at the execution layer, not the technology layer. The standard approach, dumping an 80-page PDF into a custom GPT, fails because accuracy requires proper data architecture, not better prompts. RAG addresses this by feeding models precise context rather than expecting them to ingest everything at once. But implementation creates new problems: multiple teams building isolated RAG systems across the same enterprise, creating governance nightmares when those hobby projects need to scale. The companies succeeding aren't the ones with the best AI talent; they're the ones who treated data management seriously before the AI hype arrived.

    Topics discussed:
    - RAG architecture separating data from models for compliance traceability
    - Retrieval quality as the primary bottleneck before generation accuracy
    - RAG sprawl problem from independent team implementations across enterprises
    - Real-time governance systems using guardian agents for multi-step workflows
    - Intent logging requirements for auditing agentic decision paths
    - Agent-in-the-loop pattern replacing human-in-the-loop for workflow efficiency
    - Documentation quality emerging as critical AI infrastructure investment
    - MCP standard adoption for cross-system data retrieval and access control
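    The "feed the model precise context" pattern reduces to three steps: chunk the documents, retrieve the most relevant chunks, and build a prompt from only those. This toy sketch uses word overlap as a stand-in for the embedding-based retrieval real systems use; all names are illustrative.

```python
def chunk(text, size=40):
    """Split a document into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(query, chunks, k=2):
    """Rank chunks by word overlap with the query (toy scoring); keep top k.
    A production system would rank by embedding similarity instead."""
    q = set(query.lower().split())
    scored = sorted(chunks, key=lambda c: len(q & set(c.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, chunks):
    """Place only the retrieved context in the prompt, not every document."""
    context = "\n---\n".join(retrieve(query, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

    The contrast with the failing "80-page PDF into a custom GPT" approach is the `retrieve` step: the model sees a few relevant passages instead of everything at once.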

    33 min

