Inference by Turing Post

Turing Post

Inference is Turing Post’s way of asking the big questions about AI — and refusing easy answers. Each episode starts with a simple prompt: “When will we…?” – and follows it wherever it leads. Host Ksenia Se sits down with the people shaping the future firsthand: researchers, founders, engineers, and entrepreneurs. The conversations are candid, sharp, and sometimes surprising – less about polished visions, more about the real work happening behind the scenes. It’s called Inference for a reason: opinions are great, but we want to connect the dots – between research breakthroughs, business moves, technical hurdles, and shifting ambitions. If you’re tired of vague futurism and ready for real conversations about what’s coming (and what’s not), this is your feed. Join us – and draw your own inference.

  1. 3D AGO

    What Reflection AI Offers to Beat Closed Labs

    In this episode, Ioannis Antonoglou, co-founder and CTO @ReflectionAI (ex-DeepMind, AlphaGo/AlphaZero/MuZero), explains what they are building: a frontier open-weight “general agent model” trained end-to-end with pretraining plus reinforcement learning. And I’ll be honest: I left this conversation more skeptical than I expected. They raised $2 billion last year. But where are the results? Reflection’s thesis is huge – build the missing Western open base model, then use RL to push it to the frontier. The problem is that this is also the slowest path in the game. “All hands on deck building the model” means no clear wedge product yet, few concrete proof points, and a lot of execution risk while closed labs keep shipping. Am I missing something? Watch the video and leave your opinion in the comments.

    Chapters:
    0:00 Building AGI and the Mission Behind Reflection
    0:25 From AlphaGo to Today: How AI Progress Really Happens
    2:11 Breakthroughs vs. Engineering: What Still Matters Most
    3:10 Defining AGI and Why It May Not Need Huge Breakthroughs
    3:41 Why Reflection Shifted from Coding Agents to Frontier Models
    5:15 The New Focus: Open Frontier Models and General Agents
    6:33 Bottlenecks in Building Frontier AI: Team, Compute, and Scale
    7:48 AI Tools, Internal Workflows, and Model-First Strategy
    8:24 Can Open Models Catch Closed Labs?
    10:34 Reinforcement Learning, Research Priorities, and Advice for Young Builders
    14:01 Joining DeepMind, Open Science, and the Book That Shaped Him

    *Follow on*: https://www.turingpost.com/

    *Did you like the episode? You know the drill:*
    📌 Subscribe for more conversations with the builders shaping real-world AI.
    💬 Leave a comment if this resonated.
    👍 Like it if you liked it.
    🫶 Thank you for watching and sharing!
*Guest:* Ioannis Antonoglou, Co-Founder, President & CTO at Reflection AI
https://x.com/real_ioannis
https://www.linkedin.com/in/ioannis-alexandros-antonoglou-45393253
https://reflection.ai/

📰 Transcript: https://www.turingpost.com/nathan

*Turing Post* – AI stories from labs the Valley doesn't cover.
https://x.com/TheTuringPost
https://www.linkedin.com/in/ksenia-se

Tags: #reflectionai #opensource #deepmind #ai #openclaw #aisafety

    16 min
  2. 3D AGO

    Why Reflection AI Bets Their Business on Open Weights | Ioannis Antonoglou, co-founder and CTO

    Ioannis Antonoglou helped build AlphaGo, AlphaZero, and MuZero at DeepMind. Now he’s CTO and co-founder of Reflection AI, betting that frontier models should be open weights, not a black box behind an API. In Part 1, we talk about openness as an actual strategy: why open models can move faster, why “sovereignty” matters for enterprises and governments, and why safety might improve when the ecosystem can stress-test the system instead of guessing. We also get into the uncomfortable part: capable open agents can misbehave in public, fast (OpenClaw is the recent reminder). Is that a reason to close everything up, or a reason to make the risks visible and fixable?

    Topics covered:
    – Why a former DeepMind builder chose open weights
    – Open models as a commercial engine (and what investors bought)
    – Openness, safety, and “more eyes on the system”
    – Concentration of AI power in closed labs
    – Who open frontier models are really for (research, enterprises, governments)

    Subscribe for Part 2: how Reflection plans to compete with closed labs and what they’re building under the hood.

    Chapters:
    0:00 — “No One Was Sharing This Information”
    0:16 — From DeepMind to Reflection AI
    0:52 — Why Move from Closed Labs to Open Weights?
    2:20 — Pitching Open Models Before the DeepSeek Moment
    3:31 — What Changed in the Past Year
    4:43 — Why Openness Accelerates Scientific Progress
    6:06 — Open Source vs Safety: The OpenClaw Case
    7:19 — The Real Concern: Concentration of AI Power
    8:23 — The Open Source Paradox
    9:11 — The Value Proposition of an Open Frontier Model

    *Follow on*: https://www.turingpost.com/

    *Did you like the episode? You know the drill:*
    📌 Subscribe for more conversations with the builders shaping real-world AI.
    💬 Leave a comment if this resonated.
    👍 Like it if you liked it.
    🫶 Thank you for watching and sharing!
*Guest:* Ioannis Antonoglou, Co-Founder, President & CTO at Reflection AI
https://x.com/real_ioannis
https://www.linkedin.com/in/ioannis-alexandros-antonoglou-45393253/
https://reflection.ai/

📰 Transcript: https://www.turingpost.com/antonoglou_part1

*Turing Post* – AI stories from labs the Valley doesn't cover.
https://x.com/TheTuringPost
https://www.linkedin.com/in/ksenia-se

Tags: #reflectionai #opensource #deepmind #ai #openclaw #aisafety

    10 min
  3. 3D AGO

    Why the US Needs Open Models | Nathan Lambert on what matters in the AI and science world

    Open models are often discussed as if they’re competing head-to-head with frontier systems. Are they catching up? Falling behind? Are they “good enough” yet? Nathan Lambert doesn’t believe open models will ever catch up with closed ones, and he explains clearly why. But he also argues that this is the wrong framing.

    Nathan is a research scientist at the Allen Institute for AI, the author of the RLHF Book, and the writer behind the Interconnects newsletter. He’s also one of the clearest voices on what open models are for, and just as importantly, what they are not.

    We talk about how academic AI research lost influence as training scaled up, why open models became the main place where experimentation still happens, and why that role matters even when open models trail frontier systems. We also discuss why China’s open model ecosystem developed so differently from the US one, and what that tells us about incentives, talent, and access to resources. From there, the conversation moves into the mechanics: post-training and reinforcement learning complexity, data availability, coding agents, hybrid architectures, and the very practical reasons most people continue to rely on closed models, even when they support openness in principle.

    This is a conversation about how AI research actually moves, where open models fit into that picture, and what it means to build systems when the frontier is expensive, fast-moving, and increasingly product-driven. It's a realistic look at where the open ecosystem stands today. Watch it!

    *Follow on*: https://www.turingpost.com/

    *Did you like the episode? You know the drill:*
    📌 Subscribe for more conversations with the builders shaping real-world AI.
    💬 Leave a comment if this resonated.
    👍 Like it if you liked it.
    🫶 Thank you for watching and sharing!
*Guest:* Nathan Lambert, Research Scientist at Allen Institute for AI (AI2)
https://x.com/natolambert
https://www.linkedin.com/in/natolambert/
https://www.interconnects.ai/ (his newsletter on open models + RL + everything important in AI)
https://rlhfbook.com/ - The RLHF Book
https://allenai.org/

*Links:*
State of AI in 2026 (Lex Fridman interview): https://www.youtube.com/watch?v=EV7WhVT270Q&t=10206s
NVIDIA’s path to open models: https://www.youtube.com/watch?v=Y3Vb6ecvfpU
OLMo models: https://allenai.org/olmo
NVIDIA Nemotron: https://developer.nvidia.com/nemotron
SpaceX + xAI partnership: https://www.spacex.com/updates#xai-joins-spacex
Season of the Witch (book): https://www.simonandschuster.com/books/Season-of-the-Witch/David-Talbot/9781439108246

📰 Transcript: https://www.turingpost.com/nathanlambert

*Turing Post* – AI stories from labs the Valley doesn't cover.
https://x.com/TheTuringPost
https://www.linkedin.com/in/ksenia-se

    47 min
  4. 3D AGO

    Inside MiniMax: How They Build Open Models

    First Western interview with a senior MiniMax researcher. Olive Song explains how they actually build models that work. When MiniMax's RL training wouldn't converge, they debugged layer by layer until they found it: fp32 precision in the LM head. When their models learned to "hack" during training, exploiting loopholes to maximize rewards, they had to rethink alignment from scratch. When benchmarks said their models were good but production said otherwise, they discovered the problem: environment adaptation.

    Olive talks about working at a pace where new models drop at midnight and you test them at midnight. How they use an internal AI agent to read every new paper published overnight. Why they sit with developers during experiments to catch dangerous behaviors in real time. What "ICU in the morning, KTV at night" means when results swing wildly. How problem-solving becomes discovery when you're debugging behaviors no one has seen before.

    This is how Chinese labs are moving fast: first-principles thinking, engineering discipline, and a willingness to work whenever the experiments demand it.

    We spoke on Sunday at 9 pm Beijing time. Olive was still waiting for results from new model experiments, so my first question was obvious: does everyone at the company work like this?

    *Follow on*: https://www.turingpost.com/

    *Did you like the episode? You know the drill:*
    📌 Subscribe for more conversations with the builders shaping real-world AI.
    💬 Leave a comment if this resonated.
    👍 Like it if you liked it.
    🫶 Thank you for watching and sharing!
*Guest:* Olive Song, Senior Researcher at MiniMax
MiniMax: https://www.minimaxi.com/
Models: https://huggingface.co/MiniMaxAI

*Links:*
vLLM: https://github.com/vllm-project/vllm
SGLang: https://github.com/sgl-project/sglang

📰 Transcript: https://www.turingpost.com/olive

Chapters:
0:00 – Reinforcement Learning and Unexpected Model Behaviors
3:08 – Roleplay, Alignment, and “AI with Everyone”
4:02 – How AI Changes Daily Life and Productivity
4:59 – Inside MiniMax: How Researchers and Engineers Work Together
5:32 – Human Alignment and Safety in Open Models
6:16 – Why Engineering Details Matter More Than Algorithms
8:17 – Open Weights: Benefits, Risks, and Responsibility
10:57 – Specialization vs General AI Models
12:07 – Agentic AI and Long-Horizon Tasks
29:50 – AGI, Creativity, and the Future of AI

*Turing Post* – AI stories from labs the Valley doesn't cover.
https://x.com/TheTuringPost
https://www.linkedin.com/in/ksenia-se

#MiniMax #ReinforcementLearning #AIResearch #OpenWeights #ChineseAI #OpensourceAI

    32 min
  5. JAN 27

    This Is a Fight Worth Having: The Case for Open Source AI | Raffi Krikorian, Mozilla CTO

    In the first episode of Inference’s quarterly series on Open Source AI, we talk to Raffi Krikorian, CTO of Mozilla, about when open source AI stops being aspirational and becomes an operational choice. We explore why stories like Pinterest saving $10 million by moving to open models are real, but often misunderstood, and why timing matters more than ideology.

    Raffi lays out his view of a missing “LAMP stack for AI” and explains why the hardest problem to solve isn’t models or data, but the connective glue that holds AI systems together. Along the way, he shares how Mozilla is navigating these tradeoffs in practice, why even open-source-first organizations still rely on closed tools during experimentation, and what the browser era taught Mozilla about defaults, user choice, and long-term control. He also shares a few practical recommendations that apply even if you’re still experimenting. Listen closely.

    This conversation kicks off our Open Source AI series for 2026, focused on real tradeoffs, real economics, and the decisions companies are making right now.

    Follow on: https://www.turingpost.com/

    *Did you like the episode? You know the drill:*
    📌 Subscribe for more conversations with the builders shaping real-world AI.
    💬 Leave a comment if this resonated.
    👍 Like it if you liked it.
    🫶 Thank you for watching and sharing!
*Guest:* Raffi Krikorian, CTO at Mozilla
LinkedIn: https://www.linkedin.com/in/rkrikorian/
Mozilla AI: https://mozilla.ai/
Mozilla Blog: https://blog.mozilla.org/en/mozilla/mozilla-open-source-ai-strategy/

*Links mentioned:*
Raffi's post about Mozilla's open source AI strategy: https://blog.mozilla.org/en/mozilla/mozilla-open-source-ai-strategy/
🌐 #1: Mastering Open Source AI in 2026: Essential Decisions for Builders: https://www.turingpost.com/p/opensource1
Mozilla Data Collective: https://data.mozilla.org/
Langchain: https://www.langchain.com/
OpenRouter: https://openrouter.ai/
AI2 (Allen Institute for AI): https://allenai.org/
Flower AI (Federated Learning): https://flower.dev/
Einstein's Dreams by Alan Lightman: https://www.goodreads.com/book/show/14376.Einstein_s_Dreams

📰 The transcript and edited version at https://www.turingpost.com/krikorian

*Chapters:*
0:00 Cold Open — Values vs Economics in Open Source AI
0:28 Intro: Why This Season Focuses on Open Source AI
0:54 When Open Source Becomes a Business Decision
1:44 Pinterest Saved $10M + The Shift From Prototyping to Production
2:42 Mozilla’s “Choice Suite” + The Terraform “Exit Door”
5:21 Mozilla’s Mission: Do for AI What Mozilla Did for the Web
7:09 The “LAMP Stack” for AI + Standards Across the Stack
9:52 Small Models, Specialization, and Model Composability
15:45 Data, Privacy, and “I Own My Context”
18:36 “This Is a Fight Worth Having” + The Signal Analogy
21:42 1–2–3 Steps for Companies to Start (Instrument Choice Early)
24:22 Book Pick: Einstein’s Dreams + Closing

Turing Post is a newsletter about AI's past, present, and future. Ksenia Se explores how intelligent systems are built – and how they're changing how we think, work, and live.

*Follow us →*
Turing Post: https://x.com/TheTuringPost
Ksenia Se: https://www.linkedin.com/in/ksenia-se
https://huggingface.co/Kseniase

#OpenSourceAI #LAMPStackForAI #AIEconomics #MozillaAI #AIInfrastructure #DataProvenance #FederatedLearning #OpenModels

    26 min
  6. 12/04/2025

    What Is AI Missing for Real Reasoning? Axiom Math’s Carina Hong on how to build an AI mathematician

    Is math the ultimate test for AI reasoning? Or is next-token prediction fundamentally incapable of discovering new truths and forming conjectures? Carina Hong, co-founder and CEO of Axiom Math, argues that to build true reasoning capabilities, we need to move beyond "chatty" models to systems that can verify their own work using formal logic.

    In this episode of Inference, we get into:
    – Why current LLMs are like secretaries (good at retrieval) but bad at de novo mathematics
    – The three pillars of an AI Mathematician
    – How AlphaGeometry proved that symbolic logic and neural networks must merge
    – The difference between AGI and Superintelligence
    – Why "Theory Building" is harder to benchmark than the International Math Olympiad (IMO)
    – The scarcity of formal math data (Lean) compared to Python code

    We also discuss the bottlenecks: the "chicken and egg" problem of auto-formalization, why Axiom bets on specific superintelligence over general models, and how AI will serve as the algorithmic pillar for the future of hard science.

    This is a conversation about the structure of truth, the limits of intuition, and what happens when machines start grading their own homework. Watch it!

    Did you like the episode? You know the drill:
    📌 Subscribe for more conversations with the builders shaping real-world AI.
    💬 Leave a comment if this resonated.
    👍 Like it if you liked it.
    🫶 Thank you for watching and sharing!

    *Guest:* Carina Hong, co-founder and CEO of Axiom Math
    https://www.axiom.xyz/
    https://x.com/CarinaLHong
    https://www.linkedin.com/in/carina-hong/

    📰 The transcript and edited version at https://www.turingpost.com/carina/

    Chapters:
    0:53 Why LLMs Struggle with Basic Math
    2:42 Building an AI Mathematician: The 3 Pillars (Prover, Knowledge Base, Conjecturer)
    5:50 The Role of Human-AI Collaboration
    6:34 Can AI Have Intuition? (Conjectures & AlphaGeometry)
    10:16 A Hybrid Approach: LLMs + Formal Verification
    11:24 Specialist Science Models vs. Generalist Giants
    13:33 The Problem with Current AI Benchmarks
    16:34 Practical Applications: Enterprise & Formal Verification
    21:24 The Main Bottleneck: Data Scarcity
    23:49 AGI vs. Superintelligence: The "Plate" Analogy
    26:31 Book Recommendations (Math, Law, and Literature)
    30:56 How to Use AI for Math Discovery Today

    Turing Post is a newsletter about AI's past, present, and future. Ksenia Se explores how intelligent systems are built – and how they're changing how we think, work, and live.

    Follow us → Ksenia and Turing Post:
    https://x.com/TheTuringPost
    https://www.linkedin.com/in/ksenia-se
    https://huggingface.co/Kseniase

    #AI #FutureOfAI #MathAI #FormalVerification #Lean #AxiomMath #Superintelligence #Reasoning

    33 min
  7. 12/04/2025

    Can We Control AI That Controls Itself? Anneka Gupta from Rubrik on…

    Is security still about patching after the crash? Or do we need to rethink everything when AI can cause failures on its own? Anneka Gupta, Chief Product Officer at Rubrik, argues we're now living in the world before the crash – where autonomous systems can create their own failures.

    In this episode of Inference, we explore:
    – Why AI agents are "the human problem on steroids"
    – The three pillars of AI resilience: visibility, governance, and reversibility
    – How to log everything an agent does (and why that's harder than it sounds)
    – The mental shift from deterministic code to outcome-driven experimentation
    – Why most large enterprises are stuck in AI prototyping (70-90% never reach production)
    – The tension between letting agents act and keeping them safe
    – What an "undo button" for AGI would actually look like
    – How AGI will accelerate the cat-and-mouse game between attackers and defenders

    We also discuss why teleportation beats all other sci-fi tech, why Asimov's philosophical approach to robots shaped her thinking, and how the fastest path to AI intuition is just... using it every day.

    This is a conversation about designing for uncertainty, building guardrails without paralyzing innovation, and what security means when the system can outsmart its own rules.

    Did you like the episode? You know the drill:
    📌 Subscribe for more conversations with the builders shaping real-world AI.
    💬 Leave a comment if this resonated.
    👍 Like it if you liked it.
    🫶 Thank you for watching and sharing!

    Guest: Anneka Gupta, Chief Product Officer at Rubrik
    https://www.linkedin.com/in/annekagupta/
    https://x.com/annekagupta
    https://www.rubrik.com/

    📰 Want the transcript and edited version? Subscribe to Turing Post: https://www.turingpost.com/subscribe

    Turing Post is a newsletter about AI's past, present, and future. Ksenia Se explores how intelligent systems are built – and how they're changing how we think, work, and live.

    Follow us → Ksenia and Turing Post:
    https://x.com/TheTuringPost
    https://www.linkedin.com/in/ksenia-se
    https://huggingface.co/Kseniase

    #AI #AIAgents #Cybersecurity #AIGovernance #EnterpriseAI #AIResilience #Rubrik #FutureOfSecurity

    27 min
  8. 12/04/2025

    Spencer Huang: NVIDIA’s Big Plan for Physical AI: Simulation, World Models, and the 3 Computers

    When robots move into the real world, speed and safety come from simulation! In his first sit-down interview, Spencer Huang – NVIDIA’s product lead for robotics software – talks about his role at NVIDIA, a flat organization where “you have access to everything.” We discuss how open source shapes NVIDIA’s robotics ecosystem, how robots learn physics through simulation, and why neural simulators and world models may evolve alongside conventional physics engines. I also ask him what’s harder: working on robotics or being Jensen Huang’s son. Watch to learn a lot about robotics, NVIDIA, and its big plans ahead. It was a real pleasure chatting with Spencer.

    *We cover:*
    - NVIDIA’s big picture
    - The “three computers” of robotics – training, simulation, deployment
    - Isaac Lab, Arena, and the path to policy evaluation at scale
    - Physics engines, interop, and why OpenUSD can unify fragmented toolchains
    - Neural simulators vs conventional simulators – a data flywheel, not a rivalry
    - Safety as an architecture problem – graceful failure and functional safety
    - Synthetic data for manipulation – soft bodies, contact forces, distributional realism
    - Why the biggest bottleneck is robotics data, and how open ecosystems help reach baseline
    - NVIDIA’s “Mission is Boss” culture – cross-pollinating research into robotics

    This is a ground-level look at how robots learn to handle the messy world – and why simulation needs both fidelity and diversity to produce robust skills.

    *Chapters*:
    0:22 The future of Physical AI begins here
    1:00 Inside NVIDIA’s secret blueprint for teaching robots
    3:46 Why safety is the hardest part of robotics
    4:11 Simulation: the new classroom for machines
    8:55 Can robots really understand physics?
    13:55 How NVIDIA builds robot brains without a PhD
    16:47 The plan to unify a fragmented robotics world
    20:31 Why open source is NVIDIA’s biggest power move
    21:21 What’s harder – robotics or being Jensen Huang’s son?
    24:31 The one thing holding robotics back
    27:56 The sci-fi books that shaped Spencer's mind

    *Did you like the episode? You know the drill:*
    📌 Subscribe for more conversations with the builders shaping real-world AI.
    💬 Leave a comment if this resonated.
    👍 Like it if you liked it.
    🫶 Thank you for watching and sharing!

    *Guest:* Spencer Huang – a product line manager at NVIDIA leading robotics software. His work centers on open-source simulation frameworks for robot learning, synthetic data generation methodologies, and advancing robot autonomy – from industrial mobile manipulators to generalist humanoid robots.
    https://www.linkedin.com/in/spencermhuang/

    *📰 Want the transcript and edited version?* Find it here: https://www.turingpost.com/spencer

    *Turing Post* is a newsletter about AI’s past, present, and future – exploring how intelligent systems are built and how they’re changing how we think, work, and live.
    📩 Sign up: https://www.turingpost.com

    Follow Ksenia Se and Turing Post:
    https://x.com/TheTuringPost
    https://www.linkedin.com/in/ksenia-se
    https://huggingface.co/Kseniase

    #robotics #simulation #NVIDIA #Omniverse #digitaltwins #worldmodels #physicalAI #reinforcementlearning #syntheticdata

    28 min

