TechFirst with John Koetsier

John Koetsier

Deep tech conversations with key innovators in AI, robotics, and smart matter ...

  1. 1 DAY AGO

    Robots won't do chores?

    Humanoid robots are coming into our homes, but they probably won’t be doing your laundry anytime soon. In this episode of TechFirst, host John Koetsier sits down with Jan Liphardt, founder & CEO of OpenMind and Stanford bioengineering professor, to unpack what home robots will actually do in the near future ... and why the “labor-free home” vision is mostly a myth (for now).

    Jan explains why hands are still one of the hardest unsolved problems in robotics, why folding laundry is far harder than it looks, and why the most valuable early use cases for home robots aren’t chores at all. Instead, we explore where robots are already delivering real value today:

    • Health companionship and fall detection for aging parents
    • Personalized education for kids, beyond screens
    • Home security that respects privacy
    • And why people form emotional bonds with robots faster than expected

    We also dive into OM1, OpenMind’s open-source, AI-native operating system for robots, and why openness, transparency, and configurability will matter deeply as robots move from factories into our living rooms.

    If you’re curious about the real future of humanoid robots — what’s hype, what’s possible today, and what’s coming next — this conversation is for you.

    🎙 Guest
    Jan Liphardt
    Founder & CEO, OpenMind
    Stanford Professor of Bioengineering
    Website: https://openmind.com

    ⸻

    👉 Subscribe for more conversations on AI, robotics, and the future of technology: https://techfirst.substack.com

    ⸻

    00:00 Intro: The promise of humanoid robots at home
    00:40 Meet Jan Liphardt and OpenMind’s OM1
    01:12 Why your “labor droid” isn’t here yet
    01:41 The “hand problem” and what robots can realistically do now
    03:07 Why economics matters: $300/hour tasks vs. laundry and dishes
    04:19 Robot hands today: reliability, repairability, and washing hands
    05:16 LG’s laundry-folding demo and why fabric is still hard
    06:16 Hospitals and hygiene: why “robot hand-washing” is unsolved
    07:41 Hands as a separate system: compute, sensors, and integration
    08:31 Why wheeled humanoids exist: hands first, body second
    09:26 The real home use cases today: security, education, companionship
    10:08 Aging in place: fall detection and remote nurse escalation
    11:30 Real-world stories: parents living alone and why this matters
    11:54 Privacy tradeoffs: robots vs. always-on home cameras
    12:52 AIBO and why people get attached to mobile robots
    13:52 Self-charging and the “my mom won’t plug it in” problem
    14:21 Beyond falls: autism support and memory care
    15:27 The education use case: “do my homework” vs. teach me
    16:26 Personalized learning: what current classrooms miss
    17:51 Why robot teachers beat screens for younger kids
    18:46 Home security basics: unfamiliar face detection + alerts
    19:15 Adding sensors: smoke, fire, sound, and anomaly detection
    19:41 Quadrupeds vs. humanoids: cost, simplicity, and mobility
    20:01 Safety issue: pinch hazards and kids hugging robots
    20:46 What’s next for home labor robots
    21:43 Why OM1 must be open source: transparency and trust
    23:39 Why ROS 2 isn’t enough for human environments
    24:37 OM1 approach: LLM-centric “Lego blocks” for robot behavior
    25:43 Open-source humanoids for kids and why ownership matters
    27:41 What’s missing: simulation is the bottleneck
    28:11 Gazebo/Isaac Sim pain and the need for realistic sims
    29:57 Why voice + “digital humans” matter in simulation
    30:47 Tipping points: factories, warehouses, robotaxis, and humanoids
    35:46 Wrap-up and final thoughts

    32 min
  2. 3 DAYS AGO

    Generative Hollywood: E! founder Larry Namer on AI

    AI is hitting entertainment like a sledgehammer ... from algorithmic gatekeepers and AI-written scripts to digital actors and entire movies generated from a prompt. In this episode of TechFirst, host John Koetsier sits down with Larry Namer, founder of E! Entertainment Television and chairman of the World Film Institute, to unpack what AI really means for Hollywood, creators, and the global media economy.

    Larry explains why AI is best understood as a productivity amplifier rather than a creativity killer, collapsing months of work into hours while freeing creators to focus on what only humans can do. He shares how AI is lowering barriers to entry, enabling underserved niches, and accelerating new formats like vertical drama, interactive storytelling, and global-first content.

    The conversation also dives into:

    • Why AI-generated actors still lack true human empathy
    • How studios and IP owners will be forced to license their content to AI companies
    • The future of deepfakes, guardrails, and regulation
    • Why market fragmentation isn’t a threat — it’s an opportunity
    • How China, Korea, and global platforms are shaping what comes next
    • Why writers and storytellers may be entering their best era yet

    Larry brings decades of perspective from every major media transition — cable, streaming, global expansion — and makes the case that AI is just the next tool in a long line of transformative technologies. If you care about the future of movies, television, creators, and culture, this is a conversation you don’t want to miss.

    ⸻

    🎙 Guest
    Larry Namer
    Founder, E! Entertainment Television
    Chairman, World Film Institute

    ⸻

    👉 Subscribe for more conversations on AI, media, and the future of technology: https://techfirst.substack.com

    ⸻

    00:00 – AI, emotion, and the danger of “AI twins”
    00:00 – Welcome to TechFirst + the AI disruption of entertainment
    00:01 – Chaos in Hollywood: Disney, Netflix, Warner Bros, and consolidation
    00:02 – AI as a productivity tool, not a creativity replacement
    00:03 – How AI gives creators back their most valuable asset: time
    00:04 – Regulation, guardrails, and the need for consequences
    00:05 – Fragmentation, niche content, and the future economics of media
    00:06 – Why streaming has been a gift to writers and storytellers
    00:06 – Disney licensing IP to AI and why it was inevitable
    00:07 – Contracts, actors’ rights, and why the law must catch up
    00:08 – Deepfakes, AI avatars, and digital celebrities
    00:09 – AI actors, empathy gaps, and spotting what isn’t human
    00:10 – Using GPT to launch a bestselling book in days
    00:11 – Big media M&A in an AI-driven world
    00:12 – Jobs AI will eliminate vs. jobs AI will create
    00:13 – Miniseries, deep storytelling, and why streaming changed everything
    00:14 – Vertical video, short-form drama, and old ideas in new formats
    00:15 – China vs. the West: who’s ahead in entertainment tech
    00:16 – Global storytelling and Game of Thrones–scale opportunities
    00:17 – Why Hollywood could ruin vertical video
    00:18 – Interactive, immersive, and branched storytelling
    00:19 – The future of screens, platforms, and audience choice
    00:20 – Why new media never replaces old media
    00:20 – Final thoughts on abundance, choice, and creativity

    20 min
  3. 6 DAYS AGO

    Robot reasoning: why data is not enough

    Robots aren’t just software. They’re AI in the physical world. And that changes everything. In this episode of TechFirst, host John Koetsier sits down with Ali Farhadi, CEO of the Allen Institute for AI, to unpack one of the biggest debates in robotics today: Is data enough, or do robots need structured reasoning to truly understand the world?

    Ali explains why physical AI demands more than massive datasets, how concepts like reasoning in space and time differ from language-based chain-of-thought, and why transparency is essential for safety, trust, and human–robot collaboration. We dive deep into MolmoAct, an open model designed to make robot decision-making visible, steerable, and auditable, and talk about why open research may be the fastest path to scalable robotics.

    This conversation also explores:

    • Why reasoning looks different in the physical world
    • How robots can project intent before acting
    • The limits of “data-only” approaches
    • Trust, safety, and transparency in real-world robotics
    • Edge vs cloud AI for physical systems
    • Why open-source models matter for global AI progress

    If you’re interested in robotics, embodied AI, or the future of intelligent machines operating alongside humans, this episode is a must-watch.

    👤 Guest
    Ali Farhadi
    CEO, Allen Institute for AI (AI2)
    Professor, University of Washington
    Former Apple researcher

    ⸻

    👉 Subscribe for more conversations like this: https://techfirst.substack.com

    ⸻

    00:00 – Plato vs Aristotle… in robotics?
    00:55 – What “reasoning” means in the physical world
    02:10 – How humans predict actions before they happen
    03:45 – Why physical AI is fundamentally different from text AI
    04:50 – The next revolution: AI in the real world
    05:30 – What is MolmoAct?
    06:20 – Chain-of-thought… for robots
    07:45 – Trajectories as reasoning and robot transparency
    08:55 – Trust, safety, and correcting robots mid-action
    10:15 – Why predictability builds trust in machines
    11:40 – What’s broken with data-only AI approaches
    13:10 – Why reasoning + data isn’t an “either/or”
    14:00 – Open sourcing robotics models: why it matters
    15:20 – How closed AI slows innovation
    16:45 – Global competition and open research
    17:40 – What’s next for robotics reasoning models
    18:20 – Can these models work across robot types?
    19:30 – Temporal and spatial reasoning in Molmo 2
    20:40 – Scaling robotics vs scaling LLMs
    21:10 – Edge vs cloud AI for robots
    22:20 – Specialized models, latency, and privacy
    23:00 – Final thoughts on the future of physical AI

    22 min
  4. 16 JAN

    Social humanoid robot for kids under $10,000

    Can we really build a $10,000 humanoid robot on open-source AI? In this episode of TechFirst, John Koetsier talks with Chris Kudla, CEO of Mind Children, about a radically different approach to humanoid robots. Instead of six-figure industrial machines built for factories or war zones, Mind Children is building small, safe, friendly social robots designed for kids, classrooms, and elder care.

    Meet Cody (MC-1), their first humanoid prototype. Cody is built on open-source AI from SingularityNET, combined with modular hardware, low-torque actuators, and a wheeled base designed for safety, affordability, and mass production, along with AI components from other big-name companies you’d recognize. Mind Children’s goal is ambitious: a $10,000 humanoid robot that families, schools, and care facilities can actually afford.

    In this conversation we explore:

    • Why social robots may be the real gateway to embodied AI
    • How Cody is designed for children and elder care instead of factories
    • Why wheels beat bipedal legs for safety, cost, and stability
    • How open-source AI and modular software stacks enable faster innovation
    • The emotional and ethical challenges of building companion robots
    • And what it takes to bring a humanoid robot to market at scale

    This is not sci-fi. This is the early blueprint of a future where humanoid robots are personal, affordable, and open-source.

    00:00 – The $10,000 open-source humanoid question
    01:58 – Meet Cody, the MC-1 prototype
    04:10 – Why Cody is small, child-sized, and approachable
    06:55 – Designing humanoids for kids and elder care
    09:45 – Social robots vs industrial humanoids
    12:40 – Wheels instead of legs and why that matters
    16:05 – Low-torque actuators, safety, and toy-like design
    19:20 – Modular hands, arms, and future upgrades
    22:10 – Open-source AI and SingularityNET’s role
    25:30 – On-robot vs cloud AI and why it matters
    28:40 – Vision, LiDAR, and simulated world models
    32:10 – Emotional awareness and social intelligence
    35:10 – The $10K target and mass-production strategy
    38:15 – The risks of attachment to robot companions
    40:00 – Final thoughts on Cody and the future of social robots

    38 min
  5. 14 JAN

    AI is now every UI: generative user interfaces explained

    Is AI really the new UI, or is that just another tech buzzphrase? Or ... is AI actually EVERY user interface now? In this episode of TechFirst, host John Koetsier sits down with Mark Vange, CEO & founder of Automate.ly and former CTO at Electronic Arts, to unpack what happens when interfaces stop being fixed and start being generated on the fly.

    They explore:

    • Why generative AI makes it cheaper to create custom interfaces per user
    • How conversational, auditory, and adaptive experiences redefine “UI”
    • When consistency still matters (cars, safety systems, frontline work)
    • Why AI doesn’t replace workers — but radically reshapes workflows
    • Whether browsers should become AI-native or stay neutral canvases
    • The unresolved risks around AI agents, payments, and control

    From hospitals using AI to speak Haitian Creole, to compliance forms that drop from hours to minutes, this conversation shows how every experience can become intelligent, contextual, and helpful.

    👉 If you care about product design, AI, UX, or the future of software, this episode is for you. Subscribe for more conversations like this: https://techfirst.substack.com

    ⸻

    👤 Guest
    Mark Vange
    CEO & Founder, Automate.ly
    Former CTO, Electronic Arts
    Investor, serial entrepreneur, and builder focused on intent-driven, AI-native software

    ⸻

    ⏱️ Chapter Markers
    00:00 – Is AI the New UI? Why generative interfaces are reigniting the UI conversation
    02:10 – The Hidden Cost of Traditional Interfaces: Why one-size-fits-all software limits users
    04:20 – When UIs Are Generated on Demand: Adaptive experiences vs fixed screens and buttons
    06:15 – Conversational & Multimodal Interfaces: Why voice, audio, and language are all “UI”
    08:30 – When Consistency Still Matters: Safety, muscle memory, and shared interface conventions
    10:45 – How Generative UIs Change Work: AI as a collaborator, not a replacement
    13:05 – Making Every Page an Application: Why “dumb forms” and static sites are disappearing
    15:10 – The Browser as the Ultimate Interface: Neutral canvases vs AI-controlled environments
    17:10 – AI Agents, Payments, and Control: Why money is the hardest unsolved AI problem
    19:25 – The Future of Multimodal UI: Why UI goes far beyond pixels and screens

    21 min
  6. 7 JAN

    Agent-first web: awesome or awful?

    The web is turning agentic. And that changes everything from shopping to search to SEO. In this episode of TechFirst, John Koetsier sits down with Dave Anderson (VP at ContentSquare + host of the “Tech Seeking Human” podcast) to unpack what happens when browsers and AI assistants don’t just answer … they do stuff. For you. On your behalf.

    From Atlas and agentic browsing to the growing backlash from retailers (hello, Amazon vs Perplexity), we explore who benefits, who loses, and what the internet becomes when agents are the default user. You’ll hear why retailers are nervous (security, margins, coupon hunting), why agent-first experiences might create “headless” retailers (like ghost kitchens, but for ecommerce), and why search is shifting from SEO to AI visibility. Plus: real talk about trusting agents with your credit card, hallucinations, and what it means if your agent can look indistinguishable from you.

    Guest
    Dave Anderson — VP, ContentSquare
    https://contentsquare.com
    Podcast: Tech Seeking Human
    https://www.techseekinghuman.ai

    Links & subscribe
    Subscribe for more conversations on tech, AI, and what’s next: https://techfirst.substack.com
    Transcripts always available here: https://johnkoetsier.com

    00:00 Agentic web: what changes when browsers “do stuff”
    00:59 Meet Dave Anderson (VP + podcast host)
    01:31 30,000 feet: why “agents” suddenly matter
    03:48 The agent future John wanted 10 years ago
    04:21 Why Amazon doesn’t want your agent shopping on Amazon
    05:07 Ticketmaster, bots, and the security nightmare
    06:26 Siri’s original promise vs today’s reality
    08:31 Are agents just bots… or something different?
    10:04 Retail fears: coupon hunting, margins, returns chaos
    11:21 Can you trust an agent with your credit card?
    11:59 Why retailers want their own agents (and control)
    13:14 Amazon’s agent works… but is it the whole internet?
    14:19 Ghost kitchens for retail: “headless” agent-first brands
    15:17 Hugo Boss jacket test: agents vs manual search
    16:40 Agents should talk to your finance agent
    17:14 Kids + deepfakes: what even looks real anymore?
    18:04 Is this corrosive to apps… or the web?
    19:10 Online identity, anonymity, and agent verification
    20:28 Two futures: human-first brands vs agent-first retail
    21:19 Agentic browsers on your device: can they “look like you”?
    22:51 Baseball vs golf: the best analogy for search now
    24:44 Instant shopping problem: returns + missing “services layer”
    26:10 AI weirdness: wrong names, wrong locations, shifting behavior
    27:37 Agents beyond shopping: support is the sleeper win
    29:49 Inventing the future: who adopts agents and who won’t
    31:13 Will people get tired of AI and crave humans again?
    31:45 Serendipity vs optimization: the restaurant debate
    32:36 Wrap: nobody solved agents… but the shift is real

    33 min
  7. 6 JAN

    World models: LLMs are not enough

    AI has mastered language, sort of. But the real world is way messier. In this episode of TechFirst, John Koetsier sits down with Kirin Sinha, founder and CEO of Illumix, to explore what comes after large language models: world models, spatial intelligence, and physical AI.

    They unpack why LLMs alone won’t get us to human-level intelligence, what it actually takes for machines to understand physical space, and how technologies born in augmented reality are now powering robotics, wearables, and real-world AI systems.

    This conversation goes deep on:

    • What “world models” really are — and why everyone from Fei-Fei Li to Jeff Bezos is betting on them
    • Why continuous video and outward-facing cameras are so hard for AI
    • The perception stack behind robots and smart glasses
    • Edge vs cloud compute — and why latency and privacy matter more than ever
    • How AR laid the groundwork for the next generation of physical intelligence

    If you’re building or betting on robotics, smart wearables, AR, or physical AI, this episode explains the infrastructure shift that’s already underway.

    Guest
    Kirin Sinha
    Founder & CEO, Illumix
    https://www.illumix.com

    👉 Subscribe for more deep conversations on technology, AI, and the future: https://techfirst.substack.com

    00:00 Raising the Bar on “Smart” Devices
    01:07 Meet Kirin, Founder & CEO of Illumix
    01:21 What Is a World Model — and Why It Matters
    02:23 Why LLMs Alone Won’t Lead to AGI
    03:46 From AR & the Metaverse to Physical AI
    05:18 AR vs VR vs the Metaverse — Different Problems, Different Futures
    06:32 Spatial Perception, Scene Understanding, and Contextual Intelligence
    07:39 Why Continuous Video Is So Hard for Machines
    08:39 The Camera Flip: From Selfie AI to World-Facing AI
    09:58 Why Cameras Beat LiDAR for Wearables and Robots
    10:27 Inside the Perception Stack
    11:20 Edge vs Cloud Compute in Physical AI
    12:37 Why On-Device Intelligence Matters for UX
    13:52 SLMs, Efficiency, and the Limits of “Bigger Is Better”
    15:11 Knowing What to Run — and When
    16:06 Intent, Memory, and Real-Time AI Decisions
    17:32 Physical Intelligence vs Digital Intelligence
    18:39 Memory Palaces, Spatial Brains, and Human AI
    19:39 Do We Need New Chips for Humanoid Robots?
    20:26 How Chip Architectures Will Evolve for Physical AI
    21:47 Privacy, On-Device Processing, and Trust
    22:48 Final Thoughts on the Future of World-Aware AI

    22 min
  8. 3 JAN

    Quantum computing, meet edge computing (thanks to diamonds)

    Quantum computers usually mean massive machines, cryogenic temperatures, and isolated data centers. But what if quantum computing could run at room temperature, fit inside a server rack — or even a satellite? In this episode of TechFirst, host John Koetsier sits down with Marcus Doherty, Chief Science Officer of Quantum Brilliance, to explore how diamond-based quantum computers work — and why they could unlock scalable, edge-deployed quantum systems.

    Marcus explains how nitrogen-vacancy (NV) centers in diamond act like atomic-scale qubits, enabling long coherence times without extreme cooling. We dive into quantum sensing, quantum machine learning, and why diamond fabrication — including the world’s first commercial quantum diamond foundry — could be the key to manufacturing quantum hardware at scale.

    You’ll also hear how diamond quantum systems are already being deployed in data centers, how they could operate in vehicles and satellites, and what the realistic roadmap looks like for logical qubits and real-world impact over the next decade.

    Topics include:

    • Why diamonds are uniquely suited for quantum computing
    • How NV centers work at room temperature
    • Quantum sensing vs. quantum computing
    • Manufacturing challenges and timelines
    • Quantum computing at the edge (satellites, vehicles, sensors)
    • The future of hybrid classical-quantum systems

    ⸻

    🎙 Guest
    Marcus Doherty
    Chief Science Officer, Quantum Brilliance
    Professor of Quantum Physics
    Army Reserve Officer
    🌐 https://quantumbrilliance.com

    ⸻

    👉 Subscribe for more deep dives into the future of technology: https://techfirst.substack.com

    ⸻

    00:00 Diamonds and the next wave of quantum computing
    01:20 Why diamond qubits work at room temperature
    03:20 NV centers explained: defects that behave like atoms
    05:05 How diamonds replace massive quantum isolation systems
    06:40 Building the world’s first quantum diamond foundry
    08:30 Defect-free diamonds, isotopes, and qubit engineering
    10:15 Quantum sensing vs. quantum computing with diamonds
    12:40 From desktop quantum systems to millions of qubits
    14:25 Roadmap: logical qubits, timelines, and scale
    16:10 Quantum computers at the edge: vehicles and satellites
    18:10 Quantum machine learning and real-world deployments
    19:50 The long game: why diamond quantum computing scales

    21 min
