Eye on A.I.

Craig S. Smith

Eye on A.I. is a biweekly podcast, hosted by longtime New York Times correspondent Craig S. Smith. In each episode, Craig will talk to people making a difference in artificial intelligence. The podcast aims to put incremental advances into a broader context and consider the global implications of the developing technology. AI is about to change your world, so pay attention.

  1. 6D AGO

    #308 Christopher Bergey: How Arm Enables AI to Run Directly on Devices

    Try OCI for free at http://oracle.com/eyeonai. This episode is sponsored by Oracle. OCI is the next-generation cloud designed for every workload, where you can run any application, including any AI project, faster and more securely for less. On average, OCI costs 50% less for compute, 70% less for storage, and 80% less for networking. Join Modal, Skydance Animation, and today's innovative AI tech companies who upgraded to OCI…and saved.

    Why is AI moving from the cloud to our devices, and what makes on-device intelligence finally practical at scale? In this episode of Eye on AI, host Craig Smith speaks with Christopher Bergey, Executive Vice President of Arm's Edge AI Business Unit, about how edge AI is reshaping computing across smartphones, PCs, wearables, cars, and everyday devices. We explore how Armv9 enables AI inference at the edge, why heterogeneous computing across CPUs, GPUs, and NPUs matters, and how developers can balance performance, power, memory, and latency. Learn why memory bandwidth has become the biggest bottleneck for AI, how Arm approaches Scalable Matrix Extensions, and what trade-offs exist between accelerators and traditional CPU-based AI workloads. You will also hear real-world examples of edge AI in action, from smart cameras and hearing aids to XR devices, robotics, and in-car systems. The conversation looks ahead to a future where intelligence is embedded into everything you use, where AI becomes the default interface, and why reliable, low-latency, on-device AI is essential for creating experiences users actually trust.

    Stay Updated: Craig Smith on X: https://x.com/craigss | Eye on A.I. on X: https://x.com/EyeOn_AI

    52 min
  2. DEC 16

    #307 Steven Brightfield: How Neuromorphic Computing Cuts Inference Power by 10x

    This episode is sponsored by AGNTCY. Unlock agents at scale with an open Internet of Agents. Visit https://agntcy.org/ and add your support.

    Why is AI so powerful in the cloud but still so limited inside everyday devices, and what would it take to run intelligent systems locally without draining the battery or sacrificing privacy? In this episode of Eye on AI, host Craig Smith speaks with Steve Brightfield, Chief Marketing Officer at BrainChip, about neuromorphic computing and why brain-inspired architectures may be the key to the future of edge AI. We explore how neuromorphic systems differ from traditional GPU-based AI, why event-driven and spiking neural networks are dramatically more power efficient, and how on-device inference enables faster response times, lower costs, and stronger data privacy. Steve explains why brute-force computation works in data centers but breaks down at the edge, and how edge AI is reshaping wearables, sensors, robotics, hearing aids, and autonomous systems. You will also hear real-world examples of neuromorphic AI in action, from smart glasses and medical monitoring to radar, defense, and space applications. The conversation covers how developers can transition from conventional models to neuromorphic architectures, what role heterogeneous computing plays alongside CPUs and GPUs, and why the next wave of AI adoption will happen quietly inside the devices we use every day.

    Stay Updated: Craig Smith on X: https://x.com/craigss | Eye on A.I. on X: https://x.com/EyeOn_AI

    1 hr
  3. DEC 3

    #305 Rakshit Ghura: How Lenovo Is Turning AI Agents Into Digital Coworkers

    Why are enterprises struggling to turn AI hype into real workplace transformation, and how is Lenovo using agentic AI to actually close that gap? In this episode of Eye on AI, host Craig Smith talks with Rakshit Ghura about how his team is reinventing the modern workplace with an omnichannel AI architecture powered by a fleet of specialized agents. We explore how Lenovo has evolved from a hardware company into a global solutions provider, and how its Care of One platform uses persona-based design to improve employee experience, reduce downtime, and personalize support across IT, HR, and operations. You will learn what enterprises get wrong about AI readiness, why trust and change management matter more than technology, and how organizations can design workplace stacks that meet employees where they are. We also cover how Lenovo approaches responsible AI, how enterprises should think about security and governance when deploying agents, and why so many organizations are enthusiastic about AI but still not ready to adopt it. Rakshit shares real examples from retail, manufacturing, and field operations, including how AI can improve uptime, automate ticket resolution, monitor equipment, and provide proactive insights that drive measurable business impact. You will also learn how to evaluate ROI for digital workplace solutions, how to involve employees early in the adoption cycle, and which metrics matter most when scaling agentic AI, including uptime, productivity improvements, and employee satisfaction.

    Stay Updated: Craig Smith on X: https://x.com/craigss | Eye on A.I. on X: https://x.com/EyeOn_AI

    46 min
  4. NOV 28

    #304 Matt Zeiler: Why Government And Enterprises Choose Clarifai For AI Ops

    Try OCI for free at http://oracle.com/eyeonai. This episode is sponsored by Oracle. OCI is the next-generation cloud designed for every workload, where you can run any application, including any AI project, faster and more securely for less. On average, OCI costs 50% less for compute, 70% less for storage, and 80% less for networking. Join Modal, Skydance Animation, and today's innovative AI tech companies who upgraded to OCI…and saved.

    Why is AI inference becoming the new battleground for speed, cost, and real-world scalability, and how are companies like Clarifai reshaping the AI stack by optimizing every token and every deployment? In this episode of Eye on AI, host Craig Smith sits down with Clarifai founder and CEO Matt Zeiler to explore why inference is now more important than training and how a unified compute orchestration layer is changing the way teams run LLMs and agentic systems. We look at what makes high-performance inference possible across cloud, on-prem, and edge environments, how to get faster responses from large language models, and how to cut GPU spend without sacrificing intelligence or accuracy. Learn how organizations operate AI systems in regulated industries, how government teams and enterprises use Clarifai to deploy models securely, and which bottlenecks matter most when running long-context, multimodal, or high-throughput applications. You will also hear how to optimize your own AI workloads with better token throughput, how to choose the right hardware strategy for scale, and how inference-first architecture can turn models into real products. This conversation breaks down the tools, techniques, and design patterns that can help your AI agents run faster, cheaper, and more reliably in production.

    Stay Updated: Craig Smith on X: https://x.com/craigss | Eye on A.I. on X: https://x.com/EyeOn_AI

    55 min
  5. NOV 23

    #303 Fei-Fei Li: Spatial Intelligence, World Models & the Future of AI

    This episode is sponsored by AGNTCY. Unlock agents at scale with an open Internet of Agents. Visit https://agntcy.org/ and add your support.

    How will AI evolve once it can understand and reason about the 3D world, not just text on a screen? In this episode of Eye on AI, host Craig Smith speaks with Fei-Fei Li about the rise of spatial intelligence and the world models that could transform how machines perceive, imagine, and interact with reality. We explore how spatial intelligence goes beyond language to connect perception, action, and reasoning in physical environments. You will hear how models like Marble build consistent and persistent 3D spaces, why multimodal inputs matter, and what it takes to create digital worlds that are useful for robotics, simulation, design, and creative workflows. Fei-Fei also explains the challenges of long-term memory, continuous learning, and the search for training objectives that mirror the role next-token prediction plays in language models. Learn how spatial reasoning unlocks new possibilities in robotics and telepresence, why classical physics engines still matter, and how future AI systems may merge perception, planning, and imagination. You will also hear Fei-Fei's perspective on the limits of current architectures, why true understanding is different from human understanding, and how world models could shape the next generation of intelligent systems.

    Stay Updated: Craig Smith on X: https://x.com/craigss | Eye on A.I. on X: https://x.com/EyeOn_AI

    1h 1m
  6. NOV 19

    #302 Karl Friston: How the Free Energy Principle Could Rewrite AI

    This episode is sponsored by AGNTCY. Unlock agents at scale with an open Internet of Agents. Visit https://agntcy.org/ and add your support.

    How could Karl Friston's Free Energy Principle become a blueprint for the future of AI? In this episode of Eye on AI, host Craig Smith sits down with Karl Friston, the neuroscientist behind the Free Energy Principle and advisor to Verses AI, to explore how active inference and brain-inspired generative models might move us beyond transformer-based systems. They unpack how Axiom, Verses' new architecture, uses probabilistic beliefs and message passing to build agents that learn like brains instead of just predicting the next token. We look at why transformers face scaling and reliability limits, how the Free Energy Principle unifies prediction, perception, and action, and what it means for an AI system to carry explicit uncertainty instead of overconfident guesses. Learn how active inference supports continual learning without catastrophic forgetting, how structure learning lets models grow and prune themselves, and why embodiment and interaction with the real world are essential for grounding language and meaning. You will also hear how Axiom can sit beside or beneath large language models, how explicit uncertainty can reduce hallucinations in high-stakes workflows, and where these ideas are already being tested in areas like logistics, robotics, and autonomous agents. By the end of the episode, you will have a clearer picture of how Karl Friston's Free Energy blueprint could reshape AI architectures, from enterprise planning systems to embodied agents that understand and act in the world.

    Stay Updated: Craig Smith on X: https://x.com/craigss | Eye on A.I. on X: https://x.com/EyeOn_AI

    1h 3m
4.7 out of 5 (56 Ratings)
