101 episodes

Intel on AI
Intel Corporation

    • Technology

Tune in as we dissect recent AI news, explore cutting-edge innovations, and sit down with influential voices shaping the future of AI. Whether you're a seasoned expert or just dipping your toes into the AI waters, our podcast is your go-to resource for staying informed and inspired.

#IntelAI
@IntelAI

    Multimodal AI, Self-Supervised Learning, Counterfactual Reasoning, and AI Agents with Vasudev Lal

    Discover the cutting-edge advancements in artificial intelligence with Vasudev Lal, Principal AI Research Scientist at Intel. This episode delves into the benefits of multimodal AI and the enhanced validity achieved through self-supervised learning. Vasudev also explores the applications of counterfactual reasoning in AI and the efficiency gains from using AI agents. Additionally, learn how leveraging multiple Gaudi 2 accelerators can significantly reduce LLM training times. Stay updated with the latest in AI technology and innovations by following #IntelAI and @IntelAI for more information. 

    • 37 min
    Real-world manufacturing applications of AI and autonomous machine learning, with Rao Desineni

    Learn about real-world applications of AI in manufacturing as Rao Desineni shares how Intel incorporates visual AI into its defect detection processes, along with autonomous machine learning to improve product yield and quality.
     
    #IntelAI
    @IntelAI

    • 44 min
    Open ecosystems and AI data foundations, with Dr. Wei Li

    Learn the latest on open ecosystems, AI data foundations and Meta’s new Llama 3 with Dr. Wei Li, VP/GM of AI Software Engineering at Intel.

    • 43 min
    Intel on AI - The future of AI models and how to choose the right one, with Nuri Cankaya

    Dive deep into the ever-evolving landscape of AI with Intel’s VP of AI Marketing, Nuri Cankaya, as he navigates the intricacies of cutting-edge AI models and their impact on businesses.

    • 54 min
    Evolution, Technology, and the Brain

    In this episode of Intel on AI, host Amir Khosrowshahi talks with Jeff Lichtman about the evolution of technology and mammalian brains.
    Jeff Lichtman is the Jeremy R. Knowles Professor of Molecular and Cellular Biology at Harvard. He received an AB from Bowdoin and an M.D. and Ph.D. from Washington University, where he worked for thirty years before moving to Cambridge. He is now a member of Harvard’s Center for Brain Science and director of the Lichtman Lab, which focuses on connectomics: mapping neural connections and understanding their development.
    In the podcast episode Jeff talks about why researching the physical structure of the brain is so important to advancing science. He goes into detail about Brainbow, a method he and Joshua Sanes developed to illuminate and trace the “wires” (axons and dendrites) connecting neurons to each other. Amir and Jeff discuss how the academic rivalry between Santiago Ramón y Cajal and Camillo Golgi shaped the pioneering years of neuroscience research. Jeff describes his remarkable research: taking nanometer-scale slices of brain tissue, creating high-resolution images, and then digitally reconstructing the cells and synapses to get a more complete picture of the brain. The episode closes with Jeff and Amir discussing theories about how the human brain learns and what technologists might discover from the grand challenge of mapping the entire nervous system.
    Academic research discussed in the podcast episode:
    • Principles of Neural Development
    • The reorganization of synaptic connexions in the rat submandibular ganglion during post-natal development
    • Development of the neuromuscular junction: Genetic analysis in mice
    • A technicolour approach to the connectome
    • The big data challenges of connectomics
    • Imaging Intracellular Fluorescent Proteins at Nanometer Resolution
    • Stimulated emission depletion (STED) nanoscopy of a fluorescent protein-labeled organelle inside a living cell
    • High-resolution, high-throughput imaging with a multibeam scanning electron microscope
    • Saturated Reconstruction of a Volume of Neocortex
    • A connectomic study of a petascale fragment of human cerebral cortex
    • A Canonical Microcircuit for Neocortex

    • 1 hr 2 min
    Meta-Learning for Robots

    In this episode of Intel on AI, host Amir Khosrowshahi and co-host Mariano Phielipp talk with Chelsea Finn about machine learning research focused on giving robots the capability to develop intelligent behavior.
    Chelsea Finn is an Assistant Professor in Computer Science and Electrical Engineering at Stanford University, whose Stanford IRIS (Intelligence through Robotic Interaction at Scale) lab is closely associated with the Stanford Artificial Intelligence Laboratory (SAIL). She received her Bachelor's degree in Electrical Engineering and Computer Science at MIT and her PhD in Computer Science at UC Berkeley, where she worked with Pieter Abbeel and Sergey Levine.
    In the podcast episode Chelsea explains the difference between supervised learning and reinforcement learning. She goes into detail about new kinds of reinforcement learning algorithms that can help robots learn more autonomously. Chelsea talks extensively about meta-learning, the concept of helping robots learn to learn, and her efforts to advance model-agnostic meta-learning (MAML). The episode closes with Chelsea and Mariano discussing the intersection of natural language processing and reinforcement learning. The three also talk about the future of robotics and artificial intelligence, including the complexity of setting up robotic reward functions for seemingly simple tasks.
    Academic research discussed in the podcast episode:
    • Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
    • Meta-Learning with Memory-Augmented Neural Networks
    • Matching Networks for One Shot Learning
    • Learning to Learn with Gradients
    • Bayesian Model-Agnostic Meta-Learning
    • Meta-Learning with Implicit Gradients
    • Meta-Learning Without Memorization
    • Efficiently Identifying Task Groupings for Multi-Task Learning
    • Three scenarios for continual learning
    • Dota 2 with Large Scale Deep Reinforcement Learning
    • ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback
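
    For context on the MAML idea mentioned above, here is a minimal, hypothetical toy sketch of the meta-learning loop: a small linear model is meta-trained on randomly generated sine-wave regression tasks so that a single inner gradient step adapts it to a new task. It uses a first-order approximation in plain NumPy; the task setup, feature map, and learning rates are illustrative assumptions, not code from the episode or from Chelsea Finn's lab.

    # Toy first-order MAML sketch (hypothetical example, plain NumPy).
    import numpy as np

    rng = np.random.default_rng(0)
    FREQS = np.linspace(0.5, 2.0, 8)          # fixed frequencies for the feature map

    def sample_task():
        """A task is a sine wave with a randomly drawn amplitude and phase."""
        amplitude = rng.uniform(0.5, 2.0)
        phase = rng.uniform(0.0, np.pi)
        return lambda x: amplitude * np.sin(x + phase)

    def features(x):
        """Fixed sinusoidal features, so the model stays linear in theta."""
        return np.concatenate([np.sin(x * FREQS), np.cos(x * FREQS)], axis=-1)

    def mse_grad(theta, x, y):
        """Gradient of mean squared error with respect to theta."""
        err = features(x) @ theta - y
        return 2.0 * features(x).T @ err / len(x)

    inner_lr, outer_lr, meta_batch = 0.05, 0.01, 4
    theta = np.zeros(2 * len(FREQS))          # meta-parameters shared across tasks

    for step in range(2000):
        meta_grad = np.zeros_like(theta)
        for _ in range(meta_batch):
            task = sample_task()
            x_support = rng.uniform(-np.pi, np.pi, size=(10, 1))
            x_query = rng.uniform(-np.pi, np.pi, size=(10, 1))
            y_support, y_query = task(x_support).ravel(), task(x_query).ravel()

            # Inner loop: adapt the shared parameters to this task with one step.
            adapted = theta - inner_lr * mse_grad(theta, x_support, y_support)

            # Outer loop (first-order approximation): evaluate the adapted
            # parameters on held-out query data and accumulate that gradient.
            meta_grad += mse_grad(adapted, x_query, y_query)

        theta -= outer_lr * meta_grad / meta_batch

    Full MAML also differentiates through the inner update (a second-order term this first-order sketch drops); the MAML and implicit-gradients papers listed above discuss that distinction.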

    • 40 min
