97 episodes

Interesting, relevant, outcome-based AI content designed to meet our audience where they are on their AI journey. Join co-hosts Ryan Carson and Tony Mongkolsmai from Intel as they cover recent AI news and interview industry thought leaders.

Intel on AI
Intel

    • Technology
    • 4.9 • 13 Ratings


    Evolution, Technology, and the Brain

    In this episode of Intel on AI host Amir Khosrowshahi talks with Jeff Lichtman about the evolution of technology and mammalian brains.
    Jeff Lichtman is the Jeremy R. Knowles Professor of Molecular and Cellular Biology at Harvard. He received an AB from Bowdoin and an M.D. and Ph.D. from Washington University, where he worked for thirty years before moving to Cambridge. He is now a member of Harvard’s Center for Brain Science and director of the Lichtman Lab, which focuses on connectomics: mapping neural connections and understanding their development.
    In the podcast episode Jeff talks about why researching the physical structure of the brain is so important to advancing science. He goes into detail about Brainbow, a method he and Joshua Sanes developed to illuminate and trace the “wires” (axons and dendrites) connecting neurons to each other. Amir and Jeff discuss how the academic rivalry between Santiago Ramón y Cajal and Camillo Golgi laid the foundations of modern neuroscience. Jeff describes his remarkable research taking nanometer-thin slices of brain tissue, creating high-resolution images of them, and then digitally reconstructing the cells and synapses to get a more complete picture of the brain. The episode closes with Jeff and Amir discussing theories about how the human brain learns and what technologists might discover from the grand challenge of mapping the entire nervous system.
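The final reconstruction step Jeff describes can be pictured as turning a list of detected synapses into a wiring diagram. A toy sketch of that bookkeeping, with made-up neuron IDs (this is an illustration, not the Lichtman Lab's actual pipeline):

```python
from collections import Counter

# Hypothetical output of a segmentation/synapse-detection pipeline:
# each pair is (presynaptic neuron, postsynaptic neuron).
synapses = [("n1", "n2"), ("n1", "n2"), ("n2", "n3"), ("n1", "n3")]

# The "wiring diagram": (pre, post) -> number of synapses found.
connectome = Counter(synapses)
print(connectome[("n1", "n2")])  # 2 synapses from n1 onto n2
```

The hard part in practice is producing the synapse list from petabytes of electron-microscopy images; the tally itself is the easy last step.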
    Academic research discussed in the podcast episode:
    • Principles of Neural Development
    • The reorganization of synaptic connexions in the rat submandibular ganglion during post-natal development
    • Development of the neuromuscular junction: Genetic analysis in mice
    • A technicolour approach to the connectome
    • The big data challenges of connectomics
    • Imaging Intracellular Fluorescent Proteins at Nanometer Resolution
    • Stimulated emission depletion (STED) nanoscopy of a fluorescent protein-labeled organelle inside a living cell
    • High-resolution, high-throughput imaging with a multibeam scanning electron microscope
    • Saturated Reconstruction of a Volume of Neocortex
    • A connectomic study of a petascale fragment of human cerebral cortex
    • A Canonical Microcircuit for Neocortex

    • 1 hr 2 min
    Meta-Learning for Robots

    In this episode of Intel on AI host Amir Khosrowshahi and co-host Mariano Phielipp talk with Chelsea Finn about machine learning research focused on giving robots the capability to develop intelligent behavior.
    Chelsea is an Assistant Professor of Computer Science and Electrical Engineering at Stanford University; her Stanford IRIS (Intelligence through Robotic Interaction at Scale) lab is closely associated with the Stanford Artificial Intelligence Laboratory (SAIL). She received her Bachelor's degree in Electrical Engineering and Computer Science from MIT and her PhD in Computer Science from UC Berkeley, where she worked with Pieter Abbeel and Sergey Levine.
    In the podcast episode Chelsea explains the difference between supervised learning and reinforcement learning. She goes into detail about new kinds of reinforcement learning algorithms that help robots learn more autonomously. Chelsea talks extensively about meta-learning, the concept of helping robots learn how to learn, and her efforts to advance model-agnostic meta-learning (MAML). The episode closes with Chelsea and Mariano discussing the intersection of natural language processing and reinforcement learning. The three also talk about the future of robotics and artificial intelligence, including the complexity of setting up robotic reward functions for seemingly simple tasks.
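The core idea of MAML, learning an initialization that adapts to a new task in just a few gradient steps, can be sketched with a toy first-order variant. This is a deliberate simplification of the full second-order algorithm, using scalar regression tasks where each task's loss is just the squared distance to a target slope:

```python
# Toy first-order MAML sketch (not the full second-order algorithm):
# a scalar parameter w meta-learns an initialization that adapts
# quickly to any task in a family of regression problems.

ALPHA, BETA = 0.25, 0.1   # inner (adaptation) and outer (meta) step sizes
tasks = [2.0, 4.0]        # each task: reach target slope a, loss (w - a)^2

def inner_adapt(w, a):
    """One gradient step on a single task's loss (w - a)^2."""
    return w - ALPHA * 2.0 * (w - a)

w = 0.0  # meta-parameter: the shared initialization
for _ in range(100):
    meta_grad = 0.0
    for a in tasks:
        w_task = inner_adapt(w, a)       # task-specific fine-tune
        meta_grad += 2.0 * (w_task - a)  # first-order outer gradient
    w -= BETA * meta_grad                # meta-update

print(w)  # converges to ~3.0, midway between the tasks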
    Academic research discussed in the podcast episode:
    • Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
    • Meta-Learning with Memory-Augmented Neural Networks
    • Matching Networks for One Shot Learning
    • Learning to Learn with Gradients
    • Bayesian Model-Agnostic Meta-Learning
    • Meta-Learning with Implicit Gradients
    • Meta-Learning Without Memorization
    • Efficiently Identifying Task Groupings for Multi-Task Learning
    • Three scenarios for continual learning
    • Dota 2 with Large Scale Deep Reinforcement Learning
    • ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback

    • 40 min
    AI, Social Media, and Political Influence

    In this episode of Intel on AI host Amir Khosrowshahi talks with Joshua Tucker about using artificial intelligence to study the influence social media has on politics.
    Joshua is a professor of politics at New York University with affiliated appointments in the Department of Russian and Slavic Studies and the Center for Data Science. He is also the director of the Jordan Center for the Advanced Study of Russia and co-director of the Center for Social Media and Politics. He was a co-author and editor of an award-winning policy blog at The Washington Post and has published several books, most recently as co-editor of Social Media and Democracy: The State of the Field, Prospects for Reform (Cambridge University Press).
    In the podcast episode, Joshua discusses his background in researching mass political behavior, including the Colored Revolutions in Eastern Europe. He talks about how his field of study changed after working with his then PhD student Pablo Barberá (now a professor at the University of Southern California), who proposed a method for estimating people's partisanship from the structure of their social networks. Joshua describes the limitations researchers often face when trying to study data on various platforms, the challenges of big data, utilizing NYU’s Greene HPC cluster, and the impact the leak of the Facebook Papers had on the field. He also describes findings about the kinds of people most prone to sharing material from fraudulent media organizations masquerading as news outlets, and how researchers like Rebekah Tromble (director of the Institute for Data, Democracy and Politics at George Washington University) are working with government entities like the European Union to balance public research with data privacy. The episode closes with Amir and Joshua discussing disinformation campaigns in the context of the Russo-Ukrainian War.
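As a rough intuition for the network-based approach, one can score a user by averaging the known ideology scores of the political accounts they follow. This toy is a stand-in, not Barberá's actual Bayesian ideal point model, and the account names and scores below are invented:

```python
# Invented ideal points for a handful of political elite accounts
# (negative = left, positive = right).
elite_ideology = {"left_outlet": -1.0, "centrist_sen": 0.0, "right_pundit": 1.0}

def estimate_ideology(follows):
    """Average the known ideal points of the accounts a user follows."""
    known = [elite_ideology[f] for f in follows if f in elite_ideology]
    return sum(known) / len(known) if known else None

print(estimate_ideology(["left_outlet", "centrist_sen"]))  # -0.5
```

The real model jointly estimates both the elites' and the users' positions from the follow graph rather than assuming the elite scores are known.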
    Academic research discussed in the podcast episode:
    • Birds of the Same Feather Tweet Together: Bayesian Ideal Point Estimation Using Twitter Data
    • Tweeting From Left to Right: Is Online Political Communication More Than an Echo Chamber?

    • 33 min
    Machine Learning and Molecular Simulation

    In this episode of Intel on AI host Amir Khosrowshahi talks with Ron Dror about breakthroughs in computational biology and molecular simulation.
    Ron is an Associate Professor of Computer Science in the Stanford Artificial Intelligence Lab, leading a research group that uses machine learning and molecular simulation to elucidate biomolecular structure, dynamics, and function, and to guide the development of more effective medicines. Previously, Ron worked on the Anton supercomputer at D. E. Shaw Research after earning degrees in electrical engineering, computer science, biological sciences, and mathematics from MIT, Cambridge, and Rice. His groundbreaking research has been published in journals such as Science and Nature, presented at conferences like Neural Information Processing Systems (NeurIPS), and has won awards from the Association for Computing Machinery (ACM) and other organizations.
    In the podcast episode, Ron talks about his work with several important collaborators, his interdisciplinary approach to research, and how molecular modeling has improved over the years. He goes into detail about the generation-over-generation advances in the Anton supercomputer, including its software, and his recent work at Stanford combining molecular dynamics simulations and machine learning. The podcast closes with Amir asking detailed questions about Ron and his team’s recent paper on RNA structure prediction, which was featured on the cover of Science.
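At the heart of any molecular dynamics engine, Anton included, is a time-stepping integrator. A minimal sketch of velocity Verlet for a single particle in a harmonic well (the spring force here is a stand-in for the many-body force fields real engines evaluate):

```python
# Velocity Verlet integration for one particle in a harmonic well.
def force(x, k=1.0):
    return -k * x  # harmonic restoring force, stand-in for a force field

def velocity_verlet(x, v, dt, steps, m=1.0):
    f = force(x)
    for _ in range(steps):
        x += v * dt + 0.5 * (f / m) * dt * dt  # position update
        f_new = force(x)                       # force at new position
        v += 0.5 * (f + f_new) / m * dt        # velocity update
        f = f_new
    return x, v

x, v = velocity_verlet(1.0, 0.0, 0.01, 1000)
# Total energy 0.5*v**2 + 0.5*x**2 stays ~0.5: the integrator's
# long-run energy conservation is why MD codes favor it.
```

Scaling this loop to millions of interacting atoms, millions of times per second, is what Anton's special-purpose hardware is built for.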
    Academic research discussed in the podcast episode:
    • Statistics of real-world illumination
    • The Role of Natural Image Statistics in Biological Motion Estimation
    • Surface reflectance recognition and real-world illumination statistics
    • Accuracy of velocity estimation by Reichardt correlators
    • Principles of Neural Design
    • Levinthal's paradox
    • Potassium channels
    • Structural and Thermodynamic Properties of Selective Ion Binding in a K+ Channel
    • Scalable Algorithms for Molecular Dynamics Simulations on Commodity Clusters
    • Long-timescale molecular dynamics simulations of protein structure and function
    • Parallel random numbers: as easy as 1, 2, 3
    • Biomolecular Simulation: A Computational Microscope for Molecular Biology
    • Anton 2: Raising the Bar for Performance and Programmability in a Special-Purpose Molecular Dynamics Supercomputer
    • Molecular Dynamics Simulation for All
    • Structural basis for nucleotide exchange in heterotrimeric G proteins
    • How GPCR Phosphorylation Patterns Orchestrate Arrestin-Mediated Signaling
    • Highly accurate protein structure prediction with AlphaFold
    • ATOM3D: Tasks on Molecules in Three Dimensions
    • Geometric deep learning of RNA structure

    • 59 min
    AI and Nanocomputing

    In this episode of Intel on AI host Amir Khosrowshahi, assisted by Dmitri Nikonov, talks with Jean Anne Incorvia about the use of new physics in nanocomputing, specifically with spintronic logic and 2D materials.
    Jean is an Assistant Professor and holds the Fellow of Advanced Micro Devices Chair in Computer Engineering in the Department of Electrical and Computer Engineering at The University of Texas at Austin, where she directs the Integrated Nano Computing Lab.
    Dmitri is a Principal Engineer in Components Research at Intel. He holds a Master of Science in Aeromechanical Engineering from the Moscow Institute of Physics and Technology and a Ph.D. from Texas A&M. Dmitri works on the discovery and simulation of nanoscale logic devices and manages joint research projects with multiple universities. He has authored dozens of research papers in the areas of quantum nanoelectronics, spintronics, and non-Boolean architectures.
    In the episode Jean talks about her background in condensed matter physics and solid-state electronics. She explains how magnetic properties and atomically thin materials, like graphene, can be leveraged at nanoscale for beyond-CMOS computing. Jean goes into detail about domain wall magnetic tunnel junctions and why such devices might have a lower energy cost than the modern process of encoding information in charge. She sees these new types of devices as compatible with CMOS computing and as part of a larger journey toward beyond-von Neumann architectures that will advance the evolution of artificial intelligence, neural networks, deep learning, machine learning, and neuromorphic computing.
    The episode closes with Jean, Amir, and Dmitri talking about the broadening definition of quantum computing, existential philosophy, and AI ethics.
    Academic research discussed in the podcast episode:
    • Being and Time
    • Cosmic microwave background radiation anisotropies: Their discovery and utilization
    • Nanotube Molecular Wires as Chemical Sensors
    • Visualization of exciton transport in ordered and disordered molecular solids
    • Nanoscale Magnetic Materials for Energy-Efficient Spin Based Transistors
    • Lateral Inhibition Pyramidal Neural Network for Image Classification
    • Magnetic domain wall neuron with lateral inhibition
    • Maximized Lateral Inhibition in Paired Magnetic Domain Wall Racetracks for Neuromorphic Computing
    • Domain wall-magnetic tunnel junction spin–orbit torque devices and circuits for in-memory computing
    • High-Speed CMOS-Free Purely Spintronic Asynchronous Recurrent Neural Network

    • 46 min
    Designing Molecules with AI

    In this episode of Intel on AI hosts Amir Khosrowshahi and Santiago Miret talk with Alán Aspuru-Guzik about the chemistry of computing and the future of materials discovery.
    Alán is a professor of chemistry and computer science at the University of Toronto, a Canada 150 Research Chair in theoretical chemistry, a CIFAR AI Chair at the Vector Institute, and a CIFAR Lebovic Fellow in the biology-inspired Solar Energy Program. Alán also holds a Google Industrial Research Chair in quantum computing and is the co-founder of two startups, Zapata Computing and Kebotix.
    Santiago Miret is an AI researcher at Intel Labs who has an active research collaboration with Alán. Santiago studies the intersection of AI and the sciences, as well as the algorithmic development of AI for real-world problems.
    In the first half of the episode, the three discuss accelerating molecular design and building next-generation functional materials. Alán talks about his academic background in high performance computing (HPC) that led him into the field of molecular design. He goes into detail about building a “self-driving lab” for scientific experimentation, which, coupled with advanced automation and robotics, he believes will help propel society beyond the era of plastics and into the era of materials by demand. Alán and Santiago talk about their research collaboration with Intel to build sophisticated model-based molecular design platforms that can scale to real-world challenges. Alán talks about the Acceleration Consortium and the need for standardization research to drive greater academic and industry collaborations for self-driving laboratories.
    In the second half of the episode, the three talk about quantum computing, including developing algorithms for quantum dynamics, molecular electronic structure, molecular properties, and more. Alán talks about how a simple algorithm, based on thinking of the quantum computer like a musical instrument, is behind the concept of the variational quantum eigensolver, which could yield promising advances when paired with classical computers. Amir and Santiago close the episode by talking about the future of research, including projects at DARPA, oscillatory computing, quantum machine learning, quantum autoencoders, and how young technologists entering the field can advance a more equitable society.
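The variational quantum eigensolver idea can be sketched classically: tune a parametrized trial state until its energy expectation reaches the Hamiltonian's ground energy. This toy uses a 2x2 Hamiltonian and evaluates the expectation directly; on real hardware it would be estimated from repeated measurements, with a classical optimizer in the loop:

```python
import math

# Toy 2x2 Hamiltonian; trial state |psi(theta)> = (cos t, sin t).
def energy(theta, h11=1.0, h22=-1.0, h12=0.5):
    c, s = math.cos(theta), math.sin(theta)
    return h11 * c * c + h22 * s * s + 2.0 * h12 * c * s

# Crude "classical optimizer": scan the single variational parameter.
best = min(energy(k * math.pi / 1800) for k in range(1800))
exact = -math.sqrt(1.0 + 0.5 ** 2)  # analytic ground energy of this H
print(best, exact)  # both ~ -1.118
```

For molecules the Hamiltonian acts on exponentially many amplitudes, which is where a quantum device preparing the trial state earns its keep.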
    Academic research discussed in the podcast episode:
    • The Hot Topic: What We Can Do About Global Warming
    • Energy, Transport, & the Environment
    • Scalable Quantum Simulation of Molecular Energies
    • The Harvard Clean Energy Project: Large-Scale Computational Screening and Design of Organic Photovoltaics on the World Community Grid
    • Automatic Chemical Design Using a Data-Driven Continuous Representation of Molecules
    • Optimizing Memory Placement using Evolutionary Graph Reinforcement Learning
    • Neuroevolution-Enhanced Multi-Objective Optimization for Mixed-Precision Quantization
    • Organic molecules with inverted gaps between first excited singlet and triplet states and appreciable fluorescence rates
    • Simulated Quantum Computation of Molecular Energies
    • Towards quantum chemistry on a quantum computer
    • Gerald McLean and Marcum Jung and others with the concept of the variational quantum eigensolver
    • Experimental investigation of performance differences between coherent Ising machines and a quantum annealer
    • Quantum autoencoders for efficient compression of quantum data

    • 56 min

Customer Reviews

4.9 out of 5
13 Ratings

LisaIsHereForIt ,

Incredible guests, fascinating AI discussion 💥

No matter the subject, you’re guaranteed to gain something from every episode - can’t recommend Intel on AI enough. 🙌

AColt ,

Not your typical corporate podcast.

The relaunch of this podcast is quite good. Interviews with extremely interesting guests, who discuss a wide range of topics and not just products or their brand or whatever. Looking forward to future episodes about how AI impacts society.
