98 episodes

Tune in as we dissect recent AI news, explore cutting-edge innovations, and sit down with influential voices shaping the future of AI. Whether you're a seasoned expert or just dipping your toes into the AI waters, our podcast is your go-to resource for staying informed and inspired.

Intel on AI
Intel Corporation

    • Technology

    Intel on AI - The future of AI models and how to choose the right one, with Nuri Cankaya

    Dive deep into the ever-evolving landscape of AI with Intel’s VP of AI Marketing, Nuri Cankaya, as he navigates the intricacies of cutting-edge AI models and their impact on businesses.

    • 54 min.
    Evolution, Technology, and the Brain

    In this episode of Intel on AI, host Amir Khosrowshahi talks with Jeff Lichtman about the evolution of technology and of mammalian brains.
    Jeff Lichtman is the Jeremy R. Knowles Professor of Molecular and Cellular Biology at Harvard. He received an AB from Bowdoin and an M.D. and Ph.D. from Washington University, where he worked for thirty years before moving to Cambridge. He is now a member of Harvard’s Center for Brain Science and director of the Lichtman Lab, which focuses on connectomics: mapping neural connections and understanding their development.
    In the podcast episode, Jeff talks about why researching the physical structure of the brain is so important to advancing science. He goes into detail about Brainbow, a method he and Joshua Sanes developed to illuminate and trace the “wires” (axons and dendrites) connecting neurons to each other. Amir and Jeff discuss how the academic rivalry between Santiago Ramón y Cajal and Camillo Golgi helped pioneer modern neuroscience research. Jeff describes his remarkable research taking nanometer-thin slices of brain tissue, creating high-resolution images, and then digitally reconstructing the cells and synapses to get a more complete picture of the brain. The episode closes with Jeff and Amir discussing theories about how the human brain learns and what technologists might discover from the grand challenge of mapping the entire nervous system.
    Academic research discussed in the podcast episode:
    Principles of Neural Development
    The reorganization of synaptic connexions in the rat submandibular ganglion during post-natal development
    Development of the neuromuscular junction: Genetic analysis in mice
    A technicolour approach to the connectome
    The big data challenges of connectomics
    Imaging Intracellular Fluorescent Proteins at Nanometer Resolution
    Stimulated emission depletion (STED) nanoscopy of a fluorescent protein-labeled organelle inside a living cell
    High-resolution, high-throughput imaging with a multibeam scanning electron microscope
    Saturated Reconstruction of a Volume of Neocortex
    A connectomic study of a petascale fragment of human cerebral cortex
    A Canonical Microcircuit for Neocortex

    • 1 hr. 2 min.
    Meta-Learning for Robots

    In this episode of Intel on AI, host Amir Khosrowshahi and co-host Mariano Phielipp talk with Chelsea Finn about machine learning research focused on giving robots the capability to develop intelligent behavior.
    Chelsea is an Assistant Professor of Computer Science and Electrical Engineering at Stanford University, whose Stanford IRIS (Intelligence through Robotic Interaction at Scale) lab is closely associated with the Stanford Artificial Intelligence Laboratory (SAIL). She received her Bachelor's degree in Electrical Engineering and Computer Science at MIT and her PhD in Computer Science at UC Berkeley, where she worked with Pieter Abbeel and Sergey Levine.
    In the podcast episode, Chelsea explains the difference between supervised learning and reinforcement learning. She goes into detail about the new kinds of reinforcement learning algorithms that can help robots learn more autonomously. Chelsea talks extensively about meta-learning, the concept of helping robots learn to learn, and her efforts to advance model-agnostic meta-learning (MAML). The episode closes with Chelsea and Mariano discussing the intersection of natural language processing and reinforcement learning. The three also talk about the future of robotics and artificial intelligence, including the complexity of setting up robotic reward functions for seemingly simple tasks.
    Academic research discussed in the podcast episode:
    Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
    Meta-Learning with Memory-Augmented Neural Networks
    Matching Networks for One Shot Learning
    Learning to Learn with Gradients
    Bayesian Model-Agnostic Meta-Learning
    Meta-Learning with Implicit Gradients
    Meta-Learning Without Memorization
    Efficiently Identifying Task Groupings for Multi-Task Learning
    Three scenarios for continual learning
    Dota 2 with Large Scale Deep Reinforcement Learning
    ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback
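    To give a flavor of the "learning to learn" idea discussed in the episode, here is a minimal, hypothetical sketch of first-order MAML on toy one-parameter regression tasks. This is not code from the episode or from Chelsea Finn's lab; the toy tasks, learning rates, and function names are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def task_batch(w, n=20):
    """Toy regression task: y = w * x with a task-specific slope w."""
    x = rng.uniform(-1, 1, n)
    return x, w * x

def loss_grad(theta, x, y):
    """Squared-error loss and its gradient for the scalar model y_hat = theta * x."""
    err = theta * x - y
    return np.mean(err ** 2), 2 * np.mean(err * x)

def fomaml(theta=1.5, inner_lr=0.1, outer_lr=0.05, steps=200):
    """First-order MAML sketch: take one adaptation (inner) gradient step per
    task, then move the shared initialization toward parameters that adapt
    well across tasks (outer step), using the first-order approximation
    that ignores second derivatives."""
    for _ in range(steps):
        meta_grad = 0.0
        tasks = rng.uniform(-2, 2, 4)           # sample a batch of task slopes
        for w in tasks:
            x, y = task_batch(w)
            _, g = loss_grad(theta, x, y)       # inner loop: one gradient step
            adapted = theta - inner_lr * g
            x2, y2 = task_batch(w)              # fresh data for the outer loss
            _, g2 = loss_grad(adapted, x2, y2)
            meta_grad += g2                     # first-order meta-gradient
        theta -= outer_lr * meta_grad / len(tasks)
    return theta
```

    With slopes drawn symmetrically around zero, the meta-learned initialization drifts toward a point from which a single gradient step can reach any task, which is the core intuition behind MAML.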

    • 40 min.
    AI, Social Media, and Political Influence

    In this episode of Intel on AI, host Amir Khosrowshahi talks with Joshua Tucker about using artificial intelligence to study the influence social media has on politics.
    Joshua is a professor of politics at New York University with affiliated appointments in the Department of Russian and Slavic Studies and the Center for Data Science. He is also the director of the Jordan Center for the Advanced Study of Russia and co-director of the Center for Social Media and Politics. He was a co-author and editor of an award-winning policy blog at The Washington Post and has published several books, most recently as co-editor of Social Media and Democracy: The State of the Field, Prospects for Reform from Cambridge University Press.
    In the podcast episode, Joshua discusses his background in researching mass political behavior, including the Colored Revolutions in Eastern Europe. He talks about how his field of study changed after working with his then PhD student Pablo Barberá (now a professor at the University of Southern California), who proposed a method for estimating people's partisanship from the social networks in which they have enmeshed themselves. Joshua describes the limitations researchers often face when trying to study data on various platforms, the challenges of big data, the use of NYU’s Greene HPC cluster, and the impact that the leak of the Facebook Papers had on the field. He also describes findings about people who are more prone to share material from fraudulent media organizations masquerading as news outlets, and how researchers like Rebekah Tromble (Director of the Institute for Data, Democracy and Politics at George Washington University) are working with government entities like the European Union on balancing public research with data privacy. The episode closes with Amir and Joshua discussing disinformation campaigns in the context of the Russo-Ukrainian War.
    Academic research discussed in the podcast episode:
    Birds of the Same Feather Tweet Together: Bayesian Ideal Point Estimation Using Twitter Data
    Tweeting From Left to Right: Is Online Political Communication More Than an Echo Chamber?

    • 33 min.
    Machine Learning and Molecular Simulation

    In this episode of Intel on AI, host Amir Khosrowshahi talks with Ron Dror about breakthroughs in computational biology and molecular simulation.
    Ron is an Associate Professor of Computer Science in the Stanford Artificial Intelligence Lab, leading a research group that uses machine learning and molecular simulation to elucidate biomolecular structure, dynamics, and function, and to guide the development of more effective medicines. Previously, Ron worked on the Anton supercomputer at D. E. Shaw Research after earning degrees in electrical engineering, computer science, biological sciences, and mathematics from MIT, Cambridge, and Rice. His groundbreaking research has been published in journals such as Science and Nature, presented at conferences like Neural Information Processing Systems (NeurIPS), and has won awards from the Association for Computing Machinery (ACM) and other organizations.
    In the podcast episode, Ron talks about his work with several important collaborators, his interdisciplinary approach to research, and how molecular modeling has improved over the years. He goes into detail about the generation-over-generation advancements in the Anton supercomputer, including its software, and his recent work at Stanford combining molecular dynamics simulations with machine learning. The podcast closes with Amir asking detailed questions about Ron and his team’s recent paper on RNA structure prediction, which was featured on the cover of Science.
    Academic research discussed in the podcast episode:
    Statistics of real-world illumination
    The Role of Natural Image Statistics in Biological Motion Estimation
    Surface reflectance recognition and real-world illumination statistics
    Accuracy of velocity estimation by Reichardt correlators
    Principles of Neural Design
    Levinthal's paradox
    Potassium channels
    Structural and Thermodynamic Properties of Selective Ion Binding in a K+ Channel
    Scalable Algorithms for Molecular Dynamics Simulations on Commodity Clusters
    Long-timescale molecular dynamics simulations of protein structure and function
    Parallel random numbers: as easy as 1, 2, 3
    Biomolecular Simulation: A Computational Microscope for Molecular Biology
    Anton 2: Raising the Bar for Performance and Programmability in a Special-Purpose Molecular Dynamics Supercomputer
    Molecular Dynamics Simulation for All
    Structural basis for nucleotide exchange in heterotrimeric G proteins
    How GPCR Phosphorylation Patterns Orchestrate Arrestin-Mediated Signaling
    Highly accurate protein structure prediction with AlphaFold
    ATOM3D: Tasks on Molecules in Three Dimensions
    Geometric deep learning of RNA structure

    • 59 min.
    AI and Nanocomputing

    In this episode of Intel on AI, host Amir Khosrowshahi, assisted by Dmitri Nikonov, talks with Jean Anne Incorvia about the use of new physics in nanocomputing, specifically spintronic logic and 2D materials.
    Jean is an Assistant Professor and holds the Fellow of Advanced Micro Devices Chair in Computer Engineering in the Department of Electrical and Computer Engineering at The University of Texas at Austin, where she directs the Integrated Nano Computing Lab.
    Dmitri is a Principal Engineer in Components Research at Intel. He holds a Master of Science in Aeromechanical Engineering from the Moscow Institute of Physics and Technology and a Ph.D. from Texas A&M. Dmitri works on the discovery and simulation of nanoscale logic devices and manages joint research projects with multiple universities. He has authored dozens of research papers in the areas of quantum nanoelectronics, spintronics, and non-Boolean architectures.
    In the episode, Jean talks about her background in condensed matter physics and solid-state electronics. She explains how magnetic properties and atomically thin materials, like graphene, can be leveraged at the nanoscale for beyond-CMOS computing. Jean goes into detail about domain wall magnetic tunnel junctions and why such devices might have a lower energy cost than the modern process of encoding information in charge. She sees these new types of devices as compatible with CMOS computing and as part of a larger journey toward beyond-von Neumann architectures that will advance the evolution of artificial intelligence, neural networks, deep learning, machine learning, and neuromorphic computing.
    The episode closes with Jean, Amir, and Dmitri talking about the broadening definition of quantum computing, existential philosophy, and AI ethics.
    Academic research discussed in the podcast episode:
    Being and Time
    Cosmic microwave background radiation anisotropies: Their discovery and utilization
    Nanotube Molecular Wires as Chemical Sensors
    Visualization of exciton transport in ordered and disordered molecular solids
    Nanoscale Magnetic Materials for Energy-Efficient Spin Based Transistors
    Lateral Inhibition Pyramidal Neural Network for Image Classification
    Magnetic domain wall neuron with lateral inhibition
    Maximized Lateral Inhibition in Paired Magnetic Domain Wall Racetracks for Neuromorphic Computing
    Domain wall-magnetic tunnel junction spin–orbit torque devices and circuits for in-memory computing
    High-Speed CMOS-Free Purely Spintronic Asynchronous Recurrent Neural Network

    • 46 min.
