702 episodes

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
Sam Charrington

    • Technology

Machine learning and artificial intelligence are dramatically changing the way businesses operate and people live. The TWIML AI Podcast brings the top minds and ideas from the world of ML and AI to a broad and influential community of ML/AI researchers, data scientists, engineers, and tech-savvy business and IT leaders. The show is hosted by Sam Charrington, a sought-after industry analyst, speaker, commentator, and thought leader. Technologies covered include machine learning, artificial intelligence, deep learning, natural language processing, neural networks, analytics, computer science, data science, and more.

    Powering AI with the World's Largest Computer Chip with Joel Hestness

    Today we're joined by Joel Hestness, principal research scientist and lead of the core machine learning team at Cerebras. We discuss Cerebras’ custom silicon for machine learning, Wafer Scale Engine 3, and how the latest version of the company’s single-chip platform for ML has evolved to support large language models. Joel shares how WSE3 differs from other AI hardware solutions, such as GPUs, TPUs, and AWS’ Inferentia, and talks through the homogeneous design of the WSE chip and its memory architecture. We discuss software support for the platform, including support for open-source ML frameworks like PyTorch, and support for different types of transformer-based models. Finally, Joel shares some of the research his team is pursuing to take advantage of the hardware's unique characteristics, including weight-sparse training, optimizers that leverage higher-order statistics, and more.

    The complete show notes for this episode can be found at twimlai.com/go/684.
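
    The episode notes don't detail Cerebras' method, but for a flavor of what weight-sparse training means in general, here is a minimal PyTorch sketch: prune the smallest-magnitude weights once, then re-apply the mask after every optimizer step so the pruned weights stay zero. This is purely illustrative and is not Cerebras' actual training recipe or WSE-specific code.

```python
import torch
import torch.nn as nn

# Illustrative sketch of static magnitude-based weight sparsity only.
torch.manual_seed(0)
model = nn.Linear(128, 64)
sparsity = 0.9  # fraction of weights forced to zero

with torch.no_grad():
    flat = model.weight.abs().flatten()
    threshold = flat.kthvalue(int(sparsity * flat.numel())).values
    mask = (model.weight.abs() > threshold).float()
    model.weight.mul_(mask)

opt = torch.optim.SGD(model.parameters(), lr=1e-2)
for _ in range(10):
    x = torch.randn(32, 128)
    loss = model(x).pow(2).mean()  # dummy objective for demonstration
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        model.weight.mul_(mask)  # keep pruned weights at zero

print(f"nonzero weight fraction: {(model.weight != 0).float().mean().item():.2f}")
```

    Part of the appeal of specialized hardware is turning zeros like these into skipped work rather than wasted multiplies.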

    • 55 min
    AI for Power & Energy with Laurent Boinot

    Today we're joined by Laurent Boinot, power and utilities lead for the Americas at Microsoft, to discuss the intersection of AI and energy infrastructure. We discuss the many challenges faced by current power systems in North America and the role AI is beginning to play in driving efficiencies in areas like demand forecasting and grid optimization. Laurent shares a variety of examples along the way, including some of the ways utility companies are using AI to ensure secure systems, interact with customers, navigate internal knowledge bases, and design electrical transmission systems. We also discuss the future of nuclear power, and why electric vehicles might play a critical role in American energy management.

    The complete show notes for this episode can be found at twimlai.com/go/683.
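
    As a rough illustration of the demand-forecasting use case (not any Microsoft product or model), here is a toy autoregressive forecast in Python; the synthetic sinusoidal load data and least-squares fit stand in for real utility telemetry and a production forecasting model.

```python
import numpy as np

# Hypothetical hourly load series: a daily cycle plus noise.
rng = np.random.default_rng(0)
hours = np.arange(24 * 60)
load = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)

# Autoregressive features: the previous 24 hours predict the next hour.
window = 24
X = np.stack([load[i:i + window] for i in range(load.size - window)])
y = load[window:]

# Ordinary least squares with an intercept column.
coef, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
forecast = np.r_[load[-window:], 1.0] @ coef
print(f"next-hour load forecast: {forecast:.1f} (arbitrary units)")
```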

    • 49 min
    Controlling Fusion Reactor Instability with Deep Reinforcement Learning with Aza Jalalvand

    Today we're joined by Azarakhsh (Aza) Jalalvand, a research scholar at Princeton University, to discuss his work using deep reinforcement learning to control plasma instabilities in nuclear fusion reactors. Aza explains how his team developed a model to detect and avoid a fatal plasma instability called ‘tearing mode’. Aza walks us through the process of collecting and pre-processing the complex diagnostic data from fusion experiments, training the models, and deploying the controller algorithm on the DIII-D fusion research reactor. He shares insights from developing the controller and discusses the future challenges and opportunities for AI in enabling stable and efficient fusion energy production.

    The complete show notes for this episode can be found at twimlai.com/go/682.
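
    To make the control loop concrete, here is a toy closed-loop sketch: a policy reads a scalar "instability proxy" and issues an actuation each step. The dynamics, the proportional placeholder policy, and the reward are all invented for illustration; the actual controller is a deep RL policy trained on DIII-D diagnostic data.

```python
import numpy as np

rng = np.random.default_rng(1)

def step(state, action):
    """Toy dynamics: the instability proxy drifts up; actuation pushes it down."""
    new_state = state + 0.05 - 0.1 * action + rng.normal(0, 0.01)
    reward = -abs(new_state)  # reward staying near the stable operating point
    return new_state, reward

def policy(state):
    """Placeholder proportional controller where a trained network would sit."""
    return float(np.clip(5.0 * state, -1.0, 1.0))

state, total_reward = 0.0, 0.0
for _ in range(200):
    action = policy(state)
    state, reward = step(state, action)
    total_reward += reward

print(f"return: {total_reward:.2f}, final proxy: {state:.3f}")
```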

    • 42 min
    GraphRAG: Knowledge Graphs for AI Applications with Kirk Marple

    Today we're joined by Kirk Marple, CEO and founder of Graphlit, to explore the emerging paradigm of "GraphRAG," or Graph Retrieval-Augmented Generation. In our conversation, Kirk digs into the GraphRAG architecture and how Graphlit uses it to offer a multi-stage workflow for ingesting, processing, retrieving, and generating content using LLMs (like GPT-4) and other generative AI tech. He shares how the system performs entity extraction to build a knowledge graph and how graph, vector, and object storage are integrated in the system. We dive into how the system uses “prompt compilation” to improve the results it gets from large language models during generation. We conclude by discussing several use cases the approach supports, as well as future agent-based applications it enables.

    The complete show notes for this episode can be found at twimlai.com/go/681.
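
    As a sketch of the GraphRAG pattern itself (not Graphlit's implementation or API), the toy below extracts entities from a question, pulls their neighborhood from a tiny in-memory knowledge graph, and compiles the retrieved facts into a prompt for an LLM. All names and data are hypothetical.

```python
# Toy GraphRAG pipeline: entity extraction -> graph retrieval -> prompt.
knowledge_graph = {
    "Graphlit": [("founded_by", "Kirk Marple"), ("category", "RAG platform")],
    "Kirk Marple": [("role", "CEO")],
}

def extract_entities(question: str) -> list[str]:
    # Stand-in for a real entity-extraction model: naive substring match.
    return [e for e in knowledge_graph if e.lower() in question.lower()]

def retrieve_facts(entities: list[str]) -> list[str]:
    return [f"{e} --{rel}--> {obj}"
            for e in entities for rel, obj in knowledge_graph[e]]

def compile_prompt(question: str) -> str:
    facts = "\n".join(retrieve_facts(extract_entities(question)))
    return f"Answer using only these facts:\n{facts}\n\nQuestion: {question}"

print(compile_prompt("Who founded Graphlit?"))
```

    In a real system the graph lookup would be paired with vector search, and the prompt-compilation step would budget tokens across the retrieved context.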

    • 47 min
    Teaching Large Language Models to Reason with Reinforcement Learning with Alex Havrilla

    Today we're joined by Alex Havrilla, a PhD student at Georgia Tech, to discuss "Teaching Large Language Models to Reason with Reinforcement Learning." Alex discusses the role of creativity and exploration in problem solving and explores the opportunities presented by applying reinforcement learning algorithms to the challenge of improving reasoning in large language models. Alex also shares his research on the effect of noise on language model training, highlighting the robustness of LLM architecture. Finally, we delve into the future of RL, and the potential of combining language models with traditional methods to achieve more robust AI reasoning.

    The complete show notes for this episode can be found at twimlai.com/go/680.
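
    Reduced to a toy, the core reinforcement idea looks like this: sample an answer, reward it only when it matches ground truth, and push the policy toward rewarded samples. The sketch below uses a bare categorical distribution in PyTorch in place of an LLM generating reasoning chains; it is a minimal REINFORCE example, not the paper's setup.

```python
import torch

torch.manual_seed(0)
logits = torch.zeros(4, requires_grad=True)  # stand-in policy over 4 answers
correct = 2                                  # index of the right answer
opt = torch.optim.Adam([logits], lr=0.1)

for _ in range(200):
    dist = torch.distributions.Categorical(logits=logits)
    answer = dist.sample()
    reward = 1.0 if answer.item() == correct else 0.0
    loss = -dist.log_prob(answer) * reward  # REINFORCE gradient estimator
    opt.zero_grad()
    loss.backward()
    opt.step()

print(logits.softmax(-1))  # mass should concentrate on the correct index
```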

    • 46 min
    Localizing and Editing Knowledge in LLMs with Peter Hase

    Today we're joined by Peter Hase, a fifth-year PhD student at the University of North Carolina NLP lab. We discuss "scalable oversight", and the importance of developing a deeper understanding of how large neural networks make decisions. We learn how matrices are probed by interpretability researchers, and explore the two schools of thought regarding how LLMs store knowledge. Finally, we discuss the importance of deleting sensitive information from model weights, and how "easy-to-hard generalization" could increase the risk of releasing open-source foundation models.

    The complete show notes for this episode can be found at twimlai.com/go/679.
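
    To illustrate the probing idea in miniature: fit a linear probe on hidden states to test whether a property is linearly decodable from them. The synthetic "activations" below stand in for real LLM hidden states, and nothing here reflects Peter's specific methodology beyond the general technique.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d_hidden, n = 64, 512

# Synthetic hidden states that encode a binary property along one direction.
direction = torch.randn(d_hidden)
states = torch.randn(n, d_hidden)
labels = (states @ direction > 0).float()

probe = nn.Linear(d_hidden, 1)
opt = torch.optim.Adam(probe.parameters(), lr=1e-2)
for _ in range(300):
    loss = nn.functional.binary_cross_entropy_with_logits(
        probe(states).squeeze(-1), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()

accuracy = ((probe(states).squeeze(-1) > 0).float() == labels).float().mean()
print(f"probe accuracy: {accuracy.item():.0%}")  # high => linearly decodable
```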

    • 49 min

Top Podcasts In Technology

Радио-Т
Umputun, Bobuk, Gray, Ksenks, Alek.sys
Lenny's Podcast: Product | Growth | Career
Lenny Rachitsky
Next Wave
Jason Moccia
TED Radio Hour
NPR
All-In with Chamath, Jason, Sacks & Friedberg
All-In Podcast, LLC
Lex Fridman Podcast
Lex Fridman

You Might Also Like

Practical AI: Machine Learning, Data Science
Changelog Media
Latent Space: The AI Engineer Podcast — Practitioners talking LLMs, CodeGen, Agents, Multimodality, AI UX, GPU Infra and al
Alessio + swyx
Super Data Science: ML & AI Podcast with Jon Krohn
Jon Krohn
The AI Podcast
NVIDIA
Data Skeptic
Kyle Polich
Machine Learning Street Talk (MLST)
Machine Learning Street Talk (MLST)