699 episodes

Machine learning and artificial intelligence are dramatically changing the way businesses operate and people live. The TWIML AI Podcast brings the top minds and ideas from the world of ML and AI to a broad and influential community of ML/AI researchers, data scientists, engineers, and tech-savvy business and IT leaders. The show is hosted by Sam Charrington, a sought-after industry analyst, speaker, commentator, and thought leader. Technologies covered include machine learning, artificial intelligence, deep learning, natural language processing, neural networks, analytics, computer science, data science, and more.

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence) Sam Charrington

    • Technology
    • 4.8 • 45 Ratings


    GraphRAG: Knowledge Graphs for AI Applications with Kirk Marple

    Today we're joined by Kirk Marple, CEO and founder of Graphlit, to explore the emerging paradigm of "GraphRAG," or Graph Retrieval Augmented Generation. In our conversation, Kirk digs into the GraphRAG architecture and how Graphlit uses it to offer a multi-stage workflow for ingesting, processing, retrieving, and generating content using LLMs (like GPT-4) and other Generative AI tech. He shares how the system performs entity extraction to build a knowledge graph and how graph, vector, and object storage are integrated in the system. We dive into how the system uses “prompt compilation” to improve the results it gets from Large Language Models during generation. We conclude by discussing several use cases the approach supports, as well as future agent-based applications it enables.

    The complete show notes for this episode can be found at twimlai.com/go/681.
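The multi-stage workflow described above (ingest, entity extraction, graph/vector/object storage, retrieval, prompt compilation) can be sketched in miniature. To be clear, this is not Graphlit's actual API: every class and function below is a hypothetical stand-in, with a regex standing in for real entity extraction and an entity co-occurrence graph standing in for production graph and vector stores.

```python
import re
from collections import defaultdict

def extract_entities(text):
    # Toy entity extraction: capitalized tokens stand in for a real NER step.
    return set(re.findall(r"\b[A-Z][a-zA-Z]+\b", text))

class ToyGraphRAG:
    def __init__(self):
        self.docs = []                  # object storage stand-in
        self.graph = defaultdict(set)   # knowledge graph: entity -> co-occurring entities
        self.index = defaultdict(set)   # entity -> doc ids (vector index stand-in)

    def ingest(self, text):
        doc_id = len(self.docs)
        self.docs.append(text)
        ents = extract_entities(text)
        for e in ents:
            self.index[e].add(doc_id)
            self.graph[e] |= ents - {e}  # link entities mentioned together
        return doc_id

    def retrieve(self, query, hops=1):
        # Seed with query entities, then expand over graph neighbors.
        frontier = extract_entities(query)
        for _ in range(hops):
            frontier |= {n for e in frontier for n in self.graph[e]}
        doc_ids = sorted({d for e in frontier for d in self.index[e]})
        return [self.docs[d] for d in doc_ids]

    def compile_prompt(self, query):
        # "Prompt compilation": pack retrieved context into a compact LLM prompt.
        context = "\n".join(f"- {d}" for d in self.retrieve(query))
        return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

rag = ToyGraphRAG()
rag.ingest("Graphlit ingests content and builds a knowledge graph.")
rag.ingest("Kirk founded Graphlit to support agent applications.")
print(rag.compile_prompt("Who founded Graphlit?"))
```

The point of the graph hop is that the second document is pulled in via the Kirk–Graphlit edge even when the query only mentions one of the two entities; in a real system the compiled prompt would then be handed to an LLM such as GPT-4 for generation.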

    • 47 min
    Teaching Large Language Models to Reason with Reinforcement Learning with Alex Havrilla

    Today we're joined by Alex Havrilla, a PhD student at Georgia Tech, to discuss "Teaching Large Language Models to Reason with Reinforcement Learning." Alex discusses the role of creativity and exploration in problem solving and explores the opportunities presented by applying reinforcement learning algorithms to the challenge of improving reasoning in large language models. Alex also shares his research on the effect of noise on language model training, highlighting the robustness of LLM architecture. Finally, we delve into the future of RL, and the potential of combining language models with traditional methods to achieve more robust AI reasoning.

    The complete show notes for this episode can be found at twimlai.com/go/680.

    • 46 min
    Localizing and Editing Knowledge in LLMs with Peter Hase

    Today we're joined by Peter Hase, a fifth-year PhD student at the University of North Carolina NLP lab. We discuss "scalable oversight", and the importance of developing a deeper understanding of how large neural networks make decisions. We learn how matrices are probed by interpretability researchers, and explore the two schools of thought regarding how LLMs store knowledge. Finally, we discuss the importance of deleting sensitive information from model weights, and how "easy-to-hard generalization" could increase the risk of releasing open-source foundation models.

    The complete show notes for this episode can be found at twimlai.com/go/679.

    • 49 min
    Coercing LLMs to Do and Reveal (Almost) Anything with Jonas Geiping

    Today we're joined by Jonas Geiping, a research group leader at the ELLIS Institute, to explore his paper: "Coercing LLMs to Do and Reveal (Almost) Anything". Jonas explains how neural networks can be exploited, highlighting the risk of deploying LLM agents that interact with the real world. We discuss the role of open models in enabling security research, the challenges of optimizing over certain constraints, and the ongoing difficulties in achieving robustness in neural networks. Finally, we delve into the future of AI security, and the need for a better approach to mitigate the risks posed by optimized adversarial attacks.

    The complete show notes for this episode can be found at twimlai.com/go/678.

    • 48 min
    V-JEPA, AI Reasoning from a Non-Generative Architecture with Mido Assran

    Today we’re joined by Mido Assran, a research scientist at Meta’s Fundamental AI Research (FAIR). In this conversation, we discuss V-JEPA, a new model being billed as “the next step in Yann LeCun's vision” for true artificial reasoning. V-JEPA, the video version of Meta’s Joint Embedding Predictive Architecture, aims to bridge the gap between human and machine intelligence by training models to learn abstract concepts in a more efficient predictive manner than generative models. V-JEPA uses a novel self-supervised training approach that allows it to learn from unlabeled video data without being distracted by pixel-level detail. Mido walks us through the process of developing the architecture and explains why it has the potential to revolutionize AI.

    The complete show notes for this episode can be found at twimlai.com/go/677.

    • 47 min
    Video as a Universal Interface for AI Reasoning with Sherry Yang

    Today we’re joined by Sherry Yang, senior research scientist at Google DeepMind and a PhD student at UC Berkeley. In this interview, we discuss her new paper, "Video as the New Language for Real-World Decision Making,” which explores how generative video models can play a role similar to language models as a way to solve tasks in the real world. Sherry draws the analogy between natural language as a unified representation of information and text prediction as a common task interface and demonstrates how video as a medium and generative video as a task exhibit similar properties. This formulation enables video generation models to play a variety of real-world roles as planners, agents, compute engines, and environment simulators. Finally, we explore UniSim, an interactive demo of Sherry's work and a preview of her vision for interacting with AI-generated environments.

    The complete show notes for this episode can be found at twimlai.com/go/676.

    • 49 min

Customer Reviews

4.8 out of 5
45 Ratings

wiggy woggy ,

Audio quality issues

Episode 570 has some significant audio issues, e.g. at 10:08 … otherwise a great episode and a great channel!!

Steve LC ,

Low quality promotional content

It’s just one-sided sponsored content.

ryanmark1867 ,

Simply the best ML podcast

I have a long commute so I listen to a lot of podcasts in a week. I have several ML podcasts on rotation, and TWiML&AI is simply the best. The topics are just right for the podcast format, neither too lightweight nor so technically gorpy that they are impossible to follow. The range and depth of guests is really astounding. Finally, Sam is a great interviewer. He asks thought-provoking questions and demonstrates the perfect balance for an ML podcast host - he knows what he is talking about but never comes across as a know-it-all. To top it all off, unlike some other ML podcasts, this one has proper, professional production. The sound is always clear, and that is really critical when you're listening in a noisy vehicle. In sum, this is a great podcast that has made a real difference to me. THANKS!

Top Podcasts In Technology

Lex Fridman Podcast
Lex Fridman
Acquired
Ben Gilbert and David Rosenthal
All-In with Chamath, Jason, Sacks & Friedberg
All-In Podcast, LLC
Dwarkesh Podcast
Dwarkesh Patel
Deep Questions with Cal Newport
Cal Newport
TED Radio Hour
NPR

You Might Also Like

Practical AI: Machine Learning, Data Science
Changelog Media
Latent Space: The AI Engineer Podcast — Practitioners talking LLMs, CodeGen, Agents, Multimodality, AI UX, GPU Infra and al
Alessio + swyx
Super Data Science: ML & AI Podcast with Jon Krohn
Jon Krohn
The AI Podcast
NVIDIA
Data Skeptic
Kyle Polich
Machine Learning Street Talk (MLST)
Machine Learning Street Talk (MLST)