
666 episodes

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
Sam Charrington
Technology
4.7 • 383 Ratings
Machine learning and artificial intelligence are dramatically changing the way businesses operate and people live. The TWIML AI Podcast brings the top minds and ideas from the world of ML and AI to a broad and influential community of ML/AI researchers, data scientists, engineers, and tech-savvy business and IT leaders. Hosted by Sam Charrington, a sought-after industry analyst, speaker, commentator, and thought leader. Technologies covered include machine learning, artificial intelligence, deep learning, natural language processing, neural networks, analytics, computer science, data science, and more.
Personalization for Text-to-Image Generative AI with Nataniel Ruiz
Today we’re joined by Nataniel Ruiz, a research scientist at Google. In our conversation with Nataniel, we discuss his recent work on personalization for text-to-image AI models. Specifically, we dig into DreamBooth, an algorithm that enables “subject-driven generation,” that is, the creation of personalized generative models from a small set of user-provided images of a subject. The personalized models can then be used to generate the subject in various contexts via a text prompt. Nataniel gives us a deep dive into the fine-tuning approach used in DreamBooth, the potential reasons behind the algorithm’s effectiveness, the challenges of fine-tuning diffusion models in this way, such as language drift, how the prior preservation loss technique avoids that pitfall, and the evaluation challenges and metrics used in DreamBooth. We also touch on his other recent papers, including SuTI, StyleDrop, HyperDreamBooth, and Platypus.
The complete show notes for this episode can be found at twimlai.com/go/648.
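For the curious, here is a minimal sketch of the prior preservation idea discussed in the episode: alongside the usual denoising loss on the handful of subject images, the fine-tune also penalizes drift on images the frozen model generates for the subject’s generic class. The `model.denoise` helper and batch arguments below are hypothetical stand-ins, not DreamBooth’s actual code.

```python
import torch.nn.functional as F

def dreambooth_step(model, subject_batch, class_prior_batch, prior_weight=1.0):
    # Standard diffusion denoising loss on the user's subject images.
    pred, target = model.denoise(subject_batch)  # hypothetical helper
    subject_loss = F.mse_loss(pred, target)

    # Prior preservation: the same loss on images the *frozen* model
    # generated for the generic class (e.g. "a dog"), which keeps the
    # class token's meaning from drifting during fine-tuning.
    prior_pred, prior_target = model.denoise(class_prior_batch)
    prior_loss = F.mse_loss(prior_pred, prior_target)

    return subject_loss + prior_weight * prior_loss
```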
Ensuring LLM Safety for Production Applications with Shreya Rajpal
Today we’re joined by Shreya Rajpal, founder and CEO of Guardrails AI. In our conversation with Shreya, we discuss ensuring the safety and reliability of language models for production applications. We explore the risks and challenges associated with these models, including different types of hallucinations and other LLM failure modes. We also talk about the susceptibility of the popular retrieval-augmented generation (RAG) technique to closed-domain hallucination, and how this challenge can be addressed. We then cover the need for robust evaluation metrics and tooling when building with large language models. Lastly, we explore Guardrails, an open-source project that provides a catalog of validators that run on top of language models to efficiently enforce correctness and reliability.
The complete show notes for this episode can be found at twimlai.com/go/647.
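As a rough illustration of the validator pattern discussed here (a generic sketch, not Guardrails’ actual API), the code below runs a list of checks over a model’s output and re-asks the model with the failure messages appended:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class CheckResult:
    passed: bool
    error: str = ""

def guarded_generate(generate: Callable[[str], str],
                     validators: list[Callable[[str], CheckResult]],
                     prompt: str,
                     max_retries: int = 2) -> str:
    """Validate LLM output; on failure, re-prompt with the errors.
    All names here are hypothetical, for illustration only."""
    attempt_prompt = prompt
    for _ in range(max_retries + 1):
        output = generate(attempt_prompt)
        errors = [r.error for v in validators if not (r := v(output)).passed]
        if not errors:
            return output
        attempt_prompt = (prompt
                          + "\n\nYour previous answer failed these checks:\n"
                          + "\n".join(errors)
                          + "\nPlease correct it.")
    raise ValueError("Output failed validation after retries: " + "; ".join(errors))
```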
What’s Next in LLM Reasoning? with Roland Memisevic
Today we’re joined by Roland Memisevic, a senior director at Qualcomm AI Research. In our conversation with Roland, we discuss the significance of language in humanlike AI systems and the advantages and limitations of autoregressive models like Transformers in building them. We cover the current and future role of recurrence in LLM reasoning and the significance of improving grounding in AI—including the potential of developing a sense of self in agents. Along the way, we discuss Fitness Ally, a fitness coach trained on a visually grounded large language model, which has served as a platform for Roland’s research into neural reasoning, as well as recent research that explores topics like visual grounding for large language models and state-augmented architectures for AI agents.
The complete show notes for this episode can be found at twimlai.com/go/646.
Is ChatGPT Getting Worse? with James Zou
Today we’re joined by James Zou, an assistant professor at Stanford University. In our conversation with James, we explore how ChatGPT’s behavior has changed over the last few months. We discuss the issues that can arise from inconsistencies in generative AI models, how he tested ChatGPT’s performance on various tasks, comparing the March 2023 and June 2023 versions of both GPT-3.5 and GPT-4, and the possible reasons behind the declining performance of these models. James also shares his thoughts on how surgical AI editing akin to CRISPR could potentially revolutionize LLMs and AI systems, and how adding monitoring tools can help track behavioral changes in these models. Finally, we discuss James’ recent paper on pathology image analysis using Twitter data, in which he explores the challenges of collecting large medical datasets and details the model’s architecture, training, and evaluation process.
The complete show notes for this episode can be found at twimlai.com/go/645.
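A minimal sketch of the kind of longitudinal evaluation James describes: freeze a task suite and a grading function, then score dated snapshots of the same model against them. The snapshot-calling functions and grader below are placeholders, not code from the study.

```python
from typing import Callable, Iterable

def snapshot_accuracy(tasks: Iterable[str],
                      answer: Callable[[str], str],
                      grade: Callable[[str, str], bool]) -> float:
    """Fraction of tasks a given model snapshot answers correctly."""
    tasks = list(tasks)
    return sum(grade(t, answer(t)) for t in tasks) / len(tasks)

# Hypothetical usage: `ask_gpt4_march` / `ask_gpt4_june` would wrap calls to
# the dated model versions; `check` verifies e.g. a prime-identification answer.
# drift = (snapshot_accuracy(tasks, ask_gpt4_june, check)
#          - snapshot_accuracy(tasks, ask_gpt4_march, check))
```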
Why Deep Networks and Brains Learn Similar Features with Sophia Sanborn
Today we’re joined by Sophia Sanborn, a postdoctoral scholar at the University of California, Santa Barbara. In our conversation with Sophia, we explore the concept of universality between neural representations and deep neural networks, and how principles of efficiency make it possible to find consistent features across networks and tasks. We also discuss her recent paper on Bispectral Neural Networks, which covers the Fourier transform and its relation to group theory, how the bispectrum can be used to achieve invariance in deep neural networks, how geometric deep learning extends the ideas behind CNNs to other domains, and the similarities in the fundamental structure of artificial and biological neural networks, where applying similar constraints leads their solutions to converge.
The complete show notes for this episode can be found at twimlai.com/go/644.
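To make the bispectrum discussion concrete, here is a small NumPy check (our own illustration, not code from the paper): both the power spectrum and the bispectrum are invariant to cyclic shifts of a signal, but the bispectrum also retains the phase relationships the power spectrum throws away.

```python
import numpy as np

def power_spectrum(x):
    return np.abs(np.fft.fft(x)) ** 2

def bispectrum(x):
    # B(f1, f2) = F(f1) * F(f2) * conj(F(f1 + f2)); the translation
    # phase factors cancel, so B is shift-invariant yet phase-aware.
    F = np.fft.fft(x)
    n = len(F)
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return F[i] * F[j] * np.conj(F[(i + j) % n])

x = np.random.randn(64)
x_shifted = np.roll(x, 7)  # cyclic translation of the same signal

assert np.allclose(power_spectrum(x), power_spectrum(x_shifted))
assert np.allclose(bispectrum(x), bispectrum(x_shifted))
```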
Inverse Reinforcement Learning Without RL with Gokul Swamy
Today we’re joined by Gokul Swamy, a Ph.D. student at the Robotics Institute at Carnegie Mellon University. In the final conversation of our ICML 2023 series, we sit down with Gokul to discuss his accepted papers at the event, leading off with “Inverse Reinforcement Learning without Reinforcement Learning.” In this paper, Gokul explores the challenges and benefits of inverse reinforcement learning and the potential and advantages it holds for various applications. Next up, we explore “Complementing a Policy with a Different Observation Space,” which applies causal inference techniques to accurately estimate sampling balance and make decisions based on limited observed features. Finally, we touch on “Learning Shared Safety Constraints from Multi-task Demonstrations,” which centers on learning safety constraints from demonstrations using an inverse reinforcement learning approach.
The complete show notes for this episode can be found at twimlai.com/go/643.
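For context on what “inverse RL without RL” is reacting to, here is a hedged sketch of the classic adversarial IRL template, whose inner loop solves a full RL problem on every round; the paper’s title refers to removing exactly that costly step. All callables below are hypothetical placeholders, not the paper’s method.

```python
from typing import Any, Callable

def adversarial_irl(expert_demos: Any,
                    solve_rl: Callable[[Any], Any],
                    fit_reward: Callable[[Any, Any], Any],
                    reward_init: Any,
                    rounds: int = 50):
    """Alternate between (1) expensively solving an RL problem against the
    current reward and (2) updating the reward to separate the expert's
    behavior from the learner's. A generic template for illustration."""
    reward, policy = reward_init, None
    for _ in range(rounds):
        policy = solve_rl(reward)                  # the costly inner RL solve
        reward = fit_reward(expert_demos, policy)  # score expert above learner
    return policy
```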
Customer Reviews
Clear and educational
I’m a total noob to these topics but an otherwise smart person, and this podcast strikes a perfect balance between technical and accessible. Looking forward to binging and following. Well done!
A premier podcast on AI/ML
I have enjoyed listening to many of the episodes and had fun participating in one.
Lots of potential but incompetent host
The guests are amazing and this could be such an amazing podcast for the ML community. Unfortunately, the host is both a poor conversationalist (interviews lack flow, feel disjointed and tortured), and comes to the interviews so poorly informed that he struggles to put follow up questions together or even understand what the guest is saying.