
649 episodes

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
Sam Charrington
Technology
Machine learning and artificial intelligence are dramatically changing the way businesses operate and people live. The TWIML AI Podcast brings the top minds and ideas from the world of ML and AI to a broad and influential community of ML/AI researchers, data scientists, engineers and tech-savvy business and IT leaders. Hosted by Sam Charrington, a sought-after industry analyst, speaker, commentator and thought leader. Technologies covered include machine learning, artificial intelligence, deep learning, natural language processing, neural networks, analytics, computer science, data science and more.
Towards Improved Transfer Learning with Hugo Larochelle
Today we’re joined by Hugo Larochelle, a research scientist at Google DeepMind. In our conversation with Hugo, we discuss his work on transfer learning, understanding the capabilities of deep learning models, and creating the Transactions on Machine Learning Research journal. We explore the use of large language models in NLP, prompting, and zero-shot learning. Hugo also shares insights from his research on neural knowledge mobilization for code completion and discusses the adaptive prompts used in that system.
The complete show notes for this episode can be found at twimlai.com/go/631.
Language Modeling With State Space Models with Dan Fu
Today we’re joined by Dan Fu, a PhD student at Stanford University. In our conversation with Dan, we discuss the limitations of state space models in language modeling and the search for alternative building blocks that can increase context length without becoming computationally infeasible. Dan walks us through the H3 architecture and the FlashAttention technique, which reduces a model’s memory footprint and makes fine-tuning feasible. We also explore his work on improving language models using synthetic languages, the issue of long sequence lengths affecting both training and inference, and the hope of finding something sub-quadratic that can handle language processing more effectively than the brute-force approach of attention.
The complete show notes for this episode can be found at https://twimlai.com/go/630.
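A quick aside on the context-length problem Dan describes: the small sketch below (our illustration, not from the episode) shows why the brute-force attention approach becomes infeasible at long sequence lengths, since the attention score matrix alone grows quadratically with sequence length. FlashAttention sidesteps this by computing attention block-wise without materializing the full matrix, and sub-quadratic building blocks such as H3's state space layers avoid the quadratic term entirely.

    # Illustrative only: memory needed just for one float32 attention score
    # matrix of shape (seq_len x seq_len), per head, in standard attention.
    for seq_len in (1_024, 8_192, 131_072):
        scores_mb = seq_len * seq_len * 4 / 1e6  # 4 bytes per float32 entry
        print(f"L = {seq_len:>7}: score matrix ~ {scores_mb:,.0f} MB")
    # Prints roughly 4 MB, 268 MB, and 68,719 MB (about 69 GB) respectively.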
Building Maps and Spatial Awareness in Blind AI Agents with Dhruv Batra
Today we continue our coverage of ICLR 2023 joined by Dhruv Batra, an associate professor at Georgia Tech and research director of the Fundamental AI Research (FAIR) team at Meta. In our conversation, we discuss Dhruv’s work on the paper Emergence of Maps in the Memories of Blind Navigation Agents, which won an Outstanding Paper Award at the event. We explore navigation with multi-layer LSTMs and the question of whether embodiment is necessary for intelligence. We delve into the Embodiment Hypothesis, the progress being made in language models, and the need for caution in their responsible use. We also discuss the history of AI and the importance of using the right datasets in training. The conversation explores the different meanings of "maps" across the AI and cognitive science fields, Dhruv’s experience with mapless navigation systems, and the early discovery stages of research into memory representation and neural mechanisms.
The complete show notes for this episode can be found at https://twimlai.com/go/629.
AI Agents and Data Integration with GPT and LLaMa with Jerry Liu
Today we’re joined by Jerry Liu, co-founder and CEO of LlamaIndex. In our conversation with Jerry, we explore the creation of LlamaIndex, a centralized interface for connecting your external data with the latest large language models. We discuss the challenges of adding private data to language models and how LlamaIndex connects the two for better decision-making. We also cover the role of agents in automation, the evolution of the agent abstraction space, and the difficulties of optimizing queries over large amounts of complex data. Finally, we touch on a range of topics, from combining summarization and semantic search, to automating reasoning, to improving language model results by exploiting relationships between nodes in data.
The complete show notes for this episode can be found at twimlai.com/go/628.
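To make the idea of connecting external data to an LLM more concrete, here is a minimal sketch following the LlamaIndex quickstart pattern from roughly the time of this episode. It is an illustration rather than anything discussed on the show: the ./data directory and the query string are placeholders, an OpenAI API key is assumed to be configured, and newer versions of the library use different import paths.

    # Index private documents, then answer questions over them with an LLM.
    from llama_index import SimpleDirectoryReader, VectorStoreIndex

    documents = SimpleDirectoryReader("./data").load_data()  # load local files
    index = VectorStoreIndex.from_documents(documents)       # embed and index them
    query_engine = index.as_query_engine()                   # retrieval + LLM synthesis
    response = query_engine.query("Summarize the key findings in these documents.")
    print(response)

The pattern mirrors the challenge described above: the index holds the private data, and the query engine retrieves the relevant nodes and hands them to the language model to ground its answer.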
Hyperparameter Optimization through Neural Network Partitioning with Christos Louizos
Today we kick off our coverage of the 2023 ICLR conference joined by Christos Louizos, an ML researcher at Qualcomm Technologies. In our conversation with Christos, we explore his paper Hyperparameter Optimization through Neural Network Partitioning and a few of his colleagues’ works from the conference. We discuss methods for speeding up attention mechanisms in transformers, scheduling operations for computation graphs, estimating channels in indoor environments, and adapting to distribution shifts at test time with neural network modules. We also talk through the benefits and limitations of federated learning, exploring sparse models, optimizing communication between servers and devices, and much more.
The complete show notes for this episode can be found at https://twimlai.com/go/627.
Are LLMs Overhyped or Underappreciated? with Marti Hearst
Today we’re joined by Marti Hearst, Professor at UC Berkeley. In our conversation with Marti, we explore the intricacies of AI language models, their usefulness in improving efficiency, and their potential for spreading misinformation. Marti expresses skepticism about whether these models truly have cognition comparable to the nuance of the human brain. We discuss the intersection of language and visualization and the need for specialized research to ensure safety and appropriateness for specific uses. We also delve into the latest tools and algorithms, such as Copilot and ChatGPT, which enhance programming and help identify comparisons, respectively. Finally, we discuss Marti’s long research history in search and her breakthrough in developing a standard interaction for finding items on websites and in library catalogs.
The complete show notes for this episode can be found at https://twimlai.com/go/626.