Machine learning and artificial intelligence are dramatically changing the way businesses operate and people live. The TWIML AI Podcast brings the top minds and ideas from the world of ML and AI to a broad and influential community of ML/AI researchers, data scientists, engineers and tech-savvy business and IT leaders.
The podcast is hosted by Sam Charrington, a sought-after industry analyst, speaker, commentator, and thought leader.
Technologies covered include machine learning, artificial intelligence, deep learning, natural language processing, neural networks, analytics, computer science, data science and more.
Dask + Data Science Careers with Jacqueline Nolis
Today we’re joined by Jacqueline Nolis, Head of Data Science at Saturn Cloud, and co-host of the Build a Career in Data Science Podcast.
You might remember Jacqueline from our Advancing Your Data Science Career During the Pandemic panel, where she shared her experience trying to navigate the suddenly hectic data science job market. Now, a year removed from that panel, we explore her book on data science careers, top insights for folks just getting into the field, ways that job seekers should be signaling that they have the required background, and how to approach and navigate failure as a data scientist.
We also spend quite a bit of time discussing Dask, an open-source library for parallel computing in Python, as well as use cases for the tool, the relationship between Dask, Kubernetes, and Docker containers, where data scientists stand in relation to the software development toolchain, and much more!
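For listeners unfamiliar with Dask, here is a minimal sketch of its core idea (a generic illustration, not an example from the episode): operations on chunked data build a lazy task graph, which Dask then executes in parallel across threads, processes, or cluster workers when you call `compute()`.

```python
import dask.array as da

# Create a large random array split into chunks; each chunk can be
# processed in parallel by Dask's scheduler.
x = da.random.random((4000, 4000), chunks=(1000, 1000))

# Arithmetic on Dask arrays is lazy: this only builds a task graph.
result = (x + x.T).mean()

# compute() triggers parallel execution of the graph.
print(result.compute())
```

Because the mean of two independent uniform(0, 1) draws is 1.0, the printed value will be very close to 1.0. The same lazy-graph model extends to Dask DataFrames and to distributed clusters, which is where the Kubernetes and Docker discussion in the episode comes in.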
The complete show notes for this episode can be found at https://twimlai.com/go/480.
Machine Learning for Equitable Healthcare Outcomes with Irene Chen
Today we’re joined by Irene Chen, a Ph.D. student at MIT.
Irene’s research is focused on developing new machine learning methods specifically for healthcare, through the lens of questions of equity and inclusion. In our conversation, we explore some of the various projects that Irene has worked on, including an early detection program for intimate partner violence.
We also discuss how she thinks about the long term implications of predictions in the healthcare domain, how she’s learned to communicate across the interface between the ML researcher and clinician, probabilistic approaches to machine learning for healthcare, and finally, key takeaways for those of you interested in this area of research.
The complete show notes for this episode can be found at https://twimlai.com/go/479.
AI Storytelling Systems with Mark Riedl
Today we’re joined by Mark Riedl, a Professor in the School of Interactive Computing at Georgia Tech. In our conversation with Mark, we explore his work building AI storytelling systems, mainly those that try to predict what listeners think will happen next in a story, and how he brings together many different threads of ML/AI to solve these problems. We discuss how theory of mind is layered into his research, the use of large language models like GPT-3, and his push toward generating suspenseful stories with these systems.
We also discuss the concept of intentional creativity and the lack of good theory on the subject, the adjacent areas in ML that he’s most excited about for their potential contribution to his research, his recent focus on model explainability, how he approaches problems of common sense, and much more!
The complete show notes for this episode can be found at https://twimlai.com/go/478.
Creating Robust Language Representations with Jamie Macbeth
Today we’re joined by Jamie Macbeth, an assistant professor in the department of computer science at Smith College.
In our conversation with Jamie, we explore his work at the intersection of cognitive systems and natural language understanding, and how to use AI as a vehicle for better understanding human intelligence. We discuss the tie that binds these domains together, whether the tasks are the same as traditional NLU tasks, and the specific things he’s trying to gain deeper insight into.
One of the unique aspects of Jamie’s research is that he takes an “old-school AI” approach, and to that end, we discuss the models he handcrafts to generate language. Finally, we examine how he evaluates the performance of his representations if he’s not playing the SOTA “game,” what he benchmarks against, how he identifies deficiencies in deep learning systems, and the exciting directions for his upcoming research.
The complete show notes for this episode can be found at https://twimlai.com/go/477.
Reinforcement Learning for Industrial AI with Pieter Abbeel
Today we’re joined by Pieter Abbeel, a Professor at UC Berkeley, co-Director of the Berkeley AI Research Lab (BAIR), as well as Co-founder and Chief Scientist at Covariant.
In our conversation with Pieter, we cover a ton of ground, starting with the specific goals and tasks of his work at Covariant, the shift in needs for industrial AI applications and robots, whether his experience solving real-world problems has changed his opinion on end-to-end deep learning, and the scope of the three problem domains of the models he’s building.
We also explore his recent work at the intersection of unsupervised and reinforcement learning, goal-directed RL, his recent paper “Pretrained Transformers as Universal Computation Engines” and where that research thread is headed, and of course, his new podcast Robot Brains, which you can find on all streaming platforms today!
The complete show notes for this episode can be found at https://twimlai.com/go/476.
AutoML for Natural Language Processing with Abhishek Thakur
Today we’re joined by Abhishek Thakur, a machine learning engineer at Hugging Face, and the world’s first Quadruple Kaggle Grandmaster!
In our conversation with Abhishek, we explore his Kaggle journey, including how his approach to competitions has evolved over time, what resources he used to prepare for his transition to a full-time practitioner, and the most important lessons he’s learned along the way.
We also spend a great deal of time discussing his new role at Hugging Face, where he’s building AutoNLP. We talk through the goals of the project, the primary problem domain, and how the results of AutoNLP compare with those from hand-crafted models. Finally, we discuss Abhishek’s book, Approaching (Almost) Any Machine Learning Problem.
The complete show notes for this episode can be found at https://twimlai.com/go/475.