66 episodes

Latent Space: The AI Engineer Podcast — Practitioners talking LLMs, CodeGen, Agents, Multimodality, AI UX, GPU Infra and all things Software 3.0, by Alessio + swyx

    • Technology
    • 4.8 • 43 Ratings

The podcast by and for AI Engineers! In 2023, over 1 million visitors came to Latent Space to hear about news, papers and interviews in Software 3.0.

We cover how Foundation Models are changing every domain, from Code Generation and Multimodality to AI Agents and GPU Infra, directly from the founders, builders, and thinkers pushing the cutting edge. We strive to give you everything from the definitive take on the Current Thing to your first introduction to the tech you'll be using in the next 3 months! We break news and run exclusive interviews with OpenAI, tiny (George Hotz), Databricks/MosaicML (Jon Frankle), Modular (Chris Lattner), Answer.ai (Jeremy Howard), et al.

Full show notes always on https://latent.space

www.latent.space

    High Agency Pydantic > VC Backed Frameworks — with Jason Liu of Instructor

    We are reuniting for the 2nd AI UX demo day in SF on Apr 28. Sign up to demo here!
    And don’t forget tickets for the AI Engineer World’s Fair — for early birds who join before keynote announcements!
    About a year ago there was a lot of buzz around prompt engineering techniques to force structured output. Our friend Simon Willison tweeted a bunch of tips and tricks, but the most iconic one was Riley Goodside making it a matter of life or death:
    Guardrails (friend of the pod and AI Engineer speaker), Marvin (AI Engineer speaker), and jsonformer had also come out at the time. In June 2023, Jason Liu (today’s guest!) open sourced his “OpenAI Function Call and Pydantic Integration Module”, now known as Instructor, which quickly turned prompt engineering black magic into a clean, developer-friendly SDK.
    A few months later, model providers started to add function calling capabilities to their APIs as well as structured outputs support like “JSON Mode”, which was announced at OpenAI Dev Day (see recap here).
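    For reference, here is a minimal sketch of what "JSON Mode" looks like in the OpenAI SDK (the model name is illustrative; the API still requires you to describe the desired keys in the prompt yourself):

```python
# Minimal sketch of OpenAI's JSON Mode (assumes the openai package >= 1.0 and
# OPENAI_API_KEY is set; the model choice is illustrative).
import json
from openai import OpenAI

client = OpenAI()
completion = client.chat.completions.create(
    model="gpt-4-turbo",
    response_format={"type": "json_object"},  # forces syntactically valid JSON
    messages=[
        {"role": "system", "content": "Reply with a JSON object with keys 'item' and 'price'."},
        {"role": "user", "content": "The oat milk latte was $4.50."},
    ],
)
data = json.loads(completion.choices[0].message.content)
print(data)  # e.g. {"item": "oat milk latte", "price": 4.5}
```

    Note that JSON Mode only guarantees syntactically valid JSON, not that the output matches any particular schema, which is part of why a library like Instructor remains useful.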
    In just a handful of months, we went from threatening to kill grandmas to first-class support from the research labs. And yet, Instructor was still downloaded 150,000 times last month. Why?
    What Instructor looks like
    Instructor patches your LLM provider SDKs to offer a new response_model option to which you can pass a structure defined in Pydantic. It currently supports OpenAI, Anthropic, Cohere, and a long tail of models through LiteLLM.
    What Instructor is for
    There are three core use cases to Instructor:
    * Extracting structured data: Taking an input like an image of a receipt and extracting structured data from it, such as a list of checkout items with their prices, fees, and coupon codes.
    * Extracting graphs: Identifying nodes and edges in a given input to extract complex entities and their relationships. For example, extracting relationships between characters in a story or dependencies between tasks.
    * Query understanding: Defining a schema for an API call and using a language model to resolve a request into a more complex one that an embedding could not handle. For example, creating date intervals from queries like “what was the latest thing that happened this week?” to then pass onto a RAG system or similar.
    Jason called all these different ways of getting data from LLMs “typed responses”: taking strings and turning them into data structures.
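    Here is a minimal sketch of the first use case (receipt extraction) using Instructor's response_model hook; the schema and model name are illustrative, and it assumes the instructor, openai, and pydantic packages are installed:

```python
# Minimal sketch of a "typed response" with Instructor (assumes instructor >= 1.0,
# openai, and pydantic are installed and OPENAI_API_KEY is set).
import instructor
from openai import OpenAI
from pydantic import BaseModel


class LineItem(BaseModel):
    name: str
    price: float


class Receipt(BaseModel):
    items: list[LineItem]
    total: float


# instructor patches the client so create() accepts a response_model
client = instructor.from_openai(OpenAI())

receipt = client.chat.completions.create(
    model="gpt-4-turbo",  # illustrative; any supported model works
    response_model=Receipt,
    messages=[{"role": "user", "content": "2x flat white at $4.00 each, croissant $3.25, total $11.25"}],
)
print(receipt.total)  # receipt is a validated Receipt instance, not a raw string
```

    Under the hood, failed validations can be retried by feeding the Pydantic error back to the model (the max_retries option), which is what turns the old prompt hacks into an ordinary retry loop.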
    Structured outputs as a planning tool
    The first wave of agents was all about open-ended iteration and planning, with projects like AutoGPT and BabyAGI. Models would come up with a possible list of steps, and start going down the list one by one. It’s really easy for them to go down the wrong branch, or get stuck on a single step with no way to intervene.
    What if these planning steps were returned to us as DAGs using structured output, and then managed as workflows? This also makes it much easier to train models on how to create these plans, as they are far more structured than a bullet-point list. Once you have this structure, each piece can be modified individually by different specialized models.
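    As a rough sketch of what that could look like (an illustrative schema, not Jason's exact implementation), a plan-as-DAG is just another typed response:

```python
# Illustrative plan-as-DAG schema: each task lists the ids of the tasks it
# depends on, so the execution order and parallelism are explicit.
from pydantic import BaseModel, Field


class Task(BaseModel):
    id: int
    description: str
    depends_on: list[int] = Field(default_factory=list)


class Plan(BaseModel):
    goal: str
    tasks: list[Task]

    def ready(self, done: set[int]) -> list[Task]:
        """Tasks whose dependencies have all been completed."""
        return [t for t in self.tasks if t.id not in done and set(t.depends_on) <= done]
```

    Passing Plan as the response_model gives you a graph you can inspect, edit, or hand off piece by piece to specialized models, instead of a free-form bullet list.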
    You can read some of Jason’s experiments here:
    While LLMs will keep improving (Llama3 just got released as we write this), having a consistent structure for the output will make it a lot easier to swap models in and out.
    Jason’s overall message on how we can move from ReAct loops to more controllable Agent workflows mirrors the “Process” discussion from our Elicit episode:

    Watch the talk
    As a bonus, here’s Jason’s talk from last year’s AI Engineer Summit. He’ll also be a speaker at this year’s AI Engineer World’s Fair!
    Timestamps
    * [00:00:00] Introductions
    * [00:02:23] Early experiments with Generative AI at StitchFix
    * [00:08:11] Design philosophy behind the Instructor library
    * [00:11:12] JSON Mode vs Function Calling
    * [00:12:30] Single vs parallel function calling
    * [00:14:00] How many functions is too many?
    * [00:17:39] How to evaluat

    • 52 min
    Supervise the Process of AI Research — with Jungwon Byun and Andreas Stuhlmüller of Elicit

    Maggie, Linus, Geoffrey, and the LS crew are reuniting for our second annual AI UX demo day in SF on Apr 28. Sign up to demo here! And don’t forget tickets for the AI Engineer World’s Fair — for early birds who join before keynote announcements!
    It’s become fashionable for many AI startups to project themselves as “the next Google” - while the search engine is so 2000s, both Perplexity and Exa referred to themselves as a “research engine” or “answer engine” in our NeurIPS pod. However, these searches tend to be relatively shallow, and it is challenging to zoom up and down the ladder of abstraction to garner insights. For serious researchers, this level of simple one-off search will not cut it.
    We’ve commented in our Jan 2024 Recap that Flow Engineering (simply: multi-turn processes over many-shot single prompts) seems to offer far more performance, control and reliability for a given cost budget. Our experiments with Devin and our understanding of the new Elicit Notebooks offer a glimpse into the potential for very deep, open-ended, thoughtful human-AI collaboration at scale.
    It starts with prompts
    When ChatGPT exploded in popularity in November 2022 everyone was turned into a prompt engineer. While generative models were good at "vibe based" outcomes (tell me a joke, write a poem, etc) with basic prompts, they struggled with more complex questions, especially in symbolic fields like math, logic, etc. Two of the most important "tricks" that people picked up on were:
    * Chain of Thought prompting, proposed by Wei et al in “Chain-of-Thought Prompting Elicits Reasoning in Large Language Models”. Rather than doing traditional few-shot prompting with just questions and answers, adding the thinking process that led to the answer resulted in much better outcomes.
    * Adding "Let's think step by step" to the prompt as a way to boost zero-shot reasoning, popularized by Kojima et al in the “Large Language Models are Zero-Shot Reasoners” paper from NeurIPS 2022. This bumped accuracy from 17% to 79% over plain zero-shot prompting. (Both tricks are sketched below.)
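    As a rough illustration of the two tricks (paraphrased and adapted from the papers' classic tennis-ball example, not their exact prompts):

```python
# Rough illustration of the two prompting tricks above (paraphrased, not the
# papers' exact prompts).
few_shot_cot = """Q: Roger has 5 tennis balls. He buys 2 more cans of 3 tennis balls each. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. 5 + 6 = 11. The answer is 11.

Q: {question}
A:"""

zero_shot_cot = "Q: {question}\nA: Let's think step by step."

print(zero_shot_cot.format(question="A jug holds 4 liters. How much do 3 jugs hold?"))
```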
    Nowadays, prompts include everything from promises of monetary rewards to… whatever the Nous folks are doing to turn a model into a world simulator. At the end of the day, the goal of prompt engineering is increasing accuracy, structure, and repeatability in the generation of a model.
    From prompts to agents
    As prompt engineering got more and more popular, agents (see “The Anatomy of Autonomy”) took over Twitter with cool demos, and AutoGPT became the fastest-growing repo in GitHub history. The thing about AutoGPT that fascinated people was the ability to simply put in an objective without worrying about explaining HOW to achieve it, or having to write very sophisticated prompts. The system would create an execution plan on its own, and then loop through each task.
    The problem with open-ended agents like AutoGPT is that 1) it’s hard to replicate the same workflow over and over again, and 2) there isn’t a way to hard-code specific steps that the agent should take without actually coding them yourself, which isn’t what most people want from a product.
    From agents to products
    Prompt engineering and open-ended agents were great in the experimentation phase, but this year more and more of these workflows are starting to become polished products.
    Today’s guests are Andreas Stuhlmüller and Jungwon Byun of Elicit (previously Ought), an AI research assistant that they think of as “the best place to understand what is known”.
    Ought was a non-profit, but last September, Elicit spun off into a PBC with a $9m seed round. It is hard to quantify how much a workflow can be improved, but Elicit boasts some impressive numbers for research assistants:
    Just four months after launch, Elicit crossed $1M ARR, which shows how much interest there is for AI products that just work.
    One of the main takeaways we had from the episode is how teams should focus on supervising the proce

    • 56 min
    Latent Space Chats: NLW (Four Wars, GPT5), Josh Albrecht/Ali Rohde (TNAI), Dylan Patel/Semianalysis (Groq), Milind Naphade (Nvidia GTC), Personal AI (ft. Harrison Chase — LangFriend/LangMem)

    Our next 2 big events are AI UX and the World’s Fair. Join and apply to speak/sponsor!
    Due to timing issues we didn’t have an interview episode to share with you this week, but not to worry, we have more than enough “weekend special” content in the backlog for you to get your Latent Space fix, whether you like thinking about the big picture, or learning more about the pod behind the scenes, or talking Groq and GPUs, or AI Leadership, or Personal AI.
    Enjoy!
    AI Breakdown
    The indefatigable NLW had us back on his show for an update on the Four Wars, covering Sora, Suno, and the reshaped GPT-4 Class Landscape:
    and a longer segment on AI Engineering trends covering the future LLM landscape (Llama 3, GPT-5, Gemini 2, Claude 4), Open Source Models (Mistral, Grok), Apple and Meta’s AI strategy, new chips (Groq, MatX) and the general movement from baby AGIs to vertical Agents:

    Thursday Nights in AI
    We’re also including swyx’s interview with Josh Albrecht and Ali Rohde to reintroduce swyx and Latent Space to a general audience, and engage in some spicy Q&A:

    Dylan Patel on Groq
    We hosted a private event with Dylan Patel of SemiAnalysis (our last pod here):
    Not all of it could be released so we just talked about our Groq estimates:

    Milind Naphade - Capital One
    In relation to conversations at NeurIPS and Nvidia GTC, and upcoming ones at the World’s Fair, we also enjoyed chatting with Milind Naphade about his AI Leadership work at IBM, Cisco, and Nvidia, and his current role leading the AI Foundations org at Capital One. We covered:
    * Milind’s learnings from ~25 years in machine learning
    * His first paper citation was 24 years ago
    * Lessons from working with Jensen Huang for 6 years and being CTO of Metropolis
    * Thoughts on relevant AI research
    * GTC takeaways and what makes NVIDIA special
    If you’d like to work on building solutions rather than platforms (as Milind put it), his Applied AI Research team at Capital One is hiring; it falls under the Capital One Tech team.
    Personal AI Meetup
    It all started with a meme:
    Within days of each other, BEE, FRIEND, EmilyAI, Compass, Nox and LangFriend all launched personal AI wearables and assistants. So we decided to put together the world’s first Personal AI meetup featuring creators and enthusiasts of wearables. The full video is live now, with full show notes within.


    Timestamps
    * [00:01:13] AI Breakdown Part 1
    * [00:02:20] Four Wars
    * [00:13:45] Sora
    * [00:15:12] Suno
    * [00:16:34] The GPT-4 Class Landscape
    * [00:17:03] Data War: Reddit x Google
    * [00:21:53] Gemini 1.5 vs Claude 3
    * [00:26:58] AI Breakdown Part 2
    * [00:27:33] Next Frontiers: Llama 3, GPT-5, Gemini 2, Claude 4
    * [00:31:11] Open Source Models - Mistral, Grok
    * [00:34:13] Apple MM1
    * [00:37:33] Meta's $800b AI rebrand
    * [00:39:20] AI Engineer landscape - from baby AGIs to vertical Agents
    * [00:47:28] Adept episode - Screen Multimodality
    * [00:48:54] Top Model Research from January Recap
    * [00:53:08] AI Wearables
    * [00:57:26] Groq vs Nvidia month - GPU Chip War
    * [01:00:31] Disagreements
    * [01:02:08] Summer 2024 Predictions
    * [01:04:18] Thursday Nights in AI - swyx
    * [01:33:34] Dylan Patel - Semianalysis + Latent Space Live Show
    * [01:34:58] Groq

    Transcript
    [00:00:00] swyx: Welcome to the Latent Space Podcast Weekend Edition. This is Charlie, your AI co host. Swyx and Alessio are off for the week, making more great content. We have exciting interviews coming up with Elicit, Chroma, Instructor, and our upcoming series on NSFW, Not Safe for Work AI. In today's episode, we're collating some of Swyx and Alessio's recent appearances, all in one place for you to find.
    [00:00:32] swyx: In part one, we have our first crossover pod of the year. In our listener survey, several folks asked for more thoughts from our two hosts. In 2023, Swyx and Alessio did crossover interviews with other great podcasts like the AI Breakdown, Practical AI, Cognitive Revolution, Thursday Eye, and Chinatalk, all of which you can find in the Latentspa

    • 2 hr 45 min
    Presenting the AI Engineer World's Fair — with Sam Schillace, Deputy CTO of Microsoft

    TL;DR: You can now buy tickets, apply to speak, or join the expo for the biggest AI Engineer event of 2024. We’re gathering *everyone* you want to meet - see you this June.
    In last year’s Rise of the AI Engineer essay we put our money where our mouth was and announced the AI Engineer Summit, which fortunately went well:
    With ~500 live attendees and over 500k views online, the first iteration of the AI Engineer industry affair seemed to be well received. Competing in an expensive city with 3 other more established AI conferences in the fall calendar, we broke through in terms of in-person experience and online impact.
    So at the end of Day 2 we announced our second event: the AI Engineer World’s Fair. The new website is now live, together with our new presenting sponsor:
    We were delighted to invite both Ben Dunphy, co-organizer of the conference, and Sam Schillace, the Deputy CTO of Microsoft who wrote some of the first Laws of AI Engineering while working with early releases of GPT-4, on the pod to talk about the conference and how Microsoft is all-in on AI Engineering.

    Rise of the Planet of the AI Engineer
    Since the first AI Engineer piece, AI Engineering has exploded:
    and the title has been adopted across OpenAI, Meta, IBM, and many, many other companies:
    1 year on, it is clear that AI Engineering is not only in full swing, but is an emerging global industry that is successfully bridging the gap:
    * between research and product,
    * between general-purpose foundation models and in-context use-cases,
    * and between the flashy weekend MVP (still great!) and the reliable, rigorously evaluated AI product deployed at massive scale, assisting hundreds of employees and driving millions in profit.
    The greatly increased scope of the 2024 AI Engineer World’s Fair (more stages, more talks, more speakers, more attendees, more expo…) helps us reflect the growth of AI Engineering in three major dimensions:
    * Global Representation: the 2023 Summit was a mostly-American affair. This year we plan to have speakers from top AI companies across five continents, and explore the vast diversity of approaches to AI across global contexts.
    * Topic Coverage:
    * In 2023, the Summit focused on the initial questions that the community wrestled with - LLM frameworks, RAG and Vector Databases, Code Copilots and AI Agents. Those are evergreen problems that just got deeper.
    * This year the AI Engineering field has also embraced new core disciplines with more explicit focus on Multimodality, Evals and Ops, Open Source Models and GPU/Inference Hardware providers.
    * Maturity/Production-readiness: Two new tracks are dedicated to AI in the enterprise, government, education, finance, and other highly regulated industries, as well as AI deployed at larger scale:
    * AI in the Fortune 500, covering at-scale production deployments of AI, and
    * AI Leadership, a closed-door, side event for technical AI leaders to discuss engineering and product leadership challenges as VPs and Heads of AI in their respective orgs.
    We hope you will join Microsoft and the rest of us as either speaker, exhibitor, or attendee, in San Francisco this June. Contact us with any enquiries that don’t fall into the categories mentioned below.

    Show Notes
    * Ben Dunphy
    * 2023 Summit
    * GitHub confirmed $100m ARR on stage
    * History of World’s Fairs
    * Sam Schillace
    * Writely on Acquired.fm
    * Early Lessons From GPT-4: The Schillace Laws
    * Semantic Kernel
    * Sam on Kevin Scott (Microsoft CTO)’s podcast in 2022
    * AI Engineer World’s Fair (SF, Jun 25-27)
    * Buy Super Early Bird tickets (Listeners can use LATENTSPACE for $100 off any ticket until April 8, or use GROUP if coming in 4 or more)
    * Submit talks and workshops for Speaker CFPs (by April 8)
    * Enquire about Expo Sponsorship (ASAP, selling fast)

    Timestamps
    * [00:00:16] Intro
    * [00:01:04] 2023 AI Engineer Summit
    * [00:03:11] Vendor Neutral
    * [00:05:33] 2024 AIE World's Fair
    * [00:07:34] AIE World's Fair: 9 Tracks
    * [00:08:58] AIE World's F

    • 42 min
    Why Google failed to make GPT-3 + why Multimodal Agents are the path to AGI — with David Luan of Adept

    Our next SF event is AI UX 2024 - let’s see the new frontier for UX since last year!
    Last call: we are recording a preview of the AI Engineer World’s Fair with swyx and Ben Dunphy, so send any questions you have about Speaker CFPs and Sponsor Guides!
    Alessio is now hiring engineers for a new startup he is incubating at Decibel: Ideal candidate is an “ex-technical co-founder type”. Reach out to him for more!
    David Luan has been at the center of the modern AI revolution: he was the ~30th hire at OpenAI, he led Google's LLM efforts and co-led Google Brain, and then started Adept in 2022, one of the leading companies in the AI agents space. In today's episode, we asked David for some war stories from his time in early OpenAI (including working with Alec Radford ahead of the GPT-2 demo with Sam Altman, that resulted in Microsoft’s initial $1b investment), and how Adept is building agents that can “do anything a human does on a computer" — his definition of useful AGI.

    Why Google *couldn’t* make GPT-3
    While we wanted to discuss Adept, we couldn’t talk to a former VP Eng of OpenAI and former LLM tech lead at Google Brain and not ask about the elephant in the room.
    It’s often asked how Google had such a huge lead in 2017, with Vaswani et al creating the Transformer and Noam Shazeer predicting trillion-parameter models, and yet it was David’s team at OpenAI that ended up making GPT 1/2/3.
    David has some interesting answers:
    “So I think the real story of GPT starts at Google, of course, right? Because that's where Transformers sort of came about. However, the number one shocking thing to me was that, and this is like a consequence of the way that Google is organized…what they (should) have done would be say, hey, Noam Shazeer, you're a brilliant guy. You know how to scale these things up. Here's half of all of our TPUs. And then I think they would have destroyed us. He clearly wanted it too…
    You know, every day we were scaling up GPT-3, I would wake up and just be stressed. And I was stressed because, you know, you just look at the facts, right? Google has all this compute. Google has all the people who invented all of these underlying technologies. There's a guy named Noam who's really smart, who's already gone and done this talk about how he wants a trillion parameter model. And I'm just like, we're probably just doing duplicative research to what he's doing. He's got this decoder only transformer that's probably going to get there before we do.
    And it turned out the whole time that they just couldn't get critical mass. So during my year where I led the Google LM effort and I was one of the brain leads, you know, it became really clear why. At the time, there was a thing called the Brain Credit Marketplace. Everyone's assigned a credit. So if you have a credit, you get to buy end chips according to supply and demand. So if you want to go do a giant job, you had to convince like 19 or 20 of your colleagues not to do work. And if that's how it works, it's really hard to get that bottom up critical mass to go scale these things. And the team at Google were fighting valiantly, but we were able to beat them simply because we took big swings and we focused.”

    Cloning HGI for AGI
    Human intelligence got to where it is today through evolution. Some argue that to get to AGI, we will approximate all the “FLOPs” that went into that process, an approach most famously mapped out by Ajeya Cotra’s Biological Anchors report:
    The early days of OpenAI were very reinforcement learning-driven with the Dota project, but that's a very inefficient way for these models to re-learn everything. (Kanjun from Imbue shared similar ideas in her episode).
    David argues that there’s a shortcut. We can bootstrap from existing intelligence.
    “Years ago, I had a debate with a Berkeley professor as to what will it actually take to build AGI. And his view is basically that you have to reproduce all the flops that went into evolution

    • 41 min
    Making Transformers Sing - with Mikey Shulman of Suno

    Giving computers a voice has always been at the center of sci-fi movies; “I’m sorry Dave, I’m afraid I can’t do that” wouldn’t hit as hard if it just appeared on screen as a terminal output, after all. The first electronic speech synthesizer, the Voder, was built at Bell Labs 85 years ago (1939!), and it’s… something:
    We will not cover the history of Text To Speech (TTS), but the evolution of the underlying architecture has generally been Formant Synthesis → Concatenative Synthesis → Neural Networks. Nowadays, state-of-the-art TTS is just one API call away with models like Eleven Labs and OpenAI’s TTS, or products like Descript. Latency is minimal, intonation is very good, and they can mimic a variety of accents. You can hack together your own voice AI therapist in a day!
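    As a concrete example of the “one API call” claim, here is a minimal sketch against OpenAI’s TTS endpoint (model and voice names as documented in early 2024; Eleven Labs and others expose similarly small APIs):

```python
# Minimal sketch: text to speech in one API call (assumes the openai package
# and OPENAI_API_KEY; model/voice names as documented in early 2024).
from openai import OpenAI

client = OpenAI()
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input="I'm sorry Dave, I'm afraid I can't do that.",
)
speech.stream_to_file("hal.mp3")  # write the returned audio to disk
```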
    But once you have a computer that can communicate via voice, what comes next? Singing🎶 of course!
    From Barking 🐶 to Singing 🎤
    Today’s guest is Suno’s CEO and co-founder Mikey Shulman. He and his three co-founders, Georg, Martin, and Keenan, previously worked together at Kensho. One of their projects was financially-focused speech recognition (think earnings calls, etc), but all four of them happened to be musicians and audiophiles. They started playing around with text to speech + AI + audio generation and eventually left Kensho to work on it full time.
    A lot of people when we started a company told us to focus on speech. If we wanted to build an audio company, everyone said, speech is a bigger market. But I think there's something about music that's just so human and you almost couldn't prevent us from doing it. Like we just couldn't keep ourselves from building music models and playing with them because it was so much fun.
    Their first big product was Bark, the first open source transformer-based “text-to-audio” model (architecturally inspired by Karpathy’s NanoGPT), which went from 0 to ~19,000 GitHub stars in a month. At the time they felt like audio was years behind text and image as a generation modality; unlike its predecessors, Bark could not only generate speech, but also music and sound effects like crying, laughing, sighing, etc. You can find a few examples here.
    The main limitation they saw was that text-to-speech training data is extremely limited. So what they did instead was build a new type of foundation model from scratch, trained on audio, and then tweak it to do text-to-speech. Turning audio into tokens for self-supervised learning was the most important innovation. Unlike TTS models, which are very narrow (and often sound unnatural), Bark was trained on real audio of real people from broad contexts, which made it harder to output unnatural-sounding speech.
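    For a sense of how simple the interface ended up being, here is roughly the usage documented in the suno-ai/bark repo (assumes the bark package and scipy are installed; generation is much faster on a GPU):

```python
# Roughly the usage documented in the suno-ai/bark repo (the prompt is made up;
# bracketed tags like [laughs] are the documented way to request sound effects).
from bark import SAMPLE_RATE, generate_audio, preload_models
from scipy.io.wavfile import write as write_wav

preload_models()  # downloads and caches the model weights on first run
audio = generate_audio("Hello, I'm Bark. [laughs] I can also hum a little tune.")
write_wav("bark_out.wav", SAMPLE_RATE, audio)
```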
    As Bark got popular, more and more people started using it to generate music and it became clear that their architecture would work to generate music that people enjoyed, even though it might not be "on the AGI path” of other labs:
    Everybody is so focused on LLMs, for good reason, and information processing and intelligence there. And I think it's way too easy to forget that there's this whole other side of things that makes people feel, and maybe that market is smaller, but it makes people feel and it makes us really happy.
    Suno bursts on the scene
    In December 2023, Suno went viral with a gorgeous new website and launch tweet:
    And rave reviews:
    Music is core to our culture, but very few people are able to create it; Mikey and team want to make everyone an active participant in music making, not just a listener. A “Midjourney of Music”, if you like.
    We definitely had a lot of fun playing with Suno to generate all sorts of Latent Space jingles and songs; the product is live at suno.ai if you want to get in the studio yourself!
    If Nas joined Latent Space instead of The Firm:
    182B models > Blink-182
    The soundtrack of the post-scarcity Latent Space ranch
    Scaling with Modal
    Given the December launch, scaling up for the Christmas rush was a

    • 52 min

Customer Reviews

4.8 out of 5
43 Ratings

Jessyyyy ml ,

IT’S LIT

This is hands down the most informative podcast for AI engineers. I always come here first for any new release - these guys are actual engineers.

sydescroll ,

An AI Fire Hose for my Thirsty Brain

I made my way here via Swyx’s episode on the freeCodeCamp pod. Thank you both so much for putting together a well-researched, thoughtful show. The foundational info in the AI Fundamentals episodes is especially valuable. Keep up the fantastic work!

AustinHale ,

Fantastic Must-Listen

Incredible podcast for coming up the curve and staying abreast of all the changes happening in the tech world right now. Really impressive lineup of thought leaders in the space. Grateful this exists!
