72 episodes

The podcast by and for AI Engineers! In 2023, over 1 million visitors came to Latent Space to hear about news, papers and interviews in Software 3.0.

We cover Foundation Models changing every domain in Code Generation, Multimodality, AI Agents, GPU Infra and more, directly from the founders, builders, and thinkers involved in pushing the cutting edge. We strive to give you both the definitive take on the Current Thing and the first introduction to the tech you'll be using in the next 3 months! We break news and run exclusive interviews with OpenAI, tiny (George Hotz), Databricks/MosaicML (Jonathan Frankle), Modular (Chris Lattner), Answer.ai (Jeremy Howard), et al.

Full show notes always on https://latent.space


Latent Space: The AI Engineer Podcast — Practitioners talking LLMs, CodeGen, Agents, Multimodality, AI UX, GPU Infra and more. By Alessio + swyx


    How AI is eating Finance — with Mike Conover of Brightwave

    In April 2023 we released an episode named “Mapping the future of *truly* open source models” to talk about Dolly, the first open, commercial LLM.
    Mike was leading the OSS models team at Databricks at the time. Today, Mike is back on the podcast to give us the “one year later” update on the evolution of large language models and how he’s been using them to build Brightwave, an AI research assistant for investment professionals.
    Today they are announcing a $6M seed round (led by Alessio and Decibel!), and sharing some of the learnings from serving customers with >$120B of assets under management in production in the last 4 months since launch.
    Losing faith in long context windows
    In our recent “Llama3 1M context window” episode we talked about the amazing progress we have made in context window size, but it’s good to remember that Dolly’s original context size was 1,024 tokens, and this was only 14 months ago.
    But while input context length has increased, models are still not able to generate very long answers. His empirical intuition (which matches ours while building smol-podcaster) is that most commercial LLMs, as well as Llama, tend to generate fairly short responses most of the time. While Needle in a Haystack tests will pass with flying colors at most context sizes, the granularity of the summary decreases as the context increases: the model tries to fit the answer into the same short output range rather than returning anything close to the 4,096-token max_output, for example.
    Recently Rob Mulla from Dreadnode highlighted how LMSys Arena results prefer longer responses by a large margin, so both LLMs and humans have a well-documented length bias which doesn’t necessarily track answer quality:
    The way Mike and team solved this is by breaking down the task in multiple subtasks, and then merging them back together. For example, have a book summarized chapter by chapter to preserve more details, and then put those summaries together. In Brightwave’s case, it’s creating multiple subsystems that accomplish different tasks on a large corpus of text separately, and then bringing them all together in a report. For example understanding intent of the question, extracting relations between companies, figuring out if it’s a positive / negative, etc.
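    Below is a minimal sketch of this divide-and-merge pattern, assuming a hypothetical llm() helper that wraps whatever model API you use (the prompts and helper are illustrative, not Brightwave's actual code):
    ```python
    from typing import List

    def llm(prompt: str) -> str:
        """Hypothetical helper: wrap your model API of choice here."""
        raise NotImplementedError

    def summarize_chapter(chapter: str) -> str:
        return llm(f"Summarize this chapter, preserving specific details:\n\n{chapter}")

    def summarize_book(chapters: List[str]) -> str:
        # Map: summarize each chapter separately, so no single call has to
        # compress the entire book into one short, bounded-length answer.
        summaries = [summarize_chapter(c) for c in chapters]
        # Reduce: merge the per-chapter summaries into one final report.
        joined = "\n\n".join(f"Chapter {i+1}: {s}" for i, s in enumerate(summaries))
        return llm(f"Combine these chapter summaries into one coherent summary:\n\n{joined}")
    ```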
    Mike’s question is whether or not we’ll be able to imbue better synthesis capabilities in the models: can you have synthesis-oriented demonstrations at training time rather than single token prediction?
    “LLMs as Judges” Strategies
    In our David Luan episode he mentioned they don’t use any benchmarks for their models, because the benchmarks don’t reflect their customer needs. Brightwave shared some tips on leveraging LLMs as Judges:
    * Human vs LLM reviews: while they work with human annotators to create high quality datasets, that data isn’t just used to fine tune models but also as a reference basis for future LLM reviews. Having a set of trusted data to use as calibration helps you trust the LLM judgement even more.
    * Ensemble consistency checking: rather than using an LLM as judge for one output, you use different LLMs to generate a result for the same task, and then use another LLM to highlight where those generations differ. Do the two outputs differ meaningfully? Do they have different beliefs about the implications of something? If there are a lot of discrepancies between generations coming from different models, you then do additional passes to try and resolve them. (A minimal sketch of this pattern follows this list.)
    * Entailment verification: for each unique insight that they generate, they take the output and separately ask LLMs to verify factuality of information based on the original sources. In the actual product, users can then highlight any piece of text and ask it to 1) “Tell Me More” 2) “Show Sources”. Since there’s no way to guarantee factuality of 100% of outputs, and humans have good intuition for things that look out of the ordinary, giving the user access to the review tool helps them catch anything that slipped through.
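    As a concrete illustration of the ensemble consistency check above, here is a hypothetical sketch, again assuming an llm() helper (this time taking a model name):
    ```python
    def llm(prompt: str, model: str) -> str:
        """Hypothetical helper: route the prompt to the named model's API."""
        raise NotImplementedError

    def ensemble_consistency_check(task: str) -> str:
        # Generate the same analysis with two different models...
        answer_a = llm(task, model="model-a")
        answer_b = llm(task, model="model-b")
        # ...then have a third model highlight meaningful discrepancies,
        # which can be resolved in additional passes.
        return llm(
            "Two analysts answered the same question. List every place where "
            "their answers differ meaningfully in facts or implications.\n\n"
            f"Question: {task}\n\nAnswer A: {answer_a}\n\nAnswer B: {answer_b}",
            model="judge-model",
        )
    ```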

    • 54 min
    ICLR 2024 — Best Papers & Talks (Benchmarks, Reasoning & Agents) — ft. Graham Neubig, Aman Sanger, Moritz Hardt

    Our second wave of speakers for AI Engineer World’s Fair was announced! The conference sold out of Platinum/Gold/Silver sponsors and Early Bird tickets! See our Microsoft episode for more info and buy now with code LATENTSPACE.
    This episode is straightforwardly a part 2 to our ICLR 2024 Part 1 episode, so without further ado, we’ll just get right on with it!

    Timestamps
    [00:03:43] Section A: Code Edits and Sandboxes, OpenDevin, and Academia vs Industry — ft. Graham Neubig and Aman Sanger
    * [00:07:44] WebArena
    * [00:18:45] Sotopia
    * [00:24:00] Performance Improving Code Edits
    * [00:29:39] OpenDevin
    * [00:47:40] Industry and Academia
    [01:05:29] Section B: Benchmarks
    * [01:05:52] SWEBench
    * [01:17:05] SWEBench/SWEAgent Interview
    * [01:27:40] Dataset Contamination Detection
    * [01:39:20] GAIA Benchmark
    * [01:49:18] Moritz Hardt - Science of Benchmarks
    [02:36:32] Section C: Reasoning and Post-Training
    * [02:37:41] Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection
    * [02:51:00] Let’s Verify Step By Step
    * [02:57:04] Noam Brown
    * [03:07:43] Lilian Weng - Towards Safe AGI
    * [03:36:56] A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis
    * [03:48:43] MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
    [04:00:51] Bonus: Notable Related Papers on LLM Capabilities

    Section A: Code Edits and Sandboxes, OpenDevin, and Academia vs Industry — ft. Graham Neubig and Aman Sanger
    * Guests
    * Graham Neubig
    * Aman Sanger - Previous guest and NeurIPS friend of the pod!
    * WebArena
    * Sotopia (spotlight paper, website)
    * Learning Performance-Improving Code Edits
    * OpenDevin
    * Junyang Opendevin
    * Morph Labs, Jesse Han
    * SWE-Bench
    * SWE-Agent
    * Aman tweet on swebench
    * LiteLLM
    * Livecodebench
    * the role of code in reasoning
    * Language Models of Code are Few-Shot Commonsense Learners
    * Industry vs academia
    * the matryoshka embeddings incident
    * other directions
    * Unlimiformer
    Section A timestamps
    * [00:00:00] Introduction to Guests and the Impromptu Nature of the Podcast
    * [00:00:45] Graham's Experience in Japan and Transition into Teaching NLP
    * [00:01:25] Discussion on What Constitutes a Good Experience for Students in NLP Courses
    * [00:02:22] The Relevance and Teaching of Older NLP Techniques Like Ngram Language Models
    * [00:03:38] Speculative Decoding and the Comeback of Ngram Models
    * [00:04:16] Introduction to WebArena and Sotopia Projects
    * [00:05:19] Deep Dive into the WebArena Project and Benchmarking
    * [00:08:17] Performance Improvements in WebArena Using GPT-4
    * [00:09:39] Human Performance on WebArena Tasks and Challenges in Evaluation
    * [00:11:04] Follow-up Work from WebArena and Focus on Web Browsing as a Benchmark
    * [00:12:11] Direct Interaction vs. Using APIs in Web-Based Tasks
    * [00:13:29] Challenges in Base Models for WebArena and the Potential of Visual Models
    * [00:15:33] Introduction to Sotopia and Exploring Social Interactions with Language Models
    * [00:16:29] Different Types of Social Situations Modeled in Sotopia
    * [00:17:34] Evaluation of Language Models in Social Simulations
    * [00:20:41] Introduction to Performance-Improving Code Edits Project
    * [00:26:28] Discussion on Devin and the Future of Coding Agents
    * [00:32:01] Planning in Coding Agents and the Development of OpenDevin
    * [00:38:34] The Changing Role of Academia in the Context of Large Language Models
    * [00:44:44] The Changing Nature of Industry and Academia Collaboration
    * [00:54:07] Update on NLP Course Syllabus and Teaching about Large Language Models
    * [01:00:40] Call to Action: Contributions to OpenDevin and Open Source AI Projects
    * [01:01:56] Hiring at Cursor for Roles in Code Generation and Assistive Coding
    * [01:02:12] Promotion of the AI Engineer Conference

    Section B: Benchmarks
    * Carlos Jimenez & John Yang (Princeton) et al: SWE-bench: Can Language Models Resolve Real-world Github Issues? (ICLR Oral, Paper, website)
    * “We introduce SWE-bench, an evaluation framework consisting of 2,294 software engineering problems drawn from real GitHub issues and corresponding pull requests across 12 popular Python repositories.”

    • 4 hrs 29 min
    How to train a Million Context LLM — with Mark Huang of Gradient.ai

    AI Engineer World’s Fair in SF! Prices go up soon.
    Note that there are 4 tracks per day and dozens of workshops/expo sessions; the livestream will air the most stacked speaker list/AI expo floor of 2024.
    Apply for free/discounted Diversity Program and Scholarship tickets here. We hope to make this the definitive technical conference for ALL AI engineers.
    Exactly a year ago, we declared the Beginning of Context=Infinity when Mosaic made their breakthrough training an 84k token context MPT-7B.

    A Brief History of Long Context
    Of course right when we released that episode, Anthropic fired the starting gun proper with the first 100k context window model from a frontier lab, spawning smol-developer and other explorations. In the last 6 months, the fight (and context lengths) has intensified another order of magnitude, kicking off the "Context Extension Campaigns" chapter of the Four Wars:
    * In October 2023, Claude's 100,000 token window was still SOTA (we still use it for Latent Space’s show notes to this day).
    * On November 6th, OpenAI launched GPT-4 Turbo with 128k context.
    * On November 21st, Anthropic fired back extending Claude 2.1 to 200k tokens.
    * Feb 15 (the day everyone launched everything) was Gemini's turn, announcing the first LLM with 1 million token context window.
    * In May 2024 at Google I/O, Gemini 1.5 Pro announced a 2M token context window.
    In parallel, open source/academia had to fight its own battle to keep up with the industrial cutting edge. Nous Research famously turned a reddit comment into YaRN, extending Llama 2 models to 128k context. So when Llama 3 dropped, the community was ready, and just weeks later, we had Llama3 with 4M+ context!
    A year ago we didn’t really have an industry standard way of measuring context utilization either: it’s all well and good to technically make an LLM generate non-garbage text at 1m tokens, but can you prove that the LLM actually retrieves and attends to information inside that long context? Greg Kamradt popularized the Needle In A Haystack chart which is now a necessary (if insufficient) benchmark — and it turns out we’ve solved that too in open source:
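    For reference, a Needle in a Haystack probe is only a few lines of code: hide a fact at a chosen depth inside a long filler document and check whether the model retrieves it. This is a toy sketch, assuming a hypothetical llm() helper and a rough 4-chars-per-token approximation:
    ```python
    def needle_in_haystack(llm, filler: str, context_tokens: int, depth: float) -> bool:
        needle = "The magic number for the audit is 74193."
        # Build a haystack of roughly `context_tokens` tokens (~4 chars/token).
        n_chars = context_tokens * 4
        haystack = (filler * (n_chars // len(filler) + 1))[:n_chars]
        # Bury the needle at the requested depth (0.0 = start, 1.0 = end).
        cut = int(len(haystack) * depth)
        doc = haystack[:cut] + "\n" + needle + "\n" + haystack[cut:]
        answer = llm(doc + "\n\nWhat is the magic number for the audit?")
        return "74193" in answer
    ```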
    Today's guest, Mark Huang, is the co-founder of Gradient, where they are building a full stack AI platform to power enterprise workflows and automations. They are also the team behind the first Llama3's 1M+ and 4M+ context window finetunes.
    Long Context Algorithms: RoPE, ALiBi, and Ring Attention
    Positional encodings allow the model to understand the relative position of tokens in the input sequence, present in what (upcoming guest!) Yi Tay affectionately calls the OG “Noam architecture”. But if we want to increase a model’s context length, these encodings need to gracefully extrapolate to longer sequences.
    ALiBi, used in models like MPT (see our "Context=Infinity" episode with the MPT leads, Jonathan Frankle and Abhinav), was one of the early approaches to this space. Instead of positional embeddings, it adds a linearly decreasing penalty to the attention score between two positions: the further apart two tokens are, the larger the penalty. This lets the context window stretch as it grows, but of course it isn't going to work for usecases that actually require global attention across a long context.
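    A rough numpy sketch of that penalty (real ALiBi uses a geometric series of per-head slopes; the single slope here is a simplification):
    ```python
    import numpy as np

    def alibi_bias(seq_len: int, m: float) -> np.ndarray:
        """Bias matrix added to attention logits before softmax."""
        pos = np.arange(seq_len)
        distance = pos[None, :] - pos[:, None]  # distance[i, j] = j - i
        # Past tokens (j <= i) get a penalty linear in their distance;
        # future tokens are masked out entirely (causal attention).
        return np.where(distance <= 0, m * distance, -np.inf)
    ```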
    In more recent architectures and finetunes, RoPE (Rotary Position Embedding) is more commonly used, and is what Llama3 itself uses. RoPE encodes positions with a rotation matrix, which empirically performs better for longer sequences.
    The main innovation from Gradient was to focus on tuning the theta hyperparameter that governs the frequency of the rotational encoding.
    Audio note: If you want the details, jump to 15:55 in the podcast (or scroll down to the transcript!)
    By carefully increasing theta as context length grew, they were able to scale Llama3 up to 1 million tokens and potentially beyond.
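    Here's a small numpy sketch of RoPE with theta exposed as that knob. For calibration, Llama 2 shipped with theta=10,000 and Llama 3 raised it to 500,000; the exact schedule Gradient used is discussed in the episode, so treat this as illustrative:
    ```python
    import numpy as np

    def rope_rotate(x: np.ndarray, position: int, theta: float = 10_000.0) -> np.ndarray:
        """Apply rotary position encoding to one query/key vector `x`."""
        dim = x.shape[-1]
        # Per-pair rotation frequencies; a larger theta means slower rotations,
        # which is what lets positions extrapolate to longer contexts.
        freqs = 1.0 / (theta ** (np.arange(0, dim, 2) / dim))
        angles = position * freqs
        cos, sin = np.cos(angles), np.sin(angles)
        x1, x2 = x[..., 0::2], x[..., 1::2]
        out = np.empty_like(x, dtype=float)
        out[..., 0::2] = x1 * cos - x2 * sin
        out[..., 1::2] = x1 * sin + x2 * cos
        return out
    ```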
    Once you've scaled positional embeddings, there's still the issue of attention's quadratic complexity, and how longer and longer contexts blow up compute and memory at training time; this is where distributed approaches like Ring Attention come in.

    • 57 min
    ICLR 2024 — Best Papers & Talks (ImageGen, Vision, Transformers, State Space Models) ft. Durk Kingma, Christian Szegedy, Ilya Sutskever

    Speakers for AI Engineer World’s Fair have been announced! See our Microsoft episode for more info and buy now with code LATENTSPACE — we’ve been studying the best ML research conferences so we can make the best AI industry conf!
    Note that this year there are 4 main tracks per day and dozens of workshops/expo sessions; the free livestream will air much less than half of the content this time.
    Apply for free/discounted Diversity Program and Scholarship tickets here. We hope to make this the definitive technical conference for ALL AI engineers.
    UPDATE: This is a 2 part episode - see Part 2 here.
    ICLR 2024 took place from May 6-11 in Vienna, Austria.
    Just like we did for our extremely popular NeurIPS 2023 coverage, we decided to pay for the $900 ticket (thanks to all of you paying supporters!) and brave the 18 hour flight and 5 day grind to go on behalf of all of you. We now present the results of that work!
    This ICLR was the biggest one by far, with a marked change in the excitement trajectory for the conference:
    Of the 2,260 accepted papers (31% acceptance rate), within the subset relevant to our shortlist of AI Engineering Topics we found many, many LLM reasoning and agent related papers, which we will cover in the next episode. We will spend this episode on 14 papers covering the other relevant ICLR topics, as below.
    As we did last year, we’ll start with the Best Paper Awards. Unlike last year, we now group our paper selections by subjective topic area, and mix in both Outstanding Paper talks as well as editorially selected poster sessions. Where we were able to do a poster session interview, please scroll to the relevant show notes for images of their poster for discussion. To cap things off, Chris Ré’s spot from last year now goes to Sasha Rush for the obligatory last word on the development and applications of State Space Models.
    We had a blast at ICLR 2024 and you can bet that we’ll be back in 2025 🇸🇬.
    Timestamps and Overview of Papers
    [00:02:49] Section A: ImageGen, Compression, Adversarial Attacks
    * [00:02:49] VAEs
    * [00:32:36] Würstchen: An Efficient Architecture for Large-Scale Text-to-Image Diffusion Models
    * [00:37:25] The Hidden Language Of Diffusion Models
    * [00:48:40] Ilya on Compression
    * [01:01:45] Christian Szegedy on Compression
    * [01:07:34] Intriguing properties of neural networks

    [01:26:07] Section B: Vision Learning and Weak Supervision
    * [01:26:45] Vision Transformers Need Registers
    * [01:38:27] Think before you speak: Training Language Models With Pause Tokens
    * [01:47:06] Towards a statistical theory of data selection under weak supervision
    * [02:00:32] Is ImageNet worth 1 video?

    [02:06:32] Section C: Extending Transformers and Attention
    * [02:06:49] LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models
    * [02:15:12] YaRN: Efficient Context Window Extension of Large Language Models
    * [02:32:02] Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs
    * [02:44:57] ZeRO++: Extremely Efficient Collective Communication for Giant Model Training

    [02:54:26] Section D: State Space Models vs Transformers
    * [03:31:15] Never Train from Scratch: Fair Comparison of Long-Sequence Models Requires Data-Driven Priors
    * [03:37:08] End of Part 1

    A: ImageGen, Compression, Adversarial Attacks
    * Durk Kingma (OpenAI/Google DeepMind) & Max Welling: Auto-Encoding Variational Bayes (Full ICLR talk)
    * Preliminary resources: Understanding VAEs, CodeEmporium, Arxiv Insights
    * Inaugural ICLR Test of Time Award! “Probabilistic modeling is one of the most fundamental ways in which we reason about the world. This paper spearheaded the integration of deep learning with scalable probabilistic inference (amortized mean-field variational inference via a so-called reparameterization trick), giving rise to the Variational Autoencoder (VAE).”
    * Pablo Pernías (Stability) et al: Würstchen: An Efficient Architecture for Large-Scale Text-to-Image Diffusion Models (ICLR oral, poster)

    • 3 hrs 38 min
    Emulating Humans with NSFW Chatbots - with Jesse Silver

    Disclaimer: today’s episode touches on NSFW topics. There’s no graphic content or explicit language, but we wouldn’t recommend blasting this in work environments.
    Product website: https://usewhisper.me/
    For over 20 years it’s been an open secret that porn drives many new consumer technology innovations, from VHS and Pay-per-view to VR and the Internet. It’s been no different in AI - many of the most elite Stable Diffusion and Llama enjoyers, and their merging/prompting/PEFT techniques, were born in the depths of subreddits and 4chan boards affectionately described by a friend of the pod as The Waifu Research Department. However this topic is very under-covered in mainstream AI media because of its taboo nature.
    That changes today, thanks to our new guest Jesse Silver.
    The AI Waifu Explosion
    In 2023, the Valley’s worst kept secret was how much the growth and incredible retention of products like Character.ai & co was being boosted by “ai waifus” (not sure what the “husband” equivalent is, but those too!).
    And we can look at subreddit growth as a proxy for the general category explosion (10x’ed in the last 8 months of 2023):
    While all the B2B founders were trying to get models to return JSON, the consumer applications made these chatbots extremely engaging and figured out how to make them follow their instructions and “personas” very well, with the greatest level of scrutiny and most demanding long context requirements. Some of them, like Replika, make over $50M/year in revenue, and this is even after their controversial update deprecating Erotic Roleplay (ERP).
    A couple of days ago, OpenAI announced GPT-4o (see our AI News recap) and the live voice demos were clearly inspired by the movie Her.
    The Latent Space Discord did a watch party and both there and on X a ton of folks were joking at how flirtatious the model was, which to be fair was disturbing to many:

    From Waifus to Fan Platforms
    Whereas waifus are known by their human users to be explicitly AI chatbots, the other, much more challenging end of the NSFW AI market is run by AIs successfully (plausibly) emulating a specific human personality for chat and ecommerce.
    You might have heard of fan platforms like OnlyFans. Users can pay for a subscription to a creator to get access to private content, similarly to Patreon and the like, but without any NSFW restrictions or other content policies. In 2023, OnlyFans had over $1.1B of revenue (on $5.6B of GMV).
    The status quo today is that a lot of the creators outsource their chatting with fans to teams in the Philippines and other lower cost countries for ~$3/hr + 5% commission, but with very poor quality - most creators have fired multiple teams for poor service.
    Today’s episode is with Jesse Silver; along with his co-founder Adam Scrivener, they run a SaaS platform that helps creators from fan platforms build AI chatbots for their fans to chat with, including selling from an inventory of digital content. Some users generate over $200,000/mo in revenue.
    We talked a lot about their tech stack, why you need a state machine to successfully run multi-thousand-turn conversations, how they develop prompts and fine-tune models with DSPy, the NSFW limitations of commercial models, but one of the most interesting points is that often users know that they are not talking to a person, but choose to ignore it. As Jesse put it, the job of the chatbot is “keep their disbelief suspended”.
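    To make the state machine point concrete, here is a hypothetical sketch; the states, intent labels, and classify() helper are all illustrative, not their actual design:
    ```python
    from enum import Enum, auto

    class ChatState(Enum):
        RAPPORT = auto()  # small talk, build the persona's connection
        TEASE = auto()    # hint at available content
        OFFER = auto()    # present a specific item from the inventory
        CLOSE = auto()    # handle the actual sale

    def next_state(state: ChatState, user_msg: str, classify) -> ChatState:
        # classify() is an assumed intent classifier (e.g. a small LLM call).
        intent = classify(user_msg)
        transitions = {
            (ChatState.RAPPORT, "buying_signal"): ChatState.TEASE,
            (ChatState.TEASE, "buying_signal"): ChatState.OFFER,
            (ChatState.OFFER, "accepts_offer"): ChatState.CLOSE,
            (ChatState.OFFER, "declines"): ChatState.RAPPORT,
        }
        return transitions.get((state, intent), state)
    ```
    The payoff is that each state can carry its own prompt and a short window of recent turns, instead of replaying thousands of turns of history on every model call.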
    There’s real money at stake (selling high priced content, at hundreds of dollars per day per customer). In December the story of the $1 Chevy Tahoe went viral due to a poorly implemented chatbot:
    Now imagine having to run ecommerce chatbots for a potentially $1-4b total addressable market. That’s what these NSFW AI pioneers are already doing today.

    Show Notes
    For obvious reasons, we cannot link to many of the things that were mentioned :)
    * Jesse on X
    * Character AI
    * DSPy
    Chapters
    * [00:00:00] Intros
    * [00:00:24] Building NSFW AI chatbots

    • 54 min
    WebSim, WorldSim, and The Summer of Simulative AI — with Joscha Bach of Liquid AI, Karan Malhotra of Nous Research, Rob Haisfield of WebSim.ai

    We are 200 people over our 300-person venue capacity for AI UX 2024, but you can subscribe to our YouTube for the video recaps.
    Our next event, and largest EVER, is the AI Engineer World’s Fair. See you there!
    Parental advisory: Adult language used in the first 10 mins of this podcast.
    Any accounting of Generative AI that ends with RAG as its “final form” is seriously lacking in imagination and missing out on its full potential. While AI generation is very good for “spicy autocomplete” and “reasoning and retrieval with in context learning”, there’s a lot of untapped potential for simulative AI in exploring the latent space of multiverses adjacent to ours.
    GANs
    Many research scientists credit the 2017 Transformer for the modern foundation model revolution, but for many artists the origin of “generative AI” traces a little further back to the Generative Adversarial Networks proposed by Ian Goodfellow in 2014, spawning an army of variants and Cats and People that do not exist:
    We can directly visualize the quality improvement in the decade since:

    GPT-2
    Of course, more recently, generative text AI started claiming headlines in 2019 for being “too dangerous to release”. AI Dungeon was the first to put GPT-2 to a purely creative use, replacing human dungeon masters and DnD/MUD games of yore.
    More recent gamelike work like the Generative Agents (aka Smallville) paper keep exploring the potential of simulative AI for game experiences.

    ChatGPT
    Not long after ChatGPT broke the Internet, one of the most fascinating generative AI finds was Jonas Degrave (of Deepmind!)’s Building A Virtual Machine Inside ChatGPT:
    The open-ended interactivity of ChatGPT and all its successors enabled an “open world” type simulation where “hallucination” is a feature and a gift to dance with, rather than a nasty bug to be stamped out. However, further updates to ChatGPT seemed to “nerf” the model’s ability to perform creative simulations, particularly with the deprecation of the `completion` mode of APIs in favor of `chatCompletion`.
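    For those who never used the older mode, the difference looks like this with the OpenAI Python SDK (model names current as of mid-2024; gpt-3.5-turbo-instruct is one of the few remaining completion-mode models):
    ```python
    from openai import OpenAI

    client = OpenAI()

    # Legacy completion mode: the model free-runs from any prefix you give it.
    raw = client.completions.create(
        model="gpt-3.5-turbo-instruct",
        prompt="root@simulation:~$ ls /universes\n",
        max_tokens=100,
    )
    print(raw.choices[0].text)

    # Chat completion mode: everything is wrapped in conversational turns,
    # which constrains the open-ended simulation style described above.
    chat = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Act as a Linux terminal. ls /universes"}],
    )
    print(chat.choices[0].message.content)
    ```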

    WorldSim (https://worldsim.nousresearch.com/)
    It is with this context we explain WorldSim and WebSim. We recommend you watch the WorldSim demo video on our YouTube for the best context, but basically if you are a developer it is a Claude prompt that is a portal into another world of your own choosing, that you can navigate with bash commands that you make up.
    The live video demo was highly enjoyable:
    Why Claude? Hints from Amanda Askell on the Claude 3 system prompt gave some inspiration, and subsequent discoveries that Claude 3 is "less nerfed” than GPT 4 Turbo turned the growing Simulative AI community into Anthropic stans.
    WebSim (https://websim.ai/)
    This was a one day hackathon project inspired by WorldSim that should have won:
    In short, you type in a URL that you made up, and Claude 3 does its level best to generate a webpage that doesn’t exist, that would fit your URL. All form POST requests are intercepted and responded to, and all links lead to even more webpages, that don’t exist, that are generated when you make them. All pages are cachable, modifiable and regeneratable - see WebSim for Beginners and Advanced Guide.
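    A toy version of this loop fits in a page of Flask. This is a sketch, not WebSim's actual code; the model name and prompt are illustrative, and you would need an ANTHROPIC_API_KEY set:
    ```python
    from flask import Flask, request
    import anthropic

    app = Flask(__name__)
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    @app.route("/", defaults={"path": ""}, methods=["GET", "POST"])
    @app.route("/<path:path>", methods=["GET", "POST"])
    def imagine(path: str):
        # Intercept any GET, or any form POST, to a URL that doesn't exist...
        form = dict(request.form) if request.method == "POST" else {}
        msg = client.messages.create(
            model="claude-3-opus-20240229",
            max_tokens=4096,
            messages=[{
                "role": "user",
                "content": (
                    f"Generate the complete HTML for the imaginary page /{path}. "
                    f"Form data submitted: {form}. Every link should point to "
                    "another imaginary path on this site. Reply with HTML only."
                ),
            }],
        )
        # ...and respond with whatever page Claude dreams up.
        return msg.content[0].text

    if __name__ == "__main__":
        app.run(port=8000)
    ```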
    In the demo I saw we were able to “log in” to a simulation of Elon Musk’s Gmail account, and browse examples of emails that would have been in that universe’s Elon’s inbox. It was hilarious and impressive even back then.
    Since then though, the project has become even more impressive, with both Siqi Chen and Dylan Field singing its praises:

    Joscha Bach
    Joscha actually spoke at the WebSim Hyperstition Night this week, so we took the opportunity to get his take on Simulative AI, as well as a round up of all his other AI hot takes, for his first appearance on Latent Space. You can see it together with the full 2hr uncut demos of WorldSim and WebSim on YouTube!

    Timestamps
    * [00:01:59] WorldSim at Replicate HQ
    * [00:11:03] WebSim at AGI House SF
    * [00:22:0

    • 53 min
