Latent Space: The AI Engineer Podcast

Latent.Space

The podcast by and for AI Engineers! In 2025, over 10 million readers and listeners came to Latent Space to hear about news, papers and interviews in Software 3.0. We cover Foundation Models changing every domain in Code Generation, Multimodality, AI Agents, GPU Infra and more, directly from the founders, builders, and thinkers involved in pushing the cutting edge. We strive to give you everything from the definitive take on the Current Thing to the first introduction to the tech you'll be using in the next 3 months! We break news and exclusive interviews from OpenAI, Anthropic, Gemini, Meta (Soumith Chintala), Sierra (Bret Taylor), tiny corp (George Hotz), Databricks/MosaicML (Jonathan Frankle), Modular (Chris Lattner), Answer.ai (Jeremy Howard), et al. Full show notes always on https://latent.space

  1. 3 days ago

    Cursor's Third Era: Cloud Agents

    All speakers are announced at AIE EU, schedule coming soon. Join us there or in Miami with the renowned organizers of React Miami! Singapore CFP also open! We've called this out a few times over in AINews, but the overwhelming consensus in the Valley is that "the IDE is Dead". In November it was just a gut feeling, but now we actually have data: even at the canonical "VSCode fork" company, people are officially using more agents than tab autocomplete (the first wave of AI coding). Cursor launched cloud agents a few months ago; this specific launch is around Computer Use, which has come a long way since we first talked with Anthropic about it in 2024, and which Jonas productized as Autotab. We also take the opportunity to do a live demo, talk about slash commands and subagents, and the future of continual learning and personalized coding models, something that Sam previously worked on at New Computer. (The fact that both of these folks are top-tier CEOs of their own startups who have now joined the insane talent density gathering at Cursor should also not be overlooked.) Full Episode on YouTube! Please like and subscribe!

Timestamps

* 00:00 Agentic Code Experiments
* 00:53 Why Cloud Agents Matter
* 02:08 Testing First Pillar
* 03:36 Video Reviews Second Pillar
* 04:29 Remote Control Third Pillar
* 06:17 Meta Demos and Bug Repro
* 13:36 Slash Commands and MCPs
* 18:19 From Tab to Team Workflow
* 31:41 Minimal Web UI Philosophy
* 32:40 Why No File Editor
* 34:38 Full Stack Cursor Debate
* 36:34 Model Choice and Auto Routing
* 38:34 Parallel Agents and Best Of N
* 41:41 Subagents and Context Management
* 44:48 Grind Mode and Throughput Future
* 01:00:24 Cloud Agent Onboarding and Memory

Transcript: EP 77 - CURSOR - Audio version

[00:00:00] Agentic Code Experiments

Samantha: This is another experiment that we ran last year and didn't decide to ship at that time, but may come back to: LM Judge, but one that was also agentic and could write code.
So it wasn't just picking, but also taking the learnings from the two models it was looking at and writing a new diff. And what we found was that there were strengths to using models from different model providers as the base level of this process. Basically you could get almost like a synergistic output that was better than having a very unified bottom model tier. Jonas: We think that over the coming months, the big unlock is not going to be one person with a model getting more done, like the water flowing faster; we'll be making the pipe much wider, and so parallelizing more. Whether that's swarms of agents or parallel agents, both of those are things that contribute to getting much more done in the same amount of time.

Why Cloud Agents Matter

swyx: This week, one of the biggest launches that Cursor's ever done is cloud agents. I think you had [00:01:00] cloud agents before, but this was like, you give Cursor a computer, right? Yeah. So basically they bought Autotab and then they repackaged it. Is that what's going on, or? Jonas: That's a big part of it, yeah. Cloud agents already ran in their own computers, but they were sort of sight-reading code. Yeah. And those computers were typically blank VMs that were not set up with the dev experience for whatever repo the agent's working on. One of the things that we talk about is, if you put yourself in the model's shoes, and you were seeing tokens stream by, and all you could do was sight-read code and spit out tokens and hope that you had done the right thing... swyx: No chance. Jonas: I'd be so bad. Obviously you need to run the code. And so that I think is probably not that contrarian of a take, but no one has done that yet.
And so giving the model the tools to onboard itself, and then use full computer use end-to-end, pixels in and coordinates out, and have the cloud computer with different apps in it, is the big unlock. What we've seen internally, in terms of usage of this, is going from "oh, we use it for little copy changes" [00:02:00] to "no, we're really driving new features with this kind of new agentic workflow." Alright, let's see it. Cool.

Live Demo Tour

Jonas: So this is what it looks like at cursor.com/agents. This is one I kicked off a while ago. On the left hand side is the chat, very classic sort of agentic thing. The big new thing here is that the agent will test its changes. So you can see here it worked for half an hour. That is because it not only took time to write the tokens of code, it also took time to test them end to end. So it started dev servers, iterated when needed. And so that's one part of it: the model works for longer, and doesn't come back with an "I tried some things" PR, but an "I tested it" PR that's ready for your review. One of the other intuition pumps we use there is: if a human gave you a PR, asked you to review it, and they hadn't tested it, you'd also be annoyed, because you'd be like, only ask me for a review once it's actually ready. So that's what we've done with

Testing Defaults and Controls

swyx: Simple question I wanted to get out of the way up front. Some PRs are way smaller, [00:03:00] like just a copy change. Does it always do the video, or is it sometimes? Jonas: Sometimes. swyx: Okay. So what's the judgment? Jonas: The model does it. So we do some default prompting with, sort of, what types of changes to test. There's a slash command that people can do, called slash no test, where if you do that, the model will not test. swyx: But the default is test. Jonas: The default is to be calibrated. So we tell it: don't test very simple copy changes, but test more complex things.
And then users can also write their agents.md and specify, like, if you're editing this subpart of my monorepo, never test it, 'cause that won't work, or whatever.

Videos and Remote Control

Jonas: So pillar one is the model actually testing. Pillar two is the model coming back with a video of what it did. We have found that in this new world, where agents can end-to-end write much more code, reviewing the code is one of these new bottlenecks that crop up. And so reviewing a video is not a substitute for reviewing code, but it is an entry point that is much, much easier to start with than glancing at [00:04:00] some giant diff. And so typically you kick one off, it's done, you come back, and the first thing that you would do is watch this video. So this is a video of it. In this case I wanted a tooltip over this button, and so it went and showed me what that looks like in this video. I think here it actually used a gallery. So sometimes it will build Storybook-type galleries where you can see that component in action. And so that's pillar two: these demo videos of what it built. And then pillar number three is, I have full remote control access to this VM. So I can go in here, I can hover things, I can type, I have full control. And same thing for the terminal: I have full access. And so that is also really useful, because sometimes the video is all you need to see. And oftentimes, by the way, the video's not perfect; the video will show you, is this worth either merging immediately, or, oftentimes, is this worth iterating with to get it to that final stage where I am ready to merge. So I can go through some other examples where the first video [00:05:00] wasn't perfect, but it gave me confidence that we were on the right track, and two or three follow-ups later, it was good to go. And then I also have full access here, where some things you just wanna play around with.
You wanna get a feel for what this is, and there's no substitute for a live preview. And the VNC kind of VM remote access gives you that. swyx: Amazing. What, sorry, what is VNC? Jonas: Just the remote desktop. Remote desktop. Yeah. swyx: Sam, any other details that you always wanna call out? Samantha: Yeah, for me the videos have been super helpful, I would say, especially in cases where a common problem for me with agents and cloud agents beforehand was almost like under-specification in my requests. Our plan mode, and going really back and forth and getting a detailed implementation spec, is a way to reduce the risk of under-specification, but then, similar to how human communication breaks down over time, I feel like you have this risk where it's like: okay, when I go through the trouble of pulling down and running this branch locally, I'm gonna see that, like, I said this should be a toggle and you have a checkbox; why didn't you get that detail? And having the video up front just [00:06:00] makes that alignment, like you're talking about a shared artifact with the agent, very clear, which has been just super helpful for me. Jonas: I can quickly run through some other examples.

Meta Agents and More Demos

Jonas: So this is a very front-end heavy one. swyx: I was gonna say, is this only for front end? Jonas: Exactly. One question you might have is: is this only for front end? So this is another example, where the thing I wanted it to implement was a better error message for saving secrets. So the cloud agents support adding secrets; that's part of what they need to access certain systems. Part of onboarding is giving access. swyx: This is a cloud agent working on cloud agents. Jonas: Yes. So this is a fun thing. Samantha: It can get super meta. Jonas: It can get super meta. It can start its own cloud agents, it can talk to its own cloud agents. Sometimes it's hard to wrap your mind around that.
We have disabled cloud agents starting more cloud agents; we currently disallow that. swyx: Someday you might. Jonas: Someday we might. Someday we might. So this actually was mostly a backend change, in terms of the error handling here, where if the [00:07:00] secret is far too large... oh, this is actually really cool. Wow. That's the dev tools. That
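Editor's aside: the agents.md file mentioned in the testing discussion above is free-form, natural-language instruction that the agent reads when onboarding onto a repo. A hypothetical sketch of the kind of per-directory testing rules Jonas describes (the paths, sections, and wording here are illustrative, not Cursor's documented format):

```markdown
# agents.md — instructions the agent reads when onboarding onto this repo

## Testing
- For changes under `apps/web/`, start the dev server and verify the change
  end-to-end; record a demo video.
- For changes under `packages/legacy-embedded/`, never run tests: the
  hardware-in-the-loop suite won't work in a cloud VM.
- Skip testing entirely for pure copy changes (strings, docs).
```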

    1 hr 7 min
  2. 5 days ago

    Every Agent Needs a Box — Aaron Levie, Box

    The reception to our recent post on Code Reviews has been strong. Catch up! Amid a maelstrom of discussion on whether or not AI is killing SaaS, one of the top publicly listed SaaS companies in the world has just reported record revenues, clearing well over $1.1B in ARR for the first time with a 28% margin. As we comment on the pod, Aaron Levie is the rare public-company CEO equally at home in the worlds of Silicon Valley and Wall Street/Main Street: by day helping 70% of the Fortune 500 with their Enterprise Advanced Suite, and yet by night often found in the basements of early startups and tweeting viral insights about the future of agents. Now that Cursor, Cloudflare, Perplexity, Anthropic and more have made Filesystems and Sandboxes and various forms of "Just Give the Agent a Box" cool (not just cool; it is now one of the single hottest areas in AI infrastructure, growing 100% MoM), we find it a delightfully appropriate time to do the episode with the OG CEO who has been giving humans and computers Boxes since he was a college dropout pitching VCs at a Michael Arrington house party. Enjoy our special pod, with fan-favorite returning guest/guest cohost Jeff Huber! Note: We didn't directly discuss the AI vs SaaS debate; Aaron has done many, many, many other podcasts on that, and you should read his definitive essay on it. Most commentators do not understand SaaS businesses, because they have never scaled one themselves, nor deeply reflected on what the true value proposition of SaaS is. We also discuss Your Company is a Filesystem. We also shout out CTO Ben Kus and the AI team, who talked about the technical architecture and will return for AIE WF 2026.
Full Video Episode

Timestamps

* 00:00 Adapting Work for Agents
* 01:29 Why Every Agent Needs a Box
* 04:38 Agent Governance and Identity
* 11:28 Why Coding Agents Took Off First
* 21:42 Context Engineering and Search Limits
* 31:29 Inside Agent Evals
* 33:23 Industries and Datasets
* 35:22 Building the Agent Team
* 38:50 Read Write Agent Workflows
* 41:54 Docs Graphs and Founder Mode
* 55:38 Token FOMO Culture
* 56:31 Production Function Secrets
* 01:01:08 Film Roots to Box
* 01:03:38 AI Future of Movies
* 01:06:47 Media DevRel and Engineering

Transcript

Adapting Work for Agents

Aaron Levie: Like, you don't write code; you talk to an agent and it goes and does it for you, and at best you maybe review it. That's probably largely not even what you're doing. What's happening is we are changing our work to make the agents effective. In that model, the agent didn't really adapt to how we work; we basically adapted to how the agent works. All of the economy has to go through that exact same evolution. Right now, it's a huge asset and an advantage for the teams that do it early and that are kinda wired into doing this, 'cause you'll see compounding returns. But that's just gonna take a while for most companies to actually go and get this deployed. swyx: Welcome to the Latent Space pod. We're back in the Chroma studio with Chroma CEO Jeff Huber. Welcome, returning guest, now guest host. Aaron Levie: It's a pleasure. Wow, how'd you get upgraded to, uh, to that? swyx: Because he's like the perfect guy to be guest host for you. Aaron Levie: That makes sense actually. We love context. We both really love context. We really do. We really do. swyx: And we're here with Aaron Levie. Welcome. Aaron Levie: Thank you. Good to be [00:01:00] here. swyx: Yeah. So we've all met offline and chatted a little bit, but it's always nice to get these things in person and in conversation. Yeah.
You just started off with so much energy. You're super excited about agents. Aaron Levie: I love agents. swyx: Yeah. OpenClaw just got, by... got bought by OpenAI. No, not bought, but you know, you know what I mean? Aaron Levie: Some, you know, acquihire. swyx: Executive hire. Aaron Levie: Executive hire. Okay. Executive hire. swyx: Hey, that's my term. Okay. Um, what are you pounding the table on, on agents? You have so many insightful tweets.

Why Every Agent Needs a Box

Aaron Levie: Well, the thing that we get super excited by, that I think should be relatively obvious, is we've built a platform to help enterprises manage their corporate files, and the permissions of who has access to those files, and the sharing and collaboration of those files. All of those files contain really, really important information for the enterprise. It might have your contracts, it might have your research materials, it might have marketing information, it might have your memos. All that data has, you know, predominantly been used by humans. [00:02:00] But there's been one really interesting problem, which is that humans only really work with their files during an active engagement with them, and then they kind of go away and you don't really see them for a long time. And all of a sudden, with the power of AI and AI agents, all of that data becomes extremely relevant as this ongoing source of answers to new questions, of data that will transform into something else that produces value in your organization. It contains the answer for the new employee that's onboarding, that needs to ramp up on a project. It contains the answer to the right thing to sell a customer when you're having a conversation with them. It contains the roadmap information that's gonna produce the next feature. So all that data.
That previously we've been just sort of storing and occasionally forgetting about, 'cause we're only working on the new active stuff. All of that information becomes valuable to the enterprise, and it's gonna become extremely valuable to end users, because now they can have agents go find what they're looking for and produce new value and new data on that information. And it's gonna become incredibly valuable to agents, because agents can roam around and do a bunch of work, and they're gonna need access to that data as well. And, you know, sometimes that will be an agent that is sort of working on behalf of you, effectively as you, accessing all of the same information that you have access to and operating as you in the system. And then sometimes there's gonna be agents that are just effectively autonomous, and kind of run on their own, and you're gonna collaborate and work with them kind of like you would another person. OpenClaw being the most recent, and maybe the first real, views-updating version of what that could look like, which is: okay, I have an agent. It's on its own system, it's on its own computer, it has access to its own tools. I probably don't give it access to my entire life. I probably communicate with it like I would an assistant or a colleague, and it sort of has this sandbox environment. So all of that has massive implications for a platform that manages that [00:04:00] enterprise data. We think it's gonna just transform how we work with all of the enterprise content that we work with, and we just have to make sure we're building the right platform to support that. swyx: The shorthand I put on it is: as people build agents, everybody's just realizing that every agent needs a box. Yes. And it's nice to be called Box and just give everyone a box.
Aaron Levie: Hey, if we can make that go viral, I think that terminology... swyx: That's the tagline: every agent needs a box. Aaron Levie: Every agent needs a box. If we can make that the headline of this, I'm fine with this. And that's the billboard I wanna... yeah, exactly. Every agent needs a box. Um, I like it. Can we ship this? swyx: Okay, let's do it. Yeah. Aaron Levie: Uh, my work here is done and I got the value I needed outta this podcast. Drinks. swyx: Yeah.

Agent Governance and Identity

Aaron Levie: But, um, you know, so the thing that we kind of think about is, whether you think the number is 10x or a hundred x or whatever the number is, we're gonna have some order of magnitude more agents than people. That's inevitable. It has to happen. So then the question is, what is the infrastructure that's needed to make all those agents effective in the enterprise? Make sure that they are well governed. Make sure they're only doing [00:05:00] safe things with your information. Make sure that they're not getting exposed to data that they shouldn't have access to. There's gonna be just incredibly, spectacularly crazy security incidents that will happen with agents, because you'll prompt-inject an agent and sort of find your way through the CRM system and pull out data that you shouldn't have access to. Jeff Huber: Oh, we have, God... Aaron Levie: Right? I mean, that's just gonna happen all over the place, right? So then the thing is, how do you make sure you have the right security, the permissions, the access controls, the data governance? We actually don't yet exactly know, in many cases, how we're gonna regulate some of these agents, right? If you think about an agent in financial services, does it have the exact same financial requirements that a human did? Or is the risk fully on the human that was interacting with or created the agent?
All open questions, but no matter what, there's gonna need to be a layer that manages the data they have access to, the workflows that they're involved in, pulling data from multiple systems. This is the new infrastructure opportunity in the era of agents. swyx: You have a piece on agent identities, [00:06:00] which I think was
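The governance layer Aaron describes (scoped data access per agent identity) can be pictured as ordinary deny-by-default permission checks, applied to agent identities rather than human ones. A minimal sketch in Python; the names and structure here are hypothetical illustrations, not Box's actual API:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AgentIdentity:
    """An agent gets its own identity with an explicit, narrow scope,
    rather than inheriting its creator's full permissions."""
    agent_id: str
    acting_for: str                       # human principal it works on behalf of
    allowed_paths: frozenset = field(default_factory=frozenset)

def can_read(agent: AgentIdentity, path: str) -> bool:
    # Deny by default; allow only paths inside the agent's granted scope.
    return any(path == p or path.startswith(p.rstrip("/") + "/")
               for p in agent.allowed_paths)

sales_agent = AgentIdentity(
    agent_id="agent-7",
    acting_for="alice@example.com",
    allowed_paths=frozenset({"/contracts/", "/marketing/"}),
)

print(can_read(sales_agent, "/contracts/acme-2025.pdf"))  # True
print(can_read(sales_agent, "/hr/salaries.xlsx"))         # False
```

The point of the separate `acting_for` field is exactly the open question Aaron raises: an agent's scope can be narrower than its human principal's, and actions can be audited against both identities.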

    1 hr 17 min
  3. Feb 27

    METR’s Joel Becker on exponential Time Horizon Evals, Threat Models, and the Limits of AI Productivity

    This is a free preview of a paid episode. To hear more, visit www.latent.space AIE Europe CFP and AIE World's Fair paper submissions for CAIS peer review are due TODAY - do not delay! Last call ever. We're excited to welcome METR for their first LS pod, hopefully the first of many: METR are keepers of currently the single most infamous chart in AI: But every Latent Space reader should be sophisticated enough to know that the details matter, and that hype and hyperbole go hand in hand in AI social media: the millions of impressions that chart got, from people who don't understand or care about the nuances, disclaimers, and error bars, far outreach the 69k views on the corrections by the people who actually made it: There's a lot of nuance both in making benchmarks (as we discovered with OpenAI on our SWE-Bench Verified podcast) and in extrapolating results from them, especially where exponentials and sigmoids are concerned. METR's Long Horizons work itself has known biases that the authors have responsibly disclosed, but which go far too underappreciated in the pursuit of doomer chart porn. If you're interested in a short, shareable TED-talk version of this pod, over at AIE CODE we were blessed to feature Joel twice, as a stage talk and with a longer-form small workshop with Q&A: We also make sure to cover some of METR's lesser-known work on Threat Evaluation but also Developer Productivity, where 2x friend of the pod and now Zyphra founder Quentin Anthony was the ONLY productive participant!
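For readers new to the chart: METR's headline result is that the task length frontier models can complete (at a fixed success rate) has been doubling on a roughly seven-month cadence, and extrapolating it is just compounding that doubling. A sketch of the arithmetic; the starting horizon below is illustrative, not METR's fitted data:

```python
def projected_horizon(h0_minutes: float, months_elapsed: float,
                      doubling_months: float = 7.0) -> float:
    """Exponential time-horizon trend: the task length a model can
    complete at a fixed success rate doubles every `doubling_months`."""
    return h0_minutes * 2 ** (months_elapsed / doubling_months)

# Illustrative numbers: start from a 60-minute horizon
print(projected_horizon(60, 0))    # 60.0
print(projected_horizon(60, 7))    # 120.0
print(projected_horizon(60, 28))   # 960.0 (four doublings)
```

The caveat threaded through this episode applies: an exponential and the early portion of a sigmoid are indistinguishable in-sample, which is exactly why the disclaimers and error bars matter.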
Finally, if you're the sort to read these show notes to the end, then you definitely deserve some pictures of Joel shredding the guitar at Love Band Karaoke, which we mention at the end:

Full Video Pod

Timestamps

* 00:00 What METR Means
* 00:39 Podcast Intro With Joel
* 01:39 ME vs TR
* 03:33 Time Horizon Origin Story
* 04:56 Picking Tasks And Biases
* 09:13 Time Horizon Misconceptions
* 11:37 Opus 4.5 And Trendlines
* 14:27 Productivity Studies And Explosions
* 29:50 Compute Slows Progress
* 30:47 Algorithms Need Compute
* 32:45 Industry Spend and Data
* 34:57 Clusters and Shipping Timelines
* 36:44 Prediction Markets for Models
* 38:10 Manifold Alpha Story
* 43:04 Beyond Benchmarks Evals
* 51:39 METR Roadmap and Farewell

Transcript

    56 min
  4. Feb 25

    🔬Searching the Space of All Possible Materials — Prof. Max Welling, CuspAI

    Editor's note: CuspAI raised a $100m Series A in September and is rumored to have reached a unicorn valuation. They have all-star advisors, from Geoff Hinton to Yann LeCun, and a team of deep domain experts to tackle this next frontier in AI applications. In this episode, Max Welling traces the thread connecting quantum gravity, equivariant neural networks, diffusion models, and climate-focused materials discovery (yes, there is one!!!). We begin with a provocative framing: experiments as computation. Welling describes the idea of a "physics processing unit": a world in which digital models and physical experiments work together, with nature itself acting as a kind of processor. It's a grounded but ambitious vision of AI for science: not replacing chemists, but accelerating them. Along the way, we discuss:

* Why symmetry and equivariance matter in deep learning
* The tradeoff between scale and inductive bias
* The deep mathematical links between diffusion models and stochastic thermodynamics
* Why materials, not software, may be the real bottleneck for AI and the energy transition
* What it actually takes to build an AI-driven materials platform

Max reflects on moving from curiosity-driven theoretical physics (including work with Gerard 't Hooft) toward impact-driven research in climate and energy. The result is a conversation about convergence: physics and machine learning, digital models and laboratory experiments, long-term ambition and incremental progress.

Full Video Episode

Timestamps

* 00:00:00 – The Physics Processing Unit (PPU): Nature as the Ultimate Computer
  * Max introduces the idea of a Physics Processing Unit: using real-world experiments as computation.
* 00:00:44 – From Quantum Gravity to AI for Materials
  * Brandon frames Max's career arc: VAE pioneer → equivariant GNNs → materials startup founder.
* 00:01:34 – Curiosity vs Impact: How His Motivation Evolved
  * Max explains the shift from pure theoretical curiosity to climate-driven impact.
* 00:02:43 – Why CuspAI Exists: Technology as Climate Strategy
  * Politics struggles; technology scales. Why materials innovation became the focus.
* 00:03:39 – The Thread: Physics → Symmetry → Machine Learning
  * How gauge symmetry, group theory, and relativity informed equivariant neural networks.
* 00:06:52 – AI for Science Is Exploding (Not Emerging)
  * The funding surge and why AI-for-Science feels like a new industrial era.
* 00:07:53 – Why Now? The Two Catalysts Behind AI for Science
  * Protein folding, ML force fields, and the tipping point moment.
* 00:10:12 – How Engineers Can Enter AI for Science
  * Practical pathways: curriculum, workshops, cross-disciplinary training.
* 00:11:28 – Why Materials Matter More Than Software
  * The argument that everything, LLMs included, rests on materials innovation.
* 00:13:02 – Materials as a Search Engine
  * The vision: automated exploration of chemical space like querying Google.
* 00:14:48 – Inside CuspAI: The Platform Architecture
  * Generative models + multi-scale digital twin + experiment loop.
* 00:21:17 – Automating Chemistry: Human-in-the-Loop First
  * Start manual → modular tools → agents → increasing autonomy.
* 00:25:04 – Moonshots vs Incremental Wins
  * Balancing lighthouse materials with paid partnerships.
* 00:26:22 – Why Breakthroughs Will Still Require Humans
  * Automation is vertical-specific and iterative.
* 00:29:01 – What Is Equivariance (In Plain English)?
  * Symmetry in neural networks explained with the bottle example.
* 00:30:01 – Why Not Just Use Data Augmentation?
  * The optimization trade-off between inductive bias and data scale.
* 00:31:55 – Generative AI Meets Stochastic Thermodynamics
  * His upcoming book and the unification of diffusion models and physics.
* 00:33:44 – When the Book Drops (ICLR?)

Transcript

Max: I want to think of it as what I would call a physics processing unit, like a PPU, right? Which is, you have digital processing units, and then you have physics processing units.
So it's basically nature doing computations for you. It's the fastest computer known, maybe the fastest possible, even. It's a bit hard to program, because you have to do all these experiments. Those are quite bulky; it's like a very large thing you have to do. But in a way it is a computation, and that's the way I want to see it. You can do computations in a data center, and then you can ask nature to do some computations. Your interface with nature is a bit more complicated. But then these things will have to seamlessly work together to get to a new material that you're interested in. [01:00:44:14 - 01:01:34:08] Brandon: Yeah, it's a pleasure to have Max Welling as a guest today. Max has done so much over his career that I've been so excited about. If you're in the deep learning community, you probably know Max for his work on variational autoencoders, which has literally stood the test of time. If you're a scientist, you probably know him for his work on graph neural networks and equivariance. And if you're in materials science, you probably know him for his new startup, CuspAI. Max has a long history of working on lots of cool problems. You started in quantum gravity, which is, I think, very different from all of these other things you worked on. The first question, for AI engineers and for scientists: what is the thread in how you think about problems? What is the thread in the type of things which excite you? And how do you decide what is the next big thing you want to work on? [01:01:34:08 - 01:02:41:13] Max: So it has actually evolved a lot. In my younger days, let's say, I would just follow what I would find super interesting. I have kind of this sensor, which I think many people have but maybe don't really use very much, which is, you get this feeling of getting very excited about some problem.
Like, it could be: what's inside of a black hole, or what's at the boundary of the universe, or what is quantum mechanics actually all about. And so I followed that, basically, throughout my career. But I have to say that as you get older, this changes a little bit, in the sense that there's a new dimension coming to it, and that's impact. Going into two-dimensional quantum gravity, you're pretty much guaranteed there's going to be no impact from what you do; maybe a few papers, but nothing in this world, at this energy scale. As I get closer to retirement, which is fortunately still 10 years away or so, I do want to make a positive impact in the world. And I got pretty worried about climate change. [01:02:43:15 - 01:03:19:11] Max: I think politics seems to have a hard time solving it, especially these days. And so I thought: better work on it from the technology side. And that's why we started CuspAI. But there are also a lot of really interesting science problems in materials science. And so it's kind of combining the impact you can make with the interesting science. So it's sort of these two dimensions: working on things where you feel there's something very deep going on, and on the other hand, trying to build tools that can actually make a real impact in the world. [01:03:19:11 - 01:03:39:23] RJ: So the thread: when I look back at the different things that you worked on, some of them seem pretty connected, like the physics to equivariance and, yeah, graph neural networks, maybe. And that seems to be somewhat related to CuspAI. Do you have a thread through there? [01:03:39:23 - 01:06:52:16] Max: Yeah. So physics is the thread. Having, you know, spent a lot of time in theoretical physics, I think there are, first, very fundamental and exciting questions, like things that haven't actually been figured out in quantum gravity. So that is really the frontier.
There's also a lot of mathematical tools that you can use, right? In particle physics, for instance, but also in general relativity, symmetries play an enormously important role. And this goes all the way to gauge symmetries as well. And so applying these kinds of symmetries to machine learning was, I thought, a very deep and interesting mathematical problem. I did this with Taco Cohen, and Taco was the main driver behind this; it went all the way from simple rotational symmetries to gauge symmetries on spheres and stuff like that. And Maurice Weiler, who's also here, was a very good PhD student with me; he wrote an entire book, which I can really recommend, about the role of symmetries in AI and machine learning. So I find this a very deep and interesting problem. More recently I've taken a sort of different path, which is the relationship between diffusion models and the field called stochastic thermodynamics. This is basically thermodynamics, which is a theory of equilibrium, but then formulated for out-of-equilibrium systems. And it turns out that the mathematics that we use for diffusion models, but even for reinforcement learning, for Schrödinger bridges, for MCMC sampling, is the same mathematics as this physical theory of non-equilibrium systems. And that got me very excited. And actually, when I taught a course in Muizenberg in South Africa, close to Cape Town, at the African Institute for Mathematical Sciences (AIMS), I turned that into a book. Two years later, the book was finished; I've sent it to the publisher. And this is about the deep relationship between free energy, diffusion models, basically generative AI, and stochastic thermodynamics. So it's always some kind of, I don't know, I find physics very deep.
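One concrete anchor for the correspondence Max describes, from the standard machine learning literature rather than anything CuspAI-specific: the forward noising process of a diffusion model is an Ornstein-Uhlenbeck-type stochastic differential equation, and sampling runs the same dynamics in reverse, corrected by the score.

```latex
% Forward (noising) SDE, variance-preserving form:
dx_t = -\tfrac{1}{2}\beta(t)\,x_t\,dt + \sqrt{\beta(t)}\,dW_t
% Reverse (denoising) SDE, driven by the score \nabla_x \log p_t:
dx_t = \Bigl[-\tfrac{1}{2}\beta(t)\,x_t - \beta(t)\,\nabla_x \log p_t(x_t)\Bigr]dt + \sqrt{\beta(t)}\,d\bar{W}_t
```

Drift plus noise steered by a score is exactly the Langevin structure studied in non-equilibrium statistical mechanics, which is why free-energy and entropy-production language transfers so directly to generative models.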
I also think a lot about quantum mechanics, and it’s a completely weird theory that actually nobody really understands. And th

    34 min
  5. Feb 24

    Claude Code for Finance + The Global Memory Shortage: Doug O'Laughlin, SemiAnalysis

    This is a free preview of a paid episode. To hear more, visit www.latent.space. First speakers for AIE Europe and AIEi Miami have been announced. If you’re in Asia/Aus, come by Singapore and Melbourne. AI Engineering is going global! One year ago today, Anthropic launched Claude Code, to not much fanfare: The word of mouth was incredibly strong, however, and so we were glad to be one of the first podcasts to invite Boris and Cat on in early May: As we discussed on the pod, all CC usage was API-based, and therefore it was ridiculously expensive to do anything. This was then fixed when the team included Claude Code in the Claude Pro plan in early June, and then the virality caused us to make a rare trend call in late June: Now, 6 months on, Doug has just calculated that around 4% of GitHub is written by Claude Code: We talk about how Doug uses Claude Code to do SemiAnalysis work. Memory Mania In the second part of this episode, we also check in on Memory Mania, which is going to affect you (yes, you) at home if it hasn’t already: Full Episode on YouTube Timestamps 00:00 AI as Junior Analyst00:59 Meet Swyx and Doug03:30 From Value Mule to Semis06:28 Moore’s Law Ends Thesis12:02 Claude Code Awakening32:02 Agent Swarms Reality Check32:53 Kimi Swarm Benchmarks37:31 Bots vs Zapier Automation39:44 Claude Code Workflow Setup57:54 AGI Metrics and GDP01:04:48 Railroad CapEx Analogy01:06:00 Funding Bubbles and Demand01:08:11 Agents Replace Work Tools01:13:56 Codex vs Claude Race01:21:15 Microsoft and TPU Strategy01:34:13 TPU Window vs Nvidia01:36:30 HBM Supply Chain Squeeze01:39:41 Memory Shock and CXL01:45:20 Context Rationing Future01:54:37 Writing and Trail Lessons Transcript [00:00:00] AI as Junior Analyst [00:00:00] Doug: This crap makes mistakes all the time. All the time. It is still just like a, like I think of it once again as like a junior analyst, right?
The analyst goes and gathers all this really pain-in-the-ass information, and you bring it all together to make a good decision at the top. Historically, what happens is that junior analyst, who I once was, went and gathered all that information, and after doing this enough times, there’s a meta-level thinking that’s happening, where it’s like, okay, here’s what I really understand, and this type of analysis I’m an expert in, actually I’m very good at, I consistently have a hit rate. [00:00:28] Now I’m the expert, right? I don’t think that meta-level learning is there yet. We’ll see if LLMs do it, right? Everyone who’s spending one quadrillion dollars in the world thinks it will. It better, it better happen if you’re spending, you know, a trillion dollars and there’s not meta-level learning. [00:00:44] But for me, in our firm, that massively amplifies everyone who is an expert. ‘Cause you still have to do something; you can’t just slop it up. It’s very obvious to me what’s slop. [00:00:59] Meet Swyx and Doug

    2 hr 4 min
  6. Feb 23

    ⚡️The End of SWE-Bench Verified — Mia Glaese & Olivia Watkins, OpenAI Frontier Evals & Human Data

    Olivia Watkins (Frontier Evals team) and Mia Glaese (VP of Research at OpenAI, leading the Codex, human data, and alignment teams) discuss a new blog post (https://openai.com/index/why-we-no-longer-evaluate-swe-bench-verified/) arguing that SWE-Bench Verified—long treated as a key “North Star” coding benchmark—has become saturated and highly contaminated, making it less useful for measuring real coding progress. SWE-Bench Verified originated as a major OpenAI-led cleanup of the original Princeton SWE-Bench benchmark, including a large human review effort with nearly 100 software engineers and multiple independent reviews to curate ~500 higher-quality tasks. But recent findings show that many remaining failures can reflect unfair or overly narrow tests (e.g., requiring specific naming or unspecified implementation details) rather than true model inability, and cite examples suggesting contamination such as models recalling repository-specific implementation details or task identifiers. From now on, OpenAI plans to stop reporting SWE-Bench Verified and instead focus on SWE-Bench Pro (from Scale), which is harder, more diverse (more repos and languages), includes longer tasks (1–4 hours and 4+ hours), and shows substantially less evidence of contamination under their “contamination auditor agent” analysis. We also discuss what future coding/agent benchmarks should measure beyond pass/fail tests—longer-horizon tasks, open-ended design decisions, code quality/maintainability, and real-world product-building—along with the tradeoffs between fast automated grading and human-intensive evaluation. 00:00 Meet the Frontier Evals Team00:56 Why SWE Bench Stalled01:47 How Verified Was Built04:32 Contamination In The Wild06:16 Unfair Tests And Narrow Specs08:40 When Benchmarks Saturate10:28 Switching To SWE Bench Pro12:31 What Great Coding Evals Measure18:17 Beyond Tests Dollars And Autonomy21:49 Preparedness And Future Directions This is a public episode. 
If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.latent.space/subscribe

    26 min
  7. Feb 19

    Bitter Lessons in Venture vs Growth: Anthropic vs OpenAI, Noam Shazeer, World Labs, Thinking Machines, Cursor, ASIC Economics — Martin Casado & Sarah Wang of a16z

    Tickets for AIEi Miami and AIE Europe are live, with first wave speakers announced! From pioneering software-defined networking to backing many of the most aggressive AI model companies of this cycle, Martin Casado and Sarah Wang sit at the center of the capital, compute, and talent arms race reshaping the tech industry. As partners at a16z investing across infrastructure and growth, they’ve watched venture and growth blur, model labs turn dollars into capability at unprecedented speed, and startups raise nine-figure rounds before monetization. Martin and Sarah join us to unpack the new financing playbook for AI: why today’s rounds are really compute contracts in disguise, how the “raise → train → ship → raise bigger” flywheel works, and whether foundation model companies can outspend the entire app ecosystem built on top of them. They also share what’s underhyped (boring enterprise software), what’s overheated (talent wars and compensation spirals), and the two radically different futures they see for AI’s market structure. We discuss: * Martin’s “two futures” fork: infinite fragmentation and new software categories vs. a small oligopoly of general models that consume everything above them * The capital flywheel: how model labs translate funding directly into capability gains, then into revenue growth measured in weeks, not years * Why venture and growth have merged: $100M–$1B hybrid rounds, strategic investors, compute negotiations, and complex deal structures * The AGI vs.
product tension: allocating scarce GPUs between long-term research and near-term revenue flywheels * Whether frontier labs can out-raise and outspend the entire app ecosystem built on top of their APIs * Why today’s talent wars ($10M+ comp packages, $B acqui-hires) are breaking early-stage founder math * Cursor as a case study: building up from the app layer while training down into your own models * Why “boring” enterprise software may be the most underinvested opportunity in the AI mania * Hardware and robotics: why the ChatGPT moment hasn’t yet arrived for robots and what would need to change * World Labs and generative 3D: bringing the marginal cost of 3D scene creation down by orders of magnitude * Why public AI discourse is often wildly disconnected from boardroom reality and how founders should navigate the noise Show Notes: * “Where Value Will Accrue in AI: Martin Casado & Sarah Wang” - a16z show * “Jack Altman & Martin Casado on the Future of Venture Capital” * World Labs — Martin Casado • LinkedIn: https://www.linkedin.com/in/martincasado/ • X: https://x.com/martin_casado Sarah Wang • LinkedIn: https://www.linkedin.com/in/sarah-wang-59b96a7 • X: https://x.com/sarahdingwang a16z • https://a16z.com/ Timestamps 00:00:00 – Intro: Live from a16z00:01:20 – The New AI Funding Model: Venture + Growth Collide00:03:19 – Circular Funding, Demand & “No Dark GPUs”00:05:24 – Infrastructure vs Apps: The Lines Blur00:06:24 – The Capital Flywheel: Raise → Train → Ship → Raise Bigger00:09:39 – Can Frontier Labs Outspend the Entire App Ecosystem?00:11:24 – Character AI & The AGI vs Product Dilemma00:14:39 – Talent Wars, $10M Engineers & Founder Anxiety00:17:33 – What’s Underinvested?
The Case for “Boring” Software00:19:29 – Robotics, Hardware & Why It’s Hard to Win00:22:42 – Custom ASICs & The $1B Training Run Economics00:24:23 – American Dynamism, Geography & AI Power Centers00:26:48 – How AI Is Changing the Investor Workflow (Claude Cowork)00:29:12 – Two Futures of AI: Infinite Expansion or Oligopoly?00:32:48 – If You Can Raise More Than Your Ecosystem, You Win00:34:27 – Are All Tasks AGI-Complete? Coding as the Test Case00:38:55 – Cursor & The Power of the App Layer00:44:05 – World Labs, Spatial Intelligence & 3D Foundation Models00:47:20 – Thinking Machines, Founder Drama & Media Narratives00:52:30 – Where Long-Term Power Accrues in the AI Stack Transcript Latent.Space - Inside AI’s $10B+ Capital Flywheel — Martin Casado & Sarah Wang of a16z [00:00:00] Welcome to Latent Space (Live from a16z) + Meet the Guests [00:00:00] Alessio: Hey everyone. Welcome to the Latent Space podcast, live from a16z. Uh, this is Alessio, founder of Kernel Labs, and I’m joined by swyx, editor of Latent Space. [00:00:08] swyx: Hey, hey, hey. Uh, and we’re so glad to be on with you guys. Also a top AI podcast, uh, Martin Casado and Sarah Wang. Welcome, very [00:00:16] Martin Casado: happy to be here and welcome. [00:00:17] swyx: Yes, uh, we love this office. We love what you’ve done with the place. Uh, the new logo is everywhere now. It’s, it’s still getting, takes a while to get used to, but it reminds me of like sort of a callback to a more ambitious age, which I think is kind of [00:00:31] Martin Casado: definitely makes a statement. [00:00:33] swyx: Yeah. [00:00:34] Martin Casado: Not quite sure what that statement is, but it makes a statement. [00:00:37] swyx: Uh, Martin, I go back with you to Netlify. [00:00:40] Martin Casado: Yep. [00:00:40] swyx: Uh, and, uh, you know, you created software-defined networking and all that stuff; people can read up on your background. Yep. Sarah, I’m newer to you.
Uh, you, you sort of started working together on AI infrastructure stuff. [00:00:51] Sarah Wang: That’s right. Yeah. Seven, seven years ago now. [00:00:53] Martin Casado: Best growth investor in the entire industry. [00:00:55] swyx: Oh, say [00:00:56] Martin Casado: more. Hands down, there is, there is. [00:01:00] I mean, when it comes to AI companies, Sarah, I think, has done the most kind of aggressive, um, investment thesis around AI models, right? So, worked with Noam [Shazeer], Mira [Murati], Fei-Fei [Li], and so just these frontier, kind of like large AI models. [00:01:15] I think, you know, Sarah’s been the, the broadest investor. Is that fair? [00:01:20] Venture vs. Growth in the Frontier Model Era [00:01:20] Sarah Wang: No, I, well, I was gonna say, I think it’s been a really interesting tag team actually, just ‘cause a lot of these big seed deals, not only are they raising a lot of money, um, it’s still a tech founder bet, which obviously is inherently early stage. [00:01:33] But the resources, [00:01:36] Martin Casado: so many, I [00:01:36] Sarah Wang: was gonna say the resources: one, they just grow really quickly. But then two, the resources that they need day one are kind of growth scale. So the hybrid tag team that we have is quite effective, I think. [00:01:46] Martin Casado: What is growth these days? You know, you don’t wake up if it’s less than a billion. Like, it’s, it’s actually a very interesting time in investing, because, like, you know, take the Character rounds, right? [00:01:59] These tend to [00:02:00] be like pre-monetization, but the dollars are large enough that you need to have a larger fund, and the analysis, you know, because you’ve got lots of users, ‘cause this stuff has such high demand, requires, you know, more of a numbers sophistication. And so most of these deals, whether it’s us or other firms, on these large model companies, are like this hybrid between venture and growth.
[00:02:18] Sarah Wang: Yeah. Total. And I think, you know, stuff like BD for example, you wouldn’t usually need BD when you were seed stage trying to get to market. Biz Devrel. Biz Devrel, exactly. Okay. But like now, sorry, I’m, [00:02:27] swyx: I’m not familiar. What, what, what does biz Devrel mean for a venture fund? Because I know what biz Devrel means for a company. [00:02:31] Sarah Wang: Yeah. [00:02:32] Compute Deals, Strategics, and the ‘Circular Funding’ Question [00:02:32] Sarah Wang: You know, so a, a good example is, I mean, we talk about buying compute, but there’s a huge negotiation involved there in terms of, okay, do you get equity for the compute? What, what sort of partner are you looking at? Is there a go-to-market arm to that? Um, and these are just things on this scale, hundreds of millions, you know, maybe [00:02:50] six months into the inception of a company; you just wouldn’t have had to negotiate these deals before. [00:02:54] Martin Casado: Yeah. These large rounds are very complex now. Like in the past, if you did a Series A [00:03:00] or a Series B, like whatever, you’re writing a $20 to $60 million check and you call it a day. Now you normally have financial investors and strategic investors, and then the strategic portion always still goes with these kind of large compute contracts, which can take months to do. [00:03:13] And so it’s, it’s very different times. I’ve been doing this for 10 years, and I’ve never seen anything like this. [00:03:19] swyx: Yeah. Do you have worries about the circular funding from all of these strategics? [00:03:24] Martin Casado: I mean, listen, as long as the demand is there, like the demand is there. Like the problem with the internet is the demand wasn’t there. [00:03:29] swyx: Exactly. All right.
This, this is like the whole pyramid scheme bubble thing, where like, as long as you mark to market on the notional value of these deals, fine, but once it starts to chip away, it really... Well, [00:03:41] Martin Casado: no, like as long as there’s demand. I mean, you know, a lot of these sound bites have already become kind of cliches, but they’re worth saying, right? Like during the internet days, we were, um, raising money to put fiber in the ground that wasn’t used. And that’s a problem, right? Because now you actually have a supply overhang. [00:03:58] swyx: Mm-hmm. [00:03:59] Martin Casado: And even in the, the time of the internet, the supply and bandwidth overhang, even as massive as it was, and as massive as the crash was, only lasted about four years. [00:04:09] But we don’t have a supply

    55 min
