We sit down with prominent blogger and economist Noah Smith to dig into the disconnect between AI hype and current macroeconomic reality. The central puzzle: if a "god machine" driving 20% annual GDP growth is truly imminent, why aren't real interest rates skyrocketing as people borrow against a much wealthier future? Noah's take is that markets are pricing in significant growth, but not civilizational rapture. The culprits keeping digital intelligence from exploding into physical productivity? Land use, energy constraints, and the usual Baumol suspects.

But Noah's through-line is more hopeful than skeptical: even modest AI is humanity rolling the dice against stagnation. Ideas were getting harder to find (Bloom, Jones, Van Reenen & Webb were right), fertility was collapsing, and social media was degrading public discourse. We were hitting the Malthusian ceiling again. AI is the steam engine moment: chaotic, potentially catastrophic, but a genuine escape attempt. And crucially, Noah finds it reassuring that today's AI is LLM-based and derived from human thought rather than some alien RL agent that evolved in a digital environment.

We also discuss sociopolitical issues. Noah reframes "elite overproduction" as a revolution of rising expectations: the professional-managerial class expected a smooth escalator to the upper-middle class, found it stalled, and watched their technical peers keep soaring. Social media makes the gap hyper-visible. The result is deep-seated animus toward the tech bro class.

Noah argues that Acemoglu's *Power and Progress* is "fractally bad": the overall thesis is wrong, the chapter-level arguments supporting it are wrong, and the specific data points supporting those are wrong too. Henry Ford raised efficiency wages and then had union organizers shot. No citations. Power defined as outcomes. Noah doesn't mince words.
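The interest-rate puzzle can be made concrete with the textbook Ramsey rule, r ≈ ρ + γg: the real rate rises with expected consumption growth. A minimal back-of-the-envelope sketch follows; the parameter values (ρ = 1%, γ = 1.5) are illustrative assumptions, not figures from the episode or from Halperin's paper:

```python
# Ramsey rule sketch: r = rho + gamma * g
# rho   = pure rate of time preference (assumed 1%)
# gamma = inverse elasticity of intertemporal substitution (assumed 1.5)
def implied_real_rate(growth, rho=0.01, gamma=1.5):
    """Real interest rate implied by expected per-capita consumption growth."""
    return rho + gamma * growth

baseline = implied_real_rate(0.02)  # ~2% trend growth -> 4% real rate
ai_boom = implied_real_rate(0.20)   # 20% "god machine" growth -> 31% real rate
```

Under these assumed parameters, anticipated 20% growth implies a real rate above 30%, which is why observed rates near historical norms are read as markets not pricing in imminent transformative AI.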
He's more generous on Krugman's intellectual honesty, Sumner's gunslinger independence, and the genuine influence of Michael Pettis, even if sectoral balances aren't really a predictive model so much as a coherent-sounding way to feel like you understand macroeconomics. We also touch on Tooze's polycrisis and what Kevin Kelly's "technium" tells us about why people who think AI might destroy us are building it anyway.

Chapter Timestamps:

[00:00:00] – Introduction: academia vs. blogging
[00:08:14] – P(doom), P(TAI), and bottlenecks to 20% GDP growth
[00:14:59] – Employment optimism and AI autonomy
[00:17:30] – Should AIs be allowed to own assets?
[00:19:05] – How Noah uses AI today
[00:20:54] – What happens when AI can replicate your writing?
[00:25:14] – Was Noah's success luck or skill?
[00:30:37] – Meaning collapse vs. the Coasean utopia
[00:50:12] – Thinker takes: Daron Acemoglu and *Power and Progress*
[01:02:23] – Michael Pettis
[01:09:25] – Adam Tooze
[01:11:21] – Paul Krugman
[01:12:54] – Elite overproduction
[01:20:47] – Vibes, expectations, and the economics of happiness
[01:25:21] – Humanity was hitting a wall; AI as new hope

Transcript:

Seth Benzell: Welcome to the Justified Posteriors podcast, the podcast that updates its beliefs about the economics of AI and technology. I'm Seth Benzell, a man who has never been accused of having no opinions, coming to you from Chapman University in sunny Southern California.

Andrey Fradkin: And I'm Andrey Fradkin, excited to learn how we can post our way to the top of the Substack business rankings, coming to you from San Francisco, California. Our guest today is the prominent blogger Noah Smith. Welcome to the show.

Noah Smith: Hey, thanks for having me on.

Andrey Fradkin: Yeah, of course. Well, why don't we get started? We were curious, as still-academics ourselves, how your life is different now as a blogger/commentator versus when you were a professor.

Noah Smith: Well, I meet a lot fewer young people.
Andrey Fradkin: Oh, okay.

Noah Smith: Oh, yeah, I definitely feel younger. I don't feel as much of, like, a wise elder as I used to. Yeah, instead I feel like... I feel younger.

Seth Benzell: I remember when I was just going to grad school, you had recently made the transition to commentating, and I was going through my PhD program thinking, like, "Do I really wanna do full academia? Do I really wanna be more of, like, a public communicator about economic issues?" What do you think about people making that decision? Do you think there are marginal academics or marginal commentators who should have gone in one direction or the other?

Noah Smith: I think there are too few commentators with an academic background, probably. So yeah, there probably are. People like the academic lifestyle. The commentator lifestyle doesn't suit as many people, because it's more uncertain. You have a lot of people yelling that you're an idiot all day, whereas in academia, they just yell that your, like, identification strategy's bad, or that there's a methodological-

Seth Benzell: [laughing]

Noah Smith: ... error, and then call you an idiot in, like, back rooms or whatever. But it's very genteel, it's very easy. And then most people are looking up to you. You've got all these, like, young people just adulating you and looking up to you, and you get all this respect. And in commentating, you get respect, but then you get, like, hordes of people saying, "This person's an idiot," because if you say anything that disagrees with what people already thought or want to think, they will call you an idiot, regardless of how smart you are. And so there will always be people calling you an idiot, and they'll always be right in your face, and so that can be difficult. Also, people don't know how they'll, like, make money from it.
With being an academic, you have, like, this benevolent patron of a university that hands you a salary for, like, well-understood metrics, whereas with commentating, you don't.

Seth Benzell: Do we need a dedicated good-AI or transformative-AI journal? I was just talking to Andrey about this. Why doesn't that exist, Noah? Do we need that-

Noah Smith: You mean a journal about AI, or a journal made of papers written by AI?

Seth Benzell: Oh, an economics... a prestigious economics journal whose topic would be the economics of AI, or the economics of transformative AI specifically.

Andrey Fradkin: I'm not sure we need a journal, Seth.

Seth Benzell: It's in the seed.

Andrey Fradkin: I just think that we put it out there-

Seth Benzell: Why not?

Andrey Fradkin: And then have the AI referee it. I mean, I just feel like thinking in journals is just, like, outmoded at this point.

Noah Smith: AI is moving so much-

Seth Benzell: Well, there's-

Noah Smith: Faster than the economics journal publication cycle that, like, I'm not sure that-

Seth Benzell: Right.

Noah Smith: Like, I'm not sure what utility this has for the world. So maybe it doesn't matter.

Andrey Fradkin: Yeah.

Seth Benzell: It would give people a prestige stamp for working in the area, and you could set it up differently. It could be faster.

Andrey Fradkin: There's no way we're giving anyone a prestige stamp, because our profession famously gives no prestige to no-name journals. So if you truly wrote a great TAI paper, why wouldn't it be published in the AER? That's what an economist would say.

Seth Benzell: Well, so there's a taste issue, right? To the extent you were concerned that the top journals have the wrong taste on these subjects, this would be a potential solution-

Andrey Fradkin: It's not a solution.

Seth Benzell: And everybody starts with zero prestige sometimes.
Andrey Fradkin: You can just put out the working paper and get everyone to read it. This is exactly what we covered with Basil Halperin's paper. So, Noah, we were gonna ask you this at some point, so we might as well ask you now. Have you read his paper? The argument goes that if we will have transformative AI, then interest rates should go up. Have you heard this argument before?

Noah Smith: What's the paper?

Seth Benzell: It's called something to the effect of "Transformative AI and Interest Rates."

Noah Smith: Okay.

Seth Benzell: And the argument in a sentence is: if we're anticipating really powerful economic growth from TAI in five or ten years, then you should want to balance consumption between today and tomorrow, so you lower savings today, which moves the increased interest rates into the present. So anticipated positive transformative AI increases interest rates today. And if you have negative foom, if we think we're gonna blow up the world in five years, well, that's even more reason to consume today, which also bids up interest rates. So the argument is: because interest rates haven't been skyrocketing, TAI cannot be imminent. Do you buy that argument, Noah? Why not?

[00:05:00]

Noah Smith: 'Cause all propositions about real interest rates are wrong. [chuckles]

Andrey Fradkin: Yeah.

Noah Smith: Because we, because people-

Seth Benzell: Henry's second law, of course.

Noah Smith: The reason why... So I'm trying to think of whether I buy it as a general case, because, like, if you massively increase productivity growth, you should increase the safe rate of interest. Like, basically-

Seth Benzell: Right.

Noah Smith: Stocks are so certain to go up that bonds have to sort of match that, right?
So you have some sort of, like, weak risk arbitrage argument right there. But then, if you’ve got, like, AI that’s