Faster, Please! — The Podcast

James Pethokoukis

Welcome to Faster, Please! — The Podcast. Several times a month, host Jim Pethokoukis will feature a lively conversation with a fascinating and provocative guest about how to make the world a better place by accelerating scientific discovery, technological innovation, and economic growth. fasterplease.substack.com

  1. FEB 19

    🌎 Storm watch: My chat with climate policy expert Roger Pielke Jr.

    My fellow pro-growth/progress/abundance Up Wingers in America and around the world: Headlines portend rising seas, raging storms, and a planet in crisis. It’s easy to feel like the future is something to fear; however, the key to cooling things down isn’t scaling civilization back. If the world wants to cut back on carbon emissions without sacrificing growth, the answer lies in bold innovation. A sustainable tomorrow requires smart energy investment and long-term thinking today. On this episode of Faster, Please! — The Podcast, I chat with Roger Pielke Jr. about the ever-evolving discussion around climate change. We talk about the benefits of embracing new energy technology and identifying some easy wins. Pielke is a senior fellow at the American Enterprise Institute, where his research focuses on science and technology policy. He is also a professor emeritus at the University of Colorado Boulder, a distinguished fellow at Japan’s Institute of Energy Economics, a research associate with Risk Frontiers in Australia, and an honorary professor at University College London. Pielke has authored and edited several books, including The Climate Fix: What Scientists and Politicians Won’t Tell You About Global Warming. He also writes The Honest Broker Substack. In This Episode * The Shale Story (1:42) * Unknown Unknowns (7:42) * The Weather Forecast (14:19) * Alternate History (25:23) * The Path Forward (28:25) (A lightly edited transcript of our conversation will appear in my Week in Review issue on Saturday. Another option is using the Substack auto transcript function.) On sale everywhere The Conservative Futurist: How To Create the Sci-Fi World We Were Promised This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe

    35 min
  2. FEB 4

    ☄️ Awaiting apocalypse: My chat with journalist and author Dorian Lynskey

    My fellow pro-growth/progress/abundance Up Wingers in America and around the world: Since humanity’s beginning, we’ve been pondering our end. From war, to disease, to divine reckoning, the means of our destruction seem endless. The advent of the atomic bomb, concerns around climate change, and now AI have prompted many to wonder whether our demise will be random, or if it will come as the result of our own actions. Today on Faster, Please! — The Podcast, I chat with Dorian Lynskey about the way we talk about the end times. We discuss whether catastrophizing leads to action or paralysis and the role of hope in our narratives. Lynskey is a prolific journalist and the author of three books. His most recent, Everything Must Go: The Stories We Tell About the End of the World, was released last month in the US. He also co-hosts two podcasts, Origin Story and Oh God, What Now?. In This Episode * Scare Tactics (1:32) * Effects of Hopefulness (10:25) * AI Doomsayers (17:01) * Countdown to Catastrophe (21:15) (A lightly edited transcript of our conversation will appear in my Week in Review issue on Saturday. Another option is using the Substack auto transcript function.) On sale everywhere The Conservative Futurist: How To Create the Sci-Fi World We Were Promised This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe

    27 min
  3. JAN 30

    ⤴️ Beyond Abundance: My chat with Brink Lindsey about his new book, 'The Permanent Problem'

    My fellow pro-growth/progress/abundance Up Wingers in America and around the world: The human pursuit of progress stems from our desire for security and a higher quality of life. Yet, even as today’s advanced economies are the richest and most comfortable they’ve ever been, something is amiss. What explains the decline in R&D growth, mental health, and birth rates, just to name a few challenges? In his new book, The Permanent Problem: The Uncertain Transition from Mass Plenty to Mass Flourishing, author Brink Lindsey identifies the critical gap between material abundance and abundant human flourishing. Today on Faster, Please! — The Podcast, Brink and I chat about what constitutes a truly healthy society, beyond surface-level affluence. We identify the conditions for continual progress after our basic needs have been met and far exceeded. Lindsey is a senior vice president at the Niskanen Center. He previously served as vice president for research at the Cato Institute and as a senior scholar at the Kauffman Foundation. He has authored and co-authored six books on economics and culture, and is the author of his own Substack, also titled The Permanent Problem. In This Episode * More of everything . . . !? (1:54) * Falling fertility (7:31) * What we’ve lost (10:20) * Evaluating flourishing (13:13) * A culture of growth (20:24) * Future-world problems (28:04) (A lightly edited transcript of our conversation will appear in my Week in Review issue on Saturday. Another option is using the Substack auto transcript function.) On sale everywhere The Conservative Futurist: How To Create the Sci-Fi World We Were Promised This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe

    32 min
  4. JAN 20

    ⚛️ A final (and lasting?) nuclear revival: My chat with nuclear energy advocate Jessica Lovering

    My fellow pro-growth/progress/abundance Up Wingers in America and around the world: Headlines abound with news of the coming nuclear renaissance — a long-awaited era of clean, abundant energy to power our future. But this is hardly the first time the media has heralded the dawn of the atomic age. Still, this round of nuclear optimism is seeing unprecedented corporate investment, more cost-effective modular reactors, and a greater sense of political consensus. Today on Faster, Please! — The Podcast, I chat with Jessica Lovering about past obstacles to growth, and what we might expect from the US going forward. Lovering is an advocate for nuclear power, currently based in Sweden. She is the co-founder and former executive director of the Good Energy Collective, as well as a senior fellow with the Nuclear Innovation Alliance and the Energy for Growth Hub. She also authors her own Substack, Nuclear Power to the People. In This Episode * The lost Atomic Age (1:30) * To regulate or not to regulate (8:26) * Reactor capacity past and future (10:44) * The economics of nuclear (14:51) * Power projection (18:32) * The new nuclear status quo (24:04) (A lightly edited transcript of our conversation will appear in my Week in Review issue on Saturday. Another option is using the Substack auto transcript function.) On sale everywhere The Conservative Futurist: How To Create the Sci-Fi World We Were Promised This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe

    27 min
  5. DEC 4, 2025

    🪐 NASA and beyond: My chat with space policy analyst Casey Dreier

    My fellow pro-growth/progress/abundance Up Wingers in America and around the world: NASA is attempting the difficult task of juggling highly ambitious goals with possibly intense budget cuts. Despite personnel losses and unclear leadership, the agency is racing to put humans on the Moon — ideally ahead of China — and then Mars. Today on Faster, Please! — The Podcast, I’m chatting with Casey Dreier about this complicated new era in NASA’s history. We’ll discuss whether or not we’re really in a space race, what to make of the differing visions of Elon Musk and Jeff Bezos, and the rise of planetary defense. Dreier is chief of space policy at The Planetary Society, where he advocates for planetary exploration, defense, and the search for extraterrestrial life. He has been featured in major publications from The New York Times to the Washington Post, and hosts his own podcast, Planetary Radio: Space Policy Edition. In This Episode * The return of Isaacman (1:32) * Ditch the Space Race (7:42) * Visions of space (14:48) * Planetary defense (21:23) * Proceed with optimism (24:51) (A lightly edited transcript of our conversation will appear in my Week in Review issue on Saturday. Another option is using the Substack auto transcript function.) On sale everywhere The Conservative Futurist: How To Create the Sci-Fi World We Were Promised This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe

    29 min
  6. NOV 20, 2025

    ✨🔬 Acceleration through AI-automated R&D: My chat (+transcript) with researcher Tom Davidson

    My fellow pro-growth/progress/abundance Up Wingers in America and around the world: What really gets AI optimists excited isn’t the prospect of automating customer service departments or human resources. Imagine, rather, what might happen to the pace of scientific progress if AI becomes a super research assistant. Tom Davidson’s new paper, How Quick and Big Would a Software Intelligence Explosion Be?, explores that very scenario. Today on Faster, Please! — The Podcast, I talk with Davidson about what it would mean for automated AI researchers to rapidly improve their own algorithms, thus creating a self-reinforcing loop of innovation. We talk about the economic effects of self-improving AI research and how close we are to that reality. Davidson is a senior research fellow at Forethought, where he explores AI and explosive growth. He was previously a senior research fellow at Open Philanthropy and a research scientist at the UK government’s AI Security Institute. In This Episode * Making human minds (1:43) * Theory to reality (6:45) * The world with automated research (10:59) * Considering constraints (16:30) * Worries and what-ifs (19:07) Below is a lightly edited transcript of our conversation. Making human minds (1:43) . . . you don’t have to build any more computer chips, you don’t have to build any more fabs . . . In fact, you don’t have to do anything at all in the physical world. Pethokoukis: A few years ago, you wrote a paper called “Could Advanced AI Drive Explosive Economic Growth?,” which argued that growth could accelerate dramatically if AI started generating ideas the way human researchers once did. In your view, population growth historically powered a kind of ideas feedback loop: more people meant more researchers, which meant more ideas and rising incomes. That loop broke after the demographic transition in the late 19th century, but you suggest that AI could restart it: more ideas, more output, more AI, more ideas.
Does this new paper in a way build upon that paper? “How Quick and Big Would a Software Intelligence Explosion Be?” The first paper you referred to is about the biggest-picture dynamic of economic growth. As you said, throughout long-run history, when we produced more food, the population increased. That additional output transferred itself into more people, more workers. These days that doesn’t happen. When GDP goes up, that doesn’t mean people have more kids. In fact, with the demographic transition, the richer people get, the fewer kids they have. So now we’ve got more output, we’re getting even fewer people as a result, so that’s been blocked. This first paper is basically saying, look, if we can manufacture human minds or human-equivalent minds in any way, be it by building more computer chips, or making better computer chips, or any way at all, then that feedback loop gets going again. Because if we can manufacture more human minds, then we can spend output again to create more workers. That’s the first paper. The second paper double clicks on one specific way that we can use output to create more human minds. It’s actually, in a way, the scariest way because it’s the way of creating human minds which can happen the quickest. So this is the way where you don’t have to build any more computer chips, you don’t have to build any more fabs, as they’re called, these big factories that make computer chips. In fact, you don’t have to do anything at all in the physical world. It seems like most of the conversation has been about how much investment is going to go into building how many new data centers, and that seems like that is almost the entire conversation, in a way, at the moment. But you’re not looking at compute, you’re looking at software. Exactly, software. So the idea is you don’t have to build anything. You’ve already got loads of computer chips and you just make the algorithms that run the AIs on those computer chips more efficient.
This is already happening, but it isn’t yet a big deal because AI isn’t that capable. But already, one year out, Epoch, this AI forecasting organization, estimates that just in one year, it becomes 10 times to 1000 times cheaper to run the same AI system. Just wait 12 months, and suddenly, for the same budget, you are able to run 10 times as many AI systems, or maybe even 1000 times as many for their most aggressive estimate. As I said, not a big deal today, but if we then develop an AI system which is better than any human at doing research, then now, in 10 months, you haven’t built anything, but you’ve got 10 times as many researchers that you can set to work or even more than that. So then we get this feedback loop where you make some research progress, you improve your algorithms, now you’ve got loads more researchers, you set them all to work again, finding even more algorithmic improvements. So today we’ve got maybe a few hundred people that are advancing state-of-the-art AI algorithms. I think they’re all getting paid a billion dollars a person, too. Exactly. But maybe we can 10x that initially by having them replaced by AI researchers that do the same thing. But then those AI researchers improve their own algorithms. Now you have 10x as many again — you haven’t built more computer chips, you’re just running them more efficiently — and then the cycle continues. You’re throwing more and more of these AI researchers at AI progress itself, and the algorithms are improving in what might be a very powerful feedback loop. In this case, it seems to me that you’re not necessarily talking about artificial general intelligence. This is certainly a powerful intelligence, but it’s narrow. It doesn’t have to do everything, it doesn’t have to play chess, it just has to be able to do research. It’s certainly not fully general. You don’t need it to be able to control a robot body. You don’t need it to be able to solve the Riemann hypothesis.
You don’t need it to be able to even be very persuasive or charismatic to a human. It’s not narrow, I wouldn’t say; it has to be able to do literally anything that AI researchers do, and that’s a wide range of tasks: They’re coding, they’re communicating with each other, they’re managing people, they are planning out what to work on, they are thinking about reviewing the literature. There’s a fairly wide range of stuff. It’s extremely challenging. It’s some of the hardest work in the world to do, so I wouldn’t say it’s narrow, but it’s not everything. It’s some kind of intermediate level of generality in between a mere chess algorithm that just does chess and the kind of AGI that can literally do anything. Theory to reality (6:45) I think it’s a much smaller gap for AI research than it is for many other parts of the economy. I think people who are cautiously optimistic about AI will say something like, “Yeah, I could see the kind of intelligence you’re referring to coming about within a decade, but it’s going to take a couple of big breakthroughs to get there.” Is that true, or are we actually getting pretty close? Famously, predicting the future of technology is very, very difficult. Just a few years before people invented the nuclear bomb, famous, very well-respected physicists were saying, “It’s impossible, this will never happen.” So my best guess is that we do need a couple of fairly non-trivial breakthroughs. We had the start of RL training a couple of years ago, which became a big deal within the language model paradigm. I think we’ll probably need another couple of breakthroughs of that kind of size. We’re not talking a completely new approach, throw everything out, but we’re talking like, okay, we need to extend the current approach in a meaningfully different way. It’s going to take some inventiveness, it’s going to take some creativity, we’re going to have to try out a few things.
I think, probably, we’ll need that to get to the researcher that can fully automate OpenAI — that’s a nice way of putting it: OpenAI doesn’t employ any humans anymore, they’ve just got AIs there. There’s a difference between what a model can do on some benchmark versus becoming actually productive in the real world. That’s why, while all the benchmark stuff is interesting, the thing I pay attention to is: How are businesses beginning to use this technology? Because that’s the leap. What is that gap like, in your scenario, between an AI model that can do a theoretical version of the lab and one that can actually be incorporated in a real laboratory? It’s definitely a gap. I think it’s a pretty big gap. I think it’s a much smaller gap for AI research than it is for many other parts of the economy. Let’s say we are talking about car manufacturing and you’re trying to get an AI to do everything that happens there. Man, it’s such a messy process. There’s a million different parts of the supply chain. There’s all this tacit knowledge in all the human workers’ minds. It’s going to be really tough. There’s going to be a very big gap going from those benchmarks to actually fully automating the supply chain for cars. For automating what OpenAI does, there’s still a gap, but it’s much smaller, because firstly, all of the work is virtual. Everyone at OpenAI could, in principle, work remotely. Their top research scientists, they’re just on a computer all day. They’re not picking up bricks and doing stuff like that. So also that already means it’s a lot less messy. You get a lot less of that kind of messy world reality stuff slowing down adoption. And also, a lot of it is coding, and coding is almost uniquely clean in that, for many coding tasks, you have clearly defined metrics for success, and so that makes AI much better. You can just have a go. Did AI succeed in the test? If not, try something else or do a gradient descent update. That said, there’s still a lot of mes

    25 min
  7. NOV 4, 2025

    🚀 The trillion-dollar space race: My chat (+transcript) with journalist Christian Davenport

    My fellow pro-growth/progress/abundance Up Wingers, China’s spacefaring ambitions pose tough competition for America. With a focused, centralized program, Beijing seems likely to land taikonauts on the moon before another American flag is planted. Meanwhile, NASA faces budget cuts, leadership gaps, and technical setbacks. In his new book, journalist Christian Davenport chronicles the fierce rivalry between American firms, mainly SpaceX and Blue Origin. It’s a contest that, despite the challenges, promises to propel humanity to the moon, Mars, and maybe beyond. Davenport is an author and a reporter for the Washington Post, where he covers NASA and the space industry. His new book, Rocket Dreams: Musk, Bezos, and the Inside Story of the New, Trillion-Dollar Space Race, is out now. In This Episode * Check-in on NASA (1:28) * Losing the Space Race (5:49) * A fatal flaw (9:31) * State of play (13:33) * The long-term vision (18:37) * The pace of progress (22:50) * Friendly competition (24:53) Below is a lightly edited transcript of our conversation. Check-in on NASA (1:28) The Chinese tend to do what they say they’re going to do on the timeline that they say they’re going to do it. That said, they haven’t gone to the moon . . . It’s really hard. Pethokoukis: As someone — and I’m speaking about myself — who wants to get America back to the moon as soon as possible, get cooking on getting humans to Mars for the first time, what should I make of what’s happening at NASA right now? They don’t have a lander. I’m not sure the rocket itself is ready to go all the way; we’ll find out some more fairly soon with Artemis II. We have leadership in flux; maybe it’s not going to be an independent agency anymore, it’s going to join the Department of Transportation. It all seems a little chaotic. I’m a little worried. Should I be? Davenport: Yes, I think you should be.
And I think a lot of the American public isn’t paying attention and they’re going to see the Artemis II mission, which you mentioned, and that’s that mission to send a crew of astronauts around the moon. It won’t land on the moon, but it’ll go around, and I think if that goes well, NASA’s going to take a victory lap. But as you correctly point out, that is a far cry from getting astronauts back on the lunar surface. The lander isn’t ready. SpaceX, as acting NASA administrator Sean Duffy just said, is far behind, reversing himself from like a month earlier when he said no, they appear to be on track, but everybody knew that they were well behind because they’ve had 11 test flights, and they still haven’t made it to orbit with their Starship rocket. The rocket itself that’s going to launch them into the vicinity of the moon, the SLS, launches about once every two years. It’s incredibly expensive, it’s not reusable, and there are problems within the agency itself. There are deep cuts to it. A lot of expertise is taking early retirements. It doesn’t have a full-time leader. It hasn’t had a full-time leader since Trump won the election. At the same time, they’re sort of beating the drum saying we’re going to beat the Chinese back to the lunar surface, but I think a lot of people are increasingly looking at that with some serious concern and doubt. For what it’s worth, when I looked at the betting markets, they gave the Chinese a two-to-one edge. They said that there was about a 65 percent chance they were going to get there first. Does that sound about right to you? I’m not much of a betting man, but I do think there’s a very good chance. The Chinese tend to do what they say they’re going to do on the timeline that they say they’re going to do it. That said, they haven’t gone to the moon, they haven’t done this. It’s really hard. They’re much more secretive; if they have setbacks and delays, we don’t necessarily know about them.
But they’ve shown over the last 10, 20 years how capable they are. They have a space station in low Earth orbit. They’ve operated a rover on Mars. They’ve gone to the far side of the moon twice, which nobody has done, and brought back a sample return. They’ve shown the ability to keep people alive in space for extended periods of time on the space station. The moon seems within their capabilities and they’re saying they’re going to do it by 2030, and they don’t have the nettlesome problem of democracy where you’ve got one party coming in and changing the budget, changing the direction for NASA, changing leadership. They’ve just set the moon — and, by the way, the south pole of the moon, which is where we want to go as well — as the destination and have been beating a path toward that for several years now. Is there anyone arguing for merging NASA into the Department of Transportation? Is there a hidden reservoir of support? Is that an idea people have been talking about that’s now suddenly emerged to the surface? It’s not something that I particularly heard. The FAA is going to regulate the launches, and they coordinate with the airspace and make sure that the air traffic goes around it, but I think NASA has a particular expertise. Rocket science is rocket science — it’s really difficult. This isn’t for the faint of heart. I think a lot of people look at human space flight and it’s romanticized. It’s romanticized in books and movies and in popular culture, but the fact of the matter is it’s really, really hard, it’s really dangerous; every time a human being gets on one of those rockets, there’s a chance of an explosion, of something really, really bad happening, because a million things have to go right in order for them to have a successful flight. The FAA does a wonderful job managing — or, depending on your point of view, some people don’t think they do such a great job — but I think space is a whole different realm, for sure. Losing the Space Race (5:49) . . .
the American flags that the Apollo astronauts planted, they’re basically no longer there anymore . . . There are, however, two Chinese flags on the moon. Have you thought about what it will look like the day after, in this country, if China gets to the moon first and we have not returned there yet? Actually, that’s a scenario I kind of paint out. I’ve got this new book called Rocket Dreams and we talk about the geopolitical tensions in there. Not to give too much of a spoiler, but NASA has said that the first person to return to the moon, for the US, is going to be a woman. And there’s a lot of people thinking, who could that be? It could be Jessica Meir, who is a mother and posted a picture of herself pregnant and saying, “This is what an astronaut looks like.” But it could very well be someone like Wang Yaping, who’s also a mother, and she came back from one of her stays on the International Space Station and had a message for her daughter that said, “I come back bringing all the stars for you.” So I think that I could see China doing it and sending a woman, and that moment would be a huge coup for them, and that would obviously be symbolic. But when you’re talking about space as a tool of soft power and diplomacy, I think it would attract a lot of other nations to their side who are sort of waiting on the sidelines or who frankly aren’t on the sidelines, who have signed on to go with the United States, but are going to say, “Well, they’re there and you’re not, so that’s who we’re going to go with.” I think about the wonderful alt-history show For All Mankind, which begins with the Soviets beating the US to the moon, and instead of Neil Armstrong giving the “one small step for man,” basically the Russian cosmonaut gives, “It’s one small step for Marxism-Leninism,” and it was a bummer.
And I really imagine that day, if China beats us, it is going to be not just, “Oh, I guess now we have to share the moon with someone else,” but it’s going to cause some national soul searching. And there are clues to this, and actually I detail these two anecdotes in the book, that all of the flags, the American flags that the Apollo astronauts planted, they’re basically no longer there anymore. We know from Buzz Aldrin’s memoir that the flag that he and Neil Armstrong planted in the lunar soil in 1969, Buzz said that he saw it get knocked over by the thrust and the exhaust of the module lifting off from the lunar surface. Even if that hadn’t happened, just the radiation environment would’ve bleached the flag white, as scientists believe it has to all the other flags that are on there. So there is essentially no trace of the Apollo flags. There are, however, two Chinese flags on the moon, and the first one, which was planted a couple of years ago, or unveiled a couple of years ago, was made not of cloth, but their scientists and engineers spent a year building a composite material flag designed specifically to withstand the harsh environment of the moon. When they went back last summer for their far-side sample return mission, they built a flag — and this is pretty amazing — out of basalt, like volcanic rock, which you find on Earth. And they used basalt from Earth, but of course basalt is common on the moon. They were able to take the rock, turn it into lava, extract threads from the lava and weave this flag, which is now near the south pole of the moon. The significance of that is they are showing that they can use the resources of the moon, the basalt, to build flags. It’s called ISRU: in situ resource utilization. So to me, nothing symbolizes their intentions more than that. A fatal flaw (9:31) . . . I tend to think if it’s a NASA launch . . . and there’s an explosion . . .
I still think there are going to be investigations, congressional reports, I do think things would slow down dramatically. In the book, you really suggest a new sort of golden age of space. We have multiple countries launching. We seem t

    30 min
  8. OCT 24, 2025

    🤖 Thoughts of a (rare) free-market AI doomer: My chat (+transcript) with economist James Miller

    My fellow pro-growth/progress/abundance Up Wingers, Some Faster, Please! readers have told me I spend too little time on the downsides of AI. If you’re one of those folks, today is your day. On this episode of Faster, Please! — The Podcast, I talk with self-described “free-market AI doomer” James Miller. Miller and I talk about the risks inherent in super-smart AI, some possible outcomes of a world of artificial general intelligence, and why government seems uninterested in the existential risk conversation. Miller is a professor at Smith College, where he teaches law and economics, game theory, and the economics of future technology. He has his own podcast, Future Strategist, and a great YouTube series on game theory and intro to microeconomics. On X (Twitter), you can find him at @JimDMiller. In This Episode * Questioning the free market (1:33) * Reading the markets (7:24) * Death (or worse) by AI (10:25) * Friend and foe (13:05) * Pumping the brakes (20:36) * The only policy issue (24:32) Below is a lightly edited transcript of our conversation. Questioning the free market (1:33) Most technologies have gone fairly well and we adapt . . . I’m of the belief that this is different. Pethokoukis: What does it mean to be a free-market AI doomer and why do you think it’s important to put in the “free-market” descriptor? Miller: It really means to be very confused. I’m 58, and I was basically a socialist when I was young, then studied markets, became a committed free-market person, think they’re great for economic growth, great for making everyone better off — and then I became an AI doomer, like wait, markets are pushing us towards more and more technology, but I happen to think that AI is eventually going to lead to the destruction of humanity. So it means to kind of reverse everything — I guess it’s the equivalent of losing faith in your religion. Is this a post-ChatGPT, November 2022 phenomenon? Well, I’ve lost hope since then.
The analogy is we’re on a plane, we don’t know how to land, but hopefully we’ll be able to fly for quite a bit longer before we have to. Now I think we’ve got to land soon and there doesn’t seem to be an easy way of doing it. So yeah, the faster AI has gone — and certainly ChatGPT has been an amazing advance — the less time I think we have and the less time I think we can get it right. What really scared me, though, was the Chinese LLMs. I think you really need coordination among all the players and it’s going to be so much harder to coordinate now that we absolutely need China to be involved, in my opinion, to have any hope of surviving for the next decade. When I speak to people from Silicon Valley, there may be some difference about timelines, but there seems to be little doubt that — whether it’s the end of the 2020s or the end of the 2030s — there will be a technology worthy of being called artificial general intelligence or superintelligence. Certainly, I feel like when I talk to economists, whether it’s on Wall Street, in Washington, or at think tanks, they tend to speak about AI as a general purpose technology like the computer, the internet, electricity — in short, something we’ve seen before. As for something beyond that, certainly the skepticism is far higher. What are your fellow economists who aren’t in California missing? I think you’re properly characterizing it, I’m definitely an outlier. Most technologies have gone fairly well and we adapt, and economists believe in the difference between the seen and the unseen. It’s really easy to see how technologies, for example, can destroy jobs — harder to see new jobs that get created, but new jobs keep getting created. I’m of the belief that this is different. The best way to predict the future is to go by trends, and I fully admit, if you go by trends, you shouldn’t be an AI doomer — but not all trends apply.
I think that’s why economists were much better at modeling the past and modeling old technologies. They’re naturally thinking this is going to be similar, but I don’t think that it is, and I think the key difference is that we’re not going to be in control. We’re creating something smarter than us. So it’s not like having a better rifle and saying it’ll be like old rifles — it’s like saying, “Hey, let’s have mercenaries run our entire army.” That creates a whole new set of risks that having better rifles does not.

I’m certainly not a computer scientist, I would never call myself a technologist, so I’m very cautious about making any kind of predictions about what this technology can be, where it can go. Why do you seem fairly certain that we’re going to get to a point where we will have a technology beyond our control? Set aside whether it will mean a bad thing happens — why are you confident that the technology itself will be worthy of being called general intelligence or superintelligence?

Looking at the trends: Scott Aaronson, who is one of the top computer scientists in the world, was just mentioning on Twitter a few days ago how GPT-5 helped improve a new result. So I think we’re close to the highest levels of human intellectual achievement, but it would be a massively weird coincidence if the highest humans could get was also the highest AIs could get. We have lots of limitations that an AI doesn’t. I think a good analogy would be chess, where for a while the best chess players were human, and now we’re at the point where chess programs are so good that humans add absolutely nothing to them. And I just think the same is likely to happen; these programs keep getting better. The other thing is, as an economist, I think it is impossible to be completely accurate about predicting the future, but stock markets are, on average, pretty good, and as I’m sure you know, literally trillions of dollars are being bet on this technology working.
So the people who have a huge incentive to get this right think, yeah, this is the biggest thing ever. If Nvidia were worth $100 million, yeah, maybe they’re not sure — but it’s the most valuable company in the world right now. That’s the wisdom of the markets, which I still believe in: the markets are saying, “We think this is probably going to work.”

Reading the markets (7:24)

. . . for most final goals an AI would have, it would have intermediate goals such as gaining power, not being turned off, wanting resources, wanting compute.

Do you think the bond market’s saying the same thing? It seems to me that the stock market might be saying something about AI having great potential, but when I look at the bond markets, that doesn’t seem so clear to me.

I haven’t been looking at the bond markets for that kind of signal, so I don’t know.

I guess you can make the argument that if we were really going to see this acceleration, we’re going to need a huge demand for capital and we would see higher interest rates, and I’m not sure you really see that evidence so far.

It doesn’t mean you’re wrong by any means. I think there are maybe two different messages. Figuring out what the market’s doing at any point in time is pretty tricky business. If we think through what happens if AI succeeds, it’s a little weird: there’s this huge demand for capital, but AI could also destroy the value of money, in part by destroying us. You might be right about the bond market message. I’m paying more attention to the stock market messages; there are a lot of things going on with the bond markets.

So the next step: you’re looking at the trend of the technology, but then there’s the issue of “Well, why be negative about it? Why assume a scenario where bad things happen rather than good things?”

That’s a great question, and it’s one almost never addressed. It goes by the concept of instrumental convergence.
I don’t know what the goals of AI are going to be. Nobody does, because they’re programmed using machine learning; we don’t know what they really want, and that’s why they do weird things. So I don’t know its final goals, but I do know that, for most final goals an AI would have, it would have intermediate goals such as gaining power, not being turned off, wanting resources, wanting compute. Well, the easiest way for an AI to generate lots of computing power is to build lots of data centers, and the best way of doing that is probably going to poison the atmosphere for us. So for pretty much anything, if an AI is merely indifferent to us, we’re dead.

I always feel like I’m asking someone to jump through a hoop when I ask them about any kind of timeline, but what is your sense of it?

We know the best released models can help the top scientists with their work. We don’t know how good the best unreleased models are. For the top models, you pay something like $200 a month — they can’t be giving you that much compute for that. So right now, if OpenAI is devoting a million dollars of compute to looking at scientific problems, how good is that compared to what we have? If that’s very good, if that’s at the level of our top scientists, we might be a few weeks away from superintelligence. So my guess is that within three years we have a superintelligence and humans no longer have control. I joke that I think Donald Trump is probably the last human president.

Death (or worse) by AI (10:25)

No matter how bad a situation is, it can always get worse, and things can get really dark.

Well, that’s a beautiful segue, because literally written on my list of questions next was that question: When you talk about Trump being maybe the last human president, do you mean because we’ll have an AI-mediated system, because AI will be capable of governing, or because AI will just demand to be governing? AI kills everyone so there’s no more president, or it takes over, or Trump is president . . .
