Jack chats with Sebastian Mallaby, senior fellow at the Council on Foreign Relations, about his new book The Infinity Machine: Demis Hassabis, DeepMind, and the Quest for Superintelligence. They discuss current challenges in AI safety, the U.S.-China race and prospects for cooperation, and the emerging risks posed by powerful new models like Anthropic’s Mythos. They also talk about tensions between frontier labs and the U.S. government, and the trajectory toward greater government control.

Mentioned:

* Sebastian Mallaby, The Infinity Machine: Demis Hassabis, DeepMind, and the Quest for Superintelligence (2026)

Thumbnail: President Trump delivers remarks at the White House AI Summit in Washington, D.C., Wednesday, July 23, 2025. (Official White House Photo by Joyce N. Boghosian)

Consider becoming a free or paid subscriber to Executive Functions.

This is an edited transcript of an episode of “Executive Functions Chat.” You can listen to the full conversation by following or subscribing to the show on Substack, Apple, Spotify, or wherever you get your podcasts.

Jack Goldsmith: Today I’m chatting with Sebastian Mallaby, who’s a senior fellow at the Council on Foreign Relations and an acclaimed biographer and writer. And we’re going to be talking about his newest book, which is called The Infinity Machine. Sebastian, thanks for talking with me.

Sebastian Mallaby: Thank you, Jack. Nice to be with you.

Jack Goldsmith: So tell us what the book is about. Who is Demis Hassabis, and why did you write a book about him?

Sebastian Mallaby: So the book is about artificial intelligence, and it’s centered on this character, Demis Hassabis, who is, in a way, the OG sort of AI lab leader, right? He starts DeepMind, this startup in London, back in 2010, before AI could even recognize the photograph of a cat—like nothing worked. It was full AI winter. So this is five years before Sam Altman and Elon Musk start OpenAI. It’s fully 11 years before Anthropic gets started. So he was extremely early.
So if you wanted to tell the story of the making of modern AI through a personality, you know, Demis’s career and intellectual development maps perfectly onto that story.

Jack Goldsmith: So the thing that’s most interesting to me about him is that, as you emphasize in the book, his real interest in this, I think it’s fair to say, is scientific and not profit-making. And he, at least at the outset, and I think even today, has a rather idealistic—to me anyway, idealistic or optimistic—conception of the technology and how it can be used. But the story I also see is someone who—and I don’t mean this uncharitably—but who has basically engaged in a series of compromises or fudges with regard to those values as he’s gotten deeper and deeper into the AI competition. So is that fair? And can you talk about that arc?

Sebastian Mallaby: Yes. I mean, he started DeepMind in 2010 with an absolute focus on AI safety. In fact, he met his scientific co-founder, Shane Legg, at a safety lecture in which Shane projected that by 2030 or so, AIs would be sophisticated enough—cleverer than humans—have their own sort of objective functions, and would maybe start to threaten humans. And this was the lecture over which they bonded.

And then in 2014, Demis Hassabis sells his company DeepMind to Google. And part of the sale condition was that AI would not be used for military purposes, that it would be safeguarded by a sort of ethics oversight committee that would be separate from the corporate leadership of Google. So he took it very seriously. And then this continues for a while. Between 2016 and 2019, he wages a secret battle, a thing called Project Mario, where he’s trying to put pressure on Google’s leadership to have this independent safety oversight board, because Google kind of reneged on the deal at the point of sale in 2014. And then after 2019, it kind of fades away. And, you know, by now you have Google being willing to provide AI to the national security establishment.
In the U.S., there is no safety and ethics oversight board. And Demis is left explaining to me, well, you know, I feel as if, you know, if I lean into Google and I’m part of the team there, and I, you know, understand the different pressures that a corporation is under, then I have a seat at the table. And so when push comes to shove, I can chime in in favor of safety. And so I’m a good person—trust me—is kind of the bottom line, which is a sort of flimsy scaffolding of reassurance for an alarmed world.

Jack Goldsmith: Especially since—I mean, this was also a time—a lot of this is happening at a time before ChatGPT amazes the world a few years ago with whatever model it was, I can’t remember. And suddenly there’s this massive competition among several frontier labs that has been extremely fierce—among those labs and with Chinese firms—and the countries are in fierce competition. And he’s now leading—you talk in the book about how Google combined its AI efforts—and he’s leading it. So he’s really leading, in some sense, this fiercely competitive charge, which doesn’t appear to be taking safety all that seriously. Is that fair?

Sebastian Mallaby: Yeah, it’s fair. And, you know, I think there’s a slight caveat in that his style is to pursue safety ideas secretly. I mean, he doesn’t talk about them. And Dario Amodei, the leader of Anthropic, is extremely public when he picks a fight with the Pentagon, when he releases this new model called Mythos, where he’s publicly said, you know, this is too dangerous to release generally, so I’m going to release it to a sort of restricted list of people. He likes to be very out there in public with it. Demis Hassabis, on the other hand, did two important things, to my knowledge, about safety.
One was this secret battle I described before, which he was so unkeen to have sort of move into the public sphere that when I discovered it through leaks from other people, you know, I had to talk to his general counsel, who was trying to tell me I wasn’t allowed to publish that. So he really didn’t want that to be public.

And then secondly, he told Rishi Sunak in 2023, after ChatGPT came out, “Mr. Prime Minister, you know, I have an idea for you, which is you could have an international discussion on AI safety—invite the Chinese, invite everybody—start a process that might lead to some kind of understanding internationally on AI safety.” Demis never told me that he told the Prime Minister that. I only know this because other people, like the Prime Minister’s advisers, told me. So he didn’t advertise what he was doing. So I think he’s trying to do things now, but they’re not in the public view. So that’s a slight caveat.

But basically, you’re right. I mean, he’s leading one of the major labs, Google DeepMind, in frontier AI, racing as fast as he can, even releasing, by the way, open-weight models, which by his own analysis are dangerous because you can’t control them once they’re out there. And so there is this contradiction—you could call it even hypocrisy—between his stated beliefs about AI safety and what he’s actually doing. And so then the question is, well, how harshly does one judge him? And I’ve just floated the word hypocrisy. But on the other hand, were he to quit his job and go off and become a professor somewhere and pursue research, which I think is the alternative path for him, it wouldn’t make the world safer, right? There’d still be this race dynamic.

Jack Goldsmith: To be clear, I wasn’t judging him. And he seems—I’m trying to understand—he seems like a thoroughly decent, honorable, brilliant guy.
I’m just trying to understand the mindset of someone who, from a very young age, had these extraordinary scientific ambitions, which he’s been as important as anyone in making possible. And—but safety and this kind of benign vision has always been part of it, and it just seems to have been overtaken by reality—mostly competitive, financial, and global competition reality. And I’m just wondering how he processes that. That’s what I’m getting at.

Sebastian Mallaby: Absolutely. I mean, I was exactly trying to do the same thing—to kind of figure out how you process it and sort of portray that. And, you know, at the end of the book, he tells me, you know, I’m in a paradoxical situation. On the one hand, Shane Legg and I projected back in 2009, 2010 that by around 2030, AI would be very powerful. And that’s kind of what’s going to happen. And we’ve been central to building it. So, you know, I’ve delivered on this vision in an amazingly gratifying way. On the other hand, I had this hope that I could control the technology somehow and make it safe, and that hasn’t worked.

And, you know, when you want to ask, you know, why did it turn out so contrary to his expectations? You know, it’s the Oppenheimer syndrome. Oppenheimer led the Manhattan Project, built the amazing technology, and was an incredible scientific leader as well as a scientist, and thought he could sort of go and sell Truman not to use the bomb or to give the technology to the UN or whatever. Truman just kicks him out of his office and says, “Don’t bring that guy in here again.” So scientists think that they can control their inventions, but often the inventions have their own will.

Jack Goldsmith: Okay, you’ve written a lot of interesting essays closer to the topics of this Substack in connection with the publication of the book.
And I just want to talk about some of these policy and governance themes that are implicated in the book, but that you’ve talked about, I think, more in connection with its publication. First of all—and you’re, you know, the keenest of observers of these various relationships and where we are in these AI races—so I just want to get your temperature on, first, what is the state of the relations between the U.S. government and the front