Years before OpenAI became a household name, Karen Hao was one of the very first journalists to gain access to the company. What she saw when she did unsettled her. Despite a name that signaled transparency, executives were elusive, the culture secretive. Despite publicly heralding a mission to build AGI, or artificial general intelligence, company leadership couldn't really say what that meant, or how they defined AGI at all. The seeds of a startup soon to be riven by infighting, rushing to be first to market with a commercial technology and a powerful narrative, and led by an apparently unscrupulous leader, were all right there. OpenAI deemed Hao's resulting story so negative it refused to speak with her for three years. More people should have read it, probably.

Since then, OpenAI has launched DALL-E and ChatGPT, amassed a world-historic war chest of venture capital, and set the standard for generative AI, the definitive technological product of the 2020s. And Hao has been in the trenches, following along, investigating the company every step of the way. The product of all that reportage, her new book, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, is now officially out. It's excellent. In fact, I would go so far as to say that if you were looking to understand modern Silicon Valley, the AI boom, and the impact of both on the wider world by reading just one book, that book should be Empire of AI.

So, given that it could not be more in the Blood in the Machine wheelhouse, I invited Hao to join me for the first-ever BITM livestream, which we held yesterday afternoon, to discuss its themes and revelations. It went great, imo. I wasn't sure how many folks would even drop by, as I'd never tried a livestream here before, but by the end there were hundreds of you in the room, leaving thoughtful comments and questions, stoking a great discussion.

This is the first BITM podcast, too—though perhaps I got a little overzealous; the audio quality isn't always great, and if I want to get official about it, I think I'd have to edit a new intro, as I was kind of all over the place. We'll figure this stuff out, eventually, so thanks for bearing with. But the conversation wound up being so good that in addition to reposting the video above, I transcribed it below, too, so you can read it as a Q+A. Forgive any typos. (I didn't transcribe the audience Q+A portion.)

As always, all of this work is made possible *100%* by paid supporters of this newsletter. If you value work like this—in-depth interviews with folks like Hao, who are fighting the good fight—please consider becoming a paid subscriber. Many human-generated thanks to all those who already do.

BLOOD IN THE MACHINE: Okay, greetings and welcome to the very first Blood in the Machine Multimedia Spectacular, with Karen Hao. Karen is a tech journalist extraordinaire. She's been a reporter for the MIT Technology Review and the Wall Street Journal, and you currently write for the Atlantic, as well as other places. You lead the Pulitzer Center's AI Spotlight Series, where you train journalists around the world how to cover AI. And after reading this book, I'm so glad it is you doing the training and not certain other journalists in this ecosystem—we won't name names. So congratulations on all of that. Did I get it all? Did I get all the accolades?

Karen Hao: Yes [laughs].

BLOOD IN THE MACHINE: Okay, perfect. But most importantly, for our purposes today, Karen has written this book, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI.
And it's out this week. And let me just say this bluntly: this is not the interview you want to come to for hardballs and gotchas on Karen, because I just absolutely love this book. I personally hoped somebody would write this book. I think it's just a real feat of reportage and cultural analysis and economic and social critique. It's a panoramic look, not just at the leading generative AI company, but at the last five to 10 years of some of the most important technological, economic, and cultural headwinds in all of tech, as well as how they're impacting, reshaping, and injuring communities around the world. If you have time to read one book about AI and its global and political implications, then this is it. Honestly, this is it. And we'll dig into why in just a second. I can't recommend it enough. Okay. End of effusive praise. Karen, thank you so much for joining.

Karen Hao: Thank you so much for having me, Brian. And it's an honor to be part of this first livestream. I religiously read all of your issues, and the work that you're doing is also so effective and inspirational for me. So thank you.

BLOOD IN THE MACHINE: Well, thank you. And I look forward to diving on in. So let us do so right now. And let's just start with the title. This book is called Empire of AI, not, say, "OpenAI: The Company That Changed Everything." It is very explicitly, I think, this framing, which really does sort of put in context the entire story to come, through quite a useful lens. So why is that? Why is it called Empire of AI? Why is this book about OpenAI beginning with this empire framing?

Karen Hao: Yeah, so the thing that I have come to realize over reporting on OpenAI and AI for the last seven years is that we need to start using new language to really capture the full scope and magnitude of the economic and political power that these companies like OpenAI now have. And what I eventually concluded was that the only real word that captures all of that is empire. AI companies are new forms of empire. And the reason is that, in the long history of European colonialism, empires of old had several features to them. First, they laid claim to resources that were not their own, and they would create rules that suggested that they were in fact their own. They exploited a lot of labor, as in they didn't pay many workers, or paid them very, very little, for the labor that would fortify the empire. They competed with one another in this kind of moralistic way, where the British Empire would say they were better than the French Empire, or the Dutch Empire would say they were better than the British Empire, and all of this competition ultimately accelerated the extraction, the exploitation, because their empire alone had to be the one at the head of the race, leading the world towards modernity and progress. And the last feature of empire is that they all have civilizing missions, and, whether it was rhetoric or whether they truly believed it, they would fly this banner of: we are plundering the world because this is the price of bringing everyone to the future.

And empires of AI have all of these features. They are also laying claim to resources that are not their own, like the data and the work of artists, writers, creators. And they also design rules to suggest that actually it is their own: oh, it's all just on the internet, and under copyright law it's fair use.
And they also exploit a lot of labor around the world, in that they do not pay very well the contractors who are literally working for these companies to clean up their models and to do all the labeling and the preparation of the data that goes into their models. And they are ultimately creating labor-automating technology, so they're exploiting labor on the other end of the AI development process as well, in the deployment of these models, where OpenAI literally defines AGI as highly autonomous systems that outperform humans at most economically valuable work. So their technologies are suppressing the ability of workers to mobilize and demand more rights. And they do it in this aggressive race where they're saying: there's a bad guy, we're the good guy, so let us continue to race ahead and be number one.

And one of the things that I mention in the book is that empires of old were extremely violent, and we do not have that kind of overt violence with empires of AI today. But we need to understand that modern-day empires will look different than empires of old, because there has been 150 years of human rights progress, and so modern-day empires will take that playbook and move it into what would be acceptable today. One of the things that I don't put in the book itself, but started using as an analogy, is this: if you think about the British East India Company, they were originally a company that was engaging in mutually beneficial economic activity in India. And at some point a switch flipped, where they gained enough economic and political leverage that they were able to start acting in their self-interest with absolutely no consequence. And that's when they dramatically evolved into an imperial power. And they did this with the backing of the British crown. They did it with the resources of the British crown, with the license of the British crown.

And we are now at a similar moment. I froze this manuscript in early January, and then the Trump administration came into power, and we are literally now seeing the same inflection point happening, where these companies already have such profound power and are more powerful than most governments around the world. And previously, really the only government they didn't necessarily have complete power over was the U.S. government. And now we've reached that point where the Trump administration is fully backing these companies, allowing them to do whatever they want, completely frictionless. And so we have also gotten to the point now where these companies have turned into imperial powers that can do anything in their self-interest with no material consequence.

BLOOD IN THE MACHINE: Yeah. Well said. And what more profound an example of these sorts of imperial tendencies that you're talking about than for your book to drop the same week that there's literally a bill getting pushed through reconciliation that says 'states can't write any more laws about AI. We're going to ban lawmaking around AI. This is too important.' It really fits into that def