Welcome to Episode 53 of The HockeyStick Show. I'm Miko Pawlikowski, and this week I sat down with Richard Heimann, Director of AI for the State of South Carolina and author of "Sutskever's List", to talk about the papers that built modern AI, the man behind OpenAI's biggest breakthroughs, and what happens when living doubts become explosive decisions.

Richard walked me through Ilya Sutskever's legendary reading list: 27 papers that supposedly explain 90% of what's happening in artificial intelligence, and why understanding this curated canon matters more than drowning in the weekly flood of new research. The conversation moved fluidly between deep learning history, the Sam Altman firing saga, bubble economics, and the challenge of separating genuine progress from AGI fever dreams.

The Reading List That Became a Book

We started by exploring how a simple recommendation from Ilya to John Carmack turned into a full book project. When Ilya shared his reading list in 2021 or 2022, he made a promise: read these papers and you'll understand 90% of what's going on in AI.

Manning Publications initially wanted an anthology: 27 chapters analyzing each paper in isolation. Richard pushed back. The papers weren't just standalone artifacts; they built on each other and told a larger human story. Ilya's story. The publisher agreed, and Richard spent the past year weaving the technical breakthroughs into a narrative that makes sense for people who aren't writing these papers themselves.

The book is done. The final chapters just went up on Manning's early access program, and the print release is scheduled for May 2025.

Who Is Ilya Sutskever and Why Should We Care?

For those who only know Ilya from the Sam Altman firing drama, Richard provided crucial context. This is the person responsible for AlexNet in 2012, the moment that launched the modern deep learning era. He's also behind Word2Vec, sequence-to-sequence models, and the scaling of transformers at OpenAI: GPT-1, 2, 3, and beyond.

Beyond the technical contributions, Ilya has a mystique. He doesn't say much, and when he does, it's high signal. His work has consistently centered on safety concerns, which makes him both a technical innovator and someone genuinely worried about the implications. The reading list reflects his mental model: it gives insight into what he sees, what he values, and why he makes the decisions he makes.

The Sam Altman Firing: Living Doubts Gone Wrong

We spent significant time unpacking the OpenAI board saga. Richard's take was fascinating: he traced it back to GPT-2 in 2019, when OpenAI deemed the model "too dangerous to release" and staged its rollout over nine months. At the time, researchers were skeptical; it looked like hype-building. But Richard sees it differently now: it was a living doubt. Ilya and OpenAI acted on their safety concerns in a transparent, reversible way. They could always say "we were wrong" and release the full model, which they eventually did.

The Sam Altman firing was different: explosive, irreversible, and impossible to unwind once initiated. The lesson from a safety perspective: whatever your doubts are, structure them so you can reverse course if you're wrong.

Bubble Economics and the Free Lunch Era

I asked the question everyone wants answered: are we in an AI bubble? Richard's response was nuanced. Yes, it's bubbly. But bubbles aren't inherently bad. Nothing important happens without bubbles; you don't get this kind of capital, talent, and momentum from purely rational actors making measured bets.

The key difference from 2008 is that there's real underlying technology here. It's more like the dot-com bubble: bad ideas will get flushed out and valuations will correct, but the fundamental shift is genuine.

What's remarkable isn't the diminishing returns everyone's complaining about; it's that scaling worked at all. For 50-60 years, AI progress required genuine innovation: new architectures, new training tricks. For the last five years, we just made models bigger and threw more data at them. That free lunch was unprecedented, and now it's ending. Ilya himself recently said the era of scaling is over: we're going to need good ideas again.
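The episode stays non-technical here, but the "free lunch, then asymptote" claim has a well-known quantitative shape in the scaling-law literature (e.g., Hoffmann et al., 2022). Below is a minimal Python sketch of that power-law form; the function name and the constants are illustrative assumptions, not fitted values from any paper or from the book.

```python
# Illustrative scaling-law sketch (not from the episode or the book).
# The literature models loss as a power law in parameter count N and
# training tokens D:  L(N, D) = E + A / N**alpha + B / D**beta,
# where E is an irreducible floor. Constants here are made up for shape.

def loss(n_params: float, n_tokens: float,
         E: float = 1.7, A: float = 400.0, B: float = 410.0,
         alpha: float = 0.34, beta: float = 0.28) -> float:
    """Loss falls predictably with scale but never drops below E."""
    return E + A / n_params ** alpha + B / n_tokens ** beta

# Scale parameters 1000x, keeping ~20 tokens per parameter (roughly the
# compute-optimal ratio). Each 10x jump buys a smaller absolute gain:
for n in (1e9, 1e10, 1e11, 1e12):
    print(f"{n:.0e} params -> loss {loss(n, 20 * n):.3f}")
```

That flattening curve, steady gains without new ideas but with a hard floor, is the shape of the "free lunch ending" Richard describes.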
AGI: Paper Hopes vs. Living Technology

Richard was refreshingly direct about AGI hype. He doesn't find the concept appealing. It's a paper hope: something people talk about but don't actually build toward in meaningful ways. The substrate we're working with isn't going to produce human-like intelligence, and we don't need it to. The technology is already powerful and will keep improving linearly, but the exponential curves and S-curves are done. We're hitting asymptotes. The implication: a lot of the AI safety concerns about alignment and existential risk become less urgent. He doesn't see an existential threat coming from his computer.

What's Underrated and Overrated

I asked Richard what people are sleeping on and what's empty hype.

Overrated: AGI and the entire AI safety research agenda focused on existential risk.

Underrated: the technology itself, at least among skeptics. Too many people dismiss these models as "stochastic parrots" or "just databases" without understanding what they actually are. The technology will be pervasive in five to ten years, and the skeptics are needlessly rounding down.

Working in Government AI

We also covered Richard's day job as Director of AI for South Carolina. He evaluates use cases from 80+ state agencies, all interested in adopting AI. Some have clear ideas; others need help defining their approach. About 80% of the work is advisory: looking at use cases from technical, governance, privacy, and security perspectives. The remaining 20% is an informal accelerator developing strategic use cases in-house. The scale is what attracts him: even in a small state of 5 million people, the potential impact is enormous.

At its core, this episode was about understanding foundations in a field that rewards chasing novelty. How to build mental models that persist beyond the next model release. How to act on doubts without making irreversible mistakes. And what it takes to write a book that captures not just the papers, but the worldview behind them.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.hockeystick.show