HockeyStick Show

Miko Pawlikowski

Steal breakthrough ideas in tech, business & performance from world-class experts www.hockeystick.show

  1. Inside OpenAI: the Future of Deep Learning, with Richard Heimann - HockeyStick #53

    FEB 21

Welcome to Episode 53 of The HockeyStick Show. I’m Miko Pawlikowski, and this week I sat down with Richard Heimann, Director of AI for the State of South Carolina and author of “Sutskever’s List”, to talk about the papers that built modern AI, the man behind OpenAI’s biggest breakthroughs, and what happens when living doubts become explosive decisions. Richard walked me through Ilya Sutskever’s legendary reading list: 27 papers that supposedly explain 90% of what’s happening in artificial intelligence, and why understanding this curated canon matters more than drowning in the weekly flood of new research. The conversation moved fluidly between deep learning history, the Sam Altman firing saga, bubble economics, and the challenge of separating genuine progress from AGI fever dreams.

The Reading List That Became a Book

We started by exploring how a simple recommendation from Ilya to John Carmack turned into a full book project. When Ilya shared his reading list in 2021 or 2022, he made a promise: read these papers and you’ll understand 90% of what’s going on in AI. Manning Publications initially wanted an anthology: 27 chapters analyzing each paper in isolation. Richard pushed back. The papers weren’t just standalone artifacts; they built on each other and told a larger human story. Ilya’s story. The publisher agreed, and Richard spent the last year weaving the technical breakthroughs into a narrative that makes sense for people who aren’t writing these papers themselves. The book is done. The final chapters just went up on Manning’s early access program. Print release is scheduled for May 2025.

Who Is Ilya Sutskever and Why Should We Care?

For those who only know Ilya from the Sam Altman firing drama, Richard provided crucial context. This is the person responsible for AlexNet in 2012: the moment that launched the modern deep learning era. He’s behind Word2Vec, sequence-to-sequence models, and the scaling of transformers at OpenAI. GPT-1, 2, 3, and beyond. But beyond the technical contributions, Ilya has this mystique. He doesn’t say much. When he does, it’s high signal. And his work has consistently centered on safety concerns, which makes him both a technical innovator and someone genuinely worried about the implications. The reading list reflects his mental model. It gives insight into what he sees, what he values, and why he makes the decisions he makes.

The Sam Altman Firing: Living Doubts Gone Wrong

We spent significant time unpacking the OpenAI board saga. Richard’s take was fascinating: he traced it back to GPT-2 in 2019, when OpenAI deemed the model “too dangerous to release” and staged its rollout over nine months. At the time, researchers were skeptical. It looked like hype-building. But Richard sees it differently now: it was a living doubt. Ilya and OpenAI acted on their safety concerns in a transparent, reversible way. They could always say “we were wrong” and release the full model, which they eventually did. The Sam Altman firing was different. It was explosive, irreversible, and impossible to unwind once initiated. The lesson from a safety perspective: whatever your doubts are, structure them so you can reverse course if you’re wrong.

Bubble Economics and the Free Lunch Era

I asked the question everyone wants answered: are we in an AI bubble? Richard’s response was nuanced. Yes, it’s bubbly. But bubbles aren’t inherently bad. Nothing important happens without bubbles. You don’t get this kind of capital, talent, and momentum from purely rational actors making measured bets. The key difference from 2008: there’s real underlying technology here. It’s more like the dot-com bubble: bad ideas will get flushed out, valuations will correct, but the fundamental shift is genuine. What’s remarkable isn’t the diminishing returns everyone’s complaining about. It’s that scaling worked at all. For 50-60 years, AI progress required genuine innovation: new architectures, new training tricks. For the last five years, we just made models bigger and threw more data at them. That free lunch was unprecedented. Now the free lunch is ending. Ilya himself recently said the era of scaling is over. We’re going to need good ideas again.

AGI: Paper Hopes vs. Living Technology

Richard was refreshingly direct about AGI hype. He doesn’t find the concept appealing. It’s a paper hope: something people talk about but don’t actually build toward in meaningful ways. The substrate we’re working with isn’t going to produce human-like intelligence. And we don’t need it to. The technology is already powerful and will continue improving linearly. But the exponential curves and S-curves are done. We’re hitting asymptotes. The implication: a lot of the AI safety concerns about alignment and existential risk become less urgent. He doesn’t see an existential threat from his computer.

What’s Underrated and Overrated

I asked Richard what people are sleeping on and what’s empty hype. Overrated: AGI and the entire AI safety research agenda focused on existential risk. Underrated: the technology itself, at least among skeptics. Too many people dismiss these models as “stochastic parrots” or “just databases” without understanding what they actually are. The technology will be pervasive in five to ten years, and the skeptics are needlessly rounding down.

Working in Government AI

We also covered Richard’s day job: Director of AI for South Carolina. He evaluates use cases from 80+ state agencies, all interested in adopting AI. Some have clear ideas; others need help defining their approach. About 80% of the work is advisory: looking at use cases from technical, governance, privacy, and security perspectives. The remaining 20% is an informal accelerator developing strategic use cases in-house. The scale is what attracts him. Even in a small state of 5 million people, the potential impact is enormous.

At its core, this episode was about understanding foundations in a field that rewards chasing novelty. How to build mental models that persist beyond the next model release. How to act on doubts without making irreversible mistakes. And what it takes to write a book that captures not just the papers, but the worldview behind them.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.hockeystick.show

    34 min
  2. Exploring GenAI, with Maggie Engler & Numa Dhamani - HockeyStick #52

    JAN 19

Welcome to Episode 52 of The HockeyStick Show. I’m Miko Pawlikowski, and this week I sat down with Maggie Engler and Numa Dhamani, co-authors of “Introduction to Generative AI (Second Edition)”, to talk about navigating the AI landscape without getting swept up in hype, fear, or misinformation. Maggie and Numa shared what it’s like to write a technical book in a field moving so fast that a second edition became necessary just a year after the first. The conversation moved fluidly between AI agents, copyright battles, bubble economics, and the challenge of staying grounded when headlines scream about both utopia and apocalypse.

When Your Book Needs an Update Before the Ink Dries

We started by exploring why a second edition was needed so quickly. The answer wasn’t just new models or better benchmarks; it was a fundamental shift in how people think about and use generative AI. When the first edition came out, people were still asking “What is generative AI?” By the time they started the second edition, the question had become “How do I actually use this in my daily work?” The technology moved from experiment to infrastructure in less than two years. Maggie and Numa described the challenge of writing about a field where specific results and capabilities change weekly. Their solution: focus on teaching people how to interpret new developments rather than chasing the latest numbers.

Agents: Promise, Limitations, and Reality

We spent significant time on AI agents, one of the biggest additions to the second edition. The conversation was refreshingly balanced: no wild predictions about fully automated workflows next quarter, and no dismissive skepticism either. They explained how agents show real promise in constrained domains like coding, where you can verify results against tests. Tool-use capabilities have improved. Infrastructure like Anthropic’s Model Context Protocol is maturing. But we’re still far from the autonomous systems some headlines suggest. The key insight: agents work best when you can clearly define success and verify outcomes. The further you get from that, the more human oversight you need.

The Legal Wild West and Copyright Chaos

The copyright discussion was particularly interesting. Maggie and Numa didn’t dance around the obvious: large-scale model developers are training on copyrighted material. The question isn’t whether it’s happening; it’s what happens next. We talked about the recent Sora controversy, where OpenAI initially told anime studios they could opt out character by character, then reversed course within days. The lawsuits, the settlements, the attempts at licensing frameworks: it’s all still being negotiated in real time. Their take: we’re converging on some baseline principles around transparency and accountability, but the intellectual property questions will take much longer to resolve.

Bubble or Revolution? Yes.

I asked the question everyone wants answered: are we in an AI bubble? Their response was nuanced. Yes, there are bubble characteristics: high valuations, massive investment, limited returns, lots of speculation. But no, the underlying technology isn’t a passing fad. The comparison to the dot-com era felt apt: real value underneath, a correction likely, but the fundamental shift genuine. Maggie predicted we’ll see market consolidation and some valuations adjusting. Numa emphasized that we’re moving from wild optimism toward more measured metrics and tempered hype. But the core technology will keep evolving, and returns will materialize.

Starting Points and Practical Advice

We closed by discussing how people should actually get started with generative AI today. Their advice was simple: just play with the tools. Try Gemini, Claude, ChatGPT. Most have free tiers. Experiment with prompting. See what works for you. The hesitation people feel, not knowing the “right” use cases or perfect prompts, is the main barrier. The best way through it is hands-on exploration, not more reading.
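The point about agents and verifiable outcomes can be sketched as a simple generate-and-check loop. This is a minimal illustration of the pattern, not any specific framework's API; `generate` and `verify` are hypothetical stand-ins for an LLM call and an objective check such as a test suite.

```python
def run_with_verification(generate, verify, max_attempts=3):
    """Ask for candidates until an objective check passes.

    generate: callable(attempt_number) -> candidate (stand-in for an LLM call)
    verify:   callable(candidate) -> bool (stand-in for tests or another check)
    """
    for attempt in range(1, max_attempts + 1):
        candidate = generate(attempt)
        if verify(candidate):
            return candidate, attempt  # success: a verified outcome
    return None, max_attempts          # give up: this is where humans step in

# Example: "success" is any candidate divisible by 3.
result, attempts = run_with_verification(
    generate=lambda n: n * 2,          # candidates: 2, 4, 6, ...
    verify=lambda c: c % 3 == 0,
)
```

The further the task drifts from a `verify` you can actually write, the less this loop buys you, which is exactly the human-oversight point from the conversation.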
At its core, this episode was about maintaining perspective in a field that rewards extremes. How to stay informed without getting overwhelmed. How to evaluate capabilities honestly without falling into either hype or cynicism. And what it takes to write a book that stays relevant when the field updates faster than publishing cycles allow.

    28 min
  3. Become Legendary, with Tommy Breedlove - HockeyStick #51

    12/13/2025

Welcome to Episode 51 of The HockeyStick Show. I’m Miko Pawlikowski, and this week I sat down with Tommy Breedlove (the author of the book “Legendary”) to talk about the long road from survival mode to self-worth, and how money, identity, and purpose get tangled together along the way. Tommy shared his personal story, from growing up around addiction and incarceration to building a successful career, losing himself inside of it, and ultimately redefining what “success” actually means. The conversation moved fluidly between money, masculinity, relationships, and the quiet damage caused by chasing external validation.

Money, Identity, and the Cost of Approval

Tommy started by unpacking how early trauma and instability shape our relationship with achievement. For him, success became a shield. Money, status, and performance were ways to feel safe, respected, and untouchable. He explained how this pattern shows up for many high performers, especially in tech and business. On the surface, things look great. Underneath, there is burnout, resentment, and a constant fear of being exposed. The more approval you chase, the more expensive it becomes to maintain the image.

When Net Worth Becomes Self-Worth

We spent time digging into how money quietly becomes a proxy for value. Tommy talked about how easy it is to confuse financial success with identity, and how that mindset erodes relationships, health, and joy over time. He challenged the idea that more is ever enough when the underlying wound is unresolved. Without self-respect, success only amplifies insecurity. With it, money becomes a tool instead of a scoreboard.

Redefining Success on Your Own Terms

The conversation shifted toward what it actually takes to step off the treadmill. Tommy described slowing down, setting boundaries, and getting honest about what you want rather than what you think you should want. That process often involves hard tradeoffs: letting go of roles, relationships, and expectations that no longer fit, learning how to say no, and building a life that feels aligned, even if it looks smaller from the outside.

At its core, this episode was about sustainability at a human level. How to build a career without losing yourself. How to pursue ambition without outsourcing your self-worth. And what success looks like when nobody is watching.

Thanks for listening!

    35 min
  4. From FerretDB to Percona, with Peter Farkas - HockeyStick #50

    11/29/2025

Welcome to Episode 50 of The HockeyStick Show. I’m Miko Pawlikowski, and this week I sat down with Peter Farkas to dig into the messy reality of modern infrastructure, open source licensing, and what really happens when companies try to protect their products from hyperscalers. We walked through his recent LinkedIn post, the story behind it, the unintended consequences of “defensive licensing,” and what the future might look like for teams trying to build sustainable businesses on top of open source.

Cloud Providers, Open Source, and the Licensing Squeeze

Peter started by explaining the background behind his post: why companies shift to restrictive licenses like SSPL, what they’re trying to defend against, and why it often snowballs into confusion for both users and vendors. He shared examples of how cloud providers respond, how this changes the economics of running a service, and why certain licensing decisions end up punishing the wrong people. The conversation opened up into a broader point about how blurry the line has become between infrastructure, managed services, and full-blown products.

Why “Open Source Alternatives” Aren’t Always What They Seem

We also talked about the wave of drop-in replacements and forks that appear every time a company tightens its license. Peter explained the real costs behind “just run it yourself,” the pressure it puts on engineering teams, and why some of these forks still depend heavily on the original maintainers. Underneath it all is a bigger question: who actually pays for the innovation that everyone wants to remain free?

The Realities of Building a Business Around Infrastructure

Peter broke down the challenges of turning infrastructure into a viable product: operational burden, attack surfaces, compatibility expectations, and the never-ending stream of breaking changes that users don’t see. The theme kept coming back to sustainability. What does fair monetization look like? How do you protect your company without alienating your community? And what options do founders realistically have when cloud giants can replicate their service within months?

Thanks for listening!

    39 min
  5. Building Better Platforms, with Ajay Chankramath, Sean Alvarez & Nic Cheneweth - HockeyStick #49

    11/15/2025

Welcome to Episode 49 of The HockeyStick Show! I’m Miko Pawlikowski, and this week I sat down with three platform leaders who’ve lived through the messy, unglamorous reality of building internal platforms that actually help teams ship better software: Ajay Chankramath, Sean Alvarez, and Nic Cheneweth. We unpacked what platforms really are, why they’re misunderstood, and how good platform work is far more human than technical.

Platforms Aren’t Magic — They’re Just Good Engineering Done at Scale

All three guests pointed out a simple truth: most companies don’t need fancy platform branding, they just need to fix the basics. Shared tooling, stable environments, repeatable patterns — the “boring stuff” is what creates real leverage. A platform isn’t a product you install. It’s a consistent way of working that reduces chaos and duplication.

Lesson: A platform is not the shiny thing — it’s the reliable thing.
Action: Identify one repeated pain your teams face and solve it once, centrally.

Internal Customers Matter More Than Internal Technology

A theme that came up repeatedly: platform work only succeeds when the platform team treats engineers as customers, not as people who should “just use what we built.” Ajay talked about how teams often skip discovery and jump straight into building. Sean emphasized empathy. Nic highlighted that many “platform failures” are really product failures — misaligned expectations, poor communication, and unclear value.

Lesson: If no one is using your platform, it’s not a platform — it’s shelfware.
Action: Before building anything new, interview five developers about what they actually need.

Reduce Cognitive Load, Don’t Add to It

Every engineer knows the pain of juggling too many deployment paths, tooling options, and config formats. A good platform reduces cognitive load by removing decisions that shouldn’t matter. This isn’t about limiting freedom. It’s about letting teams spend their energy on product, not plumbing.

Lesson: The best platform decisions remove decisions.
Action: Pick one workflow today that your team repeats and standardize it.

Developer Experience Is a Business Metric

Nic made a point that stuck with me: no executive wakes up excited about “platform engineering.” They care about throughput, reliability, cost, and time-to-market. A platform only earns its place when it moves those numbers. You don’t justify platform work with architecture diagrams. You justify it by showing how much faster teams deliver because of it.

Lesson: If you want executive support, speak the language of outcomes.
Action: Track one metric affected by platform friction — and show the before and after.

Platforms Fail When They Become Mandates Instead of Choices

Sean raised this repeatedly: forcing a platform onto teams rarely works. The healthiest platforms are opt-in, because they’re useful enough that teams choose them. Mandates hide problems. Adoption exposes them.

Lesson: If you have to force adoption, the real issue isn’t adoption — it’s value.
Action: Ask a team why they didn’t choose your platform. Their answer is your roadmap.

Culture Makes or Breaks the Platform

Ajay described how teams often treat platform issues as technical problems, when they’re usually cultural ones: trust, communication, ownership, and the willingness to collaborate across team boundaries. The best platforms grow in environments where experimentation is allowed, feedback loops are short, and teams feel safe saying “this isn’t working.”

Lesson: A platform is a cultural artifact as much as a technical one.
Action: Start including platform updates in your engineering ceremonies — make it part of the conversation, not an afterthought.

A final thought from me

This conversation reminded me that platforms aren’t about abstraction layers or golden paths or YAML templates. They’re about helping people do their best work without tripping over the infrastructure underneath them. If you take one thing from this episode: treat platform engineering as a service, not a structure. Talk to your teams, fix the pain that matters, and keep the human side front and center.

Thanks for listening!

    41 min
  6. Exploring OpenUK, with Amanda Brock - HockeyStick #48

    10/04/2025

Welcome to Episode 48 of The Hockey Stick Show! I’m Miko Pawlikowski, and in this episode, I had the pleasure of speaking with Amanda Brock, CEO of OpenUK and a leading voice in open technology, open source, and policy. Amanda’s experience spans law, tech, and community-building, and we dove deep into the realities of open source, AI, and what it means to build a collaborative ecosystem.

1. Focus on People, Not Just Companies

Amanda emphasized that OpenUK is about bringing together people from all backgrounds, not just companies. The UK’s open source community is diverse and global, and real progress comes from connecting individuals, not just organizations.

Lesson: Community is built on people, not brands.
Action: Reach out to someone in your field you haven’t met yet—collaboration starts with a conversation.

2. Openness in AI Is Complicated, but Critical

We discussed the difference between “open source” and “open innovation,” especially in the context of AI models like Meta’s Llama. Amanda explained that true openness means anyone can use and share code freely, but many new licenses add friction.

Lesson: Don’t take “open” at face value—read the fine print.
Action: Next time you use an “open” tool, check the license and see what you’re really allowed to do.

3. The UK’s Role in Open Source Is Bigger Than You Think

Amanda shared that the UK leads Europe in open source contributors, but most projects are small and globally connected. The focus isn’t on “UK projects,” but on people who choose to live and work here.

Lesson: Impact isn’t about ownership—it’s about contribution.
Action: Contribute to a global project, even if it’s just a small fix or a comment.

4. Funding and Culture Shape Innovation

We talked about why so many open source companies and talent move to the US: better funding terms, a risk-taking culture, and strong local networks. The UK and Europe have the talent, but need to foster environments that reward experimentation and accept failure.

Lesson: Innovation thrives where risk is embraced.
Action: Try something new in your work—even if it might fail, you’ll learn more than by playing it safe.

5. The Future of AI and Openness Is Global

Amanda’s final message: the future of AI and open tech isn’t just about national interests or big companies—it’s about global collaboration, access, and making sure everyone can benefit from new technology.

Lesson: The best solutions come from working together across borders.
Action: Share your knowledge and support open access, wherever you are.

A final thought from me

Talking with Amanda was a reminder that technology is about people, policy, and the choices we make together. If you take one thing from this episode: be open, stay curious, and connect with your community.

Thanks for listening!

    30 min
  7. The Future of Diffusion Models, with Mark Liu - HockeyStick #47

    08/09/2025

Welcome to Episode 47 of The Hockey Stick Show! I’m Miko Pawlikowski, and in this episode, I had the chance to chat with Mark Liu about AI, creativity, and how to think about the future of technology. Mark has worked on some really cool projects, and we went deep into both the technical and personal lessons.

1. Understand the Basics Before Chasing the Hype

Mark explained diffusion models in a super simple way. Imagine starting with a noisy picture and slowly cleaning it up until it looks like your idea — like “a dog walks on the beach.” That’s how modern AI image generation works.

Lesson: Don’t just follow the buzzwords. Pick one idea, break it into small parts, and understand it step-by-step.
Action: Next time you hear about a new tech trend, try to explain it to a friend in one sentence. If you can’t, you don’t understand it yet.

2. New Tech Is Cool, But It’s Messy at First

We talked about video generation models like OpenAI’s Sora. They’re powerful, but they still make weird mistakes. The tech will get better, but it’s not “magic” yet.

Lesson: Don’t expect perfection from new tools. Early versions always have flaws.
Action: Experiment with new tools, but keep backups and don’t bet your whole project on them.

3. Writing a Book or Building a Project Takes Clarity

Mark shared how writing his book forced him to clarify his ideas. You can’t just write for yourself — you have to make it easy for others to understand.

Lesson: Teaching something is one of the best ways to truly understand it.
Action: If you’re learning something new, write a blog post or record a short video explaining it.

4. Don’t Be Afraid to Start Small

We both agreed that starting is often the hardest part. Many people wait until they feel “ready,” but that day never comes.

Lesson: Progress beats perfection.
Action: Break your goal into the smallest possible next step and do it today — even if it’s just 15 minutes of work.

5. Your Network Is Part of Your Skillset

Mark’s opportunities often came from people he’d met through side projects, talks, and collaborations.

Lesson: Skills matter, but so do connections.
Action: Share your work in public, even if it’s not perfect. You never know who’s watching.

Final Thought from Me

Talking with Mark reminded me that the future of AI (and any tech) isn’t just about algorithms — it’s about people who are willing to explore, make mistakes, and share what they learn. If you take one thing from this episode: be curious, start small, and share your work.

Thanks for listening!
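Mark's noisy-picture analogy maps onto a small loop: start from noise and remove a little of the estimated noise at each step. The sketch below is a toy using NumPy that cheats by knowing the target image; a real diffusion model would instead use a trained network to predict the noise at each step, so treat this only as intuition for the loop's shape.

```python
import numpy as np

def toy_denoise(noisy, target, steps=50):
    """Toy diffusion-style sampling loop.

    At each step, estimate the noise in the current image and remove a
    fraction of it. Here the 'estimate' cheats by knowing the target;
    real models learn this estimate from data.
    """
    x = noisy.copy()
    for t in range(steps):
        estimated_noise = x - target           # stand-in for the model's prediction
        x = x - estimated_noise / (steps - t)  # remove a growing fraction of it
    return x

rng = np.random.default_rng(0)
target = np.zeros(4)                 # the "clean" image (tiny, for illustration)
noisy = target + rng.normal(size=4)  # start from pure noise
clean = toy_denoise(noisy, target)   # walks back toward the target step by step
```

The gradual walk from noise toward the image is the part that carries over to real systems; everything the toy "knows" about the target is what the neural network has to learn.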

    29 min
  8. Will Prompts take your Job?, with Dan Cleary - HockeyStick #46

    07/26/2025

Welcome to Episode 46 of The Hockey Stick Show! I’m Miko Pawlikowski, and in this episode, I had the chance to chat with Dan Cleary, co-founder of PromptHub. We dove deep into the world of prompt engineering — what it is, why it matters, and how it’s evolving alongside rapid advancements in AI.

Understanding the Rise of Prompt Engineering

Dan explained how PromptHub emerged from real challenges building LLM-powered features into traditional software. Versioning, collaboration, and the non-deterministic nature of LLMs highlighted the need for a dedicated, GitHub-like platform for prompts. Prompt engineering, it turns out, isn’t just a trend — it’s about treating prompts like first-class citizens in software development.

Why PromptHub, Not GitHub for Prompts?

Dan shared why GitHub alone doesn’t cut it for managing prompts:

* Different update cycles: Prompts evolve faster than code.
* Non-technical collaboration: PMs and domain experts need to iterate on prompts without touching code.
* Testing & deployment: PromptHub includes tools tailored for LLM workflows, like testing prompts across models and environments.

Prompt Engineering as a Core Skill

We explored the evolution of prompting — from copy-paste templates to a nuanced skill essential in production applications. Dan emphasized:

* Practicing prompting improves your model intuition.
* Strong prompts are crucial in LLM-integrated products.
* Prompt engineering may not be a standalone role forever — but it will remain a vital skill for PMs, engineers, and AI builders.

The Myth of the Jobless AI Future

Dan addressed fears about AI replacing jobs: “If engineering becomes 10x easier, we won’t have 1/10th of the engineers — we’ll have 10x more code.” AI will empower smaller teams to achieve more, not eliminate human roles. The key? Learn the tools, become harder to replace.

Prompting Today: Still a Bit of Magic

From politeness in prompts to chain-of-thought breakthroughs, we’re still learning what works. Some “tricks” are fading, but clear, structured prompting remains core. As models evolve (like GPT-4.1 and Claude 3), prompt style must adapt too — and companies now publish official guidance on model-specific prompting.

What Makes Prompt Engineering Hard

Dan broke down what still makes this challenging:

* Translating tacit knowledge into text
* Handling subtle context in real-world scenarios
* Designing reusable, portable prompts across models

Even with smarter models, clear instructions remain an art — and a differentiator in production LLM apps.

Looking Ahead: What’s Next for Prompting

We discussed where the space is going:

* Formalization of tooling (like PromptHub, MCP protocols)
* Agents, reasoning, long-term planning
* Voice interfaces as a rising trend
* More companies building prompt ops and infra stacks

Prompt engineering is here to stay — but it’s becoming more sophisticated and integrated.

Getting Started with Prompt Engineering

Dan’s advice for beginners:

* Use models a lot. Prompting improves with practice.
* Test across models. Understand how they interpret inputs.
* Explore community prompts. Learn by forking and tweaking.
* Read the PromptHub blog for deep dives and practical guidance.

Live in London: The First In-Person Prompt Engineering Event

Dan and I are excited to co-host the first in-person promptengineering.rocks Conference on October 16, 2025 in London — following the success of the virtual event in 2023. Stay tuned for more details!

Thanks for tuning in! You can connect with Dan on PromptHub.us and follow the latest prompt engineering discussions on their blog and community platform.

    20 min
