There’s a guy named Eliezer Yudkowsky who dropped out of school in seventh grade, never graduated high school, holds zero degrees in computer science or AI or literally anything, and has somehow convinced people to give him over $50 million to “save humanity” from artificial intelligence. His organization, MIRI, has been operating for over 20 years and has published almost nothing in peer-reviewed journals that mainstream AI researchers actually cite. And now? He’s got a book coming out called “If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All.” It’s the kind of thing you’d expect to see at a gas station checkout counter between “Bigfoot Stole My Wife” and “Aliens Control Our Government.” But this isn’t some tabloid nonsense—this is a guy who testified before the U.S. Senate and got listed in TIME Magazine’s “100 Most Influential People in AI.”

Thing is, this isn’t really about AI safety at all. This is about something much more interesting (and profitable): how complete incompetence can monetize anxiety when it’s packaged with the right mix of intellectual-sounding jargon and apocalyptic certainty.

I first encountered Yudkowsky and his colleague Nate Soares on Sam Harris’s podcast—episode 434, “Can We Survive AI?” And honestly? I’m still processing my disappointment that Harris, someone whose epistemological rigor I’ve respected for years, would completely abandon his usual critical-thinking standards to platform these guys. Harris typically grills guests on their credentials and evidence. But somehow, when it comes to AI doom, he just… didn’t. It was like watching a skeptic suddenly decide to interview psychics and take their arguments seriously.

The Eternal Grifter’s Formula

Yudkowsky didn’t invent this playbook. He’s just the latest in a long line of self-appointed prophets who figured out that selling fear pays way better than actually solving problems. Every generation has them. In the 1980s, it was evangelical leaders convincing America that Satan worshippers were running daycare centers. In the 2000s, it was alternative medicine gurus selling supplements to protect you from “Big Pharma.” Now it’s AI doom. The pattern is always the same:

Step 1: Find or Create an Existential Threat. Target something most people don’t understand but that sounds plausibly dangerous. Mike Warnke figured this out in the 1970s when he started claiming he was a former Satanic high priest who had escaped a massive underground cult network. Yudkowsky just swapped out demons for digital superintelligence.

Step 2: Position Yourself as the Brave Truth-Teller. Discredit actual experts. “Mainstream scientists are bought and paid for!” or “Academic AI researchers don’t understand the real risks!” You don’t need actual credentials—you just need to convince people that traditional credentials are part of the conspiracy.

Step 3: Build the Money Machine. The sales funnel is always the same: free content to build an audience, then monetize their anxiety through increasingly expensive products and services. Warnke’s progression: free church talks → $500 speaking fees → $50 books → $1,500 weekend seminars → ongoing “consultation.” At his peak, he was pulling in $1–2 million annually just by telling scary stories about his fictional cult days. Yudkowsky’s version? Free blog posts on LessWrong → Harry Potter fanfiction (seriously, 661,000 words of wish-fulfillment fiction featuring a “brilliant” autodidact protagonist) to recruit followers → MIRI donations → speaking fees → book deals.
Get thousands of tech workers emotionally invested in your worldview, then pivot them to your AI doom theories.

Following the Money Trail

What’s wild is that nearly all of the most extreme “AI doom” funding comes from cryptocurrency speculation profits. Sam Bankman-Fried’s embezzled customer funds didn’t just disappear into personal real estate—they propped up the entire “effective altruism” ecosystem that amplifies AI doom messaging. When your movement’s biggest financial backer turns out to be running a multi-billion-dollar fraud operation, maybe that says something about your due diligence standards.

Then there are the anonymous donations. That $15.6 million MakerDAO donation in 2021? Not long after it landed, MIRI’s “Death with Dignity” messaging hit peak hysteria. The incentive structure is obvious: more paranoid predictions equal more funding from anxious crypto millionaires.

Meanwhile, MIRI’s academic footprint is basically nonexistent. Open Philanthropy’s brutal 2016 review found MIRI’s total research output comparable to that of “an intelligent but unsupervised graduate student over 1-3 years.” And since 2018, MIRI has operated under a “nondisclosed-by-default” research policy, which is academic speak for “we don’t publish anything because we don’t want people to see how little we actually produce.”

The Strategic Retreat

The smoking gun? MIRI’s 2024 mission update explicitly abandons research for “advocacy.” They admit their research approach “largely failed.” Translation: “We can’t do real research, so now we’re just going to lobby politicians instead.” When your $50+ million AI safety organization admits its research approach failed and pivots to pure advocacy, that tells you everything about whether it was ever serious about solving technical problems.

Real research institutes have peer review, university affiliations, government grants, and mainstream citations. MIRI’s model? Private crypto funding, secret research, self-published papers, and circular citations.

Here’s the beautiful asymmetry that makes this business model so profitable: competent people are constrained by facts, evidence, and professional ethics. Incompetent people are free to sell whatever nightmare pays best. As Bertrand Russell nailed it: “The whole problem with the world is that fools and fanatics are always so certain of themselves, and wiser people so full of doubts.”

In ten years, “If Anyone Builds It, Everyone Dies” will sit in discount bins next to books about Y2K computer disasters and 2012 Mayan calendar prophecies. The grifters will find new fears to monetize, and the cycle will begin again. But hey, at least we’ll have documented how a middle school dropout built a $50 million empire by convincing smart people that robots are coming to kill us all.

The antidote to fear merchants has always been the same: demand evidence, check credentials, follow the money, and remember that people selling protection from the apocalypse have a financial incentive to keep you believing the apocalypse is coming.

Attribution: Originally published on Infinite Possibilities Daily

Want more than just the daily AI chaos roundup? I write deeper dives and hot takes on my Substack (because apparently I have Thoughts about where this is all heading): https://substack.com/@limitededitionjonathan