muckrAIkers

Jacob Haimes and Igor Krawczuk

Join us as we dig a tiny bit deeper into the hype surrounding "AI" press releases, research papers, and more. Each episode, we'll highlight ongoing research and investigations, adding some much-needed contextualization, constructive critique, and even the occasional smidge of good-willed teasing to the conversation as we try to find the meaning under all of this muck.

  1. 3D AGO

    Big Tech Plans to Move Fast and Break Democracy

    We're talking about developments in AI while those in power have unapologetically revealed their true fascist intentions; are we spending our time in the right way? Igor and I discuss the importance of shining a light on the techno-authoritarians who have played a very significant role in the current state of the world. While we discuss the murders of Nicole Good and Alex Pretti during this episode, it's important that we also acknowledge the many marginalized people who have died as a result of ICE's behavior without the same level of outcry. Six additional individuals died in ICE custody under suspicious circumstances between January 1st and 25th of 2026: Victor Manuel Díaz, Geraldo Lunas Campos, Luis Gustavo Núñez Cáceres, Luis Beltrán Yáñez-Cruz, Parady La, and Heber Sánchez Domínguez.

    Chapters
    (00:00) - Introduction
    (03:57) - The Authoritarian Stack
    (08:33) - Palantir & Thiel-Government Consolidation
    (13:44) - Move Fast & Break Everything
    (23:14) - Fascism in the US & Starving the Beast
    (39:48) - Finding Local Opportunities for Action

    Critical Links
    Below are the most important links for this episode. For more, visit the episode page on Kairos.fm.
    The Authoritarian Stack website
    Project 2025 Observer website
    EFF report - ICE Using Palantir Tool Feeds on Medicaid Data
    The Guardian article - Eight people have died in dealings with ICE so far in 2026. These are their stories
    Indivisible website
    Distributed AI Research Institute projects
    EAAMO website - Mechanism Design for Social Good
    Carlos Maza video - How To Be Hopeless

    49 min
  2. JAN 12

    AI Skeptic PWNED by Facts and Logic

    Igor shares a significant shift in his perspective on AI coding tools after experiencing the latest Claude Code release. While he's been the stronger AI skeptic between the two of us, recent developments have shown him genuine utility in specific coding tasks, though this doesn't validate the hype or change our fundamental critiques. We discuss what "rote tasks" are and why they're now automatable with enough investment, the difference between genuine utility and AGI claims, and how this update impacts our bubble analysis. We explore how massive investment has finally produced something useful for a narrow domain, and why that doesn't mean the technology is generalizable or that AGI is real.

    Chapters
    (00:00) - Introduction
    (05:07) - What Changed Igor’s Mind
    (18:27) - Rote Tasks Explained
    (23:31) - How Does This Impact our Bubble Analysis?
    (30:48) - AGI Is Still BS
    (34:07) - Externalities Remain Unchanged
    (37:49) - Final Thoughts & Outro

    Links
    Related muckrAIkers episode - Tech Bros Love AI Waifus

    Bubble Talk
    OfficeChai startup - OpenAI Hasn’t Completed A Successful Full-Scale Pretraining Run Since GPT-4o In May 2024, Says SemiAnalysis
    Vechron report - Anthropic Prepares for Potential 2026 IPO in Bid to Rival OpenAI
    YCombinator Forum post on AI crash
    YCombinator Forum post on OpenAI adopting Anthropic's "skills"
    YCombinator Forum post on OpenAI rumors
    YCombinator Forum post on OpenAI ad suggestions

    Other Sources
    LinkedIn post discussing an agentic coding vibe shift
    Executive Order - Ensuring a National Policy Framework for Artificial Intelligence
    Inside Tech Law blogpost - Germany delivers landmark copyright ruling against OpenAI: What it means for AI and IP
    NeurIPS 2025 paper - Ascent Fails to Forget
    NBER working paper - Large Language Models, Small Labor Market Effects
    Dwarkesh Podcast blogpost - RL is even more information inefficient than you thought

    39 min
  3. 12/15/2025

    Tech Bros Love AI Waifus

    OpenAI is pivoting to porn while public sentiment turns decisively against AI. Pew Research shows Americans are now more concerned than excited by a 2:1 margin. We trace how we got here: broken promises of cancer cures replaced by addiction mechanics and expensive APIs. Meanwhile, data centers are hiding a near-recession, straining power grids, and literally breaking your household appliances. Drawing parallels to the 1970s AI winter, we argue the bubble is shaking and needs to pop now, before it becomes another 2008. The good news? Grassroots resistance works. Protests have already blocked $64 billion in data center projects.

    NOTE: The project that we cite for the $64 billion figure is actually a pro-data-center campaign. The numbers still seem OK, but it's worth being aware of.

    Chapters
    (00:00) - Introduction
    (06:45) - The Addiction Business Model
    (10:15) - Public Sentiment Data
    (22:45) - Data Centers and Infrastructure Problems
    (36:30) - The Bubble Discussion
    (44:36) - Closing Thoughts & Outro

    Links
    Public Sentiment on AI
    Pew Research report - How People Around the World View AI
    Pew Research report - How the U.S. Public and AI Experts View Artificial Intelligence
    Pew Research report - How Americans View AI and Its Impact on People and Society
    University of Toronto report - Trust, attitudes and use of artificial intelligence: A global study 2025
    Melbourne Business School report - Key findings on public attitudes towards AI
    The Washington Post article - Americans have become more pessimistic about AI. Why?
    The New York Times article - From Mexico to Ireland, Fury Mounts Over a Global A.I. Frenzy
    The Guardian article - ‘It shows such a laziness’: why I refuse to date someone who uses ChatGPT
    The Register article - OpenAI's ChatGPT is so popular that almost no one will pay for it

    AI and Claims of Curing Cancer
    Rachel Thomas, PhD blogpost - “AI will cure cancer” misunderstands both AI and medicine
    The Atlantic article - OpenAI Wants to Cure Cancer. So Why Did It Make a Web Browser?
    Independent article - ChatGPT boss predicts when AI could cure cancer
    The Atlantic article - AI Executives Promise Cancer Cures. Here’s the Reality

    AI Porn and the Addiction Economy
    Forbes article - ChatGPT Will Allow ‘Erotica’ After Easing Mental Health Restrictions, Sam Altman Says
    The Addiction Economy website
    PPC article - OpenAI is staffing up to turn ChatGPT into an ad platform
    Tom Nicholas video - Vape-o-nomics: Why Everything is Addictive Now

    AI Bubble
    Fast Company article - AI isn’t replacing jobs. AI spending is
    Pivot to AI article - The finance press finally starts talking about the ‘AI bubble’
    Fortune article - Without data centers, GDP growth was 0.1% in the first half of 2025, Harvard economist says
    The Atlantic article - Just How Bad Would an AI Bubble Be?
    The New York Times article - Debt Has Entered the A.I. Boom
    Will Lockett's Newsletter article - AI Pullback Has Officially Started
    Reuters article - Michael Burry of 'Big Short' fame is closing his hedge fund
    Business Insider article - The guy who shorted Enron has a warning about the AI boom

    Datacenters
    Bloomberg article - AI Needs So Much Power, It’s Making Yours Worse
    Data Center Watch report - $64 billion of data center projects have been blocked or delayed amid local opposition
    More Perfect Union video - We Found the Hidden Cost of Data Centers. It's in Your Electric Bill
    DataCenter Knowledge article - Why Communities Are Protesting Data Centers – And How the Industry Can Respond

    Fighting Back
    Knight First Amendment Institute essay - AI as Normal Technology
    Pranksters vs. Autocrats chapter - Laughtivism: The Secret Ingredient
    SPSP article - Playing with Power: Humor as Everyday Resistance
    Blood in the Machine article - The Luddite Renaissance is in full swing

    46 min
  4. 10/13/2025

    AI Safety for Who?

    Jacob and Igor argue that AI safety is hurting users, not helping them. The techniques used to make chatbots "safe" and "aligned," such as instruction tuning and RLHF, anthropomorphize AI systems such that they take advantage of our instincts as social beings. At the same time, Big Tech companies push these systems for "wellness" while dodging healthcare liability, causing real harms today. We discuss what actual safety would look like, drawing on self-driving car regulations.

    Chapters
    (00:00) - Introduction & AI Investment Insanity
    (01:43) - The Problem with AI Safety
    (08:16) - Anthropomorphizing AI & Its Dangers
    (26:55) - Mental Health, Wellness, and AI
    (39:15) - Censorship, Bias, and Dual Use
    (44:42) - Solutions, Community Action & Final Thoughts

    Links
    AI Ethics & Philosophy
    Foreign Affairs article - The Cost of the AGI Delusion
    Nature article - Principles alone cannot guarantee ethical AI
    Xeiaso blog post - Who Do Assistants Serve?
    Argmin article - The Banal Evil of AI Safety
    AI Panic News article - The Rationality Trap

    AI Model Bias, Failures, and Impacts
    BBC news article - AI Image Generation Issues
    The New York Times article - Google Gemini German Uniforms Controversy
    The Verge article - Google Gemini's Embarrassing AI Pictures
    NPR article - Grok, Elon Musk, and Antisemitic/Racist Content
    AccelerAId blog post - How AI Nudges are Transforming Up- and Cross-Selling
    AI Took My Job website

    AI Mental Health & Safety Concerns
    Euronews article - AI Chatbot Tragedy
    Popular Mechanics article - OpenAI and Psychosis
    Psychology Today article - The Emerging Problem of AI Psychosis
    Rolling Stone article - AI Spiritual Delusions Destroying Human Relationships
    The New York Times article - AI Chatbots and Delusions

    Guidelines, Governance, and Censorship
    Preprint - R1dacted: Investigating Local Censorship in DeepSeek's R1 Language Model
    Minds & Machines article - The Ethics of AI Ethics: An Evaluation of Guidelines
    SSRN paper - Instrument Choice in AI Governance
    Anthropic announcement - Claude Gov Models for U.S. National Security Customers
    Anthropic documentation - Claude's Constitution
    Reuters investigation - Meta AI Chatbot Guidelines
    Swiss Federal Council consultation - Swiss AI Consultation Procedures
    Grok Prompts GitHub repo
    Simon Willison blog post - Grok 4 Heavy

    50 min
  5. 08/21/2025

    The Co-opting of Safety

    We dig into how the concept of AI "safety" has been co-opted and weaponized by tech companies. Starting with examples like Mecha-Hitler Grok, we explore how real safety engineering differs from AI "alignment," the myth of the alignment tax, and why this semantic confusion matters for actual safety.

    Chapters
    (00:00) - Intro
    (00:21) - Mecha-Hitler Grok
    (10:07) - "Safety"
    (19:40) - Under-specification
    (53:56) - This time isn't different
    (01:01:46) - Alignment Tax myth
    (01:17:37) - Actually making AI safer

    Links
    JMLR article - Underspecification Presents Challenges for Credibility in Modern Machine Learning
    Trail of Bits paper - Towards Comprehensive Risk Assessments and Assurance of AI-Based Systems
    SSRN paper - Uniqueness Bias: Why It Matters, How to Curb It

    Additional Referenced Papers
    NeurIPS paper - Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress?
    ICML paper - AI Control: Improving Safety Despite Intentional Subversion
    ICML paper - DarkBench: Benchmarking Dark Patterns in Large Language Models
    OSF preprint - Current Real-World Use of Large Language Models for Mental Health
    Anthropic preprint - Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback

    Inciting Examples
    Ars Technica article - US government agency drops Grok after MechaHitler backlash, report says
    The Guardian article - Musk’s AI Grok bot rants about ‘white genocide’ in South Africa in unrelated chats
    BBC article - Update that made ChatGPT 'dangerously' sycophantic pulled

    Other Sources
    London Daily article - UK AI Safety Institute Rebrands as AI Security Institute to Focus on Crime and National Security
    Vice article - Prominent AI Philosopher and ‘Father’ of Longtermism Sent Very Racist Email to a 90s Philosophy Listserv
    LessWrong blogpost - "notkilleveryoneism" sounds dumb (see comments)
    EA Forum blogpost - An Overview of the AI Safety Funding Situation
    Book by Dmitry Chernov and Didier Sornette - Man-made Catastrophes and Risk Information Concealment
    Euronews article - OpenAI adds mental health safeguards to ChatGPT, saying chatbot has fed into users’ ‘delusions’
    Pleias website
    Wikipedia page on Jaywalking

    1h 24m
  6. 07/14/2025

    AI, Reasoning or Rambling?

    In this episode, we redefine AI's "reasoning" as mere rambling, exposing the "illusion of thinking" and "Potemkin understanding" in current models. We contrast the classical definition of reasoning (requiring logic and consistency) with Big Tech's new version, which is a generic statement about information processing. We explain how Large Rambling Models generate extensive, often irrelevant, rambling traces that appear to improve benchmarks, largely due to best-of-N sampling and benchmark gaming. Words and definitions actually matter! Carelessness leads to misplaced investments and an overestimation of systems that are currently just surprisingly useful autocorrects.

    Chapters
    (00:00) - Intro
    (00:40) - OBB update and Meta's talent acquisition
    (03:09) - What are rambling models?
    (04:25) - Definitions and polarization
    (09:50) - Logic and consistency
    (17:00) - Why does this matter?
    (21:40) - More likely explanations
    (35:05) - The "illusion of thinking" and task complexity
    (39:07) - "Potemkin understanding" and surface-level recall
    (50:00) - Benchmark gaming and best-of-n sampling
    (55:40) - Costs and limitations
    (58:24) - Claude's anecdote and the Vending Bench
    (01:03:05) - Definitional switch and implications
    (01:10:18) - Outro

    Links
    Apple paper - The Illusion of Thinking
    ICML 2025 paper - Potemkin Understanding in Large Language Models
    Preprint - Large Language Monkeys: Scaling Inference Compute with Repeated Sampling

    Theoretical understanding
    Max M. Schlereth manuscript - The limits of AGI part II
    Preprint - (How) Do Reasoning Models Reason?
    Preprint - A Little Depth Goes a Long Way: The Expressive Power of Log-Depth Transformers
    NeurIPS 2024 paper - How Far Can Transformers Reason? The Globality Barrier and Inductive Scratchpad

    Empirical explanations
    Preprint - How Do Large Language Monkeys Get Their Power (Laws)?
    Andon Labs preprint - Vending-Bench: A Benchmark for Long-Term Coherence of Autonomous Agents
    LeapLab, Tsinghua University and Shanghai Jiao Tong University paper - Does Reinforcement Learning Really Incentivize Reasoning Capacity
    Preprint - RL in Name Only? Analyzing the Structural Assumptions in RL post-training for LLMs
    Preprint - Mind The Gap: Deep Learning Doesn't Learn Deeply
    Preprint - Measuring AI Ability to Complete Long Tasks
    Preprint - GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models

    Other sources
    Zuck's Haul webpage - Meta's talent acquisition tracker
    Hacker News discussion - Opinions from the AI community
    Interconnects blogpost - The rise of reasoning machines
    Anthropic blog - Project Vend: Can Claude run a small shop?

    1h 11m
  7. 06/23/2025

    One Big Bad Bill

    In this episode, we break down Trump's "One Big Beautiful Bill" and its dystopian AI provisions: automated fraud detection systems, centralized citizen databases, military AI integration, and a 10-year moratorium blocking all state AI regulation. We explore the historical parallels with authoritarian data consolidation and why this represents a fundamental shift away from the limited-government principles once held by US conservatives.

    Chapters
    (00:00) - Intro
    (01:13) - Bill, general overview
    (05:14) - Bill, AI overview
    (07:54) - Medicaid fraud detection systems
    (11:20) - Bias in AI Systems and Ethical Concerns
    (17:58) - Centralization of data
    (30:04) - Military integration of AI
    (37:05) - Tax incentives for development
    (40:57) - Regulatory moratorium
    (47:58) - One big bad authoritarian regime

    Links
    Congress page on the One Big Beautiful Bill Act
    NYMag article - Republicans Admit They Didn’t Even Read Their Big Beautiful Bill
    Everything is Horrible blogpost - They Did Vote For This (GOP House Edition)

    Authoritarianism
    Historical context
    Holocaust Encyclopedia article - Gleichschaltung: Coordinating the Nazi State
    Wikipedia article - 1943 Amsterdam civil registry office bombing
    Wikipedia article - Four Ds
    Conservative leaning, pro-privacy, anti-government
    Data Governance Hub blogpost - Review and Literature Guide of Trump’s “One Big Beautiful Dataset”
    Cato Institute blogpost - If You Value Privacy, Resist Any Form of National ID Cards
    American Enterprise Institute blogpost - The Dangerous Road to a “Master File”—Why Linking Government Databases Is a Terrible Idea
    EFF blogpost - The Dangers of Consolidating All Government Information
    ACLU against national ID cards
    ACLU main page on national ID cards
    ACLU blogpost - National Identification Cards: Why Does the ACLU Oppose a National I.D. System?
    ACLU blogpost - 5 Problems with National ID Cards

    Inherent unfairness of ML
    Lighthouse Reports investigation - The Limits of Ethical AI
    Lighthouse Reports investigation - Suspicion Machines
    Amazon Science publication - Bias preservation in machine learning: The legality of fairness metrics under EU non-discrimination law
    Michigan Technology Law Review article - The Unfairness of Fair Machine Learning: Levelling down and strict egalitarianism by default
    Wired article - Health Care Bias Is Dangerous. But So Are ‘Fairness’ Algorithms

    Military
    Wall Street Journal article - The Army’s Newest Recruits: Tech Execs From Meta, OpenAI and More
    Trump executive order - Unleashing American Drone Dominance
    Anthropic press release - Claude Gov Models for U.S. National Security Customers

    Moratorium on State AI Regulation
    TechPolicy.Press article - The State AI Laws Likeliest To Be Blocked by a Moratorium
    Forbes article - Colorado’s AI Law Still Stands After Update Effort Fails

    Other Sources
    KPMG report - Incentives and credits tax provisions in “One Big Beautiful Bill Act”
    The Register article - Trump team leaks AI plans in public GitHub repository
    Wall Street Journal article - To Feed Power-Wolfing AI, Lawmakers Are Embracing Nuclear
    CBS Austin article - IRS direct file program exceeded its expectations but faces uncertain future

    53 min
  8. 05/26/2025

    Breaking Down the Economics of AI

    Jacob and Igor tackle the wild claims about AI's economic impact by examining three main clusters of arguments: automating expensive tasks like programming, removing "cost centers" like call centers and corporate art, and claims of explosive growth. They dig into the actual data, debunk the hype, and explain why most productivity claims don't hold up in practice. Plus: MIT denounces a paper with fabricated data, and Grok randomly promotes white genocide myths.

    Chapters
    (00:00) - Recording date + intro
    (00:52) - MIT denounces paper
    (04:09) - Grok's white genocide
    (06:23) - Butthole convergence
    (07:13) - AI and the economy
    (14:50) - Automating profit centers
    (29:46) - Removing the last cost centers
    (47:16) - "This time is different" (explosive growth)
    (57:55) - Alpha Evolve, optimization, and slippage

    Links
    University of Chicago working paper - Large Language Models, Small Labor Market Effects
    OECD working paper - Miracle or Myth? Assessing the macroeconomic productivity gains from Artificial Intelligence
    Epoch AI blogpost - Explosive Growth from AI: A Review of the Arguments
    Business Insider article - Anthropic CEO: AI Will Be Writing 90% of Code in 3 to 6 Months
    Preprint - Transformative AGI by 2043 is 1% likely

    Automating profit centers
    Pivot to AI blogpost - If AI is so good at coding … where are the open source contributions?
    Ben Evans' Mastodon post - "Show me the pull requests"
    NY Times article - Your A.I. Radiologist Will Not Be With You Soon
    FastCompany article - More companies are adopting 'AI-first' strategies. Here's how it could impact the environment
    Forbes article - Business Tech News: Shopify CEO Says AI First Before Employees
    Newsroom article - IBM Study: CEOs Double Down on AI While Navigating Enterprise Hurdles
    PNAS research article - Evidence of a social evaluation penalty for using AI
    Ars Technica article - AI use damages professional reputation, study suggests

    Removing cost centers
    The Register article - Anthropic's law firm blames Claude hallucinations for errors
    Fortune article - Klarna plans to hire humans again, as new landmark survey reveals most AI projects fail to deliver
    Wikipedia article - The Market for Lemons

    AlphaEvolve
    DeepMind press release - AlphaEvolve: A Gemini-powered coding agent for designing advanced algorithms
    DeepMind white paper - AlphaEvolve: A coding agent for scientific and algorithmic discovery

    Off Topic
    VelvetShark blogpost - Why do AI company logos look like buttholes?
    MIT Economics press release - Assuring an accurate research record
    Pivot to AI blogpost - How to make a splash in AI economics: fake your data
    Pivot to AI blogpost - Even Elon Musk can’t make Grok claim a ‘white genocide’ in South Africa

    1h 7m
