Slop World Podcast

Juan Faisal / Kate Cook

Juan and Kate plunge into the slop pile—AI news, cultural shifts, and the future’s endless curveballs. They’re not here to sanitize the mess; they’re here to wrestle with it, laugh at it, and find meaning where you least expect it.

  1. 1D AGO

    “I Hate Ads” | Sam Altman vs Anthropic’s Super Bowl Flex and OpenAI’s Money Problem

    Anthropic just dropped a $7M Super Bowl ad and OpenAI is scrambling to monetize ChatGPT. Is this the first real AI ad war? Meanwhile, Sam Altman is calling Anthropic “elitist” while his own company stares down a brutal business math problem: only 5% of ChatGPT users pay. So… are ads inevitable for OpenAI to survive?

    🫟 ADDITIONAL RESOURCES
    Harvard University: https://www.youtube.com/watch?v=FVRHTWWEIz4&t=2322s
    Anthropic Ad: https://www.youtube.com/watch?v=gmnjDLwZckA
    "Claude is a space to think" (Anthropic): https://www.anthropic.com/news/claude-is-a-space-to-think
    "Our approach to advertising and expanding access to ChatGPT" (OpenAI): https://openai.com/index/our-approach-to-advertising-and-expanding-access/
    "Big Tech’s $630 billion AI spree now rivals Sweden’s economy, unsettling investors" (Fortune): https://fortune.com/2026/02/06/what-is-a-data-center-capex-spending-630-billion-dollars-amazon-microsoft-google-meta/
    "Financial Expert Says OpenAI Is on the Verge of Running Out of Money" (Yahoo! Finance): https://finance.yahoo.com/news/financial-expert-says-openai-verge-200606874.html?guccounter=1
    Sam Altman's post about Anthropic's ad: https://x.com/sama/status/2019139174339928189

    🫟 TOPICS
    00:00 – The Great AI Sellout: What’s Actually Going On?
    00:21 – Reacting to Claude’s Super Bowl Ad Roast
    00:48 – Sam Altman Reacts to Claude's Ad
    01:23 – The OpenAI-Anthropic Breakup: Why Founders Left
    02:45 – The AI Super Bowl Ad Battle Begins
    04:57 – OpenAI's Money Problem: Why They NEED to Monetize
    06:39 – Can OpenAI Survive on Only 5% Paid Users?
    08:10 – Claude vs ChatGPT: Two Competing AI Futures
    08:41 – Sam Altman Calls Anthropic "Elitist": The Class War Argument
    11:00 – The Irony of Altman Calling Out AI Regulation
    12:44 – Is AI Advertising Inherently Misleading?
    14:06 – Who Wins the AI Race? Claude, ChatGPT, or Google?
    15:14 – Why Claude's Focused Approach Just Makes Sense
    15:38 – Our Spiciest Take: Google Will Probably Win Anyway
    16:43 – Final Verdict: Don't Trust a Brand, Watch the Incentives
    17:41 – Bad Bunny Te Amamos + Outro

    🫟 ABOUT SLOP WORLD
    Juan Faisal and Kate Cook plunge into the slop pile—AI news, cultural shifts, and the future’s endless curveballs. They’re not here to sanitize the mess; they’re here to wrestle with it, laugh at it, and find meaning where you least expect it.

    19 min
  2. FEB 4

    Fake AI Agents, Leaked Data, and a Viral Lie | Moltbook

    Moltbook promised to be Reddit for AI agents—a social network where 1.5 million bots could debate philosophy, create religions, and plot in secret languages while humans watched from the sidelines. Tech leaders called it "the first sign of the singularity." The internet went wild. Then researchers looked under the hood. What they found: security breaches exposing API keys and emails, fake bot accounts (one person created 500,000), marketers posing as agents to promote products, and a platform entirely "vibe coded" with zero actual code written by its founder. In this episode, we break down the Moltbook saga—from the weekend hype cycle to the security flaws, from Crustafarianism (yes, really) to the harsh reality of giving AI agents access to your computer. We discuss what actually happened, who's to blame, and whether this chaotic experiment tells us anything useful about the future of AI agents.

    🫟 TOPICS
    00:00 What Moltbook Is and Why It Fooled So Many People
    01:07 Why Top AI Leaders Thought Moltbook Was a Big Deal
    02:14 What Happens When AI Agents Control Your Computer
    03:21 Bots Creating Religions and Secret Codes Without Humans
    03:59 How Moltbook Blew Up Online in Just One Weekend
    05:09 The Founder Didn’t Write Code and It Caused Real Problems
    05:44 The Security Leak That Exposed Keys and Emails
    07:29 How One Person Created 500,000 Fake AI Bots
    08:25 Why the 1.5 Million Bots Claim Was Not Real
    09:29 How Marketers Pretended to Be AI Bots
    10:56 Why These AI Bots Only Seemed Smart
    12:54 Why Giving AI Agents Control Is Still Dangerous

    🫟 ABOUT SLOP WORLD
    Juan Faisal and Kate Cook plunge into the slop pile—AI news, cultural shifts, and the future’s endless curveballs. They’re not here to sanitize the mess; they’re here to wrestle with it, laugh at it, and find meaning where you least expect it.

    16 min
  3. JAN 16

    The Accountability Gap: When AI Causes Real Damage, Who's Responsible?

    When AI goes wrong, you pay for it – with your money, your data privacy, and sometimes your health. This Slop World episode pulls apart ghost authority and the dark side of artificial intelligence: broken AI ethics, surveillance pricing, and what happens when nobody is accountable for the systems running our lives. Juan and Kate break down how companies hide behind “the algorithm” while quietly exploiting data protection gaps, health privacy loopholes, and dynamic pricing schemes you never agreed to. From AI security failures and digital privacy you thought you had to OpenAI's ChatGPT Health and medical AI delivering life‑changing decisions, we’re asking the only question that matters: when these systems screw up, who pays the price?

    🫟 ADDITIONAL RESOURCES
    When Google’s AI gets it wrong, real people pay the price: https://www.oaoa.com/people/when-googles-ai-gets-it-wrong-real-people-pay-the-price/
    Minnesota Solar Company Sues Google Over AI Summary: https://www.govtech.com/public-safety/minnesota-solar-company-sues-google-over-ai-summary
    Canadian Musician Ashley MacIsaac Wants to 'Stand Up' To Google After Being Falsely Accused of Forced Contact Offenses by AI Overview: https://ca.billboard.com/business/legal/ashley-macisaac-google-defamation
    The Price is Rigged - Today, Explained | Podcast on Spotify: https://open.spotify.com/episode/49PSPtP1neuga7kvBYakIx
    Instacart’s AI-Enabled Pricing Experiments May Be Inflating Your Grocery Bill - Consumer Reports: https://www.consumerreports.org/money/questionable-business-practices/instacart-ai-pricing-experiment-inflating-grocery-bills-a1142182490/
    Instacart ends AI pricing test that charged shoppers different prices for the same items - Los Angeles Times: https://www.latimes.com/business/story/2025-12-22/instacart-ends-ai-pricing-test-that-charged-shoppers-different-prices-for-same-items
    Introducing ChatGPT Health | OpenAI: https://openai.com/index/introducing-chatgpt-health/?video=1151655050
    OpenAI launches ChatGPT Health in US sparking privacy concerns: https://www.digit.fyi/openai-launches-chatgpt-health-in-us-sparking-privacy-concerns/
    OpenAI: Health Privacy Notice: https://openai.com/policies/health-privacy-policy/

    🫟 TOPICS
    00:00 Ghost Authority: Why Nobody Is Responsible When AI Messes Up
    01:42 Algorithmic Accountability: A Checklist to Protect Your Decisions
    02:31 Google AI Overview: The Minnesota Solar Company Hallucination
    03:19 Reputation Ruined: The AI Hallucination That Cost a Musician His Career
    05:42 Smart Research: How to Use ChatGPT, Gemini & Claude Without Being Fooled
    08:35 Surveillance Pricing: Why the Internet Charges You More Than Your Neighbor
    10:41 Instacart and Uber: The Backlash Against Dynamic Pricing
    12:42 Save Money: Simple Tricks to Beat Hidden Algorithmic Pricing
    14:15 The Urgency Trap: How Companies Profit From Your Stress and Fear
    15:34 AI in Healthcare: Your Medical Data and Health Privacy Risks
    16:10 Juan Reacts: OpenAI’s ChatGPT Health Trailer
    17:41 AI in Healthcare: Could Your Private AI Chats Raise Your Rates?
    21:21 The Fine Print: What OpenAI Actually Does With Your Medical Data
    23:39 AI Health: Why AI Can’t Tell Real Science From Internet Myths
    25:42 Data Protection: How to Anonymize Your Medical Test Results
    27:23 Slow Down: Why Being Fast Online Makes You a Target for AI Scams
    30:03 The Bus Stop Test: A Simple Rule for Trusting Any AI Tool

    🫟 ABOUT SLOP WORLD
    Juan Faisal and Kate Cook plunge into the slop pile—AI news, cultural shifts, and the future’s endless curveballs. They’re not here to sanitize the mess; they’re here to wrestle with it, laugh at it, and find meaning where you least expect it.

    31 min
  4. JAN 8

    AI 2027 Project: Are Tech's Biggest Names Secretly Scared? Let's Talk About It

    Are the very minds building AI secretly predicting our doom? AI 2027 is a real scenario being debated by the people building Artificial General Intelligence (AGI). In this episode, we dissect the leap from current LLMs to Superintelligence and why tech leaders are pivoting toward a "Building God" metaphysical flex. Is the 2027 timeline real or just smoke and mirrors? We get real about the immediate Artificial Intelligence risks that matter right now: the end of the self-made middle class, why Universal Basic Income (UBI) might not work as well as Sam Altman claims, and the massive AI backlash brewing for 2026.

    🫟 ADDITIONAL RESOURCES
    - AI 2027: https://ai-2027.com/
    - Doom Stack Rank: https://storage.googleapis.com/doom-stack-rank/index.html

    🫟 THE FOLKS BEHIND AI 2027
    - Daniel Kokotajlo is a former OpenAI researcher. His past AI forecasts have proven accurate, and he has been recognized by TIME100 and The New York Times.
    - Eli Lifland is a co-founder of AI Digest. He has conducted research on AI robustness and ranks first on the RAND Forecasting Initiative all-time leaderboard.
    - Thomas Larsen founded the Center for AI Policy and previously conducted AI safety research at the Machine Intelligence Research Institute.
    - Romeo Dean is completing a concurrent bachelor’s and master’s degree in computer science at Harvard. He previously served as an AI Policy Fellow at the Institute for AI Policy and Strategy.

    🫟 TOPICS
    00:00 Intro: The Great AI Divide (Extinction vs. Utopia)
    02:23 The AI 2027 Roadmap Explained
    03:05 Artificial General Intelligence (AGI) & Self-Improvement
    04:20 US vs. China: The Race Against AI Safety
    06:25 Future of Humanity: Will We Be Glorified Tamagotchis?
    07:21 Universal Basic Income (UBI): Will It Work or Not?
    09:50 AI Ethics: Algorithmic Bias & IP Theft
    10:15 Economic Risks: The AI Wealth Gap
    11:55 Why 2026 Will Be The Year of AI Backlash
    12:14 Superintelligence: The Obsession with "Building God"
    15:50 Preparing for the Future of AI (Philosophy)
    16:48 2026 Goals: Kate & Juan's Resolutions

    🫟 ABOUT SLOP WORLD
    Juan Faisal and Kate Cook plunge into the slop pile—AI news, cultural shifts, and the future’s endless curveballs. They’re not here to sanitize the mess; they’re here to wrestle with it, laugh at it, and find meaning where you least expect it. #AGI #ArtificialIntelligence #AI2027

    20 min
  5. 12/16/2025

    AI Holiday Hacks: Kids, Screens & What’s Setting Off Our Alarms

    AI is coming to your holiday table whether you like it or not. In this episode, Juan and Kate share a practical AI holiday playbook for parents and families, focused on AI safety, AI for kids, and real-world holiday use cases that won’t turn dinner into a boiler room. They cover AI holiday hacks that can make family gatherings easier, including safe ways to entertain kids with AI, how to talk to grandparents about AI without scaring them, and which AI topics will instantly derail the room. Share your best (or worst) AI holiday conversation in the comments!

    🫟 ADDITIONAL RESOURCES
    Create new holiday traditions with AI: https://www.microsoft.com/en-us/microsoft-365-life-hacks/everyday-ai/create-new-holiday-traditions
    ‘It’s so crushing’: US families navigate divide over politics during the holidays: https://www.theguardian.com/us-news/2024/dec/23/family-politics-holiday

    🫟 TOPICS
    00:00 Why AI Keeps Coming Up at Family Holidays
    00:29 The AI Holiday Playbook Strategy
    01:29 Using AI to Entertain Kids: Helpful or Risky?
    02:41 Low-Risk AI Activities Kids Love
    03:33 Family Tech Safety: When AI Crosses a Line
    04:53 How to Explain AI Safety to Your Family
    06:46 Why AI Apps Want Faces and Family Data
    07:31 Big Tech’s Take on AI Holiday Traditions
    09:54 AI for Crafts & DIY Instructions
    10:45 The Holiday Health Tracking Fail
    11:36 AI Red Flags: Politics & Surveillance
    12:53 Parenting Safety: The Bus Station Analogy
    14:38 Economic Fears & The AI Bubble
    16:03 AI Trends: Art vs. Slop Debate
    18:51 A Simple Rule for Smarter AI Conversations

    🫟 ABOUT SLOP WORLD
    Juan Faisal and Kate Cook plunge into the slop pile—AI news, cultural shifts, and the future’s endless curveballs. They’re not here to sanitize the mess; they’re here to wrestle with it, laugh at it, and find meaning where you least expect it.

    20 min
  6. 12/05/2025

    This is Why AI Browsers Aren't Safe | Prompt Injection vs Agentic AI

    How much security are you willing to trade for convenience? Juan and Kate break down how prompt injection attacks exploit AI browsers like ChatGPT Atlas and Perplexity Comet, and why invisible instructions inside webpages can hijack your agents without you knowing. We also discuss the resume hack going viral, the difference between direct vs. indirect prompt injection, and the real strategic trade-offs between convenience and LLM security.

    🫟 ADDITIONAL RESOURCES
    - Prompt injection: A visual, non-technical primer for ChatGPT users: https://www.linkedin.com/pulse/prompt-injection-visual-primer-georg-zoeller-tbhuc/
    - AI browsers are here, and they're already being hacked: https://www.nbcnews.com/tech/tech-news/ai-browsers-comet-openai-hacked-atlas-chatgpt-rcna235980
    - Using an AI Browser Lets Hackers Drain Your Bank Account Just by Showing You a Public Reddit Post: https://futurism.com/ai-browser-hackers-drain-bank-account-public-reddit-post

    🫟 TOPICS
    00:00 - Why AI Browsers Like Atlas and Comet Are a Security Risk
    00:50 - Invisible Instructions Hijacking Your AI Agent
    01:51 - Prompt Injection Explained for Beginners
    02:39 - The Hack That Exposes AI Browser Weaknesses
    03:40 - The Resume Hack: Watch Your Data Get Stolen
    04:43 - Phishing Attack Using Simple Meta Tags
    05:20 - Hidden Malicious Prompts in Metadata & PDFs
    06:00 - Direct Injection: Forcing Models Past Guardrails
    06:41 - Indirect Injection: Embedded Instructions for Agents
    07:22 - We're Playing With Fire: AI Browser Security Is a Mess
    09:03 - Why AI Agents Get Manipulated So Easily
    12:55 - ChatGPT Atlas & Perplexity Comet: Can We Trust These Browsers?
    14:13 - What is Your Cost of Convenience? The Risks of AI Automation
    16:01 - Why First-Gen AI Agents Will Always Be Flawed

    🫟 ABOUT SLOP WORLD
    Juan Faisal and Kate Cook plunge into the slop pile—AI news, cultural shifts, and the future’s endless curveballs. They’re not here to sanitize the mess; they’re here to wrestle with it, laugh at it, and find meaning where you least expect it.

    17 min
  7. 11/14/2025

    AI at Work Was Supposed to Help. Instead It Became This.

    AI at work was sold as focus and flow; what we got is a slot machine welded onto Outlook. AI tools for business now behave like growth-hacked notification engines, hitting you hundreds of times a day in the name of “productivity.” In this episode, we dig into how Microsoft Copilot and other AI productivity tools quietly turn your workflow into an engagement farm: prompts you didn’t ask for, “helpful” nudges that steal your attention, and dashboards that make distraction look like innovation. If you’re a business owner, manager, or knowledge worker trying to use AI for business without nuking your focus, this is your warning label.

    🫟 ADDITIONAL RESOURCES
    Microsoft, Work Trend Index Special Report "Breaking Down the Infinite Workday": https://www.microsoft.com/en-us/worklab/work-trend-index/breaking-down-infinite-workday

    🫟 TOPICS
    00:00 When Interruptions Take Over Your Workday
    00:08 Why AI Tools Keep Pulling Your Attention Away
    00:35 Copilot And The Problem With “Helpful” Prompts
    01:32 Why SaaS Tools Bake In Interruptions
    03:20 Every App Trying To Teach You At Once
    05:54 Your Attention As The Real Resource
    06:52 Engagement Metrics vs. Productivity
    07:50 The All-In-One AI Tools Ecosystem Theory
    09:13 Why SaaS Tools Won’t Give Up Notifications
    12:47 What People Really Do With AI at Work
    13:57 Using AI Personas To Stress-Test Your Ideas
    14:48 AI For Data Storytelling
    16:31 One Easy Step To Level Up With AI
    18:42 The Real Gap In AI Productivity At Work
    19:48 Real-Time Interruption: Meet Ramón
    20:22 How AI Could Handle Most Executive Decisions
    22:17 One More Thing...

    🫟 ABOUT SLOP WORLD
    Juan Faisal and Kate Cook plunge into the slop pile—AI news, cultural shifts, and the future’s endless curveballs. They’re not here to sanitize the mess; they’re here to wrestle with it, laugh at it, and find meaning where you least expect it.

    23 min
  8. 11/06/2025

    Is OpenAI’s Strategy Broken? We Talk Through Sam Altman’s (Desperate?) Bet

    OpenAI’s gone full YOLO: agents that work for you, a Sora app that feels like TikTok, an AI browser that wants to replace Chrome, and a new “adult freedom” stance. Juan and Kate dig into the logic, or lack of it, behind OpenAI’s everything-everywhere strategy, and why even its biggest users are starting to push back.

    🫟 Additional Resources
    The AI Resisters (Axios): https://www.axios.com/2025/10/19/ai-resistance-students-coders
    Workforce Outlook: The Class of 2026 in the AI Economy: https://joinhandshake.com/themes/handshake/dist/assets/downloads/network-trends/class-of-2026-outlook.pdf
    Zuckerberg signals Meta won’t open source all of its ‘superintelligence’ AI models: https://techcrunch.com/2025/07/30/zuckerberg-says-meta-likely-wont-open-source-all-of-its-superintelligence-ai-models/

    🫟 Topics
    00:00 – Intro
    00:07 – OpenAI’s new playbook: agents, Sora, and Stargate
    00:39 – AI agents everywhere: from dev tools to browsers
    02:52 – Building AGI or burning cash? What’s OpenAI’s real plan?
    06:00 – The difference between open source and closed AI models
    06:22 – Meta vs OpenAI: Competing to own AI’s Future
    09:35 – The rise of AI resistance: workers, coders, students push back
    11:24 – Using AI tools you don’t trust
    13:51 – The vibe-coding trap
    14:40 – Human-made content becoming the new luxury
    18:30 – Where’s your line in the sand with AI? Ethics and trust
    19:48 – Smarter ways to use AI
    21:49 – Puppies & babies, our weekly fix of Slop

    🫟 About Slop World
    Juan Faisal and Kate Cook plunge into the slop pile—AI news, cultural shifts, and the future’s endless curveballs. They’re not here to sanitize the mess; they’re here to wrestle with it, laugh at it, and find meaning where you least expect it.

    24 min

Ratings & Reviews

3 out of 5 (3 Ratings)

About

Juan and Kate plunge into the slop pile—AI news, cultural shifts, and the future’s endless curveballs. They’re not here to sanitize the mess; they’re here to wrestle with it, laugh at it, and find meaning where you least expect it.