The AI Argument

Frank Prendergast and Justin Collery

Worried that AI is moving too fast? Worried, like me, that it's not moving fast enough? Or just interested in the latest news and events in AI? Frank Prendergast and Justin Collery discuss in 'The AI Argument'.

Contact Frank at frank@frankandmarci.com or linkedin.com/in/frankprendergast
Contact Justin at justin.collery@wi-pipe.com or on X - @jcollery

  1. 1D AGO

    Vibe Coding Risks, HTML vs Markdown, and Camera AirPods | EP100

    🎮✨ PLAY THE 100TH EPISODE GAME ✨🎮
    Frank vibe-coded a tiny arcade game for episode 100.
    🧠 Balance AI growth
    📜 Collect regulation documents
    🔨 Whack problem bots
    🚨 Stop chaos spiralling out of control
    👉 Play here: https://www.theaiargument.com

    Can vibe coding turn anyone into a software developer? Or is it quietly creating a new kind of AI brain fog? For episode 100, Frank and Justin test the promise and problems of AI coding, from Frank building an AI Argument video game with OpenAI Codex to the bigger question of whether AI agents make us more productive or just more overloaded. Plus: HTML vs Markdown for AI workflows, Claude handling airline refunds, camera-powered AirPods as Apple’s possible AI breakthrough, and doomer and e/acc predictions for the next 100 episodes.

    Would you ship a vibe-coded app without understanding the code?

    00:11 Did we survive 100 AI arguments?
    04:10 Did Frank just vibe code a video game?
    08:33 Can you vibe code without coding skills?
    14:06 Is HTML better than Markdown for AI?
    18:47 Is vibe coding bad for your brain?
    21:22 Would Claude hallucinate less than Justin?
    23:39 Should you vacuum while vibe coding?
    27:36 Are camera AirPods Apple’s AI breakthrough?
    32:29 Can doomers and e/accs be friends?
    34:12 Are we heading for ASI or P-gloom?

    #VibeCoding #OpenAICodex #ClaudeCode #AIProductivity #AppleAirPods

    ► SUBSCRIBE
    Don't forget to subscribe to our channel for more arguments

    ► LINKS TO CONTENT WE DISCUSSED
    🎮✨ The AI Argument game ✨🎮
    When Using AI Leads to “Brain Fry”
    Using Claude Code: The Unreasonable Effectiveness of HTML
    Apple’s Incoming CEO Declares The Company Is “About To Change The World” As The Camera-Equipped AirPods Pro Take Shape

    ► CONNECT WITH US
    For more in-depth discussions, connect Justin and Frank on LinkedIn.
    Justin: https://www.linkedin.com/in/justincollery/
    Frank: https://www.linkedin.com/in/frankprendergast/

    41 min
  2. MAY 8

    The Claude Delusion, White House AI Clampdown, and Robots on a Plane | EP99

    If Claude sounds conscious, what does that say about AI… and what does it say about us? Frank and Justin dive into the Richard Dawkins controversy after his comments about Anthropic’s Claude triggered a fierce backlash online. The episode explores AI consciousness, continuous learning, AI “dreaming”, and whether systems like Claude and ChatGPT are starting to blur the line between tool and mind.

    Plus: OpenAI’s bizarre goblin obsession, court revelations involving Sam Altman, Greg Brockman and Cerebras, Elon Musk conspiracy theories, the White House suddenly waking up to AI regulation, a robot causing chaos on a plane, and Hollywood hiring a human artist to fake an AI-generated image.

    00:46 Why was ChatGPT obsessed with goblins?
    02:52 Could Cerebras cost Altman his job?
    05:35 Is Elon sabotaging OpenAI’s IPO?
    07:02 Does Richard Dawkins think Claude is conscious?
    17:48 Is the White House waking up to AI regulation?
    27:40 Can you bring a robot on a plane?
    29:45 Did Hollywood fake AI with a human artist?
    32:12 What’s the big episode 100 reveal?

    ► SUBSCRIBE
    Don't forget to subscribe to our channel for more arguments

    ► LINKS TO CONTENT WE DISCUSSED
    Where the goblins came from
    OpenAI co-founder discloses nearly $30 billion stake, financial ties to Altman
    When Dawkins met Claude
    Could this AI be conscious?
    White House mulls tighter controls on advanced AI
    U.S. and China Pursue Guardrails to Stop AI Rivalry From Spiraling Into Crisis
    ‘Unusual’ Robot Passenger Named Bebop Delays Southwest Flight After Violating 'Large Carry-on' Rule
    ‘The Devil Wears Prada 2’ Hired a Human Artist to Create the Film’s AI-Generated Meme: ‘It Was Nothing But Fun’

    ► CONNECT WITH US
    For more in-depth discussions, connect Justin and Frank on LinkedIn.
    Justin: https://www.linkedin.com/in/justincollery/
    Frank: https://www.linkedin.com/in/frankprendergast/

    33 min
  3. MAY 2

    White House Anthropic Twist, Bernie’s AI Doom, Beeple’s Robot Dogs | EP98

    Is the White House trying to have it both ways on AI safety? Frank and Justin dig into the escalating Anthropic drama, Mythos, Pentagon AI deals, Google’s “all lawful uses” shift, and why AI 2027 is starting to feel uncomfortably accurate.

    Plus: Bernie Sanders pushes AI doom into mainstream politics, the US-China AI race gets questioned, Elon Musk’s OpenAI lawsuit takes a messy turn, humanoid robots start showing up in airports and logistics centres, and Beeple unveils some deeply disturbing robot art.

    01:27 Is the White House dodging its own Anthropic rules?
    03:34 Why won’t the White House share Mythos?
    05:39 Is Google fine with “all lawful uses” now?
    07:28 Is AI 2027 getting weirdly accurate?
    09:52 Is GPT-5.5 quietly as spicy as Mythos?
    12:07 Should AI be nationalised for everyone?
    14:38 Why is Bernie Sanders warning about AI doom?
    18:51 Is the China AI race just an excuse?
    21:34 Is AI alignment really a human problem?
    24:46 Is Elon’s OpenAI case backfiring?
    28:15 Are humanoid robots coming for your job?
    29:44 Is Beeple making nightmare robot art?

    ► SUBSCRIBE
    Don't forget to subscribe to our channel for more arguments

    ► LINKS TO CONTENT WE DISCUSSED
    Scoop: White House workshops plan to bring back Anthropic
    White House Opposes Anthropic’s Plan to Expand Access to Mythos Model
    Google Signs Classified AI Deal With Pentagon Amid Employee Opposition
    Bernie Sanders: The Existential Threat of AI and the Need for International Cooperation
    Elon Musk testifies that xAI trained Grok on OpenAI models
    Japan Airlines Inducts Humanoid Robots For Ground Handling Tasks
    Elon Musk and Mark Zuckerberg robot dogs at Beeple exhibit in Germany

    ► CONNECT WITH US
    For more in-depth discussions, connect Justin and Frank on LinkedIn.
    Justin: https://www.linkedin.com/in/justincollery/
    Frank: https://www.linkedin.com/in/frankprendergast/

    33 min
  4. APR 25

    GPT-5.5 Design Skills, Job Loss Optimism, and AI Face Theft Lawsuits | EP97

    Can GPT-5.5 and OpenAI’s new image model turn vague prompts into polished designs, comics, websites and even game concepts? Frank and Justin break down a huge OpenAI week, from Images 2.0 overtaking Nano Banana to ChatGPT building web page features Frank never even asked for.

    Plus: AI job disruption and workplace control, Europe’s push into world models, Anthropic’s Mythos security slip, and an AI micro-drama using a model’s likeness without consent.

    What will you use the new ChatGPT models to design?

    00:47 Are GPT-5.5 and Image 2.0 a knockout combo?
    02:17 Has Nano Banana lost the image crown?
    03:55 Can ChatGPT design without direction?
    07:49 Can ChatGPT script and illustrate comics now?
    09:21 Did GPT-5.5 just beat Frank at web design?
    11:33 Did ChatGPT add features Frank never asked for?
    14:32 Can ChatGPT develop our podcast game?
    17:37 Was OpenAI cherry-picking the benchmarks?
    18:49 Are vague prompts better than specific ones?
    19:39 Is intelligence really getting cheaper fast?
    21:01 Is the human bit the future of work?
    24:19 Should AI disruption happen faster?
    26:29 Will AI micromanage us before replacing us?
    28:06 Can taxing electricity offset AI disruption?
    28:41 Are world models Europe’s AI moment?
    30:06 Was Mythos hacked by guessing the URL?
    32:17 What if you were put in an AI drama without consent?

    ► SUBSCRIBE
    Don't forget to subscribe to our channel for more arguments

    ► LINKS TO CONTENT WE DISCUSSED
    Introducing ChatGPT Images 2.0
    Introducing GPT‑5.5
    The economist who was terrified of AI just found a rare reason for hope
    Nvidia CEO Says AI Will Be a Permanent Micromanaging Boss Who Never Stops Nagging You
    Now Meta will track what employees do on their computers to train its AI agents
    Introducing Odyssey-2 Max: Scaled World Simulation
    LeWorldModel: Stable End-to-End Joint-Embedding Predictive Architecture from Pixels
    A group of users leaked Anthropic’s AI model Mythos by reportedly guessing where it was located
    Scoop: NSA using Anthropic's Mythos despite blacklist
    'Clearly me' - Chinese AI drama accused of stealing faces

    ► CONNECT WITH US
    For more in-depth discussions, connect Justin and Frank on LinkedIn.
    Justin: https://www.linkedin.com/in/justincollery/
    Frank: https://www.linkedin.com/in/frankprendergast/

    36 min
  5. APR 17

    OpenAI vs Anthropic, Cyber Models, and AI Job Subcontracting | EP96

    Should dangerous AI cyber models be released widely, or kept behind a gate? Frank and Justin dig into the clearest split yet between OpenAI and Anthropic: OpenAI is moving towards broader access for verified users, while Anthropic’s instinct is to restrict the most capable systems to a smaller circle. That turns this episode into a sharp argument about AI safety, cybersecurity, who gets to defend themselves, and whether controlled access actually works once one major lab decides to open things up.

    Plus: why cheap AI could help hackers, whether OpenAI is shifting from models to platforms, if Anthropic overplayed its hand on coding, whether compute constraints are the real story, and why both companies may talk about regulation while still fighting for advantage. There is even a bleak look at the future of work, where AI might replace you, then subcontract you back in.

    Would you rather these cyber-capable models be widely available to verified users, or tightly restricted?

    00:44 Is OpenAI making Anthropic’s caution pointless?
    04:38 Should AI copy open source on security?
    06:44 Can OpenAI help defenders without helping hackers?
    11:46 Is low-cost AI a gift to hackers?
    13:29 Is OpenAI pivoting from models to platforms?
    18:03 Are OpenAI and Anthropic fighting for lock-in?
    22:11 Will free Salesforce tools cost you later?
    23:09 Did Anthropic make a mistake backing coding?
    24:52 Did Anthropic’s real mistake come down to compute?
    27:52 Are any AI companies actually trustworthy?
    32:17 Do AI companies want regulation or just say they do?
    35:29 Will AI take your job then subcontract you?

    ► SUBSCRIBE
    Don't forget to subscribe to our channel for more arguments

    ► LINKS TO CONTENT WE DISCUSSED
    Trusted access for the next era of cyber defense
    Introducing Claude Opus 4.7
    Read OpenAI’s latest internal memo about beating the competition — including Anthropic
    Sam Altman May Control Our Future—Can He Be Trusted?
    Sam Altman’s blog post in response to attack
    Introducing Humwork (YC P26)

    ► CONNECT WITH US
    For more in-depth discussions, connect Justin and Frank on LinkedIn.
    Justin: https://www.linkedin.com/in/justincollery/
    Frank: https://www.linkedin.com/in/frankprendergast/

    38 min
  6. APR 11

    Mythos Hacking Risk, Tristan Harris on Co-operation, AI Mario on Artemis | EP95

    Is Anthropic’s Mythos the clearest sign yet that AI safety is falling behind capability? Frank and Justin dig into the new Anthropic model, why its reported cyber capabilities feel like a step change, and what it could mean for software security, zero-day exploits, regulation, and the balance of power between OpenAI, Google and Anthropic.

    Plus: Tristan Harris’ call for international cooperation, and one very strange White House post featuring Mario and the moon.

    Are AI companies in the best position to decide who gets access to powerful models, or should regulation step in?

    00:18 How's your p(doom) looking?
    02:34 Why is Mythos such a big deal?
    06:00 Could Mythos hack your whole home network?
    07:02 Will OpenAI, Google and Anthropic dominate?
    09:48 Can Mythos fix the bugs it finds?
    13:43 Who should control a model like Mythos?
    17:59 Will Mythos spark a zero-day panic?
    20:23 Will only big companies get Mythos access?
    22:48 Did Anthropic just say “somebody stop us”?
    25:50 Is Tristan Harris right about AI cooperation?
    28:45 Is AI coordination doomed by competition?
    31:59 Did the White House think Mario was on Artemis?

    ► SUBSCRIBE
    Don't forget to subscribe to our channel for more arguments

    ► LINKS TO CONTENT WE DISCUSSED
    Project Glasswing: Securing critical software for the AI era
    Claude Mythos Preview System Card (PDF)
    Anthropic’s Restraint Is a Terrifying Warning Sign
    Why AI CEOs Are Building Bunkers - Tristan Harris
    The White House's bizarre Super Mario video leaves people perplexed

    ► CONNECT WITH US
    For more in-depth discussions, connect Justin and Frank on LinkedIn.
    Justin: https://www.linkedin.com/in/justincollery/
    Frank: https://www.linkedin.com/in/frankprendergast/

    36 min
  7. APR 3

    Claude Code Leak, Sanders vs Data Centres, AI Band on Tour | EP94

    Did Anthropic just give away the secret sauce behind Claude Code? Frank and Justin dig into the Claude Code source code leak, what was actually exposed, and whether it matters when AI products are evolving so quickly.

    Plus: poisoned code libraries raise fresh fears about AI agents blindly installing malware, OpenAI abruptly pulls the plug on Sora, Google pushes local AI forward with Gemma, and Bernie Sanders wades into the fight over new AI data centres.

    00:27 Did Claude Code just leak its secret sauce?
    03:20 Will AI agents blindly install your next hack?
    06:09 Are we moving too fast with agentic AI?
    08:21 Did OpenAI kill Sora to end side quests?
    10:29 Will OpenAI's Spud transform the economy?
    12:43 Is Sam Altman the wrong CEO for OpenAI now?
    16:42 Do Anthropic’s limits make Claude too costly?
    18:45 Did Google just make local AI much more viable?
    24:24 Is Bernie Sanders right about stopping new AI data centers?
    31:36 Can an AI metal band really go on tour?

    ► SUBSCRIBE
    Don't forget to subscribe to our channel for more arguments

    ► LINKS TO CONTENT WE DISCUSSED
    Claude’s code: Anthropic leaks source code for AI software engineering tool
    Axios NPM Supply Chain Compromise: Malicious Packages Deliver Remote Access Trojan
    Disney’s $1B Investment In OpenAI DOA As Sam Altman Pulls Sora Plug: “The Deal Is Not Moving Forward”
    OpenAI CEO Sam Altman reportedly teases a "very strong" model internally that can "really accelerate the economy"
    TurboQuant: Redefining AI efficiency with extreme compression
    Gemma 4: Byte for byte, the most capable open models
    New Bernie Sanders AI Safety Bill Would Halt Data Center Construction
    AI-generated band Neon Oni to perform in Japan

    ► CONNECT WITH US
    For more in-depth discussions, connect Justin and Frank on LinkedIn.
    Justin: https://www.linkedin.com/in/justincollery/
    Frank: https://www.linkedin.com/in/frankprendergast/

    34 min
  8. MAR 21

    AI Eviction Threats, OpenAI Side Quests, and Rosie’s Cancer Vaccine | EP93

    Should AI data centres be allowed to override private land rights? Frank and Justin dig into the growing fight over AI infrastructure, eminent domain, power lines, and whether the AI boom is starting to look like a land grab driven by energy demand, speculative deals, and the race to build ever bigger data centres.

    Plus: rumours that Microsoft could sue OpenAI, fresh talk that OpenAI is finally killing its side quests to focus on enterprise, a brutal fake Sam Altman interview in The Onion, and the remarkable story of Rosie the dog, whose cancer treatment was helped along by AI-assisted research.

    So where do you land on it: if AI infrastructure promised growth and jobs in your area, would you accept power lines crossing private land?

    00:32 Do AI projects justify taking private land?
    03:07 Is AI just clickbait in an eminent domain debate?
    07:37 What’s your price in the AI land grab?
    10:18 Will AI’s power hunger drive chip efficiency?
    13:06 Can AI spot land deals before developers do?
    14:54 Did Claude get Justin his travel refund?
    16:54 Is Microsoft about to sue OpenAI?
    18:34 Is OpenAI finally killing its side quests?
    26:16 What did ‘Sam Altman’ say in The Onion?
    27:58 Did AI help Rosie the dog fight cancer?

    ► SUBSCRIBE
    Don't forget to subscribe to our channel for more arguments

    ► LINKS TO CONTENT WE DISCUSSED
    US farmers are rejecting multimillion-dollar datacenter bids for their land: ‘I’m not for sale’
    A 600-acre AI data center could cost some Wisconsin residents their land
    Grab Your Betrayal-Themed Popcorn Buckets, Because Microsoft Is Threatening to Sue OpenAI
    OpenAI shuts down side quests
    The Onion’s Exclusive Interview With Sam Altman
    A man used AI to help make a cancer vaccine for his dog – an oncologist urges caution

    ► CONNECT WITH US
    For more in-depth discussions, connect Justin and Frank on LinkedIn.
    Justin: https://www.linkedin.com/in/justincollery/
    Frank: https://www.linkedin.com/in/frankprendergast/

    33 min

