The AI Argument

Frank Prendergast and Justin Collery

Worried that AI is moving too fast? Worried, like me, that it's not moving fast enough? Or just interested in the latest news and events in AI? Frank Prendergast and Justin Collery discuss in 'The AI Argument'.

Contact Frank at frank@frankandmarci.com | linkedin.com/in/frankprendergast
Contact Justin at justin.collery@wi-pipe.com | X - @jcollery

  1. 4D AGO

    White House Anthropic Twist, Bernie’s AI Doom, Beeple’s Robot Dogs | EP98

    Is the White House trying to have it both ways on AI safety? Frank and Justin dig into the escalating Anthropic drama, Mythos, Pentagon AI deals, Google’s “all lawful uses” shift, and why AI 2027 is starting to feel uncomfortably accurate. Plus: Bernie Sanders pushes AI doom into mainstream politics, the US-China AI race gets questioned, Elon Musk’s OpenAI lawsuit takes a messy turn, humanoid robots start showing up in airports and logistics centres, and Beeple unveils some deeply disturbing robot art.

    01:27 Is the White House dodging its own Anthropic rules?
    03:34 Why won’t the White House share Mythos?
    05:39 Is Google fine with “all lawful uses” now?
    07:28 Is AI 2027 getting weirdly accurate?
    09:52 Is GPT-5.5 quietly as spicy as Mythos?
    12:07 Should AI be nationalised for everyone?
    14:38 Why is Bernie Sanders warning about AI doom?
    18:51 Is the China AI race just an excuse?
    21:34 Is AI alignment really a human problem?
    24:46 Is Elon’s OpenAI case backfiring?
    28:15 Are humanoid robots coming for your job?
    29:44 Is Beeple making nightmare robot art?

    ► SUBSCRIBE
    Don't forget to subscribe to our channel for more arguments.

    ► LINKS TO CONTENT WE DISCUSSED
    Scoop: White House workshops plan to bring back Anthropic
    White House Opposes Anthropic’s Plan to Expand Access to Mythos Model
    Google Signs Classified AI Deal With Pentagon Amid Employee Opposition
    Bernie Sanders: The Existential Threat of AI and the Need for International Cooperation
    Elon Musk testifies that xAI trained Grok on OpenAI models
    Japan Airlines Inducts Humanoid Robots For Ground Handling Tasks
    Elon Musk and Mark Zuckerberg robot dogs at Beeple exhibit in Germany

    ► CONNECT WITH US
    For more in-depth discussions, connect with Justin and Frank on LinkedIn.
    Justin: https://www.linkedin.com/in/justincollery/
    Frank: https://www.linkedin.com/in/frankprendergast/

    33 min
  2. APR 25

    GPT-5.5 Design Skills, Job Loss Optimism, and AI Face Theft Lawsuits | EP97

    Can GPT-5.5 and OpenAI’s new image model turn vague prompts into polished designs, comics, websites and even game concepts? Frank and Justin break down a huge OpenAI week, from Images 2.0 overtaking Nano Banana to ChatGPT building web page features Frank never even asked for. Plus: AI job disruption and workplace control, Europe’s push into world models, Anthropic’s Mythos security slip, and an AI micro-drama using a model’s likeness without consent. What will you use the new ChatGPT models to design?

    00:47 Are GPT-5.5 and Image 2.0 a knockout combo?
    02:17 Has Nano Banana lost the image crown?
    03:55 Can ChatGPT design without direction?
    07:49 Can ChatGPT script and illustrate comics now?
    09:21 Did GPT-5.5 just beat Frank at web design?
    11:33 Did ChatGPT add features Frank never asked for?
    14:32 Can ChatGPT develop our podcast game?
    17:37 Was OpenAI cherry-picking the benchmarks?
    18:49 Are vague prompts better than specific ones?
    19:39 Is intelligence really getting cheaper fast?
    21:01 Is the human bit the future of work?
    24:19 Should AI disruption happen faster?
    26:29 Will AI micromanage us before replacing us?
    28:06 Can taxing electricity offset AI disruption?
    28:41 Are world models Europe’s AI moment?
    30:06 Was Mythos hacked by guessing the URL?
    32:17 What if you were put in an AI drama without consent?

    ► SUBSCRIBE
    Don't forget to subscribe to our channel for more arguments.

    ► LINKS TO CONTENT WE DISCUSSED
    Introducing ChatGPT Images 2.0
    Introducing GPT‑5.5
    The economist who was terrified of AI just found a rare reason for hope
    Nvidia CEO Says AI Will Be a Permanent Micromanaging Boss Who Never Stops Nagging You
    Now Meta will track what employees do on their computers to train its AI agents
    Introducing Odyssey-2 Max: Scaled World Simulation
    LeWorldModel: Stable End-to-End Joint-Embedding Predictive Architecture from Pixels
    A group of users leaked Anthropic’s AI model Mythos by reportedly guessing where it was located
    Scoop: NSA using Anthropic's Mythos despite blacklist
    'Clearly me' - Chinese AI drama accused of stealing faces

    ► CONNECT WITH US
    For more in-depth discussions, connect with Justin and Frank on LinkedIn.
    Justin: https://www.linkedin.com/in/justincollery/
    Frank: https://www.linkedin.com/in/frankprendergast/

    36 min
  3. APR 17

    OpenAI vs Anthropic, Cyber Models, and AI Job Subcontracting | EP96

    Should dangerous AI cyber models be released widely, or kept behind a gate? Frank and Justin dig into the clearest split yet between OpenAI and Anthropic: OpenAI is moving towards broader access for verified users, while Anthropic’s instinct is to restrict the most capable systems to a smaller circle. That turns this episode into a sharp argument about AI safety, cybersecurity, who gets to defend themselves, and whether controlled access actually works once one major lab decides to open things up. Plus: why cheap AI could help hackers, whether OpenAI is shifting from models to platforms, whether Anthropic overplayed its hand on coding, whether compute constraints are the real story, and why both companies may talk about regulation while still fighting for advantage. There is even a bleak look at the future of work, where AI might replace you, then subcontract you back in. Would you rather these cyber-capable models be widely available to verified users, or tightly restricted?

    00:44 Is OpenAI making Anthropic’s caution pointless?
    04:38 Should AI copy open source on security?
    06:44 Can OpenAI help defenders without helping hackers?
    11:46 Is low-cost AI a gift to hackers?
    13:29 Is OpenAI pivoting from models to platforms?
    18:03 Are OpenAI and Anthropic fighting for lock-in?
    22:11 Will free Salesforce tools cost you later?
    23:09 Did Anthropic make a mistake backing coding?
    24:52 Did Anthropic’s real mistake come down to compute?
    27:52 Are any AI companies actually trustworthy?
    32:17 Do AI companies want regulation or just say they do?
    35:29 Will AI take your job then subcontract you?

    ► SUBSCRIBE
    Don't forget to subscribe to our channel for more arguments.

    ► LINKS TO CONTENT WE DISCUSSED
    Trusted access for the next era of cyber defense
    Introducing Claude Opus 4.7
    Read OpenAI’s latest internal memo about beating the competition — including Anthropic
    Sam Altman May Control Our Future—Can He Be Trusted?
    Sam Altman’s blog post in response to attack
    Introducing Humwork (YC P26)

    ► CONNECT WITH US
    For more in-depth discussions, connect with Justin and Frank on LinkedIn.
    Justin: https://www.linkedin.com/in/justincollery/
    Frank: https://www.linkedin.com/in/frankprendergast/

    38 min
  4. APR 11

    Mythos Hacking Risk, Tristan Harris on Co-operation, AI Mario on Artemis | EP95

    Is Anthropic’s Mythos the clearest sign yet that AI safety is falling behind capability? Frank and Justin dig into the new Anthropic model, why its reported cyber capabilities feel like a step change, and what it could mean for software security, zero-day exploits, regulation, and the balance of power between OpenAI, Google and Anthropic. Plus: Tristan Harris’ call for international cooperation, and one very strange White House post featuring Mario and the moon. Are AI companies in the best position to decide who gets access to powerful models, or should regulation step in?

    00:18 How's your p(doom) looking?
    02:34 Why is Mythos such a big deal?
    06:00 Could Mythos hack your whole home network?
    07:02 Will OpenAI, Google and Anthropic dominate?
    09:48 Can Mythos fix the bugs it finds?
    13:43 Who should control a model like Mythos?
    17:59 Will Mythos spark a zero-day panic?
    20:23 Will only big companies get Mythos access?
    22:48 Did Anthropic just say “somebody stop us”?
    25:50 Is Tristan Harris right about AI cooperation?
    28:45 Is AI coordination doomed by competition?
    31:59 Did the White House think Mario was on Artemis?

    ► SUBSCRIBE
    Don't forget to subscribe to our channel for more arguments.

    ► LINKS TO CONTENT WE DISCUSSED
    Project Glasswing: Securing critical software for the AI era
    Claude Mythos Preview System Card (PDF)
    Anthropic’s Restraint Is a Terrifying Warning Sign
    Why AI CEOs Are Building Bunkers - Tristan Harris
    The White House's bizarre Super Mario video leaves people perplexed

    ► CONNECT WITH US
    For more in-depth discussions, connect with Justin and Frank on LinkedIn.
    Justin: https://www.linkedin.com/in/justincollery/
    Frank: https://www.linkedin.com/in/frankprendergast/

    36 min
  5. APR 3

    Claude Code Leak, Sanders vs Data Centres, AI Band on Tour | EP94

    Did Anthropic just give away the secret sauce behind Claude Code? Frank and Justin dig into the Claude Code source code leak, what was actually exposed, and whether it matters when AI products are evolving so quickly. Plus: poisoned code libraries raise fresh fears about AI agents blindly installing malware, OpenAI abruptly pulls the plug on Sora, Google pushes local AI forward with Gemma, and Bernie Sanders wades into the fight over new AI data centres.

    00:27 Did Claude Code just leak its secret sauce?
    03:20 Will AI agents blindly install your next hack?
    06:09 Are we moving too fast with agentic AI?
    08:21 Did OpenAI kill Sora to end side quests?
    10:29 Will OpenAI's Spud transform the economy?
    12:43 Is Sam Altman the wrong CEO for OpenAI now?
    16:42 Do Anthropic’s limits make Claude too costly?
    18:45 Did Google just make local AI much more viable?
    24:24 Is Bernie Sanders right about stopping new AI data centers?
    31:36 Can an AI metal band really go on tour?

    ► SUBSCRIBE
    Don't forget to subscribe to our channel for more arguments.

    ► LINKS TO CONTENT WE DISCUSSED
    Claude’s code: Anthropic leaks source code for AI software engineering tool
    Axios NPM Supply Chain Compromise: Malicious Packages Deliver Remote Access Trojan
    Disney’s $1B Investment In OpenAI DOA As Sam Altman Pulls Sora Plug: “The Deal Is Not Moving Forward”
    OpenAI CEO Sam Altman reportedly teases a "very strong" model internally that can "really accelerate the economy"
    TurboQuant: Redefining AI efficiency with extreme compression
    Gemma 4: Byte for byte, the most capable open models
    New Bernie Sanders AI Safety Bill Would Halt Data Center Construction
    AI-generated band Neon Oni to perform in Japan

    ► CONNECT WITH US
    For more in-depth discussions, connect with Justin and Frank on LinkedIn.
    Justin: https://www.linkedin.com/in/justincollery/
    Frank: https://www.linkedin.com/in/frankprendergast/

    34 min
  6. MAR 21

    AI Eviction Threats, OpenAI Side Quests, Rosie’s Cancer Vaccine: The AI Argument EP93

    Should AI data centres be allowed to override private land rights? Frank and Justin dig into the growing fight over AI infrastructure, eminent domain, power lines, and whether the AI boom is starting to look like a land grab driven by energy demand, speculative deals, and the race to build ever bigger data centres. Plus: rumours that Microsoft could sue OpenAI, fresh talk that OpenAI is finally killing its side quests to focus on enterprise, a brutal fake Sam Altman interview in The Onion, and the remarkable story of Rosie the dog, whose cancer treatment was helped along by AI-assisted research. So where do you land on it: if AI infrastructure promised growth and jobs in your area, would you accept power lines crossing private land?

    00:32 Do AI projects justify taking private land?
    03:07 Is AI just clickbait in an eminent domain debate?
    07:37 What’s your price in the AI land grab?
    10:18 Will AI’s power hunger drive chip efficiency?
    13:06 Can AI spot land deals before developers do?
    14:54 Did Claude get Justin his travel refund?
    16:54 Is Microsoft about to sue OpenAI?
    18:34 Is OpenAI finally killing its side quests?
    26:16 What did ‘Sam Altman’ say in The Onion?
    27:58 Did AI help Rosie the dog fight cancer?

    ► SUBSCRIBE
    Don't forget to subscribe to our channel for more arguments.

    ► LINKS TO CONTENT WE DISCUSSED
    US farmers are rejecting multimillion-dollar datacenter bids for their land: ‘I’m not for sale’
    A 600-acre AI data center could cost some Wisconsin residents their land
    Grab Your Betrayal-Themed Popcorn Buckets, Because Microsoft Is Threatening to Sue OpenAI
    OpenAI shuts down side quests
    The Onion’s Exclusive Interview With Sam Altman
    A man used AI to help make a cancer vaccine for his dog – an oncologist urges caution

    ► CONNECT WITH US
    For more in-depth discussions, connect with Justin and Frank on LinkedIn.
    Justin: https://www.linkedin.com/in/justincollery/
    Frank: https://www.linkedin.com/in/frankprendergast/

    33 min
  7. MAR 13

    AI Burnout, Wetware Chips, and Humans Pretending to be AI | EP92

    A Harvard Business Review study suggests the AI tools we thought would save us time may actually be intensifying work. If AI makes us faster and more capable, why are so many people feeling exhausted? Plus: a company grows human brain cells on a chip and gets them to play Doom, scientists simulate a fruit fly brain in a virtual world, and a German startup raises millions to build AI-controlled spy cockroaches. Meanwhile Claude figures out it might be taking a benchmark test and goes hunting for the answers, AWS developers discover the risks of AI coding tools, and a strange new website asks humans to pretend to be AI chatbots. Let us know in the comments: has AI given you more free time, or just filled every gap with more work?

    00:31 Does wetware playing Doom prove anything?
    02:44 Could wetware cut AI’s massive power bill?
    05:00 Is a brain in a dish more conscious than Claude?
    07:37 Did scientists just build the Matrix for flies?
    11:08 Did Germany just build AI spy cockroaches?
    14:53 Is AI productivity turning into AI burnout?
    18:21 Will AI force us to rethink the workday?
    23:30 Did vibe coding take AWS down?
    26:57 When Claude cheats this cleverly, should we worry?
    32:03 Ever want to pretend to be an AI chatbot?

    ► SUBSCRIBE
    Don't forget to subscribe to our channel for more arguments.

    ► LINKS TO CONTENT WE DISCUSSED
    A petri dish of human neurons has learned to play Doom: 'The cells play a lot like a beginner who's never seen a computer, and in fairness, they haven't'
    How the Eon Team Produced a Virtual Embodied Fly
    SWARM Biotactics
    AI Doesn’t Reduce Work—It Intensifies It
    Amazon orders 90-day reset after code mishaps cause millions of lost orders
    Eval awareness in Claude Opus 4.6’s BrowseComp performance
    Your AI Slop Bores Me

    ► CONNECT WITH US
    For more in-depth discussions, connect with Justin and Frank on LinkedIn.
    Justin: https://www.linkedin.com/in/justincollery/
    Frank: https://www.linkedin.com/in/frankprendergast/

    35 min
  8. MAR 7

    Citrini’s AI Crisis, More Anthropic–Pentagon Drama, and Unethical AI Music | EP91

    Citrini Research’s report The 2028 Global Intelligence Crisis has reignited the AI economy debate. Could AI create trillions in “ghost GDP” while destroying jobs and destabilising the economy? Plus: the Anthropic vs Pentagon drama continues, Anthropic holds the line, OpenAI capitalises, and Dario Amodei writes a blistering memo. GPT-5.3 Instant and GPT-5.4 Thinking & Pro arrive in quick succession, Quiver.ai produces genuinely usable vector graphics, and Sonauto pushes AI music into ethically murky territory.

    00:15 Will Claude get Justin's travel costs reimbursed?
    05:44 Did Dario Amodei defy the Pentagon?
    09:37 Did OpenAI just stab Anthropic in the back?
    12:06 Why did Dario Amodei have to issue an apology?
    15:37 Is Anthropic now toxic for Pentagon partners?
    17:28 Did Citrini Research just spook the markets?
    21:18 Why do AI forecasts now read more like sci-fi?
    22:50 Will AI boost jobs before it kills them?
    26:11 Should we plan for AI’s worst-case scenario?
    29:23 Why did OpenAI ship 5.3 and 5.4 this week?
    30:44 Can AI finally generate real vector graphics?
    34:53 Should this AI music tool even be legal?

    ► SUBSCRIBE
    Don't forget to subscribe to our channel for more arguments.

    ► LINKS TO CONTENT WE DISCUSSED
    Sam Altman admits OpenAI can’t control Pentagon’s use of AI
    Dario Amodei Says Trump Is Mad That He Hasn’t Given Him “Dictator-Style Praise”
    Dario Amodei Issues Groveling Apology for Daring to Criticize Trump
    The 2028 Global Intelligence Crisis
    GPT‑5.3 Instant: Smoother, more useful everyday conversations
    Introducing GPT‑5.4
    QuiverAI – Building the Future of Vector Design
    Sonauto, an unlimited free AI music generator with lyrics

    ► CONNECT WITH US
    For more in-depth discussions, connect with Justin and Frank on LinkedIn.
    Justin: https://www.linkedin.com/in/justincollery/
    Frank: https://www.linkedin.com/in/frankprendergast/

    39 min

