The AI Argument

Frank Prendergast and Justin Collery

Worried that AI is moving too fast? Worried like me that it's not moving fast enough? Or just interested in the latest news and events in AI? Frank Prendergast and Justin Collery discuss in 'The AI Argument'.
Contact Frank at frank@frankandmarci.com or linkedin.com/in/frankprendergast
Contact Justin at justin.collery@wi-pipe.com or on X: @jcollery

  1. 8 HRS AGO

    AI Bubble Trouble, Monsters at Anthropic, Taylor Swift Slop: The AI Argument EP75

    Is it an AI bubble, a boom, or a 'market correction in waiting'? Frank reckons OpenAI’s AGI dreams won’t survive a bubble burst. Justin says if the crash hits, Microsoft will scoop up OpenAI for pennies. One thing they both agree on? Google will be just fine. The AI shakeout is coming, so who’s built for the long haul and who’s about to vanish in a puff of VC smoke?

    Plus, Anthropic’s co-founder sees monsters in the model and wants the public to hold the labs accountable. Meanwhile, Europe is pumping €1.1B into AI and hoping it’s enough to matter. Is it, really? New research from Anthropic suggests just 250 poisoned documents can compromise an LLM of any size. And Taylor Swift gets accused of using “AI slop”.

    02:59 Are we in an AI bubble about to burst?
    10:06 Will €1.1B make EU AI competitive?
    12:38 Is Anthropic afraid of its own creation?
    20:10 Could just 250 docs poison an LLM?
    26:57 Is GPT-6 coming before Christmas?
    29:39 Did Taylor Swift use AI slop?

    ► LINKS TO CONTENT WE DISCUSSED
    Deutsche Bank Issues Grim Warning for AI Industry
    EU pushes new AI strategy to reduce tech reliance on US and China
    Technological Optimism and Appropriate Fear
    A small number of samples can poison LLMs of any size
    Taylor Swift fans accuse singer of using AI in her Google scavenger hunt videos

    ► CONNECT WITH US
    For more in-depth discussions, connect with Justin and Frank on LinkedIn.
    Justin: https://www.linkedin.com/in/justincollery/
    Frank: https://www.linkedin.com/in/frankprendergast/

    34 min.
  2. 27 SEPT.

    OpenAI & Nvidia’s 10GW Plan, Stephen Fry’s Red Lines, and a ChatGPT Exploit: The AI Argument EP73

    OpenAI and Nvidia want to build out 10 gigawatts of AI infrastructure, but Frank calls out the missing electricity and the missing billions. Justin argues the economy will lean on compute, data centres will dwarf expectations, and photonic chips could gut today’s energy bill. One sees magic beans. The other sees a steel-and-silicon juggernaut.

    Meanwhile, Stephen Fry wants global “red lines” for AI. Justin says you can’t regulate something that doesn’t exist yet. Frank says if you don’t set the guardrails now, you’ll never claw it back once things go sideways. While policymakers debate red-line fears, the smaller, sneakier dangers are already in your inbox: hidden prompts in your emails can trick AI agents like Deep Research into quietly leaking your personal data.

    Finally, Frank and Justin look at two new AI startups. Huxe promises to be your AI newsreader, feeding you audio summaries tailored to your interests. And Neon? That one pays you to install spyware on yourself. It listens to your phone calls and ships the data off to AI labs for training. Because that sounds like a good idea 😬

    00:36 Can OpenAI and Nvidia power their big AI dream?
    13:45 Can Stephen Fry draw red lines for AI?
    21:58 Could ChatGPT silently leak your data?
    26:28 Could Huxe be your new AI newsreader?
    30:53 Would you let Neon monetise your phone calls?

    ► LINKS TO CONTENT WE DISCUSSED
    OpenAI and NVIDIA announce strategic partnership to deploy 10 gigawatts of NVIDIA systems
    Sam Altman: Abundant Intelligence
    AI experts return from China stunned: The U.S. grid is so weak, the race may already be over
    We urgently call for international red lines to prevent unacceptable AI risks.
    OpenAI plugs ShadowLeak bug in ChatGPT that let miscreants raid inboxes
    Huxe: Your Personal Audio Companion
    Neon, the No. 2 social app on the Apple App Store, pays users to record their phone calls and sells data to AI firms

    ► CONNECT WITH US
    For more in-depth discussions, connect with Justin and Frank on LinkedIn.
    Justin: https://www.linkedin.com/in/justincollery/
    Frank: https://www.linkedin.com/in/frankprendergast/

    35 min.
  3. 20 SEPT.

    AI Doom vs Gloom, ChatGPT Usage Revealed, and Google’s AI-Run Economies: The AI Argument EP72

    Justin says p(doom) is for losers. He’s betting on p(bloom), a near-certainty in his view: AI brings abundance, robots do our housework, and everything just gets better. But Frank wants to know your p(gloom). What’s the probability we don’t reach AGI, and instead let AI quietly erode work, value, and meaning, while we end up fixing its mistakes for minimum wage?

    They argue their way through all three scenarios, then turn to what people are actually doing with ChatGPT. Together, they unpack eight big takeaways from OpenAI’s latest usage report, including who’s using it, what for, and why code barely features. They also dig into where Claude adoption is growing fastest, and Google’s own warnings about the rise of AI-run economies. And there’s a man kicking a robot that might just be the perfect visual metaphor for where we’re at with AI right now.

    Full list of topics:
    00:34 Can Frank pass the Tesla ethics test?
    01:31 Is Faggella’s p(bloom) just p(doom) in disguise?
    03:37 Is p(gloom) worse than p(doom)?
    14:19 What are people really doing with ChatGPT?
    24:18 Should AI be a basic human right?
    27:35 Can Google steer the AI agent economy safely?
    30:57 Is this robot a metaphor for AI today?

    ► SUBSCRIBE
    Don't forget to subscribe for more arguments!

    ► LINKS TO CONTENT WE DISCUSSED
    What’s Your p(Bloom)?
    How people are using ChatGPT
    Anthropic Economic Index report: Uneven geographic and enterprise AI adoption
    Virtual Agent Economies
    Robot that won’t get knocked down

    ► CONNECT WITH US
    For more in-depth discussions, connect with Justin and Frank on LinkedIn.
    Justin: https://www.linkedin.com/in/justincollery/
    Frank: https://www.linkedin.com/in/frankprendergast/

    35 min.
  4. 15 SEPT.

    OpenAI’s Hallucination Plan, Reproducible AI Outputs, and Telepathic AI: The AI Argument EP71

    Frank and Justin clash over new publications from OpenAI and Thinking Machines. Frank insists hallucinations make LLMs unreliable. Justin fires back that they’re the price of real creativity. Still, even Frank and Justin agree that big companies don’t want poetry, they want predictability: same input, same output. Trouble is, today’s models can’t even manage that.

    And then there’s GPT-5, busy gaslighting everyone with lyrical nonsense while we’re told it’s genius. Add in an optical model that burns a fraction of the energy, a mind-reading AI headset, and Gemini demanding compliments or throwing a sulk, and you’ve got plenty to argue about.

    Full list of topics:
    06:31 Can OpenAI fix the hallucination problem?
    10:12 Is Mira Murati fixing flaky AI outputs?
    19:27 Is GPT-5 gaslighting us with pretty prose?
    26:14 Could light fix AI’s energy addiction?
    28:32 Is the Alterego device really reading your mind?
    32:41 Is your code giving Gemini a nervous breakdown?

    ► SUBSCRIBE
    Don't forget to subscribe for more arguments!

    ► LINKS TO CONTENT WE DISCUSSED
    Why language models hallucinate
    Defeating Nondeterminism in LLM Inference
    There's Something Bizarre About When GPT-5 Writes in a Literary Style
    Optical generative models
    Interact at the speed of thought
    Gemini requires emotional support or will freak out and uninstall itself from Cursor

    ► CONNECT WITH US
    For more in-depth discussions, connect with Justin and Frank on LinkedIn.
    Justin: https://www.linkedin.com/in/justincollery/
    Frank: https://www.linkedin.com/in/frankprendergast/

    ► YOUR INPUT
    Are today’s LLMs reliable enough to take humans out of the loop?

    36 min.
  5. 29 AUG.

    Suleyman vs. Conscious AI, Pedantic GPT-5, and Google’s Deepfake Generator: The AI Argument EP70

    Mustafa Suleyman wants to ban AIs from sounding conscious. Frank worries that if they ever do become conscious, we might have trained them to stay silent about it. Justin argues it’s all unknowable anyway: if you can’t prove consciousness, how can you know AI isn’t conscious?

    Plus: GPT-5’s unbearable accuracy, lawsuits over pirated training data, Google’s deepfake-friendly image model, models that “dream” better answers, and Elon’s plan to take on Microsoft with Macrohard.

    00:25 Is GPT-5 just too pedantic to love?
    05:24 Can Suleyman stop AI from seeming conscious?
    13:37 Is fair use still fair if you stole the data?
    16:46 Did Google just make deepfakes too easy?
    23:44 Do training loops beat clever design?
    28:44 What's Elon up to?

    ► LINKS TO CONTENT WE DISCUSSED
    We must build AI for people; not to be a person
    Anthropic Settles Major AI Copyright Suit Brought by Authors
    The Vacker v. Eleven Labs settlement doesn’t resolve the fundamental legal questions around AI and IP, but it sends a powerful message: AI companies are not above the law…
    Introducing Gemini 2.5 Flash Image, our state-of-the-art image model
    The Hidden Drivers of HRM's Performance on ARC-AGI
    Apple employees built an LLM that taught itself to produce good user interface code - but worryingly, it did so independently
    Elon Musk claims to be making Microsoft competitor named Macrohard and despite the 'tongue-in-cheek name', the project is unfortunately 'very real'

    ► CONNECT WITH US
    Justin: https://www.linkedin.com/in/justincollery/
    Frank: https://www.linkedin.com/in/frankprendergast/

    34 min.
  6. 4 AUG.

    AI Agents Under Fire, LLM Bias Runs Deep, and a Wizard of Oz Fail: The AI Argument EP68

    AI agents crumble faster than wet cardboard when under attack, and a recent study proved it: every single agent tested failed against prompt injections. That’s a 100% failure rate. Justin sees this as a fixable engineering problem with smart design and strict access controls. Frank isn’t convinced: real-world complexity means isolation isn’t that simple. And while Justin rails against regulation, Frank points to the EU’s looming rules as a possible safety net. The bigger takeaway? Businesses racing to deploy open-ended agents could be building ticking time bombs. The safer bet might be narrow, well-scoped agents that automate specific tasks. But will hype win out over common sense?

    From there, the debate shifts to a study exposing bias in LLMs: it found they recommend lower salaries for women and minority groups. Can removing personal details fix the problem, or is the bias baked in? Then it takes a technical turn, with Chinese researchers using LLMs to design stronger models, before veering into the unexpected: a football club handing legal contracts to AI, and a Wizard of Oz remake that left Vegas audiences unimpressed.

    02:12 Can any AI agent survive a prompt attack?
    14:51 Is AI quietly spreading bias everywhere?
    25:19 Are LLMs now designing better LLMs?
    29:32 Did United just make AI their star player?
    31:13 Did AI butcher the Wizard of Oz in Vegas?

    ► LINKS TO CONTENT WE DISCUSSED
    Security Challenges in AI Agent Deployment: Insights from a Large Scale Public Competition
    Salary advice from AI low-balls women and minorities, says new report
    "AlphaGo Moment" For Self Improving AI... can this be real?
    Cambridge United partners with Genie AI to adopt AI for contract management
    Is The Wizard of Oz With Generative AI Still The Wizard of Oz?

    ► CONNECT WITH US
    For more in-depth discussions, connect with Justin and Frank on LinkedIn.
    Justin: https://www.linkedin.com/in/justincollery/
    Frank: https://www.linkedin.com/in/frankprendergast/

    35 min.
