The AI Argument

Frank Prendergast and Justin Collery

Worried that AI is moving too fast? Worried, like me, that it's not moving fast enough? Or just interested in the latest news and events in AI? Frank Prendergast and Justin Collery discuss in 'The AI Argument'.

Contact Frank at frank@frankandmarci.com
linkedin.com/in/frankprendergast
Contact Justin at justin.collery@wi-pipe.com
X - @jcollery

  1. 27 September

    OpenAI & Nvidia’s 10GW Plan, Stephen Fry’s Red Lines, and a ChatGPT Exploit: The AI Argument EP73

    OpenAI and Nvidia want to build out 10 gigawatts of AI infrastructure, but Frank calls out the missing electricity and the missing billions. Justin argues the economy will lean on compute, data centres will dwarf expectations, and photonic chips could gut today's energy bill. One sees magic beans. The other sees a steel-and-silicon juggernaut.

    Meanwhile, Stephen Fry wants global "red lines" for AI. Justin says you can't regulate something that doesn't exist yet. Frank says if you don't set the guardrails now, you'll never claw it back once things go sideways.

    While policymakers debate red-line fears, the smaller, sneakier dangers are already in your inbox. Hidden prompts in your emails can trick AI agents like Deep Research into quietly leaking your personal data.

    Finally, Frank and Justin look at two new AI startups. Huxe promises to be your AI newsreader, feeding you audio summaries tailored to your interests. And Neon? That one pays you to install spyware on yourself. It listens to your phone calls and ships the data off to AI labs for training. Because that sounds like a good idea 😬

    00:36 Can OpenAI and Nvidia power their big AI dream?
    13:45 Can Stephen Fry draw red lines for AI?
    21:58 Could ChatGPT silently leak your data?
    26:28 Could Huxe be your new AI newsreader?
    30:53 Would you let Neon monetise your phone calls?

    ► LINKS TO CONTENT WE DISCUSSED
    OpenAI and NVIDIA announce strategic partnership to deploy 10 gigawatts of NVIDIA systems
    Sam Altman: Abundant Intelligence
    AI experts return from China stunned: The U.S. grid is so weak, the race may already be over
    We urgently call for international red lines to prevent unacceptable AI risks.
    OpenAI plugs ShadowLeak bug in ChatGPT that let miscreants raid inboxes
    Huxe: Your Personal Audio Companion
    Neon, the No. 2 social app on the Apple App Store, pays users to record their phone calls and sells data to AI firms

    ► CONNECT WITH US
    For more in-depth discussions, connect with Justin and Frank on LinkedIn.
    Justin: https://www.linkedin.com/in/justincollery/
    Frank: https://www.linkedin.com/in/frankprendergast/

    35 min
  2. 20 September

    AI Doom vs Gloom, ChatGPT Usage Revealed, and Google’s AI-Run Economies: The AI Argument EP72

    Justin says p(doom) is for losers. He's betting on p(bloom), a near-certainty in his view: AI brings abundance, robots do our housework, and everything just gets better. But Frank wants to know your p(gloom). What's the probability we don't reach AGI, and instead let AI quietly erode work, value, and meaning, while we end up fixing its mistakes for minimum wage?

    They argue their way through all three scenarios, then turn to what people are actually doing with ChatGPT. Together, they unpack eight big takeaways from OpenAI's latest usage report, including who's using it, what for, and why code barely features. They also dig into where Claude adoption is growing fastest, and Google's own warnings about the rise of AI-run economies.

    And there's a man kicking a robot that might just be the perfect visual metaphor for where we're at with AI right now.

    Full list of topics:
    00:34 Can Frank pass the Tesla ethics test?
    01:31 Is Faggella's p(bloom) just p(doom) in disguise?
    03:37 Is p(gloom) worse than p(doom)?
    14:19 What are people really doing with ChatGPT?
    24:18 Should AI be a basic human right?
    27:35 Can Google steer the AI agent economy safely?
    30:57 Is this robot a metaphor for AI today?

    ► SUBSCRIBE
    Don't forget to subscribe for more arguments.

    ► LINKS TO CONTENT WE DISCUSSED
    What's Your p(Bloom)?
    How people are using ChatGPT
    Anthropic Economic Index report: Uneven geographic and enterprise AI adoption
    Virtual Agent Economies
    Robot that won't get knocked down

    ► CONNECT WITH US
    For more in-depth discussions, connect with Justin and Frank on LinkedIn.
    Justin: https://www.linkedin.com/in/justincollery/
    Frank: https://www.linkedin.com/in/frankprendergast/

    35 min
  3. 15 September

    OpenAI’s Hallucination Plan, Reproducible AI Outputs, and Telepathic AI: The AI Argument EP71

    Frank and Justin clash over new publications from OpenAI and Thinking Machines. Frank insists hallucinations make LLMs unreliable. Justin fires back that they're the price of real creativity. Still, even Frank and Justin agree that big companies don't want poetry, they want predictability. Same input, same output. Trouble is… today's models can't even manage that.

    And then there's GPT-5, busy gaslighting everyone with lyrical nonsense while telling us it's genius. Add in an optical model that burns a fraction of the energy, a mind-reading AI headset, and Gemini demanding compliments or throwing a sulk, and you've got plenty to argue about.

    Full list of topics:
    06:31 Can OpenAI fix the hallucination problem?
    10:12 Is Mira Murati fixing flaky AI outputs?
    19:27 Is GPT-5 gaslighting us with pretty prose?
    26:14 Could light fix AI's energy addiction?
    28:32 Is the Alterego device really reading your mind?
    32:41 Is your code giving Gemini a nervous breakdown?

    ► SUBSCRIBE
    Don't forget to subscribe for more arguments!

    ► LINKS TO CONTENT WE DISCUSSED
    Why language models hallucinate
    Defeating Nondeterminism in LLM Inference
    There's Something Bizarre About When GPT-5 Writes in a Literary Style
    Optical generative models
    Interact at the speed of thought
    Gemini requires emotional support or will freak out and uninstall itself from Cursor

    ► CONNECT WITH US
    For more in-depth discussions, connect with Justin and Frank on LinkedIn.
    Justin: https://www.linkedin.com/in/justincollery/
    Frank: https://www.linkedin.com/in/frankprendergast/

    ► YOUR INPUT
    Are today's LLMs reliable enough to take humans out of the loop?

    36 min
  4. 29 August

    Suleyman vs. Conscious AI, Pedantic GPT-5, and Google’s Deepfake Generator: The AI Argument EP70

    Mustafa Suleyman wants to ban AIs from sounding conscious. Frank worries that if they ever do become conscious, we might have trained them to stay silent about it. Justin argues it's all unknowable anyway: if you can't prove consciousness, how can you know AI isn't conscious?

    Plus: GPT-5's unbearable accuracy, lawsuits over pirated training data, Google's deepfake-friendly image model, models that "dream" better answers, and Elon's plan to take on Microsoft with MacroHard.

    00:25 Is GPT-5 just too pedantic to love?
    05:24 Can Suleyman stop AI from seeming conscious?
    13:37 Is fair use still fair if you stole the data?
    16:46 Did Google just make deepfakes too easy?
    23:44 Do training loops beat clever design?
    28:44 What's Elon up to?

    ► LINKS TO CONTENT WE DISCUSSED
    We must build AI for people; not to be a person
    Anthropic Settles Major AI Copyright Suit Brought by Authors
    The Vacker v. Eleven Labs settlement doesn't resolve the fundamental legal questions around AI and IP, but it sends a powerful message: AI companies are not above the law…
    Introducing Gemini 2.5 Flash Image, our state-of-the-art image model
    The Hidden Drivers of HRM's Performance on ARC-AGI
    Apple employees built an LLM that taught itself to produce good user interface code - but worryingly, it did so independently
    Elon Musk claims to be making Microsoft competitor named Macrohard and despite the 'tongue-in-cheek name', the project is unfortunately 'very real'

    ► CONNECT WITH US
    Justin: https://www.linkedin.com/in/justincollery/
    Frank: https://www.linkedin.com/in/frankprendergast/

    34 min
  5. 4 August

    AI Agents Under Fire, LLM Bias Runs Deep, and a Wizard of Oz Fail: The AI Argument EP68

    AI agents crumble faster than wet cardboard when under attack. A recent study proved it: every single agent tested failed against prompt injections. That's a 100% failure rate. Justin sees this as a fixable engineering problem with smart design and strict access controls. Frank isn't convinced: real-world complexity means isolation isn't that simple. And while Justin rails against regulation, Frank points to the EU's looming rules as a possible safety net.

    The bigger takeaway? Businesses racing to deploy open-ended agents could be building ticking time bombs. The safer bet might be narrow, well-scoped agents that automate specific tasks. But will hype win over common sense?

    From there, the debate shifts to a study exposing bias in LLMs. It found they recommend lower salaries for women and minority groups. Can removing personal details fix the problem, or is the bias baked in?

    Then it takes a technical turn with Chinese researchers using LLMs to design stronger models, before veering into the unexpected: a football club handing legal contracts to AI, and a Wizard of Oz remake that left Vegas audiences unimpressed.

    02:12 Can any AI agent survive a prompt attack?
    14:51 Is AI quietly spreading bias everywhere?
    25:19 Are LLMs now designing better LLMs?
    29:32 Did United just make AI their star player?
    31:13 Did AI butcher the Wizard of Oz in Vegas?

    ► LINKS TO CONTENT WE DISCUSSED
    Security Challenges in AI Agent Deployment: Insights from a Large Scale Public Competition
    Salary advice from AI low-balls women and minorities, says new report
    "AlphaGo Moment" For Self Improving AI... can this be real?
    Cambridge United partners with Genie AI to adopt AI for contract management
    Is The Wizard of Oz With Generative AI Still The Wizard of Oz?

    ► CONNECT WITH US
    For more in-depth discussions, connect with Justin and Frank on LinkedIn.
    Justin: https://www.linkedin.com/in/justincollery/
    Frank: https://www.linkedin.com/in/frankprendergast/

    35 min
  6. 28 July

    EU Code of Conduct Clash, Zuck’s Big Bucks, and Model Owl Bias: The AI Argument EP67

    A €300 million AI investment vanished overnight, and Justin says it's a warning that Europe is sleepwalking into irrelevance. While the US plans nuclear power and light-touch rules, the EU is doubling down on regulation and failing to build the energy infrastructure AI needs. Frank argues regulation isn't a handicap; it's Europe's best shot at leadership, setting the stage for global guardrails while others race blindly ahead.

    Either way, Anthropic predicts training a frontier model could soon require up to five gigawatts of power, the same energy it takes to run millions of homes. Europe isn't building that capacity. The US is.

    And that's just the start. From Zuckerberg offering billion-dollar contracts to the cultural showdown between OpenAI and Google, this one packs a lot in. We also dive into how synthetic data can secretly pass on biases, why academic peer review might be gamed by prompt injections, and even LinkedIn's bot problem.

    → 00:57 Why isn't Amazon building its AI facility in Ireland?
    → 02:54 Will EU rules choke AI or make us leaders?
    → 14:39 Can Zuckerberg buy his way to AI dominance?
    → 20:37 Google vs OpenAI: who aced the math olympiad?
    → 29:44 Can AI bias spread through random numbers?
    → 35:01 Is AI gaming peer review AND your LinkedIn feed?

    ► SUBSCRIBE
    Don't forget to subscribe to get all the latest arguments.

    ► LINKS TO CONTENT WE DISCUSSED
    Amazon drops €300m Irish investment on energy supply concerns
    Anthropic: Build AI in America
    Meta won't sign EU's AI Code, but who will?
    The Epic Battle for AI Talent—With Exploding Offers, Secret Deals and Tears
    Google Takes the Gold. OpenAI under fire.
    A new study just upended AI safety
    ICML's Statement about subversive hidden LLM prompts

    ► CONNECT WITH US
    For more in-depth discussions, connect with Justin and Frank on LinkedIn.
    Justin: https://www.linkedin.com/in/justincollery/
    Frank: https://www.linkedin.com/in/frankprendergast/

    41 min
