Enough About AI

A podcast that brings you enough about the key tech topic of our time for you to feel a bit more confident and informed. Dónal Mulligan, a media and technology lecturer, and Ciarán O'Connor, a disinformation expert, help you explore and understand how AI is affecting our lives.

Episodes

  1. Expansion, Economics, Erotica & Education.

    12/04/2025

    Dónal and Ciarán return with more "Bubble Watch", reporting on the latest expansion in valuations, hype, and attempts to find new ways to commercialise AI. Among those potential avenues to income, they discuss emerging AI robots, OpenAI's decision to allow AI erotica for adults, and the push to get AI into education - as well as the associated concerns. This last quarterly update for 2025 draws together some themes that you've identified in your submitted comments and questions, and tries to end on a little hope amid a lot of anxiety!

    Topics in the episode:
    The AI Bubble gets bubblier - OpenAI's quarterly loss reports, Sam Altman's angry interviews, and the beginnings of a withdrawal of money from key parts of the AI economy.
    AI hardware in the form of robots like the recent XPENG demo, and renewed concerns about labour replacement and military applications.
    OpenAI's hope of monetising via erotica.
    AI disinformation in the Irish Presidential election, as well as related stories in the Netherlands and elsewhere.
    The unsuitability of mainstream GenAI tools to educational contexts, and recent research on their associated cognitive deficit in learning contexts.
    Top-down vs bottom-up reactions and responses to AI in our lives and work.

    Resources & Links:
    The widely circulated images describing the circular investment within AI include this famous example from Bloomberg Reporting.
    Recent reporting on the topic is widespread but includes these examples from WSJ, NYT, Ars Technica, Business Insider, etc.
    MIT NANDA report on 95% of AI integrations showing no return on investment.
    MIT Media Lab's research on Cognitive Deficit and AI.
    Reporting on the XPENG humanoid robot (Euronews).
    Reporting from Ciarán's colleagues at the Institute for Strategic Dialogue in a Digital Dispatch on Russian state content surfacing in chatbot outputs.
    More discussion of "LLM Grooming & Data Voids" in this Harvard Kennedy School paper.
    Fact Check reporting from TheJournal on AI videos in the Irish Presidential Election, including the infamous Catherine Connolly withdrawal fake video.
    Reporting from Politico on AI deepfakes in European elections.
    You can get in touch with us - hello@enoughaboutai.com - where we'd love to hear your questions, comments or suggestions!

    42 min
  2. Bursting the Bubble?

    08/20/2025

    Dónal and Ciarán explore the increasingly urgent questions about the overinflation of the major AI companies' valuations and the anxiety about whether we are in a bubble - and what might happen if it pops. Given the need for AI companies to keep hype levels high, they discuss the muted reception to the release of ChatGPT-5, and some of the emerging strategies to make AI chatbots more palatable to audiences who are worried about "woke".

    Topics in this episode:
    Are we in an AI bubble? Spoiler alert: yes, based on any normal metric of what an investment bubble is - but why is the promise of an almost-there superintelligence keeping things from popping?
    Despite the bubble, the lasting impacts that AI is already having on jobs and society are discussed.
    The recent release of ChatGPT-5 has led to negative feedback from the tech press and vocal users - this is contrasted with other recent version releases.
    Examining how AI companies are trying to find new ways to add value, leading to a discussion of "Third Devices" and AI hardware.
    The limitations and diminishing returns of training on synthetic data, and the apparent slowing down in model progress.
    AI & Ideology - what does it mean to have a non-woke AI?

    Resources & Links:
    The Economist story mentioned by Dónal: "AI valuations are verging on the unhinged - Unless superintelligence is just around the corner" (25 June 2025).
    Article in TheJournal.ie on "Brendan", the AI Dublin Tour Guide.
    ChatGPT's dodgy graph is linked and discussed here: "OpenAI gets caught vibe graphing" (The Verge, 07 August).
    Sam Altman (OpenAI) tells venture capitalists that he will take billions of their money and build AGI - and then ask it how to make a return on the investment (Twitter Video, Warren Terra).
    Some good discussion on the struggles of agentive AI ("AI Agents have, so far, mostly been a dud", Gary Marcus, Substack).
    Apple's important recent paper on the limitations of "reasoning" within tested reasoning models is available as a PDF here.
    Coverage of Truth Social's deal with Perplexity to make a non-woke chatbot for the platform.
    You can get in touch with us - hello@enoughaboutai.com - where we'd love to hear your questions, comments or suggestions!

    47 min
  3. Alignment Anxieties & Persuasion Problems

    05/13/2025

    Dónal and Ciarán continue the 2025 season with a second quarterly update that looks at some recent themes in AI development. They're pondering doom again, as we increasingly grapple with evidence that AI systems are powerfully persuasive and full of flattery, at the same time as our ability to meaningfully supervise them seems to be diminishing.

    Topics in this episode:
    Can we see how reasoning models reason? If AI is thinking or sharing information in something other than human language, how can we check that it's aligned with our values? This interpretability issue is tied to the concept of neuralese - inscrutable machine thoughts!
    We discuss the predictions and prophetic doom visions of the AI-2027 document.
    The increasing ubiquity, and sometimes invisibility, of AI as it's inserted into other products. Is this more enshittification?
    AI is becoming a persuasion machine - we look at the recent issues on Reddit's r/ChangeMyView, where researchers skipped good ethics practice but ended up with worrying results.
    We talk about flattery, manipulation, and Eliezer Yudkowsky's AI-Box thought experiment.

    Resources & Links:
    The AI-2027 piece, from Daniel Kokotajlo et al., is a must-read!
    Dario Amodei's latest essay, The Urgency of Interpretability.
    T.O.P.I.C. - a detailed referencing model for indicating the use of GenAI tools in academic assignments.
    Yudkowsky's AI-box Experiment, described on his site.
    "The Worst Internet-Research Ethics Violation I Have Ever Seen" - coverage of the University of Zurich / Reddit study, by Tom Bartlett for The Atlantic.
    ChatGPT wants us to buy things via our AI conversations (reported by Reece Rogers for Wired).
    You can get in touch with us - hello@enoughaboutai.com - where we'd love to hear your questions, comments or suggestions!

    47 min
  4. 2025 E5 Misinformation and Regulation

    11/18/2024

    Dónal and Ciarán discuss some of the concerns about misinformation and disinformation that have emerged with the rise of impressively capable GenAI models, and provide some detail on what their effects might be. They discuss the calls for regulation and how this has begun to take shape in the EU, Ireland, and elsewhere.

    Topics in this episode:
    What are the implications for misinformation inherent in the current and emerging GenAI models?
    Why have there been calls to pause development, and why did this not lead anywhere?
    How have the various language, image, audio, and video models already been used for problematic content?
    Is social media ready for the onslaught to come?
    Can we regulate AI to combat this, and how is that beginning?
    Why should we be critical of offers to self-regulate from the tech companies?
    What's the EU AI Act? And why is Ireland using the word "doomsayers" in policy documents about AI?

    Resources & Links:
    The EU's AI Act: https://artificialintelligenceact.eu/
    Some of ISD's work on AI & Misinformation: https://www.isdglobal.org/digital_dispatches/disconnected-from-reality-american-voters-grapple-with-ai-and-flawed-osint-strategies/
    More on the Slovak Deepfake case discussed by Ciarán: https://misinforeview.hks.harvard.edu/article/beyond-the-deepfake-hype-ai-democracy-and-the-slovak-case/
    GenAI & ISIS: https://gnet-research.org/2024/02/05/ai-caliphate-pro-islamic-state-propaganda-and-generative-ai/
    The Irish Government's "Friend or Foe" Report: https://www.gov.ie/en/publication/6538e-artificial-intelligence-friend-or-foe/
    You can get in touch with us - hello@enoughaboutai.com - where we'd love to hear your questions, comments or suggestions!

    40 min
  5. 2024 E2 Long Time Coming

    11/04/2024

    Dónal and Ciarán go back, back, back to see how the long history of computing machines connects to the AI revolution we're in now.

    Topics in this episode:
    How far back does the history of humans building machines to aid our thinking go?
    Why are French weaving machines and an alcoholic poet's daughter involved?
    How does the history of computing go through County Cork in the 1800s?
    Who are Turing and Shannon, and why do workplace disagreements leading to new companies fracturing off seem to be a repeating theme?
    What are the key developments in the evolution of computing that allow us to build AI systems now?

    Links & Resources:
    The Antikythera Mechanism (Wikipedia)
    Information about George Boole's time in Cork (UCC Website)
    Crash Course video on Boolean Logic (YouTube)
    An image of "The Mechanical Turk" (WikiMedia Commons)
    Tom Standage's (2002) The Turk: The Life and Times of the Famous Eighteenth-Century Chess-Playing Machine
    Information about and images of the Jacquard Loom (Science & Industry Museum)
    The portrait of J.M. Jacquard, woven in silk by his loom, which took 24,000 punch cards to program (WikiMedia Commons)
    Photograph of a young Claude Shannon, juggling on a unicycle (Ray Soni, photo courtesy of the Shannon family)
    Ray Cavanaugh's (2016) article Claude Shannon: The Juggling Unicyclist Who Pedaled Us Into the Digital Age (Time Magazine)
    You can get in touch with us - hello@enoughaboutai.com - where we'd love to hear your questions, comments or suggestions!

    48 min
