The FIR Podcast Network Everything Feed

Subscribe to receive every episode of every show on the FIR Podcast Network

  1. 23H AGO

    FIR #491: Deloitte's AI Verification Failures

    Big Four consulting firm Deloitte submitted two costly reports to two governments on opposite sides of the globe, each containing fake sources generated by AI. Deloitte isn’t alone. A study published on the website of the U.S. Centers for Disease Control and Prevention (CDC) not only included AI-hallucinated citations but also purported to reach the exact opposite conclusion from the real scientists’ research. In this short midweek episode, Neville and Shel reiterate the importance of a competent human in the loop to verify every fact produced in any output that leverages generative AI.

    Links from this episode:
    - Deloitte was caught using AI in $290,000 report to help the Australian government crack down on welfare after a researcher flagged hallucinations
    - Deloitte allegedly cited AI-generated research in a million-dollar report for a Canadian provincial government
    - Deloitte breaks silence on N.L. healthcare report
    - Deloitte Detected Using Fake AI Citations in $1 Million Report
    - Deloitte makes ‘AI mistake’ again, this time in report for Canadian government; here’s what went wrong
    - CDC Report on Vaccines and Autism Caught Citing Hallucinated Study That Does Not Exist

    The next monthly, long-form episode of FIR will drop on Monday, December 29. We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email fircomments@gmail.com. Special thanks to Jay Moonah for the opening and closing music. You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.

    Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.

    Raw Transcript:

    Neville Hobson: Hi everybody and welcome to For Immediate Release. This is episode 491. I’m Neville Hobson.

    Shel Holtz: And I’m Shel Holtz, and I want to return to a theme we addressed some time ago: the need for organizations, and in particular communication functions, to add professional fact verification to their workflows—even if it means hiring somebody specifically to fill that role. We’ve spent the better part of three years extolling the transformative power of generative AI. We know it can streamline workflows, spark creativity, and summarize mountains of data. But if recent events have taught us anything, it’s that this technology has a dangerous alter ego. For all that AI can do that we value, it is also a very confident liar. When communications professionals, consultants, and government officials hand over the reins to AI without checking its work, the result is embarrassing, sure, but it’s also a direct hit to credibility and, increasingly, the bottom line. Nowhere is this clearer than in the recent stumbles by one of the world’s most prestigious consulting firms. The Big Four accounting firms are often held up as the gold standard for diligence. Yet just a few days ago, news broke that Deloitte Canada delivered a report to the government of Newfoundland and Labrador that was riddled with errors characteristic of generative AI. This report, a massive 526-page document advising on the province’s healthcare system, came with a price tag of nearly $1.6 million. It was meant to guide critical decisions on virtual care and nurse retention during a staffing crisis.
    But when an investigation by The Independent, a progressive news outlet in the province, dug into the footnotes, the veneer of expertise crumbled. The report contained false citations pulled from made-up academic papers. It cited real researchers on papers they hadn’t worked on. It even listed fictional papers co-authored by researchers who said they had never actually worked together. One adjunct professor, Gail Tomblin Murphy, found herself cited in a paper that doesn’t exist. Her assessment was blunt: “It sounds like if you’re coming up with things like this, they may be pretty heavily using AI to generate work.”

    Deloitte’s response was to claim that AI wasn’t used to write the report, but was—and this is a quote—”selectively used to support a small number of research citations.” In other words, they let AI do the fact-checking, and the AI failed. Amazingly, Deloitte was caught doing something just like this earlier in an audit for the Australian government. Only months before the Canadian revelation, Deloitte Australia had to issue a humiliating correction to a report on welfare compliance. That report cited court cases that didn’t exist and contained quotes from a federal court judge that had never been spoken. In that instance, Deloitte admitted to using the Azure OpenAI tool to help draft the report. The firm agreed to refund the Australian government nearly 290,000 Australian dollars. This isn’t an isolated incident of a junior copywriter using ChatGPT to phone in a blog post. This is a pattern involving a major consultancy submitting government audits in two different hemispheres. The lesson is pretty stark: the logo on your letterhead isn’t going to protect you if the content is fiction. In fact, this could have long-term repercussions for the Deloitte brand.

    But it doesn’t stop at consulting firms. Here in the US, we’ve seen similar failures in the public sector. The Make America Healthy Again (MAHA) commission released a report with non-existent study citations, and a presentation on the CDC website—that’s the Centers for Disease Control and Prevention—cited a fake autism study that contradicted the real scientists’ actual findings. The common thread here is a fundamental misunderstanding of the tool. For years, the mantra in our industry was a parroting of the old Ronald Reagan line: “Trust but verify.” When it comes to AI, though, we just need to drop the “trust” part. It’s just verify. We have to remember that large language models are designed to predict the next plausible word, not to retrieve facts. When Deloitte’s AI invented a research paper or a court case, it wasn’t malfunctioning. It was doing exactly what it was trained to do: tell a convincing story.

    And that brings us to the concept of the human in the loop. This phrase gets thrown around a lot in policy documents as a safety net, but these cases prove that having a human involved isn’t enough. You need a competent human in the loop. Deloitte’s Canadian report undoubtedly went through internal reviews. The Australian report surely passed across several desks. The failure here wasn’t just technological; it was a failure of human diligence. If you’re using AI to write content that relies on facts, data, or citations, you can’t simply be an editor. You must be a fact-checker. Deloitte didn’t just lose money on refunds or suffer potential reputational hits; they lost the presumption of competence. For those of us in PR and corporate communications, we’re the guardians of our organization’s truth.
    If we allow AI-generated confabulations to slip into our press releases, earnings statements, annual reports, or white papers, we erode the very foundation of our profession. Communicators need to update their AI policies. Make it explicit that no AI-generated fact, quote, or citation can be published without primary source verification. And you need to make sure that you have the human resources to achieve that. The cost of skipping that step, trust me, is a lot higher than a subscription to ChatGPT.

    Neville Hobson: It’s quite a story, isn’t it, really? I think you kind of get exasperated when we talk about something like this, because we’ve talked about it quite a bit. Most recently, in our interview with Josh Bernoff—which will be coming in the next day or so—this very topic came up in discussion: fact-checking versus not doing the verification. I suppose you could cut through all the preamble about the technology, because the issue isn’t the technology; it’s the humans involved. Now, we don’t know more than what’s in the Fortune article, the one I’ve seen in Entrepreneur magazine, and the link that you shared. Nowhere do they disclose detail about exactly what happened beyond the citations. So we don’t know: was it prompted badly, or what? Either way, someone didn’t check something. I don’t know how much you need to hammer home the point that if you don’t verify what the AI assistant has produced—the output to your input—then you’re just asking for this kind of trouble. I did something just this morning, funnily enough, when I was doing some research. The question I asked came back with three comments linking to the sources. A bit like Josh—because Josh mentioned this in our interview—every instruction to your AI should say: “Do not come back with anything unless you’ve got a source.” And so I checked the sources, one of which just did not exist. The document concerned, on the website of a reputable media company, wasn’t there. Now, it could be that someone had moved it, or it did exist but in another location. But the trouble is, when these things happen, you tend to fall on the side of, “Look, they didn’t do this properly.” So I’m not sure what I can add to the story, Shel, frankly, beyond your remarks towards the end: your reputation is the one that’s going to get hit. You look stupid. You really do. And your credibility suffers. I found in Entrepreneur they quoted a Deloitte spokesperson saying, “Deloitte Canada firmly stands behind the recommendations put forward in our report.” Excuse me? Where’s your humility there? Because you’ve been caught out doing something here. And they’re saying, “We’re revising it to make a small number of citation corrections which do not impact the report finding.” What arrogance they are displaying
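
    The verification step Neville describes (confirming that a cited source actually exists before relying on it) is easy to automate as a first pass. The sketch below is illustrative only, not something the hosts use: it checks that each cited URL resolves, using Python's standard library. The URLs are hypothetical, and a human still has to read whatever passes to confirm it actually supports the AI's claim.

    ```python
    # Illustrative first-pass citation check (hypothetical URLs): confirm each
    # cited link resolves before publishing. A 404 or dead host flags a likely
    # hallucinated source; a 200 only proves the page exists, not that it says
    # what the AI claims -- that still requires a human reading it.
    import urllib.request

    citations = [
        "https://example.com/real-study",          # hypothetical
        "https://example.com/hallucinated-study",  # hypothetical
    ]

    for url in citations:
        req = urllib.request.Request(
            url, method="HEAD", headers={"User-Agent": "citation-check/0.1"}
        )
        try:
            with urllib.request.urlopen(req, timeout=10) as resp:
                print(f"OK   {resp.status}  {url}")
        except Exception as err:  # HTTPError for 404s, URLError for dead hosts
            print(f"FAIL {err}  {url}")
    ```

    A pass from a script like this is the floor, not the ceiling: it would not have caught a real paper attributed to the wrong researchers, which is exactly what happened in the Deloitte reports discussed above.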

    14 min
  2. DEC 1

    FIR #490: What is AI Reading?

    Studies purport to identify the sources of information that generative AI models like ChatGPT, Gemini, and Claude draw on to provide overviews in response to search prompts. The information seems compelling, but different studies produce different results. Complicating matters is the fact that the kinds of sources AI uses one month aren’t necessarily the same the next month. In this short midweek episode, Neville and Shel look at a couple of these reports and the challenges communicators face relying on them to help guide their content marketing placements.

    Links from this episode:
    - Webinar: What is AI Reading? (Muck Rack)
    - AI Search Volatility: Why AI Search Results Keep Changing
    - Study finds nearly two-thirds of AI-generated citations are fabricated or contain errors
    - Major AI conference flooded with peer reviews written fully by AI

    The next monthly, long-form episode of FIR will drop on Monday, December 29. We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email fircomments@gmail.com. Special thanks to Jay Moonah for the opening and closing music. You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.

    Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.

    Raw Transcript:

    Shel Holtz: Hi everybody, and welcome to episode number 490 of For Immediate Release. I’m Shel Holtz.

    Neville Hobson: And I’m Neville Hobson. One of the big questions behind generative AI is also one of the simplest: What is it actually reading? What are these systems drawing on when they answer our questions, summarize a story, or tell us something about our own industry? A new report from Muck Rack in October offers one of the clearest snapshots we’ve seen so far. They analyzed more than a million links cited by leading AI tools and discovered something striking. When you switch citations on, the model doesn’t just add footnotes; it changes the answer itself. The sources it chooses shape the narrative, the tone, and even the conclusion. We’ll dive into this next. Those sources are overwhelmingly from earned media. Almost all the links AI cites come from non-paid content, and journalism plays a huge role, especially when the query suggests something recent. In fact, the most common publication date for a cited article is yesterday. It’s a very different ecosystem from SEO, where you can sometimes pay your way to the top. Here, visibility depends much more on what is credible, current, and genuinely covered. So that gives us one part of the picture. AI relies heavily on what is most available and most visible in the public domain. But that leads to another question, a more unsettling one, raised by a separate study published in JMIR Mental Health in November. Researchers examined how well GPT-4o performs when asked to generate proper academic citations. And the answer is: not well at all. Nearly two-thirds of the citations were either wrong or entirely made up. The less familiar the topic, the worse the accuracy became. In other words, when AI doesn’t have enough real sources to draw from, it fills the gaps confidently. When you put these two pieces of research side by side, a bigger story emerges.
    On the one hand, AI tools are clearly drawing on a recognizable media ecosystem: journalism, corporate blogs, and earned content. On the other hand, when those sources are thin, or when the task shifts from conversational answers to something more formal, like scientific referencing, the system becomes much less reliable. It starts inventing the citations it thinks should exist. We end up with a very modern paradox. AI is reading more than any of us ever could, but not always reliably. It’s influenced by what is published, recent, and visible, yet still perfectly capable of fabricating material when the trail runs cold.

    There’s another angle to this that’s worth noting. Nature reported last week that more than 20% of peer reviews for a major AI conference were entirely written by AI, many containing hallucinated citations and vague or irrelevant analysis. So if you think about that in the context of the Muck Rack findings in particular, it becomes part of a much bigger story. AI tools are reading the public record, but increasing parts of that public record are now being generated by AI itself. The oversight layer that we use to catch errors is starting to be automated as well. And that creates a feedback loop where flawed material can slip into the system and later be treated as legitimate source material. For communicators, that’s a reminder that the integrity of what AI reads is just as important as the visibility of what we publish.

    All this raises fundamental questions. How much does earned media now underpin what AI says about a brand? If citations actively reshape AI outputs, what does that mean for accuracy and trust? How do we work in a world where AI can appear transparent, citing its sources, while still producing invented references in other contexts? And the Muck Rack and JMIR studies show that training data coverage, not truth, determines what AI cites. So the question “What is AI reading?” has two answers, I think. It reads what is most visible and recent in the public domain, and it invents what it thinks should exist when the knowledge isn’t there. That gap between the real and the fabricated is now a core communication risk for organizations. How do you see it, Shel? Thoughts on that?

    Shel Holtz: It is a very, very complex issue. I was looking at a study from Profound called AI Search Volatility. What it found was that search within the AI context—the search that ChatGPT and Gemini and Claude conduct—is probabilistic rather than deterministic, which means these systems are designed to give different answers and to cite different resources, even for the same query over time. Another thing this study found was citation drift: that is, the domains cited in July are not necessarily the ones that were cited in June for the same prompts. Look at these results for the share of domains cited in July that weren’t present in June: nearly 60% for Google AI Overviews, just over 54% for ChatGPT, over 53% for Copilot, and over 40% for Perplexity. So 40 to 60% of the domains cited in AI responses are going to be different a month later for the same prompt. And this volatility increases over time, rising to 70 to 90 percent over a six-month period. So each of these studies is a snapshot in time, and it’s not necessarily telling you that you should be using this information as a strategy to guide where you’re going to publish your content if the sources are going to drift.
    And by the way, a Profound study by their AEO specialist, a guy named Josh Bliskolp, found that AI relies heavily on social media and user-generated content, which is different from what the Muck Rack study found. They were probably getting a snapshot in time where the citations had drifted. So, while I think all these studies are interesting, what it tells us as communicators looking to show up in these answers is that we need to be everywhere.

    Neville Hobson: Yeah, I’ve been trying to get my head around this. I must admit, reading these reports, the Nature one kind of threw me sideways when I found it, because I thought: how relevant is that to the topic we’re discussing in this podcast? And my further research showed it is relevant, as the content is being fed back into the system, and that’s showing up in search results. You’re right. In another sense, I think you can get all these survey reports and dissect them every which way to Christmas. But they have credibility in my eyes, certainly, particularly Muck Rack’s. I find the JMIR one equally good, but it touches on areas that I’m not wholly familiar with. The one in Nature is equally good, and what it shows is quite troubling, I think. Listening to how you were describing the Profound report on citation consistency over time, I just kept thinking about the Nature one as an example. Measuring citation consistency over time sounds great, but what if the citations are fake, full of hallucinations, full of invalid information? Where does that sit? That’s my question, I suppose.

    Shel Holtz: Well, yeah, this shouldn’t surprise anybody who’s been paying attention. AI still confabulates. It still says at the bottom of ChatGPT or Gemini that it is prone to misinformation. These systems are configured more to satisfy your query than they are to be accurate. So when they can’t find or don’t know an accurate citation, they’ll make one up. We still have attorneys filing briefs with cases that don’t actually exist. So this is the nature of the beast right now. If you’re not verifying the information that you get before you do something with it, that’s on you. That’s not on the AI. They’re telling you that these things still hallucinate. They’re working on it. They hope to have it fixed one of these days, but they’re not quite sure how that actually works. It’s not like just going in and turning a dial or flipping a switch; the researchers are struggling to figure this out. And if it were that easy, they would have done it by now.

    Neville Hobson: Sure. Although what you just said does not come across at all in any of the communication you see from any of the chatbot makers, except in four-point type at the bottom: you know, it can hallucinate, you need to do your verification. I don’t hear that clear call, a kind of warning shot, if you like, from anyone when they’re talking about all this st
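
    The citation-drift numbers Shel quotes are just set arithmetic: take the domains an AI tool cited for a prompt in one month and the next, and compute what share of the later month's domains are new. A minimal sketch of that calculation, with hypothetical domain lists:

    ```python
    # Citation drift, as described in the episode: the share of domains cited
    # this month that were not cited last month for the same prompt.
    # Domain sets are hypothetical examples.
    june = {"nytimes.com", "forbes.com", "cdc.gov", "muckrack.com"}
    july = {"nytimes.com", "reuters.com", "nature.com", "muckrack.com", "theverge.com"}

    new_in_july = july - june            # cited in July, absent in June
    drift = 100 * len(new_in_july) / len(july)

    print(f"{drift:.0f}% of July's cited domains were not cited in June")
    # -> 60% of July's cited domains were not cited in June
    ```

    Run over many prompts and months, a figure in the 40 to 60 percent range, like the ones cited from Profound, means any single month's citation snapshot is a weak guide to where an AI tool will look next.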

    22 min
  3. NOV 26

    Circle of Fellows #122: Preparing Communication Professionals for the Future

    The forward-looking discussion was joined by five seasoned leaders: two professors shaping the next generation of communicators and three senior practitioners navigating today’s real-world pressures. Together, they bridge campus and workplace, theory and execution, to define what readiness really looks like in a world of constant change. Shel Holtz, SCMP, IABC Fellow, moderated the session. This episode featured a candid, fast-paced discussion on the skills and mindsets that matter now — and the ones you’ll need next. From AI literacy and data comfort to ethical judgment, change agility, and human-centered storytelling, the panel shared practical frameworks you can apply immediately. You’ll hear how universities are evolving curricula, how employers can cultivate lifelong learning, and how individual pros can future-proof their careers without losing the craft that sets them apart. You’ll get actionable guidance and plenty of examples from classrooms and boardrooms. Whether you lead a team, teach, hire, or are building your own career path, this conversation will help you set priorities for the year ahead.

    You’ll leave with:
    - A clear, current skills map for modern communicators
    - Practical ways to integrate AI and analytics—without sacrificing trust and creativity
    - Playbooks for continuous upskilling across individuals, teams, and organizations

    About the panel:

    Diane Gayeski is recognized as a thought leader in the practice and teaching of business communications. She is Professor of Strategic Communications at the Roy H. Park School of Communications at Ithaca College and provides consulting in communications analysis and strategies through Gayeski Analytics. Diane was recently inducted as an IABC Fellow; she’s been active in IABC for more than 30 years as a featured speaker and think-tank leader at the international conference, the author of three editions of the IABC-published book Managing the Communications Function, and the advisor to Ithaca College’s student chapter. She has led more than 300 engagements for clients including the US Navy, Bank of Montreal, Fiat, Sony, Abbott Diagnostics, and Borg-Warner, focusing on assessing and building capacities and implementing new technologies for workplace communications and learning teams.

    Sue Heuman, SCMP, ABC, MC, IABC Fellow, based in Edmonton, Canada, is an award-winning, accredited authority on organizational communications with more than 40 years of experience. Since co-founding Focus Communications in 2002, Sue has worked with clients to define, understand, and achieve their communications objectives. Sue is a highly sought-after executive advisor, specializing in leading communication audits and strategies for clients across all three sectors. Much of her practice involves a strategic review of the communications function within an organization, analyzing channels and audiences. She creates strategic communication plans and provides expertise to enable their execution. Sue has been a member of the International Association of Business Communicators (IABC) since 1984, which enables her to both stay current with and contribute to the field of communications practice. In 2016, Sue received the prestigious Rae Hamlin Award from IABC in recognition of her work in promoting global standards for communication. She was also named 2016 IABC Edmonton Chapter Communicator of the Year. In 2018, IABC named Sue a Master Communicator, the association’s highest honor in Canada. Sue earned the IABC Fellow designation in 2022.
    Dr. Theomary Karamanis is a multiple award-winning communication professor and consultant with 25 years of global experience. She is currently a full-time senior lecturer in Management Communication at the Cornell SC Johnson College of Business and regularly delivers executive education programs in leadership communication, crisis communication, and strategic communication. She has held several professional leadership positions, including Chair of the Global Communication Certification Council (GCCC), Chair of the IABC (International Association of Business Communicators) Academy, and Chair of the IABC Awards committee. Her academic background includes a PhD in communication studies, a Master of Arts in mass communication, and a postgraduate certificate in telecommunications, all from Northwestern University, as well as a bachelor’s degree in economics from the Athens University of Economics and Business. She also holds professional certifications as a Strategic Communication Management Professional (SCMP), online facilitator, and executive program instructor. She has received 40 professional communication awards, including 12 Platinum MarCom awards, 7 Gold Quill awards, 4 Silver Quill awards, and a Comm Prix award. In 2020, she received the Award for Excellence in Communication Consulting from the APCC (Association of Professional Communication Consultants) and the ABC (Association for Business Communication). She is the author of several books and academic papers on communication, and she regularly delivers presentations at international conferences and other business forums.

    Leticia Narváez, ABC, is the CEO and Founding Partner of Narváez Group, a consulting firm specializing in Strategic Communication, Crisis Management, Employee Engagement, Communication Training, and Change Management. A professional with 30 years of experience, she has held top-level positions at Sanofi, Merck, American Express, and Ford Motor Co., among others. She builds communication bridges to the highest standards of excellence. She has developed communication strategies for several employers and clients, including those involved in mergers and acquisitions, diversity leadership, crisis management, and senior executive consulting. Many of these strategies have earned global awards for their proven results and successful impact. She has been a speaker at international forums and is a co-author of several books and manuals on business communication, public relations, and inclusion. She teaches Measurement and Evaluation in the Master of Institutional Communication at Panamericana University in Mexico City.

    Jennifer Wah, MC, ABC, has worked with clients to deliver ideas, plans, words, and results since she founded her storytelling and communications firm, Forwords Communication Inc., in 1997. With more than two dozen awards for strategic communications, writing, and consulting, Jennifer is recognized as a storyteller and strategist. She has worked in industries from healthcare and academia to financial services and the resource sector, and is passionate about the strategic use of storytelling to support business outcomes. Although she has delivered workshops and training throughout her career, Jennifer formally added teaching to her experience in 2013, first with Royal Roads University and more recently as an adjunct professor of business communications with the UBC Sauder School of Business, where she now works part-time to impart crucial communication skills to the next generation of business leaders.
    When she is not working, Jennifer spends her time cooking, walking her dog, Orion, or discussing food, hockey, or music with her husband and two young adult children in North Vancouver, Canada.

    Raw Transcript

    00:00:00 Shel Holtz: Hi everybody, and welcome to episode one hundred and twenty-two of Circle of Fellows. I’m Shel Holtz, your moderator today, and I am the senior director of communications at Webcor. We’re a commercial general contractor in California, headquartered in San Francisco. And I’m coming to you live today from our offices across the bay in Alameda. Uh, and I am also a certified, uh, communication professional through the Global Communication Certification Council. And I am delighted to have a terrific panel joining me today to talk about preparing tomorrow’s communication professionals. Uh, that includes some people from the world of academia. Uh, you’ll learn who they are as they introduce themselves in just a couple of seconds. But first, I’m going to give you the, uh, the first of a few reminders that, um, you are welcome to participate in this discussion. You are watching this presumably through YouTube and there is a chat feature. And if you send us a question or a comment or an observation through that chat window, I’ll be able to share it on the screen and we can get feedback from the panelists, who will now introduce themselves, starting with Letty.

    Letty Narváez: Um. Hi, everybody. Um, I’m Letty Narvaez, I’m based in Mexico City, and I’ve been working in communication for more than thirty years. For the last ten years, I have had my own consulting firm specializing mainly in employee communications, change management, crisis and risk management, and, and a lot of training on measurement and presentation skills. And it’s great to, to be here.

    Shel Holtz: It’s great to have you here, Letty. Uh, Theomary, you’re next.

    Theomary Karamanis: Hello, everyone. Thanks for being here with us. I’m Theomary. I’m based in Ithaca, New York. This is upstate New York. I work for Cornell University, and I teach MBA and executive MBA students. And I’m also very much involved with executive education. So I get to see a lot of executives and leaders across industries and professions. Um, I’ve been in communication for more than, I don’t want to say, but I will, uh, twenty-five years now. And I started after my PhD. I started in corporate communication. So I had a corporate life. Then I went into consulting. I had my own boutique firm, and for the past ten, maybe now close to fifteen years, I’ve been full time in academia. I always have contact with executives, uh, through my executive education courses and also through my, um, MBA courses, and I’m looking forward to sharing with you some insights about communication professionals and what communication will mean to us, uh, in the future. And than

    1h 2m
  4. NOV 17

    FIR #489: An Explosion of Thought Leadership Slop

    In the long-form episode for November 2025, Shel and Neville riff on a post by Robert Rose of the Content Marketing Institute, who identifies “idea inflation” as a growing problem on multiple levels. Idea inflation occurs when leaders prompt an AI model to generate 20 ideas for thought leadership posts, then send them to the communications team to convert into ready-to-publish content. Also in this episode:
    - A growing number of companies are moving branding under the communications umbrella, detouring around Marketing and the CMO. It’s all about safeguarding reputation.
    - Quantum computing has been a topic of conversation in tech circles for years. Now, its arrival as a commercially viable product is imminent. Communicators need to prepare.
    - AI’s ability to generate software code from a plain-language prompt has put the power to create apps in the hands of almost anyone. There are communication implications.
    - Share some photos of yourself with an AI model, or with one of the companies that provide this as a service, and you can get an amazing likeness of yourself. But is it okay to use it as your LinkedIn profile?
    - Research finds that leaders not only handle change management badly, but that badly managed change also takes a toll on the employees who have to endure the process. Communicators can help.
    - In his Tech Report, Dan York reports on WhatsApp launching third-party chat integration in Europe; X is finally rolling out Chat, its DM replacement, with encryption and video calling; Mozilla has announced an AI “window” for the Firefox browser; WordPress 6.9 offers new features, collaboration tools, and AI enhancements; Amazon has rebranded Project Kuiper as Amazon Leo; and OpenAI says it has “fixed” ChatGPT’s em dash problem. (We dispute that it’s a problem.)

    Links from this episode:
    - Why companies are merging communications and brand under one leader
    - Will quantum be bigger than AI?
    - ‘Vibe coding’ and other ways AI is changing who can build apps and how
    - The market has spoken: Vibe coding is serious business
    - The potential of vibe coding
    - Everything Wrong with Vibe Coding and How to Fix It
    - Vibe Coding: How to Avoid Over-Engineering and Build Smarter, Not Harder
    - Mastering Vibe Coding: How to Get Better AI-Generated Code Every Time
    - Why AI Thought Leadership Hurts Content Teams
    - Is it Ok to use AI-generated images for LinkedIn Profiles?
    - Your Staff Thinks Management Is Inefficient—They May Have a Point

    The next monthly, long-form episode of FIR will drop on Monday, December 29. We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email fircomments@gmail.com. Special thanks to Jay Moonah for the opening and closing music. You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.

    Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.

    Raw Transcript:

    Shel Holtz: Hi everybody and welcome to episode number 489 of For Immediate Release. This is our long-form monthly episode for November 2025. I’m Shel Holtz in Concord, California.

    Neville Hobson: And I’m Neville Hobson in Somerset in England.

    Shel Holtz: We have a jam-packed show for you today. Virtually every story we’re going to cover has an artificial intelligence angle. That shouldn’t be a surprise — AI seems to dominate communication conversations everywhere these days.
    We do hope that you will engage with this show by leaving a comment. There are so many ways that you can leave a comment. You can leave one right there on the show notes at firpodcastnetwork.com. You can even leave an audio comment from there. Just click the “record voicemail” button that you’ll see on the side of the page, and you can leave up to a 90-second audio. You can also send us an audio clip — just record it, attach it to an email, send it to fircomments@gmail.com. You can comment on the posts we publish on LinkedIn and Facebook and elsewhere, announcing the availability of a new episode. There are just so many ways that you can leave a comment and we hope you will — and also rate and review the show. That’s what brings new listeners aboard. As I mentioned, we have a jam-packed show today, but Neville, I wanted to mention before we even get into our rundown of previous episodes: did you see the study that showed that podcasting is very male-dominated as a medium?

    Neville Hobson: I did see something in one of my news feeds, but I haven’t read it.

    Shel Holtz: I heard about it on a podcast — I don’t remember which one — but I found it really interesting because the conversation was all about equity. And I’m certainly not in favor of male-dominated anything, but podcasting is not an industry where there is a CEO who can mandate an initiative to bring women into a more equitable position in podcasting. This is a medium — let’s face it, even though The New York Times and The Wall Street Journal and other major media organizations have jumped into the podcasting waters — where it’s essentially a hobbyist occupation. You and I started this because we wanted to, and the tools are available to anybody who wants them. I remember when we started this, one of the analogies we used was trying to walk into a radio station and say, “Hey, I want to have an hour-long show every day on public relations.” You’d be laughed out of the radio station because there’s not an audience big enough to support that kind of content. But here, if you can find an audience, you can have a podcast. So I don’t know how you go about making this more equitable, but I found that to be an interesting perspective.

    Neville Hobson: Yeah, I agree. There are some podcasts I’ve listened to that are hosted by women — which, frankly, are few beyond the realms of kind of “feminine-oriented” content. But there are a couple in our area of interest in communication that are. So they’re out there, but the majority, very much, are men.

    Shel Holtz: Yeah. I mean, just in internal communications, there’s Katie Macaulay, and there are a lot of women doing communication-focused podcasts. Maybe if you’re going to look for somebody to make this a more equitable media space, it has to start with the mainstream media organizations that are producing podcasts — The New York Times, The Wall Street Journal of the world.

    Neville Hobson: Yeah, over here you’ve got The Times and a few others who have women doing this. They are there in the mainstream media orientation, but the kind of homebrew content that we started out with, I don’t see too many.

    Shel Holtz: No. Well, Neville, why don’t we move into our rundown of previous episodes?

    Neville Hobson: Okay, let’s get into it. So we’ve got a handful of shows. We’re actually recording this monthly episode about a week and a half earlier than we normally would. I think the reason for that, Shel, is something to do with U.S. holidays, your travel, and stuff like that.
    Shel Holtz: Yeah, I’m going to be in San Diego next weekend, visiting my daughter and granddaughter because they’re not able to come up here for Thanksgiving. And then the next weekend is Thanksgiving weekend. So that’s why this is early this month.

    Neville Hobson: Right. Okay, that explains it. So, not too many episodes since the last one, but they’re good ones, though, I have to say. Before we talk about those, let’s mention episode 485, which was prior to the last monthly. We had a comment.

    Shel Holtz: We had two that we didn’t have when we ran down this episode in our last monthly episode. The first is from Katie Howell, who says, “Already reward return visits over one-off reach and the clever brands are catching up. If your brief still says ‘go viral,’ you’re chasing a metric that won’t help you keep your job. Repeat engagement with the right people is the proper goal. Less glamorous, miles more useful.” And Andy Green says, “Good clarification over strategies, but you also need to recognize viral — also known as meme-friendly — is at the heart of effective communications. Also greater recognition of the impact of zeitgeist. Check out Steven Pinker’s latest book, When Everyone Knows That Everyone Knows…”

    Neville Hobson: They were on LinkedIn, I think, weren’t they? That’s where most of them come in. So, to the ones we did: we have the monthly of October that we did on the 27th of October, when it was published. The lead story we focused on in the headline was “Measuring sentiment won’t help you maintain trust.” Other topics — there were five others — including an interesting one: Lloyds Bank, whose CEO and executive team are learning AI to reimagine the future of banking with generative AI. We talked about case studies in a piece that described “Conduct, culture, and context collide: three crisis case studies,” reviewed in Provoke Media.

    Shel Holtz: Yeah, they did 13 or 14 case studies. It was a very interesting article, so we highlighted a couple. And there was more content there too.

    Neville Hobson: Episode 487, we published on the 5th of November. This was a really interesting discussion. You and I analyzed and discussed Martin Waxman’s LinkedIn post about slower publishing, deeper thinking, better outcomes — a pivot he’s made with his business and his newsletter. He left a number of comments, but on the show notes post he left a long comment that was great. We don’t normally get comments on the show notes, so thank you, Martin.

    Shel Holtz: Yeah, there were several comments from Martin. I’m going to run through these. He said, “Thank you for having me as a virtual guest once-removed on the episode, Neville. I just listened today and enjoyed your and

    1h 42m
  5. NOV 10

    FIR #488: Did a Soda Pop Make AI Slop?

    For the second year in a row, Coca-Cola turned to artificial intelligence to produce its global holiday campaign. The new ad replaces people with snow scenes, animals, and those iconic red trucks, aiming for warmth through technology. The response? A mix of admiration for the technical feat and criticism for what some called a “soulless,” “nostalgia-free” production. Shel and Neville break down the ad’s reception and what it tells us about audience expectations, creative integrity, and the communication challenges that come with AI-driven content. Despite Coke’s efforts to industrialize creativity — working with two AI studios, 100 contributors, and more than 70,000 generated clips — the final product sparked as much skepticism as wonder.

    The discussion explores:
    - Why The Verge called the ad “a sloppy eyesore” — and why Coke went ahead anyway
    - The sheer scale and cost of AI production (and why it’s not necessarily cheaper)
    - Whether Coke’s campaign is marketing, corporate signaling, or both
    - How critics’ reactions reflect discomfort with AI aesthetics in emotional brand spaces
    - Lessons for communicators about context, authenticity, and being transparent about “why”

    Links from this episode:
    - Coke’s AI Ad Isn’t Just Marketing. It’s Corporate Communications.
    - Coca-Cola | Holidays Are Coming (YouTube)
    - Coca-Cola | Holidays are Coming, Behind the Scenes (YouTube)
    - Coca-Cola’s new AI holiday ad is a sloppy eyesore
    - Coca-Cola Sparks Backlash With New, Entirely AI-Generated Holiday 2025 Ad, Insists ‘The Genie Is Out of the Bottle, and You’re Not Going to Put It Back In’
    - Coca-Cola Is Trying Another AI Holiday Ad. Executives Say This Time Is Different
    - What Coca-Cola has learned on its generative AI journey so far
    - Coca-Cola’s AI Chief Dishes on Why the Brand Went Ahead With Another AI Holiday Ad
    - Hilarious graphic shows how bad the Coca-Cola Christmas ad really is
    - Remember kids, without the creative, we just have blank squares. It’s ALL about the CREATIVE.

    The next monthly, long-form episode of FIR will drop on Monday, November 17. We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email fircomments@gmail.com. Special thanks to Jay Moonah for the opening and closing music. You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.

    Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.

    Raw Transcript:

    Neville Hobson: Hi everyone, and welcome to For Immediate Release, episode 488. I’m Neville Hobson.

    Shel Holtz: And I’m Shel Holtz. Coca-Cola is back with a holiday spot created using AI for the second year running, and the blowback is about as big as the media buy. If last year’s criticism centered on uncanny humans, this year they tried to sidestep that by leaning into animals, snow, and those iconic red trucks. The problem is that a lot of viewers still found the whole thing visually inconsistent and emotionally hollow — more of a tech demo than Christmas magic. The Verge didn’t mince words, calling it a “sloppy eyesore.” This wasn’t a lone creative prompting a model in a dark room. According to The Verge, Coke worked with two AI studios — Silverside and Secret Level — involving roughly 100 contributors.
    So when people say AI is taking work away from humans, this example complicates that argument. The project generated and refined over 70,000 clips to assemble the final film, with five AI specialists dedicated to wrangling and iterating those shots. If you think of AI work as cheap and easy, that scale tells a different story. This was massive, industrialized production. Despite all that, audience reaction has been harsh. Delish collected consumer responses labeling the ad “soulless,” “nostalgia-free,” and — my favorite phrase — “intentional rage bait.” In other words, people felt provoked, not moved. The general sentiment is familiar: “Just bring back the classic trucks or polar bears and let real filmmakers work their craft.” The level of blowback reflects a mainstream discomfort with AI aesthetics invading a beloved ritual.

    So why is Coke doing this again? Partly for speed and efficiency, sure — but the more interesting rationale is signaling. As Forbes argues, this isn’t just marketing, it’s corporate communication: a message to investors and partners that Coke is a modern operator experimenting across its value chain. In that sense, the ad is a press release in moving pictures — “We’re innovating.” Whether consumers cheer or jeer, the signal still gets sent.

    For communicators, I see three takeaways. First, scale doesn’t guarantee soul. You can throw 100 people and 70,000 clips at a film and still end up with something that feels off. Craft and continuity remain stubbornly human problems, and current video models still struggle with temporal consistency and art direction. Second, context beats novelty. Holiday ads are about rituals and memories. When the urge to adopt AI clashes with audience expectations for warmth and authenticity, “innovative” can come across as “indifferent.” If you’re going to bring AI into sacred brand moments, you need strong creative guardrails — and maybe keep flagship storytelling human-first until the tools catch up. Third, be explicit about your “why.” If your real audience is Wall Street or prospective partners, say so — ideally without sacrificing the consumer experience. Coke’s narrative of blending human creativity with new tools can work, but only if the end result still feels like Coca-Cola. Otherwise, you’re asking consumers to bankroll your R&D with their attention during the most sentimental time of the year. These trucks will keep rolling — and so will the debate — until the models solve for continuity and feel. Brands risk trading wonder for workflow, and audiences know the difference.

    That said, I watched this ad last night during Monday Night Football. Looking at it through that lens, I didn’t see what the critics were talking about. I suspect most of the audience didn’t either. The vast majority probably aren’t aware it was generated with AI and didn’t see any problem with it. I think the hypercritical responses are mostly from people who are following the AI conversation closely — and maybe looking for an excuse to slam something that wasn’t made by human creators. Neville, what do you think?

    Neville Hobson: I watched the video on YouTube — both the global version and the one Coca-Cola uploaded for European audiences. Honestly, I couldn’t tell the difference. They’re exactly the same length. Like you, I thought it was well done. It was pretty clear to me within a few seconds that it was AI-generated — not because it looked AI-generated, but because of the scale and scope. You just know they’d use AI for something like this.
    Coke has used this theme for years — the trucks, the snow, the feel-good singing. This time, there aren’t any humans front and center; it’s all animals. But as storytelling, I thought it worked. That said, I did see some severe critiques, particularly from design industry voices. Creative Bloq, for example, called it an example of “how a company risks decades of hard-won brand equity through the use of nascent tech that’s still not up to the job.” I think that’s a bit unfair and shows a lack of understanding of what Coke was really trying to do.

    There’s also a fascinating behind-the-scenes video Coke posted. It’s narrated by AI voices — the same ones from NotebookLM, actually — so it’s an AI explaining an AI. And the prompts they show are incredible: dozens of paragraphs for a single shot. This wasn’t a one-line “make a Christmas ad” job. That explainer reinforces your Forbes point — this could be as much about corporate signaling as marketing. Personally, I see it more as a brand experiment than a corporate ad, but I can see both perspectives. And yes, some critics are inevitably Coke detractors. One UK designer, Dino Berberich, posted screenshots showing technical errors — missing truck wheels, misaligned shots, and so on. Maybe Coke fixed those later, maybe not. But if they take that kind of feedback seriously, it’ll be invaluable.

    Overall, I think it’s what you’d expect from Coke. Set aside the fact that it’s AI — it’s actually quite good. It continues the “Real Magic” theme they’ve been running for years. I remember one a couple of years ago with paintings in an art gallery coming to life when they got a Coke — also beautifully done. So this feels like the next step in their evolution. Most viewers won’t realize it’s AI unless they’re already thinking that way. Awareness is growing, but the average person just sees a nice Christmas ad. Of course, we’re now in a world where people start by asking, “Is this AI?” before saying, “Wow, what a great image.” That mindset can distract from the story — but it’s part of the landscape now. This kind of work will only get better, and Coke is helping to move it forward.

    Shel Holtz: Yeah, I agree. And if you look at Berberich’s LinkedIn post, you can see the issues he points out, but that’s not how most people watch ads. They’re not stopping every frame to analyze wheel placement. They’re watching during a football game or between shows. Most people just see a Coke commercial with some fuzzy bunnies. One critique I read said the ad couldn’t decide whether it wanted cartoony or semi-realistic animals. I didn’t notice that. If you go in looking to criticize AI, sure, you’ll find something. But again,

    17 min
4.5 out of 5 (24 Ratings)
