The FIR Podcast Network Everything Feed

Subscribe to receive every episode of every show on the FIR Podcast Network

  1. 6D AGO

    ALP 299: Hire people who understand how to solve problems

    Most hiring processes obsess over the wrong things. Do they know our project management software? Are they proficient in this specific tool? Meanwhile, the one capability that actually determines whether someone will make your life easier or harder—their ability to solve problems independently—gets a cursory “are you a good problem solver?” question that everyone answers with “yes.” In this episode, Chip and Gini break down why problem-solving ability should be the primary hiring criterion, especially as AI makes technical skills easier to acquire and offload. The conversation explores why this matters more now than ever: as AI handles tactical execution, the ability to define problems clearly, break them into components, and figure out solutions becomes the differentiator between humans who add value and humans who get replaced. Chip and Gini discuss how problem-solving cuts across every role, even ones you don’t typically think of as problem-solving positions. Designers facing impossible deadlines, account people navigating last-minute client demands, anyone dealing with the reality that things rarely go according to plan. They all need to be able to figure out how to move forward rather than escalating every obstacle upward. The episode tackles the mechanics of actually interviewing for this capability. You can’t just ask “are you a good problem solver?”—you need scenario-based questions that reveal how candidates think through challenges. But not hypothetical scenarios you make up; real situations that have happened in your agency. Ask them to walk through how they’ve handled compressed timelines, missing information, conflicting priorities, or last-minute changes in past roles. Gini shares how her daughter’s school explicitly focuses on humanities and emotional intelligence rather than technical skills, anticipating that AI will reshape what jobs exist. 
She connects this to Anthropic’s hiring practice of seeking people with humanities degrees who can absorb information, think critically, and demonstrate emotional intelligence rather than just technical proficiency. The episode concludes with an important reminder: if you hire problem solvers but then micromanage how they solve problems, you’ve wasted the hire. You need to let them solve things their way, even if it’s different from how you’d do it, or you’ll end up with everything back on your plate anyway. [read the transcript] The post ALP 299: Hire people who understand how to solve problems appeared first on FIR Podcast Network.

    21 min
  2. 6D AGO

    FIR #507: Should Nobody Really Ever Write with AI?

    Take a stroll through LinkedIn. You’ll find no shortage of posts stridently deriding the notion that anyone should ever use AI to write for them. While that case isn’t hard to make for professional writers, there are countless professionals in other fields who struggle with writing, never trained to be writers, yet now have to write everything from emails to reports as part of their jobs. Should they really sweat for hours over wording, time they could be devoting to the core areas of subject expertise, when AI can produce content that is cogent, clear, and direct? In this short mid-week episode, Neville and Shel look at the trends in using AI for writing, despite the plethora of opinions from the pundits. Links from this episode: Meet the Tech Reporters Using AI to Help Write and Edit Their Stories Meet the Journalist Using AI to Write Stories How Journalists Feel About AI Muck Rack’s 2026 State of Journalism Report Finds 82% of Journalists Use AI AI Doesn’t Reduce Work—It Intensifies It Is Writing with AI at Work Undermining Your Credibility? How We’re Using AI Review of ‘Using Artificial Intelligence in Academic Writing’ Best Practices for the Effective Use of AI in Business Writing AI Tools for Business Writing 5 Ways to Instantly Level Up Your Communication Using AI Tools Charlene Li and Katia Walsh demonstrate the right way to build a book with AI help – Josh Bernoff The Truth About Writing a Book on AI The next monthly, long-form episode of FIR will drop on Monday, April 27. We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email fircomments@gmail.com. Special thanks to Jay Moonah for the opening and closing music. You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog. 
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients. Raw Transcript Neville: Hi everyone and welcome to For Immediate Release episode 507. I’m Neville Hobson. Shel: And I’m Shel Holtz. And if you spend any time at all on LinkedIn, you’ll see the degree to which anti-AI sentiment is ramping up. A lot of it’s aimed at using AI for writing and how absolutely wrong that is. Yet just last week, on the same day, Wired Magazine and The Wall Street Journal both published articles on reporters using AI to help write and edit their stories. So today, let’s talk about using AI to write. Specifically, is it okay for employees to use AI to help them write for work? And my answer is not only is it okay for many employees, it might be one of the most genuinely useful things AI can do. Here’s the framing I would push back on. When we talk about AI writing assistants, we tend to picture a journalist or a marketer or a communications professional, someone whose craft is writing, it’s what they’re paid for, handing their keyboard over to a robot. And for those of us who are professional writers, that raises legitimate professional and ethical questions. But that’s not the population we’re talking about when we’re communicating AI adoption in most organizations. Think about who actually has to write at work. Engineers document processes. Product managers write status updates. Safety officers draft incident reports. Shel: Finance analysts compose budget justifications. Scientists write up findings for non-technical stakeholders. These are not people who chose their careers because they love writing. Writing is a tax they pay to do the work they actually care about. And many of them pay that tax really, really badly. 
The idea that a structural engineer should produce elegant prose unaided is the same logic as saying a communications director should coordinate the concrete mix for a construction project. We don’t expect that. So why do we expect every knowledge worker to be a competent writer? Muck Rack’s 2026 State of Journalism report found that 82% of journalists, professional writers, people whose job this is, are now using at least one AI tool. That’s up from 77% the year before. If the people whose professional identity is tied to their writing are using AI tools, it shouldn’t surprise us that everyone else is too, or that they should. Now the research does tell us something important about how to use these tools. A University of Florida study of 1,100 professionals found that AI tools can make workplace writing more professional. But regular heavy use can undermine trust between managers and employees, particularly for relationship-oriented messages like praise, motivation, or personal feedback. The study found that employees are more skeptical when they perceive a supervisor is leaning heavily on AI for those kinds of communications. Now that’s a meaningful finding and it’s exactly the kind of nuance internal communicators need to help their organizations understand. It’s not an argument against AI writing assistance. It’s an argument for knowing when it’s appropriate. Purdue Business School Professor Casey Roberson, who literally wrote one of the first business writing textbooks to address AI, puts it this way: AI is a great tool for brainstorming when you’re stuck, for outlining and structuring documents, for revising drafts to improve clarity and tone, but it should not be used for confidential information, and using it to write first drafts can stifle creativity and critical thinking. The Wharton communication program makes a similar distinction. 
Their guidance frames AI tools as powerful and skilled hands for the right task, valuable for brainstorming, editing, improving conciseness, and anticipating challenging questions, but a liability when used as a substitute for your own thinking, your own knowledge of your audience, and your own credibility. So what’s the practical guidance for internal communicators trying to help their colleagues use AI responsibly in their writing? First, make the distinction between communication types explicit. Routine informational writing — process documentation, project updates, meeting recaps, technical reports — that’s where AI assistance is most defensible and most valuable. That’s exactly where the trust risk is lowest and the productivity gain is highest. Conversely, messages that carry relationship weight, like a manager recognizing someone’s contribution or a leader addressing a team through a difficult moment, that deserves a human voice. Help your employees understand that difference. Second, reframe the conversation around who’s actually writing. A systematic review published in the International Journal of Business Communication found that AI can significantly help with idea generation, structure, literature synthesis, editing, and refinement. Essentially all the phases of writing that non-writers find most daunting. AI isn’t replacing a writer’s voice. In many cases, it’s giving non-writers a voice they otherwise wouldn’t even have. Third, be honest about the nuance inside the journalism conversation. The Columbia Journalism Review published a fascinating piece where journalists across major newsrooms shared their practices. Nicholas Thompson, the CEO of The Atlantic, described using AI the way he’d use a fast, well-read research assistant who’s also a terrible writer — helpful for checking consistency, flagging chronological issues, examining logical claims, but not for the writing itself. 
Amelia Daly, a senior reporter at VentureBeat, put it this way: AI helps her productivity, but she refuses to use it to write because writing is how she maintains trust with her readers. That distinction — AI as research and process support versus AI as voice — maps directly to the guidance you should be giving your colleagues. I read one other reporter in one of these articles who said he actually does use it to write because he didn’t become a journalist in order to write. He didn’t like writing; he liked reporting. So he does all the other work and then lets the AI produce the writing. And here’s the thing I’d leave your employees with because I think it gets lost in this debate. Wharton’s communication faculty make the argument that writing is thinking, that when you rely on AI for drafting, you don’t know your content as deeply as you should, and you lose the nimbleness to adapt when the moment requires it. And that’s true. But for an engineer who agonizes over every sentence of a procedure document, who spends four times as long on the writing as on the analysis, AI doesn’t replace their thinking. It clears away the friction so their thinking can actually reach the page. For internal communicators, this is a genuinely useful message to take to your AI adoption rollouts. AI writing assistance isn’t about cutting corners. It’s about removing a barrier that prevents good ideas from being communicated clearly while still insisting on the judgment, authenticity, and relational awareness that only human beings can bring. Neville: Yeah, it’s a big topic, I have to admit. And I think of it from not the employee communication point of view so much. That’s a pretty major part of it, I think, major usage. Is anyone writing, in fact? Whether you’re in public relations, whether you’re a journalist, et cetera, people who need to write as part of their roles is what’s in my mind mostly. I’m also drawn to a very good analysis by Josh Bernoff. 
You and I interviewed Josh, what, two, three months ago. He wrote an assessment of Charlene Li’s new book, Winning with AI, in the creation of which she used AI extensively. Worth pointing out that the book — the AI didn’t write any of the content. She and her co-author, Katia Walsh,

    26 min
  3. MAR 29

    Circle of Fellows #126: Communicating in the Era of the Polycrisis

The days when a crisis communicator could simply reach for a dusty binder and follow a pre-scripted, linear checklist are gone — and they aren’t coming back. In the “good old days,” a crisis was often a contained event with a predictable lifecycle; crisis teams could address it by checking off items on a checklist. Today, we face the era of the polycrisis, where economic instability, geopolitical friction, and a 24/7 social media cycle collide, creating a torrent of simultaneous challenges. This new reality has effectively obliterated the traditional news cycle, replacing it with an always-on environment where a single viral post can tarnish a brand before leadership even knows there is a problem. Thriving in this volatile landscape requires a move away from rigid manuals toward a more fluid, strategic approach. Rather than a step-by-step rulebook, modern practitioners need logical scaffolding — a flexible framework of principles and values that provides a foundation for action while allowing for real-time adaptability. It is about preparation, not just prescription. As the boundaries between internal and external perception continue to erode, the ability to maintain transparency and connection through these multifaceted disruptions is no longer a luxury; it is table stakes for organizational survival. Four Fellows of the International Association of Business Communicators (IABC) shared their perspectives in this episode of IABC’s Circle of Fellows. About the Panel: Edward “Ned” Lundquist is a retired U.S. Navy captain with 43 years of professional public affairs and strategic communications experience. His company, Echo Bridge LLC, provides outreach and advocacy support to government and commercial clients. He served on active duty for 24 years in the U.S. Navy as a surface warfare officer and public affairs specialist. 
Captain Lundquist was a Pentagon spokesman with the Office of the Assistant Secretary of Defense for Public Affairs, Director of the Fleet Home Town News Center, and director of public affairs and corporate communications for the Navy Exchange Service Command. His last tour of duty was commanding the 450 men and women of the Naval Media Center. He is an accredited business communicator and award-winning communicator who served as president of IABC/Hampton Roads and IABC/Washington, director of U.S. District 3, and chair of the International Accreditation Council. He was named an IABC Fellow in 2016. Captain Lundquist received the Surface Navy Association’s Special Recognition Award in January of this year, for his service on SNA’s executive committee and chair of the SNA communications committee. He writes for numerous naval, maritime, and defense publications and chairs and presents at communications, naval, and maritime security conferences around the world. Robin McCasland, IABC Fellow, SCMP, is Senior Director of Corporate Communications for Health Care Service Corporation (HCSC). She leads the company’s communications team and the employee listening program, demonstrating to senior leaders how employee and executive communication add value to the business’s bottom line. Previously, Robin excelled in leadership roles in communication for Texas Instruments, Dell, Tenet Healthcare, and Burlington Northern Santa Fe. She has also worked for large and boutique HR consulting firms, leading major communication initiatives for various well-known companies. Robin is a past IABC chairman and has served in numerous association leadership roles for over 30 years. She was honored in 2023 and 2021 by Ragan/PR Daily as one of the Top Women Leaders in Communication. She’s also received IABC Southern Region and IABC Dallas Communicator of the Year honors. Robin is a graduate of The University of Texas at Austin and a Leadership Texas alumnus. 
Her own podcast, Torpid Liver (and Other Symptoms of Poor Communication), features guest speakers addressing timely topics to help communication professionals become more influential, strategic advisors and leaders. She resides in Dallas, Texas, with her husband, Mitch, and their canine kids, Tank and Petunia. George McGrath is founder and managing principal of McGrath Business Communications, which helps clients build winning corporate reputations, promote their products and services, and advance their views on key issues. George brings more than 25 years in PR and public affairs to his firm. Over the course of his career, he has held senior management positions at leading strategic communications and integrated marketing agencies including Hill and Knowlton, Carl Byoir & Associates, and Brouillard Communications. Caroline Sapriel, founder and Managing Partner of CS&A, brings over 30 years of specialized expertise in risk, crisis, and business continuity management to the table. A Fellow of the International Association of Business Communicators (IABC) and a recipient of the Gold Quill Award for her “10 Commandments of Crisis Management,” Sapriel is a recognized authority in providing high-level, results-driven counsel to senior leaders across the energy, pharmaceutical, and aviation sectors. Her deep academic roots as a lecturer at Antwerp, Leuven, and Leiden Universities, combined with her authorship of Crisis Management – Tales from the Front Line, underscore a career dedicated to transforming systemic vulnerabilities into robust reputation management strategies. Fluent in five languages and possessing a multi-disciplinary background in International Relations and Chinese Studies, she offers a uniquely global perspective on the evolution of stakeholder engagement during high-stakes disruptions. The post Circle of Fellows #126: Communicating in the Era of the Polycrisis appeared first on FIR Podcast Network.

    1h 3m
  4. MAR 23

    ALP 298: Build the business you want to own, not the one you hope to sell

    Most agency owners have read Built to Sell. But many have internalized the wrong lesson from it—fixating on that final chapter where the protagonist drives off into the sunset with a pile of cash, rather than the actual business-building advice throughout the book. The result is owners spending years building businesses optimized for a sale that may never happen, or that won’t deliver the outcome they’re imagining. In this episode, Chip and Gini discuss Chip’s “Build to Own” philosophy as a counterpoint to the built-to-sell mindset. The core principle: focus on creating a business that serves you today, not some hypothetical buyer tomorrow. This doesn’t mean you can’t or won’t sell—it means you stop treating the sale as the primary objective and start treating ownership as the thing you’re optimizing for right now. Chip breaks down the TMRW framework for thinking about what you want from your business: Time (how much you spend and what flexibility you have), Meaning (what gives you satisfaction—clients, team, impact), Rewards (financial outcomes that fund your life today and tomorrow), and Work (the actual role you’re crafting for yourself). Gini shares her decision to retire from speaking despite conventional wisdom saying agency owners should be out there raising their profile—because the anxiety wasn’t worth the marginal business benefit. The conversation tackles the uncomfortable reality that most agency owners counting on a sale to fund their retirement are likely building businesses that won’t command the multiple they’re hoping for. Meanwhile, owners who build businesses that throw off enough cash to fund retirement directly—while also being enjoyable to run—end up with something far more attractive to buyers when and if they do decide to sell. Gini tells the story of a friend who prepared five years in advance for a sale: removing himself from day-to-day operations, hiring a president to build culture, ensuring the business wasn’t founder-dependent. 
The result? An 18x multiple. But the episode’s point isn’t “here’s how to get a great sale”—it’s that you should make every decision through the lens of “would I still be happy with this if I never sold?” [read the transcript] The post ALP 298: Build the business you want to own, not the one you hope to sell appeared first on FIR Podcast Network.

    20 min
  5. MAR 23

    FIR #506: Battle of the Bots!

    In this monthly long-form episode for March, Neville and Shel tackle a trio of interconnected themes reshaping the communications profession in the age of AI. The conversation opens with Anthropic’s top lawyer declaring that AI will destroy the billable hour. That thread leads naturally into JP Morgan’s controversial use of digital monitoring to verify junior bankers’ working hours, where Shel and Neville question whether surveillance technology can substitute for genuine managerial trust and engagement. The episode also examines Gartner’s widely circulated prediction that PR budgets will double by 2027 as AI search engines favor earned media. Shel delivers a detailed report on the escalating misinformation crisis, citing a 900% surge in global deepfake incidents and new research from the C2PA on content provenance standards. The episode closes with a discussion of Cloudflare CEO Matthew Prince’s prediction that bot traffic will exceed human traffic by 2027, and a sobering peer-reviewed study on how social bots hijack organizational messaging — research reported by Bob Pickard, who has experienced bot-driven attacks firsthand. Dan York also contributes a tech report on the state of the Fediverse and Mastodon, as well as on AI developments for WordPress. 
Links from this episode: AI will destroy the billable hour, says Anthropic’s top lawyer Gartner predicts PR budgets will increase 2x by 2027 5 takes on Gartner’s new optimism for PR and earned media in the age of AI PR is back, baby — Gartner is predicting… [LinkedIn post by Lindsay Bennett] The Gartner claim that public relations and earned media budgets will double by 2027 JPMorgan starts programme to monitor junior banker hours [Financial Times] FT Exclusive: The US bank has started to… [Financial Times LinkedIn post] Senator Bernie Sanders Discusses the Impact of AI on Privacy and Democracy with Claude Let’s Talk Keyboard Jamming and Why It Might Suggest Bigger Problems at Work Telling Fact From Fiction With Online Misinformation Online bot traffic will exceed human traffic by 2027, Cloudflare CEO says Public Relations & Organizational Communication [LinkedIn post by Bob Pickard] Social Bots as Agenda-Builders: Evaluating the Impact of Algorithmic Amplification on Organizational Messaging Links from Dan York’s Tech Report: Mastodon post by Eugen Rochko (@Gargron) — mastodon.social Mastodon — Decentralized social media How to Generate a WordPress Theme with Telex Telex — AI-Assisted Authoring Environment for WordPress WordPress.com now lets AI agents write and publish posts, and more Your AI agent can now create, edit, and manage content on WordPress.com Enable MCP tool access for AI agents WordPress.com MCP prompt examples The next monthly, long-form episode of FIR will drop on Monday, April 27. We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email fircomments@gmail.com. Special thanks to Jay Moonah for the opening and closing music. You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog. 
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients. Raw Transcript Neville: Hi everyone, and welcome to the For Immediate Release podcast, long form episode for March, 2026. I’m Neville Hobson. Shel: And I’m Shel Holtz. Neville: As ever, we have six great stories to discuss and share with you, and we hope you’ll gain insight and enjoyment from our discussion. Perhaps you’ll want to share a comment with us once you’ve had a listen. We’d like that. Our topics this month range from AI and the end of the billable hour to Gartner’s predictions about PR budgets to monitoring work in the age of AI to newsrooms battling AI generated misinformation and more, including Dan York’s tech reports. Before we get into our discussion, let’s begin with a recap of the episodes we’ve published over the past month and some listener comments on the long form. In episode 502 for February, published on the 23rd of that month, we explored how rapidly accelerating technology is reshaping the communication profession from autonomous agents with attitudes to the evolving ROI of podcasting. We led with a chilling milestone moment, an autonomous AI coding agent that publicly shamed a human developer after he rejected its code contribution. A leader can build goodwill for days and lose it in seconds. In FIR 503 on the 2nd of March, we reported on the president of the IOC, that’s the International Olympic Committee, who had no answers to reporters’ questions and suggested on camera that someone on her communications team should be fired. We’ve got comments on this, haven’t we, Shel? Shel: Boy, do we have comments on this one. This attracted a good number of them, starting with Kevin Anselmo, who used to have a podcast on the FIR Podcast Network. It was on higher education communication. 
He says, having previously worked in communications for two different international sport federations, I found this story quite amusing. One of my first PR roles was working at the 2000 Sydney Olympic Games. I was working on the sport federation side, not the IOC. Neville: Yep, you did. Shel: But I know that working at such events is exhilarating and exhausting as you have to deal with a myriad of different issues. I can imagine that toward the end of the Olympics, the PR team fell short of delivering a robust brief. But nevertheless, in answer to your question, even if the PR people were abysmal, the fault is on Coventry for the way she handled the situation. A simple, we will have to look into this and get back to you response would have worked. Instead, by handling it the way she did, she drew unnecessary attention to the questions she and the team weren’t prepared to answer, as you and Neville shared. I guess in the process of this mishap, I learned that Germany was in the running for the 2036 Olympics, which I wasn’t aware of. We also heard from Monique Zitnick, who said, really enjoyed your discussion on this. Certainly a puzzling situation that has surely ended in broken trust on both sides. Shel: Mike Klein said, another ignominious IOC leader in the mold of Brundage and Samaranch. Neville, you replied. You said that’s an interesting comparison. Mike, Avery Brundage and Juan Antonio Samaranch both left very complicated legacies, particularly around politics and governance in the Olympic movement. What struck me about this episode wasn’t so much ideology or policy. It was leadership under pressure. Coventry had actually received a fair amount of praise for how she handled some difficult moments during the games, which makes the press conference moment even more interesting from a communication perspective. It’s a reminder that reputation capital can be fragile. A single public moment can reshape the narrative very quickly. 
Mike replied, yes, leadership under pressure, but also the kind of people the IOC has chosen for leadership over the years. Coventry has a complicated history over her involvement with her native Zimbabwe’s recent regimes as well. Sylvia Camby said, Neville, watching Coventry’s press conference took me back to the time I spent doing comms for an international association. It reminded me of how inward-looking organizations like the IOC can be. So totally focused on their internal member politics with leaders too lazy or too overconfident to bother to educate themselves about current affairs. Also, they often have a distorted idea of what the press is interested in. They often think they can dictate their agenda. As you and Shel mentioned on the podcast, the questions were entirely predictable. You replied, Neville, that’s a really insightful observation, Sylvia. Organizations like the IOC can become quite inward facing, particularly when so much of their energy is spent navigating internal governance and member politics. That can create a kind of blind spot about how issues look from the outside. Sylvia said, and I was thinking, I’m proud of Germany for being so sensitive about the significance of that date and for opposing the 2036 bid. They are much better at reading the spirit of the time than Coventry. As an aside, my father’s cousin competed in the 1936 Olympics in Berlin as a gymnast. She passed away last year at the age of 104. She often spoke to me of the atmosphere surrounding the Olympics at the time, a heaviness and a sense of unspeakable doom. So yes, 2036 is a date that Berlin should definitely avoid. And you replied to that, Neville. People can go find that one in the comments. Neville: That’s a good one. There are some great points of view, perspectives there. So thanks to everyone who commented. Are companies using AI as a convenient explanation for layoffs? 
That was a question we asked in FIR 504 on the 10th of March when we discussed AI washing, when organizations blame workforce cuts on AI, even when the reality is more complicated. It’s a difficult ethical space for communicators. And we have comment on this too, don’t we? Shel: Three short ones. First from Monique, who commented that she was looking forward to listening to the episode because she’s been having a lot of conversations on this over the last month. Jacqueline Trzezinski said, I’m glad you’re delving into this. The same thought came to my mind when I saw the Block layoff announcement, especially as it was held up by some on LinkedIn as an example of how valuable transparency is during layoffs. And Jesper Anderson said, I find it fascinating how quickly the world turns upside down. 18 to 24 months ago, companies were accused of letting people go beca

    1h 43m
  6. MAR 17

    FIR #505: Social Media's Big Shift

    In FIR #505, Neville and Shel dig into Hootsuite’s Social Media Trends 2026 report, which argues that social media is no longer just a communication channel — it’s morphing into a search engine, cultural radar, and real-time research tool. They explore what it means for communicators when younger audiences treat TikTok and Instagram as their primary discovery platforms, and when Google itself starts indexing social content. The conversation also tackles “fastvertising” — the growing pressure on brands to react to cultural moments within hours — and whether that speed actually translates to bottom-line results or just burnout. The discussion takes a provocative turn when Shel raises Ethan Mollick’s warning that public forums are being systematically overrun by machine-generated content, with research suggesting one in five accounts in public conversations may be automated. They weigh the AI paradox facing communicators: generative AI has become table stakes for social media production, yet 30% of consumers say they’re less likely to choose a brand whose ads they know were AI-created. Neville and Shel agree that social media can serve as both a publishing channel and a listening tool — but only if human-to-human communication can survive the rising tide of bot-generated noise. Links from this episode: Social Media Trends 2026 | Hootsuite The 18 social media trends to shape your 2026 strategy Sferra Design video on Social Media Trends report | Instagram World-first social media wargame reveals how AI bots can swing elections AI bot swarms threaten to undermine democracy B2B Social Media Trends and Predictions for 2026 The next monthly, long-form episode of FIR will drop on Monday, March 23. We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email fircomments@gmail.com. Special thanks to Jay Moonah for the opening and closing music. 
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.

Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.

Raw Transcript:

Shel: Hi everybody, and welcome to episode number 505 of For Immediate Release. I’m Shel Holtz.

Neville: And I’m Neville Hobson. Social media might be going through its biggest change since the rise of the news feed, and it’s happening quietly. Platforms that started as places to connect with friends are increasingly acting like search engines, cultural sensors, and even market research tools. It’s been a while since Shel and I talked about social media on the podcast, and frankly, that’s partly because the conversation often feels repetitive. New platforms appear, algorithms change, someone declares the death of Twitter again. That’s the kind of format that we seem to be following. But every now and then, a report comes along that suggests something deeper is happening. Hootsuite’s new Social Media Trends 2026 report published last month argues that social media is no longer just a communication channel. It’s becoming something much broader — part search engine, part cultural radar, and part market research lab. Take search, for example. Younger users increasingly treat platforms like TikTok or Instagram as search tools. Instead of Googling “best coffee shop in London,” they search TikTok and watch short videos from real people recommending places to go. And now Google itself has started indexing Instagram posts and surfacing short-form social video in search results. The line between social media and search is starting to blur. At the same time, we’re seeing a strange tension around artificial intelligence. According to the report, most social media managers now use generative AI tools every day to write captions, brainstorm ideas, edit images or video.
But audiences are increasingly suspicious of content that feels automated or synthetic. More than 30% of consumers say they’re less likely to choose a brand if they know its ads were created by AI. So brands are in a curious position. AI is becoming essential behind the scenes, but the content that performs best often needs to feel unmistakably human. And culturally, social media itself is fragmenting. The report points to what it calls Gen Alpha Chaos Culture — absurd memes, distorted audio, and intentionally chaotic editing styles that dominate TikTok among younger audiences. Meanwhile, older audiences — that’s you and me, Shel — are gravitating towards almost the opposite aesthetic: nostalgic references to the ’80s and ’90s, calming, cozy content, and even posts about slow living and digital detox. I do some of that, but I also do the other stuff too. So it’s hard to pigeonhole me, I have to tell you that. So reading this report left me wondering something slightly provocative. Maybe social media isn’t really social anymore. If discovery is driven by algorithms and search behavior rather than who you know, perhaps these platforms are evolving into something else — systems that surface information, culture, and trends in real time. Which raises the bigger question for communicators. Are we still thinking about social media as a place to publish content? Or is it becoming something much more powerful — a tool for understanding behavior, culture, and trust as it unfolds online? Which leads me to a first question. If people increasingly discover products, places, and even news through TikTok or Instagram rather than Google, does that fundamentally change how communicators should think about social media?

Shel: I absolutely think so. I mean, this shift deserves way more attention, I think, than it’s been getting from marketers and communicators. We’re looking at a fundamental change in how people get information.
The rise of social media as a primary search engine — this is not a fringe behavior. In 2026, this is going to be the dominant reality for a massive swath of the population. Brands are just starting to get their arms around AEO [answer engine optimization]. And now they’re going to have to apply the same efforts to social content that they’ve historically reserved for traditional search engine optimization. So captions and alt text and subtitles aren’t going to be nice-to-haves. These are the bedrock of discoverability. And there’s a specific angle here for those of us in internal communications too. I mean, if employees are using TikTok and Instagram the way they used to use Google to make personal decisions, we have to ask if that behavior is bleeding into their professional research. And there’s data that suggests it is. A company called Alpha P-Tech did a study and found that 75% of B2B buy-side stakeholders are going to use social media to gather information about vendors and solutions this year. So this isn’t just a consumer trend. This is a professional evolution too.

Neville: Yeah, I would agree with that, I think. I mean, there’s a lot to unpack here from Hootsuite’s report. And I think it’s, you know, I throw out thoughts that occurred to me when I was reading this. It talks about something I’d not encountered before, whether you have — fast, if I pronounce it right, even it’s a manufactured word — fastvertising. So the word “fast” with “vertising” from advertising, right? Fastvertising. The question actually is, does the fastvertising culture create more risk for communicators, things moving so fast, where, according to Hootsuite, brands now feel pressure to react to trends within hours, if not less than that even? So reacting too quickly can lead to tone-deaf, poorly thought-through posts, I would say, as does Hootsuite, in fact. Are we moving into a world, then, where social media requires newsroom-style judgment and governance? What do you think?
Shel: Well, yes, and I think we’ve been there for a while. We remember the — what were they called — the war rooms that social media teams for various brands were using. Remember Oreo during their 100 Days of Oreo several years ago now. And they had a newsroom that was looking for trends so they could take the one that was planned based on somebody’s birthday. And if something major happens, they could just switch it up and really quickly knock one out that was relevant to what was in the news. I remember they had one cookie that had black and white stripes. And it turned out that it was related to a National Football League referee strike that had just been called. So yeah, I think brands have gotten accustomed to monitoring trends and knocking stuff out fast. Another one was, I think it was the tequila with the chocolate beans, that they pulled that out of Google Trends and said, let’s get that out there while this is a hot trend. And it was up and it did really well, that particular post from whatever tequila company it was. So this is something that I think brands, a lot of them anyway, are already accustomed to. I think the scale that we’re talking about here though is probably not good. I think if you’re reacting to just what you happen to see and not running some analytics, you risk being tone-deaf by jumping into a conversation that turns out to be not that big a deal. You risk saying something that is incongruous with the tone of the conversation because you rushed. I guess the only benefit you get out of this is the fact that everything’s moving so fast that in six hours, no one’s going to remember what you did.

Neville: Yeah, Hootsuite talks about this in the context of fastvertising. Obviously, the word du jour for this thing that’s been around a while is disrupting the content calendar. To that point, online brands are now responding to cultural moments within hours, not days. 22% of m

    21 min
  7. MAR 16

    ALP 297: Holding companies discover retainers, call them “subscriptions”

    S4 Capital has announced a revolutionary new pricing model that will transform how agencies charge for their services: instead of billable hours, they’re moving to… subscriptions. Fixed monthly fees. Annual contracts that auto-renew. All costs absorbed into the price rather than passed through as variables. You know, retainers. The pricing model most independent agencies have used for decades. In this episode (somewhat abbreviated due to Gini’s technical difficulties), Chip and Gini dissect the holding company’s “brilliant innovation” with the appropriate level of sarcasm, then pivot to the more interesting question buried in the announcement: how should agencies price around AI? The conversation moves from eye-rolling at repackaged retainer models to wrestling with legitimate uncertainty about how AI costs will evolve and what that means for agency pricing strategies. Chip points out that we only know what AI costs today, and it’s likely those costs will rise as platforms realize they’re replacing expensive labor and can charge accordingly. This creates a pricing puzzle—do you transparently pass through AI costs, absorb them into your general cost of doing business, or find some middle ground? Gini shares how she’s handling questions from college students about whether jobs will exist when they graduate, explaining that the work itself is shifting from doing to orchestrating, from creating to editing and refining AI outputs. The discussion highlights the difference between cosmetic changes (calling retainers “subscriptions”) and substantive challenges (figuring out sustainable pricing as AI capabilities and costs both increase). They land on the principle that AI costs should be factored into your total cost of doing business rather than line-itemized separately, giving you flexibility to adapt as the landscape shifts without locking yourself into specific cost structures that may not hold. 
The subtext throughout is that holding companies remain out of touch with how most agencies actually operate, still discovering “innovations” that the rest of the industry implemented years ago. [read the transcript] The post ALP 297: Holding companies discover retainers, call them “subscriptions” appeared first on FIR Podcast Network.

    15 min
  8. MAR 10

    FIR #504: When Companies Blame Layoffs on AI -- and Leave Communicators Holding the Bag

    Shel and Neville examine a troubling trend gaining momentum across corporate America: AI washing — the practice of attributing layoffs to artificial intelligence when the real reasons are more complex. The discussion centers on two high-profile cases. Block CEO Jack Dorsey announced a 40 percent workforce reduction, crediting AI tools, despite three prior rounds of cuts that had nothing to do with AI and pushback from former employees who say the moves look like standard cost management. Meanwhile, Oracle is cutting thousands of jobs, not because AI replaced those workers, but to fund a massive data center expansion that Wall Street projects won’t generate positive cash flow until 2030. Separately, a new Anthropic labor market study adds context, finding limited evidence that AI has meaningfully displaced workers to date—though hiring of younger workers in exposed occupations may be slowing. Neville and Shel dig into what this means for communicators who may be asked to craft layoff messaging that overstates AI’s role.

    Links from this episode:
    Labor market impacts of AI: A new measure and early evidence | Anthropic
    What is AI Washing and Why Has It Been Linked to Layoffs?
    Block employees react to mass layoffs, impact of AI
    The US economy lost 92,000 jobs in February and the unemployment rate rose to 4.4%
    The Curious Case of the Block ‘AI Layoffs’
    Jack Dorsey Is Ready to Explain the Block Layoffs
    Oracle Plans Thousands of Job Cuts in Face of AI Cash Crunch
    Is AI really driving an increase in layoffs?
    Why Today’s AI-Driven Layoffs Are Becoming Tomorrow’s Rehiring Crisis

    The next monthly, long-form episode of FIR will drop on Monday, March 23. We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email fircomments@gmail.com. Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.

Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.

Raw Transcript:

Neville: Hi everyone and welcome to For Immediate Release. This is episode 504. I’m Neville Hobson.

Shel: And I’m Shel Holtz. Let’s talk about something today that should be keeping every communication professional up at night. We’re in the middle of a wave of layoffs where AI is being cited as the cause and the data suggests that in many cases that explanation is somewhere between incomplete and pure fiction. That puts communicators in a genuinely difficult position. You may be asked to help craft messaging that you have good reason to believe is misleading.

Shel: That’s a violation of codes of ethics. The stakes here are pretty high. We’ll explain all of this and what communicators should be doing about it right after this.

Shel: Let’s start with the numbers. News of the Oracle layoffs broke just last week amid news that the U.S. economy lost 92,000 jobs in February. And into that bleak backdrop, two major stories landed almost simultaneously. First, Block. Jack Dorsey announced that the company is cutting its staff by 40 percent, more than 4,000 people. The reason, according to his letter to shareholders, intelligence tools. Dorsey framed this as inevitable and even proactive saying, and this is a quote, “I think most companies are late. Within the next year, I think the majority of companies will reach the same conclusion.” But here’s where it gets complicated. Block had already undergone three rounds of layoffs since 2024 before this one. And in a previous round, Dorsey claimed that they were being made for performance reasons.
AI, as far as I can tell, wasn’t mentioned at all, despite the fact that the same tools he now credits were already available and being used by employees. Former employees and analysts pushed back pretty hard on Dorsey’s assertions. One former Block employee wrote that the cuts “read like standard prioritization and cost management, not AI-driven reinvention.”

Shel: And another analyst was blunter, saying the vast majority of these cuts were probably not due to AI. Then, as I mentioned earlier, there’s Oracle, which is planning to axe thousands of jobs among its moves to handle a cash crunch. That cash crunch was created by a massive AI data center expansion effort. Now, this is a different kind of AI-related layoff. It’s not AI replacing these workers, but rather, we’re spending so much money building AI infrastructure that we can’t afford to keep paying these people. Wall Street projects Oracle’s cash flow will go negative for the coming years before all that spending starts to pay off in 2030. That’s workers losing their jobs not because AI took their role, but because their employer’s betting the company on AI and needs the payroll budget to fund that bet. Both cases are AI related. Neither is quite the story it appears to be on the surface. And that is the problem. And it has a name: AI washing. The term describes companies blaming layoffs on AI when the circumstances may be more complicated, like attributing financially motivated cuts to future AI implementation that actually hasn’t happened yet. A Forrester report argues that a lot of companies announcing AI-related layoffs don’t have mature, vetted AI applications ready to fill those roles.

Shel: Molly Kinder at the Brookings Institution makes the investor logic explicit. Calling layoffs AI driven is a very investor-friendly message, especially compared to admitting that the business is ailing.
Even Sam Altman, whose company is arguably the reason any of this is happening in the first place, acknowledged all of this. He said, “There’s some AI washing where people are blaming AI for layoffs that they would otherwise do.” Now the data complicates the picture even more.

Shel: Anthropic just released a major labor market study. It’s worth your attention. They find limited evidence that AI has affected employment to date. Their new “observed exposure” metric, which tracks what AI is actually doing in real workplaces, not what it could do theoretically, shows that workers in the most exposed occupations have not become unemployed at meaningfully higher rates than workers in AI-proof jobs. There’s one exception worth watching: suggestive evidence that hiring of younger workers, particularly ages 20 to 25, has slowed in those occupations exposed to AI. The good news in the Anthropic research also serves as a warning. The reason we’re not seeing mass displacement yet is largely because actual AI adoption is just a fraction of what AI tools are feasibly capable of performing. The gap between theoretical capability and real-world deployment is wide today, but it is closing.

Shel: So what does this mean for communicators? Well, here’s the ethical minefield. When executives AI wash their layoff announcements, they may be revealing that they view AI as a means for eliminating jobs, and that could cause workers not to trust or even sabotage their future plans for AI adoption. Employee concerns about job loss due to AI have already skyrocketed from 28% in 2024 to 40% in 2026, and 62% of employees feel leaders underestimate AI’s emotional and psychological impact. Anti-AI sentiment is real and growing, and every time a company uses AI as a convenient cover story for financially motivated cuts, it feeds that sentiment, making the actual work of responsible AI adoption harder for everyone.
Shel: For communicators who are handed layoff messaging that overstates AI’s role, the guidance from ethics researchers is worth holding on to. Rather than vague claims about AI transformation, companies should provide specifics. How many positions are directly attributable to automation of specific functions? And how many reflect shifting market conditions and strategic realignment? Investors can handle complexity and so can employees. The Block situation is a canary in the coal mine, but perhaps not in the way Jack Dorsey intended. It’s a warning about what happens when the narrative outruns the reality, when the story told to shareholders diverges from the story experienced by the people being let go. Our job as communicators isn’t to make bad news sound good, it’s to make complicated truth navigable. That truth has never been more important or more difficult than it is right now.

Neville: A lot to unpack in that, Shel. I mean, absolute tons. I was curious, actually. One thing you mentioned, I think it was a quote, where you talked about, you know, referencing Sam Altman, where you said, you mentioned the phrase “AI-proof jobs.” What are those? I don’t think anything is AI proof.

Shel: Well, I think a gardener is an AI-proof job. A drywall installer is an AI-proof job. These are the ones that an AI can’t do. Even if you look at the definition that they’re throwing around for artificial general intelligence, it’s any cognitive task that a normal person could perform at their computer. And there are a lot of jobs. I mean, my son-in-law is a plumber and AI is not going to take his job anytime soon. So those are the AI-proof jobs.

Neville: That could be a good topic for a separate discussion, I think. I’ve got some different views. Anyway, one thing that struck me in everything you said is how often AI is framed as inevitable, as Jack Dorsey noted, almost like the technology made the decision. But organization leaders are choosing how and when to deploy AI.
So do you think those leaders risk removing their own accountability when they say “AI made us do this”?

Shel: I think they do, even though that accountability is to the shareholders and they’re performing what they think the sharehol

    23 min
4.5 out of 5 (24 Ratings)
