Leading Change

Ema Roloff

Welcome to Leading Change, where we dive into the real conversations shaping the future of work. Hosted by Ema Roloff, this series brings together business leaders, change-makers, and innovators to explore the intersection of technology, change management, and leadership in today’s evolving workplace. Each episode is packed with actionable insights, candid stories, and fresh perspectives on navigating transformation—whether it’s leveraging emerging tech, leading through disruption, or building resilient teams. If you’re passionate about creating meaningful change and thriving in the digital era, this is the podcast for you. Let’s redefine what it means to lead in a world where change is the only constant.

  1. 2D AGO

    OpenAI and Anthropic Just Admitted AI Isn’t Magic

    If AI is supposed to replace human productivity, why are OpenAI and Anthropic spending billions to build human-led services companies? In this episode of Leading Change in the Wild, I break down the back-to-back announcements that OpenAI and Anthropic are launching venture-backed consulting and implementation firms designed to help companies adopt AI. And hidden inside these announcements is a quiet admission. AI is not a magic wand. Because despite all the hype around instant productivity and “AI-first” transformation, companies are running into the same problem technology implementations have always faced. The people side of change.

    Here’s what I unpack:
    - Why OpenAI and Anthropic are launching AI-focused services companies
    - The real reason enterprise AI adoption has been so difficult
    - How the “AI magic wand” narrative is colliding with reality
    - Why buying AI tools without strategy creates confusion and waste
    - The ongoing gap between technology implementation and true transformation
    - Why leadership, training, and communication matter more than ever
    - The danger of skipping over change management in the rush to adopt AI

    The takeaway is clear. AI alone will not transform your business. Real transformation happens when technology, leadership, process, and people work together. This is not just a technology conversation. It is a leadership one. Because the companies that win with AI will not be the ones that adopt it the fastest. They will be the ones that adopt it with the most intention.

    👇 Let’s discuss:
    - What do these new AI services companies signal to you?
    - Is your organization focused more on technology or strategy?
    - Do you think most companies are prepared for the people side of AI adoption?

    🔔 Subscribe for weekly insights on digital transformation, leadership, and emerging technologies.

    12 min
  2. MAY 5

    Tokenmaxxing Is Breaking AI Strategy

    Companies are racing to adopt AI. But what happens when they start incentivizing the wrong behavior? In this episode of Leading Change in the Wild, I break down the rise of “tokenmaxxing” and how AI leaderboards inside major companies are driving massive usage… without delivering real value. From engineers burning tokens to hit leaderboards to companies blowing through millions in AI spend, we are starting to see the consequences of chasing usage instead of outcomes.

    Here’s what I unpack:
    - What “tokenmaxxing” is and why it’s spreading across companies
    - How AI leaderboards are driving the wrong behaviors
    - The massive cost of AI usage without clear strategy
    - Why companies are burning through budgets faster than expected
    - The connection between AI spend and layoffs
    - What the data actually says about AI productivity gains
    - Why incentivizing usage instead of value is a leadership failure

    The takeaway is clear. More AI usage does not equal more productivity. If you measure the wrong thing, you get the wrong outcome. This is not just a technology problem. It is a leadership problem. Because the way you incentivize behavior will determine whether AI becomes an advantage or a liability.

    👇 Let’s discuss:
    - Is your company tracking AI usage or actual outcomes?
    - Have you seen behavior like this inside your organization?
    - What should leaders be measuring instead?

    🔔 Subscribe for weekly insights on digital transformation, leadership, and emerging technologies.

    11 min
  3. APR 28

    The AI Strategy Gap

    There’s a growing gap in how AI is being experienced inside organizations. And most leaders are missing it. In this episode of Leading Change in the Wild, I share original research from my audience that reveals a major disconnect between leadership and employees when it comes to AI adoption. Because while leaders are optimistic, employees are overwhelmed. And that gap is creating more problems than progress. This is not just about AI. It is about how we lead change.

    Here’s what I unpack:
    - The stark difference between leadership and employee sentiment toward AI
    - Why most companies don’t actually have an AI strategy
    - How hype and external pressure are driving decision-making
    - The reality of AI creating more work instead of less
    - Why poor training and unclear direction are hurting adoption
    - The “FOMO cycle” and how it keeps repeating
    - What leaders need to do differently to close the gap

    The takeaway is clear. This is not a technology problem. It is a leadership problem. If you want real results from AI, you have to start with the problem, not the tool. And you have to bring your people into the process. Because without alignment, strategy is just noise.

    📄 Download the full report here: https://mailchi.mp/roloffconsulting/aigap

    👇 Let’s discuss:
    - Does this gap exist in your organization?
    - Is AI making your work easier or more complicated?
    - What would need to change for AI to actually deliver value?

    🔔 Subscribe for weekly insights on digital transformation, leadership, and emerging technologies.

    16 min
  4. APR 21

    Gen Z Is Sabotaging AI… But Here’s Why

    44% of Gen Z workers say they’ve tried to “sabotage” AI at work. But before we jump to conclusions, we need to ask a better question. Why? In this episode of Leading Change in the Wild, I break down the latest data on Gen Z’s shifting sentiment toward AI and why this reaction has less to do with resistance to technology and more to do with how it’s being introduced. Because this is not a story about a generation rejecting AI. It is a story about what happens when leadership gets the rollout wrong. From fear-based messaging to “AI-first” mandates, we are watching a growing disconnect between how companies are deploying AI and how employees are experiencing it.

    Here’s what I unpack:
    - The data behind Gen Z’s declining trust and rising anxiety around AI
    - What “AI sabotage” actually looks like in the workplace
    - Why poor rollout strategies are driving risky and reactive behavior
    - The impact of fear-based narratives around job loss and automation
    - How AI adoption is increasing workload, not reducing it
    - The tension between productivity expectations and work-life balance
    - Why Gen Z’s pushback may actually be a signal leaders need to listen to

    The takeaway is clear. This is not a Gen Z problem. It is a leadership problem. If you want adoption, you cannot skip the hard work. That means training, transparency, and real conversations about how AI will be used and why. AI is not an easy button. And your people are not the barrier. They are the signal.

    👇 Let’s discuss:
    - Do you think Gen Z is resisting AI or responding to how it’s being rolled out?
    - How is AI impacting workload and expectations in your organization?
    - What would make AI adoption feel more intentional and less forced?

    🔔 Subscribe for weekly insights on digital transformation, leadership, and emerging technologies.

    15 min
  5. APR 14

    What OpenAI Says vs What They’re Doing

    OpenAI just released its policy vision for the “intelligence age” and at first glance, it sounds promising. But when you look closer, the story starts to fall apart. In this episode of Leading Change in the Wild, I break down OpenAI’s latest policy document and the growing gap between what AI companies say and what they actually do. Because this is not just about policy. It is about trust, accountability, and whether we should believe the narrative being presented to us. From energy subsidies to workforce impact, this document raises more questions than it answers.

    Here’s what I unpack:
    - Why OpenAI’s “pay their own way” stance contradicts real-world actions
    - The role of public funding and who is actually subsidizing AI infrastructure
    - The disconnect between “people-first” messaging and enterprise partnerships
    - Why consulting-driven AI adoption often excludes the very people doing the work
    - The limitations of how AI companies define “human-centered” roles
    - The lack of real mechanisms for public and worker input
    - Why this document feels more like a PR move than a true shift in strategy

    The takeaway is simple. Saying “people first” is not the same as acting like it. If AI companies want trust, they need to earn it through action, not just policy statements. This is not just a technology conversation. It is a leadership one. Because the future of AI will not be shaped by what companies promise. It will be shaped by what they actually do.

    👇 Let’s discuss:
    - Do you trust AI companies to put people first?
    - Where do you see the biggest gap between messaging and reality?
    - What responsibility should companies have before regulation steps in?

    🔔 Subscribe for weekly insights on digital transformation, leadership, and emerging technologies.

    18 min
  6. APR 7

    Claude Code Just Leaked… Here’s Why It Matters

    When the Claude Code leak first surfaced, many thought it was an April Fools’ joke. It wasn’t. In this episode of Leading Change in the Wild, I break down what actually happened when Anthropic accidentally leaked over 500,000 lines of Claude’s source code and why the aftermath matters more than the leak itself. Because this is not just a story about human error. It is a glimpse into the future of AI, cybersecurity, and competition. From malicious repos to copyright takedowns, this moment exposed deeper tensions across the AI landscape. And it raises a bigger question. What happens when the most advanced systems can no longer be contained?

    Here’s what I unpack:
    - What actually happened in the Claude Code leak and how it spread so quickly
    - The immediate cybersecurity risks and rise of malicious copycat repos
    - Why bad actors now have new visibility into AI systems
    - Anthropic’s aggressive copyright response and the backlash that followed
    - The irony of copyright claims in the age of AI training data
    - Why this leak may signal a future of competing or open-source AI models
    - What this means for trust, safety, and leadership in AI

    The takeaway is clear. The genie is out of the bottle. AI is not just evolving. It is becoming harder to control, contain, and govern. This is not just a technology conversation. It is a leadership one. Because the future of AI will not only be shaped by what companies build, but by how we respond when things don’t go as planned.

    TikTok mentioned in this episode: https://www.tiktok.com/@nate.b.jones/video/7624277313655442718

    👇 Let’s discuss:
    - Does this change how you think about AI security and trust?
    - Are we prepared for the risks that come with more open AI systems?
    - What role should companies play when something like this happens?

    🔔 Subscribe for weekly insights on digital transformation, leadership, and emerging technologies.

    9 min
  7. APR 3

    Personal Autonomy in the Age of Technology

    When do we stop blaming the individual and start blaming the system? And maybe more importantly… when do we stop blaming the system and start taking accountability ourselves? Right now, we’re watching this tension play out in real time. From a landmark lawsuit against Meta Platforms and Google to new regulations emerging in Australia, the conversation is shifting toward platform responsibility. But that shift raises a deeper question about our personal autonomy in how we engage with technology. In this episode of Leading Change, I break down what this moment signals for social media, artificial intelligence, and the balance between individual choice and system design.

    📉 Here’s what we unpack:
    - The shift from personal responsibility to platform accountability
    - Why this debate mirrors past cases like the cigarette industry
    - How the attention economy is designed to influence behavior
    - What this means as AI becomes more immersive and habit-forming
    - The risks of relying on regulation to guide our decisions
    - Why setting personal boundaries with technology matters more than ever

    This is not just a legal or regulatory conversation. It is a question of autonomy. If we decide that we have no control over how we engage with technology, we give that control away. But if we recognize our role alongside these systems, we create space for more intentional use. This is not about removing responsibility from platforms. It is about understanding that regulation alone will not solve the problem. As AI continues to evolve, our choices, behaviors, and boundaries will shape its impact just as much as the technology itself.

    👇 Let’s discuss:
    - Do you think social media platforms are responsible for addiction?
    - Or does individual accountability still play a bigger role?
    - Is waiting for regulation the right move?

    🔔 Subscribe for weekly insights on digital transformation, leadership, and emerging technologies.

    13 min
  8. MAR 24

    Is This the SaaS Apocalypse or AI Hype Gone Too Far?

    The conversation around AI is reaching a tipping point, but are we witnessing a real shift in the market or just the consequences of overhyped expectations? In this episode of Leading Change in the Wild, I dive into recent headlines around private equity firms freezing withdrawals in private credit funds and what that signals for SaaS, AI, and the broader tech economy. From “ghost GDP” fears to AI-driven panic, this moment raises an important question. Are we reacting to reality or to narratives?

    Here’s what I unpack:
    - What’s really happening with private credit funds and SaaS investments
    - How AI hype is influencing market behavior and investor confidence
    - The “white-collar replacement” narrative and why it’s driving fear
    - How negative AI messaging is impacting adoption and ROI
    - Why panic-driven decisions rarely lead to long-term success
    - The leadership lesson. Questioning assumptions behind your tech strategy

    The takeaway is simple. Markets and leaders do not fail because of change. They fail because of unchecked assumptions and reactive decisions. AI is not just a technology shift. It is a test of how intentionally we lead through uncertainty.

    👇 Let’s discuss:
    - Are we seeing a real SaaS downturn or just hype-driven panic?
    - How is AI messaging affecting adoption inside your organization?
    - What assumptions is your team making about the future of work and tech?

    🔔 Subscribe for weekly insights on digital transformation, leadership, and emerging technologies.

    11 min

Ratings & Reviews

5 out of 5 (2 Ratings)
