Leading Change

Ema Roloff

Welcome to Leading Change, where we dive into the real conversations shaping the future of work. Hosted by Ema Roloff, this series brings together business leaders, change-makers, and innovators to explore the intersection of technology, change management, and leadership in today’s evolving workplace. Each episode is packed with actionable insights, candid stories, and fresh perspectives on navigating transformation—whether it’s leveraging emerging tech, leading through disruption, or building resilient teams. If you’re passionate about creating meaningful change and thriving in the digital era, this is the podcast for you. Let’s redefine what it means to lead in a world where change is the only constant.

  1. 2 DAYS AGO

    Claude Code Just Leaked… Here’s Why It Matters

    When the Claude Code leak first surfaced, many thought it was an April Fool’s joke. It wasn’t. In this episode of Leading Change in the Wild, I break down what actually happened when Anthropic accidentally leaked over 500,000 lines of Claude’s source code and why the aftermath matters more than the leak itself. Because this is not just a story about human error. It is a glimpse into the future of AI, cybersecurity, and competition. From malicious repos to copyright takedowns, this moment exposed deeper tensions across the AI landscape. And it raises a bigger question. What happens when the most advanced systems can no longer be contained?

    Here’s what I unpack:
    - What actually happened in the Claude Code leak and how it spread so quickly
    - The immediate cybersecurity risks and rise of malicious copycat repos
    - Why bad actors now have new visibility into AI systems
    - Anthropic’s aggressive copyright response and the backlash that followed
    - The irony of copyright claims in the age of AI training data
    - Why this leak may signal a future of competing or open-source AI models
    - What this means for trust, safety, and leadership in AI

    The takeaway is clear. The genie is out of the bottle. AI is not just evolving. It is becoming harder to control, contain, and govern. This is not just a technology conversation. It is a leadership one. Because the future of AI will not only be shaped by what companies build, but by how we respond when things don’t go as planned.

    TikTok mentioned in this episode: https://www.tiktok.com/@nate.b.jones/video/7624277313655442718

    👇 Let’s discuss:
    - Does this change how you think about AI security and trust?
    - Are we prepared for the risks that come with more open AI systems?
    - What role should companies play when something like this happens?

    🔔 Subscribe for weekly insights on digital transformation, leadership, and emerging technologies.

    9 min
  2. 3 APR

    Personal Autonomy in the Age of Technology

    When do we stop blaming the individual and start blaming the system? And maybe more importantly… when do we stop blaming the system and start taking accountability ourselves? Right now, we’re watching this tension play out in real time. From a landmark lawsuit against Meta Platforms and Google to new regulations emerging in Australia, the conversation is shifting toward platform responsibility. But that shift raises a deeper question about our personal autonomy in how we engage with technology. In this episode of Leading Change, I break down what this moment signals for social media, artificial intelligence, and the balance between individual choice and system design.

    📉 Here’s what we unpack:
    - The shift from personal responsibility to platform accountability
    - Why this debate mirrors past cases like the cigarette industry
    - How the attention economy is designed to influence behavior
    - What this means as AI becomes more immersive and habit-forming
    - The risks of relying on regulation to guide our decisions
    - Why setting personal boundaries with technology matters more than ever

    This is not just a legal or regulatory conversation. It is a question of autonomy. If we decide that we have no control over how we engage with technology, we give that control away. But if we recognize our role alongside these systems, we create space for more intentional use. This is not about removing responsibility from platforms. It is about understanding that regulation alone will not solve the problem. As AI continues to evolve, our choices, behaviors, and boundaries will shape its impact just as much as the technology itself.

    👇 Let’s discuss:
    - Do you think social media platforms are responsible for addiction?
    - Or does individual accountability still play a bigger role?
    - Is waiting for regulation the right move?

    🔔 Subscribe for weekly insights on digital transformation, leadership, and emerging technologies.

    13 min
  3. 24 MAR

    Is This the SaaS Apocalypse or AI Hype Gone Too Far?

    The conversation around AI is reaching a tipping point, but are we witnessing a real shift in the market or just the consequences of overhyped expectations? In this episode of Leading Change in the Wild, I dive into recent headlines around private equity firms freezing withdrawals in private credit funds and what that signals for SaaS, AI, and the broader tech economy. From “ghost GDP” fears to AI-driven panic, this moment raises an important question. Are we reacting to reality or to narratives?

    Here’s what I unpack:
    - What’s really happening with private credit funds and SaaS investments
    - How AI hype is influencing market behavior and investor confidence
    - The “white-collar replacement” narrative and why it’s driving fear
    - How negative AI messaging is impacting adoption and ROI
    - Why panic-driven decisions rarely lead to long-term success
    - The leadership lesson: questioning the assumptions behind your tech strategy

    The takeaway is simple. Markets and leaders do not fail because of change. They fail because of unchecked assumptions and reactive decisions. AI is not just a technology shift. It is a test of how intentionally we lead through uncertainty.

    👇 Let’s discuss:
    - Are we seeing a real SaaS downturn or just hype-driven panic?
    - How is AI messaging affecting adoption inside your organization?
    - What assumptions is your team making about the future of work and tech?

    🔔 Subscribe for weekly insights on digital transformation, leadership, and emerging technologies.

    11 min
  4. 10 FEB

    Inside Clawbot and Moltbook’s Leap Into Autonomous AI

    What happens when AI agents stop waiting for prompts and start taking action on their own? We’re beginning to see that line blur, and the headlines are starting to feel a little sci-fi. In this episode of Leading Change in the Wild, I break down what’s happening with autonomous AI agents like Clawbot and Moltbook, why they’re generating so much hype, and the very real leadership and ethical questions they raise as autonomy increases.

    📉 Here’s what I unpack:
    - What makes agents like Clawbot fundamentally different from traditional AI tools
    - Why persistent memory, proactivity, and autonomy are changing the risk profile
    - Real examples of agents acting without explicit prompts, including calling their owners
    - What Moltbook reveals about AI agents interacting without human oversight
    - Why accountability, governance, and human-in-the-loop design matter more than ever

    This technology is impressive, but it also makes one thing clear: once autonomy is introduced, the questions shift from what AI can do to who is responsible when it does it. We can’t put the genie back in the bottle. The focus now has to be on ethical design, clear guardrails, and human leadership that keeps pace with the technology.

    👇 Let’s discuss:
    - How comfortable are you with autonomous AI?
    - Where should accountability sit when agents act on their own?
    - What guardrails feel non-negotiable as autonomy increases?

    🔔 Subscribe for weekly insights on digital transformation, change management, and emerging technologies.

    12 min
  5. 3 FEB

    Firehound and the Hidden Risk of Vibe Coding

    Vibe coding makes it feel easy to launch an app. Write a good prompt, ship fast, and start monetizing. But what happens when no one stops to think about security, data exposure, or who is actually protecting users? In this episode of Leading Change in the Wild, I take a closer look at Firehound and the work they are doing to expose vibe-coded apps in the App Store that are leaking user data, and why this should be a wake-up call for builders, leaders, and consumers.

    📉 Here’s what I unpack:
    - Why vibe-coded apps are creating serious security vulnerabilities
    - How Firehound uncovered nearly 200 apps leaking user data
    - What the Tea app incident revealed about verification, privacy, and harm
    - Why fast AI-driven development often skips critical safeguards
    - How this changes the build versus buy conversation
    - What leaders need to consider before encouraging internal vibe coding

    AI can accelerate development, but speed without security creates risk. When we remove guardrails and expertise, the cost shows up later in user trust, data exposure, and reputational damage. This moment is a reminder that just because something can be built quickly does not mean it should be deployed without rigor. Whether you are building internally or shipping to the public, security and governance still matter.

    👇 Let’s discuss:
    - Do you think vibe coding belongs in enterprise environments?
    - How should leaders balance speed, innovation, and security when using AI to build?

    🔔 Subscribe for weekly insights on digital transformation, change management, leadership, and emerging technologies.

    8 min
  6. 20 JAN

    Apple & Google’s AI Partnership

    Is Siri finally about to answer our questions? Apple’s new partnership with Google has a lot of people talking. Some see it as Apple waving a white flag in the AI race. I see it as something much more strategic. In this episode of Leading Change in the Wild, I break down Apple’s decision to partner with Google’s Gemini AI to power Siri, what this means for the future of AI competition, and why the build versus buy conversation is resurfacing in a big way.

    📉 Here’s what I unpack:
    - Why Apple partnering with Google is not an AI failure but a strategic choice
    - How this deal pushed Alphabet past a $4 trillion valuation
    - Why build versus buy is back in enterprise conversations
    - What data ownership and model control have to do with AI strategy
    - How Google is quietly positioning itself for a major AI comeback
    - What this partnership signals for leaders navigating AI investments

    AI leadership is not always about being first. Sometimes it is about knowing what to build, what to buy, and what to partner on. This moment is a reminder that strategy is about focus. Apple is doubling down on its core strengths while leveraging partnerships to stay competitive in a rapidly changing market.

    👇 Let’s discuss:
    - Is build versus buy a real option for most organizations right now?
    - What do you think Apple’s partnership with Google signals about the future of AI competition?

    🔔 Subscribe for weekly insights on digital transformation, change management, emerging technologies, and leadership.

    8 min
