AI Trust Talks

Emma Johnson

Join Emma and her guests as they tackle the challenges and opportunities shaping the ethical AI landscape—and discover how we can build a future where technology serves both business goals and the greater good. https://emmajohnson-ai.com/ https://www.instagram.com/ai_trust_talks/ https://www.tiktok.com/@ai_trust_talks

  1. AI's "Make Your Break" Moment — Myth or Milestone? | with Glen McCracken

    1D AGO

    AI's "Make Your Break" Moment — Myth or Milestone? | with Glen McCracken

    In this episode of AI Trust Talks, host Emma Johnson sits down with Glen McCracken to unpack a critical question facing organisations adopting artificial intelligence: How do you actually build trust in AI?

    As AI continues to reshape industries, many organisations assume that trust comes from building more sophisticated models. But according to Glen McCracken, that assumption misses the real issue entirely. Trust in AI is not built through complexity. It is built through clarity.

    Glen McCracken is a data and transformation leader with extensive experience across fintech, health tech, and digital innovation. Originally from New Zealand and now working internationally, Glen has spent years designing and implementing AI systems across sectors — from email spam detection to large-scale digital platforms in sport and healthcare. With a background in statistics and enterprise transformation, he brings a practical, real-world perspective to how AI actually works inside organisations.

    In this conversation, Glen challenges the common narrative that AI systems “go rogue.” Instead, he highlights a simple but powerful truth: AI reflects the behaviour, incentives, and decisions of the people who build and deploy it. Rather than treating AI as an independent actor, organisations must recognise that technology is ultimately shaped by human systems, culture, and governance.

    One of the biggest barriers companies face is what Glen calls the execution gap. Many organisations approach AI as a purely technical project — focusing on building models — when in reality, successful AI implementation requires organisational transformation. Integrating AI often means redesigning workflows, redefining responsibilities, and shifting company culture to support data-driven decision making. Because without that foundation, even the most powerful AI system will struggle to deliver value.

    This episode explores:
    • Why AI does not “go rogue” — people and systems do
    • The human and organisational factors behind AI success
    • Why many companies fail by treating AI as a purely technical problem
    • The importance of governance, accountability, and risk management
    • How clarity in decision-making builds real trust in AI

    As Glen explains, trust does not come from understanding every technical detail of an AI model. Instead, it comes from transparency around how decisions are made, who is responsible, and how risks are managed. Just as passengers trust an aircraft without needing to understand aerodynamics, organisations can trust AI when the surrounding systems — governance, processes, and accountability — are clear. Because ultimately, trustworthy AI is not just about intelligent models. It is about the systems, people, and decisions that surround them.

    —

    About Glen McCracken
    Glen McCracken is a data strategist and AI practitioner with deep expertise in statistics, digital transformation, and enterprise technology deployment. His work spans fintech, health tech, sports technology, and large-scale digital systems, where he focuses on bridging the gap between advanced analytics and practical business outcomes.

    —

    About AI Trust Talks
    Hosted by Emma Johnson, AI Trust Talks explores leadership, governance, and the human side of artificial intelligence. Through conversations with global experts, the podcast examines how organisations can build responsible, trustworthy AI in a rapidly evolving technological landscape.

    —

    If this episode resonates with you:
    ✔️ Subscribe for more conversations on AI, leadership, and trust
    ✔️ Share this episode with leaders building AI-driven organisations
    ✔️ Comment below: What does trust in AI mean in your organisation?

    55 min
  2. Reimagining Innovation: Patrick Strauss on Intelligent Environments and Industry 4.0

    MAR 5

    Reimagining Innovation: Patrick Strauss on Intelligent Environments and Industry 4.0

    What Is Your “Purple Wing” in AI?

    In this episode of AI Trust Talks, host Emma Johnson is joined by global transformation executive Patrick Strauss for a powerful conversation on one of the most misunderstood topics in artificial intelligence: trust cannot be added later. It must be designed from day one.

    Patrick Strauss is an international business leader with over three decades of experience driving large-scale digital and AI transformation across global enterprises. Having led innovation and strategic change within complex organisations, he brings a rare blend of operational depth, governance expertise, and commercial foresight to the AI conversation. And his message is clear: in a world obsessed with “move fast and break things,” the real differentiator is responsible speed.

    Right now, AI feels like mushrooms popping up everywhere. New tools. New startups. New promises of overnight millions. But the real question leaders should be asking is: What is your Purple Wing?

    Patrick introduces the concept of the Purple Wing — the defining differentiator that makes an organisation stand out in a saturated AI market. If a founder or executive cannot articulate their Purple Wing in one clear, succinct answer, that’s the red flag.

    This episode explores:
    • Why AI trust must be embedded at the design stage
    • The commercial advantage of governance, ethics, and safety
    • Why slowing down strategically can future-proof a business
    • The danger of becoming a “me too” AI company
    • How leaders can build AI credibility that lasts

    Because in hindsight, the companies that look the smartest will not be the fastest. They will be the ones who built trust properly.

    —

    About Patrick Strauss
    Patrick Strauss is a seasoned global executive and transformation strategist with deep expertise in digital innovation, governance frameworks, and enterprise AI implementation. His leadership experience spans multinational organisations where trust, risk management, and responsible technology deployment were mission-critical.

    —

    About AI Trust Talks
    Hosted by Emma Johnson, AI Trust Talks explores leadership, governance, AI safety, and global innovation with the thinkers and decision-makers shaping the future of responsible artificial intelligence.

    —

    If this episode resonates:
    ✔️ Subscribe for more conversations on AI leadership and trust
    ✔️ Share this with a founder or executive building in AI
    ✔️ Comment below: What is your Purple Wing?

    51 min
  3. Human After All: AI, Empathy, and the Ethics of Emotional Intelligence

    FEB 19

    Human After All: AI, Empathy, and the Ethics of Emotional Intelligence

    In this thought-provoking episode of AI Trust Talks, host Emma Johnson explores one of the most urgent questions of the AI era: Are we still human after all?

    Emma is joined by three powerhouse voices at the intersection of technology, governance, and human connection:
    • Mimi Nicklin – empathy advocate, bestselling author, and CEO of Empathy Everywhere
    • Prof. Melodena Stephens – AI ethics strategist, professor at MBRSG, and global advisor on AI governance
    • Dr Farzana Irshad Munshi – biochemist turned tech entrepreneur, focused on emotional intelligence and digital innovation

    Together, they unpack:
    • What it truly means to be human in an AI-driven world
    • The myth that AI can “feel” empathy
    • Emotional AI in healthcare, education and business
    • Bias, profiling, and cultural flattening in large language models
    • Governance gaps and the global challenge of regulating AI
    • Loneliness, leadership, and the future of emotional intelligence
    • Why “AI is the side character — humans are the main character”

    From classrooms to hospitals, startups to global policy, this conversation challenges both the hype and the fear surrounding artificial intelligence. Is AI amplifying who we are — or reshaping humanity itself? And if empathy becomes measurable, does it risk becoming just another performance metric?

    This episode doesn’t offer easy answers — but it offers something more important: honest dialogue.

    56 min
  4. AI on Trial: Trust, Ethics, and Regulation in the Legal Arena

    FEB 11

    AI on Trial: Trust, Ethics, and Regulation in the Legal Arena

    In this episode of AI Trust Talks, host Emma Johnson is joined by Minesh Tanna, Partner in the Disputes & Investigations team at Simmons & Simmons and the firm’s Global AI Lead. An industry-recognised authority on AI law, Minesh leads the firm’s Tier 1 Artificial Intelligence practice and has been ranked in the Legal 500 UK AI Hall of Fame for 2024 and 2025. Minesh specialises in contentious commercial and regulatory matters in the technology sector, representing clients in investigations, litigation and arbitration, while also advising on regulatory readiness. He chairs both the Society for Computers & Law AI Group and the City of London Law Society AI Committee, and is a published author and regular speaker on AI law. A Solicitor-Advocate and accredited mediator, he has acted on complex disputes involving AI, software, telecoms, cloud services and outsourcing. He holds a first class Law degree from the University of Oxford and has worked internationally, including in Abu Dhabi, Dubai and Paris. Together, Emma and Minesh explore one of the most urgent questions facing organisations today: how do we regulate, govern and truly trust AI in high-stakes environments like law, finance and the judiciary? 
    Key themes in this episode include:
    • The layered reality of AI regulation beyond the EU AI Act
    • Why general purpose AI creates complex compliance challenges
    • The gap between regulatory expectations and operational reality
    • Moving from AI literacy to AI fluency
    • Responsible AI vs legal compliance, and where ethics begins
    • AI in the courtroom, hallucinations and liability
    • Whether AI can reduce human bias in legal decision making
    • How regulated industries must adapt to rapid AI adoption

    Minesh also shares his perspective on trust thresholds for AI in judicial contexts, professional codes of conduct, and why expecting perfection from AI may not be the right benchmark for building trust.

    This episode is essential listening for board members, legal professionals, compliance leaders, policymakers and anyone navigating the evolving AI regulatory landscape.

    Follow AI Trust Talks on Instagram: https://www.instagram.com/ai_trust_talks
    Follow, subscribe and share to keep the conversation going. Because building trust in AI is no longer optional.

    44 min
