AI Trust Talks

Emma Johnson

Join Emma and her guests as they tackle the challenges and opportunities shaping the ethical AI landscape—and discover how we can build a future where technology serves both business goals and the greater good. https://emmajohnson-ai.com/ https://www.instagram.com/ai_trust_talks/ https://www.tiktok.com/@ai_trust_talks

  1. Code, Control & Consequence: Ray Eitel-Porter and Paul Dongha on Governing AI’s Impact

    APR 8

    Code, Control & Consequence: Ray Eitel-Porter and Paul Dongha on Governing AI’s Impact

    AI Governance Isn’t a Constraint — It’s the Enabler of Trust

    In this episode of AI Trust Talks, Emma Johnson sits down with Ray Eitel-Porter and Paul Dongha, co-authors of Governing the Machine, to answer a critical question: How do we unlock AI’s potential without losing control?

    With decades of experience across AI, banking, and enterprise systems, Ray and Paul share a clear message: AI without governance doesn’t scale. It risks trust, adoption, and long-term value. As AI grows—from analytics to generative and agentic models—organizations face tension between innovation and risk. Governance isn’t a compliance checklist; it’s a system for building trust into AI from the start.

    Key topics explored:
    • Responsible AI (principles) vs. AI governance (execution)
    • Embedding governance across the full AI lifecycle
    • Nine key AI risk categories
    • Accountability with business owners, not just technical teams
    • Cross-functional collaboration: legal, risk, HR, tech

    Insights from the episode:
    • Governance accelerates, not slows, innovation
    • AI risks (bias, explainability, sustainability) are real and must be managed
    • Early-stage checkpoints and ethics councils prevent costly mistakes
    • AI literacy across teams is essential
    • AI is probabilistic, not “intelligent”—human oversight is crucial
    • Trust comes from the people, processes, and standards behind AI, not the system itself

    Ray and Paul also highlight a growing blind spot: AI’s environmental impact. As adoption scales, energy use and carbon emissions will become central to responsible AI.

    Bottom line: Organizations that succeed with AI won’t just be the most innovative—they’ll be the most trusted. Governance isn’t optional; it’s the foundation of AI success.
    About the Authors
    Ray Eitel-Porter – Responsible AI leader, former Global Lead at Accenture
    Paul Dongha – Expert in embedding governance and ethical frameworks in large organizations

    About AI Trust Talks
    Hosted by Emma Johnson, the show explores leadership, governance, and human systems behind responsible AI.

    ✔️ Subscribe for more insights on AI leadership
    ✔️ Share with teams deploying AI
    ✔️ Comment: Is your AI strategy built on trust—or just speed?

    49 min
  2. Futureproofing with Purpose: Nationalisation, Digital Disruption, and Human-Centred Transformation

    APR 1

    Futureproofing with Purpose: Nationalisation, Digital Disruption, and Human-Centred Transformation

    AI Transformation Starts with People — Not Technology

    In this episode of AI Trust Talks, host Emma Johnson sits down with Dua Al Toobi to explore one of the most overlooked truths in digital transformation: AI transformation is not a technology challenge. It’s a workforce challenge.

    Dua Al Toobi is a strategist, advisor, and author of Future Proof Your Workforce, with experience across banking, government, and digital transformation. As Founder of Future Proof Advisory and Global Director of Digital Partnerships at Women in AI, she focuses on helping organisations bridge the digital skills gap and build future-ready teams. Her core message is clear: If you don’t transform your workforce, your AI strategy will fail.

    Many organisations still approach AI in fragmented ways — separating technology, people, and strategy into different conversations. Dua challenges this approach, arguing that true transformation starts by aligning all three.

    This conversation explores:
    • Why workforce transformation must sit at the center of AI strategy
    • The gap between digital ambition and people readiness
    • Why HR must be involved at the design stage — not just execution
    • Moving from hiring talent to reskilling existing teams
    • The role of leadership in enabling change and psychological safety

    Dua also introduces strategic empathy — understanding how decisions impact people on the ground — and why most digital transformations fail at the “last mile” where employees meet customers.

    A key insight from the episode: Organisations already have more of the skills they need than they think. The challenge is identifying, unlocking, and evolving those capabilities — not starting from scratch.
    The conversation also covers:
    • Human-machine collaboration and the role of human judgment
    • The future of frontline roles and preserving dignity in automation
    • Why nationalisation and digital transformation should be one conversation
    • How inclusive workforce design leads to better AI outcomes
    • The importance of experimentation and continuous learning

    Dua shares a powerful perspective on trust in the age of AI: Trust yourself. Because while AI can enhance decisions, human judgment, context, and experience remain essential.

    Ultimately, the organisations that succeed in AI won’t be the ones with the most advanced tools. They’ll be the ones who invest in their people, align their strategy, and build systems that bring humans and machines together effectively. Because in the age of AI, transformation doesn’t start with technology. It starts with people.

    —
    About Dua Al Toobi
    Dua Al Toobi is a workforce transformation advisor and founder of Future Proof Advisory. She works with organisations to build future-ready capabilities and bridge the digital skills gap, while also advocating for inclusion in AI through her role at Women in AI.

    —
    About AI Trust Talks
    Hosted by Emma Johnson, AI Trust Talks explores leadership, governance, and the human systems behind responsible artificial intelligence.

    —
    If this episode resonates with you:
    ✔️ Subscribe for more conversations on AI leadership
    ✔️ Share this episode with leaders navigating change
    ✔️ Comment below: Is your organisation truly ready for AI?

    53 min
  3. The Scale of Trust: Dr. Chris Cooper on AI Investment, Data Integrity, and Strategic Growth

    MAR 25

    The Scale of Trust: Dr. Chris Cooper on AI Investment, Data Integrity, and Strategic Growth

    What happens when organisations invest in AI without understanding the problem they’re trying to solve?

    In this episode of AI Trust Talks, host Emma Johnson sits down with Dr. Chris Cooper to explore one of the most overlooked challenges in artificial intelligence: Are we building AI for value — or just out of fear of missing out?

    As AI investment accelerates globally, many organisations are rushing to adopt it without clear strategy, strong data foundations, or defined business outcomes. But according to Dr. Chris, this approach is setting companies up for failure. Because AI success doesn’t start with models. It starts with data, infrastructure, and people.

    Dr. Chris brings decades of experience across telecommunications, healthcare, energy, and enterprise AI — including scaling one of the first AI companies in the Middle East to unicorn status. From early work in data systems to leading large-scale AI transformation, he offers a grounded, real-world perspective on what actually drives value in AI.

    In this conversation, he challenges the hype-driven narrative around AI and highlights a critical truth: AI is not magic — it’s simply a tool for extracting insights from data. And when that data is flawed, incomplete, or misunderstood, the outcomes will be too.

    One of the biggest risks organisations face today is investing in AI without addressing foundational issues like data quality, governance, and clear accountability. Many companies focus on building solutions before asking the most important question: What problem are we actually trying to solve?

    This episode dives deep into the execution gap between AI ambition and real-world impact — and why education, leadership, and organisational alignment are essential to closing it.
    This episode explores:
    • Why AI hype and FOMO are driving poor investment decisions
    • The importance of starting with business problems — not technology
    • Why “junk in, junk out” is still the biggest risk in AI
    • The role of data quality, infrastructure, and people in successful AI adoption
    • Why governance, compliance, and accountability cannot be an afterthought
    • How organisations can identify high-value AI use cases
    • The growing importance of AI literacy across all levels of a business

    As Dr. Chris explains, trust in AI doesn’t come from complexity or cutting-edge models. It comes from understanding your data, defining your goals, and building the right systems around it. Because ultimately, successful AI isn’t about chasing innovation. It’s about solving real problems — in the right way.

    —
    About Dr. Chris Cooper
    Dr. Chris is a data and AI leader with deep expertise across telecommunications, healthcare, energy, and enterprise technology. With a PhD background and decades of experience in data systems, analytics, and digital transformation, he has led large-scale AI initiatives and played a key role in scaling AI ventures to unicorn status in the Middle East. His work focuses on bridging the gap between advanced AI capabilities and practical business value.

    —
    About AI Trust Talks
    Hosted by Emma Johnson, AI Trust Talks explores leadership, governance, and the human systems behind responsible artificial intelligence. Through conversations with global experts, the podcast examines how organisations can deploy AI safely while maintaining innovation and trust.

    —
    If this episode resonates with you:
    ✔️ Subscribe for more conversations on AI governance and leadership
    ✔️ Share this episode with leaders building AI systems
    ✔️ Comment below: What’s the biggest blind spot you see in AI risk management today?

    43 min
  4. Reasoning, Risk and Reality: Making AI Safe in Practice | with Gary Ang

    MAR 18

    Reasoning, Risk and Reality: Making AI Safe in Practice | with Gary Ang

    In this episode of AI Trust Talks, host Emma Johnson sits down with Gary Ang to cut through the hype around AI safety and explore what actually makes AI systems trustworthy in the real world.

    Gary Ang brings a rare combination of expertise in financial regulation, model risk management, and artificial intelligence. Formerly with the Monetary Authority of Singapore (MAS), he spent years working on credit, market, and liquidity models before moving into AI risk management. His work includes leading a major review of banks’ AI model risk practices and contributing to AI risk management guidance for the financial sector. With a background that spans risk supervision, deep learning research, and AI governance, Gary offers a grounded perspective on how organisations should approach AI safety in practice.

    And his core message is simple: AI safety is not something you add at the end. It must be embedded from the beginning through testing, evaluation, governance, and accountability.

    In a world where generative AI and agentic systems are evolving rapidly, Gary argues that many organisations are still misunderstanding what makes AI safe. Too often, companies treat AI safety as a checklist or a compliance exercise — when in reality it’s an ongoing system of processes, monitoring, and organisational discipline.

    This conversation explores:
    • Why AI safety cannot be “bolted on” after development
    • The difference between benchmarks and real-world model validation
    • Why evaluation and testing are the foundation of trustworthy AI
    • The growing complexity introduced by generative and agentic AI systems
    • How AI inventories and model tracking can become a competitive advantage

    Gary also highlights one of the biggest blind spots organisations face today: not knowing where their AI actually exists within the organisation. Without proper identification, documentation, and governance of AI systems, managing risk becomes nearly impossible.
    Another key insight from the conversation is that AI risk controls should not exist in isolation. Controls such as evaluation, monitoring, explainability, and human oversight are interconnected systems, and understanding how they work together is essential for building trustworthy AI.

    Ultimately, the organisations that succeed with AI will not be those moving the fastest without structure. They will be the ones who build clear governance, disciplined validation frameworks, and strong accountability from the start. Because in the age of AI, trust is built not through hype — but through rigorous foundations.

    —
    About Gary Ang
    Gary Ang is a risk and AI governance specialist with deep expertise in model risk management and financial regulation. Formerly with the Monetary Authority of Singapore, his work spans credit and market risk modelling, AI governance frameworks, and enterprise risk management. His research background in deep learning and time-series networks gives him a unique perspective at the intersection of AI innovation and regulatory oversight.

    —
    About AI Trust Talks
    Hosted by Emma Johnson, AI Trust Talks explores leadership, governance, and the human systems behind responsible artificial intelligence. Through conversations with global experts, the podcast examines how organisations can deploy AI safely while maintaining innovation and trust.

    —
    If this episode resonates with you:
    ✔️ Subscribe for more conversations on AI governance and leadership
    ✔️ Share this episode with leaders building AI systems
    ✔️ Comment below: What’s the biggest blind spot you see in AI risk management today?

    41 min
  5. AI's "Make or Break" Moment — Myth or Milestone? | with Glen McCracken

    MAR 12

    AI's "Make or Break" Moment — Myth or Milestone? | with Glen McCracken

    In this episode of AI Trust Talks, host Emma Johnson sits down with Glen McCracken to unpack a critical question facing organisations adopting artificial intelligence: How do you actually build trust in AI?

    As AI continues to reshape industries, many organisations assume that trust comes from building more sophisticated models. But according to Glen McCracken, that assumption misses the real issue entirely. Trust in AI is not built through complexity. It is built through clarity.

    Glen McCracken is a data and transformation leader with extensive experience across fintech, health tech, and digital innovation. Originally from New Zealand and now working internationally, Glen has spent years designing and implementing AI systems across sectors — from email spam detection to large-scale digital platforms in sport and healthcare. With a background in statistics and enterprise transformation, he brings a practical, real-world perspective to how AI actually works inside organisations.

    In this conversation, Glen challenges the common narrative that AI systems “go rogue.” Instead, he highlights a simple but powerful truth: AI reflects the behaviour, incentives, and decisions of the people who build and deploy it. Rather than treating AI as an independent actor, organisations must recognise that technology is ultimately shaped by human systems, culture, and governance.

    One of the biggest barriers companies face is what Glen calls the execution gap.
    Many organisations approach AI as a purely technical project — focusing on building models — when in reality, successful AI implementation requires organisational transformation. Integrating AI often means redesigning workflows, redefining responsibilities, and shifting company culture to support data-driven decision making. Because without that foundation, even the most powerful AI system will struggle to deliver value.

    This episode explores:
    • Why AI does not “go rogue” — people and systems do
    • The human and organisational factors behind AI success
    • Why many companies fail by treating AI as a purely technical problem
    • The importance of governance, accountability, and risk management
    • How clarity in decision-making builds real trust in AI

    As Glen explains, trust does not come from understanding every technical detail of an AI model. Instead, it comes from transparency around how decisions are made, who is responsible, and how risks are managed. Just as passengers trust an aircraft without needing to understand aerodynamics, organisations can trust AI when the surrounding systems — governance, processes, and accountability — are clear. Because ultimately, trustworthy AI is not just about intelligent models. It is about the systems, people, and decisions that surround them.

    —
    About Glen McCracken
    Glen McCracken is a data strategist and AI practitioner with deep expertise in statistics, digital transformation, and enterprise technology deployment. His work spans fintech, health tech, sports technology, and large-scale digital systems, where he focuses on bridging the gap between advanced analytics and practical business outcomes.

    —
    About AI Trust Talks
    Hosted by Emma Johnson, AI Trust Talks explores leadership, governance, and the human side of artificial intelligence.
    Through conversations with global experts, the podcast examines how organisations can build responsible, trustworthy AI in a rapidly evolving technological landscape.

    —
    If this episode resonates with you:
    ✔️ Subscribe for more conversations on AI, leadership, and trust
    ✔️ Share this episode with leaders building AI-driven organisations
    ✔️ Comment below: What does trust in AI mean in your organisation?

    55 min
