In this episode of AI Trust Talks, host Emma Johnson sits down with Glen McCracken to unpack a critical question facing organisations adopting artificial intelligence: How do you actually build trust in AI?

As AI continues to reshape industries, many organisations assume that trust comes from building more sophisticated models. But according to Glen McCracken, that assumption misses the real issue entirely. Trust in AI is not built through complexity. It is built through clarity.

Glen McCracken is a data and transformation leader with extensive experience across fintech, health tech, and digital innovation. Originally from New Zealand and now working internationally, Glen has spent years designing and implementing AI systems across sectors — from email spam detection to large-scale digital platforms in sport and healthcare. With a background in statistics and enterprise transformation, he brings a practical, real-world perspective to how AI actually works inside organisations.

In this conversation, Glen challenges the common narrative that AI systems “go rogue.” Instead, he highlights a simple but powerful truth: AI reflects the behaviour, incentives, and decisions of the people who build and deploy it. Rather than treating AI as an independent actor, organisations must recognise that technology is ultimately shaped by human systems, culture, and governance.

One of the biggest barriers companies face is what Glen calls the execution gap.
Many organisations approach AI as a purely technical project — focusing on building models — when in reality, successful AI implementation requires organisational transformation. Integrating AI often means redesigning workflows, redefining responsibilities, and shifting company culture to support data-driven decision making. Because without that foundation, even the most powerful AI system will struggle to deliver value.

This episode explores:

✔️ Why AI does not “go rogue” — people and systems do
✔️ The human and organisational factors behind AI success
✔️ Why many companies fail by treating AI as a purely technical problem
✔️ The importance of governance, accountability, and risk management
✔️ How clarity in decision-making builds real trust in AI

As Glen explains, trust does not come from understanding every technical detail of an AI model. Instead, it comes from transparency around how decisions are made, who is responsible, and how risks are managed. Just as passengers trust an aircraft without needing to understand aerodynamics, organisations can trust AI when the surrounding systems — governance, processes, and accountability — are clear.

Because ultimately, trustworthy AI is not just about intelligent models. It is about the systems, people, and decisions that surround them.

—

About Glen McCracken

Glen McCracken is a data strategist and AI practitioner with deep expertise in statistics, digital transformation, and enterprise technology deployment. His work spans fintech, health tech, sports technology, and large-scale digital systems, where he focuses on bridging the gap between advanced analytics and practical business outcomes.

—

About AI Trust Talks

Hosted by Emma Johnson, AI Trust Talks explores leadership, governance, and the human side of artificial intelligence.
Through conversations with global experts, the podcast examines how organisations can build responsible, trustworthy AI in a rapidly evolving technological landscape.

—

If this episode resonates with you:

✔️ Subscribe for more conversations on AI, leadership, and trust
✔️ Share this episode with leaders building AI-driven organisations
✔️ Comment below: What does trust in AI mean in your organisation?