AI Ethics Navigator

Kamini Govender

AI Ethics Navigator examines how emerging technologies are reshaping society, power, and everyday life. Hosted by Kamini Govender, each episode features researchers, advisors, and practitioners — including voices from UNESCO, the UN AI Advisory Body, and the Alan Turing Institute — exploring what ethics looks like in practice when technology meets society. The conversations are thoughtful and unhurried, linking ideas across disciplines and geographies and foregrounding Global South perspectives often missing from mainstream discourse. Connect with Kamini: https://www.linkedin.com/in/kamini-govender

Episodes

  1. 4 DAYS AGO

    Navigating King V (Episode 3): From Principles to Practice

    This is the final episode in the Navigating King V: AI Governance in South Africa series on AI Ethics Navigator. Episodes 1 and 2 explored the ethical foundations and board-level accountability introduced by King V, drawing on Ubuntu, impact materiality, and the non-delegable responsibility of boards when AI systems shape organisational decisions. This concluding episode addresses the remaining question boards are now facing: How is ethical AI governance actually implemented in practice?

    In this solo episode, Kamini Govender moves decisively from principle to execution, setting out how boards can translate King V’s ethical and governance expectations into workable governance mechanisms, clear accountability, and demonstrable assurance when AI systems are already embedded across organisations. Rather than focusing on theory or emerging regulation, this conversation centres on what boards must put in place now.

    In this episode, we explore:

    • Why King V shifts ethics from aspiration to demonstrable governance, and why values without mechanisms fail in practice.
    • How board accountability for AI differs from operational execution, and why “AI as an IT issue” is a governance failure.
    • The core governance structures boards need to oversee AI-driven decision-making, including ownership, visibility, decision rights, risk integration, and assurance.
    • How Ubuntu functions not only as an ethical philosophy, but as a practical governance lens shaping impact materiality, escalation thresholds, and board judgment.
    • What global AI governance practice is converging on, and why South Africa must adapt — not import — international frameworks.
    • The most common failures boards make when governing AI, and why ethics must be treated as auditable and evidence-based.

    This episode positions King V not as a compliance exercise, but as a test of whether boards can govern power, technology, and human impact with clarity and courage.

    Episode length: 28 minutes
    Series: Navigating King V: AI Governance in South Africa (Episode 3 of 3)
    Connect: https://www.linkedin.com/in/kamini-govender

    28 min
  2. 20 JAN

    Navigating King V (Episode 2): Board Responsibility and Ethical Oversight

    Episode 2 of the Navigating King V series.

    This episode continues a three-part series on AI governance in South Africa. Navigating King V: AI Governance in South Africa examines what South Africa’s updated corporate governance code means for ethics, accountability, and board oversight as AI systems increasingly shape organisational decision-making. The series explores King V not as a compliance exercise, but as a framework for ethical leadership under conditions of technological uncertainty.

    Guest: Carolynn Chalmers | CEO, Good Governance Academy

    Carolynn Chalmers is one of South Africa’s leading voices on corporate governance, ethical leadership, and board accountability. With a background spanning computer science, systems thinking, and governance practice, she works at the intersection of board oversight, global standard-setting, and real-world decision-making. Her work includes leadership roles in ISO governance standards and decades of experience advising boards on how governance frameworks translate into practical accountability, particularly as technology and AI reshape risk, responsibility, and power.

    In this episode, we explore:

    • What board responsibility and ethical oversight mean under King V as AI systems increasingly shape organisational decisions
    • Why AI is not an IT problem, but a governance and accountability challenge for boards
    • The limits of traditional risk and assurance frameworks when governing opaque, data-driven systems
    • How AI expands materiality beyond financial impact to include human dignity, fairness, and societal trust
    • The risks of adopting global AI governance frameworks without grounding them in South Africa’s governance realities
    • How boards can govern AI alongside cyber, sustainability, and enterprise risk without defaulting to checkbox compliance

    “You can delegate tasks, but you can never delegate accountability. That responsibility always sits with the board.” – Carolynn Chalmers

    Episode length: 1 hour 14 minutes
    Series: Navigating King V: AI Governance in South Africa (Episode 2 of 3)
    Connect with Kamini: https://www.linkedin.com/in/kamini-govender-942225159/

    1h 15m
  3. 13 JAN

    Navigating King V (Episode 1): Ubuntu – A King V AI Governance Imperative

    Episode 1 of the Navigating King V series.

    This episode marks the start of a new three-part series on AI governance in South Africa. Navigating King V: AI Governance in South Africa examines what South Africa’s updated corporate governance code means for ethics, accountability, and board oversight as AI systems increasingly shape organisational decision-making. The series explores King V not as a compliance exercise, but as a framework for ethical leadership in conditions of technological uncertainty.

    Guest: Dr Ntokozo Mahlangu | Head of Operational Risk, Corporate & Investment Banking, Investec

    Dr Ntokozo Mahlangu is Head of Operational Risk for Corporate & Investment Banking at Investec, where he works at the intersection of risk management, governance, and emerging technology. He holds a PhD and has extensive experience spanning operational risk, wealth and investment operations, and banking across South Africa and the UK.

    Beyond his technical expertise, Ntokozo’s work engages with how African philosophy and values can inform AI governance. His recent work explores financial inclusion through an AI lens, examining how technology can bridge South Africa's economic divides with empathy and contextual intelligence rather than simply automating exclusion at scale. He also serves on the strategic advisory board of The DaVinci Institute.

    In this episode, we explore:

    • King V as moral inflection point: Why South Africa's updated corporate governance code represents an opportunity to reclaim moral leadership amid governance fatigue and a trust deficit.
    • Ubuntu philosophy explained: What "I am because we are" means for corporate governance, and why this uniquely South African philosophy of interconnectedness, dignity, and human responsibility now appears in formal governance frameworks.
    • Beyond Western AI ethics: How Ubuntu offers a relational, community-centred alternative to individualistic frameworks of autonomy, fairness, and accountability—and what it surfaces that Western approaches miss.
    • Impact materiality in practice: How King V shifts board accountability from profit-only to societal impact, requiring boards to measure success by how decisions affect communities, not just shareholders.
    • Financial inclusion reimagined: Concrete examples of Ubuntu-centred AI design in financial services—including how AI can "see" informal economies, honour dignity in township resilience, and build credit models that recognise stokvels and spaza shops.
    • Relational risk management: How Ubuntu philosophy surfaces new categories of AI risk—risks to human dignity, social cohesion, and community trust—that conventional operational risk frameworks miss.
    • From compliance to moral courage: Why boards must move beyond checkbox governance to values-driven leadership, and what stakeholder service (not management) actually means in practice.
    • Global South contribution: How South Africa can shape global AI governance by embedding community-centred values, and why indigenous knowledge systems must be producers, not just consumers, of AI frameworks.
    • Implementation reality: Practical guidance for boards operationalising Ubuntu principles under King V—from diagnostic questions to measuring impact beyond performative metrics.
    • Education and future leaders: How universities and business schools must prepare students to carry Ubuntu principles into boardrooms through experiential learning and community engagement.

    "Ubuntu is not just about South African values. It's actually a universal human value that emphasises community, dignity, and shared humanity over a hyper-individualistic, market-driven ethos." – Dr Ntokozo Mahlangu

    Episode length: 1 hour 32 minutes
    Series: Navigating King V: AI Governance in South Africa (Episode 1 of 3)
    Connect with Kamini: https://www.linkedin.com/in/kamini-govender-942225159/

    1h 32m
  4. 19/11/2025

    Episode 8: Jasmina Byrne | Chief of Foresight and Policy, UNICEF Innocenti – Global Office of Research and Foresight

    Originally published: 19 November 2025

    Guest: Jasmina Byrne | Chief of Foresight and Policy, UNICEF Innocenti – Global Office of Research and Foresight

    With over 25 years of experience in research, policy advocacy, programme management, and humanitarian action, Jasmina Byrne leads UNICEF's work on global foresight and anticipatory policy, covering topics such as frontier technologies, governance, macroeconomics, markets, society, and the environment. She is a lead author of UNICEF's annual foresight publication, Global Outlook for Children, and co-authored UNICEF's Manifesto on Children's Data Governance. Previously, she managed UNICEF's Office of Research portfolio on children and digital technologies, child rights, and child protection.

    Topic: It Takes a Village: Governing AI for Children in the Digital Age

    In this episode, we explore:

    • Foresight as discipline: How UNICEF uses horizon scanning and scenario development to anticipate global trends and prepare for potential futures that impact children's lives.
    • AI's promise and reality in developing countries: The gap between the democratization narrative and the actual challenges of internet access, infrastructure, cost, and biased data systems.
    • Data governance in education: How EdTech companies collect and misuse children's data, with only 40% of personalized learning platforms in developing countries having data protection policies in place.
    • Power dynamics in AI: The imbalance between tech companies in high-income countries and communities in the Global South, and the risk of techno-colonialism eroding local languages and cultural identity.
    • Africa's demographic future: Why Africa's young population makes investment in local AI development, indigenous language models, and developer pipelines critical now.
    • The ecosystem approach: How protecting children requires coordinated action from parents, teachers, tech companies, policy makers, and governments—because it takes a village to raise a child in the digital age.
    • Parenting in the digital age: Practical guidance on balancing protection with autonomy, ensuring children develop digital literacy and resilience to navigate online risks.
    • Building trust in technology: The importance of governance frameworks, data literacy, and age-appropriate design before scaling AI systems that children will use.

    "It takes a village to raise a child. So when it comes to children and technology, children and AI, children and their data, we need to think about the role that various different people in their lives have to play, from parents to teachers to company leaders to government policy makers—they all are actually responsible for children's lives." – Jasmina Byrne

    Episode length: 59 minutes
    Connect: https://za.linkedin.com/in/kamini-govender-942225159

    1h 1m
  5. 11/11/2025

    Episode 7: Patrick “Paddy” Connolly | Global Responsible AI and Generative AI Research Manager | Fellow, World Economic Forum

    Originally published: 11 November 2025

    Guest: Patrick “Paddy” Connolly | Global Responsible AI and Generative AI Research Manager | Fellow, World Economic Forum

    Paddy Connolly is a Dublin-based Responsible AI and Generative AI Research Manager and a Fellow with the World Economic Forum. An electronic engineer by training, he has built his research career around implementing Responsible AI, conversational AI ethics, generative AI implementation, algorithmic fairness, and Responsible AI maturity frameworks. He has authored and co-authored multiple studies on Responsible AI, including work published in MIT Sloan Management Review. His most recent research contribution, Responsible AI in the Global Context: Maturity Model and Survey, examined over 1,000 organizations across 20 industries and 19 regions to assess Responsible AI maturity.

    Topic: Building Trust in AI Systems: A Strategic Imperative

    In this episode, we explore:

    • RAI 1.0 vs RAI 2.0: How Responsible AI must evolve from static, pre-deployment risk management to dynamic, system-level governance that addresses real-time, post-deployment risks.
    • Agentic AI and trust: Why the rise of AI agents—capable of autonomous decision-making and interaction—requires new infrastructure for persistent trust, monitoring, and accountability.
    • Governance in practice: What board-level accountability for AI means in light of frameworks such as South Africa’s new King V Code on corporate governance.
    • Global maturity findings: Insights from research showing that most organizations remain at early stages of Responsible AI implementation, with less than 1% demonstrating advanced maturity.
    • Trust as value: How trust is moving beyond compliance toward becoming a strategic enabler for scaling AI safely and effectively.
    • Human factors: The importance of multidisciplinary collaboration, behavioral science, and stakeholder involvement in mitigating bias and improving design.
    • Conversational AI ethics: The psychological and ethical challenges of increasingly human-like systems, and the risks of emotional manipulation and misplaced trust.
    • Ethics, justice, and connection: A reflective discussion on moral understanding, digital empathy, and how humanity can preserve genuine connection in an AI-mediated world.

    “You can’t rely on pre-deployment mitigation anymore. We need to build the infrastructure that allows you to know what an agent is doing, why it’s doing it, and how to fix it when it goes wrong.”

    "Thank you for inviting me to speak on your podcast. I had so much fun chatting with you, and it was great to speak with someone who cares so much about Responsible AI." – Paddy Connolly

    Episode length: 1 hour 7 minutes
    Connect with Kamini: https://www.linkedin.com/in/kamini-govender
    Subscribe: https://www.youtube.com/@AIethicsnavigator

    1h 7m
  6. 05/11/2025

    Episode 6: Dr Simon Longstaff | What Makes Us Human in the Age of AI

    Originally published: 05 November 2025

    Guest: Dr Simon Longstaff | Philosopher | Officer of the Order of Australia | Adjunct Professor, UNSW Business School | Honorary Professor, Australian National University

    Dr Simon Longstaff is a philosopher trained at Cambridge with over 34 years of experience in applied ethics. He works with CEOs, boards, and government leaders on questions of ethics, human flourishing, and what it means to make decisions that are good and right. His recent work explores AI's relationship to human nature and what distinctive aspects of being human must be preserved as artificial intelligence advances.

    Topic: What Makes Us Human in the Age of AI

    In this episode, we explore:

    • Transcending animal nature: Why humans are distinctive not because we lack instincts and desires, but because we can choose to go beyond them—staying steadfast to promises even in danger, refusing food that isn't ours even when starving, putting abstract commitments above survival imperatives
    • The analog-digital divide: How AI systems exist in a fundamentally different world than humans do, and what information or understanding might be lost when we try to capture human experience through digital systems—including insights embedded in indigenous knowledge systems that arise from direct engagement with the analog world
    • Simulation versus authenticity: The philosophical difference between an AI that can perfectly replicate a consoling touch and a human who actually understands mortality; between an AI companion that performs empathy token-by-token and a friend who genuinely feels concern—and what we risk losing if we accept simulation as equivalent to the real thing
    • Two versions of capitalism: How Adam Smith's original conception of free markets included ethical restraints, sympathy, and the requirement that markets increase the common good—versus the rapacious, power-driven capitalism that Marx criticized and that we often see today—and why choosing the former isn't inevitable but is possible
    • Who counts: How the major ethical question throughout history has been the expansion of who we recognize as having full personhood—from exclusions based on race, gender, and religion to current questions about sentient beings and even elements of the natural world in indigenous frameworks

    "The thing that worries me most is that the societies in which we live are not preparing, and certainly not being open in their preparations, for the major transition that will take place. When societies are profoundly challenged, they can easily go wrong very quickly when people get angry and frustrated and scared." – Dr Simon Longstaff

    Episode length: 58 minutes
    Connect with Kamini: https://www.linkedin.com/in/kamini-govender
    Subscribe: https://www.youtube.com/@AIethicsnavigator

    59 min
  7. 02/11/2025

    Episode 5: Dr. Andrés Domínguez Hernández | Systemic Power and Techno-Colonialism in Global AI

    Originally published: 29 Oct 2025

    Guest: Dr. Andrés Domínguez Hernández | Ethics Fellow, The Alan Turing Institute | Visiting Senior Lecturer, Queen Mary University of London

    Dr. Andrés Domínguez Hernández is an Ethics Fellow at The Alan Turing Institute and Visiting Senior Lecturer at Queen Mary University of London's Digital Environment Research Institute. With a PhD in Science and Technology Studies and a background in engineering and innovation policy, he examines power, justice, and ethics in AI and data-driven innovation. Previously a Senior Research Associate at the University of Bristol and Director of Technology Transfer at Ecuador's Ministry of Science, Technology, and Innovation, Andrés brings Global South perspectives to questions of responsible innovation. He contributed to the Council of Europe's HUDERIA methodology for human rights impact assessment and recently presented on systemic AI governance challenges at UNESCO's Global Forum on the Ethics of AI in Bangkok.

    Topic: Systemic Power and Techno-Colonialism in Global AI

    In this episode, we explore:

    • Systemic versus downstream concerns: Why current governance focuses on safety and bias at deployment while ignoring upstream issues like infrastructure control, supply chain exploitation, and industry concentration
    • Power concentration in practice: Infrastructure control as governance, corporate encroachment into public systems (Palantir and the NHS), and why countries with smaller GDPs can't effectively regulate major tech companies
    • Global South as testing ground: How risky AI applications deploy where regulation is weakest, from Worldcoin's biometric collection to educational technology harvesting children's data
    • Epistemic dominance: Foundation models embedding Western epistemologies globally, creating homogenization where similar prompts yield similar outputs regardless of cultural context
    • Hype as material force: Self-fulfilling prophecies that attract investment through claims about AGI, shaping resource allocation and governance priorities toward existential risks over present harms
    • Human rights framework: The Council of Europe's HUDERIA methodology for assessing AI across the technology lifecycle, from design through deployment, and mechanisms for redress
    • Counter-power and world-making: Examples from the Global South, including Masakhane's NLP work and Lelapa AI's small language models, and the importance of moving beyond critique to imagine alternative futures

    "When we critique technology, it's not the technology itself that we are critiquing, but the way it is organized and the way it is extracting value to favour a handful of companies around the world." – Dr. Andrés Domínguez Hernández

    Episode length: 1 hour 30 minutes
    Connect with Kamini: https://www.linkedin.com/in/kamini-govender

    1h 31m
  8. 02/11/2025

    Episode 4: Dr. Emma Schleiger | Mind the Gap: Strategic Foresight and Emerging Risks in Operationalizing Responsible AI

    Originally published: 16 Oct 2025

    Guest: Dr. Emma Schleiger | Head of AI Governance, Cadent | Lead Author, Australia's AI Ethics Principles

    Dr. Emma Schleiger leads AI governance at Cadent, specializing in aligning strategy, risk, and standards for responsible AI development and adoption. With a PhD in Clinical Neuroscience, she brings expertise in the human impact of digital technologies and in governance processes that ensure AI is safe, ethical, and compliant. As lead author of the discussion paper that informed Australia's AI Ethics Principles, Emma has shaped how organizations design, develop, and deploy AI responsibly across the healthcare, transport, energy, and agriculture sectors. After seven years as a Research Scientist at CSIRO's Data61, she now works directly with clients translating ethical principles into actionable governance practices.

    Topic: Mind the Gap: Strategic Foresight and Emerging Risks in Operationalizing Responsible AI

    In this episode, we explore:

    • Alternative pathways into AI governance: Emma's journey from clinical neuroscience to leading Australia's AI Ethics Principles, and translating high-level principles into design choices and development patterns
    • Research to consulting: Demonstrating ROI and commercial value versus societal benefits, and meeting organizations where they actually are rather than where they claim to be
    • Shadow AI risks: How much IP and sensitive data employees put into open-source models, why "don't use it" policies fail, and emerging technical solutions that redact data before it leaves computers
    • Why AI initiatives fail: Organizations fitting AI onto problems that don't need it, rushing to solutions before identifying issues, and the gap between C-suite demands and workforce readiness
    • Literacy as foundation: Building basic AI understanding across populations by meeting people without judgment and showing them they already use AI daily
    • Governance as enabler: Demonstrating that governance enables better strategic decisions and prevents wasted investment, not just compliance

    "The top culprits are trying to fit an AI solution onto a problem that isn't requiring AI. It is wanting to use the latest, greatest, shiniest, coolest toys rather than like what is best..." – Dr Emma Schleiger

    “It was such a pleasure to chat with Kamini Govender around all things AI Governance. It is always a great opportunity to be on the other side of the interview chair, especially with Kamini's warmth and curiosity.” – Dr Emma Schleiger

    Episode length: 1 hour
    Connect with Kamini: https://www.linkedin.com/in/kamini-govender

    1 hr
  9. 02/11/2025

    Episode 3: Michael L. Bąk | How to Integrate Cultural Context and Nuance, and Still Scale for Global Ethical Frameworks

    Originally published: 5 Oct 2025

    Guest: Michael L. Bąk | Policy and Digital Rights Professional | Board Member | Formerly Facebook, UN & USAID

    Michael L. Bąk is a policy and digital rights professional with over 25 years of international experience working at the intersection of technology, democracy, and human rights. He has served as a diplomat representing USAID and the United Nations, and has led public policy for Facebook in Thailand and with regional institutions. Michael is Co-Founder and Director of Sprint Public Interest, Global Advisor for the Ethical AI Alliance, and author of the (margin*notes)^squared newsletter. His work focuses on building equitable frameworks for AI governance that center voices from the global majority.

    Topic: How to integrate cultural context and nuance, and still scale for global ethical frameworks

    In this episode, we explore:

    • Sovereign knowledge ecosystems: Why the Global South must steward its own research and policy development rather than translating itself for the North—and how philanthropic funding and academic networks can support knowledge generation that influences global discourse
    • The pro-social AI framework: Moving beyond the US profit-first versus EU human rights dichotomy to embrace the pro-profit, pro-people, pro-planet, and pro-potential approach developed by academic Cornelia Walther at Sunway University Malaysia
    • Recognition as algorithmic sorting: How lists like TIME's 100 Most Influential in AI (61% American, 75% from the global north) act like algorithms that determine who gets invited to shape the conversation—and what gets left out
    • The uncomfortable middle: Why the most powerful knowledge emerges when we build bridges from both sides and meet in the space where the ground feels less solid—where diverse voices, experiences, and wisdom create breakthrough insights

    "You cannot use the master's tools to tear down the master's house." – Audre Lorde

    “I really enjoyed recording this podcast, AI Ethics Navigator, with Kamini Govender: she in South Africa and me in Thailand. We both explore AI governance outside the lanes created by Northern tech companies, governments and multilaterals to envision a new way of governing the kinds of technology we want in our lives. Always very happy to swap stories and share insights with others passionate about guiding technology that serves societies and citizens first and foremost.” – Michael L. Bąk

    Episode length: 1 hour 11 minutes
    Connect with Kamini: https://www.linkedin.com/in/kamini-govender

    1h 12m
  10. 02/11/2025

    Episode 2: Dr. Ravit Dotan | A User's Guide on How to Start with the End in Mind

    Originally published: 8 Oct 2025

    Guest: Dr Ravit Dotan | AI Ethicist | Speaker | Researcher

    Dr Ravit Dotan is a philosopher and AI ethicist named among the "100 Brilliant Women in AI Ethics" (2023) and a "Responsible AI Leader of the Year" (2025) finalist. Her work has been featured in The New York Times and on CNBC, and honored with a Distinguished Paper Award from FAccT. She is the founder and CEO of TechBetter, an organization that helps people and organizations use AI ethically to do meaningful work.

    Topic: A user's guide on how to start with the end in mind

    In this episode, we explore:

    • The "takeout approach" vs. the "chef approach": Dr Dotan critiques using AI to produce final outputs (emails, outlines, summaries) and instead advocates for using AI to create processes you go through yourself—where you remain the one thinking and deciding when work is complete
    • "Think first, prompt later": Start with what you're actually trying to achieve in your work, not with the AI tool—identify how and where technology might fit into your process rather than beginning with the technology
    • Why cognitive decline matters: The current approach of offloading mental work leads to loss of expertise and replacement anxiety—using AI for your core work (where your expertise lies) rather than peripheral tasks changes everything
    • Value-aligned system prompts: The practical technique of designing AI processes with your values and ethical guidelines built in from the start—making ethics inseparable from AI adoption rather than a separate compliance exercise

    What you'll understand after listening: A concrete framework for using AI that deepens rather than replaces your thinking—and why the hype around AI agents is finally giving way to more thoughtful adoption.

    “I had the pleasure of interviewing for Kamini Govender’s podcast, the AI Ethics Navigator. Kamini is one of the best podcast interviewers I’ve worked with, seriously. She has such great questions!” – Dr Ravit Dotan

    Episode length: 1 hour 3 minutes
    Connect with Kamini: https://www.linkedin.com/in/kamini-govender

    1h 4m
  11. 02/11/2025

    Episode 1: Dr. Emma Ruttkamp-Bloem | From UNESCO to UN – Shaping Global AI Ethics Policy

    Originally published: 1 Oct 2025

    Guest: Dr Emma Ruttkamp-Bloem | AI Ethics Researcher, Professor and Head of the Department of Philosophy at the University of Pretoria, Chair of UNESCO's World Commission on the Ethics of Scientific Knowledge and Technology, Former Member of the UN AI Advisory Body

    Topic: From UNESCO to UN – Shaping Global AI Ethics Policy

    Getting 193 countries to agree on anything is nearly impossible. Prof Emma did it for AI ethics. She chaired the UNESCO Ad Hoc Expert Group that drafted the UNESCO Recommendation on the Ethics of AI—the first global normative instrument on AI ethics—adopted by all 193 Member States in 2021.

    In this episode, we explore:

    • The reality of international AI governance: What it takes to build consensus across 193 countries with different values, interests, and stages of technological readiness—and the critical role of epistemic justice in recognizing every country as a credible contributor
    • Why ethics isn't just about principles: Prof Emma explains ethics as a dynamic reasoning system, not a checklist, and why lists of AI principles are "completely useless" without translation into action and procedural regulation
    • The implementation gap: Why getting countries to sign on is just the beginning, and what's needed to bridge the distance between international agreements and real-world impact—including her view that we need both top-down hard legislation with serious financial consequences and bottom-up community-driven approaches
    • Her urgent warning about pervasiveness and manipulation: "Protect your right to think for yourself… out-think the business model." Prof Emma discusses why AI's embeddedness in daily life is one of the biggest threats we face, how it affects our ability to determine what facts are, and why mental integrity and authentic decision-making are at risk

    What you'll understand after listening: How international AI policy actually gets made—not the idealized version, but the real negotiations, trade-offs, and ongoing work of translating agreement into action.

    Episode length: 1 hour 12 minutes
    Connect with Kamini: https://www.linkedin.com/in/kamini-govender

    1h 12m
