CHATROG

Roger Steare

Philosopher Roger Steare explores the ethics and dangers behind artificial intelligence with key industry figures and thinkers.

  1. 6 MAR

    How AI’s Hidden Ethical Pitfalls Threaten Our Future Humanity

    Most of us are living through the most rapid technological shift in human history, yet many of us wonder whether anything has truly changed. AI's astonishing capabilities are transforming everything from the study of ancient texts to critical healthcare breakthroughs, and now, in 2025, AI is moving from suggestion to autonomous action in workplaces worldwide. But beneath the headlines lies a profound, overlooked paradox: while AI promises productivity and innovation, it often leaves us feeling less connected, more exposed, and unprepared for the moral questions it raises.

    In this episode of CHATROG, Roger Steare unpacks the real ethical challenges shaping our future with AI: bias embedded in data, pervasive surveillance, job displacement, and moral responsibility for consequential decisions. You'll discover how AI can deepen inequality and erode trust, and why organisations must ask: are we using AI to empower people, or just to monitor them? Through powerful frameworks rooted in Love, Logic, and Law, Roger guides you in navigating these dilemmas with clarity and courage.

    We break down concrete tactics for ethical decision-making: questions you can ask today to ensure AI serves humanity rather than harming it. Learn why superficial compliance isn't enough, and how embracing moral reasoning can help you build organisations and systems that are consciously responsible. Whether you're a leader, an employee, or a tech enthusiast, this episode reveals how we can harness AI's promise while safeguarding our most human values.

    This isn't just about understanding AI; it's about shaping it with integrity. Perfect for anyone concerned about the future of work, ethics in technology, or the moral choices we face in this new era. Join us to explore the urgent questions that will define our collective future, because what AI should do is every bit as important as what it can do.

    15 min
  2. 6 FEB

    The Hidden Danger of Over-Reliance on AI: Losing Our Moral Muscles with Alexandra Konoplyanik

    Most companies are rushing to integrate AI without fully grasping what it means for human agency and responsibility. Alexandra Konoplyanik challenges us to rethink the role of judgment, ethics, and responsibility in the age of intelligent machines, revealing why outsourcing decisions can erode our moral muscles, and how to prevent it.

    In this thought-provoking conversation, Alexandra, a practical philosopher with a background in business, dismantles myths about AI as a moral agent. She explains why AI cannot, and perhaps should not, exercise responsibility the way humans do. You'll discover how intelligence truly encompasses judgment informed by values, and why embodied consciousness and relational experience remain uniquely human. She argues that AI should be seen as a set of augmented-thinking tools, supporting us rather than replacing us, especially in high-stakes decisions affecting lives and livelihoods.

    We break down how to balance automation with human oversight, from healthcare to finance. Alexandra shares practical frameworks for nurturing responsible AI use, emphasising transparency, accountability, and the need to guard our moral muscles against laziness. She unpacks the role of philosophy in crafting better prompts, asking better questions, and fostering ethical decision-making in organisations overwhelmed with information and choices.

    Why does neglecting human judgment threaten societal trust? Because when accountability is outsourced, responsibility becomes murky, risking erosion of the social fabric. Alexandra warns that over-reliance on AI for routine or critical decisions may dull our moral instincts, but with the right safeguards, education, and governance, we can leverage AI's power without sacrificing responsibility.

    Perfect for leaders, strategists, and ethically minded professionals, this episode offers concrete insights on navigating AI's promise while preserving our human ability to judge ethically and act responsibly. If you're committed to deploying AI wisely, and to avoiding the dangerous slide into moral laziness, this is essential listening.

    Alexandra Konoplyanik (http://alexandrakonoplyanik.com/) is a practical philosopher who works with organisations on responsibility and ethical judgment, blending her business background with deep philosophical insight to foster responsible AI integration.

    Tune in to challenge assumptions, sharpen your judgment, and learn how to equip your organisation for an ethical AI future that truly supports human agency, and keeps our moral muscles strong.

    41 min