Starkly

Ni'coel Stark

Rooted in 4IR & Society 5.0's human-centered frame, Starkly trains the musculature of analysis and the capacity that decision-making requires. Each episode slows automated thinking so society & technology serve, not steer, our judgment. For future leaders & the promiscuously curious, Ni'coel maps nuance, surfaces tacit and embodied data, and tests assumptions. She advances a pedagogy of Human Decision Intelligence, integrating skills & addressing capacity in practice. Unlearn tidy myths and make exacting decisions that bridge human & machine intelligence. https://humandecisionintelligence.com

  1. 12/30/2025

    Artificial Wisdom

    Technology is neither salvation nor threat on its own; it is an amplifier. Artificial intelligence can imitate language, pattern, and human systems, but imitation is not understanding, and acceleration is not discernment. In this episode, Ni’coel is joined by responsible-AI and international data-science leader Ricardo Baeza-Yates to distinguish wisdom as Human Decision Intelligence: the capacity for reflective judgment in conditions of uncertainty, plurality, and consequence. They explore how contemporary tools often remove the very conditions humans need to grow. They look at the rising tendency to outsource judgment, the difference between alignment and integrity, the risks of proxy data, and the widening divide between humans who develop sapience and those who surrender agency to machines. 🔸Explore the global learning hub: humandecisionintelligence.com 🔸Cohost: Ricardo Baeza-Yates is a part-time WASP Professor at KTH Royal Institute of Technology in Stockholm, as well as a part-time professor in the departments of Engineering at Universitat Pompeu Fabra in Barcelona and Computing Science at the University of Chile in Santiago. He was Director of Research at the Institute for Experiential AI of Northeastern University (2021-25) and VP of Research at Yahoo Labs (2006-16). He is a member of the AI Technology Policy committees of GPAI/OECD, ACM, and IEEE. He is co-author of the best-selling textbook Modern Information Retrieval, which won the ASIST 2012 Book of the Year award. He has won national scientific awards in Chile (2024) and Spain (2018). He holds a Ph.D. in CS, and his areas of expertise are responsible AI, web search and data mining, plus data science and algorithms in general.

    36 min
  2. 12/15/2025

    Moral Imagination

    Ni’coel and Minh Do explore moral imagination as a core sapient capacity, one that expands the option set before optimization. The episode frames moral imagination as discernment plus analogical reasoning, functioning as a pre-decision engine within Human Decision Intelligence. Moral imagination enables people to anticipate harm, identify outcomes worth scaling, and resist the drift toward automated thinking. Ni’coel points out how contemporary systems reward speed over reflection, measurable metrics over meaning, and selection over invention, leaving society to efficiently optimize the wrong things. The episode argues that cultivating moral imagination is no longer optional: it’s one of the essential antidotes to mental and emotional atrophy in an accelerating machine-driven era. 🔸Explore the global learning hub: humandecisionintelligence.com 🔸Cohost: Minh Do is an entrepreneur, filmmaker, and speaker. He is the co-founder of Machine Cinema, a collective focused on AI and emerging tech in film and art, and of Fantastic Day, where he is head of AI, working with musicians, brands, and filmmakers to produce AI content. Minh is also a producer at Fairground.tv, an AI FAST channel with the goal of producing a 24/7 slate of AI content to distribute globally. Drawing from his diverse background as a former VC, journalist, musician, and teacher, Minh is curious about how AI will transform entertainment, how AI will challenge our understanding of consciousness, and, in particular, where Zen Buddhism and AI intersect. Minh is in Creator Partner Programs for Google Labs, Sora, ChatGPT 4o Image, Pika, Luma Dream Machine, Quander, and more, allowing him to play, teach, and showcase the cutting edge of AI image and video generation.

    30 min
  3. 09/28/2025

    Starkly: Conversations in Human Decision Intelligence — Intro

    Starkly is a conversation series in Human Decision Intelligence. We slow automated thinking so society and technology serve, not steer, human judgment.

    Context. Now, in the Fourth Industrial Revolution, society outsources not only tasks but judgment. Certain social and technological conditions quietly degrade thinking and decision-making. Starkly exists to slow automated thinking so human judgment leads.

    What HDI is. Ni'coel's pedagogy and framework: not therapy, not productivity hacks, not change-management.

    What breaks today. Legacy KPIs/OKRs optimize machine-legible outputs (throughput/compliance). Forcing sapience into those yardsticks too early creates category error and brittleness.

    What we practice.
    - Discernment (precision of perception)
    - Moral imagination (future consequences with human context)
    - Xenopathy (increasing tolerance for our ignorance and anxiety around the other)
    - Metacognition (watching how we think while we think)
    - Analogical reasoning (fit by ontology, not label)
    - Foresight (long-horizon consequence scanning)
    - Working in liminality and existential math (holding uncertainty and doing advanced inter-domain analysis)
    - Cathedral thinking (decisions that compound across long horizons)
    - Responsive tempo (speed calibrated to reality, not dashboards)
    - Spectatorship → Participation (agency recovery)

    How outcomes change. We repattern perception and decision pipelines so good choices become native under pressure (anxiety). We reduce projection errors, shorten repair cycles, and improve long-horizon bet quality.

    Human-legible indicators we track:
    - Decision latency (from reflex to optimal)
    - Rework/repair cycles (cycle count and depth)
    - Projection error rate (as surfaced in post-debriefs)
    - Ambiguity tolerance (measured in anxiety levels)
    - Relational repair rate (conflict → closure cadence)

    Principle: stabilize in humans before any machine instrumentation.

    Explore the global learning hub: humandecisionintelligence.com

    5 min
  4. 09/21/2025

    The Problem with Empathy

    Ni’coel and Cephra pry open the cultural certainty around empathy and propose a more intelligent starting point: xenopathy. Rather than projecting sameness (“I know how you feel because I’d feel that way too if I were in your shoes”), xenopathy begins by acknowledging radical difference, and by tolerating the ignorance and anxiety that difference evokes. Empathy, Ni'coel argues, often collapses distinction into relatability, inviting projection, bias, and even performative concern. Xenopathy reframes the task as ethical care without the precondition of identification, imagination, or love. They test concrete cases (grief, gender, race, organizational life, DEI) and return to Human Decision Intelligence's core: skill has less use without capacity. By changing the language we start with, we change the decisions we make, trading tidy heuristics for curiosity, responsibility, and more accurate care across real asymmetries of risk and experience. 🔸Explore the global learning hub: humandecisionintelligence.com 🔸Cohost: Cephra Stuart is a multidisciplinary storyteller, writer, director, actor, and singer. At Mansa and the Walt Disney Studios, she worked on audience research, content strategy, and DEI analytics to champion inclusive storytelling. Earlier in her career, she also supported culture and engagement initiatives at Bumble and Twitter, helping shape the internal environments and strategy behind some of today’s most influential platforms.

    29 min


Ratings & Reviews

5 out of 5 (3 Ratings)
