Artificiality: Being with AI

Helen and Dave Edwards

Artificiality was founded in 2019 to help people make sense of artificial intelligence. We are artificial philosophers and meta-researchers. We believe that understanding AI requires synthesizing research across disciplines: behavioral economics, cognitive science, complexity science, computer science, decision science, design, neuroscience, philosophy, and psychology. We publish essays, podcasts, and research on AI, and offer a Pro membership that provides leaders with actionable intelligence and insights for applying AI. Learn more at www.artificiality.world.

  1. 6 days ago

    De Kai: Raising AI

    In this conversation, we explore how humans can better navigate the AI era with De Kai, a pioneering researcher who built the web's first machine translation systems and whose work spawned Google Translate. Drawing on four decades of AI research experience, De Kai offers a different framework for understanding our relationship with artificial intelligence, moving beyond outdated metaphors toward more constructive approaches. De Kai's perspective was shaped by observing how AI technologies are being deployed in ways that decrease rather than increase human understanding. While AI has tremendous potential to help people communicate across cultural and linguistic differences—as his translation work demonstrated—current implementations often amplify polarization and misunderstanding instead.

    Key themes we explore:

    - Beyond Machine Metaphors: Why thinking of AI as "tools" or "machines" is dangerously outdated—AI systems are fundamentally artificial psychological entities that learn, adapt, and influence human behavior in ways no coffee maker ever could
    - The Parenting Framework: De Kai's central insight that we're all currently "parenting" roughly 100 artificial intelligences daily through our smartphones, tablets, and devices—AIs that are watching, learning, and imitating our attitudes, behaviors, and belief systems
    - System One vs. System Two Intelligence: How current large language models operate primarily through "artificial autism"—brilliant pattern matching without the reflective, critical thinking capacities that characterize mature human intelligence
    - Translation as Understanding: Moving beyond simple language translation toward what De Kai calls a "translation mindset"—using AI to help humans understand different cultural framings and perspectives rather than enforcing singular universal truths
    - The Reframing Superpower: How AI's capacity for rapid perspective-shifting and metaphorical reasoning represents one of humanity's best hopes for breaking out of polarized narratives and finding common ground
    - Social Fabric Transformation: Understanding how 800 billion artificial minds embedded in our social networks are already reshaping how cultures and civilizations evolve—often in ways that decrease rather than increase mutual understanding

    Drawing on insights from developmental psychology and complex systems, De Kai's "Raising AI" framework emphasizes conscious human responsibility in shaping how these artificial minds develop. Rather than viewing this as an overwhelming burden, he frames it as an opportunity for humans to become more intentional about the values and behaviors they model—both for AI systems and for each other.

    About De Kai: De Kai is Professor of Computer Science and Engineering at HKUST and Distinguished Research Scholar at Berkeley’s International Computer Science Institute. He is Independent Director of AI ethics think tank The Future Society, and was one of eight inaugural members of Google’s AI ethics council. De Kai invented and built the world’s first global-scale online language translator that spawned Google Translate, Yahoo Translate, and Microsoft Bing Translator.
For his pioneering contributions in AI, natural language processing, and machine learning, De Kai was honored by the Association for Computational Linguistics as one of only seventeen Founding Fellows and by Debrett’s as one of the 100 most influential figures of Hong Kong.

    55 min.
  2. 21 Aug.

    Christine Rosen: The Extinction of Experience

    In this conversation, we explore the shifts in human experience with Christine Rosen, senior fellow at the American Enterprise Institute and author of "The Extinction of Experience: Being Human in a Disembodied World." As a member of the "hybrid generation" of Gen X, Christine (like us) brings the perspective of having lived through the transition from an analog to a digital world and witnessed firsthand what we've gained and lost in the process. Christine frames our current moment through the lens of what naturalist Robert Michael Pyle called "the extinction of experience"—the idea that when something disappears from our environment, subsequent generations don't even know to mourn its absence. Drawing on over 20 years of studying technology's impact on human behavior, she argues that we're experiencing a mass migration from direct to mediated experience, often without recognizing the qualitative differences between them.

    Key themes we explore:

    - The Archaeology of Lost Skills: How the abandonment of handwriting reveals the broader pattern of discarding embodied cognition—the physical practices that shape how we think, remember, and process the world around us
    - Mediation as Default: Why our increasing reliance on screens to understand experience is fundamentally different from direct engagement, and how this shift affects our ability to read emotions, tolerate friction, and navigate uncomfortable social situations
    - The Machine Logic of Relationships: How technology companies treat our emotions "like the law used to treat wives as property"—as something to be controlled, optimized, and made efficient rather than experienced in their full complexity
    - Embodied Resistance: Why skills like cursive handwriting, face-to-face conversation, and the ability to sit with uncomfortable emotions aren't nostalgic indulgences but essential human capacities that require active preservation
    - The Keyboard Metaphor: How our technological interfaces—with their control buttons, delete keys, and escape commands—are reshaping our expectations for human relationships and emotional experiences

    Christine challenges the Silicon Valley orthodoxy that frames every technological advancement as inevitable progress, instead advocating for what she calls "defending the human." This isn't a Luddite rejection of technology but a call for conscious choice about what we preserve, what we abandon, and what we allow machines to optimize out of existence. The conversation reveals how seemingly small decisions—choosing to handwrite a letter, putting phones in the center of the table during dinner, or learning to read cursive—become acts of resistance against a broader cultural shift toward treating humans as inefficient machines in need of optimization. As Christine observes, we're creating a world where the people designing our technological future live with "human nannies and human tutors and human massage therapists" while prescribing AI substitutes for everyone else. What emerges is both a warning and a manifesto: that preserving human experience requires actively choosing friction, inefficiency, and the irreducible messiness of being embodied creatures in a physical world. Christine's work serves as an essential field guide for navigating the tension between technological capability and human flourishing—showing us how to embrace useful innovations while defending the experiences that make us most fully human.

    About Christine Rosen: Christine Rosen is a senior fellow at the American Enterprise Institute, where she focuses on the intersection of technology, culture, and society. She was previously the managing editor of The New Republic and founding editor of The Hedgehog Review, and her writing has appeared in The Atlantic, The New York Times, The Wall Street Journal, and numerous other publications. "The Extinction of Experience" represents over two decades of research into how digital technologies are reshaping human behavior and social relationships.

    55 min.
  3. 16 Aug.

    Beth Rudden: AI, Trust, and Bast AI

    Join Beth Rudden at the Artificiality Summit in Bend, Oregon—October 23-25, 2025—to imagine a meaningful life with synthetic intelligence for me, we, and us. Learn more here: www.artificialityinstitute.org/summit

    In this thought-provoking conversation, we explore the intersection of archaeological thinking and artificial intelligence with Beth Rudden, former IBM Distinguished Engineer and CEO of Bast AI. Beth brings a unique interdisciplinary perspective—combining her training as an archaeologist with over 20 years of enterprise AI experience—to challenge fundamental assumptions about how we build and deploy artificial intelligence systems. Beth describes her work as creating "the trust layer for civilization," arguing that current AI systems reflect what Hannah Arendt called the "banality of evil"—not malicious intent, but thoughtlessness embedded at scale. As she puts it, "AI is an excavation tool, not a villain," surfacing patterns and biases that humanity has already normalized in our data and language.

    Key themes we explore:

    - Archaeological AI: How treating AI as an excavation tool reveals embedded human thoughtlessness, and why scraping random internet data fundamentally misunderstands the nature of knowledge and context
    - Ontological Scaffolding: Beth's approach to building AI systems using formal knowledge graphs and ontologies—giving AI the scaffolding to understand context rather than relying on statistical pattern matching divorced from meaning
    - Data Sovereignty in Healthcare: A detailed exploration of Bast AI's platform for explainable healthcare AI, where patients control their data and can trace every decision back to its source—from emergency logistics to clinical communication
    - The Economics of Expertise: Moving beyond the "humans as resources" paradigm to imagine economic models that compete to support and amplify human expertise rather than eliminate it
    - Embodied Knowledge and Community: Why certain forms of knowledge—surgical skill, caregiving, craftsmanship—are irreducibly embodied, and how AI should scale this expertise rather than replace it
    - Hopeful Rage: Beth's vision for reclaiming humanist spaces and community healing as essential infrastructure for navigating technological transformation

    Beth challenges the dominant narrative that AI will simply replace human workers, instead proposing systems designed to "augment and amplify human expertise." Her work at Bast AI demonstrates how explainable AI can maintain full provenance and transparency while reducing cognitive load—allowing healthcare providers to spend more time truly listening to patients rather than wrestling with bureaucratic systems. The conversation reveals how archaeological thinking—with its attention to context, layers of meaning, and long-term patterns—offers essential insights for building trustworthy AI systems. As Beth notes, "You can fake reading. You cannot fake swimming"—certain forms of embodied knowledge remain irreplaceable and should be the foundation for human-AI collaboration.
About Beth Rudden: Beth Rudden is CEO and Chairwoman of Bast AI, building explainable artificial intelligence systems with full provenance and data sovereignty. A former IBM Distinguished Engineer and Chief Data Officer, she's been recognized as one of the 100 most brilliant leaders in AI Ethics. With her background spanning archaeology, cognitive science, and decades of enterprise AI development, Beth offers a grounded perspective on technology that serves human flourishing rather than replacing it. This interview was recorded as part of the lead-up to the Artificiality Summit 2025 (October 23-25 in Bend, Oregon), where Beth will be speaking about the future of trustworthy AI.

    37 min.
  4. 3 Aug.

    Steve Sloman: Information to Bits at the Artificiality Summit 2024

    At the Artificiality Summit in October 2024, Steve Sloman, professor at Brown University and author of The Knowledge Illusion and The Cost of Conviction, catalyzed a conversation about how we perceive knowledge in ourselves, others, and now in machines. What happens when our collective knowledge includes a community of machines? Steve challenged us to think about the dynamics of knowledge and understanding in an AI-driven world and about the evolving landscape of narratives, and to ask: can AI make us believe in the ways that humans make us believe? What would it take for AI to construct a compelling ideology and belief system that humans would want to follow? Bio: Steven Sloman has taught at Brown since 1992. He studies higher-level cognition. He is a Fellow of the Cognitive Science Society, the Society of Experimental Psychologists, the American Psychological Society, the Eastern Psychological Association, and the Psychonomic Society. Along with scientific papers and editorials, his published work includes a 2005 book Causal Models: How We Think about the World and Its Alternatives, a 2017 book The Knowledge Illusion: Why We Never Think Alone co-authored with Phil Fernbach, and the forthcoming Righteousness: How Humans Decide from MIT Press. He has been Editor-in-Chief of the journal Cognition, Chair of the Brown University faculty, and created Brown’s concentration in Behavioral Decision Sciences.

    35 min.
  5. 27 Jul.

    Jamer Hunt on the Power of Scale

    At the Artificiality Summit 2024, Jamer Hunt, professor at the Parsons School of Design and author of Not to Scale, catalyzed our opening discussion on the concept of scale. This session explored how different scales—whether individual, organizational, community, societal, or even temporal—shape our perspectives and influence the design of AI systems. By examining the impact of scale on context and constraints, Jamer guided us to a clearer understanding of the appropriate levels at which we can envision and build a hopeful future with AI. This interactive session set the stage for a thought-provoking conference. Bio: Jamer Hunt collaboratively designs open and adaptable frameworks for participation that respond to emergent cultural conditions—in education, organizations, exhibitions, and for the public. He is the Vice Provost for Transdisciplinary Initiatives at The New School (2016-present), where he was founding director of the graduate program in Transdisciplinary Design at Parsons School of Design (2009-2015). He is the author of Not to Scale: How the Small Becomes Large, the Large Becomes Unthinkable, and the Unthinkable Becomes Possible (Grand Central Publishing, March 2020), a book that repositions scale as a practice-based framework for analyzing broken systems and navigating complexity. He has published over twenty articles on the poetics and politics of design, including for Fast Company and the Huffington Post, and he is co-author, with Meredith Davis, of Visual Communication Design (Bloomsbury, 2017).

    42 min.
  6. 12 Jul.

    Avriel Epps: Teaching Kids About AI Bias

    In this conversation, we explore AI bias, transformative justice, and the future of technology with Dr. Avriel Epps, computational social scientist, Civic Science Postdoctoral Fellow at Cornell University's CATLab, and co-founder of AI for Abolition. What makes this conversation unique is how it begins with Avriel's recently published children's book, A Kids Book About AI Bias (Penguin Random House), designed for ages 5-9. As an accomplished researcher with a PhD from Harvard and expertise in how algorithmic systems impact identity development, Avriel has taken on the remarkable challenge of translating complex technical concepts about AI bias into accessible language for the youngest learners.

    Key themes we explore:

    - The Translation Challenge: How to distill graduate-level research on algorithmic bias into concepts a six-year-old can understand—and why kids' unfiltered responses to AI bias reveal truths adults often struggle to articulate
    - Critical Digital Literacy: Why building awareness of AI bias early can serve as a protective mechanism for young people who will be most vulnerable to these systems
    - AI for Abolition: Avriel's nonprofit work building community power around AI, including developing open-source tools like "Repair" for transformative and restorative justice practitioners
    - The Incentive Problem: Why the fundamental issue isn't the technology itself, but the economic structures driving AI development—and how communities might reclaim agency over systems built from their own data
    - Generational Perspectives: How different generations approach digital activism, from Gen Z's innovative but potentially ephemeral protest methods to what Gen Alpha might bring to technological resistance

    Throughout our conversation, Avriel demonstrates how critical analysis of technology can coexist with practical hope.
    Her work embodies the belief that while AI currently reinforces existing inequalities, it doesn't have to—if we can change who controls its development and deployment. The conversation concludes with Avriel's ongoing research into how algorithmic systems shaped public discourse around major social and political events, and their vision for "small tech" solutions that serve communities rather than extracting from them. For anyone interested in AI ethics, youth development, or the intersection of technology and social justice, this conversation offers both rigorous analysis and genuine optimism about what's possible when we center equity in technological development.

    About Dr. Avriel Epps: Dr. Avriel Epps (she/they) is a computational social scientist and a Civic Science Postdoctoral Fellow at the Cornell University CATLab. She completed her Ph.D. at Harvard University in Education with a concentration in Human Development. She also holds an S.M. in Data Science from Harvard’s School of Engineering and Applied Sciences and a B.A. in Communication Studies from UCLA. Previously a Ford Foundation predoctoral fellow, Avriel is currently a Fellow at The National Center on Race and Digital Justice, a Roddenberry Fellow, and a Public Voices Fellow on Technology in the Public Interest with the Op-Ed Project in partnership with the MacArthur Foundation. Avriel is also the co-founder of AI4Abolition, a community organization dedicated to increasing AI literacy in marginalized communities and building community power with and around data-driven technologies. Avriel has been invited to speak at various venues including tech giants like Google and TikTok, and for The U.S. Courts, focusing on algorithmic bias and fairness. In the Fall of 2025, she will begin her tenure as Assistant Professor of Fair and Responsible Data Science at Rutgers University.

    Links:
    - Dr. Epps' official website: https://www.avrielepps.com
    - AI for Abolition: https://www.ai4.org
    - A Kids Book About AI Bias details: https://www.avrielepps.com/book

    51 min.
Rated 5 out of 5 (9 ratings)

