The Behavioral Design Podcast

Samuel Salzer and Aline Holzwarth

How can we change behavior in practice? What role does AI have to play in behavioral design? Listen in as hosts Samuel Salzer and Aline Holzwarth speak with leading experts on all things behavioral science, AI, design, and beyond. The Behavioral Design Podcast from Habit Weekly and Nuance Behavior provides a fun and engaging way to learn about applied behavioral science and how to design for behavior change in practice. The latest season explores the fascinating intersection of Behavioral Design and AI. Subscribe and follow! For questions or to get in touch, email podcast@habitweekly.com.

  1. Season 4 Recap: Meet Our AI Co-Hosts

    JUL 9

    Season 4 Recap: Meet Our AI Co-Hosts

    Season 4 Recap: Can AI Capture a Whole Season?

    In this special recap episode of the Behavioral Design Podcast, hosts Aline and Samuel reflect on the ambitious arc of Season 4: our deep dive into the intersection of behavioral science and artificial intelligence. From empathic chatbots to algorithmic sameness, AI co-therapists to synthetic friendships, we explored how AI is reshaping human behavior, relationships, and decision-making.

    But here’s the twist: the second half of this episode isn’t hosted by us. It’s AI. Using transcripts from every conversation this season, we asked our AI co-hosts to generate a narrated summary of the biggest ideas and themes that emerged across episodes. Can AI recap a whole season better than we can? Is this the beginning of our own replacement?

    Along the way, we revisit:
    - How AI is changing the emotional landscape of our lives
    - Why automation and personalization are both liberating and limiting
    - What happens when algorithms replace, not just supplement, human judgment
    - The ethical fault lines of psychological targeting, autonomy, and consent
    - Whether behavioral science can keep up with the pace and power of AI

    If you missed any episodes or want a distilled tour of the season, this is the one to listen to. Next up: our Season Finale, featuring Aline and Samuel’s most controversial takes on AI 🍿

    --

    Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro

    46 min
  2. Productivity and AI with Oliver Burkeman

    JUN 11

    Productivity and AI with Oliver Burkeman

    Productivity in the Age of AI with Oliver Burkeman

    In this episode of the Behavioral Design Podcast, hosts Aline and Samuel are joined by Oliver Burkeman, journalist and bestselling author of Four Thousand Weeks, to explore what it means to live and work meaningfully in an era of accelerating AI. Together, they examine how AI tools are reshaping our relationship with time, focus, and control, from email-writing assistants to algorithmic scheduling and optimization. Oliver shares his thoughts on how these technologies, while promising to save us time, often pull us deeper into compulsive productivity loops and distract us from the deeper questions: What are we optimizing for? And what does it mean to spend our time well?

    The conversation covers:
    - The seduction of infinite optionality and why AI might make it worse
    - Whether AI-generated outputs dull our creative instincts or free them
    - Why doing fewer things might become even more important in the AI era
    - The psychological cost of outsourcing decisions to machines
    - How behavioral science can help people reclaim agency and meaning in a world of hyper-efficiency

    This episode is a must-listen for anyone navigating the tension between automation and intention, especially those wondering how to stay human in the loop.

    --

    Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro

    1h 4m
  3. AI Therapy with Alison Cerezo

    MAY 28

    AI Therapy with Alison Cerezo

    AI Co-Therapists with Alison Cerezo

    In this episode of the Behavioral Design Podcast, hosts Aline and Samuel talk with Dr. Alison Cerezo, a clinical psychologist, professor, and Senior Vice President of Research at Mpathic, a company developing AI tools that support therapists in delivering more empathetic and precise care. They explore the growing role of AI in mental health, from real-time feedback during therapy sessions to tools that help clinicians detect risk, stay aligned with best practices, and reduce bias. Alison describes how Mpathic works as a co-therapist, supporting rather than replacing the human element of therapy.

    The conversation also digs into larger questions: Can AI feel more empathetic than humans? How do we avoid over-reliance on machines for emotional support? And what does it really mean to design AI that complements rather than competes with people?

    This episode is a must-listen for anyone interested in the future of therapy, empathy, and AI, and what it looks like to build systems that enhance human care, not undermine it.

    --

    Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro

    52 min
  4. Empathy and AI with Michael Inzlicht

    MAY 15

    Empathy and AI with Michael Inzlicht

    Empathic Machines with Michael Inzlicht

    In this episode of the Behavioral Design Podcast, hosts Aline and Samuel are joined by Michael Inzlicht, professor of psychology at the University of Toronto and co-host of the podcast Two Psychologists Four Beers. Together, they explore the surprisingly effortful nature of empathy, and what happens when artificial intelligence starts doing it better than we do.

    Michael shares insights from his research into empathic AI, including findings that people often rate AI-generated empathy as more thoughtful, emotionally satisfying, and effortful than human responses, yet still prefer to receive empathy from a human. They unpack the paradox behind this preference, what it tells us about trust and connection, and whether relying on AI for emotional support could deskill us over time.

    This conversation is essential listening for anyone interested in the intersection of psychology, emotion, and emerging AI tools, especially as machines get better at sounding like they care.

    --

    Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro

    1h 4m
  5. Building Moral AI with Jana Schaich Borg

    MAY 1

    Building Moral AI with Jana Schaich Borg

    How Do You Build a Moral AI? with Jana Schaich Borg

    In this episode of the Behavioral Design Podcast, hosts Aline and Samuel are joined by Jana Schaich Borg, Associate Research Professor at Duke University and co-author of the book “Moral AI and How We Get There”. Together they explore one of the thorniest and most important questions in the AI age: How do you encode human morality into machines, and should you even try?

    Drawing from neuroscience, philosophy, and machine learning, Jana walks us through bottom-up and top-down approaches to moral alignment, why current models fall short, and how her team’s hybrid framework may offer a better path. Along the way, they dive into the messy nature of human values, the challenges of AI ethics in organizations, and how AI could help us become more moral, not just more efficient. This conversation blends practical tools with philosophical inquiry and leaves us with a cautiously hopeful perspective: that we can, and should, teach machines to care.

    Topics Covered:
    - What AI alignment really means (and why it’s so hard)
    - Bottom-up vs. top-down moral AI systems
    - How organizations get ethical AI wrong, and what to do instead
    - The messy reality of human values and decision making
    - Translational ethics and the need for AI KPIs
    - Personalizing AI to match your values
    - When moral self-reflection becomes a design feature

    Timestamps:
    00:00 Intro: AI Alignment - Mission Impossible?
    04:00 Why Moral AI Is So Hard (and Necessary)
    07:00 The “Spec” Story & Reinforcement Gone Wrong
    10:00 Anthropomorphizing AI - Helpful or Misleading?
    12:00 Introducing Jana & the Moral AI Project
    15:00 What “Moral AI” Really Means
    18:00 Interdisciplinary Collaboration (and Friction)
    21:00 Bottom-Up vs. Top-Down Approaches
    27:00 Why Human Morality Is Messy
    31:00 Building a Hybrid Moral AI System
    41:00 Case Study: Kidney Donation Decisions
    47:00 From Models to Moral Reflection
    52:00 Embedding Ethics Inside Organizations
    56:00 Moral Growth Mindset & Training the Workforce
    01:03:00 Why Trust & Culture Matter Most
    01:06:00 Comparing AI Labs: OpenAI vs. Anthropic vs. Meta
    01:10:00 What We Still Don’t Know
    01:11:00 Quickfire: To AI or Not To AI
    01:16:00 Jana’s Most Controversial Take
    01:19:00 Can AI Make Us Better Humans?

    🎧 Like this episode? Share it with a friend or leave us a review to help others discover the show.

    1h 22m
  6. State of AI Risk with Peter Slattery

    APR 16

    State of AI Risk with Peter Slattery

    Understanding AI Risks with Peter Slattery

    In this episode of the Behavioral Design Podcast, hosts Aline and Samuel are joined by Peter Slattery, behavioral scientist and lead researcher at MIT’s FutureTech lab, where he spearheads the groundbreaking AI Risk Repository project. Together, they dive into the complex and often overlooked risks of artificial intelligence, ranging from misinformation and malicious use to systemic failures and existential threats.

    Peter shares the intellectual and emotional journey behind categorizing over 1,000 documented AI risks, how his team built a risk taxonomy from 17,000+ sources, and why shared understanding and behavioral science are critical for navigating the future of AI.

    This one is a must-listen for anyone curious about AI safety, behavioral science, and the future of technology that’s moving faster than most of us can track.

    LINKS:
    - Peter's LinkedIn Profile
    - MIT FutureTech Lab: futuretech.mit.edu
    - AI Risk Repository

    --

    Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro

    1h 10m
  7. Enter the AI Lab

    MAR 20

    Enter the AI Lab

    Enter the AI Lab: Insights from LinkedIn Polls and AI Literature Reviews

    In this episode of the Behavioral Design Podcast, hosts Samuel Salzer and Aline Holzwarth explore how AI is shaping behavioral design processes, from discovery to testing. They revisit insights from past LinkedIn polls, analyzing audience perspectives on which phases of behavioral design are best suited for AI augmentation and where human expertise remains crucial.

    The discussion then shifts to AI-driven literature reviews, comparing the effectiveness of various AI tools for synthesizing research. Samuel and Aline assess the strengths and weaknesses of different platforms, diving into key performance metrics like quality, speed, and cost, and debating the risks of over-reliance on AI-generated research without human oversight.

    The episode also introduces Nuance’s AI Lab, highlighting upcoming projects focused on AI-driven behavioral science innovations. The conversation concludes with a Behavioral Redesign series case study on Peloton, offering a fresh take on how AI and behavioral insights can reshape product experiences.

    If you're interested in the intersection of AI, behavioral science, and research methodologies, this episode is packed with insights on where AI is excelling, and where caution is needed.

    LINKS: Nuance AI Lab: Website

    TIMESTAMPS:
    00:00 Introduction and Recap of Last Year's AI Polls
    06:27 AI's Strengths in Literature Review
    15:12 Emerging AI Tools for Research
    19:31 Evaluating AI Tools for Literature Reviews
    23:57 Comparing Chinese and American AI Tools
    26:01 Evaluating Literature Review Outputs
    28:12 Critical Analysis and Human Oversight
    35:19 The Worst Performing Model
    37:21 Introducing Nuance's AI Lab
    38:51 Behavioral Redesign Series: Peloton Example
    45:21 Podcast Highlights and Future Guests

    --

    Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro

    48 min
  8. When to AI, and When Not to AI with Eric Hekler

    MAR 6

    When to AI, and When Not to AI with Eric Hekler

    When to AI, and When Not to AI with Eric Hekler

    "People are different. Context matters. Things change."

    In this episode of the Behavioral Design Podcast, Aline is joined by Eric Hekler, professor at UC San Diego, to explore the nuances of AI in behavioral science and health interventions. Eric’s mantra, emphasizing the importance of individual differences, context, and change, serves as a foundation for the conversation as they discuss when AI enhances behavioral interventions and when human judgment is indispensable.

    The discussion explores just-in-time adaptive interventions (JITAI), the efficiency trap of AI, and the jagged frontier of AI adoption: where machine learning excels and where it falls short. Eric shares his expertise on control systems engineering, human-AI collaboration, and the real-world challenges of scaling adaptive health interventions. The episode also explores teachable moments, the importance of domain knowledge, and the need for AI to support rather than replace human decision-making. The conversation wraps up with a quickfire round, where Eric debates AI’s role in health coaching, mental health interventions, and optimizing human routines.

    LINKS: Eric Hekler:

    TIMESTAMPS:
    02:01 Introduction and Correction
    05:21 The Efficiency Trap of AI
    08:02 Human-AI Collaboration
    11:04 Conversation with Eric Hekler
    14:12 Just-in-Time Adaptive Interventions
    15:19 System Identification Experiment
    28:27 Control Systems vs. Machine Learning
    39:44 Challenges with Classical Machine Learning
    43:16 Translating Research to Real-World Applications
    49:49 Community-Based Research and Context Matters
    59:46 Quickfire Round: To AI or Not to AI
    01:08:27 Final Thoughts on AI and Human Evolution

    --

    Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro

    1h 7m

Ratings & Reviews

4.7 out of 5 (15 Ratings)

