Beyond the Couch: AI in Psychology

Ernest Wayde

Beyond the Couch: AI in Psychology bridges the gap between psychology and artificial intelligence, offering psychologists across all specialties clear, actionable insights into how AI is transforming the field. Whether you're a clinician, researcher, educator, organizational psychologist, or working in any psychological specialty, this podcast delivers expert perspectives on ethical integration, practical applications, and future developments to help you confidently navigate the digital transformation of our profession. Presented by ABPP and Wayde AI, with sponsorship from the National Register of Health Service Psychologists, this collaboration brings together psychological expertise and technological innovation to explore how AI can enhance psychological work while maintaining professional standards and human connection. Subscribe now to join a growing community of forward-thinking psychologists who are shaping the future of our profession.

  1. Parenting Through the AI Era: What Every Parent Needs to Know with Dr. Amber Childs

    1D AGO

    Parenting Through the AI Era: What Every Parent Needs to Know with Dr. Amber Childs

    In this episode of Beyond the Couch, Dr. Ernest Wayde sits down with Dr. Amber Childs, child and adolescent psychiatrist, Yale School of Medicine associate professor, and founder of Dr. Amber Childs Advisory. The conversation explores how artificial intelligence is reshaping the lives of teens, parents, clinicians, and the future of mental health care. Dr. Childs shares how her unexpected journey into AI began during the COVID-19 pandemic, when she rapidly helped scale telehealth services for adolescent psychiatry at Yale. She discusses how teens are already integrating AI into their daily lives for learning, emotional support, curiosity, and mental health conversations, often turning to chatbots when trusted human support feels unavailable. The discussion also highlights the fears many parents experience around AI, the importance of curiosity-driven conversations instead of fear-based reactions, and why bans alone may fail to protect young people. Dr. Childs emphasizes that clinicians, caregivers, and psychologists must stay engaged with technology, develop AI literacy, and help shape safer, evidence-based solutions that support human connection rather than replace it.

    Takeaways:
    - AI Is Already Deeply Integrated Into Teen Life and Mental Health Conversations.
    - Teens Often Use AI for Exploration, Emotional Support, and Nonjudgmental Guidance.
    - Parents Should Approach AI Conversations With Curiosity Instead of Fear or Control.
    - Banning AI Without Education or Safeguards May Create More Problems Than Solutions.
    - Psychologists and Clinicians Must Help Shape the Future of Ethical AI in Mental Health Care.
    - Human Connection, Communication, and Trust Still Matter More Than Technology.
    - AI Literacy Is Becoming Essential for Parents, Therapists, and Educators.

    Connect with Dr. Amber Childs:
    LinkedIn: https://www.linkedin.com/in/amberwchilds/
    Website: https://www.dramberchilds.com/

    Connect With Us:
    https://www.waydeai.com/
    https://www.facebook.com/waydeai
    https://www.linkedin.com/company/wayde-ai/
    info@waydeai.com

    Subscribe: https://the-waydeai-brief.beehiiv.com/

    Chapters:
    00:00 - Intro
    02:37 - How AI Entered Her Work “By Accident” During the Pandemic
    06:55 - Teen Skepticism, AI Anxiety & Concerns About Relationships
    08:51 - What Parents Are Most Worried About With AI
    11:30 - Why Teens Turn to AI for Support & Validation
    13:38 - Why Attacking AI or “The Friend” Backfires With Teens
    16:00 - Dangerous AI Scenarios Parents Should Watch For
    19:52 - Trusted Resources for Parents Navigating AI & Mental Health
    22:53 - How Parents Can Start the AI Conversation With Their Teens
    25:08 - Why AI Bans May Do More Harm Than Good
    29:43 - Where to Follow Dr. Amber Childs Online
    30:28 - Final Advice: “Curiosity Is Free”

    31 min
  2. How AI Is Reshaping PTSD Therapy and Clinician Training with Dr. Philip Held

    APR 28

    How AI Is Reshaping PTSD Therapy and Clinician Training with Dr. Philip Held

    In this episode of Beyond the Couch, Dr. Ernest Wayde interviews Dr. Philip Held, a clinical psychologist and researcher focused on improving PTSD treatment outcomes through AI and accelerated therapy models. The conversation explores how Dr. Held’s team developed “Socrates 2.0,” a multi-agent AI system designed to support cognitive restructuring through Socratic dialogue alongside evidence-based therapy. Dr. Held explains how the system uses multiple AI agents to supervise and improve therapeutic conversations in real time, reducing looping behaviors and improving the quality of AI-assisted interactions. The discussion highlights how veterans are using AI as a practice space before therapy sessions, how clinicians are beginning to use these tools for supervision and training, and why validation, safety testing, and clear guardrails are critical as AI becomes more integrated into mental health care. The episode also explores the future of AI-assisted clinician training, ethical considerations around validation standards, and why curiosity and responsible experimentation are essential as psychology adapts to rapidly advancing technologies.

    Takeaways:
    - Multi-Agent AI Can Improve the Quality of Therapeutic Conversations.
    - AI Tools Can Help Veterans Practice Difficult Conversations Before Therapy Sessions.
    - Validation, Safety Testing, and Guardrails Are Essential for Mental Health AI Tools.
    - AI Is Best Used as a Support Tool Rather Than a Replacement for Clinicians.
    - Clinicians Are Beginning to Use AI for Supervision, Roleplay, and Skill Development.

    Connect with Dr. Philip Held:
    LinkedIn: https://www.linkedin.com/in/philip-held-phd/
    Website: https://roadhomeprogram.org/

    Connect With Us:
    https://www.waydeai.com/
    https://www.facebook.com/waydeai
    https://www.linkedin.com/company/wayde-ai/
    info@waydeai.com

    Subscribe: https://the-waydeai-brief.beehiiv.com/

    Chapters:
    00:00 - Intro
    02:24 - Dr. Philip Held’s journey into AI and psychology
    05:58 - How Socratic dialogue works inside the AI tool
    08:24 - Multi-agent AI supervision inspired by clinical training
    10:05 - What success looks like for Socrates 2.0
    11:30 - The challenge of measuring “good enough” in AI therapy
    14:48 - How AI is changing traditional therapy methods
    17:00 - How veterans responded to using the AI tool
    19:33 - Why validating AI mental health tools matters
    23:02 - What responsibilities still belong to clinicians
    25:40 - Clinicians’ reactions to AI-assisted therapy tools
    27:33 - Future AI applications for clinician training and supervision
    30:28 - The need for AI benchmarks, boundaries, and guardrails
    34:36 - What “validation” really means in AI mental health
    35:45 - Dr. Philip Held’s advice on staying curious about AI

    37 min
  3. How AI Is Changing Human Relationships and Mental Health with Dr. Rachel Wood

    APR 21

    How AI Is Changing Human Relationships and Mental Health with Dr. Rachel Wood

    In this episode of Beyond the Couch, Dr. Ernest Wayde interviews Dr. Rachel Wood, a cyberpsychology researcher, licensed professional counselor, and founder of the AI Mental Health Collective. The discussion explores how artificial intelligence is shifting the relational bedrock of society, noting that clients are increasingly bringing AI into their therapy sessions for advice, comfort, and validation. Dr. Wood emphasizes that as AI usage becomes more common, therapists should prioritize clinician competence and practice informed consent. She advocates for a cross-disciplinary approach, urging mental health practitioners to collaborate with AI builders to establish safeguards, raise user awareness, and ensure the responsible development of these technologies.

    Takeaways:
    - AI Is Shifting Client Expectations and Relational Dynamics in Therapy.
    - Clinicians Must Prioritize Informed Consent and Their Own AI Competence.
    - Clients Often Turn to Chatbots Seeking Validation and Frictionless Interactions.
    - Clinical Judgment and Patient Safety Must Always Supersede Any AI Usage.
    - Mental Health Professionals Must Claim a Voice at the Table During AI Development.

    Connect with Dr. Rachel Wood:
    LinkedIn: https://www.linkedin.com/in/rachelwoodphd/
    Website: https://www.dr-rachelwood.com/
    Website: https://www.aimentalhealthcollective.com/

    Connect With Us:
    https://www.waydeai.com/
    https://www.facebook.com/waydeai
    https://www.linkedin.com/company/wayde-ai/
    info@waydeai.com

    Subscribe: https://the-waydeai-brief.beehiiv.com/

    Chapters:
    00:00 Intro
    00:31 Welcome and Guest Intro
    02:04 Dr. Rachel Wood Origin Story
    04:01 How AI Impacts Therapy and Client Usage
    05:36 Clinician Competence and Informed Consent
    07:10 Shifting Expectations and AI Triangulation
    09:58 What Clients Get from AI Chatbots
    11:24 Clinical Judgment and Attachment Theory
    15:24 Practitioner Boundaries and Accountability
    18:21 The AI Mental Health Collective
    21:20 Responsible AI Integration
    23:07 Closing Advice and Where to Find Her

    25 min
  4. Defining the Boundaries of AI in Mental Health with Dr. Shannon Wiltsey Stirman

    APR 7

    Defining the Boundaries of AI in Mental Health with Dr. Shannon Wiltsey Stirman

    In this episode of Beyond the Couch, Dr. Ernest Wayde interviews Dr. Shannon Wiltsey Stirman, a professor of Psychiatry and Behavioral Sciences at Stanford and co-director of the Center for Responsible and Effective AI Technology Enhancement for PTSD treatment (CREATE). They discuss how large language models can support evidence-based mental health interventions and assist in training therapists through the use of simulated patients. Dr. Wiltsey Stirman notes that while AI can be a powerful tool for tasks like clinical scribing and reflection, it should supplement rather than replace human therapists, especially regarding complex diagnoses and high-risk scenarios. She highlights the necessity of AI literacy, urging therapists and organizations to prioritize transparency, privacy, and responsible implementation.

    Takeaways:
    - AI Should Supplement, Not Replace Human Therapists
    - Simulated Patients Offer Safe Practice for Clinicians
    - AI Diagnostics and High-Risk Treatment Require Firm Boundaries
    - Organizations Must Prioritize Transparency and Privacy
    - Therapists Need to Increase Their AI Literacy

    Connect with Dr. Shannon Wiltsey Stirman:
    Email: sws1@stanford.edu
    LinkedIn: https://www.linkedin.com/in/shannon-wiltsey-stirman-3874056/
    https://med.stanford.edu/fastlab.html
    https://create.stanford.edu/contact

    Connect With Us:
    https://www.waydeai.com/
    https://www.facebook.com/waydeai
    https://www.linkedin.com/company/wayde-ai/
    info@waydeai.com

    Subscribe: https://the-waydeai-brief.beehiiv.com/

    Chapters:
    00:00 Intro
    00:35 Welcome and Guest Intro
    02:15 Dr. Shannon Wiltsey Stirman Origin Story
    04:54 Realistic AI Capabilities in Therapy Today
    07:00 Meaningful AI Implementation in Evidence-Based Care
    09:34 What People Get Wrong About AI Tools
    12:57 Boundaries Between AI and Human Therapists
    15:46 Safe AI Boundaries for Therapists
    19:01 Organizational Implementation and Transparency
    21:27 The CREATE Center at Stanford
    25:06 Closing Advice and Where to Find Her

    26 min
  5. AI Ethics, Responsibility, and the Role of Humans in the Age of AI with Dr. Joanna Bryson

    MAR 10

    AI Ethics, Responsibility, and the Role of Humans in the Age of AI with Dr. Joanna Bryson

    In this episode of Beyond the Couch, Dr. Ernest Wayde interviews Dr. Joanna Bryson, professor of Ethics and Technology in Berlin and advisor to organizations including the UN and EU, about what “AI ethics” really means. Dr. Bryson argues that it is not coherent to call AI itself ethical; the primary concern is whether and how humans should build and deploy AI, and how it may change societies. She highlights recurring concerns like bias, but stresses broader failures around accountability, surveillance, deception, and weaponization, urging users to maintain agency, verify outputs, protect data, and avoid placing blind trust in AI.

    Takeaways:
    - AI Itself Is Not Ethical—Humans Are Responsible
    - Bias Is a Major Concern—but Not the Only One
    - Accountability Must Start With Development
    - The Information Age Demands Critical Thinking
    - Learning and Adaptation Are Essential

    Connect with Dr. Joanna Bryson:
    bryson@hertie-school.org
    https://www.hertie-school.org/en/who-we-are/profile/person/bryson

    Connect With Us:
    https://www.waydeai.com/
    https://www.facebook.com/waydeai
    https://www.linkedin.com/company/wayde-ai/
    info@waydeai.com

    Subscribe: https://the-waydeai-brief.beehiiv.com/

    Chapters:
    00:00 What Is AI Ethics
    00:23 Welcome and Guest Intro
    02:06 Dr. Joanna Bryson Origin Story
    05:38 From AI Research to Ethics
    07:08 Ethical AI Misconceptions
    09:02 Policy Failures and Liability
    10:57 Beyond Bias Surveillance Risks
    12:30 Everyday User Responsibility
    15:23 AI and Mental Health Use
    16:46 EU Rules and Bot Disclosure
    17:51 Scams Surveillance and Freedom
    20:30 Closing Advice and Where to Find Her

    22 min
  6. Wellness AI for College Student Wellbeing with Dr. Ashleigh Golden

    FEB 24

    Wellness AI for College Student Wellbeing with Dr. Ashleigh Golden

    In this episode of Beyond the Couch, host Dr. Ernest Wayde sits down with Dr. Ashleigh Golden, a Stanford-trained clinical psychologist and co-founder of Wayhaven, to explore the transformative role of conversational AI in student wellness. Dr. Golden and Dr. Wayde discuss the upstream model of care: using AI not as a replacement for therapy, but as a proactive tool to help students build social-emotional skills and navigate campus resources before they reach a clinical crisis. Dr. Golden emphasizes that while technology is evolving rapidly, the clinician must remain in the driver’s seat, using these tools to supplement evidence-based treatment and bridge the action-implementation gap between sessions.

    Takeaways:
    - Wayhaven serves as a well-being coach for college students, addressing everyday challenges.
    - AI tools like Wayhaven are not substitutes for clinical services but provide proactive support.
    - Transparency about AI's capabilities and limitations is crucial for users.
    - Clinicians must remain involved in the development of AI tools to ensure ethical use.
    - Banning AI in mental health is not the solution; better safeguards are needed.
    - Understanding the risks associated with AI usage is essential for clinicians.
    - Collaboration between clinicians and AI developers can enhance mental health support.

    Connect With Dr. Ashleigh Golden:
    https://www.linkedin.com/in/ashleigh-golden/
    https://www.wayhaven.com/

    Connect With Us:
    https://www.waydeai.com/
    https://www.facebook.com/waydeai
    https://www.linkedin.com/company/wayde-ai/
    info@waydeai.com

    Subscribe: https://the-waydeai-brief.beehiiv.com/

    32 min

Ratings & Reviews

5 out of 5 (3 Ratings)

