Misinformation Machines with Gordon Pennycook – Part 2

The Behavioral Design Podcast

Debunkbot and Other Tools Against Misinformation

In this follow-up episode of the Behavioral Design Podcast, hosts Aline Holzwarth and Samuel Salzer welcome back Gordon Pennycook, psychology professor at Cornell University, to continue their deep dive into the battle against misinformation.

Building on their previous conversation around misinformation’s impact on democratic participation and the role of AI in spreading and combating falsehoods, this episode focuses on actionable strategies and interventions to combat misinformation effectively.

Gordon discusses evidence-based approaches, including nudges, accuracy prompts, and psychological inoculation (or prebunking) techniques, that empower individuals to better evaluate the information they encounter.

The conversation highlights recent advancements in using AI to debunk conspiracy theories and examines how AI-generated evidence can influence belief systems. They also tackle the role of social media platforms in moderating content, the ethical balance between free speech and misinformation, and practical steps that can make platforms safer without stifling expression.

This episode provides valuable insights for anyone interested in understanding how to counter misinformation through behavioral science and AI.

LINKS:

Gordon Pennycook:

  • Google Scholar Profile
  • Twitter
  • Personal Website
  • Cornell University Faculty Page

Further Reading on Misinformation:

  • Debunkbot - The AI That Reduces Belief in Conspiracy Theories
  • Interventions Toolbox - Strategies to Combat Misinformation

TIMESTAMPS:

01:27 Intro and Early Voting
06:45 Welcome back, Gordon!
07:52 Strategies to Combat Misinformation
11:10 Nudges and Behavioral Interventions
14:21 Comparing Intervention Strategies
19:08 Psychological Inoculation and Prebunking
32:21 Echo Chambers and Online Misinformation
34:13 Individual vs. Policy Interventions
36:21 If You Owned a Social Media Company
37:49 Algorithm Changes and Platform Quality
38:42 Community Notes and Fact-Checking
39:30 Reddit’s Moderation System
42:07 Generative AI and Fact-Checking
43:16 AI Debunking Conspiracy Theories
45:26 Effectiveness of AI in Changing Beliefs
51:32 Potential Misuse of AI
55:13 Final Thoughts and Reflections

--

Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com.

Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

Every Monday, our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

Get in touch via podcast@habitweekly.com

The song used is Murgatroyd by David Pizarro
