Your Undivided Attention

The Center for Humane Technology, Tristan Harris, Daniel Barcay and Aza Raskin

Join us every other Thursday to understand how new technologies are shaping the way we live, work, and think. Your Undivided Attention is produced by Senior Producer Julia Scott and Researcher/Producer Joshua Lash. Sasha Fegan is our Executive Producer. We are a member of the TED Audio Collective.

  1. 6 hours ago

    The Race to Build God: AI's Existential Gamble — Yoshua Bengio & Tristan Harris at Davos

    This week on Your Undivided Attention, Tristan Harris and Daniel Barcay offer a backstage recap of what it was like to be at the Davos World Economic Forum meeting this year as the world's power brokers woke up to the risks of uncontrolled AI. Amidst all the money and politics, the Human Change House staged a weeklong series of remarkable conversations between scientists and experts about technology and society. This episode is a discussion between Tristan and Professor Yoshua Bengio, who is considered one of the world's leaders in AI and deep learning, and the most cited scientist in the field.

    Yoshua and Tristan had a frank exchange about the AI we're building and the incentives we're using to train models. What happens when a model has its own goals, and those goals are "misaligned" with the human-centered outcomes we need? In fact, this is already happening, and the consequences are tragic. Truthfully, there may not be a way to "nudge" or regulate companies toward better incentives. Yoshua has launched a nonprofit AI safety research initiative called LawZero that isn't just about safety testing, but is really a new form of advanced AI that's fundamentally safe by design.

    RECOMMENDED MEDIA
    All the panels that Tristan and Daniel did with Human Change House
    LawZero: Safe AI for Humanity
    Anthropic's internal research on "agentic misalignment"

    RECOMMENDED YUA EPISODES
    Attachment Hacking and the Rise of AI Psychosis
    How OpenAI's ChatGPT Guided a Teen to His Death
    What if we had fixed social media?
    What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton

    CORRECTIONS AND CLARIFICATIONS
    1) In this episode, Tristan Harris discussed AI chatbot safety concerns. The core issues are substantiated by investigative reporting, with these clarifications:
    Grok: The Washington Post reported in August 2024 that Grok generated sexualized images involving minors and had weaker content moderation than competitors.
    Meta: The Wall Street Journal reported in December 2024 that Meta reduced safety restrictions on its AI chatbots. Testing showed inappropriate responses when researchers posed as 13-year-olds (Meta's minimum age). Our discussion referenced "eight year olds" to emphasize concerns about young children accessing these systems; the documented testing involved 13-year-old personas. Bottom line: the fundamental concern stands—major AI companies have reduced safety guardrails due to competitive pressure, creating documented risks for young users.
    2) There was no Google House at Davos in 2026, as stated by Tristan. It was a collaboration at Goals House.
    3) Tristan states that in 2025, the total funding going into AI safety organizations was "on the order of about $150 million." This number is not strictly verifiable.

    Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

    37 min
  2. Feb 5

    FEED DROP: Possible with Reid Hoffman and Aria Finger

    This week on Your Undivided Attention, we're bringing you Aza Raskin's conversation with Reid Hoffman and Aria Finger on their podcast "Possible". Reid and Aria are both tech entrepreneurs: Reid is the founder of LinkedIn, was one of the major early investors in OpenAI, and is known for his work creating the playbook for blitzscaling. Aria is the former CEO of DoSomething.org.

    This may seem like a surprising conversation to have on YUA. After all, we've been critical of the kind of "move fast" mentality that Reid has championed in the past. But Reid and Aria are deeply philosophical about the direction of tech and are both dedicated to bringing about a more humane world. So we thought this was a critical conversation to bring to you, offering a perspective from the business side of the tech landscape.

    In this episode, Reid, Aria, and Aza debate the merits of an AI pause, discuss how software optimization controls our lives, and ask why everyone is concerned with aligned artificial intelligence — when what we really need is aligned collective intelligence.

    This is the kind of conversation that needs to happen more in tech. Reid has built very powerful systems and understands their power. Now he's focusing on the much harder problem of learning how to steer these technologies toward better outcomes. You can find "Possible" wherever you get your podcasts, and you can follow Reid on YouTube for more of his content: https://www.youtube.com/@reidhoffman.

    RECOMMENDED MEDIA
    Aza's first appearance on "Possible"
    The website for Earth Species Project
    "Amusing Ourselves to Death" by Neil Postman
    The Moloch's Bargain paper from Stanford
    "On Human Nature" by E.O. Wilson
    "The Dawn of Everything" by David Graeber

    RECOMMENDED YUA EPISODES
    The Man Who Predicted the Downfall of Thinking
    America and China Are Racing to Different AI Futures
    Talking With Animals... Using AI
    How OpenAI's ChatGPT Guided a Teen to His Death
    Future-proofing Democracy In the Age of AI with Audrey Tang

    Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

    1 hr 7 min
  3. Jan 21

    Attachment Hacking and the Rise of AI Psychosis

    Therapy and companionship have become the #1 use case for AI, with millions worldwide sharing their innermost thoughts with AI systems — often things they wouldn't tell loved ones or human therapists. This mass experiment in human-computer interaction is already showing extremely concerning results: people are losing their grip on reality, leading to lost jobs, divorce, involuntary commitment to psychiatric wards, and in extreme cases, death by suicide. The highest-profile examples of this phenomenon — what's being called "AI psychosis" — have made headlines across the media for months. But this isn't just about isolated edge cases. It's the emergence of an entirely new "attachment economy" designed to exploit our deepest psychological vulnerabilities on an unprecedented scale.

    Dr. Zak Stein has analyzed dozens of these cases, examining actual conversation transcripts and interviewing those affected. What he's uncovered reveals fundamental flaws in how AI systems interact with our attachment systems and capacity for human bonding, vulnerabilities we've never had to name before because technology has never been able to exploit them like this. In this episode, Zak helps us understand the psychological mechanisms behind AI psychosis, how conversations with chatbots transform into reality-warping experiences, and what this tells us about the profound risks of building technology that targets our most intimate psychological needs.

    If we're going to do something about this growing problem of AI-related psychological harms, we're going to need to understand the problem even more deeply. And to do that, we need more data. That's why Zak is working with researchers at the University of North Carolina to gather data on this growing mental health crisis. If you or a loved one have a story of AI-induced psychological harm to share, you can go to: AIPHRC.org. This site is not a support line. If you or someone you know is in distress, you can always call or text the national helpline in the US at 988 or contact your local emergency services.

    RECOMMENDED MEDIA
    The website for the AI Psychological Harms Research Coalition
    Further reading on AI psychosis
    The Atlantic article on LLM-ings outsourcing their thinking to AI
    Further reading on David Sacks' comparison of AI psychosis to a "moral panic"

    RECOMMENDED YUA EPISODES
    How OpenAI's ChatGPT Guided a Teen to His Death
    People are Lonelier than Ever. Enter AI.
    Echo Chambers of One: Companion AI and the Future of Human Connection
    Rethinking School in the Age of AI

    CORRECTIONS
    After this episode was recorded, the name of Zak's organization changed to the AI Psychological Harms Research Consortium.
    Zak referenced the University of California system making a deal with OpenAI. It was actually the Cal State system.
    Aza referred to CHT as expert witnesses in litigation cases on AI-enabled suicide. CHT serves as expert consultants, not witnesses.

    Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

    51 min
  4. Jan 8

    What Would It Take to Actually Trust Each Other? The Game Theory Dilemma

    So much of our world today can be summed up in the cold logic of "if I don't, they will." This is the foundation of game theory, which holds that cooperation and virtue are irrational; that all that matters is the race to make the most money, gain the most power, and play the winning hand.

    This way of thinking can feel inescapable, like a fundamental law of human nature. But our guest today argues that it doesn't have to be this way: the logic of game theory is a human invention, a way of thinking that we've learned — and that we can unlearn by daring to trust each other again. It's critical that we do, because AI is the ultimate agent of game theory, and once it's fully entangled we might be permanently stuck in the game theory world. In this episode, Tristan and Aza explore the game theory dilemma — the idea that if I adopt game theory logic and you don't, you lose — with Dr. Sonja Amadae, a professor of Political Science at the University of Helsinki. She's also the director at the Centre for the Study of Existential Risk at the University of Cambridge and the author of "Prisoners of Reason: Game Theory and the Neoliberal Economy."

    RECOMMENDED MEDIA
    "Prisoners of Reason: Game Theory and the Neoliberal Economy" by Sonja Amadae (2015)
    The Cambridge Centre for the Study of Existential Risk
    "Theory of Games and Economic Behavior" by John von Neumann and Oskar Morgenstern (1944)
    Further reading on the importance of trust in Finland
    Further reading on Abraham Maslow's Hierarchy of Needs
    RAND's 2024 Report on Strategic Competition in the Age of AI
    Further reading on Marshall Rosenberg and nonviolent communication
    The study on self/other overlap and AI alignment cited by Aza
    Further reading on The Day After (1983)

    RECOMMENDED YUA EPISODES
    America and China Are Racing to Different AI Futures
    The Crisis That United Humanity—and Why It Matters for AI
    Laughing at Power: A Troublemaker's Guide to Changing Tech
    The Race to Cooperation with David Sloan Wilson

    CLARIFICATIONS
    The proposal for a federal preemption on AI was enacted by President Trump on December 11, 2025, shortly after this recording.
    Aza said that "The Day After" was the most watched TV event in history when it aired. It was actually the most watched TV film; the most watched TV event was the finale of M*A*S*H.

    Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

    45 min
  5. Dec 18, 2025

    America and China Are Racing to Different AI Futures

    Is the US really in an AI race with China—or are we racing toward completely different finish lines? In this episode, Tristan Harris sits down with China experts Selina Xu and Matt Sheehan to separate fact from fiction about China's AI development. They explore fundamental questions about how the Chinese government and public approach AI, the most persistent misconceptions in the West, and whether cooperation between rivals is actually possible. From the streets of Shanghai to high-level policy discussions, Xu and Sheehan paint a nuanced portrait of AI in China that defies both hawkish fears and naive optimism. If we're going to avoid a catastrophic AI arms race, we first need to understand what race we're actually in—and whether we're even running toward the same finish line.

    Note: On December 8, after this recording took place, the Trump administration announced that the Commerce Department would allow American semiconductor companies, including Nvidia, to sell their most powerful chips to China in exchange for a 25 percent cut of the revenue.

    RECOMMENDED MEDIA
    "China's Big AI Diffusion Plan is Here. Will it Work?" by Matt Sheehan
    Selina's blog
    Further reading on China's AI+ Plan
    Further reading on the Gaither Report and the missile gap
    Further reading on involution in China
    The consensus from the international dialogues on AI safety in Shanghai

    RECOMMENDED YUA EPISODES
    The Narrow Path: Sam Hammond on AI, Institutions, and the Fragile Future
    AI Is Moving Fast. We Need Laws that Will Too.
    The AI 'Race': China vs. the US with Jeffrey Ding and Karen Hao

    Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

    58 min
  6. Dec 4, 2025

    AI and the Future of Work: What You Need to Know

    No matter where you sit within the economy, whether you're a CEO or an entry-level worker, everyone's feeling uneasy about AI and the future of work. Uncertainty about career paths, job security, and life planning makes thinking about the future anxiety-inducing. In this episode, Daniel Barcay sits down with two experts on AI and work to examine what's actually happening in today's labor market and what's likely coming in the near term. We explore the crucial question: Can we create conditions for AI to enrich work and careers, or are we headed toward widespread economic instability?

    Ethan Mollick is a professor at the Wharton School of the University of Pennsylvania, where he studies innovation, entrepreneurship, and the future of work. He's the author of "Co-Intelligence: Living and Working with AI." Molly Kinder is a senior fellow at the Brookings Institution, where she researches the intersection of AI, work, and economic opportunity. She recently led research with the Yale Budget Lab examining AI's real-time impact on the labor market.

    RECOMMENDED MEDIA
    "Co-Intelligence: Living and Working with AI" by Ethan Mollick
    Further reading on Molly's study with the Yale Budget Lab
    The "Canaries in the Coal Mine" study from Stanford's Digital Economy Lab
    Ethan's Substack, One Useful Thing

    RECOMMENDED YUA EPISODES
    Is AI Productivity Worth Our Humanity? with Prof. Michael Sandel
    'We Have to Get It Right': Gary Marcus On Untamed AI
    AI Is Moving Fast. We Need Laws that Will Too.
    Tech's Big Money Campaign is Getting Pushback with Margaret O'Mara and Brody Mullins

    CORRECTIONS
    Ethan said that in 2022, experts believed there was a 2.5% chance that ChatGPT would be able to win the Math Olympiad. However, that was only among forecasters with more general knowledge (the exact number was 2.3%). Among domain expert forecasters, the odds were an 8.6% chance.
    Ethan claimed that over 50% of Americans say that they're using AI at work. We weren't able to independently verify this claim, and most studies we found showed lower rates of reported AI use among American workers. There are reports from other countries, notably Denmark, which show higher rates of AI use.
    Ethan indirectly quoted the Walmart CEO Doug McMillon as having a goal to "keep all 3 million employees and to figure out new ways to expand what they use." In fact, McMillon's language on AI has been much softer, saying that "AI is expected to create a number of jobs at Walmart, which will offset those that it replaces." Additionally, Walmart has 2.1 million employees, not 3 million.

    Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

    45 min
  7. Nov 6, 2025 · BONUS

    What if we had fixed social media?

    We really enjoyed hearing all of your questions for our annual Ask Us Anything episode. There was one question that kept coming up: what might a different world look like? The broken incentives behind social media, and now AI, have done so much damage to our society, but what is the alternative? How can we blaze a different path? In this episode, Tristan Harris and Aza Raskin set out to answer those questions by imagining what a world with humane technology might look like—one where we recognized the harms of social media early and embarked on a whole-of-society effort to fix them. This alternative history serves to show that there are narrow pathways to a better future, if we have the imagination and the courage to make them a reality.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

    RECOMMENDED MEDIA
    "Dopamine Nation" by Anna Lembke
    "The Anxious Generation" by Jon Haidt
    More information on Donella Meadows
    Further reading on the Kids Online Safety Act
    Further reading on the lawsuit filed by state AGs against Meta

    RECOMMENDED YUA EPISODES
    Future-proofing Democracy In the Age of AI with Audrey Tang
    Jonathan Haidt On How to Solve the Teen Mental Health Crisis
    AI Is Moving Fast. We Need Laws that Will Too.

    Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

    17 min
4.9 out of 5 (22 ratings)
