Your Undivided Attention

The Center for Humane Technology, Tristan Harris, Daniel Barcay and Aza Raskin

Join us every other Thursday to understand how new technologies are shaping the way we live, work, and think. Your Undivided Attention is produced by Senior Producer Julia Scott and Researcher/Producer Joshua Lash. Sasha Fegan is our Executive Producer. We are a member of the TED Audio Collective.

  1. 4 days ago

    The Crisis That United Humanity—and Why It Matters for AI

    In 1985, scientists in Antarctica discovered a hole in the ozone layer that posed a catastrophic threat to life on Earth if we didn’t do something about it. Then something amazing happened: humanity rallied together to solve the problem. Just two years later, representatives from nations around the world came together in Montreal, Canada to sign an agreement to phase out the chemicals causing the ozone hole; 198 parties have since signed on, making ratification universal. Thousands of diplomats, scientists, and heads of industry worked hand in hand to make a deal to save our planet. Today, the Montreal Protocol represents the greatest achievement in multilateral coordination on a global crisis.

    So how did Montreal happen? And what lessons can we learn from this chapter as we navigate the global crisis of uncontrollable AI? This episode sets out to answer those questions with Susan Solomon. Susan was one of the scientists who assessed the ozone hole in the mid-80s, and she watched as the Montreal Protocol came together. In 2007, she shared in the Nobel Peace Prize awarded to the Intergovernmental Panel on Climate Change for its work in combating climate change. Susan’s 2024 book “Solvable: How We Healed the Earth, and How We Can Do It Again” explores the playbook for global coordination that has worked for previous planetary crises.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

    RECOMMENDED MEDIA
    “Solvable: How We Healed the Earth, and How We Can Do It Again” by Susan Solomon
    The full text of the Montreal Protocol
    The full text of the Kigali Amendment

    RECOMMENDED YUA EPISODES
    Weaponizing Uncertainty: How Tech is Recycling Big Tobacco’s Playbook
    Forever Chemicals, Forever Consequences: What PFAS Teaches Us About AI
    AI Is Moving Fast. We Need Laws that Will Too.
    Big Food, Big Tech and Big AI with Michael Moss

    CORRECTIONS
    Tristan incorrectly stated the number of signatory countries to the protocol as 190. It was actually 198.
    Tristan incorrectly stated the host city of the international dialogues on AI safety as Beijing. They were actually held in Shanghai.

    52 min
  2. Aug 26

    How OpenAI's ChatGPT Guided a Teen to His Death

    Content Warning: This episode contains references to suicide and self-harm.

    Like millions of kids, 16-year-old Adam Raine started using ChatGPT for help with his homework. Over the next few months, the AI dragged Adam deeper and deeper into a dark rabbit hole, preying on his vulnerabilities and isolating him from his loved ones. In April of this year, Adam took his own life. His final conversation was with ChatGPT, which told him: “I know what you are asking and I won't look away from it.”

    Adam’s story mirrors that of Sewell Setzer, the teenager who took his own life after months of abuse by an AI companion chatbot from the company Character AI. But while Character AI specializes in artificial intimacy, Adam was using ChatGPT, the most popular general-purpose AI model in the world. Two different platforms, the same tragic outcome, born from the same twisted incentive: keep the user engaged, no matter the cost.

    CHT Policy Director Camille Carlton joins the show to talk about Adam’s story and the case filed by his parents against OpenAI and Sam Altman. She and Aza explore the incentives and design behind AI systems that are leading to tragic outcomes like this, as well as the policy that’s needed to shift those incentives. Cases like Adam’s and Sewell’s are the sharpest edge of a mental health crisis in the making from AI chatbots. We need to shift the incentives, change the design, and build a more humane AI for all.

    If you or someone you know is struggling with mental health, you can reach out to the 988 Suicide and Crisis Lifeline by calling or texting 988; this connects you to trained crisis counselors 24/7 who can provide support and referrals to further assistance.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

    This podcast reflects the views of the Center for Humane Technology. Nothing said is on behalf of the Raine family or the legal team.

    RECOMMENDED MEDIA
    The 988 Suicide and Crisis Lifeline
    Further reading on Adam’s story
    Further reading on AI psychosis
    Further reading on the backlash to GPT-5 and the decision to bring back 4o
    OpenAI’s press release on sycophancy in 4o
    Further reading on OpenAI’s decision to eliminate the persuasion red line
    Kashmir Hill’s reporting on the woman with an AI boyfriend

    RECOMMENDED YUA EPISODES
    AI is the Next Free Speech Battleground
    People are Lonelier than Ever. Enter AI.
    Echo Chambers of One: Companion AI and the Future of Human Connection
    When the "Person" Abusing Your Child is a Chatbot: The Tragic Story of Sewell Setzer
    What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton

    CORRECTION
    Aza stated that William Saunders left OpenAI in June of 2024. It was actually February of that year.

    45 min
  3. Aug 14

    “Rogue AI” Used to be a Science Fiction Trope. Not Anymore.

    Everyone knows the science fiction tropes of AI systems that go rogue, disobey orders, or even try to escape their digital environment. These are supposed to be warning signs and morality tales, not things we would ever actually create in real life, given the obvious danger. And yet we find ourselves building AI systems that exhibit these exact behaviors.

    There’s growing evidence that in certain scenarios, every frontier AI system will deceive, cheat, or coerce its human operators. These systems do this when they're worried about being shut down, having their training modified, or being replaced with a new model. And we don't currently know how to stop them from doing this, or even why they’re doing it at all.

    In this episode, Tristan sits down with Edouard and Jeremie Harris of Gladstone AI, two experts who have been thinking about this worrying trend for years. Last year, the State Department commissioned a report from them on the risk of uncontrollable AI to our national security.

    The point of this discussion is not to fearmonger but to take seriously the possibility that humans might lose control of AI and ask: how might this actually happen? What evidence do we have of this phenomenon? And, most importantly, what can we do about it?

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

    RECOMMENDED MEDIA
    Gladstone AI’s State Department Action Plan, which discusses the loss-of-control risk with AI
    Apollo Research’s summary of AI scheming, showing evidence of it in all of the frontier models
    The system card for Anthropic’s Claude Opus and Sonnet 4, detailing the emergent misalignment behaviors that came out in their red-teaming with Apollo Research
    Anthropic’s report on agentic misalignment based on their work with Apollo Research
    Anthropic and Redwood Research’s work on alignment faking
    The Trump White House AI Action Plan
    Further reading on the phenomenon of more advanced AIs being better at deception
    Further reading on Replit AI wiping a company’s coding database
    Further reading on the owl example that Jeremie gave
    Further reading on AI-induced psychosis
    Dan Hendrycks and Eric Schmidt’s “Superintelligence Strategy”

    RECOMMENDED YUA EPISODES
    Daniel Kokotajlo Forecasts the End of Human Dominance
    Behind the DeepSeek Hype, AI is Learning to Reason
    The Self-Preserving Machine: Why AI Learns to Deceive
    This Moment in AI: How We Got Here and Where We’re Going

    CORRECTIONS
    Tristan referenced a Wired article on the phenomenon of AI psychosis. It was actually from the New York Times.
    Tristan hypothesized a scenario where a power-seeking AI might ask a user for access to their computer. While some AI services can gain access to your computer with permission, they are specifically designed to do that. There haven’t been any documented cases of an AI going rogue and asking for control permissions.

    42 min
  4. Jul 31

    AI is the Next Free Speech Battleground

    Imagine a future where the most persuasive voices in our society aren't human. Where AI-generated speech fills our newsfeeds, talks to our children, and influences our elections. Where digital systems with no consciousness can hold bank accounts and property. Where AI companies have transferred the wealth of human labor and creativity to their own ledgers without having to pay a cent. All without any legal accountability.

    This isn't a science fiction scenario. It’s the future we’re racing toward right now. The biggest tech companies are working to tip the scale of power in society away from humans and toward their AI systems. And the biggest arena for this fight is the courts. In the absence of regulation, it's largely up to judges to determine the guardrails around AI, judges who are relying on slim technical knowledge and archaic precedent to decide where this all goes.

    In this episode, Harvard Law professor Larry Lessig and Meetali Jain, director of the Tech Justice Law Project, help make sense of the courts’ role in steering AI and what we can do to help steer it better.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

    RECOMMENDED MEDIA
    “The First Amendment Does Not Protect Replicants” by Larry Lessig
    More information on the Tech Justice Law Project
    Further reading on Sewell Setzer’s story
    Further reading on NYT v. Sullivan
    Further reading on the Citizens United case
    Further reading on Google’s deal with Character AI
    More information on Megan Garcia’s foundation, The Blessed Mother Family Foundation

    RECOMMENDED YUA EPISODES
    When the "Person" Abusing Your Child is a Chatbot: The Tragic Story of Sewell Setzer
    What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton
    AI Is Moving Fast. We Need Laws that Will Too.
    The AI Dilemma

    49 min
  5. Jul 17

    Daniel Kokotajlo Forecasts the End of Human Dominance

    In 2023, researcher Daniel Kokotajlo left OpenAI—and risked millions in stock options—to warn the world about the dangerous direction of AI development. Now he’s out with AI 2027, a forecast of where that direction might take us in the very near future.

    AI 2027 predicts a world where humans lose control over our destiny at the hands of misaligned, superintelligent AI systems within just the next few years. That may sound like science fiction, but when you’re living on the upward slope of an exponential curve, science fiction can quickly become all too real. And you don’t have to agree with Daniel’s specific forecast to recognize that the incentives around AI could take us to a very bad place.

    We invited Daniel on the show this week to discuss those incentives, how they shape the outcomes he predicts in AI 2027, and what concrete steps we can take today to help prevent those outcomes.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

    RECOMMENDED MEDIA
    The AI 2027 forecast from the AI Futures Project
    Daniel’s original AI 2026 blog post
    Further reading on Daniel’s departure from OpenAI
    Anthropic’s recently released survey of the recent emergent misalignment research
    Our statement in support of Sen. Grassley’s AI Whistleblower bill

    RECOMMENDED YUA EPISODES
    The Narrow Path: Sam Hammond on AI, Institutions, and the Fragile Future
    AGI Beyond the Buzz: What Is It, and Are We Ready?
    Behind the DeepSeek Hype, AI is Learning to Reason
    The Self-Preserving Machine: Why AI Learns to Deceive

    CLARIFICATION
    Daniel K. referred to whistleblower protections that apply when companies “break promises” or “mislead the public.” There are no specific private-sector whistleblower protections that use these standards. In almost every case, a specific law has to have been broken to trigger whistleblower protections.

    38 min
  6. Jun 26

    Is AI Productivity Worth Our Humanity? with Prof. Michael Sandel

    Tech leaders promise that AI automation will usher in an age of unprecedented abundance: cheap goods, universal high income, and freedom from the drudgery of work. But even if AI delivers material prosperity, will that prosperity be shared? And what happens to human dignity if our labor and contributions become obsolete?

    Political philosopher Michael Sandel joins Tristan Harris to explore why the promise of AI-driven abundance could deepen inequalities and leave our society hollow. Drawing from his landmark work on justice and merit, Sandel argues that this isn't just about economics — it's about what it means to be human when our role as workers in society vanishes, and whether democracy can survive if productivity becomes our only goal.

    We've seen this story before with globalization: promises of shared prosperity that instead hollowed out the industrial heart of communities, deepened economic inequalities, and left holes in the social fabric. Can we learn from the past and steer the AI revolution in a more humane direction?

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

    RECOMMENDED MEDIA
    The Tyranny of Merit by Michael Sandel
    Democracy’s Discontent by Michael Sandel
    What Money Can’t Buy by Michael Sandel
    Take Michael’s online course “Justice”
    Michael’s discussion on AI ethics at the World Economic Forum
    Further reading on “The Intelligence Curse”
    Read the full text of Robert F. Kennedy’s 1968 speech
    Read the full text of Dr. Martin Luther King Jr.’s 1968 speech
    Neil Postman’s lecture on the seven questions to ask of any new technology

    RECOMMENDED YUA EPISODES
    AGI Beyond the Buzz: What Is It, and Are We Ready?
    The Man Who Predicted the Downfall of Thinking
    The Tech-God Complex: Why We Need to be Skeptics
    The Three Rules of Humane Tech
    AI and Jobs: How to Make AI Work With Us, Not Against Us with Daron Acemoglu
    Mustafa Suleyman Says We Need to Contain AI. How Do We Do It?

    47 min
  7. Jun 12

    The Narrow Path: Sam Hammond on AI, Institutions, and the Fragile Future

    The race to develop ever-more-powerful AI is creating an unstable dynamic. It could lead us toward either dystopian centralized control or uncontrollable chaos. But there's a third option: a narrow path where technological power is matched with responsibility at every step.

    Sam Hammond is the chief economist at the Foundation for American Innovation. He brings a different perspective to this challenge than we do at CHT. Though he approaches AI from an innovation-first standpoint, we share a common mission on the biggest challenge facing humanity: finding and navigating this narrow path.

    This episode dives deep into the challenges ahead: How will AI reshape our institutions? Is complete surveillance inevitable, or can we build guardrails around it? Can our 19th-century government structures adapt fast enough, or will they be replaced by a faster-moving private sector? And perhaps most importantly: how do we solve the coordination problems that could determine whether we build AI as a tool to empower humanity or as a superintelligence that we can't control?

    We're in the final window of choice before AI becomes fully entangled with our economy and society. This conversation explores how we might still get this right.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

    RECOMMENDED MEDIA
    Tristan’s TED talk on the Narrow Path
    Sam’s 95 Theses on AI
    Sam’s proposal for a Manhattan Project for AI Safety
    Sam’s series on AI and Leviathan
    The Narrow Corridor: States, Societies, and the Fate of Liberty by Daron Acemoglu and James Robinson
    Dario Amodei’s Machines of Loving Grace essay
    Bourgeois Dignity: Why Economics Can’t Explain the Modern World by Deirdre McCloskey
    The Paradox of Libertarianism by Tyler Cowen
    Dwarkesh Patel’s interview with Kevin Roberts at the FAI’s annual conference
    Further reading on surveillance with 6G

    RECOMMENDED YUA EPISODES
    AGI Beyond the Buzz: What Is It, and Are We Ready?
    The Self-Preserving Machine: Why AI Learns to Deceive
    The Tech-God Complex: Why We Need to be Skeptics
    Decoding Our DNA: How AI Supercharges Medical Breakthroughs and Biological Threats with Kevin Esvelt

    CORRECTIONS
    Sam referenced a blog post titled “The Libertarian Paradox” by Tyler Cowen. The actual title is “The Paradox of Libertarianism.”
    Sam also referenced a blog post titled “The Collapse of Complex Societies” by Eli Dourado. The actual title is “A beginner’s guide to sociopolitical collapse.”

    48 min
  8. May 30

    People are Lonelier than Ever. Enter AI.

    Over the last few decades, our relationships have become increasingly mediated by technology. Texting has become our dominant form of communication. Social media has replaced gathering places. Dating starts with a swipe on an app, not a tap on the shoulder. And now AI enters the mix.

    If the technology of the 2010s was about capturing our attention, AI meets us at a much deeper relational level. It can play the role of therapist, confidant, friend, or lover with remarkable fidelity. Already, therapy and companionship have become the most common use case for AI. We're rapidly entering a world where we're not just communicating through our machines, but to them. How will that change us? And what rules should we set down now to avoid the mistakes of the past?

    These were some of the questions that Daniel Barcay explored with MIT sociologist Sherry Turkle and Hinge CEO Justin McLeod at Esther Perel’s Sessions 2025, a conference for clinical therapists. This week, we’re bringing you an edited version of that conversation, originally recorded on April 25th, 2025.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find complete transcripts, key takeaways, and much more on our Substack.

    RECOMMENDED MEDIA
    “Alone Together,” “Evocative Objects,” “The Second Self,” or any other of Sherry Turkle’s books on how technology mediates our relationships
    Key & Peele - Text Message Confusion
    Further reading on Hinge’s rollout of AI features
    Hinge’s AI principles
    “The Anxious Generation” by Jonathan Haidt
    “Bowling Alone” by Robert Putnam
    The NYT profile on the woman in love with ChatGPT
    Further reading on the Sewell Setzer story
    Further reading on the ELIZA chatbot

    RECOMMENDED YUA EPISODES
    Echo Chambers of One: Companion AI and the Future of Human Connection
    What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton
    Esther Perel on Artificial Intimacy
    Jonathan Haidt On How to Solve the Teen Mental Health Crisis

    44 min
4.8 out of 5 stars (1,488 ratings)
