The Most Interesting People I Know

Garrison Lovely

Interviews with fascinating people on science, ethics, and politics.

  1. 05/09/2024

    35 - Yoshua Bengio on Why AI Labs are “Playing Dice with Humanity’s Future”

    I'm really excited to come out of hiatus to share this conversation with you. You may have noticed people are talking a lot about AI, and I've started focusing my journalism on the topic. I recently published a 9,000-word cover story in Jacobin’s winter issue called “Can Humanity Survive AI?” and was fortunate to talk to over three dozen people coming at AI and its possible risks from basically every angle. You can find a full episode transcript here.

    My next guest is about as responsible as anybody for the state of AI capabilities today. But he's recently begun to wonder whether the field he spent his life helping build might lead to the end of the world. Following in the tradition of the Manhattan Project physicists who later opposed the hydrogen bomb, Dr. Yoshua Bengio started warning last year that advanced AI systems could drive humanity extinct.

    (I’ve started a Substack since my last episode was released. You can subscribe here.)

    The Jacobin story asked if AI poses an existential threat to humanity, but it also introduced the roiling three-sided debate around that question. Two of the sides, AI ethics and AI safety, are often pitched as standing in opposition to one another. It's true that the AI ethics camp often argues that we should focus on the immediate harms posed by existing AI systems, and that the existential risk arguments overhype the capabilities of those systems and distract from their immediate harms. It's also the case that many of the people focused on mitigating existential risks from AI don't really engage with those immediate issues. But Dr. Bengio is a counterexample on both counts: he has spent years working on AI ethics and the immediate harms from AI systems, yet he also worries that advanced AI systems pose an existential risk to humanity. And he argues in our interview that the choice between AI ethics and AI safety is a false one: it's possible to have both.
    Yoshua Bengio is the second-most cited living scientist and one of the so-called “Godfathers of deep learning.” He and the other “Godfathers,” Geoffrey Hinton and Yann LeCun, shared the 2018 Turing Award, computing’s Nobel prize.

    In November, Dr. Bengio was commissioned to lead production of the first “State of the Science” report on the “capabilities and risks of frontier AI” — the first significant attempt to create something like the Intergovernmental Panel on Climate Change (IPCC) for AI. I spoke with him last fall while reporting my cover story for Jacobin’s winter issue, “Can Humanity Survive AI?” Dr. Bengio made waves last May when he and Geoffrey Hinton began warning that advanced AI systems could drive humanity extinct.

    We discuss:
    - His background and what motivated him to work on AI
    - Whether there's evidence for existential risk (x-risk) from AI
    - How he initially thought about x-risk
    - Why he started worrying
    - How the machine learning community's thoughts on x-risk have changed over time
    - Why reading more on the topic made him more concerned
    - Why he thinks Google co-founder Larry Page’s AI aspirations should be criminalized
    - Why labs are trying to build artificial general intelligence (AGI)
    - The technical and social components of aligning AI systems
    - The why and how of universal, international regulations on AI
    - Why good regulations will help with all kinds of risks
    - Why loss of control doesn't need to be existential to be worth worrying about
    - How AI enables power concentration
    - Why he thinks the choice between AI ethics and safety is a false one
    - Capitalism and AI risk
    - The "dangerous race" between companies
    - Leading indicators of AGI
    - Why the way we train AI models creates risks

    Background

    Since we had limited time, we jumped straight into things and didn’t cover much of the basics of the idea of AI-driven existential risk, so I’m including some quotes and background in the intro. If you’re familiar with these ideas, you can skip straight to the interview at 7:24.
    Unless stated otherwise, the below are quotes from my Jacobin story:

    “Bengio posits that future, genuinely human-level AI systems could improve their own capabilities, functionally creating a new, more intelligent species. Humanity has driven hundreds of other species extinct, largely by accident. He fears that we could be next…”

    Last May, “hundreds of AI researchers and notable figures signed an open letter stating, ‘Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.’ Hinton and Bengio were the lead signatories, followed by OpenAI CEO Sam Altman and the heads of other top AI labs.”

    “Hinton and Bengio were also the first authors of an October position paper warning about the risk of ‘an irreversible loss of human control over autonomous AI systems,’ joined by famous academics like Nobel laureate Daniel Kahneman and Sapiens author Yuval Noah Harari.”

    The “position paper warns that ‘no one currently knows how to reliably align AI behavior with complex values.’”

    The largest survey of machine learning researchers on AI x-risk was conducted in 2023. The median respondent estimated that there was a 50% chance of AGI by 2047 — a 13-year drop from a similar survey conducted just one year earlier — and that there was at least a 5% chance AGI would result in an existential catastrophe.

    The October “Managing AI Risks” paper states:

    There is no fundamental reason why AI progress would slow or halt when it reaches human-level abilities. . . . Compared to humans, AI systems can act faster, absorb more knowledge, and communicate at a far higher bandwidth. Additionally, they can be scaled to use immense computational resources and can be replicated by the millions.
    “Here’s a stylized version of the idea of ‘population’ growth spurring an intelligence explosion: if AI systems rival human scientists at research and development, the systems will quickly proliferate, leading to the equivalent of an enormous number of new, highly productive workers entering the economy. Put another way, if GPT-7 can perform most of the tasks of a human worker and it only costs a few bucks to put the trained model to work on a day’s worth of tasks, each instance of the model would be wildly profitable, kicking off a positive feedback loop. This could lead to a virtual ‘population’ of billions or more digital workers, each worth much more than the cost of the energy it takes to run them. [OpenAI chief scientist Ilya] Sutskever thinks it’s likely that ‘the entire surface of the earth will be covered with solar panels and data centers.’”

    “The fear that keeps many x-risk people up at night is not that an advanced AI would ‘wake up,’ ‘turn evil,’ and decide to kill everyone out of malice, but rather that it comes to see us as an obstacle to whatever goals it does have. In his final book, Brief Answers to the Big Questions, Stephen Hawking articulated this, saying, ‘You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants.’”

    Links:
    - How We Can Have AI Progress Without Sacrificing Safety or Democracy by Yoshua Bengio and Daniel Privitera in TIME Magazine
    - AI extinction open letter
    - AI and Catastrophic Risk by Yoshua Bengio in the Journal of Democracy
    - Regulating advanced artificial agents by Michael K. Cohen, Noam Kolt, Yoshua Bengio, Gillian K. Hadfield, and Stuart Russell in Science
    - How Rogue AIs May Arise by Yoshua Bengio
    - FAQ on Catastrophic AI Risks by Yoshua Bengio

    Episode art by Ricardo Santos for Jacobin.

    48 min
  2. 08/06/2023

    34 - Carl Robichaud on Oppenheimer and Dealing with Nukes

    Carl Robichaud is the first person I go to on the topic of nuclear weapons. He has been working as a grantmaker and analyst of nuclear weapons policy for close to two decades. He co-leads nuclear security grantmaking at Longview Philanthropy, where I used to work as a media consultant. Prior to Longview, Carl led nuclear grantmaking for the Carnegie Corporation of New York. We recently saw Oppenheimer together and decided to have a discussion about the film, the real history, and nuclear weapons more broadly.

    This episode is being released on the 78th anniversary of the Hiroshima bombing. The Nagasaki bombing happened just three days later, after the Japanese emperor had already secretly decided to surrender. As we discuss, the fact that nuclear weapons have not been used in war in the nearly eight decades since should be seen as either a remarkable achievement or a sign of extreme luck.

    We have a spoiler-filled discussion of the new film Oppenheimer and the real history until 31:12, in case you’d like to skip ahead.
    We discuss:
    - Alternatives to bombing Hiroshima and Nagasaki
    - The controversial development of the hydrogen bomb
    - Oppenheimer's retrospective feelings about the bomb
    - Health effects of nuclear tests
    - Why the world isn't totally full of nukes
    - Whether ICBMs can be justified while nuclear subs exist
    - Why the US won't commit to no first use
    - How arms control agreements help get out of traps
    - Ukraine and the possible breaking of the nuclear taboo
    - Would we all die
    - Near misses
    - Whether there's always a “guy in the chair”
    - What we should do
    - Aspiring for a world free of nuclear weapons
    - Calls to action
    - The decline of activist and philanthropic interest in nuclear weapons

    Links:
    - Estimated nuclear warhead stockpiles, 1945 to 2022
    - “The Illogic of Nuclear Escalation” by Fred Kaplan in Asterisk Magazine
    - The Doomsday Machine by Daniel Ellsberg
    - Humankind by Rutger Bregman, and my discussion of the book with Rutger
    - “The Atomic Bombings Reconsidered” by Barton J. Bernstein in Foreign Affairs
    - “Counting the dead at Hiroshima and Nagasaki” by Alex Wellerstein in the Bulletin of the Atomic Scientists
    - “The Puzzle of Non-Proliferation” by Carl in Asterisk Magazine
    - Inheriting the Bomb: The Collapse of the USSR and the Nuclear Disarmament of Ukraine by Mariana Budjeryn
    - 80,000 Hours Podcast: Daniel Ellsberg on the creation of nuclear doomsday machines, the institutional insanity that maintains them, and a practical plan for dismantling them
    - “Ronald Reagan’s Disarmament Dream” by Jacob Weisberg in the Atlantic
    - How many people would be killed as a direct result of a US-Russia nuclear exchange? by Luisa Rodriguez
    - Wikipedia: List of nuclear close calls
    - “39 years ago today, one man saved us from world-ending nuclear war” by Dylan Matthews in Vox
    - Longview Philanthropy: Nuclear Weapons Policy Fund
    - “The biggest funder of anti-nuclear war programs is taking its money away” by Dylan Matthews in Vox
    - General Advisory Committee's Majority and Minority Reports on Building the H-Bomb, October 30, 1949
    - J. Robert Oppenheimer, Address to the American Philosophical Society, delivered 16 November 1945, University of Pennsylvania, PA

    1h 41m
  3. 10/26/2022

    33 - Habiba Islam on the Left and Effective Altruism

    [This episode was recorded before the FTX collapse. It contains some discussion of Sam Bankman-Fried. Habiba has asked me to pass on that, to say the least, she no longer endorses what she says about Sam as an example of someone doing good. I've also linked in the show notes to her Twitter thread with her thoughts on FTX.]

    This episode is a long time in the making. We’re going deep on the intersection of effective altruism (EA) and the left.

    When I tell people that I’m a leftist and into effective altruism, they’re often surprised. A lot of the recent criticism of EA from the left may make it seem like the ideas and communities are incompatible, leading people to genuinely ask: can you be an effective altruist and a leftist? I think you can. But that doesn’t mean there aren’t real tensions between the two approaches to improving the world.

    This is not meant to be a point-by-point rebuttal of any criticisms of EA or the left. Instead, I wanted to better understand myself how these ideas interact.

    To discuss this, I brought on Habiba Islam. Habiba is a career advisor for 80,000 Hours, an organization that helps people find high-impact careers. 80,000 Hours grew out of the effective altruism movement, but Habiba also identifies as a leftist. As you’ll soon discover, Habiba has given these ideas a lot of thought and helped clarify a lot of longstanding confusions for me.

    We go through our backgrounds with the left and EA and attempt to define each. We then go through hidden agreements EA and the left have, misconceptions each has about the other, and the real disagreements between EA in practice and the left.

    When I first got into EA and left politics, I had grand plans to try to reconcile the two. I felt like EA’s commitment to prioritization, responding to evidence, and doing whatever works could help make the left better at achieving its goals.
    And I thought that the left’s ability to build movements, shape narratives, analyze power, and understand history could shore up some major blind spots within EA. Time has tempered my ambitions a bit, and I think there are good reasons why the left and EA will and should remain distinct things. But there is still a lot each can learn from the other.

    Left critiques of EA:
    - 2015 LRB essay on effective altruism: Stop the Robot Apocalypse
    - Jacobin: Against Charity: Rather than creating an individualized “culture of giving,” we should be challenging capitalism’s institutionalized taking.

    Show notes:
    - Paper: Effective Altruism and Anti-Capitalism: An Attempt at Reconciliation
    - Vox: Caring about the future doesn’t mean ignoring the present: Effective altruism hasn’t abandoned its roots.
    - Winners of the EA Criticism and Red Teaming Contest
    - Jacobin: The Socialist Case for Longtermism
    - Book: The Birth of the Pill: How Four Crusaders Reinvented Sex and Launched a Revolution
    - TED Talk: How civilization could destroy itself -- and 4 ways we could prevent it | Nick Bostrom
    - Book: Four Futures: Life After Capitalism
    - Paper: The Fallacy of Philanthropy by Paul Gomberg
    - Wikipedia: TRIPS Agreement
    - Effective Altruism Forum: Growth and the case against randomista development
    - Effective Altruism Forum: Tax Havens and the case for Tax Justice
    - Effective Altruism Forum: Cause area proposal: International Macroeconomic Policy
    - How Rich Am I? Find out how rich you are compared to the rest of the world – are you on the global rich list?
    - Slow Boring: The rise and importance of Secret Congress

    2h 35m
  4. 10/04/2022

    32 - Rutger Bregman on Why People Are Decent, Effective Altruism, and Causing Tucker Carlson’s Meltdown

    Rutger Bregman is the bestselling author of Utopia for Realists and Humankind: A Hopeful History. He has been profiled in the New York Times and interviewed on the Daily Show. Rupert Murdoch has been spotted reading his book, and Tucker Carlson called him a “f*****g moron.”

    I first came across Rutger years ago when a friend was reading Utopia for Realists. The book, which argues for UBI, open borders, and a 15-hour work week, intrigued me, but I’m ashamed to admit I haven’t read it.

    He popped back up on my radar when he appeared at Davos, the annual gathering of the super-wealthy, and lambasted his audience for not talking about taxes. The viral moment he created led to an invitation onto Tucker Carlson’s show, where Rutger’s challenge to the Fox News host led to what can only be described as a meltdown. In our interview, Rutger goes deeper into the full story of both events than I’ve seen anywhere else.

    We spend the bulk of the interview discussing his book Humankind, which argues that people are actually pretty decent, but power corrupts. This is one of my favorite books, and I can’t recommend it highly enough.

    We wrap up with a discussion of Rutger’s relationship with effective altruism, the philosophy and social movement trying to do as much as possible to improve the world.
    In particular, we discuss:
    - His career and the publication of Utopia for Realists
    - The unlikely success of the book
    - His trip to Davos
    - Making Tucker Carlson lose his mind
    - Veneer theory and why Rousseau is underrated
    - How people actually behave in disasters
    - Why carpet bombing cities backfires
    - Why distance kills
    - The domestication of humans
    - Why socializing makes us smart
    - The problems with Milgram's shock experiments
    - The replication crisis
    - Criticisms of Rutger’s portrayal of hunter-gatherer life
    - His journey to effective altruism
    - His ideas for solving EA’s billionaire problem
    - His plans for an EA-adjacent book
    - The broader changes to EA over the years
    - Hijacking status for good
    - How committing your career to helping others might actually make you happiest

    Links:
    - Discourse on Inequality
    - The Doomsday Machine
    - Violence
    - The Secret of Our Success
    - The Dawn of Everything
    - The real Lord of the Flies: what happened when six boys were shipwrecked for 15 months
    - The Possibility of an Ongoing Moral Catastrophe
    - Giving What We Can
    - Grilled
    - TMIPIK - Leah Garcés on Working with Factory Farmers to Help Animals
    - If You’re an Egalitarian, How Come You’re So Rich?
    - Famine, Affluence, and Morality
    - Yes, it’s all the fault of Big Oil, Facebook and ‘the system’. But let’s talk about you this time

    2h 11m
  5. 05/11/2021

    31 - Alexander Zaitchik on How Bill Gates Impeded Global Access to Covid Vaccines

    Alexander Zaitchik is a freelance journalist and author with work in The New Republic, The Nation, The Guardian, and elsewhere. Zaitchik has written two books, one about Glenn Beck and another exploring Trump’s America. He’s working on a third, out in January 2022, called Owning the Sun: A People’s History of Monopoly Medicine, from Aspirin to Covid-19.

    This episode is about one of the most important stories in the world right now: global vaccine production and distribution. Alex wrote a long-form investigation in the New Republic called “How Bill Gates Impeded Global Access to Covid Vaccines”, which goes deep into the global intellectual property paradigm that is limiting vaccine production and the people who defend it.

    We recorded this episode before the US announced support for some kind of waiver on vaccine patents. It’s important to note that the US did not back the TRIPS waiver proposed by South Africa and India in October 2020. The US is also reportedly concerned that sharing information would undermine American competitiveness with China and Russia in biopharmaceuticals. The idea that it would be bad if more countries developed the ability to make advanced vaccines is emblematic of the harms of prioritizing profit-making in an industry so essential to human wellbeing. A source in the Biden administration also said the negotiations are expected to take months.

    Last Thursday, the Gates Foundation reversed course and supported a temporary suspension of IP rights on Covid vaccines. The Foundation’s statement cites the number of cases in Brazil and India as a reason to support the suspension. But Bill Gates was pushing against any efforts to suspend IP protections right until the US supported some kind of waiver. Gates’ firm position for over a year has been that IP protections play zero role in limiting vaccine supply, but now his foundation supports suspending those protections because we need to increase vaccine supply so badly.
    Either Gates recently came across some really persuasive evidence, or public opinion actually can still matter.

    As I record this, India is being ravaged by Covid. Yesterday, nearly 400,000 new cases were reported, a number which almost certainly represents a small fraction of true cases. Less than 10 percent of the country has received even one dose of vaccine. Hospitals and crematoria alike are overwhelmed, and there is an acute shortage of wood due to the sheer number of deaths. Domestic policy failures of the Modi government play a big role in this story, but so too do the choices of pharmaceutical firms and their client governments, like the United States and other rich countries.

    We cover a lot of ground and dispel a lot of myths propagated by the pharmaceutical industry. We specifically discuss:
    - Gates’ heavily managed perception as a do-gooder
    - His approach to public health and what opportunities it forecloses
    - How Gates' ideological investments run deeper than his financial ones
    - The affirmative case for IP protections in drug development
    - The problems with that case
    - Alternative models of incentivizing drug development
    - The incentives the current system creates
    - A brief history of drug development in the US
    - How the US military developed a majority of successful vaccines made in the 20th century
    - The story of South Africa and AIDS drugs
    - The TRIPS waiver proposal
    - Whether it's true that IP is the reason we aren't maximizing vaccine production
    - Moderna’s empty promise to not enforce their patents
    - The argument that profit motives haven’t been strong enough
    - The PR boon vaccines have been for big pharma
    - What a fully public response could have looked like
    - A response to Gates’ argument that IP is necessary for quality control
    - How a tech billionaire became the de facto global public health czar
    - The role he really plays in the public health space

    I think this is one of the most important episodes of the show so far. So much rides on whether governments make decisions that prioritize global public health, even if they come at the expense of the profits of one industry.

    Buy Alex's book in January 2022.

    Alex’s writing:
    - How Bill Gates Impeded Global Access to Covid Vaccines
    - No Vaccine in Sight
    - Moderna’s Pledge Not to Enforce the Patents on Their COVID-19 Vaccine Is Worthless

    Links:
    - They Pledged to Donate Rights to Their COVID Vaccine, Then Sold Them to Pharma
    - Goldman Sachs asks in biotech research report: ‘Is curing patients a sustainable business model?’
    - TRIPS waiver: there’s more to the story than vaccine patents
    - Myths of Vaccine Manufacturing
    - Views from a vaccine manufacturer: Q&A - Abdul Muktadir, Incepta Pharmaceuticals; Pandemic Treaty Action
    - Video of Gates responding to criticism of his push to close-source the Oxford vaccine

    1h 29m
  6. 03/03/2021

    30 - Tobias Leenaert on the Pragmatic Path to a Vegan World

    Tobias Leenaert is the author of How to Create a Vegan World: A Pragmatic Approach, which has been translated into five languages. He is the cofounder of ProVeg International, which aims to reduce the consumption of animal products by 50% by 2040. Tobias also writes the Vegan Strategist blog, where he shares strategies for convincing people to reduce their animal product consumption.

    We discuss:
    - the difference between pragmatism and idealism in animal advocacy
    - why intentions matter less than we think
    - “vegalomania” and whether a vegan diet is really the healthiest
    - when behavior change leads belief change
    - how vegetarians reduce almost as much harm as vegans
    - how reducetarians do more for animals than vegans
    - how much easier it's gotten to be vegan
    - veganism's bad brand and why so many people hate on vegans
    - a thought experiment for vegans
    - why strict veganism can be counterproductive
    - how you can help animals without being a vegan or vegetarian
    - where analogies between animal agriculture and other crimes break down
    - how to be an effective animal advocate
    - what he’s most looking forward to

    I think this episode is useful both for vegetarian and vegan activists and for people who are interested in consuming fewer animal products but aren’t sure how.

    Links:
    - Vegetarians reducing almost as much suffering as vegans
    - 60% of veggies ate meat in last 24 hrs
    - Most Americans didn't approve of Martin Luther King Jr. before his death, polls show
    - Chomsky on the lack of early meaningful opposition to the Vietnam war
    - Rules for Radicals

    1h 6m
  7. 01/04/2021

    28 - David Shor on Why Bernie Would Have Won in 2016 - But Not in 2020

    David Shor is a data scientist and the former head of political data science for Civis Analytics, a Democratic think tank. In 2012, he developed the Obama campaign’s in-house election forecasting system, which accurately predicted the outcome to within a point in every state. David was the subject of some controversy this summer when he was fired following his tweeting of an academic paper. The paper argued that violent protests decreased the Democratic presidential vote share while nonviolent protests increased it. Unfortunately, David is not at liberty to discuss the details of this incident, which is an excellent example of what happens when employment protections don’t exist.

    I want to state up front that the focus of this episode is on how to improve the electoral prospects of Democrats, which is David’s expertise. I have many disagreements with the Democratic party and its leaders, and there are many pathways to power beyond electoral politics. But America’s political institutions are extremely powerful, and ensuring that they are controlled by the non-death-cult party is important.
    We discuss:
    - What happened in the 2020 election
    - Why the electoral college is biased towards Republicans
    - Efforts to combat structural bias against the Democratic party
    - Why the polls were wrong again and why they’ll be very hard to fix
    - Why Bernie would have won in 2016 but may not have in 2020
    - How Democratic staffers and left-wing activists are massively unrepresentative of the American public
    - The electoral obstacles to passing Medicare for All and how to make the policy more politically popular
    - Policies that combat inequality without raising taxes
    - Whether Democrats actually want to win
    - Why Democrats need the working class to win power
    - Why good politicians stay relentlessly on message
    - How we can move voters towards policy positions we think are just
    - Why Democrats should talk more about issues and less about values
    - What we can learn from the growth in support for same-sex marriage
    - The importance of getting the media on your side

    Links:
    - National Popular Vote Interstate Compact
    - Matt Grossman on Twitter
    - David Shor on Twitter

    1h 14m