6 episodes

Future Matters is a newsletter about longtermism, the view that positively influencing the long-term future is a key moral priority of our time. Each month, we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent longtermist. Find us at www.futurematters.news

Future Matters, by Matthew van der Merwe & Pablo Stafforini

    • Society & Culture

    #5: supervolcanoes, AI takeover, and What We Owe the Future

    Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, read on the EA Forum and follow on Twitter.

    00:00 Welcome to Future Matters.
    01:08 MacAskill — What We Owe the Future.
    01:34 Lifland — Samotsvety's AI risk forecasts.
    02:11 Halstead — Climate Change and Longtermism.
    02:43 Good Judgment — Long-term risks and climate change.
    02:54 Thorstad — Existential risk pessimism and the time of perils.
    03:32 Hamilton — Space and existential risk.
    04:07 Cassidy & Mani — Huge volcanic eruptions.
    04:45 Boyd & Wilson — Island refuges for surviving nuclear winter and other abrupt sun-reducing catastrophes.
    05:28 Hilton — Preventing an AI-related catastrophe.
    06:13 Lewis — Most small probabilities aren't Pascalian.
    07:04 Yglesias — What's long-term about “longtermism”?
    07:33 Lifland — Prioritizing x-risks may require caring about future people.
    08:40 Karnofsky — AI strategy nearcasting.
    09:11 Karnofsky — How might we align transformative AI if it's developed very soon?
    09:51 Matthews — How effective altruism went from a niche movement to a billion-dollar force.
    10:28 News.
    14:28 Conversation with Ajeya Cotra.
    15:02 What do you mean by human feedback on diverse tasks (HFDT) and what made you focus on it?
    18:08 Could you walk us through the three assumptions you make about how this scenario plays out?
    20:49 What are the key properties of the model you call Alex?
    22:55 What do you mean by “playing the training game”, and why would Alex behave in that way?
    24:34 Can you describe how deploying Alex would result in a loss of human control?
    29:40 Can you talk about the sorts of specific countermeasures to prevent takeover?

    • 31 min
    #4: AI timelines, AGI risk, and existential risk from climate change

    Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, read on the EA Forum and follow on Twitter.

    00:00 Welcome to Future Matters
    01:11 Steinhardt — AI forecasting: one year in
    01:52 Davidson — Social returns to productivity growth
    02:26 Brundage — Why AGI timeline research/discourse might be overrated
    03:03 Cotra — Two-year update on my personal AI timelines
    03:50 Grace — What do ML researchers think about AI in 2022?
    04:43 Leike — On the windfall clause
    05:35 Cotra — Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover
    06:32 Maas — Introduction to strategic perspectives on long-term AI governance
    06:52 Hadshar — How moral progress happens: the decline of footbinding as a case study
    07:35 Trötzmüller — Why EAs are skeptical about AI safety
    08:08 Schubert — Moral circle expansion isn’t the key value change we need
    08:52 Šimčikas — Wild animal welfare in the far future
    09:51 Heikkinen — Strong longtermism and the challenge from anti-aggregative moral views
    10:28 Rational Animations — Video on Karnofsky's Most important century
    11:23 Other research
    12:47 News
    15:00 Conversation with John Halstead
    15:33 What level of emissions should we reasonably expect over the coming decades?
    18:11 What do those emissions imply for warming?
    20:52 How worried should we be about the risk of climate change from a longtermist perspective?
    26:53 What is the probability of an existential catastrophe due to climate change?
    27:06 Do you think EAs should fund modelling work of tail risks from climate change?
    28:45 What would be the best use of funds?

    • 31 min
    #3: digital sentience, AGI ruin, and forecasting track records

    Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent longtermist. You can also subscribe on Substack, read on the EA Forum and follow on Twitter.

    00:00 Welcome to Future Matters
    01:11 Long — Lots of links on LaMDA
    01:48 Lovely — Do we need a better understanding of 'progress'?
    02:11 Base — Things usually end slowly
    02:47 Yudkowsky — AGI ruin: a list of lethalities
    03:38 Christiano — Where I agree and disagree with Eliezer
    04:31 Garfinkel — On deference and Yudkowsky's AI risk estimates
    05:13 Karnofsky — The track record of futurists seems … fine
    06:08 Aaronson — Joining OpenAI to work on AI safety
    06:52 Shiller — The importance of getting digital consciousness right
    07:53 Pilz — Germans' opinions on translations of “longtermism”
    08:33 Karnofsky — AI could defeat all of us combined
    09:36 Beckstead — Future Fund June 2022 update
    11:02 News
    14:45 Conversation with Robert Long
    15:05 What artificial sentience is and why it’s important
    16:56 “The Big Question” and the assumptions on which it depends
    19:30 How problems arising from AI agency and AI sentience compare in terms of importance, neglectedness, tractability
    21:57 AI sentience and the alignment problem
    24:01 The Blake Lemoine saga and the quality of the ensuing public discussion
    26:29 The risks of AI sentience becoming lumped in with certain other views
    27:55 How to deal with objections coming from different frameworks
    28:50 The analogy between AI sentience and animal welfare
    30:10 The probability of large language models like LaMDA and GPT-3 being sentient
    32:41 Are verbal reports strong evidence for sentience?

    • 34 min
    #2: Clueless skepticism, 'longtermist' as an identity, and nanotechnology strategy research

    Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent longtermist. You can also subscribe on Substack, read on the EA Forum and follow on Twitter.

    00:01 Welcome to Future Matters
    01:25 Schubert — Against cluelessness
    02:23 Carlsmith — Presentation on existential risk from power-seeking AI
    03:45 Vaintrob — Against "longtermist" as an identity
    04:30 Bostrom & Shulman — Propositions concerning digital minds and society
    05:02 MacAskill — EA and the current funding situation
    05:51 Beckstead — Some clarifications on the Future Fund's approach to grantmaking
    06:46 Caviola, Morrisey & Lewis — Most students who would agree with EA ideas haven't heard of EA yet
    07:32 Villalobos & Sevilla — Potatoes: A critical review
    08:09 Ritchie — How we fixed the ozone layer
    08:57 Snodin — Thoughts on nanotechnology strategy research
    09:31 Cotton-Barratt — Against immortality
    09:50 Smith & Sandbrink — Biosecurity in the age of open science
    10:30 Cotton-Barratt — What do we want the world to look like in 10 years?
    10:52 Hilton — Climate change: problem profile
    11:30 Ligor & Matthews — Outer space and the veil of ignorance
    12:21 News
    14:46 Conversation with Ben Snodin

    • 23 min
    #1: AI takeoff, longtermism vs. existential risk, and probability discounting

    The remedies for all our diseases will be discovered long after we are dead; and the world will be made a fit place to live in. It is to be hoped that those who live in those days will look back with sympathy to their known and unknown benefactors.
    — John Stuart Mill

    Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent longtermist. You can also subscribe on Substack, read on the EA Forum and follow on Twitter.

    Research
    Scott Alexander's "Long-termism" vs. "existential risk" worries that “longtermism” may be a worse brand (though not necessarily a worse philosophy) than “existential risk”. It seems much easier to make someone concerned about transformative AI by noting that it might kill them and everyone else, than by pointing out its effects on people in the distant future. We think that Alexander raises a valid worry, although we aren’t sure the worry favors the “existential risk” branding over the “longtermism” branding as much as he suggests: existential risks are, after all, defined as risks to humanity's long-term potential. Both of these concepts, in fact, attempt to capture the core idea that what ultimately matters is mostly located in the far future: existential risk uses the language of “potential” and emphasizes threats to it, whereas longtermism instead expresses the idea in terms of value and the duties it creates. Maybe the “existential risk” branding seems to address Alexander’s worry better because it draws attention to the threats to this value, which are disproportionately (but not exclusively) located in the short-term, while the “longtermism” branding emphasizes instead the determinants of value, which are in the far future.
    In General vs AI-specific explanations of existential risk neglect, Stefan Schubert asks why we systematically neglect existential risk. The standard story invokes general explanations, such as cognitive biases and coordination problems. But Schubert notes that people seem to have specific biases that cause them to underestimate AI risk, e.g. it sounds outlandish and counter-intuitive. If unaligned AI is the greatest source of existential risk in the near-term, then these AI-specific biases could explain most of our neglect.
    Max Roser’s The future is vast is a powerful new introduction to longtermism. His graphical representations do well to convey the scale of humanity’s potential, and have made it onto the Wikipedia entry for longtermism.
    Thomas Kwa’s Effectiveness is a conjunction of multipliers makes the important observation that (1) a person’s impact can be decomposed into a series of impact “multipliers” and that (2) these terms interact multiplicatively, rather than additively, with each other. For example, donating 80% instead of 10% multiplies impact by a factor of 8 and earning $1m/year instead of $250k/year multiplies impact by a factor of 4; but doing both of these things multiplies impact by a factor of 32. Kwa shows that many other common EA choices are best seen as multipliers of impact, and notes that multipliers related to judgment and ambition are especially important for longtermists.
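    As a rough illustration of the multiplicative point, here is a minimal sketch (the donation and income figures are the ones from Kwa's example above; the helper function and variable names are ours, not anything from the original post):

```python
# Illustrative sketch of the "conjunction of multipliers" idea summarized above.
# The donation and income figures come from the example in the summary; the
# helper function itself is hypothetical, not from Kwa's post.

def combined_multiplier(*multipliers: float) -> float:
    """Impact multipliers compose multiplicatively, not additively."""
    result = 1.0
    for m in multipliers:
        result *= m
    return result

donation_multiplier = 0.80 / 0.10        # donating 80% instead of 10%  -> 8x
income_multiplier = 1_000_000 / 250_000  # earning $1m instead of $250k -> 4x

print(combined_multiplier(donation_multiplier, income_multiplier))  # 32.0
# An additive model would instead give 8 + 4 = 12, understating the combined effect.
```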
    The first installment in a series on “learning from crisis”, Jan Kulveit's Experimental longtermism: theory needs data (co-written with Gavin Leech) recounts the author's motivation to launch Epidemic Forecasting, a modelling and forecasting platform that sought to present probabilistic data to decision-makers and the general public. Kulveit realized that his "longtermist" models had relatively straightforward implications for the COVID pandemic, such that trying to apply them to this case (1) had the potential to make a direct, positive difference to the crisis and (2) afforded an opportunity to experimentally test those models.

    • 29 min
    #0: Space governance, future-proof ethics, and the launch of the Future Fund

    > We think our civilization near its meridian, but we are yet only at the cock-crowing and the morning star.
    > — Ralph Waldo Emerson
    Welcome to Future Matters, a newsletter about longtermism brought to you by Matthew van der Merwe & Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent longtermist. Future Matters is crossposted to the Effective Altruism Forum and available as a podcast.

    Research
    We are typically confident that some things are conscious (humans), and that some things are not (rocks); other things we’re very unsure about (insects). In this post, Amanda Askell shares her views about AI consciousness. It seems unlikely that current AI systems are conscious, but they are improving, and there’s no great reason to think we will never _create_ conscious AI systems. This matters because consciousness is morally relevant: we tend to think that if something is conscious, we shouldn’t harm it for no good reason. Since it’s much worse to mistakenly _deny_ something moral status than to mistakenly attribute it, we should take a cautious approach when it comes to AI: if we ever have reason to believe some AI system is conscious, we should start to treat it as a moral patient. This makes it important and urgent that we develop tools and techniques to assess whether AI systems are conscious, along with related questions, e.g. whether they are suffering.
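    To make the asymmetry concrete, here is a minimal sketch of the expected-cost comparison behind the cautious approach (all numbers are hypothetical and purely illustrative; they are not from Askell's post):

```python
# Hypothetical illustration of the asymmetric-error argument summarized above.
# The credence and cost figures below are made up for illustration only.

p_conscious = 0.10            # assumed credence that a given AI system is conscious
cost_wrongly_deny = 100.0     # assumed harm of treating a conscious system as a mere tool
cost_wrongly_attribute = 1.0  # assumed cost of extending protections to a non-conscious system

# Expected cost of each policy under uncertainty:
expected_cost_ignore = p_conscious * cost_wrongly_deny               # 10.0
expected_cost_protect = (1 - p_conscious) * cost_wrongly_attribute   # 0.9

print(expected_cost_ignore > expected_cost_protect)  # True: caution has lower expected cost
# Because mistakenly denying moral status is taken to be far costlier than mistakenly
# attributing it, even a modest credence in consciousness favours the cautious policy.
```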
    The leadership of the Global Catastrophic Risk Institute issued a Statement on the Russian invasion of Ukraine. The authors consider the effects of the invasion on (1) risks of nuclear war and (2) other global catastrophic risks. They argue that the conflict increases the risk of both intentional and inadvertent nuclear war, and that it may increase other risks primarily via its consequences on climate change, on China, and on international relations.
    Earlier this year, Hunga Tonga-Hunga Ha'apai—a submarine volcano in the South Pacific—produced what appears to be the largest volcanic eruption of the last 30 years. In What can we learn from a short preview of a super-eruption and what are some tractable ways of mitigating, Mike Cassidy and Lara Mani point out that this event and its cascading impacts provide a glimpse into the possible effects of a much larger eruption, which could be comparable in intensity but much longer in duration. The main lessons the authors draw are that humanity was unprepared for the eruption and that its remote location dramatically minimized its impacts. To better prepare for these risks, the authors propose better identifying the volcanoes capable of large enough eruptions and the regions most affected by them; building resilience by investigating the role that technology could play in disaster response and by enhancing community-led resilience mechanisms; and mitigating the risks by research on removal of aerosols from large explosive eruptions and on ways to reduce the explosivity of eruptions by fracking or drilling.
    The second part in a three-part series on great power conflict, Stephen Clare's How likely is World War III? attempts to estimate the probability of great power conflict this century, as well as its severity should it occur. Tentatively, Clare assigns a 45% chance to a confrontation between great powers by 2100, an 8% chance of a war much worse than World War II, and a 1% chance of a war causing human extinction. Note that some of the key sources in Clare's analysis rely on the Correlates of War dataset, which is less informative about long-run trends in global conflict than is generally assumed; see Ben Garfinkel's comment for discussion.
    Holden Karnofsky emails Tyler Cowen to make a very concise case that there’s at least a 1 in 3 chance we develop transformative AI this century (summarizing his earlier blogpost). There are some very different approaches to AI forecasting al