The remedies for all our diseases will be discovered long after we are dead; and the world will be made a fit place to live in. It is to be hoped that those who live in those days will look back with sympathy to their known and unknown benefactors.

— John Stuart Mill

Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent longtermist. You can also subscribe on Substack, read on the EA Forum, and follow on Twitter.

Research

Scott Alexander's "Long-termism" vs. "existential risk" worries that “longtermism” may be a worse brand (though not necessarily a worse philosophy) than “existential risk”. It seems much easier to make someone concerned about transformative AI by noting that it might kill them and everyone else than by pointing out its effects on people in the distant future. We think Alexander raises a valid worry, although we aren’t sure it favors the “existential risk” branding over the “longtermism” branding as much as he suggests: existential risks are, after all, defined as risks to humanity's long-term potential. Both concepts attempt to capture the core idea that what ultimately matters is mostly located in the far future: existential risk uses the language of “potential” and emphasizes threats to it, whereas longtermism expresses the idea in terms of value and the duties that value creates. Perhaps the “existential risk” branding seems to address Alexander’s worry better because it draws attention to the threats to this value, which are disproportionately (but not exclusively) located in the short term, while the “longtermism” branding instead emphasizes the determinants of value, which lie in the far future.

In General vs AI-specific explanations of existential risk neglect, Stefan Schubert asks why we systematically neglect existential risk. The standard story invokes general explanations, such as cognitive biases and coordination problems. But Schubert notes that people seem to have specific biases that cause them to underestimate AI risk in particular: for example, it sounds outlandish and counterintuitive. If unaligned AI is the greatest source of existential risk in the near term, then these AI-specific biases could explain most of our neglect.

Max Roser’s The future is vast is a powerful new introduction to longtermism. His graphical representations do well to convey the scale of humanity’s potential, and have made it onto the Wikipedia entry for longtermism.

Thomas Kwa’s Effectiveness is a conjunction of multipliers makes the important observation that (1) a person’s impact can be decomposed into a series of impact “multipliers” and (2) these terms interact multiplicatively, rather than additively, with each other. For example, donating 80% of income instead of 10% multiplies impact by a factor of 8, and earning $1m/year instead of $250k/year multiplies it by a factor of 4; but doing both multiplies impact by a factor of 32. Kwa shows that many other common EA choices are best seen as multipliers of impact, and notes that multipliers related to judgment and ambition are especially important for longtermists.
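To make the arithmetic behind Kwa’s observation concrete, here is a minimal sketch in Python, using the illustrative figures from the example above (the variable names are ours, not Kwa’s); it shows why the two improvements combine to a 32x multiplier rather than simply adding up.

```python
# Minimal sketch of the multiplicative picture of impact, using the
# illustrative figures from the example above (all names are ours).
from math import prod

baseline_donation_share = 0.10   # donating 10% of income
improved_donation_share = 0.80   # donating 80% of income
baseline_income = 250_000        # $250k/year
improved_income = 1_000_000      # $1m/year

multipliers = [
    improved_donation_share / baseline_donation_share,  # 8x from donating more
    improved_income / baseline_income,                   # 4x from earning more
]

# The factors compose multiplicatively: 8 * 4 = 32, not 8 + 4 = 12.
print(prod(multipliers))  # 32.0
```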
The first installment in a series on “learning from crisis”, Jan Kulveit's Experimental longtermism: theory needs data (co-written with Gavin Leech) recounts the author's motivation to launch Epidemic Forecasting, a modelling and forecasting platform that sought to present probabilistic data to decision-makers and the general public. Kulveit realized that his "longtermist" models had relatively straightforward implications for the COVID pandemic, such that trying to apply them to this case (1) had the potential to make a direct, positive difference to the crisis and (2) afforded an opportunity to test those models experimentally. While the first of these effects had obvious appeal, Kulveit considers the second especially important from a longtermist perspective: attempts to think about the long-term future lack rapid feedback loops, and disciplines that aren't tightly anchored to empirical reality are much more likely to go astray. He concludes that longtermists should engage more often in this type of experimentation, and generally pay more attention to the longtermist value of information that "near-termist" projects can sometimes provide.

Rhys Lindmark’s FTX Future Fund and Longtermism considers the significance of the Future Fund within the longtermist ecosystem by examining trends in EA funding over time. Interested readers should look at the charts in the original post for details, but the rough picture is that Open Philanthropy has allocated about 20% of its budget to longtermist causes in recent years, accounting for about 80% of all longtermist grantmaking. On the assumption that Open Phil gives $200 million to longtermism in 2022, the Future Fund’s lower-bound target of $100 million already positions it as the second-largest longtermist grantmaker, with roughly a 30% share. Lindmark’s analysis prompted us to create a Metaculus question on whether the Future Fund will give more than Open Philanthropy to longtermist causes in 2022. At the time of publication (22 April 2022), the community predicts that the Future Fund is 75% likely to outspend Open Philanthropy.

Holden Karnofsky's Debating myself on whether “extra lives lived” are as good as “deaths prevented” is an engaging imaginary dialogue between a proponent and an opponent of Total Utilitarianism. Karnofsky manages to cover many of the key debates in population ethics—including those surrounding the Intuition of Neutrality, the Procreation Asymmetry, the Repugnant and Very Repugnant Conclusions, and the impossibility of Theory X—in a highly accessible yet rigorous manner. Overall, this blog post struck us as one of the best popular, informal introductions to the topic currently available.

Matthew Barnett shares thoughts on the risks from SETI. He argues that people underestimate the risks from passive SETI—scanning for alien signals without transmitting anything. We should consider the possibility that alien civilizations broadcast messages designed to hijack or destroy their recipients; at a minimum, we should treat alien signals with as much caution as we would a strange email attachment. However, current protocols call for the public release of any confirmed alien message, and no one seems to have given much thought to managing downside risk. Overall, Barnett estimates a 0.1–0.2% chance of extinction from SETI over the next 1,000 years. Now might be a good opportunity for longtermists to figure out, and advocate for, more sensible policies.

Scott Alexander provides an epic commentary on the long-running debate about AI Takeoff Speeds. Paul Christiano thinks it more likely that improvements in AI capabilities, and the ensuing transformative impacts on the world, will happen gradually; Eliezer Yudkowsky thinks there will be a sudden, sharp jump in capabilities around the point we build AI with human-level intelligence. Alexander presents the two perspectives with more clarity than their main proponents, and isolates some of the core disagreements. It’s the best summary of the takeoff debate we’ve come across.

Buck Shlegeris points out that takeoff speeds have a huge effect on what it means to work on AI x-risk. In fast takeoff worlds, AI risk will never be much more widely accepted than it is today, because everything will look pretty normal until we reach AGI; the majority of AI alignment work done before this point will come from the sorts of existential risk–motivated people working on alignment now. In slow takeoff worlds, by contrast, AI researchers will encounter and tackle many aspects of the alignment problem “in miniature”, before AI is powerful enough to pose an existential risk. So a large fraction of alignment work will be done by researchers motivated by normal incentives, because making AI systems that behave well is good for business. In these worlds, existential risk–motivated researchers today need to be strategic, and identify and prioritize aspects of alignment that won’t be solved “by default” in the course of AI progress. In the comments, John Wentworth argues that there will be stronger incentives to conceal alignment problems than to solve them; therefore, contra Shlegeris, he thinks AI risk will remain neglected even in slow takeoff worlds.

Linchuan Zhang’s Potentially great ways forecasting can improve the longterm future identifies several paths by which short-range forecasting can be useful from a longtermist perspective. These include (1) improving longtermist research by outsourcing research questions to skilled forecasters; (2) improving longtermist grantmaking by predicting how potential grants will be assessed by future evaluators; (3) improving longtermist outreach by making claims more legible to outsiders; and (4) improving the longtermist training and vetting pipeline by tracking forecasting performance in large-scale public forecasting tournaments. Zhang’s companion post, Early-warning Forecasting Center: What it is, and why it'd be cool, proposes the creation of an organization whose goal is to make short-range forecasts on questions of high longtermist significance. A foremost use case is early warning for AI risks, biorisks, and other existential risks. Besides outlining the basic idea, Zhang discusses some associated questions, such as why the organization should focus on short- rather than long-range forecasting, why it should be a forecasting center rather than a prediction market, and how the center should be structured.

Dylan Matthews’s The biggest funder of anti-nuclear war programs is taking its money away looks at the reasons prompting the MacArthur Foundation to withdraw its funding for nuclear security work.