Future Matters

Matthew van der Merwe, Pablo Stafforini

Future Matters is a newsletter about longtermism and existential risk by Matthew van der Merwe & Pablo Stafforini.

Episodes

  1. 03/02/2023

    #7: AI timelines, AI skepticism, and lock-in

    Future Matters is a newsletter about longtermism and existential risk by Matthew van der Merwe and Pablo Stafforini. Each month we curate and summarize relevant research and news from the community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, read on the EA Forum and follow on Twitter. Future Matters is also available in Spanish. 00:00 Welcome to Future Matters. 00:57 Davidson — What a compute-centric framework says about AI takeoff speeds. 02:19 Chow, Halperin & Mazlish — AGI and the EMH. 02:58 Hatfield-Dodds — Concrete reasons for hope about AI. 03:37 Karnofsky — Transformative AI issues (not just misalignment). 04:08 Vaintrob — Beware safety-washing. 04:45 Karnofsky — How we could stumble into AI catastrophe. 05:21 Liang & Manheim — Managing the transition to widespread metagenomic monitoring. 05:51 Crawford — Technological stagnation: why I came around. 06:38 Karnofsky — Spreading messages to help with the most important century. 07:16 Wynroe, Atkinson & Sevilla — Literature review of transformative artificial intelligence timelines. 07:50 Yagudin, Mann & Sempere — Update to Samotsvety AGI timelines. 08:15 Dourado — Heretical thoughts on AI. 08:43 Browning & Veit — Longtermism and animals. 09:04 One-line summaries. 10:28 News. 14:13 Conversation with Lukas Finnveden. 14:37 Could you clarify what you mean by AGI and lock-in? 16:36 What are the five claims one could make about the long-run trajectory of intelligent life? 18:26 What are the three claims about lock-in, conditional on the arrival of AGI? 20:21 Could lock-in still happen without whole brain emulation? 21:32 Could you explain why the form of alignment required for lock-in would be easier to solve? 23:12 Could you elaborate on the stability of the postulated long-lasting institutions and on potential threats? 26:02 Do you have any thoughts on the desirability of long-term lock-in? 28:24 What’s the story behind this report?

  2. 30/12/2022

    #6: FTX collapse, value lock-in, and counterarguments to AI x-risk

    Future Matters is a newsletter about longtermism by Matthew van der Merwe and Pablo Stafforini. Each month we curate and summarize relevant research and news from the community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, read on the EA Forum and follow on Twitter. Future Matters is also available in Spanish. 00:00 Welcome to Future Matters. 01:05 A message to our readers. 01:54 Finnveden, Riedel & Shulman — Artificial general intelligence and lock-in. 02:33 Grace — Counterarguments to the basic AI x-risk case. 03:17 Grace — Let’s think about slowing down AI. 04:18 Piper — Review of What We Owe the Future. 05:04 Clare & Martin — How bad could a war get? 05:26 Rodríguez — What is the likelihood that civilizational collapse would cause technological stagnation? 06:28 Ord — What kind of institution is needed for existential security? 07:00 Ezell — A lunar backup record of humanity. 07:37 Tegmark — Why I think there's a one-in-six chance of an imminent global nuclear war. 08:31 Hobbhahn — The next decades might be wild. 08:54 Karnofsky — Why would AI "aim" to defeat humanity? 09:44 Karnofsky — High-level hopes for AI alignment. 10:27 Karnofsky — AI safety seems hard to measure. 11:10 Karnofsky — Racing through a minefield. 12:07 Barak & Edelman — AI will change the world, but won’t take it over by playing “3-dimensional chess”. 12:53 Our World in Data — New page on artificial intelligence. 14:06 Luu — Futurist prediction methods and accuracy. 14:38 Kenton et al. — Clarifying AI x-risk. 15:39 Wyg — A theologian's response to anthropogenic existential risk. 16:12 Wilkinson — The unexpected value of the future. 16:38 Aaronson — Talk on AI safety. 17:20 Tarsney & Wilkinson — Longtermism in an infinite world. 18:13 One-line summaries. 25:01 News. 28:29 Conversation with Katja Grace. 28:42 Could you walk us through the basic case for existential risk from AI? 29:42 What are the most important weak points in the argument? 30:37 Comparison between misaligned AI and corporations. 32:07 How do you think people in the AI safety community are thinking about this basic case wrong? 33:23 If these arguments were supplemented with clearer claims, does that rescue some of the plausibility? 34:30 Does the disagreement about the basic intuitive case for AI risk undermine the case itself? 35:34 Could you describe how your views on AI risk have changed over time? 36:14 Could you quantify your credence in the probability of existential catastrophe from AI? 36:52 When you reached that number, did it surprise you?

    38 min
  3. 13/09/2022

    #5: supervolcanoes, AI takeover, and What We Owe the Future

    Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, read on the EA Forum and follow on Twitter. 00:00 Welcome to Future Matters. 01:08 MacAskill — What We Owe the Future. 01:34 Lifland — Samotsvety's AI risk forecasts. 02:11 Halstead — Climate Change and Longtermism. 02:43 Good Judgment — Long-term risks and climate change. 02:54 Thorstad — Existential risk pessimism and the time of perils. 03:32 Hamilton — Space and existential risk. 04:07 Cassidy & Mani — Huge volcanic eruptions. 04:45 Boyd & Wilson — Island refuges for surviving nuclear winter and other abrupt sun-reducing catastrophes. 05:28 Hilton — Preventing an AI-related catastrophe. 06:13 Lewis — Most small probabilities aren't Pascalian. 07:04 Yglesias — What's long-term about "longtermism”? 07:33 Lifland — Prioritizing x-risks may require caring about future people. 08:40 Karnofsky — AI strategy nearcasting. 09:11 Karnofsky — How might we align transformative AI if it's developed very soon? 09:51 Matthews — How effective altruism went from a niche movement to a billion-dollar force. 10:28 News. 14:28 Conversation with Ajeya Cotra. 15:02 What do you mean by human feedback on diverse tasks (HFDT) and what made you focus on it? 18:08 Could you walk us through the three assumptions you make about how this scenario plays out? 20:49 What are the key properties of the model you call Alex? 22:55 What do you mean by “playing the training game”, and why would Alex behave in that way? 24:34 Can you describe how deploying Alex would result in a loss of human control? 29:40 Can you talk about the sorts of specific countermeasures to prevent takeover?

    31 min
  4. 07/08/2022

    #4: AI timelines, AGI risk, and existential risk from climate change

    Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, read on the EA Forum and follow on Twitter. 00:00 Welcome to Future Matters 01:11 Steinhardt — AI forecasting: one year in 01:52 Davidson — Social returns to productivity growth 02:26 Brundage — Why AGI timeline research/discourse might be overrated 03:03 Cotra — Two-year update on my personal AI timelines 03:50 Grace — What do ML researchers think about AI in 2022? 04:43 Leike — On the windfall clause 05:35 Cotra — Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover 06:32 Maas — Introduction to strategic perspectives on long-term AI governance 06:52 Hadshar — How moral progress happens: the decline of footbinding as a case study 07:35 Trötzmüller — Why EAs are skeptical about AI safety 08:08 Schubert — Moral circle expansion isn’t the key value change we need 08:52 Šimčikas — Wild animal welfare in the far future 09:51 Heikkinen — Strong longtermism and the challenge from anti-aggregative moral views 10:28 Rational Animations — Video on Karnofsky's Most important century 11:23 Other research 12:47 News 15:00 Conversation with John Halstead 15:33 What level of emissions should we reasonably expect over the coming decades? 18:11 What do those emissions imply for warming? 20:52 How worried should we be about the risk of climate change from a longtermist perspective? 26:53 What is the probability of an existential catastrophe due to climate change? 27:06 Do you think EAs should fund modelling work of tail risks from climate change? 28:45 What would be the best use of funds?

    31 min
  5. 04/07/2022

    #3: digital sentience, AGI ruin, and forecasting track records

    Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent longtermist. You can also subscribe on Substack, read on the EA Forum and follow on Twitter. 00:00 Welcome to Future Matters 01:11 Long — Lots of links on LaMDA 01:48 Lovely — Do we need a better understanding of 'progress'? 02:11 Base — Things usually end slowly 02:47 Yudkowsky — AGI ruin: a list of lethalities 03:38 Christiano — Where I agree and disagree with Eliezer 04:31 Garfinkel — On deference and Yudkowsky's AI risk estimates 05:13 Karnofsky — The track record of futurists seems … fine 06:08 Aaronson — Joining OpenAI to work on AI safety 06:52 Shiller — The importance of getting digital consciousness right 07:53 Pilz — Germans' opinions on translations of “longtermism” 08:33 Karnofsky — AI could defeat all of us combined 09:36 Beckstead — Future Fund June 2022 update 11:02 News 14:45 Conversation with Robert Long 15:05 What artificial sentience is and why it’s important 16:56 “The Big Question” and the assumptions on which it depends 19:30 How problems arising from AI agency and AI sentience compare in terms of importance, neglectedness, tractability 21:57 AI sentience and the alignment problem 24:01 The Blake Lemoine saga and the quality of the ensuing public discussion 26:29 The risks of AI sentience becoming lumped in with certain other views 27:55 How to deal with objections coming from different frameworks 28:50 The analogy between AI sentience and animal welfare 30:10 The probability of large language models like LaMDA and GPT-3 being sentient 32:41 Are verbal reports strong evidence for sentience?

    34 min
  6. 28/05/2022

    #2: Clueless skepticism, 'longtermist' as an identity, and nanotechnology strategy research

    Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent longtermist. You can also subscribe on Substack, read on the EA Forum and follow on Twitter. 00:01 Welcome to Future Matters 01:25 Schubert — Against cluelessness 02:23 Carlsmith — Presentation on existential risk from power-seeking AI 03:45 Vaintrob — Against "longtermist" as an identity 04:30 Bostrom & Shulman — Propositions concerning digital minds and society 05:02 MacAskill — EA and the current funding situation 05:51 Beckstead — Some clarifications on the Future Fund's approach to grantmaking 06:46 Caviola, Morrisey & Lewis — Most students who would agree with EA ideas haven't heard of EA yet 07:32 Villalobos & Sevilla — Potatoes: A critical review 08:09 Ritchie — How we fixed the ozone layer 08:57 Snodin — Thoughts on nanotechnology strategy research 09:31 Cotton-Barratt — Against immortality 09:50 Smith & Sandbrink — Biosecurity in the age of open science 10:30 Cotton-Barratt — What do we want the world to look like in 10 years? 10:52 Hilton — Climate change: problem profile 11:30 Ligor & Matthews — Outer space and the veil of ignorance 12:21 News 14:46 Conversation with Ben Snodin

    23 min
  7. 23/04/2022

    #1: AI takeoff, longtermism vs. existential risk, and probability discounting

    > The remedies for all our diseases will be discovered long after we are dead; and the world will be made a fit place to live in. It is to be hoped that those who live in those days will look back with sympathy to their known and unknown benefactors.
    > — John Stuart Mill

    Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent longtermist. You can also subscribe on Substack, read on the EA Forum and follow on Twitter.

    Research

    Scott Alexander's "Long-termism" vs. "existential risk" worries that “longtermism” may be a worse brand (though not necessarily a worse philosophy) than “existential risk”. It seems much easier to make someone concerned about transformative AI by noting that it might kill them and everyone else, than by pointing out its effects on people in the distant future. We think that Alexander raises a valid worry, although we aren’t sure the worry favors the “existential risk” branding over the “longtermism” branding as much as he suggests: existential risks are, after all, defined as risks to humanity's long-term potential. Both of these concepts, in fact, attempt to capture the core idea that what ultimately matters is mostly located in the far future: existential risk uses the language of “potential” and emphasizes threats to it, whereas longtermism instead expresses the idea in terms of value and the duties it creates. Maybe the “existential risk” branding seems to address Alexander’s worry better because it draws attention to the threats to this value, which are disproportionately (but not exclusively) located in the short term, while the “longtermism” branding emphasizes instead the determinants of value, which are in the far future.

    In General vs AI-specific explanations of existential risk neglect, Stefan Schubert asks why we systematically neglect existential risk. The standard story invokes general explanations, such as cognitive biases and coordination problems. But Schubert notes that people seem to have specific biases that cause them to underestimate AI risk, e.g. it sounds outlandish and counter-intuitive. If unaligned AI is the greatest source of existential risk in the near term, then these AI-specific biases could explain most of our neglect.

    Max Roser’s The future is vast is a powerful new introduction to longtermism. His graphical representations do well to convey the scale of humanity’s potential, and have made it onto the Wikipedia entry for longtermism.

    Thomas Kwa’s Effectiveness is a conjunction of multipliers makes the important observation that (1) a person’s impact can be decomposed into a series of impact “multipliers” and that (2) these terms interact multiplicatively, rather than additively, with each other. For example, donating 80% instead of 10% multiplies impact by a factor of 8 and earning $1m/year instead of $250k/year multiplies impact by a factor of 4; but doing both of these things multiplies impact by a factor of 32. Kwa shows that many other common EA choices are best seen as multipliers of impact, and notes that multipliers related to judgment and ambition are especially important for longtermists.
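    To make the multiplicative point concrete, here is a minimal worked comparison using the two figures quoted above (the notation is illustrative and not taken from Kwa's post): treated as multipliers rather than additive bonuses, the two choices compound as

    \[
    8 \times 4 = 32 \qquad \text{rather than} \qquad 8 + 4 = 12,
    \]

    which is why stacking several such multipliers can shift impact by orders of magnitude.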
    The first installment in a series on “learning from crisis”, Jan Kulveit's Experimental longtermism: theory needs data (co-written with Gavin Leech) recounts the author's motivation to launch Epidemic Forecasting, a modelling and forecasting platform that sought to present probabilistic data to decision-makers and the general public. Kulveit realized that his "longtermist" models had relatively straightforward implications for the COVID pandemic, such that trying to apply them to this case (1) had the potential to make a direct, positive difference to the crisis and (2) afforded an opportunity to experimentally test those models. While the first of these effects had obvious appeal, Kulveit considers the second especially important from a longtermist perspective: attempts to think about the long-term future lack rapid feedback loops, and disciplines that aren't tightly anchored to empirical reality are much more likely to go astray. He concludes that longtermists should engage more often in this type of experimentation, and generally pay more attention to the longtermist value of information that "near-termist" projects can sometimes provide.

    Rhys Lindmark’s FTX Future Fund and Longtermism considers the significance of the Future Fund within the longtermist ecosystem by examining trends in EA funding over time. Interested readers should look at the charts in the original post for more details, but roughly it looks like Open Philanthropy has allocated about 20% of its budget to longtermist causes in recent years, accounting for about 80% of all longtermist grantmaking. On the assumption that Open Phil gives $200 million to longtermism in 2022, the Future Fund's lower-bound target of $100 million already positions it as the second-largest longtermist grantmaker, with roughly a 30% share (see the back-of-the-envelope calculation at the end of these notes). Lindmark’s analysis prompted us to create a Metaculus question on whether the Future Fund will give more than Open Philanthropy to longtermist causes in 2022. At the time of publication (22 April 2022), the community predicts that the Future Fund is 75% likely to outspend Open Philanthropy.

    Holden Karnofsky's Debating myself on whether “extra lives lived” are as good as “deaths prevented” is an engaging imaginary dialogue between a proponent and an opponent of Total Utilitarianism. Karnofsky manages to cover many of the key debates in population ethics—including those surrounding the Intuition of Neutrality, the Procreation Asymmetry, the Repugnant and Very Repugnant Conclusions, and the impossibility of Theory X—in a highly accessible yet rigorous manner. Overall, this blog post struck us as one of the best popular, informal introductions to the topic currently available.

    Matthew Barnett shares thoughts on the risks from SETI. He argues that people underestimate the risks from passive SETI—scanning for alien signals without transmitting anything. We should consider the possibility that alien civilizations broadcast messages designed to hijack or destroy their recipients. At a minimum, we should treat alien signals with as much caution as we would a strange email attachment. However, current protocols are to publicly release any confirmed alien messages, and no one seems to have given much thought to managing downside risk. Overall, Barnett estimates a 0.1–0.2% chance of extinction from SETI over the next 1,000 years. Now might be a good opportunity for longtermists to figure out, and advocate for, some more sensible policies.

    Scott Alexander provides an epic commentary on the long-running debate about AI Takeoff Speeds. Paul Christiano thinks it more likely that improvements in AI capabilities, and the ensuing transformative impacts on the world, will happen gradually. Eliezer Yudkowsky thinks there will be a sudden, sharp jump in capabilities, around the point we build AI with human-level intelligence. Alexander presents the two perspectives with more clarity than their main proponents, and isolates some of the core disagreements. It’s the best summary of the takeoff debate we’ve come across.

    Buck Shlegeris points out that takeoff speeds have a huge effect on what it means to work on AI x-risk. In fast takeoff worlds, AI risk will never be much more widely accepted than it is today, because everything will look pretty normal until we reach AGI. The majority of AI alignment work that is done before this point will be from the sorts of existential risk–motivated people working on alignment now. In slow takeoff worlds, by contrast, AI researchers will encounter and tackle many aspects of the alignment problem “in miniature”, before AI is powerful enough to pose an existential risk. So a large fraction of alignment work will be done by researchers motivated by normal incentives, because making AI systems that behave well is good for business. In these worlds, existential risk–motivated researchers today need to be strategic, and identify and prioritise aspects of alignment that won’t be solved “by default” in the course of AI progress. In the comments, John Wentworth argues that there will be stronger incentives to conceal alignment problems than to solve them. Therefore, contra Shlegeris, he thinks AI risk will remain neglected even in slow takeoff worlds.

    Linchuan Zhang’s Potentially great ways forecasting can improve the longterm future identifies several different paths via which short-range forecasting can be useful from a longtermist perspective. These include (1) improving longtermist research by outsourcing research questions to skilled forecasters; (2) improving longtermist grantmaking by predicting how potential grants will be assessed by future evaluators; (3) improving longtermist outreach by making claims more legible to outsiders; and (4) improving the longtermist training and vetting pipeline by tracking forecasting performance in large-scale public forecasting tournaments. Zhang’s companion post, Early-warning Forecasting Center: What it is, and why it'd be cool, proposes the creation of an organization whose goal is to make short-range forecasts on questions of high longtermist significance. A foremost use case is early warning for AI risks, biorisks, and other existential risks. Besides outlining the basic idea, Zhang discusses some associated questions, such as why the organization should focus on short- rather than long-range forecasting, why it should be a forecasting center rather than a prediction market, and how the center should be structured.

    Dylan Matthews’s The biggest funder of anti-nuclear war programs is taking its money away looks at the reasons prompting the MacArthur Foundation to withdraw its funding from nuclear security work.
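    For readers wondering where the roughly 30% figure in the Lindmark summary comes from, here is a back-of-the-envelope reconstruction. It rests only on the figures quoted above, plus the assumption that the Future Fund's $100 million comes on top of the total implied by Open Philanthropy's ~80% share:

    \[
    \text{non-Future-Fund total} \approx \frac{\$200\text{M}}{0.8} = \$250\text{M}, \qquad
    \text{Future Fund share} \approx \frac{\$100\text{M}}{\$250\text{M} + \$100\text{M}} \approx 29\%.
    \]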

    30 min
  8. 22/03/2022

    #0: Space governance, future-proof ethics, and the launch of the Future Fund

    > We think our civilization near its meridian, but we are yet only at the cock-crowing and the morning star.
    > — Ralph Waldo Emerson

    Welcome to Future Matters, a newsletter about longtermism brought to you by Matthew van der Merwe & Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent longtermist. Future Matters is crossposted to the Effective Altruism Forum and available as a podcast.

    Research

    We are typically confident that some things are conscious (humans), and that some things are not (rocks); other things we’re very unsure about (insects). In this post, Amanda Askell shares her views about AI consciousness. It seems unlikely that current AI systems are conscious, but they are improving and there’s no great reason to think we will never create conscious AI systems. This matters because consciousness is morally relevant, e.g. we tend to think that if something is conscious, we shouldn’t harm it for no good reason. Since it’s much worse to mistakenly deny something moral status than to mistakenly attribute it, we should take a cautious approach when it comes to AI: if we ever have reason to believe some AI system is conscious, we should start to treat it as a moral patient. This makes it important and urgent that we develop tools and techniques to assess whether AI systems are conscious, and related questions, e.g. whether they are suffering.

    The leadership of the Global Catastrophic Risk Institute issued a Statement on the Russian invasion of Ukraine. The authors consider the effects of the invasion on (1) risks of nuclear war and (2) other global catastrophic risks. They argue that the conflict increases the risk of both intentional and inadvertent nuclear war, and that it may increase other risks primarily via its consequences on climate change, on China, and on international relations.

    Earlier this year, Hunga Tonga-Hunga Ha'apai—a submarine volcano in the South Pacific—produced what appears to be the largest volcanic eruption of the last 30 years. In What can we learn from a short preview of a super-eruption and what are some tractable ways of mitigating, Mike Cassidy and Lara Mani point out that this event and its cascading impacts provide a glimpse into the possible effects of a much larger eruption, which could be comparable in intensity but much longer in duration. The main lessons the authors draw are that humanity was unprepared for the eruption and that its remote location dramatically minimized its impacts. To better prepare for these risks, the authors propose better identifying the volcanoes capable of large enough eruptions and the regions most affected by them; building resilience by investigating the role that technology could play in disaster response and by enhancing community-led resilience mechanisms; and mitigating the risks by research on removal of aerosols from large explosive eruptions and on ways to reduce the explosivity of eruptions by fracking or drilling.

    The second part in a three-part series on great power conflict, Stephen Clare's How likely is World War III? attempts to estimate the probability of great power conflict this century as well as its severity, should it occur. Tentatively, Clare assigns a 45% chance to a confrontation between great powers by 2100, an 8% chance of a war much worse than World War II, and a 1% chance of a war causing human extinction. Note that some of the key sources in Clare's analysis rely on the Correlates of War dataset, which is less informative about long-run trends in global conflict than is generally assumed; see Ben Garfinkel's comment for discussion.

    Holden Karnofsky emails Tyler Cowen to make a very concise case that there’s at least a 1 in 3 chance we develop transformative AI this century (summarizing his earlier blogpost). There are some very different approaches to AI forecasting all pointing to a significant probability of TAI this century: forecasting based on ‘biological anchors’; forecasts by AI experts and Metaculus; analyses of long-run economic growth; and very outside-view arguments. On the other hand, there are few, if any, arguments that we should confidently expect TAI much later.

    Malevolent non-state actors with access to advanced technology can increase the probability of an existential catastrophe either by directly posing a risk of collapse or extinction, or indirectly, by creating instability and thereby undermining humanity's capacity to handle other risks. The magnitude of the risk posed by these actors is a function of both their ability and their willingness to cause harm. In How big are risks from non-state actors? Base rates for terrorist attacks, Rose Hadshar attempts to inform estimates of the second of these two factors by examining base rates of terrorist attacks. She finds that attacks occur at a rate of one per 700,000 people worldwide and one per 3,000,000 people in the West. Most attacks are not committed with omnicidal intent.

    Population dynamics are an important force shaping humanity over decades and centuries. In Retrospective on Shall the Religious Inherit the Earth, Isabel Juniewicz evaluates predictions in a 2010 book which claimed that: (1) within religions, fundamentalist growth will outpace moderate growth; (2) within regions, fundamentalist growth will outpace growth among non-religious and moderates; (3) globally, religious growth will outpace non-religious growth. She finds the strongest evidence for (1). Evidence for (2) is much weaker: in the US, the non-religious share of the population has increased over the last decade. Secularization and deconversion are more than counterbalancing the fertility advantage of religious groups, and the fertility gap between religious and non-religious populations has been narrowing. Haredi Jews are one notable exception, and continue to grow as a share of the population in the US and Israel. (3) seems true in the medium term, but due primarily to dynamics overlooked in the book: population decline in (more irreligious) East Asia, and population growth in (increasingly religious) Africa. Fertility rates in predominantly Muslim countries, on which the book’s argument for (3) is largely based, have been declining substantially, to near-replacement levels in many cases. For the most part, religious populations are experiencing declining fertility in parallel with secular groups. Overall it looks like the most significant trend in the coming decades and centuries will not be increasing global religiosity, but the continued convergence of global fertility rates to below-replacement levels.

    We’re appalled by many of the attitudes and practices of past generations. How can we avoid making the same sort of mistakes? In Future-proof ethics (EA Forum discussion), Holden Karnofsky suggests three features of ethical systems capable of meeting this challenge: (1) Systematization: rather than relying on bespoke, intuition-based judgements, we should look for a small set of principles we are very confident in, and derive everything else from these; (2) Thin utilitarianism: our ethical system should be based on the needs and wants of others rather than our personal preferences, and therefore requires a system of consistent ethical weights for comparing any two harms and benefits; and (3) Sentientism: the key ingredient in determining how much to weigh someone’s interests should be the extent to which they have the capacity for pleasure and suffering. Combining these elements leads to the sort of ethical stance that has a good track record of being ‘ahead of the curve.’

    Progress on shaping long-term prospects for humanity is to a significant degree constrained by insufficient high-quality research with the potential to answer important and actionable questions. Holden Karnofsky's Important, actionable research questions for the most important century offers a list of questions of this type that he finds most promising, in the following three areas: AI alignment, AI governance, and AI takeoff dynamics. Karnofsky also describes a process for assessing whether one is a good fit for conducting this type of research, and draws a contrast between this and two other types of research: research focused on identifying "cause X" candidates or uncovering new crucial considerations; and modest incremental research intended not to cause a significant update but rather to serve as a building block for other efforts.

    Andreas Mogensen and David Thorstad's Tough enough? Robust satisficing as a decision norm for long-term policy analysis attempts to open a dialogue between philosophers working in decision theory and decision-making under deep uncertainty (DMDU), a field developed by operations researchers and engineers that has been mostly neglected by the philosophical community. The paper focuses specifically on robust satisficing as a decision norm, and discusses decision-theoretic and voting-theoretic motivations for it. The paper may be seen as an attempt to address complaints raised by some members of the effective altruism community that longtermist researchers routinely ignore the tools and insights developed by DMDU and other relevant fields.

    Fin Moorhouse wrote an in-depth profile on space governance—the most comprehensive examination of this topic by the EA community so far. His key points may be summarized as follows: Space, not Earth, is where almost everyone will likely live if our species survives the next centuries or millennia. Shaping how this future unfolds is potentially enormously important. Historically, a significant determinant of quality of life has been quality of governance: people tend to be happy when countries are well-governed and unhappy when countries are poorly governed. It seems plausible that the kind of space governance that ultimately prevails will strongly influence how this future unfolds.
