Future Matters #0: Space governance, future-proof ethics, and the launch of the Future Fund

> We think our civilization near its meridian, but we are yet only at the cock-crowing and the morning star.
> — Ralph Waldo Emerson

Welcome to Future Matters, a newsletter about longtermism brought to you by Matthew van der Merwe & Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent longtermist. Future Matters is crossposted to the Effective Altruism Forum and available as a podcast.

Research
We are typically confident that some things are conscious (humans), and that some things are not (rocks); other things we’re very unsure about (insects). In this post, Amanda Askell shares her views about AI consciousness. It seems unlikely that current AI systems are conscious, but they are improving and there’s no great reason to think we will never _create_ conscious AI systems. This matters because consciousness is morally relevant: we tend to think that if something is conscious, we shouldn’t harm it for no good reason. Since it’s much worse to mistakenly _deny_ something moral status than to mistakenly attribute it, we should take a cautious approach when it comes to AI: if we ever have reason to believe some AI system is conscious, we should start to treat it as a moral patient. This makes it important and urgent that we develop tools and techniques to assess whether AI systems are conscious, and to answer related questions, e.g. whether they are suffering.
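To make the asymmetry concrete, here is a toy expected-cost comparison (our illustration, not Askell’s; the symbols are hypothetical): write p for the probability that a given AI system is conscious, H for the harm of wrongly denying moral status to a conscious system, and C for the cost of wrongly extending it to a non-conscious one. Caution wins in expectation whenever

```latex
% Toy expected-cost comparison (illustrative only; symbols are our assumptions):
%   p = probability the AI system is conscious
%   H = harm from wrongly denying moral status
%   C = cost of wrongly attributing moral status
\[
  p \cdot H \;>\; (1 - p) \cdot C
  \quad\Longleftrightarrow\quad
  p \;>\; \frac{C}{C + H}.
\]
```

Since H plausibly dwarfs C, the threshold probability is small, which is the sense in which the asymmetry supports treating borderline systems cautiously.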
The leadership of the Global Catastrophic Risk Institute issued a Statement on the Russian invasion of Ukraine. The authors consider the effects of the invasion on (1) risks of nuclear war and (2) other global catastrophic risks. They argue that the conflict increases the risk of both intentional and inadvertent nuclear war, and that it may increase other risks primarily via its consequences for climate change, for China, and for international relations.
Earlier this year, Hunga Tonga-Hunga Ha'apai, a submarine volcano in the South Pacific, produced what appears to be the largest volcanic eruption of the last 30 years. In What can we learn from a short preview of a super-eruption and what are some tractable ways of mitigating, Mike Cassidy and Lara Mani point out that this event and its cascading impacts offer a glimpse into the possible effects of a much larger eruption, one comparable in intensity but much longer in duration. The main lessons the authors draw are that humanity was unprepared for the eruption and that its remote location dramatically limited its impacts. To better prepare for these risks, the authors propose identifying the volcanoes capable of such large eruptions and the regions they would most affect; building resilience by investigating the role technology could play in disaster response and by strengthening community-led resilience mechanisms; and mitigating the risks through research on removing aerosols produced by large explosive eruptions and on reducing the explosivity of eruptions via fracking or drilling.
The second part in a three-part series on great power conflict, Stephen Clare's How likely is World War III? attempts to estimate the probability of great power conflict this century as well as its severity, should it occur. Tentatively, Clare assigns a 45% chance to a confrontation between great powers by 2100, an 8% chance of a war much worse than World War II, and a 1% chance of a war causing human extinction. Note that some of the key sources in Clare's analysis rely on the Correlates of War dataset, which is less informative about long-run trends in global conflict than is generally assumed; see Ben Garfinkel's comment for discussion.
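Read as unconditional probabilities by 2100, and assuming the 8% and 1% scenarios fall within the 45% scenario, Clare's figures imply rough conditional risks; the arithmetic below is ours, not a calculation from the post:

```latex
% Back-of-the-envelope conditionals implied by Clare's headline figures
% (our arithmetic, assuming the severe scenarios are subsets of the 45% scenario):
\[
  P(\text{war much worse than WWII} \mid \text{great power war}) \approx \frac{0.08}{0.45} \approx 0.18,
  \qquad
  P(\text{extinction-level war} \mid \text{great power war}) \approx \frac{0.01}{0.45} \approx 0.02.
\]
```

In other words, conditional on a great power confrontation occurring, these estimates put the chance of a war far worse than World War II at roughly one in five.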
Holden Karnofsky emails Tyler Cowen to make a very concise case that there’s at least a 1 in 3 chance we develop transformative AI this century (summarizing his earlier blogpost). There are some very different approaches to AI forecasting al
