
Future Matters
Matthew van der Merwe, Pablo Stafforini
Society & Culture
9 episodes

Future Matters is a newsletter about longtermism and existential risk by Matthew van der Merwe & Pablo Stafforini.
#8: Bing Chat, AI labs on safety, and pausing Future Matters
Future Matters is a newsletter about longtermism and existential risk by Matthew van der Merwe and Pablo Stafforini. Each month we curate and summarize relevant research and news from the community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, read on the EA Forum and follow on Twitter. Future Matters is also available in Spanish.
00:00 Welcome to Future Matters.
00:44 A message to our readers.
01:09 All things Bing.
05:27 Summaries.
14:20 News.
16:10 Opportunities.
17:19 Audio & video.
18:16 Newsletters.
18:50 Conversation with Tom Davidson.
19:13 The importance of understanding and forecasting AI takeoff dynamics.
21:55 Start and end points of AI takeoff.
24:25 Distinction between capabilities takeoff and impact takeoff.
25:47 The ‘compute-centric framework’ for AI forecasting.
27:12 How the compute-centric assumption could be wrong.
29:26 The main lines of evidence informing estimates of the effective FLOP gap.
34:23 The main drivers of the shortened timelines in this analysis.
36:52 The idea that we'll be "swimming in runtime compute" by the time we’re training human-level AI systems.
37:28 Is the ratio between the compute required for model training and for model inference relatively stable?
40:37 Improving estimates of AI takeoffs.
#7: AI timelines, AI skepticism, and lock-in
Future Matters is a newsletter about longtermism and existential risk by Matthew van der Merwe and Pablo Stafforini. Each month we curate and summarize relevant research and news from the community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, read on the EA Forum and follow on Twitter. Future Matters is also available in Spanish.
00:00 Welcome to Future Matters.
00:57 Davidson — What a compute-centric framework says about AI takeoff speeds.
02:19 Chow, Halperin & Mazlish — AGI and the EMH.
02:58 Hatfield-Dodds — Concrete reasons for hope about AI.
03:37 Karnofsky — Transformative AI issues (not just misalignment).
04:08 Vaintrob — Beware safety-washing.
04:45 Karnofsky — How we could stumble into AI catastrophe.
05:21 Liang & Manheim — Managing the transition to widespread metagenomic monitoring.
05:51 Crawford — Technological stagnation: why I came around.
06:38 Karnofsky — Spreading messages to help with the most important century.
07:16 Wynroe, Atkinson & Sevilla — Literature review of transformative artificial intelligence timelines.
07:50 Yagudin, Mann & Sempere — Update to Samotsvety AGI timelines.
08:15 Dourado — Heretical thoughts on AI.
08:43 Browning & Veit — Longtermism and animals.
09:04 One-line summaries.
10:28 News.
14:13 Conversation with Lukas Finnveden.
14:37 Could you clarify what you mean by AGI and lock-in?
16:36 What are the five claims one could make about the long-run trajectory of intelligent life?
18:26 What are the three claims about lock-in, conditional on the arrival of AGI?
20:21 Could lock-in still happen without whole brain emulation?
21:32 Could you explain why the form of alignment required for lock-in would be easier to solve?
23:12 Could you elaborate on the stability of the postulated long-lasting institutions and on potential threats?
26:02 Do you have any thoughts on the desirability of long-term lock-in?
28:24 What’s the story behind this report?
#6: FTX collapse, value lock-in, and counterarguments to AI x-risk
Future Matters is a newsletter about longtermism by Matthew van der Merwe and Pablo Stafforini. Each month we curate and summarize relevant research and news from the community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, read on the EA Forum and follow on Twitter. Future Matters is also available in Spanish.
00:00 Welcome to Future Matters.
01:05 A message to our readers.
01:54 Finnveden, Riedel & Shulman — Artificial general intelligence and lock-in.
02:33 Grace — Counterarguments to the basic AI x-risk case.
03:17 Grace — Let’s think about slowing down AI.
04:18 Piper — Review of What We Owe the Future.
05:04 Clare & Martin — How bad could a war get?
05:26 Rodríguez — What is the likelihood that civilizational collapse would cause technological stagnation?
06:28 Ord — What kind of institution is needed for existential security?
07:00 Ezell — A lunar backup record of humanity.
07:37 Tegmark — Why I think there's a one-in-six chance of an imminent global nuclear war.
08:31 Hobbhahn — The next decades might be wild.
08:54 Karnofsky — Why would AI "aim" to defeat humanity?
09:44 Karnofsky — High-level hopes for AI alignment.
10:27 Karnofsky — AI safety seems hard to measure.
11:10 Karnofsky — Racing through a minefield.
12:07 Barak & Edelman — AI will change the world, but won’t take it over by playing “3-dimensional chess”.
12:53 Our World in Data — New page on artificial intelligence.
14:06 Luu — Futurist prediction methods and accuracy.
14:38 Kenton et al. — Clarifying AI x-risk.
15:39 Wyg — A theologian's response to anthropogenic existential risk.
16:12 Wilkinson — The unexpected value of the future.
16:38 Aaronson — Talk on AI safety.
17:20 Tarsney & Wilkinson — Longtermism in an infinite world.
18:13 One-line summaries.
25:01 News.
28:29 Conversation with Katja Grace.
28:42 Could you walk us through the basic case for existential risk from AI?
29:42 What are the most important weak points in the argument?
30:37 Comparison between misaligned AI and corporations.
32:07 How do you think people in the AI safety community are getting this basic case wrong?
33:23 If these arguments were supplemented with clearer claims, does that rescue some of the plausibility?
34:30 Does the disagreement about the basic intuitive case for AI risk undermine the case itself?
35:34 Could you describe how your views on AI risk have changed over time?
36:14 Could you quantify your credence in the probability of existential catastrophe from AI?
36:52 When you reached that number, did it surprise you?
#5: supervolcanoes, AI takeover, and What We Owe the Future
Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, read on the EA Forum and follow on Twitter.
00:00 Welcome to Future Matters.
01:08 MacAskill — What We Owe the Future.
01:34 Lifland — Samotsvety's AI risk forecasts.
02:11 Halstead — Climate Change and Longtermism.
02:43 Good Judgment — Long-term risks and climate change.
02:54 Thorstad — Existential risk pessimism and the time of perils.
03:32 Hamilton — Space and existential risk.
04:07 Cassidy & Mani — Huge volcanic eruptions.
04:45 Boyd & Wilson — Island refuges for surviving nuclear winter and other abrupt sunlight-reducing catastrophes.
05:28 Hilton — Preventing an AI-related catastrophe.
06:13 Lewis — Most small probabilities aren't Pascalian.
07:33 Yglesias — What's long-term about “longtermism”?
07:33 Lifland — Prioritizing x-risks may require caring about future people.
08:40 Karnofsky — AI strategy nearcasting.
09:11 Karnofsky — How might we align transformative AI if it's developed very soon?
09:51 Matthews — How effective altruism went from a niche movement to a billion-dollar force.
10:28 News.
14:28 Conversation with Ajeya Cotra.
15:02 What do you mean by human feedback on diverse tasks (HFDT) and what made you focus on it?
18:08 Could you walk us through the three assumptions you make about how this scenario plays out?
20:49 What are the key properties of the model you call Alex?
22:55 What do you mean by “playing the training game”, and why would Alex behave in that way?
24:34 Can you describe how deploying Alex would result in a loss of human control?
29:40 Can you talk about the sorts of specific countermeasures to prevent takeover?
#4: AI timelines, AGI risk, and existential risk from climate change
Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, read on the EA Forum and follow on Twitter.
00:00 Welcome to Future Matters
01:11 Steinhardt — AI forecasting: one year in
01:52 Davidson — Social returns to productivity growth
02:26 Brundage — Why AGI timeline research/discourse might be overrated
03:03 Cotra — Two-year update on my personal AI timelines
03:50 Grace — What do ML researchers think about AI in 2022?
04:43 Leike — On the windfall clause
05:35 Cotra — Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover
06:32 Maas — Introduction to strategic perspectives on long-term AI governance
06:52 Hadshar — How moral progress happens: the decline of footbinding as a case study
07:35 Trötzmüller — Why EAs are skeptical about AI safety
08:08 Schubert — Moral circle expansion isn’t the key value change we need
08:52 Šimčikas — Wild animal welfare in the far future
09:51 Heikkinen — Strong longtermism and the challenge from anti-aggregative moral views
10:28 Rational Animations — Video on Karnofsky's Most important century
11:23 Other research
12:47 News
15:00 Conversation with John Halstead
15:33 What level of emissions should we reasonably expect over the coming decades?
18:11 What do those emissions imply for warming?
20:52 How worried should we be about the risk of climate change from a longtermist perspective?
26:53 What is the probability of an existential catastrophe due to climate change?
27:06 Do you think EAs should fund modelling work of tail risks from climate change?
28:45 What would be the best use of funds?
#3: digital sentience, AGI ruin, and forecasting track records
Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent longtermist. You can also subscribe on Substack, read on the EA Forum and follow on Twitter.
00:00 Welcome to Future Matters
01:11 Long — Lots of links on LaMDA
01:48 Lovely — Do we need a better understanding of 'progress'?
02:11 Base — Things usually end slowly
02:47 Yudkowsky — AGI ruin: a list of lethalities
03:38 Christiano — Where I agree and disagree with Eliezer
04:31 Garfinkel — On deference and Yudkowsky's AI risk estimates
05:13 Karnofsky — The track record of futurists seems … fine
06:08 Aaronson — Joining OpenAI to work on AI safety
06:52 Shiller — The importance of getting digital consciousness right
07:53 Pilz — Germans’ opinions on translations of “longtermism”
08:33 Karnofsky — AI could defeat all of us combined
09:36 Beckstead — Future Fund June 2022 update
11:02 News
14:45 Conversation with Robert Long
15:05 What artificial sentience is and why it’s important
16:56 “The Big Question” and the assumptions on which it depends
19:30 How problems arising from AI agency and AI sentience compare in terms of importance, neglectedness, and tractability
21:57 AI sentience and the alignment problem
24:01 The Blake Lemoine saga and the quality of the ensuing public discussion
26:29 The risks of AI sentience becoming lumped in with certain other views
27:55 How to deal with objections coming from different frameworks
28:50 The analogy between AI sentience and animal welfare
30:10 The probability of large language models like LaMDA and GPT-3 being sentient
32:41 Are verbal reports strong evidence for sentience?