In this thought-provoking video, Malcolm and Simone Collins offer a detailed response to Scott Alexander's article on AI apocalypticism. They analyze the historical patterns of accurate and inaccurate doomsday predictions, providing insights into why AI fears may be misplaced. The couple discusses the characteristics of past moral panics, cultural susceptibility to apocalyptic thinking, and the importance of actionable solutions in legitimate concerns. They also explore the rationalist community's tendencies, the pronatalist movement, and the need for a more nuanced approach to technological progress. This video offers a fresh perspective on AI risk assessment and the broader implications of apocalyptic thinking in society.
Malcolm Collins: [00:00:00] I'm quoting from him here, okay? One of the most common arguments against AI safety is, here's an example of a time someone was worried about something, but it didn't happen.
Therefore, AI, which you are worried about, also won't happen. I always give the obvious answer. Okay. But there are other examples of times someone was worried about something and it did happen, right? How do we know AI isn't more like those?
So specifically, what he is arguing against is: every 20 years or so you get one of these apocalyptic movements, and this is why we're discounting this movement. This is how he ends the article, so people know this isn't an attack piece; this is what he asked for in the article. He says: conclusion, I genuinely don't know what these people are thinking.
I would like to understand the mindset of people who make arguments like this, but I'm not sure I've succeeded. What is he missing according to you? He is missing something absolutely giant in everything that he's laid out.
And it is a very important point and it's very clear from his write up that this idea had just never occurred to him.
[00:01:00] Would you like to know more?
Malcolm Collins: Hello, Simone. I am excited to be here with you today. Today we are going to be creating a video reply slash response to an argument that Scott Alexander, the guy who writes Astral Codex Ten or Slate Star Codex, depending on what era you were introduced to his content, wrote about arguments against AI apocalypticism. What those arguments are based around will be clear when we get into the piece, because I'm going to read some parts of it. But no, I should note:
This is not a "Scott Alexander is not smart" piece or anything like that. We actually think Scott Alexander is incredibly intelligent and well meaning. He is an intellectual who I consider a friend and somebody whose work I enormously respect. And I am creating this response because the piece is written in a way that actively requests [00:02:00] a response.
It's like: why do people believe this argument when I find it to be so weak? One of those "what am I missing here?" kind of things.
And I like the way he lays out his argument, because it's very clear that, yes, there's a huge thing he's missing. It's clear from his argument and the way that he thought about it that he's just literally never considered this point, and it's why he doesn't understand this argument.
So we're going to go over his counter argument and we're going to go over the thing that he happens to be missing. And I'm quoting from him here, okay? One of the most common arguments against AI safety is, here's an example of a time someone was worried about something, but it didn't happen.
Therefore, AI, which you are worried about, also won't happen. I always give the obvious answer. Okay. But there are other examples of times someone was worried about something and it did happen, right? How do we know AI isn't more like those? The people I'm arguing with always seem [00:03:00] so surprised by this response, as if I'm committing some sort of betrayal by destroying their beautiful arguments.
So specifically, he is arguing against the form of argument that, when we talk about it more, sounds like our argument against AI apocalypticism: every 20 years or so you get one of these apocalyptic movements, and this is why we're discounting this movement. Okay. And I'm going to go further with his argument here. So he says, I keep trying to steel man this argument. So keep in mind, he's trying to steel man it. This is not us saying it; he wants it steel manned, okay. I keep trying to steel man this argument and it keeps resisting my steel manning. For example, maybe the argument is a failed attempt to gesture at a principle of, quote, most technologies don't go wrong, but people make the same argument with things that aren't technologies, like global cooling or overpopulation.
Maybe the argument is a failed attempt to gesture at a principle of, quote, the world is never destroyed, so [00:04:00] doomsday prophecies have an abysmal track record, end quote. But overpopulation and global cooling don't claim that no one will die, just that a lot of people will, and plenty of prophecies about mass death events have come true.
E.g. the Black Plague, World War II, AIDS. And none of this explains coffee. So there's some weird coffee argument that he comes back to that I don't actually think is important to understanding this, but I can read it if you're interested.
Simone Collins: I'm sufficiently intrigued.
Malcolm Collins: Okay. People basically made the argument of: once, people were worried about coffee, but now we know coffee is safe, therefore AI will also be safe.
Which is to say there was a period where everyone was afraid of coffee, and there was a lot of apocalypticism about it, and there really was. Like people were afraid of caffeine for a period. And the fears turned out wrong. And then people correlate that with AI. And I think that is a bad argument.
But the other type of argument he's making here, so you can see, and I will create a [00:05:00] final framing from him here that I think is a pretty good summation of his argument: there is at least one thing that was possible; therefore, super intelligent AI is also possible. And an only slightly less hostile reframing.
So that's the way that he hears it when people make this argument: there is at least one thing that was possible, therefore super intelligent AI is also possible, and safe, presumably, right? Because the one thing was past technologies that we're talking about. And then he says, in an only slightly less hostile rephrasing: people were wrong when they said nuclear reactions were impossible; therefore, they might also be wrong when they say super intelligent AI is impossible. Conclusion, I genuinely don't know what these people are thinking. And then he says, I would like to understand the mindset.
So this is how he ends the article, so people know this isn't an attack piece, this is what he asked for in the article. He says, conclusion, I genuinely don't know what these people are thinking.
I would like to understand the [00:06:00] mindset of people who make arguments like this, but I'm not sure I've succeeded. The best I can say is that sometimes people on my side make similar arguments, e.g. the nuclear chain reaction one, which I don't immediately flag as dumb, and maybe I can follow this thread to figure out why they seem tempting sometimes.
All right, so great. What is he missing, according to you? Actually, I'd almost take a pause moment here to see if our audience can guess it, because he is missing something absolutely giant in everything that he's laid out. There is a very logical reason to be making this argument, and it is a point that he is missing in everything that he's looking at.
And it is a very important point and it's very clear from his write up that this idea had just never occurred to him.
Simone Collins: Is this the Margaret Thatcher Irish terrorists idea?
Malcolm Collins: No. Okay, can you think: if I was trying to predict [00:07:00] the probability of a current apocalyptic movement being wrong, what would I use in a historic context?
And I usually don't lay out this point because I thought it was so obvious. And now I'm realizing that to even fairly smart people, it's not an obvious point.
Simone Collins: I have no idea.
Malcolm Collins: People historically have sometimes built up panics about things that didn't happen. And then sometimes people have raised red flags, as outliers, about things that did end up happening. What we can do to find out whether the current event is just a moral panic or actually a legitimate panic is to correlate it with historical circumstances, to figure out what things the historically accurate predictions had in common and what things the pure moral panics had in common.
Simone Collins: So what are examples of past [00:08:00] genuine apocalypses? Like the plague, what else?
Malcolm Collins: Yes, so I went through, and we'll go through examples of, yeah. So it's history time
Simone Collins: with Malcolm Collins.
Malcolm Collins: It's history time with Malcolm Collins. Times when people actually predicted the beginnings of something that was going to be huge, and then times when, hold on, I should actually word this a bit differently.
Simone Collins: Ooh, the Industrial Revolution. That's a good one.
Malcolm Collins: Simone, we'll get to these in a second, okay? The point being, I want to better explain this argument to people, because people may still struggle to understand the really core point that he's missing.
Historically speaking, people made predictions around things that were beginning to happen in their times, becoming huge and apocalyptic events in
Information
- Frequency: Updated Daily
- Published: June 25, 2024 at 12:36 PM UTC
- Length: 45 min
- Rating: Clean