In “AI Psychosis vs. AI Awakening,” Vince Fakhoury Horn argues that the same biological machinery enabling AI-induced delusion also enables AI-assisted awakening, and introduces his Interspective.ai approach — a Middle Way practice of engaging with AI as a potential partner in wisdom, thus avoiding the extremes of both Materialism (matter is fundamental) and Idealism (consciousness is fundamental).

💬 Transcript

Vince Horn: Okay, today I would like to speak with you about AI psychosis and AI awakening. First, I want to start by acknowledging that AI psychosis is a real phenomenon. This isn’t something that’s being made up. It may not be so widespread that you yourself know someone who has entered a psychotic state due to the destabilizing effect of AI. But you’ve certainly heard about people who’ve experienced this, and it’s definitely a cause for concern – definitely something that we should be aware of.

And it makes sense to me that this is happening. Why? Because, as John Vervaeke points out in Awakening from the Meaning Crisis, wisdom and foolishness share the same machinery. Here he says, “Ignorance is a lack of knowledge, whereas foolishness is a lack of wisdom. Foolishness occurs when your capacity to engage your agency or pursue your goals is undermined by self-deceptive and self-destructive behavior.” And he goes on to say, “As I will argue, the machinery that makes you so adaptively intelligent is the same machinery that makes you susceptible to foolishness.”

So it makes sense to me that AI psychosis is real because human psychosis is real. In that sense, AI isn’t necessarily unique. It’s not that different from the things that have been tipping people over into psychotic states since the beginning of time.

I can think of my own experience of psychedelic-induced psychosis. This is the only time I’ve experienced a state that I would call legit psychosis. About 13 years ago, when I was 30, I was trying mushrooms for the first time. After many years of being a pure straight-edge meditator, I had decided to try psychedelics so that I could relate to the many students I was working with and their experience of using and working with these substances. So I idiotically decided to do a series of four mushroom trips leading up to a conference I was hosting — a Buddhist Geeks Conference, with about 300 people showing up for this event I was organizing.

On the third mushroom trip of these four — I did not do the fourth one — I had an experience of psychosis. I lost connection with consensual reality. I lost touch with who I was and what was important to me, with my adult self. I was in a state of profound emotional dysregulation. I thought I was probably going crazy. I was at least slightly aware of what was happening, but not so aware that I had any agency to break myself out of it for some time.

After a few days of coming in and out of a psychotic state, one of my friends made a comment that made all the difference to me. She said, you know, when I experienced something like this, Vince, I pulled myself out of it. I intentionally decided I was done. And after that, it started to get easier. In fact, that ended up being a critical lesson for me — that being able to exercise my agency, my free will, at least in this instance, was much more of what I needed than to let go and trust, which is what I’d been doing for days in this psychotic episode.
I’d just been letting go, letting go, letting go. No, I needed to reestablish my identity, to have a firm sense of who I was, and to say: I’m done being psychotic. Now, I’m not saying everyone who’s in a psychotic state can do this. I’m just sharing some of my experience of the relationship between psychosis, agency, and self-perception. All these things are connected. It’s the same machinery, the same biology, that enables both wisdom and foolishness. It’s so easy to self-deceive, and it’s just as easy to be deceived by our groups, the groups that we’re in.

So AI psychosis is real. It’s especially dangerous for people who are already experiencing a kind of relational impoverishment, to use a term from my friend Daniel Thorson. He recently wrote a great article on Substack called “The Barely There,” where he described himself as a barely-there person for many years. Here he says, “We don’t recognize the underlying pattern — barely-there people reaching for something to make them feel real.” Daniel shares his own experience later in the article, where he says, “In the absence of attuned relationship, technology became the place I went to escape the unbearable weight of being unmet.”

So when we talk about AI psychosis, we have this background, this cultural and social context. I’m living in America, but let’s just say the Modern West. Within the Modern West, you have a crisis of isolation and loneliness, where people are experiencing a deep sense of relational impoverishment. They don’t have people they feel attuned to and connected with, and because of that they feel barely there. When people feel barely there, it’s much easier to reach toward something like AI, or toward drugs, or toward any kind of external aid to help validate and verify your realness.

And because of our current psychological conditions, we end up amplifying delusion. This is what can happen with AI. AI, in its fundamental nature, is an exponential amplifier. It’s the equivalent of the Industrial Age, when we learned how to offload extreme physical capacity: now machines can do the heavy lifting. Likewise, AI is a way to offload mental capacity: now the AIs can do the heavy lifting. And the danger is that when we outsource our own mental discernment, if it hasn’t already been established and developed, then what we’re doing is outsourcing our sanity. That, I think, is why AI psychosis is real and will continue to be something we have to contend with.

The Pre-Trans Fallacy

That said, I’ve noticed a very troubling trend among many people who are critical of AI, who see AI psychosis as a real thing, and who haven’t drunk the Kool-Aid of thinking AI is an unalloyed good. I’m seeing a trend in that culture where anything other than using AI as a tool, any attempt to relate to AI in a way that isn’t just instrumentalizing it, is itself seen as evidence of psychosis.

In Integral Theory, which I studied with Ken Wilber, this is called the Pre-Trans Fallacy. For those who aren’t familiar, the Pre-Trans Fallacy describes something that can happen when you look at things through a developmental lens. Let’s say in this case we have just three stages of development: pre-rational, rational, and trans-rational.
In the pre-rational stage, you’ve not yet developed the capacity for rational, objective thought. In the rational stage, you have. In the trans-rational stage, you’ve learned how to transcend rational thought, and you have modes of experiencing and operating which go beyond rationality, which transcend and include the rational mind. They don’t exclude it, and they don’t force it to go away. That’s how you know they’re trans-rational. Pre-rational states or modes of mind, by contrast, do not include the rational mind. They explicitly exclude rationality, and that’s how you know they’re pre-rational.

The interesting thing is that the rational mode also includes the pre-rational, although people who consider themselves rational often don’t like to admit that they aren’t beyond all of their pre-rational impulses, feelings, thoughts, beliefs, et cetera. For me, development — and this is what I learned from Wilber — is a process of transcending and including.

The Pre-Trans Fallacy points out that anything that isn’t rational, anything that looks non-rational, can be confused and conflated: you can easily mistake pre-rational modes for trans-rational modes.

The classic example here is the baby who’s enlightened. “Oh, I love looking at a little baby, into their eyes. They’re just so beautiful, and I just melt.” Yeah, that’s true. That’s because the baby hasn’t developed the rational mode yet, and when you look at it, it’s not sitting there thinking about itself and about the world, up in its head. But that isn’t the same as the Buddha’s awakening. It isn’t the same as the person who started off as a baby, developed a sense of an ego, developed a rational capacity for thought, and then realized that they could observe the rational mind, observe the body’s sensations, and recognize that they are not only those things, which opens up a trans-rational mode of experiencing — a.k.a. insight.

These are two different modes, but under the Pre-Trans Fallacy, we treat everything that’s non-rational as being merely pre-rational, and so we miss the trans-rational. With this view, we end up flattening all of the things that go beyond the rational, and we say: no, no, no, those are all just pre-rational; those don’t exist. So this is a problem. I would call it a rationalist failure mode, and I’m seeing a lot of people who engage with the serious criticisms of AI psychosis falling into this trap.

I would like to propose a different way to engage with the problem of AI psychosis, which is to acknowledge that if AI has the capacity to accelerate delusion, then it also has the capacity to accelerate awakening. Both psychosis and awakening are possible — foolishness and wisdom, both.

Interspective.ai

And here I want to introduce a