Boy, this conversation with End of Science author John Horgan and transhumanism fan Steve Fuller was fun, given how dark some of our conclusions were. Read the pre-show post for relevant links and background: Precautionary versus "proactionary" strategies for managing the present with the future in mind.

Fuller said it's important to let go of many of our worries about how present actions relating to science and technology might affect future generations. "We get very hyped up about future generations," he said. He added:

> I think we need to imagine, when we think about future generations, that their baseline for what counts as a good life will be whatever they're born into. So in other words, they will not be thinking like us, just like we're not thinking like Aristotle…. But there is an issue here about how you face the uncertainty of the future. And this gets to the business of the precautionary principle versus what I've proposed, the proactionary principle.
>
> These are two different attitudes toward risk, right? The precautionary principle you could trace back to the Hippocratic Oath: above all, cause no harm. So it's a harm-avoidance approach to risk, because you treat uncertainty under those conditions as a potential threat. You set very high standards with regard to regulation for new technologies, things like that. The European Union actually has a version of the precautionary principle built into its environmental regulations.
>
> The other is the thing that is associated with transhumanism, and that is the proactionary principle. The proactionary principle treats risk as an opportunity. So in other words, you treat it almost like a fair throw of the dice. You adopt the attitude kind of the way entrepreneurs do: when they see an uncertain situation, they're going to make something out of it. And this idea then leads to a much more open sense of what the future can be.
I mused on the reality that there's little sign among the current world's great powers (big tech firms, the oligarch class, superpowers) that regulation can be meaningfully applied. Horgan, long largely a techno-optimist, wrapped up our chat with this uplifting thought:

> And I've just concluded over the last five years, and it's just been growing on me lately, that humanity doesn't really give a s**t about understanding, illumination. It has always been all about power, with the quest for truth as a kind of marketing and window dressing. My view of the future of science and even of civilization is quite dark right now.

There is much, much more. Please listen to the full show if you can and post reactions. I'll drop the paywall, although I would love it if a few more of you decide to chip in to help me keep this Sustain What project going. Please consider becoming a financial supporter of Sustain What.

Insert, Feb. 19 - Via Google AI, here's a summary:

* Introduction to the Guests and Discussion Themes (0:44-2:25)
  * Andrew Revkin introduces his long-time friends and intellectual sparring partners, John Horgan and Steve Fuller.
  * The core topics of discussion are set: artificial intelligence (or synthetic/simulated intelligence), the "end of truth," and the current state of our information environment.
* Steve Fuller's Background and Approach to Knowledge (2:38-5:04)
  * Steve Fuller explains his academic background in the history and philosophy of science.
  * He describes his focus on the social and political dimensions of science, particularly how technology and changing political economies influence the production and evaluation of knowledge.
* The Impact of Social Media on Knowledge and Power (5:09-7:00)
  * The discussion shifts to how social media has drastically altered the dissemination of knowledge and the dynamics of power, especially in politics.
  * Steve Fuller highlights Andrew Breitbart and Steve Bannon as pioneers in using social media to channel information for ideological purposes, leading to a fragmented epistemic landscape.
* John Horgan's "End of Science" Revisited (10:56-12:05)
  * John Horgan reflects on his book The End of Science, suggesting that major scientific breakthroughs aimed at understanding the world (like relativity, quantum mechanics, and evolutionary theory) are largely behind us.
  * He expresses a dark view of the future of science and civilization, concluding that humanity primarily seeks power rather than truth or illumination.
* AI: Horror vs. Positive Potential (15:13-17:03)
  * John Horgan admits his horror at AI, viewing it as bringing out his "Luddite" tendencies, despite his love for other technologies like his MacBook and iPhone.
  * He contrasts this with Steve Fuller's more positive outlook on AI, particularly its potential to utilize vast amounts of scientific material that currently goes unused.
* The "Replication Crisis" and AI's Role in Science (27:00-27:51)
  * Steve Fuller attributes the "replication crisis" in science to narrow and competitive research frontiers, where pressure to be first leads to cutting corners.
  * He suggests that a broader distribution of scientific effort would reduce incentives for fraud.
* The Future of Wikipedia in the Age of Generative AI (28:08-29:00)
  * Steve Fuller predicts that generative AI will put Wikipedia out of business because AI can provide customized, Wikipedia-style answers more efficiently.
  * He views Wikipedia as "old-fashioned crowdsourcing" that is laborious and prone to disputes.
* Science as Faith and the "Conservation of Ignorance" (1:16:14-1:19:00)
  * The host plays a clip of Pete Seeger discussing his father's view that scientists hold the "most dangerous religious belief": the idea that an infinite increase in empirical information is inherently good.
  * John Horgan challenges this, noting that science, unlike religious faith, has materially altered the world through technologies like the hydrogen bomb.
* The "Conspiracy Mentality" and Endless Data Seeking (1:19:00-1:20:05)
  * Steve Fuller connects Pete Seeger's critique to the "conspiracy mentality," where people constantly seek more information, believing something is being hidden.
  * He argues that science, when working correctly, engages in "self-limitation" through method and tests, drawing lines rather than seeking data endlessly.

And do share this post with friends concerned about the future and the present state of science.

Thank you Larry Hogue, Jeanne Manion, Karen Malpede, Eleanor Margulis, and many others for tuning into my live video!

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit revkin.substack.com/subscribe