Fluidity
Matt Arnold
-
- Society & Culture
-
After the collapse of the 20th-century systematic mode of social organization, how can we move from our internet-enabled atomized mode, toward a fluid mode? We take problems of meaning-making, typically considered spiritual, and turn them into practical problems, which are more tractable.
"Meaningness" begins with this episode: https://fluidity.libsyn.com/an-appetizer-purpose
"Meaningness And Time" begins with this episode: https://fluidity.libsyn.com/meaningness-and-time-how-meaning-fell-apart
"In The Cells Of The Eggplant" begins with this episode: https://fluidity.libsyn.com/intro-to-metarationality
You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks
If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
This is a nonfiction audiobook narrated by Matt Arnold with the permission of the author, David Chapman. Full text at: https://meaningness.com
Please email me at fluidity@hey.com.
-
Only You Can Stop An AI Apocalypse
We now begin narrating the book Better Without AI, by David Chapman.
https://betterwithout.ai/only-you-can-stop-an-AI-apocalypse
You can support the podcast and get episodes a week early, by supporting the Patreon:
https://www.patreon.com/m/fluidityaudiobooks
If you like the show, consider buying me a coffee:
https://www.buymeacoffee.com/mattarnold
Original music by Kevin MacLeod.
This podcast is under a Creative Commons Attribution-NonCommercial 4.0 International License.
-
Scary AI, and, Superintelligence
Scary AI: Apocalyptic AI scenarios usually involve some qualitatively different future form of artificial intelligence. No one can explain clearly what would make that exceptionally dangerous in a way current AI isn’t. This confusion draws attention away from risks of existing and near-future technologies, and from ways of forestalling them.
https://betterwithout.ai/scary-AI
Superintelligence: Maybe AI will kill you before you finish reading this section. The extreme scenarios typically considered by the AI safety movement are possible in principle, but unfortunately no one has any idea how to prevent them. This book discusses moderate catastrophes instead, offering pragmatic approaches to avoiding or diminishing them.
https://betterwithout.ai/superintelligence
You can support the podcast and get episodes a week early, by supporting the Patreon:
https://www.patreon.com/m/fluidityaudiobooks
If you like the show, consider buying me a coffee:
https://www.buymeacoffee.com/mattarnold
Original music by Kevin MacLeod.
This podcast is under a Creative Commons Attribution-NonCommercial 4.0 International License.
-
Mind-Like AI
We have a powerful intuition that some special mental feature, such as self-awareness, is a prerequisite to intelligence. This causes confusion because we don’t have a coherent understanding of what the special feature is, nor what role it plays in intelligent action. It may be best to treat mental characteristics as in the eye of the beholder, and therefore mainly irrelevant to AI risks.
https://betterwithout.ai/mind-like-AI
You can support the podcast and get episodes a week early, by supporting the Patreon:
https://www.patreon.com/m/fluidityaudiobooks
If you like the show, consider buying me a coffee:
https://www.buymeacoffee.com/mattarnold
Original music by Kevin MacLeod.
This podcast is under a Creative Commons Attribution-NonCommercial 4.0 International License.
-
Autonomous AI Agents
Most apocalyptic scenarios involve an AI acting as an autonomous agent, pursuing goals that conflict with human ones. Many people reject AI risk, saying that machines can’t have real goals or intentions. However, agency seems nebulous; and subtracting “real” agency from the scenario doesn’t seem to remove the risk.
https://betterwithout.ai/agency
A video in which white blood cells look as if they have agency:
https://www.youtube.com/watch?v=3KrCmBNiJRI
The US National Security Commission on Artificial Intelligence’s 2021 report, which recommends spending $32 billion per year on AI research to dramatically increase the agency of weapons systems:
https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf
You can support the podcast and get episodes a week early, by supporting the Patreon:
https://www.patreon.com/m/fluidityaudiobooks
If you like the show, consider buying me a coffee:
https://www.buymeacoffee.com/mattarnold
Original music by Kevin MacLeod.
This podcast is under a Creative Commons Attribution-NonCommercial 4.0 International License.
-
Diverse Forms Of Agency
It’s a mistake to think that human-like agency is the only dangerous kind. That risks overlooking AIs that cause agent-like harms in inhuman ways.
https://betterwithout.ai/diverse-agency#fn_meme_critics
You can support the podcast and get episodes a week early, by supporting the Patreon:
https://www.patreon.com/m/fluidityaudiobooks
If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
Original music by Kevin MacLeod.
This podcast is under a Creative Commons Attribution-NonCommercial 4.0 International License.
-
Motivation, Morals, and Monsters
Thanks for your patience while I ran Fluidity Forum. We now resume "Better Without AI" by David Chapman.
Speculations about autonomous AI assume simplistic theories of motivation, and confuse those with ethical theories. Building AI systems on these ideas would produce monsters.
https://betterwithout.ai/AI-motivation
Coherent Extrapolated Volition
https://betterwithout.ai/AI-motivation#fn_Turchin:~:text=%E2%80%9C-,Coherent%20Extrapolated%20Volition,-%E2%80%9D%20at%20LessWrong%2C%20undated
A.I. Alignment Problem: "Human Values" Don't Actually Exist
https://www.lesswrong.com/posts/ngqvnWGsvTEiTASih/ai-alignment-problem-human-values-don-t-actually-exist
“Can We Survive Technology?” by John von Neumann
http://geosci.uchicago.edu/~kite/doc/von_Neumann_1955.pdf
You can support the podcast and get episodes a week early, by supporting the Patreon:
https://www.patreon.com/m/fluidityaudiobooks
If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
Original music by Kevin MacLeod.
This podcast is under a Creative Commons Attribution-NonCommercial 4.0 International License.