What happens when artificial intelligence starts weighing in on our moral decisions? Matt Prewitt is joined by Meaning Alignment Institute co-founder Joe Edelman to explore this thought-provoking territory, examining how AI is already shaping our daily experiences and values through social media algorithms. They discuss the tools developed to help individuals negotiate their values and the implications of AI in moral reasoning, venturing into compelling questions about human-AI symbiosis, the nature of meaningful experiences, and whether machines can truly understand what matters to us. For anyone intrigued by the future of human consciousness and decision-making in an AI-integrated world, this discussion opens up fascinating possibilities, and potential pitfalls, we may not have considered.
Links & References:
References:
- CouchSurfing - Wikipedia | CouchSurfing.org | Website
- Tristan Harris: How a handful of tech companies control billions of minds every day | TED Talk
- Center for Humane Technology | Website
- MEANING ALIGNMENT INSTITUTE | Website
- Replika - AI Girlfriend/Boyfriend
- Will AI Improve Exponentially At Value Judgments? - by Matt Prewitt | RadicalxChange
- Moral Realism (Stanford Encyclopedia of Philosophy)
- Summa Theologica - Wikipedia
- When Generative AI Refuses To Answer Questions, AI Ethics And AI Law Get Deeply Worried | AI Refusals
- Amanda Askell: The 100 Most Influential People in AI 2024 | TIME | Amanda Askell's work at Anthropic
- Overcoming Epistemology by Charles Taylor
- God, Beauty, and Symmetry in Science - Catholic Stand | Thomas Aquinas on symmetry
- Friedrich Hayek - Wikipedia | “Hayekian”
- Eliezer Yudkowsky - Wikipedia | “AI policy people, especially in this kind of Yudkowskyian scene”
- Resource-rational analysis: Understanding human cognition as the optimal use of limited computational resources | Resource rational (cognitive science term)
Papers & posts mentioned:
- [2404.10636] What are human values, and how do we align AI to them? | Paper by Oliver Klingefjord, Ryan Lowe, Joe Edelman
- Model Integrity - by Joe Edelman and Oliver Klingefjord | Meaning Alignment Institute Substack
Bios:
Joe Edelman is a philosopher, sociologist, and entrepreneur whose work spans from theoretical philosophy to practical applications in technology and governance. He invented the meaning-based metrics used at CouchSurfing, Facebook, and Apple, and co-founded the Center for Humane Technology and the Meaning Alignment Institute. His biggest contribution is a definition of "human values" that's precise enough to create product metrics, aligned ML models, and values-based democratic structures.
Joe’s Social Links:
- Meaning Alignment Institute | Website
- Meaning Alignment Institute (@meaningaligned) / X
- Joe Edelman (@edelwax) / X
Matt Prewitt (he/him) is a lawyer, technologist, and writer. He is the President of the RadicalxChange Foundation.
Matt’s Social Links:
- ᴍᴀᴛᴛ ᴘʀᴇᴡɪᴛᴛ (@m_t_prewitt) / X
Connect with RadicalxChange Foundation:
- RadicalxChange Website
- @RadxChange | Twitter
- RxC | YouTube
- RxC | Instagram
- RxC | LinkedIn
- Join the conversation on Discord.
Credits:
- Produced by G. Angela Corpus.
- Co-Produced, Edited, and Audio Engineered by Aaron Benavides.
- Executive Produced by G. Angela Corpus and Matt Prewitt.
- Intro/outro music, “Wind in the Willows” by MagnusMoone, is licensed under an Attribution-NonCommercial-ShareAlike 3.0 Unported License (CC BY-NC-SA 3.0).
Information
- Frequency: Every two weeks
- Published: December 6, 2024, 19:33 UTC
- Duration: 1 hr 22 min
- Episode: 23
- Rating: Explicit