9 episodes

Seeking to understand the world views of our mutuals.

mutualunderstanding.substack.com

Mutual Understanding Podcast Ben & Divia

    • Society & Culture
    • 5.0 • 2 Ratings


    Sarah Constantin

    Sarah is a director at Nanotronics and writes on Twitter and on Substack.
    Timestamps
    [00:01:00] Why AI is probably a good thing
    [00:08:00] The limits of current robotics
    [00:13:00] Nanotronics and process improvements with AI
    [00:23:00] Predictions on AI
    [00:26:00] Input output limitations on AI models
    [00:35:00] Drug discovery
    [00:45:00] Instrumental convergence
    [01:05:00] Progress studies
    [01:13:00] Morality
    [01:27:30] Game Theory and Social Norms
    [01:41:00] Shrimp Welfare
    [01:48:00] Longevity
    Show Notes
    AlphaDev discovers faster sorting algorithm
    It Looks Like You’re Trying to Take Over the World
    EA has a lying problem
    Goals (and not having them)
    Sarcopenia Experimental Treatments
    Reality Has a Surprising Amount of Detail



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit mutualunderstanding.substack.com

    • 2 hr 2 min
    Ozzie Gooen

    Coordination among 8 billion people is very tough. We're very far away from doing that with the intelligence that we have now; it's incredibly costly to send information to different people, and for different people to learn about each other in order to trust each other… we should kind of expect that people will have a lot of trouble coordinating on a big scale. But if we could, then we'd grow a lot!

    In terms of least ‘failing with dignity’, do we think that the public did a good job in trying to investigate this and was misled? Or do we think that the public just did a terrible job at doing anything coordinated and got hoodwinked super easily? I think we fall a lot more into the latter camp.
    Ozzie Gooen is the president of the Quantified Uncertainty Research Institute (QURI). In this episode we discuss Utilitarianism, improving trust in organizations, communities, and governments, and his work in building better software for thinking, forecasting, and estimation.
    Links from the Show
    * The Quantified Uncertainty Research Institute
    * The QURI Medley
    * A video explaining QURI’s new Relative Value Estimation tool.
    * Reports on the failure to identify Bernie Madoff’s fraud
    * How to Measure Anything
    * The OpenAI Board
    Timestamps
    [00:01:00] Worldview
    [00:08:00] Starter Pack Philosophies
    [00:12:00] How tools for better Epistemics fits into Utilitarianism
    [00:20:00] Mistake theory vs Conflict Theory
    [00:24:00] Improving EA institutions
    [00:30:00] Justified Trust in Governments
    [00:38:00] Contracts and monitoring for evaluating orgs
    [00:46:30] Estimation utopias
    [00:52:00] Centralization vs Decentralization
    [00:58:00] The value of a good investigation in the case of FTX
    [01:05:00] The importance of the OpenAI Board
    [01:09:00] Estimating Relative Values
    [01:18:15] Shared intellectual infrastructure
    [01:26:00] Epistemically mature civilizations



    • 1 hr 29 min
    Ronny Fernandez and Quintin Pope talk AI

    This episode may make more sense after reading Quintin’s LessWrong post about evolution and the sharp left turn, but I think we end up talking through most of it.
    Our conversation appeared as two Twitter Spaces:
    https://twitter.com/diviacaroline/status/1649925920529223680?s=20 and
    https://twitter.com/diviacaroline/status/1649957263535394821?s=20



    • 3 hr 1 min
    Robin Hanson Ronny Fernandez AI Conversation

    Robin talks about why he thinks developments in AI will be on a continuum with civilizational progress in general. Ronny, who is mostly trying to understand Robin, talks about why he thinks many of the things he values about humanity aren't on track to be preserved, and why he cares about that.
    Video version available at: https://www.youtube.com/watch?v=G-fBdPnwFrI



    • 2 hr 39 min
    Recursing on the Discourse

    In this episode we discuss several ‘frames’ - ways of processing and orienting to information - that are appearing in public discussions around AI, and in particular reflect on how grounded they are in longstanding debates within the AI X-Risk community.
    "It’s sort of like the democratic ideal where everybody's sort of out there at the public square arguing with each other, talking about the things… I don't think there are a lot of taboos about what to say yet either. It hasn't gotten super corrupted."
    "It's very rare that in the real world I'm encountering trolleys running over people, right? And you can set up these fake scenarios that mess up my moral intuitions… but maybe it’s not always virtuous to endorse the repugnant conclusion that these very limited, decoupled thought experiments bring you to."
    Links:
    * MusicLM from Google
    * TV’s War With the Robots Is Already Here (Writer’s Strike connection with AI)
    * Ronny Fernandez and Robin Hanson Discuss AI Futures
    * Roko and Alexandros on AI Risk
    * Universal Fire
    * Spandrels
    Timestamps
    [00:01:00] Updates on Politics
    [00:07:00] The State of the AI Discourse
    [00:13:00] The Yudkowskian Foomer vs Hansonian Continualist
    [00:18:00] Mood Affiliations
    [00:22:00] Biting Bullets vs Rejecting Fake Hypotheticals
    [00:28:00] Divide between Game Theorists and Engineers
    [00:36:00] The complexity of predictions about the future
    [00:47:00] How similar are the values of optimizing systems?
    [00:54:00] The rupture of the GMU Rationalist Alliance
    [01:00:00] Public Perception that AI has Moral Weight
    [01:03:00] AI Veganism/Freeganism
    [01:06:00] Overton Window Shifts



    • 1 hr 14 min
    Roko and Alexandros AI Conversation

    There's no transcript for this episode yet, but I wanted to get it out there anyway, since it can be hard to listen on Twitter Spaces.



    • 3 hr 47 min
