
75 episodes

Hear This Idea, hosted by Fin Moorhouse and Luca Righetti
Science
5.0 • 15 Ratings
Hear This Idea is a podcast showcasing new thinking in philosophy, the social sciences, and effective altruism. Each episode has an accompanying write-up at www.hearthisidea.com/episodes.
Liv Boeree on Healthy vs Unhealthy Competition
Liv Boeree is a former poker champion turned science communicator and podcaster, with a background in astrophysics. In 2014, she founded the nonprofit Raising for Effective Giving, which has raised more than $14 million for effective charities. Before retiring from professional poker in 2019, Liv was the Female Player of the Year for three years running. Currently she hosts the Win-Win podcast (you’ll enjoy it if you enjoy this podcast).
You can see more links and a full transcript at hearthisidea.com/episodes/boeree.
In this episode we talk about:
Is the ‘poker mindset’ valuable? Is it learnable?
How and why to bet on your beliefs — and whether there are outcomes you shouldn’t make bets on
Would cities be better without public advertisements?
What is Moloch, and why is it a useful abstraction?
How do we escape multipolar traps?
Why might advanced AI (not) act like profit-seeking companies?
What’s so important about complexity? What is complexity, for that matter?
You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
Jon Y (Asianometry) on Problems And Progress in Semiconductor Manufacturing
Jon Y is the creator of the Asianometry YouTube channel and accompanying newsletter. He describes his channel as making "video essays on business, economics, and history. Sometimes about Asia, but not always."
You can see more links and a full transcript at hearthisidea.com/episodes/asianometry
In this episode we talk about:
Compute trends driving recent progress in Artificial Intelligence;
The semiconductor supply chain and its geopolitics;
The buzz around LK-99 and superconductivity.
If you have any feedback, you can get a free book for filling out our new feedback form. You can also get in touch through our website or on Twitter. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
Steven Teles on what the Conservative Legal Movement Teaches about Policy Advocacy
Steven Teles is a Professor of Political Science at Johns Hopkins University and a Senior Fellow at the Niskanen Center. His work focuses on American politics, and he has written several books on topics such as elite politics, the judiciary, and mass incarceration.
You can see more links and a full transcript at hearthisidea.com/teles
In this episode we talk about:
The rise of the conservative legal movement;
How ideas can come to be entrenched in American politics;
Challenges in building a new academic field like "law and economics";
The limitations of doing quantitative evaluations of advocacy groups.
If you have any feedback, you can get a free book for filling out our new feedback form. You can also get in touch through our website or on Twitter. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
Guive Assadi on Whether Humanity Will Choose Its Future
Guive Assadi is a Research Scholar at the Center for the Governance of AI. Guive’s research focuses on the conceptual clarification of, and prioritisation among, potential risks posed by emerging technologies. He holds a master’s in history from Cambridge University, and a bachelor’s from UC Berkeley.
In this episode, we discuss Guive's paper, Will Humanity Choose Its Future?
What is an 'evolutionary future', and would it count as an existential catastrophe?
How did the agricultural revolution deliver a world which few people would have chosen?
What does it mean to say that we are living in the dreamtime? Will it last?
What competitive pressures in the future could drive the world to undesired outcomes?
Digital minds
Space settlement
What measures could prevent an evolutionary future, and allow humanity to more deliberately choose its future?
World government
Strong global coordination
Defensive advantage
Should this all make us more or less hopeful about humanity's future?
Ideas for further research
Guive's recommended reading:
Rationalist Explanations for War by James D. Fearon
Meditations on Moloch by Scott Alexander
The Age of Em by Robin Hanson
What is a Singleton? by Nick Bostrom
Other key links:
Will Humanity Choose Its Future? by Guive Assadi
Colder Wars by Gwern
The Secret of Our Success: How Culture Is Driving Human Evolution, Domesticating Our Species, and Making Us Smarter by Joseph Henrich (and a review by Scott Alexander)
Michael Cohen on Input Tampering in Advanced RL Agents
Michael Cohen is a DPhil student at the University of Oxford with Mike Osborne. He will be starting a postdoc with Professor Stuart Russell at UC Berkeley, with the Center for Human-Compatible AI. His research considers the expected behaviour of generally intelligent artificial agents, with a view to designing agents that we can expect to behave safely.
You can see more links and a full transcript at www.hearthisidea.com/episodes/cohen.
We discuss:
What is reinforcement learning, and how is it different from supervised and unsupervised learning?
Michael's recently co-authored paper titled 'Advanced artificial agents intervene in the provision of reward'
Why might it be hard to convey what we really want to RL learners — even when we know exactly what we want?
Why might advanced RL systems tamper with their sources of input, and why could this be very bad?
What assumptions need to hold for this "input tampering" outcome?
Is reward really the optimisation target? Do models "get reward"?
What's wrong with the analogy between RL systems and evolution?
Key links:
Michael's personal website
'Advanced artificial agents intervene in the provision of reward' by Michael K. Cohen, Marcus Hutter, and Michael A. Osborne
'Pessimism About Unknown Unknowns Inspires Conservatism' by Michael Cohen and Marcus Hutter
'Intelligence and Unambitiousness Using Algorithmic Information Theory' by Michael Cohen, Badri Vallambi, and Marcus Hutter
'Quantilizers: A Safer Alternative to Maximizers for Limited Optimization' by Jessica Taylor
'RAMBO-RL: Robust Adversarial Model-Based Offline Reinforcement Learning' by Marc Rigter, Bruno Lacerda, and Nick Hawes
Season 40 of Survivor
Katja Grace on Slowing Down AI and Whether the X-Risk Case Holds Up
Katja Grace is a researcher and writer. She runs AI Impacts, a research project trying to incrementally answer decision-relevant questions about the future of artificial intelligence (AI). Katja blogs primarily at worldspiritsockpuppet, and indirectly at Meteuphoric, Worldly Positions, LessWrong and the EA Forum.
We discuss:
What is AI Impacts working on?
Counterarguments to the basic AI x-risk case
Reasons to doubt that superhuman AI systems will be strongly goal-directed
Reasons to doubt that if goal-directed superhuman AI systems are built, their goals will be bad by human lights
Aren't deep learning systems fairly good at understanding our 'true' intentions?
Reasons to doubt that (misaligned) superhuman AI would overpower humanity
The case for slowing down AI
Is AI really an arms race?
Are there examples from history of valuable technologies being limited or slowed down?
What does Katja think about the recent open letter on pausing giant AI experiments?
Why read George Saunders?
Key links:
World Spirit Sock Puppet (Katja's main blog)
Counterarguments to the basic AI x-risk case
Let's think about slowing down AI
We don't trade with ants
Thank You, Esther Forbes (George Saunders)
You can see more links and a full transcript at hearthisidea.com/episodes/grace.
Customer Reviews
Excellent philosophy (etc) podcasts
I’ve listened to the philosophy episodes (plus the episode with Diane Coyle) and would recommend these extremely highly. (I’m confident that the podcasts in other areas are also similarly excellent.)
The guests are extremely good at explaining their work in a way that’s accessible without dumbing it down, and the hosts ask really helpful questions. The length of the episodes is about right. They would be of interest to everyone from enthusiastic amateur philosophers to undergrads and faculty.
Beautifully done
Just listening to the episode with Simon Beard, and it's fantastic. Smart, quick, energetic, super understandable. The important stuff, and the fascinating stuff.
Consistently interesting and engaging
Consistently interesting conversations from the charming Fin and Luca!