
21 episodes

The Sentience Institute Podcast, by Sentience Institute
Science
5.0 • 10 Ratings
Interviews with activists, social scientists, entrepreneurs and change-makers about the most effective strategies to expand humanity’s moral circle, with an emphasis on expanding the circle to farmed animals. Host Jamie Harris, a researcher at moral expansion think tank Sentience Institute, takes a deep dive with guests into advocacy strategies from political initiatives to corporate campaigns to technological innovation to consumer interventions, and discusses advocacy lessons from history, sociology, and psychology.
Matti Wilks on human-animal interaction and moral circle expansion
“Speciesism being socially learned is probably our most dominant theory of why we think we're getting the results that we're getting. But to be very clear, this is super early research. We have a lot more work to do. And it's actually not just in the context of speciesism that we're finding this stuff. So basically we've run some studies showing that while adults will prioritize humans over even very large numbers of animals in sort of tragic trade-offs, children are much more likely to prioritize humans' and animals' lives similarly. So an adult will save one person over a hundred dogs or pigs, whereas children will save, I think it was two dogs or six pigs, over one person. And this was children that were about five to 10 years old. So often when you look at biases in development, so something like minimal group bias, that peaks quite young.”
Matti Wilks
What does our understanding of human-animal interaction imply for human-robot interaction? Is speciesism socially learned? Does expanding the moral circle dilute it? Why is there a correlation between naturalness and acceptableness? What are some potential interventions for moral circle expansion and spillover from and to animal advocacy?
Matti Wilks is a lecturer (assistant professor) in psychology at the University of Edinburgh. She uses approaches from social and developmental psychology to explore barriers to prosocial and ethical behavior—right now she is interested in factors that shape how we morally value others, the motivations of unusually altruistic groups, why we prefer natural things, and our attitudes towards cultured meat. Matti completed her PhD in developmental psychology at the University of Queensland, Australia, and was a postdoc at Princeton and Yale Universities.
Topics discussed in the episode:
Introduction (0:00)
What matters ethically? (1:00)
The link between animals and digital minds (3:10)
Higher vs lower orders of pleasure/suffering (4:15)
Psychology of human-animal interaction and what that means for human-robot interaction (5:40)
Is speciesism socially learned? (10:15)
Implications for animal advocacy strategy (19:40)
Moral expansiveness scale and the moral circle (23:50)
Does expanding the moral circle dilute it? (27:40)
Predictors for attitudes towards species and artificial sentience (30:05)
Correlation between naturalness and acceptableness (38:30)
What does our understanding of naturalness and acceptableness imply for attitudes towards cultured meat? (49:00)
How can we counter concerns about naturalness in cultured meat? (52:00)
What does our understanding of attitudes towards naturalness imply for artificial sentience? (54:00)
Interventions for moral circle expansion and spillover from and to animal advocacy (56:30)
Academic field building as a strategy for developing a cause area (1:00:50)
Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast
David Gunkel on robot rights
“Robot rights are not the same thing as a set of human rights. Human rights are very specific to a singular species, the human being. Robots may have some overlapping powers, claims, privileges, or immunities that would need to be recognized by human beings, but their grouping or sets of rights will be perhaps very different.”
David Gunkel
Can and should robots and AI have rights? What’s the difference between robots and AI? Should we grant robots rights even if they aren’t sentient? What might robot rights look like in practice? What philosophies and other ways of thinking are we not exploring enough? What might human-robot interactions look like in the future? What can we learn from science fiction? Can and should we be trying to actively get others to think of robots in a more positive light?
David J. Gunkel is an award-winning educator, scholar, and author, specializing in the philosophy and ethics of emerging technology. He is the author of over 90 scholarly articles and book chapters and has published twelve internationally recognized books, including The Machine Question: Critical Perspectives on AI, Robots, and Ethics (MIT Press 2012), Of Remixology: Ethics and Aesthetics After Remix (MIT Press 2016), and Robot Rights (MIT Press 2018). He currently holds the position of Distinguished Teaching Professor in the Department of Communication at Northern Illinois University (USA).
Topics discussed in the episode:
Introduction (0:00)
Why robot rights and not AI rights? (1:12)
The other question: can and should robots have rights? (5:39)
What is the case for robot rights? (10:21)
What would robot rights look like? (19:50)
What can we learn from other, particularly non-western, ways of thinking for robot rights? (26:33)
What will human-robot interaction look like in the future? (33:20)
How artificial sentience being less discrete than biological sentience might affect the case for rights (40:45)
Things we can learn from science fiction for human-robot interaction and robot rights (42:55)
Can and should we do anything to encourage people to see robots in a more positive light? (47:55)
Why David pursued philosophy of technology over computer science more generally (52:01)
Does having technical expertise give you more credibility? (54:01)
Shifts in thinking about robots and AI David has noticed over his career (58:03)
Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast
Kurt Gray on human-robot interaction and mind perception
“And then you're like, actually, I can't know what it's like to be a bat—again, the problem of other minds, right? There's this fundamental divide between a human mind and a bat, but at least a bat's a mammal. What is it like to be an AI? I have no idea. So I think [mind perception] could make us less sympathetic to them in some sense because it's—I don't know, they're a circuit board, there are these algorithms, and so who knows? I can subjugate them now under the heel of human desire because they're not like me.”
Kurt Gray
What is mind perception? What do we know about mind perception of AI/robots? Why do people like to use AI for some decisions but not moral decisions? Why would people rather give up hundreds of hospital beds than let AI make moral decisions?
Kurt Gray is a Professor at the University of North Carolina at Chapel Hill, where he directs the Deepest Beliefs Lab and the Center for the Science of Moral Understanding. He studies morality, politics, religion, perceptions of AI, and how best to bridge divides.
Topics discussed in the episode:
Introduction (0:00)
How did a geophysicist come to be doing social psychology? (0:51)
What do the Deepest Beliefs Lab and the Center for the Science of Moral Understanding do? (3:11)
What is mind perception? (4:45)
What is a mind? (7:45)
Agency vs experience, or thinking vs feeling (9:40)
Why do people see moral exemplars as being insensitive to pain? (10:45)
How will people perceive minds in robots/AI? (18:50)
Perspective taking as a tool to reduce substratism towards AI (29:30)
Why don’t people like using AI to make moral decisions? (32:25)
What would be the moral status of AI if they are not sentient? (38:00)
The presence of robots can make people seem more similar (44:10)
What can we expect about discrimination towards digital minds in the future? (48:30)
Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast
Thomas Metzinger on a moratorium on artificial sentience development
“And from an applied ethics perspective, I think the most important thing is: if we want to minimize suffering in the world, and if we want to minimize animal suffering, we should always err on the side of caution; we should always be on the safe side.”
Thomas Metzinger
Should we advocate for a moratorium on the development of artificial sentience? What might that look like, and what would be the challenges?
Thomas Metzinger was a full professor of theoretical philosophy at the Johannes Gutenberg University Mainz until 2022 and is now professor emeritus. Before that, he was president of the German Cognitive Science Society from 2005 to 2007 and president of the Association for the Scientific Study of Consciousness from 2009 to 2011, and he has been an adjunct fellow at the Frankfurt Institute for Advanced Studies since 2011. He is also a co-founder of the German Effective Altruism Foundation, president of the Barbara Wengeler Foundation, and on the advisory board of the Giordano Bruno Foundation. In 2009, he published a popular book, The Ego Tunnel: The Science of the Mind and the Myth of the Self, which addresses a wider audience and discusses the ethical, cultural, and social consequences of consciousness research. From 2018 to 2020, Metzinger worked as a member of the European Commission's High-Level Expert Group on Artificial Intelligence.
Topics discussed in the episode:
Introduction (0:00)
Defining consciousness and sentience (2:12)
What features might a sentient artificial intelligence have? (9:55)
Moratorium on artificial sentience development (17:11)
Case for a moratorium (37:46)
What would a moratorium look like? (49:30)
Social hallucination problem (53:07)
Incentives of politicians (55:49)
Incentives of tech companies (1:01:51)
Local vs global moratoriums (1:07:18)
Repealing the moratorium (1:11:52)
Information hazards (1:16:01)
Trends in thinking on artificial sentience over time (1:22:21)
What are the open problems in this field, and how might someone work on them with their career? (1:39:38)
Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast
Tobias Baumann of the Center for Reducing Suffering on global priorities research and effective strategies to reduce suffering
“We think that the most important thing right now is capacity building. We’re not so much focused on having impact now or in the next year, we’re thinking about the long term and the very big picture… Now, what exactly does capacity building mean? It can simply mean getting more people involved… I would frame it more in terms of building a healthy community that’s stable in the long term… And one aspect that’s just as important as the movement building is that we need to improve our knowledge of how to best reduce suffering. You could call it ‘wisdom building’… And CRS aims to contribute to [both] through our research… Some people just naturally tend to be more inclined to explore a lot of different topics… Others have maybe more of a tendency to dive into something more specific and dig up a lot of sources and go into detail and write a comprehensive report and I think both these can be very valuable… What matters is just that overall your work is contributing to progress on… the most important questions of our time.”
Tobias Baumann
There are many different ways that we can reduce suffering or have other forms of positive impact. But how can we increase our confidence about which actions are most cost-effective? And what can people do now that seems promising?
Tobias Baumann is a co-founder of the Center for Reducing Suffering, a new longtermist research organisation focused on figuring out how we can best reduce severe suffering, taking into account all sentient beings.
Topics discussed in the episode:
Who is currently working to reduce risks of astronomical suffering in the long-term future (“s-risks”) and what are they doing? (2:50)
What are “information hazards,” how concerned should we be about them, and how can we reduce them? (12:21)
What is the Center for Reducing Suffering’s theory of change and what are its research plans? (17:52)
What are the main bottlenecks to further progress in the field of work focused on reducing s-risks? (29:46)
Does it make more sense to work directly on reducing specific s-risks or on broad risk factors that affect many different risks? (34:27)
Which particular types of global priorities research seem most useful? (38:15)
What are some of the implications of taking a longtermist approach for animal advocacy? (45:31)
If we decide that focusing directly on the interests of artificial sentient beings is a high priority, what are the most important next steps in research and advocacy? (1:00:04)
What are the most promising career paths for reducing s-risks? (1:09:25)
Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast
Tobias Baumann of the Center for Reducing Suffering on moral circle expansion, cause prioritization, and reducing risks of astronomical suffering in the long-term future
“If some beings are excluded from moral consideration then the results are usually quite bad, as evidenced by many forms of both current and historical suffering… I would definitely say that those that don’t have any sort of political representation or power are at risk. That’s true for animals right now; it might be true for artificially sentient beings in the future… And yeah, I think that is a plausible priority. Another candidate would be to work on other broad factors to improve the future such as by trying to fix politics, which is obviously a very, very ambitious goal… [Another candidate would be] trying to shape transformative AI more directly. We’ve talked about the uncertainty there is regarding the development of artificial intelligence, but at least there’s a certain chance that people are right about this being a very crucial technology; and if so, shaping it in the right way is very important obviously.”
Tobias Baumann
Expanding humanity’s moral circle to include farmed animals and other sentient beings is a promising strategy for reducing the risk of astronomical suffering in the long-term future. But are there other causes that we could focus on that might be better? And should reducing future suffering actually be our goal?
Tobias Baumann is a co-founder of the Center for Reducing Suffering, a new longtermist research organisation focused on figuring out how we can best reduce severe suffering, taking into account all sentient beings.
Topics discussed in the episode:
Why moral circle expansion is a plausible priority for those of us focused on doing good (2:17)
Tobias’ view on why we should accept longtermism — the idea that the value of our actions is determined primarily by their impacts on the long-term future (5:50)
Are we living at the most important time in history? (14:15)
When, if ever, will transformative AI arrive? (20:35)
Assuming longtermism, should we prioritize focusing on risks of astronomical suffering in the long-term future (s-risks) or on maximizing the likelihood of positive outcomes? (27:00)
What sorts of future beings might be excluded from humanity’s moral circle in the future, and why might this happen? (37:45)
What are the main reasons to believe that moral circle expansion might not be a very promising way to have positive impacts on the long-term future? (41:40)
Should we focus on other forms of values spreading that might be broadly positive, rather than expanding humanity’s moral circle? (48:55)
Beyond values spreading, which other causes should people focused on reducing s-risks consider prioritizing? (50:25)
Should we expend resources on moral circle expansion and other efforts to reduce s-risk now or just invest our money and resources in order to benefit from compound interest? (1:00:02)
If we decide to focus on moral circle expansion, should we focus on the current frontiers of the moral circle, such as farmed animals, or focus more directly on groups of future beings we are concerned about? (1:03:06)
Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast