Welcome to the Philosophy Exchange Podcast! In our general podcast you will hear our members talk about philosophical ideas and their experiences in academia, with new episodes published every two weeks. Additionally, we have two new podcast series: PX Interviews, where we interview people who have influenced our work, and a special series, PX on AI, dedicated to Artificial Intelligence. We hope you enjoy listening to the podcast, and make sure to check us out on Twitter @PhilXchange and on our website philosophyexchange.org to get to know more about the community!
PX and PhD Research - Nicholas Makins: "Attitudinal Ambivalence: Moral Uncertainty for Non-Cognitivists"
We interview Nicholas Makins on his recent publication "Attitudinal Ambivalence: Moral Uncertainty for Non-Cognitivists". In this paper, Makins adopts a non-cognitivist account of moral judgements, which characterizes them not as beliefs but as conative attitudes, such as desires or states of the will. The aim of the article is to show that this view can be better defended if one understands moral doubt not in terms of credal uncertainty (i.e., lack of information) but as ambivalence: a situation in which two conflicting desires clash with each other. This fascinating proposal is illustrated with several examples over the course of the conversation, which opens up insightful reflection on the nature of moral attitudes and the strategy one should adopt in cases of moral doubt. You can read the paper here: https://doi.org/10.1080/00048402.2021.1908380
PX on AI - Edinburgh’s Shannon Vallor on AI and Society
In this episode Karl and Roze are joined by Philosophy Exchange member Johanna Sarisoy to interview Shannon Vallor, Professor at the Edinburgh Futures Institute (EFI). Together, they discuss the future of artificial intelligence (AI) from a variety of perspectives, including what it means to develop moral AI. They also discuss how experts can communicate both within academia and with the broader public.
PX and PhD Research - Cecily Whiteley: "Aphantasia, imagination and dreaming"
In this episode, we interview Cecily Whiteley, a PhD student at the LSE who recently published an article titled "Aphantasia, imagination and dreaming" (2021). In her article, she tests a recent philosophical theory of dreaming as a type of imagination against the empirical research on aphantasia, a condition in which people are unable to voluntarily create mental images. Through this appeal to empirical enquiry, Whiteley shows the inadequacy of the standard philosophical view of dreaming as a form of imagination and proposes her own amended account of dreaming. The discussion with the author raises interesting questions about dreaming and imagination, but also about the interplay of science and philosophy and the role of philosophers of science in the progress of both disciplines. The paper can be found here: https://doi.org/10.1007/s11098-020-01526-8
PX Interviews - Jonathan Birch
In this second episode of PX Interviews, we chat with Dr. Jonathan Birch, Associate Professor at the LSE, specializing in the philosophy of the biological sciences. We ask him about his research on animal welfare, how an understanding of animal sentience could transform our everyday lives, and some of the highlights of his academic career.
PX - How We Value Nature: Perspectives from Philosophy and Economics
If we put a price on nature, is everything up for sale? And if we don't, is everything up for grabs? We first discuss valuing nature from a philosophical perspective before moving to the question of putting a monetary value on nature. Among other things, we talk about the Dasgupta Review, which proposes including biodiversity in our economic accounting, and we debate the pros and cons of doing so.
PX on AI - Cooperative AI with Ed Hughes of Google's DeepMind
In this episode Karl and Roze interview Ed Hughes from Google's DeepMind. They chat about a recent paper* co-written by Ed on Cooperative AI and investigate the following questions: What is cooperative AI, and why is it important? What are the capabilities of cooperative AI? Finally, should humanity be afraid of this new approach, or will it be beneficial for society?
*For those interested, the paper can be found here: https://arxiv.org/abs/2012.08630