60 episodes

Interviews with experts about the philosophy of the future.

Philosophical Disquisitions
John Danaher

    • Philosophy

    Mass Surveillance, Artificial Intelligence and New Legal Challenges

    [This is the text of a talk I gave to the Irish Law Reform Commission Annual Conference in Dublin on the 13th of November 2018. You can listen to an audio version of this lecture here or using the embedded player above.]

    In the mid-19th century, a set of laws was created to address the menace that newly-invented automobiles and locomotives posed to other road users. One of the first such laws was the English Locomotive Act 1865, which subsequently became known as the ‘Red Flag Act’. Under this act, any user of a self-propelled vehicle had to ensure that at least two people were employed to manage the vehicle and that one of these persons: “while any locomotive is in motion, shall precede such locomotive on foot by not less than sixty yards, and shall carry a red flag constantly displayed, and shall warn the riders and drivers of horses of the approach of such locomotives…”

    The motive behind this law was commendable. Automobiles did pose a new threat to other, more vulnerable, road users. But to modern eyes the law was also, clearly, ridiculous. To suggest that every car should be preceded by a pedestrian waving a red flag would seem to defeat the point of having a car: the whole idea is that it is faster and more efficient than walking. The ridiculous nature of the law eventually became apparent to its creators and all such laws were repealed in the 1890s, approximately 30 years after their introduction.[1]

    The story of the Red Flag laws shows that legal systems often get new and emerging technologies badly wrong. By focusing on the obvious or immediate risks, the law can neglect the long-term benefits and costs.

    I mention all this by way of warning. As I understand it, it has been over 20 years since the Law Reform Commission considered the legal challenges around privacy and surveillance. A lot has happened in the intervening decades. My goal in this talk is to give some sense of where we are now and what issues may need to be addressed over the coming years. In doing this, I hope not to forget the lesson of the Red Flag laws.

    1. What’s changed?

    Let me start with the obvious question. What has changed, technologically speaking, since the LRC last considered issues around privacy and surveillance? Two things stand out.

    First, we have entered an era of mass surveillance. The proliferation of digital devices — laptops, computers, tablets, smartphones, smart watches, smart cars, smart fridges, smart thermostats and so forth — combined with increased internet connectivity has resulted in a world in which we are all now monitored and recorded every minute of every day of our lives. The cheapness and ubiquity of data-collecting devices means that it is now, in principle, possible to imbue every object, animal and person with some data-monitoring technology. The result is what some scholars refer to as the ‘internet of everything’ and, with it, the possibility of a perfect ‘digital panopticon’. This era of mass surveillance puts increased pressure on privacy and, at least within the EU, has prompted significant legislative intervention in the form of the GDPR.

    Second, we have created technologies that can take advantage of all the data that is being collected. To state the obvious: data alone is not enough. As all lawyers know, it is easy to befuddle the opposition in a complex lawsuit by ‘dumping’ a lot of data on them during discovery. They drown in the resultant sea of information. It is what we do with the data that really matters.
    In this respect, it is the marriage of mass surveillance with new kinds of artificial intelligence that creates the new legal challenges that we must now tackle with some urgency. Artificial intelligence allows us to do three important things with the vast quantities of data that are now being collected: (i) it enables new kinds of pattern matching — what I mean here is…
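    As a minimal illustration of the kind of pattern matching at issue, consider the following toy sketch in Python. Everything in it (the sighting records, device names, and the threshold of three sightings) is hypothetical and invented purely for illustration; it is not code from the talk. The point is only that trivial counting over a bulk log of device sightings is already enough to surface a person's daily routine:

    # Toy sketch: inferring a routine from hypothetical surveillance-style data.
    from collections import Counter
    from datetime import datetime

    # Hypothetical log of (device_id, timestamp, cell_tower) sightings.
    sightings = [
        ("device-42", "2018-11-05 08:55", "tower-A"),
        ("device-42", "2018-11-06 08:58", "tower-A"),
        ("device-42", "2018-11-07 08:45", "tower-A"),
        ("device-42", "2018-11-07 23:40", "tower-F"),
        ("device-17", "2018-11-07 10:15", "tower-B"),
    ]

    def routine_profile(device_id, records):
        """Count how often a device appears at each tower in each hour of the day."""
        profile = Counter()
        for dev, ts, tower in records:
            if dev != device_id:
                continue
            hour = datetime.strptime(ts, "%Y-%m-%d %H:%M").hour
            profile[(tower, hour)] += 1
        return profile

    # A repeated (tower, hour) pairing is the simplest kind of 'pattern':
    # an inferred daily routine.
    for (tower, hour), count in routine_profile("device-42", sightings).items():
        if count >= 3:
            print(f"device-42 is routinely near {tower} around {hour}:00 ({count} sightings)")

    Scaled up from five records to billions, and from simple counting to machine learning, this is the sense in which artificial intelligence turns raw surveillance data into inferences about people's lives.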

    67 - Rini on Deepfakes and the Epistemic Backstop

    In this episode I talk to Dr Regina Rini. Dr Rini currently teaches in the Philosophy Department at York University, Toronto, where she holds the Canada Research Chair in Philosophy of Moral and Social Cognition. She has a PhD from NYU and, before coming to York in 2017, was an Assistant Professor/Faculty Fellow at the NYU Center for Bioethics, a postdoctoral research fellow in philosophy at Oxford University and a junior research fellow of Jesus College Oxford. We talk about the political and epistemological consequences of deepfakes. This is a fascinating and timely conversation. You can download this episode here or listen below. You can also subscribe to the podcast on Apple Podcasts, Stitcher and a variety of other podcasting services (the RSS feed is here).

    Show Notes
    0:00 - Introduction
    3:20 - What are deepfakes?
    7:35 - What is the academic justification for creating deepfakes (if any)?
    11:35 - The different uses of deepfakes: Porn versus Politics
    16:00 - The epistemic backstop and the role of audiovisual recordings
    22:50 - Two ways that recordings regulate our testimonial practices
    26:00 - But recordings aren't a window onto the truth, are they?
    34:34 - Is the Golden Age of recordings over?
    39:36 - Will the rise of deepfakes lead to the rise of epistemic elites?
    44:32 - How will deepfakes fuel political partisanship?
    50:28 - Deepfakes and the end of public reason
    54:15 - Is there something particularly disruptive about deepfakes?
    58:25 - What can be done to address the problem?

    Relevant Links
    Regina's Homepage
    Regina's Philpapers Page
    "Deepfakes and the Epistemic Backstop" by Regina
    "Fake News and Partisan Epistemology" by Regina
    Jeremy Corbyn and Boris Johnson Deepfake Video
    "California’s Anti-Deepfake Law Is Far Too Feeble" Op-Ed in Wired

    66 - Wong on Confucianism, Robots and Moral Deskilling

    In this episode I talk to Dr Pak-Hang Wong. Pak is a philosopher of technology and works on ethical and political issues of emerging technologies. He is currently a research associate at the Universität Hamburg. He received his PhD in Philosophy from the University of Twente in 2012, and then held academic positions in Oxford and Hong Kong. In 2017, he joined the Research Group for Ethics in Information Technology at the Department of Informatics, Universität Hamburg. We talk about the robotic disruption of morality and how it affects our capacity to develop moral virtues. Pak argues for a distinctive Confucian approach to this topic, and so provides something of a masterclass on Confucian virtue ethics in the course of our conversation. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Stitcher and a range of other podcasting services (the RSS feed is here).

    Show Notes
    0:00 - Introduction
    2:56 - How do robots disrupt our moral lives?
    7:18 - Robots and Moral Deskilling
    12:52 - The Folk Model of Virtue Acquisition
    21:16 - The Confucian approach to Ethics
    24:28 - Confucianism versus the European approach
    29:05 - Confucianism and situationism
    34:00 - The Importance of Rituals
    39:39 - A Confucian Response to Moral Deskilling
    43:37 - Criticisms (moral silencing)
    46:48 - Generalising the Confucian approach
    50:00 - Do we need new Confucian rituals?

    Relevant Links
    Pak's homepage at the University of Hamburg
    Pak's Philpeople Profile
    "Rituals and Machines: A Confucian Response to Technology Driven Moral Deskilling" by Pak
    "Responsible Innovation for Decent Nonliberal Peoples: A Dilemma?" by Pak
    "Consenting to Geoengineering" by Pak
    Episode 45 with Shannon Vallor on Technology and the Virtues

    65 - Vold on How We Can Extend Our Minds With AI

    In this episode I talk to Dr Karina Vold. Karina is a philosopher of mind, cognition, and artificial intelligence. She works on the ethical and societal impacts of emerging technologies and their effects on human cognition. Dr Vold is currently a postdoctoral Research Associate at the Leverhulme Centre for the Future of Intelligence, a Research Fellow at the Faculty of Philosophy, and a Digital Charter Fellow at the Alan Turing Institute. We talk about the ethics of extended cognition and how it pertains to the use of artificial intelligence. This is a fascinating topic because it addresses one of the oft-overlooked effects of AI on the human mind. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Stitcher and a range of other podcasting services (the RSS feed is here).

    Show Notes
    0:00 - Introduction
    1:55 - Some examples of AI cognitive extension
    13:07 - Defining cognitive extension
    17:25 - Extended cognition versus extended mind
    19:44 - The Coupling-Constitution Fallacy
    21:50 - Understanding different theories of situated cognition
    27:20 - The Coupling-Constitution Fallacy Redux
    30:20 - What is distinctive about AI-based cognitive extension?
    34:20 - The three/four different ways of thinking about human interactions with AI
    40:04 - Problems with this framework
    49:37 - The Problem of Cognitive Atrophy
    53:31 - The Moral Status of AI Extenders
    57:12 - The Problem of Autonomy and Manipulation
    58:55 - The policy implications of recognising AI cognitive extension

    Relevant Links
    Karina's homepage
    Karina at the Leverhulme Centre for the Future of Intelligence
    "AI Extenders: The Ethical and Societal Implications of Humans Cognitively Extended by AI" by José Hernández-Orallo and Karina Vold
    "The Parity Argument for Extended Consciousness" by Karina
    "Are ‘you’ just inside your skin or is your smartphone part of you?" by Karina
    "The Extended Mind" by Clark and Chalmers
    Theory and Application of the Extended Mind (series by me)

    Escaping Skinner's Box: AI and the New Era of Techno-Superstition

    [The following is the text of a talk I delivered at the World Summit AI on the 10th October 2019. The talk is essentially a nugget taken from my new book Automation and Utopia. It's not an excerpt per se, but it does look at one of the key arguments I make in the book. You can listen to the talk using the plugin above or download it here.]

    The science fiction author Arthur C. Clarke once formulated three “laws” for thinking about the future. The third law states that “any sufficiently advanced technology is indistinguishable from magic”. The idea, I take it, is that if someone from the Paleolithic was transported to the modern world, they would be amazed by what we have achieved. Supercomputers in our pockets; machines to fly us from one side of the planet to another in less than a day; vaccines and antibiotics to cure diseases that used to kill most people in childhood. To them, these would be truly magical times.

    It's ironic, then, that many people alive today don't see it that way. They see a world of materialism and reductionism. They think we have too much knowledge and control — that through technology and science we have made the world a less magical place. Well, I am here to reassure these people. One of the things AI will do is re-enchant the world and kickstart a new era of techno-superstition. If not for everyone, then at least for most people who have to work with AI on a daily basis. The catch, however, is that this is not necessarily a good thing. In fact, it is something we should worry about.

    Let me explain by way of an analogy. In the late 1940s, the behaviorist psychologist B.F. Skinner — famous for his experiments on animal learning — got a bunch of pigeons and put them into separate boxes. Now, if you know anything about Skinner you'll know he had a penchant for this kind of thing. He seems to have spent his adult life torturing pigeons in boxes. Each box had a window through which a food reward would be presented to the bird. Inside the box were different switches that the pigeons could press with their beaks. Ordinarily, Skinner would set up experiments like this in such a way that pressing a particular sequence of switches would trigger the release of the food. But for this particular experiment he decided to do something different. He decided to present the food at random intervals, completely unrelated to the pressing of the switches. He wanted to see what the pigeons would do as a result.

    The findings were remarkable. Instead of sitting idly by and waiting patiently for their food to arrive, the pigeons took matters into their own hands. They flapped their wings repeatedly, they danced around in circles, they hopped on one foot, convinced that their actions had something to do with the presentation of the food reward. Skinner and his colleagues likened what the pigeons were doing to the ‘rain dances’ performed by various tribes around the world: they were engaging in superstitious behaviours to control an unpredictable and chaotic environment.

    It's important that we think about this situation from the pigeon's perspective. Inside the Skinner box, they find themselves in an unfamiliar world that is deeply opaque to them. Their usual foraging tactics and strategies don't work. Things happen to them, food gets presented, but they don't really understand why. They cannot cope with the uncertainty; their brains rush to fill the gap and create the illusion of control.
    Now what I want to argue here is that modern workers, and indeed all of us, in an environment suffused with AI, can end up sharing the predicament of Skinner's pigeons. We can end up working inside boxes, fed information and stimuli by artificial intelligence. And inside these boxes, stuff can happen to us, work can get done, but we are not quite sure if or how our actions make a difference. We end up resorting to odd…
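    To see how easily that predicament arises, here is a toy simulation of Skinner's random-reward setup (my own illustrative sketch with invented numbers, not Skinner's actual protocol). The rewards arrive on a schedule that is completely independent of what the bird does, yet chance alone will make some arbitrary action look causally effective:

    # Toy simulation: rewards are random, but one action still ends up
    # 'paired' with food more often than the others, purely by chance.
    import random

    random.seed(1)
    actions = ["peck switch", "flap wings", "turn in circle", "hop on one foot"]
    rewarded = {action: 0 for action in actions}

    for second in range(1000):
        action = random.choice(actions)   # the bird does something arbitrary
        if random.random() < 0.05:        # food arrives ~5% of the time,
            rewarded[action] += 1         # regardless of the action taken

    # From the pigeon's point of view, whichever action coincided with food
    # most often 'looks' like it caused the food to appear.
    for action, count in sorted(rewarded.items(), key=lambda kv: -kv[1]):
        print(f"{action}: rewarded {count} times")

    Whichever action happens to top the table supplies the ready-made superstition: a mind hunting for control mistakes that chance correlation for causation.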

    Assessing the Moral Status of Robots: A Shorter Defence of Ethical Behaviourism

    [This is the text of a lecture that I delivered at Tilburg University on the 24th of September 2019. It was delivered as part of the 25th Anniversary celebrations for TILT (Tilburg Institute for Law, Technology and Society). My friend and colleague Sven Nyholm was the discussant for the evening. The lecture is based on my longer academic article ‘Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism’ but was written from scratch and presents some key arguments in a snappier and clearer form. I also include a follow-up section responding to criticisms from the audience on the evening of the lecture. My thanks to all those involved in organizing the event (Aviva de Groot, Merel Noorman and Silvia de Conca in particular). You can download an audio version of this lecture, minus the reflections and follow-ups, here or listen to it above.]

    1. Introduction

    My lecture this evening will be about the conditions under which we should welcome robots into our moral communities. Whenever I talk about this, I am struck by how much my academic career has come to depend upon my misspent youth for its inspiration. Like many others, I was obsessed with science fiction as a child, and in particular with the representation of robots in science fiction. I had two favourite fictional robots. The first was R2D2 from the original Star Wars trilogy. The second was Commander Data from Star Trek: The Next Generation. I liked R2D2 because of his* personality - courageous, playful, disdainful of authority - and I liked Data because the writers of Star Trek used him as a vehicle for exploring some important philosophical questions about emotion, humour, and what it means to be human. In fact, I have to confess that Data has had an outsized influence on my philosophical imagination and has featured in several of my academic papers.

    Part of the reason for this was practical. When I grew up in Ireland we didn't have many options to choose from when it came to TV. We had to make do with what was available and, as luck would have it, Star Trek: TNG was on every day when I came home from school. As a result, I must have watched each episode of its 7-season run multiple times. One episode in particular has always stayed with me. It was called ‘Measure of a Man’. In it, a scientist from the Federation visits the Enterprise because he wants to take Data back to his lab to study him. Data, you see, is a sophisticated human-like android, created by a lone scientific genius, under somewhat dubious conditions. The Federation scientist wants to take Data apart and see how he works, with a view to building others like him. Data, unsurprisingly, objects. He argues that he is not just a machine or piece of property that can be traded and disassembled to suit the whims of human beings. He has his own, independent moral standing. He deserves to be treated with dignity.

    But how does Data prove his case? A trial ensues and evidence is given on both sides. The prosecution argue that Data is clearly just a piece of property. He was created, not born. He doesn't think or see the world like a normal human being (or, indeed, other alien species). He even has an ‘off switch’. Data counters by giving evidence of the rich relationships he has formed with his fellow crew members and by eliciting testimony from others regarding his behaviour and the interactions they have with him. Ultimately, he wins the case. The court accepts that he has moral standing.
    Now, we can certainly lament the impact that science fiction has on the philosophical debate about robots. As David Gunkel observes in his 2018 book Robot Rights: “[S]cience fiction already — and well in advance of actual engineering practice — has established expectations for what a robot is or can be. Even before engineers have sought to develop working prototypes, writers, artists, and filmmakers…”

Customer Reviews

***ELIZA***

Best philosophy of tech podcast out there

For academics or lay-people with an interest in keeping up with the current philosophical books and thinking on tech, media and human enhancement, this podcast should be your first stop. It’s remarkably accessible, devoid of ads (so far), and the guests and the host know their stuff. Thanks!
