27 episodes

Over the last decade, concerns about the power and danger of Artificial Intelligence have moved from the fantasy of “Terminator” to reality, and anxieties about killer robots have been joined by many others that are more immediate. Robotic systems threaten a massive disruption of employment and transport, while algorithms fuelled by machine learning on (potentially biased) “big data” increasingly play a role in life-changing decisions, whether financial, legal, or medical. More subtly, AI combines with social media to give huge potential for the manipulation of opinion and behaviour, whether to sell a product, influence financial markets, provoke divisive factionalism, or fix an election. All of this raises huge ethical questions, some fairly familiar (e.g. concerning privacy, information security, appropriate rules of automated behaviour) but many quite new (e.g. concerning algorithmic bias, transparency, and wider impacts). It is in this context that Oxford is creating an Institute for AI Ethics, to open up a broad conversation between the University’s researchers and students in the many related disciplines, including Philosophy, Computer Science, Engineering, Social Science, and Medicine (amongst others).
The Ethics in AI seminars are intended to facilitate this broad conversation, exploring ethical questions in AI in a truly interdisciplinary way that brings together students and leading experts from around the University.

Ethics in AI Oxford University

    • Education
    • 4.0 • 4 Ratings

    Ethics in AI Seminar: Responsible Research and Publication in AI

    Ethics in AI Seminar, presented by the Institute for Ethics in AI. Chair: Peter Millican, Gilbert Ryle Fellow and Professor of Philosophy at Hertford College, Oxford University.
    What role should the technical AI community play in questions of AI ethics and those concerning the broader impacts of AI? Are technical researchers well placed to reason about the potential societal impacts of their work?
    What does it mean to conduct and publish AI research responsibly?
    What challenges does the AI community face in reaching consensus about responsibilities, and adopting appropriate norms and governance mechanisms?
    How can we maximise the benefits while minimising the risks of increasingly advanced AI research?

    AI and related technologies are having an increasing impact on the lives of individuals, as well as society as a whole. Alongside many current and potential future benefits, there has been an expanding catalogue of harms arising from deployed systems, raising questions about fairness and equality, privacy, worker exploitation, environmental impact, and more. In addition, there have been increasing incidents of research publications which have caused an outcry over ethical concerns and potential negative societal impacts. In response, many are now asking whether the technical AI research community itself needs to do more to ensure ethical research conduct, and to ensure beneficial outcomes from deployed systems. But how should individual researchers and the research community more broadly respond to the existing and potential impacts from AI research and AI technology? Where should we draw the line between academic freedom and centering societal impact in research, or between openness and caution in publication? Are technical researchers well placed to grapple with issues of ethics and societal impact, or should these be left to other actors and disciplines? What can we learn from other high-stakes, ‘dual-use’ fields? In this seminar, Rosie Campbell, Carolyn Ashurst and Helena Webb will discuss these and related issues, drawing on examples such as conference impact statements, release strategies for large language models, and responsible research innovation in practice.

    Speakers
    Rosie Campbell leads the Safety-Critical AI program at the Partnership on AI. She is currently focused on responsible publication and deployment practices for increasingly advanced AI, and was a co-organizer of the NeurIPS workshop on Navigating the Broader Impacts of AI Research. Previously, Rosie was the Assistant Director of the Center for Human-Compatible AI (CHAI), a technical AI safety research group at UC Berkeley working towards provably beneficial AI. Before that, Rosie worked as a research engineer at BBC R&D, a multidisciplinary research lab based in the UK. There, she worked on emerging technologies for media and broadcasting, including an award-winning project exploring the use of AI in media production. Rosie holds a Master’s in Computer Science and a Bachelor’s in Physics, and also has academic experience in Philosophy and Machine Learning. She co-founded a futurist community group in the UK to explore the social implications of emerging tech, and was recently named one of ‘100 Brilliant Women to follow in AI Ethics.’

    Dr Carolyn Ashurst
    Carolyn is a Senior Research Scholar at the Future of Humanity Institute and a Research Affiliate with the Centre for the Governance of AI. Her research focuses on improving the societal impacts of machine learning and related technologies, including topics in AI governance, responsible machine learning, and algorithmic fairness. Her technical fairness research focuses on using causal models to formalise incentives for fairness-related behaviours. On the question of responsible research and publication, Carolyn recently co-authored A Guide to Writing the NeurIPS Impact Statement and Institutionalizing Ethics in AI through Broader Impact Requirements, and co-organised the NeurIPS workshop on Navigating the Broader Impacts of AI Research.

    • 1 hr 26 min
    Ethics in AI Colloquium with Adrienne Mayor: Gods and Robots: Myths, Machines, and Ancient Dreams of Technology

    Part of the Colloquium on AI Ethics series presented by the Institute for Ethics in AI. This event is also part of the Humanities Cultural Programme, one of the founding stones for the future Stephen A. Schwarzman Centre for the Humanities. What, if anything, can the ancient Greeks teach us about robots and AI? Perhaps the answer is nothing, or nothing so straightforward as a correct 'solution' to the problems thrown up by robots and AI, but instead a way of thinking about them. Join us for a fascinating presentation from Adrienne Mayor, Stanford University, who will discuss her latest book, Gods and Robots: Myths, Machines, and Ancient Dreams of Technology. This book investigates how the Greeks imagined automatons, replicants, and Artificial Intelligence in myths and later designed self-moving devices and robots.
    Adrienne Mayor, research scholar in the Classics Department and the History and Philosophy of Science program at Stanford University since 2006, is a folklorist and historian of ancient science who investigates natural knowledge contained in pre-scientific myths and oral traditions. Her research looks at ancient "folk science" precursors, alternatives, and parallels to modern scientific methods. She was a Berggruen Fellow at the Center for Advanced Study in the Behavioral Sciences, Stanford, 2018-2019. Mayor's latest book, Gods and Robots: Myths, Machines, and Ancient Dreams of Technology, investigates how the Greeks imagined automatons, replicants, and Artificial Intelligence in myths and later designed actual self-moving devices and robots. Mayor's 2014 book, The Amazons: Lives and Legends of Warrior Women across the Ancient World, analyzes the historical and archaeological evidence underlying myths and tales of warlike women (Sarasvati Prize for Women in Mythology). Her biography of King Mithradates VI of Pontus, The Poison King, won the Gold Medal for Biography, Independent Publishers' Book Award 2010, and was a 2009 National Book Award Finalist. Mayor’s other books include The First Fossil Hunters (rev. ed. 2011); Fossil Legends of the First Americans (2005); and Greek Fire, Poison Arrows, and Scorpion Bombs: Biological and Chemical Warfare in the Ancient World (2009, rev. ed. forthcoming).



    Commentators:

    Shadi Bartsch-Zimmer - Helen A. Regenstein Distinguished Service Professor of Classics and the Program in Gender Studies. Professor Bartsch-Zimmer works on Roman imperial literature, the history of rhetoric and philosophy, and on the reception of the western classical tradition in contemporary China. She is the author of 5 books on the ancient novel, Neronian literature, political theatricality, and Stoic philosophy, the most recent of which is Persius: A Study in Food, Philosophy, and the Figural (Winner of the 2016 Goodwin Award of Merit). She has also edited or co-edited 7 wide-ranging essay collections (two of them Cambridge Companions) and the “Seneca in Translation” series from the University of Chicago. Bartsch’s new translation of Vergil’s Aeneid is forthcoming from Random House in 2020; in the following year, she is publishing a new monograph on the contemporary Chinese reception of ancient Greek political philosophy. Bartsch has been a Guggenheim fellow, edits the journal KNOW, and has held visiting scholar positions in St. Andrews, Taipei, and Rome. Starting in academic year 2015, she has led a university-wide initiative to explore the historical and social contexts in which knowledge is created, legitimized, and circulated.



    Armand D'Angour is Professor of Classical Languages and Literature at the University of Oxford. Professor D'Angour pursued careers as a cellist and businessman before becoming a Tutor in Classics at Jesus College in 2000. In addition to his monograph The Greeks and the New (CUP 2011), he is the author of articles and chapters on the language, literature, psychology and culture of ancient Greece. In 2013-14 he was awarded a British Academy Fellowship to u

    • 1 hr 26 min
    AI in a Democratic Culture - Presented by the Institute for Ethics in AI

    Launch of the Institute for Ethics in AI with Sir Nigel Shadbolt, Joshua Cohen and Hélène Landemore. Part of the Colloquium on AI Ethics series presented by the Institute for Ethics in AI. Introduced by the Vice-Chancellor, Professor Louise Richardson, and chaired by Professor John Tasioulas.

    Speakers Professor Joshua Cohen (Apple University), Professor Hélène Landemore (Yale University), and Professor Sir Nigel Shadbolt (Computer Science, Oxford)



    Speakers:

    Professor Sir Nigel Shadbolt

    Professor Sir Nigel Shadbolt is Principal of Jesus College Oxford and a Professor of Computer Science at the University of Oxford. He has researched and published on topics in artificial intelligence, cognitive science and computational neuroscience. In 2009 he was appointed, along with Sir Tim Berners-Lee, as Information Advisor to the UK Government. This work led to the release of many thousands of public sector data sets as open data. In 2010 he was appointed by the Coalition Government to the UK Public Sector Transparency Board, which oversaw the continued release of Government open data. Nigel continues to advise Government in a number of roles. Professor Shadbolt is Chairman and Co-founder of the Open Data Institute (ODI), based in Shoreditch, London. The ODI specialises in the exploitation of Open Data, supporting innovation, training and research both in the UK and internationally.



    Professor Joshua Cohen
    Joshua Cohen is a political philosopher. He has written on issues of democratic theory, freedom of expression, religious freedom, political equality, democracy and digital technology, good jobs, and global justice. His books include On Democracy; Democracy and Associations; Philosophy, Politics, Democracy; Rousseau: A Free Community of Equals; and The Arc of the Moral Universe and Other Essays. He is co-editor of the Norton Introduction to Philosophy. Cohen taught at MIT (1977-2005), Stanford (2005-2014), is currently on the faculty at Apple University, and is Distinguished Senior Fellow in Law, Philosophy, and Political Science at Berkeley. Cohen held the Romanell-Phi Beta Kappa Professorship in 2002-3; was Tanner Lecturer at UC Berkeley in 2007; and gave the Comte Lectures at LSE in 2012. Since 1991, he has been editor of Boston Review.



    Professor Hélène Landemore is Associate Professor of Political Science (with tenure) at Yale University. Her research and teaching interests include democratic theory, political epistemology, theories of justice, the philosophy of social sciences (particularly economics), constitutional processes and theories, and workplace democracy. Hélène is the author of Hume (Presses Universitaires de France: 2004), a historical and philosophical investigation of David Hume’s theory of decision-making; Democratic Reason (Princeton University Press: 2013, Spitz Prize 2015), an epistemic defense of democracy; and Open Democracy (Princeton University Press: 2020), a vision for a new, more open form of democracy based on non-electoral forms of representation, including representation based on random selection.



    Chaired by Professor John Tasioulas, the inaugural Director of the Institute for Ethics in AI, and Professor of Ethics and Legal Philosophy, Faculty of Philosophy, University of Oxford. Professor Tasioulas was at The Dickson Poon School of Law, King's College London, from 2014, as the inaugural Chair of Politics, Philosophy and Law and Director of the Yeoh Tiong Lay Centre for Politics, Philosophy and Law. He has degrees in Law and Philosophy from the University of Melbourne, and a DPhil in Philosophy from the University of Oxford, where he studied as a Rhodes Scholar. He was previously a Lecturer in Jurisprudence at the University of Glasgow, and Reader in Moral and Legal Philosophy at the University of Oxford, where he taught from 1998 to 2010. He has also acted as a consultant on human rights for the World Bank.

    • 1 hr 30 min
    Does AI threaten Human Autonomy?

    This event is also part of the Humanities Cultural Programme, one of the founding stones for the future Stephen A. Schwarzman Centre for the Humanities. How can AI systems influence our decision-making in ways that undermine autonomy? Do they do so in new or more problematic ways?
    To what extent can we outsource tasks to AI systems without losing our autonomy?
    Do we need a new conception of autonomy that incorporates considerations of the digital self?
    Autonomy is a core value in contemporary Western societies – it is a value that is invoked across a range of debates in practical ethics, and it lies at the heart of liberal democratic theory. It is therefore no surprise that AI policy documents frequently champion the importance of ensuring the protection of human autonomy. At first glance, this sort of protection may appear unnecessary – after all, in some ways, it seems that AI systems can serve to significantly enhance our autonomy. They can give us more information upon which to base our choices, and they may allow us to achieve many of our goals more effectively and efficiently. However, it is becoming increasingly clear that AI systems do pose a number of threats to our autonomy. One (but not the only) example is the fact that they enable the pervasive and covert use of manipulative and deceptive techniques that aim to target and exploit well-documented vulnerabilities in our decision-making. This raises the question of whether it is possible to harness the considerable power of AI to improve our lives in a manner that is compatible with respect for autonomy, and whether we need to reconceptualize both the nature and value of autonomy in the digital age. In this session, Carina Prunkl, Jessica Morley and Jonathan Pugh engage with these general questions, using the example of mHealth tools as an illuminating case study for a debate about the various ways in which an AI system can both enhance and hinder our autonomy.

    Speakers
    Dr Carina Prunkl, Research Fellow at the Institute for Ethics in AI, University of Oxford (where she is one of the inaugural team); also Research Affiliate at the Centre for the Governance of AI, Future of Humanity Institute. Carina works on the ethics and governance of AI, with a particular focus on autonomy, and has both publicly advocated and published on the importance of accountability mechanisms for AI.

    Jessica Morley, Policy Lead at Oxford’s DataLab, leading its engagement work to encourage use of modern computational analytics in the NHS, and ensuring public trust in health data records (notably those developed in response to the COVID-19 pandemic). Jess is also pursuing a related doctorate at the Oxford Internet Institute’s Digital Ethics Lab. As Technical Advisor for the Department of Health and Social Care, she co-authored the NHS Code of Conduct for data-driven technologies.

    Dr Jonathan Pugh, Senior Research Fellow at the Oxford Uehiro Centre for Practical Ethics, University of Oxford, researching how far AI Ethics should incorporate traditional conceptions of autonomy and “moral status”. He recently led a three-year project on the ethics of experimental Deep Brain Stimulation and “neuro-hacking”, and in 2020 published Autonomy, Rationality and Contemporary Bioethics (OUP). He has written on a wide range of ethical topics, but has particular interest in issues concerning personal autonomy and informed consent.

    Chair
    Professor Peter Millican is Gilbert Ryle Fellow and Professor of Philosophy at Hertford College, Oxford. He has researched and published over a wide range, including Early Modern Philosophy, Epistemology, Ethics, Philosophy of Language and of Religion, but has a particular focus on interdisciplinary connections with Computing and AI. He founded and oversees the Oxford undergraduate degree in Computer Science and Philosophy, which has been running since 2012, and last year he instituted this ongoing series of Ethics in AI Seminars.

    • 1 hr 38 min
    Privacy Is Power

    Part of the Colloquium on AI Ethics series presented by the Institute for Ethics in AI. This event is also part of the Humanities Cultural Programme, one of the founding stones for the future Stephen A. Schwarzman Centre for the Humanities. In conversation with the author, Dr Carissa Véliz (Associate Professor, Faculty of Philosophy and Institute for Ethics in AI, and Tutorial Fellow at Hertford College, University of Oxford). The author will be accompanied by Sir Michael Tugendhat and Dr Stephanie Hare in a conversation about privacy, power, and democracy, and the event will be chaired by Professor John Tasioulas (inaugural Director of the Institute for Ethics in AI, and Professor of Ethics and Legal Philosophy, Faculty of Philosophy, University of Oxford).
    Summary

    Privacy Is Power argues that people should protect their privacy because privacy is a kind of power. If we give too much of our data to corporations, the wealthy will rule. If we give too much personal data to governments, we risk sliding into authoritarianism. For democracy to be strong, the bulk of power needs to be with the citizenry, and whoever has the data will have the power. Privacy is not a personal preference; it is a political concern. Personal data is a toxic asset, and should be regulated as if it were a toxic substance, similar to asbestos. The trade in personal data has to end.

    As surveillance creeps into every corner of our lives, Carissa Véliz exposes how our personal data is giving too much power to big tech and governments, why that matters, and what we can do about it.

    Have you ever been denied insurance, a loan, or a job? Have you had your credit card number stolen? Do you have to wait too long when you call customer service? Have you paid more for a product than one of your friends? Have you been harassed online? Have you noticed politics becoming more divisive in your country? You might have the data economy to thank for all that and more.

    The moment you check your phone in the morning you are giving away your data. Before you've even switched off your alarm, a whole host of organisations have been alerted to when you woke up, where you slept, and with whom. Our phones, our TVs, even our washing machines are spies in our own homes.

    Without your permission, or even your awareness, tech companies are harvesting your location, your likes, your habits, your relationships, your fears, your medical issues, and sharing it amongst themselves, as well as with governments and a multitude of data vultures. They're not just selling your data. They're selling the power to influence you and decide for you. Even when you've explicitly asked them not to. And it's not just you. It's all your contacts too, all your fellow citizens. Privacy is as collective as it is personal.

    Digital technology is stealing our personal data and with it our power to make free choices. To reclaim that power, and our democracy, we must take back control of our personal data. Surveillance is undermining equality. We are being treated differently on the basis of our data.

    What can we do? The stakes are high. We need to understand the power of data better. We need to start protecting our privacy. And we need regulation. We need to pressure our representatives. It is time to pull the plug on the surveillance economy.
    To purchase a copy of ‘Privacy is Power’, please click https://www.amazon.co.uk/Privacy-Power-Should-Take-Control/dp/1787634043

    Biographies:

    Dr Carissa Véliz is an Associate Professor at the Faculty of Philosophy and the Institute for Ethics in AI, and a Tutorial Fellow in Philosophy at Hertford College. Carissa completed her DPhil in Philosophy at the University of Oxford. She was then a Research Fellow at the Uehiro Centre for Practical Ethics and the Wellcome Centre for Ethics and Humanities at the University of Oxford. To find out more about Carissa’s work, visit her website: www.carissaveliz.com

    Sir Michael Tugendhat was a Judge of the High Court of England

    • 1 hr 1 min
    Algorithms Eliminate Noise (and That Is Very Good)

    Part of the Colloquium on AI Ethics series presented by the Institute for Ethics in AI. This event is also part of the Humanities Cultural Programme, one of the founding stones for the future Stephen A. Schwarzman Centre for the Humanities. Imagine that two doctors in the same city give different diagnoses to identical patients - or that two judges in the same courthouse give different sentences to people who have committed the same crime. Suppose that different food inspectors give different ratings to indistinguishable restaurants - or that when a company is handling customer complaints, the resolution depends on who happens to be handling the particular complaint. Now imagine that the same doctor, the same judge, the same inspector, or the same company official makes different decisions, depending on whether it is morning or afternoon, or Monday rather than Wednesday. These are examples of noise: variability in judgments that should be identical. Noise contributes significantly to errors in all fields, including medicine, law, economic forecasting, police behavior, food safety, bail, security checks at airports, strategy, and personnel selection. Algorithms reduce noise - which is a very good thing.
    Background reading: two papers (i) https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3300171; (ii) https://hbr.org/2016/10/noise
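    The distinction the talk draws, between noisy human judgment and noise-free algorithmic judgment, can be illustrated with a toy simulation. Everything below (the functions, the 2.0 multiplier, the noise level) is purely illustrative and not drawn from the talk; the point is only that a deterministic rule returns identical outputs for identical inputs, so the statistical "noise" across repeated decisions is exactly zero.

```python
import random
import statistics

def human_judgment(severity: float, rng: random.Random) -> float:
    """A hypothetical judge: an underlying rule plus random per-decision noise."""
    return severity * 2.0 + rng.gauss(0, 1.5)

def algorithmic_judgment(severity: float) -> float:
    """A deterministic rule: identical inputs always yield identical outputs."""
    return severity * 2.0

rng = random.Random(0)
case_severity = 5.0  # the same case, judged 1000 times

human_sentences = [human_judgment(case_severity, rng) for _ in range(1000)]
algo_sentences = [algorithmic_judgment(case_severity) for _ in range(1000)]

# The human judgments scatter around 10.0; the algorithm never varies.
print(statistics.stdev(human_sentences) > 0)    # True: noisy
print(statistics.stdev(algo_sentences) == 0.0)  # True: zero noise
```

    Note that this sketch says nothing about bias: a deterministic rule can be systematically wrong for every case while still being perfectly noise-free, which is why noise and bias are treated as separate sources of error.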

    Speakers
    Professor Cass Sunstein (Harvard Law School)

    Commentators: Professor Ruth Chang (Faculty of Law, University of Oxford) and Professor Sir Nigel Shadbolt (Jesus College, Oxford and Department of Computer Science, University of Oxford)
    Chaired by Professor John Tasioulas (inaugural Director of the Institute for Ethics in AI, and Professor of Ethics and Legal Philosophy, Faculty of Philosophy, University of Oxford).


    Biographies:

    Professor Cass Sunstein is currently the Robert Walmsley University Professor at Harvard. He is the founder and director of the Program on Behavioral Economics and Public Policy at Harvard Law School. In 2018, he received the Holberg Prize from the government of Norway, sometimes described as the equivalent of the Nobel Prize for law and the humanities. In 2020, the World Health Organization appointed him as Chair of its technical advisory group on Behavioural Insights and Sciences for Health. From 2009 to 2012, he was Administrator of the White House Office of Information and Regulatory Affairs, and after that, he served on the President's Review Board on Intelligence and Communications Technologies and on the Pentagon's Defense Innovation Board. Professor Sunstein has testified before congressional committees on many subjects, and he has advised officials at the United Nations, the European Commission, the World Bank, and many nations on issues of law and public policy. He serves as an adviser to the Behavioural Insights Team in the United Kingdom.



    Professor Sir Nigel Shadbolt is Principal of Jesus College Oxford and a Professor of Computer Science at the University of Oxford. He has researched and published on topics in artificial intelligence, cognitive science and computational neuroscience. In 2009 he was appointed, along with Sir Tim Berners-Lee, as Information Advisor to the UK Government. This work led to the release of many thousands of public sector data sets as open data. In 2010 he was appointed by the Coalition Government to the UK Public Sector Transparency Board, which oversaw the continued release of Government open data. Nigel continues to advise Government in a number of roles. Professor Shadbolt is Chairman and Co-founder of the Open Data Institute (ODI), based in Shoreditch, London. The ODI specialises in the exploitation of Open Data, supporting innovation, training and research both in the UK and internationally.



    Professor Ruth Chang is the Chair and Professor of Jurisprudence and a Professorial Fellow of University College, Oxford. Before coming to Oxford, she was Professor of Philosophy at Rutgers University, New Brunswick, New Jersey, USA. Be

    • 1 hr 16 min
