Uehiro Lectures: Practical solutions for ethical challenges

Oxford University

The annual public Uehiro Lecture Series captures the ethos of the Uehiro Centre, which is to bring the best scholarship in analytic philosophy to bear on the most significant problems of our time, and to make progress in the analysis and resolution of these issues to the highest academic standard, in a manner that is also accessible to the general public. Philosophy should not only create knowledge, it should make people’s lives better.

  1. 31/05/2022

    2022 Annual Uehiro Lectures in Practical Ethics: Ethics and Artificial Intelligence (3 of 3)

    In the last of the three 2022 Annual Uehiro Lectures in Practical Ethics, Professor Peter Railton explores how we might "programme ethics into AI". Recent, dramatic advancements in the capabilities of artificial intelligence (AI) raise a host of ethical questions about the development and deployment of AI systems. Some of these are questions long recognized as being of fundamental moral concern, which may arise in particularly acute forms with AI—matters of distributive justice, discrimination, social control, political manipulation, the conduct of warfare, personal privacy, and the concentration of economic power. Other questions, however, concern issues that are more specific to the distinctive kind of technological change AI represents. For example, how are we to contend with the possibility that artificial agents might emerge with capabilities that go beyond human comprehension or control? But whether or not the threat of such “superintelligence” ever becomes realistic, we now face a situation in which partially-intelligent AI systems are increasingly being deployed in roles that involve relatively autonomous decision-making carrying real risk of harm. This urgently raises the question of how such partially-intelligent systems could become appropriately sensitive to moral considerations. In these lectures I will attempt to take some first steps towards answering that question, which is often put in terms of “programming ethics into AI”. However, we do not have an “ethical algorithm” that could be programmed into AI systems and that would enable them to respond aptly to an open-ended array of situations where moral issues are at stake. Moreover, the current revolution in AI has provided ample evidence that system designs based upon the learning of complex representational structures and generative capacities have acquired higher levels of competence, situational sensitivity, and creativity in problem-solving than systems based upon pre-programmed expertise.
Might a learning-based approach to AI be extended to the competence needed to identify and respond appropriately to the moral dimensions of situations? I will begin by outlining a framework for understanding what “moral learning” might be, seeking compatibility with a range of conceptions of the normative content of morality. I will then draw upon research on human cognitive and social development—research that is itself undergoing a “learning revolution”—to suggest how it enables us to see at work components central to moral learning, and to ask what conditions are favorable to the development and working of these components. The question then becomes whether artificial systems might be capable of similar cognitive and social development, and what conditions would be favorable to this. Might the same learning-based approaches that have achieved such success in strategic game-playing, image identification and generation, and language recognition and translation also achieve success in cooperative game-playing, identifying moral issues in situations, and communicating and collaborating effectively on apt responses? How far might such learning go, and what could this tell us about how we might engage with AI systems to foster their moral development, and perhaps ours as well?

    1h 14m
  2. 31/05/2022

    2022 Annual Uehiro Lectures in Practical Ethics: Ethics and Artificial Intelligence (2 of 3)

    In the second of the three 2022 Annual Uehiro Lectures in Practical Ethics, Professor Peter Railton explores how we might "programme ethics into AI". Recent, dramatic advancements in the capabilities of artificial intelligence (AI) raise a host of ethical questions about the development and deployment of AI systems. Some of these are questions long recognized as being of fundamental moral concern, which may arise in particularly acute forms with AI—matters of distributive justice, discrimination, social control, political manipulation, the conduct of warfare, personal privacy, and the concentration of economic power. Other questions, however, concern issues that are more specific to the distinctive kind of technological change AI represents. For example, how are we to contend with the possibility that artificial agents might emerge with capabilities that go beyond human comprehension or control? But whether or not the threat of such “superintelligence” ever becomes realistic, we now face a situation in which partially-intelligent AI systems are increasingly being deployed in roles that involve relatively autonomous decision-making carrying real risk of harm. This urgently raises the question of how such partially-intelligent systems could become appropriately sensitive to moral considerations. In these lectures I will attempt to take some first steps towards answering that question, which is often put in terms of “programming ethics into AI”. However, we do not have an “ethical algorithm” that could be programmed into AI systems and that would enable them to respond aptly to an open-ended array of situations where moral issues are at stake. Moreover, the current revolution in AI has provided ample evidence that system designs based upon the learning of complex representational structures and generative capacities have acquired higher levels of competence, situational sensitivity, and creativity in problem-solving than systems based upon pre-programmed expertise.
Might a learning-based approach to AI be extended to the competence needed to identify and respond appropriately to the moral dimensions of situations? I will begin by outlining a framework for understanding what “moral learning” might be, seeking compatibility with a range of conceptions of the normative content of morality. I will then draw upon research on human cognitive and social development—research that is itself undergoing a “learning revolution”—to suggest how it enables us to see at work components central to moral learning, and to ask what conditions are favorable to the development and working of these components. The question then becomes whether artificial systems might be capable of similar cognitive and social development, and what conditions would be favorable to this. Might the same learning-based approaches that have achieved such success in strategic game-playing, image identification and generation, and language recognition and translation also achieve success in cooperative game-playing, identifying moral issues in situations, and communicating and collaborating effectively on apt responses? How far might such learning go, and what could this tell us about how we might engage with AI systems to foster their moral development, and perhaps ours as well?

    1h 8m
  3. 31/05/2022

    2022 Annual Uehiro Lectures in Practical Ethics: Ethics and Artificial Intelligence (1 of 3)

    In the first of the three 2022 Annual Uehiro Lectures in Practical Ethics, Professor Peter Railton explores how we might "programme ethics into AI". Recent, dramatic advancements in the capabilities of artificial intelligence (AI) raise a host of ethical questions about the development and deployment of AI systems. Some of these are questions long recognized as being of fundamental moral concern, which may arise in particularly acute forms with AI—matters of distributive justice, discrimination, social control, political manipulation, the conduct of warfare, personal privacy, and the concentration of economic power. Other questions, however, concern issues that are more specific to the distinctive kind of technological change AI represents. For example, how are we to contend with the possibility that artificial agents might emerge with capabilities that go beyond human comprehension or control? But whether or not the threat of such “superintelligence” ever becomes realistic, we now face a situation in which partially-intelligent AI systems are increasingly being deployed in roles that involve relatively autonomous decision-making carrying real risk of harm. This urgently raises the question of how such partially-intelligent systems could become appropriately sensitive to moral considerations. In these lectures I will attempt to take some first steps towards answering that question, which is often put in terms of “programming ethics into AI”. However, we do not have an “ethical algorithm” that could be programmed into AI systems and that would enable them to respond aptly to an open-ended array of situations where moral issues are at stake. Moreover, the current revolution in AI has provided ample evidence that system designs based upon the learning of complex representational structures and generative capacities have acquired higher levels of competence, situational sensitivity, and creativity in problem-solving than systems based upon pre-programmed expertise.
Might a learning-based approach to AI be extended to the competence needed to identify and respond appropriately to the moral dimensions of situations? I will begin by outlining a framework for understanding what “moral learning” might be, seeking compatibility with a range of conceptions of the normative content of morality. I will then draw upon research on human cognitive and social development—research that is itself undergoing a “learning revolution”—to suggest how it enables us to see at work components central to moral learning, and to ask what conditions are favorable to the development and working of these components. The question then becomes whether artificial systems might be capable of similar cognitive and social development, and what conditions would be favorable to this. Might the same learning-based approaches that have achieved such success in strategic game-playing, image identification and generation, and language recognition and translation also achieve success in cooperative game-playing, identifying moral issues in situations, and communicating and collaborating effectively on apt responses? How far might such learning go, and what could this tell us about how we might engage with AI systems to foster their moral development, and perhaps ours as well?

    1h 30m
  4. 17/11/2020

    2020 Annual Uehiro Lectures in Practical Ethics (3/3): The case for an unfunded pay as you go (PAYG) pension

    Professor Michael Otsuka (London School of Economics) delivers the final of three public lectures in the series 'How to pool risks across generations: the case for collective pensions'. The previous two lectures grappled with various challenges that funded collective pension schemes face. In this final lecture, I ask whether an unfunded 'pay as you go' (PAYG) approach might provide a solution. With PAYG, money is transferred directly from those who are currently working to pay the pensions of those who are currently retired. Rather than being drawn from a pension fund consisting of a portfolio of financial assets, these pensions are paid out of the Treasury's coffers. The pension one is entitled to in retirement is often, however, a function of, even though not funded by, the pension contributions one has made during one's working life. I explore the extent to which a PAYG pension can be justified as a form of indirect reciprocity that cascades down generations. This contrasts with a redistributive concern to mitigate the inequality between those who are young, healthy, able-bodied, and productive and those who are elderly, infirm, and out of work. I explore claims inspired by Ken Binmore and Joseph Heath that PAYG pensions, in which each generation pays the pensions of the previous generation, can be justified as being in a mutually advantageous Nash equilibrium. I also discuss the relevance to the case for PAYG of Thomas Piketty's claim that r > g, where "r" is the rate of return on capital and "g" is the rate of growth of the economy.
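    Why r > g bears on the case for PAYG can be illustrated with a back-of-the-envelope comparison (this sketch is not from the lecture, and all the figures in it are hypothetical): a funded pension compounds each contribution at the rate of return on capital r, whereas a PAYG pension's implicit return roughly tracks the economy's growth rate g, since it is the future wage bill that pays current retirees.

    ```python
    def compound(contribution: float, rate: float, years: int) -> float:
        """Value of a single contribution after compounding annually for `years`."""
        return contribution * (1 + rate) ** years

    # Hypothetical figures for illustration only.
    r = 0.05  # assumed annual rate of return on capital
    g = 0.02  # assumed annual growth rate of the economy
    years = 40
    contribution = 1_000.0

    funded = compound(contribution, r, years)  # funded scheme compounds at r
    payg = compound(contribution, g, years)    # PAYG's implicit return tracks g

    print(f"funded value per contribution: {funded:,.0f}")
    print(f"PAYG implicit value per contribution: {payg:,.0f}")
    ```

    If r exceeds g, the funded route yields more per contribution over a working life, which is the prima facie challenge that any case for PAYG (e.g. one grounded in intergenerational reciprocity or risk pooling) has to answer.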

    46 min
  5. 17/11/2020

    2020 Annual Uehiro Lectures in Practical Ethics (2/3): The case for collective defined contribution (CDC)

    Professor Michael Otsuka (London School of Economics) delivers the second of three public lectures in the series 'How to pool risks across generations: the case for collective pensions'. On any sensible approach to the valuation of a DB scheme, an ineliminable risk will remain that returns on a portfolio weighted towards return-seeking equities and property will fall significantly short of fully funding the DB pension promise. On the actuarial approach, this risk is deemed sufficiently low that it is reasonable and prudent to take in the case of an open scheme that will be cashflow positive for many decades. But if they deem the risk so low, shouldn't scheme members who advocate such an approach be willing to put their money where their mouth is, by agreeing to bear at least some of this downside risk through a reduction in their pensions if returns are not good enough to achieve full funding? Some such conditionality would simply involve a return to the practices of DB pension schemes during their heyday three or more decades ago. The subsequent hardening of the pension promise has hastened the demise of DB. The target pensions of collective defined contribution (CDC) might provide a means of preserving the benefits of collective pensions in a manner that is more cost-effective for all than any form of defined benefit promise. In one form of CDC, the risks are pooled collectively across generations. In another form, they are pooled collectively only among the members of each age cohort.

    49 min
