Uehiro Lectures: Practical solutions for ethical challenges
Oxford University

    • Education
    • 36 episodes

The annual public Uehiro Lecture Series captures the ethos of the Uehiro Centre, which is to bring the best scholarship in analytic philosophy to bear on the most significant problems of our time, and to make progress in the analysis and resolution of these issues to the highest academic standard, in a manner that is also accessible to the general public. Philosophy should not only create knowledge; it should make people’s lives better.

    2023 Annual Uehiro Lectures in Practical Ethics: Knowledge and Achievement as Public Policy Goals (3 of 3)

    A recording of the third and final of Professor Hurka’s rescheduled lectures, from the series "Knowledge and Achievement: Their Value, Nature, and Public Policy Role". We were honoured to welcome Professor Thomas Hurka to Oxford to deliver the 2023 Annual Uehiro Lectures in Practical Ethics.
    Creative Commons Attribution-Non-Commercial-Share Alike 2.0 UK: England & Wales; http://creativecommons.org/licenses/by-nc-sa/2.0/uk/

    • 59 min
    2023 Annual Uehiro Lectures in Practical Ethics: Degrees of Value in Knowledge and Achievement (2 of 3)

    A recording of the second of Professor Hurka’s rescheduled lectures, from the series "Knowledge and Achievement: Their Value, Nature, and Public Policy Role". We were honoured to welcome Professor Thomas Hurka to Oxford to deliver the 2023 Annual Uehiro Lectures in Practical Ethics.
    Creative Commons Attribution-Non-Commercial-Share Alike 2.0 UK: England & Wales; http://creativecommons.org/licenses/by-nc-sa/2.0/uk/

    • 1 hr 1 min
    2023 Annual Uehiro Lectures in Practical Ethics: Knowledge and Achievement as Organic Goods (1 of 3)

    A recording of the first of Professor Hurka’s rescheduled lectures, from the series "Knowledge and Achievement: Their Value, Nature, and Public Policy Role". We were honoured to welcome Professor Thomas Hurka to Oxford to deliver the 2023 Annual Uehiro Lectures in Practical Ethics.
    Creative Commons Attribution-Non-Commercial-Share Alike 2.0 UK: England & Wales; http://creativecommons.org/licenses/by-nc-sa/2.0/uk/

    • 1 hr 7 min
    2022 Annual Uehiro Lectures in Practical Ethics: Ethics and Artificial Intelligence (3 of 3)

    In the last of the three 2022 Annual Uehiro Lectures in Practical Ethics, Professor Peter Railton explores how we might "programme ethics into AI".

    Recent, dramatic advances in the capabilities of artificial intelligence (AI) raise a host of ethical questions about the development and deployment of AI systems. Some of these are questions long recognized as being of fundamental moral concern, which may occur in particularly acute forms with AI: matters of distributive justice, discrimination, social control, political manipulation, the conduct of warfare, personal privacy, and the concentration of economic power. Other questions, however, concern issues that are more specific to the distinctive kind of technological change AI represents. For example, how are we to contend with the possibility that artificial agents might emerge with capabilities that go beyond human comprehension or control? But regardless of whether or when the threat of such “superintelligence” becomes realistic, we are now facing a situation in which partially intelligent AI systems are increasingly being deployed in roles that involve relatively autonomous decision-making carrying real risk of harm. This urgently raises the question of how such partially intelligent systems could become appropriately sensitive to moral considerations.

    In these lectures I will attempt to take some first steps in answering that question, which is often put in terms of “programming ethics into AI”. However, we don’t have an “ethical algorithm” that could be programmed into AI systems and that would enable them to respond aptly to an open-ended array of situations where moral issues are at stake. Moreover, the current revolution in AI has provided ample evidence that system designs based upon the learning of complex representational structures and generative capacities have acquired higher levels of competence, situational sensitivity, and creativity in problem-solving than systems based upon pre-programmed expertise. Might a learning-based approach to AI be extended to the competence needed to identify and respond appropriately to the moral dimensions of situations?

    I will begin by outlining a framework for understanding what “moral learning” might be, seeking compatibility with a range of conceptions of the normative content of morality. I will then draw upon research on human cognitive and social development (research that is itself undergoing a “learning revolution”) to suggest how it enables us to see at work components central to moral learning, and to ask what conditions are favorable to the development and working of these components. The question then becomes whether artificial systems might be capable of similar cognitive and social development, and what conditions would be favorable to this. Might the same learning-based approaches that have achieved such success in strategic game-playing, image identification and generation, and language recognition and translation also achieve success in cooperative game-playing, in identifying moral issues in situations, and in communicating and collaborating effectively on apt responses? How far might such learning go, and what could this tell us about how we might engage with AI systems to foster their moral development, and perhaps ours as well?
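
    As a purely illustrative aside: the contrast Railton draws between a pre-programmed “ethical algorithm” and a learning-based approach can be made concrete with a toy sketch. The Python snippet below shows one minimal, hypothetical form such learning could take: fitting a scoring function to pairwise human judgments about which of two actions is morally preferable (a Bradley-Terry preference model), rather than hand-coding rules. Every name, feature, and judgment here is an invented assumption for illustration; nothing of the sort appears in the lectures themselves.

```python
import math
import random

# Hypothetical toy sketch of "moral preference learning" (illustration only).
# Each candidate action is described by a small feature vector, here
# (harm_caused, benefit_produced, consent_obtained). A labeller reports which
# of two actions is preferable; we fit a linear scoring function to those
# pairwise judgments (a Bradley-Terry model) instead of hand-coding rules.

def score(weights, features):
    """Learned 'moral score' of an action: weighted sum of its features."""
    return sum(w * x for w, x in zip(weights, features))

def train(pairs, dim, lr=0.1, epochs=200):
    """Fit weights so preferred actions outscore rejected ones.

    pairs: list of (preferred_features, rejected_features) judgments.
    Maximizes the Bradley-Terry log-likelihood by stochastic gradient ascent.
    """
    weights = [0.0] * dim
    for _ in range(epochs):
        random.shuffle(pairs)
        for better, worse in pairs:
            # P(better is preferred) = sigmoid(score(better) - score(worse))
            margin = score(weights, better) - score(weights, worse)
            p = 1.0 / (1.0 + math.exp(-margin))
            # Gradient step: push the margin up in proportion to (1 - p).
            for i in range(dim):
                weights[i] += lr * (1.0 - p) * (better[i] - worse[i])
    return weights

if __name__ == "__main__":
    random.seed(0)
    # Invented judgments; features are (harm, benefit, consent).
    judgments = [
        ((-1.0, 2.0, 1.0), (2.0, 2.0, 0.0)),  # less harm + consent preferred
        ((0.0, 1.0, 1.0), (0.0, 3.0, 0.0)),   # consent outweighs extra benefit
        ((-2.0, 0.0, 1.0), (1.0, 1.0, 1.0)),  # harm avoidance preferred
    ]
    learned = train(list(judgments), dim=3)
    print("learned weights (harm, benefit, consent):",
          [round(w, 2) for w in learned])
```

    Such a sketch settles nothing philosophically; it only exhibits the structural difference Railton highlights between learning a moral sensitivity from judgments and pre-programming an algorithm.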

    • 1 hr 13 min
    2022 Annual Uehiro Lectures in Practical Ethics: Ethics and Artificial Intelligence (2 of 3)

    In the second of the three 2022 Annual Uehiro Lectures in Practical Ethics, Professor Peter Railton explores how we might "programme ethics into AI". (The full series abstract appears under the third lecture above.)

    • 1 hr 7 min
    2022 Annual Uehiro Lectures in Practical Ethics: Ethics and Artificial Intelligence (1 of 3)

    In the first of the three 2022 Annual Uehiro Lectures in Practical Ethics, Professor Peter Railton explores how we might "programme ethics into AI". (The full series abstract appears under the third lecture above.)

    • 1 hr 29 min
