Ethics in AI Seminar - presented by the Institute for Ethics in AI
Chair: Peter Millican, Gilbert Ryle Fellow and Professor of Philosophy at Hertford College, Oxford University
What role should the technical AI community play in questions of AI ethics and those concerning the broader impacts of AI? Are technical researchers well placed to reason about the potential societal impacts of their work?
What does it mean to conduct and publish AI research responsibly?
What challenges does the AI community face in reaching consensus about responsibilities, and adopting appropriate norms and governance mechanisms?
How can we maximise the benefits while minimising the risks of increasingly advanced AI research?
AI and related technologies are having an increasing impact on the lives of individuals, as well as society as a whole. Alongside many current and potential future benefits, there has been an expanding catalogue of harms arising from deployed systems, raising questions about fairness and equality, privacy, worker exploitation, environmental impact, and more. In addition, a growing number of research publications have caused an outcry over ethical concerns and potential negative societal impacts. In response, many are now asking whether the technical AI research community itself needs to do more to ensure ethical research conduct, and to ensure beneficial outcomes from deployed systems.

But how should individual researchers, and the research community more broadly, respond to the existing and potential impacts of AI research and AI technology? Where should we draw the line between academic freedom and centering societal impact in research, or between openness and caution in publication? Are technical researchers well placed to grapple with issues of ethics and societal impact, or should these be left to other actors and disciplines? What can we learn from other high-stakes, ‘dual-use’ fields?

In this seminar, Rosie Campbell, Carolyn Ashurst and Helena Webb will discuss these and related issues, drawing on examples such as conference impact statements, release strategies for large language models, and responsible research innovation in practice.
Rosie Campbell leads the Safety-Critical AI program at the Partnership on AI. She is currently focused on responsible publication and deployment practices for increasingly advanced AI, and was a co-organizer of the NeurIPS workshop on Navigating the Broader Impacts of AI Research. Previously, Rosie was the Assistant Director of the Center for Human-Compatible AI (CHAI), a technical AI safety research group at UC Berkeley working towards provably beneficial AI. Before that, Rosie worked as a research engineer at BBC R&D, a multidisciplinary research lab based in the UK. There, she worked on emerging technologies for media and broadcasting, including an award-winning project exploring the use of AI in media production. Rosie holds a Master’s in Computer Science and a Bachelor’s in Physics, and also has academic experience in Philosophy and Machine Learning. She co-founded a futurist community group in the UK to explore the social implications of emerging tech, and was recently named one of ‘100 Brilliant Women to follow in AI Ethics.’
Dr Carolyn Ashurst
Carolyn is a Senior Research Scholar at the Future of Humanity Institute and a Research Affiliate with the Centre for the Governance of AI. Her research focuses on improving the societal impacts of machine learning and related technologies, including topics in AI governance, responsible machine learning, and algorithmic fairness. Her technical fairness research focuses on using causal models to formalise incentives for fairness-related behaviours. On the question of responsible research and publication, Carolyn recently co-authored A Guide to Writing the NeurIPS Impact Statement and Institutionalizing Ethics in AI through Broader Impact Requirements, and co-organised the NeurIPS workshop on Navigating the Broader Impacts of AI Research.