50 min

Artificial Intelligence and Health Security, managing the risks
Evidence-Based Health Care


Professor Karl Roberts, University of New England, NSW, Australia gives a talk on generative AI and large language models as applied to healthcare. Professor Roberts is Head of the School of Health and Professor of Health and Wellbeing at the University of New England, NSW, Australia. He has over thirty years' experience working in academia at institutions in Australia, the UK and the USA, and has acted as an advisor to various international bodies and governments on issues related to wellbeing, violence prevention and professional practice. Notably, this has included working with policing agencies to develop policy and practice on suicide, stalking and homicide prevention; with Interpol, developing guidance for organisational responses to deliberate events such as biological weapon use; with the UK government's SAGE advisory group throughout the Covid-19 pandemic, focusing on security planning; with the European Union, advising on biological terrorism and extremist use of AI; and with the World Health Organisation, where he worked in a unit developing policy and practice related to deliberate biological threat events.

There has been substantial recent interest in the benefits and risks of artificial intelligence (AI). Views range from extolling its virtues as a harmless aid to decision making, a tool in research, and a means of improving economic productivity, to claims that unchecked AI is a significant threat to human wellbeing and could pose an existential threat to humanity. One area of significant recent advancement in AI has been the field of Large Language Models (LLMs). Exemplified by tools such as ChatGPT and DALL-E, these so-called generative AI models allow individuals to generate new outputs by interacting with the models using simple natural language inputs. Various versions of LLMs have been applied to healthcare and have been shown to be useful in areas as diverse as case formulation, diagnosis, novel drug discovery, and policy development. However, as with any new technology, there is a potential 'dark side', and it is possible to utilise these tools for nefarious purposes. This talk will give a brief introduction to generative AI and large language models as applied to healthcare. It will then discuss the potential for misuse of these models, seeking to highlight how they may be misused and how significant a threat they could pose to health security. Finally, we will consider strategies for managing the risks set against the possible benefits of generative AI. This talk is based on work carried out by the author and colleagues at the World Health Organisation and the Royal United Services Institute.


