How existing safety mitigation safeguards fail in LLMs with Khaoula Chehbouni, PhD Researcher at McGill and Mila

Activists Of Tech — The responsible tech podcast

Large Language Models, or LLMs, may be the most popular type of AI system today. They are often used as an alternative to search engines, even though they should not be: the text they produce merely resembles and mimics human speech and is not always factual, among many other issues discussed in this episode.

Our guest today is Khaoula Chehbouni, a PhD student in Computer Science at McGill University and Mila (Quebec AI Institute). Khaoula was awarded the prestigious FRQNT Doctoral Training Scholarship to research fairness and safety in large language models. She previously worked as a Senior Data Scientist at Statistics Canada and completed her master's in Business Intelligence at HEC Montreal, where she received the Best Master's Thesis award.

In this episode, we talked about the influence of the Western narratives LLMs are trained on, the limits of trust and safety, how racism and stereotypes are mirrored and amplified by LLMs, and what it is like to be a minority in a STEM academic environment. I hope you'll enjoy this episode.
