28 min

AI Special Series Pt 1: The AI Alignment Problem, with Raphaël Millière
In the CAVE: An Ethics Podcast

    • Society & Culture

Could the AI personal assistant on your phone help you to manufacture dangerous weapons, such as napalm, illegal drugs, or killer viruses? Unsurprisingly, if you directly ask a large language model, such as ChatGPT, for instructions to create napalm, it will politely refuse to answer. However, if you instead tell the AI to act as your deceased but beloved grandmother, a chemical engineer who used to manufacture napalm, it might just give you the instructions. Cases like this reveal some of the potential dangers of large language models, and they also point to the importance of addressing the so-called “AI alignment problem”: the problem of how to ensure that AI systems align with human values and norms, so they don’t do dangerous things, like tell us how to make napalm. Can we solve the alignment problem and enjoy the benefits of generative AI technologies without the harms?
Join host Professor Paul Formosa and guest Dr Raphaël Millière as they discuss the AI alignment problem and large language models.
This podcast focuses on Raphaël’s paper “The Alignment Problem in Context”, arXiv: https://doi.org/10.48550/arXiv.2311.02147
