AI Safety...Ok Doomer: with Anca Dragan

Google DeepMind: The Podcast

Building safe and capable models is one of the greatest challenges of our time. Can we make AI work for everyone? How do we prevent existential threats? Why is alignment so important? Join Professor Hannah Fry as she explores these critical questions with Anca Dragan, lead for AI safety and alignment at Google DeepMind.

For further reading, search "Introducing the Frontier Safety Framework" and "Evaluating Frontier Models for Dangerous Capabilities".

Thanks to everyone who made this possible, including but not limited to: 

  • Presenter: Professor Hannah Fry
  • Series Producer: Dan Hardoon
  • Editor: Rami Tzabar, TellTale Studios 
  • Commissioner & Producer: Emma Yousif
  • Production support: Mo Dawoud
  • Music composition: Eleni Shaw
  • Camera Director and Video Editor: Tommy Bruce
  • Audio Engineer: Perry Rogantin
  • Video Studio Production: Nicholas Duke
  • Video Editor: Bilal Merhi
  • Video Production Design: James Barton
  • Visual Identity and Design: Eleanor Tomlinson
  • Commissioned by Google DeepMind

Please subscribe on your preferred podcast platform. Want to share feedback? Leave us a review. Have a suggestion for a guest we should have on next? Leave a comment on YouTube and stay tuned for future episodes.
