342. Superalignment with Sam Altman’s Values

This Machine Kills Podcast

We talk about how everyone on the superalignment team at OpenAI (the group focused on safety, risk, adversarial testing, societal impacts, and existential concerns) is resigning, including high-profile figures like Ilya Sutskever. And nobody can talk about it because of non-disclosure and non-disparagement rules, draconian even by Silicon Valley standards, which employees must sign upon exiting the company or risk losing their vested equity. For us, the turmoil at OpenAI is indicative of the conflict between true believers (superalignment) and cynical operators (Sam Altman).

Outro: Aunty Donna – Real Estate Agents https://www.youtube.com/watch?v=VGm267O04a8

••• “I lost trust”: Why the OpenAI team in charge of safeguarding humanity imploded https://www.vox.com/future-perfect/2024/5/17/24158403/openai-resignations-ai-safety-ilya-sutskever-jan-leike-artificial-intelligence

••• ChatGPT can talk, but OpenAI employees sure can’t https://www.vox.com/future-perfect/2024/5/17/24158478/openai-departures-sam-altman-employees-chatgpt-release

Subscribe to hear more analysis and commentary in our premium episodes every week! https://www.patreon.com/thismachinekills

Hosted by Jathan Sadowski (www.twitter.com/jathansadowski) and Edward Ongweso Jr. (www.twitter.com/bigblackjacobin). Production / Music by Jereme Brown (www.twitter.com/braunestahl)
