
“Articles about recent OpenAI departures” by bruce EA Forum Podcast (Curated & popular)

This is a link post. A brief overview of recent OpenAI departures (Ilya Sutskever, Jan Leike, Daniel Kokotajlo, Leopold Aschenbrenner, Pavel Izmailov, William Saunders, Ryan Lowe, Cullen O'Keefe[1]). I will add other relevant media pieces below as I come across them.
Some quotes perhaps worth highlighting:
Even when the team was functioning at full capacity, that “dedicated investment” was home to a tiny fraction of OpenAI's researchers and was promised only 20 percent of its computing power — perhaps the most important resource at an AI company. Now, that computing power may be siphoned off to other OpenAI teams, and it's unclear if there’ll be much focus on avoiding catastrophic risk from future AI models.
- Jan Leike, suggesting that compute for safety may have been deprioritised despite the 20% commitment. (Wired reports that OpenAI has confirmed its "superalignment team is no more".)
“I joined with substantial hope that OpenAI [...]


The original text contained 1 footnote which was omitted from this narration.
---

First published:

May 17th, 2024


Source:

https://forum.effectivealtruism.org/posts/ckYw5FZFrejETuyjN/articles-about-recent-openai-departures

---
Narrated by TYPE III AUDIO.
