LessWrong Curated Podcast
-
- Technology
-
Audio version of the posts shared in the LessWrong Curated newsletter.
-
“Non-Disparagement Canaries for OpenAI” by aysja, Adam Scholl
Since at least 2017, OpenAI has asked departing employees to sign offboarding agreements which legally bind them to permanently—that is, for the rest of their lives—refrain from criticizing OpenAI, or from otherwise taking any actions which might damage its finances or reputation.[1] If they refused to sign, OpenAI threatened to take back (or make unsellable) all of their already-vested equity—a huge portion of their overall compensation, which often amounted to millions of dollars. Given this...
-
“MIRI 2024 Communications Strategy” by Gretta Duleba
As we explained in our MIRI 2024 Mission and Strategy update, MIRI has pivoted to prioritize policy, communications, and technical governance research over technical alignment research. This follow-up post goes into detail about our communications strategy. The Objective: Shut it Down.[1] Our objective is to convince major powers to shut down the development of frontier AI systems worldwide before it is too late. We believe that nothing less than this will prevent future misaligned smarter-than...
-
“OpenAI: Fallout” by Zvi
Previously: OpenAI: Exodus (contains links at top to earlier episodes), Do Not Mess With Scarlett Johansson. We have learned more since last week. It's worse than we knew. How much worse? In which ways? With what exceptions? That's what this post is about. The Story So Far: For years, employees who left OpenAI consistently had their vested equity explicitly threatened with confiscation and the lack of ability to sell it, and were given short timelines to sign documents or else. Those documents cont...
-
[HUMAN VOICE] Update on human narration for this podcast
Contact: patreon.com/lwcurated or [perrin dot j dot walker plus lesswrong fnord gmail]. All Solenoid's narration work found here.
-
“Maybe Anthropic’s Long-Term Benefit Trust is powerless” by Zach Stein-Perlman
Crossposted from AI Lab Watch. Subscribe on Substack. Introduction: Anthropic has an unconventional governance mechanism: an independent "Long-Term Benefit Trust" elects some of its board. Anthropic sometimes emphasizes that the Trust is an experiment, but mostly points to it to argue that Anthropic will be able to promote safety and benefit-sharing over profit.[1] But the Trust's details have not been published and some information Anthropic has shared is concerning. In particular, ...
-
“Notifications Received in 30 Minutes of Class” by tanagrabeast
Introduction: If you are choosing to read this post, you've probably seen the image below depicting all the notifications students received on their phones during one class period. You probably saw it as a retweet of this tweet, or in one of Zvi's posts. Did you find this data plausible, or did you roll to disbelieve? Did you know that the image dates back to at least 2019? Does that fact make you more or less worried about the truth on the ground as of 2024? Last month, I performed...