LessWrong (30+ Karma)

LessWrong

Audio narrations of LessWrong posts.

  1. 22 HOURS AGO

    “Rogue internal deployments via external APIs” by Fabien Roger, Buck

    Once AI companies build powerful AIs, they may: give internal AIs access to sensitive internal privileges (e.g. access to the internal infra that touches model weights, help with training the next generation of AIs, …); have mitigations specific to those internal deployments enforced via monitoring of the internal LLM API; and have an external API deployment without those mitigations. In this situation, an internal AI may build an agent scaffold that makes calls to the external API and uses the internal sensitive privileges. Such an agent scaffold would be in a better position to cause a catastrophe because it would not be subject to the same monitoring as the internal API. I call this a rogue internal deployment via external APIs. (It is “internal” because the model weights and the agent scaffold never leave the cluster.) I think preventing those is as important, from a misalignment perspective, as preventing [...]

    ---

    Outline:
    (01:43) Rogue internal deployments via external APIs
    (03:29) A variation: rogue internal deployments via cross-company APIs
    (04:28) A possible mitigation: preventing the creation of scaffolds that use external LLM APIs via monitoring
    (06:29) Why I am more pessimistic about other solutions
    (06:34) Monitoring the external API
    (08:47) Preventing access to external APIs
    (10:22) Monitoring access to sensitive permissions
    (10:57) Final thoughts

    ---

    First published: October 15th, 2025
    Source: https://www.lesswrong.com/posts/fqRmcuspZuYBNiQuQ/rogue-internal-deployments-via-external-apis

    ---

    Narrated by TYPE III AUDIO.

    A minimal illustrative sketch of the post’s proposed monitoring mitigation follows this entry.

    12 minutes
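    The headline mitigation discussed in this episode is monitoring for the creation of agent scaffolds that route around the internal LLM API by calling external providers. The sketch below is a minimal illustration of that idea, not code from the post: the hostname list, the request format, and the flag_external_llm_calls helper are all hypothetical.

    ```python
    # Illustrative sketch only (not from the post): flag outbound requests from
    # privileged internal jobs that target known external LLM API hosts, so a
    # monitor or human reviewer can inspect them. Hostnames are assumptions.
    EXTERNAL_LLM_API_HOSTS = {
        "api.openai.com",
        "api.anthropic.com",
        "generativelanguage.googleapis.com",
    }

    def flag_external_llm_calls(outbound_requests):
        """Return the requests whose destination is a known external LLM API host."""
        return [req for req in outbound_requests if req["host"] in EXTERNAL_LLM_API_HOSTS]

    # Hypothetical example: a privileged job unexpectedly calling an external provider.
    outbound = [
        {"job_id": "weights-sync-42", "host": "api.openai.com", "path": "/v1/responses"},
        {"job_id": "weights-sync-42", "host": "llm.internal.corp", "path": "/v1/generate"},
    ]
    print(flag_external_llm_calls(outbound))  # only the first request is flagged
    ```

    Where such a check would run (an egress proxy, code review of scaffolds, or elsewhere) is an implementation choice the sketch does not take a stance on; the post distinguishes this kind of scaffold-creation monitoring from monitoring the external API itself.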
  2. 1 DAY AGO

    “It will cost you nothing to ‘bribe’ a Utilitarian” by Gabriel Alfour

    Audio note: this article contains 41 uses of LaTeX notation, so the narration may be difficult to follow. There’s a link to the original text in the episode description.

    Abstract

    We present a formal model demonstrating how utilitarian reasoning creates a structural vulnerability that allows AI corporations to acquire a public veneer of safety at arbitrarily low cost. Drawing on the work of Houy [2014], we prove that an organisation can acquire _k_ safety-minded employees for a vanishingly small premium _epsilon_. This result formalises a well-known phenomenon in AI safety, wherein researchers concerned about existential risks from AI join an accelerationist corporation under the rationale of “changing things from the inside”, without ever producing measurable safety improvements. We discuss implications for AI governance, organisational credibility, and the limitations of utilitarian decision-making in competitive labour markets.

    1) Introduction

    The title is a play on It will [...]

    ---

    Outline:
    (00:22) Abstract
    (01:13) 1) Introduction
    (02:06) 2) Formal Framework
    (04:42) 3) Implications
    (06:22) 4) Future Work
    (08:10) Conclusion

    The original text contained 2 footnotes which were omitted from this narration.

    ---

    First published: October 15th, 2025
    Source: https://www.lesswrong.com/posts/MFg7nvR2QGd6KkLJZ/it-will-cost-you-nothing-to-bribe-a-utilitarian

    ---

    Narrated by TYPE III AUDIO.

    A minimal worked sketch of this style of bribery argument follows this entry.

    9 minutes
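    The abstract’s central claim in this episode is that _k_ safety-minded employees can be acquired for a vanishingly small premium _epsilon_. The LaTeX sketch below illustrates the general shape of such an argument under a simple wage-plus-perceived-impact utility of my own choosing; the paper’s actual formal framework (following Houy [2014]) may differ.

    ```latex
    % Illustrative sketch only, not the paper's model: a utilitarian candidate
    % comparing an accelerationist offer against declining it.
    \documentclass{article}
    \usepackage{amsmath}
    \begin{document}
    Let $w$ be the candidate's outside wage. Suppose the candidate believes the
    corporation's trajectory is essentially unchanged by whether they personally
    join, and assigns value $\delta \ge 0$ to the chance of ``changing things
    from the inside''. Accepting an offer of $w + \varepsilon$ then yields
    \[
      U_{\text{join}} = w + \varepsilon + \delta
      \qquad\text{versus}\qquad
      U_{\text{decline}} = w ,
    \]
    so for any $\varepsilon > 0$ the utilitarian accepts. Repeating the offer $k$
    times acquires $k$ such employees for a total premium of $k\varepsilon$, which
    can be made arbitrarily small.
    \end{document}
    ```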
