LessWrong (30+ Karma)

LessWrong

Audio narrations of LessWrong posts.

  1. 6 hours ago

    “Beliefs about formal methods and AI safety” by Quinn

    I appreciate Theodore Ehrenborg's comments. As a wee lad, I heard about mathematical certainty of computer programs. Let's go over what I currently believe and don’t believe.

    First: what is formal verification?
    Sometimes you get pwned because of the spec-implementation gap. The computer did not do what it should’ve done. Other times, you get pwned by the world-spec gap. The computer wasn’t wrong; your “shoulds” were.

    Expanding the domain of compiletime knowledge
    A compiler tells you the problem with your code when it is, in some sense, “wrong”. When you can define the sense in which your code can be “wrong”, you have circumscribed some domain of compiletime knowledge. In other words, you’ve characterized the kinds of things you can know at compiletime. The less you know at compiletime, the more you find out at runtime. The less you can afford to wait till [...] (A rough sketch of this compiletime/runtime contrast appears after the episode list.)

    ---

    Outline:
    (00:25) First: what is formal verification
    (00:44) Expanding the domain of compiletime knowledge
    (01:29) Isolating the bug surface to the world-spec gap, or sidechannels
    (02:10) Exploiting inductive structure
    (03:50) What I do not say
    (04:00) Specify what you want
    (04:42) Prove that the AI is correct
    (05:56) TLDR, this whole genre of “point the formal methods at the learned component itself” is viewed by me as a nonstarter.
    (06:13) What I do say
    (06:20) Swiss cheese!
    (07:24) Infrastructure hardening!
    (08:30) Boxing/interfaces!
    (09:29) Conclusion

    The original text contained 3 footnotes which were omitted from this narration.

    ---

    First published: October 23rd, 2025
    Source: https://www.lesswrong.com/posts/CCT7Qc8rSeRs7r5GL/beliefs-about-formal-methods-and-ai-safety

    ---

    Narrated by TYPE III AUDIO.

    ---

    Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    11 minutes
  2. 1 day ago

    “New Statement Calls For Not Building Superintelligence For Now” by Zvi

    Building superintelligence poses large existential risks. Also known as: If Anyone Builds It, Everyone Dies. Where ‘it’ is superintelligence, and ‘dies’ is that probably everyone on the planet literally dies. We should not build superintelligence until such time as that changes, and the risk of everyone dying as a result, as well as the risk of losing control over the future as a result, is very low. Not zero, but far lower than it is now or will be soon. Thus, the Statement on Superintelligence from FLI, which I have signed.

    Context: Innovative AI tools may bring unprecedented health and prosperity. However, alongside tools, many leading AI companies have the stated goal of building superintelligence in the coming decade that can significantly outperform all humans on essentially all cognitive tasks. This has raised concerns, ranging from human economic obsolescence and disempowerment, losses of freedom, civil liberties [...]

    ---

    Outline:
    (02:02) A Brief History Of Prior Statements
    (03:51) This Third Statement
    (05:08) Who Signed It
    (07:27) Pushback Against the Statement
    (09:05) Responses To The Pushback
    (12:32) Avoid Negative Polarization But Speak The Truth As You See It

    ---

    First published: October 24th, 2025
    Source: https://www.lesswrong.com/posts/QzY6ucxy8Aki2wJtF/new-statement-calls-for-not-building-superintelligence-for

    ---

    Narrated by TYPE III AUDIO.

    ---

    Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    14 minutes
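
The compiletime/runtime contrast in the first episode's excerpt can be made concrete. The Haskell sketch below is illustrative only and not taken from the post; the names firstOrCrash and firstTotal are invented for this example. It shows one way a richer type (NonEmpty from base's Data.List.NonEmpty) turns an "empty input" failure from something discovered at runtime into something the compiler rejects, i.e. it expands the domain of compiletime knowledge.

```haskell
-- Minimal sketch (not from the post): expanding compiletime knowledge.
-- With a plain list, emptiness is only discovered while the program runs;
-- with NonEmpty, code that could pass an empty collection is rejected by
-- the compiler, so that class of bug is ruled out before runtime.
import Data.List.NonEmpty (NonEmpty ((:|)))
import qualified Data.List.NonEmpty as NE

-- Runtime knowledge: crashes if handed [] while the program is running.
firstOrCrash :: [a] -> a
firstOrCrash = head

-- Compiletime knowledge: an empty input is unrepresentable here.
firstTotal :: NonEmpty a -> a
firstTotal = NE.head

main :: IO ()
main = print (firstTotal (1 :| [2, 3]))
```

Proving an implementation against a fuller spec would go further than this, but even the small shift above illustrates the trade the excerpt describes: the less you know at compiletime, the more you find out at runtime.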
