LessWrong (30+ Karma)

LessWrong

Audio narrations of LessWrong posts.

  1. 5 HOURS AGO

    “Consider donating to Alex Bores, author of the RAISE Act” by Eric Neyman

    Written by Eric Neyman, in my personal capacity. The views expressed here are my own. Thanks to Zach Stein-Perlman, Jesse Richardson, and many others for comments.

    Over the last several years, I’ve written a bunch of posts about politics and political donations. In this post, I’ll tell you about one of the best donation opportunities that I’ve ever encountered: donating to Alex Bores, who announced his campaign for Congress today. If you’re potentially interested in donating to Bores, my suggestion would be to: Read this post to understand the case for donating to Alex Bores. Understand that political donations are a matter of public record, and that this may have career implications. Decide if you are willing to donate to Alex Bores anyway. If you would like to donate to Alex Bores: donations today, Monday, Oct 20th, are especially valuable. You can donate at this link. Or if [...]

    Outline:
    (01:16) Introduction
    (04:55) Things I like about Alex Bores
    (08:55) Are there any things about Bores that give me pause?
    (09:43) Cost-effectiveness analysis
    (10:10) How does an extra $1k affect Alex Bores' chances of winning?
    (12:22) How good is it if Alex Bores wins?
    (12:54) Direct influence on legislation
    (14:46) The House is a first step toward even more influential positions
    (15:35) Encouraging more action in this space
    (16:20) How does this compare to other AI safety donation opportunities?
    (16:37) Comparison to technical AI safety
    (17:28) Comparison to non-politics AI governance
    (18:25) Comparison to other political opportunities
    (19:39) Comparison to non-AI safety opportunities
    (21:20) Logistics and details of donating
    (21:24) Who can donate?
    (21:34) How much can I donate?
    (23:16) How do I donate?
    (24:07) Will my donation be public? What are the career implications of donating?
    (25:37) Is donating worth the career capital costs in your case?
    (26:32) Some examples of potential donor profiles
    (30:34) A more quantitative cost-benefit analysis
    (32:33) Potential concerns
    (32:37) What if Bores loses?
    (33:21) What about the press coverage?
    (34:09) Feeling rushed?
    (35:16) Appendix
    (35:19) Details of the cost-effectiveness analysis of donating to Bores
    (35:25) Probability that Bores loses by fewer than 1000 votes
    (38:37) How much marginal funding would net Bores an extra vote?
    (40:42) Early donations help consolidate support
    (42:47) One last adjustment: the big tech super PAC
    (45:25) Cost-benefit analysis of donating to Bores vs. adverse career effects
    (45:40) The philanthropic benefit of donating
    (46:32) The altruistic cost of donating
    (48:18) Cost-benefit analysis
    (49:01) Caveats

    The original text contained 14 footnotes which were omitted from this narration.

    First published: October 20th, 2025
    Source: https://www.lesswrong.com/posts/TbsdA7wG9TvMQYMZj/consider-donating-to-alex-bores-author-of-the-raise-act-1

    Narrated by TYPE III AUDIO.

    50 min
  2. 5 HOURS AGO

    “Considerations around career costs of political donations” by GradientDissenter

    I’m close to a single-issue voter/donor. I tend to like politicians who show strong support for AI safety, because I think it's an incredibly important and neglected problem. So when I make political donations, it's not as salient to me which party the candidate is part of, if they've gone out of their way to support AI safety and have some integrity.[1] I think many people who focus on AI safety feel similarly.

    But working in government also seems important. I want the government to have the tools and technical understanding it needs to monitor AI and ensure it doesn’t cause a catastrophe. Some people are concerned that donating to Democrats makes it harder to work in a Republican administration, or that donating to Republicans makes it harder to work in a Democrat administration. Administrations understandably care about loyalty (though they also care about domain expertise), and they have [...]

    Outline:
    (02:33) Background/facts
    (10:15) Recommendation
    (17:50) Explanation
    (21:49) Is it possible you will end up working in government?
    (23:27) Other advice
    (24:37) Counterarguments
    (25:38) Rambling
    (26:36) How did you come up with these heuristics?

    The original text contained 5 footnotes which were omitted from this narration.

    First published: October 20th, 2025
    Source: https://www.lesswrong.com/posts/8A8g4ryyZnaMhAQQF/considerations-around-career-costs-of-political-donations

    Narrated by TYPE III AUDIO.

    27 min
  3. 6 HOURS AGO

    “Bubble, Bubble, Toil and Trouble” by Zvi

    We have the classic phenomenon where suddenly everyone decided it is good for your social status to say we are in an ‘AI bubble.’ Are these people short the market? Do not be silly. The conventional wisdom response to that question these days is that, as was said in 2007, ‘if the music is playing you have to keep dancing.’ So even with lots of people newly thinking there is a bubble, the market has not moved down, other than (modestly) on actual news items, usually related to another potential round of tariffs, or that one time we had a false alarm during the DeepSeek Moment. So, what's the case we’re in a bubble? What's the case we’re not?

    My Answer In Brief

    People get confused about bubbles, often applying that label any time prices fall. So you have to be clear on what [...]

    Outline:
    (01:17) My Answer In Brief
    (02:18) Time Sensitive Point of Order: Alex Bores Launches Campaign For Congress, If You Care About AI Existential Risk Consider Donating
    (04:09) So They're Saying There's a Bubble
    (05:04) AI Is Atlas And People Worry It Might Shrug
    (05:35) Can A Bubble Be Common Knowledge?
    (08:33) Steamrollers, Picks and Shovels
    (09:27) What Can Go Up Must Sometimes Go Down
    (11:36) What Can Go Up Quite A Lot Can Go Even More Down
    (13:17) Step Two Remains Important
    (15:00) Oops We Might Do It Again
    (15:49) Derek Thompson Breaks Down The Arguments
    (17:47) AI Revenues Are Probably Going To Go Up A Lot
    (20:19) True Costs That Matter Are Absolute Not Relative
    (21:05) We Are Spending a Lot But Also Not a Lot
    (22:46) Valuations Are High But Not Super High
    (23:59) Official GPU Depreciation Schedules Seem Pretty Reasonable To Me
    (29:14) The Bubble Case Seems Weak
    (30:53) What It Would Mean If Prices Did Go Down

    First published: October 20th, 2025
    Source: https://www.lesswrong.com/posts/rkiBknhWh3D83Kdr3/bubble-bubble-toil-and-trouble

    Narrated by TYPE III AUDIO.

    Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    33 min
  4. 1 DAY AGO

    “AI #138 Part 2: Watch Out For Documents” by Zvi

    As usual when things split, Part 1 is mostly about capabilities, and Part 2 is mostly about a mix of policy and alignment.

    Table of Contents

    The Quest for Sane Regulations. The GAIN Act and some state bills.
    People Really Dislike AI. They would support radical, ill-advised steps.
    Chip City. Are we taking care of business?
    The Week in Audio. Hinton talks to Jon Stewart, Klein to Yudkowsky.
    Rhetorical Innovation. How to lose the moral high ground.
    Water Water Everywhere. AI has many big issues. Water isn’t one of them.
    Read Jack Clark's Speech From The Curve. It was a sincere, excellent speech.
    How One Other Person Responded To This Thoughtful Essay. Some aim to divide.
    A Better Way To Disagree. Others aim to work together and make things better.
    Voice Versus Exit. The age old [...]

    Outline:
    (00:20) The Quest for Sane Regulations
    (05:56) People Really Dislike AI
    (12:22) Chip City
    (13:12) The Week in Audio
    (13:24) Rhetorical Innovation
    (20:53) Water Water Everywhere
    (23:57) Read Jack Clark's Speech From The Curve
    (28:26) How One Other Person Responded To This Thoughtful Essay
    (38:43) A Better Way To Disagree
    (59:39) Voice Versus Exit
    (01:03:51) The Dose Makes The Poison
    (01:06:44) Aligning a Smarter Than Human Intelligence is Difficult
    (01:10:08) You Get What You Actually Trained For
    (01:15:54) Messages From Janusworld
    (01:18:37) People Are Worried About AI Killing Everyone
    (01:22:40) The Lighter Side

    The original text contained 1 footnote which was omitted from this narration.

    First published: October 17th, 2025
    Source: https://www.lesswrong.com/posts/gCuJ5DabY9oLNDs9B/ai-138-part-2-watch-out-for-documents

    Narrated by TYPE III AUDIO.

    Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    1 hr 25 min
  5. 1 DAY AGO

    “The Dark Arts of Tokenization or: How I learned to start worrying and love LLMs’ undecoded outputs” by Lovre

    Audio note: this article contains 225 uses of latex notation, so the narration may be difficult to follow. There's a link to the original text in the episode description.

    Introduction

    There are 208 ways to output the text ▁LessWrong[1] with the Llama 3 tokenizer, but even if you were to work with Llama 3 for thousands of hours, you would be unlikely to see any but one. An example that generalizes quite widely: if you prompt Llama 3.2 3B Base with the text “You're interested in rationality and AI? You should visit”, there is a ≈22.7003% chance that it outputs the text ▁LessWrong, of which ≈22.7001% is that it outputs exactly the tokens ▁Less and Wrong, ≈0.00024% that it outputs exactly the tokens ▁Less, W, and rong, and ≈0.000017% chance that it outputs any of the other 206 tokenizations which result in the text ▁LessWrong. All 208[2] possible tokenizations of LessWrong, the [...]

    Outline:
    (00:26) Introduction
    (05:38) A motivating example
    (07:23) Background information on tokenizers
    (08:40) Related works
    (12:05) Basic computations with Llama 3 tokenizer
    (14:31) Constructing a distribution over all tokenizations of a string
    (15:05) To which extent do alternative tokenizations break Llama?
    (15:31) ARC-Easy
    (20:28) A little bit of introspection
    (22:55) Learning multiple functions redux, finally
    (25:44) Function maximizer
    (29:42) An example
    (30:48) Results
    (31:23) What I talk about when I talk about both axes
    (32:59) Encoding and decoding bits
    (35:56) Decoding
    (36:44) Encoding
    (38:54) Could the usage of alternative tokenizations arise naturally?
    (41:38) Has it already happened?
    (43:27) Appendix: The psychological effects of tokenization

    The original text contained 18 footnotes which were omitted from this narration.

    First published: October 17th, 2025
    Source: https://www.lesswrong.com/posts/g9DmSzHxJXBD9poJR/the-dark-arts-of-tokenization-or-how-i-learned-to-start

    Narrated by TYPE III AUDIO.

    Images from the article (figure caption): “All 208 possible tokenizations of LessWrong, the number (#) of tokens in each tokenization, and the conditional probability of the token sequence given the prompt. Green highlight denotes the most likely continuation with that number of tokens, and red highlight the least likely one (except for 2 and 10, for which there exists a unique tokenization of that length).” Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
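
    The 208 figure above comes from enumerating every way the Llama 3 vocabulary can spell that string. As a rough illustration of what such an enumeration involves (this is not the post's code, and it uses a tiny made-up vocabulary rather than Llama 3's), here is a minimal Python sketch that lists every segmentation of a string into vocabulary tokens:

    ```python
    # Minimal sketch, not the post's code: enumerate every way a string can be
    # split into tokens drawn from a vocabulary. The 208 tokenizations of
    # " LessWrong" mentioned above come from doing this with the full Llama 3
    # vocabulary; TOY_VOCAB below is a made-up stand-in.
    from functools import lru_cache

    TOY_VOCAB = {" Less", "Wrong", " L", "ess", "W", "rong", "Wr", "ong"}
    MAX_LEN = max(len(t) for t in TOY_VOCAB)

    def tokenizations(text: str) -> list[tuple[str, ...]]:
        """Return every segmentation of `text` into TOY_VOCAB tokens."""
        @lru_cache(maxsize=None)
        def from_index(i: int) -> tuple[tuple[str, ...], ...]:
            if i == len(text):
                return ((),)  # exactly one way to tokenize the empty suffix
            ways = []
            for j in range(i + 1, min(i + MAX_LEN, len(text)) + 1):
                piece = text[i:j]
                if piece in TOY_VOCAB:
                    ways.extend((piece,) + rest for rest in from_index(j))
            return tuple(ways)

        return list(from_index(0))

    if __name__ == "__main__":
        for seq in tokenizations(" LessWrong"):
            print(seq)
    ```

    Caching on the suffix start index avoids recomputing shared suffixes. With a real tokenizer you would swap TOY_VOCAB for the actual set of token strings, and (as the post does) score each resulting token sequence under the model to get conditional probabilities like those quoted above.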

    47 min
