326 episodes

Audio narrations from the Effective Altruism Forum, including curated posts and posts with 125+ karma.

If you'd like more episodes, subscribe to the "EA Forum (All audio)" podcast instead.

EA Forum Podcast (Curated & popular)
EA Forum Team

    • Society & Culture


    “Why so many ‘racists’ at Manifest?” by Austin

    Manifest 2024 is a festival that we organized last weekend in Berkeley. By most accounts, it was a great success. On our feedback form, the average response to “would you recommend to a friend” was a 9.0/10. Reviewers said nice things like “one of the best weekends of my life” and “dinners and meetings and conversations with people building local cultures so achingly beautiful they feel almost like dreams” and “I’ve always found tribalism mysterious, but perhaps that was just because I hadn’t yet found my tribe.”
    Arnold Brooks running a session on Aristotle's Metaphysics. More photos of Manifest here.
    However, a recent article in The Guardian and a review on the EA Forum highlight an uncomfortable fact: we invited a handful of controversial speakers to Manifest, whom these authors call out as “racist”. Why did we invite these folks?
    First: our sessions and guests were mostly not controversial — [...]
    ---
    Outline:
    (01:01) First: our sessions and guests were mostly not controversial — despite what you may have heard
    (03:03) Okay, but there sure seemed to be a lot of controversial ones…
    (06:03) Bringing people together with prediction markets
    (07:31) Anyways, controversy bad
    (08:57) Aside: Is Manifest an Effective Altruism event?
    ---

    First published:

    June 18th, 2024


    Source:

    https://forum.effectivealtruism.org/posts/34pz6ni3muwPnenLS/why-so-many-racists-at-manifest

    ---
    Narrated by TYPE III AUDIO.

    • 10 min
    “Help Fund Insect Welfare Science” by Bob Fischer, Daniela R. Waldhorn, abrahamrowe

    The Arthropoda Foundation
    Tens of trillions of insects are used or killed by humans across dozens of industries. Despite insects being the most numerous animals reared by these industries, we know next to nothing about what's good or bad for them. And right now, funding for this work is scarce. Traditional science funders won’t pay for it, and within EA the focus is on advocacy, not research. So, welfare science needs your help.
    We’re launching the Arthropoda Foundation, a fund to ensure that insect welfare science gets the essential resources it needs to provide decision-relevant answers to pressing questions. Every dollar we raise will be granted to research projects that can’t be funded any other way.
    We’re in a critical moment for this work. Over the last year, field-building efforts have accelerated, setting up academic labs that can tackle key studies. However, funding for these studies is [...]
    ---
    Outline:
    (00:10) The Arthropoda Foundation
    (01:17) Why do we need a fund?
    (02:55) Team
    ---

    First published:

    June 14th, 2024


    Source:

    https://forum.effectivealtruism.org/posts/2NsS7gjccJAKMf4co/help-fund-insect-welfare-science

    ---
    Narrated by TYPE III AUDIO.

    • 4 min
    “Maybe let the non-EA world train you” by ElliotT

    This post is for EAs at the start of their careers who are considering which organisations to apply to, and their next steps in general.
    Conclusion up front: It can be really hard to get that first job out of university. If you don’t get your top picks, your less exciting backup options can still be great for having a highly impactful career. If those first few years of work experience aren’t your best pick, they will still be useful as a place where you can ‘learn how to job’, save some money, and then pivot or grow from there.
    The main reasons are:
    The EA job market can be grim. Securing a job at an EA organisation straight out of university is highly competitive, often resulting in rejection, or in chaotic job experiences due to the nascent nature of many EA orgs. An alternative [...]
    ---
    Outline:
    (01:58) What's the problem? Three failure modes of trying to get an EA job
    (06:15) Maybe let the non-EA world train you
    (08:50) Let's get specific. Some of my story
    (11:45) Caveats
    (12:58) Wrapping up
    ---

    First published:

    June 14th, 2024


    Source:

    https://forum.effectivealtruism.org/posts/ZvXBSs9Nz3dKBKcAo/maybe-let-the-non-ea-world-train-you

    ---
    Narrated by TYPE III AUDIO.

    • 13 min
    “Maybe Anthropic’s Long-Term Benefit Trust is powerless” by Zach Stein-Perlman

    Crossposted from AI Lab Watch. Subscribe on Substack.
    Introduction. Anthropic has an unconventional governance mechanism: an independent "Long-Term Benefit Trust" elects some of its board. Anthropic sometimes emphasizes that the Trust is an experiment, but mostly points to it to argue that Anthropic will be able to promote safety and benefit-sharing over profit.[1]
    But the Trust's details have not been published and some information Anthropic has shared is concerning. In particular, Anthropic's stockholders can apparently overrule, modify, or abrogate the Trust, and the details are unclear.
    Anthropic has not publicly demonstrated that the Trust would be able to actually do anything that stockholders don't like.
    The facts
    There are three sources of public information on the Trust:
    The Long-Term Benefit Trust (Anthropic 2023)
    Anthropic Long-Term Benefit Trust (Morley et al. 2023)
    The $1 billion gamble to ensure AI doesn't destroy humanity (Vox: Matthews 2023)
    They say there's [...]
    ---
    Outline:
    (00:53) The facts
    (02:51) Conclusion
    The original text contained 2 footnotes which were omitted from this narration.
    ---

    First published:

    May 27th, 2024


    Source:

    https://forum.effectivealtruism.org/posts/JARcd9wKraDeuaFu5/maybe-anthropic-s-long-term-benefit-trust-is-powerless

    ---
    Narrated by TYPE III AUDIO.

    • 5 min
    “Summary of Situational Awareness - The Decade Ahead” by OscarD

    Original by Leopold Aschenbrenner, this summary is not commissioned or endorsed by him.
    Short Summary
    Extrapolating existing trends in compute, spending, algorithmic progress, and energy needs implies AGI (remote jobs being completely automatable) by ~2027. AGI will greatly accelerate AI research itself, leading to vastly superhuman intelligences being created ~1 year after AGI. Superintelligence will confer a decisive strategic advantage militarily by massively accelerating all spheres of science and technology. Electricity use will be a bigger bottleneck on scaling datacentres than investment, but is still doable domestically in the US by using natural gas. AI safety efforts in the US will be mostly irrelevant if other actors steal the model weights of an AGI. US AGI research must employ vastly better cybersecurity, to protect both model weights and algorithmic secrets. Aligning superhuman AI systems is a difficult technical challenge, but probably doable, and we must devote lots of [...]
    ---
    Outline:
    (00:12) Short Summary
    (02:16) 1. From GPT-4 to AGI: Counting the OOMs
    (02:23) Past AI progress
    (05:37) Training data limitations
    (06:41) Trend extrapolations
    (07:57) The modal year of AGI is soon
    (09:29) 2. From AGI to Superintelligence: the Intelligence Explosion
    (09:36) The basic intelligence explosion case
    (10:46) Objections and responses
    (14:06) The power of superintelligence
    (16:28) III. The Challenges
    (16:31) IIIa. Racing to the Trillion-Dollar Cluster
    (20:58) IIIb. Lock Down the Labs: Security for AGI
    (21:05) The power of espionage
    (22:09) Securing model weights
    (23:46) Protecting algorithmic insights
    (24:41) Necessary steps for improved security
    (26:35) IIIc. Superalignment
    (29:15) IIId. The Free World Must Prevail
    (32:15) 4. The Project
    (34:47) 5. Parting Thoughts
    (35:51) Responses to Situational Awareness
    The original text contained 1 footnote which was omitted from this narration.
    ---

    First published:

    June 8th, 2024


    Source:

    https://forum.effectivealtruism.org/posts/zmRTWsYZ4ifQKrX26/summary-of-situational-awareness-the-decade-ahead

    ---
    Narrated by TYPE III AUDIO.

    • 36 min
    “I doubled the world record cycling without hands for AMF” by Vincent van der Holst

    A couple of weeks ago I announced I was going to try to break the world record for cycling without hands for AMF. That post also explains why I wanted to break that record. Last Friday we broke the record and raised nearly €10,000 for AMF. Here's what happened on Friday. You can still donate here.
    What was the old record?
    Canadian Robert John Murray set the previous record of 130.29 kilometers in 5 hours 37 minutes in Calgary on June 12, 2023. His average speed was 23.2 kilometers per hour. See the Guinness World Records page here.
    I managed to double the record; these were my stats.
    How did the record attempt itself go?
    On Friday, June 7, I started the record attempt on the closed cycling course of WV Amsterdam just after 6 am. I got up at half past four and immediately drank a [...]
    ---

    First published:

    June 11th, 2024


    Source:

    https://forum.effectivealtruism.org/posts/5ru7nEtC6mufuBXbk/i-doubled-the-world-record-cycling-without-hands-for-amf

    ---
    Narrated by TYPE III AUDIO.

    • 8 min

Popular Society & Culture Podcasts

여둘톡
여자 둘이 토크하고 있습니다
Korean. American. Podcast
Daniel and Jun
무소속 생활자
도아,예진
This American Life
This American Life
삼모고
세레나, 윤호피디 | 라즈베리 유니버스 클럽
Happy Place
Fearne Cotton

You Might Also Like

Astral Codex Ten Podcast
Jeremiah
"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis
Erik Torenberg, Nathan Labenz
Dwarkesh Podcast
Dwarkesh Patel
The Ezra Klein Show
New York Times Opinion