86 episodes
EA Forum Podcast (Curated & popular) EA Forum Team
Society & Culture
Audio narrations from the Effective Altruism Forum, including curated posts and posts with 125+ karma.
“EA organizations should have a transparent scope” by Joey
Executive summary

One of the biggest challenges of being in a community that really cares about counterfactuals is knowing where the most important gaps are and which areas are already effectively covered. This can be even more complex with meta organizations and funders that often have broad scopes that change over time. However, I think it is really important for every meta organization to clearly establish what they cover and thus where these gaps are; there is a substantial negative flowthrough effect when a community thinks an area is covered when it is not.

Why this matters

The topic of having a transparent scope recently came up at a conference as one of the top concerns with many EA meta orgs. Some negative effects that have been felt by the community are in large part due to unclear scopes, including:

- Organizations leaving a space thinking it's covered when it's not.
- Funders reducing funding in an area due to an assumption that someone else is covering it when there are still major gaps.
- Two organizations working on the same thing without knowledge of each other, due to both having a broad mandate, but simultaneously putting resources into an overlapping subcomponent of this mandate.
- Talent being turned off or feeling misled by EA when they think an org misportrays itself.
- Talent 'dropping out of the funnel' when they go to what they believe is the primary organization covering an area and find that what they care about isn't covered, due to the organization claiming too broad a mandate.

There can also be a significant amount of general frustration caused when people think an organization will cover, or is covering, an area and the organization then fails to deliver (often on something it did not even plan on doing).

What do I mean when I say that organizations should have a transparent scope? Broadly, I mean organizations being publicly clear and specific about what they are planning to cover, both in terms of action and cause area.
- In a relevant timeframe: I think this is most important in the short term (e.g., there is a ton of value in an organization saying what it is going to cover over the next 12 months, and what it has covered over the last months).
- For the most important questions: This clarity is needed both in priorities (e.g., cause prioritization) and in planned actions (e.g., working with student chapters). This can include things the organization might like to do, or think would be impactful, but is not doing due to capacity constraints or its current strategic direction.
- For the areas people are most likely to confuse: It is particularly important to provide clarity about things that people think one might be doing (for example, Charity Entrepreneurship probably doesn't need to clarify that it doesn't sell flowers, but should really be transparent about whether it plans to incubate projects in a certain cause area or not).

How to do this

When I have talked to organizations about this, I sometimes think that the "perfect" becomes the enemy of the good and they do not [...]
Source:
https://forum.effectivealtruism.org/posts/mzzPMrBjGpra2JSDw/ea-organizations-should-have-a-transparent-scope
Share feedback on this narration.
Narrated by TYPE III AUDIO.
“Effective altruism organizations should avoid using “polarizing techniques”” by Joey
TL;DR: The EA movement should not use techniques that alienate people from the EA community as a whole if they do not align with a particular subgroup within the community. These approaches not only have an immediate negative impact on the EA community, but also have long-term repercussions for the sub-community utilizing them. Right now, the EA movement uses these sorts of tactics too often.

People connect with the EA movement through many different channels, and often encounter sub-communities before they have a full understanding of the movement and the wide variety of opinions and viewpoints within it. These sub-communities can sometimes make the mistake of using "polarizing techniques". By this, I mean strategies that alienate people or burn bridges with the broader community. This could come from pushing a sub-perspective too hard, or from being aggressively dismissive of other views.

An example of this might be if I met a talented person at a party who said they wanted to change to an impactful career but had never heard of EA. If I then proceeded to aggressively push founding a charity through Charity Entrepreneurship (the organization) as a career path, to the point where they got turned off of EA altogether because they didn't come on board with my claims, I would consider that a polarizing approach: either they choose charity entrepreneurship as a path, or they don't engage with effective altruism at all. Note that in the short term, all Charity Entrepreneurship really measures impact-wise is how many great charities get started, so a good person going into policy because I connected them to Probably Good means nothing to our organizational impact. Taken to an extreme, it might be worth pushing quite hard if I think that founding nonprofits is many times more important as a career path than policy. However, I think this style hurts both the community and Charity Entrepreneurship long-term.
This phenomenon occurs across a diverse range of people, in both funding and career transitions. Most often, it revolves around cause prioritization. It can be disappointing when someone does not share your enthusiasm for your preferred causes, but there is still a lot of value in directing them to the most impactful path they would in fact consider pursuing.

The clearest way this technique is damaging is that turning someone off from one part of the community often demotivates them from engaging positively in other parts of it. It makes them more likely to become an active critic, rather than a neutral or contributing member of a different sub-community or of the philosophy of effective altruism as a whole.

Different sub-communities look for different types of people and resources. It's difficult for one person to have a bird's-eye view of all sub-communities within EA, and it's easy to overvalue your own sub-community's particular needs or strengths. On numerous occasions, I have witnessed one sub-community dismiss individuals possessing skills that would be immensely valuable in another segment of the community. It seems [...]
Source:
https://forum.effectivealtruism.org/posts/viXCv8thAAd68Qnfs/effective-altruism-organizations-should-avoid-using
“Critiques of prominent AI safety labs: Conjecture” by Omega
In this series, we consider AI safety organizations that have received more than $10 million per year in funding. There have already been several conversations and critiques around MIRI (1) and OpenAI (1, 2, 3), so we will not be covering them.

The authors include one technical AI safety researcher (>4 years experience) and one non-technical community member with experience in the EA community. We would like to make our critiques non-anonymously, but believe this would not be a wise move professionally speaking. We believe our criticisms stand on their own without appeal to our positions. Readers should not assume that we are completely unbiased or have nothing to personally or professionally gain from publishing these critiques. We've tried to take the benefits and drawbacks of the anonymous nature of our post seriously and carefully, and are open to feedback on anything we might have done better.

This is the second post in this series, and it covers Conjecture. Conjecture is a for-profit alignment startup founded in late 2021 by Connor Leahy, Sid Black and Gabriel Alfour, which aims to scale applied alignment research. Based in London, Conjecture has received $10 million in funding from venture capitalists (VCs) and recruits heavily from the EA movement.

We shared a draft of this document with Conjecture for feedback prior to publication, and include their response below. We also requested feedback on a draft from a small group of experienced alignment researchers from various organizations, and have invited them to share their views in the comments of this post. We would like to invite others to share their thoughts openly in the comments if you feel comfortable, or to contribute anonymously via this form.
We will add inputs from there to the comments section of this post, but will likely not update the main body of the post as a result (unless comments catch errors in our writing).

Key Takeaways

For those with limited knowledge and context on Conjecture, we recommend first reading or skimming the About Conjecture section. Time to read the core sections (Criticisms & Suggestions and Our views on Conjecture) is 22 minutes.

Criticisms and Suggestions

- We think Conjecture's research is low quality (read more). Their posts don't always make assumptions clear or state what evidence base they have for a given hypothesis, and evidence is frequently cherry-picked. We also think their bar for publishing is too low, which decreases the signal-to-noise ratio. Conjecture has acknowledged some of these criticisms, but not all (read more).
- We make specific critiques of examples of their research from their initial research agenda (read more).
- There is limited information available on their new research direction (cognitive emulation), but from the publicly available information it appears extremely challenging, so we are skeptical of its tractability (read more).
- We have some concerns with the CEO's character and trustworthiness because, in order of importance (read more):
  - The CEO and Conjecture have misrepresented themselves to external parties multiple times (read more);
  - The CEO's involvement in EleutherAI and Stability AI has contributed to race dynamics (read more);
  - The CEO [...]
Source:
https://forum.effectivealtruism.org/posts/gkfMLX4NWZdmpikto/critiques-of-prominent-ai-safety-labs-conjecture
“Why I spoke to TIME magazine, and My Experience as a Female AI Researcher in Silicon Valley [SA Sequence Intro, Advice, and AMA]” by Lucretia
Crossposted on Medium here. Twitter: @lucreti_a. From Lore Olympus.

Thank you to the supportive EA members who encouraged me to publicly share this difficult experience, to my friends and research collaborators for your kindness, and to the courageous women who helped me in writing this post, who I hope can someday speak publicly. To those who know me, please call me Lucretia.

This is a megapost. Each section has a distinct purpose and may evolve into its own standalone post. For the full picture, I recommend reading to the end. My cross-posted version on Medium is broken into sections for easier reading.

0. Overview

Introduction. I was one of the women who spoke to TIME magazine about sexual harassment and abuse in EA. Here is my story without media distortions.

Advice for Female Founders and AI Researchers in the Valley. Silicon Valley can be a brutal place for women. This is what I wish I knew five years ago.

My Case Study. I am an AI researcher. I believe my AI alignment research career was needlessly encumbered by:

- My experience with the sexually abusive red pill and pickup artist sphere, which entwined with a branch of AI safety in Cambridge, MA and Silicon Valley. I describe the unethical core of red pill ideology, including the running of "rape scripts".
- The recent retaliation by a Silicon Valley AI community against my report of harm. This community's aggressive reaction showed many gender biases latent in AI culture.

Systemic Sexual Violence in Silicon Valley. I believe the male-dominated environment, nepotistic connections to investor money, extreme power disparities between wealthy AI researchers and aspiring young women in the AI and startup sectors, hacker house party culture, psychedelics misused as date rape drugs, cults of personality, a substantial population of low-empathy, risk-seeking, and/or narcissistic men, and a lack of functional policing mechanisms make sexual violence a systemic problem in a critical X-risk industry.

Why I Spoke to TIME.
I address some misconceptions about the original TIME article on sexual harassment, and explain why I spoke to TIME in the first place.

Helpful Books and Movies. I share what I have learned about sexual harassment and abuse after ~15 months of focusing on the problem, including my favorite books and movies about sexual harassment/abuse to flesh out more conceptual space. For all the seriousness of this post, these books and movies are entertaining, gorgeous, and healing!

Future Sequences? Depending on the reactions to this post, I would love to write a Sequence on sexual harassment and abuse from first principles.

Call to Action: Recovery and Litigation Funds. AGI should neither be built nor aligned in environments of deceit. We propose a call to action for a Recovery Fund and a Sociological AI Alignment Fund / Litigation Fund to counteract the sexual predation Moloch in Silicon Valley, which is a sociological AI safety problem.

Appendix

- Excerpts from red pill literature
- Notes on Rape vs Consent Culture

1. Introduction

Some recent posts on the EA forum have thoughtfully and earnestly addressed sexual harassment and abuse. Thank you to the EA community for your insightful posts and comments, and for genuinely trying to address the problem, which made [...]
Source:
https://forum.effectivealtruism.org/posts/LqjG4bAxHfmHC5iut/why-i-spoke-to-time-magazine-and-my-experience-as-a-female
“How economists got Africa’s AIDS epidemic wrong” by Justin Sandefur
I'm reposting this from the CGDev site, as I thought it might be interesting to EA folks (thanks to Ryan Briggs for the suggestion). For the short version, here's a twitter thread.

In the 2000s, cost-effectiveness analysis said it was a bad use of money to send antiretroviral drugs to low-income countries—drugs that ended up saving millions of lives.

Twenty years ago, in the same State of the Union speech in which he made the case for invading Iraq, George W. Bush asked Congress for $15 billion over five years for an ambitious new plan to pay for antiretroviral drugs for two million AIDS patients in Africa and the Caribbean.

The President's Emergency Plan for AIDS Relief, or PEPFAR, went on to become probably the most celebrated American foreign aid program since the Marshall Plan. An evaluation by the National Academy of Sciences estimates PEPFAR has saved millions of lives (PEPFAR itself claims 25 million). Impacts on total mortality rates across fourteen African countries were visible within just the first few years of the program (see figure 1). Separate research suggests the rollout of antiretrovirals, of which PEPFAR was a major component, explained about a third of Africa's economic growth resurgence in the 2000s.

Figure 1. Adult mortality in PEPFAR focus and non-focus countries (from Bendavid et al 2012, JAMA)

But at the time, some economists balked. The conventional wisdom within health economics was that sending AIDS drugs to Africa was a waste of money.
The dominant conceptual apparatus economists use to evaluate social policies—comparative cost-effectiveness analysis, which focuses on a specific goal like saving lives and ranks policies by lives saved per dollar—suggested America's foreign aid budget could have been better spent on condoms and awareness campaigns, or even on malaria and diarrheal diseases.

"Treating HIV doesn't pay"

In a now-infamous op-ed published in Forbes in 2005, before PEPFAR's impacts were well documented, Brown University economist Emily Oster declared that "treating HIV doesn't pay." "It is humane to pay for AIDS drugs in Africa," she wrote, "but it isn't economical. The same dollars spent on prevention would save more lives."

In fairness to Oster and others, the phrasing here is important. Her argument was not that African HIV patients' lives weren't worth the cost—that antiretroviral drug prices exceeded the "value of a statistical life", as economists might phrase it—but rather that if we take the budget as fixed, and the prices as fixed, the money could do more good if spent on other health programs.

Oster wasn't alone. While her delivery was perhaps deliberately provocative, her basic reasoning reflected a broad professional consensus, which viewed antiretrovirals through the lens of comparative cost-effectiveness analysis and deemed them middling to poor value.

A systematic review published in the Lancet in 2002, just as the Bush administration was privately plotting the PEPFAR announcement, found that in terms of saving "disability-adjusted life years", or DALYs, "a case of HIV/AIDS can be prevented for $11, and a DALY gained for $1" by improving the safety of blood transfusions and distributing condoms, whereas "antiretroviral therapy for [...]
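The comparative cost-effectiveness reasoning described above can be sketched in a few lines: take the budget and prices as fixed, attach a cost per DALY averted to each intervention, and rank. The numbers below are hypothetical placeholders (not figures from the article or the Lancet review); only the ranking logic is the point.

```python
# Sketch of comparative cost-effectiveness analysis, as the post describes it:
# with a fixed budget, rank interventions by cost per DALY averted.
# All dollar figures here are hypothetical, for illustration only.

interventions = {
    # name: hypothetical cost per DALY averted, in USD
    "condom distribution": 1,
    "blood transfusion safety": 1,
    "malaria bednets": 10,
    "antiretroviral therapy": 350,
}

def rank_by_cost_effectiveness(options):
    """Return intervention names ordered from cheapest DALY to most expensive."""
    return sorted(options, key=lambda name: options[name])

ranking = rank_by_cost_effectiveness(interventions)
# Under this lens, antiretroviral therapy ranks last -- the conclusion the post
# argues misled economists, because it treated prices and budgets as fixed
# when PEPFAR's scale ended up changing both.
```

The framework itself is just a sort; the post's critique is about what the inputs held constant, not the arithmetic.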
Source:
https://forum.effectivealtruism.org/posts/qyhDz9djZAmxZ6Qzx/how-economists-got-africa-s-aids-epidemic-wrong
“Cause area report: Antimicrobial Resistance” by Akhil
This post is a summary of some of my work as a field strategy consultant at Schmidt Futures' Act 2 program, where I spoke with over a hundred experts and did a deep dive into antimicrobial resistance to find impactful investment opportunities within the cause area. The full report can be accessed here.
Antimicrobials, the medicines we use to fight infections, have played a foundational role in improving the length and quality of human life since penicillin and other antimicrobials were first developed in the early and mid 20th century.
Antimicrobial resistance, or AMR, occurs when bacteria, viruses, fungi, and parasites evolve resistance to antimicrobials. As a result, antimicrobial medicines such as antibiotics and antifungals become ineffective and unable to fight infections in the body.
AMR is responsible for millions of deaths each year, more than HIV or malaria (ARC 2022). The AMR Visualisation Tool, produced by Oxford University and IHME, visualises IHME data showing that 1.27 million deaths per year are directly attributable to bacterial resistance and 4.95 million deaths per year are associated with it.
Source:
https://forum.effectivealtruism.org/posts/W93Pt7xch7eyrkZ7f/cause-area-report-antimicrobial-resistance
Narrated for the Effective Altruism Forum by TYPE III AUDIO.