The Nonlinear Library: EA Forum Top Posts

The Nonlinear Fund

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio

  1. 2021/12/12

    My mistakes on the path to impact by Denise_Melchin

welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: My mistakes on the path to impact, published by Denise_Melchin on the effective altruism forum. Doing a lot of good has been a major priority in my life for several years now. Unfortunately I made some substantial mistakes which have lowered my expected impact a lot, and I am on a less promising trajectory than I would have expected a few years ago. In the hope that other people can learn from my mistakes, I thought it made sense to write them up here! I will attempt to list the mistakes which lowered my impact most over the past several years in this post and then analyse their causes. Writing this post and previous drafts has also been very personally useful to me, and I can recommend undertaking such an analysis. Please keep in mind that my analysis of my mistakes is likely at least a bit misguided and incomprehensive. It would have been nice to condense the post a bit more and structure it better, but having already spent a lot of time on it and wanting to move on to other projects, I thought it would be best not to let the perfect be the enemy of the good! To put my mistakes into context, I will give a brief outline of what happened in my career-related life in the past several years before discussing what I consider to be my main mistakes. Background I came across the EA Community in 2012, a few months before I started university. Before that point my goal had always been to become a researcher. Until early 2017, I did a mathematics degree in Germany and received a couple of scholarships. I did a lot of ‘EA volunteering’ over the years, mostly community building and large-scale grantmaking. I also did two unpaid internships at EA orgs, one during my degree and one after graduating, in summer 2017. After completing my summer internship, I started to try to find a role at an EA org. I applied to ~7 research and grantmaking roles in 2018. I got to the last stage 4 times, but received no offers. The closest I got was receiving a 3-month trial offer as a Research Analyst at Open Phil, but it turned out they were unable to provide visas. In 2019, I worked as a Research Assistant for a researcher at an EA-aligned university institution on a grant for a few hundred hours. I stopped as there seemed to be no route to a secure position and the role did not seem like a good fit. In late 2019 I applied for jobs suitable for STEM graduates with no experience. I also stopped doing most of my EA volunteering. In January 2020 I began to work in an entry-level data analyst role in the UK Civil Service, which I have been really happy with. In November, after 6.5 months of full-time equivalent work, I received a promotion to a more senior role with management responsibility and a significant pay rise. First I am going to discuss what I think I did wrong from a first-order practical perspective. Afterwards I will explain which errors in my decision-making process I consider the likely culprits for these mistakes - the patterns of behaviour which need to be changed to avoid similar mistakes in the future. A lot of the following seems pretty silly to me now, and I struggle to imagine how I ever fully bought into the mistakes and systematic errors in my thinking in the first place. But here we go! What did I get wrong? I did not build broad career capital, nor did I keep my options open.
During my degree, I mostly focused on EA community building efforts as well as making good donation decisions. I made few attempts to build skills for the type of work I was most interested in doing (research) or skills that would be particularly useful for higher earning paths (e.g. programming), especially later on. My only internships were at EA organisations in research roles. I also stopped trying to do well in my degree later on, and stopped my previously-substantial involvement in political work. In my

16 minutes
  2. 2021/12/12

    Growth and the case against randomista development by HaukeHillebrandt, John G. Halstead

welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Growth and the case against randomista development, published by HaukeHillebrandt, John G. Halstead on the effective altruism forum. Update, 3/8/2021: I (Hauke) gave a talk at Effective Altruism Global on this post: Summary Randomista development (RD) is a form of development economics which evaluates and promotes interventions that can be tested by randomised controlled trials (RCTs). It is exemplified by GiveWell (which primarily works in health) and the randomista movement in economics (which primarily works in economic development). Here we argue for the following claims, which we believe to be quite weak: Prominent economists make plausible arguments which suggest that research on and advocacy for economic growth in low- and middle-income countries is more cost-effective than the things funded by proponents of randomista development. Effective altruists have devoted too little attention to these arguments. Assessing the soundness of these arguments should be a key focus for current generation-focused effective altruists over the next few years. We hope to start a conversation on these questions, and potentially to cause a major reorientation within EA. We also believe the following stronger claims: 4. Improving health is not the best way to increase growth. 5. A ~4 person-year research effort will find donation opportunities working on economic growth in LMICs which are substantially better than GiveWell’s top charities from a current generation human welfare-focused point of view. However, economic growth is not all that matters. GDP misses many crucial determinants of human welfare, including leisure time, inequality, foregone consumption from investment, public goods, social connection, life expectancy, and so on. A top priority for effective altruists should be to assess the best way to increase human welfare outside of the constraints of randomista development, i.e. allowing interventions that have not been or cannot be tested by RCTs. We proceed as follows: We define randomista development and contrast it with research and advocacy for growth-friendly policies in low- and middle-income countries. We show that randomista development is overrepresented in EA, and that, in contradistinction, research on and advocacy for growth-friendly economic policy (we refer to this as growth throughout) is underrepresented. We then show why some prominent economists believe that, a priori, growth is much more effective than most RD interventions. We present a quantitative model that tries to formalize these intuitions and allows us to compare global development interventions with economic growth interventions. The model suggests that under plausible assumptions a hypothetical growth intervention can be thousands of times more cost-effective than typical RD interventions such as cash transfers. However, when these assumptions are relaxed and compared to the very good RD interventions, growth interventions are on a similar level of effectiveness as RD interventions. We consider various possible objections and qualifications to our argument. Acknowledgements Thanks to Stefan Schubert, Stephen Clare, Greg Lewis, Michael Wiebe, Sjir Hoeijmakers, Johannes Ackva, Gregory Thwaites, Will MacAskill, Aidan Goth, Sasha Cooper, and Carl Shulman for comments. Any mistakes are our own. Opinions are ours, not those of our employers.
Marinella Capriati at GiveWell commented on this piece, and the piece does not represent her views or those of GiveWell. 1. Defining Randomista Development We define randomista development (RD) as an approach to development economics which investigates, evaluates and recommends only interventions which can be tested by randomised controlled trials (RCTs). RD can take low-risk or more “hits-based” forms. Effective altruists have especially focused on the low-ri

57 minutes
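The summary above gestures at a quantitative model without reproducing it. As a purely illustrative sketch of the kind of comparison being described (this is not the authors' model, and every number below is a hypothetical placeholder), the basic structure is: expected welfare gain = success probability × people affected × income gain × duration, divided by cost.

```python
# Toy illustration only -- NOT the model from the post. Every number is a
# hypothetical placeholder, chosen just to show the structure of comparing a
# growth-advocacy intervention against a cash-transfer benchmark.

def welfare_per_dollar(total_welfare_gain_usd: float, cost_usd: float) -> float:
    """Consumption-equivalent welfare gain per dollar spent."""
    return total_welfare_gain_usd / cost_usd

# Benchmark: $1 of cash transfers produces roughly $1 of extra consumption.
cash_transfers = welfare_per_dollar(total_welfare_gain_usd=1.0, cost_usd=1.0)

# Hypothetical growth-advocacy campaign: a small chance of nudging policy in a
# large economy, raising incomes for many people over many years.
p_success = 0.01            # chance the advocacy actually changes policy
population = 50_000_000     # people affected if it does
income_gain = 100           # extra consumption per person per year, USD
years = 20                  # how long the effect lasts
campaign_cost = 10_000_000  # cost of the advocacy effort, USD

growth_advocacy = welfare_per_dollar(
    total_welfare_gain_usd=p_success * population * income_gain * years,
    cost_usd=campaign_cost,
)

print(f"Cash transfers:  ~{cash_transfers:.0f}x per dollar")
print(f"Growth advocacy: ~{growth_advocacy:.0f}x per dollar (made-up inputs)")
# With these placeholders the growth intervention comes out around 100x,
# illustrating how a small success probability can still dominate when the
# affected population and duration are large.
```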
  3. 2021/12/12

    Announcing my retirement by Aaron Gertler

    welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Announcing my retirement, published by Aaron Gertler on the effective altruism forum. A few sharp-eyed readers noticed my imminent departure from CEA in our last quarterly report. Gold stars all around! My last day as our content specialist — and thus, my last day helping to run the Forum — is December 10th. The other moderators will continue to handle the basics, and we’re in the process of hiring my replacement. (Let me know if anyone comes to mind!) Managing this place was fun. It wasn’t always fun, but — on the whole, a good time. I’ve enjoyed giving feedback to a few hundred people, organizing some interesting AMAs, running a writing contest, building up the Digest, hosting workshops for EA groups around the world, and deleting a truly staggering number of comments advertising escort services (I’ll spare you the link). More broadly, I’ve felt a continual sense of admiration for everyone who cares about the Forum and tries to make it better — by reading, voting, posting, crossposting, commenting, tagging, Wiki-editing, bug-reporting, and/or moderating. Collectively, you’ve put in tens of thousands of hours of work to develop our strange, complicated, unique website, with scant compensation besides karma. (Now that I’m leaving, it’s time to be honest — despite the rumors, our karma isn’t the kind that gets you a better afterlife.) Thank you for everything you’ve done to make this job what it was. What’s next? In January, I’ll join Open Philanthropy as their communications officer, working to help their researchers publish more of their work. I’ll also be joining Effective Giving Quest as their first partnered streamer. Wish me luck: moderating this place sometimes felt like herding cats, but it’s nothing compared to Twitch chat. My Forum comments will be less frequent, but probably spicier. thanks for listening. to help us out with the nonlinear library or to learn more, please visit nonlinear.org.

2 minutes
  4. 2021/12/12

    My current impressions on career choice for longtermists by Holden Karnofsky

    welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: My current impressions on career choice for longtermists, published by Holden Karnofsky on the effective altruism forum. This post summarizes the way I currently think about career choice for longtermists. I have put much less time into thinking about this than 80,000 Hours, but I think it's valuable for there to be multiple perspectives on this topic out there. Edited to add: see below for why I chose to focus on longtermism in this post. While the jobs I list overlap heavily with the jobs 80,000 Hours lists, I organize them and conceptualize them differently. 80,000 Hours tends to emphasize "paths" to particular roles working on particular causes; by contrast, I emphasize "aptitudes" one can build in a wide variety of roles and causes (including non-effective-altruist organizations) and then apply to a wide variety of longtermist-relevant jobs (often with options working on more than one cause). Example aptitudes include: "helping organizations achieve their objectives via good business practices," "evaluating claims against each other," "communicating already-existing ideas to not-yet-sold audiences," etc. (Other frameworks for career choice include starting with causes (AI safety, biorisk, etc.) or heuristics ("Do work you can be great at," "Do work that builds your career capital and gives you more options.") I tend to feel people should consider multiple frameworks when making career choices, since any one framework can contain useful insight, but risks being too dogmatic and specific for individual cases.) For each aptitude I list, I include ideas for how to explore the aptitude and tell whether one is on track. Something I like about an aptitude-based framework is that it is often relatively straightforward to get a sense of one's promise for, and progress on, a given "aptitude" if one chooses to do so. This contrasts with cause-based and path-based approaches, where there's a lot of happenstance in whether there is a job available in a given cause or on a given path, making it hard for many people to get a clear sense of their fit for their first-choice cause/path and making it hard to know what to do next. This framework won't make it easier for people to get the jobs they want, but it might make it easier for them to start learning about what sort of work is and isn't likely to be a fit. I’ve tried to list aptitudes that seem to have relatively high potential for contributing directly to longtermist goals. I’m sure there are aptitudes I should have included and didn’t, including aptitudes that don’t seem particularly promising from a longtermist perspective now but could become more so in the future. In many cases, developing a listed aptitude is no guarantee of being able to get a job directly focused on top longtermist goals. Longtermism is a fairly young lens on the world, and there are (at least today) a relatively small number of jobs fitting that description. However, I also believe that even if one never gets such a job, there are a lot of opportunities to contribute to top longtermist goals, using whatever job and aptitudes one does have. To flesh out this view, I lay out an "aptitude-agnostic" vision for contributing to longtermism. Some longtermism-relevant aptitudes "Organization building, running, and boosting" aptitudes[1] Basic profile: helping an organization by bringing "generally useful" skills to it. 
By "generally useful" skills, I mean skills that could help a wide variety of organizations accomplish a wide variety of different objectives. Such skills could include: Business operations and project management (including setting objectives, metrics, etc.) People management and management coaching (some manager jobs require specialized skills, but some just require general management-associated skills) Executive leadership (setting

44 minutes
  5. 2021/12/12

    After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation by EA applicant

welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation, published by EA applicant on the effective altruism forum. (I am writing this post under a pseudonym because I don’t want potential future non-EA employers to find this with a quick google search. Initially my name could be found on the CV linked in the text, but after this post was shared much more widely than I had expected, I got cold feet and removed it.) In the past 12 months, I applied for 20 positions in the EA community. I didn’t get any offer. At the end of this post, I list all those positions, and how much time I spent in the application process. Before that, I write about why I think more posts like this could be useful. Please note: The positions were all related to long-termism, EA movement building, or meta-activities (e.g. grant-making). To stress this again, I did not apply for any positions in e.g. global health or animal welfare, so what I’m going to say might not apply to these fields. Costs of applications Applying has considerable time costs. Below, I estimate that I spent 7-8 weeks of full-time work in application processes alone. I guess it would be roughly twice as much if I factored in things like searching for positions, deciding which positions to apply for, or researching visa issues. (Edit: Some organisations reimburse for time spent in work tests/trials. I got paid in 4 of the 20 application processes. I might have gotten paid in more processes if I had advanced further). At least for me, handling multiple rejections was mentally challenging. Additionally, the process may foster resentment towards the EA community. I am aware the following statement is super inaccurate and no one is literally saying that, but sometimes this is the message I felt I was getting from the EA community: “Hey you! You know, all these ideas that you had about making the world a better place, like working for Doctors without Borders? They probably aren’t that great. The long-term future is what matters. And that is not funding constrained, so earning to give is kind of off the table as well. But the good news is, we really, really need people working on these things. We are so talent constrained. (20 applications later...) Yeah, when we said that we need people, we meant capable people. Not you. You suck.” Why I think more posts like this would have been useful for me Overall, I think it would have helped me to know just how competitive jobs in the EA community (long-termism, movement building, meta-stuff) are. I think I would have been more careful in selecting the positions I applied for and I would probably have started exploring other ways to have an impactful career earlier. Or maybe I would have applied to the same positions, but with lower expectations and less of a feeling of being a total loser that will never contribute anything towards making the world a better place after being rejected once again 😊 Of course, I am just one example, and others will have different experiences. For example, I could imagine that it is easier to get hired by an EA organisation if you have work experience outside of research and hospitals (although many of the positions I applied for were in research or research-related). However, I don’t think I am a very special case.
I know several people who fulfil all of the following criteria: - They studied/are studying at postgraduate level at a highly competitive university (like Oxford) or in a highly competitive subject (like medical school) - They are within the top 5% of their course - They have impressive extracurricular activities (like leading a local EA chapter, having organised successful big events, peer-reviewed publications while studying, ...) - They are very motivated and EA aligned - They applied fo

7 minutes
  6. 2021/12/12

    EAF's ballot initiative doubled Zurich’s development aid by Jonas Vollmer

welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: EAF's ballot initiative doubled Zurich’s development aid, published by Jonas Vollmer on the effective altruism forum. Summary In 2016, the Effective Altruism Foundation (EAF), then based in Switzerland, launched a ballot initiative asking to increase the city of Zurich’s development cooperation budget and to allocate it more effectively. In 2018, we coordinated a counterproposal with the city council that preserved the main points of our original initiative and had a high chance of success. In November 2019, the counterproposal passed with a 70% majority. Zurich’s development cooperation budget will thus increase from around $3 million to around $8 million per year. The city will aim to allocate it “based on the available scientific research on effectiveness and cost-effectiveness.” This seems to be the first time that Swiss legislation on development cooperation mentions effectiveness requirements. The initiative cost around $25,000 in financial costs and around $190,000 in opportunity costs. Depending on the assumptions, it raised a present value of $20–160 million in development funding. EAs should consider launching similar initiatives in other Swiss cities and around the world. Initial proposal and signature collection In spring 2016, the Effective Altruism Foundation (EAF), then still based in Basel, Switzerland, launched a ballot initiative asking for the city of Zurich’s development cooperation budget to be increased and to be allocated more effectively. (For information on EAF’s current focus, see this article.) We chose Zurich due to its large budget and leftist/centrist majority. I published an EA Forum post introducing the initiative and a corresponding policy paper (see English translation). (Note: In the EA Forum post, I overestimated the publicity/movement-building benefits and the probability that the original proposal would pass. I overemphasized the quantitative estimates, especially the point estimates, which don’t adequately represent the uncertainty. I underestimated the success probability of a favorable counterproposal. Also, the policy paper should have had a greater focus on hits-based, policy-oriented interventions because I think these have a chance of being even more cost-effective than more “straightforward” approaches and also tend to be viewed more favorably by professionals.) We hired people and coordinated volunteers (mostly animal rights activists we had interacted with before) to collect the required 3,000 signatures (plus 20% safety margin) over six months to get a binding ballot vote. Signatures had to be collected in person in handwritten form. For city-level initiatives, people usually collect about 10 signatures per hour, and paying people to collect signatures costs about $3 per signature on average. Picture: Start of signature collection on 25 May 2016. Picture: Submission of the initiative at Zurich’s city hall on 22 November 2016. The legislation we proposed (see the appendix) focused too strongly on Randomized Controlled Trials (RCTs) and demanded too much of a budget increase (from $3 million to $87 million per year). We made these mistakes because we had internal disagreements about the proposal and did not dedicate enough time to resolving them.
This led to negative initial responses from the city council and influential charities (who thought the budget increase was too extreme, were pessimistic about the odds of success, and disliked the RCT focus), implying a ... Counterproposal As is common for Swiss ballot initiatives, the city d...

24 minutes
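For a sense of scale, here is the arithmetic implied by the collection figures quoted above (3,000 required signatures plus a 20% safety margin, roughly 10 signatures per hour, about $3 per paid signature). The "fully paid" cost is an illustrative upper bound, not a figure from the post; EAF used a mix of hired collectors and volunteers.

```python
# Back-of-the-envelope arithmetic for the signature-collection figures quoted above.

required_signatures = 3_000
safety_margin = 0.20          # extra signatures collected as a buffer
signatures_per_hour = 10      # typical city-level collection rate
cost_per_paid_signature = 3   # USD, average when paying collectors

target = round(required_signatures * (1 + safety_margin))  # 3,600 signatures
person_hours = target / signatures_per_hour                # ~360 hours of collecting
max_paid_cost = target * cost_per_paid_signature           # ~$10,800 if nobody volunteered

print(f"Target incl. margin: {target:,} signatures")
print(f"Collection effort:   ~{person_hours:,.0f} person-hours")
print(f"Cost ceiling if every signature were paid: ~${max_paid_cost:,}")
```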
  7. 2021/12/12

    Is effective altruism growing? An update on the stock of funding vs. people by Benjamin_Todd

welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Is effective altruism growing? An update on the stock of funding vs. people, published by Benjamin_Todd on the effective altruism forum. This is a cross-post from 80,000 Hours. See part 2 on the allocation across cause areas. In 2015, I argued that funding for effective altruism – especially within meta and longtermist areas – had grown faster than the number of people interested in it, and that this was likely to continue. As a result, there would be a funding ‘overhang’, creating skill bottlenecks for the roles needed to deploy this funding. A couple of years ago, I wondered if this trend was starting to reverse. There hadn’t been any new donors on the scale of Good Ventures (the main partner of Open Philanthropy), which meant that total committed funds were growing slowly, giving the number of people a chance to catch up. However, the spectacular asset returns of the last few years and the creation of FTX seem to have shifted the balance back towards funding. Now the funding overhang seems even larger in both proportional and absolute terms than in 2015. In the rest of this post, I make some rough guesses at total committed funds compared to the number of interested people, to see how the balance of funding vs. talent might have changed over time. This will also serve as an update on whether effective altruism is growing – with a focus on what I think are the two most important metrics: the stock of total committed funds, and of committed people. This analysis also made me make a small update in favour of giving now vs. investing to give later. Here’s a summary of what’s coming up: How much funding is committed to effective altruism (going forward)? Around $46 billion. How quickly have these funds grown? About 37% per year since 2015, with much of the growth concentrated in 2020–2021. How much is being donated each year? Around $420 million, which is just over 1% of committed capital, and has grown maybe about 21% per year since 2015. How many committed community members are there? About 7,400 active members and 2,600 ‘committed’ members, growing 10–20% per year over 2018–2020, and faster than that over 2015–2017. Has the funding overhang grown or shrunk? Funding seems to have grown faster than the number of people, so the overhang has grown in both proportional and absolute terms. What might be the implications for career choice? Skill bottlenecks have probably increased for people able to think of ways to spend lots of funding effectively, run big projects, and evaluate grants. To caveat, all of these figures are extremely rough, and are mainly estimated off the top of my head. I haven’t checked them with the relevant donors, so they might not endorse these estimates. However, I think they’re better than what exists currently, and thought it was important to try to give some kind of rough update on how my thinking has changed. There are likely some significant mistakes; I’d be keen to see a more thorough version of this analysis. Overall, please treat this more like notes from a podcast than a carefully researched article. Which growth metrics matter?
Broadly, the future[1] impact of effective altruism depends on the total stock of: The quantity of committed funds The number of committed people (adjusted for skills and influence) The quality of our ideas (which determine how effectively funding and labour can be turned into impact) (In economic growth models, this would be capital, labour, and productivity.) You could consider other resources like political capital, reputation, or public support as well, though we can also think of these as being a special type of labour. In this post, I’m going to focus on funding and labour. (To do an equivalent analysis for ideas, which could easily matter more, we could try to estimate whether

35 minutes
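Because this episode is essentially a list of estimates, a few lines of arithmetic make the relationships between the quoted figures easier to see. The inputs below come from the excerpt; the calculations themselves (including dividing by active members) are only an illustrative check, not part of the post.

```python
# Rough arithmetic on the headline figures quoted in the episode above.

committed_funds = 46e9      # total committed funds, USD (~2021)
annual_donations = 420e6    # deployed per year, USD
growth_rate = 0.37          # reported annual growth in committed funds since 2015
years = 6                   # 2015 -> 2021
active_members = 7_400      # 'active' community members

# Share of committed capital deployed each year.
print(f"Annual deployment rate: {annual_donations / committed_funds:.1%}")

# Stock in 2015 that the quoted growth rate would imply.
print(f"Implied 2015 stock: ~${committed_funds / (1 + growth_rate) ** years / 1e9:.0f}B")

# One way to see the 'overhang': committed funds per active community member.
print(f"Committed funds per active member: ~${committed_funds / active_members / 1e6:.1f}M")
```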
  8. 2021/12/12

    Announcing "Naming What We Can"! by GidonKadosh, EdoArad, Davidmanheim, ShayBenMoshe, sella, Guy Raveh, Asaf Ifergan

welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio. this is: Announcing "Naming What We Can"!, published by GidonKadosh, EdoArad, Davidmanheim, ShayBenMoshe, sella, Guy Raveh, Asaf Ifergan on the effective altruism forum. We hereby announce a new meta-EA institution - "Naming What We Can". Vision We believe in a world where every EA organization and any project has a beautifully crafted name. We believe in a world where great minds are free from the shackles of the agonizing need to name their own projects. Goal To name and rename every EA organization, project, thing, or person. To alleviate any suffering caused by name-selection decision paralysis. Mission Using our superior humor and language articulation prowess, we will come up with names for stuff. About us We are a bunch of revolutionaries who believe in the power of correct naming. We translated over a quintillion distinct words from English to Hebrew. Some of us have read all of Unsong. One of us even read the whole Bible. We spent countless fortnights debating the ins and outs of our own org’s title - we Name What We Can. What Do We Do? We're here for the service of the EA community. Whatever you need to rename - we can name. Although we also rename whatever we can. Even if you didn't ask. Examples As a demonstration, we will now see some examples where NWWC has a much better name than the one currently used. 80,000 Hours => 64,620 Hours. Better fits the data and more equal toward women, two important EA virtues. Charity Entrepreneurship => Charity Initiatives. (We don't know anyone who can spell entrepreneurship on their first try. Alternatively, own all of the variations: Charity Enterpeneurship, Charity Entreprenreurshrip, Charity Entrepenurship, Charity Entepenoorship, ...) Global Priorities Institute => Glomar Priorities Institute. We suggest including the dimension of time, making our globe a glome. OpenPhil => Doing Right Philanthropy. Going by Dr. Phil would give a lot more clicks. EA Israel => זולתנים יעילים בארץ הקודש. ProbablyGood => CrediblyGood. Because in EA we usually use credence rather than probability. EA Hotel => Centre for Enabling EA Learning & Research. Giving What We Can => Guilting Whoever We Can. Because people give more when they are feeling guilty about being rich. Cause Prioritization => Toby Ordering. Max Dalton => Max Delta. This represents the endless EA effort to maximize our ever-marginal utility. Will MacAskill => will McAskill. Evidently a more common use. Peter Singer & Steven Pinker should be the same person, to avoid confusion. OpenAI => ProprietaryAI. Followed by ClosedAI, UnalignedAI, MisalignedAI, and MalignantAI. FHI => Bostrom's Squad. GiveWell => Don'tGivePlayPumps. We feel that the message could be stronger this way. Doing Good Better => Doing Right Right. Electronic Arts, also known as EA, should change its name to Effective Altruism. They should also change all of their activities to Effective Altruism activities. Impact estimation Overall, we think the impact of the project will be net negative on expectation (see our Guesstimate model). That is because we think that the impact is likely to be somewhat positive, but there is a really small tail risk that we will cause the termination of the EA movement. However, as we are risk-averse we can mostly ignore high tails in our impact assessment so there is no need to worry.
Call to action As a first step, we offer our services freely here on this very post! This is done to test the fit of the EA community to us. All you need to do is to comment on this post and ask us to name or rename whatever you desire. Additionally, we hold a public recruitment process here on this very post! If you want to apply to NWWC as a member, comment on this post with a name suggestion of your choosing! Due to our current lack of diversity in our team, we particularly encourage

5 minutes
