1,659 episodes

The Nonlinear Library allows you to easily listen to top EA and rationalist content on your
podcast player. We use text-to-speech software to create an automatically updating repository of audio content from
the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

The Nonlinear Library: EA Forum
The Nonlinear Fund

    • Education

    EA - Project ideas: Epistemics by Lukas Finnveden

    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Project ideas: Epistemics, published by Lukas Finnveden on January 4, 2024 on The Effective Altruism Forum.

    This is part of a series of lists of projects. The unifying theme is that the projects are not targeted at solving alignment or engineered pandemics but still targeted at worlds where transformative AI is coming in the next 10 years or so. See here for the introductory post.

    If AI capabilities keep improving, AI could soon play a huge role in our epistemic landscape. I think we have an opportunity to affect how it's used: increasing the probability that we get great epistemic assistance and decreasing the extent to which AI is used to persuade people of false beliefs.

    Before I start listing projects, I'll discuss:

    • Why AI could matter a lot for epistemics. (Both positively and negatively.)
    • Why working on this could be urgent. (And not something we should just defer to the future.) Here, I'll separately discuss:
        • That it's important for epistemics to be great in the near term (and not just in the long run) to help us deal with all the tricky issues that will arise as AI changes the world.
        • That there may be path-dependencies that affect humanity's long-run epistemics.

    Why AI matters for epistemics

    On the positive side, here are three ways AI could substantially increase our ability to learn and agree on what's true.

    • Truth-seeking motivations. We could be far more confident that AI systems are motivated to learn and honestly report what's true than is typical for humans. (Though in some cases, this will require significant progress on alignment.) Such confidence would make it much easier and more reliable for people to outsource investigations of difficult questions.
    • Cheaper and more competent investigations. Advanced AI would make high-quality cognitive labor much cheaper, thereby enabling much more thorough and detailed investigations of important topics. Today, society has some ability to converge on questions with overwhelming evidence. AI could generate such overwhelming evidence for much more difficult topics.
    • Iteration and validation. It will be much easier to control what sort of information AI has and hasn't seen. (Compared to the difficulty of controlling what information humans have and haven't seen.) This will allow us to run systematic experiments on whether AIs are good at inferring the right answers to questions that they've never seen the answer to.

    For one, this will give supporting evidence to the above two bullet points. If AI systems systematically get the right answer to previously unseen questions, that indicates that they are indeed honestly reporting what's true without significant bias and that their extensive investigations are good at guiding them toward the truth. In addition, on questions where overwhelming evidence isn't available, it may let us experimentally establish what intuitions and heuristics are best at predicting the right answer.[1]

    On the negative side, here are three ways AI could reduce the degree to which people have accurate beliefs.

    • Super-human persuasion. If AI capabilities keep increasing, I expect AI to become significantly better than humans at persuasion. Notably, on top of high general cognitive capabilities, AI could have vastly more experience with conversation and persuasion than any human has ever had. (Via being deployed to speak with people across the world and being trained on all that data.) With very high persuasion capabilities, people's beliefs might (at least directionally) depend less on what's true and more on what AI systems' controllers want people to believe.
    • Possibility of lock-in. I think it's likely that people will adopt AI personal assistants for a great number of tasks, including helping them select and filter the information they get exposed to. While this could be crucial for defending aga...

    • 33 min
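
    The "iteration and validation" idea in the episode above suggests a concrete experimental loop: hold out questions whose verified answers an AI system has never seen, then measure how often it infers them. Below is a minimal Python sketch of such a held-out evaluation harness; ask_model, the question set, and the scoring rule are hypothetical placeholders for illustration, not anything specified in the post.

```python
import random

# Hypothetical held-out evaluation set: questions paired with verified
# answers that the model was never trained or prompted on.
HELD_OUT = [
    ("Did study X replicate?", "yes"),
    ("Which arm of trial Y showed the larger effect, A or B?", "A"),
]

def ask_model(question: str) -> str:
    """Placeholder for querying an AI system; swap in a real API call."""
    return random.choice(["yes", "no", "A", "B"])

def held_out_accuracy(pairs: list[tuple[str, str]]) -> float:
    """Fraction of held-out questions the model answers correctly."""
    hits = sum(ask_model(q).strip().lower() == a.lower() for q, a in pairs)
    return hits / len(pairs)

if __name__ == "__main__":
    # Beating chance on answers it could not have memorized is evidence
    # that the system honestly infers what's true, which is the post's point.
    print(f"Held-out accuracy: {held_out_accuracy(HELD_OUT):.2f}")
```

    In practice the interesting part is curating questions whose answers verifiably post-date the model's training data; the harness itself stays this simple.
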
    EA - Project ideas for making transformative AI go well, other than by working on alignment by Lukas Finnveden

    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Project ideas for making transformative AI go well, other than by working on alignment, published by Lukas Finnveden on January 4, 2024 on The Effective Altruism Forum.

    This is a series of posts with lists of projects that it could be valuable for someone to work on. The unifying theme is that they are projects that:

    • Would be especially valuable if transformative AI is coming in the next 10 years or so.
    • Are not primarily about controlling AI or aligning AI to human intentions.[1]

    Most of the projects would be valuable even if we were guaranteed to get aligned AI. Some of the projects would be especially valuable if we were inevitably going to get misaligned AI.

    The posts contain some discussion of how important it is to work on these topics, but not a lot. For previous discussion (especially: discussing the objection "Why not leave these issues to future AI systems?"), you can see the section How ITN are these issues? from my previous memo on some neglected topics.

    The lists are definitely not exhaustive. Failure to include an idea doesn't necessarily mean I wouldn't like it. (Similarly, although I've made some attempts to link to previous writings when appropriate, I'm sure to have missed a lot of good previous content.)

    There's a lot of variation in how sketched out the projects are. Most of the projects just have some informal notes and would require more thought before someone could start executing. If you're potentially interested in working on any of them and you could benefit from more discussion, I'd be excited if you reached out to me![2]

    There's also a lot of variation in skills needed for the projects. If you're looking for projects that are especially suited to your talents, you can search the posts for any of the following tags (including brackets): [ML] [Empirical research] [Philosophical/conceptual] [survey/interview] [Advocacy] [Governance] [Writing] [Forecasting]

    The projects are organized into the following categories (which are in separate posts). Feel free to skip to whatever you're most interested in.

    Governance during explosive technological growth

    It's plausible that AI will lead to explosive economic and technological growth. Our current methods of governance can barely keep up with today's technological advances. Speeding up the rate of technological growth by 30x+ would cause huge problems and could lead to rapid, destabilizing changes in power.

    This section is about trying to prepare the world for this: either generating policy solutions to problems we expect to appear, or addressing the meta-level problem about how we can coordinate to tackle this in a better and less rushed manner. A favorite direction is to develop Norms/proposals for how states and labs should act under the possibility of an intelligence explosion.

    Epistemics

    This is about helping humanity get better at reaching correct and well-considered beliefs on important issues. If AI capabilities keep improving, AI could soon play a huge role in our epistemic landscape. I think we have an opportunity to affect how it's used: increasing the probability that we get great epistemic assistance and decreasing the extent to which AI is used to persuade people of false beliefs. A couple of favorite projects are: Create an organization that gets started with using AI for investigating important questions, or Develop & advocate for legislation against bad persuasion.

    Sentience and rights of digital minds

    It's plausible that there will soon be digital minds that are sentient and deserving of rights. This raises several important issues that we don't know how to deal with. It seems tractable both to make progress in understanding these issues and in implementing policies that reflect this understanding. A favorite direction is to take existing ideas for what labs could be doing and spell ou...

    • 5 min
    EA - Apply now to CE's second Research Training Program by CE

    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply now to CE's second Research Training Program, published by CE on January 3, 2024 on The Effective Altruism Forum.

    What we have learned from our pilot and when our next program is happening

    TL;DR: We are excited to announce the second round of our Research Training Program. This online program is designed to equip participants with the tools and skills needed to identify, compare, and recommend the most effective charities, interventions, and organisations. It is a full-time (35 hours per week), fully cost-covered program that will run remotely for 12 weeks.

    [APPLY HERE]

    Deadline for application: January 28, 2024. The program dates are April 15 - July 5, 2024. If you are progressing to the last stage of the application process, you will receive a final decision by the 15th of March at the latest. Please let us know if you need a decision before that date.

    What have we learned from our pilot?

    The theory of change for the Research Training Program has three outputs: helping train people to switch into impactful research positions, creating intervention reports that influence CE's decisions of which new organisations to start, and creating evaluations that help organisations have the most impact and funders to make impact-maximising decisions. We have outlined what we have learned about each of these aspects below:

    Intervention reports: For eight out of the eleven weeks, the sixteen research fellows investigated fifteen cause areas, created forty-six shallow reviews, and wrote twenty-two deep dives, five of which have already been published on the EA forum (find them here). Although we are planning some changes to improve the fellows' experience in the program, we are deeply impressed by these results and look forward to replicating them with a slightly different approach.

    People: Since the program ended only a couple of weeks ago, it is too early to tell what career switches will happen because of the program. We have some early and very promising results, with two research fellows already having made career changes that we consider highly impactful. If you are currently hiring and are interested in people with intervention prioritisation skills applying, please contact us.

    Charity evaluations: Traditionally, Charity Entrepreneurship has focused most of its research on investigating the potential impact of interventions. We believe that more impact-focused accountability is essential for the sector, and we would like to support the evaluated organisations and funders in making more informed decisions. This is why, at the end of the program, the research fellows focused on writing charity evaluations in group projects. We were too confident in our timelines and are planning a major restructuring of this part of the program. However, we are happy that three evaluations could be shared directly with the evaluated organisations. We are looking forward to learning from other evaluators in the space.

    What will the next program look like?

    Content: The program will start with a week providing an overview of the most important research skills. The program's first part will then focus on writing cause area reports in groups, in which fellows take a problem and identify the most promising solutions. Afterwards, the fellows investigate those most promising ideas through a shallow review. After conducting a shallow review, research fellows will evaluate the most promising interventions through a deep dive, which will be polished and published, and influence decision-making within Charity Entrepreneurship and beyond. After these reports are published, there will be some time to think about careers and apply to different opportunities, before jumping into some charity evaluations that can influence the decisions of funders as well as strategic decisions within the evaluated o...

    • 6 min
    EA - My Experience Donating Blood Stem Cells, or: Why You Should Join a Bone Marrow Registry by Silas Strawn

    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My Experience Donating Blood Stem Cells, or: Why You Should Join a Bone Marrow Registry, published by Silas Strawn on January 3, 2024 on The Effective Altruism Forum.

    Note: I'm not a doctor. Please don't make decisions about your health based on an EA forum post before at least talking with a physician or other licensed healthcare practitioner.

    TLDR: I donated blood stem cells in early 2021. Immediately prior, I had been identified as the best match for someone in need of a bone marrow transplant, likely with leukemia, lymphoma, or a similar condition. Although the first attempt to collect my blood stem cells failed, my experience was overwhelmingly positive as well as fulfilling on a personal level. The foundation running the donation took pains to make it as convenient as possible - and free, other than my time. I recovered quickly and have had no long-term issues related to the donation[1]. I would encourage everyone to at least do the cheek swab to join the registry if they are able. Use this page to join the Be The Match registry.

    This post was prompted - very belatedly - by a comment from "demost_" on Scott Alexander's post about his experience donating a kidney[2]. The commenter was speculating about the differences between bone marrow donation and kidney donation[3]. I'm typically a lurker, but I figured this is a case where I actually do have something to say[4]. According to demost_, fewer than 1% of those on the bone marrow registry get matched, so my experience is relatively rare.

    I checked and couldn't find any other forum posts about being a blood stem cell or bone marrow donor. I hope to shine a light on what the experience is like as a donor. I know EAs are supposed to be motivated by cold, hard facts and rationality, and so this post may stick out since it's recounting a personal experience[5]. Nevertheless, given how close-to-home matters of health are, I figured this could be useful for those considering joining the registry or donating.

    My Donation Experience

    I joined the registry toward the end of my college years. I don't recall the exact details, but I've pieced together the timeline from my email archives. Be The Match got my cheek swab sample in December 2019 and I officially joined the registry in January 2020. If you're a university student (at least in America[6]), there's a good chance that at some point there will be a table in your commons or quad where volunteers will be offering cheek swabs to join the bone marrow donor registry. The whole process takes a few minutes, and I'd encourage everyone to at least join the registry if they can.

    In mid-December 2020, I was matched and started the donation process. For the sake of privacy, they don't tell you anything about the recipient at that point beyond the vaguest possible demographic info. I think they told me the gender and an age range, but nothing besides.

    demost_ supposed that would-be donors should be more moved to donate bone marrow than kidneys, since there's a particular, identifiable person in need (and marrow is much more difficult to match, so you're less replaceable as a donor). I can personally attest to this. Even though I didn't know much about the recipient at all, I felt an extreme moral obligation to see the process through. I knew that my choice to donate could make a massive difference to this person. I imagined how I would feel if it were a friend or loved one in need, or even myself. The minor inconveniences of donating felt doubly minor next to the weight of someone's life.

    As a college student, I had a fluid schedule. I was also fortunate that my distributed systems professor was happy to let me defer an exam scheduled for the donation date. To their credit, Be The Match offered not only to compensate any costs associated with the donation, but also to replace any wages missed...

    • 12 min
    EA - Why EA should (probably) fund ceramic water filters by Bernardo Baron

    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why EA should (probably) fund ceramic water filters, published by Bernardo Baron on January 3, 2024 on The Effective Altruism Forum.

    Epistemic status: after researching for more than 80 hours each, we are moderately certain that ceramic filters (CFs) can be more cost-effective than chlorination to prevent waterborne diseases at least in some - and possibly in many - LMICs. We are less certain of the real size of the effects from CFs, and how some factors like household sizes affect the final cost-effectiveness.

    At least 1.7 billion people globally used drinking water sources contaminated with feces in 2022, leading to significant health risks from waterborne enteric infections. According to the Global Burden of Disease (GBD) 2019 study, more than 2.5% of total DALYs lost that year were linked to unsafe water consumption - and there is some evidence that this burden may be even bigger. This makes improving access to clean water a particularly pressing problem in the Global Health and Development area.

    As a contribution to targeting this problem, we have put together a report on ceramic water filters as a potential intervention to improve access to safe water in low- and middle-income countries. This was written during our time as research fellows at Charity Entrepreneurship's Research Training Program (Fall 2023). In this post, we summarize the main findings of the report. Nonetheless, we invite people interested in the subject to check out the full report, which provides much more detail on each topic we outline here.

    Key takeaways:

    • There are several (controlled, peer-reviewed) studies that link the distribution of ceramic filters to less frequent episodes of diarrhea in LMICs. Those studies have been systematically reviewed and graded low to medium quality.
    • Existing evidence supports the hypothesis that ceramic filters are even more effective than chlorination at reducing diarrhea episodes. However, percentage reductions here should be taken with a grain of salt due to lack of masking and to self-report and publication biases.
    • Despite limitations in the current evidence, we are cautiously optimistic that ceramic filters can be more cost-effective than chlorination, especially in countries where diarrheal diseases are primarily caused by bacteria and protozoa (and not by viruses). Average household sizes can also play a role, but we are less certain about the extent to which this is true.
    • We provide a Geographic Weighted Factor Model and a country-specific back-of-the-envelope analysis of the cost-effectiveness for a hypothetical charity that wants to distribute free ceramic filters in LMICs. Our central scenario for the cost-effectiveness of the intervention in the top prioritized country (Nigeria) is $8.47 per DALY averted.
    • We ultimately recommend that EA donors and meta-organizations invest at least some resources in the distribution of ceramic filters, either by bringing up new charities in this area, or by supporting existing, non-EA organizations that already have lots of expertise in how to manufacture, distribute, and monitor the usage of the filters.

    Why ceramic filters?

    There are plenty of methods to provide access to safe(r) water in very low-resource settings. Each one of those has some pros and cons, but ceramic filters stand out for being cheap to make, easy to install and operate, effective at improving health, and durable (they are said to last for a minimum of 2 years).

    In short, a ceramic filter is a combination of a porous ceramic element and a receptacle for the filtered water (usually made of plastic). Water is manually put into the ceramic part and flows through its pores due to gravity. Since the pores are very small, they let water pass, but physically block bigger particles - including bacteria, protozoa, and sediments - from passing...

    • 21 min
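
    The headline figure in the episode above ($8.47 per DALY averted in Nigeria) comes from a country-specific back-of-the-envelope analysis. As a rough illustration of how such a BOTEC hangs together, here is a sketch with entirely invented inputs; none of these numbers are from the report, and they will not reproduce its bottom line.

```python
# Toy back-of-the-envelope: dollars per DALY averted from distributing
# free ceramic filters. Every input is an invented placeholder; only the
# structure (total cost / total DALYs averted) mirrors the report's approach.
COST_PER_FILTER_USD = 15.0      # assumed manufacture + distribution cost
FILTER_LIFESPAN_YEARS = 2.0     # report: filters last a minimum of ~2 years
HOUSEHOLD_SIZE = 5.0            # assumed people served per filter
BURDEN_PER_PERSON_YEAR = 0.02   # assumed waterborne DALY burden per person-year
EFFECTIVENESS = 0.4             # assumed fractional reduction in that burden
USAGE_RATE = 0.7                # assumed share of households using filters properly

def cost_per_daly_averted() -> float:
    # Person-years of protection delivered by one filter over its lifespan.
    person_years = HOUSEHOLD_SIZE * FILTER_LIFESPAN_YEARS
    dalys_averted = person_years * BURDEN_PER_PERSON_YEAR * EFFECTIVENESS * USAGE_RATE
    return COST_PER_FILTER_USD / dalys_averted

print(f"~${cost_per_daly_averted():.0f} per DALY averted (toy inputs only)")
```

    The actual report layers country-specific factors such as pathogen mix and household size on top of this arithmetic, which is presumably the role of its Geographic Weighted Factor Model.
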
    EA - How We Plan to Approach Uncertainty in Our Cost-Effectiveness Models by GiveWell

    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How We Plan to Approach Uncertainty in Our Cost-Effectiveness Models, published by GiveWell on January 3, 2024 on The Effective Altruism Forum.

    Author: Adam Salisbury, Senior Research Associate

    Summary

    In a nutshell

    We've received criticism from multiple sources that we should model uncertainty more explicitly in our cost-effectiveness analyses. These critics argue that modeling uncertainty, via Monte Carlos or other approaches, would keep us from being fooled by the optimizer's curse[1] and have other benefits.

    Our takeaways:

    • We think we're mostly addressing the optimizer's curse already by skeptically adjusting key model inputs, rather than taking data at face value. However, that's not always true, and we plan to take steps to ensure we're doing this more consistently.
    • We also plan to make sensitivity checks on our parameters and on bottom-line cost-effectiveness a more routine part of our research. We think this will help surface potential errors in our models and have other transparency and diagnostics benefits.
    • Stepping back, we think taking uncertainty more seriously in our work means considering perspectives beyond our model, rather than investing more in modeling. This includes factoring in external sources of evidence and sense checks, expert opinion, historical track records, and qualitative features of organizations.

    Ways we could be wrong:

    • We don't know if our parameter adjustments and approach to addressing the optimizer's curse are correct. Answering this question would require comparing our best guesses to "true" values for parameters, which we typically don't observe.
    • Though we think there are good reasons to consider outside-the-model perspectives, we don't have a fully formed view of how to bring qualitative arguments to bear across programs in a consistent way. We expect to consider this further as a team.

    What is the criticism we've received?

    In our cost-effectiveness analyses, we typically do not publish uncertainty analyses that show how sensitive our models are to specific parameters, or uncertainty ranges on our bottom-line cost-effectiveness estimates. We've received multiple critiques of this approach:

    • Noah Haber argues that, by not modeling uncertainty explicitly, we are subject to the optimizer's curse. If we take noisy effect sizes, burden, or cost estimates at face value, then the programs that make it over our cost-effectiveness threshold will be those that got lucky draws. In aggregate, this would make us biased toward more uncertain programs. To remedy this, he recommends that (i) we quantify uncertainty in our models by specifying distributions on key parameters and then running Monte Carlo simulations, and (ii) we base decisions on a lower bound of the distribution (e.g., the 20th percentile).
    • Others[2] have argued we're missing out on other benefits that come from specifying uncertainty. By not specifying uncertainty on key parameters or bottom-line cost-effectiveness, we may be missing opportunities to prioritize research on the parameters to which our model is most sensitive and to be fully transparent about how uncertain our estimates are.

    What do we think about this criticism?

    We think we're mostly guarding against the optimizer's curse by skeptically adjusting key inputs in our models, but we have some room for improvement. The optimizer's curse would be a big problem if we, e.g., took effect sizes from study abstracts or charity costs at face value, plugged them into our models, and then just funded programs that penciled above our cost-effectiveness bar. We don't think we're doing this. For example, in our vitamin A supplementation cost-effectiveness analysis (CEA), we apply skeptical adjustments to treatment effects to bring them closer to what we consider plausible. In our CEAs more broadly, we triangulate our cost e...

    • 46 min
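
    To make the critics' proposal in the episode above concrete: specify distributions over the key parameters, propagate them with a Monte Carlo simulation, and rank programs on a conservative percentile rather than the point estimate. The sketch below uses invented distributions and numbers purely for illustration; it is not GiveWell's model or any critic's actual code.

```python
import random
import statistics

random.seed(0)
N = 10_000  # number of Monte Carlo draws

def simulate_dalys_per_1000_usd() -> float:
    """One Monte Carlo draw of cost-effectiveness under toy distributions."""
    effect = max(random.gauss(0.30, 0.10), 0.01)    # uncertain effect size
    burden = max(random.gauss(0.02, 0.005), 0.001)  # uncertain DALY burden per person
    cost = random.lognormvariate(1.0, 0.3)          # uncertain cost per person, USD
    return 1000 * effect * burden / cost

draws = sorted(simulate_dalys_per_1000_usd() for _ in range(N))
point = statistics.median(draws)
p20 = draws[int(0.20 * N)]  # lower bound the critique suggests deciding on

# Ranking programs on the 20th percentile instead of the central estimate
# penalizes noisy estimates, which is how this guards against funding
# programs that merely got lucky draws (the optimizer's curse).
print(f"median: {point:.2f} DALYs/$1000, 20th percentile: {p20:.2f} DALYs/$1000")
```

    GiveWell's stated alternative, skeptically shrinking each noisy input toward what it considers plausible before the input enters the model, targets the same failure mode without running a full simulation.
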
