80,000 Hours Podcast

Rob, Luisa, and the 80,000 Hours team

Unusually in-depth conversations about the world's most pressing problems and what you can do to solve them. Subscribe by searching for '80000 Hours' wherever you get podcasts. Hosted by Rob Wiblin and Luisa Rodriguez.

  1. If digital minds could suffer, how would we ever know? (Article)

    1 day ago


    “I want everyone to understand that I am, in fact, a person.” Those words were produced by the AI model LaMDA as a reply to Blake Lemoine in 2022. Based on the Google engineer’s interactions with the model as it was under development, Lemoine became convinced it was sentient and worthy of moral consideration — and decided to tell the world.

    Few experts in machine learning, philosophy of mind, or other relevant fields have agreed. And for our part at 80,000 Hours, we don’t think it’s very likely that large language models like LaMDA are sentient — that is, we don’t think they can have good or bad experiences — in a significant way.

    But we think you can’t dismiss the issue of the moral status of digital minds, regardless of your beliefs about the question. There are major errors we could make in at least two directions:

    - We may create many, many AI systems in the future. If these systems are sentient, or otherwise have moral status, it would be important for humanity to consider their welfare and interests.
    - It’s possible the AI systems we will create can’t or won’t have moral status. Then it could be a huge mistake to worry about the welfare of digital minds, and doing so might contribute to an AI-related catastrophe.

    And we’re currently unprepared to face this challenge. We don’t have good methods for assessing the moral status of AI systems. We don’t know what to do if millions of people or more believe, like Lemoine, that the chatbots they talk to have internal experiences and feelings of their own. We don’t know whether efforts to control AI may lead to extreme suffering.

    We believe this is a pressing world problem. It’s hard to know what to do about it or how good the opportunities to work on it are likely to be. But there are some promising approaches. We propose building a field of research to understand digital minds, so we’ll be better able to navigate these potentially massive issues if and when they arise.

    This article narration by the author (Cody Fenwick) explains in more detail why we think this is a pressing problem, what we think can be done about it, and how you might pursue this work in your career. We also discuss a series of possible objections to thinking this is a pressing world problem.

    You can read the full article, Understanding the moral status of digital minds, on the 80,000 Hours website.

    Chapters:
    Introduction (00:00:00)
    Understanding the moral status of digital minds (00:00:58)
    Summary (00:03:31)
    Our overall view (00:04:22)
    Why might understanding the moral status of digital minds be an especially pressing problem? (00:05:59)
    Clearing up common misconceptions (00:12:16)
    Creating digital minds could go very badly — or very well (00:14:13)
    Dangers for digital minds (00:14:41)
    Dangers for humans (00:16:13)
    Other dangers (00:17:42)
    Things could also go well (00:18:32)
    We don't know how to assess the moral status of AI systems (00:19:49)
    There are many possible characteristics that give rise to moral status: consciousness, sentience, agency, and personhood (00:21:39)
    Many plausible theories of consciousness could include digital minds (00:24:16)
    The strongest case for the possibility of sentient digital minds: whole brain emulation (00:28:55)
    We can't rely on what AI systems tell us about themselves: behavioural tests, theory-based analysis, animal analogue comparisons, brain-AI interfacing (00:32:00)
    The scale of this issue might be enormous (00:36:08)
    Work on this problem is neglected but seems tractable: impact-guided research, technical approaches, and policy approaches (00:43:35)
    Summing up so far (00:52:22)
    Arguments against the moral status of digital minds as a pressing problem (00:53:25)
    Two key cruxes (00:53:31)
    Maybe this problem is intractable (00:54:16)
    Maybe this issue will be solved by default (00:58:19)
    Isn't risk from AI more important than the risks to AIs? (01:00:45)
    Maybe current AI progress will stall (01:02:36)
    Isn't this just too crazy? (01:03:54)
    What can you do to help? (01:05:10)
    Important considerations if you work on this problem (01:13:00)

    1 hr 15 min
  2. #132 Classic episode – Nova DasSarma on why information security may be critical to the safe development of AI systems

    5 days ago


    If a business has spent $100 million developing a product, it’s a fair bet that they don’t want it stolen in two seconds and uploaded to the web where anyone can use it for free.

    This problem exists in extreme form for AI companies. These days, the electricity and equipment required to train cutting-edge machine learning models that generate uncanny human text and images can cost tens or hundreds of millions of dollars. But once trained, such models may be only a few gigabytes in size and run just fine on ordinary laptops.

    Today’s guest, the computer scientist and polymath Nova DasSarma, works on computer and information security at the AI company Anthropic. One of her jobs is to stop hackers exfiltrating Anthropic’s incredibly expensive intellectual property, as recently happened to Nvidia.

    Rebroadcast: this episode was originally released in June 2022.

    Links to learn more, highlights, and full transcript.

    As she explains, given models’ small size, the need to store such models on internet-connected servers, and the poor state of computer security in general, this is a serious challenge.

    The worries aren’t purely commercial though. This problem looms especially large for the growing number of people who expect that in coming decades we’ll develop so-called artificial ‘general’ intelligence systems that can learn and apply a wide range of skills all at once, and thereby have a transformative effect on society.

    If aligned with the goals of their owners, such general AI models could operate like a team of super-skilled assistants, going out and doing whatever wonderful (or malicious) things are asked of them. This might represent a huge leap forward for humanity, though the transition to a very different new economy and power structure would have to be handled delicately.

    If unaligned with the goals of their owners or humanity as a whole, such broadly capable models would naturally ‘go rogue,’ breaking their way into additional computer systems to grab more computing power — all the better to pursue their goals and make sure they can’t be shut off.

    As Nova explains, in either case, we don’t want such models disseminated all over the world before we’ve confirmed they are deeply safe and law-abiding, and have figured out how to integrate them peacefully into society. In the first scenario, premature mass deployment would be risky and destabilising. In the second scenario, it could be catastrophic — perhaps even leading to human extinction if such general AI systems turn out to be able to self-improve rapidly rather than slowly, something we can only speculate on at this point.

    If highly capable general AI systems are coming in the next 10 or 20 years, Nova may be flying below the radar with one of the most important jobs in the world.

    We’ll soon need the ability to ‘sandbox’ (i.e. contain) models with a wide range of superhuman capabilities, including the ability to learn new skills, for a period of careful testing and limited deployment — preventing the model from breaking out, and criminals from breaking in. Nova and her colleagues are trying to figure out how to do this, but as this episode reveals, even the state of the art is nowhere near good enough.

    Chapters:
    Cold open (00:00:00)
    Rob's intro (00:00:52)
    The interview begins (00:02:44)
    Why computer security matters for AI safety (00:07:39)
    State of the art in information security (00:17:21)
    The hack of Nvidia (00:26:50)
    The most secure systems that exist (00:36:27)
    Formal verification (00:48:03)
    How organisations can protect against hacks (00:54:18)
    Is ML making security better or worse? (00:58:11)
    Motivated 14-year-old hackers (01:01:08)
    Disincentivising actors from attacking in the first place (01:05:48)
    Hofvarpnir Studios (01:12:40)
    Capabilities vs safety (01:19:47)
    Interesting design choices with big ML models (01:28:44)
    Nova’s work and how she got into it (01:45:21)
    Anthropic and career advice (02:05:52)
    $600M Ethereum hack (02:18:37)
    Personal computer security advice (02:23:06)
    LastPass (02:31:04)
    Stuxnet (02:38:07)
    Rob's outro (02:40:18)

    Producer: Keiran Harris
    Audio mastering: Ben Cordell and Beppe Rådvik
    Transcriptions: Katy Moore

    2 hr 41 min
  3. #138 Classic episode – Sharon Hewitt Rawlette on why pleasure and pain are the only things that intrinsically matter

    22 Jan


    What in the world is intrinsically good — good in itself even if it has no other effects? Over the millennia, people have offered many answers: joy, justice, equality, accomplishment, loving god, wisdom, and plenty more.

    The question is a classic that makes for great dorm-room philosophy discussion. But it’s hardly just of academic interest. The issue of what (if anything) is intrinsically valuable bears on every action we take, whether we’re looking to improve our own lives, or to help others. The wrong answer might lead us to the wrong project and render our efforts to improve the world entirely ineffective.

    Today’s guest, Sharon Hewitt Rawlette — philosopher and author of The Feeling of Value: Moral Realism Grounded in Phenomenal Consciousness — wants to resuscitate an answer to this question that is as old as philosophy itself.

    Rebroadcast: this episode was originally released in September 2022.

    Links to learn more, highlights, and full transcript.

    That idea, in a nutshell, is that there is only one thing of true intrinsic value: positive feelings and sensations. And similarly, there is only one thing that is intrinsically of negative value: suffering, pain, and other unpleasant sensations. Lots of other things are valuable too: friendship, fairness, loyalty, integrity, wealth, patience, houses, and so on. But they are only instrumentally valuable — that is to say, they’re valuable as means to the end of ensuring that all conscious beings experience more pleasure and other positive sensations, and less suffering.

    As Sharon notes, from Athens in 400 BC to Britain in 1850, the idea that only subjective experiences can be good or bad in themselves — a position known as ‘philosophical hedonism’ — has been one of the most enduringly popular ideas in ethics. And few will be taken aback by the notion that, all else equal, more pleasure is good and less suffering is bad. But can they really be the only intrinsically valuable things?

    Over the 20th century, philosophical hedonism became increasingly controversial in the face of some seemingly very counterintuitive implications. For this reason the famous philosopher of mind Thomas Nagel called The Feeling of Value “a radical and important philosophical contribution.”

    So what convinces Sharon that philosophical hedonism deserves another go? In today’s interview with host Rob Wiblin, Sharon explains the case for a theory of value grounded in subjective experiences, and why she believes the objections that made hedonism controversial are misguided. A philosophical hedonist shouldn’t get in an experience machine, nor override an individual’s autonomy, except in situations so different from the classic thought experiments that it no longer seems strange they would do so.

    Chapters:
    Cold open (00:00:00)
    Rob’s intro (00:00:41)
    The interview begins (00:04:27)
    Metaethics (00:05:58)
    Anti-realism (00:12:21)
    Sharon's theory of moral realism (00:17:59)
    The history of hedonism (00:24:53)
    Intrinsic value vs instrumental value (00:30:31)
    Egoistic hedonism (00:38:12)
    Single axis of value (00:44:01)
    Key objections to Sharon’s brand of hedonism (00:58:00)
    The experience machine (01:07:50)
    Robot spouses (01:24:11)
    Most common misunderstanding of Sharon’s view (01:28:52)
    How might a hedonist actually live (01:39:28)
    The organ transplant case (01:55:16)
    Counterintuitive implications of hedonistic utilitarianism (02:05:22)
    How could we discover moral facts? (02:19:47)
    Rob’s outro (02:24:44)

    Producer: Keiran Harris
    Audio mastering: Ryan Kessler
    Transcriptions: Katy Moore

    2 hr 26 min
  4. #134 Classic episode – Ian Morris on what big-picture history teaches us

    15 Jan


    Wind back 1,000 years and the moral landscape looks very different to today. Most farming societies thought slavery was natural and unobjectionable, premarital sex was an abomination, women should obey their husbands, and commoners should obey their monarchs.

    Wind back 10,000 years and things look very different again. Most hunter-gatherer groups thought men who got too big for their britches needed to be put in their place rather than obeyed, and lifelong monogamy could hardly be expected of men or women.

    Why such big systematic changes — and why these changes specifically? That's the question bestselling historian Ian Morris takes up in his book, Foragers, Farmers, and Fossil Fuels: How Human Values Evolve. Ian has spent his academic life studying long-term history, trying to explain the big-picture changes that play out over hundreds or thousands of years.

    Rebroadcast: this episode was originally released in July 2022.

    Links to learn more, highlights, and full transcript.

    There are a number of possible explanations one could offer for the wide-ranging shifts in opinion on the 'right' way to live. Maybe the natural sciences progressed and people realised their previous ideas were mistaken? Perhaps a few persuasive advocates turned the course of history with their revolutionary arguments? Maybe everyone just got nicer?

    In Foragers, Farmers and Fossil Fuels Ian presents a provocative alternative: human culture gradually evolves towards whatever system of organisation allows a society to harvest the most energy, and we then conclude that system is the most virtuous one. Egalitarian values helped hunter-gatherers hunt and gather effectively. Once farming was developed, hierarchy proved to be the social structure that produced the most grain (and best repelled nomadic raiders). And in the modern era, democracy and individuality have proven to be more productive ways to collect and exploit fossil fuels.

    On this theory, it's technology that drives moral values much more than moral philosophy. Individuals can try to persist with deeply held values that limit economic growth, but they risk being rendered irrelevant as more productive peers in their own society accrue wealth and power. And societies that fail to move with the times risk being conquered by more pragmatic neighbours that adapt to new technologies and grow in population and military strength.

    There are many objections one could raise to this theory, many of which we put to Ian in this interview. But the question is a highly consequential one: if we want to guess what goals our descendants will pursue hundreds of years from now, it would be helpful to have a theory for why our ancestors mostly thought one thing, while we mostly think another.

    Big though it is, the driver of human values is only one of several major questions Ian has tackled through his career. In this classic episode, we discuss all of Ian's major books.

    Chapters:
    Rob's intro (00:00:53)
    The interview begins (00:02:30)
    Geography is Destiny (00:03:38)
    Why the West Rules—For Now (00:12:04)
    War! What is it Good For? (00:28:19)
    Expectations for the future (00:40:22)
    Foragers, Farmers, and Fossil Fuels (00:53:53)
    Historical methodology (01:03:14)
    Falsifiable alternative theories (01:15:59)
    Archaeology (01:22:56)
    Energy extraction technology as a key driver of human values (01:37:43)
    Allowing people to debate about values (02:00:16)
    Can productive wars still occur? (02:13:28)
    Where is history contingent and where isn’t it? (02:30:23)
    How Ian thinks about the future (03:13:33)
    Macrohistory myths (03:29:51)
    Ian’s favourite archaeology memory (03:33:19)
    The most unfair criticism Ian’s ever received (03:35:17)
    Rob's outro (03:39:55)

    Producer: Keiran Harris
    Audio mastering: Ben Cordell
    Transcriptions: Katy Moore

    3 hr 41 min
  5. #140 Classic episode – Bear Braumoeller on the case that war isn’t in decline

    8 Jan


    Is war in long-term decline? Steven Pinker's The Better Angels of Our Nature brought this previously obscure academic question to the centre of public debate, and pointed to rates of death in war to argue energetically that war is on the way out. But that idea divides war scholars and statisticians, and so Better Angels has prompted a spirited debate, with datasets and statistical analyses exchanged back and forth year after year. The lack of consensus has left a somewhat bewildered public (including host Rob Wiblin) unsure quite what to believe.

    Today's guest, professor of political science Bear Braumoeller, is one of the scholars who believes we lack convincing evidence that warlikeness is in long-term decline. He collected the analysis that led him to that conclusion in his 2019 book, Only the Dead: The Persistence of War in the Modern Age.

    Rebroadcast: this episode was originally released in November 2022.

    Links to learn more, highlights, and full transcript.

    The question is of great practical importance. The US and PRC are entering a period of renewed great power competition, with Taiwan as a potential trigger for war, and Russia is once more invading and attempting to annex the territory of its neighbours. If war has been going out of fashion since the start of the Enlightenment, we might console ourselves that however nerve-wracking these present circumstances may feel, modern culture will throw up powerful barriers to another world war. But if we're as war-prone as we ever have been, one need only inspect the record of the 20th century to recoil in horror at what might await us in the 21st.

    Bear argues that the second reaction is the appropriate one. The world has gone up in flames many times through history, with roughly 0.5% of the population dying in the Napoleonic Wars, 1% in World War I, 3% in World War II, and perhaps 10% during the Mongol conquests. And with no reason to think similar catastrophes are any less likely today, complacency could lead us to sleepwalk into disaster.

    He gets to this conclusion primarily by analysing the datasets of the decades-old Correlates of War project, which aspires to track all interstate conflicts and battlefield deaths since 1815. In Only the Dead, he chops up and inspects this data dozens of different ways, to test if there are any shifts over time which seem larger than what could be explained by chance variation alone (a toy sketch of this kind of test appears just below).

    In a nutshell, Bear simply finds no general trend in either direction from 1815 through today. It seems like, as philosopher George Santayana lamented in 1922, "only the dead have seen the end of war."
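    For readers who want to make "shifts larger than chance variation" concrete, here is a minimal Python sketch of a two-sample permutation test, one simple way to formalise that idea. It is purely illustrative and not from Bear's book or the Correlates of War data: the function name, the 1950 split point, and the per-decade death-rate figures are all invented for this example.

        import random

        def permutation_test(before, after, n_perm=10_000, seed=0):
            """Fraction of random label shuffles whose mean difference is at
            least as large as the observed one (an approximate p-value)."""
            rng = random.Random(seed)
            observed = abs(sum(after) / len(after) - sum(before) / len(before))
            pooled = list(before) + list(after)
            n_before = len(before)
            hits = 0
            for _ in range(n_perm):
                rng.shuffle(pooled)  # break any real link between era and severity
                b, a = pooled[:n_before], pooled[n_before:]
                if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
                    hits += 1
            return hits / n_perm

        # Hypothetical war-death rates per decade, split at a candidate change point:
        pre_1950 = [3.1, 0.4, 9.8, 0.7, 2.2, 5.5]
        post_1950 = [1.9, 0.3, 4.1, 0.6, 1.2, 0.9]
        print(permutation_test(pre_1950, post_1950))

    A large output means the apparent difference between eras could easily be chance, which is broadly the kind of null result described above.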
    In today's conversation, Bear and Rob discuss all of the above in more detail than even a usual 80,000 Hours podcast episode, as well as:

    - Why haven't modern ideas about the immorality of violence led to the decline of war, when it's such a natural thing to expect?
    - What would Bear's critics say in response to all this?
    - What do the optimists get right?
    - How does one do proper statistical tests for events that are clumped together, like war deaths?
    - Why are deaths in war so concentrated in a handful of the most extreme events?
    - Did the ideas of the Enlightenment promote nonviolence, on balance?
    - Were early states more or less violent than groups of hunter-gatherers?
    - If Bear is right, what can be done?
    - How did the 'Concert of Europe' or 'Bismarckian system' maintain peace in the 19th century?
    - Which wars are remarkable but largely unknown?

    Chapters:
    Cold open (00:00:00)
    Rob's intro (00:01:01)
    The interview begins (00:05:37)
    Only the Dead (00:08:33)
    The Enlightenment (00:18:50)
    Democratic peace theory (00:28:26)
    Is religion a key driver of war? (00:31:32)
    International orders (00:35:14)
    The Concert of Europe (00:44:21)
    The Bismarckian system (00:55:49)
    The current international order (01:00:22)
    The Better Angels of Our Nature (01:19:36)
    War datasets (01:34:09)
    Seeing patterns in data where none exist (01:47:38)
    Change-point analysis (01:51:39)
    Rates of violent death throughout history (01:56:39)
    War initiation (02:05:02)
    Escalation (02:20:03)
    Getting massively different results from the same data (02:30:45)
    How worried we should be (02:36:13)
    Most likely ways Only the Dead is wrong (02:38:31)
    Astonishing smaller wars (02:42:45)
    Rob’s outro (02:47:13)

    Producer: Keiran Harris
    Audio mastering: Ryan Kessler
    Transcriptions: Katy Moore

    2 hr 48 min
  6. 2024 Highlightapalooza! (The best of The 80,000 Hours Podcast this year)

    27 Dec 2024


    "A shameless recycling of existing content to drive additional audience engagement on the cheap… or the single best, most valuable, and most insight-dense episode we put out in the entire year, depending on how you want to look at it." — Rob Wiblin It’s that magical time of year once again — highlightapalooza! Stick around for one top bit from each episode, including: How to use the microphone on someone’s mobile phone to figure out what password they’re typing into their laptopWhy mercilessly driving the New World screwworm to extinction could be the most compassionate thing humanity has ever doneWhy evolutionary psychology doesn’t support a cynical view of human nature but actually explains why so many of us are intensely sensitive to the harms we cause to othersHow superforecasters and domain experts seem to disagree so much about AI risk, but when you zoom in it’s mostly a disagreement about timingWhy the sceptics are wrong and you will want to use robot nannies to take care of your kids — and also why despite having big worries about the development of AGI, Carl Shulman is strongly against efforts to pause AI research todayHow much of the gender pay gap is due to direct pay discrimination vs other factorsHow cleaner wrasse fish blow the mirror test out of the waterWhy effective altruism may be too big a tent to work wellHow we could best motivate pharma companies to test existing drugs to see if they help cure other diseases — something they currently have no reason to bother with…as well as 27 other top observations and arguments from the past year of the show. Check out the full transcript and episode links on the 80,000 Hours website. Remember that all of these clips come from the 20-minute highlight reels we make for every episode, which are released on our sister feed, 80k After Hours. So if you’re struggling to keep up with our regularly scheduled entertainment, you can still get the best parts of our conversations there. It has been a hell of a year, and we can only imagine next year is going to be even weirder — but Luisa and Rob will be here to keep you company as Earth hurtles through the galaxy to a fate as yet unknown. Enjoy, and look forward to speaking with you in 2025! 
Chapters: Rob's intro (00:00:00)Randy Nesse on the origins of morality and the problem of simplistic selfish-gene thinking (00:02:11)Hugo Mercier on the evolutionary argument against humans being gullible (00:07:17)Meghan Barrett on the likelihood of insect sentience (00:11:26)Sébastien Moro on the mirror test triumph of cleaner wrasses (00:14:47)Sella Nevo on side-channel attacks (00:19:32)Zvi Mowshowitz on AI sleeper agents (00:22:59)Zach Weinersmith on why space settlement (probably) won't make us rich (00:29:11)Rachel Glennerster on pull mechanisms to incentivise repurposing of generic drugs (00:35:23)Emily Oster on the impact of kids on women's careers (00:40:29)Carl Shulman on robot nannies (00:45:19)Nathan Labenz on kids and artificial friends (00:50:12)Nathan Calvin on why it's not too early for AI policies (00:54:13)Rose Chan Loui on how control of OpenAI is independently incredibly valuable and requires compensation (00:58:08)Nick Joseph on why he’s a big fan of the responsible scaling policy approach (01:03:11)Sihao Huang on how the US and UK might coordinate with China (01:06:09)Nathan Labenz on better transparency about predicted capabilities (01:10:18)Ezra Karger on what explains forecasters’ disagreements about AI risks (01:15:22)Carl Shulman on why he doesn't support enforced pauses on AI research (01:18:58)Matt Clancy on the omnipresent frictions that might prevent explosive economic growth (01:25:24)Vitalik Buterin on defensive acceleration (01:29:43)Annie Jacobsen on the war games that suggest escalation is inevitable (01:34:59)Nate Silver on whether effective altruism is too big to succeed (01:38:42)Kevin Esvelt on why killing every screwworm would be the best thing humanity ever did (01:42:27)Lewis Bollard on how factory farming is philosophically indefensible (01:46:28)Bob Fischer on how to think about moral weights if you're not a hedonist (01:49:27)Elizabeth Cox on the empirical evidence of the impact of storytelling (01:57:43)Anil Seth on how our brain interprets reality (02:01:03)Eric Schwitzgebel on whether consciousness can be nested (02:04:53)Jonathan Birch on our overconfidence around disorders of consciousness (02:10:23)Peter Godfrey-Smith on uploads of ourselves (02:14:34)Laura Deming on surprising things that make mice live longer (02:21:17)Venki Ramakrishnan on freezing cells, organs, and bodies (02:24:46)Ken Goldberg on why low fault tolerance makes some skills extra hard to automate in robots (02:29:12)Sarah Eustis-Guthrie on the ups and downs of founding an organisation (02:34:04)Dean Spears on the cost effectiveness of kangaroo mother care (02:38:26)Cameron Meyer Shorb on vaccines for wild animals (02:42:53)Spencer Greenberg on personal principles (02:46:08)Producing and editing: Keiran HarrisAudio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic ArmstrongVideo editing: Simon MonsourTranscriptions: Katy Moore

    2 hr 50 min
  7. #211 – Sam Bowman on why housing still isn't fixed and what would actually work

    19 Dec 2024


    Rich countries seem to find it harder and harder to do anything that creates some losers. People who don’t want houses, offices, power stations, trains, subway stations (or whatever) built in their area can usually find some way to block them, even if the benefits to society outweigh the costs 10 or 100 times over.

    The result of this ‘vetocracy’ has been skyrocketing rent in major cities — not to mention exacerbating homelessness, energy poverty, and a host of other social maladies. This has been known for years but precious little progress has been made. When trains, tunnels, or nuclear reactors are occasionally built, they’re comically expensive and slow compared to 50 years ago. And housing construction in the UK and California has barely increased, remaining stuck at less than half what it was in the ’60s and ’70s.

    Today’s guest — economist and editor of Works in Progress Sam Bowman — isn’t content to just condemn the Not In My Backyard (NIMBY) mentality behind this stagnation. He wants to actually get a tonne of stuff built, and by that standard the strategy of attacking ‘NIMBYs’ has been an abject failure. They are too politically powerful, and if you try to crush them, sooner or later they crush you.

    Links to learn more, highlights, video, and full transcript.

    So, as Sam explains, a different strategy is needed, one that acknowledges that opponents of development are often correct that a given project will make them worse off. But the thing is, in the cases we care about, these modest downsides are outweighed by the enormous benefits to others — who will finally have a place to live, be able to get to work, and have the energy to heat their home. But democracies are majoritarian, so if most existing residents think they’ll be a little worse off if more dwellings are built in their area, it’s no surprise they aren’t getting built.

    Luckily we already have a simple way to get people to do things they don’t enjoy for the greater good, a strategy that we apply every time someone goes in to work at a job they wouldn’t do for free: compensate them.

    Sam thinks this idea, which he calls “Coasean democracy,” could create a politically sustainable majority in favour of building and underlies the proposals he thinks have the best chance of success — which he discusses in detail with host Rob Wiblin.

    Chapters:
    Cold open (00:00:00)
    Introducing Sam Bowman (00:00:59)
    We can’t seem to build anything (00:02:09)
    Our inability to build is ruining people's lives (00:04:03)
    Why blocking growth of big cities is terrible for science and invention (00:09:15)
    It's also worsening inequality, health, fertility, and political polarisation (00:14:36)
    The UK as the 'limit case' of restrictive planning permission gone mad (00:17:50)
    We've known this for years. So why almost no progress fixing it? (00:36:34)
    NIMBYs aren't wrong: they are often harmed by development (00:43:58)
    Solution #1: Street votes (00:55:37)
    Are street votes unfair to surrounding areas? (01:08:31)
    Street votes are coming to the UK — what to expect (01:15:07)
    Are street votes viable in California, NY, or other countries? (01:19:34)
    Solution #2: Benefit sharing (01:25:08)
    Property tax distribution — the most important policy you've never heard of (01:44:29)
    Solution #3: Opt-outs (01:57:53)
    How to make these things happen (02:11:19)
    Let new and old institutions run in parallel until the old one withers (02:18:17)
    The evil of modern architecture and why beautiful buildings are essential (02:31:58)
    Northern latitudes need nuclear power — solar won't be enough (02:45:01)
    Ozempic is still underrated and “the overweight theory of everything” (03:02:30)
    How has progress studies remained sane while being very online? (03:17:55)

    Video editing: Simon Monsour
    Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
    Transcriptions: Katy Moore

    3 hr 26 min
  8. #210 – Cameron Meyer Shorb on dismantling the myth that we can’t do anything to help wild animals

    29 Nov 2024


    "I really don’t want to give the impression that I think it is easy to make predictable, controlled, safe interventions in wild systems where there are many species interacting. I don’t think it’s easy, but I don’t see any reason to think that it’s impossible. And I think we have been making progress. I think there’s every reason to think that if we continue doing research, both at the theoretical level — How do ecosystems work? What sorts of things are likely to have what sorts of indirect effects? — and then also at the practical level — Is this intervention a good idea? — I really think we’re going to come up with plenty of things that would be helpful to plenty of animals." —Cameron Meyer Shorb In today’s episode, host Luisa Rodriguez speaks to Cameron Meyer Shorb — executive director of the Wild Animal Initiative — about the cutting-edge research on wild animal welfare. Links to learn more, highlights, and full transcript. They cover: How it’s almost impossible to comprehend the sheer number of wild animals on Earth — and why that makes their potential suffering so important to consider.How bad experiences like disease, parasites, and predation truly are for wild animals — and how we would even begin to study that empirically.The tricky ethical dilemmas in trying to help wild animals without unintended consequences for ecosystems or other potentially sentient beings.Potentially promising interventions to help wild animals — like selective reforestation, vaccines, fire management, and gene drives.Why Cameron thinks the best approach to improving wild animal welfare is to first build a dedicated research field — and how Wild Animal Initiative’s activities support this.The many career paths in science, policy, and technology that could contribute to improving wild animal welfare.And much more.Chapters: Cold open (00:00:00)Luisa's intro (00:01:04)The interview begins (00:03:40)One concrete example of how we might improve wild animal welfare (00:04:04)Why should we care about wild animal suffering? (00:10:00)What’s it like to be a wild animal? (00:19:37)Suffering and death in the wild (00:29:19)Positive, benign, and social experiences (00:51:33)Indicators of welfare (01:01:40)Can we even help wild animals without unintended consequences? (01:13:20)Vaccines for wild animals (01:30:59)Fire management (01:44:20)Gene drive technologies (01:47:42)Common objections and misconceptions about wild animal welfare (01:53:19)Future promising interventions (02:21:58)What’s the long game for wild animal welfare? (02:27:46)Eliminating the biological basis for suffering (02:33:21)Optimising for high-welfare landscapes (02:37:33)Wild Animal Initiative’s work (02:44:11)Careers in wild animal welfare (02:58:13)Work-related guilt and shame (03:12:57)Luisa's outro (03:19:51) Producer: Keiran HarrisAudio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic ArmstrongContent editing: Luisa Rodriguez, Katy Moore, and Keiran HarrisTranscriptions: Katy Moore

    3 hr 21 min


4.8 out of 5 (279 ratings)

