Pigeon Hour

Aaron Bergman

Recorded conversations; a minimal viable pod www.aaronbergman.net

  1. APR 11

    #12: Arthur Wright and I discuss whether the GiveWell suite of charities is really the best way of helping humans alive today, the value of reading old books, rock climbing, and more

    Please follow Arthur on Twitter and check out his blog!

    "Thank you for just summarizing my point in like 1% of the words" -Aaron, to Arthur, circa 34:45

    Summary (Written by Claude Opus aka Clong)
    * Aaron and Arthur introduce themselves and discuss their motivations for starting the podcast. Arthur jokingly suggests they should "solve gender discourse".
    * They discuss the benefits and drawbacks of having a public online persona and sharing opinions on Twitter. Arthur explains how his views on engaging online have evolved over time.
    * Aaron reflects on whether it's good judgment to sometimes tweet things that end up being controversial. They discuss navigating professional considerations when expressing views online.
    * Arthur questions Aaron's views on cause prioritization in effective altruism (EA). Aaron believes AI is one of the most important causes, while Arthur is more uncertain and pluralistic in his moral philosophy.
    * They debate whether standard EA global poverty interventions are likely to be the most effective ways to help people from a near-termist perspective. Aaron is skeptical, while Arthur defends GiveWell's recommendations.
    * Aaron makes the case that even from a near-termist view focused only on currently living humans, preparing for the impacts of AI could be highly impactful, for instance by advocating for a global UBI. Arthur pushes back, arguing that AI is more likely to increase worker productivity than displace labor.
    * Arthur expresses skepticism of long-termism in EA, though not due to philosophical disagreement with the basic premises. Aaron suggests this is a well-trodden debate not worth rehashing.
    * They discuss whether old philosophical texts have value or if progress means newer works are strictly better. Arthur mounts a spirited defense of engaging with the history of ideas and reading primary sources to truly grasp nuanced concepts. Aaron contends that intellectual history is valuable but reading primary texts is an inefficient way to learn for all but specialists.
    * Arthur and Aaron discover a shared passion for rock climbing, swapping stories of how they got into the sport as teenagers. While Aaron focused on indoor gym climbing and competitions, Arthur was drawn to adventurous outdoor trad climbing. They reflect on the mental challenge of rationally managing fear while climbing.
    * Discussing the role of innate talent vs training, Aaron shares how climbing made him viscerally realize the limits of hard work in overcoming genetic constraints. He and Arthur commiserate about the toxic incentives for competitive climbers to be extremely lean, while acknowledging the objective physics behind it.
    * They bond over falling out of climbing as priorities shifted in college and lament the difficulty of getting back into it after long breaks. Arthur encourages Aaron to let go of comparisons to his past performance and enjoy the rapid progress of starting over.

    Transcript

    Very imperfect - apologies for the errors.

    AARON: Hello, pigeon hour listeners. This is Aaron, as it always is with Arthur Wright of Washington, the broader Washington, DC metro area. Oh, also, we're recording in person, which is very exciting for the second time. I really hope I didn't screw up anything with the audio. Also, we're both being really awkward at the start for some reason, because I haven't gotten into conversation mode yet. So, Arthur, what do you want? Is there anything you want?

    ARTHUR: Yeah. So Aaron and I have been circling around the idea of recording a podcast for a long time. So there have been periods of time in the past where I've sat down and been like, oh, what would I talk to Aaron about on a podcast? Those now elude me because that was so long ago, and we spontaneously decided to record today. But, yeah, for the. Maybe a small number of people listening to this who I do not personally already know. I am Arthur and currently am doing a master's degree in economics, though I still know nothing about economics, de

    2h 13m
  2. MAR 9

    Drunk Pigeon Hour!

    Intro

    Around New Year's, Max Alexander, Laura Duffy, Matt and I tried to raise money for animal welfare (more specifically, the EA Animal Welfare Fund) on Twitter. We put out a list of incentives (see the pink image below), one of which was to record a drunk podcast episode if the greater Very Online Effective Altruism community managed to collectively donate $10,000. To absolutely nobody's surprise, they did ($10k), and then did it again ($20k), and then almost did it a third time ($28,945 as of March 9, 2024). To everyone who gave or helped us spread the word, and on behalf of the untold number of animals these dollars will help, thank you. And although our active promotion on Twitter has come to an end, it is not too late to give!

    I give a bit more context in a short monologue intro I recorded (sober) after the conversation, so without further ado, Drunk Pigeon Hour:

    Transcript

    (Note: very imperfect - sorry!)

    Monologue

    Hi, this is Aaron. This episode of Pigeon Hour is very special for a couple of reasons. The first is that it was recorded in person, so three of us were physically within a couple feet of each other. Second, it was recorded while we were drunk, or maybe just slightly inebriated. Honestly, I didn't get super drunk, so I hope people forgive me for that. But the occasion for drinking was that this, a drunk Pigeon Hour episode, was an incentive for a fundraiser that a couple of friends and I hosted on Twitter, a little bit before New Year's and basically around Christmas time. We basically said, if we raise $10,000 total, we will do a drunk Pigeon Hour podcast. And we did; in fact, we are almost at $29,000, just shy of it. So technically the fundraiser has ended, but it looks like you can still donate. So, I will figure out a way to link that. And also just a huge thank you to everyone who donated. I know that's really cliche, but this time it really matters because we were raising money for the Effective Altruism Animal Welfare Fund, which is a strong contender for the best use of money in the universe. Without further ado, I present me, Matt, and Laura. Unfortunately, the other co-host Max was stuck in New Jersey and so was tragically unable to participate. Yeah, so here it is!

    Conversation

    AARON: Hello, people who are maybe listening to this. I just, like, drank alcohol for, like, the first time in a while. I don't know. Maybe I do like alcohol. Maybe I'll find that out now.

    MATT: Um, all right, yeah, so this is, this is Drunk Pigeon Hour! Remember what I said earlier when I was like, as soon as we are recording, as soon as we press record, it's going to get weird and awkward.

    LAURA: I am actually interested in the types of ads people get on Twitter. Like, just asking around, because I find that I get either, like, DeSantis ads. I get American Petroleum Institute ads, Ashdale College.

    MATT: Weirdly, I've been getting ads for an AI assistant targeted at lobbyists. So it's, it's like step up your lobbying game, like use this like tuned, I assume it's like tuned ChatGPT or something. Um, I don't know, but it's, yeah, it's like AI assistant for lobbyists, and it's like, like, oh, like your competitors are all using this, like you need to buy this product. So, so yeah, Twitter thinks I'm a lobbyist. I haven't gotten any DeSantis ads, actually.

    AARON: I think I might just like have personalization turned off. Like not because I actually like ad personalization. I think I'm just like trying to like, uh, this is, this is like a half-baked protest of them getting rid of Circles. I will try to minimize how much revenue they can make from me.

    MATT: So, so when I, I like went through a Tumblr phase, like very late. In like 2018, I was like, um, like I don't like, uh, like what's happening on a lot of other social media. Like maybe I'll try like Tumblr as a, as an alternative. And I would get a lot of ads for like plus-sized women's flannels. So, so like the Twitter ad targeting does not faze me because I'm like, oh, ok

    1h 36m
  3. JAN 24

    Best of Pigeon Hour

    Table of contents

    Note: links take you to the corresponding section below; links to the original episode can be found there.
    * Laura Duffy solves housing, ethics, and more [00:01:16]
    * Arjun Panickssery solves books, hobbies, and blogging, but fails to solve the Sleeping Beauty problem because he's wrong on that one [00:10:47]
    * Nathan Barnard on how financial regulation can inform AI regulation [00:17:16]
    * Winston Oswald-Drummond on the tractability of reducing s-risk, ethics, and more [00:27:48]
    * Nathan Barnard (again!) on why general intelligence is basically fake [00:34:10]
    * Daniel Filan on why I'm wrong about ethics (+ Oppenheimer and what names mean in like a hardcore phil of language sense) [00:56:54]
    * Holly Elmore on AI pause, wild animal welfare, and some cool biology things I couldn't fully follow but maybe you can [01:04:00]
    * Max Alexander and I solve ethics, philosophy of mind, and cancel culture once and for all [01:24:43]
    * Sarah Woodhouse on discovering AI x-risk, Twitter, and more [01:30:56]
    * Pigeon Hour x Consistently Candid pod-crossover: I debate moral realism with Max Alexander and Sarah Hastings-Woodhouse [01:41:08]

    Intro [00:00:00]

    To wrap up the year of Pigeon Hour, the podcast, I put together some clips from each episode to create a best-of compilation. This was inspired by 80,000 Hours, a podcast that did the same with their episodes, and I thought it was pretty cool and tractable enough. It's important to note that the clips I chose range in length significantly. This does not represent the quality or amount of interesting content in the episode. Sometimes there was a natural place to break the episode into a five-minute chunk, and other times it wouldn't have made sense to take a five-minute chunk out of what really needed to be a 20-minute segment. I promise I'm not just saying that. So without further ado, please enjoy.

    #1: Laura Duffy solves housing, ethics, and more [00:01:16]

    In this first segment, Laura Duffy and I discuss the significance and interpretation of Aristotle's philosophical works in relation to modern ethics and virtue theory.

    AARON: Econ is like more interesting. I don't know. I don't even remember of all the things. I don't know, it seems like kind of cool. Philosophy. Probably would have majored in philosophy if signaling wasn't an issue. Actually, maybe I'm not sure if that's true. Okay. I didn't want to do the old stuff though, so I'm actually not sure. But if I could Aristotle it's all wrong. Didn't you say you got a lot out of Nicomachi or however you pronounce that?

    LAURA: Nicomachean Ethics: a guide to how you should live your life. About ethics as applied to your life because you can't be perfect. Utilitarians. There's no way to be that.

    AARON: But he wasn't even responding to utilitarianism. I'm sure it was a good work given the time, but like, there's like no other discipline in which we care. So people care so much about like, what people thought 2000 years ago because like the presumption, I think the justified presumption is that things have iterated and improved since then. And I think that's true. It's like not just a presumption.

    LAURA: Humans are still rather the same and what our needs are for living amongst each other in political society are kind of the same. I think America's founding is very influenced by what people thought 2000 years ago.

    AARON: Yeah, descriptively that's probably true. But I don't know, it seems like all the whole body of philosophers have they've already done the work of, like, compressing the good stuff. Like the entire academy since like, 1400 or whatever has like, compressed the good stuff and like, gotten rid of the bad stuff. Not in like a high fidelity way, but like a better than chance way. And so the stuff that remains if you just take the state of I don't know if you read the Oxford Handbook of whatever it is, like ethics or something, the takeaways you're going to get from that are just better than the takeaways you're go

    1h 48m
  4. 12/28/2023

    #10: Pigeon Hour x Consistently Candid pod-crossover: I debate moral realism* with Max Alexander and Sarah Hastings-Woodhouse

    Intro

    At the gracious invitation of AI Safety Twitter-fluencer Sarah Hastings-Woodhouse, I appeared on the very first episode of her new podcast “Consistently Candid” to debate moral realism (or something kinda like that, I guess; see below) with fellow philosophy nerd and EA Twitter aficionado Max Alexander, alongside Sarah as moderator and judge of sorts.

    What I believe

    In spite of the name of the episode and the best of my knowledge/understanding as of a few days ago, it turns out my stance may not be ~genuine~ moral realism. Here’s my basic meta-ethical take:
    * Descriptive statements that concern objective relative goodness or badness (e.g., "it is objectively better for Sam to donate $20 than to buy an expensive meal that costs $20 more than a similar, less fancy meal”) can be and sometimes are true; but
    * Genuinely normative claims like “Sam should (!) donate $20 and should not buy that fancy meal” are never objectively true.

    Of course the label per se doesn’t really matter. But for a bunch of reasons it still seems wise to figure out which label really does work best.

    Some definitions

    Stanford Encyclopedia of Philosophy: Moral realists are those who think that, in these respects, things should be taken at face value—moral claims do purport to report facts and are true if they get the facts right. Moreover, they hold, at least some moral claims actually are true. That much is the common and more or less defining ground of moral realism (although some accounts of moral realism see it as involving additional commitments, say to the independence of the moral facts from human thought and practice, or to those facts being objective in some specified way).

    Wikipedia: The robust model of moral realism commits moral realists to three theses:
    * The semantic thesis: The primary semantic role of moral predicates (such as "right" and "wrong") is to refer to moral properties (such as rightness and wrongness), so that moral statements (such as "honesty is good" and "slavery is unjust") purport to represent moral facts, and express propositions that are true or false (or approximately true, largely false, and so on).
    * The alethic thesis: Some moral propositions are in fact true.
    * The metaphysical thesis: Moral propositions are true when actions and other objects of moral assessment have the relevant moral properties (so that the relevant moral facts obtain), where these facts and properties are robust: their metaphysical status, whatever it is, is not relevantly different from that of (certain types of) ordinary non-moral facts and properties.

    And to settle the matter, here’s what some LLMs have to say.

    My prompt:

    Suppose a person believes the following:
    1: Descriptive statements that concern objective relative goodness or badness (e.g., "it is objectively better for Sam to donate $20 than to buy an expensive meal that costs $20 more than a similar, less fancy meal”) can be and sometimes are true; but
    2: Genuinely normative claims like “Sam should donate $20 and should not buy that fancy meal” are never objectively true.
    Is this person best categorized as a moral realist?

    ChatGPT:

    Given these two beliefs, the person's stance doesn't align completely with traditional moral realism, which typically asserts that there are objective truths about both moral facts (like the goodness or badness of actions) and moral duties (what one should or should not do). However, it also doesn't fully align with moral anti-realism, since the person believes in some form of objective truth regarding moral value.

    One possible categorization for this stance could be a form of "value realism" combined with "normative anti-realism" or "normative subjectivism." This would mean that the person believes in objective truths about what is better or worse (value realism) but thinks that what one ought to do (normative claims) is not a matter of objective truth but perhaps of subjective judgment or societal conventions. This position is quite sophisticated and

    1h 8m
  5. 11/15/2023

    #9: Sarah Woodhouse on discovering AI x-risk, Twitter, and more

    Note: I can’t seem to edit or remove the “transcript” tab. I recommend you ignore that and just look at the much higher quality, slightly cleaned-up one below. Most importantly, follow Sarah on Twitter!

    Summary (Written by ChatGPT, as you can probably tell)

    In this episode of Pigeon Hour, host Aaron delves deep into the world of AI safety with his guest, Sarah Woodhouse. Sarah shares her unexpected journey from fearing job automation to becoming a recognized voice on AI safety Twitter. Her story starts with a simple Google search that led her down a rabbit hole of existential dread and unexpected fame on social media. As she narrates her path from lurker to influencer, Sarah reflects on the quirky dynamics of the AI safety community, her own existential crisis, and the serendipitous tweet that resonated with thousands. Aaron and Sarah’s conversation takes unexpected turns, discussing everything from the peculiarities of EA rationalists to the surprisingly serious topic of shrimp welfare. They also explore the nuances of AI doom probabilities, the social dynamics of tech Twitter, and Sarah’s unexpected viral fame as a tween. This episode is a rollercoaster of insights and anecdotes, perfect for anyone interested in the intersection of technology, society, and the unpredictable journey of internet fame.

    Topics discussed

    Discussion on AI Safety and Personal Journeys:
    * Aaron and Sarah discuss her path to AI safety, triggered by concerns about job automation and the realization that AI could potentially replace her work.
    * Sarah's deep dive into AI safety started with a simple Google search, leading her to Geoffrey Hinton's alarming statements, and eventually to a broader exploration without finding reassuring consensus.
    * Sarah's Twitter engagement began with lurking, later evolving into active participation and gaining an audience, especially after a relatable tweet thread about an existential crisis.
    * Aaron remarks on the rarity of people like Sarah, who follow the AI safety rabbit hole to its depths, considering its obvious implications for various industries.

    AI Safety and Public Perception:
    * Sarah discusses her surprise at discovering the AI safety conversation happening mostly in niche circles, often with a tongue-in-cheek attitude that could seem dismissive of the serious implications of AI risks.
    * The discussion touches on the paradox of AI safety: it’s a critically important topic, yet it often remains confined within certain intellectual circles, leading to a lack of broader public engagement and awareness.

    Cultural Differences and Personal Interests:
    * The conversation shifts to cultural differences between the UK and the US, particularly in terms of sincerity and communication styles.
    * Personal interests, such as theater and musicals (like "Glee"), are also discussed, revealing Sarah's background and hobbies.

    Effective Altruism (EA) and Rationalist Communities:
    * Sarah points out certain quirks of the EA and rationalist communities, such as their penchant for detailed analysis, hedging statements, and the use of probabilities in discussions.
    * The debate around the use of "P(Doom)" (probability of doom) in AI safety discussions is critiqued, highlighting how it can be both a serious analytical tool and potentially alienating jargon for outsiders.

    Shrimp Welfare and Ethical Considerations:
    * A detailed discussion on shrimp welfare as an ethical consideration in effective altruism unfolds, examining the moral implications and effectiveness of focusing on animal welfare at a large scale.
    * Aaron defends his position on prioritizing shrimp welfare in charitable giving, based on the principles of importance, tractability, and neglectedness.

    Personal Decision-Making in Charitable Giving:
    * Strategies for personal charitable giving are explored, including setting a donation cutoff point to balance moral obligations with personal needs and aspirations.

    Transcript

    AARON: Whatever you want. Okay. Yeah, I fee

    1h 15m
  6. 11/06/2023

    #8: Max Alexander and I solve ethics, philosophy of mind, and cancel culture once and for all

    * Follow Max on Twitter
    * And read his blog
    * Listen here or on Spotify or Apple Podcasts
    * RIP Google Podcasts 🪦🪦🪦

    Summary

    In this philosophical and reflective episode, hosts Aaron and Max engage in a profound debate over the nature of consciousness, moral realism, and subjective experience. Max, a skeptic of moral realism, challenges Aaron on the objective moral distinction between worlds with varying levels of suffering. They ponder the hard problem of consciousness, discussing the possibility of philosophical zombies and whether computations could account for consciousness. As they delve into the implications of AI on moral frameworks, their conversation extends to the origins of normativity and the nonexistence of free will. The tone shifts as they discuss practical advice for running an Effective Altruism group, emphasizing the importance of co-organizers and the balance between being hospitable and maintaining normalcy. They exchange views on the potential risks and benefits of being open in community building and the value of transparency and honest feedback. Transitioning to lighter topics, Max and Aaron share their experiences with social media, the impact of Twitter on communication, and the humorous side of office gossip. They also touch on the role of anonymity in online discussions, pondering its significance against the backdrop of the Effective Altruism community. As the episode draws to a close, they explore the consequences of public online behavior for employment and personal life, sharing anecdotes and contemplating the broader implications of engaging in sensitive discourses. Despite their digressions into various topics, the duo manages to weave a coherent narrative of their musings, leaving listeners with much to reflect upon.

    Transcript

    AARON: Without any ado whatsoever: Max Alexander and I discuss a bunch of philosophy things and more.

    MAX: I don't think moral realism is true or something.

    AARON: Okay, yeah, we can debate this.

    MAX: That's actually an issue then, because if it's just the case that utilitarianism and this an axiology, which is true or something, whether or not I'm bothered by or would make certain traits personally doesn't actually matter. But if you had the godlike AI or like, I need to give it my axiological system or something, and there's not an objective one, then this becomes more of a problem that you keep running into these issues or something.

    AARON: Okay, yeah, let's debate. Because you think I'm really wrong about this, and I think you're wrong, but I think your position is more plausible than you think. My position is probably. I'm at like 70%. Some version of moral realism is true. And I think you're at, like, what? Tell me. Like, I don't know, 90 or something.

    MAX: I was going to say probably 99% or something. I've yet to hear a thing that's plausible or something here.

    AARON: Okay, well, here, let's figure it out once and for all. So you can press a button that doesn't do Nick. The only thing that happens is that it creates somebody in the world who's experiencing bad pain. There's no other effect in the world. And then you have to order these two worlds. There's no normativity involved. You only have to order them according to how good they are. This is my intuition pump. This isn't like a formal argument. This is my intuition pump that says, okay, the one without that suffering person and no other changes. Subjectively, not subjectively. There's a fact of the matter as to which one is better is, like, not. I mean, I feel like, morally better and better here just are synonyms. All things considered. Better, morally better, whatever. Do you have a response, or do you just want to say, like, no, you're a formal argument.

    MAX: What makes this fact of the matter the case or something like that?

    AARON: Okay, I need to get into my headspace where I've done this or had this debate before. I do know. I'll defer to Sharon Roulette not too long ago

    1h 11m
  7. 10/17/2023

    #7: Holly Elmore on AI pause, wild animal welfare, and some cool biology things I couldn't fully follow but maybe you can

    * Listen on Spotify or Apple Podcasts
    * Be sure to check out and follow Holly’s Substack and org Pause AI.

    Blurb and summary from Clong

    Blurb

    Holly and Aaron had a wide-ranging discussion touching on effective altruism, AI alignment, genetic conflict, wild animal welfare, and the importance of public advocacy in the AI safety space. Holly spoke about her background in evolutionary biology and how she became involved in effective altruism. She discussed her reservations around wild animal welfare and her perspective on the challenges of AI alignment. They talked about the value of public opinion polls, the psychology of AI researchers, and whether certain AI labs like OpenAI might be net positive actors. Holly argued for the strategic importance of public advocacy and pushing the Overton window within EA on AI safety issues.

    Detailed summary
    * Holly's background - PhD in evolutionary biology, got into EA through New Atheism and looking for community with positive values, did EA organizing at Harvard
    * Worked at Rethink Priorities on wild animal welfare but had reservations about imposing values on animals and whether we're at the right margin yet
    * Got inspired by FLI letter to focus more on AI safety advocacy and importance of public opinion
    * Discussed genetic conflict and challenges of alignment even with "closest" agents
    * Talked about the value of public opinion polls and influencing politicians
    * Discussed the psychology and motives of AI researchers
    * Disagreed a bit on whether certain labs like OpenAI might be net positive actors
    * Holly argued for importance of public advocacy in AI safety, thinks we have power to shift Overton window
    * Talked about the dynamics between different AI researchers and competition for status
    * Discussed how rationalists often dismiss advocacy and politics
    * Holly thinks advocacy is neglected and can push the Overton window even within EA
    * Also discussed Holly's evolutionary biology takes, memetic drive, gradient descent vs. natural selection

    Full transcript (very imperfect)

    AARON: You're an AI pause advocate. Can you remind me of your shtick before that? Did you have an EA career or something?

    HOLLY: Yeah, before that I was an academic. I got into EA when I was doing my PhD in evolutionary biology, and I had been into New Atheism before that. I had done a lot of organizing for that in college. And while the enlightenment stuff and what I think is the truth about there not being a God was very important to me, I didn't like the lack of positive values. Half the people there were sort of people like me who were looking for community after leaving the religion they grew up in. And sometimes as many as half of the people there were just looking for a way for it to be okay for them to upset people and take away stuff that was important to them. And I didn't love that. I didn't love organizing a space for that. And when I got to my first year at Harvard, Harvard Effective Altruism was advertising for its fellowship, which became the Arete Fellowship eventually. And I was like, wow, this is like, everything I want. And it has this positive organizing value around doing good. And so I was totally made for it. And pretty much immediately I did that fellowship, even though it was for undergrad. I did that fellowship, and I was immediately doing a lot of grad school organizing, and I did that for, like, six more years. And yeah, by the time I got to the end of grad school, I realized I was very sick in my fifth year, and I realized the stuff I kept doing was EA organizing, and I did not want to keep doing work. And that was pretty clear. I thought, oh, because I'm really into my academic area, I'll do that, but I'll also have a component of doing good. I took Giving What We Can in the middle of grad school, and I thought, I actually just enjoy doing this more, so why would I do anything else? Then after grad school, I started applying for EA jobs, and pretty soon I got a job at Rethink P

    1h 38m

Ratings & Reviews

4.8 out of 5 (4 ratings)
