Pigeon Hour

Aaron Bergman

Recorded conversations; a minimal viable pod www.aaronbergman.net

  1. May 14

    #14: Jesse Smith on HVAC, indoor air quality, and generally being an extremely based person

    Summary

    Join host Aaron with Jesse Smith, a self-described "unconventional EA" (Effective Altruist) who bridges blue-collar expertise with intellectual insight. Jesse recounts his wild early adventures in Canadian "bush camps," from planting a thousand trees daily as a teen to remote carpentry with helicopter commutes. Now a carpenter, HVAC technician, and business owner (Tay River Builders), he discusses his Asterisk magazine article, "Lies, Damned Lies, and Manometer Readings." Discover the HVAC industry's surprising shortcomings, the difficulty of achieving good indoor air quality (even for the affluent!), and the systemic issues impacting public health and climate goals, with practical insights on CO2 and radon monitors like the Airthings View Plus.

    Jesse's links

    * Lies, Damned Lies, and Manometer Readings, the Asterisk magazine article discussed at length
    * Tay River Builders, his contracting company
    * Willard Brothers Woodcutters, his wood store (and its viral Instagram page)
    * Jesse on Twitter
    * The Airthings View Plus air quality monitor discussed (currently $239 on Amazon) (no, they're not paying either of us for this, but they should)

    Transcript

    Aaron: Okay. First recorded pigeon hour in a while. I'm here with Jesse Smith, resident dad of EA Twitter. I don't know if. I don't know if you'll accept that. Accept that honor. Okay, cool. and I actually, we haven't chatted, face to face in, like, a while, but I know you have, like, a really interesting. You're very, like, unconventional EA in some respects. Do you want to, like, give me your whole like, life story? In brief? Jesse: okay. So I guess one thing is that I'm super old for EA, right? like. And so being a dad and owning, like, a kind of normal business, I guess another is kind of more blue collar background, right? So, I was originally a carpenter also, then took on being an HVAC technician. So I, the businesses that I own kind of like focus on a little bit of both those. yeah. So like my my background. I was raised in Canada. I left school, I didn't go to college. yeah. I went into, like, after a few years of, like a few years after high school, went into the trades basically. Aaron: Okay. Yeah. Nice. Okay. Like. Yes, I think that. Yeah, that definitely, like, makes you at least, at least stereotypically. But I think also like, in real life, like, there just aren't that many, like, carpentry businessmen who are, like happen to, like, hang out on Twitter also. So no, this is like legitimately really interesting. And at one point, I swear, I thought you went to Princeton. You must have mentioned the city, and I must have interpreted it as the town. Jesse: Yeah, my my brother, my brothers and I lived in Princeton for quite a while. Two of my brothers actually. Aaron: Still. Jesse: Okay around Princeton. I'm not far from Princeton. That's kind of the area where we work. it is where my dad went to grad school, so could have been that as well. so. Yeah. Yeah, but I did not attend Princeton. Aaron: I mean. Jesse: I worked on some of their buildings, but I have not attended. Aaron: Maybe, maybe that was where I got the that like. Yeah, like like myth from. So I know I have, I got like a couple at least. Matt. Matt from Twitter sent in a question, but I as usual, I've done a minimal level of preparation. So also we can we can talk about talk about truly whatever, but like, maybe. Yeah. So how did you, how did you, like, find out about EA? Like one thing. Jesse: Well, yeah. Okay. So there's some, I guess, some weird stuff.
So I was fairly enamored with Peter Singer. Kind of just like starting with the book Animal Liberation. It would have been. I forget when he wrote that. Like it would have been years after he wrote it, right? Because I think he wrote it in even when, like when I was super young. But I probably read that in my late teens. Okay. And so, yeah. Aaron: That's that's the 1975 book. Jesse: So yeah, that sounds right. Yeah. I was going to guess the 70s. Right. So I was like. Aaron: Nice. Jesse: Nice or something. Right. So what. Aaron: Year old? Jesse: Yes, exactly. But so so when I was 16, I briefly dropped out of high school and I was working. This is really weird. I was working in these, like, bush camps in Canada. It's somewhat popular to do this. And so, like, I was 16, I celebrated my 17th birthday in a bush camp. That was like a tree planting bush camp. But this. Okay, so this is really weird. It sounds like this is like core blue collar, but it's not quite. The guy who owned the company I was working for was a friend of my dad's, and he was Baha'i and vegetarian. And so he had these vegetarian bush camps that we planted trees and did like some brushing out of. Right. So we ran like brush saws and stuff. And so I sort of I think that's kind of what, like I became a vegetarian out of those camps and was reading kind of Peter Singer's stuff at the time. And I think partly being there made me realize like, oh, this is going to not be that difficult. A lot of guys were really irritated by vegetarian bush camps, right? Like some of it was kind of core blue collar, mediating type guys. But like, I was totally fine. I was actually like super happy because it was kind of my first experience in a full time job. And I was nervous because everybody was like, oh, you know, it's going to be hell. And I actually thought it was great. It was much better than being in high school. I thought at the time, like, they just like everything was squared away, like they just fed you. You just had to go and like, try to put as it was piecework. So it was like $0.22 a tree or something. And after a few days, I think on my third day I put something like a thousand trees into the ground or something. Right. So I was. Aaron: Like, Jesus Christ. Jesse: I was like, oh, this is amazing, right? Like, all I have to do is like. Run as fast as I possibly can with these big bags of trees in the woods in, like, this beautiful setting. Eat the food they give me and then like, go to sleep and, like, read or whatever. Right. So like it was a it was a great experience. I know that's the total effect, right. What's that. Aaron: No no no it's not it's not a digression. One thing is can you just define bush camp like for for us dumb American like, oh yeah, dumb like Americans or whatever. Jesse: Yeah. So I, I don't know, I, I guess maybe I haven't heard the terms. They're in the US, but they must exist for some purpose. Right. So usually it's like somewhere remote that you are basically camped out of. In my case, it was literally like camping. It was tents, which I didn't mind at the time. So you'd be like, you know, in our case, it was big camps. Like, I think sometimes they can be as little as maybe ten people, let's say. Right. And this was like a pretty decent company. So they were running 40. The max I saw was maybe 100 people working out of this camp in the remote wilderness. The first year I did it was around an area called mica, which I understand now is a popular heli skiing destination. 
Like I have a friend who now skis and mica, which is hilarious to me, but it would maybe take you to the nearest town to mica was probably Revelstoke, which was in this case we could drive there, you know, maybe like a year or two later. There were ones that we were flown into, and in some cases there were even ones where I'm trying to think like I was in one in my like early 20s where they would helicopter. You would take a helicopter ride every day to the site, like so like you. So they would like, you'd see this helicopter coming in and like, they'd land in the camp and then they take you. But it was just it wasn't like it didn't feel like special operations. It was like the helicopter was rented from the, like, small towns Weather channel. Right. Aaron: Well, that's so badass as like, I feel like the correct term for all this is, is very based. Jesse: Yeah, I don't know. I mean, I like it seems weird now to describe this to people and it's not in people's experience. But it wasn't it didn't feel the helicopter thing. Maybe did initially felt weird. Right. Because like, I don't know anything about helicopters, right. Like but it didn't feel that weird at the time. And I knew a lot of people growing up who worked out of bush camps and then years later. So like probably around when I was in my early 20s is when I started my carpentry apprenticeship formally, like I had worked in construction a bit and then done the bush camp thing on and off. And so then I ended up doing some some remote wilderness bush camp carpentry work as well, maybe midway through my apprenticeship. So I worked on a it. An Indian reserve building, a water treatment facility that would have been like probably late 90s. Like I'm thinking like probably right before I moved to the US. And that was like that was months and months. That was actually not a good camp. One of the things I hate is that the first camp I went to was incredible, like incredible, like incredible food, like they would haul in saunas like you had you had a trailer with a sauna. And so, like when you're 16, you know, you're just like, oh yeah, this is like normal, right? And I've often thought like. And the food was amazing. Like the, the lead cook would make like she made it for my 17th birthday. She made me a cake. Right. But I was like, and I'm I'm sure I said like, thank you. But it should have been like effusive with praise, right? Because it was just like, yeah, incredible. And then if you, you know. And then I was probably in like over the years maybe 2 or 3 other camps and they suck. Like I remember showing up and being like hey, when is. And like, so this woman would, she would have like Indian night and Mexican night, like themed food nights and like you like they had generators and you could watch movies and like, it was just crazy. And I remember rolling into, like, this next, logging camps and logging camps are legendarily crappy

    1 hr 30 min
  2. March 25

    Preparing for the Intelligence Explosion (paper readout and commentary)

    Preparing for the Intelligence Explosion is a recent paper by Fin Moorhouse and Will MacAskill.

    * 00:00 - 1:58:04 is me reading the paper.
    * 1:58:05 - 2:26:06 is a string of random thoughts I have related to it.

    I am well aware that I am not the world's most eloquent speaker lol. This is also a bit of an experiment in getting myself to read something by reading it out loud. Maybe I'll do another episode like this (feel free to request papers/other things to read out, ideally a bit shorter than this one lol). Below are my unfiltered, unedited, quarter-baked thoughts.

    My unfiltered, unedited, quarter-baked thoughts

    Okay, this is Aaron. I'm in post-prod, as we say in the industry, and I will just spitball some random thoughts. I'm not even with my computer right now, so I don't even have the text in front of me. I feel like my main takeaway is that the vibes debate runs from "AI is as important as the internet, maybe" on the low end to "AI is a big deal." But if you actually do the not-math, approximately all of the variation is actually just between insane and insane to the power of insane. And I don't fully know what to do with that.

    I guess, to put a bit more of a point on it, I'm not just talking about point estimates. It seems that even if you make quite conservative assumptions, it's quite overdetermined that there will be something like explosive technological progress unless something really changes. And that is just, yeah, that is just a big deal. It's not one that I think I've fully incorporated into my emotional worldview. I mean, I have in part, I think, but not to the degree that my intellect has.

    So another thing is that one of the headline results, something that Will MacAskill, I think, wants to emphasize and did emphasize in the paper, is the century-in-a-decade meme. But if you actually read the paper, that is kind of a lower bound, unless something crazy happens. And (this is me editorializing right now) I think something crazy could happen: for example, a nuclear war with China that destroys data centers and means that AI progress is significantly set back, or an unknown unknown. But the century in a decade is really, truly, a lower bound; you need to be super pessimistic within all the in-model uncertainty to get it. Obviously there's out-of-model uncertainty, but with the actual point estimates, whether you take arithmetic means over distributions or geometric means, however you combine the variables, you actually get much, much faster than that. So that is a 10x speed-up, and that is, as I said, about as pessimistic as you can get. I don't actually have a good enough memory to remember exactly what the point estimate numbers are; I should go back and look.

    So, chatting with Claude, it seems that there are actually a lot of different specific numbers and things. One question you might have is: over the fastest-growing decade, in terms of technological progress or economic growth, in the next 10 decades, what will the peak average growth rate be? But there are a lot of different ways you can play with that to change it. What's the average going to be over the next decade? What about this coming decade? What about before 2030? Are we talking about economic progress, or some less well-defined sense of technological and social progress?
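    As a rough illustration of that aggregation point, here is a toy calculation. The distribution below is invented purely to show how arithmetic and geometric means over a heavy-tailed distribution of "progress multipliers" come apart; the numbers are not from the paper.

    ```python
    import numpy as np

    # Toy distribution over "how many centuries of progress fit into the fastest
    # decade" -- invented numbers to illustrate arithmetic vs. geometric aggregation,
    # NOT estimates from the Moorhouse & MacAskill paper.
    multipliers = np.array([1, 3, 10, 30, 100, 300, 1000], dtype=float)
    probs = np.array([0.10, 0.15, 0.20, 0.20, 0.15, 0.12, 0.08])
    assert abs(probs.sum() - 1.0) < 1e-9

    arithmetic_mean = float(np.sum(probs * multipliers))                 # ~140x here, dominated by the tail
    geometric_mean = float(np.exp(np.sum(probs * np.log(multipliers))))  # ~25x here, still above 10x

    print(f"arithmetic mean: ~{arithmetic_mean:.0f}x")
    print(f"geometric mean:  ~{geometric_mean:.0f}x")
    ```

    With these made-up numbers, both ways of combining the variables land well above the 10x "century in a decade" level, which is the shape of the claim above; the real distributions in the paper are what matter.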
    But basically, it seems the conservative scenario is that the intelligence explosion happens over some importantly long series of years, and you get 5x growth year over year. So not a doubling every year: after two years you get a 25x expansion of AI labor, and then 125x after three years. And I need to look back. I think one thing they don't talk about specifically is... oh yeah, sorry, they do talk about one important thing to emphasize (and as you can tell, I'm not the most eloquent person in the world): they talk about pace significantly, and about limiting factors. But the third thing, the one you might solve for if you know those two variables, is the length of time that such an explosion might take place across. And just thinking out loud, that is something that, whether intentionally or otherwise (or me being dumb and missing it), I don't think they give a ton of attention to.

    I mean, my intuition is that it's approximately fine. Does it matter, conditional on knowing the distribution of rates over, say, blocks of years? So we're not talking about seconds; I guess we could be talking about months, but we're not talking about weeks, and we're not talking about multiple decades. We're talking about something in the realm of single-digit to double-digit numbers of years, maybe a fraction of a year. So two-ish, three orders of magnitude of range. And so the question is, conditional on having a distribution of peak average growth rate for some block of time, does it matter whether we're talking about two years, or 10 years, or what? And, sorry, backtracking: also conditional on having a distribution for the limiting factors, i.e. at what point do you stop scaling? Because we know the talking point that you can't have infinite growth in a finite world is true; it's just off by 1,000 orders of magnitude, or maybe 100. So there actually are genuine limiting factors, and they discuss at what point you might get true limits from power consumption or whatever. But yeah, just to recap this little mini-ramble: one thing the paper doesn't go over much is the length of time specifically, except insofar as that is implied by the distributions you have for peak growth rates and limiting factors.

    So another thing that wasn't in the paper, but (I'm just spitballing) was in Will MacAskill's recent interview on the 80,000 Hours podcast with Rob Wiblin, about the world's most pressing problems and how you can use your career to solve them: I think Rob said he wishes that the AI x-risk community hadn't been so tame or timid in terms of hedging, emphasizing uncertainty, saying, you know, there's a million ways it could be wrong, which is of course true. But I think the takeaway he was trying to get at was that even ex ante, they should have been a little bit more straightforward. And I actually kind of think there's a reasonable critique of this paper, which is that the century-in-a-decade meme is not a good approximation of the actual expectations: the expectation is something like a 100 to 1,000x speed-up, not 10x. Even as a reasonable conservative baseline, you have to be really within-model pessimistic to get to the 10x point.

    Another big thing to comment on is just the grand challenges. And so I've been saying for a while that my P(doom), as they say, is something in the 50% range.
    Maybe now it's 60% or something after reading this paper, up from 35% right after the Biden executive order. And what I mean by that, I think, is some sort of loose sense of: no, we actually don't solve all these challenges. One thing MacAskill and Moorhouse emphasize, in both the podcast that I listened to and the paper, is that it's not just about AI control; it's not just about the alignment problem. You really have to get a lot of things right. I think this relates to other work that MacAskill is on that I'm not super well acquainted with, but there's the question of how much you have to get right in order for the future to go well. And I actually think there are a lot of strands there. Like, I remember on the podcast with Rob, they were talking in terms of percentage value of the best outcome. I'm just thinking out loud here, but I'm not actually sure that's the right metric to go with. It's a little bit like this: you can imagine we have the current set of possibilities, and then exogenously we get one new strand in the Everettian multiverse in which the future goes a billion times better than it otherwise could. I feel like this should change approximately nothing, because you know it's not going to happen. But it does revise your estimate of the expected percentage of the best possible future down a billion-fold. And so... no, I'm not actually sure; I'm just not smart enough to intuit whether this ends up cashing out in terms of what you should do. But I suspect that it might. That's really just an intuition, so yeah, I'm not sure.

    You know, something that will never be said about me is that I am an extremely well-organized and straightforward thinker. So it might be worth noting that these audio messages are just random things that come to mind as I'm walking around, basically, a park. Also, that's why the audio quality might be worse.

    Oh yeah, getting back to what I was originally thinking about with the grand challenges and my P(doom): they just enumerate a bunch of things that, in my opinion, really do have to go right in order for some notion of the future to be good. And so there's just a concatenation issue (I forget what the term is): even if you're relatively optimistic on any one issue, and I kind of don't know if you should be, the challenges stack. Okay, let me just list some of them off: AI takeover, highly destructive technologies, power-concentrating mechanisms, value lock-in mechanisms, AI agents and digital minds, space governance, new competitive pressures, epistemic disruption, abundance (so, capturing the upside), and unknown unknowns. Now, it's not as clean a model as each of these being fully independent; it's much more complex than that. But it's also not as simple as: oh, if you have a 70% chance on each, you can just take that to the power of eight.
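    To make that "concatenation" arithmetic concrete, here is a toy calculation using the 70% figure from the passage above and the naive independence assumption it explicitly warns against; it is an intuition pump, not a model.

    ```python
    # Naive "take it to the power of n" calculation: if each of n grand challenges
    # independently had a 70% chance of going well, the chance that ALL go well
    # is 0.7**n. As noted above, the challenges are not actually independent, so
    # this only illustrates how quickly moderate per-challenge optimism decays.
    p_each = 0.7
    for n in (5, 8, 10):
        print(f"{n} challenges: {p_each ** n:.1%} chance that all go well")
    # -> 5 challenges: 16.8%, 8 challenges: 5.8%, 10 challenges: 2.8%
    ```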

    2 hr 25 min
  3. April 11, 2024

    #12: Arthur Wright and I discuss whether the Givewell suite of charities are really the best way of helping humans alive today, the value of reading old books, rock climbing, and more

    Please follow Arthur on Twitter and check out his blog! Thank you for just summarizing my point in like 1% of the words -Aaron, to Arthur, circa 34:45 Summary (Written by Claude Opus aka Clong) * Aaron and Arthur introduce themselves and discuss their motivations for starting the podcast. Arthur jokingly suggests they should "solve gender discourse". * They discuss the benefits and drawbacks of having a public online persona and sharing opinions on Twitter. Arthur explains how his views on engaging online have evolved over time. * Aaron reflects on whether it's good judgment to sometimes tweet things that end up being controversial. They discuss navigating professional considerations when expressing views online. * Arthur questions Aaron's views on cause prioritization in effective altruism (EA). Aaron believes AI is one of the most important causes, while Arthur is more uncertain and pluralistic in his moral philosophy. * They debate whether standard EA global poverty interventions are likely to be the most effective ways to help people from a near-termist perspective. Aaron is skeptical, while Arthur defends GiveWell's recommendations. * Aaron makes the case that even from a near-termist view focused only on currently living humans, preparing for the impacts of AI could be highly impactful, for instance by advocating for a global UBI. Arthur pushes back, arguing that AI is more likely to increase worker productivity than displace labor. * Arthur expresses skepticism of long-termism in EA, though not due to philosophical disagreement with the basic premises. Aaron suggests this is a well-trodden debate not worth rehashing. * They discuss whether old philosophical texts have value or if progress means newer works are strictly better. Arthur mounts a spirited defense of engaging with the history of ideas and reading primary sources to truly grasp nuanced concepts. Aaron contends that intellectual history is valuable but reading primary texts is an inefficient way to learn for all but specialists. * Arthur and Aaron discover a shared passion for rock climbing, swapping stories of how they got into the sport as teenagers. While Aaron focused on indoor gym climbing and competitions, Arthur was drawn to adventurous outdoor trad climbing. They reflect on the mental challenge of rationally managing fear while climbing. * Discussing the role of innate talent vs training, Aaron shares how climbing made him viscerally realize the limits of hard work in overcoming genetic constraints. He and Arthur commiserate about the toxic incentives for competitive climbers to be extremely lean, while acknowledging the objective physics behind it. * They bond over falling out of climbing as priorities shifted in college and lament the difficulty of getting back into it after long breaks. Arthur encourages Aaron to let go of comparisons to his past performance and enjoy the rapid progress of starting over. Transcript Very imperfect - apologies for the errors. AARON Hello, pigeon hour listeners. This is Aaron, as it always is with Arthur Wright of Washington, the broader Washington, DC metro area. Oh, also, we're recording in person, which is very exciting for the second time. I really hope I didn't screw up anything with the audio. Also, we're both being really awkward at the start for some reason, because I haven't gotten into conversation mode yet. So, Arthur, what do you want? Is there anything you want? ARTHUR Yeah. So Aaron and I have been circling around the idea of recording a podcast for a long time. 
So there have been periods of time in the past where I've sat down and been like, oh, what would I talk to Aaron about on a podcast? Those now elude me because that was so long ago, and we spontaneously decided to record today. But, yeah, for the. Maybe a small number of people listening to this who I do not personally already know. I am Arthur and currently am doing a master's degree in economics, though I still know nothing about economics, despite being two months from completion, at least how I feel. And I also do, like, housing policy research, but I think have, I don't know, random, eclectic interests in various EA related topics. And, yeah, I don't. I feel like my soft goal for this podcast was to, like, somehow get Aaron cancelled. AARON I'm in the process. ARTHUR We should solve gender discourse. AARON Oh, yeah. Is it worth, like, discussing? No, honestly, it's just very online. It's, like, not like there's, like, better, more interesting things. ARTHUR I agree. There are more. I was sort of joking. There are more interesting things. Although I do think, like, the general topic that you talked to max a little bit about a while ago, if I remember correctly, of, like, kind of. I don't know to what degree. Like, one's online Persona or, like, being sort of active in public, sharing your opinions is, like, you know, positive or negative for your general. AARON Yeah. What do you think? ARTHUR Yeah, I don't really. AARON Well, your. Your name is on Twitter, and you're like. ARTHUR Yeah. You're. AARON You're not, like, an alt. ARTHUR Yeah, yeah, yeah. Well, I. So, like, I first got on Twitter as an alt account in, like, 2020. I feel like it was during my, like, second to last semester of college. Like, the vaccine didn't exist yet. Things were still very, like, hunkered down in terms of COVID And I feel like I was just, like, out of that isolation. I was like, oh, I'll see what people are talking about on the Internet. And I think a lot of the, like, sort of more kind of topical political culture war, whatever kind of stuff, like, always came back to Twitter, so I was like, okay, I should see what's going on on this Twitter platform. That seems to be where all of the chattering classes are hanging out. And then it just, like, made my life so much worse. AARON Wait, why? ARTHUR Well, I think part of it was that I just, like, I made this anonymous account because I was like, oh, I don't want to, like, I don't want to, like, have any reservations about, like, you know, who I follow or what I say. I just want to, like, see what's going on and not worry about any kind of, like, personal, like, ramifications. And I think that ended up being a terrible decision because then I just, like, let myself get dragged into, like, the most ultimately, like, banal and unimportant, like, sort of, like, culture war shit as just, like, an observer, like, a frustrated observer. And it was just a huge waste of time. I didn't follow anyone interesting or, like, have any interesting conversations. And then I, like, deleted my Twitter. And then it was in my second semester of my current grad program. We had Caleb Watney from the Institute for Progress come to speak to our fellowship because he was an alumni of the same fellowship. And I was a huge fan of the whole progress studies orientation. And I liked what their think tank was doing as, I don't know, a very different approach to being a policy think tank, I think, than a lot of places. 
And one of the things that he said for, like, people who are thinking about careers in, like, policy and I think sort of applies to, like, more ea sort of stuff as well, was like, that. Developing a platform on Twitter was, like, opened a lot of doors for him in terms of, like, getting to know people in the policy world. Like, they had already seen his stuff on Twitter, and I got a little bit, like, more open to the idea that there could be something constructive that could come from, like, engaging with one's opinions online. So I was like, okay, fuck it. I'll start a Twitter, and this time, like, I won't be a coward. I won't get dragged into all the worst topics. I'll just, like, put my real name on there and, like, say things that I think. And I don't actually do a lot of that, to be honest. AARON I've, like, thought about gotta ramp it. ARTHUR Off doing more of that. But, like, you know, I think when it's not eating too much time into my life in terms of, like, actual deadlines and obligations that I have to meet, it's like, now I've tried to cultivate a, like, more interesting community online where people are actually talking about things that I think matter. AARON Nice. Same. Yeah, I concur. Or, like, maybe this is, like, we shouldn't just talk about me, but I'm actually, like, legit curious. Like, do you think I'm an idiot or, like, cuz, like, hmm. I. So this is getting back to the, like, the current, like, salient controversy, which is, like, really just dumb. Not, I mean, controversy for me because, like, not, not like an actual, like, event in the world, but, like, I get so, like, I think it's, like, definitely a trade off where, like, yeah, there's, like, definitely things that, like, I would say if I, like, had an alt. Also, for some reason, I, like, really just don't like the, um, like, the idea of just, like, having different, I don't know, having, like, different, like, selves. Not in, like, a. And not in, like, any, like, sort of actual, like, philosophical way, but, like, uh, yeah, like, like, the idea of, like, having an online Persona or whatever, I mean, obviously it's gonna be different, but, like, in. Only in the same way that, like, um, you know, like, like, you're, like, in some sense, like, different people to the people. Like, you're, you know, really close friend and, like, a not so close friend, but, like, sort of a different of degree. Like, difference of, like, degree, not kind. And so, like, for some reason, like, I just, like, really don't like the idea of, like, I don't know, having, like, a professional self or whatever. Like, I just. Yeah. And you could, like, hmm. I don't know. Do you think I'm an idiot for, like, sometimes tweeting, like, things that, like, evidently, like, are controversial, even if they, like, they're not at all intent or, like, I didn't even, you know, plan, like, plan on them being. ARTHUR Yeah, I think it's, like, sort of similar to the, like, decoupli

    2 hr 13 min
  4. March 9, 2024

    Drunk Pigeon Hour!

    Intro Around New Years, Max Alexander, Laura Duffy, Matt and I tried to raise money for animal welfare (more specifically, the EA Animal Welfare Fund) on Twitter. We put out a list of incentives (see the pink image below), one of which was to record a drunk podcast episode if the greater Very Online Effective Altruism community managed to collectively donate $10,000. To absolutely nobody’s surprise, they did ($10k), and then did it again ($20k) and then almost did it a third time ($28,945 as of March 9, 2024). To everyone who gave or helped us spread the word, and on behalf of the untold number of animals these dollars will help, thank you. And although our active promotion on Twitter has come to an end, it is not too late to give! I give a bit more context in a short monologue intro I recorded (sober) after the conversation, so without further ado, Drunk Pigeon Hour: Transcript (Note: very imperfect - sorry!) Monologue Hi, this is Aaron. This episode of Pigeon Hour is very special for a couple of reasons. The first is that it was recorded in person, so three of us were physically within a couple feet of each other. Second, it was recorded while we were drunk or maybe just slightly inebriated. Honestly, I didn't get super drunk, so I hope people forgive me for that. But the occasion for drinking was that this, a drunk Pigeon Hour episode, was an incentive for a fundraiser that a couple of friends and I hosted on Twitter, around a little bit before New Year's and basically around Christmas time. We basically said, if we raise $10,000 total, we will do a drunk Pigeon Hour podcast. And we did, in fact, we are almost at $29,000, just shy of it. So technically the fundraiser has ended, but it looks like you can still donate. So, I will figure out a way to link that. And also just a huge thank you to everyone who donated. I know that's really cliche, but this time it really matters because we were raising money for the Effective Altruism Animal Welfare Fund, which is a strong contender for the best use of money in the universe. Without further ado, I present me, Matt, and Laura. Unfortunately, the other co-host Max was stuck in New Jersey and so was unable to participate tragically. Yeah so here it is! Conversation AARON Hello, people who are maybe listening to this. I just, like, drank alcohol for, like, the first time in a while. I don't know. Maybe I do like alcohol. Maybe I'll find that out now. MATT Um, All right, yeah, so this is, this is Drunk Pigeon Hour! Remember what I said earlier when I was like, as soon as we are recording, as soon as we press record, it's going to get weird and awkward. LAURA I am actually interested in the types of ads people get on Twitter. Like, just asking around, because I find that I get either, like, DeSantis ads. I get American Petroleum Institute ads, Ashdale College. MATT Weirdly, I've been getting ads for an AI assistant targeted at lobbyists. So it's, it's like step up your lobbying game, like use this like tuned, I assume it's like tuned ChatGPT or something. Um, I don't know, but it's, yeah, it's like AI assistant for lobbyists, and it's like, like, oh, like your competitors are all using this, like you need to buy this product. So, so yeah, Twitter thinks I'm a lobbyist. I haven't gotten any DeSantis ads, actually. AARON I think I might just like have personalization turned off. Like not because I actually like ad personalization. I think I'm just like trying to like, uh, this is, this is like a half-baked protest of them getting rid of circles. 
I will try to minimize how much revenue they can make from me. MATT So, so when I, I like went through a Tumblr phase, like very late. In like 2018, I was like, um, like I don't like, uh, like what's happening on a lot of other social media. Like maybe I'll try like Tumblr as a, as an alternative. And I would get a lot of ads for like plus-sized women's flannels. So, so like the Twitter ad targeting does not faze me because I'm like, oh, okay, like, I can, hold on. AARON Sorry, keep going. I can see every ad I've ever. MATT Come across, actually, in your giant CSV of Twitter data. AARON Just because I'm a nerd. I like, download. Well, there's actually a couple of things. I just download my Twitter data once in a while. Actually do have a little web app that I might try to improve at some point, which is like, you drop it in and then it turns them. It gives you a csV, like a spreadsheet of your tweets, but that doesn't do anything with any of the other data that they put in there. MATT I feel like it's going to be hard to get meaningful information out of this giant csv in a short amount of time. AARON It's a giant JSON, actually. MATT Are you just going to drop it all into c long and tell it to parse it for you or tell it to give you insights into your ads. AARON Wait, hold on. This is such a. MATT Wait. Do people call it “C-Long” or “Clong”? AARON Why would it be long? MATT Well, because it's like Claude Long. LAURA I've never heard this phrase. MATT This is like Anthropic’s chat bot with a long context with so like you can put. Aaron will be like, oh, can I paste the entire group chat history? AARON Oh yeah, I got clong. Apparently that wasn't acceptable so that it. MATT Can summarize it for me and tell me what's happened since I was last year. And everyone is like, Aaron, don't give our data to Anthropic, is already suss. LAURA Enough with the impressions feel about the Internet privacy stuff. Are you instinctively weirded out by them farming out your personal information or just like, it gives me good ads or whatever? I don't care. MATT I lean a little towards feeling weird having my data sold. I don't have a really strong, and this is probably like a personal failing of mine of not having a really strong, well formed opinion here. But I feel a little sketched out when I'm like all my data is being sold to everyone and I don't share. There is this vibe on Twitter that the EU cookies prompts are like destroying the Internet. This is regulation gone wrong. I don't share that instinct. But maybe it's just because I have average tolerance for clicking no cookies or yes cookies on stuff. And I have this vibe that will. AARON Sketch down by data. I think I'm broadly fine with companies having my information and selling it to ad targeting. Specifically. I do trust Google a lot to not be weird about it, even if it's technically legal. And by be weird about it, what do mean? Like, I don't even know what I mean exactly. If one of their random employees, I don't know if I got into a fight or something with one of their random employees, it would be hard for this person to track down and just see my individual data. And that's just a random example off the top of my head. But yeah, I could see my view changing if they started, I don't know, or it started leaching into the physical world more. But it seems just like for online ads, I'm pretty cool with everything. LAURA Have you ever gone into the ad personalization and tried see what demographics they peg you? AARON Oh yeah. 
We can pull up mine right now. LAURA It's so much fun doing that. It's like they get me somewhat like the age, gender, they can predict relationship status, which is really weird. AARON That's weird. MATT Did you test this when you were in and not in relationships to see if they got it right? LAURA No, I think it's like they accumulate data over time. I don't know. But then it's like we say that you work in a mid sized finance. Fair enough. MATT That's sort of close. LAURA Yeah. AARON Sorry. Keep on podcasting. LAURA Okay. MATT Do they include political affiliation in the data you can see? AARON Okay. MATT I would have been very curious, because I think we're all a little bit idiosyncratic. I'm probably the most normie of any of us in terms of. I can be pretty easily sorted into, like, yeah, you're clearly a Democrat, but all of us have that classic slightly. I don't know what you want to call it. Like, neoliberal project vibe or, like, supply side. Yeah. Like, some of that going on in a way that I'm very curious. LAURA The algorithm is like, advertising deSantis. AARON Yeah. MATT I guess it must think that there's some probability that you're going to vote in a republican primary. LAURA I live in DC. Why on earth would I even vote, period. MATT Well, in the primary, your vote is going to count. I actually would think that in the primary, DC is probably pretty competitive, but I guess it votes pretty. I think it's worth. AARON I feel like I've seen, like, a. MATT I think it's probably hopeless to live. Find your demographic information from Twitter. But, like. AARON Age 13 to 54. Yeah, they got it right. Good job. I'm only 50, 99.9% confident. Wait, that's a pretty General. MATT What's this list above? AARON Oh, yeah. This is such a nerd snipe. For me, it's just like seeing y'all. I don't watch any. I don't regularly watch any sort of tv series. And it's like, best guesses of, like, I assume that's what it is. It thinks you watch dune, and I haven't heard of a lot of these. MATT Wait, you watch cocaine there? AARON Big bang theory? No, I definitely have watched the big Bang theory. Like, I don't know, ten years ago. I don't know. Was it just, like, random korean script. MATT Or whatever, when I got Covid real bad. Not real bad, but I was very sick and in bed in 2022. Yeah, the big bang theory was like, what I would say. AARON These are my interest. It's actually pretty interesting, I think. Wait, hold on. Let me. MATT Oh, wait, it's like, true or false for each of these? AARON No, I think you can manually just disable and say, like, oh, I'm not, actually. And, like, I did that for Olivia Rodrigo because I posted about her once, and then it took over my feed, and so then I had to say, like, no, I'm not interested in Olivia Rodrigo. MATT Wait, can you control f true here? Because almost all of these. Wait, sorry. Is that argentine politics? AARON No

    1 hr 36 min
  5. January 24, 2024

    Best of Pigeon Hour

    Table of contents Note: links take you to the corresponding section below; links to the original episode can be found there. * Laura Duffy solves housing, ethics, and more [00:01:16] * Arjun Panickssery solves books, hobbies, and blogging, but fails to solve the Sleeping Beauty problem because he's wrong on that one [00:10:47] * Nathan Barnard on how financial regulation can inform AI regulation [00:17:16] * Winston Oswald-Drummond on the tractability of reducing s-risk, ethics, and more [00:27:48] * Nathan Barnard (again!) on why general intelligence is basically fake [00:34:10] * Daniel Filan on why I'm wrong about ethics (+ Oppenheimer and what names mean in like a hardcore phil of language sense) [00:56:54] * Holly Elmore on AI pause, wild animal welfare, and some cool biology things I couldn't fully follow but maybe you can [01:04:00] * Max Alexander and I solve ethics, philosophy of mind, and cancel culture once and for all [01:24:43] * Sarah Woodhouse on discovering AI x-risk, Twitter, and more [01:30:56] * Pigeon Hour x Consistently Candid pod-crossover: I debate moral realism with Max Alexander and Sarah Hastings-Woodhouse [01:41:08] Intro [00:00:00] To wrap up the year of Pigeon Hour, the podcast, I put together some clips from each episode to create a best-of compilation. This was inspired by 80,000 Hours, a podcast that did the same with their episodes, and I thought it was pretty cool and tractable enough. It's important to note that the clips I chose range in length significantly. This does not represent the quality or amount of interesting content in the episode. Sometimes there was a natural place to break the episode into a five-minute chunk, and other times it wouldn't have made sense to take a five-minute chunk out of what really needed to be a 20-minute segment. I promise I'm not just saying that. So without further ado, please enjoy. #1: Laura Duffy solves housing, ethics, and more [00:01:16] In this first segment, Laura, Duffy, and I discuss the significance and interpretation of Aristotle's philosophical works in relation to modern ethics and virtue theory. AARON: Econ is like more interesting. I don't know. I don't even remember of all the things. I don't know, it seems like kind of cool. Philosophy. Probably would have majored in philosophy if signaling wasn't an issue. Actually, maybe I'm not sure if that's true. Okay. I didn't want to do the old stuff though, so I'm actually not sure. But if I could aristotle it's all wrong. Didn't you say you got a lot out of Nicomachi or however you pronounce that? LAURA: Nicomachian ethics guide to how you should live your life. About ethics as applied to your life because you can't be perfect. Utilitarians. There's no way to be that. AARON: But he wasn't even responding to utilitarianism. I'm sure it was a good work given the time, but like, there's like no other discipline in which we care. So people care so much about like, what people thought 2000 years ago because like the presumption, I think the justified presumption is that things have iterated and improved since then. And I think that's true. It's like not just a presumption. LAURA: Humans are still rather the same and what our needs are for living amongst each other in political society are kind of the same. I think America's founding is very influenced by what people thought 2000 years ago. AARON: Yeah, descriptively that's probably true. 
But I don't know, it seems like all the whole body of philosophers have they've already done the work of, like, compressing the good stuff. Like the entire academy since like, 1400 or whatever has like, compressed the good stuff and like, gotten rid of the bad stuff. Not in like a high fidelity way, but like a better than chance way. And so the stuff that remains if you just take the state of I don't know if you read the Oxford Handbook of whatever it is, like ethics or something, the takeaways you're going to get from that are just better than the takeaways you're going to get from a summary of the state of the knowledge in any prior year. At least. Unless something weird happened. And I don't know. I don't know if that makes sense. LAURA: I think we're talking about two different things, though. Okay. In terms of knowledge about logic or something or, I don't know, argumentation about trying to derive the correct moral theory or something, versus how should we think about our own lives. I don't see any reason as to why the framework of virtue theory is incorrect and just because it's old. There's many virtue theorists now who are like, oh yeah, they were really on to something and we need to adapt it for the times in which we live and the kind of societies we live in now. But it's still like there was a huge kernel of truth in at least the way of thinking that Aristotle put forth in terms of balancing the different virtues that you care about and trying to find. I think this is true. Right? Like take one virtue of his humor. You don't want to be on one extreme where you're just basically a meme your entire life. Everybody thinks you're funny, but that's just not very serious. But you don't want to be a boar and so you want to find somewhere in the middle where it's like you have a good sense of humor, but you can still function and be respected by other people. AARON: Yeah. Once again, I agree. Well, I don't agree with everything. I agree with a lot of what you just said. I think there was like two main points of either confusion or disagreement. And like, the first one is that I definitely think, no, Aristotle shouldn't be discounted or like his ideas or virtue ethics or anything like that shouldn't be discounted because they were canonical texts or something were written a long time ago. I guess it's just like a presumption that I have a pretty strong presumption that conditional on them being good, they would also be written about today. And so you don't actually need to go back to the founding texts and then in fact, you probably shouldn't because the good stuff will be explained better and not in weird it looks like weird terms. The terms are used differently and they're like translations from Aramaic or whatever. Probably not Aramaic, probably something else. And yeah, I'm not sure if you. LAURA: Agree with this because we have certain assumptions about what words like purpose mean now that we're probably a bit richer in the old conception of them like telos or happiness. Right. Udaimnia is much better concept and to read the original text and see how those different concepts work together is actually quite enriching compared to how do people use these words now. And it would take like I don't know, I think there just is a lot of value of looking at how these were originally conceived because popularizers of the works now or people who are seriously doing philosophy using these concepts. 
You just don't have the background knowledge that's necessary to understand them fully if you don't read the canonical text. AARON: Yeah, I think that would be true. If you are a native speaker. Do you know Greek? If you know Greek, this is like dumb because then you're just right. LAURA: I did take a quarter of it. AARON: Oh God. Oh my God. I don't know if that counts, but that's like more than anybody should ever take. No, I'm just kidding. That's very cool. No, because I was going to say if you're a native speaker of Greek and you have the connotations of the word eudaimonia and you were like living in the temper shuttle, I would say. Yeah, that's true actually. That's a lot of nuanced, connotation and context that definitely gets lost with translation. But once you take the jump of reading English translations of the texts, not you may as well but there's nothing super special. You're not getting any privileged knowledge from saying the word eudaimonia as opposed to just saying some other term as a reference to that concept or something. You're absorbing the connotation in the context via English, I guess, via the mind of literally the translators who have like. LAURA: Yeah, well see, I tried to learn virtue theory by any other route than reading Aristotle. AARON: Oh God. LAURA: I took a course specifically on Plato and Aristotle. AARON: Sorry, I'm not laughing at you. I'm just like the opposite type of philosophy person. LAURA: But keep going. Fair. But she had us read his physics before we read Nicomachi. AARON: Think he was wrong about all that. LAURA: Stuff, but it made you understand what he meant by his teleology theory so much better in a way that I could not get if I was reading some modern thing. AARON: I don't know, I feel like you probably could. No, sorry, that's not true. I don't think you could get what Aristotle the man truly believed as well via a modern text. But is that what you? Depends. If you're trying to be a scholar of Aristotle, maybe that's important. If you're trying to find the best or truest ethics and learn the lessons of how to live, that's like a different type of task. I don't think Aristotle the man should be all that privileged in that. LAURA: If all of the modern people who are talking about virtue theory are basically Aristotle, then I don't see the difference. AARON: Oh, yeah, I guess. Fair enough. And then I would say, like, oh, well, they should probably start. Is that in fact the state of the things in virtue theory? I don't even know. LAURA: I don't know either. #2 Arjun Panickssery solves books, hobbies, and blogging, but fails to solve the Sleeping Beauty problem because he's wrong on that one [00:10:47] All right, next, Arjun Panixery and I explore the effectiveness of reading books in retaining and incorporating knowledge, discussing the value of long form content and the impact of great literary works on understanding and shaping personal worldviews. ARJUN: Oh, you were in the book chat, though. The book rant group chat, right? AARON: Yeah, I think I might have just not read any of it. So do you want to fil

    1 hr 48 min
  6. December 28, 2023

    #10: Pigeon Hour x Consistently Candid pod-crossover: I debate moral realism* with Max Alexander and Sarah Hastings-Woodhouse

    Intro At the gracious invitation of AI Safety Twitter-fluencer Sarah Hastings-Woodhouse, I appeared on the very first episode of her new podcast “Consistently Candid” to debate moral realism (or something kinda like that, I guess; see below) with fellow philosophy nerd and EA Twitter aficionado Max Alexander, alongside Sarah as moderator and judge of sorts. What I believe In spite of the name of the episode and the best of my knowledge/understanding a few days ago, it turns out my stance may not be ~genuine~ moral realism. Here’s my basic meta-ethical take: * Descriptive statements that concern objective relative goodness or badness (e.g., "it is objectively for Sam to donate $20 than to buy an expensive meal that costs $20 more than a similar, less fancy meal”) can be and sometimes are true; but * Genuinely normative claims like “Sam should (!) donate $20 and should not buy that fancy meal” are never objectively true. Of course the label per se doesn’t really matter. But for a bunch of reasons it still seems wise to figure out which label really does work best. Some definitions Stanford Encyclopedia of Philosophy: Moral realists are those who think that, in these respects, things should be taken at face value—moral claims do purport to report facts and are true if they get the facts right. Moreover, they hold, at least some moral claims actually are true. That much is the common and more or less defining ground of moral realism (although some accounts of moral realism see it as involving additional commitments, say to the independence of the moral facts from human thought and practice, or to those facts being objective in some specified way). Wikipedia: The robust model of moral realism commits moral realists to three theses: * The semantic thesis: The primary semantic role of moral predicates (such as "right" and "wrong") is to refer to moral properties (such as rightness and wrongness), so that moral statements (such as "honesty is good" and "slavery is unjust") purport to represent moral facts, and express propositions that are true or false (or approximately true, largely false, and so on). * The alethic thesis: Some moral propositions are in fact true. * The metaphysical thesis: Moral propositions are true when actions and other objects of moral assessment have the relevant moral properties (so that the relevant moral facts obtain), where these facts and properties are robust: their metaphysical status, whatever it is, is not relevantly different from that of (certain types of) ordinary non-moral facts and properties. And to settle the matter, here’s what some LLM have to say. My prompt: Suppose a person believes the following: 1: Descriptive statements that concern objective relative goodness or badness (e.g., "it is objectively for Sam to donate $20 than to buy an expensive meal that costs $20 more than a similar, less fancy meal”) can be and sometimes are true; but 2: Genuinely normative claims like “Sam should donate $20 and should not buy that fancy meal” are never objectively true. Is this person best categorized as a moral realist? ChatGPT Given these two beliefs, the person's stance doesn't align completely with traditional moral realism, which typically asserts that there are objective truths about both moral facts (like the goodness or badness of actions) and moral duties (what one should or should not do). However, it also doesn't fully align with moral anti-realism, since the person believes in some form of objective truth regarding moral value. 
One possible categorization for this stance could be a form of "value realism" combined with "normative anti-realism" or "normative subjectivism." This would mean that the person believes in objective truths about what is better or worse (value realism) but thinks that what one ought to do (normative claims) is not a matter of objective truth but perhaps of subjective judgment or societal conventions. This position is quite sophisticated and might be best understood as a hybrid or intermediate position in the landscape of moral philosophy, rather than fitting neatly into the category of moral realism or its standard alternatives. Great, so much for trying to fit things into nice little buckets. Claude, fortunately, is more definitive: No, the view described is not best categorized as moral realism. Here's why: Moral realism is the view that there are objective moral facts or truths. A moral realist believes that some moral claims, including normative claims about what one should or shouldn't do, can be objectively and mind-independently true. The view in the example accepts that there can be objective truths about relative degrees of goodness or badness (claim 1). However, it denies that there are objective normative truths about what one should or shouldn't do (claim 2). By rejecting that genuine normative 'should/shouldn't' claims can be objectively true, the view denies a key commitment of moral realism - namely that moral claims can state objective normative facts. At best it allows for objective evaluative facts about goodness/badness, not prescriptive facts about obligations. So this view would be more accurately categorized as a form of moral anti-realism like non-cognitivism or error theory. It does not qualify as moral realism because of its rejection of the existence of objective normative truths that could make strong normative claims justified or true. The commitment to objective evaluative facts alone does not sufficiently qualify the view as realist in the metaethical sense. In summary, the view accepts objective "good/bad" facts but denies objective "should/shouldn't" facts. This combination puts it at odds with core commitments of moral realism, so it is better seen as an anti-realist view. I’m gonna disagree with Claude here, at least in that the bolded claim above (objective normative facts are a key part of moral realism) just seems…wrong? Like all the definitions could replace the term “moral fact” or “moral claim” with “normative fact” and “normative claim” but they do not! So I really don’t have a great answer to which label works best. One of two ~human level chatbots giving me a definitive-sounding “anti-realism” answer is too much evidence for me, whose only formal philosophy training is an undergrad minor, to rule that one out. There are also good arguments, I think, for the “realist label,” as well as for “neither” (i.e., ‘secret third thing’). In fact all of these seem pretty similar in terms of argument convincingness/correctness. So, in sum, 🤷‍♂️. Some tweets on the matter, both of which are images that link to the original: Ok now you can vote: Anyway, long story short, I believe something kinda sorta like moral realism, maybe, and that is what we discuss. Without further ado, here’s the (very imperfect) transcript of the episode. Please note that some of the long blocks of text have been cleaned up a bit via minor rewording. I am too lazy to do that for everything. Transcript Sarah: Hello, and welcome to my first experiment in low effort podcasting. 
    In this episode of the podcast, which I'm now calling Consistently Candid because some people thought that was funny, I talked to Aaron Bergman and Max Alexander about moral realism.
    Sarah: They kind of debate it. And I, having read the Wikipedia page about five minutes previously, occasionally chime in with some opinions that I hadn't thought out very well. So enjoy!
    Sarah: Anyway, I guess this is my podcast now, but I don't have a name for it yet.
    Max: That's a good podcast name.
    Sarah: Introduce it.
    Aaron: Can I broadcast this on Pigeon Hour as well?
    Sarah: Yeah, sure.
    Max: Okay, cool.
    Aaron: Sweet.
    Sarah: But I also want to make my own thing because people.
    Aaron: No, totally. But yeah, you can say no, you can copyright it and then sue me.
    Sarah: No. Well, that's fine. This is totally, like, anyone can broadcast it anywhere they want.
    Max: You can text or whatever, get on the Trump website.
    Sarah: Yeah. So you guys have a disagreement, apparently, about moral realism. I have briefly skimmed the Wikipedia page, and I don't have an opinion, but I thought we have it out.
    Aaron: No, I feel like the format should be that we try to convince you…
    Sarah: So, yeah, you try and convince me that you each try and convince me you're right, and I will come to a conclusion and let you know who I'm persuaded by. And if at any point I have, like, a thought that's worth articulating, I'll weigh in with that. But I think that's kind of unlikely because I don't really know anything; I'm playing a moderating role here.
    Max: Well, confusion is worth pointing out or something like that, right?
    Sarah: Yeah, I can do that at regular intervals. I can tell you how confused I am. That's definitely doable.
    Aaron: Maybe you should start with, like, do you have an initial take at all, or are you really 50/50?
    Sarah: I mean, from very briefly reading the Wikipedia, it like doesn't sound true to me.
    Max: Oh, hell yeah!
    Aaron: No. Okay, podcast over.
    Max: Way over the Wikipedia just to see what it says. Did you read actual Wikipedia?
    Sarah: Wikipedia? Yeah, it says "moral realism (also, ethical realism) is the position that ethical sentences express propositions that refer to objective features of the world. That is, features independent of subjective opinion."
    Aaron: Yeah, facts.
    Max: Good summary from Wikipedia.
    Sarah: Fake.
    Max: My job is going to be easy.
    Sarah: Then, but I'm totally open to be persuaded.
    Aaron: Okay. The first thing is that I recognize that it sounds fake, it sounds very sus, but then it actually surprisingly checks out. So I just want to get that on the table.
    Sarah: Okay, what about if each of you do, like, a little opening spiel about why you think you're right, and then you can yell at each other about it afterwards.
    Aaron: Yeah. Max, do you want to go first or second?
    Max: I'll go second.
    Sarah: Okay.
    Aaron: Well, th

    1 hr 8 min

