Pigeon Hour

Aaron Bergman

Recorded conversations; a minimal viable pod www.aaronbergman.net

  1. JAN 25

    #15: Robi Rahman and Aaron tackle donation diversification, decision procedures under moral uncertainty, and other spicy topics

    Summary In this episode, Aaron and Robi reunite to dissect the nuances of effective charitable giving. The central debate revolves around a common intuition: should a donor diversify their contributions across multiple organizations, or go “all in” on the single best option? Robi breaks down standard economic arguments against splitting donations for individual donors, while Aaron sorta kinda defends the “normie intuition” of diversification. The conversation spirals into deep philosophical territory, exploring the “Moral Parliament” simulator by Rethink Priorities and various decision procedures for handling moral uncertainty—including the controversial “Moral Marketplace” and “Maximize Minimum” rules. They also debate the validity of Evidential Decision Theory as applied to voting and donating, discuss moral realism, and grapple with “Unique Entity Ethics” via a thought experiment involving pigeons, apples, and 3D-printed silicon brains. Topics Discussed * The Diversification Debate: Why economists and Effective Altruists generally advise against splitting donations for small donors versus the intuitive appeal of a diversified portfolio. * The Moral Parliament: Using a parliamentary metaphor to resolve internal conflicts between different moral frameworks (e.g., Utilitarianism vs. Deontology). * Decision Rules: An analysis of different voting methods for one’s internal moral parliament, including the “Moral Marketplace,” “Random Dictator,” and the “Maximize Minimum” rule. * Pascal’s Mugging & “Shrimpology”: Robi’s counter-argument to the “Maximize Minimum” rule using an absurd hypothetical deity. * Moral vs. Empirical Uncertainty: Distinguishing between not knowing which charity is effective (empirical) and not knowing which moral theory is true (moral), and how that changes donation strategies. * Voting Theory & EDT: Comparing donation logic to voting logic, specifically regarding Causal Decision Theory vs. Evidential Decision Theory (EDT). * Donation Timing: Why the ability to coordinate and see neglectedness over time makes donation markets different from simultaneous elections. * Moral Realism: A debate on whether subjective suffering translates to objective moral facts. * The Repugnant Conclusion: Briefly touching on population ethics and “Pigeon Hours.” * Unique Entity Ethics: A thought experiment regarding computational functionalism: Does a silicon chip simulation of a brain double its moral value if you make the chip twice as thick? Transcript AI generated, likely imperfect AARON Cool. So we are reporting live from Washington DC and New York. You’re New York, right? ROBI Mm-hmm. AARON Yes. Uh, I have strep throat, so I’m not actually feeling 100%, but we’re still gonna make a banger podcast episode. ROBI Um, I might also, yeah. AARON Oh, that’s very exciting. So this is— hope you’re doing okay. It was— I hope you’re— if you, if you, like, it was surprisingly easy to get, to get, uh, tested and prescribed antibiotics. So that might be a thing to consider if you have, uh, you think you might have something. Um, mm-hmm. So we, like, a while ago— should we just jump in? I mean, you know, we can cut. ROBI Stuff or whatever, but— Yeah, um, you can explain, uh, so I talked to Max, uh, like 13 months ago. AARON It’s been a little while. Yeah, yeah. Oh yeah, yeah. And so this is, um, I just had takes. So actually, this is for, for the, uh, you guys talked for the as an incentive for the 2024, uh, holiday season EA Twitter/online giving fundraiser. 
Um, and I listened to the— it was a good— it was a surprisingly good conversation, uh, like totally podcast-worthy. Um, I actually don’t re— wait, did I ever put that on? I’m actually not sure if I ever put that on, um, the Pigeon Hour podcast feed, but I think I will with— I think I got you guys’ permission, but obviously I’ll check again. And then if so, then I, I will. Um, and I just had takes because some of your takes are good, some of your takes are bad. And so that’s what we have to—. ROBI Oh, um, I think your takes about my takes being bad are themselves bad takes. Uh, at least the first 4 in a weird doc that I went through. Um, yeah, I saw you published it somewhere on YouTube, I think. I don’t know if it also went on Pigeon Hour, but it’s up somewhere. AARON Yes, yes. So that we will— I will link that. Uh, people can watch it. There’s a chance I’ll even just like edit these together or something. I’m not really sure. Figure that out later. Um, yeah. Yes. So it’s, yeah, definitely on you. Um, so let me pull up the— no, I, I think at least two of— so I only glanced at what you said. Um, so two of the four points I just agree with. I just like concede because at least one of them. So I just like dumped a ramble into, into some LLM. ROBI Yeah. AARON Like, These aren’t necessarily like the faithful, um, uh, things of what I believe, but like the first one was just, um, so like I have this normie intuition, and I don’t have that many normie intuitions, so like it’s, it’s like a little suspicious that like maybe there’s a, a reason that we should actually diversify donations instead of just maximizing by giving to the one. Mm-hmm. Like just like, yeah, every dollar you just like give to the best place. And that like quite popular smaller donors say people giving less than like $100,000 or quite possibly much more than that, up to like, say, a million or more than that. Um, that just works out as, as donating to like a single organization or project. ROBI Yeah. Okay. Um, I, I think we should explain, uh, what was previously said on this. So there’s some argument over— okay. So like, um, normal people donate some amount of money to charity and they just give like, I don’t know, $50 here and there to every charity that like pitches them and sounds cute or sympathetic or whatever. Um, And then EAs want to, um, first of all, they, I don’t know, strive to give at least 10% or, I don’t know, at least some amount that’s significant to them and, uh, give it to charities that are highly effective, uh, and they try to optimize the impact of those dollars that they donate. Whatever amount you donate, they want to, like, do the most good with it. Um, so the, like, standard economist take on this is, um, So every charity has, uh, or every intervention has diminishing marginal returns, right, to the— or every cause area or every charity, um, possibly every intervention, um, or like at the level of an individual intervention, maybe it’s like flat and then goes to zero if you can’t do any more. Anyway, um, so cause areas or charities have diminishing marginal returns. If you like donate so much money to them, they’re no longer, um, they’ve like done the most high priority thing they can do with that money. And then they move on to other lower priority things. Um, so generally the more money a charity gets, the less, um, the less effective it is per dollar. 
This is all else equal, so this is not like— like, actually, if you know you’re going to get billions of dollars, you can like do some planning and then like use economies of scale. Uh, so it’s like not strictly decreasing in that way with like higher-order effects, but for like Time held constant, if you’re just like donating dollars now, there’s diminishing marginal returns. Okay, so, uh, it is— the economist’s take is like, it is almost always the case that the level of an individual donor who donates something like, let’s say, 10% of $100K, like, the, the world’s best charity is not going to become like no longer the world’s best charity after you donate $10,000. And most people donate like much less than that. So the, uh, like standard advice here is, um, if you are an individual donor, not a like, um, institutional donor or grantmaker or someone directing a ton of funds, um, you should just like take your best guess at the best charity and then donate to that. And then there are ways to optimize this for like bigger amounts. So you’ve probably heard of donor lotteries, which is like 100 or 1,000 people who want to save time all pool their money and then, then someone is picked at random and then they do research and maybe they split those donations 3 ways. Or like it all goes to something, or like— [Speaker:HOWIE] Yeah. AARON [Speaker:Kevin] Hmm. ROBI $10,000 times, uh, 100 or 1,000 is like a million or $10 million. At that level, it’s plausible that you should donate to multiple things. Um, so in that case, maybe it makes sense. AARON Um, so I don’t, I don’t— oh, sorry, go ahead. ROBI Uh, so that’s the standard argument. Um, and, um, I, I’m happy to, um, explain why this still holds, uh, to anyone who is like engaged at least this far. Um, most people haven’t even heard of it and they’re like, um, well, but what if I’m not sure about which of these two things, then I should like donate 50/50 to them. AARON Um, uh. ROBI I’ll let you go on, but I just want to say this is a really lucky time to record this podcast because Yesterday someone replied to me on the EA forum linking to some, um, uh, have you heard of, uh, Rethink Priorities, um, Moral of Parliament simulator? AARON [Speaker:Howie] Yes. ROBI [Speaker:Keiran] Okay, so it has some, um, pretty wacky and out-there decision rules, and, um, so I was— I was arguing with someone on the EA forum about this, like, um, saying Uh, it doesn’t make sense to, um, to, to split your donations, uh, at the level of an individual donor, um, even moral uncertain— and they said, but what about moral uncertainty? What if I’m not sure, like, if animals even matter? Um, uh, and I said, well, even then you should take your, like, probability estimate that animals matter and then get your, like, EV of a dollar to each and then give all of your dollars to whichever is better. Um, and they
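    As a minimal sketch of the expected-value reasoning Robi describes here (all numbers are invented for illustration, not taken from the episode): weight each option's estimated impact per dollar by your credence that its beneficiaries matter morally, then give the entire budget to whichever option comes out ahead.

```python
# Illustrative sketch of the argument Robi makes: under moral uncertainty,
# weight each option's estimated impact per dollar by your credence that
# its beneficiaries matter, then give the whole budget to the winner.
# All numbers below are made up for illustration.

def expected_value_per_dollar(impact_per_dollar: float, credence_matters: float) -> float:
    """Expected moral value of one donated dollar."""
    return impact_per_dollar * credence_matters

budget = 10_000  # a small individual donor

charities = {
    # name: (estimated impact per dollar if the beneficiaries matter, credence they matter)
    "global_health_charity": (1.0, 0.99),
    "animal_welfare_charity": (5.0, 0.40),
}

evs = {
    name: expected_value_per_dollar(impact, credence)
    for name, (impact, credence) in charities.items()
}

best = max(evs, key=evs.get)
print(evs)  # {'global_health_charity': 0.99, 'animal_welfare_charity': 2.0}
print(f"Give the full ${budget:,} to {best}")
# Because a $10k donation is far too small to push the best option onto a
# flatter part of its diminishing-returns curve, splitting only moves money
# from the higher-EV option to a lower-EV one.
```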

    1h 6m
  2. JAN 25

    Vegan Hot Ones | EA Twitter Fundraiser 2024

    A great discussion between my two friends Max Alexander of Scouting Ahead and Robi Rahman (in response to a fundraiser that we wrapped up more than 13 months ago)Tweet with context: https://x.com/AaronBergman18/status/1999918243205779864?s=20 Transcript (AI-generated, likely imperfect) MAX Hello to the internet, maybe. ROBI Hey internet. MAX Um, I’m Max. ROBI I’m Robi. MAX Um, thank you all for donating, especially you. Um, so we’re gonna do a vegan version of Hot Ones. I actually don’t know if the camera can properly see. I mean, we took a photo as well, so someone will see it eventually. Um, but I have some very not spicy questions for you, and I hope. ROBI They get spicy, right? MAX Do you think it’s a little spicy? ROBI Um, I don’t know. Anyway. MAX Yeah, they’re not, you know, I’m sure someone will judge me greatly for this online. ROBI Um, yeah, the, uh, the food is spicy at least, or it gets a bit spicy. So, um, we’ve got, um, uh, we’ve got field roast, uh, buffalo wings without the buffalo sauce. We’ve got some spices on them. We’ve got, uh, Jack and Annie jackfruit nuggets and Impossible fake chicken nuggets, uh, with— my god, Sriracha, um, spicy chili. MAX Crisp. ROBI Calabrian hot chili powder, habanero hot salsa, Scotch bonnet puree, Elijah’s Extreme Regret Screamin’ Hot, um, Scorpion Reaper hot sauce. MAX Cool. ROBI And, um, some, uh, Dave’s Hot Chicken. MAX Reaper seasoning and Carolina I’m going to have a much worse time than you are. ROBI I’m looking forward to this. MAX Yeah, uh, I guess I think in tradition of hot ones, um, the guest, um, introduces themselves and like says a background. So I don’t know if you want. ROBI To— okay, yeah, um, let’s see, um, I’ve been involved in EA for— well, I think the first meetup I went to was 2017. Um, they, uh, EA was much smaller then and, uh, we didn’t have our own meetups. They were, um, the DCEA meetup group was, uh, combined with a vegan feminist environmentalist— [Speaker:MAX] That’s cool. [Speaker:ROBI] —something meetup. [Speaker:MAX] Yeah, nice. [Speaker:ROBI] Eventually we, we had enough EAs that we, you know, spun off our own, uh, effective altruism only thing. [Speaker:MAX] Cool. [Speaker:ROBI] Yeah, um, yeah, but, uh, that was fun. Um, that was also the first year I played giving games. Um, And then, uh, I was, I was kind of a global health person back then, but, um, um, Matt Ginsel was way ahead of his time, and he, um, like in the Giving Games, you get to— you like play all the games like poker or like whatever, whatever, and you win the chips, and then at the end you put the chips in, into the box for whatever charity you think should get the money. And, um, He surprised me by donating to pandemic prevention, which wasn’t even on my radar then. And then, like, 3 years later, he was totally right. MAX Yeah, unfortunately. ROBI Yeah. Uh, yeah. MAX And now you work at Epoch. ROBI I work at Epoch. Yeah. Um, I do AI forecasting, basically. My job is kind of to figure out when everyone else’s job will be automated. Delightful. MAX You know? Yeah. Cool. Um, yeah, I guess maybe our very lukewarm, uh, question is, uh, which do you think is better, fuel or soil land? ROBI Um, I think I prefer Soylent for the drinks. MAX [Speaker:Robi] Interesting. ROBI [Speaker:Max] But, um, Hewlett Hot Savory was great. They’ve recently rebranded, right? [Speaker:ROBI] I don’t know. [Speaker:MAX] Hot Savory to, um, Instant Meals or like, something like that? I haven’t bought it in a while. 
MAX [Speaker:Robi] I, yeah, I bought some for the fundraiser. ROBI [Speaker:Max] Should we eat some lukewarm nuggets to go with the lukewarm questions? MAX [Speaker:Robi] Yeah, yeah, exactly. ROBI [Speaker:Max] So let’s start off with the Chili Crisp, um, uh, buffalo wing. [Speaker:ROBI] Okay. [Speaker:MAX] Cheers. MAX Yeah, that’s not that spicy. ROBI [Speaker:Robi] Eat the whole thing. MAX [Speaker:Max] Oh no. ROBI I’m sorry. It’s so far. Chicken nugget. [Speaker:ROBI] Yeah, um, yeah, I don’t think I would— I don’t know if I would notice that’s not chicken. MAX [Speaker:Max] Oh yeah. For sure. ROBI I mean, I’m not a huge fan of chicken nuggets anyway, but yeah. Um. MAX Cool. Okay, um, let’s see. ROBI Uh. MAX Okay, well, this one’s a little spicy at least. Uh, what’s one thing you think everyone in EA is getting wrong? ROBI Um, I’m kind of like very EA orthodox, and I think EA is like basically right about everything. Um, the The thing I think EAs get wrong— I think the, um, I don’t believe in the, like, perils of maximizing stuff, or like— like, maximizing does have the problems that they point out, but like, I don’t think anyone has a good argument that, like, you should not maximize. MAX Sure. ROBI I think all of the, like— I don’t know, I just bite the bullet. I’m taking everything to the— like, if the principles are right and you have the facts, yeah, the conclusion is what it is. MAX Okay, well, that’s good. I think I have a question later that’s like Is the repugnant conclusion actually repugnant? ROBI I’ll have some thoughts on that. Yeah, I think I basically disagree with Holden Karnofsky and Scott Alexander on, like, you should get off the crazy train if it seems too weird. Like, no, if the reasoning checks out, you should do what you should do. MAX Cool. ROBI Yeah, I kind of think— this might be a bit spicy— Okay. I kind of think, um, they are— I slightly suspect they’re just saying that as cover, like after the FTX scandal and whatnot. Like, no, no, no, no, we don’t really believe in that stuff where you like take it to the extreme and like, yeah, yeah, yeah. MAX That is plausible. I don’t know Holden, so I cannot say for sure. ROBI Neither do I, but I’d like to think he’s smarter than to— sure. MAX Yeah, yeah. Um, cool. Yeah, though Yeah, I mean EA is a whole big thing, so, you know, um, cool, that’s a good one. That’s a— if you brought that to a party, you know, you would start a 3-hour discussion, sort of. ROBI No, I think that would be like, um, a 30th percentile EA spicy opinion. MAX Well, yeah, but then the other people, you like start the whole thing and they, uh, yeah, cool. ROBI Um, cool. MAX Oh wait, should we eat another thing first? ROBI Yeah, how many questions are there? MAX 16? I have 16, but some of them are like not— Yeah, 2, 3 questions. Okay, cool. ROBI Um, yeah, uh, so you spoke at UHG once, right? I— not— I wasn’t quite a speaker. I was a, um, I ran a session. Yeah, it was, but it was, um, it was like a forecasting interactive exercise. So it was a, like, short presentation, and then we did a workshop. MAX Cool. ROBI Yeah, I think the EAG team has been trying to move away from static content and lectures, because EA has this meme of, like, you don’t go for the content, you go for the one-on-ones. Or a lot of people say, like, well, why should I watch a talk when my time is scarce and I could just watch it on YouTube anyway at 2x speed, thereby saving all this time? I don’t think people would— I don’t think the counterfactual is actually watching. 
I think it’s just never seeing the talk. Exactly. MAX Yeah. ROBI But, but, um, And there have been some really good talks at the AGs. Kevin Esvelt at EAJxBoston was incredible. Yeah, very, very good biosecurity presentation. But yeah, so I offered to— or like was, you know, talking to the content team about like they might have wanted a presentation, but they didn’t want it to just be a lecture. I could just give an Epoch spiel, but I think it was more fun with, you know, people who are in current views. MAX [Speaker:Max] Cool. Yeah, I guess if you were to do it now, has anything changed or. ROBI Is it mostly the— [Speaker:ROBI] Well, I would fix— one of my forecasting questions had a loophole. I think we were— so Matthew Barnett is another AI forecasting guy. He has just left Epoch to form a startup. Spicier than anything I’m doing. I can talk about that later. MAX [Speaker:Max] Yes, that’s a good question actually. ROBI Well, I’ll finish. Um, Matthew and I, you know, uh, had some questions. We adapted them for the UAG format. Um, I think I made some last-minute changes and then overlooked a loophole, which was— so the, um, I don’t remember what it was exactly, but it, it was something like one of the questions ended up being like— so there were 3 big questions of like different domains. Um, one was like superhuman in math, one was like, um, do all like households tasks by inventing robotics, and one was, um, um, synthetic biology capabilities. And one, uh, the last question was something like, um, when will it be possible to, with the aid of AI, invent a virus at least— like, synthesize a virus at least as dangerous as COVID or something. But I think I edited it last minute and then left some loophole where someone raised their hand and was like, “Well, you can already acquire a sample of a virus at least as dangerous as COVID by getting a sample of COVID.” Simply just have someone sneeze and then deliver it. So AI can already do that. But that’s not the point of the question. No, it was something like, “When will a rogue terror— when will it be possible for a rogue terrorist group with the aid of AI to get a sample of a virus at least as dangerous as COVID?” And they can already get COVID. Yeah, yeah, yeah, yeah. MAX Uh, yeah, cool. ROBI Uh, that wasn’t the exact question, but something like that. Yeah, nice. MAX Um, cool, that’s very fun. Yeah. ROBI Um. MAX Let’S see, uh, I guess, yeah, so if you kind of weren’t in EA now, is there like a career you would— do you have like a dream career that you’re like, ah, it’s just not impactful enough? ROBI So, um, that is a great question. I really like data science. Um, this is a little suspicious. Um, like Maybe I would do the same thing anyway. But yea

    49 min
  3. 05/14/2025

    #14: Jesse Smith on HVAC, indoor air quality, and generally being an extremely based person

    Summary Join host Aaron with Jesse Smith, a self-described "unconventional EA" (Effective Altruist) who bridges blue-collar expertise with intellectual insight. Jesse recounts his wild early adventures in Canadian "bush camps," from planting a thousand trees daily as a teen to remote carpentry with helicopter commutes. Now a carpenter, HVAC technician, and business owner (Tay River Builders), he discusses his Asterisk magazine article, "Lies, Damned Lies, and Manometer Readings." Discover the HVAC industry's surprising shortcomings, the difficulty of achieving good indoor air quality (even for the affluent!), and the systemic issues impacting public health and climate goals, with practical insights on CO2 and radon monitors like the Airthings View Plus. Jesse’s links * Lies, Damned Lies, and Manometer Readings, the Asterisk magazine article discussed at length * Tay River Builders, his contracting company * Willard Brothers Woodcutters, his wood store (and its viral Instagram page) * Jesse on Twitter * The Airthings View Plus air quality monitor discussed (currently $239 on Amazon) (no they’re not paying either of us for this but they should Transcript Aaron: Okay. First recorded pigeon hour in a while. I'm here with Jesse Smith resident dad of EA Twitter. I don't know if. I don't know if you'll accept that. Accept that honor. Okay, cool. and I actually, we haven't chatted, face to face in, like, a while, but I know you have, like, a really interesting. You're very, like, unconventional EA in some respects. Do you want to, like, give me your whole like, life story? In brief? Jesse: okay. So I guess one thing is that I'm super old for EA, right? like. And so being a dad and owning, like, a kind of normal business, I guess another is kind of more blue collar background, right? So, I was originally a carpenter also, then took on being an HVAC technician. So I, the businesses that I own Kind of like focus on a little bit of both those. yeah. So like my my background. I was raised in Canada. I left school, I didn't go to college. yeah. I went into, like, after a few years of, like a few years after high school, went into the trades basically. Aaron: Okay. Yeah. Nice. Okay. Like. Yes, I think that. Yeah, that definitely, like, makes you at least, at least stereotypically. But I think also like, in real life, like, there just aren't that many, like, carpentry businessmen who are, like happen to, like, hang out on Twitter also. So no, this is like legitimately really interesting. And at one point, I swear, I thought you went to Princeton. You must have mentioned the city, and I must have interpreted it as the town. Jesse: Yeah, my my brother, my brothers and I lived in Princeton for quite a while. Two of my brothers actually. Aaron: Still. Jesse: Okay around Princeton. I'm not far from Princeton. That's kind of the area where we work. it is where my dad went to grad school, so could have been that as well. so. Yeah. Yeah, but I did not attend Princeton. Aaron: I mean. Jesse: I worked on some of their buildings, but I have not attended. Aaron: Maybe, maybe that was where I got the that like. Yeah, like like myth from. So I know I have, I got like a couple at least. Matt. Matt from Twitter sent in a question, but I as usual, I've done a minimal level of preparation. So also we can we can talk about talk about truly whatever, but like, maybe. Yeah. So how did you, how did you, like, find out about Yale's? Like one thing. Jesse: Well, yeah. Okay. So there's some, I guess, some weird stuff. 
So I was fairly enamored with Peter Singer. Kind of just like starting with the book Animal Liberation. It would have been. I forget when he wrote that. Like it would have been years after he wrote it, right? Because I think he wrote it in even when, like when I was super young. But I probably read that in my late teens. Okay. And so, yeah. Aaron: That's that's the 1975 book. Jesse: So yeah, that sounds right. Yeah. I was going to guess the 70s. Right. So I was like. Aaron: Nice. Jesse: Nice or something. Right. So what. Aaron: Year old? Jesse: Yes, exactly. But so so when I was 16, I briefly dropped out of high school and I was working. This is really weird. I was working in these, like, bush camps in Canada. It's somewhat popular to do this. And so, like, I was 16, I celebrated my 17th birthday in a bush camp. That was like a tree planting bush camp. But this. Okay, so this is really weird. It sounds like this is like core blue collar, but it's not quite. The guy who owned the company I was working for was a friend of my dad's, and he was Baha'i and vegetarian. And so he had these vegetarian bush camps that we planted trees and did like some brushing out of. Right. So we ran like brush saws and stuff. And so I sort of I think that's kind of what, like I became a vegetarian out of those camps and was reading kind of Peter Singer's stuff at the time. And I think partly being there made me realize like, oh, this is going to not be that difficult. A lot of guys were really irritated by vegetarian bush camps, right? Like some of it was kind of core blue collar, mediating type guys. But like, I was totally fine. I was actually like super happy because it was kind of my first experience in a full time job. And I was nervous because everybody was like, oh, you know, it's going to be hell. And I actually thought it was great. It was much better than being in high school. I thought at the time, like, they just like everything was squared away, like they just fed you. You just had to go and like, try to put as it was piecework. So it was like $0.22 a tree or something. And after a few days, I think on my third day I put something like a thousand trees into the ground or something. Right. So I was. Aaron: Like, Jesus Christ. Jesse: I was like, oh, this is amazing, right? Like, all I have to do is like. Run as fast as I possibly can with these big bags of trees in the woods in, like, this beautiful setting. Eat the food they give me and then like, go to sleep and, like, read or whatever. Right. So like it was a it was a great experience. I know that's the total effect, right. What's that. Aaron: No no no it's not it's not a digression. One thing is can you just define bush camp like for for us dumb American like, oh yeah, dumb like Americans or whatever. Jesse: Yeah. So I, I don't know, I, I guess maybe I haven't heard the terms. They're in the US, but they must exist for some purpose. Right. So usually it's like somewhere remote that you are basically camped out of. In my case, it was literally like camping. It was tents, which I didn't mind at the time. So you'd be like, you know, in our case, it was big camps. Like, I think sometimes they can be as little as maybe ten people, let's say. Right. And this was like a pretty decent company. So they were running 40. The max I saw was maybe 100 people working out of this camp in the remote wilderness. The first year I did it was around an area called mica, which I understand now is a popular heli skiing destination. 
Like I have a friend who now skis and mica, which is hilarious to me, but it would maybe take you to the nearest town to mica was probably Revelstoke, which was in this case we could drive there, you know, maybe like a year or two later. There were ones that we were flown into, and in some cases there were even ones where I'm trying to think like I was in one in my like early 20s where they would helicopter. You would take a helicopter ride every day to the site, like so like you. So they would like, you'd see this helicopter coming in and like, they'd land in the camp and then they take you. But it was just it wasn't like it didn't feel like special operations. It was like the helicopter was rented from the, like, small towns Weather channel. Right. Aaron: Well, that's so badass as like, I feel like the correct term for all this is, is very based. Jesse: Yeah, I don't know. I mean, I like it seems weird now to describe this to people and it's not in people's experience. But it wasn't it didn't feel the helicopter thing. Maybe did initially felt weird. Right. Because like, I don't know anything about helicopters, right. Like but it didn't feel that weird at the time. And I knew a lot of people growing up who worked out of bush camps and then years later. So like probably around when I was in my early 20s is when I started my carpentry apprenticeship formally, like I had worked in construction a bit and then done the bush camp thing on and off. And so then I ended up doing some some remote wilderness bush camp carpentry work as well, maybe midway through my apprenticeship. So I worked on a it. An Indian reserve building, a water treatment facility that would have been like probably late 90s. Like I'm thinking like probably right before I moved to the US. And that was like that was months and months. That was actually not a good camp. One of the things I hate is that the first camp I went to was incredible, like incredible, like incredible food, like they would haul in saunas like you had you had a trailer with a sauna. And so, like when you're 16, you know, you're just like, oh yeah, this is like normal, right? And I've often thought like. And the food was amazing. Like the, the lead cook would make like she made it for my 17th birthday. She made me a cake. Right. But I was like, and I'm I'm sure I said like, thank you. But it should have been like effusive with praise, right? Because it was just like, yeah, incredible. And then if you, you know. And then I was probably in like over the years maybe 2 or 3 other camps and they suck. Like I remember showing up and being like hey, when is. And like, so this woman would, she would have like Indian night and Mexican night, like themed food nights and like you like they had generators and you could watch movies and like, it was just crazy. And I remember rolling into, like, this next, logging camps and logging camps are legendarily crappy

    1h 30m
  4. 03/25/2025

    Preparing for the Intelligence Explosion (paper readout and commentary)

    Preparing for the Intelligence Explosion is a recent paper by Fin Moorhouse and Will MacAskill. * 00:00 - 1:58:04 is me reading the paper. * 1:58:05 - 2:26:06 is a string of random thoughts I have related to it I am well-aware that I am not the world's most eloquent speaker lol. This is also a bit of an experiment in getting myself to read something by reading it out loud. Maybe I’ll do another episode like this (feel free to request papers/other things to read out, ideally a bit shorter than this one lol) Below are my unfiltered, unedited, quarter-baked thoughts. My unfiltered, unedited, quarter-baked thoughts Okay, this is Aaron. I'm in post-prod, as we say in the industry, and I will just spitball some random thoughts, and then I'm not even with my computer right now, so I don't even have the text in front of me. I feel like my main takeaway is that the vibes debate is between normal to AI is as important as the internet, maybe. That's on the low end, to AI is a big deal. But if you actually do the not math, all approximately all of the variation is actually just between insane and insane to the power of insane. And I don't fully know what to do with that. I guess, to put a bit more of a point on it, I'm not just talking about point estimates. It seems that even if you make quite conservative assumptions, it's quite overdetermined that there will be something explosive technological progress unless something really changes. And that is just, yeah, that is just a big deal. It's not one that I think of fully incorporated into my emotional worldview. I mean, I have it, I think, in part, but not, not to the degree that I think my, my intellect has. So another thing is that one of the headline results, something that Will MacAskill, I think, wants to emphasize and did emphasize in the paper, is the century in a decade meme. But if you actually read the paper, that is kind of a lower bound, unless something crazy happens. And I'll, this is me editorializing right now. So, I think something crazy could happen first, for example, nuclear war with China, that would destroy data centers and mean that, you know, AI progress is significantly set back, or it's an unknown unknown. But the century in a decade is really a truly a lower bound. You need to be super pessimistic with all the in-model uncertainty. Obviously there's out of model uncertainty, but the actual point estimates, whether you take geometric, however you do it, arithmetic means over distributions, or geometric means, however you combine the variables, you actually get much much faster than that. So that is a 10x speed up, and that is, yeah, as I said 10 times, as pessimistic as you can get, I don't actually have a good enough memory to remember exactly what the point estimate numbers are. I should go back and look. So chatting with Claude, it seems that there's actually a lot of different specific numbers and things. So one question you might have is, okay, over the fastest growing decade in terms of technological progress or economic growth in the next 10 decades, what will the peak average growth rate be? But there's a lot of different ways you can play with that to change it. It's, oh, what's the average going to be over the next decade? What about this coming decade? What about before 2030? Do we're talking about economic progress, progress or some less well-defined sense of technological and social progress. 
But basically it seems the conservative scenario is, is that the intelligence explosion happens and at some, in some importantly long series of years, you get a 5x year over year. So not a doubling every year, but after two years, you get a 25x expansion of, of AI labor. And then 125 after three years. And I need to look back. I think one thing they don't talk about specifically is, oh yeah, sorry. They do talk about one important thing to emphasize. And as you can tell, I'm not the most eloquent person in the world. Is that they talk about pace significantly and about limiting factors. But the third, the thing you might solve for, if you know those two variables is the length of time that such an explosion might take place across and just talking, thinking out loud, that is something that they, whether intentionally or otherwise, or me being dumb and missing it. I don't think that they give a ton of attention to, and that's yeah. I mean, my intuition is approximately fine. Does it matter if the intelligence explosion conditional on conditional on knowing how to distribution of rates of say blocks of years, say, so we're not talking about seconds, we're not talking about, I guess we could be talking about months, but we're not talking about weeks, and we're not talking about multiple decades. So we're talking about something in the realm of single digit to double digit numbers of years, maybe a fraction of a year. So two ish, three orders of magnitude of range. And so the question is, conditional on having a distribution of peak average growth rate for some block of time. Does it matter whether we're talking about two years, or 10 years or what? And sorry, backtracking, also conditional on having a distribution for the limiting factors. So at what point do you stop scaling? Because we know that there's the talking point, infinite growth in a finite world is true. They're just off by 1000 orders of magnitude, or maybe 100. So there actually are genuine limiting factors. And they discussed this, at what point you might get true limits on power consumption or whatever. But yeah, just to recap this little mini ramble. We don't, one thing the paper doesn't go over much is the length of time specifically, except insofar as that is implied by distributions you have for peak growth rates and limiting factors. So another thing that wasn't in the paper, but that was, I'm just spitballing that was in Will MacAskill recent interview on the 80,000 hours podcast with Robert Roeblin about the world's most pressing problems and how you can use your career to solve them. Is that, yeah, I think Rob said this, he wishes that the AIX community hadn't been so tame or timid, in terms of hedging, saying, emphasizing uncertainty, saying, you know, there's a million ways it can be wrong, which is of course true. But I think his, the takeaway he was trying to get at was, even ex-ante, they should have been a little bit more straightforward. And I actually kind of think there's a reasonable critique of this paper, which is that the century in a decade meme is not a good approximation of the actual expectations, you know, the expectations is something like 100 to 1000x, not a 10x speed up. As lucky as a reasonable conservative baseline, you have to be really within model pessimistic to get to the 10x point. Another big thing to comment on is just the grand challenges. And so I've been saying for a while that my P doom, as they say, is something in the 50% range. 
Maybe now it's 60% or something after reading this paper up from 35% right after the bottom executive order. And what I mean by that, I actually think is some sort of loose sense of, no, we actually don't solve all these challenges. Well, so it's not one thing MacAskill and Morehouse emphasize, but in both the podcast that I listened to and the paper is it's not just about AI control. It's not just about the alignment problem. You really have to get a lot of things right. I think this relates to other work that MacAskill is on that I'm not super well acquainted with. But there's the question of how much do you have to get right in order for the future to go well. And actually think there's a lot of strands there. Like I remember on the podcast with Rob, that we're talking in terms of percentage, percentage value of the best outcome. I'm not, yeah, I'm just thinking out loud here, but I'm not actually sure that's the right metric to go with. It's a little bit like, so you can imagine just we have the current set of possibilities and then exogenously we get one future strand in the multiverse, the Everettian multiverse. And a single Everettian multiverse thread points to the future going a billion times better than it could otherwise. I feel like this approximately should not change approximately anything because you know it's not going to happen. But it does revise down those numbers, your estimate of the expected value, the expected percentage of the best future, it revises that down a billion fold. And so this sort of, no I'm not actually sure if this ends up cashing, I'm just not smart enough to intuit well whether this ends up cashing out in terms of what you should do. But I suspect that it might, that's really just an intuition, so yeah I'm not sure. You know something that will never be said about me is that I am an extremely well organized and straightforward thinker. So it might be worth noting these audio messages are just random things that come to mind as I'm walking around basically a park. Also that's why the audio quality might be worse. Oh yeah getting back to what I was originally thinking about with the grand challenges and my P. Doom. They just enumerate a bunch of things that in my opinion really do have to go right in order for some notion of the future to be good. And so there's just a concatenation, I forget what the term is, but a concatenation issue of even if you're relatively optimistic and I kind of don't know if you should be on any one issue. Like okay, so some of these, let me just list them off. AI takeover, highly destructive technologies, power concentrating mechanisms, value lock-in mechanisms, AI agents and digital minds, space governance, new competitive pressures, epistemic disruption, abundance, so capturing the upside and unknown unknowns. No, they're not, it's not as clean a model as each of these are fully independent. It's much more complex than that, but it's not as simple as you just, oh, if you have a 70% chance on each, you can just take that to the power of eight
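    Two quick calculations behind the points above, sketched with the placeholder numbers used in the ramble (5x annual growth in AI labor, a 70% chance per challenge across roughly eight challenges). These are intuition pumps only, since the episode is explicit that the grand challenges are not actually independent.

```python
# Quick arithmetic behind two points above (illustrative only).

# 1. "5x year over year": a 5x annual expansion of AI labor compounds to
#    25x after two years and 125x after three.
growth = 5
for year in range(1, 4):
    print(f"after {year} year(s): {growth ** year}x")  # 5x, 25x, 125x

# 2. Concatenation of grand challenges: if (counterfactually) each of the
#    ~8 listed challenges were independent and each went well with 70%
#    probability, the chance of getting all of them right is small.
p_each = 0.70
n_challenges = 8
print(f"P(all go well) ~= {p_each ** n_challenges:.3f}")  # ~0.058
# The real model is not this simple, as noted above; the challenges interact.
```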

    2h 25m
  5. 04/11/2024

    #12: Arthur Wright and I discuss whether the GiveWell suite of charities is really the best way of helping humans alive today, the value of reading old books, rock climbing, and more

    Please follow Arthur on Twitter and check out his blog! Thank you for just summarizing my point in like 1% of the words -Aaron, to Arthur, circa 34:45 Summary (Written by Claude Opus aka Clong) * Aaron and Arthur introduce themselves and discuss their motivations for starting the podcast. Arthur jokingly suggests they should "solve gender discourse". * They discuss the benefits and drawbacks of having a public online persona and sharing opinions on Twitter. Arthur explains how his views on engaging online have evolved over time. * Aaron reflects on whether it's good judgment to sometimes tweet things that end up being controversial. They discuss navigating professional considerations when expressing views online. * Arthur questions Aaron's views on cause prioritization in effective altruism (EA). Aaron believes AI is one of the most important causes, while Arthur is more uncertain and pluralistic in his moral philosophy. * They debate whether standard EA global poverty interventions are likely to be the most effective ways to help people from a near-termist perspective. Aaron is skeptical, while Arthur defends GiveWell's recommendations. * Aaron makes the case that even from a near-termist view focused only on currently living humans, preparing for the impacts of AI could be highly impactful, for instance by advocating for a global UBI. Arthur pushes back, arguing that AI is more likely to increase worker productivity than displace labor. * Arthur expresses skepticism of long-termism in EA, though not due to philosophical disagreement with the basic premises. Aaron suggests this is a well-trodden debate not worth rehashing. * They discuss whether old philosophical texts have value or if progress means newer works are strictly better. Arthur mounts a spirited defense of engaging with the history of ideas and reading primary sources to truly grasp nuanced concepts. Aaron contends that intellectual history is valuable but reading primary texts is an inefficient way to learn for all but specialists. * Arthur and Aaron discover a shared passion for rock climbing, swapping stories of how they got into the sport as teenagers. While Aaron focused on indoor gym climbing and competitions, Arthur was drawn to adventurous outdoor trad climbing. They reflect on the mental challenge of rationally managing fear while climbing. * Discussing the role of innate talent vs training, Aaron shares how climbing made him viscerally realize the limits of hard work in overcoming genetic constraints. He and Arthur commiserate about the toxic incentives for competitive climbers to be extremely lean, while acknowledging the objective physics behind it. * They bond over falling out of climbing as priorities shifted in college and lament the difficulty of getting back into it after long breaks. Arthur encourages Aaron to let go of comparisons to his past performance and enjoy the rapid progress of starting over. Transcript Very imperfect - apologies for the errors. AARON Hello, pigeon hour listeners. This is Aaron, as it always is with Arthur Wright of Washington, the broader Washington, DC metro area. Oh, also, we're recording in person, which is very exciting for the second time. I really hope I didn't screw up anything with the audio. Also, we're both being really awkward at the start for some reason, because I haven't gotten into conversation mode yet. So, Arthur, what do you want? Is there anything you want? ARTHUR Yeah. So Aaron and I have been circling around the idea of recording a podcast for a long time. 
So there have been periods of time in the past where I've sat down and been like, oh, what would I talk to Aaron about on a podcast? Those now elude me because that was so long ago, and we spontaneously decided to record today. But, yeah, for the. Maybe a small number of people listening to this who I do not personally already know. I am Arthur and currently am doing a master's degree in economics, though I still know nothing about economics, despite being two months from completion, at least how I feel. And I also do, like, housing policy research, but I think have, I don't know, random, eclectic interests in various EA related topics. And, yeah, I don't. I feel like my soft goal for this podcast was to, like, somehow get Aaron cancelled. AARON I'm in the process. ARTHUR We should solve gender discourse. AARON Oh, yeah. Is it worth, like, discussing? No, honestly, it's just very online. It's, like, not like there's, like, better, more interesting things. ARTHUR I agree. There are more. I was sort of joking. There are more interesting things. Although I do think, like, the general topic that you talked to max a little bit about a while ago, if I remember correctly, of, like, kind of. I don't know to what degree. Like, one's online Persona or, like, being sort of active in public, sharing your opinions is, like, you know, positive or negative for your general. AARON Yeah. What do you think? ARTHUR Yeah, I don't really. AARON Well, your. Your name is on Twitter, and you're like. ARTHUR Yeah. You're. AARON You're not, like, an alt. ARTHUR Yeah, yeah, yeah. Well, I. So, like, I first got on Twitter as an alt account in, like, 2020. I feel like it was during my, like, second to last semester of college. Like, the vaccine didn't exist yet. Things were still very, like, hunkered down in terms of COVID And I feel like I was just, like, out of that isolation. I was like, oh, I'll see what people are talking about on the Internet. And I think a lot of the, like, sort of more kind of topical political culture war, whatever kind of stuff, like, always came back to Twitter, so I was like, okay, I should see what's going on on this Twitter platform. That seems to be where all of the chattering classes are hanging out. And then it just, like, made my life so much worse. AARON Wait, why? ARTHUR Well, I think part of it was that I just, like, I made this anonymous account because I was like, oh, I don't want to, like, I don't want to, like, have any reservations about, like, you know, who I follow or what I say. I just want to, like, see what's going on and not worry about any kind of, like, personal, like, ramifications. And I think that ended up being a terrible decision because then I just, like, let myself get dragged into, like, the most ultimately, like, banal and unimportant, like, sort of, like, culture war shit as just, like, an observer, like, a frustrated observer. And it was just a huge waste of time. I didn't follow anyone interesting or, like, have any interesting conversations. And then I, like, deleted my Twitter. And then it was in my second semester of my current grad program. We had Caleb Watney from the Institute for Progress come to speak to our fellowship because he was an alumni of the same fellowship. And I was a huge fan of the whole progress studies orientation. And I liked what their think tank was doing as, I don't know, a very different approach to being a policy think tank, I think, than a lot of places. 
And one of the things that he said for, like, people who are thinking about careers in, like, policy and I think sort of applies to, like, more ea sort of stuff as well, was like, that. Developing a platform on Twitter was, like, opened a lot of doors for him in terms of, like, getting to know people in the policy world. Like, they had already seen his stuff on Twitter, and I got a little bit, like, more open to the idea that there could be something constructive that could come from, like, engaging with one's opinions online. So I was like, okay, f**k it. I'll start a Twitter, and this time, like, I won't be a coward. I won't get dragged into all the worst topics. I'll just, like, put my real name on there and, like, say things that I think. And I don't actually do a lot of that, to be honest. AARON I've, like, thought about gotta ramp it. ARTHUR Off doing more of that. But, like, you know, I think when it's not eating too much time into my life in terms of, like, actual deadlines and obligations that I have to meet, it's like, now I've tried to cultivate a, like, more interesting community online where people are actually talking about things that I think matter. AARON Nice. Same. Yeah, I concur. Or, like, maybe this is, like, we shouldn't just talk about me, but I'm actually, like, legit curious. Like, do you think I'm an idiot or, like, cuz, like, hmm. I. So this is getting back to the, like, the current, like, salient controversy, which is, like, really just dumb. Not, I mean, controversy for me because, like, not, not like an actual, like, event in the world, but, like, I get so, like, I think it's, like, definitely a trade off where, like, yeah, there's, like, definitely things that, like, I would say if I, like, had an alt. Also, for some reason, I, like, really just don't like the, um, like, the idea of just, like, having different, I don't know, having, like, different, like, selves. Not in, like, a. And not in, like, any, like, sort of actual, like, philosophical way, but, like, uh, yeah, like, like, the idea of, like, having an online Persona or whatever, I mean, obviously it's gonna be different, but, like, in. Only in the same way that, like, um, you know, like, like, you're, like, in some sense, like, different people to the people. Like, you're, you know, really close friend and, like, a not so close friend, but, like, sort of a different of degree. Like, difference of, like, degree, not kind. And so, like, for some reason, like, I just, like, really don't like the idea of, like, I don't know, having, like, a professional self or whatever. Like, I just. Yeah. And you could, like, hmm. I don't know. Do you think I'm an idiot for, like, sometimes tweeting, like, things that, like, evidently, like, are controversial, even if they, like, they're not at all intent or, like, I didn't even, you know, plan, like, plan on them being. ARTHUR Yeah, I think it's, like, sort of similar to the, like, decoupli

    2h 13m
  6. 03/09/2024

    Drunk Pigeon Hour!

    Intro Around New Years, Max Alexander, Laura Duffy, Matt and I tried to raise money for animal welfare (more specifically, the EA Animal Welfare Fund) on Twitter. We put out a list of incentives (see the pink image below), one of which was to record a drunk podcast episode if the greater Very Online Effective Altruism community managed to collectively donate $10,000. To absolutely nobody’s surprise, they did ($10k), and then did it again ($20k) and then almost did it a third time ($28,945 as of March 9, 2024). To everyone who gave or helped us spread the word, and on behalf of the untold number of animals these dollars will help, thank you. And although our active promotion on Twitter has come to an end, it is not too late to give! I give a bit more context in a short monologue intro I recorded (sober) after the conversation, so without further ado, Drunk Pigeon Hour: Transcript (Note: very imperfect - sorry!) Monologue Hi, this is Aaron. This episode of Pigeon Hour is very special for a couple of reasons. The first is that it was recorded in person, so three of us were physically within a couple feet of each other. Second, it was recorded while we were drunk or maybe just slightly inebriated. Honestly, I didn't get super drunk, so I hope people forgive me for that. But the occasion for drinking was that this, a drunk Pigeon Hour episode, was an incentive for a fundraiser that a couple of friends and I hosted on Twitter, around a little bit before New Year's and basically around Christmas time. We basically said, if we raise $10,000 total, we will do a drunk Pigeon Hour podcast. And we did, in fact, we are almost at $29,000, just shy of it. So technically the fundraiser has ended, but it looks like you can still donate. So, I will figure out a way to link that. And also just a huge thank you to everyone who donated. I know that's really cliche, but this time it really matters because we were raising money for the Effective Altruism Animal Welfare Fund, which is a strong contender for the best use of money in the universe. Without further ado, I present me, Matt, and Laura. Unfortunately, the other co-host Max was stuck in New Jersey and so was unable to participate tragically. Yeah so here it is! Conversation AARON Hello, people who are maybe listening to this. I just, like, drank alcohol for, like, the first time in a while. I don't know. Maybe I do like alcohol. Maybe I'll find that out now. MATT Um, All right, yeah, so this is, this is Drunk Pigeon Hour! Remember what I said earlier when I was like, as soon as we are recording, as soon as we press record, it's going to get weird and awkward. LAURA I am actually interested in the types of ads people get on Twitter. Like, just asking around, because I find that I get either, like, DeSantis ads. I get American Petroleum Institute ads, Ashdale College. MATT Weirdly, I've been getting ads for an AI assistant targeted at lobbyists. So it's, it's like step up your lobbying game, like use this like tuned, I assume it's like tuned ChatGPT or something. Um, I don't know, but it's, yeah, it's like AI assistant for lobbyists, and it's like, like, oh, like your competitors are all using this, like you need to buy this product. So, so yeah, Twitter thinks I'm a lobbyist. I haven't gotten any DeSantis ads, actually. AARON I think I might just like have personalization turned off. Like not because I actually like ad personalization. I think I'm just like trying to like, uh, this is, this is like a half-baked protest of them getting rid of circles. 
I will try to minimize how much revenue they can make from me. MATT So, so when I, I like went through a Tumblr phase, like very late. In like 2018, I was like, um, like I don't like, uh, like what's happening on a lot of other social media. Like maybe I'll try like Tumblr as a, as an alternative. And I would get a lot of ads for like plus-sized women's flannels. So, so like the Twitter ad targeting does not faze me because I'm like, oh, okay, like, I can, hold on. AARON Sorry, keep going. I can see every ad I've ever. MATT Come across, actually, in your giant CSV of Twitter data. AARON Just because I'm a nerd. I like, download. Well, there's actually a couple of things. I just download my Twitter data once in a while. Actually do have a little web app that I might try to improve at some point, which is like, you drop it in and then it turns them. It gives you a csV, like a spreadsheet of your tweets, but that doesn't do anything with any of the other data that they put in there. MATT I feel like it's going to be hard to get meaningful information out of this giant csv in a short amount of time. AARON It's a giant JSON, actually. MATT Are you just going to drop it all into c long and tell it to parse it for you or tell it to give you insights into your ads. AARON Wait, hold on. This is such a. MATT Wait. Do people call it “C-Long” or “Clong”? AARON Why would it be long? MATT Well, because it's like Claude Long. LAURA I've never heard this phrase. MATT This is like Anthropic’s chat bot with a long context with so like you can put. Aaron will be like, oh, can I paste the entire group chat history? AARON Oh yeah, I got clong. Apparently that wasn't acceptable so that it. MATT Can summarize it for me and tell me what's happened since I was last year. And everyone is like, Aaron, don't give our data to Anthropic, is already suss. LAURA Enough with the impressions feel about the Internet privacy stuff. Are you instinctively weirded out by them farming out your personal information or just like, it gives me good ads or whatever? I don't care. MATT I lean a little towards feeling weird having my data sold. I don't have a really strong, and this is probably like a personal failing of mine of not having a really strong, well formed opinion here. But I feel a little sketched out when I'm like all my data is being sold to everyone and I don't share. There is this vibe on Twitter that the EU cookies prompts are like destroying the Internet. This is regulation gone wrong. I don't share that instinct. But maybe it's just because I have average tolerance for clicking no cookies or yes cookies on stuff. And I have this vibe that will. AARON Sketch down by data. I think I'm broadly fine with companies having my information and selling it to ad targeting. Specifically. I do trust Google a lot to not be weird about it, even if it's technically legal. And by be weird about it, what do mean? Like, I don't even know what I mean exactly. If one of their random employees, I don't know if I got into a fight or something with one of their random employees, it would be hard for this person to track down and just see my individual data. And that's just a random example off the top of my head. But yeah, I could see my view changing if they started, I don't know, or it started leaching into the physical world more. But it seems just like for online ads, I'm pretty cool with everything. LAURA Have you ever gone into the ad personalization and tried see what demographics they peg you? AARON Oh yeah. 
We can pull up mine right now. LAURA It's so much fun doing that. It's like they get me somewhat like the age, gender, they can predict relationship status, which is really weird. AARON That's weird. MATT Did you test this when you were in and not in relationships to see if they got it right? LAURA No, I think it's like they accumulate data over time. I don't know. But then it's like we say that you work in a mid sized finance. Fair enough. MATT That's sort of close. LAURA Yeah. AARON Sorry. Keep on podcasting. LAURA Okay. MATT Do they include political affiliation in the data you can see? AARON Okay. MATT I would have been very curious, because I think we're all a little bit idiosyncratic. I'm probably the most normie of any of us in terms of. I can be pretty easily sorted into, like, yeah, you're clearly a Democrat, but all of us have that classic slightly. I don't know what you want to call it. Like, neoliberal project vibe or, like, supply side. Yeah. Like, some of that going on in a way that I'm very curious. LAURA The algorithm is like, advertising deSantis. AARON Yeah. MATT I guess it must think that there's some probability that you're going to vote in a republican primary. LAURA I live in DC. Why on earth would I even vote, period. MATT Well, in the primary, your vote is going to count. I actually would think that in the primary, DC is probably pretty competitive, but I guess it votes pretty. I think it's worth. AARON I feel like I've seen, like, a. MATT I think it's probably hopeless to live. Find your demographic information from Twitter. But, like. AARON Age 13 to 54. Yeah, they got it right. Good job. I'm only 50, 99.9% confident. Wait, that's a pretty General. MATT What's this list above? AARON Oh, yeah. This is such a nerd snipe. For me, it's just like seeing y'all. I don't watch any. I don't regularly watch any sort of tv series. And it's like, best guesses of, like, I assume that's what it is. It thinks you watch dune, and I haven't heard of a lot of these. MATT Wait, you watch cocaine there? AARON Big bang theory? No, I definitely have watched the big Bang theory. Like, I don't know, ten years ago. I don't know. Was it just, like, random korean script. MATT Or whatever, when I got Covid real bad. Not real bad, but I was very sick and in bed in 2022. Yeah, the big bang theory was like, what I would say. AARON These are my interest. It's actually pretty interesting, I think. Wait, hold on. Let me. MATT Oh, wait, it's like, true or false for each of these? AARON No, I think you can manually just disable and say, like, oh, I'm not, actually. And, like, I did that for Olivia Rodrigo because I posted about her once, and then it took over my feed, and so then I had to say, like, no, I'm not interested in Olivia Rodrigo. MATT Wait, can you control f true here? Because almost all of these. Wait, sorry. Is that argentine politics? AARON No
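    Aaron mentions a small web app that turns a Twitter data export into a CSV of your tweets (the export itself being "a giant JSON"). As a rough, hypothetical sketch of what that conversion involves (the tweets.js wrapper and field names below are assumptions about the export format, not details from the episode):

```python
# Hypothetical sketch: convert the tweets file from a Twitter/X data export
# into a flat CSV. The wrapper and field names (tweets.js, window.YTD prefix,
# full_text, created_at) are assumptions based on older export formats and
# may differ across export versions.
import csv
import json
from pathlib import Path

def tweets_js_to_csv(tweets_js_path: str, csv_path: str) -> None:
    raw = Path(tweets_js_path).read_text(encoding="utf-8")
    # The export wraps the JSON array in a JS assignment; strip everything
    # before the first '[' to get plain JSON.
    json_text = raw[raw.index("["):]
    records = json.loads(json_text)

    with open(csv_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "created_at", "text"])
        for record in records:
            tweet = record.get("tweet", record)  # some exports nest under "tweet"
            writer.writerow([
                tweet.get("id_str", ""),
                tweet.get("created_at", ""),
                tweet.get("full_text", tweet.get("text", "")),
            ])

# Example usage: tweets_js_to_csv("data/tweets.js", "my_tweets.csv")
```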

    1h 36m
