Increments

Ben Chugg and Vaden Masrani

Vaden Masrani, a senior research scientist in machine learning, and Ben Chugg, a PhD student in statistics, get into trouble arguing about everything except machine learning and statistics. Coherence is somewhere on the horizon. Bribes, suggestions, love-mail and hate-mail all welcome at incrementspodcast@gmail.com.

  1. #97 - Did Effective Altruism Have Ulterior Motives From the Beginning?

    23 JAN

    #97 - Did Effective Altruism Have Ulterior Motives From the Beginning?

    Two years without discussing effective altruism -- did you miss it? Not as much as Vaden, surely. And probably a right bit more than Ben. Well, we're back in the game with a spicy one. Was EA a front for AI safety from the beginning? Did the leaders care not a whit for global poverty? Is Ben going to throw himself out the window if Vaden keeps this up?

    We discuss:
    - Feedback on our introspection episode
    - The motives of the EA founders
    - The Felicifia forum
    - Is this a conspiracy theory?
    - EA's strategic ambiguity
    - Bostromism, transhumanism, and AI safety
    - EA funding
    - The public/core divide and the funnel model

    Quotes:
    "new effective altruists tend to start off concerned about global poverty or animal suffering and then hear, take seriously, and often are convinced by the arguments for existential risk mitigation" - Will MacAskill

    "Existential risk isn’t the most useful public face for effective altruism – everyone inc[l]uding Eliezer Yudkowsky agrees about that" - Scott Alexander, 2015

    Utilitymonster: GWWC is explicitly poverty-focused but high impact careers (HIC) is not. In fact, hardcore members of GWWC are heavily interested in x-risk, and I estimate that 10-15% of its general membership is as well. I’d take them seriously as a group for promoting utilitarianism in general. I’m a GWWC leader.
    [Redacted]: but HIC always seems to talk about things in terms of “lives saved”, ive never heard them mentioning other things to donate to. […]
    Utilitymonster: That’s exactly the right thing for HIC to do. Talk about lives saved with their public face, let hardcore members hear about x-risk, and then, in the future, if some excellent x-risk opportunity arises, direct resources to x-risk.
    - From the Felicifia forum

    References:
    - Gleiberman's paper: https://medialibrary.uantwerpen.be/files/8518/61565cb6-e056-4e35-bd2e-d14d58e35231.pdf
    - Old EA wikipedia page (web archive): https://web.archive.org/web/20170409171350/https://en.wikipedia.org/wiki/Effective_altruism
    - Old CEA webpage (web archive): https://web.archive.org/web/20161219031827/https://www.centreforeffectivealtruism.org/fundraising/

    Socials:
    - Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani
    - Come join our discord server! DM us on twitter or send us an email to get a supersecret link
    - Become a patreon subscriber here. Or give us one-time cash donations to help cover our lack of cash donations here.
    - Click dem like buttons on youtube

    Let us funnel you into the core group of super secret patreon supporters. Send us an email at incrementspodcast@gmail.com

    1h 42m
  2. #95 (C&R Chap 10, Part II) - A Problem-First View of Scientific Progress

    29/11/2025

    #95 (C&R Chap 10, Part II) - A Problem-First View of Scientific Progress

    After a long hiatus where we both saw grief counsellors over our fight about Popper's theory of content in the last C&R episode, we are back. And we're ready to play nice ... for about 30 seconds, until Vaden admits that two sentences from Popper changed his mind about something Ben had been arguing for literally years. But eventually putting those disagreements aside, we return to the subject at hand: The Conjectures and Refutations Series, Chapter 10: Truth, Rationality, and the Growth of Scientific Knowledge (Part II). Here all goes smoothly. Just kidding, we start fighting about content again almost immediately. Where are the guests to break us up when you need them?

    We discuss:
    - Why Vaden changed his mind about "all thought is problem solving"
    - Something that rhymes with wero horship
    - Is Popper sloppy when it comes to writing about probability and content?
    - Is all modern data science based on the wrong idea? (Hint: No)
    - Popper's problem-focused view of scientific progress
    - How much formalization is too much?
    - The difference between high verisimilitude and high probability
    - Why do we value simplicity in science?
    - Historical examples of science progressing via theories with increasing content

    Quotes:
    "Consciousness, world 2, was presumably an evaluating and discerning consciousness, a problem-solving consciousness, right from the start. I have said of the animate part of the physical world 1 that all organisms are problem solvers. My basic assumption regarding world 2 is that this problem-solving activity of the animate part of world 1 resulted in the emergence of world 2, of the world of consciousness. But I do not mean by this that consciousness solves problems all the time, as I asserted of the organisms. On the contrary. The organisms are preoccupied with problem-solving day in, day out, but consciousness is not only concerned with the solving of problems, although that is its most important biological function. My hypothesis is that the original task of consciousness was to anticipate success and failure in problem-solving and to signal to the organism in the form of pleasure and pain whether it was on the right or wrong path to the solution of the problem." - In Search of a Better World, p. 17 (emphasis added)

    "The criterion of potential satisfactoriness is thus testability, or improbability: only a highly testable or improbable theory is worth testing, and is actually (and not merely potentially) satisfactory if it withstands severe tests—especially those tests to which we could point as crucial for the theory before they were ever undertaken." - C&R, Chapter 10

    "Consequently there is little merit in formalizing and elaborating a deductive system (intended for use as an empirical science) beyond the requirements of the task of criticizing and testing it, and of comparing it critically with competitors." - C&R, Chapter 10

    "Admittedly, our expectations, and thus our theories, may precede, historically, even our problems. Yet science starts only with problems. Problems crop up especially when we are disappointed in our expectations, or when our theories involve us in difficulties, in contradictions; and these may arise either within a theory, or between two different theories, or as the result of a clash between our theories and our observations." - C&R, Chapter 10

    Socials:
    - Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani
    - Come join our discord server! DM us on twitter or send us an email to get a supersecret link
    - Become a patreon subscriber here. Or give us one-time cash donations to help cover our lack of cash donations here.
    - Click dem like buttons on youtube

    Is "Ben and Vaden will fight about content" high or low probability? Tell us at incrementspodcast@gmail.com

    58 min
  3. #93 (C&R Chap 10, Part I) - An Introduction to Popper's Theory of Content

    16/10/2025

    #93 (C&R Chap 10, Part I) - An Introduction to Popper's Theory of Content

    Back to basics baby. We're doing a couple introductory episodes on Popper's philosophy of science, following Chapter 10 of Conjectures and Refutations. We start with Popper's theory of content: what makes a good scientific theory? Can we judge some theories as better than others before we even run any empirical tests? Should we be looking for theories with high probability? Ben and Vaden also return to their roots in another way, and get into a nice little fight about how content relates to Bayesianism.

    We discuss:
    - Vaden's skin care routine
    - If you find your friend's lost watch and proceed to lose it, are you responsible for the watch?
    - Empirical vs logical content
    - Whether and how content can be measured and compared
    - How content relates to probability

    Quotes:
    "My aim in this lecture is to stress the significance of one particular aspect of science—its need to grow, or, if you like, its need to progress. I do not have in mind here the practical or social significance of this need. What I wish to discuss is rather its intellectual significance. I assert that continued growth is essential to the rational and empirical character of scientific knowledge; that if science ceases to grow it must lose that character. It is the way of its growth which makes science rational and empirical; the way, that is, in which scientists discriminate between available theories and choose the better one or (in the absence of a satisfactory theory) the way they give reasons for rejecting all the available theories, thereby suggesting some of the conditions with which a satisfactory theory should comply. You will have noticed from this formulation that it is not the accumulation of observations which I have in mind when I speak of the growth of scientific knowledge, but the repeated overthrow of scientific theories and their replacement by better or more satisfactory ones. This, incidentally, is a procedure which might be found worthy of attention even by those who see the most important aspect of the growth of scientific knowledge in new experiments and in new observations." - C&R, p. 291

    "Thus it is my first thesis that we can know of a theory, even before it has been tested, that if it passes certain tests it will be better than some other theory. My first thesis implies that we have a criterion of relative potential satisfactoriness, or of potential progressiveness, which can be applied to a theory even before we know whether or not it will turn out, by the passing of some crucial tests, to be satisfactory in fact. This criterion of relative potential satisfactoriness (which I formulated some time ago, and which, incidentally, allows us to grade theories according to their degree of relative potential satisfactoriness) is extremely simple and intuitive. It characterizes as preferable the theory which tells us more; that is to say, the theory which contains the greater amount of empirical information or content; which is logically stronger; which has the greater explanatory and predictive power; and which can therefore be more severely tested by comparing predicted facts with observations. In short, we prefer an interesting, daring, and highly informative theory to a trivial one." - C&R, p. 294

    "Let a be the statement 'It will rain on Friday'; b the statement 'It will be fine on Saturday'; and ab the statement 'It will rain on Friday and it will be fine on Saturday': it is then obvious that the informative content of this last statement, the conjunction ab, will exceed that of its component a and also that of its component b. And it will also be obvious that the probability of ab (or, what is the same, the probability that ab will be true) will be smaller than that of either of its components. Writing Ct(a) for 'the content of the statement a', and Ct(ab) for 'the content of the conjunction a and b', we have (1) Ct(a) ≤ Ct(ab) ≥ Ct(b). This contrasts with the corresponding law of the calculus of probability, (2) p(a) ≥ p(ab) ≤ p(b), where the inequality signs of (1) are inverted. Together these two laws, (1) and (2), state that with increasing content, probability decreases, and vice versa; or in other words, that content increases with increasing improbability. (This analysis is of course in full agreement with the general idea of the logical content of a statement as the class of all those statements which are logically entailed by it. We may also say that a statement a is logically stronger than a statement b if its content is greater than that of b—that is to say, if it entails more than b does.) This trivial fact has the following inescapable consequences: if growth of knowledge means that we operate with theories of increasing content, it must also mean that we operate with theories of decreasing probability (in the sense of the calculus of probability). Thus if our aim is the advancement or growth of knowledge, then a high probability (in the sense of the calculus of probability) cannot possibly be our aim as well: these two aims are incompatible." - C&R, p. 295

    Socials:
    - Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani
    - Come join our discord server! DM us on twitter or send us an email to get a supersecret link
    - Become a patreon subscriber here. Or give us one-time cash donations to help cover our lack of cash donations here.
    - Click dem like buttons on youtube

    How much content does the theory "dish soap is the ultimate face cleanser" have? Send your order of infinity over to incrementspodcast@gmail.com
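    The content-probability trade-off in that last quote is easy to check concretely. Here is a minimal illustrative sketch (ours, not from the episode or from Popper): it treats statements about the rain/fine example as sets of possible worlds, uses a uniform probability over those worlds, and measures content as the number of worlds a statement rules out.

    ```python
    from itertools import product

    # The four possible worlds: (it rains on Friday, it is fine on Saturday)
    worlds = list(product([True, False], repeat=2))

    def models(statement):
        """Worlds in which the statement is true."""
        return [w for w in worlds if statement(w)]

    def probability(statement):
        """Probability under a uniform distribution over the four worlds."""
        return len(models(statement)) / len(worlds)

    def content(statement):
        """Content measured as the number of worlds the statement rules out."""
        return len(worlds) - len(models(statement))

    a = lambda w: w[0]            # 'It will rain on Friday'
    b = lambda w: w[1]            # 'It will be fine on Saturday'
    ab = lambda w: w[0] and w[1]  # the conjunction ab

    for name, s in [("a", a), ("b", b), ("ab", ab)]:
        print(f"{name}: p = {probability(s):.2f}, content = {content(s)}")

    # Prints:
    # a: p = 0.50, content = 2
    # b: p = 0.50, content = 2
    # ab: p = 0.25, content = 3
    # The conjunction rules out more worlds (says more) and is less probable,
    # matching Ct(a) <= Ct(ab) >= Ct(b) while p(a) >= p(ab) <= p(b).
    ```

    Counting excluded worlds is just one convenient stand-in for Popper's class of potential falsifiers; the point is only that the content ordering and the probability ordering run in opposite directions.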

    1h 47m
  4. #92 - Confronting the Paradox of Tolerance: Christianity in the age of Trump (w/ Jonathan Rauch)

    25/09/2025

    #92 - Confronting the Paradox of Tolerance: Christianity in the age of Trump (w/ Jonathan Rauch)

    We're joined by Jonathan Rauch to discuss what it means to be a radical incrementalist, how to foment revolution on geological timescales, and whether Christianity can be a force for good in politics. Can Jon convince angry-Hitchens-atheist Vaden that Christianity has some benefits? Will both Vaden and Ben be at Sunday prayer? Follow Jonathan on his website, at Brookings, at The Atlantic, or on Bluesky.

    We discuss:
    - The constitution of knowledge and whether it's holding
    - Norms vs laws, and whether we should introduce more laws to codify norms
    - Popper's paradox of tolerance
    - How should liberals respond to illiberalism?
    - Which tactics, if any, should democrats adopt from MAGA to fight MAGA?
    - Sharp Christianity and Christian nationalism
    - Rauch's plea to Christians

    References:
    - The Constitution of Knowledge: A Defense of Truth
    - Cross Purposes: Christianity's Broken Bargain with Democracy

    Errata:
    - Jonathan Rauch is the author of nine books, not eight!

    Socials:
    - Follow us on Twitter at @JonRauch, @IncrementsPod, @BennyChugg, @VadenMasrani
    - Come join our discord server! DM us on twitter or send us an email to get a supersecret link
    - Become a patreon subscriber here. Or give us one-time cash donations to help cover our lack of cash donations here.
    - Click dem like buttons on youtube

    Anyone in Canada have a basement suite Jonathan could rent for a while? Send your address over to incrementspodcast@gmail.com

    Special Guest: Jonathan Rauch.

    1h 7m
  5. #91 - The Uses and Abuses of Statistics (w/ Ben Recht)

    04/09/2025

    #91 - The Uses and Abuses of Statistics (w/ Ben Recht)

    Professor of electrical engineering and computer science Ben Recht joins us to defend Bayesianism, AI doom, and assure us that the statisticians have everything under control. Just kidding. Recht might be even more suspicious of these things than we are. What has statistics ever done for us, really? When was the last time YOU ran a clinical trial, after all, huh? HUH? After Ben Chugg defends his life decision to do a PhD in statistics, we talk AI, cults, philosophy, Paul Meehl, and discuss Ben Recht's forthcoming book, The Irrational Decision. Check out Ben's blog, website, and his story about machine learning.

    We discuss:
    - Ben Recht's theory of blogging
    - Why is Berkeley the epicenter of AI doom?
    - Where the word "robot" came from
    - Is Bayesian reasoning responsible for AI doom?
    - Paul Meehl and his contributions to science
    - Ben Recht's bureaucratic theory of statistics
    - What on earth is null hypothesis testing?
    - What is the point of statistics?
    - "Sweet spots" and "small worlds"
    - Does science proceed by Popperian means?
    - Can Popper get around the Duhem-Quine problem?

    Errata:
    - The z-score for the Pfizer trial was 20, not 12!

    References:
    - Argmin, Ben Recht's blog
    - David Freedman, UC Berkeley
    - Paul Meehl's online course
    - Theoretical Risks and Tabular Asterisks: Sir Karl, Sir Ronald, and the Slow Progress of Soft Psychology, Paul Meehl's 1978 paper
    - Clinical versus statistical prediction: A theoretical analysis and a review of the evidence, by Meehl
    - On the near impossibility of estimating the returns to advertising
    - A Bureaucratic Theory of Statistics by Recht
    - The new riddle of induction by Goodman
    - Announcing the Irrational Decision
    - Patterns, Predictions, and Actions, textbook by Ben Recht and Moritz Hardt

    Socials:
    - Follow us on Twitter at @BeenWrekt, @IncrementsPod, @BennyChugg, @VadenMasrani
    - Come join our discord server! DM us on twitter or send us an email to get a supersecret link
    - Become a patreon subscriber here. Or give us one-time cash donations to help cover our lack of cash donations here.
    - Click dem like buttons on youtube

    What's Berkeley's next cult? Send your guess over to incrementspodcast@gmail.com

    Special Guest: Ben Recht.

    1h 17m
  6. #90 (Reaction) - Disbelieving AI 2027: Responding to "Why We're Not Ready For Superintelligence"

    18/08/2025

    #90 (Reaction) - Disbelieving AI 2027: Responding to "Why We're Not Ready For Superintelligence"

    Always the uncool kids at the table, Ben and Vaden push back against the AGI hype dominating every second episode of every second podcast. We react to "We're not ready for superintelligence" by 80,000 Hours - a bleak portrayal of the pre- and post-AGI world. Can Ben keep Vaden's sass in check? Can the 80,000 Hours team find enough cubes for AGI? Is Agent-5 listening to you RIGHT NOW?

    Listener Note: We strongly recommend watching the video for this one, available both on youtube and spotify:
    - https://www.youtube.com/@incrementspod
    - https://open.spotify.com/show/1gKKSP5HKT4Nk3i0y4UseB

    We discuss:
    - The incentives of superforecasters
    - Arguments by authority
    - Whether superintelligence is right around the corner
    - The difference between model size and data
    - Are we running out of high quality data?
    - Does training on synthetic data work?
    - The assumptions behind the AGI claims
    - The pitfalls of reasoning from trends

    References:
    - Michael I Jordan
    - Neil Lawrence
    - A Collectivist, Economic Perspective on AI (important technical paper from Jordan pushing back on doomerism)
    - Jordan article talking about dangers of using AlphaFold data
    - Nature paper showing you can't use synthetic data to train bigger models
    - Paper estimating when training data will run out (Coincidentally enough, sometime between 2027-2028)

    Socials:
    - Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani
    - Come join our discord server! DM us on twitter or send us an email to get a supersecret link
    - Become a patreon subscriber here. Or give us one-time cash donations to help cover our lack of cash donations here.
    - Click dem like buttons on youtube

    But how many cubes until we get to AGI though? Send a few of your cubes over to incrementspodcast@gmail.com

    Episode header image from here.

    1h 36m
