Normal Curves: Sexy Science, Serious Statistics

Regina Nuzzo and Kristin Sainani

Normal Curves is a podcast about sexy science & serious statistics. Ever try to make sense of a scientific study and the numbers behind it? Listen in to a lively conversation between two stats-savvy friends who break it all down with humor and clarity. Professors Regina Nuzzo of Gallaudet University and Kristin Sainani of Stanford University discuss academic papers journal club-style — except with more fun, less jargon, and some irreverent, PG-13 content sprinkled in. Join Kristin and Regina as they dissect the data, challenge the claims, and arm you with tools to assess scientific studies on your own.

  1. Ultramarathons: Can vitamin D protect your bones?

    OCT 6

    Ultramarathons: Can vitamin D protect your bones?

    Ultramarathoners push their bodies to the limit, but can a giant pre-race dose of vitamin D really keep their bones from breaking down? In this episode, we dig into a trial that tested this claim – and found a statistical endurance event of its own: six highly interchangeable papers sliced from one small study. Expect missing runners, recycled figures, and a peer review that reads like stand-up comedy, plus a quick lesson in using degrees of freedom as your statistical breadcrumbs.

    Statistical topics: Data cleaning and validation; Degrees of freedom; Exploratory vs. confirmatory analysis; False positives and Type I error; Intention-to-treat principle; Multiple testing; Open data and transparency; P-hacking; Salami slicing; Parametric vs. non-parametric tests; Peer review quality; Randomized controlled trials; Research reproducibility; Statistical sleuthing

    Methodological morals
    “Degrees of freedom are the breadcrumbs in statistical sleuthing. They reveal the sample size even when the authors do not.”
    “Publishing the same study again and again with only the outcomes swapped is Mad Libs Science, better known as salami slicing.”

    References
    Boswell R. Pre-race vitamin D could do wonders for ultrarunners’ bone health, according to science. Runner’s World. September 25, 2025.
    Mieszkowski J, Stankiewicz B, Kochanowicz A, et al. Ultra-Marathon-Induced Increase in Serum Levels of Vitamin D Metabolites: A Double-Blind Randomized Controlled Trial. Nutrients. 2020;12(12):3629. doi:10.3390/nu12123629
    Mieszkowski J, Borkowska A, Stankiewicz B, et al. Single High-Dose Vitamin D Supplementation as an Approach for Reducing Ultramarathon-Induced Inflammation: A Double-Blind Randomized Controlled Trial. Nutrients. 2021;13(4):1280. doi:10.3390/nu13041280
    Mieszkowski J, Brzezińska P, Stankiewicz B, et al. Direct Effects of Vitamin D Supplementation on Ultramarathon-Induced Changes in Kynurenine Metabolism. Nutrients. 2022;14(21):4485. doi:10.3390/nu14214485
    Mieszkowski J, Brzezińska P, Stankiewicz B, et al. Vitamin D Supplementation Influences Ultramarathon-Induced Changes in Serum Amino Acid Levels, Tryptophan/Branched-Chain Amino Acid Ratio, and Arginine/Asymmetric Dimethylarginine Ratio. Nutrients. 2023;15(16):3536. doi:10.3390/nu15163536
    Stankiewicz B, Mieszkowski J, Kochanowicz A, et al. Effect of Single High-Dose Vitamin D3 Supplementation on Post-Ultra Mountain Running Heart Damage and Iron Metabolism Changes: A Double-Blind Randomized Controlled Trial. Nutrients. 2024;16(15):2479. doi:10.3390/nu16152479
    Stankiewicz B, Kochanowicz A, et al. Single high-dose vitamin D supplementation impacts ultramarathon-induced changes in serum levels of bone turnover markers: a double-blind randomized controlled trial. J Int Soc Sports Nutr. 2025;22(1):2561661. doi:10.1080/15502783.2025.2561661

    Kristin and Regina’s online courses:
    Demystifying Data: A Modern Approach to Statistical Understanding
    Clinical Trials: Design, Strategy, and Analysis
    Medical Statistics Certificate Program
    Writing in the Sciences
    Epidemiology and Clinical Research Graduate Certificate Program

    Programs that we teach in:
    Epidemiology and Clinical Research Graduate Certificate Program

    Find us on:
    Kristin - LinkedIn & Twitter/X
    Regina - LinkedIn & ReginaNuzzo.com

    00:00 Intro & claim of the episode
    00:44 Runner’s World headline: Vitamin D for ultramarathoners
    02:03 Kristin’s connection to running and vitamin D skepticism
    03:32 Ultramarathon world—Regina’s stories and Death Valley race
    06:29 What ultramarathons do to your bones
    08:02 Boy story: four stress fractures in one race
    10:00 Study design—40 male runners in Poland
    11:33 Missing flow diagram and violated intention-to-treat
    13:02 The intervention: 150,000 IU megadose
    15:09 Blinding details and missing randomization info
    17:13 Measuring bone biomarkers—no primary outcome specified
    19:12 The wrong clinicaltrials.gov registration
    20:35 Discovery of six papers from one dataset (salami slicing)
    23:02 Why salami slicing misleads readers
    25:42 Inconsistent reporting across papers
    29:11 Changing inclusion criteria and sloppy methods
    31:06 Typos, Polish notes, and misnumbered references
    32:39 Peer review comedy gold—“Please define vitamin D”
    36:06 Reviewer laziness and p-hacking admission
    39:13 Results: implausible bone growth mid-race
    41:16 Degrees of freedom sleuthing reveals hidden sample sizes
    47:07 Open data? Kristin emails the authors
    48:42 Lessons from Kristin’s own ultramarathon dataset
    51:22 Fishing expeditions and misuse of parametric tests
    53:07 Strength of evidence: one smooch each
    54:44 Methodologic morals—Mad Libs Science & degrees of freedom breadcrumbs
    56:12 Anyone can spot red flags—trust your eyes
    57:34 Outro: skip the vitamin D shot before your next run
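    The degrees-of-freedom breadcrumb trick the hosts describe can be sketched in a few lines. This is our own illustrative code, not the episode’s: the helper name and the reported t(38) are hypothetical, and it assumes a standard two-sample t-test, where df = n1 + n2 − 2.

```python
def total_n_from_t_test_df(df: int) -> int:
    """Total sample size implied by the df of a two-sample t-test.

    For a two-sample t-test (equal variances), df = n1 + n2 - 2,
    so the reported df reveals the analyzed N even when the paper
    never states it.
    """
    return df + 2

# Hypothetical report: a paper states t(38) = 2.1 for a two-group comparison.
print(total_n_from_t_test_df(38))  # 40 participants actually analyzed
```

    Comparing that implied N against the enrollment a paper claims is exactly the kind of sleuthing the episode walks through.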

    59 min
  2. P-Values: Are we using a flawed statistical tool?

    SEP 22

    P-Values: Are we using a flawed statistical tool?

    P-values show up in almost every scientific paper, yet they’re one of the most misunderstood ideas in statistics. In this episode, we break from our usual journal-club format to unpack what a p-value really is, why researchers have fought about it for a century, and how that famous 0.05 cutoff became enshrined in science. Along the way, we share stories from our own papers—from a Nature feature that helped reshape the debate to a statistical sleuthing project that uncovered a faulty method in sports science. The result: a behind-the-scenes look at how one statistical tool has shaped the culture of science itself.

    Statistical topics: Bayesian statistics; Confidence intervals; Effect size vs. statistical significance; Fisher’s conception of p-values; Frequentist perspective; Magnitude-Based Inference (MBI); Multiple testing / multiple comparisons; Neyman-Pearson hypothesis testing framework; P-hacking; Posterior probabilities; Preregistration and registered reports; Prior probabilities; P-values; Researcher degrees of freedom; Significance thresholds (p < 0.05); Simulation-based inference; Statistical power; Statistical significance; Transparency in research; Type I error (false positive); Type II error (false negative); Winner’s Curse

    Methodological morals
    “If p-values tell us the probability the null is true, then octopuses are psychic.”
    “Statistical tools don't fool us, blind faith in them does.”

    References
    Nuzzo R. Scientific method: statistical errors. Nature. 2014;506(7487):150-2. doi:10.1038/506150a
    Nuzzo R. Scientists perturbed by loss of stat tools to sift research fudge from fact. Scientific American. 2015:16-18.
    Nuzzo RL. The inverse fallacy and interpreting P values. PM&R. 2015;7(3):311-4. doi:10.1016/j.pmrj.2015.02.011
    Nuzzo R. Probability wars. New Scientist. 2015;225(3012):38-41.
    Sainani KL. Putting P values in perspective. PM&R. 2009;1(9):873-7. doi:10.1016/j.pmrj.2009.07.003
    Sainani KL. Clinical versus statistical significance. PM&R. 2012;4(6):442-5. doi:10.1016/j.pmrj.2012.04.014
    McLaughlin MJ, Sainani KL. Bonferroni, Holm, and Hochberg corrections: fun names, serious changes to p values. PM&R. 2014;6(6):544-6. doi:10.1016/j.pmrj.2014.04.006
    Sainani KL. The Problem with "Magnitude-based Inference". Med Sci Sports Exerc. 2018;50(10):2166-2176. doi:10.1249/MSS.0000000000001645
    Sainani KL, Lohse KR, Jones PR, Vickers A. Magnitude-based Inference is not Bayesian and is not a valid method of inference. Scand J Med Sci Sports. 2019;29(9):1428-1436. doi:10.1111/sms.13491
    Lohse KR, Sainani KL, Taylor JA, Butson ML, Knight EJ, Vickers AJ. Systematic review of the use of "magnitude-based inference" in sports science and medicine. PLoS One. 2020;15(6):e0235318. doi:10.1371/journal.pone.0235318
    Wasserstein RL, Lazar NA. The ASA statement on p-values: context, process, and purpose. The American Statistician. 2016;70(2):129-133.

    (00:00) - Intro & claim of the episode
    (01:00) - Why p-values matter in science
    (02:44) - What is a p-value? (ESP guessing game)
    (06:47) - Big vs. small p-values (psychic octopus example)
    (08:29) - Significance thresholds and the 0.05 rule
    (09:00) - Regina’s Nature paper on p-values
    (11:32) - Misconceptions about p-values
    (13:18) - Fisher vs. Neyman-Pearson (history & feud)
    (16:26) - Botox analogy and type I vs. type II errors
    (19:41) - Dating app analogies for false positives/negatives
    (22:02) - How the 0.05 cutoff got enshrined
    (23:46) - Misinterpretations: statistical vs. practical significance
    (25:22) - Effect size, sample size, and “statistically discernible”
    (25:51) - P-hacking and researcher degrees of freedom
    (28:52) - Transparency, preregistration, and open science
    (29:58) - The 0.05 cutoff trap (p = 0.049 vs 0.051)
    (30:24) - The biggest misinterpretation: what p-values actually mean
    (32:35) - Paul the psychic octopus (worked example)
    (35:05) - Why Bayesian statistics differ
    (38:55) - Why aren’t we all Bayesian? (probability wars)
    (40:11) - The ASA p-value statement (behind the scenes)
    (42:22) - Key principles from the ASA white paper
    (43:21) - Wrapping up Regina’s paper
    (44:39) - Kristin’s paper on sports science (MBI)
    (47:16) - What MBI is and how it spread
    (49:49) - How Kristin got pulled in (Christie Aschwanden & FiveThirtyEight)
    (53:11) - Critiques of MBI and “Bayesian monster” rebuttal
    (55:20) - Spreadsheet autopsies (Welsh & Knight)
    (57:11) - Cherry juice example (why MBI misleads)
    (59:28) - Rebuttals and smoke & mirrors from MBI advocates
    (01:02:01) - Winner’s Curse and small samples
    (01:02:44) - Twitter fights & “establishment statistician”
    (01:05:02) - Cult-like following & Matrix red pill analogy
    (01:07:12) - Wrap-up
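    The psychic-octopus worked example boils down to a one-sided binomial calculation: eight correct predictions in eight tries under a 50/50 guessing null. A minimal sketch of that arithmetic (our own code, not the episode’s), using only the standard library:

```python
from math import comb

def one_sided_p(successes: int, trials: int, p_null: float = 0.5) -> float:
    """P(X >= successes) under the null hypothesis of random guessing."""
    return sum(comb(trials, k) * p_null**k * (1 - p_null)**(trials - k)
               for k in range(successes, trials + 1))

# Paul the octopus: 8 correct match predictions out of 8 attempts
p = one_sided_p(8, 8)
print(round(p, 4))  # 0.0039
```

    That tiny p-value says the data would be surprising if Paul were guessing; it does not say there is a 0.4% chance the null is true, which is exactly the misinterpretation the moral is skewering.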

    1h 13m
  3. Exercise and Cancer: Does physical activity improve colon cancer survival?

    SEP 8

    Exercise and Cancer: Does physical activity improve colon cancer survival?

    Exercise has long been hailed as cancer-fighting magic, but is there hard evidence behind the hype? In this episode, we tackle the CHALLENGE trial, a large phase III study of colon cancer patients that tested whether prescribed exercise could improve cancer-free survival. We translate clinical jargon into plain English, show why ratio statistics make splashy headlines while absolute differences tell the real story, and take a detour into why statisticians think survival analysis is downright sexy. And we even bring in a classic reality show to make sense of the numbers.

    Statistical topics: Data and Safety Monitoring Board (DSMB); Hazard ratios; Intention-to-treat analysis; Interim analyses; Kaplan-Meier curves; Phase III trials; Randomized clinical trials; Rates and rate ratios; Relative vs. absolute differences; Stratified randomization with minimization; Survival analysis; Time-to-event variables

    Methodological morals
    “Ratio statistics sell headlines. Absolute differences sell truth.”
    “Survival analysis is this sexy stats tool that makes every moment and every Cox count.”

    References
    Courneya KS, Vardy JL, O'Callaghan CJ, et al. Structured Exercise after Adjuvant Chemotherapy for Colon Cancer. NEJM. 2025;393:13-25.
    Rabin RC. Are Marathons and Extreme Running Linked to Colon Cancer? The New York Times. Aug 19, 2025.
    Sainani KL. Introduction to survival analysis. PM&R. 2016;8:580-85.
    Sainani KL. Making sense of intention-to-treat. PM&R. 2010;2:209-13.

    Thanks to Caitlin Goodrich for the episode topic tip!

    (00:00) - Intro
    (05:42) - Two different types of cancer studies
    (08:12) - Why might exercise affect cancer?
    (10:05) - Phase III trials are different
    (12:40) - Who was in the CHALLENGE trial?
    (13:31) - Stratified randomization with minimization
    (15:05) - The exercise prescription
    (18:23) - What did the CHALLENGE trial measure?
    (19:10) - Disease-free survival
    (21:05) - Data and Safety Monitoring Board – what do they do?
    (23:41) - Participants and adherence to exercise
    (26:00) - Intention-to-treat analysis
    (29:04) - Survival analysis overview
    (30:57) - Kaplan-Meier curves
    (33:33) - Reality-show analogy
    (36:00) - Ratio statistics are confusing
    (38:36) - Hazard ratios
    (46:09) - Wrap-up, rating, and methodological morals
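    The ratio-versus-absolute moral is easy to see with numbers. This sketch uses made-up risks of our own, not results from the CHALLENGE trial:

```python
# Hypothetical numbers: the same pair of risks yields a dramatic-sounding
# ratio but a modest absolute difference.
control_risk = 0.02   # 2% of control patients have the event
treated_risk = 0.01   # 1% of treated patients have the event

relative_risk = treated_risk / control_risk   # 0.5 -> "risk cut in half!"
absolute_diff = control_risk - treated_risk   # ~0.01 -> 1 percentage point
number_needed_to_treat = 1 / absolute_diff    # ~100 patients per event avoided

print(relative_risk, absolute_diff, number_needed_to_treat)
```

    "Cuts risk in half" and "helps roughly one patient in a hundred" describe the same data; headlines run with the first, while the second is what a patient actually experiences.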

    49 min
  4. Age Gaps: How much does age matter in dating?

    AUG 25

    Age Gaps: How much does age matter in dating?

    Are we all secretly ageist when it comes to dating? We put the stereotype that older men prefer younger women under the microscope using data from thousands of blind dates. What we found surprised us: the “age penalty” was real but microscopic, women wanted younger partners too, and hard age cutoffs weren’t so hard after all. Along the way, we unpack statistical significance versus practical importance, play with the infamous “half your age plus seven” rule, and imagine what it would take for love to die out… somewhere around age 628.

    Statistical topics: Discontinuous regression; Effect sizes; Extrapolation pitfalls; Linear regression; Logistic regression; Odds ratios; Open data; Statistical significance vs. practical significance

    Methodological morals
    “Do not be swept off your feet by statistical significance. Tiny effects in bed are still tiny.”
    “Fancy units sound smart, but plain English wins hearts.”

    Show notes
    Technical Appendix (with step-by-step explanations)

    References
    Eastwick PW, Finkel EJ, Meza EM, Ammerman K. No gender differences in attraction to young partners: A study of 4500 blind dates. Proc Natl Acad Sci U S A. 2025;122(5):e2416984122.
    Matchmaking dataset and code on Open Science Framework: https://osf.io/rkm2d/?view_only=a0fe91dae0464077af7772e6890a8151
    Nuzzo RL. Communicating measures of relative risk in plain English. PM&R. 2022;14(2):283-7.
    O'Rell, Max. Her Royal Highness, Woman: And His Majesty--Cupid. Abbey Press, 1901.
    Sainani KL. Logistic regression. PM&R. 2014;6(12):1157-62.
    Sainani KL. Understanding odds ratios. PM&R. 2011;3:263-7.
    Sainani KL. Clinical versus statistical significance. PM&R. 2012;4:442-5.

    (00:00) - Intro
    (04:01) - Half-your-age-plus-seven rule
    (09:15) - Matchmaking service for the study
    (17:05) - Blind dates as natural experiments
    (21:55) - Regression results part 1: Age penalties?
    (28:38) - Wait, how big of an effect was that?
    (34:09) - Odds ratio of a second date
    (38:01) - Surprising age pair-ups
    (40:53) - Regression results part 2: Deal-breaking age limits?
    (44:27) - Why the patterns may or may not be true
    (46:30) - Wrap-up, ratings, and methodological morals
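    The “half your age plus seven” rule the hosts play with is simple arithmetic, so it is easy to sketch. These helper names are our own, purely for illustration:

```python
def youngest_acceptable(age: float) -> float:
    """'Half your age plus seven': lower bound on a partner's age."""
    return age / 2 + 7

def oldest_acceptable(age: float) -> float:
    """Mirror image of the rule: you are someone's lower bound
    when they are (your age - 7) * 2 years old."""
    return (age - 7) * 2

print(youngest_acceptable(40))  # 27.0
print(oldest_acceptable(40))    # 66.0
```

    Whether real daters respect any such cutoff is exactly what the discontinuous-regression analysis in the episode puts to the test.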

    50 min
  5. Your Brain on AI: Is ChatGPT making us mentally lazy?

    AUG 11

    Your Brain on AI: Is ChatGPT making us mentally lazy?

    ChatGPT is melting our brainpower, killing creativity, and making us soulless — or so the headlines imply. We dig into the study behind the claims, starting with quirky bar charts and mysterious sample sizes, then winding through hairball-like brain diagrams and tens of thousands of statistical tests. Our statistical sleuthing leaves us with questions, not just about the results, but about whether this was science’s version of a first date that looked better on paper.

    Statistical topics: ANOVA; Bar graphs; Data visualization; False Discovery Rate correction; Multiple testing; Preprints; Statistical sleuthing

    Methodological morals
    "Treat your preprints like your blind dates. Show up showered and with teeth brushed."
    "Always check your N. Then check it again."
    "Never make a bar graph that just shows p-values. Ever."

    Link to paper

    (00:00) - Intro
    (03:46) - Media coverage of the study
    (08:35) - The experiment
    (12:09) - Sample size issues
    (13:11) - Bar chart sleuthing
    (19:15) - Blind date analogy
    (22:57) - Interview results
    (29:07) - Simple text analysis results
    (33:07) - Natural language processing results
    (40:03) - N-gram and ontology analysis results
    (44:58) - Teacher evaluation results
    (51:33) - Neuroimaging analysis
    (59:35) - Multiple testing and connectivity issues
    (01:05:13) - Brain adaptation results
    (01:08:50) - Wrap-up, rating, and methodological morals

    1h 14m
  6. The Backfire Effect: Can fact-checking make false beliefs stronger?

    JUL 28

    The Backfire Effect: Can fact-checking make false beliefs stronger?

    Can correcting misinformation make it worse? The “backfire effect” claims that debunking myths can actually make false beliefs stronger. We dig into the evidence — from ghost studies to headline-making experiments — to see if this psychological plot twist really holds up. Along the way, we unpack interaction effects, randomization red flags, and what happens when bad citations take on a life of their own. Plus: dirty talk analogies, statistical sleuthing, and why “familiarity” might be your brain’s sneakiest trick.

    Statistical topics: Computational replication; Replication; Block randomization; Problems in randomization; Bad citing; Interactions in regression; Unpublished "Ghost Paper" PDF retrieved from the Wayback Machine

    Citations
    Nyhan B, Reifler J. When corrections fail: The persistence of political misperceptions. Political Behavior. 2010;32:303-330.
    Skurnik I, Yoon C, Schwarz N. “Myths & Facts” about the flu: Health education campaigns can reduce vaccination intentions. Unpublished manuscript, PDF posted separately.
    Schwarz N, Sanna LJ, Skurnik I, et al. Metacognitive experiences and the intricacies of setting people straight: Implications for debiasing and public information campaigns. Advances in Experimental Social Psychology. 2007;39:127-61.
    Lewandowsky S, Ecker UKH, Seifert CM, et al. Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest. 2012;13:106-131.
    Pluviano S, Watt C, Della Sala S. Misinformation lingers in memory: Failure of three pro-vaccination strategies. PLOS ONE. 2017;12:e0181640.
    Pluviano S, Watt C, Ragazzini G, et al. Parents’ beliefs in misinformation about vaccines are strengthened by pro-vaccine campaigns. Cognitive Processing. 2019;20:325-31.
    Wood T, Porter E. The elusive backfire effect: Mass attitudes’ steadfast factual adherence. Political Behavior. 2019;41:135-63.
    Nyhan B, Porter E, Reifler J, Wood TJ. Taking fact-checks literally but not seriously? The effects of journalistic fact-checking on factual beliefs and candidate favorability. Political Behavior. 2020;42:939-60.
    Ecker UKH, Hogan JL, Lewandowsky S. Reminders and repetition of misinformation: Helping or hindering its retraction? Journal of Applied Research in Memory and Cognition. 2017;6:185-92.
    Swire B, Ecker UKH, Lewandowsky S. The role of familiarity in correcting inaccurate information. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2017;43:1948-61.
    Ecker UKH, O’Donnell M, Ang LC, et al. The effectiveness of short- and long-format retractions on misinformation belief and recall. British Journal of Psychology. 2020;111:36-54.
    Ecker UKH, Sharkey CXM, Swire-Thompson B. Correcting vaccine misinformation: A failure to replicate familiarity or fear-driven backfire effects. PLOS ONE. 2023;18:e0281140.
    Cook J, Lewandowsky S. The Debunking Handbook. University of Queensland. 2011.
    Lewandowsky S, Cook J, Ecker UKH, et al. The Debunking Handbook 2020. Available at https://sks.to/db2020.
    Swire-Thompson B, DeGutis J, Lazer D. Searching for the backfire effect: Measurement and design considerations. Journal of Applied Research in Memory and Cognition. 2020;9:286-99.

    (00:00) - Intro
    (02:05) - What is the backfire effect?
    (03:55) - The 2010 paper that panicked fact-checkers
    (06:25) - The ghost paper: what it really said
    (12:35) - Study design of the 2010 paper
    (18:25) - Results of the 2010 paper
    (19:55) - Crossover interactions, regression models, and intimate talk
    (25:24) - Missing data and cleaning your bedroom analogy
    (28:11) - Fact-checking the fact-checking paper
    (33:07) - Replication and pushing the data to the limit
    (36:59) - The purported backfire effect spreads
    (41:06) - The 2017 paper that got a lot of attention
    (44:25) - Statistical sleuthing the 2017 paper
    (48:51) - Will researchers double down on their earlier conclusions?
    (54:46) - A review paper sums it all up
    (56:00) - Wrap up, rating, and methodological morals

    58 min
  7. Dating Wishlists: Are we happier when we get what we want in a mate?

    JUL 14

    Dating Wishlists: Are we happier when we get what we want in a mate?

    Loyal, funny, hot — you’ve probably got a wish list for your dream partner. But does checking all your boxes actually lead to happily ever after? In this episode, we dive into a massive global study that put the “ideal partner” hypothesis to the test. Do people really know what they want, and does getting it actually make them happier? We explore surprising statistical insights from over 10,000 romantics in 43 countries, from mean-centering and interaction effects to the good-catch confounder. Along the way, we dig into dessert metaphors, partner boat-count regression models, and the one trait that people say doesn’t matter — but secretly makes them happiest.

    Statistical topics: Regression; Random Slopes and Intercepts (Random Effects) in Regression; Standardized Beta Coefficients in Regression; Interaction Effects in Regression; Mean Centering; Exploratory Analyses

    Methodological morals
    “Good science bares it all.”
    “When the world isn't one size fits all, don't fit just one line; use random slopes and intercepts.”

    References
    Eastwick PW, Sparks J, Finkel EJ, Meza EM, et al. A worldwide test of the predictive validity of ideal partner preference matching. J Pers Soc Psychol. 2025;128(1):123-146. doi:10.1037/pspp0000524
    Love Factually Podcast: https://www.lovefactuallypod.com/

    (00:00) - Intro
    (04:57) - Actual dating profile wishlists vs study wishlists
    (09:12) - Juicy paper details
    (18:31) - What the study actually asked – wishlist, partner resume, relationship satisfaction
    (24:10) - Linear regression illustrated through number of boats your partner has
    (30:37) - Standardized regression coefficients illustrated through spouse height concordance
    (34:52) - Good catch confounder: We all just want the same high-quality ice cream / mate
    (39:46) - Does your personalized wishlist matter? Results
    (42:01) - Wishlist regression interaction effects: like chocolate and peanut butter
    (45:51) - Partner traits result in happiness bonus points
    (49:51) - What do we say we want – and what really makes us happy? Surprise
    (54:10) - Gender stereotypes and whether they held up
    (56:51) - Random effects models and boats again
    (59:30) - Other cool things they did
    (01:00:41) - One-minute paper summary
    (01:02:23) - Wrap-up, rate the claim, methodological morals

    1h 6m
  8. Stats Reunion: What have we learned so far?

    JUN 30

    Stats Reunion: What have we learned so far?

    It’s our first stats reunion! In this special review episode, we revisit favorite concepts from past episodes—p-values, multiple testing, regression adjustment—and give them fresh personalities as characters. Meet the seductive false positive, the clingy post hoc ex, and Charlotte, the well-meaning but overfitting idealist.

    Statistical topics: Bar charts vs. box plots; Bonferroni correction; Confounding; False positives; Multiple testing; Multivariable regression; Outcome switching; Over-adjustment; Post hoc analysis; Pre-registration; Residual confounding; Statistical adjustment using regression; Subgroup analysis; Unmeasured confounding

    Review Sheet

    References
    Nuzzo RL. The Box Plots Alternative for Visualizing Quantitative Data. PM&R. 2016;8(3):268-72. doi:10.1016/j.pmrj.2016.02.001
    Sainani KL. The problem of multiple testing. PM&R. 2009;1(12):1098-103. doi:10.1016/j.pmrj.2009.10.004

    (00:00) - Intro
    (02:26) - Mailbag
    (06:42) - P-values
    (12:43) - Multiple Testing Guy
    (16:05) - Bonferroni solution
    (17:11) - Post hoc analysis ex
    (22:22) - Subgroup analysis person
    (29:34) - Statistical adjustment idealist
    (43:00) - Unmeasured confounding
    (44:25) - Residual confounding
    (48:31) - Over-adjustment
    (53:48) - Wrap-up

    56 min
