Research Notes

Eric Green

Research Notes is a podcast about the reasoning behind research. Each episode features a conversation with the author of a study discussed at ghrbook.com. We trace how the research question was formed, how causal logic was mapped, how analytic decisions were made, and how uncertainty was interpreted. For teachers, students, and practitioners in global health, epidemiology, and data science, Research Notes makes research reasoning visible.

Episodes

  1. 1 DAY AGO

    Creating a synthetic South Africa to study tobacco taxes

    In this episode of Research Notes, I talk with economist Dr. Grieve Chelwa about his paper using the synthetic control method to estimate the impact of cigarette excise taxes on smoking in South Africa. We start with a simple question: did higher taxes actually reduce cigarette consumption, or were other forces—economic change, cultural shifts, or declining trends already underway—doing the work? Chelwa explains why answering that question rigorously matters for both public health policy and causal inference.

    We then walk through the core methodological challenge: cigarette consumption in South Africa was already declining before the major tax increases began in the mid-1990s. That makes simple before-and-after comparisons misleading. Chelwa introduces synthetic control as a way to construct a credible counterfactual—an estimate of what would have happened in South Africa had the tax policy never been implemented. The method builds a “synthetic” version of South Africa by combining data from comparable countries that did not enact similar tax changes, allowing for a more defensible causal comparison.

    Chelwa describes how he constructed the donor pool of countries, emphasizing both the “science” (data completeness, predictors of smoking behavior like income and prices) and the “art” (whether the comparison countries make intuitive sense). From an initial pool of roughly two dozen middle-income countries, the method ultimately selected a small weighted subset—countries like Brazil and Argentina—to construct the synthetic control. The result is a counterfactual trend that closely matches South Africa before the policy, then diverges afterward.

    We discuss the key finding: while smoking was already declining, the decline accelerated substantially after the tax increases compared to the synthetic control. This creates a growing gap over time, illustrating a dynamic treatment effect rather than a single static estimate. Chelwa highlights this as one of the strengths of synthetic control—it allows researchers to see how policy effects evolve year by year.

    The conversation also covers robustness checks, including “leave-one-out” analyses to ensure results are not driven by any single country in the donor pool. Chelwa emphasizes that while the results can feel almost “too good to be true,” careful validation and alignment with existing literature help build confidence in the findings.

    We close with Chelwa reflecting on his career trajectory—from a PhD student immersed in the “credibility revolution” in economics to a more interdisciplinary scholar thinking broadly about development and policy. He shares a memorable moment from the project: running the model for the first time, doubting the result, and coming back the next day to confirm it held. As he puts it, that moment captures something essential about research—the mix of skepticism, rigor, and excitement that defines the scientific process.
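The donor-weighting idea Chelwa describes can be sketched in a few lines. This is a minimal illustration with invented numbers, not the paper's data or code: it searches the weight simplex for the convex combination of two hypothetical donor series that best matches the treated unit's pre-treatment trajectory.

```python
import numpy as np

# Hypothetical per-capita cigarette-consumption series, pre-treatment years
# only. These numbers are invented for illustration, not the paper's data.
treated = np.array([1.30, 1.28, 1.25, 1.23, 1.20])   # "South Africa"
donor_a = np.array([1.40, 1.37, 1.33, 1.30, 1.26])   # a "Brazil"-like donor
donor_b = np.array([1.10, 1.10, 1.09, 1.08, 1.08])   # an "Argentina"-like donor

# Synthetic control: find nonnegative donor weights summing to 1 that
# minimize pre-treatment mismatch. With two donors the simplex is a line,
# so a simple grid search over w in [0, 1] is enough.
best_w, best_err = 0.0, float("inf")
for w in np.linspace(0.0, 1.0, 1001):
    synthetic = w * donor_a + (1.0 - w) * donor_b
    err = float(np.mean((treated - synthetic) ** 2))
    if err < best_err:
        best_w, best_err = w, err

weights = np.array([best_w, 1.0 - best_w])
print("donor weights:", weights.round(3), "pre-treatment MSE:", round(best_err, 6))

# After the policy, the year-by-year effect is the gap between the treated
# unit's observed outcomes and the weighted donor combination — the dynamic,
# growing divergence discussed in the episode.
```

Real applications (e.g. the Synth approach) weight on predictors as well as outcomes and handle many donors, but the simplex-constrained fit above is the core mechanic.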

    19 min
  2. 9 MAR

    What do we mean by 'clinically meaningful'?

    What does “clinically meaningful” actually mean in psychiatry? Compass Pathways recently reported Phase 3 results for COMP360, a synthetic psilocybin treatment for treatment-resistant depression. The company said 39% of treated patients achieved a “clinically meaningful” reduction in symptoms. But who decides what counts as meaningful? And how should we interpret a 3–4 point difference on a scale like MADRS?

    In this episode of Research Notes, I talk with Dr. Jerrold “Jerry” Rosenbaum, Stanley Cobb Professor of Psychiatry at Harvard Medical School and director of the Massachusetts General Hospital Center for the Neuroscience of Psychedelics. Dr. Rosenbaum was not involved in the Compass study, but he has been closely watching the field and was quoted in STAT News saying the results “probably meet the bar for approval” but do not “shout out to you that this is miraculous.”

    We discuss:

    - What makes a treatment effect clinically meaningful in psychiatry
    - How clinicians think about response, remission, and symptom scales like MADRS
    - Why Compass introduced a new category of “clinically meaningful” improvement
    - How restrictive trial criteria can make psychiatric studies hard to interpret
    - Why average effects may hide meaningful benefit in subgroups
    - Whether a 3–4 point difference on MADRS matters clinically
    - Why durability, cost, and functional unblinding matter for psychedelic treatments

    A key point from Dr. Rosenbaum: psychiatric trial outcomes are not just numbers on a page. They are consensus-based tools meant to approximate something much messier and more human — whether a person is suffering less, functioning better, and able to live their life again.

    For more: https://ghrbook.com/notes/clinically-meaningful.html
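The response/remission distinction can be made concrete with a small sketch. The thresholds here (a 50% or greater reduction for "response", an endpoint score of 10 or below for "remission") are commonly used conventions in the depression-trial literature, not the Compass trial's own definitions, and the patient scores are invented:

```python
# MADRS runs 0-60, higher = more severe. The thresholds below are common
# conventions (>=50% reduction = response; endpoint <= ~10 = remission);
# individual trials define their own cutoffs.
def classify_madrs(baseline: int, endpoint: int,
                   response_frac: float = 0.5,
                   remission_cutoff: int = 10) -> dict:
    """Classify one patient's MADRS change under the stated conventions."""
    change = baseline - endpoint
    pct_reduction = change / baseline if baseline else 0.0
    return {
        "change": change,
        "pct_reduction": round(pct_reduction, 3),
        "response": pct_reduction >= response_frac,
        "remission": endpoint <= remission_cutoff,
    }

# A 3-4 point *mean* difference can look small on a 0-60 scale, yet an
# individual patient moving from 32 to 15 counts as a responder even
# without reaching remission — averages can hide this heterogeneity.
print(classify_madrs(32, 15))
```

This is part of Rosenbaum's point: the categories are consensus tools layered on top of a continuous scale, so where the cutoffs sit changes who counts as "meaningfully" improved.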

    27 min
  3. 25 FEB

    Episiotomy, Hemorrhage & DAGs

    In this interview, I speak with Dr. Judith Lieber (London School of Hygiene & Tropical Medicine) about her recent paper in The Lancet Global Health examining episiotomy and postpartum haemorrhage in women with moderate or severe anaemia.

    I originally came across this paper while searching for a real-world example to teach directed acyclic graphs (DAGs). It turned out to be a perfect case: clinically important, analytically rigorous, and explicit about how a DAG guided the study design and adjustment strategy.

    The study draws on data from the WOMAN-2 trial — a large, international trial of tranexamic acid conducted in Pakistan, Nigeria, Tanzania, and Zambia, focused on postpartum bleeding in women with moderate or severe anaemia. Judy joined the trial team toward the end to conduct exploratory analyses using this rich dataset of over 15,000 women.

    In this conversation, we focus primarily on methods:

    - How drawing the DAG clarified the causal question
    - How it determined what to adjust for — and what to avoid adjusting for
    - The challenge of distinguishing confounders from mediators
    - Using proxies when key confounders (like shoulder dystocia) are unmeasured
    - Conducting a quantitative bias analysis to bound potential unmeasured confounding
    - Balancing complexity and readability when building a DAG

    We also discuss Judy’s pathway into epidemiology, her work at LSHTM’s Clinical Trials Unit, and her current project tackling time-varying treatment decisions with another (even more complicated) DAG.

    This is a practical, applied conversation about how causal diagrams are actually used in real research — not as theoretical exercises, but as tools for clarifying assumptions, structuring models, and understanding limitations.

    If you teach causal inference, work with observational data, or are trying to move beyond “control for everything” regression thinking, this is a great example of DAGs in action.
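To illustrate the confounder-versus-mediator distinction the conversation centers on, here is a toy DAG in plain Python. The graph structure is hypothetical (it is not the paper's actual diagram), but it shows how a node's position relative to exposure and outcome dictates whether to adjust for it:

```python
# A toy DAG around episiotomy (exposure) and postpartum haemorrhage
# (outcome). Edges point from cause to effect. The structure is invented
# for illustration; variables like shoulder_dystocia stand in for the
# kinds of nodes discussed in the episode.
DAG = {
    "shoulder_dystocia": ["episiotomy", "haemorrhage"],  # common cause
    "episiotomy": ["perineal_trauma"],                   # exposure
    "perineal_trauma": ["haemorrhage"],                  # mediator
    "haemorrhage": [],                                   # outcome
}

def descendants(graph, node):
    """All nodes reachable from `node` along directed edges."""
    seen, stack = set(), list(graph.get(node, []))
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(graph.get(n, []))
    return seen

exposure, outcome = "episiotomy", "haemorrhage"
for v in ["shoulder_dystocia", "perineal_trauma"]:
    on_causal_path = v in descendants(DAG, exposure)
    causes_both = (exposure in descendants(DAG, v)
                   and outcome in descendants(DAG, v))
    if on_causal_path:
        role = "mediator: do NOT adjust (it carries the effect)"
    elif causes_both:
        role = "confounder: adjust, or proxy it if unmeasured"
    else:
        role = "other"
    print(f"{v} -> {role}")
```

Adjusting for the mediator would block part of the very effect being estimated, while failing to adjust for (or proxy) the common cause leaves the estimate confounded — which is exactly why drawing the DAG before choosing the adjustment set matters.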

    14 min
