Machine learning for the physics of climate

Citation: Bracco, A., Brajard, J., Dijkstra, H. A., Hassanzadeh, P., Lessig, C., & Monteleoni, C. (2025). Machine learning for the physics of climate. Nature Reviews Physics, 7, 6–20. https://doi.org/10.1038/s42254-024-00776-3

Main Takeaways:

Breaking the El Niño Spring Barrier: For decades, forecasts of the El Niño Southern Oscillation hit a hard wall at roughly 6 months of lead time, a limit known as the spring predictability barrier. Convolutional neural networks trained on a mix of climate-model and reanalysis data have shattered this ceiling, delivering skillful forecasts at 17 months out, with newer architectures pushing to 21–24 months. ML models can also now anticipate which type of El Niño will develop (eastern vs. central Pacific), which matters enormously because the two flavors produce very different regional impacts around the world.

Weather Forecasting at a Fraction of the Cost: A new generation of ML weather emulators (Pangu-Weather, GraphCast, FourCastNet, FuXi, NeuralGCM) now match or beat the flagship physics-based forecasting system of the European Centre for Medium-Range Weather Forecasts on most variables, including hurricane tracks, while running orders of magnitude faster. They achieve this with surprisingly compressed state representations: roughly 10 vertical atmospheric levels and 0.25° horizontal resolution, compared to 100+ levels and 0.1° in conventional models. The catch is that these models can violate basic physics (geostrophic balance, energy conservation, realistic chaotic error growth, i.e. the butterfly effect), which currently blocks naive extension to climate timescales.

Hybrid Models Are Eating the Climate Stack: Pure ML works for short-range forecasts, but for climate-length runs the field is converging on hybrid architectures that pair a traditional dynamical core with neural-network parameterizations of sub-grid processes such as clouds, turbulence, and gravity waves.
Google's NeuralGCM exemplifies the approach and already reduces biases in tropical cyclone frequency and tracks. A telling case study on the quasi-biennial oscillation showed that an offline-trained neural network produced unstable, unphysical results, but retraining just two layers online, coupled to the host model, recovered the correct physics. Offline-only and online-only training each fail in characteristic ways; the mix is what works.

The Data Wall Is the Real Bottleneck: Climate ML has fewer than 50 years of dense satellite-era observations to work with, and those observations are heavily biased toward the atmosphere and the ocean surface: a single, spatiotemporally correlated realization of one climate. This limits how confidently ML models can extrapolate to warmer, unseen climates, which is exactly what climate projection requires. The path forward involves three parallel bets: hybrid physics-ML models that bake in conservation laws; large-scale "foundation models" for weather and climate trained across simulations and observations together (efforts like ClimaX and AtmoRep are early examples); and rare-event sampling strategies to handle the extremes that matter most for adaptation policy but are, by definition, underrepresented in any training set.
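To make the hybrid-architecture idea concrete, here is a minimal toy sketch, not the NeuralGCM design: a hand-written "dynamical core" advances the resolved state, while a tiny two-layer network supplies the sub-grid tendency. All shapes, operators, and weights are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup. The state lives on a coarse 1D periodic grid;
# the network stands in for a learned sub-grid parameterization.
N = 16                              # grid points in the toy model state
W1 = rng.normal(0, 0.1, (32, N))    # hidden-layer weights (e.g. trained offline)
W2 = rng.normal(0, 0.1, (N, 32))    # output-layer weights

def dynamical_core(x, dt=0.1):
    """Resolved dynamics: a simple diffusion stencil as a placeholder."""
    return x + dt * (np.roll(x, -1) - 2 * x + np.roll(x, 1))

def nn_parameterization(x):
    """Neural-network tendency for unresolved processes (clouds, waves, ...)."""
    h = np.tanh(W1 @ x)
    return W2 @ h

def hybrid_step(x, dt=0.1):
    """One hybrid time step: physics core plus learned sub-grid tendency."""
    return dynamical_core(x, dt) + dt * nn_parameterization(x)

x = np.sin(np.linspace(0, 2 * np.pi, N, endpoint=False))
for _ in range(100):
    x = hybrid_step(x)
print(x.shape)  # → (16,)
```

In an online-coupled setting of the kind the quasi-biennial-oscillation example describes, one would freeze most weights (here W1) and update only the final layers (here W2) while the network runs inside the host model, rather than training purely offline.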
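The geostrophic-balance violations mentioned above can be checked with a simple diagnostic: compare a model's winds to the geostrophic winds implied by its own geopotential height field. The sketch below uses an invented grid, constants, and a synthetic balanced field purely for illustration.

```python
import numpy as np

# Toy diagnostic: given gridded geopotential height Z and winds (u, v),
# measure the RMS departure from geostrophic balance.
g = 9.81          # gravity, m s^-2
f = 1.0e-4        # Coriolis parameter at mid-latitudes, s^-1
dx = dy = 1.0e5   # grid spacing, m (assumed uniform for the sketch)

def geostrophic_wind(Z):
    """u_g = -(g/f) dZ/dy,  v_g = (g/f) dZ/dx, via centered differences."""
    dZdy, dZdx = np.gradient(Z, dy, dx)
    return -(g / f) * dZdy, (g / f) * dZdx

def ageostrophic_rms(u, v, Z):
    """RMS departure of (u, v) from the geostrophic winds implied by Z."""
    ug, vg = geostrophic_wind(Z)
    return np.sqrt(np.mean((u - ug) ** 2 + (v - vg) ** 2))

# Perfectly balanced synthetic case: winds derived from Z itself.
y, x = np.mgrid[0:32, 0:32]
Z = 5500 + 50 * np.sin(2 * np.pi * x / 32)
u, v = geostrophic_wind(Z)
print(round(ageostrophic_rms(u, v, Z), 6))  # → 0.0 for the balanced field
```

Applied to an ML emulator's output, a large value of this diagnostic relative to a physics-based reference would flag the kind of balance violation that worries the authors for climate-length runs.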