This research addresses out-of-distribution generalization by proposing a shift from traditional causal invariance to explicit environment modeling. While standard methods attempt to discard all environment-dependent information, this paper argues that such features can be predictive when the environment directly influences the target. The authors introduce neural generalized random-intercept models, which capture structure shared across settings while accounting for environment-specific variation through marginalization. This framework minimizes environment-average risk, supporting robust predictions in entirely new contexts. Theoretical analysis and empirical tests on datasets such as Colored MNIST and Camelyon-17 indicate that this approach consistently outperforms invariance-seeking techniques. Ultimately, the work argues that marginalizing environment effects preserves more useful information than forcing absolute representation stability.
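The random-intercept idea described above can be illustrated with a toy sketch: fit weights shared across environments plus one intercept per environment, then predict in an unseen environment by marginalizing (averaging) over the fitted intercepts. This is a minimal illustration under assumed names and a linear setup, not the authors' actual neural implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical): 3 training environments share weights w_true
# but each has its own random intercept b_e.
w_true = np.array([2.0, -1.0])
b_envs = rng.normal(0.0, 1.5, size=3)  # environment-specific intercepts

X_parts, y_parts, env_parts = [], [], []
for e, b in enumerate(b_envs):
    Xe = rng.normal(size=(200, 2))
    ye = Xe @ w_true + b + rng.normal(0.0, 0.1, size=200)
    X_parts.append(Xe)
    y_parts.append(ye)
    env_parts.append(np.full(200, e))
X = np.vstack(X_parts)
y = np.concatenate(y_parts)
env = np.concatenate(env_parts)

# Fit shared weights plus one intercept per environment by least squares.
Z = np.column_stack([X] + [(env == e).astype(float) for e in range(3)])
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
w_hat, b_hat = coef[:2], coef[2:]

# For an unseen environment, marginalize the intercept: predicting with
# the average fitted intercept targets environment-average risk rather
# than any single environment's conditional risk.
def predict_marginal(x):
    return x @ w_hat + b_hat.mean()
```

The shared weights stay usable in new environments, while the unexplained environment effect is absorbed into the averaged intercept instead of being discarded outright.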
Information
- Frequency: Updated Daily
- Published: May 7, 2026 at 5:31 AM UTC
- Length: 23 min
- Rating: Clean
