Quantifying Uncertainty in the Presence of Distribution Shifts
Neural networks make accurate predictions but often fail to provide reliable uncertainty estimates, especially under covariate distribution shifts between training and testing. To address this problem, we propose a Bayesian framework for uncertainty estimation that explicitly accounts for covariate shift. While conventional approaches rely on fixed priors, the key idea of our method is an adaptive prior, conditioned on both training and new covariates. This prior naturally increases uncertainty for inputs that lie far from the training distribution, i.e., in regions where predictive performance is likely to degrade. To efficiently approximate the resulting posterior predictive distribution, we employ amortized variational inference. Finally, we construct synthetic environments by drawing small bootstrap samples from the training data, simulating a range of plausible covariate shifts using only the original dataset. We evaluate our method on both synthetic and real-world data and find that it yields substantially improved uncertainty estimates under distribution shifts.
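The synthetic-environment construction described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name, the environment count, and the sample fraction are all assumptions chosen for the example.

```python
import numpy as np

def make_bootstrap_environments(X, y, n_envs=10, frac=0.2, seed=0):
    """Draw small bootstrap samples from the training data (hypothetical
    helper). Each sample acts as a synthetic 'environment' whose empirical
    covariate distribution deviates from the full training set, simulating
    plausible covariate shifts using only the original dataset."""
    rng = np.random.default_rng(seed)
    n = len(X)
    m = max(1, int(frac * n))  # small samples exaggerate distributional variation
    envs = []
    for _ in range(n_envs):
        idx = rng.choice(n, size=m, replace=True)  # bootstrap: sample with replacement
        envs.append((X[idx], y[idx]))
    return envs
```

Because each environment is drawn with replacement from a small fraction of the data, its covariate distribution differs from the full training distribution, giving the model a family of shifted datasets to learn from without any external data.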
💡 Research Summary
This paper tackles a fundamental shortcoming of Bayesian neural networks (BNNs) when faced with covariate shift: the standard formulation treats the prior over weights, p(θ), as fixed and independent of the test input x*. Consequently, the posterior p(θ|D) does not change when a new test point arrives, and predictive uncertainty is driven solely by parameter uncertainty. In many real‑world scenarios—medical imaging, autonomous driving, or any high‑stakes domain—test inputs can lie far from the training covariate distribution, and the model should become less confident even if the parameter posterior is tight.
Adaptive, data‑dependent prior
The authors propose to replace the static prior with an adaptive prior that conditions on both the training covariates x₁:N and the specific test covariate x*. Formally,
the prior becomes p(θ | x₁:N, x*) in place of the static p(θ), so the distribution over weights itself depends on where the test point x* falls relative to the training covariates.
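One simple way to build intuition for such an adaptive prior is to let its scale grow with the distance from x* to the training covariates. The following sketch is purely illustrative and is not the paper's construction; the distance measure and the scaling rule are assumptions for the example.

```python
import numpy as np

def adaptive_prior_scale(x_star, X_train, base_scale=1.0, alpha=1.0):
    """Illustrative (hypothetical) prior standard deviation for
    p(theta | x_1:N, x*): it widens with the distance from x* to its
    nearest training covariate, so inputs far from the training
    distribution receive a broader, less confident prior."""
    d = np.min(np.linalg.norm(X_train - x_star, axis=1))  # nearest-neighbor distance
    return base_scale * (1.0 + alpha * d)
```

On an in-distribution input the nearest-neighbor distance is near zero and the prior reduces to the base scale; on a far out-of-distribution input the prior widens, which is exactly the qualitative behavior the adaptive prior is meant to produce.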