Uncertainty Quantification for Prior-Data Fitted Networks using Martingale Posteriors


Prior-data fitted networks (PFNs) have emerged as promising foundation models for prediction from tabular data sets, achieving state-of-the-art performance on small to moderate data sizes without tuning. While PFNs are motivated by Bayesian ideas, they do not provide any uncertainty quantification for predictive means, quantiles, or similar quantities. We propose a principled and efficient sampling procedure to construct Bayesian posteriors for such estimates based on Martingale posteriors, and prove its convergence. Several simulated and real-world data examples showcase the uncertainty quantification of our method in inference applications.


💡 Research Summary

Prior-data fitted networks (PFNs) have become a popular foundation-model approach for tabular prediction because they can be applied to new datasets without any fine-tuning: a transformer is pre-trained on a large collection of synthetic tables, and at inference time a single forward pass yields an approximation of the posterior predictive distribution (PPD) p(y | x, Dₙ). While the PPD quantifies uncertainty about individual labels, PFNs do not provide uncertainty estimates for derived quantities such as the conditional mean E[y | x, Dₙ] or predictive quantiles. To fill this gap, the paper proposes an efficient sampling procedure based on martingale posteriors: future observations are simulated from the model's own predictive distribution, and the variability of the resulting estimates across simulated futures yields a Bayesian posterior for the quantity of interest. The authors prove convergence of this procedure and demonstrate its uncertainty quantification on simulated and real-world data.
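The forward-sampling idea behind martingale posteriors can be sketched in a few lines. The snippet below is a simplified illustration, not the paper's algorithm: it uses a plain Gaussian one-step predictive with fixed variance as a stand-in for a PFN's posterior predictive, and targets the predictive mean of a single variable. All function names and parameters (`predictive_draw`, `n_future`, `n_samples`) are illustrative choices, not from the paper.

```python
# Hedged sketch of martingale-posterior sampling (illustrative only).
# A Gaussian predictive with known variance stands in for a PFN's PPD.
import numpy as np

rng = np.random.default_rng(0)

def predictive_draw(ys, sigma=1.0):
    # Stand-in one-step predictive: Normal(current sample mean, sigma^2).
    return rng.normal(np.mean(ys), sigma)

def martingale_posterior_mean(y_obs, n_future=500, n_samples=200):
    """Sample from a martingale posterior for the predictive mean.

    Each posterior draw forward-simulates n_future observations from the
    sequentially updated predictive, then records the mean of the
    completed dataset as one posterior sample of the functional.
    """
    draws = []
    for _ in range(n_samples):
        ys = list(y_obs)
        for _ in range(n_future):
            ys.append(predictive_draw(ys))   # imputed future observation
        draws.append(np.mean(ys))            # functional of completed data
    return np.array(draws)

y_obs = rng.normal(2.0, 1.0, size=30)        # toy observed data
post = martingale_posterior_mean(y_obs)
print(post.mean(), post.std())
```

By the martingale property, the forward draws are centered at the current predictive mean, so `post.mean()` sits near the sample mean of `y_obs` while `post.std()` reflects uncertainty about the mean itself. In the paper's setting, the Gaussian stand-in would be replaced by samples from the PFN's PPD conditioned on the growing context.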

