Discretization-invariant Bayesian inversion and Besov space priors


Bayesian solution of an inverse problem for the indirect measurement $M = AU + {\mathcal{E}}$ is considered, where $U$ is a function on a domain of $\mathbb{R}^d$. Here $A$ is a smoothing linear operator and ${\mathcal{E}}$ is Gaussian white noise. The data is a realization $m_k$ of the random variable $M_k = P_k A U + P_k {\mathcal{E}}$, where $P_k$ is a linear, finite-dimensional operator related to the measurement device. To allow computerized inversion, the unknown is discretized as $U_n = T_n U$, where $T_n$ is a finite-dimensional projection, leading to the computational measurement model $M_{kn} = P_k A U_n + P_k {\mathcal{E}}$. Bayes' formula then gives the posterior distribution $\pi_{kn}(u_n \mid m_{kn}) \sim \pi_n(u_n) \exp(-\tfrac{1}{2}\|m_{kn} - P_k A u_n\|_2^2)$ in $\mathbb{R}^n$, and the mean $U^{CM}_{kn} := \int u_n\, \pi_{kn}(u_n \mid m_{kn})\, du_n$ is considered as the reconstruction of $U$. We discuss a systematic way of choosing prior distributions $\pi_n$ for all $n \geq n_0 > 0$ by achieving them as projections of a distribution in an infinite-dimensional limit case. Such a choice of prior distributions is {\em discretization-invariant} in the sense that $\pi_n$ represents the same {\em a priori} information for all $n$ and that the mean $U^{CM}_{kn}$ converges to a limit estimate as $k, n \to \infty$. Gaussian smoothness priors and wavelet-based Besov space priors are shown to be discretization invariant. In particular, Bayesian inversion in dimension two with the $B^1_{11}$ prior is related to penalizing the $\ell^1$ norm of the wavelet coefficients of $U$.
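The abstract's last claim — that a $B^1_{11}$-type prior amounts to penalizing the $\ell^1$ norm of wavelet coefficients — can be illustrated in the simplest setting. The sketch below (not code from the paper; the Haar transform, signal, and threshold level are assumptions for illustration) treats the denoising case $A = I$: minimizing $\tfrac{1}{2}\|m - u\|^2 + \alpha\|Wu\|_1$ for an orthonormal wavelet transform $W$ reduces to soft-thresholding the wavelet coefficients of the data.

```python
import numpy as np

def haar(x):
    """Single-level orthonormal Haar transform (len(x) must be even)."""
    e, o = x[0::2], x[1::2]
    return np.concatenate([(e + o) / np.sqrt(2), (e - o) / np.sqrt(2)])

def ihaar(c):
    """Inverse of the single-level orthonormal Haar transform."""
    n = len(c) // 2
    a, d = c[:n], c[n:]
    x = np.empty(2 * n)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft(c, alpha):
    """Soft-thresholding: the proximal map of the l^1 penalty."""
    return np.sign(c) * np.maximum(np.abs(c) - alpha, 0.0)

def besov_map_denoise(m, alpha):
    """MAP estimate for M = U + noise under an l^1 wavelet penalty:
    argmin_u 1/2||m - u||^2 + alpha*||W u||_1 = W^T soft(W m, alpha)."""
    return ihaar(soft(haar(m), alpha))

# Toy example (assumed data): a piecewise-constant signal plus white noise.
rng = np.random.default_rng(0)
u_true = np.repeat([0.0, 1.0, 0.0], [8, 8, 8])
m = u_true + 0.1 * rng.standard_normal(u_true.size)
u_map = besov_map_denoise(m, alpha=0.2)
```

Because the Haar transform here is orthonormal, the coefficient-wise soft threshold solves the variational problem exactly; for a general smoothing operator $A$ an iterative scheme (e.g. ISTA) would be needed.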


💡 Research Summary

The paper addresses the Bayesian formulation of an inverse problem in which an unknown function U, defined on a domain Ω⊂ℝ^d, is observed indirectly through a smoothing linear operator A and additive white Gaussian noise ℰ, i.e., M = AU + ℰ. In practice the measurement device is represented by a finite‑dimensional linear operator P_k (e.g., a sampling matrix or sensor array). The actual data are realizations m_k of the random variable M_k = P_k A U + P_k ℰ, where the index k characterizes the resolution or the amount of data collected; as k →∞ the measurement model approaches the ideal continuous observation.
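The measurement model M_k = P_k A U + P_k ℰ can be made concrete with a small simulation. In this sketch (the specific operators are assumptions, not taken from the paper) the smoothing operator A is a periodic discrete Gaussian blur and P_k samples every second grid point, playing the role of a finite sensor array.

```python
import numpy as np

def gaussian_blur_matrix(n, sigma=2.0):
    """Dense matrix of a periodic discrete Gaussian convolution
    (a toy smoothing operator A; rows normalized to sum to 1)."""
    idx = np.arange(n)
    diff = np.abs(idx[:, None] - idx[None, :])
    dist = np.minimum(diff, n - diff)          # periodic distance on the grid
    K = np.exp(-dist**2 / (2 * sigma**2))
    return K / K.sum(axis=1, keepdims=True)

n = 64
A = gaussian_blur_matrix(n)
P_k = np.eye(n)[::2]                           # keep every second point: k = n/2 data values

rng = np.random.default_rng(1)
u = (np.arange(n) > n // 2).astype(float)      # a step function as the unknown
noise = 0.01 * rng.standard_normal(P_k.shape[0])
m_k = P_k @ A @ u + noise                      # one realization of M_k
```

Refining the sampling (larger k) and the grid (larger n) corresponds to the joint limit k, n → ∞ studied in the paper.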

To make the problem tractable on a computer, the infinite‑dimensional unknown U must be discretized. The authors introduce a family of finite‑dimensional projection operators T_n, producing the discrete approximation U_n = T_n U. The computational measurement model then reads M_{kn}=P_k A U_n + P_k ℰ, and the observed data are m_{kn}. Applying Bayes’ theorem yields the posterior density on ℝ^n, π_{kn}(u_n | m_{kn}) ∝ π_n(u_n) exp(−½‖m_{kn} − P_k A u_n‖₂²), where π_n denotes the prior density of the discretized unknown U_n.
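For a Gaussian smoothness prior this posterior can be handled in closed form: with π_n(u_n) ∝ exp(−½ δ ‖L u_n‖²) and the Gaussian likelihood above, the posterior is Gaussian and its mean U^CM_{kn} coincides with the Tikhonov-type solution of the normal equations. The sketch below illustrates this standard fact (the forward matrix, regularizer L, and δ are assumed toy choices, not from the paper).

```python
import numpy as np

def conditional_mean(A_k, m_kn, L, delta):
    """Posterior mean for a Gaussian prior exp(-1/2 * delta * ||L u||^2)
    and likelihood exp(-1/2 * ||m - A_k u||^2):
    u_cm = (A_k^T A_k + delta * L^T L)^{-1} A_k^T m."""
    return np.linalg.solve(A_k.T @ A_k + delta * (L.T @ L), A_k.T @ m_kn)

# Toy forward map and data (assumed shapes, for illustration only).
rng = np.random.default_rng(2)
A_k = rng.standard_normal((20, 40)) / np.sqrt(20)   # stands in for P_k A T_n
u_true = np.sin(np.linspace(0, np.pi, 40))
m_kn = A_k @ u_true + 0.01 * rng.standard_normal(20)
u_cm = conditional_mean(A_k, m_kn, L=np.eye(40), delta=0.1)
```

For non-Gaussian priors such as the Besov priors studied in the paper, the conditional mean has no closed form and must be computed numerically, e.g. by Markov chain Monte Carlo sampling.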

