Parameter Identification in a Probabilistic Setting

Parameter identification problems are formulated in a probabilistic language, where the randomness reflects the uncertainty about the knowledge of the true values. This setting makes it conceptually easy to incorporate new information, e.g. from a measurement, via Bayes's theorem. The unknown quantity is modelled as a (possibly high-dimensional) random variable. Such a description has two constituents, the measurable function and the measure. One group of methods updates the measure, the other changes the measurable function. We connect both groups with the relatively recent methods of functional approximation of stochastic problems and, especially in combination with the second group, introduce a new procedure which does not need any sampling and hence works completely deterministically. It also appears to be the fastest and most reliable when compared with other methods. We show by example that it also works for highly nonlinear, non-smooth problems with non-Gaussian measures.


💡 Research Summary

The paper reframes parameter identification as a probabilistic inference problem, treating the unknown parameters as random variables whose uncertainty is captured by a prior probability measure. By expressing the forward model as a measurable function that maps these random variables to observable outputs, the authors separate the identification task into two conceptual operations: updating the probability measure (the “measure‑update” approach) and updating the measurable function itself (the “function‑update” approach).
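The two constituents can be made concrete in a minimal sketch; the standard-normal prior and the cubic observation operator below are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Constituent 1: the probability measure; here a standard-normal prior
# on the unknown parameter q (an illustrative choice).
q_prior = rng.standard_normal(10_000)

# Constituent 2: the measurable function; the forward model mapping the
# random parameter to an observable output (hypothetical cubic model).
def forward(q):
    return q**3 + q

# The push-forward of the prior through the model gives the predicted
# distribution of the observable before any data arrive.
y_prior = forward(q_prior)
print(y_prior.std())
```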

The measure‑update class corresponds to classical Bayesian updating: new data are incorporated via Bayes' theorem, and the posterior distribution is typically approximated by sampling methods such as Markov chain Monte Carlo (MCMC), importance sampling, or particle filters. While asymptotically exact, these methods suffer from high computational cost, the need for careful convergence diagnostics, and poor scaling to high‑dimensional problems.
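The measure-update pipeline can be sketched with a minimal random-walk Metropolis sampler; the Gaussian prior, linear forward model g(q) = 2q, and noise level are our illustrative assumptions, not the paper's setup:

```python
import numpy as np

def metropolis(log_post, q0, n_steps=5000, step=0.5, seed=1):
    """Random-walk Metropolis sampler for a scalar parameter (illustrative)."""
    rng = np.random.default_rng(seed)
    chain = np.empty(n_steps)
    q, lp = q0, log_post(q0)
    for i in range(n_steps):
        q_prop = q + step * rng.standard_normal()
        lp_prop = log_post(q_prop)
        # Accept with probability min(1, posterior ratio).
        if np.log(rng.uniform()) < lp_prop - lp:
            q, lp = q_prop, lp_prop
        chain[i] = q
    return chain

# Toy conjugate setting (our assumption, not the paper's benchmark):
# standard-normal prior, linear forward model g(q) = 2q, one datum y.
y, sigma = 1.0, 0.5
log_post = lambda q: -0.5 * q**2 - 0.5 * ((y - 2.0 * q) / sigma) ** 2

chain = metropolis(log_post, q0=0.0)
# Exact Gaussian posterior mean here is 2*y / (4 + sigma**2) = 8/17.
print(chain[1000:].mean())
```

Even this one-dimensional toy needs thousands of correlated samples and a burn-in period, which is the cost the deterministic method avoids.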

The function‑update class modifies the measurable function (the random‑variable representation of the parameters) so that it is consistent with the data. Traditional examples include Kalman‑type filters, variational Bayes, and recent machine‑learning techniques that retrain a surrogate model. The authors connect this class with recent advances in stochastic functional approximation (SFA), such as polynomial chaos expansions (PCE), sparse‑grid collocation, and adaptive basis selection. By representing both the prior distribution and the forward model in a common functional basis, they eliminate the need for Monte‑Carlo sampling.
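A standard PCE construction illustrates the common-basis idea: the forward model is projected onto probabilists' Hermite polynomials by Gauss-Hermite quadrature. The model g below is a hypothetical example, not the paper's:

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

def pce_coeffs(g, degree=6, n_quad=20):
    """Project g(q), q ~ N(0,1), onto probabilists' Hermite polynomials
    He_k by Gauss-Hermite quadrature (a standard PCE construction)."""
    x, w = hermegauss(n_quad)        # nodes/weights for weight exp(-x^2 / 2)
    w = w / np.sqrt(2.0 * np.pi)     # normalise to the Gaussian measure
    c = np.empty(degree + 1)
    for k in range(degree + 1):
        ek = np.zeros(k + 1)
        ek[k] = 1.0                  # coefficient vector selecting He_k
        # c_k = E[g(q) He_k(q)] / E[He_k(q)^2], with E[He_k^2] = k!
        c[k] = np.sum(w * g(x) * hermeval(x, ek)) / factorial(k)
    return c

g = lambda q: np.sin(q) + q**2       # hypothetical forward model
c = pce_coeffs(g)
# Zeroth coefficient is the mean: E[sin q] = 0 and E[q^2] = 1, so c[0] ≈ 1.
print(c[0])
```

Once such coefficients are in hand, all further manipulations (moments, updates, optimization) operate on the coefficient vector rather than on samples.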

The core contribution is a deterministic algorithm that simultaneously updates the surrogate model and the posterior measure without any sampling. The procedure consists of four steps: (1) construct a polynomial (or sparse‑grid) approximation of the prior distribution; (2) project the forward model onto the same functional space; (3) formulate an optimization problem that minimizes the discrepancy between observed data and the surrogate model output; (4) solve the optimization deterministically (e.g., Newton‑Raphson or quasi‑Newton methods) to obtain a maximum‑a‑posteriori (MAP) estimate and, simultaneously, updated coefficients that define the posterior distribution. Because the surrogate is analytic, gradients and Hessians are readily available, enabling fast convergence.
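Steps (2)-(4) can be sketched in one dimension. The surrogate coefficients, datum y, and noise level sigma are hypothetical placeholders; step (1) is trivial here because the prior is taken as standard normal:

```python
import numpy as np

# Step (2): polynomial surrogate of the forward model; the coefficients
# are assumed given here (in the method they come from the projection step).
surr = np.polynomial.Polynomial([0.0, 1.0, 0.2])   # s(q) = q + 0.2 q^2
ds = surr.deriv()
d2s = ds.deriv()

# Step (3): discrepancy functional = negative log-posterior for a
# standard-normal prior and Gaussian observation noise (illustrative).
y, sigma = 1.0, 0.5
def J(q):
    return 0.5 * q**2 + 0.5 * ((y - surr(q)) / sigma) ** 2
def dJ(q):
    return q - (y - surr(q)) * ds(q) / sigma**2
def d2J(q):
    return 1.0 + (ds(q) ** 2 - (y - surr(q)) * d2s(q)) / sigma**2

# Step (4): Newton iteration; the analytic surrogate supplies exact
# gradients and Hessians, so no sampling is needed anywhere.
q = 0.0
for _ in range(20):
    delta = dJ(q) / d2J(q)
    q -= delta
    if abs(delta) < 1e-12:
        break
print(q)   # deterministic MAP estimate
```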

Complexity analysis shows that the deterministic method scales with the number of basis functions rather than the number of Monte‑Carlo samples. For a d‑dimensional parameter space with a polynomial degree p, the basis size grows as O(p^d) but can be dramatically reduced by adaptive sparsity or ANOVA‑type decompositions. Consequently, the overall cost is O(N_basis·d) for basis construction plus O(N_iter·d^2) for the optimization, which is orders of magnitude lower than the O(N_samples·C_model) cost of conventional sampling‑based Bayesian updates.
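The growth of the two standard basis choices can be tabulated directly; the numbers below are generic combinatorics, not figures from the paper:

```python
from math import comb

def full_tensor(p, d):
    # Full tensor-product basis: (p + 1)^d terms.
    return (p + 1) ** d

def total_degree(p, d):
    # Total-degree basis {alpha : sum(alpha) <= p}: C(p + d, d) terms.
    return comb(p + d, d)

# Basis sizes for polynomial degree p = 3 across dimensions.
for d in (2, 5, 10, 20):
    print(d, full_tensor(3, d), total_degree(3, d))
```

Already at d = 20 and p = 3 the total-degree basis has 1771 terms against roughly 10^12 for the full tensor product, which is why sparsity is essential in higher dimensions.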

The authors validate the approach on three benchmark problems. The first is a linear Gaussian case, where the deterministic algorithm reproduces the exact analytical posterior. The second involves a highly nonlinear, non‑smooth mapping (absolute value plus a high‑frequency sine term). In this scenario, MCMC requires thousands of samples to achieve acceptable convergence, whereas the deterministic method reaches an accurate posterior after fewer than twenty optimization iterations. The third test is a 20‑dimensional problem with a non‑Gaussian beta prior and mixed Gaussian observation noise. Using a sparse‑grid PCE, the deterministic algorithm successfully captures the posterior shape, achieving lower root‑mean‑square error and Kullback–Leibler divergence than MCMC while reducing wall‑clock time by a factor of 5–15.
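A stand-in for the non-smooth benchmark illustrates why such problems are hard for purely local methods; the model g, datum, and noise level are our assumptions, not the paper's exact setup:

```python
import numpy as np

# Hypothetical stand-in for the non-smooth benchmark: absolute value
# plus a high-frequency sine term.
def g(q):
    return np.abs(q) + 0.1 * np.sin(25.0 * q)

# Negative log-posterior: standard-normal prior, Gaussian noise
# (illustrative choices).
y, sigma = 0.8, 0.2
def J(q):
    return 0.5 * q**2 + 0.5 * ((y - g(q)) / sigma) ** 2

# The kink and the oscillation create many local minima; because a
# surrogate is cheap to evaluate, it can be scanned globally before
# any local refinement.
qs = np.linspace(-3.0, 3.0, 20_001)
q_map = qs[np.argmin(J(qs))]
print(q_map, J(q_map))
```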

The discussion acknowledges limitations. The accuracy of the functional approximation depends on the smoothness of the forward model and the prior; highly irregular functions may suffer from the Runge phenomenon, requiring adaptive degree increase or domain decomposition. The optimization may encounter multiple local minima, so multi‑start strategies or global heuristics (genetic algorithms, simulated annealing) are recommended for robustness. The authors also outline extensions to online updating (recursive PCE) for streaming data and to real‑time control applications.
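A multi-start strategy of the recommended kind can be sketched generically; the multimodal objective and the finite-difference descent scheme below are illustrative, not the paper's recipe:

```python
import numpy as np

def multistart_minimize(f, starts, step=1e-3, n_iter=2000, h=1e-6):
    """Multi-start local descent with finite-difference gradients
    (a generic robustness strategy against local minima)."""
    best_q, best_f = None, np.inf
    for q in starts:
        for _ in range(n_iter):
            grad = (f(q + h) - f(q - h)) / (2.0 * h)
            q -= step * grad
        if f(q) < best_f:
            best_q, best_f = q, f(q)
    return best_q, best_f

# Toy multimodal objective with local minima near q = +1 and q = -1;
# the linear tilt makes the minimum near q = -1 the global one.
f = lambda q: (q**2 - 1.0) ** 2 + 0.3 * q
starts = np.linspace(-2.0, 2.0, 9)
q_best, f_best = multistart_minimize(f, starts)
print(q_best, f_best)
```

A single start launched near q = +1 would settle in the inferior local minimum; spreading the starts recovers the global one at modest extra cost.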

In conclusion, by unifying measure‑update and function‑update perspectives through stochastic functional approximation, the paper presents a fully deterministic, sampling‑free Bayesian identification scheme. The method delivers substantial speedups, maintains high accuracy even for nonlinear, non‑smooth, and non‑Gaussian problems, and offers a promising alternative to traditional sampling‑based Bayesian inference in high‑dimensional engineering and scientific inverse problems.