Bayesian interpretation of periodograms

The usual nonparametric approach to spectral analysis is revisited within the regularization framework. Both usual and windowed periodograms are obtained as the squared modulus of the minimizer of regularized least squares criteria. Then, particular attention is paid to their interpretation within the Bayesian statistical framework. Finally, the question of unsupervised hyperparameter and window selection is addressed. It is shown that the maximum likelihood solution is both formally achievable and practically useful.


💡 Research Summary

The paper revisits the classic non‑parametric approach to spectral analysis, namely the periodogram, by embedding it in a regularized least‑squares (RLS) framework and then interpreting the resulting estimators within a Bayesian statistical model. Starting from the discrete‑time observation model $y_n = \sum_{k=0}^{N-1} a_k e^{j2\pi kn/N} + \varepsilon_n$, where $\varepsilon_n$ is assumed to be Gaussian white noise, the authors first formulate the ordinary least‑squares (OLS) problem that minimizes the residual sum of squares $\|y - Fa\|_2^2$. To control over‑fitting and to impose smoothness on the spectral coefficients, they augment the cost with a regularization term $\lambda\, a^H R a$. Two common choices are examined: (i) ridge‑type regularization with $R = I$, which penalizes the overall energy of the spectrum, and (ii) a difference‑operator regularizer $R = D^H D$, which discourages rapid variations across adjacent frequency bins. The closed‑form solution of the regularized problem is $\hat a = (F^H F + \lambda R)^{-1} F^H y$. Crucially, the squared magnitude $|\hat a_k|^2$ is shown to be exactly the periodogram (or a windowed version thereof) obtained from the minimizer of the RLS criterion. Thus the familiar periodogram emerges as a special case of a more general optimization problem.
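As a concrete illustration, here is a minimal NumPy sketch of the RLS estimator above. The synthesis matrix `F`, the test signal, and the normalization are illustrative assumptions rather than the paper's exact conventions; the point is that with $\lambda = 0$ the squared modulus of the minimizer coincides with the DFT-based periodogram (up to scaling), while $\lambda > 0$ shrinks it.

```python
import numpy as np

def regularized_spectrum(y, lam=0.0, R=None):
    """Sketch of the RLS spectral estimate |a_k|^2.

    Solves  min_a ||y - F a||^2 + lam * a^H R a,
    where F is the synthesis matrix of the model
    y_n = sum_k a_k exp(j 2 pi k n / N).
    """
    N = len(y)
    n = np.arange(N)
    F = np.exp(2j * np.pi * np.outer(n, n) / N)   # inverse-DFT synthesis matrix
    if R is None:
        R = np.eye(N)                              # ridge regularizer, R = I
    # Closed-form solution: a_hat = (F^H F + lam R)^{-1} F^H y
    a_hat = np.linalg.solve(F.conj().T @ F + lam * R, F.conj().T @ y)
    return np.abs(a_hat) ** 2

# With lam = 0 this reduces to the classical periodogram (here |fft(y)/N|^2):
y = np.cos(2 * np.pi * 3 * np.arange(16) / 16)
p0 = regularized_spectrum(y, lam=0.0)
assert np.allclose(p0, np.abs(np.fft.fft(y)) ** 2 / 16 ** 2)
```

Increasing `lam` with `R = I` simply scales the solution toward zero; a difference-operator `R` would instead smooth it across neighboring bins.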

The Bayesian reinterpretation treats the regularization term as a Gaussian prior on the spectral coefficients: $p(a \mid \lambda, R) \propto \exp(-\lambda\, a^H R a)$. With the Gaussian likelihood derived from the noise model, the posterior distribution remains Gaussian, and its mean coincides with the RLS solution $\hat a$. Consequently, the periodogram can be viewed as the squared modulus of the posterior mean, while the regularization parameter $\lambda$ quantifies the strength of prior belief about smoothness or energy. This perspective clarifies why different window functions correspond to different choices of the prior covariance matrix $R^{-1}$.
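To make the Gaussian-conjugacy step explicit, the posterior can be written out directly. The following is a sketch of the standard argument, assuming a noise variance $\sigma_\varepsilon^2$ and absorbing multiplicative constants:

```latex
p(a \mid y)
  \;\propto\; p(y \mid a)\, p(a \mid \lambda, R)
  \;\propto\; \exp\!\left(
      -\frac{1}{2\sigma_\varepsilon^2}\,\|y - F a\|_2^2
      \;-\; \lambda\, a^{H} R a
    \right)
```

The exponent is quadratic in $a$, so the posterior is Gaussian with mean $(F^H F + 2\sigma_\varepsilon^2 \lambda R)^{-1} F^H y$; identifying $2\sigma_\varepsilon^2 \lambda$ with the effective regularization weight recovers the RLS solution $\hat a$.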

A major practical hurdle in applying regularized or Bayesian spectral estimators is the selection of hyper‑parameters: the regularization weight $\lambda$ and any parameters $\theta$ that define a specific window shape. The authors address this by deriving the marginal likelihood (evidence) $p(y \mid \lambda, \theta) = \int p(y \mid a)\, p(a \mid \lambda, \theta)\, da$ and proposing a maximum‑likelihood (ML) strategy to estimate $\lambda$ and $\theta$ in an unsupervised manner. They present an EM‑like iterative scheme: the E‑step computes the posterior mean and covariance of $a$ given the current hyper‑parameters; the M‑step updates $\lambda$ and $\theta$ by maximizing the expected complete‑data log‑likelihood. Closed‑form updates are available for ridge regularization, while for difference‑operator regularizers the updates involve simple trace expressions. This ML approach is shown to be both theoretically sound (it converges to a stationary point of the marginal likelihood) and computationally efficient, avoiding costly cross‑validation.
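A minimal sketch of such an EM-style update for the ridge case ($R = I$) is given below. It assumes a known noise variance and a circular complex Gaussian prior, and uses the standard evidence-maximization update $\lambda \leftarrow N / \mathbb{E}[\|a\|^2]$; these are illustrative modeling choices, not necessarily the paper's exact formulas.

```python
import numpy as np

def em_ridge_hyperparameter(y, sigma2, n_iter=50, lam0=1.0):
    """EM-style ML estimate of the ridge weight lam (a sketch).

    Assumed model:  y = F a + eps,  eps ~ CN(0, sigma2 I),
                    a ~ CN(0, lam^{-1} I)   (prior precision lam * I).
    """
    N = len(y)
    n = np.arange(N)
    F = np.exp(2j * np.pi * np.outer(n, n) / N)   # synthesis matrix
    FhF = F.conj().T @ F
    Fhy = F.conj().T @ y
    lam = lam0
    for _ in range(n_iter):
        # E-step: Gaussian posterior of a under the current lam
        Sigma = np.linalg.inv(FhF / sigma2 + lam * np.eye(N))
        m = Sigma @ Fhy / sigma2
        # M-step: maximize the expected complete-data log-likelihood
        # over lam, giving lam = N / E[||a||^2]
        lam = N / (np.vdot(m, m).real + np.trace(Sigma).real)
    return lam, m
```

For a difference-operator regularizer the same scheme applies, with `np.eye(N)` replaced by $D^H D$ and the M-step denominator replaced by the corresponding trace expression $\mathbb{E}[a^H D^H D a]$.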

Experimental validation is performed on synthetic signals with known spectral lines, as well as on real‑world audio and seismic recordings. The proposed Bayesian RLS estimator consistently yields sharper, less noisy spectral peaks compared with the classical periodogram and with ad‑hoc windowed versions. Moreover, the automatically selected hyper‑parameters produce window shapes that are comparable to, or sometimes superior to, manually tuned windows such as Hamming or Hann. The marginal‑likelihood based selection also provides a principled way to compute confidence intervals for the estimated power spectrum, something not readily available in traditional periodogram methods.

In summary, the paper makes three intertwined contributions: (1) it unifies the periodogram, windowed periodograms, and regularized spectral estimators under a single RLS formulation; (2) it offers a clear Bayesian interpretation that links regularization to Gaussian priors and frames the periodogram as a posterior mean estimator; and (3) it introduces a practical, unsupervised maximum‑likelihood scheme for hyper‑parameter and window selection, demonstrating that the approach is both theoretically justified and empirically advantageous. This work therefore bridges the gap between classical signal‑processing heuristics and modern statistical inference, providing a robust toolkit for spectral analysis in a wide range of engineering and scientific applications.