On the usefulness of Meyer wavelets for deconvolution and density estimation


The aim of this paper is to show the usefulness of Meyer wavelets for the classical problem of density estimation and for density deconvolution from noisy observations. With such wavelets, the computation of the empirical wavelet coefficients relies on the fast Fourier transform of the data and on the fact that Meyer wavelets are band-limited functions. This makes the estimators very simple to compute and avoids the problem of evaluating wavelets at non-dyadic points, which is the main drawback of classical wavelet-based density estimators. Our approach is based on term-by-term thresholding of the empirical wavelet coefficients, with random thresholds depending on an estimate of the variance of each coefficient. Such estimators are shown to achieve the same performance as an oracle estimator up to a logarithmic term, and they attain near-minimax rates of convergence over a large class of Besov spaces. A simulation study demonstrates the good finite-sample performance of the estimator for both direct density estimation and density deconvolution.


💡 Research Summary

The paper investigates the use of Meyer wavelets for two classical non‑parametric problems: direct density estimation and density deconvolution from noisy observations. Unlike compactly supported wavelets such as those of the Daubechies or Symlet families, Meyer wavelets are band‑limited in the frequency domain. This property allows the empirical wavelet coefficients to be obtained directly from the Fourier transform of the data, without evaluating the wavelet functions at non‑dyadic points. Consequently, the whole procedure can be implemented with a fast Fourier transform (FFT) at a computational cost of O(N log N), which is especially advantageous for large samples.

The authors first describe how to compute the empirical coefficients. For a sample \(X_1,\dots,X_n\) (direct case) or noisy observations \(Y_i=X_i+\varepsilon_i\) (deconvolution case), the empirical characteristic function \(\hat f^*(\omega)=\frac1n\sum_{i=1}^n e^{i\omega X_i}\) (or its noisy analogue) is obtained via the FFT. Because the Fourier transform of a Meyer wavelet, \(\hat\psi_{j,k}(\omega)=2^{-j/2}\hat\psi(2^{-j}\omega)e^{-ik\omega/2^j}\), is supported on a fixed frequency band, the coefficient \(\hat c_{j,k}=\frac{1}{2\pi}\int \hat f^*(\omega)\,\overline{\hat\psi_{j,k}(\omega)}\,d\omega\) can be evaluated by a simple multiplication and integration in the frequency domain. In the deconvolution setting the noise characteristic function \(\phi(\omega)\) is divided out before the multiplication, and the band‑limited support of \(\hat\psi\) automatically avoids division by values of \(\phi\) close to zero.
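As a rough illustration of this frequency-domain formula (not the authors' implementation, which uses the FFT to reach O(N log N) cost), the sketch below computes one empirical coefficient by direct quadrature over the wavelet's support band. It assumes the standard polynomial parameterization of the Meyer mother wavelet; the function names, the grid size, and the Fourier-convention simplifications are ours:

```python
import numpy as np

def meyer_nu(x):
    """Standard polynomial auxiliary function for the Meyer wavelet."""
    x = np.clip(x, 0.0, 1.0)
    return x**4 * (35 - 84*x + 70*x**2 - 20*x**3)

def meyer_psi_hat(w):
    """Fourier transform of the Meyer mother wavelet.
    Band-limited: it vanishes outside 2*pi/3 <= |w| <= 8*pi/3."""
    w = np.asarray(w, dtype=float)
    aw = np.abs(w)
    s = np.sin(np.pi / 2 * meyer_nu(3 * aw / (2 * np.pi) - 1))
    c = np.cos(np.pi / 2 * meyer_nu(3 * aw / (4 * np.pi) - 1))
    mag = np.where((aw >= 2 * np.pi / 3) & (aw <= 4 * np.pi / 3), s,
          np.where((aw > 4 * np.pi / 3) & (aw <= 8 * np.pi / 3), c, 0.0))
    return np.exp(1j * w / 2) * mag  # usual phase factor e^{i w / 2}

def empirical_coeff(data, j, k, noise_cf=None, n_grid=2048):
    """Empirical Meyer coefficient c_{j,k} from the characteristic function.
    If noise_cf (the noise characteristic function phi) is given, it is
    divided out on the band, as in the deconvolution setting."""
    lo, hi = 2**j * 2 * np.pi / 3, 2**j * 8 * np.pi / 3  # support of psi_hat_{j,k}
    w = np.linspace(lo, hi, n_grid)
    # empirical characteristic function on the band
    f_star = np.exp(1j * np.outer(w, np.asarray(data))).mean(axis=1)
    if noise_cf is not None:
        f_star = f_star / noise_cf(w)  # safe: the band avoids zeros of phi
    psi_jk = 2**(-j / 2) * meyer_psi_hat(w / 2**j) * np.exp(-1j * k * w / 2**j)
    # for a real density, the negative-frequency half is the complex conjugate,
    # so the full integral is 2 * Re(integral over positive frequencies)
    integrand = (f_star * np.conj(psi_jk)).real
    return 2.0 * integrand.sum() * (w[1] - w[0]) / (2 * np.pi)
```

For a deconvolution experiment with, say, centered Gaussian noise of variance \(\sigma^2\), one would pass `noise_cf=lambda w: np.exp(-0.5 * sigma**2 * w**2)`; the Meyer band-limitedness guarantees the divisor stays bounded away from zero on each fixed band.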

A key methodological contribution is the use of coefficient‑specific random thresholds. The variance of each coefficient, \(\sigma_{j,k}^2\), is estimated from the data (including the effect of the deconvolution kernel when present). The threshold is then set as \(\lambda_{j,k}=\tau\,\hat\sigma_{j,k}\sqrt{2\log n}\), where \(\tau\) is a constant (typically 1). Hard or soft thresholding is applied; hard thresholding yields the final estimate \(\tilde c_{j,k}=\hat c_{j,k}\,\mathbf 1_{\{|\hat c_{j,k}|>\lambda_{j,k}\}}\). Because the thresholds adapt to the estimated variability of each coefficient, the procedure performs a data‑driven shrinkage without requiring any prior distribution.
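A minimal sketch of this term-by-term rule (the function and argument names are ours, and the paper's variance estimator is abstracted into the input `sigma_hat`):

```python
import numpy as np

def threshold_coeffs(c_hat, sigma_hat, n, tau=1.0, rule="hard"):
    """Term-by-term thresholding with coefficient-specific thresholds
    lambda_{j,k} = tau * sigma_hat_{j,k} * sqrt(2 log n)."""
    c_hat = np.asarray(c_hat, dtype=float)
    lam = tau * np.asarray(sigma_hat, dtype=float) * np.sqrt(2.0 * np.log(n))
    if rule == "soft":
        # soft: shrink toward zero by lam, kill anything below it
        return np.sign(c_hat) * np.maximum(np.abs(c_hat) - lam, 0.0)
    # hard: keep the coefficient unchanged if it clears its own threshold
    return np.where(np.abs(c_hat) > lam, c_hat, 0.0)

# A large coefficient survives its threshold; a small noisy one is set to zero.
kept = threshold_coeffs([0.5, 0.05], [0.05, 0.05], n=100)  # -> [0.5, 0.0]
```

Because `lam` is an array, each coefficient is compared against its own estimated-variance threshold, which is exactly what makes the procedure adaptive across scales and locations.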

Theoretical analysis establishes two major results. First, an oracle inequality shows that the risk of the proposed estimator is bounded, up to a logarithmic factor, by the risk of an “oracle” estimator that knows the optimal threshold for each coefficient. Second, the estimators achieve near-minimax rates of convergence over a large class of Besov spaces.

