Optimal PSF modeling for weak lensing: complexity and sparsity

We investigate the impact of point spread function (PSF) fitting errors on cosmic shear measurements using the concepts of complexity and sparsity. Complexity, introduced in a previous paper, characterizes the number of degrees of freedom of the PSF. For instance, fitting an underlying PSF with a model of low complexity leads to small statistical errors on the model parameters; however, these parameters may suffer from large biases. Alternatively, fitting with a large number of parameters tends to reduce biases at the expense of statistical errors. We perform an optimisation of scatters and biases by studying the mean squared error of a PSF model. We also characterize the model sparsity, which describes how efficiently the model is able to represent the underlying PSF using a limited number of free parameters. We present the general case and illustrate it for a realistic example of a PSF fitted with shapelet basis sets. We derive the relation between the complexity and sparsity of the PSF model, the signal-to-noise ratio of the stars, and the systematic errors on cosmological parameters. With the constraint of maintaining the systematics below the statistical uncertainties, this leads to a relation between the sparsity and the number of stars required to calibrate the PSF. We discuss the impact of our results on current and future cosmic shear surveys. In the typical case where the biases can be represented as a power law of the complexity, we show that current weak lensing surveys can calibrate the PSF with a few stars, while future surveys will require hard constraints on the sparsity in order to calibrate the PSF with 50 stars.


💡 Research Summary

This paper addresses one of the most critical sources of systematic error in weak‑lensing surveys: the imperfect modeling of the point‑spread function (PSF). The authors build on a previously introduced notion of “complexity,” defined as the number of free parameters (degrees of freedom) in a PSF model, and introduce a complementary concept called “sparsity,” which quantifies how efficiently a limited set of parameters can represent the true underlying PSF.

The central idea is that a low‑complexity model yields small statistical uncertainties because each parameter is well constrained by the noisy star images, but it typically suffers from large bias (systematic deviation) relative to the true PSF. Conversely, a high‑complexity model can reduce bias by capturing finer PSF features, yet the statistical errors on the many parameters increase, potentially inflating the overall mean‑squared error (MSE). The authors formalize this trade‑off by decomposing the MSE into a bias‑squared term and a variance term. They assume that the bias scales with complexity as a power law, B ∝ C^{-α}, where α is the sparsity exponent. The variance term scales as V ∝ C/(S/N)², where S/N is the signal‑to‑noise ratio of the calibration stars.
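Written out, the objective being optimized is the following (a restatement of the scalings above; b₀ and v₀ are placeholder normalization constants that are not given in this summary):

```latex
% Bias–variance decomposition of the PSF-model error, restating the
% scalings above; b_0 and v_0 are unspecified normalization constants
% (assumptions, not values from the paper).
\mathrm{MSE}(C) = B^2(C) + V(C)
               \simeq b_0^{2}\, C^{-2\alpha} + v_0\, \frac{C}{(S/N)^{2}}
```

The bias term falls with C while the variance term grows, so the MSE has a single interior minimum; the optimal complexity is the value of C at that minimum.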

By minimizing the total MSE with respect to C, they derive an optimal complexity
C_opt ∝ (S/N)^{2/(2+α)}.
Thus, higher‑S/N stars permit the use of more complex models without inflating the variance. The authors then impose a practical requirement: systematic errors must remain below the statistical uncertainties of the cosmic‑shear measurement. This condition translates into a required number of calibration stars, N_*, given by
N_* ∝ α · (S/N)^{-2α/(2+α)}.
In this expression, a larger sparsity exponent (i.e., a more efficiently sparse representation) sharply reduces the number of stars needed to meet the systematics budget.
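As a quick numerical illustration of this scaling (a sketch only: the overall proportionality constant is survey-dependent and not given in the summary, so the function below sets it to 1 and only ratios between different α values are meaningful):

```python
import numpy as np

def relative_star_budget(alpha: float, snr: float) -> float:
    """Relative number of calibration stars from the scaling
    N_* ∝ alpha * (S/N)^(-2*alpha / (2 + alpha)).
    The prefactor is survey-dependent and unspecified in the summary,
    so it is set to 1 here (an assumption); only the trend with alpha
    at fixed S/N is meaningful.
    """
    return alpha * snr ** (-2.0 * alpha / (2.0 + alpha))

snr = 100.0  # assumed S/N of a typical calibration star (illustrative)
for alpha in (1.0, 2.0, 3.0, 4.0):
    print(f"alpha = {alpha:.0f}: relative N_* = {relative_star_budget(alpha, snr):.3e}")
```

At fixed S/N, the power-law term dominates the linear prefactor, so the required star budget drops steeply as the sparsity exponent grows; this is the sense in which sparser models are cheaper to calibrate.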

To illustrate the formalism, the paper adopts a realistic PSF model based on shapelet basis functions, a widely used approach in astronomical image analysis. Shapelets allow straightforward control of both complexity (by truncating the series at a given order) and sparsity (through how rapidly the coefficients decay). Simulations with shapelet models yield sparsity exponents α ≈ 2–3 for typical ground‑based data. Under these conditions, current weak‑lensing surveys such as the Dark Energy Survey (DES), the Kilo‑Degree Survey (KiDS), and the Hyper Suprime‑Cam (HSC) survey can achieve the required systematic control with only a few tens of stars per exposure.
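A minimal 1‑D sketch of the idea (not the paper's pipeline): project a toy PSF onto the shapelet basis of Refregier (2003), where the truncation order plays the role of complexity and the decay of the coefficients reflects the sparsity. The toy PSF profile and the scale β = 1 are assumptions chosen purely for illustration.

```python
import numpy as np
from math import factorial, pi, sqrt
from scipy.special import eval_hermite

def shapelet_1d(n: int, x: np.ndarray, beta: float = 1.0) -> np.ndarray:
    """1-D dimensional shapelet basis function (Refregier 2003):
    phi_n(x) = [2^n n! sqrt(pi) beta]^{-1/2} H_n(x/beta) exp(-x^2/(2 beta^2)).
    """
    norm = 1.0 / sqrt(2.0 ** n * factorial(n) * sqrt(pi) * beta)
    return norm * eval_hermite(n, x / beta) * np.exp(-x ** 2 / (2.0 * beta ** 2))

# Toy "true PSF": a mildly non-Gaussian profile (illustrative assumption).
x = np.linspace(-6.0, 6.0, 1201)
dx = x[1] - x[0]
psf = np.exp(-x ** 2 / 2.0) * (1.0 + 0.1 * x ** 2)
psf /= np.sum(psf) * dx  # normalize to unit flux

# Project onto the first n_max basis functions. Truncating at n_max sets the
# model's complexity; the decay rate of |coeffs| with n reflects its sparsity.
n_max = 10
coeffs = np.array([np.sum(psf * shapelet_1d(n, x)) * dx for n in range(n_max)])
print(np.round(coeffs, 5))  # fast decay -> few parameters capture the PSF
```

Raising n_max lowers the truncation bias, but on real (noisy) star images each extra coefficient adds noise to the model, which is exactly the bias-variance trade-off formalized above.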

However, next‑generation surveys (e.g., LSST, Euclid, and WFIRST) aim for sub‑10⁻⁴ level systematics, an order of magnitude tighter than present experiments. Maintaining the same star budget (≈50 stars) under such stringent requirements would demand a sparsity exponent α ≥ 4, implying that the PSF model must be considerably more efficient at capturing the true PSF with very few parameters. This could be realized by incorporating physical PSF models, higher‑order basis sets, or machine‑learning representations that enforce strong regularization.

The paper concludes that optimal PSF calibration is fundamentally a balance between model complexity and sparsity, mediated by the S/N of calibration stars. Survey designers must jointly consider the expected stellar density, observational depth, and the intrinsic sparsity of the chosen PSF representation to ensure that PSF‑induced systematics stay below the statistical error floor. The derived relations provide a quantitative framework for planning calibration strategies in both ongoing and future weak‑lensing experiments, guiding the selection of PSF models and the allocation of observational resources to meet ambitious cosmological goals.


Comments & Academic Discussion

Loading comments...

Leave a Comment