Principal Components for Model-Agnostic Modified Gravity with 3x2pt
To mitigate the severe information loss arising from widely adopted linear scale cuts in constraints on modified gravity parameterisations with Weak Lensing (WL) and Large-Scale Structure (LSS) data, we introduce a novel alternative method for data reduction. This Principal Component Analysis (PCA)-based framework extracts key features in the matter power spectrum arising from nonlinear effects in a set of representative gravity theories. By performing the analysis in the space of principal components, we can replace sweeping 'linear-only' scale cuts with targeted cuts on the transformed data vector, ultimately reducing parameter bias and significantly tightening constraints. We forecast constraints on a minimal parameterised extension to $Λ$CDM which includes modifications to the growth of structure and lensing of light ($Λ$CDM$+μ_0+Σ_0$) using mock Stage-IV data for two simulated cosmologies: the $Λ$CDM model and Extended Shift Symmetric (ESS) gravity. Under the assumption of a Universe defined by $Λ$CDM and General Relativity, our method offers constraints on $μ_0$ a factor of 1.65 tighter than traditional linear-only scale cuts. Crucially, our approach also provides the necessary constraining power to break key degeneracies in modified gravity without relying on $fσ_8$ measurements, introducing a promising new tool for the analysis of present and future WL and LSS photometric surveys.
💡 Research Summary
The authors address a critical bottleneck in testing modified‑gravity (MG) theories with upcoming Stage‑IV weak‑lensing and galaxy‑clustering surveys: the severe loss of information caused by the standard practice of imposing linear‑scale cuts. In the conventional approach, any data point whose theoretical prediction differs appreciably between a linear model and a more accurate nonlinear model (as quantified by a Δχ² criterion) is discarded. This protects against bias from poorly modelled nonlinearities but removes the majority of the signal‑to‑noise, especially in the highly nonlinear regime that dominates the constraining power of LSST, Euclid, and Roman. Consequently, constraints on phenomenological MG parameters such as the μ–Σ parametrisation (μ₀, Σ₀) become weak and highly degenerate, often requiring additional growth‑rate measurements (fσ₈) to break the degeneracy.
To overcome this, the paper proposes a data‑reduction pipeline based on Principal Component Analysis (PCA). The key idea is to construct a set of "data‑reduction models" that span a representative range of MG behaviours in the nonlinear regime. For each model the authors compute both a linear prediction (M_lin) and a high‑fidelity nonlinear prediction (M_NL) for the full 3×2pt data vector (galaxy‑galaxy, galaxy‑shear, and shear‑shear angular power spectra across multiple redshift bins). The difference ΔM = M_NL − M_lin captures the specific nonlinear signature of that MG model. By assembling ΔM for several models (e.g., GR, nDGP, ESS) they form a matrix whose columns are the ΔM vectors.
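The assembly of the ΔM matrix can be illustrated with a minimal numpy sketch. The model names, data-vector length, and the random stand-ins for the linear and nonlinear predictions are all placeholders, not the authors' actual pipeline outputs:

```python
import numpy as np

# Hypothetical sketch: assemble the columns DeltaM = M_NL - M_lin for a
# set of data-reduction models. Real predictions would come from a
# Boltzmann code plus an MG emulator; here we use random stand-ins.
rng = np.random.default_rng(0)
n_data = 50                      # length of the 3x2pt data vector (illustrative)
models = ["GR", "nDGP", "ESS"]   # representative gravity theories

# stand-ins for the linear and nonlinear predictions of each model
M_lin = {m: rng.normal(size=n_data) for m in models}
M_NL = {m: M_lin[m] + 0.1 * rng.normal(size=n_data) for m in models}

# each column is the nonlinear signature of one data-reduction model
DeltaM = np.column_stack([M_NL[m] - M_lin[m] for m in models])
print(DeltaM.shape)  # (50, 3)
```

The resulting (n_data × n_models) matrix is the input to the whitening and principal-component step described next.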
A Cholesky decomposition of the data covariance C yields a whitening matrix L; the authors transform ΔM → ΔM_ch = L⁻¹ΔM and similarly whiten the observed data vector D → D_ch. Performing a singular‑value decomposition (or, equivalently, an eigen‑decomposition) on the whitened matrix ΔM_ch identifies the principal components (PCs) that contain the bulk of the MG‑induced nonlinear variance. The authors then construct a reduction matrix U_cut that projects the whitened data onto the subspace orthogonal to the leading PCs associated with MG‑specific nonlinearities. In practice, they retain the PCs that contribute negligibly to the MG signal (i.e., those dominated by statistical noise) and discard the rest. This operation is equivalent to applying a scale cut that is parameter‑dependent: at each step of an MCMC sampler the current cosmological and MG parameters define the specific ΔM vectors, and U_cut is recomputed accordingly. Thus the data reduction adapts dynamically to the point in parameter space being explored, unlike the static linear‑cut approach.
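The whitening-and-projection step described above can be sketched in a few lines of numpy. The covariance, ΔM matrix, data vector, and the number of discarded PCs (`n_cut`) are all illustrative placeholders; only the linear algebra mirrors the described procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
n_data, n_models, n_cut = 50, 3, 2

# toy symmetric positive-definite covariance, difference matrix, and data
A = rng.normal(size=(n_data, n_data))
C = A @ A.T + n_data * np.eye(n_data)
DeltaM = rng.normal(size=(n_data, n_models))
D = rng.normal(size=n_data)

# Cholesky whitening: C = L L^T, so multiplying by L^{-1} whitens the noise
L = np.linalg.cholesky(C)
DeltaM_ch = np.linalg.solve(L, DeltaM)
D_ch = np.linalg.solve(L, D)

# SVD: leading left-singular vectors carry the MG-induced nonlinear variance
U, s, Vt = np.linalg.svd(DeltaM_ch, full_matrices=True)

# discard the n_cut leading PCs; keep the noise-dominated complement
U_cut = U[:, n_cut:]
D_reduced = U_cut.T @ D_ch  # reduced data vector, length n_data - n_cut
```

In a real analysis the retained subspace would be chosen from the singular-value spectrum rather than a fixed `n_cut`, and the same projection would be applied to the theory vector at each sampler step.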
The authors test the method on simulated LSST‑like Year‑1 (Y1) 3×2pt data. They generate mock observations for two fiducial cosmologies: (i) a standard ΛCDM + General Relativity universe, and (ii) an Extended Shift‑Symmetric (ESS) MG model that exhibits strong nonlinear deviations. For each case they run two analyses: (a) the traditional linear‑only scale cuts (Δχ² < 1) and (b) the new PCA‑based reduction. In the ΛCDM case, the PCA method yields a 68 % confidence interval on μ₀ that is 1.65 times tighter than the linear‑cut analysis, while keeping the bias well below the statistical error. The Σ₀ constraint improves similarly. In the ESS case, despite larger nonlinear effects, the PCA pipeline recovers the input μ₀ and Σ₀ values without bias and with comparable or better precision than the linear‑cut case. Importantly, the method achieves these gains without invoking external growth‑rate data (fσ₈), demonstrating that the retained nonlinear information is sufficient to break the μ–Σ degeneracy.
The paper also discusses practical considerations. The construction of the data‑reduction models requires a modest suite of MG simulations or accurate emulators; however, the authors argue that the set need not be exhaustive—its purpose is to span the space of plausible nonlinear deviations. The computational overhead of recomputing U_cut at each MCMC step is modest because the PCA can be pre‑computed for a grid of parameter values and interpolated, or the reduction matrix can be updated only when the parameters move beyond a predefined tolerance.
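One way to realise the tolerance-based update mentioned above is a small caching wrapper; the class name, tolerance value, and `build_U_cut` callback are hypothetical illustrations, not part of the authors' code:

```python
import numpy as np

# Hypothetical sketch: recompute the reduction matrix only when the sampler's
# parameter vector drifts beyond a tolerance, otherwise reuse the cached one.
class LazyReduction:
    def __init__(self, tol=0.05):
        self.tol = tol
        self.last_params = None
        self.U_cut = None

    def get(self, params, build_U_cut):
        params = np.asarray(params, dtype=float)
        if (self.last_params is None
                or np.max(np.abs(params - self.last_params)) > self.tol):
            self.U_cut = build_U_cut(params)  # the expensive PCA step
            self.last_params = params
        return self.U_cut

calls = []
def build(p):                 # stand-in for the full whitening + SVD pipeline
    calls.append(p.copy())
    return np.eye(3)

red = LazyReduction(tol=0.05)
red.get([0.3, 0.8], build)    # first call: always builds
red.get([0.31, 0.8], build)   # within tolerance: cached matrix reused
red.get([0.4, 0.8], build)    # beyond tolerance: rebuilt
print(len(calls))  # 2
```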
In summary, the study introduces a principled, model‑agnostic framework that replaces blunt linear‑scale cuts with a targeted, PCA‑driven data compression. By explicitly accounting for the nonlinear signatures of a broad class of MG theories, the method preserves the high‑signal‑to‑noise information residing in small‑scale modes while safeguarding against bias from imperfect modeling. The demonstrated factor‑of‑~1.6 improvement in μ₀ constraints, together with unbiased recovery of MG parameters in a non‑ΛCDM mock, suggests that this technique could become a standard tool for the analysis of forthcoming photometric WL and LSS surveys, enabling tighter tests of gravity on cosmological scales.