Consistency of standard cosmologies using Bayesian model comparison and tension quantification


We present a unified Bayesian assessment of model comparison and data-set consistency for LCDM (cold dark matter plus a cosmological constant) and minimal extensions (neutrino mass, spatial curvature, constant or evolving dark energy) using cosmic microwave background (CMB), baryon acoustic oscillation (BAO), and type Ia supernova (SN) data. The major results are summarized in the first three figures. We quantify model preference with Bayesian evidence and assess consistency with complementary evidence- and likelihood-based diagnostics applied uniformly across data-set combinations. For the models considered, updated Planck processing systematically improves internal CMB consistency (low-$\ell$ versus high-$\ell$, and primary CMB versus CMB lensing). The preference for a closed geometry and an associated ``curvature tension'' with BAO and/or CMB lensing are largely confined to earlier Planck likelihood implementations and weaken substantially when using updated CMB processing and more recent BAO measurements. Apparent evidence for evolving dark energy in CMB+BAO+SN combinations depends sensitively on the specific pairing of CMB and SN likelihoods: plausible alternatives shift inferred tensions by more than $1\,\sigma$ and can completely reverse the preferred model. Allowing a free neutrino mass tends to absorb residual shifts without introducing new inconsistencies, and we do not find robust evidence for a standalone $\tau$-driven discrepancy once the full likelihood context is accounted for. We conclude that claims of a required update of our standard cosmological model from LCDM to $w_0w_a$CDM are premature.


💡 Research Summary

This paper presents a unified Bayesian framework for assessing both model comparison and data‑set consistency in contemporary cosmology. The authors focus on the standard ΛCDM model and four minimal extensions: (i) a free sum of neutrino masses Σ mν, (ii) spatial curvature ΩK, (iii) a constant‑w dark‑energy model (wCDM), and (iv) a time‑varying dark‑energy model parameterised by w0 and wa (w0waCDM). The data sets used are the latest Planck CMB temperature, polarisation and lensing likelihoods (including both the 2018 release and newer processing), baryon‑acoustic‑oscillation (BAO) measurements from SDSS and the recent DESI release, and Type Ia supernova (SN) distance‑modulus data from the Pantheon compilation and the Dark Energy Survey (DES).

Methodologically, the paper relies on three Bayesian “pillars”: (1) parameter estimation, (2) model comparison via the Bayesian evidence Z, and (3) data‑set consistency quantified by two complementary statistics. The evidence is decomposed into an average‑fit term ⟨ln L⟩_P and an Occam penalty D (the Kullback–Leibler divergence between posterior and prior), following the standard Occam identity ln Z = ⟨ln L⟩_P − D. Model comparison therefore favours a more complex model only if the improvement in fit outweighs the extra compression of prior volume. For consistency, the authors compute the evidence ratio R = Z_AB/(Z_A Z_B) for data sets A and B, and the “suspiciousness” S, which subtracts the prior‑dependent part of R, yielding a statistic that can be mapped to a p‑value and an equivalent σ tension under a χ² approximation.
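The evidence-ratio and suspiciousness bookkeeping above can be sketched in a few lines. This is a minimal illustration, not the paper's pipeline: the log-evidence values below are hypothetical, and the mapping from ln S to a σ equivalent uses the standard χ² approximation (d − 2 ln S distributed as χ² with d degrees of freedom, where d counts the parameters shared by the two data sets).

```python
from scipy.stats import chi2, norm

def log_R(lnZ_AB, lnZ_A, lnZ_B):
    """Log evidence ratio: ln R = ln Z_AB - ln Z_A - ln Z_B."""
    return lnZ_AB - lnZ_A - lnZ_B

def tension_sigma(ln_S, d):
    """Map log-suspiciousness to an equivalent Gaussian sigma.

    Under the chi-squared approximation, d - 2 ln S follows a chi^2
    distribution with d degrees of freedom (d = number of parameters
    constrained in common by the two data sets).
    """
    p = chi2.sf(d - 2.0 * ln_S, df=d)  # tension probability
    return norm.isf(p / 2.0)           # two-tailed sigma equivalent

# Illustrative numbers only (not taken from the paper):
print(log_R(lnZ_AB=-1420.3, lnZ_A=-900.1, lnZ_B=-522.7))
print(tension_sigma(ln_S=-2.0, d=6))
```

Positive ln R favours mutual consistency but inherits a prior-volume dependence; the suspiciousness-based σ equivalent removes that dependence, which is why the paper reports both.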

To make the analysis computationally tractable, the authors train CosmoPower emulators for each of the five cosmological models using 200 000 Latin‑hypercube samples of the parameter space. The underlying Boltzmann solver CLASS generates CMB spectra (ℓ = 2–5000) and distance‑redshift relations (z = 0–20). These emulators are then interfaced with Cobaya, which runs adaptive MCMC chains (using its CosmoMC‑derived Metropolis sampler) to sample the posterior distributions for each likelihood combination.
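The Latin-hypercube construction of such an emulator training set can be sketched with SciPy's quasi-Monte Carlo tools. The parameter names and prior ranges below are illustrative placeholders, not the paper's actual priors:

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical flat prior ranges for a few cosmological parameters
bounds = {
    "omega_b":      (0.019, 0.025),
    "omega_cdm":    (0.09, 0.15),
    "h":            (0.50, 0.90),
    "n_s":          (0.88, 1.05),
    "ln10^10 A_s":  (2.70, 3.30),
}

sampler = qmc.LatinHypercube(d=len(bounds), seed=42)
unit_samples = sampler.random(n=200_000)  # points in the unit hypercube [0, 1)^d
lows = np.array([lo for lo, _ in bounds.values()])
highs = np.array([hi for _, hi in bounds.values()])
training_points = qmc.scale(unit_samples, lows, highs)  # rescale to prior ranges

print(training_points.shape)  # (200000, 5)
```

Each row would then be fed to the Boltzmann solver to generate a spectrum, forming the input–output pairs on which the emulator's network is trained.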

The main results are displayed in Figures 1–3. Figure 1 shows the log‑evidence differences Δln Z for each model relative to ΛCDM. Updated Planck likelihoods (which improve the low‑ℓ vs high‑ℓ and CMB‑lensing consistency) reduce the evidence gap for curvature and dark‑energy extensions, indicating that earlier reports of a “curvature tension” were largely driven by older CMB processing. When DESI BAO data are added, the Δln Z for ΩK becomes consistent with zero, effectively erasing the curvature anomaly.
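Log-evidence differences like Δln Z are conventionally read off a Jeffreys-type scale. A minimal helper, with thresholds roughly following the common Kass–Raftery convention expressed in ln units (the exact cut points vary between authors and are approximate):

```python
def jeffreys_label(delta_lnZ):
    """Qualitative strength of evidence for a given |Delta ln Z|.

    Thresholds approximate the Kass & Raftery convention in ln units;
    they are a rule of thumb, not a sharp statistical boundary.
    """
    x = abs(delta_lnZ)
    if x < 1.0:
        return "inconclusive"
    if x < 3.0:
        return "positive"
    if x < 5.0:
        return "strong"
    return "very strong"

print(jeffreys_label(2.0))   # a Delta ln Z of ~2 is only "positive" evidence
print(jeffreys_label(-0.5))  # near zero: "inconclusive"
```

On this scale, the Δln Z values discussed below sit at the low end, which is part of why the authors regard the model-preference signals as fragile.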

Figure 2 examines the joint CMB + BAO + SN constraints on w0waCDM. The evidence for evolving dark energy is highly sensitive to the SN likelihood choice: using the Pantheon likelihood yields a modest preference for w0waCDM (Δln Z ≈ +2), while the DES SN likelihood reverses the sign (Δln Z ≈ −1). Correspondingly, the suspiciousness S shifts from a ~2σ tension to a non‑significant level. This demonstrates that current SN systematics and likelihood implementations dominate any claim of dark‑energy dynamics.

Figure 3 adds the free‑neutrino‑mass extension. Allowing Σ mν ≈ 0.1 eV absorbs the residual shifts seen in curvature and w0waCDM analyses, without substantially changing the evidence. In other words, a modest neutrino mass can act as a “buffer” that reconciles slight parameter drifts across data sets, but it does not introduce new tensions.

The authors also investigate the optical depth τ. When the full likelihood context is taken into account, the τ‑driven discrepancy reported in some earlier works disappears; the posterior on τ is consistent across CMB, BAO, and SN combinations, and the suspiciousness does not indicate any significant tension.

Overall, the unified Bayesian assessment finds that, with the most recent CMB processing and the DESI BAO measurements, none of the minimal extensions achieve a statistically compelling advantage over ΛCDM. The evidence ratios are typically within 1σ, and the consistency statistics R and S show no strong discordance among the three data sets. Consequently, the authors argue that claims of an imminent need to replace ΛCDM with a w0waCDM model are premature.

Beyond the specific results, the paper showcases the value of a coherent Bayesian pipeline that simultaneously reports evidence, Occam penalties, and prior‑independent tension metrics. By applying the same framework to multiple likelihood versions, the study makes transparent how methodological choices (e.g., which SN likelihood to use) can shift conclusions. This approach provides a robust template for future analyses as next‑generation CMB experiments (Simons Observatory, CMB‑S4) and larger BAO/SN surveys become available, ensuring that any future deviation from ΛCDM will be judged on a solid statistical footing.

