Magnitude Uncertainties Impact Seismic Rate Estimates, Forecasts and Predictability Experiments
The Collaboratory for the Study of Earthquake Predictability (CSEP) aims to prospectively test time-dependent earthquake probability forecasts for consistency with observations. Competing time-dependent seismicity models are calibrated on earthquake catalog data, but catalogs carry substantial observational uncertainty. We study the impact of magnitude uncertainties on rate estimates in clustering models, on their forecasts, and on their evaluation by CSEP’s consistency tests. First, we quantify magnitude uncertainties. We find that magnitude uncertainty is more heavy-tailed than a Gaussian and is well described by a double-sided exponential (Laplace) distribution with scale parameter ν_c = 0.1–0.3. Second, we study the impact of such noise on the forecasts of a simple clustering model that captures the main ingredients of popular short-term models. We prove that the deviations of noisy forecasts from an exact forecast are power-law distributed in the tail with exponent α = 1/(a·ν_c), where a is the exponent of the aftershock productivity law. We further prove that the typical scale of the fluctuations remains sensitively dependent on the specific catalog. Third, we study how noisy forecasts are evaluated in CSEP consistency tests. Noisy forecasts are rejected more frequently than expected for a given confidence limit, and the Poisson assumption of the consistency tests is inadequate for short-term forecast evaluations. To capture the idiosyncrasies of each model together with any propagating uncertainties, forecasts need to specify the entire likelihood distribution of seismic rates.
💡 Research Summary
The paper investigates how uncertainties in earthquake magnitude measurements affect seismicity rate estimates, short‑term forecasts, and the evaluation of those forecasts within the CSEP (Collaboratory for the Study of Earthquake Predictability) framework. The authors begin by quantifying magnitude errors across several major global catalogs (USGS, ISC, JMA, etc.). By comparing multiple magnitude determinations for the same events, they find that the error distribution is significantly heavier‑tailed than a Gaussian; a double‑sided exponential (Laplace) distribution fits best, with a scale parameter ν_c ranging from 0.1 to 0.3. This indicates that magnitude uncertainties are larger and more variable than the often‑assumed ±0.1 magnitude unit.
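To illustrate why a Laplace fit outperforms a Gaussian for heavy-tailed magnitude differences, the comparison can be sketched on synthetic data. This is only a sketch: the paper fits real inter-catalog magnitude pairs, whereas the sample size, random seed, and true scale of 0.2 below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for paired magnitude differences between two catalogs
# (the paper uses real USGS/ISC/JMA pairs; here we only illustrate the fit).
diffs = rng.laplace(loc=0.0, scale=0.2, size=20_000)

# Maximum-likelihood fits of the two candidate noise models.
mu, sigma = diffs.mean(), diffs.std()     # Gaussian MLE
med = np.median(diffs)
b = np.abs(diffs - med).mean()            # Laplace scale MLE (nu_c)

# Average log-likelihood per observation under each model.
ll_gauss = -0.5 * np.log(2 * np.pi * sigma**2) \
           - ((diffs - mu) ** 2).mean() / (2 * sigma**2)
ll_laplace = -np.log(2 * b) - np.abs(diffs - med).mean() / b

print(f"Laplace scale nu_c ~ {b:.3f}")
print(f"mean log-lik: Laplace {ll_laplace:.3f} vs Gaussian {ll_gauss:.3f}")
```

On genuinely heavy-tailed residuals the Laplace model attains a higher average log-likelihood, mirroring the paper's finding that a Gaussian underweights large magnitude discrepancies.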
Next, the authors examine the impact of such noise on a simplified clustering model that captures the essential mechanisms of popular short‑term forecasting schemes (e.g., ETAS). The productivity law, N ∝ 10^{aM} with a ≈ 0.8–1.0, links an event's magnitude to its expected number of aftershocks. Introducing a magnitude error ε into this relationship multiplies the rate by 10^{a·ε}, giving a perturbed rate λ̂ = λ·10^{a·ε}. Because ε follows a Laplace distribution, the deviation Δλ = λ̂ − λ exhibits a power‑law tail: P(|Δλ| > x) ∝ x^{−α}, with α = 1/(a·ν_c). For typical values (a = 0.9, ν_c = 0.2), α ≈ 5.6, meaning that large deviations are rare but not negligible. Through Monte‑Carlo simulations using real catalogs (e.g., the 1995 Kobe event, the 2004 Sumatra earthquake), the authors demonstrate that the size of the fluctuations depends sensitively on the specific catalog and can grow substantially as errors accumulate over many generations of aftershocks. Consequently, a modest bias in the magnitude of an initial event can be amplified, leading to significant over‑ or under‑prediction of future seismicity.
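The power-law tail prediction can be checked with a minimal Monte-Carlo sketch. It assumes magnitude errors whose tail decays as 10^(−|ε|/ν_c), the base-10 convention under which the exponent works out to exactly α = 1/(a·ν_c); the sample size and Hill-estimator threshold are arbitrary choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

a, nu_c = 0.9, 0.2                 # productivity exponent, Laplace scale
alpha_theory = 1.0 / (a * nu_c)    # predicted tail exponent, ~5.6

# Magnitude errors with tail P(|eps| > x) ~ 10^(-x/nu_c); numpy's laplace
# uses the natural-exponential convention, so rescale the width by ln(10).
eps = rng.laplace(scale=nu_c / np.log(10), size=500_000)

# Perturbed-to-true rate ratio through the productivity law N ~ 10^(a M).
ratio = 10.0 ** (a * eps)

# Hill estimator of the upper-tail exponent above a threshold u.
u = 1.5
exceed = ratio[ratio > u]
alpha_hat = 1.0 / np.mean(np.log(exceed / u))

print(f"theory alpha = {alpha_theory:.2f}, Hill estimate = {alpha_hat:.2f}")
```

Above the threshold the ratio 10^{a·ε} is exactly Pareto distributed, so the Hill estimate converges to the theoretical exponent as the sample grows.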
The third part of the study evaluates how these noisy forecasts fare under CSEP’s standard consistency tests, which are based on a Poisson assumption for the number of observed events. By feeding the perturbed forecasts into the L‑test and N‑test, the authors find that the rejection rate far exceeds the nominal confidence level (e.g., a 5 % expected rejection at the 95 % confidence level rises to 12–18 %). This systematic over‑rejection is traced to the Poisson model’s inability to capture the extra variance introduced by magnitude noise; the true distribution of event counts is over‑dispersed relative to Poisson. Therefore, the current testing framework is ill‑suited for short‑term, magnitude‑sensitive forecasts.
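The over-rejection mechanism can be sketched by feeding over-dispersed counts into a Poisson N-test-style interval check. The negative-binomial stand-in below (mean 10, variance 30) and the simulation sizes are illustrative assumptions, not the paper's setup; the point is only that extra variance inflates the rejection rate well past the nominal level.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(2)

lam = 10.0            # forecast rate handed to the N-test
n_sims = 200_000

# 95% Poisson acceptance interval, as an N-test-style check would use.
lo, hi = poisson.ppf(0.025, lam), poisson.ppf(0.975, lam)

def rejection_rate(counts):
    """Fraction of simulated catalogs the Poisson interval check rejects."""
    return np.mean((counts < lo) | (counts > hi))

# Control: counts really are Poisson -> rejection at (or below) ~5%.
poisson_counts = rng.poisson(lam, n_sims)

# Over-dispersed counts (negative binomial, mean 10, variance 30), a
# stand-in for the extra variance injected by magnitude noise.
nb_counts = rng.negative_binomial(n=5, p=5 / 15, size=n_sims)

print(f"Poisson counts rejected:        {rejection_rate(poisson_counts):.3f}")
print(f"Over-dispersed counts rejected: {rejection_rate(nb_counts):.3f}")
```

Even this modest over-dispersion (variance three times the mean) pushes the rejection rate far above the nominal rate, reproducing the qualitative effect the authors report.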
In light of these findings, the authors argue that forecasts should not be limited to a single expected rate. Instead, models must provide the full likelihood distribution of seismic rates, explicitly incorporating magnitude uncertainty and its propagation through the model. This richer probabilistic output would enable consistency tests to be reformulated using appropriate count distributions (e.g., negative binomial or compound Poisson) that account for over‑dispersion. Moreover, they suggest adopting Bayesian parameter estimation techniques that treat magnitude errors as part of the prior, thereby producing posterior predictive distributions that naturally reflect the uncertainty.
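One way the suggested remedy could look in practice is a method-of-moments negative binomial matched to a forecast's mean and (inflated) variance. The mean and variance below are hypothetical numbers chosen for illustration, not values from the paper.

```python
from scipy.stats import nbinom, poisson

# Hypothetical forecast: mean rate mu, with extra variance sigma2 > mu
# contributed by propagated magnitude noise.
mu, sigma2 = 10.0, 30.0

# Method-of-moments negative binomial with that mean and variance.
r = mu**2 / (sigma2 - mu)   # NB shape ("size") parameter
p = r / (r + mu)            # NB success probability

# Same mean, but the negative binomial spreads weight into both tails.
for k in (3, 18):
    print(k, poisson.pmf(k, mu), nbinom.pmf(k, r, p))

# 95% acceptance intervals a consistency test would derive from each model:
print("Poisson:", poisson.interval(0.95, mu))
print("NegBin :", nbinom.interval(0.95, r, p))
```

The negative-binomial interval is noticeably wider than the Poisson one at the same confidence level, so an over-dispersion-aware test would stop rejecting forecasts merely for carrying honest magnitude uncertainty.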
Overall, the paper demonstrates that magnitude uncertainties are a non‑trivial source of error in clustering‑based seismicity models. The heavy‑tailed nature of these errors leads to power‑law fluctuations in forecasted rates, which in turn cause standard Poisson‑based evaluation methods to reject otherwise reasonable models too often. By moving toward full probabilistic forecasts and more robust statistical tests, the earthquake forecasting community can improve both the scientific rigor and practical reliability of short‑term seismic hazard assessments.