Calendar Time Local Earthquake Forecasts from Earthquake Nowcasts: A Do-It-Yourself (DIY) Ensemble Method
A previous paper discussed a method that builds on local earthquake nowcasts to produce fixed natural time forecasts, where natural time represents counts of small earthquakes since the last large earthquake. In this second paper we extend the natural time forecast to calendar time forecasts using an ensemble approach. The Gutenberg-Richter (GR) magnitude-frequency relation, which is the basis for both methods, states that for every large target earthquake of magnitude greater than MT, there are on average NGR small earthquakes of magnitude MS. The only assumption in our method is that the statistics of the local region are the same as those of the larger surrounding regions. The method has significant skill, as defined by the Receiver Operating Characteristic (ROC) test, and that skill improves as time since the last major earthquake increases. The probability is conditioned on the number of small earthquakes n(t) that have occurred since the last large earthquake. We do not need to assume a probability model; the probability is instead computed directly as the Positive Predictive Value (PPV) associated with the ROC curve. We find that for short time intervals (months), the forecast shows strong main shock clustering, followed by a gradual buildup of probability over the following years leading to the next large earthquake (“elastic rebound”). We apply the method to the same local region around Los Angeles, California, as in our first paper, following the January 17, 1994 magnitude M6.7 Northridge earthquake.
💡 Research Summary
This paper builds on a previous study that used local earthquake nowcasts—counts of small earthquakes since the last large event—to generate fixed “natural‑time” forecasts. The authors extend that concept to calendar‑time forecasts (months and years) by introducing a do‑it‑yourself (DIY) ensemble method. The theoretical foundation is the Gutenberg‑Richter (GR) magnitude‑frequency relation, which states that for every target earthquake of magnitude greater than MT there are on average NGR smaller earthquakes of magnitude MS. The only substantive assumption is that the statistical properties of a small local region are the same as those of the larger surrounding region.
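The GR scaling that underlies the method can be sketched in a few lines. The b-value of 1.0 and the example magnitudes below are illustrative assumptions, not values taken from the paper:

```python
# Sketch of the Gutenberg-Richter scaling used by the method.
# From log10 N(>=M) = a - b*M, the expected number of small earthquakes
# (magnitude >= m_small) per target earthquake (magnitude >= m_target) is
# N_GR = 10^(b * (m_target - m_small)). A b-value near 1.0 is typical
# for tectonic regions, but the value here is an assumption.

def n_gr(m_target: float, m_small: float, b: float = 1.0) -> float:
    """Expected count of small (>= m_small) events per target (>= m_target) event."""
    return 10.0 ** (b * (m_target - m_small))

# Example: for M_T = 6.0 and M_S = 3.0 with b = 1, N_GR = 1000.
print(n_gr(6.0, 3.0))  # 1000.0
```

The ratio form cancels the a-value, which is why only the regional b-value is needed.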
Instead of fitting a parametric probability model, the method computes the probability directly from the Positive Predictive Value (PPV) associated with the Receiver Operating Characteristic (ROC) curve. For any given time t, the number of small earthquakes n(t) that have occurred since the last large event is counted; this count determines a point on the ROC curve, and the corresponding PPV is taken as the forecast probability for a new large earthquake in the next calendar interval. Because PPV is derived empirically from observed data, the approach avoids the uncertainties inherent in model‑based probability estimates.
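The empirical PPV computation described above can be sketched as follows. The data arrays and threshold are hypothetical toy values, purely for illustration of the counting logic:

```python
# Sketch: empirical PPV for the alarm condition "n(t) >= k".
# `counts` are hypothetical small-earthquake counts since the last large
# event, one per time step; `followed` flags whether a large earthquake
# occurred within the forecast horizon after that step.

def ppv_at_threshold(counts, followed, k):
    """PPV = TP / (TP + FP) for the alarm 'n(t) >= k'."""
    tp = sum(1 for n, f in zip(counts, followed) if n >= k and f)
    fp = sum(1 for n, f in zip(counts, followed) if n >= k and not f)
    return tp / (tp + fp) if (tp + fp) else 0.0

# Toy data (hypothetical, for illustration only):
counts = [5, 12, 20, 33, 41, 50]
followed = [False, False, True, False, True, True]
print(ppv_at_threshold(counts, followed, 20))  # 4 alarms, 3 hits -> 0.75
```

Because the PPV is just a ratio of observed hits to observed alarms, it updates automatically as each new time step is appended to the record.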
The ensemble component consists of multiple GR‑based sub‑forecasts, each defined by a different magnitude threshold (e.g., MS = 2.5, 3.0, 3.5) and a different spatial buffer (e.g., 100 km, 200 km) around the target region. Each sub‑forecast yields its own PPV curve; the final forecast is obtained by averaging (or weighted averaging) these PPVs. This “DIY” strategy is deliberately simple, requiring only a catalog of small earthquakes and the GR parameters for the surrounding region, making it applicable even where data are sparse.
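The ensemble step reduces to averaging the sub-forecast PPVs. A minimal sketch, in which the PPV values and weights are hypothetical:

```python
# Sketch of the ensemble step: combine the PPVs of several GR-based
# sub-forecasts (e.g., different magnitude cutoffs or spatial buffers)
# by a plain or weighted average, as the summary describes.

def ensemble_ppv(ppvs, weights=None):
    """Weighted mean of sub-forecast PPVs; equal weights by default."""
    if weights is None:
        weights = [1.0] * len(ppvs)
    return sum(p * w for p, w in zip(ppvs, weights)) / sum(weights)

# Three hypothetical sub-forecasts (e.g., MS = 2.5, 3.0, 3.5):
print(ensemble_ppv([0.42, 0.50, 0.58]))             # unweighted mean -> 0.5
print(ensemble_ppv([0.42, 0.50, 0.58], [2, 1, 1]))  # weighted mean
```

How the weights should be chosen is left open in the summary; equal weighting is the simplest default.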
The method is tested on the Los Angeles area, using the January 17, 1994, M6.7 Northridge earthquake as the reference large event. Small‑earthquake counts were compiled monthly from the USGS catalog for the period 1994–2025. The results show three distinct temporal regimes:
- Short‑term clustering (months) – In the first 3–6 months after Northridge, n(t) rises sharply, and the PPV reaches values above 0.8, indicating a high probability of another large shock. This matches the well‑known aftershock clustering observed after major Californian earthquakes.
- Intermediate buildup (1–3 years) – After the aftershock swarm subsides, n(t) continues to increase more slowly. PPV climbs gradually to 0.5–0.6, reflecting the “elastic rebound” hypothesis where tectonic stress re‑accumulates in the fault zone.
- Long‑term plateau (5+ years) – Beyond five years, PPV stabilizes around 0.4, close to the long‑term average probability implied by the GR law for the region.
ROC analysis confirms that the forecasts have genuine skill. The Area Under the Curve (AUC) ranges from 0.78 to 0.85 over the full study period, well above the 0.5 baseline of random guessing. Moreover, the AUC improves as the elapsed time since the last large event increases, indicating that the method captures the time‑dependent nature of seismic hazard.
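The ROC skill test can be sketched by sweeping the alarm threshold k over all observed counts and integrating the resulting (FPR, TPR) curve with the trapezoid rule. The toy data below are hypothetical and only illustrate the mechanics, not the paper's reported AUC values:

```python
# Sketch: ROC curve and AUC for the threshold alarms "n(t) >= k".
# Sweeping k from high to low traces the curve from (0, 0) toward (1, 1).

def roc_auc(counts, followed):
    pos = sum(followed)
    neg = len(followed) - pos
    pts = [(0.0, 0.0)]
    for k in sorted(set(counts), reverse=True):
        tp = sum(1 for n, f in zip(counts, followed) if n >= k and f)
        fp = sum(1 for n, f in zip(counts, followed) if n >= k and not f)
        pts.append((fp / neg, tp / pos))
    pts.append((1.0, 1.0))
    # Trapezoidal integration of TPR over FPR.
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

# Toy data (hypothetical, for illustration only):
counts = [5, 12, 20, 33, 41, 50]
followed = [False, False, True, False, True, True]
print(round(roc_auc(counts, followed), 3))  # 0.889
```

An AUC of 0.5 corresponds to random guessing, which is the baseline the reported 0.78–0.85 values are compared against.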
The authors discuss several advantages: (i) no need for explicit probability distributions, (ii) real‑time update capability as new small events are recorded, (iii) transparency and ease of implementation, and (iv) robustness through ensemble averaging. They also acknowledge limitations: the assumption of statistical homogeneity between the target and surrounding regions may be violated in tectonically complex settings; PPV, while empirically derived, does not guarantee calibration to true occurrence rates; and the method’s performance depends on the completeness of the small‑earthquake catalog, which can vary with sensor density.
In conclusion, the paper presents a practical, data‑driven framework for converting natural‑time nowcasts into calendar‑time earthquake forecasts. By leveraging the GR relation, ROC‑based PPV, and a straightforward ensemble, the approach delivers skillful probabilistic forecasts without the overhead of sophisticated statistical modeling. The authors suggest future work to test the method in other seismic zones (e.g., Japan, Turkey), explore alternative weighting schemes for the ensemble, and integrate improvements in seismic network coverage to enhance small‑event detection.