Regularized Ensemble Forecasting for Learning Weights from Historical and Current Forecasts
Combining forecasts from multiple experts often yields more accurate results than relying on a single expert. In this paper, we introduce a novel regularized ensemble method that extends the traditional linear opinion pool by leveraging both current forecasts and historical performances to set the weights. Unlike existing approaches that rely only on either the current forecasts or past accuracy, our method accounts for both sources simultaneously. It learns weights by minimizing the variance of the combined forecast (or its transformed version) while incorporating a regularization term informed by historical performances. We also show that this approach has a Bayesian interpretation. Different distributional assumptions within this Bayesian framework yield different functional forms for the variance component and the regularization term, adapting the method to various scenarios. In empirical studies on Walmart sales and macroeconomic forecasting, our ensemble outperforms leading benchmark models both when experts’ full forecasting histories are available and when experts enter and exit over time, resulting in incomplete historical records. Throughout, we provide illustrative examples that show how the optimal weights are determined and, based on the empirical results, we discuss where the framework’s strengths lie and when experts’ past versus current forecasts are more informative.
💡 Research Summary
The paper tackles a central problem in forecast combination: how to blend the predictions of multiple experts (or models) when both current forecasts and historical performance information are available, but each alone is insufficient. Existing approaches either rely solely on the current point forecasts (e.g., simple or trimmed means) or exclusively on past error statistics (e.g., covariance‑based weights, regression‑based stacking). Both families have clear drawbacks—current‑only methods are vulnerable to extreme forecasts, while history‑only methods require long, stable error histories and can be unstable when the number of experts is large relative to the length of the record.
To bridge this gap, the authors propose the Regularized Ensemble Forecasting (REF) framework. The core idea is to minimize a composite objective consisting of two parts: (i) a variance‑related term that penalizes experts whose current forecasts deviate strongly from the crowd’s centre, and (ii) a regularization term that pulls the weights toward a set of prior weights derived from historical performance. Formally, the problem is
w* = arg min_w f(∑_{i=1}^k w_i² (μ_i − μ̄)²) + λ ∑_{i=1}^k (w_i − w_i⁰)², subject to ∑_{i=1}^k w_i = 1 and w_i ≥ 0,

where μ_i is expert i's current forecast, μ̄ denotes the crowd's centre, f is a (possibly transforming) function of the variance term, w⁰ = (w_1⁰, …, w_k⁰) are the prior weights derived from historical performance, and λ ≥ 0 controls the strength of the regularization. The exact functional forms of the variance component and the regularizer depend on the distributional assumptions made in the Bayesian interpretation; the squared-distance regularizer shown here is one representative case.
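The objective above can be sketched as a small constrained optimization. The following is a minimal illustration, not the paper's implementation: it assumes f is the identity, takes the simple mean of the current forecasts as the crowd's centre, and uses a squared-distance regularizer toward the prior weights; the function name `ref_weights` and all inputs are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def ref_weights(mu, w0, lam):
    """Solve a simplified version of the REF objective:
    sum_i w_i^2 (mu_i - mu_bar)^2  +  lam * sum_i (w_i - w0_i)^2,
    subject to the weights lying on the probability simplex."""
    mu = np.asarray(mu, dtype=float)
    w0 = np.asarray(w0, dtype=float)
    mu_bar = mu.mean()  # simple proxy for the crowd's centre

    def objective(w):
        variance_term = np.sum(w**2 * (mu - mu_bar)**2)
        reg_term = lam * np.sum((w - w0)**2)
        return variance_term + reg_term

    k = len(mu)
    constraints = {"type": "eq", "fun": lambda w: w.sum() - 1.0}
    result = minimize(objective, w0, method="SLSQP",
                      bounds=[(0.0, 1.0)] * k, constraints=constraints)
    return result.x

# Three experts; the third issues an extreme current forecast.
mu = [10.0, 12.0, 30.0]
w0 = [0.4, 0.4, 0.2]   # prior weights from historical accuracy

w_weak = ref_weights(mu, w0, lam=0.01)    # variance term dominates
w_strong = ref_weights(mu, w0, lam=100.0) # pulled toward the prior
```

With weak regularization the outlying expert is down-weighted by the variance term; with strong regularization the solution stays close to the historically derived prior weights, which mirrors the trade-off the framework is designed to balance.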