Prediction with expert evaluators advice
We introduce a new protocol for prediction with expert advice in which each expert evaluates the learner’s and his own performance using a loss function that may change over time and may be different from the loss functions used by the other experts. The learner’s goal is to perform better or not much worse than each expert, as evaluated by that expert, for all experts simultaneously. If the loss functions used by the experts are all proper scoring rules and all mixable, we show that the defensive forecasting algorithm enjoys the same performance guarantee as that attainable by the Aggregating Algorithm in the standard setting and known to be optimal. This result is also applied to the case of “specialist” (or “sleeping”) experts. In this case, the defensive forecasting algorithm reduces to a simple modification of the Aggregating Algorithm.
💡 Research Summary
The paper revisits the classic "prediction with expert advice" setting and introduces a substantially more flexible protocol in which each expert evaluates both the learner's and his own predictions using a loss function that may be distinct from those of other experts and may change over time. Formally, on round \(t\) expert \(i\) supplies a loss function \(L_i^t(p, y)\), where \(p\) is the learner's probabilistic forecast and \(y\) the realized outcome. The learner's objective is to guarantee, for every expert \(i\), that the cumulative loss measured by that expert's own loss functions never exceeds the expert's cumulative loss by more than a small constant:
\[
\sum_{t=1}^{T} L_i^t(p_t, y_t) \;\le\; \sum_{t=1}^{T} L_i^t(p_t^i, y_t) + \frac{\ln K}{\eta},
\]
where \(p_t\) is the learner's forecast and \(p_t^i\) expert \(i\)'s forecast on round \(t\), \(K\) is the number of experts, and \(\eta\) is the mixability constant. This is the same guarantee attained by the Aggregating Algorithm in the standard setting.