Adaptive Off-Policy Inference for M-Estimators Under Model Misspecification

When data are collected adaptively, as in bandit algorithms, classical statistical approaches such as ordinary least squares and $M$-estimation often fail to achieve asymptotic normality. Although recent work has modified these classical approaches to ensure valid inference on adaptively collected data, most of it assumes that the model is correctly specified. The misspecified setting poses unique challenges because the parameter of interest may not even be well-defined over a non-stationary distribution of rewards. We therefore tackle the problem of *off-policy* inference in adaptive settings, uniquely defining a projected solution with respect to a stationary evaluation policy. Our method provides valid inference for $M$-estimators that use adaptively collected bandit data with a possibly misspecified working model. A key ingredient in our approach is the use of flexible methods to stabilize the variance induced by adaptive data collection. A major novelty is that the procedure yields valid confidence sets even when treatment policies are unstable and non-converging, for example when there is no unique optimal arm and standard bandit algorithms are used. Empirical results on semi-synthetic datasets constructed from the Osteoarthritis Initiative demonstrate that the method maintains type I error control, while existing methods for inference in adaptive settings fail to achieve nominal coverage in the misspecified case.


💡 Research Summary

This paper addresses a fundamental challenge in modern data analysis: performing valid statistical inference when data are collected adaptively, as in contextual bandit experiments, and the working model used for estimation may be misspecified. Classical inference tools (ordinary least squares, standard M‑estimation, and their asymptotic normality results) break down under adaptive sampling because the data‑generating process becomes non‑stationary and the variance of the score functions can be highly unstable. Recent work that adapts inference to bandit data typically assumes correct model specification, which is rarely guaranteed in practice.

The authors propose a novel framework that defines an off‑policy projection parameter θ★ as the minimizer (or maximizer) of the expected loss under a fixed evaluation policy πe, chosen independently of the adaptive data‑collection policy. By anchoring the target to the stationary distribution induced by πe, the parameter remains well‑defined even when the actual treatment probabilities fluctuate arbitrarily over time. This off‑policy perspective mirrors ideas from off‑policy evaluation in reinforcement learning, but is applied here to general M‑estimators, covering linear regression, generalized linear models, and any loss that can be written as mθ(x,a,y).
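
In symbols, and adopting plausible notation for this description (the paper's exact display may differ), the projection parameter is

$$
\theta^{\star} \;\in\; \operatorname*{arg\,min}_{\theta \in \Theta} \;\; \mathbb{E}_{X \sim \mathcal{P},\; A \sim \pi^{e}(\cdot \mid X),\; Y \sim p(\cdot \mid X, A)}\bigl[\, m_{\theta}(X, A, Y) \,\bigr],
$$

where 𝒫 is the context distribution and p(· | x, a) the conditional reward distribution. Only the fixed evaluation policy πe, never the adaptive logging policy, enters the definition, which is what keeps θ★ stationary.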

The core theoretical contribution is a Central Limit Theorem (Theorem 1) for the adaptive M‑estimator, which reweights each observation by an importance ratio of evaluation‑policy to logging‑policy propensities. The estimator takes the form

$$
\hat{\theta}_T \;\in\; \operatorname*{arg\,min}_{\theta \in \Theta} \; \frac{1}{T} \sum_{t=1}^{T} \frac{\pi^{e}(A_t \mid X_t)}{\pi_t(A_t \mid X_t)} \, m_{\theta}(X_t, A_t, Y_t),
$$

where πt denotes the (adaptive) logging policy at time t. The importance ratio re‑centers the empirical loss on the stationary distribution induced by πe, so that the estimator targets θ★ even though the collection policy evolves with the data.
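
The summary is truncated here, so the theorem statement itself is not reproduced. As a purely generic sketch (not the paper's exact result, which relies on variance stabilization to handle non‑converging policies), a sandwich‑form limit for such weighted M‑estimators would read

$$
\sqrt{T}\,\bigl(\hat{\theta}_T - \theta^{\star}\bigr) \;\xrightarrow{\,d\,}\; \mathcal{N}\bigl(0,\; H^{-1} V H^{-1}\bigr),
\qquad
H = \nabla_{\theta}^{2}\, \mathbb{E}_{\pi^{e}}\bigl[\, m_{\theta^{\star}}(X, A, Y) \,\bigr],
$$

with V the limiting variance of the importance‑weighted scores. The paper's contribution is to make a limit of this kind hold, with an estimable variance, even when the logging policy πt never converges.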

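To make the construction concrete, here is a minimal, self‑contained Python sketch under assumed notation: a hypothetical two‑arm ε‑greedy bandit generates non‑stationary propensities, rewards are nonlinear so the linear working model is misspecified, and the fit is importance‑weighted toward a uniform evaluation policy. It illustrates only the point estimate; the paper's stabilizing weights and confidence‑set construction are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 5000  # horizon

# Hypothetical environment: rewards are nonlinear in the context, so the
# linear working model fit below is misspecified by construction.
x = rng.uniform(-1.0, 1.0, size=T)

def draw_reward(a, xt):
    mu = np.sin(2.0 * xt) if a == 1 else 0.3 * xt
    return mu + rng.normal(scale=0.5)

# Adaptive logging policy pi_t: epsilon-greedy on running mean rewards,
# so action propensities are non-stationary and data-dependent.
counts, sums = np.ones(2), np.zeros(2)
A = np.empty(T, dtype=int)
Y = np.empty(T)
P = np.empty(T)  # realized propensities pi_t(A_t | X_t)
for t in range(T):
    eps = max(0.05, (t + 1) ** -0.5)  # decaying exploration rate
    greedy = int(np.argmax(sums / counts))
    probs = np.full(2, eps / 2.0)
    probs[greedy] += 1.0 - eps
    a = int(rng.choice(2, p=probs))
    y = draw_reward(a, x[t])
    A[t], Y[t], P[t] = a, y, probs[a]
    counts[a] += 1.0
    sums[a] += y

# Off-policy M-estimation: reweight the squared-error loss by
# pi_e(A_t|X_t) / pi_t(A_t|X_t), with a uniform evaluation policy
# pi_e(a|x) = 1/2, and fit arm 1's linear projection parameters.
pi_e = 0.5
w = pi_e / P                     # importance ratios
mask = A == 1                    # observations where arm 1 was pulled
Z = np.column_stack([np.ones(mask.sum()), x[mask]])  # design: [1, x]
Wz = w[mask][:, None] * Z
theta_hat = np.linalg.solve(Z.T @ Wz, Z.T @ (w[mask] * Y[mask]))
print("importance-weighted linear projection for arm 1:", theta_hat)
```

The printed coefficients estimate the projection of arm 1's nonlinear mean onto the linear working model under πe, a target that stays well‑defined even though the logging propensities drift over time.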
