An Axiomatic Approach to Comparing Sensitivity Parameters
Many methods are available for assessing the importance of omitted variables in linear regression. These methods typically make different, non-falsifiable assumptions. Hence the data alone cannot tell us which method is most appropriate. Since it is unreasonable to expect results to be robust against all possible robustness checks, researchers often use methods deemed “interpretable,” a subjective criterion with no formal definition. In contrast, we develop the first formal, axiomatic framework for comparing and selecting among these methods. Our framework is analogous to the standard approach for comparing estimators based on their sampling distributions. We propose that sensitivity parameters be selected based on their covariate sampling distributions, a design distribution of parameter values induced by an assumption on how covariates are assigned to be observed or unobserved. Using this idea, we define new concepts of parameter consistency and monotonicity, and argue that a reasonable sensitivity parameter should satisfy both properties. We prove that the literature’s most popular approach is inconsistent and non-monotonic, while several alternatives satisfy both.
💡 Research Summary
The paper tackles a pervasive problem in applied econometrics: how to choose among the many sensitivity‑analysis methods that quantify the impact of omitted variables on linear regression estimates. Existing approaches, such as those of Altonji, Elder, and Taber (2005) and Oster (2019), are widely used but rest on non‑falsifiable assumptions, and the informal criteria used to choose among them (e.g., “interpretability”) are subjective and lack any formal definition. Because the data alone cannot tell which set of assumptions is correct, researchers typically default to the most popular method or rely on intuition, leaving the selection process opaque.
Inspired by classical frequentist theory, the authors construct a formal, axiomatic framework for comparing sensitivity parameters. They imagine repeatedly drawing a set of covariates from a finite universe of potentially observable variables, with each draw randomly partitioning the universe into observed and unobserved subsets. This random partition induces a covariate sampling distribution for any given sensitivity parameter—the distribution of its values across all possible selections of observed covariates.
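The random-partition idea can be made concrete with a small Monte Carlo sketch. Everything below is illustrative: the function names and the toy "unobserved-to-observed" ratio are stand-ins chosen for clarity, not the paper's actual sensitivity estimators. Each of K potential covariates is observed independently with probability p, and we record the value a candidate parameter takes on each random partition.

```python
import random

def covariate_sampling_draws(sens_param, K, p, n_draws=5_000, seed=0):
    """Monte Carlo approximation of a covariate sampling distribution.

    sens_param: maps (observed_index_set, K) -> parameter value.
    K: size of the finite universe of potential covariates.
    p: probability that each covariate is assigned to the observed set.
    """
    rng = random.Random(seed)
    draws = []
    for _ in range(n_draws):
        # Randomly partition the universe into observed and unobserved.
        observed = frozenset(k for k in range(K) if rng.random() < p)
        draws.append(sens_param(observed, K))
    return draws

# Toy stand-in parameter (not from the paper): the relative count of
# unobserved to observed covariates, guarded against an empty observed set.
def toy_ratio(observed, K):
    return (K - len(observed)) / max(len(observed), 1)

draws = covariate_sampling_draws(toy_ratio, K=100, p=0.5)
```

The histogram of `draws` is the covariate sampling distribution of the toy parameter under a Bernoulli(p) selection design; the paper's analysis replaces `toy_ratio` with actual sensitivity parameters such as Oster's δ.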
Two fundamental properties are proposed for a “reasonable” sensitivity parameter:
- **Consistency** – Under an equal‑selection design (each covariate has a ½ chance of being observed), the covariate sampling distribution should collapse to the benchmark value 1 as the number of covariates grows. This mirrors the original intuition of Altonji et al. that, when observed and unobserved covariates are equally important, the relative importance measure should equal 1.
- **Monotonicity in selection** – When the probability of a covariate being observed exceeds ½, the parameter’s distribution should shift below 1; when it is below ½, the distribution should shift above 1. In other words, the parameter should move in the expected direction as the proportion of unobserved covariates changes.
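Both axioms can be checked numerically for a toy parameter (again, an illustrative stand-in for a relative-importance measure, not one of the paper's estimators): under equal selection (p = ½) its distribution centers on 1 as K grows, and raising or lowering p shifts the distribution below or above 1.

```python
import random

def mean_toy_ratio(K, p, n_draws=2_000, seed=0):
    """Average of a toy 'unobserved/observed' ratio over random partitions,
    where each of K covariates is observed independently with probability p."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_draws):
        n_obs = sum(rng.random() < p for _ in range(K))
        total += (K - n_obs) / max(n_obs, 1)
    return total / n_draws

# Consistency: equal selection (p = 1/2) centers the toy ratio at 1.
center = mean_toy_ratio(K=200, p=0.5)
# Monotonicity: observing more covariates (p > 1/2) pushes the ratio
# below 1; observing fewer (p < 1/2) pushes it above 1.
low = mean_toy_ratio(K=200, p=0.8)
high = mean_toy_ratio(K=200, p=0.2)
```

The paper's contribution is to ask whether real sensitivity parameters behave like this toy one does; Theorem 3 shows that Oster's δ does not.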
Using these axioms, the authors evaluate the most popular sensitivity parameters. They prove (Theorem 3) that Oster’s δ‑parameter, which relies on a residualization of the omitted variable, does not satisfy either property: under equal selection it can converge to any real number, and its distribution is not monotonic in the selection probability. Consequently, the common practice of using the value 1 as a robustness cutoff for δ is theoretically unjustified and can lead to misleading conclusions.
In contrast, the paper shows that several alternative parameters—most notably the R‑value of Cinelli and Hazlett (2020) and the new parameter introduced by Diegert, Masten, and Poirier (2025)—do satisfy both consistency and monotonicity. The authors derive high‑level conditions on the covariance structure of the covariates (moving‑average, autoregressive, exchangeable, or factor models) under which these properties hold, and they demonstrate that these conditions are met by a broad class of realistic data‑generating processes.
To complement the asymptotic theory, the authors conduct a calibrated simulation using the dataset of Bazzi, Fiszbein, and Gebresilasse (2020). Treating the 22 observed covariates as the entire universe of potential covariates, they randomly select subsets of size n and compute the exact sampling distributions of the various sensitivity parameters. The empirical results align with the theory: Oster’s δ displays a wide, non‑centered distribution, while the R‑value and Diegert‑et‑al. parameter concentrate around 1 and shift monotonically with the selection probability.
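For a small universe, the exact (rather than Monte Carlo) sampling distribution can be computed by enumerating every size-n observed subset. The sketch below uses a tiny illustrative universe and a toy weighted-importance parameter; the paper instead evaluates its actual sensitivity estimators over subsets of the 22 Bazzi–Fiszbein–Gebresilasse covariates.

```python
from itertools import combinations

def exact_sampling_distribution(sens_param, K, n):
    """Exact covariate sampling distribution under a fixed-size design:
    evaluate sens_param on every size-n observed subset of {0, ..., K-1}."""
    return [sens_param(set(obs)) for obs in combinations(range(K), n)]

# Toy stand-in: covariates carry unequal 'importance' weights, and the
# parameter is total unobserved weight over total observed weight.
weights = [1 + 0.5 * k for k in range(10)]  # illustrative weights

def toy_param(observed):
    obs_w = sum(weights[k] for k in observed)
    return (sum(weights) - obs_w) / obs_w

# C(10, 5) = 252 subsets, so the distribution has exactly 252 values.
dist = exact_sampling_distribution(toy_param, K=10, n=5)
```

Because every subset is enumerated, the resulting distribution is exact rather than a Monte Carlo approximation, which is feasible whenever the universe of potential covariates is small.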
The paper concludes with two strands of implications. For applied researchers, it recommends abandoning the default use of Oster’s δ in favor of parameters that satisfy the proposed axioms, providing a defensible, theory‑based justification for the chosen robustness check. For theorists, it suggests that the framework can be extended beyond omitted‑variable bias to other contexts (e.g., parallel‑trend violations in diff‑in‑diff, instrument exogeneity) where sensitivity parameters are needed. Moreover, the authors hint at future work that could exploit additional features of the covariate sampling distribution (variance, tail behavior) to further differentiate among sensitivity measures.
Overall, the article delivers the first rigorous, axiomatic tool for evaluating and selecting sensitivity parameters, moving the field beyond vague notions of “interpretability” toward transparent, property‑driven methodological choices.