Does adjustment for measurement error induce positive bias if there is no true association?
This article is a response to an off-the-record discussion that I had at an international meeting of epidemiologists. It centered on a concern, perhaps widespread, that measurement error adjustment methods can induce positive bias in results of epidemiological studies when there is no true association. I trace the possible history of this supposition and test it in a simulation study of both continuous and binary health outcomes under a classical multiplicative measurement error model. A Bayesian measurement adjustment method is used. The main conclusion is that adjustment for the presumed measurement error does not ‘induce’ positive associations, especially if the focus of the interpretation of the result is taken away from the point estimate. This is in line with properties of earlier measurement error adjustment methods introduced to epidemiologists in the 1990s. A heuristic argument is provided to support the generalizability of this observation in the Bayesian framework. I find that when there is no true association, positive bias can only be induced by indefensible manipulation of the priors, such that they dominate the data. The misconception about bias induced by measurement error adjustment should be more clearly explained during the training of epidemiologists to ensure the appropriate (and wider) use of measurement error correction procedures. The simple message that can be derived from this paper is: ‘Do not focus on point estimates, but mind the gap between boundaries that reflect variability in the estimate’. And of course: ‘Treat measurement error as a tractable problem that deserves much more attention than just a qualitative (throw-away) discussion’.
💡 Research Summary
The paper addresses a widely circulated concern among epidemiologists that adjusting for measurement error may artificially create a positive association when, in truth, no association exists. The author first traces the historical origins of this belief, noting that early measurement‑error correction methods introduced in the 1990s (e.g., regression calibration, SIMEX) were often taught with an emphasis on bias reduction but without sufficient discussion of the accompanying increase in variance. This combination can lead novices to misinterpret a widened confidence interval as evidence of a spurious positive effect.
To test the hypothesis, the author conducts a Monte‑Carlo simulation study under a classical multiplicative measurement‑error model. Two outcome types are considered: a continuous health marker (e.g., blood pressure) and a binary disease indicator. In every scenario the true regression coefficient linking the exposure to the outcome is set to zero, guaranteeing no genuine association. Measurement error is introduced on the exposure with two levels of variability (10 % and 30 % of the true value).
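The null-association setup above can be sketched in a few lines. The lognormal forms and parameter values below are illustrative assumptions, not the paper's actual simulation code; the key features are preserved: the observed exposure is the true exposure times an independent, mean-one multiplicative error, and the true slope is zero.

```python
import math
import random
import statistics

def simulate(n, beta, err_sd, seed=0):
    """Classical multiplicative measurement error: observed W = X * U,
    with error U independent of the true exposure X and E[U] = 1.
    A continuous outcome Y depends on the TRUE exposure with slope `beta`.
    The lognormal distributions and parameters are illustrative assumptions."""
    rng = random.Random(seed)
    X = [math.exp(rng.gauss(0.0, 1.0)) for _ in range(n)]                # true exposure
    U = [math.exp(rng.gauss(-err_sd**2 / 2, err_sd)) for _ in range(n)]  # mean-1 error
    W = [x * u for x, u in zip(X, U)]                                    # mismeasured exposure
    Y = [beta * x + rng.gauss(0.0, 1.0) for x in X]                      # continuous outcome
    return X, W, Y

def ols_slope(x, y):
    """Simple-regression slope: cov(x, y) / var(x)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx
```

With `beta = 0`, the naive slope of Y on the mismeasured W scatters around zero: measurement error adds noise but cannot manufacture an association in expectation.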
Adjustment is performed using a Bayesian measurement‑error correction algorithm. Three prior specifications are examined: (1) a non‑informative normal prior with an extremely large variance, (2) a weakly informative prior centered at zero with moderate variance, and (3) a strongly informative prior that is positively biased (mean = 0.5, variance = 0.1). For each combination, 10 000 MCMC replications are generated, and posterior means together with 95 % credible intervals are recorded.
Results are unequivocal. With the non‑informative and weakly informative priors, posterior means hover around zero across all error magnitudes, and the credible intervals always contain zero. As the error variance grows, the intervals widen—as expected from the variance‑inflation property of measurement‑error correction—but there is no systematic shift toward positive values. Only when the prior is deliberately mis‑specified to dominate the likelihood does the posterior mean drift positively and the interval exclude zero. This demonstrates that the measurement‑error adjustment itself does not induce positive bias; bias appears only when the analyst imposes an unjustified, overly strong prior that overwhelms the data.
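Why only a dominating prior can shift the result is visible in a minimal normal–normal conjugate sketch. This is an illustrative simplification, not the paper's MCMC algorithm: the data's slope estimate `beta_hat` (with standard error `se`, values assumed here) is treated as a normal likelihood, so the posterior mean is a precision-weighted average of the prior mean and `beta_hat`.

```python
def posterior(prior_mean, prior_var, beta_hat, se):
    """Normal-normal conjugate update: the posterior mean is a
    precision-weighted average of the prior mean and the data estimate."""
    prec = 1.0 / prior_var + 1.0 / se**2
    mean = (prior_mean / prior_var + beta_hat / se**2) / prec
    return mean, 1.0 / prec  # (posterior mean, posterior variance)

# A near-null data estimate, as arises under a true null (values illustrative):
beta_hat, se = 0.01, 0.2

flat_mean, _ = posterior(0.0, 1e6, beta_hat, se)   # non-informative prior
weak_mean, _ = posterior(0.0, 1.0, beta_hat, se)   # weakly informative, centered at 0
strong_mean, _ = posterior(0.5, 0.1, beta_hat, se) # positively biased, high precision
```

The flat and weak priors leave the posterior mean essentially at the data estimate, while the strong positively biased prior drags it away from zero; the drift grows as the prior precision overwhelms the data precision, which is exactly the "prior dominating the likelihood" condition described above.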
The author further provides a heuristic argument showing that the same conclusion holds for frequentist correction methods: if the true effect is zero, the expectation of any unbiased correction remains zero, while the variance inevitably increases. The Bayesian framework simply makes this trade‑off explicit through the posterior distribution.
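That heuristic can be checked numerically with a simple frequentist method-of-moments correction (dividing the naive slope by the reliability ratio); this is a stand-in chosen for brevity, not the paper's method, and the generating model and sample sizes are assumptions. Across replicates with a true null, the corrected estimate stays centered at zero while its spread exceeds the naive estimate's.

```python
import math
import random
import statistics

def one_replicate(rng, n=500, err_sd=0.3):
    """One simulated study with a TRUE null. Returns (naive, corrected) slopes.
    Multiplicative-lognormal error model is an illustrative assumption."""
    X = [math.exp(rng.gauss(0.0, 1.0)) for _ in range(n)]
    W = [x * math.exp(rng.gauss(-err_sd**2 / 2, err_sd)) for x in X]
    Y = [rng.gauss(0.0, 1.0) for _ in range(n)]  # outcome unrelated to exposure

    def slope(x):
        mx, my = statistics.fmean(x), statistics.fmean(Y)
        return (sum((a - mx) * (b - my) for a, b in zip(x, Y))
                / sum((a - mx) ** 2 for a in x))

    naive = slope(W)
    # Method-of-moments correction: divide by the reliability ratio
    # lam = var(X) / var(W). Here lam uses the simulated true X; in
    # practice it would come from a validation or reliability study.
    lam = statistics.pvariance(X) / statistics.pvariance(W)
    return naive, naive / lam

rng = random.Random(42)
reps = [one_replicate(rng) for _ in range(400)]
naive_mean = statistics.fmean(r[0] for r in reps)
corr_mean = statistics.fmean(r[1] for r in reps)
naive_sd = statistics.stdev(r[0] for r in reps)
corr_sd = statistics.stdev(r[1] for r in reps)
```

Both means sit near zero, but the corrected estimates are more dispersed: the correction rescales each estimate by 1/lam &gt; 1, inflating variance without moving the center, which is the trade-off the heuristic argument describes.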
From a practical standpoint, the paper urges epidemiologists to shift focus away from point estimates and toward the full posterior (or confidence) interval, which conveys the uncertainty introduced by measurement error. It also calls for clearer training on the proper use of priors, emphasizing that priors should reflect genuine prior knowledge rather than be used to “force” a desired direction. By doing so, researchers can treat measurement error as a tractable quantitative problem rather than a qualitative nuisance, thereby encouraging broader and more appropriate application of correction techniques.
In summary, the simulation evidence, theoretical reasoning, and pedagogical recommendations together refute the myth that measurement‑error adjustment inherently creates spurious positive associations. Positive bias can arise only through indefensible prior manipulation; otherwise, adjustment faithfully preserves the null effect while appropriately widening the uncertainty bounds. This message—“don’t chase the point estimate, respect the gap that reflects variability”—constitutes the paper’s central take‑away for the epidemiologic community.