Optimal sequential testing of two simple hypotheses in presence of control variables


Suppose that at any stage of a statistical experiment a control variable $X$ that affects the distribution of the observed data $Y$ can be used. The distribution of $Y$ depends on some unknown parameter $\theta$, and we consider the classical problem of testing a simple hypothesis $H_0: \theta=\theta_0$ against a simple alternative $H_1: \theta=\theta_1$ allowing the data to be controlled by $X$, in the following sequential context. The experiment starts with assigning a value $X_1$ to the control variable and observing $Y_1$ as a response. After some analysis, we choose another value $X_2$ for the control variable, and observe $Y_2$ as a response, etc. It is supposed that the experiment eventually stops, and at that moment a final decision in favour of $H_0$ or $H_1$ is to be taken. In this article, our aim is to characterize the structure of optimal sequential procedures, based on this type of data, for testing a simple hypothesis against a simple alternative.


💡 Research Summary

The paper addresses a sequential hypothesis‑testing problem in which the experimenter can actively choose a control variable X at each stage, thereby influencing the distribution of the observed response Y. The statistical model is defined by a conditional density fθ(y|x) that depends on an unknown parameter θ, and the goal is to test the simple null hypothesis H₀: θ=θ₀ against the simple alternative H₁: θ=θ₁. The sequential procedure proceeds as follows: at stage n the experimenter selects a value Xₙ, observes Yₙ drawn from fθ(·|Xₙ), updates the accumulated information, and then decides either to stop and make a final decision or to continue with a new control value Xₙ₊₁. The design must keep the type‑I error probability ≤α and the type‑II error probability ≤β while minimizing the expected sample size under both hypotheses.
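The stage-by-stage loop described above can be sketched in code. The following is a minimal illustrative simulation, not the paper's optimal procedure: it assumes a Bernoulli response model with a logistic link (an assumption made here for concreteness), uses a heuristic control rule (pick the x that maximizes the KL divergence between the two response distributions), and stops with classical SPRT-style thresholds.

```python
import math
import random

# Assumed illustrative model (not from the paper):
# Y | X = x ~ Bernoulli(p_theta(x)) with a logistic link in theta * x.
def p(theta, x):
    return 1.0 / (1.0 + math.exp(-theta * x))

def log_lr(y, x, th0, th1):
    """Log-likelihood ratio log f_th1(y|x) / f_th0(y|x) for one observation."""
    p0, p1 = p(th0, x), p(th1, x)
    return math.log(p1 / p0) if y == 1 else math.log((1 - p1) / (1 - p0))

def kl(p_, q_):
    """KL divergence between Bernoulli(p_) and Bernoulli(q_)."""
    if p_ in (0.0, 1.0) or q_ in (0.0, 1.0):
        return float("inf")
    return p_ * math.log(p_ / q_) + (1 - p_) * math.log((1 - p_) / (1 - q_))

def choose_control(th0, th1, controls):
    # Heuristic control rule (a stand-in for the optimal one): pick the x
    # that best separates the two hypothesized response distributions.
    return max(controls, key=lambda x: kl(p(th1, x), p(th0, x)))

def sequential_test(true_theta, th0=0.0, th1=1.0, alpha=0.05, beta=0.05,
                    controls=(-2.0, -1.0, 0.0, 1.0, 2.0), rng=random):
    a = math.log(beta / (1 - alpha))   # lower stopping threshold
    b = math.log((1 - beta) / alpha)   # upper stopping threshold
    llr, n = 0.0, 0
    while a < llr < b:
        x = choose_control(th0, th1, controls)       # assign control X_n
        y = 1 if rng.random() < p(true_theta, x) else 0  # observe Y_n
        llr += log_lr(y, x, th0, th1)                # update the statistic
        n += 1
    return ("H1" if llr >= b else "H0", n)
```

Under the assumed model the loop mirrors the description above: assign a control value, observe the response, update the accumulated log-likelihood ratio, and either stop with a terminal decision or continue with a new control value.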

To formalize the trade‑off between error probabilities and sample size, the authors introduce a Lagrangian loss function of the form

L(δ) = Eθ₀N + Eθ₁N + λ₀α(δ) + λ₁β(δ),

where N is the (random) sample size, α(δ) and β(δ) are the type‑I and type‑II error probabilities of the procedure δ, and λ₀, λ₁ ≥ 0 are Lagrange multipliers. Minimizing this unconstrained loss, and then tuning the multipliers so that the error constraints hold with equality, yields procedures that are optimal for the original constrained problem.
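For any concrete test, the ingredients of such a Lagrangian trade‑off (expected sample sizes and error probabilities) can be estimated by simulation. The sketch below assumes the standard weighted form Eθ₀N + Eθ₁N + λ₀α + λ₁β (an assumption here, since the summary is truncated) and evaluates it by Monte Carlo for a plain SPRT on Bernoulli data without control variables, just to make the loss concrete.

```python
import math
import random

def sprt_bernoulli(true_p, p0=0.3, p1=0.7, a=-2.2, b=2.2, rng=random):
    """Run one SPRT sample path; return (decided H1?, sample size)."""
    llr, n = 0.0, 0
    while a < llr < b:
        y = 1 if rng.random() < true_p else 0
        llr += math.log(p1 / p0) if y else math.log((1 - p1) / (1 - p0))
        n += 1
    return llr >= b, n

def lagrangian_loss(lam0=10.0, lam1=10.0, reps=2000, seed=1):
    """Monte Carlo estimate of E0[N] + E1[N] + lam0*alpha + lam1*beta."""
    rng = random.Random(seed)
    rej0 = n0 = 0
    for _ in range(reps):            # sample paths under H0 (p = 0.3)
        d, n = sprt_bernoulli(0.3, rng=rng)
        rej0 += d                    # type-I errors: rejecting a true H0
        n0 += n
    acc1 = n1 = 0
    for _ in range(reps):            # sample paths under H1 (p = 0.7)
        d, n = sprt_bernoulli(0.7, rng=rng)
        acc1 += (not d)              # type-II errors: accepting a false H0
        n1 += n
    alpha, beta = rej0 / reps, acc1 / reps
    return n0 / reps + n1 / reps + lam0 * alpha + lam1 * beta
```

Raising λ₀ or λ₁ penalizes the corresponding error more heavily, which pushes the minimizer toward wider stopping thresholds and larger expected sample sizes; this is exactly the trade‑off the multipliers are meant to tune.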

