Locally most powerful sequential tests of a simple hypothesis vs one-sided alternatives


Let $X_1,X_2,\dots$ be a discrete-time stochastic process with distribution $P_\theta$, $\theta\in\Theta$, where $\Theta$ is an open subset of the real line. We consider the problem of testing a simple hypothesis $H_0:\theta=\theta_0$ versus a composite one-sided alternative $H_1:\theta>\theta_0$, where $\theta_0\in\Theta$ is a fixed point. The main goal of this article is to characterize the structure of locally most powerful sequential tests for this problem. For any sequential test $(\psi,\phi)$ with a (randomized) stopping rule $\psi$ and a (randomized) decision rule $\phi$, let $\alpha(\psi,\phi)$ be the type I error probability, $\dot\beta_0(\psi,\phi)$ the derivative, at $\theta=\theta_0$, of the power function, and $\mathscr N(\psi)$ the average sample number of the test $(\psi,\phi)$. We are then concerned with maximizing $\dot\beta_0(\psi,\phi)$ in the class of all sequential tests such that $$\alpha(\psi,\phi)\leq \alpha\quad\text{and}\quad \mathscr N(\psi)\leq \mathscr N,$$ where $\alpha\in[0,1]$ and $\mathscr N\geq 1$ are given restriction levels. It is supposed that $\mathscr N(\psi)$ is calculated under some fixed distribution of the process $X_1,X_2,\dots$ (not necessarily coinciding with any of the $P_\theta$). The structure of optimal sequential tests is characterized.
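A standard route to such doubly constrained optimization problems, sketched here as an assumption rather than a claim about the paper's actual proof, is a Lagrangian reduction: absorb the two constraints into the objective with nonnegative multipliers and characterize tests maximizing the unconstrained functional.

```latex
% Hedged sketch (not taken from the abstract): Lagrangian form of the
% constrained problem. Here \lambda_1,\lambda_2 \ge 0 are hypothetical
% multipliers attached to the type-I error and ASN constraints.
\[
  L(\psi,\phi;\lambda_1,\lambda_2)
    \;=\; \dot\beta_0(\psi,\phi)
      \;-\; \lambda_1\,\alpha(\psi,\phi)
      \;-\; \lambda_2\,\mathscr N(\psi).
\]
% If (\psi^*,\phi^*) maximizes L for some \lambda_1,\lambda_2 \ge 0 and
% attains the constraints with equality,
%   \alpha(\psi^*,\phi^*) = \alpha,  \qquad \mathscr N(\psi^*) = \mathscr N,
% then it also solves the original constrained problem, since for any
% competing test (\psi,\phi) satisfying the constraints,
%   \dot\beta_0(\psi,\phi) \le L(\psi,\phi) + \lambda_1\alpha + \lambda_2\mathscr N
%                          \le L(\psi^*,\phi^*) + \lambda_1\alpha + \lambda_2\mathscr N
%                          = \dot\beta_0(\psi^*,\phi^*).
\]
```

This is the usual mechanism by which the structure of optimal sequential tests is derived in such settings; the specific form of the optimal stopping and decision rules is the subject of the paper itself.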


💡 Research Summary

The paper addresses the classical sequential testing problem for a discrete-time stochastic process $X_1,X_2,\dots$ whose distribution depends on a real-valued parameter $\theta$. The null hypothesis is simple, $H_0:\theta=\theta_0$, while the alternative is one-sided, $H_1:\theta>\theta_0$. A sequential test is represented by a (possibly randomized) stopping rule $\psi$ and a (possibly randomized) decision rule $\phi$. Three performance measures are introduced: the type-I error probability $\alpha(\psi,\phi)$, the average sample number (ASN) $\mathscr N(\psi)$ evaluated under a fixed reference distribution (which need not be any $P_\theta$), and the derivative of the power function at the null, $\dot\beta_0(\psi,\phi)=\left.\frac{d}{d\theta}\beta_\theta(\psi,\phi)\right|_{\theta=\theta_0}$. The objective is to maximize $\dot\beta_0$ subject to the constraints $\alpha(\psi,\phi)\le\alpha$ and $\mathscr N(\psi)\le\mathscr N$, where $\alpha\in[0,1]$ and $\mathscr N\ge 1$ are prescribed levels.

