Event-driven type design for clinical trials with recurrent events
In randomized clinical trials with a standard survival outcome, it is common practice to follow patients until a prespecified number of events has been observed, a design known as the event-driven trial. The event-driven design ensures that the target power to detect the target hazard ratio at a specified type I error rate is achieved regardless of how other quantities are specified. To understand treatment effects in chronic conditions, the analysis of recurrent events has gained popularity in randomized controlled trials, particularly large-scale confirmatory trials. In the absence of within-subject correlation among multiple events, a similar event-driven design can be employed for recurrent event outcomes. In the presence of within-subject correlation, however, one must model the correlation among recurrent events when evaluating power and setting the sample size. Information useful for modeling the within-subject correlation is limited at the design stage, and failing to account for the correlation properly may lead to underpowered studies. We propose an event-driven type design for recurrent event outcomes. Our method ensures the target power for the target treatment effect, regardless of the specification of other quantities, by monitoring the robust variance under the marginal rates/means model in a blinded manner. Simulation studies of the operating characteristics showed that the proposed blinded monitoring procedure controlled the power well, so that the test attained the target power without serious inflation of the type I error rate. Furthermore, we illustrate the proposed method using a real clinical trial dataset.
💡 Research Summary
This paper extends the well‑known event‑driven design, traditionally used for single‑event time‑to‑event outcomes, to clinical trials whose primary endpoint is a recurrent event. In a classic event‑driven trial, the analysis is performed once a pre‑specified total number of events has occurred, guaranteeing the desired power for a given treatment effect regardless of nuisance parameters. Recurrent events, however, often exhibit within‑subject correlation, violating the Poisson‑type (independent‑increment) assumption that underlies the standard Andersen‑Gill (AG) model. Ignoring this correlation can inflate type I error or reduce power.
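The variance inflation that an independent-increment (Poisson) working model misses can be illustrated with a two-group comparison. The sketch below is not the paper's estimator; it is a simplified delta-method version of a sandwich-type variance for the log rate ratio, computed from per-subject event counts with equal follow-up, where the model-based standard error plugs in the Poisson assumption Var(Y) = E(Y) and the robust one uses the empirical count variance. The function name `log_rate_ratio_se` is illustrative.

```python
import math
from statistics import mean, pvariance

def log_rate_ratio_se(y_treat, y_ctrl):
    """Model-based (Poisson) vs robust (sandwich-type) standard errors
    for the log rate ratio, from per-subject event counts assuming
    equal follow-up. For this saturated two-group working model the
    sandwich variance reduces to a delta-method form in which the
    empirical count variance replaces the Poisson variance."""
    def per_group(y):
        n, m = len(y), mean(y)
        naive = 1.0 / (n * m)                # assumes Var(Y) = E(Y)
        robust = pvariance(y) / (n * m * m)  # empirical Var(Y)
        return naive, robust
    naive1, robust1 = per_group(y_treat)
    naive0, robust0 = per_group(y_ctrl)
    beta = math.log(mean(y_treat) / mean(y_ctrl))
    return beta, math.sqrt(naive1 + naive0), math.sqrt(robust1 + robust0)
```

With over-dispersed counts the robust standard error exceeds the model-based one, so a Wald test using the naive variance is anti-conservative.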
The authors propose an “event‑driven type” design based on the marginal rates model, which specifies only the mean event rate as a function of treatment (log‑rate ratio β₀) and leaves the full stochastic structure unspecified. To accommodate within‑subject dependence, they employ the robust sandwich variance estimator of Lin et al. (2000). The key innovation is a blinded continuous monitoring procedure: as data accrue, the robust variance v̂² = n⁻¹Â⁻¹Σ̂Â⁻¹ is estimated non‑parametrically from pooled data without unblinding treatment assignments. The trial stops when the cumulative number of observed events reaches a threshold L that is calculated solely from the anticipated treatment effect, the two‑sided significance level α, and the desired power 1 − γ. At that point, a Wald test (β̂/v̂) is performed. Because L depends only on β₀, the design retains the hallmark of classic event‑driven trials: the target power is achieved irrespective of the true event‑rate functions, censoring distribution, or the magnitude of within‑subject correlation.
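The threshold calculation can be sketched as follows. This is an assumption-laden illustration, not the paper's formula: it uses the classic Schoenfeld-type event count for a log effect size β₀, which has the same property the summary describes (L depends only on the effect size, α, and the target power); the paper's exact threshold for recurrent events may differ in detail. The function name `required_events` is hypothetical.

```python
from statistics import NormalDist

def required_events(beta0, alpha=0.05, power=0.80, alloc=0.5):
    """Event-count threshold for detecting a log rate ratio beta0 with
    two-sided level alpha and power 1 - gamma, where alloc is the
    fraction randomized to the experimental arm. Schoenfeld-type form;
    the paper's exact threshold may differ."""
    z = NormalDist().inv_cdf
    z_a = z(1 - alpha / 2)  # two-sided significance
    z_b = z(power)          # target power 1 - gamma
    return (z_a + z_b) ** 2 / (alloc * (1 - alloc) * beta0 ** 2)
```

Under these assumptions, a rate ratio of 0.75 (β₀ = log 0.75 ≈ −0.288) at α = 0.05, 80% power, and 1:1 allocation gives roughly 380 events.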
Simulation studies explore a wide range of scenarios: Poisson, negative‑binomial, and frailty‑based mixed‑Poisson processes; varying degrees of over‑dispersion; different accrual and follow‑up periods; and both balanced and unbalanced allocation ratios. Across all settings, the blinded monitoring procedure consistently attains the nominal power while keeping the type I error close to the nominal α. In contrast, a naïve event‑driven design that assumes independent increments suffers substantial power loss when correlation is present, and may require considerably longer study duration to reach the target number of events.
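A frailty-based mixed-Poisson scenario like those in the simulations can be sketched with the standard library alone. This is a minimal illustration, not the authors' simulation code: each subject gets a gamma frailty with mean 1, and event counts over follow-up are Poisson given the frailty, which yields negative-binomial-type over-dispersion (Var = μ + frailty_var·μ²); setting `frailty_var = 0` recovers the independent-increment Poisson case.

```python
import random
from statistics import mean, pvariance

def simulate_counts(n, rate=1.0, followup=2.0, frailty_var=0.5, seed=1):
    """Per-subject event counts from a gamma-frailty mixed-Poisson
    process: Z_i ~ Gamma(mean 1, variance frailty_var), and given Z_i
    events arrive as Poisson with rate Z_i * rate over followup."""
    rng = random.Random(seed)
    counts = []
    for _ in range(n):
        if frailty_var > 0:
            # shape 1/v, scale v -> mean 1, variance v
            z = rng.gammavariate(1.0 / frailty_var, frailty_var)
        else:
            z = 1.0
        mu = z * rate * followup
        # Poisson(mu) draw via exponential inter-event gaps
        t, k = 0.0, 0
        while True:
            t += rng.expovariate(1.0)
            if t > mu:
                break
            k += 1
        counts.append(k)
    return counts
```

With the defaults the counts have mean about rate × followup = 2 but variance near 2 + 0.5·2² = 4; this excess variance is what a naive independent-increment analysis ignores and the robust variance absorbs.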
The methodology is illustrated with a real chronic‑disease trial dataset. The investigators pre‑specified a target number of events, applied the blinded monitoring rule, and stopped the trial once the threshold was met. The final analysis using the robust Wald test yielded a treatment‑effect estimate and confidence interval comparable to those obtained by a conventional AG model analysis, but with fewer enrolled patients and a shorter calendar time, demonstrating practical efficiency gains.
Key contributions of the paper are: (1) formulation of an event‑driven design framework applicable to recurrent‑event outcomes; (2) integration of the robust sandwich variance to handle unknown within‑subject dependence while preserving blinding; (3) derivation of a simple event‑count threshold that depends only on the planned effect size, thus eliminating the need for detailed prior knowledge of event‑rate distributions. Limitations include reliance on asymptotic normality of the robust estimator (which may be questionable in very small samples) and potential conservatism of the event‑count threshold under extreme over‑dispersion. Future work could explore Bayesian adaptive updating of the threshold, extensions to multi‑arm or non‑inferiority settings, and real‑time online algorithms for continuous data streams.