A Quantile Variant of the EM Algorithm and Its Applications to Parameter Estimation with Interval Data
The expectation-maximization (EM) algorithm is a powerful computational technique for finding maximum likelihood estimates for parametric models when the data are not fully observed. The EM algorithm is best suited for situations where the expectation in each E-step and the maximization in each M-step are straightforward. A difficulty with the implementation of the EM algorithm is that each E-step requires the integration of the log-likelihood function in closed form. The explicit integration can be avoided by using what is known as the Monte Carlo EM (MCEM) algorithm, which uses a random sample to estimate the integral at each E-step. However, the MCEM often converges to the integral quite slowly, and its convergence behavior can be unstable, which imposes a computational burden. In this paper, we propose what we refer to as the quantile variant of the EM (QEM) algorithm. We prove that the proposed QEM method has an accuracy of $O(1/K^2)$, while the MCEM method has an accuracy of $O_p(1/\sqrt{K})$. Thus, the proposed QEM method possesses faster and more stable convergence properties than the MCEM algorithm. The improved performance is illustrated through numerical studies. Several practical examples illustrating its use in interval-censored data problems are also provided.
Research Summary
This paper proposes a quantile variant of the expectation-maximization (QEM) algorithm to address limitations in parameter estimation for models with interval data. The EM algorithm is a powerful method for finding maximum likelihood estimates when data are not fully observed, but it requires integrating the log-likelihood function during each E-step, which can be computationally intensive. To mitigate this issue, the Monte Carlo EM (MCEM) algorithm uses random samples to estimate integrals at each step; however, MCEM often converges slowly and may exhibit unstable behavior.
The proposed QEM method aims to improve upon these limitations by offering faster and more stable convergence properties compared to MCEM. Specifically, the paper demonstrates that QEM achieves an accuracy of $O(1/K^2)$, which is superior to the $O_p(1/\sqrt{K})$ accuracy of MCEM. This improvement in performance is substantiated through numerical studies.
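To make the contrast concrete, the sketch below compares the two ways of approximating an expectation $E[h(X)]$ with $K$ function evaluations: a deterministic quantile (midpoint) rule, evaluating $h$ at the $(k-0.5)/K$ quantiles, versus a plain Monte Carlo average over $K$ random draws. This is a minimal illustration of the underlying idea only, not the authors' implementation; the Exp(1) distribution, the choice $h(x)=x^2$ (so the true value is $E[X^2]=2$), and all function names are illustrative assumptions. Note also that the stated $O(1/K^2)$ rate presumes sufficient smoothness of the integrand.

```python
import math
import random

def quantile_estimate(h, quantile_fn, K):
    """Midpoint quantile rule: average h at the (k - 0.5)/K quantiles (deterministic)."""
    return sum(h(quantile_fn((k - 0.5) / K)) for k in range(1, K + 1)) / K

def mc_estimate(h, sampler, K, rng):
    """Plain Monte Carlo: average h over K random draws (O_p(1/sqrt(K)) error)."""
    return sum(h(sampler(rng)) for _ in range(K)) / K

# Illustrative target: E[X^2] = 2 for X ~ Exp(1).
h = lambda x: x * x
exp_quantile = lambda u: -math.log(1.0 - u)   # inverse CDF of Exp(1)

rng = random.Random(0)
K = 1000
q = quantile_estimate(h, exp_quantile, K)
m = mc_estimate(h, lambda r: r.expovariate(1.0), K, rng)
```

With the same budget of $K$ evaluations, the quantile estimate `q` is deterministic and typically sits much closer to the true value 2 than the noisy Monte Carlo estimate `m`, mirroring the accuracy comparison above.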
The authors provide several practical examples illustrating the application of QEM in interval-censored data problems. These examples highlight how QEM can be effectively used for parameter estimation in scenarios where traditional methods might struggle due to computational inefficiencies or instability issues. The paper thus contributes a valuable tool for improving the accuracy and efficiency of parameter estimation in models with incomplete or censored data, particularly those involving interval observations.