Retrieval of Experiments with Sequential Dirichlet Process Mixtures in Model Space
We address the problem of retrieving relevant experiments given a query experiment, motivated by the public databases of datasets in molecular biology and other experimental sciences, and the need for scientists to relate to earlier work on the level of actual measurement data. Since experiments are inherently noisy and databases ever-accumulating, we argue that a retrieval engine should possess two particular characteristics. First, it should compare models learnt from the experiments rather than the raw measurements themselves: this allows incorporating experiment-specific prior knowledge to suppress noise effects and focus on what is important. Second, it should be updated sequentially from newly published experiments, without explicitly storing either the measurements or the models, which is critical for saving storage space and protecting data privacy: this promotes lifelong learning. We formulate the retrieval as a "supermodelling" problem, of sequentially learning a model of the set of posterior distributions, represented as sets of MCMC samples, and suggest the use of a Particle-Learning-based sequential Dirichlet process mixture (DPM) for this purpose. The relevance measure for retrieval is derived from the supermodel through the mixture representation. We demonstrate the performance of the proposed retrieval method on simulated data and molecular biological experiments.
💡 Research Summary
The paper tackles the problem of retrieving relevant scientific experiments given a query experiment, a task motivated by the ever‑growing public repositories of molecular‑biology datasets. Rather than comparing raw measurements, the authors propose to compare Bayesian models learned from each experiment, thereby incorporating experiment‑specific prior knowledge and avoiding the storage of raw data, which addresses both noise robustness and privacy concerns.
Each experiment is represented by a set of posterior samples obtained via MCMC, denoted \(M_i = \{\theta_i^{(j)}\}_{j=1}^{n_i}\). The collection of all posterior samples across experiments is treated as a single "super-model" that captures the distribution of model parameters in a unified latent space. To model this distribution, the authors employ a Dirichlet Process Mixture (DPM) of multivariate Gaussian components, which automatically determines the number of clusters and can represent multimodal posteriors that arise from small or noisy datasets.
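As a rough illustration of the supermodel idea, the sketch below pools synthetic "posterior samples" from three hypothetical experiments and fits a truncated Dirichlet-process Gaussian mixture over them. It uses scikit-learn's `BayesianGaussianMixture` as a convenient batch stand-in for the paper's sequential inference; the experiments, their parameter values, and all settings are illustrative assumptions, not the paper's data.

```python
# Sketch: pool MCMC posterior samples from several experiments and fit a
# truncated Dirichlet-process Gaussian mixture as the "supermodel".
# BayesianGaussianMixture is a batch stand-in for the paper's sequential
# Particle-Learning inference; the data below are synthetic.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Hypothetical experiments: each contributes posterior samples in R^2
# centred on its own "true" parameter value. The first two overlap, so
# the supermodel may merge them into a shared component.
experiments = [rng.normal(loc=mu, scale=0.3, size=(200, 2))
               for mu in ([0, 0], [0.1, 0.1], [3, 3])]
pooled = np.vstack(experiments)

supermodel = BayesianGaussianMixture(
    n_components=10,                                  # truncation level
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    random_state=0,
).fit(pooled)

# Effective number of clusters: components with non-negligible weight.
print(int(np.sum(supermodel.weights_ > 0.01)))
```

The truncation level caps the number of components, but the Dirichlet-process prior shrinks the weights of unneeded ones toward zero, which is what lets the mixture pick its own effective cluster count.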
A key methodological contribution is the use of Particle Learning (PL) to fit the DPM sequentially. In PL, a set of particles \(\{Z_t^{(i)}\}_{i=1}^{N}\) maintains cluster allocations, sufficient statistics (means and covariances), and component counts. When a new batch of MCMC samples (corresponding to a newly published experiment) arrives, each particle updates its allocation by evaluating the predictive multivariate-t likelihood for existing clusters versus creating a new one, then resamples particles proportionally to these predictive probabilities. This scheme yields order-independent inference, requires only a single pass over the data, and eliminates the need to retain earlier samples or models.
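To make the particle update concrete, here is a heavily simplified one-dimensional sketch of a PL-style sequential DPM. It assumes Gaussian clusters with known observation variance and a Gaussian prior on cluster means, so the posterior predictive is Gaussian rather than the multivariate t of the full conjugate model; all constants and names are illustrative, not the paper's.

```python
# Minimal 1-D sketch of a Particle-Learning-style sequential DPM update.
# Assumptions (not the paper's exact model): clusters are Gaussians with
# known variance SIGMA, cluster means have a N(0, TAU0) prior, and a CRP
# with concentration ALPHA governs allocations.
import copy
import math
import random

random.seed(0)
ALPHA, TAU0, SIGMA = 1.0, 10.0, 1.0  # DP concentration, prior var, obs var

def normal_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def predictive(x, n, s):
    """Posterior-predictive density of x for a cluster with count n, sum s."""
    post_var = 1.0 / (1.0 / TAU0 + n / SIGMA)
    post_mean = post_var * (s / SIGMA)
    return normal_pdf(x, post_mean, post_var + SIGMA)

class Particle:
    """One hypothesis about the clustering: counts and sums per cluster."""
    def __init__(self):
        self.counts, self.sums = [], []
    def _weights(self, x):
        n_tot = sum(self.counts)
        w = [c / (n_tot + ALPHA) * predictive(x, c, s)
             for c, s in zip(self.counts, self.sums)]
        w.append(ALPHA / (n_tot + ALPHA) * predictive(x, 0, 0.0))  # new cluster
        return w
    def predictive_mix(self, x):
        return sum(self._weights(x))
    def allocate(self, x):
        w = self._weights(x)
        k = random.choices(range(len(w)), weights=w)[0]
        if k == len(self.counts):                      # open a new cluster
            self.counts.append(0); self.sums.append(0.0)
        self.counts[k] += 1; self.sums[k] += x          # update suff. stats

def update(particles, x):
    """One PL step: resample by predictive weight, then propagate."""
    w = [p.predictive_mix(x) for p in particles]
    chosen = random.choices(particles, weights=w, k=len(particles))
    particles = [copy.deepcopy(p) for p in chosen]      # break aliasing
    for p in particles:
        p.allocate(x)
    return particles

particles = [Particle() for _ in range(50)]
stream = ([random.gauss(0, 1) for _ in range(40)]
          + [random.gauss(8, 1) for _ in range(40)])
for x in stream:
    particles = update(particles, x)
print(min(len(p.counts) for p in particles))  # clusters found per particle
```

Note the single pass over the stream: once a sample has been absorbed into each particle's sufficient statistics, it is never revisited, which is what allows the raw samples to be discarded.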
For retrieval, the learned super-model provides a natural similarity measure. Given a query experiment \(E_{e+1}\) with posterior samples \(M_{e+1}\), the relevance score for a stored experiment \(E_\ell\) is derived from the mixture allocations; a representative form is the probability, averaged over sample pairs, that a query sample and a stored sample are allocated to the same DPM component,
\[
\mathrm{rel}(E_\ell \mid E_{e+1}) \;=\; \frac{1}{n_\ell\, n_{e+1}} \sum_{j=1}^{n_{e+1}} \sum_{j'=1}^{n_\ell} P\!\left( c\!\left(\theta_{e+1}^{(j)}\right) = c\!\left(\theta_\ell^{(j')}\right) \right),
\]
where \(c(\cdot)\) denotes the mixture component to which a sample is assigned. Stored experiments are then returned in decreasing order of this score, so experiments whose posteriors occupy the same regions of model space as the query rank highest.
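A co-allocation relevance score of this kind can be sketched with soft component responsibilities from a fitted DP mixture standing in for the posterior same-component probability. Everything here (experiment names, data, the use of `BayesianGaussianMixture`) is an illustrative assumption, not the paper's implementation.

```python
# Sketch: rank stored experiments by how often their posterior samples
# share a supermodel component with the query's samples. Soft component
# responsibilities approximate the same-component probability; the data
# and experiment names are synthetic.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(1)
stored = {"E1": rng.normal(0.0, 0.3, size=(150, 2)),
          "E2": rng.normal(4.0, 0.3, size=(150, 2))}
query = rng.normal(0.1, 0.3, size=(150, 2))        # resembles E1

supermodel = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(np.vstack(list(stored.values()) + [query]))

def relevance(model, query_samples, stored_samples):
    """Mean over sample pairs of the (soft) same-component probability."""
    rq = model.predict_proba(query_samples).mean(axis=0)   # avg responsibilities
    rl = model.predict_proba(stored_samples).mean(axis=0)
    return float(rq @ rl)

scores = {name: relevance(supermodel, m, query) for name, m in stored.items()}
print(max(scores, key=scores.get))                 # E1 should outrank E2
```

Because the score only needs per-component responsibility averages, a database can retain one short vector per stored experiment instead of the experiment's raw samples.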