Latent Target Score Matching, with an application to Simulation-Based Inference
Denoising score matching (DSM) for training diffusion models may suffer from high variance at low noise levels. Target Score Matching (TSM) mitigates this when clean data scores are available, providing a low-variance objective. In many applications, however, the clean marginal score is inaccessible due to the presence of latent variables, and only the joint score can be evaluated. We propose Latent Target Score Matching (LTSM), an extension of TSM that leverages joint scores for low-variance supervision of the marginal score. While LTSM is most effective at low noise levels, mixing it with DSM ensures robustness across noise scales. Across simulation-based inference tasks, LTSM consistently reduces target variance and improves score accuracy and sample quality.
💡 Research Summary
The paper addresses a fundamental limitation of denoising score matching (DSM) for training diffusion models: the variance of the DSM regression target explodes as the diffusion time t approaches zero, degrading both score estimation and sample quality. While Target Score Matching (TSM) solves this problem by using the clean data score as a low‑variance target, TSM is only applicable when the clean marginal score ∇θ₀log p(θ₀) is directly available. In many realistic scenarios, especially those involving latent variables z, only the joint density p(θ,z) (or p(θ,z,x) in simulation‑based inference) is accessible, and the marginal score is intractable.
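To make the variance blow-up concrete, here is a small numerical sketch (not from the paper: a toy one-dimensional dataset and a VP-style schedule with σ(t)² = 1 − α(t)² are illustrative assumptions). The DSM regression target is the conditional score −ε/σ(t), whose variance is 1/σ(t)² and therefore diverges as t → 0:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
theta0 = rng.standard_normal(N)          # toy 1-D "clean" data (illustrative)

variances = []
for alpha in [0.5, 0.9, 0.99]:           # alpha(t) -> 1 as t -> 0 under a VP-SDE
    sigma = np.sqrt(1.0 - alpha**2)      # VP schedule: sigma(t)^2 = 1 - alpha(t)^2
    eps = rng.standard_normal(N)
    theta_t = alpha * theta0 + sigma * eps
    dsm_target = -eps / sigma            # DSM regresses onto the score of p(theta_t | theta0)
    variances.append(dsm_target.var())   # Var(-eps/sigma) = 1/sigma^2, diverging as t -> 0
    print(f"alpha={alpha:.2f}  sigma^2={sigma**2:.4f}  target variance≈{dsm_target.var():.1f}")
```

The printed variances grow roughly as 1/σ², which is the instability that TSM-style targets are designed to avoid.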
To bridge this gap, the authors propose Latent Target Score Matching (LTSM). They consider a variance‑preserving stochastic differential equation (VP‑SDE) that diffuses only the variables of interest θ while leaving the latent variables untouched. Under this diffusion, they prove the “Latent Target Score Identity” (Proposition 3.1): the marginal score at time t can be expressed as the conditional expectation of the joint score scaled by 1/α(t), i.e., ∇θ_t log p_t(θ_t) = (1/α(t)) E[∇θ₀ log p(θ₀, z) | θ_t]. This makes the joint score a valid (and, at small t, low‑variance) regression target for learning the marginal score.
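For intuition, the conditional-expectation relationship can be checked numerically on a toy linear-Gaussian latent-variable model (the model and all variable names below are illustrative assumptions, not from the paper). With z ~ N(0,1) and θ₀ | z ~ N(z,1), the joint score ∇θ₀ log p(θ₀, z) = −(θ₀ − z) is tractable, and because θ_t is Gaussian the true marginal score is linear in θ_t; a least-squares fit of the scaled joint-score target onto θ_t should therefore recover the true slope −1/(2α² + σ²):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
alpha = 0.9                              # alpha(t) at a moderate noise level
sigma = np.sqrt(1.0 - alpha**2)          # VP schedule: sigma^2 = 1 - alpha^2

# Toy latent-variable model (illustrative): z ~ N(0,1), theta0 | z ~ N(z,1),
# so marginally theta0 ~ N(0,2) and theta_t ~ N(0, 2*alpha^2 + sigma^2).
z = rng.standard_normal(N)
theta0 = z + rng.standard_normal(N)
theta_t = alpha * theta0 + sigma * rng.standard_normal(N)

# Joint score w.r.t. theta0 is available even when the marginal score is not:
# grad_theta0 log p(theta0, z) = -(theta0 - z); scale by 1/alpha per the identity.
ltsm_target = -(theta0 - z) / alpha

# The true marginal score at time t is -theta_t / (2*alpha^2 + sigma^2); the
# least-squares regression of the target on theta_t estimates the conditional
# expectation E[target | theta_t], which the identity says equals that score.
slope = float(theta_t @ ltsm_target) / float(theta_t @ theta_t)
true_slope = -1.0 / (2 * alpha**2 + sigma**2)
print(f"fitted slope={slope:.4f}  true score slope={true_slope:.4f}")
```

In a real LTSM setup the linear regressor would be replaced by a neural score network trained on the same targets; the toy model only serves to make the identity verifiable in closed form.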