Cutting Through the Noise: On-the-fly Outlier Detection for Robust Training of Machine Learning Interatomic Potentials

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

The accuracy of machine learning interatomic potentials suffers from reference data that contains numerical noise. Often originating from unconverged or inconsistent electronic-structure calculations, this noise is challenging to identify. Existing mitigation strategies, such as manual filtering or iterative refinement of outliers, require either substantial expert effort or multiple expensive retraining cycles, making them difficult to scale to large datasets. Here, we introduce an on-the-fly outlier detection scheme that automatically down-weights noisy samples without requiring additional reference calculations. By tracking the loss distribution via an exponential moving average, this unsupervised method identifies outliers throughout a single training run. We show that this approach prevents overfitting and matches the performance of iterative refinement baselines with significantly reduced overhead. The method's effectiveness is demonstrated by recovering accurate physical observables for liquid water, including diffusion coefficients, from unconverged reference data. Furthermore, we validate its scalability by training a foundation model for organic chemistry on the SPICE dataset, where it reduces energy errors by a factor of three. This framework provides a simple, automated solution for training robust models on imperfect datasets of any size.


💡 Research Summary

Machine‑learning interatomic potentials (MLIPs) have become essential tools for accelerating atomistic simulations by replacing costly electronic‑structure calculations with fast predictions of energies, forces, and stresses. However, the quality of an MLIP is fundamentally limited by the accuracy of its training data. In practice, reference datasets often contain numerical noise arising from unconverged self‑consistent field (SCF) cycles, stochastic quantum‑Monte‑Carlo methods, or inconsistent calculation settings across many configurations. This noise is especially problematic for large‑scale “foundation models” trained on millions of structures, where manual curation or iterative refinement of outliers becomes infeasible due to the required expert time and repeated expensive training cycles.

The authors propose a simple yet powerful on‑the‑fly outlier detection scheme that automatically down‑weights noisy samples during a single training run, eliminating the need for additional reference calculations or manual filtering. The method tracks the distribution of batch losses using an exponential moving average (EMA) of the mean (µ) and variance (σ²). For each configuration i in the current batch, a z‑score is computed as (L_i – µ)/σ, where L_i is the per‑sample loss. Samples whose z‑score exceeds a predefined threshold (typically z_t = 3) are deemed outliers. Their influence is reduced by assigning a weight w_i derived from a smooth sigmoid‑like function based on the Gaussian cumulative distribution function:

w_i = ½ [1 − erf((z_i − z_t)/√2)] = 1 − Φ(z_i − z_t)
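
The mechanism can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the paper's implementation: the class name, the EMA decay `beta`, and the initialization of the running statistics from the first batch are all illustrative choices.

```python
import math

class OutlierDownweighter:
    """Illustrative sketch of on-the-fly loss-based outlier down-weighting.

    Tracks an exponential moving average (EMA) of the per-sample loss mean
    and variance, flags samples whose z-score exceeds a threshold z_t, and
    softens their influence with a Gaussian-CDF-tail weight.
    """

    def __init__(self, beta=0.99, z_t=3.0):
        self.beta = beta   # EMA decay (assumed hyperparameter)
        self.z_t = z_t     # z-score threshold for flagging outliers
        self.mu = None     # EMA estimate of the mean loss
        self.var = None    # EMA estimate of the loss variance

    def weights(self, losses):
        # Bootstrap the running statistics from the first batch seen.
        if self.mu is None:
            self.mu = sum(losses) / len(losses)
            self.var = (
                sum((L - self.mu) ** 2 for L in losses) / len(losses) or 1e-12
            )
        w = []
        for L in losses:
            sigma = math.sqrt(self.var)
            z = (L - self.mu) / sigma
            if z > self.z_t:
                # Smooth down-weighting: w = 1 - Phi(z - z_t),
                # written via the complementary error function.
                w_i = 0.5 * math.erfc((z - self.z_t) / math.sqrt(2.0))
            else:
                w_i = 1.0
            w.append(w_i)
            # Update the EMA of mean and variance with this sample's loss.
            self.mu = self.beta * self.mu + (1 - self.beta) * L
            self.var = self.beta * self.var + (1 - self.beta) * (L - self.mu) ** 2
        return w
```

In a training loop, these weights would simply multiply each sample's loss before backpropagation, so flagged outliers contribute little to the gradient while inliers pass through with weight 1.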

