Title: Machine Learning Hamiltonians are Accurate Energy-Force Predictors
ArXiv ID: 2602.16897
Date: 2026-02-18
Authors: **Author information was not provided in the source. (Check the original PDF or journal page if possible.)**
📝 Abstract
Recently, machine learning Hamiltonian (MLH) models have gained traction as fast approximations of electronic structures such as orbitals and electron densities, while also enabling direct evaluation of energies and forces from their predictions. However, despite their physical grounding, existing Hamiltonian models are evaluated mainly by reconstruction metrics, leaving it unclear how well they perform as energy-force predictors. We address this gap with a benchmark that computes energies and forces directly from predicted Hamiltonians. Within this framework, we propose QHFlow2, a state-of-the-art Hamiltonian model with an SO(2)-equivariant backbone and a two-stage edge update. QHFlow2 achieves $40\%$ lower Hamiltonian error than the previous best model with fewer parameters. Under direct evaluation on MD17/rMD17, it is the first Hamiltonian model to reach NequIP-level force accuracy while achieving up to $20\times$ lower energy MAE. On QH9, QHFlow2 reduces energy error by up to $20\times$ compared to MACE. Finally, we demonstrate that QHFlow2 exhibits consistent scaling behavior with respect to model capacity and data, and that improvements in Hamiltonian accuracy effectively translate into more accurate energy and force computations.
📄 Full Content
Recently developed machine learning Hamiltonians (MLH; Schütt et al., 2019; Unke et al., 2021a; Li et al., 2022; Gong et al., 2023; Yu et al., 2023a;b; Li et al., 2025; Luo et al., 2025; Xia et al., 2025) predict the Kohn-Sham Hamiltonian from molecular geometry. Unlike machine-learning interatomic potentials (MLIPs; Unke et al., 2021b), they predict an electronic-structure object that provides access to quantities such as the electron density while also enabling downstream evaluation of energies and forces.
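As a toy illustration of what a predicted Hamiltonian gives access to beyond energies, the sketch below builds a closed-shell density matrix from a Hamiltonian matrix `H` and overlap matrix `S` by solving the generalized eigenproblem. The function name and minimal setup are illustrative assumptions, not the actual pipeline of any model discussed here.

```python
import numpy as np
from scipy.linalg import eigh

def density_matrix(H, S, n_occ):
    """Closed-shell density matrix P = 2 * C_occ @ C_occ.T, where C solves
    the generalized eigenproblem H @ C = S @ C @ diag(eps)."""
    eps, C = eigh(H, S)           # eigenvalues ascending, C is S-orthonormal
    C_occ = C[:, :n_occ]          # lowest n_occ orbitals are doubly occupied
    return 2.0 * C_occ @ C_occ.T

# Sanity check: tr(P @ S) recovers the electron count 2 * n_occ.
```

From `P`, quantities like the real-space electron density follow by contracting with basis functions, which is the kind of electronic-structure access MLIPs do not provide.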
However, prior work has focused primarily on using predicted Hamiltonians to accelerate self-consistent field (SCF) convergence (Kim et al., 2025; Liu et al., 2025), leaving it unclear how accurately MLH models perform when energies and forces are computed directly from the predicted Hamiltonian. This question is particularly pressing given a recent study reporting that even strong Hamiltonian models yield far less accurate energy predictions than MLIP approaches (Kaniselvan et al., 2025).
Motivated by this gap, we re-examine the capabilities of MLH under direct evaluation and ask:
“How far have MLH models progressed, and can they achieve the energy-force accuracy required for practical atomistic simulations?”
We answer this question by first establishing a benchmark that directly evaluates energies and forces computed from predicted Hamiltonians. Our analysis reveals that existing Hamiltonian models do not match the accuracy of MLIPs, which serve as a practical reference for downstream applications.
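To make "direct evaluation" concrete, here is a minimal sketch of the orbital-energy part of such a benchmark, assuming a predicted Hamiltonian `H` and overlap `S` in the same orbital basis. The occupied-orbital energy sum below is only one term of the DFT total energy (nuclear repulsion and double-counting corrections are omitted), and the function names are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

def orbital_energies(H, S):
    """Orbital energies and coefficients from H @ C = S @ C @ diag(eps)."""
    eps, C = eigh(H, S)   # eps is returned in ascending order
    return eps, C

def band_energy(H, S, n_occ):
    """Sum of occupied orbital energies (closed-shell, double occupancy).
    A proxy for one term of the DFT total energy, not the full energy."""
    eps, _ = orbital_energies(H, S)
    return 2.0 * float(eps[:n_occ].sum())
```

Evaluating a model this way, rather than with matrix reconstruction metrics, exposes how Hamiltonian errors propagate into the eigenvalues that determine energies and forces.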
To close this gap, we propose QHFlow2, which improves both scalability and robustness. First, we redesign the equivariant architecture for scalability by adapting an SO(2)-equivariant backbone based on eSEN (Fu et al., 2025), whose efficient edge updates are well suited to modeling orbital interactions. Second, we introduce a two-stage pair update that improves the robustness of the off-diagonal Hamiltonian blocks to cutoff and radial-basis choices. In addition, we extend a standard Hamiltonian benchmark to directly evaluate energies and forces from predicted Hamiltonians, facilitating controlled comparisons with MLIP baselines. Finally, we analyze the scaling behavior of Hamiltonian error and downstream energy and force accuracy with respect to model size and training-set size.
Under the extended benchmark, we show that sufficiently accurate Hamiltonian prediction yields accurate energies and forces under direct evaluation. On MD17 and rMD17, QHFlow2 achieves up to 20× lower energy MAE than NequIP (Batzner et al., 2022), while reaching force accuracy comparable to MLIPs for the first time among Hamiltonian predictors. On QH9, QHFlow2 reduces energy error by up to 20× relative to MACE (Batatia et al., 2022) and improves upon EquiformerV2 (Liao et al., 2024). Notably, QHFlow2 further reduces Hamiltonian prediction error by 40-50% relative to the prior state of the art while using roughly half the parameters and achieving 2.8× faster inference. Source code: https://github.com/seongsukim-ml/QHFlow2
Finally, we demonstrate that QHFlow2 exhibits consistent scaling behavior with respect to model capacity and data, and that improvements in Hamiltonian accuracy effectively translate to downstream energy and force predictions. Together, these results establish Hamiltonian models as a viable approach for atomistic modeling, combining accurate energies, competitive forces, and access to electronic-structure objects within a single framework.
Overall, our contributions are as follows:
• We propose QHFlow2, a scalable MLH that modernizes an eSEN-based SO(2)-equivariant backbone for Hamiltonian prediction and introduces a two-stage edge update, improving robustness to cutoff and radial-basis choices while achieving the best accuracy with fewer parameters.
• We establish a unified benchmark that directly computes total energies and analytic forces from predicted Hamiltonians, and we construct an rMD17 benchmark with recomputed Hamiltonians, energies, and forces to facilitate controlled comparisons with MLIP baselines.
• We study model and data scaling in Hamiltonian prediction under direct evaluation, showing that increased scale consistently reduces Hamiltonian error and improves downstream energy and force accuracy.
Machine learning Hamiltonians (MLHs). Deep learning approaches predict Kohn-Sham Hamiltonians from molecular geometries using equivariant message passing with orbital-based matrix representations (Schütt et al., 2019; Unke et al., 2021a; Li et al., 2022; Gong et al., 2023). Later work improves scalability through more efficient equivariant operations and training objectives (Yu et al., 2023a;b; Li et al., 2025; Luo et al., 2025; Xia et al., 2025). Beyond regression, Kim et al. (2025) introduce a generative formulation that treats Hamiltonians as structured objects.
Existing studies mainly evaluate Hamiltonian models using matrix and orbital-energy metrics, while some evaluate their utility within DFT workflows,