Beyond Optimization: Harnessing Quantum Annealer Dynamics for Machine Learning
Quantum annealing is typically regarded as a tool for combinatorial optimization, but its coherent dynamics also offer potential for machine learning. We present a model that encodes classical data into an Ising Hamiltonian, evolves it on a quantum annealer, and uses the resulting probability distributions as feature maps for classification. Experiments on a D-Wave Advantage quantum annealer with the Digits dataset, together with simulations on MNIST, demonstrate that short annealing times yield higher classification accuracy, while longer annealing times reduce accuracy but lower sampling costs. We introduce the participation ratio as a measure of the effective model size and show its strong correlation with generalization.
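The participation ratio mentioned above is a standard measure of how many states a probability distribution effectively occupies, defined as PR = 1 / Σᵢ pᵢ². A minimal sketch (the function name and example distributions are illustrative, not from the paper):

```python
import numpy as np

def participation_ratio(probs):
    """Effective number of significantly occupied states: PR = 1 / sum_i p_i^2.

    A uniform distribution over N states gives PR = N; a distribution
    concentrated on a single state gives PR close to 1.
    """
    p = np.asarray(probs, dtype=float)
    p = p / p.sum()  # normalize, in case the input is unnormalized counts
    return 1.0 / np.sum(p ** 2)

uniform = np.ones(16) / 16          # maximally spread -> PR = 16
peaked = np.array([0.97, 0.01, 0.01, 0.01])  # nearly deterministic -> PR ~ 1
print(participation_ratio(uniform))  # 16.0
print(participation_ratio(peaked))
```

In this framework, a larger PR of the annealer's output distribution corresponds to a larger effective feature space for the downstream classifier.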
💡 Research Summary
Quantum annealing (QA) has traditionally been viewed as a heuristic optimizer for combinatorial problems, but its coherent dynamics can also serve as a powerful information‑processing resource. In this work the authors propose a quantum‑machine‑learning (QML) framework that directly exploits the non‑adiabatic evolution of a quantum annealer as a feature generator. Classical data (hand‑written digit images) are first reduced by principal component analysis (PCA) to a 20‑dimensional vector. Each component is then encoded into the coupling strengths of the final Ising Hamiltonian of a D‑Wave Advantage 7.1 device. The system is initialized in a uniform superposition (the ground state of a transverse‑field Hamiltonian) and evolved under a time‑dependent Hamiltonian H(s)=−A(s)H₁+B(s)H₂, where H₁ is the transverse field and H₂ encodes the data‑dependent couplings.
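The pipeline above can be sketched with an exact state-vector simulation for a handful of qubits. This is a hypothetical illustration, not the authors' code: it assumes a linear schedule A(s) = 1 − s, B(s) = s (real hardware schedules differ), uses only 4 qubits instead of 20 PCA components, and treats the final measurement distribution as the feature vector:

```python
import numpy as np
from scipy.linalg import expm

def pauli_x_sum(n):
    """Transverse-field term H1 = sum_i X_i as a dense 2^n x 2^n matrix."""
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    I = np.eye(2, dtype=complex)
    H = np.zeros((2 ** n, 2 ** n), dtype=complex)
    for i in range(n):
        term = np.array([[1]], dtype=complex)
        for j in range(n):
            term = np.kron(term, X if j == i else I)
        H += term
    return H

def ising_diagonal(h, J):
    """Diagonal energies of H2 = sum_i h_i Z_i + sum_{i<j} J_ij Z_i Z_j."""
    n = len(h)
    spins = np.array([[1 - 2 * ((k >> i) & 1) for i in range(n)]
                      for k in range(2 ** n)], dtype=float)
    return spins @ np.asarray(h, float) + np.sum((spins @ np.triu(J, 1)) * spins, axis=1)

def anneal_features(h, J, t_anneal=1.0, steps=50):
    """Evolve |+>^n under H(s) = -A(s)H1 + B(s)H2 and return the final
    measurement probabilities, used here as a feature vector."""
    n = len(h)
    H1 = pauli_x_sum(n)
    H2 = np.diag(ising_diagonal(h, J)).astype(complex)
    psi = np.ones(2 ** n, dtype=complex) / np.sqrt(2 ** n)  # ground state of -H1
    dt = t_anneal / steps
    for k in range(steps):
        s = (k + 0.5) / steps  # midpoint of each time slice
        psi = expm(-1j * dt * (-(1 - s) * H1 + s * H2)) @ psi
    return np.abs(psi) ** 2

# Toy usage: four "PCA components" mapped onto the local fields of 4 qubits.
features = anneal_features(h=[0.3, -0.5, 0.1, 0.8], J=np.zeros((4, 4)))
print(features.sum())  # probabilities sum to 1
```

A short `t_anneal` leaves the evolution strongly non-adiabatic, so the output distribution retains coherent structure rather than collapsing onto the ground state of H₂, which is the regime the paper exploits for classification.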
Because the hardware’s fast‑anneal mode limits the minimum annealing time to about 5 ns, the authors introduce a scaling parameter γ∈