Geometric Latent Space Tomography with Metric-Preserving Autoencoders
Quantum state tomography faces exponential scaling with system size, while recent neural network approaches achieve polynomial scaling at the cost of losing the geometric structure of quantum state space. We introduce geometric latent space tomography, combining classical neural encoders with parameterized quantum circuit decoders trained via a metric-preservation loss that enforces proportionality between latent Euclidean distances and quantum Bures geodesics. On two-qubit mixed states with purity 0.85–0.95 representing NISQ-era decoherence, we achieve high-fidelity reconstruction (mean fidelity $F = 0.942 \pm 0.03$) with an interpretable 20-dimensional latent structure. Critically, latent geodesics exhibit strong linear correlation with Bures distances (Pearson $r = 0.88$, $R^2 = 0.78$), preserving 78% of quantum metric structure. Geometric analysis reveals intrinsic manifold dimension 6.35 versus 20 ambient dimensions and measurable local curvature ($\kappa = 0.011 \pm 0.006$), confirming non-trivial Riemannian geometry with $O(d^2)$ computational advantage over $O(4^n)$ density matrix operations. Unlike prior neural tomography, our geometry-aware latent space enables direct state discrimination, fidelity estimation from Euclidean distances, and interpretable error manifolds for quantum error mitigation without repeated full tomography, providing critical capabilities for NISQ devices with limited coherence times.
💡 Research Summary
This paper tackles the long‑standing scalability bottleneck of quantum state tomography (QST) by introducing a geometry‑aware latent‑space approach that simultaneously compresses measurement data and preserves the intrinsic Riemannian structure of quantum state space. Traditional QST requires O(4ⁿ) measurements and matrix operations to reconstruct a full density matrix, making it infeasible beyond roughly ten qubits. Recent neural‑network‑based tomography methods achieve polynomial scaling but treat quantum states as abstract vectors, discarding the Bures metric that governs quantum distinguishability, Fisher information, and optimal state transformations.
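The O(4ⁿ) scaling can be made concrete by counting measurement settings: full tomography of an n-qubit state requires expectation values for every non-identity Pauli string, of which there are 4ⁿ − 1. A short sketch (not from the paper) illustrates why this is manageable at two qubits but hopeless at ten:

```python
from itertools import product

def num_pauli_settings(n_qubits):
    """Count the non-identity n-qubit Pauli strings: 4^n - 1."""
    strings = ["".join(p) for p in product("IXYZ", repeat=n_qubits)]
    return len([s for s in strings if s != "I" * n_qubits])

print(num_pauli_settings(2))   # 15 expectation values for two qubits
print(num_pauli_settings(10))  # 1048575 settings: infeasible in practice
```

The two-qubit count of 15 is exactly the dimension of the measurement vectors the encoder consumes below.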
The authors propose “Geometric Latent Space Tomography,” a hybrid architecture that couples a classical feed‑forward encoder with a parameterized quantum circuit (PQC) decoder. The encoder maps 15‑dimensional Pauli expectation vectors (the complete set of non‑identity Pauli measurements for two qubits) into a 20‑dimensional latent vector z. A linear layer then translates z into 36 rotation angles that drive a six‑layer PQC consisting of single‑qubit RY rotations and CNOT entangling gates. Starting from the maximally mixed state I/4, the circuit prepares a predicted density matrix ρ_pred, from which the decoder again extracts Pauli expectations, closing the auto‑encoder loop. All components are differentiable via the PennyLane mixed‑qubit backend, allowing end‑to‑end back‑propagation.
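The encoder–decoder loop can be sketched in plain NumPy. This is a minimal, untrained illustration, not the authors' implementation: the MLP weights are random, the decoder uses one RY per qubit per layer (12 angles rather than the paper's 36, whose exact per-layer allocation is not specified here), and the differentiable PennyLane backend is replaced by direct matrix algebra.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Two-qubit Pauli basis ------------------------------------------
PAULIS = {"I": np.eye(2, dtype=complex),
          "X": np.array([[0, 1], [1, 0]], dtype=complex),
          "Y": np.array([[0, -1j], [1j, 0]]),
          "Z": np.diag([1.0, -1.0]).astype(complex)}
LABELS = [a + b for a in "IXYZ" for b in "IXYZ" if a + b != "II"]

def pauli_expectations(rho):
    """The 15 non-identity Pauli expectations <P> = Tr(rho P)."""
    return np.array([np.trace(rho @ np.kron(PAULIS[l[0]], PAULIS[l[1]])).real
                     for l in LABELS])

# --- Hypothetical encoder: untrained MLP 15 -> 20 -> angles ----------
W1, b1 = rng.normal(size=(20, 15)) * 0.3, np.zeros(20)
W2, b2 = rng.normal(size=(12, 20)) * 0.3, np.zeros(12)

def encode(pauli_vec):
    z = np.tanh(W1 @ pauli_vec + b1)      # 20-dim latent vector z
    return z, W2 @ z + b2                 # rotation angles for the PQC

# --- PQC decoder: layered RY + CNOT on a two-qubit density matrix ----
def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

def decode(angles, rho_in, n_layers=6):
    rho = rho_in
    for t0, t1 in angles.reshape(n_layers, 2):
        U = CNOT @ np.kron(ry(t0), ry(t1))  # one RY layer, then entangle
        rho = U @ rho @ U.conj().T
    return rho

z, angles = encode(rng.uniform(-1, 1, size=15))
rho_pred = decode(angles, np.eye(4) / 4)       # start from I/4, as in the paper
print(np.isclose(np.trace(rho_pred).real, 1))  # True: valid density matrix
print(pauli_expectations(rho_pred).shape)      # (15,) closes the autoencoder loop
```

The final `pauli_expectations` call is the step that lets reconstruction be scored against the original measurement vector.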
Training optimizes two complementary loss terms. The reconstruction loss L_recon = 1 – F(ρ_true, ρ_pred) uses the Uhlmann‑Jozsa fidelity, directly penalizing quantum‑state mismatch rather than simple mean‑squared error on measurement vectors. The novel metric‑preservation loss L_metric enforces proportionality between Euclidean distances in latent space d_L = ‖z_i – z_j‖₂ and Bures distances d_B = arccos(√F(ρ_i, ρ_j)) for randomly sampled state pairs within each mini‑batch. Specifically, the loss minimizes the squared deviation of the ratio d_L / d_B from a global scaling factor α, encouraging a linear map between the two metrics. The total loss is L_total = L_recon + λ L_metric with λ = 0.06, a value found empirically to balance high reconstruction fidelity against strong geometric alignment.
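The two distance notions and the metric-preservation term can be written out directly. The sketch below follows the formulas quoted above (Uhlmann–Jozsa fidelity, d_B = arccos(√F), and the squared deviation of d_L/d_B from α); the pair-sampling and batching details are simplified assumptions.

```python
import numpy as np

def sqrtm_psd(M):
    """Matrix square root of a positive semidefinite matrix via eigh."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T

def fidelity(rho, sigma):
    """Uhlmann-Jozsa fidelity F = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2."""
    s = sqrtm_psd(rho)
    return float(np.real(np.trace(sqrtm_psd(s @ sigma @ s))) ** 2)

def bures(rho, sigma):
    """Bures distance d_B = arccos(sqrt(F(rho, sigma)))."""
    return float(np.arccos(np.sqrt(np.clip(fidelity(rho, sigma), 0.0, 1.0))))

def metric_loss(zs, rhos, alpha):
    """Mean squared deviation of the ratio d_L / d_B from the scale alpha."""
    terms = []
    for i in range(len(zs)):
        for j in range(i + 1, len(zs)):
            d_B = bures(rhos[i], rhos[j])
            if d_B > 1e-9:                     # skip (near-)identical pairs
                d_L = np.linalg.norm(zs[i] - zs[j])
                terms.append((d_L / d_B - alpha) ** 2)
    return float(np.mean(terms))

p00 = np.diag([1, 0, 0, 0]).astype(complex)    # |00><00|
p11 = np.diag([0, 0, 0, 1]).astype(complex)    # |11><11|
print(round(fidelity(p00, p00), 6))            # 1.0: identical states
print(round(bures(p00, p11), 6))               # 1.570796: pi/2 for orthogonal states
```

Using 1 − fidelity as L_recon and adding λ·metric_loss with λ = 0.06 reproduces the total objective described above.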
The experimental dataset consists of 2,000 two‑qubit mixed states for training and 500 for validation. States are generated from seven noise models (depolarizing, Werner, isotropic, amplitude‑damping, phase‑damping, thermal, and separable product states) with controlled purity Tr(ρ²) ∈ [0.85, 0.95].
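For one of these families the purity control has a closed form. A hypothetical sketch for the Werner case (the other six noise models and the paper's actual sampling scheme are not shown): for ρ(p) = p|Φ⁺⟩⟨Φ⁺| + (1 − p)I/4 the purity is Tr(ρ²) = 0.75p² + 0.25, which can be inverted to hit any target in the 0.85–0.95 range.

```python
import numpy as np

def werner_state(p):
    """Werner state rho = p |Phi+><Phi+| + (1 - p) I/4."""
    phi = np.zeros(4)
    phi[[0, 3]] = 1 / np.sqrt(2)          # (|00> + |11>)/sqrt(2)
    return p * np.outer(phi, phi) + (1 - p) * np.eye(4) / 4

def werner_for_purity(target):
    """Invert Tr(rho^2) = 0.75 p^2 + 0.25 for the mixing weight p."""
    p = np.sqrt((target - 0.25) / 0.75)
    return werner_state(p)

rng = np.random.default_rng(1)
states = [werner_for_purity(t) for t in rng.uniform(0.85, 0.95, size=5)]
purities = [float(np.trace(r @ r).real) for r in states]
print(all(0.85 <= u <= 0.95 for u in purities))   # True
```

The same filter-by-purity idea applies to the other noise families, though those generally require a numerical rather than closed-form inversion.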