Benchmarking Quantum and Classical Algorithms for the 1D Burgers Equation: QTN, HSE, and PINN
Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

We present a comparative benchmark of Quantum Tensor Networks (QTN), the Hydrodynamic Schrödinger Equation (HSE), and Physics-Informed Neural Networks (PINN) for simulating the 1D Burgers’ equation. Evaluating these emerging paradigms against classical GMRES and Spectral baselines, we analyse solution accuracy, runtime scaling, and resource overhead across grid resolutions ranging from $N=4$ to $N=128$. Our results reveal a distinct performance hierarchy. The QTN solver achieves superior precision ($L_2 \sim 10^{-7}$) with remarkable near-constant runtime scaling, effectively leveraging entanglement compression to capture shock fronts. In contrast, while the Finite-Difference HSE implementation remains robust, the Spectral HSE method suffers catastrophic numerical instability at high resolutions, diverging significantly at $N=128$. PINNs demonstrate flexibility as mesh-free solvers but stall at lower accuracy tiers ($L_2 \sim 10^{-1}$), limited by spectral bias compared to grid-based methods. Ultimately, while quantum methods offer novel representational advantages for low-resolution fluid dynamics, this study confirms they currently yield no computational advantage over classical solvers without fault tolerance or significant algorithmic breakthroughs in handling non-linear feedback.


💡 Research Summary

This paper presents a systematic benchmark of three emerging solution paradigms for the one‑dimensional viscous Burgers equation—Quantum Tensor Networks (QTN), the Hydrodynamic Schrödinger Equation (HSE), and Physics‑Informed Neural Networks (PINN)—against two classical baselines: a Generalized Minimal Residual (GMRES) linear solver and a high‑order spectral method. The authors evaluate all methods across a range of spatial resolutions (N = 4, 8, 16, 32, 64, 128, where N = 2ⁿ for the quantum approaches) and a fixed physical setup (ν = 0.01, T = 1, Δt = 0.005, step‑function initial condition, Dirichlet boundaries α = 1, β = 0). Accuracy is measured by the relative L₂ norm at the final time, while runtime, memory consumption, and quantum resource requirements (circuit depth, qubit count) are also recorded.

Classical baselines.
GMRES solves the semi‑discretized nonlinear system after linearizing the convective term; it scales linearly in both time and memory (O(N)). The spectral baseline, implemented via Chebyshev collocation, achieves the highest reference accuracy (L₂ ≈ 4.7 × 10⁻³ for a sine wave) but becomes numerically unstable for N ≥ 64, diverging dramatically at N = 128.
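As a point of reference, a spectral Burgers step can be sketched with a periodic Fourier pseudo‑spectral discretization. This is a minimal illustration, not the paper's baseline: the paper uses Chebyshev collocation with Dirichlet boundaries and ν = 0.01, whereas this sketch uses a periodic FFT grid and a larger viscosity so the shock stays resolved at this N.

```python
import numpy as np

def burgers_fourier(N=128, nu=0.1, T=1.0, dt=0.005):
    """Viscous Burgers u_t + u*u_x = nu*u_xx on [0, 2*pi), periodic.
    Fourier pseudo-spectral derivatives in space, classical RK4 in time."""
    x = 2.0 * np.pi * np.arange(N) / N
    ik = 1j * np.fft.fftfreq(N, d=1.0 / N)   # spectral d/dx multipliers
    u = np.sin(x)                            # smooth initial condition

    def rhs(v):
        vh = np.fft.fft(v)
        v_x = np.fft.ifft(ik * vh).real      # u_x via FFT
        v_xx = np.fft.ifft(ik**2 * vh).real  # u_xx via FFT
        return -v * v_x + nu * v_xx          # advection + diffusion

    for _ in range(round(T / dt)):           # classical RK4 step
        k1 = rhs(u)
        k2 = rhs(u + 0.5 * dt * k1)
        k3 = rhs(u + 0.5 * dt * k2)
        k4 = rhs(u + dt * k3)
        u = u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x, u
```

By the maximum principle for the viscous equation, the amplitude should only decay from its initial value of 1, which gives a quick sanity check on a run.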

Quantum Tensor Networks (QTN).
The QTN approach encodes the solution vector as a Matrix Product State (MPS) with bond dimension χ. Non‑linear advection (u · uₓ) is evaluated by a site‑wise Hadamard product of the state MPS and its gradient MPS, temporarily inflating the bond dimension to χ² before an SVD‑based truncation restores it to χ ≤ χ_max (typically 32–64). Time integration uses an explicit fourth‑order Runge‑Kutta scheme with adaptive Δt obeying a CFL condition. The algorithm’s computational cost scales as O(n·χ⁴) (n = log₂N) and memory as O(n·χ²). Because n grows only logarithmically, runtime remains almost constant as N increases. Empirically, QTN attains L₂ ≈ 10⁻⁷ across all tested resolutions, with runtimes on a workstation of only a few hundredths of a second even at N = 128. The method requires only n qubits (log₂N) for a quantum‑inspired simulation, and its memory footprint is modest.
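The core QTN primitive—the site‑wise Hadamard product that multiplies bond dimensions—can be illustrated quantum‑inspired style in plain NumPy. This is a minimal sketch, not the paper's solver; `vec_to_mps`, `mps_to_vec`, and `hadamard_mps` are hypothetical helper names introduced here.

```python
import numpy as np

def vec_to_mps(v, chi_max=64):
    """Split a length-2^n vector into an MPS by sequential truncated SVDs."""
    n = int(np.log2(v.size))
    tensors, rest, chi_l = [], v.astype(float), 1
    for _ in range(n - 1):
        rest = rest.reshape(chi_l * 2, -1)
        U, S, Vt = np.linalg.svd(rest, full_matrices=False)
        chi = min(chi_max, S.size)                    # truncate to chi_max
        tensors.append(U[:, :chi].reshape(chi_l, 2, chi))
        rest = S[:chi, None] * Vt[:chi]
        chi_l = chi
    tensors.append(rest.reshape(chi_l, 2, 1))
    return tensors

def mps_to_vec(tensors):
    """Contract an MPS back into a dense vector."""
    out = tensors[0]
    for t in tensors[1:]:
        out = np.einsum('apb,bqc->apqc', out, t).reshape(1, -1, t.shape[2])
    return out.reshape(-1)

def hadamard_mps(A, B):
    """Site-wise (elementwise) product of two MPS.
    Bond dimensions multiply: chi -> chi_A * chi_B, i.e. chi^2 when equal."""
    return [np.einsum('apb,cpd->acpbd', a, b).reshape(
                a.shape[0] * b.shape[0], 2, a.shape[2] * b.shape[2])
            for a, b in zip(A, B)]
```

After the product, the inflated state can be recompressed to χ ≤ χ_max (here, for brevity, via a dense round trip `vec_to_mps(mps_to_vec(C), chi_max)`; a production solver sweeps SVDs directly over the MPS, which is what gives the O(n·χ⁴) cost).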

Hydrodynamic Schrödinger Equation (HSE).
HSE maps the fluid velocity to a complex wavefunction via the inverse Madelung transformation ψ(x) = √ρ exp(i∫u dx). The resulting linear Schrödinger‑like equation is evolved under a Hamiltonian H = H_k + H_q, where H_k implements diffusion and H_q encodes a density‑dependent potential. Two spatial discretizations are examined: a finite‑difference (FD) scheme yielding a sparse tridiagonal Hamiltonian, and a Fourier‑based spectral Hamiltonian that is dense. Both are decomposed into weighted Pauli strings and simulated using first‑order Trotter‑Suzuki product formulas; a variational, parameter‑trained Trotter variant is also explored. The FD implementation scales polynomially in n and is feasible up to N = 128, but the spectral Hamiltonian incurs exponential circuit depth, leading to memory overflow and numerical blow‑up at high resolution. Under realistic NISQ noise models (depolarizing and amplitude‑damping channels), the HSE approach achieves only L₂ ≈ 10⁻³, and the error grows rapidly with circuit depth. Resource analysis shows O(2ⁿ) memory and n qubits, confirming that current quantum hardware cannot support the required depth for high‑resolution Burgers simulations.
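The encoding step alone (no quantum circuit, no Pauli decomposition) can be illustrated in NumPy. Uniform density ρ = 1 and a periodic grid are simplifying assumptions made for this sketch.

```python
import numpy as np

N = 256
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
dx = x[1] - x[0]
u = 0.2 * np.sin(x)                      # velocity field to encode

# Inverse Madelung map with rho = 1: psi = sqrt(rho) * exp(i * int u dx),
# the integral approximated by a cumulative trapezoid rule.
phase = np.concatenate(([0.0], np.cumsum(0.5 * (u[1:] + u[:-1])) * dx))
psi = np.exp(1j * phase)

# Decoding: u is the spatial gradient of the (unwrapped) phase of psi.
u_rec = np.gradient(np.unwrap(np.angle(psi)), dx)
```

Since ρ = 1 here, |ψ| = 1 everywhere and the velocity survives the round trip up to finite‑difference error; in the full HSE scheme it is this wavefunction that is evolved under H = H_k + H_q.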

Physics‑Informed Neural Networks (PINN).
The PINN implementation uses a fully connected feed‑forward network with three hidden layers of 50 tanh‑activated neurons. The loss combines PDE residual, initial‑condition, and boundary‑condition terms, optimized with Adam (learning rate = 10⁻³) for 10 000 epochs. Training is performed on spatial‑temporal grids ranging from 50 × 50 to 200 × 200 points and Reynolds numbers Re ∈
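The loss assembly described above can be sketched in NumPy. A real PINN uses automatic differentiation and Adam; this illustration substitutes central finite differences for the derivatives and omits training entirely. The domain [0, 1], the step location x = 0.5, and the collocation counts are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
# Three hidden layers of 50 tanh neurons, input (x, t), scalar output u.
sizes = [2, 50, 50, 50, 1]
params = [(rng.normal(0.0, 0.1, (a, b)), np.zeros(b))
          for a, b in zip(sizes, sizes[1:])]

def net(xt):
    h = xt
    for W, b in params[:-1]:
        h = np.tanh(h @ W + b)
    W, b = params[-1]
    return (h @ W + b).ravel()

def pinn_loss(nu=0.01, n_col=200, eps=1e-4):
    x = rng.uniform(0.0, 1.0, n_col)
    t = rng.uniform(0.0, 1.0, n_col)
    xt = np.stack([x, t], axis=1)
    u = net(xt)
    # Finite-difference stand-ins for the autodiff derivatives of a real PINN.
    u_t = (net(xt + [0.0, eps]) - net(xt - [0.0, eps])) / (2 * eps)
    u_x = (net(xt + [eps, 0.0]) - net(xt - [eps, 0.0])) / (2 * eps)
    u_xx = (net(xt + [eps, 0.0]) - 2 * u + net(xt - [eps, 0.0])) / eps**2
    residual = u_t + u * u_x - nu * u_xx                 # Burgers PDE residual
    ic = net(np.stack([x, 0.0 * x], 1)) - np.where(x < 0.5, 1.0, 0.0)  # step IC
    xb = np.where(x < 0.5, 0.0, 1.0)                     # boundary points x=0, x=1
    bc = net(np.stack([xb, t], 1)) - np.where(xb == 0.0, 1.0, 0.0)  # alpha=1, beta=0
    return np.mean(residual**2) + np.mean(ic**2) + np.mean(bc**2)
```

Minimizing this composite loss over the network weights (with Adam at learning rate 10⁻³, as in the paper) is what turns the network into a mesh‑free solver.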

