LiePrune: Lie Group and Quantum Geometric Dual Representation for One-Shot Structured Pruning of Quantum Neural Networks


Quantum neural networks (QNNs) and parameterized quantum circuits (PQCs) are key building blocks for near-term quantum machine learning. However, their scalability is constrained by excessive parameters, barren plateaus, and hardware limitations. We propose LiePrune, the first mathematically grounded one-shot structured pruning framework for QNNs that leverages Lie group structure and quantum geometric information. Each gate is jointly represented in a Lie group–Lie algebra dual space and a quantum geometric feature space, enabling principled redundancy detection and aggressive compression. Experiments on quantum classification (MNIST, FashionMNIST), quantum generative modeling (Bars-and-Stripes), and quantum chemistry (LiH VQE) show that LiePrune achieves over $10\times$ compression with negligible or even improved task performance, while providing provable guarantees on redundancy detection, functional approximation, and computational complexity.


💡 Research Summary

The paper “LiePrune: Lie Group and Quantum Geometric Dual Representation for One-Shot Structured Pruning of Quantum Neural Networks” introduces a novel, mathematically principled framework for compressing Quantum Neural Networks (QNNs) and Parameterized Quantum Circuits (PQCs). It addresses the critical scalability challenges in near-term quantum machine learning, such as over-parameterization, barren plateaus, and hardware constraints on Noisy Intermediate-Scale Quantum (NISQ) devices.

The core innovation of LiePrune is the dual representation of quantum gates. Unlike heuristic pruning methods, it jointly represents each unitary gate in two complementary spaces:

  1. Lie Group/Algebra Space: Every gate U is represented via the exponential map of its generator X_U in the Lie algebra su(2^n), i.e., U = exp(X_U). This preserves the unitary structure fundamental to quantum mechanics.
  2. Quantum Geometric Feature Space: The impact of a gate is quantified using the Fubini–Study distance, which measures the geodesic distance between quantum states in projective Hilbert space. Features like local loss landscape curvature and symmetry indicators are also extracted.
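The two halves of the dual representation can be illustrated concretely: the generator of a gate is recoverable with a principal matrix logarithm, and the Fubini–Study distance follows from a state overlap. The sketch below is a minimal assumed implementation (not the paper's code) using NumPy/SciPy and a single-qubit RX gate:

```python
import numpy as np
from scipy.linalg import expm, logm

def lie_generator(U):
    """Anti-Hermitian generator X_U with U = exp(X_U), via the principal matrix log."""
    return logm(U)

def fubini_study_distance(psi, phi):
    """Geodesic distance between pure states in projective Hilbert space."""
    overlap = np.abs(np.vdot(psi, phi))
    return np.arccos(np.clip(overlap, 0.0, 1.0))

# Toy example: a single-qubit RX(theta) gate and its su(2) generator.
theta = 0.3
pauli_x = np.array([[0, 1], [1, 0]], dtype=complex)
U = expm(-0.5j * theta * pauli_x)      # RX(theta)
X_U = lie_generator(U)                 # recovers -0.5j * theta * pauli_x

psi0 = np.array([1, 0], dtype=complex)
d = fubini_study_distance(psi0, U @ psi0)   # equals theta / 2 for this gate
```

For RX(theta) acting on |0⟩ the overlap is cos(theta/2), so the Fubini–Study distance is exactly theta/2, which makes the gate's geometric impact directly readable off its rotation angle.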

The LiePrune algorithm operates in three main stages, leveraging this dual representation:

  1. Lie Subgroup Partitioning: Gates are first partitioned into disjoint subsets based on their minimal closed Lie subgroup (e.g., single-qubit rotation groups). This restricts the search for redundant gates to those sharing the same algebraic foundation and qubit locality, ensuring physical implementability and reducing computational cost.
  2. Geometry-Accelerated Redundancy Detection: Within each subgroup, the Fubini–Study distance between two gates is efficiently approximated using their Lie algebra generators (via the Baker–Campbell–Hausdorff formula) instead of full state simulation, reducing cost from O(2^n) to O(d^2).
  3. Redundancy Graph Construction & One-Shot Pruning: An undirected “redundancy graph” is built where nodes are gates and edges connect gate pairs whose estimated distance is below a threshold ε (deemed ε-redundant). Connected components of this graph form clusters of redundant gates. For each cluster, a core gate with the highest “Lie sensitivity” (gradient norm w.r.t. its generator) is selected. All other gates in the cluster are merged into a single effective gate by taking a sensitivity-weighted average of their Lie algebra generators, which is then exponentiated to form the new, consolidated gate.
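Stages 2–3 above can be sketched as union-find clustering over the redundancy graph followed by sensitivity-weighted generator merging. This is a simplified sketch, not the paper's implementation: the pairwise distance is taken to be the Frobenius norm of the generator difference (a first-order BCH-style surrogate for the estimator described above), and `sensitivities` stands in for the Lie-sensitivity gradient norms:

```python
import numpy as np
from scipy.linalg import expm

def redundancy_clusters(generators, eps):
    """Connected components of the eps-redundancy graph over one gate subgroup.
    Distance surrogate: Frobenius norm of the generator difference (assumption)."""
    n = len(generators)
    parent = list(range(n))          # union-find over gate indices
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(generators[i] - generators[j]) < eps:
                parent[find(i)] = find(j)
    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return sorted(clusters.values())

def merge_cluster(generators, sensitivities, cluster):
    """Sensitivity-weighted average of the cluster's generators, exponentiated."""
    w = np.array([sensitivities[i] for i in cluster], dtype=float)
    w /= w.sum()
    X = sum(wi * generators[i] for wi, i in zip(w, cluster))
    return expm(X)

# Three RZ-type gates: two nearly identical angles and one outlier.
pauli_z = np.diag([1.0, -1.0]).astype(complex)
gens = [-0.5j * a * pauli_z for a in (0.30, 0.31, 1.50)]
clusters = redundancy_clusters(gens, eps=0.05)   # the two close gates cluster
merged = merge_cluster(gens, sensitivities=[1.0, 3.0, 1.0], cluster=clusters[0])
```

Because the merged gate is the exponential of an averaged anti-Hermitian generator, it stays exactly unitary, which is the point of working in the Lie algebra rather than averaging gate matrices directly.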

The authors provide strong theoretical guarantees for LiePrune:

  • Theorem 4.2 (Completeness): Under mild assumptions, the redundancy graph will connect all truly ε-redundant gate pairs within a subgroup.
  • Theorem 4.3 (Approximation Bound): The functional error induced by merging a cluster is rigorously bounded, depending on the cluster size, the redundancy threshold ε, and the commutator magnitudes between generators.
  • Theorem 4.4 (Complexity): The algorithm runs in linear time O(N) with respect to the number of gates N, assuming bounded local degree in the redundancy graph, making it suitable for real-time compression.
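The commutator dependence in the approximation bound (Theorem 4.3) can be checked numerically on a toy pair of generators: when X1 and X2 commute, replacing the pair exp(X1)exp(X2) by the single gate exp(X1 + X2) is exact, and otherwise the leading error term is ||[X1, X2]|| / 2 by the Baker–Campbell–Hausdorff expansion. A minimal sketch (an illustration of the bound's mechanism, not the paper's proof):

```python
import numpy as np
from scipy.linalg import expm

def merge_gap(X1, X2):
    """Spectral-norm gap between the gate pair exp(X1)exp(X2) and exp(X1 + X2)."""
    return np.linalg.norm(expm(X1) @ expm(X2) - expm(X1 + X2), 2)

pauli_x = np.array([[0, 1], [1, 0]], dtype=complex)
pauli_z = np.diag([1.0, -1.0]).astype(complex)

# Commuting generators: the BCH series truncates, so the merge is exact.
gap_commuting = merge_gap(-0.1j * pauli_z, -0.2j * pauli_z)

# Non-commuting generators: the gap tracks ||[X1, X2]|| / 2 to leading order.
X1, X2 = -0.1j * pauli_x, -0.2j * pauli_z
gap = merge_gap(X1, X2)
half_comm = 0.5 * np.linalg.norm(X1 @ X2 - X2 @ X1, 2)
```

This is why the subgroup partitioning of stage 1 matters: restricting merges to gates sharing a Lie subgroup keeps commutators small or zero, so the bound in Theorem 4.3 stays tight.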

Experimental validation spans four benchmarks across three task families: quantum classification (MNIST 4 vs 9 and FashionMNIST Sandal vs Boot), quantum generative modeling (Bars-and-Stripes), and quantum chemistry (an LiH variational quantum eigensolver). Using a 12-layer hardware-efficient ansatz, LiePrune achieved aggressive compression rates of over 10x (e.g., reducing parameters from 360 to 36) while largely preserving task performance. Remarkably, after light fine-tuning, accuracy on some discriminative tasks not only recovered but slightly exceeded the original performance. For the LiH VQE task, a 12x compression was achieved while maintaining a chemically relevant energy approximation.

In conclusion, LiePrune is the first one-shot structured pruning framework that leverages the inherent Lie group structure and quantum geometry of QNNs to deliver provable, efficient, and aggressive compression. It demonstrates that over-parameterized QNNs often reside on lower-dimensional submanifolds, revealing significant redundancy. This work paves the way for deploying larger, more expressive models on resource-constrained NISQ devices and enables real-time circuit optimization for edge quantum computing.

