Quantum Complexity: restrictions on algorithms and architectures

A dissertation submitted to the University of Bristol in accordance with the requirements of the degree of Doctor of Philosophy (PhD) in the Faculty of Engineering, Department of Computer Science, July 2009.


💡 Research Summary

This dissertation investigates fundamental limits on quantum algorithms and on the architectures that implement them, bridging the long-standing gap between theoretical quantum complexity and practical hardware design. After a review of the relevant quantum complexity classes, most notably BQP, QMA, and QCMA, the work identifies the absence of rigorous depth lower bounds for QMA-complete problems, which motivates the development of new analytical tools.
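For orientation, the classes reviewed sit in a chain of standard containments, none of which is individually known to be strict:

```latex
\mathsf{P} \subseteq \mathsf{BPP} \subseteq \mathsf{BQP} \subseteq \mathsf{QCMA} \subseteq \mathsf{QMA} \subseteq \mathsf{PP} \subseteq \mathsf{PSPACE}
```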

The first technical contribution introduces a “Limited‑Depth Model” and proves two central theorems: (1) no quantum circuit of depth O(log n) can solve a QMA‑complete problem, and (2) solving such a problem requires circuit depth Ω(√n). Both results are derived from quantum information‑theoretic arguments about mutual information and recoverability, establishing a clear separation between the problems shallow circuits can solve and those that demand inherently deep ones.
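In schematic form, writing D(n) for the minimum depth of any circuit family deciding a fixed QMA‑complete problem on inputs of size n, the two bounds as summarized above read as follows (the precise circuit model and promise conditions are those of the Limited‑Depth Model in the thesis):

```latex
\text{(1)}\;\; D(n) \neq O(\log n), \qquad \text{(2)}\;\; D(n) = \Omega(\sqrt{n}).
```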

The second part concerns algorithmic restrictions under constrained memory access. By formalizing a “Black‑Box Access Restriction,” the thesis shows that the well‑known quadratic speed‑up of Grover’s search, along with similar gains for Simon’s problem and recent quantum machine‑learning proposals, relies on unrestricted global memory access. When access is limited to local neighborhoods, a realistic constraint for near‑term devices, the achievable speed‑up collapses, and for unstructured problems not even a linear improvement can be guaranteed. This highlights the critical role of data layout and memory‑interface design in any practical quantum algorithm.
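As a concrete baseline for what the restriction takes away, the following numpy sketch simulates textbook Grover search with an unrestricted oracle, confirming that roughly (π/4)√N global queries suffice to find one marked item among N. It is a minimal illustration of the unrestricted setting only; it does not model the thesis’s local‑neighborhood access restriction.

```python
import numpy as np

def grover_success(n_qubits: int, marked: int) -> tuple[int, float]:
    """Run textbook Grover search on N = 2**n_qubits items, one marked.

    Returns the number of oracle queries used, ~ (pi/4) * sqrt(N),
    and the probability of measuring the marked index afterwards.
    """
    N = 2 ** n_qubits
    s = np.full(N, 1.0 / np.sqrt(N))  # uniform superposition |s>
    state = s.copy()
    queries = int(np.floor(np.pi / 4 * np.sqrt(N)))
    for _ in range(queries):
        state[marked] *= -1.0                   # oracle: phase-flip the marked item
        state = 2.0 * s * s.dot(state) - state  # diffusion: reflect about |s>
    return queries, float(state[marked] ** 2)

if __name__ == "__main__":
    for n in (4, 8, 12):
        q, p = grover_success(n, marked=3)
        print(f"N = {2**n:5d}: {q:3d} oracle queries, success prob = {p:.4f}")
```

The printed query count grows as √N, whereas a classical exhaustive search needs N/2 lookups on average; the summary’s claim is that this gap cannot be maintained once oracle access is restricted to local neighborhoods.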

The third and most hardware‑oriented chapter models a two‑dimensional quantum processor based on the surface code. It quantifies the cost of fault tolerance, showing that the physical‑qubit overhead per logical qubit scales as O(log n), while the required circuit depth scales with the physical distance between interacting qubits, yielding a “distance‑depth trade‑off.” The analysis is extended to incorporate long‑range entanglement via a switch‑network layer. Although such a network dramatically improves connectivity, the added gate errors cause the overall algorithmic cost to increase sharply once a certain error threshold is crossed, exposing a delicate balance between connectivity gains and error‑correction costs.
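To make the overhead claim tangible, here is a back‑of‑the‑envelope sketch using the standard surface‑code heuristic, in which the logical error rate at code distance d scales as (p/p_th)^((d+1)/2). The threshold value p_th = 10⁻² and the ~2d² physical‑qubits‑per‑logical‑qubit count are illustrative assumptions, not figures from the thesis; the sketch shows only that the code distance, and hence the per‑qubit footprint, grows logarithmically as the target error rate shrinks.

```python
P_TH = 1e-2  # assumed threshold error rate (illustrative, not from the thesis)

def required_distance(p_phys: float, p_target: float) -> int:
    """Smallest odd code distance d whose heuristic logical error rate
    (p_phys / P_TH) ** ((d + 1) / 2) is at or below p_target."""
    if p_phys >= P_TH:
        raise ValueError("physical error rate must be below threshold")
    ratio = p_phys / P_TH
    d = 3
    while ratio ** ((d + 1) / 2) > p_target:
        d += 2
    return d

def physical_per_logical(d: int) -> int:
    """Rough physical-qubit count per logical qubit: ~2 * d**2
    (data plus syndrome qubits); exact counts depend on the layout."""
    return 2 * d * d

if __name__ == "__main__":
    for p_phys in (1e-3, 3e-3):
        for p_target in (1e-6, 1e-10, 1e-14):
            d = required_distance(p_phys, p_target)
            print(f"p = {p_phys:.0e}, target {p_target:.0e}: "
                  f"d = {d:2d}, ~{physical_per_logical(d):4d} physical qubits/logical")
```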

Synthesizing these findings, the dissertation argues that future quantum algorithm designers must embed architectural constraints—circuit depth, connectivity, and memory access patterns—directly into the algorithmic design process. Conversely, hardware architects should target the minimal depth and error‑correction capabilities required by the intended complexity class, rather than pursuing generic low‑error, high‑connectivity designs. The work concludes with a roadmap for further research, including the exploration of multi‑dimensional topologies and the search for optimal algorithms that respect depth and connectivity limits. Ultimately, the thesis provides a rigorous framework for evaluating the feasibility of quantum speed‑ups in realistic, constrained environments, offering concrete guidelines for both algorithmic innovation and hardware engineering.