Quantum Complexity: restrictions on algorithms and architectures
A dissertation submitted to the University of Bristol in accordance with the requirements of the degree of Doctor of Philosophy (PhD) in the Faculty of Engineering, Department of Computer Science, July 2009.
Research Summary
This dissertation investigates fundamental limits on both quantum algorithms and the architectures that implement them, bridging a long-standing gap between theoretical quantum complexity and practical hardware design. After a comprehensive review of quantum complexity classes (most notably BQP, QMA, and QCMA), the work identifies a lack of rigorous depth lower bounds for QMA-complete problems, motivating the development of new analytical tools.
The first technical contribution introduces a "Limited-Depth Model" and proves two central theorems: (1) no quantum circuit of depth O(log n) can solve QMA-complete problems, and (2) solving such problems requires circuit depth at least Ω(√n). These results are derived using quantum information-theoretic arguments about mutual information and recoverability, establishing a clear separation between what shallow circuits can achieve and the depth that such problems inherently demand.
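Stated compactly (the notation here is assumed for exposition, not taken verbatim from the thesis: depth(C_n) denotes the depth of a circuit family C_n on inputs of size n), the two bounds read:

```latex
% Logarithmic depth is insufficient:
\text{if } \operatorname{depth}(C_n) = O(\log n),
\text{ then } C_n \text{ decides no QMA-complete problem;}
% and any circuit family that does decide one must satisfy
\operatorname{depth}(C_n) = \Omega(\sqrt{n}).
```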
The second part focuses on algorithmic restrictions under constrained memory access. By formalizing a "Black-Box Access Restriction," the thesis shows that the well-known quadratic speed-up of Grover's search, as well as similar gains for Simon's problem and recent quantum machine-learning proposals, relies on unrestricted global memory access. When access is limited to local neighborhoods (a realistic scenario for near-term devices), the achievable speed-up collapses, and even linear-time improvements cannot be guaranteed for unstructured problems. This highlights the critical role of data layout and memory-interface design in any practical quantum algorithm.
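As a concrete illustration of the quadratic speed-up that this chapter shows to be fragile, the following is a minimal pure-Python state-vector simulation of Grover's search (a standard textbook construction, not code from the thesis; the database size N = 8 and marked index 5 are illustrative choices). After the optimal ⌊(π/4)·√N⌋ iterations, the marked item carries almost all of the probability:

```python
import math

def grover_search(n_qubits: int, marked: int) -> list:
    """Simulate Grover's algorithm and return the final real amplitudes."""
    N = 2 ** n_qubits
    # Start in the uniform superposition over all N basis states.
    state = [1.0 / math.sqrt(N)] * N
    iterations = int(math.floor(math.pi / 4 * math.sqrt(N)))
    for _ in range(iterations):
        # Oracle: flip the phase of the marked item.
        state[marked] = -state[marked]
        # Diffusion: reflect every amplitude about the mean amplitude.
        mean = sum(state) / N
        state = [2 * mean - a for a in state]
    return state

amps = grover_search(3, marked=5)
prob_marked = amps[5] ** 2
print(f"P(marked) after Grover: {prob_marked:.3f}")  # ~0.945 for N = 8
```

With only 2 oracle queries (versus an expected ~4 classical probes of 8 items), the success probability is already about 0.945; this quadratic query advantage is exactly what collapses under the black-box access restriction.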
The third and most hardware-oriented chapter models a two-dimensional surface-code-based quantum processor. It quantifies the overhead of fault-tolerant logical qubits, showing that the overhead scales as O(log n) per logical qubit, while the required circuit depth scales with the physical distance between qubits, yielding a "distance-depth trade-off." The analysis is extended to incorporate long-range entanglement via a switch-network layer. Although such a network dramatically improves connectivity, the added gate error rates cause the overall algorithmic complexity to increase sharply once a certain error threshold is crossed, exposing a delicate balance between connectivity gains and error-correction costs.
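A back-of-the-envelope sketch of this overhead, using the standard heuristic logical-error model p_L ≈ A·(p/p_th)^((d+1)/2) and the common estimate of ~2d² physical qubits per distance-d surface-code patch. The constants (A = 0.1, threshold p_th = 1%) and the function names are illustrative assumptions for this sketch, not values or code from the thesis:

```python
def surface_code_distance(p_phys: float, p_target: float,
                          p_th: float = 0.01, A: float = 0.1) -> int:
    """Smallest odd code distance d whose heuristic logical error rate
    A * (p_phys / p_th) ** ((d + 1) / 2) falls below p_target."""
    d = 3
    while A * (p_phys / p_th) ** ((d + 1) / 2) > p_target:
        d += 2  # surface-code distances are odd
    return d

def physical_qubits_per_logical(d: int) -> int:
    """Common rough estimate: ~2 d^2 physical qubits per logical qubit."""
    return 2 * d * d

# Example: 0.2% physical error rate, target logical error rate 1e-9.
d = surface_code_distance(p_phys=2e-3, p_target=1e-9)
print(f"d = {d}: {physical_qubits_per_logical(d)} physical qubits per logical qubit")
# d = 23 -> 1058 physical qubits per logical qubit
```

Because the required distance grows only logarithmically in the inverse target error rate, driving the target error down with the problem size yields the O(log n) per-qubit distance scaling described above, while the d-cycle measurement rounds per logical operation supply the depth side of the trade-off.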
Synthesizing these findings, the dissertation argues that future quantum algorithm designers must embed architectural constraints (circuit depth, connectivity, and memory access patterns) directly into the algorithmic design process. Conversely, hardware architects should target the minimal depth and error-correction capabilities required by the intended complexity class, rather than pursuing generic low-error, high-connectivity designs. The work concludes with a roadmap for further research, including the exploration of multi-dimensional topologies and the search for optimal algorithms that respect depth and connectivity limits. Ultimately, the thesis provides a rigorous framework for evaluating the feasibility of quantum speed-ups in realistic, constrained environments, offering concrete guidelines for both algorithmic innovation and hardware engineering.