Quantum Predictive Learning and Communication Complexity with Single Input
We define a new model of quantum learning that we call Predictive Quantum (PQ). This is a quantum analogue of PAC, where during the testing phase the student is only required to answer a polynomial number of testing queries. We demonstrate a relational concept class that is efficiently learnable in PQ, while in any “reasonable” classical model an exponential amount of training data would be required. This is the first unconditional separation between quantum and classical learning. We show that our separation is the best possible in several ways; in particular, there is no analogous result for a functional class, as well as for several weaker versions of quantum learning. In order to demonstrate tightness of our separation we consider a special case of one-way communication that we call single-input mode, where Bob receives no input. Somewhat surprisingly, this setting becomes nontrivial when relational communication tasks are considered. In particular, any problem with two-sided input can be transformed into a single-input relational problem of equal classical one-way cost. We show that the situation is different in the quantum case, where the same transformation can make the communication complexity exponentially larger. This happens if and only if the original problem has an exponential gap between quantum and classical one-way communication costs. We believe that these auxiliary results might be of independent interest.
💡 Research Summary
The paper introduces a new quantum learning framework called Predictive Quantum (PQ), which can be viewed as a quantum analogue of the classical PAC model but with a crucial relaxation: during the testing phase the learner is required to answer only a polynomial number of randomly chosen queries rather than to predict the label of every possible instance. In the PQ setting the learner first receives a polynomial‑size training set, uses it to construct a quantum circuit (or more generally a quantum state) in polynomial time, and then, when presented with each test query, performs a measurement on this quantum resource to produce an answer. The model captures realistic scenarios where the amount of test data is limited and where quantum superposition can be exploited to evaluate many hypotheses simultaneously.
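The train-once, answer-polynomially-many-queries interaction pattern of the PQ model can be sketched as follows. This is a minimal classical skeleton of the *interface* only: the learner, concept, and hypothesis below (`pq_session`, `build_hypothesis`, a memorizing dictionary standing in for the quantum circuit/state) are illustrative assumptions, not the paper's construction.

```python
import random

def pq_session(train_examples, build_hypothesis, test_queries, answer):
    """One PQ-style session: the learner sees a fixed training set,
    builds a hypothesis once, and must then answer a bounded list of
    test queries using only that hypothesis."""
    hypothesis = build_hypothesis(train_examples)          # training phase
    return [answer(hypothesis, q) for q in test_queries]   # testing phase

# Toy stand-in concept (purely illustrative): label each bit-string by
# its parity against a hidden subset.
random.seed(0)
n = 8
secret = [random.randint(0, 1) for _ in range(n)]
parity = lambda x: sum(a * b for a, b in zip(secret, x)) % 2

train = [(x, parity(x)) for x in
         ([random.randint(0, 1) for _ in range(n)] for _ in range(50))]

def build_hypothesis(examples):
    # Stand-in learner: memorize the training set. A genuine PQ learner
    # would instead prepare a quantum circuit or state here, and
    # `answer` would perform a measurement on it per query.
    return {tuple(x): y for x, y in examples}

def answer(hypothesis, query):
    return hypothesis.get(tuple(query), 0)

queries = [x for x, _ in train[:5]]
print(pq_session(train, build_hypothesis, queries, answer))
```

The point of the sketch is the shape of the protocol: the hypothesis is constructed exactly once, and the testing phase is limited to polynomially many queries against it.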
The authors construct a relational concept class 𝒞 that consists of pairs (x, y) satisfying a relation R(x, y) rather than a function mapping each x to a unique y. For this class they present a quantum learning algorithm that, using only poly(n) training examples, prepares a quantum state encoding the relation and answers any polynomial number of test queries with high probability. The algorithm’s runtime and sample complexity are both polynomial in the input length n, establishing that 𝒞 is efficiently learnable in the PQ model.
In contrast, the paper proves that any “reasonable” classical learning model—whether standard PAC, agnostic PAC, or any variant that requires a hypothesis that works on the whole distribution—needs an exponential number of examples to learn 𝒞. The lower bound follows from an information‑theoretic argument: because each input can be related to exponentially many possible outputs, a classical learner must essentially enumerate a large fraction of the output space to guarantee correctness, which forces a 2^{Ω(n)} sample requirement. This yields the first unconditional separation between quantum and classical learning: quantum learners can succeed with polynomial resources where classical learners provably cannot.
The authors also show that such a separation cannot occur for functional concept classes (where each x has a unique label). For functional classes the quantum and classical sample complexities remain polynomially related, because the uniqueness of the label eliminates the combinatorial explosion that the relational structure creates.
To further explore the limits of this quantum advantage, the paper studies a restricted one‑way communication setting called single‑input mode. In standard one‑way communication, Alice receives input x, Bob receives input y, and Alice sends a single message to Bob, who must compute a function or relation of (x, y). In single‑input mode, Bob receives no input: he must produce an output based solely on Alice's message. The authors prove that any two‑input problem can be transformed into a single‑input relational problem of equal classical one‑way communication cost. In the quantum case, however, the same transformation can blow up the communication cost exponentially, and this happens if and only if the original problem exhibits an exponential gap between its quantum and classical one‑way costs: if R^{→}(f) = 2^{Ω(Q^{→}(f))}, then the transformed single‑input relational problem has quantum cost exponentially larger than Q^{→}(f), while its classical cost remains Θ(R^{→}(f)). This demonstrates that the exponential separation observed for learning relational classes is tightly linked to known exponential separations between one‑way quantum and classical communication.
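One natural way a two‑input problem can become a single‑input relational problem is sketched below. The specifics are an assumption about the flavor of such a transformation, not necessarily the paper's exact construction: Bob, who receives nothing, is allowed to output *any* pair (y, b) with b = f(x, y), and classically Alice can reuse her original one‑way message unchanged.

```python
# Illustrative sketch (assumed form of the transformation, not the
# paper's exact construction): a two-input problem f(x, y) becomes a
# single-input relational problem whose valid answers on input x are
# all pairs (y, f(x, y)).

def valid_single_input_answers(f, x, ys):
    """Answer set of the transformed relational problem on Alice's input x."""
    return {(y, f(x, y)) for y in ys}

def classical_one_way(f, x, ys, bob_choice):
    # Classically, Alice sends the same message as in the original
    # protocol (here: x itself, for simplicity of the toy); Bob picks
    # any y he likes and evaluates f. The message length does not grow,
    # so the classical one-way cost is preserved.
    message = x  # stand-in for the original protocol's message
    y = bob_choice
    return (y, f(message, y))

f = lambda x, y: (x ^ y) & 1  # toy inner problem: parity of the low bit
answers = valid_single_input_answers(f, x=3, ys=range(4))
assert classical_one_way(f, 3, range(4), bob_choice=2) in answers
print(sorted(answers))
```

The quantum subtlety the paper points to does not appear in this classical toy: intuitively, a quantum message that answers one self‑chosen query (y, f(x, y)) may be disturbed by the measurement, which is where the exponential blow‑up for single‑input relational problems can arise.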
The paper concludes by discussing the broader implications of these findings. The relational nature of the constructed concept class is essential for achieving a quantum advantage, suggesting that future quantum learning research should focus on problems where multiple correct outputs are allowed. Moreover, the connection between learning separations and communication‑complexity separations provides a new lens for investigating quantum resources in distributed settings, such as quantum networks or delegated quantum computation. Open directions include extending the PQ model to multi‑query or adaptive settings, identifying other natural relational classes with quantum learnability, and experimentally realizing the proposed learning protocol on near‑term quantum devices.