Resource-bounded Dimension in Computational Learning Theory
This paper focuses on the relation between computational learning theory and resource-bounded dimension. We intend to establish close connections between the learnability or nonlearnability of a concept class and its size in terms of effective dimension, which will allow the use of powerful dimension techniques in computational learning and, vice versa, the import of learning results into complexity via dimension. Firstly, we obtain a tight result on the dimension of online mistake-bound learnable classes. Secondly, in relation to PAC learning, we show that the polynomial-space dimension of PAC learnable classes of concepts is zero. This provides a hypothesis on effective dimension that implies inherent unpredictability of concept classes (the classes that satisfy this property are not efficiently PAC learnable using any hypothesis class). Thirdly, in relation to the space dimension of classes learnable by membership-query algorithms, our main result proves that the polynomial-space dimension of concept classes learnable by a membership-query algorithm is zero.
💡 Research Summary
The paper establishes a deep connection between resource‑bounded dimension—a quantitative measure of algorithmic randomness and information content defined via PSPACE‑bounded martingales—and three central models of computational learning theory: online mistake‑bound learning, PAC learning, and membership‑query learning. By interpreting the “size” of a concept class through its effective (poly‑space) dimension, the authors are able to translate learnability results into dimension statements and, conversely, to use dimension arguments to prove non‑learnability.
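For readers unfamiliar with the formalism, resource-bounded dimension is defined via s-gales. The following standard definitions come from Lutz's theory and are stated from general background rather than quoted from the paper; they are what the summary below relies on:

```latex
% An s-gale is a function d : \{0,1\}^* \to [0,\infty) satisfying,
% for every finite string w, the averaging condition
d(w) = 2^{-s}\bigl[\,d(w0) + d(w1)\,\bigr]
% (taking s = 1 recovers an ordinary martingale).  An s-gale succeeds on an
% infinite sequence S if \limsup_{n\to\infty} d(S[0..n-1]) = \infty, and the
% polynomial-space dimension of a class X of sequences is
\dim_{\mathrm{pspace}}(X) =
  \inf\{\, s \ge 0 : \text{some pspace-computable $s$-gale succeeds on every } S \in X \,\}
```

Intuitively, a small dimension means a space-bounded gambler can bet on the successive bits of every sequence in the class and win at a fast exponential rate, so "small" classes are exactly the predictable ones.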
The first major contribution concerns online learning with a bounded number of mistakes. An online learner receives instances sequentially, predicts a label for each, and incurs a mistake when the prediction is wrong. If some algorithm makes at most m mistakes on any presentation of a concept in the class, the class is said to be online mistake-bound learnable. The authors prove that any such class has poly‑space dimension at most ½, and that this bound is tight. For the upper bound, each concept is encoded as a binary characteristic string, and the learner is converted into a PSPACE‑bounded martingale that bets on the learner's predicted labels; since the learner errs on only a bounded number of bits, the martingale's capital grows exponentially along every concept in the class, which caps the dimension at ½. For tightness, they show the bound cannot be lowered: there is a mistake‑bound learnable class on which no PSPACE‑bounded s‑gale with s < ½ succeeds, so its dimension is exactly ½.
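The learner-to-martingale conversion behind the upper-bound argument can be illustrated with a toy sketch. This is my own illustration, not code from the paper: the last-bit learner, the fixed betting fraction `beta`, and the capital bound in the docstring are all simplifying assumptions.

```python
def bet_along_learner(predict, bits, beta=0.5):
    """Fair-coin betting strategy that always backs the learner's predicted
    next bit with a fixed fraction `beta` of its current capital.

    If the learner makes at most m mistakes on n bits, the final capital is
    at least (1 + beta)**(n - m) * (1 - beta)**m: exponential growth along
    the concept's characteristic string whenever m is small, which is the
    quantity a dimension upper bound extracts from a mistake-bound learner.
    """
    capital, history = 1.0, []
    for b in bits:
        guess = predict(history)  # learner's prediction from the prefix seen so far
        capital *= (1 + beta) if guess == b else (1 - beta)
        history.append(b)
    return capital

# Toy learner: predict the last bit seen (0 on the empty history).
# On the all-ones string of length 20 it makes exactly one mistake,
# so the capital still grows roughly like 1.5**19.
last_bit = lambda h: h[-1] if h else 0
final = bet_along_learner(last_bit, [1] * 20)
```

The point of the sketch is only the growth rate: few mistakes translate into many won bets, and the exponent of the capital growth is what the dimension bound measures.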
The second contribution links PAC learnability to dimension. In the PAC model, a learner receives random labeled examples drawn from an unknown distribution and must output a hypothesis that is approximately correct with high probability. The authors show that every efficiently PAC learnable concept class has poly‑space dimension zero. The idea, roughly, is that the PAC learner can be simulated within polynomial space to build a martingale: after reading a prefix of a concept's characteristic string, the learner's hypothesis agrees with the target on almost all remaining inputs, and betting on the hypothesis's predictions makes the martingale's capital grow fast enough to witness dimension zero on every concept in the class. The contrapositive yields a dimension‑based hardness criterion: any class of positive poly‑space dimension is inherently unpredictable and cannot be efficiently PAC learned using any hypothesis class.
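As a concrete toy instance of the PAC setting, the snippet below learns a threshold concept on a finite domain with empirical risk minimization. This is an illustration under my own assumptions (thresholds, uniform examples, the names `erm_threshold` and `t_hat`), not an example from the paper; it only exhibits the "agreement on almost all inputs" that the martingale construction exploits.

```python
import random

def erm_threshold(samples):
    """Empirical risk minimization for thresholds c_t(x) = 1 iff x >= t:
    output the smallest example labeled 1 (or +infinity if none was seen)."""
    ones = [x for x, y in samples if y == 1]
    return min(ones) if ones else float("inf")

random.seed(0)
n, t = 1000, 437
target = lambda x: int(x >= t)

# Draw 60 labeled examples uniformly from the domain {0, ..., n-1}.
samples = [(x, target(x)) for x in (random.randrange(n) for _ in range(60))]
t_hat = erm_threshold(samples)

# The hypothesis can only err on the gap [t, t_hat), which shrinks as the
# sample size grows; outside the gap it predicts every label correctly.
errors = sum(int(target(x) != int(x >= t_hat)) for x in range(n))
```

With a polynomial number of samples the error region becomes a vanishing fraction of the domain, which is exactly the surplus of correct predictions a betting strategy can cash in on.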
The third major result concerns learning with membership queries. A membership‑query algorithm may ask, for any instance x, whether x belongs to the target concept; this query access enables rapid information gathering. The authors prove that any class learnable by a polynomial‑time membership‑query algorithm has poly‑space dimension zero. The proof turns the query algorithm into a PSPACE‑bounded martingale: the martingale simulates the learner, answers its membership queries with the bits of the characteristic string it has already read, and then bets on the labels predicted by the learner's output hypothesis. Successful learning forces the capital to grow on every concept in the class, which witnesses dimension zero. Thus, membership‑query learnability implies poly‑space dimension zero.
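To see why query access gathers information so quickly, here is a toy membership-query learner (again my own illustration with assumed names, not the paper's construction): it exactly identifies a threshold concept on {0, …, n} with O(log n) queries, after which a bettor could back every remaining label with certainty.

```python
def mq_learn_threshold(query, n):
    """Exactly identify t for the threshold concept c_t(x) = 1 iff x >= t
    on the domain {0, ..., n}, using membership queries in a binary search."""
    lo, hi = 0, n
    while lo < hi:
        mid = (lo + hi) // 2
        if query(mid):       # mid belongs to the concept, so t <= mid
            hi = mid
        else:                # mid is outside the concept, so t > mid
            lo = mid + 1
    return lo

# Count the queries spent identifying t = 437 on {0, ..., 1000}.
calls = []
t_hat = mq_learn_threshold(lambda x: calls.append(x) or x >= 437, 1000)
```

A handful of queries pins down the whole concept, so almost every bet on the characteristic string is a sure win; this is the intuition behind the dimension-zero conclusion.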
Collectively, these three theorems reveal a striking pattern: online mistake‑bound learnable classes have effective dimension ½, while classes that are efficiently learnable in the PAC or membership‑query settings have effective dimension zero. These bounds are orthogonal to classical combinatorial parameters such as the VC dimension or the Littlestone dimension; instead, they are rooted in the behavior of resource‑bounded martingales. Consequently, the paper provides a new toolkit: dimension arguments can be used to prove lower bounds on learnability, and learning results can be imported into complexity theory to derive dimension‑based separations.
Beyond the core theorems, the authors discuss several implications and future directions. They note that dimension‑zero classes exhibit "predictability" in the sense that a PSPACE‑bounded bettor can systematically anticipate the labels of every concept in the class, which is consistent with the existence of efficient learners. Conversely, positive dimension signals inherent unpredictability, suggesting that any algorithm constrained to polynomial time and space will fail to achieve low error. The paper also outlines potential extensions to other learning paradigms, such as reinforcement learning, online streaming with limited memory, and quantum learning models, where analogous dimension notions might yield similar characterizations. Finally, the authors propose investigating structural properties of high‑dimension classes, constructing explicit examples, and developing dimension‑aware algorithmic techniques that adaptively reduce effective dimension during learning.
In summary, the work bridges two previously disparate areas—resource‑bounded dimension theory and computational learning theory—by showing that learnability in three fundamental models is exactly captured by the effective dimension of the underlying concept class. This not only enriches our theoretical understanding of what makes a class learnable but also opens a promising avenue for applying sophisticated dimension techniques to longstanding open problems in learning theory and computational complexity.