Hardness Results for Agnostically Learning Low-Degree Polynomial Threshold Functions


Hardness results for maximum agreement problems have close connections to hardness results for proper learning in computational learning theory. In this paper we prove two hardness results for the problem of finding a low-degree polynomial threshold function (PTF) which has the maximum possible agreement with a given set of labeled examples in $\mathbb{R}^n \times \{-1,1\}$. We prove that for any constants $d \geq 1$, $\varepsilon > 0$:

- Assuming the Unique Games Conjecture, no polynomial-time algorithm can find a degree-$d$ PTF that is consistent with a $(\frac12 + \varepsilon)$ fraction of a given set of labeled examples in $\mathbb{R}^n \times \{-1,1\}$, even if there exists a degree-$d$ PTF that is consistent with a $1 - \varepsilon$ fraction of the examples.
- It is NP-hard to find a degree-2 PTF that is consistent with a $(\frac12 + \varepsilon)$ fraction of a given set of labeled examples in $\mathbb{R}^n \times \{-1,1\}$, even if there exists a halfspace (degree-1 PTF) that is consistent with a $1 - \varepsilon$ fraction of the examples.

These results immediately imply the following hardness-of-learning results: (i) assuming the Unique Games Conjecture, there is no better-than-trivial proper learning algorithm that agnostically learns degree-$d$ PTFs under arbitrary distributions; (ii) there is no better-than-trivial learning algorithm that outputs degree-2 PTFs and agnostically learns halfspaces (i.e., degree-1 PTFs) under arbitrary distributions.


💡 Research Summary

The paper investigates the computational difficulty of finding a low‑degree polynomial threshold function (PTF) that maximally agrees with a given labeled sample set $\mathcal{S}\subset\mathbb{R}^{n}\times\{-1,1\}$. Two hardness results are established. First, assuming the Unique Games Conjecture (UGC), for any fixed degree $d\ge 1$ and any constant $\varepsilon>0$, no polynomial‑time algorithm can output a degree‑$d$ PTF agreeing with more than a $\frac12+\varepsilon$ fraction of the examples, even when there exists a degree‑$d$ PTF that is consistent with a $1-\varepsilon$ fraction of the data. The proof builds a reduction from a UGC‑hard "maximum agreement" instance to the problem of finding a degree‑$d$ PTF, using a PCP‑style encoding of labels. This shows that, under UGC, no proper algorithm can beat the trivial random‑guess baseline for agnostically learning degree‑$d$ PTFs under arbitrary distributions.
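To make the maximum-agreement objective concrete, here is a minimal sketch (not from the paper; the helper names `ptf_sign` and `agreement` are illustrative) of evaluating a degree-$d$ PTF $\mathrm{sign}(p(x))$ and the agreement fraction that the hardness results concern:

```python
import math

def ptf_sign(coeffs, x):
    """sign(p(x)) for a polynomial p given as {monomial: coefficient},
    where a monomial is a tuple of variable indices (with repetition)
    and the empty tuple () denotes the constant term."""
    val = sum(c * math.prod(x[i] for i in mono) for mono, c in coeffs.items())
    return 1 if val >= 0 else -1

def agreement(coeffs, sample):
    """Fraction of labeled examples (x, y) with sign(p(x)) == y --
    the quantity the maximum-agreement problem asks to maximize."""
    return sum(1 for x, y in sample if ptf_sign(coeffs, x) == y) / len(sample)

# A degree-1 PTF (halfspace) sign(x_0) on three labeled points:
# it agrees with 2 of the 3 examples.
h = {(0,): 1.0}
data = [([1.0], 1), ([-1.0], -1), ([2.0], -1)]
print(agreement(h, data))  # 0.666...
```

The hardness results say that even when some degree-$d$ PTF achieves agreement $1-\varepsilon$, efficiently finding one with agreement $\frac12+\varepsilon$ is intractable (under UGC, or unconditionally for the degree-2 case).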

Second, the authors prove an unconditional NP‑hardness result for degree‑2 PTFs. They show that even if a halfspace (degree‑1 PTF) can correctly label $1-\varepsilon$ of the points, finding a degree‑2 PTF that achieves agreement better than $\frac12+\varepsilon$ is NP‑hard. The reduction is from classic NP‑hard approximation problems such as MAX‑2‑SAT, translating the gap‑hardness directly into the degree‑2 PTF agreement setting. This demonstrates that the additional expressive power of quadratic thresholds does not alleviate the inherent computational barrier.
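A useful way to see the expressive-power point is that a degree-2 PTF over $x \in \mathbb{R}^n$ is exactly a halfspace over the degree-$\le 2$ monomial features of $x$; in particular, every halfspace is a degree-2 PTF with zero quadratic coefficients. The sketch below (illustrative, not from the paper) shows the standard feature expansion:

```python
def quadratic_features(x):
    """Map x in R^n to the degree-<=2 monomial features
    (1, x_i, x_i * x_j for i <= j). A degree-2 PTF over x is a
    halfspace over this expanded feature vector."""
    n = len(x)
    feats = [1.0] + list(x)
    feats += [x[i] * x[j] for i in range(n) for j in range(i, n)]
    return feats

print(quadratic_features([2.0, 3.0]))  # [1.0, 2.0, 3.0, 4.0, 6.0, 9.0]
```

Since the degree-2 hypothesis class strictly contains the degree-1 targets, the NP-hardness result shows that even this extra representational freedom does not help the learner beat the trivial agreement baseline.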

Both results have immediate implications for agnostic learning. Under UGC, there is no proper agnostic learner for degree‑$d$ PTFs that achieves non‑trivial accuracy: no proper learner can surpass the $\frac12$ baseline. Moreover, even when the target concept is a halfspace, no algorithm restricted to outputting degree‑2 PTFs can agnostically learn it with accuracy better than $\frac12+\varepsilon$, unless $\mathrm{P}=\mathrm{NP}$. These findings sharpen the known hardness landscape for low‑degree polynomial classifiers, linking maximum‑agreement hardness to proper learning limits, and suggest that future progress will require either abandoning properness, assuming stronger complexity conjectures, or focusing on restricted distributions where the hardness reductions do not apply.

