P is a proper subset of NP

Notice: This research summary and analysis were automatically generated using AI. For full accuracy, please refer to the original arXiv source.

The purpose of this article is to examine and limit the conditions under which the complexity class P could be equivalent to the complexity class NP. A proof is offered by demonstrating that, as the number of clauses in an NP‑complete problem approaches infinity, the number of input sets processed per computation performed also approaches infinity when the problem is solved by a polynomial‑time solution. It is then possible to determine that the only deterministic optimization of an NP‑complete problem that could prove P = NP would be one that examines no more than a polynomial number of input sets for a given problem. It is then shown that subdividing the set of all possible input sets into a representative polynomial search partition is a problem in the FEXP complexity class. The findings of this article are combined with those of the other articles in this four‑article series, and the final conclusion, P ≠ NP, is demonstrated.


💡 Research Summary

The manuscript titled “P is a proper subset of NP” attempts to argue that the only way P could equal NP is if a deterministic polynomial‑time algorithm could solve any NP‑complete problem by examining only a polynomial number of input assignments, and that finding such a “polynomial search partition” is itself an FEXP‑class problem, thereby rendering P ≠ NP. The paper proceeds through several sections, each of which suffers from serious conceptual and technical flaws.

First, the author restates the definitions of deterministic and nondeterministic Turing machines, quoting Marion, Karp, and a NIST description. The exposition incorrectly suggests that a nondeterministic machine “evaluates all possible inputs at the same time,” conflating the existential acceptance condition with literal parallel evaluation. This misunderstanding underlies the later claim that any algorithm that does not explicitly enumerate all 2^{kn} possible assignments for a K‑SAT instance must fail to find a satisfying assignment.
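The distinction can be made concrete with a short sketch (the instance below is hypothetical, not from the paper): nondeterministic acceptance of a SAT instance is the existential claim that *some* assignment satisfies it, which a polynomial‑time verifier can check one certificate at a time; only the brute‑force search over all assignments is exponential.

```python
from itertools import product

# Hypothetical 3-SAT instance: (x0 or x1 or not x2) and (not x0 or x1 or x2).
# A literal is encoded as (variable_index, negated?).
clauses = [[(0, False), (1, False), (2, True)],
           [(0, True), (1, False), (2, False)]]

def satisfies(assignment, clauses):
    # Polynomial-time verifier: checks a single candidate assignment
    # (the NP "certificate"), not the whole input space.
    return all(any(assignment[v] != neg for v, neg in clause)
               for clause in clauses)

# Nondeterministic acceptance is the existential condition below -- "some
# branch accepts" -- not literal simultaneous evaluation of every input.
accepted = any(satisfies(a, clauses)
               for a in product([False, True], repeat=3))
```

Only the final `any(...)` line enumerates the 2^3 assignments; the verifier itself runs in time polynomial in the instance size, which is exactly what membership in NP requires.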

The core technical claim is derived in Sections 3 and 4. The author defines a K‑SAT instance with k literals per clause and n clauses, noting that the total number of possible truth assignments is 2^{kn}. He then introduces a polynomial function t(n) representing the number of elementary operations a hypothetical polynomial‑time algorithm would perform. Dividing the number of assignments by t(n), he defines the ratio r(n) = 2^{kn}/t(n) and argues, using L’Hôpital’s rule, that r(n) diverges to infinity as n grows. The conclusion drawn is that a polynomial‑time algorithm would have to process an unbounded number of assignments per computation step, which the author interprets as an impossibility.
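Reading r(n) as the ratio of assignments to operations, 2^{kn}/t(n) (the reading consistent with "assignments per computation step"), the limit attributed to L’Hôpital’s rule follows in one line for any degree‑d polynomial t(n) with leading coefficient a_d > 0:

\[
\lim_{n\to\infty} \frac{2^{kn}}{t(n)}
= \lim_{n\to\infty} \frac{(k\ln 2)^{d}\, 2^{kn}}{d!\, a_{d}}
= \infty ,
\]

after d applications of the rule, since each differentiation with respect to n multiplies the numerator by \(k\ln 2\) while the d‑th derivative of t(n) is the constant \(d!\,a_{d}\). The divergence itself is uncontroversial; the flaw lies in what is inferred from it.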

This line of reasoning is fundamentally flawed. The fact that the input space grows exponentially does not imply that a polynomial‑time algorithm must examine each element of that space. Many polynomial‑time algorithms (e.g., for 2‑SAT, bipartite matching, maximum flow) succeed by exploiting structural properties that let them avoid exhaustive enumeration. The author’s argument is therefore circular: “if a polynomial‑time algorithm existed, it would have to examine all assignments; it cannot, so no such algorithm exists.” No rigorous reduction or lower‑bound argument is provided to rule out clever algorithms that bypass explicit enumeration.
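As an illustration of such structure exploitation, here is a sketch (not from the paper) of the classic Aspvall–Plass–Tarjan implication‑graph method, which decides 2‑SAT in linear time without enumerating a single truth assignment:

```python
def two_sat(n_vars, clauses):
    """Decide satisfiability of a 2-CNF formula via strongly connected
    components of the implication graph. Literals are 1-based ints:
    v means x_v, -v means not x_v. No truth assignments are enumerated."""
    N = 2 * n_vars
    idx = lambda lit: 2 * (abs(lit) - 1) + (1 if lit < 0 else 0)
    neg = lambda i: i ^ 1  # node of the complementary literal
    graph = [[] for _ in range(N)]
    rgraph = [[] for _ in range(N)]
    for a, b in clauses:
        # (a or b) yields implications (not a -> b) and (not b -> a).
        graph[neg(idx(a))].append(idx(b))
        graph[neg(idx(b))].append(idx(a))
        rgraph[idx(b)].append(neg(idx(a)))
        rgraph[idx(a)].append(neg(idx(b)))
    # Kosaraju pass 1: record DFS finish order on the forward graph.
    order, seen = [], [False] * N
    for start in range(N):
        if seen[start]:
            continue
        seen[start] = True
        stack = [(start, iter(graph[start]))]
        while stack:
            node, it = stack[-1]
            for v in it:
                if not seen[v]:
                    seen[v] = True
                    stack.append((v, iter(graph[v])))
                    break
            else:
                order.append(node)
                stack.pop()
    # Kosaraju pass 2: label components on the reverse graph.
    comp, c = [-1] * N, 0
    for u in reversed(order):
        if comp[u] == -1:
            stack, comp[u] = [u], c
            while stack:
                x = stack.pop()
                for v in rgraph[x]:
                    if comp[v] == -1:
                        comp[v] = c
                        stack.append(v)
            c += 1
    # Unsatisfiable iff some variable shares a component with its negation.
    return all(comp[2 * v] != comp[2 * v + 1] for v in range(n_vars))
```

For example, `two_sat(2, [(1, 2), (-1, 2), (-2, 1)])` reports satisfiable, while adding the clauses that force both x2 and ¬x2 makes it report unsatisfiable; in both cases the running time is linear in the formula size rather than proportional to the 2^n assignments.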

The paper then presents “The P = NP Optimization Theorem” (Theorem 4.4), which essentially restates the definition of P = NP: a deterministic algorithm that solves an NP‑complete problem by checking at most a polynomial number of inputs would prove P = NP. This theorem adds no new insight. The author further claims that locating such a polynomial‑size “search partition” is an FEXP problem (FEXP being the class of function problems computable in exponential time). No formal definition of the associated decision problem is given, nor is a reduction to a known FEXP‑complete problem shown. The use of L’Hôpital’s rule to compare exponential and polynomial growth rates is elementary calculus and does not constitute a complexity‑theoretic proof.

In Section 5 the author attempts to argue that any polynomial‑time optimization must either (1) guarantee that all unchecked inputs evaluate to false, or (2) guarantee that at least one satisfying assignment lies within the checked subset. These two conditions are presented as the only possibilities, each unattainable for general NP‑complete problems, yet no proof of either impossibility is offered. The paper merely sketches a combinatorial example with three variables and three possible values each, then extrapolates to the infinite‑clause limit without rigorous justification.
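The paper’s sketch amounts to a few lines of counting (the values below are illustrative stand‑ins, not the paper’s): three variables with three values each give 3^3 = 27 combinations, and checking any fixed subset says nothing, by itself, about the unchecked remainder.

```python
from itertools import product

# Illustrative stand-in for the paper's example: 3 variables, 3 values each.
all_inputs = list(product(range(3), repeat=3))
total = len(all_inputs)           # 3**3 = 27 combinations

checked = all_inputs[:9]          # some polynomial-sized sample
unchecked = total - len(checked)  # 18 combinations never examined;
                                  # nothing here rules out a solution among them
```

The gap the summary identifies is precisely this: counting the unchecked combinations establishes that they exist, not that a satisfying input must (or must not) lie among them.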

Finally, the manuscript concludes that P is a proper subset of NP. However, this conclusion rests on the earlier unproven assumptions and on a misapplication of elementary calculus to complexity growth rates. The paper does not engage with the extensive body of literature on relativization, natural proofs, or algebraic techniques that have shaped modern P vs. NP research. It also fails to address known barriers (e.g., Baker‑Gill‑Solovay relativization results, Razborov–Rudich natural proofs) that any successful proof must circumvent.

In summary, while the paper is structured as a conventional academic article, its central arguments are based on incorrect interpretations of nondeterminism, an unjustified necessity to enumerate all assignments, and a superficial use of calculus. No novel reductions, lower bounds, or algorithmic constructions are presented. Consequently, the claim that the paper resolves P vs. NP in favor of P ≠ NP is unsupported, and the work does not constitute a meaningful contribution to computational complexity theory.

