Considerations on P vs NP

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the [Original Paper Viewer] below or the Original ArXiv Source.

In order to prove that the class P is different from the class NP, we consider the satisfiability problem for propositional calculus formulae, which is an NP-complete problem. It is shown that, for every search algorithm A, there is a set E(A) of propositional calculus formulae, each of which requires the algorithm A to take non-polynomial time to find the truth-values of its propositional letters satisfying it. Moreover, the size of E(A) is an exponential function of n, which makes it impossible to detect such formulae in polynomial time. Hence, the satisfiability problem does not have polynomial complexity.


💡 Research Summary

The paper attempts to prove that the class P is different from NP by focusing on the Boolean satisfiability problem (SAT), which is known to be NP‑complete. The authors claim that for every deterministic search algorithm A there exists a set E(A) of propositional formulas such that each formula in E(A) forces algorithm A to run for super‑polynomial (i.e., non‑polynomial) time in order to find a satisfying assignment. Moreover, they argue that the cardinality of E(A) grows exponentially with the number of variables n, and because this set is so large it cannot be recognized or enumerated in polynomial time. From these observations they conclude that SAT itself cannot be solved in polynomial time, and therefore P ≠ NP.
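As a point of reference for what the paper is up against: the obvious exhaustive-search algorithm for SAT does take exponential time in the worst case. The sketch below is our illustration, not the paper's construction; it assumes a DIMACS-style clause encoding (positive integer k for variable k, negative for its negation).

```python
from itertools import product

def brute_force_sat(clauses, n):
    """Exhaustively search all 2**n assignments for a CNF formula.

    clauses: list of clauses; each clause is a list of non-zero ints,
             where literal k asserts variable k is true and -k that it
             is false (1-based indices, DIMACS-style).
    Returns a satisfying assignment as a dict, or None.
    """
    for bits in product([False, True], repeat=n):
        assignment = {i + 1: bits[i] for i in range(n)}
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None  # unsatisfiable: all 2**n assignments were examined

# (x1 OR x2) AND (NOT x1 OR x2) AND (x1 OR NOT x2)
print(brute_force_sat([[1, 2], [-1, 2], [1, -2]], 2))  # {1: True, 2: True}
```

On an unsatisfiable formula over n variables this loop inspects all 2^n assignments; the paper's claim is the far stronger assertion that *every* algorithm is forced into comparable behaviour on some inputs.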

A close technical reading reveals several fundamental problems. First, the paper never specifies the computational model for algorithm A. In complexity theory one must fix a model (deterministic Turing machine, RAM, etc.) and define time in terms of the length of the input encoding. By speaking only of “search algorithms” the authors leave the definition of time, input representation, and allowed operations completely vague, which makes any rigorous lower‑bound claim impossible.

Second, the existence of the set E(A) is asserted without construction. To show that a particular algorithm cannot solve a problem in polynomial time one must either exhibit a concrete hard instance or apply a diagonalisation argument that works uniformly for an entire class of algorithms. The paper instead proposes a per‑algorithm family E(A) but gives no algorithmic procedure for generating its members, nor does it prove that every member indeed forces algorithm A to take super‑polynomial steps. This is a classic gap: claiming “for each A there is a hard instance” is not sufficient to prove a class‑wide lower bound.
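For contrast, explicit hard families do exist in restricted settings: Haken (1985) proved that pigeonhole formulas require exponential-size resolution refutations, and since a DPLL run yields a tree resolution refutation, every DPLL-style solver needs exponential time on them. A Python sketch of that well-known family follows (the variable numbering is our own convention, not taken from the paper):

```python
def pigeonhole_cnf(n):
    """CNF encoding of 'n+1 pigeons fit into n holes' (unsatisfiable).

    Variable var(i, j) asserts that pigeon i sits in hole j.
    Returns a list of clauses in DIMACS-style integer form.
    """
    def var(i, j):  # i in 0..n (pigeons), j in 0..n-1 (holes)
        return i * n + j + 1

    clauses = []
    # Every pigeon occupies at least one hole.
    for i in range(n + 1):
        clauses.append([var(i, j) for j in range(n)])
    # No hole holds two pigeons.
    for j in range(n):
        for i in range(n + 1):
            for k in range(i + 1, n + 1):
                clauses.append([-var(i, j), -var(k, j)])
    return clauses

# 4 "at least one hole" clauses + 3 * C(4,2) = 18 conflict clauses
print(len(pigeonhole_cnf(3)))  # 22
```

Note the contrast with the paper: here the hard family is generated by an explicit polynomial-time procedure, and the lower bound is proved for a well-defined proof system, whereas E(A) comes with neither.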

Third, the argument that the exponential size of E(A) implies non‑detectability in polynomial time is unsound. Complexity theory distinguishes between the size of a language (the number of strings of each length) and the difficulty of deciding membership. A language can have exponentially many strings of length n and still be decidable in polynomial time (e.g., the set of all strings that contain a fixed substring). Therefore, showing that |E(A)| = 2^n does not automatically yield a time lower bound; one must demonstrate that any algorithm that decides membership must examine an exponential amount of information, which the paper does not do.
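This point is easy to make concrete. Among binary strings of length n, exactly n + 1 avoid the substring "01" (the strings of the form 1…10…0), so 2^n − (n + 1) strings contain it; membership is nonetheless decidable in a single linear scan. A small Python check (our illustration):

```python
def contains_01(s):
    """Decide membership in linear time: does s contain '01'?"""
    return "01" in s

def count_members(n):
    """Count length-n binary strings containing '01' (exponential in n)."""
    return sum(contains_01(format(x, f"0{n}b")) for x in range(2 ** n))

for n in range(1, 8):
    # Exactly the n + 1 strings of the form 1...10...0 avoid '01'.
    assert count_members(n) == 2 ** n - (n + 1)
```

So exponential cardinality alone tells us nothing about decision complexity; a lower bound needs a separate argument about the information any decider must examine.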

Fourth, the phrase “cannot be detected in polynomial time” is ambiguous. If detection means “given a formula, decide whether it belongs to E(A)”, then the authors are essentially assuming the very thing they need to prove: that no polynomial‑time decision procedure exists for that language. This circularity undermines the logical flow of the proof.

Fifth, the paper lacks formal statements, lemmas, and proofs. Terms such as “non‑polynomial time” are used without precise definition (e.g., does it exclude all n^k for any constant k, or also quasi‑polynomial bounds?). The claim that SAT “does not have polynomial complexity” is derived solely from the existence of the sets E(A), yet the connection between those sets and the universal quantification over all deterministic polynomial‑time algorithms is never established. In standard complexity theory, to prove P ≠ NP one must show that no deterministic polynomial‑time algorithm can solve SAT, not merely that for each individual algorithm there exists at least one hard instance.
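For reference, the definitions the argument would need to fix precisely look like the following (our formulation in standard notation, not the paper's):

```latex
% f grows super-polynomially iff it eventually beats every polynomial:
\forall k \in \mathbb{N}: \quad f(n) \notin O(n^k)
% (this still admits quasi-polynomial bounds such as
%  n^{\log n} = 2^{(\log n)^2}, which the paper never rules in or out).

% The claim SAT \notin P is a universal statement over algorithms:
\mathrm{SAT} \notin \mathrm{P}
  \iff
  \forall M \ \text{(deterministic poly-time TM)}: \ L(M) \neq \mathrm{SAT}
% The paper instead argues only: for each A there exists some hard
% formula in E(A), without connecting this to the quantification above.
```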

Finally, the paper does not address known barriers such as relativization, natural proofs, or algebrization, which have historically limited many attempted proofs of P ≠ NP. By ignoring these meta‑theoretical considerations, the authors present an argument that, even if technically correct in its own limited framework, would not survive scrutiny under the broader landscape of complexity‑theoretic research.

In summary, while the paper’s high‑level idea—constructing hard instances for each algorithm—is reminiscent of diagonalisation, the execution is incomplete. The lack of a concrete construction for E(A), the failure to tie exponential cardinality to time lower bounds, the ambiguous terminology, and the absence of a rigorous model all render the claimed proof of P ≠ NP unconvincing. As it stands, the work does not advance the state of knowledge on the P versus NP question and should be regarded as an informal, non‑rigorous attempt rather than a definitive resolution.

