Towards P = NP via k-SAT: A k-SAT Algorithm Using Linear Algebra on Finite Fields
Authors: Matt Groff
© Matt Groff 2011
Semi-complete First Draft

Matt Groff
P.O. Box 642
Camp Hill, PA, USA 17001-0642
mgroff100@hotmail.com

August 28, 2018

Abstract

The problem of P vs. NP is very serious, and solutions to the problem can help save lives. This article is an attempt at solving the problem using a computer algorithm. It is presented in a fashion that will hopefully allow for easy understanding for many people and scientists from many diverse fields.

In technical terms, a novel method for solving k-SAT is explained. This method is primarily based on linear algebra and finite fields. Evidence is given that this method may require roughly O(n^3) time and space for deterministic models. More specifically, the algorithm runs in time O(P · V(n + V)^2), mistaking satisfiable Boolean expressions for unsatisfiable ones with an approximate probability of 1/Θ(V(n + V)^2)^P, where n is the number of clauses and V is the number of variables. It is concluded that significant evidence exists that P = NP.

There is a forum devoted to this paper at http://482527.ForumRomanum.com. All are invited to correspond there and help with the analysis of the algorithm. Source code for the associated algorithm can be found at https://sourceforge.net/p/la3sat.

1 Introduction

There are many problems proposed to computer scientists that have been thought to be too difficult for computers to solve quickly. In fact, perhaps the most fundamental question in computer science is whether certain types of problems, collectively known as the class NP, can be solved quickly by a computer. If so, a world of opportunities would open up, and many new problems that were supposed to be almost impossible to solve could be solved quickly.
This paper attempts to provide a proof that they can be solved quickly, and also shows a way to do it. This will hopefully invite researchers from many diverse fields to contribute to the research and work of solving NP-hard problems. The level of interest in this question is so great that in 2000, the Clay Mathematics Institute listed the 7 Millennium Prize Problems and offered $1,000,000 to anyone who could prove the relationship between P and NP [7], which would answer the question.

1.1 Turing Machines

To understand the classes P and NP, first consider the basic notion of a Turing machine, which is a scientific definition of a computer. It has one or more tapes that hold information. At any time, the Turing machine uses the tape or tapes to decide what to do next. The machine has a prescribed set of instructions to help it decide.

To develop the ideas behind Turing machines further, a distinction is made between the types of decisions that can be made. A deterministic Turing machine (DTM) has a predetermined decision for every type of situation it encounters; thus the term deterministic. A nondeterministic Turing machine (NTM), on the other hand, can have more than one action available for any given situation. In other words, its actions aren't determined ahead of time; therefore it is said to be nondeterministic. Figure 1 makes the difference clearer.

Figure 1: A Scenario Involving Choices. A decision tree: as an algorithm (computer program) progresses, it must pick a choice from three possibilities. Nondeterministic Turing machines (NTMs) have been thought to have an advantage in this case because, at each step, they can pick from any choice. Deterministic Turing machines (DTMs) don't really have the ability to pick, so they're thought to be at a disadvantage. (In the tree, some branches are available only to the NTM, while one path is available to both the NTM and the DTM.)

NTMs have been thought to be able to solve more problems than DTMs because of the ability to make more choices. Here the DTM has only one choice available at each step, while the NTM has three choices available. So this evidence points to DTMs as being more limited in potential than NTMs. In fact, there is enough difference between these two computers that different classes of computation (computational power) were proposed for each.

1.2 P and NP

Right around 1970, Leonid Levin, in the USSR, and Stephen Cook, of the US, independently arrived at the concept of NP-completeness [1].

The idea of NP-completeness comes from two classes of algorithms. Once again, these two classes come from the ideas of DTMs and NTMs. The classes are defined partially by how much can be accomplished within a limited amount of time. This time limit, which will be explained briefly here, is defined by the mathematics of asymptotic analysis, which is described in [10]. Basically, the time limit is given as a function of the problem size. The function can take on many forms, and two common forms are contrasted in Figure 2: a polynomial function (t = O(n^α)) and an exponential function (t = O(α^n)). Note that the polynomial function may take more time for smaller problems, but for the larger problems, which computer scientists are mainly concerned with, the polynomial function takes less time.

Figure 2: The difference between functions (time log(t) versus problem size n, contrasting t = O(α^n) with t = O(n^α)).

The two classes mentioned above are defined for polynomial functions. One class, P, is defined as all problems that can be solved by a DTM in polynomial time (a polynomial function of the problem size). The other class, NP, is defined as the class of all problems that can be solved by an NTM in polynomial time. NP-complete problems are then problems in NP that are at least as hard to solve as the hardest problems in NP. The fundamental question that this paper attempts to answer is whether NP-complete problems can be solved in polynomial time by a DTM; in other words, is P = NP? For more information on P versus NP, one can refer to Sipser [6].

1.3 SAT and k-SAT

SAT essentially asks if a given Boolean formula can be satisfied. In other words, it asks whether the variables in the formula can be given a truth assignment that makes the entire formula true. To thoroughly answer the question, it is required that a certificate be given if the answer is true. This is usually in the form of a particular solution to the question, such as a satisfying assignment for all variables.

k-SAT is a particular variation of SAT in which the formula is organized into clauses. All variables inside a clause are connected via disjunction, and all clauses are connected via conjunction. The k in k-SAT then refers to the number of variables in each clause. More information on SAT and k-SAT can be found in [12].

Conventional SAT and k-SAT solvers function by learning clauses or making random guesses at solutions ([14] and the recent survey [15]). The current best upper bound for 3-SAT is O(1.32113^n) time, and can be found in ISAAC 2010 by Iwama et al. [16]. There is an arXived paper by Hertli, Moser, and Scheder which gives an O(1.321^n) time algorithm [17]. These are all randomized algorithms. There is an arXived, deterministic 3-SAT algorithm by Kutzkov and Scheder that runs in time O(1.439^n) [18].
There is a paper that proposed a polynomial runtime for a SAT variant. V. F. Romanov proposed a 3-SAT algorithm that uses "discordant structures", or set-like operations on a lattice, to represent information and determine solutions in polynomial time [19]. However, Liao presents an argument that P is not equal to NP, using a 3-SAT variant, in [9].

Perhaps Baker, Gill, and Solovay's theory of relativization helps explain why there are not more algorithms that attempt to solve NP-complete problems in polynomial time [20]. Their paper "Relativizations of the P =? NP Question" states that consulting an oracle can lead to situations in which P != NP.

Figure 3: Representations of x_1 (a binary tree over x_0, x_1, x_2 whose leaves are the eight truth assignments 000 through 111, numbered 0 through 7; the values of x_1 across those assignments are 0 0 1 1 0 0 1 1, giving f(x_1) = 0x^0 + 0x^1 + 1x^2 + 1x^3 + 0x^4 + 0x^5 + 1x^6 + 1x^7).

2 Data Structure(s)

One of the most fundamental components of the algorithm will be referred to as the clause polynomial. Figure 3 shows the basic organization of clause polynomials. Essentially, the complete information of one clause is represented by a polynomial of one variable. That is, for each particular truth assignment of all variables, the polynomial "records" whether the clause satisfies this assignment. This record is attached to the polynomial's variable, which orders the truth assignments. Note that the particular truth assignments are shown in red. In this fashion, the same tree that organizes the variables can be repeatedly used, and therefore provides some standard organization to the information. Again, this tree organizes all possible variable assignments.
It does so by starting at the root, proceeding through all Boolean variables in order, and assigning one node to each possible assignment of true or false for each variable. The leaves (at the bottom) thus represent a complete truth assignment for all variables.

The clause polynomial for a single variable is equivalent to a digit in a binary number. To understand this, Figure 4 shows binary numbers composed of one to three bits. The tree in Figure 3 uses a three-bit example, since there are three variables; thus one bit for each variable.

Figure 4: Binary Representations (1 bit: 0, 1; 2 bits: 00 through 11, i.e. 0 through 3; 3 bits: 000 through 111, i.e. 0 through 7).

Figure 5: Constructing Polynomials For Variables.
    For x_0 in the system of x_0, x_1, x_2:
        x^{2^0} (1 + x^{2^1})(1 + x^{2^2}) = x^1 + x^3 + x^5 + x^7
            = 0x^0 + 1x^1 + 0x^2 + 1x^3 + 0x^4 + 1x^5 + 0x^6 + 1x^7
    For x_1 in the system of x_0, x_1, x_2:
        (1 + x^{2^0}) x^{2^1} (1 + x^{2^2}) = x^2 + x^3 + x^6 + x^7
            = 0x^0 + 0x^1 + 1x^2 + 1x^3 + 0x^4 + 0x^5 + 1x^6 + 1x^7

Note, then, that the middle variable, x_1, out of x_0, x_1, and x_2, corresponds with the middle bit. This can be seen in the figure by looking at the middle bit of each three-bit binary representation (which is displayed larger than the other two bits). The sequence is the same as in the binary tree above.

2.1 Constructing Clause Polynomials

A basic clause is of the form

    x_{a_0} ∨ x_{a_1} ∨ ··· ∨ x_{a_z}    (2.1)

The algorithm seeks to construct clause polynomials for clauses in this form. This is a two-step process. The basic theory behind the process is fairly easily explained here, although the technicalities and a proof are saved for Appendix A on page 26.
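The indexing convention just described can be sketched in a few lines of Python. This is an illustration only, not the paper's released code; the convention, matching Figures 3 and 4, is that a complete truth assignment is read as a binary number with x_0 as the least significant bit, and that number selects the term x^t of the clause polynomial.

```python
# Illustration of the indexing convention of Figures 3 and 4 (not the
# paper's own code): a truth assignment of x_0, ..., x_{V-1} is read as a
# binary number, x_0 least significant; that number selects the term x^t
# whose coefficient records whether the clause is satisfied.

def assignment_index(assignment):
    """assignment: list of booleans, x_0 first."""
    return sum(1 << i for i, v in enumerate(assignment) if v)

def variable_polynomial(m, V):
    """Coefficient list of the clause polynomial of the single variable x_m."""
    return [1 if (t >> m) & 1 else 0 for t in range(2 ** V)]

print(assignment_index([True, True, False]))  # (x_0=T, x_1=T, x_2=F) -> 3
print(variable_polynomial(1, 3))              # [0, 0, 1, 1, 0, 0, 1, 1]
```

The second printed list reproduces the coefficients of f(x_1) shown in Figure 3.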
To begin to understand how clause polynomials are constructed, a few patterns are observed. Figure 6, on the next page, helps to demonstrate these. First, the obvious pattern of binary digits appears for the variables on the left-hand side. Each variable x_i has a pattern of 2^i zeros and then 2^i ones, which then repeats. These variables, in conjunction, correspond with binary numbers. In other words, for the least significant bit, the pattern is zero, then one, and then it repeats. For the next least significant bit (or variable), the pattern has two zeros and then two ones, which then repeats. This pattern continues for all variables.

Observing this pattern, an easy way to construct the clause polynomial for a single variable becomes clear. In fact, it can be summarized as a simple formula:

    f(x_m) = ( ∏_{k=0}^{m−1} (1 + x^{2^k}) ) · x^{2^m} · ( ∏_{k=m+1}^{n} (1 + x^{2^k}) )    (2.2)

Here ∏ denotes the (indefinite) product of a function of x; more information about indefinite products can be found in [26]. This is taken over a system of n + 1 variables.

Figure 5 shows how the clause polynomials are actually constructed using the formula. Note that these are all for a single variable. For the x_0 example, m = 0, since the variable is x_0. (In other words, if the variable were x_32, then the algorithm would plug m = 32 into the formula.) Note, then, that n is one less than the total number of variables, so in both cases it is two, since there are three variables. Returning to the x_0 example, the left-hand product has nothing inside it, the middle factor follows from knowing m, and the right-hand product is determined from m and n and the rules of products. For the x_1 example, there is a product on the left side to work with, since m is now one, and k is set to zero for it. It then follows that (1 + x^{2^k}) = (1 + x^{2^0}) = (1 + x^1), since k = 0.
The right side follows similarly, this time with k = 2.

Returning attention to Figure 6, a second pattern can be observed. This is apparent in the blue portion of the figure.

Figure 6: Repeating Patterns of Clauses (columns of 0/1 satisfaction values over the truth assignments of x_0 through x_4 for the clauses x_1 ∨ x_3, x_0 ∨ x_3, x_0 ∨ x_2, x_0 ∨ x_1, x_0 ∨ x_1 ∨ x_3, and x_0 ∨ x_1 ∨ x_2).

Figure 7: Clause Polynomial Construction Algorithm.
    For clauses of the form x_{a_0} ∨ x_{a_1} ∨ ··· ∨ x_{a_z}:
    For each x_{a_i}, calculate g(x_{a_i}) = ( ∏_{k=0}^{a_i − 1} (1 + x^{2^k}) ) · x^{2^{a_i}}
    Let Result = 0
    For h = 0 to n (over all variables):
        If h = a_i for some a_i:
            Result = Result + g(x_h)
        Else:
            Result = Result · (1 + x^{2^h})
    Return Result

Figure 8: Clause Polynomial Examples.
    (x_0 ∨ x_1) from x_0, x_1, x_2:
        g(x_0) = x^{2^0} = 1x^1
        g(x_1) = (1 + x^{2^0}) · x^{2^1} = 1x^2 + 1x^3
        f(x_0 ∨ x_1) = ((x^1) + (x^2 + x^3)) · (1 + x^{2^2})
                     = 0x^0 + 1x^1 + 1x^2 + 1x^3 + 0x^4 + 1x^5 + 1x^6 + 1x^7
    (x_0 ∨ x_1 ∨ x_2) from x_0, x_1, x_2:
        g(x_0) = x^{2^0} = 1x^1
        g(x_1) = (1 + x^{2^0}) · x^{2^1} = 1x^2 + 1x^3
        g(x_2) = (1 + x^{2^0})(1 + x^{2^1}) · x^{2^2} = 1x^4 + 1x^5 + 1x^6 + 1x^7
        f(x_0 ∨ x_1 ∨ x_2) = (x^1) + (x^2 + x^3) + (x^4 + x^5 + x^6 + x^7)
                           = 0x^0 + 1x^1 + 1x^2 + 1x^3 + 1x^4 + 1x^5 + 1x^6 + 1x^7

Here the second pattern is shown in boxes for two-variable clauses. There is the original one-variable pattern of ones and zeros, which is then followed by a series of ones. Note that the series of ones is exactly the same size as the original single-variable pattern. This entire pattern then repeats.

The observed pattern comes from the combination of two variables. The long series of ones comes from the variable that is "larger" than the other. The short series of repeating ones and zeros comes from the "smaller" of the two variables. Together, they form a series that repeats. Another way of looking at it is that both variables form repeating series, and combining these two repeating series creates another repeating series.

In fact, the pattern of repetition continues even as more variables are added to the clause. The purple portion of the figure shows the repetition(s) involved with three variables. Again, combining two variables produces a short, repeating series. When a third variable (that is strictly "larger" than the other two) is added, a longer, yet still repeating, series emerges. Again, the technicalities of all of this are presented (and solved) in Appendix A on page 26.

There is a very simple algorithm to calculate clause polynomials for any number of variables, as long as the clause is in the form given in Equation 2.1. Figure 7 shows the algorithm. The basic idea is to first calculate the (possibly long) sequence of ones that repeats for each variable. This is then shifted into the proper position. It is identical to the clause calculation for a single variable, with the exception that the series is not made to repeat.
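The construction of Figure 7 can be sketched in Python. This is an illustrative sketch, not the paper's released code; it assumes a clause is given as a list of positive variable indices in a system of V variables, and represents polynomials as plain coefficient lists.

```python
# Sketch of the Figure 7 construction (not the paper's own code): build the
# clause polynomial of a positive clause over V variables, with polynomials
# stored as coefficient lists indexed by the power of x.

def poly_mul(p, q):
    """Ordinary polynomial product of two coefficient lists."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_add(p, q):
    """Coefficient-wise sum, padding the shorter list with zeros."""
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def g(a):
    """g(x_a) = (prod_{k < a} (1 + x^(2^k))) * x^(2^a)."""
    r = [1]
    for k in range(a):
        factor = [0] * (2 ** k + 1)
        factor[0] = factor[2 ** k] = 1
        r = poly_mul(r, factor)
    return [0] * (2 ** a) + r          # the final shift by x^(2^a)

def clause_polynomial(clause, V):
    """Clause polynomial of x_{a_0} | x_{a_1} | ... over V variables."""
    result = [0]
    for h in range(V):
        if h in clause:
            result = poly_add(result, g(h))   # variable appears in the clause
        else:
            factor = [0] * (2 ** h + 1)       # multiply by (1 + x^(2^h))
            factor[0] = factor[2 ** h] = 1
            result = poly_mul(result, factor)
    return result

# Reproduces the coefficient lists worked out in Figure 8:
print(clause_polynomial([0, 1], 3))     # [0, 1, 1, 1, 0, 1, 1, 1]
print(clause_polynomial([0, 1, 2], 3))  # [0, 1, 1, 1, 1, 1, 1, 1]
```

The printed lists match the first eight values of the corresponding columns in Figure 6, as the text notes.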
The idea is that, using "smaller" variables, the algorithm will construct short repeating sequences, and add in the appropriate larger variable when the repeating sequence gets large enough. So, essentially, it constructs one large repeating sequence by making the smaller variable sequences repeat, and adding in larger variables when the sequence gets large enough.

Figure 8 shows examples of creating clause polynomials for given problems. Note that the complete set of variables for the original problem must be known ahead of time, just as for clauses of individual variables. As can be seen, the results from this figure correspond with the first eight values of the corresponding clauses in Figure 6.

There is really only one operation remaining before essentially any clause polynomial of the form in Equation 2.1 can be constructed: negation. As it turns out, negation is not much more difficult than constructing non-negated clause polynomials.
Figure 9: Patterns of Clauses With Negation (columns of 0/1 satisfaction values over the truth assignments of x_0 through x_4 for ¬x_0 and for several two- and three-variable clauses in which various literals are negated; the individual columns are walked through in the text below).

For clauses of the form x_{a_0} ∨ x_{a_1} ∨ ··· ∨ x_{a_z}:
    For each negated ¬x_{a_i}, calculate g(x_{a_i}) = ∏_{k=0}^{a_i − 1} (1 + x^{2^k})
    ...
    If h = a_i for some negated a_i:
        Result = Result · x^{2^{a_i}} + g(x_{a_i})
    Else if h = a_i for some non-negated a_i:
        Result = Result + g(x_h)
    Else:
        Result = Result · (1 + x^{2^h})
    ...
Figure 10: Negated Clause Polynomial Construction Algorithm.

Figure 11: Clause Polynomial Examples With Negation.
    (x_0 ∨ ¬x_1) from x_0, x_1, x_2:
        g(x_0) = x^{2^0} = 1x^1
        g(x_1) = 1 + x^{2^0} = 1x^0 + 1x^1
        f(x_0 ∨ ¬x_1) = ((1x^1) · x^{2^1} + (1x^0 + 1x^1)) · (1 + x^{2^2})
                      = 1x^0 + 1x^1 + 0x^2 + 1x^3 + 1x^4 + 1x^5 + 0x^6 + 1x^7
    (x_0 ∨ ¬x_1 ∨ x_2) from x_0, x_1, x_2:
        g(x_0) = x^{2^0} = 1x^1
        g(x_1) = 1 + x^{2^0} = 1x^0 + 1x^1
        g(x_2) = (1 + x^{2^0})(1 + x^{2^1}) · x^{2^2} = 1x^4 + 1x^5 + 1x^6 + 1x^7
        f(x_0 ∨ ¬x_1 ∨ x_2) = ((1x^1) · x^{2^1} + (1x^0 + 1x^1)) + (1x^4 + 1x^5 + 1x^6 + 1x^7)
                            = 1x^0 + 1x^1 + 0x^2 + 1x^3 + 1x^4 + 1x^5 + 1x^6 + 1x^7

Figure 9, on the previous page, displays clause patterns with negated variables. As can be noted from the negated variable ¬x_0 near the left, it is just a reversed version of ones and zeros, compared to the non-negated x_0 displayed beside it. In other words, just as negation reverses true and false values in Boolean algebra, negation reverses zeros and ones in clause polynomials. In fact, this is how all single-variable clauses are negated: simply swap the zeros and ones. There is actually a very simple and effective way to do this: the negated clauses are the same as the regular clauses except that they are not multiplied by the power x^{2^m}, as the regular clauses are. This is equivalent to removing that term from Equation 2.2, hence the resulting equation:

    f(¬x_m) = ( ∏_{k=0}^{m−1} (1 + x^{2^k}) ) · ( ∏_{k=m+1}^{n} (1 + x^{2^k}) )    (2.3)

Again, this is for a system of n + 1 variables, and n can be adjusted accordingly for the size of the problem. Next, attention can be turned to the new patterns observed for multiple-variable clauses. Figure 9 shows the negated values of ¬x_0 next to the combined clause x_0 ∨ x_1.
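The negated-clause variant can likewise be sketched in Python. This is an illustration consistent with Figures 10 and 11, not the paper's released code; a clause is assumed to be a list of (variable index, negated?) pairs, and for a negated variable the trailing x^{2^a} factor of g is dropped (as in Equation 2.3) while the partial result is shifted by 2^a instead.

```python
# Sketch of the Figure 10 construction with negation (not the paper's own
# code). Polynomials are stored as coefficient lists indexed by power of x.

def poly_mul(p, q):
    """Ordinary polynomial product of two coefficient lists."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_add(p, q):
    """Coefficient-wise sum, padding the shorter list with zeros."""
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def ones_run(a):
    """prod_{k < a} (1 + x^(2^k)): a run of 2^a one-coefficients."""
    r = [1]
    for k in range(a):
        factor = [0] * (2 ** k + 1)
        factor[0] = factor[2 ** k] = 1
        r = poly_mul(r, factor)
    return r

def clause_polynomial_neg(clause, V):
    """Clause polynomial over V variables; clause is [(index, negated?), ...]."""
    lits = dict(clause)
    result = [0]
    for h in range(V):
        if h in lits and lits[h]:
            # negated literal: Result = Result * x^(2^h) + g(not x_h)
            result = poly_add([0] * (2 ** h) + result, ones_run(h))
        elif h in lits:
            # positive literal: Result = Result + g(x_h)
            result = poly_add(result, [0] * (2 ** h) + ones_run(h))
        else:
            factor = [0] * (2 ** h + 1)       # multiply by (1 + x^(2^h))
            factor[0] = factor[2 ** h] = 1
            result = poly_mul(result, factor)
    return result

# (x_0 OR not x_1) over x_0, x_1, x_2, matching the first Figure 11 example:
print(clause_polynomial_neg([(0, False), (1, True)], 3))
# [1, 1, 0, 1, 1, 1, 0, 1]
```

The single negated variable ¬x_0 comes out as the reversed pattern 1, 0, 1, 0, ..., as described in the text.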
Note that the pattern of ones coming from x_1 has not changed in any way; however, the repeating pattern of zeros and ones coming from x_0 has been switched, compared to the non-negated equation x_0 ∨ x_1 beside it. So, really, it is becoming evident that negating a variable only swaps the pattern of ones that the variable creates.

This is more evident if the clause x_0 ∨ ¬x_1 is observed beside it. Here the pattern of ones coming from the variable x_1 has changed, since x_1 has been negated. So again, there is evidence that the pattern of ones coming from a variable is simply "exchanged", or placed elsewhere. It can be seen, though, that this does not change the pattern for x_2, as evidenced in the three-variable clause beside it.

Finally, it is seen that swapping the pattern of ones may entail moving a smaller group of ones and zeros. This is evidenced in the final clause in the figure. Here the variable x_2 has been negated, which causes a shift in the largest pattern of ones, which is due to x_2.

Two examples have been worked out in slight detail in Figure 11. Here the pattern is mostly the same, with the exception that negating variables causes some values to be swapped. This is the equivalent of moving the pattern of ones. The adjustments to the algorithm are shown in Figure 10 on the previous page.

Figure 12: Two Clause Satisfaction (the trees and polynomials f(x_0) = 0x^0 + 1x^1 + 0x^2 + 1x^3 and f(x_1) = 0x^0 + 0x^1 + 1x^2 + 1x^3, with the four coefficient combinations marked as satisfying zero clauses, one clause, or two clauses).

3 Two Clause Problems

Now that sufficient background has been presented, the major ideas behind the algorithm can be introduced. Two-clause problems are some of the simplest cases, yet they allow the fundamental ideas to be introduced and studied.
Figure 12 shows the four basic possible combinations between two clauses. If a clause polynomial is considered with any number of terms (powers of x), there are only two possible values (known in mathematics as coefficients) that can be associated with each term: zero or one. If two clauses are considered, there are two possible values for the coefficient of the first polynomial, and two possible values for the coefficient of the second polynomial, for a total of four different combinations.

The figure shows two clause polynomials and the trees associated with them. In the middle, the four possible combinations can be seen. Remembering that the coefficients represent satisfaction for a particular truth assignment, it is possible to note the total satisfaction for two clauses considered simultaneously. When both clauses have a zero coefficient, that indicates that neither clause will satisfy the original question of satisfaction for that particular truth assignment. Looking through the clause trees, it is seen that this assignment corresponds to (x_0 = false, x_1 = false). So this truth assignment won't satisfy either of the clauses. On the other hand, the two truth assignments in the middle of the figure each satisfy one clause. The truth assignment (x_0 = true, x_1 = true) is seen to satisfy both clauses, and so it is a solution to the original problem. Again, the reason it satisfies both clauses is that it has a one for both coefficients, thus signifying that both clauses are satisfied.

One important thing to note here is that between two clauses, there are really only three possible satisfaction results: either zero, one, or both of the clauses are satisfied (for any particular truth assignment).
One general idea that, although perhaps trivial, will be important is that there are really only three types of satisfaction between two clauses. This basic concept will be extended to see that there are really only n + 1 possible types of satisfaction between n clauses.

The fundamental idea here is that problems can be simplified, so that large problems can be dealt with by simply considering the types of satisfaction, which are fairly simple. For two-clause problems, it will really only be necessary to deal with the three types of satisfaction, and the interactions between the different types of satisfaction.

It will be shown how operations of multiplication and addition can be used to work with the three different types of satisfaction, and to solve questions about them. This will be essentially simpler, and in some ways necessary, to overcome the complications of working with all of the possibilities between two or more clauses.

Figure 13: Example Pre-Multiplication Calculation.
    f(x_0) = 0x^0 + 1x^1 + 0x^2 + 1x^3
    f(1) = (1 + x^{2^0})(1 + x^{2^1}) = (1 + x)(1 + x^2) = 1x^0 + 1x^1 + 1x^2 + 1x^3
    h(x_0) = a · f(x_0) + (f(1) − f(x_0))
           = (0x^0 + ax^1 + 0x^2 + ax^3) + (1x^0 + 0x^1 + 1x^2 + 0x^3)
           = 1x^0 + ax^1 + 1x^2 + ax^3

3.1 Manipulating Clause Polynomials

In order to simplify clause calculations into the equivalence classes of satisfaction, it is necessary to modify the original clause polynomials.

The first modification used is a simple one. The algorithm will need to change all of the coefficients that are equal to one into coefficients that are equal to a. This is easy; it is just multiplication by a.
Here's an example:

    f(x) = 0x^0 + 1x^1 + 0x^2 + 1x^3
    a · f(x) = 0x^0 + ax^1 + 0x^2 + ax^3

The next modification is a bit tougher; it is exchanging the one and zero coefficients. This can be done by subtracting the original function from a function of ones. This inverts the bits, since the ones are subtracted from ones to become zeros, and the zeros are subtracted from ones to become ones. However, the function of all ones is needed. For a system of V variables, this function is:

    f(1) = ∏_{k=0}^{V−1} (1 + x^{2^k})    (3.1)

Note that:

    f(1) = 1x^0 + 1x^1 + 1x^2 + ··· + 1x^{2^V − 1}

This completes the requirements for multiplication. The algorithm uses this knowledge to change the one coefficients into a coefficients and the zero coefficients into one coefficients. Then multiplication can be performed, as will be seen.

Figure 13 exhibits the steps taken before a function is ready for multiplication. f(x_0) is given through a tree, before multiplication. The ones must be transformed into a's, and the zeros must be transformed into ones. Note the final result, h(x_0): it is the finished calculation, ready for multiplication. To do this, the function of ones for a two-variable system must be calculated. Then the original function is multiplied by a to transform the one coefficients, and it is also subtracted (separately) from the function of ones to transform the zero coefficients. These are added together for the final result.

Note that this procedure works for any clause in any system with any number of variables. It simply prepares the clause polynomials for special processing, or interactions, with other clause polynomials. This may not seem like much, but it greatly simplifies the system.
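The pre-multiplication step can be sketched as follows. This is an illustration, not the paper's code; the symbol a is kept unevaluated by storing each coefficient as a pair (constant part, multiple of a).

```python
# Sketch of the Section 3.1 pre-multiplication step (not the paper's code):
# h(x) = a*f(x) + (f(1) - f(x)), which sends coefficient 1 to the formal
# symbol a and coefficient 0 to 1. A coefficient is stored as the pair
# (constant part, multiple of a).

def premultiply(f):
    """Apply h(x) = a*f(x) + (f(1) - f(x)) coefficient-wise."""
    # a*f(x) contributes (0, c); f(1) - f(x) contributes (1 - c, 0).
    return [(1 - c, c) for c in f]

f = [0, 1, 0, 1]       # f(x_0) in a two-variable system, as in Figure 13
print(premultiply(f))  # [(1, 0), (0, 1), (1, 0), (0, 1)]: h = 1 + a*x + x^2 + a*x^3
```

The result matches the worked example in Figure 13: ones become a's and zeros become ones.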
Figure 14: Example Pre-Addition Calculation.
    For x_0 in the system of x_0, x_1, x_2:
        (x^{2^0})^2 (1 + (x^{2^1})^2)(1 + (x^{2^2})^2) = x^2 + x^6 + x^10 + x^14
            = 0x^0 + 1x^2 + 0x^4 + 1x^6 + 0x^8 + 1x^10 + 0x^12 + 1x^14
    For (x_0 ∨ x_1 ∨ x_2) from x_0, x_1, x_2:
        g(x_0) = (x^{2^0})^2 = 1x^2
        g(x_1) = (1 + (x^{2^0})^2) · (x^{2^1})^2 = 1x^4 + 1x^6
        g(x_2) = (1 + (x^{2^0})^2)(1 + (x^{2^1})^2) · (x^{2^2})^2 = 1x^8 + 1x^10 + 1x^12 + 1x^14
        f(x_0 ∨ x_1 ∨ x_2) = (x^2) + (x^4 + x^6) + (x^8 + x^10 + x^12 + x^14)
            = 0x^0 + 1x^2 + 1x^4 + 1x^6 + 1x^8 + 1x^10 + 1x^12 + 1x^14

Addition requires modifications too. However, these modifications are of a different variety than those for multiplication. Essentially, the power of x needs to be doubled.

Doubling the power of x can actually be fairly simple. Figure 14 displays a retake of the creation of clause polynomials. The first part of the example is really just Figure 5 on page 5, modified for doubling the power of x. Note that each power of x, once it is essentially calculated, is doubled. The result is that all of the powers of x are multiplied by two in the finished clause polynomial. The second part of the figure is also a redo, this time of a multiple-variable clause taken from Figure 8 on page 7. Again, the main idea is that the powers of the variable x are doubled.

The actual operation of addition is also performed on one special value, which comes in part from the ideas of multiplication. A constant is added to each coefficient of the polynomials. This constant is the same value for all coefficients, so the function of ones once again becomes useful. Unfortunately, in its originally derived form, the powers of x are not the same as the powers of x used in addition.
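The doubling step of Figure 14 can be sketched in a few lines. This is an illustration, not the paper's code: substituting x with x^2 spreads a polynomial's coefficients onto the even powers of x.

```python
# Sketch of the pre-addition modification (not the paper's code): every
# term c*x^t of a clause polynomial is replaced by c*x^(2t), i.e. the
# coefficient list is spread onto the even indices.

def double_powers(f):
    """Return the coefficient list of f(x^2), given the coefficients of f(x)."""
    r = [0] * (2 * len(f) - 1)
    for t, c in enumerate(f):
        r[2 * t] = c
    return r

f = [0, 1, 0, 1, 0, 1, 0, 1]  # f(x_0) over x_0, x_1, x_2, as in Figure 5
print(double_powers(f))       # ones at powers 2, 6, 10, 14, as in Figure 14
```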
So, once again, the same principles are used to double the powers of x for the function of ones. Doing this is as simple as altering every power of x in the original equation (Equation B on page 26). The new formula for the modified function of ones is as follows:

    f(1_modified) = ∏_{k=0}^{V−1} (1 + (x^{2^k})^2)    (3.2)

These operations will be very useful in the material ahead.

Figure 15: Clause Multiplication (the piecewise product of f(x_0) = 0x^0 + 1x^1 + 0x^2 + 1x^3 and f(x_1) = 0x^0 + 0x^1 + 1x^2 + 1x^3, laid out as a multiplication table; the diagonal entries are highlighted in yellow).

Figure 16: Operation Equivalence (the truth table of conjunction, ∧, beside the multiplication table of the bit values 0 and 1, showing the two tables agree).

3.2 Multiplication

It could be said that the algorithm is focused around (arithmetic) multiplication of clause polynomials. Figure 15 details an example. It begins with two clause polynomials, f(x_0) and f(x_1), shown with their associated trees. In the middle, the clause polynomials are broken into pieces and multiplied piecewise. Every part of each polynomial is multiplied with every part of the other polynomial. This corresponds with plain old arithmetic multiplication of two polynomials.

However, an interesting event happens here. The result that lies along the diagonal (shown in yellow) corresponds with the result f(x_0 ∧ x_1). That is, the diagonal represents the corresponding clause polynomial for the result of conjunction. The idea here is that multiplication of clause polynomials can be used to evaluate the original Boolean equation. Figure 16 helps give a partial explanation of why this occurs.
As can be seen from the truth tables, the logical operation of conjunction (∧) and the arithmetic operation of multiplication (·) are equivalent on bit values. So it's not totally unpredictable that multiplication of clause polynomials contains results similar to conjunction.

The diagonal in Figure 15 contains like terms multiplied by like terms. Only the coefficients (the numbers attached to the terms) may differ. The result (along the diagonal) is known in mathematics as the Hadamard product, and the algorithm is concentrated on separating this diagonal from the off-diagonal terms.

The reason why the algorithm is so closely associated with the Hadamard product, or the diagonal terms, is that it is essentially the solution to the original problem. Since it tells which terms are satisfying, the algorithm can simply count the number of satisfying terms along the diagonal. If there are any satisfying terms, then the original problem can be satisfied. Otherwise, it can't be satisfied.

So at this point in the ideology, the original Boolean problem has been transformed from a question about Boolean arithmetic to a question concerning how to get information about the diagonal (or Hadamard product).

[Figure 17: Multiplication and Satisfaction. The four input cases for coefficient multiplication are shown with the original coefficients 0 and 1 on the left, and with the modified coefficients 1 and a on the right; the modified products 1x^2, ax^2, and a^2x^2 correspond to 0, 1, and 2 clauses satisfied.]

3.3 Multiplication and Satisfaction

Figure 17 shows another viewpoint of arithmetic multiplication. This time the four different cases based on the coefficients are shown. In other words, between any two coefficients being multiplied, the input coefficients must be one of four cases.
Also, on the left, the satisfaction between the two original coefficients is shown. Then, on the right, the corresponding modified coefficients and their results are shown. Note that there are really three cases of satisfaction, resulting in 1x^2, ax^2, and a^2x^2.

It's seen that modifying the coefficients for multiplication has allowed the three cases to appear in the results of multiplication. This is one of the keys to getting things to work correctly. The algorithm is really interested in the case where both clauses are satisfied.

Unfortunately, the results of multiplication mix the diagonal and off-diagonal cases together. In order to isolate the diagonal, there will be some tricky interplay between multiplication and addition ahead.

One thing that can be noted is that multiplication with the original clause polynomials isolates the case where everything is maximally satisfied (both clauses are satisfied). This can be seen in Figure 18. Note that only the case on the far right (where both clauses have nonzero coefficients) returns anything other than zero.

[Figure 18: Unmodified Multiplication. The products 0·x times 1·x, 1·x times 0·x, 0·x times 0·x, and 1·x times 1·x yield 0x^2, 0x^2, 0x^2, and 1x^2 respectively.]

Now this will occur for both diagonal and off-diagonal values, but it's very close to a solution. The algorithm wants the diagonal portion of this result, and seeks to somehow eliminate the off-diagonal portion of it.

The next subsection will show how the algorithm can begin to separate diagonal terms from off-diagonal terms. The algorithm's efforts will be concentrated on separating diagonal values from off-diagonal values, since the result will lead to a solution.
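The coefficient cases of Figures 17 and 18 can be tabulated directly. This is a sketch only; the value chosen for a is arbitrary, as the text notes.

```python
# Sketch of Figures 17 and 18: coefficient products for the four input cases.
# 'a' is the modified coefficient marking a satisfied clause (value arbitrary).
a = 2

# Modified coefficients (Figure 17): three distinct outcomes, one per
# satisfaction count: 1*1 -> 1, 1*a and a*1 -> a, a*a -> a^2.
modified = [1 * 1, 1 * a, a * 1, a * a]

# Unmodified coefficients (Figure 18): zero unless both clauses are satisfied.
unmodified = [0 * 1, 1 * 0, 0 * 0, 1 * 1]

print(modified)    # [1, 2, 2, 4]
print(unmodified)  # [0, 0, 0, 1]
```

With the unmodified coefficients only the "both satisfied" case survives, which is the observation the text builds on.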
[Figure 19: Combining Operations For Elimination. The multiplication results c_0(1x^2, a_0x^2, a_0x^2, a_0^2x^2) ≡ (c_0x^2, c_0a_0x^2, c_0a_0^2x^2) and c_1(1x^2, a_1x^2, a_1x^2, a_1^2x^2) ≡ (c_1x^2, c_1a_1x^2, c_1a_1^2x^2), and the addition result (0x^2, 1x^2, 1x^2, 2x^2) + (dx^2, dx^2, dx^2, dx^2) ≡ (dx^2, (d+1)x^2, (d+2)x^2), combine so that the diagonal sums to (0, 0, 0), leaving only the off-diagonal.]

3.4 Eliminating The Diagonal

Figure 20, on the following page, presents the cases for addition alongside the better-explored operation of multiplication. The main thing to note is that addition can return one of three results, just like the equivalence classes of satisfaction. In fact, they are once again related in the same way that multiplication is related to satisfaction.

Figure 21 shows what is proposed for the algorithm. It will take two multiplication operations (three are shown, but this is more than needed), and combine them with one addition, to eliminate everything except for the off-diagonal values, which are shown as gray triangles.

Everything is really summarized as arithmetic in Figure 19. Here, the original results are shown on the left, being modified so that they can be combined together. It can be noted that the middle two cases have always had equal results, so they have been combined. The right side shows only three cases in parentheses, which are the satisfaction equivalences. The algorithm seeks to eliminate these, thus the sums at the bottom are zero, except for the off-diagonal. Here the algorithm comes up with three equations that must be satisfied:

c_0 + c_1 − d = 0    (3.3)
c_0·a_0 + c_1·a_1 − (d + 1) = 0    (3.4)
c_0·a_0^2 + c_1·a_1^2 − (d + 2) = 0    (3.5)

Again, these come from the three cases on the right side of Figure 19, summing the columns.
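These cancellation conditions, written with a general increment e in place of the fixed +1 and +2 steps (the d − ke pattern of Figure 24 uses the same idea), can be checked numerically. This is a sketch, not the paper's implementation; the values are the ones worked out in the mod-7 example of the following pages.

```python
# Numerical check of the diagonal-cancellation conditions, mod p.
p = 7
a0, c0 = 2, 3
a1, c1 = 3, 1
d, e = 4, -2  # the arithmetic progression d, d+e, d+2e = 4, 2, 0

residues = [(c0 * a0**k + c1 * a1**k - (d + k * e)) % p for k in range(3)]
print(residues)  # [0, 0, 0] -- the diagonal cancels mod 7
```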
To summarize all of this again, what is essentially happening is that two multiplications, together with an addition, cancel out all coefficients along the diagonal. However, multiplication creates off-diagonal values, and these values will remain. So in effect, the algorithm isolates off-diagonal values. It does so by using the values calculated from addition to cancel out the diagonal from multiplication. So now the algorithm can concentrate fully on eliminating the terms along the diagonal to isolate the off-diagonal terms. The following subsection will explore how to get these terms to cancel.

First, it's time to introduce modular arithmetic. Subsection C.11 in the Appendix notes some sources that go over modular arithmetic. To simplify the ideas, essentially all calculations are performed as usual, except that an additional operation is performed afterwards. After the calculations are done, to get the result modulo a prime p, the algorithm does the equivalent of taking the remainder after dividing the result by p. Thus, all calculations are represented by an integer greater than or equal to zero and less than p. So all calculations have a very limited range of values that can result.

This introduces a notion of equivalence, where two values are equivalent if they are the same modulo p. This allows the algorithm to find solutions more easily, since the normal restriction that calculations must be equal is relaxed so that calculations must only be equivalent. Equivalence is really introduced so that the equations that must be satisfied can be solved. The relaxed restrictions allow for solutions that can be found easily.
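A minimal sketch of this reduction, using Python's remainder operator:

```python
# Compute as usual, then take the remainder modulo a prime p,
# so every result lies in the range 0..p-1.
p = 7
raw = 3 * 4 + 9       # ordinary arithmetic: 21
reduced = raw % p     # 21 mod 7 = 0
print(raw, reduced)   # 21 0

# Two values are "equivalent" when they agree mod p:
print(21 % p == 0 % p)  # True
```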
[Figure 20: Cases of Multiplication and Addition. Multiplication of the modified coefficients gives 1·x^2, a·x^2, a·x^2, and a^2·x^2; addition of the corresponding coefficients gives 0x^2, 1x^2, 1x^2, and 2x^2. In each case there are three distinct results.]

[Figure 21: Eliminating The Diagonal. Several multiplication results, minus an addition result, leave only the off-diagonal values (shown as gray triangles).]

[Figure 22: Multiplication Results Modulo 7. For each c_0 from 0 to 6, the values c_0·1, c_0·a_0, and c_0·a_0^2 with a_0 = 2, together with their first and second differences, all modulo 7.]

To find solutions that eliminate the diagonal, a finite field is introduced, allowing operations to be conducted modulo a prime p. Figure 22 displays calculations inside a field modulo 7. Here the algorithm selects a_0 = 2, although it could really select any value other than one or zero. It then calculates the results of multiplication (times a constant c_0), which for the various equivalence classes are c_0·1, c_0·a_0, and c_0·a_0^2.

The algorithm is really interested in the second-order difference. This is illustrated in Figure 23. That is, it takes the difference between results, and then takes the difference of these differences. It's fairly straightforward to get the initial results; using arithmetic multiplication works fine. Then the results have to be reduced modulo the prime. As an example, in Figure 23, a_1 is set as three. Then, to calculate a_1^2, take 3^2 = 9. Then 9 = 1·7 + 2, so the result is the remainder modulo 7, which is 2. That gets the initial results, in the field. Then to get the first-order differences, take successive results and subtract the first from the successive.
This is illustrated very clearly in the figure. Then take the difference again, which is called the second-order difference.

The reason that it's so important to get these second-order differences is that they help match up multiplication results. Note that both figures are actually two separate multiplications; Figure 22 uses a_0 and c_0 while Figure 23 uses a_1 and c_1. They both use the same prime, so they are two separate multiplication results that can be matched up.

[Figure 23: Differences of Multiplication Results. 1 ≡ 1 mod 7, a_1 ≡ 3 mod 7, a_1^2 ≡ 2 mod 7; the first differences are 3 − 1 ≡ 2 and 2 − 3 ≡ 6, and the second difference is 6 − 2 ≡ 4.]

The goal is to find the second-order differences that add up to zero mod p. Note that the example with a_1 has a second-order difference of four. 3 + 4 = 7 ≡ 0 mod 7, so the idea is to find a second-order difference of three in the other equation. It is done with c_0 = 3. So now two equations have been found that match up.

To check this result, the equations can be combined:

c_0·1 + c_1·1 ≡ 3·1 + 1·1 ≡ 4 ≡ d + 0e
c_0·a_0 + c_1·a_1 ≡ 3·2 + 1·3 ≡ 2 ≡ d + 1e
c_0·a_0^2 + c_1·a_1^2 ≡ 3·4 + 1·2 ≡ 0 ≡ d + 2e

The main thing to note is that the sequence of results in the equations (4, 2, 0) can be recreated by an addition operation (on clause polynomials). That is, the first equation = 4 + 0(−2), the second = 4 + 1(−2), and the third = 4 + 2(−2).
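The matching procedure just described can be sketched as a short search. This is an illustrative sketch only, following Figures 22 and 23.

```python
# Second-order differences of (c*1, c*a, c*a^2) mod p, and the search
# for a partner whose second difference cancels mod p.
p = 7

def second_difference(c, a, p):
    r0, r1, r2 = c % p, (c * a) % p, (c * a * a) % p
    d1, d2 = (r1 - r0) % p, (r2 - r1) % p
    return (d2 - d1) % p

# a1 = 3, c1 = 1 gives second difference 4, as in Figure 23...
assert second_difference(1, 3, p) == 4

# ...so a second difference of 3 is needed; with a0 = 2 it occurs at c0 = 3.
matches = [c for c in range(1, p) if second_difference(c, 2, p) == 3]
print(matches)  # [3]
```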
[Figure 24: Two Clause Results. Diagonal: (c_0·1) + (c_1·1) − d = 0, (c_0·a_0) + (c_1·a_1) − (d − e) = 0, and (c_0·a_0^2) + (c_1·a_1^2) − (d − 2e) = 0; with the example values, (3·1) + (1·1) − 4 = 0, (3·2) + (1·3) − (4 − 2) = 0, and (3·2^2) + (1·3^2) − (4 − 2·2) = 0. Off-diagonal: (3·1) + (1·1) = 4, (3·2) + (1·3) = 2, and (3·2^2) + (1·3^2) = 0, giving 4b_0 + 2b_1 + 0b_2.]

Figure 24 shows the results of combining equations (addition and multiplication). Here the results of clause operations (multiplication and addition) are combined, and are shown according to whether or not they lie along the diagonal. As mentioned previously, the addition operation cancels out the multiplication along the diagonal. However, the results off the diagonal are not cancelled.

Thus, the results off the diagonal become multipliers for unknown quantities. This is because the polynomials that underlie the equations are also part of this mix, and the resulting polynomial quantities are unknown, even though the multipliers are known. Thus these quantities are represented as b_0, b_1, and b_2.

This allows for a new result. Remember that the original quantities are combined together in the previous equations that were used. Thus, this new result represents a sum of the new quantities, along with their respective multipliers. Therefore, the algorithm arrives at a new result which is a single equation in three unknowns.

Again, time for some ideas. Back in Section 3.3 on page 14, Figure 18 is discussed. Especially important is the fact that the maximally satisfied clauses, both along the diagonal and off the diagonal, can be isolated. In this section it has been shown that equations for off-diagonal values can be obtained.
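The off-diagonal multipliers of Figure 24 can be recomputed directly from the matched constants. A sketch, using the mod-7 example values:

```python
# The combined sums c0*a0^k + c1*a1^k mod 7 give the known multipliers
# of the unknown off-diagonal quantities b0, b1, b2 (Figure 24).
p = 7
a0, c0 = 2, 3
a1, c1 = 3, 1
multipliers = [(c0 * a0**k + c1 * a1**k) % p for k in range(3)]
print(multipliers)  # [4, 2, 0] -> the single equation 4*b0 + 2*b1 + 0*b2
```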
Thinking about off-diagonal values, it is possible to create other equations using multiplication and addition, and these equations can help determine the off-diagonal values by combining them with the first equation and then using linear algebra to determine the values.

So essentially, the algorithm will use multiplication and addition of modified clause polynomials to create equations in three unknowns. These unknowns are the off-diagonal values. Then, the equations created can be combined via linear algebra to determine the off-diagonal values. As mentioned previously, the algorithm can already isolate the maximally satisfied clause values. Unfortunately, the off-diagonal and diagonal values are combined together. However, since the off-diagonal portion can be determined via linear algebra, the diagonal portion can be isolated from the off-diagonal portion using simple algebra together with the results from linear algebra (which give the off-diagonal portion).

The diagonal value of maximally satisfied clauses is thus determined, and this corresponds to the number of satisfied solutions. Now if this number is nonzero, the whole problem can be satisfied. Otherwise, the original problem can't be satisfied.

4 A Two Clause Walkthrough

In order to help understand the algorithm, this walkthrough will start at the beginning of execution and proceed through as many steps as possible. The initial problem will be given as the equation:

(x_0) ∧ (x_0 ∨ x_1)    (4.1)

One of the first things to do is to determine the modulus. Using a prime greater than (2n)^2 should work well, and it's obvious that there are 2 clauses and 2 variables. 17 should suffice, since 17 > (2·2)^2. Next, a value should be given to x, which will be used to calculate the clauses.
There doesn't seem to be any particularly good way to pick, other than using a number greater than one. Three will be used for this example. Next, the values for the clauses can be calculated. Recalling Equation 2.2 on page 5:

f(x_m) = (Π_{k=0}^{m−1} (1 + x^(2^k))) · x^(2^m) · (Π_{k=m+1}^{n} (1 + x^(2^k)))    (4.2)

So,

f(x_0) = (1 + x^(2^1)) · x^(2^0) = (1 + x^2) · x^1 = (1 + 3^2) · 3 = (10)(3) = 30 ≡ 13 mod 17

The second, trickier equation comes from the same subsection. The algorithm to use is in Figure 10 on page 8. This is modified from Figure 7 on page 6. Here,

g(x_0) = () = 1    (4.3)

There is nothing to multiply, so by convention this product will be set equal to one. As for x_1:

g(x_1) = (1 + x^(2^0)) · x^(2^1) = (1 + x^1) · x^2 = (1 + 3) · 3^2 = (4)(9) = 36 ≡ 2 mod 17

Now, proceeding through the algorithm starting at h = 0, that results in the case h = a_i. So the result becomes g(x_0), or 1. Now, h is incremented, so h = 1. This again results in the case h = a_i. So g(x_1) is added to the result, giving 1 + 2 = 3. This finishes the algorithm with the result f(x_0 ∨ x_1) ≡ 3 mod 17.

At some point, the equations should be set up so that Boolean algebra can eventually lead to an answer. Now is a good time, so the various results for a_0 = 2 and c_0 are shown in Figure 25 on the following page. The goal here is to eliminate the diagonal by taking combinations of equations, as explored in the previous section. Observe what happens if another set of equations is created, with a_1 = 3 and c_1 = 1:

c_1·1 ≡ 1·1 ≡ 1
c_1·a_1 ≡ 1·3 ≡ 3
c_1·a_1^2 ≡ 1·3^2 ≡ 9

Proceeding as in Figure 23 on page 17, the first differences of this equation are 3 − 1 = 2 and 9 − 3 = 6. The second difference is 6 − 2 = 4. Recalling that the second differences should add up to the modulus (17), a second difference of 17 − 4 = 13 is required.
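The search for the complementary second difference can be sketched as a scan over c_0, mirroring the table of Figure 25. A sketch only, not the paper's code:

```python
# Scan c0 for the second difference that complements 4, i.e. 17 - 4 = 13,
# with a0 = 2 modulo 17 (as in Figure 25).
p, a0 = 17, 2

def second_difference(c, a, p):
    r = [c % p, (c * a) % p, (c * a * a) % p]
    d1, d2 = (r[1] - r[0]) % p, (r[2] - r[1]) % p
    return (d2 - d1) % p

needed = (p - second_difference(1, 3, p)) % p  # partner used a1 = 3, c1 = 1
hits = [c for c in range(1, p) if second_difference(c, a0, p) == needed]
print(needed, hits)  # 13 [13]
```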
From Figure 25, it can be seen that setting c_0 = 13 in the first set of equations gives a second difference of 13. Thus, these two equations can now be combined successfully. To see this, simply use the original values of a and c in each equation and add them together. For example, using a_0 = 2 and c_0 = 13 for the first equation, and a_1 = 3 and c_1 = 1 for the second equation, this gives the results of combining equations:

c_0 + c_1 ≡ 13 + 1 ≡ 14
c_0·a_0 + c_1·a_1 ≡ 9 + 3 ≡ 12
c_0·a_0^2 + c_1·a_1^2 ≡ 1 + 9 ≡ 10

Now corresponding addition equations can be constructed by observing:

14 ⟺ 14 + 0(−2)
12 ⟺ 14 + 1(−2)
10 ⟺ 14 + 2(−2)

A similar examination can be done to produce two more addition equations. Proceeding by increasing the previous value of a for each successive multiplication equation, recall that the last multiplication equation used a_1 = 3. So pick a_2 = 4. Use c_2 = 1 to start. This gives:

[Figure 25: Multiplication Results with a_0 = 2 Modulo 17. For each c_0 from 1 to 16, the values c_0·1, c_0·a_0, and c_0·a_0^2 modulo 17, together with their first differences and second difference; with a_0 = 2 the second difference equals c_0.]
c_2·1 ≡ 1·1 ≡ 1
c_2·a_2 ≡ 1·4 ≡ 4
c_2·a_2^2 ≡ 1·4^2 ≡ 16

This gives first differences of 4 − 1 = 3 and 16 − 4 = 12, and a second difference of 12 − 3 = 9. So look for another second difference that sums with 9 to a total of 17. 17 − 9 = 8. Consulting Figure 25 again, it can be seen that using c_0 = 8 gives a second difference of 8. To check this and set up the values for the addition equations, use a_0 = 2, c_0 = 8, a_2 = 4, and c_2 = 1:

c_0 + c_2 ≡ 8 + 1 ≡ 9
c_0·a_0 + c_2·a_2 ≡ 16 + 4 ≡ 3
c_0·a_0^2 + c_2·a_2^2 ≡ 15 + 16 ≡ 14 ≡ −3

Once again, observe that successive results differ by −6. For example, 9 + (−6) = 3, and 3 + (−6) = −3. So the second set of addition equations can be set up:

9 ⟺ 9 + 0(−6)
3 ⟺ 9 + 1(−6)
−3 ⟺ 9 + 2(−6)

A third set of addition equations can now be set up, which will conclude the setup. Continuing successively, pick a_3 = 5. With c_3 = 1, this gives:

c_3·1 ≡ 1·1 ≡ 1
c_3·a_3 ≡ 1·5 ≡ 5
c_3·a_3^2 ≡ 1·5^2 ≡ 8

It gives first differences of 5 − 1 = 4 and 8 − 5 = 3, and a second difference of 3 − 4 ≡ 16. 16 + 1 = 17, so a second difference of one is required for the other set of equations. Once again using Figure 25, c_0 = 1 gives this for a_0 = 2. Thus:

c_0 + c_3 ≡ 1 + 1 ≡ 2
c_0·a_0 + c_3·a_3 ≡ 2 + 5 ≡ 7
c_0·a_0^2 + c_3·a_3^2 ≡ 4 + 8 ≡ 12

For this last set of equations we observe:

2 ⟺ 2 + 0(5)
7 ⟺ 2 + 1(5)
12 ⟺ 2 + 2(5)

This setup gives the information needed to set up a system of three equations in three unknowns. The next step involves preparing the clause polynomials for addition and multiplication. Once they are prepared, addition and multiplication can be used to set up the linear algebra system. First off, the function of ones should be calculated.
This is easy; recall Equation 3.1 from page 11:

f(1) = Π_{k=0}^{V−1} (1 + x^(2^k))    (4.4)

Here the function of ones is:

f(1) = (1 + x^(2^0))(1 + x^(2^1)) = (1 + x^1)(1 + x^2) = (1 + 3)(1 + 9) = (4)(10) ≡ 6 mod 17

Now that the function of ones is calculated, the multiplication equations can be calculated.

(to be completed later...)

5 Finishing The Two Clause Example

An important observation that is critical to the entire algorithm can be made at this point. Originally, the gist of the algorithm was to create 3 equations in 3 unknowns. Unfortunately, the equations cannot be independent of one another, since we can only vary 2 parameters of the 3 equations: only the value for d, which corresponds with no clauses satisfied, and the corresponding increment can vary. Since only these two parameters can vary, we must somehow satisfy the 3 unknowns with only two equations.

This is where the observation occurs. The three unknowns aren't completely independent. One of the 3 unknowns depends upon the other 2. In the case of addition, the three unknowns must add up to the function of ones. This is because all three unknowns, taken together, completely occupy all of the x coefficients, which is exactly what the function of ones represents. Similarly, in the case of multiplication, the off-diagonals must add up to the function of ones squared, minus the diagonal, which is the function of ones.

Let's refine our model. First, we only need to come up with two equations. Back around page 18, and in Figure 24, we came up with 3 variables: b_0, b_1, and b_2. b_0 represents 0 clauses satisfied, b_1 represents one clause satisfied, and b_2 represents 2 clauses satisfied. We just realized that these three variables contain one dependent variable among them.
Let's let b_2 be the dependent variable, which makes the equation

4b_0 + 2b_1 + 0b_2 = 4b_0 + 2b_1 + 0(f(1) − b_0 − b_1)    (5.1)

Thus, we can write our old equation in 3 dependent variables as a new equation in 2 independent variables. So now we only need 2 equations, and we have them. We can now solve the system and determine the values associated with no clauses satisfied, one clause satisfied, and two clauses satisfied (f(1) − b_0 − b_1). This is just linear algebra, but care must be taken to ensure that only multiplications are performed instead of division, since we are working inside a finite field.

6 Algorithm Finish (Basics)

In all cases, the algorithm finishes the first portion with a value for n clauses satisfied. If this value is nonzero, we can conclude that the current Boolean equation can be satisfied. Otherwise, there is only a 1-in-p_2 chance that the current Boolean equation can be satisfied (where p_2 is a probability picked ahead of time that will be discussed in greater detail shortly).

But this doesn't return a certificate; that is, an assignment of variables that satisfies the equation. If we think that the equation can be satisfied, we should return a certificate. To return a certificate, multiple occurrences of the basic algorithm are run. This proceeds as follows. First, we assume that the equation can be satisfied; otherwise, simply return that it can't be satisfied and we are done. So the next step is to take the first variable in the original equation and pick a value for it. We'll pick true, although we could pick false. Now we rewrite the original Boolean equation with this variable set as true (which is fairly common knowledge, and may be explained in a future version of this paper). Then we determine if the new equation can be satisfied.
If it can, we proceed to repeat this process with more variables until a certificate is produced. If the equation can't be satisfied with the variable set as true, we try a new equation with the variable set as false. If this works, we again proceed on with more variables. Now there is a slight chance that neither equation seems to be satisfied. If this is the case, we can try again with another prime. What's important here is that there is no need to backtrack.

This is a probabilistic algorithm with bounded error. At the start of the algorithm, we must know the total probability of error, which we will call p_3. Then we can calculate the maximum probability of error for each iteration, which is p_2. Then we can determine how many and what types of primes to use to give us p_2 error at each step. All of this will be discussed after the general n clause algorithm. For now we remark that we can continue to examine a particular variable assignment only for so long, and then we can simply conclude that we've exceeded the error probability and couldn't deduce a certificate; only satisfiability.

The major point to take away here is that an n clause Boolean equation with V variables will eventually require the algorithm for more clauses. Specifically, this is the n + 2V clause algorithm.

7 The n Clause Algorithm

Knowing that the n clause Boolean equation will eventually require the n + 2V clause algorithm, it's best to anticipate this ahead of time.

The actual general case algorithm works by using many versions of the 2 clause algorithm. We can begin to see how this can occur by observing that the 2 clause algorithm returns a value for all clauses satisfied, which can be used as one of the four initial values in a second 2 clause algorithm.
The second 2 clause algorithm has the same general requirements as any 2 clause algorithm. It requires a prime, which we have. It requires the information for 2 clauses: two numbers for addition, and two numbers for multiplication. We can get this by using 4 versions of the 2 clause algorithm, which prepare another iteration of the 2 clause algorithm. Then we proceed again. In this fashion, a tree of 2 clause algorithms can be built up to solve an n clause system.

7.1 Semi-optimized Version

The version presented in this paper has only minor optimizations to enhance it; however, there will be much room left for further improvement.

So far, the best way to optimize seems to be to focus on the main portion of the algorithm, which is the 2 clause algorithm. If we briefly analyze the occurrences of this, we should know that it combines two clauses into one output. Each clause needs a portion defined for addition, and another for multiplication. So for every 2 clauses in, there are 4 total inputs. This then leads to a single output.

So for n clauses, at every level the number of clauses is reduced by half. Similarly, the number of inputs is reduced to a fourth. We can designate the number of such levels as l. Now for n = 2^l clauses there are l levels to the tree. Similarly, there are 4^l inputs needed for all of the leaves. We note that the 4^l inputs are equal to the 2^l clauses squared: 4^l = (2^l)(2^l), just as the inputs are the number of clauses squared. So we can conclude that there are roughly n^2 total instances of the 2 clause algorithm that occur for the n clause algorithm. We also know that the n clause algorithm will be run O(V) times in the case of a satisfied Boolean equation in order to determine a certificate (a set of satisfying variables).
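The level and leaf counts above can be checked directly (a sketch):

```python
# For n = 2^l clauses there are l levels and 4^l leaf inputs,
# and the leaf count equals the number of clauses squared.
for l in range(1, 6):
    n = 2 ** l
    leaves = 4 ** l
    assert leaves == n * n
    print(l, n, leaves)
```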
So we can also conclude that the algorithm will run the 2 clause algorithm no more than O(V)(n + V)^2 times.

Knowing that the current center of attention for the n clause algorithm is the repetition of the 2 clause algorithm, we can optimize the performance of the 2 clause algorithm by making some precomputations. This is because we can use the same values repeatedly for the 2 clause algorithm, with the only exceptions (outside of intermediate computations) being the 4 inputs and single output.

Knowing that the main algorithm will call the two clause algorithm no more than O(V)(n + V)^2 times, we can set all of the primes we use to be approximately this value. Thus we set the primes that we use to be Θ(V(n + V)^2). All calculations can now be performed knowing the value of the prime ahead of time, and this will also help to ensure that the multiplication inputs are rarely ever zero. In the case that they are, the algorithm will have to perform some addition steps to try to ensure that they become nonzero.

Returning to the precalculations, we can set up the equations to proceed as quickly as possible. Calculate the appropriate a_k's and c_k's so that all two clause algorithms run with the same equations. In fact, all of the constants can be precalculated individually in O(V)(n + V)^2 time by simply cycling through every natural number (plus zero) up to the prime p and picking the appropriate value. So the precalculations all take O(V(n + V)^2) time. Note that the two clause algorithms, at this point, should take constant time.

The algorithm is almost complete. Only one important piece remains: the problem that arises when a zero is given as an input for multiplication. There is one good possibility that we can make use of. We use extra variables in our calculations.
To do this, we simply perform clause calculations with double the number of variables in the equation (a different value could be used, but this seems to be a fairly useful amount). Now, when a zero comes up as an input for multiplication, we can simply add in a new equation with the new variables. If the original equation is satisfiable, this should help to change the multiplication input. Otherwise, we will conclude that the equation is questionably unsatisfiable (it is unsatisfiable to within the error probability 1/p).

8 Runtime Analysis / Correctness

As mentioned and explained in the previous section, most components take O(V(n + V)^2) time. The error correction in the case of a zero input is separate from the main algorithm, and obviously can be done in O(V(n + V)^2) time. So for a single prime, the runtime is O(V(n + V)^2).

This is a nonrandomized, deterministic algorithm with bounded error. Each prime used gives an individual error bound of 1/Θ(V(n + V)^2). Together, P primes give approximately an error bound not exceeding probability 1/Θ(V(n + V)^2)^P. So the algorithm runs in time O(P · V(n + V)^2), mistaking satisfiable Boolean expressions as unsatisfiable with an approximate probability of 1/Θ(V(n + V)^2)^P.

9 Conclusion

I hope that this project presents sufficient evidence that P = NP. I've written the last few pages rather hastily, but hope to improve things soon. I'm starting on writing the code for this project, so that everything can be put under better scrutiny.

Thank You!

10 Acknowledgements

I would like to thank God and my family for helping to provide me with this wonderful opportunity.
I would also like to thank Javier Humberto Ospina Holguin for showing me an interesting new world and encouraging me to get involved. I would like to also thank the creator of Bricks, Andreas Rottler, for providing a great game that helped get me excited about problems like this, as well as a way to meet interesting people such as Javier.

I would also like to thank the people who helped to save my life when I was in danger; the Mexican/American family in Putla, Holy Spirit Hospital (which also helped me with my schizophrenia), and some people from Harrisburg, Pennsylvania.

I'd like to thank Timothy Wahls of Dickinson College (formerly of Penn State Harrisburg) for first introducing me to the problem and getting me excited.

I'd also like to thank my friend Holly Dudash for being with me during tough times and helping me through.

I'd like to thank Michael Sipser of MIT and my friend Ruben Spaans for analyzing my ideas and for their invaluable suggestions.

I'd like to thank the many great organizations, communities, and schools that helped me learn and fostered my intellectual growth, including SOS Mathematics (online), Stack Exchange - MathOverflow.com, stackoverflow.com, and cstheory.stackexchange.com. Particularly Ryan O'Donnel, Arturo Magadin, Carl Brannen, and Qiaochu Yuan. Also Penn State University, and in particular Penn State Harrisburg; especially Drs. Null, Bui, Walker, and Wagner. Also, from various other universities, Jacques Carrette, George Frederick Viamontes, Will Jagy, and Leonid Levin.

I'd like to thank everyone that has worked on solving the problem(s) of schizophrenia, and I hope that this paper (although perhaps indirectly) will help with more research.

There are many more people that I regrettably haven't mentioned, but I hold them in high esteem and send out my thanks to them.
I'm just very thankful that I have a chance to be a part of a project that will hopefully do lots of good things.

A Clause Polynomial Technicalities

B Function of Ones