Graph Neural Reasoning May Fail in Certifying Boolean Unsatisfiability
Ziliang Chen∗
Department of Computer Science
Sun Yat-sen University
Guangzhou, Guangdong, China
c.ziliang@yahoo.com

Zhanfu Yang∗
Department of Computer Science
Purdue University
West Lafayette, IN, USA
yang1676@purdue.edu

Abstract

It is feasible and practically valuable to bridge the characteristics of graph neural networks (GNNs) and logical reasoning. Despite considerable effort and witnessed success in solving Boolean satisfiability (SAT), the behavior of GNN-based solvers on more complex predicate logic formulae remains a mystery. In this work, we conjecture, with supporting evidence, that generally defined GNNs present several limitations in certifying the unsatisfiability (UNSAT) of Boolean formulae. This implies that GNNs may fail to learn logical reasoning tasks that contain proving UNSAT as a sub-problem, as most predicate logic formulae do.

1 Introduction

Logical reasoning problems span from simple propositional logic to complex predicate logic and higher-order logic, with known theoretical complexities ranging from NP-completeness [3] to semi-decidability and undecidability [2]. Testing the abilities and limitations of machine learning tools on logical reasoning problems leads to a fundamental understanding of the boundary of learnability and robust AI, helping to address interesting questions about decision procedures in logic, program analysis, and verification as defined in the programming languages community. There has been an array of successes in learning propositional logic reasoning [1, 12], focused on Boolean satisfiability (SAT) problems as defined below.
A Boolean logic formula is an expression composed of Boolean constants (⊤: true, ⊥: false), Boolean variables (x_i), and propositional connectives such as ∧, ∨, ¬ (for example, (x_1 ∨ ¬x_2) ∧ (¬x_1 ∨ x_2)). The SAT problem asks whether a given Boolean formula can be satisfied (evaluated to ⊤) by assigning proper Boolean values to the literal variables.

A crucial feature of the logical reasoning domain (visible in the SAT problem) is that the inputs are often structural: the logical connections between entities (variables, in SAT problems) are the key information. SAT and its variant problems are NP-complete or even harder, a fact that motivates sub-optimal heuristics which trade solver performance for rapid reasoning. Owing to their fast inference, deep learning models are favored as learnable heuristic solvers [1, 12, 16]. Among them, graph neural networks (GNNs) have attracted considerable attention, since the message-passing process lends transparency to the inference within GNNs and can thus reveal the black box behind neural logical reasoning in failure instances.

However, it should be noted that logical decision procedures are more complex than merely reading the formulae correctly. It is unclear whether GNN embeddings (from simple message passing) contain all the information needed to reason about complex logical questions on top of the graph structures derived from the formulae, or whether the complex embedding schemes can be learned by back-propagation. Previous successes on SAT problems argued for the power of GNNs, which can handle NP-complete problems [1, 12], whereas no evidence has been reported for solving semi-decidable predicate logic problems via GNNs.

∗ indicates alphabetic order. Preprint. Under review.
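As a concrete companion to the definition above, SAT on small formulae can be decided by exhaustive enumeration. The sketch below is ours, not a solver from the paper; it uses the common DIMACS-style convention that the integer k encodes x_k and -k encodes ¬x_k.

```python
from itertools import product

def satisfiable(clauses, n_vars):
    """Brute-force SAT check. `clauses` is a list of clauses, each a list of
    signed ints (k for x_k, -k for ¬x_k). Returns a satisfying assignment
    as {var: bool}, or None if the formula is UNSAT."""
    for bits in product([False, True], repeat=n_vars):
        # A clause is satisfied if any of its literals evaluates to True.
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses):
            return dict(enumerate(bits, start=1))
    return None

# (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2) is satisfied whenever x1 = x2.
print(satisfiable([[1, -2], [-1, 2]], 2))  # {1: False, 2: False}
# (x1) ∧ (¬x1) is UNSAT.
print(satisfiable([[1], [-1]], 1))         # None
```

Enumeration takes 2^n assignments, which is exactly the exponential blow-up that motivates the heuristic solvers discussed next.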
The significant difficulty in proving these problems is the requirement of comprehensive reasoning over a search space, since a complete proof covers both SAT and UNSAT (i.e., Boolean unsatisfiability). Perhaps disappointingly, this work presents some theoretical evidence supporting a pessimistic conjecture: GNNs do not simulate the complete solvers needed for UNSAT. Specifically, we find that the neural reasoning procedure learned by GNNs does not simulate algorithms that allow a CNF formula to change over iterations. The complete SAT solvers, e.g., DPLL and CDCL, share the operation of adaptively altering the original Boolean formula to ease the reasoning process, so GNNs do not learn to simulate their behavior. Instead, we prove that, by appropriately defining a specific GNN structure that a parametrized GNN may learn, the local search heuristic of WalkSAT can be simulated by a GNN. From these results, we believe that GNNs cannot solve the UNSAT side of existing logical reasoning problems.

2 Embedding Logic Formulae by GNNs

Preliminary: Graph Neural Networks (GNNs). GNNs refer to the neural architectures devised to learn embeddings of nodes and graphs via message passing. Resembling the generic definition in [14], they consist of two successive operators that propagate messages and evolve the embeddings over iterations:

m_v^{(k)} = Aggregate^{(k)}({h_u^{(k-1)} : u ∈ N(v)}),    h_v^{(k)} = Combine^{(k)}(h_v^{(k-1)}, m_v^{(k)})    (1)

where h_v^{(k)} denotes the hidden state (embedding) of node v in the k-th iteration, and N(v) denotes the neighbors of node v. In each iteration, Aggregate^{(k)}(·) aggregates the hidden states of node v's neighbors {h_u^{(k-1)} : u ∈ N(v)} to produce the new message m_v^{(k)} for node v, and Combine^{(k)}(·, ·) updates the embedding of v from its previous state and its current message.
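Eq. 1 can be made concrete with a toy instance. The aggregate/combine choices below (neighbor sum, and a convex average) are illustrative stand-ins for the learned operators, not anything specified in this paper.

```python
def run_gnn(neighbors, h0, aggregate, combine, K):
    """Generic message passing in the shape of Eq. 1.

    neighbors[v] lists N(v); h0 maps each node to its initial state;
    `aggregate` and `combine` play the roles of Aggregate(k)/Combine(k)
    (here shared across all K iterations)."""
    h = dict(h0)
    for _ in range(K):
        m = {v: aggregate([h[u] for u in neighbors[v]]) for v in h}  # messages
        h = {v: combine(h[v], m[v]) for v in h}                      # state update
    return h

# Path graph 0-1-2 with scalar states: node 0's signal diffuses rightward,
# reaching node 2 only after two rounds of message passing.
nbrs = {0: [1], 1: [0, 2], 2: [1]}
out = run_gnn(nbrs, {0: 1.0, 1: 0.0, 2: 0.0},
              aggregate=sum, combine=lambda h, m: 0.5 * h + 0.5 * m, K=2)
print(out)  # {0: 0.5, 1: 0.5, 2: 0.25}
```

The example also illustrates the point made below: after K iterations a node's embedding can only reflect information within K hops, propagated through a fixed graph.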
After a specific number of iterations (say K in our discussion), the embeddings should capture the global relational information of the nodes and can be fed into other neural network modules for specific tasks. Significant successes of GNNs have been witnessed in relational reasoning [6, 17, 20], where an instance can be decomposed into multiple objects encoded as features together with their relations; this typically suits the representation in Eq. 1. In logical reasoning, by contrast, a Boolean formula is in Conjunctive Normal Form (CNF), consisting of literal and clause items. Exploiting the independence among literals in a CNF (and likewise among clauses), [12] embeds a formula into a bipartite graph whose two disjoint node sets denote the clauses and the literals, respectively. Under this principle, given a literal v as a node, all nodes of clauses that contain the literal are treated as v's neighbors, and vice versa for the node of each clause. We assume Φ is a logic formula in CNF, i.e., a set of clauses, and Ψ(v) ∈ Φ denotes one of the clauses within Φ that contains the literal v. Derived from Eq. 1, GNNs for logical reasoning can be further specified by

m_v^{(k)} = Aggregate_L^{(k)}({h_{Ψ(v)}^{(k-1)} : Ψ(v) ∈ Φ}),    h_v^{(k)} = Combine_L^{(k)}(h_v^{(k-1)}, h_{¬v}^{(k-1)}, m_v^{(k)}),    s.t. ∀ v ∈ L
m_{Ψ(v)}^{(k)} = Aggregate_C^{(k)}({h_u^{(k-1)} : u ∈ Ψ(v)}),    h_{Ψ(v)}^{(k)} = Combine_C^{(k)}(h_{Ψ(v)}^{(k-1)}, m_{Ψ(v)}^{(k)}),    s.t. ∀ Ψ(v) ∈ Φ    (2)

where h_v^{(k)} and h_{Ψ(v)}^{(k)} denote the embeddings of the literal v and the clause Ψ(v) in the k-th iteration (h_{¬v}^{(k)} denotes the embedding of the negation of v); m_v^{(k)} and m_{Ψ(v)}^{(k)} refer to their propagated messages.
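The literal-clause bipartite encoding of [12] described above can be sketched directly. The signed-integer literal convention below is our own choice for illustration.

```python
def cnf_bipartite(clauses):
    """Build the bipartite graph behind Eq. 2: clause nodes are indices into
    `clauses`, literal nodes are signed ints (k for x_k, -k for ¬x_k).
    A literal's neighbors are the clauses containing it, and vice versa."""
    lit_nbrs, clause_nbrs = {}, {}
    for i, clause in enumerate(clauses):
        clause_nbrs[i] = list(clause)           # neighbors of clause node i
        for l in clause:
            lit_nbrs.setdefault(l, []).append(i)
            lit_nbrs.setdefault(-l, [])         # ¬l gets a node even if it
                                                # occurs in no clause
    return lit_nbrs, clause_nbrs

# (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2)
lits, cls = cnf_bipartite([[1, -2], [-1, 2]])
print(lits[1], lits[-1])  # [0] [1]: x1 occurs in clause 0, ¬x1 in clause 1
```

Note that a literal and its negation are distinct nodes with no direct edge; Eq. 2 instead couples them explicitly by passing h_{¬v}^{(k-1)} into Combine_L.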
Since the value of a Boolean formula is determined by the value assignment of its literal variables, Eq. 2 requires only the final-state literal embeddings {h_u^{(K)} : u ∈ L} to predict the logical reasoning result. More specifically, we use L and C to denote a literal set and a clause set (L and C may differ between CNF formulae); Ψ(v) is then a clause, and Ψ(v) denotes a clause including the literal v ∈ L. Note that the graph embeddings for SAT [7] and 2QBF [7] are generally represented by Eq. 2. Hence our further analysis is based on Eq. 2.

3 Certifying UNSAT by GNNs May Fail

Although existing research has shown that GNNs can learn well-performing solvers for satisfiability problems, GNN-based SAT solvers actually perform terribly at predicting unsatisfiability with high confidence [12] when the formula does not have a small unsatisfiable core (a minimal set of clauses sufficient to cause unsatisfiability). In fact, some previous work [1] even removed unsatisfiable formulae from the training set entirely, since they slowed down the whole training process.

The difficulty of proving unsatisfiability is understandable: constructing a proof of unsatisfiability demands complete reasoning over the search space, which is more complex than constructing a proof of satisfiability, which only requires a witness. Traditionally, proving UNSAT relies on recursive decision procedures that either traverse all possible assignments to construct the proof (DPLL [4]), or generate extra constraints from assignment trials that lead to conflicts until some of the constraints contradict each other (CDCL [13]). This line of recursive algorithms includes operation branches that reconfigure the bipartite graph behind the CNF at each step of the search. For a graph that may iteratively change in this way (e.g., under DPLL), perhaps miserably, these recursive processes cannot be simulated by GNNs.

Observation 3.1.
Given a recursive algorithm that iteratively reconfigures the graph, GNNs in Eq. 2 cannot simulate this recursive process.

Proof. Composing the aggregate and combine functions in Eq. 2, we obtain the iterative update rule for the embedding of a literal v:

h_v^{(k)} = Combine_L^{(k)}(h_v^{(k-1)}, h_{¬v}^{(k-1)}, Aggregate_L^{(k)}({h_{Ψ(v)}^{(k-1)} : Ψ(v) ∈ Φ}))
          = Update_L^{(k)}(h_v^{(k-1)}, h_{¬v}^{(k-1)}, {h_{Ψ(v)}^{(k-1)} : Ψ(v) ∈ Φ}),    s.t. v ∈ L    (3)

From this principle, we observe that the embedding update of v in the current iteration relies on the previous-iteration embeddings of v and its negation ¬v, and on the embeddings of all the clauses of the CNF formula that include v (Ψ(v) ∈ Φ). The literal v, its negation ¬v, and the set of clauses containing v are fixed across iterations. Hence, if the update function (Eq. 3) is consistent over the iterations in Eq. 2, i.e., ∀ k ∈ N+, Update_L^{(k)} = Update_L, where Update_L denotes the shared update for literal embeddings, then GNNs derived from Eq. 3 receive a fixed graph generated by a CNF formula as input. However, if a recursive algorithm iteratively changes the graph representing the CNF formula, there must be a clause that was changed (or eliminated) after some iteration, since clauses are permutation-invariant in a CNF formula. Accordingly, there must be a literal embedding whose update depends on a clause set different from that of the previous iteration. This contradicts the literal embedding update function learned via Eq. 3 with ∀ k ∈ N+, Update_L^{(k)} = Update_L. Hence the message passing in GNNs cannot resemble the procedures of the complete SAT solvers. □

In fact, GNNs are rather similar to a subfamily of incomplete SAT solvers (GSAT, WalkSAT [11]), which randomly assign variables and stochastically search for local witnesses.

Observation 3.2. GNNs in Eq. 2 may simulate the local search in WalkSAT.
Proof. Recall the iterative update routine of WalkSAT: starting from a random value assigned to each literal variable in a formula, it randomly chooses an unsatisfied clause in the formula and flips the value of a Boolean variable within that clause; this process repeats until the assignment satisfies all clauses in the formula. Here we construct optimal aggregation and combine functions derived from Eq. 2 that are designed to simulate the procedure of WalkSAT. If the aggregation and combine functions in Eq. 2 approximate these optimal aggregation and combine functions, the GNN may simulate the local search in WalkSAT.

Given a universe of literals, we first initialize the embeddings of the literals and their negations: ∀ v ∈ L, random values of h_v^{(0)} and h_{¬v}^{(0)} are drawn. This assignment can be treated as the Boolean values of the different literals, mapped from a binary vector into a real-valued embedding space over the literals. We also randomly initialize the clause embeddings h_{Ψ(v)}^{(0)} for reasoning about each formula that contains the clause Ψ(v). We now define the optimal aggregation and combine functions encoding literals and clauses respectively, which GNNs in Eq. 2 may learn if they attempt to simulate WalkSAT:

m_v^{(k)} = Aggregate*_L({h_{Ψ(v)}^{(k-1)} : Ψ(v) ∈ Φ})
          = ε^{(k)},  if ∏_{Ψ(v)} ‖h_{Ψ(v)}^{(k-1)}‖ = 0
          = 0,        if ∏_{Ψ(v)} ‖h_{Ψ(v)}^{(k-1)}‖ ≠ 0,    s.t. ∀ v ∈ L    (4)

where Aggregate*_L(·) denotes the optimal aggregation function that propagates literal messages, and m_v^{(k)} denotes the optimally propagated message of literal v in the k-th iteration; 0 is a zero-valued vector; ε^{(k)} denotes a bounded non-zero random vector generated in the k-th iteration; ‖·‖ denotes a vector norm.
h_v^{(k)} = Combine*_L(h_v^{(k-1)}, h_{¬v}^{(k-1)}, m_v^{(k)})
          = h_{¬v}^{(k-1)},  if v = arg max_{u ∈ L} ‖m_u^{(k)}‖ and ‖m_v^{(k)}‖ > 0
          = h_v^{(k-1)},     otherwise,    s.t. ∀ v ∈ L    (5)

where Combine*_L(·) denotes the optimal combine function that iteratively updates literal embeddings with the aid of the optimal message. Eq. 5 encodes the local Boolean variable flipping in WalkSAT: if the norm of m_v^{(k)} is the maximum among all the optimal literal messages, the literal's embedding is replaced by the embedding of its negation; otherwise it keeps its value. The maximization ensures that only one literal embedding is "flipped" per iteration, which simulates the local search behavior. Moreover, a literal embedding selected for update cannot have message 0, which would imply that all the clauses containing the literal are satisfied (see the second condition in Eq. 4). Since satisfied clauses are never selected in WalkSAT, such a literal is likewise not selected for update in this iteration. Finally, a literal included in some unsatisfied clause is picked randomly with some probability; this uncertainty is captured by the randomness of ε^{(k)}.

m_{Ψ(v)}^{(k)} = Aggregate*_C({h_u^{(k-1)} : u ∈ Ψ(v)})
              = h_{Ψ(v)}^{(0)},  if Sigmoid(MLP*_2(Σ_{u ∈ Ψ(v)} MLP*_1(h_u^{(k-1)}))) ≥ 0.5
              = 0,               if Sigmoid(MLP*_2(Σ_{u ∈ Ψ(v)} MLP*_1(h_u^{(k-1)}))) < 0.5,    s.t. ∀ Ψ(v) ∈ Φ    (6)

where Aggregate*_C(·) denotes the optimal aggregation function that conveys the clause embedding messages during reasoning. Note that MLP*_2(Σ_{u ∈ Ψ(v)} MLP*_1(h_u^{(k-1)})) is an instance of Deep Sets [18], a neural network that encodes the set of literal embeddings {h_u^{(k-1)}}_{u ∈ Ψ(v)} of the literals included in a clause Ψ(v). The pooled feature is fed into the sigmoid clause predictor.
We use MLP*_1 and MLP*_2 to denote the implicit optimal prediction for each clause: given the arbitrarily initialized literal embeddings that denote the Boolean value assignment of the literals, the optimal Deep Sets can predict whether the literal-derived clause is satisfied (≥ 0.5) or not (< 0.5). Since the predictor is permutation-invariant to its input, Proposition 3.1 in [15] promises that it can be approximated arbitrarily closely by graph convolution, which exactly corresponds to the parametrized clause aggregation functions in Eq. 2. On the other hand, Eq. 5 promises that the literal embeddings stay among their initial values over iterations; hence the optimal Deep Sets can always judge whether a clause (the set of literals input to the Deep Sets) is satisfied or not.

h_{Ψ(v)}^{(k)} = Combine*_C(h_{Ψ(v)}^{(k-1)}, m_{Ψ(v)}^{(k)})
              = h_{Ψ(v)}^{(k-1)},  if h_{Ψ(v)}^{(k-1)} = m_{Ψ(v)}^{(k)}
              = h_{Ψ(v)}^{(0)},    if ‖h_{Ψ(v)}^{(k-1)}‖ < ‖m_{Ψ(v)}^{(k)}‖
              = 0,                 if ‖h_{Ψ(v)}^{(k-1)}‖ ≥ ‖m_{Ψ(v)}^{(k)}‖,    s.t. ∀ Ψ(v) ∈ Φ    (7)

where Combine*_C(·) denotes the optimal clause combine function. Based on the messages propagated by Eq. 2, it determines how to iteratively update clause embeddings to simulate WalkSAT.

Here we elaborate how the four optimal functions above cooperate to simulate an iteration of WalkSAT. Since GNNs use literal embeddings as the initial input, we first analyze Eq. 6, taking a literal v into consideration. As discussed, this function receives the set of literal embeddings of a clause that contains v and then uses the optimal Deep Sets as an oracle to judge whether this clause is satisfied. The output, the optimal message about the clause, equals the initial embedding of the clause h_{Ψ(v)}^{(0)} if the clause is satisfied, and otherwise becomes 0.
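To make the Deep Sets clause oracle of Eq. 6 concrete, here is a hand-built one-dimensional instantiation: a true literal is embedded as +1.0 and a false one as -1.0, and choosing MLP*_1 = ReLU and MLP*_2(s) = s - 0.5 (our illustrative choice, not learned weights from the paper) makes the sigmoid output cross 0.5 exactly when some literal in the clause is true.

```python
import math

def relu(x):
    return max(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def clause_satisfied(lit_embs):
    """One-dimensional Deep Sets oracle in the shape of Eq. 6: sum-pool
    MLP*_1(h_u) over the clause, apply MLP*_2, then a sigmoid. With ±1.0
    embeddings, the output is >= 0.5 iff some literal embedding is +1.0,
    i.e., iff the clause contains a true literal."""
    pooled = sum(relu(h) for h in lit_embs)   # permutation-invariant pooling
    return sigmoid(pooled - 0.5) >= 0.5

# Clause with one true literal vs. an all-false clause:
print(clause_satisfied([-1.0, 1.0]))   # True
print(clause_satisfied([-1.0, -1.0]))  # False
```

The pooling is permutation-invariant, which is the property the appeal to Proposition 3.1 in [15] relies on.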
This process simulates the logical reasoning on a clause, on which WalkSAT relies to pick an unsatisfied clause and flip one of its variables (see Eq. 5). Based on m_{Ψ(v)}^{(k)}, the optimal clause combine function (Eq. 7) updates an arbitrary clause embedding that contains v. The first branch states that, if the current clause message m_{Ψ(v)}^{(k)} is consistent with the previous clause embedding h_{Ψ(v)}^{(k-1)}, the satisfiability of the clause Ψ(v) did not change in this iteration (a previously satisfied clause is still satisfied, and vice versa); in this case the clause embedding is not updated. The second and third branches state how, when m_{Ψ(v)}^{(k)} and h_{Ψ(v)}^{(k-1)} are inconsistent, the clause embedding h_{Ψ(v)}^{(k)} is updated to convey the current message about whether the clause Ψ(v) is satisfied (return to the initial clause embedding) or not (turn into 0). All updated embeddings of the clauses that contain v, as the neighbors of v, are then fed into the optimal aggregation function of Eq. 4. This function selects the literals v that exist only in satisfied clauses, i.e., ∏_{Ψ(v)} ‖h_{Ψ(v)}^{(k)}‖ ≠ 0 (if there is an unsatisfied clause, its embedding is 0 according to Eq. 7, which leads to ∏_{Ψ(v)} ‖h_{Ψ(v)}^{(k)}‖ = 0), and the message of such a v becomes 0. These results are exploited by Eq. 5, which promises that a literal occurring only in satisfied clauses is never "flipped" (WalkSAT only chooses an unsatisfied clause and selects its variables to flip; a literal not in any unsatisfied clause cannot be chosen). For a literal v contained in at least one unsatisfied clause (∏_{Ψ(v)} ‖h_{Ψ(v)}^{(k)}‖ = 0, since some clause embedding equals 0 by Eq. 7), its literal message is assigned the random vector ε^{(k)}.
This captures the randomness with which WalkSAT selects one of the literals in unsatisfied clauses to flip; the flipping itself is simulated by Eq. 5 as we have discussed.

We further verify that, if a CNF formula is satisfiable, the literal embeddings generated by the optimal aggregation and combine functions, which represent a Boolean assignment satisfying the formula, converge over iterations (this corresponds to the stopping criterion of WalkSAT). Specifically, suppose that in the (k-1)-th iteration Eq. 5 has induced literal embeddings under which all clauses in the formula are satisfied. By Eq. 6 it is obvious that ∀ v ∈ L, m_{Ψ(v)}^{(k)} = h_{Ψ(v)}^{(0)}. Thus we have h_{Ψ(v)}^{(k-1)} = m_{Ψ(v)}^{(k)} and h_{Ψ(v)}^{(k)} = h_{Ψ(v)}^{(k-1)} = h_{Ψ(v)}^{(0)}, since all clauses in the formula were already satisfied before the current iteration. In this case, ∏_{Ψ(v)} ‖h_{Ψ(v)}^{(k-1)}‖ ≠ 0 holds and leads to ∀ v ∈ L, m_v^{(k)} = 0 for this formula (Eq. 4). Given this, Eq. 5 guarantees that all literal embeddings stay consistent with those of the previous iteration. Concluding the analysis above, the optimal aggregation and combine functions (Eqs. 4-7) cooperate to simulate the local search in WalkSAT. □

Failure in 2QBF. Notably, the failure to prove UNSAT is not a problem for GNNs applied to SAT alone, since predicting satisfiability with high confidence is already good enough for a binary distinction. However, 2QBF problems entail solving UNSAT, which inevitably makes GNNs unsuitable for proving the relevant formulae. This probably explains the mystery in [7] of why GNNs trained purely by data-driven supervised learning perform no better than random speculation [16].
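Stepping back, the WalkSAT-style local search that Eqs. 4-7 jointly simulate can be sketched directly (the pure random-walk variant, without GSAT's greedy scoring; our minimal sketch, not the paper's construction). Note how it returns an assignment only on success and, being incomplete, can never certify UNSAT.

```python
import random

def walksat(clauses, n_vars, max_flips=10_000, seed=0):
    """Minimal WalkSAT-style local search over DIMACS-style clauses
    (signed ints: k for x_k, -k for ¬x_k). Returns a satisfying
    assignment if one is found within `max_flips`, else None."""
    rng = random.Random(seed)
    assign = {v: rng.choice([False, True]) for v in range(1, n_vars + 1)}

    def satisfied(clause):
        return any(assign[abs(l)] == (l > 0) for l in clause)

    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c)]
        if not unsat:
            return assign              # local witness found: formula is SAT
        clause = rng.choice(unsat)     # pick an unsatisfied clause at random
        v = abs(rng.choice(clause))    # ...and one of its variables
        assign[v] = not assign[v]      # the local "flip" of Eq. 5
    return None                        # gave up: this never certifies UNSAT

print(walksat([[1, -2], [-1, 2]], 2) is not None)  # True
print(walksat([[1], [-1]], 1))                     # None: UNSAT, but unproven
```

The asymmetry in the return values is exactly the paper's point: simulating this procedure can witness SAT, but exhausting `max_flips` proves nothing about UNSAT.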
4 Further Discussion

In this manuscript we have discussed GNNs that treat the SAT and 2QBF problems as static graphs. We have not considered the shrinkage condition, which may call for dynamic GNNs such as [9], owing to the difficulty of reasoning about dynamic graphs: we would need to prove that all dynamic updating methods are impossible (or not). It ought to be noted that this manuscript does not claim that GNNs are provably unable to achieve UNSAT; that remains an open issue.

Belief propagation (BP) is a Bayesian message-passing method first proposed by [10]; it is a useful approximation algorithm and has been applied to SAT problems (specifically 3-SAT [8]) and 2QBF problems [19]. BP can find witnesses of the unsatisfiability of 2QBF by adopting a bias estimation strategy. Each round of BP allows the user to select the most biased ∀-variable and assign the biased value to that variable. After all the ∀-variables are assigned, the formula is simplified by the assignment and sent to SAT solvers. The procedure returns the assignment as a witness of unsatisfiability if the simplified formula is unsatisfiable, and UNKNOWN otherwise. However, the fact that BP is run for each ∀-variable assignment leads to high overhead, similar to the RL approach given by [5]. It is interesting, however, to see that with the added overhead BP can find witnesses of unsatisfiability, which is what one-shot GNN-based embeddings cannot achieve.

This manuscript revealed a previously unrecognized limitation of GNNs in reasoning about the unsatisfiability of SAT problems. This limitation is probably rooted in the simplicity of the message-passing scheme, which is good enough for embedding graph features, but not for conducting complex reasoning on top of the graph structures.

References

[1] Saeed Amizadeh, Sergiy Matusevych, and Markus Weimer.
Learning to solve circuit-SAT: An unsupervised differentiable approach. In International Conference on Learning Representations, 2019.

[2] Alonzo Church. A note on the Entscheidungsproblem. J. Symb. Log., 1(1):40-41, 1936.

[3] Stephen A. Cook. The complexity of theorem-proving procedures. In Proceedings of the Third Annual ACM Symposium on Theory of Computing, STOC '71, pages 151-158, New York, NY, USA, 1971. ACM.

[4] Martin Davis, George Logemann, and Donald W. Loveland. A machine program for theorem-proving. Commun. ACM, 5(7):394-397, 1962.

[5] Gil Lederman, Markus N. Rabe, and Sanjit A. Seshia. Learning heuristics for automated reasoning through deep reinforcement learning. CoRR, abs/1807.08058, 2018.

[6] Xiaodan Liang, Xiaohui Shen, Jiashi Feng, Liang Lin, and Shuicheng Yan. Semantic object parsing with graph LSTM. CoRR, abs/1603.07063, 2016.

[7] Florian Lonsing, Uwe Egly, and Martina Seidl. Q-resolution with generalized axioms. In Nadia Creignou and Daniel Le Berre, editors, Theory and Applications of Satisfiability Testing, SAT 2016, pages 435-452, Cham, 2016. Springer International Publishing.

[8] M. Mézard, G. Parisi, and R. Zecchina. Analytic and algorithmic solution of random satisfiability problems. Science, 297(5582):812-815, 2002.

[9] Aldo Pareja, Giacomo Domeniconi, Jie Chen, Tengfei Ma, Toyotaro Suzumura, Hiroki Kanezashi, Tim Kaler, and Charles E. Leiserson. EvolveGCN: Evolving graph convolutional networks for dynamic graphs. CoRR, abs/1902.10191, 2019.

[10] Judea Pearl. Reverend Bayes on inference engines: A distributed hierarchical approach. In AAAI, pages 133-136. AAAI Press, 1982.

[11] Bart Selman, Henry A. Kautz, and Bram Cohen. Local search strategies for satisfiability testing.
In Cliques, Coloring, and Satisfiability, volume 26 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science, pages 521-531. DIMACS/AMS, 1993.

[12] Daniel Selsam, Matthew Lamm, Benedikt Bünz, Percy Liang, Leonardo de Moura, and David L. Dill. Learning a SAT solver from single-bit supervision. In ICLR (Poster). OpenReview.net, 2019.

[13] João P. Marques Silva, Inês Lynce, and Sharad Malik. Conflict-driven clause learning SAT solvers. In Handbook of Satisfiability, volume 185 of Frontiers in Artificial Intelligence and Applications, pages 131-153. IOS Press, 2009.

[14] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In ICLR. OpenReview.net, 2019.

[15] Keyulu Xu, Jingling Li, Mozhi Zhang, Simon S. Du, Ken-ichi Kawarabayashi, and Stefanie Jegelka. What can neural networks reason about? CoRR, abs/1905.13211, 2019.

[16] Zhanfu Yang, Fei Wang, Ziliang Chen, Guannan Wei, and Tiark Rompf. Graph neural reasoning for 2-quantified boolean formula solvers. CoRR, abs/1904.12084, 2019.

[17] Kexin Yi, Jiajun Wu, Chuang Gan, Antonio Torralba, Pushmeet Kohli, and Joshua B. Tenenbaum. Neural-symbolic VQA: Disentangling reasoning from vision and language understanding. CoRR, abs/1810.02338, 2018.

[18] Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Ruslan R. Salakhutdinov, and Alexander J. Smola. Deep sets. In Advances in Neural Information Processing Systems, pages 3391-3401, 2017.

[19] Pan Zhang, Abolfazl Ramezanpour, Lenka Zdeborová, and Riccardo Zecchina. Message passing for quantified boolean formulas. CoRR, abs/1202.2536, 2012.

[20] David Zheng, Vinson Luo, Jiajun Wu, and Joshua B. Tenenbaum. Unsupervised learning of latent physical properties using perception-prediction networks. CoRR, abs/1807.09244, 2018.