Each normal logic program has a 2-valued Minimal Hypotheses semantics

Alexandre Miguel Pinto and Luís Moniz Pereira
{amp | lmp}@di.fct.unl.pt
Centro de Inteligência Artificial (CENTRIA), Departamento de Informática
Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa
2829-516 Caparica, Portugal

Abstract. In this paper we explore a unifying approach, that of hypotheses assumption, as a means to provide a semantics for all Normal Logic Programs (NLPs): the Minimal Hypotheses (MH) semantics¹. This semantics takes a positive hypotheses assumption approach as a means to guarantee the desirable properties of model existence, relevance and cumulativity, generalizing the Stable Models semantics in the process. To do so we first introduce the fundamental semantic concept of minimality of assumed positive hypotheses, define the MH semantics, and analyze the semantics' properties and applicability. Indeed, abductive Logic Programming can be conceptually captured by a strategy centered on the assumption of abducibles (or hypotheses). Likewise, the Argumentation perspective of Logic Programs (e.g. [7]) also lends itself to an arguments (or hypotheses) assumption approach. Previous works on Abduction (e.g. [12]) have depicted the atoms of default negated literals in NLPs as abducibles, i.e., assumable hypotheses. We take a complementary and more general view than these works to NLP semantics by employing positive hypotheses instead.

Keywords: Hypotheses, Semantics, NLPs, Abduction, Argumentation.

1 Background

Logic Programs have long been used in Knowledge Representation and Reasoning.

Definition 1. Normal Logic Program. By an alphabet A of a language L we mean (finite or countably infinite) disjoint sets of constants, predicate symbols, and function symbols, with at least one constant. In addition, any alphabet is assumed to contain a countably infinite set of distinguished variable symbols.
A term over A is defined recursively as either a variable, a constant or an expression of the form f(t_1, ..., t_n) where f is a function symbol of A, n its arity, and the t_i are terms. An atom over A is an expression of the form P(t_1, ..., t_n) where P is a predicate symbol of A, and the t_i are terms. A literal is either an atom A or its default negation not A. We dub default literals (or default negated literals, DNLs for short) those of the form not A. A term (resp. atom, literal) is said ground if it does not contain variables. The set of all ground terms (resp. atoms) of A is called the Herbrand universe (resp. base) of A. For short we use H to denote the Herbrand base of A.

A Normal Logic Program (NLP) is a possibly infinite set of rules (with no infinite descending chains of syntactical dependency) of the form

H ← B_1, ..., B_n, not C_1, ..., not C_m   (with m, n ≥ 0 and finite)

where H, the B_i and the C_j are atoms, and each rule stands for all its ground instances. In conformity with the standard convention, we write rules of the form H ← also simply as H (known as "facts"). An NLP P is called definite if none of its rules contain default literals. H is the head of the rule r, denoted by head(r), and body(r) denotes the set {B_1, ..., B_n, not C_1, ..., not C_m} of all the literals in the body of r.

When doing problem modelling with logic programs, rules of the form

⊥ ← B_1, ..., B_n, not C_1, ..., not C_m   (with m, n ≥ 0 and finite)

with a non-empty body are known as a type of Integrity Constraints (ICs), specifically denials, and they are normally used to prune out unwanted candidate solutions.

¹ This paper is a very condensed summary of some of the main contributions of the PhD Thesis [19] of the first author, supported by FCT-MCTES grant SFRH/BD/28761/2006, and supervised by the second author.
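To fix intuitions, ground rules can be represented concretely. The following Python sketch is one possible encoding of Definition 1's rule syntax (a hypothetical representation of ours, not part of the paper): a rule is a head atom plus its positive body atoms B_i and its default-negated atoms C_j.

```python
from typing import FrozenSet, NamedTuple

class Rule(NamedTuple):
    # A ground rule  H <- B1, ..., Bn, not C1, ..., not Cm:
    # head atom, positive body atoms B_i, default-negated atoms C_j.
    head: str
    pos: FrozenSet[str]
    neg: FrozenSet[str]

def is_fact(r: Rule) -> bool:
    # A fact is a rule with an empty body (written simply as H).
    return not r.pos and not r.neg

def is_definite(program) -> bool:
    # A program is definite iff none of its rules contains default literals.
    return all(not r.neg for r in program)

# beach <- not mountain : not a fact, and not part of a definite program
beach = Rule("beach", frozenset(), frozenset({"mountain"}))
```

A `frozenset` body makes rules hashable, so programs can themselves be stored in sets, which is convenient when comparing programs produced by the syntactic transformations recapped later.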
We abuse the 'not' default negation notation by applying it to non-empty sets of literals too: we write not S to denote {not s : s ∈ S}, and confound not not a ≡ a. When S is an arbitrary, non-empty set of literals S = {B_1, ..., B_n, not C_1, ..., not C_m} we use

– S+ to denote the set {B_1, ..., B_n} of positive literals in S
– S− to denote the set {not C_1, ..., not C_m} of negative literals in S
– |S| = S+ ∪ (not S−) to denote the set {B_1, ..., B_n, C_1, ..., C_m} of atoms of S

As expected, we say a set of literals S is consistent iff S+ ∩ |S−| = ∅. We also write heads(P) to denote the set of heads of non-IC rules of a (possibly constrained) program P, i.e., heads(P) = {head(r) : r ∈ P} \ {⊥}, and facts(P) to denote the set of facts of P: facts(P) = {head(r) : r ∈ P ∧ body(r) = ∅}.

Definition 2. Part of body of a rule not in loop. Let P be an NLP and r a rule of P. We write bodȳ(r) to denote the subset of body(r) whose atoms do not depend on r. Formally, bodȳ(r) is the largest set of literals such that

bodȳ(r) ⊆ body(r) ∧ ∀a ∈ |bodȳ(r)| : ∄r_a ∈ P (head(r_a) = a ∧ r_a ↞ r)

where r_a ↞ r means rule r_a depends on rule r, i.e., either head(r) ∈ |body(r_a)| or there is some other rule r′ ∈ P such that r_a ↞ r′ and head(r) ∈ |body(r′)|.

Definition 3. Layer Supported and Classically supported interpretations. We say an interpretation I of an NLP P is layer (classically) supported iff every atom a of I is layer (classically) supported in I. a is layer (classically) supported in I iff there is some rule r in P with head(r) = a such that I ⊨ bodȳ(r) (I ⊨ body(r)). Likewise, we say the rule r is layer (classically) supported in I iff I ⊨ bodȳ(r) (I ⊨ body(r)). Literals in bodȳ(r) are, by definition, not in loop with r.
The notion of layered support requires that all such literals be true under I in order for head(r) to be layer supported in I. Hence, if bodȳ(r) is empty, head(r) is ipso facto layer supported.

Proposition 1. Classical Support implies Layered Support. Given an NLP P, an interpretation I, and an atom a such that a ∈ I, if a is classically supported in I then a is also layer supported in I.

Proof. Knowing that, by definition, bodȳ(r) ⊆ body(r) for every rule r, it follows trivially that a is layer supported in I if a is classically supported in I. ⊓⊔

2 Motivation

"Why the need for another 2-valued semantics for NLPs, since we already have the Stable Models one?" The question has its merit, since the Stable Models (SMs) semantics [9] is exactly what is necessary for so many problem solving issues, but the answer to it is best understood when we ask it the other way around: "Is there any situation where the SMs semantics does not provide all the intended models?" and "Is there any 2-valued generalization of SMs that keeps the intended models it does provide, adds the missing intended ones, and also enjoys the useful properties of guarantee of model existence, relevance, and cumulativity?"

Example 1. A Joint Vacation Problem: Merging Logic Programs. Three friends are planning a joint vacation. The first friend says "If we don't go to the mountains, then we should go to the beach". The second friend says "If we don't go travelling, then we should go to the mountains". The third friend says "If we don't go to the beach, then we should go travelling". We code this information as the following NLP:

beach ← not mountain
mountain ← not travel
travel ← not beach

Each of these individually consistent rules comes from a different friend.
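The dependency relation of Definition 2, and with it the part of a body not in loop with its rule, can be computed mechanically. Below is an illustrative Python sketch (representation and function names are ours: a rule is a (head, positive-body, negative-body) triple over ground atoms); it computes the transitive rule-dependency relation by iteration and then filters each body accordingly.

```python
def rule_deps(P):
    # dep[i] is the set of rule indices j such that rule i depends on
    # rule j: head(j) occurs (positively or negatively) in the body of i,
    # or in the body of some rule i already depends on (Definition 2).
    body = [set(p) | set(n) for _, p, n in P]
    dep = [set() for _ in P]
    changed = True
    while changed:
        changed = False
        for i in range(len(P)):
            reach = set(body[i])
            for j in dep[i]:
                reach |= body[j]          # atoms reachable through dependencies
            new = {j for j, (h, _, _) in enumerate(P) if h in reach}
            if not new <= dep[i]:
                dep[i] |= new
                changed = True
    return dep

def body_not_in_loop(P, i):
    # The subset of rule i's body whose atoms do not depend on rule i:
    # atom a is kept iff no rule with head a depends on rule i.
    dep = rule_deps(P)
    def ok(a):
        return all(not (P[j][0] == a and i in dep[j]) for j in range(len(P)))
    _, pos, neg = P[i]
    return {a for a in pos if ok(a)}, {a for a in neg if ok(a)}

# The three-friends loop plus the fact 'beach' (Example 2's program):
P = [("beach", [], ["mountain"]),
     ("mountain", [], ["travel"]),
     ("travel", [], ["beach"]),
     ("beach", [], [])]
```

For the loop rules of this program every body atom is in loop with its rule, so their not-in-loop bodies are empty, which is exactly why their heads are ipso facto layer supported.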
According to the SMs, each friend had a "solution" (a SM) for his own rule, but when we put the three rules together, because they form an odd loop over negation (OLON), the resulting merged logic program has no SM. If we assume beach is true then we cannot conclude travel, and therefore we conclude mountain is also true; this gives rise to the {beach, mountain, not travel} joint and multi-place vacation solution. The other two (symmetric) solutions are {mountain, not beach, travel} and {travel, not mountain, beach}. This example also shows the importance of having a 2-valued semantics guaranteeing model existence, in this case for the sake of arbitrary merging of logic programs (and for the sake of existence of a joint vacation for these three friends).

Increased Declarativity. An IC is a rule whose head is ⊥, and although such a syntactical definition of IC is generally accepted as standard, the SM semantics can employ odd loops over negation, such as a ← not a, X, to act as ICs, thereby mixing and unnecessarily confounding two distinct Knowledge Representation issues: the one of IC use, and the one of assigning semantics to loops. For the sake of declarativity, rules with ⊥ head should be the only way to write ICs in a LP: no rule, or combination of rules, with head different from ⊥ should possibly act as IC(s) under any given semantics. It is commonly argued that answer sets (or stable models) of a program correspond to the solutions of the corresponding problem, so no answer set means no solution. We argue against this position: "normal" logic rules (i.e., non-ICs) should be used to shape the candidate-solution space, whereas ICs, and ICs alone, should be allowed to play the role of cutting down the undesired candidate-solutions.
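The claim that the merged program has no SM can be checked by brute force: a candidate interpretation is stable iff it equals the least model of its Gelfond-Lifschitz reduct. A minimal Python sketch (our own encoding; since every rule of Example 1 has a single default literal as body, the reduct's least model is just the set of surviving heads):

```python
from itertools import chain, combinations

# Example 1's merged program: an odd loop over negation (OLON).
# Each rule is (head, negated_atom), i.e.  head <- not negated_atom.
rules = [("beach", "mountain"),
         ("mountain", "travel"),
         ("travel", "beach")]
atoms = {"beach", "mountain", "travel"}

def is_stable(model):
    # Gelfond-Lifschitz reduct: keep a rule iff its negated atom is not
    # in the candidate; bodies here are purely negative, so the reduct's
    # least model is simply the set of surviving heads.
    least = {h for h, neg in rules if neg not in model}
    return least == model

candidates = chain.from_iterable(combinations(sorted(atoms), k)
                                 for k in range(len(atoms) + 1))
stable_models = [set(c) for c in candidates if is_stable(set(c))]
# stable_models == []: the merged OLON program has no stable model
```

Dropping any one of the three rules makes a stable model reappear, which is precisely the fragility under merging that the text points out.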
In this regard, an IC-free NLP should always have a model; if some problem modelled as an NLP with ICs has no solution (i.e., no model), that should be due only to the ICs, not to the "normal" rules.

Argumentation. From an argumentation perspective, the author of [7] states: "Stable extensions do not capture the intuitive semantics of every meaningful argumentation system.", where the "stable extensions" have a 1-to-1 correspondence to the SMs ([7]), and also

"Let P be a knowledge base represented either as a logic program, or as a nonmonotonic theory or as an argumentation framework. Then there is not necessarily a "bug" in P if P has no stable semantics. This theorem defeats an often held opinion in the logic programming and nonmonotonic reasoning community that if a logic program or a nonmonotonic theory has no stable semantics then there is something "wrong" in it."

Thus, a criterion different from the stability one must be used in order to effectively model every argumentation framework adequately.

Arbitrary Updates and/or Merges. One of the main goals behind the conception of non-monotonic logics was the ability to deal with the changing, evolving, updating of knowledge. There are scenarios where it is possible and useful to combine several Knowledge Bases (possibly from different authors or sources) into a single one, and/or to update a given KB with new knowledge. Assuming the KBs are coded as IC-free NLPs, as well as the updates, the resulting KB is also an IC-free NLP. In such a case, the resulting (merged and/or updated) KB should always have a semantics. This should be true particularly in the case of NLPs, where no negations are allowed in the heads of rules: here no contradictions can arise because there are no conflicting rule heads.
The lack of such a guarantee when the underlying semantics used is the Stable Models, for example, compromises the possibility of arbitrarily updating and/or merging KBs (coded as IC-free NLPs). In the case of self-updating programs, the desirable "liveness" property is put into question, even without outside intervention.

These motivational issues raise the questions "Which should be the 2-valued models of an NLP when it has no SMs?", "How do these relate to SMs?", "Is there a uniform approach to characterize both such models and the SMs?", and "Is there any 2-valued generalization of the SMs that encompasses the intuitive semantics of every logic program?". Answering such questions is a paramount motivation and thrust in this paper.

2.1 Intuitively Desired Semantics

It is commonly accepted that the non-stratification of the default not is the fundamental ingredient which allows for the possibility of existence of several models for a program. The non-stratified DNLs (i.e., those in a loop) of a program can thus be seen as non-deterministically assumable choices. The rules in the program, as well as the particular semantics we wish to assign them, are what constrains which sets of those choices we take as acceptable.

Programs with OLONs (cf. Example 1) are said to be "contradictory" by the SMs community because the latter takes a negative hypotheses assumption approach, consistently maximizing them, i.e., DNLs are seen as assumable/abducible hypotheses. In Example 1, though, assuming whichever maximal negative hypotheses leads to a positive contradictory conclusion via the rules. On the other hand, if we take a consistent minimal positive hypotheses assumption (where the assumed hypotheses are the atoms of the DNLs), then it is impossible to achieve a contradiction, since no negative conclusions can be drawn from NLP rules.
Minimizing positive assumptions implies the maximizing of negative ones, but gaining an extra degree of freedom.

2.2 Desirable Formal Properties

Only ICs (rules with ⊥ head) should "endanger" model existence in a logic program. Therefore, a semantics for NLPs with no ICs should guarantee model existence (which, e.g., does not occur with SMs). Relevance is also a useful property, since it allows the development of top-down query-driven proof-procedures that allow for the sound and complete search for answers to a user's query. This is useful in the sense that in order to find an answer to a query only the relevant part of the program must be considered, whereas with a non-relevant semantics the whole program must be considered, with a corresponding performance disadvantage compared to a relevant semantics.

Definition 4. Relevant part of P for atom a. The relevant part of NLP P for atom a is

Rel_P(a) = {r_a ∈ P : head(r_a) = a} ∪ {r ∈ P : ∃r_a ∈ P, head(r_a) = a ∧ r_a ↞ r}

Definition 5. Relevance (adapted from [5]). A semantics Sem for logic programs is said Relevant iff for every program P

∀a ∈ H_P : (∀M ∈ Models_Sem(P), a ∈ M) ⇔ (∀M_a ∈ Models_Sem(Rel_P(a)), a ∈ M_a)

Moreover, cumulativity also plays a role in performance enhancement, in the sense that only a semantics enjoying this property can take advantage of storing intermediate lemmas to speed up future computations.

Definition 6. Cumulativity (adapted from [4]). Let P be an NLP, and a, b two atoms of H_P.
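Definition 4's relevant part can be computed by plain reachability over body atoms. The sketch below (illustrative Python; the function name rel and the triple-based rule encoding are ours) collects every rule for a together with everything those rules transitively depend on.

```python
def rel(P, a):
    # Rel_P(a): every rule for a, plus every rule some rule for a
    # depends on, via reachability over the atoms in rule bodies.
    todo, seen, out = [a], set(), []
    while todo:
        x = todo.pop()
        if x in seen:
            continue
        seen.add(x)
        for h, pos, neg in P:
            if h == x:
                out.append((h, pos, neg))
                todo.extend(set(pos) | set(neg))   # atoms x's rules mention
    return out

# Example 1's program: every rule is relevant for 'beach',
# since the three rules form one loop.
P = [("beach", [], ["mountain"]),
     ("mountain", [], ["travel"]),
     ("travel", [], ["beach"])]
```

Under a relevant semantics a query for beach may be answered from rel(P, "beach") alone; any rule outside that set (say, an unrelated fact z) is never visited.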
A semantics Sem is Cumulative iff the semantics of P remains unchanged when any atom true in the semantics is added to P as a fact:

∀a,b ∈ H_P : (∀M ∈ Models_Sem(P), a ∈ M) ⇒ ((∀M ∈ Models_Sem(P), b ∈ M) ⇔ (∀M_a ∈ Models_Sem(P ∪ {a}), b ∈ M_a))

Finally, each individual SM of a program, by being minimal and classically supported, should be accepted as a model according to every 2-valued semantics, and hence every 2-valued semantics should be a model conservative extension of Stable Models.

3 Syntactic Transformations

It is commonly accepted that definite LPs (i.e., without default negation) have only one 2-valued model: their least model, which coincides with the Well-Founded Model (WFM [8]). This is also the case for locally stratified LPs. In such cases we can use a syntactic transformation on a program to obtain that model. In [2] the author defined the program Remainder (denoted by P̂) for calculating the WFM, which coincides with the unique perfect model for locally stratified LPs. The Remainder can thus be seen as a generalization for NLPs of lfp(T), the latter obtainable only for the subclass of definite LPs. We recap here the definitions necessary for the Remainder because we will use it in the definition of our Minimal Hypotheses semantics.

The intuitive gist of MH semantics (formally defined in section 4) is as follows: an interpretation M_H is a MH model of program P iff there is some minimal set of hypotheses H such that the truth-values of all atoms of P become determined assuming the atoms in H as true. We resort to the program Remainder as a deterministic (and efficient, i.e., computable in polynomial time) means to find out whether the truth-values of all literals became determined; we will see below how the Remainder can be used to find this out.
3.1 Program Remainder

For self-containment, we include here the definitions of [2] upon which the Remainder relies, and adapt them where convenient to better match the syntactic conventions used throughout this paper.

Definition 7. Program transformation (def. 4.2 of [2]). A program transformation is a relation ↦ between ground logic programs. A semantics S allows a transformation ↦ iff Models_S(P_1) = Models_S(P_2) for all P_1 and P_2 with P_1 ↦ P_2. We write ↦* to denote the fixed point of the ↦ operation, i.e., P ↦* P′ where ∄P″ ≠ P′ : P′ ↦ P″. It follows that P ↦* P′ ⇒ P′ ↦ P′.

Definition 8. Positive reduction (def. 4.6 of [2]). Let P_1 and P_2 be ground programs. Program P_2 results from P_1 by positive reduction (P_1 ↦_P P_2) iff there is a rule r ∈ P_1 and a negative literal not b ∈ body(r) such that b ∉ heads(P_1), i.e., there is no rule for b in P_1, and P_2 = (P_1 \ {r}) ∪ {head(r) ← (body(r) \ {not b})}.

Definition 9. Negative reduction (def. 4.7 of [2]). Let P_1 and P_2 be ground programs. Program P_2 results from P_1 by negative reduction (P_1 ↦_N P_2) iff there is a rule r ∈ P_1 and a negative literal not b ∈ body(r) such that b ∈ facts(P_1), i.e., b appears as a fact in P_1, and P_2 = P_1 \ {r}.

Negative reduction is consistent with classical support, but not with the layered one. Therefore, we now introduce a layered version of the negative reduction operation.

Definition 10. Layered negative reduction. Let P_1 and P_2 be ground programs. Program P_2 results from P_1 by layered negative reduction (P_1 ↦_LN P_2) iff there is a rule r ∈ P_1 and a negative literal not b ∈ bodȳ(r) such that b ∈ facts(P_1), i.e., b appears as a fact in P_1, and P_2 = P_1 \ {r}.

The Strongly Connected Components (SCCs) of rules of a program can be calculated in polynomial time [20].
Once the SCCs of rules have been identified, the bodȳ(r) subset of body(r), for each rule r, is identifiable in linear time: one needs to check just once, for each literal in body(r), whether it is also in bodȳ(r). Therefore, these polynomial time complexity operations are all the added complexity Layered negative reduction adds over regular Negative reduction.

Definition 11. Success (def. 5.2 of [2]). Let P_1 and P_2 be ground programs. Program P_2 results from P_1 by success (P_1 ↦_S P_2) iff there are a rule r ∈ P_1 and a fact b ∈ facts(P_1) such that b ∈ body(r), and P_2 = (P_1 \ {r}) ∪ {head(r) ← (body(r) \ {b})}.

Definition 12. Failure (def. 5.3 of [2]). Let P_1 and P_2 be ground programs. Program P_2 results from P_1 by failure (P_1 ↦_F P_2) iff there are a rule r ∈ P_1 and a positive literal b ∈ body(r) such that b ∉ heads(P_1), i.e., there are no rules for b in P_1, and P_2 = P_1 \ {r}.

Definition 13. Loop detection (def. 5.10 of [2]). Let P_1 and P_2 be ground programs. Program P_2 results from P_1 by loop detection (P_1 ↦_L P_2) iff there is a set A of ground atoms such that

1. for each rule r ∈ P_1, if head(r) ∈ A, then body(r) ∩ A ≠ ∅,
2. P_2 := {r ∈ P_1 | body(r) ∩ A = ∅},
3. P_1 ≠ P_2.

We are not entering here into the details of the loop detection step, but just take note that 1) such a set A corresponds to an unfounded set (cf. [8]); 2) loop detection is computationally equivalent to finding the SCCs [20], and is known to be of polynomial time complexity; and 3) the atoms in the unfounded set A have all their corresponding rules involved in SCCs where all heads of rules in loop appear positive in the bodies of the rules in loop.

Definition 14. Reduction (def. 5.15 of [2]). Let ↦_X denote the rewriting system: ↦_X := ↦_P ∪ ↦_N ∪ ↦_S ∪ ↦_F ∪ ↦_L.

Definition 15. Layered reduction.
Let ↦_LX denote the rewriting system: ↦_LX := ↦_P ∪ ↦_LN ∪ ↦_S ∪ ↦_F ∪ ↦_L.

Definition 16. Remainder (def. 5.17 of [2]). Let P be a program. Let P̂ satisfy ground(P) ↦*_X P̂. Then P̂ is called the remainder of P, and is guaranteed to exist and to be unique to P. Moreover, the calculus of ↦*_X is known to be of polynomial time complexity [2]. When convenient, we write Rem(P) instead of P̂.

An important result from [2] is that the WFM of P is such that WFM+(P) = facts(P̂), WFM+u(P) = heads(P̂), and WFM−(P) = H_P \ WFM+u(P), where WFM+(P) denotes the set of atoms of P true in the WFM, WFM+u(P) denotes the set of atoms of P true or undefined in the WFM, and WFM−(P) denotes the set of atoms of P false in the WFM.

Definition 17. Layered Remainder. Let P be a program. Let the program P̊ satisfy ground(P) ↦*_LX P̊. Then P̊ is called a layered remainder of P. Since the calculus of P̊ is the same as that of P̂, apart from the difference between ↦_LN and ↦_N, it is trivial that P̊ is also guaranteed to exist and to be unique for P. Moreover, the calculus of ↦*_LX is likewise of polynomial time complexity, because ↦_LN is also of polynomial time complexity.

The remainder's rewrite rules are provably confluent, i.e., independent of application order. The layered remainder's rules differ only in the negative reduction rule, and the confluence proof of the former is readily adapted to the latter.

Example 2. P̊ versus P̂. Recall the program from Example 1, but now with an additional fourth stubborn friend who insists on going to the beach no matter what.

P = { beach ← not mountain
      mountain ← not travel
      travel ← not beach
      beach }

We can clearly see that the single fact rule does not depend on any other, and that the remaining three rules forming the loop all depend on each other and on the fact rule beach.
P̂ is the fixed point of ↦_X, i.e., the fixed point of ↦_P ∪ ↦_N ∪ ↦_S ∪ ↦_F ∪ ↦_L. Since beach is a fact, the ↦_N transformation deletes the travel ← not beach rule; i.e., P ↦_N P′ is such that

P′ = { beach ← not mountain
       mountain ← not travel
       beach ← }

Now in P′ there are no rules for travel, and hence we can apply the ↦_P transformation, which deletes the not travel from the body of mountain's rule; i.e., P′ ↦_P P″ where

P″ = { beach ← not mountain
       mountain ←
       beach ← }

Finally, in P″ mountain is a fact, and hence we can again apply ↦_N, obtaining P″ ↦_N P‴ where

P‴ = { mountain ←
       beach ← }

upon which no more transformations can be applied, so P̂ = P‴. Instead, P̊ = P is the fixed point of ↦_LX, i.e., the fixed point of ↦_P ∪ ↦_LN ∪ ↦_S ∪ ↦_F ∪ ↦_L.

4 Minimal Hypotheses Semantics

4.1 Choosing Hypotheses

The abductive perspective of [12] depicts the atoms of DNLs as abducibles, i.e., assumable hypotheses. Atoms of DNLs can be considered as abducibles, i.e., assumable hypotheses, but not all of them. When we have a locally stratified program we cannot really say there is any degree of freedom in assuming truth values for the atoms of the program's DNLs. So, we realize that only the atoms of DNLs involved in SCCs² are eligible to be considered further assumable hypotheses.

² Strongly Connected Components, as in Examples 1 and 2.

Both the SMs and the approach of [12], when taking the abductive perspective, adopt negative hypotheses only. This approach works fine for some instances of non-well-founded negation such as loops (in particular, for even loops over negation), but not for odd loops over negation like, e.g., a ← not a: assuming not a would lead to the conclusion that a is true, which contradicts the initial assumption.
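The reduction sequence above can be replayed mechanically. Below is an illustrative Python sketch (our own encoding: a rule is a (head, positive-body, negative-body) triple over ground atoms) that applies positive reduction, negative reduction, success and failure to a fixed point; loop detection is omitted, since this example never triggers it.

```python
def remainder(rules):
    # Fixed point of ->P, ->N, ->S, ->F (loop detection ->L is omitted:
    # this example has no purely positive loops).
    rules = [(h, set(p), set(n)) for h, p, n in rules]
    changed = True
    while changed:
        changed = False
        heads = {h for h, _, _ in rules}
        facts = {h for h, p, n in rules if not p and not n}
        out = []
        for h, p, n in rules:
            if n & facts or p - heads:   # ->N / ->F: delete the rule
                changed = True
                continue
            if n - heads or p & facts:   # ->P / ->S: shrink the body
                n, p, changed = n & heads, p - facts, True
            out.append((h, p, n))
        rules = out
    return rules

# Example 2's program, with the stubborn friend's fact 'beach':
P = [("beach", [], ["mountain"]),
     ("mountain", [], ["travel"]),
     ("travel", [], ["beach"]),
     ("beach", [], [])]
# remainder(P) leaves exactly the two facts 'mountain' and 'beach'
```

Each pass recomputes heads and facts and either deletes a rule or strictly shrinks a body, so the iteration terminates; by the confluence noted above, the order in which the four operations fire does not affect the result.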
To overcome this problem, we generalized the hypotheses assumption perspective to allow the adoption not only of negative hypotheses, but also of positive ones. Having taken this generalization step, we realized that positive hypotheses assumption alone is sufficient to address all situations, i.e., there is no need for both positive and negative hypotheses assumption. Indeed, because we minimize the positive hypotheses we are with one stroke maximizing the negative ones, which has been the traditional way of dealing with the CWA, and also with stable models, because the latter's requirement of classical support minimizes models.

In Example 1 we saw three solutions, each assuming as true one of the DNLs in the loop. Adding a fourth stubborn friend insisting on going to the beach, as in Example 2, should still permit the two solutions {beach, mountain, not travel} and {travel, not mountain, beach}. The only way to permit both these solutions is by resorting to the Layered Remainder, and not to the Remainder, as a means to identify the set of assumable hypotheses. Thus, all the literals of P that are not determined false in P̊ are candidates for the role of hypotheses we may consider to assume as true. Merging this perspective with the abductive perspective of [12] (where the DNLs are the abducibles) we come to the following definition of the Hypotheses set of a program.

Definition 18. Hypotheses set of a program. Let P be an NLP. We write Hyps(P) to denote the set of assumable hypotheses of P: the atoms that appear as DNLs in the bodies of rules of P̊. Formally, Hyps(P) = {a : ∃r ∈ P̊, not a ∈ body(r)}.

One can define a classical support compatible version of the Hypotheses set of a program, using to that effect the Remainder instead of the Layered Remainder. I.e.,

Definition 19. Classical Hypotheses set of a program. Let P be an NLP.
We write CHyps(P) to denote the set of assumable hypotheses of P consistent with the classical notion of support: the atoms that appear as DNLs in the bodies of rules of P̂. Formally, CHyps(P) = {a : ∃r ∈ P̂, not a ∈ body(r)}.

Here we take the layered support compatible approach and, therefore, we will use the Hypotheses set as in Definition 18. Since CHyps(P) ⊆ Hyps(P) for every NLP P, there is no generality loss in using Hyps(P) instead of CHyps(P), while using Hyps(P) allows for some useful semantics properties examined in the sequel.

4.2 Definition

Intuitively, a Minimal Hypotheses model of a program is obtained from a minimal set of hypotheses which is sufficiently large to determine the truth-value of all literals via the Remainder.

Definition 20. Minimal Hypotheses model. Let P be an NLP. Let Hyps(P) be the set of assumable hypotheses of P (cf. Definition 18), and H some subset of Hyps(P). A 2-valued model M of P is a Minimal Hypotheses model of P iff

M+ = facts(Rem(P ∪ H)) = heads(Rem(P ∪ H))

where H = ∅ or H is non-empty set-inclusion minimal (the set-inclusion minimality is considered only for non-empty Hs). I.e., the hypotheses set H is minimal but sufficient to determine (via the Remainder) the truth-value of all literals in the program.

We already know that WFM+(P) = facts(P̂) and that WFM+u(P) = heads(P̂). Thus, whenever facts(P̂) = heads(P̂) we have WFM+(P) = WFM+u(P), which means WFMu(P) = ∅. Moreover, whenever WFMu(P) = ∅ we know, by Corollary 5.6 of [8], that the 2-valued model M such that M+ = facts(P̂) is the unique stable model of P. Thus, we conclude that, as an alternative equivalent definition, M is a Minimal Hypotheses model of P iff M is a stable model of P ∪ H, where H is empty or a non-empty set-inclusion minimal subset of Hyps(P).
Moreover, it follows immediately that every SM of P is a Minimal Hypotheses model of P. In Example 2 we can thus see that we have the two models {beach, mountain, not travel} and {travel, beach, not mountain}. This is the case because the addition of the fourth stubborn friend does not change the set Hyps(P), which is based upon the Layered Remainder, and not on the Remainder.

Example 3. Minimal Hypotheses models for the vacation with passport variation. Consider again the vacation problem from Example 1, with a variation including the need for valid passports for travelling:

P = { beach ← not mountain
      mountain ← not travel
      travel ← not beach, not expired_passport
      passport_ok ← not expired_passport
      expired_passport ← not passport_ok }

We have P = P̊ = P̂ and thus Hyps(P) = {beach, mountain, travel, passport_ok, expired_passport}. Let us see which are the MH models for this program. H = ∅ does not yield a MH model. Assuming H = {beach} we have

P ∪ H = P ∪ {beach} = { beach ← not mountain
                         mountain ← not travel
                         travel ← not beach, not expired_passport
                         beach
                         passport_ok ← not expired_passport
                         expired_passport ← not passport_ok }

and

Rem(P ∪ H) = { mountain
               beach
               passport_ok ← not expired_passport
               expired_passport ← not passport_ok }

which means H = {beach} is not sufficient to determine the truth values of all literals of P. One can easily see that the same happens for H = {mountain} and for H = {travel}: in either case the literals passport_ok and expired_passport remain non-determined.
If we assume H = {expired_passport} then P ∪ H is

{ beach ← not mountain
  mountain ← not travel
  travel ← not beach, not expired_passport
  passport_ok ← not expired_passport
  expired_passport ← not passport_ok
  expired_passport }

and

Rem(P ∪ H) = { mountain
               expired_passport }

which means M_expired_passport+ = facts(Rem(P ∪ H)) = heads(Rem(P ∪ H)) = {mountain, expired_passport}, i.e., M_expired_passport = {not beach, mountain, not travel, not passport_ok, expired_passport}, is a MH model of P. Since assuming H = {expired_passport} alone is sufficient to determine all literals, there is no other set of hypotheses H′ of P such that H′ ⊃ {expired_passport} (notice the strict ⊃, not ⊇) yielding a MH model of P. E.g., H′ = {travel, expired_passport} does not lead to a MH model of P simply because H′ is not minimal w.r.t. H = {expired_passport}.

If we assume H = {passport_ok} then P ∪ H is

{ beach ← not mountain
  mountain ← not travel
  travel ← not beach, not expired_passport
  passport_ok ← not expired_passport
  expired_passport ← not passport_ok
  passport_ok }

and

Rem(P ∪ H) = { beach ← not mountain
               mountain ← not travel
               travel ← not beach
               passport_ok }

which, apart from the fact passport_ok, corresponds to the original version of this example and still leaves literals with non-determined truth-values. I.e., assuming the passports are OK allows for the three possibilities of Example 1, but it is not enough to entirely "solve" the vacation problem: we need some hypotheses set containing one of beach, mountain, or travel if (in this case, and only if) it also contains passport_ok.

Example 4. Minimality of Hypotheses does not guarantee minimality of model. Let P, with no SMs, be

a ← not b, c
b ← not c, not a
c ← not a, b

In this case P = P̂ = P̊, which makes Hyps(P) = {a, b, c}.
H = ∅ does not determine all literals of P because facts(\widehat{P ∪ ∅}) = facts(\widehat{P}) = ∅ and heads(\widehat{P ∪ ∅}) = heads(\widehat{P}) = {a, b, c}.

H = {a} does determine all literals of P because facts(\widehat{P ∪ {a}}) = {a} and heads(\widehat{P ∪ {a}}) = {a}, thus yielding the MH model M_a such that M⁺_a = facts(\widehat{P ∪ {a}}) = {a}, i.e., M_a = {a, not b, not c}.

H = {c} is also a minimal set of hypotheses determining all literals, because facts(\widehat{P ∪ {c}}) = {a, c} and heads(\widehat{P ∪ {c}}) = {a, c}, thus yielding the MH model M_c of P such that M⁺_c = facts(\widehat{P ∪ {c}}) = {a, c}, i.e., M_c = {a, not b, c}. However, M_c is not a minimal model of P because M⁺_c = {a, c} is a strict superset of M⁺_a = {a}. M_c is indeed an MH model of P, just not a minimal model, thereby being a clear example of how minimality of hypotheses does not entail minimality of consequences.

Just to make this example complete, we show that H = {b} also determines all literals of P, because facts(\widehat{P ∪ {b}}) = {b, c} and heads(\widehat{P ∪ {b}}) = {b, c}, thus yielding the MH model M_b such that M⁺_b = facts(\widehat{P ∪ {b}}) = {b, c}, i.e., M_b = {not a, b, c}. Any other hypotheses set is necessarily a strict superset of either H = {a}, H = {b}, or H = {c} and, therefore, not set-inclusion minimal; i.e., there are no more MH models of P.

Also, not all minimal models of a program are MH models, as the following example shows.

Example 5. Some minimal models are not Minimal Hypotheses models. Let P (with no SMs) be

  a ← k
  k ← not t
  t ← a, b
  a ← not b
  b ← not a

In this case P = \widehat{P} = \mathring{P} and therefore Hyps(P) = {a, b, t}. Since facts(\widehat{P}) ≠ heads(\widehat{P}), the hypotheses set H = ∅ does not yield an MH model.
Assuming H = {a} we have \widehat{P ∪ H} = \widehat{P ∪ {a}} = {a ←, k ←}, so \widehat{P ∪ H} is the set of facts {a, k} and, therefore, M_a such that M⁺_a = facts(\widehat{P ∪ H}) = facts(\widehat{P ∪ {a}}) = {a, k} is an MH model of P.

Assuming H = {b} we have

\widehat{P ∪ {b}} =
  a ← k
  k ← not t
  t ← a
  b ← not a
  b

thus facts(\widehat{P ∪ {b}}) = {b} ≠ heads(\widehat{P ∪ {b}}) = {a, b, t, k}, which means the set of hypotheses H = {b} does not yield an MH model of P.

Assuming H = {t} we have

\widehat{P ∪ {t}} =
  t ← a, b
  b ← not a
  a ← not b
  t

thus facts(\widehat{P ∪ {t}}) = {t} ≠ heads(\widehat{P ∪ {t}}) = {a, b, t}, which means the set of hypotheses H = {t} does not yield an MH model of P.

Since we already know that H = {a} yields the MH model M_a with M⁺_a = {a, k}, there is no point in trying out any subset H′ of Hyps(P) = {a, b, t} such that a ∈ H′, because any such subset would not be minimal w.r.t. H = {a}. Let us, therefore, move on to the unique subset left: H = {b, t}. Assuming H = {b, t} we have \widehat{P ∪ {b, t}} = {t ←, b ←}, thus facts(\widehat{P ∪ {b, t}}) = {b, t} = heads(\widehat{P ∪ {b, t}}), which means M_{b,t} such that M⁺_{b,t} = facts(\widehat{P ∪ H}) = facts(\widehat{P ∪ {b, t}}) = {b, t} is an MH model of P.

It is important to remark that this program has other classical models, e.g., {a, k}, {b, t}, and {a, t}, but only the first two are Minimal Hypotheses models: {a, t} is obtainable only via the set of hypotheses {a, t}, which is non-minimal w.r.t. H = {a}, which yields the MH model {a, k}.

4.3 Properties

The minimality of H is not sufficient to ensure the minimality of M⁺ = facts(\widehat{P ∪ H}), making an explicit minimality check necessary if that is desired. Minimality of hypotheses, not minimality of their inevitable consequences, is indeed the common practice in science. On the contrary, the more consequences the better, because more of them signify greater predictive power.
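The hand computations above can be mechanized by brute force. The sketch below is illustrative and not the authors' algorithm: it assumes ground programs, takes Hyps(P) simply as the set of default-negated atoms (which is valid for the examples here, where P coincides with its layered remainder \mathring{P}), and uses the standard alternating-fixpoint construction of the Well-Founded Model in place of the remainder operator: a hypotheses set H yields an MH model exactly when WFM(P ∪ H) is total (no undefined atoms), with M⁺ given by its true atoms.

```python
from itertools import combinations

def wfm(rules, atoms):
    """Well-Founded Model of a ground NLP via the alternating fixpoint.
    `rules` is a list of (head, positive_body_set, negative_body_set).
    Returns (true_atoms, false_atoms); the rest are undefined."""
    def least_model(assumed_false):
        # Least model reading `not x` as true iff x ∈ assumed_false.
        true, changed = set(), True
        while changed:
            changed = False
            for head, pos, neg in rules:
                if head not in true and pos <= true and neg <= assumed_false:
                    true.add(head)
                    changed = True
        return true
    true, false = set(), set()
    while True:
        new_true = least_model(false)           # surely-true atoms
        maybe = least_model(atoms - new_true)   # possibly-true atoms
        new_false = atoms - maybe
        if (new_true, new_false) == (true, false):
            return true, false
        true, false = new_true, new_false

def mh_models(rules):
    """Enumerate MH models (their positive parts M⁺) by trying
    set-inclusion-minimal hypotheses sets, smallest first."""
    atoms = {h for h, _, _ in rules} | {x for _, p, n in rules for x in p | n}
    # Simplification (assumption): Hyps(P) = default-negated atoms,
    # adequate when P equals its layered remainder, as in these examples.
    hyps = sorted({x for _, _, n in rules for x in n})
    models, chosen = [], []
    for k in range(len(hyps) + 1):
        for H in combinations(hyps, k):
            if any(set(c) <= set(H) for c in chosen):
                continue  # a smaller hypotheses set already succeeded
            prog = rules + [(h, set(), set()) for h in H]
            true, false = wfm(prog, set(atoms))
            if true | false == atoms:   # 2-valued: all literals determined
                chosen.append(H)
                models.append(true)
    return models

# Example 4 of the paper: a ← not b, c;  b ← not c, not a;  c ← not a, b
EX4 = [('a', {'c'}, {'b'}), ('b', set(), {'a', 'c'}), ('c', {'b'}, {'a'})]
```

Running `mh_models(EX4)` reproduces the three MH models derived above, with M⁺ equal to {a}, {a, c}, and {b, c}, while H = ∅ leaves all of a, b, c undefined, exactly as in the text.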
In Logic Programming, model minimality is a consequence of definitions: the T operator in definite programs is conducive to defining a least fixed point, a unique minimal-model semantics; in SM, though there may be more than one model, minimality turns out to be a property because of the stability (and its attendant classical support) requirement; in the WFS, again, the existence of a least-fixed-point operator affords a minimal (information) model. In abduction too, minimality of consequences is not a caveat; rather, minimality of hypotheses is, if that even. Hence our approach to LP semantics via MH is novel indeed, and insisting instead on positive hypotheses establishes an improved and more general link to abduction and argumentation [16, 17].

Theorem 1. At least one Minimal Hypotheses model of P complies with the Well-Founded Model. Let P be an NLP. Then there is at least one Minimal Hypotheses model M of P such that M⁺ ⊇ WFM⁺(P) and M⁺ ⊆ WFM⁺ᵤ(P).

Proof. If facts(\widehat{P}) = heads(\widehat{P}) or, equivalently, WFMᵤ(P) = ∅, then M_H is an MH model of P given that H = ∅, because M⁺_H = facts(\widehat{P ∪ H}) = heads(\widehat{P ∪ H}) = facts(\widehat{P ∪ ∅}) = heads(\widehat{P ∪ ∅}) = facts(\widehat{P}) = heads(\widehat{P}). On the other hand, if facts(\widehat{P}) ≠ heads(\widehat{P}), then there is at least one non-empty set-inclusion-minimal set of hypotheses H ⊆ Hyps(P) such that H ⊇ facts(P). The corresponding M_H is, by definition, an MH model of P which is guaranteed to comply with M⁺_H ⊇ WFM⁺(P) = facts(\widehat{P}) and M⁻_H ⊇ not WFM⁻(P), where M⁻_H = not (H_P \ M⁺_H). ⊓⊔

Theorem 2. Minimal Hypotheses semantics guarantees model existence. Let P be an NLP. There is always at least one Minimal Hypotheses model of P.

Proof.
It is trivial to see that one can always find a set H′ ⊆ Hyps(P) such that M⁺_{H′} = facts(\widehat{P ∪ H′}) = heads(\widehat{P ∪ H′}); in the extreme case, H′ = Hyps(P). From such an H′ one can always select a minimal subset H ⊆ H′ such that M⁺_H = facts(\widehat{P ∪ H}) = heads(\widehat{P ∪ H}) still holds. ⊓⊔

4.4 Relevance

Theorem 3. Minimal Hypotheses semantics enjoys Relevance. Let P be an NLP. Then, by definition 5, it holds that

(∀ M ∈ Models_MH(P), a ∈ M⁺) ⇔ (∀ M_a ∈ Models_MH(Rel_P(a)), a ∈ M⁺_a)

Proof. ⇒: Assume ∀ M ∈ Models_MH(P), a ∈ M⁺. We need to prove ∀ M_a ∈ Models_MH(Rel_P(a)), a ∈ M⁺_a. Assume some M_a ∈ Models_MH(Rel_P(a)); we now show that assuming a ∉ M⁺_a leads to an absurdity. Since M_a is a 2-valued complete model of Rel_P(a), we know that |M_a| = H_{Rel_P(a)}; hence, if a ∉ M_a, then necessarily not a ∈ M⁻_a. Since P ⊇ Rel_P(a), by theorem 2 we know that there is some model M′ of P such that M′ ⊇ M_a, and thus not a ∈ M′⁻, which contradicts the initial assumption that ∀ M ∈ Models_MH(P), a ∈ M⁺. We conclude that a ∉ M_a cannot hold, i.e., a ∈ M_a must hold. Since a ∈ M⁺ holds for every model M of P, a ∈ M_a must hold for every model M_a of Rel_P(a).

⇐: Assume ∀ M_a ∈ Models_MH(Rel_P(a)), a ∈ M⁺_a. We need to prove ∀ M ∈ Models_MH(P), a ∈ M⁺. Let us write P)a( as an abbreviation of P \ Rel_P(a). We therefore have P = P)a( ∪ Rel_P(a). Let us now take P)a( ∪ M_a. We know that every NLP has an MH model; hence every MH model M of P)a( ∪ M_a is such that M ⊇ M_a. Let H_{M_a} denote the hypotheses set of M_a, i.e., M⁺_a = facts(\widehat{Rel_P(a) ∪ H_{M_a}}) = heads(\widehat{Rel_P(a) ∪ H_{M_a}}), with H_{M_a} = ∅ or non-empty set-inclusion minimal, as per definition 20.
If facts(\widehat{P ∪ H_{M_a}}) = heads(\widehat{P ∪ H_{M_a}}) then M⁺ = facts(\widehat{P ∪ H_M}) = heads(\widehat{P ∪ H_M}) is an MH model of P with H_M = H_{M_a} and, necessarily, M ⊇ M_a. If facts(\widehat{P ∪ H_{M_a}}) ≠ heads(\widehat{P ∪ H_{M_a}}) then, knowing that every program has an MH model, we can always find an MH model M of P)a( ∪ M_a, with H′ ⊆ Hyps(P)a( ∪ M_a), where M⁺ = facts(\widehat{P ∪ H′}) = heads(\widehat{P ∪ H′}). Such an M is thus M⁺ = facts(\widehat{P ∪ H_M}) = heads(\widehat{P ∪ H_M}) where H_M = H_{M_a} ∪ H′, which means M is an MH model of P with M ⊇ M_a. Since every model M_a of Rel_P(a) is such that a ∈ M⁺_a, every model M of P must also be such that a ∈ M. ⊓⊔

4.5 Cumulativity

MH semantics enjoys Cumulativity, thus allowing lemma-storing techniques to be used during the computation of answers to queries.

Theorem 4. Minimal Hypotheses semantics enjoys Cumulativity. Let P be an NLP. Then

∀ a, b ∈ H_P: (∀ M ∈ Models_MH(P), a ∈ M⁺) ⇒ ((∀ M ∈ Models_MH(P), b ∈ M⁺) ⇔ (∀ M_a ∈ Models_MH(P ∪ {a}), b ∈ M⁺_a))

Proof. Assume, for a ∈ H_P, that ∀ M ∈ Models_MH(P), a ∈ M⁺.

⇒: Assume ∀ M ∈ Models_MH(P), b ∈ M⁺. Since every MH model M contains a, it follows that all such M are also MH models of P ∪ {a}. Since we assumed b ∈ M as well, and we know that M is an MH model of P ∪ {a}, we conclude that b is also in those MH models M of P ∪ {a}. By adding a as a fact we necessarily have Hyps(P ∪ {a}) ⊆ Hyps(P), which means that there cannot be more MH models for P ∪ {a} than for P. Since we already know that every MH model M of P is also an MH model of P ∪ {a}, we must conclude that for every M ∈ Models_MH(P) there is exactly one M′ ∈ Models_MH(P ∪ {a}) such that M′⁺ ⊇ M⁺. Since ∀ M ∈ Models_MH(P), b ∈ M⁺, we necessarily conclude ∀ M_a ∈ Models_MH(P ∪ {a}), b ∈ M⁺_a.

⇐: Assume ∀ M_a ∈ Models_MH(P ∪ {a}), b ∈ M⁺_a.
Since the MH semantics is relevant (theorem 3), if b does not depend on a then adding a as a fact to P or not has no impact on b's truth value, and if b ∈ M⁺_a then b ∈ M⁺ as well. If b does depend on a (a being true in every MH model M of P), then either 1) b depends positively on a, in which case, since a ∈ M, b ∈ M as well; or 2) b depends negatively on a, in which case the absence of a as a fact in P can only contribute, if at all, to making b true in M as well. We then conclude ∀ M ∈ Models_MH(P), b ∈ M⁺. ⊓⊔

4.6 Complexity

Complexity issues usually relate to a particular set of tasks, namely: 1) knowing if the program has a model; 2) knowing if it has any model entailing some set of ground literals (a query); 3) knowing if all models entail a set of literals. In the case of MH semantics, the answer to the first question is an immediate "yes", because MH semantics guarantees model existence for NLPs; the second and third questions correspond (respectively) to Brave and Cautious Reasoning, which we now analyse.

Brave Reasoning. The complexity of the Brave Reasoning task with MH semantics, i.e., finding an MH model satisfying some particular set of literals, is Σᴾ₂-complete.

Theorem 5. Brave Reasoning with MH semantics is Σᴾ₂-complete. Let P be an NLP, and Q a set of literals, or query. Finding an MH model M such that M ⊇ Q is a Σᴾ₂-complete task.

Proof. To show that finding an MH model M ⊇ Q is Σᴾ₂-complete, note that a non-deterministic Turing machine with access to an NP-complete oracle can solve the problem as follows: non-deterministically guess a set H of hypotheses (i.e., a subset of Hyps(P)). It remains to check that H is empty or non-empty minimal such that M⁺ = facts(\widehat{P ∪ H}) = heads(\widehat{P ∪ H}) and M ⊇ Q.
Checking that M⁺ = facts(\widehat{P ∪ H}) = heads(\widehat{P ∪ H}) can be done in polynomial time (because computing \widehat{P ∪ H} can be done in polynomial time [2] for whichever P ∪ H), and checking that H is empty or non-empty minimal requires a non-deterministic guess of a strict subset H′ of H and then a polynomial check of whether facts(\widehat{P ∪ H′}) = heads(\widehat{P ∪ H′}). ⊓⊔

Cautious Reasoning. Conversely, Cautious Reasoning, i.e., guaranteeing that every MH model satisfies some particular set of literals, is Πᴾ₂-complete.

Theorem 6. Cautious Reasoning with MH semantics is Πᴾ₂-complete. Let P be an NLP, and Q a set of literals, or query. Guaranteeing that all MH models M are such that M ⊇ Q is a Πᴾ₂-complete task.

Proof. Cautious Reasoning is the complement of Brave Reasoning, and since the latter is Σᴾ₂-complete (theorem 5), the former must necessarily be Πᴾ₂-complete. ⊓⊔

The set of hypotheses Hyps(P) is obtained from \mathring{P}, which identifies rules that depend on themselves. The hypotheses are the atoms of the DNLs of \mathring{P}, i.e., the "atoms of nots in loop". A Minimal Hypotheses model is then obtained from a minimal set of these hypotheses sufficient to determine the 2-valued truth value of every literal in the program. The MH semantics imposes no ordering or preference between hypotheses, only their set-inclusion minimality. For this reason, we can think of choosing a set of hypotheses yielding an MH model as finding a minimal solution to a disjunction problem, where the disjuncts are the hypotheses. In this sense, it is understandable that the complexity of the reasoning tasks with MH semantics is in line with that of, e.g., reasoning tasks under the SM semantics for Disjunctive Logic Programs, i.e., Σᴾ₂-complete and Πᴾ₂-complete. In abductive reasoning (as well as in Belief Revision) one does not always require minimal solutions.
Likewise, taking a hypotheses-assumption-based semantic approach, like that of MH, one may not require minimality of the assumed hypotheses. In such a case we would be under a non-Minimal Hypotheses semantics, and the complexity classes of the corresponding reasoning tasks would be one level down in the Polynomial Hierarchy relative to the MH semantics, i.e., Brave Reasoning with a non-Minimal Hypotheses semantics would be NP-complete, and Cautious Reasoning would be coNP-complete. We leave the exploration of such possibilities for future work.

4.7 Comparisons

As we have seen, all stable models are MH models. Since MH models are always guaranteed to exist for every NLP (cf. theorem 2) and SMs are not, it follows immediately that the Minimal Hypotheses semantics is a strict model-conservative generalization of the Stable Models semantics. The MH models that are stable models are exactly those in which all rules are classically supported. With this criterion one can conclude whether some program has no stable models. For Normal Logic Programs, the Stable Models semantics coincides with the Answer-Set semantics (which is a generalization of SMs to Extended Logic Programs), and the latter is known (cf. [10]) to correspond to Reiter's default logic. Hence, all of Reiter's default extensions have a corresponding Minimal Hypotheses model. Also, since Moore's expansions of an autoepistemic theory [13] are known to have a one-to-one correspondence with the stable models of the NLP version of the theory, we conclude that for every such expansion there is a matching Minimal Hypotheses model of the same NLP.

Disjunctive Logic Programs (DisjLPs, allowing for disjunctions in the heads of rules) can be syntactically transformed into NLPs by applying the Shifting Rule presented in [6] in all possible ways.
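The Shifting Rule moves head disjuncts, default-negated, into the bodies of the resulting normal rules. The sketch below shows one standard formulation of the single-rule shift, over a toy representation in which a disjunctive rule is a pair (list of head atoms, list of body literals as strings); the representation and names are ours, for illustration only, and the full transformation of [6] applies such shifts across the program in all possible ways.

```python
def shift(disjunctive_rule):
    """One common formulation of the Shifting Rule:
    a1 ∨ ... ∨ an ← Body  becomes, for each i, the normal rule
    ai ← Body, not a1, ..., not a(i-1), not a(i+1), ..., not an."""
    heads, body = disjunctive_rule
    return [(h, body + ["not " + other for other in heads if other != h])
            for h in heads]

# E.g., the disjunctive rule  beach ∨ mountain ← holiday  shifts into
#   beach ← holiday, not mountain
#   mountain ← holiday, not beach
rules = shift((["beach", "mountain"], ["holiday"]))
```

Note how the shifted rules introduce default negations, and hence possibly new loops over negation (SCCs) in the resulting NLP, which is precisely the situation discussed next.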
By non-deterministically applying such a transformation in all possible ways, several SCCs of rules may appear in the resulting NLP that were not present in the original DisjLP; assigning a meaning to every such SCC is a distinctive feature of MH semantics, unlike other semantics such as the SMs. In this way, the MH semantics can be defined for DisjLPs as well: the MH models of a DisjLP are the MH models of the NLP resulting from the transformation via the Shifting Rule.

There are other kinds of disjunction, like the one in logic programs with ordered disjunction (LPOD) [3]. These employ "a new connective called ordered disjunction. The new connective allows to represent alternative, ranked options for problem solutions in the heads of rules". As the author of [3] says, "the semantics of logic programs with ordered disjunction is based on a preference relation on answer sets." This is different from the semantics assigned by MH, since the latter includes no ordering, nor preferences, in the assumed minimal sets of hypotheses. E.g., in example 1 there is no notion of preference or ordering amongst candidate models; LPODs would not be the appropriate formalism for such cases. We leave for future work a thorough comparison of these approaches, namely comparing the semantics of LPODs against the MH models of LPODs transformed into NLPs (via the Shifting Rule).

The motivation of [21] is similar to our own: to assign a semantics to every NLP. However, their approach is different from ours in the sense that the methods in [21] resort to contrapositive rules, allowing any positive literal in the head to be shifted (by negating it) to the body, and any negative literal in the body to be shifted to the head (by making it positive).
This approach considers each rule as a disjunction, making no distinction between the literals occurring in the rule, whether or not they are in a loop with the head of the rule. This permits the shifting operations in [21] to create support for atoms that have no rules in the original program. E.g.:

Example 6. Nearly-Stable Models vs. MH models. Take the program

P =
  a ← not b
  b ← not c
  c ← not a, not x

According to the shifting operations in [21], this program could be transformed into

P′ =
  b ← not a
  b ← not c
  x ← not a, not c

by shifting a and not b in the first rule, and, in the third rule, shifting not x to the head (becoming positive x) and c to the body (becoming negative not c), thus allowing {b, x} (which is a stable model of P′) to be a nearly stable model of P. In this sense, the approach of [21] allows for the violation of the Closed-World Assumption. This does not happen with our approach: {b, x} is not a Minimal Hypotheses model simply because, since x has no rules in P, it cannot be true in any MH model; the atom x of not x is not a member of Hyps(P) (cf. def. 18).

As shown in theorem 1, at least one MH model of a program complies with its well-founded model, although not necessarily all MH models do. E.g., the program in example 2 has the two MH models {beach, mountain, not travel} and {beach, not mountain, travel}, whereas the WFM(P) imposes WFM⁺(P) = {beach, mountain}, WFMᵤ(P) = ∅, and WFM⁻(P) = {travel}. This is due to the set of hypotheses Hyps(P) of P being taken from \mathring{P} (based on the layered support notion) instead of from \widehat{P} (based on the classical notion of support).

Not all Minimal Hypotheses models are minimal models of a program.
The rationale behind MH semantics is minimality of hypotheses, but not necessarily minimality of consequences; the latter is enforceable, if so desired, as an additional requirement, although at the expense of increased complexity.

The relation between logic programs and argumentation systems has been considered for a long time now ([7] amongst many others), and we have also taken steps to understand and further that relationship [16–18]. Dung's Preferred Extensions [7] are maximal sets of negative hypotheses yielding consistent models. Preferred Extensions, however, are not guaranteed to always yield 2-valued complete models. Our previous approaches to argumentation [16, 17] have already addressed the issue of guaranteed 2-valued model existence, and the MH semantics also solves that problem by virtue of positive, instead of negative, hypotheses assumption.

5 Conclusions and Future Work

Taking a positive hypotheses assumption approach, we defined the 2-valued Minimal Hypotheses semantics for NLPs, which guarantees model existence, enjoys relevance and cumulativity, and is also a model-conservative generalization of the SM semantics. Also, by adopting positive hypotheses, we not only generalized the argumentation-based approach of [7], but the resulting MH semantics lends itself naturally to abductive reasoning, it being understood as hypothesizing plausible reasons sufficient to justify given observations or support desired goals. We also defined the layered support notion, which generalizes the classical one by recognizing the special role of loops.
For query answering, the MH semantics provides mainly three advantages over the SMs: 1) by enjoying Relevance, top-down query-solving is possible, thereby circumventing whole-model computation (and grounding), which is unavoidable with SMs; 2) by considering only the relevant sub-part of the program when answering a query, it is possible to ground only those rules, if grounding is really desired, whereas with SM semantics whole-program grounding is, once again, inevitable; grounding is known to be a major source of computational time consumption, and MH semantics, by enjoying Relevance, permits curbing this task to the minimum sufficient to answer a query; 3) by enjoying Cumulativity, as soon as the truth value of a literal is determined in a branch of the top query, it can be stored in a table and its value used to speed up the computation of other branches within the same top query.

Goal-driven abductive reasoning is elegantly modelled by top-down abductive query-solving. By taking a hypotheses assumption approach, and by enjoying Relevance, the MH semantics caters well for this convenient problem representation and reasoning category.

Many applications have been developed using the Stable Models/Answer-Set semantics as the underlying platform. These generally tend to be focused on solving problems that require complete knowledge, such as search problems where all the knowledge represented is relevant to the solutions. However, as Knowledge Bases increase in size and complexity, and as the merging and updating of KBs becomes more and more common, e.g. for Semantic Web applications [11], the importance of partial-knowledge problem solving grows, as does the need to ensure the overall consistency of the merged/updated KBs.
The Minimal Hypotheses semantics is intended to be, and can be, used in all the applications where the Stable Models/Answer-Sets semantics is itself used to model KRR and search problems, plus all applications where query answering (both under a credulous mode of reasoning and under a skeptical one) is intended, plus all applications where abductive reasoning is needed. The MH semantics aims to be a sound theoretical platform for 2-valued (possibly abductive) reasoning with logic programs.

Much work still remains to be done that can be rooted in this platform contribution. The general topics of using non-normal logic programs (allowing for negation, default and/or explicit, in the heads of rules) for Belief Revision, Updates, Preferences, etc., are per se orthogonal to the semantics issue, and therefore all these subjects can now be addressed with Minimal Hypotheses semantics as the underlying platform. Importantly, MH can guarantee the liveness of updated and self-updating LP programs such as those of EVOLP [1] and related applications. The Minimal Hypotheses semantics still has to be thoroughly compared with Revised Stable Models [15], PStable Models [14], and other related semantics.

In summary, we have provided a fresh platform on which to re-examine ever-present issues in Logic Programming and its uses, which purports to provide a natural continuation and improvement of LP development.

References

1. J. J. Alferes, A. Brogi, J. A. Leite, and L. M. Pereira. Evolving logic programs. In S. Flesca et al., editors, Procs. JELIA'02, volume 2424 of LNCS, pages 50–61. Springer, 2002.
2. S. Brass, J. Dix, B. Freitag, and U. Zukowski. Transformation-based bottom-up computation of the well-founded model. TPLP, 1(5):497–538, 2001.
3. G. Brewka. Logic programming with ordered disjunction. In Procs. AAAI-02, pages 100–105. AAAI Press, 2002.
4. J. Dix.
A Classification Theory of Semantics of Normal Logic Programs: I. Strong Properties. Fundam. Inform., 22(3):227–255, 1995.
5. J. Dix. A Classification Theory of Semantics of Normal Logic Programs: II. Weak Properties. Fundam. Inform., 22(3):257–288, 1995.
6. J. Dix, G. Gottlob, W. Marek, and C. Rauszer. Reducing disjunctive to non-disjunctive semantics by shift-operations. Fundamenta Informaticae, 28:87–100, 1996.
7. P. M. Dung. On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. AI, 77(2):321–358, 1995.
8. A. Van Gelder, K. A. Ross, and J. S. Schlipf. The well-founded semantics for general logic programs. J. ACM, 38(3):620–650, 1991.
9. M. Gelfond and V. Lifschitz. The stable model semantics for logic programming. In Procs. ICLP'88, pages 1070–1080, 1988.
10. M. Gelfond and V. Lifschitz. Logic programs with classical negation. In D. Warren et al., editors, ICLP, pages 579–597. MIT Press, 1990.
11. A. S. Gomes, J. J. Alferes, and T. Swift. Implementing query answering for hybrid MKNF knowledge bases. In M. Carro et al., editors, PADL'10, volume 5937 of LNCS, pages 25–39. Springer, 2010.
12. A. C. Kakas, R. A. Kowalski, and F. Toni. Abductive logic programming. J. Log. Comput., 2(6):719–770, 1992.
13. R. C. Moore. Semantical considerations on nonmonotonic logic. AI, 25(1):75–94, 1985.
14. M. Osorio and J. C. Nieves. PStable semantics for possibilistic logic programs. In MICAI'07, volume 4827 of LNCS, pages 294–304. Springer, 2007.
15. L. M. Pereira and A. M. Pinto. Revised stable models - a semantics for logic programs. In C. Bento et al., editors, Procs. EPIA'05, volume 3808 of LNAI, pages 29–42. Springer, 2005.
16. L. M. Pereira and A. M. Pinto. Approved models for normal logic programs. In N. Dershowitz and A. Voronkov, editors, Procs. LPAR'07, volume 4790 of LNAI.
Springer, 2007.
17. L. M. Pereira and A. M. Pinto. Reductio ad absurdum argumentation in normal logic programs. In G. Simari et al., editors, ArgNMR'07-LPNMR'07, pages 96–113. Springer, 2007.
18. L. M. Pereira and A. M. Pinto. Collaborative vs. Conflicting Learning, Evolution and Argumentation. In: Oppositional Concepts in Computational Intelligence, Studies in Computational Intelligence 155. Springer, 2008.
19. A. M. Pinto. Every normal logic program has a 2-valued semantics: theory, extensions, applications, implementations. PhD thesis, Universidade Nova de Lisboa, 2011.
20. R. Tarjan. Depth-first search and linear graph algorithms. SIAM J. on Computing, 1(2):146–160, 1972.
21. C. Witteveen. Every normal program has a nearly stable model. In J. Dix, L. M. Pereira, and T. C. Przymusinski, editors, Non-Monotonic Extensions of Logic Programming, volume 927 of Lecture Notes in Artificial Intelligence, pages 68–84. Springer-Verlag, Berlin, 1995.