Matrix Graph Grammars
The objective of this book is to develop an algebraization of graph grammars; equivalently, to study graph dynamics. From the point of view of a computer scientist, graph grammars are a natural generalization of Chomsky grammars for which a purely algebraic…
Author: Pedro Pablo Pérez Velasco
MATRIX GRAPH GRAMMARS
by Pedro Pablo Pérez Velasco
Version 1.2
Copyright by Pedro Pablo Pérez Velasco, 2007, 2008, 2009

To my family

ACKNOWLEDGEMENTS

These lines are particularly pleasant to write. After all those years, I have a rather long list of people who have contributed to this book in one way or another. Unfortunately, I will not be able to include them all; apologies for the absences.

First of all, my family. Gema, with never-ending patience and love, always supports me in every single project that I undertake. My unbounded love and gratitude. Hard to return, though I'll try. My two daughters, Sofía and Diana, make every single moment worthwhile. I am absolutely grateful for their existence. My siblings Álex and Nina, now living in Switzerland, with whom I shared so many moments and whom I miss so much. My parents, always supporting, also with patience and love, worried whether this boy would become a man (am I one?).

Juan, my thesis supervisor, whose advice and interest are invaluable. He has been actively involved in this project despite his many responsibilities. I would also like to thank the people at the series of seminars on complexity theory at U.A.M., headed by Roberto Moriyón, for their interest in Matrix Graph Grammars. Many friends have stoically endured chats on this topic while feigning interest. Thank you very much for your friendship: KikeSim, GinHz, Álvaro Iglesias, Jaime Guerrero, ... All those who have passed by are not forgotten: people at ELCO (David, Fabrizio, Juanjo, Julián, Lola, ...), at EADS/SIC (Javier, Sergio, Roberto, ...), at Isban, at Banco Santander. Almost uncountable. I am also grateful to those who have worked on the tools used in this book: Emacs and microEmacs, MiKTeX, teTeX, TeXnicCenter, OpenOffice and Ubuntu.
I would like to highlight the very good surveys available on different topics of mathematics on the web, in particular at the websites http://mathworld.wolfram.com and http://en.wikipedia.org, and the anonymous people behind them. The last few years have been particularly intense, a mixture of hard work and very good luck. I feel that I have received much more than I am giving. In humble return, I will try to administer http://www.mat2gra.info, with freely available information on Matrix Graph Grammars such as articles, seminars, presentations, posters, one e-book (the one you are about to read) and whatever you may want to contribute.

Contents

1 Introduction
  1.1 Historical Overview
  1.2 Motivation
  1.3 Book Outline
2 Background and Theory
  2.1 Logics
  2.2 Category Theory
  2.3 Graph Theory
  2.4 Tensor Algebra
  2.5 Functional Analysis
  2.6 Group Theory
  2.7 Summary and Conclusions
3 Graph Grammars Approaches
  3.1 Double Pushout (DPO)
    3.1.1 Basics
    3.1.2 Sequentialization and Parallelism
    3.1.3 Application Conditions
    3.1.4 Adhesive HLR Categories
  3.2 Other Categorical Approaches
  3.3 Node Replacement
  3.4 Hyperedge Replacement
  3.5 MSOL Approach
  3.6 Relation-Algebraic Approach
  3.7 Summary and Conclusions
4 Matrix Graph Grammars Fundamentals
  4.1 Productions and Compatibility
  4.2 Types and Completion
  4.3 Sequences and Coherence
  4.4 Coherence Revisited
  4.5 Summary and Conclusions
5 Initial Digraphs and Composition
  5.1 Minimal Initial Digraph
  5.2 Negative Initial Digraph
  5.3 Composition and Compatibility
  5.4 Summary and Conclusions
6 Matching
  6.1 Match and Extended Match
  6.2 Marking
  6.3 Initial Digraph Set and Negative Digraph Set
  6.4 Internal and External ε-productions
  6.5 Summary and Conclusions
7 Sequentialization and Parallelism
  7.1 Graph Congruence
  7.2 Sequentialization – Grammar Rules
  7.3 Sequential Independence – Derivations
  7.4 Explicit Parallelism
  7.5 Summary and Conclusions
8 Restrictions on Rules
  8.1 Graph Constraints and Application Conditions
  8.2 Embedding Application Conditions into Rules
  8.3 Sequentialization of Application Conditions
  8.4 Summary and Conclusions
9 Transformation of Restrictions
  9.1 Consistency and Compatibility
  9.2 Moving Conditions
  9.3 From Simple Digraphs to Multidigraphs
  9.4 Summary and Conclusions
10 Reachability
  10.1 Crash Course in Petri Nets
  10.2 MGG Techniques for Petri Nets
  10.3 Fixed Matrix Graph Grammars
  10.4 Floating Matrix Graph Grammars
    10.4.1 External ε-production
    10.4.2 Internal ε-production
  10.5 Summary and Conclusions
11 Conclusions and Further Research
  11.1 Summary and Short Term Research
  11.2 Long Term Research Program
A Case Study
  A.1 Presentation of the Scenario
  A.2 Sequences
  A.3 Initial Digraph Sets and G-Congruence
  A.4 Reachability
  A.5 Graph Constraints and Application Conditions
  A.6 Derivations
References
Index

List of Figures

1.1 Main Steps in a Grammar Rule Application
1.2 Partial Diagram of Problem Dependencies
1.3 Confluence
2.1 Universal Property
2.2 Product, Cone and Universal Cone
2.3 Pushout and Pullback
2.4 Pushout as Gluing of Sets
2.5 Initial Pushout
2.6 Van Kampen Square
2.7 Three, Four and Five Node Simple Digraphs
3.1 Example of Simple DPO Production
3.2 Direct Derivation as DPO Construction
3.3 Parallel Independence
3.4 Sequential Independence
3.5 Generic Application Condition Diagram
3.6 Gluing Condition
3.7 SPO Direct Derivation
3.8 SPO Weak Parallel Independence
3.9 SPO Weak Sequential Independence
3.10 Sequential and Parallel Independence
3.11 SPB Replication Example
3.12 Example of NLC Production
3.13 edNCE Node Replacement Example
3.14 Edge Replacement
3.15 String Grammar Example
3.16 String Grammar Derivation
3.17 Pushout for Simple Graphs (Relational) and Direct Derivation
4.1 Example of Production
4.2 Examples of Types
4.3 Example of Production (Rep.)
4.4 Productions q1, q2 and q3
4.5 Coherence for Two Productions
4.6 Coherence Conditions for Three Productions
4.7 Coherence. Four and Five Productions
4.8 Productions q1, q2 and q3 (Rep.)
4.9 Example of Nihilation Matrix
5.1 Example of Sequence and Derivation
5.2 Non-Compatible Productions
5.3 Minimal Initial Digraph (Intermediate Expression). Four Productions
5.4 Non-Compatible Productions (Rep.)
5.5 Minimal Initial Digraph. Examples and Counterexample
5.6 Formulas (5.1) and (5.12) for Three Productions
5.7 Equation (5.8) for 3 and 4 Productions (Negation of MID)
5.8 Available and Unavailable Edges After the Application of a Production
5.9 Productions q1, q2 and q3 (Rep.)
5.10 NID for s3 ≡ q3;q2;q1 (Bold = Two Arrows)
5.11 Minimal Initial Digraphs for s2 ≡ q2;q1
5.12 Composition and Concatenation of a non-Compatible Sequence
6.1 Production Plus Match (Direct Derivation)
6.2 (a) Neighborhood. (b) Extended Match
6.3 Match Plus Potential Dangling Edges
6.4 Matching and Extended Match
6.5 Full Production and Application
6.6 Example of Marking and Sequence s ≡ p;p_ε
6.7 Initial Digraph Set for s ≡ removeChannel;removeChannel
6.8 Negative Digraph Set for s ≡ clientDown;clientDown
6.9 Complete Negative Initial Digraph K4
6.10 Example of Internal and External Edges
7.1 G-congruence for s2 ≡ p2;p1
7.2 G-congruence for Sequences s3 ≡ p3;p2;p1 and s3^1 ≡ p2;p1;p3
7.3 G-congruence for s4 ≡ p4;p3;p2;p1 and s4^1 ≡ p3;p2;p1;p4
7.4 G-congruence (Alternate Form) for s3 and s3^1
7.5 G-congruence (Alternate Form) for s4 and s4^1
7.6 Positive and Negative DC Conditions, DC5 and its Negation
7.7 Altered Production q3^1 Plus Productions q1 and q2
7.8 Composition and Concatenation. Three Productions
7.9 Example of Minimal Initial Digraphs
7.10 Advancement. Three and Five Productions
7.11 Three Simple Productions
7.12 Altered Production q3^1 Plus Productions q1 and q2 (Rep.)
7.13 Sequential Independence with Free Matching
7.14 Associated Minimal and Negative Initial Digraphs
7.15 Parallel Execution
7.16 Examples of Parallel Execution
8.1 Application Condition on a Rule's Left Hand Side
8.2 Example of Diagram
8.3 Finding Complement and Negation
8.4 non-Injective Morphisms in Application Condition
8.5 At Most Two Outgoing Edges
8.6 Example of Precondition Plus Postcondition
8.7 Quantification Example
8.8 Diagram for Three Vertex Colorable Graph Constraint
8.9 Satisfaction of Application Condition
8.10 Example of Application Condition
8.11 (a) GC diagram (b) Graph to which GC applies (c) Closure of GC
8.12 Closure and Decomposition
8.13 Application Condition Example
8.14 Closure Example
8.15 Application Condition Example Corrected
8.16 Production Transformation According to Lemma 8.3.1
8.17 Transforming D(Ready_r; Ready_s) into a Sequence
8.18 Identity id_A and its Conjugate for Edges
8.19 id_A as Sequence for Edges
8.20 Decomposition Operator
8.21 Transforming D(someEmpty_r; someEmpty_s) into a Sequence
8.22 Closure Operator
8.23 Example of Diagram with Two Graphs
8.24 Precondition and Postcondition
9.1 Non-Compatible Application Condition
9.2 Non-Coherent Application Condition
9.3 Avoidable non-Compatible Application Condition
9.4 non-Coherent Application Condition
9.5 Negative Graphs Disabling the Sequences in Fig. 8.21
9.6 (a) Example rule (b) MID without AC (c) Completed MID
9.7 (a) Example Rules (b) MIDs (c) Starting Graphs for Analyzing Conflicts
9.8 (Weak) Precondition to (Weak) Postcondition Transformation
9.9 Restriction to Common Parts: Total Morphism
9.10 Precondition to Postcondition Example
9.11 Multidigraph with Two Outgoing Edges
9.12 Multidigraph Constraints
9.13 Simplified Diagram for Multidigraph Constraint
9.14 ε-production and Ξ-production
10.1 Linear Combinations in the Context of Petri Nets
10.2 Petri Net with Related Production Set
10.3 Minimal Marking Firing Sequence t5;t3;t1
10.4 Rules for a Client-Server Broadcast-Limited System
10.5 Matrix Representation for Nodes, Tensor for Edges and Their Coupling
10.6 Initial and Final States for Productions in Fig. 10.4
10.7 Initial and Final States (Based on Productions of Fig. 10.4)
11.1 Diagram of Problem Dependencies
A.1 Graphical Representation of System Actors
A.2 DSL Syntax Specification
A.3 Basic Productions of the Assembly Line
A.4 Productions for Operator Movement
A.5 Break-Down and Fixing of Assembly Line Elements
A.6 Snapshot of the Assembly Line
A.7 Graph Grammar Rule reject
A.8 Minimal Initial Digraph and Image of Sequence s0
A.9 Composition of Sequence s0
A.10 DSL Syntax Specification Extended
A.11 Production assemble in Greater Detail
A.12 MID and Excerpt of the Initial Digraph Set of s0 ≡ pack;certify;assemble
A.13 MID for Sequences s1 and s2
A.14 Ordered Items in Conveyors
A.15 Initial and Final Digraphs for Reachability Example
A.16 Graph Constraint on Conveyor Load
A.17 Graph Constraint as Precondition and Postcondition
A.18 Ordered Items in Conveyors
A.19 Expanded Rule reject
A.20 Rules to Remove Last Item Marks
A.21 Grammar Initial State for s5^1
A.22 Production to Remove Dangling Edges (Ordering of Items in Conveyors)
A.23 Grammar Final State for s5

List of Tables

4.1 Possible Actions for Two Productions
4.2 Possible Actions (Two Productions Incl. Dangling Edges)
4.3 Possible Actions (Three Productions Incl. Dangling Edges)
7.1 Coherence for Advancement of Two Productions
8.1 All Possible Diagrams for a Single Element

1 Introduction

This book is one of the subproducts of my dissertation. If its aim had to be summarized in a single sentence, it could be "algebraization of graph grammars" or, more accurately, "study of graph dynamics". From the point of view of a computer scientist, graph grammars are a natural generalization of Chomsky grammars, for which a purely algebraic approach has not existed until now. A Chomsky (or string) grammar is, roughly speaking, a precise description of a formal language (which in essence is a set of strings). In a more discrete-mathematical style, it can be said that graph grammars – Matrix Graph Grammars in particular – study the dynamics of graphs. Ideally, this algebraization would reinforce our understanding of grammars in general, providing new analysis techniques and generalizations of concepts, problems and results known so far.

In this book we fully develop such a theory over the field GF(2) – the field with two elements – which covers all graph cases, from simple graphs (more attractive for a mathematician) to multidigraphs (more interesting for an applied computer scientist). The theory is presented and its basic properties demonstrated in a first stage, moving then to increasingly difficult problems and establishing relations among them:

• Applicability, for which two equivalent characterizations (necessary and sufficient conditions) are provided.
• Independence. Sequential and parallel independence in particular, generalizing previously known results for two elements.
• Restrictions. The theory developed so far for graph constraints and application conditions is significantly generalized.
• Reachability.
The state equation for Petri nets and related techniques are extended to general Matrix Graph Grammars. Also, Matrix Graph Grammars techniques are applied to Petri nets.

Throughout the book many new concepts are introduced, such as compatibility, coherence, initial and negative graph sets, etc. Some of them provide interesting insights into a given grammar, while others are used to study the previously mentioned problems.

Matrix Graph Grammars have several advantages. First, many branches of mathematics are at our disposal. The theory is based on Boolean algebra, so first- and second-order logics can be applied almost directly. Productions admit a functional representation, so many ideas from functional analysis can be utilized. On the more algebraic side it is possible to use group theory and tensor algebra. Finally, category-theory constructions such as pushouts are available as well. Second, as the theory splits the static definition from the dynamics of the system, it is possible to study to some extent many properties of the grammar without the need for an initial state. Third, although it is a theoretical tool, Matrix Graph Grammars are quite close to implementation, making it possible to develop tools based on this theory.

This introductory chapter aims to provide some perspective on graph grammars in general and on Matrix Graph Grammars in particular. In Sec. 1.1 we present a (partial) historical overview of graph grammars and graph transformation systems, taken from several sources but mainly from [36] and [22]. Section 1.2 introduces the open problems that have guided our research. Finally, in Sec. 1.3 we brush over the book and see how applicability, sequential independence and reachability articulate it.
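To make the Boolean-algebra viewpoint concrete, a digraph on n nodes can be stored as an n-by-n Boolean adjacency matrix, and a production can be decomposed into an erasure matrix e (edges deleted) and an addition matrix r (edges created). The following Python sketch is only an illustration of that idea, not the book's formal machinery (which is developed from Chapter 4 onward); it assumes the identity "result = r OR (NOT e AND L)" as the informal description suggests.

```python
# A digraph on n nodes as an n-by-n Boolean matrix (True = edge present).

def mat_not(M):
    return [[not x for x in row] for row in M]

def mat_and(A, B):
    return [[a and b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_or(A, B):
    return [[a or b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

# Left- and right-hand sides of a production on 3 nodes:
# L has edges 0->1 and 1->2; R keeps 1->2 and adds 2->0.
L = [[False, True,  False],
     [False, False, True ],
     [False, False, False]]
R = [[False, False, False],
     [False, False, True ],
     [True,  False, False]]

e = mat_and(L, mat_not(R))   # erasure matrix: edges in L but not in R
r = mat_and(R, mat_not(L))   # addition matrix: edges in R but not in L

# Applying the production to its own left-hand side recovers R:
result = mat_or(r, mat_and(mat_not(e), L))
assert result == R
```

Note how deletion and addition become pure Boolean operations, which is what lets logical and algebraic tools act on productions directly.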
1.1 Historical Overview

Research in graph grammars started in the late 1960s [69][72], strongly motivated by practical problems in computer science, and since then it has become a very active area. Currently there is a wide range of applications in different branches of computer science, such as formal language theory, software engineering, pattern recognition and generation, implementation of term rewriting, logical and functional programming, compiler construction, database design and theory, visual programming and modeling languages and many more (see [23] for references on these and other topics).

There are different approaches to graph grammars and graph transformation systems.¹ Among them, the most prominent are the algebraic, logical, relational and set-theoretical.

[Fig. 1.1. Main Steps in a Grammar Rule Application]

The main steps – some of which are summarized in Fig. 1.1 – in all approaches for the application of a grammar rule p : L → R to a host graph G (also known as initial state) to eventually obtain a final state H are almost the same:

1. Select the grammar rule to be applied (p : L → R in this case). In general this step is non-deterministic.
2. Find an occurrence of L in G. In general this step is also non-deterministic because there may be several occurrences of L in G.
3. Check any application condition of the production.
4. Remove elements that appear in L but not in R. There are two possibilities for so-called dangling edges:²
   a) The production is not applied.
   b) Dangling edges are deleted too.

¹ The only difference between a grammar and a transformation system is that a grammar considers an initial state while a transformation system does not.
² A dangling edge is one not appearing in the rule specification which is incident to a node to be eliminated.
   If the production is to be applied, the system state changes from G to G′ (see Fig. 1.1).
5. Glue R with G′. The system state changes from G′ to H (see Fig. 1.1).

Now we shall briefly review the previously mentioned families of approaches.

The so-called algebraic approach to graph grammars (graph transformation systems) is characterized by relying almost exclusively on category theory and using gluing of graphs to perform operations. It can be divided into at least three main subapproaches, depending on the categorical construction in use: DPO (Double PushOut, see Sec. 3.1), SPO (Single PushOut, see Sec. 3.2), pullback and double pullback (also summarized in Sec. 3.2). We will not comment on others, like the sesqui-pushout approach for example (see [9]).

DPO was initiated by Ehrig, Pfender and Schneider in the early 70's [21] as a generalization of Chomsky grammars in order to consider graphs instead of strings. It seems that the term algebraic was appended because graphs might be considered as a special kind of algebras and because the pushout construction was perceived more as a concept from universal algebra than from category theory. Nowadays it is the most prominent approach to graph rewriting, with a vast body of theoretical results and several tools for their implementation.³

By the mid and late 80's Raoult [70], Kennaway [41][42] and Löwe [49] developed the SPO approach, probably motivated by some "restrictions" of DPO, e.g. the usage of total instead of partial morphisms. Raoult and Kennaway focused on term graph rewriting while Löwe took a more general approach.

In the late 90's a new approach – although less prominent for now – emerged by reverting all arrows (using pullbacks instead of pushouts), proposed by Bauderon [5]. It seems that, in contrast to the pushout construction, pullbacks can handle deletion and duplication more easily.
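The five steps above can be made concrete. The following is a minimal sketch, not the book's formal construction: simple digraphs are modeled as a pair (nodes, edges), the match of step 2 is assumed to be given as a node mapping from L to G, and the two dangling-edge policies of step 4 are selected through a flag. All names and the representation are illustrative.

```python
# Minimal sketch of steps 4 and 5 of rule application (hypothetical
# representation): a graph is (nodes, edges) with edges as (src, dst)
# pairs; 'match' maps nodes of L onto nodes of G.

def apply_rule(L, R, G, match, delete_dangling=True):
    L_nodes, L_edges = L
    R_nodes, R_edges = R
    nodes, edges = set(G[0]), set(G[1])

    # Step 4: delete elements appearing in L but not in R.
    for n in L_nodes - R_nodes:
        gn = match[n]
        incident = {e for e in edges if gn in e}
        specified = {(match[s], match[t]) for (s, t) in L_edges}
        if incident - specified and not delete_dangling:
            return None          # policy (a): production is not applied
        edges -= incident        # policy (b): dangling edges deleted too
        nodes.discard(gn)
    for (s, t) in L_edges - R_edges:
        edges.discard((match[s], match[t]))

    # Step 5: glue R with the intermediate graph G' to obtain H.
    for n in R_nodes - L_nodes:
        match[n] = n             # new nodes keep their rule-side name
        nodes.add(n)
    for (s, t) in R_edges - L_edges:
        edges.add((match[s], match[t]))
    return nodes, edges
```

For instance, a rule with L = ({1, 2}, {(1, 2)}) and R = ({1}, ∅) deletes node 2; if the image of node 2 in the host graph has an extra incident edge, the application either fails or removes that dangling edge, depending on the flag.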
DPO has been generalized recently through adhesive HLR categories, which are summarized in Sec. 3.2 (we are not aware of a similar initiative for SPO or pullback). For a detailed account see [22]. Instead of just considering graphs, all main ideas in DPO can be extended to higher level structures like labeled graphs, typed graphs, Petri nets, etc. This was first accomplished in [16] and [17], starting the theory of HLR systems (High Level Replacement systems). Independently, Lack and Sobociński in [43] introduced the concept of adhesive category, and in [18] both were merged to get adhesive HLR categories. In this book we shall refer to these approaches as categorical, to distinguish them from ours, which is more algebraic in nature.

³ For example AGG – see [76] or visit http://tfs.cs.tu-berlin.de/agg/ – and AToM³ – see [45] or visit http://atom3.cs.mcgill.ca/ –.

The so-called set-theoretic approach (sometimes also known as the algorithmic approach) substitutes one structure by another structure, either nodes or edges. There are two subfamilies, node replacement and edge replacement (also hyperedge replacement), depending on the type of elements to be replaced.

Node replacement (edNCE) was introduced in [55][56] and further investigated in many papers. It is based on connecting instead of gluing for embedding one graph into another. Many extensions and particular cases have been studied so far, such as C-edNCE when considering confluence, NCE, NLC, dNLC, edNLC and edNCE (see Sec. 3.3 for the meaning of these acronyms), and many others are currently ongoing.

Hyperedge replacement was introduced in the early 70's by Feder [27] and Pavlidis [59] and has been intensively investigated since then. Contrary to the node replacement approach, it is based on gluing. Please see Secs. 3.3 and 3.4 for a quick introduction.
It is possible to use logics to express graphs and to encode graph transformation. In Sec. 3.5 this approach, based on monadic second order logic, is reviewed, presenting its foundations and main results.⁴

The relational approach (also algebraic-relational approach) is based on relational methods for specifying graph rewriting (in fact it could be applied to more general structures than graphs). Once a graph is characterized as a relational structure it is possible to apply all the relational machinery, substituting categories by allegories and Dedekind categories. Probably the main advantage is that it is possible to give local characterizations of concepts. The roots of this approach seem to date back to the early 1970's with the papers of Kawahara [38][39][40], establishing a relational calculus inside topos theory. An overview can be found in Sec. 3.6.

Our approach has been influenced by these approaches to different extents, heavily depending on the topic. The basics of Matrix Graph Grammars are most influenced by the categorical approach, mainly by SPO in the shape of productions and, to some extent, of direct derivations. For application conditions and graph constraints, our inspiration comes almost exclusively from MSOL. Concerning the relational approach, our basic structure has a natural representation in relational terms, but the development in both cases is very different. The influence of hyperedge replacement and node replacement, if any, is much fuzzier.

⁴ Monadic Second Order Logics, MSOL, lie in between first and second order logics.

1.2 Motivation

The dissertation that gave rise to this book started as a project to study simulation protocols (conservative, optimistic, etc.) under graph transformation systems. In the first few weeks we missed a real algebraic approach to graph grammars.
"Real" in the sense that there are algebraic representations of graphs very close to basic algebraic structures such as vector spaces (incidence or adjacency matrices, for example), but the theories available so far do not make use of them. As commented above, the main objective of this book is to give an algebraization of graph grammars. One advantage foreseen from the very beginning was that nice interpretations in terms of functional analysis and physics could be used to move forward; besides, as the underlying structure is binary, it was possible to easily bring in logics and their powerful methods if necessary.

Our schedule included several increasingly difficult problems to be treated by our approach, with the hope of getting better insight and understanding, trying to generalize whenever possible and, most importantly, providing a unified body of results in which all concepts and ideas would fit naturally.

First things first, so we begin with the name of the book: Matrix Graph Grammars. It has been chosen to emphasize the algebraic part of the approach – although there are also logics, tensors, operators – and to recall matrix mechanics as introduced by Born, Heisenberg and Jordan in the first half of the twentieth century.⁵ You are kindly invited to visit http://www.mat2gra.info for further research, a web page dedicated to this topic that I (hopefully) intend to maintain.

⁵ An alternative was YAGGA, which stands for Yet Another Graph Grammar Approach (in the style of the famous "Yet Another..." series).

Section 1.1 points out that the motivations of some graph grammar approaches have been quite close to practice, in contrast with Matrix Graph Grammars (MGG), which is more theoretically driven.
Nonetheless, there is an ongoing project to implement a graph grammar tool based on AToM³ (see [45] or visit http://atom3.cs.mcgill.ca/) using algorithms derived from this book (the analysis algorithms are expected to have a good performance). We will briefly touch on this topic in Sec. 6.3. Appendix A illustrates all the theory with a more or less realistic case study.

This "basis for theoretical studies" intends to provide us with the capability of solving theoretical problems such as those commented on below, which are the backbone of the book.

Informally, a grammar is a set of productions plus an initial graph, which we can safely think of as a collection of functions plus an initial set. A sequence of productions would then be a sequence of functions, applied in order. Together with the function we specify the elements that must be found in the initial set (its domain), so in order to apply a function we must first find the domain of the function in the initial set (this process is known as matching). As productions are applied, the system moves on, transforming the initial set through a sequence of intermediate sets to eventually arrive at a final state (final set).⁶ Actually, we will deal neither with sets nor with functions but with directed graphs and morphisms. We will speak of graphs, digraphs or simple digraphs, meaning in all cases simple digraphs. See Sec. 2.3 for their definition and main properties.

Once grammar rules have been defined and their main properties established, the first problem we will address is the characterization of applicability, i.e. to give necessary and sufficient conditions guaranteeing that a sequence can be applied to an initial state (also known as host graph) to output a final state (a graph again).
Formally stated for further reference:

Problem 1 (Applicability) For a sequence s_n made up of rules in a grammar G and a simple⁷ digraph G, is it possible to apply s_n to the host graph G?

⁶ The natural interpretation is that functions modify sets, so some dynamics arise.
⁷ Defined in Sec. 2.3.

No restriction is set on the output of the sequence except that it is a simple digraph. There is a basic problem when deleting nodes, known as the dangling condition: Are all incident edges eliminated too? Otherwise the output would not be a digraph.

When we have a production and a matching (for that production) we will speak of a direct derivation. A sequence of direct derivations is called a derivation. A quite natural progression in the study of grammars is the following question, which we call the independence problem:⁸

Problem 2 (Independence) For two given derivations d_n and d'_n applicable to a host graph G, do they reach the same state? That is, is d_n(G) = d'_n(G)?

Mind the similarities with confluence and local confluence (see below). However, independence is a very general problem and we will be interested in a reduced version of it, known as sequential independence, which is widely addressed in the graph grammar literature and also in other branches of computer science. As far as we know, in the literature [22; 23] this problem is addressed for sequences of two direct derivations, longer sequences being studied pairwise.

Problem 3 (Sequential Independence) For two derivations d_n and d'_n = σ(d_n) applicable to a host graph G, with σ a permutation, do they reach the same state?

Of course, problems 2 and 3 can easily be extended to consider any finite number of derivations and, in both cases, there is a dependence relationship with respect to problem 1.
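Following the sets-and-functions analogy above, problems 1 and 3 can be illustrated with a toy model (hypothetical, not the book's machinery): a production is a pair (domain, action), applicable only when its domain is found inside the current state.

```python
# Toy model of Problems 1 and 3: states are sets, and a production is a
# pair (dom, action), applicable when dom is contained in the state.

def apply_sequence(seq, state):
    """Return the final state, or None if some rule is not applicable
    (Problem 1: applicability of the whole sequence)."""
    for dom, action in seq:
        if not dom <= state:    # matching: find the domain in the state
            return None
        state = action(state)
    return state

# Two hypothetical productions.
p = (frozenset('a'), lambda s: (s - {'a'}) | {'b'})  # needs a: a -> b
q = (frozenset('c'), lambda s: s - {'c'})            # needs c: delete c
```

On the host state {'a', 'c'}, the sequence [p, q] and its permutation [q, p] are both applicable and reach the same final state {'b'}: in the toy model they are sequentially independent in the sense of Problem 3, whereas [q, q] is not even applicable.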
Our next step will be to generalize some theory from Petri nets [54], which can be seen as a particular case of Matrix Graph Grammars. In particular, our interest is focused on reachability:

Problem 4 (Reachability) For two given states (initial S_0 and final S_T), is there any sequence made up of productions in G that transforms S_0 into S_T?

In the theory developed so far for Petri nets, reachability is addressed using the state equation (a linear system), which is a necessary condition for the existence of such a sequence (see Chap. 10).

⁸ Independence from the point of view of the grammar: It does not matter which path the grammar follows because in both cases it finishes in the same state.

Problem 4 directly relies on problem 1. More interestingly, it is also related to problems 2 and 3: As every solution provided by the state equation specifies the set of productions to be applied but not the order (see Sec. 10.1), sequences associated to different solutions of the state equation can be independent but not sequentially independent (this is because different solutions apply each production a different number of times). So, in particular, reachability can be useful to tell independence and sequential independence apart.

Fig. 1.2. Partial Diagram of Problem Dependencies

All these problems with their corresponding dependencies are summarized in Fig. 1.2. Compare with the complete diagram, which includes mid-term and long-term research, in Fig. 11.1 on p. 256.

Although we will not study confluence in this book (except some ideas in Chap. 11), just to make a complete account two further related problems are introduced. We will briefly review them in the last chapter.

Problem 5 (Confluence) For two given states S_1 and S_2, do there exist two derivations d_1 and d_2 such that d_1(S_1) = d_2(S_2)?
Strictly speaking this is not confluence as defined in the literature [77]. To the left of Fig. 1.3 you can find confluence: For an initial state S_0 that independently evolves to S_1 and S_2, is it possible to find derivations that close the diamond?⁹ To the right of the same figure we have represented problem 5. The difference is that a common initial state is not assumed.

⁹ The difference between local confluence and confluence is that in the former, to move from S_0 to S_1 or S_2 it is mandatory to use a direct derivation and not a derivation.

Fig. 1.3. Confluence

In mathematics, existence and uniqueness theorems are central to any of its branches. The analogous terms in computer science are termination and confluence, respectively. In some sense we may think of reachability as opening or broadening the state space of a given grammar, while confluence, as introduced here, closes or bounds it. Problem 5 deals with the confluence part only; the other part (how to actually get to the states S_1 and S_2) is more related to reachability. Note that if one of the derivations is the identity then problem 5 becomes problem 4 (reachability).

If we limit ourselves to permutations of sequences, as in the derivation of problem 3 out of problem 2, we can pose:

Problem 6 (Sequential Confluence) For two given initial states, do there exist two derivations (one a permutation of the other) with isomorphic final states?

Again, it is not difficult to make them consider any finite set of derivations instead of just two. Once we know whether a grammar is confluent, the next step is to know how long it takes to get to its final state. This is very close to complexity. Complexity theory is not addressed in this book.

To the best of our knowledge, applicability (problem 1) has not been addressed up to now.
Independence and sequential independence (problems 2 and 3) are very popular.¹⁰ See for example Chaps. 3 and 4 in [23]. Reachability is a key concept and has been studied and partially characterized in many papers, mainly in Petri net theory; see [54]. Confluence is a concept of fundamental importance to grammar theory. For term rewriting systems see [30].

¹⁰ Actually, it is sequential independence that is normally addressed in the literature. We have introduced independence for its potential link with confluence.

1.3 Book Outline

Based on the problems commented on in the previous section, the book is organized in nine chapters plus one appendix. The first three chapters, including this one, are introductory. Chapter 2 provides a short overview of the needed mathematical machinery, which includes some basic results from logics (first and monadic second order), category theory, tensor algebra, graph theory, functional analysis (notation and some basic results) and group theory. We have not used advanced results from any of these disciplines, so a quick review should probably suffice, mainly for fixing notation.

Graph grammar approaches are discussed in Chap. 3, which essentially expands the overview in Sec. 1.1. Sections 3.1 and 3.2 cover algebraic approaches, for which we prefer the term categorical, as commented above. Set-theoretic approaches (node and hyperedge replacement) are covered in Secs. 3.3 and 3.4. Term rewriting through monadic second order logics is the MSOL approach, to which Sec. 3.5 is devoted. The chapter ends with the relational approach in Sec. 3.6. The objective of this chapter is to get an idea of each approach (and not to provide a detailed study) in order to, among other things, ease comparison with Matrix Graph Grammars.

Chapter 4 introduces the basics of our proposal (Sec.
4.1) and prepares to attack problem 1 by introducing concepts such as completion (Sec. 4.2), coherence and sequences (Sec. 4.3) and the nihilation matrix (Sec. 4.4).

Standing on Chapter 4, Chapter 5 studies minimal and negative initial digraphs (Secs. 5.1 and 5.2, subsequently generalized to the initial digraph set in Sec. 6.3), composition and compatibility (Sec. 5.3), and theorems related to their properties and characterizations.

Chapter 6 covers an essential part of production applicability: Matching the left hand side (LHS) of a production inside the host graph. Dangling edges are covered, dealt with through what we call ε-productions in Sec. 6.1 and further studied and classified in Sec. 6.4. We deal with marking in Sec. 6.2, which can help in case it is necessary to guarantee that several productions are applied at the same place. Minimal and negative initial digraphs are generalized to the initial digraph set in Sec. 6.3. In Sec. 6.5 we give two characterizations of applicability (problem 1).

We will cope with sequential independence (problem 3) for quite general families of permutations in Chap. 7. Sameness of minimal initial digraphs (called G-congruence) for two sequences is addressed in Sec. 7.2; the case of two derivations is seen in Sec. 7.3. Explicit parallelism is studied in Sec. 7.4 through composition and G-congruence, which is related to initial digraph sets.

In Chap. 8 graph constraints and application conditions (preconditions and postconditions) are studied for Matrix Graph Grammars. They are introduced in Sec. 8.1, where a short overview of related concepts in other graph grammar approaches is carried out. The notion of direct derivation is extended to cope with application conditions in Matrix Graph Grammars in a very natural manner in Sec. 8.2 and functionally represented in Sec.
8.3, where they are sequentialized.

Chapter 9 continues with graph constraints and application conditions. First, some properties such as consistency are defined and characterized (Sec. 9.1). In Sec. 9.2 we show how it is possible to transform postconditions into preconditions and vice versa. Of both theoretical and practical importance is the use of variable nodes because, among other things, it allows us in Sec. 9.3 to automatically extend the theory to include multidigraphs without any change to the theory of Matrix Graph Grammars.

In Chap. 10 problem 4 (reachability) is tackled, extending results from Petri nets to more general grammars. Section 10.1 quickly introduces this theory and summarizes some basic results. Section 10.2 applies some Matrix Graph Grammars results from previous chapters to Petri nets. The rest of the chapter is devoted to extending Petri net reachability results to Matrix Graph Grammars; in particular, Sec. 10.3 covers graph grammars without dangling edges while Sec. 10.4 deals with the general case.

The book ends in Chap. 11 with the conclusions and further research. A summary of what we think are our most important contributions can be found there. Finally, in Appendix A a fully worked case study is presented in which all main theorems are applied together, with detailed explanations and implementation remarks and advice.
Most of the material presented in this book has been published [60], [61], [62], [63], [64] and [65] and presented at international congresses: ICM'2006 (International Congress of Mathematicians, awarded the second prize of the poster competition in Section 15, Mathematical Aspects of Computer Science), ICGT'2006 (International Conference on Graph Transformations), PNGT'2006 (Petri Nets and Graph Transformations), PROLE'2007 (VII Jornadas sobre Programación y Lenguajes) and GT-VC'2007 (Graph Transformation for Verification and Concurrency, in CONCUR'2007). Some further research is now available at http://www.mat2gra.info and in the arXiv (http://arxiv.org, just look for "Matrix Graph Grammars" in their search engine). Besides, a slight generalization using Boolean complexes has appeared in [66].

2 Background and Theory

The Matrix Graph Grammar approach uses many mathematical theories which might seem distant from one another. Nevertheless, there are some interesting ideas connecting them, which we seize to contribute whenever possible. Matrix Graph Grammars do not depend on any novel theorem that opens a new field of research, but aim to put "old" problems in a new perspective.

There are excellent books available covering every subject of this topic. There are also excellent resources on the web. We think that this fast introduction should suffice. It is intended as a reference chapter. All concepts are highlighted in bold to ease their location.

2.1 Logics

Logics are of fundamental importance to Matrix Graph Grammars for two reasons. First, graphs are represented by their adjacency matrices. As we will be mostly concerned with simple digraphs, they can be represented by Boolean matrices (we will come back to this in Sec. 2.3).¹ Second, Chap.
8 generalizes graph constraints and application conditions using monadic second order logics. Good references on mathematical logics are [48] and [74].

¹ Multidigraphs are also addressed using Boolean matrices. Refer to Sec. 9.3.

First-order predicate calculus (more briefly, first order logic, FOL) generalizes propositional logic, which deals with propositions: statements that are either true or false. FOL formulas are constructed from individual constants (a, b, c, etc., typically lower-case letters from the beginning of the alphabet), individual variables (x, y, z, etc., typically lower-case letters from the end of the alphabet), predicate symbols (P, Q, R, etc., typically upper-case letters), function symbols (f, g, h, etc., typically lower-case letters from the middle of the alphabet), propositional connectives (¬, ∧, ∨, ⇒, ⇔) and quantifiers (∀, ∃). Set C will be that of individual constants, set F will be that of function symbols and set P will contain predicate symbols. Besides these elements, punctuation symbols such as parentheses and commas are permitted.

A formula in which every variable is quantified is a closed formula (an open formula otherwise). A term (formula) that contains no variable is called a ground term (ground formula). The arity of any function symbol f is its number of arguments, normally written as an upper index, f^n, if needed.

The rules for constructing terms and formulas are recursive: Every element in C is a term, as is any individual variable and also f^n(t_1, ..., t_n), where f^n ∈ F and the t_i are terms. Also, any P ∈ P applied to terms is a formula,² and the application of any propositional connective or quantifier (or both) to one or more formulas is also a formula. In fact, constants are functions of arity zero, so it would be convenient to omit them and allow functions of any arity.
Nevertheless we will follow the traditional exposition and use the term function when the arity is at least 1.

Example. As an example of a FOL formula, one of the inference rules of predicate calculus is written:

∃x P(x) ∧ ∀x Q(x) ⇒ ∃x [P(x) ∧ Q(x)].

It reads as: if there exists x for which P and for all x Q, then there exists x for which P and Q.

For another example, let's consider the language of ordered Abelian groups. It has one constant 0, one unary function −, one binary function + and one binary relation ≤.

• 0, x, −y are atomic terms.
• +(x, y), +(x, +(y, −(z))) are terms, usually written in infix notation as x + y, x + (y + (−z)).
• =(+(x, y), 0), ≤(+(x, +(y, −(z))), +(x, y)) are atomic formulas, usually written in infix notation as x + y = 0, x + y − z ≤ x + y.
• (∀x ∃y ≤(+(x, y), z)) ∧ (∃x =(+(x, y), 0)) is a formula, more readable if written as (∀x ∃y x + y ≤ z) ∧ (∃x x + y = 0).

² It is called an atomic formula.

The semantics of our language depend on the domain of discourse (D) and on the interpretation function I. The domain of discourse (also known as the universe of discourse) is the set of objects we use the FOL to talk about and must be fixed in advance. In the example above, for a fixed ordered Abelian group, the domain of discourse is the set of elements of the group. For a given domain of discourse D it is necessary to define an interpretation function I which assigns meanings to the non-logical vocabulary, i.e. maps symbols in our language onto the domain:

• Constants are mapped onto objects in the domain.
• 0-ary predicates are mapped onto true or false, i.e. whether they hold in this interpretation.
• N-ary predicates are mapped onto sets of n-ary ordered tuples of elements of the domain, i.e. those tuples of members for which the predicate holds (for example, a 1-ary predicate is mapped onto a subset of D).
The interpretation of a formula f in our language is then given by this morphism I together with an assignment of values to any free variables in f. If S is a variable assignment on I then we can write (I, S) ⊨ f to mean that I satisfies f under the assignment S (f is true under interpretation I and assignment S). Our interpretation function assigns denotations to constants in the language, while S assigns denotations to free variables.

First-order predicate logic allows variables to range over atomic symbols in the domain, but it does not allow variables to be bound to predicate symbols. A second order logic (such as second order predicate logic, [48]) does allow this, and sentences such as ∀P [P(2)] (all predicates apply to number 2) can be written.

Example. Starting out with the formula

β(X) ≡ ∀x, y, z [(X(x, y) ∧ X(x, z) ⇒ y = z) ∧ (X(x, z) ∧ X(y, z) ⇒ x = y)],

which expresses that the binary relation X is functional and injective on its domain, it is possible to give a characterization of a bijection (X) between two sets (Y_1, Y_2):

∃X [β(X) ∧ ∀x ((Y_1(x) ⇒ ∃y X(x, y)) ∧ (Y_2(x) ⇒ ∃y X(y, x)))].

The bijection X is a binary relation and the sets Y_1 and Y_2 are unary relations. Hence, Y_1(x) is the same as x ∈ Y_1. See [23], pp. 319-320 for more details.

Another example is the least upper bound (lub) property for sets of real numbers (every bounded, nonempty set of real numbers has a supremum):

∀A [(∃w (w ∈ A) ∧ ∃z ∀w (w ∈ A ⇒ w ≤ z)) ⇒ ∃x ∀y (∀w (w ∈ A ⇒ w ≤ y) ⇔ x ≤ y)].

Second order logic (SOL) is more expressive than FOL under standard semantics: Quantifiers range over all sets or functions of the appropriate sort (thus, once the domain of the first order variables is established, the meaning of the remaining quantifiers is fixed).
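On a finite domain the bijection example can be evaluated directly. The sketch below is an illustration only (with a relation given as a set of pairs, a hypothetical representation) mirroring β(X) and the bijection characterization:

```python
# beta(X): X is functional (first conjunct of the formula) and
# injective (second conjunct), with X a finite set of pairs.
def beta(X):
    functional = all(y == z for (x, y) in X for (u, z) in X if u == x)
    injective = all(x == y for (x, z) in X for (y, w) in X if w == z)
    return functional and injective

# Full characterization: beta(X) plus an image for every element of Y1
# and a preimage for every element of Y2, as in the formula above.
def is_bijection(X, Y1, Y2):
    return (beta(X)
            and all(any(a == x for (a, _) in X) for x in Y1)
            and all(any(b == y for (_, b) in X) for y in Y2))
```

For instance, {(1, 'a'), (2, 'b')} is a bijection between {1, 2} and {'a', 'b'}, while {(1, 'a'), (2, 'a')} fails β because it is not injective.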
It is still possible to increase the order of the logic, for example by allowing predicates to accept arguments which are themselves predicates. Chapter 8 makes use of monadic second order logic, MSOL for short,³ which lies in between first order and second order logics. Instead of allowing quantification over n-ary predicates, MSOL quantifies over 0-ary and 1-ary predicates, i.e. individuals and subsets. There is no restriction on the arity of predicates. A theorem by Büchi and Elgot [7][26] (see also [78]) states that string languages generated by MSOL formulas correspond to regular languages (see also Sec. 3.5), so we have an alternative to the use of regular expressions, appropriate to express patterns (this is one of the reasons to make use of them in Chap. 8).⁴ Another reason is that properties as general as 3-colorability of a graph (see [23], Chap. 5 and also Sec. 8.1) can be encoded using MSOL, so for many purposes it seems to be expressive enough.

³ In the literature there are several equivalent contractions such as MS, MSO and M2L.
⁴ See [53] for an introduction to monadic second order logic. See [29] for an implementation of a translator of MSOL formulas into finite-state automata.

2.2 Category Theory

Category theory was first introduced by S. Eilenberg and S. Mac Lane in the early 1940s in connection with their studies in homology theory (algebraic topology). See [25]. The reference book in category theory is [50]. There are also several very good surveys on this topic on the web, such as http://www.cs.utwente.nl/~fokkinga/mmf92b.pdf.

A category C is made up of a class⁵ of objects, a class of morphisms and a binary operation ∘ called composition of morphisms: (Obj(C), Hom(C), ∘). Each morphism f has a unique source object and a unique target object, f : A → B. There are two axioms for categories:

1. If f : A → B, g : B → C and h : C → D, then h ∘ (g ∘ f) = (h ∘ g) ∘ f (associativity).
2. ∀X ∃1_X : X → X such that ∀f : A → B it is true that 1_B ∘ f = f = f ∘ 1_A (existence of the identity morphism).

An object A is initial if and only if ∀B ∃! f : A → B, and terminal if ∀B ∃! g : B → A. Not all categories have initial or terminal objects, although if they exist then they are unique up to a unique isomorphism.

Example. One first example is the category Set, where objects are sets and morphisms are total functions. Doing set theory in the categorical language forces one to express everything with function composition only (no explicit arguments, membership, etc).

Notice that morphisms need not be functions. For example, any directed graph determines a category in which each node is an object and each directed edge is a morphism. Composition is concatenation of paths and the identity is the empty path. This category is at times called the Path category. Similarly, any preordered set (A, ≤) can be thought of as a category. Objects are in this case the elements of A (a, b ∈ A), and there is a morphism between two given elements whenever a ≤ b. The identity is a ≤ a.⁶

⁵ A class is a collection of sets or other mathematical objects. A class that is not a set is called a proper class and has the properties that it cannot be an element of a set or a class and is not subject to the Zermelo-Fraenkel axioms, thereby avoiding some paradoxes from naive set theory.
⁶ These three examples can be found in [28].

The empty set ∅ is the only initial object and every singleton (one-element set) is terminal in the category Set. If as before (A, ≤) is a preordered set, A has an initial object if and only if it has a smallest element, and a terminal object if and only if A has a largest element.
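The preorder example can be made concrete: in a finite preordered set viewed as a category, the initial objects are exactly the smallest elements and the terminal objects the largest. A small sketch with hypothetical helper names:

```python
# A finite preorder (A, <=) as a category: a unique morphism a -> b
# exists iff leq(a, b). Initial = least elements, terminal = greatest.

def initial_objects(elems, leq):
    return [a for a in elems if all(leq(a, b) for b in elems)]

def terminal_objects(elems, leq):
    return [b for b in elems if all(leq(a, b) for a in elems)]

divides = lambda a, b: b % a == 0   # divisibility preorder on integers
```

On {1, 2, 4, 8} ordered by divisibility, 1 is initial and 8 terminal; on {2, 3} there is neither, matching the remark that initial and terminal objects need not exist.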
In the category of graphs (to be defined soon) the null graph – the graph without nodes and edges – is an initial object. The graph with a single node and a single edge is terminal, except in the category of simple graphs without loops, which does not have a terminal object.

Example. A multigraph G = (V, E, s, t) consists of a set V of vertices and a set E of edges. The source and target functions s, t : E → V respectively return the initial and the final node of an edge. A graph morphism f : G1 → G2, with f = (f_V, f_E), consists of two functions f_V : V1 → V2 and f_E : E1 → E2 such that f_V ∘ s1 = s2 ∘ f_E and f_V ∘ t1 = t2 ∘ f_E. Composition is defined component-wise, i.e. given f1 : G1 → G2 and f2 : G2 → G3, then f2 ∘ f1 = (f2,V ∘ f1,V, f2,E ∘ f1,E) : G1 → G3. The category of graphs with total morphisms will be denoted Graph, and GraphP if morphisms are allowed to be partial. GraphP will be more interesting for us.

Let C and D be two categories. A functor F : C → D is a mapping (functors can be seen as morphisms between categories) that associates objects in C with objects in D (for X ∈ C, F(X) ∈ D) and morphisms in C with morphisms in D:

    f : X → Y, f ∈ C   ⟹   F(f) : F(X) → F(Y), F(f) ∈ D.   (2.1)

Any functor has to preserve the category structure (identities and composition), i.e. it must satisfy the following two properties:

1. ∀X ∈ C, F(1_X) = 1_{F(X)}.
2. ∀f : X → Y, g : Y → Z, we have that F(g ∘ f) = F(g) ∘ F(f).

Example. The constant functor between categories C and D sends every object in C to a fixed object in D. The diagonal functor is defined between categories C and C^D and sends each object in C to the constant functor in that object (C^D denotes the functor category from D to C). Let C denote the category of vector spaces over a fixed field; then the tensor product V ⊗ W defines a functor C × C → C.
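The commutation conditions f_V ∘ s1 = s2 ∘ f_E and f_V ∘ t1 = t2 ∘ f_E can be checked mechanically. The following is a minimal sketch (ours, not from the book) where graphs are given by explicit source/target dictionaries; all names are illustrative.

```python
def is_graph_morphism(s1, t1, s2, t2, f_V, f_E):
    """Check that (f_V, f_E) commutes with source and target maps:
    f_V(s1(e)) = s2(f_E(e)) and f_V(t1(e)) = t2(f_E(e)) for every edge e."""
    return all(f_V[s1[e]] == s2[f_E[e]] and f_V[t1[e]] == t2[f_E[e]]
               for e in f_E)

# G1: single edge a : 1 -> 2;  G2: single edge b : x -> y
s1, t1 = {"a": 1}, {"a": 2}
s2, t2 = {"b": "x"}, {"b": "y"}
f_V, f_E = {1: "x", 2: "y"}, {"a": "b"}
print(is_graph_morphism(s1, t1, s2, t2, f_V, f_E))  # True
```

Swapping the node images (1 ↦ y, 2 ↦ x) makes the squares fail to commute and the check returns False.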
Fig. 2.1. Universal Property

All constructions that follow can be characterized by some abstract property that demands, under some conditions, the existence of a unique morphism; such properties are known as universal properties. One concept constantly used is that of universal morphism, which can easily be recognized in the rest of the section: let F : C → D be a functor and let X ∈ D. A universal morphism from X to F – where U ∈ C and u : X → F(U) – is a pair (U, u) such that for every Y ∈ C and every f : X → F(Y) there exists a unique g : U → Y satisfying

    f = F(g) ∘ u.

(In fact, this is a universal property for universal morphisms.) See Fig. 2.1, where blue dotted arrows delimit the commutative triangle (u, f, F(g)).

Fig. 2.2. Product, Cone and Universal Cone

The product of objects X and Y is an object P together with two morphisms Π_X : P → X and Π_Y : P → Y such that P is terminal (among all objects equipped with such a pair of morphisms). This definition can easily be extended to an arbitrary collection of objects. A cone from N ∈ D to a functor F : C → D is a family of morphisms γ_X : N → F(X) such that for every f : X → Y, f ∈ C, we have F(f) ∘ γ_X = γ_Y. A limit is a universal cone, i.e. a cone through which all other cones factor: a cone (L, δ_X) of a functor F : C → D is a limit of that functor if and only if for any cone (N, γ_X) of F there exists a unique u : N → L such that γ_X = δ_X ∘ u (L is terminal). See Fig. 2.2.

Fig. 2.3. Pushout and Pullback

A pullback is the limit of a diagram consisting of two morphisms f : X → Z and g : Y → Z with a common codomain.
By reverting all arrows in the previous definitions (reverting arrows is at times called duality) we get the dual concepts: coproduct, cocone, colimit and pushout. A pullback is also known as a fibered product or Cartesian square; a pushout is also known as a fibered coproduct or fibered sum. (Informally, the diagram is what appears to the left of Fig. 2.3. Formally, a diagram of type I – the index or scheme category – in category C is a functor D : I → C. What the objects and morphisms of I are is irrelevant; only the way in which they are related is of importance.)

A pushout is the colimit of a diagram consisting of two morphisms f : X → Y and g : X → Z with a common domain, and can be informally interpreted as closing the square depicted to the left of Fig. 2.3 by defining the red dashed morphisms γ_Z and γ_Y. The fine blue dotted morphisms (δ_Y, δ_Z and δ_PO) illustrate the universal property of the pushout of being an initial object. We will see in Secs. 3.1 and 3.2 that the basic pillars of the categorical approaches to graph transformation are the pushout and pullback diagrams depicted in Fig. 2.3.

Pushout constructions are very important for graph transformation systems, in particular for the SPO and DPO approaches, but they are also used to some extent by most of the other categorical approaches. The intuition of a pushout between sets A, B and C as in Fig. 2.4 is to glue sets B and C through set A or, in other words, to put C where A is in B.

Fig. 2.4. Pushout as Gluing of Sets

A pushout complement is a categorical construction very similar to PO and PB. In this case, following the notation on the left of Fig. 2.3, f and γ_Y would be given while g, γ_Z and Z need to be constructed. Roughly speaking, an initial pushout is an initial object in the "category of pushouts". Suppose we have a pushout as depicted to the left of Fig.
2.3, then it is said to be initial over γ_Y if for every pushout with f′ : X′ → Y and γ′_Z : Z′ → PO (refer to Fig. 2.5) there exist unique morphisms f̄ : X → X′ and γ̄_Z : Z → Z′ such that:

1. f = f′ ∘ f̄ and γ_Z = γ′_Z ∘ γ̄_Z.
2. The square defined by the overlined morphisms (f̄, g, γ′_Y, γ̄_Z) is a pushout.

(Initial pushouts are needed for the gluing condition and to define HLR categories. See below and also Sec. 3.1.4.)

Fig. 2.5. Initial Pushout

Now we will introduce adhesive HLR categories (HLR stands for High Level Replacement), which are very important for a general study of graph grammars and graph transformation systems. See Sec. 3.1.4 for an introduction or refer to [22] for a detailed account.

Van Kampen squares are pushout diagrams that are closed in some sense under pullbacks. Given the pushout square on the floor of the cube in Fig. 2.6 and the two pullbacks of the back faces (depicted in dotted red), the front faces (depicted in dashed blue) are pullbacks if and only if the top square is a pushout. Even in the category Set not all pushouts are van Kampen squares, unless the pushout is defined along a monomorphism (an injective morphism). We say that the pushout (p, m, p*, m*) is defined along a monomorphism if p is injective (symmetrically, if m is injective). A category has pushouts along monomorphisms if pushouts exist whenever at least one of the given morphisms is a monomorphism.

We will be interested in so-called adhesive categories. A category C is called adhesive if it fulfills the following properties:

1. C has pushouts along monomorphisms.
2. C has pullbacks.
3. Pushouts along monomorphisms are van Kampen squares.

There are important categories that turn out to be adhesive categories, but others are not.
For example, Set and Graph are adhesive categories, but Poset (the category of partially ordered sets) and Top (topological spaces and continuous functions) are not.

Fig. 2.6. Van Kampen Square

The axioms of adhesive categories have to be weakened because there are important categories for graph transformation that do not fulfill them, e.g. typed attributed graphs. The main difference between adhesive categories and adhesive HLR categories is that the adhesive properties are demanded only for some subclass M of monomorphisms and not for every monomorphism. A category C with a set of morphisms M is an adhesive HLR category if:

1. M is closed under isomorphisms, composition and decomposition (g ∘ f ∈ M, g ∈ M ⟹ f ∈ M).
2. C has pushouts and pullbacks along M-morphisms, and M-morphisms are closed under pushouts and pullbacks.
3. Pushouts in C along M-morphisms are van Kampen squares.

Symmetrically to the previous use of the term "along", a pushout along an M-morphism is a pushout where at least one of the given morphisms is in M.

Among others, the category PTNets (place/transition nets) fails to be an adhesive HLR category, so it would be nice to consider still wider classes of graph grammars by further relaxing the axiomatics of adhesive HLR categories. In particular, the third axiom can be weakened if only some of the cubes in Fig. 2.6 are considered for the van Kampen property. In this case we speak of weak adhesive HLR categories:

3'. Pushouts in C along M-morphisms are weak van Kampen squares, i.e. the van Kampen square property holds for all commutative cubes with p ∈ M and either m ∈ M or l′, r′, g′ ∈ M.
Adhesive HLR categories enjoy many nice properties concerning pushout and pullback constructions, allowing us to move forward and backward easily inside diagrams. Assuming all involved morphisms to be in M:

1. Pushouts along M-morphisms are pullbacks.
2. If a pushout is the composition of two squares in which the second is a pullback, then in fact both squares are pushouts and pullbacks.
3. The symmetrical van Kampen property for pullbacks also holds (see Fig. 2.6): if the top square (G′, H′, R′, L′) is a pullback and the front squares (G′, G, H, H′) and (H′, H, R, R′) are pushouts, then the bottom square (G, H, R, L) is a pullback if and only if the back faces (G′, G, L, L′) and (L′, L, R, R′) are pushouts.
4. Pushout complements are unique up to isomorphism.

It is necessary to be cautious when porting concepts to (weak) adhesive categories, as the morphisms involved in the definitions and theorems have to belong to the set of morphisms M.

2.3 Graph Theory

In this section simple digraphs are defined, which can be represented as Boolean matrices, and the basic operations on these matrices are introduced. They will be used in later sections to characterize graph transformation rules. Also, compatibility for a graph – an adjacency matrix plus a vector of nodes – is defined and studied (see Definition 2.3.2). This paves the way to the notion of compatibility of grammar rules (see Definition 4.1.5) and of sequences of productions (see Sec. 5.3).

Graph theory is considered to start with Euler's paper on the seven bridges of Königsberg in 1736. Since then there has been intense research in the field by, among others, Cayley, Sylvester, Tait, Ramsey, Erdős, Szemerédi and many more. Nowadays
graph theory is applied to a wide range of areas in different disciplines in both science and engineering, such as computer science, chemistry, physics, topology, and many more. Among its main branches we can cite extremal graph theory, geometric graph theory, algebraic graph theory, probabilistic (also known as random) graph theory and topological graph theory. We will just use some basic facts from algebraic graph theory.

The category of graphs has been introduced in Sec. 2.2. An easy way to define a simple digraph G = (V, E) is as the structure consisting of two sets, one of nodes V = {V_i | i ∈ I} and one of edges E ⊆ {(V_i, V_j) ∈ V × V} (think of arrows as connecting nodes). Mind the difference between this and having source and target functions s and t; see for example [22]. The prefix "di" means that edges are directed, and the term "simple" means that at most one arrow is allowed between the same two nodes. For example, the complete simple digraph with three vertices, together with two examples with four and five vertices, can be found in Fig. 2.7.

Fig. 2.7. Three, Four and Five Node Simple Digraphs

Any simple digraph G is uniquely determined by one of its associated matrices, known as the adjacency matrix A^G, whose element a_ij is defined to be one if there exists an arrow joining vertex i with vertex j, and zero otherwise.

This is not the only possible characterization of graphs using matrices. The incidence matrix is an m × n matrix I, where m is the number of nodes and n the number of edges (the tensor notation is explained in Sec. 2.4), such that I_ij = 1 if edge e_j leaves node i, I_ij = −1 if edge e_j enters node i, and I_ij = 0 otherwise. As it is possible to relate the adjacency and incidence matrices through line graphs, we will mainly characterize graphs through their adjacency matrices.
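Building the adjacency matrix from a node list and an edge set is direct; here is a minimal sketch (ours, not from the book), with illustrative names.

```python
def adjacency_matrix(nodes, edges):
    """Adjacency matrix of a simple digraph: a_ij = 1 iff there is an
    arrow from the i-th node to the j-th node (in the given ordering)."""
    index = {v: k for k, v in enumerate(nodes)}
    n = len(nodes)
    A = [[0] * n for _ in range(n)]
    for (u, v) in edges:
        A[index[u]][index[v]] = 1
    return A

# Complete simple digraph on three nodes (self-loops included), as in Fig. 2.7
nodes = [1, 2, 3]
edges = [(i, j) for i in nodes for j in nodes]
print(adjacency_matrix(nodes, edges))  # [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
```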
In addition, a vector that we call the node vector V^G is associated to our digraph G, with elements equal to one if the corresponding node is in G and zero otherwise. V^G will be necessary because we will study sequences of productions, which may apply to different graphs; their adjacency matrices will then refer to different sets of nodes. In order to operate algebraically we will complete all matrices (refer to Sec. 4.2 for completion). Node vectors are used to distinguish which nodes belong to the graph and which ones have been added for consistency of the algebraic operations. The next example illustrates this point.

Example. The adjacency matrices A^E and C^E for the first and third graphs of Fig. 2.7 are:

    A^E = 1 1 1 0 | 1        A^N = 1 | 1
          1 1 1 0 | 2              1 | 2
          1 1 1 0 | 3              1 | 3
          0 0 0 0 | 4              0 | 4

    C^E = 1 1 1 1 | 1        C^N = 1 | 1
          0 1 1 0 | 2              1 | 2
          0 1 0 0 | 3              1 | 3
          0 0 0 0 | 4              1 | 4

where A^N and C^N are the corresponding node vectors. The vertically separated column indicates the node ordering, which applies both to rows and columns. Note that edges incident to node 4 are considered in matrix A^E. As there is no node 4 in A, the corresponding elements in the adjacency matrix are zero. To clearly state that this node does not belong to graph A we have a zero in the fourth position of A^N.

Note that simple graphs (without orientation on edges) can be studied if we restrict to the subspace of symmetric adjacency matrices. In Sec. 9.3 we study how to extend the Matrix Graph Grammars approach to multigraphs and multidigraphs. The difference between a simple digraph and a multidigraph is that a simple digraph allows a maximum of one edge connecting two nodes in each direction, while a multidigraph allows a finite number of them.
(The line graph L(G) of a graph G is a graph in which each vertex of L(G) represents an edge of G, and two nodes in L(G) are adjacent if the corresponding edges share an endpoint. Incidence and adjacency matrices are related through the equation A(L(G)) = B(G)^t B(G) − 2I, where A(L(G)) is the adjacency matrix of L(G), B(G) its incidence matrix and I the identity matrix.)

In the literature, depending mainly on the book, there is some confusion with terminology. At times the term graph applies to multigraphs, while at other times graph refers to simple graphs (also known as relational graphs). Whenever found in this book, and unless otherwise stated, the term graph should be understood as simple digraph.

The basic Boolean operations on graphs are defined component-wise on their adjacency matrices. Let G and H be two graphs with adjacency matrices (g_ij) and (h_ij), i, j ∈ {1, …, n}; then

    G ∨ H = (g_ij ∨ h_ij),   G ∧ H = (g_ij ∧ h_ij),   Ḡ = (¬ g_ij).

Similarly to the ordinary matrix product, based on addition and multiplication by scalars, there is a natural definition of a Boolean product with the same structure but using the Boolean operations and and or.

Definition 2.3.1 (Boolean Matrix Product). For digraphs G and H, let M^G = (g_ij), i, j ∈ {1, …, n}, and M^H = (h_ij), i, j ∈ {1, …, n}, be their respective adjacency matrices. Their Boolean product is an adjacency matrix again, whose elements are defined by

    (M^G ⊙ M^H)_ij = ⋁_{k=1}^{n} ( g_ik ∧ h_kj ).   (2.2)

Element (i, j) of the Boolean product matrix is one if there exists an edge joining node i in digraph G with some node k in the same digraph, and another edge in digraph H starting at k and ending at j. The value is zero otherwise.
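A minimal sketch (ours, not from the book) of the Boolean product (2.2), with matrices as lists of 0/1 rows:

```python
def bool_product(G, H):
    """Boolean matrix product (2.2): (G . H)_ij = OR_k (g_ik AND h_kj)."""
    n = len(G)
    return [[int(any(G[i][k] and H[k][j] for k in range(n)))
             for j in range(n)] for i in range(n)]

A = [[0, 1], [0, 0]]   # edge 1 -> 2
B = [[0, 0], [1, 0]]   # edge 2 -> 1
print(bool_product(A, B))  # [[1, 0], [0, 0]]: the path 1 -> 2 -> 1 exists
```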
If, for example, we want to check whether node j is reachable from node i in n steps or less, we may calculate ⋁_{k=1}^{n} A^(k), where A^(k) = A ⊙ ⋯ ⊙ A (k factors), and see whether element (i, j) is one. (In order to distinguish when we are using the standard or the Boolean product, in the latter exponents will be enclosed between brackets.) We will consider square matrices only, as every node can be either initial or terminal for any edge.

Another useful product operation that can be defined for two simple digraphs G1 and G2 is their tensor product (defined in Sec. 2.4), G = G1 ⊗ G2:

1. The node set is the Cartesian product V(G) = V(G1) × V(G2).
2. Two vertices u1 ⊗ u2 and v1 ⊗ v2 are adjacent if and only if u1 is adjacent to v1 in G1 and u2 is adjacent to v2 in G2.

In Sec. 2.4 we will see that the adjacency matrix of G coincides with the tensor product of the adjacency matrices of G1 and G2.

Definition 2.3.3, Proposition 2.3.4 and the node vector introduced above are not standard in graph theory (in fact, as far as we know, we are introducing them here). They are included in this introductory section because they are simple results very close to what one understands as the "basics" of a theory. Given an adjacency matrix and a vector of nodes, a natural question is whether they define a simple digraph or not.

Definition 2.3.2 (Compatibility). A Boolean matrix M and a vector of nodes N are compatible if they define a simple digraph: no edge is incident to any node that does not belong to the digraph.

An edge incident to some node which does not belong to the graph (which has a zero in the corresponding position of the node vector) is called a dangling edge. In the DPO/SPO approaches this condition is checked when building a direct derivation, and is known as the dangling condition.
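The reachability test via Boolean powers described above can be sketched as follows (ours, not from the book); the Boolean product helper is repeated so the snippet is self-contained:

```python
def bool_product(G, H):
    """(G . H)_ij = OR_k (g_ik AND h_kj), as in (2.2)."""
    n = len(G)
    return [[int(any(G[i][k] and H[k][j] for k in range(n)))
             for j in range(n)] for i in range(n)]

def reachable(A, i, j):
    """Is node j reachable from node i in at most n steps?
    Computes the componentwise or of A^(1), ..., A^(n)."""
    n = len(A)
    power, acc = A, A
    for _ in range(n - 1):
        power = bool_product(power, A)
        acc = [[a or b for a, b in zip(ra, rb)] for ra, rb in zip(acc, power)]
    return bool(acc[i][j])

A = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]  # path 1 -> 2 -> 3
print(reachable(A, 0, 2))  # True
print(reachable(A, 2, 0))  # False
```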
The idea behind the dangling condition is to obtain a closed set of entities, i.e. the deletion of nodes outputs a digraph again (every edge is incident to some node). Proposition 2.3.4 below provides a criterion for testing compatibility for simple digraphs.

Definition 2.3.3 (Norm of a Boolean Vector). Let N = (v_1, …, v_n) be a Boolean vector. Its norm ‖·‖_1 is given by

    ‖N‖_1 = ⋁_{i=1}^{n} v_i.   (2.3)

Proposition 2.3.4. A pair (M, N), where M is an adjacency matrix and N a vector of nodes, is compatible if and only if

    ‖ (M ∨ M^t) ⊙ N̄ ‖_1 = 0,   (2.4)

where t denotes transposition.

Proof. In an adjacency matrix, row i represents the outgoing edges of vertex i, while column j represents the incoming edges of vertex j. Moreover, M_ik ∧ N̄_k = 1 if and only if M_ik = 1 and N_k = 0, and thus the i-th element of the vector M ⊙ N̄ is one if and only if there is a dangling edge in row number i. We have just considered outgoing edges; for incoming ones we have the very similar term M^t ⊙ N̄. To finish the sufficiency part of the proof – necessity is almost straightforward – we or both terms and take norms to detect whether there is a 1. ∎

Remark. We have used in the proof of Proposition 2.3.4 the distributivity of ⊙ over ∨, (M1 ∨ M2) ⊙ M3 = (M1 ⊙ M3) ∨ (M2 ⊙ M3). In addition, we also have the distributive law on the left, i.e. M3 ⊙ (M1 ∨ M2) = (M3 ⊙ M1) ∨ (M3 ⊙ M2). Besides, it will be stated without proof that ‖ω1 ∨ ω2‖_1 = ‖ω1‖_1 ∨ ‖ω2‖_1.

In Chap. 6 we will deal with matching, i.e. finding the left hand side of a graph grammar rule in the initial state (host graph). A matching algorithm is not proposed; our approach assumes that such an algorithm is given.
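Returning to compatibility for a moment, the criterion of Proposition 2.3.4 translates directly into code. A minimal sketch (ours, not from the book), with ⊙ the Boolean product applied to a vector and the norm of (2.3):

```python
def compatible(M, N):
    """Test (2.4): || (M or M^t) . ~N ||_1 == 0."""
    n = len(M)
    S = [[M[i][j] or M[j][i] for j in range(n)] for i in range(n)]  # M or M^t
    notN = [1 - v for v in N]                                       # negated node vector
    # i-th component is 1 iff some edge at node i touches an absent node
    dangling = [int(any(S[i][k] and notN[k] for k in range(n))) for i in range(n)]
    return not any(dangling)  # the norm (2.3) is zero iff no component is 1

M = [[0, 1], [0, 0]]          # edge 1 -> 2
print(compatible(M, [1, 1]))  # True: both endpoints present
print(compatible(M, [1, 0]))  # False: edge 1 -> 2 dangles on node 2
```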
Matching is closely related to the well-known graph-subgraph isomorphism problem (SI), which is an NP-complete decision problem if the number of nodes in the subgraph is strictly smaller than the number of nodes in the graph. We will brush over complexity theory in Sec. 11.2.

2.4 Tensor Algebra

Throughout the book, quantities that can be represented by a letter with subscripts or superscripts attached (A^i_{jk}, for example) will be used, together with some algebraic structure (tensorial structure). This section is devoted to a quick introduction to the topic. Two very good references are [33] (with relations to physics) and the classic book [75].

A tensor is a multilinear application between vector spaces. It is at times interesting to stay at a more abstract level and think of a tensor as a system that fulfills certain notational properties. Systems can be heterogeneous when there are different types of elements, but we will only consider homogeneous systems. Therefore we will speak of systems or tensors interchangeably. The rank (the terms order and valence are commonly used as synonyms) of a system (tensor) is the number of indices it has, taking into account whether they are superscripts or subscripts. For example, A^i_{jk} is (1,2)-valent, or of rank (1,2). Subscripts and superscripts are referred to as indices or suffixes.

The algebraic operations of addition and subtraction apply to systems of the same type and rank. They are defined component-wise, e.g. C^i_{jk} = A^i_{jk} + B^i_{jk}, provided that some additive structure is defined on the elements of the system. We do not follow the Einstein summation convention, which states that when an index appears twice, once in an upper and once in a lower position, it is summed over all its possible values.
The product is obtained by multiplying each component of the first system with each component of the second system, e.g. C^{imnl}_j = A^i_j ⊗ B^{mnl}. Such a product is called the outer product or tensor product. The rank of the result is the sum of the ranks of the factors, and it inherits all the indices of its factors. All linear relations are satisfied, i.e. for v1, v2 ∈ V, w ∈ W and v ⊗ w ∈ V ⊗ W the following identities are fulfilled:

1. (v1 + v2) ⊗ w = v1 ⊗ w + v2 ⊗ w.
2. cv ⊗ w = v ⊗ cw = c (v ⊗ w).

To characterize tensor products categorically, note that there is a natural isomorphism between all bilinear maps from E × F to G and all linear maps from E ⊗ F to G. E ⊗ F has all and only the relations that are necessary to ensure that a homomorphism from E ⊗ F to G will be linear (this is a universal property). For vector spaces this is quite straightforward, but in the case of R-modules (modules over a ring R) it is normally accomplished by taking the quotient with respect to appropriate submodules.

Example. The Kronecker product is a special case of the tensor product that we will use in Chap. 10. Given matrices A = (a_{i1 j1})_{m×n} and B = (b_{i2 j2})_{p×q}, it is defined to be

    C = A ⊗ B = (c_ij)_{mp×nq}   where   c_ij = a_{i1 j1} b_{i2 j2},   (2.5)

with i = (i1 − 1) p + i2 and j = (j1 − 1) q + j2. The notation A = (a_ij)_{m×n} denotes a matrix with m rows and n columns, i.e. i ∈ {1, …, m} and j ∈ {1, …, n}. As an example:

    A = ( a11  a12 )_{1×2},   B = ( b11  b12 )        C = A ⊗ B = ( a11 b11  a11 b12  a12 b11  a12 b12 )
                                  ( b21  b22 )_{2×2},               ( a11 b21  a11 b22  a12 b21  a12 b22 )_{2×4}

Note that the Kronecker product of the adjacency matrices of two graphs is the adjacency matrix of their tensor product graph (see Sec. 2.3 for its definition).

The operation of contraction happens when an upper and a lower index are set equal and summed over, e.g. C^{imnl}_j ↦ C^{mnl} = Σ_{j=1}^{N} C^{jmnl}_j = Σ_{i=j} C^{imnl}_j.
For example, the standard multiplication of a vector by a matrix is a contraction: consider a matrix A^i_j and a vector v^k with i, j, k ∈ {1, …, n}; then matrix multiplication can be performed by making j and k equal and summing, u^i = Σ_{j=1}^{n} A^i_j v^j. The inner product is represented by ⟨·, ·⟩ and is obtained in two steps:

1. Take the outer product of the tensors.
2. Perform a contraction on two of their indices.

In Sec. 2.5 we will extend this notation to cope with the representation of graph grammar rules.

Upper indices are called contravariant and lower indices covariant. Contravariance is associated to the tangent bundle (tangent space) of a variety and corresponds, so to speak, to columns. Covariance is the dual notion; it is associated to the cotangent bundle (normal space) and to rows. As an example, if we have a vector A in a three dimensional space with basis {E_1, E_2, E_3}, then it can be represented in the form A = a^1 E_1 + a^2 E_2 + a^3 E_3. The components a^i can be calculated via a^i = ⟨A, E^i⟩ with ⟨E^i, E_j⟩ = δ^i_j, where the Kronecker delta function is 1 if i = j and zero if i ≠ j. The bases {E_i} and {E^i} are called reciprocal or dual. We will not enter into the representation of δ in integral form or its relation with the Dirac delta function, of fundamental importance in distribution theory, functional analysis (see Sec. 2.5) and quantum mechanics. The Kronecker delta can be generalized to an (n, n)-valent tensor:

    δ^{j1,…,jn}_{i1,…,in} = ∏_{k=1}^{n} δ^{jk}_{ik}.   (2.6)

Besides the Kronecker delta there are other very useful tensors, such as the metric tensor, which can be informally introduced by g_ij = E_i · E_j and g^ij = E^i · E^j. Note that g raises or lowers indices, thus moving from covariance to contravariance and vice versa.
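The Kronecker product (2.5) and contraction can be illustrated with a minimal sketch (ours, not from the book), using the index convention i = (i1 − 1)p + i2, j = (j1 − 1)q + j2 in zero-based form:

```python
def kronecker(A, B):
    """Kronecker product C = A (x) B: c_ij = a_{i1 j1} * b_{i2 j2}."""
    m, n, p, q = len(A), len(A[0]), len(B), len(B[0])
    C = [[0] * (n * q) for _ in range(m * p)]
    for i1 in range(m):
        for j1 in range(n):
            for i2 in range(p):
                for j2 in range(q):
                    C[i1 * p + i2][j1 * q + j2] = A[i1][j1] * B[i2][j2]
    return C

def contract(A, v):
    """Contraction of the outer product A^i_j v^k over j = k:
    the usual matrix-vector product u^i = sum_j A^i_j v^j."""
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

A = [[1, 2]]            # 1 x 2 matrix, as in the symbolic example above
B = [[0, 5], [6, 7]]    # 2 x 2 matrix
print(kronecker(A, B))                      # [[0, 5, 0, 10], [6, 7, 12, 14]]
print(contract([[1, 2], [3, 4]], [5, 6]))   # [17, 39]
```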
Related to δ and to group theory is the important Levi-Civita symbol:

    ε_σ = +1 if σ is an even permutation, −1 if σ is an odd permutation, 0 otherwise,   (2.7)

where σ = (i1 … in) is a permutation of (1 … n). See Sec. 2.6 for definitions and further results. The symbols δ and ε can be related through the matrix A = (a_kl) = (δ_{i_k j_l}):

    ε_{i1…in} ε_{j1…jn} = det(A).   (2.8)

2.5 Functional Analysis

Functional analysis is a branch of mathematics focused on the study of functions – operators – in infinite dimensional spaces (although its results also apply to finite dimensional spaces). Besides the algebraic structure (normally a vector space, but at times a group), some other ingredients are normally added, such as an inner product (Hilbert spaces), a norm (Banach spaces), a metric (metric spaces) or just a topology (topological vector spaces).

An operator is just a function, but the term is normally employed to call attention to some special aspect. Examples of operators in mathematics are differential and integral operators, linear operators (linear transformations), the Fourier transform, etc. In this book we will call operators those functions that act on functions and whose image is a function. Operators will be used, e.g. in Chap. 6, to modify productions in order to get a production or a sequence of productions. We will need to change productions as commented above, and our inspiration comes from operator theory and functional analysis, but we would like to put it forward in a quantum mechanics style. So, although it will not be used exactly as is, we will give a very brief introduction to Hilbert and Banach spaces, bra-ket notation and duality.

A Hilbert space H is a vector space over a field K, complete with respect to Cauchy sequences (every Cauchy sequence has a limit in H), together with a scalar (or inner) product.
Completeness ensures that the limit of a convergent sequence is in the space, facilitating several definitions from analysis (note that a Hilbert space can be infinite-dimensional). The inner product – ⟨u, v⟩, u, v ∈ H – equips the structure with the notions of distance and angle (in particular perpendicularity). From a geometric point of view, the scalar product can be interpreted as a projection, whereas analytically it can be seen as an integral. (The axioms for an inner product ⟨·, ·⟩ : H × H → K are: 1. ∀x, y ∈ H, ⟨x, y⟩ equals the conjugate of ⟨y, x⟩. 2. ∀a, b ∈ K, ∀x1, x2, y ∈ H, ⟨a x1 + b x2, y⟩ = a ⟨x1, y⟩ + b ⟨x2, y⟩. 3. ∀x ∈ H, ⟨x, x⟩ ≥ 0, and ⟨x, x⟩ = 0 if and only if x = 0.)

The inner product gives rise to a norm ‖·‖ via ‖x‖² = ⟨x, x⟩, x ∈ H. Any norm can be interpreted as a measure of the size of the elements of the vector space. Every inner product defines a norm but, in general, the opposite is not true, i.e. a norm is a weaker concept than a scalar product.

The relationship between row and column vectors can be generalized from an abstract point of view through dual spaces. The dual space H* of a Hilbert space H over the field K has as elements x* ∈ H*, linear applications with domain (initial set) H and codomain (image) the underlying field K, x* : H → K. The dual space becomes a vector space by defining, for x1*, x2* ∈ H* and x ∈ H, the addition (x1* + x2*)(x) = x1*(x) + x2*(x) and, for k ∈ K, the scalar multiplication (k x*)(x) = x*(k x). Using tensor algebra terminology (see Sec. 2.4), elements of H are called covariant and elements of H* contravariant. Note how in ⟨x, y⟩ it is possible to think of x as an element of the vector space and y as an element of the dual space. Any Hilbert space is isomorphic (or anti-isomorphic) to its dual space, H ≅ H*, which is the content of the Riesz representation theorem.
This is particularly relevant to us because it is a justification of the Dirac bra-ket notation that we will also use. The Riesz representation theorem can be stated in the following terms: let H be a Hilbert space, H* its dual, and define φ_x(y) = ⟨x, y⟩, φ_x ∈ H*. Then the mapping Φ : H → H* such that x ↦ φ_x is an isometric isomorphism. This means that Φ is a bijection and that ‖x‖ = ‖φ_x‖.

We will very briefly introduce Banach spaces to illustrate how notions and ideas from Hilbert spaces, especially notation, extend in a more or less natural way. A complete vector space (complete in the same sense as for Hilbert spaces) together with a norm is known as a Banach space, B. (The axioms for a norm ‖·‖ : B → K are: 1. ∀x, y ∈ B, ‖x + y‖ ≤ ‖x‖ + ‖y‖. 2. ∀a ∈ K, ∀x ∈ B, ‖ax‖ = |a| ‖x‖. 3. ∀x ∈ B, ‖x‖ ≥ 0, and ‖x‖ = 0 if and only if x = 0.) Associated to any Banach space there exists its dual space, B*, defined as before. Contrary to Hilbert spaces, a Banach space is not in general isometrically isomorphic to its dual space.

It is possible to define a distance (also called a metric) out of a norm: d(x, y) = ‖x − y‖. Even though there is no such geometrical intuition of projection or angles, it is still possible to use the notation we are interested in. Given x ∈ B, x* ∈ B*, instead of writing x*(x) (the result is an element of K), at times ⟨x, x*⟩ is preferred. Although the space and its dual live at different levels, we would like to recover this geometrical intuition of projection: in some (very nice) sense, the result of x*(x) is the projection of x over x*. The same applies to an operator T acting on a Banach space B, T : B → B. Suppose f, g ∈ B; then g = T(f) = ⟨f, T⟩. This is closer to our situation, so the application of a production (see Sec. 4.1 for definitions) can be written

    R = ⟨L, p⟩
(2.9)

The left part is sometimes called bra and the right part ket: $\langle bra, ket \rangle$. Besides dual elements, the adjoint of an operator is also represented using asterisks. In our case, the adjoint operator of $T$, represented by $T^*$, is formally defined by the identity:

$\langle L, T^*(p) \rangle = \langle TL, p \rangle$.   (2.10)

Roughly speaking, $T^*$ is an operator (a function) that modifies a production, its output being a production again, so the left hand side in (2.10) is equivalent to $T^*(p)(L)$, and the right hand side is just $p(TL)$. Note that $T^*(p)$ is a production and $TL$ is a simple digraph.

In quantum mechanics the possible states of a quantum mechanical system are represented by unit vectors – state vectors – in a Hilbert space $H$ or state space (equivalently, points in a projective Hilbert space). Each observable – property of the system – is defined by a linear operator acting on the elements of the state space. Each eigenstate of an observable corresponds to an eigenvector of the operator, and the eigenvalue to the value of the observable in that eigenstate. An interpretation of $\langle \psi | \varphi \rangle$ is the probability amplitude for the state $\psi$ to collapse into the state $\varphi$, i.e. the projection of $\psi$ over $\varphi$. In this case, the notation can be generalized to metric spaces, topological vector spaces and even vector spaces without any topology (close to our case, as we will deal with graphs without introducing notions such as metrics, scalar products, etc). Two recommended references are [37] and [68].

Footnote 28: See Sec. 4.1 for definitions.

This digression on quantum mechanics is justified because along the present contribution we would like to think of graph grammars as having a static definition which provokes a dynamic behaviour, and of the duality between state and observable.
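The defining identity (2.10) for the adjoint has a familiar finite-dimensional analogue, $\langle x, A^{T} y \rangle = \langle Ax, y \rangle$ for a real matrix $A$, which a few lines of Python can check numerically (helper names are ours, for illustration only):

```python
def matvec(A, x):
    """Apply matrix A (given as a list of rows) to vector x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def transpose(A):
    """The adjoint of a real matrix is its transpose."""
    return [list(col) for col in zip(*A)]

def dot(u, v):
    """Euclidean pairing <u, v>."""
    return sum(a * b for a, b in zip(u, v))

A = [[1, 2], [3, 4]]
x, y = [5, -1], [2, 7]
lhs = dot(x, matvec(transpose(A), y))  # <x, A* y>
rhs = dot(matvec(A, x), y)             # <A x, y>
print(lhs, rhs)  # 83 83 -- the adjoint identity holds
```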
Besides the use of the notation, we would like to keep some "physical" (mechanics) intuition whenever possible.

2.6 Group Theory

One way to introduce group theory is to define it as the part of mathematics that studies those structures for which the equation $a * x = b$ has a unique solution. There is a very nice definition due to James Newman [57] that I'd like to quote:

The theory of groups is a branch of mathematics in which one does something to something and then compares the results with the result of doing the same thing to something else, or something else to the same thing.

We will be interested in groups, mainly in their notation and basic results, when dealing with sequentialization in Chaps. 4 and 7.

A group $G$ is a set together with an operation $(G, *)$ that satisfies the following axioms:
1. Closure: $\forall a, b \in G$, $a * b \in G$.
2. Associativity: $\forall a, b, c \in G$, $a * (b * c) = (a * b) * c$.
3. Identity element: $\exists e \in G$ such that $a * e = e * a = a$.
4. Inverse element: $\forall a \in G$, $\exists b \in G$ such that $a * b = b * a = e$.

Actually, the third and fourth axioms can be weakened, as a one-sided identity and a one-sided inverse suffice, but we think it is worth stressing the fact that if they exist then they work on both sides. Normally, the inverse element of $a$ is written $a^{-1}$. At times the identity element is represented by $1_G$ or $0_G$, depending on the notation (Abelian or non-Abelian). A group is called Abelian or commutative if $\forall a, b \in G$, $a * b = b * a$.

A group $S$ inside a group $G$ is called a subgroup. If this is the case, we need $S$ to be closed under the group operation; it must also contain the identity element $e$, and every element in $S$ must have an inverse in $S$. If $S \subseteq G$ and for all $a, b \in S$ we have that $a * b^{-1} \in S$, then $S$ is a subgroup. Lagrange's theorem states that the order of a subgroup (number of elements) necessarily divides the order of the group.
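The axioms above can be verified exhaustively for a small finite group; the following Python sketch checks them for $\mathbb{Z}_6 = (\{0, \ldots, 5\}, + \bmod 6)$ and illustrates the subgroup criterion $a * b^{-1} \in S$ together with Lagrange's theorem (an example of ours, not from the book):

```python
from itertools import product

n = 6
G = range(n)
op = lambda a, b: (a + b) % n        # the group operation of Z_6

# Closure and associativity, checked exhaustively
assert all(op(a, b) in G for a, b in product(G, G))
assert all(op(a, op(b, c)) == op(op(a, b), c) for a, b, c in product(G, G, G))

# Two-sided identity and inverses
e = next(x for x in G if all(op(a, x) == op(x, a) == a for a in G))
inv = {a: next(b for b in G if op(a, b) == e) for a in G}

# S = {0, 2, 4} passes the subgroup criterion a * b^{-1} in S ...
S = {0, 2, 4}
assert all(op(a, inv[b]) in S for a in S for b in S)
# ... and, as Lagrange's theorem predicts, |S| divides |G|
print(e, len(G) % len(S))  # 0 0
```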
We are almost exclusively interested in groups of permutations: for a given sorted set, a change of order is called a permutation. This does not reduce the scope because, by Cayley's theorem, every group is isomorphic to some group of permutations.

A transposition is a permutation that exchanges the position of two elements whilst leaving all other objects unmoved. It is known that any permutation is equivalent to a product of transpositions. Furthermore, if a permutation can result from an odd number of transpositions then it cannot result from an even number of transpositions, and vice versa. A permutation is even if it can be produced by an even number of exchanges and odd in the other case. This is called parity. The signature of a permutation $\sigma$, $\mathrm{sgn}(\sigma)$, is $+1$ if the permutation is even and $-1$ if it is odd. This is the Levi-Civita symbol as introduced in Sec. 2.4, if it is extended for non-injective maps with value zero.

Any permutation can be decomposed into cycles. A cycle is a closed chain inside a permutation (so it is a permutation itself) which enjoys some nice properties, among which we highlight:

• Cycles inside a permutation can be chosen to be disjoint.
• Disjoint cycles commute.

Any permutation can be written as a two-row matrix where the first row represents the original ordering of elements and the second the order once the permutation is applied.

Example. The permutation $\sigma$ can be decomposed into the product of three cycles:

$\sigma = \begin{pmatrix} 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ 3 & 5 & 7 & 8 & 2 & 4 & 1 & 6 \end{pmatrix} = (1\,3\,7)(2\,5)(4\,8\,6)$.

Note that a decomposition into transpositions is not unique, because any decomposition into transpositions would do (and there are infinitely many). If the permutation turns out to be a cycle, then a clearer notation can be used: write the elements in a row, in order, each one followed by the element it maps to in the permutation.
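Cycle decomposition and the signature can be computed mechanically; the Python sketch below (our own illustration) recovers the decomposition of the permutation $\sigma$ from the example and its parity via the count of even-length cycles:

```python
def cycles(perm):
    """Disjoint cycle decomposition of a permutation given as a dict i -> sigma(i)."""
    seen, result = set(), []
    for start in sorted(perm):
        if start in seen:
            continue
        cycle, i = [], start
        while i not in seen:
            seen.add(i)
            cycle.append(i)
            i = perm[i]
        if len(cycle) > 1:               # fixed points are customarily omitted
            result.append(tuple(cycle))
    return result

def sign(perm):
    """sgn(sigma): the permutation is odd iff its disjoint-cycle
    factorization contains an odd number of even-length cycles."""
    return (-1) ** sum(1 for c in cycles(perm) if len(c) % 2 == 0)

# The permutation of the example: 1..8 |-> 3 5 7 8 2 4 1 6
sigma = dict(zip(range(1, 9), [3, 5, 7, 8, 2, 4, 1, 6]))
print(cycles(sigma))  # [(1, 3, 7), (2, 5), (4, 8, 6)]
print(sign(sigma))    # -1: exactly one even-length cycle, so sigma is odd
```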
In the example above we begin with 1 and note that 1 goes to 3, which goes to 7, which goes back to 1, and hence it is written $(1\,3\,7)$.

A cycle with an even number of elements is an odd permutation and a cycle with an odd number of elements is an even permutation. In practice, in order to determine whether a given permutation is even or odd, one writes the permutation as a product of disjoint cycles: the permutation is odd if and only if this factorization contains an odd number of even-length cycles.

2.7 Summary and Conclusions

In this chapter we have quickly reviewed some basic facts of mathematics that will be used throughout the rest of the book: the basics of first order, second order and monadic second order logics; some constructions of category theory such as pushouts and pullbacks, together with the introduction of some categories; graph theory basic definitions and compatibility; tensor algebra and functional analysis notations; and some basic group theory, paying some attention to permutations.

The Internet is full of very good web pages introducing these branches of mathematics with deeper explanations and plenty of examples. It is not possible to give an exhaustive list of all web pages visited to make this chapter. Nevertheless, I would like to highlight the very good job being performed by the communities at http://planetmath.org/ and http://www.wikipedia.org/.

The next chapter summarizes current approaches to graph grammars and graph transformation systems, so it is still introductory. We will put our hands on Matrix Graph Grammars in Chap. 4.

3 Graph Grammars Approaches

Before moving to Matrix Graph Grammars it is necessary to take a look at other approaches to graph transformation to "get the taste", which is the aim of this chapter.
We will see the basic foundations, leaving comparisons of more advanced topics (like application conditions) to sporadic remarks in future chapters.

Sections 3.1 and 3.2 are devoted to categorical approaches, probably the most developed formalizations of graph grammars. On the theoretical side, very nice ideas have put at our disposal the possibility of using category theory and its generalization power to study graph grammars; even more so, a big effort has been undertaken in order to fill the gap between category theory and practice with tools such as AGG (see [22]). Please refer to [1] for a detailed discussion and comparison of tools.

In Secs. 3.3 and 3.4 two formalisms completely different from the categorical approach are summarized, at times called set-theoretic or even algorithmic approaches. They are in some sense closer to implementation than those using category theory. There has been a lot of research on these two essential approaches, so unfortunately we will just scratch the surface.

Interestingly, it is possible to study graph transformation using logics, providing us with all the powerful methods from this branch of mathematics, monadic second order logics in particular. We will brush over this brilliant approach in Sec. 3.5.

To finish this review we will briefly touch on the very interesting relation-algebraic approach in Sec. 3.6, which has not attracted as much attention as one should expect. Finally, the chapter is closed with a summary in Sec. 3.7.

In this chapter we abuse bold letters with the intention of facilitating the search for some definition or result. It is assumed that this chapter, as well as Chap. 2, will be mainly used for reference.
3.1 Double PushOut (DPO)

3.1.1 Basics

In the DPO approach to graph rewriting, a direct derivation is represented by a double pushout in category Graph (multigraphs and total graph morphisms). Productions can be defined as three graph components, separating the elements that should be preserved from the left and right hand sides of the rule.

A production $p : (L \xleftarrow{l} K \xrightarrow{r} R)$ consists of a production name $p$ and a pair of injective graph morphisms $l : K \to L$ and $r : K \to R$. Graphs $L$, $R$ and $K$ are respectively called the left-hand side (LHS), right-hand side (RHS) and the interface of $p$. Morphisms $l$ and $r$ are usually injective and can be taken to be inclusions without loss of generality.

Fig. 3.1. Example of Simple DPO Production

The interface $K$ of a production consists of the elements that should be preserved by the production application, while elements in $L \setminus K$ are deleted and elements of $R \setminus K$ are added. Figure 3.1 shows a simple DPO production named del, which can be applied if a path of three nodes is found. If so, the production eliminates the last node and all its edges and creates a loop edge on the second node.

A direct derivation can be defined as an application of a production to a graph through a match by constructing two pushouts. A match is a total morphism from the left hand side of the production onto the host graph, i.e. it is the operation of finding the LHS of the grammar rule in the host graph. Thus, given a graph $G$, a production $p : (L \xleftarrow{l} K \xrightarrow{r} R)$ and a match $m : L \to G$, a direct derivation from $G$ to $H$ using $p$ (based on $m$) exists if and only if the diagram in Fig. 3.2 can be constructed, where both squares are required to be pushouts in category Graph. In Fig. 3.2, red dotted arrows represent the morphisms that must be defined in order to close the diagram, i.e. to construct the pushouts. $D$ is called the context graph.
In particular, if the context graph cannot be constructed then the rule cannot be applied. A direct derivation is written $G \overset{p,m}{\Longrightarrow} H$, or simply $G \Longrightarrow H$ if the production and the matching are known from context.

Fig. 3.2. Direct Derivation as DPO Construction

For example, Fig. 3.1 shows the application of rule del to a graph. Morphisms $m$, $d$ and $m^*$ are depicted by showing the correspondence of the vertices in the production and the graph.

In order to apply a production to a graph $G$, a pushout complement has to be calculated to obtain graph $D$. The existence of this pushout complement is guaranteed if the so-called dangling and identification conditions are satisfied. The first one establishes that a node in $G$ cannot be deleted if this causes dangling edges. The second condition states that two different nodes or edges in $L$ cannot be identified (by means of a non-injective match) as a single element in $G$ if one of the elements is deleted and the other is preserved. Moreover, the injectivity of $l : K \to L$ guarantees the uniqueness of the pushout complement. The identification condition plus the dangling condition is at times known as the gluing condition.

In the example in Fig. 3.1 the match $(1, 2, 3) \mapsto (a, b, c)$ does not fulfill the dangling condition, as the deletion of node $c$ would make edges $(a, c)$ and $(c, d)$ become dangling, so the production cannot be applied at this match. One example (for SPO, but it can be easily translated into DPO) in which the identification condition fails is depicted to the right of Fig. 3.7 on p. 50.

3.1.2 Sequentialization and Parallelism

A graph grammar can be defined as $\mathcal{G} = \langle \{ p : (L \xleftarrow{l} K \xrightarrow{r} R) \}_{p \in P}, G_0 \rangle$ (see [11], Chap. 3), where $\{ p : (L \xleftarrow{l} K \xrightarrow{r} R) \}_{p \in P}$ is a family of productions indexed by their names and $G_0$ is the starting graph of the grammar.
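The dangling condition discussed above is easy to state operationally: a rule that deletes a node must also delete every edge incident to it in the host graph. A Python sketch (the concrete edge set mimics the running example and is our assumption, as is the set-based representation):

```python
def dangling_ok(host_edges, deleted_nodes, deleted_edges):
    """DPO dangling condition: every host edge touching a deleted node
    must itself be deleted by the rule, otherwise it would dangle."""
    return all(e in deleted_edges
               for e in host_edges
               if e[0] in deleted_nodes or e[1] in deleted_nodes)

# A host graph in the spirit of the running example (edge set assumed)
host_edges = {("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")}

# A rule deleting node c that only accounts for edge (b, c) is rejected:
print(dangling_ok(host_edges, {"c"}, {("b", "c")}))  # False: (a,c), (c,d) dangle
# Deleting every incident edge as well makes the application legal:
print(dangling_ok(host_edges, {"c"}, {("b", "c"), ("a", "c"), ("c", "d")}))  # True
```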
The semantics of the grammar is the set of all reachable graphs that can be obtained by successively applying the rules in $\mathcal{G}$. Events changing a system state can thus be modeled using graph transformation rules.

In real systems, parallel actions can take place. Two main approaches can be followed in order to describe and analyze parallel computations. In the first one, parallel actions are sequentialized, giving rise to different interleavings (for example, a single CPU simulating multitasking). In the second approach, called explicit parallelism, actions are really simultaneous (for example, more than one CPU performing several tasks).

Fig. 3.3. Parallel Independence

In the interleaving approach, two actions (rule applications) are considered to be parallel if they can be performed in any order yielding the same result. This can be understood in two different ways.

The first interpretation is called parallel independence and states that two alternative direct derivations $H_1 \overset{p_1}{\Longleftarrow} G \overset{p_2}{\Longrightarrow} H_2$ are independent if there are direct derivations such that $H_1 \overset{p_2}{\Longrightarrow} X \overset{p_1}{\Longleftarrow} H_2$ (see Fig. 3.3). That is, both derivations are not in conflict, but one can be postponed after the other. It can be characterized using morphisms in a categorical style, saying that two direct derivations (as those depicted in Fig. 3.3) are parallel independent if and only if

$\exists i : L_1 \to D_2, \; j : L_2 \to D_1 \;\mid\; l_2^* \circ i = m_1, \; l_1^* \circ j = m_2$.   (3.1)

If one element is preserved by one derivation but deleted by the other, then the latter is said to be weakly parallel independent of the first (this is characterized in equation (3.4)). Thus, parallel independence can be defined as mutual weak parallel independence.
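At the level of matched elements, the interleaving criterion can be sketched as follows: the two matches may only overlap in items preserved by both rules. This set-based Python check is our simplification of condition (3.1), not the categorical formulation itself:

```python
def parallel_independent(match1, deleted1, match2, deleted2):
    """Interleaving criterion, set-based: the two matches may only
    overlap in items that both derivations preserve."""
    overlap = match1 & match2
    return not (overlap & (deleted1 | deleted2))

m1, del1 = {"a", "b"}, {"a"}   # p1 matches {a, b} and deletes a
m2, del2 = {"b", "c"}, {"c"}   # p2 matches {b, c} and deletes c
print(parallel_independent(m1, del1, m2, del2))  # True: overlap {b} is preserved

m3, del3 = {"a", "c"}, set()   # p3 reads a, which p1 deletes
print(parallel_independent(m1, del1, m3, del3))  # False: they conflict on a
```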
On the other hand (the second interpretation), two direct derivations are called sequential independent if they can be performed in different order with no changes in the result. That is, both $G \overset{p_1}{\Longrightarrow} H_1 \overset{p_2}{\Longrightarrow} X$ and $G \overset{p_2}{\Longrightarrow} H_2 \overset{p_1}{\Longrightarrow} X$ yield the same result (see Fig. 3.4). Again, categorically we say that two derivations are sequential independent if and only if

$\exists i : R_1 \to D_2, \; j : L_2 \to D_1 \;\mid\; l_2^* \circ i = m_1^*, \; r_1^* \circ j = m_2$.   (3.2)

Mind the similarities with confluence (problem 5) and local confluence.

Fig. 3.4. Sequential Independence

The conditions for sequential and parallel independence are given in the Local Church-Rosser Theorem ([11], Chaps. 3 and 4). It says that two alternative parallel derivations are parallel independent if their matches only overlap in items that are preserved. Two consecutive direct derivations are sequential independent if the match of the second does not depend on elements generated by the first, and the second derivation does not delete an item that has been accessed by the first. Moreover, if two alternative direct derivations are parallel independent, their concatenation is sequential independent, and vice versa.

The explicit parallelism view [2; 11] abstracts from any application order (no intermediate states are produced). In this approach, a derivation is modeled by a single production, called the parallel production. Given two productions $p_1$ and $p_2$, the parallel production $p_1 + p_2$ is the disjoint union of both. The application of such a production is denoted as $G \overset{p_1 + p_2}{\Longrightarrow} X$. Two problems arise here: the sequentialization of a parallel production (analysis), and the parallelization of a derivation (synthesis).
In DPO, the parallelism theorem states that a parallel derivation $G \overset{p_1 + p_2}{\Longrightarrow} X$ can be sequentialized into two derivations ($G \overset{p_1}{\Longrightarrow} H_1 \overset{p_2}{\Longrightarrow} X$ and $G \overset{p_2}{\Longrightarrow} H_2 \overset{p_1}{\Longrightarrow} X$) that are sequential independent. Conversely, two derivations can be put in parallel if they are sequentially independent.

This is a limiting case of amalgamation, which specifies that if there are two productions $p_1$ and $p_2$, then the amalgamated production $p_1 \oplus_{p_0} p_2$ is defined such that productions $p_1$ and $p_2$ can be applied in parallel, while the amalgamated production $p_0$ (which represents common parts of both) should be applied only once.

The concurrency theorem (footnote 1) deals with the concurrent execution of productions that need not be sequentially independent. Hence, according to previous results, it is not possible to apply them in parallel. Anyway, they can be applied concurrently using a so-called E-concurrent graph production, $p_1 *_E p_2$. We will omit the details, which can be consulted in [22]. Let the sequence $G \overset{p_1, m_1}{\Longrightarrow} H_1 \overset{p_2, m_2}{\Longrightarrow} H_2$ be given. It is possible to construct a direct derivation $G \overset{p_1 *_E p_2}{\Longrightarrow} H_2$. The basic idea is to relate both productions through an overlapping graph $E$, which is a subgraph of $H_1$, $E = m_1^*(R_1) \cup m_2(L_2)$. The corresponding restrictions $m_1^* : R_1 \to E$ and $m_2 : L_2 \to E$ of $m_1^*$ and $m_2$, respectively, must be jointly surjective. Also, any direct derivation $G \overset{p_1 *_E p_2}{\Longrightarrow} H_2$ can be sequentialized.

Footnote 1: The concurrency theorem appeared in [22] for the first time, to the best of our knowledge. A somehow related concept – more general, though – was introduced simultaneously for Matrix Graph Grammars in [60]. We will review it in Sec. 7.4.

3.1.3 Application Conditions

We will make a brief overview of graph constraints and application conditions.
In [14], graph constraints and application conditions were developed for the Double Pushout (DPO) approach to graph transformation and generalized to adhesive HLR categories in [22]. Atomic constraints were defined to be either positive or negative.

A positive atomic graph constraint $PC(c)$ (where $c$ is an arbitrary morphism $c : P \to C$) is satisfied by a graph $G$ if for every injective morphism $m_P : P \to G$ there exists some injective morphism $m_C : C \to G$ such that $m_P = m_C \circ c$, mathematically written $G \models PC(c)$ (see the left part of Fig. 3.5). It can be interpreted as: graph $C$ must exist in $G$ if graph $P$ is found in $G$.

A graph morphism $m_L : L \to G$ satisfies the positive atomic application condition $P(c, \bigvee_{i=1}^{n} c_i)$ (with $c : L \to P$ and $c_i : P \to C_i$) if, for all associated morphisms $m_P : P \to G$, there exists some $m_{C_i} : C_i \to G$ such that $G \models PC(c_i)$. The notation used is $m_L \models P(c, \bigvee_{i=1}^{n} c_i)$, having also a similar interpretation to that of graph constraints: suppose $L$ is found in $G$; if $P$ is also in $G$ then there must be some $C_i$ in $G$. Refer to the diagram on the right side of Fig. 3.5. A positive graph constraint is a Boolean formula over positive atomic graph constraints. Positive application conditions, negative application conditions and negative graph constraints are defined similarly.

Fig. 3.5. Generic Application Condition Diagram

Finally, an application condition $AC(p) = (A_L, A_R)$ for a production $p : L \to R$ consists of a left application condition $A_L$ over $L$ (also known as precondition) and a right application condition or postcondition $A_R$ over $R$. A graph transformation satisfies the application condition if the match satisfies $A_L$ and the comatch satisfies $A_R$.
In [14; 32] it is shown that graph constraints can be transformed into postconditions, which eventually can be translated into preconditions. In this way, it is possible to ensure that, starting with a host graph that meets certain restrictions, the application of the production will output a graph that still satisfies the same restrictions.

The DPO approach has been embedded in the weak adhesive HLR categorical approach, which we will shortly review in the following subsection.

3.1.4 Adhesive HLR Categories

This section finishes with a celebrated generalization of DPO. It was during 2004 that adhesive HLR categories were defined by merging two striking ideas: adhesive categories [43] and high level replacement systems [16; 17]. See Sec. 2.2 for a quick overview of category theory.

Basic definitions are extended almost immediately to adhesive HLR systems $(\mathcal{C}, \mathcal{M})$. A production $p : (L \xleftarrow{l} K \xrightarrow{r} R)$ consists of three objects $L$, $K$ and $R$ – the left hand side, the gluing object and the right hand side, respectively – and morphisms $l : K \to L$ and $r : K \to R$ with $l, r \in \mathcal{M}$. There is a slight change in notation: the term derivation is substituted by transformation, and direct derivation by direct transformation. Adhesive HLR grammars and languages are defined in the usual way.

In order to apply a production we have to construct the pushout complement, and a necessary and sufficient condition for it is the gluing condition. For adhesive HLR systems this is possible if we can construct initial pushouts, which is an additional requirement (it does not follow from the axioms of adhesive HLR categories): a match $m : L \to G$ satisfies the gluing condition with respect to a production $p : (L \xleftarrow{l} K \xrightarrow{r} R)$ if, for the initial pushout over $m$ in Fig. 3.6, there is a morphism $\bar{f} : X \to K$ such that $l \circ \bar{f} = f$.
Parallel and sequential independence are defined analogously to what has been presented in Sec. 3.1, and the local Church-Rosser and the parallelism theorems remain valid.

3.2 Other Categorical Approaches

This section presents other categorical approaches, such as single pushout (SPO) and pullback, and compares them with DPO (Sec. 3.1).

Fig. 3.6. Gluing Condition

In the single pushout approach (SPO) to graph transformation, rules are modeled with two component graphs ($L$ and $R$) and direct derivations are built with one pushout (which performs the gluing and the deletion). SPO relies on category GraphP of graphs and partial graph morphisms. An SPO production $p$ can be defined as $p : (L \xrightarrow{r} R)$, where $r$ is an injective partial graph morphism. Those elements for which there is no image defined are deleted, those for which there is an image are preserved and those that do not have a preimage are added.

A match for a production $p$ in a graph $G$ is a total morphism $m : L \to G$. Given a production $p$ and a match $m$ for $p$ in $G$, the direct derivation from $G$ is the pushout of $p$ and $m$ in GraphP. As in DPO, a derivation is just a sequence of direct derivations.

The left part of Fig. 3.7 shows an example of the rule in Fig. 3.1, but expressed in the SPO approach. The production is applied to the same graph $G$ as in Fig. 3.2 but at a different match. An important difference with respect to DPO is that in SPO there is no dangling condition: any dangling edge is deleted (so rules may have side effects). In this example, node $c$ and edges $(a, c)$ and $(c, d)$ are deleted. In addition, in case of a conflict with the identification condition due to a non-injective matching, the conflicting elements are deleted.
Due to the way in which SPO has been defined, even though the matching from the LHS into the host graph is a total morphism, the RHS matching can be a partial morphism (see the example to the right of Fig. 3.7). In order to guarantee that all matchings are total it is necessary to ask for the conflict-free condition: a total morphism $m : L \to G$ is conflict free for a production $p : L \to R$ if and only if

$m(x) = m(y) \Longrightarrow [x, y \in \mathrm{dom}(p) \text{ or } x, y \notin \mathrm{dom}(p)]$.   (3.3)

Fig. 3.7. SPO Direct Derivation

Results for explicit parallelism are slightly different in SPO. In this approach, a parallel direct derivation $G \overset{p_1 + p_2}{\Longrightarrow} X$ can be sequentialized into $G \overset{p_1}{\Longrightarrow} H_1 \overset{p_2}{\Longrightarrow} X$ if $G \overset{p_2}{\Longrightarrow} H_2$ is weakly parallel independent of $G \overset{p_1}{\Longrightarrow} H_1$ (and similarly for the other sequentialization). As this condition may not hold, there are parallel direct derivations that do not have an equivalent interleaving sequence.

Fig. 3.8. SPO Weak Parallel Independence

These conditions will be written explicitly because we will make a comparison in Sec. 7.1. Derivation $d_1$ is weakly parallel independent of derivation $d_2$ (see Fig. 3.8) if

$m_2(L_2) \cap m_1(L_1 \setminus \mathrm{dom}(p_1)) = \varnothing$.   (3.4)

There is an analogous concept, similarly defined, known as weak sequential independence. Let $d_1$ and $d_2$ be as defined in Fig. 3.9; then $d_2$ is weakly sequentially independent of $d_1$ if

$m_2(L_2) \cap m_1^*(R_1 \setminus p_1(L_1)) = \varnothing$.   (3.5)

Fig. 3.9. SPO Weak Sequential Independence

If additionally

$m_1^*(R_1) \cap m_2(L_2 \setminus \mathrm{dom}(p_2)) = \varnothing$   (3.6)

then $d_2$ is sequentially independent of $d_1$.

It is possible to synthesize both concepts (weak sequential independence and parallel independence) in a single diagram. See Fig. 3.10.
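The conflict-free condition (3.3) can be checked directly on a match given as a mapping from LHS elements to host elements: whenever two elements are identified, both must lie in $\mathrm{dom}(p)$ or both outside it. A Python sketch (the names and concrete examples are illustrative):

```python
from collections import defaultdict

def conflict_free(match, dom_p):
    """Conflict-free condition (3.3): whenever the match identifies two LHS
    elements, either both are preserved (in dom(p)) or both are deleted."""
    preimages = defaultdict(list)
    for x, gx in match.items():
        preimages[gx].append(x)
    return all(all(x in dom_p for x in xs) or all(x not in dom_p for x in xs)
               for xs in preimages.values())

dom_p = {"1", "2"}  # LHS elements the (hypothetical) rule preserves

# An injective match is trivially conflict free:
print(conflict_free({"1": "a", "2": "b", "3": "c"}, dom_p))  # True
# Identifying a preserved element with a deleted one violates (3.3):
print(conflict_free({"1": "a", "3": "a"}, dom_p))            # False
```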
Fig. 3.10. Sequential and Parallel Independence

Due to the fact that approaches based on the pushout construction cannot replicate substructures naturally, Bauderon and others have proposed a different setting by using pullbacks instead of pushouts [3; 4; 5]. We will call them the SPB and DPB approaches, depending on the number of pullbacks, similarly to SPO and DPO.

Note that pullbacks are sub-objects of products (see Sec. 2.2) and that products are (in some sense) a natural replication mechanism. It has been shown that pullback approaches are strictly more expressive than those using pushouts, but they have some drawbacks as well:

1. The existence condition for pullback complements is much more complicated than with pushouts (gluing condition).
2. In general, this condition cannot be treated with computers [36].
3. There is a loss in comprehensibility and intuitiveness.

Fig. 3.11 illustrates a replication that can be handled easily with SPB but not with SPO. The pullback construction is depicted in dashed red color on the same production, which is drawn twice. To the left, the production is on top with the morphism back to front (its LHS on the right and vice versa) and the system evolves from left to right (as in SPO or DPO), i.e. the initial state is $H_1$ and the final state is $H_2$. To the right of the same figure the production is represented more naturally for us (the left hand side on the left and the right hand side on the right), but at the bottom of the figure. The system evolves on top from right to left (it would be more intuitive if it evolved from left to right).
Besides, we notice that what we understand as the initial state is now given by the RHS of the production, while the final state is given by the left hand side. (footnote 2)

Footnote 2: Anyway, this is not misleading with some practice.

3.3 Node Replacement

Node Replacement grammars ([23], Chap. 1) are a class of graph grammars based on the replacement of nodes in a graph. The scheme is similar to the one described in Sec. 1.1 on p. 3, but with some peculiarities and notational changes. There is a mother graph (LHS, normally consisting of a single node) and a daughter graph (RHS), together with a gluing construction that defines how the daughter graph fits in the host graph once the substitution is carried out. Nodes of the mother graph play a similar role to non-terminals in Chomsky grammars. The differences among the various node replacement grammars reside in the way the gluing is performed.

Fig. 3.11. SPB Replication Example

We will start with NLC grammars (Node Label Controlled, [23], Chap. 1), which are defined as the 5-tuple

$G = (\Sigma, \Delta, P, C, S)$   (3.7)

where $\Sigma$ is the set of all node labels (alphabet set), $\Delta \subseteq \Sigma$ are the node labels that do not appear on the LHS of any production (alphabet set of terminals, so non-terminals are $\Sigma \setminus \Delta$), $P$ is the set of productions, $C$ are the gluing conditions (connection constructions) and $S$ is the initial graph. Here only node labels matter. Each production is defined as a non-terminal node producing a graph with terminals and non-terminals, along with a set of connection instructions. For example, in Fig. 3.12 we see a production $p$ with $X$ in its LHS and a subgraph in its RHS, along with a connection relation $c$ in the box.

Production application (its semantics, also in Fig. 3.12) consists of deleting the LHS from the host graph, adding the RHS and finally connecting the daughter graph with the host graph. There are no application conditions.
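NLC application – delete the mother node, add the daughter graph, embed it via a connection relation of label pairs $(x, y)$ – can be sketched in a few lines of Python; the tuple-based graph representation and the concrete labels are our own illustration, not the book's:

```python
def apply_nlc(host, mother, daughter, connections):
    """NLC rewriting sketch: remove the mother node, add the daughter graph,
    then embed it with the connection relation C = {(x, y), ...}: former
    x-labeled neighbors of the mother get an edge to every y-labeled
    daughter node."""
    nodes, edges, labels = host
    d_nodes, d_edges, d_labels = daughter
    neighbors = {v for (u, v) in edges if u == mother} | \
                {u for (u, v) in edges if v == mother}
    nodes = (nodes - {mother}) | d_nodes            # delete LHS, add RHS
    edges = {e for e in edges if mother not in e} | set(d_edges)
    labels = {n: l for n, l in labels.items() if n != mother}
    labels.update(d_labels)
    for n in neighbors:                             # embedding step
        for d in d_nodes:
            if (labels[n], d_labels[d]) in connections:
                edges.add((n, d))
    return nodes, edges, labels

host = ({"m", "u"}, {("u", "m")}, {"m": "X", "u": "a"})
daughter = ({"d1", "d2"}, {("d1", "d2")}, {"d1": "b", "d2": "c"})
nodes, edges, labels = apply_nlc(host, "m", daughter, {("a", "b")})
print(sorted(edges))  # [('d1', 'd2'), ('u', 'd1')]
```

Only the b-labeled daughter node is linked to the former a-labeled neighbor, exactly as the connection pair $(a, b)$ prescribes.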
The linking part is performed according to a connection relation, which is a pair of node labels of the form $(x, y)$: if the left hand side node was adjacent to a node labeled $x$, then all nodes in the RHS with label $y$ will be adjacent to it.

NLC is a class of context-free graph grammars; in particular, recursively defined properties can be described. Also, they are completely local and have no application conditions, which allows modeling derivations by derivation trees. However, the yield of a derivation tree may depend on the order in which productions are applied. Independence of this order is known as confluence (see problem 5), and the subclass of NLC grammars that are confluent is called C-NLC.

Fig. 3.12. Example of NLC Production

At times it is desirable to refer to a concrete node instead of to a whole family in the gluing instructions. This variation is known as an NCE grammar (Neighborhood Controlled Embedding) and is formally defined to be the tuple

$G = (\Sigma, \Delta, P, S)$   (3.8)

where $\Sigma$, $\Delta$ and $S$ are defined as above, but the productions in set $P$ are different. The grammar rule $p : X \to (D, C)$ contains the production $p : X \to D$ and the connection $C$. The connection is of the form $(u, x)$, where $u$ is a label and $x$ is a particular node in the daughter graph. Note that NCE graph grammars are still NLC-like grammars, at least concerning replacement.

NCE can be extended in several ways, but the most popular one is adding labels and a direction to edges, giving rise to edNCE grammars. Productions in edNCE are equal to those in NCE but connections differ a little bit, being of the form

$(\mu, p/q, x, d)$,   (3.9)

where $\mu$ is a node label, $p$ and $q$ are edge labels, $x$ is a node of $D$ and $d \in \{in, out\}$ (which specifies the direction of the edge). For example, if $d = in$, the connection in eq.
(3.9) specifies that the embedding process should establish an edge with label q to node x of D from each μ-labeled p-neighbor of m ∈ M (the mother graph) that is an in-neighbor of m. The expressive power of edNCE is not increased if grammar rules change the directions of edges, nor if connection instructions make use of multiple edges.

The graphical representation differs a little from that of DPO and SPO. The daughter graph D is included in a box and the area surrounding it represents its environment. Non-terminal symbols are represented by capital letters inside a small box (the large box itself can be viewed as a non-terminal symbol). Connection instructions are directed lines that connect nodes inside (new labels) with nodes outside (old labels).

Fig. 3.13. edNCE Node Replacement Example

Example. The notation G = H₂[n/H₁] is employed for a derivation, meaning that graph G is obtained by making the substitution n ↦ H₁ in H₂, i.e. by replacing node n in H₂ with graph H₁. In the example of Fig. 3.13 (with non-terminal node N) we have substituted the non-terminal node in H₁ by H₂, attaching nodes according to the labels in the arrows (α), to get G.

Associativity – reviewed in the next section – is a natural property to demand of any context-free rewriting framework, and it is enjoyed by edNCE grammars. Some edNCE grammars are context-dependent because they need not be confluent, i.e. the result of a derivation may depend on the order of application of its productions. The class of confluent edNCE grammars is denoted by C-edNCE. C-edNCE grammars fulfill some nice properties, such as being closed under node and edge relabeling. It is possible to define the notion of derivation tree as in the case of context-free string grammars (see [23], Chap. 1).
Many subclasses of edNCE grammars have been – and are being – studied. Just to mention some, apart from C-edNCE: B-edNCE (boundary, in which non-terminal nodes are not connected),³ Bnd-edNCE (non-terminal neighbor deterministic B-edNCE grammars),⁴ A-edNCE (in every connection instruction (σ, β/γ, x, d), σ and x are terminal) and LIN-edNCE (linear, in which every production has at most one non-terminal node).

3.4 Hyperedge Replacement

The basic idea is similar to node replacement but acting on edges instead of nodes, i.e. edges are substituted by graphs, playing the role of non-terminals in Chomsky grammars [23]. Hyperedge replacement systems are adhesive HLR categories that can be rewritten as DPO graph transformation systems.

We will illustrate the ideas with an edge replacement example (instead of hyperedge replacement, to be defined below) in a very simple case. Suppose we have a graph like the one depicted on the left of Fig. 3.14, with a labeled edge e to be substituted by the graph depicted in the center of Fig. 3.14, in which the special nodes (1 and 2) are used as anchor points. The result is displayed on the right of Fig. 3.14.

Fig. 3.14. Edge Replacement

³ The daughter graph does not have edges between non-terminal nodes, and in no connection instruction (σ, β/γ, x, d) is σ non-terminal; in other words, every non-terminal has a boundary of terminal neighbors.
⁴ The idea behind this extension is that every neighbour of a non-terminal is uniquely determined by its label and the direction of the edge joining them. Therefore, when rewriting the non-terminal, it is possible to distinguish between neighbours.

A production, in essence, is what we have done, with an LHS made up of labels and a graph as RHS.
The notation H[e/G′], also G ⇒ [e/G′], is standard, meaning that the graph (hypergraph) H is obtained by deleting edge e and plugging in graph G′.

A hyperedge is defined in [23] (Chap. 2) as an atomic item with a label and an ordered set of tentacles. Informally, a hypergraph is a set of nodes with a collection of hyperedges such that each tentacle is attached to one node. Note that directed graphs are a special case of hypergraphs. Normally it is established that the label of a hyperedge is its number of tentacles.

Let's provide a formal definition of hypergraph. For a given string w, its length is denoted by |w|. For a set A, A* is the set of all strings over A. The free symbolwise extension f* : A* → B* of a mapping f : A → B is defined by

f*(a₁ ⋯ aₖ) = f(a₁) ⋯ f(aₖ),    (3.10)

k ∈ ℕ and aᵢ ∈ A, i ∈ {1, …, k}. Let C be a set of labels and let t : C → ℕ be a typing function. A hypergraph H over C is the tuple

(V, E, att, lab, ext)    (3.11)

where V is the set of nodes, E the set of hyperedges, att : E → V* a mapping that assigns a sequence of pairwise distinct attachment nodes att(e) to each e ∈ E, lab : E → C a mapping that labels each hyperedge such that t(lab(e)) = |att(e)|, and ext ∈ V* is a sequence of pairwise distinct external nodes. The type of a hyperedge is its number of tentacles, and the type of a hypergraph is its number of external nodes. The set of hypergraphs will be denoted H, or H_C if we need to refer explicitly to the label set.

Two hypergraphs H and H′ are isomorphic if there exists a pair i = (i_V, i_E) of bijections i_V : V_H → V_{H′} and i_E : E_H → E_{H′} such that:

1. i_V*(att_H(e)) = att_{H′}(i_E(e)) for all e ∈ E_H.
2. lab_H(e) = lab_{H′}(i_E(e)) for all e ∈ E_H.
3. i_V*(ext_H) = ext_{H′}.

As usually happens in algebra, equality is defined up to isomorphism. If R = {e₁, …
, eₙ} ⊆ E_H is the set of hyperedges to be replaced and there is a type-preserving function r : R → H (for all e ∈ R, t(r(e)) = t(e)) such that r(eᵢ) = Rᵢ, then we write the replacement both as H[e₁/R₁, …, eₙ/Rₙ] and as H[r].

Hyperedge replacement belongs to the gluing approaches and follows the high level scheme introduced in Sec. 1.1: the replacement of R in H according to r is performed by first removing R from E_H; then, for each e ∈ R, the nodes and hyperedges of r(e) are disjointly added and the i-th external node of r(e) is fused with the i-th attachment node of e. If a hyperedge is replaced, its context is not affected. Therefore, hyperedge replacement provides a context-free type of rewriting, as long as no additional application conditions are employed.

There are three nice properties fulfilled by hyperedge replacement grammars that we will briefly comment on, and that can be compared with the problems introduced in Sec. 1.3, in particular problems 2, 3 and 5, 6. Let's assume the hypotheses on hyperedges necessary for the following formulas to make sense:

• Sequentialization and Parallelism: assuming pairwise distinct hyperedges,

H[e₁/H₁, …, eₙ/Hₙ] = H[e₁/H₁] ⋯ [eₙ/Hₙ].    (3.12)

• Confluence: let e₁ and e₂ be distinct hyperedges,

H[e₁/H₁][e₂/H₂] = H[e₂/H₂][e₁/H₁].    (3.13)

• Associativity: for e₁ a hyperedge of H₂,

H[e₂/H₂][e₁/H₁] = H[e₂/H₂[e₁/H₁]].    (3.14)

Note however that in hyperedge replacement grammars confluence is a consequence of the first property, which holds due to the disjointness of application of grammar rules.

A production p over a set of non-terminals N ⊆ C is an ordered pair p = (A, R) with A ∈ N, R ∈ H and t(A) = t(R). A direct derivation is the application of a production, i.e. the replacement of a hyperedge by a hypergraph.
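The hypergraph tuple (V, E, att, lab, ext) defined above translates directly into code. The following is a minimal sketch of my own (the typing function t and the concrete hypergraph are made-up illustrations, not taken from the book), showing the well-typedness condition t(lab(e)) = |att(e)| and the notion of type:

```python
# Minimal encoding of a hypergraph (V, E, att, lab, ext) with a typing check.
from dataclasses import dataclass

@dataclass
class Hypergraph:
    V: set      # nodes
    E: set      # hyperedges
    att: dict   # hyperedge -> tuple of pairwise distinct attachment nodes
    lab: dict   # hyperedge -> label
    ext: tuple  # pairwise distinct external nodes

    def well_typed(self, t):
        # t(lab(e)) must equal |att(e)| for every hyperedge e
        return all(t[self.lab[e]] == len(self.att[e]) for e in self.E)

    def type(self):
        # the type of a hypergraph is its number of external nodes
        return len(self.ext)

t = {'A': 2, 'S': 3}   # typing function t : C -> N (illustrative)
H = Hypergraph(V={1, 2, 3}, E={'e1'}, att={'e1': (1, 2)},
               lab={'e1': 'A'}, ext=(1, 3))
print(H.well_typed(t), H.type())   # True 2
```

An ordinary directed graph is recovered as the special case where every hyperedge has exactly two tentacles (source and target).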
If H ∈ H, e ∈ E_H and (lab_H(e), R) is a production, then H′ = H[e/R] is a direct derivation, represented by H ⇒ H′. As always, a derivation is a sequence of direct derivations. Formally, a hyperedge replacement grammar is a system HRG = (N, T, P, S), where N is the set of non-terminals, T is the set of terminals, P is the set of productions and S ∈ N is the start symbol.

We will finish this section with a simple example that generates the string-graph language⁵ L(AⁿBⁿ) = {aⁿbⁿ | n ≥ 1}. This is the graph-theoretic counterpart of the Chomsky language that consists of strings of the form aⁿbⁿ, n ≥ 1, i.e. the language of strings with an arbitrary finite number of a's followed by the same number of b's, e.g. aabb, aaabbb, etc.

⁵ This example is adapted (simplified) from one that appears in [23], Chap. 2.

Fig. 3.15. String Grammar Example

A black filled circle represents an external node, while non-filled circles are internal nodes. A box represents a hyperedge with its attachments, with the label inscribed in the box. A 2-edge is represented by an arrow joining the first node to the second. The grammar is defined as AⁿBⁿ = ({S}, {a, b}, P, S), where the set of productions P = {p₁, p₂} is depicted in Fig. 3.15. Production p₁ is necessary to get the graph-string ab and to stop rule application. The start graph and an evolution of the grammar – the derivation⁶ p₁; p₂; p₂ – can be found in Fig. 3.16.

3.5 MSOL Approach

It is possible to represent graphs as logical structures, expressing their properties by logical formulas or, in other words, to use logical formulas to characterize classes of graphs and to establish their properties out of their logical description. In this section we will give a brief introduction to monadic second order logic (MSOL) for graph transformation. Refer to Chap.
5 of [23] and the references cited therein.

⁶ Productions inside sequences in this book are applied from right to left, as in the composition of functions.

Fig. 3.16. String Grammar Derivation

Currently it is not possible to define graph transformation in terms of automata (recall that in language theory it is essential to have transformations that produce outputs while traversing words or trees). Quoting B. Courcelle (Chap. 5 of [23]):

The deep reason why MSOL logic is so crucial is that it replaces for graphs (…) the notion of a finite automaton (…)

The key point here is that these transformations can be defined in terms of MSOL formulas (called definable transductions). Graph operations will allow us to define context-free sets of graphs as components of least solutions to systems of equations (without using any graph rewriting rule), and recognizable sets of graphs (without using any notion of graph automaton).

Graphs and graph properties are represented using logical structures and relations. A binary relation R ⊆ A × B is a multivalued⁷ partial mapping that we will call a transduction. Recall from Sec. 2.1 that an interpretation in logic in essence defines one structure semantically in terms of another, for which MSOL formulas will be used.

Let R be a finite set of relation symbols and let ρ(R) be the arity of R ∈ R. An R-structure is a tuple S = (D, (R)_{R ∈ R}) such that D is the (possibly infinite) domain of S and each R is a ρ(R)-ary relation on D, that is, a subset of D^{ρ(R)}. The class of R-structures is denoted by STR(R).

⁷ One element may have several images.

As an example of structure, for a simple digraph G made up of nodes in V we have the associated R-structure |G|₁ = (V, edg), where (x, y) ∈ edg if and only if there is an edge starting in x and ending in y.
Note that this structure represents simple digraphs. The set of monadic second order formulas over R with free variables in Y is represented by MS(R, Y). As commented in Sec. 2.1, languages defined by MSOL formulas are regular languages.

Let Q and R be two finite ranked sets of relation symbols and W a finite set of set variables (the set of parameters). A (R, Q)-definition scheme is a tuple of formulas of the form

Δ = (φ, ψ₁, …, ψₖ, (θ_w)_{w ∈ Q*_k}).    (3.15)

The aim of these formulas is to define a structure T in STR(Q) out of a structure S in STR(R). The notation needs some comments:

• φ ∈ MS(R, W) defines the domain of the corresponding transduction, i.e. T is defined if φ is true in S for some assignment of values to the parameters.
• ψᵢ ∈ MS(R, W ∪ {xᵢ}) defines the domain of T as the disjoint union of the elements in the domain of S that satisfy ψᵢ for the considered assignment.
• θ_w ∈ MS(R, W ∪ {x₁, …, x_{ρ(q)}}) for w = (q, j) ∈ Q*_k, where Q*_k = {(q, j) | q ∈ Q, j ∈ [k]^{ρ(q)}} and [k] = {1, …, k}, k ∈ ℕ. The formulas θ_w define the relations q_T.

For a more rigorous definition with some examples, please refer to [23], Chap. 5.

The important fact about transductions is that they preserve monadic second order properties, i.e. monadic second order properties of S can be expressed as monadic second order properties of T. Furthermore, the inverse image of an MS-definable class of structures under a definable transduction is definable (not so for the image), as is the composition of definable transductions and the intersection of a definable structure with the Cartesian product of two definable structures. However, there are some "negative" results apart from that of the image: e.g. the inverse of a definable transduction need not be definable, nor is the intersection of two definable transductions.
The theory goes far beyond this, for example by defining context-free sets of graphs through systems of recursive equations, generalizing in some sense the concatenation of words in string grammars. No attention will be paid to rigorous details and definitions (again, see Chap. 5 in [23]), but a simple classical example with context-free grammars will be reviewed.

Let A = {a₁, …, aₙ} be a finite alphabet, ε the empty word and A* the set of words over A. Let's consider the context-free grammar G = {u → auuv, u → avb, v → avb, v → ab}. The corresponding system of recursive equations would be

S = ⟨ u = a.(u.(u.v)) + a.(v.b), v = a.(v.b) + a.b ⟩,

where "." is concatenation. It is possible, although we will not see it, to express node replacement and hyperedge replacement in terms of systems of recursive equations.

Analogously to the way in which equational sets extend context-freeness, recognizable sets extend regular languages. For example, it is possible to show that every set of finite graphs or hypergraphs defined by a formula of an appropriate monadic second order language is recognizable with respect to an appropriate set of operations (the converse also holds in many cases).

3.6 Relation-Algebraic Approach

We will mainly follow [52] and [36] in this section, paying special attention to the justification that the category GraphP has pushouts, which will be used in Chap. 6 for one of the definitions of direct derivation in Matrix Graph Grammars. We will deviate from standard relational methods⁸ notation in favor of one which is probably more immediate for mathematicians not acquainted with it and which, besides, we think eases comparison with the rest of the approaches in this chapter.

⁸ Visit the RelMiCS initiative at http://www2.cs.unibw.de/Proj/relmics/html/.

A relation r₁ from S₁ to S₂ is a subset of the Cartesian product S₁ × S₂, denoted by r₁ : S₁ ⇀ S₂.
Its inverse r₁⁻¹ : S₂ ⇀ S₁ is such that (s₂, s₁) ∈ r₁⁻¹ ⟺ (s₁, s₂) ∈ r₁. If r₂ : S₂ ⇀ S₃ is a relation, the composition r₂ ∘ r₁ = r₂ r₁ : S₁ ⇀ S₃ is again a relation, such that

(s₁, s₃) ∈ r₂ r₁ ⟺ ∃ s₂ ∈ S₂ such that (s₁, s₂) ∈ r₁ and (s₂, s₃) ∈ r₂.    (3.16)

As relations are sets, the usual set operations are available, such as inclusion (⊆), intersection (∩), union (∪) and difference (∖). It is possible to form the category Rel of sets and relations (the identity relation 1_S : S ⇀ S is the diagonal set of S × S), which besides fulfills the following properties:

• (r⁻¹)⁻¹ = r.
• (r₂ r₁)⁻¹ = r₁⁻¹ r₂⁻¹.
• Distributive law: r₂ (⋃_{α ∈ A} r_α) r₁ = ⋃_{α ∈ A} (r₂ r_α r₁).

A relation f : S₁ ⇀ S₂ such that f f⁻¹ ⊆ 1_{S₂} is called a partial function, and it is represented with an arrow instead of a harpoon, f : S₁ → S₂. If in addition 1_{S₁} ⊆ f⁻¹ f, then it is called a total function. Note that these are the standard set-theoretic definitions of partial function and total function. The function f is injective if f⁻¹ f ⊆ 1_{S₁} and surjective if 1_{S₂} ⊆ f f⁻¹. The category of sets and partial functions is represented by SetP. It can be proved that SetP has small limits and colimits, so in particular it has pushouts. For a relation r : S ⇀ T, its domain is again a relation d(r) : S ⇀ S, given by the formula d(r) = (r⁻¹ r) ∩ 1_S.

In order to define graph rewriting using relations we need a relational representation of graphs. A graph ⟨S, r⟩ is a set S plus a relation r : S ⇀ S. A partial morphism between graphs ⟨S₁, r₁⟩ and ⟨S₂, r₂⟩ is a partial function p : S₁ → S₂ such that

p r₁ d(p) ⊆ r₂ p.    (3.17)

It is not difficult to see that the composition of two partial morphisms of graphs is again a partial morphism of graphs.
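On finite sets, the calculus above is directly executable: a relation is a set of pairs, and the algebraic characterizations of partial and total functions become containment checks on composites. This is a generic illustration of mine, not code from the sources cited:

```python
# Relations as sets of pairs: inverse, composition (eq. (3.16)) and the
# relation-algebraic tests for partial/total functions.

def inv(r):
    return {(b, a) for (a, b) in r}

def comp(r2, r1):
    # function-style composition r2 . r1: first r1, then r2
    return {(a, c) for (a, b1) in r1 for (b2, c) in r2 if b1 == b2}

def identity(s):
    return {(x, x) for x in s}

S1, S2 = {1, 2, 3}, {'a', 'b'}
f = {(1, 'a'), (2, 'a')}        # element 3 has no image: partial, not total

is_partial = comp(f, inv(f)) <= identity(S2)   # f f^-1 contained in 1_{S2}
is_total = identity(S1) <= comp(inv(f), f)     # 1_{S1} contained in f^-1 f
print(is_partial, is_total)                     # True False
```

The same helpers let one check injectivity (`comp(inv(f), f) <= identity(S1)`) and compute domains via `comp(inv(f), f) & identity(S1)`, matching the formula d(r) = (r⁻¹ r) ∩ 1_S.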
It is a bit more difficult (although still easy to understand) to show that the category GraphP of simple graphs and partial morphisms has pushouts (Theorem 3.2 in [52]). The square depicted in Fig. 3.17 is a pushout in SetP if the relation h is given by

h = m* r (m*)⁻¹ ∪ p* g (p*)⁻¹.    (3.18)

Fig. 3.17. Pushout for Simple Graphs (Relational) and Direct Derivation: the square with sides p : ⟨L, l⟩ → ⟨R, r⟩, m : ⟨L, l⟩ → ⟨G, g⟩, m* : ⟨R, r⟩ → ⟨H, h⟩ and p* : ⟨G, g⟩ → ⟨H, h⟩

A production is defined similarly to the SPO case, as two graphs ⟨L, l⟩ and ⟨R, r⟩ together with a partial morphism p : L → R. A match for p is a morphism of graphs m : ⟨L, l⟩ → ⟨G, g⟩. A production plus a match is a direct derivation. As always, a derivation is a finite sequence of direct derivations.

Equation (3.18) defines a pushout in the category SetP, which is different from a rewriting square (a direct derivation). If we want the rewriting rule to be a pushout, the relation in ⟨H, h⟩ must be defined by the equation

h = m* r (m*)⁻¹ ∪ p* (g ∖ m l m⁻¹) (p*)⁻¹.    (3.19)

The relation-algebraic approach is based almost completely on relational methods. To illustrate the main differences with respect to categorical approaches, an example taken from [36] follows, dealing with categorical products.

Example. In order to define the categorical product – see Sec. 2.2 – it is necessary to check the universal property of being a terminal object, which is a global condition (it has to be checked against the rest of the candidate elements, in principle all objects in the category). In contrast, in relation algebras, the direct product of two objects X and Y is a triple (P, Π_X, Π_Y) satisfying the following properties:

• Π_X Π_X⁻¹ = 1_X and Π_Y Π_Y⁻¹ = 1_Y.
• Π_Y Π_X⁻¹ = U.
• (Π_X⁻¹ Π_X) ∩ (Π_Y⁻¹ Π_Y) = 1_P,

where U is the universal relation (to be defined below).
Note that this is a local condition, in the sense that it only involves the functions themselves, without quantification (in Category Theory this sort of characterization reads more like for all objects in the class there exists a unique morphism such that…).

The relational approach is based on the notion of allegory, which is a category C as defined in Sec. 2.2 – the underlying category – plus two operations (⁻¹ and ∩) with the following properties:⁹

• (r⁻¹)⁻¹ = r; (r s)⁻¹ = s⁻¹ r⁻¹; (r₁ ∩ r₂)⁻¹ = r₁⁻¹ ∩ r₂⁻¹.
• r₁ (r₂ ∩ r₃) ⊆ (r₁ r₂) ∩ (r₁ r₃).
• Modal rule: (r₁ r₂) ∩ r₃ ⊆ (r₁ ∩ r₃ r₂⁻¹) r₂.

⁹ Compare with those on p. 62.

The universal relation U for two objects X and Y in an allegory is the maximal element in the set of morphisms from X to Y, if it exists. If there is a least element, then it is called an empty relation or a zero relation. It is possible to obtain the other modal rule starting from the axioms of allegories,

(r₁ r₂) ∩ r₃ ⊆ r₁ (r₂ ∩ r₁⁻¹ r₃),    (3.20)

and both can be synthesized in the so-called Dedekind formula:

(r₁ r₂) ∩ r₃ ⊆ (r₁ ∩ r₃ r₂⁻¹) (r₂ ∩ r₁⁻¹ r₃).    (3.21)

A locally complete distributive allegory is called a Dedekind category. A distributive allegory is an allegory with joins and a zero element; local completeness refers to distributivity of composition with respect to joins. Using Dedekind categories, [36] provides a variation of the DPO approach in which graph variables and replication are possible. We will not introduce it here because it would take too long, due mainly to notation and formal definitions, and it is not used in our approach. As a final remark, [36] proceeds by defining pushouts, pullbacks, complements and an amalgamation of pushouts and pullbacks (called pullouts) over Dedekind categories, in order to define pullout rewriting.
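Since concrete relations over a finite set form an allegory, the Dedekind formula can be sanity-checked by brute force. The following sketch (my own illustration, with composition written function-style as in this section, and the modular laws in their standard form, which is how I have reconstructed (3.20) and (3.21) above) verifies the containment on random relations:

```python
# Finite sanity check of the Dedekind formula (3.21):
# (r1 r2) ∩ r3  ⊆  (r1 ∩ r3 r2^-1) (r2 ∩ r1^-1 r3)
from itertools import product
import random

U = [0, 1, 2]

def inv(r):
    return {(b, a) for (a, b) in r}

def comp(r, s):
    # function-style: comp(r, s) applies s first, then r
    return {(a, c) for (a, b1) in s for (b2, c) in r if b1 == b2}

random.seed(0)
for _ in range(200):
    r1, r2, r3 = ({p for p in product(U, U) if random.random() < 0.4}
                  for _ in range(3))
    lhs = comp(r1, r2) & r3
    rhs = comp(r1 & comp(r3, inv(r2)), r2 & comp(inv(r1), r3))
    assert lhs <= rhs
print("Dedekind formula verified on random relations")
```

The check passes because the formula is a theorem of concrete relations; an allegory abstracts exactly this behaviour of ∘, ∩ and ⁻¹ away from sets of pairs.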
3.7 Summary and Conclusions

The intention of this quick summary has been to give an up-to-date review of the main approaches to graph grammars and graph transformation systems: categorical, relational, set-theoretical and logical. The theory developed so far for any of these approaches goes far beyond what has been exposed here. The reader is referred to the citations spread across the chapter for further study.

Throughout the rest of the book we will see that their influence on Matrix Graph Grammars varies considerably depending on the topic. For example, our basic diagram for graph rewriting is similar to that of SPO,¹⁰ but the way to deal with restrictions on rules (application conditions) is much more "logical", so to speak.

We are now in the position to introduce the basics of our proposal for graph grammars. This will be carried out in the next chapter, Chap. 4, with the peculiarity that (to some extent) there is no need for a match of the rule's left hand side, i.e. we have productions and not direct derivations. This is further studied in Chapter 5 with the notions of initial digraph and composition.

¹⁰ Chapter 6 defines what a derivation is in Matrix Graph Grammars. Two different but equivalent definitions of derivations are provided, one using a pushout construction plus an operator defined on productions, and another with no need of categorical constructions.

4 Matrix Graph Grammars Fundamentals

In this chapter and the next one, the ideas outlined in Chap. 1 will be soundly based, assuming background knowledge of the material in Secs. 2.1, 2.3 and 2.6. No matching to any host graph is assumed, although identification of elements (in essence, nodes) of the same type will be specified through completion. Analysis techniques developed in this chapter include compatibility of productions and sequences, as well as coherence of sequences.
These concepts will be used to tackle applicability (problem 1), sequential independence (problem 3) and reachability (problem 4). In Sec. 4.1 the dynamic nature of a single grammar rule is developed, together with some basic facts. The operation of completion is studied in Sec. 4.2; it basically permits algebraic operations to be performed as one would like. Section 4.3 deals with sequences, i.e. ordered sets of grammar rules applied one after the other.¹ To this end we will introduce the concept of coherence. Due to their importance, sequences will be studied in deep detail in Chap. 7.

¹ At times we will use the term concatenation as a synonym. A derivation is a concatenation of direct derivations, and not just of productions.

4.1 Productions and Compatibility

A production (also known as a grammar rule) is defined as an application which transforms a simple digraph into another simple digraph, p : L → R. We can describe a production p with two matrices (those with an E superindex) and two vectors (those with an N superindex), p = (L^E, R^E, L^N, R^N), where the components are respectively the left hand side edges matrix L^E and nodes vector L^N, and the right hand side edges matrix R^E and nodes vector R^N. L^E and R^E are adjacency matrices, and L^N and R^N are nodes vectors as studied in Sec. 2.3. A formal definition is given for further reference:

Definition 4.1.1 (Production – Static Formulation) A grammar rule or production p is a partial morphism² between two simple digraphs L and R, and can be specified by the tuple

p = (L^E, R^E, L^N, R^N),    (4.1)

where E stands for edge and N for node. L is the left hand side and R is the right hand side.

It might seem redundant to specify nodes, as they are already in the adjacency matrix. The reason is that they can be added or deleted during rewriting.
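The static formulation (4.1) is easy to mirror in code. The class name and encoding below are illustrative assumptions of mine, not the book's implementation; they simply package the four components of the tuple:

```python
# Static formulation of a production p = (L^E, R^E, L^N, R^N).
from dataclasses import dataclass

@dataclass
class Production:
    LE: list  # LHS adjacency matrix
    RE: list  # RHS adjacency matrix
    LN: list  # LHS nodes vector
    RN: list  # RHS nodes vector

# A rule over two nodes that erases edge (2, 1) and adds a self-loop on
# node 2, preserving both nodes (illustrative data):
p = Production(LE=[[0, 1], [1, 0]], RE=[[0, 1], [0, 1]],
               LN=[1, 1], RN=[1, 1])
print(p.LN == p.RN)   # True: this particular rule preserves all nodes
```

Keeping the node vectors alongside the adjacency matrices is what allows node deletion and addition to be expressed at all, as the text explains.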
Nodes and edges are considered separately, although it would be possible to synthesize them in a single structure using tensor algebra; see the construction of the incidence tensor – Def. 10.3.1 – in Sec. 10.3. It is more interesting to characterize the dynamic behaviour of rules, for which matrices will be used, describing the basic actions that can be performed by a production: deletion and addition of nodes and edges. Our immediate target is to get a dynamic formulation. In this book p will be injective unless otherwise stated.

A production models deletion and addition actions on both edges and nodes, carried out in the order just mentioned, i.e. first deletion and then addition. Appropriate matrices are introduced to represent them.

Definition 4.1.2 (Deletion and Addition of Edges) Matrices for deletion and addition of edges are defined elementwise by the formulas

(e^E)_{ij} = 1 if edge (i, j) is to be erased, and 0 otherwise;    (4.2)
(r^E)_{ij} = 1 if edge (i, j) is to be added, and 0 otherwise.    (4.3)

² "Partial morphism" since some elements in L may not have an image in R.

For a given production p as above, both matrices can be calculated through the identities

e^E = L^E ∧ ¬(L^E ∧ R^E) = L^E ∧ (¬L^E ∨ ¬R^E) = L^E ∧ ¬R^E    (4.4)
r^E = R^E ∧ ¬(L^E ∧ R^E) = R^E ∧ (¬R^E ∨ ¬L^E) = R^E ∧ ¬L^E    (4.5)

where L^E ∧ R^E contains the elements that are preserved by the rule application (similar to the K component in DPO rules, see Sec. 3.1). Thus, using the previous construction, the following two conditions hold and will be frequently used: edges can be added only if they do not currently exist, and may be deleted only if they are present in the left hand side (LHS) of the production.

r^E ∧ ¬L^E = R^E ∧ ¬L^E ∧ ¬L^E = r^E    (4.6)
e^E ∧ L^E = L^E ∧ ¬R^E ∧ L^E = e^E.    (4.7)

In a similar way, vectors for the deletion and addition of nodes can be defined:
Definition 4.1.3 (Deletion and Addition of Nodes)

(e^N)_i = 1 if node i is to be erased, and 0 otherwise;    (4.8)
(r^N)_i = 1 if node i is to be added, and 0 otherwise.    (4.9)

Example. A production is graphically depicted in Fig. 4.1. Its associated matrices are

L^E = [0 1 1; 0 0 0; 1 0 1],  L^N = [1; 1; 1]   (nodes ordered 2, 4, 5)
R^E = [0 1 1; 0 1 0; 0 1 1],  R^N = [1; 1; 1]   (nodes ordered 2, 3, 5)
e^E = [0 1 0; 0 0 0; 1 0 0],  e^N = [0; 1; 0]   (nodes ordered 2, 4, 5)
r^E = [0 1 0; 0 1 0; 0 1 0],  r^N = [0; 1; 0]   (nodes ordered 2, 3, 5)

where the indicated node ordering, which is assumed to be equal by rows and by columns, labels the entries of each matrix and vector.

Fig. 4.1. Example of Production

The characterization of productions through matrices will be completed by introducing the nihilation matrix (Sec. 4.4) and the negative initial digraph (Sec. 5.2). They keep track of all elements that cannot be present in the graph (dangling edges and the edges to be added by the production). For an example of a production with all its matrices, please see the one on page 77.

Now we state some basic properties that relate the adjacency matrices and e and r.

Proposition 4.1.4 (Rewriting Identities) Let p : L → R be a production. The following identities are fulfilled:

r^E ∧ ¬e^E = r^E        r^N ∧ ¬e^N = r^N    (4.10)
e^E ∧ ¬r^E = e^E        e^N ∧ ¬r^N = e^N    (4.11)
R^E ∧ ¬e^E = R^E        R^N ∧ ¬e^N = R^N    (4.12)
L^E ∧ ¬r^E = L^E        L^N ∧ ¬r^N = L^N    (4.13)

Proof. It is straightforward to prove these results using basic Boolean identities. Only the first one is included:

r^E ∧ ¬e^E = (¬L^E ∧ R^E) ∧ ¬(L^E ∧ ¬R^E) = (¬L^E ∧ R^E) ∧ (¬L^E ∨ R^E) =
= (¬L^E ∧ R^E) ∨ (¬L^E ∧ R^E) = r^E ∨ r^E = r^E.    (4.14)

The rest of the identities follow easily by direct substitution of definitions. ∎
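Since all four identities in Proposition 4.1.4 are elementwise Boolean statements, they can be verified exhaustively by checking every combination of entries of L and R, which proves them for matrices of any size. A short sketch of mine:

```python
# Brute-force check of the rewriting identities (4.10)-(4.13): each identity
# is elementwise, so testing all four (L, R) bit combinations suffices.
for L in (0, 1):
    for R in (0, 1):
        e = L & (1 - R)            # eq. (4.4), per entry: e = L AND NOT R
        r = R & (1 - L)            # eq. (4.5), per entry: r = R AND NOT L
        assert r & (1 - e) == r    # (4.10): r AND NOT e == r
        assert e & (1 - r) == e    # (4.11): e AND NOT r == e
        assert R & (1 - e) == R    # (4.12): R AND NOT e == R
        assert L & (1 - r) == L    # (4.13): L AND NOT r == L
print("rewriting identities hold")
```

The same four-case argument is, in effect, the truth-table version of the Boolean manipulation in (4.14).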
The first two equations say that edges and nodes cannot be rewritten – erased and then created, or vice versa – by a single rule application (a consequence of the way in which the matrices e and r are calculated). This is because, as we will see in formulas (4.16) and (4.17), the elements to be deleted are those specified by e and those to be added are those in r, so the common elements are

e ∧ r = (e ∧ ¬r) ∧ (r ∧ ¬e) = 0.    (4.15)

This contrasts with the DPO approach, in which edges and nodes can be rewritten in a single rule.³ The remaining two conditions state that if a node or edge is in the right hand side (RHS), then it cannot be deleted, and that if a node or edge is in the LHS, then it cannot be created.

Finally, we are ready to characterize a production p : L → R using the deletion and addition matrices, starting from its LHS:

R^N = r^N ∨ (¬e^N ∧ L^N)    (4.16)
R^E = r^E ∨ (¬e^E ∧ L^E).    (4.17)

The resulting graph R is calculated by first deleting the elements in the initial graph – ¬e ∧ L – and then adding the new elements – r ∨ (¬e ∧ L). It can be proved using Proposition 4.1.4 that, in fact, it does not matter whether deletion is carried out first and addition afterwards, or vice versa.⁴

Remark. In the rest of the book we will omit ∧ where possible and avoid unnecessary parentheses, bearing in mind that ∧ has precedence over ∨. So, e.g., formula (4.17) will be written

R^E = r^E ∨ ¬e^E L^E.    (4.18)

Besides, if there is no possible confusion due to context, or if a formula applies to both edges and nodes, superscripts may be omitted. For example, the same formula would read R = r ∨ ¬e L.

³ It might be useful, for example, to forbid a rule application if the dangling condition is violated. This is addressed in Matrix Graph Grammars through ε-productions, see Chap. 6.
⁴ The order in which actions are performed does matter if, instead of a single production, we consider a sequence. See the comments after the proof of Corollary 5.1.3.
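Formula (4.17) is a one-liner over Boolean matrices: delete first (AND with NOT e), then add (OR with r). The following sketch of mine applies it to toy matrices:

```python
# Applying a production via eq. (4.17): R = r OR (NOT e AND L), elementwise.

def apply(L, e, r):
    return [[rr | (ll & (1 - ee)) for ll, ee, rr in zip(lrow, erow, rrow)]
            for lrow, erow, rrow in zip(L, e, r)]

L = [[0, 1], [1, 0]]    # edges (1,2) and (2,1)
e = [[0, 0], [1, 0]]    # erase edge (2,1)
r = [[0, 0], [0, 1]]    # add the self-loop (2,2)
R = apply(L, e, r)
print(R)                # [[0, 1], [0, 1]]
```

Swapping the two steps, R = (r ∨ L) ∧ ¬e, gives the same result here because r ∧ e = 0 by (4.15); this is the elementwise content of the order-independence remark above.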
There are two ways to characterize a production so far: either by its initial and final states (see Definition 4.1.1), or by the operations it specifies:

p = (e^E, r^E, e^N, r^N).    (4.19)

As a matter of fact, they are not completely equivalent. Using L and R gives more information, because the elements which are present in both of them are mandatory if the production is to be applied to a host graph, but they do not appear in the e–r characterization.⁵ An alternative and complete definition to (4.1) is

p = (L^E, e^E, r^E, L^N, e^N, r^N).    (4.20)

A dynamic definition of grammar rule is postponed until Sec. 5.2 (Definition 4.4.1), because there is a useful matrix (the nihilation matrix) that has not been introduced yet.

Some conditions have to be imposed on the matrices and vectors of nodes and edges in order to keep compatibility when a rule is applied, that is, to avoid dangling edges once the rule is applied. It is not difficult to extend the definition of compatibility from adjacency matrices (see Def. 2.3.2) to productions:

Definition 4.1.5 (Compatibility) A production p : L → R is compatible if R = p(L) is a simple digraph.

From a conceptual point of view the idea is the same as that of the dangling condition in DPO. Also, what is demanded here is completeness of the underlying space GraphP with respect to the operations defined. Next we enumerate the implications of compatibility for Matrix Graph Grammars. Recall that t denotes transposition:

1. An incoming edge cannot be added (r^E) to a node that is going to be deleted (e^N):

‖r^E ⊙ e^N‖₁ = 0.    (4.21)

Similarly, for outgoing edges (r^E)^t, the condition is

‖(r^E)^t ⊙ e^N‖₁ = 0.    (4.22)

⁵ This usage of elements whose presence is demanded but which are not otherwise used is a sort of positive application condition. See Chap. 8.
2. Another forbidden situation is deleting a node with some incoming edge, if that edge is not deleted as well:

$$\left\| \left( \overline{e^E} L^E \right) \odot e^N \right\|_1 = 0. \qquad (4.23)$$

Similarly for outgoing edges:

$$\left\| \left( \overline{e^E} L^E \right)^t \odot e^N \right\|_1 = 0. \qquad (4.24)$$

Note that $\overline{e^E} L^E$ are the elements preserved (used but not deleted) by the production $p$.

3. It is not possible to add an incoming edge ($r^E$) to a node which is neither present in the LHS ($L^N$) nor added ($r^N$) by the production:

$$\left\| r^E \odot \overline{\left( r^N \vee L^N \right)} \right\|_1 = 0. \qquad (4.25)$$

Similarly, for edges starting in a given node:

$$\left\| \left( r^E \right)^t \odot \overline{\left( r^N \vee L^N \right)} \right\|_1 = 0. \qquad (4.26)$$

4. Finally, our last conditions state that it is not possible for an edge to reach a node which does not belong to the LHS and which is not going to be added:

$$\left\| \left( \overline{e^E} L^E \right) \odot \overline{\left( r^N \vee L^N \right)} \right\|_1 = 0. \qquad (4.27)$$

And again, for outgoing edges:

$$\left\| \left( \overline{e^E} L^E \right)^t \odot \overline{\left( r^N \vee L^N \right)} \right\|_1 = 0. \qquad (4.28)$$

Thus we arrive naturally at the next proposition:

Proposition 4.1.6 Let $p: L \to R$ be a production. If conditions (4.21)–(4.28) are fulfilled, then $R = p(L)$ is compatible.⁶

Proof. We have to check $\left\| \left( M^E \vee (M^E)^t \right) \odot \overline{M^N} \right\|_1 = 0$, with $M^E = r^E \vee \overline{e^E} L^E$ and $M^N = r^N \vee \overline{e^N} L^N$. Applying (4.11) in the second equality, we have

$$\left( M^E \vee (M^E)^t \right) \odot \overline{M^N} = \left( r^E \vee \overline{e^E} L^E \vee (r^E)^t \vee \left( \overline{e^E} L^E \right)^t \right) \odot \left( e^N \vee \overline{r^N}\, \overline{L^N} \right). \qquad (4.29)$$

Synthesizing conditions (4.21)–(4.28), or expanding eq. (4.29), the proof is completed. ∎

A full example is worked out in the next section, together with further explanations on node identification across productions, and on types.

⁶ $p(L)$ is given by (4.16) and (4.17).

4.2 Types and Completion

Besides characterization (with compatibility), in practice we will need to endow graphs with some "semantics" (types). These types will impose some restrictions on the way algebraic operations can be carried out (completion). This section is somewhat informal. For a more formal exposition, please refer to [67] and [66], Sec. 2.
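Conditions (4.21)–(4.28) amount to the single check used in the proof of Proposition 4.1.6: no edge of $R$ may be incident, in either direction, to a node absent from $R$. A hedged sketch, with our own function name and matrix encoding:

```python
import numpy as np

def compatible(L_E, L_N, e_E, r_E, e_N, r_N):
    """Check ||(M^E v (M^E)^t) (.) not(M^N)||_1 = 0 for R = p(L), eq. (4.29)."""
    ME = r_E | (~e_E & L_E)          # edge matrix of R = p(L)
    MN = r_N | (~e_N & L_N)          # node vector of R
    M = ME | ME.T                    # edges incident in either direction
    # Boolean matrix-vector product: entry i is 1 iff some surviving edge
    # at node i touches a node absent from R (a dangling edge)
    return not (M & ~MN[None, :]).any()

# deleting node 2 while keeping edge (1,2) violates (4.23): incompatible
L_E = np.array([[0, 1], [0, 0]], bool); L_N = np.array([1, 1], bool)
e_E = np.zeros((2, 2), bool);           e_N = np.array([0, 1], bool)
r_E = np.zeros((2, 2), bool);           r_N = np.zeros(2, bool)
assert compatible(L_E, L_N, e_E, r_E, e_N, r_N) is False
```

If the same rule also deleted the edge ($e^E_{12} = 1$), the check would succeed, matching the intuition behind condition (4.23).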
Grammars in essence rely on the possibility of applying several morphisms (productions) in sequence, generating languages. At grammar design time we do not know, in general, which actual initial state is to be studied, but probably we do know which elements make up the system under consideration and which properties we are going to study. For example, in a local area network we know that there are messages, clients, servers, routers, hubs, switches and cables. We also know that we are interested in dependency, deadlock and failure recovery, although we probably do not know which actual net we want to study.

It seems natural to introduce types, which are simply a level of abstraction in the set of elements under consideration. For example, in the previous paragraph, messages, clients, servers, etc. would be types. So there is a ground level where real things live (one actual hub), and another, slightly more abstract, level where families of elements live.

Fig. 4.2. Examples of Types

Example. Throughout this book we will use two ways of typing productions. The first is to use natural numbers $\mathbb{N}_{>0}$ and primes to distinguish between elements. On the left of Fig. 4.2 there is a typical simple digraph with three nodes labelled 1 (they are of type 1). This is correct as long as we do not need to operate with them. During "runtime", i.e. if some algebraic operation is to be carried out, it is mandatory to distinguish between different elements, so primes are appended, as depicted in the center of the same figure. For the second way of typing productions, check the small network on the right of Fig. 4.2, where there are two clients – (1:C) and (2:C) – one switch – (1:SW) – one router – (1:R) – and one server – (1:S). The types are C, SW, R and S, and instead of primes we use natural numbers to distinguish among elements of the same type.
Their adjacency matrices are:

$$\left( \begin{array}{cccc|c} 1 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1' \\ 0 & 0 & 1 & 1 & 1'' \\ 1 & 1 & 1 & 0 & 2 \end{array} \right) \qquad \left( \begin{array}{ccccc|c} 0 & 0 & 1 & 0 & 1 & 1{:}C \\ 0 & 0 & 0 & 0 & 1 & 2{:}C \\ 0 & 0 & 1 & 1 & 0 & 1{:}R \\ 0 & 0 & 0 & 0 & 0 & 1{:}S \\ 1 & 0 & 0 & 1 & 1 & 1{:}SW \end{array} \right)$$

Nodes of the same type can be identified across productions or when performing any kind of operation, while nodes of different types must remain unrelated. A production cannot change the type of any node. In some sense, nodes in the left and right hand sides of productions specify their types; matching (refer to Chap. 6) transforms them into "actual" elements. The types of edges are given by the types of their initial and terminal nodes. In the example of Fig. 4.2, the type of edge $e$ is $(1,2)$ and the type of edge $e'$ is $(2,1)$. For edges, types $(1,2)$ and $(2,1)$ are different. See [10].

A type is just an element of a predefined set $T$, and the assignment of types to the nodes of a given graph $G$ is just a (possibly non-injective) total function from the graph under consideration to the set of types, $t_G: G \to T$, such that it defines an equivalence relation in $G$.⁷ It is important to have disjoint types (something granted if the relation is an equivalence relation), so that one element does not have two types. In the previous example, the first way of typing nodes would be $T_1 = \mathbb{N}_{>0}$ and the second $T_2 = \{ (\alpha : \beta) \mid \alpha \in \mathbb{N}_{>0},\ \beta \in \{C, S, R, SW\} \}$.

The notion of type is associated to the underlying algebraic structure and will normally be specified using an extra column on matrices and vectors. Conditions and restrictions on types, and on the way they relate to each other, can be specified using restrictions (see Chap. 8). Next we introduce the concept of completion.
In previous sections we have assumed that, when operating with matrices and vectors, these had the same size, but in general matrices and vectors represent graphs with different sets of nodes or edges, although probably with common subsets. Completion modifies matrices (and vectors) to allow some specified operation. Two problems may occur:

1. Matrices may not fully coincide with respect to the nodes under consideration.
2. Even if they do, they may well not be ordered as needed.

To address the first problem, matrices and vectors are enlarged, adding the missing vertices to the edge matrix and setting their values to zero. To declare that these elements do not belong to the graph under consideration, the corresponding node vector is also enlarged, setting the newly added vertices to zero. If, for example, an and is specified between two matrices, say $A \wedge B$, the first thing to do is to reorder elements so that it makes sense to and element by element, i.e. so that elements representing the same node are operated together. If we are defining a grammar on a computer, the tool or environment will do this automatically, but some procedure has to be followed. For the sake of an example, the following is proposed:

1. Find the set $C$ of common elements.
2. Move the elements of $C$ upwards by rows in $A$ and $B$, maintaining their order. A similar operation must be done moving the corresponding elements to the left by columns.
3. Sort the common elements in $B$ to obtain the same ordering as in $A$.
4. Add the remaining elements of $A$ to $B$, sorted as in $A$, immediately after the elements accessed in the previous step.
5. Add the remaining elements of $B$ to $A$, sorted as in $B$.

⁷ A reflexive ($\forall g \in G,\ g \sim g$), symmetric ($\forall g_1, g_2 \in G,\ g_1 \sim g_2 \Rightarrow g_2 \sim g_1$) and transitive ($\forall g_1, g_2, g_3 \in G,\ g_1 \sim g_2,\ g_2 \sim g_3 \Rightarrow g_1 \sim g_3$) relation.
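The five steps above can be sketched as follows. This is an illustrative procedure, not the book's implementation; any ordering consistent with the node labels is acceptable (this sketch yields the ordering [2, 5, 4, 3] for the matrices of the next example, whereas the book keeps [2, 4, 5, 3]):

```python
import numpy as np

def complete(A, labels_A, B, labels_B):
    """Return A, B enlarged to a common ordered node set, plus that ordering."""
    common = [l for l in labels_A if l in labels_B]           # step 1
    order = (common                                            # steps 2-3
             + [l for l in labels_A if l not in common]        # step 4
             + [l for l in labels_B if l not in common])       # step 5
    idx = {l: i for i, l in enumerate(order)}
    def embed(M, labels):
        out = np.zeros((len(order), len(order)), bool)         # missing -> 0
        for i, li in enumerate(labels):
            for j, lj in enumerate(labels):
                out[idx[li], idx[lj]] = M[i][j]
        return out
    return embed(A, labels_A), embed(B, labels_B), order

# e^E_1 over nodes [2, 4, 5] and r^E_1 over [2, 3, 5], as in the next example
e1 = [[0, 1, 0], [0, 0, 0], [1, 0, 0]]
r1 = [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
E, Rm, order = complete(e1, [2, 4, 5], r1, [2, 3, 5])
assert order == [2, 5, 4, 3] and E.shape == (4, 4)
```

After completion, `E & Rm`, `E | Rm`, etc. operate element by element on matching nodes, which is the whole point of the procedure.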
Addition of elements and reordering (the operations needed for completion) extend and modify productions syntactically, but not from a semantic point of view.

Fig. 4.3. Example of Production (Rep.)

Example. Consider the production depicted in Fig. 4.3. Its associated matrices are represented below. As already commented above, the notation for matrices is extended a little in order to specify node and edge types. It is assumed that the adjacency matrix is equally ordered by rows, so we do not add any labelling row. If it is clear from the context, or if there is a problem with space, this labelling column will not appear, and the ordering will be made explicit in words if needed.

$$L^E_1 = \left( \begin{array}{ccc|c} 0 & 1 & 1 & 2 \\ 0 & 0 & 0 & 4 \\ 1 & 0 & 1 & 5 \end{array} \right) \quad L^N_1 = \left( \begin{array}{c|c} 1 & 2 \\ 1 & 4 \\ 1 & 5 \end{array} \right) \quad R^E_1 = \left( \begin{array}{ccc|c} 0 & 1 & 1 & 2 \\ 0 & 1 & 0 & 3 \\ 0 & 1 & 1 & 5 \end{array} \right) \quad R^N_1 = \left( \begin{array}{c|c} 1 & 2 \\ 1 & 3 \\ 1 & 5 \end{array} \right)$$

$$e^E_1 = \left( \begin{array}{ccc|c} 0 & 1 & 0 & 2 \\ 0 & 0 & 0 & 4 \\ 1 & 0 & 0 & 5 \end{array} \right) \quad e^N_1 = \left( \begin{array}{c|c} 0 & 2 \\ 1 & 4 \\ 0 & 5 \end{array} \right) \quad r^E_1 = \left( \begin{array}{ccc|c} 0 & 1 & 0 & 2 \\ 0 & 1 & 0 & 3 \\ 0 & 1 & 0 & 5 \end{array} \right) \quad r^N_1 = \left( \begin{array}{c|c} 0 & 2 \\ 1 & 3 \\ 0 & 5 \end{array} \right)$$

For example, if the operation $e^E_1 \wedge r^E_1$ were to be performed, then both matrices would have to be completed. Following the steps described above we obtain:

$$e^E_1 = \left( \begin{array}{cccc|c} 0 & 1 & 0 & 0 & 2 \\ 0 & 0 & 0 & 0 & 4 \\ 1 & 0 & 0 & 0 & 5 \\ 0 & 0 & 0 & 0 & 3 \end{array} \right) \quad r^E_1 = \left( \begin{array}{cccc|c} 0 & 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 0 & 4 \\ 0 & 0 & 0 & 1 & 5 \\ 0 & 0 & 0 & 1 & 3 \end{array} \right) \quad L^N_1 = \left( \begin{array}{c|c} 1 & 2 \\ 1 & 4 \\ 1 & 5 \\ 0 & 3 \end{array} \right) \quad R^N_1 = \left( \begin{array}{c|c} 1 & 2 \\ 0 & 4 \\ 1 & 5 \\ 1 & 3 \end{array} \right)$$

where, besides the erasing and addition matrices, the completions of the node vectors for both the left and right hand sides are displayed.

Now we check whether $R^N_1 = r^N_1 \vee \overline{e^N_1} L^N_1$ and $R^E_1 = r^E_1 \vee \overline{e^E_1} L^E_1$ are compatible, i.e. whether $R^E_1$ and $R^N_1$ define a simple digraph. Proposition 2.3.4 and equation (2.4) are used, so we need to compute eq. (4.29). As

$$r^E_1 \vee \overline{e^E_1} L^E_1 = \left( \begin{array}{cccc|c} 0 & 0 & 1 & 1 & 2 \\ 0 & 0 & 0 & 0 & 4 \\ 0 & 0 & 1 & 1 & 5 \\ 0 & 0 & 0 & 1 & 3 \end{array} \right) \qquad \overline{r^N_1} \left( e^N_1 \vee \overline{L^N_1} \right) = \left( \begin{array}{c|c} 0 & 2 \\ 1 & 4 \\ 0 & 5 \\ 0 & 3 \end{array} \right)$$

substituting we finally arrive at

$$(4.29) = \left[ \left( \begin{array}{cccc|c} 0 & 0 & 1 & 1 & 2 \\ 0 & 0 & 0 & 0 & 4 \\ 0 & 0 & 1 & 1 & 5 \\ 0 & 0 & 0 & 1 & 3 \end{array} \right) \vee \left( \begin{array}{cccc|c} 0 & 0 & 0 & 0 & 2 \\ 0 & 0 & 0 & 0 & 4 \\ 1 & 0 & 1 & 0 & 5 \\ 1 & 0 & 1 & 1 & 3 \end{array} \right) \right] \odot \left( \begin{array}{c|c} 0 & 2 \\ 1 & 4 \\ 0 & 5 \\ 0 & 3 \end{array} \right) = \left( \begin{array}{c|c} 0 & 2 \\ 0 & 4 \\ 0 & 5 \\ 0 & 3 \end{array} \right)$$

as desired.
It is not possible, once the process of completion has finished, to have two nodes with the same number inside the same production,⁸ because from an operational point of view it is mandatory to know all relations between nodes. If completion is applied to a sequence, then we will speak of a completed sequence.

Note that up to this point only the production itself has been taken into account, with no reference to the state of the system (the host graph). Although this is a half truth – as you will promptly see – we may say that we are starting the analysis of grammar rules without the need of any matching, i.e. we will analyze productions and not necessarily direct derivations, with the advantage of gathering information at the grammar definition stage. Of course, this is a desirable property as long as the results of this analysis can be used for derivations (during runtime).

In some sense, completion and matching are complementary operations: inside a sequence of productions, matchings – as a side effect – differentiate or relate nodes (and hence edges) of productions, while completion imposes some restrictions on the possible matchings. If we picture the evolution of a system under the application of a derivation as depicted in Fig. 5.1 on p. 98, then matchings can be viewed as vertical identifications, while completions can be seen as horizontal identifications.

The way completion has been introduced, there is a deterministic part, limited to adding dummy elements, and a non-deterministic one, deciding on identifications.⁹ It should be possible to define it as an operator whose output would be all possible relations among elements (of the same type), i.e.

⁸ For example, if there are two nodes of type 8, after completion there should be one with an 8 and the other with an 8′.
⁹ Non-determinism in MGG is not addressed in this book. Refer to [67].
completion of two matrices would not be two matrices anymore, but the set of matrices in which all possible combinations are considered (or a subset, if some of them can be discarded). This is related to the definition of the initial digraph set in Sec. 6.3 and the structure studied therein.

4.3 Sequences and Coherence

Once we are able to characterize a single production, we can proceed with the study of finite collections of them.¹⁰ Two main operations, composition and concatenation,¹¹ which are in fact closely related, are introduced in this and the next sections, along with notions that make it possible to speak of "potential definability": coherence and compatibility.

In order to ease the exposition, in this section we shall prove partial results concerning coherence: we shall consider productions that do not generate dangling edges. A coherence characterization taking dangling edges into account can be found in Sec. 4.4, and is somewhat generalized in [66].

Definition 4.3.1 (Concatenation) Let G be a grammar. Given a collection of productions $\{p_1, \ldots, p_n\} \subseteq G$, the notation $s_n = p_n; p_{n-1}; \ldots; p_1$ defines a sequence (concatenation) of productions, establishing an order in their application, starting with $p_1$ and ending with $p_n$.

Remark. In the literature on graph transformation, the concatenation operator is defined back to front, that is, in the sequence $p_2; p_1$, production $p_2$ would be applied first and $p_1$ right afterwards [11]. The ordering introduced above is preferred because it follows the mathematical way in which composition is defined and represented.

¹⁰ The term set instead of collection is avoided because repetition of productions is permitted.
¹¹ Also known as sequentialization.
This issue will be raised again in Sec. 10.1. It is worth stressing that there exists a total order in a sequence, one production being applied after the previous one has finished, and thus intermediate states are generated. These intermediate states are precisely the difference between concatenation and composition of productions (see Sec. 5.3). The study of concatenation is related to the interleaving approach to concurrency, while composition is related to the explicit parallelism approach (see Sec. 3.1).

A production is moved forward, moved to the front or advanced if it is shifted one or more positions to the right inside a sequence of productions, either in a composition or in a concatenation (it is to be applied earlier), e.g. $p_4; p_3; p_2; p_1 \mapsto p_3; p_2; p_1; p_4$. On the contrary, to move backwards or delay means shifting the production to the left, which implies delaying its application, e.g. $p_4; p_3; p_2; p_1 \mapsto p_1; p_4; p_3; p_2$.

Definition 4.3.2 (Coherence) Given the set of productions $\{p_1, \ldots, p_n\}$, the completed sequence $s_n = p_n; p_{n-1}; \ldots; p_1$ is called coherent if the actions of any production do not prevent the actions of the productions that follow it, taking into account the effects of intermediate productions.

Coherence is a concept that deals with the potential applicability of a sequence $s_n$ of productions to a host graph. It does not guarantee that the application of $s_n$ and of a coherent reordering $\sigma(s_n)$ of $s_n$ lead to the same result. This latter case is a sort of generalization¹² of sequential independence applied to sequences, which will be studied in Chap. 7.

Example. We extend the previous example (see Fig. 4.3 on p. 77) with two more productions. Recall that our first production, $q_1$, deletes the edge $(5,2)$, which starts in vertex 5 and ends in vertex 2. As depicted in Fig.

¹² Generalization in the sense that, a priori, we are considering any kind of permutation.
4.4, production $q_2$ adds this edge and $q_3$ preserves it ($q_3$ uses but does not delete this edge). The sequence $s_3 = q_3; q_2; q_1$ would be coherent if only this vertex were considered.

Now we study the conditions that have to be satisfied by the matrices associated with a coherent and dangling-free sequence of productions. Instead of stating a result concerning conditions on coherence and proving it immediately afterwards, we begin by discussing the case of two productions in full detail, we continue with three, and we finally set a theorem – Theorem 4.3.5 – for a finite number of them.

Fig. 4.4. Productions q1, q2 and q3

Let us consider the concatenation $s_2 = p_2; p_1$. In order to decide whether the application of $p_1$ does not exclude $p_2$, we impose three conditions on edges:¹³

1. The first production – $p_1$ – does not delete any edge ($e^E_1$) used by the second production ($L^E_2$):

$$e^E_1 L^E_2 = 0. \qquad (4.30)$$

2. $p_2$ does not add ($r^E_2$) any edge preserved (used but not deleted, $\overline{e^E_1} L^E_1$) by $p_1$:

$$r^E_2 L^E_1 \overline{e^E_1} = 0. \qquad (4.31)$$

3. No common edges are added by both productions:

$$r^E_1 r^E_2 = 0. \qquad (4.32)$$

The first condition is needed because if $p_1$ deletes an edge used by $p_2$, then $p_2$ would not be applicable. The last two conditions are mandatory in order to obtain a simple digraph (with at most one edge in each direction between two nodes). Conditions (4.31) and (4.32) are equivalent to $r^E_2 R^E_1 = 0$ because, as both are equal to zero, we can write

$$0 = r^E_2 L^E_1 \overline{e^E_1} \vee r^E_2 r^E_1 = r^E_2 \left( r^E_1 \vee \overline{e^E_1} L^E_1 \right) = r^E_2 R^E_1,$$

which may be read "$p_2$ does not add any edge that comes out of $p_1$'s application".

¹³ Note the similarities and differences with weak sequential independence. See Sec. 3.2.
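The three edge conditions for $s_2 = p_2; p_1$ can be sketched as a small checker. The dictionary-based representation and helper name are our own, not the book's:

```python
import numpy as np

def coherent_pair(p1, p2):
    """Edge conditions (4.30)-(4.32) for s2 = p2;p1; each p_i is a dict
    holding the Boolean matrices L, e, r of the production."""
    c30 = not (p1["e"] & p2["L"]).any()             # p1 deletes nothing p2 uses
    c31 = not (p2["r"] & p1["L"] & ~p1["e"]).any()  # p2 adds nothing p1 preserves
    c32 = not (p1["r"] & p2["r"]).any()             # no edge added twice
    # (4.31) and (4.32) together say r2 AND R1 = 0, with R1 = r1 v (not e1)L1
    R1 = p1["r"] | (~p1["e"] & p1["L"])
    assert (c31 and c32) == (not (p2["r"] & R1).any())
    return c30 and c31 and c32

edge = np.array([[0, 1], [0, 0]], bool); zero = np.zeros((2, 2), bool)
p1 = {"L": edge, "e": edge, "r": zero}   # a rule deleting the edge (1,2)
p2 = {"L": edge, "e": zero, "r": zero}   # a rule that merely uses it
assert coherent_pair(p1, p2) is False    # violates (4.30)
```

The internal assertion mirrors the equivalence derived above: checking (4.31) and (4.32) separately gives the same verdict as the single test $r^E_2 R^E_1 = 0$.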
All conditions can be synthesized in the following identity:

$$r^E_2 R^E_1 \vee e^E_1 L^E_2 = 0. \qquad (4.33)$$

Our immediate target is to obtain a closed formula representing these conditions for the case of an arbitrary finite number of productions. Applying (4.10) and (4.11), equation (4.33) can be transformed to get:

$$R^E_1 \overline{e^E_2}\, r^E_2 \vee L^E_2\, e^E_1 \overline{r^E_1} = 0. \qquad (4.34)$$

A similar reasoning gives the corresponding formula for nodes:

$$R^N_1 \overline{e^N_2}\, r^N_2 \vee L^N_2\, e^N_1 \overline{r^N_1} = 0. \qquad (4.35)$$

Remark. Note that conditions (4.31) and (4.32) do not really apply to nodes as they apply to edges. For example, if a node of type 1 is to be added and nodes 1 and 1′ have already been appended, then by completion node 1″ would be added: it is not possible to add a node that already exists. However, coherence looks for conditions guaranteeing that the operations specified by the productions of a sequence do not interfere with one another. Suppose the same example but where, for some unknown reason, the node to be added is completed as 1′ – the one that has just been added. If conditions of the kind of (4.31) and (4.32) were removed, we would not detect that there is a potential problem if this sequence is applied.

Next we introduce a graphical notation for Boolean equations: a vertical arrow means and, while a fork stands for or. We use these diagrams because formulas grow very fast with the number of nodes. As an example, the representation of equations (4.34) and (4.35) is shown in Fig. 4.5.

Lemma 4.3.3 Let $s_2 = p_2; p_1$ be a sequence of productions without dangling edges. If equations (4.34) and (4.35) hold, then $s_2$ is coherent.

Fig. 4.5. Coherence for Two Productions

Proof. Only edges are considered, because a symmetrical reasoning settles the result for nodes. Call D the action of deleting an edge, A its addition and P its preservation, i.e.
the case where the edge appears in both the LHS and the RHS. Table 4.1 comprises all nine possibilities for two productions.

D2;D1 → (4.30)    D2;P1 → ✓    D2;A1 → ✓
P2;D1 → (4.30)    P2;P1 → ✓    P2;A1 → ✓
A2;D1 → ✓         A2;P1 → (4.31)    A2;A1 → (4.32)

Table 4.1. Possible Actions for Two Productions

A tick means that the action is allowed, while a number refers to the condition that prohibits it. For example, P2;D1 means that the first production, $p_1$, deletes the edge and the second, $p_2$, preserves it (in this order). Looking the table up, we find that this is forbidden by equation (4.30). ∎

Now we proceed with three productions. We must check that $p_2$ does not disturb $p_3$ and that $p_1$ does not prevent the application of $p_2$. Both of these are covered by our previous explanation (the two-production case), and thus we just need to ensure that $p_1$ does not exclude $p_3$, taking into account that $p_2$ is applied in between:

1. $p_1$ does not delete ($e^E_1$) any edge used ($L^E_3$) by $p_3$ and not added ($\overline{r^E_2}$) by $p_2$:

$$e^E_1 L^E_3 \overline{r^E_2} = 0. \qquad (4.36)$$

2. Production $p_3$ does not add ($r^E_3$) any edge stemming from $p_1$ ($R^E_1$) and not deleted by $p_2$ ($\overline{e^E_2}$):

$$r^E_3 R^E_1 \overline{e^E_2} = 0. \qquad (4.37)$$

Again, the last condition is needed in order to obtain a simple digraph. Performing manipulations similar to those carried out for $s_2$, we get the full condition for $s_3$, given by the equation:

$$L^E_2 e^E_1 \vee L^E_3 \left( e^E_1 \overline{r^E_2} \vee e^E_2 \right) \vee R^E_1 \left( \overline{e^E_2} r^E_3 \vee r^E_2 \right) \vee R^E_2 r^E_3 = 0. \qquad (4.38)$$

Proceeding as before, identity (4.38) is completed:

$$L^E_2\, e^E_1 \overline{r^E_1} \vee L^E_3 \left( \overline{r^E_2}\, e^E_1 \overline{r^E_1} \vee e^E_2 \right) \vee R^E_1\, \overline{e^E_2} \left( r^E_2 \vee \overline{e^E_3} r^E_3 \right) \vee R^E_2\, \overline{e^E_3} r^E_3 = 0. \qquad (4.39)$$

Its representation is shown in Fig. 4.6 for both nodes and edges.

Fig. 4.6.
Coherence Conditions for Three Productions

Lemma 4.3.3 can be extended slightly to include three productions in an obvious way, but we will not discuss this further because the generalization covering $n$ productions is Theorem 4.3.5.

Example. Recall productions $q_1$, $q_2$ and $q_3$, introduced in Figs. 4.3 and 4.4 (on pp. 77 and 81, respectively). The sequences $q_3; q_2; q_1$ and $q_1; q_3; q_2$ are coherent, while $q_3; q_1; q_2$ is not. The latter is due to the fact that edge $(5,5)$ is deleted (D) by $q_2$, used (U) by $q_1$ and added (A) by $q_3$, giving two pairs of forbidden actions. For the former sequences we have to check all actions performed on all edges and nodes by the productions, in the order specified by the concatenation, verifying that they do not exclude each other.

Definition 4.3.4 Let $F(x,y)$ and $G(x,y)$ be two Boolean functions depending on parameters $x, y \in I$, for some index set $I$. The operators delta ($\triangle$) and nabla ($\nabla$) are defined through the equations:

$$\triangle^{t_1}_{t_0} \left( F(x,y) \right) = \bigvee_{y=t_0}^{t_1} \left( \bigwedge_{x=y}^{t_1} F(x,y) \right) \qquad (4.40)$$

$$\nabla^{t_1}_{t_0} \left( G(x,y) \right) = \bigvee_{y=t_0}^{t_1} \left( \bigwedge_{x=t_0}^{y} G(x,y) \right). \qquad (4.41)$$

These operators will be useful for the general case of $n$ productions with coherence, initial digraphs, G-congruence and other concepts. A simple interpretation of both operators will be given at the end of the section.

Example. Let $F(x,y) = G(x,y) = \overline{r_x}\, e_y$; then we have:

$$\triangle^3_1 \left( \overline{r_x}\, e_y \right) = \bigvee_{y=1}^{3} \bigwedge_{x=y}^{3} \left( \overline{r_x}\, e_y \right) = \overline{r_3} e_3 \vee \overline{r_3}\, \overline{r_2}\, e_2 \vee \overline{r_3}\, \overline{r_2}\, \overline{r_1}\, e_1 = e_3 \vee \overline{r_3}\, e_2 \vee \overline{r_3}\, \overline{r_2}\, e_1.$$

$$\nabla^5_3 \left( \overline{r_x}\, e_y \right) = \bigvee_{y=3}^{5} \bigwedge_{x=3}^{y} \left( \overline{r_x}\, e_y \right) = \overline{r_3} e_3 \vee \overline{r_3}\, \overline{r_4}\, e_4 \vee \overline{r_3}\, \overline{r_4}\, \overline{r_5}\, e_5 = e_3 \vee \overline{r_3}\, e_4 \vee \overline{r_3}\, \overline{r_4}\, e_5.$$

The expressions have been simplified applying Proposition 4.1.4. Now we are ready to characterize coherent sequences of arbitrary finite length.

Theorem 4.3.5 The dangling-free concatenation $s_n = p_n; p_{n-1}; \ldots$
$; p_2; p_1$ is coherent if, for edges and nodes, we have:

$$\bigvee_{i=1}^{n} \left( R^E_i\, \nabla^n_{i+1} \left( \overline{e^E_x}\, r^E_y \right) \vee L^E_i\, \triangle^{i-1}_1 \left( e^E_y\, \overline{r^E_x} \right) \right) = 0 \qquad (4.42)$$

$$\bigvee_{i=1}^{n} \left( R^N_i\, \nabla^n_{i+1} \left( \overline{e^N_x}\, r^N_y \right) \vee L^N_i\, \triangle^{i-1}_1 \left( e^N_y\, \overline{r^N_x} \right) \right) = 0. \qquad (4.43)$$

Proof. Induction on the number of productions (see the cases $s_2$ and $s_3$ studied above). ∎

Figure 4.7 includes the graph representation of the coherence formulas for $s_4 = p_4; p_3; p_2; p_1$ and $s_5 = p_5; p_4; p_3; p_2; p_1$.

Fig. 4.7. Coherence. Four and Five Productions

Example. We are going to verify that $s_3 = q_1; q_3; q_2$ is coherent (only for edges), where the $q_i$ are the productions introduced in the previous examples. The productions are drawn again in Fig. 4.8 for the reader's convenience. We start by expanding formula (4.42) for $n = 3$:

$$\bigvee_{i=1}^{3} \left( R^E_i\, \nabla^3_{i+1} \left( \overline{e^E_x}\, r^E_y \right) \vee L^E_i\, \triangle^{i-1}_1 \left( e^E_y\, \overline{r^E_x} \right) \right) = R^E_1 \left( \overline{e^E_2}\, r^E_2 \vee \overline{e^E_2}\, \overline{e^E_3}\, r^E_3 \right) \vee R^E_2\, \overline{e^E_3}\, r^E_3 \vee L^E_2\, \overline{r^E_1}\, e^E_1 \vee L^E_3 \left( \overline{r^E_1}\, \overline{r^E_2}\, e^E_1 \vee \overline{r^E_2}\, e^E_2 \right)$$
$$= R^E_1 \left( r^E_2 \vee \overline{e^E_2}\, r^E_3 \right) \vee R^E_2\, r^E_3 \vee L^E_2\, e^E_1 \vee L^E_3 \left( e^E_1 \overline{r^E_2} \vee e^E_2 \right),$$

which should be zero. Note that this expression, as written, applies to the concatenation $q_3; q_2; q_1$, and thus we have to map $(1, 2, 3) \mapsto (2, 3, 1)$ to obtain

$$\underbrace{R^E_2 \left( r^E_3 \vee \overline{e^E_3}\, r^E_1 \right)}_{(*)} \vee \underbrace{R^E_3\, r^E_1 \vee L^E_3\, e^E_2}_{(**)} \vee \underbrace{L^E_1 \left( e^E_2 \overline{r^E_3} \vee e^E_3 \right)}_{(***)} = 0. \qquad (4.44)$$

Before checking whether these expressions are zero or not, we have to complete the matrices involved. All calculations have been divided into three steps and, as they are combined with or, the result will not be null if any one of them fails to be zero. Only the second term (**) is expanded, with the ordering of nodes not specified for reasons of space. Nodes are sorted $[2\ 3\ 5\ 1\ 4]$ both by columns and by rows, meaning for example that the element at position $(3,4)$ is an edge starting in node 5 and ending in node 1.

Fig. 4.8.
Productions q1, q2 and q3 (Rep.)

$$R^E_3 \wedge r^E_1 \vee L^E_3 \wedge e^E_2 = \begin{pmatrix} 1&0&0&1&0 \\ 0&0&0&0&0 \\ 1&0&1&0&0 \\ 0&0&1&0&0 \\ 0&0&0&0&0 \end{pmatrix} \wedge \begin{pmatrix} 0&1&0&0&0 \\ 0&1&0&0&0 \\ 0&1&0&0&0 \\ 0&0&0&0&0 \\ 0&0&0&0&0 \end{pmatrix} \vee \begin{pmatrix} 0&0&0&1&0 \\ 1&0&0&0&0 \\ 1&0&0&0&0 \\ 0&0&1&0&0 \\ 0&0&0&0&0 \end{pmatrix} \wedge \begin{pmatrix} 1&0&0&0&0 \\ 0&0&0&0&0 \\ 0&1&0&0&0 \\ 0&0&0&0&0 \\ 0&0&0&0&0 \end{pmatrix} = 0,$$

so the sequence is coherent,¹⁴ where, as usual, a matrix filled with zeros is represented by 0.

Now consider the sequence $s'_3 = q_2; q_3; q_1$, in which the positions of applications have been permuted with respect to $s_3$. The condition for its coherence is:

$$\underbrace{R^E_1 \left( r^E_3 \vee \overline{e^E_3}\, r^E_2 \right)}_{(*)} \vee \underbrace{R^E_3\, r^E_2 \vee L^E_3\, e^E_1}_{(**)} \vee \underbrace{L^E_2 \left( e^E_1 \overline{r^E_3} \vee e^E_3 \right)}_{(***)} = 0. \qquad (4.45)$$

If we focus just on the first term (*) in equation (4.45),

$$R^E_1 \wedge \left( r^E_3 \vee \overline{e^E_3}\, r^E_2 \right) = \begin{pmatrix} 0&1&1&0&0 \\ 0&1&0&0&0 \\ 0&1&1&0&0 \\ 0&0&0&0&0 \\ 0&0&0&0&0 \end{pmatrix} \wedge \begin{pmatrix} 1&0&0&0&0 \\ 0&0&0&0&0 \\ 0&0&1&0&0 \\ 0&0&1&0&0 \\ 0&0&0&0&0 \end{pmatrix},$$

we obtain a matrix filled with zeros except in position $(3,3)$, which corresponds to an edge that starts and ends in node 5. The ordering of nodes has been omitted again for reasons of space, but it is the same as above: $[2\ 3\ 5\ 1\ 4]$.

We not only see that the sequence is not coherent; in addition, information is provided on which node or edge may present problems when the sequence is applied to an actual host graph. Note that a sequence not being coherent does not necessarily mean that the grammar is ill defined, but rather that we have to be especially careful when applying the sequence to a host graph, because the match is then forced to identify all problematic parts in different places. This information could be used when actually finding the match; a possible strategy, if parallel matching for different productions is required, is to start with those elements which may present a problem.¹⁵

¹⁴ It is also necessary to check that (*) and (***) are 0.
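The $\triangle$ and $\nabla$ operators of Definition 4.3.4 can be transcribed directly from (4.40) and (4.41) for Boolean scalars (they then apply entrywise to the matrices used throughout). This is a sketch with our own function names:

```python
def delta(t0, t1, F):
    """Delta, eq. (4.40): OR over y of (AND of F(x, y) for x = y..t1)."""
    return any(all(F(x, y) for x in range(y, t1 + 1))
               for y in range(t0, t1 + 1))

def nabla(t0, t1, G):
    """Nabla, eq. (4.41): OR over y of (AND of G(x, y) for x = t0..y)."""
    return any(all(G(x, y) for x in range(t0, y + 1))
               for y in range(t0, t1 + 1))

# reproduce the expansion of delta^3_1(not(r_x) e_y) given above
r = {1: False, 2: True, 3: False}
e = {1: True, 2: False, 3: True}
lhs = delta(1, 3, lambda x, y: (not r[x]) and e[y])
rhs = (e[3] or ((not r[3]) and e[2])
            or ((not r[3]) and (not r[2]) and e[1]))
assert lhs == rhs
```

Replacing the scalars by Boolean matrix entries and the `lambda` by the appropriate $\overline{e_x} r_y$ or $e_y \overline{r_x}$ expressions gives a direct way to evaluate the coherence formulas (4.42)–(4.43) for any $n$.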
This section ends by providing a simple interpretation of $\nabla$ and $\triangle$, which in essence are a generalization of the structure of a sequence of productions. A sequence $p_2; p_1$ is a complex operation: to some potential digraph, one should start by deleting the elements specified by $e_1$, then add the elements in $r_1$, afterwards delete the elements in $e_2$ and finally add the elements in $r_2$. Generalization means that this same structure can be applied beyond the matrices $e$ and $r$, i.e. there is an alternating sequence of "delete" and "add" operations with general expressions rather than just the matrices $e$ and $r$, for example $\nabla^3_1 \left( e_x R_x \vee L_y \vee r_y \right)$.

The operators $\nabla$ and $\triangle$ represent ascending and descending sequences. For example, $\nabla^3_1 \left( \overline{e_x}\, r_y \right) = p_1 \left( p_2 \left( r_3 \right) \right)$ and $\triangle^3_1 \left( \overline{e_x}\, r_y \right) = p_3 \left( p_2 \left( r_1 \right) \right)$. In some detail:

$$\nabla^3_1 \left( \overline{e_x}\, r_y \right) = \overline{e_1}\, r_1 \vee \overline{e_1}\, \overline{e_2}\, r_2 \vee \overline{e_1}\, \overline{e_2}\, \overline{e_3}\, r_3 = r_1 \vee \overline{e_1}\, r_2 \vee \overline{e_1}\, \overline{e_2}\, r_3 = r_1 \vee \overline{e_1} \left( r_2 \vee \overline{e_2}\, r_3 \right) = p_1 \left( p_2 \left( r_3 \right) \right).$$

We will make good use of this interpretation in Chap. 6 to establish the equivalence between coherence plus compatibility of a derivation, and finding its minimal and negative initial digraphs in the host graph and its negation, respectively.

As commented above, we shall return to coherence in Sec. 4.4; it is further generalized in [66] through so-called Boolean complexes.

¹⁵ The same remark applies to G-congruence, to be studied in Sec. 7.1.

4.4 Coherence Revisited

In this section we shall extend the results of Sec. 4.3, taking into account potential dangling edges. To this end we need to introduce the nihilation matrix $K$, which will be very useful in the rest of the book. Our plan is to first make explicit all elements that should not be present in a potential match of the left hand side of a rule in a host graph, and then to characterize them for a finite sequence.
This is carried out by defining something similar to the minimal initial digraph: the negative initial digraph. In order to keep our philosophy of making the analysis as general as possible (independent of any concrete host graph), only the elements appearing in the LHS of the productions that make up the sequence, plus their actions, will be taken into account. We will refer to elements that should not be present as forbidden elements. There are two sets of elements that, for different reasons, should not appear in a potential initial digraph:

1. Edges added by the production, as we are limited for now to simple digraphs.
2. Edges incident to some node deleted by the production (dangling edges).

To account for the elements just described, the notation representing productions is extended with a new graph $K$ that we will call the nihilation matrix.¹⁶ Note that the concept of grammar rule remains unaltered, because we are just making explicit some implicit information. To further justify the naturalness of this matrix, let's contrast its meaning with that of the LHS and its interpretation as a positive application condition (the LHS must exist in the host graph in order to apply the grammar rule). In effect, $K$ can be seen as a negative application condition: if it is found in the host graph, then the production cannot be applied. We will dedicate a whole chapter (Chap. 8) to developing these ideas.¹⁷

¹⁶ It will normally be represented by $K$. Subscripts will be used to distinguish nihilation matrices of different productions, e.g. $K_2$ for the nihilation matrix of production $p_2$. When dealing with sequences, e.g. sequence $s_3$, we shall prefer the notation $K(s_3)$.
¹⁷ In a negative application condition we will be allowed to add information about which elements must not be present. It is probably more precise to speak of $K$ as an implicit negative application condition.
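The two sets of forbidden elements listed above can be computed mechanically, anticipating Lemma 4.4.2 below: an edge is forbidden if the rule adds it, or if it would be left dangling by a node deletion without being deleted itself. This is an illustrative sketch, not the book's code; the matrices used in the final check are those of the example of Fig. 4.9:

```python
import numpy as np

def nihilation(e_E, r_E, e_N):
    """K = p(D) = r v ((not e) AND D), where d_ij = 1 iff node i or j is
    deleted by the rule (a potential dangling edge)."""
    D = ~np.outer(~e_N, ~e_N)     # complement of outer(not e^N, not e^N)
    return r_E | (~e_E & D)

# production of Fig. 4.9: node 1 is deleted, so e^N = (1, 0, 0)
e_N = np.array([1, 0, 0], bool)
e_E = np.array([[0, 1, 1], [0, 0, 1], [0, 0, 0]], bool)  # deleted edges
r_E = np.array([[0, 0, 0], [0, 1, 0], [0, 1, 1]], bool)  # added edges
K = nihilation(e_E, r_E, e_N)
assert (K == np.array([[1, 0, 0], [1, 1, 0], [1, 1, 1]], bool)).all()
```

The forbidden set mixes both sources: the first column of $K$ collects would-be dangling edges around the deleted node 1, while the entries coming from $r^E$ forbid edges the rule is about to add.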
90 4 Matrix Graph Grammars F undamentals The order in which matr ices are derived is enlar ged to cop e with the nihilation matrix K : p L, R q ÞÝ Ñ p e, r q ÞÝ Ñ K. (4.46) Otherwise stated, a pro duction is s tatic al ly determined by its left and r ight ha nd sides p p L, R q , from which it is p ossible to give a dynamic definitio n p p L, e, r q , to end up with a full sp ecification including its envir onmental 18 behaviour p p L, K, e, r q . Definition 4. 4.1 (Pro duction - Dynamic F orm ulation) A pr o duction p is a mor- phism 19 b etwe en t wo s imple digr aphs L and R , and c an b e sp e cifie d by the tuple p L E , K E , e E , r E , L N , K N , e N , r N . (4.47) Compare with Dfinition 4 .1.1, the static formulation of pro duction. As co mmen ted earlier in the bo o k , it should b e p ossible to consider no des and e dges toge ther using the tensorial co nstruction of Chap. 10. Next lemma s hows how to ca lculate K using the pro duction p , by applying it to a certain ma trix: Lemma 4. 4.2 (Nihilatio n matrix) Using tensor notation (se e Se c. 2.4) le t’s define D e N b p e N q t , wher e t denotes t r ansp osition. Then, K E p D . (4.48) Pr o of The following ma trix sp ecifies po ten tial da ng ling edges incident to no des app ear ing in the left hand side of p : D d i j # 1 if e i N 1 or p e j q N 1 . 0 other wi se. (4.49) Note that D e N b p e N q t . Every element incident to a no de that is going to b e deleted becomes dangling except edges deleted by the pro duction. In a dditio n, edges added by the rule can no t b e present, th us we have K E r E _ e E D p D . 18 Environmen t al b ecause K sp ecifies some elements in the surround in gs of L that should not exist. If the LHS has b een completed – probably b ecause it b elongs to some sequence – th en the nihilation matrix will consider those nodes to o. 19 In fact, a partial fun ction since some elemen ts in L d o not h a ve an image in R . 4.4 Coherence Revisited 91 Fig. 4.9. 
Example of Nihilation Matrix

Example. We will calculate the elements appearing in Lemma 4.4.2 for the production of Fig. 4.9:

$$ D = \overline{\,\overline{e^N} \otimes \left(\overline{e^N}\right)^t\,} = \overline{\begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix} \otimes \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix}^t} = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix}. $$

The nihilation matrix is given by equation (4.48):

$$ K = r \vee \overline{e}\,D = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 1 \end{pmatrix} \vee \left[ \begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & 1 & 1 \end{pmatrix} \wedge \begin{pmatrix} 1 & 1 & 1 \\ 1 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix} \right] = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & 1 & 1 \end{pmatrix}. $$

This matrix shows that node 1 cannot have a self loop (it would become a dangling edge, as it is not deleted by the production), but edges (1,2) and (1,3) may be present (in fact they must be present, as they belong to L). Edge (2,1) must not exist for the same reason. The self loop on node 2 cannot be found because it is added by the rule. A similar reasoning tells us that no edge starting in node 3 can exist: the self loop and edge (3,2) because they are going to be added, and (3,1) because it would become a dangling edge.

It is worth stressing that matrix D does not specify actions of the production to be performed in the complement of the host graph, $\overline{G}$. Actions of productions are specified exclusively by the matrices e and r.

Some questions of importance remain unsolved regarding forbidden elements and productions: How are the elements in the nihilation matrix transformed by a production p? Otherwise stated, if the forbidden elements in the LHS of the production are those given by K, what are the forbidden elements in the RHS according to p? Although this question will be studied in detail in Sec. 9.2 – in particular in Prop. 9.2.5 on p. 217 – we need to advance the answer: for a production $p : L \to R$ with nihil part K, the forbidden elements (we shall use the letter Q) are given by the inverse of the grammar rule: $Q = p^{-1}\left(K\right)$.

Now we are in the position to extend the results of Sec. 4.3 by considering potential dangling edges. We shall prove:

Theorem 4.4.3. The concatenation $s_n = p_n ; \ldots$
$; p_1$ is coherent if, besides eq. (4.42), the identity

$$ \bigvee_{i=1}^{n}\left[\, Q_i\;\triangledown_{i+1}^{n}\!\left(e_y\,\overline{r_x}\right) \;\vee\; K_i\;\triangle_{1}^{i-1}\!\left(r_y\,\overline{e_x}\right) \right] = 0 \qquad (4.50) $$

is also fulfilled.

Proof. We proceed as for Theorem 4.3.5. First, let us consider a sequence of two productions, $s_2 = p_2;p_1$. In order to decide whether the application of p1 excludes p2 (regarding elements that appear in the nihil parts), the following conditions must be demanded:

1. No common element is deleted by both productions:
$$ e_1\,e_2 = 0. \qquad (4.51) $$
2. Production p2 does not delete any element that production p1 demands not to be present and that, besides, is not added by p1:
$$ e_2\,K_1\,\overline{r_1} = 0. \qquad (4.52) $$
3. The first production does not add any element that is demanded not to exist by the second production:
$$ r_1\,K_2 = 0. \qquad (4.53) $$

Altogether we can write

$$ e_1 e_2 \vee \overline{r_1}\,e_2\,K_1 \vee r_1 K_2 \;=\; e_2\left(e_1 \vee \overline{r_1}\,K_1\right) \vee r_1 K_2 \;=\; e_2\,Q_1 \vee r_1 K_2 \;=\; 0, \qquad (4.54) $$

which is equivalent to

$$ e_2\,\overline{r_2}\,Q_1 \;\vee\; \overline{e_1}\,r_1\,K_2 \;=\; 0 \qquad (4.55) $$

due to basic properties of MGG productions (see Prop. 4.1.4).

In the case of a sequence that consists of three productions, $s_3 = p_3;p_2;p_1$, the procedure is to apply the same reasoning to the subsequences $p_2;p_1$ (restrictions on p2's actions due to p1) and $p_3;p_2$ (restrictions on p3's actions due to p2), and or them together. Finally, we have to deduce which conditions have to be imposed on the actions of p3 due to p1, this time taking into account that p2 is applied in between. Again, we can put all conditions in a single expression:

$$ Q_1\left(e_2 \vee \overline{r_2}\,e_3\right) \;\vee\; Q_2\,e_3 \;\vee\; K_2\,r_1 \;\vee\; K_3\left(r_1\,\overline{e_2} \vee r_2\right) \;=\; 0. \qquad (4.56) $$

    D2;D1  (4.53)   D2;P1  ✓        D2;A1  ✓
    P2;D1  (4.53)   P2;P1  ✓        P2;A1  ✓
    A2;D1  ✓        A2;P1  (4.52)   A2;A1  (4.51)

Table 4.2. Possible Actions (Two Productions Incl. Dangling Edges). An equation number marks the condition that rules the combination out; ✓ marks an allowed combination.

We now check that eqs. (4.55) and (4.56) do imply coherence. To see that eq.
(4.55) implies coherence, we only need to enumerate all possible actions on the nihil parts. It might be easier to think in terms of the negation $\overline{G}$ of a potential host graph to which both productions would be applied, and to check that any problematic situation is ruled out. See Table 4.2, where D is deletion of one element from $\overline{G}$ (i.e., the element is added to G), A is addition to $\overline{G}$, and P is preservation. Notice that these definitions of D, A and P are opposite to those given for the certainty case above.[20]

For example, action A2;A1 tells that in the first place p1 adds one element ε to $\overline{G}$. To do so, this element has to be in $e_1$, or incident to a node that is going to be deleted. After that, p2 adds the same element, deriving a conflict between the rules.

So far we have checked coherence for the case n = 2. When the sequence has three productions, $s_3 = p_3;p_2;p_1$, there are 27 possible combinations of actions. However, some of them are already considered in the subsequences $p_2;p_1$ and $p_3;p_2$. Table 4.3 summarizes them.

[20] Preservation means that the element is demanded to be in $\overline{G}$ because it is demanded not to exist by the production (it appears in $K_1$), and it remains non-existent after the application of the production (it appears also in $Q_1$).

    D3;D2;D1  (4.53)          D3;D2;P1  (4.53)   D3;D2;A1  (4.53)
    P3;D2;D1  (4.53)          P3;D2;P1  (4.53)   P3;D2;A1  (4.53)
    A3;D2;D1  (4.53)          A3;D2;P1  ✓        A3;D2;A1  ✓
    D3;P2;D1  (4.53)          D3;P2;P1  ✓        D3;P2;A1  ✓
    P3;P2;D1  (4.53)          P3;P2;P1  ✓        P3;P2;A1  ✓
    A3;P2;D1  (4.53)/(4.52)   A3;P2;P1  (4.52)   A3;P2;A1  (4.52)
    D3;A2;D1  ✓               D3;A2;P1  (4.52)   D3;A2;A1  (4.51)
    P3;A2;D1  ✓               P3;A2;P1  (4.52)   P3;A2;A1  (4.51)
    A3;A2;D1  (4.51)          A3;A2;P1  (4.51)   A3;A2;A1  (4.51)

Table 4.3.
Possible Actions (Three Productions Incl. Dangling Edges)

There are four forbidden actions:[21] D3;D1, A3;P1, P3;D1 and A3;A1. Let us consider the first one, which corresponds to $r_1 r_3$ (the first production adds the element – it is erased from $\overline{G}$ – and the same for p3). In Table 4.3 we see that the related conditions appear in positions (1,1), (4,1) and (7,1). The first two are ruled out by conflicts detected in $p_2;p_1$ and $p_3;p_2$, respectively. We are left with the third case, which is in fact allowed. The condition $r_3 r_1$, taking into account the presence of p2 in the middle, is contained in eq. (4.56) in the term $K_3\,r_1\,\overline{e_2}$, which includes $r_1\,\overline{e_2}\,r_3$. This must be zero, i.e. it is not possible for p1 and p3 to remove one element from $\overline{G}$ if it is not added to $\overline{G}$ by p2. The other three forbidden actions can be checked similarly.

The proof can be finished by induction on the number of productions. The induction hypothesis leaves again four cases: Dn;D1, An;P1, Pn;D1 and An;A1. The corresponding table changes, but it is not difficult to fill in the details. ∎

There are some duplicated conditions, so it could be possible to “optimize” equations (4.42) and (4.50). The form considered in Theorems 4.3.5 and 4.4.3 is preferred because we may use △ and ▽ to synthesize the expressions.

Some comments on the previous proof follow:

1. Notice that eq. (4.51) is already considered in Theorem 4.3.5, through eq. (4.30), which demands $e_1 L_2 = 0$ (as $e_2 \leq L_2$, we have $e_1 L_2 = 0 \Rightarrow e_1 e_2 = 0$).
2. Condition (4.52) is $e_2\,K_1\,\overline{r_1} = e_2\,\overline{r_1}\,r_1 \vee e_2\,\overline{r_1}\,\overline{e_1}\,D_1 = e_2\,\overline{e_1}\,D_1$, where we have used that $K_1 = p\left(D_1\right)$. Note that those $\overline{e_1}\,D_1 \neq 0$ are the dangling edges not deleted by p1.
3. Equation (4.53) is $r_1\,K_2 = r_1\,p_2\left(D_2\right) = r_1\left(r_2 \vee \overline{e_2}\,D_2\right) = r_1 r_2 \vee r_1\,\overline{e_2}\,D_2$.

[21] Those actions appearing in Table 4.1, updated for p3.
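The three pairwise conditions (4.51)–(4.53) are plain Boolean-matrix identities and can be checked mechanically. A minimal sketch (function name and the toy matrices are ours):

```python
import numpy as np

def nihil_coherent_pair(e1, r1, K1, e2, K2):
    """Conditions (4.51)-(4.53) for a sequence p2;p1, nihil parts included."""
    no_common_deletion = not (e1 & e2).any()           # (4.51): e1 e2 = 0
    no_forbidden_deletion = not (e2 & K1 & ~r1).any()  # (4.52): e2 K1 (not r1) = 0
    no_forbidden_addition = not (r1 & K2).any()        # (4.53): r1 K2 = 0
    return no_common_deletion and no_forbidden_deletion and no_forbidden_addition

Z = np.zeros((3, 3), dtype=bool)
r1 = Z.copy(); r1[0, 1] = True   # p1 adds edge (1,2) ...
K2 = Z.copy(); K2[0, 1] = True   # ... which p2 forbids: (4.53) fails
print(nihil_coherent_pair(Z, r1, Z, Z, K2))   # → False
print(nihil_coherent_pair(Z, Z, Z, Z, Z))     # → True
```

For longer sequences the same checks are applied to every ordered pair, with the intermediate additions and deletions folded in as eq. (4.56) does for n = 3.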
The first term of this last expression, $r_1 r_2$, is already included in Theorem 4.3.5, and the second term is again related to dangling edges.

Potential dangling edges thus appear in coherence, which might indicate a possible link between coherence and compatibility (compatibility for sequences is characterized in Sec. 5.3). Coherence takes dangling edges into account, but only those that appear in the “actions” of the productions (in the matrices e and r).

4.5 Summary and Conclusions

In this chapter we have introduced two equivalent definitions of production, one emphasizing the static part of grammar rules and the other stressing their dynamics.

Also, completion has been addressed. To some extent it allows us to study productions while forgetting about the state to which the rule is to be applied. It provides us with a means to relate elements in different graphs, a kind of horizontal identification of elements among the rules in a sequence.

Sequences of productions have been introduced, together with compatibility and coherence. The first ensures that the underlying structure (simple digraph) is kept, i.e. it is closed under the operations defined in the sequence. Coherence guarantees that the actions specified by one production do not disturb the productions following it.

Coherence can be compared with critical pairs, used in the categorical approach to graph grammars to detect conflicts between grammar rules. There are differences, though. The main one is that coherence in our approach covers any finite sequence of productions, while critical pairs are limited to two productions. Among other things, coherence is able to detect whether a potential problem between two productions is actually fixed by some intermediate rule.
In this and the next chapter (devoted to initial digraphs and composition) we develop some analytical techniques, independent to some extent of the initial state of the system to which the grammar rules will be applied. This allows us to obtain information about the grammar rules themselves, for example at design time. This information may be useful during run time. We will return to this point in future chapters.

5 Initial Digraphs and Composition

In this chapter, which builds on Chapter 4, we will mainly deal with initial digraphs and composition, providing more analysis techniques independent to some extent of the initial state of the grammar.

Initial digraphs (minimal and negative) are simple digraphs with enough elements to permit the application of a given sequence. They can be thought of as a proxy for a real initial state. The advantage is that they allow us to study a grammar without considering a concrete initial state.

Composition is an operation that defines a single production out of a given sequence of productions. In some sense, composition and concatenation (sequentialization, studied in Chapter 4) are opposite operations.

These analysis techniques (initial digraphs and composition) will be of importance in addressing the problems posed in Chapter 1. In particular, they will be used to tackle applicability (problem 1), sequential independence (problem 3) and reachability (problem 4).

This chapter is organized as follows. The problems of finding those elements that must be present (minimal initial digraph) or must not appear (negative initial digraph) are addressed in Secs. 5.1 and 5.2. At times it is of interest to build a rule that performs the same actions as a given coherent sequence but is applied in a single step, i.e. no intermediate states are generated.
This is composition, as normally defined in mathematics. As they are related, the definition of compatibility for a sequence of productions is also introduced and characterized in Sec. 5.3. Finally, as in every chapter, there is a section with a summary and some conclusions.

5.1 Minimal Initial Digraph

Compatibility and composition, plus matching in MGG, are our main motivations for introducing the concepts and results in this and the next section (minimal and negative initial digraphs). The next few paragraphs clarify these points.

Matches find the left hand side of the production in the host graph (see Chap. 6) and, as a side effect, relate and unrelate elements among productions. We may think of matching as a vertical identification of nodes – and hence edges – relating elements, so to speak, horizontally (see Fig. 5.1). For example, if $L_1$ and $L_2$ each have one node of type 3, and $m_1 : L_1 \to G_0$ and $m_2 : L_2 \to G_1$ match this node in the same place of $G_0$ and $G_1$ (suppose it is not deleted by p1), then this node is horizontally related. In Sec. 5.1 we will study this sort of relation in detail.

Fig. 5.1. Example of Sequence and Derivation

Compatibility is determined by applying a production to an initial graph and checking the nodes and edges of the result. If we try to define compatibility for a concatenation or its composition, we have to decide which is the initial graph (see the next example), but we would prefer not to begin our analysis of matches yet.

Example. Consider productions u and v defined in Fig. 5.2. It is easy to see that v;u is coherent but not compatible. It seems a bit more difficult to define their composition $v \circ u$: if they were applied to the same nodes, a dangling edge would be obtained.
Although coherence itself does not guarantee applicability of a sequence, we will see that compatibility is sufficient (generalized to consider concatenations, not only graphs or single productions as in Defs. 2.3.2 and 4.1.5).

Fig. 5.2. Non-Compatible Productions

Two possibilities are found in the literature (for the categorical approach) in order to define a match, depending on whether DPO or SPO is followed (see Secs. 3.1 and 3.2, or [23]). In the latter, deletion prevails, so in the present example production v would delete edge (4,2). Our approximation to the match of a production is slightly different, considering it as an operator that acts on a space whose elements are productions (see Chap. 6).[1]

The example shows a problem that led us to consider not only productions, but also the context in which they are to be applied – in fact, the minimal context in which they can be applied. This situation might be overcome if we were able to define a minimal and unique[2] “host graph” with enough elements to permit all operations of a given concatenation or composition of productions; we would then avoid, to some extent, considering matches, and would remain within the realm of productions alone. In fact, as we shall see, it is possible to define such graphs. We name such a graph the minimal initial digraph.

Note that we were able to give a definition of compatibility in Def. 2.3.2 for a single production because it is clear (so obvious that we did not mention it) which one is the minimal initial digraph: its left hand side.

Any production demands elements to exist in the host graph in order to be applied. Also, some elements must not be present. We will touch on “forbidden” elements in Sec. 5.2.
Both are quite useful concepts, because they allow us to ignore matching if staying at the grammar definition level is desired (to study the grammar's potential behaviour, or to define concepts independently of the host graph); also, the applicability problem (see problem 1) can be characterized through them. We will return to these concepts once matching is introduced and characterized, in Sec. 6.3, and also in Chap. 8 when we define graph constraints and application conditions.

[1] In the SPO approach – see Sec. 3.2 – rewriting has as a side effect the deletion of dangling edges. One important difference is that in our approach the match is defined as an operator that enlarges the production or the sequence of productions by adding new ones.
[2] Unique once the concatenation has been completed. The minimal initial digraph makes horizontal identification of elements explicit.

Let us turn to defining and characterizing minimal initial digraphs. One graph is known which fulfills all demands of the coherent sequence $s_n = p_n;\ldots;p_1$ – namely $L = \bigvee_{i=1}^{n} L_i$ – in the sense that it has enough elements to carry out all operations specified in the sequence. Graph L is not completed (each $L_i$ with respect to the rest). If there are coherence issues among the grammar rules, then probably all nodes in all LHSs of the rules will be unrelated, giving rise to the disjoint union of the $L_i$. If, on the contrary, there are no coherence problems at all, then we can identify across productions as many nodes of the same type in the $L_i$ as desired.

Definition 5.1.1 (Minimal Initial Digraph). Let $s_n = p_n;\ldots;p_1$ be a completed sequence. A minimal initial digraph is a simple digraph which permits all operations of $s_n$ and does not contain any proper subgraph with the same property.

This concept will be slightly generalized in Sec.
6.3, Definition 6.3.1, in which we consider the set of all potential minimal initial digraphs for a given (non-completed) sequence and analyze its structure. In fact, L is not a digraph but this initial digraph set. Through completion, one actual digraph can be fixed.

Theorem 5.1.2. Given a completed coherent sequence of productions $s_n = p_n;\ldots;p_1$, the minimal initial digraph is defined by the equation

$$ M_n = \nabla_1^{n}\left(\overline{r_x}\,L_y\right). \qquad (5.1) $$

Superscripts are omitted to make the formulas easier to read (i.e. they apply to both nodes and edges). In Fig. 5.6 on p. 106, formula (5.1) and its negation (5.12) are expanded for three productions.

Proof. To properly prove this theorem we have to check that $M_n$ has enough edges and nodes to apply all productions in the specified order, that it is minimal, and finally that it is unique (up to isomorphism). We will proceed by induction on the number of productions.

By hypothesis we know that the concatenation is coherent, and thus the application of one production does not exclude the ones coming after it. In order to see that there are sufficient nodes and edges, it is enough to check that $s_n\left(\bigvee_{i=1}^{n} L_i\right) = s_n\left(M_n\right)$, as the most complete digraph to start with is $\bigvee_{i=1}^{n} L_i$, which has enough elements due to coherence.[3]

If we had a sequence consisting of only one production, $s_1 = p_1$, then it should be obvious that the minimal digraph needed to apply the concatenation is $L_1$. In the case of a sequence of two productions, say $s_2 = p_2;p_1$, what p1 uses ($L_1$) is again needed. All edges that p2 uses ($L_2$), except those added ($r_1$) by the first production, are also mandatory. Note that the elements added ($r_1$) by p1 are not considered in the minimal initial digraph.
If an element is preserved (used and not erased, $\overline{e_1}\,L_1$) by p1, then it should not be taken into account:

$$ L_1 \vee L_2\,\overline{r_1}\,\overline{\left(\overline{e_1}\,L_1\right)} = L_1 \vee L_2\,\overline{r_1}\left(e_1 \vee \overline{L_1}\right) = L_1 \vee L_2\,\overline{R_1}. \qquad (5.2) $$

This formula can be paraphrased as “elements used by p1, plus those needed by p2's left hand side, except the ones resulting from p1's application”. It provides enough elements for $s_2$:

$$ p_2;p_1\left(L_1 \vee L_2\,\overline{R_1}\right) = r_2 \vee \overline{e_2}\left(r_1 \vee \overline{e_1}\left(L_1 \vee L_2\,\overline{R_1}\right)\right) = r_2 \vee \overline{e_2}\left(R_1 \vee \overline{e_1}\,L_2\,\overline{R_1}\right) = $$
$$ = r_2 \vee \overline{e_2}\left(R_1 \vee \overline{e_1}\,L_2\right) = r_2 \vee \overline{e_2}\left(r_1 \vee \overline{e_1}\left(L_1 \vee L_2\right)\right) = p_2;p_1\left(L_1 \vee L_2\right), $$

where the identity $a \vee \overline{a}\,b = a \vee b$ has been used in the third equality.

Let us move one step forward, to the sequence of three productions $s_3 = p_3;p_2;p_1$. The minimal digraph needs what $s_2$ needed ($L_1 \vee L_2\,\overline{R_1}$), but more besides. We have to add what the third production uses ($L_3$), except what comes out from p1 and is not deleted by production p2 (that is, $\overline{e_2}\,R_1$), and finally remove what comes out ($R_2$) from p2:

$$ M_3 = L_1 \vee L_2\,\overline{R_1} \vee L_3\,\overline{\left(\overline{e_2}\,R_1\right)}\;\overline{R_2} = L_1 \vee L_2\,\overline{R_1} \vee L_3\,\overline{R_2}\left(e_2 \vee \overline{R_1}\right). \qquad (5.3) $$

Similarly to what has already been done for $s_2$, one checks – expanding and simplifying with the same absorption identities – that the minimal initial digraph has enough elements, so it is possible to apply p1, p2 and p3:

$$ p_3;p_2;p_1\left(M_3\right) = r_3 \vee \overline{e_3}\left(r_2 \vee \overline{e_2}\left(r_1 \vee \overline{e_1}\left(L_1 \vee L_2 \vee L_3\right)\right)\right) = p_3;p_2;p_1\left(L_1 \vee L_2 \vee L_3\right). $$

[3] Recall that L is not completed, so it somehow represents some digraph with enough elements to apply $s_n$ to. This is not necessarily the maximal initial digraph as introduced in Sec. 6.3.
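The induction step just carried out can be replayed numerically. Below, a rule acts as $p(G) = r \vee (\overline{e} \wedge G)$; $M_2$ is built per eq. (5.2), and the expansion $M_n = \bigvee_y \left(\bigwedge_{x<y}\overline{r_x}\right)L_y$ is our reading of eq. (5.1). Helper names and the random toy rules are ours, and the containment conditions between L, e and r are only roughly enforced, so this is a sketch rather than the book's full construction:

```python
import numpy as np

def apply_prod(e, r, G):
    # p(G) = r ∨ (not-e ∧ G): deletion first, then addition.
    return r | (~e & G)

def minimal_initial_digraph(prods):
    # Expanded reading (an assumption) of eq. (5.1):
    # M_n = OR over y of (AND over x<y of not-r_x) AND L_y.
    shape = prods[0][0].shape
    M = np.zeros(shape, dtype=bool)
    not_yet_added = np.ones(shape, dtype=bool)
    for L, e, r in prods:
        M |= not_yet_added & L     # L_y is needed unless an earlier rule adds it
        not_yet_added &= ~r
    return M

rng = np.random.default_rng(1)
n = 4
L1 = rng.random((n, n)) < 0.5
e1 = L1 & (rng.random((n, n)) < 0.5)    # deleted elements come from L1
r1 = ~L1 & (rng.random((n, n)) < 0.3)   # added elements are new
L2 = rng.random((n, n)) < 0.5
e2 = L2 & (rng.random((n, n)) < 0.5)
r2 = ~L2 & (rng.random((n, n)) < 0.3)

R1 = apply_prod(e1, r1, L1)
M2 = L1 | (L2 & ~R1)                    # eq. (5.2)
s2_on_M2 = apply_prod(e2, r2, apply_prod(e1, r1, M2))
s2_on_L = apply_prod(e2, r2, apply_prod(e1, r1, L1 | L2))
print(bool((s2_on_M2 == s2_on_L).all()))  # → True
```

The printed equality is in fact an entrywise Boolean identity (it follows from $a \vee \overline{a}\,b = a \vee b$, as in the derivation above), so it holds for any choice of the toy matrices.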
The same reasoning applied to the case of four productions yields:

$$ M_4 = L_1 \vee L_2\,\overline{R_1} \vee L_3\,\overline{\left(\overline{e_2}\,R_1\right)}\;\overline{R_2} \vee L_4\,\overline{\left(\overline{e_3}\,\overline{e_2}\,R_1\right)}\;\overline{\left(\overline{e_3}\,R_2\right)}\;\overline{R_3}. \qquad (5.4) $$

Minimality is inferred by construction, because for each $L_i$ all elements added by a previous production and not deleted by any production $p_j$, $j < i$, are removed. If any other element were erased from the minimal initial digraph, then some production in $s_n$ would miss some element.

Now we want to express the previous formulas using the operators ∇ and ▽. The expression

$$ L_1^E \vee \bigvee_{i=2}^{n} L_i^E\;\triangle_1^{i-1}\!\left(\overline{R_x^E}\,e_y^E\right) \qquad (5.5) $$

is close, but we would be adding terms that include $\overline{R_1^E}\,e_1^E$, and clearly $\overline{R_1^E}\,e_1^E \neq \overline{R_1^E}$, which is what we have in the minimal initial digraph.[4] Thus, considering the fact that $a\,b \vee a = a$ (see Sec. 2.1), we eliminate them by or-ing in the terms

$$ \overline{e_1^E}\;\triangledown_1^{n-1}\!\left(\overline{R_x^E}\,L_{y+1}\right). \qquad (5.6) $$

[4] Not in formula (5.1), but in the expressions derived up to now for the minimal initial digraph: formulas (5.2) and (5.3).

We have thus arrived at a formula for the minimal initial digraph which is slightly different from that in the theorem:

$$ M_n = L_1 \vee \overline{e_1}\;\triangledown_1^{n-1}\!\left(\overline{R_x}\,L_{y+1}\right) \vee \bigvee_{i=2}^{n} L_i\;\triangle_1^{i-1}\!\left(\overline{R_x}\,e_y\right). \qquad (5.7) $$

Please refer to Fig. 5.3 where, to the right, expression (5.7) is represented, while to the left the same equation, simplified, is depicted for n = 4.

Fig. 5.3. Minimal Initial Digraph (Intermediate Expression). Four Productions

Our next step is to show that the previous identity is equivalent to

$$ M_n = L_1 \vee \overline{e_1}\;\triangledown_1^{n-1}\!\left(\overline{r_x}\,L_{y+1}\right) \vee \bigvee_{i=2}^{n} L_i\;\triangle_1^{i-1}\!\left(\overline{r_x}\,e_y\right), \qquad (5.8) $$

illustrating the way to proceed for n = 3. To this end, equation (4.13) is used, as well as the fact that $a \vee \overline{a}\,b = a \vee b$ (see Sec.
2.1):

$$ M_3 = L_1 \vee L_2\,\overline{R_1} \vee L_3\,\overline{R_2}\left(e_2 \vee \overline{R_1}\right) = L_1 \vee L_2\,\overline{r_1 \vee \overline{e_1}\,L_1} \vee L_3\,\overline{r_2 \vee \overline{e_2}\,L_2}\left(e_2 \vee \overline{r_1 \vee \overline{e_1}\,L_1}\right) = $$
$$ = \ldots = L_1 \vee L_2\,\overline{r_1} \vee L_3\,\overline{r_2}\left(e_2 \vee \overline{r_1}\right), $$

where terms absorbed by $L_1$, $L_2$ or $L_3$ have been discarded along the way. But (5.8) is what we have in the theorem because, as the concatenation is coherent, the third term in (5.8) is zero:[5]

$$ \bigvee_{i=2}^{n} L_i\;\triangle_1^{i-1}\!\left(\overline{r_x}\,e_y\right) = 0. \qquad (5.9) $$

Finally, since $e_1 \leq L_1$, it is possible to omit $\overline{e_1}$ and obtain (5.1), recalling that $\overline{r}\,L = L$ (by Prop. 4.1.4).

Uniqueness can be proved by contradiction, using equation (5.1) and induction on the number of productions. ∎

[5] This is precisely the second term in (4.42), the equation that characterizes coherence.

Fig. 5.4. Non-Compatible Productions (Rep.)

Example. Let $s_2 = u;v$ and $s_2' = v;u$ (first introduced in Fig. 5.2 on p. 99 and reproduced in Fig. 5.4 for the reader's convenience). Minimal initial digraphs for these productions are represented in Fig. 5.5. Given the way we have introduced the concept of minimal initial digraph, $M_2$ cannot be considered as such, because for either sequence u;v or v;u there are subgraphs that permit their application. In the same figure, the minimal initial digraphs for the productions q3;q2;q1 and q1;q3;q2 are also represented. Productions $q_i$ can be found in Fig. 4.8 on p. 87.

Fig. 5.5. Minimal Initial Digraph. Examples and Counterexample

We will explicitly compute the minimal initial digraph for the concatenation q3;q2;q1. In this example, and in order to illustrate some of the steps used to prove the previous theorem, formula (5.7) is used.
Once simplified, it yields the expression

$$ \underbrace{L_1^E \vee L_2^E\,\overline{R_1^E}}_{(*)} \;\vee\; \underbrace{L_3^E\,\overline{R_2^E}\left(e_2^E \vee \overline{R_1^E}\right)}_{(**)}. $$

The ordering of nodes is [2 3 5 1 4]. Computing both terms entrywise on the corresponding adjacency matrices gives

$$ (*) = \begin{pmatrix} 0&0&1&0&1\\ 0&0&0&0&0\\ 1&0&1&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0 \end{pmatrix}, \qquad (**) = \begin{pmatrix} 0&0&0&1&0\\ 0&0&0&0&0\\ 1&0&0&0&0\\ 0&0&0&1&0\\ 0&0&0&0&0 \end{pmatrix}, $$

$$ (*) \vee (**) = \begin{pmatrix} 0&0&1&1&1\\ 0&0&0&0&0\\ 1&0&1&0&0\\ 0&0&0&1&0\\ 0&0&0&0&0 \end{pmatrix}, $$

which is depicted in the center of Fig. 5.5.

A closed formula for the effect of the application of a coherent concatenation can be useful if we want to operate in the general case. This is where the next corollary comes in.

Corollary 5.1.3. Let $s_n = p_n;\ldots;p_1$ be a coherent concatenation of completed productions, and $M_n$ its minimal initial digraph as defined in (5.1). Then,

$$ s_n\left(M_n^E\right) = \bigwedge_{i=1}^{n}\overline{e_i^E}\;M_n^E \;\vee\; \triangle_1^{n}\!\left(\overline{e_x^E}\,r_y^E\right) \qquad (5.10) $$

$$ \overline{s_n\left(M_n^E\right)} = \bigwedge_{i=1}^{n}\overline{r_i^E}\;\overline{M_n^E} \;\vee\; \triangle_1^{n}\!\left(\overline{r_x^E}\,e_y^E\right) \qquad (5.11) $$

Proof. Theorem 5.1.2 proves that $s_n\left(M_n^E\right) = s_n\left(\bigvee_{i=1}^{n} L_i\right)$. To derive the formulas, apply induction on the number of productions and eq. (4.10). ∎

Remark. Equation (5.11) will be useful in Sec. 5.3 to calculate the compatibility of a sequence. More interestingly, note that equation (5.10) has the same shape as a single production, $p = r \vee \overline{e}\,L$, where

$$ \overline{e} = \bigwedge_{i=1}^{n}\overline{e_i^E}, \qquad r = \triangle_1^{n}\!\left(\overline{e_x^E}\,r_y^E\right). $$

However, in contrast to what happens with a single production, the order of application does matter, it being necessary to carry out deletion first and addition afterwards.
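The shape claimed by Corollary 5.1.3 can be verified mechanically: sequential application of the rules coincides with a single composite step whose kept part is "in the start graph and deleted by no rule" and whose added part is "added by some rule and deleted by no later rule". Helper names and the random toy rules are ours:

```python
import numpy as np

def apply_seq(rules, G):
    # Sequential application; p1 is applied first, pn last.
    for e, r in rules:
        G = r | (~e & G)
    return G

def composite_image(rules, G):
    # Right-hand side of eq. (5.10): (AND of not-e_i) G, or-ed with the
    # elements some rule adds that no later rule deletes.
    kept = G.copy()
    added = np.zeros_like(G)
    for e, r in rules:
        kept &= ~e
        added = (added & ~e) | r
    return kept | added

rng = np.random.default_rng(2)
rules = [(rng.random((4, 4)) < 0.4, rng.random((4, 4)) < 0.3) for _ in range(3)]
M = rng.random((4, 4)) < 0.5
print(bool((apply_seq(rules, M) == composite_image(rules, M)).all()))  # → True
```

The agreement is again an entrywise Boolean identity (easily proved by induction on the number of rules), which is why a coherent sequence can be read as a single production.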
The first expression comprises those elements not deleted by any production, and the second is what some grammar rule adds and no production applied after it deletes.

Equation (5.10) is closely related to the composition of a sequence of productions as defined in Sec. 5.3, Prop. 5.3.4. This explains why it is possible to interpret a coherent sequence of productions as a single production. Recall that any sequence is coherent if the appropriate horizontal identifications are performed.

Fig. 5.6. Formulas (5.1) and (5.12) for Three Productions

The negation of the minimal initial digraph that appears in equation (5.11) – see Fig. 5.6 – can be explicitly calculated in terms of the operator nabla:

$$ \overline{M_n} = \nabla_1^{n-1}\!\left(\overline{L_x}\,r_y\right) \;\vee\; \bigwedge_{i=1}^{n}\overline{L_i}. \qquad (5.12) $$

For the sake of curiosity, if we used formula (5.8) to calculate the minimal initial digraph, the representation of its negation would be the one included in Fig. 5.7 for n = 3 and n = 4. It might be useful to find an expression using the operators ▽ and ∇ for these digraphs.

Fig. 5.7. Equation (5.8) for 3 and 4 Productions (Negation of MID)

5.2 Negative Initial Digraph

In this section we will make use of forbidden elements and the nihilation matrix K as introduced in Sec. 4.4. The negative initial digraph $K\left(s_n\right)$ for a coherent sequence $s_n = p_n;\ldots;p_1$ is the smallest simple digraph whose elements must not be found in the host graph in order to guarantee the applicability of $s_n$.[6] It is the concept symmetric to the minimal initial digraph, but for nihilation matrices.

Definition 5.2.1 (Negative Initial Digraph). Let $s_n = p_n;\ldots;p_1$ be a completed sequence. A negative initial digraph is a simple digraph containing all elements that can spoil any of the operations of $s_n$.

Negative initial digraphs depend on the way productions are completed (minimal initial digraphs too). In fact, as minimal and negative initial digraphs are normally calculated at the same time for a given sequence, there is a close relationship between them (in the sense that one conditions the other). This concept will be addressed again in Sec. 6.3, together with minimal initial digraphs and initial sets.

Let us introduce the notion dual to that of negative initial digraph:

$$ T = \overline{\,\overline{r}\otimes\overline{r}^{\,t}\,} \wedge \left(\overline{e}\otimes\overline{e}^{\,t}\right). \qquad (5.13) $$

T contains the newly available edges after the application of a production, due to the addition of nodes.[7] The first term, $\overline{\overline{r}\otimes\overline{r}^{\,t}}$, has a one in every edge incident to a vertex that is added by the production. We have to remove those edges that are incident to some node deleted by the production, which is what the factor $\overline{e}\otimes\overline{e}^{\,t}$ does.

Fig. 5.8. Available and Unavailable Edges After the Application of a Production

Example. Figure 5.8 depicts, to the left, a production q that deletes node 1 and adds node 3. Its nihil term and the image of that term are

$$ K_q = p\left(D\right) = r \vee \overline{e}\,D = \begin{pmatrix} 1&0&1\\ 1&0&1\\ 1&0&0 \end{pmatrix}, \qquad Q_q = p^{-1}\left(K\right) = e \vee \overline{r}\,K = \begin{pmatrix} 1&1&1\\ 1&0&0\\ 1&0&0 \end{pmatrix}. $$

To the right of Fig. 5.8, the matrix T is included. It specifies those elements that are no longer forbidden once production q has been applied. We will prove how the nihilation matrix evolves under a production in Sec. 9.2 – in particular in Prop. 9.2.5 on p. 217. As commented in Sec. 4.4 for the matrix D, notice that T does not specify actions of the production to be performed in the complement of the host graph, $\overline{G}$. Actions of productions are specified exclusively by the matrices e and r.

Theorem 5.2.2. Given a completed coherent sequence of productions $s_n = p_n;\ldots;p_1$, the negative initial digraph is given by the equation

$$ K\left(s_n\right) = \nabla_1^{n}\!\left(\overline{e_x}\,\overline{T_x}\,K_y\right). $$

[6] It is not possible to speak of applicability proper because we are not considering matches yet. This is just a way to introduce the concept intuitively.
(5.14)

[7] This is why T does not appear in the calculation of the coherence of a sequence: coherence takes care of real actions (e, r) and not of potential elements that may or may not be available (D, T).

Proof (Sketch). We prove the result taking into account the elements added by productions in the sequence, leaving dangling edges aside for now. The proof is similar to that of Theorem 5.1.2, which can be used to fill in the gaps. A more detailed proof can be found in [66].

Let us concentrate on what should not be found in the host graph, assuming that what a production adds is not interfered with by the actions of previous productions. Note that this is coherence, assumed by hypothesis. Consider for example the sequence $s_2 = p_2;p_1$. Coherence detects those elements added by both productions ($r_1 r_2 = 0$) and also whether p2 adds what p1 uses but does not delete ($r_2\,\overline{e_1}\,L_1 = 0$).[8] Hence, we need not care about them. The final part of the proof of Theorem 5.1.2 precisely addresses this point.

Now we proceed by induction. The case of one production, p1, considers the elements added by p1, i.e. $r_1$. For two productions, $s_2 = p_2;p_1$, besides what p1 rejects, what p2 is going to add cannot be found, except if p1 deleted it: $r_1 \vee r_2\,\overline{e_1}$. Three productions, $s_3 = p_3;p_2;p_1$, should reject what $s_2$ rejects and also what p3 adds and no previous production deletes: $r_1 \vee r_2\,\overline{e_1} \vee r_3\,\overline{e_2}\,\overline{e_1}$. We are using coherence here, because the case in which p1 deletes edge ε and p2 adds edge ε (we would have a problem if p3 also added ε) is ruled out. By induction we finally obtain:

$$ \nabla_1^{n}\!\left(\overline{e_x}\,r_y\right). \qquad (5.15) $$

Now, instead of considering as forbidden only those elements to be appended by a production (and not deleted by previous ones), any potential dangling edge[9] is also taken into account, i.e. $r_y$ can be substituted by $K_y$ (note that $\overline{e_\alpha}\,K_\alpha = K_\alpha$). To derive eq.

[8] This is precisely the part of coherence (equation (4.42)) not used in the proof of Theorem 5.1.2, the one for minimal initial digraphs: $\bigvee_{i=1}^{n} R_i^E\;\triangledown_{i+1}^{n}\!\left(\overline{e_x^E}\,r_y^E\right)$. Another reason for the naturalness of K.
(5.14), just put $\overline{T_x}$ for those edges that are available again. ∎

Footnote 8: This is precisely the part of coherence (equation 4.42) not used in the proof of Theorem 5.1.2, the one for minimal initial digraphs: $\bigvee_{i=1}^n R^E_i \, \triangledown_1^n \left( \overline{e^E_x}\, r^E_y \right)$. Another reason for the naturalness of $K$.

Footnote 9: Of course, edges incident to the nodes considered in the productions. There is no information at this point on edges provided by other nodes that might be in the host graph (at distance one from a node that is going to be deleted).

Example. Recall productions $q_1$ (Fig. 4.3 on p. 77) and $q_2$, $q_3$ (Fig. 4.4 on p. 81), reproduced in Fig. 5.9 for the reader's convenience. We will calculate the negative initial digraph for the sequence $s_3 = q_3; q_2; q_1$. Its minimal initial digraph can be found in Fig. 5.5 on p. 104.

Fig. 5.9. Productions $q_1$, $q_2$ and $q_3$ (Rep.)

Expanding equation (5.14) for $s_3$ we get:

$$K(s_3) = K_1 \vee \overline{e_1}\, K_2 \vee \overline{e_1}\, \overline{e_2}\, K_3. \qquad (5.16)$$

In Fig. 5.10 we have represented the negative graphs for the productions ($K_i$) and the graph $K$ for $s_3$. As there are quite a lot of arrows, if two nodes are connected in both directions a single bold arrow is used. The adjacency matrices (nodes ordered $[2\;4\;5\;3\;1]$) for the first three graphs are:

$$K_1 = \begin{pmatrix} 0&0&0&1&0 \\ 1&1&1&1&1 \\ 0&1&0&1&0 \\ 0&1&0&1&0 \\ 0&1&0&0&0 \end{pmatrix}; \quad K_2 = \begin{pmatrix} 0&1&0&0&0 \\ 0&0&0&0&0 \\ 1&0&0&0&0 \\ 1&0&0&0&0 \\ 0&0&0&0&0 \end{pmatrix}; \quad K_3 = \begin{pmatrix} 0&0&0&1&0 \\ 1&1&1&1&1 \\ 0&1&0&1&0 \\ 0&1&0&1&0 \\ 0&1&0&0&0 \end{pmatrix}.$$

The rest of the matrices and calculations are omitted for space reasons. Matrix $K$ provides information on what will be called internal $\varepsilon$-productions in Sec. 6.4. These $\varepsilon$-productions are grammar rules automatically generated to deal with dangling edges. We will distinguish between internal and external ones, internal (to the sequence) being those that deal with edges added by a previous production. As above, think of $\overline{G}$ as an "ambient graph" in which operations take place.
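For a single production, the negative graphs $K_i$ of the example can be assembled from the decomposition $K = r \vee \overline{e}\, D$ quoted in the remark that follows (edges added by the production, plus potential dangling edges it does not itself delete). A small sketch under that assumption, with our own function name and representation:

```python
# Nihilation matrix of one production, assuming K = r OR (not e) D,
# where D[i][j] = 1 iff edge (i, j) is incident to a deleted node.
def nihilation(r_E, e_E, e_N):
    n = len(e_N)
    K = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            D = e_N[i] | e_N[j]              # incident to a deleted node
            K[i][j] = r_E[i][j] | ((1 - e_E[i][j]) & D)
    return K

# Two-node production deleting node 0 and its outgoing edge (0, 1),
# and adding a self-loop (1, 1):
K = nihilation([[0, 0], [0, 1]], [[0, 1], [0, 0]], [1, 0])
```

Note that the edge $(0,1)$ explicitly deleted by the production is not forbidden (it may well be present in the host graph), while every other edge incident to node 0 is.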
A final remark is that $\overline{T}$ makes the number of edges in $\overline{G}$ as small as possible. For example, in $\overline{e_1}\, \overline{e_2}\, \overline{T_1}\, \overline{T_2}\, K_2$ we are in particular demanding $\overline{e_1}\, \overline{T_1}\, \overline{T_2}\, r_2$ (because $K_2 = r_2 \vee \overline{e_2} D_2$).

Fig. 5.10. NID for $s_3 = q_3; q_2; q_1$ (Bold = Two Arrows)

If we start with a compatible host graph, it is not necessary to ask for the absence of edges incident to nodes added by a production (potentially available edges). Notice that such edges cannot be in the host graph: they would either be dangling edges, or we would be adding an already existing node. Summarizing, if compatibility is assumed or demanded by hypothesis, we may safely ignore $\overline{T_x}$ in the formula for the initial digraph. This remark will be used in the proof of the G-congruence characterization theorem in Sec. 7.1.

5.3 Composition and Compatibility

Next we are going to introduce compatibility for sequences (extending Definition 4.1.5) and also composition. Composition defines a unique production that, to a certain extent (footnote 10), performs the same actions as its corresponding sequence (the one that defines it).

Footnote 10: If a production inside a sequence deletes a node and afterwards another production adds that same node, the overall effect is that the node is not touched. This may affect the deletion of dangling edges in an actual host graph (those incident to some node not appearing in the productions).

Recall that compatibility is a means to deal with dangling edges, equivalent to the dangling condition in DPO. When a concatenation of productions is considered, we are concerned not only with the final result but also with intermediate states (partial results) of the sequence. Compatibility should take this into account, and thus a concatenation is said to be compatible if the overall effect on its minimal initial digraph gives as result
a compatible digraph, starting from the first production and increasing the sequence until we get the full concatenation. We should then check compatibility for the growing sequence of concatenations $S = \{ s_1, s_2, \ldots, s_n \}$, where $s_m = q_m; q_{m-1}; \ldots; q_1$, $1 \leq m \leq n$.

Definition 5.3.1. A coherent sequence $s_n = q_n; \ldots; q_1$ is said to be compatible if the following identity is verified:

$$\bigvee_{m=1}^{n} \left[ \left( s_m\!\left(M^E_m\right) \vee s_m\!\left(M^E_m\right)^t \right) \odot \overline{s_m\!\left(M^N_m\right)} \right] = 0. \qquad (5.17)$$

Corollary 5.1.3 (equations (5.10) and (5.11)) gives closed-form formulas for the terms in (5.17). Of course, this definition coincides with Def. 4.1.5 for one production and with Def. 2.3.2 for the case of a single graph (consider the identity production, for example).

Coherence examines whether the actions specified by a sequence of productions are feasible. It warns us if one production adds or deletes an element that it should not, as some later production might need that element to carry out an operation that would become impossible. Compatibility is a more basic concept because it examines whether the result is a digraph, that is, whether the class of all digraphs is closed under the operations specified by the sequence.

Fig. 5.11. Minimal Initial Digraphs for $s_2 = q_2; q_1$

Example. Consider the sequence $s_3 = q_3; q_2; q_1$, with the $q_i$ as defined in Figs. 4.3 and 4.4 on pp. 77 and 81, respectively. In order to check equation (5.17) we need the minimal initial digraphs $M_1$ (the LHS of $q_1$), $M_{21}$ (which coincides with the LHS of $q_1$) and $M_{321}$, which can be found in Figs. 5.11 and 5.12 on p. 116. Equation (5.17) for $m = 1$ is compatibility of production $q_1$, which has been calculated in the example on p. 77. For $m = 2$ we have

$$\left( s_2\!\left(M^E_{21}\right) \vee s_2\!\left(M^E_{21}\right)^t \right) \odot \overline{s_2\!\left(M^N_{21}\right)} \qquad (5.18)$$

which should be zero, with the nodes ordered as before, $[2\;3\;5\;1\;4]$.
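The compatibility condition of Def. 5.3.1 reduces, for each partial sequence, to the single-graph check of Def. 2.3.2: no edge, in either direction, may be incident to a node that is absent. A sketch of that basic check (function name ours, Boolean list-of-lists representation assumed):

```python
# (M, N) is compatible iff every edge of M, read in both directions,
# joins two nodes present in the node vector N: the Boolean reading of
# (M or M^t) applied to the negated node vector being zero.
def is_compatible(M, N):
    n = len(N)
    return all(not ((M[i][j] or M[j][i]) and (not N[i] or not N[j]))
               for i in range(n) for j in range(n))

ok  = is_compatible([[0, 1], [0, 0]], [1, 1])   # edge between present nodes
bad = is_compatible([[0, 1], [0, 0]], [1, 0])   # edge into a missing node
```

The worked example for $m = 2$ below does exactly this, with $M$ the partially rewritten minimal initial digraph and $N$ the evolved node vector.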
The evolution of the vector of nodes is

$$[1\;0\;1\;0\;1] \xrightarrow{\;q_1\;} [1\;1\;1\;0\;0] \xrightarrow{\;q_2\;} [1\;1\;1\;0\;1].$$

Making all the substitutions according to the values displayed in Fig. 5.11 we obtain:

$$(5.18) = \left[ \begin{pmatrix} 0&0&1&0&1 \\ 1&1&0&0&0 \\ 1&0&0&0&0 \\ 0&0&0&0&0 \\ 0&0&0&0&0 \end{pmatrix} \vee \begin{pmatrix} 0&1&1&0&0 \\ 0&1&0&0&0 \\ 1&0&0&0&0 \\ 0&0&0&0&0 \\ 1&0&0&0&0 \end{pmatrix} \right] \odot \begin{pmatrix} 0\\0\\0\\1\\0 \end{pmatrix} = \begin{pmatrix} 0\,|\,2 \\ 0\,|\,3 \\ 0\,|\,5 \\ 0\,|\,1 \\ 0\,|\,4 \end{pmatrix}$$

As commented above, we can make use of identities (5.10) and (5.11). The case $m = 3$ is very similar to $m = 2$. There is another example below (on p. 115) with the graphical evolution of the states of the system.

Once we have seen compatibility for a sequence, the following corollary to Theorems 5.1.2 and 5.2.2 can be stated:

Corollary 5.3.2. Let $M$ be the minimal initial digraph and $K$ the corresponding negative initial digraph for a coherent and compatible sequence. Then $M \wedge K = 0$.

Proof. Just compare the equations $M = \nabla_1^n \left( \overline{r_x}\, L_y \right)$ and $K = \nabla_1^n \left( \overline{e_x}\, \overline{T_x}\, K_y \right)$. We know that the elements added and deleted by a production are disjoint. This implies that the negations of the corresponding adjacency matrices have no common elements. ∎

Intuitively, if we interpret the matrices $M$ and $K$ as the elements that must and must not be present in a potential host graph in order to apply the sequence, then it should be clear that $L_i$ and $K_i$ must also be disjoint. This point will be addressed in Chap. 8. The next proposition is a sort of converse to Corollary 5.3.2.

Proposition 5.3.3. Let $s = p_n; \ldots; p_1$ be a sequence consisting of compatible productions. If

$$\triangledown_1^n \left( \overline{e_x}\, r_x\, M(s_y)\, K(s_y) \right) = 0 \qquad (5.19)$$

then $s$ is compatible, where $M(s_m)$ and $K(s_m)$ are the minimal and negative initial digraphs of $s_m = p_m; \ldots; p_1$, $m \in \{1, \ldots, n\}$.

Proof (sketch). Equation (5.19) is a restatement of the definition of compatibility for a sequence of productions.
The condition appears when the certainty and nihil parts are demanded to have no common elements. Compatibility of each production is used to simplify terms of the form $L_i K_i$. ∎

As happened with coherence (and will happen with graph congruence in Sec. 7.1), eq. (5.19) for compatibility provides information on which elements may prevent it. Compatibility and coherence are related notions, but only to some extent: coherence deals with the actions of productions, while compatibility deals with the potential presence or absence of elements.

So far we have presented compatibility; we will end this section studying composition and the circumstances under which it is possible to define a single production out of a given coherent concatenation. When we introduced the notion of production, we first defined its LHS and RHS and then associated some matrices ($e$ and $r$) with them. The situation for defining composition is similar, but this time we first observe the overall effect of the production (its dynamics, i.e. the matrices $e$ and $r$) and then decide its left and right hand sides.

Assume $s_n = p_n; \ldots; p_1$ is coherent; then the composition of its productions is again a production, defined by the rule $c = p_n \circ p_{n-1} \circ \cdots \circ p_1$ (footnote 11). The description of its erasing and addition matrices $e$ and $r$ is given by the equations:

$$S^E = \sum_{i=1}^{n} \left( r^E_i - e^E_i \right) \qquad (5.20)$$

$$S^N = \sum_{i=1}^{n} \left( r^N_i - e^N_i \right). \qquad (5.21)$$

Footnote 11: The concept and notation are those commonly used in mathematics.

Due to coherence we know that the elements of $S^E$ and $S^N$ are either $-1$, $0$ or $1$, so they can be split into their positive and negative parts,

$$S^E = r^E - e^E, \qquad S^N = r^N - e^N, \qquad (5.22)$$

where all elements of the $r$ and $e$ matrices are either zero or one. We have:

Proposition 5.3.4. Let $s_n = p_n; \ldots; p_1$ be a coherent and compatible concatenation of productions. Then the composition $c = p_n \circ p_{n-1} \circ \cdots \circ$
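Equations (5.20) to (5.22) can be exercised directly: sum the signed effects and split the result. In the sketch below (representation ours, innermost production first as in the text), a delete-then-add pair cancels out, as the discussion of preserved elements in this section predicts:

```python
# Composition of a coherent sequence: S = sum of (r_i - e_i), with entries
# in {-1, 0, 1} by coherence, split into positive (r) and negative (e) parts.
def compose(prods):
    """prods: list of (e, r) Boolean matrices, innermost production first."""
    n = len(prods[0][0])
    S = [[0] * n for _ in range(n)]
    for e, r in prods:
        for i in range(n):
            for j in range(n):
                S[i][j] += r[i][j] - e[i][j]
    r_c = [[int(S[i][j] > 0) for j in range(n)] for i in range(n)]
    e_c = [[int(S[i][j] < 0) for j in range(n)] for i in range(n)]
    return e_c, r_c

# p1 deletes edge (0,1); p2 adds (0,1) back and also adds (1,0).
E01 = [[0, 1], [0, 0]]
Z   = [[0, 0], [0, 0]]
R2  = [[0, 1], [1, 0]]
e_c, r_c = compose([(E01, Z), (Z, R2)])
```

Only the net additions and net deletions survive in the composition; the cancelled element becomes a preserved one, which is why it still shows up in the composition's LHS.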
$p_1$ defines a production with the matrices $r^E$, $r^N$ and $e^E$, $e^N$ of (5.22).

Proof. Follows from the comments above. The LHS is the minimal digraph necessary to carry out all the operations specified by the composition (plus the elements preserved by the productions). As it is a single production, its LHS equals its erasing matrix plus the preserved elements, and its right hand side is just the image. ∎

The concept of composition is closely related to the formula which outputs the image of a compatible and coherent sequence; refer to Corollary 5.1.3. Note that preserved elements do depend on the order of the productions in the sequence. For example, the sequence $s_3 = p_3; p_2; p_1$ first preserves (it appears in $L_1$ and $R_1$), then deletes ($p_2$) and finally adds ($p_3$) an element $\alpha$. This element is necessary in order to apply $s_3$. However, the permutation $p'_3 = p_2; p_1; p_3$ first adds $\alpha$, then preserves it and finally deletes it; it cannot be applied if the element is present.

Corollary 5.3.5. With the notation as above, $c(M_n) = s_n(M_n)$.

Composition is helpful when we have a coherent concatenation and intermediate states are useless or undesired. It will be utilized for sequential independence and explicit parallelism (Secs. 7.2 and 7.4).

Example. We finish this section by considering the sequence $s_3 = q_3; q_2; q_1$ again, calculating its composition $c_3$ and comparing its result with that of $s_3$. Recall that $S^E(s_3) = \sum_{i=1}^{3} \left( r^E_i - e^E_i \right) = r^E - e^E$. With rows ordered $[2\;3\;5\;1\;4]$,

$$\sum_{i=1}^{3} r^E_i = \begin{pmatrix} 1&1&0&0&1 \\ 1&1&0&0&0 \\ 1&1&1&0&0 \\ 0&0&0&0&0 \\ 0&0&0&0&0 \end{pmatrix}; \qquad \sum_{i=1}^{3} e^E_i = \begin{pmatrix} 0&1&0&0&1 \\ 1&0&0&0&0 \\ 1&0&1&0&0 \\ 0&0&0&1&0 \\ 0&0&0&0&0 \end{pmatrix}.$$

Fig. 5.12. Composition and Concatenation of a non-Compatible Sequence

$$S^E(s_3) = \begin{pmatrix} 1&0&0&0&0 \\ 0&1&0&0&0 \\ 0&1&0&0&0 \\ 0&0&0&-1&0 \\ 0&0&0&0&0 \end{pmatrix} = \begin{pmatrix} 1&0&0&0&0 \\ 0&1&0&0&0 \\ 0&1&0&0&0 \\ 0&0&0&0&0 \\ 0&0&0&0&0 \end{pmatrix} - \begin{pmatrix} 0&0&0&0&0 \\ 0&0&0&0&0 \\ 0&0&0&0&0 \\ 0&0&0&1&0 \\ 0&0&0&0&0 \end{pmatrix} = r^E - e^E.$$
The sequence $s_3$ has been chosen not only to illustrate composition, but also compatibility and the sort of problems that may arise if it is not fulfilled. In this case $q_3$ deletes node 3 and edge $(3,2)$, but does not specify anything about edges $(3,3)$ and $(3,5)$ (the red dotted elements in Fig. 5.12). In order to apply the composition, either the composed production is changed by considering these elements, or the elements have to be related in some other way (in this case, unrelated).

The previous example provides some clues on how the match could be defined. The basic idea is to introduce an operator over the set of productions so that, once a match identifies a place in the host graph where the rule might be applied, the operator modifies the rule by enlarging its deletion matrix. This way no dangling edge appears (it should enlarge the grammar rule to include the context of the original rule in the graph, adding all elements to both the LHS and the RHS). In essence, a match should be an injective morphism (in Matrix Graph Grammars) plus an operator. Pre-calculated information for coherence, sequentialization and the like should help, and hopefully reduce the amount of calculation during runtime. We will study this in Chap. 6.

This section ends by noting that, in Matrix Graph Grammars, one production is a morphism between two simple digraphs and thus may carry out just one action on each element. When the composition of a concatenation is performed we get a single production. If one production specifies the deletion of an element and another its addition, the overall mathematical result of the composition should leave the element unaltered.
When a match is considered, depending on the chosen approach, all dangling edges incident to the erased nodes should be removed, establishing an important difference between a sequence and its composition.

5.4 Summary and Conclusions

Minimal and negative initial digraphs are of fundamental importance: they provide the minimal (respectively, maximal) set of elements that must (respectively, must not) be found in order to apply the sequence under consideration. In particular they will be used to give one characterization of the applicability problem (Problem 1). Composition, and the main differences between it and concatenation, have also been addressed. Composition can be a useful tool for studying concurrency. Recall from Sec. 5.3 that differences in the image of the composition are due not to the order in which operations are performed but to the elements needed by the productions, i.e. to the initial digraph. This also gives information on initial digraphs and their calculation. This topic (which we call G-congruence) will be addressed in deeper detail in Sec. 7.1.

So far we have developed some analytical techniques that are independent (to some extent) of the initial state of the system to which the grammar rules will be applied. This allows us to obtain information about the grammar rules themselves, for example at design time. This information may be useful during runtime. We will return to this point in future chapters.

Chapter 6 starts with the semantics of a grammar rule application, so a host graph or initial state will be considered. Among other things, the fundamental concept of direct derivation is introduced. We will see what can be recovered of what we have developed so far, and how it can be used.

6 Matching

There are two fundamental parts in a grammar: the actions to be performed in every single step (grammar rules), and where these actions are to be performed in a system (matching).
The previous chapter dealt with the former; this chapter deals with the latter. Restrictions on the applicability of rules and their embedding in the host graph also need to be addressed; this topic is studied in Chap. 8.

If a rule is applied we automatically have the pair (production, match), normally called a direct derivation, which in essence specifies what to do and where to do it. If instead of a single rule we consider a sequence with its corresponding matches, then we speak of a derivation. These initial definitions, together with matching, are studied in Sec. 6.1, in which we make use of some functional analysis notation (see Sec. 2.5).

When a match is considered, there is the possibility that a new production (a so-called $\varepsilon$-production) is concatenated to the original one (footnote 1). Both productions must be applied (matched) to the same nodes. The mechanism to obtain this effect, marking, can be found in Sec. 6.2.

An important issue is to study to what extent the notions introduced at specification time (coherence, composition, etc.) can be recovered when a host graph is considered. They will be revisited, considering minimal and negative initial digraphs (see Secs. 5.1 and 5.2) in a wider context, in Sec. 6.3. A classification of $\varepsilon$-productions (helpful in Chap. 10) is given in Sec. 6.4. The chapter ends with a summary in Sec. 6.5.

Footnote 1: $\varepsilon$-productions take care of those edges (dangling edges) not specified by the production and incident to some node that is going to be deleted.

6.1 Match and Extended Match

Matching is the operation of identifying the LHS of a rule inside a host graph. This identification is not necessarily unique, and so becomes one source of non-determinism (footnote 2). The match can be considered as one of the ways of completing $L$ with respect to $G$.

Definition 6.
1.1 (Match). Given a production $p : L \to R$ and a simple digraph $G$, any tuple $m = (m_L, m_K)$ is called a match (for $p$ in $G$), with $m_L : L \to G$ and $m_K : K^E \to \overline{G}^E$ total injective morphisms. Besides,

$$m_L(n) = m_K(n), \qquad \forall n \in L^N. \qquad (6.1)$$

The two main differences with respect to matches as defined in the literature are that Def. 6.1.1 demands the non-existence of potentially problematic elements, and that $m$ must be injective.

It is useful to consider the structure defined by the negation of the host graph, $\overline{G} = \left( \overline{G^E}, \overline{G^N} \right)$. It is made up of the graph $\overline{G^E}$ and the vector of nodes $\overline{G^N}$. Note that the negation of a graph (both the adjacency matrix and the node vector) is not a graph, because in general compatibility will fail. Of course, the adjacency matrix $\overline{G^E}$ alone does define a graph.

The negation of a graph is equivalent to taking its complement. In general this complement will be taken inside some "bigger graph", normally constructed by performing the completion with respect to the other graphs involved in the operations. For example, when checking whether a graph $A$ is in $\overline{G^E}$ (suppose that $A$ has a node that is not in $G$), we obtain that $A$ cannot be found in $\overline{G^E}$ unless $G^E$ is previously completed with that node and all its incident edges. Notice that the negation of a graph $G$ coincides with its complement. It would probably be more appropriate to keep the negation symbol (the overline) when there is no completion (in other words, when the complement is taken with respect to the graph itself) and use $G^c$ when other graphs are involved. From now on the overline will be used in all cases; this abuse of notation should not be confusing.

Footnote 2: In fact there are two sources of non-determinism. Apart from the one already mentioned, the rule to be applied is also chosen non-deterministically.
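The $m_L$ part of Def. 6.1.1 (a total injective morphism $L \to G$) can be found by brute force over adjacency matrices. This is only an illustrative sketch, exponential in the number of nodes, and the names are ours:

```python
# Every injective node map of L into G under which each edge of L lands
# on an edge of G. itertools.permutations(range(nG), nL) enumerates the
# injective assignments directly.
from itertools import permutations

def matches(L, G):
    nL, nG = len(L), len(G)
    found = []
    for img in permutations(range(nG), nL):
        if all(not L[i][j] or G[img[i]][img[j]]
               for i in range(nL) for j in range(nL)):
            found.append(img)
    return found

# L: a single edge 0 -> 1.  G: the path 0 -> 1 -> 2.
L = [[0, 1], [0, 0]]
G = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]
ms = matches(L, G)
```

The two matches found here are exactly the non-determinism mentioned in the text: the same rule side fits the host graph in more than one place.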
Next, a notion of direct derivation is presented that covers not only the elements that must be present ($L$) but also those that should not appear ($K$). This extends the concept of derivation found in the literature, which only considers positive information explicitly.

Fig. 6.1. Production Plus Match (Direct Derivation)

Definition 6.1.2 (Direct Derivation). Given a production $p : L \to R$ as in Fig. 6.1 and a match $m = (m_L, m_K)$, $d = (p, m)$ is called a direct derivation with result $H = p^*(G)$ if the square is a pushout:

$$m^*_L \left( p(L) \right) = p^* \left( m_L(L) \right). \qquad (6.2)$$

The standard notation in this case is $G \overset{(p,m)}{\Longrightarrow} H$, or even $G \Longrightarrow H$ if $p$, $m$ or both are not relevant. We will see below that it is not necessary to rely on category theory to define direct derivations in Matrix Graph Grammars; it is included to ease comparison with the DPO and SPO approaches.

Figure 6.1 displays a production $p$ and a match $m$ for $p$ in $G$. It is possible to close the diagram, making it commutative ($p^* \circ m = m^* \circ p$), using the pushout construction (see [22]) on the category $\mathbf{Graph}^P$ of simple digraphs and partial functions. This categorical construction for relational graph rewriting is carried out in [52]. See Sec. 3.6 for a quick overview of the relational approach (footnote 3).

Footnote 3: There is a slight difference, though, as we have a simpler case. We demand matches to be injective which, by Prop. 2.6 in [52], implies that comatches are injective.

If a concatenation $s = p_n; \ldots; p_1$ is considered together with the set of matches $m = \{ m_1, \ldots, m_n \}$, then $d = (s, m)$ is a derivation. In this case the notation $G \overset{*}{\Longrightarrow} H$ is used.

When applying a rule to a host graph, the main problem to concentrate on is that of the so-called dangling edges, which is addressed differently in DPO and SPO (see Secs. 3.1 and 3.2).
In DPO, if an edge would become dangling then the rule is not applicable for that match. SPO allows the production to be applied, deleting any dangling edge. For Matrix Graph Grammars we propose an SPO-like behaviour, as in our case a DPO-like behaviour (footnote 4) would be a particular case if compatibility is considered as an application condition (see Chap. 8) (footnote 5).

Footnote 4: In future sections we will speak of fixed and floating grammars, respectively.

Footnote 5: If $\varepsilon$-productions are not allowed and a rule can be applied only if the output is again a simple digraph (compatibility), then we obtain a DPO-like behaviour.

Fig. 6.2. (a) Neighborhood. (b) Extended Match

Figure 6.2 shows our strategy to handle dangling edges:

1. Complete $L$ with respect to $G$ ($c$ and $c'$ to the left of Fig. 6.2). It is necessary to match $L$ in $G$ to this end (footnote 6).
2. Morphism $m_L$ identifies the rule's left hand side (after completion) in the host graph.
3. A neighborhood of $m(L) \subseteq G$ covering all relevant extra elements is selected, taking into account all dangling edges not considered by the match $m_L$, together with their corresponding source and target nodes. This is performed by a morphism to be studied later, represented by $m_\varepsilon$.
4. Finally, $p$ is enlarged so as to erase any potential dangling edge. This is carried out by an operator that we will write as $T_\varepsilon$; see the definition below on p. 125.

Footnote 6: Abusing notation a little, graphs before completion and after completion are represented with the same letters, $L$ and $R$.

The order of the previous steps is important, as potential dangling elements must be identified and erased before any node is deleted by the original rule. The coproduct in Fig. 6.2 should be understood as a means to couple $L$ and $G$.
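Step 3 of the strategy amounts to collecting the edges of $G$ that touch the image of a node scheduled for deletion but are not themselves matched. A sketch over explicit edge sets (our representation, not the book's Boolean matrices):

```python
# Gamma: the potential dangling edges of a match, as a set of (src, dst)
# pairs over host-graph node ids.
def dangling(G_edges, matched_edges, deleted_nodes):
    return {(a, b) for (a, b) in G_edges
            if (a in deleted_nodes or b in deleted_nodes)
            and (a, b) not in matched_edges}

# G has edges (1,2), (2,1), (2,3); the match covers only (1,2); the
# production is going to delete (the image of) node 1.
gamma = dangling({(1, 2), (2, 1), (2, 3)}, {(1, 2)}, {1})
```

Everything in this set, plus its endpoint nodes, is what the morphism $m_\varepsilon$ adds to the rule's left hand side before any deletion takes place.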
The existence of a morphism $p^*$ closing the top square on the right of Fig. 6.2 is not guaranteed. This is where $m_\varepsilon$ comes in. This mapping, as explained in point 3 above, extends the production to consider any edge at distance 1 from the nodes appearing in the left hand side of $p$ (footnote 7). Note that if it is possible to define $p^*$ (to close the square) then $m_\varepsilon$ is the identity, and vice versa. In other words, if there are no dangling edges then it is possible to make the top square in Fig. 6.1 commute, and hence it is not necessary to carry out any production "continuation". The converse is also true.

Fig. 6.3. Match Plus Potential Dangling Edges

Footnote 7: The idea may resemble analytic continuation in complex variables, where a function defined in a smaller domain is uniquely extended to a larger one.

Let there be given a production $p : L \to R$, a host graph $G$ and a match $m : L \to G$. The graph $\Gamma$ is the set of dangling edges together with their source and target nodes. Abusing notation a little (justified by the pushout construction in Fig. 6.3), we will write $\Gamma \cup m(L)$ for the graph consisting of the image of $L$ by the match plus its potential dangling edges (and any incident node). Recall the definition of the nihilation matrix, especially Lemma 4.4.2.

Definition 6.1.3 (Extended Match). With the notation as above (refer also to Fig. 6.2), the extended match $m_G : L \to G$ is a morphism with image $\Gamma \cup m(L)$.

As commented above, the coproduct in Fig. 6.2 is used just for coupling $L$ and $G$, the first being embedded into the second by the morphism $m_L$. We will use the notation

$$\underline{L} \overset{\mathrm{def}}{=} m_G(L) \overset{\mathrm{def}}{=} \left( m_\varepsilon \circ m \right)(L) \qquad (6.3)$$

when the image of the LHS is extended with its potential dangling edges; i.e. extended digraphs are underlined and defined by composing $m$ and $m_\varepsilon$ (footnote 8).

Fig. 6.4. Matching and Extended Match

Example.
Consider the digraph $L_1$, the host graph $G$ and the match morphism depicted to the left of Fig. 6.4. On the top right side of the same figure $m_1(L_1)$ is drawn, and $m_G(L_1)$ on the bottom right side. Nodes 2 and 3 and edges $(2,1)$ and $(2,3)$ have been added to $m_G(L)$; they would become dangling in the image "graph" of $G$ by $p_1$ (as it cannot be defined, it has been drawn shadowed). This is why $p^*_1$ cannot be defined: node $(1{:}C)$ would be deleted, but not edges $(1{:}C, 2{:}C)$ nor $(1{:}C, 1{:}S)$, so $H_1$ would not be a digraph. As commented above, the composition is performed because $m_1$ and $m_{\varepsilon,1}$ are functions between Boolean matrices that have been completed.

Footnote 8: There is a notational trick here, where "continuation" is represented as the composition of morphisms $(m_L \circ m_\varepsilon)$. This is not correct unless, as explained in Sec. 4.2, the matrices are completed. Recall that completion extends the domain of morphisms (interpreting matrices as morphisms between digraphs). This is precisely step 1 on p. 122.

Actually, it is not necessary to rely on category theory to define direct derivations. The basic idea is given precisely by that of analytic continuation. What the morphism $m_\varepsilon$ does is to extend the left hand side of the production, i.e. it adds elements to $L$. As matches are total functions, they cannot delete elements (nodes or edges), in contrast to productions. Hence, a match can be seen as a particular type of production with left hand side $L$ and right hand side $G$. The LHS of the production is enlarged with any potential dangling edge, and the same happens for the RHS except for the edges incident to nodes deleted by the production (as they are not added to its RHS, these edges will be deleted). This way, a direct derivation would be

$$H = p^* \left( m(L) \right).$$
(6.4)

Advancing some material from the next section, $m$ is essentially used to mark the nodes on which $p$ acts. Production $p$ is the identity on almost every element, except on the nodes (edges) marked by $m$ (footnote 9).

Footnote 9: Note that $p$'s erasing and addition matrices, although as big as the entire system state (probably huge), would be zero almost everywhere.

The rest of the section is devoted to the interpretation of this "continuation technique" as a production, in particular that of $m_\varepsilon$. Once we are able to complete the rule's LHS we have to do the same with the rest of the rule. To this end we define an operator $T_\varepsilon : \mathfrak{G} \to \mathfrak{G}'$, where $\mathfrak{G}$ is the original grammar and $\mathfrak{G}'$ is the grammar once $T_\varepsilon$ has modified the production. In words, $T_\varepsilon$ extends production $p$ so that $T_\varepsilon(p)$ has the same effect as $p$ but also deletes any dangling edge.

The notation that we use from now on is borrowed from functional analysis (see Sec. 2.5). Bringing this notation to graph grammar rules, a rule is written $R = \langle L, p \rangle$ (separating the static and dynamic parts of the production), while the grammar rule transformation including matching is:

$$\underline{R} = \left\langle m_G(L),\, T_\varepsilon(p) \right\rangle. \qquad (6.5)$$

Proposition 6.1.4. With the notation as above, production $p$ can be extended to consider any dangling edge, $\underline{R} = \langle m_G(L), T_\varepsilon(p) \rangle$.

Proof. What we do is to split the identity operator in such a way that any problematic element is taken into account (erased) by the production. In some sense, we first add elements to $p$'s LHS and afterwards enlarge $p$ to delete them. Otherwise stated, $m_G \sim T_\varepsilon^{-1}$ and $T_\varepsilon \sim m_G^{-1}$, so we have:

$$R = \langle L, p \rangle = \left\langle L,\, T_\varepsilon^{-1} T_\varepsilon\, p \right\rangle = \left\langle m_G(L),\, T_\varepsilon(p) \right\rangle = \underline{R}.$$

The equality $R = \underline{R}$ is valid only for edges, as $\underline{R}^N$ has the source and target nodes of the dangling edges. ∎

The effect of a match can be interpreted as a new production concatenated to the original production.
Let $p_\varepsilon$ be the production induced by $T_\varepsilon$:

$$\underline{R} = \left\langle m_G(L),\, T_\varepsilon(p) \right\rangle = \left\langle T_\varepsilon\!\left( m_G(L) \right),\, p \right\rangle = p \left( T_\varepsilon\!\left( m_G(L) \right) \right) = \left( p;\, p_\varepsilon;\, m_G \right)(L) = \left( p;\, p_\varepsilon \right)\!\left( \underline{L} \right). \qquad (6.6)$$

Production $p_\varepsilon$ is the $\varepsilon$-production associated to production $p$. Its aim is to delete potential dangling edges. The dynamic definition of $p_\varepsilon$ is given in (6.7) and (6.8) below.

Taking the match into account can thus be interpreted as a temporary modification of the grammar, so it can be said that the grammar modifies the host graph and the host graph interacts with the grammar (altering it temporarily). If we think of $m_G$ and $T_\varepsilon$ as productions applied to $L$ and $m_G(L)$ respectively, it is necessary to specify their erasing and addition matrices. To this end, recall the matrix $D$ defined in Lemma 4.4.2, with the elements in row $i$ and column $i$ equal to one if node $i$ is to be erased by $p$ and zero otherwise, which considers any potential dangling edge.

For $m_G$ we have that $e^N = e^E = 0$ and $r = \underline{L}\, \overline{L}$ (for both nodes and edges), as this production has to add the elements of $\underline{L}$ that are not present in $L$. Let $p_\varepsilon = \left( e^E_{T_\varepsilon}, r^E_{T_\varepsilon}, e^N_{T_\varepsilon}, r^N_{T_\varepsilon} \right)$; then

$$e^N_{T_\varepsilon} = r^E_{T_\varepsilon} = r^N_{T_\varepsilon} = 0 \qquad (6.7)$$

$$e^E_{T_\varepsilon} = D \wedge \overline{L^E}. \qquad (6.8)$$

Example. Consider the rules depicted in Fig. 6.5, in which serverDown is applied to model a server failure. We have:

$$e^E = r^E = L^E = (0\;1); \qquad e^N = (1\;1), \quad r^N = (0\;1), \quad L^N = (1\;1); \qquad R^E = R^N = \emptyset.$$

Once $m_G = \left( L^E, L^N, r^E, 0, 0, 0 \right)$ and the operator $T_\varepsilon$ have been applied, giving rise to $p_\varepsilon = \left( L^E, L^N, 0, 0, e^E_{T_\varepsilon}, 0 \right)$, the resulting matrices are:

$$r^E = \begin{pmatrix} 0&0&0 \\ 1&0&0 \\ 1&0&0 \end{pmatrix}, \quad \underline{L}^E = \begin{pmatrix} 0&0&0 \\ 1&0&0 \\ 1&0&0 \end{pmatrix}, \quad \underline{R}^E = \begin{pmatrix} 0&0 \\ 0&0 \end{pmatrix}, \quad e^E_{T_\varepsilon} = \begin{pmatrix} 0&0&0 \\ 1&0&0 \\ 1&0&0 \end{pmatrix},$$

where the ordering of nodes is $[1{:}S,\; 1{:}C,\; 2{:}C]$ for the matrices $r^E$, $\underline{L}^E$ and $e^E_{T_\varepsilon}$, and $[1{:}C,\; 2{:}C]$ for $\underline{R}^E$. Matrix $r^E$, besides the edges added by the production, specifies those to be added by $m_G$ to the LHS in order to consider any potential dangling edge (in this case $(1{:}C, 1{:}S)$ and $(2{:}C, 1{:}S)$).
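Equation (6.8) is directly computable. The sketch below builds $D$ from the deleted-node vector and evaluates $e^E_{T_\varepsilon} = D \wedge \overline{L^E}$; the optional intersection with $G^E$ is our own addition for the demo, keeping only the dangling edges actually present in the host graph, and it reproduces the serverDown matrices of the example:

```python
# Erasing matrix of the epsilon-production (eq. (6.8)): potential dangling
# edges (incident to a node deleted by p) about which the original LHS
# says nothing. Optionally restricted to the host graph's actual edges.
def epsilon_erase(e_N, L_E, G_E=None):
    n = len(e_N)
    out = [[(e_N[i] | e_N[j]) & (1 - L_E[i][j]) for j in range(n)]
           for i in range(n)]
    if G_E is not None:
        out = [[out[i][j] & G_E[i][j] for j in range(n)] for i in range(n)]
    return out

# serverDown-like setting, nodes ordered [1:S, 1:C, 2:C]: the server
# (index 0) is deleted, the completed L^E has no edges, and G contains
# the client-to-server edges (1, 0) and (2, 0).
Z3  = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
G_E = [[0, 0, 0], [1, 0, 0], [1, 0, 0]]
e_eps = epsilon_erase([1, 0, 0], Z3, G_E)
```

The result is exactly the $e^E_{T_\varepsilon}$ displayed above: the two client-to-server edges that would dangle when the server node disappears.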
As neither $m_G$ nor the production serverDown deletes any element, $e^E = 0$. Finally, $p_\varepsilon$ removes all potential dangling edges (check matrix $e^E_{T_\varepsilon}$) but does not add any, so $r^E_{T_\varepsilon} = 0$. The vectors for nodes have been omitted.

Fig. 6.5. Full Production and Application

Let $T^*_\varepsilon = \left( T^{*N}_\varepsilon, T^{*E}_\varepsilon \right)$ be the adjoint operator of $T_\varepsilon$. We will end this section by giving an explicit formula for $T^*_\varepsilon$. Define $e^E_\varepsilon$ and $r^E_\varepsilon$ respectively as the erasing and addition matrices of $T_\varepsilon(p)$. It is clear that $r^E_\varepsilon = r^E$ and $e^E_\varepsilon = e^E \vee D\, \overline{L^E}$, so

$$\underline{R}^E = \left\langle \underline{L}^E,\, T_\varepsilon(p) \right\rangle = r^E_\varepsilon \vee \overline{e^E_\varepsilon}\; \underline{L}^E = r^E \vee \overline{e^E \vee D\, \overline{L^E}}\; \underline{L}^E = r^E \vee \left( \overline{D} \vee L^E \right) \overline{e^E}\; \underline{L}^E = r^E \vee \overline{e^E}\, \overline{D}\; \underline{L}^E. \qquad (6.9)$$

The previous identities show that $\underline{R}^E = \left\langle \underline{L}^E,\, T^{*E}_\varepsilon\!\left(p^E\right) \right\rangle = \left\langle \overline{D}\; \underline{L}^E,\, p^E \right\rangle$, which proves the identity:

$$T^*_\varepsilon = \left( T^{*N}_\varepsilon, T^{*E}_\varepsilon \right) = \left( \mathrm{id},\, \overline{D} \right). \qquad (6.10)$$

Summarizing, when a match $m$ is considered for a production $p$, the production itself is first modified in order to take all potential dangling edges into account. Morphism $m$ is automatically transformed into a match which is free from any dangling element and, in a second step, a pre-production $p_\varepsilon$ is appended to form the concatenation (footnote 10)

$$\underline{p} = p;\, p_\varepsilon. \qquad (6.11)$$

Footnote 10: It is also possible to define it as the composition $\underline{p} = p \circ p_\varepsilon$.

Note that, as injectivity of matches is demanded, there is no problem such as elements identified by the match being both kept and deleted. Depending on the operator $T_\varepsilon$, side effects are permitted (SPO-like behaviour) or not (DPO-like behaviour). A fixed grammar, or fixed Matrix Graph Grammar, is one in which the operator $T_\varepsilon$ is (mandatorily) the identity. If the operator is not forced to be the identity, we speak of a floating grammar, or floating Matrix Graph Grammar. Notice that the existence of side effects is equivalent to transforming a production into a sequence. This will also be the case when we deal with graph constraints and application conditions (Chap. 8).
6.2 Marking

In the previous section the problem of dangling edges has been addressed by adding an ε-production which deletes any problematic edge, so the original rule can be applied as it is. However, there is no way to guarantee that both productions will use the same elements (recall that, in general, matches are non-deterministic). The same problem exists with application conditions (Sec. 8.3), or whenever a rule is split into subrules and applying them to the same elements of the host graph is desired. This topic is studied in [73] (for a different reason) and the solution proposed there is to "pass" the match from one production to the other.

We will tackle this problem in a different way, which consists in defining an operator T_{μ,α} for a label α acting on a production p as follows:

• If no node is typed α in p, then a new node labeled α is added and connected to every already existing node.
• If, on the contrary, there exists a node of that type, then it is deleted.

The basic idea is to mark nodes and related productions with a node of type α. The operator behaves differently depending on whether it is marking the state (it adds node α) or extending the productions (α-typed nodes are removed). For an example of a short sequence of two productions, please refer to Fig. 6.6. Using functional analysis notation:

  R = ⟨L, p⟩ ↦ R̂ = ⟨m_ε(L), T_ε(p)⟩ ↦ R̂ = ⟨m_ε(L), T_μ T_ε(p)⟩,   (6.12)

where, as in Sec. 6.1, R̂ is the extended rule's RHS, which considers any dangling edge. If a production is split into two subproductions, say p ↦ T_ε(p) = p; p_ε, and we want them to be applied to the same nodes of the host graph, we may proceed as follows:

• Enlarge p_ε to add one node of some non-existent type (α), together with edges starting in this node and ending in the nodes used by p_ε.
• Enlarge p to delete the α nodes of the previous step.
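The two-case behaviour of T_{μ,α} can be sketched as a toggle on a simple node-and-edge-set representation of a graph. This is an illustrative sketch only; the representation and function name are assumptions, not the book's formalism.

```python
def T_mu(nodes, edges, alpha="alpha"):
    """Sketch of the marking operator: if no alpha node exists, add one
    and connect it to every existing node (marking phase); otherwise
    delete the alpha node together with its incident edges (extending
    phase). nodes: list of labels; edges: set of (src, dst) pairs."""
    if alpha in nodes:
        nodes = [n for n in nodes if n != alpha]
        edges = {(a, b) for (a, b) in edges if alpha not in (a, b)}
    else:
        edges = set(edges) | {(alpha, n) for n in nodes}
        nodes = list(nodes) + [alpha]
    return nodes, edges
```

Applying the operator twice returns the original graph, which reflects the mark-then-extend pairing described above.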
It is important to note that p must be enlarged to delete only the previously added node (α), and not the edges starting in α appended by T_μ to p_ε. The reason is that, in the case of a sequence in which the ε-production is advanced several positions, there exists the possibility of creating unreal dependencies between p and some production applied before p but after p_ε (the example below illustrates this point in particular).

Marking will normally create new ε-productions related to p. Note, however, that no recursive process should arise, as there should not be any interest in permuting (advancing) these new ε-productions. For ε-productions all this makes sense only in case we do not compose p ∘ p_ε (no marking would be needed). Two different operators, one for α-node addition and another for α-node deletion (instead of just one), are not defined because marking always acts on different productions. This should not cause any confusion.

Fig. 6.6. Example of Marking and Sequence s = p; p_ε

Example. Figure 6.6 illustrates the process for a simple production p that deletes node 1 and is applied to a host graph in which one or two dangling edges (depending on the match, 1 or 1') would be generated: (1, 2), or (1', 2) and (1', 3). We have chosen node 1 for the match, so there would be one dangling edge, (1, 2). In order to avoid it, an ε-production p_ε which deletes (1, 2) is appended to p. The marking process modifies p_ε and p, which become p_ε ↦ T_μ(p_ε) and p ↦ T_μ(p), respectively. Note that T_μ(p) generates two dangling edges – (α, 1) and (α, 2) – so a new ε-production p'_ε ought to be added.
When the production is applied, a sequence is generated as the operators act on the production – p ↦ T_ε(p) ↦ T_μ T_ε(p) ↦ T_ε T_μ T_ε(p) – giving rise to the following sequence of productions:

  p ↦ p; p_ε ↦ T_μ(p); p'_ε; T_μ(p_ε).   (6.13)

The reason why it is important to specify only the new node deletion (α), and not the edges starting in this node, is not difficult but might be a bit subtle. It has been mentioned above; the rest of the example is devoted to explaining it. If we specified the edges also, say (α, 1) and (α, 2) as above, then the transformed production T_μ(p) would use node 2, as it would appear in its LHS and RHS (remember that p did not act on node 2). Now imagine that we are interested in advancing the ε-production three positions, for example because we know that it is external (see Sec. 6.4) and independent: p; p_ε; p₂; p₁ ↦ p; p₂; p₁; p_ε. Suppose that production p₁ (placed between p and the new allocation of p_ε) deletes node 2 and that production p₂ adds it. If p was sequentially independent with respect to p₁ and p₂, then it would not be anymore, due to the edge ending in node 2: now p would use node 2 (it appears in its left and right hand sides).

Note that, as the marking process can easily be automated, we can safely ignore it and assume that it is somehow being performed, by some runtime environment for example.

6.3 Initial Digraph Set and Negative Digraph Set

Concerning minimal and negative initial digraphs, there may be different ways to complete the rule matrices, depending on the matches. Therefore, we no longer have a unique initial digraph but a set (if we assume any possible match). In fact two sets: one for elements that must be found in the host graph and another for those that must be found in its complement.
This section is closely related to Secs. 5.1 and 5.2 and extends the results proved therein. The initial digraph set contains all graphs that can potentially be identified by matches in concrete host graphs.

Definition 6.3.1 (Initial Digraph Set) Given a sequence s_n, its associated initial digraph set M(s_n) is the set of simple digraphs M_i such that, for every M_i ∈ M(s_n):

1. M_i has enough nodes and edges for every production of the concatenation to be applied in the specified order.
2. M_i has no proper subgraph with the previous property (keeping identifications).

Every element M_i ∈ M(s_n) is said to be an initial digraph for s_n. It is easy to see that, for s_n a finite sequence of productions, we have M(s_n) ≠ ∅.

In Sec. 4.3 coherence was used in a more or less absolute way when dealing with sequences, assuming some horizontal identification of elements. Now we see that, due to matching, coherence is a property that may depend on the given initial digraph, so, depending on the context, it might be appropriate to say that s_n is coherent with respect to the initial digraph M_i (just in case direct derivations are considered). Note that what we fix by choosing an initial digraph is the relative matching of nodes across productions (one of the actions of completion).

For the initial digraph set we can define the maximal initial digraph as the element M_n ∈ M(s_n) that considers all nodes in the p_i to be different. This element is unique up to isomorphism, and corresponds to considering the parallel application of every production in the sequence, i.e. the LHS of every production in the sequence is matched in disjoint parts of the host graph.
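Once an identification of nodes across productions is fixed and all matrices are completed to a common node set, the initial digraph of a two-production sequence is a direct boolean computation, M₂ = L₁ ∨ r̄₁L₂ (the two-production formula used later in Sec. 7.1). A small sketch, with illustrative matrices:

```python
import numpy as np

def initial_digraph_2(L1, r1, L2):
    """M2 = L1 ∨ (¬r1 ∧ L2): what s2 = p2;p1 needs from the host graph,
    for one fixed identification of nodes, with all matrices completed
    to a common set of nodes."""
    L1, r1, L2 = (np.asarray(m, dtype=bool) for m in (L1, r1, L2))
    return L1 | (~r1 & L2)

# p1 uses edge (1,2) and adds edge (2,1); p2 uses both edges.
L1 = [[0, 1], [0, 0]]
r1 = [[0, 0], [1, 0]]
L2 = [[0, 1], [1, 0]]
M2 = initial_digraph_2(L1, r1, L2)
```

Edge (2, 1) is supplied by p₁, so it is not demanded of the initial digraph; only edge (1, 2) is.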
This concept has already been used, although it was not explicitly mentioned: in the proof of Theorem 5.1.2 we started with ⋁_{i=1}^{n} L_i, a digraph that had enough nodes to perform all the actions specified by the sequence.

In a similar way, the M_i ∈ M(s_n) in which all possible identifications are performed are known as minimal initial digraphs. Contrary to the maximal initial digraph, minimal initial digraphs need not be unique, as the following example shows.

Example. In Figure 6.7 we have represented the minimal digraph set for the sequence s = removeChannel; removeChannel. The production is also depicted in the figure, where S stands for server and C for client. Note that the sequence is not coherent if all nodes are identified, because the link between the two clients would be deleted twice. Therefore, the initial digraphs should provide at least (in fact, at most) two different links between clients. In the figure, the maximal initial digraph is M₇, and M₁ and M₃ are the two minimal initial digraphs. Identifications are written i ≡ j, meaning that nodes i and j become one and the same. A top-bottom procedure has been followed, starting out with the biggest digraph M₇ and ending in the smallest. The numbers on labels are all different, to ease identifications on the initial digraph tree to the right of Fig. 6.7.

Fig. 6.7. Initial Digraph Set for s = removeChannel; removeChannel

We can provide M(s_n) with some structure T(s_n); see the right side of Fig. 6.7. Every node in T represents an element of M. A directed edge from one node to another stands for one operation of identification between corresponding nodes in the LHS and RHS of productions of the sequence s_n. Following with the example above, node M₇ is the maximal initial digraph, as it only has outgoing edges.
Nodes M₁ and M₃ are minimal, as they only have ingoing edges. The structure T is an acyclic digraph with a single root node (recall that there is just one maximal initial digraph), known as a graph-structured stack.

It is possible to make a similar construction for negative initial digraphs, which we will call the negative initial set. It will be represented by N(s_n), where s_n is the sequence under study.

Definition 6.3.2 (Negative Initial Set) Given a sequence s_n, its associated negative initial set N(s_n) is the set of simple digraphs K_i such that, for every K_i ∈ N(s_n):

1. K_i specifies all edges that can potentially prevent the application of some production of s_n.
2. K_i has no proper subgraph with the previous property (keeping identifications).

Fig. 6.8. Negative Digraph Set for s = clientDown; clientDown

Example. We study the sequence s = clientDown; clientDown, very similar to that in the example of p. 132 but deleting one node and two edges. It is depicted in Fig. 6.8 and represents the failure of a client connected to a server and to another client. The same labeling criteria have been followed to ease comparison. The minimal digraphs are very similar to those in Fig. 6.7, and in fact identifications have been performed such that K_i corresponds to M_i. The graphs do not include all edges that should not appear, because there would be too many of them, making the example confusing instead of clarifying. For instance, in K₄ there cannot be any edge incident to node (6:C) (except those coming from (1:S) and (4:S)), in particular edge (2:C, 6:C), which is not represented. The complete graph K₄ can be found in Fig. 6.9. Note that for K₄ the order of deletion is important: first node (2:C) and then node (3:C).

Fig. 6.9.
Complete Negative Initial Digraph K₄

The relationship between the elements in M and N is compiled in Corollary 5.3.2. Note that the cardinalities of both sets do not necessarily coincide. In the example of p. 132, the production does not add any edge nor delete any node (hence, there is no forbidden element), so its negative digraph set is empty.

Although in this book we are staying at a more theoretical level, we will make a small digression on the application of these concepts and possible implementations. Let's take as an example the calculation of M₀ in Proposition 7.3.2, which states that two derivations d and d' are sequentially independent if they have a common initial digraph for some identification of nodes, i.e. if M(d) ∩ M(d') ≠ ∅. We see that it is possible to follow two complementary approaches:

• Top-bottom. Begin with the maximal initial digraph and start identifying elements until we get the desired initial digraph or eventually reach a contradiction.
• Bottom-up. Start with different initial digraphs and unrelate nodes until an answer is reached.

In Fig. 6.7 on p. 133, either we begin with M₇ and start identifying nodes, eventually getting any element of the minimal initial set, or we start with M₁ – which is not necessarily unique – and build up the whole set, or stop as soon as we get the desired minimal initial digraph.

Let the matrix filled with 1's in all positions be represented by 1. For the first case the following identity may be of some help:

  M_d = M_{d'} ⟺ (M_d ∧ M_{d'}) ∨ (M̄_d ∧ M̄_{d'}) = 1.   (6.14)

A SAT solver can be used on (6.14) to obtain conditions, setting all elements in M as variables except those already known. In order to store M, binary decision diagrams (BDDs) can be employed. Refer to [8].
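Equation (6.14) is an elementwise XNOR, so checking whether two fully known candidate digraphs coincide is a one-liner; the sketch below illustrates only this degenerate case (the SAT/BDD machinery of [8] addresses the general one, where entries are variables).

```python
import numpy as np

def same_digraph(Md, Md2):
    """Eq. (6.14): Md = Md' iff (Md ∧ Md') ∨ (¬Md ∧ ¬Md') is the
    all-ones matrix, i.e. the two boolean matrices agree everywhere."""
    Md, Md2 = np.asarray(Md, dtype=bool), np.asarray(Md2, dtype=bool)
    return bool(np.all((Md & Md2) | (~Md & ~Md2)))
```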
The same alternative processes might be applied to the negative initial set to eventually reach any of its elements.

6.4 Internal and External ε-productions

Dangling edges can be classified into two disjoint sets according to the place where they appear: whether they have been added by a previous production or not.

For example, given the sequence p₂; p₁, suppose that rule p₁ uses but does not delete edge (4, 1), that rule p₂ specifies the deletion of node 1, and that we have identified both nodes 1. It is mandatory to add one ε-production p_{ε,2} to the grammar, with the disadvantage that there is an unavoidable problem of coherence between p₁ and p_{ε,2} if we wanted to advance the application of p_{ε,2} over p₁, i.e. they are sequentially dependent. Hence, the edges of ε-productions are of two different types:

• External: any edge not appearing explicitly in the grammar rules, i.e. edges of the host graph "in the surroundings" of the actual initial digraph.¹¹ Examples are edges (1:C, 1:S) and (2:C, 1:S) in Fig. 6.5 on p. 128.
• Internal: any edge used or appended by a previous production in the concatenation. One example is edge (4, 1), mentioned above.

ε-productions can be classified into internal ε-productions, if any of their edges is internal, and external ε-productions otherwise. The "advantage" of internal over external ε-productions is that the former can be considered (are known) during rule specification, while the external ones remain unknown until the production is applied. This, in turn, may spoil coherence, compatibility and other calculations performed during grammar definition.
On the other hand, external ε-productions do not interfere with grammar rules, so they can be advanced to the beginning and can even be composed to get a single production if so desired (these are called exact derivations, defined below).

Fig. 6.10. Example of Internal and External Edges

¹¹ Among all possible initial digraphs in the initial digraph set for a given concatenation, if one is already fixed (matches have already been chosen), it will be known as the actual initial digraph.

Example. Let's consider the derivation d₂ = p₂; p₁ (see Fig. 6.10). Edge (1, 2) in graph G₁ is internal (it has been added by production p₁), while edge (2, 3) in the same graph is external (it already existed in G₀).

Given a host graph G in which s_n – coherent and compatible – is to be applied, and assuming a match which identifies s_n's actual initial digraph (M_n) in G (defining a derivation d_n out of s_n), we check whether, for some m̂ and T̂_ε – which respectively represent all changes to be made to M_n and all modifications to s_n – it is correct to write

  H_n = d_n(M_n) = ⟨m̂(M_n), T̂_ε(s_n)⟩,   (6.15)

where H_n is the subgraph of the final state H corresponding to the image of M_n. Equation (5.10) allows us to consider a concatenation almost as a production, justifying the operators T̂_ε and m̂ in eq. (6.15) and our abuse of notation (recall that bras and kets apply to productions and not to sequences). All the previous considerations, together with the following example, are compiled into the definition of exact sequence.

Example. Let s₂ = p₂; p₁ be a coherent and compatible concatenation.
Using operators we can write

  H = ⟨m_{G,2}(⟨m_{G,1}(M₂), T_{ε,1}(p₁)⟩), T_{ε,2}(p₂)⟩,   (6.16)

which is equivalent to H = (p₂; p_{ε,2}; p₁; p_{ε,1})(M̂₂), with the actual initial digraph twice modified: M̂₂ = m_{G,2}(m_{G,1}(M₂)) = (m_{G,2} ∘ m_{G,1})(M₂).

Definition 6.4.1 (Exact Derivation) Let d_n = (s_n, m_n) be a derivation with actual initial digraph M_n, sequence s_n = p_n; ...; p₁, matches m_n = {m_{G,1}, ..., m_{G,n}} and ε-productions {p_{ε,1}, ..., p_{ε,n}}. It is an exact derivation if there exist m̂ and T̂_ε such that equation (6.15) is fulfilled.

Equation (6.15) is satisfied if, once all matches are calculated, the following identity holds:

  p_n; p_{ε,n}; ...; p₁; p_{ε,1} = p_n; ...; p₁; p_{ε,n}; ...; p_{ε,1}.   (6.17)

Proposition 6.4.2 With notation as in Def. 6.4.1, if p_{ε,j} ⊥ (p_{j−1}; ...; p₁) for all j, then d_n is exact.

Proof
The operator T̂_ε modifies the sequence by adding a unique ε-production, the composition of all the ε-productions p_{ε,i}. To see this, note that if an edge is to dangle, it should be eliminated by the corresponding ε-production, so no other ε-production deletes it unless it is added by a subsequent production. But, by hypothesis, there is sequential independence of every p_{ε,j} with respect to all preceding productions, and hence p_{ε,j} does not delete any edge used by p_{j−1}, ..., p₁. In particular, no edge added by any of these productions is erased.

In Def. 6.4.1, m̂ is the extension of the match m which identifies the actual initial digraph in the host graph, so it adds to m(M_n) all nodes and edges at distance one from nodes that are going to be erased. A symmetrical reasoning to that of T̂_ε shows that m̂ is the composition of all the m_{G,i}. ∎

With Def. 6.4.1 and Prop.
6.4.2 it is feasible to obtain a concatenation where all ε-productions are applied first, and all grammar rules afterwards, recovering the original concatenation. Despite some obvious advantages, all dangling edges are deleted at the beginning, which may be counterintuitive or even undesired if, for example, the deletion of a particular edge is used for synchronization purposes.

The following corollary states that exactness can only be ruined by internal ε-productions.

Corollary 6.4.3 Let s_n be a sequence to be applied to a host graph G and M_k ∈ M(s_n). Assume there exists at least one match in G for M_k that does not add any internal ε-production. Then d_n is exact.

Proof (sketch)
All potential dangling elements are edges surrounding the actual initial digraph. It is thus possible to adapt the part of the host graph modified by the sequence at the beginning, so, applying Prop. 6.4.2, we get exactness. ∎

We are now in a position to characterize applicability, problem 1 stated on p. 7. In essence, applicability characterizes when a sequence is a derivation with respect to a given initial graph.

Theorem 6.4.4 (Applicability Characterization) A sequence s_n is applicable to G if there are matches for every production (define the derivation d_n as the sequence s_n plus these matches) such that either of the two following equivalent conditions is fulfilled:

• Derivation d_n is coherent and compatible.
• d_n's minimal initial digraph is in G and d_n's negative initial digraph is in Ḡ.

Proof

6.5 Summary and Conclusions

In this chapter we have seen how it is possible to match the left hand side of a production in a given graph. We have not given a matching algorithm, but rather the construction of derivations out of productions. There are two properties that we would like to highlight.
The expressive power of Matrix Graph Grammars lies in between that of other approaches such as DPO and SPO:

• We find it more intuitive and convenient to demand injectiveness of matches. This can be seen as a limitation on the semantics of the grammar but, on the other hand, not asking for injectiveness may present a serious problem, for example when injectivity is necessary for some rules, or when non-injectivity is not allowed in some parts of the host graph. In a limit situation, it can be the case that several nodes and edges collapse to a single node and a single edge.
• Rules can be applied even if they do not consider every edge that can appear in some given state. The grammar designer can concentrate on the algorithm at a more abstract level, without worrying about every single case in which a concrete rule needs to be applied.¹²

An advantage of ε-productions over previous approaches to dangling edges is that they are erased by productions. This increases our analysis abilities, as there are no side effects.

¹² In cases of hundreds of rules, when every rule adds and deletes nodes and edges, it can be very difficult to keep track of whether some actions are still available. The canonical example would be a rule p that deletes some special node but cannot be applied because some other production eventually added an incident edge that is not considered in the left hand side of p.

We have also introduced marking, useful in many situations in which it is necessary to guarantee that some parts of two or more rules will be matched in the same area of the host graph. It will be used throughout the rest of the book.

Initial and negative digraph sets are a generalization of minimal and negative initial digraphs in which some or all possible identifications are considered. Actually, these concepts could have been introduced in Chap.
5, but we have postponed their study because we find it more natural to consider them once matching has been introduced. We have classified the productions generated at run time into internal and external. In fact, it would be more appropriate to speak of internal and external edges, but this classification suffices for our purposes.

Applicability (problem 1, stated on p. 7) will be used in Chap. 8 to characterize consistency of application conditions and graph constraints. In the next chapter, sequentialization and parallelism are studied in detail. Problem 3, sequential independence (stated on p. 8), will be addressed and, in doing so, we will touch on parallelism and related topics. Chapter 8 generalizes graph constraints and application conditions and adapts them to Matrix Graph Grammars. This step is not necessary, but convenient, to study reachability, problem 4 stated on p. 8, which will be carried out in Chap. 10.

7 Sequentialization and Parallelism

In this chapter we will study in some detail problem 3 (sequential independence, p. 8), which is a particular case of problem 2 (independence, p. 8). Recall from Chap. 1 that two derivations d and d' are independent for a given state G if d(G) = H = H' = d'(G). We call them sequentially independent if, besides, there exists a permutation σ such that d' = σ(d).

Applicability (problem 1) is one of the premises of independence, establishing an obvious connection between them. In Chap. 10 we will sketch the relationship with reachability (problem 4), and in Chap. 11 conjecture one with confluence (problem 5). In Sec. 7.1, G-congruence is presented, which in essence poses conditions for two derivations (one a permutation of the other) to have the same minimal and negative initial digraphs.
The idea behind sequential independence is that changes in the order of the positions of productions inside a sequence do not alter the result of their application. This is addressed in Sec. 7.2 for sequences and in Sec. 7.3 for derivations. If a quick review of permutation group notation is needed, please see Sec. 2.3. In Sec. 7.4 we will see that there is a close link between sequential independence and parallelization (see the Church-Rosser theorems in, e.g., [11]). As in every chapter, we will close with a summary (Sec. 7.5).

7.1 Graph Congruence

Sameness of minimal and negative initial digraphs for two sequences – one a permutation of the other – or for two derivations, if some matches have been given, will be known as graph congruence or G-congruence. This concept helps in characterizing sequential independence (see Theorems 7.2.2 and 7.2.3).

Definition 7.1.1 (G-congruence) Two coherent sequences s_n and σ(s_n), where σ is a permutation, are called G-congruent if they have the same minimal and negative initial digraphs: M(s_n) = M(σ(s_n)) and K(s_n) = K(σ(s_n)).

We will identify the conditions that must be fulfilled in order to guarantee equality of initial digraphs, first for production advancement and then for delaying, starting with two productions, continuing with three and four, and ending with the theorem for the general case. The basic remark that justifies the way we tackle G-congruence is that a sequence and a permutation of it perform the same actions, but in a different order. Initial digraphs depend on the actions and the order in which they are performed. The idea is to concentrate on how a change in the order of actions may affect the initial digraphs.
Suppose we have a coherent sequence made up of two productions, s₂ = p₂; p₁, with minimal initial digraph M₂ and, applying the (only possible) permutation σ₂, we get another coherent concatenation s'₂ = p₁; p₂ with minimal initial digraph M'₂.

Production p₁ does not delete any element added by p₂ because, otherwise, p₁ deleting something in s₂ would mean that it already existed (as p₁ is applied first in s₂), while p₂ adding that same element in s'₂ would mean that this element was not present (because p₂ is applied first in s'₂). This condition can be written:

  e₁ r₂ = 0.   (7.1)

A similar reasoning states that p₁ cannot add any element that p₂ is going to use:

  r₁ L₂ = 0.   (7.2)

Analogously for p₂ against p₁, i.e. for s'₂ = p₁; p₂, we have:

  e₂ r₁ = 0,   (7.3)
  r₂ L₁ = 0.   (7.4)

As a matter of fact, two equations are redundant – (7.1) and (7.3) – because they are already contained in the other two. Note that e_i L_i = e_i, i.e. in some sense e_i ⊆ L_i, so it is enough to ask for:

  r₁ L₂ ∨ r₂ L₁ = 0.   (7.5)

It is easy to check that these conditions make the minimal initial digraphs coincide, M₂ = M'₂. In detail:

  M₂ = M₂ ∨ r₁ L₂ = L₁ ∨ r̄₁ L₂ ∨ r₁ L₂ = L₁ ∨ L₂,
  M'₂ = M'₂ ∨ r₂ L₁ = L₂ ∨ r̄₂ L₁ ∨ r₂ L₁ = L₂ ∨ L₁.

We will very briefly compare the conditions for two productions with those of the SPO approach. In references [23; 24], sequential independence is defined and categorically characterized. See also Secs. 3.1 and 3.2, in particular equations (3.5) and (3.6). It is not difficult to translate those conditions into our matrix language:

  r₁ L₂ = 0,   (7.6)
  e₂ R₁ = e₂ r₁ ∨ e₂ ē₁ L₁ = 0.   (7.7)

The first condition is eq. (7.2) and, as mentioned above, the first part of the second condition (e₂ r₁ = 0) is already considered in eq. (7.2).
The second part of the second equation (e₂ ē₁ L₁ = 0) is demanded by coherence, in fact something a bit stronger: e₂ L₁ = 0. Hence, G-congruence plus coherence imply sequential independence in the SPO case, at least for a sequence of two productions. The converse does not hold in general. Our conditions are more demanding because we consider simple digraphs.

Let's now turn to the negative initial digraph, for which the first production should not delete any element forbidden for p₂. In such a case, these elements would be in Ḡ for p₁; p₂ and in G for p₂; p₁:

  0 = e₁ K₂ = e₁ r₂ ∨ e₁ ē₂ D₂.   (7.8)

Note that we already had e₁ r₂ = 0 in equation (7.1). A symmetrical reasoning yields e₂ ē₁ D₁ = 0 and, altogether:

  e₁ ē₂ D₂ ∨ e₂ ē₁ D₁ = 0.   (7.9)

The first monomial in eq. (7.9) simply states that no potential dangling edge for p₂ (not deleted by p₂) can be deleted by p₁. Equations (7.5) and (7.9) are schematically represented in Fig. 7.1.

Fig. 7.1. G-congruence for s₂ = p₂; p₁

It is straightforward to show that equation (7.9) guarantees the same negative initial digraph. In p₂; p₁ the negative initial digraph is given by K₁ ∨ ē₁ K₂. Condition (7.8) demands e₁ K₂ = 0, so we can or them to get:

  K₁ ∨ ē₁ K₂ ∨ e₁ K₂ = K₁ ∨ K₂.   (7.10)

A similar reasoning applies to p₁; p₂, obtaining the same result.

We will proceed with three productions, so, following a consistent notation, we set s₃ = p₃; p₂; p₁ and s'₃ = p₂; p₁; p₃ with permutation σ₃ = [1 3 2], and their corresponding minimal initial digraphs M₃ = L₁ ∨ r̄₁ L₂ ∨ r̄₁ r̄₂ L₃ and M'₃ = r̄₃ L₁ ∨ r̄₃ r̄₁ L₂ ∨ L₃. The conditions are deduced similarly to the two-production case:¹

  r₃ L₁ = 0,   r₃ r̄₁ L₂ = 0,   r₁ L₃ = 0,   r₂ ē₁ L₃ = 0.   (7.11)

Let's interpret them all. r₃ L₁ = 0 says that p₃ cannot add an edge that p₁ uses.

¹ As far as we know, there is no rule of thumb to deduce the conditions for G-congruence. They depend on the operations that the productions define and their relative order.
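Conditions (7.5) and (7.9) are plain boolean-matrix identities, so they can be checked mechanically; below is a sketch with an assumed dictionary representation of productions (the representation and function name are not the book's).

```python
import numpy as np

def g_congruent(p1, p2):
    """Check the positive (7.5) and negative (7.9) congruence conditions
    for s2 = p2;p1 against p1;p2. Each production is a dict with boolean
    matrices 'L', 'e', 'r', 'D', completed to a common node set."""
    b = lambda m: np.asarray(m, dtype=bool)
    L1, e1, r1, D1 = (b(p1[k]) for k in "LerD")
    L2, e2, r2, D2 = (b(p2[k]) for k in "LerD")
    positive = not ((r1 & L2) | (r2 & L1)).any()              # eq. (7.5)
    negative = not ((e1 & ~e2 & D2) | (e2 & ~e1 & D1)).any()  # eq. (7.9)
    return positive and negative

Z = [[0, 0], [0, 0]]
p_add = {"L": Z, "e": Z, "r": [[0, 1], [0, 0]], "D": Z}   # adds edge (1,2)
p_use = {"L": [[0, 1], [0, 0]], "e": Z, "r": Z, "D": Z}   # uses edge (1,2)
```

With p₁ = p_add and p₂ = p_use, condition (7.5) fails (p₁ adds what p₂ uses), so the two orders do not share their initial digraphs.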
This is because it would mean (by s₃) that the edge is in the host graph (it is used by p₁), while s'₃ says that it is not there (it is going to be added by p₃). The second condition is almost equal, but with p₂ in the role of p₁, which is why we demand that p₁ does not add the element (r̄₁). The third equation is symmetrical with respect to the first. The fourth equation states that we would derive a contradiction if the second production added something (r₂) that production p₃ uses (L₃) and p₁ does not delete (ē₁). This is because, by s₃, the element was not in the host graph. Note that s'₃ says the opposite, as p₃ (to be applied first) uses it. All can be put together in a single expression:

  L₃ (r₁ ∨ ē₁ r₂) ∨ r₃ (L₁ ∨ r̄₁ L₂) = 0.   (7.12)

For the sake of completeness, let's point out that there are four other conditions, but they are already considered in (7.12):

  e₁ r₃ = 0,   r₃ e₂ r̄₁ = 0,   e₃ r₁ = 0,   r₂ e₃ ē₁ = 0.   (7.13)

Now we deal with those elements that must not be present. Four conditions, similar to those for two productions – compare with the equations in (7.8) – are needed:

  e₁ K₃ = e₁ r₃ ∨ e₁ ē₃ D₃ = 0,
  e₃ K₁ = e₃ r₁ ∨ e₃ ē₁ D₁ = 0,
  e₃ K₂ ē₁ = e₃ r₂ ē₁ ∨ e₃ ē₁ ē₂ D₂ = 0,
  e₂ K₃ r̄₁ = e₂ r₃ r̄₁ ∨ e₂ r̄₁ ē₃ D₃ = 0.   (7.14)

Note that the first monomial in every equation can be discarded, as they are already considered in (7.12). We put them all together to get:

  e₁ ē₃ D₃ ∨ e₃ ē₁ ē₂ D₂ ∨ e₃ ē₁ D₁ ∨ e₂ ē₃ r̄₁ D₃ = e₃ (ē₁ D₁ ∨ ē₁ ē₂ D₂) ∨ ē₃ D₃ (e₁ ∨ r̄₁ e₂) = 0.   (7.15)

In Fig. 7.2 there is a schematic representation of all the G-congruence conditions for the sequences s₃ = p₃; p₂; p₁ and s'₃ = p₂; p₁; p₃.
These conditions guarantee sameness of the minimal and negative initial digraphs, which will be proved below in Theorem 7.1.6.²

Moving one production three positions forward in a sequence of four productions, i.e. p₄; p₃; p₂; p₁ ↦ p₃; p₂; p₁; p₄, while maintaining the minimal initial digraph, has as associated conditions those given by the equation:

  L₄ (r₁ ∨ ē₁ r₂ ∨ ē₁ ē₂ r₃) ∨ r₄ (L₁ ∨ r̄₁ L₂ ∨ r̄₁ r̄₂ L₃) = 0,   (7.16)

and for the negative initial digraph we have:

  e₄ (ē₁ D₁ ∨ ē₁ ē₂ D₂ ∨ ē₁ ē₂ ē₃ D₃) ∨ ē₄ D₄ (e₁ ∨ r̄₁ e₂ ∨ r̄₁ r̄₂ e₃) = 0.   (7.17)

² Notice that, by Prop. 4.1.4 – equations (4.10) and (4.13) in particular – we can put r̄_i L_i instead of just L_i and ē_i r_i instead of just r_i. This will be useful in order to find a closed formula in terms of ∇.

Fig. 7.2. G-congruence for Sequences s₃ = p₃; p₂; p₁ and s'₃ = p₂; p₁; p₃

Equations (7.16) and (7.17), which together give G-congruence for s₄ and s'₄, are depicted in Fig. 7.3.

Before moving to the general case, let's briefly introduce and exemplify a simple notation for cycles moving a single production forward or backward:

1. Advance the production n − 1 positions: φ_n = [1 n n−1 ... 3 2].
2. Delay the production n − 1 positions: δ_n = [1 2 ... n−1 n].

Fig. 7.3. G-congruence for s₄ = p₄; p₃; p₂; p₁ and s'₄ = p₃; p₂; p₁; p₄

Example. Consider advancing production p₅ three positions inside the sequence s₅ = p₅; p₄; p₃; p₂; p₁ to get φ₄(s₅) = p₄; p₃; p₂; p₅; p₁, where φ₄ = [1 4 3 2]. To illustrate the way in which we represent delaying a production, moving production p₂ two places backwards, p₅; p₄; p₃; p₂; p₁ ↦ p₅; p₂; p₄; p₃; p₁, has as associated cycle δ₄ = [2 3 4].
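The effect of the φ and δ cycles on a sequence is easy to mimic on a list of production names (written leftmost-applied-last, as in the book); a small illustrative sketch:

```python
def advance(seq, k):
    """phi-style cycle: reinsert the leftmost production k slots to
    the right, advancing it k positions."""
    s = list(seq)
    s.insert(k, s.pop(0))
    return s

def delay(seq, i, k):
    """delta-style cycle: reinsert the production at index i k slots
    to the left, delaying it k positions."""
    s = list(seq)
    s.insert(i - k, s.pop(i))
    return s
```

Both calls below reproduce the two permutations of the example above: advancing p₅ three positions, and delaying p₂ two positions.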
Note that the numbers in the permutation refer to the place the production occupies in the sequence, numbering from left to right, and not to its subindex. The conditions that must be fulfilled in order to maintain the minimal and negative initial digraphs will be called congruence conditions, abbreviated as CC; positive CC if they refer to the minimal initial digraph and negative CC for the negative initial digraph. By induction it can be proved that, for the advancement of one production $n-1$ positions inside the sequence of $n$ productions $s_n = p_n; \ldots; p_1$, the equation which contains all positive CC can be expressed in terms of the operator $\nabla$ and has the form

$CC_n(\phi_n, s_n) = L_n \nabla_1^{n-1} \left( \overline{e}_x r_y \right) \vee r_n \nabla_1^{n-1} \left( \overline{r}_x L_y \right) = 0, \qquad (7.18)$

and for the negative CC:

$\overline{CC}_n(\phi_n, s_n) = D_n \overline{e}_n \nabla_1^{n-1} \left( \overline{r}_x e_y \right) \vee e_n \nabla_1^{n-1} \left( \overline{e}_x D_y \right) = 0. \qquad (7.19)$

Remark. Some monomials were discarded in eq. (7.14) because they were already considered in eq. (7.12). If (7.19) is not used in conjunction with (7.18), then the more complete form

$\overline{CC}_n(\phi_n, s_n) = K_n \nabla_1^{n-1} \left( \overline{r}_x e_y \right) \vee e_n \nabla_1^{n-1} \left( \overline{e}_x K_y \right) \qquad (7.20)$

should be preferred. Recall that $K_h = r_h \vee \overline{e}_h D_h$. The point is that $\overline{e}_h D_h$ considers potential dangling edges, while $K_h$ also includes those to be added.

It is possible to put eqs. (7.18) and (7.19) in terms of $L_i$ and $K_i$. We will do it for the sequences $s_3$ and $s'_3$ to obtain an equivalent form of Fig. 7.2 (represented in Fig. 7.4). What we do is to merge the first branch in Fig. 7.2 with the third branch, and the second branch with the fourth. One illustrating example should suffice:$^3$

Fig. 7.4. G-congruence (alternate form) for $s_3$ and $s'_3$

$r_3 \overline{r}_1 L_1 \vee D_3 \overline{e}_3 \overline{r}_1 e_1 = \overline{r}_1 L_1 r_3 \vee e_1 \overline{e}_3 D_3 \, \overline{r}_1 L_1 = \overline{r}_1 L_1 \left( r_3 e_1 \vee r_3 \overline{e}_1 \vee e_1 \overline{e}_3 D_3 \right) = \overline{r}_1 L_1 \left( e_1 K_3 \vee r_3 \overline{e}_1 \right) = \overline{r}_1 L_1 K_3 \left( e_1 \vee r_3 \right).$

$^3$ The term $\overline{r}_1$ can be omitted.
(7.21)

The last equality holds because $K_i r_i = (r_i \vee \overline{e}_i D_i) r_i = r_i$ and $a \vee \overline{a} b = a \vee b$. We have also used that $K_i \overline{e}_i = \overline{e}_i r_i \vee \overline{e}_i D_i = K_i$. The same sort of calculations for $s_4$ and $s'_4$ are summarized in Fig. 7.5.

Fig. 7.5. G-congruence (alternate form) for $s_4$ and $s'_4$

A formula considering the positive (7.18) and the negative (7.19) parts can be derived by induction. It is presented as a proposition:

Proposition 7.1.2. Positive and negative congruence conditions for the sequences $s_n$ and $s'_n = \phi_n(s_n)$ are given by:

$CC_n(\phi_n, s_n) = L_n \nabla_1^{n-1} \left( \overline{e}_x K_y (r_y \vee e_n) \right) \vee K_n \nabla_1^{n-1} \left( \overline{r}_x L_y (e_y \vee r_n) \right). \qquad (7.22)$

Proof. G-congruence is obtained when $CC_n(\phi_n, s_n) = 0$.

An equivalent reasoning applies to a production delayed $n-1$ positions, giving very similar formulas. Suppose that production $p_1$ is moved backwards in the concatenation $s_n$ to get $s''_n = p_1; p_n; \ldots; p_2$, i.e. $\delta_n$ is applied. The positive part of the condition is

$CC_n(\delta_n, s_n) = L_1 \nabla_2^{n} \left( \overline{e}_x r_y \right) \vee r_1 \nabla_2^{n} \left( \overline{r}_x L_y \right) = 0, \qquad (7.23)$

and the negative part:

$\overline{CC}_n(\delta_n, s_n) = D_1 \overline{e}_1 \nabla_2^{n} \left( \overline{r}_x e_y \right) \vee e_1 \nabla_2^{n} \left( \overline{e}_x D_y \right) = 0. \qquad (7.24)$

As in the positive case, it is possible to merge equations (7.23) and (7.24) to get a single expression:

Proposition 7.1.3. Positive and negative congruence conditions for the sequences $s_n$ and $s''_n = \delta_n(s_n)$ are given by:

$CC_n(\delta_n, s_n) = L_1 \nabla_2^{n} \left( \overline{e}_x K_y (r_y \vee e_1) \right) \vee K_1 \nabla_2^{n} \left( \overline{r}_x L_y (e_y \vee r_1) \right). \qquad (7.25)$

It is necessary to show that these conditions guarantee sameness of the minimal and negative initial digraphs, but first we need a technical lemma that provides us with some identities used to transform the minimal initial digraphs. Advancement and delaying are very similar, so only advancement is considered in the rest of the section.

Lemma 7.1.4. Suppose $s_n = p_n; \ldots; p_1$ and $s'_n = \sigma(s_n) = p_{n-1}; \ldots$
$; p_1; p_n$, and that $CC_n(\phi_n)$ is satisfied. Then the following identity may be or-ed to $s_n$'s minimal initial digraph $M_n$ without changing it:

$DC_n(\phi_n, s_n) = L_n \nabla_1^{n-2} \left( \overline{r}_x e_y \right). \qquad (7.26)$

Proof. Let's start with three productions. Recall that $M_3 = L_1 \vee \text{other terms}$ and that $L_1 = L_1 \vee e_1 L_1 = L_1 \vee e_1 = L_1 \vee e_1 L_3$ (the last equality holds by the propositional identity $a \vee ab = a$). Note that $e_1 L_3$ is eq. (7.26) for $n = 3$. For $n = 4$, apart from $e_1 L_4$, we need to get $e_2 \overline{r}_1 L_4$ (because the full condition is $DC_4 = L_4 (e_1 \vee \overline{r}_1 e_2)$). Recall again the minimal initial digraph for four productions, whose first two terms are $M_4 = L_1 \vee \overline{r}_1 L_2 \vee \ldots$ It is not necessary to consider all terms in $M_4$ to get $DC_4$:

$M_4 = (L_1 \vee e_1) \vee (\overline{r}_1 L_2 \vee \overline{r}_1 e_2) \vee \ldots = (L_1 \vee e_1 \vee e_1 L_4) \vee (\overline{r}_1 L_2 \vee \overline{r}_1 e_2 \vee \overline{r}_1 e_2 L_4) \vee \ldots = (L_1 \vee e_1 L_4) \vee (\overline{r}_1 L_2 \vee \overline{r}_1 e_2 L_4) \vee \ldots = M_4 \vee DC_4.$

The proof can be finished by induction.

The next lemma states a similar result for negative initial digraphs. We will need it to prove invariance of the negative initial digraph.

Lemma 7.1.5. With notation as above and assuming that $CC_n(\phi_n)$ is satisfied, the following identity may be or-ed to the negative initial digraph $K$ without changing it:

$\overline{DC}_n(\phi_n, s_n) = \overline{e}_n D_n \nabla_1^{n-2} \left( \overline{e}_x r_y \right). \qquad (7.27)$

Proof. We follow the same scheme as in the proof of Lemma 7.1.4. Let's start with three productions. Recall that $K_3 = K_1 \vee \text{other terms}$ and that $K_1 = K_1 \vee r_1 K_1 = K_1 \vee r_1 = K_1 \vee r_1 \overline{e}_3 D_3$. Note that $r_1 \overline{e}_3 D_3$ is eq. (7.27) for $n = 3$. For $n = 4$, besides the term $r_1 \overline{e}_4 D_4$ we need to get $\overline{e}_1 r_2 \overline{e}_4 D_4$ (because $\overline{DC}_4 = \overline{e}_4 D_4 (r_1 \vee \overline{e}_1 r_2)$). The first two terms of the negative initial digraph for four productions are $K_4 = K_1 \vee \overline{e}_1 K_2 \vee \ldots$ Again, it is not necessary to consider the whole formula for $K_4$:

$K_4 = (K_1 \vee r_1) \vee (\overline{e}_1 K_2 \vee r_2 \overline{e}_1) \vee \ldots$
$= K_1 \vee r_1 \vee r_1 \overline{e}_4 D_4 \vee \overline{e}_1 K_2 \vee \overline{e}_1 r_2 \vee \overline{e}_1 r_2 \overline{e}_4 D_4 \vee \ldots = K_1 \vee r_1 \overline{e}_4 D_4 \vee \overline{e}_1 K_2 \vee \overline{e}_1 r_2 \overline{e}_4 D_4 \vee \ldots = K_4 \vee \overline{DC}_4.$

The proof can be finished by induction.

Fig. 7.6. Positive and negative DC conditions, $DC_5$ and $\overline{DC}_5$

Both $DC_5$ and $\overline{DC}_5$ are depicted in Fig. 7.6 for the advancement of a single production, $s_5 = p_5; p_4; p_3; p_2; p_1 \mapsto s'_5 = p_4; p_3; p_2; p_1; p_5$. Notice the similarities with the first and fourth branches of Fig. 7.3.

Remark. If $CC_n$ and $DC_n$ are applied independently of $\overline{CC}_n$ and $\overline{DC}_n$, then the expression

$\overline{DC}_n(\phi_n, s_n) = K_n \nabla_1^{n-2} \left( \overline{e}_x r_y \right) \qquad (7.28)$

should be used instead of the definition given by equation (7.27).

We are ready to formally state a characterization of G-congruence in terms of congruence conditions $CC$:

Theorem 7.1.6. With notation as above, if $s_n$ and $s'_n = \phi_n(s_n)$ are coherent and condition $CC(\phi_n, s_n)$ is satisfied, then they are G-congruent.

Proof. First, using $CC_i$ and $DC_i$, we will prove $M_i = M'_i$ for three and five productions. The identities $a \vee \overline{a} b = a \vee b$ and $\overline{a} \vee a b = \overline{a} \vee b$ will be used:

$M_3 \vee CC_3 \vee DC_3 = \left[ L_1 \vee \overline{r}_1 L_2 \vee \overline{r}_1 \overline{r}_2 L_3 \right] \vee \left[ r_1 L_3 \vee \overline{e}_1 r_2 L_3 \vee r_3 L_1 \vee \overline{r}_1 r_3 L_2 \right] \vee \left[ e_1 L_3 \right] = L_1 \vee \overline{r}_1 L_2 \vee \overline{r}_1 \overline{r}_2 L_3 \vee r_1 L_3 \vee \overline{e}_1 r_2 L_3 \vee e_1 L_3 = L_1 \vee \overline{r}_1 L_2 \vee \overline{r}_2 L_3 \vee r_2 L_3 \vee L_3 (r_1 \vee e_1) = L_1 \vee \overline{r}_1 L_2 \vee L_3.$

In our first step, as neither $r_3 L_1$ nor $\overline{r}_1 r_3 L_2$ are applied to $M_3$, they have been omitted (for example, $L_1 \vee r_3 L_1 = L_1$). Once $r_1 L_3$, $e_1 L_3$ and $r_2 L_3$ have been used, they are omitted as well. Let's check out $M'_3$, where in the second equality $r_1 L_3$ and $r_2 \overline{e}_1 L_3$ are ruled out since they are not used:

$M'_3 \vee CC_3 = \left[ \overline{r}_3 L_1 \vee \overline{r}_1 \overline{r}_3 L_2 \vee L_3 \right] \vee \left[ r_1 L_3 \vee r_2 \overline{e}_1 L_3 \vee r_3 L_1 \vee \overline{r}_1 r_3 L_2 \right] = \overline{r}_3 L_1 \vee \overline{r}_1 \overline{r}_3 L_2 \vee L_3 \vee r_3 L_1 \vee \overline{r}_1 r_3 L_2 = L_1 \vee \overline{r}_1 L_2 \vee L_3.$
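The propositional identities used repeatedly in these proofs act entrywise on the Boolean matrices, so they can be checked exhaustively over $\{0, 1\}$. A small illustrative sketch (our own aid, not part of the original text):

```python
from itertools import product

# Exhaustive check of the absorption identities used in the proofs:
#   a v (not a) b = a v b,   (not a) v a b = (not a) v b,   a v a b = a
for a, b in product([0, 1], repeat=2):
    assert a | ((1 - a) & b) == a | b          # a v ~a b = a v b
    assert (1 - a) | (a & b) == (1 - a) | b    # ~a v a b = ~a v b
    assert a | (a & b) == a                    # a v a b = a
print("all identities hold")
```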
The case for five productions is almost equal to that of three productions, but it is useful to illustrate in detail how $CC_5$ and $DC_5$ are used to prove that $M_5 = M'_5$ in a more complex situation. The key point is the transformation $\overline{r}_1 \overline{r}_2 \overline{r}_3 \overline{r}_4 L_5 \mapsto L_5$, and the following identities show the way to proceed:

$\overline{r}_1 \overline{r}_2 \overline{r}_3 \overline{r}_4 L_5 \vee r_1 L_5 = \overline{r}_2 \overline{r}_3 \overline{r}_4 L_5$
$\overline{r}_2 \overline{r}_3 \overline{r}_4 L_5 \vee \overline{e}_1 r_2 L_5 \vee e_1 L_5 = \overline{r}_3 \overline{r}_4 L_5$
$\overline{r}_3 \overline{r}_4 L_5 \vee \overline{e}_1 \overline{e}_2 r_3 L_5 \vee e_1 L_5 \vee \overline{r}_1 e_2 L_5 \vee r_1 L_5 = \overline{r}_4 L_5$
$\overline{r}_4 L_5 \vee \overline{e}_1 \overline{e}_2 \overline{e}_3 r_4 L_5 \vee e_1 L_5 \vee \overline{r}_1 e_2 L_5 \vee r_1 L_5 \vee \overline{r}_1 \overline{r}_2 e_3 L_5 \vee \overline{e}_1 r_2 L_5 = L_5.$

Note that we are in a kind of iterative process: what we get on the right of each equality is inserted and simplified on the left of the following one, until we get $L_5$. For $L_4$ the process is similar.

Now one example for the negative initial digraph is studied, $K(s_3) \vee \overline{CC}_3 \vee \overline{DC}_3 = K'(s_3) \vee \overline{CC}_3$:

$K'(s_3) \vee \overline{CC}_3 = \left[ \overline{e}_3 K_1 \vee \overline{e}_1 \overline{e}_3 K_2 \vee K_3 \right] \vee \left[ e_1 K_3 \vee e_2 \overline{r}_1 K_3 \vee e_3 K_1 \vee \overline{e}_1 e_3 K_2 \right] = \overline{e}_3 K_1 \vee \overline{e}_1 \overline{e}_3 K_2 \vee K_3 \vee e_3 K_1 \vee \overline{e}_1 e_3 K_2 = K_1 \vee \overline{e}_1 K_2 \vee K_3.$

The procedure followed to show $K(s_3) = K'(s_3)$ is completely analogous to that of $M_3 = M'_3$.

Fig. 7.7. Altered production $q'_3$ plus productions $q_1$ and $q_2$

Remark. Congruence conditions report what elements prevent graph congruence. In this way, not only is information on sameness of the minimal and negative initial digraphs available, but also on which elements prevent G-congruence. For example, another way to see congruence conditions is as the difference of the minimal initial digraphs in the positive case.

Example.
Reusing the productions introduced so far ($q_1$, $q_2$ and $q_3$),$^4$ we are going to check G-congruence for a sequence of three productions in which one is directly delayed two positions, i.e. it is not delayed in two steps but just in one. As commented before, it is mandatory to change $q_3$ in order to keep compatibility, so a new production $q'_3$ is introduced, depicted in Fig. 7.7. The minimal initial digraph for the sequence $q'_3; q_2; q_1$ remains unaltered, i.e. $M(q'_3; q_2; q_1) = M(q_3; q_2; q_1)$ (compare with Fig. 5.12 on p. 116), but the one for $q_1; q'_3; q_2$ is slightly different and can be found in Fig. 7.8, along with the concatenation $s'_{123} = q_1; q'_3; q_2$ and its intermediate states.

$^4$ In the examples on pp. 77, 80, 104 and 115.

Fig. 7.8. Composition and concatenation. Three productions

In this example, production $q_1$ is delayed two positions inside $s_3 = q'_3; q_2; q_1$ to obtain $\delta_3(s_3) = q_1; q'_3; q_2$. Such a permutation can be expressed as $\delta_3 = [1 \; 2 \; 3]$.$^5$ Only the positive case $CC_3(\delta_3, s_3)$ is illustrated. Formula (7.23) expanded and simplified is:

$\underbrace{L_1 \left( r_2 \vee \overline{e}_2 r_3 \right)}_{(\ast)} \vee \underbrace{r_1 \left( L_2 \vee \overline{r}_2 L'_3 \right)}_{(\ast\ast)}. \qquad (7.29)$

If the minimal initial digraphs are equal, then equation (7.29) should be zero. The node ordering is $[2 \; 3 \; 5 \; 1 \; 4]$, not included due to lack of space.

$^5$ Numbers 1, 2 and 3 in the permutation mean position inside the sequence, not production subindex.
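Evaluating a congruence condition like (7.29) is a purely mechanical Boolean computation. The following sketch shows the idea on hypothetical $3 \times 3$ matrices; these are NOT the $5 \times 5$ matrices of the example, whose entries are given only in the original figures.

```python
# Illustrative sketch: evaluating one branch of a congruence condition,
# (*) = L1 (r2 v ~e2 r3), elementwise on Boolean matrices.
# The matrices below are hypothetical 3x3 examples.

def NOT(a):    return [[1 - x for x in row] for row in a]
def AND(a, b): return [[x & y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]
def OR(a, b):  return [[x | y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def nonzero(a):
    """1-based positions of the nonzero entries, as in the text."""
    return [(i + 1, j + 1) for i, row in enumerate(a)
                           for j, x in enumerate(row) if x]

L1 = [[0, 1, 0], [0, 0, 0], [1, 0, 0]]   # hypothetical data
r2 = [[0, 1, 0], [0, 0, 0], [0, 0, 0]]
e2 = [[0, 0, 0], [0, 0, 1], [0, 0, 0]]
r3 = [[0, 0, 0], [0, 0, 0], [1, 0, 0]]

star = AND(L1, OR(r2, AND(NOT(e2), r3)))
print(nonzero(star))                      # [(1, 2), (3, 1)]
```

A nonzero entry pinpoints an edge that prevents G-congruence, exactly as in the example that follows.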
Substituting the corresponding $5 \times 5$ Boolean matrices (in the node ordering $[2 \; 3 \; 5 \; 1 \; 4]$) and operating, we detect nonzero elements $(1,5)$ and $(3,1)$ in $(\ast)$, and $(1,2)$, $(2,3)$ and $(3,2)$ in $(\ast\ast)$. They correspond to the edges $(2,4)$, $(5,2)$, $(2,3)$, $(3,3)$ and $(5,3)$, respectively. Both minimal initial digraphs are depicted together in Fig. 7.9 to ease comparison.

Fig. 7.9. Example of minimal initial digraphs

The previous results not only detect whether the application of a permutation (limited to advancing or delaying a single element) leaves the minimal initial digraphs unaltered, but also which elements are changed.

7.2 Sequentialization -- Grammar Rules

In this section we will deal with position interchange inside a sequence of productions. For example, let $s_3 = p_3; p_2; p_1$ be a coherent sequence made up of three productions and suppose we wanted to move $p_3$ forward one position to obtain $\sigma(s_3) = p_2; p_3; p_1$. This can be seen as a permutation $\sigma$ acting on $s_3$'s indexes.$^6$ Although we are not considering matches in this section, there is a close relationship between position interchange and problem 3 that we will explore in this and the next sections. This section first introduces sequential independence for productions and a characterization through G-congruence, compatibility and coherence. G-congruence and related conditions have been studied in Sec. 7.1. Similar results for coherence (advancement and delaying of a single production) are also derived.
Definition 7.2.1 (Sequential independence). Let $s_n = p_n; \ldots; p_1$ be a sequence and $\sigma$ a permutation. Then $s_n$ and $\sigma(s_n)$ are said to be sequential independent if both add and remove the same elements and have the same minimal and negative initial digraphs.

Compatibility and coherence imply sequential independence, provided $s_n$ and $\sigma(s_n)$ have the same minimal and negative initial digraphs.

Theorem 7.2.2. With notation as above, if $s_n$ is compatible and coherent, $\sigma(s_n)$ is compatible and coherent, and both are G-congruent, then they are sequential independent.

Proof. By hypothesis we can define two productions $c_s$, $c_{\sigma(s)}$, which are respectively the compositions coming from $s_n$ and $\sigma(s_n)$. Using commutativity of the sum in formulas (5.20) and (5.21) -- i.e. the order in which elements are added does not matter -- we directly see that $s_n$ and $\sigma(s_n)$ add and remove the same elements. G-congruence guarantees sameness of the minimal and negative initial digraphs.

Note that, even though the final result is the same when moving sequential independent productions inside a given concatenation, the intermediate states can be very different. In the rest of this section we will discuss permutations that move one production forward or backward a certain number of positions, yielding the same result. This means, using Theorem 7.2.2 and assuming compatibility and G-congruence, finding out the conditions to be satisfied such that, starting with a coherent sequence, we again obtain a coherent sequence after applying the permutation.

$^6$ Notation for permutation groups is summarized in Sec. 2.6.

Theorem 7.2.3. Consider coherent sequences $t_n = p_\alpha; p_n; p_{n-1}; \ldots; p_2; p_1$ and $s_n = p_n; p_{n-1}; \ldots; p_2; p_1; p_\beta$ and permutations $\phi_{n+1}$ and $\delta_{n+1}$.

1.
$\phi_{n+1}(t_n)$ -- which advances $p_\alpha$'s application -- is coherent if

$e^E_\alpha \, \triangledown_1^{n} \left( \overline{r}^E_x L^E_y \right) \vee R^E_\alpha \, \triangledown_1^{n} \left( \overline{e}^E_x r^E_y \right) = 0. \qquad (7.30)$

2. $\delta_{n+1}(s_n)$ -- which delays $p_\beta$'s application -- is coherent if

$L^E_\beta \, \triangle_1^{n} \left( \overline{r}^E_x e^E_y \right) \vee r^E_\beta \, \triangle_1^{n} \left( \overline{e}^E_x R^E_y \right) = 0. \qquad (7.31)$

Proof. Both cases have a very similar proof, so only production advancement is included. The way to proceed is to check the differences between the original sequence $t_n$ and the swapped one, $\phi_{n+1}(t_n)$, discarding conditions already imposed by $t_n$. We start with $t_2 = p_\alpha; p_2; p_1 \mapsto \phi_3(t_2) = p_2; p_1; p_\alpha$, where $\phi_3 = [1 \; 3 \; 2]$. Coherence of both sequences specifies several conditions to be fulfilled, included in Table 7.1. Note that conditions (t.1.7) and (t.1.10) can be found in the original sequence -- they are (t.1.2) and (t.1.5) -- so they can be disregarded.

Coherence of $p_\alpha; p_2; p_1$:
$e^E_2 L^E_\alpha = 0$ (t.1.1), $\quad e^E_1 L^E_2 = 0$ (t.1.2), $\quad e^E_1 L^E_\alpha \overline{r}^E_2 = 0$ (t.1.3), $\quad r^E_\alpha R^E_2 = 0$ (t.1.4), $\quad r^E_2 R^E_1 = 0$ (t.1.5), $\quad r^E_\alpha R^E_1 \overline{e}^E_2 = 0$ (t.1.6).

Coherence of $p_2; p_1; p_\alpha$:
$e^E_1 L^E_2 = 0$ (t.1.7), $\quad e^E_\alpha L^E_1 = 0$ (t.1.8), $\quad e^E_\alpha L^E_2 \overline{r}^E_1 = 0$ (t.1.9), $\quad r^E_2 R^E_1 = 0$ (t.1.10), $\quad r^E_1 R^E_\alpha = 0$ (t.1.11), $\quad r^E_2 R^E_\alpha \overline{e}^E_1 = 0$ (t.1.12).

Table 7.1. Coherence for advancement of two productions

We would like to express all the previous identities using the operators delta (4.40) and nabla (4.41), for which equation (4.13) is used on (t.1.8) and (t.1.9):

$e^E_\alpha \overline{r}^E_1 L^E_1 = 0 \qquad (7.32)$
$e^E_\alpha \overline{r}^E_2 L^E_2 \overline{r}^E_1 = 0. \qquad (7.33)$

For the same reason, applying (4.10) to conditions (t.1.11) and (t.1.12):

$r^E_1 \overline{e}^E_1 R^E_\alpha = 0 \qquad (7.34)$
$r^E_2 \overline{e}^E_2 R^E_\alpha \overline{e}^E_1 = 0. \qquad (7.35)$

Condition (t.1.4) can be split into two parts -- recall (4.31) and (4.32) -- being $r^E_\alpha r^E_2 = 0$ one of them. Doing the same operation on (t.1.12), $r^E_2 r^E_\alpha \overline{e}^E_1 = 0$ is obtained, which is automatically verified and therefore should not be considered.
It is not ruled out since, as stated above, we want to get formulas expressible using the operators delta and nabla. Finally we obtain the equation:

$R^E_\alpha \left( \overline{e}^E_1 r^E_1 \vee \overline{e}^E_2 r^E_2 \right) \vee e^E_\alpha \left( \overline{r}^E_1 L^E_1 \vee \overline{r}^E_2 L^E_2 \right) = 0. \qquad (7.36)$

Fig. 7.10. Advancement. Three and five productions

Performing similar manipulations on the sequence $t_3 = p_\alpha; p_3; p_2; p_1$ to get $\phi_4(t_3) = p_3; p_2; p_1; p_\alpha$ (with $\phi_4 = [1 \; 4 \; 3 \; 2]$), we find that the condition to be satisfied is:

$R^E_\alpha \left( \overline{e}^E_1 r^E_1 \vee \overline{e}^E_2 r^E_2 \vee \overline{e}^E_3 r^E_3 \right) \vee e^E_\alpha \left( \overline{r}^E_1 L^E_1 \vee \overline{r}^E_2 L^E_2 \vee \overline{r}^E_3 L^E_3 \right) = 0. \qquad (7.37)$

Figure 7.10 includes the graphs associated to the previous example and to $n = 4$. The proof can be finished by induction.

The previous theorems foster the following notation: if eq. (7.30) is satisfied and we have sequential independence, we will write $p_\alpha \perp (p_n; \ldots; p_1)$, whereas if equation (7.31) is true and again they are sequential independent, it will be represented by $(p_n; \ldots; p_1) \perp p_\beta$. Note that if we have the coherent sequence made up of two productions $p_2; p_1$, and we have that $p_1; p_2$ is coherent, we can write $p_2 \perp p_1$ to mean that either $p_2$ may be moved to the front or $p_1$ to the back.

Example. It is not difficult to give an example of three productions $t_3 = w_3; w_2; w_1$ where the advancement of the third production two positions to get $t'_3 = w_2; w_1; w_3$ has the following properties: their associated minimal initial digraphs -- $M$ and $M'$, respectively -- coincide, and both are coherent (and thus sequential independent), but $t''_3 = w_2; w_3; w_1$ cannot be performed, so it is not possible to advance $w_3$ one position and, right afterwards, another one; i.e. the advancement of two places must be carried out in a single step.

Fig. 7.11. Three simple productions

As drawn in Fig.
7.11, $w_1$ deletes the edge $(1,2)$, $w_2$ adds it, while it is preserved by $w_3$ (it appears on its left-hand side but it is not deleted). Using the previous notation, this is an example where $w_3 \perp (w_2; w_1)$ but $w_3 \not\perp w_2$. As far as we know, in the SPO or DPO approaches, testing whether $w_3 \perp (w_2; w_1)$ or not has to be performed in two steps: $w_3 \perp w_2$, which would allow for $w_3; w_2; w_1 \mapsto w_2; w_3; w_1$, and $w_3 \perp w_1$ to get the desired result: $w_2; w_1; w_3$.

Fig. 7.12. Altered production $q'_3$ plus productions $q_1$ and $q_2$ (rep.)

Example. We will use productions $q_1$, $q_2$ and $q'_3$ (reproduced again in Fig. 7.12). Production $q'_3$ is advanced two positions inside $q'_3; q_2; q_1$ to obtain $q_2; q_1; q'_3$. Such a permutation can be expressed as $\phi_3 = [1 \; 3 \; 2]$.$^7$ Formula (7.30) expanded, simplified and adapted to this case is:

$\underbrace{e_3 \left( L_1 \vee \overline{r}_1 L_2 \right)}_{(\ast)} \vee \underbrace{R_3 \left( r_1 \vee \overline{e}_1 r_2 \right)}_{(\ast\ast)}. \qquad (7.38)$

Finally, all elements are substituted and the operations are performed, checking that the result is the null matrix. The node ordering is $[2 \; 3 \; 5 \; 1 \; 4]$, not included due to lack of space. Substituting the corresponding $5 \times 5$ Boolean matrices, both $(\ast)$ and $(\ast\ast)$ evaluate to zero, and hence the permutation is also coherent.

$^7$ Numbers 1, 2 and 3 in the permutation mean position inside the sequence, not production subindex.
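The one-step versus two-step behaviour of the $w$-example above can be mimicked in a toy set-based model. This is our own drastic simplification (productions as triples of edge sets), not the book's matrix formalism:

```python
# Sketch: a production is (needs, deletes, adds) over a set of edges.
# It is applicable when the needed and deleted edges are present and
# the added edges are absent.

def apply_seq(graph, seq):
    """Apply productions in order; return None if some step fails."""
    g = set(graph)
    for needs, deletes, adds in seq:
        if not needs <= g or not deletes <= g or adds & g:
            return None
        g = (g - deletes) | adds
    return g

w1 = (frozenset({(1, 2)}), frozenset({(1, 2)}), frozenset())  # deletes (1,2)
w2 = (frozenset(), frozenset(), frozenset({(1, 2)}))          # adds (1,2)
w3 = (frozenset({(1, 2)}), frozenset(), frozenset())          # preserves (1,2)

G = {(1, 2)}
print(apply_seq(G, [w1, w2, w3]))  # w3;w2;w1 -> {(1, 2)}
print(apply_seq(G, [w3, w1, w2]))  # w2;w1;w3 -> {(1, 2)}
print(apply_seq(G, [w1, w3, w2]))  # w2;w3;w1 -> None (w3 needs the edge)
```

Advancing $w_3$ two positions at once succeeds, while the intermediate one-position advancement fails, just as in the example.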
7.3 Sequential Independence -- Derivations

Sequential independence for derivations is very similar to that for the sequences studied in the previous section, the main difference being that there is now a state to be taken into account. Here $\sigma$ will represent an element of the group of permutations, and derivation $d_n$ will have associated sequence $s_n$. Note that two sequences $s_n$ and $s'_n = \sigma(s_n)$ carry out the same operations but in a different order.

Definition 7.3.1. Two derivations $d_n$ and $d'_n = \sigma(d_n)$ are sequential independent with respect to $G$ if $d_n(G) = H_n = H'_n = d'_n(G)$.

Compare with problem 3 on p. 8. Even though $s'_n = \sigma(s_n)$, if $\varepsilon$-productions appear because the same productions are matched to different places in the host graph, then it might not be true that $d'_n = \sigma(d_n)$. A restatement of Def. 7.3.1 is the following proposition.

Proposition 7.3.2. If, for two applicable derivations $d_n$ and $d'_n = \sigma(d_n)$,

1. $\exists M_0 \subseteq G$ such that $M_0 \in M(s_n) \cap M(s'_n)$, and
2. the corresponding negative initial digraph $K_0 \in N(s_n) \cap N(s'_n)$,

then $d_n(M_0)$ and $d'_n(M_0)$ are sequential independent.

Proof. Existence of a minimal initial digraph and its corresponding negative initial digraph guarantees coherence and compatibility. As it is the same in both cases, they are G-congruent. A derivation and any of its permutations carry out the same actions, but in a different order. Hence, their results must be isomorphic.
If two derivations (with underlying permuted sequences) are not a permutation of each other due to $\varepsilon$-productions but are confluent (their image graphs are isomorphic), then in fact it is possible to write them as a permutation of each other:

Proposition 7.3.3. If $d_n$ and $d'_n$ are sequential independent and $s'_n = \sigma(s_n)$, then $\exists \hat{\sigma}$ such that $d'_n = \hat{\sigma}(d_n)$, for some appropriate composition of $\varepsilon$-productions.

Proof. Let $T: p_\varepsilon \mapsto T(p_\varepsilon)$ be an operator acting on $\varepsilon$-productions which splits them into a sequence of $n$ productions, each with one edge.$^8$ If $T$ is applied to $d_n$ and $d'_n$ we must get the same number of $\varepsilon$-productions. Moreover, the number must be the same for every type of edge, or a contradiction can be derived, as $\varepsilon$-productions only delete elements.

Example. Define two productions $p_1$ and $p_2$, where $p_1$ deletes the edge $(2,1)$ and $p_2$ deletes node 1 and the edge $(1,3)$. Define the sequences $s_2 = p_2; p_1$ and $s'_2 = p_1; p_2$ and apply them to the graph $G$ depicted in Fig. 7.13 to get $H_n$ and $H'_n$, respectively. Note that $p_1$ and $p_2$ are not sequential independent in the sense of Sec. 7.2 with this identification.

Fig. 7.13. Sequential independence with free matching

Suppose that in $s'_2$ the match $m_2$ for production $p_2$ identifies node 1. In this case an $\varepsilon$-production $p_{\varepsilon,2}$ should appear deleting the edge $(2,1)$, transforming the concatenation into $s'_2 = p_1; p_2; p_{\varepsilon,2}$ and making $p_1$ inapplicable. If $m_2$ identifies node $1'$ instead of 1, then we have $H_n = H'_n$ with the obvious isomorphism $(1, 2, 3) \mapsto (1', 2, 3)$, getting in this case $p_2 \perp p_1$. Note that $M_0(s'_2) \in M(s_2) \cap M(s'_2)$ (see Fig. 7.14). Neither sequence $s_2$ nor $s'_2$ adds any edge, and only $p_2$ deletes a node. The negative digraph set has just one element, which has been called $K_2$, also depicted in Fig. 7.14.
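The splitting operator $T$ used in the proof can be sketched as follows. The encoding is hypothetical: an $\varepsilon$-production is modelled simply as the set of edges it deletes.

```python
# Sketch of the splitting operator T: break an epsilon-production
# (a set of edges to delete) into single-edge deletions.

def split(p_eps):
    """Return one single-edge production per deleted edge."""
    return [frozenset({edge}) for edge in sorted(p_eps)]

p = frozenset({(2, 1), (1, 3)})   # deletes two edges
parts = split(p)
print(parts)                      # two single-edge productions
print(len(parts))                 # 2
```

Comparing the multisets of single-edge productions produced for $d_n$ and $d'_n$ is then a direct count, as the proof requires.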
$^8$ More on the operator $T$ in Chap. 8. It is used in Sec. 8.3 for application conditions.

Fig. 7.14. Associated minimal and negative initial digraphs

The theory developed so far fits well here. Results for sequential independence such as Theorem 7.2.2, for coherence (Theorems 4.3.5 and 7.2.3) and for minimal and negative initial digraphs are recovered. Marking (see Sec. 6.2) can be used to freeze the place in which productions are applied. For example, if a production is advanced and we already know that there is sequential independence, any node identification across productions should be kept, because if the production were applied at a different match, sequential independence could be ruined.

7.4 Explicit Parallelism

This chapter finishes by analyzing which productions or groups of productions can be computed in parallel and what conditions guarantee this operation. Firstly we will take into account productions only, without an initial state.

Fig. 7.15. Parallel execution ($G$ transformed into $H$ either directly or through the intermediate states $X_1$, $X_2$)

In the categorical approach the definition for two productions is settled by considering the two alternative sequential ways in which they can be composed, looking for equality in their final state. Intermediate states are disregarded using the categorical coproduct of the involved productions (see Sec. 3.1). Then, the main difference between sequential and parallel execution is the existence of intermediate states in the former, as seen in Fig. 7.15. We follow the same approach, saying that it is possible to execute two productions in parallel if the result does not depend on the generated intermediate states.

Definition 7.4.1. Two productions $p_1$ and $p_2$ are said to be truly concurrent if it is possible to define their composition and it does not depend on the order:

$p_2 \circ p_1 = p_1 \circ p_2.$
(7.39)

We use the notation $p_1 \parallel p_2$ to denote true concurrency. True concurrency defines a symmetric relation, so it does not matter whether $p_1 \parallel p_2$ or $p_2 \parallel p_1$ is written. The next proposition compares true concurrency and sequential independence for two productions, in the style of the parallelism theorem -- see [11].$^9$ The proof is straightforward in our case and is not included.

Proposition 7.4.2. Let $s_2 = p_2; p_1$ be a coherent and compatible concatenation; then:

$p_1 \parallel p_2 \iff p_2 \perp p_1. \qquad (7.40)$

Proof. Assuming compatibility frees us from $\varepsilon$-productions.

So far we have considered just one production per branch when parallelizing, as represented on the left of Fig. 7.16. One way to deal with more general schemes -- center and right of the same figure -- is to test parallelism for each element in one branch against every element in the other. Consider the scheme in the middle of Fig. 7.16. The sequences $s_1 = p_6; p_5; p_4$ and $s_2 = p_3; p_2; p_1$ can be computed in parallel if there is sequential independence for every interleaving. This is true if $p_i \parallel p_j$, $i \in \{4, 5, 6\}$, $j \in \{1, 2, 3\}$. There are many combinations that keep the relative order of $s_1$ and $s_2$, for example $p_6; p_3; p_2; p_5; p_1; p_4$ or $p_3; p_6; p_2; p_5; p_1; p_4$. In order to apply these two sequences in parallel, all interleavings that maintain the relative order should have the same result.

$^9$ However, in DPO it is possible to identify elements once the coproduct has been performed, through non-injective matches.

Fig. 7.16. Examples of parallel execution

Although it is not true in general, in many cases it is not necessary to check true concurrency for every two productions. The following example illustrates the idea that is developed afterwards.
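Before turning to that example, note that the interleavings which maintain the relative order of two sequences can be enumerated mechanically. An illustrative sketch (our own aid, not from the text), using the sequences $s_1$ and $s_2$ above in application order:

```python
# Sketch: all interleavings of two sequences that keep each sequence's
# internal relative order (the "combinations" mentioned above).

def interleavings(a, b):
    if not a:
        return [list(b)]
    if not b:
        return [list(a)]
    return ([[a[0]] + rest for rest in interleavings(a[1:], b)]
          + [[b[0]] + rest for rest in interleavings(a, b[1:])])

s1 = ['p4', 'p5', 'p6']   # application order of p6;p5;p4
s2 = ['p1', 'p2', 'p3']   # application order of p3;p2;p1
ways = interleavings(s1, s2)
print(len(ways))          # 20, i.e. C(6,3) order-preserving merges
```

Parallel execution of the two branches is sound exactly when all 20 of these interleavings yield the same result.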
Example. Let the concatenation $w_4; w_3; w_2; w_1; w_0$ be given; see Fig. 7.16 (right). Some of its productions are depicted in Fig. 7.11 on p. 159. Rule $w_1$ deletes one edge, $w_2$ adds the same edge, while $w_3$ preserves it. We already know that $w_3; w_2; w_1$ is compatible and coherent and that $w_3 \perp (w_2; w_1)$. Both have the same minimal initial digraph. Following our previous study for two productions, we would like to put $w_3$ and $w_2; w_1$ in parallel, as depicted on the right of Fig. 7.16. From a sequential point of view this diagram can be interpreted in different ways, depending on how they are computed. There are three dissimilar interleavings: (1) $w_3; w_2; w_1$, (2) $w_2; w_1; w_3$ and (3) $w_2; w_3; w_1$. Any problem involving the first two possibilities is ruled out by coherence. As a matter of fact, $w_3$ and $w_2; w_1$ cannot be parallelized, because it could be the case that $w_3$ is using the edge $(1,2)$ when $w_1$ has just deleted it and before $w_2$ adds it, which is what the third case expresses, leaving the system in an inconsistent state. Thus, we have neither $w_3 \parallel w_2$ nor $w_3 \parallel w_1$ -- we do not have sequential independence -- but both $w_3; w_2; w_1$ and $w_2; w_1; w_3$ are coherent.

One possibility is to use the fact that, although it could be the case that $p_3 \not\perp p_2$, it still might be possible to advance the production with the help of another production, i.e. $p_3 \perp (p_2; p_1)$, as seen in Secs. 7.2 and 7.3. Although there are some similarities between this concept and the concurrency theorem,$^{10}$ here we rely on the possibility of characterizing production advancement or delaying inside sequences by more than just one position, hence being more general.

Theorem 7.4.3. Let $s_n = p_n; \ldots; p_1$ and $t_m = q_m; \ldots$
$; q_1$ be two compatible and coherent sequences with the same minimal initial digraph, where either $n = 1$ or $m = 1$. Suppose $r_{m+n} = t_m; s_n$ is compatible and coherent and either $t_m \perp s_n$ or $s_n \perp t_m$. Then $t_m \parallel s_n$ through composition.

Proof. Use Proposition 7.4.2.

Through composition means that the concatenation with length greater than one must be transformed into a single production using composition. This is possible because it is coherent and compatible -- refer to Prop. 5.3.4. In fact it should not be necessary to transform the whole concatenation using composition, but only the parts that present a problem. Setting $n = 1$ corresponds to advancing a production in sequential independence, while $m = 1$ corresponds to moving a production backwards inside a concatenation. In addition, in the hypothesis we ask for coherence of $r_{m+n}$ and either $t_m \perp s_n$ or $s_n \perp t_m$. In fact, if $r_{m+n}$ is coherent and $t_m \perp s_n$, then $s_n \perp t_m$. It is also true that if $r_{m+n}$ is coherent and $s_n \perp t_m$, then $t_m \perp s_n$ (it can be proved by contradiction). The idea behind Theorem 7.4.3 is to erase intermediate states through composition but, in a real system, this is not always possible or desirable, for example if those states are used for synchronization of productions or states. All of this section can be extended easily to consider derivations.

$^{10}$ See Sec. 3.1 or [22].

7.5 Summary and Conclusions

In this chapter we have studied sequences and derivations in more detail, paying special attention to sequential independence. We remark once more that certain properties of sequences can be gathered during grammar specification. This information can be used for an a-priori analysis of the graph transformation system (grammar, if an initial state is also provided) or, if properly stored, during runtime.
In essence, sequential independence corresponds to the concept of commutativity (a; b = b; a), or a generalization of it, because commutativity is defined for two elements and here we allow a or b to be sequences. It can be used to reduce the size of the state space associated to the grammar. From a theoretical or practical-theoretical point of view, sequential independence helps by reducing the combinatorics of productions in sequences or derivations. This is of interest, for example, for confluence (problem 5 on p. 9).

Besides sequential independence for concatenations and derivations, we have also studied G-congruence, which guarantees sameness of the minimal and negative initial digraphs, and explicit parallelism, useful for parallel computation.

One of the objectives of the present book is to tackle problems 2 and 3, independence and sequential independence, respectively, defined in Sec. 1.2. The whole chapter is directed to this end, but with success in the restricted case of advancing or delaying a single production an arbitrary number of positions in a sequence. This is achieved in Theorems 7.2.2 and 7.2.3, which rely on Theorem 7.1.6 (G-congruence), and also in Props. 7.3.2 and 7.3.3. These results can be generalized by addressing other types of permutations, such as advancing or delaying blocks of productions. Another possibility is to study the swap of two productions inside a sequence. It can be addressed following the same sort of development as along this chapter. Swaps of two productions are 2-cycles, and it is well known that any permutation is a product of 2-cycles.
In order to link this chapter with the next one and with Chapter 9, which deal with application conditions and restrictions on graphs, let's note that the conditions that need to be fulfilled in order to obtain sequential independence can be interpreted as graph constraints and application conditions. Graph constraints and application conditions are important both from the theoretical and from the practical points of view.

8 Restrictions on Rules

In this chapter graph constraints and application conditions (which we call restrictions) for Matrix Graph Grammars will be studied, generalizing previous approaches to this topic. For us, a restriction is just a condition to be fulfilled by some graph. This study will be completed in the following chapter.

In the literature there are two kinds of restrictions: application conditions and graph constraints. Graph constraints express a global restriction on a graph, while application conditions are normally thought of as local properties, namely in the area where the match identifies the LHS of the grammar rule. By generalizing graph constraints and application conditions we will see that they can express both local and global properties and, moreover, that application conditions are a particular case of graph constraints. It is at times advisable to speak of properties rather than restrictions.

For a given grammar, restrictions can be set either during rule application (application conditions, to be checked before the rule is applied or after it is applied) or on the shape of the state (graph constraints, which can be set on the input state or on the output state). Application conditions are important from both the practical and the theoretical points of view. On the practical side, they are convenient to concisely express properties or to synthesize productions.
They also open the possibility of acting partially on the nihilation matrix. On the theoretical side, application conditions put the left and right hand sides of a production into a new perspective. They also enlarge the scope of Matrix Graph Grammars to include multidigraphs (though this will be addressed in Chap. 9).

This book extends previous approaches using monadic second order logic (MSOL, see Sec. 2.1 for a quick overview). Section 8.1 sets the basics for graph constraints and application conditions by introducing diagrams and their semantics. In Sec. 8.2 derivations and diagrams are put together, showing that diagrams are a natural generalization of the graphs L and K (in the precondition case). Section 8.3 expresses all these results using the functional notation introduced in Sec. 6.1 (see also Sec. 2.5). We prove that any application condition is equivalent to some (set of) sequence(s) of productions. Section 8.4 closes the chapter with a summary and some more comments.

8.1 Graph Constraints and Application Conditions

A graph constraint (GC) in Matrix Graph Grammars is defined as a diagram (a set of graphs and partial injective morphisms) plus an MSOL formula.¹ The diagram is made of a set of graphs and morphisms (partial injective functions) which specify the relationship between elements of the graphs. The formula specifies the conditions to be fulfilled in order to make the host graph G satisfy the GC, i.e. we check whether G is a model for the diagram and the formula. The domain of discourse is the set of simple digraphs, and the diagram is a means to represent the interpretation function I. Recall that in essence the domain of discourse is a set of individual elements which can be quantified over. The interpretation function assigns meanings (semantics) to symbols. See Sec. 2.1 and references therein for more details.
Fig. 8.1. Application Condition on a Rule's Left Hand Side

¹ MSOL corresponds to regular languages [12], which are appropriate to express patterns.

Example. Figure 8.1 shows a diagram associated to the left hand side of a production p: L → R, matched to a host graph G by m_L. An example of associated formula can be f = ∃L ∀A₀ ∃A₁ [L ∧ (A₀ ⇒ A₁)].

We will focus on logical expressions encoding that one simple digraph is contained in another, because this is in essence what matching does. To this end, the following two predicates are introduced:

P(X₁, X₂) = ∀m [F(m, X₁) ⇒ F(m, X₂)]   (8.1)
Q(X₁, X₂) = ∃e [F(e, X₁) ∧ F(e, X₂)]   (8.2)

which rely on predicate F(m, X), "node or edge m is in digraph X", or on F(e, X), "edge e is in digraph X". Predicate P(X₁, X₂) holds if and only if X₁ ⊆ X₂, and Q(X₁, X₂) is true if and only if X₁ ∩ X₂ ≠ ∅. Formula P will deal with total morphisms and Q with non-empty partial morphisms (see graph constraint satisfaction, Def. 8.1.6).

Remark. P^E(X₁, X₂) says that every edge² in graph X₁ should also be present in X₂, so a morphism d₁₂: X₁ → X₂ is demanded. The diagram may already include one such morphism (which can be seen as restrictions imposed on function I), and we can either allow extensions of d₁₂ (relate more nodes if necessary) or keep it as defined in the diagram. This latter possibility will be represented by appending the subscript U, P^E ↦ P^E_U. Predicate P^E_U can be expressed³ using P^E:

P^E_U(X₁, X₂) = ∀a [¬(F(a, D) ⊕ F(a, coD))] ∧ P^E(D, coD) ∧ P^E(D^c, coD^c)   (8.3)

where D = Dom(d₁₂), coD = coDom(d₁₂), c stands for the complement (D^c is the complement of Dom(d₁₂) with respect to X₁) and ⊕ is the xor operation. For example, following the notation in Fig.
8.5, P_U(A₁, A₀) would mean that it is not possible to further relate any element apart from 1 between A₀ and A₁. This could only happen when A₀ and A₁ are matched in the host graph.

² Mind the superscript E in P^E. As in previous chapters, an E superscript means edge and an N superscript stands for node.

³ Non-extensible existence of d₁₀ for a graph constraint is ∀x ∈ A₀, ∀y ∈ A₁, m_{A₀}(x) = m_{A₁}(y) ⇒ y = d₁₀(x), with notation as in Fig. 8.5. In words: when elements are matched in the host graph (or in other graphs through different d_{ij}), elements unrelated by d₁₀ remain unrelated.

P_U will be used as a means to indicate that elements not related by their morphisms in the diagram must remain unrelated. These relationships (forbidden according to P_U) could be specified either by other morphisms in the diagram or by matches in the host graph. For example, two unrelated nodes of the same type in different graphs of the diagram can be identified as the same node by the corresponding matches in the host graph. Hence, even though not explicitly specified, there would exist a morphism relating these nodes in the diagram. P_U prevents this side effect of matches. The same can happen if there is a chain of morphisms in the diagram such as A₀ → A₁ → A₂: there might exist an implicit unspecified morphism A₀ → A₂.

Fig. 8.2. Example of Diagram (A₀ contains a machine; A₁ a machine with a conveyor; both related by morphism d)

Example. Before starting with formal definitions, we give an intuition of GCs. The following GC is satisfied if for every A₀ in G it is possible to find a related A₁ in G: ∀A₀ ∃A₁ [A₀ ⇒ A₁], equivalent by definition to ∀A₀ ∃A₁ [P(A₀, G) ⇒ P(A₁, G)]. Nodes and edges in A₀ and A₁ are related through the diagram shown in Fig. 8.2, which relates elements with the same number and type.
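The containment semantics of P and Q can be prototyped directly. The encoding below (digraphs as plain edge sets, node handling omitted) is ours, for illustration only:

```python
# P and Q over digraphs encoded as edge sets (a simplification of ours).

def P(X1, X2):
    # P(X1, X2) holds iff every element of X1 is also in X2, i.e. X1 ⊆ X2
    return all(m in X2 for m in X1)

def Q(X1, X2):
    # Q(X1, X2) holds iff some edge lies in both, i.e. X1 ∩ X2 ≠ ∅
    return any(e in X2 for e in X1)

A0 = {(1, 2)}                       # say, a machine feeding a conveyor
G = {(1, 2), (2, 3)}

assert P(A0, G)                     # A0 contained in G: a total morphism
assert Q(G, A0) and not P(G, A0)    # overlap, but G is not inside A0
```

P thus models total morphisms and Q non-empty partial ones, as used in Def. 8.1.6 below.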
As a notational convenience, to enhance readability, each graph in the diagram has been marked with the quantifier given in the formula. The graph constraint in Fig. 8.2 expresses that each machine should have an output conveyor.

It is interesting for restrictions to be able to express negative conditions, that is, to express that some elements should not be present in the host graph. By elements we mean nodes, edges or both. When some elements are requested not to exist in G, one possibility is to find them in the complementary graph. To this end we will define a structure Ḡ = ⟨Ḡ^E, Ḡ^N⟩ that in first instance consists of the negation of the adjacency matrix of G and the negation of its vector of nodes. We speak of a structure because the negation of a digraph is not a digraph: in general, compatibility fails for Ḡ.⁴

Although it has been commented on already, we insist on the difference between completion and negation of the adjacency matrix. The complement of a graph coincides with the negation of the adjacency matrix but, while negation is just the logical operation, taking the complement means that a completion operation has been performed before. Hence, taking the complement of a matrix G is the negation with respect to some appropriate completion of G. As long as no confusion arises, negation and complement will not be syntactically distinguished. The graph with respect to which the completion (if any) is performed will not be explicitly written from now on.

Fig. 8.3. Finding Complement and Negation

Example. Suppose we have two graphs A and G as those depicted in Fig. 8.3, and that we want to check that A is not in G. Note that A is not contained in G (node 3 does not even appear), but it does appear in the negation of the completion of G with respect to A (graph Ḡ^A in the same figure).
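The completion-then-negation step of the example can be mimicked on edge sets; the helper names below are ours, and the two tiny graphs only echo the spirit of Fig. 8.3:

```python
# Negation vs. complement on a toy pair of graphs: the complement negates
# the adjacency matrix *after* completing the node set w.r.t. the other graph.

def negate(nodes, edges):
    # logical negation: every absent pair (self-loops included) becomes present
    return {(u, v) for u in nodes for v in nodes if (u, v) not in edges}

A_nodes, A_edges = {1, 3}, {(1, 3)}
G_nodes, G_edges = {1, 2}, {(1, 2)}

# A is not contained in G: node 3 does not even appear in G.
assert not A_nodes <= G_nodes

# Completion of G w.r.t. A enlarges the node set; edges are unchanged.
completed_nodes = G_nodes | A_nodes

# A does appear in the negation of the completed graph.
assert A_edges <= negate(completed_nodes, G_edges)
```

Without the completion, the negation of G would not even mention node 3, so the containment check could not succeed.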
The notation (syntax) will be alleviated a bit more by making the host graph G the default second argument of predicates P and Q. Besides, it will be assumed that by default total morphisms are demanded, that is, predicate P is assumed unless otherwise stated. Our proposal to simplify the notation is to omit G and P in these cases. Also, it is not necessary to repeat quantifiers that appear together, e.g. ∀A₀ ∃A₁ ∃A₂ ∀A₃ can be abbreviated as ∀A₀ ∃A₁A₂ ∀A₃.

Example. A sophisticated way of demanding the existence of one graph, ∃A [A], is:

⁴ In Chap. 4 a matrix for edges and a vector for nodes were introduced to differentiate one from the other, mainly because operations could be performed on nodes or on edges. Recall that compatibility relates both of them, and completion permits operations on matrices of different sizes (with a different number of nodes).

∃A^N ∃A^E [P(⟨A^N, A^E⟩) ∧ A^N ∧ A^E]

which reads: it is possible to find in G the set of nodes of A and its set of edges in the same place (the term P(⟨A^N, A^E⟩)). In this case it is possible to use the universal quantifier instead, as there is a single occurrence of A^N in A^E up to isomorphism: ∀A^N ∃A^E [P(⟨A^N, A^E⟩) ∧ A^N ∧ A^E].

As another example, the following graph constraint is fulfilled if for every A₀ in G it is possible to find a related A₁ in G:

∀A₀ ∃A₁ [A₀ ⇒ A₁]   (8.4)

which by definition is equivalent to

∀A₀ ∃A₁ [P(A₀, G) ⇒ P(A₁, G)]   (8.5)

These syntax simplifications just try to streamline the most commonly used rules. Negations inside abbreviations must be applied to the corresponding predicate, e.g. ∃Ā [Ā] = ∃A [¬P(A, G)]; Ā is not the negation of A's adjacency matrix. For the case of edges, the following identity is fulfilled:

¬P^E(A, G) = Q(A, Ḡ^E)   (8.6)
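Identity (8.6) can be spot-checked on small examples once both graphs are completed to a common node set. The toy encoding below is ours: A fails to be contained in G exactly when A shares an edge with the negation of G.

```python
# ¬P^E(A, G) = Q(A, negation of G), with everything over nodes {1, 2, 3}.
nodes = {1, 2, 3}
G = {(1, 2)}
G_neg = {(u, v) for u in nodes for v in nodes if (u, v) not in G}

def P(X1, X2):
    return X1 <= X2          # edge-set containment

def Q(X1, X2):
    return bool(X1 & X2)     # non-empty edge intersection

for A in [{(1, 2)}, {(1, 3)}, {(1, 2), (3, 1)}]:
    assert (not P(A, G)) == Q(A, G_neg)
```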
The part that takes care of the nodes is easier, so from now on we will mainly concentrate on edges and adjacency matrices.⁵

A bit more formally, the syntax of well-formed formulas is defined inductively as in monadic second-order logic, which is first-order logic plus variables for subsets of the domain of discourse. Across this chapter, formulas will normally have one variable term G, which represents the host graph. Usually, the rest of the terms will be given (they will be constant terms). Predicates will consist of P and Q and combinations of them through negation and binary connectives. The next definition formally presents the notion of diagram.

⁵ Using the tensor product it is possible to embed the node vector into the adjacency matrix. This is not used in this book except in Chap. 10. See the definition of the incidence tensor in Sec. 10.3.

Definition 8.1.1 (Diagram) A diagram d is a set of simple digraphs {Aᵢ}_{i∈I} and a set of partial injective morphisms {d_k}_{k∈K}, d_k: Aᵢ → A_j. We will say that a diagram is well defined if every cycle of morphisms commutes.

To illustrate well-definedness, consider the diagram of Fig. 8.4. The node typed 2 has two different images, 2₂ and 2₃, depending on whether the morphism d₁₂ ∘ d₀₁ is considered or d₀₂. There would be an inconsistency if d₀₁(2) = 2₁, d₀₂(2) = 2₃ and d₁₂(2₁) = 2₂, because d₁₂ ∘ d₀₁(2) = 2₂ while d₀₂(2) = 2₃. Notice that node 2 would have two different images, and we have imposed by hypothesis that all morphisms must be injective.

Fig. 8.4. Non-Injective Morphisms in Application Condition

The term ground formula will mean an MSO closed formula which uses P and Q with constant nodes (i.e. nodes of a concrete type which can be matched with nodes of the same type). The formulae in the constraints use variables in the set {Aᵢ}_{i∈I}, and predicates P and Q.
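Well-definedness of a diagram (every cycle of morphisms commutes) is mechanically checkable. Below, a sketch of the Fig. 8.4 situation with morphisms as Python dicts; the encoding and names are ours:

```python
# Morphisms as finite maps; composition applies one map after the other.

def compose(g, f):
    # (g ∘ f)(x) = g(f(x)) wherever both maps are defined
    return {x: g[f[x]] for x in f if f[x] in g}

d01 = {2: "2_1"}          # node 2 to its image in A1
d12 = {"2_1": "2_2"}      # and on to A2
d02 = {2: "2_3"}          # direct morphism, inconsistent with the path

via_A1 = compose(d12, d01)             # sends 2 to "2_2"
commutes = all(d02.get(x) == y for x, y in via_A1.items())
assert not commutes       # the diagram of Fig. 8.4 is not well defined
```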
Formulae are restricted to have no free variables except for the default second argument of predicates P and Q, which is the host graph G in which we evaluate the GC. The next definition presents the notion of GC.

Definition 8.1.2 (Graph Constraint) GC = (d({Aᵢ}_{i∈I}, {d_j}_{j∈J}), f) is a graph constraint, where d is a well defined diagram and f a sentence with variables in {Aᵢ}_{i∈I}. A constraint is called basic if |I| = 2 (with one bound variable and one free variable) and J = ∅.

In general, there will be an outstanding variable among the Aᵢ representing the host graph, being the only free variable in f. In previous paragraphs it has been denoted by G, the default second argument of predicates P and Q. We sometimes speak of a "GC defined over G". A basic GC will be one made of just one graph and no morphisms in the diagram (recall that the host graph is not represented by default in the diagram, nor included in the formulas). For now we will limit ourselves to ground formulas; it will not be until Sec. 9.3 that variable nodes are considered. A variable node is one whose type is not specified.

How graph constraints can be expressed using diagrams and logic formulas will be illustrated with some examples⁶ throughout this section, comparing with the way they would be written using FOL and MSOL.

Fig. 8.5. At Most Two Outgoing Edges

Example (at most two outgoing edges). Let's characterize graphs in which every node of type 1 has at most two outgoing edges. Using FOL:

f₁ = ∀y₁, y₂, y₃ [edg(1, y₁) ∧ edg(1, y₂) ∧ edg(1, y₃) ⇒ y₁ = y₂ ∨ y₁ = y₃ ∨ y₂ = y₃]   (8.7)

where the function edg(x, y) is true if there exists an edge starting in node x and ending in node y. In our case, we consider the diagram to the left of Fig.
8.5 together with the formula:

f₁ = ∀A₀ ∄A₁ [A₀ ⇒ (A₁ ∧ P_U(D, coD))]   (8.8)

where D = Dom(d₁₀) and coD = coDom(d₁₀). There must be two total injective morphisms m_{A₀}: A₀ → G, m_{A₁}: A₁ → G and a partial injective morphism m_{A₁A₀}: A₁ → A₀ which does not extend d₁₀ (m_{A₁A₀} = d₁₀), i.e. elements of type 1 are related and variables y₁ and y₂ remain unrelated with y₃. Hence, two outgoing edges are allowed but not three. In this case it is also possible to consider the diagram to the right of Fig. 8.5 together with the much simpler formula f₂ = ∄A₂ [A₂]. This form will be used when the theory is extended to cope with multidigraphs in Sec. 9.3.

⁶ The examples "at most two outgoing edges" below and "3-vertex colorable graph" on p. 182 have been adapted from [12].

A graph constraint is a limitation on the shape of a graph, i.e. on what elements it is made up of. This is something that can always be demanded of any graph, irrespective of the existence of a grammar or rule. This is not the case for application conditions, which need the presence of productions. In the following few paragraphs, application conditions will be introduced. From the definition it is not difficult to see application conditions as a particular case of graph constraints in this framework: one of the graphs in the diagram is the rule's LHS (existentially quantified over the host graph) and another one is the graph induced by the nihilation matrix (existentially quantified over the negation of the host graph).

Definition 8.1.3 (Weak Precondition) Given a production p: L → R with nihilation matrix K, a weak precondition is a graph constraint over G satisfying:

1. ∃! i, j such that Aᵢ = L and A_j = K.
2. ∃! k such that A_k = G is the only free variable.
3. f must demand the existence of L in G and the existence of K in Ḡ^E.
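Condition 3 of Def. 8.1.3, like formula (8.11) later on, reduces to two containment checks once graphs are encoded as edge sets. This is our toy encoding again, and the L and K below are made up purely for illustration:

```python
# A weak precondition minimally demands L inside G and the nihilation
# matrix K inside the negation of G (edges that must be absent).
nodes = {1, 2, 3}
G = {(1, 2), (2, 3)}
G_neg = {(u, v) for u in nodes for v in nodes if (u, v) not in G}

L = {(1, 2)}            # hypothetical LHS edges the rule needs present
K = {(1, 1), (2, 1)}    # hypothetical forbidden edges

def P(X1, X2):
    return X1 <= X2

weak_precondition_holds = P(L, G) and P(K, G_neg)
assert weak_precondition_holds
```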
The simple digraph G can be thought of as a host graph to which some grammar rules are to be applied. For simplicity, we usually do not explicitly show condition 3 in the formulae of ACs, nor the nihilation matrix K in the diagram. However, even if omitted, both L and K are existentially quantified before any other graph of the AC. Thus, an AC has the form ∃L ∃K … [L ∧ P(K, Ḡ^E) ∧ …].

For technical reasons to be clarified in Sec. 9.2, it is better not to have morphisms whose codomains are L or K, for example dᵢ: Aᵢ → L or d_j: A_j → K. This is not a big issue, as we may always use their inverses due to the dᵢ's injectivity, i.e. one may consider dᵢ⁻¹: L → Aᵢ and d_j⁻¹: K → A_j instead.

Note the similarities between Def. 8.1.3 and that of derivation in Sec. 6.1.2. Actually, this definition interprets the left hand side of a production and its nihilation matrix as a weak precondition. Hence, any well defined production has a natural associated weak precondition.

Starting from the definition of weak precondition, we define weak postconditions similarly but using the comatch m_R: R → H, H = p(G). A precondition is a weak precondition plus a match m_L: L → G and, symmetrically, a postcondition is a weak postcondition plus a comatch m_R: R → H. Every production naturally specifies a weak postcondition: elements that must be present are those found in R, while e ∨ D should not be found by the comatch. Weak application conditions, weak preconditions and weak postconditions permit the specification of restrictions at the grammar definition stage with no need for matches, as in Chaps. 4 and 5.

Definition 8.1.4 ((Weak) Application Condition) For a production p, a (weak) application condition is a (weak) precondition plus a (weak) postcondition, AC = (AC_L, AC_R).

Fig. 8.6. Example of Precondition Plus Postcondition

Example.
Figure 8.6 depicts a production with diagram d_LHS = {A} for its LHS and diagram d_RHS = {B} for its RHS. If the associated formula for d_LHS is f_LHS = ∃L ∃A [L ∧ Ā], then there are two different possibilities depending on how the morphism d_A is defined:

1. d_A identifies node 1 in L and A. Whenever L is matched in a host graph there cannot be at least one A, i.e. at least for one matching of A (with node 1 in common with L) in the host graph, either edge (1,1) or edge (1,3) is missing.
2. d_A does not identify node 1 in L and A. This does not necessarily mean that they must be different when matched in an actual host graph. Now, it is sufficient not to find one A, which would be fine for any match of L in the host graph.

Recall that the interpretation of the quantified parts ∃L and ∃A is, respectively, to find nodes 1 and 2, and nodes 1 and 3 (edges too). In the first bullet above, both nodes 1 must coincide, while in the second case they may coincide or they may be different.

The story varies if the formula f_LHS = ∃L ∀A [L ∧ Ā] is considered. There are again two cases, but now:

1. d_A identifies node 1 in L and A. No other node 3 can be linked to node 1 if it has a self loop.
2. d_A does not identify node 1 in L and A. The same as above, but now both nodes 1 need not be the same.

A similar interpretation can be given to the postcondition d_RHS together with the formulae f_RHS = ∃R ∃B [R ∧ B̄] and f_RHS = ∃R ∀B [R ∧ B̄].

Remark (local vs. global properties). As commented in the introduction of this chapter, graph constraints are normally thought of as global conditions on the entire graph, while application conditions are local properties, defined in the neighborhood of the match (and usually not beyond).
In our setting, the use of quantifiers on restrictions permits "local" graph constraints and "global" application conditions: the first by using existential quantifiers (as soon as the restriction is fulfilled in one piece of the host graph, the graph constraint is fulfilled) and the latter through universal quantifiers (the application condition must be fulfilled for every potential match).

Remark (semantics of quantification). In GCs or ACs, graphs are quantified either existentially or universally. We now give the intuition of the semantics of such quantification applied to basic formulae. Thus, we consider four cases: (i) ∃A [A], (ii) ∀A [A], (iii) ∄A [A], (iv) ∀̸A [A].

Case (i) states that a graph A should be found in G. For example, in Fig. 8.7, the GC ∃opMachine [opMachine] demands an occurrence of opMachine in G (which exists).

Case (ii) demands that, for all potential occurrences of A in G, the shape of graph A is actually found. The term potential occurrences means all distinct maximal partial matches⁷ (which are total on nodes) of A in G. A non-empty partial match in G is maximal if it is not strictly included in another partial or total match. For example, consider the GC ∀opMachine [opMachine] in the context of Fig. 8.7. There are two possible instantiations of opMachine (as there are two machines and one operator), and these are the two input elements to the formula. As only one of them satisfies P(opMachine, G) (the expanded form of [opMachine]), the GC is not satisfied by G.

⁷ A match is partial if it does not identify all nodes or edges of the source graph. The domain of a partial match should be a graph.

Fig. 8.7.
Quantification Example (opMachine: an operator attending a machine; G contains one operator and two machines, among other elements)

Case (iii) demands that, for all potential occurrences of A, none of them has the shape of A. The term potential occurrence has the same meaning as in case (ii). In Fig. 8.7, there are two potential instantiations of the GC ∄opMachine [opMachine]. As one of them actually satisfies P(opMachine, G), the formula is not satisfied by G.

Finally, case (iv) is equivalent to ∃A [Ā], where by definition Ā = ¬P(A, G). This GC states that, among all possible instantiations of A, one of them does not have the shape of A. This means that a non-empty partial morphism should be found from A to G. In Fig. 8.7, the GC ∀̸opMachine [opMachine] is satisfied by G because, again, there are two possible instantiations, and one of them actually does not have an edge between the operator and the machine.

Some notation for the sets of morphisms and isomorphisms between two graphs is needed in order to interpret basic constraint satisfaction:

par_max(Aᵢ, A_j) = { f: Aᵢ → A_j | f maximal non-empty partial morphism with Dom(f)^N = Aᵢ^N }
tot(Aᵢ, A_j) = { f: Aᵢ → A_j | f is a total morphism }, with tot(A, G) ⊆ par_max(A, G)
iso(Aᵢ, A_j) = { f: Aᵢ → A_j | f is an isomorphism }, with iso(A, G) ⊆ tot(A, G)

where Dom(f)^N are the nodes of the graph in the domain of f. Thus, par_max(A, G) denotes the set of all potential occurrences of a given constraint graph A in G (where we require all nodes of A to be present in the domain of f). Note that each f ∈ par_max may be empty on edges.

Definition 8.1.5 (Basic Constraint Satisfaction) The four most basic graph constraint satisfactions are:

• Graph G satisfies ∃A [A] iff ∃f ∈ par_max(A, G) | f ∈ tot(A, G).
• Graph G satisfies ∀A [A] iff ∀f ∈ par_max(A, G), f ∈ tot(A, G).
• Graph G satisfies ∄A [A] iff ∀f ∈ par_max(A, G), f ∉ tot(A, G).
• Graph G satisfies ∀̸A [A] iff ∃f ∈ par_max(A, G) | f ∉ tot(A, G).

The diagrams associated to the formulas in the previous definition have been omitted for simplicity, as they consist of a single element: A. Recall that by default predicate P is assumed, as well as G as second argument; e.g. the first formula in the previous definition, ∃A [A], is actually ∃A [P(A, G)]. In fact, only the first two cases are needed, because ∄A [P(A, G)] = ∀A [¬P(A, G)] and ∀̸A [P(A, G)] = ∃A [¬P(A, G)].

Given a graph G and a graph constraint GC, the next step is to state when G satisfies GC. The definition also applies to application conditions.

Definition 8.1.6 (Graph Constraint Satisfaction) We say that d₀ = ({Aᵢ}, {d_j}) satisfies the graph constraint GC = (d({Xᵢ}, {d_j}), f) under the interpretation function I, written (I, d₀) ⊨ f, if d₀ is a model for f that satisfies the element relations⁸ specified by the diagram d, and the following interpretation for the predicates in f:

1. I(P(Xᵢ, X_j)) = ∃ m_T: Xᵢ → X_j total injective morphism.
2. I(Q(Xᵢ, X_j)) = ∃ m_P: Xᵢ → X_j partial injective morphism, non-empty on edges.

where m_T|_D = d_k = m_P|_D,⁹ with d_k: Xᵢ → X_j and D = Dom(d_k). The interpretation of quantification is as in Def. 8.1.5 but setting Xᵢ and X_j instead of A and G, respectively.

⁸ As any mapping, d_j assigns elements in the domain to elements in the codomain. Elements so related should be mapped to the same element. For example, let a ∈ X₁ and d_{1i}: X₁ → Xᵢ with b = d₁₂(a) and c = d₁₃(a). Further, assume d₂₃: X₂ → X₃; then d₂₃(b) = c.

⁹ It can be the case that Dom(m_P) ∩ Dom(d_k) = ∅.

Recall that we say that a morphism is total if its domain coincides with the initial set, and partial if it is a proper subset.

Remark.
There cannot exist a model if there is any contradiction in the definition of the graph constraint. A contradiction is to ask for an element to appear in G and also to be in Ḡ. In the case of an application condition, some contradictions are avoidable while others are not. We will return to this point in Sec. 8.2 with an example and appropriate definitions.

The four basic constraint satisfactions of Def. 8.1.5 can be written G ⊨ ∃A [A], G ⊨ ∀A [A], G ⊨ ∄A [A] and G ⊨ ∀̸A [A]. The notation deserves the following comments:

1. The notation (I, d₀) ⊨ f means that the formula f is satisfied under the interpretation given by I, assignments given by the morphisms specified in d₀, and substituting the variables in f with the graphs in d₀.
2. As commented after Def. 8.1.2, in many cases the formula f will have a single variable (the one representing the host graph G), and the interpretation function will always be that given in Def. 8.1.6. We may thus write G ⊨ f. The notation G ⊨ GC may also be used.
3. Similarly, as an AC is just a GC where L, K and G are present, we may write G ⊨ AC. For practical purposes, we are interested in checking whether, given a host graph G, a certain match m_L: L → G satisfies the AC. In this case we write (G, m_L) ⊨ AC. In this way, the satisfaction of an AC by a match and a host graph is like the satisfaction of a GC by a graph G, where a morphism m_L is already specified in the diagram of the GC.

Example (3-vertex colorable graph). In order to express that a graph G is 3-vertex colorable we need to state two basic facts. First, every single node belongs to one of three disjoint sets, called X₁, X₂ and X₃ (the first three lines in formula (8.9)). Second, every two nodes joined by an edge must belong to different Xᵢ, i = 1, 2, 3, which is stated in the last two lines of (8.9).
Using MSOL:

Fig. 8.8. Diagram for Three Vertex Colorable Graph Constraint

f₂ = ∃X₁, X₂, X₃ [∀x (x ∈ X₁ ∨ x ∈ X₂ ∨ x ∈ X₃) ∧
     ∀x (ψ(x, X₁, X₂, X₃) ∧ ψ(x, X₂, X₁, X₃) ∧ ψ(x, X₃, X₂, X₁)) ∧
     ∀x, y (edg(x, y) ∧ (x ≠ y) ⇒ φ(x, y, X₁) ∧ φ(x, y, X₂) ∧ φ(x, y, X₃))]   (8.9)

where

ψ(x, X, Y, Z) = [x ∈ X ⇒ x ∉ Y ∧ x ∉ Z]
φ(x, y, X) = ¬[x ∈ X ∧ y ∈ X] = [x ∉ X ∨ y ∉ X].

In our case, we consider the diagram of Fig. 8.8 and the formula

f₂ = ∃X₁ ∃X₂ ∃X₃ ∀A_x ∄A_y [⋀ᵢ₌₁³ Xᵢ ⇒ [A ∧ A_y]]   (8.10)

where A = (P(A_x, X₁) ⊕ P(A_x, X₂) ⊕ P(A_x, X₃)). The digraphs Xᵢ split G into three disjoint subsets (the three colors) through predicate A, which states the disjointness of the Xᵢ and, together with the rest of the clause, the coverability of G, G = X₁ ∪ X₂ ∪ X₃.

Example. Figure 8.9 shows rule contract, with an AC given by the diagram in the figure (where morphisms identify elements with the same type and number, a convention followed throughout the book), together with the formula ∃L ∄bMach ∀bOp [L ∧ bMach ∧ bOp]. The rule creates a new operator and assigns it to a machine. The rule can be applied if there is a match of the LHS (a machine is found), the machine is not busy (∄bMach [bMach]), and all operators are busy (∀bOp [bOp]). Graph G to the right satisfies the AC, with the match that identifies the machine in the LHS with the machine in G with the same number.

Fig. 8.9. Satisfaction of Application Condition

Using the terminology of ACs in the algebraic approach [22], ∄bMach [bMach] is a negative application condition (NAC).
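The maximal-partial-match semantics of Def. 8.1.5 can be prototyped by brute force: enumerate the injective, type-preserving node maps of A into G (the potential occurrences) and test which of them also preserve every edge (the total ones). The encoding below reproduces the counts of the Fig. 8.7 discussion; all names in it are our own illustrative choices.

```python
from itertools import permutations

def occurrences(A_nodes, A_edges, G_nodes, G_edges, typ):
    # potential occurrences: injective, type-preserving maps total on nodes
    out = []
    for image in permutations(sorted(G_nodes), len(A_nodes)):
        f = dict(zip(A_nodes, image))
        if all(typ[a] == typ[f[a]] for a in A_nodes):
            out.append(f)
    return out

def is_total(f, A_edges, G_edges):
    # an occurrence is total when every edge of A is preserved as well
    return all((f[u], f[v]) in G_edges for (u, v) in A_edges)

# Fig. 8.7's situation: one operator, two machines, one attends-edge.
typ = {"o": "Op", "m": "Mach", "op1": "Op", "m1": "Mach", "m2": "Mach"}
A_nodes, A_edges = ["o", "m"], [("o", "m")]
G_nodes, G_edges = {"op1", "m1", "m2"}, {("op1", "m1")}

occ = occurrences(A_nodes, A_edges, G_nodes, G_edges, typ)
totals = [is_total(f, A_edges, G_edges) for f in occ]
assert len(occ) == 2       # two potential instantiations of opMachine
assert any(totals)         # G satisfies ∃A[A]
assert not all(totals)     # but not ∀A[A], matching the text
```

The other two cases follow for free: ∄A[A] is `not any(totals)` and ∀̸A[A] is `not all(totals)`.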
On the other hand, there is nothing equivalent to $\forall bOp\,[bOp]$ in the algebraic approach, although in this case it could be emulated by a diagram made of two graphs stating that if an operator exists then it does not have a self-loop. However, this is not possible in all cases, as the next example shows.

Fig. 8.10. Example of Application Condition.

Example. Figure 8.10 shows rule move, which has an application condition with formula:

$$\exists Cv\, \forall AllC\, \exists out\, \exists next\, \big[\,(AllC \wedge out) \Rightarrow (next \wedge Cv)\,\big].$$

As previously stated, in this example and the following ones the rule's LHS and the nihilation matrix are omitted in the AC's formula. The example AC checks whether all conveyors connected to conveyor 1 in the LHS reach a common target conveyor in one step. We can use "global" information, as graph $Cv$ has to be found in $G$ and then all output conveyors are checked to be connected to it ($Cv$ is existentially quantified in the formula before the universal). Note that we first obtain all possible conveyors ($\forall AllC$). As the identifications of the morphism $L \to AllC$ have to be preserved, we consider only those potential instances of $AllC$ whose $1{:}Conveyor$ equals $1{:}Conveyor$ in $L$. From these, we take those that are connected ($\exists out$), which therefore have to be connected with the conveyor identified by the LHS. Graph $G$ satisfies the AC, while graph $G'$ does not, as the target conveyor connected to 5 is not the same as the one connected to 2 and 4.
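The $\exists Cv\, \forall AllC$ pattern above can be paraphrased procedurally: first fix a candidate common target $Cv$, then check every conveyor connected to conveyor 1. A minimal sketch under a hypothetical edge-set encoding of our own (the typed graphs of Fig. 8.10 are flattened to pairs of conveyor numbers):

```python
# Reading of the AC of rule `move`: there exists one conveyor Cv that EVERY
# out-neighbour of `source` reaches in one step. Toy encoding, names ours.
def common_target(edges, source):
    neigh = {v for (u, v) in edges if u == source}      # the AllC instances
    candidates = {v for (u, v) in edges if u in neigh}  # possible choices of Cv
    return any(all((n, c) in edges for n in neigh) for c in candidates)

G_edges  = {(1, 2), (1, 4), (2, 3), (4, 3)}   # 2 and 4 share target 3
G1_edges = {(1, 2), (1, 4), (2, 3), (4, 5)}   # no common target
print(common_target(G_edges, 1), common_target(G1_edges, 1))   # True False
```

Note the quantifier order: the single `any` outside the `all` is exactly the "$\exists Cv$ before $\forall AllC$" reading that makes the condition global.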
Despite our best efforts, it has not been possible to express this condition using the standard ACs of the DPO approach given in [22].

8.2 Embedding Application Conditions into Rules

The question of whether our definition of direct derivation is powerful enough to deal with application conditions (from a semantical point of view) will be settled by Theorem 8.2.3 and Corollary 8.2.4 in this section. It is necessary to check that direct derivations can be the codomain of the interpretation function, i.e. "MGG + AC = MGG" and "MGG + GC = MGG". Note that a direct derivation in essence corresponds to the formula

$$\exists L\, \exists K\, \big[\, L \wedge P(K, \overline{G}^E) \,\big] \qquad (8.11)$$

but additional application conditions (AC) may represent much more general properties, due to universal quantifiers and partial morphisms. Normally, for different reasons, other approaches to graph transformation do not care, at the rule specification level, about elements that cannot be present. If so, a direct derivation would be as simple as:

$$\exists L\, [L]. \qquad (8.12)$$

Thus, one way to embed ACs into grammar rules is to seek a means of translating universal quantifiers and partial morphisms into existential quantifiers and total morphisms. To this end, we introduce two operations on basic diagrams: closure ($C$) and decomposition ($D$). The first deals with universal quantifiers and the second with partial morphisms. In some sense they are complementary (compare equations (8.13) and (8.14)).

The closure operator converts a universal quantification into a number of existentials, as many as there are maximal partial matches in the host graph (see Definition 8.1.5). Thus, given a host graph $G$, demanding the universal appearance of graph $A$ in $G$ is equivalent to asking for the existence of as many replicas of $A$ as there are partial matches of $A$ in $G$.

Definition 8.2.1 (Closure). Given the GC $(d, f)$ with diagram $d = \{A\}$, ground formula $f = \forall A\,[A]$ and a host graph $G$, the result of applying $C$ to the GC is calculated as follows:

$$d \longmapsto d' = \{A_1, \ldots, A_n\}, \qquad d_{ij} : A_i \to A_j$$
$$f \longmapsto f' = \exists A_1 \ldots \exists A_n \Bigg[ \bigwedge_{i=1}^{n} A_i \;\wedge \bigwedge_{i, j = 1,\; j > i}^{n} P_U(A_i, A_j) \Bigg] \qquad (8.13)$$

with $A_i = A$, $d_{ij} \notin iso(A_i, A_j)$, $C(GC) = GC' = (d', f')$ and $n = |par_{max}(A, G)|$.

The condition that morphism $d_{ij}$ must not be an isomorphism means that at least one element of $A_i$ and $A_j$ will be identified in different places of $G$. This is accomplished by means of predicate $P_U$ (see its definition in equation (8.3)), which ensures that the elements not related by $d_{ij} : A_i \to A_j$ are not related in $G$.

Fig. 8.11. (a) GC Diagram. (b) Graph to which the GC Applies. (c) Closure of the GC.

Example. Assume the diagram to the left of Fig. 8.11, made up of just graph $gen$, together with formula $\forall gen\,[gen]$, and graph $G$, where such a GC is to be evaluated. The GC asks $G$ for the existence of all potential connections between each generator and each conveyor. Performing closure we obtain

$$C\big((\{gen\},\ \forall gen\,[gen])\big) = \big(d_C,\ \exists gen_1\, \exists gen_2\, \exists gen_3\, [\,gen_1 \wedge gen_2 \wedge gen_3 \wedge P_U(gen_1, gen_2) \wedge P_U(gen_1, gen_3) \wedge P_U(gen_2, gen_3)\,]\big),$$

where diagram $d_C$ is shown to the right of Fig. 8.11, and each $d_{ij}$ identifies elements with the same number and type. The closure operator makes explicit that three potential occurrences must be found (as $|par_{max}(gen, G)| = 3$), thus taking information from the graph where the GC is evaluated and placing it in the GC itself.
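The counting step of closure can be illustrated with a small script. The sketch below uses a hypothetical encoding of our own (typed nodes only, no edges), so it deliberately ignores the non-isomorphism bookkeeping that $P_U$ performs; it merely reproduces the count $n = |par_{max}(gen, G)| = 3$ of the example of Fig. 8.11.

```python
# Closure replaces the universal "for all gen" by n existentials, one per
# potential match of A in G. For Fig. 8.11: A = one Generator linked to one
# Conveyor, and G holds one Generator and three Conveyors, so n = 3.
from itertools import product

def potential_matches(a_nodes, g_nodes_by_type):
    """Type-preserving node assignments of A into G -- a crude stand-in for
    par_max; P_U-style non-isomorphism checks are omitted."""
    choices = [g_nodes_by_type[t] for (_, t) in a_nodes]
    return [dict(zip([n for (n, _) in a_nodes], pick))
            for pick in product(*choices)]

A = [("1", "Generator"), ("1'", "Conveyor")]
G = {"Generator": ["g1"], "Conveyor": ["c1", "c2", "c3"]}

replicas = potential_matches(A, G)
print(len(replicas))   # 3 replicas, i.e. the conjunction gen_1 AND gen_2 AND gen_3
```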
There is another example right after the definition of the decomposition operator, on p. 188. The interpretation of the closure operator is that demanding the universal appearance of a graph is equivalent to the existence of all of its potential instances in the specified digraph ($G$, $\overline{G}$ or whatever). Whenever the nodes of $A$ are identified in $G$, the edges of $A$ must also be found. Therefore, each $A_i$ contains the image of a possible match of $A$ in $G$ (there are $n$ possible occurrences of $A$ in $G$) and $d_{ij}$ identifies elements considered equal.

Now we turn to decomposition. The idea behind it is to split a graph into its components so as to transform partial morphisms into total morphisms of one of its parts. If nodes were considered the building blocks of graphs for this purpose, then whenever two graphs shared a node of the same type there would be a partial match between them, irrespective of the links established by the edges of the graphs. Also, as stated above, we are more interested in the behavior of edges (which to some extent comprise nodes as source and target elements, except for isolated nodes) than in nodes alone, as edges define the topology of the graph. These are the reasons why the decomposition operator $D$ is defined to split a digraph $A$ into its edges, generating as many digraphs as there are edges in $A$. (This is also why predicate $Q$ was defined to be true in the presence of a partial morphism non-empty in edges.)

If so desired, in order to consider isolated nodes, it is possible to define two decomposition operators, one for nodes and one for edges. Note however that decomposition for nodes makes sense mostly for graphs made up of isolated nodes, or for parts of graphs consisting of isolated nodes only. In this case, we would be dealing with sets more than with graphs.

Definition 8.2.2 (Decomposition). Given a GC $(d, f)$ with ground formula $f = \exists A\,[Q(A)]$, diagram $d = \{A\}$ and host graph $G$, $D$ acts on the GC -- $D(GC) = GC' = (d', f')$ -- in the following way:

$$d \longmapsto d' = \{A_1, \ldots, A_n\}, \qquad d_{ij} : A_i \to A_j$$
$$f \longmapsto f' = \exists A_1 \ldots \exists A_n \bigvee_{i=1}^{n} A_i \qquad (8.14)$$

where $n = \#\{edg(A)\}$, the number of edges of $A$, so each $A_i \subseteq A$ contains a single edge of digraph $A$.

In words: demanding a partial morphism is equivalent to asking for the existence of a total morphism of some of its edges, i.e. each $A_i$ contains one and only one of the edges of $A$. It does not seem to be relevant whether $A_i$ includes all nodes of $A$ or just the source and target nodes. Notice that decomposition is not affected by the host graph.

Fig. 8.12. Closure and Decomposition.

Example. We will consider the conditions represented in Fig. 8.12, $A_0$ for closure and $A_1$ for decomposition, to illustrate Defs. 8.2.1 (again) and 8.2.2. Recall that the formula associated to closure is $f = \forall A\,[A]$. Closure applied to $A_0$ outputs two digraphs, $A_0^1$ and $A_0^2$, and a morphism $d_{12}^0$ that identifies nodes 1 and 3. Any further match of $A_0$ in $G$ would imply an isomorphism. Equation (8.13) for $A_0$ is

$$f' = \exists A_0^1\, \exists A_0^2\, \big[\, A_0^1 \wedge A_0^2 \,\big] \qquad (8.15)$$

with associated diagram

$$d' = \{A_0^1, A_0^2\}, \qquad d_{12}^0 : A_0^1 \to A_0^2 \qquad (8.16)$$

depicted in the center of Fig. 8.12. Note that the maximum number of non-empty partial morphisms that are not isomorphisms is 2. The formula associated to $D$ is $f = \exists A\,[Q(A, G)]$. Decomposition can be found to the right of the same figure, in this case with associated diagram and formula:

$$d' = \{A_1^1, A_1^2\}, \qquad d_{12}^1 : A_1^1 \to A_1^2$$
$$f' = \exists A_1^1\, \exists A_1^2\, \big[\, A_1^1 \vee A_1^2 \,\big]. \qquad (8.17)$$

The number of edges that make up the graph is 2, which is the number of different graphs $A_1^i$.

Now we get to the main result of this section. The following theorem states that it is possible to reduce any formula in a graph constraint (or application condition) to one using existential quantifiers and total morphisms. Recall that, in Matrix Graph Grammars, matches are total morphisms (in fact, in any approach to graph transformation, to the best of our knowledge).

Theorem 8.2.3. Let $GC = (d, f)$ be a graph constraint such that $f = f(P, Q)$ is a ground function. Then $f$ can be transformed into a logically equivalent $f' = f'(P)$ with existential quantifiers only.

Proof. Define the depth of a graph for a fixed node $n_0$ to be the maximum over the shortest paths (to avoid cycles) starting in any node different from $n_0$ and ending in $n_0$. The diagram $d$ is a graph (whose nodes are the digraphs $A_i$ and whose edges are the morphisms $d_{ij}$) with a special node $G$. We will use the notation $depth(GC) = depth(d)$, the depth of the diagram. In order to prove the theorem we apply induction on the depth, checking every case.

There are sixteen possibilities for $depth(d) = 1$ and a single element $A$, summarized in Table 8.1:

(1) $\exists A\,[A]$ ... (5) $\not\forall A\,[\overline{A}]$ ... (9) $\exists A\,[Q(A)]$ ... (13) $\not\forall A\,[\overline{Q}(A)]$
(2) $\exists A\,[\overline{A}]$ ... (6) $\not\forall A\,[A]$ ... (10) $\exists A\,[\overline{Q}(A)]$ ... (14) $\not\forall A\,[Q(A)]$
(3) $\nexists A\,[A]$ ... (7) $\forall A\,[\overline{A}]$ ... (11) $\nexists A\,[Q(A)]$ ... (15) $\forall A\,[\overline{Q}(A)]$
(4) $\nexists A\,[\overline{A}]$ ... (8) $\forall A\,[A]$ ... (12) $\nexists A\,[\overline{Q}(A)]$ ... (16) $\forall A\,[Q(A)]$

Table 8.1. All Possible Diagrams for a Single Element.

Elements in the same row for each pair of columns are related through the equalities $\nexists A\,[A] \equiv \forall A\,[\overline{A}]$ and $\not\forall A\,[\overline{A}] \equiv \exists A\,[A]$, so it is possible to reduce the study to cases (1)--(4) and (9)--(12). (Notice that $\not\forall$ should be read "not for all . . . " and not "there isn't any . . . ".) The identities $Q(A) \equiv P(A, G)$ and $\overline{Q}(A) \equiv \overline{P}(A, G)$ (see also equation (8.6)) reduce (9)--(12) to formulas (1)--(4):

$$\exists A\,[Q(A)] \equiv \exists A\,[P(A, G)] \qquad \exists A\,[\overline{Q}(A)] \equiv \exists A\,[\overline{P}(A, G)]$$
$$\nexists A\,[Q(A)] \equiv \nexists A\,[P(A, G)] \qquad \nexists A\,[\overline{Q}(A)] \equiv \nexists A\,[\overline{P}(A, G)].$$

What we mean is that it is enough to study the first four cases, although it will be necessary to specify whether $A$ must be found in $G$ or in $\overline{G}$. Finally, every case in the first column can be reduced to (1):

- (1) is the definition of match in Sec. 6.1.
- (2) can be transformed into total morphisms (case 1) using operator $D$:
$$\exists A\,[\overline{A}] \equiv \exists A\,[Q(\overline{A}, G)] \equiv \exists \hat{A}_1 \ldots \exists \hat{A}_n \bigvee_{i=1}^{n} P(\hat{A}_i, \overline{G}). \qquad (8.18)$$
- (3) can be transformed into total morphisms (case 1) using operator $C$:
$$\nexists A\,[\overline{A}] \equiv \forall A\,[A] \equiv \exists \tilde{A}_1 \ldots \exists \tilde{A}_n \bigwedge_{i=1}^{n} \tilde{A}_i. \qquad (8.19)$$
The conditions on $P_U$ are supposed to be satisfied and thus have not been included.
- (4) combines (2) and (3), where operators $C$ and $D$ are applied in the order $D \circ C$ (see the remark after the end of this proof). Again, the conditions on $P_U$ are supposed to be fulfilled and thus have been omitted:
$$\nexists A\,[A] \equiv \forall A\,[\overline{A}] \equiv \exists \hat{A}_{11} \ldots \exists \hat{A}_{mn} \bigwedge_{i=1}^{m} \bigvee_{j=1}^{n} P(\hat{A}_{ij}, \overline{G}). \qquad (8.20)$$

If there is more than one element at depth 1, this same procedure can be applied mechanically. Note that if the depth is 1, the graphs in the diagram are unrelated (otherwise, $depth > 1$). Well-definedness guarantees independence with respect to the order in which elements are selected.

For the induction step, when there is a universal quantifier $\forall A$, according to eq. (8.13) the elements of $A$ are replicated as many times as potential instances of this graph can be found in the host graph. Suppose the connected graph is called $B$. There are two possibilities: either $B$ is existentially quantified, $\forall A\, \exists B$, or universally quantified, $\forall A\, \forall B$. If $B$ is existentially quantified then it is replicated as many times as $A$. There is no problem, as the morphisms $d_{ij} : B_i \to B_j$ can be isomorphisms (if, for example, there are three instances of $A$ in the host graph but only one of $B$, then the three replicas of $B$ are matched to the same part of $G$).
Mind the importance of the order: $\forall A\, \exists B \neq \exists B\, \forall A$. If $B$ is universally quantified, again it is replicated as many times as $A$. Afterwards, $B$ itself needs to be replicated due to its universality. Note that the order in which these replications are performed is not relevant, $\forall A\, \forall B = \forall B\, \forall A$. The order in the general case is given by the formula $f$. In more detail, when closure is applied to $A$, we iterate over all graphs $B_j$ in the diagram:

- If $B_j$ is existentially quantified after $A$ ($\forall A \ldots \exists B_j$) then it is replicated as many times as $A$. Appropriate morphisms are created between each $A_i$ and $B_j^i$ if a morphism $d : A \to B$ existed. The new morphisms identify elements in $A_i$ and $B_j^i$ according to $d$. This allows finding different matches of $B_j$ for each $A_i$, some of which can be equal (if, for example, there are three instances of $A$ in the host graph but only one of $B_j$, then the three replicas of $B_j$ are matched to the same part of $G$).

- If $B_j$ is existentially quantified before $A$ ($\exists B_j \ldots \forall A$) then it is not replicated, but just connected to each replica of $A$ if necessary. This ensures that a unique $B_j$ has to be found for each $A_i$. Moreover, the replication of $A$ has to preserve the shape of the original diagram. That is, if there is a morphism $d : B \to A$, then each $d_i : B \to A_i$ has to preserve the identifications of $d$ (this means that we take only those $A_i$ which preserve the structure of the diagram).

- If $B_j$ is universally quantified (no matter whether it is quantified before or after $A$), again it is replicated as many times as $A$. Afterwards, $B_j$ itself needs to be replicated due to its universality. The order in which these replications are performed is not relevant, as $\forall A\, \forall B_j = \forall B_j\, \forall A$.

Remark. It is not difficult to see that $C$ and $D$ commute, i.e. $C \circ D = D \circ C$. In fact, in equation (8.20) it does not matter whether $D \circ C$ or $C \circ D$ is considered. Composition $D \circ C$ is a direct translation of $\forall A\,[\overline{A}]$ which, in the first instance, considers all appearances of the nodes of $A$ and then splits these occurrences into separate digraphs. This is the same as considering every pair of single nodes connected in $A$ by one edge and taking their closure, i.e. $C \circ D$.

Fig. 8.13. Application Condition Example.

Examples. Let there be given a diagram like the one in Figure 8.13 with formula $f = \exists A_1\, \forall A_2\, \exists A_3\, [\,A_2 \Rightarrow (A_1 \wedge A_3)\,]$. Say $C$ stands for conveyor (the example is taken from the case study in App. A). If a conveyor is connected to three conveyors, then they are eventually joined into a single conveyor. Graph $G$ in the same figure satisfies the application condition, as elements $(2{:}C)$, $(4{:}C)$ and $(5{:}C)$ are connected to a single node $(3{:}C)$. Graph $G'$ does not satisfy the application condition. Note that:

$$f = \exists A_1\, \forall A_2\, \exists A_3\, [\,A_2 \Rightarrow (A_1 \wedge A_3)\,] = \exists A_1\, \forall A_2\, \exists A_3\, [\,\overline{A}_2 \vee (A_1 \wedge A_3)\,]. \qquad (8.21)$$

Suppose that the second form of $f$ in (8.21) is used. Closure applies to $A_2$, so it is copied three times with the additional property of mandatorily being identified in different parts of the host graph. As $A_3$ is connected to $A_2$, it is also replicated. $A_1$ has no common element with $A_2$, so it need not be replicated. Hence, a single $A_1$ appears when the closure operator is applied. Note however that there is no difference if $A_1$ is also replicated, because all the different copies can be identified in the same part of the host graph.

Fig. 8.14. Closure Example.

The key point is that $A_2$ must be matched in different places of the host graph (otherwise there would be some isomorphism) and the same may apply to $A_3$ (as long as node $(4{:}C)$ in $A_3$ is different for $A_3$, $A_3^1$ and $A_3^2$), but $A_1$, $A_1^1$ and $A_1^2$ can be matched in the same place. Here there is no difference between asking for three matches of $A_1$ or a single match, as long as they can be matched in the same place. $A_1$, $A_1^1$ and $A_1^2$ are depicted to the right of Fig. 8.14.

In fact, there is something wrong in our previous reasoning, because $\forall A_2$ demands all potential matches of $A_2$. This includes the graph made up of nodes $(1{:}C)$ and $(3{:}C)$ and the edge joining the first with the second. To obtain the behavior described in the previous paragraphs we need to add another graph $A_4$ that has only nodes $(1{:}C)$ and $(4{:}C)$, modify the formula to

$$f = \exists A_1\, \forall A_4\, \exists A_2\, \exists A_3\, \big[\,(A_4 \wedge A_2) \Rightarrow (A_1 \wedge A_3)\,\big] \qquad (8.22)$$

and also the morphisms in the diagrams. It is all depicted in Fig. 8.15.

Fig. 8.15. Application Condition Example Corrected.

Theorem 8.2.3 is of interest because derivations as defined in Matrix Graph Grammars (the matching part) use only total morphisms and existential quantifiers. An application condition $AC = (d_{AC}, f_{AC})$ is a graph constraint $GC = (d_{GC}, f_{GC})$ with

$$f_{AC} = \exists L\, \exists K\, \big[\, L \wedge P(K, \overline{G}^E) \wedge f_{GC} \,\big] \qquad (8.23)$$

(actually, it is not necessary to demand the existence of the nodes of $K$ because they are the same as those of $L$), so Theorem 8.2.3 can be applied to application conditions.

Corollary 8.2.4. Any application condition $AC = (d, f)$ such that $f = f(P, Q)$ is a ground function can be embedded into its corresponding direct derivation.

This corollary asserts that any application condition can be expressed in terms of Matrix Graph Grammars rules. So we have proved the informal equations MGG + AC = MGG + GC = MGG. Examples illustrating formulas (8.18), (8.19) and (8.20) and Corollary 8.2.4 can be found in Sec. 8.3.
8.3 Sequentialization of Application Conditions

In this section, operators $C$ and $D$ are translated into the functional notation of previous chapters (see Sec. 2.5 for a quick introduction), inspired by the Dirac or bra-ket notation, in which productions can be written as $R = \langle L, p \rangle$. This notation is very convenient for several reasons; for example, it splits the static part (the initial state, $L$) from the dynamics (element addition and deletion, $p$). Besides, this will permit us to interpret application conditions as sequences or sets of sequences so as to, e.g., study their consistency through applicability (Sec. 9.1).

Operators $C$ and $D$ will be formally represented as $\check{T}$ and $\hat{T}$, respectively. Recall that $\hat{T}$ has been used in the proof of Prop. 7.3.3. Let $p : L \to R$ be a production with application condition $AC = (d, f)$. We will follow a case-by-case study of the proof of Theorem 8.2.3 to structure this section.

The first case addressed in the proof of Theorem 8.2.3 is the simplest: if the nodes of $A$ are found in $G$ then its edges must also be matched,

$$d = (A,\ d : L \to A), \qquad f = \exists A\,[A]. \qquad (8.24)$$

Let $id_A$ be the production that does nothing on $A$ -- $id_A(A) = A$ -- and also the operator that demands the existence of $A$ (operator $id_A(p)$ could be thought of as a "production" that in a single step deletes and adds the elements of $A$). The set of identities

$$\langle L \vee A,\, p \rangle = \langle L,\, id_A(p) \rangle = \langle L,\, p \circ id_A \rangle \qquad (8.25)$$

proves that

$$id_A^\dagger(L) = L \vee A, \qquad (8.26)$$

which defines the adjoint operator of $id_A$. Here, the or is carried out according to the identifications specified by $d$. Production $id_A$ can be seen as an operator (adjoints are defined only for operators). As a matter of fact, it is easy to prove that any production is in particular an operator (just define its action).

So if the AC asks for the existence of a graph as in eq. (8.24), it is possible to enlarge the production, $p \mapsto p \circ id_A$. The marking operator $T_\mu$ (Sec. 6.2) enables us to use concatenation instead of composition as in equation (8.25):

$$\langle L \vee A,\, p \rangle = p\,; id_A, \qquad (8.27)$$

to be understood in the sense of applicability. The following lemma has just been proved:

Lemma 8.3.1 (Match). Let $p : L \to R$ be a production together with an application condition as in eq. (8.24). Its applicability is equivalent to the applicability of the sequence $p\,; id_A$, as in equation (8.27).

Fig. 8.16. Production Transformation According to Lemma 8.3.1.

Examples. To the left of Fig. 8.16 a production and the diagram of its weak application condition are depicted. Let its formula be $\exists A\,[A]$. To the right, its transformation according to (8.27) is represented, but using composition instead of concatenation.

The AC of rule moveOperator in Fig. 8.17(a) has associated formula $\exists Ready\,[Ready]$ (i.e. the operator may move to a machine with an incoming piece). Using the previous construction, we obtain that the rule is equivalent to the sequence $moveOperator_5\,; id_{Ready}$, where $moveOperator_5$ is the original rule without the AC. Rule $id_{Ready}$ is shown in Fig. 8.17(b). Alternatively, we could use composition to obtain $moveOperator_5 \circ id_{Ready}$, as shown in Fig. 8.17(c).

Fig. 8.17. Transforming $\exists Ready\,[Ready]$ into a Sequence.
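Lemma 8.3.1 has a very concrete reading: demanding $\exists A\,[A]$ on top of the match of $L$ amounts to matching the single enlarged graph $L \vee A$, built along the identifications of $d$. A minimal sketch over toy edge sets (the names are ours, not from the case study):

```python
# Lemma 8.3.1 sketch: p with AC "there exists A" is applicable iff the
# enlarged left hand side L OR A (union along the identifications d) is
# found in G, i.e. iff the sequence p; id_A is applicable.
def found_in(needed_edges, g_edges):
    return set(needed_edges) <= set(g_edges)

L = {("operator", "machine")}    # edges demanded by the LHS
A = {("piece", "machine")}       # edges demanded by the AC graph A
G = {("operator", "machine"), ("piece", "machine"), ("piece", "conveyor")}

print(found_in(L | A, G))        # True: p; id_A applies
print(found_in(L | A, L))        # False: A is missing, so id_A fails
```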
We will now introduce a kind of conjugate of production $id_A$, to be written $\overline{id}_A$. To the left of Fig. 8.18 there is a representation of $id_A$. It simply preserves (uses but does not delete) all elements of $A$, which is equivalent to demanding their existence. To the right we have its conjugate, $\overline{id}_A$, which asks nothing of the host graph except the existence of $A$ in the complement of $G$.

Fig. 8.18. Identity $id_A$ and Conjugate $\overline{id}_A$ for Edges.

If, instead of introducing $\overline{id}_A$ directly, a definition on the basis of already known concepts is preferred, we may proceed as follows. Recall that $K = r \vee \overline{e}\,\overline{D}$, so our only chance to define $\overline{id}_A$ is to act on the elements that some production adds. Let

$$p_e\,; p_r \qquad (8.28)$$

be a sequence such that the first production ($p_r$) adds elements whose presence is to be avoided and the second ($p_e$) deletes them (see Fig. 8.19). The overall effect is the identity (no effect), but the sequence can be applied if and only if the elements of $A$ are in $\overline{G}^E$. Note that a similar construction does not work for nodes, because if a node is already present in the host graph a new one can be added without any problem (adding and deleting a node does not guarantee that the node is not in the host graph). The way to proceed is to care only about the nodes that are present in the host graph, as the others, together with their edges, will be present in the completion of the complement of $G$. This is represented by $A^{N_R}$, where $R$ stands for restriction. Restriction and completion are in some sense complementary operations.

Fig. 8.19. $\overline{id}_A$ as a Sequence for Edges.

Our analysis continues with the second case in the proof of Theorem 8.2.3, which states that some edges of $A$ cannot be found in $G$ for some identification of nodes in $G$, i.e. $\not\forall A\,[A] \equiv \exists A\,[\overline{A}]$. This corresponds to operator $\hat{T}_A$ (decomposition), defined by:

$$\hat{T}_A(p) = \{p_1, \ldots, p_n\}. \qquad (8.29)$$

Here, $p_i = p \circ \overline{id}_{A_i}$, with $A_i$ a graph consisting of one edge of $A$ (together with its source and target nodes) and $n = \#\{edg(A)\}$, the number of edges of $A$. Equivalently, the formula is transformed into:

$$f = \exists A\,[\overline{A}] \longmapsto f' = \exists \hat{A}_1 \ldots \exists \hat{A}_n \bigvee_{i=1}^{n} P(\hat{A}_i, \overline{G}), \qquad (8.30)$$

i.e. the matrix of edges that must not appear in order to apply the production is enlarged, $K_i = K \vee A_i$ (with $K_i$ the nihilation matrix of $p_i$).

If composition is chosen, the grammar is modified by removing rule $p$ and adding the set of productions $\{p_1, \ldots, p_n\}$. If the production is part of the sequence $q_2\,; p\,; q_1$, then we are allowing variability on production $p$, as it can be substituted by any $p_i$, $i \in \{1, \ldots, n\}$, i.e. $q_2\,; p\,; q_1 \mapsto q_2\,; p_i\,; q_1$. A similar reasoning applies if we use concatenation instead of composition, but then it is not necessary to eliminate production $p$ from the grammar: $q_2\,; p\,; q_1 \mapsto q_2\,; p\,; \overline{id}_{A_i}\,; q_1$. Production $p$ and sequence $\overline{id}_{A_i}$ are related through marking.

Lemma 8.3.2 (Decomposition). With notation as above, let $p : L \to R$ be a production together with an application condition as in eq. (8.30). Its applicability is equivalent to the applicability of some of the sequences

$$s_i = p\,; \overline{id}_{\hat{A}_i} \qquad (8.31)$$

where $\hat{A}_i$ is defined as in equations (8.18) or (8.30).

Before moving on to the third case in the proof of Theorem 8.2.3, the previous results will be clarified with a simple example with conditions similar to those of Fig. 8.12.

Examples. Consider production $p$ to the left of Fig. 8.20 and application condition $A$ in the center of the same figure. If the associated formula for $A$ is $f = \exists A\,[\overline{A}]$, then three sequences are derived ($p_i$, $i \in \{1, 2, 3\}$) with $p_i = p\,; \overline{id}_{\hat{A}_i}$, $\hat{A}_i$ being those depicted to the right of Fig. 8.20.

Fig. 8.20. Decomposition Operator.

The application condition of rule remove in Fig. 8.21 has associated formula $\exists someEmpty\,[\overline{someEmpty}]$. The formula states that the machine can be removed if there is one piece that is not connected to the input or the output conveyor (as we must not find a total morphism from $someEmpty$ to $G$). Applying Lemma 8.3.2, rule remove is applicable if some of the sequences in the set $\{remove_5\,; del\_someEmpty_i\,; add\_someEmpty_i\}_{i \in \{1, 2\}}$ is applicable, where productions $add\_someEmpty_2$ and $del\_someEmpty_2$ are like the rules in the figure, but considering conveyor 2 instead. Thus, $\overline{id}_{someEmpty_i} = del\_someEmpty_i \circ add\_someEmpty_i$.

Fig. 8.21. Transforming $\exists someEmpty\,[\overline{someEmpty}]$ into a Sequence.

The third case in the proof of Theorem 8.2.3 demands that for any identification of nodes in the host graph every edge must also be found. Recall that $\nexists A\,[\overline{A}] \equiv \forall A\,[A]$, which is associated to operator $\check{T}_A$ (closure). We will assume that all instances are matched in their corresponding parts, so the $P_U$ part of equation (8.13) is always fulfilled (is always true). (When dealing with morphisms, $P_U$ was used; for operators, the marking operator $T_\mu$ acting on the host graph and on the $A_i$ suffices. This remark applies to the rest of the chapter.) Hence,

$$f = \nexists A\,[\overline{A}] \longmapsto \exists \tilde{A}_1 \ldots \exists \tilde{A}_n \bigwedge_{i=1}^{n} \tilde{A}_i. \qquad (8.32)$$
This means that more edges must be present in order to apply the production, $L \mapsto L \vee \bigvee_{i=1}^{n} \tilde{A}_i$. By a reasoning similar to that in the derivation of eq. (8.26):

$$\Big\langle \bigvee_{i=1}^{n} \tilde{A}_i \vee L,\ p \Big\rangle = \big\langle L,\, \check{T}_A(p) \big\rangle = \big\langle L,\ id_{\tilde{A}_1} \circ \ldots \circ id_{\tilde{A}_n}(p) \big\rangle = \big\langle L,\ p \circ id_{\check{A}} \big\rangle \qquad (8.33)$$

-- where $id_{\check{A}} = id_{\tilde{A}_1} \circ \ldots \circ id_{\tilde{A}_n}$ -- and the adjoint operator can be calculated:

$$\check{T}_A^\dagger(L) = L \vee \bigvee_{i=1}^{n} \tilde{A}_i. \qquad (8.34)$$

As commented above, the marking operator $T_\mu$ allows us to substitute composition with concatenation:

$$\Big\langle \bigvee_{i=1}^{n} \tilde{A}_i \vee L,\ p \Big\rangle = p\,; id_{\tilde{A}_1}\,; \ldots\,; id_{\tilde{A}_n} = p\,; id_{\check{A}}, \qquad (8.35)$$

to be understood in the sense of applicability. We have proved the following lemma:

Lemma 8.3.3 (Closure). With notation as above, let $p : L \to R$ be a production together with an application condition as in eq. (8.32). Its applicability is equivalent to the applicability of the sequence $p\,; id_{\check{A}}$.

Fig. 8.22. Closure Operator.

Example. Consider production $p$ to the left of Fig. 8.22 and application condition $A$ in the center of the same figure. If the associated formula for $A$ is $f = \forall A\,[A]$, then two sequences are derived ($p_i$, $i \in \{1, 2\}$) with $p_i = p\,; id_{\tilde{A}_i}$, $\tilde{A}_i$ being those depicted to the right of Fig. 8.22.

The fourth case is equivalent to what is known in the literature as a negative application condition, NAC. It is a mixture of cases (2) and (3), in which the order of composition does not matter because $\check{T}$ and $\hat{T}$ commute (see the remark on p. 192). It says that there does not exist an identification of the nodes of $A$ for which all edges in $A$ can also be found, $\nexists A\,[A]$, i.e. for every identification of nodes there is at least one edge in $\overline{G}$. If we define

$$\overline{T}_A(p) = \hat{T}_A \circ \check{T}_A(p) = \check{T}_A \circ \hat{T}_A(p), \qquad (8.36)$$

then

$$f = \nexists A\,[A] \longmapsto \exists \hat{A}_{11} \ldots \exists \hat{A}_{mn} \bigwedge_{i=1}^{m} \bigvee_{j=1}^{n} \overline{A}_{ij}. \qquad (8.37)$$

In more detail, if we first apply closure to $A$ we obtain a sequence of $m + 1$ productions, $p \mapsto p\,; id_{\tilde{A}_1}\,; \ldots\,; id_{\tilde{A}_m}$, assuming $m$ different matches of $A$ in the host graph $G$. Right afterwards, decomposition splits every $\tilde{A}_i$ into its components (in this case there are $n$ edges in $A$). So every match of $A$ in $G$ is transformed to look for at least one missing edge, $id_{\tilde{A}_1} \mapsto \overline{id}_{A_{11}} \vee \ldots \vee \overline{id}_{A_{1n}}$.

Operator $\overline{T}_A$ acting on a production $p$ with a weak precondition $\forall A\,[\overline{A}]$ results in a set of productions $\overline{T}_A(p) = \{p_1, \ldots, p_r\}$, where $r = n^m$. Each $p_k$ is the composition of $m + 1$ productions, defined as $p_k = p \circ \overline{id}_{A_{1 j_1}} \circ \ldots \circ \overline{id}_{A_{m j_m}}$, with $j_i \in \{1, \ldots, n\}$. The marking operator $T_\mu$ of Sec. 6.2 permits concatenation instead of composition:

$$\overline{T}_A(p) = \big\{ p_k \mid p_k = p\,; \overline{id}_{A_{1 j_1}}\,; \ldots\,; \overline{id}_{A_{m j_m}} \big\}_{k \in \{1, \ldots, n^m\}}. \qquad (8.38)$$

Lemma 8.3.4 (Negative Application Conditions). Keeping notation as above, let $p : L \to R$ be a production together with an application condition as in eq. (8.37); then its applicability is equivalent to the applicability of some of the sequences derived from equation (8.38).

Example. If there are three matches and $A$ has two edges, so that $i$ ranges up to 3 and $j$ up to 2, then equation (8.37) becomes:

$$\bigwedge_{i=1}^{3} \bigvee_{j=1}^{2} \overline{A}_{ij} = \big( \overline{A}_{11} \vee \overline{A}_{12} \big) \big( \overline{A}_{21} \vee \overline{A}_{22} \big) \big( \overline{A}_{31} \vee \overline{A}_{32} \big) = \overline{A}_{11} \overline{A}_{21} \overline{A}_{31} \vee \overline{A}_{11} \overline{A}_{21} \overline{A}_{32} \vee \ldots \vee \overline{A}_{12} \overline{A}_{22} \overline{A}_{31} \vee \overline{A}_{12} \overline{A}_{22} \overline{A}_{32}.$$

For example, the first monomial $\overline{A}_{11} \overline{A}_{21} \overline{A}_{31}$ is the sequence $p\,; \overline{id}_{A_{11}}\,; \overline{id}_{A_{21}}\,; \overline{id}_{A_{31}}$.

Summarizing as a sort of rule of thumb, there are two operations -- and and or -- that may be combined using the rules of monadic second-order logic. These operations are transformed in the following way:

- Operation and in the $f$ of an application condition becomes an or when calculating an equivalent production.
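The $\bigwedge\bigvee$ expansion behind Lemma 8.3.4 can be enumerated mechanically; each DNF monomial names one candidate sequence $p\,; \overline{id}_{A_{1 j_1}}\,; \ldots\,; \overline{id}_{A_{m j_m}}$. A sketch with hypothetical labels of our own:

```python
# Expansion of AND_{i=1..m} OR_{j=1..n} A-bar_ij into its n**m DNF monomials,
# one per candidate sequence of Lemma 8.3.4. Labels "Aij" are ours.
from itertools import product

def dnf_monomials(m, n):
    # each js picks, for every conjunct i, which disjunct j survives
    return [tuple(f"A{i + 1}{js[i] + 1}" for i in range(m))
            for js in product(range(n), repeat=m)]

mono = dnf_monomials(3, 2)   # three matches, two edges: 2**3 = 8 monomials
print(len(mono))             # 8
print(mono[0])               # ('A11', 'A21', 'A31'), i.e. p; id_A11; id_A21; id_A31
```

The applicability question then reduces to asking whether at least one of the eight enumerated sequences is applicable, matching the "some of the sequences" phrasing of the lemma.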
• Operation or enlarges the grammar with new productions, removing the original rule if composition instead of concatenation is chosen.

Fig. 8.23. Example of Diagram with Two Graphs

Example. Let $AC = (d, f)$ be a graph constraint with diagram $d$ depicted in Fig. 8.23 (graphs shown in Fig. 8.24) and associated formula $f = \exists L \, \forall A_0 \, \exists A_1 \left[ L \wedge \left( A_0 \Rightarrow A_1 \right) \right]$. Let the morphisms be defined as follows: $d_{L0}(\{1\}) = \{1\}$, $d_{L1}(\{1\}) = \{1\}$, $d_{10}(\{1\}) = \{1\}$ and $d_{10}(\{2\}) = \{2\}$. The interpretation of $f$ is that $L$ must be found in $G$ (for simplicity $K$ is omitted) and, whenever the nodes of $A_0$ are found, there must exist a match for the nodes of $A_1$ such that there is an edge joining both nodes. Note that the matchings of the nodes of $A_0$ and $A_1$ must coincide (this is what $d_{10}$ is for) and that node 1 has to be the same as that matched by $m_L$ for $L$ in $G$ (morphisms $d_{L0}$ and $d_{L1}$).

Fig. 8.24. Precondition and Postcondition

Application of operator $\widehat{T}$ for the universal quantifier yields six digraphs for $A_0$ and another six for $A_1$, represented in Fig. 8.24. Note that in this case we have $\widehat{A}_0^i \in E\left( A_0^i, G \right)$ because $A_0^i$ has only one edge. Suppose that $m_L = \{1 \mapsto 2,\; 2 \mapsto 1,\; 3 \mapsto 3\}$; then $f$ becomes

$$f' = \exists L \, \exists A_0^4 \, \exists A_0^5 \, \exists A_1^4 \, \exists A_1^5 \left[ L \left( \overline{A_0^4} \vee A_1^4 \right) \left( \overline{A_0^5} \vee A_1^5 \right) \right]. \qquad (8.39)$$

Different matches and relations among the components of the application condition derive different formulas $f$. For example, we could fix only node 1 in $d_{10}$, allowing node 2 to be differently matched in $G$. Notice that neither $A_1^3$ nor $A_1^6$ exist in $G$, so the condition would not be fulfilled for $A_0^3$ or $A_0^6$ because the terms $\overline{A_0^3} \vee A_1^3$ and $\overline{A_0^6} \vee A_1^6$ would be false ($A_0^3$ and $A_0^6$ are in $G$ for any identification of nodes).

Previous lemmas prove that weak preconditions can be reduced to studying sequences of productions.
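The combinatorial content of Lemma 8.3.4 can be made concrete. The following sketch (the function and variable names are ours, not part of the theory) expands the conjunction-of-disjunctions $\bigwedge_{i=1}^{m} \bigvee_{j=1}^{n} A_{ij}$ into its $n^m$ monomials, each naming one candidate sequence of $id$ productions:

```python
from itertools import product

def nac_to_sequences(m, n):
    """Expand AND_{i=1..m} OR_{j=1..n} A_ij (i = match, j = edge)
    into its n**m monomials; each monomial picks, for every match i,
    one edge j assumed to be missing, and names one candidate sequence."""
    return [[f"A{i}{j}" for i, j in enumerate(choice, start=1)]
            for choice in product(range(1, n + 1), repeat=m)]

monomials = nac_to_sequences(m=3, n=2)
print(len(monomials))  # 2**3 = 8 candidate sequences
print(monomials[0])    # ['A11', 'A21', 'A31'] -> p; id_A11; id_A21; id_A31
```

With three matches and two edges this reproduces the eight monomials of the example following eq. (8.37); the first one corresponds to the sequence $p \,;\, id_{A_{11}} \,;\, id_{A_{21}} \,;\, id_{A_{31}}$.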
If instead of weak preconditions we have preconditions, then we should study derivations (or sets of derivations) instead of sequences.

Theorem 8.3.5 Any weak precondition can be reduced to the study of the corresponding set of sequences.

Proof
This result is the sequential version of Theorem 8.2.3. The four cases of its proof correspond to Lemmas 8.3.1 through 8.3.4.

Example. Continuing the example on p. 202, equation (8.39) put in disjunctive normal form reads

$$f' = \exists L \, \exists A_0^4 \, \exists A_0^5 \, \exists A_1^4 \, \exists A_1^5 \left[ L \, \overline{A_0^4} \, \overline{A_0^5} \vee L \, \overline{A_0^4} \, A_1^5 \vee L \, A_1^4 \, \overline{A_0^5} \vee L \, A_1^4 \, A_1^5 \right] \qquad (8.40)$$

which is equivalent to $f' = \exists L \, \exists A_0^4 \, \exists A_0^5 \, \exists A_1^4 \, \exists A_1^5 \left[ L \, A_1^4 \, A_1^5 \right]$ because $A_0^4$ and $A_0^5$ can be found in $G$. This is the same as applying the sequence $p \,;\, id_{A_1^4} \,;\, id_{A_1^5}$ or $p \,;\, id_{A_1^5} \,;\, id_{A_1^4}$ (because $id_{A_1^4} \perp id_{A_1^5}$). So the satisfaction of an AC, once the match $m_L$ has been fixed,²² is equivalent to the applicability of the sequence to which equation (8.40) gives rise.

8.4 Summary and Conclusions

In this chapter, graph constraints and application conditions have been introduced and studied in detail for the Matrix Graph Grammars approach. Our proposal considerably generalizes previous efforts in other approaches such as SPO or DPO.

Generalization is not necessarily good in itself, but in our opinion it is interesting in this case. We have been able to "reduce" graph constraints and application conditions one to each other (which will be useful in Sec. 9.3). Besides, the left hand side, right hand side and nihilation matrices appear as particular cases of this more general framework, giving the impression of being a very natural extension of the theory. Also, it is always possible to embed application conditions in Matrix Graph Grammars direct derivations (Theorem 8.2.3 and Corollary 8.2.4).
We have managed to study preconditions, postconditions and their weak counterparts, independently to some extent of any match. Other interesting points are that application conditions seem to be a good way to synthesize closely related grammar rules. Besides, they allow us to partially act on the nihilation matrices $K$ and $Q$ (recall that the nihilation matrix was directly derived out of $L$, $e$ and $r$).

Representing application conditions using the functional notation introduced for productions and direct derivations allowed us to prove a very useful fact: any application condition is equivalent to some sequence of productions (or a set of them). See Theorem 8.3.5 (and also Theorem 9.2.2 in the next chapter). It is worth stressing the importance of the relationship between application conditions and sequences of productions, which will be used extensively in Chap. 9.

²² In this example. In general it is not necessary to fix the match in advance.

Chapter 9 continues our study of restrictions with concepts such as consistency, the transformation of preconditions into postconditions and vice versa, and a practical-theoretical application: the extension of Matrix Graph Grammars to cope with multidigraphs with no major modification of the theory. Chapter 10 addresses one fundamental topic in grammars: reachability. This topic has been stated as problem 4 and is widely addressed in the literature, specially in the theory of Petri nets.

9 Transformation of Restrictions

In this chapter we continue the study of graph constraints and application conditions – restrictions – started in Chap. 8.

Section 9.1 introduces consistency, compatibility and coherence of application conditions. Section 9.2 tackles the transformation of application conditions imposed on a rule's LHS into an equivalent application condition on the rule's RHS.
The converse, more natural from a practical point of view, is also addressed. Besides, we shall outline how to move application conditions from one production to another inside the same sequence. As an application of restrictions to Matrix Graph Grammars, Sec. 9.3 shows how to make MGG deal with multidigraphs instead of just simple digraphs without major modifications to the theory. Section 9.4 closes the chapter with a summary and some more comments.

9.1 Consistency and Compatibility

We shall start by defining some (desirable) properties of application conditions. As pointed out above, any application condition is equivalent to some sequence or set of sequences, so we will be able to characterize these properties using the theory developed so far.

Definition 9.1.1 (Consistency, Coherence, Compatibility) Let $AC = (d, f)$ be a weak application condition on the grammar rule $p : L \to R$. We say that the AC is:

• coherent if it is not a fallacy (i.e., false in all scenarios).
• compatible if, together with the rule's actions, it produces a simple digraph.
• consistent if there exists a host graph $G$ such that $G \models AC$ and to which the production is applicable.

The definitions for application conditions instead of their weak counterparts are almost the same, except that consistency does not ask for the existence of some host graph but takes into account the one already considered.

Coherence of ACs studies whether there are contradictions in the AC preventing its application in any scenario. Typically, coherence is not satisfied if the condition simultaneously asks for the existence and non-existence of some element. Compatibility of ACs checks whether there are conflicts between the AC and the rule's actions.
Here we have to check, for example, that if a graph of the AC demands the existence of some edge, then it cannot be incident to a node that is deleted by production $p$. Consistency is a kind of well-formedness of the AC when a production is taken into account. Next, we show some examples of non-consistent, non-compatible and non-coherent ACs.

Fig. 9.1. Non-Compatible Application Condition

Examples. Non-compatibility can be avoided at times just by rephrasing the AC and the rule. Consider the example to the left of Fig. 9.1. The rule models the breakdown of a machine by deleting it. The AC states that the machine can be broken if it is being operated. The AC has associated diagram $d = \{Operated\}$ and formula $f = \exists Operated \, [Operated]$. As the production deletes the machine and the AC asks for the existence of an edge connecting the operator with the machine, it is for sure that if the rule is applied we will obtain at least one dangling edge.

The key point is that the AC asks for the existence of the edge but the production demands its non-existence, as it is included in the nihilation matrix $K$. In this case, the rule $break'$ depicted to the right of the same figure is equivalent to $p$ but with no potential compatibility issues. Notice that coherence is fulfilled in the example to the left of Fig. 9.1 (the AC alone does not encode any contradiction) but not consistency, as no host graph can satisfy it.

Fig. 9.2. Non-Coherent Application Condition

An example of a non-coherent application condition can be found in Fig. 9.2. The AC has associated formula $f = \exists busy \, \exists work \left[ busy \wedge \overline{P}(work, G) \right]$.
There is no problem with the edge deleted by the rule, but with the self-loop of the operator there is: due to $busy$, it must appear in any potential host graph, but $work$ says that it should not be present.

Just to clarify the terminology, we will see that an application condition is coherent if and only if its associated sequence is coherent, and the same for compatibility (this is why these concepts have been named this way). We will also see that an application condition is consistent if its associated sequence is applicable. Here, morphisms play a role in the graphs that make up the application condition similar to that of completion in sequences of rules. Another example follows.

Example. As commented above, non-compatibility can be avoided at times just by rephrasing the condition and the rule. Consider the weak precondition $A$ as represented to the left of Fig. 9.3. There is a diagram $d = \{A\}$ with associated formula $f = \exists A \, [A]$, the morphism being $d_A(1) = 1$. As the production deletes node 1 and the application condition asks for the existence of edge $(1, 3)$, it is for sure that if the rule is applied we will obtain at least one dangling edge.

Fig. 9.3. Avoidable Non-Compatible Application Condition

The key point is that the condition asks for the existence of edge $(1, 3)$ but the production demands its non-existence, as it is included in the nihilation matrix $K$. In this case, the rule $p'$ depicted to the right of the same figure is completely equivalent to $p$ but with no potential compatibility issues.

Fig. 9.4. Non-Coherent Application Condition

A non-coherent application condition can be found in Fig. 9.4. Morphisms identify all nodes: $d_{Li}(\{1\}) = \{1\} = d_{12}(\{1\})$, $d_{Li}(\{2\}) = \{2\}$, $d_{12}(\{3\}) = \{3\}$, with formula

$$f = \exists L \, \forall A_1 \, \exists A_2 \left[ L \Rightarrow \left( A_1 \wedge \overline{P}(A_2, G) \right) \right].$$

There is no problem with edge $(1, 2)$, but with $(1, 1)$ there is one.
Note that due to $A_1$ it must appear in any potential host graph, but $A_2$ says that it should not be present.

A direct application of Theorem 8.3.5 allows us to test whether a weak precondition specifies a tautology or a fallacy. It will also be used in the next section to study how to construct weak postconditions equivalent to given weak preconditions. It is also useful to proceed in the opposite way, i.e. to transform postconditions into equivalent preconditions.

Corollary 9.1.2 A weak precondition is coherent if and only if its associated sequence (set of sequences) is coherent. Also, it is compatible if and only if its sequence (set of sequences) is compatible, and it is consistent if and only if its sequence (set of sequences) is applicable.

Example. For coherence we will change the formula of the previous example (Fig. 9.4) a little. Consider $f_2 = \exists L \, \forall A_0 \, \exists A_1 \left[ L \left( A_1 \Rightarrow A_0 \right) \right]$. Note that $f_2$ cannot be fulfilled because on the one hand edges $(1, 1)$ and $(1, 2)$ must be found in $G$, and on the other edge $(1, 1)$ must not be in $G$. To simplify the example, suppose that some match is already given. The sequence to study is $p \,;\, id_{A_1} \,;\, id_{A_0}$, which is not coherent because in its equivalent form $p \,;\, id_{A_1} \,;\, p_{e_0} \,;\, p_{r_0}$ production $p_{e_0}$ deletes edge $(1, 1)$, used by $id_{A_1}$.

Corollary 9.1.3 A weak precondition is consistent if and only if it is coherent and compatible.

Examples. Compatibility for ACs tells us whether there is a conflict between an AC and the rule's action. As stated in Corollary 9.1.2, this property is studied by analyzing the compatibility of the resulting sequence. Rule $break$ in Fig. 9.1 has an AC with formula $\exists Operated \, [Operated]$. This results in the sequence $break \,;\, id_{Operated}$, where the machine in both rules is identified (i.e. it has to be the same).
Our analysis technique for compatibility [60] outputs a matrix with a 1 in the position corresponding to edge $(1{:}Operator, 1{:}Machine)$, thus signaling the dangling edge.

Coherence detects conflicts between the graphs of the AC (which include $L$ and $K$), and we can study it by analyzing the coherence of the resulting sequence. For the case of rule "rest" in Fig. 9.2, we would obtain a number of sequences, each testing that "busy" is found but the self-loop of "work" is not. This is not possible, because this self-loop is also part of "busy". Coherence detects such a conflict and the problematic element.

In addition, we can also use the MGG techniques of previous chapters to analyze application conditions and gather more information. This is reviewed in the rest of the section.

• Sequential Independence. We can use MGG results for sequential independence of sequences to investigate whether, once several rules with ACs are translated into sequences, we can for example delay all the rules checking the AC constraints to the end of the sequence. Note that usually, when transforming an AC into a sequence, the original flat rule should be applied last. Sequential independence allows us to choose some other order. Moreover, for a given sequence of productions, ACs are to some extent delocalized inside the sequence. In particular it could be possible to pass conditions from one production to others inside a sequence (paying due attention to compatibility and coherence). For example, a postcondition for $p_1$ in the sequence $p_2 \,;\, p_1$ might be translated into a precondition for $p_2$, and vice versa.

Example. The sequence resulting from the rule in Fig. 8.17 is $moveOperator \,;\, id_{Ready}$. In this case, both rules are independent and can be applied in any order. This is due to the fact that the rule effects do not affect the AC.
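The dangling-edge detection behind the compatibility analysis above can be mimicked with plain edge sets. The encoding below is our own reading of the $break$ example of Fig. 9.1 (node names and the exact set of deleted items are assumptions for illustration, not the book's matrix encoding):

```python
def dangling_edges(edges, deleted_nodes, deleted_edges):
    """Edges that survive the rule yet touch a deleted node --
    the positions where a compatibility check would report a 1."""
    return {(s, t) for (s, t) in edges - deleted_edges
            if s in deleted_nodes or t in deleted_nodes}

# Edge demanded by the AC ('Operated') plus the rule's own LHS edge.
edges = {("Operator", "Machine"), ("Conveyor", "Machine")}
# Rule 'break' deletes the machine together with its conveyor edge.
dangling = dangling_edges(edges,
                          deleted_nodes={"Machine"},
                          deleted_edges={("Conveyor", "Machine")})
print(dangling)  # the AC's operator-machine edge would be left dangling
```

The single reported pair corresponds to the 1-entry at $(1{:}Operator, 1{:}Machine)$ mentioned above; rephrasing the rule as $break'$ removes it.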
• Minimal and Negative Initial Digraphs. The concepts of MID and NID allow us to obtain the (set of) minimal graph(s) able to satisfy a given GC (or AC), or to obtain the (set of) minimal graph(s) which cannot be found in $G$ for the GC (or AC) to be applicable. In case the AC results in a single sequence, we can obtain a minimal graph; if we obtain a set of sequences, we get a set of minimal graphs. In case universal quantifiers are present, we have to complete all existing partial matches, so it might be useful to limit the number of nodes in the host graph under study.¹ A direct application of the MID/NID technique allows us to solve the problem of finding a graph that satisfies a given AC. The technique can be extended to cope with more general GCs.

Examples. Rule $remove$ in Figure 8.21 results in two sequences. In this case, the minimal initial digraph enabling the applicability of both is equal to the LHS of the rule. The two negative initial digraphs are shown in Fig. 9.5 (and both assume a single piece in $G$).

¹ This, in many cases, arises naturally. For example in [67] MGG is studied as a model of computation and a formal grammar, and also it is compared to Turing machines and Boolean circuits. Recall that Boolean circuits have fixed input variables, giving rise to MGGs with a fixed number of nodes. In fact, something similar happens when modeling Turing machines, giving rise to the so-called (MGG) nodeless model of computation.

Fig. 9.5. Negative Graphs Disabling the Sequences in Fig. 8.21

This means that the rule is not applicable if $G$ has any edge stemming from the machine, or two edges stemming from the piece to the two conveyors. Figure 9.6 shows the minimal initial digraph for executing rule $moveP$.
As the rule has a universally quantified condition ($\forall conn \, [conn]$), we have to complete the two partial matches of the initial digraph so as to enable the execution of the rule.

Fig. 9.6. (a) Example Rule (b) MID without AC (c) Completed MID

• G-congruence. Graph congruence characterizes sequences with the same initial digraph. Therefore, it can be used to study when two GCs/ACs are equivalent for all morphisms or for some of them. See Section 7 in [66] or Section 7.1.

The current approach to restrictions allows us to analyze properties which up to now have been analyzed either without ACs or with NACs, but not with arbitrary ACs:

• Critical Pairs. A critical pair is a minimal graph in which two rules are applicable, and applying one disables the other [31]. Critical pairs have been studied for rules without ACs [31] or for rules with NACs [44]. The techniques in MGG however enable the study of critical pairs with any kind of AC. This can be done by converting the rules into sequences, calculating the graphs which enable the application of both sequences, and then checking whether the application of one sequence disables the other. In order to calculate the graphs enabling both sequences, we derive the minimal digraph set for each sequence as described in the previous item. Then, we calculate the graphs enabling both sequences (which now do not have to be minimal, but we should have jointly surjective matches from the LHSs of both rules) by identifying the nodes in each minimal graph of each set in every possible way. Due to universals, some of the obtained graphs may not enable the application of some sequence.
The way to proceed is to complete the partial matches of the universally quantified graphs, so as to make the sequence applicable. Once we have the set of starting graphs, we take each one of them and apply one sequence. Then, the sequence for the second rule is recomputed – as the graph has changed – and applied to the graph. If it can be applied, there are no conflicts for the given initial graph; otherwise there is a conflict.

Besides the conflicts known for rules without ACs or with NACs (delete-use and produce-forbid [22]), our ACs may produce additional kinds of conflicts. For example, a rule can create elements which produce a partial match for a universally quantified constraint in another AC, thus making the latter sequence inapplicable.

Fig. 9.7. (a) Example Rules (b) MIDs (c) Starting Graphs for Analyzing Conflicts

Example. Figure 9.7(a) shows two rules, $createM1$ and $createM2$, with ACs $\nexists inM \, [inM]$ and $\forall outM \, [outM]$, respectively. The center of the same figure depicts the minimal digraphs $M_1$ and $M_2$, enabling the execution of the sequences derived from $createM1$ and $createM2$, respectively. In this case, both are equal to the LHS of each rule. The right of the figure shows the two resulting graphs once we identify the nodes in $M_1$ and $M_2$ in each possible way. These are the starting graphs that are used to analyze the conflicts. The rules present several conflicts.
First, rule $createM1$ disables the execution of $createM2$, as the former creates a new machine which is not connected to all conveyors, thus disabling the $\forall outM \, [outM]$ condition of $createM2$. The conflict is detected by executing the sequence associated to $createM1$ (starting from either $C_1$ or $C_2$), and then recomputing the sequence for $createM2$, taking the modified graph as the starting one. Similarly, executing rule $createM2$ may disable $createM1$ if the new machine is created in the conveyor with the piece (this is a conflict of type produce-forbid [44]).

• Rule Independence. In Matrix Graph Grammars, we convert the rules into sets of sequences and then check each combination of sequences of the two rules.

9.2 Moving Conditions

Roughly speaking, there have been two basic ideas in previous sections that allowed us to check consistency of the definition of direct derivations with weak preconditions, and also provided us with some means to use the theory developed so far in order to continue the study of application conditions:

• Embed application conditions into the production or derivation. The left hand side $L$ of a production receives elements that must be found – $P(A, G)$ – and $K$ those whose presence is forbidden – $\overline{P}(A, G)$.

• Find a sequence or a set of sequences whose behavior is equivalent to that of the production plus the application condition.

In this section we will care about how (weak) preconditions can be transformed into (weak) postconditions and vice versa: given a weak precondition $A$, what is the equivalent weak postcondition (if any) and how can one be transformed into the other? Before this, it is necessary to state the main results of previous sections for postconditions.
The notation needs to be further enlarged, so we will append a left arrow on top of conditions to indicate that they are (weak) preconditions and a right arrow for (weak) postconditions. Examples are $\overleftarrow{A}$ for a weak precondition and $\overrightarrow{A}$ for a weak postcondition. If it is clear from the context, we will omit the arrows.

There is a direct translation of Theorem 8.2.3 for postconditions. Operators $\overline{T}_{\overrightarrow{A}}$ and $\widehat{T}_{\overrightarrow{A}}$ are defined similarly for weak postconditions. Again, if it is clear from the context, it will not be necessary to over-elaborate the notation. Results equivalent to the lemmas in Sec. 8.3, in particular to equations (8.27), (8.31), (8.35) and (8.38), are given in the following proposition:

Proposition 9.2.1 Let $\overrightarrow{A} = (f, d) = (f, (\{A\}, d : R \to A))$ be a weak postcondition. Then we can obtain a set of sequences equivalent to the given basic formulae as follows:

(Match) $f = \exists A \, [A] \;\longmapsto\; T_A(p) = id_A \,;\, p$. (9.1)

(Decomposition) $f = \exists A \, [\overline{A}] \;\longmapsto\; \overline{T}_A(p) = id_{\overline{A}} \,;\, p$. (9.2)

(Closure) $f = \forall A \, [A] \;\longmapsto\; \widehat{T}_A(p) = id_{\widehat{A}_1} \,;\, \ldots \,;\, id_{\widehat{A}_m} \,;\, p$. (9.3)

(NAC) $f = \nexists A \, [A] \;\longmapsto\; \widetilde{T}_A(p) = id_{A_{u_0 v_0}} \,;\, \ldots \,;\, id_{A_{u_m v_m}} \,;\, p$. (9.4)

There is a symmetric result to Theorem 8.3.5 for weak postconditions that directly stems from Prop. 9.2.1. The development and ideas are the same, so we will not repeat them here.

Theorem 9.2.2 Any weak postcondition can be reduced to the study of the corresponding set of sequences.

Corollaries 9.1.2 and 9.1.3 have their versions for postconditions, which are explicitly stated for further reference.

Corollary 9.2.3 A weak postcondition is coherent if and only if its associated sequence (set of sequences) is coherent.
Also, it is compatible if and only if its sequence (set of sequences) is compatible, and it is consistent if and only if its sequence (set of sequences) is applicable.

Corollary 9.2.4 A weak postcondition is consistent if and only if it is coherent and compatible.

Let $p : L \to R$ be a production applied to graph $G$ such that $p(G) = H$. Elements to be found in $G$ are those that appear in $L$. Similarly, elements that are mandatory on the "post" side are those in $R$. The evolution of the positive part (to be added to $L$) of a weak application condition is given by the grammar rule itself. The evolution of the negative part $K$ has not been addressed up to now, as it has not been needed. Recall that $K$ represents the negative elements of the LHS of the production, and let us represent by $Q$ those elements that must not be present in the RHS.²

Proposition 9.2.5 Let $p : L \to R$ be a compatible production with negative left hand side $K$ and negative right hand side $Q$. Then,

$$Q = p^{-1}(K). \qquad (9.5)$$

Proof
First suppose that $K$ is the one naturally defined by the production, i.e. the one found in Lemma 4.4.2. The only elements that should not appear in the RHS are potential dangling edges and those deleted by the production: $e \vee D$. This coincides with (9.5), as shown by the following chain of identities:

$$p^{-1}(K) = e \vee \overline{r} K = e \vee \overline{r} \left( r \vee \overline{e} D \right) = e \vee \overline{e} \, \overline{r} D = e \vee \overline{r} D = e \vee D. \qquad (9.6)$$

In the last equality of (9.6) compatibility has been used, $\overline{r} D = D$.

Now suppose that $K$ has been modified, adding some elements that should not be found in the host graph (Theorem 8.3.5). There are three possibilities:

• The element is erased by the production. This case is ruled out by Corollary 9.1.2, as the weak precondition could not be coherent.

• The element is added by the production. Then, in fact, the condition is superfluous as it is already considered in $K$ without modifications, i.e.
(9.6) can be applied.

• None of the above. Then equation (9.5) is trivially fulfilled, because the production does not affect this element.

Just a single element has been considered to ease the exposition.

² Note that $K$ and $Q$ precede $L$ and $R$ in the alphabet.

Remark. Though strange at first glance, a dual behavior of the negative part of a production with respect to the positive part should be expected. The fact that $K$ uses $p^{-1}$ rather than $p$ for its evolution is quite natural. When a production $p$ erases one element, it asks its LHS to include it, so it demands its presence. The opposite happens when $p$ adds some element. For $K$ things happen in quite the opposite direction: if the production asks for the addition of some element, then the size of $K$ is increased, while if some element is deleted, $K$ shrinks.

Now we can proceed to prove that it is possible to transform preconditions into postconditions and back again. Proposition 9.2.5 allows us to consider the positive part only. The negative part would follow using the inverse of the productions.

There is a restricted case that can be directly addressed using equations (9.1) – (9.4), Theorems 8.3.5 and 9.2.2 and Corollaries 9.1.2 and 9.2.3. It is the case in which the transformed postcondition for a given precondition does not change.³ The question of whether it is always possible to transform a precondition into a postcondition – and back again – would be equivalent to asking for sequential independence of the production and the identities, i.e. whether $id_{A_i} \perp p$ or not.

In general the production may act on elements that appear in the definition of the graphs of the precondition. Recall that one demand on precondition specification is that $L$ and $K$ are always the domain of their respective morphisms $d_L$ and $d_K$ (refer to comments on p. 177).
The reason for doing so will be clarified shortly.

Theorems in this and previous sections make it possible to interpret preconditions and postconditions as sequences. The only difference is that preconditions are represented by productions to be applied before $p$, while postconditions need to be applied after $p$. Hence, the only thing we have to do to transform a precondition into a postcondition (or vice versa) is to pass productions from one part to the other. The case in which we have sequential independence has been studied above. If there is no sequential independence, the transformation can be reduced to a pushout construction⁴ – as for the direct derivation definition – except for one detail: in direct derivations matches are total morphisms, while here $d_L$ and $d_K$ need not be (see Fig. 9.8).

³ Note that this is not so unrealistic. For example, if the production preserves all elements appearing in the precondition.

Fig. 9.8. (Weak) Precondition to (Weak) Postcondition Transformation

The way to proceed is to restrict to the part in which the morphisms are defined (they are trivially total in that part). For example, the transformation for the weak application condition depicted to the left of Fig. 9.9 is a pushout. It is again represented to the right of the same figure.

Fig. 9.9. Restriction to Common Parts: Total Morphism

The notation is extended to represent this transformation of preconditions into postconditions as follows:

⁴ The square made up of $L$, $R$, $\overleftarrow{A}$ and $\overrightarrow{A}$ is a pushout where $p$, $L$, $d_L$, $R$ and $\overleftarrow{A}$ are known and $\overrightarrow{A}$, $p_A$ and $\overrightarrow{d_L}$ need to be calculated. Recall from Sec. 6.1 that production composition can be used instead of pushout constructions. The same applies here, but we will not enter this topic for now.

$$\overrightarrow{A} = p\left( \overleftarrow{A} \right).$$
(9.7)

To see that precondition satisfaction is equivalent to postcondition satisfaction, all we have to do is use their representation as sequences of productions (Theorems 8.3.5 and 9.2.2). Note that applying $p$ delays the application of the result (the $id_{\overleftarrow{A}}$ or $id_{\overrightarrow{A}}$ productions) in the sequence, i.e. we have a kind of sequential independence, except that the productions can be different ($id_{\overleftarrow{A}} \neq id_{\overrightarrow{A}}$) because they may be modified by the production:

$$p \,;\, id_{\overleftarrow{A}} \;\longmapsto\; id_{\overrightarrow{A}} \,;\, p. \qquad (9.8)$$

If the weak precondition is consistent, so must the weak postcondition be. There cannot be any compatibility issue, and coherence is maintained (again, $id_{\overleftarrow{A}}$ and $id_{\overrightarrow{A}}$ may be modified by the production). Production $p$ deals with the positive part of the precondition and, by Proposition 9.2.5, $p^{-1}$ will manage the part associated to $K$. For the post-to-pre transformation the roles of $p$ and $p^{-1}$ are interchanged.

Pre-to-post or post-to-pre transformations do not affect the shape of the formula associated to a diagram except in the case where redundant graphs are discarded. There are two clear examples of this:

• The application condition requires the graph to appear and the production deletes all its elements.

• The application condition requires the graph not to appear and the production adds all its elements.

Recalling that there cannot be any compatibility or coherence problem due to precondition consistency, consistency permits the transformation, proving the main result of this section:

Theorem 9.2.6 Any consistent (weak) precondition is equivalent to some consistent (weak) postcondition, and vice versa.

Proof (Sketch)
What has been addressed in the previous pages is the equivalent to the first case in the proof of Theorem 8.2.3, or to Lemma 8.3.1. Hence, a similar procedure using closure, decomposition or both proves the result.
Notice that it is necessary to consider the host graph in order to calculate the equivalence.

This result allows us to extend the notation to consider the transformation of a precondition. A postcondition is the image of some precondition, and vice versa:

Ā = p(A).   (9.9)

As commented above, for a given application condition AC it is not necessarily true that A = p⁻¹(p(A)), because some new elements may be added and some obsolete elements discarded. What we get is an equivalent condition adapted to p, which holds whenever A holds and fails to be true whenever A is false.

Fig. 9.10. Precondition to Postcondition Example

Example. Figure 9.10 shows a very simple transformation of a precondition into a postcondition through the morphism p(A). The production deletes one arrow and adds a new one; the overall effect is to revert the direction of the edge between nodes 1 and 2. The opposite transformation, from postcondition to precondition, is obtained by reverting the arrow, i.e. through p⁻¹(Ā). More general schemes can be studied applying the same principles, although the diagrams become a bit cumbersome even with only a few application conditions.

Let A′ = p⁻¹(Ā). If a pre-post-pre transformation is carried out, we will have A′ ≠ A, because edge (2, 1) would be added to A. However, it is true that A ≡ p⁻¹(p(A)). Note that in fact id_A and p are sequentially independent if we limit ourselves to edges, so it would be possible to simply move the precondition to a postcondition as it is. Nonetheless, we have to consider nodes 1 and 2 as the common parts between L and A. This is the same kind of restriction as the one illustrated in Fig. 9.9.

If the pre-post-pre transformation is thought of as an operator T_p acting on application conditions, then it fulfills

T_p² = id,   (9.10)

where id is the identity.
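The operator of eq. (9.10) can be sketched in code by representing graphs as plain sets of directed edges. This is a minimal sketch, not the book's construction: class and function names are illustrative, and matches, morphisms and the pushout are abstracted away, so the round trip here returns A exactly rather than an equivalent adapted condition.

```python
# Sketch of the pre->post transformation and of T_p^2 = id, eq. (9.10).
# Graphs are sets of directed edges; a production is given by the edges
# it deletes and adds (names are illustrative, not from the book).

class Production:
    def __init__(self, deleted, added):
        self.deleted = set(deleted)   # edges erased by p
        self.added = set(added)       # edges created by p

    def apply(self, condition):
        """Transform a condition graph by acting on the common part only."""
        return (condition - self.deleted) | self.added

    def inverse(self):
        """p^-1 swaps the roles of deletion and addition."""
        return Production(self.added, self.deleted)

# p reverts the edge between nodes 1 and 2, as in Fig. 9.10.
p = Production(deleted={(1, 2)}, added={(2, 1)})

A = {(1, 2), (2, 3)}                  # precondition graph
A_post = p.apply(A)                   # postcondition p(A)
A_back = p.inverse().apply(A_post)    # pre-post-pre round trip

print(A_post == {(2, 1), (2, 3)})     # True
print(A_back == A)                    # True: T_p^2 = id in this sketch
```

In the full theory the round trip only yields an equivalent condition (elements may be added or discarded); in this edge-set simplification the equality is exact.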
The same would also be true for a post-pre-post transformation.

Theorem 9.2.6 can be generalized in at least two ways. We will just sketch how to proceed, as it is not difficult with the theory developed so far. Firstly, an application condition has been transformed into an equivalent sequence of productions (or set of sequences), but no ε-productions have been introduced to help with compatibility of the application condition. Think of a production that deletes one node while some graph of the application condition has an edge incident to that node (and that edge is not deleted by the production). So to speak, we have a fixed-grammar pre-to-post transformation theorem. It should not be very difficult to proceed as in Chap. 6 to define a floating-grammar behavior.

Secondly, application conditions can now be thought of as properties of the production, and not necessarily as part of its left or right hand sides. It is not difficult to see that, for a given sequence of productions, application conditions are to some extent delocalized in the sequence. In particular, it is possible to pass conditions from one production to others inside a sequence (paying due attention to compatibility and coherence). Note that a postcondition for p_1 in the sequence p_2; p_1 might be translated into a precondition for p_2, and vice versa.⁵

When defining diagrams, some "practical problems" may turn up. For example, if the diagram with morphisms d_L0 : L → Ā_0 and d_10 : A_1 → Ā_0 is considered, then there are two potential problems:

1. The direction of the arrow between Ā_0 and A_1 is not the natural one. Nevertheless, injectiveness allows us to safely revert the arrow, d_01 = d_10⁻¹.

⁵ This transformation can be carried out under appropriate circumstances, but we are not limited to sequential independence.
Recall that productions specifying constraints can be advanced or delayed even though they are not sequentially independent with respect to the productions that define the sequence.

2. Even though we only formally state d_L0 and d_10, other morphisms naturally appear and need to be checked, e.g. d_L1 : R → A_1. New morphisms should be considered if they relate at least one element.⁶

A possible interpretation of eq. (9.10) is that the definition of the application condition may vary from the natural one, according to the production under consideration. Pre-post-pre or post-pre-post transformations adjust application conditions to the corresponding production.

Let's end this section by relating graph constraints and moving conditions. Recall equation (8.23), in which a first relationship between application conditions and graph constraints is established. That equation states how to enlarge the requirements already imposed by a graph constraint on a given host graph if, besides, a given production is to be applied. Another different though related point is how to make productions respect some properties of a graph. This topic is addressed in the literature, for example in [22]. The proposed way to proceed is to transform the graph constraint into a postcondition first and into a precondition right afterwards. The condition equivalent to (8.23) would be

f_PC = ∃R ∃Q [R ∧ P(Q, G) ∧ f_GC],   (9.11)

where f_GC is the graph constraint to be kept by the production.

9.3 From Simple Digraphs to Multidigraphs

In this section we show how it is possible to consider multidigraphs (directed graphs allowing multiple parallel edges) without changing the theory developed so far. At first sight this might seem a hard task, as Matrix Graph Grammars heavily depend on adjacency matrices.
Adjacency matrices are well suited for simple digraphs but cannot deal with parallel edges. This section is a theoretical application of graph constraints and application conditions to Matrix Graph Grammars.

⁶ Otherwise stated: any condition made up of n graphs A_i can be identified with the complete graph K_n, in which nodes are the A_i and morphisms are the d_ij. Whether this is a directed graph or not is a matter of taste (morphisms are injective).

Before addressing multidigraphs, variable nodes are introduced, as one depends on the other. We will follow reference [34], to which the reader is referred for further details. If, instead of nodes of fixed type, variable types are allowed, we get a so-called graph pattern. A rule scheme is just a production whose graphs are graph patterns. A substitution function ι specifies how the variable names occurring in a production are substituted. A rule scheme p is instantiated via substitution functions, producing a particular production: for substitution function ι we get p_ι. The set of production instances for p is defined as I(p) = { p_ι | ι is a substitution }. The kernel of a graph G, ker(G), is defined as the graph resulting when all variable nodes are removed. It may be the case that ker(G) = ∅.

The basic idea is to reduce any rule scheme to a set of rule instances. Note that it is not possible in general to generate I(p), because this set can be infinite. The way to proceed is simple:

1. Find a match for the kernel of L.
2. Induce a substitution ι such that the match for the kernel becomes a full match m : L_ι → G.
3. Construct the instance R_ι and apply p_ι to obtain the direct derivation G ⇒_{p_ι} H.

Mind the non-determinism of step (2), which is matching. Rule schemes are required to satisfy two conditions:

1. Any variable name occurs at most once in L.
2.
Rule schemes do not add variable nodes.

These two conditions greatly simplify rule application when there are variable nodes, especially in the DPO approach. In our case they are not that important because, among other things, matches in Matrix Graph Grammars are injective.

Let's start with multidigraphs and how Matrix Graph Grammars can be extended to cope with them without any major modification. The idea is not difficult: a special kind of node (call it multinode), associated to every edge in the graph, is introduced. Graphically, multinodes will be represented by filled squares. Now two or more edges can join the same nodes, as in fact there are multinodes in the middle that convert them into simple digraphs. The term multinode is just a means to distinguish them from the rest of "normal" nodes, which we will call simple nodes and represent as usual with colored circles. They are not of a different kind, as for example hyperedges are with respect to edges (see Sec. 3.4). In our case, simple nodes and multinodes are defined similarly and obey the same rules, although their semantics differ. There are some restrictions to be imposed on the actions that can be performed on multinodes (application conditions) or, more precisely, on the shape or topology of permitted graphs (graph constraints).

Operations previously specified on edges now act on multinodes. Edges are managed through multinodes: adding an edge is transformed into a multinode addition, and edge deletion becomes multinode deletion. Still, there are edges in the "old" sense, to link multinodes to their source and target simple nodes. We will touch on ε-productions later in this section.

Fig. 9.11. Multidigraph with Two Outgoing Edges

Example. Consider the simple production in Fig. 9.11 with two edges between nodes 1 and 3.
Multinodes are represented by square nodes while normal nodes are left unchanged. When p deletes an edge, p_τ deletes a multinode. The adjacency matrices for p_τ, with rows and columns ordered (1, 2, 3, a1, a2, b) for L, K and e, and (1, 2, 3, a2, b) for R, are:

L:
  1:  0 0 0 1 1 1
  2:  0 0 0 0 0 0
  3:  0 0 0 0 0 0
  a1: 0 0 1 0 0 0
  a2: 0 0 1 0 0 0
  b:  0 1 0 0 0 0

R:
  1:  0 0 0 1 1
  2:  0 0 0 0 0
  3:  0 0 0 0 0
  a2: 0 0 1 0 0
  b:  0 1 0 0 0

K:
  1:  0 0 0 0 0 0
  2:  0 0 0 1 0 0
  3:  0 0 0 1 0 0
  a1: 1 1 0 1 1 1
  a2: 0 0 0 1 0 0
  b:  0 0 0 1 0 0

e:
  1:  0 0 0 1 0 0
  2:  0 0 0 0 0 0
  3:  0 0 0 0 0 0
  a1: 0 0 1 0 0 0
  a2: 0 0 0 0 0 0
  b:  0 0 0 0 0 0

The adjacency matrices are sparser because simple nodes are no longer directly connected by edges. Note that the number of edges in the encoded graph must be even. In a real situation, a development tool such as AToM³ should take care of all these representation issues: a user would see what appears on the left of Fig. 9.11 and not what is depicted on the right of the same figure. From a representation point of view we can safely draw p instead of p_τ; in fact, according to Theorem 9.3.1, it does not matter which one is used.

Some restrictions on what a production can do to a multidigraph are necessary in order to obtain a multidigraph again. Think for example of the case in which, after applying some productions, we get a graph with an isolated multinode (which would stand for an edge with no source or target nodes). The point is to find the properties that define an edge and impose them on multinodes as graph constraints. This way, multinodes will behave as edges. In the bullets that follow, the graphs between brackets can be found in Fig. 9.12:

• One edge always connects two nodes (diagram d_1, digraphs C_0 and C_1).
• Simple nodes cannot be directly connected by one edge (D_0 and E_0).
Now edges start in a simple node and end in a multinode, or vice versa, linking simple nodes with multinodes but never simple nodes between themselves.
• A multinode cannot be directly connected to another multinode (D_1 and E_1). The contrary would mean that an edge in the simple digraph case is incident to another edge, which is not possible.
• Edges always have a single simple node as source (E_2) and a single simple node as target (E_3).⁷

The graph constraint consists of three parts. First, diagram d_1 is closely related to compatibility of the multidigraph⁸ and has the associated formula:

⁷ This condition can be relaxed in case hyperedges were considered. See Sec. 3.4.
⁸ Note that there are now "two levels" when talking about a graph. For example, "compatibility" may refer to the multidigraph (left side of Fig. 9.11) or to the underlying simple digraph (right side of Fig. 9.11), which are quite different. In the first case we talk about edges connecting nodes, while in the second we speak of edges connecting some node with some multinode.

Fig. 9.12. Multidigraph Constraints

f_1 = ∀X ∃C_0 ∃C_1 ∃A ∃B [X ⇒ A(C_0 ∨ B C_1)].   (9.12)

Diagram d_2 and formula

f_2 = ∄D_0 ∄D_1 [D_0 ∨ D_1]   (9.13)

prevent a simple node or a multinode from being linked by an edge to itself; self-loops should be represented as in C_0. Finally, when considering two or more simple nodes or multinodes, the configurations in diagram d_3 are not allowed. Its associated formula is:

f_3 = ∄E_0 ∄E_1 ∄E_2 ∄E_3 [Q(E_0) ∨ Q(E_1) ∨ E_2 ∨ E_3].   (9.14)

This set of constraints will be known as the multidigraph constraints, and the abbreviation MC = (d_1 ∪ d_2 ∪ d_3, f_1 ∧ f_2 ∧ f_3) will be used. Refer to Fig. 9.12. Some of these diagrams could be merged, also unifying (and slightly simplifying) their corresponding formulas.
For example, instead of D_0, D_1, E_0 and E_1 we could have considered the diagram in Fig. 9.13, with associated formula f_4 = ∄F_0 Q(F_0). However, a new constraint would need to consider the case in which a single simple node or a single multinode is found in the host graph, as these two cases are not covered by (d_4, f_4).

Fig. 9.13. Simplified Diagram for Multidigraph Constraint

Theorem 9.3.1 (Multidigraphs). Any multidigraph is isomorphic to some simple digraph G together with the multidigraph constraint MC = (f, d), with d as defined in Fig. 9.12 and f as in eqs. (9.12), (9.13) and (9.14).

Proof (sketch). A graph with multiple edges M = (V, E, s, t) consists of disjoint finite sets V of nodes and E of edges, together with source and target functions s : E → V and t : E → V, respectively. The function v = s(e), v ∈ V, e ∈ E, returns the source node v of edge e. We are considering multigraphs because the pair function (s, t) : E → V × V need not be injective, i.e. several different edges may have the same source and target nodes. We have digraphs because there is a distinction between source and target nodes. This is the standard definition found in any textbook. It is clear that any M can be represented as a simple digraph G satisfying MC. The converse also holds: to see it, just consider all possible combinations of two nodes and two multinodes and check that any problematic situation is ruled out by MC. Induction finishes the proof. ∎

The multidigraph constraint MC = (f, d) must be fulfilled by any host graph. If there is a production p : L → R involved, MC has to be transformed into an application condition over p.
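The encoding behind Theorem 9.3.1 can be sketched in a few lines: insert one multinode per original edge and then check the structural restrictions that MC imposes. This is an illustrative sketch; the function and variable names are not taken from the book, and the logical formulas f_1, f_2, f_3 are reduced here to direct structural checks.

```python
# Sketch: encode a multidigraph as a simple digraph with one "multinode"
# per edge, then verify the MC-style restrictions of Sec. 9.3.

def to_simple(nodes, edges):
    """edges: list of (source, target) pairs, duplicates (parallel edges) allowed."""
    simple_nodes = set(nodes)
    multinodes, arcs = set(), set()
    for k, (u, v) in enumerate(edges):
        m = f"m{k}"            # one fresh multinode per original edge
        multinodes.add(m)
        arcs.add((u, m))       # source simple node -> multinode
        arcs.add((m, v))       # multinode -> target simple node
    return simple_nodes, multinodes, arcs

def satisfies_mc(simple_nodes, multinodes, arcs):
    """Structural checks corresponding to the multidigraph constraints."""
    for (u, v) in arcs:
        if u in simple_nodes and v in simple_nodes:
            return False       # no simple-simple edges (D0/E0)
        if u in multinodes and v in multinodes:
            return False       # no multinode-multinode edges (D1/E1)
    for m in multinodes:       # one source and one target per multinode (d1, E2, E3)
        if len([a for a in arcs if a[0] == m]) != 1:
            return False
        if len([a for a in arcs if a[1] == m]) != 1:
            return False
    return True

# Two parallel edges from node 1 to node 3 plus edge 1 -> 2, as in Fig. 9.11.
sn, mn, arcs = to_simple({1, 2, 3}, [(1, 3), (1, 3), (1, 2)])
print(len(arcs))                   # 6: the encoded graph has an even number of edges
print(satisfies_mc(sn, mn, arcs))  # True
```

A graph violating MC, e.g. one with a direct simple-simple edge, is rejected by `satisfies_mc`, which is exactly the role the constraint plays for host graphs.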
In fact, the multidigraph constraint should be demanded both as a precondition and as a postcondition (recall that we can transform preconditions into postconditions and vice versa). In Sec. 8.1 we saw that this is an easy task in Matrix Graph Grammars: see equations (8.23) and (9.11). This is a clear advantage of being able to relate graph constraints and application conditions.

This section is closed by analyzing the behavior of multidigraphs with respect to dangling edges. With the theory as developed so far, if a production specifies the deletion of a simple node, then an ε-production would delete any edge incident to this simple node, connecting it to any surrounding multinode. But the restrictions imposed by the multidigraph constraint do not allow this, so any production with potential dangling edges cannot be applied. Thus, we have a DPO-like behavior with respect to dangling edges for multidigraphs. In order to have an SPO-like behavior, ε-productions need to be restated, defining them at the multidigraph level, i.e. ε-productions have to delete any potential "dangling multinode". A new type of production (Ξ-productions) is introduced to get rid of annoying edges⁹ that would dangle when multinodes are also deleted by ε-productions. We will not develop this in detail and will limit ourselves to describing the concepts. The way to proceed is very similar to what has been studied in Sec. 6.1, by defining the appropriate operator T_Ξ and redefining T_ε.

Fig. 9.14. ε-production and Ξ-production

A production p : L → R between multidigraphs that deletes one simple node may give rise to one ε-production that deletes one or more multinodes. This ε-production can in turn be applied only if every edge has already been erased, hence possibly provoking the appearance of one Ξ-production.

⁹ Edges connect simple nodes and multinodes.
This process is depicted in Fig. 9.14 where, in order to apply production p, productions p_ε and p_Ξ need to be applied before:

p ↦ p ; p_ε ; p_Ξ.   (9.15)

Eventually, one could simply compose the Ξ-production with its ε-production, renaming the result to ε-production and defining it as the way to deal with dangling edges in the case of multiple edges, fully recovering an SPO-like behavior. As commented above, a potential user of a development tool such as AToM³ would still see things as in the simple digraph case, with no need to worry about Ξ-productions.

Another theoretical use of application conditions and graph constraints is the encoding of Turing Machines and Boolean Circuits using Matrix Graph Grammars; see [67]. In Sec. 10.2 we will see how to encode Petri nets using Matrix Graph Grammars.

9.4 Summary and Conclusions

This chapter is a continuation of Chap. 8 in the study of graph constraints and application conditions. Besides, we have seen how the nihilation matrix evolves with grammar rules. The usefulness of the transformation of application conditions into sequences is apparent in this chapter:

• to characterize properties such as consistency of application conditions and graph constraints in Sec. 9.1;
• to transform preconditions into postconditions and vice versa in Sec. 9.2;
• to extend MGG to deal with multidigraphs in Sec. 9.3.

We have also seen that, to some extent, application conditions are delocalized inside sequences of productions. Besides, we have sketched the usefulness of the analysis techniques of previous chapters for studying application conditions.

The next chapter addresses one fundamental topic in grammars: reachability. This topic has been stated as problem 4 and is widely addressed in the literature, especially in the theory of Petri nets.
We will prove that Petri nets can be interpreted as a proper subset of MGG, so all techniques developed so far can be used to study them. MGG will also benefit from this relationship, and algebraic techniques for reachability in Petri nets will be generalized to cope with more general grammars.

Chapter 11 closes the theory in this book with a general summary, some more conclusions and proposals for further research. Appendix A presents a worked-out example that illustrates the theory developed in this book, focusing more on its practical side.

10 Reachability

In this chapter we will brush over reachability, presented as problem 4 in Sec. 1.2. It is an important concept for both practice and theory. Given a grammar G, recall that, for fixed initial and final states S_0 and S_T, reachability answers the question of whether it is possible to go from S_0 to S_T with productions in G. It would be of capital importance to provide one or more sequences that carry this out, or to identify that S_T is unreachable. At the least, it would be very valuable to gather some information on which productions would be involved and the number of times each appears.

This problem is closely related to (in the sense that it depends on) problem 1, applicability, because we look for a sequence applicable to S_0. Problem 3 also contributes: if it is not possible to give a concrete sequence but only a set of productions (the order is unknown) together with the number of times each production appears in the sequence, problem 3 may reduce the size of the search space (to find one concrete sequence that transforms S_0 into S_T).

The chapter is organized as follows. Section 10.1 introduces Petri nets and explains why, in our opinion, the state equation is a necessary but not a sufficient condition. In Sec.
10.2 Petri nets are interpreted as a proper subset of Matrix Graph Grammars. Also, the concept of initial marking (minimal initial digraph) is defined, and the main concepts of Matrix Graph Grammars are revisited for Petri nets. The rest of the chapter enlarges the state equation to cope with more general graph grammars. We will make use of the tensor notation introduced in Sec. 2.4: first, in Sec. 10.3, for fixed Matrix Graph Grammars (grammars with no dangling edges), and then, in Sec. 10.4, for general Matrix Graph Grammars (floating grammars). As in every chapter, we finish with a summary, in Sec. 10.5, with some further comments, in particular on other problems that can be addressed similarly to what is done here for reachability.

10.1 Crash Course in Petri Nets

A Petri net (also a Place/Transition net or P/T net) is a mathematical representation of a discrete distributed system [54]. The structure of the distributed system is depicted as a bipartite digraph: there are place nodes, transition nodes, and arcs connecting places with transitions. A place may contain any number of tokens; a distribution of tokens over the places is called a marking. A transition is enabled if it can fire. When a transition fires, it consumes tokens from its input places and puts a number of tokens in its output places. The execution of Petri nets is non-deterministic, so they are appropriate to model the concurrent behaviour of distributed systems. More formally:

Definition 10.1.1 (Petri Net). A Petri net is a 5-tuple PN = (P, T, F, W, M_0) where
• P = {p_1, ..., p_m} is a finite set of places.
• T = {t_1, ..., t_n} is a finite set of transitions.
• F ⊆ (P × T) ∪ (T × P) is a set of arcs.
• W : F → N_{≥1} is a weight function.
• M_0 : P → N is the initial marking.
• P ∩ T = ∅ and P ∪ T ≠ ∅.

The set of arcs establishes the flow direction.
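Definition 10.1.1 can be sketched directly in code. The following is a minimal illustration, not a full P/T net library: class and method names are invented here, and arc weights are stored explicitly per arc.

```python
# Minimal sketch of a Place/Transition net with enabling and firing.

class PetriNet:
    def __init__(self, places, transitions, arcs, marking):
        self.places = set(places)
        self.transitions = set(transitions)
        self.w = dict(arcs)        # (x, y) -> weight: F together with W
        self.m = dict(marking)     # current marking M

    def pre(self, t):
        """Input places of transition t with their arc weights."""
        return {p: w for (p, t2), w in self.w.items()
                if t2 == t and p in self.places}

    def post(self, t):
        """Output places of transition t with their arc weights."""
        return {p: w for (t2, p), w in self.w.items()
                if t2 == t and p in self.places}

    def enabled(self, t):
        return all(self.m[p] >= w for p, w in self.pre(t).items())

    def fire(self, t):
        if not self.enabled(t):
            raise ValueError(f"{t} is not enabled")
        for p, w in self.pre(t).items():   # consume input tokens
            self.m[p] -= w
        for p, w in self.post(t).items():  # produce output tokens
            self.m[p] += w

# p1 --t1--> p2: firing t1 moves one token from p1 to p2.
net = PetriNet({"p1", "p2"}, {"t1"},
               {("p1", "t1"): 1, ("t1", "p2"): 1},
               {"p1": 1, "p2": 0})
net.fire("t1")
print(net.m == {"p1": 0, "p2": 1})   # True
```

After firing, t1 is no longer enabled, since its input place p1 is empty: exactly the token-game semantics described above.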
A Petri net structure is the 4-tuple N = (P, T, F, W) in which the initial marking is not specified. Normally, a Petri net with an initial marking is written PN = (N, M_0).

Algebraic techniques for Petri nets are based on the representation of the net by an incidence matrix A whose columns are transitions. Element A_ij is the number of tokens that transition j removes (negative) or adds (positive) to place i. One of the problems that can be analyzed using algebraic techniques is reachability. Given an initial marking M_0 and a final marking M_d, a necessary condition to reach M_d from M_0 is to find a solution x of the equation M_d = M_0 + Ax, which can be rewritten as the linear system

ΔM = Ax,   (10.1)

where ΔM = M_d − M_0. The solution x, known as the Parikh vector, specifies the number of times each transition should be fired, but not the order. Identity (10.1) is the state equation. Refer to [54] for a more detailed explanation.

The ideas presented up to the end of the section are interpretations of the author and should not be considered standard in the theory of Petri nets. The state equation introduces a matrix which, conceptually, can be thought of as associating a vector space to the dynamic behaviour of the Petri net. It is interesting to interpret graphically the operations involved in linear combinations, addition and multiplication by scalars, as depicted in Fig. 10.1. The addition of two transitions is again a transition t_k = t_i + t_j whose input places are the union of the input places of each transition, and the same for output places. If a place appears as both input and output place of t_k, then it can be removed. Multiplication by −1 inverts the transition, i.e. input places become output places and vice versa, which in some sense is equivalent to disapplying the transition.

Fig. 10.1.
Linear Combinations in the Context of Petri Nets

One important issue is that of notation. Linear algebra uses an additive notation (addition and subtraction), which is normally employed when an Abelian structure is under consideration. For non-commutative structures, such as permutation groups, the multiplicative notation (composition and inverses) is preferred. The basic operation with productions is the definition of sequences (concatenation), for which historically a multiplicative notation has been chosen, but substituting composition "∘" by the concatenation operation ";".¹

From a conceptual point of view, we are interested in relating linear combinations and sequences of productions.² Note that, due to commutativity, linear combinations have no associated notion of ordering: e.g. the linear combination PV_1 = p_1 + 2p_2 + p_3, coming from the Parikh vector [1, 2, 1], can represent the sequences p_1; p_2; p_3; p_2 or p_2; p_2; p_3; p_1, which can be quite different. The fundamental concept that deals with commutativity is precisely sequential independence. Following this reasoning, we can identify the problem that makes the state equation a necessary but not a sufficient condition: some transitions may temporarily owe tokens to the net. The Parikh vector specifies a linear combination of transitions, and thus negatives are temporarily allowed (subtraction).

Proposition 1. Sufficiency of the state equation can only be ruined by transitions temporarily borrowing tokens from the Petri net.

Proof. If there are enough tokens in every place, then the transitions can be fired (equivalently, the productions can be applied); in this case the state equation guarantees reachability. A temporarily negative number of tokens in one place represents a coherence problem in the sequence. ∎
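The gap between the state equation and an actual firing sequence can be made concrete in code. The tiny two-place net below is invented for illustration: the Parikh vector satisfies eq. (10.1), yet one of the two firing orders would temporarily borrow a token, exactly the situation of Proposition 1.

```python
# Sketch of the state equation M_d = M_0 + A x and of why the Parikh vector
# fixes only how often each transition fires, not the order.
# Incidence matrix A: A[i][k] = tokens transition k adds (+) / removes (-) at place i.

A = [[-1, +1],    # place 0: t0 consumes, t1 produces
     [+1, -1]]    # place 1: t0 produces, t1 consumes

def apply_parikh(m0, x):
    """Right-hand side of the state equation: M_0 + A x."""
    return [m0[i] + sum(A[i][k] * x[k] for k in range(len(x)))
            for i in range(len(m0))]

M0 = [1, 0]
x = [1, 1]                       # fire t0 once and t1 once, in some order
Md = apply_parikh(M0, x)
print(Md == [1, 0])              # True: the state equation is satisfied

def feasible(m0, order):
    """Check that a concrete firing order never drives a place negative."""
    m = list(m0)
    for k in order:
        m = [m[i] + A[i][k] for i in range(len(m))]
        if any(v < 0 for v in m):
            return False         # a token would be "borrowed" from the net
    return True

print(feasible(M0, [0, 1]))      # True:  t0 then t1
print(feasible(M0, [1, 0]))      # False: t1 first needs a token place 1 lacks
```

Both orders realize the same Parikh vector, but only one is a coherent sequence; this is the sense in which the state equation is necessary but not sufficient.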
Note that, due to the way in which Petri nets are defined, there cannot be compatibility issues; hence reachability depends exclusively on coherence.

In the proof we have used Matrix Graph Grammars concepts such as sequences and coherence. Notice that we have not yet stated how a Petri net is encoded in Matrix Graph Grammars; this point is addressed in Sec. 10.2. Proposition 1 does not provide any criterion based on the topology of the Petri net, as Theorems 16, 17 and 18 and Corollaries 2 and 3 in [54] do, but it contains the essential idea of their proofs: the hypotheses of those theorems guarantee that cycles in the Petri net will not ruin coherence.

¹ This is the reason why Chap. 4 introduces ";" to be read from right to left, contrary to the Graph Transformation Systems literature.
² Linear combinations are the building blocks of vector spaces, and the structure to be kept by matrix application.

10.2 MGG Techniques for Petri Nets

In this section we will go over some of the concepts developed so far for Matrix Graph Grammars and see how they can be applied to Petri nets. Given a Petri net, we will consider it as the initial host graph of our Matrix Graph Grammar. One production is associated to every transition, in which places and tokens are nodes and there is an arrow joining each token to its place. In fact, we represent places for illustrative purposes only, as they are not strictly necessary (including tokens alone is enough). Figure 10.2 shows an example, where production p_i corresponds to transition t_i. The firing of a transition corresponds to the application of a rule.

Fig. 10.2. Petri Net with Related Production Set

Thus, Petri nets can be considered as a proper subset of Matrix Graph Grammars with two important properties:

1. There are no dangling edges when applying productions (firing transitions).
2. Every production can only be applied in one part of the host graph.

Properties (1) and (2) somehow allow us to safely "ignore" matchings as introduced in Chap. 6. In [67], nodeless MGGs are introduced; the main property of this submodel of computation is to avoid dangling edges, as in property (1) above. Property (2) prevents one of the two types of non-determinism associated to MGGs: where a production should be applied in case there is more than one match. Permitting non-determinism in which production to apply is one of the characteristics of Petri nets, useful to describe concurrency.

We shall consider Petri nets with no self-loops.³ Translating to Matrix Graph Grammars, this means that one production either adds or deletes nodes of a concrete type, but there is never a simultaneous addition and deletion of nodes of the same type. This agrees with the expected behaviour of Matrix Graph Grammars productions with respect to nodes (which is the behaviour of edges as well, see Sec. 4.1) and will be kept throughout the present chapter, mainly because rules in floating grammars are adapted depending on whether a given production deletes nodes or not (refer to Sec. 10.4).

Remark. It is advisable that elements are not relative integers. A number four must mean that production p adds four nodes of type x, and not that p adds four nodes more than it deletes of type x. If we had one such production p, a possible way to proceed is to split p into two rules, one performing the addition actions, p_r, and another the deletion ones, p_e. Sequentially, p would be decomposed as p = p_r; p_e.

Minimal Marking. The concept of minimal initial digraph can be used to find the minimum marking able to fire a given transition sequence. For example, Fig.
10.3 shows the calculation of the minimal marking able to fire the transition sequence t_5; t_3; t_1 (applied from right to left). Notice that

(r_1 L_1) ∨ (r_1 L_2)(r_2 L_2) ∨ ... ∨ (r_1 L_n) ··· (r_n L_n)

is the expanded form of equation (5.1). The formula is transformed according to [1 2 3] ↦ [1 3 5].

Reachability. The reachability problem can also be expressed using Matrix Graph Grammars concepts, as the following definition shows.

³ Petri nets without self-loops are called pure Petri nets. A place p and a transition t are on a self-loop if p is both an input and an output place of t.

Fig. 10.3. Minimal Marking Firing Sequence t_5; t_3; t_1

Definition 10.2.1 (Reachability). For a grammar G = (M_0, {p_1, ..., p_n}), a state M_d is called reachable starting in state M_0 if there exists a coherent concatenation made up of productions p_i ∈ G with minimal initial digraph contained in M_0 and image in M_d.

This definition will be used to extend the state equation from Petri nets to Matrix Graph Grammars.

Compatibility and Coherence. As pointed out in the proof of Prop. 1, there cannot be compatibility issues for Petri nets, as no dangling edge may ever appear. Coherence of the sequence of transition firings implies applicability (problem 1). It will be possible to unrelate problematic nodes (make the sequence coherent) if there are enough nodes in the current state, which eventually depends on the initial marking.

10.3 Fixed Matrix Graph Grammars

In this and the next section we will be concerned with the generalization of the state equation to wider types of grammars. Recall from Sec. 6.1 that by a fixed Matrix Graph Grammar we understand a grammar as introduced in Chap. 4, but in which rule application is not allowed to generate dangling edges, i.e.
any production $p$ that deletes a node but not all of its incoming and outgoing edges cannot be applied. In other words, operator $T_\varepsilon$ is forced to be the identity. Property 2 of Petri nets (see Sec. 10.2, p. 237) is relaxed because now a single production may eventually be applied in several different places of the host graph.

The approach of this section can be used with classical DPO graph grammars [22]. However, following the discussion after Prop. 4.1.4 on p. 70, we restrict to DPO rules in which nodes (or edges) of the same type are not rewritten (deleted and created) in the same rule.

In order to perform an a priori analysis it is mandatory to get rid of matches. To this end, either an approach as proposed in Chaps. 4, 5 and 6 is followed (as we did in Sec. 10.2) or types of nodes are taken into account instead of nodes themselves. The second alternative is chosen,⁴ so productions, initial state and final state are transformed such that types of elements are considered, obtaining matrices with elements in $\mathbb{Z}$.

Tensor notation will be used in the rest of the chapter to extend the state equation. Although it will be avoided whenever possible, five indexes may be used simultaneously, ${}^E_0A^i_{jk}$. The top left index indicates whether we are working with nodes (N) or with edges (E). The bottom left index specifies the position inside a sequence, if any. Top right and bottom right are contravariant and covariant indexes, respectively; setting $k = k_0$ selects the adjacency matrix (with types of elements, as commented above) corresponding to production $p_{k_0}$.

Definition 10.3.1 Let $G = ({}_0M, \{p_1, \dots, p_n\})$ be a fixed graph grammar and $m$ the number of different types of nodes in $G$. The incidence matrix for nodes ${}^NA = \left(A^i_k\right)$, where $i \in \{1, \dots, m\}$ and $k \in \{1, \dots
, n\}$, is defined by the identity

$$A^i_k = \begin{cases} \;\;\,r & \text{if production } k \text{ adds } r \text{ nodes of type } i \\ -r & \text{if production } k \text{ deletes } r \text{ nodes of type } i \end{cases} \qquad (10.2)$$

It is straightforward to deduce for nodes an equation similar to (10.1):

$${}^N_dM^i = {}^N_0M^i + \sum_{k=1}^{n} {}^NA^i_k\, x^k. \qquad (10.3)$$

The case for edges is similar, with the peculiarity that edges are represented by matrices instead of vectors, and thus the incidence matrix becomes the incidence tensor ${}^EA^i_{jk}$. Again, only types of edges, and not edges themselves, are taken into account. Two edges $e_1$ and $e_2$ are of the same type if their starting nodes are of the same type and their terminal nodes are of the same type. Source nodes will be assumed to have a contravariant behaviour (index on top, $i$) while target nodes (first index, $j$) and productions (second index, $k$) will behave covariantly (index on bottom). See the diagram in the center of Fig. 10.5.

⁴ Notice that this abstraction provokes information loss unless there is a single node per type. The problem here is that of non-determinism inside the host graph (where the production is to be applied).

Example. Some rules for a simple client-server system are defined in Fig. 10.4. There are three types of nodes: clients (C), servers (S) and routers (R), and messages (self-loops in clients) can only be broadcasted. In the Matrix Graph Grammar approach, this transformation system will behave as a fixed or floating grammar depending on the initial state. Note that production $p_4$ adds and deletes edges of the same type $(C, C)$. For now, the rule will not be split into its addition and deletion components as suggested in Sec. 10.2. See Subsec. 10.4.1 for an example of this splitting.

Fig. 10.4.
Rules for a Client-Server Broadcast-Limited System

The incidence tensor (edges) for these rules can be represented componentwise, each component being the matrix associated to the corresponding production (rows and columns follow the ordering $[C\;R\;S]$):

$${}^EA^i_{j1} = \begin{pmatrix} 0&0&0 \\ 0&0&1 \\ 0&1&0 \end{pmatrix}; \qquad {}^EA^i_{j2} = \begin{pmatrix} 0&2&0 \\ 2&0&1 \\ 0&1&0 \end{pmatrix}$$

$${}^EA^i_{j3} = \begin{pmatrix} 0&2&0 \\ 2&0&0 \\ 0&0&0 \end{pmatrix}; \qquad {}^EA^i_{j4} = \begin{pmatrix} 1&0&0 \\ 0&0&0 \\ 0&0&0 \end{pmatrix}$$

Lemma 10.3.2 With notation as above, a necessary condition for state ${}_dM$ to be reachable from state ${}_0M$ is

$${}_dM - {}_0M = {}^EM, \qquad {}^EM^i_j = \sum_{k=1}^{n} {}^EA^i_{jk}\, x^k_j = \sum_{k=1}^{n} \left({}^EA \otimes x\right)^i_{jk}, \qquad (10.4)$$

where $i, j \in \{1, \dots, m\}$. The last equality in equation (10.4) is the definition of an inner product – see Sec. 2.4 – so we further have:

$${}_dM - {}_0M = \left\langle {}^EA, x \right\rangle. \qquad (10.5)$$

Proof. Consider the construction depicted in the center of Fig. 10.5, in which tensor $A^i_{jk}$ is represented as a cube. Setting $k = k_0$ fixes production $p_{k_0}$. A product for this object is defined in the following way: every vector in the cube perpendicular to matrix $x$ acts on the corresponding row of the matrix in the usual way, i.e. for every fixed $i = i_0$ and $j = j_0$ in eq. (10.4),

$${}^E_dM^{i_0}_{j_0} = {}^E_0M^{i_0}_{j_0} + \sum_{k=1}^{n} {}^EA^{i_0}_{j_0 k}\, x^k_{j_0}. \qquad (10.6)$$

Fig. 10.5. Matrix Representation for Nodes, Tensor for Edges and Their Coupling

Every column in matrix $x$ is a Parikh vector as defined for Petri nets. Its elements specify the number of times that every production must be applied, so all columns must be equal, and hence equation (10.6) needs to be enlarged with some additional identities:

$$\begin{cases} M^i_j = \sum_{k=1}^{n} A^i_{jk}\, x^k_j \\ x^k_p = x^k_q \end{cases} \qquad (10.7)$$

with $p, q \in \{1, \dots, m\}$. This uniqueness together with the previous equations provides the intuition behind (10.4). Informally, we are enlarging the space of possible solutions and then projecting according to some restrictions.
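The contraction in eq. (10.6) can be sketched in a few lines of code. This is a minimal illustration, not the book's implementation: the function name, the list-of-lists encoding of the tensor and the toy two-type grammar below are assumptions made for the example, and $A^i_{jk}$ holds the signed number of edges of type $(i, j)$ that production $k$ adds or deletes.

```python
# Minimal sketch of the edge state equation (10.6); since eq. (10.7)
# forces every column of x to carry the same Parikh vector, the sketch
# takes that vector directly instead of the full matrix x.

def edge_state(M0, A, parikh):
    """Return M_d with M_d[i][j] = M_0[i][j] + sum_k A[i][j][k] * x^k,
    where x^k is the number of times production k is fired."""
    m = len(M0)            # number of node types
    n = len(parikh)        # number of productions
    return [[M0[i][j] + sum(A[i][j][k] * parikh[k] for k in range(n))
             for j in range(m)] for i in range(m)]

# Toy grammar with two node types and two productions: p1 adds one edge
# of type (0, 1), p2 deletes one edge of type (1, 1).
A = [[[0, 0], [1, 0]],
     [[0, 0], [0, -1]]]
M0 = [[0, 0],
      [0, 3]]
print(edge_state(M0, A, [2, 1]))   # -> [[0, 2], [0, 2]]
```

Firing $p_1$ twice and $p_2$ once turns the three $(1,1)$-edges into two and creates two $(0,1)$-edges, exactly the componentwise sum the lemma describes.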
To see that it is a necessary condition, suppose that there exists a sequence $s_n$ such that $s_n({}_0M) = {}_dM$ and that equation (10.6) does not provide any solution. Without loss of generality we may assume that the first column fails (the one corresponding to edges arriving at nodes of the first type), which produces an equation completely analogous to the state equation for Petri nets, deriving a contradiction. ∎

Fig. 10.6. Initial and Final States for Productions in Fig. 10.4

Example (Cont’d). Let us check whether it is possible to move from state $S_0$ to state $S_d$ (see Fig. 10.6) with the productions defined in Fig. 10.4 on p. 241. The matrices for the states (edges only) and their difference are:

$${}^ES_0 = \begin{pmatrix} 1&0&0 \\ 0&0&0 \\ 0&0&0 \end{pmatrix}; \quad {}^ES_d = \begin{pmatrix} 3&1&0 \\ 1&0&1 \\ 0&1&0 \end{pmatrix}; \quad {}^ES = {}^ES_d - {}^ES_0 = \begin{pmatrix} 2&1&0 \\ 1&0&1 \\ 0&1&0 \end{pmatrix}$$

(rows ordered $[C\;R\;S]$). The proof of Prop. 10.3.4 poses the following matrices, where the ordering on rows is $[C\;R\;S]$ and on columns $[p_1\;p_2\;p_3\;p_4]$:

$${}^EA^i_{1k} = \begin{pmatrix} 0&0&0&1 \\ 0&2&2&0 \\ 0&0&0&0 \end{pmatrix}; \quad {}^EA^i_{2k} = \begin{pmatrix} 0&2&2&0 \\ 0&0&0&0 \\ 1&1&0&0 \end{pmatrix}; \quad {}^EA^i_{3k} = \begin{pmatrix} 0&0&0&0 \\ 1&1&0&0 \\ 0&0&0&0 \end{pmatrix}.$$

These matrices act on matrix $x = \left(x^p_q\right)$, $p \in \{1, 2, 3, 4\}$, $q \in \{1, 2, 3\}$, to obtain:

$${}^ES^i_1 = \sum_{k=1}^{4} {}^EA^i_{1k}\, x^k_1: \quad x^4_1 = 2, \quad 2x^2_1 + 2x^3_1 = 1, \quad 0 = 0$$
$${}^ES^i_2 = \sum_{k=1}^{4} {}^EA^i_{2k}\, x^k_2: \quad 2x^2_2 + 2x^3_2 = 1, \quad 0 = 0, \quad x^1_2 + x^2_2 = 1$$
$${}^ES^i_3 = \sum_{k=1}^{4} {}^EA^i_{3k}\, x^k_3: \quad 0 = 0, \quad x^1_3 + x^2_3 = 1, \quad 0 = 0 \qquad (10.8)$$

Recall that $x$ must satisfy:

$$x^1_1 = x^1_2 = x^1_3; \quad x^2_1 = x^2_2 = x^2_3; \quad x^3_1 = x^3_2 = x^3_3; \quad x^4_1 = x^4_2 = x^4_3.$$

A contradiction is derived, for example, with equations $x^2_3 = x^2_2$, $1 = x^1_3 + x^2_3$, $x^3_2 = x^3_3$ and $1 = 2x^2_2 + 2x^3_2$, whose left-hand side is always even.

Remark. If there is no development tool handy and you need to write equations (10.8), it is useful to remember the following rules of thumb:

• The subscript of $S$ coincides with the subscripts of all $x$ and it is the terminal node for edges. Hence, there will be as many equations in $S_j$ as types of terminal nodes to which modified edges arrive. The first thing to do is a list of these nodes.
• For a fixed $S_j$, there will be as many equations in the vector of variables as initial nodes for modified edges. The terminal node is $j$ in this case.
• The superscript of $x$ is the production. To derive each equation just count how many edges of the same type are added and deleted and sum up.

For a larger example see Sec. A.4.

It is straightforward to derive a unique equation for reachability which considers both nodes and edges, i.e. equations (10.3) plus (10.4). This is accomplished by extending the incidence matrix $M$ from $M: E \to E$ to $M: E \oplus N \to E$ (from $M_{m \times m}$ to $M_{m \times (m+1)}$), where column $m+1$ corresponds to nodes.

Definition 10.3.3 (Incidence Tensor) Let $G = ({}_0M, \{p_1, \dots, p_n\})$ be a Matrix Graph Grammar. The incidence tensor $A^i_{jk}$ with $i \in \{1, \dots, m\}$ and $j \in \{1, \dots, m+1\}$ is defined by eq. (10.4) if $1 \le j \le m$ and by eq. (10.3) if $j = m+1$.

The top left index in our notation works as follows: ${}^NA$ refers to nodes, ${}^EA$ to edges and $A$ to their coupling. Note that a similar construction could be carried out for productions if it were desired to consider nodes and edges in a single expression. Almost all the theory as developed so far would remain without major notational changes. The exception would probably be compatibility, which would need to be rephrased.

An immediate extension of Lemma 10.3.2 is:

Proposition 10.3.4 (State Equation for Fixed MGG) Let notation be as above. A necessary condition for state ${}_dM$ to be reachable (from state ${}_0M$) is:

$$M^i_j = \sum_{k=1}^{n} A^i_{jk}\, x^k. \qquad (10.9)$$

Proof. Equation (10.9) is a generalization of eq. (10.1) for Petri nets. If there is just one place of application for each production, then the state equation as stated for Petri nets is recovered. ∎

10.4 Floating Matrix Graph Grammars

Our intention now is to relax the first property of Petri nets (Sec. 10.2, p.
237) and allow production application even though some dangling edge might appear (see Chap. 6). The plan is carried out in two stages, which correspond to the subsections that follow, according to the classification of ε-productions in Sec. 6.4.

In Matrix Graph Grammars, if applying a production $p_0$ causes dangling edges, then the production can be applied, but a new production (a so-called ε-production) is created and applied first. In this way a sequence $p_0; p_{\varepsilon_0}$ is obtained, with the restriction that $p_{\varepsilon_0}$ is applied at a match that includes all nodes deleted by $p_0$. See Chap. 6 for details.

Inside a sequence, a production $p_0$ that deletes an edge or node can have an external or internal behaviour, depending on the identifications carried out by the match. Following Chap. 6, if the deleted element was added or used by a previous production, the production is labeled as internal (according to the sequence). On the other hand, if the deleted element is provided by the host graph and it is not used until $p_0$'s turn, then $p_0$ is an external production. Their properties are (somewhat) complementary: while external ε-productions can be advanced and composed to eventually get a single initial production which adapts the host graph to the sequence, internal ε-productions are more static⁵ in nature. On the other hand, internal ε-productions depend on the productions themselves and are somewhat independent of the host graph, in contrast to external ε-productions. Note, however, that internal nodes can be unrelated if, for example, matchings identified them in different parts of the host graph, thus becoming external.
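The external/internal distinction above can be sketched procedurally. This is a hypothetical illustration only: representing each production by the sets of elements it uses, adds and deletes is an assumption made for the sketch, not the book's actual encoding of productions.

```python
# Sketch of the classification of Chap. 6: an element deleted along a
# sequence (applied left to right here) is "internal" if a previous
# production added or used it, and "external" if it is provided by the
# host graph and untouched until its deletion.

def classify(sequence, host):
    """sequence: list of (used, added, deleted) element sets."""
    seen = set()                      # elements added or used so far
    labels = []
    for used, added, deleted in sequence:
        for elem in sorted(deleted):
            if elem in seen:
                labels.append((elem, "internal"))
            elif elem in host:
                labels.append((elem, "external"))
        seen |= used | added
    return labels

# p1 uses node 'n' and adds edge 'a'; p2 deletes 'a' (added before, hence
# internal) and the host-graph edge 'h' (hence external).
seq = [({"n"}, {"a"}, set()),
       (set(), set(), {"a", "h"})]
print(classify(seq, host={"n", "h"}))  # -> [('a', 'internal'), ('h', 'external')]
```

In this toy run the deletion of 'h' could be advanced to the beginning of the sequence, while the deletion of 'a' cannot, mirroring the complementary behaviour described above.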
10.4.1 External ε-productions

The main property of external ε-productions, compared to internal ones, is that they act only on edges that appear in the initial state, so their application can be advanced to the beginning of the sequence. In this situation, the first thing to know for a given Matrix Graph Grammar $G = ({}_0M, \{p_1, \dots, p_n\})$ – with at most external ε-productions – when applied to ${}_0M$ is the maximum number of edges that can be erased from its initial state. The potential dangling edges (those with any incident node to be erased) are given by

$$e = \bigvee_{k=1}^{n} \left(N^e_k \otimes N^e_k\right), \qquad (10.10)$$

which is closely related to the nihilation matrix introduced in Sec. 4.4, in particular in Lemma 4.4.2.

Proposition 10.4.1 A necessary condition for state ${}_dM$ to be reachable (from state ${}_0M$) is:

$$M^i_j = \sum_{k=1}^{n} A^i_{jk}\, x^k + b^i_j, \qquad (10.11)$$

with the restriction $-\left({}_0M \wedge e\right)^i_j \le b^i_j \le 0$.

⁵ Maybe it is possible to advance their application but, for sure, not to the beginning of the sequence.

Proof (Sketch). According to Sec. 6.4, all ε-productions can be advanced to the beginning of the sequence and can be composed to obtain a single production, adapting the initial digraph before applying the sequence; in some sense this interprets matrix $b$ as production number $n+1$ in the sequence (the first to be applied). Because it is not possible to know in advance the order of application of the productions, all we can do is to provide bounds for the number of edges to be erased. This is in essence what $b$ does. ∎

Note that equation (10.9) in Prop. 10.3.4 is recovered from (10.11) if there are no external ε-productions.

Example. Consider the initial and final states shown in Fig. 10.7. The productions of previous examples are used, but two of them are modified ($p_2$ and $p_3$).

Fig. 10.7. Initial and Final States (Based on Productions of Fig.
10.4)

In this case there are sequences that transform state ${}_0S$ into ${}_dS$, for example, $s_4 = p_4; p'_3; p_1; p'_2$. Note that the problems are in edges $(1{:}S, 1{:}R)$ and $(1{:}C, 1{:}R)$ of the initial state: router 1 is able to receive packets from server 1 and client 1, but not to send them.

Next, the matrices for the states and their difference are calculated. The first three columns correspond to edges (first to clients, second to routers and third to servers) and the fourth to nodes, which has been split off by a vertical line for illustrative purposes only. The ordering of nodes is $[C\;R\;S]$ both by columns and by rows.

$${}_0S = \left(\begin{array}{ccc|c} 1&1&0&3 \\ 2&0&0&2 \\ 0&2&0&1 \end{array}\right); \quad {}_dS = \left(\begin{array}{ccc|c} 2&1&0&3 \\ 3&0&1&2 \\ 0&2&0&1 \end{array}\right); \quad S = {}_dS - {}_0S = \left(\begin{array}{ccc|c} 1&0&0&0 \\ 1&0&1&0 \\ 0&0&0&0 \end{array}\right)$$

The incidence tensors for every production (recall that $p_2$ and $p_3$ are as in Fig. 10.7) have the form

$$A^i_{j1} = \left(\begin{array}{ccc|c} 0&0&0&0 \\ 0&0&1&1 \\ 0&1&0&0 \end{array}\right); \quad A^i_{j2} = \left(\begin{array}{ccc|c} 0&0&0&0 \\ 0&0&0&-1 \\ 0&0&0&0 \end{array}\right)$$

$$A^i_{j3} = \left(\begin{array}{ccc|c} 0&1&0&0 \\ 1&0&0&0 \\ 0&0&0&0 \end{array}\right); \quad A^i_{j4} = \left(\begin{array}{ccc|c} 1&0&0&0 \\ 0&0&0&0 \\ 0&0&0&0 \end{array}\right)$$

Although it does not seem to be strictly necessary here, more information is kept and calculations are more flexible if production $p_4$ is split into the part that deletes messages and the part that adds them, $p_4 = p_4^+; p_4^-$. Refer to the comments in Sec. 10.2.

$$A^i_{j4^-} = \left(\begin{array}{ccc|c} -1&0&0&0 \\ 0&0&0&0 \\ 0&0&0&0 \end{array}\right); \qquad A^i_{j4^+} = \left(\begin{array}{ccc|c} 2&0&0&0 \\ 0&0&0&0 \\ 0&0&0&0 \end{array}\right)$$

As in the example of Sec. 10.3, the following matrices are more appropriate for calculations (columns ordered $[p_1\;p_2\;p_3\;p_4^-\;p_4^+]$):

$$A^i_{1k} = \begin{pmatrix} 0&0&0&-1&2 \\ 0&0&1&0&0 \\ 0&0&0&0&0 \end{pmatrix}; \quad A^i_{2k} = \begin{pmatrix} 0&0&1&0&0 \\ 0&0&0&0&0 \\ 1&0&0&0&0 \end{pmatrix}$$

$$A^i_{3k} = \begin{pmatrix} 0&0&0&0&0 \\ 1&0&0&0&0 \\ 0&0&0&0&0 \end{pmatrix}; \quad A^i_{4k} = \begin{pmatrix} 0&0&0&0&0 \\ 1&-1&0&0&0 \\ 0&0&0&0&0 \end{pmatrix}$$

If equation (10.9) were directly applied, we would get $x^1 = 0$ and $x^1 = 1$ (third row of $A^i_{2k}$ and second row of $A^i_{3k}$), deriving a contradiction.
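A contradiction like this one can be found mechanically by testing the necessary condition (10.9) over small Parikh vectors. The sketch below is an illustration under stated assumptions: the function name, the search bound and the toy two-type grammar are invented for the example and are not the book's client-server data.

```python
# Brute-force check of the state equation (10.9): enumerate non-negative
# Parikh vectors x with entries up to 'bound' and keep those satisfying
# M[i][j] == sum_k A[i][j][k] * x[k]. An empty result certifies that the
# target state is unreachable (the condition is only necessary, so a
# non-empty result does not prove reachability).
from itertools import product

def state_equation_solutions(M, A, n, bound=3):
    m = len(M)
    cols = len(M[0])
    sols = []
    for x in product(range(bound + 1), repeat=n):
        if all(M[i][j] == sum(A[i][j][k] * x[k] for k in range(n))
               for i in range(m) for j in range(cols)):
            sols.append(x)
    return sols

# Toy data (2 node types, 2 productions): p1 adds one (0,0)-edge,
# p2 adds one (0,0)-edge and one (1,0)-edge.
A = [[[1, 1], [0, 0]],
     [[0, 1], [0, 0]]]
M = [[3, 0],
     [1, 0]]
print(state_equation_solutions(M, A, n=2))  # -> [(2, 1)]
```

For the modified client-server grammar above, the same enumeration would return no solution at all, which is exactly the contradiction derived from (10.9) and the reason the relaxation by matrix $b$ is introduced next.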
The variations permitted for the initial state are given by the matrix

$${}_0M \wedge e = \left(\begin{array}{ccc|c} 0&\alpha_{12}&0&0 \\ \alpha_{21}&0&0&0 \\ 0&\alpha_{32}&0&0 \end{array}\right) \qquad (10.12)$$

with $\alpha_{12} \in \{0, 1\}$ and $\alpha_{21}, \alpha_{32} \in \{0, 1, 2\}$. Setting $b^1_2 = -1$ and $b^3_2 = -1$ (one edge $(C, R)$ and one edge $(S, R)$ removed), the system to be solved is

$$\left(\begin{array}{cccc} 1&1&0&0 \\ 1&0&1&0 \\ 0&1&0&0 \end{array}\right) = \left(\begin{array}{cccc} 2x^{4^+} - x^{4^-} & x^3 & 0 & 0 \\ x^3 & 0 & x^1 & x^1 - x^2 \\ 0 & x^1 & 0 & 0 \end{array}\right)$$

with solution $x^1 = x^2 = x^3 = x^{4^-} = x^{4^+} = 1$, $s_4$ being one of its associated sequences. Notice that the restriction in Prop. 10.4.1 is fulfilled; see equation (10.12).

In the previous example, as we knew a sequence ($s_4$) answering the reachability problem, we fixed matrix $b$ directly to show how Prop. 10.4.1 works. Although this will not normally be the case, the way to proceed is very similar: relax matrix $M$ by subtracting $b$, find a set of solutions $\{x, b\}$ and check whether the restriction for matrix $b$ is fulfilled or not.

10.4.2 Internal ε-productions

Internal ε-productions delete edges appended or used by productions preceding them in the sequence. In this subsection we first limit ourselves to sequences which may have only internal ε-productions and, by the end of the section, we will put together Prop. 10.4.1 from Subsec. 10.4.1 with the results derived here to state Theorem 10.4.3 for floating Matrix Graph Grammars.

The proposed way to proceed is analogous to that of external ε-productions. The idea is to allow some variation in the amount of edges erased by every production, but this variation is constrained depending on the behaviour (definition) of the rest of the rules. Unfortunately, not so much information is gathered in this case, and what we are basically doing is ignoring this part of the state equation.

Define $h^i_{jk} = A^i_{jk} \wedge \left(e \otimes I_k\right) = -\max\left(\{-A \left(e \otimes I\right), 0\}\right)$, where vector $I_k = [1, \dots, 1]_{(1,k)}$.⁶

Proposition 10.
4.2 A necessary condition for state ${}_dM$ to be reachable (from state ${}_0M$) is:

$$M^i_j = \sum_{k=1}^{n} \left(A^i_{jk} + V^i_{jk}\right) x^k \qquad (10.13)$$

with the restriction $h^i_{jk} \le V^i_{jk} \le 0$.

⁶ $e \otimes I_{(k)}$ defines a tensor of type (1,2) which “repeats” matrix $e$ $k$ times.

Proof. In some sense, external ε-productions are the limiting case of internal ε-productions and can be seen almost as a particular case: as ε-productions do not interfere with previous productions, they have to act exclusively on the host graph. ∎

The full generalization of the state equation for non-restricted Matrix Graph Grammars is given in the next theorem.

Theorem 10.4.3 (State Equation) With notation as above, a necessary condition for state ${}_dM$ to be reachable (from state ${}_0M$) is

$$M^i_j = \sum_{k=1}^{n} \left(A^i_{jk} + V^i_{jk}\right) x^k + b^i_j, \qquad (10.14)$$

with $b^i_j$ satisfying the restrictions specified in Prop. 10.4.1 and $V$ satisfying those in Prop. 10.4.2.

Proof. Immediate, combining Props. 10.4.1 and 10.4.2. ∎

One interesting possibility of eq. (10.14) is that we can specify whether productions acting on some edges must have a fixed or floating behaviour, depending on whether variances are permitted or not. Strengthening the hypotheses, formula (10.14) becomes those already studied for floating grammars with internal ε-productions ($b = 0$), with external ε-productions ($V = 0$), for fixed grammars (from multilinear to linear transformations) or for Petri nets, fully recovering the original form of the state equation.

10.5 Summary and Conclusions

The starting point of the present chapter is the study of Petri nets as a particular case of Matrix Graph Grammars. We have adapted concepts of Matrix Graph Grammars to Petri nets, such as the initial marking. Next, reachability and the state equation have been reformulated and extended in the language of this approach, trying to provide tools for grammars as general as possible.
Matrix Graph Grammars have also benefited from the theory developed for Petri nets: through the generalized state equation (10.14) it is possible to tackle problem 4. Despite the fact that the more general the grammar is, the less information the state equation provides, Theorem 10.4.3 can be considered a full generalization of the state equation. Equation (10.14) is more accurate as the ratio of the number of node types to the number of nodes approaches one. Hence, in general, it will be of little practical use if there are many nodes but few types.

Although the use of vector spaces (as in Petri nets) and multilinear algebra is almost straightforward, many other algebraic structures are available to improve the results herein presented. For example, Lie algebras seem a good candidate if we think of the Lie bracket as a measure of commutativity (recall Subsec. 10.1, in which we saw that this is one of the main problems of using linear combinations). It should be possible to extend the Lie bracket a little to consider two sequences instead of just two productions.⁷ With the theory of Chap. 7 the case of one production and one sequence can be directly addressed.

Other Petri net concepts have algebraic characterizations and can be studied with Matrix Graph Grammars. Also, it is possible to extend their definition from Petri nets to Matrix Graph Grammars. A short summary of some of them follows:

• Conservative Petri nets are those for which the sum of the tokens remains constant. For example, think of tokens as resources of the problem under consideration.
• An invariant is some quantity that does not change during run time. They are divided into two main families: place invariants and transition invariants.
• Liveness studies whether transitions in a Petri net can be fired.
There are five levels (L0 to L4) with algebraic characterizations of necessary conditions.
• Boundedness of a Petri net studies the number of tokens in places (in particular, whether this number remains bounded). Sufficient conditions are known.

Note that reachability can be directly used to study invariance of states under sequences. If the initial state must not change, set the initial and the final states as one and the same. This way, the state equation must be equalized to zero. This is related to termination, because if there are sequences that leave some state invariant, then there are cycles in the execution of the grammar, preventing termination.

⁷ If sequences are coherent, composition can be used to recover a single production.

The book finishes in Chap. 11, a summary with further research proposals. Appendix A presents a fully worked out example that illustrates all relevant concepts presented in this dissertation in a more or less realistic case. Its main objective is to show the use and practical utility of compatibility, coherence, minimal and negative initial digraphs, applicability, sequential independence and reachability. In particular, properties of the system related to problems 1, 3 and 4 are addressed.

11 Conclusions and Further Research

This chapter closes the main body of the book. There is still Appendix A. It includes a detailed real-world case study in which much of the theory developed so far is applied. This chapter is organized in two sections. In Sec. 11.1 we summarize the theory and highlight some topics that can be further investigated with Matrix Graph Grammars as developed so far. Sec. 11.2 exposes a long-term program to address termination, confluence and complexity from the point of view of Matrix Graph Grammars.
11.1 Summary and Short Term Research

In this book we have presented a new theory to study graph dynamics. Also, increasingly difficult problems of graph grammars have been addressed: applicability, sequential independence and reachability.

First, two characterizations of actions over graphs (known as productions or grammar rules) are defined, one emphasizing their static part and one their dynamics. To some extent it is possible to study these actions without taking into account the initial state of the system. Hence, information on the grammar can be gathered at design time, being potentially useful during runtime. Nodes and edges are considered independently, although related by compatibility. It should be possible, using the tensorial construction of Chap. 10, to define a single (algebraic) structure and set compatibility as one of its axioms (a property to be fulfilled).

Sequences of productions are studied in great detail as they are responsible for the dynamics of any grammar. Composition, parallelism and true concurrency have also been addressed. The effect of a rule on a graph depends on where the rule is applied (matching). In Matrix Graph Grammars, matches are injective morphisms. As different productions in a sequence can be applied at different places non-deterministically, marking links parts of productions, guaranteeing their applicability on the same elements. It is possible to define both matching and marking as operators acting on productions.

Production application may have side effects, e.g. the deletion of dangling edges. A special type of production, known as ε-production, appears in order to keep compatibility. It is shown that they are the output of some operator acting on productions, as well as matching and marking.
Operators for compatibility, matching and marking can be translated into productions of a sequence.¹ This new perspective eases their analysis.

Minimal and negative initial digraphs are respectively generalized to initial and negative digraph sets. Two characterizations of applicability are given: one depends on coherence and compatibility and the other on minimal and negative initial digraphs.

Sequential independence is closely related to commutativity, but with the possibility to consider more than two elements at a time. This has been studied in the case of one production being advanced or delayed an arbitrary (but finite) number of positions. One interesting question is whether two sequences need the same initial elements or not, especially when one is a permutation of the other. G-congruence and congruence conditions tackle this point, again for one production being advanced or delayed a finite number of positions inside a sequence. An interesting topic for further study is to obtain similar results but considering moving blocks of productions instead of a single production.

Graph constraints and particularly application conditions are of great interest, mainly for two reasons: first, the left hand side and the nihilation matrix are particular cases, and second, it is possible to deal with multidigraphs without any major modifications of the theory. We have seen that application conditions are a particular case of graph constraints and that a graph constraint can be reduced to an application condition in the presence of a production.

¹ Compatibility is a must. The operator may act appending new ε-productions, recovering a floating behavior, or it can be “deactivated”, getting a fixed behavior. Throughout this book we have focused on floating grammars, which are more general.
Application conditions can again be seen as operators acting on productions. This, once more, means that they are equivalent to sequences of a certain type. Among other things, this reduces the study of consistency of application conditions to that of applicability. As it is possible to transform preconditions into postconditions and back again, they are in some sense delocalized in a production. Although this is sketched in some detail in Chap. 9, no concrete theorem is established concerning the possibility to move application conditions among productions inside a sequence. We do not foresee, to the best of our knowledge, any special difficulty in addressing this topic with the theory developed so far. This would be one application of sequential independence – problem 3 – to application conditions.

Finally, in order to consider reachability – problem 4 – Petri nets are presented as a particular case of Matrix Graph Grammars. From this perspective, notions of Matrix Graph Grammars like the minimal initial digraph are directly applied to Petri nets. Also, it is interesting that concepts and results from Petri nets can be generalized to be included in Matrix Graph Grammars; precisely, one example of this is reachability. Some other concepts can also be investigated, such as liveness, boundedness, etc., and are left for future work.

For our research in reachability we have almost directly generalized previous approaches (vector spaces) to reachability by using tensor algebra. It is worth studying other algebraic structures such as Lie algebras. Also, our study of reachability has not taken into account the nihilation matrix nor application conditions, two other possible directions for further research.

In our opinion, the main contribution of this book is the novelty of the graph grammar representation, simple and powerful.
It naturally brings in several branches of mathematics that can be applied to Matrix Graph Grammars, allowing a potential use of advanced results to solve old and new problems: first and second order logics, group theory, tensor algebra, graph theory, category theory and functional analysis.

11.2 Long Term Research Program

On the practical side, as Appendix A shows, some tasks need to be automated to ease further research. Manipulations can get rather tedious and error-prone. The development or improvement of a tool such as AToM³ would be very valuable. Besides, a good behavior of an implementation of Matrix Graph Grammars is expected.

At a more theoretical level we propose to study three other increasingly difficult problems: termination, confluence and complexity. We think that the theory developed in this book can be useful. See Fig. 11.1.

Fig. 11.1. Diagram of Problem Dependencies

Termination, in essence, asks whether there is a solution for a given problem (if some state is reached). In other branches of mathematics this is the well-known concept of existence. Reachability with some improvements can be of help in two directions. First, starting in some initial state, if for some sequence of productions some invariant state is reached, then the grammar cannot be terminating (as it enters a cycle as soon as that state is reached). Second, to check the invariance of a given state (whether there exists some sequence that leaves the graph unaltered), the state equation can also be used by equating the initial and final states.

If we have a terminating grammar we may wonder whether there is a single final state or more than one: confluence. In other branches of mathematics this is the well-known concept of uniqueness. Sequential independence can be used in this case.
If a grammar is terminating and confluent, the next natural question seems to be how much it takes to get to its final state. This is complexity, which can also be addressed using Matrix Graph Grammars. It is not difficult to interpret Matrix Graph Grammars as a new model of computation, just as Boolean circuits [79] or Turing machines [58]. This is currently our main direction of research. See [67] for some initial results.

The main concept addressed in this book is sequentialization, whose complexity is encoded in the classes P, NP and, more generally, in the polynomial hierarchy, PH. See [58] for a comprehensive introduction to this topic. Notice that there are two properties that make Matrix Graph Grammars differ from standard Turing machines: their potential non-uniformity (shared with Boolean circuits) and the use of an oracle, in its strongest version, whose associated decision problem is NP-complete.

Non-uniformity is widely addressed in the theory of Boolean circuits. The same ideas, possibly with some changes, can be applied to Matrix Graph Grammars. The strongest version of Matrix Graph Grammars as introduced here uses an oracle whose associated decision problem is NP-complete: the subgraph isomorphism problem, SI, to match the left hand side of a production in the host graph. If problems that need to distinguish lower-level complexity classes (assuming P ≠ NP) such as those in P are considered, it is possible to restrict ourselves to some proper submodel of computation. For example, the match operation can be forced to use GI instead.²

Limitations on matching are not the only natural submodels of Matrix Graph Grammars. The permitted operations can be constrained, for example forbidding the addition and deletion of nodes (this would be closely related to non-uniformity and the use of a GI-complete problem rather than SI).
Also, we can act on the set of permitted graphs to derive submodels of computation. For example, consider only those graphs with a single incoming and a single outgoing edge in every node.³

² GI, Graph Isomorphism, is widely believed not to be NP-complete, though this is still a conjecture. Problems that can be reduced to GI define the complexity class GI.
³ By the way, what standard and very well-known mathematical structure is isomorphic to these graphs?

A Case Study

This appendix presents a fully worked-out example that illustrates many of the concepts and results of this book (more conceptual aspects such as functional representations, adjoints and the like are omitted in this appendix). Although the aim of Matrix Graph Grammars is to be a theoretical tool for the study of graph grammars and graph transformation systems, we will see that it is also of practical interest. The case study presented here tries to be simple enough to be approached with paper and pencil but complex enough to look realistic. As will be noticed throughout this appendix, Matrix Graph Grammars (as well as any approach to graph transformation) encourage the definition of a particular language to solve a particular problem. These are known as domain-specific languages (DSL). See [35].

Section A.1 presents an assembly line with four types of machines (assembler, disassembler, quality and packaging), one or more operators and some items to process. Section A.2 presents some sequences and derivations, together with possible states of the system. Section A.3 tackles minimal and negative initial digraphs and G-congruence. As we progress, the example will be enlarged to be more detailed. Section A.4 deals with applicability, sequential independence, reachability and confluence. Graph constraints and application conditions are exemplified in Sec. A.5.
Section A.6 returns to derivations, adding and modifying productions. Dangling edges and their treatment with ε-productions will show up throughout the case study.

A.1 Presentation of the Scenario

In this section our sample scenario is set up. Some basic concepts will be illustrated: matrix representation of graphs and productions (Sec. 4.1), compatibility (Secs. 2.3, 4.1 and 5.3), completion (Sec. 4.2) and the nihilation matrix (Sec. 4.4).

Our initial assembly line will consist of four machines that take as input one or more items and output one or more items. Depending on the machine, items are processed, transforming them into other items, or some decision is taken (reject or accept items) with no modification. There are four types of items, item1–item4. One assembly machine (named assembler, connected to two input conveyors) processes one piece of item1 and one piece of item2 to output, on another conveyor, one piece of type item3. There is a quality assurance machine, quality, that checks whether item3 fulfills certain quality standards. If it does, then item3 is accepted and packed to further produce item4 through a packaging machine. Otherwise, it is rejected and recycled through machine disassembler. Machines need the presence of an operator in order to work properly. Elements are graphically represented in Fig. A.1.

Fig. A.1. Graphical Representation of System Actors

In this case study types are those in Fig. A.1. There can be more than one element of each type, e.g. there are six elements of type conveyor in Fig. A.6, which shows a snapshot of the state of an example assembly line. For typing conventions refer to the comments on the example on p. 74. Note that for now conveyors have infinite load capacity, elements in a conveyor are not ordered and one operator can simultaneously manage two or more machines.
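The matrix representation announced above can be sketched in a few lines of code. This is a minimal illustration (not the book's implementation): a simple digraph stored as a Boolean adjacency matrix in the spirit of Sec. 4.1, with an assumed node ordering.

```python
import numpy as np

# Illustrative node labels and ordering (an assumption of this sketch).
nodes = ["1:item1", "1:conv", "1:machA", "1:op"]
idx = {n: i for i, n in enumerate(nodes)}

def adjacency(edges):
    """Boolean edge matrix: entry (i, j) is True iff there is an edge i -> j."""
    m = np.zeros((len(nodes), len(nodes)), dtype=bool)
    for src, dst in edges:
        m[idx[src], idx[dst]] = True
    return m

# item1 sits on a conveyor that feeds the assembler; an operator mans it.
L = adjacency([("1:item1", "1:conv"),
               ("1:conv", "1:machA"),
               ("1:op", "1:machA")])
print(L.astype(int))
```

All the Boolean operations used later (∨, ∧, complement) then become element-wise array operations on such matrices.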
It should be desirable that one operator might look after different machines but only one at a time. This can be guaranteed only with graph constraints, although if the initial state fulfills this condition and productions observe this fact, there should be no problem. We will return to this point in Sec. A.5.

Fig. A.2. DSL Syntax Specification

When dealing with DSLs, it is customary to specify their syntax through a meta-model. We will restrict connections among the different actors of the system:

• Operators can only be connected to machines (by the end of Sec. A.2 this will be relaxed).
• Items can only be connected to conveyors (until Sec. A.5, in which they will be allowed to be connected to other items).
• Conveyors can only be connected to machines or to other conveyors.
• Machines can only be connected to conveyors (by the end of Sec. A.2 this will be relaxed).

These restrictions have a natural graph representation (see Fig. A.2), which is sometimes referred to as typed graphs [10]. Notice that for simplicity all actual types have been omitted. For example, there should be four nodes for the different types of items (item1, ..., item4) and the same for the machines.

Now we describe the actions that can be performed. These are the grammar rules. The state machine will evolve according to them. See Fig. A.3 for the basic productions. We will enlarge or amend them and add some others in future sections. Machines are not fully automatic, so in these four productions one operator is needed. The four basic actions are assemble, disassemble, certify and pack. They correspond to productions assemble, recycle, certify and pack.

Fig. A.3. Basic Productions of the Assembly Line

Identifications are obvious, so they have not been made explicit (numbers between different productions need not be related, i.e.
1:conv in production assem and 1:conv in certify can be identified differently in a host graph).

There are four rules that permit operators to change from one machine to another. This movement is cyclic (to make the grammar a little more interesting). A practical justification could be that the manager of the department obliges every operator passing near a machine to check whether there is any task pending, attending to it just in case. We will start with a single operator to avoid collapses. See grammar rules move2A, move2Q, move2P and move2D in Fig. A.4.

Fig. A.4. Productions for Operator Movement

The last set of productions specify machine and operator break-downs (the 'b' in front of the productions). Fortunately for the company, they can be fixed or replaced (the 'f' in front of the productions). See Fig. A.5 for the productions, where as usual H stands for the empty graph. In order to save some space we have summarized four rules (one per machine) by substituting the name of the machine with an X. This is notationally convenient, but we should bear in mind that there are four rules for machine break-down (bMachA, bMachQ, bMachP and bMachD) and another four for machine fixing (fMachA, fMachQ, fMachP and fMachD). Also, they can be thought of as abstract rules¹ or variable nodes as in Sec. 9.3. The total amount of grammar rules up to now is twenty.

Fig. A.5. Break-Down and Fixing of Assembly Line Elements

Here we face the problem of ε-productions for the first time. If a conveyor with two items breaks (disappears) due to rule bConv, there will be at least two dangling edges, one from its input machine and another to its output machine. These dangling edges could be avoided by defining one production per conveyor that takes them into account.
If the conveyor had any item, then the corresponding edge would also dangle. Again, this can be avoided if there is a limit on the number of pieces that a conveyor can carry, but a rule for each case is again needed.²

Another possibility for DPO-like graph transformation systems (what we have called fixed graph grammars) is to define a sort of subgrammar that takes care of potential dangling edges. This subgrammar's productions would be applied iteratively until no edge can dangle. This is a characteristic of fixed graph transformation systems and in some situations can be a bit annoying. If there is no limit to the number of items (or the limit is too high, e.g. a memory stack in a CPU's RAM), it is possible to use fixed graph grammars only to some extent. Thus, ε-productions are useful, at times essential, from a practical point of view, among other things to decrease the number of productions in a grammar (this probably eases grammar definition and maintenance and increases runtime efficiency).

¹ See reference [47].
² A rule for the case in which a conveyor has one item, another for the case in which the conveyor has two items, etcetera.

The matrix representation of these rules is almost straightforward according to Sec. 4.1. We will explicitly write the static (left and right hand sides) and dynamic representations (deletion, addition and nihilation matrices) of production assemble. Elements are ordered [1:item1 1:item2 1:conv 2:conv 3:conv 1:macA 1:op] for $L^E_{assem}$ and $L^N_{assem}$, i.e. element $(1,3)$ of matrix $L^E_{assem}$ is the edge that starts in node (1:item1) and ends in the first conveyor, (1:conv). The ordering for $R^E_{assem}$ and $R^N_{assem}$ is [1:item3 1:conv 2:conv 3:conv 1:macA 1:op].
Numbers in front of types are a means to distinguish between elements of the same type in a given graph (these are the numbers that appear in Fig. A.3).

$$L^E_{assem} = \begin{pmatrix} 0&0&1&0&0&0&0\\ 0&0&0&1&0&0&0\\ 0&0&0&0&0&1&0\\ 0&0&0&0&0&1&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&1&0&0\\ 0&0&0&0&0&1&0 \end{pmatrix}, \quad R^E_{assem} = \begin{pmatrix} 0&0&0&1&0&0\\ 0&0&0&0&1&0\\ 0&0&0&0&1&0\\ 0&0&0&0&0&0\\ 0&0&0&1&0&0\\ 0&0&0&0&1&0 \end{pmatrix}, \quad L^N_{assem} = \begin{pmatrix} 1\\1\\1\\1\\1\\1\\1 \end{pmatrix}, \quad R^N_{assem} = \begin{pmatrix} 1\\1\\1\\1\\1\\1 \end{pmatrix}.$$

For $e^E$, $e^N$, $r^E$ and $r^N$ we have the same ordering of elements.

$$e^E_{assem} = \begin{pmatrix} 0&0&1&0&0&0&0\\ 0&0&0&1&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0 \end{pmatrix}, \quad r^E_{assem} = \begin{pmatrix} 0&0&0&1&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0 \end{pmatrix}, \quad e^N_{assem} = \begin{pmatrix} 1\\1\\0\\0\\0\\0\\0 \end{pmatrix}, \quad r^N_{assem} = \begin{pmatrix} 1\\0\\0\\0\\0\\0 \end{pmatrix}.$$

The production is defined $R = p(L) = r \vee \overline{e}\,L$, both for edges and for nodes. To operate, it is mandatory to complete the matrices. See equation (A.2) for the implicit ordering of elements.

$$\underbrace{\begin{pmatrix} 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&1&0&0\\ 0&0&0&0&0&0&1&0\\ 0&0&0&0&0&0&1&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&1&0&0\\ 0&0&0&0&0&0&1&0 \end{pmatrix}}_{R^E_{assem}} = \underbrace{\begin{pmatrix} 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&1&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0 \end{pmatrix}}_{r^E_{assem}} \vee\; \overline{\underbrace{\begin{pmatrix} 0&0&0&1&0&0&0&0\\ 0&0&0&0&1&0&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0 \end{pmatrix}}_{e^E_{assem}}} \wedge \underbrace{\begin{pmatrix} 0&0&0&1&0&0&0&0\\ 0&0&0&0&1&0&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&1&0\\ 0&0&0&0&0&0&1&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&1&0&0\\ 0&0&0&0&0&0&1&0 \end{pmatrix}}_{L^E_{assem}} \qquad \text{(A.1)}$$

The expression for nodes is similar. As pointed out in Sec. 9.4, using a construction similar to that of Sec. 10.3 (in the definition of the incidence tensor, 10.3.3) it should be possible to get a single expression for both nodes and edges instead of a formula for edges and another for nodes.
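The dynamic formula $R = r \vee (\overline{e} \wedge L)$ can be exercised directly on Boolean matrices. The sketch below is an illustration under assumed toy matrices (a 3-node graph chosen to be checkable by hand, not the book's example):

```python
import numpy as np

# Toy ordering (assumption): [item, conv, mach].
L = np.array([[0, 1, 0],     # item -> conv
              [0, 0, 1],     # conv -> mach
              [0, 0, 0]], dtype=bool)
e = np.array([[0, 1, 0],     # delete edge item -> conv
              [0, 0, 0],
              [0, 0, 0]], dtype=bool)
r = np.array([[0, 0, 0],
              [0, 0, 0],
              [0, 1, 0]], dtype=bool)   # add edge mach -> conv

def apply_production(L, e, r):
    """Right-hand side R = r ∨ (ē ∧ L): keep what is not deleted, add the rest."""
    return r | (~e & L)

R = apply_production(L, e, r)
print(R.astype(int))
```

The same one-liner applies unchanged to the node vectors, since ∨, ∧ and complement are element-wise.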
This might be of interest for implementations of Matrix Graph Grammars, as more compact expressions would be derived. We shall mainly concentrate on edges, because they define matrices instead of just vectors and all problems such as inconsistencies (dangling elements) come this way.

$$\underbrace{\begin{pmatrix} 0\\0\\1\\1\\1\\1\\1\\1 \end{pmatrix}}_{R^N_{assem}} = \underbrace{\begin{pmatrix} 0\\0\\1\\0\\0\\0\\0\\0 \end{pmatrix}}_{r^N_{assem}} \vee\; \overline{\underbrace{\begin{pmatrix} 1\\1\\0\\0\\0\\0\\0\\0 \end{pmatrix}}_{e^N_{assem}}} \wedge \underbrace{\begin{pmatrix} 1\\1\\0\\1\\1\\1\\1\\1 \end{pmatrix}}_{L^N_{assem}} \qquad \text{(A.2)}$$

with components ordered [1:item1 1:item2 1:item3 1:conv 2:conv 3:conv 1:machA 1:op].

Note that some elements in the node vectors are zero. This means that these nodes appear in the algebraic expressions but are not part of the graphs. The nihilation matrix in this case includes all edges incident to any node that is deleted, plus the edges that are added by production assem. See Lemma 4.4.2 for its calculation formula:

$$K_{assem} = \begin{pmatrix} 1&1&1&0&1&1&1&1\\ 1&1&1&1&0&1&1&1\\ 1&1&0&0&0&1&0&0\\ 1&1&0&0&0&0&0&0\\ 1&1&0&0&0&0&0&0\\ 1&1&0&0&0&0&0&0\\ 1&1&0&0&0&0&0&0\\ 1&1&0&0&0&0&0&0 \end{pmatrix} \begin{matrix} 1{:}item1\\ 1{:}item2\\ 1{:}item3\\ 1{:}conv\\ 2{:}conv\\ 3{:}conv\\ 1{:}machA\\ 1{:}op \end{matrix} \qquad \text{(A.3)}$$

Let's consider the sequence bOp;assem to see how formula (2.4) works to check compatibility (Props. 2.3.4 and 4.1.6). We can foresee a problem with edge (1:op, 1:machA), because the node disappears but not the edge. According to eq. (5.17), we need to check compatibility for the increasing set of sequences s1 = assem and s2 = bOp;assem. Note that the minimal initial digraph is the same for both sequences and coincides with the left hand side of assem. Sequence assem is compatible, as the output of production assem is a simple digraph again, i.e.
rule assemble is well defined:

$$\left\| \left( s_1\!\left(M^E_{assem}\right) \vee s_1\!\left(M^E_{assem}\right)^t \right) \odot \overline{s_1\!\left(M^N_{assem}\right)} \right\|_1 = \left\| \left( R^E_{assem} \vee \left(R^E_{assem}\right)^t \right) \odot \overline{R^N_{assem}} \right\|_1 = 0.$$

Thus, there is no problem with s1. Let's check s2. The operations are also easy for it. Note that $r_{bOp} \vee \overline{e_{bOp}}\,R^E_{assem} = R^E_{assem}$, so:

$$\left\| \left( s_2\!\left(M^E\right) \vee s_2\!\left(M^E\right)^t \right) \odot \overline{s_2\!\left(M^N\right)} \right\|_1 = \left\| \left( bOp\!\left(R^E\right) \vee bOp\!\left(R^E\right)^t \right) \odot \overline{bOp\!\left(R^N\right)} \right\|_1 = \left\| \left( R^E \vee \left(R^E\right)^t \right) \odot \overline{bOp\!\left(R^N\right)} \right\|_1 = 1.$$

This kind of formula does not only assert compatibility for the sequence; it also tells us which elements are problematic. In the previous equation we see that the final answer is 1 because of the element in position $(7,8)$.

In our case study as defined up to now, compatibility can only be ruined by productions starting with a 'b' (bOp, etcetera). Either an ε-production is appended or the result is not a simple digraph (not a graph, actually). Some information about compatibility can be gathered at design time, on the basis of required elements appearing on the left hand side of the productions, or elements added. For example, according to the productions considered so far any operator is connected to some machine, so if production bOp is applied it is very likely that some dangling edge will appear. Nihilation matrices can be automatically calculated, as well as the completion of rules with respect to each other.

A typical snapshot of the evolution of our assembly line can be found in Fig. A.6. It will be used in future sections as the initial state. Fig. A.6.
Snapshot of the Assembly Line

A.2 Sequences

One topic not addressed in this book is how rules in a graph grammar are selected for application to an actual host graph. There are several possibilities. To simplify the exposition, rules will be chosen randomly. As commented in Secs. 6.1 and 9.3, this is the first (out of two) source of non-determinism in graph transformation systems, in particular in Matrix Graph Grammars.

We will add another rule, reject, that discards one element once it has been assembled. It is represented in Fig. A.7.

Fig. A.7. Graph Grammar Rule reject

We have two comments on this rule. First, reject does not need the presence of an operator to act, but it may also be applied if an operator is on the machine. Second, if grammar rules are applied randomly following some probability distribution, elements will be rejected according to the selected probability measure.

Let's begin with one sequence that starts with one piece of type item1 and one of type item2 and produces one of type item4:

$$s_0 = pack;\, certify;\, assem \qquad \text{(A.4)}$$

which is compatible, as no production generates any dangling edge. Recall that compatibility also depends on the host graph: if item1 were connected to two different conveyors (should this make any sense), then rule assem would produce one dangling edge.

The minimal initial digraph of s0 can be calculated using eq. (5.1), $M_{s_0} = \bigtriangledown_1^3\left(\overline{r_x}\,L_y\right)$, where the order of nodes is [1:item1 1:item2 1:item3 1:item4 1:conv 2:conv 3:conv 4:conv 5:conv 1:machA 1:machQ 1:machP 1:op]. The completion we have performed identifies the operators in the productions as being the same. Also, element 1:conv in rule certify (Fig. A.3) becomes 3:conv, and 2:conv is now 4:conv. Similar manipulations have been performed for pack. Theorem 5.1.2 demands coherence in order to apply eq. (5.1), which is checked in (A.7).
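The minimal-initial-digraph formula can be expanded by hand for short sequences. The sketch below is an illustration of $M = \bigtriangledown_1^2\left(\overline{r_x}\,L_y\right)$ for a two-production sequence p2;p1 (p1 applied first): whatever p2 needs (L2) must already be in the initial graph unless p1 adds it (r1). The 3-node matrices are assumptions of this example, not the book's data.

```python
import numpy as np

def mid(L1, r1, L2):
    """Minimal initial digraph of p2;p1: M = L1 ∨ (r̄1 ∧ L2)."""
    return L1 | (~r1 & L2)

L1 = np.zeros((3, 3), dtype=bool); L1[0, 1] = True   # p1 needs edge 0 -> 1
r1 = np.zeros((3, 3), dtype=bool); r1[0, 2] = True   # p1 adds edge 0 -> 2
L2 = np.zeros((3, 3), dtype=bool)
L2[0, 2] = True; L2[1, 2] = True                     # p2 needs 0 -> 2 and 1 -> 2
M = mid(L1, r1, L2)
# Edge 0 -> 2 is supplied by p1, so only 0 -> 1 and 1 -> 2 must be present.
print(M.astype(int))
```

For longer sequences the same pattern nests: each $L_y$ is masked by the complements of all earlier additions.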
More attention will be paid to initial digraphs in the next section.

Fig. A.8. Minimal Initial Digraph and Image of Sequence s0

$$M_{s_0} = L_{assem} \vee \overline{r_{assem}}\,L_{certify} \vee \overline{r_{assem}}\,\overline{r_{certify}}\,L_{pack} \qquad \text{(A.5)}$$

The resulting matrix encodes the digraph depicted to the left of Fig. A.8.

The negative initial digraph is calculated using eq. (5.14), $K(s_0) = \bigtriangledown_1^3\left(\overline{e_x}\,K_y\right)$. It is not shown in any figure because it has many edges. In order to calculate $K(s_0)$, the nihilation matrices of productions assem (A.3), certify and pack are needed. Equation (4.48), $K = p\left(\overline{D}\right)$, can be used with the same ordering of nodes as for $M_{s_0}$:

$$K(s_0) = K_{assem} \vee \overline{e_{assem}}\,K_{cert} \vee \overline{e_{assem}}\,\overline{e_{cert}}\,K_{pack} \qquad \text{(A.6)}$$

The result of applying s0 to $M_{s_0}$ is given by eq. (5.10),

$$s_0\!\left(M^E_{s_0}\right) = \bigwedge_{i=1}^{3}\overline{e^E_i}\; M^E_{s_0} \vee \triangle_1^3\left(\overline{e^E_x}\,r^E_y\right),$$

and can be found to the right of Fig. A.8. For its calculation, it is possible to interpret s0 as a production, according to the remark that appears right after eq. (5.10).

Sequence s0 is coherent with respect to the identifications proposed in its minimal initial digraph (Fig. A.8). To see this, (4.42) in Theorem 4.3.5 can be used, which once simplified is eq. (4.38):

$$L_{cert}\,e_{assem} \vee L_{pack}\left(e_{assem}\,\overline{r_{cert}} \vee e_{cert}\right) \vee R_{assem}\left(e_{cert}\,\overline{r_{pack}} \vee r_{cert}\right) \vee R_{cert}\,r_{pack} = 0.$$
(A.7)

A very simple non-coherent sequence, assuming that both rules act on the same elements, is t0 = reject;certify. It is obvious, as both consume the same item. When its coherence is calculated, not only will we be informed that coherence fails, but also which elements are responsible for this failure.

Proposition 5.3.4 tells us that the rules in s0 can be composed if they are coherent and compatible. Let $c_0 = \left(L_c, e_c, r_c\right)$ be the rule so defined. Using equations (5.20) and (5.21) its matrices can be found. Also, taking advantage of previous calculations for the image and using Corollary 5.1.3, we can see that the composition is the one given in Fig. A.9, closely related to Fig. A.8.

Fig. A.9. Composition of Sequence s0

Let mv1 = move2A;move2D and mv2 = move2P;move2Q, and define the sequence s4 = pack;mv2;assem;mv1. Production pack is not sequentially independent of mv1 nor of mv2;assem. This is a simple example in which it is possible to advance productions inside sequences only if jumps of length strictly greater than one are allowed. To see that pack can be advanced past mv2;assem;mv1 it is necessary (see Theorem 7.2.2) to check coherence of both sequences and G-congruence. Coherence for the advancement of a single production inside a sequence is given by eq. (7.30) in Theorem 7.2.3, which should be zero. It is straightforward to check that:

$$e_{pack}\,\bigtriangledown_1^5\left(\overline{r_x}\,L_y\right) \vee R_{pack}\,\bigtriangledown_1^5\left(\overline{e_x}\,r_y\right) = 0. \qquad \text{(A.8)}$$

Fig. A.10. DSL Syntax Specification Extended

By increasing the number of productions, the system can be modelled in greater detail. For example, one operator can be busy or idle. The operator is busy if some action needs his attention. This will be represented by a self-loop attached to the operator under consideration. The same applies to a machine.
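Returning to the composition of a coherent, compatible sequence into a single rule (Prop. 5.3.4): the net-effect formulas below are a plausible reading sketched for illustration, not a verbatim copy of eqs. (5.20)–(5.21). An element added by the first rule and deleted by the second cancels out, and vice versa; the toy matrices are assumptions.

```python
import numpy as np

def compose(e1, r1, e2, r2):
    """Net effect of the two-rule sequence p2;p1 (p1 first), as one rule."""
    e_c = e1 | (e2 & ~r1)   # net deletions: p2 deletions of p1-added items cancel
    r_c = r2 | (r1 & ~e2)   # net additions: p1 additions deleted by p2 cancel
    return e_c, r_c

# Toy data (assumption): p1 adds edge a; p2 deletes edge a and adds edge b.
a, b = (0, 1), (1, 2)
e1 = np.zeros((3, 3), dtype=bool)
r1 = np.zeros((3, 3), dtype=bool); r1[a] = True
e2 = np.zeros((3, 3), dtype=bool); e2[a] = True
r2 = np.zeros((3, 3), dtype=bool); r2[b] = True
e_c, r_c = compose(e1, r1, e2, r2)
# The composite neither deletes nor re-adds edge a; it only adds edge b.
print(e_c.any(), bool(r_c[b]))
```

This cancellation is exactly why the composition of s0 in Fig. A.9 is smaller than the concatenation of its three rules.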
The syntax as a DSL of our grammar changes, because there can exist self-loops for machines and operators. This is not allowed in Fig. A.2. However, negative conditions are needed in the type graph (there can be self-loops in machines or operators, but not connections between two operators or between two machines). See Fig. A.10. We need to demand A1 for every single edge (using the decomposition operator $p_T$ of Sec. 8.3) and the nonexistence of matchings with A2 and A3.

Up to now a single operator could be in charge of more than one machine, so if there are edges from the operator to several machines, all machines may work simultaneously. Besides, there can be more than one operator working on the same machine. In a probably more realistic grammar, these two scenarios could not take place. These restrictions will be addressed in Sec. A.5.

The production process of any machine can be split into two phases: if there are enough elements to start its job, then the input pieces disappear and the machine and the operator become busy. After that, some output piece is produced and the machine and the operator become idle again. This is represented in the sequence of Fig. A.11. Note that assemble is the composition of assemble1 and assemble2.

Fig. A.11. Production assemble in Greater Detail

If we limit our Matrix Graph Grammar to deal with simple digraphs, we have a built-in application condition "for free". Even though one operator can still be in charge of several machines simultaneously, he will manage at most one machine at a time. Otherwise, two self-loops would be added, violating compatibility.

Application conditions are needed if we want to set restrictions on productions move. This can be permitted if the machine has a kind of "pause", so the machine (which is busy as it has a self-loop) can resume as soon as an operator moves to it.
It is not necessary to specify a restriction stating that a machine cannot start a job when the operator is busy, as the rule would try to append a second self-loop to the operator (something not allowed if we are limited to simple digraphs).

Sequences can be generated at design time to debug the grammar, or during runtime to force a set of events. They can also be automatically generated by application conditions, or can be associated to other concepts, such as reachability.

A.3 Initial Digraph Sets and G-Congruence

To calculate the initial digraph set of sequence s0 = pack;certify;assem we start with the maximal initial digraph M0, the digraph that unrelates all elements for the different productions. It is formed by the disjoint union of the left hand sides of the three productions in sequence s0. The rest of the elements Mi of the initial digraph set M(s0) are derived by identifying nodes and edges in M0. These identifications, however, cannot be carried out arbitrarily, because any $M_i \in \mathcal{M}(s_0)$ must satisfy eq. (5.1). Hence, there are identifications that make some elements unnecessary. For example, if the output conveyor of production certify is identified with the input conveyor of pack, then item3 (mandatory for the application of pack) is not needed anymore, because it will be provided by certify.

Fig. A.12. MID and Excerpt of the Initial Digraph Set of s0 = pack;certify;assem

For s0 we will label c1 and c2 the input conveyors of assemble and c3 its output conveyor. Similarly, we have c4 and c5 for certify and c6 and c7 for pack. Operators will be labelled accordingly, so o1 is the one in assemble, o2 in certify and o3 in pack. There are two machines for packing, m1 the one in certify and m2 in pack. See the graph to the left of Fig. A.13.
No identification prevents any other³ in M(s0), so the number of elements in M(s0) grows factorially. In this case, since there are 6 possible identifications, we have 720 possibilities. In Fig. A.12 a part of the initial digraph set can be found, to the right. The string that appears close to each arrow specifies the identification (top-bottom) performed to derive the corresponding initial digraph.

³ For an example in which not all identifications are permitted refer to Sec. 6.3, Fig. 6.7.

Initial digraph sets can be useful to debug a grammar. By choosing certain testing sequences it is possible to automatically select "extreme" cases in which as many elements as possible are identified or unrelated. For example, the development framework can tell that a single operator may manage all machines with the grammar as defined so far, but maybe this was not the intended behavior.

Fig. A.13. MID for Sequences s1 and s2

G-congruence and congruence conditions guarantee the sameness of the minimal initial digraph. They also provide information on which elements are spoiling this property. Consider the sequences s1 = reject;assemble;recycle and s2 = assemble;recycle;reject, where in s2 the application of production reject has been advanced two positions with respect to s1. The minimal initial digraphs of both sequences can be found in Fig. A.13. By the way, notice that the M(si) are invariants for these transformations, i.e. $s_i\left(M(s_i)\right) = M(s_i)$.

G-congruence is characterized in terms of congruence conditions in Theorem 7.1.6. Congruence conditions for the advancement of a single production inside a sequence are stated in Prop. 7.1.2, in particular in eq. (7.22).
Simplified and adapted for this case, with nodes ordered [1:item1 1:item2 1:item3 1:conv 2:conv 3:conv 4:conv 1:macA 1:macQ 1:macD 1:op]:⁴

$$CC = L_3\,\bigtriangledown_1^2\left(\overline{e_x}\,K_y\left(r_y \vee e_3\right)\right) \vee K_3\,\bigtriangledown_1^2\left(\overline{r_x}\,L_y\left(e_y \vee r_3\right)\right) = L_3\left[K_1\left(r_1 \vee e_3\right) \vee \overline{e_1}\,K_2\left(r_2 \vee e_3\right)\right] \vee K_3\left[L_1\left(e_1 \vee r_3\right) \vee \overline{r_1}\,L_2\left(e_2 \vee r_3\right)\right]$$

⁴ Where subscript 1 stands for rule recycle, subscript 2 for assemble and subscript 3 for reject.
Evaluated on the matrices of these three rules, the congruence condition yields a matrix that is zero everywhere except in row i3:

$$CC = \begin{pmatrix} 0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&1&1&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0 \end{pmatrix} \begin{matrix} i_1\\ i_2\\ i_3\\ 1c\\ 2c\\ 3c\\ 4c\\ 1mA\\ 1mQ\\ 1mD\\ 1op \end{matrix}$$

The congruence condition fails precisely in those elements that make both minimal initial digraphs differ, $(i_3, 3c)$ and $(i_3, 4c)$. See Fig. A.13.

Fig. A.14. Ordered Items in Conveyors

The relevant matrices in the previous calculations can be found in eqs. (A.9) and (A.10) for rules recycle and reject, and in Sec. A.1 for assemble, in particular equations (A.1) and (A.3). For identifications across productions see Figs. A.13 and A.14.

$$K_{recycle} = \begin{pmatrix} 0&0&1&1&0&0&0&0\\ 0&0&1&0&1&1&0&0\\ 0&0&1&1&1&0&1&1\\ 0&0&1&0&0&0&0&0\\ 0&0&1&0&0&0&0&0\\ 0&0&1&0&0&0&0&0\\ 0&0&1&0&0&0&0&0\\ 0&0&1&0&0&0&0&0 \end{pmatrix} \begin{matrix} 1{:}item1\\ 1{:}item2\\ 1{:}item3\\ 1{:}conv\\ 2{:}conv\\ 3{:}conv\\ 1{:}machD\\ 1{:}op \end{matrix} \qquad L_{recycle} = \begin{pmatrix} 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&1&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&1&0\\ 0&0&0&1&1&0&0&0\\ 0&0&0&0&0&0&1&0 \end{pmatrix} \qquad \text{(A.9)}$$

$$e_{reject} = \begin{pmatrix} 0&0&1&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0 \end{pmatrix} \begin{matrix} 1{:}item3\\ 3{:}conv\\ 4{:}conv\\ 1{:}machD\\ 1{:}machQ \end{matrix} \qquad r_{reject} = \begin{pmatrix} 0&1&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0 \end{pmatrix} \qquad \text{(A.10)}$$

A.4 Reachability

In this section reachability is addressed, together with some comments on other problems such as confluence, termination and complexity (to be addressed in a future contribution).
Throughout the book some techniques to deal with sequences have been developed. The sequences to be studied have to be supplied by the user. Reachability is a more indirect source of sequences, because initial and final states are specified and the system provides us with sets of candidate sequences.

Fig. A.15. Initial and Final Digraphs for Reachability Example

We shall use initial and final states similar to those in Fig. A.8 (see Fig. A.15). Our grammar as defined so far has a fixed behavior, i.e. it is a fixed graph grammar, whose state equation is given by (10.9) in Prop. 10.3.4. Let 0_S and d_S be the initial and final states, with ordering [1:item1 1:item2 1:item3 1:item4 1:conv 2:conv 3:conv 4:conv 5:conv 6:conv 1:machA 1:machQ 1:machD 1:machP 1:op]. Nodes appear in the last column:

M_ij = (d_S ⊖ 0_S)_ij = Σ_{k=1}^{n} A_ijk x_k.  (A.11)

(The matrix M itself, a large and very sparse Boolean matrix over the ordering above, is omitted here.)

For the tensor A_ijk only the basic productions assem, certify, reject, recycle and pack are considered, plus those for operator movement, mov2*. Following Sec. 10.2, grammar rules that add and delete elements of the same type are split into their addition (+) and deletion (−) parts. This affects only productions certify and reject.⁵ The set of rules is {assem, certify⁺, certify⁻, reject⁺, reject⁻, recycle, pack, mov2A, mov2Q, mov2D, mov2P}, so k ∈ {1, ..., 11}. This ordering is kept in the equations from now on.
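The difference d_S ⊖ 0_S in the state equation can be mimicked directly over Boolean adjacency matrices. A minimal Python sketch, using a hypothetical 3-node example rather than the 15-type ordering above:

```python
def xor_minus(a, b):
    """Boolean 'a and not b' on two adjacency matrices (a ⊖ b)."""
    return [[ai & (1 - bi) for ai, bi in zip(ra, rb)] for ra, rb in zip(a, b)]

def state_difference(initial, final):
    """Edges to add (in final, not initial) and to delete (in initial, not final)."""
    add = xor_minus(final, initial)
    delete = xor_minus(initial, final)
    return add, delete

# Toy 3-node example: one edge moves from (0,1) to (0,2).
S0 = [[0, 1, 0], [0, 0, 0], [0, 0, 0]]
Sd = [[0, 0, 1], [0, 0, 0], [0, 0, 0]]
add, delete = state_difference(S0, Sd)
```

The unknowns x_k of the state equation then say how many rule applications account for each entry of these two matrices.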
The following list summarizes all actions performed by the grammar rules under consideration on nodes and edges. A plus sign between brackets means that the element is added and a minus sign that it is deleted.

• (1:item1, 1:conv) ↦ assem(−), recycle(+)
• (1:item2, 2:conv) ↦ assem(−), recycle(+)
• (1:item3, 3:conv) ↦ assem(+), certify(−), reject(−)
• (1:item3, 4:conv) ↦ certify(+), pack(−)
• (1:item3, 6:conv) ↦ reject(+), recycle(−)
• (1:item4, 5:conv) ↦ pack(+)
• (1:op, 1:machA) ↦ mov2A(+), mov2Q(−)
• (1:op, 1:machQ) ↦ mov2Q(+), mov2P(−)
• (1:op, 1:machD) ↦ mov2D(+), mov2A(−)
• (1:op, 1:machP) ↦ mov2P(+), mov2D(−)
• (1:item1) ↦ assem(−), recycle(+)
• (1:item2) ↦ assem(−), recycle(+)
• (1:item3) ↦ assem(+), recycle(−), pack(−)
• (1:item4) ↦ pack(+)

⁵ Note that neither certify nor reject adds or deletes the item3 node; they only act on edges. These productions are split because the edge deleted and the edge added are of the same type, (item3, conv).

What is finally derived, according to the methods proposed in Chap. 10, is a system of linear equations. To those arising from the tensor equations another thirteen must be appended:

{x_k^p = x_k^q}, k ∈ {1, ..., 11},    x_2 = x_3,    x_4 = x_5.

The first set of equations guarantees that each rule is applied a concrete number of times, the same in every equation in which it appears. The second and third equations rule out inconsistencies for rules certify and reject, which have been split into their addition and deletion parts: both parts have to be applied the same number of times. Only those columns of M for which some “activity” is defined in the productions are of interest, i.e. all except the first four.
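The bookkeeping behind this system can be sketched in Python. The effect table below is hypothetical and encodes only two of the listed actions, with +1 for addition and −1 for deletion; it is an illustration of the idea, not the book's tensor formulation:

```python
EFFECTS = {  # element -> {rule: +1 (added) or -1 (deleted)}
    ("item3", "3:conv"): {"assem": +1, "certify-": -1, "reject-": -1},
    ("item3", "4:conv"): {"certify+": +1, "pack": -1},
}

def net_change(element, x):
    """Net number of times `element` is added minus deleted under counts x."""
    return sum(sign * x.get(rule, 0) for rule, sign in EFFECTS[element].items())

def consistent(x):
    """Split rules must fire their + and - parts equally often."""
    return x.get("certify+", 0) == x.get("certify-", 0) and \
           x.get("reject+", 0) == x.get("reject-", 0)

# One certification pass with no rejections leaves both edges unchanged overall.
x = {"assem": 1, "certify+": 1, "certify-": 1, "pack": 1}
```

Solving the full system means finding all count vectors x whose net changes match M while satisfying the consistency constraints.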
Zero elements are not written out but substituted by bold zeros. In compressed form, the column equations M_j = Σ_{k=1}^{11} A_jk x_k read as follows, where only the variables of the rules acting on each column appear: M_5 and M_6 involve x_1 and x_6; M_7 involves x_1, x_3 and x_5; M_8 involves x_2 and x_7; M_9 involves x_7; M_10 involves x_4 and x_6; M_11 involves x_8 and x_9; M_12 involves x_9 and x_11; M_13 involves x_10 and x_8; M_14 involves x_11 and x_10; and M_16 involves x_1, x_6 and x_7. M_16 corresponds to nodes.

Recall that x must satisfy the additional conditions x_k^p = x_k^q, k ∈ {1, ..., 11}. The system has the solution

(x, 1, 1, x−1, x−1, x−1, 1, y−1, y, y−1, y) ≥ 0,  (A.12)

s_0 – see equation (A.4) – being one of the sequences for x = 1, y = 1. Note that the solutions are uncoupled into two parts: the one that rules operator movement (y) and that of item processing (x).

This is a good example with which to study termination and confluence. Any evolution of the system having as initial state the one depicted on the left of Fig. A.15 will eventually reach the state on the right of the same figure (termination).⁶ The grammar is confluent (there is a single solution), although there is no upper bound on the number of steps it takes to reach its final state (complexity). Depending on the probability distribution there will be more chances of ending up sooner or later. Independently of the distribution, longer sequences have smaller probabilities, their probability being zero in the limit (if the probability assigned to rejecting item3 is different from 1).

⁶ In fact, it is not terminating, because the productions that move the operator can still be applied.
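The closing remark about probabilities can be made concrete: if each pass rejects the item with some probability p < 1, a run needing n rejections before a successful certification has probability p^n (1 − p), so longer runs are strictly less likely and vanish in the limit. A small sketch (the value p = 0.5 is an arbitrary illustration):

```python
def run_probability(p_reject, n_rejections):
    """Probability of exactly n rejections followed by a certification."""
    return (p_reject ** n_rejections) * (1 - p_reject)

# Probabilities for runs with 0, 1, 2, 3 rejections at p = 0.5.
probs = [run_probability(0.5, n) for n in range(4)]
```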
What we would need is another production that drives the system to a halting state.

A.5 Graph Constraints and Application Conditions

Application conditions and graph constraints will make our case study much more realistic. We will see two examples of how application conditions can be used to limit the applicability of rules or to avoid undesired behaviors.

Fig. A.16. Graph Constraint on Conveyor Load

The first is based on the remark that conveyors as presented so far have an infinite capacity to load items. Probably due either to a limit of space or of load, conveyors cannot transport more than, say, two items. This is a constraint on the whole system, which can be modelled as a graph constraint as introduced in Chap. 8. Figure A.16 shows a diagram d_0 that sets this limit, with associated formula:

f_0 = ∄A_1 ... A_6 = ∄ ⋁_{i=1}^{6} A_i = ⋀_{i=1}^{6} ∄A_i.  (A.13)

Recall that if the quantifier is not repeated it means that it applies to every term, e.g. ∄A_1 A_2 ≡ ∄A_1 ∧ ∄A_2.

Graphs A_5 and A_6 are necessary because rule recycle may mix elements of type item1 and item2 in the same conveyor. This graph constraint will be named GC_0 = (f_0, d_0). By using variable nodes – see Sec. 9.3 – the diagram and the formula would be simpler, similar to the example on p. 176, in particular the right side of Fig. 8.5. In the end, the diagram and the formula would be instantiated to a graph constraint similar to what appears in Fig. A.16 and equation (A.13).

Fig. A.17. Graph Constraint as Precondition and Postcondition

The same graph constraint is depicted as precondition and postcondition in Fig. A.17. The equations are those adapted from (A.13):

f_2 = ∄A_20 A_21 = ∄(A_20 ∨ A_21)  (A.14)
f⃗_2 = ∄A⃗_20 A⃗_21 = ∄(A⃗_20 ∨ A⃗_21).
(A.15)

Only the diagram in which elements of type item3 appear has been kept, because we know that in the conveyor labelled 1 there should not be items of any other type (they would never be processed). Actually, with the definitions of rules given up to now, conveyors connecting different machines are of the same kind. Hence, all six diagrams should appear on reject’s left-hand side and their transformation, according to Theorem 9.2.6, on its right-hand side.

The precondition and the postcondition can be transformed into equivalent sequences according to Theorems 8.3.5 and 9.2.2. This is a negative application condition, see Theorem 8.2.3 and Lemma 8.3.4. Hence, they are split into two subconditions, each one demanding the nonexistence of one element. A¹_20 will ask for the nonexistence of edge (2:item3, 1:conv) and A²_20 for (3:item3, 1:conv). Similarly, we have A¹_21 for (2:item3, 2:conv) and A²_21 for (3:item3, 2:conv).⁷ At least one element in each case must not be present, so there are four combinations:

reject ↦ { reject; id_{A¹_21}; id_{A¹_20},  reject; id_{A¹_21}; id_{A²_20},  reject; id_{A²_21}; id_{A¹_20},  reject; id_{A²_21}; id_{A²_20} }  (A.16)

The corresponding formula – the left arrow on top is omitted – can be written:

∃ A¹_20 A²_20 A¹_21 A²_21 = ∃ (A¹_20 ∨ A²_20) ∧ (A¹_21 ∨ A²_21).  (A.17)

Here postconditions and preconditions turn out to be the same because reject ⊥ id_{A¹_2x} and reject ⊥ id_{A²_2x}, x ∈ {0, 1}. For each sequence it is possible to compose all productions and derive a unique rule. If so, as there are just elements that have to be found in the complement of the host graph, they are appended to the nihilation matrix of the composition. For graph constraints, if something is to be forbidden, it is more common to think in terms of “what should not be”, i.e.
to think of it as a postcondition (graph constraint GC_0 is of this type). On the contrary, if something is to be demanded, then it is normally easier to describe it as a precondition.

⁷ To be precise, there would be two other conditions asking for the nonexistence of (1:item3, 1:conv); however, this part of the application condition is inconsistent for the first conveyor (this edge is demanded because it has to be erased) and redundant for the second conveyor (it would always be fulfilled, because this edge is going to be added, so it cannot exist in the left-hand side). This stems from the theory developed in Chap. 8.

Let’s continue with another property of our system not addressed up to now. Note that conveyors clearly have a direction: each one is the output of one or more machines and the input of one or more machines. In our example this is simplified, so conveyors just join two different machines. What might be of interest is that items in conveyors are naturally ordered. Machines should pick the first ordered element. To make our assembly line realize this feature, when the machine processes a new item – 2:item3 in Fig. A.18 – and there is already an item in the output conveyor – 1:item3 in Fig. A.18 – an edge from 2:item3 to 1:item3 will be added. A chain is thus defined: the first element will have an incoming edge from another item, but it will not be the source of any edge that ends in another item. The last item will not have any incoming edge, but one outgoing edge to another item. This is exemplified for rule reject in Fig. A.18.⁸

Fig. A.18. Ordered Items in Conveyors

Again we have to change the allowable connections among types. The diagram in Fig. A.10 needs to be further extended with a self-loop for items (there can now be edges joining two of them).
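The chain convention just described – edges run from newer items to older ones – can be sketched in Python. The encoding below (a list of items plus a list of (newer, older) edges on one conveyor) is hypothetical, chosen only to make the "first" and "last" definitions executable:

```python
def first_item(items, chain):
    """Oldest item of the chain: it is the source of no edge to another item."""
    sources = {a for a, _b in chain}
    return next(i for i in items if i not in sources)

def last_item(items, chain):
    """Newest item of the chain: it is the target of no edge from another item."""
    targets = {b for _a, b in chain}
    return next(i for i in items if i not in targets)

items = ["i1", "i2", "i3"]            # i1 arrived first, i3 last
chain = [("i2", "i1"), ("i3", "i2")]  # edges point from newer to older
```

Machines would consume `first_item` and link freshly produced items to `last_item`.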
However, concrete items cannot have self-loops, so a new graph constraint should take care of this.

This ordering convention poses two problems when the rule is applied:

1. If the input conveyor has two or more items, the first – the one with incoming edges – should be used.
2. If the output conveyor has one or more items, the new item must be linked to the last one.

⁸ We are not going to propose the modification of every single rule to handle ordering in conveyors. On the contrary, we propose a method based on graph constraints and application conditions that automatically takes care of ordering.

The first if statement (pick the elder item) can be modelled by an application condition. We have a precondition A = (f_1, d_1) with:

f_1 = ∄A_1 ∃A_2 = ∄A_1 ∧ ∃A_2.  (A.18)

Fig. A.19. Expanded Rule reject

The diagram is represented in Fig. A.19. Numbered elements are related by the corresponding morphisms. In formula f_1 the term ∄A_1 prevents the application of the rule if there is some marked item in the output conveyor (the blue square; read below). If the rule were applied, there would be two “last” items and it would become impossible to distinguish which one was added first. The term ∃A_2 forces the rule to pick the first item in the chain, in case there is a chain. Item 1:item3 will be chosen either if it is the first in the chain or if it is alone. This is equivalent to demanding one item that has no outgoing edges to any other item.

The second if statement cannot be modelled with an application condition. The reason is that we need to add one edge in case a “last” item exists in the output conveyor (if the output conveyor is empty, then the rule should simply add the item).
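One boolean reading of the precondition in (A.18) can be written down directly. The encoding is hypothetical (a flag for marked items in the output conveyor and a (newer, older) edge list for the input chain), matching the earlier sketches rather than the book's diagram-plus-formula machinery:

```python
def applicable(marked_in_output, chosen, chain):
    """Rule may fire iff no marked item exists in the output conveyor (∄A_1)
    and the chosen input item is first in its chain (the ∃A_2 part)."""
    no_marked = not marked_in_output
    is_first = chosen not in {a for a, _b in chain}  # no outgoing edge to an item
    return no_marked and is_first
```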
Application conditions are limited to checking whether (almost any arbitrary combination of) elements are present or not, but they cannot directly modify the actions of the rules. Anyway, the solution is not difficult:

1. The newly added element needs to be marked so that the last item in the conveyor can be identified: the blue square of A_1 in Fig. A.19 marks the last item added.
2. A precondition has to be imposed such that if there are marked items in the output conveyor, the rule cannot be applied (this way, at most one unlinked item will exist in each output conveyor). Again, see A_1 in Fig. A.19 and the corresponding term in eq. (A.18).
3. The grammar is enlarged with a new rule that checks whether there are unlinked items (linking them, remMark2) and another that unmarks them if they are alone in the conveyor, remMark1. See Fig. A.20.

Fig. A.20. Rules to Remove Last Item Marks

Both productions remMark1 and remMark2 have application conditions, AC_1 = (f_1, d_1 = {B_1}) and AC_2 = (f_2, d_2 = {B_2}), respectively. The corresponding formulas are:

f_1 = ∄B_1 [B_1],    f_2 = ∃B_2 ∄B̄_2 [B_2].

Production remMark1 can be applied only if there is just a single item in the conveyor. remMark2 applies when there is more than one item. B_2 selects the last item: it is equivalent to “the item with no incoming edges”.

There is no problem in transforming both preconditions of Fig. A.19 into postconditions. Note that there are no dangling elements in A_2, because 1:item3 is not erased (which would mean removing and adding the same element, something forbidden in Matrix Graph Grammars; see the comments right after Prop. 4.1.4).
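The combined behavior of remMark1 and remMark2 can be sketched as one state transformation. The state encoding (items, (newer, older) edge list, set of marked items per conveyor) is hypothetical and reuses the chain convention of the earlier sketches:

```python
def rem_mark(items, chain, marked):
    """Apply remMark2 (link a marked item to the previous last item) or
    remMark1 (just unmark it, when it is alone). Returns the new state."""
    m = marked.pop()                      # the freshly marked item loses its mark
    others = [i for i in items if i != m]
    if others:  # remMark2: link the marked item to the current last item
        targets = {b for _a, b in chain}
        last = next(i for i in others if i not in targets)  # no incoming edge
        chain = chain + [(m, last)]
    # remMark1 case (alone in the conveyor): nothing to link
    return items, chain, marked
```

With two items the mark turns into a chain edge; with a single item the mark simply disappears.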
Notice that we have included ordering in conveyors with graph constraints and application conditions (there exists the possibility of transforming one into the other) without really modifying existing grammar rules. Ordering is a property of the system and not of the productions, which should just take care of the actions to be performed. We think that Matrix Graph Grammars clearly separate both topics: it is feasible to specify grammar rules first and properties of the system afterwards. With the theory developed in Chap. 8, a framework – such as AToM³ – can relate one to the other more or less automatically.

Other examples of restrictions and limitations that can be imposed on the case study are:

• Limitations on the number of operators, e.g. a maximum of four operators.
• An operator can be in charge of at most one machine.
• There should not be two operators working on the same machine, which is a restriction on rules of type mov2*.

More general constraints, such as the number of operators cannot exceed the number of machines, are also possible, although variable nodes would be needed in this case. The examples so far are simple and can be expressed with other approaches to the topic. For other natural application conditions that (to the best of our knowledge) can only be addressed with Matrix Graph Grammar approaches, please refer to the example on p. 192 or to [65]. The example studied in this appendix is an extended version of the one that appears there.

A.6 Derivations

In this section a slight modification of the initial state depicted in Fig. A.6, together with a permutation of sequence s_0, will be used again, but enlarged with the ordering of productions (sequences) and the restrictions of Sec. A.5. Internal and external ε-productions will be addressed in passing. Let’s consider as initial state the one depicted in Fig. A.21.
Due to the restrictions, sequence s_0 = pack; certify; assem is not applicable (three items would appear in the input conveyor of pack). However, the productions are all sequentially independent, because they are applied to different items (due to the amount of elements available in the initial state in Fig. A.21), so sequence s_5^1 = certify; pack; assem can be considered instead.

Fig. A.21. Grammar Initial State for s_5^1

Sequence s_5^1 cannot be applied, because the operator has to move to the appropriate machine and the ordering of items needs to be considered. Let’s suppose that the four basic rules have a higher probability – or that they are in a higher layer, as e.g. in AGG⁹ – so that as soon as one of them is applicable it is in fact applied. According to the way an operator may move in our assembly line, applying s_5^1 would need at least the following rules:

s_5^2 = certify; mov2Q; mov2A; recycle; mov2D; pack; mov2P; mov2Q; assem.  (A.19)

Production reject could have been applied somewhere in the sequence. Again, as items are ordered and some dangling edges appear during the process, this is not enough and some other productions need to be appended:

s_5 = (remMark2; certify; certify_ε); mov2Q; mov2A; recycle; mov2D; (remMark2; pack; pack_ε); mov2P; mov2Q; (remMark2; assem; assem_ε)

Fig. A.22. Production to Remove Dangling Edges (Ordering of Items in Conveyors)

Parentheses are used to isolate subsequences that could probably be composed to obtain more “natural” atomic actions. See Fig. A.20 for the definition of remMark2 and Fig. A.22 for assem_ε, pack_ε and certify_ε. In this case, both assem_ε and pack_ε are external while certify_ε is internal. Productions between parentheses are related through a marking operator. It is mandatory that they act on the same nodes and edges.

⁹ AToM³ has priorities.
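What an ε-production contributes can be pictured as a plain set operation: before a node is deleted, it erases the dangling edges that the main production does not mention, so the deletion itself leaves no side effects. A hypothetical edge-set sketch (graph elements and the kept edge are illustrative names, not taken from Fig. A.22):

```python
def epsilon_production(edges, node, kept):
    """Remove every edge incident to `node` except those in `kept`
    (the edges the main production handles itself)."""
    return {e for e in edges if node not in e or e in kept}

# i3 is about to be deleted; the main rule only accounts for (i3, 3:conv),
# so the ordering edges (i2, i3) and (i3, i1) would dangle.
G = {("i3", "3:conv"), ("i2", "i3"), ("i3", "i1")}
G2 = epsilon_production(G, "i3", kept={("i3", "3:conv")})
```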
A user of a tool such as AToM³ or AGG does not necessarily need to know about ε-productions, even less about marking. Probably in this case it would be better to compose the productions that include remMark1 or remMark2 and call them as the original rule, e.g. remMark2; assem ↦ assem. The final state for s_5 can be found in Fig. A.23.

Fig. A.23. Grammar Final State for s_5

A development framework should have facilities to ease the visualization of grammar rules, as diagrams can become quite cumbersome with only a few constraints. For example, it should be possible to keep graph constraints apart from productions, calculating on demand how a concrete constraint modifies a selected production, its left- and right-hand sides and nihilation matrices.

References

[1] Agrawal, A. 2004. A Formal Graph Transformation Based Language for Model-to-Model Transformations. Ph.D. Dissertation. Nashville, Tennessee.
[2] Baldan, P., Corradini, A., Ehrig, H., Löwe, M., Montanari, U. and Rossi, F. 1999. Concurrent Semantics of Algebraic Graph Transformations. In [24], pp. 107-187.
[3] Bauderon, M., Hélène, J. 2001. Pullback as a Generic Graph Rewriting Mechanism. Applied Categorical Structures, 9(1):65-82.
[4] Bauderon, M. 1995. Parallel Rewriting Through the Pullback Approach. Electronic Notes, 2. SEGRAGRA’95.
[5] Bauderon, M. 1997. A Uniform Approach to Graph Rewriting: the Pullback Approach. In Manfred Nagl, editor, Graph Theoretic Concepts in Computer Science, WG ’96, Vol. 1017 of LNCS, pp. 101-115. Springer.
[6] Brown, R., Morris, I., Shrimpton, J., Wensley, C. D. 2006. Graphs of Graphs and Morphisms. Preprint available at: http://www.informatics.bangor.ac.uk/public/math/research/ftp/cathom/0604.pdf
[7] Büchi, J. 1960. Weak Second-Order Logic and Finite Automata. In Z. Math. Logik Grundlagen Math. 5, 62-92.
[8] Cormen, T., Leiserson, C., Rivest, R. 1990. Introduction to Algorithms. McGraw-Hill.
[9] Corradini, A., Heindel, T., Hermann, F., König, B. 2006. Sesqui-pushout Rewriting. In Proc. of ICGT ’06 (International Conference on Graph Transformation), pp. 30-45. Springer. LNCS 4178.
[10] Corradini, A., Montanari, U., Rossi, F. 1996. Graph Processes. Fundamenta Informaticae, Vol. 26, pp. 241-265.
[11] Corradini, A., Montanari, U., Rossi, F., Ehrig, H., Heckel, R., Löwe, M. 1999. Algebraic Approaches to Graph Transformation – Part I: Basic Concepts and Double Pushout Approach. In [23], pp. 163-246.
[12] Courcelle, B. 1997. The expression of graph properties and graph transformations in monadic second-order logic. In [23], pp. 313-400.
[13] Drewes, F., Habel, A., Kreowski, H.-J., Taubenberger, S. 1995. Generating self-affine fractals by collage grammars. Theoretical Computer Science 145:159-187.
[14] Ehrig, H., Ehrig, K., Habel, A., Pennemann, K.-H. 2006. Theory of Constraints and Application Conditions: From Graphs to High-Level Structures. Fundamenta Informaticae (74), pp. 135-166.
[15] Ehrig, H., Ehrig, K., de Lara, J., Taentzer, G., Varró, D., Varró-Gyapay, S. 2005. Termination Criteria for Model Transformation. Proceedings of Fundamental Approaches to Software Engineering FASE05 (ETAPS’05). Lecture Notes in Computer Science 3442, pp. 49-63. Springer.
[16] Ehrig, H., Habel, A., Kreowski, H.-J., Parisi-Presicce, F. 1991. From Graph Grammars to High Level Replacement Systems. In H. Ehrig, H.-J. Kreowski and G. Rozenberg, editors, Graph Grammars and Their Application to Computer Science, Vol. 532 of LNCS, pp. 269-291. Springer.
[17] Ehrig, H., Habel, A., Kreowski, H.-J., Parisi-Presicce, F. 1991. Parallelism and Concurrency in High-Level Replacement Systems.
Mathematical Structures in Computer Science, 1(3):361-404.
[18] Ehrig, H., Habel, A., Padberg, J., Prange, U. 2004. Adhesive High-Level Replacement Categories and Systems. In H. Ehrig, G. Engels, F. Parisi-Presicce and G. Rozenberg, editors, Proceedings of ICGT 2004, Vol. 3256 of LNCS, pp. 144-160. Springer.
[19] Ehrig, H. 1979. Introduction to the Algebraic Theory of Graph Grammars. In V. Claus, H. Ehrig, and G. Rozenberg (eds.), 1st Graph Grammar Workshop, pp. 1-69. Springer LNCS 73.
[20] Ehrig, H., Nagl, M., Rozenberg, G., Rosenfeld, A., editors. 1987. Graph-Grammars and Their Application to Computer Science, 3rd International Workshop, Vol. 291 of LNCS. Springer.
[21] Ehrig, H., Pfender, M., and Schneider, H. J. 1973. Graph grammars: An Algebraic Approach. In Proc. IEEE Conf. on Automata and Switching Theory, SWAT ’73, pp. 167-180.
[22] Ehrig, H., Ehrig, K., Prange, U., Taentzer, G. 2006. Fundamentals of Algebraic Graph Transformation. Springer.
[23] Ehrig, H., Engels, G., Kreowski, H.-J., Rozenberg, G. 1999. Handbook of Graph Grammars and Computing by Graph Transformation. Vol. 1, Foundations. World Scientific.
[24] Ehrig, H., Kreowski, H.-J., Montanari, U., Rozenberg, G. 1999. Handbook of Graph Grammars and Computing by Graph Transformation. Vol. 3, Concurrency, Parallelism and Distribution. World Scientific.
[25] Eilenberg, S., MacLane, S. 1945. General Theory of Natural Equivalence. Trans. Amer. Soc. 231.
[26] Elgot, C. 1961. Decision Problems of Finite Automata Design and Related Arithmetics. Trans. A.M.S. 98, 21-52.
[27] Feder, J. 1971. Plex Languages. Information Sciences, 3:225-241.
[28] Fokkinga, M. M. 1992. A Gentle Introduction to Category Theory — the Calculational Approach. University of Utrecht.
In Lecture Notes of the 1992 Summerschool on Constructive Algorithmics, pp. 1-72.
[29] Gulmann, J., Jensen, J., Jørgensen, M., Klarlund, N., Rauhe, T., and Sandholm, A. 1995. Mona: Monadic second-order logic in practice. In U. H. Engberg, K. G. Larsen, and A. Skou, editors, TACAS, pp. 58-73. Springer Verlag, LNCS.
[30] Kreuzer, T. L. 2003. Term Rewriting Systems. Cambridge University Press.
[31] Heckel, R., Küster, J. M., Taentzer, G. 2002. Confluence of Typed Attributed Graph Transformation Systems. In ICGT’2002. LNCS 2505, pp. 161-176. Springer.
[32] Heckel, R., Wagner, A. 1995. Ensuring Consistency of Conditional Graph Grammars – A Constructive Approach. Electronic Notes in Theoretical Computer Science 2.
[33] Heinbockel, J. H. 1996. Introduction to Tensor Calculus and Continuum Mechanics. Old Dominion University. Free version (80% of material) available at http://www.math.odu.edu/~jhh/counter2.html
[34] Hoffman, B. 2005. Graph Transformation with Variables. In Graph Transformation, Vol. 3393/2005 of LNCS, pp. 101-115. Springer.
[35] Lämmel, R., Mernik, M., eds. 2001. Domain-Specific Languages. Special Issue of the Journal of Computing and Information Technology (CIT).
[36] Kahl, W. 2002. A Relation-Algebraic Approach to Graph Structure Transformation. PhD Thesis.
[37] Kauffman, L. H. Knots. Available at http://www.math.uic.edu/kauffman/Tots/Knots.htm
[38] Kawahara, Y. 1973. Relations in Categories with Pullbacks. Mem. Fac. Sci. Kyushu Univ. Ser. A, 27(1):149-173.
[39] Kawahara, Y. 1973. Matrix Calculus in I-categories and an Axiomatic Characterization of Relations in a Regular Category. Mem. Fac. Sci. Kyushu Univ. Ser. A, 27(2):249-273.
[40] Kawahara, Y. 1973. Notes on the Universality of Relational Functors. Mem. Fac. Sci. Kyushu Univ. Ser. A, 27(2):275-289.
[41] Kennaway, R. 1987. On Graph Rewritings. Theoretical Computer Science, 52:37-58.
[42] Kennaway, R. 1991. Graph Rewriting in Some Categories of Partial Morphisms. In Ehrig et al. [20], pp. 490-504.
[43] Lack, S., Sobociński, P. 2004. Adhesive Categories. In I. Walukiewicz, editor, Proceedings of FOSSACS 2004, Vol. 2987 of LNCS, pp. 273-288. Springer.
[44] Lambers, L., Ehrig, H., Orejas, F. 2006. Conflict Detection for Graph Transformation with Negative Application Conditions. Proc. ICGT’06, LNCS 4178, pp. 61-76. Springer.
[45] de Lara, J., Vangheluwe, H. 2002. AToM³: A Tool for Multi-Formalism Modelling and Meta-Modelling. LNCS 2306, pp. 174-188. Fundamental Approaches to Software Engineering – FASE’02, in European Joint Conferences on Theory And Practice of Software – ETAPS’02. Grenoble, France.
[46] de Lara, J., Vangheluwe, H. 2004. Defining Visual Notations and Their Manipulation Through Meta-Modelling and Graph Transformation. Journal of Visual Languages and Computing. Special Issue on “Domain-Specific Modeling with Visual Languages”, Vol. 15(3-4), pp. 309-330. Elsevier Science.
[47] de Lara, J., Bardohl, R., Ehrig, H., Ehrig, K., Prange, U., Taentzer, G. 2007. Attributed Graph Transformation with Node Type Inheritance. Theoretical Computer Science (Elsevier), 376(3):139-163.
[48] Mendelson, E. 1997. Introduction to Mathematical Logic, Fourth Edition. Chapman & Hall.
[49] Löwe, M. 1990. Algebraic Approach to Graph Transformation Based on Single Pushout Derivations. Technical Report 90/05, TU Berlin.
[50] Mac Lane, S. 1998. Categories for the Working Mathematician. Springer. ISBN 0-387-98403-8.
[51] Minas, M. 2002. Concepts and Realization of a Diagram Editor Generator Based on Hypergraph Transformation. Science of Computer Programming, Vol.
44(2), pp. 157-180.
[52] Mizoguchi, Y., Kawahara, Y. 1995. Relational Graph Rewritings. Theoretical Computer Science, Vol. 141, pp. 311-328.
[53] Manzano, M. 1996. Extensions of First-Order Logics (Cambridge Tracts in Theoretical Computer Science). Cambridge University Press.
[54] Murata, T. 1989. Petri nets: Properties, Analysis and Applications. Proceedings of the IEEE, Vol. 77(4), pp. 541-580.
[55] Nagl, M. 1976. Formal Languages of Labelled Graphs. Computing 16, 113-137.
[56] Nagl, M. 1979. Graph-Grammatiken. Vieweg, Braunschweig.
[57] Newman, J. 1956. The World of Mathematics. Simon & Schuster, New York.
[58] Papadimitriou, C. 1993. Computational Complexity. Addison Wesley.
[59] Pavlidis, T. 1972. Linear and Context-Free Graph Grammars. Journal of the ACM, 19(1):11-23.
[60] Pérez Velasco, P. P., de Lara, J. 2006. Towards a New Algebraic Approach to Graph Transformation: Long Version. Technical Report of the School of Computer Science, Universidad Autónoma de Madrid. Available at http://www.ii.uam.es/jlara/investigacion/techrep_03_06.pdf
[61] Pérez Velasco, P. P., de Lara, J. 2006. Matrix Approach to Graph Transformation. Mathematical Aspects of Computer Science. Proc. ICM’06, Vol. Abstracts, p. 128. European Mathematical Society.
[62] Pérez Velasco, P. P., de Lara, J. 2006. Matrix Approach to Graph Transformation: Matching and Sequences. Proc. ICGT’06, LNCS 4218, pp. 122-137. Springer.
[63] Pérez Velasco, P. P., de Lara, J. 2006. Petri Nets and Matrix Graph Grammars: Reachability. Proc. PN-GT’06, Electronic Communications of EASST(2).
[64] Pérez Velasco, P. P., de Lara, J. 2007. Using Graph Grammars for the Analysis of Behavioural Specifications: Sequential and Parallel Independence. Proc. PROLE’2007. Also as ENTCS (Elsevier).
[65] Pérez Velasco, P. P., de Lara, J. 2007. Analysing Rules with Application Conditions Using Matrix Graph Grammars. Proc. GT-VC’2007.
[66] Pérez Velasco, P. P., de Lara, J. 2009. A Reformulation of Matrix Graph Grammars with Boolean Complexes. The Electronic Journal of Combinatorics, Vol. 16(1), R73. Available at: http://www.combinatorics.org/
[67] Pérez Velasco, P. P. 2009. Matrix Graph Grammars as a Model of Computation. Available at http://www.mat2gra.info and http://arxiv.org/abs/0905.1202, arXiv:0905.1202.
[68] Penrose, R. 2006. The Road to Reality: a Complete Guide to the Laws of the Universe. Knopf, 0679454438.
[69] Pfaltz, J. L., Rosenfeld, A. 1969. Web Grammars. Proc. Int. Joint Conf. Art. Intelligence, Washington, 1969, pp. 609-619.
[70] Raoult, J. C. 1984. On Graph Rewritings. Theoretical Computer Science, 32:1-24.
[71] Reisig, W. 1985. Petri Nets, an Introduction. Springer-Verlag, Berlin.
[72] Schneider, H. J. 1970. Chomsky-Systeme für Partielle Ordnungen. Arbeitsber. d. Inst. f. Math. Masch. u. Datenver. 3, Erlangen.
[73] Schürr, A. 1994. Specification of Graph Translators with Triple Graph Grammars. Proc. 20th International Workshop on Graph-Theoretic Concepts in Computer Science. LNCS 903, pp. 151-163. Springer.
[74] Smullyan, R. 1995. First-Order Logic. Dover Publications.
[75] Sokolnikoff, I. S. 1951. Tensor Analysis, Theory and Applications. John Wiley and Sons.
[76] Taentzer, G. 2004. AGG: A Graph Transformation Environment for Modeling and Validation of Software. AGTIVE 2003, LNCS 3062, pp. 446-453. Springer.
[77] Terese. 2003. Term Rewriting Systems. Cambridge University Press.
[78] Thomas, W. 1990. Automata on Infinite Objects. In J. van Leeuwen, editor, Handbook of Theoretical Computer Science, Vol. B, pp. 133-198. MIT Press/Elsevier.
[79] Vollmer, H. 1999. Introduction to Circuit Complexity: A Uniform Approach. Texts in Theoretical Computer Science, EATCS Series.

Index

abelian group 37
adjacency matrix 27
adjoint operator 36
allegory 64
  distributive 65
amalgamation 46
analysis of a derivation 46
applicability 7
application condition 47
  coherent 208
  compatible 208
  consistent 208
  in MGG 178
  weak 178
arity 16
Banach space 35
binary relation 60
Boolean matrix product 29
boundedness 251
categorical product 21
category 19
  Graph 20
  GraphP 20
  PTNets 25
  Poset 24
  Rel 62
  Set 19
  SetP 63
  Top 24
  adhesive HLR 23
  Dedekind 65
  weak adhesive HLR 25
class 19
closed formula 16
closure 186
cocone 22
coherence 80, 239
colimit 22
compatibility 239
  graph 30
  production 72
  sequence 112
completion 76
complexity 281
composition 115
concatenation 79
cone 22
conflict-free condition 49
confluence 9
congruence condition 147
  negative 147
  positive 147
context graph 43
contraction 32
contravariance 33
coproduct 22
covariance 33
cycle 38
dangling
  condition 30, 43
  edge 3, 30
daughter graph 52
decomposition 187
definition scheme 61
derivation 8
  exact 137
diagram 170, 175
direct derivation 8
  DPO 43
  MGG 121
  SPO 49
direct transformation 48
distance 35
domain 63
domain of discourse 17
double pullback (DPB) 51
double pushout (DPO) 42
DSL, Domain-Specific Languages 259
dual space 35
ε-production adjoint operator 127
edge
  addition 68
  deletion 68
  external 136
  internal 136
  type 75
fixed grammar 128
floating grammar 128
FOL
  connective 16
  constant 16
  first order logic 15
  function 16
  quantifier 16
  symbol 16
  variable 16
function
  partial 63
  total 63
functional representation
  closure 200, 216
  decomposition 198, 216
  match 195, 216
  negative application condition 201, 216
  production 125
functor 20
G-congruence 142
gluing condition 44
graph constraint 175
  fulfillment 181
graph pattern 224
ground formula 16, 175
group 37
Hilbert space 34
hyperedge 57
hypergraph 57
  isomorphism 57
identification condition 43
identity conjugate 196
incidence matrix 27, 240
incidence tensor 245
  matrices 240
independence 8
initial digraph
  actual 136
  set 131
initial object 19
inner product 33, 34
interface 42
interpretation function 17
invariants
  place 251
  transition 251
kernel (graph) 224
Kronecker delta 33
Kronecker product 32
Levi-Civita symbol 33
LHS, Left Hand Side 69
limit 22
line graph 27
liveness 251
marking 234
  minimal 238
  operator 129
match
  DPO 43
  extended 124
  MGG 120
  SPO 49
metric 35
metric tensor 33
MGG, Matrix Graph Grammar 6
minimal initial digraph 100
monadic second order logic, MSOL 18
morphism
  partial 63
mother graph 52
multidigraph
  constraints 227
multigraph 20
multinode 224
NCE 54
negative
  application condition 47
  graph constraint 47
  initial digraph 107
  initial set 133
nihilation matrix 89
NLC 52
node
  addition 69
  deletion 69
  type 74
  vector 28
norm 34
  of Boolean vector 30
operator 34
  delta 85
  nabla 85
order 31
outer product 32
ε-production 126
  external 136
  internal 136
parallel
  independence 44
  production 46
Parikh vector 235
parity 38
permutation 38
Petri net 234
  conservative 251
  definition 234
  pure 238
place 234
positive
  application condition 47
  application condition, atomic 47
  graph constraint 47
  graph constraint, atomic 47
postcondition 47
  MGG 178
  weak 178
precondition 47
  MGG 178
  weak 177
production
  ε 126
  DPO 42
  dynamic formulation 90
  SPO 49
  static formulation 68
propositional logic 16
pullback 22
pullout 65
pushout 22
  complement 23
  initial 23
R-structure 60
rank 31
reachability 8, 234, 238
relation 62
  equivalence 76
  universal 65
  zero 65
RHS, Right Hand Side 71
Riesz representation theorem 35
rule scheme 224
scalar product 34
second order logic, SOL 17
sequence 79
sequential confluence 10
sequential independence 8, 45
  generalization 156, 161
  weak 50
signature 38
simple
  digraph 27
  node 224
single
  pullback (SPB) 51
  pushout (SPO) 48
source 20
state equation 235, 250
string 57
  length 57
subgroup 37
substitution function 224
synthesis of a derivation 46
target 20
tensor 31
  product 32
  product for graphs 29
terminal object 19
termination 281
token 234
transduction 60
transformation (HLR systems) 48
transition 234
  enabled 234
  firing 234
transposition 38
  even 38
  odd 38
true concurrency 164
type 75
universal property 20
valence 31
Van Kampen square 24
weak parallel independence 45
well-definedness 175
Ξ-production 229