Matrix Graph Grammars: Transformation of Restrictions
Pedro Pablo P´erez Velasco
School of Computer Science, Universidad Autónoma de Madrid, Ciudad Universitaria de Cantoblanco, 28049 - Madrid. [email protected]
Abstract.
In the Matrix approach to graph transformation we represent simple digraphs and rules with Boolean matrices and vectors, and the rewriting is expressed using Boolean operations only. In previous works, we developed analysis techniques enabling the study of the applicability of rule sequences, their independence, stated reachability and the minimal digraph able to fire a sequence. See [19] for a comprehensive introduction. In [22], graph constraints and application conditions (so-called restrictions) have been studied in detail. In the present contribution we tackle the problem of translating postconditions into preconditions and vice versa. Moreover, we shall see that application conditions can be moved along productions inside a sequence (restriction delocalization). As a practical-theoretical application we show how application conditions allow us to perform multidigraph rewriting (as opposed to simple digraph rewriting) using Matrix Graph Grammars.
Keywords:
Matrix Graph Grammars, Graph Dynamics, Graph Transformation, Restrictions, Application Conditions, Preconditions, Postconditions, Graph Constraints.
1. Introduction
Graph transformation [7, 24] is becoming increasingly popular as a means to describe system behavior due to its graphical, declarative and formal nature. For example, it has been used to describe the operational semantics of Domain Specific Visual Languages (DSVLs, [14]), taking advantage of the fact that it is possible to use the concrete syntax of the DSVL in the rules, which then become more intuitive to the designer.
The main formalization of graph transformation is the so-called algebraic approach [7], which uses category theory in order to express the rewriting step. Prominent examples of this approach are the double [3, 7] and single [5] pushout (DPO and SPO), which have developed interesting analysis techniques, for example to check sequential and parallel independence between pairs of rules [7, 24] or the calculation of critical pairs [10, 13].

Frequently, graph transformation rules are equipped with application conditions (ACs) [6, 7, 11], stating extra (in addition to the left hand side) positive and negative conditions that the host graph should satisfy for the rule to be applicable. The algebraic approach has proposed a kind of ACs with predefined diagrams (i.e. graphs and morphisms making the condition) and quantifiers regarding the existence or not of matchings of the different graphs of the constraint in the host graph [6, 7]. Most analysis techniques for plain rules (without ACs) have to be adapted then for rules with ACs (see e.g. [13] for critical pairs with negative ACs). Moreover, different adaptations may be needed for different kinds of ACs. Thus, a uniform approach to analyze rules with arbitrary ACs would be very useful.

In previous works [15, 16, 17, 19] we developed a framework (Matrix Graph Grammars, MGGs) for the transformation of simple digraphs. Simple digraphs and their transformation rules can be represented using Boolean matrices and vectors. Thus, the rewriting can be expressed using Boolean operators only. One important point is that, as a difference from other approaches, we explicitly represent the rule dynamics (addition and deletion of elements) instead of only the static parts (rule pre and postconditions). This point of view enables new analysis techniques, such as for example checking independence of a sequence of arbitrary length and a permutation of it, or obtaining the smallest graph able to fire a sequence.
On the theoretical side, our formalization of graph transformation introduces concepts from many branches of mathematics like Boolean algebra, group theory, functional analysis, tensor algebra and logics [19, 20, 21]. This wealth of available mathematical results opens the door to new analysis methods not developed so far, like sequential independence and explicit parallelism not limited to pairs of sequences, applicability, graph congruence and reachability. On the practical side, the implementations of our analysis techniques, being based on Boolean algebra manipulations, are expected to have a good performance.

In MGGs we do not only consider the elements that must be present in order to apply a production (left hand side, LHS, also known as certainty part) but also those elements that potentially prevent its application (also known as nihil or nihilation part). Refer to [22] in which, besides this, application conditions and graph constraints are studied for the MGG approach. The present contribution is a continuation of [22], where a comparison with related work can also be found. We shall tackle pre and postconditions, their transformation, the sequential version of these results and multidigraph rewriting.

Paper organization. Section 2 gives an overview of Matrix Graph Grammars. Section 3 revises application conditions as studied in [22]. Postconditions and their equivalence to certain sequences are addressed in Sec. 4. Section 5 tackles the transformation of preconditions into postconditions. The converse, more natural from a practical point of view, is also addressed. The transformation of restrictions is generalized in Sec. 6, in which delocalization – how to move application conditions from one production to another inside the same sequence – is also studied together with variable nodes. As an application of restrictions to MGGs, Sec. 7 shows how to make MGG deal with multidigraphs instead of just simple digraphs without major modifications to the theory. The paper ends in Sec. 8 with some conclusions, further research remarks and acknowledgements.
2. Matrix Graph Grammars Overview
We work with simple digraphs, which we represent as $G = (M, V)$, where $M$ is a Boolean matrix for edges (the graph adjacency matrix) and $V$ a Boolean vector for vertices or nodes. The left of Fig. 1 shows a graph representing a production system made up of a machine (controlled by an operator) which consumes and produces pieces through conveyors. Self loops in operators and machines indicate that they are busy.
Figure 1. Simple Digraph Example (left). Matrix Representation (right)
Well-formedness of graphs (i.e. absence of dangling edges) can be checked by verifying the identity $\left\| \left( M \vee M^t \right) \odot \overline{V} \right\| = 0$, where $\odot$ is the Boolean matrix product, $M^t$ is the transpose of the matrix $M$, $\overline{V}$ is the negation of the nodes vector $V$, and $\| \cdot \|$ is an operation (a norm, actually) that results in the or of all the components of the vector. We call this property compatibility (refer to [15]). Note that $M \odot \overline{V}$ results in a vector that contains a 1 in position $i$ when there is an outgoing edge from node $i$ to a non-existing node. A similar expression with the transpose of $M$ is used to check for incoming edges.

A type is assigned to each node in $G = (M, V)$ by a function from the set of nodes $|V|$ to a set of types $T$, $\lambda : |V| \to T$. Sets will be represented by $|\cdot|$. In Fig. 1 types are represented as an extra column in the matrices, where the numbers before the colon distinguish elements of the same type. It is just a visual aid. For edges we use the types of their source and target nodes. A typed simple digraph is $G_T = (G, \lambda)$. From now on we shall assume typed graphs and shall drop the $T$ subindex. The vector for nodes is necessary because in MGG nodes can be added and deleted; thus we mark the existing nodes with a 1 in the corresponding position of the vector. The Boolean matrix product is like the regular matrix product, but with and and or instead of multiplication and addition.

A production or grammar rule $p : L \to R$ is a morphism of typed simple digraphs, which is defined as a mapping that transforms $L$ in $R$ with the restriction that the type of the image must be equal to the type of the source element.
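Returning to the compatibility identity, it can be turned into a few lines of code. Below is a minimal sketch in Python with NumPy (the example graph and the function name `compatible` are ours, not from the paper); the Boolean matrix product is emulated with an integer product followed by a test against zero:

```python
import numpy as np

def compatible(M: np.ndarray, V: np.ndarray) -> bool:
    """||(M or M^t) . not-V|| = 0: no edge may touch a non-existing node.

    M is the Boolean adjacency matrix, V the Boolean nodes vector.
    The Boolean matrix product (and/or in place of *,+) is emulated
    with an integer product; the norm is the or of the components."""
    out = (M | M.T).astype(int) @ (~V).astype(int)
    return not out.any()

# Toy graph: edges 0->1 and 1->2, all three nodes present.
M = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]], dtype=bool)
V = np.array([1, 1, 1], dtype=bool)
print(compatible(M, V))      # True: every edge ends in an existing node

V_bad = np.array([1, 1, 0], dtype=bool)
print(compatible(M, V_bad))  # False: edge 1->2 dangles on the missing node 2
```

The `(M | M.T)` term checks outgoing and incoming edges at once, exactly as the transpose does in the identity above.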
More explicitly, $f = (f^V, f^E) : G \to G'$, with $f^V$ and $f^E$ partial injective mappings $f^V : |V| \to |V'|$, $f^E : |M| \to |M'|$ such that $\forall v \in \mathrm{Dom}(f^V)$, $\lambda(v) = \lambda'(f^V(v))$ and $\forall e = (n, m) \in \mathrm{Dom}(f^E)$, $f^E(e) = f^E(n, m) = (f^V(n), f^V(m))$, where Dom stands for domain, $E$ for edges and $V$ for vertices.

A production $p : L \to R$ is statically represented as $p = (L, R) = \left( \left( L^E, L^V, \lambda^L \right), \left( R^E, R^V, \lambda^R \right) \right)$. The matrices and vectors of these graphs are arranged so that the elements identified by morphism $p$ match (this is called completion, see below). Alternatively, a production adds and deletes nodes and edges, therefore it can be dynamically represented by encoding the rule's LHS together with matrices and vectors representing the addition and deletion of edges and nodes: $p = (L, e, r) = \left( \left( L^E, L^V, \lambda^L \right), e^E, r^E, e^V, r^V, \lambda^r \right)$, where $\lambda^r$ contains the types of the new nodes, $e^E$ and $e^V$ are the deletion Boolean matrix and vector, and $r^E$ and $r^V$ are the addition Boolean matrix and vector. They have a 1 in the position where the element is to be deleted or added, respectively. The output of rule $p$ is calculated by the Boolean formula $R = p(L) = r \vee \overline{e} L$, which applies both to nodes and edges.

Figure 2. (a) Rule Example. (b) Static Formulation. (c) Dynamic Formulation
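The rewriting formula $R = r \vee \overline{e} L$ is a plain elementwise Boolean expression, so a rule application is a one-liner. A minimal sketch (the toy rule below is ours, not the one of Fig. 2):

```python
import numpy as np

def apply_rule(L, e, r):
    """Output of a production: R = p(L) = r or (not e and L).
    The same formula applies to the edge matrices and the node vectors."""
    return r | (~e & L)

# Toy 2-node rule: erase edge 0->1, restock edge 1->0.
L = np.array([[0, 1], [0, 0]], dtype=bool)
e = np.array([[0, 1], [0, 0]], dtype=bool)   # deletion (erase) matrix
r = np.array([[0, 0], [1, 0]], dtype=bool)   # addition (restock) matrix
R = apply_rule(L, e, r)
print(R.astype(int))   # [[0 0]
                       #  [1 0]]
```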
Example. Figure 2 shows a rule and its associated matrices. The rule models the consumption of a piece (Pack) by a machine (Mach) input via the conveyor (Conv). There is an operator (Oper) managing the machine. Compatibility of the resulting graph must be ensured, thus the rule cannot be applied if the machine is already busy, as it would end up with two self loops, which is not allowed in a simple digraph. This restriction of simple digraphs can be useful in this kind of situation and acts like a built-in negative application condition. Later we will see that the nihilation matrix takes care of this restriction. ∎
In order to operate with the matrix representation of graphs of different sizes, an operation called completion adds extra rows and columns with zeros to matrices and vectors, and rearranges rows and columns so that the identified edges and nodes of the two graphs match. For example, in Fig. 2, if we need to operate $L^E$ and $R^E$, completion adds a fourth row and fourth column to $R^E$. No further modification is needed because the rest of the elements have the right types and are placed properly. We shall come back to this topic in Sec. 6. We call such matrices and vectors $e$ for "erase" and $r$ for "restock". The and symbol $\wedge$ is usually omitted in formulae, so $R = p(L) = r \vee \overline{e} \wedge L$ with precedence of $\wedge$ over $\vee$.

With the purpose of considering the elements in the host graph that disable a rule application, we extend the notation for rules with a new simple digraph $K$, which specifies the two kinds of forbidden edges: those incident to nodes which are going to be erased and any edge added by the rule (which cannot be added twice, since we are dealing with simple digraphs). $K$ has non-zero elements in positions corresponding to newly added edges, and to non-deleted edges incident to deleted nodes. Matrices are derived in the following order: $(L, R) \mapsto (e, r) \mapsto K$. Thus, a rule is statically determined by its LHS and RHS, $p = (L, R)$, from which it is possible to give a dynamic definition $p = (L, e, r)$, with $e = L \overline{R}$ and $r = R \overline{L}$, to end up with a full specification including its environmental behavior $p = (L, K, e, r)$. No extra effort is needed from the grammar designer because $K$ can be automatically calculated: $K = p(D)$, with $D = \overline{\,\overline{e^V} \otimes \left( \overline{e^V} \right)^t\,}$, i.e. $D_{ij} = 1$ if and only if node $i$ or node $j$ is deleted by the rule.
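The matrix $D$, and from it $K = p(D)$, can be computed mechanically. The sketch below builds $D$ directly from the semantics described above ($D_{ij} = 1$ iff node $i$ or node $j$ is deleted) via outer products rather than via the tensor formula; the toy rule and the function name are ours:

```python
import numpy as np

def nihilation(e_V, e_E, r_E):
    """K = p(D): D marks every edge position incident to a deleted node;
    applying the production to D leaves the non-deleted edges incident
    to deleted nodes plus the newly added edges, i.e. the forbidden edges."""
    d = e_V.astype(bool)
    full = np.ones(len(d), dtype=bool)
    D = np.outer(d, full) | np.outer(full, d)   # D[i,j] = d[i] or d[j]
    return r_E | (~e_E & D)                     # K = p(D) = r or (not e and D)

# Toy rule: node 0 is deleted together with its outgoing edge 0->1.
e_V = np.array([1, 0])
e_E = np.array([[0, 1], [0, 0]], dtype=bool)
r_E = np.zeros((2, 2), dtype=bool)
K = nihilation(e_V, e_E, r_E)
print(K.astype(int))   # forbidden: edges 0->0 and 1->0 (they would dangle)
```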
The evolution of the nihilation matrix (which elements cannot appear in the RHS) – call it $Q$ – is given by the inverse of the production: $(R, Q) = \left( p(L), p^{-1}(K) \right) = \left( r \vee \overline{e} L, \; e \vee \overline{r} K \right)$. See [22] for more details.

Inspired by the Dirac or bra-ket notation [2], we split the static part (initial state, $L$) from the dynamics (element addition and deletion, $p$): $R = p(L) = \langle L, p \rangle$. The ket operators (those to the right side of the bra-ket) can be moved to the bra (left hand side) by using their adjoints.

Matching is the operation of identifying the LHS of a rule inside a host graph. Given a rule $p : L \to R$ and a simple digraph $G$, any total injective morphism $m : L \to G$ is a match for $p$ in $G$; thus it is one of the ways of completing $L$ in $G$. Besides, we shall consider the elements that must not be present. Given the grammar rule $p : L \to R$ and the graph $G = (G^E, G^V)$, $d = (p, m)$ is called a direct derivation with $m = (m_L, m_K)$ and result $H = p^*(G)$ if the following conditions are satisfied:

1. There exist total injective morphisms $m_L : L \to G$ and $m_K : K \to \overline{G}$ with $m_L(n) = m_K(n)$, $\forall n \in L^V$.

2. The match $m_L$ induces a completion of $L$ in $G$. Matrices $e$ and $r$ are then completed in the same way to yield $e^*$ and $r^*$. The output graph is calculated as $H = p^*(G) = r^* \vee \overline{e^*} G$.

The negation, when applied to graphs alone (not specifying the nodes) – e.g. $\overline{G}$ in the first condition above – will be carried out just on edges. Notice that in particular the first condition above guarantees that $L$ and $K$ will be applied to the same nodes in the host graph $G$. In the present contribution we shall assume that completion is being performed somehow. This is closely related to non-determinism. The reader is referred to [21] for further details.
Symbol $\otimes$ denotes the tensor or Kronecker product, which sums up the covariant and contravariant parts and multiplies every element of the first vector by the whole second vector. MGG considers only injective matches.
In direct derivations dangling edges can occur because the nihilation matrix only considers edges incident to nodes appearing in the rule's LHS and not in the whole host graph. In MGG an operator $T_\varepsilon$ takes care of dangling edges, which are deleted by adding a preproduction (known as $\varepsilon$-production) before the original rule. Refer to [15, 16]. Thus, rule $p$ is transformed into the sequence $p; p_\varepsilon$, where $p_\varepsilon$ deletes the dangling edges and $p$ remains unaltered.

There are occasions in which two or more productions should be matched to the same nodes. This is achieved with the marking operator $T_\mu$ introduced in Chap. 6 in [19]. A grammar rule and its associated $\varepsilon$-production is one example, and we shall find more in future sections.

In [15, 16, 17, 19] some analysis techniques for MGGs have been developed, which we shall skim through. One important feature of MGG is that sequences of rules can be analyzed independently, to some extent, of any host graph. A rule sequence is represented by $s_n = p_n; \ldots; p_1$, where application is from right to left, i.e. $p_1$ is applied first. For its analysis, the sequence is completed by identifying the nodes across rules which are assumed to be mapped to the same node in the host graph.

Once the sequence is completed, sequence coherence [15, 19, 20] allows us to know if, for the given identification, the sequence is potentially applicable, i.e. if no rule disturbs the application of those following it. The formula for coherence results in a matrix and a vector (which can be interpreted as a graph) with the problematic elements. If the sequence is coherent, both should be zero; if not, they contain the problematic elements. A coherent sequence is compatible if its application produces a simple digraph, that is, if no dangling edges are produced in intermediate steps.

Given a completed sequence, the minimal initial digraph (MID) is the smallest graph that permits the application of such a sequence.
Conversely, the negative initial digraph (NID) contains all elements that should not be present in the host graph for the sequence to be applicable. In this way, the NID is a graph that should be found in $\overline{G}$ for the sequence to be applicable (i.e. none of its edges can be found in $G$). See Sec. 6 in [20] or Chaps. 5 and 6 in [19].

Other concepts we developed aim at checking sequential independence (same result) between a sequence and a permutation of it. G-congruence detects if two sequences, one a permutation of the other, have the same MID and NID. It returns two matrices and two vectors, representing two graphs which are the differences between the MIDs and NIDs of each sequence, respectively. Thus, if zero, the sequences have the same MID and NID. Two coherent and compatible completed sequences that are G-congruent are sequentially independent. See Sec. 7 in [20] or Chap. 7 in [19].
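The right-to-left application convention for $s_n = p_n; \ldots; p_1$ amounts to folding the rewriting formula over the reversed rule list. A toy sketch (rules and helper names are ours), ignoring matching and completion:

```python
import numpy as np

def apply_rule(e, r, G):
    """One rewriting step: p(G) = r or (not e and G)."""
    return r | (~e & G)

def apply_sequence(G, rules):
    """Apply s_n = p_n; ...; p_1. `rules` is listed as written,
    so the last element (p_1) is applied first."""
    for e, r in reversed(rules):
        G = apply_rule(e, r, G)
    return G

# Two toy rules on a 2-node graph: p1 deletes edge 0->1, p2 adds 1->0.
G = np.array([[0, 1], [0, 0]], dtype=bool)
p1 = (np.array([[0, 1], [0, 0]], dtype=bool), np.zeros((2, 2), dtype=bool))
p2 = (np.zeros((2, 2), dtype=bool), np.array([[0, 0], [1, 0]], dtype=bool))
H = apply_sequence(G, [p2, p1])   # s_2 = p_2; p_1
print(H.astype(int))   # [[0 0]
                       #  [1 0]]
```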
3. Previous Work on Application Conditions in MGG
In this section we shall brush up on application conditions (ACs) as introduced for MGG in [22], with non-fixed diagrams and quantifiers. For the quantification, a full-fledged monadic second order logic formula is used (MSOL, see e.g. [4]). One of the contributions in [22] is that a rule with an AC can be transformed into (sequences of) plain rules by adding the positive information to the left hand side of the production and the negative to the nihilation matrix.

A diagram $d$ is a set of simple digraphs $\{A_i\}_{i \in I}$ and a set of partial injective morphisms $\{d_k\}_{k \in K}$ with $d_k : A_i \to A_j$. The diagram $d$ is well defined if every cycle of morphisms commutes. $GC = \left( d = \left( \{A_i\}_{i \in I}, \{d_j\}_{j \in J} \right), f \right)$ is a graph constraint, where $d$ is a well defined diagram and $f$ a sentence with variables in $\{A_i\}_{i \in I}$ and predicates $P$ and $Q$. See eqs. (1) and (2). Formulae are restricted to have no free variables except for the default second argument of predicates $P$ and $Q$, which is the host graph $G$ in which we evaluate the GC. GC formulae are made up of expressions about graph inclusions. The predicates $P$ and $Q$ are given by:

$$P(X_1, X_2) = \forall m \left[ F(m, X_1) \Rightarrow F(m, X_2) \right] \qquad (1)$$

$$Q(X_1, X_2) = \exists e \left[ F(e, X_1) \wedge F(e, X_2) \right], \qquad (2)$$

where predicate $F(m, X)$ states that element $m$ (a node or an edge) is in graph $X$. Predicate $P(X_1, X_2)$ means that graph $X_1$ is included in $X_2$. Predicate $Q(X_1, X_2)$ asserts that there is a partial morphism between $X_1$ and $X_2$, which is defined on at least one edge ($e$ ranges over all edges). The notation (syntax) will be simplified by making the host graph $G$ the default second argument for predicates $P$ and $Q$. Besides, it will be assumed that by default total morphisms are demanded: unless otherwise stated, predicate $P$ is assumed. We take the convention that negations in abbreviations apply to the predicate (e.g.
$\exists A \left[ \overline{A} \right] \equiv \exists A \left[ \overline{P}(A, G) \right]$) and not the negation of the graph's adjacency matrix.

Figure 3. Diagram Example
Example. The GC in Fig. 3 is satisfied if for every $A_1$ in $G$ it is possible to find a related $A_2$ in $G$, i.e. its associated formula is $\forall A_1 \exists A_2 \left[ A_1 \Rightarrow A_2 \right]$, equivalent by definition to $\forall A_1 \exists A_2 \left[ P(A_1, G) \Rightarrow P(A_2, G) \right]$. Nodes and edges in $A_1$ and $A_2$ are related through morphism $d$, in which the image of the machine in $A_1$ is the machine in $A_2$. To enhance readability, each graph in the diagram has been marked with the quantifier given in the formula. The GC in Fig. 3 expresses that each machine should have an output conveyor. ∎

Given the rule $p : L \to R$ with nihilation matrix $K$, an application condition AC (over the free variable $G$) is a GC satisfying:

1. $\exists! \, i, j$ such that $A_i = L$ and $A_j = K$.

2. $\exists! \, k$ such that $A_k = G$ is the only free variable.

3. $f$ must demand the existence of $L$ in $G$ and the existence of $K$ in $\overline{G}$.

For simplicity, we usually do not explicitly show condition 3 in the formulae of ACs, nor the nihilation matrix $K$ in the diagram, which are existentially quantified before any other graph of the AC. Notice that the rule's LHS and its nihilation matrix can be interpreted as the minimal AC a rule can have. For technical reasons addressed in Sec. 5 (related to converting pre into postconditions) we assume that morphisms $d_i$ in the diagram do not have codomain $L$ or $K$. This is easily solved, as we may always use their inverses due to the $d_i$'s injectiveness.

It is possible to embed arbitrary ACs into rules by including the positive and negative conditions in $L$ and $K$, respectively. Intuitively: "MGG + AC = MGG" and "MGG + GC = MGG". In [22] two basic operations are introduced: closure – $\check{T}$ – which transforms universal into existential quantifiers, and decomposition – $\hat{T}$ – which transforms partial morphisms into total morphisms. Notice that a match is an existentially quantified total morphism.
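Read over finite element sets, the predicates $P$ and $Q$ of eqs. (1) and (2) are ordinary inclusion tests. A minimal sketch, modelling a graph simply as a set of nodes and edges (the encoding and names are ours):

```python
def P(X1, X2):
    """P(X1, X2): every element m (node or edge) of X1 is in X2,
    i.e. graph X1 is included in X2 (a total morphism, in this toy view)."""
    return all(m in X2 for m in X1)

def Q(X1, X2):
    """Q(X1, X2): some edge of X1 is also in X2, i.e. a partial morphism
    defined on at least one edge. Edges are encoded as (source, target)."""
    return any(e in X2 for e in X1 if isinstance(e, tuple))

# Host graph G with nodes 'a', 'b' and the edge ('a', 'b').
G = {'a', 'b', ('a', 'b')}
A = {'a', ('a', 'b')}
print(P(A, G))      # True: A is included in G
print(Q(A, G))      # True: they share the edge ('a', 'b')
print(P({'c'}, G))  # False: 'c' is not in G
```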
It is proved in [22] that any AC can be embedded into its corresponding direct derivation. This is achieved by transforming the AC into some sequences of productions. There are four basic types of ACs/GCs. Let $GC = (d, f)$ be a graph constraint with diagram $d = \{A\}$ and consider the associated production $p : L \to R$. The case $f = \exists A [A]$ is just the matching of $A$ in the host graph $G$. It is equivalent to the sequence $p; id_A$, where $id_A$ has $A$ as LHS and RHS, so it simply demands its existence in $G$. We introduce the operator $T_A$ that replaces $p$ by $p; id_A$ and leaves the diagram and the formula unaltered. If the formula $f = \forall A [A]$ is considered, we can reduce it to a sequence of matchings via the closure operator $\check{T}_A$, whose result is:

$$d \longmapsto d' = \left( \{A_1, \ldots, A_n\}, \; d_{ij} : A_i \to A_j \right), \qquad f \longmapsto f' = \exists A_1 \ldots \exists A_n \Big[ \bigwedge_{i=1}^{n} A_i \Big], \qquad (3)$$

with $A_i \cong A$, $d_{ij} \notin iso(A_i, A_j)$, $\check{T}_A(AC) = AC' = (d', f')$ and $n = |par_{max}(A, G)|$, where $iso(A, G) = \{ f : A \to G \mid f \text{ is an isomorphism} \}$ and $par_{max}(A, G) = \{ f : A \to G \mid f \text{ is a maximal non-empty partial morphism with } \mathrm{Dom}(f)^V = A^V \}$. This is equivalent to the sequence $p; id_{A_n}; \ldots; id_{A_1}$. If the application condition has formula $f = \exists A [Q(A)]$, we can proceed by defining the decomposition operator $\hat{T}_A$ with action:

$$d \longmapsto d' = \left( \{A_1, \ldots, A_n\}, \; d_{ij} : A_i \to A_j \right), \qquad f \longmapsto f' = \exists A_1 \ldots \exists A_n \Big[ \bigvee_{i=1}^{n} A_i \Big], \qquad (4)$$

where $A_i$ contains a single edge of $A$ and $n$ is the number of edges of $A$. This is equivalent to the set of sequences $\{ p; id_{A_i} \}$, $i \in \{1, \ldots, n\}$.

Less evident are formulas of the form $f = \nexists A [A] = \forall A \left[ \overline{A} \right]$. Fortunately, operators $\check{T}$ and $\hat{T}$ commute when composed, so we can get along with the operator $\tilde{T}_A = \hat{T}_A \circ \check{T}_A = \check{T}_A \circ \hat{T}_A$. The image of $\tilde{T}$ on such ACs is given by:

$$d \longmapsto d' = \left( \{A_{11}, \ldots, A_{mn}\}, \; d_{ij} : A_i \to A_j \right), \qquad f \longmapsto f' = \exists A_{11} \ldots \exists A_{mn} \Big[ \bigwedge_{i=1}^{m} \bigvee_{j=1}^{n} \overline{P}(A_{ij}, G) \Big]. \qquad (5)$$

An AC is said to be coherent if it is not a contradiction (false in all scenarios), compatible if, together with the rule's actions, it produces a simple digraph, and consistent if $\exists G$ host graph such that $G \models AC$ to which the production is applicable. As ACs can be transformed into equivalent (sets of) sequences, it is proved in [22] that coherence and compatibility of an AC are equivalent to coherence and compatibility of the associated (set of) sequence(s), respectively. Also, an AC is consistent if and only if its equivalent (set of) sequence(s) is applicable. Besides, all results and analysis techniques developed for MGG can be applied to sequences with ACs. Some examples follow:

• As a sequence is applicable if and only if it is coherent and compatible (see Sec. 6.4 in [19]), an AC is consistent if and only if it is coherent and compatible.

• Sequential independence allows us to delay or advance the constraints inside a sequence. As long as the productions do not modify the elements of the constraints, this is transformation of preconditions into postconditions. More on this in Sec. 5.

• Initial digraph calculation solves the problem of finding a host graph that satisfies a given AC/GC. There are some limitations, though. For example, it is necessary to limit the maximum number of nodes when dealing with universal quantifiers. This has no impact in some cases, for example when non-uniform MGG submodels are considered (see nodeless MGG in [21]).

• Graph congruence characterizes sequences with the same initial digraph. Therefore it can be used to study when two GCs/ACs are equivalent for all morphisms or for some of them.

Summarizing, there are two basic results in [22]. First, it is always possible to embed an application condition into the LHS of the production or derivation.
The left hand side $L$ of a production receives elements that must be found – $P(A, G)$ – and $K$ those whose presence is forbidden – $P(A, \overline{G})$ –. Second, it is always possible to find a sequence or a set of sequences of plain productions whose behavior is equivalent to that of the production plus the application condition.

We shall say that the host graph $G$ satisfies $\exists A[A]$, written $G \models \exists A[A]$, if and only if $\exists f \in par_{max}(A, G) \left[ f \in tot(A, G) \right]$, with $tot(A, G) = \{ f : A \to G \mid f \text{ is a total morphism} \} \subseteq par_{max}(A, G)$. Also, $G$ satisfies $\forall A[A]$, written $G \models \forall A[A]$, if and only if $\forall f \in par_{max}(A, G) \left[ f \in tot(A, G) \right]$. Usually we shall abuse notation and write $G \models GC$ instead. For more details, please refer to [22].
4. Postconditions
In this section we shall introduce postconditions and state some basic facts about them, analogous to those for preconditions. We shall enlarge the notation by appending a left arrow on top of the conditions to indicate that they are preconditions and a right arrow for postconditions. Examples are $\overleftarrow{A}$ for a precondition and $\overrightarrow{A}$ for a postcondition. If it is clear from the context, arrows will be omitted.

Definition 4.1. (Precondition and Postcondition)
An application condition set on the LHS of a production is known as a precondition. If it is set on the RHS then it is known as a postcondition.

Operators $T_{\overrightarrow{A}}$, $\hat{T}_{\overrightarrow{A}}$, $\check{T}_{\overrightarrow{A}}$ and $\tilde{T}_{\overrightarrow{A}}$ are defined similarly for postconditions. The following proposition establishes an equivalence between the basic formulae (match, decomposition, closure and negative application condition) and certain sequences of productions.

Proposition 4.1.
Let $\overrightarrow{A} = (f, d) = \left( f, \left( \{A\}, d : R \to A \right) \right)$ be a postcondition. Then we can obtain a set of sequences equivalent to the given basic formulae as follows:

$$\text{(Match)} \qquad f = \exists A [A] \;\longmapsto\; T_A(p) = id_A; p \qquad (6)$$

$$\text{(Closure)} \qquad f = \forall A [A] \;\longmapsto\; \check{T}_A(p) = id_{A_1}; \ldots; id_{A_m}; p \qquad (7)$$

$$\text{(Decomposition)} \qquad f = \exists A \left[ \overline{A} \right] \;\longmapsto\; \hat{T}_A(p) = \left\{ id_{\overline{A_i}}; p \right\}_{i = 1, \ldots, n} \qquad (8)$$

$$\text{(NAC)} \qquad f = \nexists A [A] \;\longmapsto\; \tilde{T}_A(p) = \left\{ id_{\overline{A_{1 i_1}}}; \ldots; id_{\overline{A_{m i_m}}}; p \right\}_{i_j \in \{1, \ldots, n\}, \, j \in \{1, \ldots, m\}} \qquad (9)$$

where $m$ is the number of potential matches of $A$ in the image of the host graph, $n$ is the number of edges in $A$ and $id_{\overline{A}}$ asks for the existence of $A$ in the complement of the image of the host graph.

Proof.
For the first case (match), the AC states that an additional graph $A$ has to be found in the image of the host graph. This is easily achieved by applying $id_A$ to the image of $L$, i.e. by considering $id_A(p(L)) = (id_A \circ p)(L)$. The elements in $A$ are related to those in $R$ according to the identifications in a morphism $d$ that has to be given in the diagram of the postcondition. In the four cases considered in the proposition we can move from composition to concatenation by means of the marking operator $T_\mu$. Recall that $T_\mu$ guarantees that the identifications in $d$ are preserved.

The second case (closure) is very similar. We have to verify all potential appearances of $A$ in the image of the host graph because $\forall A [A] = \nexists A \left[ \overline{A} \right]$. We proceed as in the first case, but this time with a finite number of compositions: $(id_{A_1} \circ \ldots \circ id_{A_m} \circ p)(L)$.

For decomposition, $A$ is not found in the host graph if for some matching there is at least one missing edge. It is thus similar to matching, but for a single edge. The way to proceed is to consider the set of sequences that appear in eq. (8). Negative application conditions (NACs) are the composition of eqs. (7) and (8). ∎

One of the main points of the techniques available for preconditions is to analyze rules with ACs by translating them into sequences of flat rules, and then analyzing the sequences of flat rules instead.
Theorem 4.1.
Any well-defined postcondition can be reduced to the study of the corresponding set of sequences.
Proof.
The proof follows that of Th. 4.1 in [22] and is included here for completeness' sake. Let the depth of a graph for a fixed node $n$ be the maximum over the shortest paths (to avoid cycles) starting in any node different from $n$ and ending in $n$. The depth of a graph is the maximum depth over all its nodes. Notice that the depth is 0 if and only if the $A_i$ in the diagram are unrelated. We shall apply induction on the depth of the AC.

A diagram $d$ is a graph where nodes are digraphs $A_i$ and edges are morphisms $d_{ij}$. There are 16 possibilities for depth 0 in an AC made up of a single element $A$, summarized in Table 1.

(1*) $\exists A[A]$ — (5*) $\not\forall A[A]$ — (9*) $\exists A[Q(A)]$ — (13*) $\not\forall A[Q(A)]$
(2*) $\exists A[\overline{A}]$ — (6*) $\not\forall A[\overline{A}]$ — (10*) $\exists A[\overline{Q}(A)]$ — (14*) $\not\forall A[\overline{Q}(A)]$
(3*) $\nexists A[A]$ — (7*) $\forall A[A]$ — (11*) $\nexists A[Q(A)]$ — (15*) $\forall A[Q(A)]$
(4*) $\nexists A[\overline{A}]$ — (8*) $\forall A[\overline{A}]$ — (12*) $\nexists A[\overline{Q}(A)]$ — (16*) $\forall A[\overline{Q}(A)]$

Table 1. All Possible Diagrams for a Single Element
Elements in the same row for each pair of columns are related using the equalities $\nexists A[A] = \forall A \left[ \overline{A} \right]$ and $\not\forall A[A] = \exists A \left[ \overline{A} \right]$, so it is possible to reduce the study to cases (1*) – (4*) and (9*) – (12*). The identities $Q(A) = \overline{P}(A, \overline{G})$ and $\overline{Q}(A) = P(A, \overline{G})$ reduce (9*) – (12*) to formulae (1*) – (4*):

$$\exists A \left[ Q(A) \right] = \exists A \left[ \overline{P}(A, \overline{G}) \right], \qquad \exists A \left[ \overline{Q}(A) \right] = \exists A \left[ P(A, \overline{G}) \right]$$

$$\nexists A \left[ Q(A) \right] = \nexists A \left[ \overline{P}(A, \overline{G}) \right], \qquad \nexists A \left[ \overline{Q}(A) \right] = \nexists A \left[ P(A, \overline{G}) \right].$$

Proposition 4.1 considers the four basic cases, which correspond to (1*) – (4*) in Table 1, showing that in fact they can all be reduced to matchings in the image of the host graph, i.e. to (1*) in Table 1, verifying the theorem.
Now we move on to the induction step, which considers combinations of quantifiers. Well-definedness guarantees independence with respect to the order in which elements $A_i$ in the postcondition are selected. When there is a universal quantifier $\forall A$, according to eq. (7), elements of $A$ are replicated as many times as potential instances of $A$ can be found in the host graph. In order to continue the procedure we have to clone the rest of the diagram for each replica of $A$, except those graphs which are existentially quantified before $A$ in the formula. That is, if we have a formula $\exists B \, \forall A \, \exists C$, when performing the closure of $A$ we have to replicate $C$ as many times as $A$, but not $B$. Moreover, $B$ has to be connected to each replica of $A$, preserving the identifications of the morphism $B \to A$. More in detail: when closure is applied to $A$, we iterate on all graphs $B_j$ in the diagram. There are three possibilities:

• If $B_j$ is existentially quantified after $A$ – $\forall A \ldots \exists B_j$ – then it is replicated as many times as $A$. Appropriate morphisms are created between each $A_i$ and $B_{ij}$ if a morphism $d : A \to B$ existed. The new morphisms identify elements in $A_i$ and $B_{ij}$ according to $d$. This permits finding different matches of $B_j$ for each $A_i$, some of which can be equal.

• If $B_j$ is existentially quantified before $A$ – $\exists B_j \ldots \forall A$ – then it is not replicated, but just connected to each replica of $A$ if necessary. This ensures that a unique $B_j$ has to be found for each $A_i$. Moreover, the replication of $A$ has to preserve the shape of the original diagram. That is, if there is a morphism $d : B \to A$ then each $d_i : B \to A_i$ has to preserve the identifications of $d$ (this means that we take only those $A_i$ which preserve the structure of the diagram).

• If $B_j$ is universally quantified (no matter if it is quantified before or after $A$), again it is replicated as many times as $A$. Afterwards, $B_j$ will itself need to be replicated due to its universality.
The order in which these replications are performed is not relevant, as $\forall A\,\forall B_j = \forall B_j\,\forall A$ (the replication operators commute). □

The previous theorem and the corollaries that follow depend heavily on the host graph and its image (through the matching), so the analysis techniques developed so far in MGG which are independent of the host graph cannot be applied. The "problem" is the universal quantifier. We can consider the initial digraph and dispose to some extent of the host graph and its image. This is related to the fact (Sec. 5) that it is possible to transform postconditions into equivalent preconditions.

Two applications of Th. 4.1 are the following corollaries, which characterize coherence, compatibility and consistency of postconditions.
Corollary 4.1.
A postcondition is coherent if and only if its associated (set of) sequence(s) is coherent. Also, it is compatible if and only if its associated (set of) sequence(s) is compatible, and it is consistent if and only if its associated (set of) sequence(s) is applicable.

(If, for example, there are three instances of $A$ in the image of the host graph but only one of $B_j$, then the three replicas of $B_j$ are matched to the same part of $p(G)$.)

Corollary 4.2.
A postcondition is consistent if and only if it is coherent and compatible.
Example.
Let's consider the diagram in Fig. 4 with formula $\forall A_1\, \exists A_2\, [A_1 \Rightarrow A_2]$. The postcondition states that if an operator is connected to a machine, such machine is busy. The formula has an implication, so it is not possible to directly generate the set of sequences, because the postcondition also holds when the left-hand side of the implication is false. The closure operator $\widetilde{T}$ reduces the postcondition to existential quantifiers, which is represented to the right of the figure. The resulting modified formula would be $\exists A_1^1\, \exists A_2^1\, \exists A_1^2\, \exists A_2^2\, [(A_1^1 \Rightarrow A_2^1) \wedge (A_1^2 \Rightarrow A_2^2)]$.

Figure 4. Postcondition Example
Once the formula has existentials only, we manipulate it to get rid of the implications. Thus, we have $\exists A_1^1\, \exists A_2^1\, \exists A_1^2\, \exists A_2^2\, [(\overline{A_1^1} \vee A_2^1) \wedge (\overline{A_1^2} \vee A_2^2)] = \exists A_1^1\, \exists A_2^1\, \exists A_1^2\, \exists A_2^2\, [(\overline{A_1^1} \wedge \overline{A_1^2}) \vee (\overline{A_1^1} \wedge A_2^2) \vee (A_2^1 \wedge \overline{A_1^2}) \vee (A_2^1 \wedge A_2^2)]$. This leads to a set of four sequences: $\{(\overline{id}_{A_1^1}; \overline{id}_{A_1^2}),\, (\overline{id}_{A_1^1}; id_{A_2^2}),\, (id_{A_2^1}; \overline{id}_{A_1^2}),\, (id_{A_2^1}; id_{A_2^2})\}$. Thus, the graph $p(G)$ and the production satisfy the postcondition if and only if some sequence in the set is applicable to $p(G)$. □

Something left undefined is the order of the productions $id_{A_i}$ and $\overline{id}_{A_i}$ in the sequences. Consistency does not depend on the ordering of the productions – as long as the first to be applied is production $p$ – because the productions $id$ (and their negations) are sequentially independent (they do not add nor delete any edge or node). If they are not sequentially independent then there exists at least one inconsistency. This inconsistency can be detected using the previous corollaries, independently of the order of the productions.
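The implication-elimination and distribution steps can be mechanized. A small sketch; the encoding of literals as `('id', G)` / `('nid', G)` pairs, standing for $id_G$ and its negation, is an illustrative assumption:

```python
# Rewrite each implication X => Y as the clause (not-X or Y), then distribute
# the conjunction of clauses into a disjunction of conjunctions (DNF).
# Each DNF term yields one candidate sequence of id / negated-id productions.
from itertools import product

clauses = [[('nid', 'A1'), ('id', 'A2')],   # A1 => A2  ~  (not A1) or A2
           [('nid', 'A3'), ('id', 'A4')]]   # A3 => A4  ~  (not A3) or A4

sequences = list(product(*clauses))
assert len(sequences) == 4                       # four sequences, as above
assert (('id', 'A2'), ('id', 'A4')) in sequences
```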
5. Moving Conditions
In this section we give two different proofs that it is possible to transform preconditions into equivalent postconditions and back again. The first proof (sketched) makes use of category theory, while the second relies on the characterizations of coherence, G-congruence and compatibility. To ease the exposition we shall focus on the certainty part only, as the nihilation part would follow using the inverse of the production.

We shall start with a case that can be addressed using equations (6) – (9), Th. 4.1 and Cor. 4.1: when the transformed postcondition for a given precondition does not change. The question of whether it is always possible to transform a precondition into a postcondition – and back again – in this restricted case would be equivalent to asking for sequential independence of the production $p$ and the identities $id$ or $\overline{id}$:

$$p;\, id_{A_n};\, \ldots;\, id_{A_1} = id_{\vec{A}_n};\, \ldots;\, id_{\vec{A}_1};\, p, \qquad (10)$$

where the sequence to the left of the equality corresponds to a precondition and the sequence to the right corresponds to its equivalent postcondition.

Figure 5. Precondition to Postcondition Transformation
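The independence expressed by eq. (10) can be illustrated with a minimal Boolean-matrix sketch. A production is taken as a pair $(e, r)$ of deletion/addition matrices and rewriting as $H = r \vee (\overline{e} \wedge G)$, following the MGG scheme recalled in the introduction; the helper names and the omission of compatibility checks are assumptions of this sketch:

```python
# Boolean rewriting: H = r OR (NOT e AND G), over list-of-lists adjacency
# matrices with 0/1 entries.
def apply(G, e, r):
    n = len(G)
    return [[int(r[i][j] or (G[i][j] and not e[i][j])) for j in range(n)]
            for i in range(n)]

def contains(G, A):
    # every edge demanded by the condition graph A is present in G
    return all(not A[i][j] or G[i][j]
               for i in range(len(G)) for j in range(len(G)))

# Host graph with edges (0,1) and (1,2); p deletes (0,1); A asks for (1,2).
G = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]
e = [[0, 1, 0], [0, 0, 0], [0, 0, 0]]
r = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
A = [[0, 0, 0], [0, 0, 1], [0, 0, 0]]

H = apply(G, e, r)
# p does not touch the elements A demands, so checking A before or after
# applying p gives the same answer:
assert contains(G, A) and contains(H, A)
```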
In general the production may act on elements that appear in the diagram of the precondition, spoiling sequential independence. The left and center of Fig. 5 – in which the first basic AC (match) is considered – suggest that the pre-to-post transformation is a categorical pushout in the category of simple digraphs and partial morphisms.

Theorem 4.1 proves that any postcondition can be reduced to the match case. Besides, we can trivially consider total morphisms (instead of partial ones) by restricting the domain and the codomain of $p$ to the nodes in $A$. For the post-to-pre transformation we can either use pullbacks, or pushouts plus the inverse of the production involved.

To see that precondition satisfaction is equivalent to postcondition satisfaction using category theory, we should check that the different pushouts can be constructed ($p^*$, $p_A$, $p^*_A$, etcetera) and that $d_L = m_A \circ m_L$ and $d^*_L = m_{\vec{A}} \circ m^*_L$ (refer to Fig. 5). Although some topics remain untouched, such as dangling edges, we shall not carry on with category theory.

(This restricted case is not so unrealistic; it occurs, for example, if the production preserves all elements appearing in the precondition. The square $\{L, R, A, \vec{A}\}$ is a pushout where $p$, $L$, $d_L$, $R$ and $A$ are known and $\vec{A}$, $p_A$ and $d^*_L$ need to be calculated.)

Example. Let be given the precondition $A$ to the left of Fig. 6, with formula $f = \exists A[A]$. To calculate its associated postcondition we can apply the production to $A$ and obtain $\vec{A}$, represented also to the left of the same figure. Notice however that it is not possible to find a match of $L$ in $A$ because of one of the nodes. One possible solution is to consider $L' = L \cap A$ and restrict the production to those common elements. This is done to the right of Fig. 6. □

Figure 6. Restriction to Common Parts: Total Morphism

Theorem 5.1.
Any consistent precondition is equivalent to some consistent postcondition and vice versa.
Proof.
For the post-to-pre transformation the roles of $p$ and $p^{-1}$ are interchanged, so we shall address only the pre-to-post case. It is enough to study a single $A$ in the diagram, as the same procedure applies mechanically (Th. 4.1 transforms any precondition into a sequence of productions). Also, it suffices to state the result for $id_A$, because $\overline{id}_A$ is similar but the evolution depends on $p^{-1}$. Finally, we shall assume that $p$ and $id_A$ are not sequentially independent.

Recall that G-congruence guarantees sameness of the initial digraph, which is what the sequence demands on the host graph. Therefore, all we have to do is to use G-congruence to check the differences in the two sequences:

$$p;\, id_A \longmapsto id_{\vec{A}};\, p. \qquad (11)$$

However, before that we need to guarantee coherence and compatibility of both sequences (see the hypothesis of Th. 4 in [20]). Coherence gives rise to the following equation:

$$r_{\vec{A}}\, R \vee e\, \vec{A} = 0 = r\, A \vee e_A\, L \;\Longrightarrow\; e\, \vec{A} \vee r\, A = 0, \qquad (12)$$

where $e$ and $r$ correspond to $p$, $e_A$ to $id_A$ and $r_{\vec{A}}$ to $id_{\vec{A}}$. Fortunately, $id$ is a production that does nothing, so from the dynamical point of view any conflict should come from $p$, i.e. $e_A = r_{\vec{A}} = 0$, which has been used in the implication of eq. (12).

By consistency we have that $r\, A = 0$, so eq. (12) will be fulfilled if the postcondition is the precondition but erasing the elements that the production deletes. A similar reasoning for the nihil part tells us that we should add to the postcondition all those elements added by the production.
Compatibility can only be ruined by dangling edges. In Sec. 6.1 in [19] dangling edges are deleted by transforming the production via the operator $T_\varepsilon$. This is proved to be equivalent to defining a sequence by appending a so-called $\varepsilon$-production. In essence, the $\varepsilon$-production just deletes any dangling edge, thus keeping compatibility. This very same procedure can be applied now:

$$p;\, id_A \;\xmapsto{\;T_\varepsilon\;}\; p;\, p_\varepsilon;\, id_A \;\longmapsto\; p;\, id^\varepsilon_A;\, p_\varepsilon \;\longmapsto\; id_{\vec{A}};\, p;\, p_\varepsilon. \qquad (13)$$

According to Prop. 5 and Th. 4 in [20], two compatible and coherent sequences are G-congruent if the following equation (adapted to our case) is fulfilled:

$$L_A\, e\, K \left( \overline{r} \vee e_A \right) \vee K_A\, r\, L \left( \overline{e} \vee r_A \right) = 0. \qquad (14)$$

We have that $e_A = r_A = 0$. Also, $K_A = 0$ because $id_A$ acts on the certainty part, and $\overline{e}\, r = r$ (see e.g. Prop. 4.1.4 in [24]). We are left with

$$L_A\, e\, K = 0, \qquad (15)$$

which is guaranteed by compatibility: once $id_A$ is transformed into $id^\varepsilon_A$ in eq. (13) there cannot be any potential dangling edge, except those to be deleted by $p$ in the last step. □

It is worth stressing the fact that the transformations between pre- and postconditions preserve consistency of the application condition. We have seen in this section that $p$ acts not only on $L$ but on the whole precondition. We can therefore extend the notation:

$$\vec{A} = p(A), \qquad \vec{A} = \langle A, p \rangle. \qquad (16)$$

Pre-to-post and post-to-pre transformations can affect the diagram and the formula. See the example below. There are two clear cases:

• The application condition requires the graph to appear and the production deletes all its elements.

• The application condition requires the graph not to appear and the production adds all its elements.

For a given application condition it is not necessarily true that $A = p^{-1}(p(A))$, because some new elements may be added and some obsolete elements discarded.
What we will get is an equivalent condition, adapted to $p$, that holds whenever $A$ holds and fails to be true whenever $A$ is false.

Example. In Fig. 7 there is a very simple transformation of a precondition into a postcondition through the morphism $p(A)$. The formula associated to the precondition $A$ that we shall consider is $f = \exists A[A]$. The production deletes two arrows and adds a new one. The overall effect is reverting the direction of the edge between nodes 1 and 2 and deleting the self-loop. Notice that $m_L$ cannot match the nodes the other way around in $G$ because of the edge in the application condition.

Figure 7. Precondition to Postcondition Example
Suppose we had a (redundant) graph $B$, made up of a single node with a self-loop, in the precondition, with formula $f = \exists A\, \exists B\, [A \wedge B]$. The formula in the postcondition would still be $\vec{f} = \exists \vec{A}\, [\vec{A}]$. The opposite transformation, from postcondition into precondition, can be obtained by reverting the arrow, i.e. through $p^{-1}(\vec{A})$. More general schemes can be studied applying the same principles.

Let $A' = p^{-1}(\vec{A})$. If a pre-post-pre transformation is carried out, we will have $A' \neq A$, because edge (2,1) would be added to $A$. However, it is true that $A' = p^{-1}(p(A'))$.

Note that in fact $id_A$ and $p$ are sequentially independent if we limit ourselves to edges, so it would be possible to simply move the precondition to a postcondition as it is. Nonetheless, we have to consider nodes 1 and 2 as the common parts between $L$ and $A$. This is the same kind of restriction as the one illustrated in Fig. 6. □

If the pre-post-pre transformation is thought of as an operator $T_p$ acting on application conditions, then it fulfills

$$T_p^2 = T_p, \qquad (17)$$

i.e. $T_p$ restricted to its image is the identity. The same would also be true for a post-pre-post transformation.

A possible interpretation of eq. (17) is that the definition of the application condition can vary from the natural one, according to the production under consideration. Pre-post-pre or post-pre-post transformations adjust application conditions to the corresponding production.

When defining diagrams some "practical problems" may turn up. For example, if the diagram $d = \left( L \xrightarrow{\,d_L\,} \vec{A} \xleftarrow{\,d_A\,} A \right)$ is considered, then there are two potential problems:

1. The direction of the arrow between $A$ and $\vec{A}$ is not the natural one. Nevertheless, injectiveness allows us to safely revert the arrow, $d' = d^{-1}$.

2. Even though we only formally state $d_L$ and $d_A$, other morphisms naturally appear and need to be checked out, e.g. $d^*_L: R \to \vec{A}$. New morphisms should be considered if they relate at least one element.
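The $\varepsilon$-production mechanism invoked in the proof of Th. 5.1 can be sketched on an edge-set representation (an assumption of this sketch): when $p$ deletes a node $n$, $p_\varepsilon$ deletes the incident edges that $p$ itself leaves dangling.

```python
# p_eps deletes every edge of the host graph incident to the deleted node n
# that p does not delete explicitly, so that compatibility (absence of
# dangling edges) is preserved in p;p_eps.
def epsilon_deletions(G_edges, p_deleted_edges, n):
    return {(u, v) for (u, v) in G_edges
            if (u == n or v == n) and (u, v) not in p_deleted_edges}

G_edges = {(1, 2), (2, 3), (3, 1)}
# p deletes node 1 and, explicitly, the edge (1, 2); edge (3, 1) dangles:
assert epsilon_deletions(G_edges, {(1, 2)}, 1) == {(3, 1)}
```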
6. Delocalization and Variable Nodes
In this section we touch on the delocalization of graph constraints and application conditions, as well as their equivalence. Also, we shall pave the way to multidigraph rewriting, to be studied in detail in Sec. 7.

Let $s = p_n; \ldots; p_1$ be a sequence of productions with their corresponding ACs. We have seen in Th. 4.1 that preconditions and postconditions are equivalent, and in Th. 5.1 that they can be transformed into sequences of productions. As a precondition in $p_{i+1}$ is the same as a postcondition in $p_i$, we see that ACs can be moved arbitrarily inside a sequence.

Similarly, constraints set on the intermediate states of a derivation can be moved among them. A graph constraint GC set in the initial state $G$, to which a production $p$ is going to be applied, is equivalent to the precondition

$$f_{pre} = \exists L\, \exists K\, \left[\, L \wedge P(K, \overline{G}) \wedge f_{GC}\, \right]. \qquad (18)$$

If the GC is set on the final state $H = p(G)$, to which the production $p$ has been applied, there is an equivalent postcondition:

$$f_{post} = \exists R\, \exists Q\, \left[\, P(R, H) \wedge P(Q, \overline{H}) \wedge f_{GC}\, \right]. \qquad (19)$$

In both cases the diagrams are given by the LHS or the RHS plus the diagram of the graph constraint. We call this property of application conditions and graph constraints delocalization.

We shall now address variable nodes, which will be used to enhance MGG functionality to deal with multidigraphs. Graph transformation with variables is studied in [12]. We shall summarize the proposal in [12] and propound an alternative way to close the section.

If, instead of nodes of fixed type, variable types are allowed, we get a so-called graph pattern. A rule scheme is just a production in which the graphs are graph patterns. A substitution function $\iota$ specifies how the variable names occurring in a production are substituted. A rule scheme $p$ is instantiated via substitution functions, producing a particular production. For example, for the substitution function $\iota$ we get $p^\iota$.
(Otherwise stated: any condition made up of $n$ graphs $A_i$ can be identified with the complete graph $K_n$, in which the nodes are the graphs $A_i$ and the morphisms are the $d_{ij}$. Whether this is a directed graph or not is a matter of taste, as the morphisms are injective.)

The set of production instances for $p$ is defined as the set $I(p) = \{\, p^\iota \mid \iota \text{ is a substitution} \,\}$. The kernel of a graph $G$, $\ker(G)$, is defined as the graph resulting when all variable nodes are removed. It might be the case that $\ker(G) = \emptyset$.

The basic idea is to reduce any rule scheme to a set of rule instances. Note that it is not possible in general to generate $I(p)$, because this set can be infinite. The way to proceed is not difficult:
1. Find a match for the kernel of $L$.

2. Induce a substitution $\iota$ such that the match for the kernel becomes a full match $m: L^\iota \to G$.

3. Construct the instance $R^\iota$ and apply $p^\iota$ to get the direct derivation $G \stackrel{p^\iota}{\Longrightarrow} H$.

As an alternative, we may extend the concept of type assignment. Recall from Sec. 2 that types are assigned by a function from the set of nodes $|V|$ of a simple digraph $G$ to some fixed set $T$ of types, $\lambda: |V| \to T$. Instead, we shall define

$$\lambda: |V| \longrightarrow \mathcal{P}(T) \setminus \{\emptyset\}, \qquad (20)$$

where $\mathcal{P}(T)$ is the power set of $T$ (the set of all its subsets); the empty set is excluded because we do not permit nodes without types.

When two matrices are operated, the type set of a node in the result is the intersection of the type sets of the nodes operated. For example, suppose that we operate two matrices, $C = AB$, and that the sets of types associated to the elements $a$ and $b$ are $\lambda(a)$ and $\lambda(b)$, respectively. Then $\lambda(c) = \lambda(a) \cap \lambda(b)$. The operation would not be allowed in case $\lambda(c) = \lambda(a) \cap \lambda(b) = \emptyset$.

Figure 8. Example of Graph Constraint
Example. Let one type of nodes be represented by squares (call them multinodes) and the rest (call them simple nodes) by colored circles. The set of types $T$ is split into two: multinodes and simple nodes.

Let's consider the graph constraint $GC_1 = (d_1, f_1)$, with $d_1$ the diagram depicted in Fig. 8, made up of the graphs $A_1$ and $A_2$, along with the formula $f_1 = \forall A_1\, \forall A_2\, [\overline{Q}(A_1) \wedge \overline{Q}(A_2)]$. This graph constraint reads: "edges must connect simple nodes and multinodes alternately; no edge is allowed to be incident to two multinodes or to two simple nodes, including self-loops".

In the graph $A_1$ of Fig. 8, $x$ and $y$ represent variable nodes, while $a$ and $b$ in $A_2$ have a fixed type. We may think of $a$ and $b$ as variable nodes whose set of types has a single element. □
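The extended typing of eq. (20) is easy to prototype. Representing type sets as Python frozensets, together with the concrete type names below, are illustrative assumptions:

```python
# Each node carries a non-empty set of admissible types; operating two
# matrix entries intersects the type sets of the nodes involved, and an
# empty intersection forbids the operation.
def combine_types(la, lb):
    lc = la & lb
    if not lc:
        raise ValueError("operation not allowed: incompatible types")
    return lc

x = frozenset({'simple', 'multi'})   # a variable node: either kind
a = frozenset({'multi'})             # a fixed-type node (a multinode)
assert combine_types(x, a) == frozenset({'multi'})
```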
7. From Simple Digraphs to Multidigraphs
In this section we show how MGG can deal with multidigraphs (directed graphs allowing multiple parallel edges) just by considering variable nodes. At first sight this might seem a hard task, as MGG heavily depends on adjacency matrices, which are well suited for simple digraphs but cannot cope with parallel edges. This section can be thought of as a theoretical application of graph constraints and application conditions to Matrix Graph Grammars.

The idea is not difficult: a special kind of node (call it multinode, in contrast to simple node), associated to every edge in the graph, is introduced; i.e. the edges of the multidigraph are substituted by multinodes in a simple digraph representation of the multidigraph. Graphically, multinodes will be represented by a filled square, while normal nodes will appear as colored circles. See the example at the end of Sec. 6.

Operations previously specified on edges now act on multinodes: adding an edge is transformed into a multinode addition, and edge deletion becomes multinode deletion. There are edges that link multinodes to their source and target simple nodes.

Some restrictions (application conditions) have to be imposed on the actions that can be performed on multinodes, as well as on the shape or topology of the permitted graphs (graph constraints), because not every possible graph with multinodes represents a multidigraph.
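The encoding just described can be sketched as follows; the dictionary representation and the multinode naming `m0, m1, ...` are assumptions of this sketch:

```python
# Each edge of the multidigraph becomes a multinode m_k of a simple digraph,
# with simple edges source -> m_k and m_k -> target, so parallel edges stay
# distinguishable.
def encode(V, E):
    """V: simple nodes; E: list of (source, target), parallels allowed."""
    nodes = {v: 'simple' for v in V}
    edges = set()
    for k, (s, t) in enumerate(E):
        m = 'm%d' % k
        nodes[m] = 'multi'
        edges.add((s, m))   # source -> multinode
        edges.add((m, t))   # multinode -> target
    return nodes, edges

# Two parallel edges between nodes 1 and 2, as in Fig. 9:
nodes, edges = encode([1, 2], [(1, 2), (1, 2)])
assert [n for n in nodes if nodes[n] == 'multi'] == ['m0', 'm1']
assert (1, 'm0') in edges and ('m1', 2) in edges
```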
Figure 9. Multidigraph with Two Outgoing Edges
Example. Consider the simple production $p$ in Fig. 9, with two parallel edges between nodes 1 and 2. As commented above, multinodes are represented by square nodes while normal nodes are left unchanged. When $p$ deletes an edge, $p^\tau$ deletes a multinode. The adjacency matrices $L$, $K$, $R$ and the deletion matrix $e$ of $p^\tau$ are defined over the simple nodes 1, 2, 3 and the multinodes of the figure; the deletion of one of the parallel edges becomes the deletion of the corresponding multinode, which disappears from $R$.

In a real situation, a development tool such as AToM3 or AGG (see http://moncs.cs.mcgill.ca/MSDL/research/projects/AToM3/ for AToM3 and [1] for AGG and some other tools) should take care of all these representation issues. A user would see what appears to the left of Fig. 9 and not what is depicted to the right of the same figure. □

Some restrictions on what a production can do to a multidigraph are necessary in order to obtain a multidigraph again. Think for example of the case in which, after applying some production, we get a graph with an isolated multinode (which would stand for an edge with no source nor target nodes). All we have to do is to find the properties that define an edge and impose them on multinodes as graph constraints:

1. A simple node (resp., multinode) cannot be directly connected to another simple node (resp., multinode).

2. Edges (encoded as multinodes) always have a simple node as source and a simple node as target.

The first condition above is addressed in the example of Sec. 6 with graph constraint $GC_1$ (see Fig. 8). The second condition can be encoded as another graph constraint, $GC_2 = (d_2, f_2)$. The diagram can be found in Fig. 10 and the formula is $f_2 = \forall A_3\, \forall A_4\, [\overline{A_3} \wedge \overline{A_4}]$.

Figure 10. Multidigraph Constraints
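Constraints $GC_1$ and $GC_2$ can be checked directly on the encoded graph. The representation follows the earlier encoding sketch, and the "exactly one source and one target" reading of the second condition is an assumption:

```python
# GC1 forbids edges between two nodes of the same kind; GC2 demands that
# every multinode has exactly one source and one target simple node.
def satisfies_MC(nodes, edges):
    if any(nodes[u] == nodes[v] for (u, v) in edges):
        return False                      # violates GC1
    for m in (n for n in nodes if nodes[n] == 'multi'):
        ins = [u for (u, v) in edges if v == m]
        outs = [v for (u, v) in edges if u == m]
        if len(ins) != 1 or len(outs) != 1:
            return False                  # violates GC2 (e.g. isolated multinode)
    return True

nodes = {1: 'simple', 2: 'simple', 'm0': 'multi'}
assert satisfies_MC(nodes, {(1, 'm0'), ('m0', 2)})   # a well-formed edge
assert not satisfies_MC(nodes, {(1, 'm0')})          # dangling multinode
assert not satisfies_MC(nodes, {(1, 2)})             # simple-simple edge
```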
Theorem 7.1.
Any multidigraph is isomorphic to some simple digraph $G$ together with the graph constraint $MC = (d_1 \cup d_2,\, f_1 \wedge f_2)$.

Proof (sketch).
A graph with multiple edges $M = (V, E, s, t)$ consists of disjoint finite sets $V$ of nodes and $E$ of edges, together with source and target functions $s: E \to V$ and $t: E \to V$, respectively. The function $v = s(e)$, $v \in V$, $e \in E$, returns the source node $v$ of the edge $e$. We have multigraphs because the pair function $(s, t): E \to V \times V$ need not be injective, i.e. several different edges may have the same source and target nodes. We have digraphs because there is a distinction between source and target nodes. This is the standard definition found in any textbook.

It is clear that any $M$ can be represented as a simple digraph $G$ satisfying $MC$. The converse also holds. To see it, just consider all possible combinations of two nodes and two multinodes and check that any problematic situation is ruled out by
$MC$. Induction finishes the proof. □
The multidigraph constraint
$MC$ must be fulfilled by any host graph. If there is a production $p: L \to R$ involved, $MC$ has to be transformed into an application condition over $p$. In fact, the multidigraph constraint should be demanded both as a precondition and as a postcondition. This is easily achieved by means of eqs. (18) and (19).

We close this section by analyzing how multidigraphs behave with respect to dangling edges. With the theory as developed so far, if a production specifies the deletion of a simple node, then an $\varepsilon$-production would delete any edge incident to this simple node, connecting it to any surrounding multinode. But the restrictions imposed by $MC$ do not allow this, so any production with potential dangling edges cannot be applied.

In order to automatically delete any potential multiple dangling edge, $\varepsilon$-productions need to be restated by defining them at the multidigraph level, i.e. $\varepsilon$-productions have to delete any potential "dangling multinode". A new type of production ($\Xi$-productions) is introduced to get rid of the annoying edges that would dangle when multinodes are in turn deleted by $\varepsilon$-productions. We will not develop the idea in detail and will limit ourselves to describing the concepts. The way to proceed is to define an appropriate operator $T_\Xi$ and to redefine the operator $T_\varepsilon$.

A production $p: L \to R$ between multidigraphs that deletes one simple node $n$ may give rise to one $\varepsilon$-production that deletes one or more multinodes $m_i$ (those "incident" to $n$ and not deleted by the grammar rule). This $\varepsilon$-production can in turn be applied only if any edge incident to the $m_i$'s has already been erased, hence possibly provoking the appearance of one $\Xi$-production. (Recall that edges connect simple nodes and multinodes.)

This process is depicted in Fig. 11 where, in order to apply production $p$, productions $p_\varepsilon$ and $p_\Xi$ need to be applied in advance:

$$p \longmapsto p;\, p_\varepsilon;\, p_\Xi. \qquad (21)$$

Figure 11. $\varepsilon$-production and $\Xi$-production

Eventually, one could simply compose the $\Xi$-production with its $\varepsilon$-production, renaming it to $\varepsilon$-production and defining it as the way to deal with dangling edges in the case of multiple edges, fully recovering the standard behavior of MGG. As commented above, a potential user of a development tool such as AToM3 would still see things as in the simple digraph case, with no need to worry about $\Xi$-productions.

Another theoretical use of application conditions and graph constraints is the encoding of Turing Machines and Boolean Circuits using Matrix Graph Grammars (see [21]). However, they are not necessary for Petri nets (see Chap. 10 in [19]).
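The cascade of eq. (21) can be sketched on the same encoded representation (again an assumption): deleting a simple node triggers the $\varepsilon$-deletion of its incident multinodes, which triggers the $\Xi$-deletion of their incident edges.

```python
# Deleting the simple node n: p_eps removes the multinodes connected to n,
# and p_Xi removes the edges incident to those multinodes, so no dangling
# element survives.
def cascade(nodes, edges, n):
    # p_eps: multinodes directly connected to the deleted simple node n
    eps = {m for m in nodes if nodes[m] == 'multi'
           and ((n, m) in edges or (m, n) in edges)}
    # p_Xi: edges incident to those multinodes would dangle next
    xi = {(u, v) for (u, v) in edges if u in eps or v in eps}
    kept = {v: k for v, k in nodes.items() if v != n and v not in eps}
    return kept, edges - xi

# One edge 1 -> 2, encoded through multinode m0; now delete node 1:
nodes = {1: 'simple', 2: 'simple', 'm0': 'multi'}
edges = {(1, 'm0'), ('m0', 2)}
nodes2, edges2 = cascade(nodes, edges, 1)
assert 'm0' not in nodes2 and edges2 == set()
```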
8. Conclusions and Future Work
In the present contribution we have introduced preconditions and postconditions for MGGs, proving that for any given postcondition there is an equivalent set of sequences of plain rules. Besides, coherence, compatibility and consistency of postconditions have been characterized in terms of already known concepts for sequences. We have also proved that it is always possible to transform any postcondition into an equivalent precondition and vice versa. Moreover, we have seen that restrictions are delocalized if a sequence is under consideration. An alternative way to that of [12] to tackle variable nodes has also been proposed. This allows us to extend MGG to cope with multidigraphs without major modifications to the theory.

In [22] there is an exhaustive comparison of the application conditions in MGG and other proposals. The main papers, to the best of our knowledge, that tackle this topic are [6] (with the definition of ACs), [8, 9], where GCs and ACs are extended with nesting and satisfiability, and also [23], in which ACs are generalized to arbitrary levels of nesting (though restricted to trees).

For future work, we shall generalize already studied concepts in MGG to multidigraphs, such as coherence, compatibility, initial digraphs, graph congruence, reachability, etcetera. Our main interest, however, will be focused on complexity theory and the application of MGG to the study of complexity classes, P and NP in particular. [21] follows this line of research.

References

[1] AGG, The Attributed Graph Grammar system. http://tfs.cs.tu-berlin.de/agg/

[2] Bra-ket notation intro: http://en.wikipedia.org/wiki/Bra-ket notation

[3] Corradini, A., Montanari, U., Rossi, F., Ehrig, H., Heckel, R., Löwe, M. 1999.
Algebraic Approaches to Graph Transformation – Part I: Basic Concepts and Double Pushout Approach. In [24], pp. 163–246.

[4] Courcelle, B. 1997. The expression of graph properties and graph transformations in monadic second-order logic. In [24], pp. 313–400.

[5] Ehrig, H., Heckel, R., Korff, M., Löwe, M., Ribeiro, L., Wagner, A., Corradini, A. 1999. Algebraic Approaches to Graph Transformation – Part II: Single Pushout Approach and Comparison with Double Pushout Approach. In [24], pp. 247–312.

[6] Ehrig, H., Ehrig, K., Habel, A., Pennemann, K.-H. 2004. Constraints and Application Conditions: From Graphs to High-Level Structures. Proc. ICGT'04, LNCS 3256, pp. 287–303. Springer.

[7] Ehrig, H., Ehrig, K., Prange, U., Taentzer, G. 2006. Fundamentals of Algebraic Graph Transformation. Springer.

[8] Habel, A., Pennemann, K.-H. 2005. Nested Constraints and Application Conditions for High-Level Structures. In Formal Methods in Software and Systems Modeling, LNCS 3393, pp. 293–308. Springer.

[9] Habel, A., Pennemann, K.-H. 2009. Correctness of High-Level Transformation Systems Relative to Nested Conditions. Math. Struct. Comp. Science 19(2), pp. 245–296.

[10] Heckel, R., Küster, J. M., Taentzer, G. 2002. Confluence of typed attributed graph transformation systems. Proc. ICGT'02, LNCS 2505, pp. 161–176. Springer.

[11] Heckel, R., Wagner, A. 1995. Ensuring consistency of conditional graph rewriting – a constructive approach. Electr. Notes Theor. Comput. Sci. (2).

[12] Hoffmann, B. 2005. Graph Transformation with Variables. In Graph Transformation, Vol. 3393 of LNCS, pp. 101–115. Springer.

[13] Lambers, L., Ehrig, H., Orejas, F. 2006. Conflict Detection for Graph Transformation with Negative Application Conditions. Proc. ICGT'06, LNCS 4178, pp. 61–76. Springer.

[14] de Lara, J., Vangheluwe, H. 2004. Defining Visual Notations and Their Manipulation Through Meta-Modelling and Graph Transformation. Journal of Visual Languages and Computing, Special section on "Domain-Specific Modeling with Visual Languages", Vol. 15(3–4), pp. 309–330. Elsevier Science.

[15] Pérez Velasco, P. P., de Lara, J. 2006. Towards a New Algebraic Approach to Graph Transformation: Long Version. Tech. Rep. of the School of Comp. Sci., Univ. Autónoma de Madrid. jlara/investigacion/techrep 03 06.pdf

[16] Pérez Velasco, P. P., de Lara, J. 2006. Matrix Approach to Graph Transformation: Matching and Sequences. Proc. ICGT'06, LNCS 4178, pp. 122–137. Springer.

[17] Pérez Velasco, P. P., de Lara, J. 2007. Using Matrix Graph Grammars for the Analysis of Behavioural Specifications: Sequential and Parallel Independence. Proc. PROLE'07, pp. 11–26. Electr. Notes Theor. Comput. Sci. (206), pp. 133–152. Elsevier.

[18] Pérez Velasco, P. P., de Lara, J. 2007. Analysing Rules with Application Conditions using Matrix Graph Grammars. Graph Transformation for Verification and Concurrency (GTVC) workshop.

[19] Pérez Velasco, P. P. 2009. Matrix Graph Grammars: An Algebraic Approach to Graph Dynamics. ISBN 978-3639212556. VDM Verlag. Also available at arXiv:0801.1245v1.

[20] Pérez Velasco, P. P., de Lara, J. 2009. A Reformulation of Matrix Graph Grammars with Boolean Complexes. The Electronic Journal of Combinatorics, Vol. 16(1), R73.

[21] Pérez Velasco, P. P. 2009. Matrix Graph Grammars as a Model of Computation. Preliminary version available at arXiv:0905.1202v2.

[22] Pérez Velasco, P. P., de Lara, J. 2010. Matrix Graph Grammars with Application Conditions. To appear in Fundamenta Informaticae. Also available at arXiv:0902.1809v2.

[23] Rensink, A. 2004. Representing First-Order Logic Using Graphs. Proc. ICGT'04, LNCS 3256, pp. 319–335. Springer.

[24] Rozenberg, G. (ed.) 1997. Handbook of Graph Grammars and Computing by Graph Transformation, Vol. 1: Foundations. World Scientific.