Parallel Independence in Attributed Graph Rewriting
P. Bahr (Ed.): 11th International Workshop on Computing with Terms and Graphs (TERMGRAPH 2020), EPTCS 334, 2021, pp. 62–77, doi:10.4204/EPTCS.334.5. © T. Boy de la Tour. This work is licensed under the Creative Commons Attribution License.
Thierry Boy de la Tour
Univ. Grenoble Alpes, CNRS, Grenoble INP, LIG, 38000 Grenoble, France [email protected]
In order to define graph transformations by the simultaneous application of concurrent rules, we have adopted in previous work a structure of attributed graphs stable by unions. We analyze the consequences on parallel independence, a property that characterizes the possibility to resort to sequential rewriting. This property turns out to depend not only on the left-hand side of rules, as in algebraic approaches to graph rewriting, but also on their right-hand side. It is then shown that, of three possible definitions of parallel rewriting, only one is convenient in the light of parallel independence.
The notion of parallel independence from [18, 12] has been studied mostly in the algebraic approaches to graph rewriting, see [7]. It basically consists in a condition on concurrent transformations of an object that not only guarantees but characterizes the possibility to apply the transformations sequentially, in any order, such that all such sequences of transformations yield the same result.

When two transformations are involved, with rules r₁ and r₂, this takes the form of the diamond property and is known as the Local Church-Rosser Problem [7]; it consists in finding a condition (called parallel independence) on direct transformations H₁ ←r₁− G −r₂→ H₂ that is equivalent to the existence of direct transformations H₁ −r₂→ H ←r₁− H₂ with the same redexes, hence to the existence of two equivalent sequences of transformations G −r₁→ H₁ −r₂→ H and G −r₂→ H₂ −r₁→ H. It is obvious that non-overlapping redexes always entail parallel independence; the difficulty of the problem is that the converse does not hold and that, depending on the rules, some amount of overlap may be allowed. The notion of parallel independence is also instrumental in defining Critical Pairs (as pairs of transformations that are not parallel independent), which are central in proving confluence of sets of production rules [14, 8].

This notion should therefore also be considered in algorithmic approaches to graph rewriting. Indeed, the informal description of parallel independence given above makes perfect sense outside the algebraic approach; it is purely operational. Consider for instance Python's multiple assignment a, b := b, a, an elegant expression that swaps the values of a and b. We naturally understand this as a parallel expression a := b ∥ b := a.
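This reading can be made concrete with a minimal Python sketch (the function names run_sequential and run_parallel are ours, and in Python's actual syntax the multiple assignment is written a, b = b, a):

```python
# Two sequential orders of the assignments a := b and b := a,
# versus the simultaneous (parallel) reading of a, b := b, a.

def run_sequential(env, order):
    """Apply the assignments one after the other, in the given order."""
    env = dict(env)
    for target, source in order:
        env[target] = env[source]
    return env

def run_parallel(env):
    """Evaluate both right-hand sides first, then assign: the intended swap."""
    new_a, new_b = env["b"], env["a"]
    return {"a": new_a, "b": new_b}

env = {"a": 0, "b": 1}
# The two sequential orders disagree, and neither performs the swap:
assert run_sequential(env, [("a", "b"), ("b", "a")]) == {"a": 1, "b": 1}
assert run_sequential(env, [("b", "a"), ("a", "b")]) == {"a": 0, "b": 0}
# The parallel reading swaps the values:
assert run_parallel(env) == {"a": 1, "b": 0}
# When a and b hold the same value, all evaluations agree
# (the two assignments are parallel independent):
same = {"a": 5, "b": 5}
assert run_sequential(same, [("a", "b"), ("b", "a")]) == run_parallel(same)
```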
If a and b have the same value then the two assignments can be evaluated in sequence in any order, yielding the same result independently of the chosen order; they are parallel independent. If however they have distinct values, the two sequential evaluations yield different results (and none corresponds to the intended meaning); the two assignments are parallel dependent. Parallel dependence also typically occurs in cellular automata when rules are applied to neighbor cells, because of the overlap. Hence sequential applications of rules in an undetermined order would result in non-deterministic automata.

These examples show that there is a legitimate way of computing by applying simultaneously concurrent transformations that may not be parallel independent, even though the result may not be reachable by sequential transformations. Swapping the values of a and b cannot be performed by applying a := b or b := a sequentially. This calls for a notion of parallel transformation for defining the result of such simultaneous applications of rules.

One such transformation has been defined in [5], in an algorithmic approach that is adopted here. It is based on directed graphs where vertices and arrows are equipped with sets of attributes, and it enables a definition of a union of such graphs, given in Section 2. This is a fundamental difference with terms or termgraphs and leads to a natural definition of parallel transformation in Section 3.

The consequences of these definitions on parallel independence are analyzed in Sections 4 and 5. The results of these sections are also from [5]; we give them here in a slightly simpler setting and without proofs, focusing on comparisons with the algebraic approach to graph rewriting where parallel independence has been originally formulated.

In Section 6 we analyze the notion of parallel rewriting. We first define a notion of regularity that ensures the absence of conflicts between concurrent rules.
We then show that this notion is too restricted to encompass parallel independence, and generalize it to the effective deletion property (also from [5]). Section 7 is devoted to comparisons with the algebraic notion of parallel coherence from [4]. It is shown that its translation to the present framework, though more general than regularity, is still too restricted to encompass parallel independence. It is also shown to be the right algebraic translation of the effective deletion property. Concluding remarks and related works are presented in Section 8.

We assume a many-sorted signature Σ and a set V of variables, disjoint from Σ, such that every variable has a Σ-sort. For any finite X ⊆ V, T(Σ, X) denotes the algebra of Σ-terms over X. For any Σ-algebra A, let ⌊A⌋ be the disjoint union of the carrier sets of the Σ-sorts in A.

An attributed graph (or graph for short) G is a tuple (˙G, ~G, ´G, `G, A_G, ˚G) where ˙G, ~G are sets whose elements are respectively called vertices and arrows, ´G, `G are the source and target functions from ~G to ˙G, A_G is a Σ-algebra and ˚G is an attribution of G, i.e., a function from ˙G ∪ ~G to P(⌊A_G⌋). The elements of ⌊A_G⌋ are called attributes, and we assume that ˙G, ~G and ⌊A_G⌋ are pairwise disjoint. G is unlabeled if ˚G(x) = ∅ for all x ∈ ˙G ∪ ~G; it is finite if the sets ˙G, ~G and ˚G(x) (for every x) are finite. The carrier of G is the set ⌊G⌋ def= ˙G ∪ ~G ∪ ⌊A_G⌋.

A graph H is a subgraph of G, written H ⊳ G, if the underlying graph (˙H, ~H, ´H, `H) of H is a subgraph of G's underlying graph (in the usual sense), A_H = A_G and ˚H(x) ⊆ ˚G(x) for all x ∈ ˙H ∪ ~H.

Graphs are better specified as pictures. Vertices and arrows will be named and their attributes will be listed after each name, separated from it by | (which is omitted if the attribute set is ∅).
Since graphs may not be connected, they will be surrounded by a rectangle with rounded corners. As an example, consider graphs H ⊳ G where H is the graph such that ˙H = {x, y}, ~H = {f}, ´H(f) = x, `H(f) = y, ˚H(x) = ˚H(f) = ∅ and ˚H(y) = {1}, and similarly for G, which further contains a vertex z, an arrow g and the attribute 0 (the Σ-algebra A_H = A_G must contain at least 0 and 1).

A morphism α from graph H to graph G, written α : H → G, is a function from ⌊H⌋ to ⌊G⌋ such that the restriction of α to ˙H ∪ ~H is a morphism from H's to G's underlying graphs (that is, ´G ∘ α = α ∘ ´H and `G ∘ α = α ∘ `H; this restriction of α is called the underlying graph morphism of α), the restriction of α to ⌊A_H⌋ is a Σ-homomorphism from A_H to A_G, denoted ˚α, and ˚α ∘ ˚H(x) ⊆ ˚G ∘ α(x) for all x ∈ ˙H ∪ ~H. Note that H ⊳ G iff ⌊H⌋ ⊆ ⌊G⌋ and the canonical injection from ⌊H⌋ to ⌊G⌋ is a morphism from H to G. For all F ⊳ H, the image α(F) is the smallest subgraph of G w.r.t. the order ⊳ such that α|⌊F⌋ is a morphism from F to α(F).

An isomorphism is a morphism that has an inverse morphism. We write H ≃ G if there is an isomorphism from H to G. A morphism µ : H → G is a matching if the underlying graph morphism of µ is injective. For any F ⊳ H it is then easy to see that

µ(F) = ( µ(˙F), µ(~F), µ ∘ ´F ∘ µ⁻¹, µ ∘ `F ∘ µ⁻¹, A_G, ˚µ ∘ ˚F ∘ µ⁻¹ ).

Given two attributions l and l′ of G, let l \ l′ (resp. l ∩ l′, l ∪ l′) be the attribution of G that maps any x to l(x) \ l′(x) (resp. l(x) ∩ l′(x), l(x) ∪ l′(x)). If l is an attribution of a subgraph H ⊳ G, it is implicitly extended to the attribution of G that is identical to l on ˙H ∪ ~H and maps any other entry to ∅.

Unions of graphs can only be formed between joinable graphs, i.e., graphs that have a common part. We start with a simpler notion of joinable functions.

Definition 2.1 (joinable functions).
Two functions f : D → C and g : D′ → C′ are joinable if f(x) = g(x) for all x ∈ D ∩ D′. Then, the meet of f and g is the function f ⋏ g : D ∩ D′ → C ∩ C′ that is the restriction of f (or g) to D ∩ D′. The join f ⋎ g is the unique function from D ∪ D′ to C ∪ C′ such that f = (f ⋎ g)|D and g = (f ⋎ g)|D′.

For any set I and any I-indexed family (fᵢ : Dᵢ → Cᵢ)_{i∈I} of pairwise joinable functions, let ⋎_{i∈I} fᵢ be the only function from ∪_{i∈I} Dᵢ to ∪_{i∈I} Cᵢ such that fᵢ = (⋎_{i∈I} fᵢ)|Dᵢ for all i ∈ I.

We see that any two restrictions f|A and f|B of the same function f are joinable, and then f|A ⋏ f|B = f|A∩B and f|A ⋎ f|B = f|A∪B. Conversely, if f and g are joinable then each is a restriction of f ⋎ g.

Definition 2.2 (joinable graphs). Two graphs H and G are joinable if A_H = A_G, ˙H ∩ ~G = ~H ∩ ˙G = ∅, and the functions ´H and ´G (and similarly `H and `G) are joinable. We can then define the graphs

H ⊓ G def= ( ˙H ∩ ˙G, ~H ∩ ~G, ´H ⋏ ´G, `H ⋏ `G, A_H, ˚H ∩ ˚G ),
H ⊔ G def= ( ˙H ∪ ˙G, ~H ∪ ~G, ´H ⋎ ´G, `H ⋎ `G, A_H, ˚H ∪ ˚G ).

Similarly, if (Gᵢ)_{i∈I} is an I-indexed family of graphs that are pairwise joinable, and A is an algebra such that A = A_{Gᵢ} for all i ∈ I, then let

⨆_{i∈I} Gᵢ def= ( ∪_{i∈I} ˙Gᵢ, ∪_{i∈I} ~Gᵢ, ⋎_{i∈I} ´Gᵢ, ⋎_{i∈I} `Gᵢ, A, ∪_{i∈I} ˚Gᵢ ).

It is easy to see that these structures are graphs: the sets of vertices and arrows are disjoint and the adjacency functions have the correct domains and codomains. If I = ∅ the chosen algebra A is generally obvious from the context. Note that if H and G are joinable then H ⊓ G ⊳ H ⊳ H ⊔ G. Similarly, if the Gᵢ's are pairwise joinable then Gⱼ ⊳ ⨆_{i∈I} Gᵢ for all j ∈ I. We see that any two subgraphs of G are joinable, and that H ⊳ G iff H ⊓ G = H iff H ⊔ G = G.
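Definition 2.1 can be sketched over finite functions represented as Python dicts (a simplified illustration; the function names joinable, meet and join are ours):

```python
def joinable(f, g):
    """f and g agree on the intersection of their domains."""
    return all(f[x] == g[x] for x in f.keys() & g.keys())

def meet(f, g):
    """The restriction of f (or g) to the common domain D ∩ D'."""
    assert joinable(f, g)
    return {x: f[x] for x in f.keys() & g.keys()}

def join(f, g):
    """The unique common extension of f and g to D ∪ D'."""
    assert joinable(f, g)
    return {**f, **g}

f = {1: "a", 2: "b"}
g = {2: "b", 3: "c"}
assert joinable(f, g)
assert meet(f, g) == {2: "b"}
assert join(f, g) == {1: "a", 2: "b", 3: "c"}
# Two restrictions of the same function are always joinable,
# and each joinable function is a restriction of the join:
h = join(f, g)
assert joinable({k: h[k] for k in (1, 2)}, {k: h[k] for k in (2, 3)})
```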
These operations are commutative and, on triples of pairwise joinable graphs, they are associative and distributive over each other. For any two graphs H, G there exists G′ ≃ G such that H and G′ are joinable (one possibility is to take ˙G′ ∩ ~H = ∅ and ~G′ ∩ (˙H ∪ ~H) = ∅).

For any sets V, A and attribution l, we say that G is disjoint from V, A, l if ˙G ∩ V = ∅, ~G ∩ A = ∅ and ˚G(x) ∩ l(x) = ∅ for all x ∈ ˙G ∪ ~G. We write G \ [V, A, l] for the largest subgraph of G (w.r.t. ⊳) that is disjoint from V, A, l. This provides a natural way of removing objects from an attributed graph. It is easy to see that this subgraph always exists (it is the union of all subgraphs of G disjoint from V, A, l), hence rewriting steps will not be restricted by a gluing condition as in the Double-Pushout approach (see [11]).

Definition 3.1 (rules, matchings). For any finite X ⊆ V, a (Σ, X)-graph is a finite graph G such that A_G = T(Σ, X). Let Var(G) def= ∪_{x ∈ ˙G ∪ ~G} ( ∪_{t ∈ ˚G(x)} Var(t) ), where Var(t) is the set of variables occurring in t.

A rule r is a triple (L, K, R) of (Σ, X)-graphs such that L and R are joinable, L ⊓ R ⊳ K ⊳ L and Var(L) = X (see Remark 3.2 below). The rule r is unlabeled if L, K and R are unlabeled.

A matching of r in a graph G is a matching µ from L to G that is consistent, i.e., such that ˚µ(˚L(x) \ ˚K(x)) ∩ ˚µ(˚K(x)) = ∅ (or equivalently ˚µ(˚L(x) \ ˚K(x)) = ˚µ(˚L(x)) \ ˚µ(˚K(x))) for all x ∈ ˙K ∪ ~K.
We denote M(r, G) the set of all matchings of r in G (they all have domain ⌊L⌋).

We consider finite sets R of rules such that for all r, r′ ∈ R, if (L, K, R) = r ≠ r′ = (L′, K′, R′) then ⌊L⌋ ≠ ⌊L′⌋, so that M(r, G) ∩ M(r′, G) = ∅ for any graph G; we then write M(R, G) for ⊎_{r∈R} M(r, G). For any µ ∈ M(R, G) there is a unique rule r_µ ∈ R such that µ ∈ M(r_µ, G), and its components are denoted r_µ = (L_µ, K_µ, R_µ).

Remark 3.2. If X were allowed to contain a variable v not occurring in L, then v would freely match any element of A_G and the set M(r, G) would contain as many matchings with essentially the same effect. Also note that Var(R) ⊆ Var(L), that R and K are joinable, and that R ⊓ K = L ⊓ R. The fact that K is not required to be a subgraph of R allows the possible deletion by other rules of data matched by K but not by R.

A rewrite step may involve the creation of new vertices in a graph, corresponding to the vertices of a rule that have no match in the input graph, i.e., those in ˙R \ ˙L (or similarly may create new arrows). These vertices should really be new, not only different from the vertices of the original graph but also different from the vertices created by other transformations (corresponding to other matchings in the graph). We simply reuse the vertices x from ˙R \ ˙L by indexing them with any relevant matching µ, each time yielding a new vertex (x, µ) which is obviously different from any new vertex (x, ν) for any other matching ν ≠ µ, and also from any vertex of G. This is similar to a construction of colimits in the category of sets.

Definition 3.3 (graph G↑µ and matching µ↑). For any rule r = (L, K, R), graph G and µ ∈ M(r, G) we define a graph G↑µ together with a matching µ↑ of R in G↑µ. We first define the sets

˙G↑µ def= µ(˙R ∩ ˙K) ∪ ((˙R \ ˙K) × {µ}) and ~G↑µ def= µ(~R ∩ ~K) ∪ ((~R \ ~K) × {µ}).
Next we define µ↑ by: ˚µ↑ def= ˚µ and, for all x ∈ ˙R ∪ ~R, if x ∈ ˙K ∪ ~K then µ↑(x) def= µ(x), else µ↑(x) def= (x, µ). Since the restriction of µ↑ to ˙R ∪ ~R is bijective, µ↑ is a matching from R to the graph

G↑µ def= ( ˙G↑µ, ~G↑µ, µ↑ ∘ ´R ∘ µ↑⁻¹, µ↑ ∘ `R ∘ µ↑⁻¹, A_G, ˚µ↑ ∘ ˚R ∘ µ↑⁻¹ ).

By construction µ↑(R) = G↑µ, the matchings µ and µ↑ are joinable and µ ⋏ µ↑ is a matching from R ⊓ K to µ(R ⊓ K). It is easy to see that the graph G and the graphs G↑µ are pairwise joinable.

For any set M ⊆ M(R, G) of matchings in a graph G, we define below how to transform G by applying simultaneously the rules associated with the matchings in M. This simply consists in first removing simultaneously all the vertices, arrows and attributes that are matched by L_µ but not by K_µ for any µ ∈ M, and then in adding simultaneously all the images of the right-hand sides R_µ.

Definition 3.4 (graph G‖M). For any graph G and set M ⊆ M(R, G) let

G‖M def= G \ [V_M, A_M, ℓ_M] ⊔ ⨆_{µ∈M} G↑µ

where

V_M def= ∪_{µ∈M} µ(˙L_µ \ ˙K_µ), A_M def= ∪_{µ∈M} µ(~L_µ \ ~K_µ) and ℓ_M def= ∪_{µ∈M} ˚µ ∘ (˚L_µ \ ˚K_µ) ∘ µ⁻¹.

If M is a singleton {µ} we write G‖µ for G‖M, V_µ for V_M, etc.

Example 3.5.
We represent the simultaneous assignment a, b := b, a by two rules r₁ and r₂ that correspond to the simple assignments a := b and b := a. For this we use a signature Σ with two constants a and b of sort identifier, and a set of variables V with two variables u and v of sort integer. The environment is represented by two nodes x and y, each attributed by an identifier and its value. More precisely, let A be the Σ-algebra where the sort integer is interpreted as A_integer = ℤ, the sort identifier as A_identifier = {a, b} and each constant as itself. We consider the environment where a = 1 and b = −1, i.e., the graph

G = ( x|a,1  y|b,−1 )

and the rules

r₁ = ( x|a,u  y|b,v ,  x|a  y|b,v ,  x|a,v )
r₂ = ( x|a,u  y|b,v ,  x|a,u  y|b ,  y|b,u )

that correspond to a := b and b := a respectively. We see that r₁ removes the content u associated to a and replaces it by the content v associated to b. There is exactly one matching µᵢ of rule rᵢ in G for i = 1,
2, given by µᵢ(x) = x, µᵢ(y) = y, ˚µᵢ(u) = 1 and ˚µᵢ(v) = −1 for i = 1, 2. Let M = {µ₁, µ₂}. Since no vertex or arrow is removed we have V_M = A_M = ∅. We also have

ℓ_M(x) = ( ˚µ₁ ∘ (˚L_{µ₁} \ ˚K_{µ₁}) ∘ µ₁⁻¹ (x) ) ∪ ( ˚µ₂ ∘ (˚L_{µ₂} \ ˚K_{µ₂}) ∘ µ₂⁻¹ (x) )
= ˚µ₁( ˚L_{µ₁}(x) \ ˚K_{µ₁}(x) ) ∪ ˚µ₂( ˚L_{µ₂}(x) \ ˚K_{µ₂}(x) )
= ˚µ₁({u}) ∪ ˚µ₂(∅)
= {1}

and similarly ℓ_M(y) = {−1}, so that G \ [V_M, A_M, ℓ_M] = ( x|a  y|b ). Finally, we see that

G↑µ₁ = µ₁↑(R_{µ₁}) = µ₁↑( x|a,v ) = ( x|a,−1 )
G↑µ₂ = µ₂↑(R_{µ₂}) = µ₂↑( y|b,u ) = ( y|b,1 )

hence

G‖M = ( x|a  y|b ) ⊔ ( x|a,−1 ) ⊔ ( y|b,1 ) = ( x|a,−1  y|b,1 ),

which is the environment where a = −1 and b =
1, i.e., where the initial values of a and b have been swapped. Note that the same transformation can obviously be performed by the single rule

( x|a,u  y|b,v ,  x|a  y|b ,  x|a,v  y|b,u ).

More importantly, this rule can be computed from r₁ and r₂ (see [4]).

In Definition 3.4, G‖M is guaranteed to be a graph since the ⊔ operation is only applied on joinable graphs. Every morphism µ↑ is a matching from the right-hand side R_µ to the result G‖M of the transformation. The case where M is a singleton defines the classical semantics of one sequential rewrite step.

Definition 3.6 (sequential rewriting). For any finite set of rules R, we define the relation →_R of sequential rewriting by stating that, for all graphs G and H, G →_R H iff there exists some µ ∈ M(R, G) such that H ≃ G‖µ.

In the Double-Pushout approach to graph rewriting (see [11]), production rules are spans L ← K → R, with two morphisms from an interface K to the left- and right-hand sides L, R. These objects and morphisms are taken in a category, possibly of some sort of graphs. A direct derivation is a diagram relating the span L ← K → R (with matching µ : L → G) to a bottom row G ← D → H, where the two squares are pushouts, i.e., a form of union. Since objects, say D and R, can always be understood modulo isomorphisms, their union cannot be defined without specifying what they have in common; this is the rôle of K and of the morphisms from K to D and R. If for instance K is empty then the pushout H is the disjoint union (or direct sum, or co-product) of D and R. Hence the right square adds something to D, and inversely the left square removes something from G. Hence H is obtained from G by removing an image of L and writing an image of R, with the possibility that L and R share a common part given by K.
This very general approach has a drawback: depending on G and µ the object D may not exist, and if it does it may not be unique.

In this approach, sequential independence is a property of two consecutive direct transformations G ⇒ H₁ ⇒ H₂ (with rules L₁ ← K₁ → R₁, L₂ ← K₂ → R₂, matchings µ₁ : L₁ → G, µ₂ : L₂ → H₁ and pushout complements D₁, D₂), formulated as the existence of two commuting morphisms j₁ : R₁ → D₂ and j₂ : L₂ → D₁.

It is then proven by the Local Church-Rosser Theorem that the two production rules can be applied in reverse order to G and yield the same result H₂ (we may call this the swapping property). Of course, the matchings µ₁ : L₁ → G and µ₂ : L₂ → H₁ are then replaced by other matchings µ₂′ : L₂ → G and µ₁′ : L₁ → H₁′ that are related to µ₁ and µ₂. A drawback of this definition is that it does not account for longer sequences of direct transformations. Indeed, if three consecutive steps are given by (µ₁, µ₂, µ₃), it is possible to swap µ₁ with µ₂ if they are sequential independent, and similarly for µ₂ and µ₃, but this does not imply that µ₁ and µ₃ can be swapped under these hypotheses (because the matchings, and hence the direct transformations, are modified by the swapping operations). We would need to express sequential independence between µ₁ and µ₃, but the definition does not apply since they are not consecutive steps. More elaborate notions of equivalence between sequences of direct transformations are thus required (see the notion of shift equivalence in [7, Chapter 3.5]).

Because of the specificities of our framework (no pushouts, horizontal morphisms are only canonical injections, and there may be no such morphism from K to R) we need a different definition of sequential independence. It is natural to think of the swapping property itself as the definition of sequential independence, since it describes the operational meaning of parallel independence, but we are faced with another problem.
We are dealing with possibly infinite sets of matchings of rules in a graph, and we cannot form a notion of infinite sequences of rewrite steps (because each step may both remove and add data). Yet we do not wish to restrict the notion to finite sets, not simply for the sake of generality but also because it is closely related to parallel independence, a notion that can naturally be defined on infinite sets (see Section 5).

We may however use Definition 3.4 to handle infinite sets of matchings, by using the graph G‖M to stand for the result of an (independent) sequence of transformations. We may thus express sequential independence as a generalized swapping property, where the swap is performed between one transformation and all the others (taken in parallel). Yet this definition would not imply that all subsets of a sequential independent set are sequential independent, hence it needs to be stated in a more general way, by swapping any transformation with any others (and not only with all the others).

Definition 4.1 (sequential independence). For any graph G and set M ⊆ M(R, G), we say that M is sequential independent if for all N ⊆ M and all µ ∈ M \ N,
• µ(L_µ) ⊳ G‖N, hence there is a canonical injection j from µ(L_µ) to G‖N,
• there exists an isomorphism α such that α(G‖(N ∪ {µ})) = (G‖N)‖(j ∘ µ) and α is the identity on G.

The isomorphism α in Definition 4.1 is necessary to account for the difference between the isomorphic graphs µ↑(R_µ) and (j ∘ µ)↑(R_µ), i.e., to transform vertices or arrows of the form (x, µ) into (x, j ∘ µ) (but there is no need to be that specific in the definition).

It is then easy to see (by induction on the cardinality of M) that

Proposition 4.2.
For any graph G and finite set M ⊆ M(R, G), if M is sequential independent then G →*_R G‖M.

Of course there is usually more than one sequence of rewriting steps from G to G‖M, since under the hypothesis they can be swapped; but without it there is generally none (as illustrated in Example 3.5). And the fact that there is one such sequence does not imply sequential independence, i.e., the converse of Proposition 4.2 is obviously not true.

In the Double-Pushout approach, parallel independence is a property of two direct transformations of the same object G, i.e., of two direct derivations H₁ ⇐ G ⇒ H₂ (with rules L₁ ← K₁ → R₁, L₂ ← K₂ → R₂, matchings µ : L₁ → G, ν : L₂ → G and pushout complements D₁, D₂), formulated as the existence of two commuting morphisms j₁ : L₁ → D₂ and j₂ : L₂ → D₁.

The Local Church-Rosser Theorem mentioned above actually shows that µ and ν are parallel independent iff they correspond to a sequential independent pair (µ, ν′) (where ν′ : L₂ → H₁ is related to ν). It is the symmetry between µ and ν that entails the swapping property. This is remarkable since parallel independence does not refer to the results of the transformations involved, while the result of the sequences of transformations is central in the swapping property (as in Definition 4.1).

This definition of parallel independence can easily be lifted to sets M of matchings (or direct transformations) by considering all possible pairs µ, ν ∈ M, with a slight caveat. In this definition the two direct transformations may be identical, thus stating a property of a single transformation that is not shared by all direct transformations. But Definition 3.4 does not allow to apply any member µ of M more than once (because applying µ any number of times in parallel would jeopardize determinism of ⇛_R, see Definition 6.6 below).
For this reason we will only consider pairs of distinct matchings (so that singletons M shall be considered as parallel independent, see below).

Our goal is therefore to formulate parallel independence in the present framework, in order to obtain an equivalence similar to the Local Church-Rosser Theorem. Considering that the pushout complement D₁ is replaced by the graph G \ [V_µ, A_µ, ℓ_µ], the commuting property of j₂ amounts to ν(L₂) ⊳ G \ [V_µ, A_µ, ℓ_µ], which can be more elegantly expressed as ν(L₂) ⊓ µ(L₁) ⊳ µ(K₁), or ν(L_ν) ⊓ µ(L_µ) ⊳ µ(K_µ) using our notations. This simply means that any graph item that is matched by two concurrent rules cannot be removed. The commuting property of j₁ is obtained by swapping µ and ν.

However, our treatment of attributes makes it possible to recover in the right-hand side an attribute that has been deleted in the left-hand side (this is of course not possible for vertices or arrows). This possibility should therefore be accounted for in the notion of parallel independence, i.e., an attribute that is matched twice may be deleted provided it is recovered. This can be expressed as

ν(L_ν) ⊓ µ(L_µ) ⊳ µ(K_µ) ⊔ µ↑(R_µ)

for all µ, ν ∈ M such that µ ≠ ν. However, this is not a sufficient condition for sequential independence.
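Restricted to the attribute sets carried by a single shared vertex x, this tentative condition is a plain set inclusion, which can be sketched in Python (a simplified illustration that ignores graph structure; the function name tentative_condition is ours):

```python
# On one shared vertex x: what nu's left-hand side and mu's left-hand side
# both match must be preserved by mu (in K) or re-created by mu (in R).

def tentative_condition(L_mu, K_mu, R_mu, L_nu):
    """Attribute-level reading of nu(L_nu) ⊓ mu(L_mu) ⊳ mu(K_mu) ⊔ mu↑(R_mu)."""
    return (L_nu & L_mu) <= (K_mu | R_mu)

# Two rules on a vertex carrying attribute 0:
# ra deletes the attribute 0, rb leaves it untouched and re-asserts it in R.
ra = {"L": {0}, "K": set(), "R": set()}
rb = {"L": set(), "K": set(), "R": {0}}

# The condition holds in both directions (the left-hand sides never
# overlap on an attribute) ...
assert tentative_condition(ra["L"], ra["K"], ra["R"], rb["L"])
assert tentative_condition(rb["L"], rb["K"], rb["R"], ra["L"])
# ... yet applying rb and then ra deletes the 0 that rb re-created, so
# sequential independence fails: the condition is too weak.

# For contrast, two rules that both match a deleted attribute do fail it:
assert not tentative_condition({0}, set(), set(), {0})
```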
Example 5.1.
We consider the following graph and rules:

G = ( x|0 ),  ⌊A_G⌋ = {0},  r₁ = ( x|0, x, x ),  r₂ = ( x, x, x|0 ).

There is a unique matching µ of r₁ (resp. ν of r₂) in G, given by µ(x) = ν(x) = x and ˚µ(0) = ˚ν(0) = 0. M = {µ, ν} is not sequential independent. Indeed, let N = {ν}, then G = µ(L_µ) ⊳ G‖N = G (hence j is the identity morphism of G), but (G‖N)‖µ = G‖µ = ( x ) is not isomorphic to G‖M = G. Yet we see that

ν(L_ν) ⊓ µ(L_µ) = ( x ) ⊳ ( x ) = µ(K_µ) ⊔ µ↑(R_µ)
µ(L_µ) ⊓ ν(L_ν) = ( x ) ⊳ ( x|0 ) = ν(K_ν) ⊔ ν↑(R_ν),

which proves that this condition is true for all pairs of distinct elements of M; hence it is not sufficient to ensure sequential independence.

The problem in Example 5.1 is that the attribute 0 of x is considered as being matched only once (by L_µ), while it is actually also matched by R_ν. This leads to the following definition.

Definition 5.2 (parallel independence). For any graph G and set M ⊆ M(R, G), we say that M is parallel independent if

(ν(L_ν) ⊔ ν↑(R_ν)) ⊓ µ(L_µ) ⊳ µ(K_µ) ⊔ µ↑(R_µ)

for all µ, ν ∈ M such that µ ≠ ν.

This definition may seem strange, but it is easy to see that on unlabeled graphs it amounts to ν(L_ν) ⊓ µ(L_µ) ⊳ µ(K_µ) for all µ ≠ ν, i.e., to the standard algebraic notion of parallel independence (translated to the present framework). It turns out that Definition 5.2 provides the expected characterization of sequential independence.

Theorem 5.3.
For any graph G and set M ⊆ M(R, G), M is parallel independent iff M is sequential independent.

The (rather long) proof of Theorem 5.3 can be found in [5]. We therefore see that Definition 5.2 arises as a characterization of sequential independence that does not refer to the results of the transformations, and indeed that does not rely on the definition of G‖M (Definition 3.4), though of course it does rely on the definitions of unions of graphs, of rules and of the matchings µ↑ (Definitions 2.2, 3.1 and 3.3). Note also that Definition 5.2 depends explicitly on the right-hand sides of rules, in contrast with the general algebraic definition of parallel independence given above, or with the Essential Condition of parallel independence in [6].

We have not yet defined a relation of parallel rewriting as we did for sequential rewriting (Definition 3.6). The reason is that two matchings may conflict, as one retains (in R ⊓ K) what another removes.

Example 6.1.
We consider the following unlabeled rule r and graph G: the left-hand side L of r has two vertices x, x′ and two arrows f : x → x′ and f′ : x′ → x, while K and R consist of the single vertex x, i.e., r = ( x ⇄ x′, x, x ); the graph G has two vertices y, z and two arrows g : y → z and h : z → y.

There are two matchings µ₁, µ₂ of r in G, given by

µ₁ : x ↦ y, x′ ↦ z, f ↦ g, f′ ↦ h
µ₂ : x ↦ z, x′ ↦ y, f ↦ h, f′ ↦ g

According to rule r with matching µ₁, the node µ₁(x′) = z and the arrows µ₁(f) = g and µ₁(f′) = h have to be removed, and the node µ₁(x) = y should occur in the result of the transformation. But with matching µ₂, the node µ₂(x′) = y should be removed and the node µ₂(x) = z should be preserved. There is a conflict between µ₁ and µ₂ on the nodes of G (but not on its arrows).

Let M = {µ₁, µ₂}, then V_M = µ₁({x′}) ∪ µ₂({x′}) = {y, z} = ˙G, hence G \ [V_M, A_M, ℓ_M] is empty and

G‖M = µ₁(x) ⊔ µ₂(x) = ( y ) ⊔ ( z ) = ( y  z ).

The transformation offered by Definition 3.4 performs deletions before unions, which means that these conflicts are resolved by giving priority to retainers over removers. But if the deletion actions of a rule are not executed in a parallel transformation, how can we claim that this rule has been executed (or applied) in parallel with others? Thus, in order to define parallel rewriting with a clear semantics we need to rule out such conflicts.

A natural restriction is therefore to make sure that the items that should be removed, i.e., those contained in V_M, A_M or ℓ_M, have indeed been removed from the result.

Definition 6.2 (regularity). For any graph G and set M ⊆ M(R, G), we say that M is regular if G‖M is disjoint from V_M, A_M, ℓ_M.

As for sequential independence, this property of M can be characterized as a property of pairs of elements of M.

Lemma 6.3.
For any graph G and set M ⊆ M(R, G), M is regular iff ν↑(R_ν) ⊓ µ(L_µ) ⊳ µ(K_µ) for all µ, ν ∈ M.

Proof.
Let H = ⨆_{ν∈M} G↑ν; then G‖M = G \ [V_M, A_M, ℓ_M] ⊔ H is disjoint from V_M, A_M, ℓ_M iff H is. We have

˙H ∩ V_M = ( ∪_{ν∈M} ˙G↑ν ) ∩ ( ∪_{µ∈M} µ(˙L_µ \ ˙K_µ) ) = ∪_{µ,ν∈M} ν↑(˙R_ν) ∩ µ(˙L_µ) \ µ(˙K_µ)

hence ˙H ∩ V_M = ∅ iff ν↑(˙R_ν) ∩ µ(˙L_µ) \ µ(˙K_µ) = ∅ for all µ, ν ∈ M, but this is equivalent to ν↑(˙R_ν) ∩ µ(˙L_µ) ⊆ µ(˙K_µ). Similarly we see that ~H ∩ A_M = ∅ iff ν↑(~R_ν) ∩ µ(~L_µ) ⊆ µ(~K_µ) for all µ, ν ∈ M.

For every vertex or arrow x of G‖M we have

˚H(x) ∩ ℓ_M(x) = ∪_{ν∈M} ˚ν ∘ ˚R_ν ∘ ν⁻¹(x) ∩ ℓ_M(x)
= ∪_{µ,ν∈M} ˚ν ∘ ˚R_ν ∘ ν⁻¹(x) ∩ ˚µ ∘ (˚L_µ \ ˚K_µ) ∘ µ⁻¹(x)
= ∪_{µ,ν∈M} ˚ν ∘ ˚R_ν ∘ ν⁻¹(x) ∩ ˚µ ∘ ˚L_µ ∘ µ⁻¹(x) \ ˚µ ∘ ˚K_µ ∘ µ⁻¹(x)

by using the fact that µ is consistent. We therefore see that ˚H(x) ∩ ℓ_M(x) = ∅ holds iff

˚ν ∘ ˚R_ν ∘ ν⁻¹(x) ∩ ˚µ ∘ ˚L_µ ∘ µ⁻¹(x) ⊆ ˚µ ∘ ˚K_µ ∘ µ⁻¹(x)

holds for all µ, ν ∈ M. By definition M is regular iff ˙H ∩ V_M = ~H ∩ A_M = ∅ and ˚H ∩ ℓ_M is empty everywhere, hence M is regular iff ν↑(R_ν) ⊓ µ(L_µ) ⊳ µ(K_µ) for all µ, ν ∈ M. □

Corollary 6.4.
M is regular iff all its subsets are regular.
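The pairwise characterization of Lemma 6.3, restricted to the attribute sets carried by a single shared vertex, can be sketched in Python (a simplified illustration that ignores graph structure; the function names regular_pair and regular are ours):

```python
from itertools import product

def regular_pair(mu, nu):
    """Attribute-level reading of nu↑(R_nu) ⊓ mu(L_mu) ⊳ mu(K_mu):
    what nu's right-hand side asserts and mu's left-hand side matches
    must be preserved by mu."""
    return (nu["R"] & mu["L"]) <= mu["K"]

def regular(M):
    """Lemma 6.3: M is regular iff the pair condition holds for all mu, nu."""
    return all(regular_pair(mu, nu) for mu, nu in product(M, repeat=2))

# Example 6.5-style matchings on one vertex with attribute 0 (f(0) = 0):
m1 = {"L": {0}, "K": {0}, "R": {0}}      # keeps 0 and re-asserts it
m2 = {"L": {0}, "K": set(), "R": {0}}    # deletes 0 but re-creates f(0) = 0

assert not regular([m1, m2])   # m1 retains the attribute that m2 removes
assert regular([m1])           # m1 alone is regular
assert not regular([m2])       # m2 even conflicts with itself: it deletes
                               # the very attribute its right-hand side adds
```

In accordance with Corollary 6.4, every subset of a set accepted by this check is accepted as well, since removing elements only removes pairs to test.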
These nice properties, and the fact that regularity ensures the absence of conflicts, are however not sufficient in the light of parallel independence. Indeed, we now show that a parallel independent set may not be regular.
Example 6.5.
Let us consider rules r₁ = (L₁, K₁, R₁) and r₂ = (L₂, K₂, R₂) where the graphs L₁, K₁ and R₁ have only one vertex x₁, the graphs L₂, K₂ and R₂ have only one vertex x₂, and the attributes are given below (u, v are variables and f is a unary function symbol). Let A_G be the algebra with carrier set {0} where f is interpreted as the constant function 0, and let G be the graph that has a unique vertex x with attribute 0:

{u} = ˚L₁(x₁) = ˚K₁(x₁),  ˚R₁(x₁) = {u, f(u)},
{v} = ˚L₂(x₂),  ˚K₂(x₂) = ∅,  ˚R₂(x₂) = {f(v)},  ˚G(x) = {0}.

There is exactly one matching of each of r₁, r₂ in G: µ₁ and µ₂, defined by µ₁(x₁) = µ₂(x₂) = x and ˚µ₁(u) = ˚µ₂(v) =
0. Let M = {µ₁, µ₂}; we see by Lemma 6.3 that M is not regular since

µ₁↑(R₁) ⊓ µ₂(L₂) = µ₁( x₁|u,f(u) ) ⊓ µ₂( x₂|v ) = ( x|0 ) = G

is not a subgraph of µ₂(K₂) = ( x ) (or equivalently because G‖M = G is not disjoint from ℓ_M). However, we see that M is sequential independent since the matchings can be applied sequentially in any order, yielding in both cases the graph G. Equivalently, M is parallel independent since

(µ₂(L₂) ⊔ µ₂↑(R₂)) ⊓ µ₁(L₁) = (G ⊔ G) ⊓ G = G ⊳ G ⊔ G = µ₁(K₁) ⊔ µ₁↑(R₁)
(µ₁(L₁) ⊔ µ₁↑(R₁)) ⊓ µ₂(L₂) = (G ⊔ G) ⊓ G = G ⊳ ( x ) ⊔ G = µ₂(K₂) ⊔ µ₂↑(R₂).

Note that conversely a set may be regular and not parallel independent, as is the case of the set M in Example 3.5.

We obviously need a more comprehensive notion of parallel rewriting, one that applies at least on all parallel independent sets of matchings. We see in Example 6.5 that the two rules do clash on the attribute 0 of x, but the clash is settled by their right-hand sides. This leads to the following definition from [5].

Definition 6.6 (effective deletion property, parallel rewriting). For any graph G, a set M ⊆ M(R, G) is said to satisfy the effective deletion property if G‖M is disjoint from V_M, A_M, ℓ_M \ ℓ↑_M, where

ℓ↑_M def= ∪_{µ∈M} ˚µ ∘ (˚R_µ \ ˚K_µ) ∘ µ⁻¹.

For any finite set of rules R, we define the relation ⇒_R of parallel rewriting by stating that, for all graphs G and H, G ⇒_R H iff there exists a set M ⊆ M(R, G) that has the effective deletion property and such that H ≃ G‖M. We write G ⇛_R H if M = M(R, G).

The effective deletion property is obviously more general than regularity; the example below shows that it is strictly more general.

Example 6.7.
We consider again Example 5.1, where M = {µ, ν} is not sequential independent, hence by Theorem 5.3 M is not parallel independent. We have V_M = A_M = ∅ and ℓ_M(x) is a singleton, so that G‖M is not disjoint from [V_M, A_M, ℓ_M] and therefore M is not regular. But ℓ↑_M(x) = ˚µ ∘ (˚R_µ \ ˚K_µ) ∘ µ⁻¹(x) ∪ ˚ν ∘ (˚R_ν \ ˚K_ν) ∘ ν⁻¹(x) = ℓ_M(x), hence ℓ_M(x) \ ℓ↑_M(x) = ∅ and therefore M has the effective deletion property.

It has been shown in [5] that ⇛_R is deterministic up to isomorphism, that is, if G ⇛_R H, G′ ⇛_R H′ and G ≃ G′ then H ≃ H′. In particular, it is possible to represent any cellular automaton by a suitable rule r and a class of graphs that correspond to configurations of the automaton (every vertex corresponds to a cell), such that ⇛_r (restricted to such graphs) is the transition function of the automaton. Furthermore, it is proved in [5] (as a lemma to Theorem 5.3) that

Theorem 6.8.
For any graph G and set M ⊆ M(R, G), if M is parallel independent then M has the effective deletion property.

Hence effective deletion supports a definition of parallel rewriting that is general enough to handle parallel independence. Besides, Example 6.7 also shows that the effective deletion property is strictly more general than parallel independence. We also see that:
Corollary 6.9.
If M ⊆ M(R, G) is finite and parallel independent then G →*_R G‖M and G ⇒_R G‖M.

Proof. By Theorem 6.8 we have G ⇒_R G‖M. By Theorem 5.3, M is sequential independent, hence by Proposition 4.2 we have G →*_R G‖M.

Hence in this case parallel and sequential rewriting meet, and parallel rewriting can be said to yield a correct result w.r.t. sequential rewriting.

One drawback of the effective deletion property is that it cannot be characterized as a property of pairs of elements of M, as the following example shows.

Example 7.1.
We consider a graph G whose unique vertex x holds the two-element carrier set ⌊A_G⌋ as attributes, a rule r₃ whose graphs have the single vertex x₃, and also the rules r₁, r₂ of Example 5.1. For i = 1, 2, 3, let µᵢ be the unique matching of rᵢ in G such that ˚µᵢ is the identity function on ⌊A_G⌋. Let M = {µ₁, µ₂, µ₃} and N = {µ₁, µ₂}. We obviously have V_M = V_N = A_M = A_N = ∅. We see that ℓ_µ₁(x) is a singleton and ℓ_µ₂(x) = ℓ_µ₃(x) = ∅, so that ℓ_N(x) = ℓ_M(x) = ℓ_µ₁(x). On the other hand ℓ↑_µ₁(x) = ∅, while ℓ↑_µ₂(x) and ℓ↑_µ₃(x) are singletons, and only ℓ↑_µ₃(x) contains the attribute deleted by µ₁. Hence ℓ_M(x) \ ℓ↑_M(x) = ∅ and ℓ_N(x) \ ℓ↑_N(x) = ℓ_N(x) ≠ ∅, so that M, but not N, has the effective deletion property.

The reader may find it strange that the conflict between r₁ and r₂ could be settled by some other rule, here r₃. This means that we need the whole of M to decide whether all conflicts are settled. For this reason the effective deletion property may appear as too general.

Another possibility for defining parallel rewriting is to translate to the present framework the notion of parallel coherence that has been devised in order to define algebraic parallel graph transformations (see [4]). In that paper we used production rules of the form L ← K ← I → R that do not require a morphism from K to R. Direct derivations are commuting diagrams

[diagram: the rule L ← K ← I → R above the direct derivation G ← D → H]

where the squares are pushouts. Note that a standard Double-Pushout can be obtained with K = I.

Parallel coherence, as a property of two direct transformations of the same object G, is defined as the existence of two commuting morphisms j₁ and j₂ as shown below.

[diagram: the two direct derivations of G via µ and ν, with the commuting morphisms j₁ and j₂]

Here I is replaced by the graph R ⊓ K, hence the commuting property of these morphisms amounts to ν(R_ν ⊓ K_ν) ⊳ G \ [V_µ, A_µ, ℓ_µ], that can be expressed as µ(L_µ) ⊓ ν(R_ν ⊓ K_ν) ⊳ µ(K_µ). This simply means that any graph item that is matched by some R ⊓ K cannot be removed by any rule.

Definition 7.2 (parallel coherence).
For any graph G and set M ⊆ M(R, G), we say that M is parallel coherent if ν(R_ν ⊓ K_ν) ⊓ µ(L_µ) ⊳ µ(K_µ) for all µ, ν ∈ M.

We easily show that this notion is more general than regularity.
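On a single-vertex toy model (attribute sets only), the attribute part of Definition 7.2 can be checked pairwise. The following sketch uses our own encoding of rules as already-evaluated attribute sets, not the paper's notation; applied to the two rules of Example 6.5 (where u = v = 0 and f is the constant 0), the pair (µ₂, µ₁) violates the condition, since r₁ preserves the attribute 0 that r₂ deletes.

```python
def parallel_coherent(M):
    # Attribute part of parallel coherence on one vertex:
    # nu(R ∩ K) ∩ mu(L) ⊆ mu(K) must hold for all pairs mu, nu in M.
    return all((nu["R"] & nu["K"]) & mu["L"] <= mu["K"]
               for mu in M for nu in M)

# Evaluated rules of Example 6.5 at the attribute 0.
r1 = {"L": {0}, "K": {0}, "R": {0}}    # preserves 0
r2 = {"L": {0}, "K": set(), "R": {0}}  # deletes 0, then re-creates it

clash = not parallel_coherent([r1, r2])
```

Here `clash` is `True`: unlike the effective deletion property, this check only quantifies over pairs of matchings, which is what makes it expressible on two direct transformations, but it rejects this set even though its deleted attribute is recovered.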
Lemma 7.3.
For any graph G and set M ⊆ M(R, G), if M is regular then M is parallel coherent.

Proof. By Lemma 6.3 we have ν↑(R_ν) ⊓ µ(L_µ) ⊳ µ(K_µ) for all µ, ν ∈ M. Since R_ν ⊓ K_ν ⊳ R_ν, then ν(R_ν ⊓ K_ν) = ν↑(R_ν ⊓ K_ν) ⊳ ν↑(R_ν), hence ν(R_ν ⊓ K_ν) ⊓ µ(L_µ) ⊳ ν↑(R_ν) ⊓ µ(L_µ) ⊳ µ(K_µ), hence M is parallel coherent.

It is easy to see that the converse does not hold (use for instance Example 5.1). We now show that parallel coherence is a restriction of the (possibly too general) effective deletion property.

Theorem 7.4.
For any graph G and set M ⊆ M(R, G), if M is parallel coherent then M has the effective deletion property.

Proof. Let H = G‖M; then as in the proof of Lemma 6.3 we have

˙H ∩ V_M = ⋃_{µ,ν∈M} ν↑(˙R_ν) ∩ µ(˙L_µ) \ µ(˙K_µ).

But µ(˙L_µ) ⊆ ˙G, and by Definition 3.3 we have

˙G ∩ ν↑(˙R_ν) = ˙G ∩ ˙G↑_ν = ˙G ∩ (ν(˙R_ν ∩ ˙K_ν) ∪ ((˙R_ν \ ˙K_ν) × {ν})) = ν(˙R_ν ∩ ˙K_ν),

hence

˙H ∩ V_M = ⋃_{µ,ν∈M} ν(˙R_ν ∩ ˙K_ν) ∩ µ(˙L_µ) \ µ(˙K_µ) = ∅

since by parallel coherence ν(˙R_ν ∩ ˙K_ν) ∩ µ(˙L_µ) ⊆ µ(˙K_µ) for all µ, ν ∈ M. Similarly ~H ∩ A_M = ∅.

For all x ∈ ˙H ∪ ~H, if x ∉ ˙G ∪ ~G then ℓ_M(x) = ∅ and obviously ˚H(x) ∩ ℓ_M(x) \ ℓ↑_M(x) = ∅. Otherwise x ∈ ˙G ∪ ~G, hence ν↑⁻¹(x) = ν⁻¹(x), so that

˚H(x) = (˚G(x) \ ℓ_M(x)) ∪ ⋃_{ν∈M} ˚ν ∘ ˚R_ν ∘ ν⁻¹(x).

Using the identity A = (A \ B) ∪ (A ∩ B) for all sets A and B, we have

˚ν ∘ ˚R_ν ∘ ν⁻¹(x) = (˚ν ∘ ˚R_ν ∘ ν⁻¹(x) \ ˚ν ∘ (˚R_ν ∩ ˚K_ν) ∘ ν⁻¹(x)) ∪ (˚ν ∘ ˚R_ν ∘ ν⁻¹(x) ∩ ˚ν ∘ (˚R_ν ∩ ˚K_ν) ∘ ν⁻¹(x))

for all ν ∈ M. By parallel coherence we have ˚ν ∘ (˚R_ν ∩ ˚K_ν) ∘ ν⁻¹(x) ∩ ˚µ ∘ ˚L_µ ∘ µ⁻¹(x) ⊆ ˚µ ∘ ˚K_µ ∘ µ⁻¹(x) for all µ, ν ∈ M, and since µ is consistent we get

˚ν ∘ (˚R_ν ∩ ˚K_ν) ∘ ν⁻¹(x) ∩ ℓ_M(x) = ⋃_{µ∈M} ˚ν ∘ (˚R_ν ∩ ˚K_ν) ∘ ν⁻¹(x) ∩ ˚µ ∘ (˚L_µ \ ˚K_µ) ∘ µ⁻¹(x)
= ⋃_{µ∈M} ˚ν ∘ (˚R_ν ∩ ˚K_ν) ∘ ν⁻¹(x) ∩ ˚µ ∘ ˚L_µ ∘ µ⁻¹(x) \ ˚µ ∘ ˚K_µ ∘ µ⁻¹(x)
= ∅,

hence

˚H(x) ∩ ℓ_M(x) = ⋃_{ν∈M} ˚ν ∘ ˚R_ν ∘ ν⁻¹(x) ∩ ℓ_M(x)
= ⋃_{ν∈M} (˚ν ∘ ˚R_ν ∘ ν⁻¹(x) \ ˚ν ∘ (˚R_ν ∩ ˚K_ν) ∘ ν⁻¹(x)) ∩ ℓ_M(x).

Finally, by using the obvious fact that f(A) \ f(A ∩ B) ⊆ f(A \ B) for any function f, we get

˚H(x) ∩ ℓ_M(x) ⊆ ⋃_{ν∈M} ˚ν ∘ (˚R_ν \ ˚K_ν) ∘ ν⁻¹(x) ∩ ℓ_M(x) ⊆ ℓ↑_M(x),

hence ˚H(x) ∩ ℓ_M(x) \ ℓ↑_M(x) = ∅.
This proves that H is disjoint from [V_M, A_M, ℓ_M \ ℓ↑_M] and therefore that M has the effective deletion property.

Yet parallel coherence is not sufficient in the light of parallel independence, as we now show.

Proposition 7.5.
Parallel coherence does not generalize parallel independence.

Proof.
Example 6.5 exhibits a set M = {µ₁, µ₂} that is shown to be parallel independent. But we see that µ₁(R₁ ⊓ K₁) ⊓ µ₂(L₂) = µ₁(x₁|{u}) ⊓ µ₂(x₂|{v}) = x|{0} = G is not a subgraph of µ₂(K₂) = x, hence M is not parallel coherent.

Parallel coherence is therefore too restricted to support a definition of parallel rewriting in the present framework. The problem here, as above, is that deleted attributes can be recovered by the right-hand sides of rules, and this possibility is not accounted for in the algebraic definitions, since these do not distinguish between graph items and attributes.

To summarize, we have established that the following implications hold, and no other:

regularity ⇒ parallel coherence ⇒ effective deletion property ⇐ parallel independence.

We see this as an endorsement of parallel rewriting based on the effective deletion property (Definition 6.6), even if it is the only property that cannot be characterized simply on pairs of matchings. This suggests that the effective deletion property would be worth transposing to an algebraic framework. But there is no straightforward way of doing this, as can now be shown.

Corollary 7.6.
For any set of unlabeled rules R, any unlabeled graph G and any subset M ⊆ M(R, G), M is regular iff M is parallel coherent iff M has the effective deletion property.

Proof.
Assume that M has the effective deletion property; then G‖M is disjoint from [V_M, A_M, ℓ_M \ ℓ↑_M], hence so is ⨆_{ν∈M} ν↑(R_ν). For all µ ∈ M we have V_µ ⊆ V_M and A_µ ⊆ A_M, hence ⨆_{ν∈M} ν↑(R_ν) is disjoint from [V_µ, A_µ, ∅], and therefore so is ν↑(R_ν) for every ν ∈ M. Thus ν↑(˙R_ν) ∩ µ(˙L_µ) \ µ(˙K_µ) = ∅ and ν↑(~R_ν) ∩ µ(~L_µ) \ µ(~K_µ) = ∅, which is equivalent to ν↑(˙R_ν) ∩ µ(˙L_µ) ⊆ µ(˙K_µ) and ν↑(~R_ν) ∩ µ(~L_µ) ⊆ µ(~K_µ). Since these graphs are unlabeled, this entails that ν↑(R_ν) ⊓ µ(L_µ) ⊳ µ(K_µ) for all µ, ν ∈ M, hence that M is regular by Lemma 6.3. The equivalences follow by Lemma 7.3 and Theorem 7.4.

Hence an algebraic approach to parallel graph transformation that would apply to the category of (unlabeled) graphs could not distinguish these notions. In this sense parallel coherence is already the right algebraic translation of the effective deletion property (and of regularity), even if it is too weak to account for the special treatment of attributes in the present non-algebraic framework.

Many notions of attributed graphs exist in the literature. For instance, in [17, 10] graph items can hold at most one attribute. This means that concurrent rules could possibly conflict because of their right-hand sides, if two rules required to attribute distinct values to the same graph item. Our choice of attaching sets of attributes to vertices and arrows means that new attributes are freely included in those sets, and thus avoids conflicting right-hand sides. Indeed, we see from Definition 6.6 that a conflict must involve an element of V_M, A_M or ℓ_M. Hence the right-hand sides of rules never create conflicts, though they may settle the conflicts created in the left-hand sides and are therefore relevant to parallel independence. Other notions of attributed graphs that allow unbounded attributes are possible, for instance the E-graphs from [11].
But the fact that in E-graphs a single value can be referenced several times as attribute of a vertex or arrow means that the number of matchings of rules may uselessly inflate.

Another approach to parallelism is to accept overlapping, non-independent matchings and ask the user to decide what to do in particular situations [13]. The present approach shows that the user can be spared this work not just on parallel independent matchings, but on the larger class of sets that satisfy the effective deletion property (or parallel coherence in an algebraic framework).

It is also possible to restrict by design all overlaps to vertices, as is the case in Hyperedge Replacement Systems [9], and still be able to specify powerful parallel transformations [15], though in a non-deterministic way. Note that these are asynchronous models of parallelism, where determinism amounts to confluence. This property has been widely studied in term rewriting; it becomes more subtle when acyclic term graphs are considered [16], and more elusive when cycles are allowed [2, 1]. Our model of parallelism is a synchronous one where deterministic transformations can be designed without reference to confluence [3], as in cellular automata.

The use of parallel transformations to define sequential independence in an algebraic approach to graph rewriting (as in Definition 4.1) could be worth investigating.

References

[1] Zena M. Ariola & Stefan Blom (1997):
Cyclic Lambda Calculi. In Martín Abadi & Takayasu Ito, editors: Theoretical Aspects of Computer Software – TACS '97, LNCS 1281, Springer, pp. 77–106, doi:10.1007/BFb0014548.
[2] Zena M. Ariola & Jan Willem Klop (1996): Equational Term Graph Rewriting. Fundamenta Informaticae 26(3/4), pp. 207–240, doi:10.3233/FI-1996-263401.
[3] T. Boy de la Tour & R. Echahed (2020): Combining Parallel Graph Rewriting and Quotient Graphs. In: 13th International Workshop, WRLA 2020, LNCS 12328, Springer, pp. 1–18, doi:10.1007/978-3-030-63595-4_1.
[4] T. Boy de la Tour & R. Echahed (2020): Parallel Coherent Graph Transformations. In: Proceedings of WADT 2020, the 25th International Workshop on Algebraic Development Techniques, LNCS, Springer, to appear; see also CoRR abs/1904.08850.
[5] T. Boy de la Tour & R. Echahed (2020): Parallel Rewriting of Attributed Graphs. Theoretical Computer Science 848, pp. 106–132, doi:10.1016/j.tcs.2020.09.025.
[6] A. Corradini, D. Duval, M. Löwe, L. Ribeiro, R. Machado, A. Costa, G. Azzi, J. S. Bezerra & L. M. Rodrigues (2018): On the Essence of Parallel Independence for the Double-Pushout and Sesqui-Pushout Approaches. In R. Heckel & G. Taentzer, editors: Graph Transformation, Specifications, and Nets – In Memory of Hartmut Ehrig, LNCS 10800, Springer, pp. 1–18, doi:10.1007/978-3-319-75396-6_1.
[7] Andrea Corradini, Ugo Montanari, Francesca Rossi, Hartmut Ehrig, Reiko Heckel & Michael Löwe (1997): Algebraic Approaches to Graph Transformation – Part I: Basic Concepts and Double Pushout Approach. In Grzegorz Rozenberg, editor: Handbook of Graph Grammars and Computing by Graph Transformations, Volume 1: Foundations, World Scientific, pp. 163–246, doi:10.1142/9789812384720_0003.
[8] A. Costa, J. Bezerra, G. Azzi, L. Rodrigues, T. R. Becker, R. G. Herdt & R. Machado (2016): Verigraph: A System for Specification and Analysis of Graph Grammars. In L. Ribeiro & T. Lecomte, editors: Formal Methods: Foundations and Applications – SBMF 2016, LNCS 10090, Springer, pp. 78–94, doi:10.1007/978-3-319-49815-7_5.
[9] F. Drewes, H.-J. Kreowski & A. Habel (1997): Hyperedge Replacement Graph Grammars. In G. Rozenberg, editor: Handbook of Graph Grammars and Computing by Graph Transformations, Volume 1: Foundations, World Scientific, pp. 95–162, doi:10.1142/9789812384720_0002.
[10] Dominique Duval, Rachid Echahed, Frédéric Prost & Leila Ribeiro (2014): Transformation of Attributed Structures with Cloning. In Stefania Gnesi & Arend Rensink, editors: Fundamental Approaches to Software Engineering – FASE 2014, LNCS 8411, Springer, pp. 310–324, doi:10.1007/978-3-642-54804-8_22.
[11] Hartmut Ehrig, Karsten Ehrig, Ulrike Prange & Gabriele Taentzer (2006): Fundamentals of Algebraic Graph Transformation. Monographs in Theoretical Computer Science. An EATCS Series, Springer, doi:10.1007/3-540-31188-2.
[12] Hartmut Ehrig & Hans-Jörg Kreowski (1976): Parallelism of Manipulations in Multidimensional Information Structures. In: Mathematical Foundations of Computer Science, LNCS 45, Springer, pp. 284–293, doi:10.1007/3-540-07854-1_188.
[13] Ole Kniemeyer, Günter Barczik, Reinhard Hemmerling & Winfried Kurth (2007): Relational Growth Grammars – A Parallel Graph Transformation Approach with Applications in Biology and Architecture. In: Third International Symposium AGTIVE, Revised Selected and Invited Papers, pp. 152–167, doi:10.1007/978-3-540-89020-1_12.
[14] Leen Lambers, Hartmut Ehrig & Fernando Orejas (2008): Efficient Conflict Detection in Graph Transformation Systems by Essential Critical Pairs. Electron. Notes Theor. Comput. Sci. 211, pp. 17–26, doi:10.1016/j.entcs.2008.04.026.
[15] Ivan Lanese & Ugo Montanari (2005): Synchronization Algebras with Mobility for Graph Transformations. Electron. Notes Theor. Comput. Sci. 138(1), pp. 43–60, doi:10.1016/j.entcs.2005.05.004.
[16] D. Plump: Term Graph Rewriting. In: Handbook of Graph Grammars and Computing by Graph Transformation, Volume 2: Applications, Languages and Tools, doi:10.1142/9789812815149_0001.
[17] Detlef Plump & Sandra Steinert (2004): Towards Graph Programs for Graph Algorithms. In: Second International Conference, ICGT 2004, LNCS 3256, pp. 128–143, doi:10.1007/978-3-540-30203-2_11.
[18] Barry K. Rosen (1975):