A graph-dynamical interpretation of Kiselman's semigroups
ELENA COLLINA AND ALESSANDRO D'ANDREA

ABSTRACT. A Sequential Dynamical System (SDS) is a quadruple (Γ, S_i, f_i, w) consisting of a (directed) graph Γ = (V, E), each of whose vertices i ∈ V is endowed with a finite state set S_i and an update function f_i : ∏_{j : i→j} S_j → S_i (we call this structure an update system), and a word w in the free monoid over V, specifying the order in which update functions are to be performed. Each word induces an evolution of the system, and in this paper we are interested in the dynamics monoid, whose elements are all possible evolutions. When Γ is a directed acyclic graph, the dynamics monoid of every update system supported on Γ naturally arises as a quotient of the Hecke-Kiselman monoid associated with Γ. In the special case where Γ = Γ_n is the complete oriented acyclic graph on n vertices, we exhibit an update system whose dynamics monoid coincides with Kiselman's semigroup K_n, thus showing that the defining Hecke-Kiselman relations are optimal in this situation. We then speculate on how these results may extend to the general acyclic case.

CONTENTS
1. Introduction
2. Sequential dynamical systems
3. Combinatorial definitions
4. Kiselman's semigroup and canonical words over the complete graph
5. The join operation
6. An update system with universal dynamics
7. Proof of Proposition 6.2
8. Conclusions and further developments
Acknowledgements
References

1. INTRODUCTION
In this paper, we show how the recently defined notion of Hecke-Kiselman monoid finds a natural realization in the combinatorial-computational setting of Sequential Dynamical Systems.

Let Q be a mixed graph, i.e., a simple graph with at most one connection between each pair of distinct vertices; connections can be either oriented (arrows) or non-oriented (edges). In [GM11], Ganyushkin and Mazorchuk associated with Q a semigroup HK_Q generated by idempotents a_i, indexed by vertices of Q, subject to the following relations:
• a_i a_j = a_j a_i, if i and j are not connected;
• a_i a_j a_i = a_j a_i a_j, if i and j are connected by an (unoriented) edge;
• a_i a_j = a_i a_j a_i = a_j a_i a_j, if i and j are connected by an arrow from i to j.
The semigroup HK_Q is known as the Hecke-Kiselman monoid attached to Q.

For the two extremal types of mixed graphs (graphs, in which all connections are edges, and oriented graphs, in which all connections are arrows) Hecke-Kiselman monoids are well understood: when Q is an oriented graph with n vertices and no oriented cycles, then HK_Q is isomorphic to a quotient of Kiselman's semigroup K_n [Kis02, KM09], which is known to be finite [KM09]. On the other hand, when Q has only unoriented edges, HK_Q is finite if and only if Q is a (finite) disjoint union of finite simply laced Dynkin diagrams, and the corresponding semigroup is then variously known as the Springer-Richardson, 0-Hecke, or Coxeter monoid attached to Q. The problem of characterizing mixed graphs inducing a finite Hecke-Kiselman monoid in the general case seems to be difficult, and only very partial results are known [AD13].

Date: November 6, 2017.
The study of (certain quotients of) Hecke-Kiselman monoids and their representations has also attracted recent interest; see for instance [For12, Gre12, GM13].

The choice of a simple (i.e., without loops or multiple edges) graph Γ is also one of the essential ingredients in the definition of a Sequential Dynamical System (SDS). The notion of SDS was introduced by Barrett, Mortveit and Reidys [BMR00, BMR01, BR99] in order to construct a mathematically sound framework for investigating computer simulations; this structure has found wide applicability in many concrete situations, cf. [MR08] and references therein. SDS on a directed acyclic graph Γ are related to Hecke-Kiselman monoids in that the so-called Γ-local functions [MR08] satisfy the relations listed in the presentation of HK_Γ (see Proposition 2.4 below); in other words, the evaluation morphism mapping each word (or update schedule) in the alphabet V to the corresponding composition of Γ-local functions factors through HK_Γ. One is naturally led to wonder whether HK_Γ is the smallest quotient through which all of the above evaluation morphisms must factor, or whether additional universal relations among Γ-local functions may be found.

Henceforth, Γ_n = (V, E) will denote the oriented graph where V = {1, …, n} and (i, j) ∈ E if and only if i < j. In this paper, we show that HK_Γ is optimal in the special case Γ = Γ_n, by proving Theorem 6.3, which immediately implies the following statement.

Theorem. There exists an update system S⋆_n = (Γ_n, S_i, f_i) such that the associated evaluation morphism factors through no nontrivial quotient of HK_Γ_n.

We believe that the same claim holds for every finite directed acyclic graph, and mention in Section 8 some evidence in support of this conjecture.

The paper is structured as follows. In Section 2 we recall the definition of SDS, define the dynamics monoid of an update system supported on an oriented graph Γ, and show that it is a quotient of the Hecke-Kiselman monoid HK_Γ as soon as Γ has no oriented cycles. In Sections 3 and 4 we list the results on Kiselman's semigroup K_n that are contained in [KM09] and draw some useful consequences. Section 5 introduces the join operation, which is the key ingredient in the definition of the update system S⋆_n, which is given in Section 6. The rest of the paper is devoted to the proof of Theorem 6.3.

2. SEQUENTIAL DYNAMICAL SYSTEMS

An update system is a triple S = (Γ, (S_i)_{i∈V}, (f_i)_{i∈V}) consisting of
(1) a base graph Γ = (V, E), which is a finite directed graph, with V as vertex set and E ⊆ V × V as edge set; we will write i → j for (i, j) ∈ E. The vertex neighbourhood of a given vertex i ∈ V is the subset x[i] = {j : i → j}.
(2) a collection S_i, i ∈ V, of finite sets of states. We denote by S = ∏_{i∈V} S_i the family of all the possible system states, i.e., n-tuples s = (s_i)_{i∈V}, where s_i belongs to S_i for each vertex i. The state neighbourhood of i ∈ V is S[i] = ∏_{j∈x[i]} S_j, and the restriction of s = (s_j)_{j∈V} to x[i] is denoted by s[i] = (s_j)_{j∈x[i]} ∈ S[i].
(3) for every vertex i, a vertex (update) function f_i : S[i] → S_i, which computes the new state value on vertex i as a function of its state neighbourhood. In particular, if x[i] is empty, then f_i is a constant t ∈ S_i and we will write f_i ≡ t. Each vertex function f_i can be incorporated into a Γ-local function F_i : S → S defined as F_i(s) = t = (t_j)_{j∈V}, where

t_j = s_j, if i ≠ j;   t_j = f_i(s[i]), if i = j.

An SDS is an update system S endowed with
(4) an update schedule, i.e., a word w = i_1 i_2 … i_k in the free monoid F(V) over the alphabet V (from now on, we will often abuse the notation, denoting both the alphabet and the vertex set with the same letter V, and both letters and vertices with the same symbols).

The update schedule w induces a dynamical system map (SDS map), or an evolution of S,

F_w : S → S, defined as F_w = F_{i_1} F_{i_2} … F_{i_k}.

Remark 2.1. As the graph Γ sets up a dependence relation between nodes under the action of update functions, it makes sense to allow Γ to possess self-loops, and arrows connecting the same vertices but going in opposite directions. However, we exclude the possibility of multiple edges between any two given vertices. Notice, however, that all SDS of interest in this paper will be supported on directed acyclic graphs, thus excluding in particular the possibility of self-loops.

Denote by End(S) the set of all maps S → S, with the monoid structure given by composition. Then the Γ-local functions F_i, i ∈ V, generate a submonoid of End(S) which we denote by D(S). The monoid D(S) is the image of the natural homomorphism

F : F(V) → End(S), w ↦ F_w,

mapping each update schedule w to the corresponding evolution F_w; in particular, we denote by F_⋆ the identity map, induced by the empty word ⋆. Once an underlying update system has been chosen, our goal is to understand the monoid structure of D(S).

Example 2.2.
Let Γ = ({i}, ∅) be a Dynkin graph of type A_1. It has only one vertex i, so there is only one vertex function f_i, which is constant as there are no arrows starting in i. The system dynamics monoid D(S) = {F_⋆, F_i} contains exactly two elements, as soon as |S_i| > 1. If |S_i| = 1, then D(S) = {F_⋆}.

Example 2.3.
Let Γ be the graph i → j. Let us consider S_i = {0, 1, 2}, S_j = {0, 1}. Set up an update system on Γ by requiring that f_i : S[i] = S_j → S_i acts as f_i(s) = s + 1 and that f_j ≡ 1. The evolutions induced by the words ⋆ (the empty word), i, j, ij and ji on S = S_i × S_j all differ from each other, as they take different values on (0, 0):

F_⋆(0, 0) = (0, 0),
F_i(0, 0) = (1, 0),
F_j(0, 0) = (0, 1),
F_ij(0, 0) = F_i F_j(0, 0) = F_i(0, 1) = (2, 1),
F_ji(0, 0) = F_j F_i(0, 0) = F_j(1, 0) = (1, 1).

Both F_i and F_j are idempotent. Moreover, it is easy to see that F_iji = F_jij = F_ij, as F_i F_j F_i = F_j F_i F_j = F_i F_j, so that F_i ↦ a_1, F_j ↦ a_2 extends to an isomorphism from D(S) to Kiselman's semigroup K_2,

K_2 = ⟨a_1, a_2 | a_1² = a_1, a_2² = a_2, a_1 a_2 a_1 = a_2 a_1 a_2 = a_1 a_2⟩.

These examples are instances of the following statement.
Proposition 2.4.
Let S = (Γ, S_i, f_i) be an update system defined on a directed acyclic graph Γ. Then the Γ-local functions F_i satisfy:
(i) F_i² = F_i, for every i ∈ V;
(ii) F_i F_j F_i = F_j F_i F_j = F_i F_j, if i → j (and hence j ↛ i);
(iii) F_i F_j = F_j F_i, if i and j are not connected.

Proof. Every Γ-local function F_i only affects the vertex state s_i. As F_i(s) only depends on s[i], and i ∉ x[i], each F_i is idempotent. For the same reason, it is enough to check (ii) and (iii) on a graph with two vertices i ≠ j. If they are not connected, then both f_i and f_j are constant, hence F_i and F_j trivially commute. If there is an arrow i → j, then F_i(a_i, a_j) = (f_i(a_j), a_j), whereas F_j(a_i, a_j) = (a_i, t), since f_j ≡ t is a constant. Then it is easy to check that the compositions F_i F_j, F_i F_j F_i, F_j F_i F_j coincide, as they map every element to (f_i(t), t). □

These relations are reminiscent of those in the presentation of a Hecke-Kiselman monoid.
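The relations of Proposition 2.4 can be verified by brute force on the two-vertex system of Example 2.3. The following sketch is ours, not the paper's; it hard-codes the vertex functions f_i(s_j) = s_j + 1 and f_j ≡ 1, and composes Γ-local functions rightmost-first, matching F_ij = F_i F_j in Example 2.3:

```python
# Brute-force check of Proposition 2.4 on the update system of Example 2.3:
# vertices i -> j, states S_i = {0,1,2}, S_j = {0,1}, vertex functions
# f_i(s_j) = s_j + 1 and f_j = 1 (constant).  Illustration only.
from itertools import product

STATES = list(product([0, 1, 2], [0, 1]))     # all system states (s_i, s_j)

def F_i(s):
    # Γ-local function at i: update s_i from the neighbour state s_j
    return (s[1] + 1, s[1])

def F_j(s):
    # Γ-local function at j: no outgoing arrows, so f_j is constant
    return (s[0], 1)

def compose(*fs):
    # F_{i_1 ... i_k} = F_{i_1} ∘ ... ∘ F_{i_k}: the rightmost letter acts first
    def F(s):
        for f in reversed(fs):
            s = f(s)
        return s
    return F

# (i)  idempotency of the Γ-local functions
assert all(F_i(F_i(s)) == F_i(s) and F_j(F_j(s)) == F_j(s) for s in STATES)
# (ii) F_i F_j F_i = F_j F_i F_j = F_i F_j, since i -> j
assert all(compose(F_i, F_j, F_i)(s) == compose(F_j, F_i, F_j)(s)
           == compose(F_i, F_j)(s) for s in STATES)
# the five evolutions of Example 2.3 are pairwise distinct on (0, 0)
values = [(0, 0), F_i((0, 0)), F_j((0, 0)),
          compose(F_i, F_j)((0, 0)), compose(F_j, F_i)((0, 0))]
assert values == [(0, 0), (1, 0), (0, 1), (2, 1), (1, 1)]
```

All three checks succeed, so this small system already separates the five elements ⋆, i, j, ij, ji of its dynamics monoid.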
Definition 2.5.
Let Γ = (V, E) be a finite directed acyclic graph. The Hecke-Kiselman monoid associated with Γ is defined as follows:

HK_Γ = ⟨ a_i, i ∈ V | a_i² = a_i, for every i ∈ V;
  a_i a_j a_i = a_j a_i a_j = a_i a_j, for i → j;   (2.1)
  a_i a_j = a_j a_i, for i ↛ j and j ↛ i ⟩.   (2.2)

This structure was first introduced in [GM11] for a finite mixed graph, i.e., a simple graph (without loops or multiple edges) in which edges can be either oriented or unoriented: there, an unoriented edge (i, j) is used to impose the customary braid relation a_i a_j a_i = a_j a_i a_j.

If S = (Γ, S_i, f_i) is an update system on a finite directed acyclic graph Γ = (V, E), then Proposition 2.4 amounts to claiming that the evaluation homomorphism F : F(V) → End(S) factors through the Hecke-Kiselman monoid HK_Γ.

Our case of interest is when the graph Γ = Γ_n is the complete graph on n vertices, where the orientation is set so that i → j if i < j. In this case, the semigroup HK_Γ_n coincides with Kiselman's semigroup K_n, as defined in [KM09]. The monoid K_n, however, only reflects immediate pairwise interactions between vertex functions. One may, in principle, wonder whether K_n is indeed the smallest quotient of F(V) through which evaluation maps F factor, or whether additional identities may be imposed that reflect higher order interactions. In this paper we will exhibit an update system S⋆_n, defined on the graph Γ_n, whose dynamics monoid is isomorphic to K_n. In other words, we will show that K_n → D(S⋆_n) is indeed an isomorphism, once suitable vertex functions have been chosen.

3. COMBINATORIAL DEFINITIONS
Let F(A) be the free monoid over the alphabet A and denote by ⋆ the empty word. Recall that, for every subset B ⊆ A, the submonoid ⟨B⟩ ⊆ F(A) is identified with the free monoid F(B).

Definition 3.1.
Let w ∈ F(A). We define a
• subword of w to be a substring of consecutive letters of w;
• quasi-subword of w to be an ordered substring u of not necessarily consecutive letters of w.

We will denote the relation of being a quasi-subword by ≤, so that v ≤ w if and only if v is a quasi-subword of w. Obviously, every subword is a quasi-subword. Also notice that v ≤ w and w ≤ v if and only if v = w.

Example 3.2.
Set w = acaab ∈ F({a, b, c}); then
• aab is a subword (hence a quasi-subword) of w;
• aaa is a quasi-subword of w which is not a subword;
• abc is neither a subword nor a quasi-subword of w.
Trivial examples of subwords of w are the empty word ⋆ and w itself.

Definition 3.3.
Let w ∈ F(A). Then
• if w is non-empty, the head of w, denoted h(w) ∈ A, is the leftmost letter in w;
• if a ∈ A, then the a-truncation T_a w ∈ F(A) of w is the longest (non-empty) suffix of w with head a, or the empty word in case a does not occur in w.
Similarly, if I ⊂ A, we denote by T_I w the longest (non-empty) suffix of w whose head lies in I, or the empty word in case no letter from I occurs in w.

The following observations all have trivial proofs.

Remark 3.4. (i) If w ∈ F(A) does not contain any occurrence of a ∈ A, then T_a(w w′) = T_a(w′), for every w′ ∈ F(A).
(ii) For every w ∈ F(A) and a ∈ A, one may (uniquely) express w as w = w′ T_a w, where w′ ∈ ⟨A ∖ {a}⟩.
(iii) If w ∈ F(A) contains some occurrence of a ∈ A, then T_a(w w′) = (T_a w) w′, for every w′ ∈ F(A).

Remark 3.5. (i) If w ∈ F(A) and a, b ∈ A, then either T_a w is a suffix of T_b w, or vice versa.
(ii) If T_b w is a suffix of T_a w for all b ∈ A, then h(w) = a, hence T_a w = w.
(iii) T_I w = T_a w for some a ∈ I, and T_b w is a suffix of T_I w for every b ∈ I.

Definition 3.6.
Given I ⊆ A, the deletion morphism ∂_I : F(A) → F(A ∖ I) ⊆ F(A) is the unique semigroup homomorphism satisfying

a_i ↦ ⋆, for i ∈ I;   a_j ↦ a_j, for j ∉ I.

It associates with any w ∈ F(A) the longest quasi-subword of w containing no occurrence of letters from I.

Remark 3.7. For every I, J ⊆ A, ∂_I ∂_J = ∂_{I∪J}.
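The operators of Definitions 3.3 and 3.6 admit a direct implementation. The sketch below is ours, not the paper's, and represents words as Python strings:

```python
# Truncation T_a, T_I (Definition 3.3) and deletion ∂_I (Definition 3.6),
# for words represented as strings.  Illustration only.

def trunc(w: str, a: str) -> str:
    """T_a w: the longest suffix of w with head a (empty if a is absent)."""
    p = w.find(a)
    return "" if p < 0 else w[p:]

def trunc_set(w: str, I: set) -> str:
    """T_I w: the longest suffix of w whose head lies in I."""
    p = min((w.find(a) for a in I if a in w), default=-1)
    return "" if p < 0 else w[p:]

def delete(w: str, I: set) -> str:
    """∂_I w: the deletion morphism erasing every letter from I."""
    return "".join(c for c in w if c not in I)

w = "acaab"
assert trunc(w, "b") == "b" and trunc(w, "c") == "caab"
assert trunc_set(w, {"b", "c"}) == "caab"
# Remark 3.4 (ii): w = w' T_a w, with w' containing no occurrence of a
assert w == w[:w.find("b")] + trunc(w, "b") and "b" not in w[:w.find("b")]
# Remark 3.7: ∂_I ∂_J = ∂_{I ∪ J}
assert delete(delete(w, {"a"}), {"b"}) == delete(w, {"a", "b"}) == "c"
```

The assertions check Remarks 3.4 (ii) and 3.7 on the word acaab of Example 3.2.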
4. KISELMAN'S SEMIGROUP AND CANONICAL WORDS OVER THE COMPLETE GRAPH
In this section, we recall results from [GM11] and draw some further consequences. Choose an alphabet A = {a_i, 1 ≤ i ≤ n}; Kiselman's semigroup K_n has the presentation

K_n = ⟨ a_i ∈ A | a_i² = a_i for all i; a_i a_j a_i = a_j a_i a_j = a_i a_j for i < j ⟩.

In accordance with [KM09], let π : F(A) → K_n denote the canonical evaluation epimorphism.

Definition 4.1 ([GM11]). Let w ∈ F(A). A subword of w of the form a_i u a_i, where a_i ∈ A and u ∈ F(A), is special if u contains both some a_j, j > i, and some a_k, k < i.

Remark 4.2. Notice that a subword a_i u a_i cannot be special if i = 1 or i = n.

Let us recall the following fact.

Theorem 4.3 ([KM09]). Let w ∈ F(A). The set π⁻¹π(w) contains a unique element whose only subwords of the form a_i u a_i are special.

Remark 4.4. (i) The unique element described in Theorem 4.3 contains at most one occurrence of a_1 and at most one occurrence of a_n.
(ii) In order to prove Theorem 4.3, the authors of [KM09] define a binary relation → in F(A) as follows: w → v if and only if either
(→1) w = w_1 a_i a_i w_2 and v = w_1 a_i w_2, or
(→2) w = w_1 a_i u a_i w_2, v = w_1 a_i u w_2 and u ∈ ⟨a_{i+1}, …, a_n⟩, or
(→3) w = w_1 a_i u a_i w_2, v = w_1 u a_i w_2 and u ∈ ⟨a_1, …, a_{i−1}⟩.
It is possible to iterate such simplifications and write (finite) sequences

w → v_1 → v_2 → ⋯ → v_k,   (4.1)

so that each word contains exactly one letter less than the previous one, and they all belong to the same fiber with respect to π. Moreover, each v_i is a quasi-subword of w and of v_j, for all j < i. A sequence like (4.1) is called a simplifying sequence if v_k cannot be simplified any further, and each v_i → v_{i+1} is called a simplifying step. It should be stressed that a simplifying step of type →1 can be seen as both of type →2 and of type →3.
(iii) According to Theorem 4.3, every simplifying sequence for w ends at the same word.

We will refer to any word whose only subwords of the form a_i u a_i are special as a canonical word, so that the above theorem claims the existence of a unique canonical word v in each π⁻¹π(w), w ∈ F(A). Thus, the assignment w ↦ v is a well defined map Can : F(A) → F(A), associating with each word its unique canonical form. Notice that w is canonical if and only if w = Can w.

Remark 4.5. (i) If w is canonical, then all of its subwords are canonical.
(ii) A word u ∈ ⟨a_i, h ≤ i ≤ k⟩ is canonical if and only if, for every j < h or j > k, the word a_j u is canonical. Moreover, in such cases, Can(a_j u) = a_j Can u.

Remark 4.6. More consequences of Theorem 4.3 are that
(i) the canonical form Can w is the word of minimal length in π⁻¹π(w);
(ii) due to Remark 4.4 (iii), Can w is the last element in every simplifying sequence starting from any of the words in π⁻¹π(w);
(iii) Can w is a quasi-subword of all the words in any simplifying sequence beginning from w;
(iv) if w contains any given letter, so does Can w.

Lemma 4.7.
For every u, v ∈ F(A),

Can(uv) = Can((Can u) v) = Can(u Can v) = Can((Can u)(Can v)).

Proof. Can is constant on fibres of π. □

Definition 4.8. If I ⊆ A, then Can_I w = Can ∂_{A∖I} w, where w ∈ F(A). In particular, Can_A w = Can w.

Before proceeding further, we need to make an important observation. Say we have a sequence of steps leading from a word y to a word x, and that each step removes a single letter. The same procedure can be applied to every subword of y, and again at each step (at most) a single letter is removed, eventually yielding a subword of x. Every subword of x is obtained in this way from some (possibly non-unique) subword of y. Notice that, in the cases we will deal with, some of the steps could be simplifications of type →1, for which there is an ambiguity on which of the two identical letters is to be removed.

We will say that a subword of x of the form a_i u a_i originates from a subword w of y if w yields a_i u a_i under the sequence of simplifications and w is of the form a_i v a_i. Let us clarify things with an example, over the ordered alphabet a < b < c < d. In the following sequence of steps, we bracket the letter removed at each step:

b[d]bcdabcdc → bbc[d]abcdc → [b]bcabcdc → bcabcdc.

Notice that in the last step there is an ambiguity on which letter is being removed. Then the subwords bdbcdab, dbcdab, bcdab of bdbcdabcdc all yield bcab; however, bcab originates only from bdbcdab and bcdab.

Henceforth, we will shorten the notation and denote by ∂_i, T_i, … the maps ∂_{a_i}, T_{a_i}, … .

Lemma 4.9.
Assume y = Can y and let x = ∂_{1,…,k−1} y, where 2 ≤ k ≤ n. Then any subword a_i u a_i ≤ x is either special or satisfies u ∈ ⟨a_i, …, a_n⟩.

Proof. Let a_i u a_i be a subword of x. Then i ≥ k, as x = ∂_{1,…,k−1} y contains no a_i, i < k. The above subword a_i u a_i originates from a subword a_i v a_i of y. If a_i u a_i is not special, then either u ∈ ⟨a_j, j ≤ i⟩ or u ∈ ⟨a_j, j ≥ i⟩. However, the former case does not occur: otherwise a_i v a_i would fail to be special, as x is obtained from y by only removing letters a_j, 1 ≤ j < k, and k ≤ i. This is a contradiction, as y is canonical. □

Lemma 4.10.
Assume y = Can y and let x = ∂_{1,…,k−1} y, where 2 ≤ k ≤ n. Then any simplifying sequence

x = ∂_{1,…,k−1} y → ⋯ → Can x = Can_{k,…,n} y

is such that all simplifying steps are of type →2 (possibly also of type →1).

Proof. By Lemma 4.9, any subword a_i u a_i ≤ x is either special or satisfies u ∈ ⟨a_j, i ≤ j ≤ n⟩, hence the first simplifying step is of type →2 (and possibly of type →1). We want to show that, starting from x = ∂_{1,…,k−1} y and applying simplifications of type →1 or of type →2, one can never obtain a word admitting a subword a_i u a_i, u ≠ ⋆, on which it is possible to apply a step of type →3 (which is then not of type →1).
Assume therefore, by contradiction, that a_i u a_i, where ⋆ ≠ u ∈ ⟨a_j, k ≤ j < i⟩, occurs as a subword after having performed some simplifying steps of type →2 on x. The subword a_i u a_i originates from a subword a_i v a_i of x, which is necessarily special, since v cannot lie in ⟨a_j, i ≤ j ≤ n⟩, as some a_j, j < i, occurs in u ≠ ⋆, and u is obtained from v after removing some letters. This shows that letters a_j, j > i, must occur in v, whereas they do not occur in u. However, a simplifying step of type →2 that removes a letter from between the two occurrences of a_i either occurs completely between them, or begins on the left of both. In the former case, it does not change the speciality of the subword, whereas in the latter case a simplification of type →2 can only remove a letter a_j, j < i, due to the presence of the left a_i; we thus obtain a contradiction. □

Corollary 4.11. If u a_1 v is canonical, then Can(uv) admits u as a prefix.

Proof. By Lemma 4.10, all simplifying sequences from uv = ∂_1(u a_1 v) to Can(uv) only contain steps of type →2, and no such simplifying step alters u. □

Proposition 4.12.
Assume that u a_j v is canonical, and that u ∈ ⟨a_i, k ≤ i ≤ n⟩, where k > j. Then u a_j Can_{k,…,n} v is also canonical. In particular, if u a_1 v is canonical, then u a_1 Can_{k,…,n} v is also canonical.

Proof. We prove the latter claim, as the proof of the former, more general statement is completely analogous. We know that v is canonical. Then Lemma 4.10 shows that u a_1 ∂_{1,…,k−1} v can be simplified into u a_1 Can_{k,…,n} v by only using steps of type →2 on the right of a_1. If u a_1 Can_{k,…,n} v is not canonical, then we may find a subword a_i x a_i that is not special. As both u and Can_{k,…,n} v are canonical, a_1 must occur in x. Say that a_i x a_i originates from the subword a_i y a_i of u a_1 v. No simplification of type →2, when performed on the right of a_1, can change the set of letters that appear between the two occurrences of a_i. This yields a contradiction, as a_i y a_i is special, whereas a_i x a_i is not. □

Corollary 4.13.
For every choice of u, u′, v, v′ ∈ ⟨a_3, a_4, …, a_n⟩,
(i) if u a_1 v a_2 v′ is canonical, then u a_1 Can(v v′) is canonical;
(ii) if u a_2 v a_1 v′ is canonical, then u a_2 Can(v v′) is canonical;
(iii) if u a_2 u′ a_1 v a_2 v′ is canonical, then both u a_2 u′ a_1 Can(v v′) and u a_2 Can(u′ v v′) are canonical.

Proof. (i) follows directly from Proposition 4.12, whereas (iii) follows by applying Proposition 4.12 and then (ii). However, (ii) is equivalent to (i), as both u a_1 v a_2 v′ and u a_2 v a_1 v′ contain single occurrences of a_1 and a_2, and every simplifying sequence for the former can be turned into a simplifying sequence for the latter by switching a_1 with a_2. □

Lemma 4.14.
Let w ∈ F(A), 1 ≤ i ≤ n. Then T_i Can_{i,…,n} w = Can_{i,…,n} T_i w.

Proof. Once again, we may assume without loss of generality that i = 1. If there are no occurrences of a_1 in w, then both sides equal the empty word and we are done. Otherwise, using Remark 3.4 (ii), write w = u T_1 w and Can T_1 w = a_1 v. We are asked to show that T_1 Can(u T_1 w) = Can(T_1 w), which is equivalent, using Lemma 4.7, to

T_1 Can(u a_1 v) = a_1 v.

Notice that v is canonical and u ∈ ⟨a_2, …, a_n⟩. The simplifying steps that can occur on a word of type u a_1 v may only affect u: indeed, v is canonical, there is only one occurrence of a_1 in u a_1 v, and any subword of the form a_i x a_i that begins in u and ends in v contains this occurrence of a_1, so that only a step of type →3 may apply to it, removing a letter of u. An easy induction now shows that Can(u a_1 v) = u′ a_1 v, where u′ is a quasi-subword of u, hence T_1 Can(u a_1 v) = a_1 v. □

Lemma 4.15.
Let w ∈ F(A), 1 ≤ k ≤ n. Then
(i) Can_{k,…,n} w = Can_{k,…,n} Can w;
(ii) Can_{k,…,n} w is a quasi-subword of Can w.

Proof. Let k > 1, the case k = 1 being trivial. It is easy to check that Can ∘ ∂_{1,…,k−1} is constant on fibres of π (it is invariant under all simplifying steps), which takes care of (i). We may therefore assume in (ii) that w is a canonical word; however, Can_{k,…,n} w is then obviously a quasi-subword of w. □
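The canonical form of Theorem 4.3 can be computed by iterating simplifying steps until every subword a_i u a_i is special; by Theorem 4.3, the order in which the steps are applied is irrelevant. The sketch below is ours, not the paper's; it writes a_i as the digit i (so n ≤ 9 is assumed), and checks Remark 4.6 (iv) and Lemma 4.14 exhaustively on short words over {a_1, a_2, a_3}:

```python
# Canonical words in Kiselman's semigroup K_n: repeatedly apply simplifying
# steps of types 1-3 (Remark 4.4) until no subword a_i u a_i with
# non-special u remains.  Letters a_1, ..., a_n written as digits (n <= 9).
from itertools import product

def can(w: str) -> str:
    w = list(w)
    changed = True
    while changed:
        changed = False
        for p in range(len(w)):
            # q = next occurrence of the letter w[p], so that u = w[p+1:q]
            q = next((t for t in range(p + 1, len(w)) if w[t] == w[p]), None)
            if q is None:
                continue
            u = w[p + 1:q]
            if all(x > w[p] for x in u):    # type 2 step (type 1 if u = ⋆)
                del w[q]; changed = True; break
            if all(x < w[p] for x in u):    # type 3 step
                del w[p]; changed = True; break
    return "".join(w)

def can_high(w: str, k: int) -> str:
    """Can_{k,...,n} w = Can ∂_{1,...,k-1} w (Definition 4.8)."""
    return can("".join(c for c in w if int(c) >= k))

def trunc(w: str, a: str) -> str:
    """T_a w, as in Definition 3.3."""
    p = w.find(a)
    return "" if p < 0 else w[p:]

assert can("11") == "1" and can("121") == can("212") == can("1212") == "12"
for L in range(5):                          # all words of length <= 4, n = 3
    for w in map("".join, product("123", repeat=L)):
        assert set(can(w)) == set(w)        # Remark 4.6 (iv)
        for i in (1, 2, 3):                 # Lemma 4.14
            assert trunc(can_high(w, i), str(i)) == can_high(trunc(w, str(i)), i)
```

The first assertion instantiates the defining relations of K_2: a_1 a_2 a_1, a_2 a_1 a_2 and a_1 a_2 a_1 a_2 all reduce to the canonical word a_1 a_2.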
5. THE JOIN OPERATION

Given an update system over the complete oriented graph Γ_n, Proposition 2.4 proves that its dynamics monoid is an epimorphic image of Kiselman's semigroup K_n. In the next section, we will exhibit an update system S⋆_n over Γ_n whose dynamics monoid is isomorphic to K_n; from a dynamical point of view, S⋆_n serves as a universal update system. We introduce the following operation in order to construct, later, a family of update functions.

Definition 5.1.
Take u, v ∈ F(A). The join of u and v, denoted by [u, v], is the shortest word admitting u as a quasi-subword and v as a suffix. Namely, [u, v] = u⁺ v, where u = u⁺ u⁻ and u⁻ is the longest suffix of u which is a quasi-subword of v. Notice that the decomposition u = u⁺ u⁻ strictly depends on the choice of v.

Example 5.2. For instance, consider u = cbadc and v = abdc. Then [u, v] = cbabdc.

Remark 5.3. (i) The empty word ⋆ is a subword of every w ∈ F(A), so that [⋆, u] = [u, ⋆] = u.
(ii) If u is a quasi-subword of v, then [u, v] = v.
(iii) If u is not a quasi-subword of v, then [wu, v] = w u⁺ v = w [u, v], for every w ∈ F(A).
(iv) If [u, v] = u⁺ v, then [u, wv] = [u⁺, w] v. Indeed, write u = u⁺ u⁻: u⁻ is the longest suffix of u which is a quasi-subword of v, but there could be a suffix of u⁺ which is a quasi-subword of w.
(v) If u, v ∈ F(A) are canonical, [u, v] may fail to be so. For instance, [a_1 a_2, a_2 a_1] = a_1 a_2 a_1, which is not canonical.

The following lemma will be used later in the proof of Proposition 6.2.

Lemma 5.4.
Let u, x, y ∈ F(A). If ux ≤ uy, then x ≤ y.

Proof. As ux is a quasi-subword of uy, by Remark 5.3 (ii), [ux, uy] = uy. Assume that x is not a quasi-subword of y and write it as x = x⁺ x⁻, where x⁻ is its longest suffix which is a quasi-subword of y. Using Remark 5.3 (iii), [ux, y] = u x⁺ y. Finally, using Remark 5.3 (iv), [ux, uy] = [u x⁺, u] y. However, [u x⁺, u] = u can hold only if u x⁺ is not longer than u, i.e., only if x⁺ = ⋆, hence x is a quasi-subword of y. □
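Definition 5.1 translates directly into code: scan the suffixes of u from longest to shortest and stop at the first one that is a quasi-subword of v. The sketch is ours, not the paper's:

```python
# The join [u, v] of Definition 5.1: the shortest word admitting u as a
# quasi-subword and v as a suffix.  Illustration only.

def is_qsw(u: str, w: str) -> bool:
    """u ≤ w: u is a quasi-subword (i.e. a subsequence) of w."""
    it = iter(w)
    return all(c in it for c in u)

def join(u: str, v: str) -> str:
    # u = u_plus u_minus, with u_minus the longest suffix of u that is a
    # quasi-subword of v; then [u, v] = u_plus v.
    for k in range(len(u) + 1):
        if is_qsw(u[k:], v):
            return u[:k] + v

assert join("cbadc", "abdc") == "cbabdc"          # Example 5.2
assert join("12", "21") == "121"                  # Remark 5.3 (v)
assert join("", "ab") == join("ab", "") == "ab"   # Remark 5.3 (i)
```

In Example 5.2 the decomposition is u⁺ = cb, u⁻ = adc, since adc is the longest suffix of cbadc occurring as a subsequence of abdc.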
6. AN UPDATE SYSTEM WITH UNIVERSAL DYNAMICS

If U, V ⊂ F(A), denote by [U, V] ⊂ F(A) the subset of all elements [u, v], u ∈ U, v ∈ V.

Definition 6.1. The update system S⋆_n is the triple (Γ_n, S_i, f_i), where
(1) Γ_n is, as before, the complete oriented graph on n vertices, where i → j if and only if i < j.
(2) On vertex i, the state set S_i ⊂ F(A) is inductively defined as
S_n = {⋆, a_n},
S_{n−1} = {⋆, a_{n−1}, a_{n−1} a_n},
S_i = {⋆} ∪ a_i [S_n, [ …, [S_{i+2}, S_{i+1}] … ]], if 1 ≤ i ≤ n − 2.
(3) On vertex i, the vertex function is
f_i : S[i] → S_i, (s_{i+1}, …, s_n) ↦ a_i [s_n, [ …, [s_{i+2}, s_{i+1}] … ]], if i ≤ n − 2,
whereas f_{n−1}(s_n) = a_{n−1} s_n and f_n ≡ a_n is constant.

We will abuse the notation and denote by ⋆ also the system state (⋆, …, ⋆) ∈ S. The rest of the paper will be devoted to the proof of the following

Proposition 6.2.
Consider the evaluation morphism F : F(A) → D(S⋆_n), mapping each word w ∈ F(A) to the corresponding evolution F_w ∈ D(S⋆_n). If p = (p_1, …, p_n) = F_w ⋆, then:
(i) one has p_i = (F_w ⋆)_i = Can_{i,…,n} T_i w;
(ii) for every choice of k, 1 ≤ k ≤ n, one may find j, 1 ≤ j ≤ k, so that
[p_k, [ …, [p_2, p_1] … ]] = T_j Can w = T_{1,…,k} Can w.

Our central result then follows immediately.
Theorem 6.3.
With the same hypotheses and notation as in Proposition 6.2,
(i) Can w = [p_n, [ …, [p_2, p_1] … ]];
(ii) if u, v ∈ F(A), then F_u = F_v if and only if Can u = Can v;
(iii) K_n is isomorphic to D(S⋆_n).

Proof. (i) Use Proposition 6.2 (ii) with k = n. Then

[p_n, [ …, [p_2, p_1] … ]] = T_{1,…,n} Can w = Can w.

(ii) If F_u = F_v, then they certainly compute the same state on ⋆. However, by (i), one may recover both Can u and Can v from this state, hence Can u = Can v. The other implication follows trivially, as Can u = Can v forces u and v to induce the same element in K_n, hence the same dynamics on S⋆_n.
(iii) This is just a restatement of (ii): distinct elements in K_n have distinct S⋆_n-actions, since they act in a different way on the system state ⋆. □

Remark 6.4. As an immediate consequence of Proposition 6.2, if j = h(Can w), then T_j Can w = Can w, hence

[p_j, [ …, [p_2, p_1] … ]] = [p_n, [ …, [p_2, p_1] … ]] = Can w.

We will prove Proposition 6.2 by induction on the number n of vertices. The following technical fact is needed in the proof of the inductive step, and we assume in its proof that Proposition 6.2 and Theorem 6.3 hold on S⋆_k, for k < n.

Proposition 6.5.
Let u, v ∈ ⟨a_{j+1}, a_{j+2}, …, a_n⟩ be chosen so that u a_j Can v is a canonical word. Then

[Can(uv), v] = uv.

In particular, [Can(uv), a_j v] = u a_j v.

Proof. We may assume, without loss of generality, that j = 1. The statement is easily checked case by case when n = 1 or 2. Notice that u can have at most one occurrence of a_2, whereas v may have many. Let us therefore distinguish four cases:

(1) There is no occurrence of a_2 in either u or v. In this case, we can use the inductive assumption, after removing the vertex indexed by 2.

(2) There is a single occurrence of a_2 in u. Write u = u′ a_2 u″. As u′ a_2 u″ a_1 Can v is canonical, then, using Corollary 4.13 (ii) and Lemma 4.7,

Can(u′ a_2 u″ v) = Can(u′ a_2 u″ Can v) = u′ a_2 Can(u″ Can v) = u′ a_2 Can(u″ v).

Thus, we need to compute [u′ a_2 Can(u″ v), v]. However, applying Case 1 gives

[Can(u″ v), v] = u″ v,

hence [u′ a_2 Can(u″ v), v] = u′ a_2 u″ v follows from Remark 5.3 (iii).

(3) a_2 occurs in v, but not in u. Write Can v = v′ a_2 v″. As u a_1 Can v = u a_1 v′ a_2 v″ is canonical, then, using Corollary 4.13 (i), also u a_1 Can(v′ v″) is canonical. However, Lemma 4.15 informs us that

Can(v′ v″) = Can ∂_2 Can v = Can_{3,…,n} Can v = Can_{3,…,n} v = Can ∂_2 v,

so that u a_1 Can ∂_2 v is canonical. We may then use Case 1 to argue that

[Can(u ∂_2 v), ∂_2 v] = u ∂_2 v.   (6.1)

Lemma 4.15 implies that Can(u ∂_2 v) = Can ∂_2(uv) is a quasi-subword of Can(uv). By Corollary 4.11, both Can(uv) and Can(u ∂_2 v) admit u as a prefix, so, using Lemma 5.4, we can write

Can(uv) = u y, Can(u ∂_2 v) = u z, z ≤ y.

We need to show that [Can(uv), v] = uv. Now, applying Corollary 4.13 (ii),

Can(uv) = Can(u v′) a_2 v″ = u x a_2 v″,

where x is some quasi-subword of v′. As

y = x a_2 v″ ≤ v′ a_2 v″ = Can v ≤ v,

then [Can(uv), v] = u⁺ v, where u = u⁺ u⁻ and u⁻ y ≤ v. However, this would force u⁻ z ≤ u⁻ y ≤ v, hence also u⁻ z ≤ ∂_2 v; and this is only possible if u⁻ = ⋆, as Equation (6.1) shows.

(4) a_2 occurs both in u and in v. This is similar to the previous case. Write u = u′ a_2 u″, Can v = v′ a_2 v″. Applying Corollary 4.13 (iii) to the canonical word u a_1 Can v = u′ a_2 u″ a_1 v′ a_2 v″, we obtain that both

u′ a_2 u″ a_1 Can ∂_2 v = u′ a_2 u″ a_1 Can(v′ v″)

and

u′ a_2 Can(u″ ∂_2 v) = u′ a_2 Can(u″ Can ∂_2 v) = u′ a_2 Can(u″ v′ v″)

are canonical. Similarly,

Can(uv) = Can(u′ a_2 u″ v′ a_2 v″) = Can(u′ a_2 u″ v′ v″) = Can(u′ a_2 Can(u″ v′ v″)) = u′ a_2 Can(u″ ∂_2 v).

Now, as u″ a_1 Can ∂_2 v, being a subword of u′ a_2 u″ a_1 Can ∂_2 v, is canonical, we have

[Can(u″ ∂_2 v), ∂_2 v] = u″ ∂_2 v.   (6.2)

As a consequence,

[Can(u′ a_2 u″ v′ a_2 v″), ∂_2 v] = [u′ a_2 Can(u″ ∂_2 v), ∂_2 v] = u′ a_2 [Can(u″ ∂_2 v), ∂_2 v] = u′ a_2 u″ ∂_2 v,

and one may complete the proof as in Case 3. □
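Before turning to the formal proof, Proposition 6.2 (i) and Theorem 6.3 (i) can be checked numerically for small n. The sketch below is ours, not the paper's: it writes a_i as the digit i (n ≤ 9 assumed), re-implements Can and the join in a self-contained way, and applies the vertex functions of Definition 6.1 along a schedule w rightmost-letter first, as in Example 2.3:

```python
# Numerical check of Proposition 6.2 (i) and Theorem 6.3 (i) for the update
# system S*_n of Definition 6.1.  Letters a_i written as digits (n <= 9).
from itertools import product

def is_qsw(u, w):
    it = iter(w)
    return all(c in it for c in u)

def join(u, v):
    """[u, v]: the shortest word with u as quasi-subword and v as suffix."""
    for k in range(len(u) + 1):
        if is_qsw(u[k:], v):
            return u[:k] + v

def can(w):
    """Canonical form in K_n, by iterating simplifying steps (Theorem 4.3)."""
    w = list(w)
    changed = True
    while changed:
        changed = False
        for p in range(len(w)):
            q = next((t for t in range(p + 1, len(w)) if w[t] == w[p]), None)
            if q is None:
                continue
            u = w[p + 1:q]
            if all(x > w[p] for x in u):
                del w[q]; changed = True; break
            if all(x < w[p] for x in u):
                del w[p]; changed = True; break
    return "".join(w)

def evolve(w, n):
    """p = F_w(*): states of S*_n after schedule w (rightmost letter first)."""
    p = [""] * (n + 1)              # p[i] = state on vertex i; p[0] unused
    for ch in reversed(w):
        i = int(ch)
        acc = ""                    # builds [s_n, [..., [s_{i+2}, s_{i+1}]...]]
        for j in range(i + 1, n + 1):
            acc = join(p[j], acc)
        p[i] = ch + acc             # f_i = a_i [s_n, [..., [s_{i+2}, s_{i+1}]...]]
    return p[1:]

n = 3
for L in range(5):                  # all update schedules of length <= 4
    for w in map("".join, product("123", repeat=L)):
        p = evolve(w, n)
        for i in (1, 2, 3):         # Proposition 6.2 (i): p_i = Can_{i..n} T_i w
            t = w[w.find(str(i)):] if str(i) in w else ""
            assert p[i - 1] == can("".join(c for c in t if int(c) >= i))
        acc = ""
        for s in p:                 # [p_n, [..., [p_2, p_1]...]]
            acc = join(s, acc)
        assert acc == can(w)        # Theorem 6.3 (i)
```

For instance, the schedule w = a_3 a_1 a_2 a_3 produces p = (a_1 a_2 a_3, a_2 a_3, a_3), and the iterated join of these states recovers Can w = a_1 a_2 a_3.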
7. PROOF OF PROPOSITION 6.2

The case n = 1 being trivial, we assume that Proposition 6.2 and Theorem 6.3 hold for a complete graph on fewer than n vertices.

Proof of Proposition 6.2.
Let us start by proving Part (i). Let w ∈ F(A). By inductive hypothesis, for a complete graph on n − 1 vertices 2, ..., n, the SDS map F_{∂_1 w} constructs on the i-th vertex the state

p_i = Can_{i,...,n} T_i ∂_1 w = Can_{i,...,n} T_i w,

hence the statement holds for all vertices i > 1.

As far as vertex 1 is concerned, we need to show that p_1 = Can T_1 w. If w does not contain the letter a_1, then p_1 = ⋆ = T_1 w and there is nothing to prove; otherwise, (F_w)_1 = (F_{T_1 w})_1. We know that the state on vertex 1 depends only on the system state (p′_2, ..., p′_n), which is computed by ∂_1 T_1 w on the subgraph indexed by {2, ..., n}, i.e.,

p_1 = a_1 [p′_n, [..., [p′_3, p′_2] ...]].

However, we may apply the induction hypothesis and use Theorem 6.3(i), which yields

p_1 = a_1 [p′_n, [..., [p′_3, p′_2] ...]] = a_1 Can ∂_1 T_1 w = Can T_1 w.
As for Part (ii) of Proposition 6.2, we proceed by induction on k. The basis of induction descends directly from Part (i) and Lemma 4.14, as p_1 = Can T_1 w = T_1 Can w.

Assume now k > 1. By inductive hypothesis, there exists j < k such that

[p_{k−1}, [..., [p_2, p_1] ...]] = T_{1,...,k−1} Can w.

Recall that, by Remark 3.5, either T_k Can w is a suffix of T_{1,...,k−1} Can w or vice versa. In the former case, we know by Part (i) and Lemma 4.14 that

(7.1) p_k = Can_{k,...,n} T_k w = T_k Can_{k,...,n} w.

By Lemma 4.15, Can_{k,...,n} w is a quasi-subword of Can w, hence p_k = T_k Can_{k,...,n} w is a quasi-subword of T_k Can w, which is a suffix of T_{1,...,k−1} Can w. Thus,

[p_k, [p_{k−1}, [..., [p_2, p_1] ...]]] = [p_k, T_{1,...,k−1} Can w] = T_{1,...,k−1} Can w = T_{1,...,k} Can w.

If, instead, T_{1,...,k−1} Can w is a suffix of T_k Can w, pick j so that T_j Can w = T_{1,...,k−1} Can w and argue as follows. Choose u such that

T_k Can w = u T_j Can w.

As u ∈ ⟨a_i, k ≤ i ≤ n⟩, then by Part (i)

p_k = Can_{k,...,n}(u T_j Can w) = Can(u ∂_{1,...,k−1} T_j Can w).

By Proposition 4.12, as T_k Can w = u T_j Can w is canonical, we may argue that the word u a_j Can_{k,...,n} T_j Can w is also canonical. Apply now Proposition 6.5 to u a_j ∂_{1,...,k−1} T_j Can w, to obtain

[p_k, a_j ∂_{1,...,k−1} T_j Can w] = [Can(u ∂_{1,...,k−1} T_j Can w), a_j ∂_{1,...,k−1} T_j Can w] = u a_j ∂_{1,...,k−1} T_j Can w,

hence a fortiori [p_k, T_j Can w] = u T_j Can w = T_k Can w, which equals T_{1,...,k} Can w. □
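The objects manipulated in this proof — evaluation maps F_w and the monoid they generate — are easy to experiment with on a toy example. The sketch below is hypothetical code: the two-vertex Boolean system and its update rule are illustrative choices, not the paper's update system S⋆_n. It evaluates F_w by applying local update functions in word order, then brute-forces the resulting dynamics monoid by collecting the distinct maps F_w.

```python
from itertools import product

# Toy update system on a directed acyclic graph with two vertices
# (a hypothetical example): vertex 0 reads vertex 1's state and copies it,
# vertex 1 ignores its neighbours and resets to False. Both updates are
# idempotent, as required of an update system.
def step(i, s):
    return s[1] if i == 0 else False

def evolve(word, state):
    """Evaluate F_w: apply the local updates in the order given by `word`."""
    s = list(state)
    for i in word:
        s[i] = step(i, s)
    return tuple(s)

# Word order matters: a_0 a_1 and a_1 a_0 induce different evolutions.
assert evolve((0, 1), (False, True)) != evolve((1, 0), (False, True))

def dynamics_monoid():
    """Collect the distinct maps F_w over all words up to a length cutoff
    (a heuristic bound for this tiny system, not a proof of saturation)."""
    states = list(product([False, True], repeat=2))
    graph = lambda w: tuple(evolve(w, st) for st in states)  # hashable map
    maps = {graph(())}
    for length in range(1, 7):
        for w in product(range(2), repeat=length):
            maps.add(graph(w))
    return maps

print(len(dynamics_monoid()))  # → 5
```

For this particular toy system the search stabilizes at 5 distinct evolutions, which happens to agree with |K_2| = 5 — consistent with the result proved here that the defining Hecke-Kiselman relations are optimal on the complete oriented acyclic graph.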
8. CONCLUSIONS AND FURTHER DEVELOPMENTS

If Γ is a finite directed acyclic graph, then any update system S supported on Γ induces a dynamics monoid D(S) which naturally arises as a quotient of HK_Γ. We refer to the smallest quotient of HK_Γ through which all evaluation maps F : F(V) → D(S) factor as the universal dynamics monoid D(Γ). In this paper, we have proved that D(Γ_n) ≃ HK_{Γ_n} = K_n.

Conjecture 1. D(Γ) ≃ HK_Γ for every finite directed acyclic graph.

This has been computationally checked for all instances with ≤ vertices and on most graphs on vertices. A conceptual approach to the problem of determining the universal dynamics of a given graph Γ is by constructing an initial object, if one exists, in the category of (pointed) update systems supported on Γ; here "pointed" means that a preferred system state has been chosen.

Conjecture 2.
The pair (S⋆_n, ⋆) is an initial object in the category of pointed update systems supported on Γ_n.

The dynamics of an initial update system is clearly universal, and its elements are told apart by their action on the marked system state ⋆: this has been the philosophy underneath the proof of Theorem 6.3.

ACKNOWLEDGEMENTS
We would like to thank Francesco Vaccarino for his interest in the problem, Volodymyr Mazorchuk and Riccardo Aragona for giving a preliminary assessment of this work, Henning Mortveit for encouraging comments, and Giorgio Ascoli for indicating possible applications. We are grateful to the anonymous referees for their suggestions, which were decisive in improving and substantially simplifying the exposition.

Results in this paper are part of EC's doctoral dissertation, which was partially written during a leave of absence from Banca d'Italia. This work was completed while ADA was visiting the University of Pisa on a sabbatical leave; he was supported by Ateneo funds from "La Sapienza" University in Rome.

REFERENCES

[AD13] Riccardo Aragona and Alessandro D'Andrea, Hecke-Kiselman monoids of small cardinality, Semigroup Forum (2013), no. 1, 32–40.
[BMR00] C. L. Barrett, H. S. Mortveit, and C. M. Reidys, Elements of a theory of simulation. II. Sequential dynamical systems, Appl. Math. Comput. (2000), no. 2-3, 121–136.
[BMR01] C. L. Barrett, H. S. Mortveit, and C. M. Reidys, Elements of a theory of simulation. III. Equivalence of SDS, Appl. Math. Comput. (2001), no. 3, 325–340.
[BR99] C. L. Barrett and C. M. Reidys, Elements of a theory of computer simulation. I. Sequential CA over random graphs, Appl. Math. Comput. (1999), no. 2-3, 241–259.
[For12] Love Forsberg, Effective representations of Hecke-Kiselman monoids of type A, arXiv:1205.0676.
[GM11] Olexandr Ganyushkin and Volodymyr Mazorchuk, On Kiselman quotients of 0-Hecke monoids, Int. Electron. J. Algebra (2011), 174–191.
[GM13] Anna-Louise Grensing and Volodymyr Mazorchuk, Monoid algebras of projection functors, Semigroup Forum (2013).
[Gre51] J. A. Green, On the structure of semigroups, Ann. of Math. (2) (1951), 163–172.
[Gre12] Anna-Louise Grensing, Monoid algebras of projection functors, J. Algebra (2012), 16–41.
[Kis02] Christer O. Kiselman, A semigroup of operators in convexity theory, Trans. Amer. Math. Soc. (2002), no. 5, 2035–2053.
[KM09] Ganna Kudryavtseva and Volodymyr Mazorchuk, On Kiselman's semigroup, Yokohama Math. J. (2009), no. 1, 21–46.
[MR08] Henning S. Mortveit and Christian M. Reidys, An introduction to sequential dynamical systems, Universitext, Springer, New York, 2008.
E-mail address: [email protected], [email protected]

Dipartimento di Scienze di Base e Applicate per l'Ingegneria, Università di Roma "La Sapienza", Via Antonio Scarpa, Rome, Italy

Servizio Gestione Rischi Finanziari, Banca d'Italia, Via Nazionale 91 – 00184 Rome, Italy

E-mail address: [email protected]

Dipartimento di Matematica, Università di Roma "La Sapienza", P.le Aldo Moro, 5 – 00185 Rome, Italy