Nondeterministic Automata and JSL-dfas
Robert S.R. Myers

July 14, 2020
Here is a summary of our results.

• Section 2 provides background on finite join-semilattices and describes an equivalent category Dep. The latter has the finite relations as objects/morphisms; its self-duality takes the converse of objects/morphisms. This section serves as a succinct version of our paper 'Representing Semilattices as Relations'.

• Section 3 introduces the concept of a dependency automaton, i.e. two nfas with a relation between their states satisfying compatibility conditions. They are essentially deterministic automata interpreted in Dep, or equivalently deterministic finite automata interpreted in join-semilattices. The state-minimal JSL-dfa accepting L amounts to the left quotients of L. As a dependency automaton it can be represented as the state-minimal dfas for L and L^r related by the dependency relation DR_L(u⁻¹L, v⁻¹L^r) :⟺ uv^r ∈ L. We also go into some detail concerning various canonical JSL-dfas and their corresponding dependency automata. For example, Polák's syntactic semiring is the transition semiring of the state-minimal JSL-dfa. Also, the power semiring of the syntactic monoid dualises the closure of L under left/right quotients and boolean operations.

• Section 4 contains many results concerning the Kameda-Weiner algorithm. They lack a unifying thread, although they're all concerned with the same topic. The reader might skip to the final subsection. There it is proved that an nfa N is 'subatomic' iff the transition monoid of rsc(rev(N)) is syntactic.

The Kameda-Weiner algorithm is not an easy read [KW70]. It searches for an edge-covering of a bipartite graph by complete bipartite graphs, where each covering induces a nondeterministic automaton. The best known lower-bound techniques for nondeterministic automata involve such edge-coverings [GH06]. Thus we begin with a structural theory at this underlying level. Our approach is based on the work of Moshier and Jipsen [Jip12]. Our category Dep is a variant of their category Ctxt, and we follow their notation for its composition.
Notation 2.1.1 (Relations and graphs).

1. Given a subset X ⊆ Z, its relative complement is written X̄ ⊆ Z. Given z ∈ Z we may write z̄ := {z}̄. The collection of all subsets of Z is denoted PZ.

2. A relation is a subset of a specified cartesian product R ⊆ X × Y. We denote its domain by R_s := X and its codomain by R_t := Y. The relational composition R;S ⊆ R_s × S_t is defined whenever R_t = S_s, as follows:

  R;S(x, z) :⟺ ∃y ∈ R_t. R(x, y) ∧ S(y, z).

Each set Z has the identity relation Δ_Z ⊆ Z × Z defined by Δ_Z(z, z′) :⟺ z = z′. The image of X ⊆ R_s under R is denoted R[X] := {y ∈ R_t : ∃x ∈ X. R(x, y)}; we may write R[{z}] as R[z]. R|_{X,Y} := R ∩ (X × Y) is the domain-codomain restriction of the relation R. The converse relation is defined R˘(x, y) :⟺ R(y, x); in particular (R˘)_s = R_t and (R˘)_t = R_s.

3. An undirected graph (or just graph) (V, E) is a finite set V and an irreflexive and symmetric relation E ⊆ V × V. A bipartition for a graph (V, E) is a pair (X, Y) where X ∩ Y = ∅, X ∪ Y = V and E = E|_{X,Y} ∪ E|_{Y,X}. A graph is said to be bipartite if it has a bipartition. ∎

Note 2.1.2 (Bipartitioned graphs as binary relations). A bipartite undirected graph (V, E) with bipartition (X, Y) amounts to a relation E|_{X,Y} ⊆ X × Y: this completely captures its structure. Every relation between finite sets arises from a bipartitioned graph, modulo bijective relabelling of its domain (or codomain). From this perspective, complete bipartite graphs (bicliques) are cartesian products. Covering the edges of a bipartite graph by bicliques amounts to factorising R = S;T; this relationship is well-known [GPJL91]. The number of bicliques is the cardinality of the set S_t = T_s factorised through. The minimum possible cardinality is the bipartite dimension of the respective bipartite graph, i.e. the minimum number of bicliques needed to cover the edges. ∎

Definition 2.1.3 (Biclique edge-coverings).

1. A biclique of a relation R is a cartesian product X × Y ⊆ R.

2. A biclique edge-covering of a relation R is a factorisation R = S;T. Its underlying bicliques

  C_{S,T} := { S˘[x] × T[x] : x ∈ S_t = T_s }

satisfy the equality ⋃ C_{S,T} = R.

3. The bipartite dimension dim(R) is the minimal cardinality |C_{S,T}| over all factorisations R = S;T. ∎

Notation 2.1.4 (Lower/upper bipartition). When a binary relation R is viewed as a bipartitioned graph, we may refer to its domain as the lower bipartition and its codomain as the upper bipartition. ∎

Example 2.1.5 (Biclique edge-coverings).
1. Each relation R has two canonical biclique edge-coverings, namely R = Δ_{R_s};R and R = R;Δ_{R_t}. Viewed as a bipartite graph, the stars centred at each vertex of the lower bipartition cover the edges; alternatively we can take each star centred at a vertex in the upper bipartition. Consequently dim(R) ≤ min(|R_s|, |R_t|).

2. Each undirected graph (V, E) provides an irreflexive symmetric relation E ⊆ V × V. From our viewpoint, this relation defines a bipartitioned graph. It is known as the bipartite double cover of (V, E), i.e. take two copies of V and connect the first copy of u to the second copy of v iff E(u, v). Starting with a complete graph on vertices V yields the relation Δ̄_V ⊆ V × V. Interestingly, dim(Δ̄_V) ≈ ⌈log₂(|V|)⌉ by applying Sperner's theorem [BFRK08].

3. Each finite poset P = (P, ≤_P) provides an order relation ≤_P ⊆ P × P. Viewed as a bipartitioned graph, paths amount to alternating relationships p₁ ≤_P p₂ ≥_P p₃ ≤_P p₄ ⋯. The edge-coverings from (1) above are optimal, i.e. dim(≤_P) = |P|. The lower/upper bipartition's stars correspond to principal up/downsets.

4. Consider the 2n-cycle C_{2n} = ({0, ..., 2n−1}, E_n) where E_n(i, j) :⟺ |j − i| ≡ 1 (mod 2n). It has precisely two bipartitions if n ≥ 1. Assuming 0 is in the lower bipartition, the respective relation E_n|_{X,Y} relates evens to odds and has bipartite dimension |X| = |Y| = n.

5. Consider the path P_{2n} = ({0, ..., 2n}, E_n) where E_n(i, j) :⟺ |j − i| = 1. It has precisely two bipartitions if n ≥ 1. Assuming 0 is in the lower bipartition, the respective relation E_n|_{X,Y} relates evens to odds. Moreover the respective relations for P_{2n} and P_{2n−1} both have bipartite dimension n. ∎

Definition 2.1.6 (The category Dep). The objects of the category Dep are the relations G ⊆ G_s × G_t between finite sets. A morphism R : G → H is a relation R ⊆ G_s × H_t fitting into a commuting square in Rel_f (the category whose objects are the finite sets and whose morphisms are the binary relations, composed via relational composition):

  R_l ; H = R = G ; (R_u)˘

for some R_l ⊆ G_s × H_s and R_u ⊆ H_t × G_t. (The converse symbol (R_u)˘ is intentional: it provides symmetry later on.) The identity morphisms are id_G := G, and the composition R ⊙ S of G —R→ H —S→ I is obtained by pasting the two squares. It is any of the five equivalent relational composites starting from the bottom left and ending at the top right:

  R ⊙ S := R_l ; S_l ; I = R_l ; S = R_l ; H ; (S_u)˘ = R ; (S_u)˘ = G ; (R_u)˘ ; (S_u)˘. ∎
Dep -morphism
R ∶ G → H is a relation factorising through G on the left and H on the right. In view ofNote 2.1.2, it amounts to two biclique edge-coverings of R . Lemma 2.1.7.
Dep is a well-defined category.Proof.
Concerning identity morphisms, ∆ G s ; G = G = G ; ∆˘ G t ; graph-theoretically we are using the star-coverings fromExample 2.1.5.1. Concerning composition, R S is well-defined: (i) the commuting rectangle provides witnessingrelations R l ; S l and S u ; R u , (ii) R S is independent of the witnesses for R and S by considering the 5 relationalcompositions. We have R id G = R ; ∆˘ G t = R and id G R = ∆ G s ; R = R . Composition is associative because relationalcomposition is.
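The commuting-square condition is easy to experiment with computationally. Below is a minimal sketch (our own illustration; the function names are invented, not from the paper) representing finite relations as Python sets of pairs:

```python
def compose(R, S):
    """Relational composition R ; S from Notation 2.1.1."""
    return {(x, z) for (x, y) in R for (y2, z) in S if y == y2}

def converse(R):
    """The converse relation R-breve."""
    return {(y, x) for (x, y) in R}

def identity(Z):
    """The identity relation Delta_Z."""
    return {(z, z) for z in Z}

def is_dep_morphism(R, G, H, R_l, R_u):
    """Check the square of Definition 2.1.6: R_l ; H = R = G ; (R_u)-breve."""
    return compose(R_l, H) == R == compose(G, converse(R_u))
```

For instance, with the single-edge objects G = {(0,'x')} and H = {(1,'y')}, the relation R = {(0,'y')} is a Dep-morphism witnessed by R_l = {(0,1)} and R_u = {('y','x')}.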
Example 2.1.8 (Dep-morphisms).

1. Dep-morphisms are closed under converse and union.
Given R : G → H then R˘ : H˘ → G˘, by taking the converse of the commutative square, which actually swaps the witnessing relations. We have ∅ : G → H via empty witnessing relations. Given R, S : G → H then R ∪ S : G → H, by (i) unioning the respective witnessing relations, (ii) the bilinearity of relational composition w.r.t. union.

2. Bipartite graph isomorphisms β : G₁ → G₂ induce Dep-isomorphisms.
Suppose we have a bipartite graph isomorphism β : G₁ → G₂ where each G_i = (V_i, E_i), so E₁(x, y) ⟺ E₂(β(x), β(y)). Given any bipartition (X, Y) of G₁ we obtain a bipartition (β[X], β[Y]) of G₂. Setting G₁ := E₁|_{X,Y} and G₂ := E₂|_{β[X],β[Y]} provides the Dep-morphism R : G₁ → G₂ with witnesses R_l := β|_{X,β[X]} and R_u := β˘|_{β[Y],Y}, i.e. R = β|_{X,β[X]} ; G₂ = G₁ ; (β|_{Y,β[Y]}) [diagram not reproduced]. The bijective inverse β⁻¹ = β˘ provides witnessing relations in the opposite direction, i.e. a Dep-morphism S : G₂ → G₁. These morphisms are mutually inverse: G₁ is Dep-isomorphic to G₂.

3. The canonical quotient poset of a preorder defines a Dep-isomorphism.
Let G ⊆ X × X be a transitive and reflexive relation. There is a canonical way to construct a poset P = (X/E, ≤_P) via the equivalence relation E(x₁, x₂) :⟺ G(x₁, x₂) ∧ G(x₂, x₁), where ⟦x₁⟧_E ≤_P ⟦x₂⟧_E :⟺ G(x₁, x₂). Consider the Rel_f-diagram relating G to the non-inclusion relation ⊈ between the sets {G[x] : x ∈ X} and the complements of the sets {G˘[x] : x ∈ X} [diagram not reproduced]. Here G[x] is the 'upwards closure' of x, i.e. the union of the upwards closure ↑_P⟦x⟧_E, whereas G˘[x] is the 'downwards closure' in a similar manner. The left square commutes for completely general reasons, defining the Dep-morphism:

  R(x₁, G˘[x₂]) :⟺ ∃x ∈ X. [G(x₁, x) ∧ G(x, x₂)] ⟺ G(x₁, x₂).

The right square involves bijections, via (i) identifying elements of P with principal up/downsets, (ii) the disjointness of equivalence classes. It also commutes:

  ⋃↑_P⟦x₁⟧_E ⊈ (⋃↓_P⟦x₂⟧_E)̄ ⟺ ⋃↑_P⟦x₁⟧_E ∩ ⋃↓_P⟦x₂⟧_E ≠ ∅ ⟺ ∃x ∈ X. ⟦x₁⟧_E ≤_P ⟦x⟧_E ≤_P ⟦x₂⟧_E ⟺ ⟦x₁⟧_E ≤_P ⟦x₂⟧_E.

In fact, R : G → ⊈ is an instance of the natural isomorphism red_G from Theorem 2.2.14 further below, and the right square defines a Dep-isomorphism by Example 2 above. Thus G ≅ ≤_P, although whenever |X| > |X/E| this isomorphism cannot arise from a bipartite graph isomorphism.

4. Monotonicity can be characterised by Dep-morphisms.
Given finite posets P and Q, a function f : P → Q is monotonic iff the Rel_f-square ≤_P ; f ; ≤_Q = f ; ≤_Q commutes, as the reader may verify. Actually, f is monotonic iff f;≤_Q : ≤_P → ≤_Q is a Dep-morphism. Indeed, given that f;≤_Q : ≤_P → ≤_Q is a Dep-morphism, we'll prove that f is monotonic in Example 2.2.4 further below.

5. Biclique edge-coverings amount to Dep-monos.
Generally speaking, Dep-morphisms represent two edge-coverings of a bipartitioned graph. A single edge-covering amounts to a Dep-mono of a special kind: a morphism G : G → H with the additional assumption G_t = H_t, witnessed by some G_l ⊆ G_s × H_s and the identity Δ_{G_t}. It will follow later that any mono R : G → I induces such a G : G → H where |H_s| ≤ |I_s| and |H_t| ≤ |I_t|, i.e. see Theorem ??.

6. Biclique edge-coverings amount to Dep-epis.
Analogous to the previous example, a single edge-covering can be represented as a Dep-epi G : H → G where G_s = H_s. This will follow from self-duality, i.e. epis are precisely the converses of monos. ∎

Some of the examples above are order-theoretic in nature. Indeed, the main result of this section is:
Dep is categorically equivalent to the finite join-semilattices equipped with join-preserving morphisms.

This result will characterise Dep-objects modulo isomorphism: they are the union-free or reduced relations. Algebraically they correspond to the finite lattices.
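Before the algebraic development, the combinatorial notions of Definition 2.1.3 can be made concrete. The sketch below (our own illustration, with invented names) computes the underlying bicliques C_{S,T} of a factorisation R = S;T and checks that their union recovers R:

```python
def compose(R, S):
    """Relational composition R ; S."""
    return {(x, z) for (x, y) in R for (y2, z) in S if y == y2}

def bicliques(S, T, mid):
    """The underlying bicliques C_{S,T} = { S-breve[x] x T[x] : x in mid }
    of a factorisation R = S ; T through the middle set `mid`."""
    return [({a for (a, x2) in S if x2 == x},
             {b for (x2, b) in T if x2 == x}) for x in mid]

def covered_edges(C):
    """The union of a family of bicliques, as a set of edges."""
    return {(a, b) for (X, Y) in C for a in X for b in Y}
```

For example, factoring R = {(0,'u'), (0,'v'), (1,'v')} through a two-element middle set yields a biclique edge-covering of size two, consistent with dim(R) ≤ min(|R_s|, |R_t|) = 2 from Example 2.1.5.1.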
Definition 2.2.1 (Image, preimage, closure, interior). For any binary relation R ⊆ R_s × R_t define:

  R↑ : PR_s → PR_t,  R↑(X) := R[X]
  R↓ : PR_t → PR_s,  R↓(Y) := {x ∈ R_s : R[x] ⊆ Y}
  cl_R := R↓ ∘ R↑ : (PR_s, ⊆) → (PR_s, ⊆)
  in_R := R↑ ∘ R↓ : (PR_t, ⊆) → (PR_t, ⊆)

The fixed points of the closure operator cl_R are C(R) := {R↓(Y) : Y ⊆ R_t} and are called the R-closed sets. The fixed points of the interior operator in_R are O(R) := {R[X] : X ⊆ R_s} and are called the R-open sets. ∎

Definition 2.2.2 (Component relations). For each Dep-morphism R : G → H define:

  R− := {(g_s, h_s) ∈ G_s × H_s : h_s ∈ H↓(R[g_s])}
  R+ := {(h_t, g_t) ∈ H_t × G_t : g_t ∈ G˘↓(R˘[h_t])}

called the lower/upper components respectively. ∎

Importantly, R's component relations are witnesses and contain all other witnesses.

Lemma 2.2.3 (Morphism characterisation and maximum witnesses). Let R ⊆ G_s × H_t be any relation between finite sets.

1. R↑(X) ⊆ Y ⟺ X ⊆ R↓(Y) for all subsets X ⊆ G_s, Y ⊆ H_t.

2. The following labelled equalities hold:

  (↑↓↑) R↑ ∘ R↓ ∘ R↑ = R↑    (↓↑↓) R↓ ∘ R↑ ∘ R↓ = R↓
  (¬↑¬) ¬_{H_t} ∘ R↑ ∘ ¬_{G_s} = R˘↓    (¬↓¬) ¬_{G_s} ∘ R↓ ∘ ¬_{H_t} = R˘↑

3. cl_R (resp. in_R) is a well-defined closure (resp. interior) operator; in fact in_R = ¬_{R_t} ∘ cl_{R˘} ∘ ¬_{R_t}.

4. R defines a Dep-morphism G → H iff R↑ ∘ cl_G = R↑ = in_H ∘ R↑, or equivalently R↑ ∘ cl_G = in_H ∘ R↑.

5. Each R : G → H has the maximum witnesses (R−, R+), i.e.
  – R−;H = R = G;(R+)˘,
  – whenever R_l;H = R = G;(R_r)˘ then R_l ⊆ R− and R_r ⊆ R+.

6. For every G-open Y ∈ O(G) and every g_t ∈ G_t: Y ⊆ in_G(ḡ_t) ⟺ g_t ∉ Y.

Proof.
See the background paper, i.e.
– Lemma 4.1.7, 'Relating (−)↑ and (−)↓', and also Lemma 4.2.7;
– Lemma 4.1.10, 'Morphism characterisation and maximum witnesses'. ∎

Example 2.2.4.

1. Characterising monotonicity.
Recalling Example 2.1.8.4, suppose f : P → Q is a function and f;≤_Q : ≤_P → ≤_Q is a Dep-morphism. Since cl_{≤_P} constructs the upwards closure in P, by Lemma 2.2.3.4 for any p ∈ P:

  (f;≤_Q)↑(↑_P p) = (f;≤_Q)[p], or equivalently ↑_Q f[↑_P p] = ↑_Q f(p).

Thus whenever p ≤_P p′ we know ↑_Q f(p′) ⊆ ↑_Q f(p), i.e. f(p) ≤_Q f(p′).

2. One-sided maximal bicliques.
When searching for small edge-coverings by bicliques one can restrict to maximal ones, i.e. X × Y ⊆ G where (X, Y) is pairwise-maximal w.r.t. inclusion [Orl77]. Then Lemma 2.2.3.4 says: Dep-morphisms R : G → H are two one-sided maximal edge-coverings, i.e. ((R−)˘[h_s] × H[h_s])_{h_s ∈ H_s} is left-maximal and (G˘[g_t] × (R+)˘[g_t])_{g_t ∈ G_t} is right-maximal. (Interior operators are also known as co-closure operators: monotone, idempotent and co-extensive, i.e. in(Y) ⊆ Y.) In particular, one can pass to one-sided maximal bicliques without changing the morphism's domain/codomain. ∎

Definition 2.2.5 (JSL_f). A join-semilattice is a set with a binary operation ∨ and a nullary operation ⊥, satisfying:

  ⊥ ∨ x = x = x ∨ ⊥,  x ∨ (y ∨ z) = (x ∨ y) ∨ z,  x ∨ y = y ∨ x,  x ∨ x = x.

We write them S = (S, ∨_S, ⊥_S) where S is the underlying set, ∨_S : S × S → S is a function and ⊥_S ∈ S. A join-preserving morphism f : S → T is a function f : S → T such that f(s₁ ∨_S s₂) = f(s₁) ∨_T f(s₂) and f(⊥_S) = ⊥_T. Finally, JSL_f is the category of finite join-semilattices and join-preserving morphisms. ∎

Example 2.2.6 (Clarifying join-semilattices).
1. Join-semilattices are precisely the commutative and idempotent monoids. Consequently each S has a Cayley representation as the endofunctions (− ∨_S s : S → S)_{s ∈ S}, closed under functional composition.

2. More importantly, the join-semilattices are precisely the partially-ordered sets with all finite suprema. The binary operation ∨_S is the binary join, ⊥_S is the bottom element. Inductively, ⋁_S X exists for all finite X ⊆ S.

3. The finite join-semilattices are precisely the finite bounded lattices: the finite partially-ordered sets with all finite suprema and infima. Indeed, every finite join-semilattice is complete, i.e. has all joins, hence has all meets too. That is, ⋀_S X exists for any finite subset X ⊆ S.

4. By (3), each finite S := (S, ∨_S, ⊥_S) can be flipped, yielding the order-dual join-semilattice S^op := (S, ∧_S, ⊤_S).

5. The join-semilattice isomorphisms are precisely the bijective join-semilattice morphisms. They are also precisely the order-isomorphisms between the underlying posets, i.e. bijections preserving and reflecting the ordering. For finite join-semilattices they are precisely the bounded lattice isomorphisms by (3).

6. Let 2 = ({0, 1}, ∨, 0) be the two-element join-semilattice with ordering 0 ≤ 1. Modulo isomorphism there is only one join-semilattice with two elements. ∎

Each binary relation G induces two isomorphic join-semilattices: the in_G-fixpoints (O(G), ∪, ∅) and the cl_G-fixpoints (C(G), ∨_{cl_G}, cl_G(∅)). The latter's join constructs the closure of the union, whereas its meet is simply intersection. This situation is well-known within the area of Formal Concept Analysis.

Theorem 2.2.7 (Bounded lattice isomorphism of a bipartitioned graph). For any G ⊆ G_s × G_t we have the isomorphism:

  α_G : (C(G), ∨_{cl_G}, cl_G(∅)) → (O(G), ∪, ∅),  α_G(X) := G[X],  α_G⁻¹(Y) := G↓(Y).

Proof. See the background paper, i.e. Lemma 4.2.5, 'The bounded lattices of G-open/closed sets and their irreducibles'. ∎

Note 2.2.8 (More about join-semilattices).

1. Join and meet-irreducibles.
Fix a join-semilattice (S, ∨_S, ⊥_S). An element s ∈ S is join-irreducible if whenever s = ⋁_S X for finite X ⊆ S we have s ∈ X. They are denoted J(S) ⊆ S. Likewise s ∈ S is meet-irreducible if whenever s is a finite meet ⋀_S X then s ∈ X; they are denoted M(S) ⊆ S.

2. Adjoint morphisms.
Each JSL_f-morphism f : S → T has an adjoint f∗ : T^op → S^op defined f∗(t) := ⋁_S {s ∈ S : f(s) ≤_T t}. It is uniquely determined by the adjoint relationship f(s) ≤_T t ⟺ s ≤_S f∗(t), and preserves all finite meets in T. We've already seen examples, i.e. R↓ = (R↑ : (PR_s, ∪, ∅) → (PR_t, ∪, ∅))∗.

3. Self-duality of JSL_f.
Adjoint morphisms actually define an equivalence functor (−)∗ : JSL_f^op → JSL_f where S∗ := S^op is the order-dual join-semilattice and f∗ is the adjoint morphism. It is witnessed by the natural isomorphism λ : Id_{JSL_f} ⇒ (−)∗ ∘ ((−)∗)^op where λ_S := id_S.

4. Monos and epis.
The JSL_f-monomorphisms are precisely the injective morphisms and the epimorphisms are precisely the surjective ones. The latter situation is unlike the case of distributive lattices, where the inclusion of the three-element chain into 2 × 2 is epic. Injective JSL_f-morphisms f are also order-embeddings, i.e. f(s₁) ≤_T f(s₂) ⟺ s₁ ≤_S s₂. Generally speaking, injective monotone functions needn't be order-embeddings, e.g. take a bijection from a 2-antichain to a 2-chain. ∎

Note 2.2.9 (Irreducibles).

1. A bottom element is the empty join and hence never join-irreducible. The top element of a finite join-semilattice is the empty meet, hence never meet-irreducible.

2. The join-irreducibles of (PX, ∪, ∅) are the singleton sets {x}; the meet-irreducibles are their relative complements.

3. Each element of a join-semilattice is the join of those join-irreducibles below it. In fact, J(S) ⊆ S is the minimal subset generating S under joins. Order-dually, M(S) ⊆ S is the minimal subset generating S under meets.

4. Finite distributive lattices S are determined by their subposet of join-irreducibles. That is, they are isomorphic to the downwards-closed subsets of (J(S), ≤_S), equipped with union (binary join) and intersection (binary meet). Every join-irreducible is actually join-prime, i.e. j ≤_S ⋁_S X ⟺ ∃s ∈ X. j ≤_S s.

5. For finite distributive lattices S, the subposet of join-irreducibles is order-isomorphic to the subposet of meet-irreducibles via τ_S : J(S) → M(S) with action j ↦ ⋁_S (↑_S j)̄ and inverse m ↦ ⋀_S (↓_S m)̄. This fails for non-distributive lattices. ∎

We now have enough structure to define the functorial translation between relations and algebras.
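The irreducibles of Note 2.2.9 are easy to compute by brute force from the join operation alone. Here is a minimal sketch (our own illustration, with invented names), using the fact that in a finite join-semilattice s is join-irreducible iff it differs from the join of the elements strictly below it:

```python
def join_irreducibles(elems, join, bottom):
    """Join-irreducibles J(S) of a finite join-semilattice S:
    s is join-irreducible iff s is not the join of the elements
    strictly below it (the bottom, being the empty join, never is)."""
    def leq(a, b):                      # a <= b iff a v b = b
        return join(a, b) == b
    irr = []
    for s in elems:
        below = bottom
        for t in elems:
            if leq(t, s) and t != s:
                below = join(below, t)
        if s != bottom and below != s:
            irr.append(s)
    return irr
```

On the powerset (P{0,1}, ∪, ∅) this returns the singletons, matching Note 2.2.9.2; the meet-irreducibles can be obtained order-dually by running the same function on S^op.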
Definition 2.2.10 (The equivalence functors). Open : Dep → JSL_f constructs the semilattice of G-open sets:

  Open G := (O(G), ∪, ∅)   Open R := λY. (R+)˘[Y].

Pirr : JSL_f → Dep constructs Markowsky's poset of irreducibles [Mar75]:

  Pirr S := ≰_S|_{J(S), M(S)}   Pirr f(j, m) :⟺ f(j) ≰_T m
  (Pirr f)−(j, j′) :⟺ j′ ≤_T f(j)   (Pirr f)+(m′, m) :⟺ f∗(m′) ≤_S m,

where Pirr f's components are also described above. ∎

Open G is the inclusion-ordered set of neighbourhoods G[X] of the lower bipartition, i.e. particular subsets of the upper bipartition. In the other direction, Pirr S is the domain/codomain restriction of ≰_S ⊆ S × S to join/meet-irreducibles respectively. We have extended the concept studied by Markowsky to morphisms.

Note 2.2.11 (Concerning
Open's action on morphisms). Given R : G → H we defined Open R as λY. (R+)˘[Y]. It may equivalently be defined λY. R↑ ∘ G↓(Y), i.e. one needn't compute the maximal witness R+. It may also be defined λY. (R_u)˘[Y] where R_u ⊆ R+ is any upper witness, since (R+)˘[G[X]] = G;(R+)˘[X] = G;(R_u)˘[X] = (R_u)˘[G[X]]. ∎

Example 2.2.12 (Semilattices as binary relations).

1. Boolean lattices correspond to identity relations.
Observe Open Δ_X = (PX, ∪, ∅) for any finite set X. Applying Pirr yields the bijective relation {x} ↦ {x}̄, which is bipartite isomorphic to Δ_X and hence Dep-isomorphic.

2. Distributive lattices correspond to order relations.
Given any order-relation ≤_P ⊆ P × P, Open ≤_P consists of all upwards-closed subsets of P ordered by inclusion. Since they are closed under unions and intersections, Open ≤_P is a distributive lattice. Conversely, if S is distributive one can show Pirr S ; (τ_S)˘ = ≤_{S^op}|_{J(S), J(S)}, using notation from Note 2.2.9.5; see the background paper Lemma 2.2.3.14, 'Standard order-theoretic results'. Then Pirr S is bipartite isomorphic to an order-relation and hence Dep-isomorphic too.

3. Partition lattices represented via functional composition.
Recall the inclusion-ordered lattice ER(X) of equivalence relations on a finite non-empty set X. Meets are intersections, whereas joins are constructed by taking the transitive closure of the union. Viewed as a join-semilattice, there is a natural binary relation G such that Open G ≅ ER(X):

  G ⊆ Set(2, X) × Set(X, 2),  G(f, g) :⟺ g ∘ f = id_2

where 2 := {0, 1} and Set(A, B) is the set of functions from A to B. Notice that |G_s| = |X|² and |G_t| = 2^{|X|}, which strictly exceed |J(ER(X))| and |M(ER(X))| respectively.

4. A concrete example.
Let P₆ be the path of edge-length 6 with vertices {0, ..., 6}. One of its bipartitions amounts to G ⊆ {1, 3, 5} × {0, 2, 4, 6} where G(x, y) :⟺ |x − y| = 1. Applying Open yields the inclusion-ordered join-semilattice with elements

  ∅, {0,2}, {2,4}, {4,6}, {0,2,4}, {2,4,6}, {0,2,4,6}

[Hasse diagram not reproduced], this being the smallest join-semilattice S such that |J(S)| < |M(S)|. ∎

Example 2.2.13 (JSL_f-morphisms as Dep-morphisms). Given f : S → T we have a Dep-morphism of type ≰_S → ≰_T, namely f;≰_T with lower witness (the graph of) f and upper witness (the graph of) f∗ [diagram not reproduced], via the adjoint relationship f(s) ≰_T t ⟺ s ≰_S f∗(t) in contrapositive form. Pirr f arises by restricting the domain/codomain and passing to the maximum witnesses. Importantly, there is an equivalence functor Nleq : JSL_f → Dep with Nleq S := ≰_S and Nleq f(s, t) :⟺ f(s) ≰_T t. In a precise sense, Pirr is the smallest restriction possible. ∎

We now explicitly describe the equivalence between semilattices and graphs, including the relevant component relations. From one perspective we represent each S as inclusion-ordered subsets of M(S); from another we show each bipartitioned graph is Dep-isomorphic to its reduction – a kind of union-free normal form.
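The object part of Open is directly computable for small relations. A minimal sketch (our own illustration): enumerate all images G[X] to obtain the G-open sets, whose union structure is the carrier of the join-semilattice Open G from Definition 2.2.10.

```python
from itertools import chain, combinations

def image(G, X):
    """G-up(X) = G[X], the image of X under the relation G."""
    return frozenset(y for (x, y) in G if x in X)

def open_sets(G, Gs):
    """The G-open sets O(G) = {G[X] : X a subset of Gs}."""
    Gs = list(Gs)
    subsets = chain.from_iterable(combinations(Gs, r) for r in range(len(Gs) + 1))
    return {image(G, set(X)) for X in subsets}
```

On the path-of-length-6 relation from Example 2.2.12.4 this yields exactly the seven open sets displayed there; note O(G) is always closed under union, since G[X] ∪ G[X′] = G[X ∪ X′].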
Theorem 2.2.14 (Categorical equivalence). Open : Dep → JSL_f and Pirr : JSL_f → Dep define an equivalence of categories via natural isomorphisms:

  rep : Id_{JSL_f} ⇒ Open ∘ Pirr
    rep_S := λs ∈ S. {m ∈ M(S) : s ≰_S m}
    rep_S⁻¹ := λY. ⋀_S (M(S) ∖ Y)

  red : Id_{Dep} ⇒ Pirr ∘ Open
    red_G := {(g_s, Y) ∈ G_s × M(Open G) : G[g_s] ⊈ Y}
    red_G⁻¹ := ∈˘ ⊆ J(Open G) × G_t

where red_G and its inverse have the associated component relations:

  (red_G)− := {(g_s, X) ∈ G_s × J(Open G) : X ⊆ G[g_s]}
  (red_G)+ := ∉˘ ⊆ M(Open G) × G_t
  (red_G⁻¹)− := {(X, g_s) ∈ J(Open G) × G_s : G[g_s] ⊆ X}
  (red_G⁻¹)+ := {(g_t, Y) ∈ G_t × M(Open G) : in_G(ḡ_t) ⊆ Y}

Proof. See the background paper, i.e. Theorem 4.2.10, 'Dep is equivalent to JSL_f'. ∎

So rep_S represents a join-semilattice as neighbourhoods of the relation Pirr S. Its inverse is relatively clear: every element arises uniquely as the meet of those meet-irreducibles above it. Concerning the other natural isomorphism, red_G reduces a bipartitioned graph G by discarding vertices whose neighbourhood is a union of other vertices' neighbourhoods, in a canonical manner. It is worth clarifying the above statement. Firstly,

  J(Open G) ⊆ {G[g_s] : g_s ∈ G_s}    M(Open G) ⊆ {in_G(ḡ_t) : g_t ∈ G_t}

because the supersets join/meet-generate Open G respectively. The join-irreducible G[g_s]'s correspond to those g_s whose neighbourhood is not a union of others. Less obviously, the meet-irreducible in_G(ḡ_t)'s correspond to those g_t whose neighbourhood G˘[g_t] is not a union of others:

  in_G(ḡ_t) = in_G(ḡ_t′) ∧_{in_G} in_G(ḡ_t″)
  ⟺ in_G(ḡ_t) = in_G(in_G(ḡ_t′) ∩ in_G(ḡ_t″))   (definition of ∧_{in_G})
  ⟺ G↓(ḡ_t) = G↓(in_G(ḡ_t′) ∩ in_G(ḡ_t″))   (by Theorem 2.2.7)
  ⟺ G↓(ḡ_t) = G↓(in_G(ḡ_t′)) ∩ G↓(in_G(ḡ_t″))   (G↓ preserves ∩)
  ⟺ G↓(ḡ_t) = G↓(ḡ_t′) ∩ G↓(ḡ_t″)   (↓↑↓)
  ⟺ G↓(ḡ_t) = G↓(ḡ_t′ ∩ ḡ_t″)   (G↓ preserves ∩)
  ⟺ ¬_{G_s} ∘ G↓(ḡ_t) = ¬_{G_s} ∘ G↓(ḡ_t′ ∩ ḡ_t″)   (¬_{G_s} is bijective)
  ⟺ G˘[g_t] = G˘[g_t′] ∪ G˘[g_t″]   (¬↓¬)

Then finally we have:

  G[g_s] ⊈ in_G(ḡ_t) ⟺(∗) {g_s} ⊈ G↓ ∘ G↑ ∘ G↓(ḡ_t) ⟺(↓↑↓) {g_s} ⊈ G↓(ḡ_t) ⟺(∗) G[g_s] ⊈ ḡ_t ⟺ G(g_s, g_t)

where the steps marked (∗) follow by the adjoint relationship in Lemma 2.2.3.1. So reduction discards 'degenerate' vertices and every relation is Dep-isomorphic to its reduction. This is a form of union-freeness. Importantly:

Proposition 2.2.15 (Reduction preserves bipartite dimension). dim(G) = dim(Pirr Open G) for any G ⊆ G_s × G_t.

Example 2.2.16 (Reduction and bipartite dimension).
1. Isolated points have empty neighbourhoods and so are 'discarded' by red_G. The bipartite dimension is preserved because it is defined in terms of edges.

2. If two points have the same neighbourhood, only one representative occurs in the reduction Pirr Open G. The square C₄ arises as G ⊆ {0, 2} × {1, 3} where G[0] = G[2] = G_t. Its reduction is bipartite graph isomorphic to a single edge. Concerning bipartite dimension, if two vertices have the same neighbourhood we may assume they reside in the same bicliques.

3. A vertex's neighbourhood can be a non-degenerate union of others, e.g. G[2] = G[0] ∪ G[4] for the path G ⊆ {0, 2, 4} × {1, 3} with G[0] = {1}, G[2] = {1, 3}, G[4] = {3} [figure not reproduced]. Applying red_G we obtain two disjoint edges. This preserves the bipartite dimension because we can add 2 to each biclique involving 0 or 4. This method extends to the general cases G[x] = G[X] and G˘[y] = G˘[Y].

4. Suppose G is a disjoint union of bicliques, i.e. G = ⋃_{i∈I} X_i × Y_i where X_i ∩ X_j = ∅ = Y_i ∩ Y_j whenever i ≠ j. Then reduction is a special case of Example 2.1.8.3, i.e. a preorder whose quotient poset is discrete.

5. Example 2.2.12.3 described a natural bipartitioned graph which was not reduced. In the automata-theoretic section we'll see many important examples. ∎
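The discarding step admits a minimal computational sketch (our own illustration, with invented names): a lower vertex survives reduction precisely when its neighbourhood is nonempty and not the union of the neighbourhoods strictly contained in it. (Duplicate neighbourhoods collapse to one representative in Pirr Open G; we do not handle that here.)

```python
def nbhd(G, x):
    """The neighbourhood G[x] of a lower vertex x."""
    return {y for (a, y) in G if a == x}

def surviving_lower(G, Gs):
    """Lower vertices whose neighbourhood is join-irreducible in Open G,
    i.e. nonempty and not the union of strictly smaller neighbourhoods."""
    keep = []
    for g in Gs:
        N = nbhd(G, g)
        below = set()
        for h in Gs:
            M = nbhd(G, h)
            if M < N:          # strict inclusion of neighbourhoods
                below |= M
        if N and below != N:
            keep.append(g)
    return keep
```

On the graph of item 3 above (edges 0–1, 2–1, 2–3, 4–3), vertex 2 is discarded since G[2] = G[0] ∪ G[4]; isolated vertices are discarded too, as in item 1.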
We described the self-duality of JSL_f in Note 2.2.8.3. In Dep, this self-duality simply takes the converse relation on both objects and morphisms. Furthermore, the associated component relations are simply swapped.

Theorem 2.2.17 (Self-duality). We have the self-duality functor (−)∨ : Dep^op → Dep:

  G∨ := G˘,  (R^op)∨ := R˘ : H˘ → G˘ for each R : G → H,  (R∨)− := R+,  (R∨)+ := R−

with witnessing natural isomorphism α : Id_{Dep} ⇒ (−)∨ ∘ ((−)∨)^op defined α_G := id_G = G.

Proof. See the background paper, i.e. Theorem 4.1.13, 'Self-duality of Dep'. ∎

There is also an important natural isomorphism connecting the two self-dualities.
Theorem 2.2.18 (Self-duality transfer).

1. ∂ : (−)∗ ∘ Open^op ⇒ Open ∘ (−)∨ with ∂_G := λY. G˘[Ȳ] is a natural isomorphism, with inverse ∂_G⁻¹ := λY. G[Ȳ]. In fact, if R : G → H is a Dep-morphism then (Open R)∗ = ∂_G⁻¹ ∘ Open R˘ ∘ ∂_H, with action λY. G↑ ∘ R↓(Y).

2. λ : (−)∨ ∘ Pirr^op ⇒ Pirr ∘ (−)∗ where λ_S := id_{Pirr S} = Pirr S is a self-inverse natural isomorphism.

Proof.

1. See the background paper, Theorem 4.6.7, i.e. '∂ defines a natural isomorphism'.

2. Given any JSL_f-morphism f : S → T we need to establish that the square with legs λ_T : (Pirr T)∨ → Pirr(T^op), λ_S : (Pirr S)∨ → Pirr(S^op), (Pirr f^op)˘ and Pirr(f∗) commutes [diagram not reproduced]. Indeed, for any s ∈ S and t ∈ T,

  Pirr(f∗)(t, s) :⟺ f∗(t) ≰_{S^op} s ⟺ s ≰_S f∗(t) ⟺ f(s) ≰_T t ⟺ Pirr f(s, t) ⟺ (Pirr f)˘(t, s)

via the usual adjoint relationship. ∎

Note that JSL_f has enough projectives, using category-theoretic parlance.

Proposition 2.2.19 (JSL_f and Dep have enough projectives). Let Z be a finite set and S a finite join-semilattice.

1. Open Δ_Z = (PZ, ∪, ∅) is the free join-semilattice on |Z|-many generators.

2. ε_S : Open Δ_{J(S)} ↠ S, where ε_S(X) := ⋁_S X, is surjective and extends J(S) ↪ S. Correspondingly, Dep has epimorphisms G : Δ_{G_s} → G.

3. Given f : Open Δ_Z → T and surjective q : S ↠ T then f = q ∘ g, where g : Open Δ_Z → S extends λz. q∗(f({z})).

Since the self-duality preserves freeness, JSL_f has enough injectives too. The witnessing embeddings ι_S := ι ∘ rep_S : S ↪ Open Δ_{M(S)} first represent and then include into a powerset. In Dep they amount to monomorphisms G : G → Δ_{G_t}. Concerning both projectivity and injectivity, there is an important special case involving endomorphisms.

Corollary 2.2.20 (Endomorphism representations). For any S-endomorphism δ the following JSL_f-squares commute [diagrams not reproduced]:

  δ ∘ ε_S = ε_S ∘ ((Pirr δ)−)↑ : Open Δ_{J(S)} → S
  (((Pirr δ)+)˘)↑ ∘ ι_S = ι_S ∘ δ : S → Open Δ_{M(S)}

Concerning Dep, R : G → G induces both R− : Δ_{G_s} → Δ_{G_s} and (R+)˘ : Δ_{G_t} → Δ_{G_t}.

Pirr S restricts ≰_S to the join/meet-irreducibles of S. It turns out one can instead pass to any join/meet generators. Roughly speaking, one can extend the domain/codomain of Pirr S and Pirr f in the 'obvious' way.

Proposition 2.2.21 (Dep generator isomorphisms). Let f : S → T be a join-semilattice morphism, J_S, M_S ⊆ S be join/meet generators for S, and J_T, M_T ⊆ T be join/meet generators for T.

1. We have the Dep-isomorphism I_S : ≰_S|_{J_S, M_S} → Pirr S:

  I_S := ≰_S|_{J_S, M(S)}   (I_S)−(x, j) :⟺ j ≤_S x   (I_S)+(m, y) :⟺ m ≤_S y
  I_S⁻¹ := ≰_S|_{J(S), M_S}   (I_S⁻¹)−(j, x) :⟺ x ≤_S j   (I_S⁻¹)+(y, m) :⟺ y ≤_S m

2. The following Dep-diagram commutes, where I_f(s, t) :⟺ f(s) ≰_T t and I_f : ≰_S|_{J_S, M_S} → ≰_T|_{J_T, M_T} [diagram not reproduced]:

  I_f = I_S ⊙ Pirr f ⊙ I_T⁻¹.

Example 2.2.22.
Applying Proposition 2.2.21, we see that Example 2.2.13 is essentially Pirr f. ∎

So far we've seen that Dep is well-behaved w.r.t. bipartite dimension. However, aside from that, connections with graph theory have been a bit thin on the ground. So before proceeding to the automata-theoretic constructions we mention some additional relationships.
Example 2.2.23 (Further graph-theoretic connections).

1. Discarding vertices.
Let G be a reduced relation, say with S := Open G, so that g_s corresponds to a join-irreducible j and g_t to a meet-irreducible m. Discarding a vertex g_s ∈ G_s in the lower bipartition amounts to generating a sub join-semilattice ⟨J(S) ∖ {j}⟩_S. Discarding g_t ∈ G_t amounts to constructing a quotient (⟨M(S) ∖ {m}⟩_{S^op})^op.

2. Kronecker product over the boolean semiring.
One can combine binary relations via (G ⊗ H)((g_s, h_s), (g_t, h_t)) :⟺ G(g_s, g_t) ∧ H(h_s, h_t), i.e. the Kronecker product over the boolean semiring [Wat01]. It defines a functor − ⊗ − : Dep × Dep → Dep whose corresponding join-semilattice functor is the tight tensor product S ⊗_t T := Ti[S^op, T]. To explain briefly,
(a) Ti[−, −] : JSL_f^op × JSL_f → JSL_f restricts the usual hom-functor to morphisms which factor through a boolean lattice. The join-semilattice structure on morphisms is defined pointwise.
(b) There is a universal property w.r.t. bilinearity, via a natural isomorphism ut : Ti[− ⊗_t −, −] ⇒ Ti[−, Ti[−, −]].
(c) The tight tensor product is distinct from the tensor product [GW05]; they coincide on distributive lattices.

3. Extension to non-bipartite graphs.
We've seen that reduced relations correspond to finite join-semilattices. This categorical equivalence can be extended to reduced undirected graphs versus finite De Morgan algebras, i.e. bounded lattices with an order-reversing involution, where distributivity is not assumed.
(a) By undirected graph we mean a symmetric relation E = E˘, i.e. a standard undirected graph where self-loops are now permitted.
(b) The algebras may be axiomatised by extending join-semilattices with a unary operation satisfying σ(x ∨ y) ≤ σ(x) and σ(σ(x)) = x. A morphism is a join-semilattice morphism preserving σ.
(c) Given (V, E) we construct the De Morgan algebra ∂_E : Open E → (Open E)^op where ∂_E(X) := E[X̄]. Given a De Morgan algebra σ : S → S^op we construct the undirected graph (J(S), Pirr σ). ∎

Dependency Automata
Definition 3.1.1 (Nondeterministic finite automaton) .
1. A nondeterministic finite automaton (or nfa) is a tuple N = (I, Z, N_a, F) where:
   – Z is a finite set,
   – I, F ⊆ Z are subsets, and
   – N_a ⊆ Z × Z for each a ∈ Σ.
The elements of Z, I, F are called states, initial states and final states respectively. Each N_a is called the a-transition relation. We often reuse the symbol denoting the nfa (e.g. N) to denote the transitions (e.g. N_a). We may also denote the states, initial states and final states by Z_N, I_N and F_N respectively.
2. For w ∈ Σ* inductively define N_w as N_ε := ∆_Z, N_{ua} := N_u ; N_a. Then we say N accepts the language:
   L(N) := {w ∈ Σ* : N_w[I] ∩ F ≠ ∅}.

Constructions on nondeterministic automata
N = (I, Z, N_a, F).
a. Given S ⊆ Z then N@S := (S, Z, N_a, F) is the nfa with its initial states changed to S. Notice that L(N@S) = ⋃_{z ∈ S} L(N@z).
b. N's reverse nfa is rev(N) := (F, Z, (N_a)˘, I) and accepts the reverse language (L(N))^r.
c. There are various concepts relating to reachability:
   rs(N) := {N_w[I] : w ∈ Σ*}    reach(N) := ⋃ rs(N) ⊆ Z
   rsc(N) := (I, rs(N), λX.N_a[X], {X ∈ rs(N) : X ∩ F ≠ ∅}).
That is, rs(N) consists of N's reachable subsets, reach(N) consists of N's reachable states and finally rsc(N) is the famous reachable subset construction. The latter is a dfa – see Definition 3.1.3 below.
d. If I ⊆ X ⊆ Z and N_a[X] ⊆ X (for a ∈ Σ) the nfa N ∩ X := (I, X, N_a∣_{X×X}, F ∩ X) accepts L(N). Then (overloading notation) reach(N) := N ∩ reach(N) is the reachable part of N.
e. The coreachable part of N also accepts L(N): coreach(N) := rev(reach(rev(N))).
f. An nfa isomorphism f : M → N is a bijection f : Z_M → Z_N which preserves and reflects the initial states, the final states, and also the transitions. That is:
   z ∈ I_M ⇔ f(z) ∈ I_N    M_a(z₁, z₂) :⇔ N_a(f(z₁), f(z₂))    z ∈ F_M ⇔ f(z) ∈ F_N
for each z, z₁, z₂ ∈ Z_M and a ∈ Σ. We may also write
M ≅ N.
g. Each nfa has an associated join-semilattice of accepted languages obtained by varying the initial states:
   langs(N) := ({L(N@S) : S ⊆ Z}, ∪, ∅).
Equivalently, langs(N) = langs(Det(dep(N))) is the join-semilattice of languages accepted by the full subset construction – see Definition 3.4.1.2 and Note 3.2.7.
4. We say N is state-minimal if there is no nfa accepting L(N) with strictly fewer states. ∎

Example 3.1.2 (Some small nfas). In the automaton diagrams (omitted from this rendering) initial states are indicated by i, final states by o, and a doubly-labelled edge such as b,c indicates one b-labelled edge and a parallel c-labelled one.
1. L = a(b + c) + b(a + c) + c(a + b) from [ADN92] is a language with two state-minimal nfas.
2. L = a + aa from [LRT09] is an example of a language which is not 'biresidual' [Tam10, LRT09]. It has 5 state-minimal nfas.
3. L = (ab)* + (abc)* has a unique state-minimal nfa. Every regular language also has a unique state-minimal partial deterministic machine.
4. Consider the language L_n = (a + b)* a (a + b)^n for any n ≥ 0. Each state-minimal nfa accepting L_n arises by removing transitions from a single machine: one may remove certain edges into the initial state, and also the rightmost a-loop. There are many state-minimal nfas accepting L_n, whereas the state-minimal partial deterministic automaton accepting L_n has 2^{n+1} nodes. ∎

We now recall deterministic finite automata and their associated canonical construction i.e. the state-minimal deterministic machine for a regular language.
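The nfa notions recalled above – acceptance, reversal, and the reachable subset construction – can be made concrete. A minimal Python sketch, not from the paper (the dictionary encoding of an nfa is ours):

```python
def step(trans, a, X):
    """Image N_a[X] of a state set X under the a-transition relation."""
    return {z2 for (z1, z2) in trans.get(a, set()) if z1 in X}

def accepts(nfa, word):
    """w is accepted iff N_w[I] meets F."""
    X = set(nfa["I"])
    for a in word:
        X = step(nfa["trans"], a, X)
    return bool(X & nfa["F"])

def rev(nfa):
    """Reverse nfa: swap initial/final states, take converse transitions."""
    return {
        "I": set(nfa["F"]),
        "F": set(nfa["I"]),
        "trans": {a: {(z2, z1) for (z1, z2) in R} for a, R in nfa["trans"].items()},
    }

def rsc(nfa, alphabet):
    """Reachable subset construction: a dfa on rs(N) = {N_w[I] : w in Sigma*}."""
    start = frozenset(nfa["I"])
    states, frontier, delta = {start}, [start], {}
    while frontier:
        X = frontier.pop()
        for a in alphabet:
            Y = frozenset(step(nfa["trans"], a, X))
            delta[(X, a)] = Y
            if Y not in states:
                states.add(Y)
                frontier.append(Y)
    final = {X for X in states if X & nfa["F"]}
    return start, states, delta, final
```

For the language L = a + aa of Example 3.1.2.2 one may take states {0, 1, 2} with I = {0}, F = {1, 2} and a-transitions {(0, 1), (1, 2)}; here rev computes an nfa for L^r = L, and rsc yields the chain {0} → {1} → {2} → ∅.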
Definition 3.1.3 (Deterministic finite automaton) .
1. A deterministic finite automaton (or dfa) is an nfa (I, Z, N_a, F) where ∣I∣ = 1 and each N_a is a function. We may write them as δ = (i, Z, δ_a, F) where i ∈ Z. For each w ∈ Σ* we inductively define the endofunction δ_w : Z → Z as follows: δ_ε := id_Z and δ_{ua} := δ_a ○ δ_u for each (u, a) ∈ Σ* × Σ.
2. Given a dfa δ = (i, Z, δ_a, F) accepting L and u ∈ Σ*,
   L(δ@δ_u(i)) = u^{-1}L := {w ∈ Σ* : uw ∈ L}.
In other words, the unique u-successor of i accepts u^{-1}L. The latter set is the left word quotient of L by u and is also known as the Brzozowski derivative [Brz64].
3. Fix any regular L ⊆ Σ* and let LW(L) := {u^{-1}L : u ∈ Σ*} be L's left word quotients. Then:
   dfa(L) := (L, LW(L), λX.a^{-1}X, {X ∈ LW(L) : ε ∈ X})
is the state-minimal dfa accepting L. It is well-defined because a^{-1}(u^{-1}L) = (ua)^{-1}L.
4. A dfa morphism f : (x₀, X, γ_a, F_X) → (y₀, Y, δ_a, F_Y) is a function f : X → Y such that for a ∈ Σ:
   f ○ γ_a = δ_a ○ f    f(x₀) = y₀    f^{-1}(F_Y) = F_X.
The final condition asserts that the final states are both preserved and reflected, noting that f^{-1} : PY → PX is the preimage function. Importantly, dfa morphisms always preserve the accepted language.
5. Each dfa δ := (z₀, Z, δ_a, F) has a dfa of accepted languages:
   simple(δ) := (L(δ), langs(δ), λX.a^{-1}X, {X ∈ langs(δ) : ε ∈ X})
where langs(δ) := {L(δ@z) : z ∈ Z}. There is a surjective dfa morphism acc_δ : δ ↠ simple(δ) defined acc_δ(z) := L(δ@z) i.e. the acceptance map. The word simple is non-standard yet well-motivated: every surjective dfa morphism f : simple(δ) ↠ γ is bijective.
6. An ordered dfa (p₀, P, δ_a, F) consists of a partially ordered set P = (P, ≤_P) and a dfa (p₀, P, δ_a, F) whose deterministic transitions are respectively monotonic δ_a : P → P. An ordered dfa morphism is a dfa morphism between ordered dfas which is also monotonic. ∎

Note 3.1.4 (Concerning dfa(L)). State-minimal dfas are often introduced via Hopcroft's algorithm. One takes the reachable part of a given dfa, afterwards identifying states accepting the same language. The latter uses Hopcroft's partition refinement, essentially constructing the Myhill-Nerode congruence. There are two 'representation independent' ways of defining the state-minimal dfa: (1) as equivalence classes of the Myhill-Nerode congruence MN_L ⊆ Σ* × Σ* for L, (2) as the left word quotients u^{-1}L also known as Brzozowski derivatives [Brz64]. ∎

Note 3.1.5 (Concerning simple(δ)). The dfa simple(δ) has no more states than δ.
Each state z of δ accepts L(δ@z) (by definition), as does the state L(δ@z) in simple(δ). This construction is defined for dfas but not nfas. However, later we'll introduce a related construction simple_∨(N) for each nfa N – see Definition 4.2.4. ∎

We now introduce dependency automata i.e. two nfas compatible w.r.t. a bipartitioned graph.
Definition 3.1.6 (Dependency automaton). A dependency automaton is a triple (N, G, N′) where:
1. G ⊆ G_s × G_t is a binary relation (bipartitioned graph).
2. N := (I_N, G_s, N_a, F_N) is an nfa over the lower bipartition.
3. N′ := (I_{N′}, G_t, N′_a, F_{N′}) is an nfa over the upper bipartition.
4. N_a ; G = G ; (N′_a)˘ for each a ∈ Σ.
5. F_{N′} = G[I_N] and F_N = Ğ[I_{N′}].
Condition (4) induces Dep-endomorphisms which we denote by N†_a : G → G for each a ∈ Σ. A dependency automaton (N, G, N′) accepts the language L(N, G, N′) := L(N) i.e. the language accepted by the lower nfa. ∎

Each nfa induces a dependency automaton with only linear blowup.
Definition 3.1.7 (Nfa’s associated dependency automaton) . Given an nfa N with states Z , dep ( N ) ∶ = ( N , ∆ Z , rev ( N )) is its associated dependency automaton. ∎ For well-definedness consider Definition 3.1.6 when
G = ∆_Z. Then (4) amounts to taking the converse relation and (5) to swapping the initial/final states. So each nfa can be viewed as a dependency automaton. Very importantly, each regular language has an associated dependency automaton too.

Definition 3.1.8 (Canonical dependency automaton). Given a regular language L ⊆ Σ*,
   dep(L) := (dfa(L), DR_L, dfa(L^r))   where   DR_L(u^{-1}L, v^{-1}L^r) :⇔ uv^r ∈ L
is the respective canonical dependency automaton. ∎

Example 3.1.9 (dep(L)). If L = a + aa, so L = L^r, then dfa(L) = dfa(L^r) is the three-state chain i →a o →a o, excluding the sink. The canonical dependency automaton relates the lower copy (with states L, a^{-1}L, aa^{-1}L) to the upper copy via DR_L; the sink states are isolated in DR_L and again excluded. [diagram omitted] ∎

Lemma 3.1.10. dep(L) is a well-defined dependency automaton.
Proof. Concerning (4),
   (λX.a^{-1}X) ; DR_L [u^{-1}L] = DR_L[(ua)^{-1}L] = {v^{-1}L^r : v ∈ Σ*, uav^r ∈ L}
   DR_L ; (λX.a^{-1}X)˘ [u^{-1}L] = (λX.a^{-1}X)˘[{v^{-1}L^r : v ∈ Σ*, uv^r ∈ L}] = {v^{-1}L^r : v ∈ Σ*, u(va)^r ∈ L} = {v^{-1}L^r : v ∈ Σ*, uav^r ∈ L}.
Concerning (5),
   v^{-1}L^r ∈ DR_L[L] ⇔ v^r ∈ L ⇔ v ∈ L^r ⇔ ε ∈ v^{-1}L^r
   u^{-1}L ∈ (DR_L)˘[L^r] ⇔ u ∈ L ⇔ ε ∈ u^{-1}L.

In both the classes of examples so far, the upper nfa accepts a word iff the lower nfa accepts its reverse. This situation holds generally for all dependency automata.
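Conditions (4) and (5) of Definition 3.1.6 are identities between finite relations, so they can be checked mechanically. A minimal Python sketch, not from the paper (the dictionary encoding of an nfa is ours):

```python
def compose(R, S):
    """Relational composition R ; S."""
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

def converse(R):
    """Converse relation R-breve."""
    return {(y, x) for (x, y) in R}

def image(R, X):
    """Relational image R[X]."""
    return {y for (x, y) in R if x in X}

def is_dependency_automaton(lower, G, upper, alphabet):
    """Check conditions (4) and (5) of Definition 3.1.6."""
    # (4): N_a ; G = G ; (N'_a)-breve for every letter a
    for a in alphabet:
        lhs = compose(lower["trans"].get(a, set()), G)
        rhs = compose(G, converse(upper["trans"].get(a, set())))
        if lhs != rhs:
            return False
    # (5): F_{N'} = G[I_N] and F_N = G-breve[I_{N'}]
    return (upper["F"] == image(G, lower["I"])
            and lower["F"] == image(converse(G), upper["I"]))
```

For instance, with N the nfa for a + aa and G = ∆_Z the identity relation, dep(N) = (N, ∆_Z, rev(N)) passes the check, while perturbing the upper final states breaks condition (5).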
Lemma 3.1.11. If (N, G, N′) is a dependency automaton then L(N′) = (L(N))^r.
Proof. Since N_a ; G = G ; (N′_a)˘ we have N†_a : G → G, and composing yields N_w ; G = G ; (N′_w)˘ for w ∈ Σ*; taking converses, N′_w ; Ğ = Ğ ; N̆_w. Then:
   w ∈ L(N′) ⇔ N′_w[I_{N′}] ∩ F_{N′} ≠ ∅
   ⇔ N′_w[I_{N′}] ∩ G[I_N] ≠ ∅   (by def.)
   ⇔ Ğ[N′_w[I_{N′}]] ∩ I_N ≠ ∅   (converse images)
   ⇔ N̆_w[Ğ[I_{N′}]] ∩ I_N ≠ ∅   (see above)
   ⇔ N̆_w[F_N] ∩ I_N ≠ ∅   (by def.)
   ⇔ w ∈ L(rev(N)) ⇔ w ∈ (L(N))^r.

Definition 3.1.12 (The category aut
Dep). Its objects are the dependency automata. Take any two of them:
   (M, F, M′) where M = (I_M, F_s, M_a, F_M) and M′ = (I_{M′}, F_t, M′_a, F_{M′})
   (N, G, N′) where N = (I_N, G_s, N_a, F_N) and N′ = (I_{N′}, G_t, N′_a, F_{N′}).
An aut_Dep-morphism R : (M, F, M′) → (N, G, N′) is a Dep-morphism R : F → G such that for each a ∈ Σ,
   M_a ; R = R ; (N′_a)˘    R[I_M] = F_{N′}    R̆[I_{N′}] = F_M.
Composition is inherited from
Dep. The leftmost condition can be written M†_a R = R N†_a. ∎

Lemma 3.1.13. aut_Dep is a well-defined category.
Proof.
Identity morphisms id_{(N, G, N′)} := id_G = G are well-defined. Indeed, the conditions concerning dependency automata state precisely that G is an aut_Dep-endomorphism of (N, G, N′). It remains to verify that compatible aut_Dep-morphisms are closed under
Dep-composition. To this end, take a dependency automaton (O, H, O′) where O = (I_O, H_s, O_a, F_O) and O′ = (I_{O′}, H_t, O′_a, F_{O′}), and also a morphism S : (N, G, N′) → (O, H, O′). The aut_Dep-morphisms inform us that M†_a R = R N†_a and N†_a S = S O†_a, so that:
   M†_a (R S) = (M†_a R) S = (R N†_a) S = R (N†_a S) = R (S O†_a) = (R S) O†_a.
Finally,
   R S [I_M] = R ; S̆₊[I_M] = S̆₊[R[I_M]] = S̆₊[F_{N′}]   (def. of Dep and aut
Dep ) = S ˘ + [ G [ I N ]] = G ; S ˘ + [ I N ] = S [ I N ] = F ′O . (def. of Dep and aut
Dep).
   (R S)˘[I_{O′}] = (R S)^∨[I_{O′}]   (def. of (−)^∨)
   = S^∨ R^∨[I_{O′}]   (functoriality)
   = S̆ ; (R^∨)˘₊[I_{O′}]   (Dep-composition)
   = S̆ ; R̆₋[I_{O′}] = R̆₋[S̆[I_{O′}]] = R̆₋[F_N]   (def. of aut_Dep)
   = R̆₋[Ğ[I_{N′}]] = (R₋ ; G)˘[I_{N′}] = R̆[I_{N′}] = F_M   (def. of aut_Dep and
Dep ). Theorem 3.1.14 (Self-duality of aut
Dep ) . We have the self-duality functor
Rev : aut_Dep^op → aut_Dep:
   Rev(N, G, N′) := (N′, Ğ, N)    Rev R := R^∨,
recalling the self-duality (−)^∨ of Dep.
Proof.
Its action on objects is well-defined: (4) holds because we know N_a ; G = G ; (N′_a)˘ and hence N′_a ; Ğ = Ğ ; (N_a)˘; (5) holds because it is invariant under swapping the lower/upper nfa. Its action on morphisms is well-defined by a similar argument, recalling that (R : F → G)^∨ := R̆ : Ğ → F̆. Then it is a functor because (−)^∨ is. It is an equivalence functor for the same reason.

Next we specify a way in which dependency automata can be isomorphic i.e. via distinct pairs of witnessing relations (N_a, N′_a) of the same Dep-endomorphism N_a ; G =: N†_a := G ; (N′_a)˘. In other words, there can be too few transitions relative to the inclusion-maximal components (N†_a)₋ and (N†_a)₊. There can also be too few initial/final states (these sets correspond to Dep-morphisms too).
Proposition 3.1.15 (aut_Dep transition-based isomorphisms). Given (N, G, N′) and (M, G, M′) such that:
   N_a ; G = M_a ; G (for a ∈ Σ)    G[I_N] = G[I_M]    Ğ[I_{N′}] = Ğ[I_{M′}]
then id_G = G : (N, G, N′) → (M, G, M′) is an aut_Dep-isomorphism.
Proof. G certainly defines a Dep-morphism id_G := G : G → G. It is an aut_Dep-morphism G : (N, G, N′) → (M, G, M′) because:
   M_a ; G = N_a ; G   (by assumption)
   = G ; (N′_a)˘   ((N, G, N′) a dependency automaton)
and similarly G[I_N] = G[I_M] = F_{M′} and Ğ[I_{N′}] = Ğ[I_{M′}] = F_M. By a symmetric argument we infer G : (M, G, M′) → (N, G, N′) is well-defined, and it is also the inverse because id_G id_G = id_G in Dep.

Proposition 3.1.16 (Polytime canonical dependency automaton). Given dfas α, β s.t. L(β) = (L(α))^r one can build L(α)'s canonical dependency automaton in polytime.
Proof. Minimising α in polytime yields γ := (x₀, X, γ_a, F_γ); minimising β yields δ := (y₀, Y, δ_a, F_δ). Construct G ⊆ X × Y,
   G(γ_u(x₀), δ_v(y₀)) :⇔ γ_{v^r}(γ_u(x₀)) ∈ F_γ ⇔ uv^r ∈ L
noting that γ and δ are reachable. Then the maps β₁(γ_u(x₀)) := u^{-1}L and β₂(δ_v(y₀)) := v^{-1}L^r form a bipartite graph isomorphism from G ⊆ X × Y to DR_L ⊆ LW(L) × LW(L^r). It induces a Dep-isomorphism – in fact an aut
Dep -isomorphism.
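The construction in Proposition 3.1.16 can be sketched concretely: pick, for each state of the dfa for L^r, a witness word v reaching it, and relate x to y precisely when reading v^r from x ends in a final state. A minimal Python sketch, not from the paper (the dictionary encoding of a dfa is ours):

```python
from collections import deque

def reach_words(dfa):
    """BFS witnesses: one shortest word per reachable state."""
    words = {dfa["start"]: ""}
    queue = deque([dfa["start"]])
    while queue:
        x = queue.popleft()
        for a, x2 in dfa["delta"][x].items():
            if x2 not in words:
                words[x2] = words[x] + a
                queue.append(x2)
    return words

def run(dfa, x, w):
    """Run the dfa on word w starting from state x."""
    for a in w:
        x = dfa["delta"][x][a]
    return x

def dependency_relation(alpha, beta):
    """G(x, y) iff u v^r is accepted, for u reaching x in alpha, v reaching y in beta."""
    return {(x, y)
            for x in reach_words(alpha)
            for y, v in reach_words(beta).items()
            if run(alpha, x, v[::-1]) in alpha["F"]}
```

For L = a + aa (so L = L^r, and the same minimal dfa with sink state 3 serves as both α and β) this reproduces the relation DR_L of Example 3.1.9, with the sink states isolated.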
Just as each nfa has a reverse, each dependency automaton ( N , G , N ′ ) has a reverse Rev ( N , G , N ′ ) . It swaps thelower/upper nfa and takes the converse of G (equivalently, swaps the bipartitions). This construction arose fromthe self-duality of Dep . We now focus on lifting the categorical equivalence
Dep ≅ JSL_f to one between dependency automata and deterministic finite automata interpreted in join-semilattices. In the process we'll generalise the subset construction to dependency automata.

Definition 3.2.1 (dfa_JSL). A JSL-dfa is a 4-tuple (s₀, S, δ_a, F) where S = (S, ∨_S, ⊥_S) is a finite join-semilattice, s₀ ∈ S is an element, δ_a : S → S is a join-semilattice morphism for a ∈ Σ, and the non-final states form a principal down-set i.e. S ∖ F = ↓_S t for some t ∈ S. It accepts the language its underlying dfa does. A JSL-dfa morphism is a dfa morphism which is also a join-semilattice morphism i.e. preserves all joins. Given w ∈ Σ* we inductively define endomorphisms δ_ε := id_S and δ_{wa} := δ_a ○ δ_w. The category dfa_JSL consists of the
JSL -dfas with their morphisms, where composition is functional. ∎ Importantly,
JSL-dfas are deterministic finite automata interpreted in join-semilattices.

                    classical dfa          JSL-dfa
   states           Z                      S
   initial state    α : {∗} → Z            α : 2 → S
   transitions      δ_a : Z → Z            δ_a : S → S
   final states     ω : Z → {0, 1}         ω : S → 2

Indeed, viewing sets as algebras for the empty signature, {∗} is free 1-generated, just as the two-chain 2 = {0 < 1} is the free 1-generated join-semilattice. Morphisms from such algebras amount to picking a single element. On the other hand, {0, 1} and 2 are the unique (modulo isomorphism) two-element algebras of their respective varieties. Morphisms to such algebras amount to subsets i.e. the elements sent to 1. Permitting every function ω : Z → {0, 1} permits any set of final states. Morphisms ω : S → 2 must have a largest element sent to 0, so that ω^{-1}({0}) = ↓_S t for some t ∈ S.

The following Lemma provides further clarification. That is, a join of states accepts the union of the languages accepted by its summands. As a special case, the bottom element accepts the empty language.

Lemma 3.2.2.
For any
JSL -dfa δ = ( s , S , δ a , F ) and X ⊆ S , L ( δ @ ⋁ S X ) = ⋃ s ∈ X L ( δ @ s ) . Proof.
Let t = ⋁_S (S ∖ F) be the largest non-final state. Each δ_w : S → S is an endomorphism so
   δ_w(⋁_S X) ≰_S t ⇔ ⋁_S {δ_w(x) : x ∈ X} ≰_S t ⇔ ∃x ∈ X. δ_w(x) ≰_S t.

Next, the category of JSL-dfas is self-dual. That is, one can take adjoints and exchange the initial state with the largest non-final state. Other varieties where dfas can be interpreted include pointed sets, distributive lattices, boolean algebras and vector spaces over F₂.

Theorem 3.2.3 (Self-duality of dfa_JSL). We have the self-duality (−)„ : dfa_JSL^op → dfa_JSL,
   (s₀, S, δ_a, F)„ := (⋁_S (S ∖ F), S^op, (δ_a)_∗, S ∖ ↑_S s₀)    f„ := f_∗
with witnessing natural isomorphism λ : Id_{dfa_JSL} ⇒ (−)„ ○ ((−)„)^op where λ_{(s₀, S, δ_a, F)} := id_S. Dual machines accept the reversed language.
Lemma 3.2.4. L(δ„) = (L(δ))^r for any JSL-dfa δ.
Proof. Let δ = (s₀, S, δ_a, F) and consider the morphisms:
   α : 2 → S where α(1) := s₀    ω : S → 2 where ω^{-1}({0}) = S ∖ F    δ_w : S → S for w ∈ Σ*.
Then we calculate:
   w ∈ L(δ) ⇔ ω ○ δ_w ○ α = id_2   (consider action on ⊤)
   ⇔ α_∗ ○ (δ_w)_∗ ○ ω_∗ = (id_2)_∗ = id_{2^op}   (apply (−)_∗ : JSL_f^op → JSL_f)
   ⇔ α_∗ ○ (δ„)_{w^r} ○ ω_∗ = id_{2^op}   (by def. of (−)„)
   ⇔ α_∗ ○ (δ„)_{w^r} ○ ω_∗(0) = 0   (since ⊤_{2^op} = 0)
   ⇔ α_∗ ○ (δ„)_{w^r}(⋁_S (S ∖ F)) ≰_{2^op} 1
   ⇔ (δ„)_{w^r}(⋁_S (S ∖ F)) ≰_{S^op} (α_∗)_∗(1)   (adjoints)
   ⇔ (δ„)_{w^r}(⋁_S (S ∖ F)) ≰_{S^op} s₀   (since (α_∗)_∗ = α)
   ⇔ w^r ∈ L(δ„).

Importantly, dependency automata and dfas interpreted in semilattices are two sides of the same coin.

Definition 3.2.5 (Equivalence functors for automata).
1. Det : aut_Dep → dfa_JSL determinises a dependency automaton:
Det ( N , G , N ′ ) ∶ = ( F N ′ , Open G , λY. ( N ′ a ) ˘ [ Y ] , { Y ∈ O ( G ) ∶ Y ∩ I N ′ ≠ ∅ }) and acts on morphisms as Open does (see Definition 2.2.10.1).2.
Airr ∶ dfa JSL → aut Dep constructs a dependency automaton over the semilattice’s irreducibles:
Airr(s₀, S, δ_a, F) := (N, Pirr S, N′)
   N := (J(S) ∩ ↓_S s₀, J(S), (Pirr δ_a)₋, J(S) ∩ F)
   N′ := ({m ∈ M(S) : ⋁_S (S ∖ F) ≤_S m}, M(S), (Pirr δ_a)₊, {m ∈ M(S) : s₀ ≰_S m}).
It acts on morphisms as Pirr does (see Definition 2.2.10.2). ∎

Theorem 3.2.6 (Automata-theoretic categorical equivalence). Det : aut_Dep → dfa_JSL and
Airr ∶ dfa JSL → aut Dep definean equivalence of categories with natural isomorphisms inherited from Theorem 2.2.14: rep ∶ Id dfa JSL ⇒ Det ○ Airr rep ( s, S ,δ a ,F ) ∶ = rep S red ∶ Id aut Dep ⇒ Airr ○ Det red ( N , G , N ′ ) ∶ = red G Proof. Det is well-defined .Since
Open is a well-defined functor we need only show
Det is well-defined on objects and morphisms. Concerningobjects, first recall:
Det ( N , G , N ′ ) ∶ = ( F N ′ , Open G , λY. ( N ′ a ) ˘ [ Y ] , { Y ∈ O ( G ) ∶ Y ∩ I N ′ ≠ ∅ }) . Then F N ′ = G [ I N ] ∈ O ( G ) is an element of the join-semilattice Open G , as required. Since N a ; G = N † a = G ; ( N ′ a ) ˘we have N † a ∶ G → G ; applying Open yields an endomorphism of
Open G with action λY ∈ O(G). (N†_a)˘₊[Y]. The latter can be rewritten λY.(N′_a)˘[Y] because G ; (N†_a)˘₊ = G ; (N′_a)˘ and each Y = G[X] for some X. Finally, the non-final states {Y ∈ O(G) : Y ∩ I_{N′} = ∅} have a largest element, namely the interior in_G(∁I_{N′}). Then the JSL-dfa is well-defined. To see
Det is well-defined on morphisms we’ll show the respective
JSL f -morphisms preserve the additionalstructure. Given R ∶ ( M , G , M ′ ) → ( N , H , N ′ ) we have Open R ∶ Open G → Open H . The initial state is preserved: Open G ( F M ′ ) = Open G ( G [ I M ]) = G ; R ˘ + [ I M ] = R [ I M ] = F N ′ . The transitions are preserved because for any Y = G [ X ] , Open R (( M ′ a ) ˘ [ Y ]) = R ˘ + [( M ′ a ) ˘ [ Y ]] (def. of Open ) = G ; ( M ′ a ) ˘; R ˘ + [ X ] = M a ; G ; R ˘ + [ X ] (def. of aut Dep ) = M a ; R [ X ] = R ; ( N ′ a ) ˘ [ X ] (def. of aut Dep ) = G ; R ˘ + ; ( N ′ a ) ˘ [ X ] = R ˘ + ; ( N ′ a ) ˘ [ Y ] = ( N ′ a ) ˘ [ Open R ( Y )] . (def. of Open )To see the final states are preserved, observe I M ′ determines the Dep -morphism: G t I M′ × { } / / G s G O O F M × { } / / Pirr O O Fix Y = G [ X ] ∈ O ( G ) . Then Y is final iff Y ∩ I M ′ ≠ ∅ iff X ∩ F M ≠ ∅ . Moreover Open R ( Y ) = R [ X ] is final iff R [ X ] ∩ I N ′ ≠ ∅ . By assumption ˘ R [ I N ′ ] = F M , so Y is final iff X ∩ ˘ R [ I N ′ ] ≠ ∅ iff R [ X ] ∩ I N ′ ≠ ∅ iff Open R ( Y ) is final.2. Airr is well-defined .Since
Pirr is a well-defined functor it suffices to show
Airr is well-defined on objects and morphisms. Concerningobjects,
Airr ( s, S , δ a , F ) ∶ = ( N , Pirr S , N ′ ) and both N and N ′ are well-defined nfas. Condition (4) holds: ( Pirr δ a ) − ; Pirr S = Pirr δ a id Pirr S = Pirr δ a = id Pirr S Pirr δ a = Pirr S ; ( Pirr δ a ) ˘ + . Finally, condition (5) holds:
Pirr S [ I N ] = Pirr S [{ j ∈ J ( S ) ∶ j ≤ S s }] = { m ∈ M ( S ) ∶ ∃ j ∈ J ( S ) . [ j ≤ S s and j ≰ S m ]} = { m ∈ M ( S ) ∶ s ≰ S m } = F N ′ ( Pirr S ) ˘ [ I N ′ ] = ( Pirr S ) ˘ [{ m ∈ M ( S ) ∶ ⋁ S F ≤ S m }] = { j ∈ J ( S ) ∶ ∃ m ∈ M ( S ) . [ j ≰ S m and ⋁ S F S ≤ S m ]} = { j ∈ J ( S ) ∶ j ≰ S ⋁ S F S } = F N .To see Airr is well-defined on morphisms, take f ∶ ( s , S , γ a , F S ) → ( t , T , δ a , F T ) so we have Pirr f ∶ Pirr S → Pirr T . Then let us verify the required identities: ( Pirr γ a ) − ; Pirr f = Pirr γ a Pirr f = Pirr ( γ a ○ f ) = Pirr ( f ○ δ a ) = Pirr f Pirr δ a = Pirr f ; ( Pirr δ a ) ˘ + Airr f ∶ ( M , Pirr S , M ′ ) → ( N , Pirr T , N ′ ) then, Pirr f [ I M ] = Pirr f [{ j ∈ J ( S ) ∶ j ≤ S s }] = { m ∈ M ( T ) ∶ ∃ j ∈ J ( S ) . ( f ( j ) ≰ T m and j ≤ S s )} = { m ∈ M ( T ) ∶ ∃ j ∈ J ( S ) . ( j ≰ S f ∗ ( m ) and j ≤ S s )} = { m ∈ M ( T ) ∶ s ≰ S f ∗ ( m )} . = { m ∈ M ( T ) ∶ f ( s ) ≰ T m } . = { m ∈ M ( T ) ∶ t ≰ T m } = F N ′ . ( Pirr f ) ˘ [ I N ′ ] = { j ∈ J ( S ) ∶ ∃ m ∈ M ( T ) . ( f ( j ) ≰ T m and ⋁ S F T ≤ T m )} = { j ∈ J ( S ) ∶ f ( j ) ≰ S ⋁ T F T } = { j ∈ J ( S ) ∶ j ≰ S ⋁ S F S } = F N .3. rep restricts to a natural isomorphism as claimed .Recall the natural isomorphism rep ∶ Id JSL f ⇒ Open ○ Pirr where rep S ∶ = λs ∈ S. { m ∈ M ( S ) ∶ s ≰ S m } . Thengiven any JSL -dfa δ ∶ = ( s , S , δ a , F ) it suffices to establish the typing rep S ∶ δ → DetAirr δ where: DetAirr δ = ({ m ∈ M ( S ) ∶ s ≰ S m } , OpenPirr S , λY. ( Pirr δ a ) ˘ + [ Y ] , { Y ∈ O ( Pirr S ) ∶ ⋁ S F ∈ ↓ S Y }) . The initial state is clearly preserved. Next, the deterministic transitions are preserved:rep S ○ δ a ( s ) = { m ∈ M ( S ) ∶ δ a ( s ) ≰ S m } = { m ∈ M ( S ) ∶ s ≰ S ( δ a ) ∗ ( m )}( Pirr δ a ) ˘ + [ rep S ( s )] = ( Pirr δ a ) ˘ + [{ m ∈ M ( S ) ∶ s ≰ S m }] = { m ∈ M ( S ) ∶ ∃ m ′ ∈ M ( S ) . 
( s ≰ S m ′ and ( δ a ) ∗ ≤ S m ′ )} = { m ∈ M ( S ) ∶ s ≰ S ( δ a ) ∗ ( m )} .Concerning the final states we know rep S [ F ] = { M ( S ) ∩ ≰ S [ s ] ∶ s ∈ F } , so given s ∈ F let Y s ∶ = { m ∈ M ( S ) ∶ s ≰ S m } ∈ O ( Pirr S ) . Then: ⋁ S F ∉ ↓ S Y s ⇐⇒ ∀ m ∈ M ( S ) . ( s ≰ S m ⇒ ⋁ S F ≰ S m ) ⇐⇒ ∀ m ∈ M ( S ) . (⋁ S F ≤ S m ⇒ s ≤ S m ) ⇐⇒ s ≤ S ⋁ S F ⇐⇒ s ∉ F .4. red restricts to a natural isomorphism as claimed .Recall the natural isomorphism red ∶ Id Dep ⇒ Pirr ○ Open . Take any dependency automaton M ∶ = ( M , G , M ′ ) .It suffices to establish the typing red G ∶ M → AirrDet M whose codomain is ( N , PirrOpen G , N ′ ) where: N ∶ = ({ j ∈ J ( Open G ) ∶ j ⊆ F M ′ } , J ( Open G ) , N a , { Y ∈ J ( Open G ) ∶ Y ∩ I M ′ ≠ ∅ }) N ′ ∶ = ({ m ∈ M ( Open G ) ∶ in G ( I M ′ ) ⊆ m } , M ( S ) , N ′ a , { m ∈ M ( S ) ∶ F M ′ ⊈ m }) .Firstly by Dep -composition and naturality, M a ; red G = M † a red G = red G PirrOpen M † a = red G ; ( PirrOpen M † a ) ˘ + = red G ; ( N ′ a ) ˘ . Finally we establish the two remaining conditions:red G [ I M ] = { Y ∈ M ( Open G ) ∶ ∃ g s ∈ I M . G [ g s ] ⊈ Y } = { Y ∈ M ( Open G ) ∶ G [ I M ] ⊈ Y } = { Y ∈ M ( Open G ) ∶ F M ′ ⊈ Y } (def. of aut Dep ) = F N ′ (see above). ( red G ) ˘ [ I N ′ ] = { g s ∈ G s ∶ ∃ Y ∈ M ( Open G ) . ( G [ g s ] ⊈ Y and in G ( I M ′ ) ⊆ Y )} = { g s ∈ G s ∶ G [ g s ] ⊈ in G ( I M ′ )} = { g s ∈ G s ∶ { g s } ⊈ G ↓ ○ in G ( I M ′ )} (adjoints) = { g s ∈ G s ∶ g s ∉ G ↓ ( I M ′ )} ( ↓↑↓ ) = ˘ G [ I M ′ ] = F M . N induces a dependency automaton dep ( N ) = Det ( N , ∆ Z , rev ( N )) . Note 3.2.7 ( Det ( N , ∆ z , rev ( N )) is N ’s full subset construction) . Given any nfa
N = (I, Z, N_a, F),
   Det(dep(N)) = Det(N, ∆_Z, rev(N)) = (F_{rev(N)}, Open ∆_Z, λX.(N̆_a)˘[X], {X ⊆ Z : X ∩ I_{rev(N)} ≠ ∅})
   = (I_N, (PZ, ∪, ∅), λX.N_a[X], {X ⊆ Z : X ∩ F_N ≠ ∅})
i.e. the full subset construction for N endowed with inclusion ordering. This explains Definition 3.2.8 below. ∎

Definition 3.2.8 (Full subset construction sc(N)). For any nfa N define:
   sc(N) := Det(dep(N)) = (I_N, (PZ, ∪, ∅), λX.N_a[X], {X ⊆ Z : X ∩ F_N ≠ ∅}).
This is N's full subset construction endowed with its JSL-dfa structure. ∎

Note 3.2.9 (Det(N, G, N′) restricts rev(N′)'s full subset construction). Generally speaking,
Det(N, G, N′) is obtained from rev(N′)'s full subset construction by restricting to Open
G ⊆
Open ∆ G t . This generalises Note 3.2.7. ∎ Note 3.2.10 ( Det and
Airr preserve the accepted language) . Given any dependency automaton ( N , G , N ′ ) , theclassically reachable part of Det ( N , G , N ′ ) has a classical description too: reach ( Det ( N , G , N ′ )) = rsc ( rev ( N ′ )) . It follows by Lemma 3.1.11 that
Det preserves the accepted language. The natural isomorphism red ∶ Id aut Dep ⇒ Airr ○ Det informs us that
Airr also preserves the accepted language. ∎ Corollary 3.2.11 (Language correpondence) . Let δ = ( s , S , δ a , F ) be a JSL -dfa and N the lower nfa of Airr δ . Then: L ( N @ Z ) = acc δ (⋁ S Z ) for every Z ⊆ J ( S ) .In particular each individual state j ∈ J ( S ) of N accepts acc δ ( j ) .Proof. By Note 3.2.10
Det preserves the accepted language. Given
Airr δ = ( N , G , N ′ ) and Z ⊆ J ( S ) then: ( N @ Z , G , M ) where M ∶ = rev ( rev ( N ′ ) @ G [ Z ] ) is a well-defined dependency automaton. Applying Det yields the
JSL -dfa ( DetAirr δ ) @ G [ Z ] which accepts L ( N @ Z ) .Finally rep − δ ′ provides a language-preserving isomorphism ( DetAirr δ ) @ G [ Z ] → δ @ ⋁ S Z . Example 3.2.12 (Dualising the full subset construction) . Via relative complement we have the
JSL -dfa isomorphism: ( sc ( N )) „ ≅ sc ( rev ( N )) which follows by considering ( sc ( N )) „ = ( F ,
Open ∆ Z , R ↓ a , { X ⊆ Z ∶ X ∩ I ≠ ∅ }) . In other words, the dual of thefull subset construction for N is the full subset construction for rev ( N ) . This isomorphism instantiates the naturalisomorphism ˆ ∂ described below. ∎ The self-duality transfers of Theorem 2.2.18 generalise naturally to the automata-theoretic setting.
Theorem 3.2.13 (Automata-theoretic self-duality transfer).
1. ∂̂ : (−)„ ○ Det^op ⇒ Det ○ Rev restricts ∂ from Theorem 2.2.18.1.
2. λ̂ : Rev ○ Airr^op ⇒ Airr ○ (−)„ restricts λ from Theorem 2.2.18.2.
Proof.
1. Given a dependency automaton (N, G, N′) it suffices to show ∂_G : (Open G)^op → Open Ğ defines a dfa_JSL-morphism of type (Det(N, G, N′))„ → DetRev(N, G, N′).
Concerning preservation of the initial state,
   ∂_G(⋃{Y ∈ O(G) : Y ∩ I_{N′} = ∅}) = ∂_G(in_G(∁I_{N′}))   (see below)
   = Ğ[∁in_G(∁I_{N′})]   (def. of ∂_G)
   = ∁G↓(in_G(∁I_{N′}))   (De Morgan duality)
   = ∁G↓(∁I_{N′})   (↓↑↓)
   = Ğ[I_{N′}]   (De Morgan duality)
   = F_N   (def. of (N, G, N′)).
The marked equality holds because in_G(∁I_{N′}) is the largest G-open inside ∁I_{N′}. Next, the final states are preserved and reflected:
   X ∈ ∂_G^{-1}({Y ∈ O(Ğ) : Y ∩ I_N ≠ ∅}) ⇔ Ğ[∁X] ∩ I_N ≠ ∅   (def. of ∂_G)
   ⇔ ∁G↓(X) ∩ I_N ≠ ∅   (De Morgan duality)
   ⇔ I_N ⊈ G↓(X)
   ⇔ G[I_N] ⊈ X   (adjoints)
   ⇔ F_{N′} ⊈ X   (def. of (N, G, N′))
   ⇔ X ∉ ↑_{Open G} F_{N′}.
Finally, preservation of the deterministic transitions follows by the naturality of ∂.
2. Given a JSL-dfa δ = (s₀, S, δ_a, F) it suffices to show λ_S : (Pirr S)˘ → Pirr(S^op) defines an aut_Dep-morphism of type
RevAirr δ → Airr ( δ „ ) . It types correctly, and since λ ’s components are identity morphisms we are done. JSL f and Dep have enough projectives/injectives by Proposition 2.2.19, as do the automata-theoretic categories.
Proposition 3.2.14 ( dfa JSL and aut
Dep have enough projectives) . Let Z be a finite set and γ ∶ = ( s , S , γ a , F γ ) be a JSL -dfa.1. Given any nfa ( I, Z, R a , F ) we have the JSL -dfa ( I, Open ∆ Z , R ↑ a , F γ ) .2. ε S ∶ ( J ( S ) ∩ ↓ S s , S , ( Pirr γ a ) ↑ − , J ( S ) ∩ F γ ) ↠ γ where ε S ( X ) ∶ = ⋁ S X is a surjective JSL -dfa morphism.3. Given ( I, Open ∆ Z , R ↑ a , F ) f Ð→ ( t , T , δ a , F δ ) q ↞ γ then f = q ○ g where g uniquely extends λz.q ∗ ( f ({ z })) . The self-duality of dfa
JSL preserves the freeness of the join-semilattice, so we immediately deduce:
Corollary 3.2.15. dfa
JSL and aut
Dep have enough injectives.
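As a small sanity check on Lemma 3.2.2 and Definition 3.2.8, one can verify on examples that in sc(N) the state X ∪ Y accepts exactly L(N@X) ∪ L(N@Y), comparing languages truncated at a word-length bound. A Python experiment, not from the paper (the encoding is ours):

```python
from itertools import product

def accepts_from(trans, F, X, word):
    """Run the nfa transitions from state set X and test finality."""
    for a in word:
        X = {z2 for (z1, z2) in trans.get(a, set()) if z1 in X}
    return bool(X & F)

def language(trans, F, X, alphabet, max_len):
    """All accepted words of length <= max_len starting from state set X."""
    return {w for n in range(max_len + 1)
              for w in map("".join, product(sorted(alphabet), repeat=n))
              if accepts_from(trans, F, set(X), w)}
```

For the nfa of L = a + aa with F = {1, 2} and a-transitions {(0, 1), (1, 2)}, the state {0, 1} of sc(N) accepts the union of the languages accepted from {0} and from {1}, as the lemma predicts.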
Recall Proposition 2.2.21. Choosing any join/meet-generators for S we can construct ≰ S ∣ J × M ≅ Pirr S . Likewise,given any JSL -dfa δ over S , choosing such generators yields a dependency automaton aut Dep -isomorphic to
Airr δ . Proposition 3.2.16 ( aut Dep generator-based isomorphisms) . In the notation of Proposition 2.2.21,1. I S ∶ N S → Airr ( s , S , γ a , F S ) defines an aut Dep -isomorphism where N S ∶ = ( N , ≰ S ∣ J S × M S , N ′ ) , N ∶ = ( J S ∩ ↓ S s , J S , N a , J S ∩ F S ) N a ( x , x ) ∶ ⇐⇒ x ≤ S δ a ( x ) N ′ ∶ = ( M S ∩ ↑ S ⋁ S F S , M S , N ′ a , M S ∩ ↑ S s ) N ′ a ( y , y ) ∶ ⇐⇒ ( δ a ) ∗ ( y ) ≤ S y .2. Suppose f ∶ ( s , S , γ a , F S ) → ( t , T , δ a , F T ) is a JSL -dfa morphism. Then I f ∶ N S → N T defines an aut Dep -isomorphism. sc ( N ) ∶ = Det ( dep ( N )) also defines a free construction. Theorem 3.2.17 (Free
JSL-dfa on a dfa). If γ is a dfa, δ = (t₀, T, δ_a, F_δ) is a JSL-dfa and f : γ → δ is a dfa morphism, then
   λS.⋁_T f[S] : sc(γ) → δ
is a well-defined JSL-dfa morphism.
Proof.
Let γ = (z₀, Z, γ_a, F_γ) and denote the candidate JSL-dfa morphism by f̂. Concerning the initial state, f̂({z₀}) = f(z₀) = t₀ because f is a dfa morphism by assumption. Concerning final states, ⋁_T f[S] ∈ F_δ ⇔ ∃s ∈ S. f(s) ∈ F_δ (since there is a largest non-final state) ⇔ S ∩ F_γ ≠ ∅ (since f is a dfa morphism). Finally for each S ⊆ Z,
   f̂ ○ γ↑_a(S) = ⋁_T f[γ_a[S]] = ⋁_T {f ○ γ_a(s) : s ∈ S}
   = ⋁_T {δ_a ○ f(s) : s ∈ S}   (f a dfa morphism)
   = δ_a(⋁_T {f(s) : s ∈ S})   (δ_a preserves T-joins)
   = δ_a ○ f̂(S).

There is also a free construction for ordered dfas, recalling Definition 3.1.3.6.

Theorem 3.2.18 (Free
JSL -dfa on an ordered dfa) . Let γ = ( p , P , γ a , F γ ) be an ordered dfa, δ = ( t , T , δ a , F δ ) a JSL -dfa and f ∶ γ → δ an ordered dfa morphism. Then we have the well-defined JSL -dfa morphism, λS. ⋁ T f [ S ] ∶ Det ( γ ↓ , ≥ P , rev ( γ ↓ )) → δ where γ ↓ ∶ = ( ↓ P p , P, γ a ; ≥ P , F γ ) is an nfa . Proof.
We first verify ( γ ↓ , ≥ P , rev ( γ ↓ )) is a dependency automaton. Concerning transitions, ( γ a ; ≥ P ) ; ≥ P = γ a ; ≥ P bytransitivity and ≥ P ; ( γ a ; ≥ P ) = γ a ; ≥ P by Example 2.1.8.4 (via γ a ∶ P op → P op ). Concerning the remaining conditions: ≥ P [ I dfa ↓ ( L r ) ] = ≥ P [ ↓ P p ] = ↓ P p = F rev ( dfa ↓ ( L r )) , ≥ ˘ P [ I rev ( dfa ↓ ( L r )) ] = ≤ P [{ Y ∈ LW ( L r ) ∶ ε ∈ Y }] = { Y ∈ LW ( L r ) ∶ ε ∈ Y } = F dfa ↓ ( L r ) . Denote the candidate
JSL -dfa morphism by ˆ f . Recall Det ( γ ↓ , ≥ P , rev ( γ ↓ )) is sc ( γ ↓ ) restricted to Open ≥ P . Concerning the initial state, ˆ f ( ↓ P p ) = f ( p ) = t by monotonicity and the fact that f is a dfa morphism. Concerning final states, ⋁ T f [ S ] ∈ F δ ⇐⇒ ∃ s ∈ S. f ( s ) ∈ F δ iff S ∩ F γ ≠ ∅ (since f is a dfa morphism). Finally for each down-closed S ⊆ P , ˆ f ○ ( γ a ; ≥ P ) ↑ ( S ) = ⋁ T f [ γ a ; ≥ P [ S ]] = ⋁ T { f ○ γ a ( s ) ∶ s ∈ S } ( f ○ γ a monotonic) = ⋁ T { δ a ○ f ( s ) ∶ s ∈ S } ( f a dfa morphism) = δ a (⋁ T { f ( s ) ∶ s ∈ S }) ( δ a preserves T -joins) = δ a ○ ˆ f ( S ) . Previously we described the canonical dependency automaton for L ⊆ Σ ∗ in Definition 3.1.8. We now describe the state-minimal JSL -dfa for L . These two machines are actually the same object modulo categorical equivalence. Definition 3.3.1 (Left and right quotients) . Fix any regular language L ⊆ Σ ∗ and recall the left word quotients LW ( L ) from Definition 3.1.3.1. 1. For U, V ⊆ Σ ∗ , U − L ∶ = { w ∈ Σ ∗ ∶ ∃ u ∈ U.uw ∈ L } is a left quotient , LV − ∶ = { w ∈ Σ ∗ ∶ ∃ v ∈ V.wv ∈ L } is a right quotient . 2. Let LQ ( L ) ∶ = { U − L ∶ U ⊆ Σ ∗ } and RQ ( L ) ∶ = { LV − ∶ V ⊆ Σ ∗ } . Then LW ( L ) ⊆ LQ ( L ) and likewise we may write Lv − instead of L { v } − . 3. We have the finite join-semilattice LQ ( L ) ∶ = ( LQ ( L ) , ∪ , ∅ ) . 4. J ( LQ ( L )) ⊆ LW ( L ) because the latter generate LQ ( L ) under unions. Thus J ( LQ ( L )) consists of those left word quotients which are not unions of others, so in particular are non-empty. ∎ Definition 3.3.2 (State-minimal
JSL -dfa) . Let dfa ( L ) ∶ = ( L, LQ ( L ) , λX.a − X, { X ∈ LQ ( L ) ∶ ε ∈ X }) . ∎ Observe that the state-minimal dfa ( L ) is obtained from dfa ( L ) by restricting to left word quotients u − L . Conversely, every left quotient U − L is a finite union of left word quotients and a − ( − ) preserves unions. Lemma 3.3.3. dfa ( L ) is the state-minimal JSL -dfa accepting L . Proof. It accepts L – the reachable part of its underlying dfa is precisely the state-minimal dfa dfa ( L ) . Concerning the state-minimality of dfa ( L ) , take any JSL -dfa δ = ( s , S , δ, F ) accepting L and consider the languages accepted by varying the initial state, noting ∣ langs ( δ )∣ ≤ ∣ S ∣ . By Lemma 3.2.2 we know LQ ( L ) ⊆ langs ( δ ) , hence ∣ LQ ( L )∣ ≤ ∣ S ∣ . Example 3.3.4 ( dfa ↓ ( L ) ) . Applying Proposition 3.2.16 to dfa ( L ) with join-generating subset J LQ ( L ) ∶ = LW ( L ) yields a dependency automaton, whose lower nfa takes the following form: dfa ↓ ( L ) ∶ = ({ X ∈ LW ( L ) ∶ X ⊆ L } , LW ( L ) , N a , { X ∈ LW ( L ) ∶ ε ∈ X }) N a ( X , X ) ∶ ⇐⇒ X ⊆ a − X . It accepts L by the witnessing aut Dep -isomorphism (see Note 3.2.10). Importantly, we'll use it to represent the canonical distributive
JSL -dfa further below. ∎ Recall the self-duality ( − ) „ ∶ dfa op JSL → dfa JSL of Theorem 3.2.3, itself arising from the self-duality of
JSL f in Note2.2.8.3. We now describe an important representation of dfa ( L ) which explains its meet structure. Lemma 3.3.5. [ LX − ] − L = { w ∈ Σ ∗ ∶ Lw − ⊆ LX − } for any subsets X, L ⊆ Σ ∗ .Proof. [ LX − ] − L = { w ∈ Σ ∗ ∶ ∀ v ∈ Σ ∗ . ( v ∈ LX − ⇒ vw ∉ L )} = { w ∈ Σ ∗ ∶ ∀ v ∈ Σ ∗ . ( vw ∈ L ⇒ v ∈ LX − )} = { w ∈ Σ ∗ ∶ ∀ v ∈ Σ ∗ . ( v ∈ Lw − ⇒ v ∈ LX − )} = { w ∈ Σ ∗ ∶ Lw − ⊆ LX − } . Theorem 3.3.6 (Fundamental dualising isomorphism dr L ) . For each regular L ⊆ Σ ∗ we have the JSL -dfa isomorphism: ( dfa ( L r )) „ dr L ÐÐ→ dfa ( L ) dr L ( X ) ∶ = [ X r ] − L dr − L ∶ = dr L r , noting that reversal/complement of languages commute. There is also an alternative description: dr L ( U − L r ) ∶ = ⋃{ X ∈ LW ( L ) ∶ X ∩ U r = ∅ } . Proof.
1. We first establish the underlying join-semilattice isomorphism dr L ∶ ( LQ ( L r )) op → LQ ( L ) . Now, dr L is certainlya well-defined function. It is monotone because X ⊆ Y ∈ LW ( L r ) implies Y r ⊆ X r and hence [ Y r ] − L ⊆ [ X r ] − L .Likewise dr − L is a well-defined monotone function. Next, given any X ∈ LW ( L r ) , dr − L ○ dr L ( X ) = dr − L ([ X r ] − L ) = [[ X r ] − L r ] − L r = [ L r X − ] − L r = { w ∈ Σ ∗ ∶ L r w − ⊈ L r X − } (by Lemma 3.3.5). = { w ∈ Σ ∗ ∶ ∃ u ∈ Σ ∗ . [ uw ∈ L r and ∀ v ∈ Σ ∗ . [ uv ∈ L r ⇒ v ∉ X ]]} = { w ∈ Σ ∗ ∶ ∃ u ∈ Σ ∗ . [ w ∈ u − L r and ∀ v ∈ Σ ∗ . [ v ∈ u − L r ⇒ v ∈ X ]]} = { w ∈ Σ ∗ ∶ ∃ u ∈ Σ ∗ . [ w ∈ u − L r and u − L r ⊆ X ]} = X (since X ∈ LQ ( L ) ).24t immediately follows that dr L ○ dr − L ( Y ) = Y by substituting L ↦ L r . Thus dr L is a bijective order-preservingand order-reflecting function, hence a bounded-lattice isomorphism and in particular a JSL f -morphism. Finallywe establish the alternative action: dr L ( U − L r ) = [( U − L r ) r ] − L = [ L ( U r ) − ] − L = { w ∈ Σ ∗ ∶ Lw − ⊈ L ( U r ) − } (by Lemma 3.3.5). = { w ∈ Σ ∗ ∶ ∃ x ∈ Σ ∗ . [ xw ∈ L and ∀ y ∈ Σ ∗ . [ xy ∈ L ⇒ y ∉ U r ]]} = { w ∈ Σ ∗ ∶ ∃ x ∈ Σ ∗ . [ w ∈ x − L and ∀ y ∈ Σ ∗ . [ y ∈ x − L ⇒ y ∈ U r ]]} = { w ∈ Σ ∗ ∶ ∃ x ∈ Σ ∗ . [ w ∈ x − L and x − L ⊆ U r ]} = { w ∈ Σ ∗ ∶ ∃ x ∈ Σ ∗ . [ w ∈ x − L and x − L ∩ U r = ∅ ]} = ⋃{ X ∈ LW ( L ) ∶ X ∩ U r = ∅ } .2. It remains to establish that the join-semilattice isomorphism dr L defines a dfa morphism. Concerning preserva-tion of the initial state: dr L ( i ( dfa ( L r )) ∗ ) = dr L (⋁ LQ ( L r ) { X ∈ LW ( L r ) ∶ ε ∈ X }) = dr L (⋃{ X ∈ LW ( L r ) ∶ ε ∉ X }) = dr L ( L − L r ) = ⋃{ X ∈ LW ( L ) ∶ X ∩ L = ∅ } = ⋃{ X ∈ LW ( L ) ∶ X ⊆ L } = L = i dfa ( L ) .Concerning transitions, let γ a ∶ LQ ( L r ) → LQ ( L r ) and δ a ∶ LQ ( L ) → LQ ( L ) be the deterministic a -transitions forthe respective machines (both have action λX.a − X ). 
It suffices to show ( γ a ) ∗ = dr − L ○ δ a ○ dr L : dr − L ○ δ a ○ dr L ( X ) = dr − L ( a − ([ X r ] − L )) = dr − L ([ X r a ] − L ) = ⋃{ Y ∈ LW ( L r ) ∶ Y ∩ aX = ∅ } = ⋃{ Y ∈ LW ( L r ) ∶ a − Y ⊆ X } = ( γ a ) ∗ ( X ) . The final states are preserved and reflected: X ∈ dr − L ( F dfa ( L ) ) ⇐⇒ X ∈ dr − L ({ Y ∈ LQ ( L ) ∶ ε ∈ Y }) ⇐⇒ ε ∈ dr L ( X ) ⇐⇒ ε ∈ [ X r ] − L ⇐⇒ X r ∩ L ≠ ∅ ⇐⇒ X ∩ L r ≠ ∅ ⇐⇒ L r ⊈ X ⇐⇒ X ∈ F ( dfa ( L r )) ∗ . Note 3.3.7. 1. dr L provides a bijection between L ’s left word quotients U − L and right word quotients LV − = (( V r ) − L r ) r . 2. If L = L r we have an order-reversing involutive isomorphism dr L ∶ ( LQ ( L )) op → LQ ( L ) . Thus LQ ( L ) is a De Morgan algebra whose bounded lattice structure needn't be distributive. This holds for any unary language. ∎ Corollary 3.3.8 (Meet-generating LQ ( L ) ) . 1. LQ ( L ) is join-generated by LW ( L ) and meet-generated by dr L [ LW ( L r )] = {[ Lv − ] − L ∶ v ∈ Σ ∗ } . 2. Y ⊆ dr L ( v − L r ) ⇐⇒ v r ∉ Y for each Y ∈ LQ ( L ) . 3. Each Y ∈ LQ ( L ) arises as an intersection: Y = ⋀ LQ ( L ) { dr L ( v − L r ) ∶ v r ∉ Y } = ⋂{ dr L ( v − L r ) ∶ v r ∉ Y } . Proof.
1. That LQ ( L ) is generated by LW ( L ) under finite unions follows via Definition 3.3.1. Concerning the new claim, the order isomorphism dr L from Theorem 3.3.6 preserves/reflects meet-irreducibles. Then recalling Definition 3.3.1 we have M ( LQ ( L r ) op ) = J ( LQ ( L r )) ⊆ LW ( L r ) , so applying dr L we obtain a meet-generating set. 2. Given Y ⊆ dr L ( v − L r ) then since dr L ( v − L r ) = ⋃{ X ∈ LW ( L ) ∶ v r ∉ X } doesn't contain v r we immediately deduce v r ∉ Y . Conversely if v r ∉ Y then for every Y ⊇ X ∈ LW ( L ) we have v r ∉ X hence X ⊆ dr L ( v − L r ) , so that Y ⊆ dr L ( v − L r ) too. 3. By (1) each Y ∈ LQ ( L ) is the meet of those K ∈ dr L [ LW ( L r )] ⊆ LQ ( L ) above it. By (2) Y ⊆ dr L ( v − L r ) iff v r ∉ Y , which implies the first equality. Finally this meet is actually an intersection: given w ∈ ⋂{ dr L ( v − L r ) ∶ v r ∉ Y } then if w ∉ Y we obtain the contradiction w ∈ dr L (( w r ) − L r ) . With reference to Corollary 3.3.8, the next Lemma explains the strong connection between the state-minimal JSL -dfa and the canonical dependency automaton ( dfa ( L ) , DR L , dfa ( L r )) where DR L ( u − L, v − L r ) ∶ ⇐⇒ uv r ∈ L . Lemma 3.3.9 (Dependency Lemma) . For any regular L ⊆ Σ ∗ and words u, v ∈ Σ ∗ , u − L ⊈ [ Lv − ] − L ⇐⇒ uv ∈ L or equivalently u − L ⊈ dr L ( v − L r ) ⇐⇒ uv r ∈ L ⇐⇒ DR L ( u, v ) . Proof.
We calculate: u − L ⊈ [ Lv − ] − L ⇐⇒ ∃ y ∈ Σ ∗ . [ uy ∈ L and y ∉ [ Lv − ] − L ] ⇐⇒ ∃ y ∈ Σ ∗ . [ uy ∈ L and ∀ x ∈ Σ ∗ . [ xy ∈ L ⇒ x ∉ Lv − ]] ⇐⇒ ∃ y ∈ Σ ∗ . [ uy ∈ L and ∀ x ∈ Σ ∗ . [ x ∈ Ly − ⇒ x ∈ Lv − ]] ⇐⇒ ∃ y ∈ Σ ∗ . [ u ∈ Ly − and Ly − ⊆ Lv − ] ⇐⇒ u ∈ Lv − ⇐⇒ uv ∈ L .If L is regular we may rewrite this in terms of dr L and DR L as above, see Theorem 3.3.6 and Definition 3.1.8.We are now ready for the main result of this subsection. Theorem 3.3.10 (Dependency Theorem) . The state-minimal
JSL -dfa is isomorphic to the determinisation of thecanonical dependency automaton. α ∶ dfa ( L ) → Det ( dfa ( L ) , DR L , dfa ( L r )) α ( X ) ∶ = { v − L r ∶ v ∈ X r } α − ( Y ) ∶ = [(⋂ Y ) r ] − L Proof.
1. We first establish the underlying isomorphism α ∶ S → Open DR L where S ∶ = LQ ( L ) .By Theorem 2.2.14 we have the isomorphism rep S ∶ S → OpenPirr S where rep S ( X ) ∶ = { Y ∈ M ( S ) ∶ X ⊈ Y } .Proposition 2.2.21 permits one to extend the domain/codomain of Pirr S to join/meet generators, which byCorollary 3.3.8 can be J S ∶ = LW ( L ) and M S ∶ = {[ Lv − ] − L ∶ v ∈ Σ ∗ } . Then we obtain the Dep -isomorphism: I − S ∶ = ≰ S ∣ J ( S ) × M S ∶ Pirr S → ≰ S ∣ J S × M S where ( I − S ) + ( y, m ) ∶ ⇐⇒ y ≤ S m and thus Open I − S ∶ OpenPirr S → Open ≰ S ∣ J S × M S is an isomorphism with action λX. LW ( L r ) ∩ ↓ S X . Next, we’llestablish the bipartite graph isomorphism: M S λX.dr − L ( X ) / / LW ( L r ) J S ≰ S ∣ J S × M S O O id LW ( L ) / / LW ( L ) DR L O O M S = dr L [ LW ( L r )] ; this Rel -diagram commutes by theDependency Lemma 3.3.9. It defines a
Dep -isomorphism DR L ∶ ≰ S ∣ J S × M S → DR L and hence a JSL f -isomorphism Open DR L ∶ = λX.dr − L [ X ] recalling that any upper witness can be used by Note 2.2.11. Then we have thecomposite isomorphism: S rep S ÐÐ→
OpenPirr S I − S ÐÐ→
Open ≰ S ∣ J S × M S Open DR L ÐÐÐÐÐ→
Open DR L with action and inverse action: X ∈ LQ ( L ) ↦ { Y ∈ M ( S ) ∶ X ⊈ Y } (apply rep S ) ↦ { Z ∈ LW ( L r ) ∶ ∃ Y ∈ M ( S ) . ( X ⊈ Y and Z ⊆ Y )} (apply Open I − S ) = { Z ∈ LW ( L r ) ∶ X ⊈ Z } ↦ dr − L [{ Z ∈ LW ( L r ) ∶ X ⊈ Z }] (apply Open DR L ) = { dr − L ( Z ) ∶ X ⊈ Z ∈ LW ( L r )} = { Y ∈ LW ( L r ) ∶ X ⊈ dr L ( Y )} (substitute Z ∶ = dr L ( Y ) ) = { v − L r ∶ v ∈ X r } (alternative action of dr L ). S ∈ O ( DR L ) ↦ dr L [ S ] (apply ( Open DR L ) − ) ↦ M ( S ) ∩ ↓ S dr L [ S ] (apply Open I S ) = M ( S ) ∩ dr L [ S ] (by 1) = dr L [ J ( LQ ( L r )) ∩ S ] ( M ( S ) = dr L [ J ( LQ ( L r ))] ) ↦ ⋀ S M ( S ) ∖ dr L [ J ( LQ ( L r )) ∩ S ] (apply rep − S ) = ⋀ S { dr L ( X ) ∶ X ∈ J ( LQ ( L r )) ∖ S } = dr L [⋃{ X ∈ J ( LQ ( L r )) ∶ X ∉ S }] ( dr L an isomorphism) = [(⋂{ X ∈ J ( LQ ( L r )) ∶ X ∈ S }) r ] − L = [(⋂{ X ∈ LW ( L r ) ∶ X ∈ S }) r ] − L (by 2) = [(⋂ S ) r ] − L .Concerning (1), each DR L -open set S is up-closed in ( LW ( L r ) , ⊆ ) so dr L [ S ] is down-closed in ( M S , ⊆ ) recalling M ( S ) ⊆ M S . Concerning (2), if w − L r ∈ S some join-irreducible v − L r ⊆ w − L r must also lie in S .2. It remains to establish that α is a dfa morphism: α ∶ dfa ( L ) → ( F dfa ( L r ) , Open DR L , δ a , { L r }) δ a ∶ = λS. { Y ∈ LW ( L r ) ∶ ∃ X ∈ S.a − Y = X } The initial state is preserved because α ( L ) = { v − L r ∶ v ∈ L r } = { X ∈ LW ( L r ) ∶ ε ∈ X } = F dfa ( L r ) . Next we showthe transitions are preserved, denoting the domain dfa’s transitions by γ a ∶ = λX.a − X . Given any X ∈ LW ( L ) , α ○ γ a ( X ) = α ( a − X ) = { v − L r ∶ v ∈ ( a − X ) r } = { v − L r ∶ v ∈ X r a − } . (a) δ a ○ α ( X ) = δ a ({ w − L r ∶ w ∈ X r }) = { v − L r ∶ ∃ w ∈ X r .a − ( v − L r ) = w − L r } = { v − L r ∶ ∃ w ∈ X r . ( va ) − L r = w − L r } . 
(b)Let us establish ( a ) = ( b ) via mutual inclusions.– ( a ) ⊆ ( b ) : Given v ∈ X r a − we deduce w ∶ = va ∈ X r , hence v − L r resides in (b).– ( b ) ⊆ ( a ) : We may assume X ∶ = ( u r ) − L ; we know there exists w ∈ X r = L r u − such that ( va ) − L r = w − L r .Then wu ∈ L r hence u ∈ w − L r so that vau ∈ L r . Thus va ∈ L r u − i.e. v ∈ X r a − so v − L r resides in (a).Lastly the final states are preserved and reflected: X ∈ α − ({ L r }) ⇐⇒ α ( X ) = L r ⇐⇒ ε ∈ X r ⇐⇒ ε ∈ X. Note 3.3.11 (Canonical dependency automaton as canonical residual automata) .
By the Dependency Theorem 3.3.10, the canonical dependency automaton corresponds to the state-minimal dfa interpreted in join-semilattices. On the other hand, the categorical equivalence dfa
JSL ≅ aut Dep of Theorem 3.2.6 already provides the component isomorphism: rep dfa ( L ) ∶ dfa ( L ) → Det ( Airr ( dfa ( L ))) = Det ( N L , Pirr LQ ( L ) , N ′ ) . The lower nfa N L is precisely the canonical residual automaton of [DLT01]. That is, let ILQ ( L ) ∶ = J ( LQ ( L )) ⊆ LW ( L ) be the irreducible left quotients i.e. those left word quotients not arising as the union of others (so, non-empty). Then: N L = ( ILQ ( L ) ∩ ↓ LQ ( L ) L, ILQ ( L ) , N a , { X ∈ ILQ ( L ) ∶ ε ∈ X }) N a ( X , X ) ∶ ⇐⇒ X ⊆ a − X . Relabelling the upper bipartition we obtain ( N L , DR L ∣ ILQ ( L ) × ILQ ( L r ) , N L r ) . The upper nfa is the canonical residual nfa for L r . The bipartitioned graph is obtained by restricting the dependency relation to irreducibles. It is necessarily aut Dep -isomorphic to the canonical dependency automaton, and actually constructible from it in polytime. It is never larger than our chosen description and potentially far smaller. ∎ Recalling Definition 3.1.3, the minimisation of a classical dfa δ ∶ = ( z , Z, δ a , F ) can be understood as follows: [diagram: ι ∶ reach ( δ ) ↪ δ , acc δ ∶ δ ↠ simple ( δ ) , ι ∶ reach ( simple ( δ )) ↪ simple ( δ ) and acc reach ( δ ) ∶ reach ( δ ) ↠ simple ( reach ( δ )) = dfa ( L ) ] Traditionally one first takes reach ( δ ) by restricting to states reachable from z via the underlying directed graph ⋃ a ∈ Σ δ a ⊆ Z × Z . From the perspective of dfa morphisms we construct the minimal sub-dfa of δ (i.e. the inclusion ι above). Secondly one can apply Hopcroft's algorithm to compute a partition of the states i.e. the equivalence classes over which the state-minimal dfa can then be defined. From the perspective of dfa morphisms we construct the largest quotient-dfa of δ (i.e. the surjection acc reach ( δ ) above). The latter sends a state to the language it accepts, yielding precisely the state-minimal machine dfa ( L ) .
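The two-stage recipe just described is small enough to run. Below is a minimal Python sketch (all names are ours, purely illustrative): first restrict to the reachable part, then compute the largest quotient by naive partition refinement — Hopcroft's algorithm is the efficient version of this second stage.

```python
# Toy dfa minimisation in the two stages described above: reach(-) first,
# then the largest quotient-dfa via naive partition refinement.
# Representation (illustrative): delta[q][a] is the a-successor of q.

def reachable(start, delta, alphabet):
    """States reachable from `start` under the transition maps."""
    seen, todo = {start}, [start]
    while todo:
        q = todo.pop()
        for a in alphabet:
            if delta[q][a] not in seen:
                seen.add(delta[q][a])
                todo.append(delta[q][a])
    return seen

def minimise(start, delta, final, alphabet):
    """Blocks of language-equivalent reachable states: the states of dfa(L)."""
    states = reachable(start, delta, alphabet)
    blocks = [b for b in (states & final, states - final) if b]
    while True:
        index = {q: i for i, b in enumerate(blocks) for q in b}
        refined = []
        for b in blocks:
            groups = {}
            for q in b:  # split b by where its members jump, blockwise
                sig = tuple(index[delta[q][a]] for a in alphabet)
                groups.setdefault(sig, set()).add(q)
            refined.extend(groups.values())
        if len(refined) == len(blocks):  # stable: no block was split
            return blocks
        blocks = refined
```

For instance, on the one-letter dfa with transitions 0 → 1 → 2 → 1 and final states {0, 2}, an unreachable state 3 is discarded and the equivalent states 0, 2 are merged, leaving the two states of the minimal dfa for even-length words.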
Notice the other way to minimise δ : quotient first; restrict to reachable second. We expressed minimisation in terms of dfa morphisms because one has exactly the same situation in dfa JSL , whose morphisms must also preserve the join-semilattice structure. For any
JSL -dfa δ ∶ = ( s , S , δ a , F ) , [diagram: the same minimisation square as for classical dfas — ι ∶ reach ( δ ) ↪ δ , acc δ ∶ δ ↠ simple ( δ ) , ι ∶ reach ( simple ( δ )) ↪ simple ( δ ) and acc reach ( δ ) ∶ reach ( δ ) ↠ simple ( reach ( δ )) = dfa ( L ) ] We've already seen the state-minimal dfa ( L ) and its close connection to the canonical dependency automaton. We now introduce the corresponding concepts of reachability and simplicity, recalling the notation of Definition 3.1.3. Definition 3.4.1 ( JSL -reachability and simplicity) . Let δ ∶ = ( s , S , δ a , F ) be a JSL -dfa. 1. δ is JSL -reachable if it has no proper sub
JSL -dfas: every injective
JSL -dfa morphism f ∶ γ ↣ δ is an isomorphism.Given any R ⊆ S with s ∈ R and δ a ( R ) ⊆ R for a ∈ Σ, then R ∩ δ ∶ = ( s , R , δ a ∣ R × R , F ∩ R ) is a JSL -dfa accepting L ( δ ) . In particular, reach ( δ ) ∶ = reach ( δ ) ∩ δ where reach ( δ ) ∶ = ⟨ reach ( δ )⟩ S is the reachable sub JSL -dfa of δ . By largest quotient we mean the respective equivalence relation is the largest w.r.t. inclusion. The respective quotient-dfa actually hasthe least possible number of states amongst other such quotients. By ⟨ reach ( δ )⟩ S we mean the sub join-semilattice of S generated by reach ( δ ) . δ is simple if it has no proper quotient JSL -dfas: every surjective
JSL -dfa morphism f ∶ δ ↠ γ is an isomorphism.We have the join-semilattice of accepted languages langs ( δ ) ∶ = ( langs ( δ ) , ∪ , ∅ ) by Definition 3.1.3.5 and Lemma3.2.2. Then the simple JSL -dfa: simple ( δ ) ∶ = ( L, langs ( δ ) , λX.a − X, { X ∈ langs ( δ ) ∶ ε ∈ X }) is the largest quotient JSL -dfa of δ via acc δ . Finally, a JSL -dfa δ is simplified if simple ( δ ) = δ . ∎ Lemma 3.4.2 (Well-definedness of reach ( − ) and simple ( − ) ) . reach ( δ ) is the JSL -reachable sub-dfa of δ .2. simple ( δ ) is the simple quotient dfa of δ .3. L ( simple ( δ ) @ X ) = X for each X ∈ simple ( δ ) .Proof. R ∩ δ is a well-defined JSL -dfa: (a) the conditions ensure each δ a ∶ S → S restricts to an R -endomorphism, (b)just as F = h − ({ }) for some h ∶ S → , F ∩ R = ( h ○ ι ) − ({ }) where ι ∶ R ↪ S .Concerning well-definedness of reach ( δ ) , R ∶ = reach ( δ ) is the reachable part of the underlying classical dfa closedunder all S -joins. Certainly R ⊆ S and s ∈ R . Next, δ a [ R ] ⊆ R because δ a preserves all joins, so applying δ a tojoins of classically reachable states is the same as taking the join of a -successors of classically reachable states.It accepts L because the reachable part of its underlying classical dfa is precisely reach ( δ ) .Finally, reach ( δ ) is JSL -reachable because any sub
JSL -dfa must at least contain the underlying reachable partand be closed under the algebraic structure.2. Concerning well-definedness of simple ( δ ) , langs ( δ ) is closed under arbitrary unions by Lemma 3.2.2. Certainly L ∈ langs ( δ ) and the transitions are well-defined by Definition 3.1.3.2. The final states are well-defined becausethe union of all languages sans ε does not contain it either.It accepts L because the reachable part of its underlying classical dfa is precisely dfa ( L ) . Finally, acc δ ∶ δ → langs ( δ ) is additionally a join-semilattice morphism by Lemma 3.2.2. It is simple because each state X ∈ lang ( δ ) accepts X i.e. distinct states accept distinct languages, so there can be no quotient dfa and thus also no quotient JSL -dfa.3. Follows via Definition 3.1.8.2.However, the self-duality of dfa
JSL provides an additional relationship.
Theorem 3.4.3 ( JSL -reachability is dual to simplicity) . Let δ ∶ = ( s , S , δ a , F ) be a JSL -dfa. 1. δ is JSL -reachable iff every join-irreducible j ∈ J ( S ) is classically reachable. 2. δ is simple iff distinct states accept distinct languages. 3. δ is JSL -reachable iff its dual δ „ is simple. Proof. δ is JSL -reachable iff reach ( δ ) = δ iff every state is a join of classically reachable states. Since J ( S ) is the minimal join-generating set we infer (1). Concerning (2), δ is simple iff acc δ ∶ δ → langs ( δ ) is bijective iff distinct states accept distinct languages. Finally, the concepts of JSL -reachable and simple are categorically dual, recalling
JSL f -monos are precisely the injective morphisms and JSL f -epis are precisely the surjective ones (see Note 2.2.8.4).We also mention a basic characterisation of simplified JSL -dfas.
Lemma 3.4.4 (Simplified
JSL -dfas) . For any
JSL -dfa γ t.f.a.e. 1. γ is simplified i.e. simple ( γ ) = γ . 2. There exists a finite set of regular languages S ∋ L ( γ ) , closed under unions and left-letter quotients s.t. γ = ( L ( γ ) , ( S, ∪ , ∅ ) , λX.a − X, { K ∈ S ∶ ε ∈ K }) . Proof. Given (1) then (2) follows by choosing S ∶ = langs ( γ ) , recalling L ( γ a ( z )) = a − L ( γ a ) by Lemma 3.1.3.5. Given (2), the specified quadruple is a well-defined JSL -dfa because a − ( − ) preserves unions and there is a largest non-final state ⋃{ K ∈ S ∶ ε ∉ K } . Corollary 3.4.5 ( simple ( − ) is the De Morgan dual of reach ( − ) ) . acc ( reach ( δ „ )) „ ∶ ( reach ( δ „ )) „ → simple ( δ ) is an isomorphism for any JSL -dfa δ . Proof. Given δ ∶ = ( s , S , δ a , F ) there is an injective JSL -dfa morphism ι ∶ reach ( δ „ ) ↪ δ „ by Lemma 3.4.2. By Theorem 3.2.3 we have: δ λ δ Ð→ ( δ „ ) „ ι ∗ ↠ ( reach ( δ „ )) „ where the identity function λ δ ∶ = id S is a component of the natural isomorphism witnessing self-duality, and ι ∗ is surjective by Note 2.2.8.4. By Theorem 3.4.3.3 the codomain is simple, so the surjective morphism acc ( reach ( δ „ )) „ is an isomorphism. langs (( reach ( δ „ )) „ ) = langs ( δ ) because ι ∗ is surjective, hence acc ( reach ( δ „ )) „ has codomain langs ( δ ) . Example 3.4.6 (Dualising the reachable subset construction) . In Example 3.2.12 we described the dual of the full subset construction. Again letting δ = Det ( N , ∆ Z , rev ( N )) , we now provide a description of γ „ where γ ∶ = reach ( δ ) . By Corollary 3.4.5 we have the isomorphism: acc γ „ ∶ γ „ → simple ( δ „ ) sending unions of reachable subsets to their accepted language via the JSL -dfa γ „ . We now describe this isomorphism in more detail. First we write γ = ( I, S , λX. N a [ X ] , { X ∈ S ∶ X ∩ F ≠ ∅ }) , so that: γ „ = ( reach ( N ) ∩ F , S op , β a , { X ∈ S ∶ I ⊈ X }) where β a ∶ = ( γ a ) ∗ . Then β w = ( γ w r ) ∗ = λY. ⋃{ X ∈ rs ( N ) ∶ N w r [ X ] ⊆ Y } since the reachable subsets rs ( N ) join-generate γ .
Next, w ∈ acc γ „ ( Y ) ⇐⇒ β w ( Y ) ∈ F γ „ (by def.) ⇐⇒ I ⊈ β w ( Y ) ⇐⇒ ¬ ( I ⊆ ⋃{ X ∈ rs ( N ) ∶ N w r [ X ] ⊆ Y }) (see above) ⇐⇒ ¬ ( N w r [ I ] ⊆ Y ) (see below) ⇐⇒ N w r [ I ] ⊈ Y . Concerning the marked equivalence, ( ⇐ ) follows immediately because I ∈ rs ( N ) . Conversely if for each z ∈ I we have z ∈ X z ∈ rs ( N ) with N w r [ X z ] ⊆ Y then N w r [ I ] ⊆ N w r [⋃ z ∈ I X z ] ⊆ Y too. Thus we obtain a more explicit description of the isomorphism i.e. acc γ „ ( Y ) = { w ∈ Σ ∗ ∶ N w r [ I ] ⊈ Y } . ∎ Corollary 3.4.7. reach ( − ) preserves simplicity and simple ( − ) preserves JSL -reachability. Proof. If δ is simple then it is isomorphic to γ ∶ = simple ( δ ) so distinct states accept distinct languages by Theorem 3.4.3.2. Ignoring the join-structure, reach ( γ ) is a sub-dfa of γ , so distinct states continue to accept distinct languages and reapplying Theorem 3.4.3.2 we deduce simplicity. The second statement follows by duality i.e. Theorem 3.4.3.3. Corollary 3.4.8 (Characterisation of dfa
JSL -minimality) . A JSL -dfa δ is JSL -reachable and simple iff acc δ ∶ δ → dfa ( L ( δ )) is a well-defined JSL -dfa isomorphism.Proof.
Let L ∶ = L ( δ ) . Suppose acc δ ∶ δ → dfa ( L ) has correct typing and is an isomorphism. The notions of ‘ JSL -reachable’ and ‘simple’ are invariant under isomorphism, so we show γ ∶ = dfa ( L ) is JSL -reachable and simple. Firstly, reach ( γ ) = γ because the left quotients LQ ( L ) arise from LW ( L ) via finite unions. Finally, γ is simple because γ „ ≅ dfa ( L r ) by Theorem 3.3.6 which is JSL -reachable by the preceding argument, so γ is simple by Theorem 3.4.3.3. Conversely let δ be JSL -reachable and simple. Since δ is simple, the surjection acc δ ∶ δ → simple ( δ ) is an isomorphism. Since δ is JSL -reachable, by Theorem 3.4.3.1 and Lemma 3.2.2 we have langs ( δ ) = LQ ( L ) , hence simple ( δ ) = dfa ( L ( δ )) . Corollary 3.4.9 (Meet-generators for JSL -dfas) . Let γ = ( s , S , γ a , F ) be a JSL -dfa. 1. If γ is simplified it is meet-generated by {⋃{ j ∈ J ( langs ( γ )) ∶ w ∉ j } ∶ w ∈ Σ ∗ } . 2. If γ is JSL -reachable it is meet-generated by {⋁ S { γ w ( s ) ∶ w ∉ j } ∶ j ∈ J ( langs ( γ „ ))} .
1. By Lemma 3.4.4 there exists a set S of union and left-letter-quotient closed languages s.t. γ = ( L, S ³¹¹¹¹¹¹¹¹¹¹¹¹¹¹·¹¹¹¹¹¹¹¹¹¹¹¹¹¹µ( S, ∪ , ∅ ) , λX.a − X, { K ∈ S ∶ ε ∈ K }) where L ∶ = L ( γ ) .By Theorem 3.4.3 we know γ „ is JSL -reachable, hence join-generated by elements ( γ w ) ∗ ( K ) where K ∶ = ⋃{ K ∈ S ∶ ε ∉ K } . Finally observe that: ( γ w ) ∗ ( K ) = ⋃{ j ∈ J ( S ) ∶ γ w ( j ) ⊆ K } = ⋃{ j ∈ J ( S ) ∶ ε ∉ γ w ( j )} = ⋃{ j ∈ J ( S ) ∶ w ∉ j } so S is meet-generated by these elements.2. By Theorem 3.4.3 we may assume (modulo isomorphism) that γ = δ „ where δ = ( t , T , λX.a − X, { K ∈ T ∶ ε ∈ K }) is simplified. Consequently s = ⋃{ K ∈ T ∶ ε ∉ K } and we calculate: ⋁ S { γ w ( s ) ∶ w ∉ j } = ⋀ T {( δ w r ) ∗ ( s ) ∶ w ∉ j } = ⋀ T {(⋃{ j ′ ∈ J ( T ) ∶ w r ∉ j ′ } ∶ w ∉ j } (see proof of (1)) = ⋀ T {(⋃{ j ′ ∈ J ( T ) ∶ w ∉ j ′ } ∶ w ∉ j } = j (see below).Concerning the marked equality: ⊆ follows because if w ∉ j then w ∉ { j ′ ∈ J ( T ) ∶ w ∉ j ′ } ; ⊇ follows becausewhenever w ∉ j we know j ⊆ { j ′ ∈ J ( T ) ∶ w ∉ j ′ } . Finally, J ( T ) join-generates T and thus meet-generates S .The self-duality of dfa JSL (Theorem 3.2.3) corresponds to the self-duality of aut
Dep (Theorem 3.1.14). But what does
JSL -reachability correspond to at the level of dependency automata? Our next result shows it is a combination of the classical reachable subset construction and the classical reachable nfa construction.
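Before the theorem, here is a minimal Python sketch of the two classical constructions involved (representation and names are ours, purely illustrative): rsc ( N ) determinises an nfa keeping only the subsets reachable from its initial states, rev ( N ) reverses every transition and swaps initial/final states, and composing them as rsc ( rev ( rsc ( rev ( N )))) gives the Brzozowski-style construction of Theorem 3.4.13.

```python
# Sketch of the classical reachable subset construction rsc(N) and nfa
# reversal rev(N). Representation (ours, illustrative): an nfa is a triple
# (I, delta, F) with delta mapping (state, letter) to a set of successors.

def rsc(I, delta, F, alphabet):
    """Determinise, keeping only subsets reachable from I."""
    start = frozenset(I)
    states, todo, trans = {start}, [start], {}
    while todo:
        X = todo.pop()
        for a in alphabet:
            Y = frozenset(q2 for q in X for q2 in delta.get((q, a), ()))
            trans[(X, a)] = Y
            if Y not in states:
                states.add(Y)
                todo.append(Y)
    final = {X for X in states if X & set(F)}
    return start, states, trans, final

def rev(I, delta, F):
    """Reverse every transition and swap initial/final states."""
    delta_r = {}
    for (q, a), succs in delta.items():
        for q2 in succs:
            delta_r.setdefault((q2, a), set()).add(q)
    return set(F), delta_r, set(I)

def brzozowski(I, delta, F, alphabet):
    """rsc(rev(rsc(rev(N)))): a state-minimal dfa for L(N)."""
    s1, _, t1, f1 = rsc(*rev(I, delta, F), alphabet)
    # view the intermediate dfa as an nfa again and repeat
    return rsc(*rev({s1}, {k: {v} for k, v in t1.items()}, f1), alphabet)
```

The intermediate rsc ( rev ( N )) is a dfa; reading its transition table as an nfa with singleton successor sets lets the same rsc perform both determinisation passes.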
Theorem 3.4.10 ( aut Dep -reachability) . We have the aut
Dep -isomorphism: ˘ ∈ ∶ Airr ( reach ( dep ( N ))) → ( rsc ( N ) , ˘ ∈ , rev ( reach ( N ))) for each nfa N = ( I, Z, N a , F ) . Proof. By Note 3.2.7 the determinisation δ ∶ = Det ( dep ( N )) is N ’s full subset construction endowed with its join-semilattice structure Open ∆ Z = ( P Z, ∪ , ∅ ) . Consider: reach ( δ ) = ( I, S , γ a , F γ ) S ∶ = reach ( δ ) = ⟨ rs ( N )⟩ Open ∆ Z ι ∶ S ↪ Open ∆ Z . Then S is join-generated by J S ∶ = rs ( N ) but what about a meet-generating set? The surjective adjoint ι ∗ ∶ ( P Z, ∩ , Z ) ↠ S op provides one: M ( S ) = J ( S op ) ⊆ ι ∗ [ J ( P Z, ∩ , Z )] = { M z ∶ z ∈ Z } where M z ∶ = ι ∗ ( z ) . Since rs ( N ) join-generates S we know M z = ⋃{ X ∈ rs ( N ) ∶ z ∉ X } is the union of reachable subsets without z . It follows that M S ∶ = { M z ∶ z ∈ reach ( N )} ⊇ M ( S ) because if z is unreachable then M z = reach ( N ) = ⊺ S ∉ M ( S ) cannot contribute. Now, Airr δ is isomorphic to ( M , ⊈ , M ′ ) by Proposition 3.2.16 where M is the first tuple below: ({ X ∈ rs ( N ) ∶ X ⊆ I } , rs ( N ) , M a , { X ∈ rs ( N ) ∶ X ∩ F ≠ ∅ }) M a ( X , X ) ∶ ⇐⇒ X ⊆ N a [ X ] ({ M z ∶ z ∈ Z, F ∩ S ⊆ M z } , M S , M ′ a , { M z ∶ z ∈ Z, I ⊈ M z
}) — the second tuple being the nfa M ′ — with M ′ a ( M z , M z ) ∶ ⇐⇒ ( δ a ) ∗ ( M z ) ⊆ M z . The lower nfa M turns out to be rsc ( N ) with some additional degenerate structure, we'll come back to this point. Concerning the upper nfa, the calculations: F ∩ S ⊆ M z ( adjoints ) ⇐⇒ ι ( F ∩ S ) ⊆ z ⇐⇒ z ∉ F ∩ S ⇐⇒ z ∈ F ∩ reach ( N ) I ⊈ M z ( adjoints ) ⇐⇒ ι ( I ) ⊈ z ⇐⇒ z ∈ I . ( δ a ) ∗ ( M z ) ⊆ M z ⇐⇒ ι (( δ a ) ∗ ( M z )) ⊆ z (adjoints) ⇐⇒ z ∉ ( δ a ) ∗ ( M z ) ⇐⇒ ∀ X ∈ rs ( N ) . [ γ a ( X ) ⊆ M z ⇒ z ∉ X ] ⇐⇒ ∀ X ∈ rs ( N ) . [ z ∈ X ⇒ γ a ( X ) ⊈ M z ] ⇐⇒ ∀ X ∈ rs ( N ) . [ z ∈ X ⇒ z ∈ γ a ( X )] (via adjoints) ⇐⇒ N a ( z , z ) (since z reachable) show that it is essentially rev ( reach ( N )) . More precisely we have the bipartite graph isomorphism: [diagram: upper bijection β ∶ M S → reach ( N ) , lower bijection id ∶ rs ( N ) → rs ( N ) , left relation ⊈ , right relation ˘ ∈ ] where the bijection β has action M z ↦ z . Indeed X ⊈ M z ⇐⇒ X ⊈ z ⇐⇒ z ∈ X ⇐⇒ ˘ ∈ ( X, z ) . Then it follows from the earlier calculations that we have the aut Dep -isomorphism ˘ ∈ ∶ Airr δ → ( M , ˘ ∈ , rev ( reach ( N ))) . Instantiating Proposition 3.1.15 provides the isomorphism ˘ ∈ ∶ ( M , ˘ ∈ , rev ( reach ( N ))) → ( rsc ( N ) , ˘ ∈ , rev ( reach ( N ))) . This follows by the calculations M a ; ˘ ∈ ( X, z ) ⇐⇒ ∃ X ′ ∈ rs ( N ) . [ X ′ ⊆ N a [ X ] ∧ z ∈ X ′ ] ⇐⇒ z ∈ N a [ X ] ⇐⇒ ( λY. N a [ Y ]) ; ˘ ∈ ( X, z ) and ˘ ∈ [{ X ∈ rs ( N ) ∶ X ⊆ I }] = I = ˘ ∈ [{ I }] . The third requirement in Proposition 3.1.15 is trivial because both dependency automata have the same upper nfa.
Composing these two aut Dep -isomorphisms yields:˘ ∈ ˘ ∈ = id rs ( N ) ; ˘ ∈ = ˘ ∈ ∶ Airr ( reach ( Det ( N , ∆ Z , rev ( N )))) → ( rsc ( N ) , ˘ ∈ , rev ( reach ( N ))) i.e. relate a join-irreducible reachable subset Y to its elements z ∈ Y – all classically reachable in the nfa N . Note 3.4.11 (Reachability in aut
Dep ) . Given the full subset construction δ = Det ( N , ∆ Z , rev ( N )) , Theorem 3.4.10describes reach ( δ ) as a dependency automaton. What about for arbitrary JSL -dfas? In a sense we’ve already coveredthe general case via Corollary 3.2.15. The
JSL -dfas with carrier
Open ∆ Z = ( P Z, ∪ , ∅ ) are injective objects and every JSL -dfa embeds into one. ∎ Theorem 3.4.12 ( aut Dep -simplicity) . We have the aut
Dep -isomorphism: I ∶ ( coreach ( N ) , ∈ , rsc ( rev ( N ))) → Airr ( simple ( dep ( N ))) I ( z, Y ) ∶ ⇐⇒ L ( N @ z ) ∩ Y r ≠ ∅ .for any nfa N = ( I, Z, N a , F ) .Proof. Let δ ∶ = Det ( dep ( N )) and apply the duality of Theorem 3.1.14 to the isomorphism of Theorem 3.4.10: R ∶ = ∈ ∶ ( rev ( reach ( N )) , ∈ , rsc ( N )) → Rev ( Airr ( reach ( δ ))) . Observe R has bijective lower witness α ∶ = λz. ⋃{ X ∈ rs ( N ) ∶ z ∉ X } by inspecting the proof of Theorem 3.4.10. ByTheorem 3.2.13.2 we have ˆ λ ∶ Rev ○ Airr op ⇒ Airr ○ ( − ) „ and hence the component:ˆ λ reach ( δ ) = id Airr ( reach ( δ )) = Pirr ⟨ rs ( N )⟩ Open ∆ Z whose domain is the codomain of R and whose codomain is Airr ( reach ( δ )) „ . Corollary 3.4.5 provides the isomorphism: f ∶ = acc ( reach ( δ )) „ ∶ ( reach ( δ )) „ → simple ( δ „ ) aut Dep -isomorphism
Airr f . By Example 3.2.12 we know δ „ ≅ Det ( Rev ( dep ( N ))) hence simple ( δ „ ) exactly equals simple ( Det ( Rev ( dep ( N )))) . Composing these three Dep -isomorphisms yields: R Airr f ( z, Y ) ⇐⇒ α ; Airr f ( z, Y ) (using R ’s lower witness) ⇐⇒ f (⋃{ X ∈ rs ( N ) ∶ z ∉ X }) ≰ langs ( δ „ ) Y ⇐⇒ f (⋃{ X ∈ rs ( N ) ∶ z ∉ X }) ⊈ Y ⇐⇒ { w ∈ Σ ∗ ∶ N w r [ I ] ⊈ ⋃{ X ∈ rs ( N ) ∶ z ∉ X }} ⊈ Y (by Example 3.4.6) ⇐⇒ { w ∈ Σ ∗ ∶ z ∈ N w r [ I ]} ⊈ Y ⇐⇒ { w ∈ Σ ∗ ∶ w r ∈ L ( rev ( N ) @ z )} ⊈ Y ⇐⇒ L ( rev ( N ) @ z ) ∩ Y r ≠ ∅ .Finally we reparameterise via N ↦ rev ( N ) recalling that coreach ( N ) = rev ( reach ( rev ( N ))) by definition.We can now explain the original motivation for the above results. Theorem 3.4.13 (Brzozowski construction of state-minimal dfa) . We have the dfa-isomorphism: acc rsc ( rev ( rsc ( rev ( N )))) ∶ rsc ( rev ( rsc ( rev ( N )))) → dfa ( L ( N )) for any nfa N = ( I, Z, N a , F ) .Proof. Consider the dependency automaton ( N , ∆ Z , rev ( N )) . By Theorem 3.4.12 its simplification amounts to N ∶ = ( coreach ( N ) , ∈ , rsc ( rev ( N ))) . Then Det N is a simple JSL -dfa. By Corollary 3.4.7, reach ( Det N ) is both simple and JSL -reachable, hence isomorphic to dfa ( L ( N )) by Corollary 3.4.8. Thus the classically reachable part reach ( Det N ) is isomorphic to dfa ( L ( N )) . Finally by Note 3.2.10 we have reach ( Det N ) = rsc ( rev ( rsc ( rev ( N )))) . Recall the state-minimal machine dfa ( L ) from Definition 3.1.3. Its states LW ( L ) are the left word quotients u − L ,also known as Brzozowski derivatives [Brz64, Con71]. Definition 3.5.1 (Minimal boolean/distributive
JSL -dfa) . Fix a regular language L ⊆ Σ ∗ .1. L ’s left predicates and state-minimal boolean JSL -dfa . LP ( L ) are all set-theoretic boolean combinations of L ’s left word quotients LW ( L ) . They admit a boolean algebrastructure, with underlying join-semilattice LP ( L ) ∶ = ( LP ( L ) , ∪ , ∅ ) . Then J ( LP ( L )) are its atoms and M ( LP ( L )) its co-atoms. The canonical boolean JSL -dfa for L is defined: dfa ¬ ( L ) ∶ = ( L, LP ( L ) , λX.a − X, { K ∈ LP ( L ) ∶ ε ∈ K }) . L ’s positive left predicates and state-minimal distributive JSL -dfa .Let LD ( L ) be the closure of LW ( L ) under all intersections and unions. The subsets define a distributive latticewith underlying join-semilattice LD ( L ) ∶ = ( LD ( L ) , ∪ , ∅ ) . Meet is intersection and its top element is Σ ∗ . The canonical distributive JSL -dfa for L is defined: dfa ∧ ( L ) ∶ = ( L, LD ( L ) , λX.a − X, { K ∈ LD ( L ) ∶ ε ∈ K }) . ∎ Note 3.5.2 (Canonicity of
JSL -dfas) . We briefly explain the sense in which these
JSL -dfas are canonical, see [MAMU14].– dfa ¬ ( L ) is the underlying JSL -dfa of the state-minimal BA -dfa.– dfa ∧ ( L ) is the underlying JSL -dfa of the state-minimal DL -dfa. ∎
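To make Definition 3.5.1 concrete, the boolean algebra LP ( L ) can be enumerated for a small example. The following Python sketch is a minimal illustration, not part of the formal development; the example language L (words over {a,b} ending in 'a') and all identifiers are our own. Each left word quotient u − L is represented by a state of the state-minimal dfa, words are classified by which quotients contain them (these membership profiles are exactly the atoms J ( LP ( L )) ), and LP ( L ) is counted as the unions of atoms:

```python
from itertools import product

# Hypothetical running example: L = words over {a,b} ending in 'a'.
# Its state-minimal dfa has two states, one per left word quotient:
# q0 represents L itself, q1 represents a^{-1}L.
SIGMA = "ab"
STATES = ("q0", "q1")
DELTA = {("q0", "a"): "q1", ("q0", "b"): "q0",
         ("q1", "a"): "q1", ("q1", "b"): "q0"}
FINAL = {"q1"}

def run(q, w):
    for c in w:
        q = DELTA[(q, c)]
    return q

# w lies in the quotient represented by state q iff reading w from q
# is accepting; the membership profile of w picks out the atom of
# LP(L) containing w (distinct profiles = distinct atoms).
def profile(w):
    return frozenset(q for q in STATES if run(q, w) in FINAL)

words = ["".join(p) for n in range(5) for p in product(SIGMA, repeat=n)]
atoms = {profile(w) for w in words}

# LP(L) is a finite boolean algebra, so its elements are exactly the
# unions of atoms: |LP(L)| = 2 ** |J(LP(L))|.
print(len(atoms), 2 ** len(atoms))
```

For this example there are three atoms (the classes of ε, of words ending in 'a', and of nonempty words ending in 'b'), hence eight left predicates.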
In the remainder of this subsection, we’ll describe the canonical boolean/distributive
JSL -dfas as dependency automata. This immediately provides representations of their dual
JSL -dfas. The next subsection is dedicated to the transition-semiring of an nfa. These admit a
JSL -dfa structure. In particular, the canonical syntactic
JSL -dfa dfa
Syn ( L ) is the dual of the syntactic semiring for L r [Pol01]. Lemma 3.5.3 (Concerning atoms and finality) .
1. The atoms J ( LP ( L )) are pairwise disjoint and their union is Σ ∗ .2. Given u ∈ α ∈ J ( LP ( L )) and Y ∈ LP ( L ) then u ∈ Y ⇐⇒ α ⊆ Y .3. For any Y ∈ LP ( L ) we have ε ∈ Y ⇐⇒ Y ⊈ dr L ( L r ) .Proof.
1. The atoms J ( LP ( L )) are pairwise-disjoint because their meet (intersection) is the bottom element ∅ . The union of all atoms is the top element i.e. the empty intersection Σ ∗ .2. Fix any Y ∈ LP ( L ) and u ∈ α ∈ J ( LP ( L )) . Given α ⊆ Y then certainly u ∈ Y . Conversely if u ∈ Y then it must lie in some atom, which is unique by disjointness, hence α ⊆ Y .3. If ε ∈ Y then certainly Y ⊈ dr L ( L r ) because the latter does not contain ε (see Theorem 3.3.6). Conversely if Y ⊈ dr L ( L r ) there exists u ∈ Y such that ∀ X ∈ LW ( L ) . ( u ∈ X ⇐⇒ ε ∈ X ) i.e. u and ε reside in the same atom, so ε ∈ Y by (2).
JSL -dfa) . dfa ¬ ( L ) and dfa ∧ ( L ) are well-defined fixpoints of simple ( − ) which accept L and have dfa ( L ) as a sub JSL -dfa.Proof.
The join-semilattice LP ( L ) is closed under unions, thus well-defined. We have L ∈ LQ ( L ) ⊆ LP ( L ) and a − ( − ) preserves unions. The final states are well-defined by Lemma 3.5.3.3. Each state K accepts K i.e. w ∈ K ⇐⇒ ε ∈ w − K ⇐⇒ w − K ⊈ dr L ( L r ) where the latter corresponds to JSL -dfa acceptance. Then it is a fixpoint of simple ( − ) as claimed and clearly has the sub JSL -dfa dfa ( L ) . Finally dfa ∧ ( L ) is sandwiched between them via JSL -dfa inclusionmorphisms, with well-defined final states by Lemma 3.5.3.3.Generally speaking, L r ’s left word quotients biject with LP ( L ) ’s atoms. Theorem 3.5.5 (Quotient-atom bijection) . Each regular L has the canonical bijection: κ L ∶ LW ( L r ) → J ( LP ( L )) κ L ( v − L r ) ∶ = J v r K E L κ − L ( X ) ∶ = [ X r ] − L r , and respective relationship: κ L ( x − L r ) ⊆ a − κ L ( y − L r ) ⇐⇒ ( xa ) − L r = y − L r for any x, y ∈ Σ ∗ .Proof.
1. We first verify κ L is a well-defined function: v 1 − L r = v 2 − L r ⇐⇒ ∀ w ∈ Σ ∗ . [ v 1 w ∈ L r ⇐⇒ v 2 w ∈ L r ] ⇐⇒ ∀ w ∈ Σ ∗ . [ wv 1 r ∈ L ⇐⇒ wv 2 r ∈ L ] ⇐⇒ ∀ w ∈ Σ ∗ . [ v 1 r ∈ w − L ⇐⇒ v 2 r ∈ w − L ] ⇐⇒ J v 1 r K E L = J v 2 r K E L (by Lemma 4.4.3.6). It is clearly surjective and also injective by reversing the argument above. The action of κ − L is well-defined because κ L is injective.2. Suppose ( xa ) − L r = y − L r so that J ax r K E L = J y r K E L by applying κ L . Since x r ∈ a − J ax r K E L we deduce J x r K E L ⊆ a − J ax r K E L = a − J y r K E L . Conversely suppose the inclusion J x r K E L ⊆ a − J y r K E L holds. Then ax r ∈ J y r K E L and consequently J ax r K E L = J y r K E L , so applying κ − L we infer ( xa ) − L r = y − L r . Note 3.5.6 (Canonicity of κ L ) . κ L arises from the duality between Set -dfas (classical dfas) and BA -dfas i.e. finite deterministic automata interpreted in boolean algebras [MAMU14]. In particular, the dual of the state-minimal BA -dfa for L is isomorphic to the state-minimal Set -dfa for L r . ∎ Theorem 3.5.7 (Canonical boolean dependency automaton) . We have the aut
Dep -isomorphism: ¬ LP ( L ) ○ κ L ∶ dep ( rev ( dfa ( L r ))) → Airr ( dfa ¬ ( L )) with action λv − L r . J v r K E L and inverse κ − L .Proof. Consider the dependency automaton of irreducibles:
Airr ( dfa ¬ ( L )) = ( N , ¬ dfa ¬ ( L ) , M ) N = ({ J x K E L ∶ x ∈ L } , J ( dfa ¬ ( L )) , N a , { J ε K E L }) N a ( J x K E L , J x K E L ) ⇐⇒ ( x r a ) − L r = ( x r ) − L r M = ({ J ε K E L } , M ( dfa ¬ ( L )) , M a , { J x K E L ∶ x ∈ L }) M a ( J x K E L , J x K E L ) ⇐⇒ N a ( J x K E L , J x K E L ) .To explain, N ’s description follows by unwinding the definitions and the relationship J x K E L ⊆ a − J x K E L ⇐⇒ ( x r a ) − L r = ( x r ) − L r from Theorem 3.5.5. Likewise M follows via the definitions and the following calculation,where γ a ∶ = λX.a − X ∶ LP ( L ) → LP ( L ) : M a ( J x K E L , J x K E L ) ⇐⇒ ( γ a ) ∗ ( J x K E L ) ⊆ J x K E L (by definition) ⇐⇒ ⋃{ J x K E L ∶ a − J x K E L ⊆ J x K E L } ⊆ J x K E L ⇐⇒ x ∉ ⋃{ J x K E L ∶ a − J x K E L ⊆ J x K E L } ⇐⇒ a − J x K E L ⊈ J x K E L ⇐⇒ J x K E L ⊆ a − J x K E L .We now verify the claimed dependency automaton isomorphism using the canonical bijection κ L from Theorem 3.5.5.First of all, α ∶ = ¬ LP ( L ) ○ κ L defines a Dep -isomorphism via the bijective witnesses: LW ( L r ) ¬ LP ( L ) ○ κ L / / M ( LP ( L )) LW ( L r ) ∆ LW ( Lr ) O O κ L / / J ( LP ( L )) ¬ LP ( L ) O O It remains to verify the constaints from Definition 3.1.12. Let δ ∶ = dfa ( L r ) be the classical state-minimal dfa so that δ a ( Y , Y ) ⇐⇒ Y = a − Y . Then we calculate: δ ˘ a ; α ( v − L r , J y K E L ) ⇐⇒ ∃ w ∈ Σ ∗ . [( wa ) − L r = v − L r ∧ α ( w − L r ) = J y K E L ] ⇐⇒ ∃ w ∈ Σ ∗ . [( wa ) − L r = v − L r ∧ J w r K E L = J y K E L ] ⇐⇒ ∃ w ∈ Σ ∗ . [ J w r K E L ⊆ a − J v r K E L ∧ J w r K E L = J y K E L ] (by Theorem 3.5.5) ⇐⇒ J y K E L ⊆ a − J v r K E L ⇐⇒ M a ( J y K E L , J v r K E L ) (see above) ⇐⇒ α ; M ˘ a ( v − L r , J y K E L ) Concerning the remaining conditions, ˘ α [ I M ] = { κ − L ( J ε K E L )} = { L r } = F rev ( δ ) and finally: α [ I rev ( δ ) ] = α [{ Y ∈ LW ( L r ) ∶ ε ∈ Y }] = α [{ v − L r ∶ v ∈ L r }] = { J v r K E L ∶ v r ∈ L } = F M .Importantly this provides a dual representation. 
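The quotient–atom bijection underlying this representation can also be checked mechanically. The following Python sketch is our own illustration (the example language and all identifiers are hypothetical, not from the paper): it counts the atoms of LP ( L ) via membership profiles and, separately, the states of rsc ( rev ( dfa ( L ))) , which by the double-reversal argument of Theorem 3.4.13 are exactly the left word quotients LW ( L r ) ; Theorem 3.5.5 predicts the two counts agree:

```python
from itertools import product

# Hypothetical running example (our own, not from the paper):
# L = words over {a,b} ending in 'a', with its two-state minimal dfa.
SIGMA = "ab"
STATES = ("q0", "q1")                 # q0 ~ L, q1 ~ a^{-1}L
DELTA = {("q0", "a"): "q1", ("q0", "b"): "q0",
         ("q1", "a"): "q1", ("q1", "b"): "q0"}
FINAL = {"q1"}

def run(q, w):
    for c in w:
        q = DELTA[(q, c)]
    return q

# Atoms of LP(L) as membership profiles {X in LW(L) : w in X}.
words = ["".join(p) for n in range(5) for p in product(SIGMA, repeat=n)]
atoms = {frozenset(q for q in STATES if run(q, w) in FINAL) for w in words}

# States of rsc(rev(dfa(L))): reachable subset construction on the
# reversed dfa; by Brzozowski's argument these are exactly LW(L^r).
def rev_step(S, c):
    return frozenset(q for q in STATES if DELTA[(q, c)] in S)

seen, todo = {frozenset(FINAL)}, [frozenset(FINAL)]
while todo:
    S = todo.pop()
    for c in SIGMA:
        T = rev_step(S, c)
        if T not in seen:
            seen.add(T)
            todo.append(T)

print(len(atoms), len(seen))  # equal, as Theorem 3.5.5 predicts
```

Here both counts are three: L r (words beginning with 'a') has the three left quotients L r , Σ ∗ and ∅ , matching the three atoms of LP ( L ) .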
Corollary 3.5.8 (Dualising dfa ¬ ( L ) ) . We have the
JSL -dfa isomorphism: θ ∶ sc ( dfa ( L r )) → ( dfa ¬ ( L )) „ θ ( S ) ∶ = ⋃{ J v r K E L ∶ v − L r ∉ S } Proof.
First recall the inverse of the isomorphism from Theorem 3.5.7, f ∶ = λX. [ X r ] − L r ∶ Airr ( dfa ¬ ( L )) → M M ∶ = dep ( rev ( dfa ( L r ))) . It has upper witness β ∶ = κ − L ○ ¬ dfa ¬ ( L ) so that Det f = λS.β [ S ] and also ( Det f ) „ = λS.β − [ S ] , since the adjoint of anisomorphism acts like the inverse. Recall the natural isomorphism ˆ ∂ − from Theorem 3.2.13 and rep − from Theorem3.2.6. Then we have the composite join-semilattice isomorphism: Det ( dep ( dfa ( L r ))) ˆ ∂ − M ÐÐ→ ( Det M ) „ ( Det f ) „ ÐÐÐÐ→ ( DetAirr ( dfa ¬ ( L ))) „ rep „ dfa ¬( L ) ÐÐÐÐÐÐ→ ( dfa ¬ ( L )) „ which acts on S ⊆ LW ( L r ) as follows: S ↦ ˆ ∂ − dep ( rev ( dfa ( L r ))) ( S ) = ∆ LW ( L r ) [ S ] ↦ ( Det f ) „ ( S ) = { J v r K E L ∶ v − L r ∈ S } (see above) ↦ rep „ dfa ¬ ( L ) ({ J v r K E L ∶ v − L r ∉ S }) = ⋂{ J v r K E L ∶ v − L r ∈ S } (adjoint acts as rep − dfa ¬ ( L ) ) = ⋃{ J v r K E L ∶ v − L r ∉ S } .We now turn our attention to positive predicates, recalling dr L from Theorem 3.3.6. Lemma 3.5.9 (Concerning irreducibles in LD ( L ) ) . ( LD ( L )) op ≅ LD ( L ) via relative complement.2. dr L ( v − L r ) = ⋂{ X ∈ LW ( L ) ∶ v r ∈ X } .3. ∣ J ( LD ( L ))∣ = ∣ M ( LD ( L ))∣ = ∣ LW ( L r )∣ where: J ( LD ( L )) = { dr L ( v − L r ) ∶ v − L r ∈ LW ( L r )} M ( LD ( L )) = { dr L ( v − L r ) ∶ v − L r ∈ LW ( L r )} .4. S ⊆ dr L ( v − L r ) ⇐⇒ v r ∉ S , for each S ∈ LD ( L ) .5. The canonical bijection τ LD ( L ) ∶ J ( LD ( L )) → M ( LD ( L )) has action: τ LD ( L ) ( dr L ( v − L r )) ∶ = dr L ( v − L r ) see Note 2.2.9.Proof.
1. Consider θ ∶ = λX.X ∶ ( LD ( L )) op → LD ( L ) . It is a well-defined bijection by the set-theoretic De Morgan laws and u − L = u − L . It is an order-isomorphism because X ⊆ Y ⇐⇒ Y ⊆ X , hence a join-semilattice isomorphism too.2. We calculate: dr L ( v − L r ) = dr L ( v − L r ) ( v − ( − ) preserves complement) = ⋃{ X ∈ LW ( L r ) ∶ v r ∉ X } (see Theorem 3.3.6) = ⋂{ X ∶ v r ∈ X ∈ LW ( L r )} = ⋂{ X ∈ LW ( L r ) ∶ v r ∈ X } .36. We first show M ( LD ( L )) has the claimed description. By Corollary 3.3.8 each X ∈ LQ ( L ) is an intersection of dr L ( v − L r ) ’s, so every S ∈ LD ( L ) is an intersection of them too. Then these elements meet-generate LD ( L ) . Tosee they are all meet-irreducible, fix v ∈ Σ ∗ . We’ll show dr L ( v − L r ) has the following unique cover in LD ( L ) : K v ∶ = ⋂{ dr L ( Y ) ∶ dr L ( v − L r ) ⊂ dr L ( Y ) , Y ∈ LW ( L )} . Certainly dr L ( v − L r ) ⊆ K v . Crucially if dr L ( v − L r ) ⊂ dr L ( Y ) then by strictness we know dr L ( Y ) ⊈ dr L ( v − L r ) ,hence v r ∈ dr L ( Y ) by Corollary 3.3.8. Then we have the strict inclusion dr L ( v − L r ) ⊂ K v . Since K v is the meetof all meet-irreducibles strictly greater than dr L ( v − L r ) it is also the unique cover of the latter.The description of J ( LD ( L )) follows by (1) i.e. they are the relative complements of the meet-irreducibles in LD ( L ) . Finally, both sets have cardinality ∣ LW ( L r )∣ .4. For any v ∈ Σ ∗ we first establish: ⋂{ X ∈ LW ( L r ) ∶ v r ∈ X } ⊈ dr L ( v − L r ) ⇐⇒ ∀ X ∈ LW ( L ) . [ v r ∈ X ⇒ X ⊈ dr L ( v − L r )] (A) ⇐⇒ ∀ X ∈ LW ( L ) . [ v r ∈ X ⇒ v r ∈ X )] (Corollary 3.3.8) ⇐⇒ v r ∈ ⋂{ X ∈ LW ( L r ) ∶ v r ∈ X } .Concerning (A), the implication ( ⇒ ) follows because X ⊆ dr L ( v − L r ) would yield a contradiction, whereas ( ⇐ ) holds because X ⊈ dr L ( v − L r ) implies v r ∈ X by Corollary 3.3.8, so the intersection contains v r too. Then,invoking (2) and (3), we’ve established the original claim whenever S ∈ J ( LD ( L )) . 
In the general case S = ⋃ J where J ⊆ J ( LD ( L )) , S ⊆ dr L ( v − L r ) ⇐⇒ ∀ K ∈ J.K ⊆ dr L ( v − L r ) ⇐⇒ ∀ K ∈ J.v r ∉ K ⇐⇒ v r ∉ S .5. Each join-irreducible takes the form j v ∶ = dr L ( v − L r ) where v ∈ Σ ∗ . By Note 2.2.9 we know j v is join-prime, sofor any J ⊆ J ( LD ( L )) we have j v ⊈ ⋃ J ⇐⇒ ∀ j ∈ J.j v ⊈ j . Then we calculate: τ LD ( L ) ( j v ) = ⋃{ S ∈ LD ( L ) ∶ j v ⊈ S } = ⋃{ j v ∈ J ( LD ( L )) ∶ j v ⊈ j v } ( j v join-prime) = ⋃{ j v ∶ dr L ( v − L r ) ⊈ dr L ( v − L r )} = ⋃{ j v ∶ v r ∈ dr L ( v − L r )} (Corollary 3.3.8) = ⋃{ j ∈ J ( LD ( L )) ∶ v r ∉ j } = dr L ( v − L r ) (by (4)).Lemma 3.5.9 provides a natural bijection LW ( L r ) ≅ J ( LD ( L )) akin to the quotient-atom bijection. Theorem 3.5.10 (Quotient-intersection bijection) . Each regular L has the canonical bijection, λ L ∶ LW ( L r ) → J ( LD ( L )) λ L ( Y ) ∶ = dr L ( Y ) λ − L ∶ = λ L r , and respective relationship: λ L ( x − L r ) ⊆ a − λ L ( y − L r ) ⇐⇒ y − L r ⊆ ( xa ) − L r for any x, y ∈ Σ ∗ .Proof. The bijection follows by Lemma 3.5.9. Concerning the relationship, y − L r ⊆ ( xa ) − L r ⇐⇒ ∀ v ∈ Σ ∗ . [ yv ∈ L r ⇒ xav ∈ L r ] ⇐⇒ ∀ v ∈ Σ ∗ . [ v r y r ∈ L ⇒ v r ax r ∈ L ] ⇐⇒ ∀ v ∈ Σ ∗ . [ y r ∈ [ v r ] − L ⇒ x r ∈ [( va ) r ] − L ] ⇐⇒ ∀ X ∈ LW ( L ) . [ y r ∈ X ⇒ x r ∈ a − X ] ⇐⇒ ⋂{ X ∈ LW ( L ) ∶ x r ∈ X } ⊆ a − ⋂{ X ∈ LW ( L ) ∶ y r ∈ X } .37 ote 3.5.11 (Canonicity of λ L ) . It arises from the duality between
Poset -dfas and DL -dfas, see [MAMU14]. ∎ Recall the nfa dfa ↓ ( L ) from Example 3.3.4. It arises from the state-minimal deterministic machine dfa ( L ) byextending the initial states and transitions. Theorem 3.5.12 (Canonical distributive dependency automaton) . We have the aut
Dep -isomorphism: D ∶ ( rev ( dfa ↓ ( L r )) , ⊆ , dfa ↓ ( L r )) → Airr ( dfa ∧ ( L )) D ( v − L r , dr L ( v − L r )) ∶ ⇐⇒ v − L r ⊆ v − L r with inverse E ( S, v − L r ) ∶ ⇐⇒ λ − L ( S ) ⊆ v − L r .Proof. To see D ’s domain is a well-defined dependency automaton, observe that its dual ( dfa ↓ ( L r ) , ⊇ , rev ( dfa ↓ ( L r ))) is well-defined by Theorem 3.2.18. Next we establish the commuting relations: LW ( L r ) τ LD ( L )○ λL / / M ( LD ( L )) LW ( L r ) ⊆ O O λ L / / J ( LD ( L )) ⊈ O O via the following calculation: λ L ( v − L r ) ⊈ dr L ( v − L r ) ⇐⇒ dr L ( v − L r ) ⊈ dr L ( v − L r ) (def. of λ ) ⇐⇒ v r ∈ dr L ( v − L r ) (by Lemma 3.5.9.4) ⇐⇒ v r ∉ dr L ( v − L r ) ⇐⇒ v r ∉ dr L ( v − L r ) ⇐⇒ dr L ( v − L r ) ⊆ dr L ( v − L r ) (by Corollary 3.3.8) ⇐⇒ v − L r ⊆ v − L r ( dr L an order-iso) ⇐⇒ v − L r ⊆ v − L r , ⇐⇒ v − L r ⊆ τ LD ( l ) ○ λ L ( v − L r ) .Since the witnesses are bijections we’ve established that D underlying Dep -morphism is an isomorphism. Concerningthe remaining conditions, D ’s domain has lower nfa N ∶ = rev ( dfa ↓ ( L r )) with transitions N a ( Y , Y ) ∶ ⇐⇒ Y ⊆ a − Y .Furthermore Airr ( dfa ∧ ( L )) ’s upper nfa M has transitions: M a ( dr L ( v − L r ) , dr L ( v − L r )) ⇐⇒ ( γ a ) ∗ ( dr L ( v − L r )) ⊆ dr L ( v − L r ) ⇐⇒ ⋃{ j ∈ J ( LD ( L )) ∶ a − j ⊆ dr L ( v − L r )} ⊆ dr L ( v − L r ) ⇐⇒ ⋃{ j ∈ J ( LD ( L )) ∶ v r ∉ a − j } ⊆ dr L ( v − L r ) (by Lemma 3.5.9.4) ⇐⇒ ⋃{ j ∈ J ( LD ( L )) ∶ av r ∉ j } ⊆ dr L ( v − L r ) ⇐⇒ dr L (( v a ) − L r ) ⊆ dr L ( v − L r ) (by Lemma 3.5.9.4) ⇐⇒ v − L r ⊆ ( v a ) − L r ( dr L an order-iso)where γ a ∶ = λX.a − X ∶ LD ( L ) → LD ( L ) . Then we verify: N a ; D ( v − L r , dr L ( v − L r )) ⇐⇒ ∃ v ∈ Σ ∗ . [ v − L r ⊆ ( va ) − L r ∧ D ( v − L r , dr L ( v − L r )] ⇐⇒ ∃ v ∈ Σ ∗ . [ v − L r ⊆ ( va ) − L r ∧ v − L r ⊆ v − L r ] ⇐⇒ ∃ v ∈ Σ ∗ . [ v − L r ⊆ v − L r ∧ v − L r ⊆ ( v a ) − L r ] (A) ⇐⇒ ∃ v ∈ Σ ∗ . 
[ D ( v − L r , dr L ( v − L r )) ∧ M ˘ a ( dr L ( v − L r ) , dr L ( v − L r ))] ⇐⇒ D ; M ˘ a ( v − L r , dr L ( v − L r )) .Concerning (A), ( ⇒ ) follows because a − ( − ) preserves inclusions so we can choose v ∶ = v ; ( ⇐ ) follows analogously, choosing v ∶ = v . Finally we verify: D [ I N ] = D [{ Y ∶ ε ∈ Y ∈ LW ( L r )}] = { dr L ( Y ) ∶ ε ∈ Y ∈ LW ( L r )} = { dr L ( v − L r ) ∶ v ∈ L r } = { dr L ( v − L r ) ∶ L ⊈ dr L ( v − L r )} (by Corollary 3.3.8) = F M (see Definition 3.2.5).˘ D [ I M ] = ˘ D [{ dr L ( v − L r ) ∶ dr L ( L r ) ⊆ dr L ( v − L r )}] (see Definition 3.2.5) = ˘ D [{ dr L ( v − L r ) ∶ v − L r ⊆ L r }] = { v − L r ∶ ∃ v ∈ Σ ∗ . [ v − L r ⊆ v − L r ∧ v − L r ⊆ L r ]} = { v − L r ∶ v − L r ⊆ L r } = F N .
JSL -dfa isomorphism, ̺ L ∶ Det ( dfa ↓ ( L r ) , ⊇ , rev ( dfa ↓ ( L r ))) → ( dfa ∧ ( L )) „ ̺ L ∶ = λS. ⋂{ dr L ( Y ) ∶ Y ∈ S } ̺ − L ∶ = λK. { Y ∈ LW ( L r ) ∶ K ⊆ dr L ( Y )} .Proof. First recall the isomorphism from Theorem 3.5.12, E ∶ Airr ( dfa ∧ ( L )) → Rev M where M ∶ = ( dfa ↓ ( L r ) , ⊇ , rev ( dfa ↓ ( L r ))) . Since E has bijective upper witness ( τ LD ( L ) ○ λ L ) − and join-semilattice adjoints act as the inverse, it follows that ( Det E ) „ = λX.τ LD ( L ) ○ λ L [ X ] . Further recall the natural isomorphism ˆ ∂ − (Theorem 3.2.13) and rep − (Theorem 3.2.6).Then we have the composite join-semilattice isomorphism: Det M ˆ ∂ − M ÐÐ→ ( DetRev M ) „ ( Det E ) „ ÐÐÐÐ→ ( DetAirr ( dfa ∧ ( L ))) „ rep „ dfa ∧( L ) ÐÐÐÐÐÐ→ ( dfa ∧ ( L )) „ . Given any subset S ⊆ LW ( L r ) upwards-closed w.r.t. inclusion, S ↦ ˆ ∂ − M ( S ) = ⊇ [ S ] = S ( S down-closed) ↦ ( Det E ) „ ( S ) = τ LD ( L ) ○ λ L [ S ] = { dr L ( v − L r ) ∶ v − L r ∉ S } ↦ rep „ dfa ∧ ( L ) ({ dr L ( v − L r ) ∶ v − L r ∉ S }) = ⋂{ dr L ( v − L r ) ∶ v − L r ∈ S } .Finally the action of the inverse follows by the bijectivity of ̺ L . We start by recalling the syntactic monoid of a regular language and the transition monoid of a classical dfa.
Definition 3.6.1 (Transition monoids and syntactic monoids) .
1. Given any set Σ we have the free Σ -generated monoid Σ ∗ ∶ = ( Σ ∗ , ⋅ , ε ) where multiplication is concatenation.2. Given a dfa δ = ( z , Z, δ a , F ) , its transition monoid is defined TM ( δ ) ∶ = ({ δ w ∶ w ∈ Σ ∗ } , ○ , id Z ) where ○ isfunctional composition and δ ε = id Z (see Definition 3.1.3). It admits a natural dfa structure accepting L : dfa TM ( δ ) ∶ = ( id Z , { δ w ∶ w ∈ Σ ∗ } , λf.δ a ○ f, { f ∶ f ( z ) ∈ F }) . Finally we have J − K TM ( δ ) ∶ Σ ∗ ↠ TM ( δ ) where J w K TM ( δ ) ∶ = δ w .39. The syntactic monoid of a regular language L ⊆ Σ ∗ is the quotient Syn ( L ) ∶ = Σ ∗ / S L by the syntactic congruence S L ∶ = {( u, v ) ∈ Σ ∗ × Σ ∗ ∶ ∀ x, y ∈ Σ ∗ . [ xuy ∈ L ⇐⇒ xvy ∈ L ]} . It admits a natural dfa structure accepting L : dfa Syn ( L ) ∶ = ( J ε K S L , Σ ∗ / S L , λx.x ⋅ J a K S L , { J w K S L ∶ w ∈ L }) . We also denote the underlying set by
Syn ( L ) ∶ = Σ ∗ / S L . ∎
1. Fix a dfa δ = ( z , Z, δ a , F ) . The set of all endofunctions on a set equipped with functional composition define afinite monoid; TM ( δ ) defines a submonoid. Finally: w ∈ L ( dfa TM ( δ ) ) ⇐⇒ δ w ∈ F dfa TM ( δ ) ⇐⇒ δ w ( z ) ∈ F ⇐⇒ w ∈ L ( δ ) .
2. To see S L ⊆ Σ ∗ × Σ ∗ is a congruence for ( Σ ∗ , ⋅ , ε ) , given S L ( u 1 , u 2 ) and S L ( v 1 , v 2 ) , x ( u 1 v 1 ) y ∈ L ⇐⇒ x ( u 1 ) v 1 y ∈ L ⇐⇒ x ( u 2 ) v 1 y ∈ L ⇐⇒ xu 2 ( v 1 ) y ∈ L ⇐⇒ xu 2 v 2 y ∈ L. Thus
Syn ( L ) is a well-defined monoid. It is finite because the equivalence classes are precisely the atoms of the set-theoretic boolean algebra generated by the finite set { x − Ly − ∶ x, y ∈ Σ ∗ } . Finally: w ∈ L ( dfa Syn ( L ) ) ⇐⇒ J w K ∈ F dfa Syn ( L ) ⇐⇒ w ∈ L. The final equivalence follows because if w ∈ L then J w K S L ⊆ L . Indeed if u ∈ J w K S L then choosing x = y = ε we have xwy ∈ L ⇐⇒ xuy ∈ L i.e. u ∈ L . As is well-known, L ’s syntactic monoid is isomorphic to the transition monoid of L ’s state-minimal dfa. Theorem 3.6.3 ( Syn ( L ) ≅ TM ( dfa ( L )) ) . We have the monoid isomorphism: λ J w K S L .λX.w − X ∶ Syn ( L ) → TM ( dfa ( L )) . Proof.
The function is well-defined and injective because: J u K S L = J u K S L ⇐⇒ ∀ x, y ∈ Σ ∗ . [ xu y ∈ L ⇐⇒ xu y ∈ L ] ⇐⇒ ∀ X ∈ LW ( L ) , y ∈ Σ ∗ . [ u y ∈ X ⇐⇒ u y ∈ X ] ⇐⇒ ∀ X ∈ LW ( L ) , y ∈ Σ ∗ . [ y ∈ u − X ⇐⇒ y ∈ u − X ] ⇐⇒ ∀ X ∈ LW ( L ) . [ u − X = u − X ] ⇐⇒ λX ∈ LW ( L ) .u − X = λX ∈ LW ( L ) .u − X .It is surjective because dfa ( L ) ’s transition monoid consists of the functions { λX ∈ LW ( L ) .w − X ∶ w ∈ Σ ∗ } . Finally itis a monoid morphism because λX.ε − X = id LW ( L ) and ( uv ) − X = v − ( u − X ) .We can now introduce another canonical JSL -dfa and its equivalent dependency automaton.
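Before that, note the transition monoid TM ( δ ) of Definition 3.6.1 is directly computable: close the letter actions δ a under composition, starting from the identity. The Python sketch below is our own illustration, not part of the formal development; the example dfa (hypothetical, for "even number of a's" over {a,b}) is state-minimal, so by Theorem 3.6.3 the result has the same cardinality as Syn ( L ) :

```python
def transition_monoid(states, sigma, delta):
    """Close the letter actions delta_a under composition (Definition 3.6.1)."""
    idx = {q: i for i, q in enumerate(states)}
    ident = tuple(states)                       # the identity map on states
    gens = [tuple(delta[(q, a)] for q in states) for a in sigma]
    monoid, todo = {ident}, [ident]
    while todo:
        f = todo.pop()
        for g in gens:
            # delta_{wa} = delta_a ∘ delta_w: apply f first, then g
            h = tuple(g[idx[f[i]]] for i in range(len(states)))
            if h not in monoid:
                monoid.add(h)
                todo.append(h)
    return monoid

# Hypothetical example: state-minimal dfa for "even number of a's".
states = ("even", "odd")
delta = {("even", "a"): "odd", ("odd", "a"): "even",
         ("even", "b"): "even", ("odd", "b"): "odd"}
tm = transition_monoid(states, "ab", delta)
print(len(tm))  # 2 elements: the identity and the a-swap
```

Here TM ( dfa ( L )) ≅ Z/2, so the syntactic monoid has exactly two elements.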
Definition 3.6.4 ( L ’s minimal boolean syntactic JSL -dfa ) . Let
LRW ( L ) ∶ = { u − Lv − ∶ u, v ∈ Σ ∗ } be the left-right-word-quotients and LRP ( L ) the closure of LRW ( L ) under the set-theoretic boolean operations. Then: dfa ¬ Syn ( L ) ∶ = ( L, LRP ( L ) , λX.a − X, { K ∶ ε ∈ K }) is the canonical boolean syntactic JSL -dfa over the join-semilattice
LRP ( L ) ∶ = ( LRP ( L ) , ∪ , ∅ ) . ∎ Lemma 3.6.5 ( J ( LRP ( L )) = Syn ( L ) ) . LRP ( L ) ’s atoms are the equivalence classes of the syntactic congruence S L .Proof. An equivalence class amounts to ⋂ i u − i Lv − i ∩ ⋂ j u − j Lv − j involving every left-right-word-quotient u − Lv − . Next we describe the minimal boolean syntactic JSL -dfa as a dependency automaton.
Theorem 3.6.6 (Canonical boolean syntactic dependency automaton) . We have the aut
Dep -isomorphism: λX.X r ∶ dep ( rev ( dfa Syn ( L r ) )) → Airr ( dfa ¬ Syn ( L )) , whose inverse has action λX.X r .Proof. We have the bijection λX.X r ∶ Syn ( L r ) → Syn ( L ) because ∀ x, y ∈ Σ ∗ . [ xuy ∈ L ⇐⇒ xvy ∈ L ] is equivalent to ∀ x, y ∈ Σ ∗ . [ xu r y ∈ L r ⇐⇒ xv r y ∈ L r ] . Then we have the Dep -isomorphism f ∶ = λX.X r , Syn ( L r ) λX.X r / / { X ∶ X ∈ Syn ( L )} Syn ( L r ) ∆ Syn ( Lr ) O O λX.X r / / Syn ( L ) ¬ LRP ( L ) O O where ¬ LRP ( L ) constructs the relative complement in Σ ∗ . It is a Dep -isomorphism because the witnesses are bijections.It remains to verify the other constraints. Denote the transitions of the left (resp. right) dependency automaton’slower (resp. upper) nfa by N (resp. M ). Then: N a ( J u K S Lr , J u K S Lr ) ⇐⇒ J u a K S Lr = J u K S Lr M a ( J u K S L , J u K S L ) ⇐⇒ ( γ a ) ∗ ( J u K S L ) ⊆ J u K S L ⇐⇒ ⋃{ J u K S L ∶ a − J u K S L ⊆ J u K S L } ⊆ J u K S L ⇐⇒ ⋃{ J u K S L ∶ u ∉ a − J u K S L } ⊆ J u K S L ⇐⇒ ⋃{ J u K S L ∶ au ∉ J u K S L } ⊆ J u K S L ⇐⇒ J au K S L ⊆ J u K S L ⇐⇒ J u K S L ⊆ J au K S L ⇐⇒ J u K S L = J au K S L .where γ a ∶ = λX.a − X ∶ dfa ¬ Syn ( L ) → dfa ¬ Syn ( L ) . We now verify the condition concerning transitions: N a ; f ( J u K S L , J u K S L ) ⇐⇒ ∃ u ∈ Σ ∗ . [ J ua K S Lr = J u K S Lr ∧ J u K r S Lr = J u K S L ] ⇐⇒ ∃ u ∈ Σ ∗ . [ J ua K S Lr = J u K S Lr ∧ J u K S Lr = J u r K S Lr ] ⇐⇒ J u r a K S Lr = J u K S Lr ⇐⇒ J u r K S L = J au K S L ⇐⇒ ∃ u ∈ Σ ∗ . [ J u r K S L = J u K S L ∧ J u K S L = J au K S L ] ⇐⇒ ∃ u ∈ Σ ∗ . [ f ( J u K S L ) = J u K S L ∧ M ˘ a ( J u K S L , J u K S L )] ⇐⇒ f ; M a ( J u K S L , J u K S L ) .Finally we calculate: f [ I rev ( dfa Syn ( Lr ) ) ] = f [ F dfa Syn ( Lr ) ] = f [{ J w K S Lr ∶ w ∈ L r }] = { J w r K S L ∶ w ∈ L r } = { J w K S L ∶ L r ⊈ J w K S L } = F M . ˘ f [ I M ] = ˘ f [{ J u K S L ∶ ⋃{ J w K S L ∶ ε ∉ L } ⊆ J u K S L }] = ˘ f [{ J u K S L ∶ J ε K S L ⊆ J u K S L }] = ˘ f [{ J ε K S L }] = J ε K S Lr = F N . 
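Theorem 3.6.6 rests on the reversal bijection λX.X r ∶ Syn ( L r ) → Syn ( L ) , so in particular the two syntactic monoids are equinumerous. Combined with Theorem 3.6.3 this is easy to test numerically: the transition monoids of the state-minimal dfas for L and L r must have the same size. The sketch below is our own (both minimal dfas are hard-coded for a hypothetical example, L = words ending in 'a' and L r = words beginning with 'a'); for such tiny automata every monoid element is realised by a short word:

```python
from itertools import product

def actions(states, delta, sigma="ab", max_len=4):
    """Distinct state-transformations delta_w for |w| <= max_len; for
    these tiny dfas this realises the whole transition monoid."""
    acts = set()
    for n in range(max_len + 1):
        for w in product(sigma, repeat=n):
            img = []
            for q in states:
                for c in w:
                    q = delta[(q, c)]
                img.append(q)
            acts.add(tuple(img))
    return acts

# Minimal dfa for L = words over {a,b} ending in 'a'.
dfa_L = (("q0", "q1"),
         {("q0", "a"): "q1", ("q0", "b"): "q0",
          ("q1", "a"): "q1", ("q1", "b"): "q0"})
# Minimal dfa for L^r = words beginning with 'a' (accept/reject sinks).
dfa_Lr = (("s", "A", "R"),
          {("s", "a"): "A", ("s", "b"): "R",
           ("A", "a"): "A", ("A", "b"): "A",
           ("R", "a"): "R", ("R", "b"): "R"})

tm_L = actions(*dfa_L)
tm_Lr = actions(*dfa_Lr)
print(len(tm_L), len(tm_Lr))  # equal, as the reversal bijection predicts
```

Both transition monoids have three elements here, matching |Syn ( L )| = |Syn ( L r )| .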
Note 3.6.7 (Canonical distributive syntactic
JSL -dfa) . R ∶ ( rev (( dfa Syn ( L r ) ) ↓ ) , ⊆ , ( dfa Syn ( L r ) ) ↓ ) → Airr ( dfa ∧ Syn ( L )) ∎ 3.7 Transition semirings of JSL -dfas
Whilst classical dfas induce monoids,
JSL -dfas induce idempotent semirings . Definition 3.7.1 (Transition semiring of a
JSL -dfa) . 1. P f Σ ∗ ∶ = (( P f Σ ∗ , ∪ , ∅ ) , ⋅ , { ε }) is the free Σ -generated idempotent semiring where P f Σ ∗ is the set of finite languages, its multiplication being sequential composition of languages.2. Fix a JSL -dfa γ = ( s , S , γ a , F ) and recall the composites ( γ w ∶ S → S ) w ∈ Σ ∗ from Definition 3.2.1. More generally for any K ⊆ Σ ∗ we can construct the pointwise-join of { γ w ∶ w ∈ K } , γ K ∶ = λs. ⋁ S { γ w ( s ) ∶ w ∈ K } ∶ S → S . Then γ ’s transition semiring is the idempotent semiring TS ( γ ) ∶ = ( S γ , ○ , id S ) where: S γ ∶ = ( S γ , ∨ S γ , λX. – S ) S γ ∶ = { γ K ∶ K ⊆ Σ ∗ } γ U ∨ S γ γ V ∶ = γ U ∪ V .
3. Since TS ( γ ) is Σ-generated by { γ a ∶ a ∈ Σ } we have the unique extension J − K TS ( γ ) ∶ P f Σ ∗ ↠ TS ( γ ) i.e. asurjective idempotent semiring morphism.4. Finally the semiring TS ( γ ) has a natural associated JSL -dfa structure: ts ( γ ) ∶ = ( id S γ , S γ , λf.γ a ○ f, { f ∶ f ≰ S γ γ L }) accepting L ∶ = L ( γ ) . ∎ Lemma 3.7.2 ( TS ( γ ) and ts ( γ ) well-defined) . TS ( γ ) is a well-defined idempotent semiring.2. ts ( γ ) is a JSL -reachable
JSL -dfa accepting L ( γ ) .Proof. Let γ = ( s , S , γ a , F ) be a JSL -dfa.1. S γ defines an ‘additive’ idempotent commutative monoid; ( S γ , ○ , id S ) defines a ‘multiplicative’ monoid. Mul-tiplication left/right distributes over addition and – S γ annihilates multiplication because composition of join-semilattice morphisms is bilinear w.r.t. pointwise-joins.2. We first establish ts ( γ ) is a well-defined JSL -dfa. The transition endomorphisms are well-defined functions, andpreserve the join by bilinearity. The final states are well-defined by construction since γ L ∈ S γ . This JSL -dfaaccepts L because γ w ≰ S γ γ L ⇐⇒ w ∈ L , as we now show. • ( ⇒ ) : contrapositive follows because if w ∈ L then γ L is a join of morphisms including γ w . • ( ⇐ ) : w ∈ L implies γ w ( s ) ∈ F whereas γ L ( s ) ∉ F .Finally it is JSL -reachable because (i) each γ w is classically reachable from the identity function id S , (ii) each γ K is the join of γ w ’s. Lemma 3.7.3. dfa TM ( rsc ( N )) ≅ reach ( ts ( reach ( sc ( N )))) for any nfa N .Proof. Let S ∶ = reach ( sc ( N )) i.e. the closure of the reachable subsets rs ( N ) under unions. Then we need to establishthe dfa isomorphism λγ w .δ w ∶ γ → δ where: γ = ( id rs ( N ) , { γ w ∶ rs ( N ) → rs ( N ) , w ∈ Σ ∗ } , λf.γ a ○ f, { f ∶ f ( I ) ∩ F ≠ ∅ }) δ = ( id S , { δ w ∶ S → S , w ∈ Σ ∗ } , λf.δ a ○ f, { f ∶ f ≰ δ L }) γ w and δ w have action λX. N w [ X ] . The candidate isomorphism is a well-defined bijection because δ w isuniquely determined by the domain-codomain restriction γ w . It clearly preserves the initial state and preserves/reflectsthe transitions. Finally, δ w ≰ δ L ⇐⇒ ∃ u ∈ Σ ∗ .δ w ( N u [ I ]) ⊈ δ L ( N u [ I ]) ⇐⇒ ∃ u ∈ Σ ∗ , z ∈ Z N . ( z ∈ N w [ N u [ I ]] ∧ z ∉ N L [ N u [ I ]]) ⇐⇒ w ∈ L (A) ⇐⇒ γ w ( I ) ∩ F ≠ ∅ .Concerning (A), ( ⇒ ) is immediate whereas ( ⇐ ) follows by choosing u ∶ = ε . Definition 3.7.4 (Power semiring and syntactic semiring) .
1. The finitary power semiring of a monoid M ∶ = ( M, ⋅ M , 1 M ) is the idempotent semiring: P f M ∶ = (( P f M, ∪ , ∅ ) , ⋅ , { 1 M }) S 1 ⋅ S 2 ∶ = { m 1 ⋅ M m 2 ∶ m 1 ∈ S 1 , m 2 ∈ S 2 } where P f M is the set of finite subsets of M . If M is a finite monoid we may instead write P M .2. Given any set Σ then P f Σ ∗ is the free Σ -generated idempotent semiring .3. The syntactic semiring Syn ∨ ( L ) ∶ = P f Σ ∗ / S ∨ L of a regular language L ⊆ Σ ∗ is the quotient of the free Σ-generated idempotent semiring by L ’s syntactic semiring congruence S ∨ L ⊆ P f Σ ∗ × P f Σ ∗ [Pol01]: S ∨ L ( U, V ) ∶ ⇐⇒ ∀ x, y ∈ Σ ∗ . [{ x } ⋅ U ⋅ { y } ⊆ L ⇐⇒ { x } ⋅ V ⋅ { y } ⊆ L ] . It admits a natural
JSL -dfa structure accepting L , syn ( L ) ∶ = ( J { ε } K S ∨ L , ( P f Σ ∗ / S ∨ L , ∨ Syn ∨ ( L ) , J ∅ K S ∨ L ) , λX.X ⋅ Syn ∨ ( L ) J { a } K S ∨ L , { J U K S ∨ L ∶ U ∩ L ≠ ∅ }) . ∎ Lemma 3.7.5 (Power/syntactic semirings are well-defined) . P f M is a well-defined idempotent semiring.2. S L ( u, v ) ⇐⇒ S ∨ L ({ u } , { v }) for all u, v ∈ Σ ∗ .3. Syn ∨ ( L ) is a well-defined finite idempotent semiring.4. syn ( L ) is a well-defined JSL -dfa accepting L .Proof.
1. Let M = ( M, ⋅ M , 1 M ) be a monoid. Firstly, ( P f M, ∪ , ∅ ) is the free join-semilattice on M . Secondly, the multiplication ⋅ is respectively bilinear by construction.2. We calculate: S ∨ L ({ u } , { v }) ⇐⇒ ∀ x, y ∈ Σ ∗ . [{ x } ⋅ { u } ⋅ { y } ⊆ L ⇐⇒ { x } ⋅ { v } ⋅ { y } ⊆ L ] ⇐⇒ ∀ x, y ∈ Σ ∗ . [ xuy ∈ L ⇐⇒ xvy ∈ L ] ⇐⇒ ∀ x, y ∈ Σ ∗ . [ xuy ∈ L ⇐⇒ xvy ∈ L ] ⇐⇒ S L ( u, v ) .3. We’ll show S ∨ L is a congruence for the free idempotent semiring P f Σ ∗ . First observe: S ∨ L ( U, V ) ⇐⇒ ∀ X, Y ∈ P f Σ ∗ . [ X ⋅ U ⋅ Y ⊆ L ⇐⇒ X ⋅ V ⋅ Y ⊆ L ] . ( ⋆ ) Indeed: ( ⇐ ) follows by restriction to words, ( ⇒ ) follows via X ⋅ U ⋅ Y ⊆ L ⇐⇒ ∀ x, y ∈ Σ ∗ . [{ x } ⋅ U ⋅ { y } ⊆ L ] .Fixing S ∨ L ( U i , V i ) for i = 1 ,
2, it is a congruence for binary joins and multiplication: { x } ⋅ ( U 1 ∪ U 2 ) ⋅ { y } ⊆ L ⇐⇒ ∀ i ∈ { 1 , 2 } . { x } ⋅ U i ⋅ { y } ⊆ L ⇐⇒ ∀ i ∈ { 1 , 2 } . { x } ⋅ V i ⋅ { y } ⊆ L ⇐⇒ { x } ⋅ ( V 1 ∪ V 2 ) ⋅ { y } ⊆ L { x } ⋅ ( U 1 ⋅ U 2 ) ⋅ { y } ⊆ L ⇐⇒ { x } ⋅ U 1 ⋅ ( U 2 ⋅ { y }) ⊆ L ⇐⇒ { x } ⋅ V 1 ⋅ ( U 2 ⋅ { y }) ⊆ L (via ⋆ ) ⇐⇒ ({ x } ⋅ V 1 ) ⋅ U 2 ⋅ { y } ⊆ L ⇐⇒ ({ x } ⋅ V 1 ) ⋅ V 2 ⋅ { y } ⊆ L (via ⋆ ) ⇐⇒ { x } ⋅ ( V 1 ⋅ V 2 ) ⋅ { y } ⊆ L .To see Syn ∨ ( L ) is finite, recall the syntactic monoid is finite by Lemma 3.6.2 and consider the mapping: q ∶ = λ { J u K S L ∶ u ∈ U ∈ P f Σ ∗ } . J U K S ∨ L ∶ P Syn ( L ) ↠ Syn ∨ ( L ) . Well-definedness follows via (2) and it is clearly surjective, hence
Syn ∨ ( L ) is finite.4. We show syn ( L ) is a well-defined JSL -dfa. It is finite because the syntactic semiring is finite – see (3). Itsjoin-semilattice structure is well-defined because S ∨ L is a well-defined congruence. Its deterministic transitionsare well-defined because multiplication in Syn ∨ ( L ) is bilinear. It remains to show the final states are well-defined. First observe if J U K S ∨ L = J V K S ∨ L and U ⊈ L then V ⊈ L by choosing x = y = ε . Secondly, the non-finals { J U K S ∨ L ∶ U ⊆ L } are closed under joins because given (finitely many) U i ⊆ L then ⋃ U i ⊆ L too. This well-defined JSL -dfa accepts L because its classically reachable part is isomorphic to the syntactic monoid Syn ( L ) endowedwith its dfa structure.Analogous to Theorem 3.6.3, dfa ( L ) ’s transition semiring is isomorphic to L ’s syntactic semiring. Theorem 3.7.6 ( Syn ∨ ( L ) ≅ TS ( dfa ( L )) ) . We have the idempotent semiring isomorphism: α ∶ = λ J U K S ∨ L .λX.U − X ∶ Syn ∨ ( L ) → TS ( dfa ( L )) . It also defines a
JSL -dfa isomorphism syn ( L ) → ts ( dfa ( L )) .Proof. It is well-defined and injective because: J U K S ∨ L = J U K S ∨ L ⇐⇒ ∀ x, y ∈ Σ ∗ . [ xU y ⊆ L ⇐⇒ xU y ⊆ L ] ⇐⇒ ∀ x, y ∈ Σ ∗ . [ xU y ⊈ L ⇐⇒ xU y ⊈ L ] ⇐⇒ ∀ x, y ∈ Σ ∗ . [ xU y ∩ L ≠ ∅ ⇐⇒ xU y ∩ L ≠ ∅ ] ⇐⇒ ∀ x, y ∈ Σ ∗ . [ U y ∩ x − L ≠ ∅ ⇐⇒ U y ∩ x − L ≠ ∅ ] ⇐⇒ ∀ X ∈ LW ( L ) , y ∈ Σ ∗ . [ U y ∩ X ≠ ∅ ⇐⇒ U y ∩ X ≠ ∅ ] ⇐⇒ ∀ X ∈ LW ( L ) , y ∈ Σ ∗ . [ y ∈ [ U ] − X ⇐⇒ y ∈ [ U ] − X ] ⇐⇒ ∀ X ∈ LW ( L )[[ U ] − X = [ U ] − X ] ⇐⇒ λX ∈ LQ ( L ) . [ U ] − X = λX ∈ LQ ( L ) . [ U ] − X .Concerning the final equivalence, LW ( L ) join-generates LQ ( L ) and each U − ( − ) preserves unions. Next, α is surjectivebecause dfa ( L ) ’s transition semiring consists of the endomorphisms λX.U − X for U ⊆ Σ ∗ , or equivalently where U ∈ P f Σ ∗ since LW ( L ) is finite. Next, α is a monoid morphism because λX.ε − X is the identity function and ( U V ) − X = V − ( U − X ) . Finally it preserves the join structure because ∅ − X = ∅ and ( U ∪ V ) − X = U − X ∪ V − X .Finally we establish the claimed JSL -dfa isomorphism. The transitions follow because ( U a ) − ( X ) = a − ( U − X ) .Concerning final states, α is a join-semilattice isomorphism hence an order isomorphism, so it suffices to show α preserves the largest non-final state. Then we must prove the marked equality below: λX. [ U ] − X ! = λX. [ L ] − X where J U K S ∨ L ∶ = ⋁ Syn ∨ ( L ) { J { u } K S ∨ L ∶ u ∉ L } . Firstly, U ⊆ L by well-definedness. Conversely each u ∈ L has some u ∈ U s.t. J { u } K S ∨ L = J { u } K S ∨ L . Then by an earliercalculation we know λX.u − X = λX.u − X , so the marked equality follows.44 orollary 3.7.7. syn ( L ) ≅ ts ( dfa ( L )) .Proof. The join-semilattice isomorphism and transitions follows via Theorem 3.7.6. The initial state is preserved i.e. J { ε } K S ∨ L ↦ id dfa ( L ) . Lastly the final states are preserved/reflected: λX.U − X ≰ γ L ⇐⇒ ∃ w ∈ Σ ∗ .U − ( w − L ) ⊈ ⋃ x ∉ L x − ( w − L ) ⇐⇒ ∃ w, v ∈ Σ ∗ , u ∈ U. 
( wuv ∈ L ∧ wLv ∩ L = ∅ ) ⇐⇒ U ∩ L ≠ ∅ . Corollary 3.7.8. dfa
Syn ( L ) ≅ reach ( syn ( L )) .Proof. By Corollary 3.7.7 we know syn ( L ) ≅ ts ( dfa ( L )) . Recall that dfa ( L ) = ( L, LW ( L ) , γ a , F γ ) and dfa ( L ) = ( L, LQ ( L ) , δ a , F δ ) where both γ and δ have action λX.a − X . Observe that: reach ( ts ( dfa ( L ))) = ( id LQ ( L ) , { δ w ∶ w ∈ Σ ∗ } , λf.δ a ○ f, { f ∶ f ≰ δ L }) . By Theorem 3.6.3 it suffices to establish the dfa isomorphism λγ w .δ w ∶ dfa TM ( dfa ( L )) → reach ( ts ( dfa ( L ))) . It is awell-defined bijection because δ w ∶ LQ ( L ) → LQ ( L ) is completely determined by its domain-codomain restriction γ w .The initial state and transitions of the two dfas are defined in the same way. Finally, γ w ( L ) ∈ F γ ⇐⇒ ε ∈ w − L ⇐⇒ w ∈ L ⇐⇒ ∃ u ∈ Σ ∗ . [ δ w ( u − L ) ⊈ δ L ( u − L )] (A) ⇐⇒ δ w ≰ δ L .Concerning (A), ( ⇐ ) follows by contradiction whereas ( ⇒ ) holds by choosing u ∶ = ε and observing ε ∉ [ L ] − L .In order to dualise the above constructions one needs the notion of right-quotient closure (see Definition 3.3.1). Definition 3.7.9 (Right-quotient closure) .
1. A
JSL -dfa δ is right-quotient closed if K ∈ langs ( δ ) and V ⊆ Σ ∗ implies KV − ∈ langs ( δ ) .2. The right-quotient closure of a JSL -dfa γ is the simplified JSL -dfa: rqc ( γ ) ∶ = ( L ( γ ) , ( T, ∪ , ∅ ) , λX.a − X, { K ∈ T ∶ ε ∈ K }) where T is the closure of { jv − ∶ j ∈ J ( langs ( γ )) , v ∈ Σ ∗ } under unions. ∎ Lemma 3.7.10 (The right-quotient closure is well-defined) . Fix any
JSL -dfa γ .1. rqc ( γ ) is a simplified JSL -dfa accepting L ( γ ) .2. rqc ( γ ) is the smallest right-quotient closed JSL -dfa δ such that langs ( γ ) ⊆ langs ( δ ) . ∎ Proof. T contains L ( γ ) and is closed under unions. It is also closed under left-letter-quotients: a − (⋃ i ∈ I j i v − i ) = ⋃ i ∈ I ( a − j i ) v − i = ⋃ i ∈ I ( ⋃ k ∈ K i j i,k ) v − i = ⋃ i ∈ I,k ∈ K i j i,k v − i . Then rqc ( γ ) is a well-defined simplified JSL -dfa accepting L ( γ ) by Lemma 3.4.4.2. We’ll show rqc ( γ ) is right-quotient closed by showing T is right-word-quotient closed (recall T is union-closed): (⋃ i ∈ I j i v − i ) v − = ⋃ i ∈ I ( j i v − i ) v − = ⋃ i ∈ I j i ( vv i ) − . Since rqc ( γ ) is simplified by (1) we deduce langs ( γ ) ⊆ langs ( rqc ( γ )) . Finally it is the smallest such JSL -dfa because every state is the union of right-quotients of languages in J ( langs ( γ )) . Theorem 3.7.11 (Transition-semiring dualises right-quotient closure) . If δ is a JSL -reachable
JSL -dfa then: acc ( ts ( δ )) „ ∶ ( ts ( δ )) „ → rqc ( δ „ ) is a JSL -dfa isomorphism.Proof.
Firstly γ ∶ = ts ( δ ) is JSL -reachable by Lemma 3.7.2, so its dual γ „ is simple by Theorem 3.4.3. Then acc γ „ definesa JSL -dfa isomorphism to its simplification simple ( γ „ ) . We’ll show the latter is precisely rqc ( δ „ ) . Fix δ = ( t , T , δ a , F δ ) and L ∶ = L ( δ „ ) . Then by definition γ = ( id T , S δ , γ a , F γ ) where: S δ ∶ = { δ K ∶ T → T ∶ K ⊆ Σ ∗ } S δ ∶ = ( S δ , ∪ , ∅ ) γ a ∶ = λf.δ a ○ f. Let us break the argument down into steps.1. We’ll show langs ( δ „ ) ⊆ simple ( γ „ ) . Fixing any element of γ „ we can rewrite acceptance as follows: u ∈ acc γ „ ( δ K ) ⇐⇒ id T ≰ ( γ u r ) ∗ ( δ K ) (by definition) ⇐⇒ γ u r ( id T ) ≰ δ K (adjoints) ⇐⇒ δ u r ≰ δ K ⇐⇒ ∃ v ∈ Σ ∗ . [ δ u r ( δ v ( t )) ≰ T δ K ( δ v ( t ))] ( δ is JSL -reachable) ⇐⇒ ∃ v ∈ Σ ∗ . [ δ vu r ( t ) ≰ T δ v ⋅ K ( t )] ⇐⇒ ∃ v ∈ Σ ∗ , m ∈ M ( T ) . [ δ v ⋅ K ( t ) ≤ T m ∧ δ vu r ( t ) ≰ T m ] ⇐⇒ ∃ v ∈ Σ ∗ , m ∈ M ( T ) . [ t ≤ T ( δ v ⋅ K ) ∗ ( m ) ∧ t ≰ T ( δ vu r ) ∗ ( m )] ⇐⇒ ∃ v ∈ Σ ∗ , j ∈ J ( T op ) . [( δ v ⋅ K ) ∗ ( j ) ≤ T op t ∧ ( δ vu r ) ∗ ( j ) ≰ T op t ] ⇐⇒ ∃ v ∈ Σ ∗ , j ∈ J ( langs ( δ „ )) . [ uv r ∈ j ∧ K r v r ∩ j = ∅ ] ⇐⇒ ∃ v ∈ Σ ∗ , j ∈ J ( langs ( δ „ )) . [ u ∈ j ( v r ) − ∧ v r ∉ [ K r ] − j ] ⇐⇒ ∃ v ∈ Σ ∗ , j ∈ J ( langs ( δ „ )) . [ u ∈ jv − ∧ v ∉ [ K r ] − j ] (A).Recalling Corollary 3.4.9.2, for each j ∈ J ( langs ( δ „ )) we’ll show δ j r accepts j . Fixing j , first observe: v ∉ [ j ] − j ⇐⇒ ∀ x ∈ Σ ∗ . [ x ∈ j ⇒ xv ∉ j ] ⇐⇒ ∀ x ∈ Σ ∗ . [ x ∈ j ⇒ x ∉ jv − ] ⇐⇒ ∀ x ∈ Σ ∗ . [ x ∈ j ⇒ x ∈ jv − ] ⇐⇒ j ⊆ jv − ⇐⇒ jv − ⊆ j . (B). u ∈ acc γ „ ( δ j r ) ⇐⇒ ∃ v ∈ Σ ∗ , j ∈ J ( langs ( δ „ )) . [ u ∈ jv − ∧ v ∉ [ j ] − j ] (by A) ⇐⇒ ∃ v ∈ Σ ∗ , z ∈ Z. [ u ∈ jv − ∧ jv − ⊆ j ] (by B) ⇐⇒ u ∈ j .Thus γ „ accepts every language in langs ( δ „ ) via closure under joins.2. Next we show γ „ is right-quotient closed. Aside from the composite endomorphisms γ w ∶ = λf.δ w ○ f we alsohave φ w ∶ = λf.f ○ δ w ∶ S δ → S δ . 
They are well-defined because the composition of join-semilattice morphisms isbilinear. Their adjoints witness right-word-quotient closure: u ∈ acc γ „ (( φ w ) ∗ ( δ K )) ⇐⇒ id T ≰ S δ ( γ u r ) ∗ (( φ w ) ∗ ( δ K )) (by definition) ⇐⇒ id T ≰ S δ ( φ w ○ γ u r ) ∗ ( δ K ) ⇐⇒ φ w ○ γ u r ( id T ) ≰ S δ δ K (adjoints) ⇐⇒ δ wu r ≰ S δ δ K ⇐⇒ γ wu r ( id T ) ≰ S δ δ K ⇐⇒ id T ≰ S δ ( γ wu r ) ∗ ( δ K ) (adjoints) ⇐⇒ uw r ∈ acc γ „ (( φ w ) ∗ ( δ K )) ⇐⇒ u ∈ acc γ „ (( φ w ) ∗ ( δ K ))( w r ) − .Closure under right-quotients follows by closure under unions.3. Combining (1) with (2) we deduce rqc ( δ „ ) ⊆ simple ( γ „ ) . Finally, the reverse inclusion follows by (A) i.e. each acc γ „ ( δ K ) is a union of jv − ’s. 46 orollary 3.7.12 (Right-quotient closed vs. finite Σ-generated idempotent semirings) .
1. If δ is a simple right-quotient closed JSL -dfa, δ „ ≅ ts ( δ „ ) is a Σ -generated idempotent semiring acting on itself.2. If S = ( S , ⋅ S , S ) is a finite Σ -generated idempotent semiring and s ∈ S then ( S , S , λs.s ⋅ S a, { s ∈ S ∶ s ≰ S s }) „ is a simple right-quotient closed JSL -dfa.Proof.
1. Modulo isomorphism δ is simplified. Then δ = rqc ( δ ) ≅ ( ts ( δ „ )) „ by Theorem 3.7.11, so that δ „ ≅ ts ( δ „ ) .2. First, γ ∶ = ( S , S , λs.s ⋅ S a, { s ∈ S ∶ s ≰ S s }) is a well-defined JSL -dfa because right-multiplication preserves joins.It is
JSL -reachable because S is Σ-generated. Next we’ll show λf.f ( S ) ∶ ts ( γ ) → γ is a JSL -dfa isomorphism.It is a well-defined function by construction and surjective because S is Σ-generated and γ U ( S ) = J U K S . Itis injective because each γ U = λs.s ⋅ S J U K S acts as right-multiplication by J U K S . Finally it preserves joins andmultiplication. Then γ „ ≅ ( ts ( γ )) „ ≅ rqc ( γ „ ) is simple and right-quotient closed by Theorem 3.7.11. Corollary 3.7.13 (Quotients of finite idempotent semirings) .
1. Given a
JSL -dfa inclusion morphism ι ∶ γ ↪ δ between simplified right-quotient closed JSL -dfas, λ J U K TS ( δ „ ) . J U K TS ( γ „ ) ∶ TS ( δ „ ) ↠ TS ( γ „ ) is a well-defined surjective semiring morphism.2. Let f ∶ ( S , ⋅ S , S ) ↠ ( T , ⋅ T , T ) be a surjective semiring morphism where S is a finite Σ -generated idempotentsemiring. Given any s ∈ S we have the JSL -dfa embedding: f ∗ ∶ ( id T , T , λt.t ⋅ T f ( a ) , { t ∶ t ≰ T f ( s )}) „ → ( id S , S , λs.s ⋅ S a, { s ∶ s ≰ S s }) „ between simple right-quotient closed JSL -dfas.Proof.
1. Firstly ι ∗ ∶ δ „ → γ „ is a surjective JSL -dfa morphism by Theorem 3.2.3. Since γ and δ are right-quotient closed, ts ( δ „ ) ≅ ( rqc ( δ )) „ = ( δ ) „ ι ∗ Ð→→ ( γ ) „ = ( rqc ( γ )) „ ≅ ts ( γ „ ) by applying Theorem 3.7.11. Then we have the surjective JSL -dfa morphism f ∶ ts ( δ „ ) ↠ ts ( γ „ ) . It is a join-semilattice morphism preserving the unit (initial state) and right-multiplication by generators. Then f (( δ a ) ∗ ) ∶ = ( γ a ) ∗ and thus f (( δ U ) ∗ ) = ( γ U ) ∗ by induction over words and joins. Then f preserves the multiplication too: f (( δ V ) ∗ ○ ( δ U ) ∗ ) = f (( δ U ○ δ V ) ∗ ) = f (( δ V ⋅ U ) ∗ ) = ( γ V ⋅ U ) ∗ = ( γ U ○ γ V ) ∗ = ( γ V ) ∗ ○ ( γ U ) ∗ so it is a surjective semiring morphism. Finally it preserves the generators so has the claimed description.2. The surjective semiring morphism also defines a JSL -dfa morphism: f ∶ ( id S , S , λs.s ⋅ S a, { s ∶ s ≰ S s }) → ( id T , T , λt.t ⋅ T f ( a ) , { t ∶ t ≰ T f ( s )}) because right-multiplication preserves joins. Both JSL -dfas are
JSL -reachable because f is surjective, so that f [ Σ ] generates T . Then its adjoint defines an injective JSL -dfa morphism between simple
JSL -dfas. Finally each
JSL -dfa is right-quotient closed via closure under left multiplication on the dual side. Next we dualise the syntactic semiring. Definition 3.7.14 ( L ’s minimal syntactic JSL -dfa) . The closure of
LRW ( L ) ∶ = { u − Lv − ∶ u, v ∈ Σ ∗ } under unionsdefines the minimal syntactic JSL -dfa dfa
Syn ( L ) i.e. the smallest right-quotient closed JSL -dfa accepting L . ∎ Note 3.7.15.
The minimal syntactic
JSL -dfa satisfies dfa
Syn ( L ) = rqc ( dfa ( L )) . ∎ Corollary 3.7.16 (Dualising the syntactic semiring) . We have the
JSL -dfa isomorphism: acc ( syn ( L r )) „ ∶ ( syn ( L r )) „ → dfa Syn ( L ) Proof.
First let δ ∶ = dfa ( L r ) . By Theorem 3.7.6 we know syn ( L r ) ≅ ts ( δ ) so that ( syn ( L r )) „ ≅ ( ts ( δ )) „ . By Theorem3.7.11 we know ( ts ( δ )) „ ≅ rqc ( δ „ ) because δ is JSL -reachable (see Corollary 3.4.8). Finally by Theorem 3.3.6 wehave δ „ ≅ dfa ( L ) , so that ( syn ( L )) „ ≅ rqc ( dfa ( L )) . Since rqc ( dfa ( L )) is simplified this isomorphism must be theacceptance map.Finally we describe the dual of the power semiring of the syntactic monoid. It is essentially the canonical booleansyntactic JSL -dfa from Definition 3.6.4.
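Definition 3.7.14’s two-stage construction – the two-sided quotients LRW ( L ) followed by closure under unions – can be enumerated directly when L is a finite language. The following Python sketch is illustrative only: the function names `two_sided_quotients` and `union_closure` are our own, and it sidesteps the automata-theoretic machinery by assuming L is finite.

```python
def two_sided_quotients(lang):
    """LRW(L) = { u^-L v^- : u, v words } for a *finite* language L, where
    u^-L v^- = { w : u w v in L }.  Only prefix/suffix pairs of words in L
    can give non-empty quotients, so it suffices to range over those."""
    words = set(lang)
    us = {w[:i] for w in words for i in range(len(w) + 1)}  # prefixes
    vs = {w[i:] for w in words for i in range(len(w) + 1)}  # suffixes
    return {frozenset(w[len(u):len(w) - len(v)] for w in words
                      if w.startswith(u) and w.endswith(v)
                      and len(w) >= len(u) + len(v))
            for u in us for v in vs}

def union_closure(family):
    """Close a family of sets under finite unions; the empty union is added."""
    closed = set(family) | {frozenset()}
    frontier = set(closed)
    while frontier:
        new = {a | b for a in frontier for b in closed} - closed
        closed |= new
        frontier = new
    return closed
```

For L = ab this yields the five quotients ∅, {ε}, {a}, {b}, {ab}; their union closure has 16 elements, which by Definition 3.7.14 is the state set of the minimal syntactic JSL -dfa of ab.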
Corollary 3.7.17 (Dualising P Syn ( L ) ) . Let δ ∶ = sc ( dfa Syn ( L ) ) .1. We have the semiring isomorphism λδ U . { J u K S L ∶ u ∈ U } ∶ TS ( δ ) → P Syn ( L ) .2. We have the JSL -dfa isomorphism acc ( ts ( δ )) „ ∶ ( ts ( δ )) „ → dfa ¬ Syn ( L ) .Proof.
1. Denote the candidate isomorphism by α . Given γ ∶ = dfa Syn ( L ) then δ U = λS. ⋃ u ∈ U γ u [ S ] . Given δ U = δ U then applying them to { J ε K S L } we see α is a well-defined injective function. It is clearly surjective and also preserves joins i.e. α ( δ U ∪ U ) = α ( δ U ) ∪ α ( δ U ) . Finally α ( δ ∅ ) = ∅ and the multiplication is also preserved: α ( δ V ○ δ U ) = α ( δ U ⋅ V ) = { J x K S L ∶ x ∈ U ⋅ V } = { J u K S L ∶ u ∈ U } ⋅ { J v K S L ∶ v ∈ V } = α ( δ U ) ⋅ α ( δ V ) .

2. By Example 3.2.12 δ „ ≅ sc ( rev ( dfa Syn ( L r ) )) hence δ „ ≅ dfa ¬ Syn ( L ) by Theorem 3.6.6. Applying Theorem 3.7.11 yields: ( ts ( δ )) „ ≅ rqc ( δ „ ) ≅ rqc ( dfa ¬ Syn ( L )) = dfa ¬ Syn ( L ) , since the latter is simplified and right-quotient closed by construction.

L -coverings

An L -covering is an edge-covering of the dependency relation DR L (Definition 3.1.8) by left-maximal bicliques. That is, each biclique A × B ⊆ DR L is inclusion-maximal on the left. Importantly, they can be defined as certain Dep -morphisms.
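Since L -coverings cover the dependency relation DR L by bicliques, it helps to have DR L concretely. For a finite language the left word quotients of L and L r , and hence DR L , can be tabulated directly from the characterisation DR L ( u − L, v − L r ) ⇐⇒ uv r ∈ L . The sketch below uses our own names and is restricted to finite languages; well-definedness (independence of the chosen representatives u, v ) is part of the theory in the text.

```python
from collections import deque

def left_quotients(lang, alphabet):
    """Map each distinct left quotient w^-lang of a finite language to one
    witnessing word w, by breadth-first search over letters."""
    start = frozenset(lang)
    reps = {start: ""}
    queue = deque([start])
    while queue:
        X = queue.popleft()
        for a in alphabet:
            Y = frozenset(w[1:] for w in X if w[:1] == a)  # a^-X
            if Y not in reps:
                reps[Y] = reps[X] + a  # (wa)^-L = a^-(w^-L)
                queue.append(Y)
    return reps

def dependency_relation(lang, alphabet):
    """DR_L as a set of pairs (u^-L, v^-L^r), via DR_L(u^-L, v^-L^r) <=> u.v^r in L."""
    L = frozenset(lang)
    reps = left_quotients(L, alphabet)
    reps_rev = left_quotients(frozenset(w[::-1] for w in L), alphabet)
    return {(X, Y)
            for X, u in reps.items()
            for Y, v in reps_rev.items()
            if u + v[::-1] in L}
```

For L = ab + aab there are five quotients on each side and six dependent pairs; the empty quotient is related to nothing, matching DR L [ ∅ ] = ∅.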
Definition 4.1.1 ( L -coverings) . Fix any regular language L ⊆ Σ ∗ .1. An L -covering is a Dep -morphism DR L ∶ DR L → H such that H t = LW ( L r ) .The Dep − morphism is determined by H , so it may be denoted ⟨ L, H ⟩ . We may also refer to the L -covering viathe relation H ⊆ H s × LW ( L r ) alone.2. Given an L -covering H , Definition 2.2.2 provides ⟨ L, H ⟩ − ⊆ LW ( L ) × H s . But we may also directly define: ⟨ L, H ⟩ − ( u − L, h s ) ∶ ⇐⇒ ∀ Y ∈ LW ( L r ) . [ H ( h s , Y ) ⇒ DR L ( u − L, Y )] without knowing H ⊆ H s × LW ( L r ) is an L -covering. ∎ efinition 4.1.2 ( L -covering constructions) . Fix an L -covering DR L ∶ DR L → H .1. H ’s biclique-form is the L -covering H ♭ ⊆ H ♭ s × LW ( L r ) where: H ♭ s ∶ = {⟨ L, H ⟩ ˘ − [ h s ] × H [ h s ] ∶ h s ∈ H s } H ♭ ( A × B, Y ) ∶ ⇐⇒ Y ∈ B. It turns out that ⟨ L, H ♭ ⟩ − ( X, A × B ) ⇐⇒ X ∈ A . Finally, we say H is in biclique-form if H = H ♭ .2. H ’s induced nfa N H has states H s and is defined: I N H ∶ = ⟨ L, H ⟩ − [ L ] F N H ∶ = { h ∈ H s ∶ ε ∈ ⋂⟨ L, H ⟩ ˘ − [ h ]} N H ,a ( h , h ) ∶ ⇐⇒ ∀ X ∈ LW ( L ) . (⟨ L, H ⟩ − ( X, h ) ⇒ ⟨ L, H ⟩ − ( a − X, h )) . Just as ⟨ L, H ⟩ is completely determined by H , the induced nfa N H is completely determined by ⟨ L, H ⟩ .3. An L -covering H ′ extends another L -covering H if H s = H ′ s , H ⊆ H ′ and ⟨ L, H ⟩ − = ⟨ L, H ′ ⟩ − . We say H is maximal if its only extension is itself.4. H is legitimate if L ( N H ) = L , see [KW70, Definition 16].5. H ’s dual is the L r -covering H ◇ ∶ = ⟨ L, H ⟩ ˘ − ⊆ H s × LW ( L ) . ∎ Note 4.1.3 (Concerning extensions of L -coverings) . Given any two L -extensions satisfying H s = H ′ s and H ⊆ H ′ wenecessarily have ⟨ L, H ′ ⟩ − ⊆ ⟨ L, H ⟩ − . Then Definition 4.1.2.3 could equivalently require ⟨ L, H ⟩ − ⊆ ⟨ L, H ′ ⟩ − , which ismore in-keeping with ‘maximality’. ∎ We now prove various basic facts concerning L -coverings. Lemma 4.1.4 ( L -coverings) . 
H ⊆ H s × LW ( L r ) is an L -covering iff DR L = I ; H for some I ⊆ LW ( L ) × H s .2. Each L -covering is a Dep -monomorphism via the witnesses: LW ( L r ) ∆ LW ( Lr ) / / LW ( L r ) LW ( L ) DR L O O ⟨ L, H ⟩ − / / H s H O O
3. If H is an L -covering then so is its biclique-form H ♭ ; moreover ⟨ L, H ♭ ⟩ − ( X, A × B ) ⇐⇒ X ∈ A .

4. If H is an L -covering in biclique-form then its induced nfa satisfies: A × B ∈ I N H ⇐⇒ L ∈ A , A × B ∈ F N H ⇐⇒ ε ∈ ⋂ A , N H ,a ( A₁ × B₁ , A₂ × B₂ ) ⇐⇒ γ a [ A₁ ] ⊆ A₂ where γ a ∶ = λX.a − X.

5. L ( N H ) = L ( N H ♭ ) because q ∶ = λh s . ⟨ L, H ⟩ ˘ − [ h s ] × H [ h s ] ∶ N H ↠ N H ♭ is a surjection which preserves/reflects initial states, final states and transitions. If N H is state-minimal then q is an nfa isomorphism. However, even when N H is not state-minimal, q is almost the same thing as an isomorphism.

6. L ( N H ) ⊆ L for each L -covering H .

7. Let H be an L -covering.
a. H ◇ is a well-defined maximal L r -covering.
b. H ⊆ H ◇◇ and ⟨ L, H ⟩ − = ( H ◇ ) ˘ = ⟨ L, H ◇◇ ⟩ − , so H ◇◇ extends H .
c. H is maximal iff H = H ◇◇ .
d. H ♭ is maximal if H is.
e. H ◇◇ is legitimate if H is.
f. A × B ∈ ( H ◇◇ ) ♭ s ⇐⇒ B × A ∈ ( H ◇ ) ♭ s .

8. If H is an L -covering in biclique-form and A × B ∈ ( N H ) u [ I N H ] then u − L ∈ A .

Proof.
1. If DR L ∶ DR L → H is an L -covering it is a Dep -morphism, so ⟨ L, H ⟩ − ; H = DR L via the maximum witnesses(Lemma 2.2.3). Conversely if DR L = I ; H we know I ; H = DR L = DR L ; ∆˘ LW ( L r ) hence DR L is a Dep -morphism.2. The commuting diagram follows via maximum witnesses. It defines a
Dep -mono because the upper witness is abijective function, recalling
Dep -composition from Definition 2.1.6.3. Since H is an L -covering we know ⟨ L, H ⟩ − ; H = DR L . For completely general reasons ⋃ H ♭ s = DR L i.e. theunion of the cartesian products ⟨ L, H ⟩ ˘ − [ h s ] × H [ h s ] is L ’s dependency relation (see Note 2.1.2). If we define I ⊆ LW ( L ) × H ♭ s as I ( X, A × B ) ∶ ⇐⇒ X ∈ A then: I ; H ♭ ( X, Y ) ⇐⇒ ∃ A × B ∈ H ♭ s . [ X ∈ A ∧ Y ∈ B ] ⇐⇒ ∃ h s ∈ H s . [( X, Y ) ∈ ⟨ L, H ⟩ ˘ − [ h s ] × H [ h s ]] ⇐⇒ ( X, Y ) ∈ ⋃ H ♭ s ⇐⇒ DR L ( X, Y ) .Then by (1) H ♭ is a well-defined L -covering. It remains to establish ⟨ L, H ♭ ⟩ − = I . To this end, let A × B =S ˘ − [ h s ] × H [ h s ] and consider: ⟨ L, H ♭ ⟩ − ( X, A × B ) ∶ ⇐⇒ H ♭ [ A × B ] ⊆ DR L [ X ] (definition 2.2.2) ⇐⇒ B ⊆ DR L [ X ] ⇐⇒ H [ h s ] ⊆ DR L [ X ] (by def. of B ). ⇐⇒ ⟨ L, H ⟩ − ( h s , X ) (definition 2.2.2) ⇐⇒ X ∈ ⟨ L, H ⟩ − [ h s ] ⇐⇒ X ∈ A (by def. of A ).4. Concerning the transitions: N H ,a ( A × B , A × B ) ∶ ⇐⇒ ∀ X ∈ LW ( L ) . [⟨ L, H ⟩ − ( X, A × B ) ⇒ ⟨ L, H ⟩ − ( a − X, A × B )] ⇐⇒ ∀ X ∈ LW ( L ) . [ X ∈ A ⇒ a − X ∈ A ] (by (3)) ⇐⇒ γ a [ A ] ⊆ A .The characterisations of the initial/final states follow easily.5. Consider the well-defined surjection q ∶ H s ↠ H ♭ s with action λh s . ⟨ L, H ⟩ ˘ − [ h s ] × H [ h s ] . It preserves and reflectsthe initial states and also the final states: h s ∈ I N H ⇐⇒ h s ∈ ⟨ L, H ⟩ − [ L ] ⇐⇒ L ∈ ⟨ L, H ⟩ ˘ − [ h s ] ⇐⇒ q ( h s ) ∈ I N H♭ h s ∈ F N H ⇐⇒ ε ∈ ⋂⟨ L, H ⟩ ˘ [ h s ] ⇐⇒ q ( h s ) ∈ F N H♭ . The transitions are also preserved and reflected: N H ♭ ( q ( h ) , q ( h )) ⇐⇒ ∀ X ∈ LW ( L ) . [ X ∈ ⟨ L, H ⟩ ˘ − [ h ] ⇒ a − X ∈ ⟨ L, H ⟩ ˘ − [ h ]] ⇐⇒ ∀ X ∈ LW ( L ) . [⟨ L, H ⟩ − ( X, h ) ⇒ ⟨ L, H ⟩ − ( a − X, h )] ⇐⇒ N H ( h , h ) .Thus L ( N H ) = L ( N H ♭ ) because the nfas simulate one another. If N H is state-minimal q must be an isomorphism.6. By (5) we may assume the L -covering is in biclique-form. The induced nfa N H is described in (4). 
If w ∈ L ( N H ) then by induction we have γ w [ A₀ ] ⊆ A n where L ∈ A₀ , A n × B n ⊆ DR L and ε ∈ ⋂ A n . Thus ε ∈ w − L so w ∈ L .

7. a. By Lemma 4.1.4.1 we know ⟨ L, H ⟩ − ; H = DR L hence ˘ H ; ⟨ L, H ⟩ ˘ − = ( DR L ) ˘ = DR L r . Thus H ◇ ∶ = ⟨ L, H ⟩ ˘ − is an L r -covering by applying Lemma 4.1.4.1 again. Finally H ◇ is maximal because its converse is a maximal lower witness.

b. Below on the left we’ve depicted H together with the respective lower component ⟨ L, H ⟩ − .

[diagrams: three commuting Dep -squares for H , H ◇ and H ◇◇ , with upper witnesses ∆ and lower witnesses ⟨ L, H ⟩ − , ⟨ L, H ◇ ⟩ − and ( H ◇ ) ˘ respectively]

The central diagram shows H ’s dual L r -covering H ◇ ∶ = ⟨ L, H ⟩ ˘ − and the respective lower component ⟨ L, H ◇ ⟩ − . Then the central diagram arises by dualising H and the right-most diagram arises by dualising H ◇ . Notice that since H ◇ is already maximal, the right-most square swaps and reverses both relations. In particular:
– from left to right ⟨ L, H ⟩ − = ( H ◇ ) ˘ = ⟨ L, H ◇◇ ⟩ − .
– H ◇ = ⟨ L, H ⟩ ˘ − implies ˘ H ⊆ ⟨ L r , H ◇ ⟩ − by maximality, hence H ⊆ ⟨ L r , H ◇ ⟩ ˘ − = H ◇◇ .

c. If H is maximal then H = H ◇◇ by (b). Conversely if H = ( H ◇ ) ◇ then it is maximal by (a).
d. If H is maximal then H ♭ amounts to a bijective relabelling of H s , so it is also maximal.
e. Since ⟨ L, H ⟩ − = ⟨ L, H ◇◇ ⟩ − we know N H = N H ◇◇ , hence H ◇◇ is also legitimate.
f. We have H ◇◇ = ⟨ L r , H ◇ ⟩ ˘ − and also ⟨ L, H ◇◇ ⟩ − = ( H ◇ ) ˘ by (b). Then constructing the biclique-form of H ◇ amounts to constructing bicliques: ⟨ L r , H ◇ ⟩ ˘ − [ h s ] × H ◇ [ h s ] = H ◇◇ [ h s ] × ⟨ L, H ◇◇ ⟩ ˘ − [ h s ] and the claim then follows.

8. Given A × B ∈ ( N H ) u [ I N H ] we’ll prove u − L ∈ A by induction on u . If u = ε this holds by definition of I N H . If u = u₀ a we have A₀ × B₀ ∈ ( N H ) u₀ [ I N H ] and a − [ A₀ ] ⊆ A by Lemma 4.1.4.4.
Then by induction u₀ − L ∈ A₀ and hence ( u₀ a ) − L ∈ A , so we are done. There are various ways an nfa can have many initial/final states and transitions.
Definition 4.2.1 (Locally/intersection-saturated and transition-maximality) . Let
N = ( I, Z, N a , F ) be an nfa.

1. N is locally-saturated if for all a ∈ Σ and z, z₁ , z₂ ∈ Z : z ∈ I ⇐⇒ L ( N @ z ) ⊆ L ( N ) and N a ( z₁ , z₂ ) ∶ ⇐⇒ L ( N @ z₂ ) ⊆ a − L ( N @ z₁ ) .

2. N is intersection-saturated if for all z, z₁ , z₂ ∈ Z : N a ( z₁ , z₂ ) ⇐⇒ ∀ u ∈ Σ ∗ . ( z₁ ∈ N u [ I N ] ⇒ z₂ ∈ N ua [ I N ]) and z ∈ F N ⇐⇒ ∀ u ∈ Σ ∗ . ( z ∈ N u [ I N ] ⇒ N u [ I N ] ∩ F N ≠ ∅ ) . These are the conditions for transitions and final states from Kameda and Weiner’s intersection rule [KW70].

3. N is transition-maximal if adding transitions or colouring additional initial/final states changes the accepted language. More formally, an nfa M extends N if M = ( I M , Z, M a , F M ) where I ⊆ I M , each N a ⊆ M a , F ⊆ F M and finally L ( M ) = L ( N ) . Then N is transition-maximal if its only extension is itself. ∎

The concept of being locally-saturated arises naturally from canonical constructions, as we’ll see. It is ‘local’ because one can enforce it without changing the languages accepted by the individual states. It is worth clarifying the second concept straight away.

Note 4.2.2.
An nfa N is intersection-saturated iff the following hold:– whenever for every u -path to z there exists a ua -path to z then N a ( z , z ) . – whenever for every u -path to z we have u ∈ L ( N ) then z is final. Then each transition relation N a can be reconstructed from the deterministic transitions N u [ I ] → a N ua [ I ] of thereachable subset construction. Soon we’ll prove an nfa N is intersection-saturated iff rev ( N ) is locally-saturated. ∎ Perhaps unsurprisingly, transition-maximal machines are both locally-saturated and intersection-saturated. Wenow provide various examples of nfas in different classes.
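Note 4.2.2’s observation – that the saturated transitions can be read off the deterministic transitions N u [ I ] → a N ua [ I ] – is directly computable. The sketch below is our own (all names are illustrative); it computes the largest transition and final-state sets permitted by the intersection rule of Definition 4.2.1.2, keeping the original final states on the right-hand sides. That this saturation preserves the accepted language is the content of Kameda and Weiner’s rule, not proved here.

```python
from collections import deque

class Nfa:
    def __init__(self, states, init, trans, final):
        # trans maps (state, letter) -> set of successor states
        self.states, self.init, self.trans, self.final = states, init, trans, final

def reachable_subsets(n, alphabet):
    """All subsets N_u[I], u in Sigma*, with their deterministic transitions."""
    start = frozenset(n.init)
    seen, queue, delta = {start}, deque([start]), {}
    while queue:
        S = queue.popleft()
        for a in alphabet:
            T = frozenset(t for s in S for t in n.trans.get((s, a), ()))
            delta[(S, a)] = T
            if T not in seen:
                seen.add(T)
                queue.append(T)
    return seen, delta

def intersection_saturate(n, alphabet):
    """Largest transitions/final states allowed by the intersection rule:
    N_a(z1, z2) iff z1 in N_u[I] always implies z2 in N_ua[I], and
    z in F iff z in N_u[I] always implies N_u[I] meets F."""
    subsets, delta = reachable_subsets(n, alphabet)
    trans = {(z1, a): {z2 for z2 in n.states
                       if all(z2 in delta[(S, a)] for S in subsets if z1 in S)}
             for z1 in n.states for a in alphabet}
    final = {z for z in n.states
             if all(S & n.final for S in subsets if z in S)}
    return Nfa(n.states, n.init, trans, final)
```

On a three-state chain accepting a + aa from two initial states, the rule forces an extra transition from the first state to the last while preserving the accepted language.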
Example 4.2.3 (Comparing notions of saturation) .

1. Locally but not intersection-saturated (via final states) . The nfa below accepts a + aa and is locally-saturated e.g. there is no transition from the left-most state to the right-most because { ε } ⊈ a − { aa } . However it is not intersection-saturated because the central state should be final by Note 4.2.2.

[diagram: a three-state a -chain whose right-most state is final]

2. Locally but not intersection-saturated (via transitions) . This locally-saturated nfa accepts a ( bb ∗ + cc ∗ ) :

[diagram: an nfa with b - and c -cycles; two dashed transitions are omitted]

It is not intersection-saturated because by Note 4.2.2 it should have the dashed transitions too.

3.
Intersection-saturated but not locally-saturated . Take the reverse nfa of either (1) or (2). This follows by Theorem4.2.9 further below.4.
Locally-saturated, not transition-maximal . Example (2) is locally-saturated but not transition-maximal.5.
Locally and intersection-saturated, not transition-maximal . This nfa accepts L ∶ = a + b + ( a + + b + ) c + and is locally-saturated e.g. there is no dashed c -transition because c ∗ ⊈ c − { ε } . It is also intersection-saturated.

[diagram: an nfa with a -, b - and c -transitions; the dashed c -transition is omitted]

However, it is not transition-maximal – adding the dashed c -transition preserves the accepted language.

6. The nfa dfa ↓ ( L ) from Example 3.3.4 is always transition-maximal, as the reader may verify. ∎

There is a canonical way to locally saturate an nfa.
Definition 4.2.4 (Irreducible simplification) . We define the irreducible simplification of an nfa
N = ( I, Z, R a , F ) as: simple ∨ ( N ) ∶ = ({ X ∈ J ( S ) ∶ X ⊆ L ( N )} , J ( S ) , λX.a − X, { X ∈ J ( S ) ∶ ε ∈ X }) where S ∶ = langs ( dep ( N )) is the join-semilattice of languages accepted by N . ∎ Note 4.2.5 (Irreducible simplification is canonical) . simple ∨ ( N ) is the lower nfa of Airr ( simple ( dep ( N ))) . ∎ emma 4.2.6 (Concerning irreducible simplifications) . simple ∨ ( N ) accepts L ( N ) .2. L (( simple ∨ ( N )) @ Y ) = Y for each state Y ∈ J ( langs ( N )) .3. simple ∨ ( − ) preserves reachability.4. simple ∨ ( − ) is idempotent.Proof. Det ( − ) and Airr ( − ) preserve the accepted language by Note 3.2.10, the latter defined in terms of the lower nfa.Finally simple ( − ) preserves the accepted language since, ignoring the join-semilattice structure, it is a sub-dfa.2. Each state X in δ ∶ = simple ( Det ( N , ∆ Z , rev ( N ))) accepts X by Lemma 3.4.2.3. The lower nfa of Airr δ accepts L ( N ) and is simple ∨ ( N ) . For Y ∈ J ( langs ( N )) , the lower nfa of Airr ( δ @ Y ) accepts Y and is ( simple ∨ ( N )) @ I Y where I Y is the principal downset generated by Y . Thus ( simple ∨ ( N )) @ { Y } accepts Y since acc δ is monotonic.3. Let N = ( I, Z, N a , F ) be reachable and δ = Det ( N , ∆ Z , rev ( N )) . By surjectivity, given state X in simple ∨ ( N ) there exists z ∈ Z such that X = acc δ ({ z }) . By reachability we have a path in N : I ∋ z → a ⋯ → a n z n = z ∋ F so that L ( N @ z i + ) ⊆ a − L ( N @ z i ) for each 0 ≤ i < n . This implies: I simple ∨ ( N ) ∋ acc δ ({ z }) → a ⋯ → a n acc δ ({ z n }) = X ∈ F simple ∨ ( N ) in the nfa simple ∨ ( N ) .4. Follows by (2). Lemma 4.2.7. simple ∨ ( N ) is locally-saturated with no more states than N .Proof. Recall Definition 4.2.4 and let δ ∶ = Det ( dep ( N )) . Then Airr ( simple ( δ )) ’s lower nfa is locally-saturated via theirinitial states and transition structure (Definition 3.2.5) because each state X accepts X by Lemma 3.4.2. 
Finally, ∣ J ( langs ( δ ))∣ ≤ ∣ J ( Open ∆ Z )∣ = ∣ Z ∣ via the surjective join-semilattice morphism acc δ ∶ Open ∆ Z ↠ langs ( δ ) and Note 2.2.8.3. Actually, irreducible simplifications are precisely those nfas which are both locally-saturated and ‘union-free’. Theorem 4.2.8 (Characterizing irreducible simplifications) . The following statements are equivalent: 1.
N ≅ simple ∨ ( N ) .2. λz.L ( N @ z ) ∶ N → simple ∨ ( N ) defines an nfa isomorphism.3. N is locally-saturated and satisfies: ∀ z ∈ Z. ∀ S ⊆ Z. [( L ( N @ z ) = L ( N @ S )) ⇒ z ∈ S ] (union-free) Proof. ( ⇐⇒ ) : given (1) then each state accepts a distinct language, so there is only one possible nfa isomorphism.2. ( Ô⇒ ) : Suppose λz.L ( N @ z ) defines an nfa isomorphism. Then N is locally-saturated because simple ∨ ( N ) is locally-saturated by (1), and this property is preserved by the nfa isomorphism. Recall the join-semilatticeof accepted languages langs ( N ) and also the relationship L ( N @ S ) = ⋃ z ∈ S L ( N @ z ) from Definition 3.1.1. Then(union-free) holds via Lemma 4.2.6.2 because it asserts each z ∈ Z accepts L ( N @ z ) ∈ J ( langs ( N )) .53. ( Ô⇒ ) : Suppose N is locally-saturated and satisfies (union-free). Firstly, f ∶ = λz.L ( N @ z ) ∶ Z → J ( langs ( N )) is a well-defined function because by (union-free) we know each L ( N @ z ) ∈ J ( langs ( N )) . Furthermore f isinjective for otherwise (union-free) would fail, and surjective because J ( langs ( N )) is the minimal join-generatingsubset of langs ( N ) (see Note 2.2.8.4). Concerning the nfa isomorphism, z is final iff ε ∈ L ( N @ z ) hence f preserves and reflects final states. The initial states and transitions are preserved and reflected because N islocally-saturated.We now characterize the intersection-saturated nfas. Theorem 4.2.9.
An nfa N is intersection-saturated iff rev ( N ) is locally-saturated.Proof. Let
N = ( I, Z, N a , F ) and fix any z ∈ Z . For completely general reasons: L ( rev ( N ) @ z ) = L ({ z } , Z, N ˘ a , I ) = ( L ( I, Z, N a , { z })) r = ({ u ∈ Σ ∗ ∶ z ∈ N u [ I ]}) r . Assuming rev ( N ) is locally-saturated we prove the condition concerning transitions: N a ( z , z ) ⇐⇒ N ˘ a ( z , z ) ⇐⇒ L (( rev ( N )) @ z ) ⊆ a − L (( rev ( N )) @ z ) ( rev ( N ) locally-saturated) ⇐⇒ { u ∈ Σ ∗ ∶ z ∈ N u [ I ]} r ⊆ a − ({ u ∈ Σ ∗ ∶ z ∈ N u [ I ]} r ) (see above) ⇐⇒ { u ∈ Σ ∗ ∶ z ∈ N u [ I ]} ⊆ ({ u ∈ Σ ∗ ∶ z ∈ N u [ I ]}) a − (since ( a − X ) r = X r a − ) ⇐⇒ ∀ u ∈ Σ ∗ . ( z ∈ N u [ I ] ⇒ z ∈ N ua [ I ]) .Finally ∀ u ∈ Σ ∗ . ( z ∈ N ˘ u [ F N ] ⇒ N ˘ u [ F N ] ∩ I N ≠ ∅ ) is equivalent to requiring L ( N @ z ) ⊆ L , which follows by localsaturation. Conversely if N is intersection-saturated it is locally-saturated by reversing the above arguments.Then there is also a canonical way to intersection saturate an nfa. Corollary 4.2.10. rev ( simple ∨ ( rev ( N ))) is an intersection-saturated nfa accepting L ( N ) , no larger than N .Proof. rev ( simple ∨ ( rev ( N ))) accepts the same language because rev ( − ) reverses it and simple ∨ ( − ) preserves it(Lemma 4.2.6.1). Moreover rev ( − ) preserves the number of states and simple ∨ ( − ) never increases it by Lemma 4.2.7.By the same Lemma we know simple ∨ ( rev ( N )) is locally-saturated, hence its reverse satisfies the intersection ruleby Theorem 4.2.9.Finally we collect a few results concerning transition-maximal nfas. Given any nfa, there is a non-canonical way toconstruct a transition-maximal extension: keep adding initial/final states and transitions whenever doing so preservesthe accepted language. Let us formally state this basic fact, an instantiation of Zorn’s Lemma in the finite seatting. Lemma 4.2.11.
Every nfa N has a transition-maximal extension (see Definition 4.2.1.2). Lemma 4.2.12. rev ( − ) preserves transition-maximality.Proof. Holds because an nfa M extends rev ( N ) iff rev ( M ) extends N .Transition-maximal transitions are determined by the order-structure of langs ( N ) ⊇ LW ( L ( N )) . Lemma 4.2.13 (Transition-maximal transitions and finality) . If an nfa
N = ( I, Z, N a , F ) is transition-maximal, N a ( z , z ) ⇐⇒ ∀ X ∈ LW ( L ( N )) . [ L ( N @ z ) ⊆ X ⇒ L ( N @ z ) ⊆ a − X ] (T) z ∈ F ⇐⇒ ∀ X ∈ LW ( L ( N )) . ( L ( N @ z ) ⊆ X ⇒ ε ∈ X ) . (F) Proof.
Let L ∶ = L ( N ) . 54. We’ll prove (T). Given an nfa N where N a ( z , z ) then L ( N @ z ) ⊆ a − L ( N @ z ) , so ( ⇒ ) holds generally because a − ( − ) is monotonic w.r.t. inclusions. We’ll refer to (T)’s right hand side by (RHS).Suppose N is transition-maximal and (RHS) holds for specific z , z . For a contradiction assume ( z , z ) ∉ N a ,letting M be N with the new transition. We know L ⊆ L ( M ) and we’ll show the converse, contradictingtransition-maximality. Consider: I ∋ i u Ð→ N z a Ð→ z v Ð→ M f ∈ F where the v -path uses the new transition n ≥ L ( N @ z ) ⊆ u − L and may write v = (∏ ≤ i ≤ n v i a ) w where N v i ( z , z ) for 1 ≤ i ≤ n and w ∈ L ( N @ z ) . Then it suffices to establish L ( N @ z ) ⊆ ( ua (∏ ≤ i ≤ n v i a )) − L byinduction. For n = n + L ( N @ z ) ⊆ ( ua ( ∏ ≤ i ≤ n v i a )) − L L ( N @ z ) ⊆ ( v n + ) − L ( N @ z ) to infer L ( N @ z ) ⊆ ( ua (∏ ≤ i ≤ n v i a ) v n + ) − L and finally L ( N @ z ) ⊆ ( ua (∏ ≤ i ≤ n v i a ) v n + a ) − L via (RHS).2. We’ll prove (F). The implication ( ⇒ ) is trivial because z ∈ F implies ε ∈ L ( N @ z ) . Conversely we’ll use transition-maximality. Assuming (RHS), and given any u -path I ∋ z → u z n = z through N , since L ( N @ z ) ⊆ u − L we infer ε ∈ u − L i.e. u ∈ L , so by transition-maximality z ∈ F . Corollary 4.2.14. If N is transition-maximal it is locally-saturated and intersection-saturated.Proof. Given transition-maximal
N = ( I, Z, N a , F ) we first show N is locally-saturated. Given L ( N @ z ) ⊆ L ( N ) then z ∈ I by transition-maximality; the converse is trivial. Concerning transitions, N a ( z₁ , z₂ ) certainly implies L ( N @ z₂ ) ⊆ a − L ( N @ z₁ ) . Conversely if the latter holds, then whenever L ( N @ z₁ ) ⊆ X ∈ LW ( L ) we infer L ( N @ z₂ ) ⊆ a − L ( N @ z₁ ) ⊆ a − X because a − ( − ) is monotonic w.r.t. inclusions. Thus N a ( z₁ , z₂ ) by Lemma 4.2.13, so N is locally-saturated. Finally, rev ( N ) is transition-maximal by Lemma 4.2.12 hence locally-saturated, so N is intersection-saturated by Theorem 4.2.9. Corollary 4.2.15. If N is transition-maximal and union-free then N ≅ simple ∨ ( N ) . Proof. By Corollary 4.2.14 N is locally-saturated, so by union-freeness N ≅ simple ∨ ( N ) via Theorem 4.2.8. Lemma 4.2.16. simple ∨ ( − ) preserves transition-maximality. Proof. Given
N = ( I, Z, N a , F ) we have the full subset construction δ ∶ = Det ( dep ( N )) , the quotient JSL -dfa acc δ ∶ δ ↠ simple ( δ ) and also the irreducible simplification simple ∨ ( N ) . If N a ( z , z ) then for completely general reasons L ( N @ z ) ⊆ a − L ( N @ z ) , or equivalently acc δ ({ z }) ⊆ a − acc δ ({ z }) i.e. acc δ ({ z }) → a acc δ ({ z }) in simple ∨ ( N ) .1. One cannot add an initial state to simple ∨ ( N ) whilst preserving acceptance because, by local saturation (Lemma4.2.7), any additional state accepts K ⊈ L ( N ) .2. For a contradiction suppose adding a final state K to simple ∨ ( N ) preserves acceptance. Then N ′ ∶ = ( I, Z, N a , F ∪ acc − δ ({ K })) accepts L ( N ) (which is a contradiction) because any additional accepting N ′ -path I ∋ z → a ⋯ → a n z n + ∈ acc − δ ({ K }) directly induces I simple ∨ ( N ) ∋ acc δ ({ z }) → a ⋯ → a n acc δ ({ z n + }) = K in simple ∨ ( N ) ’sextension.3. It remains to show no additional transitions can be added. For a contradiction, assume N ′ obtained by addinga single new transition X → a X to simple ∨ ( N ) satisfies L ( N ′ ) = L ( simple ∨ ( N )) = L ( N ) . Consider the nfa: M ∶ = ( I, Z, M a , F ) M a ∶ = ⎧⎪⎪⎨⎪⎪⎩ N a ∪ acc − δ ({ X }) × acc − δ ({ X }) if a = a N a otherwise.Let us show M has strictly more transitions than N . Firstly Y ∶ = acc − δ ({ X }) × acc − δ ({ X }) is non-emptybecause acc δ is surjective. Secondly N a ∩ Y = ∅ for otherwise X → a X would already be in simple ∨ ( N ) .55ertainly L ( N ) ⊆ L ( M ) . For a contradiction we establish the converse. Given an accepting M -path shownbelow left: I ∋ z a Ð→ ⋯ a n Ð→ z n ∈ F I simple ∨ ( N ) ∋ acc δ ({ z }) a Ð→ ⋯ a n Ð→ acc δ ({ z n }) ∈ F simple ∨ ( N ) there is a respective accepting N ′ -path shown above right. Indeed, if N a i + ( z i , z i + ) then acc δ ({ z i }) → a i + acc δ ({ z i + }) in simple ∨ ( N ) and hence N ′ . 
Otherwise ( z i , z i + ) ∈ acc − δ ({ X }) × acc − δ ({ X }) is covered by the single extra transition in N ′ .

L -extensions

Definition 4.3.1 ( L -extension) . Recall the transitions of the state-minimal
JSL -dfa dfa ( L ) i.e. γ a = λX.a − X ∶ LQ ( L ) → LQ ( L ) from Definition 3.3.2. An L -extension e ∶ LQ ( L ) ↣ ( T , δ a ) is an injective JSL f -morphism e ∶ LQ ( L ) ↣ T together with T -endomorphisms δ a such that e ○ γ a = δ a ○ e for each a ∈ Σ. ∎ Then an L -extension is a join-preserving order-embedding of LQ ( L ) into T . Additionally each endomorphism λX.a − X of the former is extended by the endomorphism δ a of the latter. Note 4.3.2 (Representation theory) . By Theorem 3.7.6 one can view an L -extension as a representation of L ’ssyntactic semiring [Pol01]. Then we are considering the ‘representation theory’ of finite idempotent semirings. ∎ Example 4.3.3 ( L -extensions) .
1. Given γ a ∶ = λX.a − X we have two bijective L -extensions: id LQ ( L ) ∶ LQ ( L ) → ( LQ ( L ) , γ a ) dr − L ∶ LQ ( L ) → (( LQ ( L r )) op , ( γ a ) ∗ ) The second one follows by Theorem 3.3.6. They are essentially the same extension i.e. they are isomorphic whenviewed as algebras with ∣ Σ ∣ -many unary operations.2. LQ ( Σ ∗ ) ∶ = ({ ∅ , Σ ∗ } , ∪ , ∅ ) where each λX.a − X = id LQ ( Σ ∗ ) . Any S ∈ JSL f has endomorphism: c ⊺ S ∶ = λs. ⎧⎪⎪⎨⎪⎪⎩ – S s = – S ⊺ S otherwise.If Σ ≠ ∅ , the number of injective e ∶ LQ ( Σ ∗ ) ↣ S is ∣ S ∣ −
1. Each e defines an L -extension LQ ( Σ ∗ ) ↣ ( S , c ⊺ S ) .3. Let T be a finite union-closed set of languages such that (a) L ∈ T and (b) X ∈ T Ô⇒ a − X ∈ T for all a ∈ Σ.Then the inclusion ι ∶ LQ ( L ) ↪ (( T, ∪ , ∅ ) , λX.a − X ) is an L -extension. ∎ There is a direct translation from an L -extension to a JSL -dfa: inherit the initial state and extend the final statesof dfa ( L ) (see below). Conversely each JSL -dfa induces an L -extension by first simplifying and then forgetting theinitial state and final states. Definition 4.3.4 (Translation between L -extensions and JSL -dfas) .
1. The induced
JSL -dfa of an L -extension e ∶ LQ ( L ) ↣ ( T , δ a ) is: jdfa ( e ) ∶ = ( e ( L ) , T , δ a , ↓ T e ( dr L ( L r ))) and accepts L .2. Conversely given any JSL -dfa δ = ( s , S , δ a , F ) then: lext ( δ ) ∶ = ι ∶ LQ ( L ) ↪ ( langs ( δ ) , λX.a − X ) . is its induced L ( δ ) -extension . ∎ Note 4.3.5.
56. Concerning jdfa ( − ) , the largest non-final state in LQ ( L ) is ⋃{ X ∈ LW ( L ) ∶ ε ∉ L } = dr L ( L r ) . Then by Definition3.2.1 it is well-defined JSL -dfa. It accepts L because the embedding e restricts to a dfa-isomorphism from dfa ( L ) i.e. the classical state-minimal dfa which is a sub dfa of dfa ( L ) .2. Concerning lext ( − ) , the join-semilattice of accepted language langs ( δ ) is from Definition 3.4.1. It was used todefine the simplification simple ( δ ) of the JSL -dfa δ . We remarked in Example 4.3.3.3 that such structures arewell-defined L -extensions. ∎ Definition 4.3.6 (Simplicity, reachability, transition-maximality, state-minimality) . Fix an L -extension e .1. e is simple if jdfa ( e ) is simple (Definition 3.4.1.2). Then e ’s simplification is simple ( e ) ∶ = lext ( jdfa ( e )) .2. e is reachable if the lower nfa of Airr ( jdfa ( e )) is reachable (Definition 3.1.1).3. e is transition-maximal if the lower nfa of Airr ( jdfa ( e )) is transition-maximal (Definition 4.2.1).4. e is state-minimal if the lower nfa of Airr ( jdfa ( e )) is state-minimal. ∎ To simplify an L -extension one views it as a JSL -dfa, simplifies it, and finally forgets the initial state and finalstates. Well-definedness follows because jdfa ( e ) accepts L , so that lext ( jdfa ( e )) is an L -extension. The notions ofsimplicity and simplification are inherited from JSL -dfas, whereas the notions of reachability, transition-maximalityand state-minimality are inherited from nfas. L -extensionsNote 4.3.7. The results in this section are currently not being used elsewhere. ∎ Lemma 4.3.8 (Reachability degeneracy) . Let e be a simple transition-maximal L -extension and Airr ( jdfa ( e )) = ( M , G , M ′ ) . Then M has at most one unreachable state, accepting Σ ∗ if it exists.Proof. By assumption the lower nfa M is transition-maximal. Then those states not reachable from an initial state areall final and have transitions to every other state by transition-maximality. 
Thus they all accept Σ∗, so by simplicity there is at most one of them.
We now come to another important notion of ‘maximality’ definable purely in terms of an L-extension’s structure.
Definition 4.3.9 (Meet-maximality). An L-extension e ∶ LQ(L) ↣ (T, δ a) is meet-maximal if: j = ⋀ T {e(X) ∶ X ∈ LW(L), j ≤ T e(X)} and δ a(j) = ⋀ T {e(a − X) ∶ X ∈ LW(L), j ≤ T e(X)} for all j ∈ J(T) and a ∈ Σ. ∎
Then in meet-maximal L-extensions each j ∈ J(T) is the meet of those embedded left word quotients of L above it. Moreover, the endomorphism extensions δ a ∶ T → T preserve these special meets. Importantly, each transition-maximal nfa induces a meet-maximal L-extension.
Lemma 4.3.10. If N is a transition-maximal nfa, lext(Det(dep(reach(N)))) is a simple, reachable and transition-maximal L(N)-extension.
Proof. Setting L ∶= L(N) then the specified e is an L-extension because each operation preserves acceptance. It is simple because lext(−) first simplifies and then forgets the initial state and final states. Concerning reachability, if M ∶= reach(N) then the lower nfa of Airr(jdfa(e)) is precisely the irreducible simplification simple ∨ (M) (Definition 4.2.4) and the latter is reachable by Lemma 4.2.6.3. Concerning transition-maximality, M = reach(N) is transition-maximal for otherwise N wouldn’t be, hence simple ∨ (M) is transition-maximal by Lemma 4.2.16.
Theorem 4.3.11 (Meet-maximality). If an L-extension is simple and transition-maximal it is meet-maximal.
Proof. We may assume e is simplified i.e. e = simple(e). Then it is an inclusion e ∶ LQ(L) ↪ (T, λX.a − X) where T = (T, ∪, ∅) and T is the set of languages accepted by the individual states of δ ∶= jdfa(e). Let M be the lower nfa of Airr δ which is transition-maximal by assumption, hence locally-saturated by Lemma 4.2.7. Since each j ∈ J(T) accepts j by Lemma 3.4.2.3, invoking Lemma 4.2.13 yields: M a(j, j) ⇐⇒ ∀ X ∈ LW(L).
[j ⊆ X ⇒ j ⊆ a − X]. (T)
Furthermore by Lemma 4.2.13 M’s final states are: F M = {j ∈ J(T) ∶ ε ∈ ⋂{X ∈ LW(L) ∶ j ⊆ X}}. (F)
We’re ready to prove meet-maximality, so fix any j ∈ J(T) and let M j ∶= ⋀ T {X ∈ LW(L) ∶ j ⊆ X}. Certainly j ⊆ M j. For the reverse inclusion, first observe M j = ⋃ J for some non-empty J ⊆ J(T) and fix any j ∈ J.
– If ε ∈ j ⊆ M j then necessarily ε ∈ j, for otherwise by (F) we’d have j ⊆ X ∈ LW(L) with ε ∉ X and hence the contradiction ε ∉ M j.
– Concerning transitions, ∀ j ∈ J(T). [M a(j, j) ⇒ M a(j, j)] via (T). In particular, given j ⊆ X ∈ LW(L) then j ⊆ M j ⊆ X so we deduce j ⊆ a − X by assumption.
So every word accepted by j ∈ J is accepted by j i.e. j ⊆ j; moreover M j ⊆ Y because j ⊆ M was arbitrary. Then we’ve established: j = ⋀ T {X ∈ LW(L) ∶ j ⊆ X} for each j ∈ J(T). Fixing any a ∈ Σ and j ∈ J(T) it remains to establish: a − j = ⋀ T {a − X ∶ j ⊆ X ∈ LW(L)}. Indeed if j′ ∈ J(T) lies below the (RHS) i.e. ∀ X ∈ LW(L). [j ⊆ X ⇒ j′ ⊆ a − X] then by (T) we infer M a(j, j′) and hence j′ ⊆ a − j because M is locally-saturated. Finally a − j is itself a lower bound for (RHS) because a − (−) is monotone w.r.t. inclusion.
Are these special meets of left word quotients u − L always their intersection? The answer is no.
Example 4.3.12 (Meet-maximal meets needn’t be intersections). In [BT14, Theorem 7] a language L is implicitly provided s.t. if an nfa N accepts L and each L(N @ {z}) is a set-theoretic boolean combination of LW(L) then N is not state-minimal. Given a transition-maximal extension of a state-minimal nfa we obtain a meet-maximal L-extension by Theorem 4.3.11. If the special meets ⋀ T {j ∶ j ⊆ X ∈ LW(L)} were intersections we’d obtain a contradiction via the lower nfa of Airr(jdfa(e)) – which is also a state-minimal nfa accepting L. ∎
We finally mention some related properties.
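First, the special meets just discussed can be computed concretely. In a finite union-closed family of languages (a join-semilattice under union with bottom ∅), the meet of a subset S is the union of all members contained in every element of S, and this can be strictly smaller than the set intersection. A minimal sketch, using finite sets of integers in place of languages (an illustrative assumption, not the paper's formalism):

```python
# Meet in a finite join-semilattice T of sets ordered by inclusion:
# the largest member of T below every element of S, obtained as the
# union of all members of T contained in each element of S.
# T must be union-closed and contain the empty set.

def jsl_meet(T, S):
    lower_bounds = [t for t in T if all(t <= s for s in S)]
    meet = frozenset()
    for t in lower_bounds:
        meet = meet | t
    return meet

T = [frozenset(), frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 2, 3})]

# The meet of {1,2} and {2,3} inside T is empty, although their set
# intersection {2} is non-empty: {2} is simply not a member of T.
print(jsl_meet(T, [frozenset({1, 2}), frozenset({2, 3})]))  # frozenset()
```

So the meets in Definition 4.3.9 depend on the ambient semilattice T, which is exactly what Example 4.3.12 exploits.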
Lemma 4.3.13. If e ∶ LQ ( L ) ↪ ( T , δ a ) is transition-maximal and simplified, j ⊆ u − L ⇐⇒ ⋂ { X ∈ LW ( L ) ∶ j ⊆ X } ⊆ u − L for any j ∈ J ( T ) and u ∈ Σ ∗ .Proof. The implication ( ⇒ ) is immediate. Conversely we know e is meet-maximal by Theorem 4.3.11, so that j = ⋀ T { X ∈ LW ( L ) ∶ j ⊆ X } ⊆ ⋂ { X ∈ LW ( L ) ∶ j ⊆ X } ⊆ u − L . Lemma 4.3.14. If e ∶ LQ ( L ) ↪ ( T , δ a ) is transition-maximal and simplified then for any j ∈ J ( T ) , j ⊆ a − j ⇐⇒ ⋂ { X ∈ LW ( L ) ∶ j ⊆ X } ⊆ ⋂ { a − X ∈ LW ( L ) ∶ j ⊆ X } . Proof.
We calculate: j ⊆ a − j ⇐⇒ ∀ X ∈ LW(L). [j ⊆ X ⇒ j ⊆ a − X] (by Theorem 4.3.11) ⇐⇒ ∀ X ∈ LW(L). [j ⊆ X ⇒ ⋂{X ∈ LW(L) ∶ j ⊆ X} ⊆ a − X] (by Lemma 4.3.13) ⇐⇒ ⋂{X ∈ LW(L) ∶ j ⊆ X} ⊆ ⋂{a − X ∈ LW(L) ∶ j ⊆ X}.
4.3.2 Reversing L-extensions
Note 4.3.15. The results in this section are currently not being used elsewhere. ∎
Definition 4.3.16 (Reversal of an L-extension). Given an L-extension e let N be the lower nfa of Airr(jdfa(e)). Then e’s reversal is the L r-extension: rev(e) ∶= lext(Det(dep(rev(N)))). It is union-generated by the languages rev e(j) ∶= L(rev(N) @ j) = {w ∈ Σ∗ ∶ j ∈ N w r [I N]} where j ∈ J(T). ∎
Note 4.3.17 (Alternative descriptions of rev(e)).
1. It is simple ( Det ( dep ( rev ( N )))) without the initial state or final states.2. By Corollary 3.4.5, it is isomorphic to ( reach ( Det ( dep ( N )))) „ without the initial state or final states. ∎ Lemma 4.3.18 (Reversing L -extensions) . Fix any L -extension e ∶ LQ ( L ) ↣ ( T , δ a ) .1. rev ( e ) is a well-defined simplified L r -extension.2. If e is simple then rev ( e ) is reachable.3. If e is transition-maximal then so is rev ( e ) . Similarly if e is state-minimal then so is rev ( e ) .4. If e is simple, transition-maximal and rev e ( j ) = ⋃ { rev e ( j ) ∶ j ∈ J } for some j ∈ J ( T ) , J ⊆ J ( T ) then j = ⋀ T J .5. If e is simplified, transition-maximal and state-minimal then rev ( rev ( e )) = e .Proof.
1. Well-definedness follows by construction, noting that dep ( − ) , Det ( − ) and lext ( − ) preserve the accepted language L r . Likewise rev ( e ) is simplified by construction.2. If e is simple then N is coreachable because each j ∈ J ( T ) accepts a non-empty language. Then rev ( N ) isreachable and hence so is simple ∨ ( rev ( N )) by Lemma 4.2.6.3. If N is transition-maximal then rev ( N ) is too by Lemma 4.2.12, hence so is simple ∨ ( rev ( N )) by Lemma4.2.16. If e is state-minimal then the lower nfa of Airr ( jdfa ( e )) is a state-minimal nfa M accepting L . Since thelower nfa of Airr ( jdfa ( rev ( e ))) accepts L r and has no more states than M it is also state-minimal, so rev ( e ) is state-minimal.4. We may assume e = simple ( e ) is simplified. Let δ ∶ = jdfa ( e ) and N be the lower nfa of Airr δ . Suppose rev e ( j ) = ⋃ { rev e ( j ) ∶ j ∈ J } . Then ∀ u ∈ Σ ∗ . ∀ j ∈ J. ( j ∈ N u [ I N ] ⇒ j ∈ N u [ I N ]) . Now, since N is transition-maximal the intersection rule holds by Corollary 4.2.14, so that: ∀ j ∈ J. ∀ a ∈ Σ . N a [ j ] ⊆ N a [ j ] (A)because whenever N a ( j , j ′ ) and there is a u -path to j there is a ua -path to j ′ . Furthermore: j is final iff every j ∈ J is final . (B)Indeed, if j is final and j ∈ J then for every u -path to j we have a u -path to j and hence u ∈ L , so by transition-maximality j is final. Similarly, if every j ∈ J is final then every u -path to j satisfies u ∈ L so j is final bytransition-maximality. Then by (A) and (B) we deduce j ⊆ ⋂ J .To establish j = ⋀ T J we fix any j ′ ⊆ ⋀ T J and prove j ′ ⊆ j . Certainly ∀ j ∈ J.j ′ ⊆ j hence: ∀ j ∈ J. N a [ j ′ ] ⊆ N a [ j ] (C)because j ′′ ⊆ a − j ′ implies j ′′ ⊆ a − j . We now aim to prove N a [ j ′ ] ⊆ N a [ j ] . Given N a ( j ′ , j ′′ ) we certainly know ∀ j ∈ J. N a ( j, j ′′ ) by (C). Equivalently ∀ j ∈ J. N ˘ a ( j ′′ , j ) in rev ( N ) and thus ∀ j ∈ J.rev e ( j ) ⊆ a − rev e ( j ′ ) , wherethe latter uses a general property of nfas. 
Then: rev e ( j ) = ⋃ j ∈ J rev e ( j ) ⊆ a − rev e ( j ′ ) .
In other words, for every u r-path to j in N there exists an (au) r-path to j′′. Applying the intersection-rule we deduce N a(j, j′′) as desired i.e. we have established N a[j′] ⊆ N a[j]. Furthermore if j′ is final then every j ∈ J is final, so that j is final by (B). Then we’ve proved that j′ ⊆ j and we’re done.
5. An nfa is state-minimal iff its reverse is state-minimal. Then since rev(e) ∶ LQ(L) ↪ (U, φ a) accepts L r we deduce α ∶= λj.rev e(j) ∶ J(T) → J(U) is bijective, for otherwise we’d contradict state-minimality. Let N be the lower nfa of Airr(jdfa(e)) and M be the lower nfa of Airr(jdfa(rev(e))). Then we can bijectively relabel M to obtain the nfa: M′ ∶= (α−[I M], J(T), M′ a, α−[F M]) where M′ a(j, j) ∶⇐⇒ M a(rev e(j), rev e(j)). We’re going to show that rev(M′) extends N. By (2) we know M is transition-maximal, hence: (rev(M′)) a(j, j) ⇐⇒ M′ a(j, j) ⇐⇒ M a(rev e(j), rev e(j)) ⇐⇒ rev e(j) ⊆ a − rev e(j) (by Corollary 4.2.14) ⇐⇒ ∀ u ∈ Σ∗. [j ∈ N u[I N] ⇒ j ∈ N ua[I N]] (by definition). Thus N a(j, j) ⇒ (rev(M′)) a(j, j) because whenever there is a u-path to j we obtain a ua-path to j. Next, if j is initial in N then j ∈ I N = N ε[I N] and hence ε ∈ rev e(j), so that j is final in M′, thus initial in rev(M′). Finally if j is final in N then ∀ u ∈ Σ∗. [j ∈ N u[I N] ⇒ u ∈ L] or equivalently rev e(j) ⊆ L r, so that j is initial in M′, thus final in rev(M′). Having established that rev(M′) extends N we immediately deduce rev(M′) = N by transition-maximality. It follows that: rev rev(e)(rev e(j)) = L(rev(M) @ rev e(j)) = L(rev(M′) @ j) (via M ≅ M′) = L(N @ j) (via rev(M′) = N) = j. Since α is bijective we know every j ∈ J(T) has a unique corresponding rev e(j) ∈ J(U). Combining this with the above equality we deduce rev(rev(e)) = e.
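Section 4.3.2 leaned throughout on the reversal rev(N) of an nfa: initial and final states are exchanged and every transition is turned around, so that rev(N) accepts exactly the reversed language L(N) r. A minimal sketch, where the triple-of-sets encoding of an nfa is an assumption made for illustration:

```python
# rev(N): swap the initial/final state sets and reverse each transition.
# An nfa is encoded here as (initial_states, transitions, final_states)
# with transitions a set of (source, letter, target) triples.

def rev(nfa):
    initial, transitions, final = nfa
    reversed_transitions = {(q2, a, q1) for (q1, a, q2) in transitions}
    return (set(final), reversed_transitions, set(initial))

def accepts(nfa, word):
    initial, transitions, final = nfa
    current = set(initial)
    for letter in word:
        current = {q2 for (q1, a, q2) in transitions
                   if q1 in current and a == letter}
    return bool(current & set(final))

# N accepts the words over {a,b} ending in "ab"; rev(N) accepts their reverses.
N = ({0}, {(0, 'a', 0), (0, 'b', 0), (0, 'a', 1), (1, 'b', 2)}, {2})
assert all(accepts(N, w) == accepts(rev(N), w[::-1])
           for w in ["", "a", "b", "ab", "ba", "aab", "bab"])
```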
This section is based on recent work of Tamm [Tam16]. Recall the minimal boolean and distributive
JSL-dfa from Definition 3.5.1. Fixing L, the left predicates LP(L) are those finitely many languages arising as a set-theoretic boolean combination of the left word quotients LW(L). Importantly, any language can be transformed into a left predicate via a closure operator.
Definition 4.4.1 (Atomic languages, cl L and E L).
1. The atomic closure operator cl L ∶ P Σ∗ → P Σ∗ is defined: cl L(X) ∶= ⋃{α ∈ J(LP(L)) ∶ α ∩ X ≠ ∅} = ⋂{Y ∈ LP(L) ∶ X ⊆ Y}. Moreover the equivalence relation E L ⊆ Σ∗ × Σ∗ is defined: E L(u, v) ∶⇐⇒ ∀ X ∈ LW(L). [u ∈ X ⇐⇒ v ∈ X] with equivalence classes JwK E L ⊆ Σ∗.
2. A language is atomic w.r.t. L [BT14] if it is a fixpoint of cl L. They are precisely the languages in LP(L).
3. A language is positively atomic w.r.t. L if it lies in LD(L) ⊆ LP(L).
4. A language is subatomic w.r.t. L if it lies in LRP(L). ∎
Note 4.4.2 (Compatible definitions of cl L). The distinct definitions of cl L(X) are consistent: each element of LP(L) is (i) the join (union) of join-irreducibles (atoms) below it, (ii) the meet (intersection) of elements above it. ∎
Lemma 4.4.3 (Concerning atomic closure). Let α ∈ J(LP(L)) and X, Y ⊆ Σ∗.
1. cl L is a well-defined closure operator.
2. cl L(X) is the smallest left predicate containing X.
3. α ⊆ cl L(X) iff α ∩ X ≠ ∅.
4. cl L(X ∪ Y) = cl L(X) ∪ cl L(Y).
5. cl L(w − X) ⊆ w − cl L(X) for all w ∈ Σ∗.
6. J(LP(L)) = {JwK E L ∶ w ∈ Σ∗}.
7. cl L(X) = ⋃ w ∈ X JwK E L.
Proof. 1. cl L is monotone: if X ⊆ Y then ∀ α ∈ J(LP(L)). (α ∩ X ≠ ∅ ⇒ α ∩ Y ≠ ∅) hence cl L(X) ⊆ cl L(Y). Next, X ⊆ cl L(X) because the latter is an intersection of supersets of X. Finally cl L ○ cl L(X) = cl L(X) because cl L(X) ∈ LP(L) is the union of the atoms it includes.
2. Follows by the alternate definition.
3. If α ∩ X ≠ ∅ then α ⊆ cl L(X) by definition. Conversely if α ⊆ cl L(X) then some u ∈ α satisfies u ∈ X, for otherwise we’d know X ⊆ Σ∗ ∖ α (the latter being a coatom), so that cl L(X) ⊆ Σ∗ ∖ α by the alternate definition (a contradiction).
4. The inclusion (⊇) follows by monotonicity. Conversely, given an atom α ⊆ cl L(X ∪ Y) then by (3) there exists u ∈ α ∩ (X ∪ Y) and hence w.l.o.g. u ∈ α ∩ X and thus α ⊆ cl L(X).
5. Since X ⊆ cl L(X) we deduce w − X ⊆ w − cl L(X) and hence cl L(w − X) ⊆ w − cl L(X) because (a) the former is the least atomic language above w − X, (b) the latter is in LP(L) because w − (−) preserves all set-theoretic boolean operations.
6. An atom amounts to specifying X or its complement for each X ∈ LW(L) i.e. an E L equivalence-class.
7. Follows by definition via (6).
Note 4.4.4 (cl L preserves unions).
The fixpoints (closed sets) of every closure operator are closed under intersections. By Lemma 4.4.3, LP(L) is also closed under unions, which is not a general property of closure operators. Then the closed sets form a distributive lattice, in fact a boolean lattice, because LP(L) is closed under relative complement. ∎
We’ve now arrived at the main definition of this section.
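Before stating it, note that Definition 4.4.1 and Lemma 4.4.3.6-7 are effectively computable once L is presented by a complete dfa: two words are E_L-equivalent precisely when they are accepted from the same reachable states, since the reachable states realise the left word quotients LW(L). A minimal sketch under that encoding (the dictionary representation of the dfa is an assumption for illustration):

```python
# E_L from Definition 4.4.1 over a complete dfa (delta, q0, final):
# the atom (E_L-class) of a word is determined by the set of reachable
# states from which the word is accepted, cf. Lemma 4.4.3.6.

def run(delta, state, word):
    for letter in word:
        state = delta[(state, letter)]
    return state

def reachable_states(delta, q0, alphabet):
    seen, stack = {q0}, [q0]
    while stack:
        q = stack.pop()
        for letter in alphabet:
            r = delta[(q, letter)]
            if r not in seen:
                seen.add(r)
                stack.append(r)
    return seen

def atom_profile(delta, q0, final, alphabet, word):
    return frozenset(q for q in reachable_states(delta, q0, alphabet)
                     if run(delta, q, word) in final)

def same_atom(delta, q0, final, alphabet, u, v):
    return (atom_profile(delta, q0, final, alphabet, u)
            == atom_profile(delta, q0, final, alphabet, v))

# complete dfa for the words over {a,b} ending in "ab"
delta = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 1, (1, 'b'): 2,
         (2, 'a'): 1, (2, 'b'): 0}
assert same_atom(delta, 0, {2}, "ab", "ab", "aab")
assert not same_atom(delta, 0, {2}, "ab", "a", "b")
```

By Lemma 4.4.3.7, cl L(X) is then the union of the classes of the words in X, and the distinct profiles enumerate the atoms J(LP(L)).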
Definition 4.4.5 (Atomizer). Each L-extension e ∶ LQ(L) ↣ (T, δ a) has an associated join-semilattice morphism: λY. cl L(acc jdfa(e)(Y)) ∶ T → LP(L). Restricting to the image yields the atomizer at e ∶ T ↠ At e where At e ∶= (At e, ∪, ∅) is the atomized semilattice. ∎
Note 4.4.6 (Atomizer’s action). The atomizer constructs the closure of the accepted language. We often construct L-extensions by simplifying a JSL-dfa, in which case the atomizer is a domain/codomain restriction of cl L. ∎
Lemma 4.4.7.
The atomizer is a well-defined join-semilattice morphism.
Proof.
Fixing an L-extension e ∶ LQ(L) ↣ (T, δ a), at e is a well-defined function because cl L(X) ∈ LP(L). Given δ ∶= jdfa(e) then acc δ ∶ T ↠ langs(δ) is a well-defined surjective JSL f-morphism by Definition 3.4.1. Finally cl L restricts to a morphism cl L ∶ langs(δ) → LP(L) because ∅ = cl L(∅) is atomic and cl L preserves binary unions by Lemma 4.4.3.4.
Recall the canonical quotient-atom bijection κ L from Theorem 3.5.5. We now use it to represent each atomized semilattice inside Dep.
Definition 4.4.8 (Atomizer relation H e). Each L-extension e ∶ LQ(L) ↣ (T, δ a) has an associated atomizer relation, H e ⊆ J(T) × LW(L r), where H e(j, v − L r) ∶⇐⇒ v r ∈ at e(j). Furthermore if e is simplified this becomes H e(j, v − L r) ⇐⇒ v r ∈ cl L(j). ∎
This important concept is preserved under simplification of the L-extension.
Lemma 4.4.9 (H e ≅ H simple(e)). We have the
Dep-isomorphism given by the commuting square whose upper witness is the identity relation ∆ LW(L r) ∶ LW(L r) → LW(L r), whose lower witness is acc jdfa(e) ∶ J(T) → J(langs(e)), and whose vertical relations are H e ⊆ J(T) × LW(L r) and H simple(e) ⊆ J(langs(e)) × LW(L r).
Proof.
The diagram commutes by unwinding the definitions, recalling each state Y in simple ( jdfa ( e )) accepts Y .Since Open H e = Open H simple ( e ) , applying Open yields an identity morphism (see Note 2.2.11), so this
Dep-morphism is actually an isomorphism.
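The theorem below identifies the atomized semilattice with Open H e. As a working approximation of Open (an assumption based on how the proof uses it, not the official definition from Section 2), take the open sets of a finite relation to be all unions of its rows; under inclusion these form a finite join-semilattice:

```python
# Union-closure of the rows R[j] of a finite relation, a stand-in for
# Open R (an assumption for illustration). Rows are given as frozensets.

from itertools import combinations

def open_sets(rows):
    opens = {frozenset()}                      # the empty join
    for size in range(1, len(rows) + 1):
        for combo in combinations(rows, size):
            opens.add(frozenset().union(*combo))
    return opens

rows = [frozenset({'x'}), frozenset({'y'}), frozenset({'x', 'y'})]
print(len(open_sets(rows)))  # 4 open sets: {}, {x}, {y}, {x,y}
```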
Theorem 4.4.10 ( At e ≅ Open H e ) . For any L -extension e we have the join-semilattice isomorphism: θ e ∶ At e → Open H e θ e ( Y ) ∶ = { w − L r ∶ w ∈ Y r } θ − e ( S ) ∶ = { w ∈ Σ ∗ ∶ w − L r ∈ S } r . Proof.
Recall that the atomized semilattice At e is a sub join-semilattice of LP ( L ) . Concerning the latter, we mayinstantiate Proposition 2.2.21 with J LP ( L ) ∶ = LP ( L ) (every element) and M LP ( L ) ∶ = M ( LP ( L )) (the coatoms). Weimmediately obtain the Dep -isomorphism: I − LP ( L ) ∶ Pirr LP ( L ) → G where G ∶ = ⊈ ∣ LP ( L ) × M ( LP ( L )) and thus the composite join-semilattice isomorphism: α ∶ = LP ( L ) rep LP ( L ) ÐÐÐÐ→
Open Pirr LP ( L ) Open I − LP ( L ) ÐÐÐÐÐÐ→
Open G with action Y ↦ rep LP ( L ) ( Y ) . To clarify, α acts as rep LP ( L ) because each Y ∈ O ( Pirr LP ( L )) is downwards-closed in M ( LP ( L )) w.r.t. inclusion,and ( I − LP ( L ) ) ˘ + [ Y ] constructs the downwards-closure (see Proposition 2.2.21). It follows that At e ⊆ LP ( L ) may berepresented as a sub join-semilattice of Open G . Since at e is surjective we know J ∶ = at e [ J ( T )] join-generates theatomized semilattice At e . Then: α restricts to the isomorphism β ∶ At e → Open H where H ∶ = at e ∣ J ( T ) × J ; G .To explain, each cl L ( j ) ∈ J ( At e ) ⊆ J LP ( L ) satisfies rep LP ( L ) ( cl L ( j )) = G [ cl L ( j )] , and all other open sets are unions ofthem. To construct the desired isomorphism θ e recall the bijection κ L ∶ LW ( L r ) → J ( LP ( L )) from Theorem 3.5.5, andthe bijection J ( LP ( L )) → M ( LP ( L )) between atoms and coatoms (relative complement). Consider the relations: M ( LQ ( L )) f / / LW ( L r ) J ( T ) H O O ∆ J ( T ) / / J ( T ) H e O O where the composite bijection f has action J w K E L ↦ κ − L ( J w K E L ) = ( w r ) − L r . If they commute we have a Dep -isomorphism because the lower and upper witnesses are bijections. Then let us calculate: H ; f ( j, v − L r ) ⇐⇒ ∃ w ∈ Σ ∗ . [ at e ( j ) ⊈ J w K E L ∧ f ( J w K E L ) = v − L r ] ⇐⇒ at e ( j ) ⊈ J v r K E L (A) ⇐⇒ J v r K E L ⊆ at e ( j ) (atom vs. coatom) ⇐⇒ v r ∈ at e ( j ) (via Lemma 4.4.3.7) ⇐⇒ H e ( j, v − L r ) (by definition).62oncerning (A), ( ⇒ ) follows because we know ( w r ) − L r = v − L r and thus J w K E L = J v r K E L because κ L is injective.Conversely ( ⇐ ) follows by choosing w ∶ = v r . So we have the isomorphism θ e ∶ = Open H e ○ β with action: Y ↦ f [ rep LP ( L ) ( Y )] (by Note 2.2.11) = { f ( J w K E L ) ∶ Y ⊈ J w K E L } (def. of rep) = {[ w r ] − L r ∶ Y ⊈ J w K E L } (def. of f ) = {[ w r ] − L r ∶ w ∈ Y } ( Y is atomic). Recall the notion of L -covering H i.e. Definition 4.1.1. 
They amount to biclique edge-coverings of the dependency relation DR L. They are legitimate if their induced nfa N H (defined over the bicliques) accepts L. Crucially H e is a legitimate L-covering for any L-extension e.
Theorem 4.5.1. H e is a legitimate L-covering for any L-extension e, and ⟨L, H e⟩− ; H e = DR L where ⟨L, H e⟩−(u − L, j) ⇐⇒ acc jdfa(e)(j) ⊆ u − L.
Proof.
Denote the acceptance map α ∶ = acc jdfa ( e ) for brevity. Observe α ( j ) ⊆ u − L ⇐⇒ at e ( j ) ⊆ u − L because u − L is atomic w.r.t. L . We first compute ⟨ L, H e ⟩ − without knowing H e is an L -covering. Afterwards we’ll verify theclaimed equality. ⟨ L, H e ⟩ − ( u − L, j ) ∶ ⇐⇒ H e [ j ] ⊆ DR L [ u − L ] ⇐⇒ ∀ v ∈ Σ ∗ . [ v r ∈ at e ( j ) ⇒ uv r ∈ L ] ⇐⇒ ∀ v ∈ Σ ∗ . [ v ∈ at e ( j ) ⇒ v ∈ u − L ] ⇐⇒ at e ( j ) ⊆ u − L ⇐⇒ α ( j ) ⊆ u − L (see above). ⟨ L, H e ⟩ − ; H e ( u − L, v − L r ) ⇐⇒ ∃ j ∈ J ( T ) . [ α ( j ) ⊆ u − L ∧ v r ∈ at e ( j )] (by def.) ⇐⇒ ∃ j ∈ J ( T ) . [ at e ( j ) ⊆ u − L ∧ v r ∈ at e ( j )] (see above) ⇐⇒ v r ∈ u − L (A) ⇐⇒ DR L ( u − L, v − L r ) .Concerning (A), ( ⇒ ) follows immediately. As for ( ⇐ ) , since u − L = ⋃ α [ S ] for some S ⊆ J ( T ) we deduce u − L = ⋃ at e [ S ] , so that v r ∈ u − L implies v r ∈ at e ( j ) ⊆ u − L for some j ∈ J ( T ) . Then H e is an L -covering by Lemma 4.1.4.1.It remains to establish the legitimacy of H e . By Lemma 4.1.4.6 we at least know L ( N H e ) ⊆ L , and it remains toprove the reverse inclusion. First let M be the lower nfa of Airr ( jdfa ( e )) , which accepts L by Note 3.2.10. Thesetwo nfas have the same states J ( langs ( e )) ; concerning their transitions: M a ( α ( j ) , α ( j )) ⇐⇒ α ( j ) ⊆ a − α ( j ) (by definition) Ô⇒ ∀ X ∈ LW ( L ) . [ α ( j ) ⊆ X ⇒ α ( j ) ⊆ a − X ] ⇐⇒ N H e ,a ( α ( j ) , α ( j )) (see ⟨ L, H e ⟩ − above).Moreover (a) I M = I N H e since α ( j ) ⊆ L ⇐⇒ ⟨ L, H e ⟩ ˘ − ( L, α ( j )) and (b) F M ⊆ F N H e because ε ∈ α ( j ) Ô⇒ ε ∈ at e ( j ) ⇐⇒ H e ( j, L r ) . It follows that N H e simulates M i.e. L ⊆ L ( N H e ) and we are done. Corollary 4.5.2 (Maximal legitimate L -coverings) . H ◇◇ e is a maximal legitimate L -covering.Proof. By Theorem 4.5.1 we know H e is a legitimate L -covering. Then H ◇◇ e is a maximal L -covering by Lemma4.1.4.7.c and legitimate by Lemma 4.1.4.7.e. Corollary 4.5.3. 
If e is simplified and transition-maximal then N H e is the lower nfa of Airr(jdfa(e)).
Proof. Let M be the lower nfa of Airr(jdfa(e)). In the proof of Theorem 4.5.1 we showed N H e is an extension of M. Then by transition-maximality M = N H e.
Each nfa canonically induces a legitimate L-covering – a pattern which the Kameda-Weiner algorithm can recognise. Moreover every transition-maximal union-free nfa (see Theorem 4.2.8) arises as an induced nfa.
Corollary 4.5.4.
Fix any nfa N accepting L .1. H lext ( sc ( N )) is a legitimate L -covering.2. If N is transition-maximal and union-free then N ≅ N H lext ( sc (N)) .Proof.
1. The full subset construction sc(N) = Det(dep(N)) accepts L by Note 3.2.10, so the well-defined L-extension lext(sc(N)) accepts L by Note 4.3.5. Then the claim follows by Theorem 4.5.1.
2. Since N is transition-maximal and union-free it is isomorphic to simple ∨ (N) by Corollary 4.2.15. Then N is isomorphic to the lower nfa of Airr(simple(sc(N))), so the claim follows by Corollary 4.5.3.
Fixing any regular language L, there are finitely many languages arising as a union of the atoms J(LP(L)). Recall that these languages are called atomic (Definition 4.4.1). Likewise there are positively atomic languages (a subclass of the atomic ones) and also the subatomic languages (a superclass of the atomic ones).
Definition 4.6.1 (Atomic, positively atomic and subatomic nfas and L-extensions). Fix an nfa N accepting L.
1. N is atomic if each state accepts an atomic language (equiv. langs(N) ⊆ LP(L)).
2. N is positively atomic if each state accepts a positively atomic language (equiv. langs(N) ⊆ LD(L)).
3. N is subatomic if each individual state accepts a subatomic language (equiv. langs(N) ⊆ LRP(L)).
Finally, an L-extension e is atomic (resp. positively atomic, subatomic) if the lower nfa of Airr(jdfa(e)) is atomic (resp. positively atomic, subatomic). ∎
Lemma 4.6.2 (Atomic L-extensions). The following statements concerning L-extensions are equivalent.
1. e is atomic.
2. langs(jdfa(e)) ⊆ LP(L).
3. simple(jdfa(e)) is a sub JSL-dfa of dfa ¬ (L).
4. at e defines a surjective JSL-dfa morphism to a sub
JSL -dfa of dfa ¬ ( L ) .Proof. – ( ) ⇐⇒ ( ) : By Corollary 3.2.11 the languages accepted by the lower nfa (varying over subsets) are preciselythose accepted by jdfa ( e ) (varying over individual states).– ( ) ⇐⇒ ( ) : Follows because the transition structure of the two JSL -dfas is defined in the same way.– ( ) Ô⇒ ( ) : We know each state of jdfa ( e ) accepts an atomic language, so at e acts in the same way as the JSL -dfa morphism acc jdfa ( e ) . Then at e defines a JSL -dfa morphism to a sub
JSL -dfa of dfa ¬ ( L ) .– ( ) Ô⇒ ( ) : The dfa morphism informs us that each state accepts an atomic language. Definition 4.6.3 (Pseudo-atomicity) . An L -extension e ∶ LQ ( L ) ↣ ( T , δ a ) is pseudo-atomic if the kernel of theatomizer ker ( at e ) ⊆ T × T is closed under λ ( x, y ) . ( δ a ( x ) , δ a ( y )) for each a ∈ Σ. ∎ However, induced nfas needn’t be transition-maximal nor union-free. e is pseudo-atomic if the join-semilattice congruence ker ( at e ) ⊆ T × T is also a congruence for each unaryoperation δ a ∶ T → T . We’re going to show that atomicity and pseudo-atomicity are equivalent concepts. Lemma 4.6.4 (Pseudo-atomic L -extensions) .
1. Every atomic L -extension is pseudo-atomic.2. simple ( − ) preserves pseudo-atomicity.3. e ∶ LQ ( L ) ↣ ( T , δ a ) is pseudo-atomic iff the atomized semilattice admits the L -extension structure: ι e ∶ LQ ( L ) ↪ ( At e , φ a ) φ a ( at e ( t )) ∶ = at e ( δ a ( t )) . Proof.
1. By Lemma 4.6.2.4 the dfa morphism at e satisfies at e ( δ a ( t )) = δ a ( at e ( t )) for each a ∈ Σ. The latter implies e ispseudo-atomic.2. Let δ ∶ = jdfa ( e ) and e ∶ = simple ( e ) recalling Definition 4.3.6. Recalling Lemma 3.4.2.3, at e ( acc δ ( t )) = cl L ( acc δ ( t )) = at e ( t ) for every t ∈ T . Then given any Y i = acc δ ( t i ) , at e ( Y ) = at e ( Y ) Ô⇒ at e ( t ) = at e ( t ) (see above) Ô⇒ at e ( δ a ( t )) = at e ( δ a ( t )) (by assumption) ⇐⇒ cl L ( acc δ ( δ a ( t ))) = cl L ( acc δ ( δ a ( t ))) (by definition) ⇐⇒ cl L ( a − ( acc δ ( t ))) = cl L ( a − ( acc δ ( t ))) ( acc δ a JSL -dfa morphism) ⇐⇒ cl L ( a − Y ) = cl L ( a − Y ) ⇐⇒ at e ( a − Y ) = at e ( cl L ( a − Y )) .Hence the simpification of e is also pseudo-atomic.3. If ι e is a well-defined L -extension then whenever at e ( t ) = at e ( t ) we deduce at e ( δ a ( t )) = φ a ( at e ( t )) = φ a ( at e ( t )) = at e ( δ a ( t )) , so that e is pseudo-atomic. Conversely, at e is stable under each δ a so the endo-morphisms φ a ∶ At e → At e are well-defined. Since δ ∶ = jdfa ( e ) accepts L , by varying the initial state it acceptsevery Y ∈ LQ ( L ) ⊆ LP ( L ) , so the inclusion ι e ∶ LQ ( L ) ↪ At e is a well-defined join-semilattice morphism. Observethat each Y ∈ LQ ( L ) has some t Y ∈ T with at e ( t Y ) = Y . Then the calculation: φ a ( ι e ( Y )) = φ a ( Y ) = φ a ( at e ( t Y )) = at e ( δ a ( t Y )) (def. of φ a ) = cl L ( acc jdfa ( e ) ( δ a ( t Y ))) (def. of at e ) = cl L ( a − ( acc jdfa ( e ) ( t Y ))) ( acc δ a JSL -dfa morphism) = a − Y ( LP ( L ) closed under a − ( − ) ) = ι e ( a − Y ) establishes that ι e is an L -extension. Theorem 4.6.5. An L -extension is atomic iff it is pseudo-atomic.Proof. If an L -extension is atomic it is pseudo-atomic by Lemma 4.6.4. Conversely given a pseudo-atomic L -extension e ∶ LQ ( L ) ↣ ( T , δ a ) then e ∶ = simple ( e ) is pseudo-atomic by Lemma 4.6.4.2. 
By Lemma 4.6.4.3 we have the L-extension ι e ∶ LQ(L) ↪ (At e, φ a) where φ a ∶= λX.a − X ∶ At e → At e, since cl L(a − X) = a − X. Then e is simplified and each state X accepts X ∈ At e, so e is atomic.
Tamm and Brzozowski proved an nfa N is atomic iff rev(N)’s reachable subset construction is state-minimal [BT14]. We reprove their result using our terminology and then:
– refine their result i.e. N is positively atomic (see Definition 4.4.1.3) iff the dfa isomorphism is also an order isomorphism.
– generalise their result i.e. N is subatomic (see Definition 4.4.1.4) iff rsc(rev(N))’s transition monoid is suitably isomorphic to L r’s syntactic monoid.
Theorem 4.6.6 (Atomicity and rsc(rev(N))). Let N be an nfa accepting L.
1. N is atomic iff rsc(rev(N)) ≅ dfa(L r) [BT14].
2. N is positively atomic iff the dfa isomorphism from (1) is also an order isomorphism w.r.t. inclusion.
3. N is subatomic iff rsc(rev(N))’s transition monoid is isomorphic to L r’s syntactic monoid via: λ JwK TM(rsc(rev(N))). JwK S L r ∶ TM(rsc(rev(N))) → Syn(L r).
Proof.
1. Let δ ∶ = sc ( dfa ( L r )) be the dual of dfa ¬ ( L ) – see Corollary 3.5.8.Assuming N is an atomic nfa, we have the composite JSL -dfa morphism: reach ( δ ) ↪ δ q ↠ reach ( sc ( rev ( N )))´¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¸¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¶ φ . The surjection q arises by dualising ι ∶ simple ( sc ( N )) ↪ dfa ¬ ( L ) (see Lemma 4.6.2) and applying Corollary 3.4.5and Corollary 3.5.8. Viewing reach ( δ ) as its underlying dfa, consider its classically reachable part: reach ( reach ( δ )) = reach ( δ ) (by definition of reach ( − ) ) = reach ( sc ( dfa ( L r ))) = rsc ( rev ( rev ( dfa ( L r )))) (by Note 3.2.10) = rsc ( dfa ( L r )) ≅ dfa ( L r ) (holds for any dfa).The above observation provides the injective dfa morphism ψ below: χ ³¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹·¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹µ dfa ( L r ) ψ ↣ reach ( δ ) φ → reach ( sc ( rev ( N ))) Since dfa ( L r ) is state-minimal, the composite dfa morphism χ is injective. Since dfa ( L r ) is reachable we obtainthe dfa isomorphism dfa ( L r ) ≅ reach ( reach ( sc ( rev ( N ))))) = rsc ( rev ( N )) recalling Note 3.2.10.Conversely fix an nfa N such that dfa ( L r ) ≅ rsc ( rev ( N )) . By definition of reach ( − ) and Note 3.2.10, reach ( reach ( sc ( rev ( N ))))) = reach ( sc ( rev ( N ))) = rsc ( rev ( N )) . Then by definition of reach ( − ) we have an injective dfa morphism χ ∶ dfa ( L r ) ↣ reach ( sc ( rev ( N )))) . By takingthe free JSL -dfa on a dfa (Theorem 3.2.17) this extends to a
JSL -dfa morphism ˆ χ ∶ δ → reach ( sc ( rev ( N ))) .Applying duality, Corollary 3.5.8 and Corollary 3.4.5 we obtain a JSL -dfa morphism simple ( sc ( N )) → dfa ¬ ( L ) .Then every language accepted by N is atomic, so that N is itself atomic.2. Let δ ∶ = Det ( dfa ↓ ( L r ) , ⊆ , rev ( dfa ↓ ( L r ))) be the dual of dfa ∧ ( L ) – see Corollary 3.5.13.Assuming N is positively atomic, we have the composite JSL -dfa morphism: reach ( δ ) ↪ δ q ↠ reach ( sc ( rev ( N )))´¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¸¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¶ φ . The surjection q arises by dualising ι ∶ simple ( sc ( N )) ↪ dfa ∧ ( L ) and applying Corollary 3.4.5 and Corollary3.5.13. Repeating the argument from (1) we obtain the injective dfa morphism ψ below: dfa ( L r ) ψ ↣ reach ( δ ) φ → reach ( sc ( rev ( N ))) . dfa ( L r ) ≅ rsc ( rev ( N )) . To see it isan order-isomorphism w.r.t. inclusion, first observe ψ has action λu − L r . { Y ∈ LW ( L r ) ∶ Y ⊆ u − L r } so itpreserves/reflects the inclusion ordering. Finally, φ certainly preserves inclusions since it is join-semilatticemorphism. It reflects inclusions when restricted to ψ [ dfa ( L r )] because simplicity forbids additional inclusions.Conversely, fix an nfa N such that dfa ( L r ) ≅ rsc ( rev ( N )) where this isomorphism also preserves and reflectsinclusions. Repeating (1) yields the injective dfa morphism χ ∶ dfa ( L r ) ↣ reach ( sc ( rev ( N ))) , additionallypreserving inclusions. In fact, χ is an ordered dfa morphism (see Definition 3.1.3) so applying the respectivefree construction (Theorem 3.2.18) provides ˆ χ ∶ δ → reach ( sc ( rev ( N ))) . Applying duality, Corollary 3.5.13 andCorollary 3.4.5 yields simple ( sc ( N )) → dfa ∧ ( L ) , so N is positively atomic.3. Assume N is a subatomic nfa. 
Then we have ι ∶ γ ↪ dfa ¬ Syn ( L ) where γ ∶ = simple ( sc ( N )) . Since ι ’s codomainis right-quotient closed we also have the JSL -dfa inclusion morphism ι ∶ rqc ( γ ) ↪ dfa ¬ Syn ( L ) . Dualising, andapplying Theorem 3.7.11 and Theorem 3.6.6, we obtain a surjective morphism q ∶ sc ( dfa Syn ( L r ) ) ↠ ts ( γ „ ) .Furthermore applying right-quotient closure to the inclusion dfa ( L ) ↪ γ yields dfa Syn ( L ) = rqc ( dfa ( L )) ↪ rqc ( γ ) .Dualising the latter and applying Corollary 3.7.16 we obtain a surjective morphism q ∶ ts ( γ „ ) ↠ syn ( L r ) onto L r ’s syntactic semiring (viewed as a JSL -dfa). Then consider the composite
JSL -dfa morphism: sc ( dfa Syn ( L r ) ) q Ð→→ ts ( γ „ ) q Ð→→ syn ( L r ) . The classically reachable part of the domain
JSL -dfa consists of singleton sets and is isomorphic to dfa
Syn ( L r ) .Likewise by Corollary 3.7.8 the reachable part of the codomain is isomorphic to dfa Syn ( L r ) . The image of areachable dfa under a dfa morphism is reachable, so the composite morphism restricts to: reach ( sc ( dfa Syn ( L r ) )) q / / / / reach ( ts ( γ „ )) q / / / / reach ( syn ( L r )) dfa Syn ( L r )≅ dfa Syn ( L r )≅ Then q is bijective and hence a dfa isomorphism, so that reach ( ts ( γ „ )) ≅ dfa Syn ( L r ) too. Importantly, reach ( ts ( γ „ )) ≅ dfa TM ( rsc ( rev ( N ))) because γ „ ≅ reach ( sc ( rev ( N ))) by Corollary 3.4.5 and Example 3.2.12, so we can apply Lemma 3.7.3. Finally,the action of the dfa isomorphism dfa TM ( rsc ( rev ( N ))) ≅ dfa Syn ( L r ) defines the desired monoid isomorphism.Conversely suppose TM ( rsc ( rev ( N ))) ≅ Syn ( L r ) via the generator-preserving mapping J w K TM ( rsc ( rev ( N ))) ↦ J w K Syn ( L r ) . Its action defines a dfa isomorphism dfa TM ( rsc ( rev ( N ))) ≅ dfa Syn ( L r ) , where the conditionsconcerning the initial state and transitions are obvious. The final states are preserved/reflected because: J w K TM ( rsc ( rev ( N ))) ∈ F dfa TM ( rsc ( rev (N))) ⇐⇒ ˘ N w [ I rev ( N ) ] ∩ F rev ( N ) ≠ ∅ ⇐⇒ ˘ N w [ F N ] ∩ I N ≠ ∅ ⇐⇒ w r ∈ L ⇐⇒ w ∈ L r ⇐⇒ J w K S Lr ∈ F dfa Syn ( Lr ) .Applying Lemma 3.7.3 we deduce dfa Syn ( L r ) ≅ dfa TM ( rsc ( rev ( N ))) ≅ reach ( ts ( sc ( rev ( N )))) . Then we havea dfa morphism f ∶ dfa Syn ( L r ) → ts ( sc ( rev ( N ))) . Applying the free construction (Theorem 3.2.17) we obtainˆ f ∶ sc ( dfa Syn ( L r ) ) → ts ( sc ( rev ( N ))) . It is actually surjective because ts ( − ) constructs JSL -reachable machines.Dualising this free-extension yields: rqc ( sc ( N )) ≅ ( ts ( sc ( rev ( N )))) „ ˆ f „ Ð→ ( sc ( dfa Syn ( L r ) )) „ ≅ dfa ¬ Syn ( L ) . The left isomorphism follows by Theorem 3.7.11 and Example 3.4.6, whereas the right one follows by Theorem3.6.6. 
Finally, since simple(sc(N)) ↪ rqc(sc(N)) we deduce N is subatomic.

References

[ADN92] André Arnold, Anne Dicky, and Maurice Nivat. A note about minimal non-deterministic automata. Bulletin of the EATCS, 47:166–169, 1992.
[BFRK08] Sergei Bezrukov, Dalibor Fronček, Steven J. Rosenberg, and Petr Kovář. On biclique coverings. Discrete Mathematics, 308(2):319–323, 2008.
[Brz64] Janusz A. Brzozowski. Derivatives of regular expressions. J. ACM, 11(4):481–494, October 1964.
[BT14] Janusz Brzozowski and Hellis Tamm. Theory of átomata. Theoretical Computer Science, 539:13–27, 2014.
[Con71] J. H. Conway. Regular Algebra and Finite Machines. Printed in GB by William Clowes & Sons Ltd, 1971.
[DLT01] François Denis, Aurélien Lemay, and Alain Terlutte. Residual Finite State Automata, pages 144–157. Springer Berlin Heidelberg, Berlin, Heidelberg, 2001.
[GH06] Hermann Gruber and Markus Holzer. Finding Lower Bounds for Nondeterministic State Complexity Is Hard, pages 363–374. Springer Berlin Heidelberg, Berlin, Heidelberg, 2006.
[GPJL91] David A. Gregory, Norman J. Pullman, Kathryn F. Jones, and J. Richard Lundgren. Biclique coverings of regular bigraphs and minimum semiring ranks of regular matrices. Journal of Combinatorial Theory, Series B, 51(1):73–89, 1991.
[GW05] George Grätzer and Friedrich Wehrung. Tensor products of semilattices with zero, revisited. arXiv Mathematics e-prints, page math/0501436, Jan 2005.
[Jip12] Peter Jipsen. Categories of algebraic contexts equivalent to idempotent semirings and domain semirings. In Wolfram Kahl and Timothy G. Griffin, editors, Relational and Algebraic Methods in Computer Science, pages 195–206, Berlin, Heidelberg, 2012. Springer Berlin Heidelberg.
[KW70] T. Kameda and P. Weiner. On the state minimization of nondeterministic finite automata. IEEE Trans. Comput., 19(7):617–627, July 1970.
[LRT09] Michel Latteux, Yves Roos, and Alain Terlutte. Minimal nfa and biRFSA languages. RAIRO - Theoretical Informatics and Applications, 43(2):221–237, 2009.
[MAMU14] Robert S. R. Myers, Jiří Adámek, Stefan Milius, and Henning Urbat. Canonical Nondeterministic Automata, pages 189–210. Springer Berlin Heidelberg, Berlin, Heidelberg, 2014.
[Mar75] George Markowsky. The factorization and representation of lattices. Transactions of the American Mathematical Society, 203:185–200, 1975.
[Orl77] James Orlin. Contentment in graph theory: Covering graphs with cliques. Indagationes Mathematicae (Proceedings), 80(5):406–424, 1977.
[Pol01] Libor Polák. Syntactic Semiring of a Language, pages 611–620. Springer Berlin Heidelberg, Berlin, Heidelberg, 2001.
[Tam10] Hellis Tamm. Some Minimality Results on Biresidual and Biseparable Automata, pages 573–584. Springer Berlin Heidelberg, Berlin, Heidelberg, 2010.
[Tam16] Hellis Tamm. New interpretation and generalization of the Kameda-Weiner method. In ICALP, 2016.
[Wat01] Valerie L. Watts. Boolean rank of Kronecker products.