Hamiltonicity via cohomology of right-angled Artin groups
RAMÓN FLORES, DELARAM KAHROBAEI, AND THOMAS KOBERDA
Abstract.
Let Γ be a finite graph and let A(Γ) be the corresponding right-angled Artin group. We characterize the Hamiltonicity of Γ via the structure of the cohomology algebra of A(Γ). In doing so, we define and develop a new canonical graph associated to a matrix, which provides a novel perspective on the matrix determinant. We give some applications to complexity theory, zero–knowledge proof protocols, and models for random graphs.

1. Introduction
Let Γ be a finite, undirected graph. A basic problem in graph theory that has been studied in a dizzyingly vast body of literature (see [6] for instance) is determining when Γ admits a Hamiltonian path or a Hamiltonian cycle, which is to say either a path or a cycle that visits every vertex of Γ exactly once. One of the many reasons that the problem of determining whether Γ is Hamiltonian (i.e. admits such a path or cycle) is interesting is that it defies a straightforward characterization, and gives rise to the prototypical NP–complete decision problem [2, 17, 11]. In this paper, we develop an algebraic framework for characterizing Hamiltonicity of simplicial graphs via right-angled Artin groups, and in particular the cohomology of these groups.

To set up the discussion, let Γ be a finite simplicial graph (i.e. a graph that is also a simplicial complex) with vertex set V(Γ) and edge set E(Γ). We write

A(Γ) = ⟨ V(Γ) | [v, w] = 1, {v, w} ∈ E(Γ) ⟩

for the right-angled Artin group on Γ. The isomorphism type of A(Γ) determines the isomorphism type of Γ, as is well-known from the work of many authors [7, 19, 16]. Thus, the combinatorics of Γ should be reflected in the algebra of A(Γ).

Now, let V and W be vector spaces over a field F with V finite dimensional, and let q : V × V → W be a (anti-)symmetric bilinear pairing on V. We say that (V, W, q) is
Hamiltonian if whenever {w_1, . . . , w_n} is a basis for V then there is a permutation σ ∈ S_n such that for all 1 ≤ i < n, we have q(w_{σ(i)}, w_{σ(i+1)}) ≠ 0.

The following two results are the main theorems of the paper. The first intrinsically characterizes graphs admitting Hamiltonian paths via the cohomology ring of the corresponding right-angled Artin groups.

Theorem 1.1.
Let Γ be a finite simplicial graph and let F be a field. We set V = H^1(A(Γ), F), W = H^2(A(Γ), F), and let q be the cup product pairing V × V → W. Then Γ admits a Hamiltonian path if and only if (V, W, q) is Hamiltonian.

Date: January 26, 2021.

We are able to prove an analogue of Theorem 1.1 for graphs admitting Hamiltonian cycles as well. We will say that (
V, W, q) above is cyclic Hamiltonian if for every basis {w_1, . . . , w_n} of V, there is a permutation σ ∈ S_n such that for all 1 ≤ i < n, we have q(w_{σ(i)}, w_{σ(i+1)}) ≠ 0, and also q(w_{σ(n)}, w_{σ(1)}) ≠ 0.

Theorem 1.2.
Let (Γ, F, V, W, q) be as in Theorem 1.1. Then Γ admits a Hamiltonian cycle if and only if (V, W, q) is cyclic Hamiltonian.

Throughout the paper, we will concentrate on the proof of Theorem 1.1. Relatively mild generalizations are needed to obtain Theorem 1.2, and these are spelled out in Section 3.

Theorem 1.1 fits into another circle of ideas as follows. We start with the following general guiding problem:
Problem 1.3.
Let P be a property of finite simplicial graphs. Find a property Q of groups such that Γ has P if and only if A(Γ) has Q.

We insist that Q be a property of the isomorphism type of the group, so that in particular, Q should be independent of any generating set. Some satisfactory answers to Problem 1.3 are known, for instance when P is the property of being a nontrivial join [20], being disconnected [5], containing a square [13, 14], being a cograph [14, 15], being a finite tree or complete bipartite graph [12], admitting a nontrivial automorphism [9], being k–colorable [8], and fitting in a sequence of expanders [10].

We remark that with the setup of Theorems 1.1 and 1.2, the graph Γ is connected if and only if (V, W, q) is pairing–connected [10]. That is, whenever V ≅ V_1 ⊕ V_2 is a nontrivial direct sum decomposition of V then there are vectors v_1 ∈ V_1 and v_2 ∈ V_2 such that q(v_1, v_2) ≠ 0. It is immediate to see that if (V, W, q) is Hamiltonian or cyclic Hamiltonian then it is also pairing–connected.

One of the key ingredients in proving Theorem 1.1 is a certain combinatorial object called the two–row graph associated to a matrix (see Subsection 2.3 below). The combinatorics of this graph appear to be difficult to understand in general. The crucial observation for us is that if a matrix is invertible, then its two–row graph contains a Hamiltonian path (Lemma 2.6). This result is in fact equivalent to the following purely linear algebraic fact. Here, we say that two rows of a square matrix A are null-connected if all the consecutive 2 × 2 minors of the 2 × n submatrix of A spanned by the two rows are singular; cf. Definition 2.7.

Proposition 1.4.
Let A ∈ GL_n(F). Then there exists a reordering of the rows of the matrix A such that no consecutive rows are null-connected.

Establishing this fact is nontrivial, and requires the development of some linear algebraic tools which, to the knowledge of the authors, are entirely novel. These ideas, which can be found in Subsection 2.4, give a perspective on the determinant of a matrix which appears not to have been known before, even though we are partially inspired by classical approaches which rely on the computation of 2 × 2 minors. The cyclic analogue of Proposition 1.4 admits an analogous reinterpretation in terms of "toral" matrices, i.e. assuming that the first and last row are consecutive, and also the first and last column.

The last section is devoted to the analysis of applications of our results. First, we discuss different implications of the previous theory for Hamiltonicity testing, a crucial problem in complexity theory. For these applications, we generally assume that all coefficients lie in the field with two elements, and thus all linear algebraic considerations become finitistic. Then, we develop a zero–knowledge proof protocol, which turns out to be a variation of the standard one that uses Hamiltonian cycles. Recall that a zero–knowledge proof protocol (ZKP) is a probabilistic interactive proof system in which a prover demonstrates to the verifier that they are in possession of a piece of information without conveying additional data about the information. The reader may consult the recent book of N. Smart [21], wherein there is an extensive cryptographic formulation of ZKPs (cf. [18]). Smart frames the discussion of ZKPs in the setting of the graph isomorphism problem, which was at one point conjectured to be NP–complete but which is now known to be solvable in quasi–polynomial time by a celebrated result of Babai [3].
It is well-known, however, that ZKPs can be formulated in the context of any NP–complete problem [4]. We conclude the paper by describing some interplay between our algebraic notion of Hamiltonicity and the theory of random graphs, and by posing some problems that arise naturally in these contexts.

2. Hamiltonian vector spaces
In order to prove Theorem 1.1, we begin by gathering some preliminary facts.

2.1. Cohomology of right-angled Artin groups.
In this subsection, we recall some basic facts about the structure of the cohomology algebra of a right-angled Artin group A(Γ). The result recorded here is easy and well-known, and follows from standard methods in geometric topology together with the fact that the Salvetti complex associated to Γ is a classifying space for A(Γ). Some details are spelled out in [10], for instance.

Let V(Γ) = {v_1, . . . , v_n} and E(Γ) = {e_1, . . . , e_m} be the vertices and edges of Γ, and write ∪ for the cup product pairing on H^*(A(Γ), F), where here F denotes an arbitrary field.

Lemma 2.1.
Let Γ be a finite simplicial graph. Then there are bases {v_1^*, . . . , v_n^*} for H^1(A(Γ), F) and {e_1^*, . . . , e_m^*} for H^2(A(Γ), F) such that:
(1) We have v_i^* ∪ v_j^* = 0 if and only if {v_i, v_j} ∉ E(Γ);
(2) We have v_i^* ∪ v_j^* = ±e_k^* whenever {v_i, v_j} = e_k ∈ E(Γ).

2.2. The easy direction.
One direction of Theorem 1.1 is straightforward.
Lemma 2.2.
Let Γ be a finite simplicial graph, let V = H^1(A(Γ), F), let W = H^2(A(Γ), F), and let q be the cup product pairing V × V → W. If (V, W, q) is Hamiltonian then Γ admits a Hamiltonian path.

Proof. Let {v_1, . . . , v_n} be a list of the vertices in Γ, and write {v_1^*, . . . , v_n^*} for the corresponding dual cohomology classes in H^1(A(Γ), F) as in Lemma 2.1. Since (V, W, q) is Hamiltonian, we may re-index {v_1^*, . . . , v_n^*} so as to assume that q(v_i^*, v_{i+1}^*) ≠ 0 for 1 ≤ i < n. But then {v_i, v_{i+1}} ∈ E(Γ) and hence (v_1, . . . , v_n) forms a Hamiltonian path in Γ. □

The proof of the converse of Lemma 2.2 will occupy the remainder of this section.
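For small graphs, both the pairing of Lemma 2.1 and the Hamiltonian condition can be verified by brute force over the field with two elements. The following sketch is our own illustration (all names are hypothetical): a basis of V is encoded as the rows of an invertible matrix over F_2, and q(w_i, w_j) ≠ 0 exactly when some edge {k, l} of Γ carries an odd 2 × 2 minor of the coefficient matrix.

```python
from itertools import permutations, product

def det_mod2(M):
    # Gaussian elimination over F_2; returns 1 iff M is invertible mod 2
    M = [list(row) for row in M]
    n = len(M)
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] % 2), None)
        if piv is None:
            return 0
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c and M[r][c] % 2:
                M[r] = [(x + y) % 2 for x, y in zip(M[r], M[c])]
    return 1

def q_nonzero(M, i, j, edges):
    # q(w_i, w_j) ≠ 0 iff the coefficient on some edge class e_{kl}^*,
    # namely the 2x2 minor a_i^k a_j^l - a_i^l a_j^k, is odd
    return any((M[i][k] * M[j][l] + M[i][l] * M[j][k]) % 2 for k, l in edges)

def has_ham_ordering(M, edges):
    n = len(M)
    return any(all(q_nonzero(M, s[t], s[t + 1], edges) for t in range(n - 1))
               for s in permutations(range(n)))

def triple_is_hamiltonian(n, edges):
    # the definition quantifies over every basis, i.e. every invertible matrix
    all_rows = list(product((0, 1), repeat=n))
    return all(has_ham_ordering(M, edges)
               for M in product(all_rows, repeat=n)
               if det_mod2(M))

path_edges = [(0, 1), (1, 2)]          # P_3 admits a Hamiltonian path
star_edges = [(0, 1), (0, 2), (0, 3)]  # K_{1,3} does not
```

In accordance with Theorem 1.1, the triple attached to P_3 is Hamiltonian, while for the star K_{1,3} already some permuted standard basis fails.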
2.3. A linear algebraic reduction.
Here and for the remainder of the paper, we will use the notation a_i^j for entries in a matrix, where the subscript denotes the row and the superscript denotes the column. Rows of a matrix A will be denoted {r_1, . . . , r_n}, and the entry in the j-th column of r_i will be denoted r_i^j.

Let {w_1, . . . , w_n} be an arbitrary basis for V = H^1(A(Γ), F). We wish to show that there is a re-indexing of the basis {w_1, . . . , w_n} such that after relabeling the basis vectors, we have q(w_i, w_{i+1}) ≠ 0 for 1 ≤ i < n. We write

w_i = Σ_{j=1}^n a_i^j v_j^*

for suitable coefficients {a_i^j}_{1 ≤ i,j ≤ n} ⊂ F, so that A = (a_i^j)_{1 ≤ i,j ≤ n} is an invertible matrix over F. Thus, we may view the rows of A as corresponding to the expression of {w_1, . . . , w_n} in terms of the basis {v_1^*, . . . , v_n^*}, and a re-indexing of {w_1, . . . , w_n} is merely a permutation of the rows of A.

A matrix A ∈ M_n(F) will be called square–traceable if for all 1 ≤ i < n there exists 1 ≤ j < n such that the determinant of the minor

A_i^j = ( a_i^j      a_i^{j+1}
          a_{i+1}^j  a_{i+1}^{j+1} )

is nonzero.

The main technical fact we establish in this paper is the following purely linear algebraic statement:

Lemma 2.3.
Let A ∈ GL_n(F). Then there is a permutation matrix σ ∈ GL_n(F) such that σA is square–traceable. That is, A is square–traceable, possibly after a permutation of the rows.

If A satisfies the conclusion of Lemma 2.3, then we shall say that A is permutation–square–traceable. Assuming Lemma 2.3, we can complete the proof of Theorem 1.1, as follows from the next lemma.

Lemma 2.4.
Suppose Γ is a finite simplicial graph that admits a Hamiltonian path, let V = H^1(A(Γ), F), let W = H^2(A(Γ), F), and let q be the cup product pairing V × V → W. Then (V, W, q) is Hamiltonian.

Proof. Let {w_1, . . . , w_n} be a basis for H^1(A(Γ), F), and let {v_1^*, . . . , v_n^*} be the standard basis for H^1(A(Γ), F) arising from the duals of the vertices of Γ as in Lemma 2.1. Assume that q(v_i^*, v_{i+1}^*) ≠ 0 for 1 ≤ i < n, which is possible since Γ contains a Hamiltonian path. Writing

w_i = Σ_{j=1}^n a_i^j v_j^*,

we re-index the basis {w_1, . . . , w_n} so that the matrix A = (a_i^j)_{1 ≤ i,j ≤ n} is square–traceable, which is possible by Lemma 2.3.

A straightforward calculation shows that in expressing q(w_i, w_{i+1}) with respect to the vectors

{q(v_j^*, v_k^*) | 1 ≤ j < k ≤ n}

(which span H^2(A(Γ), F)), we have that the coefficient of the vector q(v_j^*, v_{j+1}^*) is exactly det A_i^j. Since det A_i^j ≠ 0 for some choice of j and since q(v_j^*, v_{j+1}^*) ≠ 0 for all j by assumption, we have q(w_i, w_{i+1}) ≠ 0. Thus, (V, W, q) is Hamiltonian. □
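Square–traceability can be tested directly from the definition. A minimal sketch (our own, with hypothetical names, working over the rationals for simplicity):

```python
def minor_det(A, i, j):
    # determinant of the 2x2 minor A_i^j on rows i, i+1 and columns j, j+1
    return A[i][j] * A[i + 1][j + 1] - A[i][j + 1] * A[i + 1][j]

def is_square_traceable(A):
    # every consecutive pair of rows must span some invertible consecutive minor
    n = len(A)
    return all(any(minor_det(A, i, j) != 0 for j in range(n - 1))
               for i in range(n - 1))
```

For instance, the identity matrix is square–traceable (the minor at j = i is the 2 × 2 identity), while any matrix with two proportional consecutive rows is not.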
2.4. Permutation–square–traceability.
The permutation–square–traceability of a matrix A can be formulated in terms of the combinatorics of a certain graph constructed from A. Let A ∈ M_n(F). We view the rows of A as n linearly independent vectors over F, which we label {r_1, . . . , r_n}. We record these vectors as r_i = (r_i^1, . . . , r_i^n) for 1 ≤ i ≤ n. We construct an undirected graph G(A), called the 2–row graph of A, as follows. The vertices of G(A) are simply the rows {r_1, . . . , r_n} of A. Rows r_i and r_j of A are connected by an edge in G(A) if and only if for some 1 ≤ k < n, the minor

R_{i,j}^k = ( r_i^k  r_i^{k+1}
             r_j^k  r_j^{k+1} )

is invertible, which the reader may compare with the definition of square–traceability above. In other words, the rows r_i and r_j span an edge whenever (Λ²A)(e_k ∧ e_{k+1}) has a nonzero coefficient for e_i ∧ e_j, where here Λ²A denotes the alternating square of A and where {e_1, . . . , e_n} denote standard basis vectors with respect to which A is expressed.

One may check that for the identity matrix Id, the graph G(Id) is a path, and for a dense open set of matrices in GL_n(C) or GL_n(R), the 2–row graph is complete. To see this last claim, we may restrict to the dense and open subset of n × n matrices where all entries are nonzero, and in the complement of the (closed subset with empty interior) where the determinant vanishes. For an invertible matrix A with all nonzero entries, two rows of A fail to be connected by an edge in G(A) if and only if they are scalar multiples of each other. It is then immediate that any two rows of A are connected in G(A).

The graph G(A) will usually be considered for invertible matrices, though in some instances we will consider non-invertible and even non-square matrices, for which the definition of G(A) clearly still makes sense.

The usefulness of the 2–row graph comes from the following trivial observation:

Lemma 2.5.
If G(A) contains a Hamiltonian path then A is permutation–square–traceable.

Proof. Choose a Hamiltonian path in G(A). Relabelling the vertices in this path by the order in which the path visits them, we obtain a permutation matrix σ such that σA is square–traceable. □

Thus, to prove Lemma 2.3, it suffices to prove the following:
Lemma 2.6.
Let A ∈ GL_n(F). Then G(A) admits a Hamiltonian path.

At first, it may seem difficult to prove anything general about the 2–row graph of an invertible matrix. However, it is an enlightening exercise to show that G(A) is always connected. The reader will note that connectedness of G(A) is obtained as an immediate consequence of Lemma 2.6.

Observe that in general the converse of Lemma 2.6 is false: a singular matrix may nonetheless have a 2–row graph that admits a Hamiltonian path.
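Both the construction of G(A) and the failure of the converse are easy to check by machine. The sketch below is our own (the singular matrix is our choice of illustration, not a matrix from the text): it computes 2–row graphs by brute force, confirming that G(Id_4) is a path and that a singular 3 × 3 matrix can have a complete 2–row graph.

```python
from itertools import permutations

def two_row_graph(A):
    # edges of G(A): rows i, j are adjacent iff some pair of consecutive
    # columns gives an invertible 2x2 minor (otherwise they are null-connected)
    n, m = len(A), len(A[0])
    return {(i, j)
            for i in range(n) for j in range(i + 1, n)
            if any(A[i][k] * A[j][k + 1] - A[i][k + 1] * A[j][k] != 0
                   for k in range(m - 1))}

def has_hamiltonian_path(n, edges):
    adjacent = lambda a, b: (min(a, b), max(a, b)) in edges
    return any(all(adjacent(p[t], p[t + 1]) for t in range(n - 1))
               for p in permutations(range(n)))

identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
singular = [[1, 0, 0], [0, 1, 0], [1, 1, 0]]   # last column zero, so det = 0
```

Here G(identity) is the path 0–1–2–3, while G(singular) is the complete graph K_3, which certainly contains a Hamiltonian path even though the matrix is not invertible.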
In order to establish Lemma 2.6, we will need to develop some more linear algebraic tools. Henceforth, a minor of a matrix A is a square submatrix of A obtained by deleting rows and columns.

Definition 2.7.
Let r_i and r_j be rows of A, and let A_{i,j} be the 2 × n submatrix of A spanned by these two rows. We will say that rows r_i and r_j of A are null–connected if all 2 × 2 minors of A_{i,j} spanned by consecutive columns are singular.

We let G^opp(A) denote the graph spanned by the rows of A, with null–connectedness as the edge relation. Note that G(A) and G^opp(A) are complementary graphs in the complete graph on the rows of A.

Definition 2.8.
Let M be a submatrix of A. We will call M a 1–block if the following conditions hold.
(1) All entries of M are nonzero.
(2) M has at least two rows and two columns, and all columns are consecutive in A.
(3) The subgraph of G^opp(A) spanned by the rows occurring in M (viewed as rows of A) is connected.
(4) M is maximal with respect to these conditions. That is, there is no submatrix N of A containing M as a proper submatrix, and which satisfies the previous three conditions.

The following observation about 1–blocks is elementary but useful, and justifies the terminology.
Lemma 2.9.
Let M be a 1–block in A. Then the row space of M is one–dimensional.

Proof. If M has just two columns then the claim is clear. Let r_1 = (a_1, . . . , a_n) and r_2 = (b_1, . . . , b_n) be two rows of M. Since these rows are null–connected in A, we have that (a_1, a_2) is proportional to (b_1, b_2) by a nonzero constant, say λ. Similarly, (a_2, a_3) and (b_2, b_3) are proportional by a nonzero constant, which must therefore be equal to λ. By induction on n, we have that r_1 and r_2 are proportional by λ. □

Let I ⊂ {1, . . . , n} and let 1 ≤ s < t ≤ n. We will write M_I^{s,t} for the submatrix (not necessarily a 1–block) spanned by rows in the index set I and columns with indices between s and t (inclusively). If M_1 = M_I^{s,t} and M_2 = M_J^{p,q} are such matrices, we will write M_1 ⊳ M_2 if p = t + 1.

The following lemma shows that the structure of 1–blocks is highly constrained.

Lemma 2.10.
Let M_1 = M_I^{s,t} and M_2 = M_J^{p,q} be 1–blocks. If M_1 ≠ M_2 then M_1 and M_2 are disjoint as submatrices of A.

Proof. Without loss of generality, s ≤ p. Suppose that the (i, k)–entry of A belongs to both blocks. This means that i ∈ I ∩ J and p ≤ k ≤ t. We claim that I = J in this case, which is easily seen to be sufficient to establish the lemma (cf. Lemma 2.9).

Let R ⊆ I \ {i} denote the set of indices such that each row indexed by R is null–connected to the row r_i. Since M_1 is a 1–block, we have that R ≠ ∅. Let ℓ ∈ R. Let us then consider the submatrix N of A spanned by r_i and r_ℓ, which must have the following shape:

N = ( · · · a_i  b_i  c_i  d_i · · ·
      · · · a_ℓ  b_ℓ  c_ℓ  d_ℓ · · · ),

where here the entries c_i and c_ℓ lie in the column indexed by p.

Now, since M_1 and M_2 are 1–blocks, we must have that 0 ∉ {b_i, c_i, d_i, b_ℓ}. Since r_i and r_ℓ are null–connected, this forces c_ℓ and d_ℓ to be nonzero as well, and so the vectors (c_i, d_i) and (c_ℓ, d_ℓ) are nonzero multiples of each other. It is clear that this forces all entries in r_ℓ to be nonzero for columns indexed between p and q.

The vector (r_i^k, r_i^{k+1}) is proportional to the vector (r_ℓ^k, r_ℓ^{k+1}) for p ≤ k < q by the null–connectedness of r_i and r_ℓ (where here r_i^k denotes the k-th column entry in the row r_i). It follows that the pairs of vectors {(r_i^k, r_i^{k+1}), (r_ℓ^k, r_ℓ^{k+1})} are proportional for p ≤ k < q, so that r_ℓ is null–connected to r_i along the columns of M_2 as well. Applying this argument to every ℓ ∈ R and invoking the maximality of the two blocks, we conclude that I = J, as claimed. □

Corollary 2.11. A matrix A admits a unique partition into submatrices of the following three types:
(1) 1–blocks;
(2) 1 × 1 nonzero matrices which do not belong to any 1–block;
(3) 1 × 1 zero matrices.

Example 2.12. To illustrate the foregoing concepts, consider an invertible matrix A for which some tedious but straightforward calculations show that the graph G(A) is highly connected, the only missing edges joining two pairs of rows; these are therefore the only pairs of rows in A that are null–connected.
The rows that are null–connected to one another, together with the relevant consecutive columns, span the unique 1–block in A.

Definition 2.13. Let M be a minor in A with at least two rows. We say that M is a 1–minor if M is contained in a 1–block, and if the columns of M are consecutive in A.

Definition 2.14. Let A be an n × n matrix and let M_1 ⊳ M_2 ⊳ · · · ⊳ M_r be a sequence of minors in A of the form M_i = M_{J_i}^{s_i,t_i}. We say that this sequence is a 1–track if the following conditions hold:
(1) For each i, the matrix M_i is a nonzero 1 × 1 matrix or a 1–minor.
(2) For 1 ≤ i < r, it is not the case that M_i and M_{i+1} belong to a common 1–block.

We will say that a 1–track T is complete if the total number of columns in T is n. We say that two 1–tracks {M_i}_{1 ≤ i ≤ r} and {N_j}_{1 ≤ j ≤ s} are different if they are distinct sequences of matrices. For σ ∈ S_n, we say that a string

a = (a_{σ(1)}^1, . . . , a_{σ(n)}^n)

of entries of A belongs to the (complete) 1–track T if every entry in a belongs to some minor in T.

The following is a crucial fact about 1–tracks.

Lemma 2.15. Let a = (a_{σ(1)}^1, . . . , a_{σ(n)}^n) be a string consisting of nonzero entries of A. Then a belongs to exactly one complete 1–track.

Proof. We will construct a 1–track T to which a belongs, in a canonical way. If a_{σ(1)}^1 does not belong to a 1–block, then we set M_1 = a_{σ(1)}^1. If for all 1 ≤ i ≤ n we have that a_{σ(i)}^i belongs to the same 1–block, then we may take T to consist of the single corresponding 1–minor. Otherwise, there is a k so that a_{σ(j)}^j belongs to a single 1–block B for j ≤ k, but a_{σ(k+1)}^{k+1} does not. We then let M_1 be the 1–minor spanned by rows indexed by {σ(1), . . . , σ(k)} and columns indexed by 1 ≤ j ≤ k.

To construct M_2, we restart the construction of the previous paragraph at a_{σ(k+1)}^{k+1}, and thus inductively construct the 1–track T to which the string a belongs. It is immediate that the resulting 1–track is complete. Note that this construction is canonical and hence the resulting 1–track is unique (cf. Remark 2.16 below).
□

Remark 2.16. In order to obtain uniqueness of the 1–track in the proof of Lemma 2.15, we are using the uniqueness of 1–blocks from Lemma 2.10 in an essential way. If M′ were a 1–minor containing both a_{σ(i)}^i and a_{σ(i+1)}^{i+1} that meets the 1–minor M_k ∈ T, then M′ is a submatrix of M_k. This follows, as Lemma 2.10 implies that M_k and M′ lie in a single 1–block, so that M′ will be subsumed as a submatrix of M_k in the construction of T.

Let T be a complete 1–track in a matrix A, and let Σ_T ⊂ S_n be the collection of permutations σ ∈ S_n such that a_σ = (a_{σ(1)}^1, . . . , a_{σ(n)}^n) belongs to T. The next lemma shows that every "cluster" of nonzero entries coming from a 1–minor in a matrix A contributes nothing to the determinant of A.

Lemma 2.17. Let T = {M_i}_{1 ≤ i ≤ r} be a complete 1–track in A that contains at least one 1–minor. Then

Σ_{σ ∈ Σ_T} sgn(σ) Π_{i=1}^n a_{σ(i)}^i = 0.

Proof. Without loss of generality, we will assume that M_1 is a 1–minor, which has k > 1 rows. Since M_1 is contained in a 1–block, its rows are all proportional to each other. It follows by an easy calculation that

Π_{i=1}^k a_{σ(i)}^i

is constant for σ ∈ Σ_T.

We identify S_k with the permutations of the k rows of A corresponding to the rows of M_1, and which fix the remaining rows of A. Let a_σ = (a_{σ(1)}^1, . . . , a_{σ(n)}^n) belong to T. Then for all τ ∈ S_k, we have that

(a_{τσ(1)}^1, . . . , a_{τσ(k)}^k, a_{σ(k+1)}^{k+1}, . . . , a_{σ(n)}^n)

also belongs to T. It is not difficult to see then that for such a fixed σ, we have

Σ_{τ ∈ S_k} sgn(τ) (a_{τσ(1)}^1 · · · a_{τσ(k)}^k · a_{σ(k+1)}^{k+1} · · · a_{σ(n)}^n) = 0.

Indeed, this follows simply from the fact that half of the permutations in S_k have sign 1 and half have sign −1, and the previous observation that the product a_{τσ(1)}^1 · · · a_{τσ(k)}^k is independent of τ ∈ S_k.

Finally, consider the sum

Σ_{σ ∈ Σ_T} sgn(σ) Π_{i=1}^n a_{σ(i)}^i.
The previous considerations show that the total contribution from strings a_σ whose tail is of the form (a_{σ(k+1)}^{k+1}, . . . , a_{σ(n)}^n) is zero. It follows that the total sum vanishes, as claimed. □

We can now finish the proof of the main result.

Proof of Lemma 2.6. It suffices to show that if A ∈ GL_n(F) then there is a reordering of the rows of A so that no pair of consecutive rows is null–connected. So, suppose the contrary, and we shall show that det(A) = 0.

Let a_σ = (a_{σ(1)}^1, . . . , a_{σ(n)}^n) be a string consisting of nonzero entries of A. Observe that there must be a 1–minor containing two consecutive entries of a_σ. Indeed, otherwise there would be a reordering of the rows of A such that no pair of consecutive rows is null–connected.

Let Σ_N ⊂ S_n denote the set of permutations σ for which the string a_σ = (a_{σ(1)}^1, . . . , a_{σ(n)}^n) consists of nonzero entries of A. By definition, we have

det(A) = Σ_{σ ∈ Σ_N} sgn(σ) Π_{i=1}^n a_{σ(i)}^i.

By Lemma 2.15, every such string a_σ belongs to a unique complete 1–track, and conversely for each complete 1–track T and each a_σ belonging to T we have that σ ∈ Σ_N. It follows that there is a collection of distinct complete 1–tracks {T_1, . . . , T_m} such that the collection of strings belonging to these 1–tracks partitions {a_σ}_{σ ∈ Σ_N}. Lemma 2.17 implies that det(A) = 0, as claimed. □

Remark 2.18. As stated in the introduction (see Proposition 1.4), Lemma 2.6 admits a purely algebraic formulation, which might be of independent interest in linear algebra. The equivalence between the two statements is immediate.

3. From Hamiltonian paths to Hamiltonian cycles

In this section, we extend the definition of the two–row graph G(A) to allow for cyclic permutations of the columns of A. To that end, we define the cyclic two–row graph G_c(A) as follows. The vertices of G_c(A) are the rows of A.
Two rows r_i and r_j are adjacent if they are adjacent in G(A), or if the minor

R_{i,j}^n = ( r_i^n  r_i^1
             r_j^n  r_j^1 )

is invertible. Note that, for example, G_c(Id) is a cycle. We establish a variation on Lemma 2.6 by proving the following result:

Proposition 3.1. Let A ∈ GL_n(F). Then G_c(A) contains a Hamiltonian cycle.

Recall that with the same notation as above, we consider a triple (V, W, q) and say that it is cyclic Hamiltonian if for every basis {v_1, . . . , v_n} of V, there is a permutation σ such that q(v_{σ(i)}, v_{σ(i+1)}) ≠ 0 for 1 ≤ i ≤ n, and where the indices are considered cyclically. The following is a restatement of Theorem 1.2.

Theorem 3.2. Let Γ be a finite simplicial graph and let F be a field. We set V = H^1(A(Γ), F), W = H^2(A(Γ), F), and let q be the cup product pairing V × V → W. Then Γ admits a Hamiltonian cycle if and only if (V, W, q) is cyclic Hamiltonian.

As with Theorem 1.1, there is an easier and a harder direction. The proof of the easier direction is analogous to the proof of Lemma 2.2 above, and the harder direction follows from Proposition 3.1 by a proof that is analogous to that of Theorem 1.1.

3.1. Proving Proposition 3.1. Many of the concepts from Subsection 2.4 generalize verbatim or nearly verbatim to the cyclic two–row graph, after some obvious modifications. For one, null–connectedness is now a stronger condition, as it is the complement of the adjacency relation in G_c(A). The definition of a cyclic 1–block is identical to the definition of a 1–block in Definition 2.8, except that columns are arranged in a cyclic order and the relation of being consecutive is correspondingly weakened. The superscripts in the notation M_I^{s,t} are taken cyclically. So, if A ∈ M_6(F) for example, then for a suitable index set I the submatrix M_I^{5,2} is spanned by the rows indexed by I, and by the columns {5, 6, 1, 2}. The sixth and first columns are thus viewed as consecutive. For the purposes of our analyses, it is conceptually
For the purposes of our analyses, it is conceptuallyuseful to imagine the matrix A written on a torus, so that the top and bottom rowsare consecutive, and the leftmost and rightmost columns are consecutive.With the modified notion of consecutiveness, the proofs of Lemma 2.9 andLemma 2.10 for cyclic 1–blocks are the same. Similarly, the definition of a cyclic –minor transfers to cyclic 1–blocks easily. Slight care should be taken when defininga cyclic –track : if { M i } ≤ i ≤ r is a cyclic 1–track in A , then writing M = M s ,t J and M r = M s r ,t r J r , we require that t r ≤ s in the cyclic order on the columns of A . We obtain immediately the notion of a string a belonging to a cyclic 1–track,and now the string is considered up to a cyclic permutation. It is straightforwardto generalize Lemma 2.15 to cyclic 1–tracks. Lemma 2.17 then generalizes to cyclic1–tracks containing at least one cyclic 1–minor, simply by cyclically permuting thecolumns of A so that a cyclic 1–minor appears as the first minor in a 1–track. RAPH HAMILTONICITY AND RAAGS 11 Proof of Proposition 3.1. The proof is a reprise of the proof of Lemma 2.6: indeed,suppose the contrary. Then, for any permutation σ ∈ S n , at least one pair of rows { r σ ( i ) , r σ ( i +1) } is null–connected, for 1 ≤ i ≤ n . Let a σ = ( a σ (1) , . . . , a nσ ( n ) )be a string consisting of nonzero entries of A , and let T be the unique cyclic 1–track to which a σ belongs. It must be the case that two consecutive entries in a σ belong to a 1–minor, since otherwise we would violate our initial null–connectednessassumption, and therefore we must have that T contains at least one 1–minor. Thetotal sum of contributions to det( A ) from T is zero. It follows that det( A ) = 0, acontradiction. (cid:3) Remark 3.3. Proposition 3.1 also admits an algebraic formulation analogous toProposition 1.4, by adopting the less restrictive notion of consecutiveness definedin this section. Example 3.4. 
Let us briefly revisit Example 2.12 above, in the context of the cyclic concepts. If one allows for wraparounds in the columns, then there is one more 1–block in A, spanned by a second pair of null–connected rows together with a run of columns that wraps from the last column of A around to the first. The remaining entries in A are accounted for according to Corollary 2.11. The graphs G(A) and G_c(A) are isomorphic to each other.

4. Applications and questions

4.1. Hamiltonicity testing. Graph Hamiltonicity is a basic problem in complexity theory, and as is well-known and mentioned in the introduction, deciding whether a finite graph admits a Hamiltonian path is NP–complete [2, 17, 11]. As such, we have an immediate corollary to Theorem 1.1:

Corollary 4.1. Let A(Γ) be a right-angled Artin group, and let (V, W, q) denote the cohomology of A(Γ) in degrees one and two, equipped with the cup product pairing. Then determining whether or not (V, W, q) is Hamiltonian is NP–complete.

Corollary 4.1 can be made computationally effective, since one may consider the cohomology of A(Γ) over the field F with two elements, so that V and W become finite sets. The map q can be specified by a matrix with entries in F, the number of which is bounded by a polynomial in the number of vertices of Γ. Thus, Theorem 1.1 gives an effective way to perform Hamiltonicity testing via matrices over F, and which does not rely directly on the adjacency matrix. Note that in order to falsify a claim that a graph Γ is Hamiltonian, it suffices to exhibit just one basis for V that is not Hamiltonian. Thus, Theorem 1.1 can be used to show that a particular graph is not Hamiltonian.

4.2. Zero-knowledge proofs. Theorem 1.1 allows for the formulation of a zero-knowledge proof protocol based on linear algebra. It is the same in spirit as the standard zero-knowledge proof protocol using Hamiltonian cycles. We will spell out some details here, and refer the reader to [18] for instance for a more complete account.
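The "locked boxes" in the protocol below can be modeled by standard hash-based commitments. The following toy sketch is our own illustration, not part of the protocol specification: SHA-256 stands in for an idealized commitment scheme, and all names are hypothetical.

```python
import hashlib
import secrets

def commit(data: bytes):
    # toy locked box: commitment = H(nonce || data); the nonce is the key
    nonce = secrets.token_bytes(16)
    return hashlib.sha256(nonce + data).hexdigest(), nonce

def open_box(commitment, nonce, data: bytes):
    # Bob's check that an opened box matches its commitment
    return hashlib.sha256(nonce + data).hexdigest() == commitment

# One challenge round with bit 1: Alice commits to the indicator bits
# S_{sigma(i), sigma(i+1)}, which are all 1 along her Hamiltonian ordering,
# and later opens exactly those boxes for Bob to verify.
cycle_bits = [1, 1, 1, 1]                      # toy data for a 4-cycle
boxes = [commit(bytes([b])) for b in cycle_bits]
bob_accepts = all(open_box(c, nonce, bytes([b]))
                  for (c, nonce), b in zip(boxes, cycle_bits))
```

A mismatched opening (e.g. claiming a bit 0 where a 1 was committed) fails Bob's check, which is what makes the commitments binding in this toy model.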
Setup: We begin with a finite simplicial graph Γ that has exactly one Hamiltonian cycle, which is very difficult to compute. Here, "difficult to compute" is meant in the sense that Γ has a very large number of vertices and edges compared to the computational resources of the parties participating in the protocol. Thus, the number of vertices and edges might be data that the participants could process reasonably, but a brute force search for the Hamiltonian cycle in Γ would be prohibitive in terms of time.

The public data are the vector spaces V = H^1(A(Γ), F) and W = H^2(A(Γ), F), together with the cup product pairing q = ∪. These can be given with respect to the standard bases as provided by Lemma 2.1. Alice has in her possession a list {v_1^*, . . . , v_n^*} of standard basis vectors for V such that q(v_i^*, v_{i+1}^*) ≠ 0 for all i (where the indices are considered cyclically). She also has in her possession a subset Y ⊂ GL_n(F) of some reasonable size (say polynomial in n), and for each A ∈ Y, she knows a Hamiltonian cycle in G_c(A). The set Y may or may not be public. Bob has access to an unbiased random bit generator.

The protocol:
(1) Alice chooses an element A ∈ Y randomly with respect to the uniform distribution, which will be the random change of basis matrix. She obtains a new basis {x_1, . . . , x_n} from {v_1^*, . . . , v_n^*} using A. Since she knows a Hamiltonian cycle in Γ and in G_c(A), Alice is able to efficiently find a permutation σ ∈ S_n such that q(x_{σ(i)}, x_{σ(i+1)}) ≠ 0 for 1 ≤ i ≤ n, where as usual the indices are considered cyclically. Alice then creates locked boxes {B_i}_{1 ≤ i ≤ n} in which she hides these basis vectors. For each pair {i, j} with i < j, she creates two boxes S_{i,j} and N_{i,j}. In N_{i,j}, she records the pairing q(x_i, x_j) ∈ W, and in S_{i,j} she records 1 if the entry in N_{i,j} is nonzero, and 0 otherwise.
In a box T, she conceals the linear map A.
(2) Bob takes a random bit and shares it with Alice. If the bit is 1 then Alice unlocks the boxes {B_i}_{1 ≤ i ≤ n} and the boxes {S_σ(i),σ(i+1)}_{1 ≤ i ≤ n}, where again the indices are considered cyclically. Bob checks that Alice has produced a cycle in this way. If the bit is 0 then Alice opens the boxes {B_i}_{1 ≤ i ≤ n}, the boxes {N_{i,j}}_{1 ≤ i < j ≤ n}, and the box T, and Bob checks that the revealed basis and pairings are consistent with the public data under the change of basis A.

4.3. Randomly generated matrices and random graphs with Hamiltonian paths. Lemma 2.6 and Proposition 3.1 provide new potential models for random graphs with fixed properties, such as the existence of a Hamiltonian path or Hamiltonian cycle.

Choosing a probability threshold p ∈ (0, 1), one can generate a random n × n matrix A by placing a 1 in each entry with probability p and a 0 with probability 1 − p. From the resulting matrix, one can construct the corresponding 2–row graph G(A), which then becomes a graph with some element of randomness. Of course, not every matrix over the field F will be invertible, though a definite fraction of them will be, even as n tends to infinity. In order to guarantee that G(A) is Hamiltonian, one can discard the matrices with determinant zero.

It is not difficult to see that if p is close to (but different from) 1 then G(A) will be complete almost surely as n → ∞ (even without restricting to invertible matrices). To see a heuristic argument that could be made rigorous with some work, note that for such a p and n very large, the probability that two rows will be proportional to each other will be very small. Moreover, given two rows r_i and r_j of A, the probability that all the zeros occurring in r_i and r_j line up in the same columns will also be very small. Therefore, the probability that r_i and r_j are connected in G(A) is very high.

Question 4.2. For random invertible matrices, what is the critical threshold for p above which G(A) will be complete almost surely as n tends to infinity?
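The claim that a definite fraction of matrices remains invertible as n → ∞ can be checked numerically over the two-element field, where the exact proportion of invertible n × n matrices is the product of (1 − 2^{−k}) for k = 1, ..., n, decreasing to a limit of roughly 0.2888. A minimal sketch of the sampling step in the model above (function names are ours; the determinant test is Gaussian elimination mod 2):

```python
import random

def invertible_mod2(A):
    """Decide invertibility of a 0/1 matrix over the field with two
    elements, by Gauss-Jordan elimination (XOR is addition mod 2)."""
    A = [row[:] for row in A]  # work on a copy
    n = len(A)
    for col in range(n):
        pivot = next((r for r in range(col, n) if A[r][col]), None)
        if pivot is None:
            return False       # no pivot in this column: determinant zero
        A[col], A[pivot] = A[pivot], A[col]
        for r in range(n):
            if r != col and A[r][col]:
                A[r] = [x ^ y for x, y in zip(A[r], A[col])]
    return True

def exact_fraction(n):
    """Exact proportion of invertible n x n matrices over F_2:
    prod_{k=1}^{n} (1 - 2**-k)."""
    frac = 1.0
    for k in range(1, n + 1):
        frac *= 1.0 - 2.0 ** (-k)
    return frac

def sampled_fraction(n, trials, seed=0):
    """Monte Carlo estimate with uniformly random 0/1 entries (p = 1/2)."""
    rng = random.Random(seed)
    hits = sum(
        invertible_mod2([[rng.randrange(2) for _ in range(n)]
                         for _ in range(n)])
        for _ in range(trials))
    return hits / trials
```

Discarding the singular samples, as suggested above, therefore costs only a bounded factor in the sampling.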
One can also consider a uniform measure on GL_n(F) to produce graphs with some degree of randomness:

Question 4.3. Let A ∈ GL_n(F) be random. What is the probability that G(A) is complete, as a function of n?

In the non-random context, the realization problem is interesting and is likely quite difficult for large n:

Question 4.4. Let Γ be a fixed Hamiltonian graph. Is Γ ≅ G(A) for an invertible matrix A? What further restrictions besides Hamiltonicity does the invertibility of A impose?

The realization problem is much easier if one does not require invertibility of A and if one drops the requirement that A be square. For a non-square matrix A, we adopt the obvious generalization of G(A).

Proposition 4.5. Let Γ be an arbitrary finite simplicial graph. Then there is a matrix A such that Γ ≅ G(A).

Proof sketch. Let Γ have n vertices, labeled {v_1, ..., v_n}. We build the matrix A one vertex of Γ at a time, padding with zeros appropriately. Let A_1 be a 1 × 1 matrix. We take A_2 to be the matrix with two rows obtained from A_1 by the general step below, which acquires two extra columns if v_1 and v_2 span an edge and is padded only with zeros otherwise. Now suppose that A_j is constructed. We construct A_{j+1} as follows.
(1) Add a row of zeros at the bottom of A_j to get a matrix B_{1,j}, and set the counter i = 1.
(2) Add a column of zeros on the right of B_{1,j} to get a matrix B_{2,j}. If the counter i is equal to j + 1 then stop and set A_{j+1} = B_{2,j}.
(3) If v_i and v_{j+1} span an edge in Γ, add two columns to B_{2,j} which are zero everywhere except in rows i and j + 1, wherein the entries are (1, 0) and (0, 1) respectively. Otherwise, do nothing to the matrix, increase the counter i by one, and return to step 2.
(4) Rename the resulting matrix B_{1,j}, increase the counter i by one, and return to step 2.
We set A = A_n with an extra column of zeros on the right. It is straightforward to check that two rows of A will be null–connected if and only if the corresponding vertices of Γ do not span an edge.
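The inductive recipe in the proof sketch of Proposition 4.5 can be carried out mechanically. The sketch below is ours, not the authors', and the entry convention for the two edge columns ((1, 0) and (0, 1) in rows i and j + 1) is our reconstruction of a garbled passage; vertices are 0-indexed and edges are given as a set of frozensets.

```python
def build_matrix(n, edges):
    """Build a 0/1 matrix with one row per vertex of a graph on n vertices:
    pad with zeros as each vertex is adjoined, and for every edge {v_i, v_j}
    append two columns supported only in rows i and j, carrying an identity
    2 x 2 block there (our assumed entry convention)."""
    A = [[0]]                               # A_1 for the single vertex v_1
    for j in range(1, n):                   # build A_{j+1} from A_j
        A.append([0] * len(A[0]))           # step (1): a row of zeros
        for i in range(j + 1):
            for row in A:                   # step (2): a column of zeros
                row.append(0)
            if i == j:
                break                       # counter reached j + 1: stop
            if frozenset((i, j)) in edges:  # step (3): two edge columns
                for r, row in enumerate(A):
                    row.extend((1, 0) if r == i else
                               (0, 1) if r == j else (0, 0))
    for row in A:                           # final extra column of zeros
        row.append(0)
    return A
```

By construction, rows i and j of the output carry an invertible 2 × 2 block exactly when {v_i, v_j} is an edge, which is the property the proof sketch needs.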
One can also consider the problem of determining the permutation–square–traceability of a matrix. The content of Lemma 2.6 is that an invertible matrix is always permutation–square–traceable. However, to check by brute force that a matrix is permutation–square–traceable requires checking all possible permutations of the rows and examining the edge relation between consecutive rows.

Question 4.6. Let A ∈ M_n(F). What is the complexity of the problem to decide whether or not A is permutation–square–traceable?

Note that Question 4.6 is only interesting for non-invertible matrices, by the main results of this paper. Observe that a Hamiltonian path in G(A) is a certificate of the permutation–square–traceability of a matrix, so that the problem in Question 4.6 is clearly in NP. We suspect that it should be NP–complete.

Acknowledgments

We thank F. Abeles, R. Brualdi, A. de Camargo, T. Markham and J.M. Peña for helpful comments. Ramón Flores is supported by FEDER-MEC grant MTM2016-76453-C2-1-P and FEDER grant US-1263032 from the Andalusian Government. Thomas Koberda is partially supported by an Alfred P. Sloan Foundation Research Fellowship, by NSF Grant DMS-1711488, and by NSF Grant DMS-2002596. Delaram Kahrobaei is supported in part by Canada's New Frontiers in Research Fund, under the Exploration grant entitled "Algebraic Techniques for Quantum Security". We thank the University of York for hospitality while part of this research was conducted.

References

1. Francine F. Abeles, Chiò's and Dodgson's determinantal identities, Linear Algebra Appl. (2014), 130–137. MR 3208413
2. Sanjeev Arora and Boaz Barak, Computational complexity, Cambridge University Press, Cambridge, 2009, A modern approach. MR 2500087
3. László Babai, Graph isomorphism in quasipolynomial time [extended abstract], STOC'16—Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, ACM, New York, 2016, pp. 684–697. MR 3536606
4. Manuel Blum, How to prove a theorem so no one else can claim it, Proceedings of the International Congress of Mathematicians, Vol. 1, 2 (Berkeley, Calif., 1986), Amer. Math. Soc., Providence, RI, 1987, pp. 1444–1451. MR 934348
5. Noel Brady and John Meier, Connectivity at infinity for right angled Artin groups, Trans. Amer. Math. Soc. (2001), no. 1, 117–132. MR 1675166
6. Reinhard Diestel, Graph theory, fifth ed., Graduate Texts in Mathematics, vol. 173, Springer, Berlin, 2017. MR 3644391
7. Carl Droms, Isomorphisms of graph groups, Proc. Amer. Math. Soc. (1987), no. 3, 407–408. MR 891135
8. Ramón Flores, Delaram Kahrobaei, and Thomas Koberda, An algebraic characterization of k–colorability, to appear.
9. Ramón Flores, Delaram Kahrobaei, and Thomas Koberda, Algorithmic problems in right-angled Artin groups: complexity and applications, J. Algebra (2019), 111–129. MR 3874519
10. Ramón Flores, Delaram Kahrobaei, and Thomas Koberda, Expanders and right-angled Artin groups, Preprint (2020).
11. Michael R. Garey and David S. Johnson, Computers and intractability, W. H. Freeman and Co., San Francisco, Calif., 1979, A guide to the theory of NP-completeness, A Series of Books in the Mathematical Sciences. MR 519066
12. Susan Hermiller and Zoran Šunić, Poly-free constructions for right-angled Artin groups, J. Group Theory (2007), 117–138. MR 2288463
13. Mark Kambites, On commuting elements and embeddings of graph groups and monoids, Proc. Edinb. Math. Soc. (2) (2009), no. 1, 155–170. MR 2475886
14. Sang-hyun Kim and Thomas Koberda, Embedability between right-angled Artin groups, Geom. Topol. (2013), no. 1, 493–530. MR 3039768
15. Sang-Hyun Kim and Thomas Koberda, Free products and the algebraic structure of diffeomorphism groups, J. Topol. (2018), no. 4, 1054–1076. MR 3989437
16. Thomas Koberda, Right-angled Artin groups and a generalized isomorphism problem for finitely generated subgroups of mapping class groups, Geom. Funct. Anal. (2012), no. 6, 1541–1590. MR 3000498
17. Marvin L. Minsky, Computation: finite and infinite machines, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1967, Prentice-Hall Series in Automatic Computation. MR 0356058
18. Alon Rosen, Concurrent zero-knowledge, Information Security and Cryptography, Springer-Verlag, Berlin, 2006, With additional background and a foreword by Oded Goldreich. MR 2279343
19. Lucas Sabalka, On rigidity and the isomorphism problem for tree braid groups, Groups Geom. Dyn. (2009), no. 3, 469–523. MR 2516176
20. H. Servatius, Automorphisms of graph groups, J. Algebra (1989), no. 1, 34–60. MR 1023285 (90m:20043)
21. Nigel Smart, Cryptography made simple, Information Security and Cryptography Book Series, Springer International Publishing, 2015.

Ramón Flores, Department of Geometry and Topology, University of Seville, Spain
Email address: [email protected]

Delaram Kahrobaei, Department of Computer Science, University of York, UK, New York University, Tandon School of Engineering, PhD Program in Computer Science, CUNY Graduate Center
Email address: [email protected], [email protected]

Thomas Koberda, Department of Mathematics, University of Virginia, Charlottesville, VA 22904
Email address: