Linear Symmetries of the Unsquared Measurement Variety
Ioannis Gkioulekas, Steven J. Gortler, Louis Theran, Todd Zickler
Abstract
We introduce a new family of algebraic varieties, L_{d,n}, which we call the unsquared measurement varieties. This family is parameterized by a number of points n and a dimension d. These varieties arise naturally from problems in rigidity theory and distance geometry. In those applications, it can be useful to understand the group of linear automorphisms of L_{d,n}. Notably, a result of Regge implies that L_{2,4} has an unexpected linear automorphism. In this paper, we give a complete characterization of the linear automorphisms of L_{d,n} for all n and d. We show that, apart from L_{2,4}, the unsquared measurement varieties have no unexpected automorphisms. Moreover, for L_{2,4}, we characterize the full automorphism group.

Introduction

Many questions in graph rigidity and distance geometry can be answered by studying an object called the squared measurement variety. Given a configuration p of n ≥ d + 2 ordered points in R^d, we can measure the $N := \binom{n}{2}$ ordered squared Euclidean distances between each pair of the points and consider this as a single point in R^N. (We associate each point in p with a vertex of the complete graph, K_n, and each of the N vertex pairs with an edge of K_n. Under this association, we refer to each of the N measurements as the squared distance of an edge.) If we take the union of the measurement points over all possible configurations of n points in R^d, we obtain a subset of R^N which we call the Euclidean squared measurement set (of the complete graph K_n), denoted M^E_{d,n}. Under complexification and Zariski closure of this measurement set, we obtain a variety which we call the squared measurement variety (of the complete graph), denoted M_{d,n} ⊂ C^N. (Formal definitions are below.) This has also been called the Cayley–Menger variety. This variety is linearly isomorphic to S^{n−1}_d, the variety of complex, symmetric (n−1) × (n−1) matrices of rank d or less, which is a well understood variety.

In [5], Boutin and Kemper showed that one can uniquely reconstruct (up to congruence) a generic configuration p in R^d given its N unlabeled pairwise squared distances. By unlabeled, we mean that we are not told which distance measurement corresponds to which pair of points. This central result has many applications in rigidity and distance geometry [7, 19]. The key to their result is showing that there are no permutations of the coordinate axes of C^N (called edge permutations) that map M_{d,n} to itself, except for the permutations that are consistent with a permutation of the indices of the n points. Such permutations are said to be induced by a vertex relabeling.

One can, of course, expand the question and ask about all the non-singular linear maps on C^N that map M_{d,n} to itself. We call these linear automorphisms of M_{d,n}. Due to the linear relationship between M_{d,n} and S^{n−1}_d, classifying the linear automorphisms of M_{d,n} boils down to looking at the linear automorphisms of S^{n−1}_d. This is a classical question, and it is well known that the linear automorphisms of S^{n−1}_d are all linear maps with a "factored" form G ↦ B^T G B, where G ∈ S^{n−1}_d and B is any (n−1) × (n−1) non-singular matrix (see, e.g., [3]).

An even more general and daunting distance geometry problem is to reconstruct an n-point configuration in R^d given an unlabeled set of Euclidean lengths of N paths through the configuration [10, 31]. By path, we mean an ordered sequence of vertices, and we define its length to be the sum of the Euclidean lengths of each edge along the path. Importantly, a path-length is defined as a sum of Euclidean edge lengths, not a sum of squared Euclidean edge lengths. As such, the relevant algebraic variety to study should represent lengths instead of squared lengths. Also, in the unlabeled setting, we are not even given the information as to which combinatorial paths were measured in the first place. Thus we are not just concerned with coordinate permutations of C^N but with more general linear maps acting on this variety (such as those arising from sums of lengths).

To this end, we define the squaring map s(·) to be the map from C^N onto C^N that acts by squaring each of the N coordinates of a point. We then define the unsquared measurement variety of n points in d dimensions, L_{d,n}, as the preimage of M_{d,n} under the squaring map. (Each point in M_{d,n} has 2^N preimages in L_{d,n}, arising through coordinate negations.) We prove below (Theorem 2.11) that for d ≥ 2, the variety L_{d,n} is irreducible (for d = 1 it is actually a reducible arrangement of linear subspaces).

In this paper, we wish to understand the set of linear automorphisms of L_{d,n}, i.e., the non-singular linear maps on C^N that map L_{d,n} to itself. Any coordinate permutation that is induced by a vertex relabeling must be an automorphism. Also, due to the squaring construction, any coordinate negation will be an automorphism. We call the group of automorphisms generated by vertex relabelings and coordinate negations the signed vertex relabelings. By homogeneity, any uniform scale on C^N will also be an automorphism. Let us call the group of automorphisms generated by signed vertex relabelings and uniform scalings the expected automorphisms of L_{d,n}.

We then ask: are there any "unexpected" linear automorphisms of L_{d,n}? Recall that M_{d,n} has many linear automorphisms that are not permutations of any type. By analogy, there is no a priori restriction on what the linear automorphisms of L_{d,n} can be.

In fact, L_{2,4} does have an unexpected linear automorphism. Regge [27] (see also Roberts [28]) showed that the following linear map always takes the Euclidean lengths l of the edges of any 4-point configuration in R^3 to those, l′, of some different 4-point configuration in R^3:

$$\begin{aligned}
l'_{12} &= l_{12} \\
l'_{34} &= l_{34} \\
l'_{13} &= (-l_{13} + l_{14} + l_{23} + l_{24})/2 \\
l'_{14} &= (l_{13} - l_{14} + l_{23} + l_{24})/2 \\
l'_{23} &= (l_{13} + l_{14} - l_{23} + l_{24})/2 \\
l'_{24} &= (l_{13} + l_{14} + l_{23} - l_{24})/2
\end{aligned} \qquad (\star)$$

This "Regge symmetry" gives rise to an unexpected linear automorphism of L_{2,4}. So the plot has thickened.

The first main result of this paper is that L_{2,4} is the only unsquared measurement variety with an unexpected linear automorphism.

Theorem 1.1.
Let d ≥ 1 and let n ≥ d + 2. Assume that {d, n} ≠ {2, 4}. Then any linear automorphism A of L_{d,n} is a scalar multiple of a signed vertex relabeling.

This theorem is proven by combining the three cases proven below in Theorems 5.2, 5.7 and 5.27.

The second main result of this paper is to fully characterize the group of linear automorphisms of L_{2,4}. The details for this statement require a few definitions.

Definition 1.2.
Define Aut(L_{2,4}) to be the group of linear automorphisms of L_{2,4}. Let the group PAut(L_{2,4}) be the group induced on the equivalence classes of A ∈ Aut(L_{2,4}) under the relation "A′ is a complex scale of A".

We also consider the real subgroup Aut_R(L_{2,4}). This has a counterpart PAut_R(L_{2,4}) of equivalence classes up to real scale, and P^+Aut_R(L_{2,4}), on equivalence classes defined up to positive scale. It is well defined to refer to an element of P^+Aut_R(L_{2,4}) as being non-negative, since any equivalence class containing a non-negative A consists entirely of non-negative matrices.

Theorem 1.3.
The group PAut(L_{2,4}) is finite. It is generated by linear automorphisms that are represented by matrices with rational entries.

The group P^+Aut_R(L_{2,4}) is of order 23040 and is isomorphic to the Weyl group D_6. The subset of non-negative elements of P^+Aut_R(L_{2,4}) is a subgroup of order 24 and acts by relabeling the vertices of K_4. (This is proven as Theorem 5.15 below.)

We will also see that the group P^+Aut_R(L_{2,4}) is in fact generated by the edge permutations induced by vertex relabelings, sign flip matrices, and the one Regge symmetry of (⋆).

Our proof of this second theorem is computer aided.

Remark 1.4.
That P^+Aut_R(L_{2,4}) contains a subgroup isomorphic to the Weyl group D_6 is based on conversations with Dylan Thurston (see [30]) and has antecedents in [8]. See [1, 32] for other geometric connections.

The central step for the proof of Theorem 1.1 is understanding which linear projection maps acting on L_{d,n} can have deficient dimensions. This is done in Theorem 4.2 below. That result can also be of independent interest in unlabeled rigidity problems [10]. Additionally, in Appendix B, we study the large linear subspaces contained in L_{2,4}, which can also be of independent use in unlabeled rigidity [10].

Acknowledgements
We would like to thank Dylan Thurston for numerous helpful conversations and suggestions throughout this project. His input on Regge symmetries, and on the use of covering space maps, was essential. We also thank Brian Osserman for fielding numerous algebraic geometry queries.

Steven Gortler was partially supported by NSF grant DMS-1564473. Ioannis Gkioulekas and Todd Zickler received support from the DARPA REVEAL program under contract no. HR0011-16-C-0028.
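Before moving to the formal development, the Regge map (⋆) can be sanity-checked numerically. The sketch below is our own illustration (not part of the paper's development); it assumes NumPy, uses the edge order 12, 13, 14, 23, 24, 34, and tests that the image of the lengths of a random planar 4-point configuration again satisfies the standard Cayley–Menger planarity condition, consistent with (⋆) giving an automorphism of L_{2,4}.

```python
import numpy as np

# Edge order for n = 4: 12, 13, 14, 23, 24, 34.
EDGES = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

def edge_lengths(p):
    """The six Euclidean edge lengths of a 4-point configuration (rows of p)."""
    return np.array([np.linalg.norm(p[i] - p[j]) for i, j in EDGES])

def regge(l):
    """The Regge map (*): fix l_12 and l_34, and send each of the other four
    lengths l_e to s - l_e, where s is half the sum of those four lengths."""
    l12, l13, l14, l23, l24, l34 = l
    s = (l13 + l14 + l23 + l24) / 2
    return np.array([l12, s - l13, s - l14, s - l23, s - l24, l34])

def cayley_menger(l):
    """5x5 Cayley-Menger determinant of the squared lengths; it vanishes
    exactly when the four points are affinely planar."""
    m12, m13, m14, m23, m24, m34 = l**2
    M = np.array([[0, 1, 1, 1, 1],
                  [1, 0, m12, m13, m14],
                  [1, m12, 0, m23, m24],
                  [1, m13, m23, 0, m34],
                  [1, m14, m24, m34, 0]], dtype=float)
    return np.linalg.det(M)

rng = np.random.default_rng(0)
p = np.hstack([rng.standard_normal((4, 2)), np.zeros((4, 1))])  # planar 4 points
l = edge_lengths(p)
print(abs(cayley_menger(l)) < 1e-8, abs(cayley_menger(regge(l))) < 1e-8)  # True True
```

Applying `regge` twice returns the original length vector, so the map is an involution, as is easily checked symbolically.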
Preliminaries

We start by establishing our basic terminology. We relegate our needed definitions and theorems from algebraic geometry to Appendix A.
Definition 2.1.
Fix positive integers d and n. Throughout the paper, we will set $N := \binom{n}{2}$, $C := \binom{d+1}{2}$, and $D := \binom{d+2}{2}$.

These constants appear often because they are, respectively, the number of pairwise distances between n points, the dimension of the group of congruences in R^d, and the number of edges in a complete graph K_{d+2}.

Definition 2.2. A configuration, p = (p_1, ..., p_n), is a sequence of n points in R^d. (If we want to talk about points in C^d, we will explicitly call this a complex configuration.) The affine span of a configuration need not be all of R^d.

We think of the integers in {1, ..., n} as the vertices of an abstract complete graph K_n. An edge, {i, j}, is an unordered distinct pair of vertices. The complete edge set of K_n has cardinality N. Fixing a configuration p in R^d, we define the length of an edge {i, j} to be the Euclidean distance between the points p_i and p_j, a real number.

Next we will study the basic properties of two related families of varieties, the squared and unsquared measurement varieties. The squared variety is very well studied in the literature, but the unsquared variety is much less so. Since we are interested in integer sums of unsquared edge lengths, we wish to understand the structure of this unsquared variety.
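For concreteness, the three constants of Definition 2.1 are easy to tabulate; a trivial sketch (the helper name `constants` is ours):

```python
from math import comb

# The three constants of Definition 2.1 for a dimension d and point count n.
def constants(d, n):
    N = comb(n, 2)      # number of edges of K_n
    C = comb(d + 1, 2)  # dimension of the congruence group of R^d
    D = comb(d + 2, 2)  # number of edges of K_{d+2}
    return N, C, D

# Minimal case d = 2, n = 4: six edges, and the squared measurement variety
# has dimension dn - C = 5, so it is a hypersurface in C^6.
N, C, D = constants(2, 4)
print(N, C, D, 2 * 4 - C)  # 6 3 6 5
```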
Definition 2.3.
Let us index the coordinates of C^N as ij, with i < j and both between 1 and n. We also fix an ordering on the ij pairs to index the coordinates of C^N as i, with i between 1 and N.

Let us begin with a complex configuration p of n points in C^d with d ≥ 1. We will always assume n ≥ d + 2. There are N vertex pairs (edges), along which we can measure the complex squared length as

$$m_{ij}(p) := \sum_{k=1}^{d} (p^k_i - p^k_j)^2,$$

where k indexes over the d dimension-coordinates. Here, we measure complex squared length using the complex square operation, with no conjugation. We consider the vector [m_{ij}(p)] over all of the vertex pairs, with i < j, as a single point in C^N, which we denote as m(p).

Definition 2.4.
Let M_{d,n} ⊂ C^N be the image of m(·) over all n-point complex configurations in C^d. We call this the squared measurement variety of n points in d dimensions. When n ≤ d + 1, then M_{d,n} = C^N.

Definition 2.5.
If we restrict the domain to be real configurations, then we call the image under m(·) the Euclidean squared measurement set, denoted M^E_{d,n} ⊂ R^N. This set has real dimension dn − C.

The following theorem reviews some basic facts. Most of the ideas are discussed in [2], but we include a detailed proof here for completeness and ease of reference.
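The measurement map m(·) and the dimension count just mentioned can be illustrated numerically. The sketch below is our own (assuming NumPy); it uses the standard Gram-matrix identity |p_i − p_j|² = G_{ii} + G_{jj} − 2G_{ij} after translating p_n to the origin:

```python
import numpy as np

def squared_measurements(p):
    """m(p): the N pairwise squared distances, over pairs ij with i < j."""
    n = len(p)
    return np.array([np.sum((p[i] - p[j])**2)
                     for i in range(n) for j in range(i + 1, n)])

rng = np.random.default_rng(1)
d, n = 2, 5
p = rng.standard_normal((n, d))

# Translate p_n to the origin; the Gram matrix of the remaining n-1 points
# is symmetric of rank at most d, and it determines m(p) linearly.
q = p[:-1] - p[-1]
G = q @ q.T
print(np.linalg.matrix_rank(G))  # 2 (= d, generically)

m = squared_measurements(p)
assert np.isclose(m[0], G[0, 0] + G[1, 1] - 2 * G[0, 1])  # edge {1,2}
```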
Theorem 2.6.
Let n ≥ d + 2. The set M_{d,n} is linearly isomorphic to S^{n−1}_d, the variety of complex, symmetric (n−1) × (n−1) matrices of rank d or less. Thus, M_{d,n} is a variety. It is irreducible. Its dimension is dn − C. Its singular set Sing(M_{d,n}) consists of squared measurements of configurations with affine spans of dimension strictly less than d.

(This ordering choice does not matter as long as we are consistent. It is there to let us switch between coordinates indexed by edges of K_n and coordinates indexed using flat vector notation. For n = 4, N = 6 we will use the order: 12, 13, 14, 23, 24, 34.)

Proof. Such an isomorphism is developed in [33] and further, for example, in [13]; see also [11, Section 7]. The basic idea is as follows. We can, w.l.o.g., translate the entire complex configuration p in C^d such that the last point p_n is at the origin. We can then think of this as a configuration of n − 1 points in C^d. Any such complex configuration gives rise to a symmetric (n−1) × (n−1) Gram matrix, G(p), of rank at most d. Conversely, any symmetric complex matrix G of rank d or less can be (Takagi) factorized, giving rise to a complex configuration of n − 1 points in C^d, which, along with the origin, gives us an n-point complex configuration p so that G = G(p).

With this in place, let ϕ be the invertible linear map from the space of (n−1) × (n−1) symmetric matrices G to C^N (indexed by vertex pairs ij, with i < j) defined as

$$\varphi(G)_{ij} := G_{ii} + G_{jj} - 2G_{ij}$$

(where G_{in} and G_{nj} are interpreted as 0). (For invertibility see [11, Lemma 7].)

When G = G(p) is the Gram matrix of a complex configuration p in C^d, then ϕ(G) computes the squared edge lengths of p. Since every symmetric matrix of rank at most d arises as the Gram matrix G(p) of some complex configuration p in C^d, we see that the image of ϕ acting on S^{n−1}_d is contained in M_{d,n}.
Conversely, since every point in M_{d,n} arises from a complex configuration p, and p gives rise to a Gram matrix G(p), we see that the image of ϕ acting on the rank-constrained matrices is onto M_{d,n}. This gives us our isomorphism of varieties (Lemma A.4).

Irreducibility of M_{d,n} follows from the fact that it is the image of an affine space (complex configuration space) under a polynomial map (the squared-length map). The dimension follows from the dimension of S^{n−1}_d, which is $d(n-1) - \binom{d}{2}$ (this is consistent with a degree of freedom count; see, e.g., [17] for details).

For the description of the singular set of determinantal varieties of rank-constrained matrices, see for example [16, Page 184] (which can also be applied to the symmetric case). Meanwhile, we know that G = G(p) has rank < d iff p has a deficient affine span in C^d (see for example [11, Lemma 26]). For an explicit statement about the singular set of M_{d,n}, see [2, Proposition 4.5]. □

Remark 2.7.
We note, but will not need, the following: For d ≥ 1, the smallest complex variety containing M^E_{d,n} is M_{d,n}.

We note the following minimal instances where n = d + 2. In these cases, the variety has codimension 1.

The variety M_{1,3} ⊂ C^3 is defined by the vanishing of the simplicial volume determinant, that is, the determinant of the following matrix

$$\begin{pmatrix} 2m_{13} & m_{13}+m_{23}-m_{12} \\ m_{13}+m_{23}-m_{12} & 2m_{23} \end{pmatrix}$$

where we use (m_{12}, m_{13}, m_{23}) to represent the coordinates of C^3. This is twice the Gram matrix ϕ^{−1}(m(p)) described in the proof of Theorem 2.6.

The variety M_{2,4} ⊂ C^6 is defined by the vanishing of the determinant of the matrix

$$\begin{pmatrix} 2m_{14} & m_{14}+m_{24}-m_{12} & m_{14}+m_{34}-m_{13} \\ m_{14}+m_{24}-m_{12} & 2m_{24} & m_{24}+m_{34}-m_{23} \\ m_{14}+m_{34}-m_{13} & m_{24}+m_{34}-m_{23} & 2m_{34} \end{pmatrix}.$$

The variety M_{3,5} ⊂ C^{10} is defined by the vanishing of the determinant of the matrix

$$\begin{pmatrix} 2m_{15} & m_{15}+m_{25}-m_{12} & m_{15}+m_{35}-m_{13} & m_{15}+m_{45}-m_{14} \\ m_{15}+m_{25}-m_{12} & 2m_{25} & m_{25}+m_{35}-m_{23} & m_{25}+m_{45}-m_{24} \\ m_{15}+m_{35}-m_{13} & m_{25}+m_{35}-m_{23} & 2m_{35} & m_{35}+m_{45}-m_{34} \\ m_{15}+m_{45}-m_{14} & m_{25}+m_{45}-m_{24} & m_{35}+m_{45}-m_{34} & 2m_{45} \end{pmatrix}.$$

If n > d + 2, then M_{d,n} has higher codimension, and its definition requires the simultaneous vanishing of more than one minor characterizing the rank-d condition.

Next we move on to unsquared lengths.

Definition 2.8.
Let the squaring map s(·) be the map from C^N onto C^N that acts by squaring each of the N coordinates of a point. Let L_{d,n} be the preimage of M_{d,n} under the squaring map. (Each point in M_{d,n} has 2^N preimages in L_{d,n}, arising through coordinate negations.) We call this the unsquared measurement variety of n points in d dimensions.

Definition 2.9.
We can define the Euclidean length map of a real configuration p as

$$l_{ij}(p) := \sqrt{\sum_{k=1}^{d} (p^k_i - p^k_j)^2},$$

where we use the positive square root. We call the image of p under l the Euclidean unsquared measurement set, denoted L^E_{d,n} ⊂ R^N. Under the squaring map, we get M^E_{d,n}. We denote by l(p) the vector [l_{ij}(p)] over all vertex pairs. We may consider l(p) either as a point in the real valued L^E_{d,n} or as a point in the complex variety L_{d,n}.

Indeed, L^E_{d,n} is the set we are often interested in for applications, but it will be easier to work with the whole variety L_{d,n}.

Remark 2.10.
The locus of L_{2,4} where the edge lengths of a triangle, (l_{12}, l_{13}, l_{23}), are held fixed is studied in beautiful detail in [6], where this is shown to be a Kummer surface.

The following theorem is the main result of this section.
Theorem 2.11.
Let n ≥ d + 2. L_{d,n} is a variety. It has pure dimension dn − C. Assuming that d ≥ 2, we also have the following: L_{d,n} is irreducible.

The proof is in the next subsection. The non-trivial part will be showing irreducibility, which we will do in Proposition 2.24 below. Indeed, in one dimension, the variety L_{1,3} is reducible.

Remark 2.12.
We note, but will not need, the following: For d ≥ 2, the smallest complex variety containing L^E_{d,n} is L_{d,n}.

Returning to our minimal examples: The variety L_{1,3} ⊂ C^3 is defined by the vanishing of the determinant of the following matrix

$$\begin{pmatrix} 2l_{13}^2 & l_{13}^2+l_{23}^2-l_{12}^2 \\ l_{13}^2+l_{23}^2-l_{12}^2 & 2l_{23}^2 \end{pmatrix}$$

where we use (l_{12}, l_{13}, l_{23}) to represent the coordinates of C^3.

The variety L_{2,4} ⊂ C^6 is defined by the vanishing of the determinant of the matrix

$$\begin{pmatrix} 2l_{14}^2 & l_{14}^2+l_{24}^2-l_{12}^2 & l_{14}^2+l_{34}^2-l_{13}^2 \\ l_{14}^2+l_{24}^2-l_{12}^2 & 2l_{24}^2 & l_{24}^2+l_{34}^2-l_{23}^2 \\ l_{14}^2+l_{34}^2-l_{13}^2 & l_{24}^2+l_{34}^2-l_{23}^2 & 2l_{34}^2 \end{pmatrix}.$$

The variety L_{3,5} ⊂ C^{10} is defined by the vanishing of the determinant of the matrix

$$\begin{pmatrix} 2l_{15}^2 & l_{15}^2+l_{25}^2-l_{12}^2 & l_{15}^2+l_{35}^2-l_{13}^2 & l_{15}^2+l_{45}^2-l_{14}^2 \\ l_{15}^2+l_{25}^2-l_{12}^2 & 2l_{25}^2 & l_{25}^2+l_{35}^2-l_{23}^2 & l_{25}^2+l_{45}^2-l_{24}^2 \\ l_{15}^2+l_{35}^2-l_{13}^2 & l_{25}^2+l_{35}^2-l_{23}^2 & 2l_{35}^2 & l_{35}^2+l_{45}^2-l_{34}^2 \\ l_{15}^2+l_{45}^2-l_{14}^2 & l_{25}^2+l_{45}^2-l_{24}^2 & l_{35}^2+l_{45}^2-l_{34}^2 & 2l_{45}^2 \end{pmatrix}.$$

Figure 1: A model of the real locus of L_{1,3}, a subset of R^3. It comprises 4 planes. Coordinate axes are in white.

Remark 2.13.
It turns out that L_{1,3} is reducible and consists of the four hyperplanes defined, respectively, by the vanishing of one of the following expressions:

$$l_{12} + l_{13} - l_{23}, \qquad l_{12} - l_{13} + l_{23}, \qquad -l_{12} + l_{13} + l_{23}, \qquad l_{12} + l_{13} + l_{23}.$$

This reducibility can make the one-dimensional case quite different from dimensions 2 and 3. Notice that the first octant of the real locus of each of the first three of these hyperplanes arises as the Euclidean lengths of a triangle in R^1 (that is, these make up L^E_{1,3}). The specific hyperplane is determined by the order of the points on the line.

We will now develop the proof of Theorem 2.11. The main issue will be proving the irreducibility of L_{d,n}. The special case of n = d + 2 follows from [9], but we are interested in the general case, n ≥ d + 2. The basic idea we will use is that a variety whose smooth locus is connected must be irreducible. More specifically, our strategy is to define a "good" locus of points in L_{d,n}, and show that this locus is connected, made up of smooth points, and with its Zariski closure equal to L_{d,n}. This, along with Theorem A.9, will prove irreducibility. Note that when the word "Zariski" is not attached to a topological term, you can interpret the term in the standard topology.

We will show connectivity using a specific path construction. This will rely centrally on the complex setting that we have placed ourselves in. Showing (algebraic) smoothness will mostly be a technical matter.

Definition 2.14.
Let the zero locus Z of C^N be the points where at least one coordinate vanishes. Let the bad locus Bad(M_{d,n}) of M_{d,n} be the union of its singular locus Sing(M_{d,n}) together with the points in M_{d,n} that are in Z. We will call the remaining locus Good(M_{d,n}) good.

Let the bad locus Bad(L_{d,n}) of L_{d,n} be the preimage of the bad locus of M_{d,n} under the squaring map s. We will call the remaining locus Good(L_{d,n}) good.

We refer to points on the good locus as good points, and analogously for bad points.

Lemma 2.15.
Good(M_{d,n}) is path-connected.

Figure 2:
Our gadget. The imaginary x-direction is coming out of the page. Our path ends with the reflection of the configuration q along the x-axis.

Proof.
Let m^0 and m^1 be any two good points in M_{d,n}. These correspond to two configurations p and q. A path in configuration space, connecting p to q, will remain, under m(·), on Good(M_{d,n}) when the affine span of the configuration does not drop in dimension, and no edge between any two points has zero squared length. This can always be done, as we have n ≥ d + 2 points. (This is even true for one-dimensional configurations in the complex setting, as a zero squared length is a condition that has complex codimension of at least 1, and thus the bad locus is non-separating.) □

We next record a lemma that follows from basic results of covering space theory. See [26, Sections 53, 54] for more details.
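The covering-space behavior used below can be simulated directly: under z ↦ z², a loop that winds k times around the origin lifts to a loop when k is even, and to a path ending at −z when k is odd. A minimal sketch of our own (assuming NumPy; the helper name `lift_endpoint` is ours), which lifts a discretized loop by tracking a continuous square root:

```python
import numpy as np

# Lift a loop x(t) = x0 * exp(2*pi*i*k*t) through z -> z^2 by following a
# continuous square root along a fine discretization of the loop.
def lift_endpoint(z0, k, steps=4000):
    x0 = z0 ** 2
    w = z0
    for t in np.linspace(0.0, 1.0, steps)[1:]:
        r = np.sqrt(x0 * np.exp(2j * np.pi * k * t))  # one of the two roots
        w = r if abs(r - w) < abs(-r - w) else -r     # stay on the nearby branch
    return w

z = 1.0 + 2.0j
print(np.isclose(lift_endpoint(z, 2), z),   # even winding: the lift is a loop
      np.isclose(lift_endpoint(z, 3), -z))  # odd winding: the lift ends at -z
```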
Definition 2.16. A path τ on a space X is a continuous map from the unit interval to X. A loop is a path with τ(0) = τ(1). Let p be a map from a space X̃ to X. A lift τ̃ of τ (under p) is a map such that p(τ̃) = τ. It is a path on X̃.

Intuitively, a lift is just tracing out the path τ in the preimage through p. In what follows, C^× is the punctured complex plane.

Lemma 2.17.
Let p be the map C^× → C^× given by z ↦ z². Let x := p(z). A loop τ starting at x uniquely lifts to a loop τ̃ starting at z if τ winds around the origin an even number of times, and otherwise it lifts to a path that ends at −z.

Proof Sketch. See [26, Chapters 53, 54] for definitions. The map C^× → C^× given by z ↦ z² is a covering map. Call the base B, the cover F, and the covering map p. Each loop τ in B, starting at x, lifts uniquely to a path τ̃ in F, starting at z. The path τ̃ ends at a uniquely defined point z′ ∈ p^{−1}(x) under the lifting correspondence. In our case the fiber is {z, −z}. Moreover, every z′ in the fiber can be reached under the lifting of some loop τ (see [26, Theorem 54.4]).

The fundamental group of the base is π_1(B) = π_1(C^×) ≅ Z. The covering map determines an induced map p_*: π_1(F) → π_1(B). The image of the induced map consists of the loops that wind around the origin an even number of times, so it is isomorphic to 2Z. The lifting correspondence induces a bijective map from the group π_1(B)/p_*(π_1(F)) ≅ Z/2Z to the fiber above x, and (only) loops in p_*(π_1(F)) lift to loops in F (see [26, Theorem 54.6]).

Thus, this lift, starting from z, is a path from z to −z if and only if τ winds around the origin an odd number of times. □

Looking at the product space (C^×)^N, we can also view the squaring map s as a covering map mapping this product space to itself, and we can apply Lemma 2.17 coordinate-wise.

Figure 3:
Since the squared length along edge {1, 2} arises from its x component, our path along this edge measurement winds once about the origin in C. For any other edge, the x component of the squared distance is dominated by the other coordinates and the resulting path stays far from the origin in C.

Lemma 2.18.
Assume d ≥ 2. Suppose l and l′ are two points in L_{d,n} that differ only by a negation along one coordinate. Then, there is a path that connects l to l′ and stays in Good(L_{d,n}).

Proof. W.l.o.g., we will negate the coordinate corresponding to the edge lengths between vertices 1 and 2. But first, we need to develop a little gadget.

Let q be a special configuration with the following properties: q_1 is at the origin; q_2 is placed one unit along the first axis of C^d; and the remaining points are arranged so that they all lie within ε of the second axis in C^d, but such that they are greater than one unit apart along the second axis from each other and also from q_1. (Note that this step requires that d ≥ 2.) We may also assume that q has a full d-dimensional affine span. This configuration has the following property: the squared distances of all of the edges are dominated by the contribution from the second coordinate, except for the squared distance along the edge {1, 2}, which is dominated by the contribution from its first coordinate. See Figure 2.

Let a(t) be the path in configuration space, parameterized by t ∈ [0, π], where, for each i, we multiply the first coordinate of q_i by e^{−t√−1}. This path ends at a(π), a configuration which is a reflection of q.

Under m, this gives us a loop τ := m(a) in M_{d,n} that starts and ends at the point y := m(q). By construction, the loop τ avoids any singularities or vanishing coordinates. Fixing one point z in s^{−1}(y), the loop τ lifts to a path τ̃ in L_{d,n} that ends at some point z′ in the fiber s^{−1}(y). Moreover, this path remains in Good(L_{d,n}).

If we project τ onto the coordinate of C^N corresponding to the edge {1, 2}, we see that the image maps to a loop that winds around the origin of C exactly once. If we project this loop onto any of the other coordinates, we obtain a loop that cannot wind about the origin of C at all. See Figure 3.
By Lemma 2.17, the lifted loop τ̃ in L_{d,n} must end at the point z′ that arises from z by negating the coordinate corresponding to the edge {1, 2}.

Going now back to our problem, let p be any configuration such that m(p) = s(l). Let w be a configuration path from p to our special q. Let ω := m(w). From Lemma 2.15, this path can be chosen to avoid any singular points or points where a coordinate vanishes. Let the concatenated path σ be ω^{−1} ∘ τ ∘ ω. This is a loop in M_{d,n} that starts and ends at m(p). The projection of σ onto the coordinate of C^N corresponding to the edge {1, 2}, defined by forgetting all other coordinates, winds around the origin exactly once (any loops due to ω cancel out), while the other coordinate projections do not wind around the origin at all (again, any loops due to ω cancel out). Thus, fixing the point l in L_{d,n}, from Lemma 2.17, σ must lift to a path σ̃ that ends at l′. Moreover, this path stays in the good locus. □

Lemma 2.19.
For d ≥ 2, Good(L_{d,n}) is path-connected.

Proof. Let l^0 and l^1 be two good points in Good(L_{d,n}). Define m^i := s(l^i). Let τ be a path in M_{d,n} from m^0 to m^1 that avoids the singular set of M_{d,n}, and such that no coordinate ever vanishes (as guaranteed by Lemma 2.15). Fixing l^0, the path τ lifts to a path τ̃ in L_{d,n} that remains in the good locus and that connects l^0 to some point l′ in the fiber s^{−1}(s(l^1)). The only remaining issue is that l′ may have some of its coordinates negated from our desired target point l^1. This can be solved by repeatedly applying the good negating paths guaranteed by Lemma 2.18. □
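The gadget path used above can also be traced numerically for d = 2, n = 4 (our own sketch, assuming NumPy; the specific configuration q is our choice, following the recipe in the proof of Lemma 2.18). Tracking a continuous square root of each of the six squared edge lengths along the loop flips exactly the {1, 2} coordinate:

```python
import numpy as np
from itertools import combinations

# A gadget configuration: q_1 at the origin, q_2 one unit along the first
# axis, and the other points near the second axis and well separated along it.
q = np.array([[0.0, 0.0], [1.0, 0.0], [0.01, 2.0], [-0.01, 4.0]])
EDGES = list(combinations(range(4), 2))  # edge order 12, 13, 14, 23, 24, 34

def track(steps=4000):
    """Rotate every first coordinate by e^{-it}, t in [0, pi], tracking a
    continuous square root of each squared edge length."""
    w = np.array([np.sqrt(np.sum((q[i] - q[j]) ** 2) + 0j) for i, j in EDGES])
    for t in np.linspace(0.0, np.pi, steps)[1:]:
        qt = q.astype(complex).copy()
        qt[:, 0] *= np.exp(-1j * t)
        r = np.sqrt(np.array([np.sum((qt[i] - qt[j]) ** 2) for i, j in EDGES]))
        w = np.where(np.abs(r - w) < np.abs(-r - w), r, -r)  # nearby branch
    return w

l0 = np.array([np.linalg.norm(q[i] - q[j]) for i, j in EDGES])
print(np.round((track() / l0).real))  # [-1.  1.  1.  1.  1.  1.]: only {1,2} flips
```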
Lemma 2.20.
Every point l ∈ Good( L d,n ) is smooth and with Dim l ( L d,n ) = dn − C . Every pointin Bad( L d,n ) − Z is singular.Proof. Every good point in M d,n is (algebraically) smooth, and thus, from Theorem A.11, is ana-lytically smooth of dimension dn − C . Also, from Theorem A.11, every singular point in M d,n isnot analytically smooth.The differential d s of the squaring map s on C N is represented by an N × N Jacobian matrix J at each point in C N . At points in C N where none of the coordinates vanish, J is invertible. Thus,from the inverse function theorem, every good point in L d,n is analytically smooth of dimension dn − C . Also every bad point in L d,n − Z is not analytically smooth.Again using Theorem A.11, we have each good point (algebraically) smooth and with Dim l ( L d,n ) = dn − C . Similarly, we also have that every bad point in L d,n − Z is singular. (cid:3) Note that there may be some bad points of L d,n in Z that are still smooth. Remark 2.21.
The above lemma can be proven directly using more machinery from algebraic geom-etry. In particular, away from Z , the squaring map from C N to itself is an “´etale morphism” [24,page 18]. This property transfers to the map s ( · ) acting on L d,n − Z , as this property transfersunder a “base change”. The results then follows immediately. Lemma 2.22.
The Zariski closure of
Good( L d,n ) is L d,n .Proof. Recall the following principle: Given any point z in C × , we can always find a neighborhood B of z , so that there is a well defined, single valued, continuous square root function from B to C , with √ z = z .Returning to our setting, let l be any point in L d,n , and let m := s ( l ) be its image in M d,n underthe coordinate squaring map. The good points of M d,n are dense in M d,n . (Letting m = m ( p )for some p , there is always a nearby configuration p (cid:48) with a full span and no edge with vanishingsquared length. Moreover, the map m ( · ) is continuous.) Thus we can always find an arbitrarilyclose point m (cid:48) that is in Good( M d,n ).Next we argue that we can find a point l (cid:48) such that s ( l (cid:48) ) = m (cid:48) (putting it in Good( L d,n )) with l (cid:48) is arbitrarily close to l . Given m (cid:48) , in order to determine l (cid:48) we need to select a “sign” for thesquare-root on each coordinate ij . When l ij (cid:54) = 0 then using the above principle, we can pick a signso that l (cid:48) ij is near to l ij . When l ij = 0 then we can use any sign to obtain an l (cid:48) ij that is sufficientlyclose to 0. 10ince this can be done for each l , then L d,n is in the standard-topology closure of Good( L d,n ).Thus, from Theorem A.3, L d,n is in the Zariski closure of Good( L d,n ). Since L d,n itself is closedand contains Good( L d,n ), we are done. (cid:3) Lemma 2.23.
Every component of L d,n is of dimension equal to dn − C .Proof. From Lemma 2.20 each good point has a local dimension of dn − C . Thus, the good locus iscovered by a set of components of L d,n , all of dimension dn − C . The Zariski closure of Good( L d,n )is L d,n (Lemma 2.22). Thus, no new components need to be added during the Zariski closure. (cid:3) We can now prove irreducibility.
Proposition 2.24.
For d ≥ 2, L_{d,n} is irreducible.

Proof. From Lemma 2.20, all of the points in Good(L_{d,n}) are smooth. From Lemma 2.19, Good(L_{d,n}) is path-connected, and thus connected as a subspace of C^N.

Now we show that all of Good(L_{d,n}) lies in a single irreducible component V of L_{d,n}. Fix an irreducible component V such that G = Good(L_{d,n}) ∩ V is non-empty. Notice that G is a closed subspace of Good(L_{d,n}) (Theorem A.3). Now let W be the union of all the remaining irreducible components of L_{d,n}. By similar reasoning, H = W ∩ Good(L_{d,n}) is closed in Good(L_{d,n}).

From Theorem A.9, G and H are disjoint. On the other hand, V ∪ W = L_{d,n}, so G ∪ H = Good(L_{d,n}). Because Good(L_{d,n}) is connected, and G and H are disjoint closed sets covering it with G non-empty, H must be empty. Hence, G = Good(L_{d,n}).

To finish the proof, recall that Lemma 2.22 says that the Zariski closure of Good(L_{d,n}) is L_{d,n}. This closure must be contained in any variety, such as V, that contains Good(L_{d,n}). Since we also have V ⊆ L_{d,n}, equality holds and we get irreducibility. □

And now we can complete the proof of our theorem:
Proof of Theorem 2.11. L_{d,n} can be seen to be a variety by pulling back the defining equations of the variety M_{d,n} through s. Dimension is Lemma 2.23. Irreducibility is Proposition 2.24. □

Linear automorphisms of M_{d,n}

Definition 3.1. A linear automorphism of a variety V in C^N is a non-singular linear transform on C^N (that is, a non-singular N × N complex matrix A) that bijectively maps V to itself.

Definition 3.2. An N × N matrix P is a permutation if each row and column has a single non-zero entry, and this entry is 1. A matrix P′ = DP, where D is diagonal and invertible, is a generalized permutation. Each row and column has exactly one non-zero entry. A generalized permutation has uniform scale if it is a scalar multiple of a permutation matrix.

Definition 3.3.
A generalized permutation acting on an edge set is induced by a vertex relabeling when it has the same non-zero pattern as an edge permutation that arises from a vertex relabeling.
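Definition 3.3 can be made concrete with a small script (the helper name `induced_edge_permutation` is ours, not the paper's): a relabeling of the n vertices permutes the N = \binom{n}{2} edges of K_n, and a generalized permutation is induced by a vertex relabeling exactly when its non-zero pattern matches such an edge permutation.

```python
from itertools import combinations

def induced_edge_permutation(n, vertex_perm):
    """Map each edge {i, j} of K_n to {vertex_perm[i], vertex_perm[j]},
    returning the induced edge permutation as a dict on sorted edge tuples."""
    edges = list(combinations(range(n), 2))
    return {e: tuple(sorted((vertex_perm[e[0]], vertex_perm[e[1]])))
            for e in edges}

# A vertex relabeling of K_4 swapping vertices 0 and 1.
perm = induced_edge_permutation(4, {0: 1, 1: 0, 2: 2, 3: 3})
# The edge {0,1} is fixed, while {0,2} and {1,2} are exchanged, etc.
```

Any generalized permutation matrix whose single non-zero entry in each row sits at the position given by such a `perm` has "the same non-zero pattern as an edge permutation that arises from a vertex relabeling" in the sense of the definition.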
We now present the following slight generalization of [5, Lemma 2.4]. Here we deal with generalized permutations instead of permutations, but the same proof applies. In our setting, V will always be a cone, so linear isomorphisms (as opposed to affine ones) are natural.

Theorem 3.4 ([5, Lemma 2.4]). Suppose that A is a generalized permutation that is a linear automorphism of M_{d,n}. Then A is induced by a vertex relabeling.

The following material will help us slightly strengthen Theorem 3.4, and will also be used later in Section 4. First we define the combinatorial notion of infinitesimally dependent and independent sets of edges in d dimensions.

Definition 3.5.
Let d be some fixed dimension and n a number of vertices. Let E := {E_1, . . . , E_k} be an ordered subset of the N edges. The ordering on the edges of E fixes an association between each edge in E and a coordinate axis of C^k. Let m_E(p) be the map from d-dimensional configuration space to C^k measuring the squared lengths of the edges of E.

We denote by π_Ē the linear map from C^N to C^k that forgets the edges not in E, and is consistent with the ordering of E. Specifically, we have an association between each edge of K_n and an index in {1, . . . , N}, and thus we can think of each E_i as simply its index in {1, . . . , N}. Then, π_Ē is defined by the conditions: π_Ē(e_j) = 0 when j ∈ Ē and π_Ē(e_j) = e′_i when E_i = j, where {e_1, . . . , e_N} denotes the coordinate basis for C^N and {e′_1, . . . , e′_k} denotes the coordinate basis for C^k. We call π_Ē an edge forgetting map.

With this notation, the map m_E(·) is simply the composition of the complex measurement map m(·) and π_Ē.

Definition 3.6.
We say that an edge set E is infinitesimally independent in d dimensions if there exists a complex configuration p in C^d where we can differentially vary each of the |E| squared lengths independently, by appropriately differentially varying our configuration p. Formally, this means that the image of the differential of m_E(·) at p is |E|-dimensional. This exactly coincides with the notion of infinitesimal independence from graph rigidity theory [20].

We call such a configuration p, E-regular. Every configuration in some appropriate neighborhood of an E-regular point is also E-regular (by semi-continuity). This neighborhood must include configurations with full affine spans and no coincident points.

For any configuration p with full affine span, m(p) is smooth (Theorem 2.6). Thus for any E-regular configuration p with full affine span, using the chain rule, the differential image of π_Ē at the point m(p) is |E|-dimensional. We call such a point of M_{d,n}, E-regular. Such points must exist when E is infinitesimally independent.

For any smooth point x of M_{d,n} with no zero coordinates, all of its preimages under the squaring map, s(·), are smooth in L_{d,n} (Lemma 2.20). Thus for any preimage l of an E-regular point x with no zero coordinates, the differential image of π_Ē at the point l is |E|-dimensional (as the Jacobian of s(·) at l is diagonal and bijective). We call such a point of L_{d,n}, E-regular. Such points must exist when E is infinitesimally independent.

An edge set that is not infinitesimally independent in d dimensions is called infinitesimally dependent in d dimensions.

The following is a standard result from rigidity theory (see, e.g., [14, Corollary 2.6.2]).
Proposition 3.7.
Let E be an edge set (with all its edges distinct). Suppose |E| ≤ \binom{d+2}{2} and E is infinitesimally dependent in d dimensions. Then |E| = \binom{d+2}{2} and E consists of the edges of a K_{d+2} subgraph (in some order).

Proof Sketch. Assume, w.l.o.g., that E is infinitesimally dependent and inclusion-wise minimal with this property. If E does not consist of the edges of a K_{d+2} subgraph, then it has a vertex v of degree at most d. Let p be in general affine position. This means, in particular, that p_v is not in the affine span of its neighbors. Hence, the ≤ d squared lengths of the edges in the edge set E′ incident on v can be differentially varied independently (by exercising the d degrees of freedom in p_v). Thus the edges of E′ can be removed from E, leaving the remainder, E \ E′, still infinitesimally dependent. This contradicts the assumed minimality of E. ∎

Lemma 3.8.
Any linear automorphism A of M_{d,n} is a linear automorphism of M_{1,n}.

Proof. The singular set of M_{d,n} is M_{d−1,n} by Theorem 2.6. Thus, from Theorem A.8, A must be a linear automorphism of M_{d−1,n}. We then see, by induction, that A is also a linear automorphism of M_{1,n}. ∎

In fact, this kind of induction has been recently used to greatly strengthen Boutin and Kemper's unique reconstructability result [5] to apply to a much larger class of graphs than just the complete graphs [12].
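For intuition ahead of what follows, the differential of the squared-length measurement map at a configuration p is (twice) the classical rigidity matrix from graph rigidity theory, and infinitesimal independence of an edge set is a rank condition on the corresponding rows. A numpy sketch (assuming the standard rigidity-matrix formula; this is not code from the paper): for d = 2, the \binom{4}{2} = 6 edges of K_4 = K_{d+2} are infinitesimally dependent, since the rank at a generic planar configuration is only 2·4 − 3 = 5.

```python
import numpy as np
from itertools import combinations

def rigidity_matrix(p):
    """Differential of the squared-length measurement map at configuration p:
    one row per edge of the complete graph, columns indexed by (vertex, axis).
    Row for edge (i, j) holds 2*(p_i - p_j) in i's columns and the negative
    in j's columns."""
    n, d = p.shape
    edges = list(combinations(range(n), 2))
    R = np.zeros((len(edges), n * d))
    for row, (i, j) in enumerate(edges):
        R[row, i*d:(i+1)*d] = 2 * (p[i] - p[j])
        R[row, j*d:(j+1)*d] = 2 * (p[j] - p[i])
    return R

rng = np.random.default_rng(0)
p = rng.standard_normal((4, 2))   # a generic configuration of 4 points in the plane
rank = np.linalg.matrix_rank(rigidity_matrix(p))
# All 6 edges of K_4 are present, but the rank is 5 < 6: the edge set of
# K_{d+2} is infinitesimally dependent in d = 2 dimensions.
```

Any proper subset of these 6 edge rows is linearly independent at a generic configuration, matching the statement of Proposition 3.7 that K_{d+2} is the unique minimal dependent edge set of this size.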
Lemma 3.9.
Let m_{12}, m_{13} and m_{23} be the squared edge lengths of a 2-dimensional triangle, and suppose that s_{12}, s_{13} and s_{23} are scalars such that the simplicial volume determinant

det \begin{pmatrix} 2m_{12} & m_{12} + m_{13} - m_{23} \\ m_{12} + m_{13} - m_{23} & 2m_{13} \end{pmatrix} = 2(m_{12}m_{13} + m_{12}m_{23} + m_{13}m_{23}) - (m_{12}^2 + m_{13}^2 + m_{23}^2)

(see Section 2) is mapped to a multiple of itself under the scaling m_{ij} ↦ s_{ij} m_{ij}. Then the s_{ij} are all equal.

Proof. The hypothesis means that the desired statement holds for any specialization of the m_{ij}. Consider the case where m_{23} = 0. The presence of the monomials m_{12}^2 and m_{12}m_{13} then implies that s_{12}^2 = s_{12}s_{13}, that is, s_{12} = s_{13}. Continuing the same way, we see that s_{13} = s_{23}. ∎

Now we can state the following slight strengthening of Theorem 3.4.
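Lemma 3.9 can also be checked symbolically (a sympy sketch under our own variable names; the paper's own computations use Magma):

```python
import sympy as sp

# Squared edge lengths of a triangle and per-edge scales (names are ours).
m12, m13, m23, s12, s13, s23, s = sp.symbols('m12 m13 m23 s12 s13 s23 s')

# The simplicial volume determinant of the triangle.
V = 2*(m12*m13 + m12*m23 + m13*m23) - (m12**2 + m13**2 + m23**2)

# A uniform scale s on all three edges maps V to s**2 * V ...
uniform = V.subs({m12: s*m12, m13: s*m13, m23: s*m23}, simultaneous=True)
assert sp.expand(uniform - s**2 * V) == 0

# ... while a general scaling m_ij -> s_ij * m_ij, specialized at m23 = 0,
# has coefficient -s12**2 on m12**2 and 2*s12*s13 on m12*m13.
scaled = V.subs({m12: s12*m12, m13: s13*m13, m23: s23*m23}, simultaneous=True)
P = sp.Poly(sp.expand(scaled.subs(m23, 0)), m12, m13)
c_sq, c_mix = P.coeff_monomial(m12**2), P.coeff_monomial(m12*m13)
# Proportionality with V's coefficients (-1 and 2) forces s12**2 = s12*s13,
# i.e. s12 = s13; the symmetric specializations give s13 = s23 as well.
```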
Theorem 3.10.
Suppose that A is a generalized permutation that is a linear automorphism of M_{d,n}. Then A is induced by a vertex relabeling and has uniform scale.

Proof. Theorem 3.4 tells us that A is induced by a vertex relabeling. Next we need to prove uniform scale. From Lemma 3.8, we can look at A as an automorphism of M_{1,n}.

Let π_K̄ be an edge forgetting map that ignores all of the edges in the complement of an edge set K consisting of the edges of a fixed triangle. Under any ordering of the edges of K, we have π_K̄(M_{1,n}) = M_{1,3} (which is cut out from C^3 by the simplicial volume determinant as in Lemma 3.9).

We know that we can factor A into DP, where D is diagonal and P is a permutation induced by a vertex relabeling. Since a vertex relabeling is a linear automorphism of M_{1,n}, then so too is D. Since D is diagonal, and π_K̄ is an edge forgetting map, then π_K̄ D = D′ π_K̄ for an appropriate 3 × 3 diagonal matrix D′, making D′ an automorphism of M_{1,3}. So it has to send the simplicial volume determinant to a multiple of itself. This is the situation of Lemma 3.9, and we conclude that the scaling on each triangle is uniform.

That A has a uniform scale then follows from applying the above argument repeatedly to overlapping triangles until we have determined the scale on every edge. ∎

Linear maps from L_{d,n} to C^D

In this section, which forms the technical heart of this paper, we will study how linear projections act on L_{d,n}.

Let d ≥ 1. Recall that D := \binom{d+2}{2}. In this section, E will be a D × N matrix representing a rank-r linear map from L_{d,n} to C^D, where r is some number ≤ D. Our goal is to study linear maps where the dimension of the image is strictly less than r. In particular, this will occur when E(L_{d,n}) = L_{d,d+2}.

Definition 4.1.
We say that E has K_{d+2} support if it depends only on measurements supported over the D edges corresponding to a K_{d+2} subgraph of K_n. Specifically, all the columns of the matrix E are zero, except for at most D of them, and these non-zero columns index edges contained within a single K_{d+2}.

The main result of this section is:
Theorem 4.2.
Let E be a D × N matrix with rank r. Suppose that the image E(L_{d,n}), a constructible set, is not of dimension r. Then r = D and E has K_{d+2} support.

Remark 4.3.
Theorem 4.2 does not hold when L_{d,n} is replaced by M_{d,n}. As described in the introduction, the linear automorphism group of S^{n−1}_d is quite large, and thus provides automorphisms A of M_{d,n} that have dense support. Thus, even if some E has K_{d+2} support, the composite map EA would not, and it could still have a low-dimensional image.

The proof relies (crucially) on the more technical, linear-algebraic Proposition 4.4, proved below. The idea leading to it is as follows.

If a point l is smooth in L_{d,n} then so is any l′ obtained by negating various coordinates of l. Thus, the collection of complex analytic tangent spaces to L_{d,n}, T_l L_{d,n}, at l and its orbit under coordinate negations gives us an arrangement T of 2^N linear spaces (related through coordinate negation). Any E meeting the hypothesis of Theorem 4.2 necessarily drops rank on every subspace in T. This would not be possible if E, or the collection of tangent spaces T_l L_{d,n}, were sufficiently general. On the other hand, we know that the geometry of our situation is special enough that when E has rank D and K_{d+2} support, then E does drop rank on each of the T_l L_{d,n}. Proposition 4.4 asserts that this is the only possibility. This proof relies on the negation-based symmetry of L_{d,n} and on the fact that K_{d+2} is the only graph on D or fewer edges that is infinitesimally dependent (Proposition 3.7).

First we present the proof of Theorem 4.2, which effectively reduces our problem to the linear situation covered in Proposition 4.4.

Proof of Theorem 4.2.
Clearly, the image of the map must be contained in an r-dimensional linear space spanned by the columns of E. Suppose that either r < D, or E does not have K_{d+2} support. Then, from Proposition 4.4 below, there must be a smooth point l′ such that Dim(E(T_{l′} L_{d,n})) = r. Then, from the Local Submersion Theorem for smooth maps [15, page 20], the map must be locally surjective onto the r-dimensional linear space. Thus the image (a constructible set) cannot have smaller dimension. ∎

We are now ready to state the key technical result in this section.
Proposition 4.4.
Let E be a D × N matrix with rank r. Suppose that either r < D or E does not have K_{d+2} support. Then there is a smooth point l′ ∈ L_{d,n} with the property that Dim(E(T_{l′} L_{d,n})) = r.

4.1 Proof of Proposition 4.4

The rest of the section is occupied with the proof, which we break down into steps. We use a technical lemma about coordinate negation and determinants that is relegated to its own Section 4.2.
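The technical lemma in question (Lemma 4.15 in Section 4.2) asserts that if Z = SX + Y is singular for every sign flip matrix S, then det(Y) = 0; its proof rests on the identity Σ_S det(SX + Y) = 2^r det(Y). The identity is easy to spot-check numerically (a sketch, not the paper's code):

```python
import numpy as np
from itertools import product

def sign_flip_sum(X, Y):
    """Sum det(S @ X + Y) over all 2^r diagonal sign-flip matrices S."""
    r = X.shape[0]
    total = 0.0
    for signs in product((1.0, -1.0), repeat=r):
        total += np.linalg.det(np.diag(signs) @ X + Y)
    return total

rng = np.random.default_rng(1)
r = 4
X = rng.standard_normal((r, r))
Y = rng.standard_normal((r, r))
lhs = sign_flip_sum(X, Y)
rhs = 2**r * np.linalg.det(Y)
# lhs and rhs agree for any X, Y: all terms that involve at least one row
# of SX cancel when summed over the sign choices.
```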
Definition 4.5. A sign flip matrix S is a diagonal matrix with ±1 on the diagonal. A coordinate flip of a point or subspace is its image under a sign flip matrix.

Definition 4.6.
Let m be a smooth point in M_{d,n}, and T_m M_{d,n} be its complex analytic tangent space. We can describe T_m M_{d,n} by a (dn − C) × N complex matrix T_m. (The row ordering is not relevant.) Referring back to Definition 3.6, if E is an infinitesimally independent edge set, then the columns of T_m corresponding to E, at an E-regular point of M_{d,n}, are linearly independent. The same is true of the matrix T_l that expresses the tangent space T_l L_{d,n} at an E-regular point l of L_{d,n}. Such points must exist when E is infinitesimally independent.

The first step is to restrict to an interesting range of n.

Lemma 4.7.
Proposition 4.4 holds when n < d + 2.

Proof.
When n ≤ d + 1, T_l L_{d,n} is equal to the full embedding space, and thus Dim(E(T_l L_{d,n})) = r. Proposition 4.4 is then trivial in this case. ∎

Thus, from now on, we may assume that n ≥ d + 2.

Let T be a (dn − C) × N matrix with rows spanning the tangent space T_l L_{d,n} at some smooth point l. The complex analytic tangent space at a smooth point of a variety with pure dimension has the same dimension as the variety, which explains the shape of T.

Block form and column basis
Each column of E and T corresponds to an edge in K_n. We are going to make use of edge-permuted versions of these matrices that have particular block structures. To this end, we are now going to look at the columns of E and determine which subsets can form a basis, E_2, of a linear space of dimension r. So we permute and then partition the columns of E into a block form (E_1 E_2), where E_1 is D × (N − r) and E_2 is D × r.

We define a column basis, E_2 of E, to be good when r = D and the columns of E_2 correspond to the edges of a K_{d+2}. Any other column basis E_2 will be called bad. We denote by E the edge set corresponding to the columns of E_2.

Suppose that E has K_{d+2} support and r = D. Then the r columns of E corresponding to the edges of this K_{d+2} must form the only column basis of E. Moreover, it is good.

Lemma 4.8. If E does not have K_{d+2} support or r < D, then there is a bad column basis for E.

Proof. If r < D, then by definition, no column basis can be good. From now on, then, assume that r = D. If E is supported on only D columns, there is a unique column basis E_2. Thus in this case, non-K_{d+2} support for E will imply that the unique column basis is bad. Suppose instead there are more than D non-zero columns of E. Then, starting from, say, a good basis E_2, we can exchange a non-zero column of E_1 with an appropriate one from E_2 to obtain another basis which is bad: removing an edge from a K_{d+2} and replacing it with any other edge results in a graph that cannot be a K_{d+2} (it has more vertices). ∎

Remark 4.9. In light of the paragraph preceding this lemma, Lemma 4.8 can be made into an "if and only if" statement.
Going back to T and applying the same column permutation used to obtain (E_1 E_2), we get a block form (T_1 T_2), where T_1 is (dn − C) × (N − r) and T_2 is (dn − C) × r.

Lemma 4.10.
Assuming that E_2 is a bad basis of E and l is E-regular, the matrix T_2 has rank r (and in particular has linearly independent columns).

Proof. Since (E_1, E_2) arises from a bad basis, and we have only applied column permutations, the columns of T_2 correspond to a subgraph G of K_n with at most D edges which is not K_{d+2}. Proposition 3.7 tells us that the edges of G are infinitesimally independent. So, by E-regularity of l, these columns of T are linearly independent (Definition 4.6). ∎
Assuming that E is a bad basis of E and l is E -regular. Then the block matrix (cid:0) T T (cid:1) contains r rows, (cid:0) T (cid:48) T (cid:48) (cid:1) , such that T (cid:48) forms a non-singular matrix.Proof. Since we have a bad basis, from Lemma 4.10, T has r linearly independent columns andthus r linearly independent rows. We can select any set of rows corresponding to a row basis of T . (cid:3) Similarly, we have
Lemma 4.12.
Let E_2 be a column basis for E. Then the block matrix (E_1 E_2) contains r rows, (E′_1 E′_2), such that E′_2 forms a non-singular matrix.

Next, we derive an implication of E dropping rank on the tangent space.

Lemma 4.13.
Suppose there is a smooth point l ∈ L_{d,n} such that l and all of its coordinate flips l′ have the property that Dim(E(T_{l′} L_{d,n})) < r. Let E_2 be a bad basis for E. Let S_1 be any (N − r) × (N − r) sign flip matrix, and S_2 any r × r sign flip matrix. Then the r × r matrix

Z := E′_1 S_1 T′_1^⊤ + E′_2 S_2 T′_2^⊤

is singular.

Proof. Let S be the N × N sign flip matrix with S_1 and S_2 as its diagonal blocks. Let l′ be the point obtained from l under the sign flips of S. Because L_{d,n} is symmetric under coordinate negations, T_{l′} L_{d,n} is spanned by the columns of S T^⊤. Then we have Dim(E(T_{l′} L_{d,n})) = rank(E S T^⊤) = rank(E_1 S_1 T_1^⊤ + E_2 S_2 T_2^⊤) ≥ rank(E′_1 S_1 T′_1^⊤ + E′_2 S_2 T′_2^⊤).

If for some choice of the S_i the matrix Z were non-singular, then we would have a certificate that E does not drop rank on that coordinate flip of the tangent space, in contradiction to the hypothesis on Dim(E(T_{l′} L_{d,n})). ∎
The rank of Z may change as the S_i do, but it cannot rise to r.

Conclusion of the proof. Assume that E does not have K_{d+2} support or r < D. From Lemma 4.8, there is a bad column basis E_2 for E. From Lemma 4.11, for an E-regular l, T′_2 is a non-singular matrix. Suppose that at this l, we had, for all of its coordinate flips l′, the property that Dim(E(T_{l′} L_{d,n})) < r.

Lemma 4.15. Suppose that Z = SX + Y is an r × r matrix and det(Z) = 0 for all choices of sign flips, S. Then det(Y) = 0.

Proof. Multilinearity of the determinant allows us to express det(Z) as det(Z′) + det(Z′′), where Z′ is the matrix Z with its first row replaced by the first row of SX, and where Z′′ is the matrix Z with its first row replaced by the first row of Y. We can likewise expand out each of det(Z′) and det(Z′′) by splitting their second rows. Applying this decomposition recursively we ultimately get:

det(SX + Y) = Σ_{I ⊆ [r]} det(Z^S_I)

where [r] = {1, 2, . . . , r}, and Z^S_I is the matrix that has the rows indexed by I from SX and the rest from Y.

Now sum the above over the 2^r choices of S and rearrange:

Σ_S det(SX + Y) = Σ_S Σ_{I ⊆ [r]} det(Z^S_I) = Σ_{I ⊆ [r]} [ Σ_S det(Z^S_I) ]   (⋆)

For fixed I, each det(Z^S_I) = (−1)^{σ(S,I)} det(Z^{id}_I), where σ(S, I) is the number of rows corresponding to I where S has a diagonal entry of −1, and Z^{id}_I takes its I-rows from X itself. Thus, for each I, the inner sum of (⋆) is

2^{r−|I|} · Σ_{k=0}^{|I|} \binom{|I|}{k} (−1)^k · det(Z^{id}_I).

(The power-of-two factor accounts for all of the sign choices in S over the complement of I.) The coefficient of det(Z^{id}_I) equals 2^r when I is empty. Otherwise it is zero, since the inner term is simply the binomial expansion of (1 − 1)^{|I|}.
Thus,

Σ_S det(SX + Y) = 2^r det(Y).

Since this sum vanishes by hypothesis, we get det(Y) = 0. ∎

Automorphisms of L_{d,n}

In this section we will characterize the linear automorphisms of L_{d,n} for all d and n. One key feature will be that we are no longer restricted to the case of edge permutations. We will need to consider a few distinct cases for d and n.

Definition 5.1. Set N := \binom{n}{2} and identify the rows and columns of an N × N matrix with the edges of K_n. A signed permutation is an N × N matrix P′ that is the product SP of a sign flip matrix S and a permutation matrix P. A signed permutation P′ := SP is induced by a vertex relabeling if P is induced by a vertex relabeling of K_n.

L_{d,n}, n ≥ d + 3

Let d ≥ 1. This section will be concerned with L_{d,n} where n is larger than the minimal value, d + 2.

Theorem 5.2. Let n ≥ d + 3. Then any linear automorphism A of L_{d,n} is a scalar multiple of a signed permutation that is induced by a vertex relabeling.

The plan is to use machinery from Section 4 to show that the automorphism must be in the form of a generalized edge permutation. We will then be able to switch over to the M_{d,n} setting, where we can apply Theorem 3.10.

Definition 5.3. Let A be an N × N matrix. We identify the rows and columns of A with the edges of K_n. This induces a map τ_A from subgraphs of K_n to subgraphs of K_n by mapping the subgraph associated with a collection of rows to the column support of this sub-matrix.

Lemma 5.4. Let n ≥ d + 2 and suppose that A is a linear automorphism of L_{d,n}. Then the associated combinatorial map τ_A induces a permutation on K_{d+2} subgraphs of K_n.

Proof. If E is any D × N matrix of rank D, with E(L_{d,n}) ⊂ L_{d,d+2}, then the map EA also has these properties. Thus, by Theorem 4.2 both E and EA have K_{d+2} support.
There is such an E for each K_{d+2} subgraph: simply take the matrix of the edge forgetting map π_K̄, where K is an edge set comprising the edges of this K_{d+2}. This situation is only possible if τ_A maps each K_{d+2} subgraph T to another K_{d+2} subgraph.

If the map on K_{d+2} subgraphs induced by τ_A is not injective, then the matrix A would have more than D rows supported by only D columns, and thus A would be singular. Since A is a linear automorphism of L_{d,n} it has to be invertible, and the resulting contradiction completes the proof. ∎

This lets us prove the following.

Lemma 5.5. Let n ≥ d + 3 and let A be a linear automorphism of L_{d,n}. Then A is a generalized permutation.

Proof. Suppose, for contradiction and w.l.o.g., that the row corresponding to the edge e := {1, 2} has two non-zero entries, corresponding to edges {i, j} and {k, ℓ}. By Lemma 5.4, any K_{d+2} subgraph T containing the edge e must be mapped by τ_A to a K_{d+2} subgraph T′ that contains the vertex set X := {i, j} ∪ {k, ℓ}. Since |X| ≥ 3, there are at most \binom{n−3}{d−1} choices for T′. Meanwhile, there are \binom{n−2}{d} choices for T. Since n ≥ d + 3, we have \binom{n−2}{d} > \binom{n−3}{d−1}, contradicting the permutation of K_{d+2} subgraphs guaranteed by Lemma 5.4.

Thus each row of A can have at most one non-zero entry. As a non-singular matrix, this makes A a generalized permutation. ∎

At this point, we want to move back to the setting of M_{d,n}, which we do with this next result.

Lemma 5.6. Let A := DP be a generalized permutation, where D is an invertible diagonal matrix and P is a permutation matrix. If A is a linear automorphism of L_{d,n} then D^2 P is a linear automorphism of M_{d,n}.

Proof. Let l^2 denote the vector of coordinate-wise squares of a vector l ∈ C^N; in this proof squares of vectors are coordinate-wise.
Now we check that

l^2 ∈ M_{d,n} ⇒ l ∈ L_{d,n}
⇒ DPl ∈ L_{d,n}  (A is an automorphism)
⇒ (DPl)^2 ∈ M_{d,n}
⇒ D^2 (Pl)^2 ∈ M_{d,n}  (D is diagonal)
⇒ (D^2 P) l^2 ∈ M_{d,n}  (P is a permutation) ∎

Proof of Theorem 5.2. From Lemma 5.5, any linear automorphism A of L_{d,n} with n ≥ d + 3 is a generalized permutation A = DP. Lemma 5.6 implies that A gives rise to a generalized edge permutation D^2 P that is a linear automorphism of M_{d,n}. Theorem 3.10 then tells us that D^2 P = s^2 P has uniform scale and also is induced by a vertex relabeling. Finally, A is then a scalar multiple of a signed permutation (Lemma 5.6 "forgets" the signs), as required. ∎

L_{d,d+2}, with d ≥ 3

Our next case is when n is minimal, but we will only deal with the case of d ≥ 3.

Theorem 5.7. Let d ≥ 3. Then any linear automorphism A of L_{d,d+2} is a scalar multiple of a signed permutation that is induced by a vertex relabeling.

The plan is to use some of the structure of the singular locus of L_{d,d+2} to reduce our problem to that of L_{d−1,d+2}. Then we can directly apply Theorem 5.2.

Lemma 5.8. Let d ≥ 3. L_{d−1,d+2} is an irreducible subvariety of Sing(L_{d,d+2}).

Proof. Looking first at the squared measurement variety, from Theorem 2.6, we know that Sing(M_{d,d+2}) = M_{d−1,d+2}. Let Z be the locus of C^N where at least one coordinate vanishes, and let S := L_{d−1,d+2} − Z. Thus from Lemma 2.20, the points in S are (algebraically) singular in L_{d,d+2}. So S is contained in Sing(L_{d,d+2}).

From Theorem 2.11, when d ≥ 3, we have that L_{d−1,d+2} is irreducible. The set S is obtained from L_{d−1,d+2} by removing a strict subvariety, which must be of lower dimension due to irreducibility. Thus S is a full-dimensional constructible subset of the irreducible L_{d−1,d+2}. Thus the Zariski closure of S is L_{d−1,d+2}. Since Sing(L_{d,d+2}) is an algebraic variety, it must contain the Zariski closure of S, which is L_{d−1,d+2}. ∎

Lemma 5.9. L_{d−1,d+2} has a full-dimensional affine span.

Proof.
Since L_{d−1,d+2} contains L_{1,d+2}, we just need to show that this smaller variety has a full-dimensional affine span. For a fixed i, let us look at a configuration p of d + 2 points with p_i placed at 1 and the rest of the points placed at the origin. Then l := l(p) has all zero coordinates except for the d + 1 edges connecting p_i to the other points. Using the symmetry of L_{1,d+2} under sign negation, we can find points in L_{1,d+2} with the signs of l flipped at will. Thus, using affine combinations of these flipped points, we can produce a point on the l_{ij} axis, for any j. Iterating over the i gives us our result. ∎

Now we wish to explore the decomposition of Sing(L_{d,d+2}) into its irreducible components. For each ij, let Z_{ij} be the subvariety of Sing(L_{d,d+2}) with a zero-valued ij-th coordinate. As discussed above in Lemma 2.20, any singular point that is not contained in L_{d−1,d+2} must have at least one zero coordinate (in order to be in the "bad locus" described there). Thus we can write Sing(L_{d,d+2}) as the union of L_{d−1,d+2} and the Z_{ij}.

For d ≥ 3, L_{d−1,d+2} is irreducible, and thus from Lemma A.6 (applied to the union of components of Sing(L_{d,d+2})) it must be fully contained in at least one component C of Sing(L_{d,d+2}). And, again from Lemma A.6 (applied to the union of L_{d−1,d+2} and the Z_{ij}), C must be fully contained in either L_{d−1,d+2} or one of the Z_{ij}. Meanwhile, L_{d−1,d+2} is not contained in any Z_{ij}. Thus we can conclude that:

Lemma 5.10. Let d ≥ 3. L_{d−1,d+2} is a component of Sing(L_{d,d+2}).

From Lemma A.6 (applied to the union of L_{d−1,d+2} and the Z_{ij}), any other component of Sing(L_{d,d+2}) must be contained in one of the Z_{ij}. Thus, we can also conclude:

Lemma 5.11. Let d ≥ 3. Any component of Sing(L_{d,d+2}) that is not L_{d−1,d+2} cannot have a full-dimensional affine span.

Now with this understanding of Sing(L_{d,d+2}) established, we can move on to the automorphisms.

Lemma 5.12. Let d ≥ 3.
Any linear automorphism A of L_{d,d+2} must be a linear automorphism of L_{d−1,d+2}.

Proof. From Theorem A.8, A must be a linear automorphism of Sing(L_{d,d+2}). And from Theorem A.5, it must map components of Sing(L_{d,d+2}) to components of Sing(L_{d,d+2}). From Lemma 5.10, L_{d−1,d+2} is a component of this singular set, and from Lemma 5.9 it has a full-dimensional affine span. Meanwhile, from Lemma 5.11, no other component can have a full-dimensional affine span. Thus, as a bijective linear map, A must map L_{d−1,d+2} to itself. ∎

And we can finish the proof.

Proof of Theorem 5.7. The theorem now follows by combining Lemma 5.12 together with Theorem 5.2. ∎

5.3 Automorphisms of L_{2,4}

The method of the previous section fails for L_{2,4}, as L_{1,4} is reducible. In fact, the theorem itself fails in this case: the group of linear automorphisms is larger than expected. In particular, Regge [27] (see also Roberts [28]) gave a linear map that always takes the Euclidean lengths of the edges of a tetrahedral configuration in R^3 to those of a different tetrahedral configuration in R^3. See Equation (⋆) in the introduction.

Below we will fully characterize the automorphism group of L_{2,4}. When we restrict our automorphisms to have only non-negative entries, only the expected symmetries will remain.

Definition 5.13. A linear automorphism A of L_{2,4} is real if its matrix has only real entries, rational if its matrix has only rational entries, and non-negative if its matrix contains only real and non-negative entries.

Clearly there are 24 linear automorphisms that arise by simply permuting the 4 vertices. There are also the 32 linear automorphisms that arise from optionally negating up to 5 of the coordinate axes in C^6. Combining these gives us a discrete group of 768 linear automorphisms. Because any global scale will be an automorphism, the group of linear automorphisms of L_{2,4} is not a discrete group. We now define several groups that will play a role in our analysis.
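The Regge map of (⋆) can be spot-checked numerically before defining these groups. Writing 288·(volume)^2 of a tetrahedron as the Cayley–Menger determinant of its squared edge lengths, the Regge substitution applied to the unsquared lengths leaves this determinant unchanged (the classical volume-preservation property), so in particular it maps the variety cut out by the determinant to itself. A sketch with our own edge labeling and helper names (the Regge map here fixes the opposite edge pair {0,1}, {2,3}):

```python
import numpy as np

def cayley_menger(sq):
    """Cayley-Menger determinant from the squared lengths sq[(i, j)] of a
    4-point configuration; equals 288 * (tetrahedron volume)**2."""
    M = np.ones((5, 5))
    M[0, 0] = 0.0
    for i in range(4):
        M[i + 1, i + 1] = 0.0
        for j in range(i + 1, 4):
            M[i + 1, j + 1] = M[j + 1, i + 1] = sq[(i, j)]
    return np.linalg.det(M)

def regge(l):
    """Regge symmetry fixing the opposite pair {0,1}, {2,3}: each of the
    other four (unsquared) lengths x is replaced by s - x, where s is
    their half-sum."""
    s = (l[(0, 2)] + l[(0, 3)] + l[(1, 2)] + l[(1, 3)]) / 2
    out = dict(l)
    for e in [(0, 2), (0, 3), (1, 2), (1, 3)]:
        out[e] = s - l[e]
    return out

rng = np.random.default_rng(2)
p = rng.standard_normal((4, 3))   # a random tetrahedron in R^3
l = {(i, j): np.linalg.norm(p[i] - p[j])
     for i in range(4) for j in range(i + 1, 4)}
before = cayley_menger({e: v**2 for e, v in l.items()})
after = cayley_menger({e: v**2 for e, v in regge(l).items()})
# `before` and `after` agree: the Regge substitution preserves the
# squared volume as a polynomial identity in the unsquared lengths.
```

Note that the substitution operates on unsquared lengths, which is exactly why it is a linear symmetry of the unsquared variety but not of the squared one.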
Definition 5.14. Define Aut(L_{2,4}) to be the linear automorphisms of L_{2,4}. Let the group P Aut(L_{2,4}) be induced on the equivalence classes of A ∈ Aut(L_{2,4}) under the relation "A′ is a complex scale of A". We define P Aut(Sing(L_{2,4})) via a similar construction. Importantly, we will see below that P Aut(Sing(L_{2,4})) is the automorphism group of a projective subspace arrangement and thus is a discrete group. Also, we have P Aut(L_{2,4}) < P Aut(Sing(L_{2,4})). Thus, all the "projectivized" groups we define are discrete.

We also consider the real subgroup Aut_R(L_{2,4}). This has a counterpart P Aut_R(L_{2,4}) of equivalence classes up to real scale, and P^+ Aut_R(L_{2,4}), on equivalence classes defined up to positive scale. It is well-defined to refer to an element of P^+ Aut_R(L_{2,4}) as being non-negative, since any equivalence class containing a non-negative A consists entirely of non-negative matrices.

The main theorem of this section characterizes the linear automorphisms of L_{2,4} as follows. The proof is in the next subsections.

Theorem 5.15. The group P Aut(L_{2,4}) is finite and is generated by linear automorphisms of L_{2,4} that are rational. The group P^+ Aut_R(L_{2,4}) is isomorphic to the Weyl group D_6. The subset of non-negative elements of P^+ Aut_R(L_{2,4}) is a subgroup of order 24 and acts by relabeling the vertices of K_4.

Remark 5.16. The group P^+ Aut_R(L_{2,4}) is in fact generated by the edge permutations induced by vertex relabeling, sign flip matrices, and the one Regge symmetry of (⋆) from the introduction (see supplemental script).

The rest of this section develops the proof of Theorem 5.15.

The Singular Locus of L_{2,4}

In this section, we will study the singular locus of L_{2,4}. This will be used for the proof of Theorem 5.15, which characterizes the linear automorphisms of L_{2,4}. In particular, a linear automorphism of a variety must also be a linear automorphism of its singular locus.

Theorem 5.17.
The singular locus Sing(L_{2,4}) consists of the union of 60 3-dimensional linear subspaces. These subspaces can be partitioned into three types, which we call I, II and III.

Type I: There are 32 subspaces of this type. They arise from configurations of collinear points, and together make up L_{1,4}. They are each defined by (the vanishing of) three equations of the following form:

l_{12} − s_{13} l_{13} + s_{23} l_{23}
l_{12} − s_{14} l_{14} + s_{24} l_{24}
s_{13} l_{13} − s_{14} l_{14} + s_{34} l_{34}

where each s_{ij} takes on the values {−1, 1}.

Type II: There are 24 subspaces of this type. They arise when one pair of vertices is collapsed to a single point. For example, if we collapse p_1 with p_2, we get the equations:

l_{12}
l_{13} − s_{23} l_{23}
l_{14} − s_{24} l_{24}

This gives us 4 subspaces per collapsed edge, and we obtain this case by collapsing any of the 6 edges.

Type III: There are 4 subspaces of this type. They arise by setting the three edge lengths of one triangle to zero. For example: l_{12}, l_{13}, l_{23}.

Proof. The singular locus of a variety V is defined by adding to the ideal I(V) the equations that express a rank-drop in the Jacobian matrix of a set of equations generating I(V). We first verify in the Magma CAS that the ideal defined by our single simplicial volume determinant equation is radical. This also follows from [9]. In Magma, we calculate the Jacobian of this equation to express the singular locus. Magma is then able to factor this algebraic set into components (that are irreducible over Q), and in this case outputs the above decomposition. (See supplemental script.) ∎

Flats and intersection graph

Theorem A.8 tells us that any linear automorphism of L_{2,4} must be a linear automorphism of its singular set, and so must map each of its singular three-dimensional subspaces to some three-dimensional singular subspace. As a linear automorphism, it must also preserve the intersection lattice of the three-dimensional singular subspace arrangement.
Therefore, by finding the set of linear automorphisms that preserve the intersection lattice of these subspaces, we can constrain our search for automorphisms of L_{2,4} to just that set. Combinatorial descriptions of an intersection lattice of a subspace arrangement can be constructed in many ways. Here, it suffices to consider a partial description that comprises the three-dimensional singular subspaces and their one-dimensional intersections.

Definition 5.18. We denote by V_3 the set of singular three-dimensional subspaces of L_{2,4}. We denote by V_1 the set of one-dimensional subspaces created as the intersections of all pairs and triples of spaces in V_3.

(A note on the Magma computation above: Magma does this check over the field Q, but since Q is a perfect field, this implies that the ideal is also radical under any field extension [23, Page 169].)

Lemma 5.19. The set of one-dimensional subspaces V_1 comes in three classes:

Type I: There are 6 one-dimensional subspaces of this type. They are generated by vectors of the form e_i, where e_i is one of the coordinate axes of C^6.

Type II: There are 24 one-dimensional subspaces of this type. They are generated by vectors of the form e_i ± e_j ± e_k ± e_l, where i, j, k, l correspond to the four edges of a 4-cycle. These measurements correspond to collapsing two sets of two vertices that are connected by four edges.

Type III: There are 16 one-dimensional subspaces of this type. They are generated by vectors of the form e_i ± e_j ± e_k, where i, j, k correspond to three edges incident to one vertex. These measurements correspond to collapsing one triangle.

Proof. This follows directly from calculating the intersections of all pairs and triples of the 60 singular subspaces of L_{2,4}. This has been done in the Magma CAS. (See supplemental script.) ∎

Definition 5.20.
We define ∆ as the bipartite graph that has one set of vertices corresponding to the three-dimensional singular subspaces of L_{3,4} (one vertex for each three-dimensional subspace), the other set of vertices corresponding to the one-dimensional intersection subspaces V_1 (one vertex for each one-dimensional subspace), and an edge between vertex i of the first set and vertex j of the second set whenever the i-th three-dimensional subspace includes the j-th one-dimensional subspace.

Definition 5.21. A graph automorphism of a bipartite (two-colored) graph is a permutation ρ of the vertex set such that the color of vertex i is the same as the color of ρ(i), and vertices (i, j) form an edge if and only if (ρ(i), ρ(j)) also form an edge.

By finding the automorphisms of the graph ∆, we can constrain our search for automorphisms of {V_3, V_1}, and thus of L_{3,4}.

Lemma 5.22. The bipartite graph ∆ has 11520 automorphisms. Under this automorphism group, the graph has three orbits. One orbit corresponds to the set of three-dimensional singular subspaces. Another orbit corresponds to the subset of one-dimensional subspaces in V_1 of types I and II. A third orbit corresponds to the subset of one-dimensional subspaces of type III.

Proof. We have computed this using Nauty [22] within Magma. (See supplemental script.) □

Graph automorphisms to arrangement automorphisms

A priori, it might be the case that some of these graph automorphisms do not arise from a linear transform of C^6 acting as an automorphism on the subspace arrangement {V_3, V_1} ⊂ C^6. We rule this out.

Lemma 5.23. Each of the graph automorphisms of ∆ gives rise to a unique linear automorphism of the arrangement {V_3, V_1} on L_{3,4}, up to a global scale. Each equivalence class of such linear maps contains a rational-valued matrix.

Proof. Each graph automorphism ρ gives rise to a permutation of the spaces in V_3.
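The constraint assembly used in this proof can be illustrated concretely. The sketch below is ours (Python, not the paper's Magma script; it assumes the subspace enumeration and sign conventions reconstructed in the singular-locus statement above). For the identity permutation it builds the linear system forcing a 6 × 6 matrix A to map each singular subspace into itself, and checks that the solution space is one-dimensional, i.e. only the scalar matrices remain.

```python
from itertools import product
import numpy as np
import sympy as sp

E = ["12", "13", "14", "23", "24", "34"]
idx = {e: i for i, e in enumerate(E)}

def row(coeffs):
    r = [0] * 6
    for e, c in coeffs.items():
        r[idx[e]] = c
    return r

def ed(u, v):
    return u + v if u < v else v + u

normals = []  # one 3x6 constraint matrix per singular subspace
for s13, s14, s23, s24, s34 in product([1, -1], repeat=5):      # Type I
    normals.append([row({"12": 1, "13": -s13, "23": s23}),
                    row({"12": 1, "14": -s14, "24": s24}),
                    row({"13": s13, "14": -s14, "34": s34})])
for a, b in [("1","2"), ("1","3"), ("1","4"), ("2","3"), ("2","4"), ("3","4")]:
    c, d = [v for v in "1234" if v not in (a, b)]
    for s1, s2 in product([1, -1], repeat=2):                   # Type II
        normals.append([row({ed(a, b): 1}),
                        row({ed(a, c): 1, ed(b, c): -s1}),
                        row({ed(a, d): 1, ed(b, d): -s2})])
for tri in [("12","13","23"), ("12","14","24"), ("13","14","34"), ("23","24","34")]:
    normals.append([row({e: 1}) for e in tri])                  # Type III
assert len(normals) == 60

# Constraints N (A b) = 0 for every spanning vector b of every subspace,
# with the 36 entries of A (row-major) as unknowns: 60 * 3 * 3 = 540 rows.
rows = []
for N in normals:
    for bvec in sp.Matrix(N).nullspace():       # 3 spanning vectors
        b = np.array([float(x) for x in bvec])
        for r in range(3):
            rows.append(np.outer(np.array(N[r], dtype=float), b).ravel())
M = np.vstack(rows)
rank = np.linalg.matrix_rank(M)
# Lemma 5.23 predicts a one-dimensional solution space (scalar matrices),
# i.e. rank 35 for this 540 x 36 system.
print(M.shape, rank)
```

For a non-identity graph automorphism ρ, the same assembly applies with the target normals taken from the subspace ρ(i) rather than i.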
A 6 × 6 matrix A describing a linear transform that maps the three-dimensional subspaces in the same manner must satisfy 540 = 60 · 9 linear constraints, nine for each subspace V_i ∈ V_3, expressing that A maps V_i into V_{ρ(i)}.

Magma gives us a generating set of size 6 for the group of graph automorphisms. For each of the 6 generators of the graph automorphism group, we write out the system of linear constraints. When doing so, we discover that this system always has a solution that is unique, up to a global scale. The 540 × 36 constraint matrix can always be written as a rational-valued matrix, since the subspace arrangement {V_3, V_1} can be defined using rational-valued coefficients. (See supplemental script.) □

Arrangement automorphisms are L_{3,4} automorphisms

It might also be possible that there are linear transforms which preserve the subspace arrangement {V_3, V_1} but do not preserve the entire L_{3,4} variety. We rule this out as well.

Lemma 5.24. Each of the graph automorphisms of ∆ gives rise to a unique linear automorphism of L_{3,4}, up to a global scale. Each equivalence class of such linear maps contains a rational-valued matrix.

Proof. From Lemma 5.23, each of the graph automorphisms gives rise to a rational-valued linear automorphism of our arrangement, unique up to scale. When we pull back the single defining equation of L_{3,4} through each such invertible linear map, we verify that we recover said equation. Thus this map is a linear automorphism of L_{3,4}. □

Reflection group

Next, we make a definition that will be helpful in establishing the connection between P^+Aut_R(L_{3,4}) and the Weyl group D_6. For definitions, see [18].

Definition 5.25. We define the reflection group W as the real matrix group generated by the set of reflections in R^6 across the 30 hyperplanes that are orthogonal to the 30 one-dimensional real intersection subspaces of types I and II.

The following lemma was based on conversations with Dylan Thurston.

Lemma 5.26.
The reflection group W is of order 23040, and is isomorphic to the Weyl group D_6. The reflection group leaves the variety L_{3,4} invariant.

Proof. From the 30 vectors that generate W, we generate a larger set of 60 vectors φ that has the same reflection group as follows: for each vector f in the original 30-set, we create the two vectors ±f/‖f‖ in the 60-set. Next, we verify that the set φ is a (reduced, crystallographic) root system by: i) applying each generator of the group W to the set φ and verifying that it leaves the set invariant; and ii) verifying that the set satisfies the integrality condition: for all f, g ∈ φ, we have 2(f · g)/‖f‖² ∈ Z.

A reflection group of a root system is a Weyl group. To prove the first part of the lemma, we need only classify the root system (and thus the Weyl group) according to the finite catalog of rank-6 possibilities. We use the procedure described in [18, page 48], which we summarize here.

We begin by choosing any vector h ∈ Q^6 that is not proportional or perpendicular to a vector in φ, and then we identify the subset of positive roots φ⁺ := {f : f ∈ φ, (h · f) > 0}. Since φ is a root system, it will be the case that |φ⁺| = |φ|/2 = 30. We then identify the simple roots as the vectors f ∈ φ⁺ that cannot be decomposed as g_1 + g_2 for some g_i ∈ φ⁺. By construction, simple roots form a basis for the embedding vector space, so in the present case there will be 6 of them. Finally, we can classify the group by examining the pattern of pairwise angles between simple roots.

Applying this calculation to our root system, we find that the pairwise angles between the simple roots are π/2 or 2π/3. We draw a Dynkin diagram that has one vertex for each simple root and an edge (i, j) whenever the angle between roots i and j is 2π/3. Doing so, we find that this diagram is of type D_6. This means that the reflection group is isomorphic to the Weyl group D_6, which is of order 23040.
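The classification procedure just summarized can be carried out directly. In the following sketch (ours, in Python rather than Magma; the 30 type I/II lines are written out under our reconstructed conventions, and the roots are rescaled to a common integer length rather than to unit length), we extract the positive and then the simple roots with respect to a generic h and read off the degree sequence of the Dynkin diagram, which is that of D_6.

```python
from itertools import product

E = ["12", "13", "14", "23", "24", "34"]
idx = {e: i for i, e in enumerate(E)}

roots = set()
# Type I lines: the 6 coordinate axes; scale e_i by 2 so that every root
# has squared length 4, matching the type II vectors below.
for i in range(6):
    for s in (2, -2):
        v = [0] * 6
        v[i] = s
        roots.add(tuple(v))
# Type II lines: e_i +- e_j +- e_k +- e_l over the three 4-cycles of K4.
for cyc in [("12","23","34","14"), ("12","24","34","13"), ("13","23","24","14")]:
    for signs in product([1, -1], repeat=4):
        v = [0] * 6
        for e, s in zip(cyc, signs):
            v[idx[e]] = s
        roots.add(tuple(v))

dot = lambda u, w: sum(a * b for a, b in zip(u, w))
h = (1, 2, 4, 8, 16, 32)      # generic: not orthogonal to any root
pos = [r for r in roots if dot(r, h) > 0]
pos_set = set(pos)

# Simple roots: positive roots that are not a sum of two positive roots.
simple = [r for r in pos
          if not any(tuple(a - b for a, b in zip(r, g)) in pos_set for g in pos)]

# Dynkin diagram: connect two simple roots when their angle is 2*pi/3,
# i.e. (with squared length 4) when their dot product is -2.
deg = [sum(1 for s2 in simple if s2 != s and dot(s, s2) == -2) for s in simple]
print(len(roots), len(pos), len(simple), sorted(deg))
# 60 30 6 [1, 1, 1, 2, 2, 3]  -- the degree sequence of the D6 diagram
```

The degree sequence [1, 1, 1, 2, 2, 3] on six vertices is exactly that of the D_6 diagram (a path with a fork at one end), and among the rank-6 possibilities only D_6 has 60 roots of equal length.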
This proves the first part of the lemma. To prove the second part of the lemma, we use the fact that the reflection group W is generated by the 6 reflections from the simple roots. We pull back the single defining equation of L_{3,4} through each of these 6 linear maps, and we verify that we recover said equation. Note that the group could also be identified from its computed order. (See supplemental script.) □

Proof

The proof of our theorem is now nearly complete.

Proof of Theorem 5.15. From Theorem A.8, a linear automorphism of L_{3,4} must be a linear automorphism of its singular set, and thus must preserve the incidence structure of {V_3, V_1}. Any linear automorphism of this incidence structure must give rise to a graph automorphism of ∆. By Lemma 5.22, there are 11520 graph automorphisms of ∆, and from Lemma 5.24, each gives rise to a rational-valued linear automorphism of L_{3,4}, unique up to scale. Summarizing, we have shown that P Aut(L_{3,4}) = P Aut(Sing(L_{3,4})), and that both of these groups are isomorphic to the automorphism group of the graph ∆. Lemma 5.24 also implies that each equivalence class in P Aut(L_{3,4}) contains a rational representative, so this group can be generated by rational matrices.

Because of the rational generators mentioned above, the group P Aut_R(L_{3,4}) is isomorphic to the others. It then follows that the order of P^+Aut_R(L_{3,4}) is 23040 = 2 · 11520, since each class in P Aut_R(L_{3,4}) contains both A and −A, which become distinct when only positive scales are allowed. By Lemma 5.26 (specifically the second statement), the elements of W generate some subgroup G of P^+Aut_R(L_{3,4}). In fact, no two elements of W are related by a positive scale, so W is isomorphic to this G. The first part of Lemma 5.26 says that W has the same order as P^+Aut_R(L_{3,4}), so W and P^+Aut_R(L_{3,4}) are isomorphic.

For the third part of the theorem, we need only test the 23040 matrices and retain those that have only non-negative entries. This has been done in the Magma CAS, and indeed, it yields only the 24 edge permutations induced by vertex relabeling.
(See supplemental script.) This is, in particular, a subgroup of P^+Aut_R(L_{3,4}). □

L_{1,3}

Theorem 5.27. Any linear automorphism A of L_{1,3} is a scalar multiple of a signed permutation that is induced by a vertex relabeling.

Proof. L_{1,3} comprises 4 hyperplanes. Each permutation on these 4 planes gives us at most a single linear automorphism of L_{1,3}, up to scale. Thus P Aut(L_{1,3}) is isomorphic to a subgroup of S_4 and, in particular, has order at most 24. Meanwhile, P Aut(L_{1,3}) contains a subgroup of order 24 generated by vertex relabeling and sign flips. By the above, this must be the whole group. □

Remark 5.28. If we want to see S_4 acting by sign flips and coordinate permutations, we can observe that these maps are symmetries of the cube that permute the 4 opposite corner diagonals.

A Algebraic Geometry Preliminaries

We summarize the needed definitions and facts about complex algebraic varieties. For more, see [16]. In this section, N and D will represent arbitrary numbers.

Definition A.1. A (complex embedded affine) variety (or algebraic set), V, is a (not necessarily strict) subset of C^N, for some N, that is defined by the simultaneous vanishing of a finite set of polynomial equations with coefficients in C in the variables x_1, x_2, ..., x_N, which are associated with the coordinate axes of C^N.

A variety can be stratified as a union of a finite number of complex analytic submanifolds of C^N. A finite union of varieties is a variety. An arbitrary intersection of varieties is a variety. The set of polynomials that vanish on V forms a radical ideal I(V), which is generated by a finite set of polynomials.

A variety V is reducible if it is the proper union of two varieties V_1 and V_2. (Proper means that V_1 is not contained in V_2 and vice versa.) Otherwise it is called irreducible.
A variety has a unique decomposition as a finite proper union of its maximal irreducible subvarieties, called components. (Maximal means that a component cannot be contained in a larger irreducible subvariety of V.)

A variety V has a well defined (maximal) dimension Dim(V), which will agree with the largest D for which there is an open subset of V, in the standard topology, that is a D-dimensional complex submanifold of C^N. The local dimension Dim_l(V) at a point l is the dimension of the highest-dimensional irreducible component of V that contains l. If all components of V have the same dimension, we say it has pure dimension. Any strict subvariety W of an irreducible variety V must be of strictly lower dimension.

Definition A.2. A constructible set S is a set that can be defined using a finite number of varieties and a finite number of Boolean set operations. The Zariski closure of S is the smallest variety V containing it. The set S has the same dimension as its Zariski closure V. The image of a variety V of dimension D under a polynomial map is a constructible set S of dimension at most D. If V is irreducible, then so too is the Zariski closure of S. (We say that S is irreducible.)

Theorem A.3. Any variety V is a closed subset of C^N in the standard topology. If a subset S of C^N is standard-topology dense in a variety V, then V is the Zariski closure of S.

We will need the following easy lemmas.

Lemma A.4. Let A be a bijective linear map on C^N. The image under A of a variety V is a variety of the same dimension. If V is irreducible, then so too is this image.

Proof. The image S := A(V) must be a constructible set. Since A is bijective, there is also a map A^{-1} acting on C^N, and S must be the inverse image of V under this map. Thus, by pulling back the defining equations of V through A^{-1}, we see that S must also be a variety. The dimension claim follows from the fact that polynomial maps cannot raise dimension, and our map is invertible. □

Theorem A.5.
If A is a bijective linear map on C^N that acts as a bijection between two reducible varieties V and W, then it must bijectively map components of V to components of W.

Proof. From Lemma A.4, A must map irreducible varieties to irreducible varieties. As a bijection, it also must preserve subset relations (which define maximality). □

Lemma A.6. Let V = V_1 ∪ V_2 be a union of varieties. Then any irreducible subvariety W of V must be fully contained in at least one of the V_i.

Proof. If W were not fully contained in either V_i, then it could be written as the proper union of varieties W = ∪_i (W ∩ V_i), contradicting its irreducibility. □

There are two approaches for defining smooth and singular points. One comes from our algebraic setting, while the other comes from the more general setting of complex analytic varieties (which we will explicitly refer to as "analytic"). It will turn out that (algebraic) smoothness implies analytic smoothness, and that analytic smoothness implies (algebraic) smoothness.

Definition A.7. The Zariski tangent space at a point l of a variety V is the kernel of the Jacobian matrix of a set of generating polynomials for I(V), evaluated at l. A point l is called (algebraically) smooth in V if the dimension of the Zariski tangent space equals the local dimension Dim_l(V). Otherwise l is called (algebraically) singular in V. The locus of singular points of V is denoted Sing(V). The singular locus is itself a strict subvariety of V.

Theorem A.8. If A is a bijective linear map on C^N that acts as a bijection between two irreducible varieties V and W, then it must map singular points to singular points.

This is a special case of the more general setting of "regular maps" and "isomorphisms of varieties" [16, Page 175].

Theorem A.9. If a point l is contained in two distinct components of V, then l cannot be a smooth point in V.

See [29, II.2, Theorem 6].

Definition A.10.
If a point l in a variety V has a neighborhood in V that is a complex submanifold of C^N with some dimension D, then we call the point analytically smooth of dimension D in V, or just analytically smooth in V. Otherwise we call the point analytically singular in V.

The following theorem tells us that there is no difference between these two notions of smoothness.

Theorem A.11. An (algebraically) smooth point l in a variety V must be an analytically smooth point of dimension Dim_l(V) in V. A point l that is analytically smooth of dimension D in V must be an (algebraically) smooth point l in V with Dim_l(V) = D.

For discussions of this theorem, see [16, Exercise 14.1] and [25, Page 13]. See [21, Page 14] for the setting where one does not assume irreducibility, or even pure dimension. Note that the second direction does not have a corresponding statement in the setting of real algebraic varieties.

B Fano Varieties of L_{3,4}

This section contains a bonus result about the linear subsets in L_{3,4}. Though it is not needed for the rest of the paper, it can be of use for unlabeled rigidity problems [10].

Definition B.1. Given an affine algebraic cone V ⊂ C^N (an affine variety defined by a homogeneous ideal), its Fano-k variety Fano_k(V) is the subset of the Grassmannian Gr(k+1, N) corresponding to (k+1)-dimensional linear subspaces that are contained in V.

Theorem B.2. The only 3-dimensional linear subspaces that are contained in L_{3,4} are the 60 3-dimensional linear spaces comprising its singular locus. Moreover, there are no linear subspaces of dimension ≥ 4 contained in L_{3,4}.

Proof. This proposition is proven by calculating the Fano-2 variety of L_{3,4} in the Magma CAS [4], and comparing it to the Fano-2 variety of the singular locus of L_{3,4}. We use the approach described in [16, Page 70] to compute the Fano_2(L_{3,4}) variety. We summarize this approach here.
We shall fix an order of the coordinates of C^6, say (l_{12}, l_{13}, l_{23}, l_{14}, l_{24}, l_{34}). Let us specify a point in C^6 as M t, where t = (t_1, t_2, t_3)^T and M is the 6 × 3 matrix whose first three rows form the 3 × 3 identity and whose last three rows are filled with variables λ_1, ..., λ_9. The λ_i specify a three-dimensional linear subspace of C^6 (the column span of M), and the t_j specify a point on that subspace. Note that this can only represent an affine open subset of the Grassmannian; it cannot represent three-dimensional linear subspaces whose projection onto the first three coordinates is degenerate.

We can compute the polynomial in [λ_i, t_j] vanishing when the associated points in C^6 are also in L_{3,4}. We can then look at all of the coefficients (polynomials in the λ_i) of the monomials in the t_j. These coefficient polynomials vanish identically iff the linear subspace specified by the λ_i is in L_{3,4}. Thus these coefficients generate an affine open subset of Fano_2(L_{3,4}).

To study the whole Fano variety, we must also look at the other affine subsets of the Grassmannian. Due to the vertex symmetry of L_{3,4}, we only need to consider two additional matrices, obtained by moving the 3 × 3 identity block to two other triples of coordinate rows. The three matrices place the identity block at the triple of coordinate axes corresponding to, respectively, a triangle, a chicken-foot, and a simple open path; up to vertex relabeling, every triple of edges of K_4 is of one of these three kinds. Thus, these 3 open subsets of Fano_2(L_{3,4}), together with vertex relabeling, cover the full Fano variety.

We compute these 3 open subsets of Fano_2(L_{3,4}) in Magma, and verify that, in each of these open subsets, Fano_2(L_{3,4}) is 0-dimensional and |Fano_2(L_{3,4})| = |Fano_2(Sing(L_{3,4}))|. As Fano_2(L_{3,4}) ⊃ Fano_2(Sing(L_{3,4})), we can conclude that Fano_2(L_{3,4}) = Fano_2(Sing(L_{3,4})) (see supplemental script). As the Fano-2 variety is discrete, the higher Fano varieties of L_{3,4} must also be empty. □

Remark B.3. We have been unable to fully compute any of the Fano varieties of L_{3,5} in any computer algebra system, but partial results do not look promising. We have been able to verify that the corresponding Fano variety of L_{3,5} is not empty (see supplemental script).
This, together with our (partial) understanding of Sing(L_{3,5}), suggests that L_{3,5} indeed contains linear spaces that are not contained in its singular locus.

References

[1] A. Akopyan and I. Izmestiev. The Regge symmetry, confocal conics, and the Schläfli formula. Bulletin of the London Mathematical Society, 51(5):765-775, 2019.
[2] C. S. Borcea. Point configurations and Cayley-Menger varieties. Preprint, arXiv:math/0207110, 2002. URL https://arxiv.org/abs/math/0207110.
[3] C. S. Borcea. Symmetries of the positive semidefinite cone. Forum Mathematicum, 26(4):983-986, 2014.
[4] W. Bosma, J. Cannon, and C. Playoust. The Magma algebra system I: The user language. Journal of Symbolic Computation, 24(3):235-265, 1997.
[5] M. Boutin and G. Kemper. On reconstructing n-point configurations from the distribution of distances or areas. Adv. in Appl. Math., 32(4):709-735, 2004. ISSN 0196-8858. doi: 10.1016/S0196-8858(03)00101-5.
[6] M. Compagnoni, R. Notari, A. A. Ruggiu, F. Antonacci, and A. Sarti. The algebro-geometric study of range maps. J. Nonlinear Sci., 27(1):99-157, 2017. ISSN 0938-8974. doi: 10.1007/s00332-016-9327-4.
[7] I. Dokmanić, Y. M. Lu, and M. Vetterli. Can one hear the shape of a room: The 2-D polygonal case. In Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on, pages 321-324. IEEE, 2011.
[8] P. Doyle and G. Leibon. 23040 symmetries of hyperbolic tetrahedra. Preprint, arXiv:math/0309187, 2003. URL https://arxiv.org/abs/math/0309187.
[9] C. D'Andrea and M. Sombra. The Cayley-Menger determinant is irreducible for n ≥ 3. Siberian Mathematical Journal, 46(1):71-76, 2005.
[10] I. Gkioulekas, S. J. Gortler, L. Theran, and T. Zickler. Determining generic point configurations from unlabeled path or loop lengths. arXiv preprint arXiv:1709.03936, 2017.
[11] S. J. Gortler and D. P. Thurston. Generic global rigidity in complex and pseudo-Euclidean spaces. In Rigidity and Symmetry, volume 70 of Fields Inst. Commun., pages 131-154. Springer, New York, 2014. doi: 10.1007/978-1-4939-0781-6_8.
[12] S. J. Gortler, L. Theran, and D. P. Thurston. Generic unlabeled global rigidity. In Forum of Mathematics, Sigma, volume 7. Cambridge University Press, 2019.
[13] J. C. Gower. Properties of Euclidean and non-Euclidean distance matrices. Linear Algebra Appl., 67:81-97, 1985. ISSN 0024-3795. doi: 10.1016/0024-3795(85)90187-9.
[14] J. Graver, B. Servatius, and H. Servatius. Combinatorial Rigidity, volume 2 of Graduate Studies in Mathematics. American Mathematical Society, Providence, RI, 1993. ISBN 0-8218-3801-6. doi: 10.1090/gsm/002.
[15] V. Guillemin and A. Pollack. Differential Topology, volume 370. American Mathematical Soc., 2010.
[16] J. Harris. Algebraic Geometry: A First Course, volume 133. Springer Science & Business Media, 2013.
[17] J. Harris and L. W. Tu. On symmetric and skew-symmetric determinantal varieties. Topology, 23(1):71-84, 1984.
[18] J. E. Humphreys. Introduction to Lie Algebras and Representation Theory. Springer-Verlag, New York-Berlin, 1972. Graduate Texts in Mathematics, Vol. 9.
[19] P. Juhás, D. Cherba, P. Duxbury, W. Punch, and S. Billinge. Ab initio determination of solid-state nanostructure. Nature, 440(7084):655-658, 2006.
[20] G. Laman. On graphs and rigidity of plane skeletal structures. J. Engrg. Math., 4:331-340, 1970. ISSN 0022-0833.
[21] D. B. Massey and L. D. Trang. Notes on real and complex analytic and semianalytic singularities. Online notes, 2006.
[22] B. D. McKay and A. Piperno. Practical graph isomorphism, II. Journal of Symbolic Computation, 60:94-112, 2014.
[23] J. S. Milne. Algebraic geometry. Online lecture notes (v5.20), 2009.
[24] J. S. Milne. Lectures on Étale cohomology. Online lecture notes (v2.21), 2013.
[25] J. Milnor. Singular Points of Complex Hypersurfaces. Annals of Mathematics Studies. Princeton University Press, 1968.
[26] J. R. Munkres. Topology. Prentice Hall, 2000.
[27] T. Regge. Symmetry properties of Racah's coefficients. Il Nuovo Cimento (1955-1965), 11(1):116-117, Jan 1959. ISSN 1827-6121. doi: 10.1007/BF02724914.
[28] J. Roberts. Classical 6j-symbols and the tetrahedron. Geometry & Topology, 3(1):21-66, 1999.
[29] I. R. Shafarevich. Basic Algebraic Geometry. Springer-Verlag, Berlin-New York, study edition, 1977. Translated from the Russian by K. A. Hirsch. Revised printing of Grundlehren der mathematischen Wissenschaften, Vol. 213, 1974.
[30] D. Thurston. Unusual symmetries of the Cayley-Menger determinant for the volume of tetrahedra. MathOverflow question (version: 2017-01-15). URL https://mathoverflow.net/q/259664.
[31] A. U. Velten. Super resolution remote imaging using time encoded remote apertures. Technical report, University of Wisconsin-Madison, Madison, WI, 2018.
[32] M. Wendt. Unusual symmetries of the Cayley-Menger determinant for the volume of tetrahedra. MathOverflow answer (version: 2017-01-16). URL https://mathoverflow.net/q/259767.
[33] G. Young and A. S. Householder. Discussion of a set of points in terms of their mutual distances. Psychometrika, 3(1):19-22, 1938.