Representations of stack triangulations in the plane
Thomas Selig
LaBRI, CNRS, Université Bordeaux 1
November 4, 2018
Abstract
Stack triangulations appear as natural objects when defining an increasing family of triangulations by successive additions of vertices. We consider two different probability distributions for such objects. We represent, or "draw", these random stack triangulations in the plane $\mathbb{R}^2$ and study the asymptotic properties of these drawings, viewed as random compact metric spaces. We also look at the occupation measure of the vertices, and show that for these two distributions it converges to some random limit measure.

Keywords:
Stack triangulations; Occupation measure; Limit Theorem.
Consider a rooted triangulation of the plane, and some finite face $f$, say $ABC$, of this triangulation. We insert a vertex $M$ in $f$, and add the three edges $AM$, $BM$, $CM$ to the original triangulation. We obtain a triangulation with two faces more than the original one (the face $f$ has been replaced by three new faces). Thus, starting from a single rooted triangle, after $k$ such insertions, we get a triangulation with $k$ internal vertices (that is, vertices which are not vertices of the original rooted triangle) and $2k+1$ finite faces. The set of triangulations with $k$ internal vertices which can be reached through this growing procedure is denoted $\Delta_k$. We call such triangulations stack triangulations. Note that through this construction we do not obtain the set of all rooted triangulations. This iterative process is demonstrated in Figure 1.

Figure 1: Iterative construction of a stack triangulation

We endow the set $\Delta_k$ with two natural probability distributions:

• The first is the uniform distribution $U^{\Delta}_k$.

• The second distribution $H^{\Delta}_k$ is the distribution induced by the above construction where at each step the face in which we insert the vertex is chosen uniformly at random among all finite faces, independently from the past.

In this paper, rather than look at stack triangulations as maps, that is, up to homeomorphism, we look at particular representations, or drawings, of such objects in the plane, and the geometric properties of such representations. That is, at each insertion of a new vertex, we draw the line segments corresponding to the edges added. We call such representations compact triangulations, and view them as compact subspaces of $\mathbb{R}^2$. The main difference is that while maps are graphs drawn in the plane, they are considered only up to homeomorphism, whereas we are interested in the actual representation.
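The iterative construction above is easy to simulate. The following sketch is ours, not the paper's: it represents a drawing by its list of triangular faces (triples of points), chooses the face to split uniformly at random as in the second distribution, and, for concreteness, always inserts the new vertex at the centroid of the chosen face. The helper names (`split_face`, `grow`) are hypothetical.

```python
import random

# A face is a triple of points (A, B, C); a point is an (x, y) pair.
# Splitting a face at a point M replaces it by the three faces
# MBC, AMC, ABM, matching the convention used later in the paper.

def insert_point(face, p):
    """Point with barycentric coordinates p = (p1, p2, p3) w.r.t. face."""
    (ax, ay), (bx, by), (cx, cy) = face
    p1, p2, p3 = p
    return (p1 * ax + p2 * bx + p3 * cx, p1 * ay + p2 * by + p3 * cy)

def split_face(face, p):
    A, B, C = face
    M = insert_point(face, p)
    return M, [(M, B, C), (A, M, C), (A, B, M)]

def grow(k, rng):
    """Perform k insertions, choosing the face uniformly at each step."""
    faces = [((0.0, 0.0), (1.0, 0.0), (0.5, 3 ** 0.5 / 2))]  # triangle ABC
    vertices = []
    for _ in range(k):
        i = rng.randrange(len(faces))          # uniform face choice
        M, new = split_face(faces.pop(i), (1 / 3, 1 / 3, 1 / 3))
        vertices.append(M)
        faces.extend(new)
    return faces, vertices

faces, vertices = grow(10, random.Random(0))
# After k insertions: k internal vertices and 2k + 1 finite faces.
```

Each step removes one face and adds three, which is exactly why $k$ insertions yield $2k+1$ finite faces.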
We are rather informal here, but will give formal definitions later in the paper, in Section 2.1. We take $A=(0,0)$, $B=(1,0)$, $C=e^{i\pi/3}$ (identifying $\mathbb{C}$ and $\mathbb{R}^2$) to be the three points representing the initial rooted triangle, with $(A,B)$ its root. We start with $T_0 = T = [AB]\cup[BC]\cup[CA]$, and set $D_0=\{T\}$. We denote by $\tilde T$ the filled triangle $T$, that is, the union of $T$ and of the finite connected component of $\mathbb{R}^2\setminus T$. At time 1, we insert a point $M$ somewhere in $\tilde T\setminus T$. We then define $T(M)=T\cup[AM]\cup[BM]\cup[CM]$, and set $D_1=\{T(M);\ M\in\tilde T\setminus T\}$.

Now let $T(M)\in D_1$ for some $M\in\tilde T\setminus T$. Write $T^{(1)}_1(M)=[AB]\cup[BM]\cup[MA]$, $T^{(2)}_1(M)=[BC]\cup[CM]\cup[MB]$, $T^{(3)}_1(M)=[CA]\cup[AM]\cup[MC]$, and similarly $\tilde T^{(i)}_1(M)$ for the corresponding filled triangles. At time 2, we insert a point $N$ somewhere in one of the $\tilde T^{(i)}_1(M)\setminus T^{(i)}_1(M)$'s. We then define
$$T(M,N) := T(M)\cup[XN]\cup[YN]\cup[MN],$$
where
$$(X,Y)=\begin{cases}(A,B) & \text{if } N\in\tilde T^{(1)}_1(M)\setminus T^{(1)}_1(M),\\ (B,C) & \text{if } N\in\tilde T^{(2)}_1(M)\setminus T^{(2)}_1(M),\\ (C,A) & \text{if } N\in\tilde T^{(3)}_1(M)\setminus T^{(3)}_1(M),\end{cases}$$
and finally set
$$D_2 = \Big\{T(M,N);\ M\in\tilde T\setminus T \text{ and } N\in\bigcup_{i=1}^{3}\big(\tilde T^{(i)}_1(M)\setminus T^{(i)}_1(M)\big)\Big\}.$$
Figure 2 illustrates this initial construction.

Iterating this construction by choosing at each step some triangular face of our drawing and splitting it, we obtain representations of stack triangulations in the plane, and call these compact triangulations. Denote $D_k$ the set of such objects with $k$ internal vertices (that is, after $k$ successive insertions of vertices), and for $m\in D_k$ write $\mathcal{V}(m)$ for its set of internal vertices (viewed as a set of points in the plane). Finally, for $m\in D_k$ we define the occupation measure of $m$ by
$$\mu(m) := \frac{1}{k}\sum_{x\in\mathcal{V}(m)}\delta_x, \qquad (1)$$
where $\delta_x$ stands for the Dirac mass at $x$.
Figure 2: Construction of $D_0$, $D_1$ and $D_2$

We are interested here in the case where the successive insertions of vertices are done at random. We suppose that all random variables in this paper are defined on some probability space $(\Omega,\mathcal{F},\mathbb{P})$. We denote $\mathbb{E}$ the expected value, and $\mathrm{Var}$ the variance. We consider a probability distribution $\nu$ on $\mathbb{R}^3$ such that if $P=(P_1,P_2,P_3)$ has law $\nu$ then a.s. $P_i>0$ for $i\in\{1,2,3\}$ and $P_1+P_2+P_3=1$. We suppose that each insertion of a vertex $M$ in a face $QRS$ is done according to $\nu$, that is, we take $M$ to have barycentric coordinates $(Q,P_1),(R,P_2),(S,P_3)$, where $P=(P_1,P_2,P_3)$ has law $\nu$, independently from all previous insertions. Now the two distributions $U^{\Delta}_k$ and $H^{\Delta}_k$ on $\Delta_k$ introduced at the start of the section induce probability distributions $U^{\nu}_k$ and $H^{\nu}_k$ on $D_k$. In words, they are the distributions of the drawings of stack triangulations with distribution $U^{\Delta}_k$ and $H^{\Delta}_k$, where the insertions of vertices are made according to $\nu$, independently from each other, and independently from the choice of the underlying stack triangulation. The object of the paper is to study the asymptotic behaviour of these two distributions.

In Section 2, we formally define compact triangulations. We then enrich the classical bijection between stack triangulations and ternary trees (see for instance [1], Proposition 1) to encode compact triangulations. For this, in Section 2.2, we introduce the notion of coordinate-labelled ternary trees. These are ternary trees with labels at each vertex, which code compact triangulations via a bijection we establish in Theorem 2.8. We end the section by formally defining the distributions $U^{\nu}_k$ and $H^{\nu}_k$ on $D_k$.

In Section 3, we study the asymptotic behaviour of the uniform distribution $U^{\nu}_n$ as $n\to\infty$. The main results are:

• The weak convergence of the occupation measure as defined in (1) towards a Dirac mass at a random position (Theorem 3.1).
• The weak convergence of the distribution $U^{\nu}_n$ towards a distribution on compact subspaces of $\mathbb{R}^2$ (Theorem 3.9).

Section 3.1 is thus dedicated to the statement and proof of Theorem 3.1. The statement is split into two parts. The first part states the convergence in distribution of an internal vertex of $m_n$ chosen uniformly at random, where $m_n$ has distribution $U^{\nu}_n$, to some limit vertex. Though this is weaker than the second part, it is a key ingredient in its proof, and thus we choose to state it separately. In Section 3.2 we introduce the notion of local convergence for trees (Definition 3.13).

¹ This is the notion of splitting law, defined formally in Section 2.1.
In Theorem 3.15, we show that the bijection established in Theorem 2.8, which maps a coordinate-labelled tree to a compact triangulation, has a continuity property with respect to this topology of local convergence, from which we infer Theorem 3.9.

Finally, in Section 4, we study the asymptotic behaviour of the occupation measure under $H^{\nu}_n$. The key ingredient is Poisson-Dirichlet fragmentation, which allows us to view the trees corresponding to the compact triangulations via Theorem 2.8 as the underlying tree of a certain fragmentation tree (Theorem 4.2). We then show the convergence of the occupation measure as defined in (1) to some (random) limit measure $\mu$ (Theorem 4.5). In Section 4.3 we study the properties of $\mu$. We show (Proposition 4.8) that a.s. $\mu$ has no atom, and that it is supported on a set whose Hausdorff dimension is at most $\frac{2}{3}\log(3)$ (Theorem 4.10).

Motivation for this work stems from the paper by Bonichon et al. [7], in which the authors look at convex straight-line drawings of triangulations, and establish bounds for the minimal grid size necessary for these drawings, with the constraint that all vertices are located at integer grid points. More precisely, they show that to draw any triangulation with $n$ faces, a grid of size $(n-C)\times(n-C)$ (for some constant $C$) is sufficient, giving a constructive proof of this result by establishing an algorithm for drawing any triangulation. The aim of this paper is to provide an answer to the question: what do these drawings look like?

More specifically, we aim to explore an approach to the convergence of maps which differs from the traditional combinatorial one. Indeed, maps are embeddings of graphs, but in the combinatorial approach these are viewed up to homeomorphism, and equipped with the graph distance, that is, every edge is given the same length.
Concerning this approach, we cite the groundbreaking work by Schaeffer in his thesis [15], where he establishes a crucial bijection between maps and a class of labelled trees, as well as the more recent work by Le Gall [13] and Miermont [14], who showed (separately, using different techniques) that uniform quadrangulations with $n$ faces, renormalised so that every edge has length $Cn^{-1/4}$ (for some constant $C$), converge in distribution to a continuous limit object called the Brownian map. In this paper, however, we look at the convergence of the embeddings themselves, viewed as (random) compact spaces. This approach is analogous to the work of Curien and Kortchemski [9]. In that paper, the authors showed the universality of the Brownian triangulation introduced by Aldous [4], in that it is the limit of a number of discrete families called non-crossing plane configurations, such as dissections, triangulations, and non-crossing trees of the regular $n$-gon. As mentioned, Curien and Kortchemski view non-crossing plane configurations as random compact subspaces of the unit disk, and it is these compact spaces which converge to the limit object.

In this paper, we also study the asymptotic behaviour of the occupation measure, as defined in (1). Similar work includes the paper by Fekete [10] on branching random walks. In that paper, he considers branching random walks where the underlying tree is a binary search tree (this is related to our distribution $H^{\nu}_n$). He shows that the occupation measure converges weakly to a limit measure which is deterministic. More work concerning the study of random measures similar to ours can be found in [3]. In that paper, Aldous proposes a natural model for random continuous "distributions of mass", called the Integrated super-Brownian Excursion (ISE), which is the (random) occupation measure of the Brownian snake with lifetime process the normalised Brownian excursion.
ISE is defined using random branching structures, and appears to be the continuous limit of occupation measures of several discrete structures.

Finally, let us mention that the combinatorial aspect of stack triangulations has been extensively studied, notably by Albenque and Marckert [1], and their paper will therefore be of great use to us. The authors studied both the uniform distribution $U^{\Delta}_k$ and the other distribution $H^{\Delta}_k$. More precisely, they showed that:

• For the topology of local convergence, $U^{\Delta}_n$ converges weakly to a distribution on the set of infinite maps.

• For the Gromov-Hausdorff distance, with the normalising factor $n^{-1/2}$, a map with the uniform distribution $U^{\Delta}_n$ converges weakly to the continuum random tree introduced by Aldous [2].

• Under the distribution $H^{\Delta}_n$, the distance between random points rescaled by $\log n$ converges to 1 in probability.

In this section we code compact triangulations, that is, the representations of triangulations in the plane, by some labelled trees. There are two main ideas in this coding. First, there is the combinatorial bijection between the discrete objects: stack triangulations (viewed up to homeomorphism) and ternary trees. There is a well-known bijection which maps internal vertices of the triangulation to internal nodes of the tree, and faces of the triangulation to leaves of the tree (see for instance [1], Proposition 1, and references therein). We then enrich this bijection to include the drawing of the triangulation by adding labels to the tree: these labels correspond to the barycentric coordinates of the vertices of the triangulation.
Here we build formally the set $D_k$ of compact triangulations with $k$ internal vertices. The construction is done by induction, and is similar to the construction of stack triangulations. This allows us to observe the tree-like structure of these objects. During the construction, we will define the various notions necessary for the encoding discussed above. Set, as in the introduction, $A=(0,0)$, $B=(1,0)$, $C=e^{i\pi/3}$ to be the three points of the original triangle, and define $T=[AB]\cup[BC]\cup[CA]$. Now define $D_0=\{T\}$, and set $\mathcal{V}(T)=\emptyset$. The set $\mathcal{V}(T)$ will be the set of internal vertices of $T$. Now assume we have constructed $D_k$ for some $k\ge 0$, such that $D_k$ is a set of compact subspaces of $\mathbb{R}^2$ and any $m\in D_k$ satisfies the following properties:

1. The compact space $m$ is the union of line segments in the plane.

2. There are exactly $2k+1$ finite connected components of $\mathbb{R}^2\setminus m$, and these are all interiors of triangles. Let $F(m)$ be the set of these connected components, and call the elements of $F(m)$ the faces of $m$. For $f\in F(m)$ we define $(A_f,B_f,C_f)$ as the three points of the triangle $f$. We can in fact define these points non-ambiguously as follows.

• $X_f = X$ for $X\in\{A,B,C\}$, if $f$ is the interior of the original triangle $T$.

• If a triangle $f$ is split into three triangles $f_1,f_2,f_3$ by adding a point $M$ in its interior, and $f$ is defined by the three points $A_f,B_f,C_f$, then $M = A_{f_1} = B_{f_2} = C_{f_3}$, with the other two vertices of each triangle $f_i$ unchanged (that is, $B_{f_1}=B_f$, $C_{f_1}=C_f$, and so forth). This is illustrated in Figure 3 below.

3. Finally, assume that for any $m\in D_k$ we have defined a set $\mathcal{V}(m)$ of $k$ points of $\tilde T$, which are the $k$ points inserted at each step of the construction of $m$.

Note that these properties are all satisfied for $k=0$.

Figure 3: Ordering the vertices of a triangle

We now construct the set $D_{k+1}$. First, let
$$\dot D_k = \{(m,f);\ m\in D_k,\ f\in F(m)\}$$
be the set of compact triangulations with a marked face. Define a map $I$ from $\dot D_k$ onto the set of compact subspaces of $\mathbb{R}^2$ as follows. Let $(m,f)\in\dot D_k$, and let $(A_f,B_f,C_f)$ be the three (ordered) points of $f$. For any point $M$ in the face $f$ we define
$$m' = I_M(m,f) := m\cup[A_f M]\cup[B_f M]\cup[C_f M],$$
that is, the space $m$ with those three new line segments added, connecting the points of the face $f$ with the inserted vertex $M$. The map $I_M$ is illustrated in Figure 4. We see that there are exactly $2k+3$ finite connected components of $\mathbb{R}^2\setminus m'$, and these are all interiors of triangles (we have replaced one of them, $f$, by three new ones).
We also set
$$\mathcal{V}(m') = \mathcal{V}(m)\cup\{M\}, \qquad (2)$$
and thus the set $\mathcal{V}(m')$ is a set of $k+1$ points of $\tilde T$: it is the set of the internal vertices which define the faces of $m'$. Finally, we can define
$$D_{k+1} := \left\{ I_M(m,f);\ (m,f)\in\dot D_k,\ M\in f \right\}$$
to be the image of this map. In words, it is the set of "drawings" of stack triangulations with $k+1$ internal vertices, with edges included.

Definition 2.1.
Let $k\ge 0$. For $m\in D_k$, we call the elements of $\mathcal{V}(m)$ (where $\mathcal{V}(m)$ is defined step by step by (2)) the internal vertices of $m$. The set $D_k$ is called the set of compact triangulations with $k$ internal vertices. Finally, we denote
$$D = \bigcup_{k\ge 0} D_k$$
the set of compact triangulations.

Figure 4: The insertion map $I_M$

Definition 2.2.
Let $m\in D$. We define the occupation measure of $m$ by
$$\mu(m) = \frac{1}{|\mathcal{V}(m)|}\sum_{x\in\mathcal{V}(m)}\delta_x. \qquad (3)$$
This is a probability measure on $\mathbb{R}^2$.

Note that $D_k$ is a set of compact subspaces of $\mathbb{R}^2$. We aim to introduce some probability laws on these sets (as explained in the introduction), and for this we need to equip them with a $\sigma$-field. We first recall the definition of the Hausdorff distance for compact spaces.

Definition 2.3.
Let $(E,d)$ be a compact metric space. For $A\subseteq E$ and $\varepsilon>0$, define the $\varepsilon$-neighbourhood of $A$ as the set of points of $E$ whose distance to $A$ is less than $\varepsilon$, that is,
$$V_\varepsilon(A) = \{x\in E,\ d(x,A)<\varepsilon\}.$$
Then for two compact sets
$A,B\subseteq E$, the Hausdorff distance between $A$ and $B$ is defined by
$$d_H(A,B) = \inf\{\varepsilon>0,\ A\subseteq V_\varepsilon(B) \text{ and } B\subseteq V_\varepsilon(A)\}.$$
This defines a distance on the set of compact subspaces of $E$.

We equip the space of compact subspaces of $\tilde T$ with the Hausdorff distance. It is a well-known topological fact that this makes it a complete metric space (see for instance [8], Section 7.3.1, p. 252). In fact, $(D_k, d_H)$ is a compact metric space. We equip the sets $D_k$ with the corresponding Borel $\sigma$-algebra.

We now encode compact triangulations by certain labelled trees. We begin with the purely combinatorial aspect. Let
$$W := \bigcup_{n\ge 0} \mathbb{N}^n$$
be the set of all words on $\mathbb{N} = \{1,2,\dots\}$, and by convention set $\mathbb{N}^0 = \{\emptyset\}$. If $u=(u_1,\dots,u_n)\in W$ we write $|u|=n$ and call this the height of $u$. Also, if we take two words $u=(u_1,\dots,u_n), v=(v_1,\dots,v_m)\in W$, we write $uv=(u_1,\dots,u_n,v_1,\dots,v_m)$ for the concatenation of $u$ and $v$. By convention $u\emptyset = \emptyset u = u$. A planar tree is a subset $t\subseteq W$ such that:

1. $\emptyset\in t$.

2. If $u(j)\in t$ for some $u\in W$ and $j\in\mathbb{N}$, then $u\in t$. The notation $u(j)$ is used here to mark the fact that we are concatenating the words $u$ and $(j)$, the latter being written so as to differentiate it from the letter $j$.

3. For every $u\in t$ there exists $k_u(t)\in\{0,1,2,\dots\}$ such that $u(j)\in t$ if and only if $1\le j\le k_u(t)$. The integer $k_u(t)$ corresponds to the number of children (or descendants) of $u$ in $t$.

We will denote by $\mathcal{U}$ the set of planar trees. If $t\in\mathcal{U}$ is a planar tree, its height $h(t)$ is defined by $h(t) := \sup\{|u|;\ u\in t\}\in[0,\infty]$. If $u\in t$ has no child (i.e. $k_u(t)=0$) we say that $u$ is a leaf of $t$. Any vertex which isn't a leaf is called an internal node of $t$. We denote $\mathring{t}$ the set of internal nodes of a tree $t$. If $t$ is a tree, and $u,v$ are in $t$, we write $u\wedge v$ for the highest common ancestor of $u$ and $v$, i.e.
the element of maximal height of the set $\{w\in t;\ \exists (u',v'),\ u=wu' \text{ and } v=wv'\}$. If $u\in t$, we let $\theta_u(t) = \{v\in W;\ uv\in t\}$. This is the subtree of $t$ which has $u$ as a root. A ternary tree $t$ is a planar tree such that $\forall u\in t$, $k_u(t)\in\{0,3\}$, i.e. every internal node has exactly three children. We denote $\mathcal{T}$ the set of ternary trees, and henceforth we will simply call them trees. We denote $T_\infty$ the infinite complete ternary tree, that is,
$$T_\infty = \bigcup_{n\ge 0}\{1,2,3\}^n.$$
It is a well-known fact that $\mathcal{T}$ is mapped bijectively to the set of stack triangulations. However, compact triangulations contain more information than stack triangulations, since compact triangulations contain the information of where each internal vertex is placed. The additional information will be put at each vertex $u$ of the associated ternary tree, and will be the barycentric coordinates of the point associated with $u$, at the time it has been inserted. The idea is thus to associate with a point $M$ its triplet $\mathcal{C}(M)$ of barycentric coordinates with respect to $(A,B,C)$, taken to be with sum equal to 1. As such, $\mathcal{C}(A) = [1,0,0]$, $\mathcal{C}(B) = [0,1,0]$, $\mathcal{C}(C) = [0,0,1]$. If $M\in\tilde T$ is given as in Figure 5, then $\mathcal{C}(M) = (P_1,P_2,P_3)$, where $(P_1,P_2,P_3)$ are the respective ratios of the areas of the triangles $MBC$, $AMC$, $AMB$ over the area of
$ABC$.

Figure 5: The splitting of a triangle via $P$

Write
$$V := \{(x_1,x_2,x_3)\in\mathbb{R}^3;\ x_1,x_2,x_3\ge 0,\ x_1+x_2+x_3=1\}$$
for the 3-dimensional simplex, so that any point $M\in\tilde T$ corresponds bijectively to its (normalised) barycentric coordinates in $V$. Also, let
$$V^* := \{(x_1,x_2,x_3)\in\mathbb{R}^3;\ x_1,x_2,x_3>0,\ x_1+x_2+x_3=1\}$$
be the 3-dimensional simplex with its boundary removed.
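The area-ratio description of barycentric coordinates above can be computed directly from signed areas. The sketch below is ours (the function names `signed_area` and `barycentric` are hypothetical, not the paper's notation):

```python
# Barycentric coordinates of M w.r.t. triangle ABC, as the ratios of the
# areas of MBC, AMC, AMB over the area of ABC (signed areas via the
# cross product, so orientation cancels in the ratios).
def signed_area(P, Q, R):
    return 0.5 * ((Q[0] - P[0]) * (R[1] - P[1])
                  - (Q[1] - P[1]) * (R[0] - P[0]))

def barycentric(M, A, B, C):
    total = signed_area(A, B, C)
    return (signed_area(M, B, C) / total,
            signed_area(A, M, C) / total,
            signed_area(A, B, M) / total)

A, B, C = (0.0, 0.0), (1.0, 0.0), (0.5, 3 ** 0.5 / 2)
coords = barycentric((0.5, 0.2), A, B, C)
# coords sums to 1, and lies in V* exactly when M is interior to ABC.
```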
Definition 2.4.
1. A fragmentation-labelled tree is a pair $(t,(P(u))_{u\in\mathring t})$, $t\in\mathcal{T}$, such that for any $u\in\mathring t$, $P(u)\in V^*$; that is, a tree $t$ and a set of triplets $P(u)$ indexed by the internal nodes of $t$. $P(u)$ is called the splitting triplet at $u$. We denote $\mathcal{FT}_n$ the set of fragmentation-labelled trees with $n$ internal vertices, and $\mathcal{FT} = \bigcup_n \mathcal{FT}_n$.

2. A coordinate-labelled tree is a pair $(t,(\lambda(u))_{u\in t})$, $t\in\mathcal{T}$, with labels $\lambda(u)$ at each node $u$ such that:

(a) For each leaf $l$ of the tree $t$, we have $\lambda(l)\in V^3$, and write $\mathcal{C}(l) = \lambda(l)$. The elements of $\mathcal{C}(l)$ are called the coordinates of $l$.

(b) For each internal node $u$ we have $\lambda(u) = (\mathcal{C}(u), P(u))$, with $\mathcal{C}(u)\in V^3$, and $P(u)\in V^*$. The elements of $\mathcal{C}(u)$ are called the coordinates of $u$, and $P(u)$ is called the splitting triplet at $u$.

(c) The coordinates of the root are $\mathcal{C}(\emptyset) = ([1,0,0],[0,1,0],[0,0,1])$.

(d) If $\mathcal{C}(u) = (C_1,C_2,C_3)$ and $P(u) = (P_1,P_2,P_3)$, then for $i\in\{1,2,3\}$ we have $\mathcal{C}(u(i)) = (\tilde C_j)_{j\in\{1,2,3\}}$, where $\tilde C_j$ is equal to $\mathcal{C}(u).P(u) := P_1 C_1 + P_2 C_2 + P_3 C_3$ if $j=i$, and $C_j$ otherwise. This property is illustrated in Figure 6.

We denote $\mathcal{CT}_n$ the set of coordinate-labelled trees with $n$ internal vertices, and likewise $\mathcal{CT} = \bigcup_n \mathcal{CT}_n$. For $t^\bullet\in\mathcal{CT}$ we will denote $p(t^\bullet)\in\mathcal{T}$ the underlying tree.

Figure 6: The local labelling rule (d) for coordinate-labelled trees
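Rule (d) is a purely local update, which the following sketch makes explicit (our naming; coordinate triplets are tuples of three simplex points, indices $i\in\{1,2,3\}$ as in the definition):

```python
# Rule (d): child u(i) keeps the parent's coordinate triplets, except
# that the i-th one is replaced by the barycentric combination
# C(u).P(u) = P1*C1 + P2*C2 + P3*C3 (the inserted point's coordinates).
def dot(C, P):
    C1, C2, C3 = C
    P1, P2, P3 = P
    return tuple(P1 * C1[j] + P2 * C2[j] + P3 * C3[j] for j in range(3))

def child_coordinates(C, P, i):
    """Coordinates C(u(i)) for i in {1, 2, 3}."""
    new = list(C)
    new[i - 1] = dot(C, P)
    return tuple(new)

root = ((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0))  # rule (c)
c1 = child_coordinates(root, (1 / 3, 1 / 3, 1 / 3), 1)
```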
Remark 2.5.
The condition $P(u)\in V^*$ (as opposed to $V$) means that the insertions of new vertices at each step are proper insertions, that is, the new point is added in the interior of a face and not on its border. This is crucial for Theorem 2.8.

Remark 2.6.
We can map a fragmentation-labelled tree $(t,(P(u))_{u\in\mathring t})\in\mathcal{FT}_k$ to a coordinate-labelled tree $t^\bullet\in\mathcal{CT}_k$ by setting $p(t^\bullet)=t$, keeping for every internal node $u$ of $t^\bullet$ the same splitting triplet $P(u)$ as in $t$, and filling in the remaining triplets of coordinates using rules (c) and (d). This gives us a bijective mapping which we denote $\Phi_k : \mathcal{FT}_k\to\mathcal{CT}_k$.
F T and CT . For this, weintroduce a σ -field on the sets F T k and CT k , with the help of a distance.9 efinition 2.7. Let k ≥ . The map d C : CT k × CT k → R + defined by d C ( t • , t • ) = p ( t • ) (cid:54) = p ( t • ) + p ( t • )= p ( t • ) (cid:18)(cid:18) max u ∈ p ( t • ) ,u ∈ p ( t • ) d ( λ ( u ) , λ ( u )) (cid:19) ∧ (cid:19) , where λ ( u ) is the label of a node u and d represents any distance on the set of labels (seen as asubspace of R i for some i ), is a distance on CT k (for usual reasons). We call it the coordinate-labeldistance . We define in an analogous manner a distance d F on F T k . The spaces CT k and F T k for k ≥ σ -algebras. We can now state the main theorem of thissection. Theorem 2.8.
Let $n\ge 0$. Equip the set $\mathcal{CT}_n$ with the coordinate-label distance $d_C$, and $D_n$ with the Hausdorff distance $d_H$. Then there exists a homeomorphism
$$\Psi_n : \mathcal{CT}_n \to D_n, \qquad t^\bullet \mapsto m,$$
such that:

1. Each internal node $u$ of $t^\bullet$ corresponds bijectively to an internal vertex $M$ of $m$. Moreover, if $\lambda(u) = (\mathcal{C}(u), P(u))$, then the barycentric coordinates of the vertex $M$ with respect to $(A,B,C)$ are given by $\mathcal{C}(u).P(u)$.

2. Each leaf $l$ of $t^\bullet$ corresponds bijectively to a face $f$ of $m$. Moreover, if $\mathcal{C}(l) = (C_1,C_2,C_3)$ is the label at $l$ and the face $f$ is defined by the three vertices $(A_f,B_f,C_f)$, then $\mathcal{C}(A_f) = C_1$, $\mathcal{C}(B_f) = C_2$, $\mathcal{C}(C_f) = C_3$.

Remark 2.9.
Note that the spaces which are in one-to-one correspondence are both infinite, so that it is not the existence of the bijection as such which is of interest, but the fact that via this bijection all relevant information on a compact triangulation can be read in a coordinate-labelled tree. The measurability of the bijection will allow us to transport distributions.
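To make this concrete, here is a toy decoding sketch (ours, and much simplified: the paper's $\Psi_n$ is constructed in the proof below). It takes a ternary tree given as a dictionary mapping internal nodes (words over $\{1,2,3\}$, as tuples) to splitting triplets, and recovers the planar positions of the internal vertices:

```python
# Decode a fragmentation-labelled tree into vertex positions, tracking
# for each node the three corners of its triangle (rules (c) and (d)).
def decode(internal, A=(0.0, 0.0), B=(1.0, 0.0), C=(0.5, 3 ** 0.5 / 2)):
    corners = {(): (A, B, C)}            # root triangle, rule (c)
    positions = {}
    for u in sorted(internal, key=len):  # parents before children
        Q, R, S = corners[u]
        p1, p2, p3 = internal[u]         # splitting triplet P(u)
        M = (p1 * Q[0] + p2 * R[0] + p3 * S[0],
             p1 * Q[1] + p2 * R[1] + p3 * S[1])
        positions[u] = M
        # rule (d): child i sees M in place of the i-th corner
        corners[u + (1,)] = (M, R, S)
        corners[u + (2,)] = (Q, M, S)
        corners[u + (3,)] = (Q, R, M)
    return positions

# Two internal nodes: the root, then its first child, both split centrally.
pos = decode({(): (1 / 3, 1 / 3, 1 / 3), (1,): (1 / 3, 1 / 3, 1 / 3)})
```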
Definition 2.10.
For a node $u$ in a coordinate-labelled tree $t^\bullet\in\mathcal{CT}$, we define $T(u)$ to be the triangle whose three points are given by the triplet of coordinates $\mathcal{C}(u)$, and $\tilde T(u)$ for the filled triangle. This is illustrated in Figure 7.

We proceed by induction on $n$. We follow a similar path to the proof of Proposition 1 in [1], by constructing the bijection iteratively. For $n=0$ there is no work to do. We have $D_0 = \{T\}$, $\mathcal{CT}_0 = \{(\{\emptyset\}, \mathcal{C}(\emptyset))\}$. By property (c) of Definition 2.4 the coordinates $\mathcal{C}(\emptyset)$ satisfy part 2 of the theorem, as desired.

Now assume we have constructed $\Psi_n$ as in the statement of Theorem 2.8, for some $n\ge 0$. Let $t^\bullet\in\mathcal{CT}_{n+1}$. Denote $t = p(t^\bullet)$ and choose a node $u\in t$ such that $u(1), u(2), u(3)$ are leaves of $t$. Now define $t' := t\setminus\{u(1),u(2),u(3)\}$, and $t'^\bullet$ to be the coordinate-labelled tree such that its labels coincide with those of $t^\bullet$ except at $u$, where we remove the splitting triplet $P(u) = (P_1,P_2,P_3)$, since $u$ is now a leaf of $t'$. Thus $t'^\bullet\in\mathcal{CT}_n$ and by induction we can define $m' := \Psi_n(t'^\bullet)\in D_n$. Let $f$ be the face of $m'$ corresponding to the leaf $u$ via $\Psi_n$. Write, as in the statement of the theorem, $(A_f,B_f,C_f)$ for the three vertices defining $f$.

Now let $M$ be the point in $f$ whose barycentric coordinates with respect to $(A_f,B_f,C_f)$ are $(P_1,P_2,P_3)$, and define $m = \Psi_{n+1}(t^\bullet) = m'\cup[A_f M]\cup[B_f M]\cup[C_f M]$. It follows that the barycentric coordinates of $M$ with respect to $(A,B,C)$ are $P_1\,\mathcal{C}(A_f) + P_2\,\mathcal{C}(B_f) + P_3\,\mathcal{C}(C_f) = \mathcal{C}(u).P(u)$, by property 2 of the induction hypothesis applied to $u$ in $t'$. Thus, by mapping $u$ to $M$ and all other internal nodes of $t^\bullet$ to their corresponding internal vertex via $\Psi_n$, we see that $\Psi_{n+1}$ satisfies condition 1 of Theorem 2.8. To satisfy condition 2, we map all the leaves of $t'$ except $u$ to their corresponding faces via $\Psi_n$, noting that these faces are untouched by $\Psi_{n+1}$, so that the condition remains satisfied. Finally, we map the leaves $u(1), u(2), u(3)$ respectively to the faces $MB_fC_f$, $A_fMC_f$, $A_fB_fM$ of $m$. Because of the local growth property (d) of Definition 2.4(2), we see that condition 2 is satisfied for these leaves. This iterative construction is illustrated in Figure 8.

Two points remain. Firstly, that $\Psi_{n+1}$ is a bijection. But this follows from our construction and the definition of $D_n$.
Indeed, $D_{n+1}$ is obtained from $D_n$ through the insertion of a vertex anywhere in a given face of an element of $D_n$, while we have a similar iterative structure for coordinate-labelled trees. It is important here that each vertex is inserted in the interior of some face, and not on its boundary (since the splitting triplets are in $V^*$ and not just in $V$), so that the face it is inserted in is defined non-ambiguously.

The final point is to prove that $\Psi_n$ is bicontinuous with respect to the given distances. For this, we fix some $m\in D_n$ and $\varepsilon>0$. Write $t^\bullet = \Psi^{-1}_n(m)$. Now there exists $\eta>0$ such that for any
$$(\mathcal{C}=(C_1,C_2,C_3),\ P=(P_1,P_2,P_3)),\ (\mathcal{C}'=(C'_1,C'_2,C'_3),\ P'=(P'_1,P'_2,P'_3))\in V^3\times V^*,$$
if $\|(\mathcal{C},P)-(\mathcal{C}',P')\|<\eta$ then $\|\mathcal{C}.P-\mathcal{C}'.P'\|<\varepsilon$. We may suppose that $\eta<1$, so that if $d_C(t'^\bullet, t^\bullet)<\eta$ we have $p(t'^\bullet) = p(t^\bullet)$. This implies that if $d_C(t'^\bullet,t^\bullet)<\eta$, then for any vertex $u\in p(t^\bullet)$ the corresponding vertex in $m$ is at distance less than $\varepsilon$ from the corresponding vertex in $m' := \Psi_n(t'^\bullet)$. As a consequence, we get that $d_H(m',m)<\varepsilon$, and the continuity of $\Psi_n$ is proved.

Figure 8: Illustrating the growth property of the bijection $\Psi_n$

The bicontinuity stems immediately from the fact that $\Psi_n$ is a continuous bijection between compact metric spaces.
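The Hausdorff distance driving this continuity argument can be computed explicitly for finite point sets (a discrete stand-in for the compact drawings). For compact sets, the infimum-over-$\varepsilon$ definition of Definition 2.3 coincides with the usual sup-inf formula used below; the helper name `hausdorff` is ours.

```python
# Hausdorff distance between two finite planar point sets A and B:
# max over the two directed distances sup_a inf_b d(a, b).
def hausdorff(A, B):
    def d(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    dAB = max(min(d(a, b) for b in B) for a in A)
    dBA = max(min(d(a, b) for a in A) for b in B)
    return max(dAB, dBA)

A = [(0.0, 0.0), (1.0, 0.0)]
B = [(0.0, 0.0), (1.0, 1.0)]
```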
So far, we have worked in a purely deterministic setting. In this paragraph, we formally introduce the two probability distributions on $D_k$ which will be of interest to us.

Definition 2.11. A splitting law $\nu$ is a distribution on $\mathbb{R}^3$ such that if $P=(P_1,P_2,P_3)$ is distributed according to $\nu$, then:

1. For any permutation $\sigma$ on $\{1,2,3\}$, $(P_{\sigma(1)},P_{\sigma(2)},P_{\sigma(3)})$ has the same distribution as $(P_1,P_2,P_3)$, that is, the law $\nu$ is symmetric.

2. For any $i\in\{1,2,3\}$, $P_i>0$ a.s.

3. $P_1+P_2+P_3=1$ a.s.

We denote $\mathcal{M}_S(V^*)$ the set of splitting laws, and say that a random variable $P=(P_1,P_2,P_3)$ is a splitting ratio if its distribution is a splitting law.

Fix some $n\ge 0$. We define two probability distributions on $\mathcal{T}_n$:

• The first, which we denote $U^{\mathcal{T}}_n$, is the uniform distribution on $\mathcal{T}_n$.

• The second, which we denote $H^{\mathcal{T}}_n$, is defined by induction. For $n=0$, the distribution $H^{\mathcal{T}}_0$ takes value the unique tree reduced to its root $\{\emptyset\}$ a.s. Now suppose we have defined a distribution $H^{\mathcal{T}}_{n-1}$ on $\mathcal{T}_{n-1}$. Choose $t\in\mathcal{T}_{n-1}$ according to $H^{\mathcal{T}}_{n-1}$. Conditionally to $t$, choose one of its $2n-1$ leaves uniformly at random and give it three children; the resulting random tree has distribution $H^{\mathcal{T}}_n$ on $\mathcal{T}_n$. Note that the weight of a tree is proportional to the number of histories leading to its construction (starting from a single root node).

We say that a random variable $t\in\mathcal{T}$ is an increasing tree if it has distribution $H^{\mathcal{T}}_n$ for some $n\ge 0$.

Definition 2.12.
Let $\nu\in\mathcal{M}_S(V^*)$ be a splitting law, and $(P(u))_{u\in T_\infty}$ be an i.i.d. sequence of random variables with law $\nu$. Let $n\ge 0$.

1. We denote $U^{\mathcal{T},\nu}_n$ (resp. $H^{\mathcal{T},\nu}_n$) the distribution of $t^{P,\bullet}_n := \Phi_n(t_n, (P(u))_{u\in\mathring t_n})$, where $t_n\in\mathcal{T}_n$ is independent from $(P(u))_{u\in T_\infty}$ and has distribution $U^{\mathcal{T}}_n$ (resp. $H^{\mathcal{T}}_n$), and $\Phi_n$ is as in Remark 2.6.

2. We define the distributions $U^{\nu}_n$ and $H^{\nu}_n$ to be the respective images of the distributions $U^{\mathcal{T},\nu}_n$ and $H^{\mathcal{T},\nu}_n$ via the bijection $\Psi_n$ of Theorem 2.8. These are therefore two probability distributions on $D_n$.

In this section, we study the asymptotic behaviour of the distribution $U^{\nu}_n$. That is, we look at random compact triangulations where the underlying stack triangulation is chosen uniformly, and the insertion of vertices is done according to some splitting law $\nu\in\mathcal{M}_S(V^*)$, independently from the choice of the underlying triangulation. We study both the occupation measure and the asymptotic behaviour of the distribution itself.

Theorem 3.1.
Let $(m_n)_n$ be a sequence of random compact triangulations, where $m_n$ has distribution $U^{\nu}_n$. Recall (Definition 2.1) that $\mathcal{V}(m_n)$ denotes the set of internal vertices of $m_n$. For every $n$, conditionally to $m_n$, let $U_n$ be a vertex of $\mathcal{V}(m_n)$, chosen uniformly at random. Finally, let $\mu_n$ be the occupation measure of $m_n$, as in (3). Then:

1. The random point $U_n$ converges in distribution to some random limit point $U_\infty$ as $n$ tends to infinity.

2. We have
$$\mu_n \overset{(d)}{\longrightarrow} \delta_{U_\infty} \quad \text{as } n\to\infty,$$
where the convergence is in distribution on the space of probability measures on the filled triangle $\tilde T$.

Theorem 3.1 says that in the uniform model, all the vertices of the compact triangulation are at the same place, except for a portion which tends to 0. Although point 2 is stronger than point 1, we state both here, as point 1 will be heavily used in the proof of point 2. Figure 9 shows a simulation of the vertices of the map $m_n$, where we take for $\nu$ the special case $\nu = \delta_{(1/3,1/3,1/3)}$, for $n$ large.

We begin by recalling an elementary fact about uniform ternary trees.

Figure 9: A simulation of the set $\mathcal{V}(m_n)$, where $m_n$ has distribution $U^{\nu}_n$, for $n$ large

Fact 3.2.
Take $U_n$ as in the statement of Theorem 3.1, and write $U'_n$ for the corresponding node in the coordinate-labelled tree $t^\bullet_n := \Psi^{-1}_n(m_n)$, as in Theorem 2.8. Write $U'_n = (u_1, u_2, \dots, u_h)$, where $h$ is the height of $U'_n$. Then conditionally to $h$, the random variables $u_1, u_2, \dots, u_h$ are i.i.d., and are uniformly distributed on $\{1,2,3\}$.

Proof. By construction of the law $U^{\nu}_n$, the tree $t_n := p(t^\bullet_n)$ follows the uniform distribution on $\mathcal{T}_n$. We now use the following argument. If $t$ is a random ternary tree, chosen uniformly among trees of a given size, then conditionally to their sizes the subtrees at the root $\theta_{(1)}(t), \theta_{(2)}(t), \theta_{(3)}(t)$ are independent, and also follow the uniform distribution. It immediately follows that the $(u_i)$ are i.i.d., and the fact that the law of $u_1$ is uniform on $\{1,2,3\}$ stems from the symmetric nature of the uniform distribution on $\mathcal{T}_n$.

By definition of a coordinate-labelled tree we have the following: let $u$ be an internal node in a coordinate-labelled tree $t$, with coordinates $\mathcal{C}(u) = (C_1,C_2,C_3)$ and splitting triplet $P(u) = (P_1,P_2,P_3)$; then
$$\forall i\in\{1,2,3\},\quad \mathcal{C}(u(i))^T = M^{(i)}_\nu\,\mathcal{C}(u)^T,$$
where $M^{(i)}_\nu$ is the three-by-three identity matrix in which the $i$-th line is replaced by $P(u)$, i.e.
$$M^{(1)}_\nu = \begin{pmatrix} P_1 & P_2 & P_3 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix},\quad M^{(2)}_\nu = \begin{pmatrix} 1 & 0 & 0 \\ P_1 & P_2 & P_3 \\ 0 & 0 & 1 \end{pmatrix},\quad M^{(3)}_\nu = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ P_1 & P_2 & P_3 \end{pmatrix}. \qquad (4)$$
Henceforth, we will leave out the subscript $\nu$ wherever there is no risk of confusion. Combining this and Fact 3.2 gives us the following result.

Proposition 3.3.
Let m_n, U_n be as in the statement of Theorem 3.1. Write U′_n for the corresponding node in the coordinate-labelled tree t•_n := Ψ⁻¹_n(m_n), and C(U′_n) = (C_1(U′_n), C_2(U′_n), C_3(U′_n)) for the coordinates of U′_n. Then for i ∈ {1, 2, 3}, conditionally on h, the height of U′_n, the law of C_i(U′_n) is given by the i-th row of the product M_h ⋯ M_1, where the M_j are i.i.d. random variables with law (1/3)(δ_{M^(1)} + δ_{M^(2)} + δ_{M^(3)}) (the M^(k) being defined as in (4)).
Now, to get the desired convergence of U_n, it is of course sufficient to show the convergence of the sequence of coordinates (C_n)_{n≥1}, where C_n denotes the barycentric coordinates of the point U_n. By Theorem 2.8(1), the law of C_n is P_1 C_1(U′_n) + P_2 C_2(U′_n) + P_3 C_3(U′_n), where P = (P_1, P_2, P_3) is a splitting ratio with distribution ν, independent from C(U′_n). The previous proposition gives us the law of C(U′_n). Moreover, for any A > 0, P(|U′_n| ≤ A) tends to zero as n goes to infinity: the height of a uniform node tends to infinity in probability. Thus, to prove Theorem 3.1, it is sufficient to show the following. Proposition 3.4.
Let (M_i)_{i≥1} be i.i.d. random variables with law (1/3)(δ_{M^(1)} + δ_{M^(2)} + δ_{M^(3)}). Then the product S_n := M_n ⋯ M_1 converges a.s. as n → ∞ to some random matrix S whose three lines are identical.
Proof. We write L^(1)_n, L^(2)_n, L^(3)_n for the three rows of S_n. We wish to show that there exists L_∞ such that a.s., for all i ∈ {1, 2, 3}, L^(i)_n → L_∞.
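The coalescence of the rows of S_n is easy to observe numerically. The sketch below (our own illustration, not from the paper; the splitting ratio is fixed to P = (1/3, 1/3, 1/3) for simplicity) builds the matrices M^(i) of (4), multiplies a long product S_n = M_n ⋯ M_1 with i.i.d. uniform choices of i, and measures how far apart the three rows remain.

```python
import random

def m_matrix(i, p):
    # M^(i) from (4): the 3x3 identity with row i replaced by the splitting ratio p
    m = [[1.0 if r == c else 0.0 for c in range(3)] for r in range(3)]
    m[i] = list(p)
    return m

def mat_mul(a, b):
    return [[sum(a[r][k] * b[k][c] for k in range(3)) for c in range(3)]
            for r in range(3)]

def s_n(n, rng, p=(1/3, 1/3, 1/3)):
    # S_n = M_n ... M_1 with the M_j i.i.d. uniform on {M^(1), M^(2), M^(3)}
    s = [[1.0 if r == c else 0.0 for c in range(3)] for r in range(3)]
    for _ in range(n):
        s = mat_mul(m_matrix(rng.randrange(3), p), s)
    return s

rng = random.Random(2024)
s = s_n(500, rng)
# each M^(i) is stochastic, so every row of S_n stays on the simplex,
# and the three rows coalesce, as Proposition 3.4 predicts
spread = max(abs(s[r][c] - s[0][c]) for r in range(3) for c in range(3))
row_sum = sum(s[0])
```

After a few hundred factors the spread between the rows is negligible, which is the numerical shadow of the a.s. convergence to a rank-one limit S.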
Lemma 3.5. Let (n_k) be some sub-sequence of integers such that L^(i)_{n_k} → L^(i) a.s. for all i ∈ {1, 2, 3}. Then L^(1) = L^(2) = L^(3) a.s.
Proof. We proceed by contradiction. To simplify notation we assume that L^(i)_n → L^(i) a.s. for i ∈ {1, 2, 3}. Write A, B, C for the three points whose respective coordinates are L^(1), L^(2), L^(3). Similarly, write A_n, B_n, C_n for the three points with respective coordinates L^(1)_n, L^(2)_n, L^(3)_n. We may assume that P(A ≠ C) > 0, and from now on work conditionally on this event.
Fix some ε > 0 such that 6ε < d(A, C). There exists N such that for any n ≥ N we have d(X_n, X) < ε for X ∈ {A, C}. Thus by construction the balls B(A_n, 2ε) and B(C_n, 2ε) do not intersect, and B(X, ε) ⊆ B(X_n, 2ε) for X ∈ {A, C}. Define Y_n := P_1 A_n + P_2 B_n + P_3 C_n, where P = (P_1, P_2, P_3) is a splitting ratio, independent from (A_n, B_n, C_n). Since the two balls are disjoint, we have d(Y_n, A_n) ≥ 2ε or d(Y_n, C_n) ≥ 2ε, so that d(Y_n, A) ≥ ε or d(Y_n, C) ≥ ε. See Figure 10 for an illustration of this situation. Using the definition of the matrices (M_i), we get that with probability 1 (still conditionally on the event A ≠ C) there exists n ≥ N such that one of the following occurs:
1. We have d(Y_n, A) ≥ ε and M_{n+1} = M^(1), so that A_{n+1} = Y_n, which contradicts d(A_{n+1}, A) < ε.
2. We have d(Y_n, C) ≥ ε and M_{n+1} = M^(3), so that C_{n+1} = Y_n, which contradicts d(C_{n+1}, C) < ε.
This completes the proof of the lemma.
Figure 10: An illustration of the proof of Lemma 3.5.
Let us now prove Proposition 3.4. Let (L, L, L) be the a.s. limit along some subsequence of ((L^(1)_n, L^(2)_n, L^(3)_n)). Write M for the point in T̃ with (barycentric) coordinates L. Similarly, write (M^(1)_n, M^(2)_n, M^(3)_n) for the points with respective coordinates (L^(1)_n, L^(2)_n, L^(3)_n). Now a.s., for any ε > 0, there exists N such that d(M^(i)_N, M) < ε for all i ∈ {1, 2, 3}. But for n ≥ N the points M^(i)_n are all in the filled triangle defined by (M^(1)_N, M^(2)_N, M^(3)_N), by construction. It therefore follows that for any n ≥ N we have d(M^(i)_n, M) < ε for all i ∈ {1, 2, 3}. This proves that a.s. (M^(1)_n, M^(2)_n, M^(3)_n) → (M, M, M) as n → ∞, which is the desired result.
The idea of the proof is as follows.
Consider a uniform ternary tree t_n ∈ T_n and choose two independent nodes u^(1)_n, u^(2)_n uniformly at random in t_n. Then the height of their greatest common ancestor v_n := u^(1)_n ∧ u^(2)_n tends to infinity with n. This says that the corresponding two vertices U^(1)_n, U^(2)_n lie in a common small triangle. Intuitively, this suggests that they will asymptotically be close to each other. This is made precise in the following lemma. Lemma 3.6.
Keeping the same notation as in the statement of Theorem 3.1, conditionally on m_n, choose two vertices U^(1)_n, U^(2)_n ∈ V(m_n) independently and uniformly at random. Then the following convergence holds in probability:
‖U^(2)_n − U^(1)_n‖ →(P) 0, as n → ∞.
Proof.
Let, as before, U′^(1)_n (resp. U′^(2)_n) be the node corresponding to U^(1)_n (resp. U^(2)_n) in the tree t•_n, via the bijection Ψ_n established in Theorem 2.8. Write V_n := U′^(1)_n ∧ U′^(2)_n for the greatest common ancestor of these two nodes. It is clear that ‖U^(2)_n − U^(1)_n‖ ≤ diam(T̃(V_n)), where diam(S) is the diameter of the set S. Moreover, for any A > 0, P(|V_n| ≤ A) tends to zero as n goes to infinity. Using this and Fact 3.2, it is sufficient to show the following. Lemma 3.7.
Let (u_k)_{k≥1} be a sequence of i.i.d. random variables, uniform on {1, 2, 3}. Write W_n := u_1 ⋯ u_n ∈ T_∞. Then the following convergence holds in probability:
diam(T̃(W_n)) →(P) 0, as n → ∞.
Proof. In fact, the convergence holds a.s. Note that the sequence of triangles (T̃(W_n))_n is non-increasing for inclusion, therefore (diam(T̃(W_n)))_n is non-increasing, and so converges a.s. to some limit l ≥ 0. Extract a subsequence (n_k) such that the triangle T̃(W_{n_k}) converges to some limit triangle T_1 = (A_1, B_1, C_1). We can then use the same proof as for Lemma 3.5 to show that A_1 = B_1 = C_1 a.s., and hence l = 0 a.s., as desired.
We now use Lemma 3.6 to prove Theorem 3.1(2). For any measure µ on the triangle T̃ and any measurable function f on T̃, we denote
⟨f, µ⟩ := ∫_T̃ f dµ.
We show that for any real-valued function f continuous on T̃, ⟨f, µ_n⟩ →(d) ⟨f, δ_{U_∞}⟩ = f(U_∞). It suffices to show that (⟨f, µ_n⟩, f(U_n)) →(d) (f(U_∞), f(U_∞)), where U_n is as in the statement of Theorem 3.1. Since point (1) of Theorem 3.1 implies that f(U_n) →(d) f(U_∞), it suffices to show that ⟨f, µ_n⟩ − f(U_n) →(d) 0. Now E(⟨f, µ_n⟩) = E((1/n) Σ_{x∈V(m_n)} f(x)) = E(f(U_n)), thus it is sufficient to show that Var(⟨f, µ_n⟩ − f(U_n)) → 0. Let U^(1)_n, U^(2)_n be as in the statement of Lemma 3.6. We have
Var(⟨f, µ_n⟩ − f(U_n)) = E((⟨f, µ_n⟩ − f(U_n))²) = E( f(U^(1)_n)² − f(U^(1)_n) f(U^(2)_n) ),
so that Var(⟨f, µ_n⟩ − f(U_n)) ≤ ‖f‖_∞ E( |f(U^(1)_n) − f(U^(2)_n)| ).
Since T̃ is compact and f continuous, f is uniformly continuous. Fix some ε > 0. There exists η > 0 such that for x, y ∈ T̃ with ‖x − y‖ ≤ η, we have |f(x) − f(y)| ≤ ε. Then
Var(⟨f, µ_n⟩ − f(U_n)) ≤ ‖f‖_∞ ( ε + 2‖f‖_∞ P(‖U^(2)_n − U^(1)_n‖ > η) ).
Using Lemma 3.6 we get that lim sup_{n→∞} Var(⟨f, µ_n⟩ − f(U_n)) ≤ ε‖f‖_∞, and since this holds for any ε > 0, the desired result follows. This completes the proof of Theorem 3.1.
(In the proof of Lemma 3.7, when we say that "the triangle converges", we mean that the triplet of points of the triangle converges.)
It may also be interesting to obtain information on the law of the limit point U_∞, since this point is where the occupation measure concentrates asymptotically. Proposition 3.4 tells us that the coordinates C_∞ of the limit point U_∞ follow the law of one line of the matrix S, and satisfy the following equation in distribution:
C_∞ =(d) C_∞ · M_ν, where M_ν has distribution (1/3) δ_{M^(1)_ν} + (1/3) δ_{M^(2)_ν} + (1/3) δ_{M^(3)_ν},  (5)
and the M^(i)_ν are defined as in (4). This can be interpreted as follows. Split the original triangle T in three using the splitting law ν, and pick one of the three resulting triangles uniformly at random. Choosing a point according to C_∞ in that triangle has the same law as choosing a point according to C_∞ in T. The distribution of C_∞ is thus the limit distribution of a (very) simple Markov chain. Proposition 3.8.
Let M(C) be the set of symmetric (probability) laws on V*. For any splitting law ν ∈ M_S(V*), the distribution equation C_∞ =(d) C_∞ · M_ν has a unique solution C_∞ ∈ M(C).
This tells us that Equation (5) characterises the distribution of the limit point.
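Before turning to the proof, equation (5) can be illustrated numerically: iterating C ← C·M, with M uniform among the M^(i)_ν, samples (approximately) from the fixed point C_∞. The sketch below (our own illustration, not from the paper) uses the centroid splitting law P = (1/3, 1/3, 1/3), for which the limit point is uniform on the filled triangle (see the special case discussed below), so that each marginal of C_∞ should be Beta(1, 2), with mean 1/3 and variance 1/18.

```python
import random

P = (1/3, 1/3, 1/3)  # special case: the inserted vertex is the centre of gravity

def step(c, i):
    # c <- c . M^(i): coordinate i is redistributed over the three coordinates by P
    out = [c[0], c[1], c[2]]
    moved = out[i]
    out[i] = 0.0
    for j in range(3):
        out[j] += moved * P[j]
    return out

def sample(rng, steps=80):
    # run the Markov chain of (5) long enough to be close to stationarity
    c = [1/3, 1/3, 1/3]
    for _ in range(steps):
        c = step(c, rng.randrange(3))
    return c

rng = random.Random(7)
xs = [sample(rng)[0] for _ in range(4000)]
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
# expected (uniform law on the simplex): mean = 1/3, variance = 1/18
```

The empirical mean and variance of the first coordinate match the Beta(1, 2) marginal of the uniform law on the simplex, consistent with the uniqueness asserted by Proposition 3.8.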
Proof.
We endow M(C) with the usual L² norm, denoted ‖·‖. This makes it complete, and thus by Banach's fixed point theorem it is sufficient to show that the map
M(C) → M(C), L(X) ↦ L(X · M_ν),
where L(Y) denotes the law of a random variable Y, is a contraction.
Take µ ∈ M(C) and let X = (X_1, X_2, X_3) have distribution µ. Write for short m_2 = E(X_i²) and m_{1,1} = E(X_i X_j) for i ≠ j. Then ‖X‖² = 3 m_2. We now wish to compute ‖X M_ν‖². We have
‖X M_ν‖² = (1/3) E( Σ_{i=1}^3 X M^(i) (M^(i))^T X^T ).
Now a computation gives us that
M^(1) (M^(1))^T = ( |P|² P_2 P_3 ; P_2 1 0 ; P_3 0 1 ),
where |P|² := P_1² + P_2² + P_3². Thus
X M^(1) (M^(1))^T X^T = X_1² |P|² + X_2² + X_3² + 2( X_1 X_2 P_2 + X_1 X_3 P_3 ).
It follows that E( X M^(1) (M^(1))^T X^T ) = m_2 ( E(|P|²) + 2/3 ) + (4/3) m_{1,1}. Here we use the symmetry of ν (hence in particular E(P_i) = 1/3). Since the above equality is symmetric, it immediately follows that
‖X M_ν‖² = m_2 ( E(|P|²) + 2/3 ) + (4/3) m_{1,1}.  (6)
Now E(P_1²) < E(P_1) = 1/3, since P_1 ≤ 1 and P_1 ∈ {0, 1} a.s. is not allowed. Write a := 3 E(P_1²) = E(|P|²) < 1. By the Cauchy-Schwarz inequality,
m_{1,1} = E(X_1 X_2) ≤ √(E(X_1²)) √(E(X_2²)) = m_2.
It follows from (6) that ‖X M_ν‖² ≤ (a + 2) m_2 = ((a + 2)/3) ‖X‖², and since a < 1, the map X ↦ X M_ν is indeed a contraction.
Special case: when P = (1/3, 1/3, 1/3) a.s., the law of U_∞ is the uniform distribution on T̃. Indeed, putting a point at the centre of gravity of a triangle, choosing one of the three resulting triangles uniformly at random and placing a point uniformly in that triangle is the same as placing a point uniformly in the original triangle.
The previous results give us information on the asymptotic behaviour of the occupation measure, and thus tell us where the vertices are located asymptotically. In this section we obtain information on the behaviour of the drawings themselves, that is, the behaviour of compact triangulations under U^ν_n. We immediately state the main result. Theorem 3.9.
Let ν ∈ M_S(V*) be a splitting law and let (m_n)_{n≥1} be a sequence of compact triangulations under the distribution U^ν_n. Then there exists a random compact space m_∞ such that
m_n → m_∞, as n → ∞,
where the convergence holds in distribution in the set of compact subspaces of the filled triangle T̃, equipped with the Hausdorff distance.
The limit space m_∞ is characterised as follows. Start with the initial triangle T, split in three by adding a point according to ν. Pick one of these three triangles uniformly at random, and call it T′. For each of the other two triangles, consider independently a random critical Galton-Watson ternary tree (this object is defined later in the paper) and draw the corresponding compact triangulation (each vertex insertion according to ν, independently of all previous insertions and of the trees). Iterate this construction ad infinitum, replacing T with T′, and take the closure of the space obtained (so as to have a compact space). Figure 11 illustrates this convergence, showing a simulation of the map m_n for large n.
From now on, we work in the special case
ν = ν_0 := δ_{(1/3, 1/3, 1/3)},  (7)
that is, a splitting ratio with law ν_0 takes the value (1/3, 1/3, 1/3) a.s. There is no additional difficulty in the general case, but this special case simplifies certain statements such as Theorem 3.15, as well as certain formulae such as (10). We will be careful to always specify how we would proceed in the general case. Recall the definitions of the bijections Φ, Ψ in Remark 2.6 and Theorem 2.8. We define a map
Ψ̄ : T → E, t ↦ closure of Ψ ∘ Φ( t, ((1/3, 1/3, 1/3), u ∈ t) ),  (8)
where E is the set of compact subspaces of T̃, and S̄ denotes the closure of a subspace S ⊆ T̃.
In words, we take a tree t, make it a fragmentation-labelled tree by adding the label (1/3, 1/3, 1/3) at each internal vertex, and map it to its corresponding compact triangulation via the bijections established in Section 2 (taking the closure if the tree is infinite, so as to always work with compact spaces). Our main tool is the local convergence of Galton-Watson trees; our main reference is [11].
Figure 11: A simulation of a map m_n under the distribution U^{ν_0}_n, for large n.
Galton-Watson trees and local convergence
Definition 3.10. A ζ Galton-Watson (or GW(ζ)-) tree is a random variable τ ∈ U such that:
1. k_∅(τ) has law ζ, i.e. P(k_∅(τ) = k) = ζ(k) for any k ∈ N.
2. For any k such that ζ(k) > 0, under the conditional probability P(· | k_∅(τ) = k), the trees θ^(1)(τ), θ^(2)(τ), ..., θ^(k)(τ) are i.i.d. and have the same law as τ under P.
Proposition 3.11.
Let ξ have law (2/3)δ_0 + (1/3)δ_3, and let τ be a GW(ξ)-tree. Then a.s. τ ∈ T, and for any n ≥ 0, conditionally on the event τ ∈ T_n, τ is uniform in T_n.
Proof. First, τ is a.s. a ternary tree, since by definition of ξ every node has three children or none. Now for any t ∈ T_n, for some n ≥ 0,
P(τ = t) = (1/3)^n (2/3)^{2n+1},
since t ∈ T_n has n internal nodes, which each have three children, and 2n+1 leaves. Thus all trees of the same size have the same probability.
We can therefore view a uniform ternary tree in T_n as a GW(ξ)-tree conditioned to have size n. We now define the topology of local convergence on trees. Definition 3.12.
Let t ∈ U be a planar tree, and r > 0 some real number. Then B_r(t) is the subtree of t whose vertices all have height at most r, that is, B_r(t) = {u ∈ t ; |u| ≤ r}. Definition 3.13.
Let t, t′ ∈ U be two planar trees. Define the distance d̃ between t and t′ by
d̃(t, t′) = inf{ 1/(r + 1) ; B_r(t) = B_r(t′), r ∈ R_+ }.
One checks that d̃ is indeed a distance. The following proposition is a consequence of Proposition 3.11 and Theorem III.3.1 in [11].
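The balls B_r and the distance d̃ are straightforward to implement. In the sketch below (our own illustration; a planar tree is encoded as the set of its node addresses, a node being a tuple of child labels in {1, 2, 3} and the root the empty tuple), two ternary trees agreeing up to height 1 but differing at height 2 are at distance 1/2.

```python
def ball(t, r):
    # B_r(t): the nodes of t at height <= r
    return {u for u in t if len(u) <= r}

def local_dist(t, s):
    # d~(t, s) = inf{ 1/(r+1) ; B_r(t) = B_r(s) }, with d~(t, t) = 0
    if t == s:
        return 0.0
    r = 0
    while ball(t, r) == ball(s, r):
        r += 1
    # the balls agree exactly up to radius r - 1, so the infimum is 1/r
    return 1.0 / r

# a root with three leaf children, versus the same tree with the first child split
t1 = {(), (1,), (2,), (3,)}
t2 = t1 | {(1, 1), (1, 2), (1, 3)}
```

Here `local_dist(t1, t2)` returns 1/2, since the balls of radius 0 and 1 coincide while the balls of radius 2 differ.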
Proposition 3.14. Let t_n be a uniform tree in T_n. Then there exists a random variable t_∞ ∈ T such that
t_n → t_∞, as n → ∞,
where the convergence holds in distribution in T equipped with the distance d̃. Moreover, the following properties hold a.s.:
1. t_∞ has a unique infinite branch, written (t⁰_∞ = ∅, t¹_∞, t²_∞, ...).
2. For any k, conditionally on t^k_∞, the law of t^{k+1}_∞ is (1/3)( δ_{t^k_∞(1)} + δ_{t^k_∞(2)} + δ_{t^k_∞(3)} ). That is, the infinite branch is an infinite sequence of i.i.d. uniform left, middle, and right turns.
3. For any u on the infinite branch, the two finite subtrees among θ_{u(1)}(t_∞), θ_{u(2)}(t_∞), θ_{u(3)}(t_∞) are independent GW(ξ) trees, where ξ has law (2/3)δ_0 + (1/3)δ_3.
Now, to prove Theorem 3.9 in the special case (7), it is sufficient to have some continuity of the function Ψ̄ defined by (8). In fact, this function is not continuous on T. However, Theorem 25.7 in [6] says that, to transport convergence in distribution via a function f, it is sufficient that f be continuous on the support of the limit in distribution. Therefore, given the properties of t_∞ listed in Proposition 3.14, the following suffices. Theorem 3.15.
Let T be equipped with the distance of local convergence d̃. Let t ∈ T be a tree with exactly one infinite branch, and assume that along the infinite branch there is an infinity of left, middle, and right turns, i.e. if (u_1, u_2, ...) is the infinite branch, then
|{i ; u_i = j}| = ∞ for any j = 1, 2, 3.  (9)
Then the map Ψ̄ : T → E defined by (8) is continuous at t, where E is equipped with the Hausdorff distance.
Remark 3.16. In the general case where ν ≠ ν_0, this theorem should be re-stated as a continuity theorem for a function which maps the set of distributions on F_T to the set of distributions on E, both sets equipped with the topology of weak convergence. The path of the proof remains unchanged, though formula (10) is more complicated, as is therefore the proof of Lemma 3.17.
Proof. Let t be as in the statement of Theorem 3.15 and write (u_1, u_2, ...) for its infinite branch. Let (t_n) be a sequence of trees in T such that t_n → t as n tends to infinity for the distance d̃. Define m_n = Ψ̄(t_n) and m = Ψ̄(t). We wish to show that m_n → m for the Hausdorff distance. Recall from Definition 2.10 that for a node u ∈ t, we write T̃(u) for the corresponding (filled) triangle in the compact triangulation. Lemma 3.17.
Let (u_1, u_2, ...) be the infinite branch of t, satisfying condition (9). Then
diam( T̃((u_1, u_2, ..., u_k)) ) → 0, as k → ∞.
That is, the diameter of the triangle corresponding to the k-th node of the infinite branch of t tends to zero as k tends to infinity.
Proof. Consider Figure 12, where M is the centre of gravity of a triangle with side lengths a, b, c, and d is the distance from M to the vertex opposite the side of length b. A computation gives us
d = (1/3)√(2a² + 2c² − b²) ≤ (2/3) max{a, b, c}.  (10)
Figure 12: A triangle with side lengths a, b, c, its centre of gravity M, and the distance d.
It follows that if k_1 := min{ k ≥ 1 ; |{i ≤ k ; u_i = j}| ≥ 1 for all j ∈ {1, 2, 3} } (that is, there is at least one left, one middle and one right turn in the first k_1 steps), then diam( T̃((u_1, ..., u_{k_1})) ) ≤ (2/3) diam(T̃). Define inductively, for l ≥ 1,
k_{l+1} := min{ k ≥ k_l ; |{k_l < i ≤ k ; u_i = j}| ≥ 1 for all j ∈ {1, 2, 3} }.
Condition (9) implies that for any l, k_l is finite. Moreover, we have
diam( T̃((u_1, ..., u_{k_l})) ) ≤ (2/3)^l diam(T̃),
and taking l → ∞ completes the proof of Lemma 3.17.
Now, to prove Theorem 3.15, fix some ε >
0, and choose k such that diam( T̃((u_1, ..., u_k)) ) ≤ ε. Write u^k = (u_1, ..., u_k). By the assumptions made on t and by definition of the distance d̃, for sufficiently large n the trees t_n and t coincide, except perhaps on the subtrees θ_{u^k}(t_n) and θ_{u^k}(t). But this immediately implies that d_H(m_n, m) ≤ diam( T̃(u^k) ) ≤ ε, and the theorem is proved.
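The computation (10) is elementary to verify numerically. The sketch below (our own check, on an arbitrary triangle) compares the distance from the centre of gravity to the vertex opposite the side of length b with the closed-form expression, and confirms the bound by (2/3)max{a, b, c}.

```python
import math

# an arbitrary triangle with vertices A, B, C
A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)

a = math.dist(B, C)  # side opposite A
b = math.dist(A, C)  # side opposite B
c = math.dist(A, B)  # side opposite C

# centre of gravity
M = ((A[0] + B[0] + C[0]) / 3, (A[1] + B[1] + C[1]) / 3)

d = math.dist(M, B)  # distance from the centroid to the vertex opposite side b
formula = math.sqrt(2 * a**2 + 2 * c**2 - b**2) / 3
```

Both quantities agree, and `d` is indeed at most two thirds of the longest side, which is the contraction used in the proof of Lemma 3.17.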
The increasing case
In this section we study the asymptotic behaviour of H^ν_n. That is, at every step, we choose one of the faces of our triangulation uniformly at random and split it into three. We call this the increasing case.
We will see that the asymptotic behaviour of the occupation measure is different from the uniform case. Intuitively, this is because in the uniform case the distance between two vertices chosen at random tends to zero, since the height of their greatest common ancestor tends to infinity, whereas in the increasing case its law converges to a geometric distribution.
Here we give a construction of the increasing ternary tree as the underlying tree of a fragmentation tree. First, let us describe the deterministic fragmentation tree associated to a sequence of choices u = (u_i)_{i≥1}, with u_i ∈ [0, 1) for any i, and a sequence y = (y_u)_{u∈T_∞}, where for all u ∈ T_∞, y_u = (y¹_u, y²_u, y³_u) ∈ V*.
With these sequences, we associate a sequence F_n = F(n, u, y) of fragmentation trees with 2n+1 leaves, each node being marked with a sub-interval of [0, 1):
• At time 0, F_0 is the root tree {∅}, marked by I_∅ = [0, 1).
• Assume that at time k the tree F_k is built, and that it is a ternary tree with 2k+1 leaves, each node u being marked by a semi-open interval I_u = [a_u, b_u) ⊆ [0, 1), in such a way that the intervals (I_l, l a leaf of F_k) form a partition of [0, 1). The tree F_{k+1} is then built as follows. Denote by l̃ the (unique) leaf of F_k such that u_{k+1} ∈ I_{l̃}. We give l̃ three children l̃(1), l̃(2), l̃(3), and mark each of these with a sub-interval of I_{l̃} whose lengths are prescribed by y_{l̃}. More specifically, if I_{l̃} = [a, b), then we take I_{l̃(1)} = [a, a + (b−a)y¹_{l̃}), I_{l̃(2)} = [a + (b−a)y¹_{l̃}, a + (b−a)y¹_{l̃} + (b−a)y²_{l̃}), and I_{l̃(3)} = [a + (b−a)y¹_{l̃} + (b−a)y²_{l̃}, b).
Given a fragmentation tree F, we will write π(F) for the underlying tree (that is, the fragmentation tree with its marks removed).
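The deterministic construction of F(n, u, y) translates directly into code. The sketch below (our own illustration, with 1-based child labels as in the text) grows the marked tree for two splits and keeps track of the intervals; by construction the leaf intervals always partition [0, 1).

```python
def fragmentation_tree(n, u, y):
    # intervals[node] = (a, b) encodes the mark I_node = [a, b);
    # nodes are tuples of child labels in {1, 2, 3}, the root being ()
    intervals = {(): (0.0, 1.0)}
    leaves = [()]
    for k in range(n):
        # the unique leaf whose interval contains the choice u_{k+1}
        leaf = next(l for l in leaves if intervals[l][0] <= u[k] < intervals[l][1])
        a, b = intervals[leaf]
        y1, y2, y3 = y[leaf]  # split proportions prescribed at that leaf
        cuts = (a, a + (b - a) * y1, a + (b - a) * (y1 + y2), b)
        for i in range(3):
            intervals[leaf + (i + 1,)] = (cuts[i], cuts[i + 1])
        leaves.remove(leaf)
        leaves.extend(leaf + (i,) for i in (1, 2, 3))
    return intervals, leaves

# a tiny deterministic example: two splits, both with proportions (1/2, 1/4, 1/4)
y = {(): (0.5, 0.25, 0.25), (1,): (0.5, 0.25, 0.25)}
intervals, leaves = fragmentation_tree(2, [0.3, 0.1], y)
```

After n = 2 steps the tree has 2n + 1 = 5 leaves whose intervals tile [0, 1), exactly as in the construction above.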
Definition 4.1. The 3-dimensional Dirichlet distribution with parameter α ∈ (0, +∞), denoted Dir(α), is the probability measure on V with density
f_α(x_1, x_2, x_3) := ( Γ(3α) / Γ(α)³ ) (x_1 x_2 x_3)^{α−1}
with respect to the uniform measure on V. The following fundamental result is due to Albenque and Marckert [1].
Theorem 4.2.
Let U = (U_i)_{i≥1} be a sequence of i.i.d. random variables, uniform on [0, 1), and let Y = (Y_u)_{u∈T_∞} be a sequence of i.i.d. random variables with Dir(1/2) distribution. Now let F_n = F(n, U, Y) be the sequence of corresponding random fragmentation trees, as described above. Then for any n ≥ 0, the underlying ternary tree π(F_n) follows the distribution of an increasing ternary tree on T_n.
A consequence of this representation is the following. Let t_n ∈ T_n be a family of increasing ternary trees. Then the triplet (P_1, P_2, P_3), where P_i is the proportion of internal nodes in the i-th subtree of the root of t_n, that is
P_i := (1/n) #{u ∈ t_n ; u = (i, u_2, ..., u_h)},
converges in distribution to the Dir(1/2) distribution.
In this section we show that the occupation measure µ_n defined by (3) converges to a random limit measure µ. Let ν be a splitting law and (P(u), u ∈ T_∞) a sequence of i.i.d. splitting ratios with distribution ν. Recall the previous construction: let U = (U_i)_{i≥1} be a sequence of i.i.d. random variables, uniform on [0, 1), and Y = (Y_u)_{u∈T_∞} a sequence of i.i.d. random variables with Dir(1/2) distribution. Let F_n = F(n, U, Y) be the sequence of corresponding random fragmentation trees, and let I_u be the interval marking the node u. If we write t_n = π(F_n) for the underlying ternary tree, then t_n is an increasing tree of size n, according to the previous theorem. Let m_n := Ψ_n ∘ Φ_n( (t_n, (P(u), u ∈ t_n)) ) be the corresponding compact triangulation. By definition, m_n has distribution H^ν_n. Write µ_n for its occupation measure, as defined by (3). The remark at the end of Section 4.1 says that
∀ u ∈ T_∞, µ_n( T̃(u) ) → |I_u|.  (11)
This is in fact just the law of large numbers. Indeed, the quantity µ_n(T̃(u)) is the proportion of uniform random variables on [0,
1) which fall in the sub-interval I_u. As such, with this construction, the convergence in (11) is a.s. Definition 4.3.
Let u_1, u_2, ..., u_k be k nodes of T_∞. We say that u_1, ..., u_k are covering if the following two conditions hold:
1. We have ∪_i T̃(u_i) = T̃.
2. For any i ≠ j, Int( T̃(u_i) ) ∩ Int( T̃(u_j) ) = ∅, where Int(S) denotes the interior of a set S.
Another important property of the occupation measure µ_n is that it has weight zero along the edges of the triangles T̃(u). Indeed, the vertices are always added in the interior of the triangles, so that
∀ u ∈ T_∞, µ_n( ∂( T̃(u) ) ) → 0,  (12)
where ∂S denotes the boundary of a set S. Once again, in our construction this convergence holds a.s.
We now state the main result of this section. Theorem 4.4.
Let (l_u)_{u∈T_∞} be a sequence of positive random variables such that:
for any nodes u_1, ..., u_k ∈ T_∞, if u_1, ..., u_k are covering, then a.s. l_{u_1} + ⋯ + l_{u_k} = 1.  (13)
Then there exists an a.s. unique random measure µ on the triangle T̃ such that the following hold a.s.:
1. For any u ∈ T_∞, µ( T̃(u) ) = l_u.
2. For any u ∈ T_∞, µ( ∂( T̃(u) ) ) = 0.
Since the lengths |I_u| satisfy condition (13), using the convergences (11) and (12) we obtain the following consequence. Theorem 4.5.
Let l_u := |I_u| for all nodes u ∈ T_∞, and let µ be the unique measure of Theorem 4.4 for this choice of (l_u). Then the convergence
µ_n →(d) µ, as n → ∞,
holds in distribution in the set of probability measures on T̃, equipped with the topology of weak convergence. Remark 4.6.
Theorem 4.4 tells us that the information on the triangles T̃(u) is sufficient to characterise the measure µ. It is crucial here that there is no mass on the edges of the triangles. Indeed, if there were some mass on the edge [AB] of the original triangle, knowledge of the values ( µ(T̃(u)), u ∈ T_∞ ) alone would not be sufficient to determine how that mass is distributed.
Proof. The existence of µ is a consequence of property (13) and Kolmogorov's extension theorem. Let us prove uniqueness. Let µ, µ′ be two measures on T̃ satisfying the conditions of Theorem 4.4. For the remainder of the proof, we work with a fixed ω in our probability space Ω for which properties (1) and (2) of Theorem 4.4 hold.
Define the set T̂ as the triangle T̃ with the boundaries of all the triangles T̃(u) removed, that is,
T̂ := T̃ \ ∪_{u∈T_∞} ∂( T̃(u) ).
Because of property (2) of Theorem 4.4, we may view µ and µ′ as measures on T̂. Now the sets ( T̃(u) ∩ T̂, u ∈ T_∞ ) form a basis of open sets for a certain topology on T̂, say O′. We first show the following. Lemma 4.7.
Let O denote the topology induced by the usual metric topology on T̂. Then O′ = O.
Proof. First, note that O′ ⊆ O, since for any u ∈ T_∞ the set T̃(u) ∩ T̂ is an open set for the metric topology on T̂. To show the converse, we show that
∀ O ∈ O, ∀ x ∈ O, ∃ u ∈ T_∞, x ∈ T̃(u) ⊆ O.  (14)
Fix O ∈ O and x ∈ O. Define u_n(x) to be the unique vertex u ∈ T_∞ such that |u| = n and x ∈ T̃(u). The uniqueness of u_n(x) stems from the fact that x ∉ ∪_{u∈T_∞} ∂( T̃(u) ). For simplicity we write T_n(x) := T̃(u_n(x)). Now, to show (14), it is sufficient to show that
diam( T_n(x) ) → 0, as n → ∞.
We write, for any n, u_n(x) = (u_1(x), ..., u_n(x)). Notice that by construction the u_i(x) are well defined (i.e. they do not depend on n). Now if we show that the sequence (u_i(x), i ≥
1) satisfies condition (9), then we can follow the path of the proof of Lemma 3.17 to get the desired result. Let us therefore show that
∀ j ∈ {1, 2, 3}, |{i ; u_i(x) = j}| = ∞.  (15)
We proceed by contradiction. If (15) does not hold, then there are two possibilities:
(1) There exist N ∈ N and j ∈ {1, 2, 3} such that for all i ≥ N, u_i(x) = j; that is, there is exactly one value of j such that |{i ; u_i(x) = j}| is infinite.
(2) There exist N ∈ N and j ∈ {1, 2, 3} such that for all i ≥ N, u_i(x) ≠ j, and for k ∈ {1, 2, 3} \ {j}, |{i ; u_i(x) = k}| = ∞; that is, there are exactly two values of j such that |{i ; u_i(x) = j}| is infinite.
Consider case (1). Let N ∈ N be such that, say, for all i ≥ N, u_i(x) = 1. Write T_i(x) = (A_i(x), B_i(x), C_i(x)) for any i. Now for any i ≥ N we have B_i(x) = B_N(x) := B(x) and C_i(x) = C_N(x) := C(x) (recalling the ordering of triangles after splitting, as shown in Figure 3). Now if A(x) is a limit point of some subsequence of (A_n(x))_{n≥N}, we can show, using arguments similar to those in the proof of Lemma 3.5, that A(x) ∈ [B(x)C(x)]. This implies that as n tends to infinity, the distance between A_n(x) and the line segment [B(x)C(x)] tends to zero (see Figure 13 below). But this would imply that x ∈ [B(x)C(x)], which is impossible since x ∉ ∪_{u∈T_∞} ∂( T̃(u) ). Figure 13 provides an illustration of this case.
Figure 13: Case (1).
Now consider case (2). We suppose that for i ≥ N, u_i(x) ≠ 1, and that |{i ; u_i(x) = j}| = ∞ for j = 2, 3. We still write T_i(x) = (A_i(x), B_i(x), C_i(x)) for any i. Now for any i ≥ N we have A_i(x) = A_N(x) := A(x). As above, one shows that the sequences d(A(x), B_n(x)) and d(A(x), C_n(x)) both tend to zero as n tends to infinity, so that we should have x = A(x). But this contradicts once more the fact that x ∉ ∪_{u∈T_∞} ∂( T̃(u) ). Thus we have proved (15), which concludes the proof of Lemma 4.7.
To complete the proof of Theorem 4.4, we use Dynkin's π-λ theorem (Theorem 3.2 in [6]). For any set S of compact subspaces of T̃, denote by σ(S) the σ-algebra generated by S, so that σ(O) is the usual Borel σ-algebra on T̃. To prove Theorem 4.4, it is sufficient to show that
σ( {T̃(u) ; u ∈ T_∞} ) = σ(O).  (16)
But σ( {T̃(u) ; u ∈ T_∞} ) is a Dynkin system (since it is a σ-algebra). Moreover, since T_∞ is countable, Lemma 4.7 implies that O ⊆ σ( {T̃(u) ; u ∈ T_∞} ), and Dynkin's theorem immediately implies (16).
Properties of the limit measure
We have seen that the occupation measure µ_n converges in distribution to a limit measure µ, which satisfies µ(T̃(u)) = |I_u|, where I_u is the interval marking the node u in the fragmentation construction introduced in Section 4.1. Moreover, for any node u, µ(∂T̃(u)) = 0. In this section, we determine additional properties of the measure µ. Proposition 4.8.
The atomic part of µ is a.s. zero. That is, a.s. there is no point x ∈ T̃ such that µ({x}) > 0.
Proof. Define T(n) := {T̃(u) ; u ∈ T_∞, |u| = n}, that is, the set of triangles "at height n". It suffices to show that a.s.
sup_{τ∈T(n)} µ(τ) → 0, as n → ∞.  (17)
Indeed, if with positive probability there exists some x ∈ T̃ such that µ({x}) > 0, then, using the notation T_n(x) introduced in the proof of Theorem 4.4, with positive probability
lim sup_n sup_{τ∈T(n)} µ(τ) ≥ lim sup_n µ(T_n(x)) ≥ lim sup_n µ({x}) = µ({x}) > 0,
and therefore proving (17) is sufficient.
For this, we will use a branching process result. Notice that if |u| = n, then the law of µ(T̃(u)) is that of P_1 ⋯ P_n, where the P_i are i.i.d. random variables whose distribution is the first (or equivalently, any) marginal of a Dir(1/2) distribution. We shall show that
inf_{τ∈T(n)} ( −log(µ(τ)) ) → +∞, as n → ∞,  (18)
which is equivalent to (17).
Now the law of inf_{τ∈T(n)} (−log(µ(τ))) is the law of the time of first birth at generation n for a branching process with birth times −log(P_1), −log(P_2), −log(P_3), where (P_1, P_2, P_3) has law Dir(1/2) (and every vertex has exactly three children). We define Φ to be the Laplace transform of the reproduction law:
Φ(θ) := E[ Σ_{i=1}^3 exp( −θ(−log(P_i)) ) ].
Thus Φ(θ) = 3 E( (P_1)^θ ). Kingman proved in [12] that if a, θ > 0 satisfy Φ(θ) e^{θa} < 1, then the time of first birth at generation n, say B_n, satisfies lim inf_n B_n / n ≥ a. Now since Φ(θ) tends to zero as θ tends to infinity, we can choose θ such that Φ(θ) ≤ (2e)^{−1}, and taking a = θ^{−1} will give us the desired result. This is clearly enough to show (18) (since a > 0).
A measure ν on R^d can be decomposed as ν = ν_Leb + ν_atom + ν_sing, where ν_Leb is absolutely continuous with respect to the Lebesgue measure on R^d, ν_atom is a countable (weighted) sum of Dirac atoms, and ν_sing has no atoms and is singular with respect to the Lebesgue measure on R^d. The previous proposition means that a.s. µ_atom = 0. We seek additional information on µ.
Definition 4.9. Let M be a metric space, and X ⊆ M a subspace of M.
For any $d \geq 0$, we define the $d$-dimensional Hausdorff measure $\mu_d$ of $X$ by

$$\mu_d(X) = \lim_{\varepsilon \to 0} \inf \sum_{i \in I} (\mathrm{diam}(U_i))^d,$$

where the infimum is taken over all countable coverings $(U_i)_{i \in I}$ of $X$ such that for any $i \in I$, $\mathrm{diam}(U_i) < \varepsilon$. This infimum is non-decreasing as $\varepsilon$ decreases, thus the limit exists. The Hausdorff dimension $\dim_H$ of $X$ is then defined by

$$\dim_H(X) = \sup\{d \geq 0;\ 0 < \mu_d(X) < \infty\}.$$

Theorem 4.10.
The limit measure $\mu$ is supported by a subset $S_\nu(\mu)$ of $\tilde{T}$ which satisfies

$$\dim_H(S_\nu(\mu)) = \frac{2}{3\,E(-\log(P_1))} \quad \text{a.s.},$$

where $P = (P_1, P_2, P_3)$ is a splitting ratio with distribution $\nu$.

Using a convexity inequality (Jensen's inequality gives $E(-\log(P_1)) \geq -\log(E(P_1)) = \log(3)$) we get that

$$\frac{2}{3\,E(-\log(P_1))} \leq \frac{2}{-3\log(E(P_1))} = \frac{2}{3\log(3)},$$

and since in particular the latter quantity is strictly less than 2, we get that $\mu$ is a.s. singular with respect to the Lebesgue measure, i.e. a.s. $\mu_{\mathrm{Leb}} = 0$. Notice also that we have equality in the above when we take the special case $P = (\frac13, \frac13, \frac13)$ a.s., that is $\nu = \nu_0$ as defined in (7). Proof.
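As a numerical aside (not part of the paper's argument), the bound above is easy to sanity-check. The sketch below chooses, purely as an illustrative assumption, a splitting-ratio law $\nu$ given by the symmetric Dirichlet(1,1,1) distribution, whose first marginal $P_1$ is Beta(1,2) with $E(-\log(P_1)) = \psi(3) - \psi(1) = 3/2$ exactly; any choice of $\nu$ should yield a dimension at most $2/(3\log 3)$.

```python
# Sanity check of the Hausdorff dimension formula/bound:
#   dim = 2 / (3 E(-log P1)) <= 2 / (3 log 3) < 2   (by Jensen).
# ILLUSTRATIVE assumption (not from the paper): nu = Dirichlet(1,1,1),
# so P1 ~ Beta(1,2) and E(-log P1) = 3/2 exactly.
import math
import random

random.seed(0)

jensen_bound = 2.0 / (3.0 * math.log(3.0))  # ~0.6066, strictly below 2

# Monte Carlo estimate of E(-log P1) for the illustrative splitting law.
n = 200_000
mean_neg_log = sum(-math.log(random.betavariate(1, 2)) for _ in range(n)) / n

dim_example = 2.0 / (3.0 * mean_neg_log)    # close to 2/(3 * 3/2) = 4/9

# Deterministic splitting nu_0: P = (1/3, 1/3, 1/3) attains the bound exactly.
dim_nu0 = 2.0 / (3.0 * (-math.log(1.0 / 3.0)))
```

For the deterministic splitting $\nu_0$ the computed dimension coincides with $2/(3\log 3)$, matching the equality case noted above.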
We apply the second point of Corollary IV.b in [5]. In our case, $c = 3$, $(W_0, W_1, W_2)$ follows the $\mathrm{Dir}(\frac12,\frac12,\frac12)$ distribution, and $(L_0, L_1, L_2) = P = (P_1, P_2, P_3)$, where $P$ is a splitting ratio with distribution $\nu$. Barral's result states that the measure $\mu$ is supported by a set with Hausdorff dimension

$$\dim_H(S(\mu)) = \frac{E\left(\sum_{j=0}^{c-1} W_j \log W_j\right)}{E\left(\sum_{j=0}^{c-1} W_j \log L_j\right)} = \frac{3\,E(W_0 \log(W_0))}{3\,E(W_0)\,E(\log(P_1))},$$

and the desired result follows from a computation: the marginal $W_0$ has the $\mathrm{Beta}(\frac12, 1)$ distribution, for which $E(W_0 \log(W_0)) = -\frac29$ and $E(W_0) = \frac13$, so the ratio equals $\frac{-2/3}{E(\log(P_1))} = \frac{2}{3\,E(-\log(P_1))}$. □

References

[1] M. Albenque and J.-F. Marckert. Some families of increasing planar maps.
Electron. J. Probab., 13:no. 56, 1624–1671, 2008.

[2] D. Aldous. Cambridge University Press, 1991.

[3] D. Aldous. Tree-based models for random distribution of mass. J. Stat. Phys., 73:625–641, 1993.

[4] D. Aldous. Recursive self-similarity for random trees, random triangulations and Brownian excursion. The Annals of Probability, 22(2):527–545, 1994.

[5] J. Barral. Moments, continuité, et analyse multifractale des martingales de Mandelbrot. Probability Theory and Related Fields, 113:535–569, 1999.

[6] P. Billingsley. Probability and measure. 3rd ed. Chichester: John Wiley & Sons Ltd., 1995.

[7] N. Bonichon, S. Felsner, and M. Mosbah. Convex drawings of 3-connected plane graphs. Algorithmica, 47(4):399–420, 2007.

[8] D. Burago, Y. Burago, and S. Ivanov. A course in metric geometry. Providence, RI: American Mathematical Society (AMS), 2001.

[9] N. Curien and I. Kortchemski. Random non-crossing plane configurations: A conditioned Galton–Watson tree approach. 2012.

[10] E. Fekete. Branching random walks on binary search trees: convergence of the occupation measure. ESAIM, Probab. Stat., 14:286–298, 2010.

[11] F. Gillet. Étude d'algorithmes stochastiques et arbres. 2003.

[12] J. Kingman. The first birth problem for an age-dependent branching process. Ann. Probab., 3:790–801, 1975.

[13] J.-F. Le Gall. Uniqueness and universality of the Brownian map. ArXiv e-prints, May 2011.

[14] G. Miermont. The Brownian map is the scaling limit of uniform random plane quadrangulations. ArXiv e-prints, Apr. 2011.

[15] G. Schaeffer.