3-Connected Cores In Random Planar Graphs
NIKOLAOS FOUNTOULAKIS AND KONSTANTINOS PANAGIOTOU
Max-Planck-Institute for Informatics, Saarbrücken, Germany
Abstract.
The study of the structural properties of large random planar graphs has become in recent years a field of intense research in computer science and discrete mathematics. Nowadays, a random planar graph is an important and challenging model for evaluating methods that are developed to study properties of random graphs from classes with structural side constraints.

In this paper we focus on the structure of random biconnected planar graphs with respect to the sizes of their 3-connected building blocks, which we call cores. In fact, we prove a general theorem regarding random biconnected graphs from various classes. If B_n is a graph drawn uniformly at random from a class B of labeled biconnected graphs, then we show that with probability 1 − o(1) as n → ∞, B_n belongs to exactly one of the following categories: (i) either there is a unique giant core in B_n, that is, there is a 0 < c = c(B) < 1 such that the largest core contains ∼ cn vertices, and every other core contains at most n^α vertices, where 0 < α = α(B) < 1; or (ii) all cores of B_n contain O(log n) vertices. Moreover, we find the critical condition that determines the category to which B_n belongs, and also provide sharp concentration results for the counts of cores of all sizes between 1 and n. As a corollary, we obtain that a random biconnected planar graph belongs to category (i), where in particular c = 0.… is given explicitly and α = 2/3.

1. Introduction
A fundamental discipline of computer science is the theoretical and practical evaluation of the performance of algorithms. From a theoretical viewpoint, many important computational problems turn out to be
NP-hard or even very difficult to approximate, which leaves little hope for finding efficient algorithms that solve all instances. On the other hand, in practice it is widely observed that one can find satisfactory solutions efficiently, even when using algorithms that are known to perform very badly on certain inputs. In other words, it seems that "typical" instances are in some sense easier, and the classical worst-case analysis may be too pessimistic. This motivates the study of the performance of algorithms from an average point of view. However, in order to perform such an analysis, it is necessary to specify an appropriate probability distribution on the set of all inputs.

After having a suitable model at hand, an important ingredient of a meaningful average-case analysis is the precise knowledge of the typical structure of an input sampled from the specified distribution. For instance, in the context of graph problems, one wants to study the properties of graphs in the corresponding random graph model. However, this may be a very challenging task: if the space of inputs consists of graphs with global structural constraints, then the existence of dependencies can make the analysis formidable.

A natural graph class with such constraints that has attracted the attention of researchers in computer science and discrete mathematics in recent years is the class of random planar graphs. In fact, since it was first studied by Denise, Vasconcellos, and Welsh [3], this model has evolved as the primary example for studying random graphs from constrained classes, see e.g. [10, 7]. From the perspective of average-case analysis, it would be helpful to know that although a random planar graph inherits the dependencies that arise from the requirement of planarity, essentially most of its vertices behave independently of each other.
However, the planarity condition makes almost all the tools and methods that have been used in the past decades for the analysis of classical random graph models fail in the case of such a problem. Consequently, the development of new approaches is necessary and essential.

One attempt to resolve this issue was made recently by the second author and Steger [11]. The precise setting they considered is as follows. Let C be a class of labeled connected graphs, and let C_n be a random graph from C with n vertices. The main idea in their work is to consider the maximal biconnected components of a connected graph C_n. In this context, they showed, among other results, that under certain assumptions C_n belongs a.a.s. to exactly one of the following categories:

(i) There is a unique giant biconnected component in C_n. More precisely, there is a 0 < c = c(C) < 1 such that the largest biconnected component contains ∼ cn vertices, while every other component contains o(n) vertices.
(ii) All biconnected components contain O(log n) vertices.

Additionally, in [11] it was shown that random planar graphs belong to the former category, whereas e.g. random outerplanar graphs belong to the latter. Observe that for graphs that belong to category (ii), almost all pairs of vertices lie in different biconnected components, while this is not the case for graphs from the first category. A consequence of these facts is the following important observation. Random graphs from classes that belong to category (ii) "contain" in a well-defined sense plenty of independence. In particular, any such graph can be generated by choosing independently every one of its biconnected components, and gluing them together at the cut-vertices. As the biconnected components contain few vertices, and as they intersect each other only at single vertices, the impact of each block on the whole graph is small.
So, such graphs resemble in a certain way the behavior of classical random graphs, where each edge is included independently with a specified probability, with the difference that here we choose the blocks independently of each other. However, random graphs from classes that belong to category (i) do not have this property: a lot of structure that we cannot control is "hidden" in the giant biconnected component, which contains a constant fraction of the vertices.

This motivates a finer analysis regarding the typical structure of random biconnected graphs, which is the main topic of this work. In particular, we investigate how and under which conditions we can decompose a biconnected graph into building blocks of higher complexity. As we shall see shortly, such a decomposition is well-known and possible. We will show that we encounter again a fundamental dichotomy: depending on a critical condition, which we determine explicitly, we prove that large biconnected graphs have either only "small" building blocks of higher complexity, or a constant fraction of the vertices is contained in such a building block. Hence, we discover a picture that is completely analogous to the distribution of the sizes of the blocks in random connected graphs.

Concerning the actual analysis, our methods not only differ from, but also extend significantly some of the ideas presented in [11] with respect to two major issues. Firstly, essential ingredients of our proofs are precise asymptotic estimates for the number of biconnected graphs in the class in question. Here we resolve this issue in an analytic context and obtain very general and precise enumeration results that are applicable in a wide variety of scenarios frequently encountered in modern theories of asymptotic enumeration. (Here and below, "a.a.s." abbreviates asymptotically almost surely, i.e. with probability tending to 1 as n → ∞.)
Secondly, regarding the combinatorial aspect, the further decomposition of biconnected graphs into building blocks of higher complexity is considerably more involved than the decomposition of connected graphs into their biconnected components. So, although we use the same basic idea as in [11], namely to sample algorithmically a graph from the class in question, the nature of the resulting algorithms is significantly more complex and cannot be analyzed with standard methods from probability theory. We resolve this issue by proposing a powerful method that relates by equation systems the simultaneous evolution of several random variables during the sampling process. We believe that this analysis opens up the possibility to study parameters of arbitrarily complex classes that can be described within the framework of the symbolic method, which nowadays is a widely used tool to decompose various classes of combinatorial objects.

Our results.
The main idea of graph classification and enumeration that we will exploit goes back to Tutte [13] and consists of describing graphs in terms of building blocks of higher connectivity. For example, an arbitrary graph can be described by its connected components. Similarly, any connected graph decomposes uniquely into the set of its maximal biconnected components. For the class of biconnected graphs there is a similar decomposition with respect to 3-connected building blocks. Informally, one can give a recursive description of biconnected graphs in which 3-connected graphs have their edges replaced by biconnected graphs. This decomposition has been formalized through the notion of a network, whose precise definition will be given in Section 2. The 3-connected building blocks which play the above role are called the cores of a biconnected graph. Using this decomposition, we can define classes of biconnected graphs by restricting the cores to be within a certain class of 3-connected graphs. For example, if we use the class of all 3-connected planar graphs as the base of the decomposition, then we obtain the classes consisting of all biconnected, connected, and general planar graphs.

In this paper we study the distribution of the sizes of the cores in large random biconnected graphs that are specified in Tutte's sense by their 3-connected building blocks. Our first result, stated here in a very weakened form, is about random planar graphs. Here and in the remainder we write "x ± y" for the interval (x − y, x + y).

Theorem 1.1.
Let B_n denote a random labeled biconnected planar graph with n vertices, and let ω = ω(n) be any function such that lim_{n→∞} ω(n) = ∞. Let ε > 0. Then, B_n contains a.a.s. a unique largest core with (1 ± ε)cn vertices, where c = 0.… is given explicitly. Moreover, every other core contains at most n^{2/3}ω(n) vertices, and there are (1 ± ε)c_ℓ n cores with ℓ vertices, where 4 ≤ ℓ = O(n^{2/5}) and the c_ℓ are given explicitly.

The main results of this paper are far more general. In fact, we show that a random planar graph is just a special case of a universal phenomenon. Let B_n be a graph drawn uniformly at random from a class B of labeled biconnected graphs. In Theorems 2.3 and 2.4 we determine a critical condition according to which B_n belongs a.a.s. to exactly one of the following categories:

(i) There is a unique giant core in B_n: there are 0 < c = c(B), α = α(B) < 1 such that the largest core contains (1 ± ε)cn vertices, while every other core contains O(n^α) vertices.
(ii) All cores of B_n contain O(log n) vertices.

Moreover, in each of the above cases we determine for all 4 ≤ ℓ ≤ n the expected number of cores with ℓ vertices, and provide very sharp accompanying large deviation results. We refer the reader to Section 2 for the formal presentation of our setting, and to Theorems 2.3 and 2.4 for a precise formulation of the above statements.

Our results have implications for several classes of biconnected graphs different from planar graphs. Let us mention selectively two examples. Writing Ex(G_1, …, G_k) for the class of biconnected graphs that do not contain G_1, …, G_k as minors, the main result in [8] together with our result implies that a random graph from Ex(K_{3,3}) on n vertices a.a.s. exhibits a behavior as above: it has a unique giant core, whereas every other core is of lower order.
Also, for the class Ex(K_5 − e), which consists of all biconnected graphs whose cores belong to the set {K_{3,3}, K_3 × K_2} ∪ {W_n}_{n≥3}, where W_n denotes a wheel on n + 1 vertices, the main result in [8] together with our results implies that a random graph on n vertices from this class contains a.a.s. only cores having at most O(log n) vertices.

Outline.
As we mentioned above, the decomposition of a class of biconnected graphs is facilitated by the notion of a network, which we formally describe in the next section. This is a recursive decomposition that will allow us in Section 5 to design efficient algorithms that sample networks according to the so-called
Boltzmann model. Such ideas were used for the first time e.g. in [1], where the number of vertices of a given degree in random dissections of convex polygons was studied. However, the nature of the decomposition in this work is very complex, and this makes the analysis of the samplers significantly more involved compared to the analysis in [11, 1]. In particular, in order to sample networks, we use four different randomized algorithms, which call each other recursively in a nested and unpredictable fashion. This difficulty requires a completely new treatment as far as the analysis is concerned, and the details are presented in Section 5.

Another crucial ingredient in our proof is a precise counting estimate for the number of networks, parametrized by the number of vertices and edges. To this end, we extend and further develop in Section 4 certain known analytic tools, in order to make them applicable in our actual setting. Furthermore, they allow us to show a central limit theorem as well as tail bounds for the number of edges of a (general) random network with n vertices. Finally, in Section 7 we give the proofs of the main theorems and in Section 8 we give the proofs of a number of auxiliary results which are used in our arguments.

Notational preliminaries.
At this point let us introduce some necessary notation. Let C be a class of labeled graphs. For any positive integers n, m ≥
1, we denote by C_{n,m} the subclass of C consisting of those graphs that have n labeled vertices and m edges, and we set C_n = ∪_{m≥0} C_{n,m}. Moreover, we denote by C(x, y) the exponential generating function (egf) for C, where x marks the vertices and y marks the edges, i.e., the coefficient [x^n y^m] C(x, y) equals |C_{n,m}|/n!. For a given y, we denote by ρ_C(y) the singularity of C(x, y) with respect to x and, in particular, when y = 1 we write ρ_C = ρ_C(1). Also, we write C(y) := C(ρ_C(y), y) and C := C(ρ_C, 1).

Given two graph classes X and Y, we denote by X × Y the cartesian product of X and Y, followed by a relabeling step so as to guarantee that all labels are distinct. Note that the relation "A = X × Y" expresses the fact that there is a bijection between the elements of A and pairs of elements from X and Y, but it does not provide any information about what this bijection looks like, i.e., how to construct a graph in A from two graphs in X and Y. The same is true for the operators described in the remainder. The class X + Y consists of graphs that are either in X or in Y. We denote by Set_{≥k}(X) the graph class such that each object in it is an unordered collection of at least k graphs in X. Finally, the class X ◦_e Y consists of all graphs that are obtained from graphs in X, where each edge is replaced by a graph from Y. Here we will usually assume that Y is a class of graphs with two distinguished vertices, which simply means that we attach a graph G_e ∈ Y at each edge e ∈ E(G), where G ∈ X, by identifying the special vertices of G_e with the endpoints of e. This set of combinatorial operators (cartesian product, disjoint union, set, and substitution) appears frequently in modern theories of combinatorial enumeration, and it is beyond the scope of this work to survey them. For a very detailed description and numerous applications we refer to [6] and references therein.

2. Networks
Biconnected Graphs & Networks.
Let B be a class of labeled biconnected graphs. In the remainder we assume that the graph consisting of a single edge is in B. Before we study the typical properties of a graph drawn uniformly at random from B_n, that is, the subclass of B consisting of graphs on n vertices, let us introduce an auxiliary graph class that plays an important role in the decomposition of biconnected graphs. Following Trakhtenbrot [12] and Tutte [13], we define a network as a connected graph with two "special" vertices, called the left pole and the right pole, such that adding the edge between the poles makes the resulting (multi-)graph biconnected. The remaining vertices of a network are called labeled vertices. Following standard notation, we will assume that in the egf of a class of networks the parameter x always marks the number of labeled vertices.

The above description provides us with an explicit relation between the class B and the (corresponding) class of networks N, which we shall describe now. Let B⃗ be the class containing ordered pairs (B \ e, e) where B ∈ B and e ∈ E(B), that is, an edge of B is distinguished and removed. Note that each B⃗ ∈ B⃗_n, except for the one whose underlying graph is a single distinguished edge, gives rise to two networks with n − 2 labeled vertices: one is obtained by turning the endpoints of the distinguished edge into poles (removing their labels and relabeling so that only the labels {1, …, n − 2} appear), and the other is obtained by adding the distinguished non-edge to the underlying graph of B⃗, removing the labels and performing a relabeling as before. Moreover, the single distinguished edge gives rise to the network that consists of two isolated poles. With X denoting the class consisting of a single labeled isolated vertex, and e being the network consisting of a single edge, this implies that the classes B and N are, due to the definition of N, related through

(2.1) (N + 1) × X² = (1 + e) × B⃗.

Note that this immediately translates to the relation ∂B(x, y)/∂y = x²(N(x, y) + 1)/(2(1 + y)), which is the well-known dependency linking the egf's for B and (the associated) N; see also e.g. [14].
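Class relations such as (2.1) are translated into egf identities via the standard dictionary for labeled classes: disjoint union becomes a sum of egf's, the labeled product becomes a product, and Set_{≥k} becomes a truncated exponential. The following tiny sanity check of this dictionary is our own illustration (the class names are ours, not the paper's):

```python
# Sketch (ours, not from the paper): the egf dictionary for labeled classes,
# checked on minimal examples with sympy.
import sympy as sp

x = sp.symbols('x')

def count(egf, n):
    """Number of labeled structures on n vertices: n! * [x^n] egf."""
    return sp.factorial(n) * egf.series(x, 0, n + 1).removeO().coeff(x, n)

X = x                         # a single labeled vertex, egf x
pair = X * X                  # labeled product X x X <-> product of egfs
set_ge2 = sp.exp(X) - 1 - X   # Set_{>=2}(X) <-> exp minus the small sets

# Two labels split into an ordered pair of atoms: 2 ways.
assert count(pair, 2) == 2
# Exactly one unordered set on every ground set of size >= 2, none below.
assert count(set_ge2, 1) == 0
assert count(set_ge2, 2) == 1 and count(set_ge2, 5) == 1
```

The same bookkeeping, with y marking edges, underlies the passage from (2.1) to the derivative relation for B and N.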
Our Setting.
The main setting in this work is the following. We will begin with a class of networks N, which fulfills certain closure properties that are described below. This, combined with (2.1), defines a class of biconnected graphs B. Those B are precisely the classes that we will consider here. The following statement, along with our probability estimates, implies that structural properties of random networks that hold asymptotically almost surely translate into a.a.s. properties of random biconnected graphs. Consequently, in the remainder of the paper we only need to consider random networks. Let P denote a graph property, that is, a set of graphs which is closed under automorphisms. With a slight abuse

Figure 1. Series and parallel networks.
of terminology, we say that a network N has a property P if the graph that is obtained by putting labels on the poles has property P. The proof can be found in Section 8.1.

Proposition 2.1.
Let N be a class of networks and let B be the class of the corresponding biconnected graphs. Moreover, let B_n be a uniform random graph from B_n, and N_n a network that is drawn uniformly at random from N_n. Suppose that P(N_{n−2} ∈ P) ≥ 1 − f(n − 2), where P is any property of graphs that is closed under automorphisms. Then P(B_n ∈ P) ≥ 1 − κ n f(n − 2), where κn is the maximum number of edges in a graph in B_n.

With the above facts at hand we are ready to define the classes of networks that we will consider. Let T denote a class of labeled 3-connected graphs that is closed under automorphisms. Following [12, 13], we define a class of networks with respect to T, denoted by N(T), inductively as follows (see also Figures 1 and 2).

A network in N(T) is either an edge, whose endvertices are the poles, or is of type S (series network), or of type P (parallel network), or of type H (core network).

Series networks: A network of type S consists of two networks N_1 and N_2, such that the right pole of N_1 is identified with the left pole of N_2. Here, N_1 is restricted to be either an edge, or of type P or H, and N_2 ∈ N(T).

Parallel networks: A network of type P consists either of an edge and a non-empty set of networks, each of type S or of type H, whose right poles (left poles) are identified into a single right pole (left pole), or of a set of at least two networks, each of type S or of type H, where the identification of the poles is as before.

Core networks:
Let T̄ be the class of networks which are created by taking any graph in T, deleting an edge, and then turning its endvertices into poles. A network of type H consists of a network from T̄, where each edge is replaced by a network whose poles are identified in a unique way with the endvertices of that edge.

In any of the above cases, we perform the necessary relabeling in case there are conflicts when two or more networks are joined through one of the above operations. With the above notation at hand, we can now introduce formally the classes of networks that we are interested in.
Figure 2.
A core network whose core is K_4.

Definition 2.2.
We say that N(T) is α-nice if the following conditions hold.

(A) For all y > 0, T̄(x, y) is analytic in C or has a unique dominant singularity at ρ_T̄(y) > 0 and admits a local singular expansion of the form

(2.2) T̄(x, y) = Σ_{k≥0} t_k(y) (1 − x/ρ_T̄(y))^{k/m},

where m ∈ N and the t_k(y) are analytic functions. Furthermore, there exist δ, ε > 0 such that T̄(x, y) is analytic in a domain ∆ = ∆(δ, ε) = {z : |z| < ρ_T̄(y) + δ, |arg(z − ρ_T̄(y))| > ε}. Moreover, the smallest integer k such that t_k(y) ≢ 0 and k/m ∉ N satisfies k/m = α. Also, the function ρ_T̄(y) is strictly decreasing and twice continuously differentiable.

(B) For y in a neighborhood of 1, let ρ_N(y) be the radius of convergence of the egf enumerating N(T) with respect to vertices and edges. Then

−ρ″_N(1)/ρ_N(1) − ρ′_N(1)/ρ_N(1) + (ρ′_N(1)/ρ_N(1))² ≠ 0.

The assumptions in the definition above are satisfied by several classes of graphs that have been studied in the literature, e.g. by series-parallel, planar, or K_{3,3}-minor-free graphs. However, we believe that assumption (B) is redundant, in the sense that it follows from (A), but we are unfortunately unable to show this. The remaining conditions are minimal in the context of analytic combinatorics, as they form the "backbone" of essentially any counting result or limit theorem that can be derived.

For notational convenience and in slight abuse of notation we will write T(x, y) for the egf T̄(x, y). With all the above definitions at hand we are now ready to present our main results. Given a network N, we denote by C(N) the number of vertices in a largest core of N. If ℓ is a positive integer and N is a network, we denote by c(ℓ; N) the number of cores in N which have ℓ vertices. More generally, if ξ ≥
1, we denote by c(ℓ, ξℓ; N) the number of cores in N whose number of vertices is at least ℓ and at most ξℓ. Set

(2.3) Φ(x, y, z) = T(x, z) − log((1 + z)/(1 + y)) + xz²/(1 + xz).

An important property of this function is that it defines implicitly the egf enumerating networks through the relation Φ(x, y, N(x, y)) = 0; see Lemma 3.2. Our results imply that the sign of (∂/∂z)Φ(ρ_N, 1, N) =: λ dictates whether a random network has only at most logarithmically-sized cores (λ > 0) or a giant core (λ < 0). For u ∈ {x, y, z}, we shall denote in the remainder by Φ_u(x, y, z) the partial derivative of Φ(x, y, z) with respect to u. Moreover, we denote by N_n a random network from N_n throughout and without further reference. In the case λ > 0 we have the following.

Theorem 2.3.
Let N(T) be an α-nice class for some α ∈ R \ N, and let δ, ε > 0. Assume that Φ_z(ρ_N, 1, N) > 0 and set τ = ρ_N/ρ_T(N). Then the following hold a.a.s.

(i) For all 4 ≤ ℓ ≤ (1 − δ) log_{1/τ} n we have c(ℓ; N_n) ∈ (1 ± ε)c_ℓ n, where c_ℓ is given in Lemma 7.1.
(ii) c((1 − ε) log_{1/τ} n, log_{1/τ} n; N_n) ≤ n^ε.
(iii) The largest core in N_n contains at most (5/δ) log_{1/τ} n vertices.

Moreover, for large ℓ, Equation (7.4) provides an asymptotic expression for c_ℓ.
Our next theorem deals with the case λ <
0, where it is shown that the corresponding random networks have a.a.s. a unique giant core.
Theorem 2.4.
Let N(T) be an α-nice class for some α ∈ R \ N, and let δ, ε > 0. Assume that Φ_z(ρ_N, 1, N) < 0. Let ω(n) be a function such that ω(n) → ∞ as n → ∞. Then the following hold a.a.s.

(i) Let γ_T > 0 be as in Lemma 7.4. Then C(N_n) ∈ (1 ± ε)γ_T n.
(ii) For all (nω(n))^{1/α} ≤ ℓ < C(N_n) we have c(ℓ; N_n) = 0.
(iii) For all 4 ≤ ℓ ≤ (n/(ω(n) log n))^{1/(α+1)} we have c(ℓ; N_n) ∈ (1 ± ε)c_ℓ n, where c_ℓ is given in Lemma 7.1.
(iv) Let ξ ≥ 1. If 4 ≤ ℓ ≤ (n/(ω(n) log n))^{1/α}, then c(ℓ, ξℓ; N_n) ∈ (1 ± ε)c_{ℓ,ξ} n, where c_{ℓ,ξ} is given in Lemma 7.6.

Moreover, for large ℓ, Equations (7.5) and (7.7) provide asymptotic expressions for c_ℓ and c_{ℓ,ξℓ}.

Recently, Giménez, Noy and Rué [8] obtained independently results regarding the asymptotic distribution of the largest core in a random planar graph. Their methods are completely different and are based on techniques that are widely used in analytic combinatorics. However, in the present work, we obtain a fairly precise picture of the typical structure of a random biconnected graph whose 3-connected cores are taken from various general families, and we perform a very precise census of the smaller cores in all possible analytic regimes.

Apart from the structural results concerning the sizes of the cores of N_n, we also obtain a limit law for the number of edges of N_n.

Theorem 2.5.
Let N(T) be an α-nice class of networks with respect to a class T, and let N_n be a random network with n vertices. Then e(N_n) satisfies a central limit theorem, where the expected value is −(ρ′_N(1)/ρ_N(1)) n + o(n) and the variance is proportional to n. Moreover, there exist ε₀ > 0 and C > 0 such that for all 0 ≤ ε < ε₀ we have

P(|e(N_n) − E(e(N_n))| > ε E(e(N_n))) ≤ exp(−Cε²n).

3. Preliminaries
Boltzmann Sampling.
The above decomposition of networks can be expressed in terms of certain combinatorial operators, which are omnipresent in modern theories of enumeration as well as random generation of combinatorial structures. We refer the reader to the book by Flajolet and Sedgewick [6] for a detailed exposition of these techniques. Duchon et al. [5] have shown that the above operations can be used to design
Boltzmann samplers. If γ is a graph, we denote by e(γ) and v(γ) the number of edges and the number of labeled vertices of γ, respectively. A Boltzmann sampler for G is a randomized algorithm that generates any γ ∈ G with probability x^{v(γ)} y^{e(γ)}/(G(x, y) v(γ)!). Note that if we set y = 1 and we condition on the order of the output graph being n, then the distribution of the output graphs is the uniform distribution on the set of graphs of order n.

3.2. Networks.
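Before turning to networks, the Boltzmann distribution just defined can be made concrete on a toy class (our illustration, not one of the paper's classes): for Set(X), sets of labeled atoms with egf G(x, 1) = e^x, there is exactly one object per size, so the size of the sampler's output is Poisson(x) distributed, and conditioning on size n is trivially uniform.

```python
# Toy illustration (ours): under the Boltzmann model with parameter x, a
# structure gamma is drawn with probability
#   x^{v(gamma)} y^{e(gamma)} / (G(x, y) * v(gamma)!).
# For Set(X) there is one edge-free structure per size, so with y = 1 the
# output size n has probability x^n / (n! * e^x), i.e. Poisson(x).
import math

def boltzmann_size_pmf(x, n):
    """P(v(gamma) = n) for the class Set(X): x^n / (n! * e^x)."""
    return x ** n / (math.factorial(n) * math.exp(x))

x = 1.7
pmf = [boltzmann_size_pmf(x, n) for n in range(60)]
assert abs(sum(pmf) - 1.0) < 1e-12            # a probability distribution
mean = sum(n * p for n, p in enumerate(pmf))
assert abs(mean - x) < 1e-9                    # Poisson mean x = x G'(x)/G(x)
```

The identity E[v(γ)] = x G′(x)/G(x) visible here is what is tuned, for general classes, when the Boltzmann parameter is chosen so that the expected output size is n.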
The decomposition of networks in Section 2 implies directly the following statement.
Lemma 3.1.
Let e denote the class of networks that consist of precisely one edge, and let X denote the class which consists of the graph with one labeled vertex. Moreover, let N be an α-nice class of networks with 3-connected cores from the class T. Then the classes N, S, P, H satisfy the following relations:

N = e + S + P + H,
S = (e + P + H) × X × N,
P = e × Set_{≥1}(S + H) + Set_{≥2}(S + H),
H = T ◦_e N.

Using this fact we immediately obtain the following relations for the corresponding exponential generating functions. This result was shown, among others, in [12, 14].
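Once translated into egf's (Lemma 3.2 below), these relations can be checked symbolically to be consistent with our reconstruction of Φ in (2.3). A sympy sketch (all variable names ours; T is kept as an unevaluated symbol standing for T(x, N)):

```python
# Consistency check (ours): eliminating S, P, H from the egf system of
# Lemma 3.2 should yield Phi(x, y, N) = 0 with the reconstructed
#   Phi(x, y, z) = T(x, z) - log((1+z)/(1+y)) + x z^2 / (1 + x z).
import sympy as sp

x, y, N, T = sp.symbols('x y N T', positive=True)

# From S = (y + P + H) x N and N = y + S + P + H one gets
# y + P + H = N - S, hence S = (N - S) x N, i.e. S = x N^2 / (1 + x N).
S = x * N**2 / (1 + x * N)
H = sp.log((1 + N) / (1 + y)) - S      # i.e. S + H = log((1+N)/(1+y))
P = (1 + y) * (sp.exp(S + H) - 1) - S - H

# The first equation of the system: N = y + S + P + H.
assert sp.simplify(y + S + P + H - N) == 0

# With T standing for H(x, N) = T(x, N), this is exactly Phi(x, y, N) = 0.
Phi = T - sp.log((1 + N) / (1 + y)) + x * N**2 / (1 + x * N)
assert sp.simplify(Phi.subs(T, H)) == 0
```

In words: the series and parallel equations force 1 + N = (1 + y)e^{S+H}, and substituting S = xN²/(1 + xN) and H = T(x, N) gives the implicit equation for N stated after Lemma 3.2.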
Lemma 3.2.
The exponential generating functions enumerating networks satisfy

(3.1)
N(x, y) = y + S(x, y) + P(x, y) + H(x, y),
S(x, y) = (y + P(x, y) + H(x, y)) x N(x, y),
P(x, y) = (1 + y)(e^{S(x,y)+H(x,y)} − 1) − S(x, y) − H(x, y),
H(x, y) = T(x, N(x, y)).

Moreover, N(x, y) satisfies the equation Φ(x, y, N(x, y)) = 0, where Φ is as in (2.3).

4. Singularity Analysis for Networks
A typical situation that is commonly encountered in modern scenarios of asymptotic enumeration is the following (see also [6]): we want to determine the precise asymptotic behavior of the coefficients of an unknown generating function F(x), which is given implicitly by some functional equation G(x, F(x)) = 0. Of course, only in very seldom cases is it possible to obtain F(x) explicitly, and generally we have to resort to a completely different "program" to extract enough information from the given functional equation, so as to be able to understand the behavior of [x^n]F(x).

The fundamental principle behind this program is that knowledge of the behavior of an analytic function in the vicinity of its dominant singularities gives us very precise information about the asymptotics of its coefficients. So, we have to accomplish two tasks: first, to determine the dominant singularities of F(x), and second, to find the asymptotic expansion of our function near those singular points. The main objective of this section is to accomplish this program for the classes of α-nice networks.

According to Lemma 3.2 the function N(x, y) is given by the solution of Φ(x, y, N(x, y)) = 0, where Φ is as in (2.3). Let y > 0. Since N(x, y) has only non-negative coefficients, by applying Pringsheim's Theorem (see e.g. [6]) we infer that if the radius of convergence of N(x, y) is ρ_N(y), then ρ_N(y) is a singularity of N(x, y).

Note that Φ(x, y, z) is not analytic at points that satisfy x > ρ_T(z), as T(x, z) ceases to be analytic there. Hence, we always have that ρ_N(y) ≤ ρ_T(N(y)). Moreover, suppose that for some pair (x′, z′) such that x′ < ρ_T(z′) we have Φ(x′, y, z′) = 0 and simultaneously Φ_z(x′, y, z′) ≠ 0. Then, by the Implicit Function Theorem (see e.g. [6]), Φ is locally invertible, which means that there exists an analytic continuation of N(x, y) around x = x′, and N(x′, y) = z′.
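For concreteness, here is our own toy computation (not one of the paper's α-nice classes, since it has no cores at all): take T(x, z) ≡ 0 and y = 1. Then the reconstructed Φ from (2.3) is fully explicit, and a joint zero of Φ and Φ_z, the branch point relevant to the subcritical scenario discussed next, can be located numerically:

```python
# Numerical sketch (ours) for the toy choice T(x, z) = 0, i.e. networks built
# from series and parallel compositions only, at y = 1.  We use the
# reconstructed Phi(x,y,z) = T(x,z) - log((1+z)/(1+y)) + x z^2/(1+x z).
import math

def phi(x, z):                 # Phi(x, 1, z) with T = 0
    return -math.log((1 + z) / 2) + x * z**2 / (1 + x * z)

def phi_z(x, z):               # d/dz Phi(x, 1, z), with T = 0
    return -(1 - x * z**2 * (2 + x * z)) / ((1 + z) * (1 + x * z)**2)

def x_on_branch(z):
    # Phi_z = 0  <=>  x^2 z^3 + 2 x z^2 - 1 = 0; take the positive root in x.
    return (math.sqrt(z**2 + z) - z) / z**2

# Bisection on F(z) = Phi(x_on_branch(z), z): F(1) > 0 > F(2).
lo, hi = 1.0, 2.0
for _ in range(80):
    mid = (lo + hi) / 2
    if phi(x_on_branch(mid), mid) > 0:
        lo = mid
    else:
        hi = mid
z_star = (lo + hi) / 2
rho = x_on_branch(z_star)      # candidate dominant singularity for this toy
assert abs(phi(rho, z_star)) < 1e-10 and abs(phi_z(rho, z_star)) < 1e-10
assert 0 < rho < 1
```

This is exactly the computation pattern of the subcritical case below: solve Φ_z = 0 for x as a function of z, then intersect with Φ = 0.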
We can therefore obtain information about ρ_N(y) by studying the distribution of the zeros of Φ_z(x, y, z), i.e., the points around which Φ fails to be invertible. It turns out that the precise localization and nature of the singularity of N(x, y) is dictated by one of the following analytic conditions:

• Subcritical case: We have Φ_z(ρ_N(y), y, N(y)) = 0. In this case, Φ has a branch point at (ρ_N(y), y, N(y)). Moreover, if Φ is analytic at (ρ_N(y), y, N(y)), then it readily follows that ρ_N(y) < ρ_T(N(y)). We further show in Lemma 4.2 that under certain mild assumptions the singularity of N(x, y) is of square-root type.

• Supercritical case: We have Φ_z(ρ_N(y), y, N(y)) ≠ 0. Here, the function Φ is not analytic at (ρ_N(y), y, N(y)), and inevitably we have that ρ_N(y) = ρ_T(N(y)). In this setting, we determine the conditions (see Theorem 4.3 for a precise statement) under which the singularity type of the implicitly given function is "inherited" from the singularity of the implicit function.

The following proposition will become very handy when we study the set of zeros of Φ_z.

Proposition 4.1.
Let y > 0 and suppose that a dominant singularity of N(x, y) is given by ρ_N(y). Let ρ_θ = ρ_N(y)e^{iθ}. Then, for any 0 < θ < 2π,

|Φ_z(ρ_θ, y, N(ρ_θ, y))| > −Φ_z(ρ_0, y, N(ρ_0, y)).
A simple calculation shows that

Φ_z(x, y, z) = T_y(x, z) − (1 − xz²(2 + xz))/((1 + z)(1 + xz)²).

Let us abbreviate N(ρ_θ, y) = N_θ. By applying the inequality |a − b| ≥ |b| − |a|, which is valid for all complex numbers a, b, we obtain

|Φ_z(ρ_θ, y, N(ρ_θ, y))| ≥ |(1 − ρ_θN_θ²(2 + ρ_θN_θ))/((1 + N_θ)(1 + ρ_θN_θ)²)| − |T_y(ρ_θ, N(ρ_θ, y))|.

As T(x, y) and N(x, y) have only non-negative coefficients, and T is aperiodic, the triangle inequality implies that

|T_y(ρ_θ, N_θ)| < T_y(|ρ_θ|, |N(ρ_θ, y)|) ≤ T_y(|ρ_θ|, N(|ρ_θ|, y)) = T_y(ρ_0, N_0).

Similarly, we obtain that

|1 − ρ_θN_θ²(2 + ρ_θN_θ)| ≥ 1 − |ρ_θN_θ²(2 + ρ_θN_θ)| ≥ 1 − ρ_0N_0²(2 + ρ_0N_0),

and

|(1 + N_θ)(1 + ρ_θN_θ)²| ≤ (1 + N_0)(1 + ρ_0N_0)².

The proof is completed by putting everything together. □
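The closed form of Φ_z used in the proof can be verified mechanically from our reconstruction of (2.3); the T-part differentiates to T_y(x, z) and is kept aside, so only the explicit rational-logarithmic part needs checking:

```python
# Verification (ours) of the rational term in Phi_z: differentiating
#   -log((1+z)/(1+y)) + x z^2/(1+x z)
# in z must give -(1 - x z^2 (2 + x z)) / ((1+z)(1+x z)^2).
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)
phi_no_T = -sp.log((1 + z) / (1 + y)) + x * z**2 / (1 + x * z)
claimed = -(1 - x * z**2 * (2 + x * z)) / ((1 + z) * (1 + x * z)**2)
assert sp.simplify(sp.diff(phi_no_T, z) - claimed) == 0
```

The analogous differentiations in x and twice in z yield the formulas for Φ_x and Φ_zz used in the proof of Lemma 4.2.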
The remainder of this section deals separately with the subcritical and the critical case.
The Subcritical Case.
Our result in this section says that if Φ ceases to be invertible at some point inside its domain of analyticity, then the egf enumerating networks has a singularity of square-root type.
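Before stating the lemma, the mechanism can be seen on a one-variable toy equation (a hypothetical stand-in for Φ, not the network equation itself): for Φ(x, z) = 1 + xz² − z, the solution branch with z(0) = 1 is z(x) = (1 − √(1 − 4x))/(2x), and along it Φ_z = 2xz − 1 = −√(1 − 4x). Hence Φ_z vanishes exactly at the branch point x_0 = 1/4, where z(x) has a square-root singularity:

```python
import math

def z_of_x(x, iters=20000):
    # Solve the toy equation Phi(x, z) = 1 + x*z**2 - z = 0 for the branch
    # with z(0) = 1, by fixed-point iteration z <- 1 + x*z**2
    # (a contraction for 0 <= x < 1/4).
    z = 1.0
    for _ in range(iters):
        z = 1.0 + x * z * z
    return z

for x in (0.1, 0.2, 0.249):
    z = z_of_x(x)
    closed_form = (1.0 - math.sqrt(1.0 - 4.0 * x)) / (2.0 * x)
    assert abs(z - closed_form) < 1e-9
    # Phi_z = 2*x*z - 1 equals -sqrt(1 - 4x) along the branch: it tends to 0
    # as x -> 1/4, which is the branch-point condition of the subcritical case.
    assert abs((2.0 * x * z - 1.0) + math.sqrt(1.0 - 4.0 * x)) < 1e-8
```

Lemma 4.2 below is a multivariate version of this phenomenon for the actual function Φ.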
Lemma 4.2.
Let Φ(x, y, z) be as in (2.3), and let Y be a compact subset of (0, +∞). Suppose that for any y_0 ∈ Y there exist a minimal N_0 > 0 and a minimal 0 < x_0 < ρ_T(N_0) such that
(4.1) Φ(x_0, y_0, N_0) = 0 and Φ_z(x_0, y_0, N_0) = 0.
Then there exists an ε > 0 such that N(x, y) admits a local representation
N(x, y) = g(x, y) − h(x, y) √(1 − x/ρ_N(y)),
for y ∈ Y, |x − ρ_N(y)| < ε and |arg(x − ρ_N(y))| ≠ 0, where g(x, y), h(x, y) and ρ_N(y) are analytic around x = x_0 and y = y_0, and ρ_N(y_0) = x_0 and g(x_0, y_0) = N_0. Moreover, for any y ∈ Y, N(x, y) is analytic in a domain ∆(ρ_N(y)).

Remark. Proofs of similar statements can be found in the literature, e.g. for the egf enumerating series-parallel networks [2]. However, the proof of the above claim requires more work: the function Φ turns out to have in general negative coefficients, which makes the application of standard tools (see e.g. [4]) for this purpose impossible. Nevertheless, the proof presented here makes more or less use of standard techniques, and we include it for completeness.
Proof.
Let us first collect some basic properties of Φ. As T has non-negative coefficients, for any x, y ≥ 0 and z > 0,
Φ_zz(x, y, z) = T_zz(x, z) + (1 + 2x + 7xz + 2xz² + 3x²z² + x³z³) / ((1 + z)²(1 + xz)³) > 0,
and
(4.2) Φ_x(x, y, z) = T_x(x, z) + z²/(1 + xz)² > 0.
As Φ(x_0, y_0, N_0) = Φ_z(x_0, y_0, N_0) = 0 and Φ_zz(x_0, y_0, N_0) ≠ 0, the Weierstrass Preparation Theorem guarantees the existence of a function H(x, y, z), which is analytic around (x_0, y_0, N_0) and satisfies H(x_0, y_0, N_0) ≠ 0, and of two functions p(x, y), q(x, y) that are analytic around (x_0, y_0) and satisfy p(x_0, y_0) = q(x_0, y_0) = 0, such that
(4.3) Φ(x, y, z) = H(x, y, z) ((z − N_0)² + p(x, y)(z − N_0) + q(x, y)).
So, every function that satisfies Φ(x, y, N(x, y)) = 0 is given in a neighborhood of (x_0, y_0) by one of the two functions
N_{1,2}(x, y) = N_0 − p(x, y)/2 ± √(p(x, y)²/4 − q(x, y)).
Note that p(x_0, y_0)²/4 − q(x_0, y_0) = 0. Moreover, by differentiating both sides of (4.3) with respect to x we obtain that q_x(x_0, y_0) ≠ 0, as Φ_x(x_0, y_0, N_0) ≠ 0. Hence ∂_x(p²/4 − q)(x_0, y_0) ≠ 0, and again the Preparation Theorem guarantees the existence of a function K(x, y), which is analytic around (x_0, y_0) with K(x_0, y_0) ≠ 0, and of a function r(y), which is analytic around y_0 with r(y_0) = 0, such that
p(x, y)²/4 − q(x, y) = K(x, y)(x − x_0 + r(y)).
By putting everything together we obtain that N has a local representation around (x_0, y_0) of the kind
N(x, y) = g(x, y) ± h(x, y) √(1 − x/ρ_N(y)),
where g, h are analytic around (x_0, y_0) and g(x_0, y_0) = N_0, and ρ_N(y) is analytic around y_0 with ρ_N(y_0) = x_0. Moreover, we choose h without loss of generality such that h(x_0, y_0) > 0. It remains to argue that the "−"-sign above is the right choice. Denote the solution with the "−"-sign by N_1, and the other one by N_2.
Note that by the minimality of x_0 and N_0 we have Φ_z(x, y_0, N) ≠ 0 for all 0 ≤ x < x_0 and 0 ≤ N < N_0 with Φ(x, y_0, N) = 0, which implies that there is a unique analytic function that satisfies Φ(x, y_0, N(x, y_0)) = 0 around 0. Moreover, note that the function N_1 is increasing as x approaches x_0 from the left, while N_2 is decreasing. To show that the increasing branch is the function we are looking for, we show that there is an analytic curve that connects N_1 to the function that is analytic around x = 0.

We show the claim by contradiction. Let y_0 ∈ Y. First, suppose that there is a 0 < x̃ < x_0 such that N(x̃, y_0) = N_0. Then Φ(x̃, y_0, N_0) = Φ(x_0, y_0, N_0) = 0, which is a contradiction, as Φ is strictly monotone with respect to its first variable, due to (4.2). On the other hand, suppose that there is an Ñ < N_0 such that N(x_0, y_0) = Ñ. Note that Φ_z(x_0, y_0, 0) = −1 < 0, and due to the minimality of N_0, for all 0 < N < N_0 we have that Φ_z(x_0, y_0, N) < 0. This again contradicts the assumption Φ(x_0, y_0, Ñ) = Φ(x_0, y_0, N_0). In other words, we have shown that the solution of Φ(x, y_0, N(x, y_0)) = 0 never "leaves" the rectangle {(x, N) | 0 ≤ x ≤ x_0, 0 ≤ N ≤ N_0}, thus implying that the increasing branch analytically continues the solution around 0.

To complete the proof we will show that N(x, y) is analytic in a ∆-domain. By applying Proposition 4.1 we obtain for any ρ_θ = ρ_N(y)e^{iθ}, where 0 < θ < 2π, that
|Φ_z(ρ_θ, y, N(ρ_θ, y))| > −Φ_z(ρ_0, y, N(ρ_0, y)) = 0.
Thus, the Implicit Function Theorem shows that there is an analytic continuation of N(x, y) around ρ_θ for every 0 < θ < 2π. This completes the proof. □

The Critical Case & Transfer of Singular Expansions.
The following result provides us with a local expansion of a function f that is given implicitly by a functional equation f = G(x, y, f). In contrast to the previous result, here we handle general functions G at points where they are not analytic, thus extending a result from [4], where only the case of singular behavior of type 3/2 is treated.

Theorem 4.3. Suppose that G(x, y, f) has a local representation of the form
G(x, y, f) = g(x, y, f) + h(x, y, f) (1 − f/r(x, y))^{k/m},
where k, m ∈ N with 0 < k/m ∉ N, the functions g, h, r are analytic around (x_0, y_0, f_0), and satisfy
g_f(x_0, y_0, f_0) ≠ 1, h(x_0, y_0, f_0) ≠ 0, r(x_0, y_0) = f_0 ≠ 0, r_x(x_0, y_0) ≠ g_x(x_0, y_0, f_0).
Suppose that f = f(x, y) is a solution of the functional equation f = G(x, y, f) such that f(x_0, y_0) = f_0. Then f admits a local representation of the form
(4.4) f(x, y) = g̃(x, y) + h̃(x, y) (1 − x/ρ(y))^{k/m},
where g̃, h̃, ρ are analytic at (x_0, y_0), and h̃(x_0, y_0) ≠ 0 and ρ(y_0) = x_0.

The next lemma takes care of the cases in which Φ is not analytic at (ρ_N(y), y, N(ρ_N(y), y)).

Lemma 4.4.
Let Φ(x, y, z) be as in (2.3), and let Y be a compact subset of (0, +∞). Suppose that for all y_0 ∈ Y and all N_0 > 0 and 0 < x_0 ≤ ρ_T(N_0) such that Φ(x_0, y_0, N_0) = 0 it holds that Φ_z(x_0, y_0, N_0) ≠ 0. Then, if T(x, y) admits a local representation of the form (2.2) and is analytic in a ∆-domain, there exists an ε > 0 such that N(x, y) admits a local representation
N(x, y) = g(x, y) + h(x, y) (1 − x/ρ_N(y))^{k/m},
for y ∈ Y, |x − ρ_N(y)| < ε and |arg(x − ρ_N(y))| ≠ 0, where g(x, y), h(x, y) and ρ_N(y) are analytic around x = x_0 and y = y_0, and g(ρ_N(y), y) = N(ρ_N(y), y). Moreover, for any y ∈ Y, N(x, y) is analytic in a domain ∆(ρ_N(y)).

Proof. First of all, note that the assumption on T(x, y) implies that there are two functions g(x, y, z) and h(x, y, z), which are analytic at (x_0, y_0, N_0), such that locally
Φ(x, y, z) = g(x, y, z) + h(x, y, z) (1 − x/ρ_T(z))^{k/m}.
Our aim is to apply Theorem 4.3 in order to determine a local expansion for N(x, y), which is implicitly defined by Φ(x, y, N(x, y)) = 0. To achieve this, we shall first show that in the above expression we can "switch" between local expansions in terms of x and z. Moreover, we will then check the remaining conditions of Theorem 4.3.

Recall that ρ_T is a strictly decreasing function that attains only positive values. Hence we have that ρ_T(N_0) ≠ 0 and ρ'_T(N_0) ≠ 0. The Weierstrass Preparation Theorem implies that there is a function H(x, z), which is analytic at (x_0, N_0) with H(x_0, N_0) ≠ 0, such that
ρ_T(z) − x = H(x, z)(r(x) − z),
where r(x) is the analytic inverse function of ρ_T in a neighborhood of x_0. Hence we infer that there is an analytic function h̃(x, y, z) such that
Φ(x, y, z) = g(x, y, z) + h̃(x, y, z) (1 − z/r(x))^{k/m}.
It is now routine to check from our assumptions the remaining preconditions of Theorem 4.3. Hence, we obtain the claimed local representation for N.

To complete the proof we show that N is analytic in a ∆-domain. For the sake of contradiction, suppose that there is another singularity on the circle of convergence of N(x, y), and write ρ_θ = ρ_N(y)e^{iθ}, where 0 < θ < 2π. Note that Φ is analytic at (ρ_θ, y, N(ρ_θ, y)), because T is analytic there. So, we must have Φ_z(ρ_θ, y, N(ρ_θ, y)) = 0, as otherwise the Implicit Function Theorem would guarantee the existence of an analytic continuation of N around ρ_θ. Then, by applying Proposition 4.1 we obtain
(4.5) 0 = |Φ_z(ρ_θ, y, N(ρ_θ, y))| > −Φ_z(ρ_0, y, N(ρ_0, y)).
Note that Φ(0, y, y) = 0, and hence N(0, y) = y (this follows also from the definition of networks, as there is precisely one network that consists of a single edge). A straightforward calculation shows that Φ_z(0, y, y) = −1/(1 + y) < 0. Hence, due to our assumptions, we have for all 0 ≤ x ≤ ρ_N(y) that Φ_z(x, y, N(x, y)) < 0, which implies that (4.5) is a contradiction. □
Proof of Theorem 2.5.
Let y be in some fixed complex neighborhood of 1. By combining Lemma 4.2 with Lemma 4.4 we obtain that there are analytic functions g, h such that locally around ρ_N(y)
(4.6) N(x, y) = g(x, y) + h(x, y) (1 − x/ρ_N(y))^r,
where r = 1/2 if Φ_z(ρ_N(y), y, N(y)) > 0, and r = α if Φ_z(ρ_N(y), y, N(y)) < 0. By applying the Transfer Theorem (see Corollary VI.1 in [6]) we obtain that the probability generating function p_n(u) for the number of edges in N_n satisfies
p_n(u) = (h(ρ_N(u), u)/Γ(−r)) n^{−r−1} ρ_N(u)^{−n} (1 + o(1)).
The proof finishes by applying the Quasi-Powers Theorem (see Theorem IX.8 in [6]), and by exploiting Property (B) in the definition of nice classes of networks. □

5. Sampling Networks
Although the existence of Boltzmann samplers for several classes of networks is guaranteed by the work of Duchon et al. [5], we shall nevertheless describe them explicitly in this section. This will not only allow us to introduce some necessary notation that will be used in the rest of the paper, but it will also make it possible to relate the number of 3-cores in networks to certain random decisions performed by the samplers. We will make such statements more precise in Section 5.2.
5.1. Boltzmann Samplers for Networks.
Using the rules that are described in Section 3.1 we construct the Boltzmann samplers for several classes of networks. We begin with the class N. Note that any network in N is either an edge, or an S-network, or a P-network, or an H-network. Hence, a Boltzmann sampler for N will call a sampler for a subclass with probability proportional to the value of the generating function of this subclass. More precisely, we say that a variable X is network-distributed with parameters x and y, X ∼ Net(x, y), if its domain is the set of symbols Ω_Net = {e, S, P, H} and for any s ∈ Ω_Net it holds that P(X = s) = s(x, y)/N(x, y). Then the sampler for N can be described concisely as follows.

Γ_N(x, y):
  s ← Net(x, y)
  return Γ_s(x, y)

Next we describe the sampler for S. For two networks N_1 and N_2 we will write N_1 ⊕ N_2 for the network that is obtained by identifying the right pole of N_1 with the left pole of N_2. Recall that S = (e + P + H) × N. Hence, Γ_S(x, y) has to choose among the classes e, P, and H with the right probability. More precisely, we say that a variable X is series-distributed with parameters x and y, X ∼ Ser(x, y), if its domain is the set of symbols Ω_Ser = {e, P, H} and for any s ∈ Ω_Ser it holds that P(X = s) = s(x, y) · x · N(x, y)/S(x, y). Then the sampler for S is given by the following procedure.

Γ_S(x, y):
  s ← Ser(x, y)
  ℓ ← Γ_s(x, y)
  r ← Γ_N(x, y)
  return ℓ ⊕ r, relabeling its vertices randomly

In the sequel we describe the sampler for P. A parallel network either consists of an edge and a set of at least one S- or H-network (= e × Set_{≥1}(S + H)), or it consists of a set of at least two S- or H-networks (= Set_{≥2}(S + H)).
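The two random primitives that the samplers rely on — drawing a class symbol with probability proportional to the value of its generating function, and drawing a truncated Poisson set size Po_{≥j} — can be sketched as follows. This is a simplified illustration with hypothetical generating-function values, not the actual values for the class N of the paper:

```python
import math
import random

def discrete(weights):
    # Choose a key with probability proportional to its weight: the dispatch
    # step of a Boltzmann sampler, where each subclass is picked with
    # probability (generating-function value of subclass) / (total).
    total = sum(weights.values())
    u = random.uniform(0.0, total)
    for key, w in weights.items():
        u -= w
        if u <= 0.0:
            return key
    return key

def poisson_at_least(j, mu):
    # Sample Po_{>=j}(mu): a Poisson(mu) variable conditioned on being >= j,
    # by sequential inversion plus rejection (fine for small j, moderate mu).
    while True:
        k, p, u = 0, math.exp(-mu), random.random()
        while u > p:
            u -= p
            k += 1
            p *= mu / k
        if k >= j:
            return k

# Placeholder generating-function values at some Boltzmann parameter.
gf = {"e": 1.0, "S": 0.4, "P": 0.25, "H": 0.1}

random.seed(0)
sample = [discrete(gf) for _ in range(10000)]
# Empirical dispatch frequency of "e" should be close to 1.0 / 1.75.
assert abs(sample.count("e") / len(sample) - 1.0 / 1.75) < 0.03
assert all(poisson_at_least(2, 1.3) >= 2 for _ in range(1000))
```

The samplers Γ_N, Γ_S, Γ_P and Γ_H below are compositions of exactly these two primitives together with recursive calls.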
This implies that Γ_P(x, y) first has to choose one of the two possibilities with the right probability, and then to sample a set (with the given lower bound on the number of elements) from S + H according to the Boltzmann distribution. Let us introduce some notation before we describe the sampler formally. We say that a variable X is parallel-distributed with parameters x and y, and write X ∼ Par(x, y), if X is Bernoulli with
P(X = 1) = y(e^{S(x,y)+H(x,y)} − 1)/P(x, y) and P(X = 0) = (e^{S(x,y)+H(x,y)} − 1 − S(x, y) − H(x, y))/P(x, y).
Finally, we say that a variable X is sh-distributed with parameters x and y, X ∼ sh(x, y), if its domain is the set of symbols Ω_sh = {S, H} and for s ∈ Ω_sh it holds that P(X = s) = s(x, y)/(S(x, y) + H(x, y)). Then Γ_P works as follows.

Γ_P(x, y):
  p ← Par(x, y)
  k ← Po_{≥2−p}(S(x, y) + H(x, y))
  for i = 1 … k
    b_i ← sh(x, y)
    p_i ← Γ_{b_i}(x, y)
  construct a network P by identifying the left and the right poles of p_1, …, p_k
  relabel randomly the vertices of P
  if p = 1 then return P with the poles joined by an edge, else return P

Finally, we describe the sampler for H. Recall that an H-network is obtained by substituting the edges of some graph from T by graphs from N. Here we will assume that we have an auxiliary sampler Γ_T(x, z), which samples graphs from T according to the Boltzmann distribution. Then the sampler for H can be described as follows.

Γ_H(x, y):
  T ← Γ_T(x, N(x, y))
  foreach edge e of T
    γ_e ← Γ_N(x, y)
    replace e in T by γ_e
  return T, relabeling its vertices randomly

This completes the definitions of all samplers that we shall exploit. From the theory developed in Duchon et al. [5] we immediately obtain the following statement.

Lemma 5.1.
Let N ∈ N be a network with n labeled vertices and m edges. Then P(Γ_N(x, y) = N) = x^n y^m/(n! · N(x, y)). The same statement is also true for networks in the classes S, P, H and the corresponding samplers given above. Using the definition of networks we readily obtain the following property.
Lemma 5.2.
Let N(T) be an α-nice class of networks with α ∈ R \ Z. Let
β = 3/2 if Φ_z(ρ_N, 1, N_0) > 0, and β = α + 1 if Φ_z(ρ_N, 1, N_0) < 0.
Then there is a b > 0 such that P(Γ_N(ρ_N, 1) ∈ N_n) ∼ b n^{−β}.

Proof. The claim follows from (4.6) for the special case y = 1 together with the Transfer Theorem (Corollary VI.1 in [6]). □

5.2. Combinatorics of the Samplers.
Although the Boltzmann samplers are randomized algorithms, we are going to analyze them as if they were deterministic algorithms. In particular, we are going to assume that all their parameters are read from lists, which contain independent random samples from the appropriate distributions.

Let ℓ_Net be an infinite list of values from Ω_Net. Similarly, ℓ_Ser, ℓ_Par and ℓ_sh denote infinite lists of values from Ω_Ser, Ω_Par and Ω_sh, respectively, and ℓ_j denotes an infinite list of integers ≥ j for j ∈ {1, 2}. Finally, let ℓ_T be a sequence of graphs from T. Note that if we choose the values in the lists according to the corresponding probability distributions (i.e., the values in ℓ_Net according to independent network-distributed variables with parameters x and y, the values in ℓ_Ser according to independent series-distributed variables, …), then the probability that the deterministic counterparts of the samplers generate a network is equal to the probability in the Boltzmann model. Using this observation, the proofs of our main results proceed according to the following schema.
(1) Relate properties of a network generated by a sampler to properties of the values in the lists ℓ_Net, ℓ_Ser, ℓ_Par, ℓ_sh, ℓ_1, ℓ_2 and ℓ_T.
(2) Show that the desired properties are observed with high probability, if the values in the lists are chosen independently according to the corresponding probability distributions.
The next two statements take care of step (1). More precisely, the following statement gives us the number of 3-cores in a network generated by Γ_N. Its proof is straightforward and therefore omitted.

Lemma 5.3. Suppose that the network N was generated by Γ_N by using the first a_T graphs in ℓ_T. Then
c(k; N) = |{1 ≤ i ≤ a_T : v(ℓ_T[i]) = k − 2}|.

The next claim gives us some general relations among the values used by the sampler, which will be very useful later. The proof can be found in Section 5.2, and can be performed by looking closely at how the sampler constructs a large network out of smaller ones. We denote by 1(E) the indicator variable for the event E, i.e., 1(E) = 1 if E occurs, and 1(E) = 0 otherwise.

Lemma 5.4.
Suppose that the network N was generated by Γ_N by using the first A_s values in ℓ_s, where s ∈ {Net, Ser, Par, sh, 1, 2, T}. Then the following statements are true.
(5.1) v(N) = A_Ser + Σ_{1≤i≤A_T} v(ℓ_T[i]),
(5.2) e(N) = Σ_{1≤i≤A_Net} 1(ℓ_Net[i] = e) + Σ_{1≤i≤A_Ser} 1(ℓ_Ser[i] = e) + Σ_{1≤i≤A_Par} 1(ℓ_Par[i] = 1),
(5.3) A_Net = 1 + A_Ser + Σ_{1≤i≤A_T} e(ℓ_T[i]),
(5.4) A_j = Σ_{1≤i≤A_Par} 1(ℓ_Par[i] = 2 − j) for j ∈ {1, 2},
(5.5) A_Par = Σ_{1≤i≤A_Net} 1(ℓ_Net[i] = P) + Σ_{1≤i≤A_Ser} 1(ℓ_Ser[i] = P).
Moreover,
(5.6) A_sh = Σ_{1≤i≤A_1} ℓ_1[i] + Σ_{1≤i≤A_2} ℓ_2[i],
(5.7) A_Ser = Σ_{1≤i≤A_Net} 1(ℓ_Net[i] = S) + Σ_{1≤i≤A_sh} 1(ℓ_sh[i] = S),
(5.8) A_T = Σ_{1≤i≤A_Net} 1(ℓ_Net[i] = H) + Σ_{1≤i≤A_Ser} 1(ℓ_Ser[i] = H) + Σ_{1≤i≤A_sh} 1(ℓ_sh[i] = H).

6. Cores In Random Networks
In our proofs we will repeatedly use the Chernoff bound on the probability that a binomially distributed random variable deviates significantly from its expected value. We will use the version which appears in [9]. In particular, let X be a binomially distributed random variable and let t > 0. Then
(6.1) P(|X − E(X)| > t) ≤ 2 exp(−t²/(2(E(X) + t/3))).
Another technical ingredient in our proof is the following lemma. It gives a Chernoff-type bound for the sum of independent Po_{≥j}(μ) variables, and its proof uses exponential generating functions. For the sake of completeness, it can be found in Section 8.3.

Lemma 6.1.
Let X_1, …, X_r be independent Po_{≥j}(μ) variables, where μ > 0 and j ∈ {1, 2}. Let Y_{r'} = Σ_{1≤i≤r'} X_i, where 1 ≤ r' ≤ r. Then, there is a C > 0 such that for any (log r)/√r ≤ ε ≤ 1 and sufficiently large r
P(∃ 1 ≤ r' ≤ r : |Y_{r'} − E(Y_{r'})| ≥ εr) ≤ e^{−Cε²r}.

In the following we denote by A_s the random variable counting the number of values from ℓ_s used by an execution of Γ_N(ρ_N(y), y), where s ∈ {Net, Ser, Par, sh, 1, 2, T}. Moreover, we write V_T and E_T for the total number of labeled vertices and edges in all cores of Γ_N(ρ_N(y), y), i.e.,
V_T = Σ_{i=1}^{A_T} v(ℓ_T[i]) = Σ_k (k − 2) c(k; Γ_N(ρ_N(y), y)), and E_T = Σ_{i=1}^{A_T} e(ℓ_T[i]).
Finally, let us denote by N the output of the Boltzmann sampler Γ_N(ρ_N, 1).

Lemma 6.2.
Let 0 < ε < 1. There is a constant C > 0 such that for sufficiently large n and any Z ∈ {A_Net, A_Ser, A_Par, V_T, E_T}
P(|Z − zn| ≤ εn | N ∈ N_n) ≥ 1 − e^{−Cε²n},
where z is the corresponding entry of α = [a_Net, a_Ser, a_Par, v_T, e_T]^T, the unique solution of the linear system
(6.2) M α = r, with r = [μ, 1, 0, 0, 0]^T and μ = −ρ'_N(1)/ρ_N(1),
whose rows encode the exact identities a_Ser + v_T = 1 and a_Net = a_Ser + e_T together with the expectation relations (6.3)–(6.8) derived in the proof below.

Proof.
First, by applying Lemma 5.4, Statements (5.1) and (5.3), we obtain that for every point in the conditional probability space "N ∈ N_n"
n = A_Ser + V_T and A_Net = 1 + A_Ser + E_T,
from which we immediately obtain that 1 = A_Ser/n + V_T/n and A_Net/n = 1/n + A_Ser/n + E_T/n. These two facts are the second and the last line of the linear system above, and thus hold with probability 1. In the remainder we argue that all other equations are true with probability at least 1 − e^{−Cε²n}. This completes the proof of the lemma with the following reasoning. The determinant of M equals
D = ρ_N N_0² + (ρ_N + 1) N_0 + 1,
where throughout N_0, S_0, H_0, P_0 denote the values of the corresponding generating functions at the point (ρ_N, 1). Now, since N(x) has only non-negative coefficients, we infer that D ≠ 0 and the proof is finished.

Let us continue with an auxiliary observation. Consider e.g. Statement (5.4) in Lemma 5.4. Our aim is to translate this statement into a high probability statement for the relation of the random variables A_j and A_Par. For this, let p_j = P(Par(ρ_N, 1) = 2 − j) and note that by applying Lemma 5.2 there is a c > 0 such that for large n
P(A_j ∉ (1 ± ε) p_j A_Par ± εn | N ∈ N_n) ≤ c n^β P(A_j ∉ (1 ± ε) p_j A_Par ± εn).
By applying (5.4) we obtain A_j = Σ_{i=1}^{A_Par} 1(ℓ_Par[i] = 2 − j). Using this, we infer that the probability of the above event is at most
c n^β P(∃ L ≥ 1 : Σ_{i=1}^{L} 1(ℓ_Par[i] = 2 − j) ∉ (1 ± ε) p_j L ± εn),
and a simple union bound together with the Chernoff bounds implies that there is a C > 0 such that
P(A_j ∉ (1 ± ε) p_j A_Par ± εn | N ∈ N_n) ≤ e^{−Cε²n}.
In other words, we have demonstrated that (5.4) translates into the statement
(6.3) A_j ∈ (1 ± ε) p_j A_Par ± εn
with probability ≥ 1 − e^{−Cε²n}. Now, with exactly the same line of reasoning we can infer from (5.2) and (5.5) that if we condition on "N ∈ N_n", with probability at least 1 − e^{−Cε²n}
(6.4) e(N) ∈ (1 ± ε)(A_Net/N_0 + (ρ_N N_0/S_0) A_Ser + ((e^{S_0+H_0} − 1)/P_0) A_Par) ± εn,
(6.5) A_Par ∈ (1 ± ε)((P_0/N_0) A_Net + (ρ_N P_0 N_0/S_0) A_Ser) ± εn.
Moreover, by applying Lemma 6.1 instead of the Chernoff bounds we infer from (5.6) that with probability at least 1 − e^{−Cε²n}
(6.6) A_sh ∈ (1 ± ε)(((S_0+H_0)e^{S_0+H_0}/(e^{S_0+H_0} − 1)) A_1 + ((S_0+H_0)(e^{S_0+H_0} − 1)/(e^{S_0+H_0} − 1 − S_0 − H_0)) A_2) ± εn.
Finally, from (5.7) and (5.8) we infer again by the Chernoff bounds that with probability at least 1 − e^{−Cε²n}
(6.7) A_Ser ∈ (1 ± ε)((S_0/N_0) A_Net + (S_0/(S_0+H_0)) A_sh) ± εn,
(6.8) A_T ∈ (1 ± ε)((H_0/N_0) A_Net + (ρ_N H_0 N_0/S_0) A_Ser + (H_0/(S_0+H_0)) A_sh) ± εn.
Moreover, Theorem 2.5 implies that there is a B > 0 such that
P(e(N) ∈ (1 ± ε)μn | N ∈ N_n) ≥ 1 − e^{−Bε²n}.
Let X⃗ = (1/n)[A_Net, A_Ser, A_Par, V_T, E_T]^T. By combining all the above facts, and using the fact that N(x, y) = y + (1 + y)(e^{S(x,y)+H(x,y)} − 1), we obtain that with probability at least 1 − e^{−C''ε²n} there is a c > 0 such that
(1 + ε) M · X⃗ ≥ r − ε · c⃗ and r + ε · c⃗ ≥ (1 − ε) M · X⃗,
where c⃗ = [c, c, c, c, c]^T. The proof is completed by elementary linear algebra, and by choosing the ε in this proof as ε/c' for a suitable c' > 0. □

Note that the above statement does not yield any information about the total number of cores in a random network with n vertices. However, with a little additional work we arrive at the following result.

Corollary 6.3.
Let 0 < ε < 1. Then, there is a C > 0 such that
P(|A_T − a_T n| ≥ ε a_T n | N ∈ N_n) ≤ e^{−Cε²a_T n},
where a_T = 2μ/T(ρ_N, N_0) and μ = −ρ'_N(1)/ρ_N(1).

Proof. By solving (6.2) we obtain explicit (but lengthy) expressions for the highly probable values of A_Net, A_Ser, and A_sh. Then, by using (6.8), we obtain the claimed statement after elementary algebraic manipulations. □

7. Proof of Theorems 2.3 and 2.4
In this section we perform the proofs of our main results. We shall denote throughout by N_n a network drawn uniformly at random from N_n, and by N a graph generated by Γ_N(ρ_N, 1).

Small Cores.
As a first application of Corollary 6.3 we prove that the counts of "small" cores in N_n are sharply concentrated around a specific value. Before we proceed, let us make an auxiliary observation that will be used several times. Let N(T) be an α-nice class of networks. Let B_n ⊆ N_n be some (bad) property of networks, and let N ∈ N_n. Moreover, let A_T = A_T(N) be the total number of cores in N. As the random network N has the same chance of being any N ∈ N_n, we obtain by applying Corollary 6.3 that there is a C' > 0 such that
P(N_n ∈ B_n) = P(N ∈ B_n | N ∈ N_n) ≤ P(N ∈ B_n, A_T ∈ (1 ± ε/2) a_T n | N ∈ N_n) + e^{−C'ε²n},
where a_T = 2μ/T(ρ_N, N_0) with μ = −ρ'_N(1)/ρ_N(1). Applying Lemma 5.2, we deduce that
(7.1) P(N_n ∈ B_n) ≤ O(n^β) · P(N ∈ B_n, A_T ∈ (1 ± ε/2) a_T n) + e^{−C'ε²n}.
The parameter β is as in Lemma 5.2. In particular, β = 3/2 if Φ_z(ρ_N, 1, N_0) > 0, and β = α + 1 otherwise. This is a fact that we shall use below several times. We start by counting cores of a given size.

Lemma 7.1.
Let N(T) be an α-nice class of networks with α ∈ R \ Z, and let 0 < ε < 1. Let a_T be defined as in Corollary 6.3, β as in Lemma 5.2, and define the quantities
(7.2) p_k := [x^{k−2}] T(x, N_0) · ρ_N^{k−2}/T(ρ_N, N_0) and k_0 := max{ℓ | p_ℓ n ≥ ε^{−2} a_T^{−1} (β + 1) log n}.
Then there exists a constant C = C(N) > 0 such that for large n and k ≤ k_0
P(c(k; N_n) ∈ (1 ± ε) a_T p_k n) ≥ 1 − e^{−Cε²p_k n}.

Proof.
Let B_n ⊆ N_n be the set of networks whose number of cores with precisely k vertices is not in (1 ± ε) a_T p_k n. By applying (7.1) we infer that it is sufficient to show that, say, P(N ∈ B_n, A_T ∈ (1 ± ε/2) a_T n) ≤ e^{−C'ε²p_k n}.
Note that Lemma 5.3 asserts that c(k; N) = Σ_{i=1}^{A_T} 1(v(ℓ_T[i]) = k − 2): a core has k vertices iff the corresponding graph from T has k − 2 vertices, which occurs with probability p_k = P(v(ℓ_T[i]) = k − 2). Hence
P(N ∈ B_n, A_T ∈ (1 ± ε/2) a_T n) ≤ P(∃ L ∈ (1 ± ε/2) a_T n : Σ_{i=1}^{L} 1(v(ℓ_T[i]) = k − 2) ∉ (1 ± ε) a_T p_k n).
By the Chernoff bounds and a union bound this probability is, say, at most 2n e^{−cε²a_T p_k n} for some c > 0. The proof is then completed with (7.1) and the choice of k_0, for large n. □

With the above lemma at hand we are ready to prove the first statement in Theorem 2.3 and the third statement in Theorem 2.4. Note that the analytic properties of T asserted in Definition 2.2 imply with the Transfer Theorem (see e.g. Corollary VI.1 in [6]) that
(7.3) [x^{k−2}] T(x, N_0) ∼ (t_α m(N_0)/Γ(−α)) k^{−α−1} ρ_T(N_0)^{−k+2}.

Proof of Theorem 2.3, (i).
The condition Φ_z(ρ_N, 1, N_0) > 0 implies that ρ_N < ρ_T(N_0). Moreover, the definition of p_k in (7.2) and (7.3) assert that there is a C > 0 such that
p_k ∼ C k^{−α−1} (ρ_N/ρ_T(N_0))^k = C k^{−α−1} τ^k.
As τ < 1, we infer that we can apply Lemma 7.1 for k ≤ (1 − δ) log_{1/τ} n whenever n is sufficiently large. This completes the proof. □

Proof of Theorem 2.4, (iii).
Here, the condition Φ_z(ρ_N, 1, N_0) < 0 implies that ρ_N = ρ_T(N_0). The definition of p_k in (7.2) and (7.3) assert that there is a C > 0 such that
p_k ∼ C k^{−α−1} (ρ_N/ρ_T(N_0))^k = C k^{−α−1}.
We infer that we can apply Lemma 7.1 for k ≤ (n/(ω(n) log n))^{1/(α+1)} whenever n is sufficiently large. □

The next lemma deals with cores that contain more than (n/(ω(n) log n))^{1/(α+1)} vertices. The proof is essentially the same as the proof of Lemma 7.1 and hence omitted.

Lemma 7.2.
Let N(T) be an α-nice class of networks with α ∈ R \ Z, and let 0 < ε < 1. Let a_T be defined as in Corollary 6.3, β as in Lemma 5.2, and define for ξ ≥ 1
(7.6) p_{k,ξk} := Σ_{ℓ=k}^{ξk} [x^{ℓ−2}] T(x, N_0) · ρ_N^{ℓ−2}/T(ρ_N, N_0) and k_1 := max{ℓ | p_{ℓ,ξℓ} n ≥ ε^{−2} a_T^{−1} (β + 1) log n}.
Then there exists a constant C = C(N) > 0 such that for large n and k ≤ k_1
P(c(k, ξk; N_n) ∈ (1 ± ε) a_T p_{k,ξk} n) ≥ 1 − e^{−Cε²p_{k,ξk} n}.

Proof of Theorem 2.4, (iv). The condition Φ_z(ρ_N, 1, N_0) < 0 implies that ρ_N = ρ_T(N_0). The definition of p_{k,ξk} in (7.6) and (7.3) assert that for ξ > 1 there is a C > 0 such that
p_{k,ξk} ∼ Σ_{ℓ=k}^{ξk} C ℓ^{−α−1} ∼ C' k^{−α} (1 − ξ^{−α}).
We infer that we can apply Lemma 7.2 for k ≤ (n/(ω(n) log n))^{1/α} whenever n is sufficiently large. □

We now consider the case Φ_z(ρ_N, 1, N_0) > 0. The next statement deals with the cases in Theorem 2.3 not covered by Lemma 7.1.
Lemma 7.3.
Let N(T) be an α-nice class of networks with α ∈ R \ Z, and let ε > 0. Assume that Φ_z(ρ_N(y), y, N(y)) > 0, and let τ = ρ_N/ρ_T(N_0). Then
P(C(N_n) > (5/2 + ε) log_{1/τ} n) = o(n^{−ε}).
Moreover, we have
P(c((1 − ε) log_{1/τ} n, (5/2) log_{1/τ} n; N_n) > n^ε) = o(1).

Proof. Let B_n ⊂ N_n be the set of networks in which the largest core has size > (5/2 + ε) log_{1/τ} n. By applying (7.1) we obtain
P(N_n ∈ B_n) ≤ O(n^{3/2}) · P(N ∈ B_n, A_T ∈ (1 ± ε/2) a_T n) + e^{−C'ε²n}.
Note that Lemma 5.3 asserts that
N ∈ B_n ⟹ Σ_{i=1}^{A_T} 1(v(ℓ_T[i]) > (5/2 + ε) log_{1/τ} n − 2) > 0.
As P(v(ℓ_T[i]) = k) = [x^{k−2}] T(x, N_0) · ρ_N^{k−2}/T(ρ_N, N_0), we obtain by using (7.3) that there is a C > 0 such that for large n
P(v(ℓ_T[i]) > (5/2 + ε) log_{1/τ} n − 2) ≤ C Σ_{k > (5/2+ε) log_{1/τ} n − 2} k^{−α−1} τ^k.
The condition Φ_z(ρ_N, 1, N_0) > 0 implies that ρ_N < ρ_T(N_0), and hence τ < 1. Thus, P(v(ℓ_T[i]) > (5/2 + ε) log_{1/τ} n) = o(n^{−5/2−ε}). The proof of the first statement is completed with Markov's inequality. The second statement follows similarly by estimating the probability
P((1 − ε) log_{1/τ} n ≤ v(ℓ_T[i]) ≤ (5/2) log_{1/τ} n)
with the same technique as above. The straightforward details are left to the reader. □

The Largest Core.
In this section we prove the first two statements in Theorem 2.4. In particular, we show the following.
Lemma 7.4.
Let N(T) be an α-nice class of networks with α ∈ R \ Z, and let ε > 0. Assume that Φ_z(ρ_N(y), y, N(y)) < 0, and let ω(n) be a function such that lim_{n→∞} ω(n) = ∞. Then, a.a.s.
|C(N_n) − γ_T n| < εn,
where γ_T := v_T − a_T ρ_N T_x(ρ_N, N_0)/T(ρ_N, N_0), and v_T is as in Lemma 6.2 and a_T as in Corollary 6.3. Moreover, there are a.a.s. no other cores in N_n with more than ω(n) · n^{1/α} vertices.

Proof. Set n_0 := ω(n) · n^{1/α}. We will use a counting argument to show that the number of networks that have a unique core with more than n_0 vertices is asymptotically much larger than the number A of networks that have at least two such cores. The singularity expansion of N(x, y) with respect to x (Lemma 4.4) together with the Transfer Theorem (Corollary VI.1 in [6]) imply that there exists a constant c such that as ℓ → ∞
(7.8) |N_ℓ| = (1 + o(1)) c ρ_N^{−ℓ} ℓ^{−α−1} ℓ!.
We bound A as follows. Let N be a network having n labeled vertices, and assume that N contains at least two cores, each having at least n_0 vertices. Then there is either a cut-edge that splits N into two networks that contain at least n_0 vertices each, or such a splitting can be obtained if we choose the two poles as the cut-set. So, every such network can be described by a triple (N_1, e, N_2), where e ∈ N_1 (or, in slight abuse of notation, e contains the poles of N_1), v(N_1), v(N_2) ≥ n_0 − 2, and we can construct N by identifying the poles of N_2 with the endpoints of e.

Note that by Theorem 2.5 we can assume that with probability 1 − o(1) there are at most Kn choices for e, for some constant K > 0. Moreover, there are C(n, v(N_1)) ways to choose the labels for N_1. By summing over all choices of s = v(N_1) we obtain for n large enough that
A ≤ Kn Σ_{n_0/2 ≤ s ≤ n−n_0/2} C(n, s) |N_s| |N_{n−s}| + o(|N_n|)
≤ 4Kn Σ_{n_0/2 ≤ s ≤ n−n_0/2} C(n, s) · ρ_N^{−s} s^{−α−1} s! · ρ_N^{−n+s} (n − s)^{−α−1} (n − s)! + o(|N_n|)
= 4Kn ρ_N^{−n} n! Σ_{n_0/2 ≤ s ≤ n−n_0/2} s^{−α−1} (n − s)^{−α−1} + o(|N_n|)
= 8Kn ρ_N^{−n} n! Σ_{n_0/2 ≤ s ≤ n/2} s^{−α−1} (n − s)^{−α−1} + o(|N_n|) = o(|N_n|),
since n_0 = ω(n) · n^{1/α}. A similar first-moment computation, based on Lemma 5.3 and (7.3), yields
Σ_{k=n_0}^{n} (k − 2) c(k; N_n) ≤ Cn Σ_{k=n_0}^{n} (k − 2) [x^{k−2}] T(x, N_0) · ρ_N^{k−2}/T(ρ_N, N_0) = o(n).
This shows (7.12), and the proof is completed. □

8. Remaining Proofs
8.1. Proofs of Section 2.
Proof of Proposition 2.1.
Firstly, let us observe that there is a natural projection of (N + 1) × X onto N which maps an ordered pair in (N + 1) × X to its first element, which we call the underlying network. Note also that for each network in N_{n−2} its preimage under this map consists of n(n − 1) elements of (N + 1) × X. Therefore, since P is a property that is closed under automorphisms, the proportion of networks in N_{n−2} that have P equals the proportion of elements of (N + 1) × X whose underlying network has P. In other words, the probability that N_{n−2} ∈ P equals the probability that if we choose an element of (N + 1) × X uniformly at random, then its underlying network has P. We denote by P̄ the subset of (N + 1) × X which is the preimage of the networks in N_{n−2} that have the property P. So the proportion of P̄ in (N + 1) × X is at least 1 − f(n − 2).

Moreover, (N + 1) × X is in bijection with (1 + e) × B⃗_n. So, in particular, P̄ is mapped bijectively onto a set P' ⊆ (1 + e) × B⃗_n. Let us consider the subspace e × B⃗_n, which contains precisely half of the elements of (1 + e) × B⃗_n. We have
|P' ∩ (e × B⃗_n)| / |e × B⃗_n| ≥ 1 − 2f(n − 2).
On the other hand, the subset e × B⃗_n is mapped bijectively onto the set B⃗_n^+ := {(B, e) : B ∈ B_n, e ∈ E(B)}. Therefore, if P'' is the image of P' under this isomorphism, then |P''|/|B⃗_n^+| ≥ 1 − 2f(n − 2). It remains to compare probabilities in B⃗_n^+ with probabilities in the space B_n. Note that there is a natural projection of B⃗_n^+ onto B_n where a pair (B, e) ∈ B⃗_n^+ is mapped to B ∈ B_n; we denote this by π. So |B⃗_n^+| ≤ κn|B_n|, as every graph in B_n has at most κn edges. Furthermore, let P̄'' be the complement of P'' in B⃗_n^+. Since every graph in B_n contains at least n edges, we have |π(P̄'')| ≤ |P̄''|/n. Therefore we arrive at the relation
|π(P̄'')|/|B_n| ≤ κ|P̄''|/|B⃗_n^+|.
Since the latter ratio is at most 2f(n − 2), we conclude that |π(P̄'')|/|B_n| ≤ 2κf(n − 2). □

8.2. Proofs of Section 4.
Proof of Theorem 4.3.
Let s = ⌊k/m⌋ and abbreviate F = (1 − f/r(x, y))^{1/m}. The analyticity of g, h implies that G can be represented as

G(x, y, f) = a_0 + a_m F^m + ⋯ + a_{sm} F^{sm} + a_k F^k + ⋯,

where the a_i = a_i(x, y) are auxiliary analytic functions. In particular, we have that

(8.1) a_0 = g(x, y, r(x, y)), a_m = −g_f(x, y, r(x, y)) r(x, y), and a_k(x_0, y_0) ≠ 0.

The definition of F implies that f = r(1 − F^m), where r = r(x, y). The (unknown) function f hence satisfies the equation

(8.2) r − a_0 = (a_m + r) F^m + a_{2m} F^{2m} + ⋯ + a_{sm} F^{sm} + a_k F^k + ⋯.

Note that a_0(x_0, y_0) = g(x_0, y_0, r(x_0, y_0)) = r(x_0, y_0), which implies that the left-hand side of the above equation vanishes at (x_0, y_0). Moreover, our assumptions imply that r_x(x_0, y_0) − (a_0)_x(x_0, y_0) ≠ 0. By applying the Preparation Theorem by Weierstrass we thus infer the existence of analytic functions H(x, y) and ρ(y) such that H(x_0, y_0) ≠ 0, ρ(y_0) = x_0, and locally around (x_0, y_0)

r − a_0 = H(x, y)(x − ρ(y)).

Set X = (1 − x/ρ(y))^{1/m}. Equation (8.2) is then, around (x_0, y_0), equivalent to

(−H(x, y) ρ(y)) X^m = F^m ( (a_m + r) + a_{2m} F^m + ⋯ + a_{sm} F^{(s−1)m} + a_k F^{k−m} + ⋯ ).

Recall that for any |x| < 1 and α ∈ C we have that (1 + x)^α = Σ_{k≥0} (α choose k) x^k. Using this, the above is equivalent to

(−H(x, y) ρ(y))^{1/m} X = F ( (a_m + r)^{1/m} + ã_m F^m + ⋯ + ã_{sm} F^{(s−1)m} + ã_k F^{k−m} + ⋯ ),

where the ã_i's are functions given in terms of the a_i's, and in particular ã_k = a_k / (m (a_m + r)^{(m−1)/m}). As H(x_0, y_0) ρ(y_0) ≠ 0 and, by (8.1),

a_m(x_0, y_0) + r(x_0, y_0) = (1 − g_f(x_0, y_0, r(x_0, y_0))) r(x_0, y_0) ≠ 0,

the above relation between X and F is locally invertible around (x_0, y_0).
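The local inversion just established can be sanity-checked symbolically. The sketch below is illustrative only and not part of the proof: for the concrete choice m = 2, k = 5 it keeps just the extreme terms of the relation, writing c, d, a as stand-ins for (−H ρ)^{1/m}, (a_m + r)^{1/m} and ã_k, and inverts c X = F (d + a F^{k−m}) as a formal power series.

```python
import sympy as sp

# Symbolic sanity check (illustrative, not the paper's argument):
# invert c*X = F*(d + a*F**(k-m)) for concrete m, k, where c, d, a
# stand in for (-H*rho)**(1/m), (a_m + r)**(1/m) and \tilde{a}_k,
# with all intermediate terms of the expansion dropped.
X = sp.symbols('X')
c, d, a = sp.symbols('c d a', positive=True)

m, k = 2, 5
order = k + 1  # expand far enough to see the X**k term of F**m

# Fixed-point iteration for the inverse series: F = (c*X - a*F**(k-m+1)) / d
F = c * X / d
for _ in range(order):
    F = sp.expand((c * X - a * F**(k - m + 1)) / d)
    F = (F + sp.O(X**(order + 1))).removeO()  # truncate powers above X**order

lead = F.coeff(X, 1)              # expected: c/d
corr = F.coeff(X, k - m + 1)      # expected: -a*c**(k-m+1)/d**(k-m+2)
ck = sp.expand(F**m).coeff(X, k)  # expected: -m*a*c**k/d**(k+1)

print(sp.simplify(lead - c/d))                                  # 0
print(sp.simplify(corr + a * c**(k - m + 1) / d**(k - m + 2)))  # 0
print(sp.simplify(ck + m * a * c**k / d**(k + 1)))              # 0
```

The three printed differences vanish, matching the coefficients of X, X^{k−m+1} and (after taking the m-th power) X^k in the expansions derived next.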
By indeterminate coefficients we obtain

F = ( (−H(x, y) ρ(y)) / (a_m + r) )^{1/m} X + b_{m+1} X^{m+1} + ⋯ − ( ã_k (−H(x, y) ρ(y))^{(k−m+1)/m} / (a_m + r)^{(k−m+2)/m} ) X^{k−m+1} + ⋯,

where the b_i's, i ∈ {jm + 1 : 1 ≤ j ≤ s}, are analytic functions of the ã_i's. By taking the m-th power of both sides of the above equation we obtain

1 − f/r(x, y) = ( (−H(x, y) ρ(y)) / (a_m + r) ) X^m + ⋯ − ( m ã_k (−H(x, y) ρ(y))^{k/m} / (a_m + r)^{(k+1)/m} ) X^k + ⋯.

This completes the proof of (4.4), as it readily follows from our assumptions that the coefficient of X^k above is nonzero in a neighborhood of (x_0, y_0). □

Proofs of Sections 5 and 6.
Proof of Lemma 5.4.
To see (5.1), we define a mapping from the set of vertices of the resulting network into the set of calls of the samplers ΓN, ΓS, ΓP and ΓH. More specifically, we map a vertex to the call of the subroutine in which this vertex appeared for the first time. Note that, in fact, the image of this map is the union of the calls of ΓS and ΓH only, as these are the only subroutines in which vertices are created. The preimage of each call of ΓS consists of only one vertex, whereas the preimage of each call of ΓH consists of as many vertices as the number of vertices of the sample from T which is used in that call of ΓH. Note that once a vertex has been created it can never be identified with another vertex, but only with a pole. Thus (5.1) follows.

Equation (5.2) follows from a similar mapping of the set of edges of the resulting network to the set of calls of the samplers ΓN, ΓS, ΓP and ΓH, where an edge is mapped to the call in which it was created. In this case, the image of the mapping is contained in the union of the sets of calls of ΓN, ΓS and ΓP. This is the case since, in any call of ΓH, each edge of the sample from T is replaced by a network which is the result of a call of ΓN, and it is the calls of ΓN which may yield an edge. Also, among the calls of ΓS, those which create an edge are those whose left part consists of an edge. Finally, the calls of ΓP which create an edge are those which result in a parallel network of the first type, that is, a parallel network consisting of an edge and a nonempty set of S- or H-networks. Thus (5.2) follows.

Equation (5.3) follows by mapping the set of calls of ΓN (apart from the initial call) to the union of the sets of calls of ΓS and ΓH, where a call of ΓN is mapped to the subroutine which started it. (Note that, apart from the initial call of ΓN, it is only these samplers that are able to call ΓN.)
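The bookkeeping above, charging every vertex to the unique call that created it, can be mimicked in a toy recursion. The sketch below is purely illustrative and is not the paper's sampler: `sample` is a hypothetical subroutine in which "S-calls" create one vertex and "H-calls" create several vertices and recurse, and the final assertion is the analogue of (5.1).

```python
import random

def sample(depth, stats, rng):
    """Toy recursive sampler (hypothetical; not the samplers ΓN, ΓS, ΓP, ΓH).

    Returns the number of vertices produced by this call and all sub-calls,
    while `stats` records how many vertices each call type created."""
    if depth == 0 or rng.random() < 0.3:
        stats['S'] += 1            # an "S-call" creates exactly one vertex
        return 1
    t = rng.randint(2, 4)          # an "H-call" creates t vertices itself ...
    stats['H'] += t
    total = t
    for _ in range(rng.randint(1, 3)):  # ... and delegates to further calls
        total += sample(depth - 1, stats, rng)
    return total

rng = random.Random(7)             # fixed seed for reproducibility
stats = {'S': 0, 'H': 0}
n_vertices = sample(6, stats, rng)

# Every vertex is charged to exactly one call, so the counts must add up:
assert n_vertices == stats['S'] + stats['H']
print(n_vertices, stats)
```

The point of the toy model is only that the map "vertex ↦ creating call" is well defined and exhaustive, which is exactly what makes the counts in (5.1)–(5.3) match.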
In particular, a call of ΓH makes precisely one call of ΓT and, conversely, each call of ΓT is made by a call of ΓH. Furthermore, for each edge of the sampled network of type T precisely one call of ΓN is made. This concludes the proof of (5.3).

As far as (5.4) is concerned, it is simply a matter of counting the number of calls of ΓP which yield a parallel network of the first type (if j = 1) or of the second type (if j = 2).

Equation (5.5) follows by a similar argument, mapping each call of ΓP to the call of ΓN or ΓS which created it, as these are the only types among our Boltzmann samplers which can call ΓP directly. Thus (5.5) holds, and the same kind of argument also works for (5.7) as well as for (5.8). For the latter, we need the observation that there is a one-to-one correspondence between the calls of ΓH and the calls of ΓT. □

Proof of Lemma 6.1.
We start with the case j = 1. For a Po_{≥1}(µ) variable X and ξ ≥ 1 we have

(8.3) E(ξ^X) = (e^{ξµ} − 1) / (e^µ − 1).

Using Markov's inequality, this implies for any ξ ≥ 1, 0 ≤ r′ ≤ r, and ε > 0 that

P( Y_{r′} ≥ (1 + ε) r′ E(X) ) ≤ e^{r′ f(ξ)}, where f(ξ) = log( (e^{ξµ} − 1)/(e^µ − 1) ) − (1 + ε) E(X) log ξ.

Note that

f′(ξ) = µ e^{ξµ}/(e^{ξµ} − 1) − (1 + ε) E(X)/ξ and f″(ξ) = −µ² e^{ξµ}/(e^{ξµ} − 1)² + (1 + ε) E(X)/ξ².

The function x/(x − 1)² is strictly monotone decreasing for x > 1. Together with the triangle inequality this implies for ξ ≥ 1

|f″(ξ)| ≤ µ² e^µ/(e^µ − 1)² + (1 + ε) E(X) ≤ (E(X) + 1 + ε) E(X).

Write ξ = 1 + δ, where δ ≥ 0. We obtain by applying Taylor's Theorem

f(ξ) ≤ f(1) + f′(1) δ + ½ (E(X) + 1 + ε) E(X) δ².

As f(1) = 0 and f′(1) = −ε E(X), by setting δ = ε/(1 + ε + E(X)) and using (8.3) we infer that

P( Y_{r′} ≥ (1 + ε) r′ E(X) ) ≤ e^{−ε² E(Y_{r′}) / (2(1 + ε + E(X)))}.

We can deal completely analogously with the lower tail of the distribution of Y_{r′}; the straightforward details are omitted. Using the above bounds we readily obtain for any 0 ≤ r′ ≤ r that there is a constant C = C(µ) > 0 such that

P( |Y_{r′} − r′ µ/(1 − e^{−µ})| ≥ εr ) ≤ e^{−C ε² r}.

The claim for j = 1 then follows with plenty of room to spare by applying the union bound and using that ε ≥ log r / √r. As the calculations are very similar for the case j = 2, we leave them to the reader. □

References

[1] N. Bernasconi, K. Panagiotou and A. Steger, On properties of random dissections and triangulations, In
Proceedings of the 19th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA '08), pp. 132–141, 2008.
[2] M. Bodirsky, O. Giménez, M. Kang, and M. Noy, On the number of series-parallel and outerplanar graphs, In Proceedings of the European Conference on Combinatorics, Graph Theory and Applications (EuroComb '05), volume AE of DMTCS Proceedings, pp. 383–388, Discrete Mathematics and Theoretical Computer Science, 2005.
[3] A. Denise, M. Vasconcellos and D.J.A. Welsh, The random planar graph, Congressus Numerantium (1996), 61–79.
[4] M. Drmota, Random Trees: An Interplay between Combinatorics and Probability, Springer, 2009.
[5] P. Duchon, P. Flajolet, G. Louchard and G. Schaeffer, Boltzmann samplers for the random generation of combinatorial structures, Combinatorics, Probability and Computing (2004), 577–625.
[6] P. Flajolet and R. Sedgewick, Analytic Combinatorics, Cambridge University Press, 2009.
[7] O. Giménez and M. Noy, Asymptotic enumeration and limit laws of planar graphs, Journal of the American Mathematical Society (2009), 309–329.
[8] O. Giménez, M. Noy and J. Rué, Graph classes with given 3-connected components: extended abstract, Electronic Notes in Discrete Mathematics (2007), 521–529.
[9] S. Janson, T. Łuczak and A. Ruciński, Random Graphs, John Wiley & Sons, 2000.
[10] C. McDiarmid, A. Steger and D. Welsh, Random planar graphs, Journal of Combinatorial Theory, Series B (2005), 187–205.
[11] K. Panagiotou and A. Steger, Maximal biconnected subgraphs of random planar graphs, In Proceedings of the 20th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA '09), pp. 432–440, 2009.
[12] B.A. Trakhtenbrot, Towards a theory of non-repeating contact schemes, Trudy Mat. Inst. Akad. Nauk SSSR (1958), 226–269.
[13] W.T. Tutte, Connectivity in Graphs, University of Toronto Press, Toronto, 1966.
[14] T.R.S. Walsh, Counting labelled 3-connected and homeomorphically irreducible 2-connected graphs, Journal of Combinatorial Theory, Series B 32