Characteristic Points of Recursive Systems
JASON P. BELL, STANLEY N. BURRIS, AND KAREN A. YEATS
Abstract.
Characteristic points have been a primary tool in the study of a generating function defined by a single recursive equation. We investigate the proper way to adapt this tool when working with multi-equation recursive systems. Given an irreducible non-negative power series system with m equations, let ρ be the radius of convergence of the solution power series and let τ be the vector of values of the solution series evaluated at ρ. The main results of the paper include:
(a) the set of characteristic points forms an antichain in R^{m+1};
(b) given a characteristic point (a, b): (i) the spectral radius of the Jacobian of G at (a, b) is ≥ 1, and (ii) it is = 1 iff (a, b) = (ρ, τ);
(c) if (ρ, τ) is a characteristic point, then (i) ρ is the largest a for (a, b) a characteristic point, and (ii) a characteristic point (a, b) with a = ρ is the extreme point (ρ, τ).

1. Introduction and Preliminaries
Recursively defined generating functions play a major role in combinatorial enumeration; see the recently published book [9] for numerous examples. The important technique of expressing a generating function as a product of geometric series (as well as other kinds of products) was introduced by Euler in the mid 1700s, in his study of various problems connected with the number of partitions of integers. This investigation of partition problems was continued by Sylvester and Cayley (see, for example, [5], [19]), starting in the mid 1850s. The expressions they used for partition generating functions were explicit, whereas the fundamental equation

(1)  ∑_{n≥1} t_n x^n = x · ∏_{n≥1} (1 − x^n)^{−t_n},

introduced in 1857 by Cayley [6] for rooted unlabeled trees, defined the coefficients t_n implicitly, yielding a recursive procedure to compute the t_n. Cayley used this to recursively calculate (with some errors) the first dozen values of t_n, and later applied his method to recursively enumerate certain kinds of chemical compounds. Let T(x) = ∑_{n≥1} t_n x^n. In 1937 Pólya (see [18]) converted (1) into

(2)  T(x) = x · exp( ∑_{m≥1} T(x^m)/m ),

a form to which he was able to apply analytic techniques to find asymptotics for the t_n; namely, he proved

(3)  t_n ∼ C ρ^{−n} n^{−3/2},

where ρ is the radius of convergence of T(x) and C is a positive constant. A similar result held for the various classes of chemical compounds studied by Cayley. Although the function T(x) was not expressible in terms of well-known functions, nonetheless Pólya showed how to determine C and ρ directly from (2). Pólya's methods were applied to nearly regular classes of trees in 1948 by Otter [17]. In 1974 Bender [1], following Pólya's ideas, formulated a general result for how to determine the radius of convergence ρ of a power series T(x) defined by a functional equation F(x, y) = 0.
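Equation (2) can be turned directly into such a recursive computation. The following sketch (in Python; the truncation order N is our own choice, and the exponential of a series S with S(0) = 0 is computed via the standard recurrence nE_n = ∑_{k=1}^n k S_k E_{n−k}) recovers Cayley's numbers t_1, ..., t_10 by iterating (2) on truncated power series:

```python
from fractions import Fraction

def rooted_tree_counts(N):
    # t[n] = number of rooted unlabeled trees on n nodes, computed by
    # iterating Polya's equation T(x) = x * exp(sum_{m>=1} T(x^m)/m)
    # on power series truncated at degree N.
    t = [0] * (N + 1)
    t[1] = 1
    for _ in range(N):  # after pass p, the coefficients t[1..p+1] are correct
        # S(x) = sum_{m>=1} T(x^m)/m, truncated at degree N
        S = [Fraction(0)] * (N + 1)
        for m in range(1, N + 1):
            for n in range(1, N // m + 1):
                S[m * n] += Fraction(t[n], m)
        # E = exp(S) via the recurrence n*E_n = sum_{k=1}^n k*S_k*E_{n-k}
        E = [Fraction(0)] * (N + 1)
        E[0] = Fraction(1)
        for n in range(1, N + 1):
            E[n] = sum(k * S[k] * E[n - k] for k in range(1, n + 1)) / n
        t = [0] + [int(E[n - 1]) for n in range(1, N + 1)]  # T = x * E
    return t

print(rooted_tree_counts(10)[1:])  # -> [1, 1, 2, 4, 9, 20, 48, 115, 286, 719]
```

Each pass determines one further coefficient exactly, since t_n depends only on t_1, ..., t_{n−1} through (2).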
Bender's hypotheses guaranteed that ρ was positive and finite, and that τ := T(ρ) was also finite. His method was simply to find (ρ, τ) among the solutions (a, b) (called characteristic points) of the characteristic system

F(x, y) = 0
(∂F/∂y)(x, y) = 0.

A decade later Canfield [4] found a gap in the hypotheses of Bender's formulation when there were several characteristic points. In the case of a polynomial functional equation, Canfield sketched a method to determine which of the characteristic points gives the radius of convergence of the solution y = T(x). In the late 1980s Meir and Moon [15] focused on a special case of Canfield's work, namely when F(x, y) = 0 is of the form y = G(x, y), where G(x, y) is a power series with nonnegative coefficients. The interesting cases were such that setting T(x) = G(x, T(x)), with T(x) an indeterminate power series, gave a recursive determination of the coefficients of T(x). One advantage of their restricted form of recursive equation was that there could be at most one characteristic point. This formulation was adopted by Odlyzko in his 1995 survey paper [16] as well as in the recent book [9] of Flajolet and Sedgewick. These publications have focused on characteristic points in the interior of the domain of convergence of G(x, y), in the context of proving that ρ is a square-root singularity of the solution y = T(x). If (ρ, τ) is on the boundary of the domain of G(x, y) then ρ may not be a square-root singularity of T(x). Most areas of application actually require a recursive system of equations

(4)  y_1 = G_1(x, y_1, ..., y_m)
     ⋮
     y_m = G_m(x, y_1, ..., y_m),

written more briefly as y = G(x, y), where y abbreviates the tuple (y_1, ..., y_m). (A precise definition of the systems considered in this paper is given in §2.)
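For a single equation of Meir–Moon type y = G(x, y), the characteristic system reads y = G(x, y), G_y(x, y) = 1. As an illustration (our own choice of example, not one taken from the paper), take G(x, y) = x(1 + y²), which defines the generating function for binary trees, T(x) = (1 − √(1 − 4x²))/(2x); a Newton iteration on the characteristic system recovers (ρ, τ) = (1/2, 1):

```python
def newton_char_point(x, y, iters=50):
    # Solve F(x,y) = (G(x,y) - y, G_y(x,y) - 1) = (0,0) for G = x*(1+y^2)
    # by Newton's method on the pair (x, y).
    for _ in range(iters):
        f1 = x * (1 + y * y) - y       # G(x,y) - y
        f2 = 2 * x * y - 1             # G_y(x,y) - 1
        # Jacobian of F with respect to (x, y)
        a, b = 1 + y * y, 2 * x * y - 1
        c, d = 2 * y, 2 * x
        det = a * d - b * c
        x -= (f1 * d - f2 * b) / det   # Cramer's rule for the Newton step
        y -= (f2 * a - f1 * c) / det
    return x, y

x, y = newton_char_point(0.4, 0.8)
print(x, y)  # converges to (0.5, 1.0)
```

Eliminating x from the two equations by hand gives the same answer: 1 = 2xy forces x = 1/(2y), and substituting into y = x(1 + y²) yields y² = 1, so (ρ, τ) = (1/2, 1).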
However, it was not until the 1990s that publications started appearing that used multi-equation non-linear systems. Following the trend with single recursion equations y = G(x, y), the focus has been on systems y = G(x, y) where the G_i(x, y) are power series with non-negative coefficients. In 1993 Lalley [12] considered polynomial systems in his study of random walks on free groups. In 1997 Woods [20] used one particular system to analyze the

[Footnote: In [2] we found this law so ubiquitous among naturally defined classes of trees defined by a single equation that we referred to it as the universal law for rooted trees.]
asymptotic densities of monadic second-order definable classes of trees in the class of all trees. In the same year Drmota [7] extended Lalley's results to power series systems. Lalley's and Drmota's results were for a wide range of irreducible systems, that is, systems in which each variable y_i (eventually) depends on every variable y_j. An irreducible system of the kind they studied behaves in some ways like a single equation system; for example, the standard solution y_i = T_i(x) is such that all the T_i(x) have the same finite positive radius ρ, the τ_i := T_i(ρ) are all finite, and the asymptotics for the coefficients of T_i(x) are of the Pólya form C_i ρ^{−n} n^{−3/2}. Thus, as has been the case with single equation systems, it is desirable to find the radius of convergence ρ even though the solutions T_i(x) may be fairly intractable. The natural method was to extend the definition of the characteristic system from a single equation to a system of equations, by adding the determinant of the Jacobian of the system, set equal to zero, to the original system. The solutions of such a characteristic system will again be called characteristic points. Under suitable conditions one can find (ρ, τ) among the characteristic points. To date, however, the necessary study of characteristic points (a, b) for systems, so that one can locate (ρ, τ), has been essentially non-existent. Filling this void is the goal of this paper. In December 2007 we discovered, in the polynomial systems studied by Flajolet and Sedgewick, and thus in the more general systems studied by Drmota, that it was possible for there to be more than one characteristic point; this was communicated to Flajolet and appears as an example in [9] (p. 484). The main objective of this paper is to give conditions to locate (ρ, τ) among the characteristic points, if indeed (ρ, τ) is a characteristic point.
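To make these objects concrete, consider a toy two-equation system of our own (not an example from the paper): y_1 = x(1 + y_2²), y_2 = x(1 + y_1²). By symmetry its standard solution is T_1 = T_2 = T with T = x(1 + T²), so ρ = 1/2 and τ = (1, 1). A quick numerical sketch confirms that (ρ, τ) solves the extended characteristic system:

```python
import numpy as np

# Toy symmetric system (our own illustration):
#   y1 = G1(x, y) = x*(1 + y2**2),   y2 = G2(x, y) = x*(1 + y1**2).
# By symmetry T1 = T2 = T with T = x*(1+T^2), so rho = 1/2, tau = (1, 1).
def G(x, y):
    return np.array([x * (1 + y[1] ** 2), x * (1 + y[0] ** 2)])

def JG(x, y):  # Jacobian of G with respect to (y1, y2)
    return np.array([[0.0, 2 * x * y[1]],
                     [2 * x * y[0], 0.0]])

rho, tau = 0.5, np.array([1.0, 1.0])
assert np.allclose(G(rho, tau), tau)                         # solves the system
assert abs(np.linalg.det(np.eye(2) - JG(rho, tau))) < 1e-12  # det(I - J_G) = 0
```

So (1/2, 1, 1) is a characteristic point of this system in the sense just described.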
A review of, and improvements to, the theory of the single equation case (see Proposition 15 and Corollary 17) are also given. It turns out that, even if there is a characteristic point of a system y = G(x, y) in the interior of the domain of G(x, y), one cannot claim that the asymptotics for the coefficients of the solutions T_i(x) will be of the above Pólya form (see Examples 30, 31). We do not investigate the case when (ρ, τ) is not a characteristic point, concluding only that it must be on the boundary of the domain of G(x, y) and that the spectral radius of the Jacobian of G(x, y) at (ρ, τ) is < 1. Note that for polynomial systems, (ρ, τ) is always a characteristic point, and in general the spectral radius condition (see Lemma 12) makes it possible to recognize when (ρ, τ) is among the characteristic points.

1.1. Outline.
Appendix B discusses standard background and notation for power series, including a statement, Proposition 37, of the key results of Perron–Frobenius theory. Section 2 sets up the equational systems of interest. Section 3 begins by reducing to the case where the Jacobian matrix J_G(x, y) has nonzero entries and then proceeds to the more interesting discussion of properties of characteristic points, including notably Proposition 11. This leads to the main result of the section, Theorem 14, followed by the single equation result, Proposition 15. Section 4 introduces an eigenvalue criterion for critical points leading to the main result of the paper, Theorem 21. Section 5 then uses the preceding results to correct an inaccuracy in the literature. The main body of the paper concludes with some open problems. Appendix A contains a large number of examples illustrating the various possibilities and results. It is best read alongside the main body of the paper.

[Footnote: In 1997 Drmota [7] appears to claim that having a characteristic point in the interior of the domain would lead to Pólya asymptotics; however, these examples show this not to be the case. In his 2009 book [8] this hypothesis is replaced with one regarding minimal characteristic points, which seems somewhat at odds with our Proposition 11, which says that the characteristic points form an antichain, with the characteristic point (a, b) of interest having the largest value of a among the characteristic points. Theorem 22 of §4 shows that at such a point J_G(x, y) has 1 as its largest real eigenvalue.]
2. Well-conditioned systems
The next definition gives a version of essentially well-known conditions which ensure that a system y = G(x, y) as in (4) has power series solutions y_i = T_i(x) of the type encountered in generating functions for classes of trees. (See Drmota [7], [8].)

Definition 1.
A system y = G(x, y) is well-conditioned if it satisfies:
(a) each G_i(x, y) is a power series with nonnegative coefficients;
(b) G(x, y) is holomorphic in a neighborhood of the origin;
(c) G(0, y) = 0;
(d) for all i, G_i(x, 0) ≠ 0;
(e) the system is irreducible;
(f) for some i, j, k, ∂²G_i(x, y)/∂y_j∂y_k ≠ 0 (so the system is nonlinear in y).

Remark 2.
Since G(x, y) has non-negative coefficients, condition (b) is equivalent to

(b′): G(x, y) converges at some positive (a, b).

2.1. Solutions of Well-Conditioned Systems.
The following proposition is standard.
Proposition 3. If y = G(x, y) is a well-conditioned system then the following hold:
(i) There is a unique vector T(x) of formal power series T_i(x) with nonnegative coefficients such that one has the formal identity

(5)  T(x) = G(x, T(x)).

(ii) Equation (5) gives a recursive procedure to find the coefficients of the T_i(x).
(iii) Equation (5) holds for x ∈ [0, ∞].
(iv) All T_i(x) have the same radius of convergence ρ ∈ (0, ∞), and all T_i(x) converge at ρ, that is, τ_i := T_i(ρ) < ∞.
(v) Each T_i(x) has a singularity at x = ρ.
(vi) If (ρ, τ) is in the interior of the domain of G(x, y) then det(I − J_G(ρ, τ)) = 0.

Proof.
Apply Proposition 36, Pringsheim's Theorem, and the Implicit Function Theorem. □
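Proposition 3(ii) in action: the sketch below (Python; the two-equation system is our own toy example, not one from the paper) computes the coefficients of the standard solution by iterating T ← G(x, T) on truncated power series, exactly the recursive procedure the proposition describes.

```python
# Toy well-conditioned system (our own illustration):
#   y1 = x*(1 + y2),   y2 = x*(1 + y1**2)
# We iterate T <- G(x, T) on coefficient lists truncated at degree N.
N = 12  # truncation order: coefficients of x^0 .. x^N

def mul(p, q):
    # truncated product of two coefficient lists
    r = [0] * (N + 1)
    for i, pi in enumerate(p):
        if pi:
            for j, qj in enumerate(q[: N + 1 - i]):
                r[i + j] += pi * qj
    return r

def G(T1, T2):
    # G1 = x*(1 + y2) and G2 = x*(1 + y1^2), truncated at degree N
    g1, g2 = [0] * (N + 1), [0] * (N + 1)
    sq = mul(T1, T1)
    for n in range(1, N + 1):
        g1[n] = (1 if n == 1 else 0) + T2[n - 1]
        g2[n] = (1 if n == 1 else 0) + sq[n - 1]
    return g1, g2

T1 = [0] * (N + 1)
T2 = [0] * (N + 1)
for _ in range(N + 1):   # each pass fixes one more coefficient
    T1, T2 = G(T1, T2)

# The computed pair is a fixed point of (5) up to the truncation order.
assert (T1, T2) == G(T1, T2)
print(T1[:7], T2[:7])
```

Since every occurrence of a y-variable in G is multiplied by x, the degree-n coefficients of G(x, T) depend only on coefficients of T of degree < n, which is why the iteration stabilizes.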
The sequence T(x) of power series described in Proposition 3 is the standard solution of the system, and the point (ρ, τ) is the extreme point (of the standard solution, or of the system). From (5) one has T(0) = 0, so the standard solution goes through the origin. The set

Dom⁺(G) := { (a, b) : a, b_1, ..., b_m > 0 and G_i(a, b) < ∞ for 1 ≤ i ≤ m }

is the positive domain of G. For (a, b) ∈ Dom⁺(G) let

Λ(a, b) := Λ(J_G(a, b)),

the largest real eigenvalue of the Jacobian matrix J_G(a, b). Since J_G(a, b) is a matrix with non-negative entries, Λ(a, b) is the spectral radius of J_G(a, b).

[Footnote to Definition 1(e): This means the non-negative matrix J_G is irreducible.]

2.2. Characteristic Systems, Characteristic Points.
Flajolet and Sedgewick [9], VII.6, define the characteristic system of (4) to be

y_1 = G_1(x, y_1, ..., y_m)
⋮
y_m = G_m(x, y_1, ..., y_m)
0 = det(I − J_G(x, y)).

Let the positive solutions (a, b) ∈ R^{m+1} to this system be called the characteristic points of the system. Requiring that (ρ, τ) be a characteristic point in the interior of the domain of G(x, y) has been crucial to proofs that x = ρ is a square-root singularity of the T_i(x), leading to the asymptotics t_i(n) ∼ C_i ρ^{−n} n^{−3/2} for the non-zero coefficients. There is, thus, considerable interest in finding practical computational means of estimating ρ. For the case that the G_i(x, y) are polynomials we know that (ρ, τ) will be among the characteristic points and in the interior of the domain of G. However, until now, even in the polynomial case, no general attempt has been made to characterize (ρ, τ) among the characteristic points of the system, with one exception, namely the 1-equation systems.

3. Characteristic Points of Well-Conditioned Systems
From now on it is assumed, unless stated otherwise, that we are working with a well-conditioned system Σ : y = G(x, y) of m equations.

3.1. Making substitutions in an irreducible system.
A careful analysis of the characteristic points of Σ is easier if J_G(a, b) is a positive matrix for positive points (a, b); this is the case precisely when no entry of J_G(x, y) is 0. Fortunately there is a substitution procedure to transform the original system Σ into a well-conditioned system Σ⋆ with
(i) exactly the same positive solutions (a, b), and
(ii) exactly the same set CP of characteristic points,
and such that for the new system y = G⋆(x, y), the Jacobian J_G⋆(x, y) has no zero entries. Indeed, given any positive integer n, one can carry out the substitutions so that all n-th partial derivatives of G(x, y) with respect to the y_i are non-zero. The goal of this section is to prove these claims.

[Footnote: Flajolet and Sedgewick ([9], Chapter VII, p. 468) only consider characteristic points in the interior of Dom⁺(G). When dealing with polynomial systems in Chapter VII of [9], Flajolet and Sedgewick do not use characteristic systems; they prefer to work with the singularities, and their connections via branches, of the algebraic curves y_i(x) defined by the system.]
The simplest substitutions are n-fold iterations G^(n) of the transformation G. These are used in [9] (see p. 492), as they suffice for aperiodic polynomial systems Σ. In general, however, iteration of G does not suffice to obtain a system Σ⋆ as described above; see Example 33.

Given a system Σ : y = G(x, y), a minimal self-substitution transformation creates the system Σ^(α) : y = G^(α)(x, y) by selecting α ∈ [0, 1] and a pair of indices i, j (possibly the same) with ∂G_i(x, y)/∂y_j ≠ 0 and then substituting αG_j(x, y) + (1 − α)y_j for a single occurrence of y_j in the power series G_i. Suppose H(x, y; y_0) is the result of replacing the single occurrence of y_j in G_i by a new variable y_0. Then the system Σ^(α) is

Σ^(α) :  y_1 = G_1^(α)(x, y) := G_1(x, y)
         ⋮
         y_i = G_i^(α)(x, y) := H( x, αG_j(x, y) + (1 − α)y_j; y )
         ⋮
         y_m = G_m^(α)(x, y) := G_m(x, y).

More generally, a system Σ⋆ : y = G⋆(x, y) is a self-substitution transform of Σ : y = G(x, y) if there is a sequence Σ_0, Σ_1, ..., Σ_r of systems such that Σ = Σ_0, Σ⋆ = Σ_r, and for 0 ≤ i < r the system Σ_{i+1} is a minimal self-substitution transform of Σ_i.

Lemma 4.
For Σ^(α) and Σ⋆ as described above:
(a) Σ^(0) = Σ.
(b) If Σ is irreducible and α ∈ [0, 1) then Σ^(α) is irreducible.
(c) Suppose Σ is irreducible. Then Σ⋆ is irreducible iff each step Σ_i is irreducible.
(d) Suppose Σ is well-conditioned and α ∈ [0, 1]. Then Σ^(α) is well-conditioned iff it is irreducible. In particular, Σ^(α) is well-conditioned if α ∈ [0, 1).
(e) Suppose Σ is well-conditioned. Then Σ⋆ is well-conditioned iff it is irreducible.

Proof. Straightforward. □
Lemma 5.
Suppose Σ⋆ : y = G⋆(x, y) is a self-substitution transform of a well-conditioned Σ : y = G(x, y). Then the following hold:
(a) G(x, y) and G⋆(x, y) have the same positive domain of convergence.
(b) Σ⋆ and Σ have the same positive solutions and the same characteristic points.
(c) If Σ⋆ is well-conditioned then Σ and Σ⋆ have the same standard solution T(x) and extreme point (ρ, τ).
(d) If Σ⋆ is well-conditioned then the Jacobians J_G(x, y) and J_G⋆(x, y) have all entries finite at the same positive points (a, b) in the domain of G.

[Footnote: A well-conditioned system y = G(x, y) is aperiodic if the coefficients of each T_i(x) are eventually positive, T(x) being the standard solution; see [9], p. 489.]

Proof. It suffices to prove this for the case that Σ⋆ = Σ^(α), a minimal self-substitution transform of Σ as described above, namely the result of substituting αG_j(x, y) + (1 − α)y_j for a single occurrence of y_j in the power series G_i(x, y). Let

H(x, y; y_0) = A(x, y) y_0 + B(x, y),

where A(x, y) and B(x, y) are power series with non-negative coefficients, neither being 0, be such that

G_i(x, y) = A(x, y) y_j + B(x, y),
G_i^(α)(x, y) = A(x, y)( αG_j(x, y) + (1 − α)y_j ) + B(x, y).

For item (a), first suppose that (a, b) ∈ Dom⁺(G). Then A(a, b) and B(a, b) are finite, so G_i^(α)(a, b) is finite. This suffices to show (a, b) ∈ Dom⁺(G^(α)), since the other G_j^(α)(x, y) are the same as those in Σ. Conversely, suppose (a, b) ∈ Dom⁺(G^(α)). Again A(a, b) and B(a, b) are finite, so G_i(a, b) is finite; and as before, the other G_j(a, b) are finite. Thus (a, b) ∈ Dom⁺(G).

For item (b), if i ≠ j then clearly the two systems have the same positive solutions, since y_j = G_j(x, y) is in both systems. If i = j, first note that every positive solution of Σ is also a solution of Σ^(α).
For the converse we have

G_i^(α)(x, y) = A(x, y)( α(A(x, y) y_i + B(x, y)) + (1 − α)y_i ) + B(x, y)
             = αA(x, y)² y_i + αA(x, y)B(x, y) + (1 − α)A(x, y) y_i + B(x, y).

Let (a, b) be a positive solution of Σ^(α). Then (a, b) solves all equations y_j = G_j(x, y) of Σ with j ≠ i, since these equations are also in Σ^(α). Now

b_i = G_i^(α)(a, b) = αA(a, b)² b_i + αA(a, b)B(a, b) + (1 − α)A(a, b) b_i + B(a, b),

so

( 1 − αA(a, b)² − (1 − α)A(a, b) ) b_i = ( 1 + αA(a, b) ) B(a, b).

Since 1 − αA² − (1 − α)A = (1 − A)(1 + αA) and 1 + αA(a, b) is positive, one can cancel the factor 1 + αA(a, b) to obtain

b_i = A(a, b) b_i + B(a, b),

which says that (a, b) satisfies the i-th equation of Σ, and thus all the equations of Σ. Consequently Σ and Σ^(α) have the same positive solutions (a, b).

To show both systems have the same characteristic points, compute

(6)  ∂G_i^(α)(x, y)/∂y_k = ∂G_i(x, y)/∂y_k + α (∂A(x, y)/∂y_k)( G_j(x, y) − y_j ) + αA(x, y)( ∂G_j(x, y)/∂y_k − δ_jk ).

At a positive solution (a, b) of Σ (hence of Σ⋆), this gives

∂G_i^(α)(a, b)/∂y_k = ∂G_i(a, b)/∂y_k + αA(a, b)( ∂G_j(a, b)/∂y_k − δ_jk ).

Thus, since (a, b) is positive, one obtains J_α(a, b) := I − J_G^(α)(a, b) from J(a, b) := I − J_G(a, b) by an elementary row operation. It follows that det(J(a, b)) = 0 if and only if det(J_α(a, b)) = 0. Combining this with the fact that Σ and Σ^(α) have the same positive solutions shows that they also have the same characteristic points.
For a well-conditioned system Σ, the standard solution is the unique sequence T(x) of non-negative power series with T(0) = 0 that solves the system. The standard solution of Σ is clearly a solution of Σ^(α). Thus if Σ^(α) is well-conditioned then it has the same standard solution, and hence the same extreme point, as Σ, so (c) holds.

For the final item, let (a, b) be a point in Dom⁺(G), hence a point in Dom⁺(G^(α)). A(a, b) is finite, as one sees by looking at the expression above for G_i(x, y). Then, since G_j^(α)(x, y) = G_j(x, y) for j ≠ i, (6) shows that ∂G_i^(α)(a, b)/∂y_k is finite iff ∂G_i(a, b)/∂y_k is finite, so one has item (d). □

Lemma 6.
A well-conditioned system
Σ : y = G(x, y) can be transformed by a self-substitution into a well-conditioned system Σ⋆ : y = G⋆(x, y) such that the Jacobian matrix J_G⋆(x, y) has all entries non-zero. Indeed, given any n > 0, one can find a Σ⋆ such that all n-th partials of the G⋆_i with respect to the y_j are non-zero.

Proof. The goal is to show that there is a sequence Σ_0, ..., Σ_r of minimal self-substitution transforms that go from Σ to the desired Σ⋆, and such that each system Σ_i is well-conditioned. The following four cases give the key steps in the proof.

CASE I:
Suppose some G_i is such that all n-th partials are non-zero. If G_j is dependent on y_i (there is at least one such j), then substituting (1/2)G_i + (1/2)y_i for some occurrence of y_i in G_j gives a well-conditioned system Σ′ such that for G′_i = G_i and G′_j, all n-th partials are non-zero. Continuing in this fashion one eventually has the desired system Σ⋆.

CASE II:
Suppose ∂^{mn}G_i/∂y_i^{mn} ≠ 0 for some i. This means y_i^{mn} divides some monomial of G_i. Use the fact that for any j ≠ i there is a dependency path from y_i to y_j to convert, via self-substitutions that preserve the well-conditioned property, a product of n of the y_i in this monomial into a power series which has y_j^n dividing one of its monomials. By doing this for each j ≠ i one obtains a well-conditioned G′_i with

∂^{mn}G′_i / (∂y_1^n · · · ∂y_m^n) ≠ 0.

Σ′ is now in Case I.

CASE III:
Suppose ∂²G_i/∂y_i² ≠ 0 for some i. Substituting G_i for a suitable occurrence of y_i in G_i gives a well-conditioned Σ′ where ∂³G′_i/∂y_i³ ≠ 0. Continuing in this fashion leads to Case II.

CASE IV:
Suppose ∂²G_i/∂y_j∂y_k ≠ 0 for some i, j, k. If j ≠ i there is a dependency path from y_j to y_i which shows how to make self-substitutions (that preserve the well-conditioned property) leading to ∂²G_i/∂y_i∂y_k ≠ 0. Likewise, if k ≠ i there is a dependency path from y_k to y_i which shows how to make self-substitutions (with each minimal step being well-conditioned) leading to ∂²G_i/∂y_i² ≠ 0, which is Case III. Since Σ is non-linear in y, for some i, j, k we have ∂²G_i/∂y_j∂y_k ≠ 0. Thus, starting with Case IV and working back to Case I, we arrive at the desired Σ⋆. □

Lemma 7.
Let
Σ : y = G(x, y) be a well-conditioned system and let Σ⋆ : y = G⋆(x, y) be a self-substitution transform of Σ that is also well-conditioned. If (a, b) is a characteristic point of Σ, hence of Σ⋆, then Λ(a, b) = 1 iff Λ⋆(a, b) = 1.

Proof. Let (a, b) be a characteristic point of Σ. It suffices to consider the case where Σ⋆ is obtained from Σ by a minimal self-substitution. Let G_i(x, y) depend on y_j, and let H(x, y; y_0) be the result of replacing a single occurrence of y_j in G_i(x, y) by y_0. Then let Σ^(α) : y = G^(α)(x, y), α ∈ [0, 1], be obtained by applying the substitution y_0 ← αG_j(x, y) + (1 − α)y_j to H(x, y; y_0), so that

G_i^(α)(x, y) = H( x, αG_j(x, y) + (1 − α)y_j; y ).

Let Λ_α := Λ_α(a, b), the largest real eigenvalue of J_G^(α)(a, b). The only information that we need from the above construction of the G_i^(α) is that the function α ↦ J_G^(α)(a, b) is continuous on [0, 1]. Since Λ is continuous on non-negative matrices by Corollary 38, it follows that α ↦ Λ_α is continuous on [0, 1].

Since (a, b) is a characteristic point of Σ, it is also a characteristic point of Σ^(α), by Lemma 5, for every α ∈ [0, 1]; hence 1 is an eigenvalue of J_G^(α)(a, b) for every α ∈ [0, 1]. Suppose Λ_0 = 1, and suppose there is a β ∈ (0, 1] with Λ_β > 1. From the continuity of Λ_α there is a γ ∈ [0, β) such that Λ_γ = 1 and Λ_α > 1 for α ∈ (γ, β]. Let p_α(x) be the characteristic polynomial of J_G^(α)(a, b). From p_α(1) = p_α(Λ_α) = 0 one has, by Rolle's theorem, for each α ∈ (γ, β) a c_α ∈ (1, Λ_α) such that

(dp_α/dx)(c_α) = 0.

Since Λ_α is continuous on [0, 1],

lim_{α→γ⁺} Λ_α = Λ_γ = 1.

This implies lim_{α→γ⁺} c_α = 1, and thus

(dp_γ/dx)(1) = lim_{α→γ⁺} (dp_α/dx)(c_α) = 0.

But from the Perron–Frobenius theory (see Proposition 37) we know that Λ_γ = 1 implies that 1 is a simple root of p_γ(x), giving a contradiction. Thus Λ_0 = 1 implies Λ_α = 1 for all α ∈ [0, 1]. A similar proof gives the converse, that Λ_α = 1 implies Λ_0 = 1, proving the lemma. □

Remark 8.
In view of the last two lemmas, given a well-conditioned system
Σ : y = G(x, y), when one wants to prove something about the positive solutions, the characteristic points, or whether or not Λ(a, b) = 1 at a characteristic point (a, b), one can, given any n > 0, assume without loss of generality that all n-th partials of each G_i with respect to the y_j are non-zero. In the rather scant literature on nonlinear systems one finds a preference for working with aperiodic systems (see, e.g., [9]), no doubt because of the simplicity of using uniform substitutions to convert such a system into one where the Jacobian of G has non-zero entries. With Lemmas 6 and 7, the need for the aperiodic hypothesis is avoided.

3.2. Basic Properties of (ρ, τ) and CP. Now we turn to the question of how to find information about the extreme point (ρ, τ) of a well-conditioned system Σ without solving the system for the standard solution T(x).

Lemma 9.
Let y = G(x, y) be a well-conditioned system with all entries of J_G non-zero.
(a) One has the formal equality

(7)  T′(x) = G_x(x, T(x)) + J_G(x, T(x)) · T′(x),

which also holds for x ∈ [0, ∞].
(b) All T′_i(ρ) are finite or all T′_i(ρ) = ∞.
(c) For all i, j the following hold:

0 < (∂G_i/∂y_j)(ρ, τ) · (∂G_j/∂y_i)(ρ, τ) ≤ 1,
0 < (∂G_i/∂y_j)(ρ, τ) < ∞,
0 < (∂G_i/∂y_i)(ρ, τ) ≤ 1.

Proof.
Differentiating (5) gives (7), so T′(x) is a solution to the irreducible system u = G_x(x, T(x)) + J_G(x, T(x)) · u, implying (b). For x ∈ (0, ρ) and for each i, j, (7) implies

T′_i(x) > (∂G_i/∂y_j)(x, T(x)) · T′_j(x),

and thus

1 > (∂G_i/∂y_j)(x, T(x)) · (∂G_j/∂y_i)(x, T(x)) > 0,

giving the inequalities in (c), since the value of (∂G_i/∂y_j)(ρ, τ) is the limit of (∂G_i/∂y_j)(x, T(x)) as x approaches ρ from below. □

Lemma 10.
Let y = G(x, y) be a well-conditioned system.
(a) If (a, b) ∈ CP then Λ(a, b) ≥ 1.
(b) 0 < Λ(a, T(a)) < 1, for 0 < a < ρ.
Proof.
For (a) note that (a, b) ∈ CP implies that 1 is an eigenvalue of J_G(a, b), so Λ(a, b) ≥ 1. For (b), with 0 < a < ρ, by the Perron–Frobenius theory of nonnegative matrices we know that there is a positive left eigenvector (a row vector) v belonging to Λ(a, T(a)). By (7),

v · T′(a) = v · G_x(a, T(a)) + v · J_G(a, T(a)) · T′(a),

so

v · T′(a) = v · G_x(a, T(a)) + Λ(a, T(a)) · v · T′(a).

Since v · T′(a) > 0 is finite and v · G_x(a, T(a)) > 0, it follows that Λ(a, T(a)) < 1; and Λ(a, T(a)) > 0 since J_G(a, T(a)) is a non-zero non-negative matrix. □

Proposition 11.
Let y = G(x, y) be a well-conditioned system. Suppose (a, b) and (c, d) are characteristic points and (a, b) ≤ (c, d). Then (a, b) = (c, d). Thus the set of characteristic points of the system forms an antichain under the partial ordering ≤.

Proof. For the proof assume, in view of Remark 8, that all second partials of the G_i with respect to the y_j do not vanish. If b = d then G(a, b) = b = d = G(c, d), which forces a = c by the monotonicity of each G_i.

Now assume b ≠ d. Since b ≤ d, all entries of d − b are non-negative. Using part of a Taylor series expansion,

G(c, d) ≥ G(a, b) + J_G(a, b)(d − b) + (1/2) ( (∂²G_1(a, b)/∂y_1²)(d_1 − b_1)², ..., (∂²G_m(a, b)/∂y_m²)(d_m − b_m)² )^T.

Since G(a, b) = b and G(c, d) = d,

d − b ≥ J_G(a, b)(d − b) + (1/2) ( (∂²G_1(a, b)/∂y_1²)(d_1 − b_1)², ..., (∂²G_m(a, b)/∂y_m²)(d_m − b_m)² )^T.

Let λ be the largest real eigenvalue of the positive matrix J_G(a, b), and let v be a positive left eigenvector belonging to λ. Applying v to both sides gives

v(d − b) ≥ v J_G(a, b)(d − b) + (1/2) v ( (∂²G_1(a, b)/∂y_1²)(d_1 − b_1)², ..., (∂²G_m(a, b)/∂y_m²)(d_m − b_m)² )^T
         = λ v(d − b) + (1/2) v ( (∂²G_1(a, b)/∂y_1²)(d_1 − b_1)², ..., (∂²G_m(a, b)/∂y_m²)(d_m − b_m)² )^T,

so

(1 − λ) v(d − b) ≥ (1/2) v ( (∂²G_1(a, b)/∂y_1²)(d_1 − b_1)², ..., (∂²G_m(a, b)/∂y_m²)(d_m − b_m)² )^T > 0,

since v is positive and some entry (d_i − b_i)² is positive. This forces λ < 1, contradicting Lemma 10(a). □
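The spectral-radius behavior underlying Lemma 10 can be watched numerically. For the toy symmetric system y_1 = x(1 + y_2²), y_2 = x(1 + y_1²) (our own illustration, not an example from the paper), the standard solution is T_1 = T_2 = T with T(x) = (1 − √(1 − 4x²))/(2x) and ρ = 1/2; along the solution curve, Λ(a, T(a)) stays strictly below 1 for 0 < a < ρ and reaches 1 exactly at the extreme point:

```python
import numpy as np

# Toy symmetric system (our own example): y1 = x*(1+y2^2), y2 = x*(1+y1^2).
def T(x):
    # standard solution component: T = x*(1+T^2)
    return (1 - (1 - 4 * x * x) ** 0.5) / (2 * x)

def Lam(x, y1, y2):
    # spectral radius of the Jacobian J_G at (x, y1, y2)
    J = np.array([[0.0, 2 * x * y2],
                  [2 * x * y1, 0.0]])
    return max(abs(np.linalg.eigvals(J)))

a = 0.4                                   # any 0 < a < rho = 1/2
assert abs(T(a) - 0.5) < 1e-12            # T(0.4) = 0.5 in closed form
assert Lam(a, T(a), T(a)) < 1             # Lemma 10(b): strictly below 1
assert abs(Lam(0.5, T(0.5), T(0.5)) - 1) < 1e-9   # equals 1 at (rho, tau)
```

Here Λ(a, T(a)) = 2aT(a), which increases monotonically to 1 as a → ρ, matching the continuity argument used above.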
Lemma 12.
Let y = G(x, y) be a well-conditioned system.
(a) (ρ, τ) is in the domain of J_G(x, y); that is, all entries of the matrix J_G(ρ, τ) are finite.
(b) If (ρ, τ) is in the interior of the domain of G(x, y) then it is a characteristic point.
(c) 0 < Λ(ρ, τ) ≤ 1.
(d) Λ(ρ, τ) = 1 iff 1 is an eigenvalue of J_G(ρ, τ) iff (ρ, τ) ∈ CP.

Proof. For item (a), first let Σ⋆ be a well-conditioned self-substitution transform of Σ with all entries in J_G⋆(x, y) non-zero (see Remark 8). By Lemma 9, all entries of J_G⋆(ρ, τ) are finite. Then Lemma 5(d) shows that all entries of J_G(ρ, τ) are finite. For the remainder of the proof we can assume that all entries in J_G are non-zero.

For part (b) one argues just as in the case of a single equation: if (ρ, τ) were an interior point but not a characteristic point, then by the implicit function theorem there would be an analytic continuation of T(x) at ρ, which is impossible.

For (c), since Λ is a continuous nondecreasing function by Corollary 38, and since the limit of J_G(x, T(x)) as x approaches ρ from below is J_G(ρ, τ), it follows from Lemma 10(b) that Λ(ρ, τ) ≤ 1.

For (d), clearly Λ(ρ, τ) = 1 implies 1 is an eigenvalue of J_G(ρ, τ), and this in turn implies that (ρ, τ) ∈ CP. Now suppose that (ρ, τ) ∈ CP. Then 1 is an eigenvalue of J_G(ρ, τ), so Λ(ρ, τ) ≥ 1. Thus (c) gives Λ(ρ, τ) = 1. □

Lemma 13.
Let y = G(x, y) be a well-conditioned system. If (a, b) is a characteristic point and (a, b) ≠ (ρ, τ), then either
(a) b_i > τ_i for all i, or
(b) a < ρ and b_i > T_i(a) for all i, and some b_j > τ_j.

Proof. Conditions (c) and (d) in the definition of well-conditioned ensure that each G_i(x, y) depends on x. In view of Remark 8, assume that all second partials of each G_i(x, y) with respect to the y_j are non-zero. Suppose that (a) does not hold.

Claim 1:
If some b_i > τ_i and some b_j ≤ τ_j, then a < ρ and T_i(a) < b_i for 1 ≤ i ≤ m.

WLOG assume that b_1 ≤ τ_1, ..., b_k ≤ τ_k and b_{k+1} > τ_{k+1}, ..., b_m > τ_m. From the monotonicity and continuity of the T_i on [0, ρ] it follows that for 1 ≤ i ≤ k there exist unique ξ_i ∈ (0, ρ] such that b_i = T_i(ξ_i). WLOG assume that 0 < ξ_1 ≤ · · · ≤ ξ_k ≤ ρ. For i ∈ {1, ..., k},

T_i(ξ_1) ≤ T_i(ξ_i) = b_i,

and for k + 1 ≤ i ≤ m,

T_i(ξ_1) ≤ T_i(ρ) < b_i.
Now suppose ξ_1 < a. Then

b_1 = G_1(ξ_1, T_1(ξ_1), ..., T_m(ξ_1)) < G_1(a, b_1, ..., b_m) = b_1,

a contradiction. Thus 0 < a ≤ ξ_1 ≤ · · · ≤ ξ_k ≤ ρ. Using this one has, for 1 ≤ i ≤ k:

T_i(ξ_i) = b_i = G_i(a, T_1(ξ_1), ..., T_k(ξ_k), b_{k+1}, ..., b_m) > G_i(a, T_1(a), ..., T_k(a), T_{k+1}(a), ..., T_m(a)) = T_i(a),

the strict inequality coming from b_{k+1} > τ_{k+1} ≥ T_{k+1}(a). Thus for 1 ≤ i ≤ k,

0 < a < ξ_i ≤ ρ and T_i(a) < T_i(ξ_i) = b_i.

Furthermore, for k + 1 ≤ i ≤ m, T_i(a) < T_i(ρ) < b_i. Thus, in this case, for 1 ≤ i ≤ m one has T_i(a) < b_i.

Claim 2: If b_i ≤ T_i(ρ) for all i, then a < ρ and b_i = T_i(a) for all i.

Choose ξ_i ∈ (0, ρ] such that b_i = T_i(ξ_i). WLOG one can assume 0 < ξ_1 ≤ · · · ≤ ξ_m ≤ ρ. If ξ_1 < a then

b_1 = G_1(a, T_1(ξ_1), ..., T_m(ξ_m)) > G_1(ξ_1, T_1(ξ_1), ..., T_m(ξ_1)) = T_1(ξ_1) = b_1,

a contradiction. Thus a ≤ ξ_1 ≤ · · · ≤ ξ_m ≤ ρ. Next one has

b_m = G_m(ξ_m, T_1(ξ_m), ..., T_m(ξ_m)) ≥ G_m(a, T_1(ξ_1), ..., T_m(ξ_m)) = b_m,

so the ≥ step must be an equality, and this implies ξ_m = a. Thus all ξ_i = a, and then for all i one has b_i = T_i(a). Since (a, b) = (a, T(a)) is assumed to be a different characteristic point from (ρ, τ), it follows that a < ρ.

Claim 3:
It is not the case that b_i ≤ τ_i for all i.

Otherwise, by Claim 2 we would have (a, b) = (a, T(a)) with 0 < a < ρ, and then by Lemma 10 it would follow that (a, b) ∉ CP. But by assumption, (a, b) ∈ CP. □

Theorem 14.
Suppose (ρ, τ) is a characteristic point of a well-conditioned system y = G(x, y). Then:
(a) ρ is the largest first coordinate of any characteristic point, that is,
  ρ = max{ a : (a, b) ∈ CP };
(b) (ρ, τ) is the only characteristic point whose first coordinate is ρ.

Proof. Use Proposition 11 and Lemma 13. □
Turning to 1-equation systems, we have the following results.
Proposition 15.
A well-conditioned 1-equation system y = G(x, y) has at most one characteristic point; if there is such a point it must be the extreme point (ρ, τ) of the standard solution T(x).

Proof. The characteristic system is
  y = G(x, y)
  1 = G_y(x, y).
Suppose (a, b) ∈ CP is different from (ρ, τ). Then b > τ by Lemma 13.
CASE 1: Suppose a > ρ. Then (ρ, τ) is in the interior of Dom⁺(G), so (ρ, τ) ∈ CP by Lemma 12(b). But this violates the antichain condition of Proposition 11 for CP.
CASE 2: Suppose a ≤ ρ. Then b = G(a, b) and T(a) = G(a, T(a)) lead to 1 = G_y(a, ξ) for some T(a) < ξ < b. But G_y(a, b) = 1 since (a, b) ∈ CP, so again we have a contradiction, by the strict monotonicity of G_y(x, y) in Dom⁺(G).
Thus the only possible (a, b) ∈ CP is (ρ, τ). □

Remark 16.
Meir and Moon [15] prove that well-conditioned 1-equation systems have at most one characteristic point in the interior of Dom⁺(G), and that if such a point exists then it must be (ρ, τ). See also Flajolet and Sedgewick [9], Chapter VII. The simple systems y = xA(y) studied by Meir and Moon appear frequently in the book [9] of Flajolet and Sedgewick. Letting ρ_A be the radius of convergence of A(y), they use the hypothesis
(8)  lim_{y→ρ_A⁻} yA′(y)/A(y) > 1
to guarantee that (ρ, τ) is in the interior of the domain of convergence of xA(y). The following corollary improves on their results by giving a precise condition for there to be a characteristic point (which must be (ρ, τ) by Proposition 15), and giving a precise condition for when (ρ, τ) is a characteristic point on the boundary [in the interior] of Dom⁺(G). Corollary 17.
Suppose y = G(x, y) is a well-conditioned 1-equation system with G(x, y) = xA(y); that is, A(y) is a power series Σ_{n≥0} a_n yⁿ with non-negative coefficients, and both A(0) and A″(y) are non-zero. Let
  B(y) = yA′(y) − A(y) + A(0).
Then the characteristic system is equivalent to
  B(y) = A(0)
  x = y/A(y),
and one has:
(a) CP = Ø iff B(ρ_A) < A(0);
(b) B(ρ_A) ≥ A(0) implies CP = {(ρ, τ)};
(c) B(ρ_A) = A(0) implies (ρ, τ) is on the boundary of Dom⁺(G);
(d) B(ρ_A) > A(0) implies (ρ, τ) is in the interior of Dom⁺(G).
Proof.
It is easy to verify the alternative form of the characteristic equations given in the corollary, and then note that
  B(y) = Σ_{n≥2} (n − 1) a_n yⁿ
is strictly increasing on [0, ρ_A]. □

Remark 18.
In Proposition VI.5 of [9] on simple 1-equation systems, the full well-conditioned hypothesis is not used; instead the non-linearity condition A″(y) ≠ 0 is replaced by the stronger condition (8). This implies B(ρ_A) > A(0), and thus one has (ρ, τ) in the interior of Dom⁺(G).
In the sentence following this proposition it is claimed that replacing (8) by ρ_A = ∞ gives hypotheses which imply (8). This is not correct unless one adds the condition A″(y) ≠ 0; that is, the correct formulation is: well-conditioned plus ρ_A = ∞ implies (8).

4. Eigenpoints
The results developed so far do not give a practical way of locating (ρ, τ) for well-conditioned systems with more than one equation. Even if one is successful in finding all the characteristic points, no means has yet been formulated to determine whether (ρ, τ) is among them. In this section special characteristic points called eigenpoints are shown to provide the correct analog of characteristic points when moving from 1-equation systems to multi-equation systems.
Proposition 19.
Suppose (a, b) is a characteristic point of the well-conditioned system y = G(x, y). Then Λ(a, b) = 1 iff (a, b) = (ρ, τ).

Proof. We can assume that no partial ∂G_i/∂y_j is zero. The direction (⇐) follows from Lemma 12(d). To prove the direction (⇒), assume (a, b) ≠ (ρ, τ). By Lemma 13 one has two cases to consider:
(I) a > ρ and, for all i, b_i > τ_i;
(II) a ≤ ρ and, for all i, b_i > T_i(a).
For (I), (ρ, τ) is in the interior of the domain of G, so by Lemma 12(b) it is a characteristic point. However this contradicts Proposition 11, which says the characteristic points form an antichain.
For (II), from the equations
  G(a, b) − b = 0
  G(a, T(a)) − T(a) = 0
one can apply a multivariate version of the mean value theorem to derive
(9)  (∂G_i/∂y_j (a, v_ij)) · (b − T(a)) = b − T(a),
with v_ij = (v_ij(1), …, v_ij(m)) satisfying
  v_ij(r) = b_r if r < j,
  T_j(a) < v_ij(r) < b_j if r = j,
  v_ij(r) = T_r(a) if r > j.
Clearly (9) shows that λ = 1 is an eigenvalue of the matrix (∂G_i/∂y_j(a, v_ij)), and from the properties of the v_ij we see that for all i, j,
  ∂G_i/∂y_j(a, v_ij) < ∂G_i/∂y_j(a, b),
since each ∂G_i/∂y_j depends on all the variables x, y_1, …, y_m. From these remarks and the monotonicity of Λ one has
  1 ≤ Λ((∂G_i/∂y_j(a, v_ij))) < Λ(a, b),
showing that (a, b) ≠ (ρ, τ) implies Λ(a, b) > 1. □
A characteristic point (a, b) is an eigenpoint if Λ(a, b) = 1.
The following theorem summarizes the key results for well-conditioned systems. Theorem 21.
Let
Σ : y = G(x, y) be a well-conditioned system. Then the following hold:
(a) (ρ, τ) ∈ Dom⁺(G).
(b) If (ρ, τ) is in the interior of Dom⁺(G), then it is an eigenpoint.
(c) The system Σ has at most one eigenpoint.
(d) If there is an eigenpoint of Σ, then it must be (ρ, τ).
(e) If there is no eigenpoint of Σ, then (ρ, τ) lies on the boundary of Dom⁺(G) and one has Λ(ρ, τ) < 1.

This result can be superior to Theorem 14 for computing purposes, since the latter requires that one know all characteristic points of Σ before being able to isolate the one candidate for (ρ, τ). Theorem 21 says that if one can find a characteristic point (a, b) with J_G(a, b) having largest positive eigenvalue 1, then it is (ρ, τ). As with the 1-equation case, if there are no eigenpoints of Σ, then new methods are needed.
Flajolet and Sedgewick do not make use of the theory of characteristic points in their work on multi-equation systems in [9] beyond citing the work of Drmota. Instead, they consider the polynomial case in the general setting of arbitrary non-degenerate m-equation systems P(x, y) = 0 in Chap. VII.
Let C be the set of solution points (a, b) ∈ C^{m+1} of such a system. The non-degeneracy condition implies that each C_i := {(a, b_i) : (a, b) ∈ C} is an algebraic curve. For such curves there is a simple procedure to find a finite set X_i of points (a, b_i) such that all singularities of C_i are in X_i.
When applying the general method of [9] to the special case of well-conditioned systems y = G(x, y), to find the extreme point (ρ, τ) one can bypass the considerable work of (1) determining the branch points (a, b_i) of the algebraic curves C_i among the points in X_i, and then (2) studying the Puiseux expansions of branches of C_i about these branch points. Instead one only needs to test the finitely many points in {(a, b) : (a, b_i) ∈ X_i} to see which is the eigenpoint of the system; this will be (ρ, τ).
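The eigenpoint test is easy to carry out numerically: approximate the Jacobian J_G at a candidate characteristic point and compute its Perron root. The sketch below is illustrative only: the 2-equation system and the candidate point are hypothetical choices (not taken from the paper), selected so that the test can be verified by hand.

```python
from math import sqrt

def G(x, y):
    # hypothetical well-conditioned 2-equation system (illustrative only):
    #   y1 = x*(1 + 2*y1*y2),  y2 = x*(1 + y1^2 + y2^2)
    y1, y2 = y
    return [x * (1 + 2 * y1 * y2), x * (1 + y1 * y1 + y2 * y2)]

def jacobian(x, y, h=1e-6):
    # central-difference approximation of J_G = (dG_i/dy_j)
    m = len(y)
    J = [[0.0] * m for _ in range(m)]
    for j in range(m):
        yp, ym = list(y), list(y)
        yp[j] += h
        ym[j] -= h
        gp, gm = G(x, yp), G(x, ym)
        for i in range(m):
            J[i][j] = (gp[i] - gm[i]) / (2 * h)
    return J

def perron_root(J, iters=100):
    # power iteration: converges to Lambda(J) for an irreducible
    # nonnegative matrix (Proposition 37)
    v = [1.0] * len(J)
    lam = 1.0
    for _ in range(iters):
        w = [sum(row[j] * v[j] for j in range(len(v))) for row in J]
        lam = max(w)
        v = [wi / lam for wi in w]
    return lam

# candidate point (a, b) = (sqrt(2)/4, (1/sqrt(2), 1/sqrt(2))) for this system
a, b = sqrt(2) / 4, [1 / sqrt(2), 1 / sqrt(2)]
assert all(abs(gi - bi) < 1e-12 for gi, bi in zip(G(a, b), b))  # b = G(a, b)
assert abs(perron_root(jacobian(a, b)) - 1.0) < 1e-6           # Lambda(a, b) = 1
```

Since Λ(a, b) = 1, Theorem 21(d) identifies this characteristic point as the extreme point (ρ, τ) of the sketched system.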
5. Drmota’s Theorem Revisited
In 1993 Lalley [12] proved that the solutions y_i = T_i(x) to a well-conditioned polynomial system y = G(x, y) have a square-root singularity at ρ, and thus one has the familiar Pólya asymptotics for the coefficients. In 1997 [7], and again in 2009 [8], Drmota presented the first sweepingly general theorem concerning the asymptotic behavior of the coefficients of solutions of a well-conditioned system, namely that the coefficients again satisfy the same law that Pólya found to be true for several classes of trees (see [18]). However, as explained in Footnote 2, the hypotheses that Drmota gives for the characteristic points of the system seem to be incorrect in the first publication, and vague in the second. To prove the theorem one needs to be able to show that (ρ, τ) is in the interior of the domain of G(x, y). The following subsection gives a clear statement of the hypotheses needed, along with a slightly different proof of the key induction step.
5.1. Drmota’s Theorem.
The following version is somewhat simpler than that presented by Drmota, since there are no parameters.
Theorem 22.
Let
Σ : y = G(x, y) be a well-conditioned system with standard solution T(x). Suppose Σ has an eigenpoint (ρ, τ) in the interior of Dom⁺(G). Then each T_i(x) is the standard solution to a well-conditioned 1-equation system y_i = Ĝ_i(x, y_i) with (ρ, τ_i) in the interior of Dom⁺(Ĝ_i). Thus each T_i(x) has a square-root singularity at ρ, and the familiar Pólya asymptotics (see, e.g., [2]) hold for the non-zero coefficients.

Proof. One only needs to consider the case that the system has at least two equations, and one can assume all second partials of the G_i with respect to the y_j are non-zero. The following shows that eliminating the first equation (and y_1) yields a well-conditioned system with one less equation which has the standard solution (T_2(x), …, T_m(x)) and an eigenpoint in the interior of the domain of the system.
By the Implicit Function Theorem one can solve the first equation y_1 = G_1(x, y) for y_1, say
  y_1 = H(x, y_2, …, y_m),
where H is holomorphic in a neighborhood of the origin; that is, H(0, 0) = 0 and
  H(x, y_2, …, y_m) = G_1(x, H(x, y_2, …, y_m), y_2, …, y_m)
in a neighborhood of the origin.
Since the T_i(x) take small values near the origin (as they are continuous functions that vanish at x = 0), it follows that
  H(x, T_2(x), …, T_m(x)) = G_1(x, H(x, T_2(x), …, T_m(x)), T_2(x), …, T_m(x))

[Footnote: Having a polynomial system is a very strong condition, since it immediately tells you that ρ is a branch point, which leads to a Puiseux expansion; it is only a matter of determining the order of the branch point (which is nonetheless a nontrivial task).]
[Footnote: The book [9] gives a detailed study of well-conditioned polynomial systems, but only states the result for general well-conditioned systems. This statement is the 1997 version of Drmota’s theorem, including the error in the hypotheses.]
[Footnote, continued: The simplest patch is to replace the condition that ‘some characteristic point (a, b) is in the interior of the domain’ with the requirement that ‘(ρ, τ) is in the interior of the domain’.]

holds in a neighborhood of the origin. Also one has
  T_1(x) = G_1(x, T_1(x), T_2(x), …, T_m(x))
holding in a neighborhood of the origin, so by the uniqueness of solutions in such a neighborhood we must have
  T_1(x) = H(x, T_2(x), …, T_m(x))
in a neighborhood of the origin. By Proposition 36, this equation actually holds globally for |x| ≤ ρ; in particular H converges at (ρ, τ_2, …, τ_m). By Corollary 38(a) the Jacobian 1 − ∂G_1/∂y_1 of the equation y_1 = G_1(x, y) does not vanish at (ρ, τ). Thus, by the Implicit Function Theorem, H is holomorphic at (ρ, τ_2, …, τ_m).
Now discarding the first equation and substituting H(x, y_2, …, y_m) for y_1 in the remaining equations gives a well-conditioned system of m − 1 equations
  y_i = G⋆_i(x, y_2, …, y_m), 2 ≤ i ≤ m,
with standard solution (T_2(x), …, T_m(x)) whose extreme point (ρ, τ_2, …, τ_m) is an eigenpoint, since it is a characteristic point of the system that is in the interior of Dom⁺(G⋆). Thus the elimination procedure can continue if G⋆ consists of more than one equation. □

The extreme point of a well-conditioned polynomial system, such as Example 32, is always a characteristic point, and, as Lalley [12] proved, the coefficients of the solutions T_i(x) have the classical Pólya form C_i ρ^{−n} n^{−3/2}. Drmota [7] extended Lalley’s result to well-conditioned power series systems with the extreme point in the interior of the domain of the system. A natural (and desirable) direction for further research would be to drop the irreducibility requirement. However, even in the polynomial case, this leads to substantial challenges; see Example 34.
5.2. A Wealth of Examples.
In [2] we showed that single-equation systems formed from a wide array of standard operators like Multiset, Cycle, and Sequence led to square-root singularities and Pólya asymptotics for the coefficients. The arguments used there carry over easily to the setting of systems of equations, since the conditions in that paper force the positive domain to be an open set, and this guarantees that (ρ, τ) is an interior point of the domain of the system, leading to a wealth of examples.
6.
Some Open Problems about Characteristic Points of Well-Conditioned Systems
Question 1.
How can one locate ( ρ, τττ ) if it is not a characteristic point? Question 2.
Is the set of characteristic points always finite?
As one can see in the examples of Appendix A, a system can have multiple characteristic points; the two-equation polynomial system in Example 32 has four characteristic points. Example 35 shows that the set of real solutions to the characteristic system need not be finite. However, Question 2 asks whether the set of positive solutions is finite.
Appendix A. A Collection of Basic Examples
The following examples explore the behavior of characteristic points of well-conditioned systems; the computational steps have been omitted. However, the reader can find complete details online in the original preprint [3]. A.1.
Examples for 1-equation systems.
For 1-equation systems the following two examples show the three kinds of possible behavior, namely: (i) there is a characteristic point which is an interior point and thus equal to (ρ, τ); (ii) there is a characteristic point which is a boundary point and thus equal to (ρ, τ); and (iii) there is no characteristic point. If (ρ, τ) is in the interior of the domain of G then x = ρ is a square-root singularity of T(x). Each example starts with an equation y = G(x, y) where the characteristic point (ρ, τ) is in the interior of the domain of G(x, y). Then the example is modified to give a system y = G⋆(x, y) with (ρ⋆, τ⋆) on the boundary of the domain of G⋆(x, y). (ρ⋆, τ⋆) is a characteristic point in Example 23 but not in Example 24. Example 23.
Let G(x, y) = x(1 + y²). For the characteristic system
  y = x(1 + y²)
  1 = 2xy
of y = G(x, y) one has the characteristic point (1/2, 1), an interior point of the domain of G(x, y), so for the standard solution y = S(x) of y = G(x, y) one has (ρ, τ) = (1/2, 1). The established theory for such a system (see [9], Chapter VII) shows that S(x) has a square-root singularity at x = ρ.
Next let G⋆(x, y) = S(x)(1 + y²)/2. For the characteristic system
  y = S(x)(1 + y²)/2
  1 = S(x)·y
once again the characteristic point is (1/2, 1), but now it is a boundary point of the domain of G⋆(x, y). An examination of the standard solution (see Proposition 27) of y = G⋆(x, y), namely y = T(x) = S(S(x)/2), shows that it has a fourth-root singularity at x = 1/2. Example 24.
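Example 23 can be checked numerically from the closed form S(x) = (1 − √(1 − 4x²))/(2x), obtained by solving xy² − y + x = 0 for the branch vanishing at 0. The sketch below (illustrative, not part of the paper) also estimates the singular exponents, recovering the square-root singularity of S and the fourth-root singularity of T(x) = S(S(x)/2).

```python
from math import sqrt, log

def S(x):
    # standard solution of y = x*(1 + y^2): root of x*y^2 - y + x = 0
    return (1 - sqrt(max(1 - 4 * x * x, 0.0))) / (2 * x)

rho, tau = 0.5, 1.0
assert abs(S(rho) - tau) < 1e-9          # S(rho) = tau
assert abs(2 * rho * tau - 1) < 1e-12    # characteristic equation 1 = 2*x*y

def T(x):
    # standard solution of y = S(x)*(1 + y^2)/2, namely T(x) = S(S(x)/2)
    return S(S(x) / 2)

def exponent(f, limit, h=1e-8):
    # estimate e in: limit - f(rho - h) ~ C * h^e
    return log((limit - f(rho - 16 * h)) / (limit - f(rho - h))) / log(16)

assert abs(exponent(S, tau) - 0.5) < 0.02    # square-root singularity of S
assert abs(exponent(T, tau) - 0.25) < 0.02   # fourth-root singularity of T
```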
Let G(x, y) = x(1 + y + 2y²). The characteristic system
  y = x(1 + y + 2y²)
  1 = x(1 + 4y)
of y = G(x, y) has the characteristic point ((√8 − 1)/7, √2/2), an interior point of the domain of G(x, y), so for the standard solution y = S(x) of y = G(x, y) one has ρ = (√8 − 1)/7 and τ = √2/2. S(x) has a square-root singularity at x = ρ.
(The possibilities for the nature of this singularity when (ρ, τ) is on the boundary of the domain of G have not been classified. Examples constructed along the lines of Proposition 27 show that one can have 2ᵏ-root singularities. Comments VI.18 and VI.19 on p. 407 of [9] state that one can have α-root singularities for various α > 1.)
Next let G⋆(x, y) be the modification of this equation obtained by inserting S(x) into G, as in Example 23. The standard solution of y = G⋆(x, y) is again y = S(x), so (ρ⋆, τ⋆) = (ρ, τ). The characteristic system of y = G⋆(x, y) has no characteristic point, since the only candidate is (ρ, τ) and it fails the second (derivative) equation of that characteristic system. (ρ, τ) is a boundary point of the domain of G⋆(x, y) whose location is not detected by the method of characteristic points. Remark 25.
On p. 83 of their 1989 paper [15]
Meir and Moon offer an interesting example of a 1-equation system without a characteristic point, namely y = A(x)e^y, where A(x) is a suitable constant multiple of Σ_{n≥1} xⁿ/n². The characteristic system is
  y = A(x)e^y
  1 = A(x)e^y,
so a characteristic point (a, b) must have b = 1 and A(a) = 1/e. But 1/e is not in the range of A(x), so there is no characteristic point. One can nonetheless easily find (ρ, τ) in this case, since (ρ, τ) must lie on the boundary of the domain of A(x)e^y. Thus ρ = 1, and then τ is determined by τ = A(1)e^τ.
The paper goes on to claim that by differential-equation methods one can show that the standard solution y = S(x) has coefficient asymptotics s(n) ∼ C/n. However this cannot be true, since such a solution would diverge at its radius of convergence ρ = 1 (see [2]), whereas the given equation y = A(x)e^y is nonlinear in y, so the solution must converge at ρ. A.2.
This subsection gives a framework for 1-equation examples which will be useful for building the 2-equation examples in §A.3.
Proposition 26.
Let A(x) be the standard solution of
(10)  y = x(1 + ay + by²)
where a ≥ 0 and b > 0. Then the following hold:
(a) A(x) = ((1 − ax) − √((1 − ax)² − 4bx²)) / (2bx).
(b) A(x) has non-negative coefficients.
(c) A sufficient condition for A(x) to have integer coefficients is that a and b are integers.
(d) A(x) has a positive radius of convergence ρ_A given by ρ_A = 1/(a + 2√b).
(e) τ_A := A(ρ_A) is finite and is given by τ_A = 1/√b.
(f) ρ_A is a square-root branch point of the algebraic curve defined by (10).
(g) (ρ_A, τ_A) is the unique characteristic point of (10); that is, it is the unique positive solution (x, y) to
  y = x(1 + ay + by²)
  1 = x(a + 2by).
Proof. (Exercise.) □
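Proposition 26 is easy to test numerically. A minimal sketch, taking a = b = 1 (an illustrative choice of the parameters in (10)): the truncated iteration produces x times the Motzkin numbers, and the closed form in (a) reproduces ρ_A = 1/3 and τ_A = 1.

```python
from math import sqrt

a, b = 1, 1      # illustrative parameters for (10): y = x*(1 + a*y + b*y^2)
N = 8            # truncation order: coefficients of degrees 0..7

def mul(p, q):
    r = [0] * N
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            if i + j < N:
                r[i + j] += pi * qj
    return r

# iterate y <- x*(1 + a*y + b*y^2); after N rounds the first N coefficients are exact
y = [0] * N
for _ in range(N):
    y2 = mul(y, y)
    u = [(1 if k == 0 else 0) + a * y[k] + b * y2[k] for k in range(N)]
    y = [0] + u[:N - 1]                    # multiplication by x
assert y == [0, 1, 1, 2, 4, 9, 21, 51]     # x times the Motzkin numbers

def A(x):
    # Proposition 26(a); disc is clamped at 0 to absorb rounding at x = rho
    disc = (1 - a * x) ** 2 - 4 * b * x * x
    return ((1 - a * x) - sqrt(max(disc, 0.0))) / (2 * b * x)

rho, tau = 1 / (a + 2 * sqrt(b)), 1 / sqrt(b)      # parts (d) and (e)
assert abs(A(rho) - tau) < 1e-6
x0 = 0.2                                           # functional equation check
assert abs(A(x0) - x0 * (1 + a * A(x0) + b * A(x0) ** 2)) < 1e-12
```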
Proposition 27.
Given a, c ≥ 0 and b, d > 0, let A(x) be the standard solution of y = x(1 + ay + by²) and let S(x) be the standard solution of y = x(1 + cy + dy²). Let T(x) be the standard solution of y = A(x)(1 + cy + dy²). Then the following hold:
(a) T(x) = S(A(x)).
(b) T(x) = ((1 − cA(x)) − √((1 − cA(x))² − 4dA(x)²)) / (2dA(x)).
(c) T(x) has non-negative coefficients.
(d) A sufficient condition for T(x) to have integer coefficients is that a, b, c, d are integers.
(e) If √b = c + 2√d then (ρ_T, τ_T) = (ρ_A, τ_S) = (1/(a + 2√b), 1/√d), and T(x) has a fourth-root singularity at ρ_T.
Proof. (Exercise.) □

The restriction √b = c + 2√d is called the critical composition condition (CCC); this is the condition needed for T(x) = S(A(x)) to be a critical composition (as defined by Flajolet and Sedgewick [9], p. 411).
A.3. Multi-equation systems.
Proposition 28.
Suppose a, c₁ ≥ 0, b, c₂, d > 0, √b = c + 2√d, and c = c₁ + c₂. Let A(x), S(x), and T(x) be as in Proposition 27. Then the following hold:
(a) The quadratic system
(SYS):  y₁ = A(x)(1 + c₁T(x) + c₂y₂ + dy₁²)
        y₂ = A(x)(1 + c₁T(x) + c₂y₁ + dy₂²)
is well-conditioned, and the standard solution is y₁ = y₂ = T(x).
(b) The extreme point (ρ, τ, τ) of (SYS) is given by
  (ρ, τ, τ) = (1/(a + 2√b), 1/√d, 1/√d).
It is on the boundary of the domain of (SYS).
(c) T(x) = S(A(x)) has a fourth-root singularity at x = ρ.
(d) A positive point (x, y, y) is a characteristic point of (SYS) iff either
(⋆)  A(x)(c₂ + 2√(d(1 + c₁T(x)))) = 1 and y = (1 − c₂A(x))/(2dA(x)),
or
(⋆⋆)  A(x)(c₂ + 2√(c₂² + d(1 + c₁T(x)))) = 1 and y = (1 + c₂A(x))/(2dA(x)).
(e) If c₁ = 0 then there are exactly two characteristic points of the form (x, y, y): the first is (ρ, τ, τ), a boundary characteristic point obtained from (⋆), and the second is the unique positive solution to (⋆⋆), an interior characteristic point. This is the only case where (⋆) contributes a characteristic point, namely (ρ, τ, τ), and this is the only case where (ρ, τ, τ) is a characteristic point.
(f) If 0 < c₁ = 2c₂ then there is a unique characteristic point of the form (x, y, y): it is the unique positive solution to (⋆⋆), and it is a boundary point different from (ρ, τ, τ).
(g) If 0 < c₁ < 2c₂ then there is a unique characteristic point of the form (x, y, y): it is the unique positive solution to (⋆⋆), and it is an interior point that is different from (ρ, τ, τ).
(h) If 2c₂ < c₁ then there are no characteristic points of the form (x, y, y), so again (ρ, τ, τ) is not a characteristic point.
(i) The second characteristic point in (e) and the unique characteristic points in (f) and (g) are given explicitly by
  x = (c₂ + √(c₂² + f)) / (ac₂ + 2c₂² + f + b + (a + 2c₂)√(c₂² + f)),
  y = (c + c₂ + √(c₂² + f)) / (2d),
where f = −c₁c₂ + 3c₂² + 4d.
Proof. (Exercise.) □
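The fourth-root singularity asserted in Proposition 27(e) and Proposition 28(c) can be observed numerically. A sketch using the parameters of Example 30 (a = 0, b = 9, c = 1, d = 1), with closed forms for A and S from Proposition 26(a); the exponent e is estimated from τ − T(ρ − h) ~ C·hᵉ.

```python
from math import sqrt, log

def A(x):
    # standard solution of y = x*(1 + 9*y^2); rho_A = 1/6, tau_A = 1/3
    return (1 - sqrt(max(1 - 36 * x * x, 0.0))) / (18 * x)

def S(u):
    # standard solution of y = u*(1 + y + y^2); rho_S = 1/3 = tau_A (CCC)
    return ((1 - u) - sqrt(max((1 - u) ** 2 - 4 * u * u, 0.0))) / (2 * u)

def T(x):
    # critical composition, Proposition 27(a)
    return S(A(x))

assert sqrt(9) == 1 + 2 * sqrt(1)       # CCC: sqrt(b) = c + 2*sqrt(d)
rho, tau = 1 / 6, 1.0                   # extreme point of T, Proposition 27(e)
assert abs(T(rho * (1 - 1e-12)) - tau) < 0.01

h = 1e-8
e = log((tau - T(rho - 16 * h)) / (tau - T(rho - h))) / log(16)
assert abs(e - 0.25) < 0.02             # fourth-root, not square-root
```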
Now we look at three well-conditioned examples that show some of the varied behavior of characteristic points when one has more than one equation in the system. In the first example there are two characteristic points, both in the interior of the domain of G(x, y), and one of them is (ρ, τ). In the second example one has a characteristic point in the interior of the domain of G(x, y) and (ρ, τ) is a characteristic point on the boundary of the domain. In the third example one has a characteristic point in the interior of the domain of G(x, y) but (ρ, τ) is not a characteristic point. In the second and third examples, ρ is not a square-root singularity of the solutions. Such examples show the need for a more subtle use of characteristic points in the pursuit of information on (ρ, τ) for multi-equation systems.
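Candidate characteristic points of such systems are also easy to test numerically. The sketch below does this for the two points of Example 29 below, assuming the system has the symmetric quadratic form y₁ = x(1 + y₂ + 2y₁²), y₂ = x(1 + y₁ + 2y₂²) (this exact form is an assumption); it also checks Proposition 19: the Perron root of the Jacobian is 1 at the extreme point and exceeds 1 at the other characteristic point.

```python
from math import sqrt

def G(x, y1, y2):
    # assumed symmetric quadratic form of the system in Example 29
    return (x * (1 + y2 + 2 * y1 ** 2), x * (1 + y1 + 2 * y2 ** 2))

def perron_root(x, y1, y2):
    # Jacobian J = [[4*x*y1, x], [x, 4*x*y2]]; for a symmetric point
    # (y1 == y2) its largest eigenvalue is 4*x*y1 + x
    return 4 * x * y1 + x

p1 = ((sqrt(8) - 1) / 7, sqrt(2) / 2, sqrt(2) / 2)
p2 = ((sqrt(12) - 1) / 11, (1 + sqrt(3)) / 2, (1 + sqrt(3)) / 2)

for (a, b, c) in (p1, p2):
    g1, g2 = G(a, b, c)
    assert abs(g1 - b) < 1e-12 and abs(g2 - c) < 1e-12        # fixed point
    assert abs((1 - 4*a*b) * (1 - 4*a*c) - a * a) < 1e-12     # det(I - J) = 0

assert abs(perron_root(*p1) - 1.0) < 1e-9    # eigenpoint: (rho, tau1, tau2)
assert perron_root(*p2) > 1.0                # Proposition 19: Lambda > 1 here
```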
Example 29.
For the system of two equations
  y₁ = x(1 + y₂ + 2y₁²)
  y₂ = x(1 + y₁ + 2y₂²)
add
  (1 − 4xy₁)(1 − 4xy₂) − x² = 0
to obtain the characteristic system. This is a polynomial system, so all characteristic points will be in the interior of the domain; and since (ρ, τ₁, τ₂) is also in the interior it must be a characteristic point. Let (a, b, c) be a characteristic point. By a computation we see that b ≠ c is impossible. Thus the characteristic points are the positive triples (a, b, b) satisfying
  b = a(1 + b + 2b²)
  ±a = 1 − 4ab.
From this the system has two characteristic points:
  ((√8 − 1)/7, √2/2, √2/2) ≈ (0.26120, 0.70711, 0.70711)
  ((√12 − 1)/11, (1 + √3)/2, (1 + √3)/2) ≈ (0.22401, 1.36603, 1.36603).
Now we are left with determining which of the two characteristic points is (ρ, τ₁, τ₂). By applying either Theorem 14 or Proposition 19, it is the first of these. Example 30.
Let a = 0, b = 9, c₁ = 0, c₂ = 1, and d = 1. These numbers satisfy (CCC). Following the hypotheses of Proposition 28, let A(x) be the standard solution to y = x(1 + 9y²) and consider the system
  y₁ = A(x)(1 + y₂ + y₁²)
  y₂ = A(x)(1 + y₁ + y₂²).
Since c₁ = 0 there are two characteristic points of the form (a, b, b). The first is the extreme point
  (ρ, τ, τ) = (1/6, 1, 1),
which lies on the boundary of the domain, and the second is the interior point obtained from the formulas in Proposition 28(i):
  ((1 + 16√2)/146, 1 + √2, 1 + √2). Example 31.
Let a = 0, b = 16, c₁ = 1, c₂ = 1, and d = 1. These numbers satisfy (CCC). Following the hypotheses of Proposition 28, let A(x) be the standard solution to y = x(1 + 16y²), and let T(x) be the standard solution to y = A(x)(1 + 2y + y²). Consider the system
  y₁ = A(x)(1 + T(x) + y₂ + y₁²)
  y₂ = A(x)(1 + T(x) + y₁ + y₂²).
Since 0 < c₁ < 2c₂, the extreme point
  (ρ, τ, τ) = (1/8, 1, 1)
is not a characteristic point, but there is a characteristic point of the form (a, b, b) in the interior of the domain of G, given by the formulas of Proposition 28(i). A.4.
Other examples.
The next example shows some characteristic points which are not of the form (x, y, y). Example 32.
The well-conditioned polynomial system
  y₁ = G₁(x, y₁, y₂) := x(1 + 2y + 2xyy)
  y₂ = G₂(x, y₁, y₂) := x(1 + xy + 2yy)
has four characteristic points which, to 6 places of accuracy, are:
  (0.1818598, 1.556545, 0.3647603)
  (0.2640956, 1.210710, 0.5353688)
  (0.3867644, 0.6661246, 3.834789)
  (0.4153198, 0.6217456, 0.4743552)
One sees that these four points form an antichain, as required by Proposition 11. The extreme point (ρ, τ₁, τ₂) of a polynomial system is a characteristic point. By Theorem 14 it must be the last one, since it has the largest x-value, assuming one has found all characteristic points of this system. If one is not sure that there are only four characteristic points then, by Theorem 21, it suffices to verify that the indicated characteristic point is an eigenpoint.
The next example demonstrates that iteration is not sufficient to obtain a new system Σ⋆ such that the Jacobian matrix J_{G⋆}(x, y) has non-zero entries. Example 33.
Consider the irreducible system y = G(x, y) of 4 equations:
  Σ:  y₁ = G₁(x, y₁, …, y₄) := x(y + y)
      y₂ = G₂(x, y₁, …, y₄) := x(y + y)
      y₃ = G₃(x, y₁, …, y₄) := x(y)
      y₄ = G₄(x, y₁, …, y₄) := x(y).
Let M = J_G. Then it is easy to check that certain entries of Mⁿ are 0 when n is odd, and others are 0 when n is even. Thus for every n ≥ 1 the Jacobian of the n-fold iterate of G has entries which are 0.
One can transform Σ into a system Σ⋆ where the Jacobian of G⋆ has all entries non-zero by doing selective substitutions. For example, in the first equation of Σ replace one of the two y's by the corresponding G(x, y), giving the system
  y₁ = x(1 + y·G(x, y) + y)
  y₂ = x(1 + y + y)
  y₃ = x(1 + y)
  y₄ = x(1 + y).
The first equation in this system is such that the right-hand side depends on all 4 of the y_i. Continuing in this manner one obtains a system in which every G_i(x, y) depends on each of y₁, …, y₄.
The next example shows complications which can arise with reducible systems.
Example 34.
Consider the reducible polynomial system
  y₁ = y₃(1 + y₂ + y₁²)
  y₂ = y₃(1 + y₁ + y₂²)
  y₃ = x(1 + 9y₃²).
Let the third equation have the standard solution y₃ = A(x). One then sees that this example is really just an alternate presentation of Example 30, where the solutions for y₁ and y₂ have a fourth-root singularity at their radius of convergence.
This final example shows that there can be infinitely many real solutions to a characteristic system, in contrast to what has been observed so far for characteristic points; see Question 2.
Example 35.
For the characteristic system (belonging to a 2-equation system)
  y₁ − x(y₁ + y₁y₂) = 0
  y₂ − x(y₂ + y₁y₂) = 0
  (x − 1)(x + xy₁ + xy₂ − 1) = 0
the real solutions include the infinite curve {(x, y₁, y₂) : x = 1, y₁y₂ = 0}.
Appendix B. Background Material
B.1.
The extended nonnegative real numbers.
Extend the usual operations on [0, ∞) to [0, ∞] in the obvious way, as follows:
  c + ∞ = ∞ for c ∈ [0, ∞]
  c · ∞ = ∞ for c ∈ (0, ∞]
  Σₙ cₙ = the usual infinite sum if all cₙ ∈ [0, ∞), and ∞ if some cₙ = ∞.
Here the usual infinite sum is ∞ if the series diverges. Note that 0 · ∞ is left undefined since it is indeterminate.
B.2. Formal power series in several variables.
This section gives the essential definitions that lay the foundations for working with formal power series in several variables. The standard number systems are: the set N = {0, 1, 2, …} of nonnegative integers, the set Q of rational numbers, the set R of real numbers, and the set C of complex numbers.
For the linearly ordered set R of real numbers one has the posets of real-valued functions on X, where the partial ordering is given by f ≤ g if f(x) ≤ g(x) for all x ∈ X. Familiar examples are:
(a) n-vectors v = (v₁, …, vₙ), by setting X = {1, …, n};
(b) m×n-matrices M, by setting X = {1, …, m} × {1, …, n};
(c) formal power series in k variables A(x₁, …, x_k), by setting X = Nᵏ. In this case a function a from Nᵏ to R provides the coefficients, and one writes
  A(x) := Σ_{i ∈ Nᵏ} a(i) xⁱ.
A matrix (or vector) M of real numbers is non-negative (written M ≥ 0) if each entry is non-negative, and positive (written M > 0) if each entry is positive. A power series A(x) is non-negative (written A(x) ≥ 0) if each coefficient is non-negative.
B.2.1.
Composition of formal power series.
For power series A(w₁, …, w_m) and B_ℓ(x), 1 ≤ ℓ ≤ m, where the constant term of each B_ℓ is zero, that is, b_ℓ(0) = 0, define the formal composition
  C(x) := A(B₁(x), …, B_m(x))
by defining the coefficient function as follows:
  c(i) := Σ_{j≥0} [xⁱ] a(j) · B₁(x)^{j₁} ⋯ B_m(x)^{j_m}.
Requiring that the constant term of the B_ℓ(x) be 0 guarantees that for each i only finitely many terms in this sum are nonzero. Consequently C(x) is indeed a formal power series.
B.2.2. The function defined by a formal power series.
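A truncated-series sketch of the composition rule just defined (illustrative, with one inner series, m = 1): composing A(w) = 1/(1 − w) with B(x) = x + x², which has zero constant term as required, gives 1/(1 − x − x²), whose coefficients are the Fibonacci numbers.

```python
N = 8  # track coefficients of degrees 0..7

def mul(p, q):
    r = [0] * N
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            if i + j < N:
                r[i + j] += pi * qj
    return r

def compose(a, B):
    # c(i) = sum over j of [x^i] a(j) * B(x)^j; since B has zero constant
    # term, B(x)^j contributes nothing below degree j, so the sum is finite
    assert B[0] == 0
    C = [0] * N
    Bpow = [1] + [0] * (N - 1)    # B(x)^0
    for j in range(N):
        for k in range(N):
            C[k] += a[j] * Bpow[k]
        Bpow = mul(Bpow, B)
    return C

A = [1] * N                       # A(w) = 1 + w + w^2 + ... = 1/(1 - w)
B = [0, 1, 1] + [0] * (N - 3)     # B(x) = x + x^2
assert compose(A, B) == [1, 1, 2, 3, 5, 8, 13, 21]   # Fibonacci numbers
```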
A power series A(x) in k variables defines a partial function, also denoted A(x), on Rᵏ (or Cᵏ) by setting
(11)  A(c) := Σ_{n≥0} Σ_{i₁+⋯+i_k = n} a(i) cⁱ  (c ∈ Rᵏ)
whenever the sum converges.
For A(x) a nonnegative power series in k variables, and for c ∈ [0, ∞]ᵏ, A(c) = ∞ if the series (11) diverges, that is, if
  lim_{n→∞} Σ_{j≤n} Σ_{i₁+⋯+i_k = j} a(i) cⁱ = ∞.
A nonnegative power series A(x) in k variables defines a left-continuous function from [0, ∞]ᵏ to [0, ∞] and is monotone nondecreasing in each variable on [0, ∞]ᵏ.
B.2.3. The derivatives of a formal power series.
Derivatives of [nonnegative] formal power series give [nonnegative] formal power series:
  ∂A(x)/∂x_j := Σ_{i_j ≥ 1} i_j a(i) x₁^{i₁} ⋯ x_j^{i_j − 1} ⋯ x_k^{i_k}.
The notation A_{x_j} is also used for the partial derivative ∂A/∂x_j.
B.2.4. Holomorphic functions and a law of permanence.
A complex-valued function f(x) of several complex variables is holomorphic at c if it is continuous and differentiable in a neighborhood of c. The notation [a, b] is short for
  [a₁, b₁] × ⋯ × [a_k, b_k].

Proposition 36 (A Law of Permanence for Functional Equations). Suppose A(x), B(x, y) ≥ 0. If there is an ε > 0 such that
  A(x) = B(x, A(x)) < ∞ for x ∈ [0, ε],
then
  A(x) = B(x, A(x)) for x ∈ [0, ∞].
If furthermore a > 0 and A(a) < ∞, then A(x) = B(x, A(x)) for |x_i| ≤ a_i, 1 ≤ i ≤ k, and A(x) is holomorphic for |x_i| < a_i, 1 ≤ i ≤ k.
Proof. This is a special case of Hille’s law of permanence for functional equations. □
B.3.
The Perron-Frobenius theory of nonnegative matrices.
The key to the main results of this paper is a handful of simple observations based on the well-known Perron-Frobenius theory of nonnegative matrices, developed ca. 1910.
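Proposition 37 below records the Perron-Frobenius facts needed. Its part (b), the Collatz-Wielandt characterization $\Lambda(M) = \max_{x > 0} \min_i (Mx)_i / x_i$, can be checked numerically by power iteration; a minimal sketch in plain Python (the helpers `matvec` and `perron_root` are ours, not the paper's):

```python
# Power iteration on a small irreducible nonnegative matrix. At the limiting
# positive eigenvector the Collatz-Wielandt ratio min_i (Mx)_i / x_i attains
# its maximum value Lambda(M); for any other positive x it is a lower bound.

def matvec(M, x):
    return [sum(row[j] * x[j] for j in range(len(x))) for row in M]

def perron_root(M, iters=200):
    """Return (Lambda(M), normalized positive eigenvector)."""
    x = [1.0] * len(M)
    for _ in range(iters):
        y = matvec(M, x)
        s = sum(y)
        x = [v / s for v in y]        # normalize: entries sum to 1
    y = matvec(M, x)
    lam = min(y[i] / x[i] for i in range(len(x)))
    return lam, x

M = [[0.0, 1.0],
     [1.0, 1.0]]   # irreducible; Lambda(M) = golden ratio (1 + sqrt(5))/2
lam, v = perron_root(M)
print(lam)          # ~ 1.6180339887
```

This also illustrates part (d): the computed eigenvector is positive and normalized to have entry sum 1.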
Proposition 37.
Let $M$ be an irreducible nonnegative nonzero $k \times k$ matrix with real entries.
(a) $M$ has a real eigenvalue.
(b) The largest real eigenvalue $\Lambda(M)$ is positive and is given by
$$\Lambda(M) = \max_{x > 0}\; \min_{1 \le i \le k} \frac{(Mx)_i}{x_i}.$$
(c) $\Lambda(M)$ is a simple root of the characteristic polynomial $p_M(\lambda) = \det(\lambda I - M)$.
(d) The eigenspace belonging to $\Lambda(M)$ is 1-dimensional, generated by a unique positive normalized eigenvector $v_M$. (Normalized means the sum of the entries is 1.)

Proof. (See §.) □

Note that Proposition 37(b) implies that for some $x > 0$ one has $\Lambda(M)$ equal to $\min_{1 \le i \le k} (Mx)_i / x_i$.

Corollary 38. (a)
A positive $k \times k$ matrix $M$, $k \ge 2$, has all diagonal entries $< \Lambda(M)$.
(b) $\Lambda(X)$ is a nondecreasing function on the set of nonnegative matrices, that is, $M_1 \le M_2$ implies $\Lambda(M_1) \le \Lambda(M_2)$. Furthermore, if every row [column] sum of $M_1$ is less than the corresponding row [column] sum of $M_2$, then $\Lambda(M_1) < \Lambda(M_2)$.
(c) $\Lambda(X)$ is a continuous function on the set of nonnegative matrices, where the matrices are thought of as points in $k^2$-space.

Proof. (Exercise.)
(Note: A special case of item (c) is stated on p. 2103 of Lalley [12], for certain Jacobian matrices denoted $J_z$, evaluated along certain curves.) □

References

[1] Edward A. Bender,
Asymptotic methods in enumeration.
SIAM Rev. (1974), 485–515.
[2] Jason P. Bell, Stanley N. Burris, and Karen A. Yeats, Counting Rooted Trees: The Universal Law $t(n) \sim C \cdot \rho^{-n} \cdot n^{-3/2}$. Electron. J. Combin. (2006), R63 [64pp.]
[3] Jason P. Bell, Stanley N. Burris, and Karen A. Yeats, Characteristic Points of Recursive Systems. Preprint 2009, http://arxiv.org/abs/0905.2585v1
[4] E. Rodney Canfield, Remarks on an asymptotic method in combinatorics. J. Combin. Theory Ser. A (1984), no. 3, 348–352.
[5] A. Cayley, Researches on the partition of numbers. Phil. Trans. Roy. Soc. London (1856), 127–140.
[6] A. Cayley, On the theory of the analytical forms called trees. Phil. Magazine (1857), 172–176.
[7] Michael Drmota, Systems of functional equations. Random Structures Algorithms (1997), 103–124.
[8] Michael Drmota, Random Trees. Springer, 2009.
[9] Philippe Flajolet and Robert Sedgewick, Analytic Combinatorics. Cambridge University Press, 2009.
[10] F.R. Gantmacher, Applications of the Theory of Matrices. Interscience Publishers, Inc., New York, 1959.
[11] E. Hille, Analytic Function Theory. Blaisdell Publishing Company, Waltham, 1962, 2 volumes.
[12] Stephen P. Lalley, Finite range random walk on free groups and homogeneous trees. The Annals of Probability, no. 4 (1993), 2087–2130.
[13] A. Meir and J.W. Moon, On the altitude of nodes in random trees. Canadian Journal of Mathematics (1978), 997–1015.
[14] A. Meir and J.W. Moon, Some asymptotic results useful in enumeration problems. Aequationes Math. (1987), 260–268.
[15] A. Meir and J.W. Moon, On an asymptotic method in enumeration. J. Combin. Theory Ser. A 51 (1989), no. 1, 77. Erratum: J. Combin. Theory Ser. A 52 (1989), no. 1, 163.
[16] A.M. Odlyzko, Asymptotic enumeration methods. Handbook of Combinatorics, Vol. 1, 2, 1063–1229, Elsevier, Amsterdam, 1995.
[17] R. Otter, The number of trees. Annals of Mathematics (1948), 583–599.
[18] G. Pólya and R.C. Read, Combinatorial enumeration of groups, graphs and chemical compounds. Springer-Verlag, New York, 1987.
[19] J.J. Sylvester, On the partition of numbers. Quarterly J. Math. (1855), 141–152.
[20] Alan R. Woods, Coloring rules for finite trees, probabilities of monadic second order sentences. Random Structures Algorithms 10