arXiv [math-ph]

A model for coagulation with mating
Raoul Normand∗
Laboratoire de Probabilités et Modèles Aléatoires, UPMC, 175 rue du Chevaleret, 75013 Paris, France
Abstract
We consider in this work a model for aggregation, where the coalescing particles initially have a certain number of potential links (called arms) which are used to perform coagulations. There are two types of arms, male and female, and two particles may coagulate only if one has an available male arm and the other has an available female arm. After a coagulation, the used arms are no longer available. We are interested in the concentrations of the different types of particles, which are governed by a modification of Smoluchowski’s coagulation equation, that is, an infinite system of nonlinear differential equations. Using generating functions and solving a nonlinear PDE, we show that, up to some critical time, there is a unique solution to this equation. The Lagrange Inversion Formula allows us, in some cases, to obtain explicit solutions, and to relate our model to two recent models for limited aggregation. We also show that, whenever the critical time is infinite, the concentrations converge to a state where all arms have disappeared, and the distribution of the masses is related to the law of the size of some two-type Galton-Watson tree. Finally, we consider a microscopic model for coagulation: we construct a sequence of Marcus-Lushnikov processes, and show that it converges, before the critical time, to the solution of our modified Smoluchowski equation.
MSC: Primary 34A34, 60K35; Secondary 60B12, 82C23, 82D60
Keywords: Coagulation equations; Gelation; Generating functions; Method of characteristics; Marcus-Lushnikov process; Hydrodynamic limit
Introduction

In this work, we study a model for coagulation of particles, generalizing the original model of Smoluchowski [26] and a recent model of Bertoin [2]. We consider particles which are initially given a certain number of male and female arms. These arms are used to perform the coagulations: two particles coagulate when a male arm of one and a female arm of the other bind. This can be used to model the formation of polymers. For instance, consider male particles (which have only male arms) and female particles. Then a coagulation between a male and a female particle can be thought of as an ionic bond between a cation and an anion. This kind of model has also been investigated in the physical literature. For instance, in [13], [14], the authors study coalescing monomers with two types, A and B, with bonding only allowed between A and B, hence forming alternating linear polymers. In this work, this corresponds to giving each particle exactly one male arm and one female arm.

∗ E-mail address: [email protected].

In our model, a particle is characterised by a triple (a, b, m), a ∈ ℕ being its number of male arms, b ∈ ℕ its number of female arms, and m ∈ ℕ* its mass. Two particles may coagulate when one has an available female arm and the other has an available male arm, and when a coagulation occurs, the used arms disappear. Hence, we may only observe the transition

{(a, b, m), (a′, b′, m′)} → (a + a′ − 1, b + b′ − 1, m + m′).

We will assume that this transition occurs at a rate given by the number of pairs formed of a female arm and a male arm, that is, a′b + ab′. We wish to study how the concentration of each type of particle evolves as time passes. The precise mathematical formulation is given in Section 2.

This model is a modification of the well-known model of Smoluchowski [26]. Recall that Smoluchowski’s coagulation equations [26] describe the evolution of the concentrations of particles in a medium, where particles are characterised only by their masses.
When two particles of masses m and m′ coagulate, they merge into a single particle of mass m + m′. Such a coagulation occurs at rate κ(m, m′), where κ is some symmetric nonnegative kernel. In Smoluchowski’s original model, the masses are assumed to be positive integers. The concentration c_t(m) of particles of mass m is governed by the following infinite system of nonlinear differential equations:

d/dt c_t(m) = (1/2) Σ_{m′=1}^{m−1} c_t(m′) c_t(m − m′) κ(m′, m − m′) − c_t(m) Σ_{m′=1}^{+∞} c_t(m′) κ(m, m′),

for m ∈ ℕ*. The first term accounts for the creation of particles of mass m by coagulation of particles of masses m′ and m − m′; the second for the disappearance of particles of mass m by coagulation with other particles.

For general kernels κ, explicit solutions are not known. However, some have been obtained in different cases, notably when the kernel is constant [26], additive [11] or multiplicative [20]. In the multiplicative case, solutions are obtained up to a critical time, known as the gelation time. This is interpreted as the time when a particle of infinite mass appears. It absorbs some of the particles, and the total mass starts to decrease.

Smoluchowski’s equation (and some variations) has been extensively studied, both from an analytical (e.g. [5], [8], [15]) and a probabilistic point of view (e.g. [6], [16], [18], [24]; see also the review by Aldous [1]). In general, little is known after the gelation time, and most results are obtained before it (see however [9], [10]). The existence and uniqueness of a solution before gelation was obtained only in 1999 by Norris [24], under the assumption that κ is sublinear, i.e. κ(n, m)/(nm) is bounded.

From a probabilistic point of view, some microscopic models have been studied, beginning with Marcus [23] and Lushnikov [21]. Heuristically, one considers a finite number of particles, and each couple of particles with masses m and m′ coalesces at rate κ(m, m′).
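To make the classical setting concrete, here is a small numerical sketch (ours, not part of the paper): an explicit Euler integration of a mass-truncated version of the system above, for the constant kernel κ ≡ 1 and monodisperse initial data c₀(m) = 1_{m=1}, compared against the classical explicit solution c_t(m) = 4 t^{m−1}/(2 + t)^{m+1} for this kernel. The cutoff M and step dt are arbitrary choices.

```python
# Sketch (not from the paper): explicit Euler on a mass-truncated Smoluchowski
# system with constant kernel kappa = 1 and monodisperse initial data
# c_0(m) = 1 if m == 1 else 0.  The classical explicit solution in this case
# is c_t(m) = 4 t**(m-1) / (2 + t)**(m+1).
M, dt, T = 80, 1e-3, 1.0

c = [0.0] * (M + 1)              # c[m] = concentration of mass-m particles
c[1] = 1.0
t = 0.0
while t < T - 1e-12:
    total = sum(c[1:])           # sum over m' of c_t(m')
    dc = [0.0] * (M + 1)
    for m in range(1, M + 1):
        # creation of mass m from pairs (m', m - m'); the factor 1/2 avoids
        # counting each pair twice
        gain = 0.5 * sum(c[mp] * c[m - mp] for mp in range(1, m))
        dc[m] = gain - c[m] * total
    for m in range(1, M + 1):
        c[m] += dt * dc[m]
    t += dt

exact = lambda m: 4 * t ** (m - 1) / (2 + t) ** (m + 1)
mass = sum(m * c[m] for m in range(1, M + 1))   # total mass, should stay ~1
```

At t = 1 the scheme reproduces c_t(1) = 4/9 to a few decimal places, and the total mass Σ m c_t(m) stays near 1: for this kernel there is no gelation.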
After a suitable change of time and renormalization, one expects this system to converge to a solution of Smoluchowski’s equation. This was shown by Jeon in 1998 [18] (up to extraction of a subsequence), provided the rate is strictly sublinear (i.e. κ(n, m)/n → 0 as n → +∞). In particular, there is no gelation in this case. Norris [24] extended his results one year later by showing the convergence of the model before gelation, whenever the rate is sublinear. Other points of view are also considered; e.g. in [16], the authors show that coagulating Brownian particles form clusters whose sizes evolve according to Smoluchowski’s equation.

An interesting question is to deal with the case when the coagulations are restricted by some device. Typically, one may think of covalent bonds: a given atom can only perform a given number of bonds. In this direction, Bertoin [2] studied two models where a particle is characterised by its number of arms and by its mass, and it uses its arms to perform aggregations. The concentration of each type of particle is governed by a modification of Smoluchowski’s equation. In [2], he obtains solutions up to some time T, and shows that whenever gelation does not occur (i.e. T = +∞), there is a limit state where all the arms have disappeared: the concentrations converge to limiting concentrations which bear a striking resemblance to the law of the size of some Galton-Watson tree. This fact is explained in [3] and [4]. It is also worth noticing that Bertoin’s model can be related to Smoluchowski’s for the constant, additive and multiplicative kernels. We will also see that our sexed model contains Bertoin’s: the oriented model corresponds to ours if each particle is given precisely one female arm, and the symmetric model corresponds to the sexed one if the particles are given a gender uniformly at random.

This paper is divided in two parts.
In the first one (Sections 2 to 5), we shall study the sexed Smoluchowski equation, which is an infinite system of nonlinear differential equations. We first (Section 2) introduce the problem, and prove some physically intuitive facts. Then (Section 3), we prove our main result: up to some critical time, there exists a unique solution to the system, and its moment generating function can be expressed explicitly in terms of the initial data. The tools used are analogous to those in [2], but since we are dealing with a two-dimensional problem, several technical issues need to be addressed. The outline of the proof is as follows. First, we transform the system into a PDE problem by considering the generating functions of the concentrations. This PDE is not quasilinear, but it may nevertheless be solved by the method of characteristics. This method requires the inversion of a two-dimensional mapping, and this can be done precisely up to the critical time. Unfortunately, even for monodisperse initial conditions (i.e. when there are only particles of mass 1 at time 0), the inversion is not explicit (one could use the two-variable Lagrange inversion formula, but in general the expression it provides is too cumbersome). Nonetheless, in some specific cases (Section 4), the Lagrange Inversion Formula yields explicit results. In particular, we recover the solutions obtained in [2]. Finally, we show (Section 5) that there exist limiting concentrations when t → +∞, and that they are related to the distribution of the total progeny of some two-type Galton-Watson process.

In the second part (Section 6), we study a microscopic model. Given a finite number of particles, we let them coagulate and observe the evolution of the concentrations of the different types of particles. This is a Marcus-Lushnikov process, and we show that it converges, before the critical time, to a process solving Smoluchowski’s equation (1).
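The microscopic dynamics can be mocked up in a few lines (a sketch under our own conventions, not the construction of Section 6): start from n monomers of type (1, 1, 1), let each pair {p, p′} coagulate at rate p.p′/n = (a′b + ab′)/n, and record the jump chain. In this alternating case every particle always keeps exactly one male and one female arm, so the process necessarily terminates with the single particle (1, 1, n).

```python
import random

def simulate(n, seed=0):
    # Toy Marcus-Lushnikov dynamics for the sexed model (a sketch under our
    # own conventions): n monomers of type (a, b, m) = (1, 1, 1); a pair
    # coagulates at rate (a'b + ab')/n, becoming (a + a' - 1, b + b' - 1, m + m').
    rng = random.Random(seed)
    parts = [(1, 1, 1)] * n
    t = 0.0
    while len(parts) > 1:
        A = sum(p[0] for p in parts)                         # total male arms
        B = sum(p[1] for p in parts)                         # total female arms
        R = (A * B - sum(p[0] * p[1] for p in parts)) / n    # total rate
        if R <= 0:                                           # no compatible arms left
            break
        t += rng.expovariate(R)                              # exponential waiting time
        # pick an ordered pair (i, j), i != j, with probability ~ a_i * b_j
        while True:
            i = rng.choices(range(len(parts)), weights=[p[0] for p in parts])[0]
            j = rng.choices(range(len(parts)), weights=[p[1] for p in parts])[0]
            if i != j:
                break
        ai, bi, mi = parts[i]
        aj, bj, mj = parts[j]
        merged = (ai + aj - 1, bi + bj - 1, mi + mj)
        parts = [p for k, p in enumerate(parts) if k not in (i, j)] + [merged]
    return t, parts

t_end, parts = simulate(50)
```

Mass is conserved at every step, and the final state is the single particle (1, 1, n), illustrating that in this case all arms but one of each gender are consumed.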
As pointed out earlier, this kind of convergence had already been proved by Norris ([24]; see as well [18]). The difference here is that we consider a model with male and female arms. Moreover, the proof is made much easier by the fact that the rate of coagulation is explicit. In particular, we will appeal to the PDE obtained in the first part. This discrete model provides a justification of the sexed Smoluchowski equation (1).

Finally, note that our construction can also provide a model for random oriented graphs, called the configuration model, since a coagulation can be seen as the creation of an oriented edge between two vertices in a graph, whose orientation is given e.g. from the male arm to the female arm. Hence, we can consider a large number n of particles and let them coagulate. When all the coagulations are performed, we obtain a set of oriented graphs. When n → +∞, we may wonder what the distribution of their sizes is, what a typical graph looks like, etc. A heuristic answer, motivated by the works [3], [4], and by the results obtained in this paper (Section 5), is that a typical graph would be a two-type Galton-Watson tree (with the convention of orientation above), provided there are few arms (with the notations of this paper, this means T_c = +∞ and µ is not degenerate).

Let us first introduce some notations and Smoluchowski’s equation, and state our main result.

• ℕ = {0, 1, 2, . . .} and ℕ* = {1, 2, . . .}.

• S = ℕ × ℕ × ℕ* is the set of the different types of particles. A generic element of S will be denoted by p, and if p = (a, b, m), we will call a p-particle a particle with a male arms, b female arms, and mass m.

• For p = (a, b, m) ∈ S and p′ = (a′, b′, m′) ∈ S, we will denote by

p.p′ = a′b + ab′

the rate of coagulation, and by

p ∘ p′ = (a + a′ − 1, b + b′ − 1, m + m′)

the type of the particle resulting from such a coagulation. We say that p′ ⪯ p if a′ ≤ a + 1, b′ ≤ b + 1 and m′ ≤ m − 1. When p′ ⪯ p, we write

p \ p′ = (a + 1 − a′, b + 1 − b′, m − m′)

for the type of particle such that p′ ∘ (p \ p′) = p.

• For two functions c, f : S → ℝ, we will denote, when the series converges absolutely,

⟨c, f⟩ := Σ_{p∈S} c(p) f(p).

When using this notation, we will write, with a slight abuse of notation, a for the function (a, b, m) ↦ a, b for (a, b, m) ↦ b, etc.

Let us recall our goal. We are interested in a system of coagulating particles with male and female arms. We assume that each couple formed of a p-particle and of a p′-particle coagulates at rate p.p′, to form a p ∘ p′-particle. This means that, if we denote by c_t(p) the concentration of p-particles, then (c_t(p), p ∈ S) solves the following infinite system of nonlinear differential equations:

d/dt c_t(p) = (1/2) Σ_{p′⪯p} p′.(p \ p′) c_t(p′) c_t(p \ p′) − c_t(p) Σ_{p′∈S} p.p′ c_t(p′).   (1)

The first term accounts for the creation of p-particles by coagulation of p′- and p \ p′-particles (the factor 1/2 avoids counting each pair twice); the second for the disappearance of p-particles by coagulation with other particles. Let us write this formula down explicitly once. For all (a, b, m) ∈ S, the concentration of (a, b, m)-particles verifies

d/dt c_t(a, b, m) = (1/2) Σ_{m′=1}^{m−1} Σ_{a′=0}^{a+1} Σ_{b′=0}^{b+1} (a′(b + 1 − b′) + b′(a + 1 − a′)) c_t(a′, b′, m′) c_t(a + 1 − a′, b + 1 − b′, m − m′) − Σ_{m′≥1} Σ_{a′≥0} Σ_{b′≥0} (ab′ + a′b) c_t(a, b, m) c_t(a′, b′, m′).

Let us now define what we call a solution to Smoluchowski’s equation.
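Before doing so, the explicit form above can be sanity-checked numerically. The sketch below (ours, not part of the paper) integrates a mass-truncated version of the system by explicit Euler, for monodisperse initial data c₀(1, 1, 1) = 1. In this alternating case every particle has type (1, 1, m), and one can check by direct substitution that c_t(1, 1, m) = t^{m−1}/(1 + t)^{m+1} solves the system, so in particular ⟨a, c_t⟩ = 1/(1 + t) and ⟨m, c_t⟩ = 1; the cutoff M and step dt are arbitrary choices.

```python
# Sketch (not from the paper): explicit Euler on a mass-truncated version of
# system (1), with monodisperse initial data c_0(1,1,1) = 1.  Only types
# (1, 1, m) can ever appear here (the alternating-polymer case).
M, dt, T = 40, 1e-3, 1.0
state = {(1, 1, m): 0.0 for m in range(1, M + 1)}
state[(1, 1, 1)] = 1.0

def rhs(c):
    A = sum(a * v for (a, b, m), v in c.items())    # <a, c>
    B = sum(b * v for (a, b, m), v in c.items())    # <b, c>
    d = {}
    for (a, b, m) in c:
        gain = 0.0
        for (a1, b1, m1), v1 in c.items():
            a2, b2, m2 = a + 1 - a1, b + 1 - b1, m - m1
            if a2 >= 0 and b2 >= 0 and m2 >= 1:
                # rate p'.(p \ p') = a1*b2 + a2*b1, with the factor 1/2 of (1)
                gain += 0.5 * (a1 * b2 + a2 * b1) * v1 * c.get((a2, b2, m2), 0.0)
        # loss term: c(p) * sum of p.p' c(p') = c(p) * (a*B + b*A)
        d[(a, b, m)] = gain - c[(a, b, m)] * (a * B + b * A)
    return d

t = 0.0
while t < T - 1e-12:
    d = rhs(state)
    for p in state:
        state[p] += dt * d[p]
    t += dt

A = sum(a * v for (a, b, m), v in state.items())     # expect ~ 1/(1+t) = 1/2
mass = sum(m * v for (a, b, m), v in state.items())  # expect ~ 1
```

At t = 1 the scheme gives c_t(1, 1, m) ≈ 2^{−(m+1)}, ⟨a, c_t⟩ ≈ 1/2 and total mass ≈ 1, in line with the explicit formula above.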
Definition 1.
We call a family ((c_t(p))_{p∈S}, t ∈ [0, T)) of differentiable functions a solution of Smoluchowski’s equation (or of system (1)) if

1. for every t ∈ [0, T), ⟨a + b, |c_t|⟩ < +∞,
2. ⟨(a + b)², |c_t|⟩ < +∞ for t in a neighbourhood of 0,
3. the family (c_t(p)) solves the system (1) for t ∈ [0, T).

Remark 1. • We will always assume that, at time 0, ⟨a + b + 1, c₀⟩ < +∞, and that the mean number of male arms ⟨a, c₀⟩ and the mean number of female arms ⟨b, c₀⟩ are equal. Physically, it is then obvious that they will remain equal as time passes. This shall be proven later on, in Lemma 3.

• It is easy to see that if (c_t)_{t∈[0,T)} is a solution to (1) with initial conditions c₀, and λ > 0, then (λ c_{λt})_{t∈[0,T/λ)} is a solution to (1) with initial conditions λ c₀. Hence, it is enough to assume that ⟨a, c₀⟩ = ⟨b, c₀⟩ = 1, which will always be the case from now on.

Our main result is the existence and uniqueness of a solution to (1) up to a critical time. In all the statements and proofs, we are given nonnegative initial concentrations c₀ such that ⟨1, c₀⟩ < +∞, ⟨a, c₀⟩ = ⟨b, c₀⟩ = 1 and ⟨(a + b)², c₀⟩ < +∞. We can then define the critical time T_c.

Definition 2.
Let

M = ⟨ab, c₀⟩ + √(⟨a² − a, c₀⟩ ⟨b² − b, c₀⟩)

and

T_c = +∞ if M ≤ 1,  T_c = 1/(M − 1) if M > 1.

We will also constantly use the generating function of (c₀):

g₀(x, y, z) := Σ_{(a,b,m)∈S} c₀(a, b, m) x^a y^b z^m.

Since ⟨1, c₀⟩ < +∞, g₀ is well-defined on [0, 1]³. Using the assumption ⟨a, c₀⟩ = ⟨b, c₀⟩ = 1 and, e.g., monotone convergence, we also see that its partial derivatives with respect to x and y are well-defined and continuous on [0, 1]³. For the same reason, they remain in [0, 1].

Theorem 1. (i) Smoluchowski’s equation (1) with initial conditions c₀ has a unique solution (c_t) defined on [0, T_c).

(ii) For t ∈ [0, T_c), ⟨(a + b)², c_t⟩ < +∞, and ⟨(a + b)², c_t⟩ → +∞ when t → T_c.

(iii) For t ∈ [0, T_c) and z ∈ [0, 1], the mapping φ_t(·, ·, z), given for (x, y) ∈ [0, 1]² by

φ_t(x, y, z) = ((1 + t) x − t ∂g₀/∂y(x, y, z), (1 + t) y − t ∂g₀/∂x(x, y, z)),

has a right inverse h_t = (h_t^(1), h_t^(2)) which is well-defined and analytic on (0, 1)². Then the generating function g_t of (c_t) is given by

g_t(x, y, z) = (1/(1 + t)) (H̃_t^(2)(x, y, z) + H̃_t^(1)(0, y, z)) + G_t(z),   (2)

where, for t > 0,

h̃_t^(1) := ((1 + t) h_t^(1)(x, y, z) − x)/t ;  h̃_t^(2) := ((1 + t) h_t^(2)(x, y, z) − y)/t   (3)

and

• H̃_t^(1) is the antiderivative of h̃_t^(1) with respect to y, vanishing at y = 0,
• H̃_t^(2) is the antiderivative of h̃_t^(2) with respect to x, vanishing at x = 0,
• G_t(z) is the antiderivative with respect to z, vanishing at 0, of

∂g₀/∂z (h_t^(1)(0, 0, z), h_t^(2)(0, 0, z), z).   (4)

(iv) The total mass ⟨m, c_t⟩ is constant on [0, T_c).

In this section, we give some physically intuitive results, and deduce the “weak” form of the equation. Let us start with the following lemma (recall that c_t(p) is meant to model a concentration).

Lemma 1.
Any solution to Smoluchowski’s equation remains nonnegative, i.e. if (c_t)_{t∈[0,T)} is a solution to (1), then for all t ∈ [0, T) and p ∈ S, c_t(p) ≥ 0.

Proof. Take some t ∈ [0, T). System (1) gives

d/dt c_t(a, b, 1) = −c_t(a, b, 1) Σ_{m′≥1} Σ_{a′≥0} Σ_{b′≥0} (ab′ + a′b) c_t(a′, b′, m′) =: −γ(t) c_t(a, b, 1).

Let G(t) = ∫₀ᵗ γ(s) ds. Then c_t(a, b, 1) = c₀(a, b, 1) e^{−G(t)}, so it remains nonnegative. Let now m ≥ 1, and suppose that the c_t(a, b, m′) are nonnegative for a, b ≥ 0 and 1 ≤ m′ ≤ m. For some p = (a, b, m + 1), we have

d/dt c_t(p) = (1/2) Σ_{p′⪯p} p′.(p \ p′) c_t(p′) c_t(p \ p′) − c_t(p) Σ_{p′∈S} p.p′ c_t(p′) =: β(t) − c_t(p) γ(t).

So we may write

c_t(p) = (c₀(p) + ∫₀ᵗ β(s) e^{G(s)} ds) e^{−G(t)}.

But β(t) ≥ 0, since it involves only the c_t(a, b, m′) for a, b ≥ 0 and 1 ≤ m′ ≤ m. So c_t(a, b, m + 1) is nonnegative, which gives the result by induction.

The following two lemmas are straightforward generalizations of Lemmas 1 and 2 in [2]. Note however that the monotone convergence used in the proofs requires that the coefficients (c_t) be nonnegative.

Lemma 2. (i) If (c_t) is a solution to Smoluchowski’s equation (1), then t ↦ ⟨1, c_t⟩, t ↦ ⟨a, c_t⟩ and t ↦ ⟨b, c_t⟩ are decreasing.

(ii) A family (c_t) is a solution to (1) if and only if it solves

d/dt ⟨c_t, f⟩ = (1/2) Σ_{p,p′∈S} p.p′ c_t(p) c_t(p′) (f(p ∘ p′) − f(p) − f(p′))   (5)

for every bounded f : S → ℝ.

Remark 2. • The derivative in this lemma has to be understood in the weak sense, i.e. the formula actually holds in integral form. But if f(a, b, m) → 0 when (a, b, m) → ∞, then it is easy to check that the formula holds in the strong sense.

• Consider, in particular, the generating function of c_t, g_t(x, y, z) = ⟨x^a y^b z^m, c_t⟩. Then (t, x, y) ↦ g_t(x, y, z) is regular, in the sense of Definition 3 below.

Definition 3.
We say that a function (t, x, y) ↦ g_t(x, y) defined on [0, T) × (0, 1)² is regular if

• t ↦ g_t(x, y) is C¹ and (x, y) ↦ ∂g_t/∂t(x, y) is C¹,
• (x, y) ↦ g_t(x, y) is C¹, and t ↦ ∂g_t/∂x(x, y) and t ↦ ∂g_t/∂y(x, y) are C⁰.

Lemma 3.
Let (c_t) be a solution to Smoluchowski’s equation, and let Γ_r = inf{t ≥ 0 : ⟨(a + b)², c_t⟩ > r} and Γ_∞ = sup_{r>0} Γ_r. Consider the mean numbers of male and female arms A_t = ⟨a, c_t⟩ and B_t = ⟨b, c_t⟩, and assume A₀ = B₀ = 1. Then

A_t = B_t = 1/(1 + t)   (6)

for all t ∈ [0, T ∧ Γ_∞).

Proof of the theorem
In this section, we give a sketch of the proof which contains all the important ideas. The rigorous proof, however, requires some care, and it is given in detail afterwards. So, consider a solution (c_t)_{t∈[0,T)} to Smoluchowski’s equation (1), and

g_t(x, y, z) = ⟨x^a y^b z^m, c_t⟩ = Σ_{a≥0} Σ_{b≥0} Σ_{m≥1} c_t(a, b, m) x^a y^b z^m.

Using (5) and Lemma 3, it is easy to see that g_t solves the following PDE:

∂g_t/∂t = ∂g_t/∂x ∂g_t/∂y − (1/(1 + t)) (x ∂g_t/∂x + y ∂g_t/∂y).   (7)

Now, we can solve this PDE using the method of characteristics: we want to find a trajectory (x(t), y(t)) starting from some (x, y) ∈ [0, 1]² such that g_t(x(t), y(t), z) is easy to compute. So let

(p₁(t), p₂(t)) = (∂g_t/∂x(x(t), y(t), z), ∂g_t/∂y(x(t), y(t), z)).

An easy calculation shows that

ṗ₁(t) = ∂²g_t/∂x² (ẋ(t) + p₂(t) − x(t)/(1 + t)) + ∂²g_t/∂x∂y (ẏ(t) + p₁(t) − y(t)/(1 + t)) − p₁(t)/(1 + t),   (8)

and a similar formula holds for ṗ₂. Now, if we require

ẋ(t) + p₂(t) − x(t)/(1 + t) = ẏ(t) + p₁(t) − y(t)/(1 + t) = 0,

then

ṗᵢ(t) = −pᵢ(t)/(1 + t), i = 1, 2.

These ODEs are readily solved: with p₁(0) = ∂g₀/∂x(x, y) and p₂(0) = ∂g₀/∂y(x, y), we obtain

pᵢ(t) = pᵢ(0)/(1 + t)   (9)

and

x(t) = x + (x − p₂(0)) t ;  y(t) = y + (y − p₁(0)) t.

Using the PDE, we now see that

d/dt g_t(x(t), y(t), z) = −p₁(0) p₂(0)/(1 + t)²,   (10)

so, by integrating,

g_t(x(t), y(t), z) = g_t(φ_t(x, y, z), z) = g₀(x, y, z) − (t/(1 + t)) ∂g₀/∂x(x, y, z) ∂g₀/∂y(x, y, z).   (11)

To obtain g_t, it only remains to invert φ_t, for, if φ_t(h_t) = Id, then

g_t(x, y, z) = g₀(h_t(x, y, z)) − (t/(1 + t)) ∂g₀/∂x(h_t(x, y, z)) ∂g₀/∂y(h_t(x, y, z)).

We may now start the rigorous proof, which consists mainly of three steps: study the map φ_t, then solve the PDE (14), and show that the generating function of a family (c_t) solves (14) if and only if (c_t) solves Smoluchowski’s equation (1). The conclusion is then easy to obtain.

In this section, we study the map φ_t, which is useful both for solving the PDE theoretically and for obtaining explicit solutions. We will need two preliminary lemmas.

Lemma 4.
Let α > 0, β, γ ≥ 0 and K = [0, α] × [0, β] × [0, γ]. For (r, s, t) ∈ K, denote by

A(r, s, t) := ( r  s ; t  r )

the 2 × 2 matrix with both diagonal entries equal to r and off-diagonal entries s and t (rows separated by semicolons). Then for every ε > 0, there is a norm ‖·‖ on ℝ² such that

max_{(r,s,t)∈K} ‖A(r, s, t)‖ ≤ α + √(βγ) + ε,

where we also denote by ‖·‖ the induced norm on the 2 × 2 matrices.

Remark 3.
This is a uniform version of the well-known result (see e.g. [25]) which states that

• for every (square) matrix A and norm ‖·‖, one has ‖A‖ ≥ ρ(A), where ρ(A) is the spectral radius of A,
• for every matrix A and ε > 0, there is a norm ‖·‖ such that ‖A‖ ≤ ρ(A) + ε.

Note indeed that α + √(βγ) is the spectral radius of A(α, β, γ).

Proof.
1. First assume that β and γ are positive. We can diagonalize A := A(α, β, γ). If we let a := α, b := √β and c := √γ, then

A = P diag(a + bc, a − bc) P⁻¹,

where

P = ( b  −b ; c  c ) ;  P⁻¹ = (1/(2bc)) ( c  b ; −c  b ).

Now, consider the following norm: for x ∈ ℝ², let ‖x‖ = ‖P⁻¹x‖_∞, where ‖(x₁, x₂)‖_∞ = max(|x₁|, |x₂|). Then for any 2 × 2 matrix M,

‖M‖ = max_{x≠0} ‖Mx‖/‖x‖ = max_{x≠0} ‖P⁻¹Mx‖_∞/‖P⁻¹x‖_∞ = max_{y≠0} ‖P⁻¹MPy‖_∞/‖y‖_∞ = ‖P⁻¹MP‖_∞.

An easy computation shows that for (r, s, t) ∈ K,

P⁻¹A(r, s, t)P = ( r + bt/(2c) + cs/(2b)   cs/(2b) − bt/(2c) ; bt/(2c) − cs/(2b)   r − bt/(2c) − cs/(2b) ).

For a 2 × 2 matrix M, ‖M‖_∞ = max_i Σ_j |M_{i,j}|, so that, since r ≥ 0,

‖P⁻¹A(r, s, t)P‖_∞ = r + bt/(2c) + cs/(2b) + |bt/(2c) − cs/(2b)| := F(r, s, t).

It remains to find the maximum of F on K. First, note that for (r, s, t) ∈ K,

0 ≤ F(r, s, t) ≤ F(α, s, t).

Then, for every (s, t) ∈ [0, β] × [0, γ], we can write t = ps with p ≥ 0. If p ≤ c²/b², then cs/(2b) ≥ bt/(2c), so that F(α, s, t) = α + cs/b. But s ≤ b², so F(α, s, t) ≤ α + bc = α + √(βγ). And if p > c²/b², then cs/(2b) ≤ bt/(2c), so that F(α, s, t) = α + bt/c. But t ≤ c², so F(α, s, t) ≤ α + bc = α + √(βγ). Finally, the maximum of F on K, i.e. the maximum of ‖A(r, s, t)‖ on K, is α + √(βγ).

2. Assume now that β or γ is zero, say e.g. γ = 0. Take ε > 0, and M > 0 such that β/M < ε. Consider the norm ‖x‖ = ‖Px‖_∞, where P is the diagonal matrix with diagonal (1, M). For (r, s, 0) ∈ K, we have as before

‖A(r, s, 0)‖ = ‖PA(r, s, 0)P⁻¹‖_∞ = ‖( r  s/M ; 0  r )‖_∞.

Since s ≤ β, this shows that ‖A(r, s, 0)‖ ≤ α + ε.

We will often deal with real-analytic functions in the remainder of the proofs. For the definitions and results on this topic, we refer to [22]. We will show the following result.

Proposition 1.
For t ∈ [0, T_c) and z ∈ [0, 1], define φ_t(·, ·, z) : [0, 1]² → ℝ² by

φ_t(x, y, z) = ((1 + t) x − t ∂g₀/∂y(x, y, z), (1 + t) y − t ∂g₀/∂x(x, y, z)),   (12)

and let K_t(z) be the closed subset of [0, 1]²:

K_t(z) = φ_t(·, ·, z)⁻¹([0, 1]²).

Then

(i) φ_t(·, ·, z) : K_t(z) → [0, 1]² is a homeomorphism. Denote by h_t(·, ·, z) = (h_t^(1)(·, ·, z), h_t^(2)(·, ·, z)) its inverse.

(ii) For i = 1, 2, (x, y, z, t) ↦ h_t^(i)(x, y, z) is an analytic function on (0, 1)³ × (0, T_c).

Proof. (i) Fix some z ∈ [0, 1] and some t ∈ (0, T_c), and keep the notations of the statement. For notational simplicity, we omit the parameter z. Let 0 ≤ t < T_c. We first want to show that φ_t : K_t → [0, 1]² is one-to-one and onto. Fix (u, v) ∈ [0, 1]² and let us check that there is a unique couple (x, y) ∈ [0, 1]² such that φ_t(x, y) = (u, v). This requirement is equivalent to finding a unique fixed point of

F_t(x, y) = (u/(1 + t) + (t/(1 + t)) ∂g₀/∂y(x, y, z), v/(1 + t) + (t/(1 + t)) ∂g₀/∂x(x, y, z)).

Because of the remark above, F_t is a mapping from [0, 1]² to [0, 1]². It remains to check that it is contracting. Its differential is

DF_t(x, y) = (t/(1 + t)) ( ∂²g₀/∂x∂y  ∂²g₀/∂y² ; ∂²g₀/∂x²  ∂²g₀/∂x∂y ) := (t/(1 + t)) ( α(x, y, z)  β(x, y, z) ; γ(x, y, z)  α(x, y, z) ).   (13)

Let α = α(1, 1, 1) = ⟨ab, c₀⟩, β = β(1, 1, 1) = ⟨b² − b, c₀⟩ and γ = γ(1, 1, 1) = ⟨a² − a, c₀⟩. Since t < T_c, we have (t/(1 + t)) (α + √(βγ) + ε) < 1 for some ε > 0. Hence, by Lemma 4, there is a norm ‖·‖ such that

max_{(x,y)∈[0,1]²} ‖DF_t(x, y)‖ ≤ (t/(1 + t)) (α + √(βγ) + ε) < 1,

so that F_t is contracting. Hence it has a unique fixed point. As a consequence, there is a unique couple (x, y) ∈ [0, 1]² such that φ_t(x, y) = (u, v). Moreover, since F_t is continuous with respect to (u, v) and uniformly contracting in (u, v), the mapping (u, v) ↦ (x, y) is continuous, that is, h_t : [0, 1]² → K_t is a homeomorphism.

(ii) For t ∈ (0, T_c), z ∈ (0, 1) and (x₀, y₀) ∈ U_t, the matrix Dφ_t(x₀, y₀, z) is invertible. Then Theorem 2.5.3 in [22] shows that the inverse mapping of φ_t has real-analytic coefficients, i.e. the h_t^(i) are real-analytic functions on (0, 1)³ × (0, T_c).

The following (non-quasilinear) PDE is a central feature of our discussion:

∂g_t/∂t = ∂g_t/∂x ∂g_t/∂y − (1/(1 + t)) (x ∂g_t/∂x + y ∂g_t/∂y).   (14)

A preliminary result to the proof is the following. Its proof is exactly the same as the one of Lemma 1.

Lemma 5.
Let (c_t)_{t∈[0,T)} be a solution to the system

d/dt c_t(p) = (1/2) Σ_{p′⪯p} p′.(p \ p′) c_t(p′) c_t(p \ p′) − ((a + b)/(1 + t)) c_t(p)   (15)

for p = (a, b, m) ∈ S, with nonnegative initial conditions. Then for all t ∈ [0, T) and p ∈ S, c_t(p) ≥ 0.

Proposition 2. (i) For every z ∈ [0, 1], the PDE (14) with initial conditions g₀ = g₀(·, ·, z) has a unique regular (in the sense of Definition 3) solution (t, x, y) ↦ g_t(x, y, z) defined on [0, T_c) × (0, 1)².

(ii) The solution of the PDE is given by

g_t(x, y, z) = g₀(h_t(x, y, z), z) − (t/(1 + t)) ∂g₀/∂x(h_t(x, y, z), z) ∂g₀/∂y(h_t(x, y, z), z),   (16)

where h_t is defined in Proposition 1.

(iii) We have the alternative expression

g_t(x, y, z) = (1/(1 + t)) (H̃_t^(2)(x, y, z) + H̃_t^(1)(0, y, z)) + G_t(z)   (17)

in the notations of Theorem 1.

(iv) For every t ∈ [0, T_c), g_t has an analytic expansion

g_t(x, y, z) = Σ c_t(a, b, m) x^a y^b z^m   (18)

for (x, y, z) ∈ [0, 1]³, where c_t(a, b, m) ≥ 0.

Remark 4.

Formula (17) will be useful to compute explicit solutions, since with it, it is enough to have the analytic expansion of h_t around 0 to obtain that of g_t (whose coefficients are precisely the solution to (1)). Note however that G may be tedious to compute in general; but since it is a function of z only, it is relevant only when we wish to compute the concentrations of particles with no arms. Nonetheless, these concentrations can also be obtained directly from the system (1), since

d/dt c_t(0, 0, m) = Σ_{m′=1}^{m−1} c_t(1, 0, m′) c_t(0, 1, m − m′).   (19)

Proof.
We will prove the statement in three steps. First, we will show that a solution has to be written as in (16); next, that this formula does indeed provide a solution. Proving formula (17) is then an easy matter. In all the proof, some z ∈ [0, 1] is fixed.

1. Let U_t = φ_t(·, ·, z)⁻¹((0, 1)²), and consider g_t a regular solution of (14) on [0, T) × (0, 1)². Fix t₀ ∈ (0, T) and (x, y) ∈ U_{t₀}, and let

(p₁(t), p₂(t)) := (∂g_t/∂x(φ_t(x, y, z), z), ∂g_t/∂y(φ_t(x, y, z), z)).

It is easy to see that U_t decreases with t, so for t ≤ t₀ this definition makes sense and we can differentiate the pᵢ. The regularity assumptions on g_t are just those needed to allow the use of Schwarz’s theorem, and an easy computation shows that, on [0, t₀], (p₁, p₂) solves a linear differential system with continuous coefficients, whose solution is given by (9). Hence

∂g_t/∂x(φ_t(x, y, z), z) = ∂g₀/∂x(x, y, z)/(1 + t) ;  ∂g_t/∂y(φ_t(x, y, z), z) = ∂g₀/∂y(x, y, z)/(1 + t)   (20)

for all (x, y) ∈ U_{t₀}. Then, it is easy to check that for all (x, y) ∈ U_{t₀}, (10) and (11) hold. Replacing (x, y) by h_t(x, y, z) (recall that h_t : (0, 1)² → U_t is the right-inverse of φ_t), we finally obtain (16). This shows that the PDE has at most one solution.

2. The existence of a solution is now straightforward. Let g_t be defined as in (16). Because of the regularity of h_t and of g₀, g_t has the required regularity properties. It then suffices to show that it is actually a solution. To this end, let us first compute

(p₁(t), p₂(t)) := (∂g_t/∂x(φ_t), ∂g_t/∂y(φ_t))

for some fixed t ∈ [0, T_c) and (x, y) ∈ U_t. By differentiating g_t(φ_t) with respect to x and y, it is easy to see that it solves a linear system which has, before T_c, a unique solution, given by equation (20). To conclude, we may differentiate g_t(φ_t) in two different ways: once using (16) and (20), and once with the chain rule. Composing with h_t in the obtained equality readily shows that g_t solves the PDE (14) for t ∈ [0, T_c), (x, y) ∈ (0, 1)².

3. The formula (17) is then easy to obtain, by differentiating g₀(h_t(x, y, z), z) with respect to x, y and z, and using the fact that

∂g₀/∂x(h_t, z) = h̃_t^(2) ;  ∂g₀/∂y(h_t, z) = h̃_t^(1)

in the notations of Theorem 1.

4. To prove the last point, consider t₀ ∈ [0, T_c). The map φ_t is well-defined and analytic (in (t, x, y, z)) in a neighbourhood of (t₀, 0, 0, 0), and Dφ_{t₀}(0, 0, 0) is invertible. So, by Theorem 2.5.3 in [22], h_t is analytic near (t₀, 0, 0, 0), and so is g_t = g₀(h_t). So we may write

g_t(x, y, z) = Σ_{(a,b,m)∈S} c_t(a, b, m) x^a y^b z^m   (21)

for (t, x, y, z) in a neighbourhood of (t₀, 0, 0, 0), with infinitely differentiable (even analytic) coefficients c_t. By analytic continuation, the (c_t) are uniquely defined, so we can let

E = {t ∈ [0, T_c) : ∀p ∈ S, c_t(p) ≥ 0}.

By continuity, E is a closed set containing 0. On the other hand, (21) holds for (t, x, y, z) in a neighbourhood of (t₀, 0, 0, 0), so for t₀ ∈ E there is an ε > 0 such that it holds for t ∈ (t₀ − ε, t₀ + ε) and (x, y, z) ∈ (−ε, ε)³. In particular, since g_t solves the PDE (14), it is easy to see, using a Cauchy product and identifying the coefficients, that c_t solves (15) for t ∈ (t₀ − ε, t₀ + ε). So, by Lemma 5, (t₀ − ε, t₀ + ε) ⊂ E. So E is open, and E = [0, T_c). Finally, recall from Proposition 1 that h_t, and so g_t, are analytic on (0, 1)³. But we have just shown that g_t has an analytic expansion around 0 with nonnegative coefficients. So (see e.g. the proof of Bernstein’s theorem in [22]), this expression actually holds on [0, 1]³.

Smoluchowski’s equation is solved thanks to the PDE (14).
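As an illustration of this correspondence (our worked example, not taken from the paper): for monodisperse initial data with one male and one female arm, g₀(x, y, z) = xyz, the inversion of φ_t is explicit and formula (16) gives g_t(x, y, z) = xyz/((1 + t)(1 + t − tz)). Expanding the geometric series in z reads off c_t(1, 1, m) = t^{m−1}/(1 + t)^{m+1}, whose total mass is constant, consistent with Theorem 1 (iv). The snippet checks both claims numerically; the evaluation points and cutoffs are arbitrary.

```python
# Reading the concentrations off the generating function, for g_0 = x*y*z
# (our example; the closed form below follows from formula (16)).
def g_closed(x, y, z, t):
    return x * y * z / ((1 + t) * (1 + t - t * z))

def c(m, t):
    # candidate coefficient of x * y * z**m in g_t
    return t ** (m - 1) / (1 + t) ** (m + 1)

def g_series(x, y, z, t, M=200):
    # truncated expansion sum_m c_t(1,1,m) * x * y * z**m
    return sum(c(m, t) * x * y * z ** m for m in range(1, M + 1))

t0 = 1.5
diff = abs(g_closed(0.7, 0.9, 0.5, t0) - g_series(0.7, 0.9, 0.5, t0))
mass = sum(m * c(m, t0) for m in range(1, 400))   # total mass <m, c_t>
```

Here M = ⟨ab, c₀⟩ = 1, so T_c = +∞ and the expansion is valid for all t.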
Proposition 3. (i)
Let (c_t)_{t∈[0,T)} be a solution to Smoluchowski’s equation, and let g_t(x, y, z) := ⟨c_t, x^a y^b z^m⟩ be its generating function. Then for all z ∈ [0, 1], (t, x, y) ↦ g_t(x, y, z) is a regular solution to the PDE (14) on [0, T ∧ Γ_∞) × (0, 1)², with initial conditions g₀(·, ·, z).

(ii) Conversely, let ((c_t(p))_{p∈S}, t ∈ [0, T)) be a family of differentiable functions. Let g_t(x, y, z) be its generating function, and assume it is defined for t ∈ [0, T), (x, y) ∈ (0, 1)² and z ∈ [0, 1]. Assume that for every z ∈ [0, 1], g_t(·, ·, z) is a regular solution to the PDE (14) with initial conditions g₀(·, ·, z). Then

• for all p ∈ S and t ∈ [0, T), c_t(p) ≥ 0,
• (c_t) is a solution to Smoluchowski’s equation for t ∈ [0, T ∧ T_c), with initial conditions c₀.

Remark 5.
An important feature of this result is that the PDE (14) and the system (1) are equivalent only before the critical time (T_c or Γ_∞). This fact is crucial when we study the microscopic model. We indeed obtain a family of coefficients whose generating function solves the PDE (on [0, +∞)), but we cannot ensure that they solve Smoluchowski's equation after T_c (actually, we believe that they do not).

Proof of Proposition 3. (i)
First note that g is regular according to Remark 2. If one takes f(a, b, m) = x^a y^b z^m in (5), for some fixed (x, y, z) ∈ (0, 1)^2 × [0, 1], we obtain

∂_t g_t(x, y, z) = (∂g_t/∂x)(∂g_t/∂y) − A_t y ∂g_t/∂y − B_t x ∂g_t/∂x.

Recall from Lemma 3 that when t < Γ_∞, A_t = B_t = 1/(1 + t). Replacing in the equation above shows that g_t solves (14) for (x, y) ∈ (0, 1)^2 and 0 ≤ t < T ∧ Γ_∞.

(ii) As in the fourth part of the proof of Proposition 2, we see that the (c_t(p)) solve (15), and hence that they are nonnegative. By uniqueness of a solution to the PDE (14), for t ∈ [0, T ∧ T_c), these coefficients are those obtained in (18). Now, let t < T_c ∧ T, U_t = φ_t(., ., 1)^{−1}((0, 1)^2), K_t = K_t(1) = φ_t(., ., 1)^{−1}([0, 1]^2), and recall from (20) that, since g is a regular solution to (14), for all (x, y) ∈ U_t,

∂g_t/∂x(φ_t(x, y, 1)) = ∂g_0/∂x(x, y, 1) · 1/(1 + t),

which we can write

Σ_{(a,b,m)∈S} a c_t(a, b, m) φ_t^(1)(x, y, 1)^{a−1} φ_t^(2)(x, y, 1)^b = Σ_{(a,b,m)∈S} a c_0(a, b, m) x^{a−1} y^b × 1/(1 + t).   (22)

Note now that, since t < T_c, φ_t(., ., 1): K_t → [0, 1]^2 is a homeomorphism, so the closure of U_t is K_t. Since (1, 1) ∈ K_t, we can pass to the limit in the equality above when φ_t(x, y, 1) → (1, 1); we obtain

Σ_{(a,b,m)∈S} a c_t(a, b, m) = Σ_{(a,b,m)∈S} a c_0(a, b, m) × 1/(1 + t) = 1/(1 + t).

The same reasoning shows that ⟨b, c_t⟩ = 1/(1 + t) for t < T_c. Hence we may rewrite (15) before T_c by substituting

a/(1 + t) = a Σ_{(a′,b′,m′)∈S} b′ c_t(a′, b′, m′) and b/(1 + t) = b Σ_{(a′,b′,m′)∈S} a′ c_t(a′, b′, m′),

which shows that (c_t) solves Smoluchowski's equation (1) before T_c.

3.5 Existence and uniqueness of a solution

With these results, proving Theorem 1 is now an easy matter.
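The identities just obtained, ⟨a, c_t⟩ = ⟨b, c_t⟩ = 1/(1 + t), and the mass conservation asserted by Theorem 1 can be checked numerically on a finite truncation of the system (1). The following sketch is ours, not the paper's method: a hypothetical forward-Euler scheme with a mass cutoff M_MAX, started from the monodisperse state µ = δ_(1,1) (every particle has one male arm, one female arm and mass 1, so that ⟨a, c_0⟩ = ⟨b, c_0⟩ = 1):

```python
from itertools import product

M_MAX = 60          # mass cutoff for the truncation (our choice)
DT, T_END = 1e-3, 1.0

# monodisperse start: c_0(1, 1, 1) = 1
c = {(1, 1, 1): 1.0}

def step(c, dt):
    A = sum(a * x for (a, b, m), x in c.items())   # male-arm concentration
    B = sum(b * x for (a, b, m), x in c.items())   # female-arm concentration
    # loss: a (a, b, m)-particle disappears at rate a*B + b*A
    dc = {p: -x * (p[0] * B + p[1] * A) for p, x in c.items()}
    # gain: pairs coagulate at rate a'b + ab'; 1/2 compensates ordered pairs
    for (p, x), (q, y) in product(list(c.items()), repeat=2):
        (a, b, m), (a2, b2, m2) = p, q
        if m + m2 <= M_MAX:
            r = (a + a2 - 1, b + b2 - 1, m + m2)
            dc[r] = dc.get(r, 0.0) + 0.5 * (a2 * b + a * b2) * x * y
        # gains beyond the cutoff are discarded (truncation error)
    for p, d in dc.items():
        c[p] = c.get(p, 0.0) + dt * d
    return c

t = 0.0
while t < T_END - 1e-9:
    c = step(c, DT)
    t += DT

A_end = sum(a * x for (a, b, m), x in c.items())
mass = sum(m * x for (a, b, m), x in c.items())
print(A_end, mass)    # compare with 1/(1 + t) = 0.5 and with <m, c_0> = 1
```

At t = 1 one finds ⟨a, c_t⟩ close to 1/2 = 1/(1 + t) and ⟨m, c_t⟩ close to 1, up to the O(dt) Euler error and a truncation loss that is negligible here because the mass distribution has a geometric tail.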
Proof of Theorem 1.
1. Let us first prove that ⟨a² + b², c_t⟩ is finite before T_c and tends to +∞ when t → T_c. So take (c_t(a, b, m))_{t∈[0,T)} a solution to the system (1), and g_t its generating function. Since ⟨a² + b², c_t⟩ < +∞ in a neighbourhood of 0, we have

∂²g_t/∂x²(1, 1, 1) = ⟨c_t, a² − a⟩

as long as ⟨a² + b², c_t⟩ < +∞. Note that by Lemma 3, ⟨c_t, a⟩ is bounded by 1, so ⟨c_t, a²⟩ explodes if and only if ∂²g_t/∂x²(1, 1, 1) explodes. Let us compute the latter. Differentiating (20) with respect to x and y, and letting (x, y) tend to (1, 1), we obtain the linear system

(1 − tα) a − tγ b = γ/(1 + t),  −tβ a + (1 − tα) b = α/(1 + t),

where a = ∂²g_t/∂x²(1, 1, 1), b = ∂²g_t/∂x∂y(1, 1, 1), and

α = ∂²g_0/∂x∂y(1, 1, 1) = ⟨ab, c_0⟩;  β = ∂²g_0/∂y²(1, 1, 1) = ⟨b² − b, c_0⟩;  γ = ∂²g_0/∂x²(1, 1, 1) = ⟨a² − a, c_0⟩.

Hence

a = ∂²g_t/∂x²(1, 1, 1) = Σ_{(a,b,m)∈S} a(a − 1) c_t(a, b, m) = γ / ((1 + t)((1 − tα)² − t²γβ)).

This expression is valid as long as t < T_c, since the determinant (1 − tα)² − t²γβ of the system is then nonzero. In the same way, we also have

∂²g_t/∂y²(1, 1, 1) = Σ_{(a,b,m)∈S} b(b − 1) c_t(a, b, m) = β / ((1 + t)((1 − tα)² − t²γβ)).

If γ or β is nonzero, then ⟨c_t, a² + b²⟩ → +∞ when t → T_c. If γ = β = 0, then ⟨a² + b², c_t⟩ remains finite, but this condition also imposes that M = 1, and so T_c = +∞.

2. Uniqueness is now easy to obtain: assume that (c_t^(1)) and (c_t^(2)) solve the system (1) on [0, T), T ≤ T_c, with initial conditions c_0. Let g_t^(1) and g_t^(2) be their generating functions. Since Γ_∞ = T_c and T ≤ T_c, by Proposition 3, for every z ∈ [0, 1], both are regular solutions to the PDE (14) on [0, T) × (0, 1)², with initial conditions g_0(., ., z). But by Proposition 2 there is a unique regular solution to the PDE on [0, T_c), so g_t^(1) = g_t^(2) on [0, T), so that (c_t^(1)) = (c_t^(2)).

3. The existence is given by Proposition 2, (iv), and Proposition 3, (ii).

4. Let us finally prove that the total mass is conserved. Consider ψ_t(x, y, z) = (φ_t(x, y, z), z), U′_t = ψ_t^{−1}((0, 1)³) and K′_t = ψ_t^{−1}([0, 1]³). For (x, y, z) ∈ U′_t, we can differentiate g_t(ψ_t(x, y, z)) with respect to z, and using (20), we obtain

∂g_t/∂z(ψ_t(x, y, z)) = ∂g_0/∂z(x, y, z).

Now, ψ_t is a homeomorphism from K′_t to [0, 1]³, so the closure of U′_t is K′_t. But (1, 1, 1) ∈ K′_t, so we may pass to the limit when ψ_t(x, y, z) → (1, 1, 1) in the equality above, to obtain

∂g_t/∂z(1, 1, 1) = ∂g_0/∂z(1, 1, 1),

which precisely means ⟨m, c_t⟩ = ⟨m, c_0⟩.

4 Explicit solutions

We give in this short section some explicit solutions, without giving the full details of the computations. We will always assume that at time 0, there are only particles of size 1 in the medium. So, given a (finite) measure µ on N × N, we assume

c_0(a, b, m) = µ(a, b) 1_{m=1},

and, as usual, ⟨a, c_0⟩ = ⟨b, c_0⟩ = 1. To obtain the solutions, we need to invert φ_t, which can be done using the (two-variable) Lagrange inversion formula (a statement is given by Good [12]). But it is much more involved than the one-dimensional formula, and the expressions it would provide can hardly be called explicit. Let us however study three easy cases. Only the last one requires the two-variable formula.

The first case is when each particle has exactly one female arm, and a number of male arms distributed according to a measure µ_1. So µ(a, b) = µ_1(a) 1_{b=1}, and we will assume that A_0 = B_0 = 1, i.e. µ_1 is a probability measure with unit mean. In this case, we obtain, for every a ≥ 0 and m ≥ 1, c_t(a, b, m) = 0 if b ≠ 1, and

c_t(a, 1, m) = (t^{m−1}/(1 + t)^{m+a}) (1/m) C(m + a − 1, a) µ_1^{*m}(m + a − 1),

where C(·, ·) denotes the binomial coefficient. In particular, there exist only particles with one female arm, which is physically obvious. Moreover, the concentration c_t(a, 1, m) is exactly the concentration of particles with a arms and mass m obtained in the "oriented model" of [2], with initial distribution µ_1. This is also natural since, in this case, (a, 1, m)- and (a′, 1, m′)-particles indeed coagulate with rate a + a′, which is the rate of the oriented model. Note also that T_c = +∞, as in the oriented model.

4.2 Arms with uniform random genders

In this model, the total number of arms of a particle is chosen according to a measure µ_0, then each arm is given a gender independently, with probability 1/
2. That is, we let

µ(a, b) = µ_0(a + b) C(a + b, b) 2^{−(a+b)}.

We will assume that µ_0 has mean 2, so that A_0 = B_0 = 1. Let ν(j) = (j + 1) µ_0(j + 1). Then we obtain, for (a, b) ≠ (0, 0),

c_t(a, b, m) = (1/2) (t^{m−1}/(1 + t)^{a+b+m−1}) ((m + a + b − 2)!/(m! a! b!)) (ν/2)^{*m}(m + a + b − 2),

and, for m ≥ 2,

c_t(0, 0, m) = (1/(m(m − 1))) (1/2) (1 + 1/t)^{−(m−1)} (ν/2)^{*m}(m − 2),

provided ν(0) > 0; when T_c = +∞, this condition only fails if ν = δ_1. In particular, one easily checks that

Σ_{a+b=k} c_t(a, b, m) = 2 c^sym_t(k, m),

where c^sym_t(k, m) is the concentration of particles with k arms and mass m in the symmetric model of [2], with initial arm distribution µ_0/2. The factor 2 comes from the normalisation: in our model, the total concentration of arms in the medium is 2, when it is 1 in the symmetric model. It is also worth stressing the stronger fact that, for a, b ≥ 0, we have

c_t(a, b, m) = C(a + b, b) 2^{−(a+b)} Σ_{a′+b′=a+b} c_t(a′, b′, m).

Hence, at any given time, the distribution of the number of male (or female) arms is still binomial. So, if at some time we chose to reassign to each arm a gender uniformly and independently, and let the system evolve from this state, no difference would be observed. Or we could watch a system evolve like the symmetric model starting from an arm distribution µ_0/2, and then at some time give the arms a gender uniformly at random and independently. The evolution afterwards will be the evolution of the sexed model with initial arm distribution µ_0. Note as before that the critical time is the same as in the symmetric model with initial distribution µ_0/2.

More generally, consider a symmetric initial measure µ, i.e. µ(a, b) = µ(b, a) for all a, b ≥ 0, and the solution (c_t)_{t∈[0,T_c)} to Smoluchowski's equation (1). Then it is easy to check (by uniqueness) that for all t ∈ [0, T_c), c_t(a, b, m) = c_t(b, a, m), and that, if we denote

k_t(l, m) := Σ_{a+b=l} c_t(a, b, m),

then (k_t) is governed (up to a factor 1/2) by the symmetric Smoluchowski equation of [2]. Hence, in this case too, k_t(l, m) = 2 c^sym_t(l, m).

4.3 Particles with one gender

Let us finally consider the more intricate case where, at time 0, the arms of each particle all have the same gender. This is motivated by the idea of ionic bonds: a particle with only male (resp. female) arms can be considered as a cation (resp. an anion), and cations can only bond with anions. Hence consider, for i = 1, 2, two measures µ_i with mean 1 such that µ_1(0) = µ_2(0), and take

µ(a, b) = µ_1(a) if b = 0,  µ_2(b) if a = 0,  0 else,

and let ν_i(j) = (j + 1) µ_i(j + 1). The two-variable Lagrange inversion formula gives, for (a, b) ≠ (0, 0),

c_t(a, b, m) = (t^{m−1}/(1 + t)^{m+a+b−1}) Σ_{k=0}^{m} ((m − k + b − 1)! (k + a − 1)! / ((m − k)! k! a! b!)) ν_1^{*(m−k)}(k + a − 1) ν_2^{*k}(m − k + b − 1).

If we also let

ν_1 ⋄ ν_2(m) = (m − 1) Σ_{k=1}^{m−1} (1/k) ν_1^{*(m−k)}(k − 1) (1/(m − k)) ν_2^{*k}(m − k − 1),

then, for m ≥ 2,

c_t(0, 0, m) = (1/(m − 1)) (1 + 1/t)^{−(m−1)} ν_1 ⋄ ν_2(m),

provided ν_1(0) ν_2(0) > 0 (which holds when T_c = +∞, provided that ν_1 and ν_2 are not δ_1). In particular, we see that if T_c = +∞ and ν_1, ν_2 ≠ δ_1, then for m ≥ 2,

c_t(a, b, m) → 0 if (a, b) ≠ (0, 0),  c_t(0, 0, m) → (1/(m − 1)) ν_1 ⋄ ν_2(m) if a = b = 0.

Hence all the arms are used to coagulate. Chemically, this means that there are no more ions in the medium. The limiting distribution of the sizes is given by ((m − 1)^{−1} ν_1 ⋄ ν_2(m))_{m≥2}. We will generalize this fact in the following section, and give a probabilistic interpretation of the measure ν_1 ⋄ ν_2. Also, if M_i is the mean of ν_i and M = √(M_1 M_2), then T_c = 1/(M − 1) if M > 1, and T_c = +∞ if M ≤ 1. If µ_1 = µ_2 = µ_0, then the critical time is the same as in the symmetric model with initial distribution µ_0.

Limiting concentrations

In this section, we will study the limiting concentrations. Similarly to what happens in the oriented and symmetric models of [2], we expect the concentrations to converge when the time tends to +∞, whenever gelation does not occur. Physically, this would mean that the system converges to a terminal state where all arms have been used (otherwise, further coagulations "should" occur). This is actually true, and it is an easy consequence of the preceding results.

Corollary 1.
Assume T_c = +∞, and let (c_t)_{t≥0} be the solution to Smoluchowski's equation (1).

(i) When t → +∞, there exist limiting concentrations c_∞(m) such that

c_t(a, b, m) → c_∞(m) 1_{a=b=0} in ℓ¹(N × N × N*).

(ii) For z ∈ [0, 1], the generating function g_∞(z) of (c_∞(m))_{m≥1} is the antiderivative vanishing at 0 of

z ↦ ∂g_0/∂z(h^(1)_∞(z), h^(2)_∞(z), z),

where (h^(1)_∞, h^(2)_∞) is characterised by

h^(1)_∞(z) = ∂g_0/∂y(h^(1)_∞(z), h^(2)_∞(z), z),
h^(2)_∞(z) = ∂g_0/∂x(h^(1)_∞(z), h^(2)_∞(z), z).  (23)

Proof. (i) Since Γ_∞ = T_c = +∞, (6) holds for all t ≥ 0, so

Σ_{(a,b)≠(0,0)} c_t(a, b, m) → 0 as t → +∞.

Then, using (19), we get, for all t ≥ 0,

c_t(0, 0, m) = c_0(0, 0, m) + Σ_{m′=1}^{m−1} ∫_0^t c_s(1, 0, m′) c_s(0, 1, m − m′) ds.

But the integrand is bounded by A_s B_s = 1/(1 + s)². Hence the integral has a finite limit when t → +∞, and so does c_t(0, 0, m). Finally, ⟨m, c_t⟩ is bounded by Theorem 1, and ⟨m, c_∞⟩ < +∞ by Fatou's lemma, so the Cauchy-Schwarz inequality shows that

Σ_{m≥1} |c_t(0, 0, m) − c_∞(m)| → 0 as t → +∞.

(ii) By ℓ¹-convergence, we have

g_∞(z) = lim_{t→+∞} g_t(0, 0, z),

so, using (2) and the fact that H̃^(1)_t and H̃^(2)_t are bounded by 1,

g_∞(z) = lim_{t→+∞} G_t(z).

It just remains to check that h^(1)_t(0, 0, z) and h^(2)_t(0, 0, z) do have a limit when t → +∞. From their definition (3), they have the same limit (if any) as k^(1)_t(z) := h̃^(1)_t(0, 0, z) and k^(2)_t(z) := h̃^(2)_t(0, 0, z). Now fix Z ∈ [0, 1), and consider k^(1)_t and k^(2)_t as (continuous) maps on [0, Z]. But h_t is the right-inverse of φ_t, so

∂g_0/∂x(k^(1)_t(z), k^(2)_t(z), z) = k^(2)_t(z);  ∂g_0/∂y(k^(1)_t(z), k^(2)_t(z), z) = k^(1)_t(z),  (24)

and, since ⟨a² + b², c_0⟩ < +∞, ∂g_0/∂x(x, y, z) has a bounded differential on [0, 1]² × [0, Z]. Hence k^(2)_t, and for the same reason k^(1)_t, are Lipschitz-continuous on [0, Z], with a constant independent of t. Ascoli's theorem thus shows that the families (k^(1)_t) and (k^(2)_t), t ≥ 0, are relatively compact in C([0, Z]). So the family (k^(1)_t, k^(2)_t) lies in a compact set, and passing to the limit in (24) shows that any of its limit points solves (23). But since T_c = +∞, the application

(x, y) ↦ (∂g_0/∂y(x, y, z), ∂g_0/∂x(x, y, z))

is contracting for every z ∈ [0, Z]. So there is a unique solution to (23), and (k^(1)_t, k^(2)_t) converges to this solution.

In [2], Bertoin shows that for monodisperse initial conditions (i.e.
c_0(a, m) = µ(a) 1_{m=1} for some measure µ) and when gelation does not occur, the limiting concentrations can be described in terms of Galton-Watson processes. The same kind of analogy is observed in our case. Precisely, consider a Galton-Watson tree with two genders, constructed as follows. We start from a male or a female ancestor. It gives birth to a number a of male children and a number b of female children, where (a, b) is distributed according to a law µ_m(a, b) if the ancestor is a male, µ_f(a, b) if it is a female. Then each child gives birth to a certain number of children, distributed according to µ_m or µ_f depending on its gender, and so on. Consider T_f(µ_m, µ_f) (resp. T_m(µ_m, µ_f)) the total population of such a Galton-Watson process, starting from a female (resp. male) ancestor. Let, for r ∈ [0, 1],

g_f(r) = E(r^{T_f});  g_m(r) = E(r^{T_m})

be their generating functions. It is an easy exercise to check that they solve the following system, where φ_f (resp. φ_m) is the generating function of µ_f (resp. µ_m):

g_m(r) = r φ_m(g_m(r), g_f(r)),
g_f(r) = r φ_f(g_m(r), g_f(r)).  (25)

Besides, if T(µ_m, µ_f) is the size of a Galton-Watson forest started from a male and a female ancestor (each tree growing independently), then

P(T(µ_m, µ_f) = m) = [r^m] g_m(r) g_f(r),  (26)

where [r^m] h(r) is the coefficient of r^m in the expansion around 0 of an analytic function h.

Now, let us go back to our study. Assume monodisperse initial conditions, i.e. there is a finite measure µ on N × N such that c_0(a, b, m) = 1_{m=1} µ(a, b), and assume T_c = +∞. We will use the same notations as in the previous section. Using (23), we obtain

g′_∞(z) = (1/z) h^(1)_∞(z) h^(2)_∞(z),

so that, for n ≥ 2,

[z^n] g_∞(z) = (1/(n − 1)) [z^n] h^(1)_∞(z) h^(2)_∞(z).  (27)

Let
• ν_m(a, b) = (b + 1) µ(a, b + 1), the probability measure with generating function φ_m := ∂g_0/∂y,
• ν_f(a, b) = (a + 1) µ(a + 1, b), the probability measure with generating function φ_f := ∂g_0/∂x
(they are probability measures because of the assumption ⟨a, c_0⟩ = ⟨b, c_0⟩ = 1), and consider the two-type Galton-Watson process as above with these reproduction laws. Because of (23) and (25), (g_m, g_f) and (h^(1)_∞, h^(2)_∞) solve the same equation, which has a unique solution by Corollary 1, so h^(1)_∞ = g_m, h^(2)_∞ = g_f. Hence, by (26) and (27),

P(T(ν_m, ν_f) = m) = [z^m] h^(1)_∞(z) h^(2)_∞(z) = (m − 1) [z^m] g_∞(z) = (m − 1) c_∞(m).

Finally, let us call a measure µ degenerate if µ = δ_(1,1), or µ = (1/2)(δ_(2,0) + δ_(0,2)), or µ(a, b) = 0 for a ≥ 1, or µ(a, b) = 0 for b ≥ 1. We let the reader check (using e.g. Theorem 10.1 in [17]) that, under the assumption T_c = +∞, and ruling out the degenerate cases, T_m(ν_m, ν_f) and T_f(ν_m, ν_f) are finite a.s., and that the process is supercritical if T_c < +∞.

Corollary 2. The limiting concentrations verify, for m ≥ 2,

P(T(ν_m, ν_f) = m) = (m − 1) c_∞(m).

Moreover, if µ is not degenerate, then T(ν_m, ν_f) < +∞ a.s.

Hence, the law ν_1 ⋄ ν_2 defined in Section 4.3 is the law of the total population of a two-type Galton-Watson process started from one male and one female ancestor, where the males give birth to females according to the law ν_2, and the females give birth to males according to the law ν_1. In particular, if ν_1 = ν_2 = ν, then ν ⋄ ν is the distribution of the size of a Galton-Watson tree with reproduction law ν, starting from two ancestors. So we get (what is not obvious from the formula for ⋄) that, for m ≥ 2,

ν ⋄ ν(m) = (2/m) ν^{*m}(m − 2).

This corollary answers another question about gelation. By Theorem 1, the total mass ⟨m, c_t⟩ is conserved as time passes, so gelation does not occur before T_c. But if T_c = +∞, it may occur at infinity: some mass may be lost then. For monodisperse initial conditions, Corollary 2 proves that this cannot happen, except in the degenerate cases. Denote indeed C_t = ⟨c_t, 1⟩. Because of the ℓ¹-convergence in Corollary 1, C_t → Σ_m c_∞(m), and Equation (5) yields C_t = C_0 − t/(1 + t). Hence

C_0 − 1 = Σ_{m≥1} c_∞(m).

So, whenever µ is not degenerate, Corollary 2 gives

Σ_{m≥1} m c_∞(m) = Σ_{m≥1} ((m − 1) c_∞(m) + c_∞(m)) = P(T(ν_m, ν_f) < +∞) + Σ_{m≥1} c_∞(m) = 1 + C_0 − 1 = C_0,

which is precisely the total mass at time 0. For the degenerate cases, we can get explicit expressions for the concentrations, and these show that the mass at infinity is 0.

Microscopic model
The goal of this section is to construct a sequence of random processes modeling the coagulation of particles with male and female arms. We will start with n particles (and then let n → +∞). Let us first set some notations.

• Recall S = N × N × N*.
• ⟦0, n⟧ = {0, . . . , n}.
• M > 0 is a fixed constant such that the total mass of the system plus the total number of arms at time 0 is at most Mn (see the definition of E_n below).
• For p = (a, b, m) ∈ S and p′ = (a′, b′, m′) ∈ S, we will denote by p.p′ = a′b + ab′ the rate of coagulation, and by p ∘ p′ = (a + a′ − 1, b + b′ − 1, m + m′) the type of the particle resulting from such a coagulation.
• The sequence of the numbers of p-particles is an element of

E_n = { N ∈ ⟦0, Mn⟧^S, Σ_{(a,b,m)∈S} (a + b + m) N(a, b, m) ≤ Mn },

which is a finite set.
• (1/n) E_n is a subset of

E = { C ∈ [0, M]^S, Σ_{(a,b,m)∈S} (a + b + m) C(a, b, m) ≤ M }.

An element of E represents the sequence of concentrations of p-particles. E is a metric space endowed with the distance

d(C^(1), C^(2)) = Σ_{p∈S} |C^(1)(p) − C^(2)(p)|.

• We will call C-convergence the compact convergence (i.e. uniform convergence on every compact set) for functions from R_+ to E.
• D([0, +∞), H) is the space of càdlàg functions from [0, +∞) to a metric space (H, d), endowed with the Skorokhod distance. We will call S-convergence the convergence for Skorokhod's distance. For the basic facts about the Skorokhod distance for functions with values in a (complete separable) metric space, see [7].

It is easy to check the following result.

Lemma 6. (E, d) is a compact metric space. In particular, it is a Polish space.

Model

Let us now introduce the model. Informally, we consider a finite number n of particles with integer masses, and assume that at time 0, the total mass of the system plus the total number of arms is less than Mn. Then, each pair formed of a p-particle and of a p′-particle may coagulate with rate p.p′, independently of the other pairs, to form a p ∘ p′-particle; that is, the time one has to wait to see them coagulate is exponential with parameter p.p′. In other words, assume the system is in the state η at a given time, that is, η ∈ E_n and η(p) is the number of p-particles. There are η(p)η(p′) (or η(p)(η(p) − 1) if p = p′) pairs formed of a p-particle and of a p′-particle. Let

λ_η(p, p′) = p.p′ η(p)η(p′) if p ≠ p′,  λ_η(p, p) = p.p η(p)(η(p) − 1).

We set independently on each couple (p, p′) an exponential clock with parameter λ_η(p, p′) (an exponential random variable with parameter 0 is assumed to be a.s. infinite). There is a.s. one and only one clock which rings first. If it is the clock on the couple (p, p′), then the system jumps to the state η + Δ_{p,p′}, where

Δ_{p,p′}(p) = Δ_{p,p′}(p′) = −1 if p ≠ p′,  Δ_{p,p}(p) = −2,  Δ_{p,p′}(p ∘ p′) = +1.

Then restart the construction afresh from the new state. Note that only finitely many η(p) are nonzero, so the first jump occurs after an exponential time with parameter

λ_η = Σ_{p,p′∈S} λ_η(p, p′) < +∞.

We will consider the Markov chain constructed according to this rule. That is, we fix for every n ≥ 1:

• An element X^(n)_0 of E_n, which gives the initial numbers of particles.
• A pure-jump Markov process X^(n) on E_n, defined on some probability space (Ω_n, A_n, P_n), starting from X^(n)_0, and with generator

G f(η) = Σ_{(p,p′)∈S²} (f(η + Δ_{p,p′}) − f(η)) λ_η(p, p′)

for every bounded function f: E_n → R. The construction of such a process is obvious since E_n is finite.
• The rescaled and time-changed process C^(n)_t = (1/n) X^(n)_{t/n}.

Note that C^(n) is a pure-jump Markov process on (1/n)E_n ⊂ E, starting from C^(n)_0 = X^(n)_0/n, and with generator

G^(n) f(η) = Σ_{(p,p′)∈S²} ( f(η + (1/n) Δ_{p,p′}) − f(η) ) λ^(n)_η(p, p′),

where λ^(n)_η(p, p′) = (1/n) λ_{nη}(p, p′).

The law P_n of the process C^(n) is a probability measure on D([0, +∞), E). We will prove that the sequence (P_n) is tight, and that for every limit point P, and almost every process (C_t) with law P, (C_t) solves some system, which is Smoluchowski's equation (1) before the critical time. Because of the uniqueness of such a solution, this will show that (P_n) itself converges to the solution of Smoluchowski's equation before the critical time.
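Before turning to tightness, here is a hypothetical simulation of the process just described, in the simplest monodisperse case X^(n)_0 = n δ_(1,1). With this start every cluster keeps exactly one male and one female arm, so every pair of clusters coagulates at rate p.p′ = 2, and after the time change a state with k clusters jumps at total rate k(k − 1)/n. The sketch below is ours (constants and tolerances are our choices); it checks that the empirical concentration of clusters approaches 1/(1 + t):

```python
import random

random.seed(1)
n = 5000
# monodisperse start: each of the n particles is (1, 1, 1); merging two such
# clusters gives (1, 1, m + m'), so the "all pair rates equal 2" invariant holds
masses = [1] * n
t = 0.0
while len(masses) > 1:
    k = len(masses)
    # total jump rate of the rescaled, time-changed process: k(k - 1)/n
    t += random.expovariate(k * (k - 1) / n)
    if t >= 1.0:
        break
    i, j = sorted(random.sample(range(k), 2))   # uniform pair (equal rates)
    masses[i] += masses.pop(j)                  # coagulate: masses add up

conc = len(masses) / n   # empirical concentration, compare with 1/(1 + t) = 0.5
print(conc)
```

For large n the final concentration concentrates around 1/(1 + t) = 1/2, in line with the hydrodynamic limit described above; the total mass sum(masses) = n is conserved exactly along the simulation.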
The proof of tightness is analogous to the one in [18], up to some slight modifications.

Lemma 7.
The sequence (P_n)_{n≥1} is tight.

Proof. We will use the classical tightness criterion stated in [19], page 34, or in [7], Theorem 7.2. For t ≥ 0, let P^(n)_t be the law of C^(n)_t, which is a probability measure on E. Since E is compact by Lemma 6, the tightness of the sequence (P^(n)_t)_{n≥1} is obvious.

Now, C^(n) is a pure-jump process on (1/n)E_n ⊂ E, with generator G^(n). Hence, when the process is in the state η, the time before the next jump is exponential with parameter

λ^(n)_η := Σ_{p,p′∈S} λ^(n)_η(p, p′) = (1/2) ( Σ_{p,p′∈S} n p.p′ η(p)η(p′) − Σ_{p∈S} p.p η(p) ),

and, since η ∈ E, λ^(n)_η ≤ M²n =: cn.

Now take N > 0, β > 0, ǫ > 0, and let δ > 0 be such that N = δl for some l ∈ N*, and 3cδe/β < 1. Define now

w_N(Y, δ) := inf_{π∈Π_δ} max_{t_i∈π} sup_{t_i ≤ s < t < t_{i+1}} d(Y_s, Y_t),

the infimum being over the partitions π = {0 = t_0 < t_1 < · · · < t_l = N} with t_{i+1} − t_i ≥ δ. Taking the partition t_i = iδ, and letting Z_i := sup_{t_i ≤ s < t < t_{i+1}} d(C^(n)_s, C^(n)_t), we get

P_n(w_N(C^(n), δ) > β) ≤ P_n( max_{0≤i≤l−1} Z_i > β ) ≤ l max_{0≤i≤l−1} P_n(Z_i > β).

But the size of a jump, that is, d(C^(n)_{t−}, C^(n)_t), is 3/n. Hence, if Z_i > β, then the process has jumped more than k := ⌈βn/3⌉ =: ⌈c′n⌉ times between t_i and t_{i+1} (where ⌈x⌉ is the first integer strictly greater than x). If S_k is the time of the k-th jump, the Markov property tells us that

P_n(Z_i > β) ≤ P_n(S_k ≤ δ).

But S_k is the sum of k independent exponential random variables, with parameters smaller than cn. So, if S′_k is the sum of k independent exponential random variables with parameter cn (on a probability space (Ω, A, P)), then S_k stochastically dominates S′_k, that is,

P_n(S_k ≤ δ) ≤ P(S′_k ≤ δ).

To conclude, note that the last term is the probability that a Poisson process with parameter cn jumps more than k times on [0, δ], and Stirling's formula shows that this tends to zero when (cδe/c′) < 1.

Convergence

In this section, we prove the convergence of C^(n) to a process solving the system (15), and deduce that it solves Smoluchowski's equation (1) before T_c.

Proposition 4. Assume that the following convergences in distribution hold:
• for every p ∈ S, C^(n)_0(p) → c_0(p) for some non-random c_0(p) ≥ 0,
• Σ_{(a,b,m)∈S} a C^(n)_0(a, b, m) → 1,
• Σ_{(a,b,m)∈S} b C^(n)_0(a, b, m) → 1.
Let P be a limit point of (P_n), and let (c_t) be a process with law P. Then a.s. (c_t) solves the system (15), with initial conditions (c_0).

In the following proofs, we take a subsequence of (C^(n)) which converges in law to some possibly random c ∈ D([0, +∞), E). For notational simplicity, we will assume that (C^(n)) itself tends to c. Since E is compact, it is separable, and hence so is D([0, +∞), E). Skorokhod's representation theorem (cf e.g.
[7]) now allows us to assume that the C^(n) are defined on the same probability space (Ω, F, P), that C^(n) → c a.s. (that is, for almost every ω ∈ Ω, the function C^(n)(ω) tends to c(ω) for Skorokhod's distance), and that in the statement, there is a.s. convergence. We will also constantly use the fact that, for every bounded Borel function f: E → R, the processes

f(C^(n)_t) − f(C^(n)_0) − ∫_0^t G^(n) f(C^(n)_s) ds =: M^(n)_t  (28)

and

(M^(n)_t)² − ∫_0^t ( G^(n)(f²)(C^(n)_s) − 2 f(C^(n)_s) G^(n) f(C^(n)_s) ) ds  (29)

are martingales. Note also that if f: E → R is "linear", then for all η ∈ E,

G^(n) f(η) = (1/2) Σ_{p,p′∈S} f(Δ_{p,p′}) p.p′ η(p)η(p′) − (1/(2n)) Σ_{p∈S} f(Δ_{p,p}) p.p η(p)  (30)

and

( G^(n)(f²) − 2f G^(n) f )(η) = (1/(2n)) ( Σ_{p,p′∈S} f(Δ_{p,p′})² η(p)η(p′) p.p′ − (1/n) Σ_{p∈S} f(Δ_{p,p})² η(p) p.p ).  (31)

We will also need the following convergence result.

Lemma 8. Let

A^(n)_s = Σ_{(a,b,m)∈S} a C^(n)_s(a, b, m) and B^(n)_s = Σ_{(a,b,m)∈S} b C^(n)_s(a, b, m).

Then (A^(n)) and (B^(n)) C-converge a.s. to t ↦ 1/(1 + t).

Proof. Obviously, we cannot pass to the limit immediately in these expressions. So consider the maps from E to R: C ↦ ⟨a, C⟩ and C ↦ ⟨b, C⟩, which are measurable and bounded (by M). By (28), there are martingales M^{A,(n)} and M^{B,(n)} such that

A^(n)_t = A^(n)_0 − ∫_0^t A^(n)_s B^(n)_s ds + M^{A,(n)}_t;  B^(n)_t = B^(n)_0 − ∫_0^t A^(n)_s B^(n)_s ds + M^{B,(n)}_t.  (32)

Now, (29) and (31) show that the quadratic variation of M^{A,(n)} verifies

⟨M^{A,(n)}⟩_t ≤ (1/n) ∫_0^t A^(n)_s B^(n)_s ds ≤ M² t / n.

By Doob's inequality,

E( sup_{0≤t≤T} (M^{A,(n)}_t)² ) → 0

for every T > 0. Hence there is a subsequence of (M^{A,(n)}) which C-converges a.s. to 0. In particular, it S-converges.
For notational simplicity, we will assume that (M^{A,(n)}) itself converges. For the same reason, we may assume that (M^{B,(n)}) also S-converges a.s. to 0.

Now, note that the proof of Lemma 7 still works for (A^(n)), since the size of its jumps is bounded by 1/n. So (A^(n)) is tight, and by Prokhorov's theorem, this means that for almost every ω ∈ Ω, (A^(n)(ω))_{n≥1} lies in a compact of D([0, +∞), R) (actually, this is a consequence of the proof of Lemma 9, and we do not need this implication of Prokhorov's theorem). The same works for (B^(n)), so we can find Ω′ ⊂ Ω with P(Ω′) = 1, such that for every ω ∈ Ω′: C^(n)(ω) → c(ω) (for Skorokhod's distance), M^{A,(n)}(ω) → 0, M^{B,(n)}(ω) → 0, and (A^(n)(ω))_{n≥1} and (B^(n)(ω))_{n≥1} lie in a compact.

Next, fix ω ∈ Ω′, and let us find the limit of (A^(n)(ω), B^(n)(ω)) (for the product topology, which is not Skorokhod's topology on D([0, +∞), R²)). Since it lies in a compact set, it is enough to show that it has only one limit point. So assume (A^(n)(ω), B^(n)(ω)) converges to some (A, B). Then (A^(n)_t(ω)) converges to A_t for every t ∈ K, the set of continuity points of A. But A is càdlàg, so it has only countably many points of discontinuity, and hence K^c has Lebesgue measure 0. Hence (A^(n)(ω)) converges to A Lebesgue-a.e., and ditto for (B^(n)(ω)). Also, (A^(n)(ω)) and (B^(n)(ω)) are bounded by M, so, using dominated convergence in (32), and recalling that A^(n)_0(ω) and B^(n)_0(ω) → 1, we get

A_t = 1 − ∫_0^t A_s B_s ds;  B_t = 1 − ∫_0^t A_s B_s ds.

Hence

A_t = B_t = 1/(1 + t).

Finally, there is only one limit point of (A^(n)(ω), B^(n)(ω)). So (A^(n)(ω)) and (B^(n)(ω)) both S-converge to t ↦ 1/(1 + t), and, since this function is continuous, they C-converge.

Remark 6.
As pointed out in the proof, the convergence of A^(n) and B^(n) to the actual numbers of arms

A_t = Σ_{(a,b,m)∈S} a c_t(a, b, m) and B_t = Σ_{(a,b,m)∈S} b c_t(a, b, m)

is not obvious. There is no such problem for a strictly sublinear coagulation rate (as in Jeon's proof [18]). In our (linear) case, we prove below that this convergence holds before the critical time (we also refer to Norris [24] for general sublinear rates in a model with no arms). In fact, if there is a solution (c_t) to (1) defined after T_c, we believe that A^(n)_t and B^(n)_t do not converge to A_t and B_t after T_c (and that this number of arms is then strictly less than 1/(1 + t)). This would suggest that T_c is actually a gelation time: some of the arms are lost in a "gel" (a particle with an infinite mass and infinitely many arms).

Proof of Proposition 4. 1. Take some p_0 = (a_0, b_0, m_0) ∈ S, and let, for C ∈ E, f(C) = C(p_0). According to (28),

C^(n)_t(p_0) − C^(n)_0(p_0) − ∫_0^t G^(n) f(C^(n)_s) ds =: M^{p_0,(n)}_t  (33)

is a martingale. Note also that for p, p′ ∈ S, f(Δ_{p,p′}) is 0, except if p or p′ or p ∘ p′ is p_0. Hence, it is easy to check using (30) that

G^(n) f(C^(n)_s) = − Σ_{p∈S} C^(n)_s(p_0) C^(n)_s(p) p_0.p + (1/2) Σ_{p ⪯ p_0} p.(p_0 \ p) C^(n)_s(p) C^(n)_s(p_0 \ p) − (1/n) Σ_{p∈S} f(Δ_{p,p}) p.p C^(n)_s(p).  (34)

The last term is due to the difference between λ_η(p, p′) when p ≠ p′ and when p = p′. In any case, it tends to 0 uniformly on R_+ and uniformly in p_0.

2. Let us now study the martingale term. By Doob's inequality, we have, for every T > 0,

E( sup_{0≤t≤T} (M^{p_0,(n)}_t)² ) ≤ 4 E( (M^{p_0,(n)}_T)² ),

and by (29), this last term is

E( ∫_0^T (G^(n)(f²) − 2f G^(n) f)(C^(n)_s) ds ).
But by (31), and since |f(Δ_{p,p′})| ≤ 2 for all p, p′ ∈ S, we get (G^(n)(f²) − 2f G^(n) f)(C^(n)_s) ≤ 2M²/n, so that

E( sup_{0≤t≤T} (M^{p_0,(n)}_t)² ) → 0.

Hence, there is a subsequence of (M^{p_0,(n)}) which a.s. converges to 0 uniformly on R_+. For notational simplicity, we will now assume that (M^{p_0,(n)}) itself C-converges to 0. Using the diagonal method, we may as well assume that (M^{p_0,(n)}) C-converges to 0 for every p_0 ∈ S.

3. We have already seen in the proof of Lemma 7 that d(C^(n)_t, C^(n)_{t−}) ≤ 3/n a.s. By continuity of X ↦ sup_{s∈[0,t]} d(X_{s−}, X_s) (cf [7]), this ensures that c is almost surely continuous, so C^(n) actually C-converges to c. From the definition of d, it is also obvious that C^(n)(p_0) C-converges to c(p_0) for every p_0 ∈ S.

4. With these results, we may now pass to the limit in (33) and (34). Write (34) in the form

G^(n) f(C^(n)_s) = − α_n(s) + (1/2) β_n(s) + ǫ_n(s).

Equation (33) shows that

C^(n)_t(p_0) = C^(n)_0(p_0) − ∫_0^t α_n(s) ds + (1/2) ∫_0^t β_n(s) ds + ∫_0^t ǫ_n(s) ds + M^{p_0,(n)}_t.  (35)

By Point 3, C^(n)_t(p_0) C-converges a.s. to c_t(p_0). C^(n)_0(p_0) tends to c_0(p_0) by assumption. β_n(t) is a finite sum, so

β_n(t) → Σ_{p ⪯ p_0} p.(p_0 \ p) c_t(p) c_t(p_0 \ p)

compactly. Finally, note that

α_n(s) = C^(n)_s(p_0) ( a_0 Σ_{(a,b,m)∈S} b C^(n)_s(a, b, m) + b_0 Σ_{(a,b,m)∈S} a C^(n)_s(a, b, m) ) = C^(n)_s(p_0) (a_0 B^(n)_s + b_0 A^(n)_s).

By Lemma 8, A^(n)_t and B^(n)_t converge compactly to t ↦ 1/(1 + t), so

lim_{n→+∞} α_n(s) = ((a_0 + b_0)/(1 + s)) c_s(p_0) a.s.,

compactly. Since these are all compact convergences, we can pass to the limit in (35), for all p_0 ∈ S. This readily shows that (c(p)) solves (15), with initial conditions (c_0).

Theorem 2. Assume the same hypotheses as in Proposition 4, and assume as well

Σ_{(a,b,m)∈S} (a² + b²) c_0(a, b, m) < +∞.

Let T_c be defined as in Definition 2.
Then $(C^{(n)}_t)_{t \in [0,T_c)}$ converges (in distribution) to the unique solution of Smoluchowski's equation (1).

Remark 7. Obviously, convergence has to be understood with respect to Skorokhod's topology on $[0, T_c)$ (which is the trace topology of Skorokhod's topology on $[0,+\infty)$). In particular, the sequence of the laws of $(C^{(n)}_t)_{t \in [0,T_c)}$ is tight.

Proof. Let $Q_n$ be the law of $(C^{(n)}_t)_{t \in [0,T_c)}$. The sequence $(Q_n)$ is tight. Let $Q$ be one of its limit points, and let $c$ be a process with law $Q$. By Proposition 4 above, $c$ a.s. solves the system (15), with initial conditions $(c_0)$. Now, let $g_t(x,y,z)$ be the (a priori random) generating function of $c_t$. It is easy to see that $g$ is well defined for $(t,x,y,z) \in [0,T_c) \times (0,1]^2 \times [0,1]$, that $g_t(\cdot,\cdot,z)$ is regular for every $z \in [0,1]$, and that, for $z \in [0,1]$, $g_t(\cdot,\cdot,z)$ solves the PDE (14) with initial condition
\[
(x,y) \mapsto g_0(x,y,z) = \sum c_0(a,b,m)\, x^a y^b z^m.
\]
Hence, by Proposition 3, $(c_t)$ solves Smoluchowski's equation (1) until $T_c$. But by Theorem 1, there is a unique solution to this equation on $[0,T_c)$. Hence there is a unique limit point to $(Q_n)$, so the sequence itself converges to the solution of Smoluchowski's equation on $[0,T_c)$. $\square$

Acknowledgments. This is a part of the author's PhD thesis. I would like to thank my advisors Jean Bertoin and Lorenzo Zambotti for introducing this subject, for their useful advice, and for their encouragement. My thanks also go to the referees for their careful check and advice.

References

[1] D. J. Aldous, Deterministic and stochastic models for coalescence (aggregation and coagulation): a review of the mean-field theory for probabilists, Bernoulli 5 (1999), no. 1, 3–48.
[2] J. Bertoin, Two solvable systems of coagulation equations with limited aggregations, Ann. Inst. H. Poincaré Anal. Non Linéaire (2008), doi:10.1016/j.anihpc.2008.10.007.
[3] J. Bertoin, V. Sidoravicius, M. E.
Varès, A system of grabbing particles related to Galton-Watson trees, to appear in Random Structures Algorithms. Preprint available at http://arxiv.org/abs/0804.0726.
[4] J. Bertoin, V. Sidoravicius, The structure of typical clusters in large sparse random configurations, J. Stat. Phys. 135 (2009), 87–105.
[5] J. Carr, F. P. da Costa, Instantaneous gelation in coagulation dynamics, Z. Angew. Math. Phys. 43 (1992), no. 6, 974–983.
[6] M. Deaconu, E. Tanré, Smoluchowski's coagulation equation: probabilistic interpretation of solutions for constant, additive and multiplicative kernels, Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4) 29 (2000), no. 3, 549–579.
[7] S. N. Ethier, T. G. Kurtz, Markov Processes: Characterization and Convergence, Wiley and Sons, 2005.
[8] M. Escobedo, S. Mischler, B. Perthame, Gelation in coagulation and fragmentation models, Comm. Math. Phys. 231 (2002), no. 1, 157–188.
[9] N. Fournier, J.-S. Giet, Convergence of the Marcus-Lushnikov process, Methodol. Comput. Appl. Probab. 6 (2004), no. 2, 219–231.
[10] N. Fournier, P. Laurençot, Marcus-Lushnikov processes, Smoluchowski's and Flory's models, Stochastic Process. Appl. 119 (2009), no. 1, 167–189.
[11] A. M. Golovin, The solution of the coagulation equation for cloud droplets in a rising air current, Izv. Geophys. Ser. 5 (1963), 482–487.
[12] I. J. Good, Generalizations to several variables of Lagrange's expansion, with applications to stochastic processes, Proc. Cambridge Philos. Soc. 56 (1960), 367–380.
[13] F. Leyvraz, S. Redner, Non-universality and breakdown of scaling in a two-component coagulation model, Phys. Rev. Lett. 57 (1986), 163; (Erratum) 57 (1986), 3123.
[14] F. Leyvraz, S. Redner, Non-universal behavior and breakdown of scaling in two-species aggregation, Phys. Rev. A 36 (1987), 4033.
[15] A. Hammond, F. Rezakhanlou, Moment bounds for the Smoluchowski equation and their consequences, Comm. Math. Phys. 276 (2007), no. 3, 645–670.
[16] A. Hammond, F.
Rezakhanlou, The kinetic limit of a system of coagulating Brownian particles, Arch. Ration. Mech. Anal. 185 (2007), no. 1, 1–67.
[17] T. E. Harris, The Theory of Branching Processes, Dover Publications, 2002.
[18] I. Jeon, Existence of gelling solutions for coagulation-fragmentation equations, Comm. Math. Phys. 194 (1998), no. 3, 541–567.
[19] A. Joffe, M. Métivier, Weak convergence of sequences of semimartingales with applications to multitype branching processes, Adv. in Appl. Probab. 18 (1986), no. 1, 20–65.
[20] J. B. McLeod, On an infinite set of nonlinear differential equations, Quart. J. Math. Oxford 13 (1962), 119–128.
[21] A. A. Lushnikov, Certain new aspects of the coagulation theory, Izv. Atm. Ok. Fiz. 14 (1978), 738–743.
[22] S. G. Krantz, H. R. Parks, A Primer of Real Analytic Functions, 2nd ed., Birkhäuser, 2002.
[23] A. H. Marcus, Stochastic coalescence, Technometrics 10 (1968), 133–143.
[24] J. R. Norris, Smoluchowski's coagulation equation: uniqueness, nonuniqueness and a hydrodynamic limit for the stochastic coalescent, Ann. Appl. Probab. 9 (1999), no. 1, 78–109.
[25] D. Serre, Matrices: Theory and Applications, Springer, 2002.
[26] M. von Smoluchowski,