On the circle, GMC^γ = lim←− CβE_n for γ = √(2/β), (γ ≤ 1)
arXiv [math.PR], Sep

REDA CHHAIBI AND JOSEPH NAJNUDEL

Abstract.
We identify an equality between two objects arising from different contexts of mathematical physics: Kahane's Gaussian Multiplicative Chaos (GMC^γ) on the circle, and the Circular Beta Ensemble (CβE) from Random Matrix Theory. This is obtained via an analysis of related random orthogonal polynomials, making the approach spectral in nature. In order for the equality to hold, the simple relationship between coupling constants is γ = √(2/β), which we establish when γ ≤ 1, i.e., β ≥ 2. This corresponds to the sub-critical and critical phases of the GMC.

As a side product, we answer positively a question raised by Virág. We also give an alternative proof of the Fyodorov-Bouchaud formula concerning the total mass of the GMC^γ on the circle. This conjecture was recently settled by Rémy using Liouville conformal field theory. We can go even further and describe the law of all moments. Furthermore, we notice that the "spectral construction" has a few advantages. For example, the Hausdorff dimension of the support is efficiently described for all β > 0, thanks to existing spectral theory. Remarkably, the critical parameter for GMC^γ corresponds to β = 2, where the geometry and representation theory of unitary groups lie.

Contents
1. Introduction
1.1. Orthogonal Polynomials on the Unit Circle (OPUC)
1.2. The Circular Beta Ensemble (CβE)
1.3. The Gaussian Multiplicative Chaos (GMC^γ)
2. Main result and consequences
3. Orthogonal polynomials and Gaussian field inside the disc
3.1. A universal bound on traces
3.2. A convergence result for OPUC with rotationally invariant (α_j)_{j≥0}

Key words and phrases.
Orthogonal polynomials on the unit circle, Kahane's Gaussian Multiplicative Chaos, Random Matrix Theory.
Notation

• D := {z ∈ C : |z| < 1} is the open unit disc; ∂D denotes its boundary, which is the unit circle.
• A measure ν on ∂D induces a linear form on the space of continuous functions on the circle. Thus its evaluation against f will be denoted ν(f).
• Equality in law between the random variables X and Y is denoted by X =^L Y.
• The Vinogradov symbol ≪ is equivalent to the O notation: f ≪ g ⇔ f = O(g). Moreover, in all the computations of the article, we allow the implicit constant to depend on the parameter β. If the implicit constant depends on other quantities, they will be indicated by subscripts: f ≪_x g means that there exists C depending only on x and β such that |f| ≤ Cg.
• All the random objects we consider in the paper are defined on a measurable space (Ω, B). When changes of probability measures are not involved, the underlying probability measure is P, and the symbol E denotes the expectation under P.

1. Introduction
The relationship we are pointing out between Gaussian Multiplicative Chaos and Random Matrix Theory is best expressed in terms of the classical theory of orthogonal polynomials on the unit circle. As such, we start by recalling a few facts on the topic.

1.1. Orthogonal Polynomials on the Unit Circle (OPUC).
Consider a probability measure µ on the unit circle ∂D, D being the unit disc. By applying the Gram-Schmidt orthogonalization procedure to the monomials {1, z, z², ...}, one obtains a sequence (Φ_n)_{n≥0} of OPUC which satisfies the Szegő recurrence:

  ( Φ_{k+1}(z)  )   ( z       −ᾱ_k ) ( Φ_k(z)  )
  ( Φ*_{k+1}(z) ) = ( −α_k z   1   ) ( Φ*_k(z) ) ,    (1.0.1)

where Φ*_n(z) := z^n \overline{Φ_n(1/z̄)}.

The Szegő recurrence is the analogue of the three-term recurrence for orthogonal polynomials on the real line R. The coefficients α_j belong to the closed disc D̄ and are called Verblunsky coefficients. Just as a measure µ determines its Verblunsky coefficients, the converse is also true (see [Sim05a, Theorem 1.7.11 p.97]):

Theorem 1.1 (Verblunsky's theorem). Let M(∂D) be the simplex of probability measures on the circle, endowed with the weak topology, and let D := D^N ⊔ (⊔_{n∈Z₊} D^n × ∂D) be endowed with the topology related to the following notion of convergence: a sequence (A_p)_{p≥0} in D converges to an element A_∞ = (α_j)_j if and only if every coordinate converges. Then the map V which associates to a measure its sequence of Verblunsky coefficients is a homeomorphism from M(∂D) onto D.

1.2. The Circular Beta Ensemble (CβE). For this paragraph, β > 0. The Circular Beta Ensemble is the random set of n points on the unit circle whose probability distribution is:

  (CβE_n)   (1/Z_{n,β}) ∏_{1≤k<l≤n} |e^{iθ_k} − e^{iθ_l}|^β dθ_1 ⋯ dθ_n .    (1.1.1)

For general β > 0, the representation-theoretic picture is more complicated: CβE_n is the orthogonality measure for Jack polynomials in n variables (see Appendix A and the references therein). In turn, Jack polynomials are also intimately related to representation theory via rational Cherednik algebras [DG10]. Our point of view will be more direct. From the work of Killip and Nenciu [KN04], the characteristic polynomial

  X_n(z) := det(id − zU*_n) = ∏_{1≤j≤n} (1 − z e^{−iθ_j^{(n)}})

can be realized as the last term of the Szegő recurrence, whose distribution of the Verblunsky coefficients is explicitly given.
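The Szegő recurrence (1.0.1) is easy to run numerically. The sketch below (plain NumPy; the function name and the sample coefficients are ours, not from the paper) iterates the recursion on coefficient arrays and checks two classical facts implicit in the discussion: Φ*_n(0) = 1 for every n, so that α_k can be read off from Φ_{k+1}(0) = −ᾱ_k, and the zeros of Φ_n lie in the open unit disc when all |α_j| < 1.

```python
import numpy as np

def szego_recursion(alphas):
    """Iterate the Szego recurrence (1.0.1). Polynomials are stored as
    coefficient arrays, lowest degree first. Returns the list of pairs
    (Phi_k, Phi*_k) for k = 0, ..., len(alphas)."""
    phi = np.array([1.0 + 0j])    # Phi_0 = 1
    phis = np.array([1.0 + 0j])   # Phi*_0 = 1
    out = [(phi, phis)]
    for a in alphas:
        zphi = np.concatenate(([0.0], phi))   # z * Phi_k
        phis_pad = np.pad(phis, (0, 1))       # Phi*_k, padded to the same degree
        phi, phis = zphi - np.conj(a) * phis_pad, phis_pad - a * zphi
        out.append((phi, phis))
    return out

rng = np.random.default_rng(0)
alphas = rng.uniform(0, 0.9, 5) * np.exp(2j * np.pi * rng.uniform(size=5))
seq = szego_recursion(alphas)

for k, a in enumerate(alphas):
    phi_k1, phis_k1 = seq[k + 1]
    assert abs(phis_k1[0] - 1) < 1e-12            # Phi*_{k+1}(0) = 1
    assert abs(-np.conj(phi_k1[0]) - a) < 1e-12   # alpha_k recovered from Phi_{k+1}(0)
zeros = np.roots(seq[-1][0][::-1])
assert np.all(np.abs(zeros) < 1)                  # zeros of Phi_n inside the open disc
```

The coefficient-recovery loop illustrates, in a finite setting, the bijection between measures and Verblunsky coefficients behind Theorem 1.1.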
This distribution is described as follows: the coefficients are independent, the last one is uniform on the unit circle, and for 0 ≤ j ≤ n − 2, α_j is rotationally invariant and |α_j|² is a Beta random variable with parameters 1 and β_j := β(j+1)/2:

  P(|α_j|² ∈ dx) = β_j (1 − x)^{β_j − 1} dx,   x ∈ [0, 1].

Moreover, thanks to [KN04, Proposition B.2], reversing the order of the Verblunsky coefficients, except the last one η, changes the weights but preserves the distribution of the support. From this property, a fruitful idea consists in using the reversed order of Verblunsky coefficients and incorporating the weights in the definition of the CβE_n. Therefore, we redefine the Circular β Ensemble with n points as the random probability measure:

  CβE_n := V^{−1}(α_0, α_1, ..., α_{n−2}, η) = Σ_{j=1}^n π_j δ_{θ_j^{(n)}}(dθ).   (1.1.5)

The support points (θ_j^{(n)})_{1≤j≤n} are the zeroes of the last orthogonal polynomial associated to the finite sequence of Verblunsky coefficients (α_0, α_1, ..., α_{n−2}, η), which has the same law as X_n, and they are distributed as in (1.1.1). Nevertheless, the distribution of the weights (π_j)_{1≤j≤n} is not known explicitly in a tractable form.

With the definition (1.1.5), a remarkable fact is that the sequence of Verblunsky coefficients is consistent. Indeed, even if CβE_n and CβE_{n+1} have a priori no reason for living on the same probability space, it is possible to couple them in such a way that they share their first n − 1 Verblunsky coefficients for all values of n ≥ 1: if (α_j)_{j≥0} is an infinite sequence of independent variables whose distribution is given as above, and if η is an independent variable, uniform on the unit circle, then the last orthogonal polynomial given by the sequence of Verblunsky coefficients (α_0, ..., α_{n−2}, η) has the same law as X_n, for all n ≥ 1.
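To make (1.1.5) concrete, the following sketch samples Verblunsky data (α_0, ..., α_{n−2}, η) from the Killip-Nenciu distribution above, computes the support points as the zeros of the last (paraorthogonal) polynomial, and obtains the weights from the Christoffel-type formula π_j = (Σ_{k=0}^{n−1} |φ_k(z_j)|²)^{−1}, where φ_k are the orthonormal OPUC. That weight formula is standard OPUC quadrature lore and is our addition, not a display of the paper; the moment identities used in the final checks (with the convention c_m = ∫ e^{−imθ} dµ) are the classical Verblunsky moment formulas.

```python
import numpy as np

rng = np.random.default_rng(1)
beta, n = 2.0, 6

# Killip-Nenciu: |alpha_j|^2 ~ Beta(1, beta*(j+1)/2), uniform phases; |eta| = 1
j = np.arange(n - 1)
mod = np.sqrt(rng.beta(1.0, beta * (j + 1) / 2.0))
alphas = mod * np.exp(2j * np.pi * rng.uniform(size=n - 1))
eta = np.exp(2j * np.pi * rng.uniform())

# Szego recursion (coefficients lowest degree first), keeping every Phi_k
phi, phis = np.array([1 + 0j]), np.array([1 + 0j])
Phi = [phi]
for a in list(alphas) + [eta]:
    zphi = np.concatenate(([0], phi))
    phis_pad = np.pad(phis, (0, 1))
    phi, phis = zphi - np.conj(a) * phis_pad, phis_pad - a * zphi
    Phi.append(phi)

points = np.roots(phi[::-1])                         # support of CbetaE_n
norms2 = np.concatenate(([1.0], np.cumprod(1 - np.abs(alphas) ** 2)))
weights = np.array([
    1.0 / sum(abs(np.polyval(Phi[k][::-1], z)) ** 2 / norms2[k] for k in range(n))
    for z in points
])

assert np.max(np.abs(np.abs(points) - 1)) < 1e-8     # atoms on the unit circle
assert abs(weights.sum() - 1) < 1e-7                 # probability weights
# quadrature is exact up to degree n-1: compare the first two moments
c1 = np.sum(weights * np.conj(points))
c2 = np.sum(weights * np.conj(points) ** 2)
assert abs(c1 - alphas[0]) < 1e-7
assert abs(c2 - (alphas[0] ** 2 + alphas[1] * (1 - abs(alphas[0]) ** 2))) < 1e-7
```

The last two assertions anticipate the quadrature property discussed next: the discrete measure shares its first n − 1 Verblunsky coefficients with µ_β, so low moments depend only on α_0, α_1, ...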
With this particular coupling, the Verblunsky coefficients provide a sequence of random measures indexed by n, supported by the points of the CβE_n, and tending to a limiting random measure µ_β, whose Verblunsky coefficients are (α_j)_{j≥0}. In light of Verblunsky's Theorem 1.1, this remark begs the question:

Question 1.2. Is there anything remarkable or canonical about the projective limit

  lim←− CβE_n := V^{−1}(α_0, α_1, α_2, ...) = µ_β,

obtained from using all Verblunsky coefficients? Does this measure arise in other circumstances?

Before discussing this question, it is worth explaining why the points of CβE_n can be seen as quadrature points of the infinite random measure lim←− CβE_n = µ_β. Any sequence of measures, indexed by n, whose first n − 1 Verblunsky coefficients agree with (α_0, α_1, ...) will converge to µ_β, in the topology of weak convergence. Moreover, if we assume that the Verblunsky coefficients are (α_0, ..., α_{n−2}, η) with |η| = 1, then the approximating measure is atomic, supported by n points. The general theory of orthogonal polynomials dictates that for all polynomials P of degree deg P ≤ n − 1:

  ∫_{∂D} P dµ_β = Σ_{j=1}^n π_j P(e^{iθ_j^{(n)}}).

In the language of approximation theory, that is exactly to say that (π_j)_{1≤j≤n} are (random) quadrature weights and that the n points of CβE_n can be seen as the n (random) quadrature points for the (random) measure µ_β = lim←− CβE_n.

1.3. The Gaussian Multiplicative Chaos (GMC^γ). In this paragraph, γ > 0. We consider the following Gaussian field on the unit disc:

  G(z) := 2ℜ Σ_{k=1}^∞ (z^k/√k) N^C_k,

where (N^C_k)_{k≥1} denote i.i.d. complex Gaussian variables, such that

  E[(N^C_k)²] = E[N^C_k] = 0,   E[|N^C_k|²] = 1.

One can establish that:
• Cov(G(w), G(z)) = −2 log|1 − wz̄|.
• The field can be extended to the closed unit disc D̄ but its restriction to the circle is not a function.
In fact, G|_{∂D} is almost surely a random Schwartz distribution in ∩_{ε>0} H^{−ε}(∂D), where the Sobolev spaces are given for all s ∈ R by:

  H^s(∂D) := { f | Σ_{n∈Z} (1 + n²)^s |f̂(n)|² < ∞ }.

• Because G is harmonic, G(re^{iθ}) = (G|_{∂D} ∗ P_r)(e^{iθ}), where ∗ denotes convolution and P_r is the Poisson kernel.

We can define the measure

  GMC^γ_r(f) := ∫_{∂D} (dθ/2π) f(e^{iθ}) exp( γG(re^{iθ}) − (γ²/2) Var(G(re^{iθ})) )   (1.2.1)
             = ∫_{∂D} (dθ/2π) f(e^{iθ}) e^{γG(re^{iθ})} (1 − r²)^{γ²}.

The Gaussian Multiplicative Chaos with coupling constant γ > 0 is then defined as:

  GMC^γ := lim_{r→1⁻} GMC^γ_r.   (1.2.2)

To be exact, the above limit holds in probability, upon integrating against continuous functions. The existence of such a limit for all γ > 0 is discussed according to the following three phases:

• γ < 1, Sub-critical phase. GMC^γ is a non-degenerate random measure, which can be seen from the following L¹ convergence.

Theorem 1.3 (Theorem 1.2 in [B+]). For all nonnegative, smooth functions f, and for γ < 1, i.e. in the sub-critical regime:

  GMC^γ_r(f) →_{r→1⁻} GMC^γ(f),

the convergence being in probability and in L¹(Ω, B, P).

• γ = 1, Critical phase. The limit in (1.2.2) is the trivial zero measure; however, one can perform different normalizations in order to obtain the so-called critical GMC. A random renormalization via the so-called derivative martingale has been implemented in [DRS+]. Alternatively, the critical GMC can be written as the limit of the subcritical GMC when the parameter tends to 1 from below. This allows us to bootstrap the construction of the sub-critical GMC and obtain the critical GMC via the limit in probability:

  GMC^{γ=1} = lim_{γ→1⁻} GMC^γ/(1 − γ),   (1.3.1)

when the random measures GMC^γ are constructed from the same field G for all values of γ ∈ (0, 1). The critical GMC is known to be non-atomic and it is conjectured to assign full measure to a random set of Hausdorff dimension zero (see the overview section of [DRS+]).

• γ > 1, Supercritical phase.
In this case, there are two constructions resulting in different measures.

A first point of view consists in noticing that the renormalization of Eq. (1.2.1) by a factor (1 − r²)^{γ²} is too strong, and the limit (1.2.2) is the zero measure. One needs a different renormalization procedure so that a non-trivial limit holds. The correct normalization at the exponential scale is given by the precise asymptotic behavior of the maximum max_{θ∈R} G(re^{iθ}) as r → 1⁻. As such, one naturally expects the limit to be atomic, giving mass to the Gaussian field's maxima. This was done in [MRV+16], where a freezing transition occurs for γ > 1 because of the new renormalization. All in all, the result is that the limiting measure can be described as follows: one starts with the critical GMC, and conditionally on the corresponding random measure GMC^{γ=1}, one takes a strictly positive stable noise of scaling exponent γ^{−2} and intensity GMC^{γ=1}. In loose terms, in the supercritical regime, one only sees Dirac masses corresponding to the extrema of the underlying Gaussian field, and which are "sprinkled" on the circle with an intensity depending on the critical measure.

Another version of the supercritical Gaussian multiplicative chaos has been previously constructed in [BJRV13], by taking a subcritical GMC with coupling constant γ′ = γ^{−1} as the intensity of a stable noise of scaling exponent γ^{−2}. We use a different normalization, hence the extra factors 2 in [BJRV13]. The constructed measure is named the KPZ dual measure. As explained in that paper, the name stems from the relationship to the KPZ formula and its symmetry with respect to the transform γ ↦ γ^{−1}. This last construction cannot be naturally recovered from a logarithmically correlated Gaussian field on the circle without adding some extra randomness, contrarily to the construction of [MRV+16] with a freezing transition. Nevertheless, the KPZ dual measure seems to have better analyticity properties than the construction with a freezing transition.
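Before moving on, the covariance formula Cov(G(w), G(z)) = −2 log|1 − wz̄| of §1.3 and the normalization in (1.2.1) can be sanity-checked by simulating a truncated version of the field. This is our own sketch; the sample points, the truncation level K and the tolerances are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)
K, M = 60, 50_000                          # series truncation, Monte Carlo samples
k = np.arange(1, K + 1)
# i.i.d. standard complex Gaussians: E[N] = E[N^2] = 0, E[|N|^2] = 1
N = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)

def G(x):
    """Truncated field G(x) = 2 Re sum_{k<=K} x^k N_k / sqrt(k)."""
    return 2.0 * np.real(N @ (x ** k / np.sqrt(k)))

w, z = 0.5, 0.3 * np.exp(1j * np.pi / 4)
cov_emp = np.mean(G(w) * G(z)) - np.mean(G(w)) * np.mean(G(z))
cov_thy = -2.0 * np.log(abs(1 - w * np.conj(z)))
assert abs(cov_emp - cov_thy) < 0.03

# the renormalization of (1.2.1) makes the density mean one:
# E[exp(gamma G(z))] * (1 - |z|^2)^(gamma^2) = 1
gamma, r = 0.8, 0.6
mass = np.mean(np.exp(gamma * G(r))) * (1 - r ** 2) ** (gamma ** 2)
assert abs(mass - 1.0) < 0.05
```

The second check is exactly the reason the sub-critical limit (1.2.2) can be non-degenerate: the exponential martingale normalization keeps the expected mass equal to one for every r < 1.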
We will make further remarks on the topic at the end of the next section.

2. Main result and consequences

The Main Theorem of the present article provides a direct link between the a priori unrelated objects introduced in the previous section: namely, it shows that up to a suitable normalization, the random measure lim←− CβE_n and the Gaussian multiplicative chaos of parameter γ := √(2/β) have the same distribution in the sub-critical and the critical cases, i.e. for β ≥ 2. Notice that the construction of lim←− CβE_n bypasses the phase transition involved in the definition of the GMC, since the description in terms of Verblunsky coefficients is uniform for all values of β > 0. However, we do not exactly know how the two random measures CβE_∞ and GMC^γ are related in the supercritical case.

The precise statement of the main result of the article is the following:

Theorem 2.1 (Main Theorem - GMC^γ = lim←− CβE_n). For β ≥ 2, let (α_j)_{j≥0} be a sequence of independent, rotationally invariant complex-valued random variables, such that |α_j|² is Beta-distributed with parameters 1 and β_j = β(j+1)/2. Let µ_β be the random probability measure whose Verblunsky coefficients are given by the sequence (α_j)_{j≥0}, and let

  C := ∏_{j=0}^∞ (1 − |α_j|²)^{−1} (1 − 2/(β(j+1)))   if β > 2,

  C := (1 − |α_0|²)^{−1} ∏_{j=1}^∞ (1 − |α_j|²)^{−1} (1 − 2/(β(j+1)))   if β = 2.

Then, the product of C by the measure µ_β has the same law as the measure corresponding to the Gaussian multiplicative chaos GMC^γ, with parameter γ = √(2/β) ≤ 1. In particular, µ_β has the same law as GMC^γ, renormalized into a probability measure, and the total mass of GMC^γ has the same law as C.

General structure of proof. First, the result can be bootstrapped quite easily from the sub-critical phase to the critical phase by using (1.3.1). Therefore, it is enough to deal only with the sub-critical phase β > 2, i.e. γ < 1. Consider the sequence (Φ*_n)_{n≥0} associated to the Verblunsky coefficients (α_j)_{j≥0}.
A general theorem in OPUC theory, due to Bernstein and Szegő, implies that µ_β is the limit, when n goes to infinity, of the probability measure µ^β_n whose density at e^{iθ} is proportional to |Φ*_n(e^{iθ})|^{−2}. On the other hand, one can show that (Φ*_n)_{n≥0} almost surely converges, uniformly on compact sets of the unit disc D, to a limiting random holomorphic function Φ*_∞, which is the exponential of a logarithmically correlated Gaussian field. From the precise form of the correlation of this field, we deduce that the regularization of the Gaussian multiplicative chaos can be written as:

  GMC^{γ=√(2/β)}_r(dθ) = (1 − r²)^{2/β} |Φ*_∞(re^{iθ})|^{−2} dθ.

To prove the Main Theorem, it is then enough to show that, up to a delicate issue of renormalization, the limit of the measure |Φ*_n(e^{iθ})|^{−2} dθ when n goes to infinity (the measure µ_β) is the same as the limit of the measure |Φ*_∞(re^{iθ})|^{−2} dθ when r goes to 1 from below (the Gaussian multiplicative chaos). In other words, up to a suitable normalization, the two limits n → ∞ and r → 1⁻ commute when we start with the measure |Φ*_n(re^{iθ})|^{−2} dθ. One can sketch the following diagram:

  µ^β_{n,r}(dθ) ∝ |Φ*_n(re^{iθ})|^{−2} dθ   --(n→∞)-->   µ^β_r(dθ) ∝ GMC^{γ=√(2/β)}_r(dθ)
           |                                                      |
         r→1⁻                                                   r→1⁻
           v                                                      v
  µ^β_n(dθ) ∝ |Φ*_n(e^{iθ})|^{−2} dθ       --(n→∞)-->   µ_β = GMC^{γ=√(2/β)}(dθ)/C

where the symbol ∝ stands for "proportional to", the multiplicative factor being a random variable. In the end, the proof boils down to tracking the exact behavior of these factors. □

The diagram above shows in particular that the subcritical GMC is the limit of a suitable normalization of the measure |Φ*_n(e^{iθ})|^{−2} dθ when n goes to infinity. It is reasonable to expect a similar convergence for powers of |Φ*_n(e^{iθ})| with more general exponents. However, the techniques of the present paper do not directly apply to this case, since the result by Bernstein and Szegő crucially depends on the fact that we consider a power of exponent −2.
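The Bernstein-Szegő step can be illustrated numerically: for a finite Verblunsky sequence (α_0, ..., α_{n−1}, 0, 0, ...), the probability density proportional to |Φ*_n(e^{iθ})|^{−2} has low moments given by the classical Verblunsky moment formulas (with the convention c_m = ∫ e^{−imθ} dµ). The check below is our own, with arbitrary coefficients; only the recursion and the density come from the discussion above.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
alphas = 0.8 * rng.uniform(size=n) * np.exp(2j * np.pi * rng.uniform(size=n))

# Szego recursion for Phi*_n, coefficients lowest degree first
phi, phis = np.array([1 + 0j]), np.array([1 + 0j])
for a in alphas:
    zphi = np.concatenate(([0], phi))
    phis_pad = np.pad(phis, (0, 1))
    phi, phis = zphi - np.conj(a) * phis_pad, phis_pad - a * zphi

# Bernstein-Szego density, normalized against dtheta/2pi on a uniform grid
theta = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
dens = 1.0 / np.abs(np.polyval(phis[::-1], np.exp(1j * theta))) ** 2
dens /= dens.mean()

c1 = np.mean(dens * np.exp(-1j * theta))     # should equal alpha_0
c2 = np.mean(dens * np.exp(-2j * theta))     # alpha_0^2 + alpha_1 (1 - |alpha_0|^2)
assert abs(c1 - alphas[0]) < 1e-8
assert abs(c2 - (alphas[0] ** 2 + alphas[1] * (1 - abs(alphas[0]) ** 2))) < 1e-8
```

The uniform grid gives spectrally accurate Fourier coefficients here because the density is analytic on the circle, which is exactly the regularity that fails in the limit n → ∞.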
It is also natural to conjecture a convergence to the GMC when Φ*_n(e^{iθ}) is replaced by the characteristic polynomial of the CβE_n, since these two polynomials are very strongly related. For the characteristic polynomial of the CUE (β = 2), the convergence has been proven in the L² phase by Webb in [Web15], and then in the whole subcritical phase by Nikula, Saksman and Webb in [NSW18]. For the CβE with general β > 0, convergence to the GMC has been proven in the subcritical phase by Lambert in [Lam19], in the case where we take the polynomial inside the unit disc, at a small mesoscopic distance (of order n^{−1} up to a logarithmic factor) from the unit circle.

So far, we believe that the equality between GMC^γ and CβE_∞ can be extended to the supercritical regime, possibly after suitable adjustments. However, this extension does not seem to be straightforward and is beyond the scope of this paper.

Remark 2.2 (The splitting phenomenon). Just like the characteristic polynomial of the CβE_n evaluated at one point is a product of independent random variables (see [BHNY08]), so is the total mass C. As explained in [BHNY08], the splitting phenomenon for the characteristic polynomial is the probabilistic manifestation of the product formula for the (circular) Selberg integrals. It will be apparent in the proof of the Fyodorov-Bouchaud formula (Corollary 2.5) that the splitting of the total mass is the probabilistic manifestation of another product formula, related to the Γ function.

Before diving into technical considerations, let us provide a few corollaries.

2.1. The law of the Verblunsky coefficients of the GMC. The Main Theorem gives a way to construct the Gaussian multiplicative chaos from a sequence of Verblunsky coefficients. We can think of it in the reverse way:

Corollary 2.3. For γ ≤ 1, let (α_j)_{j≥0} be the Verblunsky coefficients associated to the random probability measure obtained by dividing GMC^γ by its total mass.
Then, the random variables (α_j)_{j≥0} are independent, rotationally invariant in distribution, and |α_j|² is distributed like a Beta variable of parameters 1 and β(j+1)/2, for β = 2/γ². Moreover, the total mass of GMC^γ is given by the formula defining C in the Main Theorem.

Proof. The joint law of the total mass and the Verblunsky coefficients associated to GMC^γ is uniquely determined by the law of this measure, and then it is the same for any other random measure with the same distribution. In particular, it is the same for the measure Cµ_β considered in the Main Theorem. Now, by construction, Cµ_β has Verblunsky coefficients with the desired distribution and its total mass is C. □

2.2. Coupling the CβE for different β. The different measures GMC^γ for different γ > 0 can be defined on the same probability space, as limits of measures built from the same Gaussian field G. From the previous corollary, dividing these measures by their total mass gives a coupling of the measures CβE_∞ for all β ≥ 2. Adding an independent uniform variable η on the unit circle then gives a way to deduce a coupling of the CβE_n for all n ≥ 1 and all β ≥ 2.

2.3. Hausdorff dimension of the support. At first, the description of the Hausdorff dimension of spectral measures for random Schrödinger operators was investigated by Kiselev, Last and Simon in [KLS98]. The adaptation to OPUC was made in the book by Simon [Sim05b, Chapter 12].

Corollary 2.4. The Hausdorff dimension of the support of µ_β = CβE_∞ is almost surely given as follows:
• If β > 2 (sub-critical), then dim_H supp(µ_β) = 1 − 2/β.
• If β = 2 (critical), then dim_H supp(µ_β) = 0 and µ_β is non-atomic.
• If β < 2 (super-critical), then µ_β is atomic.

Proof. Apply [Sim05b, Theorem 12.7.7]. Notice that because our Verblunsky coefficients (α_j(µ_β); j ∈ N) are rotationally invariant, the Alexandrov measures (µ^β_λ; λ ∈ ∂D) are in fact all the same in law.
Therefore the conclusion of that theorem, holding for almost every λ, is true for the measure lim←− CβE_n. Let us mention that [Sim05b, Theorem 12.7.7] depends on the previous [Sim05b, Theorem 12.7.2], whose hypotheses do not exactly match ours. Nevertheless the proofs carry over verbatim. □

Combined with the Main Theorem, this result proves that the critical chaos is supported on a set of Hausdorff dimension zero, as conjectured. In the sub-critical and super-critical phases, these values of the spectral dimension are in agreement with the GMC, which supports our expectation that the Main Theorem extends (up to normalization) to the super-critical phase. It is also worth mentioning that a similar analysis for Gaussian ensembles has been tried in [BFS07].

2.4. The Fyodorov-Bouchaud formula and beyond. The Main Theorem allows us to easily compute the distribution of the total mass of the chaos, which gives a proof of a conjecture by Fyodorov and Bouchaud [FB08].

Corollary 2.5 (Fyodorov-Bouchaud formula on the total mass of GMC^γ). In the sub-critical and critical phases (γ ≤ 1):

  GMC^γ(∂D) =^L K_γ e^{−γ²},

where GMC^γ(∂D) denotes the total mass of the GMC^γ, e is a standard exponential random variable, and K_γ is an explicit constant:

  K_γ := Γ(1 − γ²)^{−1}   if γ < 1,    K_γ := 2   if γ = 1.

Proof. In the case γ < 1, pick a z ∈ C, with ℜ(z) ≤ 0.
From Theorem 2.1:

  E(GMC^γ(∂D)^z) = ∏_{j=0}^∞ (1 − 2/(β(j+1)))^z E((1 − |α_j|²)^{−z})
                 = ∏_{j=0}^∞ (1 − 2/(β(j+1)))^z β_j/(β_j − z)
                 = ∏_{j=0}^∞ (1 + 1/(j+1))^{−2z/β} (1 − 2z/(β(j+1)))^{−1} [ (1 + 1/(j+1))^{2/β} (1 − 2/(β(j+1))) ]^z
                 = Γ(1 − 2z/β) Γ(1 − 2/β)^{−z},

where on the last line, we used the Euler product formula for the Gamma function, Γ(1 + x) = ∏_{k=1}^∞ (1 + 1/k)^x (1 + x/k)^{−1}. One recognizes that the Γ function is the Mellin transform of an exponential, which gives the desired result for γ < 1, since γ² = 2/β. The case γ = 1 is handled by taking the limit γ → 1⁻ as in (1.3.1):

  GMC^{γ=1}(∂D) =^L lim_{γ→1⁻} (K_γ/(1 − γ)) e^{−γ²}.

The constant K_γ vanishes at γ = 1 and absorbs the renormalization:

  K_γ/(1 − γ) = 1/((1 − γ)Γ(1 − γ²)) = (1 + γ)/Γ(2 − γ²) →_{γ→1⁻} 2. □

This formula has recently been proven by Rémy [Rem17], using the partial differential equations satisfied by correlation functions in Liouville conformal field theory. The conformal field theory on the hyperbolic disc uses the GMC as an ingredient.

Thanks to our complete description of the GMC on the circle, the previous corollary recovers the total mass, which is the 0-th moment. We can also recover an explicit description of the other moments. Indeed, if we denote, for n ≥ 1,

  C_n := ∫_0^{2π} e^{−inθ} GMC^γ(dθ) = c_n GMC^γ(∂D),

we have, by using Verblunsky's formula (see [Sim05a, Theorem 1.5.5 p.60]),

  c_n = α_{n−1} ∏_{j=0}^{n−2} (1 − |α_j|²) + V^{(n−1)}(α_0, ..., α_{n−2}, ᾱ_0, ..., ᾱ_{n−2}),

where (α_j)_{j≥0} are the Verblunsky coefficients of the measure GMC^γ and V^{(n−1)} is an explicitly computable polynomial with integer coefficients. For example, we have (see [Sim05a], formulas (1.3.51), (1.3.52), (1.3.53)):

  c_1 = α_0,
  c_2 = α_0² + α_1(1 − |α_0|²),
  c_3 = (α_0 − ᾱ_0α_1)[α_0² + α_1(1 − |α_0|²)] + α_0α_1 + α_2(1 − |α_0|²)(1 − |α_1|²).
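The Mellin identity E[C^z] = Γ(1 − γ²z) Γ(1 − γ²)^{−z} proved above can be probed by Monte Carlo, using the fact that −log(1 − |α_j|²) is exactly an Exp(β_j) variable when |α_j|² ~ Beta(1, β_j). In the sketch below (ours; the truncation level, sample size and test point are arbitrary), we take z = −1, which is allowed since ℜ(z) ≤ 0.

```python
import math
import numpy as np

rng = np.random.default_rng(4)
beta = 4.0                       # subcritical: gamma^2 = 2/beta = 1/2
gamma2 = 2.0 / beta
J, N = 1000, 10_000              # truncation of the infinite product, MC samples

j = np.arange(J)
beta_j = beta * (j + 1) / 2.0
# -log(1 - |alpha_j|^2) ~ Exp(beta_j), so sample the log-product directly
log_inv = rng.exponential(size=(N, J)) / beta_j
log_C = log_inv.sum(axis=1) + np.sum(np.log(1 - 2.0 / (beta * (j + 1))))
C = np.exp(log_C)

# Fyodorov-Bouchaud at z = -1: E[C^{-1}] = Gamma(1 + gamma^2) Gamma(1 - gamma^2)
lhs = np.mean(1.0 / C)
rhs = math.gamma(1 + gamma2) * math.gamma(1 - gamma2)   # = pi/2 for beta = 4
assert abs(lhs - rhs) < 0.1
```

The agreement reflects the splitting phenomenon of Remark 2.2: the total mass is a convergent product of independent factors whose Mellin transforms telescope into Gamma functions.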
Since the joint law of (α_j)_{j≥0} is known, these formulas uniquely determine the joint law of the moments (c_n)_{n≥1} and (C_n)_{n≥1}. It is possible to compute, when they exist, "moments of the moments", i.e. the expectation of products of some powers of the c_n's, their conjugates, and a power of the total mass of the chaos. For example, it is not difficult (but not obvious) to deduce Conjecture 1 of [Rem17] from our Main Theorem. Similarly, we get immediately

  E[|c_1|²] = 1/(1 + β/2) = γ²/(1 + γ²),

where γ = √(2/β), which is consistent with the computation of the Edwards-Anderson order parameter in the circular model of the 1/f noise: see formula (7) of [CD16]. One can also recover formula (42) of [CD16] when γ ≤ 1.

2.5. Further remarks. On the supercritical phase: For all β > 0, with the notation of the Main Theorem, one can define the random measure C′µ_β, where

  C′ = ∏_{j=0}^∞ (1 − |α_j|²)^{−1} e^{−2/(β(j+1))}.

The paper shows that in the subcritical and the critical cases (β ≥ 2), C′µ_β is, up to a deterministic multiplicative constant, equal in law to the Gaussian multiplicative chaos of parameter γ = √(2/β). Nevertheless, in the supercritical phase (β < 2), the random measure C′µ_β is still well-defined, and gives a new way to construct a supercritical Gaussian multiplicative chaos.

It is natural to ask how this construction can be compared to the very different constructions given in [BJRV13] and [MRV+16]. Since the moments of the total mass of C′µ_β are analytic in β, it is very unlikely that our construction gives the freezing transition appearing in [MRV+16]. On the other hand, we expect that C′µ_β is strongly related to the KPZ dual measure of [BJRV13]: the two random measures may have the same law, up to a multiplicative constant.
Corroborating evidence is the fact that the laws of the total masses agree. This can be seen as follows. By Proposition 6 of [BJRV13], we have, for γ > 1 and 0 ≤ ρ < 1/γ², with obvious notation:

  E[GMC^{γ,BJRV}(∂D)^ρ] = (Γ(1 − ργ²) Γ(1 − γ^{−2})^{ργ²} / Γ(1 − ρ)) γ^{−2ργ²} E[GMC^{1/γ}(∂D)^{ργ²}],

and, using the Fyodorov-Bouchaud conjecture proved by Rémy in [Rem17] and in a different way in the present article, we obtain:

  E[GMC^{γ,BJRV}(∂D)^ρ] = κ_γ^ρ Γ(1 − ργ²)

for some κ_γ > 0 depending only on γ. Hence, the total mass GMC^{γ,BJRV}(∂D) is, up to a multiplicative constant, the power −γ² of an exponential variable, as is the total mass C′ of the measure C′µ_β. If one considers the construction of [MRV+16], one also obtains a total mass given by the power −γ² of an exponential random variable.

On relating the GMC and Random Matrix Theory: To the best of the authors' knowledge, the first hints that there should be a relationship between Gaussian multiplicative cascades and Random Matrix Theory have first appeared in the paper [BFS07] and then in Virág's ICM Proceedings of Seoul 2014 [Vir14]. In both of these references, the focus is on multiplicative cascades on the real line and on tridiagonal models for the GβE (Gaussian β Ensembles). The point of view is very similar to ours, since the relationship is probed through orthogonal polynomials, and of course, we expect similar results to hold in the context of the real line. A key ingredient would be the Bernstein-Szegő type measures being developed by Gamboa, Nagel and Rouault in [GNR].

Nevertheless, the Main Theorem 2.1 comes as a surprise, as it says that in the case of the circle, the relationship is much stronger than expected. For example, questions 1 and 2 raised by Virág [Vir14], in our context, would ask whether GMC^γ and lim←− CβE_n are similar at the level of fractal spectra. For all intents and purposes, we do not need to explain what the fractal spectrum of a measure is, and refer to [Fal04, Chapter 17]. We do not need to study the multifractal spectrum of the GMC^γ either.
The answer to Virág's question remains positive, since we have proven that GMC^γ and lim←− CβE_n are in fact the same object.

In fact, one could wonder in which sense the CβE_n is a regularization of the GMC^γ. As explained in Appendix A, from the works of Macdonald, Random Matrix Theory is a very peculiar regularization of a Gaussian space at the level of symmetric functions. This regularization is of course very different from convolution, which is the standard process in order to construct the GMC (Theorem 1.3), hence the difficulty of proving lim←− CβE_n = GMC^γ. This difficulty is further exemplified by the following. Relating an approximation via finitely many Verblunsky coefficients and an approximation via convolution is present in the literature in the form of Golinskii-Ibragimov (GI) measures (see [Sim05a, Section 6.1]); however, one cannot apply any of the general approximation theorems that are available. Most of the results in [Sim05a] treat only regular measures, by assuming the existence of densities or via the Szegő condition, which is a finite entropy condition for the Lebesgue measure relative to the measure of interest. And the GMC is very far from that.

Finally, in the same way that the GMC plays an important role in understanding the extrema of log-correlated fields, it is certainly desirable to relate the current paper to our previous work [CMN18] investigating the extrema of the characteristic polynomial of the CβE.

2.6. Structure of the paper. In Section 3, we show the convergence of Φ*_n towards the exponential of a logarithmically correlated field inside the unit disc, and we provide a bound on the moments of |Φ*_n(z)| when z ∈ D. This is a consequence of a general result on OPUC, which can be of interest beyond our study of the CβE. The setting is that of rotationally invariant Verblunsky coefficients with mild decay.

In Section 4, we begin the proof of the Main Theorem 2.1.
In fact, we made the choice of factoring the proof of Theorem 2.1, so that a first part can be presented as quickly as possible. This section gives all the required arguments for a complete proof, modulo two Lemmas whose proofs are postponed for later. These are Lemma 4.1 and Lemma 4.2, which motivate the next two sections.

In Section 5, we define and estimate some quantities, and consider a new probability distribution, in order to study the behavior of the polynomials Φ*_n near the unit circle.

In Section 6, we prove the convergence of some discrete stochastic process towards the solution of a suitable stochastic differential equation. This inhomogeneous SDE is rather ill-behaved and its analysis is a key ingredient in proving Lemma 4.2.

Finally, Section 7 gives the missing proofs of Lemmas 4.1 and 4.2 and thus concludes the proof of the Main Theorem 2.1.

3. Orthogonal polynomials and Gaussian field inside the disc

We consider the following Gaussian random holomorphic function G_C, defined on the unit disc by

  G_C(z) := Σ_{k=1}^∞ (z^k/√k) N^C_k,

where (N^C_k)_{k≥1} are i.i.d. complex Gaussian variables, such that E[N^C_k] = E[(N^C_k)²] = 0, E[|N^C_k|²] = 1. The function G_C itself is complex Gaussian and centered, the covariance structure being given by

  E[G_C(w)G_C(z)] = 0,   E[G_C(w)\overline{G_C(z)}] = −log(1 − wz̄).

From this covariance structure, we deduce that the field

  G(z) := 2ℜ(G_C(z)) = G_C(z) + \overline{G_C(z)}

is real-valued, centered and Gaussian, with covariance

  E[G(w)G(z)] = −log(1 − wz̄) − log(1 − w̄z) = −2 log|1 − wz̄|.

Recall that the Gaussian multiplicative chaos of parameter γ < 1 is obtained as the limit of the measures GMC^γ_r defined in (1.2.1), as r → 1⁻. We have the following result:

Proposition 3.1. For β > 2, let (α_j)_{j≥0} be distributed as in Theorem 2.1, and let (Φ*_n)_{n≥0} be the corresponding sequence of OPUC.
By general theory (see [Sim05a, Theorem 1.7.1 p.90]), these polynomials are equal to 1 at 0 and do not vanish on the unit disc. Let log Φ*_n be the unique continuous determination of the logarithm of Φ*_n which vanishes at 0. Then, almost surely, (log Φ*_n)_{n≥0} converges to a limit log Φ*_∞, uniformly on compact sets of the unit disc. Moreover, this limit has the same distribution as the Gaussian field γG_C for γ = √(2/β). Consequently, the random measure

  lim_{r→1⁻} (1 − r²)^{2/β} |Φ*_∞(re^{iθ})|^{−2} (dθ/2π)

exists and has the same distribution as GMC^γ, and then Theorem 2.1 is proven if we show that Cµ_β coincides with this random measure. Moreover, we have the following bound on the moments of Φ*_n:

  ∀z ∈ D, ∀p ∈ R, ∀n ∈ Z₊,   E(|Φ*_n(z)|^p) ≤ (1 − |z|²)^{−p²/(2β)}.

Let us introduce the filtration:

  F := (F_n := σ(α_0, α_1, ..., α_{n−1}); n ∈ Z₊).   (3.1.1)

Throughout the paper, the following martingale structure is crucial. We have, by the Szegő recursion:

  Φ*_{n+1}(z) = Φ*_n(z)(1 − α_n Q_n(z)),   where Q_n(z) := z Φ_n(z)/Φ*_n(z).

From [Sim05a, Corollary 1.7.2 p.90], |Q_n| < 1 on D. Hence, for z ∈ D:

  log Φ*_{n+1}(z) = log Φ*_n(z) + log(1 − α_n Q_n(z)),

where we take the principal branch of the logarithm in the last term of the equality. From the fact that Q_n is F_n-measurable, |log(1 − |α_n|)| is integrable, α_n is rotationally invariant and independent of F_n, we deduce that (log Φ*_n(z))_{n≥0} is an F-martingale, for all z ∈ D.
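Both the martingale property E[Φ*_n(z)] = Φ*_0(z) = 1 and the moment bound of Proposition 3.1 (in our reading of the display, E|Φ*_n(z)|^p ≤ (1 − |z|²)^{−p²/(2β)}) can be probed by simulating the pointwise recursion Φ*_{n+1}(z) = Φ*_n(z)(1 − α_n Q_n(z)). The parameters below are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(5)
beta, n, z, N = 4.0, 60, 0.7, 20_000

phi = np.full(N, 1 + 0j)        # Phi_k(z)
phis = np.full(N, 1 + 0j)       # Phi*_k(z)
for k in range(n):
    b = beta * (k + 1) / 2.0    # |alpha_k|^2 ~ Beta(1, b), uniform phase
    a = np.sqrt(rng.beta(1.0, b, size=N)) * np.exp(2j * np.pi * rng.uniform(size=N))
    phi, phis = z * phi - np.conj(a) * phis, phis - a * z * phi

p = 2.0
emp = np.mean(np.abs(phis) ** p)
bound = (1 - abs(z) ** 2) ** (-p * p / (2 * beta))
assert emp < bound * 1.05            # E|Phi*_n(z)|^p <= (1 - |z|^2)^{-p^2/(2 beta)}
assert abs(np.mean(phis) - 1) < 0.05  # martingale: E[Phi*_n(z)] = 1
```

The bound is saturated in the limit n → ∞, since log Φ*_∞ is exactly Gaussian, which is why the empirical mean sits close to (but below) the bound.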
This is true by virtue of a general convergence result, Proposition 3.6, which we shall prove in the next subsections. The only required hypothesis is (3.6.1) and it is implied by:

Lemma 3.2. For all $k > 2$ and $\sigma \geq 0$:
\[ \mathbb{E}\Big( e^{\sigma \sum_{j \geq 0} |\alpha_j|^k} \Big) < \infty. \]

Proof. Using the independence of the $\alpha_j$'s, their explicit density and then an integration by parts:
\[ \mathbb{E}\Big( e^{\sigma \sum_{j \geq 0} |\alpha_j|^k} \Big) = \prod_{j=0}^{\infty} \mathbb{E}\big( e^{\sigma |\alpha_j|^k} \big) = \prod_{j=0}^{\infty} \Big( \frac{\beta(j+1)}{2} \int_0^1 dx\, e^{\sigma x^{k/2}}\, (1-x)^{\frac{\beta(j+1)}{2}-1} \Big) \]
\[ = \prod_{j=0}^{\infty} \Big( 1 + \frac{k\sigma}{2} \int_0^1 dx\, e^{\sigma x^{k/2}}\, x^{k/2-1} (1-x)^{\frac{\beta(j+1)}{2}} \Big) \leq \prod_{j=0}^{\infty} \Big( 1 + \frac{k\sigma}{2}\, e^{\sigma} \int_0^1 dx\, x^{k/2-1} (1-x)^{\frac{\beta(j+1)}{2}} \Big) \]
\[ = \prod_{j=0}^{\infty} \Big( 1 + \frac{k\sigma}{2}\, e^{\sigma}\, \frac{\Gamma(k/2)\, \Gamma\big(\frac{\beta(j+1)}{2}+1\big)}{\Gamma\big(\frac{\beta(j+1)}{2}+\frac{k}{2}+1\big)} \Big). \]
This product is finite because of the asymptotics:
\[ \frac{\Gamma\big(\frac{\beta(j+1)}{2}+1\big)}{\Gamma\big(\frac{\beta(j+1)}{2}+\frac{k}{2}+1\big)} \ll \Big(\frac{\beta(j+1)}{2}\Big)^{-k/2} \ll j^{-k/2}, \]
which is summable in $j$ since $k > 2$. □

Let us now identify the law of the limit of $\log \Phi_n^*$. By the ratio asymptotics [Sim05a, Theorem 1.7.4 p.91], as $\alpha_n \underset{n \to \infty}{\longrightarrow} 0$, we have a.s.
\[ \lim_{n \to \infty} \frac{\Phi_{n-1}(z)}{\Phi_{n-1}^*(z)} = 0, \]
uniformly on compact sets in the interior of the unit disc $\mathbb{D}$. On the other hand, from results by Killip and Nenciu [KN04], we deduce that if $\log X_n$ is the logarithm of the characteristic polynomial corresponding to the $C\beta E$, defined as follows:
\[ \log X_n(z) := \sum_{\lambda \in C\beta E_n} \log(1 - \bar{\lambda} z), \]
then we have the equality in law:
\[ (\log X_n(z)\ ;\ z \in \mathbb{D}) \overset{\mathcal{L}}{=} \Big( \log \Phi_{n-1}^*(z) + \log\Big( 1 - z\eta\, \frac{\Phi_{n-1}(z)}{\Phi_{n-1}^*(z)} \Big)\ ;\ z \in \mathbb{D} \Big), \]
where $\eta$ is an independent uniform random variable on the unit circle. We deduce that $\log X_n$ converges in law to $\log \Phi_\infty^*$ when $n$ goes to infinity, for the topology of uniform convergence on compact sets in $\mathbb{D}$.
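For simulations of the objects in play, the explicit density appearing in the proof of Lemma 3.2 corresponds, as we read it, to $|\alpha_j|^2 \sim \mathrm{Beta}(1, \beta(j+1)/2)$ with an independent uniform phase — an assumption of ours, to be checked against the statement of Theorem 2.1. Combined with the Szegő recursion, this gives a direct sampler for $\Phi_n^*$, and one can confirm numerically the facts recalled above: $\Phi_n^*(0) = 1$ and $\Phi_n^*$ has no zero in the unit disc. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
beta, n = 4.0, 12

# assumed law: |alpha_j|^2 ~ Beta(1, beta*(j+1)/2), phase uniform on the circle
idx = np.arange(n)
alpha = np.sqrt(rng.beta(1.0, beta * (idx + 1) / 2)) * np.exp(2j * np.pi * rng.random(n))

# Szego recursion (Simon's convention):
#   Phi_{k+1}(z)   = z Phi_k(z) - conj(alpha_k) Phi_k^*(z)
#   Phi_{k+1}^*(z) = Phi_k^*(z) - alpha_k z Phi_k(z)
# polynomials stored as ascending coefficient arrays
phi = np.array([1.0 + 0j])
phis = np.array([1.0 + 0j])
for k in range(n):
    zphi = np.concatenate(([0], phi))  # z * Phi_k
    phi, phis = (zphi - np.conj(alpha[k]) * np.append(phis, 0),
                 np.append(phis, 0) - alpha[k] * zphi)

roots = np.roots(phis[::-1])  # zeros of Phi_n^*: all outside the closed unit disc
min_mod = np.abs(roots).min() if len(roots) else np.inf
```

The conjugation placement in the recursion varies between references; the sketch follows the convention of [Sim05a].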
In particular, since the Taylor coefficients at zero of $\log X_n$ can be written as contour integrals involving $\log X_n$ on a small circle centered at $0$, their finite-dimensional joint distributions tend to the corresponding distributions for $\log \Phi_\infty^*$.

Now, thanks to the result of Jiang and Matsumoto [JM15], the joint distribution, for finitely many given values of $k \geq 1$, of the sums of the $k$-th powers of the zeros of $X_n$, tends to the joint distribution of the corresponding independent complex Gaussian variables $\sqrt{2k/\beta}\, \mathcal{N}^{\mathbb{C}}_k$. We provide an independent proof of this fact in Appendix A. Although not quantitative, this independent proof has the advantage of showing why $C\beta E$ is inherently a regularization of a Gaussian space at the level of symmetric functions. Using the standard expansion of the logarithm, the Taylor coefficients of $\log X_n$ tend, in the sense of finite-dimensional marginals, to independent Gaussians $\sqrt{2/(\beta k)}\, \mathcal{N}^{\mathbb{C}}_k$. Hence, the Taylor coefficients of $\log \Phi_\infty^*$ have the same joint distribution as these variables, which shows that $\log \Phi_\infty^*$ has the same law as $\gamma G^{\mathbb{C}}$.

It remains to check the bounds on moments. For that, we observe that for $z \in \mathbb{D}$, $p \in \mathbb{R}$, $(|\Phi_n^*(z)|^p)_{n \geq 0}$ is a submartingale, since it is the image of the martingale $(\Re \log \Phi_n^*(z))_{n \geq 0}$ by the convex function $x \mapsto e^{px}$. From the bound on the exponential moments of $\Re \log \Phi_n^*(z)$, we deduce that $(|\Phi_n^*(z)|^p)_{n \geq 0}$ is bounded in $L^2$, and then, by Doob's submartingale inequality, $\sup_{n \geq 0} |\Phi_n^*(z)|^p$ is in $L^2$, and a fortiori in $L^1$. By dominated convergence, we deduce, since $\Phi_n^*(z)$ converges a.s. to $\Phi_\infty^*(z)$, that
\[ \mathbb{E}[|\Phi_\infty^*(z)|^p] = \lim_{n \to \infty} \mathbb{E}[|\Phi_n^*(z)|^p]. \]
(This proof was in fact known to some specialists, like Philippe Biane.)

Since $(|\Phi_n^*(z)|^p)_{n \geq 0}$ is a submartingale, the last limit is increasing, and then for any $n$,
\[ \mathbb{E}[|\Phi_n^*(z)|^p] \leq \mathbb{E}[|\Phi_\infty^*(z)|^p] = \mathbb{E}\big[ e^{p\gamma\, \Re(G^{\mathbb{C}}(z))} \big] = \mathbb{E}\big[ e^{\frac{p\gamma}{2} G(z)} \big] = (1-|z|^2)^{-p^2\gamma^2/4} = (1-|z|^2)^{-p^2/(2\beta)}. \]
The proof of the proposition is complete, modulo the convergence result stated in Proposition 3.6. We chose to treat it separately because the technology developed in the next two subsections actually holds beyond the $C\beta E$, for large classes of orthogonal polynomials. □

The identity in law between $\log \Phi_\infty^*$ and $\gamma G^{\mathbb{C}}$, and then between $\Phi_\infty^*$ and $e^{\gamma G^{\mathbb{C}}}$, implies identities in law between functions of Gaussian variables and functions of the Verblunsky coefficients. Indeed, using the Szegő recursion (written in a matricial way), we can write the coefficients of the polynomial $\Phi_n^*$ as polynomials in the Verblunsky coefficients $\alpha_0, \dots, \alpha_{n-1}$ and their conjugates. Passing to the limit, we deduce an expression of the Taylor coefficients of $\Phi_\infty^*$ as limits of infinite series involving all the Verblunsky coefficients and their conjugates. On the other hand, the Taylor coefficients of $e^{\gamma G^{\mathbb{C}}}$ can be written as polynomials in the Gaussian variables $\mathcal{N}^{\mathbb{C}}_k$. Identifying the two expressions gives the following result ($SC_n$ representing the coefficient of $z^n$):

Corollary 3.3. For $N, n \geq 1$, let $\Pi_{n,N}$ be the set of sequences $(\pi_j)_{j \geq 0}$ with values in $\{1, 2\}$ such that $\pi_j = 1$ for $n$ values of $j$, all smaller than or equal to $N$, and $\pi_j = 2$ for all the other values of $j$, and let
\[ SC_{n,N} := \sum_{\pi \in \Pi_{n,N}}\ \prod_{j : \pi_j \pi_{j+1} = 12} [-\bar{\alpha}_j]\ \prod_{j : \pi_j \pi_{j+1} = 21} [-\alpha_j]. \]
Then, $SC_{n,N}$ a.s. tends to a limit $SC_n$ when $N$ goes to infinity, and this limit can be expressed in terms of the i.i.d. Gaussians $\mathcal{N}^{\mathbb{C}}_k$ as:
\[ SC_n = \sum_{(m_k)_{k \geq 1},\ \sum_{k \geq 1} k m_k = n}\ \prod_{k \geq 1} \frac{(\mathcal{N}^{\mathbb{C}}_k)^{m_k}}{m_k!} \Big( \frac{2}{\beta k} \Big)^{m_k/2}. \]
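The identification of $\log \Phi_\infty^*$ with $\gamma G^{\mathbb{C}}$ that underlies the corollary can be probed numerically from the definition of $G^{\mathbb{C}}$ at the beginning of this section. The sketch below (the truncation order $K$ and the sample size are arbitrary choices of ours) checks the covariance $-\log(1-w\bar z)$ against the truncated series, and the variance $-2\log(1-|z|^2)$ of $G$ by Monte Carlo:

```python
import numpy as np

K = 200  # hypothetical truncation order of the series defining G^C
w, z = 0.3 + 0.2j, -0.1 + 0.4j
k = np.arange(1, K + 1)

# covariance of the truncated field: E[G^C(w) conj(G^C(z))] = sum_{k<=K} (w zbar)^k / k
cov_trunc = np.sum((w * np.conj(z)) ** k / k)
cov_exact = -np.log(1 - w * np.conj(z))   # closed form from the text
cov_err = abs(cov_trunc - cov_exact)      # geometrically small truncation tail

# Monte Carlo check that Var G(z) = -2 log(1 - |z|^2), for G = 2 Re G^C
rng = np.random.default_rng(0)
N = (rng.standard_normal((20_000, K)) + 1j * rng.standard_normal((20_000, K))) / np.sqrt(2)
G = 2 * np.real(N @ (z ** k / np.sqrt(k)))
var_err = abs(G.var() - (-2 * np.log(1 - abs(z) ** 2)))
```

The complex Gaussians are normalized so that $\mathbb{E}[\mathcal N] = \mathbb{E}[\mathcal N^2] = 0$ and $\mathbb{E}|\mathcal N|^2 = 1$, matching the convention above.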
In particular, for $n = 1$, we deduce the following non-trivial identity in law:
\[ \sqrt{\frac{2}{\beta}}\, \mathcal{N}^{\mathbb{C}}_1 = \sum_{j=-1}^{\infty} \bar{\alpha}_j \alpha_{j+1}, \tag{3.3.1} \]
the series being a.s. (not absolutely) convergent. In the above sum, by convention $\alpha_{-1} = -1$. This convention is consistent with the book [Sim05a] and shall be used throughout the paper.

From the corollary, we can deduce an expression of the Gaussian variables $\mathcal{N}^{\mathbb{C}}_k$ themselves in terms of the $\alpha_j$. A more direct way to get this expression is to use the CMV matrices (from Cantero, Moral, and Velázquez [CMV03], see also [Sim05a, Section 4.2]). One can show that the characteristic polynomial corresponding to the $C\beta E$ of order $n$ has the same distribution as the characteristic polynomial of the matrix $\mathcal{C}_n := \mathcal{L}_n \mathcal{M}_n$, $\mathcal{L}_n$ and $\mathcal{M}_n$ being the $n \times n$ top-left minors of $\mathcal{L}_{n,\infty}$ and $\mathcal{M}_{n,\infty}$, with
\[ \mathcal{L}_{n,\infty} = \mathrm{Diag}(\Theta_{n,0}, \Theta_{n,2}, \dots), \qquad \mathcal{M}_{n,\infty} = \mathrm{Diag}(1, \Theta_{n,1}, \Theta_{n,3}, \dots), \]
\[ \Theta_{n,j} = \begin{pmatrix} \alpha_{n,j} & \rho_{n,j} \\ \rho_{n,j} & -\bar{\alpha}_{n,j} \end{pmatrix}, \qquad \rho_{n,j} = (1 - |\alpha_{n,j}|^2)^{1/2}. \]
Here, $\alpha_{n,j} = \alpha_j$ for $j \leq n-2$, $\alpha_{n,n-1}$ is independent of the $\alpha_j$'s and uniform on the unit circle, and $\alpha_{n,j} = 0$ for $j \geq n$.

The coefficient of $z^k$ in $-\log X_n(z)$ is given by $1/k$ times the sum of the $k$-th powers of the points of the $C\beta E$. Hence, the joint law of the coefficients of $-\log X_n(z)$ is the same as the joint law of $(\mathrm{tr}(\mathcal{C}_n^k)/k)_{k \geq 1}$, and we deduce, from the convergence in law of $\log X_n$ towards $\log \Phi_\infty^*$, that the finite-dimensional marginals of $\big({-\sqrt{\beta/(2k)}}\, \mathrm{tr}(\mathcal{C}_n^k)\big)_{k \geq 1}$ tend in law to i.i.d. complex Gaussian variables.

In the expansion of $\mathrm{tr}(\mathcal{C}_n^k)$, each term involves a product of $O(k)$ factors equal to $\alpha_{n,j}$, $\bar{\alpha}_{n,j}$ or $\rho_{n,j}$, and such that all the indices $j$ involved are in an interval of length $O(k)$. For fixed $k$, we then have a bounded number of terms involving $\alpha_{n,n-1}$ or its conjugate, and all these terms tend to zero in probability when $n \to \infty$, since a careful look at the matrix product shows that they necessarily also involve a factor $\alpha_{n,j}$ for $j = n + O(k)$, $j \neq n-1$.
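As a consistency check of (3.3.1), both sides should have the same second moment. The left-hand side has $\mathbb{E}\big|\sqrt{2/\beta}\,\mathcal N^{\mathbb C}_1\big|^2 = 2/\beta$; on the right-hand side, independence and rotational invariance kill the cross terms, leaving $\mathbb{E}|\alpha_0|^2 + \sum_{j\ge0}\mathbb{E}|\alpha_j|^2\,\mathbb{E}|\alpha_{j+1}|^2$ (the first term coming from $j = -1$). Under the Beta law read off from the proof of Lemma 3.2 — an assumption of ours — $\mathbb{E}|\alpha_j|^2 = 2/(2+\beta(j+1))$, and the series telescopes to exactly $2/\beta$; a short numerical sketch:

```python
import numpy as np

beta = 3.7  # arbitrary test value
m2 = lambda j: 2.0 / (2.0 + beta * (j + 1))  # E|alpha_j|^2, assumed Beta(1, beta(j+1)/2) law

# E |sum_{j>=-1} conj(alpha_j) alpha_{j+1}|^2, with the convention alpha_{-1} = -1:
# the j = -1 term contributes E|alpha_0|^2, the others E|alpha_j|^2 E|alpha_{j+1}|^2
N = 2_000_000  # truncation of the series (tail is O(1/N))
j = np.arange(N)
total = m2(0) + np.sum(m2(j) * m2(j + 1))
err = abs(total - 2.0 / beta)
```

The telescoping uses $\frac{1}{(2+\beta m)(2+\beta(m+1))} = \frac{1}{\beta}\big(\frac{1}{2+\beta m} - \frac{1}{2+\beta(m+1)}\big)$, so the identity holds for every $\beta > 0$.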
Hence, we can replace $\alpha_{n,n-1}$ by $0$ in the matrices $\mathcal{L}_n$ and $\mathcal{M}_n$ without changing the limiting distribution of $\mathrm{tr}(\mathcal{C}_n^k)$. It is not difficult to deduce the following result:

Corollary 3.4. Let $\mathcal{C} = \mathcal{L}\mathcal{M}$, for
\[ \mathcal{L} = \mathrm{Diag}(\Theta_0, \Theta_2, \dots), \qquad \mathcal{M} = \mathrm{Diag}(1, \Theta_1, \Theta_3, \dots), \]
where
\[ \Theta_j = \begin{pmatrix} \alpha_j & \rho_j \\ \rho_j & -\bar{\alpha}_j \end{pmatrix}, \qquad \rho_j = (1 - |\alpha_j|^2)^{1/2}, \]
$(\alpha_j)_{j \geq 0}$ being distributed as in Theorem 2.1. Then, $\big({-\sqrt{\beta/(2k)}}\, \mathrm{tr}(\mathcal{C}^k)\big)_{k \geq 1}$ has the same distribution as $(\mathcal{N}^{\mathbb{C}}_k)_{k \geq 1}$, where $\mathrm{tr}(\mathcal{C}^k)$ is obtained by taking the formal expansion of the trace, removing all the terms involving indices larger than $n$ and letting $n \to \infty$.

Such formulas have been investigated before, for example in [GZ07].

3.1. A universal bound on traces. Notice that $(\log \Phi_n^*(z))_{n \geq 0}$ is a complex martingale. As such, thanks to Jensen's inequality, for all $\sigma \in \mathbb{C}$,
\[ \big( e^{\Re(\sigma \log \Phi_n^*(z))} \big)_{n \geq 0} \]
is a real submartingale. We shall now prove that it is uniformly bounded in $L^1(\Omega, \mathcal{B}, \mathbb{P})$, using the description by CMV matrices. Using, in [Sim05a], the equation following (4.2.48), p. 271, we get, after taking care of the conventions of conjugation of the $\alpha_j$'s (which are not the same here and in [Sim05a]),
\[ \log \Phi_n^*(z) = -\sum_{k=1}^{\infty} \frac{z^k}{k}\, \mathrm{tr}\big( \mathcal{C}_{[n]}^k \big), \]
where $\mathcal{C}_{[n]}$ is the $n \times n$ top-left block of the infinite matrix $\mathcal{C}$ introduced in Corollary 3.4. A careful look at the matrix product shows that the traces of powers of $\mathcal{C}_{[n]}$ are equal to the traces of powers of $\mathcal{C}$ after replacing all Verblunsky coefficients of index larger than or equal to $n$ by zero.

In the following computations, we then omit the subscript $n$ and we always implicitly assume that $\alpha_j$ has been replaced by zero for $j \geq n$. We have the following universal bound on traces, which is clearly of independent interest, and which holds deterministically:

Lemma 3.5. For all $k \geq 1$:
\[ \mathrm{tr}(\mathcal{C}^k) = -k \Big( \sum_{j \geq -1} \bar{\alpha}_j\, \rho_{j+1} \rho_{j+2} \cdots \rho_{j+k-1}\, \alpha_{j+k} \Big) + O\Big( k^3 \sum_{j \geq -1} |\alpha_j|^3 \Big), \]
the implicit constant in $O$ being absolute.

Proof.
We prove the Lemma by a 3-step commutator argument. We notice that in the present setting, finitely many Verblunsky coefficients are non-zero and then there is no issue of convergence for the series which are involved.

Introduce the matrix $\mathcal{C}_\rho = \mathcal{L}_\rho \mathcal{M}_\rho$ where all the $\alpha$ terms have been replaced by zero, i.e.
\[ \mathcal{L}_\rho = \mathrm{Diag}(\Theta^\rho_0, \Theta^\rho_2, \dots), \qquad \mathcal{M}_\rho = \mathrm{Diag}(0, \Theta^\rho_1, \Theta^\rho_3, \dots), \]
where
\[ \Theta^\rho_j = \begin{pmatrix} 0 & \rho_j \\ \rho_j & 0 \end{pmatrix}. \]
Notice that, in terms of operator norms, $|\mathcal{L}_\rho|_{op}, |\mathcal{M}_\rho|_{op} \leq 1$, as is also the case for $\mathcal{L}$ and $\mathcal{M}$. Also, $\mathcal{C}_\rho$ is a shift operator in the sense that:
\[ \mathcal{C}_\rho e_k = u_k\, e_{k \pm 1}, \]
the sign depending on whether $k$ is odd or even, for some $u_k \in [0, 1]$. Because $\mathcal{C}_\rho$ is the matrix of a weighted shift operator, $\mathrm{tr}(\mathcal{C}_\rho^k) = 0$. Write
\[ \mathcal{C}^k = \mathcal{L}\mathcal{M} \cdots \mathcal{L}\mathcal{M} \ (k \text{ times}), \qquad \mathcal{C}_\rho^k = \mathcal{L}_\rho \mathcal{M}_\rho \cdots \mathcal{L}_\rho \mathcal{M}_\rho \ (k \text{ times}), \]
and use the commutator-type telescoping:
\[ \mathrm{tr}(\mathcal{C}^k) = \mathrm{tr}(\mathcal{C}^k - \mathcal{C}_\rho^k) = \sum_{i=0}^{k-1} \mathrm{tr}\big( \mathcal{C}^i (\mathcal{C} - \mathcal{C}_\rho)\, \mathcal{C}_\rho^{k-i-1} \big) = \sum_{i=0}^{k-1} \mathrm{tr}\big( \mathcal{C}^i (\mathcal{L} - \mathcal{L}_\rho) \mathcal{M}_\rho\, \mathcal{C}_\rho^{k-i-1} \big) + \mathrm{tr}\big( \mathcal{C}^i \mathcal{L} (\mathcal{M} - \mathcal{M}_\rho)\, \mathcal{C}_\rho^{k-i-1} \big) \]
\[ = \sum_{i_1 + i_2 = k-1} \mathrm{tr}\big( \mathcal{C}^{i_1} (\mathcal{L} - \mathcal{L}_\rho) \mathcal{M}_\rho\, \mathcal{C}_\rho^{i_2} \big) + \mathrm{tr}\big( \mathcal{C}^{i_1} \mathcal{L} (\mathcal{M} - \mathcal{M}_\rho)\, \mathcal{C}_\rho^{i_2} \big). \]
The nice fact about this operation is that it forces out the appearance of the diagonal matrices $\mathcal{L} - \mathcal{L}_\rho$ and $\mathcal{M} - \mathcal{M}_\rho$. We will freely invoke (see [RS80, Chapter 6]) the circular property of the trace $\mathrm{tr}(AB) = \mathrm{tr}(BA)$ when $A$ is trace-class and $B$ bounded, and the inequality $|\mathrm{tr}(AB)| \leq |AB|_1 \leq |A|_1 |B|_{op}$, where $|A|_1 = \mathrm{tr}\big[ (A^*A)^{1/2} \big]$ is the trace-class norm and $|B|_{op}$ is the operator norm ([RS80, Chapter 6, Exercise 28]). For example, those facts applied to the previous computation give a first bound on the traces:
\[ |\mathrm{tr}(\mathcal{C}^k)| \leq 2k \sum_{j \geq -1} |\alpha_j|. \]
Before pursuing, let us adopt a convention that will ease notations. Write
\[ A^{(i)} = \begin{cases} \mathcal{L} & \text{if } i \text{ is odd} \\ \mathcal{M} & \text{if } i \text{ is even} \end{cases}\ ; \qquad A^{(i)}_\rho = \begin{cases} \mathcal{L}_\rho & \text{if } i \text{ is odd} \\ \mathcal{M}_\rho & \text{if } i \text{ is even.} \end{cases} \]
In the same fashion:
\[ B^{(i)} = \begin{cases} \mathcal{L} - \mathcal{L}_\rho & \text{if } i \text{ is odd} \\ \mathcal{M} - \mathcal{M}_\rho & \text{if } i \text{ is even.} \end{cases} \]
Then define:
\[ A^{(i_1, i_2)} := A^{(i_1)} \cdots A^{(i_2)}, \qquad A^{(i_1, i_2)}_\rho := A^{(i_1)}_\rho \cdots A^{(i_2)}_\rho, \]
with the convention that $A^{(i_1, i_2)} = A^{(i_1, i_2)}_\rho = \mathrm{id}$ if $i_1 > i_2$. As such, the above commutator argument becomes:
\[ \mathrm{tr}(\mathcal{C}^k) = \mathrm{tr}(\mathcal{C}^k - \mathcal{C}_\rho^k) = \mathrm{tr}\big( A^{(1,2k)} - A^{(1,2k)}_\rho \big) = \sum_{j=0}^{2k-1} \mathrm{tr}\big( A^{(1,j)} A^{(j+1)} A^{(j+2,2k)}_\rho - A^{(1,j)} A^{(j+1)}_\rho A^{(j+2,2k)}_\rho \big) = \sum_{j=0}^{2k-1} \mathrm{tr}\big( A^{(1,j)} B^{(j+1)} A^{(j+2,2k)}_\rho \big). \]
We start by the same argument as before, that is to say that $A^{(j+2,2k)}_\rho A^{(1,j)}_\rho = \cdots \mathcal{L}_\rho \mathcal{M}_\rho \mathcal{L}_\rho \mathcal{M}_\rho \cdots$ is a weighted shift operator. Because $B^{(j+1)}$ is diagonal, $B^{(j+1)} A^{(j+2,2k)}_\rho A^{(1,j)}_\rho$ is a shift operator as well. As a consequence, by the circular property of the trace:
\[ \mathrm{tr}\big( A^{(1,j)}_\rho B^{(j+1)} A^{(j+2,2k)}_\rho \big) = 0. \]
Therefore, we can repeat the operation and obtain a two-step commutator:
\[ \mathrm{tr}(\mathcal{C}^k) = \mathrm{tr}(\mathcal{C}^k - \mathcal{C}_\rho^k) = \sum_{j=0}^{2k-1} \big[ \mathrm{tr}\big( A^{(1,j)} B^{(j+1)} A^{(j+2,2k)}_\rho \big) - \mathrm{tr}\big( A^{(1,j)}_\rho B^{(j+1)} A^{(j+2,2k)}_\rho \big) \big] = \sum_{0 \leq j_1 < j_2 \leq 2k-1} \mathrm{tr}\big( A^{(1,j_1)} B^{(j_1+1)} A^{(j_1+2,j_2)}_\rho B^{(j_2+1)} A^{(j_2+2,2k)}_\rho \big). \]
Consider such a trace, which is more conveniently written as a sum over $\mathbb{Z}$:
\[ \mathrm{tr}\big( A^{(1,j_1)} B^{(j_1+1)} A^{(j_1+2,j_2)}_\rho B^{(j_2+1)} A^{(j_2+2,2k)}_\rho \big) = \sum_{i \in \mathbb{Z}} \big[ A^{(1,j_1)} B^{(j_1+1)} A^{(j_1+2,j_2)}_\rho B^{(j_2+1)} A^{(j_2+2,2k)}_\rho \big]_{i,i} \]
\[ = \sum_{i \in \mathbb{Z}} \big[ B^{(j_1+1)} \big]_{i,i}\, \big[ A^{(j_1+2,j_2)}_\rho \big]_{i, i+*}\, \big[ B^{(j_2+1)} \big]_{i+*, i+*}\, \big[ A^{(j_2+2,2k)}_\rho A^{(1,j_1)} \big]_{i+*, i}, \]
where $*$ is a shift, whose value is of no importance, and where terms are considered to be equal to zero if they involve nonpositive indices.
Taking absolute values and using the fact that for any bounded operator $A$, $|[A]_{i,j}| \leq |A|_{op}$, we have:
\[ \big| \mathrm{tr}\big( A^{(1,j_1)} B^{(j_1+1)} A^{(j_1+2,j_2)}_\rho B^{(j_2+1)} A^{(j_2+2,2k)}_\rho \big) \big| \leq \sum_{i \in \mathbb{Z}} \big| [B^{(j_1+1)}]_{i,i} \big|\, \big| [B^{(j_2+1)}]_{i+*, i+*} \big|\, \big| A^{(j_1+2,j_2)}_\rho \big|_{op}\, \big| A^{(j_2+2,2k)}_\rho A^{(1,j_1)} \big|_{op} \]
\[ \leq \sum_{i \in \mathbb{Z}} \big| [B^{(j_1+1)}]_{i,i} \big|\, \big| [B^{(j_2+1)}]_{i+*, i+*} \big| \leq \frac{1}{2} \sum_{i \in \mathbb{Z}} \Big( \big| [B^{(j_1+1)}]_{i,i} \big|^2 + \big| [B^{(j_2+1)}]_{i+*, i+*} \big|^2 \Big) \ll \sum_{i \geq -1} |\alpha_i|^2. \]
Notice that we did not even use trace-class inequalities, only that $B$ is diagonal and $A$, $A_\rho$ are bounded. In the end:
\[ |\mathrm{tr}(\mathcal{C}^k)| \ll k^2 \sum_{j \geq -1} |\alpha_j|^2, \]
the implicit constant being absolute.

For the third step, the computation is different. The crucial point is that, for most indices $j_1, j_2$:
\[ \mathrm{tr}\big( A^{(1,j_1)}_\rho B^{(j_1+1)} A^{(j_1+2,j_2)}_\rho B^{(j_2+1)} A^{(j_2+2,2k)}_\rho \big) = 0, \]
but not all. We need a closer inspection.

If $\ell$ is even, $\mathcal{M}_\rho e_\ell \in \mathbb{R} e_{\ell+1}$; while if $\ell$ is odd, then $\mathcal{L}_\rho e_\ell \in \mathbb{R} e_{\ell+1}$. Reversing parity yields $e_{\ell-1}$ instead of $e_{\ell+1}$. Tracking these parities along the product shows that, for $\ell$ even,
\[ A^{(1,j_1)}_\rho B^{(j_1+1)} A^{(j_1+2,j_2)}_\rho B^{(j_2+1)} A^{(j_2+2,2k)}_\rho (\mathbb{R} e_\ell) \subset \mathbb{R} e_{\ell + 2k - 2j_2 + 2j_1}. \]
Upon considering $\ell$ odd as well, one has for all $\ell$:
\[ A^{(1,j_1)}_\rho B^{(j_1+1)} A^{(j_1+2,j_2)}_\rho B^{(j_2+1)} A^{(j_2+2,2k)}_\rho (\mathbb{R} e_\ell) \subset \mathbb{R} e_{\ell \pm (2k - 2(j_2 - j_1))}. \]
Therefore, in the combinatorial computation of the trace, one has a "closed loop" only for the indices such that $j_2 - j_1 = k$.
Consequently,
\[ \mathrm{tr}(\mathcal{C}^k - \mathcal{C}_\rho^k) = \sum_{\substack{0 \leq j_1 < j_2 \leq 2k-1 \\ j_2 - j_1 = k}} \mathrm{tr}\big( A^{(1,j_1)}_\rho B^{(j_1+1)} A^{(j_1+2,j_2)}_\rho B^{(j_2+1)} A^{(j_2+2,2k)}_\rho \big) + E_k, \]
where the error term $E_k$ collects the traces involving three diagonal factors $B$, and is bounded by
\[ |E_k| \leq (2k)^3 \max_{0 \leq j_1 < j_2 < j_3 \leq 2k-1} \sum_{i \in \mathbb{Z}} \big| [B^{(j_1+1)}]_{i,i}\, [B^{(j_2+1)}]_{i+*, i+*}\, [B^{(j_3+1)}]_{i+*', i+*'} \big| \ll (2k)^3 \sum_{j \geq -1} |\alpha_j|^3. \]
The first sum is explicitly obtained by following the "loops" in the combinatorial computation of the trace, without forgetting the corner entry $[\mathcal{M} - \mathcal{M}_\rho]_{0,0} = 1 = -\bar{\alpha}_{-1}$, which accounts for the term $j = -1$. We get
\[ -k \sum_{j \geq -1} \bar{\alpha}_j\, \rho_{j+1} \rho_{j+2} \cdots \rho_{j+k-1}\, \alpha_{j+k}, \]
which gives the estimate we require:
\[ \mathrm{tr}(\mathcal{C}^k) = -k \Big( \sum_{j \geq -1} \bar{\alpha}_j\, \rho_{j+1} \rho_{j+2} \cdots \rho_{j+k-1}\, \alpha_{j+k} \Big) + O\Big( k^3 \sum_{j \geq -1} |\alpha_j|^3 \Big). \quad \square \]

3.2. A convergence result for OPUC with rotationally invariant $(\alpha_j)_{j \geq 0}$. We can now prove the following general convergence result for $\log \Phi_\infty^*$ inside compact sets of $\mathbb{D}$. The setting is that of independent and rotationally invariant Verblunsky coefficients, so that $(\log \Phi_n^*)_{n \geq 0}$ is an $\mathcal{F}$-martingale. Also, the condition on the decay of the moduli $|\alpha_j|$ easily includes the square-root decay arising in Random Matrix Theory.

Proposition 3.6. Assume that the Verblunsky coefficients are independent and rotationally invariant and that
\[ \forall \sigma > 0,\ \forall k \geq 3, \qquad \mathbb{E}\Big[ \exp\Big( \sigma \sum_{j \geq -1} |\alpha_j|^k \Big) \Big] < \infty. \tag{3.6.1} \]
Then, for any compact set $K \subset \mathbb{D}$, $(\log \Phi_n^*)_{n \in \mathbb{N}}$ almost surely converges uniformly on $K$ and
\[ \forall \sigma \in \mathbb{C}, \qquad \sup_{z \in K} \sup_{n \in \mathbb{N}} \mathbb{E}\big[ e^{\Re(\sigma \log \Phi_n^*(z))} \big] < \infty. \tag{3.6.2} \]

Proof. We start by proving the finiteness of exponential moments given in (3.6.2). Write:
\[ F_{n,k} := \sum_{-1 \leq j \leq n-k-1} \bar{\alpha}_j\, \rho_{j+1} \cdots \rho_{j+k-1}\, \alpha_{j+k}, \]
where we recall that $\alpha_{-1} := -1$. By the trace expansion of Section 3.1 and Lemma 3.5,
\[ \log \Phi_n^*(z) = \sum_{k=1}^{\infty} z^k F_{n,k} + O\Big( \sum_{j \geq -1} |\alpha_j|^3 \Big), \]
the $O$ being uniform on compact sets.
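Several of the exact identities of this subsection lend themselves to a direct numerical check: the vanishing of $\mathrm{tr}(\mathcal{C}_\rho^k)$, the $k = 1$ case of the trace formula (which holds with no error term), and the expansion $\log \Phi_n^*(z) = -\sum_k \frac{z^k}{k}\mathrm{tr}(\mathcal{C}_{[n]}^k)$ together with its exponentiated form $\det(I - z\,\mathcal{C}_{[n]}) = \Phi_n^*(z)$. The sketch below uses the conjugation convention $\Theta_j = \begin{pmatrix} \alpha_j & \rho_j \\ \rho_j & -\bar\alpha_j \end{pmatrix}$ — one of the two conventions in circulation, and the one consistent with our reading of the trace identity — and arbitrary test coefficients:

```python
import numpy as np

def cmv(alpha):
    """C = LM with L = Diag(Theta_0, Theta_2, ...), M = Diag(1, Theta_1, Theta_3, ...),
    truncated to len(alpha) x len(alpha); pad alpha with zeros to keep the corner clean.
    The corner M[0,0] = 1 corresponds to the convention conj(alpha_{-1}) = -1."""
    S = len(alpha)
    rho = np.sqrt(1 - np.abs(alpha) ** 2)
    L = np.zeros((S, S), complex)
    M = np.zeros((S, S), complex)
    M[0, 0] = 1.0
    for j in range(S - 1):
        blk = np.array([[alpha[j], rho[j]], [rho[j], -np.conj(alpha[j])]])
        (L if j % 2 == 0 else M)[j:j + 2, j:j + 2] = blk
    return L @ M

rng = np.random.default_rng(2)
n = 7
alpha = np.zeros(2 * n + 8, complex)
alpha[:n] = 0.6 * (rng.random(n) - 0.5 + 1j * (rng.random(n) - 0.5))

C = cmv(alpha)                    # alpha_j = 0 for j >= n, as in the text
Crho = cmv(np.zeros_like(alpha))  # all alpha_j set to 0: a weighted shift
# (the corner value of M does not affect the vanishing of the traces)
shift_err = max(abs(np.trace(np.linalg.matrix_power(Crho, k))) for k in range(1, 7))

# k = 1 trace formula, exact: tr(C) = -sum_{j >= -1} conj(alpha_j) alpha_{j+1}
tr1_err = abs(np.trace(C) - (alpha[0] - np.sum(np.conj(alpha[:-1]) * alpha[1:])))

# Phi_n^* via the Szego recursion (ascending coefficient arrays)
phi, phis = np.array([1.0 + 0j]), np.array([1.0 + 0j])
for k in range(n):
    zphi = np.concatenate(([0], phi))
    phi, phis = (zphi - np.conj(alpha[k]) * np.append(phis, 0),
                 np.append(phis, 0) - alpha[k] * zphi)

z0 = 0.35 - 0.2j
Cn = C[:n, :n]  # C_[n]: top-left n x n block
det_err = abs(np.linalg.det(np.eye(n) - z0 * Cn) - np.polyval(phis[::-1], z0))
series = -sum(z0 ** k * np.trace(np.linalg.matrix_power(Cn, k)) / k for k in range(1, 81))
log_err = abs(series - np.log(np.polyval(phis[::-1], z0)))
```

With the conjugated convention of [Sim05a] ($\Theta_j = \begin{pmatrix} \bar\alpha_j & \rho_j \\ \rho_j & -\alpha_j \end{pmatrix}$), the same checks would hold for the conjugated quantities.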
Because of the hypothesis (3.6.1) and the Cauchy-Schwarz inequality, it is sufficient to show that:
\[ \forall \sigma > 0, \qquad \sup_{z \in K} \sup_{n \in \mathbb{N}} \mathbb{E}\Big( e^{\sigma \sum_{k=1}^{\infty} |z|^k |F_{n,k}|} \Big) < \infty, \]
for (3.6.2) to be true.

First, we work conditionally on the $\sigma$-algebra $\mathcal{M}$ generated by the moduli $|\alpha_j|$. Seeing the random variable $F_{n,k}$ as a function of the bounded phases $\Theta_j$ of the $\alpha_j$'s, it is easy to check that:
\[ |F_{n,k}(\dots, \Theta_i, \dots) - F_{n,k}(\dots, \Theta'_i, \dots)| \leq |\Theta_i - \Theta'_i| \big( |\alpha_i|\, \rho_{i+1} \cdots \rho_{i+k-1}\, |\alpha_{i+k}| + |\alpha_{i-k}|\, \rho_{i-k+1} \cdots \rho_{i-1}\, |\alpha_i| \big) \leq |\Theta_i - \Theta'_i|\, \Sigma_i, \]
where $\Sigma_i := |\alpha_i| (|\alpha_{i+k}| + |\alpha_{i-k}|)$, with the convention $\alpha_{-1} = -1$ and $\alpha_k = 0$ for $k \leq -2$. As such:
\[ \sum_i \Sigma_i^2 \ll \sum_{j \geq -1} |\alpha_j|^4 =: \Sigma^2. \]
Invoking McDiarmid's inequality, there are constants $C, c > 0$ such that:
\[ \forall x > 0, \qquad \mathbb{P}(|F_{n,k}| \geq \Sigma x \mid \mathcal{M}) \leq C e^{-c x^2}. \]
Classically, these subgaussian tails translate to bounds on exponential moments:
\[ \mathbb{E}\big( e^{\sigma |F_{n,k}|} \mid \mathcal{M} \big) \leq 1 + \int_0^{\infty} dx\, e^x\, \mathbb{P}(\sigma |F_{n,k}| \geq x \mid \mathcal{M}) \ll e^{c \sigma^2 \Sigma^2}, \]
with a possibly different constant $c$. We finish using Hölder's inequality with weights $(p_k)_{k \geq 1}$ satisfying $\sum_k \frac{1}{p_k} = 1$:
\[ \mathbb{E}\Big( e^{\sigma \sum_{k=1}^{\infty} |z|^k |F_{n,k}|} \Big) \leq \mathbb{E}\Big( \prod_k \mathbb{E}\big( e^{\sigma p_k |z|^k |F_{n,k}|} \mid \mathcal{M} \big)^{\frac{1}{p_k}} \Big) \ll \mathbb{E}\Big( e^{c \sigma^2 \Sigma^2 \sum_k p_k |z|^{2k}} \Big) < \infty, \]
the finiteness coming from the assumption (3.6.1), provided the $p_k$ are chosen so that $\sum_k p_k |z|^{2k}$ stays bounded. All constants involved here are uniform in $z$ on any compact subset of $\mathbb{D}$.

Now, in order to prove uniform convergence of $\log \Phi_n^*$, consider the Hilbert space $B = L^2(\rho \partial \mathbb{D}, d\theta)$ of square-integrable functions on the circle of radius $\rho < 1$. It is easy to see that $(\log \Phi_n^*(\rho\,\cdot))_{n \geq 0}$ is a $B$-valued martingale. We shall study its convergence in $B$.
One could invoke the general theory of martingales in Banach spaces (see e.g. [Pis16]), but for the reader's convenience, let us explain why a Hilbert space such as $B = L^2(\rho \partial \mathbb{D}, d\theta)$ does not require such machinery.

Thanks to the bounds on moments, we have
\[ \sup_{n \geq 0} \mathbb{E}\Big( \int_{\rho \partial \mathbb{D}} |\log \Phi_n^*|^2 \Big) < \infty, \]
hence the square of the $L^2(\rho \partial \mathbb{D}, d\theta)$ norm of $\log \Phi_n^*(\cdot)$ is a scalar submartingale (Remark [Pis16, 1.12]), which is also $L^1(\Omega, \mathcal{B}, \mathbb{P})$-bounded, hence convergent. Because we are dealing with holomorphic functions, the Banach norm in $B$ dominates the $C^0$-norm (and the $C^1$, $C^2$, ... norms) in smaller discs (because of Cauchy's formula), and for all $\rho' < \rho$, the square of the $C^0$-norm of $\log \Phi_n^*(\cdot)$ on the disc $\rho' \overline{\mathbb{D}}$ is also a convergent submartingale (a.s. and in $L^1(\Omega, \mathcal{B}, \mathbb{P})$), because the supremum of submartingales is a submartingale. By the Arzelà-Ascoli theorem, $(\log \Phi_n^*)_{n \in \mathbb{N}}$ is relatively compact (a.s.) as a family of continuous functions on $\rho' \overline{\mathbb{D}}$. To prove the a.s. uniform convergence of this sequence on $\rho' \overline{\mathbb{D}}$, it is then enough to check that the limit of any subsequence is uniquely determined. This last statement is due to the a.s. convergence of each Fourier coefficient of $z \mapsto \log \Phi_n^*(\rho z)$, which is a martingale, bounded in $L^2(\Omega, \mathcal{B}, \mathbb{P})$. We deduce that $(\log \Phi_n^*)_{n \in \mathbb{N}}$ a.s. converges uniformly on compact sets of $\mathbb{D}$. □

4. Beginning of the proof of the Main Theorem 2.1

As explained while announcing the structure of the paper, we made the choice of giving a full proof of the Main Theorem 2.1, at the cost of admitting some intermediate results. The missing ingredients are condensed in Lemma 4.1 and Lemma 4.2, and both lemmas are formulated in this section, when needed.

We first introduce the following quantity:
\[ M_\infty := \prod_{j=0}^{\infty} (1 - |\alpha_j|^2)^{-1}\, e^{-\frac{2}{\beta(j+1)}}. \tag{4.0.1} \]
This product is a.s. convergent.
Indeed, let us consider the logarithm and truncate the series at rank $n$:
\[ \log M_n := \sum_{j=0}^{n-1} \Big( -\log(1 - |\alpha_j|^2) - \frac{2}{\beta(j+1)} \Big). \]
Because of (1.1.4), $(\log M_n)_{n \geq 0}$ is an $\mathcal{F}$-martingale. Moreover, this martingale is bounded in $L^2$, and then it is a.s. convergent.

Now, we fix a nonnegative smooth function $f : \partial \mathbb{D} \to \mathbb{R}_+$ on the unit circle. We will be interested in the integral of $f$ with respect to several random measures on the unit circle, and in its conditional expectation given $\mathcal{F}_n$ for $n \geq 0$.

The first measure of interest is $\mu_\beta$. A natural approximation is the Bernstein-Szegő approximation of $\mu_\beta$ (see [Sim05a, Theorem 1.7.8 p.95]), which we denote by $\mu^\beta_n$. By definition:
\[ \mu^\beta_n(d\theta) = \frac{d\theta}{2\pi}\, \frac{\prod_{j=0}^{n-1} (1 - |\alpha_j|^2)}{|\Phi_n^*(e^{i\theta})|^2}. \tag{4.0.2} \]
The limit $\mu^\beta_n(f) \underset{n \to \infty}{\longrightarrow} \mu_\beta(f)$ holds surely by virtue of the previously referenced [Sim05a, Theorem 1.7.8 p.95]: this is a deterministic statement regarding the Bernstein-Szegő approximation of a probability measure on the circle. Let us also prove that we have convergence in all $L^p(\Omega, \mathcal{B}, \mathbb{P})$. We notice that for all smooth $f$, $(\mu^\beta_n(f))_{n \geq 0}$ is an $(\mathcal{F}, \mathbb{P})$-martingale, by integrating against $f$ the point-wise martingale property:
\[ \mathbb{E}\Big( \frac{\prod_{j=0}^{n} (1 - |\alpha_j|^2)}{|\Phi_{n+1}^*(e^{i\theta})|^2}\ \Big|\ \mathcal{F}_n \Big) = \frac{\prod_{j=0}^{n-1} (1 - |\alpha_j|^2)}{|\Phi_n^*(e^{i\theta})|^2}. \]
The above computation crucially uses that $|Q_j(e^{i\theta})| = 1$. Now, since $\mu^\beta_n$ is constructed to be a probability measure, $\mu^\beta_n(f) \leq |f|_\infty$, and $(\mu^\beta_n(f))_{n \geq 0}$ ends up being a bounded martingale! Then, the convergence (almost sure and in all $L^p(\Omega, \mathcal{B}, \mathbb{P})$) holds by Doob's martingale convergence theorem. In any case:
\[ \mathbb{E}\big( \mu_\beta(f) \mid \mathcal{F}_n \big) = \mu^\beta_n(f). \]
The second quantity of interest is related to the integral of $f$ with respect to the Gaussian multiplicative chaos constructed from the Gaussian field $\log \Phi_\infty^*$.
We set:
\[ X_{r,n}(f) := \mathbb{E}\Big[ \frac{1}{M_\infty}\, GMC^\gamma_r(f)\ \Big|\ \mathcal{F}_n \Big] = \int_0^{2\pi} \frac{d\theta}{2\pi}\, f(e^{i\theta})\, \mathbb{E}\Big[ \frac{1}{M_\infty}\, \frac{(1-r^2)^{2/\beta}}{|\Phi_\infty^*(re^{i\theta})|^2}\ \Big|\ \mathcal{F}_n \Big], \tag{4.0.3} \]
where
\[ GMC^\gamma_r(d\theta) = (1-r^2)^{2/\beta}\, |\Phi_\infty^*(re^{i\theta})|^{-2}\, \frac{d\theta}{2\pi}. \]
Now, all the positive moments of $M_\infty^{-1}$ are finite and the Gaussian multiplicative chaos can be obtained as the limit
\[ GMC^\gamma(f) = \lim_{r \to 1^-} GMC^\gamma_r(f), \]
in $L^1(\Omega, \mathcal{B}, \mathbb{P})$. In fact, we have:

Lemma 4.1.
\[ \mathbb{E}\Big[ \frac{1}{M_\infty}\, \big| GMC^\gamma_r(f) - GMC^\gamma(f) \big| \Big] \underset{r \to 1^-}{\longrightarrow} 0. \]

This lemma is the first ingredient which is admitted for now, and proven in Section 7. Therefore, using the fact that the conditional expectation is a contraction on $L^1(\Omega, \mathcal{B}, \mathbb{P})$, we get the $L^1(\Omega, \mathcal{B}, \mathbb{P})$ convergence:
\[ \lim_{r \to 1^-} X_{r,n}(f) = \mathbb{E}\Big[ \frac{1}{M_\infty}\, GMC^\gamma(f)\ \Big|\ \mathcal{F}_n \Big]. \tag{4.1.1} \]
The entire point of the proof consists in relating the two measures defined by (4.0.2) and (4.0.3). From the product (4.0.1) and the facts that
\[ (1-r^2)^{2/\beta} = e^{-\frac{2}{\beta} \sum_{j=0}^{\infty} \frac{r^{2j+2}}{j+1}}, \qquad \Phi_\infty^*(z) = \prod_{j=0}^{\infty} (1 - \alpha_j Q_j(z)) \quad \text{for all } z \in \mathbb{D}, \]
we obtain:
\[ X_{r,n}(f) = \int_0^{2\pi} \frac{d\theta}{2\pi}\, f(e^{i\theta})\, \mathbb{E}\Big[ \prod_{j=0}^{\infty} \frac{1 - |\alpha_j|^2}{|1 - \alpha_j Q_j(re^{i\theta})|^2}\, e^{\frac{2}{\beta(j+1)}(1 - r^{2j+2})}\ \Big|\ \mathcal{F}_n \Big] \]
\[ = \int_0^{2\pi} \frac{d\theta}{2\pi}\, f(e^{i\theta})\, \frac{\prod_{j=0}^{n-1} (1 - |\alpha_j|^2)\, e^{\frac{2}{\beta(j+1)}(1 - r^{2j+2})}}{|\Phi_n^*(re^{i\theta})|^2}\, \mathbb{E}\Big[ \prod_{j=n}^{\infty} \frac{1 - |\alpha_j|^2}{|1 - \alpha_j Q_j(re^{i\theta})|^2}\, e^{\frac{2}{\beta(j+1)}(1 - r^{2j+2})}\ \Big|\ \mathcal{F}_n \Big] \]
\[ = \int_0^{2\pi} \frac{d\theta}{2\pi}\, f(e^{i\theta})\, R^{(0,n-1)}(\theta)\, \mathbb{E}\big[ R^{(n,\infty)}(\theta) \mid \mathcal{F}_n \big], \]
where $R^{(0,n-1)}(\theta)$ and $R^{(n,\infty)}(\theta)$ represent respectively the products from $0$ to $n-1$ and from $n$ to $\infty$.

Let $L$ be a compact set of $\mathbb{D}^n$ and $A_L$ the $\mathcal{F}_n$-measurable event corresponding to the fact that $(\alpha_0, \dots, \alpha_{n-1}) \in L$.
Under the event $A_L$, because the Verblunsky coefficients stay away from the unit circle,
\[ \sup_\theta \Big| R^{(0,n-1)}(\theta) - 2\pi\, \frac{d\mu^\beta_n}{d\theta}(\theta) \Big| \leq \sup_\theta \Big| \Big( \prod_{j=0}^{n-1} e^{\frac{2}{\beta(j+1)}(1 - r^{2j+2})} \Big) |\Phi_n^*(re^{i\theta})|^{-2} - |\Phi_n^*(e^{i\theta})|^{-2} \Big| \]
is bounded by a quantity $c_r$ depending only on $\beta$, $r$ and $L$, and tending to zero when $r$ goes to $1^-$ for $\beta$, $L$ fixed. Indeed,
\[ \prod_{j=0}^{n-1} (1 - \alpha_j Q_j(z)) = \Phi_n^*(z) \]
is (by the Szegő recursion) a polynomial in $z$, the $\alpha_j$'s and their conjugates, and then it is uniformly Lipschitz in $z \in \overline{\mathbb{D}}$ for $n$ fixed. On the event $A_L$, $|\Phi_n^*(z)|^{-2}$ is also uniformly Lipschitz in $z \in \overline{\mathbb{D}}$ for $n$ and $L$ fixed, since $\Phi_n^*(z)$ does not vanish, and then stays uniformly away from zero, by compactness of the sets $L$ and $\overline{\mathbb{D}}$.

As such, for any constant $K_\beta$ — to be determined later:
\[ \mathbb{E}\big[ \mathbb{1}_{A_L} \big| X_{r,n}(f) - K_\beta \mu^\beta_n(f) \big| \big] \leq c_r K_\beta |f|_\infty + \mathbb{E}\Big[ \mathbb{1}_{A_L} \Big| X_{r,n}(f) - K_\beta \int_0^{2\pi} \frac{d\theta}{2\pi}\, f(e^{i\theta})\, R^{(0,n-1)}(\theta) \Big| \Big] \]
\[ = c_r K_\beta |f|_\infty + \mathbb{E}\Big[ \mathbb{1}_{A_L} \Big| \int_0^{2\pi} \frac{d\theta}{2\pi}\, f(e^{i\theta})\, R^{(0,n-1)}(\theta) \big( \mathbb{E}\big[ R^{(n,\infty)}(\theta) \mid \mathcal{F}_n \big] - K_\beta \big) \Big| \Big] \]
\[ \leq c_r K_\beta |f|_\infty + |f|_\infty \int_0^{2\pi} \frac{d\theta}{2\pi}\, \mathbb{E}\big[ R^{(0,n-1)}(\theta)\, \big| \mathbb{E}\big[ R^{(n,\infty)}(\theta) \mid \mathcal{F}_n \big] - K_\beta \big| \big]. \]
From now on, we shall use, for $r \in (0,1)$ and $\theta \in \mathbb{R}$, a new probability measure $\mathbb{Q}_{r,\theta}$, equivalent to $\mathbb{P}$ and defined by:
\[ \frac{d\mathbb{Q}_{r,\theta}}{d\mathbb{P}} = \frac{\prod_{j=0}^{\infty} \big(1 - |\alpha_j Q_j(re^{i\theta})|^2\big)}{|\Phi_\infty^*(re^{i\theta})|^2}. \tag{4.1.2} \]
If the reference angle $\theta$ is not indicated, it means that we consider $\theta = 0$ (note that all angles play symmetric roles by rotational invariance), and we denote the measure by $\mathbb{Q}_r$. Moreover, $\mathbb{Q}_r$ will be simply denoted by $\mathbb{Q}$ if there is no possible ambiguity. In order to check that the probability measure $\mathbb{Q}_{r,\theta}$ is well-defined, we first write, for all $n \geq 0$:
\[ \frac{\prod_{j=0}^{n-1} \big(1 - |\alpha_j Q_j(r)|^2\big)}{|\Phi_n^*(r)|^2} = \prod_{j=0}^{n-1} \frac{1 - |\alpha_j Q_j(r)|^2}{|1 - \alpha_j Q_j(r)|^2}. \]
From the rotational invariance of $\alpha_j$ and its independence from $\mathcal{F}_j$, we have
\[ \mathbb{E}\Big[ \frac{1 - |\alpha_j Q_j(r)|^2}{|1 - \alpha_j Q_j(r)|^2}\ \Big|\ |\alpha_j|, \mathcal{F}_j \Big] = \frac{1 - u^2}{2\pi} \int_0^{2\pi} \frac{d\theta}{|1 - u e^{i\theta}|^2}, \]
where $u = |\alpha_j Q_j(r)| < 1$. Computing the integral (see Lemma 5.2 below for more detail) gives that the last conditional expectation is equal to $1$, and then
\[ \Big( \frac{\prod_{j=0}^{n-1} \big(1 - |\alpha_j Q_j(r)|^2\big)}{|\Phi_n^*(r)|^2} \Big)_{n \geq 0} \]
is an $(\mathcal{F}, \mathbb{P})$-martingale. Moreover, this martingale is bounded in all $L^p(\Omega, \mathcal{B}, \mathbb{P})$ spaces, because it is dominated by $(|\Phi_n^*(r)|^{-2})_{n \geq 0}$, which has been proven to be bounded in $L^p(\Omega, \mathcal{B}, \mathbb{P})$. Hence, the martingale converges a.s. and in all $L^p(\Omega, \mathcal{B}, \mathbb{P})$, and its limit has expectation $1$. It is then the density of a probability measure with respect to $\mathbb{P}$.

Note that we have, by the martingale property,
\[ \frac{d\mathbb{Q}_r}{d\mathbb{P}} \Big|_{\mathcal{F}_n} = \frac{\prod_{j=0}^{n-1} \big(1 - |\alpha_j Q_j(r)|^2\big)}{|\Phi_n^*(r)|^2}, \]
and then, for an $\mathcal{F}_{n+1}$-measurable quantity $X$ and an $\mathcal{F}_n$-measurable quantity $Y$, both nonnegative,
\[ \mathbb{E}_{\mathbb{Q}_r}[XY] = \mathbb{E}_{\mathbb{P}}\Big[ XY\, \frac{\prod_{j=0}^{n} \big(1 - |\alpha_j Q_j(r)|^2\big)}{|\Phi_{n+1}^*(r)|^2} \Big] = \mathbb{E}_{\mathbb{P}}\Big[ Y\, \frac{\prod_{j=0}^{n-1} \big(1 - |\alpha_j Q_j(r)|^2\big)}{|\Phi_n^*(r)|^2}\, \mathbb{E}_{\mathbb{P}}\Big[ X\, \frac{1 - |\alpha_n Q_n(r)|^2}{|1 - \alpha_n Q_n(r)|^2}\ \Big|\ \mathcal{F}_n \Big] \Big] = \mathbb{E}_{\mathbb{Q}_r}\Big[ Y\, \mathbb{E}_{\mathbb{P}}\Big[ X\, \frac{1 - |\alpha_n Q_n(r)|^2}{|1 - \alpha_n Q_n(r)|^2}\ \Big|\ \mathcal{F}_n \Big] \Big], \]
which gives
\[ \mathbb{E}_{\mathbb{Q}_r}[X \mid \mathcal{F}_n] = \mathbb{E}_{\mathbb{P}}\Big[ X\, \frac{1 - |\alpha_n Q_n(r)|^2}{|1 - \alpha_n Q_n(r)|^2}\ \Big|\ \mathcal{F}_n \Big]. \tag{4.1.3} \]
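The computation referred to above is the classical Poisson-kernel integral: for $u \in [0,1)$, $\frac{1-u^2}{2\pi}\int_0^{2\pi} \frac{d\theta}{|1-ue^{i\theta}|^2} = 1$. A short numerical confirmation:

```python
import numpy as np

u = 0.63  # stands for |alpha_j Q_j(r)| in the text; any value in [0, 1) works
theta = np.linspace(0.0, 2 * np.pi, 8192, endpoint=False)
# quadrature of (1-u^2)/(2*pi) * integral of |1 - u e^{i theta}|^{-2} d theta
val = (1 - u ** 2) * np.mean(1.0 / np.abs(1 - u * np.exp(1j * theta)) ** 2)
err = abs(val - 1.0)
```

The trapezoidal rule on a smooth periodic integrand converges spectrally, so the error is at machine-precision level.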
Similarly, if $X$ is $\mathcal{B}$-measurable and $Y$ is $\mathcal{F}_n$-measurable, both nonnegative,
\[ \mathbb{E}_{\mathbb{Q}_r}[X \mid \mathcal{F}_n] = \mathbb{E}_{\mathbb{P}}\Big[ X \prod_{j=n}^{\infty} \frac{1 - |\alpha_j Q_j(r)|^2}{|1 - \alpha_j Q_j(r)|^2}\ \Big|\ \mathcal{F}_n \Big]. \tag{4.1.4} \]
Using the rotational invariance of the problem, and (4.1.4), we deduce that:
\[ \int_0^{2\pi} \frac{d\theta}{2\pi}\, \mathbb{E}\big[ R^{(0,n-1)}(\theta)\, \big| \mathbb{E}\big[ R^{(n,\infty)}(\theta) \mid \mathcal{F}_n \big] - K_\beta \big| \big] = \mathbb{E}\big[ R^{(0,n-1)}(0)\, \big| \mathbb{E}\big[ R^{(n,\infty)}(0) \mid \mathcal{F}_n \big] - K_\beta \big| \big] \]
\[ = \mathbb{E}\Big[ \prod_{j=0}^{n-1} \frac{1 - |\alpha_j|^2}{|1 - \alpha_j Q_j(r)|^2}\, e^{\frac{2}{\beta(j+1)}(1 - r^{2j+2})}\ \Big| \mathbb{E}\Big[ \prod_{j=n}^{\infty} \frac{1 - |\alpha_j|^2}{|1 - \alpha_j Q_j(r)|^2}\, e^{\frac{2}{\beta(j+1)}(1 - r^{2j+2})}\ \Big|\ \mathcal{F}_n \Big] - K_\beta \Big| \Big] \]
\[ = \mathbb{E}_{\mathbb{Q}}\Big[ \prod_{j=0}^{n-1} \frac{1 - |\alpha_j|^2}{1 - |\alpha_j Q_j(r)|^2}\, e^{\frac{2}{\beta(j+1)}(1 - r^{2j+2})}\ \Big| \mathbb{E}_{\mathbb{Q}}\Big[ \prod_{j=n}^{\infty} \frac{1 - |\alpha_j|^2}{1 - |\alpha_j Q_j(r)|^2}\, e^{\frac{2}{\beta(j+1)}(1 - r^{2j+2})}\ \Big|\ \mathcal{F}_n \Big] - K_\beta \Big| \Big] \]
\[ \leq e^{\sum_{j=0}^{n-1} \frac{2}{\beta(j+1)}(1 - r^{2j+2})}\, \mathbb{E}_{\mathbb{Q}}\Big[ \Big| \mathbb{E}_{\mathbb{Q}}\Big[ \prod_{j=n}^{\infty} \frac{1 - |\alpha_j|^2}{1 - |\alpha_j Q_j(r)|^2}\, e^{\frac{2}{\beta(j+1)}(1 - r^{2j+2})}\ \Big|\ \mathcal{F}_n \Big] - K_\beta \Big| \Big], \]
the last step using that $\frac{1 - |\alpha_j|^2}{1 - |\alpha_j Q_j(r)|^2} \leq 1$, since $|Q_j(r)| \leq 1$.

We now introduce the following quantities, for $N \geq n$:
\[ \omega_{r,n,N} := \frac{2}{\beta} \sum_{k=n}^{N-1} \frac{|Q_k(r)|^2 - r^{2k+2}}{k+1}, \]
and
\[ \rho_{r,n,N} := \sum_{k=n}^{N-1} \Big( -\log\Big( \frac{1 - |\alpha_k Q_k(r)|^2}{1 - |\alpha_k|^2} \Big) + \frac{2}{\beta}\, \frac{1 - |Q_k(r)|^2}{k+1} \Big), \]
and their respective upper limits $\omega_{r,n}$ and $\rho_{r,n}$ when $N$ goes to infinity. Note that in Proposition 5.5, we will show that these upper limits are in fact limits, i.e.
\[ \omega_{r,n} = \frac{2}{\beta} \sum_{k=n}^{\infty} \frac{|Q_k(r)|^2 - r^{2k+2}}{k+1}, \qquad \rho_{r,n} = \sum_{k=n}^{\infty} \Big( -\log\Big( \frac{1 - |\alpha_k Q_k(r)|^2}{1 - |\alpha_k|^2} \Big) + \frac{2}{\beta}\, \frac{1 - |Q_k(r)|^2}{k+1} \Big). \]
In any case, we deduce, from the computation above:
\[ \int_0^{2\pi} \frac{d\theta}{2\pi}\, \mathbb{E}\big[ R^{(0,n-1)}(\theta)\, \big| \mathbb{E}\big[ R^{(n,\infty)}(\theta) \mid \mathcal{F}_n \big] - K_\beta \big| \big] \leq e^{\sum_{j=0}^{n-1} \frac{2}{\beta(j+1)}(1 - r^{2j+2})}\, \mathbb{E}_{\mathbb{Q}}\big| \mathbb{E}_{\mathbb{Q}}\big[ e^{\rho_{r,n} + \omega_{r,n}} \mid \mathcal{F}_n \big] - K_\beta \big|. \]
In the end, since $c_r$ goes to zero as $r \to 1^-$:
\[ \limsup_{r \to 1^-} \mathbb{E}\big[ \mathbb{1}_{A_L} \big| X_{r,n}(f) - K_\beta \mu^\beta_n(f) \big| \big] \leq |f|_\infty \limsup_{r \to 1^-} \mathbb{E}_{\mathbb{Q}}\big| \mathbb{E}_{\mathbb{Q}}\big[ e^{\rho_{r,n} + \omega_{r,n}} \mid \mathcal{F}_n \big] - K_\beta \big|. \]
This is where we invoke:

Lemma 4.2. There exists a constant $K_\beta$ such that:
\[ \mathbb{E}_{\mathbb{Q}}\big| \mathbb{E}_{\mathbb{Q}}\big[ e^{\rho_{r,n} + \omega_{r,n}} \mid \mathcal{F}_n \big] - K_\beta \big| \underset{r \to 1^-}{\longrightarrow} 0. \]

This lemma is the second ingredient which is admitted for now, and proved in Section 7. We deduce:
\[ \mathbb{E}\big[ \mathbb{1}_{A_L} \big| X_{r,n}(f) - K_\beta \mu^\beta_n(f) \big| \big] \underset{r \to 1^-}{\longrightarrow} 0. \]
From this limit, combined with (4.1.1), the triangle inequality and the fact that the new quantities involved do not depend on $r$ anymore, we have
\[ \mathbb{E}\Big[ \mathbb{1}_{A_L} \Big| \mathbb{E}\Big[ \frac{1}{M_\infty}\, GMC^\gamma(f)\ \Big|\ \mathcal{F}_n \Big] - K_\beta \mu^\beta_n(f) \Big| \Big] = 0. \]
This is equivalent to
\[ K_\beta \mu^\beta_n(f) = \mathbb{E}\Big[ \frac{GMC^\gamma(f)}{M_\infty}\ \Big|\ \mathcal{F}_n \Big] \]
almost surely on $A_L$, and then almost surely without extra restriction, as the sample space $\Omega$ can be written as a countable union of events of the form $A_L$. We have seen that the left-hand side of the equality is a bounded martingale, a fortiori uniformly integrable. Taking the limit when $n$ goes to infinity, we get
\[ K_\beta \mu_\beta(f) = \mathbb{E}\Big[ \frac{GMC^\gamma(f)}{M_\infty}\ \Big|\ \mathcal{F}_\infty \Big] \]
almost surely, for $\mathcal{F}_\infty$ equal to the $\sigma$-algebra generated by all the Verblunsky coefficients. Now, by construction, $GMC^\gamma(f)/M_\infty$ is $\mathcal{F}_\infty$-measurable, and we easily deduce that the measures $GMC^\gamma$ and $M_\infty K_\beta \mu_\beta$ almost surely coincide. Since the expectation of the total mass of $GMC^\gamma$ is $1$, we have
\[ GMC^\gamma = \frac{M_\infty}{\mathbb{E}[M_\infty]}\, \mu_\beta. \]
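The argument above repeatedly uses that $\mu^\beta_n$ is a probability measure for every $n$ — a deterministic property of the Bernstein-Szegő approximation (4.0.2), valid for any coefficients in $\mathbb{D}$. A numerical confirmation, with arbitrary test coefficients of ours:

```python
import numpy as np

alpha = np.array([0.5, -0.3 + 0.2j, 0.1 - 0.4j, 0.2j])  # arbitrary points of the unit disc
n = len(alpha)

# Phi_n^* via the Szego recursion (ascending coefficient arrays)
phi, phis = np.array([1.0 + 0j]), np.array([1.0 + 0j])
for k in range(n):
    zphi = np.concatenate(([0], phi))
    phi, phis = (zphi - np.conj(alpha[k]) * np.append(phis, 0),
                 np.append(phis, 0) - alpha[k] * zphi)

# density of mu_n^beta with respect to dtheta/(2*pi), integrated over the circle
theta = np.linspace(0.0, 2 * np.pi, 4096, endpoint=False)
z = np.exp(1j * theta)
density = np.prod(1 - np.abs(alpha) ** 2) / np.abs(np.polyval(phis[::-1], z)) ** 2
mass = density.mean()  # total mass of the Bernstein-Szego measure
```

Since $\Phi_n^*$ has no zero on the closed disc, the density is smooth and the uniform-grid average converges spectrally to the total mass, which equals $1$.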
Now, recalling the expression
\[ M_n := \prod_{j=0}^{n-1} (1 - |\alpha_j|^2)^{-1}\, e^{-\frac{2}{\beta(j+1)}}, \]
we have that
\[ \frac{M_n}{\mathbb{E}[M_n]} = \prod_{j=0}^{n-1} \frac{(1 - |\alpha_j|^2)^{-1}}{\mathbb{E}\big[ (1 - |\alpha_j|^2)^{-1} \big]} = \prod_{j=0}^{n-1} (1 - |\alpha_j|^2)^{-1} \Big( 1 - \frac{2}{\beta(j+1)} \Big) \]
is a martingale in $n$. From straightforward computations on the Beta distribution, it is also bounded in $L^p$ for some $p > 1$, and a fortiori uniformly integrable. Therefore, we have that
\[ \mathbb{E}[M_n] = \prod_{j=0}^{n-1} \Big( 1 - \frac{2}{\beta(j+1)} \Big)^{-1} e^{-\frac{2}{\beta(j+1)}} \]
converges to a limit $M$ when $n$ goes to infinity, and then the almost sure and $L^1$ limit of the martingale is $M_\infty / M$, and necessarily $M = \mathbb{E}[M_\infty]$, since the expectations of the martingale and of its $L^1$ limit are equal to $1$. Hence,
\[ \frac{M_\infty}{\mathbb{E}[M_\infty]} = \lim_{n \to \infty} \frac{M_n}{\mathbb{E}[M_n]} = \lim_{n \to \infty} \prod_{j=0}^{n-1} (1 - |\alpha_j|^2)^{-1} \Big( 1 - \frac{2}{\beta(j+1)} \Big), \]
which is the quantity $\mathcal{C}$ introduced in Theorem 2.1. We deduce that $GMC^\gamma = \mathcal{C}\mu_\beta$ almost surely, and a fortiori in distribution, which proves Theorem 2.1. □

5. Some useful estimates

We first recall that
\[ Q_j(z) = z\, \frac{\Phi_j(z)}{\Phi_j^*(z)}. \tag{5.0.1} \]
Here are a few properties we will require later:

Proposition 5.1. The following holds, for all $j \geq 0$:
• $Q_j(z)$ is a Blaschke product, with modulus one on $\partial \mathbb{D}$.
• $|Q_j(z)| \leq 1$ for all $z \in \mathbb{D}$.
• If $|z| = r$, the following recurrence holds:
\[ 1 - |Q_{j+1}(z)|^2 = (1 - r^2) + r^2\, \frac{(1 - |\alpha_j|^2)(1 - |Q_j(z)|^2)}{|1 - \alpha_j Q_j(z)|^2}. \tag{5.1.1} \]
• We have the conditional expectation bounds, for all $n, j \geq 0$:
\[ r^{2j} |Q_n(z)|^2 \leq \mathbb{E}\big[ |Q_{n+j}(z)|^2 \mid \mathcal{F}_n \big] \leq 1. \]

Proof.
The first point is due to the fact that
\[ Q_j(z) = z \prod_{\omega :\ \Phi_j(\omega) = 0} \frac{z - \omega}{1 - \bar{\omega} z}, \]
as a consequence of (5.0.1), and to the fact that all the roots of $\Phi_j$ have modulus strictly smaller than 1.

The second point is due to the fact that $z \mapsto |Q_j(z)|$ is subharmonic, because $Q_j$ is holomorphic and the absolute value is convex.

For the third point, we start from the recurrence relation (1.0.1) and we write:
\[ 1 - |Q_{j+1}(z)|^2 = 1 - r^2 \bigg| \frac{\Phi_{j+1}(z)}{\Phi_{j+1}^*(z)} \bigg|^2 = 1 - r^2 \bigg| \frac{z \Phi_j(z) - \bar{\alpha}_j \Phi_j^*(z)}{\Phi_j^*(z) - \alpha_j z \Phi_j(z)} \bigg|^2 = 1 - r^2 \bigg| \frac{Q_j(z) - \bar{\alpha}_j}{1 - \alpha_j Q_j(z)} \bigg|^2 = (1-r^2) + r^2\, \frac{(1-|\alpha_j|^2)(1-|Q_j(z)|^2)}{|1-\alpha_j Q_j(z)|^2}. \]
For the last point, taking conditional expectations in the recurrence yields:
\[ E\big( 1 - |Q_{j+1}(z)|^2 \,\big|\, \mathcal{F}_j \big) = (1-r^2) + r^2\, E\bigg( \frac{1-|\alpha_j|^2}{|1-\alpha_j Q_j(z)|^2} \,\bigg|\, \mathcal{F}_j \bigg) (1-|Q_j(z)|^2) \le (1-r^2) + r^2 (1-|Q_j(z)|^2) = 1 - r^2 |Q_j(z)|^2. \]
The result follows from the previous inequality by induction. $\Box$

5.1. Moment estimates. The following lemma will be useful:

Lemma 5.2. For $\Theta$ a uniform random variable on $[0, 2\pi]$ and $u \in \mathbb{D}$, we have, for all $\lambda \in \mathbb{R}$:
\[ E\big( |1 - e^{i\Theta} u|^{-2\lambda} \big) = \sum_{k=0}^{\infty} \binom{\lambda+k-1}{k}^2 |u|^{2k}, \tag{5.2.1} \]
\[ E\big( |1 - e^{i\Theta} u|^{-2\lambda} \big) = 1 + \lambda^2 |u|^2 + O_\lambda\big( |u|^4 (1-|u|^2)^{-2|\lambda|} \big), \tag{5.2.2} \]
and, in the case where $|\lambda| \le 1$,
\[ E\big( |1 - e^{i\Theta} u|^{-2\lambda} \big) = 1 + \lambda^2 |u|^2 + O\bigg( \frac{|u|^4}{1-|u|^2} \bigg). \tag{5.2.3} \]

Proof. For (5.2.1), we have by series expansion:
\[ E\big( |1 - e^{i\Theta} u|^{-2\lambda} \big) = E\bigg( \sum_{k,\ell=0}^{\infty} \binom{-\lambda}{k} \big( -e^{i\Theta} u \big)^k \binom{-\lambda}{\ell} \big( -e^{-i\Theta} \bar{u} \big)^\ell \bigg) \]
\[ = \sum_{k=0}^{\infty} \binom{-\lambda}{k}^2 |u|^{2k} = \sum_{k=0}^{\infty} \binom{\lambda+k-1}{k}^2 |u|^{2k}. \]
The first two terms of the sum are 1 and $\lambda^2 |u|^2$. If $\lambda$ is a nonpositive integer, only finitely many terms are non-zero. Otherwise, for $k \ge 2$, the coefficient of $|u|^{2k}$ is
\[ \bigg( \frac{\Gamma(k+\lambda)}{\Gamma(\lambda)\Gamma(k+1)} \bigg)^2 = O_\lambda\big( k^{2\lambda-2} \big), \]
whereas the coefficient of $|u|^{2k}$ in the expansion of $|u|^4 (1-|u|^2)^{-2|\lambda|}$ is
\[ (-1)^{k-2} \binom{-2|\lambda|}{k-2} = \binom{2|\lambda|+k-3}{k-2} = \frac{\Gamma(2|\lambda|+k-2)}{\Gamma(2|\lambda|)\Gamma(k-1)} \gg_\lambda k^{2|\lambda|-1} \gg_\lambda k^{2\lambda-2} \]
for $\lambda \neq 0$. This gives the bound (5.2.2) for $\lambda \neq 0$, and this bound is obvious for $\lambda = 0$. If $|\lambda| \le 1$, we have
\[ \bigg| \binom{-\lambda}{k} \bigg| = \prod_{j=0}^{k-1} \frac{|-\lambda-j|}{j+1} \le 1, \]
so the coefficient of $|u|^{2k}$ is at most 1 for every $k \ge 2$, which gives (5.2.3). $\Box$

The following estimates will be useful:

Proposition 5.3. For $\beta > 0$, $j$ large enough depending on $\beta$, and $Q_j = Q_j(r)$, we have almost surely, under $Q = Q_r$:
\[ E_Q\big( 1 - |Q_{j+1}|^2 \,\big|\, \mathcal{F}_j \big) = 1 - r^2 + r^2 (1-|Q_j|^2) \bigg( 1 - \frac{2}{\beta(j+1)} (1-|Q_j|^2) + \frac{4}{\beta(j+1)} |Q_j|^2 + O\bigg( \frac{1}{(j+1)^2} \bigg) \bigg). \tag{5.3.1} \]
\[ \mathrm{Var}_Q\big( 1 - |Q_{j+1}|^2 \,\big|\, \mathcal{F}_j \big) = r^4 (1-|Q_j|^2)^2 \bigg( \frac{4 |Q_j|^2}{\beta(j+1)} + O\bigg( \frac{1}{(j+1)^2} \bigg) \bigg). \tag{5.3.2} \]
\[ E_Q\big( (|Q_j|^2 - |Q_{j+1}|^2)^4 \,\big|\, \mathcal{F}_j \big) = O\bigg( (1-r)^2 + \frac{1}{(j+1)^2} \bigg). \tag{5.3.3} \]
We also have
\[ E_Q\big( -2\Re \log(1 - \alpha_j Q_j) \,\big|\, \mathcal{F}_j \big) = \frac{4}{\beta(j+1)} |Q_j|^2 + O\bigg( \frac{1}{(j+1)^2} \bigg) \]
and
\[ \mathrm{Var}_Q\big( -2\Re \log(1 - \alpha_j Q_j) \,\big|\, \mathcal{F}_j \big) = \frac{4}{\beta(j+1)} |Q_j|^2 + O\bigg( \frac{1}{(j+1)^2} \bigg). \]
We recall that the implicit constant in $O$ depends only on $\beta$.

Proof. For the first equation, taking the conditional expectation $E_Q(\cdot \,|\, \mathcal{F}_j)$ in (5.1.1), we only have to prove:
\[ E_Q\bigg( \frac{1-|\alpha_j|^2}{|1-\alpha_j Q_j|^2} \,\bigg|\, \mathcal{F}_j \bigg) = 1 - \frac{2}{\beta(j+1)} (1-|Q_j|^2) + \frac{4}{\beta(j+1)} |Q_j|^2 + O\bigg( \frac{1}{(j+1)^2} \bigg). \]
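The series identity (5.2.1) proved above can be sanity-checked numerically, by comparing a quadrature of the circle average against a truncation of the series; the following sketch does so for a few values of $\lambda$ and real $u$ (function names and truncation levels are ours):

```python
import math

def gbinom(a, k):
    # Generalized binomial coefficient C(a, k) = a (a-1) ... (a-k+1) / k!
    out = 1.0
    for j in range(k):
        out *= (a - j) / (j + 1)
    return out

def circle_side(lam, u, n=100000):
    # Midpoint-rule average over theta of |1 - e^{i theta} u|^{-2 lam}.
    s = 0.0
    for i in range(n):
        th = 2 * math.pi * (i + 0.5) / n
        s += (1 - 2 * u * math.cos(th) + u * u) ** (-lam)
    return s / n

def series_side(lam, u, kmax=300):
    # Truncation of sum_k C(lam + k - 1, k)^2 u^{2k}.
    return sum(gbinom(lam + k - 1, k) ** 2 * u ** (2 * k) for k in range(kmax))

for lam in (0.5, 2.0, -0.7):
    for u in (0.2, 0.5):
        assert abs(circle_side(lam, u) - series_side(lam, u)) < 1e-6
print("Lemma 5.2 series identity verified")
```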
On the other hand, by using the rotational invariance of $\alpha_j$ and its independence from $\mathcal{F}_j$, we get, for all $\lambda \in \mathbb{R}$:
\[ E\big[ |1-\alpha_j Q_j|^{-2\lambda} \,\big|\, \mathcal{F}_j, |\alpha_j| \big] = E\big( |1-e^{i\Theta} u|^{-2\lambda} \big), \]
with $u = |\alpha_j Q_j|$, and $\Theta$ a uniform random variable as in Lemma 5.2. Now, using the change of measure (4.1.3) and then (5.2.2) for $\lambda = 2$ and $u = |\alpha_j Q_j|$, we deduce
\[ E_Q\bigg( \frac{1-|\alpha_j|^2}{|1-\alpha_j Q_j|^2} \,\bigg|\, \mathcal{F}_j \bigg) = E\bigg( \frac{(1-|\alpha_j|^2)(1-|\alpha_j Q_j|^2)}{|1-\alpha_j Q_j|^4} \,\bigg|\, \mathcal{F}_j \bigg) \]
\[ = E\bigg( (1-|\alpha_j|^2)(1-|\alpha_j Q_j|^2) \bigg( 1 + 4|\alpha_j Q_j|^2 + O\bigg( \frac{|\alpha_j Q_j|^4}{(1-|\alpha_j Q_j|^2)^4} \bigg) \bigg) \,\bigg|\, \mathcal{F}_j \bigg) \]
\[ = E\big( 1 - |\alpha_j|^2 - |\alpha_j Q_j|^2 + 4 |\alpha_j Q_j|^2 \,\big|\, \mathcal{F}_j \big) + O\bigg( E\bigg[ \frac{|\alpha_j|^4}{(1-|\alpha_j|)^4} \bigg] \bigg) = 1 - \frac{2}{\beta(j+1)} (1-|Q_j|^2) + \frac{4}{\beta(j+1)} |Q_j|^2 + O\bigg( E\bigg[ \frac{|\alpha_j|^4}{(1-|\alpha_j|)^4} \bigg] \bigg). \]
The first estimate holds since $E[ |\alpha_j|^4 (1-|\alpha_j|)^{-4} ] = O((j+1)^{-2})$ for $j$ large enough depending on $\beta$. For the second equation, the proof is similar, by taking the variance under $Q$ conditionally on $\mathcal{F}_j$ in (5.1.1). We only have to prove:
\[ \mathrm{Var}_Q\bigg( \frac{1-|\alpha_j|^2}{|1-\alpha_j Q_j|^2} \,\bigg|\, \mathcal{F}_j \bigg) = \frac{4 |Q_j|^2}{\beta(j+1)} + O\bigg( \frac{1}{(j+1)^2} \bigg). \]
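Both conditional moments being computed here flow from the one-step identity (5.1.1). As a sanity check, that identity — together with the one-step map $Q_{j+1} = z (Q_j - \bar\alpha_j)/(1 - \alpha_j Q_j)$ used in the proof of Proposition 5.1 — can be verified numerically on random inputs; the following is a sketch (names and sampling ranges are ours):

```python
import cmath, random

def szego_step(z, Q, alpha):
    # One Szego step for Q_j = z * Phi_j / Phi_j^*:
    # Q_{j+1} = z (Q_j - conj(alpha)) / (1 - alpha Q_j).
    return z * (Q - alpha.conjugate()) / (1 - alpha * Q)

random.seed(0)
for _ in range(1000):
    r = random.uniform(0.1, 0.99)
    z = r * cmath.exp(1j * random.uniform(0, 2 * cmath.pi))
    Q = random.uniform(0, r) * cmath.exp(1j * random.uniform(0, 2 * cmath.pi))
    a = random.uniform(0, 0.95) * cmath.exp(1j * random.uniform(0, 2 * cmath.pi))
    lhs = 1 - abs(szego_step(z, Q, a)) ** 2
    rhs = (1 - r * r) + r * r * (1 - abs(a) ** 2) * (1 - abs(Q) ** 2) / abs(1 - a * Q) ** 2
    assert abs(lhs - rhs) < 1e-9  # identity (5.1.1)
print("identity (5.1.1) verified")
```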
The required expansion uses (5.2.2) for $\lambda = 3$:
\[ \mathrm{Var}_Q\bigg( \frac{1-|\alpha_j|^2}{|1-\alpha_j Q_j|^2} \,\bigg|\, \mathcal{F}_j \bigg) = E\bigg( \frac{(1-|\alpha_j|^2)^2 (1-|\alpha_j Q_j|^2)}{|1-\alpha_j Q_j|^6} \,\bigg|\, \mathcal{F}_j \bigg) - E_Q\bigg( \frac{1-|\alpha_j|^2}{|1-\alpha_j Q_j|^2} \,\bigg|\, \mathcal{F}_j \bigg)^2 \]
\[ = E\bigg( (1-|\alpha_j|^2)^2 (1-|\alpha_j Q_j|^2) \bigg( 1 + 9|\alpha_j Q_j|^2 + O\bigg( \frac{|\alpha_j Q_j|^4}{(1-|\alpha_j Q_j|^2)^6} \bigg) \bigg) \,\bigg|\, \mathcal{F}_j \bigg) - E_Q\bigg( \frac{1-|\alpha_j|^2}{|1-\alpha_j Q_j|^2} \,\bigg|\, \mathcal{F}_j \bigg)^2 \]
\[ = E\big( 1 - 2|\alpha_j|^2 - |\alpha_j Q_j|^2 + 9 |\alpha_j Q_j|^2 \,\big|\, \mathcal{F}_j \big) + O\bigg( E\bigg[ \frac{|\alpha_j|^4}{(1-|\alpha_j|)^6} \bigg] \bigg) - E_Q\bigg( \frac{1-|\alpha_j|^2}{|1-\alpha_j Q_j|^2} \,\bigg|\, \mathcal{F}_j \bigg)^2 \]
\[ = 1 - \frac{4}{\beta(j+1)} (1-|Q_j|^2) + \frac{12}{\beta(j+1)} |Q_j|^2 + O\bigg( \frac{1}{(j+1)^2} \bigg) - E_Q\bigg( \frac{1-|\alpha_j|^2}{|1-\alpha_j Q_j|^2} \,\bigg|\, \mathcal{F}_j \bigg)^2 = \frac{4}{\beta(j+1)} |Q_j|^2 + O\bigg( \frac{1}{(j+1)^2} \bigg), \]
which gives the desired estimate for $j$ large enough depending on $\beta$.

For the fourth moment (5.3.3), we first deduce from (5.1.1):
\[ |Q_j|^2 - |Q_{j+1}|^2 = (1-r^2) + (1-|Q_j|^2) \bigg( \frac{r^2 (1-|\alpha_j|^2)}{|1-\alpha_j Q_j|^2} - 1 \bigg) = |Q_j|^2 (1-r^2) + r^2 (1-|Q_j|^2) \bigg( \frac{1-|\alpha_j|^2}{|1-\alpha_j Q_j|^2} - 1 \bigg). \]
Since $|Q_j|^2 \in [0,1]$, the first term is $O(1-r)$, and for the second term, it is enough to get the estimate
\[ E_Q\bigg[ \bigg( \frac{1-|\alpha_j|^2}{|1-\alpha_j Q_j|^2} - 1 \bigg)^4 \,\bigg|\, \mathcal{F}_j \bigg] = O\bigg( \frac{1}{(j+1)^2} \bigg). \tag{5.3.4} \]
We have, for $p \in \{0, 1, 2, 3, 4\}$, thanks to (5.2.2):
\[ E_Q\bigg( \frac{(1-|\alpha_j|^2)^p}{|1-\alpha_j Q_j|^{2p}} \,\bigg|\, \mathcal{F}_j \bigg) = E\bigg( \frac{(1-|\alpha_j|^2)^p (1-|\alpha_j Q_j|^2)}{|1-\alpha_j Q_j|^{2p+2}} \,\bigg|\, \mathcal{F}_j \bigg) \]
\[ = E\bigg( (1-|\alpha_j|^2)^p (1-|\alpha_j Q_j|^2) \bigg( 1 + (p+1)^2 |\alpha_j Q_j|^2 + O\bigg( \frac{|\alpha_j Q_j|^4}{(1-|\alpha_j Q_j|^2)^{2(p+1)}} \bigg) \bigg) \,\bigg|\, \mathcal{F}_j \bigg) \]
\[ = E\big( 1 - p|\alpha_j|^2 - |\alpha_j Q_j|^2 + (p+1)^2 |\alpha_j Q_j|^2 \,\big|\, \mathcal{F}_j \big) + O\bigg( E\bigg[ \frac{|\alpha_j|^4}{(1-|\alpha_j|)^{2(p+2)}} \bigg] \bigg) = 1 - \frac{2p}{\beta(j+1)} + \frac{2\big( (p+1)^2 - 1 \big)}{\beta(j+1)} |Q_j|^2 + O\bigg( \frac{1}{(j+1)^2} \bigg). \]
Multiplying these estimates respectively by $1, -4, 6, -4, 1$ for $p$ equal to $0, 1, 2, 3, 4$, and adding the terms, we get the desired bound (5.3.4) for $j$ large enough depending on $\beta$.

For the last equations, which concern the logarithm, we get
\[ E_Q\big( -\log(1-\alpha_j Q_j) \,\big|\, \mathcal{F}_j \big) = -E\bigg( \frac{1-|\alpha_j Q_j|^2}{|1-\alpha_j Q_j|^2} \log(1-\alpha_j Q_j) \,\bigg|\, \mathcal{F}_j \bigg). \]
If we condition on $\mathcal{F}_j$ and $|\alpha_j|$, the expectation is, for $u = |\alpha_j Q_j|$:
\[ -\frac{1-u^2}{2\pi} \int_0^{2\pi} (1-u e^{i\theta})^{-1} (1-u e^{-i\theta})^{-1} \log(1-u e^{i\theta})\, d\theta = \frac{1-u^2}{2\pi} \int_0^{2\pi} \sum_{k,\ell \ge 0,\, m \ge 1} \frac{u^{k+\ell+m}}{m}\, e^{i\theta(k-\ell+m)}\, d\theta \]
\[ = (1-u^2) \sum_{k \ge 0,\, m \ge 1} \frac{u^{2(k+m)}}{m} = (1-u^2) \sum_{k \ge 0} u^{2k} \sum_{m \ge 1} \frac{u^{2m}}{m} = -\log(1-u^2). \]
We deduce
\[ E_Q\big( -\log(1-\alpha_j Q_j) \,\big|\, \mathcal{F}_j \big) = E\big( -\log(1-|\alpha_j Q_j|^2) \,\big|\, \mathcal{F}_j \big) = E\big( |\alpha_j Q_j|^2 \,\big|\, \mathcal{F}_j \big) + O\bigg( E\bigg[ \frac{|\alpha_j|^4}{1-|\alpha_j|^2} \bigg] \bigg) = \frac{2}{\beta(j+1)} |Q_j|^2 + O\bigg( \frac{1}{(j+1)^2} \bigg) \]
for $j$ large enough depending on $\beta$; the estimate for $E_Q( -2\Re\log(1-\alpha_j Q_j) \,|\, \mathcal{F}_j )$ follows by taking twice the real part.

Let us now estimate the variance. We get
\[ E_Q\big( ( -2\Re \log(1-\alpha_j Q_j) )^2 \,\big|\, \mathcal{F}_j \big) = E\bigg( \frac{1-|\alpha_j Q_j|^2}{|1-\alpha_j Q_j|^2}\, \big( -2\Re \log(1-\alpha_j Q_j) \big)^2 \,\bigg|\, \mathcal{F}_j \bigg). \]
If we condition on $|\alpha_j|$, we obtain
\[ \frac{1-u^2}{2\pi} \int_0^{2\pi} (1-u e^{i\theta})^{-1} (1-u e^{-i\theta})^{-1} \big[ -\log(1-u e^{i\theta}) - \log(1-u e^{-i\theta}) \big]^2\, d\theta \]
\[ = \frac{1-u^2}{2\pi} \int_0^{2\pi} \sum_{k \ge 0} u^k e^{i\theta k} \sum_{\ell \ge 0} u^\ell e^{-i\theta \ell} \bigg( \sum_{m \in \mathbb{Z} \setminus \{0\}} \frac{u^{|m|}}{|m|} e^{i\theta m} \bigg) \bigg( \sum_{p \in \mathbb{Z} \setminus \{0\}} \frac{u^{|p|}}{|p|} e^{i\theta p} \bigg)\, d\theta = (1-u^2) \sum_{\substack{k,\ell \ge 0;\ m,p \in \mathbb{Z} \setminus \{0\} \\ k-\ell+m+p = 0}} \frac{u^{k+\ell+|m|+|p|}}{|mp|}. \]
If we fix $m$ and $p$, we prescribe the difference $k-\ell = -m-p$. The possible values for $k+\ell$ are then $|m+p| + 2s$ for $s = \min(k,\ell) \ge 0$. Hence, we get
\[ (1-u^2) \sum_{s \ge 0} u^{2s} \sum_{m,p \in \mathbb{Z} \setminus \{0\}} \frac{u^{|m+p|+|m|+|p|}}{|mp|} = \sum_{m,p \in \mathbb{Z} \setminus \{0\}} \frac{u^{|m+p|+|m|+|p|}}{|mp|} = 2 \sum_{m \ge 1,\ p \in \mathbb{Z} \setminus \{0\}} \frac{u^{|m+p|+m+|p|}}{m|p|}. \]
We split the sum according to the sign of $p$: for $p \ge 1$ the exponent is $2(m+p)$, while for $p = -q \le -1$ it is $|m-q| + m + q = 2\max(m,q)$. Isolating the terms for $(m,p) = (1,1), (1,2), (2,1)$ in the second sum, and bounding $u^{2\max(m,p)}$ by $u^{m+p}$ in the remaining terms, we get
\[ 2 \sum_{m,p \ge 1} \frac{u^{2(m+p)}}{mp} + 2 \sum_{m,p \ge 1} \frac{u^{2\max(m,p)}}{mp} = O\bigg( \bigg( \sum_{m \ge 1} \frac{u^{2m}}{m} \bigg)^2 \bigg) + 2u^2 + 2u^4 + O\bigg( \sum_{\substack{m,p \ge 1 \\ m+p \ge 4}} \frac{u^{m+p}}{mp} \bigg), \]
which gives
\[ 2u^2 + O\bigg( \frac{u^4}{(1-u)^2} \bigg). \]
We deduce
\[ E_Q\big( ( -2\Re\log(1-\alpha_j Q_j) )^2 \,\big|\, \mathcal{F}_j \big) = 2 E\big( |\alpha_j Q_j|^2 \,\big|\, \mathcal{F}_j \big) + O\bigg( E\bigg[ \frac{|\alpha_j|^4}{(1-|\alpha_j|)^2} \bigg] \bigg) = \frac{4}{\beta(j+1)} |Q_j|^2 + O\bigg( \frac{1}{(j+1)^2} \bigg) \]
for $j$ large enough depending on $\beta$. Subtracting the square of the previous estimate of the expectation gives the desired bound for the variance. $\Box$

5.2. Bounds on useful quantities related to $(Q_k)_{k \ge 0}$. We recall the expressions, available for $N \ge n$:
\[ \omega_{r,n,N} := \frac{2}{\beta} \sum_{k=n}^{N-1} \frac{|Q_k(r)|^2 - r^{2(k+2)}}{k+1} \quad \text{and} \quad \rho_{r,n,N} := \sum_{k=n}^{N-1} \bigg( -\log\bigg( \frac{1-|\alpha_k Q_k(r)|^2}{1-|\alpha_k|^2} \bigg) + \frac{2}{\beta} \cdot \frac{1-|Q_k(r)|^2}{k+1} \bigg). \]
The analysis of these random variables is intimately related to the following result:

Proposition 5.4. For $r \in (0,1)$, the series
\[ \sum_{k \ge 0} \frac{|Q_k(r)|^2}{k+1} \]
converges $Q$-almost surely. Moreover, if for every $A > 0$ we define the index $A_{r,n} = \max( n, \lfloor A(1-r)^{-1} \rfloor )$ and
\[ \square^{(A,\infty)}_{r,n} := \sum_{k \ge A_{r,n}} \frac{|Q_{k+1}(r)|^2}{k+2}, \]
then we have, for all $p \in \mathbb{R}$:
\[ E_Q\big( e^{p\, \square^{(A,\infty)}_{r,n}} \,\big|\, \mathcal{F}_n \big) \le \exp\big( O_p(A^{-1}) \big) \]
almost surely.

Proof. In this proof, we again write $Q_k$ for $Q_k(r)$ in order to simplify the notation. We know that $(\log \Phi^*_k(r))_{k \ge 0}$ is an $(\mathcal{F}, P)$-martingale, bounded in $L^2(\Omega, \mathcal{B}, P)$ since it has bounded exponential moments, and then the expectation of its bracket is bounded. In particular, the bracket is $P$-almost surely finite, and then $Q$-almost surely finite since $P$ and $Q$ are equivalent measures. Now, since
\[ \log \Phi^*_k(r) = \sum_{j=0}^{k-1} \log( 1 - \alpha_j Q_j ), \]
the bracket is
\[ \langle \log \Phi^*_\cdot(r) \rangle_k = \sum_{j=0}^{k-1} E\big( |\log(1-\alpha_j Q_j)|^2 \,\big|\, \mathcal{F}_j \big) = \sum_{j=0}^{k-1} E\bigg( \sum_{m=1}^{\infty} \frac{|\alpha_j Q_j|^{2m}}{m^2} \,\bigg|\, \mathcal{F}_j \bigg) \]
by rotational invariance of $\alpha_j$ and independence of $\alpha_j$ and $\mathcal{F}_j$, and then
\[ \langle \log \Phi^*_\cdot(r) \rangle_k = \frac{2}{\beta} \sum_{j=0}^{k-1} \bigg( \frac{|Q_j|^2}{j+1} + O\big( E[|\alpha_j|^4] \big) \bigg) = O(1) + \frac{2}{\beta} \sum_{j=0}^{k-1} \frac{|Q_j|^2}{j+1}. \]
Hence, the last sum is almost surely bounded when $k$ varies, for fixed $r < 1$, which gives the $Q$-almost sure convergence of the series. Let us now bound the exponential moments of $\square^{(A,\infty)}_{r,n}$. We start by proving the following equation:
\[ (1-r^2)^{-1} \square^{(A,\infty)}_{r,n} = (1-r^2)^{-1} \sum_{k=A_{r,n}}^{\infty} \frac{|Q_{k+1}|^2 - E_Q( |Q_{k+1}|^2 \,|\, \mathcal{F}_k )}{k+2} + r^2 (1-r^2)^{-1} \square^{(A,\infty)}_{r,n} + O\bigg( \frac{1}{A} \bigg). \tag{5.4.1} \]
In order to see that, we write:
\[ (1-r^2)^{-1} \square^{(A,\infty)}_{r,n} = (1-r^2)^{-1} \sum_{k=A_{r,n}}^{\infty} \frac{|Q_{k+1}|^2 - E_Q( |Q_{k+1}|^2 \,|\, \mathcal{F}_k )}{k+2} + (1-r^2)^{-1} \sum_{k=A_{r,n}}^{\infty} \frac{E_Q( |Q_{k+1}|^2 \,|\, \mathcal{F}_k )}{k+2}, \tag{5.4.2} \]
and from (5.3.1),
\[ \frac{E_Q( |Q_{k+1}|^2 \,|\, \mathcal{F}_k )}{k+2} = \frac{r^2 |Q_k|^2}{k+1} + O\bigg( \frac{1}{(k+1)^2} \bigg). \]
Then the combination of the two previous equations yields:
\[ (1-r^2)^{-1} \square^{(A,\infty)}_{r,n} = (1-r^2)^{-1} \sum_{k=A_{r,n}}^{\infty} \frac{|Q_{k+1}|^2 - E_Q( |Q_{k+1}|^2 \,|\, \mathcal{F}_k )}{k+2} + r^2 (1-r^2)^{-1} \square^{(A,\infty)}_{r,n} + O\bigg( \frac{r^2 (1-r^2)^{-1}}{A_{r,n}+1} + (1-r^2)^{-1} \sum_{k=A_{r,n}}^{\infty} \frac{1}{(k+1)^2} \bigg) \]
\[ = (1-r^2)^{-1} \sum_{k=A_{r,n}}^{\infty} \frac{|Q_{k+1}|^2 - E_Q( |Q_{k+1}|^2 \,|\, \mathcal{F}_k )}{k+2} + r^2 (1-r^2)^{-1} \square^{(A,\infty)}_{r,n} + O\bigg( \frac{(1-r^2)^{-1}}{A_{r,n}+1} \bigg), \]
which is (5.4.1), since $A_{r,n} + 1 \ge A(1-r)^{-1}$. Rearranging this equation, we get
\[ \square^{(A,\infty)}_{r,n} = O\bigg( \frac{1}{A} \bigg) - (1-r^2)^{-1} \sum_{k=A_{r,n}}^{\infty} \frac{(1-|Q_{k+1}|^2) - E_Q( 1-|Q_{k+1}|^2 \,|\, \mathcal{F}_k )}{k+2} = O\bigg( \frac{1}{A} \bigg) - \sum_{k \ge A_{r,n}} \Delta M_k, \]
where the $\Delta M_k$ are martingale differences, bounded by 1. Estimating the increments of the bracket from (5.3.2) yields:
\[ E_Q\big( (\Delta M_k)^2 \,\big|\, \mathcal{F}_k \big) \ll (1-r)^{-2} (k+1)^{-3}. \]
Hence, the bracket of the corresponding martingale is bounded by
\[ \langle M \rangle_\infty \ll \sum_{k \ge A_{r,n}} (1-r)^{-2} (k+1)^{-3} \ll A^{-2}, \]
and the martingale converges in $L^2(\Omega, \mathcal{B}, Q)$. In order to bound exponential moments, we use the following variant of conditional Chernoff bounds.
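The variant rests on the elementary pointwise bound $e^x \le 1 + x + \frac{x^2}{2} e^{|x|}$, valid for all real $x$ (Taylor's formula with remainder); combined with $E_Q(\Delta M_k \,|\, \mathcal{F}_k) = 0$ and $|\Delta M_k| \le 1$, it turns the conditional variance bound into a conditional exponential-moment bound. A numerical check of the pointwise inequality (a sketch):

```python
import math

# Pointwise bound behind the conditional Chernoff variant:
# for every real x, exp(x) <= 1 + x + (x * x / 2) * exp(abs(x)).
for i in range(-500, 501):
    x = i / 50.0  # grid over [-10, 10]
    assert math.exp(x) <= 1 + x + 0.5 * x * x * math.exp(abs(x)) + 1e-12
print("Chernoff-type pointwise bound verified")
```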
For all $p \in \mathbb{R}$:
\[ E_Q\big( e^{p \Delta M_k} \,\big|\, \mathcal{F}_k \big) \le E_Q\bigg( 1 + p \Delta M_k + \frac{1}{2} (p \Delta M_k)^2 e^{|p \Delta M_k|} \,\bigg|\, \mathcal{F}_k \bigg) \le 1 + \frac{p^2}{2} e^{|p|}\, E_Q\big( (\Delta M_k)^2 \,\big|\, \mathcal{F}_k \big) \le \exp\bigg( O\bigg( \frac{p^2 (1-r)^{-2} e^{|p|}}{(k+1)^3} \bigg) \bigg). \]
In the end, by the tower property of conditional expectation, and the fact that $A_{r,n} \ge n$, we get
\[ E_Q\Big( e^{p \sum_{k \ge A_{r,n}} \Delta M_k} \,\Big|\, \mathcal{F}_n \Big) \le \exp\bigg( O\bigg( p^2 e^{|p|} \sum_{k \ge A_{r,n}} (1-r)^{-2} (k+1)^{-3} \bigg) \bigg) \le \exp\big( O( p^2 e^{|p|} A^{-2} ) \big). \]
Therefore,
\[ E_Q\big( e^{p\, \square^{(A,\infty)}_{r,n}} \,\big|\, \mathcal{F}_n \big) \le \exp\big( O( p A^{-1} + p^2 e^{|p|} A^{-2} ) \big) = \exp\big( O_p( A^{-1} ) \big). \qquad \Box \]

The main result of this subsection is the following:

Proposition 5.5. Almost surely, under $Q$, $\omega_{r,n,N}$ and $\rho_{r,n,N}$ converge when $N \to \infty$, respectively to
\[ \omega_{r,n} = \frac{2}{\beta} \sum_{k=n}^{\infty} \frac{|Q_k(r)|^2 - r^{2(k+2)}}{k+1} \quad \text{and} \quad \rho_{r,n} := \sum_{k=n}^{\infty} \bigg( -\log\bigg( \frac{1-|\alpha_k Q_k(r)|^2}{1-|\alpha_k|^2} \bigg) + \frac{2}{\beta} \cdot \frac{1-|Q_k(r)|^2}{k+1} \bigg). \]
Moreover, for all $p \ge 0$, there exists $K > 0$, depending only on $p$ and $\beta$, such that for all $r \in (0,1)$:
\[ E_Q[ e^{p \omega_{r,n}} \,|\, \mathcal{F}_n ] \le K, \qquad E_Q[ e^{p \rho_{r,n}} \,|\, \mathcal{F}_n ] \le K \quad \text{a.s.}, \]
and if $\varepsilon > 0$, and $L$ is a compact subset of the $n$-th power of the open unit disc, then there exists $\eta$, depending on $\beta, \varepsilon, n, L$ and $r$, and tending to zero when $r \to 1^-$ and the other parameters are fixed, such that
\[ Q[ |\rho_{r,n}| > \varepsilon \,|\, \mathcal{F}_n ] \le \eta \]
a.s. on the event where $(\alpha_0, \dots, \alpha_{n-1}) \in L$. In particular, for $n = 0$, we get that $\omega_{r,0}$ and $\rho_{r,0}$ have bounded positive exponential moments, and $\rho_{r,0}$ converges to 0 in probability when $r$ tends to 1 from below.

Proof. The convergence of the series defining $\omega_{r,n}$ is a direct consequence of the convergence of the series
\[ \square_{r,n} := \sum_{k \ge n} \frac{|Q_{k+1}|^2}{k+2}, \]
proven in Proposition 5.4.
Let us now bound the positive exponential moments of $\omega_{r,n}$. Using the index $A_{r,n} = \max( n, \lfloor (1-r)^{-1} \rfloor )$ corresponding to $A = 1$, we get
\[ \omega_{r,n} = \frac{2}{\beta} \sum_{n \le k \le A_{r,n}} \frac{|Q_k|^2 - r^{2(k+2)}}{k+1} + \frac{2}{\beta} \sum_{k = A_{r,n}+1}^{\infty} \frac{|Q_k|^2 - r^{2(k+2)}}{k+1} \le \frac{2}{\beta} \sum_{n \le k \le A_{r,n}} \frac{1 - r^{2(k+2)}}{k+1} + \frac{2}{\beta} \sum_{k = A_{r,n}+1}^{\infty} \frac{|Q_k|^2}{k+1} \le \frac{2}{\beta} \sum_{k=n}^{A_{r,n}} \frac{1 - r^{2(k+2)}}{k+1} + \frac{2}{\beta} \square^{(A=1,\infty)}_{r,n} = O(1) + \frac{2}{\beta} \square^{(A=1,\infty)}_{r,n}. \]
In the last step, we recognized the convergent (hence bounded) Riemann sum:
\[ \sum_{k=n}^{A_{r,n}} \frac{1 - r^{2(k+2)}}{k+1} \underset{r \to 1^-}{\longrightarrow} \int_0^1 \big( 1 - e^{-2t} \big)\, \frac{dt}{t}. \]
As a consequence, it suffices to prove the bound for $\square^{(A=1,\infty)}_{r,n}$ instead of $\omega_{r,n}$, which is already contained in Proposition 5.4.

Let us now consider $\rho_{r,n}$. We have
\[ \rho_{r,n,N} = -\sum_{j=n}^{N-1} \bigg( -\log(1-|\alpha_j|^2) - \frac{2}{\beta(j+1)} \bigg) (1-|Q_j|^2) - \sum_{j=n}^{N-1} \big( \log(1-|\alpha_j Q_j|^2) - |Q_j|^2 \log(1-|\alpha_j|^2) \big) =: E_1 + E_2. \]
We control each of the terms separately. We easily check that $E_1$, as a function of $N$, is a $P$-martingale. It is also a $Q$-martingale, since $(|\alpha_j|)_{j \ge 0}$ has the same law under the two probability measures. Indeed, for any measurable function $F$ from $\mathbb{R}$ to $\mathbb{R}_+$, and any $j \ge 0$, we have
\[ E_Q[ F(|\alpha_j|) \,|\, \mathcal{F}_j ] = E_P\bigg[ F(|\alpha_j|)\, \frac{1-|\alpha_j Q_j|^2}{|1-\alpha_j Q_j|^2} \,\bigg|\, \mathcal{F}_j \bigg] = E_P\bigg[ F(|\alpha_j|)\, E_P\bigg[ \frac{1-|\alpha_j Q_j|^2}{|1-\alpha_j Q_j|^2} \,\bigg|\, \mathcal{F}_j, |\alpha_j| \bigg] \,\bigg|\, \mathcal{F}_j \bigg] = E_P[ F(|\alpha_j|) \,|\, \mathcal{F}_j ], \]
the last equality coming from the fact that for $u = |\alpha_j Q_j| < 1$:
\[ \frac{1}{2\pi} \int_0^{2\pi} \frac{1-u^2}{|1-u e^{i\theta}|^2}\, d\theta = 1. \]
The conditional $L^2$ norm of the martingale is bounded as follows:
\[ E_Q\big( E_1^2 \,\big|\, \mathcal{F}_n \big) = \sum_{j=n}^{N-1} E_Q\big( (1-|Q_j|^2)^2 \,\big|\, \mathcal{F}_n \big)\, \mathrm{Var}\big( \log(1-|\alpha_j|^2) \big) \ll \sum_{j=n}^{N-1} \frac{E_Q\big( (1-|Q_j|^2)^2 \,\big|\, \mathcal{F}_n \big)}{(j+1)^2}, \]
which a.s. converges when $N$ goes to infinity. We deduce the a.s.
convergence of the series corresponding to $E_1$.

Furthermore, by iterating the Szegő recursion, we deduce that for $j \ge n$:
\[ Q_j = \Xi_{n,j}\big( r, Q_n, (\alpha_k)_{n \le k \le j-1}, (\bar{\alpha}_k)_{n \le k \le j-1} \big), \]
where $\Xi_{n,j}$ is a universal rational function, which has modulus 1 if the first two arguments have modulus 1. Moreover, $\Xi_{n,j}$ is well-defined when the first two arguments have modulus at most 1 and the others have modulus strictly less than 1; it is then continuous when these constraints are satisfied, and uniformly continuous if we restrict the $\alpha_k$'s to a compact subset of the open unit disc. We deduce that, for fixed $(b_k)_{n \le k \le j-1}$ of modulus strictly less than 1,
\[ S_{n,j}\big( r, q, (b_k)_{n \le k \le j-1} \big) := \inf_{|Q| \in [q,1],\ (|a_k| = b_k)_{n \le k \le j-1}} \big| \Xi_{n,j}\big( r, Q, (a_k)_{n \le k \le j-1}, (\bar{a}_k)_{n \le k \le j-1} \big) \big| \]
goes to 1 when $r$ and $q$ go to 1 from below. Now, we have
\[ E_Q\big( E_1^2 \,\big|\, \mathcal{F}_n \big) \ll \sum_{j=n}^{N-1} \frac{1}{(j+1)^2}\, E_Q\Big( \big( 1 - S_{n,j}\big( r, q, (|\alpha_k|)_{n \le k \le j-1} \big)^2 \big)^2 \,\Big|\, \mathcal{F}_n \Big) \]
on the event where $|Q_n| \in [q,1]$. Conditionally on $\mathcal{F}_n$, $(|\alpha_k|)_{n \le k \le j-1}$ has the same distribution under $P$ and under $Q$, as proven above. Since $(|\alpha_k|)_{n \le k \le j-1}$ is independent of $\mathcal{F}_n$ under $P$, it is also independent under $Q$, with the same distribution, and then
\[ E_Q\big( E_1^2 \,\big|\, \mathcal{F}_n \big) \ll \sum_{j=n}^{N-1} \frac{1}{(j+1)^2}\, E_P\Big( \big( 1 - S_{n,j}\big( r, q, (|\alpha_k|)_{n \le k \le j-1} \big)^2 \big)^2 \Big) \]
on the event where $|Q_n| \in [q,1]$. Now, $|Q_n|$ is a continuous function of $r, \alpha_0, \dots, \alpha_{n-1}$, and then it is uniformly continuous if we assume that $(\alpha_0, \dots, \alpha_{n-1})$ is in a given compact set $L \subset \mathbb{D}^n$. Hence, under this assumption, $|Q_n| \in [g_{n,L}(r), 1]$, where $g_{n,L}$ is a function tending to 1 when $r \to 1^-$. We deduce
\[ E_Q\big( E_1^2 \,\big|\, \mathcal{F}_n \big) \ll \sum_{j=n}^{\infty} \frac{1}{(j+1)^2}\, E_P\Big( \big( 1 - S_{n,j}\big( r, g_{n,L}(r), (|\alpha_k|)_{n \le k \le j-1} \big)^2 \big)^2 \Big) \]
when $(\alpha_0, \dots, \alpha_{n-1}) \in L$. By dominated convergence, the right-hand side, which depends on $\beta, n, L$ and $r$, converges to 0 when $r \to 1^-$ with the other parameters fixed.
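Two circle averages do the heavy lifting in this section: the Poisson-kernel normalization used just above to identify the law of $|\alpha_j|$ under $P$ and $Q$, and the weighted logarithmic average computed in the proof of Proposition 5.3, whose doubled real part equals $-2\log(1-u^2)$. Both can be verified numerically; the following is a sketch (function names and the quadrature are ours):

```python
import cmath, math

def circle_avg(f, n=100000):
    # Midpoint-rule average of f(theta) over [0, 2*pi].
    return sum(f(2 * math.pi * (i + 0.5) / n) for i in range(n)) / n

for u in (0.2, 0.5, 0.9):
    # Poisson kernel has total mass one: (1/2pi) int (1 - u^2)/|1 - u e^{i th}|^2 dth = 1.
    mass = circle_avg(lambda th: (1 - u * u) / abs(1 - u * cmath.exp(1j * th)) ** 2)
    assert abs(mass - 1.0) < 1e-6
    # Weighted log average: (1 - u^2) times the average of
    # |1 - u e^{i th}|^{-2} * (-2 Re log(1 - u e^{i th})) equals -2 log(1 - u^2).
    v = (1 - u * u) * circle_avg(
        lambda th: -2 * cmath.log(1 - u * cmath.exp(1j * th)).real
        / abs(1 - u * cmath.exp(1j * th)) ** 2)
    assert abs(v + 2 * math.log(1 - u * u)) < 1e-6
print("circle-average identities verified")
```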
The estimate remains true if, in $E_1$, we replace the sum from $n$ to $N-1$ by its limit as $N$ goes to infinity. If we still denote this infinite sum by $E_1$, we get
\[ Q[ |E_1| > \varepsilon \,|\, \mathcal{F}_n ] \le \eta_1, \tag{5.5.1} \]
where
\[ \eta_1 \ll \varepsilon^{-2} \sum_{j=n}^{\infty} \frac{1}{(j+1)^2}\, E_P\Big( \big( 1 - S_{n,j}\big( r, g_{n,L}(r), (|\alpha_k|)_{n \le k \le j-1} \big)^2 \big)^2 \Big) \]
depends on $\beta, \varepsilon, n, L$ and $r$, and tends to 0 when $r$ goes to 1.

In order to estimate $E_2$, we observe that for $a, q \in [0,1)$:
\[ -\log(1-aq) + q \log(1-a) = \sum_{k=1}^{\infty} \frac{a^k ( q^k - q )}{k}, \]
and then this quantity is nonpositive. Moreover, since
\[ q - q^k = \sum_{s=1}^{k-1} ( q^s - q^{s+1} ) \le (k-1)(1-q), \]
the absolute value of the quantity above is at most
\[ \sum_{k=1}^{\infty} \frac{a^k (k-1)(1-q)}{k} \le (1-q) \sum_{k=2}^{\infty} a^k = (1-q)\, \frac{a^2}{1-a}. \]
We deduce that $E_2$ is a sum of nonpositive terms, and if we still denote by $E_2$ its limit when $N$ goes to infinity, we get
\[ |E_2| \le \sum_{j=n}^{\infty} (1-|Q_j|^2)\, \frac{|\alpha_j|^4}{1-|\alpha_j|^2}. \]
If $(\alpha_0, \dots, \alpha_{n-1}) \in L$, we get
\[ |E_2| \le \sum_{j=n}^{\infty} \big( 1 - S_{n,j}\big( r, g_{n,L}(r), (|\alpha_k|)_{n \le k \le j-1} \big)^2 \big)\, \frac{|\alpha_j|^4}{1-|\alpha_j|^2}. \]
We deduce, for $\varepsilon > 0$:
\[ Q[ |E_2| > \varepsilon \,|\, \mathcal{F}_n ] \le Q\bigg[ \max_{n \le j \le p-1} \frac{|\alpha_j|^4}{1-|\alpha_j|^2} > R \,\bigg|\, \mathcal{F}_n \bigg] + \varepsilon^{-1} E_Q\bigg[ R \sum_{j=n}^{p-1} \big( 1 - S_{n,j}\big( r, g_{n,L}(r), (|\alpha_k|)_{n \le k \le j-1} \big)^2 \big) + \sum_{j=p}^{\infty} \big( 1 - S_{n,j}\big( r, g_{n,L}(r), (|\alpha_k|)_{n \le k \le j-1} \big)^2 \big)\, \frac{|\alpha_j|^4}{1-|\alpha_j|^2} \,\bigg|\, \mathcal{F}_n \bigg] \]
for any $p \ge n$ and $R > 0$. The first term is a quantity depending on $\beta, n, R, p$, and tending to zero when $R$ goes to infinity. The second term depends on $\beta, \varepsilon, n, R, p, L, r$; it is finite as soon as $p$ is large enough depending on $\beta$, and under such an assumption, it tends to zero when $r \to 1^-$ by dominated convergence. We deduce that for fixed $\beta, \varepsilon, n, L$, which also allows us to fix $p$, we get
\[ Q[ |E_2| > \varepsilon \,|\, \mathcal{F}_n ] \le \delta_1(R) + \delta_2(R, r), \]
where $\delta_1(R)$ goes to 0 when $R \to \infty$, and $\delta_2(R, r)$ goes to 0 when $R$ is fixed and $r \to 1^-$. Since the left-hand side is in fact independent of $R$, we have
\[ Q[ |E_2| > \varepsilon \,|\, \mathcal{F}_n ] \le \eta_2, \quad \text{where} \quad \eta_2 := \inf_{R > 0} \big( \delta_1(R) + \delta_2(R, r) \big) \tag{5.5.2} \]
may depend on $\beta, \varepsilon, n, L, r$.
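The two elementary facts used for $E_2$ — nonpositivity of $-\log(1-aq) + q\log(1-a)$ and the bound by $(1-q) a^2/(1-a)$ — can be checked numerically over a grid (a sketch):

```python
import math

def g(a, q):
    # -log(1 - a q) + q log(1 - a), shown above to equal sum_k a^k (q^k - q)/k <= 0.
    return -math.log(1 - a * q) + q * math.log(1 - a)

for i in range(1, 99):
    for j in range(0, 101):
        a, q = i / 100.0, j / 100.0
        v = g(a, q)
        assert v <= 1e-12                                    # nonpositivity
        assert abs(v) <= (1 - q) * a * a / (1 - a) + 1e-12   # bound used for |E_2|
print("elementary inequalities for E_2 verified")
```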
For each $R > 0$, we get
\[ \limsup_{r \to 1^-} \eta_2 \le \delta_1(R) + \lim_{r \to 1^-} \delta_2(R, r) = \delta_1(R), \]
and then $\eta_2$ goes to zero when $r \to 1^-$. From (5.5.1) and (5.5.2), we deduce the part of the proposition relative to the convergence in probability of $\rho_{r,n}$.

It remains to bound the positive exponential moments of $\rho_{r,n}$. We notice that since $E_2$ has nonpositive terms, it is enough to bound the positive exponential moments of the limit of $E_1$ when $N$ goes to infinity, still denoted $E_1$. We have, for all $p \ge 0$:
\[ E_Q\bigg[ \exp\bigg( p \bigg( \log(1-|\alpha_j|^2) + \frac{2}{\beta(j+1)} \bigg) (1-|Q_j|^2) \bigg) \,\bigg|\, \mathcal{F}_j \bigg] = E_Q\Big[ \big( 1-|\alpha_j|^2 \big)^{p(1-|Q_j|^2)} \,\Big|\, \mathcal{F}_j \Big]\, e^{\frac{2p}{\beta(j+1)}(1-|Q_j|^2)} = \frac{\beta(j+1)/2}{\beta(j+1)/2 + p(1-|Q_j|^2)}\, e^{\frac{2p}{\beta(j+1)}(1-|Q_j|^2)} = \frac{1}{1 + \frac{2p}{\beta(j+1)}(1-|Q_j|^2)}\, e^{\frac{2p}{\beta(j+1)}(1-|Q_j|^2)}. \]
Now, using the inequality $-\log(1+x) \le -x + x^2/2$, valid for all $x \ge 0$, we deduce:
\[ E_Q\bigg[ \exp\bigg( p \bigg( \log(1-|\alpha_j|^2) + \frac{2}{\beta(j+1)} \bigg) (1-|Q_j|^2) \bigg) \,\bigg|\, \mathcal{F}_j \bigg] \le e^{\frac{2p^2}{\beta^2 (j+1)^2} (1-|Q_j|^2)^2} \le e^{\frac{2p^2}{\beta^2 (j+1)^2}}. \]
Using the tower property of conditional expectation, we deduce $E_Q[ e^{p E_1} \,|\, \mathcal{F}_n ] \le e^{O(p^2)}$, which finishes the proof of the proposition. $\Box$

6. A diffusive limit for $|Q_j(re^{i\theta})|^2$

In the proof of Lemma 4.2, we will require various estimates regarding $\big( |Q_j(re^{i\theta})|^2 \big)_{j \ge n}$, as well as a diffusive limit for fixed $\theta$, when $j$ is large and $r$ close to 1. These $\mathcal{F}$-adapted processes satisfy discrete approximations of SDEs. We will need to know whether their distribution converges to solutions of the corresponding continuous SDEs, in the same way as the simple random walk converges to Brownian motion after suitable scaling. The precise statement is as follows, and its proof is the topic of the current section.

Proposition 6.1. We assume $\beta > 2$.
For $\varepsilon \in (0,1)$, $A > 0$, $r \in (0,1)$, recall that the indices $A_{r,n}$ and $\varepsilon_{r,n}$ are:
\[ A_{r,n} = \max\big( n, \lfloor A(1-r)^{-1} \rfloor \big), \qquad \varepsilon_{r,n} = \max\big( n, \lfloor \varepsilon(1-r)^{-1} \rfloor \big). \]
Moreover, we fix an event $G \in \mathcal{F}_n$, which may depend on $r$, and under which $(\alpha_0, \dots, \alpha_{n-1})$ is in some deterministic compact subset of $\mathbb{D}^n$, independent of $r$. Then, for every $\xi > 0$, we have
\[ \lim_{A \to \infty} \sup_{r \in (0,1)} Q\bigg[ \sum_{j=A_{r,n}}^{\infty} \frac{|Q_j(r)|^2}{j+1} \ge \xi \,\bigg|\, G \bigg] = 0 \tag{6.1.1} \]
and
\[ \lim_{\varepsilon \to 0} \sup_{r \in (0,1)} Q\bigg[ \sum_{j=n}^{\varepsilon_{r,n}-1} \frac{1-|Q_j(r)|^2}{j+1} \ge \xi \,\bigg|\, G \bigg] = 0. \tag{6.1.2} \]
Moreover, consider the process $X^{(r)}_t$ equal to $|Q_{n + t/\log(r^{-2})}(r)|^2$ at time $t$ when $t$ is a multiple of $\log(r^{-2})$, and linearly interpolated for the other values of $t$. Under $Q$ and conditionally on $G$, the law of $X^{(r)}$ tends, for the topology of uniform convergence on compact sets, to the distribution of a continuous solution $X$ of the following stochastic differential equation:
\[ d(1-X_t) = X_t\, dt - \frac{2 (1-X_t)^2}{\beta t}\, dt + \frac{4 (1-X_t) X_t}{\beta t}\, dt + \sqrt{ \frac{4 (1-X_t)^2 X_t}{\beta t} }\, dB_t, \tag{6.1.3} \]
where $B$ is a Brownian motion. Furthermore, we have that almost surely,
\[ \sup_{t \in (0,1]} \frac{1-X_t}{t^{1-\varepsilon'}} < \infty \tag{6.1.4} \]
for all $\varepsilon' \in (0,1)$, and
\[ \int_0^{\infty} \frac{X_t - e^{-t}}{t}\, dt < \infty. \]
Finally, the law of $X$ is uniquely determined by the properties given above.

Strategy of proof. In Subsection 6.1, we provide the proofs of the estimates (6.1.1) and (6.1.2). In Subsection 6.2, we prove that the sequence of laws $\mathcal{L}\big( X^{(r)}_t;\ t \ge 0 \big)$ is tight for $r < 1$, and that any limit when $r \to 1^-$ solves the SDE (6.1.3) for $t > 0$. At that stage, uniqueness will still be required to finish the proof of the convergence in law. In Subsection 6.3, we study the entrance law of any solution to the SDE (6.1.3) satisfying (6.1.4) for $t > 0$, and we deduce uniqueness. $\Box$

6.1. Proofs of Eq. (6.1.1) and (6.1.2). For the first statement, with the notation of Proposition 5.4, it is enough to prove that
\[ E_Q\big( \square^{(A,\infty)}_{r,n} \,\big|\, \mathcal{F}_n \big) = O( A^{-1} ) \tag{6.1.5} \]
almost surely, the implicit constant being independent of $r$.
From the proof of that proposition, we already have:
\[ \square^{(A,\infty)}_{r,n} = O(1/A) - \sum_{k \ge A_{r,n}} \Delta M_k, \]
where the $\Delta M_k$ are martingale differences, bounded by 1. Estimating the increments of the bracket from (5.3.2) yields:
\[ E_Q\big( (\Delta M_k)^2 \,\big|\, \mathcal{F}_k \big) \ll (1-r)^{-2} (k+1)^{-3}. \]
Hence, the bracket of the corresponding martingale is bounded by
\[ \langle M \rangle_\infty \ll \sum_{k \ge A_{r,n}} (1-r)^{-2} (k+1)^{-3} \ll 1/A^2, \]
and the martingale converges in $L^2(\Omega, \mathcal{B}, Q)$, with $E[ ( M_\infty - M_{A_{r,n}} )^2 \,|\, \mathcal{F}_n ] \ll 1/A^2$, which gives the desired estimate.

For the second statement, we will need the following notion. We say that a family of random variables $(X(r))_{r \in (0,1)}$ is tight conditionally on $G$ when almost surely:
\[ \lim_{a \to \infty} \sup_{r \in (0,1)} Q\big( |X(r)| > a \,\big|\, G \big) = 0. \tag{6.1.6} \]
Thanks to the recurrence (5.1.1) and the fact that $Q_0(r) = r$, we have
\[ 1 - |Q_{n+k}|^2 = \sum_{j=0}^{k-1} (1-r^2)\, r^{2j} \prod_{\ell=k+n-j}^{k+n-1} \frac{1-|\alpha_\ell|^2}{|1-\alpha_\ell Q_\ell|^2} + (1-|Q_n|^2)\, r^{2k} \prod_{\ell=n}^{k+n-1} \frac{1-|\alpha_\ell|^2}{|1-\alpha_\ell Q_\ell|^2} \le (1-|Q_n|^2) \prod_{\ell=n}^{k+n-1} \frac{1-|\alpha_\ell|^2}{|1-\alpha_\ell Q_\ell|^2} + (1-r^2) \sum_{j=n+1}^{n+k} \prod_{\ell=j}^{n+k-1} \frac{1-|\alpha_\ell|^2}{|1-\alpha_\ell Q_\ell|^2}. \]
By introducing the two following $(\mathcal{F}, Q)$-martingales:
\[ \mathcal{N}_j := \sum_{\ell=0}^{j-1} \bigg( -\log(1-|\alpha_\ell|^2) - \frac{2}{\beta(\ell+1)} \bigg), \qquad \mathcal{M}_j := \mathcal{M}_j(r) = \sum_{\ell=0}^{j-1} \big( -\log|1-\alpha_\ell Q_\ell|^2 + E_Q[ \log|1-\alpha_\ell Q_\ell|^2 \,|\, \mathcal{F}_\ell ] \big), \]
we can rewrite the previous expression as:
\[ 1 - |Q_{n+k}|^2 \le (1-|Q_n|^2) \exp\bigg( \sum_{\ell=n}^{k+n-1} \log\bigg( \frac{1-|\alpha_\ell|^2}{|1-\alpha_\ell Q_\ell|^2} \bigg) \bigg) + (1-r^2) \sum_{j=n+1}^{n+k} \exp\bigg( \sum_{\ell=j}^{n+k-1} \log\bigg( \frac{1-|\alpha_\ell|^2}{|1-\alpha_\ell Q_\ell|^2} \bigg) \bigg) \]
\[ \le (1-|Q_n|^2) \exp\bigg( -(\mathcal{N}_{k+n} - \mathcal{N}_n) + \mathcal{M}_{k+n} - \mathcal{M}_n + \sum_{\ell=n}^{k+n-1} E_Q\bigg[ \log\bigg( \frac{1-|\alpha_\ell|^2}{|1-\alpha_\ell Q_\ell|^2} \bigg) \,\bigg|\, \mathcal{F}_\ell \bigg] \bigg) + (1-r^2) \sum_{j=n+1}^{n+k} \exp\bigg( -(\mathcal{N}_{k+n} - \mathcal{N}_j) + \mathcal{M}_{k+n} - \mathcal{M}_j + \sum_{\ell=j}^{k+n-1} E_Q\bigg[ \log\bigg( \frac{1-|\alpha_\ell|^2}{|1-\alpha_\ell Q_\ell|^2} \bigg) \,\bigg|\, \mathcal{F}_\ell \bigg] \bigg). \]
In this computation, we have used the fact that the conditional distribution of $|\alpha_j|$ given $\mathcal{F}_j$ is the same under $P$ and under $Q$, which implies
\[ E_Q[ -\log(1-|\alpha_j|^2) \,|\, \mathcal{F}_j ] = E_P[ -\log(1-|\alpha_j|^2) \,|\, \mathcal{F}_j ] = \frac{2}{\beta(j+1)}. \]
Combining these facts with the last estimates of Proposition 5.3, we have a.s., for $n$ large enough depending on $\beta$:
\[ E_Q\bigg[ \log\bigg( \frac{1-|\alpha_j|^2}{|1-\alpha_j Q_j|^2} \bigg) \,\bigg|\, \mathcal{F}_j \bigg] = \frac{4}{\beta(j+1)} |Q_j|^2 - \frac{2}{\beta(j+1)} + O\bigg( \frac{1}{(j+1)^2} \bigg) \le \frac{2}{\beta(j+1)} + O\bigg( \frac{1}{(j+1)^2} \bigg), \]
which becomes, upon summing:
\[ \sum_{\ell=j}^{n+k-1} E_Q\bigg[ \log\bigg( \frac{1-|\alpha_\ell|^2}{|1-\alpha_\ell Q_\ell|^2} \bigg) \,\bigg|\, \mathcal{F}_\ell \bigg] \le O(1) + \frac{2}{\beta} \log\bigg( \frac{n+k}{j+1} \bigg). \]
We deduce a.s.:
\[ 1 - |Q_{n+k}|^2 \ll (1-|Q_n|^2) \exp\big( -(\mathcal{N}_{k+n} - \mathcal{N}_n) + \mathcal{M}_{k+n} - \mathcal{M}_n \big) \bigg( \frac{n+k}{n+1} \bigg)^{2/\beta} + (1-r^2) \sum_{j=n+1}^{n+k} \exp\big( -(\mathcal{N}_{k+n} - \mathcal{N}_j) + \mathcal{M}_{k+n} - \mathcal{M}_j \big) \bigg( \frac{n+k}{j+1} \bigg)^{2/\beta}. \]
Now, we have to control the martingales $\mathcal{N}$ and $\mathcal{M}$. We start with the easier one, $\mathcal{N}$, which is a convergent martingale. We have that
\[ C^{\mathcal{N}}_\omega := \sup_{j \ge n} |\mathcal{N}_j - \mathcal{N}_n| < \infty, \]
since $(\mathcal{N}_j - \mathcal{N}_n)_{j \ge n}$ is almost surely a Cauchy sequence. Furthermore, since $\mathcal{N}$ has independent increments, $C^{\mathcal{N}}_\omega$ is independent of $\mathcal{F}_n$, and its distribution under $Q$ does not depend on $r$. As such, because single random variables are tight:
\[ \lim_{a \to \infty} Q\big( |C^{\mathcal{N}}_\omega| > a \,\big|\, G \big) = 0, \]
and (6.1.6) is satisfied for $C^{\mathcal{N}}_\omega$.

In order to control the contribution of $(\mathcal{M}_j(r))_{j \ge n}$, we crucially use the epsilon of room between $2/\beta$ and 1 in the subcritical regime. To that end, pick $\eta > 0$, depending only on $\beta$, such that $2/\beta + \eta < 1$, and consider the random variable
\[ C^{\mathcal{M},k}_\omega(r) := \sup_{n \le j \le n+k} \bigg[ \mathcal{M}_{n+k}(r) - \mathcal{M}_j(r) - \eta \log\frac{n+k+1}{j+1} \bigg]. \tag{6.1.7} \]
This quantity has a distribution that depends on $r$ and $k$.
Upon using the bound
\[ \exp\big[ -(\mathcal{N}_{k+n} - \mathcal{N}_j) + \mathcal{M}_{k+n}(r) - \mathcal{M}_j(r) \big] = \exp\big[ -(\mathcal{N}_{k+n} - \mathcal{N}_n) + (\mathcal{N}_j - \mathcal{N}_n) \big] \times \exp\bigg[ \mathcal{M}_{k+n}(r) - \mathcal{M}_j(r) - \eta \log\frac{n+k+1}{j+1} \bigg] \bigg( \frac{n+k+1}{j+1} \bigg)^\eta \le e^{2 C^{\mathcal{N}}_\omega + C^{\mathcal{M},k}_\omega(r)} \bigg( \frac{n+k+1}{j+1} \bigg)^\eta, \]
we have:
\[ 1 - |Q_{n+k}|^2 \ll e^{2 C^{\mathcal{N}}_\omega + C^{\mathcal{M},k}_\omega(r)} \bigg[ (1-|Q_n|^2) \bigg( \frac{n+k+1}{n+1} \bigg)^{2/\beta + \eta} + (1-r^2) \sum_{j=n+1}^{n+k} \bigg( \frac{n+k+1}{j+1} \bigg)^{2/\beta + \eta} \bigg]. \]
From the classical comparison between series and integrals, we have:
\[ \forall k \in \mathbb{N}, \quad \sum_{j=0}^{k} \frac{1}{(j+1)^{2/\beta + \eta}} \le \frac{(k+1)^{1 - (2/\beta + \eta)}}{1 - (2/\beta + \eta)}, \]
and from the mean value theorem, we have:
\[ 1 - |Q_n|^2 \le 2(1-r)\, \| Q_n' \|_{L^\infty(\mathbb{D})} \le 2(1-r)\, \| Q_n' \|_{L^\infty(\partial\mathbb{D})}. \]
In this last inequality, we used the fact that $|Q_n| \le 1$ and that $|Q_n'|$ reaches its maximum on the boundary of the disc. Therefore, the previous inequality becomes:
\[ 1 - |Q_{n+k}|^2 \ll e^{2 C^{\mathcal{N}}_\omega + C^{\mathcal{M},k}_\omega(r)} (1-r) \bigg[ \| Q_n' \|_{L^\infty(\partial\mathbb{D})} \bigg( \frac{n+k+1}{n+1} \bigg)^{2/\beta + \eta} + \frac{n+k+1}{1 - (2/\beta + \eta)} \bigg] \ll e^{2 C^{\mathcal{N}}_\omega + C^{\mathcal{M},k}_\omega(r)} (n+k+1)(1-r) \bigg[ \frac{\| Q_n' \|_{L^\infty(\partial\mathbb{D})}}{n+1} + \frac{1}{1 - (2/\beta + \eta)} \bigg]. \]
Moreover, by the Szegő recursion, $Q_n'(z)$ is a rational function of $z$, of the Verblunsky coefficients of index between 0 and $n-1$, and of their conjugates: it is then continuous in $z, \alpha_0, \dots, \alpha_{n-1}$, and bounded if we restrict to $|z| = 1$ and $(\alpha_0, \dots, \alpha_{n-1}) \in L$ for some compact set $L \subset \mathbb{D}^n$. Hence, $\| Q_n' \|_{L^\infty(\partial\mathbb{D})}$ is uniformly bounded, independently of $r$, on the event $G$. By absorbing all the constants into a single one, we deduce that there exists $C_\omega > 0$, independent of $r$, which satisfies (6.1.6) and such that:
\[ 1 - |Q_{n+k}|^2 \le C_\omega\, e^{C^{\mathcal{M},k}_\omega(r)}\, (n+k+1)(1-r). \]
For the second estimate, we are done upon showing that for $\varepsilon' > 0$, the random variables
\[ S(r) := \sup_{k \le 1/\log(1/r)} \big( C^{\mathcal{M},k}_\omega(r) + \varepsilon' \log[ (n+k+1)(1-r) ] \big), \qquad r \in (0,1), \tag{6.1.8} \]
form a tight family, under $Q$ and conditionally on $G$.
Indeed, by combining (6.1.8) with the previous inequality, since
\[ \varepsilon_{r,n} = \max\bigg( n, \bigg\lfloor \frac{\varepsilon}{1-r} \bigg\rfloor \bigg) \le n + \frac{1}{\log(1/r)} + O(1), \]
we obtain, by taking $\varepsilon' = 1/2$:
\[ \sum_{j=n}^{\varepsilon_{r,n}-1} \frac{1-|Q_j(r)|^2}{j+1} \le O(1-r) + \sum_{j=n}^{\min( \varepsilon_{r,n}-1,\ n + 1/\log(1/r) )} \frac{1-|Q_j(r)|^2}{j+1} \le O(1-r) + C_\omega e^{S(r)} \sum_{j=n}^{\varepsilon_{r,n}-1} \frac{1}{j+1} \big( (j+1)(1-r) \big)^{1/2} \ll O(1-r) + C_\omega e^{S(r)} \sqrt{\varepsilon}. \]
Now, the sum we want to estimate is non-empty only if $\varepsilon_{r,n} \ge n+1$, which implies $\varepsilon/(1-r) \ge 1$, i.e. $1-r \le \varepsilon$. We deduce
\[ \sum_{j=n}^{\varepsilon_{r,n}-1} \frac{1-|Q_j(r)|^2}{j+1} \ll \varepsilon + C_\omega e^{S(r)} \sqrt{\varepsilon}, \]
and then
\[ \sup_{r \in (0,1)} Q\bigg[ \sum_{j=n}^{\varepsilon_{r,n}-1} \frac{1-|Q_j(r)|^2}{j+1} \ge \xi \,\bigg|\, G \bigg] \le \sup_{r \in (0,1)} Q\big[ \big( 1 + C_\omega e^{S(r)} \big) \gg \xi \varepsilon^{-1/2} \,\big|\, G \big], \]
which goes to zero with $\varepsilon$ by the tightness of $(S(r))_{r \in (0,1)}$. In order to be truly done with the proof of (6.1.2), it remains to prove the tightness of the family (6.1.8).

The proof of this tightness is rather technical, but essentially boils down to Doob's martingale inequality, in order to control suprema, and to a dyadic decomposition argument. First, let us start with controlling suprema. Fix $\lambda \in [-1,1]$ and $\ell \ge n$:
\[ E_Q\big( e^{\lambda(\mathcal{M}_{\ell+1} - \mathcal{M}_\ell)} \,\big|\, \mathcal{F}_\ell \big) = E\bigg( \frac{1-|\alpha_\ell Q_\ell|^2}{|1-\alpha_\ell Q_\ell|^{2(1+\lambda)}} \,\bigg|\, \mathcal{F}_\ell \bigg)\, e^{\lambda E_Q[ \log|1-\alpha_\ell Q_\ell|^2 \,|\, \mathcal{F}_\ell ]}. \]
By the estimates in Lemma 5.2 and Proposition 5.3, we have for $\ell$ large enough depending on $\beta$:
\[ E_Q\big( e^{\lambda(\mathcal{M}_{\ell+1} - \mathcal{M}_\ell)} \,\big|\, \mathcal{F}_\ell \big) = E\bigg( (1-|\alpha_\ell Q_\ell|^2) \bigg( 1 + (1+\lambda)^2 |\alpha_\ell Q_\ell|^2 + O\bigg( \frac{|\alpha_\ell|^4}{(1-|\alpha_\ell|)^4} \bigg) \bigg) \,\bigg|\, \mathcal{F}_\ell \bigg) \times e^{-\frac{4\lambda}{\beta(\ell+1)} |Q_\ell|^2 + O((\ell+1)^{-2})} \]
\[ = \bigg( 1 + \big( (1+\lambda)^2 - 1 \big) \frac{2 |Q_\ell|^2}{\beta(\ell+1)} + O\big( (\ell+1)^{-2} \big) \bigg)\, e^{-\frac{4\lambda}{\beta(\ell+1)} |Q_\ell|^2 + O((\ell+1)^{-2})} = \exp\bigg( \frac{2\lambda^2 |Q_\ell|^2}{\beta(\ell+1)} + O\big( (\ell+1)^{-2} \big) \bigg). \]
Therefore, there exists a constant $c > 0$ such that
\[ E^\lambda_{j,j'} := \exp\bigg( \lambda (\mathcal{M}_j - \mathcal{M}_{j'}) - \frac{2\lambda^2}{\beta} \sum_{\ell=j'}^{j-1} \frac{1}{\ell+1} - c \sum_{\ell=j'}^{j-1} \frac{1}{(\ell+1)^2} \bigg) \]
is a positive $(\mathcal{F}, Q)$-supermartingale in $j \ge j'$, starting at 1, for $j'$ large enough depending on $\beta$. We deduce that the probability that this supermartingale ever reaches a level $M > 0$ is at most $1/M$. Applying this for $\lambda \in (0,1]$ and for $-\lambda$, we deduce that for $a$ large enough depending on $\beta$, and $b \ge a \ge n$ (recall that $G$ is $\mathcal{F}_n$-measurable):
\[ Q\bigg( \sup_{a \le j \le b} |\mathcal{M}_j - \mathcal{M}_a| \ge x \,\bigg|\, G \bigg) \ll e^{-\lambda x + \frac{2\lambda^2}{\beta} \log\left( \frac{b+1}{a+1} \right)}. \]
Hence, for $\lambda \in (0,1]$ again:
\[ Q\bigg( \sup_{a \le j \le b} |\mathcal{M}_j - \mathcal{M}_a| \ge \bigg( \log\frac{b+1}{a+1} \bigg)^{1/2} x \,\bigg|\, G \bigg) \ll e^{-\lambda x + \frac{2\lambda^2}{\beta}}. \]
For all integers $p \ge 0$, let us define
\[ S_p(r) := \sum_{m=0}^{\infty} \max\big( 0,\ \Sigma_m - 2^{0.6 m} \big), \quad \text{where} \quad \Sigma_m := \sup_{k \in [2^p - 1,\, 2^{p+1} - 1]}\ \sup_{\substack{n \le j \le n+k \\ \log\frac{n+k+1}{j+1} \le 2^m}} |\mathcal{M}_{n+k} - \mathcal{M}_j|. \]
The double supremum $\Sigma_m$ is bounded by the supremum of $|\mathcal{M}_j - \mathcal{M}_{j'}|$, where $j$ and $j'$ run over an interval $[a,b]$ such that $b = n + 2^{p+1} - 1$ and $\log((n+2^p)/(a+1)) \le 2^m$. Now:
\[ \sup_{a \le j, j' \le b} |\mathcal{M}_j - \mathcal{M}_{j'}| \le 2 \sup_{a \le j \le b} |\mathcal{M}_j - \mathcal{M}_a|. \]
Because of the above estimate, this quantity has a $q$-th moment (conditionally on $G$) dominated by $2^{qm/2}$ for fixed $q \ge 0$, and $a$ large enough depending on $\beta$. This last constraint can in fact be dropped, since the individual increments of $\mathcal{M}$ after $n$ all have bounded conditional moments given $G$ (they are dominated by the moments of $\log(1-|\alpha_j|^2)$). Taking for example $q = 7$, the expectation of $\Sigma_m^7$ is dominated by $2^{3.5 m}$, and then:
\[ E_Q[ \max(0, \Sigma_m - 2^{0.6 m}) \,|\, G ] \le E[ \Sigma_m \mathbf{1}_{\Sigma_m \ge 2^{0.6 m}} \,|\, G ] \le 2^{-3.6 m}\, E[ \Sigma_m^7 \,|\, G ] \ll 2^{-3.6 m} \cdot 2^{3.5 m} = 2^{-0.1 m}, \]
which implies that $E_Q[ S_p(r) \,|\, G ]$ is bounded by a quantity depending only on $\beta$. Now, for $k \in [2^p - 1, 2^{p+1} - 1]$ and $n \le j \le n+k$, we can consider, in $S_p(r)$, the term of index $m \ge 0$ such that $\log(3/2) + \log((n+k+1)/(j+1)) \in [2^{m-1}, 2^m)$, and we get
\[ |\mathcal{M}_{n+k} - \mathcal{M}_j| \ll 2^{0.6 m} + S_p(r) \ll 1 + \log^{0.6}\big( (n+k+1)/(j+1) \big) + S_p(r). \]
Then, recalling the expression (6.1.7), we get for $k \in [2^{p-1}, 2^{p+1}-1]$:
\[
C^{\mathcal{M},k}_\omega(r) \ll S_p(r) + \sup_{n \le j \le n+k}\Big(\log^{0.6}_+\frac{n+k+1}{j+1} - \eta\log\frac{n+k+1}{j+1}\Big) \ll S_p(r) + 1.
\]
Now, let us define $p_r$ by $1/\log(1/r) \in [2^{p_r-1}, 2^{p_r+1}-1]$; the quantity $S(r)$ from (6.1.8) is then controlled by $\widetilde{S}(r)$, defined as
\[
\widetilde{S}(r) := \max_{p \le p_r}\big(S_p(r)^{1/2} - \varepsilon''(p_r - p)\big)
\]
for $\varepsilon'' = \varepsilon'\log 2$. We have, for all $x > 1$, using a union bound and Markov's inequality:
\[
Q[\widetilde{S}(r) \ge x \,|\,\mathcal{G}] \le \sum_{p \le p_r} Q\big[S_p(r)^{1/2} \ge x + \varepsilon''(p_r - p)\,\big|\,\mathcal{G}\big]
\le \sum_{p \le p_r} \frac{E_Q[S_p(r)\,|\,\mathcal{G}]}{\big(x + \varepsilon''(p_r - p)\big)^2} \ll_{\varepsilon''} x^{-1},
\]
which is finite and tends to zero as $x \to \infty$, since $E_Q[S_p(r)\,|\,\mathcal{G}]$ is bounded independently of $p$ and $\mathcal{G}$. This provides the tightness of (6.1.8).

6.2. Weak convergence to a solution of the SDE. We start this subsection with a general theorem on the convergence of discrete stochastic processes towards the solution of an SDE. Then, we apply this theorem to the setting of Proposition 6.1.

6.2.1. A general theorem for convergence of stochastic processes.

Proposition 6.2. Let $(\varepsilon_n)_{n \ge 1}$ be a positive sequence converging to zero.
Let $(X^{(n)})_{n \ge 1}$ be a family of stochastic processes, defined on intervals $(I_n)_{n \ge 1}$ containing a fixed compact interval $I \subset \mathbb{R}_+$ and whose endpoints are multiples of $\varepsilon_n$, $X^{(n)}$ being continuous and piecewise linear on the intervals of the form $[k\varepsilon_n, (k+1)\varepsilon_n]$, uniformly bounded, and satisfying the following equation:
\[
X^{(n)}_{(k+1)\varepsilon_n} - X^{(n)}_{k\varepsilon_n} = b_n\big(k\varepsilon_n, X^{(n)}_{k\varepsilon_n}\big)\,\varepsilon_n + \sigma_n\big(k\varepsilon_n, X^{(n)}_{k\varepsilon_n}\big)\,\sqrt{\varepsilon_n}\, Y^{(n)}_k,
\]
where $b_n : I_n \times \mathbb{R} \to \mathbb{R}$ and $\sigma_n : I_n \times \mathbb{R} \to \mathbb{R}_+$ are given functions. We assume that $b_n$ and $\sigma_n$ converge uniformly on $\mathbb{R} \times I$ to continuous and bounded functions $b$ and $\sigma$ when $n$ goes to infinity, and that
\[
E\big[Y^{(n)}_k \,\big|\, (X^{(n)}_{j\varepsilon_n})_{j \le k}\big] = 0, \qquad E\big[(Y^{(n)}_k)^2 \,\big|\, (X^{(n)}_{j\varepsilon_n})_{j \le k}\big] = 1,
\]
\[
E\big[(X^{(n)}_{(k+1)\varepsilon_n} - X^{(n)}_{k\varepsilon_n})^4 \,\big|\, (X^{(n)}_{j\varepsilon_n})_{j \le k}\big] = O(\varepsilon_n^2).
\]
Moreover, we suppose that $b$ and $\sigma$ satisfy the estimates
\[
|b(t,x) - b(t,y)| \ll |x-y|, \qquad |\sigma(t,x) - \sigma(t,y)| \ll \sqrt{|x-y|}.
\]
Then the family of the laws of $X^{(n)}$ restricted to $I$, for $n \ge 1$, is tight, and any subsequential limit has the law of a solution of the SDE
\[
dX_t = b(t, X_t)\,dt + \sigma(t, X_t)\,dB_t,
\]
$B$ being a Brownian motion.

Proof. For the tightness, by the classical Kolmogorov criterion, it is enough to show that
\[
E\big[(X^{(n)}_t - X^{(n)}_s)^4\big] = O\big((t-s)^2\big)
\]
for $|t-s|$ smaller than some absolute constant. Since we assume a linear interpolation, we can suppose $s < t$, $s = k\varepsilon_n$, $t = m\varepsilon_n$ for $k$ and $m$ integers. For $m \ge k$, we define $\Delta_{k,m} = X^{(n)}_{m\varepsilon_n} - X^{(n)}_{k\varepsilon_n}$ and we expand:
\begin{align*}
E[\Delta_{k,m+1}^4] = E[\Delta_{k,m}^4] &+ 4E\big[\Delta_{k,m}^3\, E[\Delta_{m,m+1}\,|\,\mathcal{H}_m]\big] + 6E\big[\Delta_{k,m}^2\, E[\Delta_{m,m+1}^2\,|\,\mathcal{H}_m]\big]\\
&+ 4E\big[\Delta_{k,m}\, E[\Delta_{m,m+1}^3\,|\,\mathcal{H}_m]\big] + E[\Delta_{m,m+1}^4],
\end{align*}
where $\mathcal{H}_m$ is the $\sigma$-algebra generated by the $X^{(n)}_{k\varepsilon_n}$ for $k \le m$. We have
\[
E[\Delta_{m,m+1}\,|\,\mathcal{H}_m] = b_n\big(m\varepsilon_n, X^{(n)}_{m\varepsilon_n}\big)\,\varepsilon_n = O(\varepsilon_n),
\]
since $b_n$ converges uniformly to $b$ and $b$ is bounded.
Similarly,
\[
E[\Delta_{m,m+1}^2\,|\,\mathcal{H}_m] = b_n^2\big(m\varepsilon_n, X^{(n)}_{m\varepsilon_n}\big)\,\varepsilon_n^2 + \sigma_n^2\big(m\varepsilon_n, X^{(n)}_{m\varepsilon_n}\big)\,\varepsilon_n = O(\varepsilon_n), \qquad
E[\Delta_{m,m+1}^4\,|\,\mathcal{H}_m] = O(\varepsilon_n^2)
\]
by assumption, and, by the Cauchy–Schwarz inequality,
\[
E\big[|\Delta_{m,m+1}|^3 \,\big|\, \mathcal{H}_m\big] = O(\varepsilon_n^{3/2}).
\]
Using Hölder's inequality, we deduce that if $E_m = E[\Delta_{k,m}^4]$, then
\[
E_{m+1} - E_m \ll \big(E_m^{3/4} + E_m^{1/2}\big)\varepsilon_n + E_m^{1/4}\varepsilon_n^{3/2} + \varepsilon_n^2.
\]
As soon as $E_m \le 1$,
\[
E_{m+1} - E_m \ll E_m^{1/2}\varepsilon_n + \varepsilon_n^2,
\]
and then, if $E_{m+1} \ge E_m \ge \varepsilon_n^2$,
\[
E_{m+1}^{1/2} - E_m^{1/2} \le \frac{E_{m+1} - E_m}{E_m^{1/2}} \ll \varepsilon_n + \varepsilon_n^2 E_m^{-1/2} \ll \varepsilon_n,
\]
which remains true if $E_{m+1} \ge E_m$ and $E_m \le \varepsilon_n^2$, since in this case
\[
E_{m+1} \le E_m + O\big(E_m^{1/2}\varepsilon_n + \varepsilon_n^2\big) \ll \varepsilon_n^2.
\]
By induction, we deduce $E_m^{1/2} \ll (m-k)\varepsilon_n$ as soon as $(m-k)\varepsilon_n$ is smaller than some absolute constant. This is enough for tightness.

Now, let $X$ be any limit in law of a subsequence of $(X^{(n)})_{n \ge 1}$. In order to prove that the law of $X$ is necessarily the unique weak solution of the above SDE, we prove that $X$ solves a well-posed martingale problem. Let $f$ be a smooth function with compact support. We have, by Taylor's formula,
\[
f(X^{(n)}_{(k+1)\varepsilon_n}) - f(X^{(n)}_{k\varepsilon_n}) = f'(X^{(n)}_{k\varepsilon_n})\,\Delta_{k,k+1} + \tfrac12 f''(X^{(n)}_{k\varepsilon_n})\,\Delta_{k,k+1}^2 + O_f\big(|\Delta_{k,k+1}|^3\big),
\]
where the subscript $f$ means that the implicit constant may depend on the function $f$. Since
\[
E[\Delta_{k,k+1}\,|\,\mathcal{H}_k] = b_n\big(k\varepsilon_n, X^{(n)}_{k\varepsilon_n}\big)\,\varepsilon_n, \qquad
E[\Delta_{k,k+1}^2\,|\,\mathcal{H}_k] = \sigma_n^2\big(k\varepsilon_n, X^{(n)}_{k\varepsilon_n}\big)\,\varepsilon_n + O(\varepsilon_n^2),
\]
\[
E\big[|\Delta_{k,k+1}|^3\,\big|\,\mathcal{H}_k\big] = O(\varepsilon_n^{3/2}),
\]
we get
\[
E\Big[f(X^{(n)}_{(k+1)\varepsilon_n}) - f(X^{(n)}_{k\varepsilon_n}) - \varepsilon_n\Big(f'(X^{(n)}_{k\varepsilon_n})\,b_n\big(k\varepsilon_n, X^{(n)}_{k\varepsilon_n}\big) + \tfrac12 f''(X^{(n)}_{k\varepsilon_n})\,\sigma_n^2\big(k\varepsilon_n, X^{(n)}_{k\varepsilon_n}\big)\Big) \,\Big|\, \mathcal{H}_k\Big] = O_f\big(\varepsilon_n^{3/2}\big).
\]
Summing over consecutive values of $k$ and conditioning, we deduce that for $s < t$ multiples of $\varepsilon_n$ in $I$,
\[
E\Big[f(X^{(n)}_t) - f(X^{(n)}_s) - \int_s^t \Big(f'(X^{(n)}_{\lfloor u\rfloor_{\varepsilon_n}})\, b_n\big(\lfloor u\rfloor_{\varepsilon_n}, X^{(n)}_{\lfloor u\rfloor_{\varepsilon_n}}\big) + \tfrac12 f''(X^{(n)}_{\lfloor u\rfloor_{\varepsilon_n}})\, \sigma_n^2\big(\lfloor u\rfloor_{\varepsilon_n}, X^{(n)}_{\lfloor u\rfloor_{\varepsilon_n}}\big)\Big)\,du \,\Big|\, (X^{(n)}_v)_{v\le s}\Big]
\]
is dominated by $\varepsilon_n^{1/2}$ (with a constant depending on $f$), $\lfloor u\rfloor_{\varepsilon_n}$ denoting the largest multiple of $\varepsilon_n$ which is smaller than or equal to $u$. Replacing $b_n$ by $b$ and $\sigma_n$ by $\sigma$ changes the integral by at most
\[
|I|\,\big(\|f'\|_\infty \|b_n - b\|_\infty + \tfrac12\|f''\|_\infty \|\sigma_n^2 - \sigma^2\|_\infty\big) \xrightarrow[n\to\infty]{} 0,
\]
$|I|$ denoting the length of $I$. After that, replacing $\lfloor u\rfloor_{\varepsilon_n}$ by $u$ changes the integrand by at most $w_A(f'b + \tfrac12 f''\sigma^2, \varepsilon_n + \eta_n)$, defined as
\[
\sup_{\substack{x,y \in [-A,A],\; u,v\in I\\ |x-y| + |u-v| \le \varepsilon_n + \eta_n}} \Big|\Big(f'(x)b(u,x) + \tfrac12 f''(x)\sigma^2(u,x)\Big) - \Big(f'(y)b(v,y) + \tfrac12 f''(y)\sigma^2(v,y)\Big)\Big|,
\]
as soon as $X^{(n)}_{\lfloor u\rfloor_{\varepsilon_n}}$ and $X^{(n)}_u$ are in $[-A,A]$ and their difference is at most $\eta_n$. Since $X^{(n)}$ is uniformly bounded, we can choose $A$ in such a way that the integral is changed by at most
\[
|I|\, w_A(f'b + \tfrac12 f''\sigma^2, \varepsilon_n + \eta_n) + 2\|f'b + \tfrac12 f''\sigma^2\|_\infty \int_s^t du\; \mathbb{1}_{|X^{(n)}_{\lfloor u\rfloor_{\varepsilon_n}} - X^{(n)}_u| \ge \eta_n}.
\]
The variation of the last conditional expectation is then at most
\[
|I|\, w_A(f'b + \tfrac12 f''\sigma^2, \varepsilon_n + \eta_n) + 2|I|\,\|f'b + \tfrac12 f''\sigma^2\|_\infty\, \eta_n^{-2} \sup_{u \in [s,t]} E\big[(X^{(n)}_{\lfloor u\rfloor_{\varepsilon_n}} - X^{(n)}_u)^2 \,\big|\, (X^{(n)}_v)_{v \le s}\big].
\]
Now, we have, with the previous notation, $E[\Delta_{k,k+1}^2\,|\,\mathcal{H}_k] = O(\varepsilon_n)$, and then the last conditional expectation is dominated by $\varepsilon_n$. The variation of the conditional expectation of the integral involving $f$ is then dominated by
\[
w_A(f'b + \tfrac12 f''\sigma^2, \varepsilon_n + \eta_n) + \|f'b + \tfrac12 f''\sigma^2\|_\infty\, \eta_n^{-2}\,\varepsilon_n.
\]
By uniform continuity, the first term goes to zero when $n$ goes to infinity, provided that $\eta_n$ goes to zero. We deduce, by taking $\eta_n = \varepsilon_n^{1/3}$, that
\[
E\Big[f(X^{(n)}_t) - f(X^{(n)}_s) - \int_s^t \Big(f'(X^{(n)}_u)\,b(u, X^{(n)}_u) + \tfrac12 f''(X^{(n)}_u)\,\sigma^2(u, X^{(n)}_u)\Big)\,du \,\Big|\, (X^{(n)}_v)_{v\le s}\Big]
\]
is bounded by a deterministic quantity $\delta_n$ which goes to zero when we let $n \to \infty$. We then get, for all bounded measurable functionals $G$,
\[
\Big| E\Big[\Big(f(X^{(n)}_t) - f(X^{(n)}_s) - \int_s^t \big(f'(X^{(n)}_u)\,b(u, X^{(n)}_u) + \tfrac12 f''(X^{(n)}_u)\,\sigma^2(u, X^{(n)}_u)\big)\,du\Big)\, G\big((X^{(n)}_v)_{v\le s}\big)\Big]\Big| \le \|G\|_\infty\,\delta_n
\]
if $s < t$ are multiples of $\varepsilon_n$. If, in the big parenthesis, we replace $t$ by $t' \in [t, t+\varepsilon_n]$ and $s$ by $s' \in [s, s+\varepsilon_n]$, we change the corresponding quantity by at most
\[
\|f'\|_\infty\big(|X^{(n)}_{t'} - X^{(n)}_t| + |X^{(n)}_{s'} - X^{(n)}_s|\big) + 2\|f'b + \tfrac12 f''\sigma^2\|_\infty\,\varepsilon_n,
\]
which shows that the estimate just above still holds if we replace $s$ by $s'$ and $t$ by $t'$, with a possibly different $\delta_n$ satisfying the same properties. We can then write, after a change of notation,
\[
\Big| E\Big[\Big(f(X^{(n)}_t) - f(X^{(n)}_s) - \int_s^t \big(f'(X^{(n)}_u)\,b(u, X^{(n)}_u) + \tfrac12 f''(X^{(n)}_u)\,\sigma^2(u, X^{(n)}_u)\big)\,du\Big)\, G\big((X^{(n)}_v)_{v\le \lfloor s\rfloor_{\varepsilon_n}}\big)\Big]\Big| \le \|G\|_\infty\,\delta_n
\]
for all $s < t$ in any interval $[k\varepsilon_n, m\varepsilon_n]$ contained in $I$. Hence, for $s' < s < t$ fixed in the interior of $I$, we have for $n$ large enough and all bounded measurable functionals $G$:
\[
\Big| E\Big[\Big(f(X^{(n)}_t) - f(X^{(n)}_s) - \int_s^t \big(f'(X^{(n)}_u)\,b(u, X^{(n)}_u) + \tfrac12 f''(X^{(n)}_u)\,\sigma^2(u, X^{(n)}_u)\big)\,du\Big)\, G\big((X^{(n)}_v)_{v\le s'}\big)\Big]\Big| \le \|G\|_\infty\,\delta_n.
\]
If $G$ is a continuous, bounded functional from $C(I, \mathbb{R})$ to $\mathbb{R}$, the quantity inside the expectation is a continuous and bounded function of the trajectory of $X^{(n)}$, since this process is uniformly bounded by some constant $A$ and $f'$, $f''$, $\sigma$ and $b$ are uniformly continuous on $[-A,A]$, $[-A,A]\times I$ and $[-A,A]\times I$ respectively. We deduce that if the law of $X$ is a limit point of the family of laws of $X^{(n)}$ restricted to $I$, then
\[
E\Big[\Big(f(X_t) - f(X_s) - \int_s^t \big(f'(X_u)\,b(u, X_u) + \tfrac12 f''(X_u)\,\sigma^2(u, X_u)\big)\,du\Big)\, G\big((X_v)_{v\le s'}\big)\Big] = 0
\]
for all $s' < s < t$ in the interior of $I$. Taking limits and using dominated convergence, we can let $s = s'$ and allow $s'$ and $t$ to be at the boundary of $I$. Thus, for all smooth functions $f$ with compact support, all bounded continuous functionals $G$, and all $s < t$ in $I$, we have
\[
E\Big[\Big(f(X_t) - f(X_s) - \int_s^t \big(f'(X_u)\,b(u, X_u) + \tfrac12 f''(X_u)\,\sigma^2(u, X_u)\big)\,du\Big)\, G\big((X_v)_{v\le s}\big)\Big] = 0.
\]
Now, let $t_0$ be the left endpoint of the interval $I$, and let us fix $x_0$. In order to invoke the machinery of the martingale problem, we will need a fixed initial condition, as the literature is stated in that form. Since $X$ takes values in the space of continuous functions, which is a separable complete metric space, the regular conditional probability $P_{t_0, x_0} = P(\,\cdot\, | X_{t_0} = x_0)$ does exist (see [Dur19]).
The above equation says that, for all smooth $f$ with compact support,
\[
M^f_t := f(X_t) - f(X_{t_0}) - \int_{t_0}^t \big(f'(X_u)\,b(u, X_u) + \tfrac12 f''(X_u)\,\sigma^2(u, X_u)\big)\,du
\]
is a $(P, \mathcal{F}^X)$-martingale, where $\mathcal{F}^X$ denotes the natural filtration of $X$. We then get, for $t_0 \le s \le t$ and for a bounded continuous functional $G$,
\begin{align*}
E_{P_{t_0, X_{t_0}}}\big[M^f_t\, G((X_v)_{v\le s})\big]
&= E_P\big[M^f_t\, G((X_v)_{v\le s}) \,\big|\, X_{t_0}\big]
= E_P\big[E_P[M^f_t\, G((X_v)_{t_0\le v\le s}) \,|\, \mathcal{F}^X_s] \,\big|\, X_{t_0}\big]\\
&= E_P\big[M^f_s\, G((X_v)_{t_0\le v\le s}) \,\big|\, X_{t_0}\big]
= E_{P_{t_0, X_{t_0}}}\big[M^f_s\, G((X_v)_{v\le s})\big]
\end{align*}
almost surely. Hence, for $P_{X_{t_0}}(dx)$-almost every $x$, we have
\[
E_{P_{t_0,x}}\big[M^f_t\, \widetilde{G}\big((X_{sv})_{v\le 1}\big)\big] = E_{P_{t_0,x}}\big[M^f_s\, \widetilde{G}\big((X_{sv})_{v\le 1}\big)\big]
\]
for $t_0 \le s \le t$, $\widetilde{G}$ a bounded continuous functional on $C([0,1], \mathbb{R})$ and $f$ smooth with compact support, all these elements being restricted to arbitrary countable sets. Replacing $s$ and $t$ by $s'$ and $t'$ for $s' > s$, $t' > t$ in the chosen countable set, supposed to be dense, and letting $s' \to s$ and $t' \to t$, we can drop the restriction on $s$ and $t$ by dominated convergence. Moreover, since, for two smooth functions $f$ and $g$ with compact support,
\[
|M^f_t - M^g_t| \ll_{b,\sigma,I} (1+t)\big(\|f-g\|_\infty + \|f'-g'\|_\infty + \|f''-g''\|_\infty\big),
\]
we can drop the restriction on $f$ after considering a dense subset of smooth functions with respect to the $C^2$ norm. We then have, for $P_{X_{t_0}}(dx)$-almost every $x$,
\[
E_{P_{t_0,x}}\big[(M^f_t - M^f_s)\, H(X_{sv_1}, \dots, X_{sv_r})\big] = 0
\]
for all smooth $f$ with compact support, all $t \ge s \ge t_0$, all $v_1, \dots, v_r \in \mathbb{Q}\cap[0,1]$ and all polynomials $H$ with rational coefficients. By dominated convergence, one can drop the assumption $v_1, \dots, v_r \in \mathbb{Q}$ and only assume that $H$ is a continuous function. Using dominated convergence again, the equality remains true when $H$ is the indicator of a product of intervals. By the monotone class theorem, we easily deduce that for $P_{X_{t_0}}(dx)$-almost every $x$ and all smooth $f$ with compact support, $M^f$ is a $(P_{t_0,x}, \mathcal{F}^X)$-martingale, i.e.
the law $\mathcal{L}(X_t, t \ge t_0 \,|\, X_{t_0} = x)$ is a solution of the martingale problem associated to the SDE, with initial condition $x$. The regularity assumptions on the coefficients $b$ and $\sigma$ allow us to apply the result of Yamada and Watanabe given in [SV07, Theorem 8.2.1]. It says that the SDE satisfies Itô uniqueness and that the martingale problem is well-posed for deterministic initial conditions. This means that for $P_{X_{t_0}}(dx)$-almost every $x$, the law $\mathcal{L}(X_t, t \ge t_0 \,|\, X_{t_0} = x)$ is uniquely determined and coincides with the law of the solution of the SDE with initial condition $x$.

In the end, as required, the law of $X$ is uniquely determined by its initial distribution at the left endpoint of $I$, and $X$ has the same distribution as the solution of the SDE with the same initial distribution. $\square$

6.2.2. An application of the previous result. We now apply the result of the previous subsection to the particular case we are interested in. Recall that the goal is to prove that $(X^{(r)}; r < 1)$ is tight and that any limit point solves the SDE (6.1.3). We start by observing that, thanks to Proposition 5.3, there exists an $(\mathcal{F}, Q)$-martingale $\mathcal{N}$ with normalized bracket such that, writing $Q_j := Q_j(r)$,
\begin{align*}
1 - |Q_{j+1}|^2 = 1 - r^2 &+ r^2\big(1 - |Q_j|^2\big)\Big(1 - \frac{2(1-|Q_j|^2)}{\beta(j+1)} + \frac{4|Q_j|^2}{\beta(j+1)} + O\Big(\frac{1}{(j+1)^2}\Big)\Big)\\
&+ r^2\big(1 - |Q_j|^2\big)\sqrt{\frac{4|Q_j|^2}{\beta(j+1)} + O\Big(\frac{1}{(j+1)^2}\Big)}\;\Delta\mathcal{N}_j. \tag{6.2.1}
\end{align*}
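The recursion (6.2.1) has exactly the discrete form treated by Proposition 6.2: a drift of order $\varepsilon_n$ plus a noise of order $\sqrt{\varepsilon_n}$ with centered, unit-variance increments. As a purely illustrative sketch (not the paper's construction — the function `euler_scheme` and the toy Ornstein–Uhlenbeck coefficients are our own choices), such a scheme can be simulated and checked against a known SDE invariant:

```python
import numpy as np

def euler_scheme(b, sigma, x0, t0, t1, eps, rng):
    """One path of the discrete scheme of Proposition 6.2:
    X_{(k+1)eps} = X_{k eps} + b(t, X)*eps + sigma(t, X)*sqrt(eps)*Y_k,
    with Y_k centered and of unit variance (here Gaussian)."""
    ts = np.arange(t0, t1, eps)
    x = np.empty(len(ts) + 1)
    x[0] = x0
    for k, t in enumerate(ts):
        y = rng.standard_normal()  # E[Y] = 0, E[Y^2] = 1
        x[k + 1] = x[k] + b(t, x[k]) * eps + sigma(t, x[k]) * np.sqrt(eps) * y
    return x

# Toy check with b(t,x) = -x and sigma = 1: the limiting SDE dX = -X dt + dB
# (Ornstein-Uhlenbeck) has stationary variance 1/2, which the scheme reproduces.
rng = np.random.default_rng(0)
finals = [euler_scheme(lambda t, x: -x, lambda t, x: 1.0, 0.0, 0.0, 10.0, 0.01, rng)[-1]
          for _ in range(2000)]
```

The empirical variance of `finals` approaches $1/2$ as the step `eps` shrinks, matching the tightness-and-identification mechanism of Proposition 6.2 in the simplest possible setting.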
We choose $\varepsilon \in (0,1)$ and $A > 1$, a sequence $(r_m)_{m \ge 1}$ in $(0,1)$ which tends to $1$ and such that $r_m$ is sufficiently close to $1$ for all $m$, and we apply Proposition 6.2 to:
\[
\varepsilon_m = \log(1/r_m), \qquad X^{(m)}_{k\varepsilon_m} = |Q_{n+k}|^2, \qquad I_m = \mathbb{R}_+, \qquad I = [\varepsilon, A],
\]
\[
b_m(t,x) = (\log(1/r_m))^{-1}\Big[(1-x) - (1-r_m^2) - r_m^2(1-x)\Big(1 - \frac{2(1-x)}{\beta(j+1)} + \frac{4x}{\beta(j+1)} + O\big((j+1)^{-2}\big)\Big)\Big],
\]
and
\[
\sigma_m(t,x) = (\log(1/r_m))^{-1/2}\, r_m^2\,(1-x)\sqrt{\frac{4x}{\beta(j+1)} + O\big((j+1)^{-2}\big)},
\]
for $j = n + \lfloor t/\log(1/r_m)\rfloor$, which tend uniformly to
\[
b(t,x) = -2x + (1-x)\Big(\frac{2(1-x)}{\beta t} - \frac{4x}{\beta t}\Big) \qquad\text{and}\qquad \sigma^2(t,x) = \frac{4x(1-x)^2}{\beta t}.
\]
We then also have the uniform convergence of $\sigma_m$ towards $\sigma$, since the square root is uniformly continuous. Moreover, the condition on the conditional fourth moment is ensured by (5.3.3), and the Lipschitz-type conditions on $b$ and $\sigma$ are also satisfied. From Proposition 6.2, we deduce that the family, indexed by $m$, of distributions of the linear interpolation $X^{(r_m)}_t$ of $t \mapsto |Q_{n + t/\log(1/r_m)}|^2$, restricted to the interval $[\varepsilon, A]$, is tight, and any limit point satisfies the SDE of Proposition 6.1 on the interval $[\varepsilon, A]$.

From the tightness of $(S(r))_{r\in(0,1)}$ (see (6.1.8)), we get, for $k \le 1/\log(1/r)$ and $\varepsilon' \in (0, 1/2)$:
\[
1 - |Q_{n+k}|^2 \le C_\omega e^{S(r)}\big[(n+k+1)(1-r)\big]^{1-\varepsilon'} \le C_\omega e^{S(r)}\Big[\big[(1-r)(n+1)\big]^{1-\varepsilon'} + \big(k\log(1/r)\big)^{1-\varepsilon'}\Big],
\]
which implies that the family of conditional distributions of
\[
\sup_{t\in[0,1]} \frac{1 - X^{(r)}_t}{t^{1-\varepsilon'} + (1-r)^{1-\varepsilon'}},
\]
given $\mathcal{G}$, remains tight with respect to $r \in (0,1)$. This allows us to extend the tightness of $X^{(r_m)}$ given $\mathcal{G}$ from the interval $[\varepsilon, A]$ to the interval $[0, A]$. Indeed, tightness is ensured by the fact that for all $\xi > 0$,
\[
\limsup_{m\to\infty} Q\big[w_{X^{(r_m)},[0,A]}(\delta) > \xi \,\big|\, \mathcal{G}\big] \xrightarrow[\delta\to0]{} 0,
\]
where $w_{X^{(r_m)},[0,A]}$ denotes the modulus of continuity of $X^{(r_m)}$ restricted to the interval $[0,A]$.
We already know tightness on the interval $[\varepsilon, A]$, and then
\[
\limsup_{m\to\infty} Q\big[w_{X^{(r_m)},[\varepsilon,A]}(\delta) > \xi/2 \,\big|\, \mathcal{G}\big] \xrightarrow[\delta\to0]{} 0.
\]
Moreover, for $\delta \in (0,\varepsilon)$,
\begin{align*}
\limsup_{m\to\infty} Q\big[w_{X^{(r_m)},[0,\varepsilon]}(\delta) > \xi/2 \,\big|\, \mathcal{G}\big]
&\le \limsup_{m\to\infty} Q\big[w_{X^{(r_m)},[0,\varepsilon]}(\varepsilon) > \xi/2 \,\big|\, \mathcal{G}\big]\\
&\le \limsup_{r\to1^-} Q\Big[\sup_{t\in(0,1]} \frac{1 - X^{(r)}_t}{t^{1-\varepsilon'} + (1-r)^{1-\varepsilon'}} > \frac{\xi/4}{\varepsilon^{1-\varepsilon'} + (1-r)^{1-\varepsilon'}} \,\Big|\, \mathcal{G}\Big].
\end{align*}
We deduce, for all $\varepsilon \in (0,1)$:
\[
\lim_{\delta\to0}\limsup_{m\to\infty} Q\big[w_{X^{(r_m)},[0,A]}(\delta) > \xi \,\big|\, \mathcal{G}\big]
\le \limsup_{r\to1^-} Q\Big[\sup_{t\in(0,1]} \frac{1 - X^{(r)}_t}{t^{1-\varepsilon'} + (1-r)^{1-\varepsilon'}} > \frac{\xi}{4}\,\varepsilon^{-(1-\varepsilon')} \,\Big|\, \mathcal{G}\Big].
\]
Letting $\varepsilon \to 0$, the right-hand side tends to zero by the tightness proven just above; this gives the convergence in law of a subsequence of $(X^{(r_m)})_{m\ge1}$, for $r_m \to 1$ (conditionally on $\mathcal{G}$), to a limiting process. Moreover, this limit satisfies the SDE of the proposition on the full open interval $(0,\infty)$, since it satisfies the equation on any interval of the form $[\varepsilon, A]$.

Let $X$ be a process which is the limit in law of a subsequence of $(X^{(r_m)})_{m\ge1}$, and let us now show that
\[
\sup_{t\in(0,1]} \frac{1 - X_t}{t^{1-\varepsilon'}} < \infty
\]
almost surely. Since $X$ has the limiting distribution of a subsequence of $(X^{(r_m)})_{m\ge1}$, we have, by continuity of the underlying functional, for all $\varepsilon \in (0,1)$ and $\xi > 0$:
\[
P\Big[\sup_{t\in(\varepsilon,1]} \frac{1 - X_t}{t^{1-\varepsilon'}} > \xi\Big]
\le \limsup_{r\to1^-} Q\Big[\sup_{t\in(\varepsilon,1]} \frac{1 - X^{(r)}_t}{t^{1-\varepsilon'}} > \xi - 1 \,\Big|\, \mathcal{G}\Big]
\le \limsup_{r\to1^-} Q\Big[\sup_{t\in(0,1]} \frac{1 - X^{(r)}_t}{t^{1-\varepsilon'} + (1-r)^{1-\varepsilon'}} > \xi - 2 \,\Big|\, \mathcal{G}\Big],
\]
which implies
\[
P\Big[\sup_{t\in(0,1]} \frac{1 - X_t}{t^{1-\varepsilon'}} = \infty\Big]
\le \limsup_{\varepsilon\to0} P\Big[\sup_{t\in(\varepsilon,1]} \frac{1 - X_t}{t^{1-\varepsilon'}} > \xi\Big]
\le \limsup_{r\to1^-} Q\Big[\sup_{t\in(0,1]} \frac{1 - X^{(r)}_t}{t^{1-\varepsilon'} + (1-r)^{1-\varepsilon'}} > \xi - 2 \,\Big|\, \mathcal{G}\Big]
\]
for all $\xi > 2$, and then, by letting $\xi \to \infty$,
\[
P\Big[\sup_{t\in(0,1]} \frac{1 - X_t}{t^{1-\varepsilon'}} = \infty\Big] = 0.
\]
This immediately provides the a.s.
integrability of $(X_t - e^{-2t})/t$ on the interval $(0,1]$. Moreover, we know that
\[
E_Q\Big[\sum_{k \ge \max(n, \lfloor (1-r)^{-1}\rfloor)} \frac{|Q_{k+1}(r)|^2}{k+2} \,\Big|\, \mathcal{G}\Big] = O(1),
\]
which easily implies, for $r$ close enough to $1$ (depending on $n$),
\[
E_Q\Big[\int_1^\infty \frac{X^{(r)}_t}{t}\,dt \,\Big|\, \mathcal{G}\Big] = O(1),
\]
and then
\[
E_Q\Big[\int_1^B \frac{X^{(r)}_t}{t}\,dt \,\Big|\, \mathcal{G}\Big] = O(1)
\]
uniformly in $B > 1$. Since $X$ is a limit point of a subsequence of $(X^{(r_m)})_{m\ge1}$, and the integral on a compact set with respect to $dt/t$ is a continuous functional, we have
\[
E\Big[\int_1^B \frac{X_t}{t}\,dt\Big] = O(1),
\]
again independently of $B$. By monotone convergence,
\[
E\Big[\int_1^\infty \frac{X_t}{t}\,dt\Big] = O(1),
\]
which implies the integrability of $(X_t - e^{-2t})/t$ on $[1,\infty)$. To end the proof of Proposition 6.1, it now remains to prove that the law of $X$ is uniquely determined by the fact that it satisfies the SDE and that
\[
\sup_{t\in(0,1]} \frac{1 - X_t}{t^{1-\varepsilon'}} < \infty.
\]
The proof is given in the next two subsections.

6.3. Entrance law from Dufresne's identity. An amusing fact is that the entrance law of the process $X$ involves Dufresne's identity, which relates the perpetuity of a Brownian motion with drift to the inverse of a Gamma random variable. More precisely, if $W$ is a Brownian motion, then Dufresne's identity states (see [BS12, p. 78]) that for all $b > 0$:
\[
2\int_0^\infty \exp\big(-2(W_w + bw)\big)\,dw \;\stackrel{\mathcal{L}}{=}\; \frac{1}{\gamma_b}, \tag{6.2.2}
\]
where $\gamma_b$ is a Gamma random variable of parameter $b$. This plays an important role in the following:

Lemma 6.3. Let $(X_t)_{t\ge0}$ be a continuous process satisfying the SDE (6.1.3), and such that
\[
\sup_{t\in(0,1]} \frac{1 - X_t}{t^{1-\varepsilon'}} < \infty
\]
almost surely, for all $\varepsilon' \in (0,1/2)$. Then the following convergence in law holds:
\[
\frac{1 - X_t}{t} \xrightarrow[t\to0]{} \frac{\beta}{\gamma_\nu}, \qquad\text{where } \nu = \frac{\beta}{2} - 1.
\]

Proof. Recall that $X_0 = 1$ and that $X$ satisfies the SDE:
\[
d(1 - X_t) = 2X_t\,dt - \frac{2(1-X_t)^2}{\beta t}\,dt + \frac{4(1-X_t)X_t}{\beta t}\,dt + \sqrt{\frac{4(1-X_t)^2 X_t}{\beta t}}\,dB_t.
\]
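Dufresne's identity (6.2.2) lends itself to a direct numerical sanity check. The sketch below (the helper `perpetuity_samples`, the step size and the horizon are our own illustrative choices) discretizes the perpetuity $2\int_0^\infty e^{-2(W_w + bw)}\,dw$ by a Riemann sum over a simulated Brownian path and compares its Monte Carlo mean with $E[1/\gamma_b] = 1/(b-1)$, valid for $b > 1$:

```python
import numpy as np

def perpetuity_samples(b, n_paths, dw=0.005, T=8.0, seed=1):
    """Monte Carlo samples of 2 * int_0^infty exp(-2 (W_w + b w)) dw,
    with W a standard Brownian motion, truncated at horizon T
    (the tail beyond T is exponentially negligible for b > 0)."""
    rng = np.random.default_rng(seed)
    n_steps = int(T / dw)
    grid = np.arange(n_steps) * dw
    out = np.empty(n_paths)
    for i in range(n_paths):
        # Brownian path on the grid (left endpoints of each step).
        w = np.concatenate([[0.0], np.cumsum(rng.standard_normal(n_steps - 1)) * np.sqrt(dw)])
        out[i] = 2.0 * np.sum(np.exp(-2.0 * (w + b * grid))) * dw
    return out

# For b = 3, Dufresne's identity (6.2.2) gives mean E[1/gamma_3] = 1/2.
b = 3.0
samples = perpetuity_samples(b, 4000)
```

With a few thousand paths, the sample mean of `samples` sits within a few percent of $1/(b-1)$, as the identity predicts.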
Via Itô's formula, we can write, for all $t_0 > 0$, on the event $\{X_{t_0} < 1\}$ and before the first hitting time $T_{t_0}$ of $1$ after $t_0$:
\begin{align*}
-d\log(1 - X_t) &= \frac{1}{1-X_t}\,dX_t + \frac12 \frac{1}{(1-X_t)^2}\,d\langle X, X\rangle_t\\
&= -\frac{2X_t}{1-X_t}\,dt + \frac{2(1-X_t)}{\beta t}\,dt - \frac{4X_t}{\beta t}\,dt - \sqrt{\frac{4X_t}{\beta t}}\,dB_t + \frac{2X_t}{\beta t}\,dt\\
&= -\frac{2X_t}{1-X_t}\,dt + \frac{2}{\beta t}\,dt - \frac{4X_t}{\beta t}\,dt - \sqrt{\frac{4X_t}{\beta t}}\,dB_t. \tag{6.3.1}
\end{align*}
Let us show that this equation is in fact satisfied for all $t > 0$. Using the Dambis–Dubins–Schwarz theorem and the fact that $X$ is bounded by $1$, we deduce that for all $t_1 > t_0$,
\[
\sup_{t\in[t_0, \min(t_1, T_{t_0}))} \big[-\log(1 - X_t) + \log(1 - X_{t_0})\big] \le \frac{2}{\beta}\log(t_1/t_0) + \sup_{0 \le s \le \frac{4}{\beta}\log(t_1/t_0)} \widetilde{\beta}_s < \infty,
\]
where $(\widetilde{\beta}_s)_{s\ge0}$ is a Brownian motion. Taking the limit at the end of the interval, we deduce
\[
-\log\big(1 - X_{\min(t_1, T_{t_0})}\big) + \log(1 - X_{t_0}) < \infty,
\]
i.e. $X_{\min(t_1, T_{t_0})} < 1$, and then $T_{t_0} > t_1$. Since $t_1 > t_0$ is arbitrary, $T_{t_0} = \infty$, which means that $X$ almost surely never returns to $1$ after $t_0$, conditionally on the event $\{X_{t_0} < 1\}$. In particular, for all $t_0 < t_1$,
\[
P[X_{t_0} < 1,\, X_{t_1} = 1] = 0,
\]
and then, taking a countable union,
\[
P[X_{t_1} = 1,\; \exists t \in \mathbb{Q}\cap(0,t_1),\, X_t < 1] = 0,
\]
and, by continuity,
\[
P[X_{t_1} = 1,\; \exists t \in (0,t_1),\, X_t < 1] = 0,
\]
i.e.
\[
P[X_{t_1} = 1] = P[\forall t \in (0,t_1],\, X_t = 1] = 0,
\]
since the fact that $X$ remains equal to $1$ on a non-trivial interval contradicts the SDE satisfied by $X$. Hence, for all $t_1 > 0$, almost surely $X_{t_1} < 1$ and $X$ never hits $1$ after $t_1$. Taking the countable intersection of these events for $t_1 \in \mathbb{Q}\cap(0,\infty)$, we deduce that almost surely $X$ is strictly smaller than $1$ everywhere except at time $0$. Hence, (6.3.1) is almost surely satisfied for all $t \in (0,\infty)$.
As we shall see, Eq. (6.3.1) can be recast into the following Volterra equation:
\[
1 - X_t = \int_0^t 2\,ds\; e^{-2(t-s)}\Big(\frac{s}{t}\Big)^{2/\beta} \exp\Big(\int_s^t \frac{4X_u}{\beta u}\,du + \int_s^t \sqrt{\frac{4X_u}{\beta u}}\,dB_u\Big). \tag{6.3.2}
\]
To do so, fix an arbitrary time, say $1$, and write:
\[
Y_t = (1 - X_t)\, e^{2t}\, t^{2/\beta} \exp\Big(-\int_1^t \frac{4X_u}{\beta u}\,du - \int_1^t \sqrt{\frac{4X_u}{\beta u}}\,dB_u\Big). \tag{6.3.3}
\]
First, let us prove that almost surely, $\lim_{t\to0} Y_t = 0$. Fix an integer $r \ge 1$. Using the Dambis–Dubins–Schwarz theorem, there exists a Brownian motion $\beta^{(r)}$ such that for $t \ge e^{-r}$:
\[
\int_{e^{-r}}^t \sqrt{\frac{4X_u}{\beta u}}\,dB_u = \beta^{(r)}_{\int_{e^{-r}}^t \frac{4X_u}{\beta u}\,du}.
\]
Because $X_u \le 1$, we obtain:
\[
\sup_{e^{-r}\le t\le 1}\Big|\int_{e^{-r}}^t \sqrt{\frac{4X_u}{\beta u}}\,dB_u\Big| \le \sup_{0\le s\le 4r/\beta} |\beta^{(r)}_s| \;\stackrel{\mathcal{L}}{=}\; \sqrt{4r/\beta}\;V,
\]
where $V$ is a random variable with subexponential tails. By the Borel–Cantelli lemma, we deduce that there is a random variable $C_\omega < \infty$ such that
\[
\forall r \ge 1, \quad \sup_{e^{-r}\le t\le 1}\Big|\int_{e^{-r}}^t \sqrt{\frac{4X_u}{\beta u}}\,dB_u\Big| \le C_\omega\, r^{3/4},
\]
and then, up to changing $C_\omega$,
\[
\forall t \in (0,1], \quad \Big|\int_t^1 \sqrt{\frac{4X_u}{\beta u}}\,dB_u\Big| \le C_\omega\,\big(1 + |\log t|\big)^{3/4}.
\]
Since $-\int_1^t (4X_u/(\beta u))\,du \le (4/\beta)\log(1/t)$ for $t \le 1$, the previous bound applied to Eq. (6.3.3) gives
\[
Y_t \le (1 - X_t)\, e^{2t}\, t^{-2/\beta}\, e^{C_\omega(1 + |\log t|)^{3/4}}.
\]
By the assumption on the process $X$ and the fact that $2/\beta < 1$, $Y_t$ almost surely goes to zero when $t \to 0$. We are now ready to recast the SDE on $X$ into a Volterra equation. From (6.3.1), we deduce that $d\log Y_t = 2\,dt/(1 - X_t)$, and then, since $Y$ tends to zero at zero:
\[
Y_t = \int_0^t \frac{2\,Y_s}{1 - X_s}\,ds,
\]
which implies the Volterra equation (6.3.2). We are now ready to finish the argument. We have
\[
0 \le \int_s^t \frac{4(1 - X_u)}{\beta u}\,du \le \int_0^t \frac{4(1 - X_u)}{\beta u}\,du,
\]
which tends to zero in probability when $t$ goes to zero, since $(1 - X_u)/u$ is integrable because of the assumption on $X$. On the other hand, by using again the Dambis–Dubins–Schwarz theorem, we have for $\varepsilon \in (0,t)$:
\[
\sup_{\varepsilon\le s\le t}\Big|\int_\varepsilon^s \big(1 - \sqrt{X_u}\big)\sqrt{\frac{4}{\beta u}}\,dB_u\Big| \le \sup_{0\le\ell\le L} |\gamma^{(\varepsilon)}_\ell|,
\]
where $\gamma^{(\varepsilon)}$ is a Brownian motion and
\[
L = \int_\varepsilon^t \big(1 - \sqrt{X_u}\big)^2\,\frac{4}{\beta u}\,du \le \int_0^t (1 - X_u)^2\,\frac{4}{\beta u}\,du.
\]
We deduce that for $a, \alpha > 0$:
\begin{align*}
P\Big[\sup_{\varepsilon\le s\le t}\Big|\int_\varepsilon^s \big(1 - \sqrt{X_u}\big)\sqrt{\frac{4}{\beta u}}\,dB_u\Big| > a\Big]
&\le P\Big[\int_0^t (1 - X_u)^2\,\frac{4}{\beta u}\,du \ge \alpha\Big] + P\Big[\sup_{0\le\ell\le\alpha} |\gamma^{(\varepsilon)}_\ell| \ge a\Big]\\
&= P\Big[\int_0^t (1 - X_u)^2\,\frac{4}{\beta u}\,du \ge \alpha\Big] + P\big[\sqrt{\alpha}\,V \ge a\big].
\end{align*}
Using the triangle inequality and letting $\varepsilon \to 0$, we deduce that
\[
\sup_{0 < s \le t}\Big|\int_s^t \sqrt{\frac{4X_u}{\beta u}}\,dB_u - \int_s^t \sqrt{\frac{4}{\beta u}}\,dB_u\Big|
\]
tends to zero in probability as $t \to 0$. Consequently, in the Volterra equation (6.3.2), the process $X_u$ appearing inside the exponential may be replaced by $1$, up to an error tending to zero in probability as $t \to 0$. After the change of variables $s = te^{-v}$ and by the scaling invariance of Brownian motion, we obtain that $(1 - X_t)/t$ converges in law, as $t \to 0$, to
\[
2\beta\int_0^\infty \exp\big(-2(W_w + \nu w)\big)\,dw, \qquad \nu = \frac{\beta}{2} - 1,
\]
which, by Dufresne's identity (6.2.2), has the law of $\beta/\gamma_\nu$. $\square$

6.4. Uniqueness in law of solutions of the SDE.

Lemma 6.4. Let $(X_t)_{t\ge0}$ be a continuous process satisfying the SDE (6.1.3), and such that
\[
\sup_{t\in(0,1]} \frac{1 - X_t}{t^{1-\varepsilon'}} < \infty
\]
almost surely, for all $\varepsilon' \in (0,1/2)$. Then the law of $X$ is uniquely determined.

Proof. Let $X^1$ and $X^2$ be two solutions. In order to prove that $X^1 \stackrel{\mathcal{L}}{=} X^2$ as processes, we need to prove that their finite-dimensional distributions on $\{t > 0\}$ coincide. Nevertheless, because $X^1$ and $X^2$ solve the same SDE, which admits strong solutions for $t > 0$, both processes enjoy the Markov property, with the same transition kernels. Therefore, we only need to prove that the marginals match: for all $t > 0$, $X^1_t \stackrel{\mathcal{L}}{=} X^2_t$. In other words, one can fix $t \in (0,1)$ and prove that $X^1_t$ and $X^2_t$ have the same law. Although $X^1_0 = X^2_0 = 1$ is the natural continuation at $t = 0$, we do not have the Markov property available in order to directly make the transition from time $0$ to time $t$. From the previous subsection, since both $X^1$ and $X^2$ have the required tightness property at $0$ and solve the SDE, they have the same "entrance law":
\[
\frac{1 - X^1_t}{t} \xrightarrow[t\to0]{} G, \qquad \frac{1 - X^2_t}{t} \xrightarrow[t\to0]{} G, \qquad\text{for } G \stackrel{\mathcal{L}}{=} \frac{\beta}{\gamma_{(\beta/2)-1}}.
\]
From this convergence in law, one deduces that for all $\eta > 0$, there exists $\delta \in (0,t)$, small enough and depending on $\eta$, such that we can couple the distributions of $(X^1_t)_{t\ge\delta}$ and $(X^2_t)_{t\ge\delta}$ in such a way that, with probability larger than $1 - \eta$, $|L^1_\delta - L^2_\delta| \le \eta$, where
\[
L^j_t := \log\Big(\frac{1 - X^j_t}{t}\Big),
\]
and $(X^1_t)_{t\ge\delta}$ and $(X^2_t)_{t\ge\delta}$ are driven by the same Brownian motion.
From the SDE given by (6.3.1), the drift term of $L^j_t$ is nonincreasing with respect to the value of $L^j_t$. Moreover, since the SDE has strong solutions, the corresponding stochastic flows do not cross, which implies that the relative order of $L^1$ and $L^2$ never changes. We deduce that the process $(|L^1_t - L^2_t|)_{t\ge\delta}$ is a nonnegative supermartingale, and then, for all $t \ge \delta$,
\[
P\big[|L^1_t - L^2_t| \ge \eta^{1/2} \,\big|\, |L^1_\delta - L^2_\delta| \le \eta\big]
\le \eta^{-1/2}\, E\big[|L^1_t - L^2_t| \,\big|\, |L^1_\delta - L^2_\delta| \le \eta\big] \le \eta^{1/2},
\]
which implies
\[
P\big[|L^1_t - L^2_t| \ge \eta^{1/2}\big] \le \eta^{1/2} + P\big[|L^1_\delta - L^2_\delta| > \eta\big] \le \eta + \eta^{1/2}.
\]
Hence, the characteristic functions of $L^1_t$ and $L^2_t$, taken at $\lambda \in \mathbb{R}$, differ by at most
\[
E\big[\min\big(2, |\lambda|\,|L^1_t - L^2_t|\big)\big] \le |\lambda|\,\eta^{1/2} + 2P\big[|L^1_t - L^2_t| \ge \eta^{1/2}\big] \le 2\eta + (2 + |\lambda|)\,\eta^{1/2}.
\]
This bound does not depend on the coupling between $L^1_t$ and $L^2_t$ and holds for all $\eta > 0$, which implies that $L^1_t$ and $L^2_t$ have the same distribution. Hence, $X^1_t \stackrel{\mathcal{L}}{=} X^2_t$. $\square$

7. Concluding the proof of the Main Theorem 2.1

Finally, we are ready to tackle the proofs of Lemma 4.1 and Lemma 4.2, thus completing the proof of the Main Theorem.

Proof of Lemma 4.1. If we multiply the quantity inside the expectation by the indicator of $\{M_\infty^{-2} \le A\}$ for some $A > 0$, the convergence occurs, since $GMC^\gamma_r(f)$ converges to $GMC^\gamma(f)$ in $L^1$. Hence, the upper limit of the left-hand side is at most
\[
\|f\|_\infty \limsup_{r\to1^-} E\big[\mathbb{1}_{M_\infty^{-2} > A}\, M_\infty^{-2}\, GMC^\gamma_r(\partial\mathbb{D})\big] + \|f\|_\infty\, E\big[\mathbb{1}_{M_\infty^{-2} > A}\, M_\infty^{-2}\, GMC^\gamma(\partial\mathbb{D})\big].
\]
By Fatou's lemma,
\[
E\big[\mathbb{1}_{M_\infty^{-2} > A}\, M_\infty^{-2}\, GMC^\gamma(\partial\mathbb{D})\big] \le \liminf_{r\to1^-} E\big[\mathbb{1}_{M_\infty^{-2} > A}\, M_\infty^{-2}\, GMC^\gamma_r(\partial\mathbb{D})\big],
\]
which is trivially smaller than the corresponding upper limit, and therefore the upper limit above is at most
\[
2A^{-1}\|f\|_\infty \limsup_{r\to1^-} E\big[M_\infty^{-4}\, GMC^\gamma_r(\partial\mathbb{D})\big].
\]
Since $A$ is arbitrarily large, it is enough to show that $E[M_\infty^{-4}\, GMC^\gamma_r(\partial\mathbb{D})]$ is bounded, uniformly in $r < 1$. Now:
\[
E\big[M_\infty^{-4}\, GMC^\gamma_r(\partial\mathbb{D})\big]
= E\Big[M_\infty^{-4} \prod_{j=0}^\infty \frac{1 - |\alpha_j|^2}{|1 - \alpha_j Q_j(r)|^2}\, e^{\frac{2}{\beta(j+1)}(1 - r^{2(j+2)})}\Big]
= E_Q\big[M_\infty^{-4}\, e^{\rho_{r,0} + \omega_{r,0}}\big]
\le \big(E_Q[M_\infty^{-12}]\big)^{1/3}\big(E_Q[e^{3\rho_{r,0}}]\big)^{1/3}\big(E_Q[e^{3\omega_{r,0}}]\big)^{1/3}.
\]
Recall that
\[
\rho_{r,n} = \sum_{j=n}^\infty \Big(-\log\frac{|1 - \alpha_j Q_j(r)|^2}{1 - |\alpha_j|^2} + \frac{2}{\beta}\,\frac{1 - |Q_j(r)|^2}{j+1}\Big)
\qquad\text{and}\qquad
\omega_{r,n} = \frac{2}{\beta}\sum_{j=n}^\infty \frac{|Q_j(r)|^2 - r^{2(j+2)}}{j+1}.
\]
Since the law of $(|\alpha_j|)_{j\ge0}$ is the same under $P$ and $Q$, $M_\infty^{-1}$ has moments of all orders under $Q$. We also know that $\rho_{r,0}$ and $\omega_{r,0}$ have exponential moments of all orders, bounded independently of $r$. $\square$

Proof of Lemma 4.2. We have, for $\eta > 0$:
\[
E_Q\big[|e^{\rho_{r,n}} - 1|\, e^{\omega_{r,n}} \,\big|\, \mathcal{F}_n\big]
\le E_Q\big[\mathbb{1}_{|e^{\rho_{r,n}} - 1| \ge \eta}\,(1 + e^{\rho_{r,n}})\, e^{\omega_{r,n}} \,\big|\, \mathcal{F}_n\big] + \eta\, E_Q\big[e^{\omega_{r,n}} \,\big|\, \mathcal{F}_n\big].
\]
In Proposition 5.5, we have proven that the conditional exponential moments of order $p$ of $\omega_{r,n}$ and $\rho_{r,n}$ are bounded by a quantity depending only on $p$ and $\beta$, independently of $r$, and that, on the event $\mathcal{A}_L$, $Q[|e^{\rho_{r,n}} - 1| \ge \eta \,|\, \mathcal{F}_n]$ is bounded by a quantity depending only on $\beta, \eta, n, L, r$ and tending to $0$ when $r$ goes to $1^-$. We deduce, by applying Hölder's inequality and letting $\eta \to 0$, that on $\mathcal{A}_L$, the conditional expectation $E_Q[|e^{\rho_{r,n}} - 1|\, e^{\omega_{r,n}} \,|\, \mathcal{F}_n]$ is bounded by a deterministic quantity depending on $\beta, n, L, r$ and tending to zero when $r$ goes to $1^-$. Since it is also uniformly bounded by a quantity depending only on $\beta$, we have
\[
E_Q\big[E_Q[|e^{\rho_{r,n}} - 1|\, e^{\omega_{r,n}} \,|\, \mathcal{F}_n]\big] \xrightarrow[r\to1^-]{} 0
\]
by dominated convergence.
It is then enough to show that
\[
E_Q\big|E_Q[e^{\omega_{r,n}} \,|\, \mathcal{F}_n] - K_\beta\big| \xrightarrow[r\to1^-]{} 0
\]
for some constant $K_\beta \in \mathbb{R}$. Now, for all $u > 0$:
\[
E_Q\big|E_Q\big[e^{\omega_{r,n}} - e^{\min(\omega_{r,n}, u)} \,\big|\, \mathcal{F}_n\big]\big| \le e^{-u}\, E_Q\big[e^{2\omega_{r,n}}\big] = O(e^{-u}),
\]
since the exponential moments of $\omega_{r,n}$ are bounded uniformly in $r$ and $n$. It is then enough to show that for all $u > 0$, there exists some $K_{\beta,u} \in \mathbb{R}$ such that
\[
E_Q\big|E_Q\big[e^{\min(\omega_{r,n}, u)} \,\big|\, \mathcal{F}_n\big] - K_{\beta,u}\big| \xrightarrow[r\to1^-]{} 0.
\]
In this case, $K_{\beta,u}$ is bounded uniformly in $u$ because of the bound on the exponential moments of $\omega_{r,n}$, and one can take for $K_\beta$ a limit point of $K_{\beta,u}$ for $u \to \infty$. It is a fortiori sufficient to prove that, for some random variable $\omega$ with distribution depending only on $\beta$, and for all continuous, bounded functions $G$ from $\mathbb{R}$ to $\mathbb{R}$,
\[
E_Q\big|E_Q[G(\omega_{r,n}) \,|\, \mathcal{F}_n] - E[G(\omega)]\big| \xrightarrow[r\to1^-]{} 0.
\]
By dominated convergence, it is then enough to show that some determination of the conditional law of $\omega_{r,n}$ given $\mathcal{F}_n$, under $Q$, almost surely converges to the distribution of a random variable $\omega$ depending only on $\beta$. In order to avoid the issue of fixing these determinations of conditional laws (recall that conditional expectations are only defined almost surely), let us use the following trick. First, since the full event can be approximated by events of the form $\mathcal{A}_L$, it is enough to show, for all compact sets $L \subset \mathbb{D}^n$:
\[
E_Q\big[\mathbb{1}_{\mathcal{A}_L}\big|E_Q[G(\omega_{r,n}) \,|\, \mathcal{F}_n] - E[G(\omega)]\big|\big] \xrightarrow[r\to1^-]{} 0.
\]
Let $\mathcal{E}$ be the $\mathcal{F}_n$-measurable event defined by
\[
\mathcal{E} = \big\{E_Q[G(\omega_{r,n}) \,|\, \mathcal{F}_n] - E[G(\omega)] > 0\big\}.
\]
This event may depend on $r$.
The left-hand side of the last convergence can be written as
\[
E_Q\big[E_Q[G(\omega_{r,n})\,|\,\mathcal{F}_n] - E[G(\omega)] \,\big|\, \mathcal{A}_L, \mathcal{E}\big]\, Q[\mathcal{E}\cap\mathcal{A}_L]
+ E_Q\big[E_Q[-G(\omega_{r,n})\,|\,\mathcal{F}_n] + E[G(\omega)] \,\big|\, \mathcal{A}_L, \mathcal{E}^c\big]\, Q[\mathcal{E}^c\cap\mathcal{A}_L],
\]
and then, since $\mathcal{E}$ and $\mathcal{A}_L$ are in $\mathcal{F}_n$, it is equal to
\[
\big(E_Q[G(\omega_{r,n}) \,|\, \mathcal{A}_L, \mathcal{E}] - E[G(\omega)]\big)\, Q[\mathcal{E}\cap\mathcal{A}_L]
+ \big(E_Q[-G(\omega_{r,n}) \,|\, \mathcal{A}_L, \mathcal{E}^c] + E[G(\omega)]\big)\, Q[\mathcal{E}^c\cap\mathcal{A}_L].
\]
We deduce that it is sufficient to prove the following: for any event $\mathcal{G}$ of non-zero probability, possibly depending on $r$, included in $\mathcal{A}_L$ and $\mathcal{F}_n$-measurable, the conditional law of $\omega_{r,n}$ given $\mathcal{G}$ tends to the distribution of a random variable $\omega$ depending only on $\beta$, when $r \to 1^-$. Recall that for $\varepsilon \in (0,1)$ and $A > 1$, we have:
\[
\omega_{r,n} = \omega^{(0,\varepsilon)}_{r,n} + \omega^{(\varepsilon,A)}_{r,n} + \omega^{(A,\infty)}_{r,n},
\]
where for $a < b$:
\[
\omega^{(a,b)}_{r,n} = \frac{2}{\beta}\sum_{j=a_{n,r}}^{b_{n,r}-1} \frac{|Q_j(r)|^2 - r^{2(j+2)}}{j+1},
\]
and $a_{n,r} = \max(n, \lfloor a/(1-r)\rfloor)$. From Proposition 6.1, we deduce that for all $\eta > 0$:
\[
\lim_{\varepsilon\to0}\, \sup_{r\in(0,1)} Q\big(|\omega^{(0,\varepsilon)}_{r,n}| \ge \eta \,\big|\, \mathcal{G}\big) = 0, \qquad
\limsup_{A\to\infty}\, \sup_{r\in(0,1)} Q\big(|\omega^{(A,\infty)}_{r,n}| \ge \eta \,\big|\, \mathcal{G}\big) = 0.
\]
From these bounds, it is easy to deduce the convergence we are looking for from the convergence, for all positive integers $A$ and all $\varepsilon \in (0,1)$, of the conditional law of $\omega^{(\varepsilon,A)}_{r,n}$, under $Q$ and given $\mathcal{G}$, to a variable $\omega^{(\varepsilon,A)}$ depending only on $\beta$, $\varepsilon$ and $A$. Notice that we then have convergence in law of $\omega^{(\varepsilon,A)}$ towards $\omega$ when $A \to \infty$ and $\varepsilon \to 0$. Recall the process $X^{(r)}_t$, which is equal to $|Q_{n + (t/\log(1/r))}(r)|^2$ when $n + (t/\log(1/r))$ is an integer, and which is linearly interpolated otherwise. In other words, for all $t \ge 0$,
\[
X^{(r)}_t = \sum_{j\ge n} w^r_j(t)\, |Q_j(r)|^2,
\]
where
\[
w^r_j(t) = w\Big(-(j-n) + \frac{t}{\log(1/r)}\Big) \qquad\text{and}\qquad w(t) = \begin{cases} 0 & \text{if } |t| > 1,\\ 1 - |t| & \text{if } |t| \le 1.\end{cases}
\]
That is to say, the graph of $w^r_j$, as a function of $t$, is a triangle of width $2\log(1/r)$ and height $1$, centered at $t = (j-n)\log(1/r)$. We then have
$$\int_\varepsilon^A \frac{X^{(r)}_t}{t}\, dt = \sum_{j \ge n} |Q_j(r)|^2 \int_\varepsilon^A \frac{w^r_j(t)}{t}\, dt = \sum_{j \ge n} |Q_j(r)|^2 \int_{-(j-n) + \varepsilon/\log(1/r)}^{-(j-n) + A/\log(1/r)} \frac{w(t)}{j - n + t}\, dt.$$
Since $w$ is supported on $[-1,1]$, the corresponding integral vanishes as soon as $-(j-n) + \varepsilon/\log(1/r) \ge 1$ or $-(j-n) + A/\log(1/r) \le -1$. Therefore, we can restrict the summation index $j$ to
$$\varepsilon/\log(1/r) - 1 < j - n < A/\log(1/r) + 1. \qquad (7.0.1)$$
For such indices, $j$ grows to infinity as $r \to 1^-$, and the contribution of any bounded number of terms to the series goes to zero as $r \to 1^-$. This remark allows us to take care of two boundary effects:
• The integrals $\int_{-(j-n)+\varepsilon/\log(1/r)}^{-(j-n)+A/\log(1/r)} \frac{w(t)}{j-n+t}\, dt$ can be taken over $[-1,1]$, since the integration interval does not cover $[-1,1]$ only for a bounded number of indices $j$ such that (7.0.1) occurs.
• Because $\varepsilon/\log(1/r) = \varepsilon_{n,r} + O_{n,\varepsilon}(1)$ for $r \in (1/2, 1)$, and the same holds for $A_{n,r}$, instead of restricting the indices $j$ to (7.0.1), we can restrict them to $\varepsilon_{n,r} \le j \le A_{n,r} - 1$.
In the following, $o_r(1)$ denotes any quantity tending to zero when $r \to 1^-$, $n$, $\varepsilon$ and $A$ being fixed:
$$\int_\varepsilon^A \frac{X^{(r)}_t}{t}\, dt = o_r(1) + \sum_{j = \varepsilon_{n,r}}^{A_{n,r} - 1} |Q_j(r)|^2 \int_{[-1,1]} \frac{w(t)}{j-n+t}\, dt = o_r(1) + \sum_{j = \varepsilon_{n,r}}^{A_{n,r} - 1} |Q_j(r)|^2 \left( \int_{[-1,1]} \frac{w(t)}{j+1}\, dt + O_n\!\left( \frac{1}{(j+1)^2} \right) \right) = o_r(1) + \sum_{j = \varepsilon_{n,r}}^{A_{n,r} - 1} \frac{|Q_j(r)|^2}{j+1}.$$
A comparison between Riemann sums and integrals easily gives
$$\int_\varepsilon^A \frac{e^{-2t}}{t}\, dt = o_r(1) + \sum_{j = \varepsilon_{n,r}}^{A_{n,r} - 1} \frac{r^{2(j+1)}}{j+1},$$
and then, the combination of the two previous equations yields
$$\omega^{(\varepsilon,A)}_{r,n} = \frac{2}{\beta} \left( o_r(1) + \int_\varepsilon^A \frac{X^{(r)}_t - e^{-2t}}{t}\, dt \right).$$
Now, from Proposition 6.1, $(X^{(r)}_t)_{t \ge 0}$, conditionally on $\mathcal{G}$, under $\mathbb{Q}$, tends in law to a limiting stochastic process $(X_t)_{t \ge 0}$ whose distribution depends only on $\beta$, for the topology of uniform convergence on compact sets.
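The Riemann-sum comparison can be illustrated numerically. Under the reconstruction used here (exponent $2(j+1)$ in the power of $r$, integrand $e^{-2t}/t$), and ignoring the index shift by $n$, the sum $\sum_j r^{2(j+1)}/(j+1)$ over $\varepsilon/\log(1/r) \le j < A/\log(1/r)$ approaches $\int_\varepsilon^A e^{-2t}\, t^{-1}\, dt$ as $r \to 1^-$; the specific values of $r$, $\varepsilon$, $A$ below are arbitrary. A sketch under these assumptions:

```python
import math

def riemann_side(r, eps, A):
    # sum of r^{2(j+1)}/(j+1) over eps/log(1/r) <= j < A/log(1/r)
    L = math.log(1.0 / r)
    lo, hi = int(eps / L), int(A / L)
    return sum(r ** (2 * (j + 1)) / (j + 1) for j in range(lo, hi))

def integral_side(eps, A, steps=200_000):
    # \int_eps^A e^{-2t}/t dt, by the trapezoidal rule on a fine grid
    h = (A - eps) / steps
    f = lambda t: math.exp(-2.0 * t) / t
    return h * (0.5 * f(eps) + 0.5 * f(A) + sum(f(eps + k * h) for k in range(1, steps)))

eps, A = 0.5, 4.0
for r in (0.99, 0.999, 0.9999):
    print(r, abs(riemann_side(r, eps, A) - integral_side(eps, A)))
# the discrepancy is o_r(1): it shrinks as r -> 1^-
```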
Since the map
$$Y \mapsto \frac{2}{\beta} \int_\varepsilon^A \frac{Y_t - e^{-2t}}{t}\, dt$$
is continuous for this topology, the continuous mapping theorem entails that $\omega^{(\varepsilon,A)}_{r,n}$ converges in distribution to
$$\omega^{(\varepsilon,A)} := \frac{2}{\beta} \int_\varepsilon^A \frac{X_t - e^{-2t}}{t}\, dt.$$
Since the integral
$$\frac{2}{\beta} \int_0^\infty \frac{X_t - e^{-2t}}{t}\, dt$$
is absolutely convergent by Proposition 6.1, it defines a random variable $\omega$ which is the limit of $\omega^{(\varepsilon,A)}$ when $\varepsilon$ goes to zero and $A$ goes to infinity. This completes the proof of the Main Theorem. $\square$

Appendix A. CβE as regularization of a Gaussian space

Here we provide a short proof, independent of [JM15], that traces of the Circular Beta Ensemble become Gaussian as $n \to \infty$. This proof is absent from the literature and is in fact hidden in the book of Macdonald [Mac98]. Unlike [DS94] or [JM15], this proof is not quantitative. It shows that the $C\beta E_n$ is the regularization of a Gaussian space by $n$ points, at the level of symmetric functions.

Lemma A.1 (Gaussianity of traces). Given a unitary matrix $U_n$ whose spectrum is sampled following the $C\beta E_n$, we have the convergence in distribution:
$$\left( \operatorname{tr}\left( U_n^k \right) ;\ k \in \mathbb{Z}_+^* \right) \underset{n \to \infty}{\longrightarrow} \left( \sqrt{\tfrac{2k}{\beta}}\, \mathcal{N}_{\mathbb{C}}^{(k)} ;\ k \in \mathbb{Z}_+^* \right),$$
where the $\mathcal{N}_{\mathbb{C}}^{(k)}$ are i.i.d. standard complex Gaussian random variables.

Proof. Consider the following specialization of the power sum polynomials, which form a basis of the symmetric functions:
$$p_k := p_k(U_n) = \operatorname{tr}\left( U_n^k \right), \qquad \text{and for a partition } \lambda, \quad p_\lambda := \prod_i p_{\lambda_i}.$$
Also, consider the following scalar product for symmetric functions in $n$ variables. Given $f, g : (\partial \mathbb{D})^n \to \mathbb{C}$ symmetric, define
$$\langle f, g \rangle_n := \mathbb{E}_{C\beta E_n}\left( f(y_1, \dots, y_n)\, \overline{g(y_1, \dots, y_n)} \right),$$
where $\{ y_1, \dots, y_n \}$ follows the $C\beta E_n$. The convergence in law of the traces of the $C\beta E_n$ to Gaussians is equivalently reformulated as the convergence of the $\langle p_\lambda, p_\mu \rangle_n$ to the appropriate limit.
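Lemma A.1 can be sanity-checked by simulation in the case $\beta = 2$, where the $C\beta E_n$ is the CUE (Haar-distributed unitary matrices) and the lemma predicts $\mathbb{E}\left|\operatorname{tr}(U_n^k)\right|^2 \to 2k/\beta = k$. The sketch below samples Haar unitaries by the standard QR-with-phase-correction recipe; the matrix size and sample count are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(n):
    # QR of a complex Ginibre matrix, with columns rescaled by the phases
    # of diag(R), gives a Haar-distributed unitary matrix
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

n, samples = 10, 4000
acc = {1: 0.0, 2: 0.0}
for _ in range(samples):
    ev = np.linalg.eigvals(haar_unitary(n))   # CUE spectrum
    for k in acc:
        acc[k] += abs(np.sum(ev ** k)) ** 2   # |tr(U_n^k)|^2
for k in acc:
    print(k, acc[k] / samples)  # should be close to k (= 2k/beta at beta = 2)
```

For the CUE the identity $\mathbb{E}|\operatorname{tr}(U_n^k)|^2 = \min(k, n)$ is in fact exact [DS94], so the estimates should match $k$ up to Monte Carlo error already at moderate $n$.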
This is readily obtained from the combination of the two following facts:
• [Mac98, Chapter VI, §9, “Another scalar product”, Theorem (9.9)]: the scalar product $\langle \cdot, \cdot \rangle_n$ approximates the Macdonald scalar product in infinitely many variables, $\langle \cdot, \cdot \rangle_n \to \langle \cdot, \cdot \rangle$, where $\langle p_\lambda, p_\mu \rangle = \delta_{\lambda,\mu}\, \mathrm{Cste}(\lambda)$.
• [Mac98, Chapter VI, §10, “Jack symmetric functions”]: the Macdonald scalar product has a Gaussian space lurking behind, as
$$\mathrm{Cste}(\lambda)\, \delta_{\lambda,\mu} = z_\lambda \left( \frac{2}{\beta} \right)^{\ell(\lambda)} \delta_{\lambda,\mu} = \mathbb{E}\left[ \prod_k \left( \sqrt{\tfrac{2k}{\beta}}\, \mathcal{N}_{\mathbb{C}}^{(k)} \right)^{m_k(\lambda)} \left( \sqrt{\tfrac{2k}{\beta}}\, \overline{\mathcal{N}_{\mathbb{C}}^{(k)}} \right)^{m_k(\mu)} \right],$$
where $\ell(\lambda)$ is the length of the partition $\lambda$, $m_k(\lambda)$ the multiplicity of $k$ in the partition $\lambda$, and $z_\lambda = \prod_k \left( m_k(\lambda)!\, k^{m_k(\lambda)} \right)$. $\square$

Acknowledgments. J.N. would like to thank R. Rhodes and V. Vargas for helpful discussions on the Gaussian Multiplicative Chaos and the Fyodorov-Bouchaud formula.

References

[APS18] Juhan Aru, Ellen Powell, and Avelio Sepúlveda. Critical Liouville measure as a limit of subcritical measures. ArXiv preprint arXiv:1802.08433, 2018.
[B+17] Nathanaël Berestycki et al. An elementary approach to Gaussian multiplicative chaos. Electronic Communications in Probability, 22, 2017.
[BFS07] Jonathan Breuer, Peter J. Forrester, and Uzy Smilansky. Random discrete Schrödinger operators from random matrix theory. Journal of Physics A: Mathematical and Theoretical, 40(5):F161, 2007.
[BG06] Daniel Bump and Alex Gamburd. On the averages of characteristic polynomials from classical groups. Communications in Mathematical Physics, 265(1):227–274, 2006.
[BHNY08] Paul Bourgade, Christopher Hughes, Ashkan Nikeghbali, and Marc Yor. The characteristic polynomial of a random unitary matrix: a probabilistic approach. Duke Math. J., 145(1):45–69, 2008.
[BJRV13] Julien Barral, Xiong Jin, Rémi Rhodes, and Vincent Vargas. Gaussian multiplicative chaos and KPZ duality. Communications in Mathematical Physics, 323(2):451–485, 2013.
[BS12] Andrei N. Borodin and Paavo Salminen.
Handbook of Brownian Motion – Facts and Formulae. Birkhäuser, 2012.
[CD16] Xiangyu Cao and Pierre Le Doussal. Joint min-max distribution and Edwards-Anderson's order parameter of the circular 1/f-noise model. EPL (Europhysics Letters), 114(4):40003, May 2016.
[CMN18] Reda Chhaibi, Thomas Madaule, and Joseph Najnudel. On the maximum of the CβE field. Duke Math. J., 167(12):2243–2345, 2018.
[CMV03] M. J. Cantero, L. Moral, and L. Velázquez. Five-diagonal matrices and zeros of orthogonal polynomials on the unit circle. Linear Algebra Appl., 362:29–56, 2003.
[DG10] Charles Dunkl and Stephen Griffeth. Generalized Jack polynomials and the representation theory of rational Cherednik algebras. Selecta Mathematica, 16(4):791–818, 2010.
[DRS+14] Bertrand Duplantier, Rémi Rhodes, Scott Sheffield, Vincent Vargas, et al. Critical Gaussian multiplicative chaos: convergence of the derivative martingale. The Annals of Probability, 42(5):1769–1808, 2014.
[DS94] Persi Diaconis and Mehrdad Shahshahani. On the eigenvalues of random matrices. J. Appl. Probab., 31A:49–62, 1994. Studies in applied probability.
[Dur19] Rick Durrett. Probability: Theory and Examples, volume 49. Cambridge University Press, 2019.
[Fal04] Kenneth Falconer. Fractal Geometry: Mathematical Foundations and Applications. John Wiley & Sons, 2004.
[FB08] Y. V. Fyodorov and J.-P. Bouchaud. Freezing and extreme value statistics in a Random Energy Model with logarithmically correlated potential. Journal of Physics A: Mathematical and Theoretical, 41(37), 2008.
[GNR] Fabrice Gamboa, Jan Nagel, and Alain Rouault. Minimization of the Killip-Simon entropy and Bernstein-Szegő type measures. In preparation.
[GZ07] Leonid Golinskii and Andrej Zlatoš. Coefficients of orthogonal polynomials on the unit circle and higher-order Szegő theorems. Constructive Approximation, 26(3):361–382, 2007.
[JM15] Tiefeng Jiang and Sho Matsumoto. Moments of traces of Circular β Ensembles. Ann. Probab.
, 43(6):3279–3336, 2015.
[JS+17] Janne Junnila, Eero Saksman, et al. Uniqueness of critical Gaussian chaos. Electronic Journal of Probability, 22, 2017.
[KLS98] Alexander Kiselev, Yoram Last, and Barry Simon. Modified Prüfer and EFGP transforms and the spectral analysis of one-dimensional Schrödinger operators. Communications in Mathematical Physics, 194(1):1–45, May 1998.
[KN04] Rowan Killip and Irina Nenciu. Matrix models for circular ensembles. Int. Math. Res. Not., (50):2665–2701, 2004.
[Lam19] Gaultier Lambert. Mesoscopic central limit theorem for the circular β ensembles and applications. ArXiv preprint arXiv:1902.06611, 2019.
[Mac98] Ian Grant Macdonald. Symmetric Functions and Hall Polynomials. Oxford University Press, 1998.
[MRV+16] Thomas Madaule, Rémi Rhodes, Vincent Vargas, et al. Glassy phase and freezing of log-correlated Gaussian potentials. The Annals of Applied Probability, 26(2):643–690, 2016.
[NSW18] Miika Nikula, Eero Saksman, and Christian Webb. Multiplicative chaos and the characteristic polynomial of the CUE: the L1-phase. ArXiv preprint arXiv:1806.01831, 2018.
[Pis16] Gilles Pisier. Martingales in Banach Spaces, volume 155. Cambridge University Press, 2016.
[Pow18] Ellen Powell. Critical Gaussian chaos: convergence and uniqueness in the derivative normalisation. Electronic Journal of Probability, 23, 2018.
[Rem17] Guillaume Remy. The Fyodorov-Bouchaud formula and Liouville conformal field theory. ArXiv preprint arXiv:1710.06897, October 2017.
[RS80] Michael Reed and Barry Simon. Functional Analysis, Vol. I, 1980.
[RV13] R. Rhodes and V. Vargas. Gaussian multiplicative chaos and applications: a review. ArXiv preprint arXiv:1305.6221, May 2013.
[Sim05a] Barry Simon. Orthogonal Polynomials on the Unit Circle. Part 1, volume 54 of American Mathematical Society Colloquium Publications. American Mathematical Society, Providence, RI, 2005. Classical theory.
[Sim05b] Barry Simon. Orthogonal Polynomials on the Unit Circle.
Part 2, volume 54 of American Mathematical Society Colloquium Publications. American Mathematical Society, Providence, RI, 2005. Spectral theory.
[SV07] Daniel W. Stroock and S. R. Srinivasa Varadhan. Multidimensional Diffusion Processes. Springer, 2007.
[Vir14] Bálint Virág. Operator limits of random matrices. Proceedings of the International Congress of Mathematicians, Volume 4, Seoul, pages 247–272, 2014. ArXiv preprint arXiv:1804.06953.
[Web15] Christian Webb. The characteristic polynomial of a random unitary matrix and Gaussian multiplicative chaos – the L2-phase. Electron. J. Probab., 20(104):1–21, 2015.

Université Paul Sabatier, Toulouse 3 – Institut de mathématiques de Toulouse (IMT) – 118, route de Narbonne, 31400, Toulouse, France
E-mail address: [email protected]

Bristol University – University Walk, Clifton, Bristol, United Kingdom
E-mail address: