Conditional expectations of random holomorphic fields on Riemann surfaces
RENJIE FENG
Abstract.
We study two conditional expectations: K_n(z|p), the expected density of critical points of Gaussian random holomorphic sections s_n ∈ H^0(M, L^n) of powers of a positive holomorphic line bundle (L,h) over a Riemann surface (M,ω), given that the random sections vanish at a point p; and D_n(z|q), the expected density of zeros given that the random sections have a fixed critical point q. The critical points are the points where ∇_{h^n} s_n = 0, where ∇_{h^n} is the smooth Chern connection of the Hermitian metric h^n. The main result is that the rescaled conditional expectations K_n(p + u/√n | p) and D_n(q + v/√n | q) have universal limits K_∞(u|0) and D_∞(v|0) as the power of the line bundle tends to infinity. We will see that the short distance behaviors of these two conditional expectations are quite different: the behavior between critical points and the conditioning zero is neutral, while there is a repulsion between zeros and the conditioning critical point. But the long distance behaviors of these two rescaled densities are the same.

1. Introduction
For Gaussian random polynomials of degree n, we study the conditional expectation of critical points given that the polynomials vanish at a point, and the conditional expectation of zeros given that the polynomials have a fixed critical point. More generally, we consider the conditional expectation K_n(z|p) of critical points of Gaussian random holomorphic sections s_n ∈ H^0(M, L^n) of powers of a positive holomorphic line bundle (L,h) over a Riemann surface (M,ω), given that the random sections vanish at a point p, and the conditional expectation D_n(z|q) of zeros given that the Chern derivative of the random sections vanishes at a point q. We will apply a Kac-Rice type formula to derive K_n(z|p) and the probabilistic Poincaré-Lelong formula to derive D_n(z|q); we then rescale them to prove that K_n(p + u/√n | p) and D_n(q + v/√n | q) have universal limits as n tends to infinity.

The motivation of this paper is to study the local behavior between the critical points and zeros of random holomorphic fields. The famous Gauss-Lucas Theorem states that the holomorphic critical points of any polynomial are contained in the convex hull of its zeros. This suggests that non-trivial correlations between zeros and critical points of random polynomials may exist. This problem has been studied recently in [9, 8] for Gaussian random SU(2) polynomials, where a two-point correlation function between zeros and critical points is derived; it is also proved that at the length scale n^{-1/2}, zeros and critical points appear in rigid pairs. Similar results seem to hold for holomorphic sections of line bundles over Riemann surfaces. In this article, we study the analogous problems: instead of the two-point correlation between zeros and critical points, we study the conditional expectations of critical points and zeros. Our essential setting on Riemann surfaces is that the critical points are defined as zeros of the derivative with respect to the smooth Chern connection ∇_{h^n}, instead of a meromorphic connection (or, locally, the holomorphic derivative ∂/∂z on a coordinate patch C of the Riemann surface).

Date: September 23, 2018.

1.1. Results on critical points.
To state our results, we need to recall the definition of Gaussian random holomorphic sections of a line bundle (see §2). Let (L,h) → (M,ω) be a positive Hermitian holomorphic line bundle over a Riemann surface with the Kähler form ω = (√−1/2)Θ_h, where Θ_h is the curvature of h. We denote by H^0(M, L^n) the space of global holomorphic sections of the n-th tensor power of L. A special case is when M = CP¹ and L = O(1) is the hyperplane line bundle; then H^0(CP¹, O(n)) is the space of homogeneous polynomials of degree n. The Hermitian metric h induces an inner product on H^0(M, L^n) and thus a Gaussian measure dγ_{d_n} on H^0(M, L^n), where d_n is the dimension of H^0(M, L^n).

We define the conditional expectation K_n(z|p) of critical points as a (1,1)-current,

(1)  ∫_M ψ K_n(z|p) = E_{(H^0(M,L^n), dγ_{d_n})} ( Σ_{z: ∇_{h^n} s_n = 0} ψ(z) | s_n(p) = 0 ),

for any test function ψ ∈ C^∞(M), where ∇_{h^n} is the Chern connection. In §3, we will rewrite the right hand side as an expectation taken in the probability space (H^p(M, L^n), dγ^p_{d_n−1}) with respect to the conditional Gaussian measure dγ^p_{d_n−1} (see §3 for the definition). We first derive the following global formula for K_n(z|p).

Theorem 1.
Let (L,h) → (M,ω) be a positive Hermitian holomorphic line bundle over a compact Riemann surface with the Kähler form ω = (√−1/2)Θ_h, and let (H^0(M, L^n), dγ_{d_n}) be the complex Gaussian ensemble defined in §2. Then the conditional expectation of critical points given a zero at p is

K_n(z|p) = ( ∫_{C²} ( 1/(π³ A_n det Λ_n) ) exp{ −⟨ (ξ, y)ᵀ, Λ_n^{−1} (ξ̄, ȳ)ᵀ ⟩ } · | |ξ|² − n² |y|² | dℓ_y dℓ_ξ ) ω(z),

where dℓ_y and dℓ_ξ are Lebesgue measures on C and Λ_n = C_n − A_n^{−1} B_n^* B_n, where

A_n = ∂_z ∂_w̄ Π^p_n(z,w)|_{z=w},
B_n = ( ∂_z ∂²_w̄ Π^p_n(z,w)|_{z=w}, ∂_z Π^p_n(z,w)|_{z=w} ),

and

C_n = ( ∂²_z ∂²_w̄ Π^p_n(z,w)|_{z=w} , ∂²_z Π^p_n(z,w)|_{z=w} ; ∂²_w̄ Π^p_n(z,w)|_{z=w} , Π^p_n(z,z) ),

where

Π^p_n(z,w) = Π_n(z,w) − Π_n(z,p) Π_n(p,w) / Π_n(p,p),

and Π_n(z,w) is the Bergman kernel, i.e., the orthogonal projection from the L² integrable sections onto the holomorphic sections (see §4).
We rescale the global expression of K_n(z|p) to obtain the following local behavior between critical points and the conditioning zero.

Theorem 2. The rescaling limit of the (1,1)-current of the conditional expectation has a pointwise universal limit,

K_∞(u|0) := lim_{n→∞} K_n(p + u/√n | p) = ( 1/(π a_∞²) ) · ( (λ₁^∞)² + (λ₂^∞)² )/( |λ₁^∞| + |λ₂^∞| ) · (√−1/2) du ∧ dū,

where

a_∞ = 1 + |u|²,   λ₁^∞ = 2 + 2|u|² + |u|⁴,   λ₂^∞ = −1 + |u|² e^{−|u|²} + e^{−|u|²}.

We will first prove this result for the special case of Gaussian random SU(2) polynomials in §5, and then for general Riemann surfaces in §6. To prove this result, we need estimates on the rescaling limits of the Bergman kernel Π_n(p + z/√n, p + w/√n) and its derivatives up to order 4.

1.2. Results on zeros.
The conditional expectation D_n(z|q) of zeros of Gaussian random holomorphic sections with a fixed critical point is defined similarly,

(2)  ∫_M ψ D_n(z|q) = E_{(H^0(M,L^n), dγ_{d_n})} ( Σ_{z: s_n = 0} ψ(z) | ∇_{h^n} s_n(q) = 0 )

for any test function ψ.

In §7, we will apply the probabilistic Poincaré-Lelong formula to obtain the following.
Theorem 3.
With the same assumptions as in Theorem 1, the conditional expectation of the empirical measure of zeros, given that the random sections have a critical point at q, is

D_n(z|q) = (√−1/2π) ∂∂̄ log Π^q_n(z,z),

where

Π^q_n(z,z) = Π_n(z,z) − | ∂_w̄ Π_n(z,w)|_{w=q} |² / (∂_z ∂_w̄ Π_n)(q,q).

Furthermore, D_n(z|q) admits the following universal limit.

Theorem 4.
D_∞(v|0) := lim_{n→∞} D_n(q + v/√n | q) = (√−1/2π) ∂∂̄ log ( e^{|v|²} − |v|² ).

Theorem 2 and Theorem 4 indicate that these two universal limits depend only on the distance r = |u| or |v| between the rescaled point and the conditioning fixed point. The following graphs illustrate the growth of the density functions of K_∞(u|0) and D_∞(v|0) (with the Lebesgue measure discarded).
[Figure: graphs of the density functions of K_∞(u|0) and D_∞(v|0).]

The short distance behaviors of K_∞(u|0) and D_∞(v|0) are quite different,

(3)  lim_{|u|→0} K_∞(u|0) = 2/π,   lim_{|v|→0} D_∞(v|0) = 0.

But the long distance behaviors are the same,

(4)  lim_{|u|→∞} K_∞(u|0) = 1/π,   lim_{|v|→∞} D_∞(v|0) = 1/π.
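The limits (3) and (4) can be sanity-checked numerically from the closed forms in Theorem 2 and Theorem 4. The sketch below is our own code, not the paper's; the two radial functions are the scalar densities of the limits relative to the rescaled Lebesgue measure, under the normalization conventions used here:

```python
import math

def k_density(r):
    # Radial density of K_infty(u|0) from Theorem 2, with r = |u|.
    a = 1.0 + r * r                                # a_infty
    l1 = 2.0 + 2.0 * r * r + r ** 4                # lambda_1^infty
    l2 = -1.0 + (1.0 + r * r) * math.exp(-r * r)   # lambda_2^infty
    return (l1 * l1 + l2 * l2) / ((abs(l1) + abs(l2)) * math.pi * a * a)

def d_density(r):
    # Radial density of D_infty(v|0) = (sqrt(-1)/2pi) dd-bar log(e^rho - rho),
    # rho = |v|^2.  For a radial F(rho), the density is (F'(rho) + rho F''(rho))/pi.
    rho = r * r
    g = math.exp(rho) - rho      # e^rho - rho
    g1 = math.exp(rho) - 1.0     # g'
    g2 = math.exp(rho)           # g''
    F1 = g1 / g
    F2 = (g2 * g - g1 * g1) / (g * g)
    return (F1 + rho * F2) / math.pi

# Short-distance limits (3): K -> 2/pi, D -> 0.
print(k_density(1e-6), 2 / math.pi)
print(d_density(1e-6))
# Long-distance limits (4): both tend to 1/pi.
print(k_density(6.0), 1 / math.pi)
print(d_density(6.0), 1 / math.pi)
```

The repulsion between zeros and the conditioning critical point shows up as d_density(r) vanishing quadratically in rho near 0, while k_density stays bounded away from zero.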
Intuitively, the rescaling limit measures the asymptotic probability of finding critical points/zeros in the geodesic ball of length scale n^{−1/2} centered at the conditioning point. Roughly speaking, since the limit D_∞(v|0) tends to 0 as |v| → 0, it is unlikely to find a zero near q; i.e., there is a 'repulsion' between zeros and the conditioning critical point. Such behavior is quite different from that of the critical points given a zero: the limit of K_∞(u|0) tends to a constant as |u| → 0.

In fact, K_∞(u|0) and D_∞(v|0) are the rescaled conditional densities for the Bargmann-Fock case with the conditioning point at z = 0, i.e., the corresponding conditional densities for the random holomorphic functions

f(z) = Σ_{j=0}^∞ ( a_j / √(j!) ) z^j,

where the a_j are i.i.d. standard complex Gaussian random variables with mean 0 and variance 1. The reason can be read off from the proof: the covariance kernel of Gaussian random holomorphic sections is expressed in terms of the Bergman kernel, while the rescaling limit of the Bergman kernel of any polarized line bundle over a Kähler manifold is universal, namely the rescaling limit of the Bergman kernel of the Bargmann-Fock space (see Remark 2).

Acknowledgement:
The author would like to thank Steve Zelditch for his generous support over many years. He would also like to thank Bernard Shiffman, Zuoqin Wang, Zhiqin Lu and Zhenan Wang for many helpful discussions. Many thanks also go to Richard Wentworth, Robert Adler and Bo Guan.

2. Background
In this section, we will review some basic concepts and notation on Gaussian random holomorphic sections of positive holomorphic line bundles over Riemann surfaces. We refer to [3, 7] for more details.

2.1. Kähler manifolds.
Let (M,ω) be a compact Riemann surface (which is a Kähler manifold) with the Kähler form

(5)  ω = (√−1/2) (∂²φ/∂z∂z̄) dz ∧ dz̄,

where φ is the local Kähler potential in a local coordinate patch U ⊂ M. Let (L,h) → (M,ω) be a positive holomorphic line bundle, i.e., the curvature of the Hermitian metric h,

(6)  Θ_h = − (∂² log h/∂z∂z̄) dz ∧ dz̄,

is a positive (1,1)-form [7]. Let e be a local non-vanishing holomorphic section of L over U ⊂ M, so that locally L|_U ≅ U × C, and the pointwise h-norm of e is |e|_h = h(e,e)^{1/2}. Throughout the article, we further assume that the line bundle is polarized, i.e., ω = (√−1/2)Θ_h, or equivalently |e|²_h = h(e,e) = e^{−φ}.

We denote by H^0(M, L^n) the space of global holomorphic sections of the n-th tensor power of L. Locally, we can write a global holomorphic section of L^n as s_n = f_n e^{⊗n}, where f_n is a holomorphic function on U. We denote the dimension of H^0(M, L^n) by d_n. The Hermitian metric h induces a Hermitian metric h^n on L^n given by |e^{⊗n}|_{h^n} = |e|^n_h, i.e., |s_n|²_{h^n} = |f_n|² h^n(e^{⊗n}, e^{⊗n}) = |f_n|² e^{−nφ}.

We decompose the smooth Chern connection ∇_{h^n} = ∇'_{h^n} + ∇''_{h^n} of the Hermitian line bundle (L^n, h^n) into its holomorphic and antiholomorphic parts, where in the local coordinate ∇'_{h^n} = d_z + n ∂ log h and ∇''_{h^n} = d_z̄. For polarized line bundles, given a global holomorphic section s_n = f_n e^{⊗n}, we have the following formula [7]:

(7)  ∇_{h^n} s_n = ( ∂f_n/∂z − n (∂φ/∂z) f_n ) e^{⊗n} ⊗ dz.

We can define an inner product on H^0(M, L^n) as

(8)  ⟨s_{n,1}, s_{n,2}⟩_{h^n} = ∫_M h^n(s_{n,1}, s_{n,2}) ω.
Under the local coordinate it reads

(9)  ⟨s_{n,1}, s_{n,2}⟩_{h^n} = ∫_M f_{n,1} f̄_{n,2} h^n(e^{⊗n}, e^{⊗n}) ω = ∫_M f_{n,1} f̄_{n,2} e^{−nφ} ω,

where s_{n,1} = f_{n,1} e^{⊗n} and s_{n,2} = f_{n,2} e^{⊗n}.

2.2. Gaussian random fields. Let us recall that a complex Gaussian measure on C^k is a measure of the form

(10)  dγ_∆ = ( e^{−⟨∆^{−1} z, z̄⟩} / (π^k det ∆) ) dℓ^k_z,

where dℓ^k_z denotes the Lebesgue measure on C^k and ∆ is a positive definite Hermitian k×k matrix; the matrix ∆ is the covariance matrix.

The inner product (8) induces a complex Gaussian probability measure dγ_{d_n} on the space H^0(M, L^n) as

(11)  dγ_{d_n}(s_n) = ( e^{−|a|²} / π^{d_n} ) da,   s_n = Σ_{j=1}^{d_n} a_j s_{n,j},

where {s_{n,1}, ..., s_{n,d_n}} is an orthonormal basis for H^0(M, L^n) and {a_1, ..., a_{d_n}} are i.i.d. standard complex Gaussian random variables with mean 0 and variance 1.

3. Conditional expectation
In this section, we will rewrite the conditional expectations of the empirical measures in (1) and (2) by defining the conditional Gaussian measure; then we will derive the covariance kernels of the conditional Gaussian processes.

3.1. Conditional Gaussian measure.
Let dγ be a complex Gaussian measure on a finite dimensional complex vector space V, and let W be a vector subspace of V. We define the conditional Gaussian measure dγ_W on W to be the restriction of dγ to W, associated with the Hermitian inner product on W induced by the inner product on V [14].

This definition can be understood as follows. For simplicity, let W be a vector subspace of V of codimension 1. Let {v_1, ..., v_m} be an orthonormal basis of V and dγ the Gaussian measure induced by the inner product,

dγ(v) = ( e^{−|a|²} / π^m ) da,   v = Σ_{j=1}^m a_j v_j,

where the a_j are i.i.d. standard complex Gaussian random variables.

Let {w_1, ..., w_{m−1}} be an orthonormal basis of W, where the inner product is induced by that of V, and extend it to an orthonormal basis {w_1, ..., w_{m−1}, w_m} of V. There exists a unitary matrix U = (u_{ij}) such that v_j = Σ_{i=1}^m u_{ji} w_i. Then we can express v = Σ_{j=1}^m a_j ( Σ_{i=1}^m u_{ji} w_i ) = Σ_{i=1}^m ( Σ_{j=1}^m a_j u_{ji} ) w_i. By the definition of the conditional expectation, the conditional Gaussian measure dγ_W induced by dγ is the measure induced by the conditional Gaussian process

w = Σ_{i=1}^{m−1} ( Σ_{j=1}^m a_j u_{ji} ) w_i,

which is the projection of v onto W. The conditional expectation taken with respect to dγ(v) is then taken with respect to dγ_W(w). Furthermore, we can compute the covariance kernel of the conditional Gaussian process. The covariance kernel of the Gaussian process v is

C_V(v(x), v(y)) = E_{dγ}( v(x) v̄(y) ) = Σ_{j=1}^m v_j(x) v̄_j(y) = Σ_{j=1}^m w_j(x) w̄_j(y),

and the covariance kernel of the conditional Gaussian process is

C_W(w(x), w(y)) = E_{dγ_W}( w(x) w̄(y) ) = E_{dγ} [ Σ_{i=1}^{m−1} ( Σ_{j=1}^m a_j u_{ji} ) w_i(x) ] [ Σ_{i=1}^{m−1} ( Σ_{j=1}^m ā_j ū_{ji} ) w̄_i(y) ] = Σ_{j=1}^{m−1} w_j(x) w̄_j(y).
Hence, we have the following crucial relation,

(12)  C_W(w(x), w(y)) = C_V(v(x), v(y)) − w_m(x) w̄_m(y).
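Relation (12) is a finite-dimensional statement about projecting out one orthonormal direction, and it can be sanity-checked numerically. The following sketch (our own code and naming, not the paper's) builds an orthonormal basis of a random subspace V ⊂ C^d, a unit vector w_m spanning the orthogonal complement of a codimension-one subspace W inside V, and an independently constructed orthonormal basis of W, and then verifies (12) entrywise:

```python
import random

random.seed(1)
d, m = 6, 4

def dot(a, b):
    # Hermitian inner product <a, b> = sum_i a_i conj(b_i)
    return sum(x * y.conjugate() for x, y in zip(a, b))

def gram_schmidt(vectors, tol=1e-9):
    basis = []
    for v in vectors:
        for b in basis:
            c = dot(v, b)
            v = [x - c * y for x, y in zip(v, b)]
        norm = abs(dot(v, v)) ** 0.5
        if norm > tol:
            basis.append([x / norm for x in v])
    return basis

rand_vec = lambda: [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(d)]

# Orthonormal basis {v_1, ..., v_m} of a random m-dimensional subspace V of C^d.
v = gram_schmidt([rand_vec() for _ in range(m)])

# w_m: unit vector in V; W = {s in V : <s, w_m> = 0} has codimension 1 in V.
t = rand_vec()
proj = [sum(dot(t, vj) * vj[i] for vj in v) for i in range(d)]
nrm = abs(dot(proj, proj)) ** 0.5
w_m = [x / nrm for x in proj]

# Independently construct an orthonormal basis {w_1, ..., w_{m-1}} of W.
w_basis = gram_schmidt([[vj[i] - dot(vj, w_m) * w_m[i] for i in range(d)] for vj in v])
assert len(w_basis) == m - 1

def kernel(basis):
    # Covariance kernel C(x, y) = sum_j b_j(x) conj(b_j(y)), as a d x d matrix.
    return [[sum(b[i] * b[j].conjugate() for b in basis) for j in range(d)]
            for i in range(d)]

C_V, C_W = kernel(v), kernel(w_basis)

# Relation (12): C_W(x, y) = C_V(x, y) - w_m(x) conj(w_m(y)).
err = max(abs(C_W[i][j] - (C_V[i][j] - w_m[i] * w_m[j].conjugate()))
          for i in range(d) for j in range(d))
print("max deviation from (12):", err)
```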
3.2. Conditional densities. Now we can define, via the conditional Gaussian measure, the expected density of critical points of Gaussian random sections given that the random sections vanish at a point p. We need the following bundle-valued map,

(13)  T: H^0(M, L^n) → L^n_p,   s_n ↦ s_n(p).

We define H^p(M, L^n) ⊂ H^0(M, L^n) to be the kernel of T, which is a subspace of codimension 1.

We denote by C_{s_n} the empirical measure of critical points of the section,

(14)  C_{s_n} = Σ_{z ∈ M: ∇_{h^n} s_n(z) = 0} δ_z.

By the definition of the conditional Gaussian measure, we have the relation

(15)  E_{(H^0(M,L^n), dγ_{d_n})}( C_{s_n} | s_n(p) = 0 ) = E_{(H^p(M,L^n), dγ^p_{d_n−1})}( C_{s_n} ),

in the sense of distributions,

(16)  ⟨ E_{(H^0(M,L^n), dγ_{d_n})}( C_{s_n} | s_n(p) = 0 ), ψ ⟩ = E_{(H^p(M,L^n), dγ^p_{d_n−1})} ⟨ C_{s_n}, ψ ⟩,

for any test function ψ ∈ C^∞(M), where dγ^p_{d_n−1} is the conditional Gaussian measure, i.e., the restriction of dγ_{d_n} to H^p(M, L^n).

We denote by K_n(z|p) the conditional expectation E_{(H^0(M,L^n), dγ_{d_n})}( C_{s_n} | s_n(p) = 0 ); then K_n(z|p) is a (1,1)-current satisfying

(17)  ∫_M ψ K_n(z|p) = E_{(H^p(M,L^n), dγ^p_{d_n−1})} Σ_{z ∈ C_{s_n}} ψ(z).

To define the conditional expectation of zeros given that the random sections have a critical point at q, we need the following bundle-valued linear map,

(18)  K: H^0(M, L^n) → ( L^n ⊗ T^{*(1,0)}M )_q,   s_n ↦ ∇_{h^n} s_n(q),

where ∇_{h^n} is the Chern connection. We denote the kernel of this linear map by H^q(M, L^n). By the Kodaira embedding, H^q(M, L^n) is a subspace of H^0(M, L^n) of codimension 1. We denote

(19)  Z_{s_n} = Σ_{z ∈ M: s_n(z) = 0} δ_z,

and define D_n(z|q) := E_{(H^0(M,L^n), dγ_{d_n})}( Z_{s_n} | ∇_{h^n} s_n(q) = 0 ). Then, similarly, we have

(20)  ∫_M ψ D_n(z|q) = E_{(H^q(M,L^n), dγ^q_{d_n−1})} Σ_{z ∈ Z_{s_n}} ψ(z),

where dγ^q_{d_n−1} is the conditional Gaussian measure on the subspace H^q(M, L^n).

4. Proof of Theorem 1
In this section, we will derive a Kac-Rice type formula for the global expression of the conditional expectation K_n(z|p). The formula may be derived from [1, 3, 5, 6], but we take advantage of some simplifications to speed up the proof.

4.1. Kac-Rice formula. In a local coordinate U ≅ C and a local trivialization of L, we write the conditional Gaussian random sections with a zero at p as s^p_n = f^p_n e^{⊗n}. We will prove the following.
Lemma 1.
The (1,1)-current of the conditional expectation of critical points of sections with a conditioning zero at p, with respect to the Gaussian measure dγ_{d_n}, is

(21)  K_n(z|p) = ( ∫_{C²} p_{nz}(y, 0, ξ) · | |ξ|² − n² |y|² | dℓ_y dℓ_ξ ) ω,

where p_{nz}(y, s, ξ) is the joint density of the conditional Gaussian processes (f^p_n, ∂f^p_n/∂z, ∂²f^p_n/∂z²), and dℓ_y and dℓ_ξ are Lebesgue measures on C.

Proof. The strategy to obtain this formula is to find the local expression in a coordinate patch U ≅ C, then turn it into a global one.

We denote

(22)  Ω_p = { z ∈ C : ( ∂f^p_n/∂z − n (∂φ/∂z) f^p_n ) e^{−nφ} = 0 };

then Ω_p coincides with the set of critical points { z ∈ C : ∇_{h^n} s^p_n = 0 }, which is { z ∈ C : ∂f^p_n/∂z − n (∂φ/∂z) f^p_n = 0 } on the local coordinate patch (recall (7)).

We first introduce some notation:

(23)  p_n = f^p_n e^{−nφ},   q_n = ( ∂f^p_n/∂z − n (∂φ/∂z) f^p_n ) e^{−nφ},   r_n = ( ∂²f^p_n/∂z² ) e^{−nφ};

then p_n, q_n and r_n are all complex Gaussian random variables. (We thank Prof. Zhiqin Lu for clarifying this.)
By the definition of the delta function, for any test function ψ ∈ C^∞(C) we have

⟨ Σ_{z ∈ Ω_p} δ_z, ψ ⟩ = Σ_{z: q_n(z)=0} ψ(z) = ∫_C δ(q_n) ψ(z) (√−1/2) dq_n ∧ dq̄_n = ∫_C δ(q_n) ψ(z) | |∂q_n/∂z|² − |∂q_n/∂z̄|² | dℓ_z,

where dℓ_z is the Lebesgue measure on C.

By direct computation, we have

∂q_n/∂z = ( ∂²f^p_n/∂z² − n (∂²φ/∂z²) f^p_n − n (∂φ/∂z)(∂f^p_n/∂z) ) e^{−nφ} − n (∂φ/∂z) q_n = r_n − 2n (∂φ) q_n − n² (∂φ)² p_n − n (∂²φ) p_n

and

∂q_n/∂z̄ = − n (∂∂̄φ) p_n − n (∂̄φ) q_n.

Taking the expectation on both sides, we have, locally,

E ⟨ Σ_{z ∈ Ω_p} δ_z, ψ ⟩ = E ∫_C δ(q_n) ψ(z) | |∂q_n/∂z|² − |∂q_n/∂z̄|² | dℓ_z = ∫_C ψ(z) ( ∫_{C²} p_{nz}(y, 0, ξ) | |ξ − n² (∂φ)² y − n (∂²φ) y|² − n² |∂∂̄φ|² |y|² | dℓ_ξ dℓ_y ) dℓ_z,

where p_{nz}(y, s, ξ) is the joint density of the Gaussian random field (p_n, q_n, r_n), and dℓ_ξ and dℓ_y are Lebesgue measures on C. Thus the conditional density is locally given by the (1,1)-current

(24)  ( ∫_{C²} p_{nz}(y, 0, ξ) | |ξ − n² (∂φ)² y − n (∂²φ) y|² − n² |∂∂̄φ|² |y|² | dℓ_ξ dℓ_y ) dℓ_z

on the local coordinate patch U ≅ C of the Riemann surface.

Now we need to obtain the global expression of the conditional density. Since the conditional expectation is a (1,1)-current globally defined on the Riemann surface (and in particular independent of the local coordinate and the local frame), it suffices to find the formula after freezing at a point, and the resulting formula will turn out to be global.
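The change of variables above uses the standard fact that, for a smooth map q: C → C written in the complex variable z, the Jacobian determinant of the underlying real map (x, y) ↦ (Re q, Im q) equals |∂q/∂z|² − |∂q/∂z̄|². A quick finite-difference check of this identity on a sample non-holomorphic function of our own choosing:

```python
def q(z):
    # A sample smooth, non-holomorphic function.
    return z ** 3 - 2 * z.conjugate() * z + 1

def jacobian_det(f, z, h=1e-5):
    # Determinant of the real 2x2 Jacobian of (x, y) -> (Re f, Im f),
    # by central finite differences.
    fx = (f(z + h) - f(z - h)) / (2 * h)
    fy = (f(z + 1j * h) - f(z - 1j * h)) / (2 * h)
    return fx.real * fy.imag - fx.imag * fy.real

z0 = 0.7 - 0.4j
# Exact Wirtinger derivatives of q: dq/dz = 3z^2 - 2 zbar, dq/dzbar = -2z.
dz = 3 * z0 ** 2 - 2 * z0.conjugate()
dzbar = -2 * z0
print(jacobian_det(q, z0), abs(dz) ** 2 - abs(dzbar) ** 2)
```

For holomorphic q the second term vanishes and the Jacobian is simply |q'(z)|², which is why zeros of holomorphic sections carry positive local degree.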
For this purpose, given a positive line bundle (L,h) → (M,ω) over a complex m-dimensional Kähler manifold, we freeze at a point z, taken as the origin of the coordinate patch, and choose Kähler normal coordinates {z_j} as well as an adapted frame e_L of the line bundle L around z. It is well known that, in terms of the Kähler normal coordinates {z_j}, the Kähler potential φ has the following expansion in a neighborhood of the origin z:

(25)  φ(z, z̄) = ‖z‖² − (1/4) Σ R_{j k̄ p q̄}(z) z_j z̄_k z_p z̄_q + O(‖z‖⁵).

And thus, at the origin,

(26)  φ(z) = 0,   ∂φ(z) = 0,   ∂²φ(z) = 0,   ∂∂̄φ(z) = 1,   ω(z) = dℓ_z.

An example of Kähler normal coordinates and an adapted frame is presented in §5: the affine coordinate for the Fubini-Study metric of the hyperplane line bundle over the complex projective space, (O(1), h_{FS}) → (CP¹, ω_{FS}). We also refer to §5 for more details.

If we freeze at the point z, then by the identities (26), the joint density of the Gaussian processes (p_n, q_n, r_n) (recall (23)) at z is the same as the joint density of the Gaussian processes (f^p_n, ∂f^p_n, ∂²f^p_n). Hence, by (26) again, the local expression (24) admits the following global expression:

E Σ_{z ∈ Ω_p} δ_z = ( ∫_{C²} p_{nz}(y, 0, ξ) | |ξ|² − n² |y|² | dℓ_y dℓ_ξ ) ω := K_n(z|p),

where p_{nz}(y, s, ξ) is the joint density of the Gaussian processes (f^p_n, ∂f^p_n/∂z, ∂²f^p_n/∂z²). This completes the proof of Lemma 1. □

4.2. Proof of Theorem 1.
In this subsection, we will calculate the joint density p_{nz} of the Gaussian processes (f^p_n, ∂f^p_n, ∂²f^p_n).

For the conditional Gaussian processes (f^p_n, ∂f^p_n, ∂²f^p_n), the joint density is given by the formula [1]

(27)  p_{nz}(y, s, ξ) = ( 1/(π³ det ∆_n) ) exp{ −⟨ (y, s, ξ)ᵀ, (∆_n)^{−1} (ȳ, s̄, ξ̄)ᵀ ⟩ },

where ∆_n is the covariance matrix of the conditional Gaussian processes (f^p_n, ∂f^p_n, ∂²f^p_n). We rearrange the order of the Gaussian processes and write ∆̃_n for the covariance matrix of (∂f^p_n, ∂²f^p_n, f^p_n); then we rewrite

(28)  p_{nz}(y, s, ξ) = ( 1/(π³ det ∆̃_n) ) exp{ −⟨ (s, ξ, y)ᵀ, (∆̃_n)^{−1} (s̄, ξ̄, ȳ)ᵀ ⟩ }.

The covariance matrix is then given by

(29)  ∆̃_n = ( A_n , B_n ; B_n^* , C_n )_{3×3},

where

A_n = ∂_z ∂_w̄ Π^p_n(z,w)|_{z=w},
B_n = ( ∂_z ∂²_w̄ Π^p_n(z,w)|_{z=w}, ∂_z Π^p_n(z,w)|_{z=w} ),

and

C_n = ( ∂²_z ∂²_w̄ Π^p_n(z,w)|_{z=w} , ∂²_z Π^p_n(z,w)|_{z=w} ; ∂²_w̄ Π^p_n(z,w)|_{z=w} , Π^p_n(z,z) ),

where Π^p_n(z,w) is the covariance kernel of the conditional Gaussian process f^p_n. Thus, when s = 0, by some elementary matrix computations, we have the following.

Lemma 2.
With the notation above,

(30)  p_{nz}(y, 0, ξ) = ( 1/(π³ A_n det Λ_n) ) exp{ −⟨ (ξ, y)ᵀ, Λ_n^{−1} (ξ̄, ȳ)ᵀ ⟩ },

where

(31)  Λ_n = C_n − A_n^{−1} B_n^* B_n.

We combine Lemma 1 and Lemma 2 to rewrite

K_n(z|p) = ( ∫_{C²} ( 1/(π³ A_n det Λ_n) ) exp{ −⟨ (ξ, y)ᵀ, Λ_n^{−1} (ξ̄, ȳ)ᵀ ⟩ } | |ξ|² − n² |y|² | dℓ_y dℓ_ξ ) ω.

Hence, we will complete the proof of Theorem 1 once we derive the expression of the covariance kernel Π^p_n(z,w) of the conditional Gaussian process. This is computed in the next subsection.

4.3. Covariance kernel.
The Bergman kernel is the orthogonal projection from the L² integrable sections onto the holomorphic sections,

(32)  Π_n(z,w): L²(M, L^n) → H^0(M, L^n),

with respect to the inner product (8). It has the following reproducing property,

(33)  ⟨ s_n(z), Π_n(z,w) ⟩_{h^n} = s_n(w),

where s_n ∈ H^0(M, L^n) is a global holomorphic section. Let {s_{n,1}, ..., s_{n,d_n}} be any orthonormal basis of H^0(M, L^n) with respect to the inner product (8); then we have

(34)  Π_n(z,w) = Σ_{j=1}^{d_n} s_{n,j}(z) ⊗ s̄_{n,j}(w).

It is easy to check that Π_n(z,w) is also the covariance kernel of the Gaussian process (H^0(M, L^n), dγ_{d_n}) defined by (11).

Recall that H^p(M, L^n) ⊂ H^0(M, L^n) is the space of holomorphic sections vanishing at p. Let {s^p_{n,1}, ..., s^p_{n,d_n−1}} be an orthonormal basis of H^p(M, L^n) with respect to the inner product (8). By the reproducing property of the Bergman kernel Π_n(z,w), one can show that the holomorphic sections {s^p_{n,1}, ..., s^p_{n,d_n−1}, Φ^p_n} form an orthonormal basis of H^0(M, L^n) (see equation (3.7) in [14] for more details), where (discarding the local frame e^{⊗n})

(35)  Φ^p_n(z) = Π_n(z,p) / √(Π_n(p,p)).

Recalling relation (12), the covariance kernel of the conditional Gaussian process is

(36)  Π^p_n(z,w) = Π_n(z,w) − Φ^p_n(z) Φ̄^p_n(w).

Hence, we complete the proof of Theorem 1.
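The conditioning in (35) and (36) can be illustrated concretely with an SU(2)-type kernel of the form used in §5 (a sketch in our own notation and normalization): the conditional kernel Π^p_n(z,w) = Π_n(z,w) − Π_n(z,p)Π_n(p,w)/Π_n(p,p) is Hermitian, vanishes whenever either argument equals p, and stays nonnegative on the diagonal:

```python
n = 8
p = 0.3 + 0.2j

def Pi(z, w):
    # SU(2)-type Bergman kernel in the affine chart: (n+1)(1 + z wbar)^n.
    return (n + 1) * (1 + z * w.conjugate()) ** n

def Pi_p(z, w):
    # Conditional covariance kernel (36): project out the direction Phi_p of (35).
    return Pi(z, w) - Pi(z, p) * Pi(p, w) / Pi(p, p)

pts = [0.1 - 0.5j, -0.7 + 0.2j, 1.3 + 0.9j]
for z in pts:
    print(abs(Pi_p(z, p)), abs(Pi_p(p, z)))   # both ~0: sections vanish at p
    print(Pi_p(z, z).real)                    # nonnegative (Cauchy-Schwarz)
# Hermitian symmetry: Pi_p(z, w) = conj(Pi_p(w, z))
print(abs(Pi_p(pts[0], pts[1]) - Pi_p(pts[1], pts[0]).conjugate()))
```

The diagonal nonnegativity is exactly the Cauchy-Schwarz inequality |Π_n(z,p)|² ≤ Π_n(z,z)Π_n(p,p) for the reproducing kernel.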
4.4. Further simplification.
We can further simplify the expression of K_n(z|p) in Theorem 1 as follows. Let H = (ξ, y); then we can rewrite

∫_{C²} ( 1/(π³ A_n det Λ_n) ) exp{ −⟨ (ξ, y)ᵀ, Λ_n^{−1} (ξ̄, ȳ)ᵀ ⟩ } | |ξ|² − n² |y|² | dℓ_y dℓ_ξ

as

( 1/(π³ A_n det Λ_n) ) ∫_{C²} e^{−H Λ_n^{−1} H^*} | H Q_n H^* | dℓ_H,

where

Q_n = ( 1 , 0 ; 0 , −n² )

and dℓ_H is the Lebesgue measure on C². We change variables H → H Λ_n^{1/2} to get

( 1/(π³ A_n) ) ∫_{C²} e^{−H H^*} | H Λ_n^{1/2} Q_n Λ_n^{1/2} H^* | dℓ_H.

We diagonalize Λ_n^{1/2} Q_n Λ_n^{1/2}, with eigenvalues λ₁_n and λ₂_n (note that it is easy to check that λ₁_n and λ₂_n are also the eigenvalues of Λ_n Q_n), to simplify the above integral to

( 1/(π³ A_n) ) ∫_{C²} | λ₁_n |ξ|² + λ₂_n |y|² | e^{−|y|² − |ξ|²} dℓ_y dℓ_ξ = ( 1/(π A_n) ) ∫₀^∞ ∫₀^∞ | λ₁_n x + λ₂_n y | e^{−x−y} dx dy = ( 1/(π A_n) ) ( λ₁_n² + λ₂_n² )/( |λ₁_n| + |λ₂_n| ),

where in the last step we use the fact that λ₁_n and λ₂_n have opposite signs, since the quadratic form Q_n is indefinite.
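The closed form of the last integral, ∫₀^∞∫₀^∞ |λ₁x + λ₂y| e^{−x−y} dx dy = (λ₁² + λ₂²)/(|λ₁| + |λ₂|) for λ₁ and λ₂ of opposite signs, can be verified by Monte Carlo over independent Exp(1) variables (a sketch with sample eigenvalues of our choosing):

```python
import math, random

def lhs(l1, l2, samples=200000):
    # Monte Carlo estimate of E|l1*X + l2*Y| for X, Y i.i.d. Exp(1).
    rng = random.Random(0)
    total = 0.0
    for _ in range(samples):
        x = -math.log(1.0 - rng.random())
        y = -math.log(1.0 - rng.random())
        total += abs(l1 * x + l2 * y)
    return total / samples

def rhs(l1, l2):
    return (l1 * l1 + l2 * l2) / (abs(l1) + abs(l2))

for l1, l2 in [(2.0, -1.0), (1.0, -3.0), (5.0, -0.5)]:
    print(lhs(l1, l2), rhs(l1, l2))
```

Note that the identity fails when λ₁ and λ₂ have the same sign, where the integral is simply λ₁ + λ₂; the opposite-sign case is the relevant one here because of the signature of Q_n.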
Lemma 3.
The conditional expectation is

(37)  K_n(z|p) = ( 1/(π A_n) ) ( λ₁_n² + λ₂_n² )/( |λ₁_n| + |λ₂_n| ) ω(z),

where λ₁_n and λ₂_n are the eigenvalues of (Λ_n Q_n)(z).

5. Calculations of Theorem 2 for Gaussian random SU(2) polynomials

In this section, we will derive the rescaled conditional density of critical points for Gaussian random SU(2) polynomials. This is the case where M = CP¹ ≅ S² and L is the hyperplane line bundle O(1). The global holomorphic sections of O(1) are linear functions on C², and hence the global holomorphic sections of L^n = O(n) are homogeneous polynomials of degree n.

The Kähler form on CP¹ is the Fubini-Study form. In an affine coordinate, the Kähler form and the Kähler potential for the Fubini-Study metric are

(38)  ω_{FS} = (√−1/2) dz ∧ dz̄ / (1 + |z|²)²,   φ(z) = log(1 + |z|²).

It is easy to check that φ satisfies (26), so the affine coordinate is actually a Kähler normal coordinate at z = 0.

We equip O(1) with its Fubini-Study metric. In fact, we can choose an adapted frame e(z) such that

|e(z)|²_{h_{FS}} = e^{−φ} = 1/(1 + |z|²).

Hence, an orthonormal basis of H^0(CP¹, O(n)) under the inner product (8) is given by

{ ( √( (n+1) C(n,j) ) z^j ) e^{⊗n} }_{j=0}^n,

where C(n,j) is the binomial coefficient. Throughout the article, we will discard the local frame e^{⊗n} for simplicity.
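Summing this orthonormal basis according to (34) gives the SU(2) Bergman kernel in closed form; the identity Σ_{j=0}^n (n+1) C(n,j) z^j w̄^j = (n+1)(1 + z w̄)^n is just the binomial theorem, and can be confirmed numerically (a sketch in our notation):

```python
from math import comb

n = 12
z = 0.4 + 0.3j
w = -0.2 + 0.5j

# Bergman kernel via (34): sum over the basis sqrt((n+1) C(n,j)) z^j.
series = sum((n + 1) * comb(n, j) * z ** j * w.conjugate() ** j
             for j in range(n + 1))

# Closed form: (n+1)(1 + z wbar)^n, by the binomial theorem.
closed = (n + 1) * (1 + z * w.conjugate()) ** n

print(series, closed, abs(series - closed))
```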
The Gaussian linear combination of the above basis gives the Gaussian random SU(2) polynomials, and the distribution of zeros of such polynomials is invariant under rotations of S² [10].

By (34), the Bergman kernel for the Fubini-Study case is

Π^{SU(2)}_n(z,w) = (n+1)(1 + z w̄)^n.

By the expression for K_n(z|p) in Theorem 1, the expected density of critical points is unchanged when the covariance kernel is multiplied by a constant (or, equivalently, when the Gaussian process is multiplied by a constant). In the following computations, for simplicity, we replace the Bergman kernel Π^{SU(2)}_n(z,w) by the normalized Bergman kernel

Π_n(z,w) = (1 + z w̄)^n.

By formula (36), we have the following expression for the covariance kernel of the conditional Gaussian measure:

Π^p_n(z,w) = (1 + z w̄)^n − (1 + z p̄)^n (1 + p w̄)^n / (1 + p p̄)^n.

Now let us compute the matrices B_n and C_n for H^0(CP¹, O(n)). Indeed, we have

∂Π^p_n/∂z = n w̄ (1 + z w̄)^{n−1} − n p̄ (1 + z p̄)^{n−1} (1 + p w̄)^n / (1 + p p̄)^n,

∂²Π^p_n/∂z∂w̄ = n (1 + z w̄)^{n−1} + n(n−1) z w̄ (1 + z w̄)^{n−2} − n² p p̄ (1 + z p̄)^{n−1} (1 + p w̄)^{n−1} / (1 + p p̄)^n,

∂²Π^p_n/∂z² = n(n−1) w̄² (1 + z w̄)^{n−2} − n(n−1) p̄² (1 + z p̄)^{n−2} (1 + p w̄)^n / (1 + p p̄)^n,

∂³Π^p_n/∂z²∂w̄ = 2n(n−1) w̄ (1 + z w̄)^{n−2} + n(n−1)(n−2) z w̄² (1 + z w̄)^{n−3} − n²(n−1) p p̄² (1 + z p̄)^{n−2} (1 + p w̄)^{n−1} / (1 + p p̄)^n,

∂⁴Π^p_n/∂z²∂w̄² = 2n(n−1) (1 + z w̄)^{n−2} + 4n(n−1)(n−2) z w̄ (1 + z w̄)^{n−3} + n(n−1)(n−2)(n−3) z² w̄² (1 + z w̄)^{n−4} − n²(n−1)² p² p̄² (1 + z p̄)^{n−2} (1 + p w̄)^{n−2} / (1 + p p̄)^n.

Throughout the article, the notation a_n ∼ b_n means a_n = b_n + o(b_n) as n → ∞; for simplicity we discard the negligible terms o(b_n) in some steps, since they do not contribute to the pointwise limit as n → ∞, although precise estimates for all the error terms can be derived.

In order to get the rescaled density K_n(p + u/√n | p) around p, we choose the affine coordinate at p = 0 and set z = u/√n. By Lemma 3, we need the rescaling limits of λ₁(u/√n) and λ₂(u/√n), where λ₁ and λ₂ are the eigenvalues of the matrix Λ_n Q_n; equivalently, we need the estimates of the two eigenvalues of the matrix (Λ_n Q_n)(u/√n).

We first have the following asymptotics as n → ∞:

(39)  A_n(u/√n) = n (1 + |u|²/n)^{n−1} + (n−1) |u|² (1 + |u|²/n)^{n−2} ∼ n (1 + |u|²) e^{|u|²}.

Similarly, we have

B_n(u/√n) ∼ ( 2 n^{3/2} u + n^{3/2} u |u|² , n^{1/2} ū ) e^{|u|²},

and

C_n(u/√n) ∼ e^{|u|²} ( n² + 4 n² |u|² + n² |u|⁴ , n ū² ; n u² , 1 − e^{−|u|²} ).

Since Λ_n = C_n − A_n^{−1} B_n^* B_n, it is straightforward to compute the asymptotics of (Λ_n Q_n)(u/√n), and its eigenvalues satisfy

(40)  λ₁(u/√n) ∼ n² (2 + 2|u|² + |u|⁴) e^{|u|²} / (1 + |u|²) > 0,

(41)  λ₂(u/√n) ∼ n² ( −1 + e^{−|u|²} + |u|² e^{−|u|²} ) e^{|u|²} / (1 + |u|²) ≤ 0.

For the Fubini-Study metric, we have the following estimate,

(42)  lim_{n→∞} n ω_{FS}(u/√n) = lim_{n→∞} n (√−1/2) d(u/√n) ∧ d(ū/√n) / (1 + |u/√n|²)² = (√−1/2) du ∧ dū.

As a remark, this estimate is true for any Kähler metric ω, by (26); i.e.,

(43)  lim_{n→∞} n ω(p + u/√n) = (√−1/2) du ∧ dū.

If we combine Lemma 3 with the asymptotics (39), (40), (41) and (42), we obtain the limit

lim_{n→∞} K_n(p + u/√n | p) = ( 1/(π a_∞²) ) ( (λ₁^∞)² + (λ₂^∞)² )/( |λ₁^∞| + |λ₂^∞| ) (√−1/2) du ∧ dū,

with a_∞, λ₁^∞ and λ₂^∞ given in Theorem 2. Hence we prove Theorem 2 for Gaussian random SU(2) polynomials.

Remark 1. In [14], the authors studied the rescaling limit of the expected density of zeros given that the random sections vanish at a point.
The expected (conditional) density of zeros of Gaussian random holomorphic functions can be derived from the probabilistic Poincaré-Lelong formula (see §7). In the case of Gaussian random SU(2) polynomials, we have the following explicit global expression:

E^{SU(2)}_{dγ_{d_n}} ( Σ_{z: s_n(z)=0} δ_z | s_n(p) = 0 ) = (√−1/2π) ∂∂̄ log Π^p_n(z,z) = (√−1/2π) ∂∂̄ log ( (1 + |z|²)^n − |1 + z p̄|^{2n} / (1 + |p|²)^n ).

Hence, choosing the affine coordinate at p = 0 and setting z = u/√n, the rescaling limit of the above density is

(√−1/2π) ∂∂̄ log( e^{|u|²} − 1 ),

which is one of the main results, Corollary 1.3, of [14].

6. Proof of Theorem 2
In this section, we will prove Theorem 2 for any Riemann surface.

By Lemma 2, the joint density p_{nz} depends only on the Bergman kernel and its derivatives up to order 4; thus the rescaling limit of the conditional expectation depends only on the rescaling limits of the Bergman kernel and its derivatives. We will see that all these rescaling limits are universal.

The Bergman kernel has the following Tian-Yau-Zelditch C^∞-expansion on the diagonal for Riemann surfaces [12, 15, 16, 17]:

(44)  Π_n(z,z) = n e^{nφ} ( 1 + a₁(z) n^{−1} + a₂(z) n^{−2} + ··· ),

where a₁ is the scalar curvature of ω. Integrating over M with respect to e^{−nφ} ω gives the well-known dimension polynomial,

(45)  d_n = n ( 1 + n^{−1} ∫_M a₁ ω + n^{−2} ∫_M a₂ ω + ··· ).

The proof of the full expansion (44) makes use of the Boutet de Monvel-Sjöstrand parametrix construction [17]. The same construction can be carried out to derive the rescaling limits of the Bergman kernel off the diagonal. We also remark that the off-diagonal estimates of the Bergman kernel are studied by Dai-Liu-Ma [4].

Let us choose Kähler normal coordinates at p. First, applying the identities (26), we have the following on-diagonal asymptotics at p:

(46)  Π_n(p,p) = n ( 1 + a₁(p) n^{−1} + a₂(p) n^{−2} + ··· ).

Regarding the rescaling limit of the Bergman kernel, we have the following universal limit,

(47)  Π_n(p + u/√n, p + v/√n) = n ( e^{u·v̄} + p₁(u,v) n^{−1/2} + ··· ),

where p₁ is a homogeneous polynomial and the error terms admit precise estimates (see [2, 3]).

Remark 2.
The term $n e^{u\cdot\bar v}$ is actually the rescaling limit of the Bergman kernel for the Bargmann-Fock space. We refer to [3] for more details.

Regarding the rescaling limit of the Bergman kernel and its derivatives on the diagonal, we have the following estimates (we refer to [2, 3] for more details),

\Pi_n\big(\tfrac{u}{\sqrt n}, \tfrac{u}{\sqrt n}\big) = n e^{|u|^2} + O(n^{1/2}),
(\partial\Pi_n)\big(\tfrac{u}{\sqrt n}, \tfrac{u}{\sqrt n}\big) = n^{3/2}\bar u\, e^{|u|^2} + O(n),
(\partial^2\Pi_n)\big(\tfrac{u}{\sqrt n}, \tfrac{u}{\sqrt n}\big) = n^2\bar u^2 e^{|u|^2} + O(n^{3/2}),
(\partial\bar\partial\Pi_n)\big(\tfrac{u}{\sqrt n}, \tfrac{u}{\sqrt n}\big) = n^2 e^{|u|^2}(1 + |u|^2) + O(n^{3/2}),
(\partial^2\bar\partial\Pi_n)\big(\tfrac{u}{\sqrt n}, \tfrac{u}{\sqrt n}\big) = n^{5/2}\bar u\, e^{|u|^2}(2 + |u|^2) + O(n^2),
(\partial^2\bar\partial^2\Pi_n)\big(\tfrac{u}{\sqrt n}, \tfrac{u}{\sqrt n}\big) = n^3 e^{|u|^2}(2 + 4|u|^2 + |u|^4) + O(n^{5/2}).

And similarly, we have,

\Pi_n\big(\tfrac{u}{\sqrt n}, 0\big) = n + O(n^{1/2}), \quad (\partial\Pi_n)\big(\tfrac{u}{\sqrt n}, 0\big) = O(n), \quad (\partial^2\Pi_n)\big(\tfrac{u}{\sqrt n}, 0\big) = O(n^{3/2}).

Now we can get the estimates of the covariance matrix,

A_n\big(\tfrac{u}{\sqrt n}\big) = \partial_z\partial_{\bar w}\Pi^p_n(z,w)\big|_{z=w=\frac{u}{\sqrt n}}
= \partial_z\partial_{\bar w}\Pi_n(z,w)\big|_{z=w=\frac{u}{\sqrt n}} - \frac{\partial_z\Pi_n(z,p)\,\partial_{\bar w}\Pi_n(p,w)}{\Pi_n(p,p)}\Big|_{z=w=\frac{u}{\sqrt n},\,p=0}
= n^2 e^{|u|^2}(1 + |u|^2) + O(n^{3/2}) - \frac{O(n)\cdot O(n)}{n(1 + a_1(p)n^{-1} + O(n^{-2}))}
= n^2(1 + |u|^2)e^{|u|^2} + O(n^{3/2}).

Similar computations yield,

B_n\big(\tfrac{u}{\sqrt n}\big) = \Big(n^{5/2} u(2 + |u|^2)e^{|u|^2} + O(n^2),\; n^{3/2}\bar u\, e^{|u|^2} + O(n)\Big)

and

C_n\big(\tfrac{u}{\sqrt n}\big) = \begin{pmatrix} n^3(2 + 4|u|^2 + |u|^4)e^{|u|^2} + O(n^{5/2}) & n^2\bar u^2 e^{|u|^2} + O(n^{3/2}) \\ n^2 u^2 e^{|u|^2} + O(n^{3/2}) & n(e^{|u|^2} - 1) + O(n^{1/2}) \end{pmatrix}.

Note that the above estimates are the same as the ones in §5 up to powers of $n$, since we used the normalized Bergman kernel in §5. Theorem 2 then follows by inserting these estimates into the Kac-Rice formula of §5 and taking the limit of the rescaled density $\frac{1}{n}Q_n\big(\frac{u}{\sqrt n}\big)$.

7. Proofs of Theorem 3
In this section, we will apply the probabilistic Poincaré-Lelong formula to derive a global formula for $D_n(z|q)$, the conditional expectation of the empirical measure of zeros given a conditioning critical point at $q$.
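Before deriving the formula, here is a small Monte Carlo sketch (assuming numpy; the degree, trial count, and radius are arbitrary choices) of the zero statistics it governs. For the SU(2) ensemble $s_n(z) = \sum_j a_j\binom{n}{j}^{1/2}z^j$ with i.i.d. standard complex Gaussian $a_j$, the expected empirical measure of zeros is uniform with respect to the Fubini-Study area, so the disc $|z| < r$ carries $n\,r^2/(1+r^2)$ zeros on average:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)
n, trials, r = 12, 3000, 1.0
w = np.array([np.sqrt(comb(n, j)) for j in range(n + 1)])  # SU(2) weights

counts = []
for _ in range(trials):
    # i.i.d. standard complex Gaussian coefficients
    a = (rng.standard_normal(n + 1) + 1j * rng.standard_normal(n + 1)) / np.sqrt(2)
    zeros = np.roots((a * w)[::-1])      # np.roots wants highest degree first
    counts.append(np.sum(np.abs(zeros) < r))

# expected count in |z| < r for zeros uniform w.r.t. Fubini-Study area
expected = n * r**2 / (1 + r**2)
assert abs(np.mean(counts) - expected) < 0.25
```

With $r = 1$ the disc is a Fubini-Study hemisphere, so the exact mean is $n/2$ by the $SU(2)$-symmetry of the ensemble.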
7.1. Poincaré-Lelong formula.
Given a global holomorphic section $s_n$ of a positive holomorphic line bundle over a Kähler manifold, we denote by $Z_{s_n}$ the empirical measure of zeros of $s_n$. We write locally $s_n = f_n e^{\otimes n}$; then the classical Poincaré-Lelong formula states that [7]

(48) Z_{s_n} = \frac{\sqrt{-1}}{\pi}\partial\bar\partial \log|f_n|.

Taking the expectation on both sides, we have the following probabilistic Poincaré-Lelong formula [11, 14]: let $S \subset H^0(M, L^n)$ be a Gaussian random field with covariance kernel $\Pi_S(z,w)$; then

(49) \mathbf{E}_S(Z_{s_n}) = \frac{\sqrt{-1}}{\pi}\partial\bar\partial \log \Pi_S(z,z).

7.2. Proof of Theorem 3.
Now we turn to the proof of Theorem 3. Recalling (20), we rewrite the conditional expectation of zeros $D_n(z|q)$ as

(50) D_n(z|q) = \mathbf{E}_{(H^0_q(M,L^n),\,d\gamma^q_{d_n-1})}(Z_{s_n}).

By the probabilistic Poincaré-Lelong formula (49), we have,

(51) D_n(z|q) = \mathbf{E}_{(H^0_q(M,L^n),\,d\gamma^q_{d_n-1})}(Z_{s_n}) = \frac{\sqrt{-1}}{\pi}\partial\bar\partial\log\Pi^q_n(z,z),

where $\Pi^q_n(z,w)$ is the covariance kernel of the conditional Gaussian random sections $(H^0_q(M,L^n),\,d\gamma^q_{d_n-1})$, and $H^0_q(M,L^n)$ is the kernel of the linear map $s_n \mapsto \nabla_{h^n}s_n(q)$ (18).

The Kodaira embedding implies that $H^0_q(M,L^n)$ is a subspace of $H^0(M,L^n)$ of codimension 1. Let $\{s^q_{n,1},\dots,s^q_{n,d_n-1}\}$ be an orthonormal basis of $H^0_q(M,L^n)$ with respect to the inner product (8). Such a basis satisfies $\nabla_{h^n}s^q_{n,j}(q) = 0$ for all $j = 1,\dots,d_n-1$. We can extend this basis to a basis of $H^0(M,L^n)$, which we denote $\{s^q_{n,1},\dots,s^q_{n,d_n-1},\Psi^q_n\}$. By relation (12) again, the covariance kernel for the conditional Gaussian measure $(H^0_q(M,L^n),\,d\gamma^q_{d_n-1})$ is

(52) \Pi^q_n(z,w) = \Pi_n(z,w) - \Psi^q_n(z)\overline{\Psi^q_n(w)}.

And hence,

(53) D_n(z|q) = \frac{\sqrt{-1}}{\pi}\partial\bar\partial\log\big(\Pi_n(z,z) - |\Psi^q_n(z)|^2\big).

To prove Theorem 3, it is enough to find the expression of $|\Psi^q_n|^2$. In the following computations, we will discard the local frames $e^{\otimes n}$ and $dz$ for simplicity. We write $s^q_{n,j} = f^q_{n,j}e^{\otimes n}$ locally. Then the Bergman kernel reads

\Pi_n(z,w) = \sum_{j=1}^{d_n-1} f^q_{n,j}(z)\overline{f^q_{n,j}(w)} + \Psi^q_n(z)\overline{\Psi^q_n(w)}.

We apply the Chern connection $\nabla^z_{h^n}$ to both sides with respect to the variable $z$ and evaluate at $z = q$ to get the relation,

\nabla^z_{h^n}\Pi_n(z,w)\big|_{z=q} = \nabla^z_{h^n}\Psi^q_n(z)\big|_{z=q}\,\overline{\Psi^q_n(w)}.

This implies that $\overline{\Psi^q_n(w)}$ is parallel to $\nabla^z_{h^n}\Pi_n(z,w)|_{z=q}$. We define

\overline{\Psi^q_n(w)} = \lambda_q\,\nabla^z_{h^n}\Pi_n(z,w)\big|_{z=q}.

We will find $|\lambda_q|$ in order to get $|\Psi^q_n(w)|^2$ in (53). By the definition of the Chern connection (7), we have

\overline{\Psi^q_n(w)} = \lambda_q\Big[\frac{\partial\Pi_n(z,w)}{\partial z}\Big|_{z=q} - n\frac{\partial\phi}{\partial z}(q)\,\Pi_n(q,w)\Big].

We can choose the Kähler normal coordinate centered at $z = q$ as the origin of the coordinate patch to simplify the computations. Recalling equation (26), at the origin of the Kähler normal coordinate we have $\frac{\partial\phi}{\partial z}(q) = 0$, and hence locally,

(54) \overline{\Psi^q_n(w)} = \lambda_q\,\frac{\partial\Pi_n(z,w)}{\partial z}\Big|_{z=q}.

We can further rewrite $\Psi^q_n(w)$ as follows: choose any orthonormal basis $\{\psi_{n,1},\dots,\psi_{n,d_n}\}$ of $H^0(M,L^n)$ with respect to the inner product (8) (or (9)); then the Bergman kernel is

\Pi_n(z,w) = \sum_{j=1}^{d_n}\psi_{n,j}(z)\overline{\psi_{n,j}(w)},

thus,

\overline{\Psi^q_n(w)} = \lambda_q\sum_{j=1}^{d_n}\frac{\partial\psi_{n,j}}{\partial z}(q)\,\overline{\psi_{n,j}(w)}.

Note that the $L^2$-norm of $\Psi^q_n$ is 1 by assumption; hence,

1 = \|\Psi^q_n\|^2_{h^n} = |\lambda_q|^2\sum_{j=1}^{d_n}\Big|\frac{\partial\psi_{n,j}}{\partial z}(q)\Big|^2 = |\lambda_q|^2\,(\partial_z\partial_{\bar z}\Pi_n)(q,q).

Hence, combining the expression of $|\lambda_q|$ with (54), we have,

|\Psi^q_n(w)|^2 = \frac{\big|\partial_z\Pi_n(z,w)|_{z=q}\big|^2}{(\partial_z\partial_{\bar z}\Pi_n)(q,q)}.

Note that $\Pi_n(z,w) = \overline{\Pi_n(w,z)}$, thus

\big|\partial_z\Pi_n(z,w)|_{z=q}\big| = \big|\partial_{\bar z}\Pi_n(w,z)|_{z=q}\big| = \big|\partial_{\bar w}\Pi_n(z,w)|_{w=q}\big|,

thus,

(55) |\Psi^q_n(w)|^2 = \frac{\big|\partial_{\bar w}\Pi_n(z,w)|_{w=q}\big|^2}{(\partial_z\partial_{\bar z}\Pi_n)(q,q)}.

Now we complete the proof of Theorem 3 by combining (53) and (55).
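To make (55) concrete, here is a numerical sketch (assuming numpy; the degree and test point are arbitrary) in the SU(2) model with $q = 0$, where the normalized kernel $\Pi_n(z,w) = (1+z\bar w)^n$ is built from the orthonormal monomials $\binom{n}{j}^{1/2}z^j$. Formula (55) gives $|\Psi^0_n(z)|^2 = |nz|^2/n = n|z|^2$, which is exactly $|\psi_{n,1}(z)|^2$, the squared modulus of the one basis monomial whose derivative at 0 does not vanish. The same script checks the short- and long-distance behavior of the limit density $\frac{\sqrt{-1}}{\pi}\partial\bar\partial\log(e^{|v|^2}-|v|^2)$ computed in §8: repulsion $\sim 2|v|^2/\pi$ near the conditioning critical point, and the unconditioned rescaled value $1/\pi$ far from it.

```python
import numpy as np
from math import comb

# --- SU(2) model, q = 0: check (55) against the direct basis construction ---
n, z = 9, 0.6 + 0.2j
psi = [np.sqrt(comb(n, j)) * z**j for j in range(n + 1)]  # orthonormal monomials

Psi_sq = abs(n * z)**2 / n      # (55): |d_wbar Pi(z,0)|^2 / (d_z d_zbar Pi)(0,0)
assert abs(Psi_sq - abs(psi[1])**2) < 1e-9   # Psi^0_n = psi_{n,1} = sqrt(n) z

cond = sum(abs(p)**2 for p in psi) - Psi_sq  # Pi^0_n(z,z) = Pi_n(z,z) - |Psi|^2
direct = sum(abs(p)**2 for j, p in enumerate(psi) if j != 1)
assert abs(cond - direct) < 1e-9

# --- limit density (1/pi) d dbar log(e^s - s), s = |v|^2 (cf. Section 8) ---
def d_inf(v):
    s = abs(v)**2
    F, F1, F2 = np.exp(s) - s, np.exp(s) - 1, np.exp(s)   # F, F', F'' in s
    return ((F1 + s * F2) / F - s * F1**2 / F**2) / np.pi

assert abs(d_inf(1e-3) - 2e-6 / np.pi) < 1e-9  # short range: ~ 2|v|^2/pi (repulsion)
assert abs(d_inf(4.0) - 1 / np.pi) < 1e-4      # long range: 1/pi (unconditioned value)
```

The vanishing of the density at $v = 0$ is the repulsion between zeros and the conditioning critical point noted in the introduction.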
8. Proof of Theorem 4
8.1. Gaussian random SU(2) polynomials. In this subsection, let's compute the rescaling limit $D_\infty(z|0)$ for Gaussian random SU(2) polynomials. We choose the affine coordinate at $q = 0$. As in §5, we still use the normalized Bergman kernel $\Pi_n(z,w) = (1+z\bar w)^n$. Thus we have,

(\partial_{\bar w}\Pi_n)(z, 0) = nz, \qquad (\partial_z\partial_{\bar z}\Pi_n)(0,0) = n.

Thus we have the following exact formula for the SU(2) polynomials,

D^{SU(2)}_n(z|0) = \frac{\sqrt{-1}}{\pi}\partial\bar\partial\log\big((1+|z|^2)^n - n|z|^2\big).

We expand the right hand side to get,

D^{SU(2)}_n(z|0) = \frac{1}{\pi}\bigg[\frac{n(1+|z|^2)^{n-1} - n + n(n-1)|z|^2(1+|z|^2)^{n-2}}{(1+|z|^2)^n - n|z|^2} - \frac{n^2|z|^2\big((1+|z|^2)^{n-1}-1\big)^2}{\big((1+|z|^2)^n - n|z|^2\big)^2}\bigg]\sqrt{-1}\,dz\wedge d\bar z.

Now we rescale $z \to \frac{z}{\sqrt n}$ to get the limit,

D^{SU(2)}_\infty(z|0) := \lim_{n\to\infty} D^{SU(2)}_n\Big(q + \frac{z}{\sqrt n}\,\Big|\,q\Big) = \frac{1}{\pi}\bigg[\frac{e^{|z|^2} - 1 + |z|^2 e^{|z|^2}}{e^{|z|^2} - |z|^2} - \frac{|z|^2\big(e^{|z|^2}-1\big)^2}{\big(e^{|z|^2} - |z|^2\big)^2}\bigg]\sqrt{-1}\,dz\wedge d\bar z,

which can be rewritten as

D^{SU(2)}_\infty(z|0) = \frac{\sqrt{-1}}{\pi}\partial\bar\partial\log\big(e^{|z|^2} - |z|^2\big).

This proves Theorem 4 for Gaussian random SU(2) polynomials.

8.2. Proof of Theorem 4.
Let's turn to the proof of Theorem 4 for the general case. We have to apply similar estimates for the Bergman kernel as in §6. We continue to use the Kähler normal coordinate with the origin at $q = 0$ as in §7. The rescaled conditional expectation $D_n(z|q)$ is given as

D_n\Big(\frac{v}{\sqrt n}\,\Big|\,0\Big) = \frac{\sqrt{-1}}{\pi}\partial\bar\partial\log\Bigg|\Pi_n\Big(\frac{v}{\sqrt n}, \frac{v}{\sqrt n}\Big) - \frac{\big|(\partial_{\bar w}\Pi_n)\big(\frac{v}{\sqrt n}, 0\big)\big|^2}{(\partial_z\partial_{\bar z}\Pi_n)(0,0)}\Bigg|.

Theorem 4 follows once we find the rescaling limits of $\Pi_n\big(\frac{v}{\sqrt n}, \frac{v}{\sqrt n}\big)$, $(\partial_{\bar w}\Pi_n)\big(\frac{v}{\sqrt n}, 0\big)$ and $(\partial_z\partial_{\bar z}\Pi_n)(0,0)$. Recall the universal rescaling limit (47),

\Pi_n\Big(q + \frac{v}{\sqrt n},\, q + \frac{u}{\sqrt n}\Big) = n\Big(e^{v\cdot\bar u} + \frac{1}{\sqrt n}p_1 + \cdots\Big).

As in §6, we first have the following asymptotics at $q = 0$,

(56) \Pi_n\Big(\frac{v}{\sqrt n}, \frac{v}{\sqrt n}\Big) \sim n e^{|v|^2}, \qquad (\partial_{\bar w}\Pi_n)\Big(\frac{v}{\sqrt n}, 0\Big) \sim n^{3/2} v.

Let's recall the $C^\infty$-expansion of the Bergman kernel on the diagonal,

\Pi_n(z,z) = n e^{n\phi}\big(1 + a_1(z)n^{-1} + a_2(z)n^{-2} + \cdots\big).

We take $\partial\bar\partial$ on both sides to get the full expansion,

(\partial_z\partial_{\bar z}\Pi_n)(z,z) = n^3 e^{n\phi}|\partial\phi|^2(1 + a_1 n^{-1} + \cdots) + n^2 e^{n\phi}\partial\bar\partial\phi\,(1 + a_1 n^{-1} + \cdots) + 2n^2 e^{n\phi}\Re\big(\partial\phi(\bar\partial a_1 n^{-1} + \cdots)\big) + n e^{n\phi}(\partial\bar\partial a_1 n^{-1} + \cdots).

Using identities (26) at the origin of the Kähler normal coordinate, we have,

(57) (\partial_z\partial_{\bar z}\Pi_n)(0,0) = n^2 + n a_1 + (\partial\bar\partial a_1 + a_2) + O(n^{-1}).

If we combine the asymptotics (56) and (57), we have the universal limit

D_\infty(v|0) := \lim_{n\to\infty} D_n\Big(\frac{v}{\sqrt n}\,\Big|\,0\Big) = \frac{\sqrt{-1}}{\pi}\partial\bar\partial\log\big(e^{|v|^2} - |v|^2\big),

which completes the proof of Theorem 4.

References

[1] R. Adler and J. Taylor,
Random fields and geometry, Springer Monographs in Mathematics, Springer, New York (2007).
[2] J. Baber, Scaled correlations of critical points of random sections on Riemann surfaces, J. Stat. Phys. 148 (2012), 250–279.
[3] P. Bleher, B. Shiffman and S. Zelditch, Universality and scaling of correlations between zeros on complex manifolds, Invent. Math. 142 (2000), 351–395.
[4] X. Dai, K. Liu and X. Ma, On the asymptotic expansion of Bergman kernel, J. Differential Geom. 72 (2006), no. 1, 1–41.
[5] M. R. Douglas, B. Shiffman and S. Zelditch, Critical points and supersymmetric vacua I, Commun. Math. Phys. 252 (2004), 325–358.
[6] M. R. Douglas, B. Shiffman and S. Zelditch, Critical points and supersymmetric vacua II: asymptotics and extremal metrics, J. Differential Geom. 72 (2006), 381–427.
[7] P. Griffiths and J. Harris, Principles of Algebraic Geometry, Wiley-Interscience (1978).
[8] B. Hanin, Pairing of zeros and critical points for random meromorphic functions on Riemann surfaces, Math. Res. Lett. 22 (2015), no. 1, 111–140.
[9] B. Hanin, Correlations and pairing between zeros and critical points of Gaussian random polynomials, IMRN 2015, no. 2, 381–421.
[10] J. H. Hannay, Chaotic analytic zero points: exact statistics for those of a random spin state, J. Phys. A 29 (1996), 314–320.
[11] J. Hough, M. Krishnapur, Y. Peres and B. Virag, Zeros of Gaussian Analytic Functions and Determinantal Point Processes, AMS (2010).
[12] X. Ma and G. Marinescu, Holomorphic Morse inequalities and Bergman kernels, Progress in Math., Vol. 254, Birkhäuser, Basel (2007).
[13] B. Shiffman and S. Zelditch, Number variance of random zeros on complex manifolds, Geom. Funct. Anal. 18 (2008), 1422–1475.
[14] B. Shiffman, S. Zelditch and Q. Zhong, Random zeros on complex manifolds: conditional expectations, J. Inst. Math. Jussieu 10 (2011), no. 3, 753–783.
[15] G. Tian, On a set of polarized Kähler metrics on algebraic manifolds, J. Differential Geom. 32 (1990), 99–130.
[16] S. T. Yau, Survey on partial differential equations in differential geometry, Seminar on Differential Geometry, pp. 3–71, Ann. of Math. Stud. 102, Princeton Univ. Press, Princeton, N.J. (1982).
[17] S. Zelditch, Szegő kernels and a theorem of Tian, IMRN 6 (1998), 317–331.
Beijing International Center for Mathematical Research, Peking University, Beijing, China
E-mail address: