arXiv [math.PR]

Remarks on the Central Limit Theorem for Non-Convex Bodies
Uri Grupel ∗ October 16, 2018
Abstract
In this note, we study possible extensions of the Central Limit Theorem for non-convex bodies. First, we prove a Berry-Esseen type theorem for a certain class of unconditional bodies that are not necessarily convex. Then, we consider a widely known class of non-convex bodies, the so-called p-convex bodies, and construct a counter-example for this class.
Let $X_1,\dots,X_n$ be random variables with $\mathbb{E}X_i=0$ and $\mathbb{E}X_iX_j=\delta_{i,j}$ for $i,j=1,\dots,n$. Let $\theta\in S^{n-1}$, where $S^{n-1}\subseteq\mathbb{R}^n$ is the unit sphere centered at $0$, and let $G$ be a standard Gaussian random variable, that is, $G$ has density $\frac{1}{\sqrt{2\pi}}e^{-x^2/2}$. We denote $X=(X_1,\dots,X_n)$. In this paper we examine different conditions on $X$ under which $X\cdot\theta$ is close to $G$ in distribution. The classical central limit theorem states that if $X_1,\dots,X_n$ are independent then for most $\theta\in S^{n-1}$ the marginal $X\cdot\theta$ is close to $G$. It was conjectured by Anttila, Ball and Perissinaki [1] and by Brehm and Voigt [5] that if $X$ is distributed uniformly in a convex body $K\subseteq\mathbb{R}^n$, then for most $\theta\in S^{n-1}$ the marginal $X\cdot\theta$ is close to $G$. This is known as the central limit theorem for convex sets and was first proved by Klartag [13].

In this note we examine extensions of the above theorem to non-convex settings. Our study was motivated by the following observation on the unit balls of $\ell_p$ spaces for $0<p<1$. Denote by $B_p^n=\{x\in\mathbb{R}^n;\ |x_1|^p+\cdots+|x_n|^p\le1\}$ the unit ball of the space $\ell_p^n$. For $X=(X_1,\dots,X_n)$ distributed uniformly on $c_{p,n}B_p^n$, $p>0$, $\theta\in S^{n-1}$, and $G$ a standard Gaussian, one can show that
$$\left|\mathbb{P}(\theta\cdot X\le t)-\mathbb{P}(G\le t)\right|\le C_p\sum_{k=1}^n|\theta_k|^3,$$
where $c_{p,n}$ is chosen such that $\mathbb{E}X_i=0$ and $\mathbb{E}X_iX_j=\delta_{i,j}$ for $i,j=1,\dots,n$, and $C_p>0$ does not depend on $n$.

In order to formulate our results we use the following definitions. Let $X=(X_1,\dots,X_n)$ be a random vector in $\mathbb{R}^n$. A random vector $X$ is called isotropic if $\mathbb{E}X_i=0$ and $\mathbb{E}X_iX_j=\delta_{i,j}$ for $i,j=1,\dots,n$. A random vector $X$ is called unconditional if the distribution of $(\varepsilon_1X_1,\dots,\varepsilon_nX_n)$ is the same as the distribution of $X$ for any choice of signs $\varepsilon_i=\pm1$, $i=1,\dots,n$.

The first class of densities we define is based on Klartag's recent work [14] and includes the uniform distribution over $B_p^n$ for $0<p<1$.

*Supported by a grant from the European Research Council.

Theorem 1.1.
Let $X$ be an unconditional, isotropic random vector in $\mathbb{R}^n$ with density $e^{-u(x)}$, where the function $u(x_1^\kappa,\dots,x_n^\kappa)$ is convex in $\mathbb{R}_+^n=\{x\in\mathbb{R}^n;\ x_i\ge0\ \forall i\in\{1,\dots,n\}\}$ for some $\kappa>0$. Let $G$ be a standard Gaussian random variable and $\theta\in S^{n-1}$. Then
$$\left|\mathbb{P}(\theta\cdot X\ge t)-\mathbb{P}(G\ge t)\right|\le C_\kappa\sum_{k=1}^n|\theta_k|^3,$$
where $C_\kappa>0$ depends on $\kappa$ only, and does not depend on $n$.

In order to see that Theorem 1.1 includes the uniform distribution over $B_p^n$ for $0<p<1$, take
$$u(x)=\begin{cases}c_{p,n},&|x_1|^p+\cdots+|x_n|^p\le1,\\ \infty,&\text{otherwise},\end{cases}$$
where $c_{p,n}$ is a normalization constant, and set $\kappa=1/p$.

The error rate in Theorem 1.1 is the same as in the classical Central Limit Theorem. For example, by choosing $\theta=\left(\frac1{\sqrt n},\dots,\frac1{\sqrt n}\right)$, we get an error rate of $O\left(\frac1{\sqrt n}\right)$.

The symmetry conditions in Theorem 1.1 are highly restrictive. Hence, we are led to study p-convex bodies, which satisfy fewer symmetry conditions and are known to share some of the properties of convex bodies. We say that $K\subset\mathbb{R}^n$ is p-convex, with $0<p<1$, if $K=-K$ and for all $x,y\in K$ and $0<\lambda<1$,
$$\lambda^{1/p}x+(1-\lambda)^{1/p}y\in K.$$
These bodies are related to unit balls of $p$-norms and were studied in relation to the local theory of Banach spaces by Gordon and Lewis [11], Gordon and Kalton [10], Litvak, Milman and Tomczak-Jaegermann [17] and others (see [4], [8], [12], [16], [18]).

The following discussion explains why the class of p-convex bodies does not give the desired result.

Theorem 1.2.
Set $N=n+n^3\log n$. There exists a random vector $X$ distributed uniformly in a $\frac12$-convex body $K\subseteq\mathbb{R}^N$, and a subspace $E$ with $\dim(E)=n$, such that for any $\theta\in S^{N-1}\cap E$, the random variable $\theta\cdot\mathrm{Proj}_EX$ is not close to a Gaussian random variable in any reasonable sense (Kolmogorov distance, Wasserstein distance and others). A similar construction can be made for any fixed parameter $0<p<1$.
Since $\dim(E)$ tends to infinity with $n$, a similar theorem is not true in the convex case. Hence, the central limit theorem for convex sets cannot be extended to the p-convex case. Thus, we need to look for a new class of bodies (densities) that includes the $\ell_p^n$ unit balls, with a weaker condition than unconditionality.

Remark 1.3.
In [16] Litvak constructed an example of a $p$-convex body for which the volume distribution is very different from the convex case. Litvak's work studies the large-deviations regime for $p$-convex distributions, while our work is focused on the central limit theorem.

Throughout the text the letters $c,C,c',C'$ will denote universal positive constants that do not depend on the dimension $n$. The value of such a constant may change from one instance to another. We use $C_\alpha$, $C(\alpha)$ for constants that depend on a parameter $\alpha$ and nothing else. $\sigma_{n-1}$ will denote the Haar probability measure on $S^{n-1}$. $f(n)=O(g(n))$ is the big-$O$ notation, i.e. there exists a constant $C>0$ such that $|f(n)|\le Cg(n)$ for all $n\in\mathbb{N}$.

Acknowledgement. This paper is part of the author's M.Sc. thesis, written under the supervision of Professor Bo'az Klartag, whose guidance, support and patience were invaluable. In addition, I would like to thank Andrei Iacob for his helpful editorial comments. Supported by the European Research Council (ERC).
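The observation above about marginals of the normalized $\ell_p^n$ ball can be explored numerically. The sketch below is an illustration added here, not part of the original text: it samples uniformly from $B_p^n$ using the exponential-representation algorithm of Barthe, Guédon, Mendelson and Naor, replaces the exact constant $c_{p,n}$ by an empirical normalization, and compares the marginal along $\theta=(1/\sqrt n,\dots,1/\sqrt n)$ with a standard Gaussian.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)

def uniform_on_Bpn(n, p, size, rng):
    # Barthe-Guedon-Mendelson-Naor: if g_1,...,g_n are i.i.d. with density
    # proportional to exp(-|t|^p) and w ~ Exp(1) is independent of them, then
    # (g_1,...,g_n) / (|g_1|^p + ... + |g_n|^p + w)^(1/p) is uniform on B_p^n.
    g = rng.gamma(1.0 / p, size=(size, n)) ** (1.0 / p)  # |g_i|, since |g_i|^p ~ Gamma(1/p, 1)
    g *= rng.choice([-1.0, 1.0], size=(size, n))         # random signs (unconditionality)
    w = rng.exponential(size=(size, 1))
    s = (np.abs(g) ** p).sum(axis=1, keepdims=True) + w
    return g / s ** (1.0 / p)

n, p, m = 64, 0.5, 30000
X = uniform_on_Bpn(n, p, m, rng)
X /= X.std(axis=0, keepdims=True)        # empirical stand-in for the scaling c_{p,n}
theta = np.full(n, 1.0 / np.sqrt(n))     # the direction (1/sqrt(n), ..., 1/sqrt(n))
marg = np.sort(X @ theta)
ecdf = (np.arange(1, m + 1) - 0.5) / m
gauss = np.array([NormalDist().cdf(t) for t in marg])
print("Kolmogorov distance:", float(np.abs(ecdf - gauss).max()))
```

For moderate $n$ the reported distance is small and shrinks as $n$ grows, consistent with the $\sum_k|\theta_k|^3$ error bound discussed above.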
In this section we use Klartag's recent work [14] in order to exhibit a family of functions, which includes the indicator functions of $\ell_p^n$ unit balls for $0<p<1$, having almost Gaussian marginals. A special case of Theorem 1.1 in [14] gives us the following lemma.
Lemma 2.1.
Let $\kappa>0$ and let $\varphi:\mathbb{R}^n\to\mathbb{R}\cup\{\infty\}$ be an unconditional function such that $e^{-\varphi}$ is a probability density and $\varphi(x_1^\kappa,\dots,x_n^\kappa)$ is convex on $\mathbb{R}_+^n$. Let $X$ be a random vector with density $e^{-\varphi(x)}$. Then
$$\mathrm{Var}|X|^2\le c_\kappa\sum_{j=1}^n\mathbb{E}|X_j|^4,$$
where $c_\kappa$ depends only on $\kappa$.

Lemma 2.2.
Let $\kappa\ge1$ and let $\varphi:\mathbb{R}^n\to\mathbb{R}\cup\{\infty\}$ be an unconditional function such that $e^{-\varphi}$ is a probability density and $\varphi(x_1^\kappa,\dots,x_n^\kappa)$ is convex on $\mathbb{R}_+^n$. Let $X$ be a random vector with density $e^{-\varphi(x)}$. Then for any $p\ge1$ and $i=1,\dots,n$,
$$\mathbb{E}|X_i|^p\le c_{p,\kappa}\left(\mathbb{E}|X_i|\right)^p.$$
Proof. If $p=1$ we may take $c_{p,\kappa}=1$; for $1<p<2$ the claim follows from the case $p=2$ by Hölder's inequality. Assume therefore that $p\ge 2$.
Define $\pi:\mathbb{R}_+^n\to\mathbb{R}_+^n$ by $\pi(x)=(|x_1|^\kappa,\dots,|x_n|^\kappa)$. The Jacobian of $\pi$ is $\prod_{j=1}^n\kappa|x_j|^{\kappa-1}$. Using the symmetry of $\varphi$ we obtain
$$\int_{\mathbb{R}^n}|x_i|^pe^{-\varphi(x)}dx=2^n\int_{\mathbb{R}_+^n}|x_i|^pe^{-\varphi(x)}dx=2^n\int_{\mathbb{R}_+^n}|x_i|^{p\kappa}\prod_{j=1}^n\kappa|x_j|^{\kappa-1}e^{-\varphi(\pi(x))}dx.$$
Now set
$$u(x)=\varphi(\pi(x))-(\kappa-1)\sum_{j=1}^n\log|x_j|.$$
The function $e^{-u(x)}$ is log-concave on $\mathbb{R}_+^n$, with
$$\kappa^n\int_{\mathbb{R}_+^n}e^{-u(x)}dx=\frac1{2^n},$$
and
$$\int_{\mathbb{R}^n}|x_i|^pe^{-\varphi(x)}dx=(2\kappa)^n\int_{\mathbb{R}_+^n}|x_i|^{p\kappa}e^{-u(x)}dx.$$
By Borell's Lemma (see [7], [3], [19]) we obtain
$$(2\kappa)^n\int_{\mathbb{R}_+^n}|x_i|^{p\kappa}e^{-u(x)}dx\le C_{\kappa,p}\left((2\kappa)^n\int_{\mathbb{R}_+^n}|x_i|^\kappa e^{-u(x)}dx\right)^p=C_{\kappa,p}\left(\int_{\mathbb{R}^n}|x_i|\,e^{-\varphi(x)}dx\right)^p.$$

Lemma 2.3.
Let $\kappa>0$ and let $\varphi:\mathbb{R}^n\to\mathbb{R}\cup\{\infty\}$ be an unconditional function such that $e^{-\varphi}$ is a probability density and $\varphi(x_1^\kappa,\dots,x_n^\kappa)$ is convex on $\mathbb{R}_+^n$. Let $X$ be an isotropic random vector with density $e^{-\varphi(x)}$. Then, for any $a\in\mathbb{R}^n$,
$$\mathrm{Var}\left(a_1X_1^2+\cdots+a_nX_n^2\right)\le C_\kappa\sum_{j=1}^na_j^2.$$
Proof. By applying a diagonal linear transformation, Lemma 2.1 gives
$$\mathrm{Var}\left(a_1X_1^2+\cdots+a_nX_n^2\right)\le C'_\kappa\sum_{j=1}^na_j^2\,\mathbb{E}|X_j|^4.$$
By Lemma 2.2 (with $p=4$) and isotropicity, we obtain
$$\mathrm{Var}\left(a_1X_1^2+\cdots+a_nX_n^2\right)\le C'_\kappa\sum_{j=1}^na_j^2\,\mathbb{E}|X_j|^4\le C_\kappa\sum_{j=1}^na_j^2\left(\mathbb{E}|X_j|\right)^4\le C_\kappa\sum_{j=1}^na_j^2.$$
We are now ready to prove Theorem 1.1.
Proof.
Since $X$ is unconditional,
$$\mathbb{P}(\theta\cdot X\ge t)=\mathbb{P}\left(\sum_{k=1}^n\theta_kX_k\varepsilon_k\ge t\right),$$
where $\varepsilon_1,\dots,\varepsilon_n$ are i.i.d. random variables distributed uniformly on $\{\pm1\}$ and independent of $X$. By the triangle inequality,
$$\left|\mathbb{P}\left(\sum_{k=1}^n\theta_kX_k\varepsilon_k\ge t\right)-\mathbb{P}(G\ge t)\right|\le\mathbb{E}_X\left|\mathbb{P}(G\ge t)-\mathbb{P}_G\left(G\ge\frac t{\sqrt{\sum_{k=1}^n\theta_k^2X_k^2}}\right)\right|+\mathbb{E}_X\left|\mathbb{P}_\varepsilon\left(\sum_{k=1}^n\varepsilon_k\theta_kX_k\ge t\right)-\mathbb{P}_G\left(G\ge\frac t{\sqrt{\sum_{k=1}^n\theta_k^2X_k^2}}\right)\right|.$$
We estimate each term separately. Denote $Y_n=\sum_{k=1}^n\theta_k^2X_k^2$. By the Berry-Esseen Theorem (see [9]),
$$\mathbb{E}_X\left|\mathbb{P}_\varepsilon\left(\sum_{k=1}^n\varepsilon_k\theta_kX_k\ge t\right)-\mathbb{P}_G\left(G\ge\frac t{\sqrt{Y_n}}\right)\right|\le C\,\mathbb{E}_X\left[\frac{\sum_{k=1}^n|\theta_k|^3|X_k|^3}{Y_n^{3/2}}\,\mathbf{1}_{[\frac12,\infty)}(Y_n)\right]+2\,\mathbb{P}\left(Y_n<\frac12\right)$$
$$\le C'\sum_{k=1}^n\mathbb{E}_X|\theta_k|^3|X_k|^3+2\,\mathbb{P}\left(Y_n<\frac12\right)\le C_\kappa\sum_{k=1}^n|\theta_k|^3+C\,\mathbb{P}\left(Y_n<\frac12\right).$$
Here we used Lemma 2.2 to estimate $\mathbb{E}|X_k|^3$. Note that
$$\mathbb{E}_XY_n=\mathbb{E}_X\sum_{j=1}^n\theta_j^2X_j^2=\sum_{j=1}^n\theta_j^2\,\mathbb{E}_XX_j^2=\sum_{j=1}^n\theta_j^2=1,$$
so by Chebyshev's inequality and Lemma 2.3,
$$\mathbb{P}\left(|Y_n-1|\ge\frac12\right)\le4\,\mathrm{Var}(Y_n)\le C_\kappa\sum_{j=1}^n\theta_j^4.\qquad(1)$$
Hence, since $|\theta_i|\le1$ for $i=1,\dots,n$,
$$\mathbb{E}_X\left|\mathbb{P}_\varepsilon\left(\sum_{k=1}^n\varepsilon_k\theta_kX_k\ge t\right)-\mathbb{P}\left(G\ge\frac t{\sqrt{Y_n}}\right)\right|\le C_\kappa\sum_{k=1}^n|\theta_k|^3.$$
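The conditional Berry-Esseen step above can be checked numerically. The sketch below is illustrative and not part of the paper: it fixes one arbitrary realization of the coefficients $\theta_kX_k$ (here simply numbers of size $\sim1/\sqrt n$), simulates the sign-randomized sum $\sum_k\varepsilon_k\theta_kX_k$, and compares its distribution with a centered Gaussian of variance $Y_n$.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)
n, m = 400, 20000
# one fixed realization of the coefficients theta_k * X_k, of size ~ 1/sqrt(n),
# so that the conditional variance Y_n is close to 1
coeff = rng.normal(size=n) / np.sqrt(n)
Yn = float((coeff ** 2).sum())          # conditional variance of the sign sum
eps = rng.choice([-1.0, 1.0], size=(m, n))
S = np.sort(eps @ coeff)                # m samples of sum_k eps_k * coeff_k
ecdf = (np.arange(1, m + 1) - 0.5) / m
gauss = np.array([NormalDist(0.0, np.sqrt(Yn)).cdf(t) for t in S])
print("Y_n =", Yn)
print("conditional Kolmogorov distance:", float(np.abs(ecdf - gauss).max()))
```

The observed distance is of the order of the Berry-Esseen bound $C\sum_k|\theta_kX_k|^3/Y_n^{3/2}$ for this realization.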
To estimate
$$\mathbb{E}_X\left|\mathbb{P}(G\ge t)-\mathbb{P}\left(G\ge\frac t{\sqrt{Y_n}}\right)\right|$$
we use (1) and Klartag's argument in [15] (Section 6, Lemma 7), and conclude that it is enough to show that
$$\mathbb{E}\left[\left(Y_n-1\right)^2\ \Big|\ Y_n\ge\frac12\right]\le C\sum_{j=1}^n|\theta_j|^3.$$
By Lemma 2.3 we get
$$\mathbb{E}\left(Y_n-1\right)^2=\mathrm{Var}(Y_n)\le C_\kappa\sum_{j=1}^n\theta_j^4.$$
Hence,
$$\mathbb{E}\left[\left(Y_n-1\right)^2\ \Big|\ Y_n\ge\frac12\right]\le\mathbb{E}\left(Y_n-1\right)^2\,\mathbb{P}\left(Y_n\ge\frac12\right)^{-1}\le C_\kappa\sum_{j=1}^n\theta_j^4\cdot\mathbb{P}\left(Y_n\ge\frac12\right)^{-1}.$$
From inequality (1) it follows that
$$\mathbb{P}\left(Y_n\ge\frac12\right)^{-1}\le\left(1-C_\kappa\sum_{j=1}^n\theta_j^4\right)^{-1}.$$
We may assume that $\sum_{j=1}^n\theta_j^4$ is bounded by some small positive constant depending on $\kappa$, since otherwise the result is trivial, and obtain
$$\frac{C_\kappa\sum_{j=1}^n\theta_j^4}{1-C_\kappa\sum_{j=1}^n\theta_j^4}\le C'_\kappa\sum_{j=1}^n|\theta_j|^3,$$
which completes our proof.

In this section we construct a random vector $X$, distributed uniformly in a $\frac12$-convex body $K$, such that for a large subspace $E$ the random vector $\mathrm{Proj}_EX$ has no single approximately Gaussian marginal. We define a function $f:\mathbb{R}_+\to\mathbb{R}_+$ such that the radial density $r^{n-1}e^{-(n-1)f(r)}$ is spread across an interval of length proportional to $\sqrt n$; that is, we want $r^{n-1}e^{-(n-1)f(r)}$ to be constant (or close to constant) on such an interval. Such densities have marginals that are far from Gaussian. We use the density function introduced above and an approximation argument to construct the desired body $K$.

In order to construct a p-convex body from a function $f$, we restrict ourselves to p-convex functions.

Definition 3.1.
A function $f:\mathbb{R}^n\to\mathbb{R}\cup\{\infty\}$ is called p-convex if for any $x,y\in\mathbb{R}^n$ and $t\in[0,1]$,
$$f\left(t^{1/p}x+(1-t)^{1/p}y\right)\le tf(x)+(1-t)f(y).\qquad(2)$$
The following proposition allows us to construct a p-convex body with $0<p<1$ from a p-convex function.

Proposition 3.2. For a p-convex function $\psi:\mathbb{R}^n\to\mathbb{R}_+$ with $0<p<1$ and fixed $N>0$, define
$$f_N(x)=\left(1-\frac{\psi(x)}N\right)_+^N.$$
Then the set
$$K_N(\psi)=\left\{(x,y);\ x\in\mathbb{R}^n,\ y\in\mathbb{R}^N,\ |y|<f_N^{1/N}(x)\right\}$$
is p-convex.

Proof. Let $(x_1,y_1),(x_2,y_2)\in K_N(\psi)$. Since $(x_i,y_i)\in K_N(\psi)$ we have $f_N(x_i)>0$. Therefore,
$$f_N^{1/N}(x_i)=1-\frac{\psi(x_i)}N.$$
Let $0\le t\le1$. Then
$$f_N^{1/N}\left(t^{1/p}x_1+(1-t)^{1/p}x_2\right)\ge1-\frac1N\psi\left(t^{1/p}x_1+(1-t)^{1/p}x_2\right)\ge1-\frac1N\left(t\psi(x_1)+(1-t)\psi(x_2)\right)$$
$$=tf_N^{1/N}(x_1)+(1-t)f_N^{1/N}(x_2)>t|y_1|+(1-t)|y_2|\ge\left|t^{1/p}y_1\right|+\left|(1-t)^{1/p}y_2\right|\ge\left|t^{1/p}y_1+(1-t)^{1/p}y_2\right|.$$
Hence, $t^{1/p}(x_1,y_1)+(1-t)^{1/p}(x_2,y_2)\in K_N(\psi)$, as needed.

Proposition 3.3.
There exists a universal constant
$C>0$ such that, for $a\ge C$, the function
$$f(x)=\begin{cases}\log a,&0\le x\le a,\\ \log x,&a\le x\le2a,\\ \sqrt x-\sqrt{2a}+\log2a,&2a\le x,\end{cases}$$
is $\frac12$-convex.

Proof. We begin by verifying that the function $f$ is $\frac12$-convex on each of the intervals $[0,a]$, $[a,2a]$, $[2a,\infty)$. Then we need to check that condition (2) holds when $x$ and $y$ are taken from different intervals. By symmetry, we may assume that $x<y$. The cases $x,y\in[0,a]$ and $x,y\in[2a,\infty)$ are straightforward. In order for condition (2) to hold for the function $\log x$ on an interval $[a,b]$ we must show that for any $x,y\in[a,b]$,
$$\log\left((1-t)^2x+t^2y\right)\le(1-t)\log(x)+t\log(y)=\log\left(x^{1-t}y^t\right).\qquad(3)$$
This is equivalent to
$$(1-t)^2x+t^2y-x^{1-t}y^t\le0.$$
Setting here $y=cx$, we obtain
$$(1-t)^2+t^2c-c^t\le0.$$
This inequality holds for every $1\le c\le4$ and $0\le t\le1$. To see this, note that $g(t,c)=(1-t)^2+t^2c-c^t$ is a convex function of $c$ (as a sum of convex functions). Hence, it is enough to verify that $g(t,1)\le0$ and $g(t,4)\le0$ for $0\le t\le1$. Indeed,
$$g(t,1)=(1-t)^2+t^2-1=2t(t-1)\le0,$$
and
$$g(t,4)=(1-t)^2+4t^2-4^t\ \Rightarrow\ \frac{\partial^2g(t,4)}{\partial t^2}=2+8-(\log4)^2\,4^t\ge0.$$
Hence, $g(t,4)$ is convex in $t$. Since $g(0,4)=g(1,4)=0$, we obtain $g(t,4)\le0$ for $0\le t\le1$. Thus (3) holds whenever $[a,b]\subseteq[a,4a]$.

Next, we verify condition (2) for $f$ when $x\in[a,2a]$, $y\in[2a,\infty)$, and $t^2x+(1-t)^2y\in[a,2a]$. We consider two cases.

1. $y\in[2a,4a]$. By inequality (3),
$$f\left(t^2x+(1-t)^2y\right)=\log\left(t^2x+(1-t)^2y\right)\le t\log(x)+(1-t)\log(y)\le t\log(x)+(1-t)\left(\log(2a)+\sqrt y-\sqrt{2a}\right)=tf(x)+(1-t)f(y).$$
The second inequality holds thanks to the elementary inequality $\log(y)-\log(2a)\le\sqrt y-\sqrt{2a}$: for $y=2a$ we have equality, and $(\sqrt y)'=\frac1{2\sqrt y}\ge\frac1y=(\log(y))'$ for $y\ge4$, so the inequality holds for all $y\ge2a$ once $2a\ge4$.

2. $y\ge4a$. Define
$$g(t)=\log\left(t^2x+(1-t)^2y\right)-t\log(x)-(1-t)\left(\sqrt y-\sqrt{2a}+\log(2a)\right).$$
We need to show that $g(t)\le0$ for the relevant $t\in[0,1]$. Since $g(1)=0$, it is enough to show that $g'(t)\ge0$ for $0\le t\le1$. Using $\log(2a)-\log(x)\ge0$ and $\sqrt y-\sqrt{2a}\ge\left(1-\frac1{\sqrt2}\right)\sqrt y$ (valid for $y\ge4a$), we have
$$g'(t)=\frac{2tx-2(1-t)y}{t^2x+(1-t)^2y}-\log(x)+\sqrt y-\sqrt{2a}+\log(2a)\ge\frac{2tx-2(1-t)y}{t^2x+(1-t)^2y}+\left(1-\frac1{\sqrt2}\right)\sqrt y.$$
Hence, if
$$2tx-2(1-t)y+\left(1-\frac1{\sqrt2}\right)\sqrt y\left(t^2x+(1-t)^2y\right)\ge0,$$
then $g'(t)\ge0$. Recalling that $t^2x+(1-t)^2y\ge a$, it suffices to prove that
$$2tx-2(1-t)y+\left(1-\frac1{\sqrt2}\right)a\sqrt y\ge0.$$
Using the fact that $(1-t)^2y\le t^2x+(1-t)^2y\le2a$, we obtain $(1-t)\sqrt y\le\sqrt{2a}$. Hence,
$$2tx-2(1-t)y+\left(1-\frac1{\sqrt2}\right)a\sqrt y\ge2tx-2\sqrt{2a}\sqrt y+\left(1-\frac1{\sqrt2}\right)a\sqrt y\ge\sqrt y\left(\left(1-\frac1{\sqrt2}\right)a-2\sqrt{2a}\right).$$
This gives the condition
$$\left(1-\frac1{\sqrt2}\right)a-2\sqrt{2a}\ge0,$$
which is satisfied once $a$ is larger than a universal constant.

If $x\in[a,2a]$ and $y\in[2a,\infty)$ and $t^2x+(1-t)^2y\ge2a$, we have
$$f\left(t^2x+(1-t)^2y\right)=\sqrt{t^2x+(1-t)^2y}-\sqrt{2a}+\log2a\le t\sqrt x+(1-t)\sqrt y-\sqrt{2a}+\log2a$$
and
$$tf(x)+(1-t)f(y)=t\log x+(1-t)\left(\sqrt y-\sqrt{2a}+\log2a\right).$$
Hence, (2) holds thanks to the elementary inequality $\log2a-\log x+\sqrt x-\sqrt{2a}\le0$, which holds for $x\in[a,2a]$ when $a\ge4$, since $x\mapsto\sqrt x-\log x$ is non-decreasing for $x\ge4$.

Finally, if $x\in[0,a]$, then $f(x)=f(a)$ and, by monotonicity, $f\left(t^2x+(1-t)^2y\right)\le f\left(t^2a+(1-t)^2y\right)$. Hence, for $x\in[0,a]$ and $y\in[a,\infty)$ we have
$$f\left(t^2x+(1-t)^2y\right)\le f\left(t^2a+(1-t)^2y\right)\le tf(a)+(1-t)f(y)=tf(x)+(1-t)f(y).$$

Proposition 3.4. Let $f:\mathbb{R}_+\to\mathbb{R}_+$ be a p-convex function with $0<p<1$. Then $x\mapsto f(|x|)$ is a p-convex function on $\mathbb{R}^n$.

Proof. First, we prove that $f$ is non-decreasing. Let $0<x<y$. There exists some $k\ge1$ such that $2^{-k\left(\frac1p-1\right)}y\le x$. We proceed by induction on $k$. For $k=1$, note that $h(t)=t^{1/p}y+(1-t)^{1/p}y$ is continuous, $h(0)=y$, and $h\left(\frac12\right)=2^{-\left(\frac1p-1\right)}y$. Hence, there exists some $0\le t\le\frac12$ with $h(t)=x$, and so
$$f(x)=f\left(t^{1/p}y+(1-t)^{1/p}y\right)\le tf(y)+(1-t)f(y)=f(y).$$
For $k\ge2$, we have $f\left(2^{-(k-1)\left(\frac1p-1\right)}y\right)\le f(y)$ by the induction hypothesis, and by the same argument as above,
$$f(x)\le f\left(2^{-(k-1)\left(\frac1p-1\right)}y\right)\le f(y).$$
We have thus shown that $f$ is monotone non-decreasing. Now, by the triangle inequality and monotonicity, for any $x,y\in\mathbb{R}^n$ and $0<t<1$,
$$f\left(\left|t^{1/p}x+(1-t)^{1/p}y\right|\right)\le f\left(t^{1/p}|x|+(1-t)^{1/p}|y|\right)\le tf(|x|)+(1-t)f(|y|).$$
Using the function from Proposition 3.3, we are ready to construct the $\frac12$-convex body $K$ and prove Theorem 1.2.

Definition 3.5.
A sequence of probability measures $\{\mu_n\}$ on $\mathbb{R}^n$ is called essentially isotropic if $\int x\,d\mu_n(x)=0$ and $\int x_ix_j\,d\mu_n(x)=(1+\varepsilon_n)\delta_{ij}$ for all $i,j=1,\dots,n$, where $\varepsilon_n\to0$ as $n\to\infty$.

Proposition 3.6.
The probability measure $d\mu=C_ne^{-(n-1)f(|x|)}dx$, where $f$ is defined as in Proposition 3.3 with $a=c_0\sqrt n$ for an appropriate absolute constant $c_0>0$, is essentially isotropic. That is,
$$\int x_ix_j\,d\mu(x)=(1+\varepsilon_n)\delta_{ij}$$
for all $i,j=1,\dots,n$, with $|\varepsilon_n|\le\frac Cn$.

Proof. The density of $\mu$ is spherically symmetric, hence
$$\int_{\mathbb{R}^n}x_ix_j\,d\mu(x)=0$$
for $i\ne j$, and
$$\int_{\mathbb{R}^n}x_i^2\,d\mu(x)=\frac1n\int_{\mathbb{R}^n}|x|^2\,d\mu(x)$$
for $i=1,\dots,n$. Integration in spherical coordinates and Laplace's asymptotic method yield
$$\int|x|^2\,d\mu(x)=\frac{\displaystyle\int_0^ar^{n+1}a^{-(n-1)}dr+\int_a^{2a}r^2dr+\int_{2a}^\infty r^{n+1}(2a)^{-(n-1)}e^{-(n-1)\left(\sqrt r-\sqrt{2a}\right)}dr}{\displaystyle\int_0^ar^{n-1}a^{-(n-1)}dr+\int_a^{2a}dr+\int_{2a}^\infty r^{n-1}(2a)^{-(n-1)}e^{-(n-1)\left(\sqrt r-\sqrt{2a}\right)}dr}=\frac{\frac73a^3\left(1+O\left(\frac1n\right)\right)}{a\left(1+O\left(\frac1n\right)\right)}=n+O(1),$$
by the choice of $c_0$.

Proposition 3.7. Let $X$ be a random vector in $\mathbb{R}^n$ distributed according to $\mu$ from Proposition 3.6. Then,
$$\mathbb{P}\left(a\le|X|\le2a\right)\ge1-\frac Cn.$$
Proof.
By the same arguments as in Proposition 3.6,
$$\mathbb{P}\left(a\le|X|\le2a\right)=\frac{\int_a^{2a}dr}{a\left(1+O\left(\frac1n\right)\right)}=1+O\left(\frac1n\right).$$

Proposition 3.8.
Let $X$ be a random vector in $\mathbb{R}^n$ distributed according to $\mu$ from Proposition 3.6, and let $\widetilde X$ be a random vector distributed according to
$$d\widetilde\mu=\widetilde C_n\left(1-\frac{(n-1)f(|x|)}N\right)_+^Ndx.$$
Then for $N\ge n^3\log n$, the vector $\widetilde X$ is essentially isotropic, namely
$$\int x_ix_j\,d\widetilde\mu(x)=(1+\varepsilon'_n)\delta_{ij}$$
for all $i,j=1,\dots,n$, with $|\varepsilon'_n|\le\frac C{\sqrt n}$. Also,
$$\forall t,\quad\left|\mathbb{P}(|X|\le t)-\mathbb{P}(|\widetilde X|\le t)\right|\le\frac C{\sqrt n}.$$

Proof.
The density of $\widetilde X$ is spherically symmetric. Hence
$$\int_{\mathbb{R}^n}x_ix_j\,d\widetilde\mu(x)=0$$
for $i\ne j$, and
$$\int_{\mathbb{R}^n}x_i^2\,d\widetilde\mu(x)=\frac1n\int_{\mathbb{R}^n}|x|^2\,d\widetilde\mu(x)$$
for $i=1,\dots,n$. Since both densities are spherically symmetric, we need to estimate the one-dimensional integrals
$$I_k=\int_0^\infty r^k\left(e^{-(n-1)f(r)}-\left(1-\frac{(n-1)f(r)}N\right)_+^N\right)dr$$
for $k=n-1,n+1$. Define $\alpha$ by the equation
$$\left(\sqrt\alpha-\sqrt{2a}+\log(2a)\right)(n-1)=\frac N2;$$
that is, for any $r\le\alpha$ we have $\frac{(n-1)f(r)}N\le\frac12$. By Taylor's Theorem, for any $r\le\alpha$,
$$\left|\log\left(1-\frac{(n-1)f(r)}N\right)^N+(n-1)f(r)\right|\le\frac{C(n-1)^2}Nf(r)^2.$$
Hence, for any $r\le\alpha$,
$$\left|e^{-(n-1)f(r)}-\left(1-\frac{(n-1)f(r)}N\right)^N\right|=e^{-(n-1)f(r)}\left|1-\exp\left((n-1)f(r)+N\log\left(1-\frac{(n-1)f(r)}N\right)\right)\right|\le\frac{Cn^2}Ne^{-(n-1)f(r)}f(r)^2.$$
Moreover,
$$\left|\int_\alpha^\infty r^k\left(e^{-(n-1)f(r)}-\left(1-\frac{(n-1)f(r)}N\right)_+^N\right)dr\right|\le C\int_\alpha^\infty r^ke^{-(n-1)f(r)}dr\le Ce^{-n}.$$
Combining the above inequalities, we obtain
$$|I_k|\le\frac{Cn^2}N\int_0^\alpha r^ke^{-(n-1)f(r)}f(r)^2dr+Ce^{-n}\le\frac{C'n^2}N\int_0^\infty r^ke^{-(n-1)f(r)}f(r)^2dr.$$
Hence,
$$|I_{n-1}|\le\frac{Cn^2}N\sqrt n\log^2n\le C,\qquad|I_{n+1}|\le\frac{Cn^2}Nn^{3/2}\log^2n\le Cn.$$
By the estimate on $I_{n-1}$, and the calculations in Proposition 3.6, we obtain
$$\left|\int_0^\infty r^{n-1}\left(1-\frac{(n-1)f(r)}N\right)_+^Ndr-a\right|\le|I_{n-1}|+\left|\int_0^\infty r^{n-1}e^{-(n-1)f(r)}dr-a\right|=|I_{n-1}|+O\left(\frac an\right)\le C.$$
Hence:

• $\left(\int_0^\infty r^{n-1}\left(1-\frac{(n-1)f(r)}N\right)_+^Ndr\right)^{-1}=\frac1a\left(1+O\left(\frac1{\sqrt n}\right)\right)$;

• $\forall t,\ \left|\mathbb{P}(|X|\le t)-\mathbb{P}(|\widetilde X|\le t)\right|\le\frac C{\sqrt n}$.

By the estimate on $I_{n+1}$ we obtain
$$\left|\mathbb{E}X_i^2-\mathbb{E}\widetilde X_i^2\right|=\frac1n\left|\mathbb{E}|X|^2-\mathbb{E}|\widetilde X|^2\right|\le\frac C{n\sqrt n}\left(|I_{n+1}|+n|I_{n-1}|\right)\le\frac C{\sqrt n}.$$

Remark 3.9.
It is possible to adjust the choice of $a$ slightly (keeping $a$ of order $\sqrt n$) in the definition of $f$ so that $\widetilde X$ is exactly isotropic. We use the following estimate in our proof of Theorem 1.2.
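The next proposition is a standard concentration estimate for the Euclidean norm of a Gaussian vector; as a quick numerical sanity check (an illustration added here, not part of the original text), one can simulate $\sqrt{Z_1^2+\cdots+Z_n^2}$ directly. With $\delta=\frac14$, deviations of size $n^\delta$ from $\sqrt n$ should be extremely rare.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, delta = 10_000, 100_000, 0.25
# |Z| = sqrt(Z_1^2 + ... + Z_n^2) has the chi distribution with n degrees of freedom
norms = np.sqrt(rng.chisquare(df=n, size=m))
frac = float(np.mean(np.abs(norms - np.sqrt(n)) <= n ** delta))
print("mean of |Z|:", float(norms.mean()))          # close to sqrt(n)
print("fraction within n^delta of sqrt(n):", frac)  # should be essentially 1
```

The standard deviation of $|Z|$ is of constant order, while the allowed window $n^\delta$ grows with $n$, so the empirical fraction is essentially $1$, in line with the $1-Ce^{-cn^{2\delta}}$ bound below.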
Proposition 3.10.
Let $Z_1,\dots,Z_n$ be independent standard Gaussian random variables, and let $0<\delta<\frac12$. Then,
$$\mathbb{P}\left(\left|\sqrt{Z_1^2+\cdots+Z_n^2}-\sqrt n\right|\le n^\delta\right)\ge1-Ce^{-cn^{2\delta}},$$
where $c,C>0$ are universal constants.

Proof. Note that
$$\left|Z_1^2+\cdots+Z_n^2-n\right|=\left|\sqrt{Z_1^2+\cdots+Z_n^2}-\sqrt n\right|\left(\sqrt{Z_1^2+\cdots+Z_n^2}+\sqrt n\right)\ge\left|\sqrt{Z_1^2+\cdots+Z_n^2}-\sqrt n\right|\sqrt n.$$
Therefore it is enough to show that
$$\mathbb{P}\left(\left|Z_1^2+\cdots+Z_n^2-n\right|\le n^{\delta+\frac12}\right)\ge1-Ce^{-cn^{2\delta}}.$$
For $m\ge1$ and $i=1,\dots,n$, we have
$$\mathbb{E}\left|Z_i^2-1\right|^m\le\sum_{k=1}^m\binom mk\mathbb{E}Z_i^{2k}\le2^m(2m)!!\le C^mm!,$$
where $(2m)!!=1\cdot3\cdots(2m-1)$. Hence, by Bernstein's inequality (see [2]),
$$\mathbb{P}\left(\left|(Z_1^2-1)+\cdots+(Z_n^2-1)\right|>n^{\frac12+\delta}\right)\le Ce^{-cn^{2\delta}}.$$
We are now ready to prove Theorem 1.2.
Proof.
By Propositions 3.3 and 3.4, the function $(n-1)f(|x|)$ is $\frac12$-convex. Proposition 3.2 with $N=n^3\log n$ yields a $\frac12$-convex body $K$. Let $X$ be a random vector distributed uniformly in $K$. By the definition of $K$, the marginal of $X$ with respect to the first $n$ coordinates has density proportional to $\left(1-\frac{(n-1)f(|x|)}N\right)_+^N$. Denote this subspace by $E$. By Proposition 3.8, $\mathrm{Proj}_EX$ is essentially isotropic. Let $G$ be a standard Gaussian random variable. In order to show that $Y=\mathrm{Proj}_EX$ has no approximately Gaussian marginals, we examine $\mathbb{P}(|\theta\cdot Y|\le t)$ for $\theta\in S^{n-1}$. Using the symmetry of $Y$ and the rotation invariance of $\sigma_{n-1}$, we obtain
$$\mathbb{P}(|\theta\cdot Y|\le t)=\mathbb{E}\,\mathbf{1}_{[0,t]}(|\theta\cdot Y|)=\int_{S^{n-1}}\mathbb{E}\,\mathbf{1}_{[0,t]}(|\theta\cdot Y|)\,d\sigma_{n-1}(\theta)=\mathbb{E}\int_{S^{n-1}}\mathbf{1}_{[0,t]}(|\theta_1||Y|)\,d\sigma_{n-1}(\theta),$$
where $\theta=(\theta_1,\dots,\theta_n)$. Let $Z=(Z_1,\dots,Z_n)$, where the $Z_i$ are independent standard Gaussian random variables. Since the distribution of $Z$ is invariant under rotations, $\frac Z{|Z|}$ is distributed uniformly on $S^{n-1}$. Hence,
$$\mathbb{P}(|\theta\cdot Y|\le t)=\mathbb{P}\left(|Z_1||Y|\le t\sqrt{Z_1^2+\cdots+Z_n^2}\right).$$
By Proposition 3.10,
$$\mathbb{P}\left(\left|\sqrt{Z_1^2+\cdots+Z_n^2}-\sqrt n\right|\le n^{1/4}\right)\ge1-Ce^{-c\sqrt n}.$$
Hence,
$$\mathbb{P}(|\theta\cdot Y|\le t)=\mathbb{P}\left(|Z_1||Y|\le t\sqrt n\left(1+O\left(n^{-1/4}\right)\right)\right)+O\left(e^{-c\sqrt n}\right).\qquad(4)$$
By Propositions 3.8 and 3.7, there exists a random vector $Y'$ such that
$$\forall t,\ \left|\mathbb{P}(|Y'|\le t)-\mathbb{P}(|Y|\le t)\right|\le\frac C{\sqrt n},\qquad\mathbb{P}\left(a\le|Y'|\le2a\right)\ge1-\frac Cn,$$
and $|Y'|$ has a constant density function on $[a,2a]$. By the triangle inequality, for $W$ distributed uniformly on $\left[\frac a{\sqrt n},\frac{2a}{\sqrt n}\right]$ and any $\frac a{\sqrt n}\le\alpha\le\beta\le\frac{2a}{\sqrt n}$, we have
$$\left|\mathbb{P}\left(\sqrt n\alpha\le|Y|\le\sqrt n\beta\right)-\mathbb{P}(\alpha\le W\le\beta)\right|\le\frac C{\sqrt n}.$$
Combining with (4),
$$\mathbb{P}(|\theta\cdot Y|\le t)=\mathbb{P}\left(|G|W\le t\left(1+O\left(n^{-1/4}\right)\right)\right)+O\left(\frac1{\sqrt n}\right).$$
We conclude that $|Y\cdot\theta|$ is very close to a distribution which is the product of a Gaussian with an independent uniform random variable, and the latter distribution is far from Gaussian.

References

[1] M.
Anttila, K. Ball, I. Perissinaki, The central limit problem for convex bodies. Trans. Amer. Math. Soc. 355 (2003), 4723–4735.

[2] G. Bennett, Probability inequalities for the sum of independent random variables. Journal of the American Statistical Association 57 (1962), 33–45.

[3] C. Borell, Convex measures on locally convex spaces. Arkiv för Matematik 12 (1974), 239–252.

[4] J. Bastero, J. Bernués, A. Peña, An extension of Milman's reverse Brunn-Minkowski inequality. Geom. Funct. Anal. 5 (1995), no. 3, 572–581.

[5] U. Brehm, J. Voigt, Asymptotics of cross sections for convex bodies. Beiträge Algebra Geom. 41 (2000), 437–454.

[6] S. G. Bobkov, A. Koldobsky, On the central limit property of convex bodies. Geometric Aspects of Functional Analysis (Milman-Schechtman eds.), Lecture Notes in Math. 1807, 2003.

[7] L. Berwald, Verallgemeinerung eines Mittelwertsatzes von J. Favard für positive konkave Funktionen. Acta Math. 79 (1947), 17–37.

[8] S. J. Dilworth, The dimension of Euclidean subspaces of quasinormed spaces. Math. Proc. Cambridge Philos. Soc. 97 (1985), no. 2, 311–320.

[9] W. Feller, An Introduction to Probability Theory and its Applications, vol. II, Sect. XVI.5. J. Wiley, New York, 1971.

[10] Y. Gordon, N. J. Kalton, Local structure theory for quasi-normed spaces. Bull. Sci. Math. 118 (1994), 441–453.

[11] Y. Gordon, D. R. Lewis, Dvoretzky's theorem for quasi-normed spaces. Illinois J. Math. 35 (1991), no. 2, 250–259.

[12] N. J. Kalton, Convexity, type and the three space problem. Studia Math. 69 (1981), 247–287.

[13] B. Klartag, A central limit theorem for convex sets. Invent. Math. 168 (2007), 91–131.

[14] B. Klartag, Poincaré inequalities and moment maps. Ann. Fac. Sci. Toulouse Math. 22 (2013), no. 1, 1–41.

[15] B. Klartag, A Berry-Esseen type inequality for convex bodies with an unconditional basis. Probab. Theory Related Fields 145 (2009), no. 1, 1–33.

[16] A. E. Litvak, Kahane-Khinchin's inequality for quasi-norms. Canad. Math. Bull. 43 (2000), 368–379.

[17] A. E. Litvak, V. D. Milman, N. Tomczak-Jaegermann, Isomorphic random subspaces and quotients of convex and quasi-convex bodies. Geometric Aspects of Functional Analysis, Lecture Notes in Math. 1850, 159–178, Springer-Verlag, 2004.

[18] V. Milman, Isomorphic Euclidean regularization of quasi-norms in R^n. C. R. Acad. Sci. Paris 321 (1995), no. 7, 879–884.

[19] V. D. Milman, A. Pajor, Isotropic position and inertia ellipsoids and zonoids of the unit ball of a normed n-dimensional space.