The lower bound for Koldobsky's slicing inequality via random rounding
BO'AZ KLARTAG, GALYNA V. LIVSHYTS
Abstract. We study the lower bound for Koldobsky's slicing inequality. We show that there exist a measure µ and a symmetric convex body K ⊆ ℝⁿ such that for all ξ ∈ S^{n−1} and all t ∈ ℝ,

µ⁺(K ∩ (ξ^⊥ + tξ)) ≤ (C/√n) µ(K) |K|^{−1/n}.

Our bound is optimal, up to the value of the universal constant. It improves slightly upon the result of the first named author and Koldobsky [11], which included a doubly-logarithmic error. The proof is based on an efficient way of discretizing the unit sphere.
1. Introduction
We shall work in the Euclidean n-dimensional space ℝⁿ. The unit ball is denoted by B₂ⁿ and the unit sphere by S^{n−1}. The Lebesgue volume of a measurable set A ⊂ ℝⁿ is denoted by |A|. Throughout the paper, c, C, C′ etc. stand for positive absolute constants whose value may change from line to line.

Given a measure µ with a continuous density f on ℝⁿ and a set A ⊆ ℝⁿ of Hausdorff dimension n − 1, we write

µ⁺(A) = ∫_A f(x) dx,

where the integration is with respect to the (n − 1)-dimensional Hausdorff measure. For a measure µ on ℝⁿ with a continuous density and for an origin-symmetric convex body K in ℝⁿ (i.e., K = −K), define the quantity

S_{µ,K} = inf_{ξ ∈ S^{n−1}} µ(K) / ( |K|^{1/n} µ⁺(K ∩ ξ^⊥) ),

where ξ^⊥ = {x ∈ ℝⁿ : ⟨x, ξ⟩ = 0} is the hyperplane orthogonal to ξ. We let

S_n = sup_µ sup_{K ⊂ ℝⁿ} S_{µ,K},

where the suprema run over all measures µ with a continuous density in ℝⁿ and all origin-symmetric convex bodies K ⊆ ℝⁿ.

Koldobsky, in a series of papers [12], [13], [14], investigated the question of how large S_n can be. The discrete version of this question was studied by Alexander, Henk, Zvavitch [1] and Regev [19]. In [12], where the question first arose, Koldobsky gave upper and lower bounds on S_{µ,K} that are independent of the dimension in the case when K is an intersection body. In [13], he established the general bound S_n ≤ √n. In [14], he showed that S_{µ,K} is bounded from above by an absolute constant in the case when K is an unconditional convex body (invariant under coordinate reflections). Further, Koldobsky

Date: February 12, 2019.
2010
Mathematics Subject Classification.
Primary: 52.
Key words and phrases.
Convex bodies, log-concave.

and Pajor [15] have shown that S_{µ,K} ≤ C√p when K is the unit ball of an n-dimensional subspace of L_p. In the case when µ is the Lebesgue measure, it was conjectured by Bourgain [5], [6] that S_{µ,K} ≤ C for an arbitrary origin-symmetric convex body K. The best currently known bound in this case is S_{µ,K} ≤ Cn^{1/4}, established by the first named author [10], slightly improving upon Bourgain's estimate from [7]. However, it was shown by the first named author and Koldobsky [11] that

S_n ≥ c√n / √(log log n).

Moreover, it was shown there that for every n there exist a measure µ with continuous density and a symmetric convex body K ⊆ ℝⁿ such that for all ξ ∈ S^{n−1} and for all t ≥ 0,

(1) µ⁺(K ∩ (ξ^⊥ + tξ)) ≤ (C√(log log n)/√n) µ(K) |K|^{−1/n},

where C > 0 is some absolute constant. Here A + x = {y + x : y ∈ A} for a set A ⊆ ℝⁿ and a vector x ∈ ℝⁿ. In this note we improve the bound (1), and obtain:

Theorem 1.1.
For every n there exist a measure µ and a convex symmetric body L ⊆ ℝⁿ such that for all ξ ∈ S^{n−1} and for all t ≥ 0,

(2) µ⁺(L ∩ (ξ^⊥ + tξ)) ≤ (C/√n) µ(L) |L|^{−1/n},

where C > 0 is a universal constant.

In [4], the first named author, Bobkov and Koldobsky explored the connections of (1) with the maximal "distance" of convex bodies to subspaces of L_p. Write 𝓛ⁿ_p for the collection of origin-symmetric convex bodies in ℝⁿ that are linear images of unit balls of n-dimensional subspaces of the Banach space L_p. The outer volume ratio of a symmetric convex body K in ℝⁿ to the subspaces of L_p is defined as

d_ovr(K, 𝓛ⁿ_p) := inf_{D ∈ 𝓛ⁿ_p : K ⊂ D} ( |D| / |K| )^{1/n}.

John's theorem, and the fact that ℓ₂ⁿ embeds in L_p, entail that d_ovr(K, 𝓛ⁿ_p) ≤ √n for any symmetric convex body K. Combined with the considerations from [4], Theorem 1.1 implies a doubly-logarithmic improvement of a result of [4]:

Corollary 1.2.
There exist an absolute constant c > 0 and an origin-symmetric convex body L in ℝⁿ such that for any p ≥ 1,

d_ovr(L, 𝓛ⁿ_p) ≥ c√(n/p).

The construction of µ and K is randomized, and follows the idea from [11]. The question boils down to estimating the supremum of a certain random function. The method of the proof is based on an efficient way of discretizing the unit sphere. We consider, for every point in S^{n−1}, a "rounding" to a point in a scaled integer lattice, chosen at random; see Raghavan and Thompson [17]. This construction was recently used in [2] for efficiently computing sketches of high-dimensional data. It is somewhat reminiscent of the method used in discrepancy theory called jittered sampling. For instance, using this method, Beck [3] obtained strong bounds for the L₂-discrepancy.

In Section 2 we describe the net construction. In Section 3 we derive the key estimate for our random function. In Section 4 we conclude the proof of Theorem 1.1. In Section 5 we briefly outline some further applications, in particular in relation to random matrices; this discussion shall appear in detail in a separate paper.
We use the notation log^{(k)}(·) for the logarithm iterated k times, and log* n for the smallest positive integer m such that log^{(m)} n ≤ 1. Denote ‖x‖_p = (Σ_i |x_i|^p)^{1/p} for x ∈ ℝⁿ, and also ‖x‖_∞ = max_i |x_i| and |x| = ‖x‖₂ = √⟨x, x⟩. Write B^n_p = {x ∈ ℝⁿ : ‖x‖_p ≤ 1}. We also write A + B = {x + y : x ∈ A, y ∈ B} for the Minkowski sum.

Acknowledgements.
The second named author is supported in part by the NSF CAREER grant DMS-1753260. The work was partially supported by the National Science Foundation under Grant No. DMS-1440140 while the authors were in residence at the Mathematical Sciences Research Institute in Berkeley, California, during the Fall 2017 semester. The authors are grateful to Alexander Koldobsky for fruitful discussions and helpful comments. The authors are thankful to the anonymous referee for valuable suggestions.

2. The random rounding and the net construction
We fix a dimension n and a parameter ρ ∈ (0, 1/2]. We define F_ρ as the set of all vectors of Euclidean norm between 1 − ρ and 1 + ρ in which every coordinate is an integer multiple of ρ/√n. That is,

F_ρ = ( (1+ρ)B₂ⁿ \ (1−ρ)B₂ⁿ ) ∩ (ρ/√n)ℤⁿ.

Lemma 2.1.
The set F_ρ satisfies #F_ρ ≤ (C/ρ)ⁿ, where C is a universal constant. Moreover, let ξ ∈ S^{n−1}, and suppose that η ∈ (ρ/√n)ℤⁿ satisfies ‖ξ − η‖_∞ ≤ ρ/√n. Then η ∈ F_ρ.

Proof. Any x ∈ F_ρ satisfies ‖x‖₁ ≤ √n |x| ≤ 2√n. Hence all vectors in the scaled set (√n/ρ)·F_ρ have integer coordinates whose absolute values sum to a number which is at most 2n/ρ. Recall that the number of vectors x ∈ ℝⁿ with non-negative integer coordinates and ‖x‖₁ ≤ R equals

C(R+n, n) ≤ ( e(R+n)/n )ⁿ,

where R is a non-negative integer. Consequently,

#F_ρ ≤ 2ⁿ · ( e(2ρ^{−1}n + n)/n )ⁿ ≤ (C/ρ)ⁿ.

We move on to the "Moreover" part. We have |ξ − η| ≤ √n ‖ξ − η‖_∞ ≤ ρ. Therefore 1 − ρ ≤ |η| ≤ 1 + ρ and consequently η ∈ ( (1+ρ)B₂ⁿ \ (1−ρ)B₂ⁿ ) ∩ (ρ/√n)ℤⁿ = F_ρ. □

Definition 2.2.
For ξ ∈ S^{n−1} consider a random vector η^ξ ∈ (ρ/√n)ℤⁿ with independent coordinates such that ‖ξ − η^ξ‖_∞ ≤ ρ/√n with probability one and E η^ξ = ξ. Namely, for i = 1, …, n, writing ξ_i = (ρ/√n)(k_i + p_i) for an integer k_i and p_i ∈ [0, 1),

η^ξ_i = (ρ/√n) k_i with probability 1 − p_i, and η^ξ_i = (ρ/√n)(k_i + 1) with probability p_i.

For any ξ ∈ S^{n−1}, the random vector η^ξ belongs to F_ρ with probability one, according to Lemma 2.1. The random vector η^ξ − ξ is a centered random vector with independent coordinates, all belonging to the interval [−ρ/√n, ρ/√n]. We shall make use of Hoeffding's inequality for bounded random variables (see, e.g., Theorem 2.2.6 and Theorem 2.6.2 in Vershynin [20]).
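The rounding of Definition 2.2 is easy to make concrete. The following numerical sketch (assuming NumPy; `random_round` is our own illustrative helper, not notation from the paper) rounds each coordinate independently and checks the two properties used above: the sup-norm error never exceeds ρ/√n, so the output lies in F_ρ, and the rounding is unbiased.

```python
import numpy as np

# Randomized rounding as in Definition 2.2: each coordinate
# xi_i = (rho/sqrt(n)) * (k_i + p_i) is rounded down with probability
# 1 - p_i and up with probability p_i, independently of the others.
def random_round(xi, rho, rng):
    n = xi.shape[0]
    h = rho / np.sqrt(n)              # lattice spacing rho/sqrt(n)
    k = np.floor(xi / h)              # integer parts k_i
    p = xi / h - k                    # fractional parts p_i in [0, 1)
    return h * (k + (rng.random(n) < p))

rng = np.random.default_rng(0)
n, rho = 50, 0.25
xi = rng.standard_normal(n)
xi /= np.linalg.norm(xi)              # a point of S^{n-1}

eta = random_round(xi, rho, rng)
# The sup-norm error is at most rho/sqrt(n); hence |eta - xi| <= rho and
# eta lies in the annulus (1+rho)B \ (1-rho)B, i.e. eta is a point of F_rho.
print(np.max(np.abs(eta - xi)) <= rho / np.sqrt(n) + 1e-12)
print(1 - rho <= np.linalg.norm(eta) <= 1 + rho)

# Unbiasedness E[eta] = xi, checked empirically:
mean_eta = np.mean([random_round(xi, rho, rng) for _ in range(20000)], axis=0)
print(np.max(np.abs(mean_eta - xi)))  # small: Monte Carlo error only
```

The first two printed values are True for every sample, in accordance with Lemma 2.1; the last value is of the order of the Monte Carlo error.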
Lemma 2.3 (Hoeffding's inequality). Let X₁, …, X_n be independent random variables taking values in [m_i, M_i], i = 1, …, n. Then for any β > 0,

P( | Σ_{i=1}^n (X_i − E X_i) | ≥ β ) ≤ 2 exp( −cβ² / Σ_{i=1}^n (M_i − m_i)² ),

where c > 0 is an absolute constant.

The next Lemma follows immediately from Hoeffding's inequality with X_i = (η^ξ_i − ξ_i)θ_i and [m_i, M_i] = [−(ρ/√n)|θ_i|, (ρ/√n)|θ_i|]:

Lemma 2.4.
For any ξ ∈ S^{n−1}, β > 0 and θ ∈ ℝⁿ,

P( |⟨η^ξ − ξ, θ⟩| ≥ β ) ≤ 2 exp( −cnβ² / (|θ|²ρ²) ).

Here c > 0 is an absolute constant.
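Lemma 2.4 is straightforward to test numerically. The sketch below (assuming NumPy) samples many independent roundings and compares the empirical tail with the Hoeffding bound, using the explicit classical constant 2 in place of the unspecified absolute constant c.

```python
import numpy as np

# Monte Carlo check of the tail bound in Lemma 2.4 for a unit vector theta.
# The coordinate ranges have lengths (rho/sqrt(n))|theta_i|, so classical
# Hoeffding gives P(|<eta - xi, theta>| >= beta) <= 2 exp(-2 n beta^2 / rho^2).
rng = np.random.default_rng(1)
n, rho, trials = 100, 0.3, 20000
h = rho / np.sqrt(n)                       # lattice spacing rho/sqrt(n)
xi = rng.standard_normal(n); xi /= np.linalg.norm(xi)
theta = rng.standard_normal(n); theta /= np.linalg.norm(theta)

k = np.floor(xi / h)
p = xi / h - k                             # fractional parts
up = rng.random((trials, n)) < p           # independent roundings per trial
eta = h * (k + up)                         # trials-by-n matrix of roundings
dev = (eta - xi) @ theta                   # <eta^xi - xi, theta>, one per trial

beta = 2 * rho / np.sqrt(n)
empirical = np.mean(np.abs(dev) >= beta)
hoeffding = 2 * np.exp(-2 * n * beta**2 / rho**2)
print(empirical, hoeffding)                # empirical tail lies below the bound
```

In this range the empirical tail is essentially zero while the bound is about 6.7e-4, illustrating the subgaussian decay on which the union bound in Section 3 relies.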
3. The key estimate
Let N be a positive integer, and consider independent random vectors θ₁, …, θ_N uniformly distributed on the unit sphere S^{n−1}. Unless specified otherwise, the expectation and the probability shall be considered with respect to their distribution. For r > 0, abbreviate φ(r) = e^{−r²/2}. The main result of this section is the following Proposition.
Proposition 3.1.
There exist absolute constants C₁, …, C₅ > 0 with the following property. Let n ≥ 2, consider r ∈ [C₁√n, n] and suppose that N ≥ n satisfies N ∈ [C₂ n log(Nr/(n√n)), n^{30}]. Then with probability at least 1 − e^{−n}, for all ξ ∈ S^{n−1}, and for all t ∈ ℝ,

(1/N) Σ_{k=1}^N φ( r⟨ξ, θ_k⟩ + t ) ≤ C₃ √( n log(Nr/(n√n)) / N ) + (1 + C₄√n/r) (√n/r) φ( q√n t/r ),

where q ≥ 1 − C₅√n/r.

We shall require a few Lemmas before we proceed with the proof of Proposition 3.1.
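Before turning to the Lemmas, here is a small Monte Carlo illustration of the random function in Proposition 3.1 (assuming NumPy). The absolute constants C₁, …, C₅ are not explicit, so we only check the qualitative behaviour: at t = 0 the average concentrates near its expectation, which is of order √n/r, matching the dominant term of the bound.

```python
import numpy as np

# Simulation of the random function (1/N) sum_k phi(r <xi, theta_k> + t).
# For a fixed unit xi, <xi, theta_k> is approximately N(0, 1/n), so
# E phi(r <xi, theta>) is approximately sqrt(n / (n + r^2)) ~ sqrt(n)/r.
rng = np.random.default_rng(2)
n, N = 40, 4000
r = 4.0 * np.sqrt(n)
theta = rng.standard_normal((N, n))
theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # uniform on S^{n-1}

def F(xi, t):
    """(1/N) * sum_k phi(r * <xi, theta_k> + t)."""
    return np.mean(np.exp(-(r * (theta @ xi) + t) ** 2 / 2))

xi = rng.standard_normal(n)
xi /= np.linalg.norm(xi)
approx = np.sqrt(n / (n + r ** 2))     # Gaussian heuristic for E F(xi, 0)
print(F(xi, 0.0), approx)              # the two numbers are close
```

This is only a sanity check for one random direction ξ; the content of Proposition 3.1 is the much stronger statement that the bound holds simultaneously for all ξ and t.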
3.1. Asymptotic estimates.
For a fixed vector η ∈ ℝⁿ and t ∈ ℝ, denote

(3) F(η, t) = (1/N) Σ_{k=1}^N φ( r⟨η, θ_k⟩ + t ).

Observe that F(η, t) ≤ 1 with probability one. First, we shall show a sharpening of [11, Lemma 3.2].

Lemma 3.2.
Let n ≥ 2. Let θ be a random vector uniformly distributed on the sphere S^{n+2} ⊂ ℝ^{n+3}. For any r > 0, for any t ∈ ℝ, and for any fixed η ∈ ℝ^{n+3}, one has

E φ( r⟨θ, η⟩ + t ) ≤ ( 1 + c(log n)²/n ) ( √n / √(n + r²|η|²) ) φ( t√n / √(n + r²|η|²) ).

Here c > 0 is an absolute constant.
Proof.
Observe that the formulation of the Lemma allows us to assume, without loss of generality, that |η| = 1: indeed, in the case η = 0 the statement is straightforward, and otherwise it follows from the case |η| = 1 by scaling. The random variable ⟨θ, η⟩ is distributed on [−1, 1] according to the density

s ↦ (1 − s²)^{n/2} / ∫_{−1}^{1} (1 − u²)^{n/2} du.

Recall that for any x ∈ [0, 1),

(4) log(1 − x) = −x − x² · O(1) as x → 0,

and hence there is an absolute constant C > 0 such that for any x ∈ [0, (log n)/n],

(5) log(1 − x) ≥ −x − C(log n)²/n².

Applying (5) with x = s², we estimate

∫_{−1}^{1} (1 − s²)^{n/2} ds ≥ ∫_{−√(log n)/√n}^{√(log n)/√n} (1 − s²)^{n/2} ds ≥ ∫_{−√(log n)/√n}^{√(log n)/√n} e^{−ns²/2 − C(log n)²/(2n)} ds ≥ ( 1 − c′(log n)²/n ) ∫_{−√(log n)/√n}^{√(log n)/√n} e^{−ns²/2} ds =

(6) (1/√n) ( 1 − c′(log n)²/n ) ∫_{−√(log n)}^{√(log n)} e^{−s²/2} ds.

Recall that for any a > 0, one has

(7) ∫_a^∞ e^{−y²/2} dy ≤ (1/a) e^{−a²/2},

and therefore

(8) ∫_{−√(log n)}^{√(log n)} e^{−s²/2} ds ≥ √(2π) − 2/(√n √(log n)).

By (8) and (6), we conclude that there exists an absolute constant c̃ > 0 such that

(9) ∫_{−1}^{1} (1 − s²)^{n/2} ds ≥ (√(2π)/√n) ( 1 − c̃(log n)²/n ).

We remark that the second order term estimate is of course not sharp, yet it is more than sufficient for our purposes. Next, using the inequality 1 − x ≤ e^{−x} with x = s², we estimate from above

(10) ∫_{−1}^{1} (1 − s²)^{n/2} e^{−(rs+t)²/2} ds ≤ ∫_{−∞}^{∞} e^{−ns²/2 − (rs+t)²/2} ds.

It remains to observe that

ns² + (rs + t)² = ( √(n + r²) s + tr/√(n + r²) )² + nt²/(n + r²),

and to conclude, by (10), that

(11) ∫_{−1}^{1} (1 − s²)^{n/2} e^{−(rs+t)²/2} ds ≤ ( √(2π)/√(n + r²) ) φ( √n t / √(n + r²) ).
From (9) and (11) we conclude, for every unit vector η:

(12) E φ( r⟨θ, η⟩ + t ) ≤ ( 1 + c(log n)²/n ) ( √n/√(n + r²) ) φ( t√n/√(n + r²) ). □

As an immediate corollary of Lemma 3.2 and Hoeffding's inequality, we get:
Lemma 3.3.
Let N ≥ n ≥ 2, r ≥ √n and ρ ∈ (0, 1/2). There exist absolute constants c, C, C′ > 0 such that for all η ∈ (1+ρ)B₂ⁿ \ (1−ρ)B₂ⁿ, t ∈ ℝ and β > 0,

P( F(η, t) > β + ( 1 + c(ρ + (log n)²/n + n/r²) ) (√n/r) φ( qt√n/r ) ) ≤ e^{−Cβ²N},

where q ≥ 1 − C′(ρ + n/r²).

Proof.
In view of Lemma 2.3 (Hoeffding's inequality), it suffices to show that under the assumptions of the Lemma,

(13) E φ( r⟨θ, η⟩ + t ) ≤ ( 1 + c(ρ + (log n)²/n + n/r²) ) (√n/r) φ( qt√n/r ).

Indeed, by Lemma 3.2, for some c > 0,

E φ( r⟨θ, η⟩ + t ) ≤ ( 1 + c(log n)²/n ) ( √n/√(n + r²|η|²) ) φ( t√n/√(n + r²|η|²) ).

It remains to observe that, since r ≥ √n and 1 − ρ ≤ |η| ≤ 1 + ρ,

|t|√n / √(n + r²|η|²) ≥ q|t|√n/r, where q = 1 − O(ρ + n/r²),

and

( 1 + c(log n)²/n ) ( √n/√(n + r²|η|²) ) ≤ ( 1 + c(ρ + (log n)²/n + n/r²) ) (√n/r),

with an appropriate constant c > 0. □

3.2. Union bound.
Given ρ > 0, recall the notation F_ρ for the net from Lemma 2.1. Our next Lemma is a combination of the union bound with Lemma 3.3.

Lemma 3.4 (union bound). There exist absolute constants C₀, C₁, C′ > 0 such that the following holds. Let ρ ∈ (0, 1/2). Let N ∈ [C₀ n log(1/ρ), n^{30}] be an integer. Fix r ∈ [C₁√n, n]. Then with probability at least 1 − e^{−n}, for every η ∈ F_ρ, and for every t ∈ ℝ,

F(η, t) ≤ C₃ √( (n/N) log(1/ρ) ) + ( 1 + C₄(ρ + n/r² + (log n)²/n + 1/r) ) (√n/r) φ( q√n t/r ),

for large enough absolute constants C₃, C₄ > 0, which depend only on C₀ and C₁, and for q ≥ 1 − C′(ρ + n/r²).

Proof.
Let

α = C₃ √( (n/N) log(1/ρ) ) + ( 1 + C₄(ρ + n/r² + (log n)²/n + 1/r) ) (√n/r) φ( q√n t/r ),

where q ≥ 1 − C′(ρ + n/r²) and the constants shall be appropriately chosen later. Note that

(14) α ≥ C₃ √( (n/N) log(1/ρ) ) ≥ n^{−14.5} · C₃ √(log 2),

since ρ ≤ 1/2 and N ≤ n^{30}.

Observe also that for any pair of vectors θ ∈ S^{n−1}, η ∈ F_ρ ⊂ 2B₂ⁿ and for any t ≥ 3r, we have |r⟨η, θ⟩ + t| ≥ r, and hence

(15) e^{−(r⟨η,θ⟩+t)²/2} ≤ e^{−r²/2}.

In view of (14), (15), and the fact that r ≥ C₁√n, we have, for t ≥ 3r:

F(η, t) ≤ e^{−r²/2} ≤ e^{−C₁²n/2} ≤ n^{−14.5} C₃ √(log 2) ≤ α,

where the middle inequality follows as long as C₁ is chosen large enough. This implies the statement of the Lemma in the range t ≥ 3r; negative t are handled by replacing η with −η ∈ F_ρ.

Next, suppose t ∈ [0, 3r]. Let ε = 1/r². Consider an ε-net N_ε = {t₁, …, t_m} on the interval [0, 3r] with t_j = ε · j. Note that

(16) #N_ε ≤ [3r³] + 1 ≤ r⁴,

since r ≥ C₁√n is large enough. For any A ∈ ℝ, for any ε > 0, and for any t₁, t₂ ∈ ℝ such that |t₁ − t₂| ≤ ε, we have

|A + t₁|² ≤ |A + t₂|² + 2ε|A + t₂| + ε²,

and hence

(17) φ(A + t₂) ≤ φ(A + t₁) e^{|A+t₂|ε + ε²/2}.

Observe that for all t ∈ [0, 3r], for an arbitrary η ∈ F_ρ ⊂ 2B₂ⁿ, and any θ ∈ S^{n−1}, we have |r⟨η, θ⟩ + t| ≤ 5r, and hence

(18) e^{|r⟨η,θ⟩+t|ε + ε²/2} ≤ e^{5rε + ε²/2} = e^{5/r + 1/(2r⁴)} ≤ 1 + C″/r,

for an absolute constant C″. By (17) and (18), for each t ∈ [0, 3r] there exists τ ∈ N_ε such that

F(η, t) ≤ (1 + C″/r) F(η, τ).

Therefore, by the union bound,

P( ∃ t ∈ [0, 3r], ∃ η ∈ F_ρ : F(η, t) > α ) ≤ P( ∃ τ ∈ N_ε, ∃ η ∈ F_ρ : F(η, τ) > α/(1 + C″/r) ) ≤

(19) #N_ε · #F_ρ · max_{τ ∈ N_ε, η ∈ F_ρ} P( F(η, τ) > α/(1 + C″/r) ).
By Lemma 2.1 and (16),

(20) #N_ε · #F_ρ ≤ r⁴ (C/ρ)ⁿ ≤ (C̃/ρ)ⁿ.

We used above that r ≤ n. Let

β := (1 + C″/r)^{−1} C₃ √( (n/N) log(1/ρ) ).

Provided that C₃ and C₄ are chosen large enough, we have:

(21) α/(1 + C″/r) ≥ β + ( 1 + c(ρ + (log n)²/n + n/r²) ) (√n/r) φ( qt√n/r ),

and

(22) Cβ²N = C (1 + C″/r)^{−2} C₃² n log(1/ρ) ≥ n + n log(C̃/ρ),

where c and C are the constants from Lemma 3.3 and C̃ is the constant from (20). By Lemma 3.3, (21) and (22), we have

(23) P( F(η, τ) > α/(1 + C″/r) ) ≤ e^{−Cβ²N} ≤ e^{−n − n log(C̃/ρ)}.

By (19), (20) and (23), we conclude that the desired event holds with probability at least

1 − (C̃/ρ)ⁿ e^{−n − n log(C̃/ρ)} = 1 − e^{−n}.

This finishes the proof. □
3.3. An application of random rounding and conclusion of the proof of Proposition 3.1. We begin by formulating a general fact about subgaussian random variables, which complements the estimate from Lemma 3.2.
Lemma 3.5.
Let M ≥ 1, and let Y be a subgaussian random variable with constant M: that is, suppose that for any s > 0,

(24) P( |Y| > s ) ≤ 2e^{−M²s²}.

Then there exists an absolute constant C > 0 such that for any a ∈ ℝ,

E φ(Y + a) ≥ φ(a) − C/M.
Here the expectation is taken with respect to Y. Proof.
Since the condition (24) applies to both Y and −Y, and since φ is an even function, we may assume, without loss of generality, that a ≥ 0 (alternatively, we may replace a with |a| in the calculations below). We begin by writing

E φ(Y + a) = ∫_0^1 P( φ(Y + a) > λ ) dλ = ∫_0^∞ s e^{−s²/2} P( |Y + a| < s ) ds ≥

(25) ∫_a^∞ s e^{−s²/2} ( 1 − P(|Y + a| ≥ s) ) ds = e^{−a²/2} − ∫_a^∞ s e^{−s²/2} P( |Y + a| ≥ s ) ds.

Note that for s ≥ a ≥ 0, we have

(26) P( |Y + a| ≥ s ) = P( Y ≥ s − a ) + P( −Y ≥ s + a ) ≤ P( |Y| ≥ s − a ).

By (24) and (26), we estimate

∫_a^∞ s e^{−s²/2} P( |Y + a| ≥ s ) ds ≤ ∫_a^∞ s e^{−s²/2} · 2e^{−M²(s−a)²} ds =

(27) 2 ∫_0^∞ (t + a) e^{−(t+a)²/2} e^{−M²t²} dt.

Recall that

(28) (t + a) e^{−(t+a)²/2} ≤ 1/√e,

and that

(29) ∫_0^∞ e^{−M²t²} dt = √π/(2M).
By (25), (27), (28) and (29), letting C = √π/√e, we have

(30) E φ(Y + a) ≥ φ(a) − C/M,

yielding the conclusion. □
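Lemma 3.5 can be checked in closed form when Y is Gaussian, for which E φ(Y + a) is explicit; the sketch below (assuming NumPy) does so with the constant C = √(π/e) from the proof.

```python
import numpy as np

# Check of Lemma 3.5 for Y ~ N(0, sigma^2): the tail condition (24) then
# holds with M = 1/(sqrt(2)*sigma), and a Gaussian computation gives
#   E phi(Y + a) = exp(-a^2 / (2(1+sigma^2))) / sqrt(1 + sigma^2).
sigma = 0.05
M = 1.0 / (np.sqrt(2) * sigma)
C = np.sqrt(np.pi / np.e)                  # the constant from the proof
ok = True
for a in np.linspace(-3, 3, 61):
    lhs = np.exp(-a**2 / (2 * (1 + sigma**2))) / np.sqrt(1 + sigma**2)
    rhs = np.exp(-a**2 / 2) - C / M        # phi(a) - C/M
    ok = ok and (lhs >= rhs)
print(ok)
```

This prints True: for small σ the loss against φ(a) is of order σ², well inside the allowance C/M of order σ.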
Next, we shall demonstrate the following corollary of Lemma 2.4 and Lemma 3.5.
Corollary 3.6.
There exist absolute constants C, c > 0 such that for any M, r > 0 and ρ ∈ (0, c√n/(rM)], and for any ξ ∈ S^{n−1} and t ∈ ℝ,

F(ξ, t) ≤ E_η F(η^ξ, t) + C/M,

with the function F defined in (3) and η^ξ defined in Definition 2.2, and the expectation taken with respect to η^ξ.

Proof.
By Lemma 2.4, for any fixed θ ∈ S^{n−1}, for an absolute constant c₁ > 0, the random variable r⟨η^ξ − ξ, θ⟩ is subgaussian with constant c₁√n/(rρ) ≥ M, so that (24) holds with M. Therefore, applying Lemma 3.5 N times with Y = r⟨η^ξ − ξ, θ_k⟩ and a = r⟨ξ, θ_k⟩ + t, we get

E_η (1/N) Σ_{k=1}^N φ( r⟨η^ξ, θ_k⟩ + t ) ≥ (1/N) Σ_{k=1}^N φ( r⟨ξ, θ_k⟩ + t ) − (1/N) Σ_{k=1}^N C/M = (1/N) Σ_{k=1}^N φ( r⟨ξ, θ_k⟩ + t ) − C/M,

finishing the proof. □
We are ready to prove Proposition 3.1.
Proof of Proposition 3.1.
Let ρ = n√n/(Nr). By Corollary 3.6, applied with M = c₁N/n, we have, for every ξ ∈ S^{n−1},

(1/N) Σ_{k=1}^N φ( r⟨ξ, θ_k⟩ + t ) ≤ E_η (1/N) Σ_{k=1}^N φ( r⟨η^ξ, θ_k⟩ + t ) + C′n/N ≤

(31) max_{η ∈ F_ρ} (1/N) Σ_{k=1}^N φ( r⟨η, θ_k⟩ + t ) + C′n/N.

By Lemma 3.4 and with our choice of ρ, with probability 1 − e^{−n}, (31) is bounded from above by

C₃′ √( n log(Nr/(n√n)) / N ) + ( 1 + C₄′( n√n/(Nr) + n/r² + (log n)²/n + 1/r ) ) (√n/r) φ( q√n t/r ) + C′n/N,

where q = 1 − C′(ρ + n/r²) ≥ 1 − C₅√n/r, in view of our choice of ρ. It remains to note, in view of the facts that N ≥ C₂ n log 2 and r ≥ C₁√n, that for an appropriate absolute constant C₃ > 0, one has

C₃′ √( n log(Nr/(n√n)) / N ) + C′n/N ≤ C₃ √( n log(Nr/(n√n)) / N ),

and for an appropriate absolute constant C₄ > 0,

C₄′ ( n√n/(Nr) + n/r² + (log n)²/n + 1/r ) ≤ C₄ √n/r.

The proposition follows. □
4. Proof of Theorem 1.1

Let m be the largest positive integer such that log^{(m)} n ≥ C, for a sufficiently large absolute constant C > 0 to be determined shortly. Note that then

(32) log^{(m)} n ≤ C′,

for some absolute constant C′: indeed, log^{(m+1)} n < C, whence log^{(m)} n < e^C.

Consider, for k = 1, …, m,

N₁ = n³, N₂ = n (log n)³, …, N_k = n ( log^{(k−1)} n )³, …

Let also

R₁ = n/log n, …, R_k = n/log^{(k)} n, …

Consider independent random unit vectors θ^k_j ∈ S^{n−1}, where k = 1, …, m and j = 1, …, N_k. Following [11], consider the convex body

K = conv{ ±R_k θ^k_j, ±n e_i },

and the probability measures

µ_k = (1/N_k) Σ_{j=1}^{N_k} δ_{R_k θ^k_j},  µ_{−k} = (1/N_k) Σ_{j=1}^{N_k} δ_{−R_k θ^k_j},

where δ_x stands for the Dirac measure at x. We now set

µ = γ_n ∗ (1/2)( µ₁ ∗ µ₂ ∗ ⋯ ∗ µ_m + µ_{−1} ∗ µ_{−2} ∗ ⋯ ∗ µ_{−m} ).

Here γ_n stands for the standard Gaussian measure on ℝⁿ. We shall show that there exists a configuration of the θ^k_j such that µ and L = 4K satisfy the conclusion of the theorem.
Step 1.
Firstly, we estimate the volume of the body L = 4K from above, following the method of [11]. Note that for all k = 1, …, m we have φ(n/R_k) ≤ c₀ for some absolute constant c₀ ∈ (0, 1), and hence there exists an absolute constant Ĉ > 0 such that

(33) log[ 1 − φ(n/R_k) ] ≥ −Ĉ φ(n/R_k),

for all k = 1, …, m.

By the Khatri-Sidak lemma (see, e.g., [9] for a simple proof), applied together with the Blaschke-Santalo inequality, and in view of (33), we have

|K|^{−1} ≥ cⁿ |nK°| ≥ cⁿ γ_n(5nK°) ≥ cⁿ Π_{k=1}^m ( 1 − φ(n/R_k) )^{N_k} ≥

(34) cⁿ exp( −Ĉ Σ_{k=1}^m N_k e^{−n²/(2R_k²)} ).

Plugging in the values of N_k and R_k, and using (32), we get

(35) Σ_{k=1}^m N_k e^{−n²/(2R_k²)} ≤ n³ e^{−(log n)²/2} + n Σ_{k=2}^m ( log^{(k−1)} n )³ e^{−(log^{(k)} n)²/2} ≤ c′n,

since the sum converges faster than exponentially. By (34) and (35), we conclude that

(36) |K| ≤ c₁ⁿ,

for some absolute constant c₁ > 0.
Next, we estimate the sections from above. Note that (see [11] for details),

(37) µ⁺(ξ^⊥ + tξ) = A + B,

where

(38) A = ( 1/(2√(2π) N₁⋯N_m) ) Σ_{j₁=1}^{N₁} ⋯ Σ_{j_m=1}^{N_m} φ( t + R₁⟨ξ, θ¹_{j₁}⟩ + ⋯ + R_m⟨ξ, θ^m_{j_m}⟩ )

and

(39) B = ( 1/(2√(2π) N₁⋯N_m) ) Σ_{j₁=1}^{N₁} ⋯ Σ_{j_m=1}^{N_m} φ( −t + R₁⟨ξ, θ¹_{j₁}⟩ + ⋯ + R_m⟨ξ, θ^m_{j_m}⟩ ).

For r ≥ C₁√n we set q(r) = 1 − C₅√n/r, where C₅ is the constant coming from Proposition 3.1. We define r₁, r₂, …, r_m ∈ [C₁√n, n] by r₁ := R₁ and, for k ≥ 1,

r_{k+1} := q(r_k) √n R_{k+1}/R_k = ( Π_{j=1}^k q(r_j) √n/r_j ) · R_{k+1}.

Denote

(40) α_k := Π_{j=1}^{k−1} [ (1 + C₄√n/r_j) / q(r_j) ] ≤ Π_{j=1}^{k−1} ( 1 + C₆√n/r_j ).

The reason for the definition of α_k is the inequality

Π_{j=1}^{k−1} [ (1 + C₄√n/r_j) (√n/r_j) ] ≤ α_k √n/R_{k−1},

which we will use below in a repeated application of Proposition 3.1. Observe that there exists an absolute constant C̃ > 0 such that for every k = 1, …, m, we have

(41) α_k ≤ ( 1 + Ĉ√n/R₁ ) Π_{j=2}^{k−1} ( 1 + Č log^{(j)} n / log^{(j−1)} n ) ≤ C e^{ C̄ Σ_{j=2}^{k−1} log^{(j)} n / log^{(j−1)} n } ≤ C̃,

since the sum converges faster than exponentially.

Provided that C > 0 is selected large enough, for each k the pair N = N_k and r = r_k satisfies the assumptions of Proposition 3.1. Applying Proposition 3.1 consecutively m times, with N = N_k and r = r_k for k = 1, …, m, we get that with probability at least 1 − me^{−n} = 1 − o(1), for every ξ ∈ S^{n−1} and for every t ∈ ℝ, the term A from (38) is bounded from above by a constant multiple of

√( n log(N₁r₁/(n√n)) / N₁ ) + α₂ (√n/R₁) √( n log(N₂r₂/(n√n)) / N₂ ) + ⋯ + α_m (√n/R_{m−1}) √( n log(N_m r_m/(n√n)) / N_m ) + α_{m+1} (√n/R_m)

≤ c′/√n + (c″/√n) Σ_{k=1}^m α_k √( log^{(k)} n / log^{(k−1)} n ) + α_{m+1} (log^{(m)} n)/√n ≤ C₇/√n,

for an appropriate absolute constant C₇ > 0, where we set log^{(0)} n := n, and where we used (41) to bound the α_k, and (32) to bound log^{(m)} n. The same bound applies also to the term B from (39).
We conclude, in view of (37), that with high probability, for all ξ ∈ S^{n−1} and for all t ∈ ℝ,

(42) µ⁺(ξ^⊥ + tξ) ≤ C/√n.
Recall that µ is an average of translates of the Gaussian measure, centered at the vertices of K. As was shown in [11, Lemma 3.8], using the fact that √n B₂ⁿ ⊂ K, and since 4K = 2K + 2K contains 2√n B₂ⁿ + 2K, one has

(43) µ(4K) ≥ γ_n( 2√n B₂ⁿ ) ≥ 1/2,

where, e.g., Markov's inequality is used in the last passage. Combining (36), (42) and (43), we arrive at the conclusion of the theorem, with L = 4K. □
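The last passage of (43) is elementary to confirm numerically (assuming NumPy): Markov's inequality applied to |G|², whose expectation is n, gives γ_n(2√n B₂ⁿ) ≥ 3/4 ≥ 1/2, and in fact chi-square concentration makes the measure much closer to 1.

```python
import numpy as np

# Monte Carlo estimate of gamma_n(2 sqrt(n) B_2^n) = P(|G|^2 <= 4n),
# where G is a standard Gaussian vector in R^n.  By Markov's inequality
# P(|G|^2 > 4n) <= E|G|^2 / (4n) = 1/4, so the estimate must exceed 3/4.
rng = np.random.default_rng(3)
n, trials = 200, 20000
G = rng.standard_normal((trials, n))
frac = np.mean((G ** 2).sum(axis=1) <= 4 * n)
print(frac)        # very close to 1 for large n
```

This is only an illustration of the final step; the substance of Step 3 is [11, Lemma 3.8].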
5. Further applications
5.1. Comparison via the Hilbert-Schmidt norm for arbitrary matrices.
As another consequence of Lemma 2.4, we have:
Lemma 5.1 (comparison via the Hilbert-Schmidt norm). Let ρ ∈ (0, 1/2). There exists a collection of points N ⊂ 2B₂ⁿ \ (1/2)B₂ⁿ with #N ≤ (C/ρ)ⁿ such that for any matrix A : ℝⁿ → ℝ^N and every ξ ∈ S^{n−1} there exists an η ∈ N satisfying

(44) |Aη|² ≤ C₁|Aξ|² + C₂ (ρ²/n) ‖A‖²_HS.

Here C, C₁, C₂ are absolute constants.
Proof.
Recall that |Ax|² = Σ_{i=1}^N ⟨X_i, x⟩², where X_i are the rows of A. In order to prove the Lemma, it suffices to show, for every vector g ∈ ℝⁿ, that

(45) E_η ⟨η^ξ, g⟩² ≤ C₁ ⟨ξ, g⟩² + C₂ (ρ²/n) |g|²;

the Lemma shall follow by applying (45) to the rows of A and summing up.

We shall show (45). Using the inequality a² = ((a − b) + b)² ≤ 2(a − b)² + 2b², we see that

|⟨η^ξ, g⟩|² ≤ 2|⟨η^ξ, g⟩ − ⟨ξ, g⟩|² + 2|⟨ξ, g⟩|²,

and hence

(46) E_η |⟨η^ξ, g⟩|² ≤ 2 E_η |⟨η^ξ, g⟩ − ⟨ξ, g⟩|² + 2|⟨ξ, g⟩|².

By Lemma 2.4, |⟨η^ξ, g⟩ − ⟨ξ, g⟩| is sub-gaussian with parameter c′ρ|g|/√n, and hence

(47) E_η |⟨η^ξ, g⟩ − ⟨ξ, g⟩|² ≤ ∫_0^∞ 4t e^{−cnt²/(ρ²|g|²)} dt ≤ C₂ ρ²|g|²/n,

for some absolute constant C₂ > 0; (46) and (47) entail (45), with C₁ = 2. □

A fact similar to Lemma 5.1 was recently shown and used by Lytova and Tikhomirov [16].

Lemma 5.1 shows that there exists a net of cardinality Cⁿ such that for any random matrix A : ℝⁿ → ℝ^N whose entries have bounded second moments, with probability at least

1 − P( ‖A‖_HS ≥ 2E‖A‖_HS ) ≥ 1/2,

one has (44), with 2E‖A‖_HS in place of ‖A‖_HS. However, such a probability estimate is unsatisfactory when studying small ball estimates for the smallest singular values of random matrices. In the soon-to-follow paper, we significantly strengthen Lemma 5.1: we employ the idea of Rebrova and Tikhomirov [18], and in place of the covering by cubes, we consider a covering by parallelepipeds of sufficiently large volume. This leads us to consider the following refinement of the Hilbert-Schmidt norm: with κ > 0, for an N × n matrix A, define

B_κ(A) = min_{ α_i ∈ [0,1], Π_{i=1}^n α_i ≥ κ^{−n} } Σ_{i=1}^n α_i² |Ae_i|².

B_κ acts as an averaging over the columns of A. In a separate paper we shall show that there exists a net N ⊂ 2B₂ⁿ \ (1/2)B₂ⁿ of cardinality (C/ρ)ⁿ, such that for all N × n matrices A and every ξ ∈ S^{n−1} there exists an η ∈ N satisfying

(48) |Aη|² ≤ C₁|Aξ|² + (ρ²/n) B_κ(A).
The proof shall be a combination of an argument similar to the proof of Lemma 5.1 with the construction of a net on the family of admissible parallelepipeds; the bound on the cardinality of that net shall follow, in fact, again from Lemma 2.1. The advantage of (48) over (44) lies in the strong large deviation properties of B_κ(A). For example, we shall show the elementary fact that for any random matrix A with independent columns and E‖A‖_HS < ∞,

(49) P( B_κ(A) ≥ C (E‖A‖_HS)² ) ≤ e^{−cn}.

The detailed proofs of the mentioned facts, and applications to sharp estimates for the small ball probability of the smallest singular value of heavy-tailed matrices, shall appear in a separate paper.
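The mechanism behind (44) and (45) can be illustrated numerically (assuming NumPy): averaging over the randomized rounding of Definition 2.2, the expected value of |Aη|² obeys the bound with the explicit constant C₁ = 2 from (46), which in particular guarantees the existence of a good η.

```python
import numpy as np

# Illustration of (45) summed over the rows of A:
#   E_eta |A eta|^2 <= 2 |A xi|^2 + (rho^2/n) ||A||_HS^2
# (the exact identity is E|A eta|^2 = |A xi|^2 + sum_i Var(eta_i) |A e_i|^2,
# and each Var(eta_i) <= rho^2/(4n), so the bound holds with plenty of room).
rng = np.random.default_rng(4)
n, N, rho = 60, 80, 0.25
A = rng.standard_normal((N, n))
xi = rng.standard_normal(n)
xi /= np.linalg.norm(xi)

h = rho / np.sqrt(n)
k = np.floor(xi / h)
p = xi / h - k
samples = []
for _ in range(2000):
    eta = h * (k + (rng.random(n) < p))     # rounding of Definition 2.2
    samples.append(np.linalg.norm(A @ eta) ** 2)
avg = np.mean(samples)
bound = 2 * np.linalg.norm(A @ xi) ** 2 + rho**2 / n * np.linalg.norm(A, 'fro') ** 2
print(avg, bound)                            # avg is well below the bound
```

The slack is large here because the variance term ρ²/(4n) per coordinate is a quarter of the allowance; the point of the net construction is that some single η ∈ N achieves a comparison of this kind simultaneously for all ξ.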
5.2. Covering spheres with strips.
For θ ∈ S^{n−1}, τ ∈ ℝ and α > 0, consider the strip

S(θ, α, τ) := { ξ ∈ S^{n−1} : |⟨ξ, θ⟩ + τ| ≤ α }.

Observe that

Σ_{k=1}^N 1_{S(θ_k, 1/r, t/r)}(ξ) ≤ C Σ_{k=1}^N φ( r⟨ξ, θ_k⟩ + t ).

Therefore, Proposition 3.1 implies
Proposition 5.2.
For any N and for any α ≤ c/√n with N ∈ [cn log(N/(αn√n)), n^{30}], there exists a collection of points θ₁, …, θ_N ∈ S^{n−1} such that every strip of width α contains no more than

C̃ [ √( Nn log(N/(αn√n)) ) + N√n α ]

points of this collection.

We note that in view of the point-strip duality, bounding Σ_{k=1}^N 1_{S(θ_k, 1/r, t/r)}(ξ) yields estimates of the form stated in Proposition 5.2. The direct consideration of the characteristic functions in place of the Gaussian functions gives exactly the same bound as an application of Proposition 3.1.

In [8], Frankl, Nagy and Naszodi conjecture that for every collection of N points on S² there exists a strip of width 1/N containing at least f(N) points, where f(N) → ∞ as N → ∞. Proposition 5.2 generalizes Theorem 4.2 of Frankl, Nagy and Naszodi [8] from the two-dimensional case to an arbitrary dimension, with a good dimensional constant, although it does not shed any light on the dependence on N.

References

[1] M. Alexander, M. Henk, A. Zvavitch,
A discrete version of Koldobsky’s slicing inequality,
Israel J. Math.,to appear.[2] N. Alon, B. Klartag,
Optimal compression of approximate inner products and dimension reduction, Symposium on Foundations of Computer Science (FOCS 2017), 639-650. [3] J. Beck,
Irregularities of distribution,
I, Acta Math. 159 (1987), no. 1-2, 1-49. [4] S. Bobkov, B. Klartag, A. Koldobsky,
Estimates for moments of general measures on convex bodies , Proc.Amer. Math. Soc., 146 (2018), 4879-4888.[5] J. Bourgain,
On high-dimensional maximal functions associated to convex bodies , Amer. J. Math., 108,(1986), 1467-1476.[6] J. Bourgain,
Geometry of Banach spaces and harmonic analysis, Proceedings of the International Congress of Mathematicians, Vol. 1, 2 (Berkeley, Calif., 1986), Amer. Math. Soc., Providence, RI, (1987), 871-878. [7] J. Bourgain,
On the distribution of polynomials on high-dimensional convex sets,
Geom. aspects of Funct.Anal. (GAFA seminar notes), Israel Seminar, Springer Lect. Notes in Math. 1469 (1991), 127-137.[8] N. Frankl, A. Nagy, M. Naszodi,
Coverings: variations on a result of Rogers and on the epsilon-net theorem of Haussler and Welzl, Discrete Mathematics, Volume 341, Issue 3, March 2018, Pages 863-874. [9] A. Giannopoulos,
On some vector balancing problems , Studia Math. 122 (1997), 225-234.[10] B. Klartag,
On convex perturbations with a bounded isotropic constant,
Geom. Funct. Anal. (GAFA)16 (2006), 1274-1290.[11] B. Klartag, A. Koldobsky,
An example related to the slicing inequality for general measures, Journal of Functional Analysis, Volume 274, Issue 7, 1 April 2018, Pages 2089-2112.
[12] A. Koldobsky,
A hyperplane inequality for measures of convex bodies in ℝⁿ, n ≤ 4, Discrete Comput. Geom. 47 (2012), 538-547. [13] A. Koldobsky, A √n estimate for measures of hyperplane sections of convex bodies, Adv. Math. 254 (2014), 33-40. [14] A. Koldobsky, Slicing inequalities for measures of convex bodies,
Adv. Math. 283 (2015), 473-488. [15] A. Koldobsky, A. Pajor,
A remark on measures of sections of L p -balls, Geom. Aspects of Funct. Anal.(GAFA seminar notes), Israel Seminar, Springer Lect. Notes in Math. 2169 (2017), 213-220.[16] A. Lytova, K. Tikhomirov,
On delocalization of eigenvectors of random non-Hermitian matrices ,preprint.[17] P. Raghavan, C. D. Thompson,
Randomized rounding: a technique for provably good algorithms andalgorithmic proofs,
Combinatorica 7 (1987), no. 4, 365-374.[18] E. Rebrova, K. Tikhomirov,
Coverings of random ellipsoids, and invertibility of matrices with i.i.d.heavy-tailed entries , Israel Journal of Math, to appear.[19] O. Regev,
A note on Koldobsky's lattice slicing inequality, preprint, https://arxiv.org/abs/1608.04945. [20] R. Vershynin, High-Dimensional Probability: An Introduction with Applications in Data Science, Cambridge University Press, 2018.