The Kadec–Pełczyński theorem in $L^p$, $1 \le p < 2$

I. Berkes* and R. Tichy†

* Graz University of Technology, Institute of Statistics, Kopernikusgasse 24, 8010 Graz, Austria. e-mail: [email protected]. Research supported by FWF grant P24302-N18 and OTKA grant K 108615.

† Graz University of Technology, Institute of Mathematics A, Steyrergasse 30, 8010 Graz, Austria. e-mail: [email protected]. Research supported by FWF grant SFB F5510.

Abstract
By a classical result of Kadec and Pełczyński (1962), every normalized weakly null sequence in $L^p$, $p > 2$, contains a subsequence equivalent to the unit vector basis of $\ell^2$ or to the unit vector basis of $\ell^p$. In this paper we investigate the case $1 \le p < 2$ and show that a necessary and sufficient condition for the existence of a subsequence equivalent to the unit vector basis of $\ell^2$ is that the limit random measure $\mu$ of the sequence satisfies $\int_{\mathbb R} x^2\, d\mu(x) \in L^{p/2}$.

1. Introduction

Call two sequences $(x_n)$ and $(y_n)$ in a Banach space $(B, \|\cdot\|)$ equivalent if there exists a constant $K > 0$ such that
$$K^{-1}\Big\|\sum_{i=1}^n a_i x_i\Big\| \le \Big\|\sum_{i=1}^n a_i y_i\Big\| \le K\Big\|\sum_{i=1}^n a_i x_i\Big\|$$
for every $n \ge 1$ and $(a_1, \dots, a_n) \in \mathbb R^n$. By a classical theorem of Kadec and Pełczyński [11], any normalized weakly null sequence $(x_n)$ in $L^p(0,1)$, $p > 2$, contains a subsequence equivalent to the unit vector basis of $\ell^2$ or to the unit vector basis of $\ell^p$. In the case when $\{|x_n|^p,\ n \ge 1\}$ is uniformly integrable, the first alternative holds, while if the functions $(x_n)$ have disjoint support, the second alternative holds trivially. The general case follows via a subsequence splitting argument as in [11]. The purpose of the present paper is to investigate the case $1 \le p < 2$.

Let $1 \le p < 2$ and let $(X_n)$ be a sequence of random variables defined on a probability space $(\Omega, \mathcal F, P)$; assume that $\{|X_n|^p,\ n \ge 1\}$ is uniformly integrable and $X_n \to 0$ weakly in $L^p$. (This is meant as $\lim_{n\to\infty} E(X_n Y) = 0$ for all $Y \in L^q$, where $1/p + 1/q = 1$. To avoid confusion with weak convergence of probability measures and distributions, the latter will be called convergence in distribution and denoted by $\xrightarrow{\mathcal D}$.) Using the terminology of [5], we call a sequence $(X_n)$ of random variables determining if it has a limit distribution relative to any set $A$ in the probability space with $P(A) > 0$, i.e., if for any $A \subset \Omega$ with $P(A) > 0$ there exists a distribution function $F_A$ such that
$$\lim_{n\to\infty} P(X_n \le t \mid A) = F_A(t)$$
for all continuity points $t$ of $F_A$. Here $P(\cdot \mid A)$ denotes conditional probability given $A$. (This concept is the same as that of stable convergence, introduced in [15].) Since $\{|X_n|^p,\ n \ge 1\}$ is uniformly integrable, the sequence $(X_n)$ is tight and thus, by an extension of the Helly–Bray theorem (see e.g. [5]), it contains a determining subsequence. Hence in the sequel we can assume, without loss of generality, that the sequence $(X_n)$ itself is determining. As is shown in [1], [5], for any determining sequence $(X_n)$ there exists a random measure $\mu$ (i.e., a measurable map from $(\Omega, \mathcal F, P)$ to $(\mathcal M, \pi)$, where $\mathcal M$ is the set of probability measures on $\mathbb R$ and $\pi$ is the Prohorov distance, see Section 3) such that for any $A$ with $P(A) > 0$ and all continuity points $t$ of $F_A$ we have
$$F_A(t) = E_A\big(\mu(-\infty, t]\big), \tag{1.1}$$
where $E_A$ denotes conditional expectation given $A$. We call $\mu$ the limit random measure of $(X_n)$. We will prove the following result.

Theorem 1.1.
Let $1 \le p < 2$ and let $(X_n)$ be a determining sequence of random variables such that $\|X_n\|_p = 1$ $(n = 1, 2, \dots)$, $\{|X_n|^p,\ n \ge 1\}$ is uniformly integrable and $X_n \to 0$ weakly in $L^p$. Let $\mu$ be the limit random measure of $(X_n)$. Then there exists a subsequence $(X_{n_k})$ equivalent to the unit vector basis of $\ell^2$ if and only if
$$\int_{-\infty}^{\infty} x^2\, d\mu(x) \in L^{p/2}. \tag{1.2}$$

By assuming the uniform integrability of $|X_n|^p$, we exclude "spike" situations leading to a subsequence equivalent to the unit vector basis of $\ell^p$ as in the Kadec–Pełczyński theorem. It is easily seen that (1.2) (and in fact already $\int_{-\infty}^{\infty} x^2\, d\mu(x) < \infty$ a.s.) implies that for any $\delta > 0$ there exist a set $A \subset \Omega$ with $P(A) \ge 1 - \delta$ and a subsequence $(X_{n_k})$ such that
$$\sup_{k \ge 1} \int_A |X_{n_k}|^2\, dP < \infty.$$
Thus the first alternative in the Kadec–Pełczyński theorem "almost" implies bounded $L^2$ norms.
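To make condition (1.2) concrete, here are two toy models together with a Monte Carlo sketch (an illustration of ours, not part of the paper; the sampler and all parameters are assumptions). If $X_n = \sigma \xi_n$ with $(\xi_n)$ i.i.d. standard normal and $\sigma > 0$ a fixed random variable independent of $(\xi_n)$ with $E\sigma^p < \infty$, the limit random measure is $\mu = N(0, \sigma^2)$, so $\int x^2\, d\mu = \sigma^2 \in L^{p/2}$ and (1.2) holds; $L^p$ norms of $N$-term partial sums then grow like $N^{1/2}$, even when $E\sigma^2 = \infty$. If instead the $X_n$ are i.i.d. symmetric $\alpha$-stable with $p < \alpha < 2$, then $\mu$ is the (deterministic) stable law, $\int x^2\, d\mu = \infty$ a.s., (1.2) fails, and $\|X_1 + \dots + X_N\|_p$ grows like $N^{1/\alpha}$, matching $\ell^\alpha$ rather than $\ell^2$ behaviour.

```python
import numpy as np

rng = np.random.default_rng(0)
p, alpha, reps = 1.0, 1.5, 5000   # L^p exponent and stable index, p < alpha < 2

def sym_stable(shape, a):
    """Symmetric alpha-stable sampler (Chambers-Mallows-Stuck representation)."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, shape)
    w = rng.exponential(1.0, shape)
    return (np.sin(a * u) / np.cos(u) ** (1 / a)
            * (np.cos((1 - a) * u) / w) ** ((1 - a) / a))

def lp_norm(s):
    """Monte Carlo estimate of (E|S|^p)^(1/p)."""
    return (np.abs(s) ** p).mean() ** (1 / p)

for N in (10, 100, 1000):
    sigma = 1.0 + rng.pareto(1.5, (reps, 1))        # E sigma^p < oo, E sigma^2 = oo
    mixture = (sigma * rng.standard_normal((reps, N))).sum(axis=1)
    stable = sym_stable((reps, N), alpha).sum(axis=1)
    # first ratio ~ const (sqrt(N) growth: condition (1.2) holds),
    # second ratio ~ const (N^(1/alpha) growth: condition (1.2) fails)
    print(N, lp_norm(mixture) / np.sqrt(N), lp_norm(stable) / N ** (1 / alpha))
```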
Call a sequence $(X_n)$ of random variables in $L^p$ almost symmetric if for any $\varepsilon > 0$ there exists $K = K(\varepsilon)$ such that for any $k \ge 1$, any indices $j_1 > j_2 > \dots > j_k \ge K$, any permutation $(\sigma(j_1), \dots, \sigma(j_k))$ of $(j_1, \dots, j_k)$ and any $(a_1, \dots, a_k) \in \mathbb R^k$ we have
$$(1-\varepsilon)\Big\|\sum_{i=1}^k a_i X_{j_i}\Big\|_p \le \Big\|\sum_{i=1}^k a_i X_{\sigma(j_i)}\Big\|_p \le (1+\varepsilon)\Big\|\sum_{i=1}^k a_i X_{j_i}\Big\|_p.$$
Once in Theorem 1.1 we have found a subsequence $(X_{n_k})$ equivalent to the unit vector basis of $\ell^2$, a result of Guerre [9] implies the existence of a further subsequence $(X_{m_k})$ of $(X_{n_k})$ which is almost symmetric. Note that this conclusion also follows from the proof of Theorem 1.1. Guerre and Raynaud [10] also showed that for any $1 \le p < q < 2$ there exists a sequence $(X_n)$ in $L^p$, equivalent to the unit vector basis of $\ell^q$, but not having an almost symmetric subsequence. No characterization for the existence of almost symmetric subsequences of $(X_n)$ in terms of the limit random measure of $(X_n)$ or related quantities is known.

2. Lemmas

The necessity part of the proof of Theorem 1.1 depends on a general structure theorem for lacunary sequences proved in [3] (see Theorem 2 of [3] and the definition preceding it); for the convenience of the reader we state it here as a lemma.
Lemma 2.1
Let $(X_n)$ be a determining sequence of r.v.'s and $(\varepsilon_n)$ a positive numerical sequence tending to 0. Then, if the underlying probability space is rich enough, there exist a subsequence $(X_{m_k})$ and a sequence $(Y_k)$ of discrete r.v.'s such that
$$P\big(|X_{m_k} - Y_k| \ge \varepsilon_k\big) \le \varepsilon_k, \qquad k = 1, 2, \dots \tag{2.1}$$
and for each $k > 1$ the atoms of the $\sigma$-field $\sigma\{Y_1, \dots, Y_{k-1}\}$ can be divided into two classes $\Gamma_1$ and $\Gamma_2$ such that

(i) $\sum_{B \in \Gamma_1} P(B) \le \varepsilon_k$;

(ii) for any $B \in \Gamma_2$ there exist $P_B$-independent r.v.'s $\{Z^{(B)}_j,\ j = k, k+1, \dots\}$ defined on $B$ with common distribution function $F_B$ such that
$$P_B\big(|Y_j - Z^{(B)}_j| \ge \varepsilon_k\big) \le \varepsilon_k, \qquad j = k, k+1, \dots \tag{2.2}$$

Here $F_B$ denotes the limit distribution of $(X_n)$ relative to $B$ and $P_B$ denotes conditional probability given $B$.

Note that, instead of (2.1), the conclusion of Theorem 2 of [3] is $\sum_{k=1}^\infty |X_{m_k} - Y_k| < \infty$ a.s., but after a further thinning, (2.1) will also hold. The phrase "the underlying probability space is rich enough" is meant in Lemma 2.1 in the sense that on the underlying space there exists a sequence of independent r.v.'s, uniformly distributed over $(0,1)$ and independent of the sequence $(X_n)$.
Clearly, this condition can be guaranteed by a suitable enlargement of the probability space, changing the distribution of neither $(X_n)$ nor $\mu$, and thus it involves no loss of generality.

Lemma 2.1 means that every tight sequence of r.v.'s has a subsequence which can be closely approximated by an exchangeable sequence with a very simple structure, namely one which is i.i.d. on each set of a suitable partition of the probability space. This fact is an "effective" form of the general subsequence principle of Aldous [1] (for a related result see Berkes and Rosenthal [5]); it reduces the problem under study to the i.i.d. case, which will be handled by the classical concentration technique of Lévy [12], as improved by Esseen [7].

Lemma 2.2. Let $X_1, X_2, \dots, X_n$ be i.i.d. random variables with distribution function $F$ and put $S_n = X_1 + \dots + X_n$. Then for any $t > 0$ we have
$$P\big(|S_n| \le t\big) \le \frac{A\,t}{\sqrt n}\Big[\int_{|x| < 2t} x^2\, d\bar F(x)\Big]^{-1/2}, \tag{2.3}$$
where $\bar F$ denotes the distribution function of the symmetrized variable $X_1 - X_2$ and $A$ is an absolute constant.

Proof. By Esseen's concentration inequality [7], for any $\lambda > 0$,
$$\sup_{a \in \mathbb R} P\big(a \le S_n \le a + \lambda\big) \le \frac{A\,\lambda}{\sqrt n}\Big[\int_{|x| < \lambda} x^2\, d\bar F(x)\Big]^{-1/2}. \tag{2.4}$$
The left-hand side of (2.3) is bounded by the right-hand side of (2.4) with $\lambda = 2t$, and (2.3) follows (with a new absolute constant $A$).
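As a numerical sanity check of Lemma 2.2 (our illustration, not from the paper; the distribution and constants are arbitrary choices), the sketch below estimates $P(|S_n| \le t)$ by Monte Carlo for fixed $t$ and growing $n$; the product $\sqrt n\, P(|S_n| \le t)$ should then stay bounded, which is exactly the $n^{-1/2}$ decay asserted by (2.3).

```python
import numpy as np

rng = np.random.default_rng(1)
t, reps = 1.0, 20000

def concentration(n):
    """Monte Carlo estimate of P(|S_n| <= t) for i.i.d. centered exponentials."""
    s = (rng.standard_exponential((reps, n)) - 1.0).sum(axis=1)
    return np.mean(np.abs(s) <= t)

for n in (4, 16, 64, 256):
    # (2.3) gives P(|S_n| <= t) <= A*t / (sqrt(n) * sigma_bar(2t)), so this is O(1)
    print(n, np.sqrt(n) * concentration(n))
```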
Lemma 2.3. Let $(X_n)$ be a determining sequence of r.v.'s with limit random distribution function $F_\bullet$. Then for any set $A \subset \Omega$ with $P(A) > 0$ we have
$$E_A\Big(\int_{-\infty}^{+\infty} x^2\, dF_\bullet(x)\Big) = \int_{-\infty}^{+\infty} x^2\, dF_A(x) \tag{2.6}$$
in the sense that if one side is finite, then the other side is also finite and the two sides are equal. The statement remains valid if in (2.6) we replace the intervals of integration by $(-t, t)$, provided $t$ and $-t$ are continuity points of $F_\Omega$.

We used here the notation $F_\bullet$ to distinguish it from the ordinary limit distribution function of $(X_n)$.

Proof. Assume that $t$ and $-t$ are continuity points of $F_\Omega$. As observed in [5, p. 482], $t$ and $-t$ are then continuity points of $F_\bullet$ with probability 1 (and hence also of $F_A$ for any $A \subset \Omega$ with $P(A) > 0$), and thus almost surely
$$\int_{(-t,t)} x^2\, dF_\bullet(x) = -t^2\big(1 - F_\bullet(t)\big) - t^2 F_\bullet(-t) + 2\int_0^t x\big(1 - F_\bullet(x) + F_\bullet(-x)\big)\, dx,$$
as one sees by splitting the domain of integration into $(-t, 0)$ and $(0, t)$ (the integral over $\{0\}$ clearly equals 0) and using integration by parts. The same formula holds with $F_\bullet$ replaced by $F_A$. Integrating the last relation over $A \subset \Omega$ and using (1.1) and Fubini's theorem, we get the validity of (2.6) over $(-t, t)$. Letting $t \to \infty$ we get (2.6) over $(-\infty, \infty)$.

For the following lemma (which is the key tool for the proof of the sufficiency part of Theorem 1.1) we need some definitions. Given probability measures $\nu_n, \nu$ on the Borel sets of a separable metric space $(S, d)$, we say that $\nu_n \xrightarrow{\mathcal D} \nu$ if
$$\int_S f(x)\, \nu_n(dx) \longrightarrow \int_S f(x)\, \nu(dx) \qquad \text{as } n \to \infty \tag{2.7}$$
for every bounded, real valued continuous function $f$ on $S$. (For equivalent definitions and properties of this convergence see [4].) Clearly, (2.7) is equivalent to
$$Ef(Z_n) \longrightarrow Ef(Z), \tag{2.8}$$
where $Z_n, Z$ are r.v.'s valued in $(S, d)$ (i.e., measurable maps from some probability space to $(S, d)$) with distribution $\nu_n$, $\nu$, respectively. A class $G$ of real valued functions on $S$ is called locally equicontinuous if for every $\varepsilon > 0$ and $x \in S$ there is a $\delta = \delta(\varepsilon, x) > 0$ such that $y \in S$, $d(x, y) \le \delta$ imply $|f(x) - f(y)| \le \varepsilon$ for every $f \in G$.

Lemma 2.4 (Ranga Rao [14]). Let $(S, d)$ be a separable metric space and $\nu, \nu_n$ $(n = 1, 2, \dots)$ probability measures on the Borel sets of $(S, d)$ such that $\nu_n \xrightarrow{\mathcal D} \nu$. Let $G$ be a class of real valued functions on $(S, d)$ such that

(a) $G$ is locally equicontinuous;

(b) there exists a continuous function $g \ge 0$ on $S$ such that $|f(x)| \le g(x)$ for all $f \in G$ and $x \in S$, and
$$\int_S g(x)\, \nu_n(dx) \longrightarrow \int_S g(x)\, \nu(dx)\ (<\infty) \qquad \text{as } n \to \infty. \tag{2.9}$$

Then
$$\int_S f(x)\, \nu_n(dx) \longrightarrow \int_S f(x)\, \nu(dx) \qquad \text{as } n \to \infty \tag{2.10}$$
uniformly in $f \in G$.
3. Proof of Theorem 1.1

Let $(\Omega, \mathcal F, P)$ be the probability space of the $X_n$'s and $\mathbf X = (X_1, X_2, \dots)$; let further $\mu$ be the limit random measure of $(X_n)$. Let $(Y_n)$ be a sequence of r.v.'s on $(\Omega, \mathcal F, P)$ such that, given $\mathbf X$ and $\mu$, the r.v.'s $Y_1, Y_2, \dots$ are conditionally i.i.d. with distribution $\mu$, i.e.,
$$P(Y_1 \in A_1, \dots, Y_k \in A_k \mid \mathbf X, \mu) = \prod_{i=1}^k P(Y_i \in A_i \mid \mathbf X, \mu) \quad \text{a.s.} \tag{3.1}$$
$$P(Y_j \in A \mid \mathbf X, \mu) = \mu(A) \quad \text{a.s.} \tag{3.2}$$
for any $j, k$ and Borel sets $A, A_1, \dots, A_k$ on the real line. Such a sequence $(Y_n)$ always exists after a suitable enlargement of the probability space (in fact, $(Y_n)$ exists on $(\Omega, \mathcal F, P)$ if $(\Omega, \mathcal F, P)$ is atomless over $\sigma(\mathbf X, \mu)$, see the vector-valued version of Theorem 1.5 of [5]; see also the remark preceding Theorem 1.3 in [5, p. 479]); alternatively, the sequence $(X_n)$ can be redefined, without changing its distribution, on a standard sequence space over which $(Y_n)$ can be defined, see [1, p. 72]. Clearly, $(Y_n)$ is an exchangeable sequence; we call it the limit exchangeable sequence of $(X_n)$. It is not hard to see (cf. [1], [5]) that there exists a subsequence $(X_{n_k})$ such that for every $k \ge 1$,
$$(X_{n_{j_1}}, \dots, X_{n_{j_k}}) \xrightarrow{\mathcal D} (Y_1, \dots, Y_k) \qquad \text{if } j_1 < \dots < j_k,\ j_1 \to \infty. \tag{3.3}$$
Note that the existence of a subsequence $(X_{n_k})$ and an exchangeable sequence $(Y_k)$ satisfying (3.3) was first proved by Dacunha-Castelle and Krivine [6] via ultrafilter techniques. The limit exchangeable sequence, as defined above, also has the following simple property, proved in [1, Lemma 12].

Lemma 3.1. For every $\sigma(\mathbf X)$-measurable r.v. $Z$ and any $j \ge 1$ we have
$$(X_n, Z) \xrightarrow{\mathcal D} (Y_j, Z).$$

As before, let $\mathcal M$ denote the set of all probability measures on $\mathbb R$ and let $\pi$ be the Prohorov metric on $\mathcal M$, defined by
$$\pi(\nu, \lambda) = \inf\big\{\varepsilon > 0:\ \nu(A) \le \lambda(A^\varepsilon) + \varepsilon \text{ and } \lambda(A) \le \nu(A^\varepsilon) + \varepsilon \text{ for all Borel sets } A \subset \mathbb R\big\},$$
where $A^\varepsilon = \{x \in \mathbb R:\ |x - y| < \varepsilon \text{ for some } y \in A\}$ denotes the open $\varepsilon$-neighborhood of $A$. Let
$$S = \Big\{\nu \in \mathcal M:\ \int x\, d\nu(x) = 0,\ \int x^2\, d\nu(x) < +\infty\Big\}. \tag{3.4}$$
Since $\int_{-\infty}^\infty x^2\, d\mu(x) < \infty$ a.s. (which follows from (1.2)) and $\int_{-\infty}^\infty x\, d\mu(x) = 0$ a.s. by $X_n \to 0$ weakly in $L^p$, we have
$$P\{\mu \in S\} = 1. \tag{3.5}$$
Following Aldous [1] we define another metric $d$ on $S$ by
$$d(\nu, \lambda) = \Big(\int_0^1 \big(F_\nu^{-1}(x) - F_\lambda^{-1}(x)\big)^2\, dx\Big)^{1/2}, \tag{3.6}$$
where $F_\nu$ and $F_\lambda$ are the distribution functions of $\nu$ and $\lambda$, respectively, and $F^{-1}$ is defined by
$$F^{-1}(x) = \inf\{t:\ F(t) \ge x\}, \qquad 0 < x < 1.$$
The right side of (3.6) equals $\|F_\nu^{-1}(\eta) - F_\lambda^{-1}(\eta)\|_2$, where $\eta$ is a random variable uniformly distributed in $(0,1)$. Since $F_\nu^{-1}(\eta)$ and $F_\lambda^{-1}(\eta)$ are r.v.'s with distribution $\nu$ and $\lambda$, respectively (and thus square integrable), it follows that $d$ is a metric on $S$. It is easily seen (cf. [1, p. 80] and relation (5.15) in [1, p. 74]) that $d$ is separable and generates the same Borel $\sigma$-field as $\pi$. By the definition of $d$ we have, letting $\mathbf 0$ denote the zero distribution,
$$E\, d(\mu, \mathbf 0)^p = E\big(\operatorname{Var}(\mu)\big)^{p/2} = E\Big(\int_{-\infty}^{\infty} x^2\, d\mu(x)\Big)^{p/2} < \infty \tag{3.7}$$
by our assumption (1.2). The following lemma expresses the crucial equicontinuity property of $d$.

Lemma 3.2. Let
$$\psi(a_1, \dots, a_n) = \Big\|\sum_{i=1}^n a_i Y_i\Big\|_p. \tag{3.8}$$
Then we have
$$\bigg|\Big\|t + \sum_{k=1}^n a_k \xi_k^{(\nu)}\Big\|_p - \Big\|t + \sum_{k=1}^n a_k \xi_k^{(\lambda)}\Big\|_p\bigg| \le K\, d(\nu, \lambda)\, \psi(a_1, \dots, a_n) \tag{3.9}$$
for some constant $K > 0$, every $n \ge 1$, $\nu, \lambda \in S$, real numbers $t, a_1, \dots, a_n$ and i.i.d. sequences $(\xi_n^{(\nu)})$, $(\xi_n^{(\lambda)})$ with respective distributions $\nu$ and $\lambda$.

Relation (3.9) means that the class of functions $\{f_{t, \mathbf a, n}\}$ defined by
$$f_{t, \mathbf a, n}(\nu) = \psi(\mathbf a)^{-1}\Big\|t + \sum_{k=1}^n a_k \xi_k^{(\nu)}\Big\|_p, \qquad \mathbf a = (a_1, \dots, a_n), \tag{3.10}$$
(where the variable is $\nu$ and $t, \mathbf a, n$ are parameters) is equicontinuous. In the context of unconditional convergence of lacunary series, the importance of such equicontinuity conditions was discovered by Aldous [1]. A similar condition in terms of the compactness of the 1-conic class belonging to the type determined by $(X_n)$ was given by Krivine and Maurey (see Proposition 3 in Guerre [9]). The proof of our results is, however, purely probabilistic and we will not use types.
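The metric (3.6) is computable directly from quantile functions; the sketch below (our illustration; the two laws are arbitrary members of $S$) evaluates $d(\nu, \lambda)$ by quadrature on $(0,1)$ and also produces the coupled sequences $\xi_n^{(\nu)} = F_\nu^{-1}(\eta_n)$, $\xi_n^{(\lambda)} = F_\lambda^{-1}(\eta_n)$ used in the proof of Lemma 3.2 below.

```python
import numpy as np
from scipy import stats

def d(q_nu, q_lam, m=200000):
    """Metric (3.6): L^2 distance of quantile functions over (0,1), midpoint rule."""
    x = (np.arange(m) + 0.5) / m
    return np.sqrt(np.mean((q_nu(x) - q_lam(x)) ** 2))

# two mean-zero, square-integrable laws, i.e. elements of the class S in (3.4)
q_nu = stats.norm(0.0, 1.0).ppf        # nu = N(0, 1)
q_lam = stats.uniform(-1.0, 2.0).ppf   # lambda = uniform on (-1, 1)
print("d(nu, lambda) =", d(q_nu, q_lam))

# the coupling used in the proof of Lemma 3.2: one uniform sequence drives both
rng = np.random.default_rng(2)
eta = rng.uniform(0.0, 1.0, 5)
xi_nu, xi_lam = q_nu(eta), q_lam(eta)  # i.i.d. with laws nu and lambda, respectively
print(xi_nu, xi_lam)
```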
Proof of Lemma 3.2. We start by recalling the well known fact that if $(\xi_n)$ is an i.i.d. sequence with $E\xi_1 = 0$, $E\xi_1^2 < +\infty$, then
$$C\,\|\xi_1\|_2\Big(\sum_{i=1}^k a_i^2\Big)^{1/2} \le \Big\|\sum_{i=1}^k a_i \xi_i\Big\|_p \le \|\xi_1\|_2\Big(\sum_{i=1}^k a_i^2\Big)^{1/2} \tag{3.11}$$
for any $1 \le p < 2$ and $(a_1, \dots, a_k) \in \mathbb R^k$, where $C > 0$ is an absolute constant. Since the $L^p$ norm of $\sum_{i=1}^k a_i \xi_i$ in (3.11) cannot exceed the $L^2$ norm, the upper bound in (3.11) is obvious, while the lower bound is classical, see [13]. Since $E\,|\sum_{i=1}^n a_i Y_i|^p$ can be obtained by integrating $E\,\big|\sum_{i=1}^n a_i \xi_i^{(\omega)}\big|^p$ over $\Omega$ with respect to $dP(\omega)$, where for each $\omega \in \Omega$ the $\xi_i^{(\omega)}$ are i.i.d. with distribution $\mu(\omega)$, relation (3.11) implies that
$$A\Big(\sum_{i=1}^k a_i^2\Big)^{1/2} \le \Big\|\sum_{i=1}^k a_i Y_i\Big\|_p \le B\Big(\sum_{i=1}^k a_i^2\Big)^{1/2}, \tag{3.12}$$
where
$$A = C\bigg[E\Big(\int_{-\infty}^\infty x^2\, d\mu(x)\Big)^{p/2}\bigg]^{1/p}, \qquad B = \bigg[E\Big(\int_{-\infty}^\infty x^2\, d\mu(x)\Big)^{p/2}\bigg]^{1/p}.$$
By (1.2), and since the assumptions of Theorem 1.1 imply that $\mu$ is not concentrated at zero a.s., we have $0 < A \le B < \infty$.

Turning to the proof of (3.9), note that the $L^p$ norms on the left hand side depend on the sequences $(\xi_n^{(\nu)})$, $(\xi_n^{(\lambda)})$ only through their distributions $\nu, \lambda$, and not on the actual choice of these i.i.d. sequences; thus it suffices to verify (3.9) for any specific construction. Let $(\eta_n)$ be a sequence of independent r.v.'s, uniformly distributed over $(0,1)$; then $\xi_n^{(\nu)} = F_\nu^{-1}(\eta_n)$ and $\xi_n^{(\lambda)} = F_\lambda^{-1}(\eta_n)$ are i.i.d. sequences with distribution $\nu$ and $\lambda$, respectively. Using these sequences in (3.9), the left hand side is at most $\big\|\sum_{i=1}^n a_i(\xi_i^{(\nu)} - \xi_i^{(\lambda)})\big\|_p$, and since $\xi_n^{(\nu)} - \xi_n^{(\lambda)} = F_\nu^{-1}(\eta_n) - F_\lambda^{-1}(\eta_n)$ is also an i.i.d. sequence, with mean 0 and variance $d(\nu, \lambda)^2$, using (3.11) and the first relation of (3.12) we get that the left hand side of (3.9) is at most $K\, d(\nu, \lambda)\, \psi(a_1, \dots, a_n)$ with some constant $K > 0$. This completes the proof of Lemma 3.2.
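The two-sided bound (3.11) is easy to probe numerically (our illustrative check; the choice of distribution and coefficients is arbitrary): for centered, square-integrable i.i.d. $\xi_i$ and $1 \le p < 2$, the ratio $\|\sum_i a_i \xi_i\|_p / (\sum_i a_i^2)^{1/2}$ stays between two positive constants, whatever the coefficient vector.

```python
import numpy as np

rng = np.random.default_rng(3)
p, reps, n = 1.0, 50000, 30
xi = rng.standard_exponential((reps, n)) - 1.0   # i.i.d., mean 0, variance 1

for _ in range(5):
    a = rng.uniform(-1.0, 1.0, n)                 # an arbitrary coefficient vector
    lp = (np.abs(xi @ a) ** p).mean() ** (1 / p)  # ||sum_i a_i xi_i||_p
    l2 = np.sqrt((a ** 2).sum())                  # (sum_i a_i^2)^{1/2}
    print(lp / l2)  # stays in [C*||xi_1||_2, ||xi_1||_2], as asserted by (3.11)
```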
With the equicontinuity statement of Lemma 3.2 at hand, we can prove the sufficiency part of Theorem 1.1 with a selection procedure similar to [2]. Assume that $(X_n)$ satisfies (1.2) and fix $0 < \varepsilon \le 1/2$. We shall construct a sequence $n_1 < n_2 < \dots$ of integers such that
$$(1-\varepsilon)\, \psi(a_1, \dots, a_k) \le \Big\|\sum_{i=1}^k a_i X_{n_i}\Big\|_p \le (1+\varepsilon)\, \psi(a_1, \dots, a_k) \tag{3.13}$$
for every $k \ge 1$ and $(a_1, \dots, a_k) \in \mathbb R^k$. In view of (3.12), this will imply that $(X_{n_k})$ is equivalent to the unit vector basis of $\ell^2$; but it actually shows more, namely that under the assumptions of Theorem 1.1 there is a subsequence $(1+\varepsilon)$-equivalent to the limit exchangeable sequence and hence $(1+\varepsilon)$-symmetric.

To construct $n_1$ we set
$$Q(\mathbf a, n, \ell) = |a_1 X_n + a_2 Y_2 + \dots + a_\ell Y_\ell|^p, \qquad R(\mathbf a, \ell) = |a_1 Y_1 + a_2 Y_2 + \dots + a_\ell Y_\ell|^p$$
for every $n \ge 1$, $\ell \ge 1$ and $\mathbf a = (a_1, \dots, a_\ell) \in \mathbb R^\ell$. We show that
$$E\bigg\{\frac{Q(\mathbf a, n, \ell)}{\psi(\mathbf a)^p}\bigg\} \longrightarrow E\bigg\{\frac{R(\mathbf a, \ell)}{\psi(\mathbf a)^p}\bigg\} \qquad \text{as } n \to \infty, \text{ uniformly in } \mathbf a,\ \ell. \tag{3.14}$$
(The right side of (3.14) equals 1.) To this end we recall that, given $\mathbf X$ and $\mu$, the r.v.'s $Y_1, Y_2, \dots$ are conditionally i.i.d. with common conditional distribution $\mu$, and thus, given $\mathbf X$, $\mu$ and $Y_1$, the r.v.'s $Y_2, Y_3, \dots$ are conditionally i.i.d. with distribution $\mu$. Thus
$$E\big(Q(\mathbf a, n, \ell) \mid \mathbf X, \mu\big) = g_{\mathbf a, \ell}(X_n, \mu) \tag{3.15}$$
and
$$E\big(R(\mathbf a, \ell) \mid \mathbf X, \mu, Y_1\big) = g_{\mathbf a, \ell}(Y_1, \mu), \tag{3.16}$$
where
$$g_{\mathbf a, \ell}(t, \nu) = E\Big|a_1 t + \sum_{i=2}^\ell a_i \xi_i^{(\nu)}\Big|^p \qquad (t \in \mathbb R,\ \nu \in S)$$
and $(\xi_n^{(\nu)})$ is an i.i.d. sequence with distribution $\nu$. Integrating (3.15) and (3.16) we get
$$E\big(Q(\mathbf a, n, \ell)\big) = E\, g_{\mathbf a, \ell}(X_n, \mu), \tag{3.17}$$
$$E\big(R(\mathbf a, \ell)\big) = E\, g_{\mathbf a, \ell}(Y_1, \mu), \tag{3.18}$$
and thus (3.14) is equivalent to
$$E\, \frac{g_{\mathbf a, \ell}(X_n, \mu)}{\psi(\mathbf a)^p} \longrightarrow E\, \frac{g_{\mathbf a, \ell}(Y_1, \mu)}{\psi(\mathbf a)^p} \qquad \text{as } n \to \infty, \text{ uniformly in } \mathbf a,\ \ell. \tag{3.19}$$

We shall derive (3.19) from Lemma 2.4 and Lemma 3.1. As we have seen above, there exists a separable metric $d$ on $S$, generating the same $\sigma$-field as the Prohorov metric $\pi$, such that (3.9) holds. But then the limit random measure $\mu$, which is a random variable taking values in $(S, \pi)$ (i.e., a measurable map from the underlying probability space to $(S, \mathcal B_\pi)$, where $\mathcal B_\pi$ denotes the Borel $\sigma$-field in $S$ generated by $\pi$), can also be regarded as a random variable taking values in $(S, d)$. Also, $\mu$ is clearly $\sigma(\mathbf X)$-measurable and thus $(X_n, \mu) \xrightarrow{\mathcal D} (Y_1, \mu)$ by Lemma 3.1. Hence, (3.19) will follow from Lemma 2.4 (note the equivalence of (2.7) and (2.8)) if we show that the class of functions
$$\bigg\{\frac{g_{\mathbf a, \ell}(t, \nu)}{\psi(\mathbf a)^p}\bigg\} \tag{3.20}$$
defined on the product metric space $(\mathbb R \times S,\ \lambda_0 \times d)$ ($\lambda_0$ denotes the ordinary distance on $\mathbb R$) satisfies conditions (a), (b) of Lemma 2.4.

Observe now that
$$\psi(a_1, \dots, a_n) \ge \psi(a_1^*, \dots, a_n^*), \tag{3.21}$$
where $a_i^*$ equals either $a_i$ or 0. (In case $(Y_n)$ is an i.i.d. sequence with mean 0, (3.21) follows from Jensen's inequality (see e.g. [8, p. 153]) and the fact that, for any $H \subset \{1, 2, \dots, n\}$, the conditional expectation of $\sum_{i=1}^n a_i Y_i$ given $\sigma\{Y_j,\ j \in H\}$ is $\sum_{i \in H} a_i Y_i$. Since $(Y_n)$ is a mixture of i.i.d. sequences with mean 0, (3.21) holds in general.) In particular,
$$\psi(a_1, \dots, a_n) \ge \psi(0, a_2, \dots, a_n) \tag{3.22}$$
and
$$\psi(a_1, \dots, a_n) \ge \psi(a_1, 0, \dots, 0) = \mathrm{const}\cdot|a_1|, \tag{3.23}$$
and thus using (3.9) we get for any $\nu \in S$, $t \in \mathbb R$ and $\mathbf a = (a_1, \dots, a_\ell) \in \mathbb R^\ell$,
$$\Big\|a_1 t + \sum_{i=2}^\ell a_i \xi_i^{(\nu)}\Big\|_p \le |a_1 t| + \Big\|\sum_{i=2}^\ell a_i \xi_i^{(\nu)}\Big\|_p \le |a_1 t| + K\,\psi(0, a_2, \dots, a_\ell)\, d(\nu, \mathbf 0) \le \mathrm{const}\cdot\psi(\mathbf a)\big(|t| + d(\nu, \mathbf 0)\big). \tag{3.24}$$
Hence using (3.9), (3.22), (3.23) and the inequality $|x^p - y^p| \le |x - y|\cdot p\,(x^{p-1} + y^{p-1})$ $(x > 0,\ y > 0)$, we get for any real $t, t'$ and $\nu, \nu' \in S$,
$$\big|g_{\mathbf a, \ell}(t, \nu) - g_{\mathbf a, \ell}(t', \nu')\big| = \bigg|\Big\|a_1 t + \sum_{i=2}^\ell a_i \xi_i^{(\nu)}\Big\|_p^p - \Big\|a_1 t' + \sum_{i=2}^\ell a_i \xi_i^{(\nu')}\Big\|_p^p\bigg|$$
$$\le \bigg|\Big\|a_1 t + \sum_{i=2}^\ell a_i \xi_i^{(\nu)}\Big\|_p - \Big\|a_1 t' + \sum_{i=2}^\ell a_i \xi_i^{(\nu')}\Big\|_p\bigg|\cdot \mathrm{const}\cdot p\,\psi(\mathbf a)^{p-1}\big(|t| + |t'| + d(\nu, \mathbf 0) + d(\nu', \mathbf 0)\big)^{p-1}$$
$$\le \mathrm{const}\,\big(|a_1|\,|t - t'| + \psi(\mathbf a)\, d(\nu, \nu')\big)\, p\,\psi(\mathbf a)^{p-1}\big(|t| + |t'| + d(\nu, \mathbf 0) + d(\nu', \mathbf 0)\big)^{p-1}$$
$$\le \mathrm{const}\,\big(|t - t'| + d(\nu, \nu')\big)\, p\,\psi(\mathbf a)^{p}\big(|t| + |t'| + d(\nu, \mathbf 0) + d(\nu', \mathbf 0)\big)^{p-1}$$
$$\le \mathrm{const}\,\big(|t - t'| + d(\nu, \nu')\big)\, p\,\psi(\mathbf a)^{p}\big(2|t| + 2d(\nu, \mathbf 0) + |t - t'| + d(\nu, \nu')\big)^{p-1}.$$
Given $t, \nu$ and $\varepsilon > 0$, there exists a $\delta = \delta(t, \nu, \varepsilon) > 0$ such that the last expression is $\le \varepsilon\,\psi(\mathbf a)^p$ provided $|t - t'| + d(\nu, \nu') \le \delta$, and thus the class (3.20) is locally equicontinuous on the product metric space $(\mathbb R \times S,\ \lambda_0 \times d)$. On the other hand, (3.24) shows that the function in (3.20) is bounded by $\mathrm{const}\,(|t| + d(\nu, \mathbf 0))^p \le \mathrm{const}\cdot 2^p(|t|^p + d(\nu, \mathbf 0)^p)$. Now, using $(X_n, \mu) \xrightarrow{\mathcal D} (Y_1, \mu)$, the uniform integrability of $|X_n|^p$ and $E\, d(\mu, \mathbf 0)^p < +\infty$ (see (3.7)), we get
$$E\big(|X_n|^p + d(\mu, \mathbf 0)^p\big) \longrightarrow E\big(|Y_1|^p + d(\mu, \mathbf 0)^p\big).$$
Thus the class (3.20) satisfies also condition (b) of Lemma 2.4. We have thus proved relation (3.19), and thus also (3.14), whence it follows (note again that the right side of (3.14) equals 1) that
$$\psi(\mathbf a)^{-1}\big\|a_1 X_n + a_2 Y_2 + \dots + a_\ell Y_\ell\big\|_p \longrightarrow \psi(\mathbf a)^{-1}\big\|a_1 Y_1 + a_2 Y_2 + \dots + a_\ell Y_\ell\big\|_p \qquad \text{as } n \to \infty \tag{3.25}$$
uniformly in $\ell$, $\mathbf a$. Hence we can choose $n_1$ so large that
$$\big|\,\|a_1 X_{n_1} + a_2 Y_2 + \dots + a_\ell Y_\ell\|_p - \|a_1 Y_1 + a_2 Y_2 + \dots + a_\ell Y_\ell\|_p\,\big| \le \frac{\varepsilon}{2}\,\psi(a_1, \dots, a_\ell)$$
for every $\ell$, $\mathbf a$. This completes the first induction step.

Assume now that $n_1, \dots, n_{k-1}$ have already been chosen. Exactly in the same way as we proved (3.25), it follows that for $\ell > k$,
$$\psi(\mathbf a)^{-1}\big\|a_1 X_{n_1} + \dots + a_{k-1} X_{n_{k-1}} + a_k X_n + a_{k+1} Y_{k+1} + \dots + a_\ell Y_\ell\big\|_p \longrightarrow \psi(\mathbf a)^{-1}\big\|a_1 X_{n_1} + \dots + a_{k-1} X_{n_{k-1}} + a_k Y_k + \dots + a_\ell Y_\ell\big\|_p$$
as $n \to \infty$, uniformly in $\mathbf a$ and $\ell$. Hence we can choose $n_k$ so large that $n_k > n_{k-1}$ and
$$\big|\,\|a_1 X_{n_1} + \dots + a_{k-1} X_{n_{k-1}} + a_k X_{n_k} + a_{k+1} Y_{k+1} + \dots + a_\ell Y_\ell\|_p - \|a_1 X_{n_1} + \dots + a_{k-1} X_{n_{k-1}} + a_k Y_k + \dots + a_\ell Y_\ell\|_p\,\big| \le \frac{\varepsilon}{2^k}\,\psi(a_1, \dots, a_\ell)$$
for every $(a_1, \dots, a_\ell) \in \mathbb R^\ell$ and $\ell > k$. This completes the $k$-th induction step; the so constructed sequence $(n_k)$ obviously satisfies
$$\big|\,\|a_1 X_{n_1} + \dots + a_\ell X_{n_\ell}\|_p - \|a_1 Y_1 + \dots + a_\ell Y_\ell\|_p\,\big| \le \varepsilon\,\psi(a_1, \dots, a_\ell)$$
for every $\ell \ge 1$ and $(a_1, \dots, a_\ell) \in \mathbb R^\ell$. The last relation is equivalent to (3.13), and thus the sufficiency of (1.2) in Theorem 1.1 is proved.
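In algorithmic form, the induction just carried out is a greedy selection: at step $k$ the indices $n_1, \dots, n_{k-1}$ are frozen and $n_k$ is taken so large that replacing $Y_k$ by $X_{n_k}$ perturbs every mixed norm by at most $\varepsilon/2^k$. A schematic sketch follows (ours, not the paper's; `mixed_norm_gap` is a hypothetical oracle for the normalized perturbation, the quantity shown above to tend to 0 as $n \to \infty$ uniformly in $\ell$ and $\mathbf a$):

```python
def select_subsequence(mixed_norm_gap, eps, steps):
    """Greedy construction of n_1 < n_2 < ... satisfying (3.13).

    mixed_norm_gap(chosen, n) is a hypothetical oracle returning
    sup_{l,a} |  ||a_1 X_{n_1}+...+a_k X_n + a_{k+1} Y_{k+1}+...+a_l Y_l||_p
               - ||a_1 X_{n_1}+...+a_{k-1} X_{n_{k-1}} + a_k Y_k+...+a_l Y_l||_p | / psi(a),
    which tends to 0 as n -> infinity by the uniform convergence (3.25).
    """
    chosen = []
    n = 0
    for k in range(1, steps + 1):
        n += 1
        while mixed_norm_gap(chosen, n) >= eps / 2 ** k:
            n += 1          # advance until the perturbation is below eps / 2^k
        chosen.append(n)    # this index serves as n_k
    return chosen
```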
We now turn to the proof of necessity of (1.2) in Theorem 1.1. Assume that $(X_n)$ is equivalent to the unit vector basis of $\ell^2$; then for any increasing sequence $(m_k)$ of integers we have
$$\Big\|\frac{1}{\sqrt N}\sum_{k=1}^N X_{m_k}\Big\|_p = O(1),$$
and thus by the Markov inequality we have, for any $A \subset \Omega$ with $P(A) > 0$,
$$P_A\bigg(\Big|\frac{1}{\sqrt N}\sum_{k=1}^N X_{m_k}\Big| \ge T\bigg) \le \frac{C}{T^p} \qquad \text{for } T \ge T_0,\ N \ge 1, \tag{3.26}$$
where $T_0$ depends on $A$ and the sequence $(X_n)$. We show first that
$$\int_{-\infty}^{\infty} x^2\, d\mu(x) < \infty \qquad \text{a.s.} \tag{3.27}$$
Let $F_\bullet(x)$ denote the random distribution function corresponding to $\mu$ and assume indirectly that there exists a set $B \subset \Omega$ with $P(B) > 0$ such that $\int_{|x|<t} x^2\, dF_\bullet(x) \to \infty$ as $t \to \infty$ on $B$. Then there is a set $C \subset B$ with $P(C) > 0$ on which this divergence is uniform, and integrating over $C$ and using (1.1) and Lemma 2.3 we get
$$\int_{|x|<t} x^2\, dF_C(x) \longrightarrow \infty \qquad (t \to \infty),$$
while $\int_{|x|<t} x\, dF_C(x)$ remains bounded, since $\{|X_n|^p,\ n \ge 1\}$ is uniformly integrable with $p \ge 1$. Applying Lemma 2.1 on $C$, we obtain a subsequence $(X_{m_k})$ which, outside a set of arbitrarily small $P_C$-measure, is arbitrarily close to an i.i.d. sequence with distribution $F_C$; the truncated second moment of the symmetrization of $F_C$ then also tends to infinity, and hence, by the concentration inequality of Lemma 2.2, for every fixed $T$ the probability
$$P_C\bigg(\Big|\frac{1}{\sqrt N}\sum_{k=1}^N X_{m_k}\Big| \le T\bigg)$$
can be made arbitrarily small by choosing $N$ suitably large. This contradicts (3.26), which asserts that the normed sums $N^{-1/2}\sum_{k=1}^N X_{m_k}$ are stochastically bounded relative to $C$. Thus (3.27) holds and consequently $\mu \in S$ a.s.

It remains to strengthen (3.27) to (1.2). Conditionally on $\mu$, the limit exchangeable sequence $(Y_k)$ is i.i.d. with mean 0 and variance $\int x^2\, d\mu(x)$; hence, conditionally on $\mu$, $N^{-1/2}\sum_{k=1}^N Y_k$ converges in distribution to a normal law with variance $\int x^2\, d\mu(x)$, and by Fatou's lemma
$$\liminf_{N\to\infty} E\Big|\frac{1}{\sqrt N}\sum_{k=1}^N Y_k\Big|^p \ge c_p\, E\Big(\int_{-\infty}^{\infty} x^2\, d\mu(x)\Big)^{p/2},$$
where $c_p = E|\zeta|^p$ for a standard normal $\zeta$. Since by (3.3), Fatou's lemma and the assumed $\ell^2$-equivalence the left-hand side is finite, relation (1.2) follows. This completes the proof of Theorem 1.1.

Acknowledgement. The authors are indebted to the referee for his/her valuable remarks and suggestions leading to a considerable improvement of the presentation.

References

[1] D. J. Aldous, Limit theorems for subsequences of arbitrarily-dependent sequences of random variables. Z. Wahrscheinlichkeitstheorie verw. Gebiete 40 (1977), 59–82.

[2] I. Berkes, On almost symmetric sequences in $L^p$. Acta Math. Hung. 54 (1989), 269–278.

[3] I. Berkes and E. Péter, Exchangeable random variables and the subsequence principle. Probab. Theory Related Fields 73 (1986), 395–413.

[4] P. Billingsley, Convergence of Probability Measures. Wiley, New York, 1999.
[5] I. Berkes and H. P. Rosenthal, Almost exchangeable sequences of random variables. Z. Wahrscheinlichkeitstheorie verw. Gebiete 70 (1985), 473–507.

[6] D. Dacunha-Castelle, Indiscernability and exchangeability in $L^p$-spaces. Proc. Aarhus Seminar on random series, convex sets and geometry of Banach spaces, Aarhus Universitet, Various Publication Series 25 (1975), 50–56.

[7] C. G. Esseen, On the concentration function of a sum of independent random variables. Z. Wahrscheinlichkeitstheorie verw. Gebiete 9 (1968), 290–308.

[8] W. Feller, An Introduction to Probability Theory and its Applications, Vol. II, 2nd edition. Wiley, New York, 1970.

[9] S. Guerre, Types et suites symétriques dans $L^p$, $1 \le p < +\infty$, $p \ne 2$. Israel J. Math. 53 (1986), 191–208.

[10] S. Guerre and Y. Raynaud, On sequences with no almost symmetric subsequence. Texas Functional Analysis Seminar 1985–1986 (Austin, TX), 83–93, Longhorn Notes, Univ. Texas, Austin, TX, 1986.

[11] M. I. Kadec and A. Pełczyński, Bases, lacunary sequences and complemented subspaces in the spaces $L_p$. Studia Math. 21 (1962), 161–176.

[12] P. Lévy, Théorie de l'addition des variables aléatoires. Gauthier-Villars, Paris, 1937.

[13] J. Marcinkiewicz and A. Zygmund, Quelques théorèmes sur les fonctions indépendantes. Studia Math. 7 (1938), 104–120.

[14] R. Ranga Rao, Relations between weak and uniform convergence of measures with applications. Ann. Math. Statist. 33 (1962), 659–680.

[15] A. Rényi, On stable sequences of events. Sankhyā Ser. A 25 (1963), 293–302.