INTEGER PART POLYNOMIAL CORRELATION SEQUENCES
ANDREAS KOUTSOGIANNIS
Abstract.
Following an approach presented by N. Frantzikinakis, we prove that any multiple correlation sequence, defined by invertible measure preserving actions of commuting transformations with integer part polynomial iterates, is the sum of a nilsequence and an error term, small in uniform density. As an intermediate result, we show that multiple ergodic averages with iterates given by the integer part of real valued polynomials converge in the mean. Also, we show that under certain assumptions the limit is zero. An important role in our arguments is played by a transference principle, communicated to us by M. Wierdl, that enables one to deduce results for $\mathbb{Z}$-actions from results for flows.

1. Introduction and main results
In this paper we study the structure of sequences of the form
$$a(n)=\int f_0\cdot\Big(\prod_{i=1}^{\ell}T_i^{[p_{i,1}(n)]}\Big)f_1\cdot\ldots\cdot\Big(\prod_{i=1}^{\ell}T_i^{[p_{i,m}(n)]}\Big)f_m\,d\mu,\quad n\in\mathbb{N},\tag{1}$$
which we call integer part polynomial correlation sequences, where $[\cdot]$ denotes the integer part, $T_1,\ldots,T_\ell\colon X\to X$ are invertible commuting measure preserving transformations on a probability space $(X,\mathcal{X},\mu)$, $f_0,f_1,\ldots,f_m\in L^\infty(\mu)$ and $p_{i,j}\in\mathbb{R}[t]$ are real valued polynomials for all $1\le i\le\ell$, $1\le j\le m$. Even if it is not stated, without loss of generality, for the bounded functions $f_i$ we will always assume that $\|f_i\|_\infty\le 1$ for all $i$.

Definition.
We call the setting $(X,\mathcal{X},\mu,T_1,\ldots,T_\ell)$ a system, where $T_1,\ldots,T_\ell\colon X\to X$ are invertible commuting measure preserving transformations on the probability space $(X,\mathcal{X},\mu)$.

Any sequence of the form (1) is a special case of a multiple correlation sequence, i.e., a sequence of the form
$$\int f_0\cdot T_1^{n_1}f_1\cdot\ldots\cdot T_\ell^{n_\ell}f_\ell\,d\mu,\tag{2}$$
with $n_1,\ldots,n_\ell\in\mathbb{Z}$. The study of the structure and the limiting behaviour of averages of multiple correlation sequences is a central problem in ergodic (Ramsey) theory. Although determining the precise structure of such sequences is an important open problem in the area, in recent years a lot of progress has been made. In order to state some relevant results, we recall the notion of an $\ell$-step nilsequence.

Mathematics Subject Classification.
Primary: 37A30; Secondary: 05D10, 37A05, 11B30.
Key words and phrases.
Correlation sequences, nilsequences, generalized polynomials.
Definition ([2]). For $\ell\in\mathbb{N}$, an $\ell$-step nilsequence is a sequence of the form $(F(g^n\Gamma))$, where $F\in C(X)$, $X=G/\Gamma$, $G$ is an $\ell$-step nilpotent Lie group, $\Gamma$ is a discrete cocompact subgroup and $g\in G$. A $0$-step nilsequence is a constant sequence.

In the special case of a single ergodic transformation (i.e., all the sets invariant under the transformation have measure either $0$ or $1$), where $T_i=T^i$, $i=1,\ldots,\ell$, Bergelson, Host and Kra proved the following result:
Theorem ([2, Theorem 1.9]). For $\ell\in\mathbb{N}$, let $(X,\mathcal{X},\mu,T)$ be an ergodic system and $f_0,f_1,\ldots,f_\ell\in L^\infty(\mu)$. Then we have the decomposition
$$\int f_0\cdot T^{n}f_1\cdot\ldots\cdot T^{\ell n}f_\ell\,d\mu=\mathcal{N}(n)+e(n),\quad n\in\mathbb{N},$$
where
(i) $(\mathcal{N}(n))$ is a uniform limit of $\ell$-step nilsequences with $\|\mathcal{N}\|_\infty\le 1$;
(ii) $\lim_{N-M\to\infty}\frac{1}{N-M}\sum_{n=M}^{N-1}|e(n)|=0$.

The polynomial iterates version of this result is due to Leibman (in [13]). Also, the same author (in [14]) proved the result of Bergelson, Host and Kra without the ergodicity assumption. All these results depend on the theory of characteristic factors, a tool that, in a more general setting (say, for correlation sequences involving actions of commuting transformations), proved to be extremely complex.

Quite recently, Frantzikinakis (in [8]) showed that this problem can be settled: avoiding the use of the theory of characteristic factors, he showed that, modulo error terms which are small in uniform density, correlation sequences of actions of commuting transformations are nilsequences ([8, Theorem 1.3]). More specifically, using the convergence result of Walsh from [16] and tools from [11] and [12], he proved:
Theorem ([8, Theorem 1.2]). Let $\ell,m\in\mathbb{N}$ and $p_{i,j}\in\mathbb{Z}[t]$, $1\le i\le\ell$, $1\le j\le m$, be polynomials. Then there exists $k\in\mathbb{N}$, $k=k(\ell,m,\max\deg(p_{i,j}))$, such that for every system $(X,\mathcal{X},\mu,T_1,\ldots,T_\ell)$, functions $f_0,f_1,\ldots,f_m\in L^\infty(\mu)$ and $\varepsilon>0$, we have
$$\int f_0\cdot\Big(\prod_{i=1}^{\ell}T_i^{p_{i,1}(n)}\Big)f_1\cdot\ldots\cdot\Big(\prod_{i=1}^{\ell}T_i^{p_{i,m}(n)}\Big)f_m\,d\mu=\mathcal{N}(n)+e(n),\quad n\in\mathbb{N},\tag{3}$$
where
(i) $(\mathcal{N}(n))$ is a $k$-step nilsequence with $\|\mathcal{N}\|_\infty\le 1$;
(ii) $\limsup_{N-M\to\infty}\frac{1}{N-M}\sum_{n=M}^{N-1}|e(n)|\le\varepsilon$.

The arguments that give this result (see [8]) focus on some distinctive properties that correlation sequences as in (3) satisfy. Actually, [8] deals with a more general setting, providing a sufficient condition ([8, Theorem 1.3]) for a multiple correlation sequence to have the required decomposition into nil + nul parts. So, in order to obtain the previous result, one proves that sequences of the form (3) satisfy the sufficient condition.
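Before stating our results, it may be instructive to see, in the simplest possible case, why correlation sequences are expected to be nilsequences. The following computation is ours, purely for illustration, and treats a single ergodic rotation:

```latex
% Illustration (ours): for the rotation Tx = x + \alpha on X = \mathbb{T}
% with Haar measure \mu, and f_0, f_1 \in L^\infty(\mu), expanding in
% Fourier series gives
\[
  \int f_0 \cdot T^n f_1 \, d\mu
  \;=\; \sum_{k \in \mathbb{Z}} \widehat{f_0}(-k)\,\widehat{f_1}(k)\,
        e^{2\pi i k n \alpha},
\]
% a sum that converges absolutely by the Cauchy--Schwarz inequality
% (since L^\infty \subset L^2); the right hand side is a uniform limit
% of 1-step nilsequences, with no error term needed in this case.
```

The point of Theorem 1.2 of [8], and of our Theorem 1.1 below, is that a decomposition of this flavour survives, up to a nul error term, in the far less explicit setting of several commuting transformations.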
In this article, we obtain an analogous result for the respective integer part polynomial correlation sequences. Namely, we prove the following:
Theorem 1.1.
Let $\ell,m\in\mathbb{N}$ and $p_{i,j}\in\mathbb{R}[t]$, $1\le i\le\ell$, $1\le j\le m$, be polynomials. Then there exists $k\in\mathbb{N}$, $k=k(\ell,m,\max\deg(p_{i,j}))$, such that for every system $(X,\mathcal{X},\mu,T_1,\ldots,T_\ell)$, functions $f_0,f_1,\ldots,f_m\in L^\infty(\mu)$ and $\varepsilon>0$, we have
$$\int f_0\cdot\Big(\prod_{i=1}^{\ell}T_i^{[p_{i,1}(n)]}\Big)f_1\cdot\ldots\cdot\Big(\prod_{i=1}^{\ell}T_i^{[p_{i,m}(n)]}\Big)f_m\,d\mu=\mathcal{N}(n)+e(n),\quad n\in\mathbb{N},\tag{4}$$
where
(i) $(\mathcal{N}(n))$ is a $k$-step nilsequence with $\|\mathcal{N}\|_\infty\le 1$;
(ii) $\limsup_{N-M\to\infty}\frac{1}{N-M}\sum_{n=M}^{N-1}|e(n)|\le\varepsilon$.

In order to prove this result, we will also give, in Theorem 2.1, a sufficient condition, analogous to the one in Theorem 1.3 of [8], for the decomposition into nil + nul terms of Theorem 1.1 to hold. Theorem 1.1 will then follow by showing that sequences of the form (4) satisfy the aforementioned sufficient condition. Note that Theorem 1.1 fits into a more general scheme: it can be considered as a first step towards understanding the structure of correlation sequences with iterates given by generalized polynomials (see the definition below).

Definition ([4]). We denote by $\mathcal{G}$ the smallest family of functions $\mathbb{N}\to\mathbb{Z}$ containing $\mathbb{Z}[n]$ that forms an algebra under addition and multiplication and has the property that for every $f_1,\ldots,f_r\in\mathcal{G}$ and $c_1,\ldots,c_r\in\mathbb{R}$ we have $\big[\sum_{i=1}^{r}c_if_i\big]\in\mathcal{G}$ (i.e., $\mathcal{G}$ contains all the functions that can be obtained from regular polynomials with the help of the floor function and the usual arithmetic operations). The members of $\mathcal{G}$ are called generalized polynomials.

In the special case of Theorem 1.1 where the polynomials are linear, and so of the form $p_{i,j}(n)=a_{i,j}n$ for $a_{i,j}\in\mathbb{R}$, $1\le i\le\ell$, $1\le j\le m$, the next result gives more precise information about the dependence of $k$ on the other parameters.

Theorem 1.2.
For $\ell,m\in\mathbb{N}$ let $(X,\mathcal{X},\mu,T_1,\ldots,T_\ell)$ be a system, $a_{i,j}\in\mathbb{R}$, $1\le i\le\ell$, $1\le j\le m$, and $f_0,f_1,\ldots,f_m\in L^\infty(\mu)$. Then, for every $\varepsilon>0$, we have the decomposition
$$\int f_0\cdot\Big(\prod_{i=1}^{\ell}T_i^{[a_{i,1}n]}\Big)f_1\cdot\ldots\cdot\Big(\prod_{i=1}^{\ell}T_i^{[a_{i,m}n]}\Big)f_m\,d\mu=\mathcal{N}(n)+e(n),\quad n\in\mathbb{N},\tag{5}$$
where
(i) $(\mathcal{N}(n))$ is an $m$-step nilsequence with $\|\mathcal{N}\|_\infty\le 1$;
(ii) $\limsup_{N-M\to\infty}\frac{1}{N-M}\sum_{n=M}^{N-1}|e(n)|\le\varepsilon$.

We will also give an application of the previous results.
For $k\in\mathbb{N}$, we consider the following subsets of $\ell^\infty(\mathbb{N})$:
$$A_k:=\Big\{(\psi(n))\colon\psi\text{ is a }k\text{-step nilsequence}\Big\};$$
$$B_k:=\Big\{\Big(\int f_0\cdot\prod_{i=1}^{k}T^{(\ell_i-\ell_{k+1})n}f_i\,d\mu\Big)\colon(X,\mathcal{X},\mu,T)\text{ is a system},\ f_i\in L^\infty(\mu)\text{ and }\ell_i=(k+1)!/i,\ i=1,\ldots,k+1\Big\};$$
$$C_k:=\Big\{\Big(\int f_0\cdot\Big(\prod_{i=1}^{k}T_i^{[a_{i,1}n]}\Big)f_1\cdot\ldots\cdot\Big(\prod_{i=1}^{k}T_i^{[a_{i,k}n]}\Big)f_k\,d\mu\Big)\colon(X,\mathcal{X},\mu,T_1,\ldots,T_k)\text{ is a system},\ a_{1,1},\ldots,a_{k,1},a_{1,2},\ldots,a_{k,2},\ldots,a_{1,k},\ldots,a_{k,k}\in\mathbb{R}\text{ and }f_i\in L^\infty(\mu)\Big\}.$$

It is proven in [8] that the sets $A_k$ and $B_k$ are linear subspaces of $\ell^\infty(\mathbb{N})$. By the same reasoning (applied to the space $B_k$), we have that $C_k$ is a linear subspace of $\ell^\infty(\mathbb{N})$ as well. Also, from Theorem 1.4 in [8], we have that, modulo sequences small in uniform density, the two subspaces $A_k$ and $B_k$ coincide. Namely, we have that $\overline{A_k}^{\|\cdot\|}=\overline{B_k}^{\|\cdot\|}$, where $\|\cdot\|$ is the seminorm on $\ell^\infty(\mathbb{N})$ defined by
$$\|a\|:=\limsup_{N-M\to\infty}\frac{1}{N-M}\sum_{n=M}^{N-1}|a(n)|.\tag{6}$$
Using Theorem 1.2, we will prove the following:
Theorem 1.3.
For every $k\in\mathbb{N}$ we have $\overline{A_k}^{\|\cdot\|}=\overline{B_k}^{\|\cdot\|}=\overline{C_k}^{\|\cdot\|}$.

In order to prove Theorem 1.1, as an intermediate step, we also prove the following mean convergence result for integer part polynomial multiple ergodic averages:
Theorem 1.4.
For $\ell,m\in\mathbb{N}$ let $(X,\mathcal{X},\mu,T_1,\ldots,T_\ell)$ be a system, $p_{i,j}\in\mathbb{R}[t]$, $1\le i\le\ell$, $1\le j\le m$, polynomials and $f_1,\ldots,f_m\in L^\infty(\mu)$. Then the averages
$$\frac{1}{N-M}\sum_{n=M}^{N-1}\Big(\prod_{i=1}^{\ell}T_i^{[p_{i,1}(n)]}\Big)f_1\cdot\ldots\cdot\Big(\prod_{i=1}^{\ell}T_i^{[p_{i,m}(n)]}\Big)f_m\tag{7}$$
converge in $L^2(\mu)$ as $N-M\to\infty$.

Also, in some special cases of Theorem 1.4, via the theory of characteristic factors (equivalently, via the seminorms $|||\cdot|||_k$), using Theorem 1.2 and Proposition 5.1 from [6], we prove convergence to $0$ for the previous averages. More specifically, we prove the following (for the definitions, see Section 2):
Theorem 1.5.
For $\ell\in\mathbb{N}$ let $(X,\mathcal{X},\mu,T_1,\ldots,T_\ell)$ be a system, $a_1,\ldots,a_\ell\in\mathbb{R}$ be non-zero real numbers, $r_1,\ldots,r_\ell\in\mathbb{N}$ be pairwise distinct positive integers and $f_1,\ldots,f_\ell\in L^\infty(\mu)$. Then there exists $k=k(\ell,\max r_i)\in\mathbb{N}$ such that if $|||f_i|||_{k,\mu,T_i}=0$ for some $1\le i\le\ell$, then the averages
$$\frac{1}{N-M}\sum_{n=M}^{N-1}T_1^{[a_1n^{r_1}]}f_1\cdot\ldots\cdot T_\ell^{[a_\ell n^{r_\ell}]}f_\ell$$
converge to $0$ in $L^2(\mu)$ as $N-M\to\infty$.

Moreover, we have the following result, which applies to a larger class of polynomial iterates:
Theorem 1.6.
For $\ell\in\mathbb{N}$ let $(X,\mathcal{X},\mu,T_1,\ldots,T_\ell)$ be a system, $p_1,\ldots,p_\ell\in\mathbb{R}[t]$ be non-constant polynomials with distinct degrees and highest degree $d=\deg(p_1)$, and $f_1,\ldots,f_\ell\in L^\infty(\mu)$. Then there exists $k=k(\ell,d)\in\mathbb{N}$ such that if $|||f_1|||_{k,\mu,T_1}=0$, then the averages
$$\frac{1}{N-M}\sum_{n=M}^{N-1}T_1^{[p_1(n)]}f_1\cdot\ldots\cdot T_\ell^{[p_\ell(n)]}f_\ell$$
converge to $0$ in $L^2(\mu)$ as $N-M\to\infty$.

Actually, we remark at this point that a more general result, which implies Theorem 1.6, is proven in Proposition 5.4. Since for weakly mixing systems $(X,\mathcal{X},\mu,T)$ we have that $\int f\,d\mu=0$ implies $|||f|||_{k,\mu,T}=0$ for every $k\in\mathbb{N}$, we deduce:

Corollary 1.7.
For $\ell\in\mathbb{N}$ let $(X,\mathcal{X},\mu,T_i)$, $1\le i\le\ell$, be commuting weakly mixing systems, $p_1,\ldots,p_\ell\in\mathbb{R}[t]$ be non-constant polynomials with distinct degrees and $f_1,\ldots,f_\ell\in L^\infty(\mu)$. Then the averages
$$\frac{1}{N-M}\sum_{n=M}^{N-1}T_1^{[p_1(n)]}f_1\cdot\ldots\cdot T_\ell^{[p_\ell(n)]}f_\ell$$
converge to $\prod_{i=1}^{\ell}\int f_i\,d\mu$ in $L^2(\mu)$ as $N-M\to\infty$.

Remark.
It is an open problem whether the same result (of Corollary 1.7) is true when the non-constant polynomials $p_i$ are merely essentially distinct (i.e., $p_i-p_j$ is non-constant for $i\ne j$).

Like Theorem 1.1, Theorem 1.4 is a first result towards establishing mean convergence in the more general case where the iterates are given by generalized polynomials. For these two results, i.e., those dealing with integer part polynomial iterates, the method of proof relies on a respective convergence result for flows. This technique was first used in [15] by E. Lesigne, in order to prove that when a sequence of positive real numbers is good for the single term pointwise ergodic theorem, then the respective sequence of its integer parts is also good (see also [5]). This method was later adapted by M. Wierdl (in [17]) to deal with multiple term averages (see Theorem 3.2 below).

Before we close this section, we state the following conjecture:
Conjecture.
Theorems 1.1 and 1.4 hold if the $p_{i,j}$ are generalized polynomials.

Note that for Theorem 1.1, i.e., the decomposition into the form nil + nul, the conjecture in its generality is completely open. As for Theorem 1.4, only the one-term case is known, due to Bergelson and Leibman ([3]). Even the case where all the transformations are equal is not known.

Notation.
We denote by $\mathbb{N}$ the set of positive integers. If $(a(n))$ is a bounded sequence, we denote by $\limsup_{N-M\to\infty}\Big|\frac{1}{N-M}\sum_{n=M}^{N-1}a(n)\Big|$ the limit $\lim_{N\to\infty}\sup_{M\in\mathbb{N}}\Big|\frac{1}{N}\sum_{n=M}^{M+N-1}a(n)\Big|$ (this limit exists by subadditivity). For a measurable function $f$ on a measure space $X$ with a transformation $T\colon X\to X$, we denote by $Tf$ the composition $f\circ T$. Given transformations $T_i\colon X\to X$, $1\le i\le\ell$, we denote by $\prod_{i=1}^{\ell}T_i$ the composition $T_1\circ\cdots\circ T_\ell$.
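The uniform averages above differ from ordinary Cesàro averages in that the starting point $M$ is not fixed. The following numerical sketch (a finite proxy, with helper names of our choosing) contrasts the two behaviours: an alternating sequence has vanishing uniform averages, while a dyadic block sequence always contains windows on which the average equals $1$.

```python
import math

def window_avg(a, M, N):
    """Average of a(n) for n = M, ..., M+N-1 (0-indexed finite sample)."""
    return sum(a[n] for n in range(M, M + N)) / N

def uniform_avg(a, N):
    """sup over starting points M of |(1/N) * sum_{n=M}^{M+N-1} a(n)|,
    restricted to windows that fit inside the finite sample a."""
    return max(abs(window_avg(a, M, N)) for M in range(len(a) - N + 1))

L = 4096
# Alternating sequence: every window of even length cancels exactly.
alt = [(-1) ** n for n in range(L)]
# Block sequence: 1 on dyadic blocks [2^k, 2^{k+1}) for even k, else 0.
blocks = [1 if (n > 0 and int(math.log2(n)) % 2 == 0) else 0 for n in range(L)]

N = 256
print(uniform_avg(alt, N))     # 0: every length-256 window cancels
print(uniform_avg(blocks, N))  # 1: some window sits entirely inside a block of 1s
```

The block sequence shows why the sup over $M$ matters: its ordinary Cesàro averages oscillate, but a window placed inside a long block of ones keeps the uniform average at $1$ no matter how large $N$ is (within the sample).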
I would like to express my indebtedness to N. Frantzikinakis, who introduced this problem to me, for his in-depth suggestions throughout the writing of this article. These suggestions corrected mistakes in the original version of this paper and led me to additional results. I would also like to thank M. Wierdl for making an unpublished note of his available to me. The proof of Theorem 3.2 below is based on this note. Finally, I wish to thank V. Bergelson for helpful discussions during the preparation of this paper.

2.
Definitions and Main Ideas
2.1. Anti-uniformity and Regularity.
In order to prove Theorem 1.1, we follow the arguments of [8]. For the reader's convenience, in this subsection we repeat most of Section 2 of [8]. First we recall the notion of the uniformity seminorms (a slight variant of the uniformity seminorms defined by B. Host and B. Kra in [12]).
Definition ([8]). Let $k\in\mathbb{N}$ and $a\colon\mathbb{N}\to\mathbb{C}$ be a bounded sequence.
(i) Given a sequence of intervals $\mathbf{I}=(I_N)$ with lengths tending to infinity, we say that the sequence $(a(n))$ is distributed regularly along $\mathbf{I}$ if the limit
$$\lim_{N\to\infty}\frac{1}{|I_N|}\sum_{n\in I_N}a_1(n+h_1)\cdot\ldots\cdot a_r(n+h_r)$$
exists for every $r\in\mathbb{N}$ and $h_1,\ldots,h_r\in\mathbb{N}$, where each $a_i$ is either $a$ or $\bar a$.
(ii) If $\mathbf{I}$ is as in (i) and $(a(n))$ is distributed regularly along $\mathbf{I}$, we define inductively
$$\|a\|_{\mathbf{I},1}:=\lim_{N\to\infty}\Big|\frac{1}{|I_N|}\sum_{n\in I_N}a(n)\Big|;$$
and for $k\ge 2$ (one can show as in [12, Proposition 4.3] that the next limit exists)
$$\|a\|_{\mathbf{I},k}^{2^k}:=\lim_{H\to\infty}\frac{1}{H}\sum_{h=1}^{H}\|\sigma_ha\cdot\bar a\|_{\mathbf{I},k-1}^{2^{k-1}},$$
where $\sigma_h$ is the shift transformation defined by $(\sigma_ha)(n):=a(n+h)$.
(iii) If $(a(n))$ is a bounded sequence we let $\|a\|_{U_k(\mathbb{N})}:=\sup_{\mathbf{I}}\|a\|_{\mathbf{I},k}$, where the sup is taken over all sequences of intervals $\mathbf{I}$ with lengths tending to infinity along which the sequence $(a(n))$ is distributed regularly.

Next, we recall the notions of $k$-anti-uniformity and $k$-regularity from [8]. These notions play an important role in the decomposition of multiple correlation sequences into the form nil + nul. We will adapt them, working with the notion of regularity and a notion similar to anti-uniformity for our setting (namely, the notion of weak-anti-uniformity).

Definition ([8]). Let $k\in\mathbb{N}$. We say that the bounded sequence $a\colon\mathbb{N}\to\mathbb{C}$ is
(i) $k$-anti-uniform if there exists $C:=C(k,a)$ such that
$$\limsup_{N-M\to\infty}\Big|\frac{1}{N-M}\sum_{n=M}^{N-1}a(n)b(n)\Big|\le C\|b\|_{U_k(\mathbb{N})}$$
for every $b\in\ell^\infty(\mathbb{N})$;
(ii) $k$-regular if the limit
$$\lim_{N-M\to\infty}\frac{1}{N-M}\sum_{n=M}^{N-1}a(n)\psi(n)$$
exists for every $(k-1)$-step nilsequence $(\psi(n))$.
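To get a feeling for the inductive definition, consider $a(n)=e^{2\pi in\alpha}$ with $\alpha$ irrational: along $\mathbf{I}=([0,N))_N$ one has $\|a\|_{\mathbf{I},1}=0$ by Weyl's theorem, while $\sigma_ha\cdot\bar a$ is the constant $e^{2\pi ih\alpha}$ of modulus $1$, so $\|a\|_{\mathbf{I},2}=1$. The sketch below checks this with finite averages standing in for the limits (the function names and truncation levels are our own, arbitrary choices).

```python
import cmath
import math

ALPHA = math.sqrt(2)  # an irrational frequency (illustrative choice)

def a(n):
    return cmath.exp(2j * cmath.pi * ALPHA * n)

def avg_norm1(seq_fn, N):
    """Finite proxy for ||.||_{I,1} along I_N = [0, N): |(1/N) sum seq(n)|."""
    return abs(sum(seq_fn(n) for n in range(N)) / N)

def norm2_fourth(seq_fn, H, N):
    """Finite proxy for ||.||_{I,2}^4 = avg over h of ||sigma_h a . conj(a)||_{I,1}^2."""
    total = 0.0
    for h in range(1, H + 1):
        prod = lambda n, h=h: seq_fn(n + h) * seq_fn(n).conjugate()
        total += avg_norm1(prod, N) ** 2
    return total / H

# Degree-1 average nearly cancels, yet each sigma_h a . conj(a) is a unimodular
# constant, so the degree-2 uniformity norm is as large as possible.
print(avg_norm1(a, 5000))        # close to 0
print(norm2_fourth(a, 50, 5000)) # close to 1
```

This is exactly the phenomenon the anti-uniformity bounds exploit: a sequence can average to zero and still be maximally "structured" at a higher uniformity level.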
It turns out that the previous properties ($k$-anti-uniformity and $k$-regularity) give a sufficient condition for the required decomposition of sequences of the form (3) ([8, Theorem 1.3]). In our case though, for sequences of the form (1), we cannot prove that they are $k$-anti-uniform, but we can prove something weaker. Namely, in Theorem 4.1 we will prove that these sequences are $k$-weak-anti-uniform (see the definition below).
Let $k\in\mathbb{N}$. We say that the bounded sequence $a\colon\mathbb{N}\to\mathbb{C}$ is $k$-weak-anti-uniform if for every $0<\delta<1$ there exist a constant $C_\delta:=C_\delta(k,a)$ and a term $c_\delta$, with $c_\delta\to 0$ as $\delta\to 0^+$, such that for every $b\in\ell^\infty(\mathbb{N})$
$$\limsup_{N-M\to\infty}\Big|\frac{1}{N-M}\sum_{n=M}^{N-1}a(n)b(n)\Big|\le C_\delta\|b\|_{U_k(\mathbb{N})}+c_\delta.$$

The same proof as that of Theorem 1.3 in [8] works in our case as well; namely, we have the following (we only sketch the proof; for details, see the proof of Theorem 1.3 in [8]):
Theorem 2.1 ([8, Theorem 1.3]). For $k\in\mathbb{N}$ let $a\colon\mathbb{N}\to\mathbb{C}$ be a sequence with $\|a\|_\infty\le 1$ that is $k$-weak-anti-uniform and $k$-regular. Then, for every $\varepsilon>0$, we have the decomposition
$$a(n)=\mathcal{N}(n)+e(n),\quad n\in\mathbb{N},$$
where
(i) $(\mathcal{N}(n))$ is a $(k-1)$-step nilsequence with $\|\mathcal{N}\|_\infty\le 1$;
(ii) $\limsup_{N-M\to\infty}\frac{1}{N-M}\sum_{n=M}^{N-1}|e(n)|\le\varepsilon$.
Proof ([8, Theorem 1.3]). We first remark that the limit $\lim_{N-M\to\infty}\frac{1}{N-M}\sum_{n=M}^{N-1}|a(n)|^2$ exists. Let
$$Y:=\Big\{(\psi(n))\colon\psi\text{ is a }(k-1)\text{-step nilsequence}\Big\}\quad\text{and}\quad X:=\operatorname{span}\{Y,a\}.$$
On $X\times X$ we define the bilinear form
$$\langle f,g\rangle:=\lim_{N-M\to\infty}\frac{1}{N-M}\sum_{n=M}^{N-1}f(n)\overline{g(n)}.$$
Since the limit exists for $f,g\in X$, this bilinear form induces the seminorm $\|f\|:=\sqrt{\langle f,f\rangle}$ (note that this is the restriction, to the space $X$, of the seminorm defined in (6)).

Let $0<\varepsilon\le 1$ (as we clearly may assume). Then there exists $\delta>0$ such that for all $(b(n))\in\ell^\infty(\mathbb{N})$ we have
$$\limsup_{N-M\to\infty}\Big|\frac{1}{N-M}\sum_{n=M}^{N-1}a(n)b(n)\Big|\le C_\delta\|b\|_{U_k(\mathbb{N})}+\frac{\varepsilon^2}{4}.$$
We can assume that $C_\delta\ge 1$. For $d:=\inf\{\|a-y\|\colon y\in Y\}$ and $\delta_0:=\big(\tfrac{\varepsilon^2}{8C_\delta}\big)^{2^k}$, there exists $y_0\in Y$ such that
$$\|a-y_0\|^2\le d^2+\delta_0^2.\tag{8}$$
We can also assume that $\|y_0\|_\infty\le 1$. Using (8) we can prove that
$$\sup_{y\in Y\colon\|y\|_\infty\le 1}|\langle a-y_0,y\rangle|\le\delta_0.\tag{9}$$
Since the set $\{y\in Y\colon\|y\|_\infty\le 1\}$ contains all the $(k-1)$-step nilsequences that are bounded by $1$, we have that
$$\|a-y_0\|_{U_k(\mathbb{N})}\le(2\delta_0)^{2^{-k}}.\tag{10}$$
We let $\mathcal{N}:=y_0$, $e:=a-y_0$. Then $a=\mathcal{N}+e$, and $(\mathcal{N}(n))$ is a $(k-1)$-step nilsequence with $\|\mathcal{N}\|_\infty\le 1$. Using the fact that $a$ is $k$-weak-anti-uniform and Relation (10), we get that
$$|\langle a,e\rangle|\le C_\delta\|e\|_{U_k(\mathbb{N})}+\frac{\varepsilon^2}{4}=C_\delta\|a-y_0\|_{U_k(\mathbb{N})}+\frac{\varepsilon^2}{4}\le\frac{\varepsilon^2}{2}.$$
Furthermore, from (9) we have $|\langle\mathcal{N},e\rangle|\le\delta_0\le\frac{\varepsilon^2}{2}$. Combining the last two estimates we deduce that
$$\|e\|^2=\langle e,e\rangle\le|\langle a,e\rangle|+|\langle\mathcal{N},e\rangle|\le\varepsilon^2,$$
from which, by the Cauchy-Schwarz inequality, we have the result. $\square$
So, in order to prove Theorem 1.1, it suffices to show that sequences of the form (1) are $k$-weak-anti-uniform and $k$-regular for some $k$. We will deal with these issues in the next two sections.

In order to show the $k$-regularity, we will modify a trick of M. Wierdl ([17]) that passes from a convergence result for flows to the convergence of the respective integer part averages (Theorem 3.2). By making use of a convergence result for sequences with iterates given by integer valued polynomials, due to Walsh ([16]), we will get the desired property.

In order to show the $k$-weak-anti-uniformity, we will use, as in the previous step, an argument analogous to Wierdl's (Theorem 4.1) and the fact that sequences with iterates given by integer valued polynomials are $k$-anti-uniform (which we will get from [8]). This last result is obtained by an inductive procedure known as PET (Polynomial Exhaustion Technique) induction, introduced in [1].

2.2. The seminorms $|||\cdot|||_k$. In this subsection, we follow [11] and [6] for the definition of the seminorms $|||\cdot|||_k$, which we will use in order to prove Theorem 1.5 and Proposition 5.4; the latter implies Theorem 1.6 and Corollary 1.7. The inductive definition that we use here follows from [11] (in the ergodic case) and [6] (in the general case), together with von Neumann's ergodic theorem.

Let $(X,\mathcal{X},\mu,T)$ be a system and $f\in L^\infty(\mu)$. We define inductively the seminorms $|||f|||_{k,\mu,T}$ as follows:
$$|||f|||_{1,\mu,T}:=\|E(f|\mathcal{I})\|_{L^2(\mu)},$$
where $\mathcal{I}$ is the $\sigma$-algebra of $T$-invariant sets and $E(f|\mathcal{I})$ is the conditional expectation of $f$ with respect to $\mathcal{I}$, satisfying $\int E(f|\mathcal{I})\,d\mu=\int f\,d\mu$ and $TE(f|\mathcal{I})=E(Tf|\mathcal{I})$. For $k\ge 1$, we let
$$|||f|||_{k+1,\mu,T}^{2^{k+1}}:=\lim_{N-M\to\infty}\frac{1}{N-M}\sum_{n=M}^{N-1}|||\bar f\cdot T^nf|||_{k,\mu,T}^{2^k}.$$
All these limits exist and define seminorms (see [11]).
By von Neumann's ergodic theorem, we get
$$|||f|||_{1,\mu,T}^{2}=\lim_{N-M\to\infty}\frac{1}{N-M}\sum_{n=M}^{N-1}\int\bar f\cdot T^nf\,d\mu,$$
and more generally, for every $k\ge 1$, we have that
$$|||f|||_{k,\mu,T}^{2^k}=\lim_{N_1-M_1\to\infty}\frac{1}{N_1-M_1}\sum_{n_1=M_1}^{N_1-1}\ldots\lim_{N_k-M_k\to\infty}\frac{1}{N_k-M_k}\sum_{n_k=M_k}^{N_k-1}\int\prod_{\vec\epsilon\in\{0,1\}^k}C^{|\vec\epsilon|}T^{\vec\epsilon\cdot\vec n}f\,d\mu,\tag{11}$$
where $\vec\epsilon=(\epsilon_1,\ldots,\epsilon_k)$, $\vec n=(n_1,\ldots,n_k)$, $|\vec\epsilon|=\epsilon_1+\ldots+\epsilon_k$, $\vec\epsilon\cdot\vec n=\epsilon_1n_1+\ldots+\epsilon_kn_k$, and for $z\in\mathbb{C}$, $k\in\mathbb{N}\cup\{0\}$ we let
$$C^k(z)=\begin{cases}z,&\text{if }k\text{ is even}\\ \bar z,&\text{if }k\text{ is odd}.\end{cases}$$
We remark that $|||f\otimes\bar f|||_{k,\mu\times\mu,T\times T}\le|||f|||^2_{k+1,\mu,T}$ for all $k\in\mathbb{N}$, which follows from (11) and the ergodic theorem; that $|||f|||_{k,\mu,T}\le|||f|||_{k+1,\mu,T}$ for all $k\in\mathbb{N}$ (by using Lemma 3.9 from [11]); and that $|||f|||_{k,\mu,T}=|||f|||_{k,\mu,T^{-1}}$ for all $k\in\mathbb{N}$, which follows from (11).

It is a deep fact, shown in [11], that for ergodic systems we have $|||f|||_{k+1}=0$ if and only if the function $f$ is orthogonal to the largest $k$-step "nil-factor" of the system. We will not use this fact here though.

In order to recall a convergence result from [6], we also have to recall the notion of a nice family of polynomials.

Definition ([6]). Let $\ell,m\in\mathbb{N}$. Given $\ell$ families of polynomials in $\mathbb{R}[t]$,
$$P_1=(p_{1,1},\ldots,p_{1,m}),\ \ldots,\ P_\ell=(p_{\ell,1},\ldots,p_{\ell,m}),$$
we define an ordered family of $m$ polynomial $\ell$-tuples as follows:
$$(P_1,\ldots,P_\ell)=\big((p_{1,1},\ldots,p_{\ell,1}),\ldots,(p_{1,m},\ldots,p_{\ell,m})\big).$$
In the special case where the polynomials $p_{i,j}$ belong to $\mathbb{Z}[t]$, we call the ordered family of polynomial $\ell$-tuples $(P_1,\ldots,P_\ell)$ nice if
(i) $\deg(p_{1,1})\ge\deg(p_{1,j})$ for $2\le j\le m$;
(ii) $\deg(p_{1,1})>\deg(p_{i,j})$ for $2\le i\le\ell$, $1\le j\le m$; and
(iii) $\deg(p_{1,1}-p_{1,j})>\deg(p_{i,1}-p_{i,j})$ for $2\le i\le\ell$, $2\le j\le m$.
A nice family of polynomials with maximum degree $1$ has only one non-zero term.

Using the theory of characteristic factors, Chu, Frantzikinakis and Host showed the following results:

Theorem 2.2 ([6, Theorem 1.2]). For $\ell\in\mathbb{N}$ let $(X,\mathcal{X},\mu,T_1,\ldots,T_\ell)$ be a system, $p_1,\ldots,p_\ell\in\mathbb{Z}[t]$ be non-constant polynomials with distinct degrees and maximum degree $d$, and functions $f_1,\ldots,f_\ell\in L^\infty(\mu)$. Then there exists $k=k(\ell,d)\in\mathbb{N}$ such that if $|||f_i|||_{k,\mu,T_i}=0$ for some $1\le i\le\ell$, then the averages
$$\frac{1}{N-M}\sum_{n=M}^{N-1}T_1^{p_1(n)}f_1\cdot\ldots\cdot T_\ell^{p_\ell(n)}f_\ell$$
converge to $0$ in $L^2(\mu)$ as $N-M\to\infty$.

We will use this theorem together with Theorem 5.1 (see below) in order to prove Theorem 1.5. For the proof of Theorem 1.6, we will use again Theorem 5.1 and the analogue (in Proposition 5.4) of the following result:
Proposition 2.3 ([6, Proposition 5.1]). For $\ell\in\mathbb{N}$ let $(X,\mathcal{X},\mu,T_1,\ldots,T_\ell)$ be a system, $(P_1,\ldots,P_\ell)$ be a nice family of $\ell$-tuples of polynomials in $\mathbb{Z}[t]$ with maximum degree $d$ and $f_1,\ldots,f_m\in L^\infty(\mu)$. Then there exists $k=k(d,\ell,m)\in\mathbb{N}$ such that if $|||f_1|||_{k,\mu,T_1}=0$, then the averages
$$\frac{1}{N-M}\sum_{n=M}^{N-1}\Big(\prod_{i=1}^{\ell}T_i^{p_{i,1}(n)}\Big)f_1\cdot\ldots\cdot\Big(\prod_{i=1}^{\ell}T_i^{p_{i,m}(n)}\Big)f_m$$
converge to $0$ in $L^2(\mu)$ as $N-M\to\infty$.

Remark.
In these two results (Theorem 2.2 and Proposition 2.3), $k$ can be chosen arbitrarily large.

3. Regularity
Let $k\in\mathbb{N}$. In this section, we will prove that a sequence of the form (1) is $k$-regular. In order to do so, we will make use of a trick due to Wierdl (Theorem 3.2 below), a mean convergence result for multiple averages due to Walsh ([16]) and the following proposition, which we borrow from [8]:

Proposition 3.1 ([8]). For $k\in\mathbb{N}$ let $(\psi(n))$ be a $(k-1)$-step nilsequence. Then for every $\varepsilon>0$ there exist a system $(X,\mathcal{X},\nu,S)$ and functions $f_1,\ldots,f_k\in L^\infty(\nu)$ such that the sequence $(b(n))$, defined by
$$b(n):=\int S^{\ell_1n}f_1\cdot\ldots\cdot S^{\ell_kn}f_k\,d\nu,\quad n\in\mathbb{N},\tag{12}$$
where $\ell_i:=k!/i$ for $i=1,\ldots,k$, satisfies $\|\psi-b\|_\infty\le\varepsilon$.

In order to obtain the $k$-regularity, for any $k\in\mathbb{N}$, of a sequence $(a(n))$ as in (1), we have to check that the limit $\lim_{N-M\to\infty}\frac{1}{N-M}\sum_{n=M}^{N-1}a(n)\psi(n)$ exists for every $(k-1)$-step nilsequence $(\psi(n))$. By Proposition 3.1, it suffices to check that the limit
$$\lim_{N-M\to\infty}\frac{1}{N-M}\sum_{n=M}^{N-1}a(n)b(n)\tag{13}$$
exists for every sequence $(b(n))$ of the form $\int S^{\ell_1n}g_1\cdot\ldots\cdot S^{\ell_kn}g_k\,d\nu$, where $\ell_1,\ldots,\ell_k\in\mathbb{N}$, $(Y,\mathcal{Y},\nu,S)$ is a system, and $g_1,\ldots,g_k\in L^\infty(\nu)$.

To verify that the limit in (13) exists, we will use a trick of M. Wierdl (the result of Wierdl is for $\mathbb{R}$ measure preserving flows, see the definition below, and Cesàro averages; we will translate his proof into our setting).

We first recall the notion of the upper Banach density.

Definition.
Let $S$ be a subset of the natural numbers. We define the upper Banach density of $S$, $d^*(S)$, to be the number
$$d^*(S)=\limsup_{N-M\to\infty}\frac{|S\cap[M,N)|}{N-M}.$$
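As an example of how this notion enters below (condition (ii) of Theorem 3.2), for a real valued polynomial with an irrational coefficient the set $\{n\colon\{p(n)\}\in[1-\delta,1)\}$ has density about $\delta$, by Weyl's equidistribution theorem, while for rational coefficients it can be empty. The sketch below estimates this on a finite sample (the function name and the sample sizes are our own choices):

```python
import math

def bad_density(alpha, r, delta, N):
    """Fraction of n in {1, ..., N} with {alpha * n^r} in [1 - delta, 1)."""
    count = sum(1 for n in range(1, N + 1) if (alpha * n ** r) % 1.0 >= 1 - delta)
    return count / N

# Irrational leading coefficient: {sqrt(2) n^2} equidistributes mod 1 (Weyl),
# so the density of "bad" n is close to delta.
d = bad_density(math.sqrt(2), 2, 0.1, 20000)
print(d)  # close to 0.1

# Rational coefficients: {3n/2} is periodic mod 1 (values 0 and 1/2),
# so no n is bad for small delta.
print(bad_density(1.5, 1, 0.1, 1000))  # exactly 0.0
```

In both cases the $\limsup$ over windows in the definition of $d^*$ behaves like the plain density here, since the fractional parts are either equidistributed or periodic.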
Also, we recall the notion of a measure preserving flow.
Definition.
Let $r\in\mathbb{N}$ and $(X,\mathcal{X},\mu)$ be a probability space. We call a family $(T_t)_{t\in\mathbb{R}^r}$ of measure preserving transformations $T_t\colon X\to X$ a measure preserving flow if it satisfies $T_{s+t}=T_s\circ T_t$ for all $s,t\in\mathbb{R}^r$.

The following theorem contains the central idea for passing from results for flows to results for $\mathbb{Z}$-actions. Essential for this is the $(\ell m)$-dimensional variant of the special flow above a system under the constant ceiling function (see the proof below), first defined (for $\ell=m=1$) in [15].

Theorem 3.2 ([17]). Let $\ell,m\in\mathbb{N}$. Suppose that the sequences of real numbers $(a_{i,j}(n))$, $1\le i\le\ell$, $1\le j\le m$, satisfy the following two properties:
(i) For any $\mathbb{R}^{\ell m}$ measure preserving flow $\prod_{i=1}^{\ell}T_{i,a_{i,1}}\cdot\ldots\cdot\prod_{i=1}^{\ell}T_{i,a_{i,m}}$, where the transformations $T_{i,a_{i,j}}$ are defined on the probability space $(X,\mathcal{X},\mu)$, and functions $f_1,\ldots,f_m\in L^\infty(\mu)$, the averages
$$\frac{1}{|I_N|}\sum_{n\in I_N}\Big(\prod_{i=1}^{\ell}T_{i,a_{i,1}(n)}\Big)f_1\cdot\ldots\cdot\Big(\prod_{i=1}^{\ell}T_{i,a_{i,m}(n)}\Big)f_m$$
converge in $L^2(\mu)$ as $N\to\infty$ (where $|I_N|\to\infty$ as $N\to\infty$);
(ii) $\lim_{\delta\to0^+}d^*\big(\big\{n\colon\{a_{i,j}(n)\}\in[1-\delta,1)\big\}\big)=0$ for all $1\le i\le\ell$, $1\le j\le m$, where $\{\cdot\}$ denotes the fractional part.
Then the averages
$$\frac{1}{|I_N|}\sum_{n\in I_N}\Big(\prod_{i=1}^{\ell}T_i^{[a_{i,1}(n)]}\Big)f_1\cdot\ldots\cdot\Big(\prod_{i=1}^{\ell}T_i^{[a_{i,m}(n)]}\Big)f_m$$
also converge in $L^2(\mu)$, as $N\to\infty$, for every system $(X,\mathcal{X},\mu,T_1,\ldots,T_\ell)$ and functions $f_1,\ldots,f_m\in L^\infty(\mu)$.

Proof (we use the notation and arguments of Wierdl from [17]). For the given transformations on $X$, we define the $\mathbb{R}^{\ell m}$ action $\prod_{i=1}^{\ell}T_{i,a_{i,1}}\cdot\ldots\cdot\prod_{i=1}^{\ell}T_{i,a_{i,m}}$ on the probability space $Y=X\times[0,1)^{\ell m}$, with the measure $\nu=\mu\times\lambda^{\ell m}$ ($\lambda$ is the Lebesgue measure on $[0,1)$), by
$$\prod_{j=1}^{m}\prod_{i=1}^{\ell}T_{i,a_{i,j}}\big(x,b_{1,1},\ldots,b_{\ell,1},b_{1,2},\ldots,b_{\ell,2},\ldots,b_{1,m},\ldots,b_{\ell,m}\big)=\Big(\prod_{j=1}^{m}\prod_{i=1}^{\ell}T_i^{[a_{i,j}+b_{i,j}]}x,\{a_{1,1}+b_{1,1}\},\ldots,\{a_{\ell,1}+b_{\ell,1}\},\ldots,\{a_{1,m}+b_{1,m}\},\ldots,\{a_{\ell,m}+b_{\ell,m}\}\Big).$$
Since the transformations $T_1,\ldots,T_\ell$ are measure preserving and commute, and since we have $[x+\{y\}]+[y]=[x+y]$, it is easy to check that the above action defines a measure preserving flow on the product probability space $Y$. Note that this is nothing else than the $(\ell m)$-dimensional variant of the special flow above a system under the constant ceiling function.

For a bounded function $f$ on $X$, we define its version $\hat f$ on $Y$ by
$$\hat f(x,b_{1,1},\ldots,b_{\ell,1},b_{1,2},\ldots,b_{\ell,2},\ldots,b_{1,m},\ldots,b_{\ell,m})=f(x).$$
Note that if the $a_{i,j}$, for $1\le i\le\ell$, $1\le j\le m$, are real numbers, then
$$\prod_{j=1}^{m}\Big(\prod_{i=1}^{\ell}T_{i,a_{i,j}}\Big)\hat f_j(x,0,\ldots,0)=\prod_{j=1}^{m}\Big(\prod_{i=1}^{\ell}T_i^{[a_{i,j}]}\Big)f_j(x).$$
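The flow property $T_{s+t}=T_s\circ T_t$ of the construction above rests exactly on the identity $[x+\{y\}]+[y]=[x+y]$. The following sketch checks this numerically in the one dimensional case $\ell=m=1$, taking $T$ to be the cyclic shift on $\mathbb{Z}_{17}$ (an illustrative choice of ours):

```python
import math
import random

def frac(x):
    return x - math.floor(x)

K = 17  # T is the cyclic shift x -> x + 1 on Z_K (illustrative choice)

def flow(t, point):
    """One-dimensional special flow over the shift on Z_K under constant
    ceiling 1: T_t(x, b) = (T^{[t + b]} x, {t + b})."""
    x, b = point
    return ((x + math.floor(t + b)) % K, frac(t + b))

# The group law T_{s+t} = T_s . T_t amounts to [s + {t+b}] + [t+b] = [s+t+b].
random.seed(0)
for _ in range(1000):
    s, t = random.uniform(-50, 50), random.uniform(-50, 50)
    p = (random.randrange(K), random.random())
    lhs = flow(s + t, p)
    rhs = flow(s, flow(t, p))
    assert lhs[0] == rhs[0] and abs(lhs[1] - rhs[1]) < 1e-9
```

Restricting this flow to the fiber $b=0$ and to integer-spaced times $t=a(n)$ recovers exactly the integer part iterates $T^{[a(n)]}$, which is the point of the transference.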
We want to show that the averages
$$\frac{1}{|I_N|}\sum_{n\in I_N}\prod_{j=1}^{m}\Big(\prod_{i=1}^{\ell}T_{i,a_{i,j}(n)}\Big)\hat f_j(x,0,\ldots,0)$$
converge in $L^2(X,0,\ldots,0)$ as $N\to\infty$. Denote
$$A_N(x,b_{1,1},\ldots,b_{\ell,m})=\frac{1}{|I_N|}\sum_{n\in I_N}\prod_{j=1}^{m}\Big(\prod_{i=1}^{\ell}T_{i,a_{i,j}(n)}\Big)\hat f_j(x,b_{1,1},\ldots,b_{\ell,m})$$
and assume to the contrary that $(A_N(x,0,\ldots,0))$ is not Cauchy in $L^2(X,0,\ldots,0)$. This means that we can find a sequence $(N_k)$ going to infinity, with
$$\int_{(X,0,\ldots,0)}\big|A_{N_{k+1}}(x,0,\ldots,0)-A_{N_k}(x,0,\ldots,0)\big|^2\,d\mu>c,\quad k=1,2,\ldots,\tag{14}$$
for some positive number $c$. By the hypothesis, $(A_N(x,b_{1,1},\ldots,b_{\ell,m}))$ is Cauchy in $L^2(Y)$. By Fubini's theorem, for any given positive $\varepsilon$, we can find $(b_{1,1},\ldots,b_{\ell,m})$ arbitrarily close to $(0,\ldots,0)$ and $k\in\mathbb{N}$ so that
$$\int_{(X,b_{1,1},\ldots,b_{\ell,m})}\big|A_{N_{k+1}}(x,b_{1,1},\ldots,b_{\ell,m})-A_{N_k}(x,b_{1,1},\ldots,b_{\ell,m})\big|^2\,d\mu<\varepsilon.$$
We will show that for any given $\varepsilon>0$, there exists $\delta>0$ so that if $0\le b_{i,j}\le\delta$, for $1\le i\le\ell$, $1\le j\le m$, then, for every $x\in X$ and large enough $N$, we have
$$\big|A_N(x,b_{1,1},\ldots,b_{\ell,m})-A_N(x,0,\ldots,0)\big|<\varepsilon,\tag{15}$$
and so we will obtain the required contradiction with Relation (14).

Let $0<\delta<1$ (we will choose it later), and assume that $0\le b_{i,j}\le\delta$ for all $1\le i\le\ell$, $1\le j\le m$. In (15) we have to compare terms of the form
$$\prod_{j=1}^{m}\hat f_j\Big(\prod_{i=1}^{\ell}T_{i,a_{i,j}(n)}(x,b_{1,1},\ldots,b_{\ell,m})\Big)\quad\text{with}\quad\prod_{j=1}^{m}\hat f_j\Big(\prod_{i=1}^{\ell}T_{i,a_{i,j}(n)}(x,0,\ldots,0)\Big).$$
So, by the definition of the flow, we need to compare terms of the form
$$\hat f_1\Big(\prod_{i=1}^{\ell}T_i^{[a_{i,1}(n)+b_{i,1}]}x,\{a_{1,1}(n)+b_{1,1}\},\ldots,\{a_{\ell,1}(n)+b_{\ell,1}\},b_{1,2},\ldots,b_{\ell,m}\Big)\cdot\ldots\cdot\hat f_m\Big(\prod_{i=1}^{\ell}T_i^{[a_{i,m}(n)+b_{i,m}]}x,b_{1,1},\ldots,b_{\ell,m-1},\{a_{1,m}(n)+b_{1,m}\},\ldots,\{a_{\ell,m}(n)+b_{\ell,m}\}\Big)$$
with
$$\hat f_1\Big(\prod_{i=1}^{\ell}T_i^{[a_{i,1}(n)]}x,\{a_{1,1}(n)\},\ldots,\{a_{\ell,1}(n)\},0,\ldots,0\Big)\cdot\ldots\cdot\hat f_m\Big(\prod_{i=1}^{\ell}T_i^{[a_{i,m}(n)]}x,0,\ldots,0,\{a_{1,m}(n)\},\ldots,\{a_{\ell,m}(n)\}\Big),$$
or equivalently, by the definition of the $\hat f_j$, we need to compare
$$\prod_{j=1}^{m}f_j\Big(\prod_{i=1}^{\ell}T_i^{[a_{i,j}(n)+b_{i,j}]}x\Big)\quad\text{with}\quad\prod_{j=1}^{m}f_j\Big(\prod_{i=1}^{\ell}T_i^{[a_{i,j}(n)]}x\Big).$$
Since all the $b_{i,j}$ are less than or equal to $\delta$, if the fractional parts of all the $a_{i,j}(n)$ are less than $1-\delta$, we have that $T_i^{[a_{i,j}(n)+b_{i,j}]}=T_i^{[a_{i,j}(n)]}$ for all $1\le i\le\ell$, $1\le j\le m$. It remains to deal with those $n$'s for which the fractional part of some $a_{i,j}(n)$ is greater than or equal to $1-\delta$. By Condition (ii), the upper Banach density of these $n$'s is as small as we want, by taking $\delta$ small. All the $f_j$ are bounded, and so the combined contribution of these terms to the averages $A_N(x,b_{1,1},\ldots,b_{\ell,m})$ and $A_N(x,0,\ldots,0)$ will be as small as we want, independently of $x$. Hence, if $\delta$ is chosen sufficiently small, we get (15). $\square$

Now, by using the previous result, we will prove Theorem 1.4.
Proof of Theorem 1.4.
It suffices to show that, for $a_{i,j}=p_{i,j}$ real valued polynomials, the conditions of Theorem 3.2 are satisfied.

Using Walsh's convergence result for commuting measure preserving transformations from [16], we have Condition (i). Indeed, if, for example, $p(t)=a_rt^r+\ldots+a_1t+a_0\in\mathbb{R}[t]$, we write $T_{p(n)}=(T_{a_r})^{n^r}\cdot\ldots\cdot(T_{a_1})^{n}\cdot T_{a_0}$ and we use Walsh's result for the commuting measure preserving transformations $S_1=T_{a_1},\ldots,S_r=T_{a_r}$.

Real valued polynomials also satisfy Condition (ii) of the previous theorem. Indeed, let $p(t)=a_rt^r+\ldots+a_1t+a_0\in\mathbb{R}[t]$. If $a_i\notin\mathbb{Q}$ for some $1\le i\le r$, then we have the condition from Weyl's result, since $(p(n))$ is uniformly distributed (mod 1). If $a_i\in\mathbb{Q}$ for all $1\le i\le r$, then the sequence $(p(n))$ is periodic (mod 1) and Condition (ii) is obvious. $\square$

In order to show the $k$-regularity, it is sufficient to show that the limit (13) exists for every sequence $(b(n))$ of the form $\int S^{\ell_1n}g_1\cdot\ldots\cdot S^{\ell_rn}g_r\,d\nu$, where $r\in\mathbb{N}$ is arbitrary, $\ell_1,\ldots,\ell_r\in\mathbb{N}$, $(Y,\mathcal{Y},\nu,S)$ is a system, and $g_1,\ldots,g_r\in L^\infty(\nu)$. We want to show that the averages of
$$\int f_0\cdot\Big(\prod_{i=1}^{\ell}T_i^{[p_{i,1}(n)]}\Big)f_1\cdot\ldots\cdot\Big(\prod_{i=1}^{\ell}T_i^{[p_{i,m}(n)]}\Big)f_m\,d\mu\cdot\int S^{\ell_1n}g_1\cdot\ldots\cdot S^{\ell_rn}g_r\,d\nu$$
converge; so it suffices to show that the averages of
$$\Big(\prod_{i=1}^{\ell}T_i^{[p_{i,1}(n)]}\Big)f_1\cdot\ldots\cdot\Big(\prod_{i=1}^{\ell}T_i^{[p_{i,m}(n)]}\Big)f_m\cdot\int S^{\ell_1n}g_1\cdot\ldots\cdot S^{\ell_rn}g_r\,d\nu$$
converge in $L^2(\mu)$. We will use the conclusion of Theorem 1.4 for the $\ell+r$ commuting measure preserving transformations $T_i\times\mathrm{id}$, $i=1,\ldots,\ell$, and $\mathrm{id}\times S^{\ell_j}$, $j=1,\ldots,r$, acting on $X\times Y$ with the measure $\tilde\mu:=\mu\times\nu$, and the functions $f_i\otimes 1$, $i=1,\ldots,m$, and $1\otimes g_j$, $j=1,\ldots,r$.
By Theorem 1.4 the averages of

∏_{j=1}^m ( ∏_{i=1}^ℓ (T_i × id)^{[p_{i,j}(n)]} )(f_j ⊗ 1) · ∏_{j=1}^r ( (id × S^{ℓ_j})^n (1 ⊗ g_j) )

converge in L^2(µ × ν), and so, integrating with respect to ν, we get the required convergence.

4. Weak-Anti-Uniformity
In this section we will show that any sequence of the form (1) is k-weak-anti-uniform, for some k depending only on ℓ, m and the maximum degree of the polynomials p_{i,j}. In order to do so, we need a result (Theorem 4.1 below) that allows us to pass from known results for flows to Z-actions.

As we saw in the previous section, in order to obtain the convergence result needed for the k-regularity of sequences of the form (1), we combined an argument (Theorem 3.2) with a known convergence result for flows from [16]. We will now prove a similar result that allows us to do the analogous thing in order to obtain the weak-anti-uniformity. The known result for (some particular) flows in this case is the anti-uniformity that we obtain from [8]. More specifically, we will show that the k-anti-uniformity of (some particular) flows gives the k-weak-anti-uniformity of sequences of the form (1) (for the same k).

Theorem 4.1.
Let ℓ, m ∈ N. Suppose that the sequences of real numbers (a_{i,j}(n)) satisfy the following two properties:

(i) For any R^{ℓm} measure preserving flow ∏_{i=1}^ℓ T_{i,a_{i,1}} · ... · ∏_{i=1}^ℓ T_{i,a_{i,m}}, where the transformations T_{i,a_{i,j}} are defined on the probability space (X, 𝒳, µ), and any functions f_0, f_1, ..., f_m ∈ L^∞(µ), the sequence

ã(n) = ∫ f_0 · ( ∏_{i=1}^ℓ T_{i,a_{i,1}(n)} ) f_1 · ... · ( ∏_{i=1}^ℓ T_{i,a_{i,m}(n)} ) f_m dµ

is k-anti-uniform, for some k depending only on ℓ, m and the a_{i,j};

(ii) lim_{δ→0^+} d*( { n : {a_{i,j}(n)} ∈ [1−δ, 1) } ) = 0 for all 1 ≤ i ≤ ℓ, 1 ≤ j ≤ m, where {·} denotes the fractional part.

Then the sequence

a(n) = ∫ f_0 · ( ∏_{i=1}^ℓ T_i^{[a_{i,1}(n)]} ) f_1 · ... · ( ∏_{i=1}^ℓ T_i^{[a_{i,m}(n)]} ) f_m dµ

is k-weak-anti-uniform for every system (X, 𝒳, µ, T_1, ..., T_ℓ) and all functions f_0, f_1, ..., f_m ∈ L^∞(µ).

Proof.
Let 0 < δ < 1. We define the same R^{ℓm} action on Y = X × [0,1)^{ℓm} as in the proof of Theorem 3.2. If f_1, ..., f_m are bounded functions on X, for every (b_{1,1}, ..., b_{ℓ,1}, b_{1,2}, ..., b_{ℓ,2}, ..., b_{1,m}, ..., b_{ℓ,m}) ∈ [0,1)^{ℓm} we define the Y-extensions

f̂_j(x, b_{1,1}, ..., b_{ℓ,m}) = f_j(x), 1 ≤ j ≤ m; and f̂_0(x, b_{1,1}, ..., b_{ℓ,m}) = f_0(x) · 1_{[0,δ]^{ℓm}}(b_{1,1}, ..., b_{ℓ,m}).

Then we have

| δ^{ℓm} a(n) − ã(n) | = | ∫_{[0,δ]^{ℓm}} ∫_X f_0(x) · ( ∏_{j=1}^m f_j( ∏_{i=1}^ℓ T_i^{[a_{i,j}(n)]} x ) − ∏_{j=1}^m f_j( ∏_{i=1}^ℓ T_i^{[a_{i,j}(n)+b_{i,j}]} x ) ) dµ dλ^{ℓm} |.

Since all the relevant b_{i,j} in the integrand are less than or equal to δ, if the fractional part of every a_{i,j}(n) is less than 1 − δ, we have T_i^{[a_{i,j}(n)+b_{i,j}]} = T_i^{[a_{i,j}(n)]} for all 1 ≤ i ≤ ℓ, 1 ≤ j ≤ m. If the fractional part of some a_{i,j}(n) is greater than or equal to 1 − δ, then, by Condition (ii), the upper Banach density of these n is as small as we want by taking δ small. All the f_j are bounded, so the combined density of these terms in the averages is as small as we want, uniformly in x. Hence, taking averages and using the triangle inequality, for any (b(n)) ∈ ℓ^∞(N) and N > M we have

| (1/(N−M)) Σ_{n=M}^{N−1} a(n) b(n) | ≤ δ^{−ℓm} | (1/(N−M)) Σ_{n=M}^{N−1} ã(n) b(n) | + c_δ,

where c_δ → 0 as δ → 0^+. Then, if C is the constant that we get from the k-anti-uniformity of ã, we have

limsup_{N−M→∞} | (1/(N−M)) Σ_{n=M}^{N−1} a(n) b(n) | ≤ C δ^{−ℓm} ||b||_{U_k(N)} + c_δ,

and so we have the result. □

Remark.
As we showed in the proof of Theorem 1.4, if every a_{i,j} is a polynomial p_{i,j} ∈ R[t], then Condition (ii) of Theorem 4.1 is satisfied. In order to get Condition (i), as described by Theorem 1.2 in [8], for the corresponding sequences, we have to successively make use of Lemma 6.1 (using the van der Corput operation, choosing each time appropriate polynomials in order to reduce the complexity), stated in Section 6 below. k can be chosen to be equal to d + 1, where d is the number of steps needed for our polynomials to be reduced to constant ones via the PET induction. This d, and so k as well, depends only on ℓ, m and the maximum degree of the polynomials p_{i,j}. For more information and details on the van der Corput operation and the PET induction scheme we use here, we refer the reader to [9].

So, using the previous remark and Theorem 4.1, we have that every sequence (a(n)) of the form (1) is k-weak-anti-uniform, for some positive integer k = k(ℓ, m, max deg(p_{i,j})).

5. Convergence
In this section we present the ingredients needed to prove Theorem 1.5 (which we prove in Section 6) and Theorem 1.6 (which we prove below). More specifically, in order to derive these results, we prove Theorem 5.1 below, which is yet another tool for obtaining results for Z-actions via known results for flows. In particular, Theorem 1.5 will follow from Theorem 5.1 and the analogous result from [6] (Theorem 1.2), while Theorem 1.6, which is the analogue of Corollary 5.2 from [6], is an implication of Proposition 5.4, which in turn follows from Theorem 5.1 and a result from [6] ([6, Proposition 5.1]).

For ℓ, m ∈ N and a system (X, 𝒳, µ, T_1, ..., T_ℓ), recall the definition of the R^{ℓm} measure preserving flow ∏_{i=1}^ℓ T_{i,a_{i,1}} · ... · ∏_{i=1}^ℓ T_{i,a_{i,m}} on the space Y = X × [0,1)^{ℓm} that we defined in the proof of Theorem 3.2. Also, if f_1, ..., f_m ∈ L^∞(µ), for every (b_{1,1}, ..., b_{ℓ,1}, b_{1,2}, ..., b_{ℓ,m}) ∈ [0,1)^{ℓm} let f̂_j(x, b_{1,1}, ..., b_{ℓ,m}) = f_j(x), 1 ≤ j ≤ m, be the Y-extensions of the f_j. Following this notation, we have:
Theorem 5.1.
Let ℓ, m ∈ N, let (X, 𝒳, µ, T_1, ..., T_ℓ) be a system, f_1, ..., f_m ∈ L^∞(µ), and let (a_{i,j}(n)), 1 ≤ i ≤ ℓ, 1 ≤ j ≤ m, be sequences of real numbers that satisfy the following:

(i) For the R^{ℓm} action ∏_{i=1}^ℓ T_{i,a_{i,1}} · ... · ∏_{i=1}^ℓ T_{i,a_{i,m}} on the space Y, endowed with the probability measure ν = µ × λ^{ℓm}, and the extensions f̂_1, ..., f̂_m ∈ L^∞(ν), we have

lim_{N−M→∞} || (1/(N−M)) Σ_{n=M}^{N−1} ( ∏_{i=1}^ℓ T_{i,a_{i,1}(n)} ) f̂_1 · ... · ( ∏_{i=1}^ℓ T_{i,a_{i,m}(n)} ) f̂_m ||_{L^2(ν)} = 0;

(ii) lim_{δ→0^+} d*( { n : {a_{i,j}(n)} ∈ [1−δ, 1) } ) = 0, for all 1 ≤ i ≤ ℓ, 1 ≤ j ≤ m.

Then we have

lim_{N−M→∞} || (1/(N−M)) Σ_{n=M}^{N−1} ( ∏_{i=1}^ℓ T_i^{[a_{i,1}(n)]} ) f_1 · ... · ( ∏_{i=1}^ℓ T_i^{[a_{i,m}(n)]} ) f_m ||_{L^2(µ)} = 0.

Proof.
Let 0 < δ < 1 and define the function f̂_0 on Y by

f̂_0(x, b_{1,1}, ..., b_{ℓ,1}, b_{1,2}, ..., b_{ℓ,m}) = 1_{[0,δ]^{ℓm}}(b_{1,1}, ..., b_{ℓ,m}).

If

ã(n) = f̂_0 · ( ∏_{i=1}^ℓ T_{i,a_{i,1}(n)} ) f̂_1 · ... · ( ∏_{i=1}^ℓ T_{i,a_{i,m}(n)} ) f̂_m,

for every x ∈ X we define

a′(n)(x) = ∫_{[0,1)^{ℓm}} ã(n)(x, b_{1,1}, ..., b_{ℓ,m}) dλ^{ℓm},

where the integration is with respect to the variables b_{i,j}. Then, if

a(n) := ( ∏_{i=1}^ℓ T_i^{[a_{i,1}(n)]} ) f_1 · ... · ( ∏_{i=1}^ℓ T_i^{[a_{i,m}(n)]} ) f_m,

for every x ∈ X we have

| δ^{ℓm} a(n)(x) − a′(n)(x) | = | ∫_{[0,δ]^{ℓm}} ( ∏_{j=1}^m f_j( ∏_{i=1}^ℓ T_i^{[a_{i,j}(n)]} x ) − ∏_{j=1}^m f_j( ∏_{i=1}^ℓ T_i^{[a_{i,j}(n)+b_{i,j}]} x ) ) dλ^{ℓm} |.

Since all the relevant b_{i,j} in the integrand are less than or equal to δ, if the fractional part of every a_{i,j}(n) is less than 1 − δ, we have T_i^{[a_{i,j}(n)+b_{i,j}]} = T_i^{[a_{i,j}(n)]} for all 1 ≤ i ≤ ℓ, 1 ≤ j ≤ m. It remains to deal with the case where the fractional part of some a_{i,j}(n) is greater than or equal to 1 − δ. For every 1 ≤ i ≤ ℓ, 1 ≤ j ≤ m, let E_δ^{i,j} := { n ∈ N : {a_{i,j}(n)} ∈ [1−δ, 1) }. Then, using the facts that

1_{E_δ^{1,1} ∪ ... ∪ E_δ^{ℓ,m}} ≤ Σ_{(i,j) ∈ [1,ℓ]×[1,m]} 1_{E_δ^{i,j}}

and 1_{E_δ^{i,j}}(n) = 1_{[1−δ,1)}({a_{i,j}(n)}) for 1 ≤ i ≤ ℓ, 1 ≤ j ≤ m, n ∈ N, we have

δ^{−ℓm} || (1/(N−M)) Σ_{n=M}^{N−1} ( δ^{ℓm} a(n) − a′(n) ) ||_{L^2(µ)} ≤ 2 Σ_{(i,j) ∈ [1,ℓ]×[1,m]} | (1/(N−M)) Σ_{n=M}^{N−1} 1_{[1−δ,1)}({a_{i,j}(n)}) |,

where

| (1/(N−M)) Σ_{n=M}^{N−1} 1_{[1−δ,1)}({a_{i,j}(n)}) | = |E_δ^{i,j} ∩ [M, N)| / (N − M).
Using Condition (ii), we have that for small enough δ the term |E_δ^{i,j} ∩ [M, N)| / (N − M) (and the sum of finitely many terms of this form) is as small as we want. Since

δ^{ℓm} || (1/(N−M)) Σ_{n=M}^{N−1} a(n) ||_{L^2(µ)} ≤ || (1/(N−M)) Σ_{n=M}^{N−1} ( δ^{ℓm} a(n) − a′(n) ) ||_{L^2(µ)} + || (1/(N−M)) Σ_{n=M}^{N−1} ã(n) ||_{L^2(ν)},

we have that

|| (1/(N−M)) Σ_{n=M}^{N−1} a(n) ||_{L^2(µ)} ≤ c_δ + δ^{−ℓm} || (1/(N−M)) Σ_{n=M}^{N−1} ã(n) ||_{L^2(ν)},

where c_δ → 0 as δ → 0^+. We first take the limsup as N − M → ∞, so that the second term on the right-hand side vanishes by Condition (i), and then let δ → 0^+ to get the result. □

We will also need the following elementary estimate (for simplicity, we write U-limsup_{n_1,...,n_k} E_{n_1,...,n_k} instead of limsup_{N−M→∞} (1/(N−M)) Σ_{n_1=M}^{N−1} ... limsup_{N−M→∞} (1/(N−M)) Σ_{n_k=M}^{N−1}).

Lemma 5.2.
Let k ∈ N and s ∈ (0, +∞). For any sequence (a(n_1, ..., n_k)) of non-negative real numbers we have

U-limsup_{n_1,...,n_k} E_{n_1,...,n_k} a([n_1 s], ..., [n_k s]) ≤ s^k ([1/s] + 1)^k U-limsup_{n_1,...,n_k} E_{n_1,...,n_k} a(n_1, ..., n_k).
Proof.
For k = 1 we have

(1/(N−M)) Σ_{n=M}^{N−1} a([ns]) ≤ ([1/s] + 1) (1/(N−M)) Σ_{n=[Ms]}^{[(N−1)s]} a(n) = ([1/s] + 1) · (([(N−1)s] − [Ms]) / (N−M)) · (1/([(N−1)s] − [Ms])) Σ_{n=[Ms]}^{[(N−1)s]} a(n).

Since lim_{N−M→∞} ([(N−1)s] − [Ms]) / (N−M) = s, taking the limsup as N − M → ∞ gives the required relation. The general case k > 1 follows analogously by induction. □

For ℓ, m ∈ N, a system (X, 𝒳, µ, T_1, ..., T_ℓ) and f_1, ..., f_m ∈ L^∞(µ), recall the action ∏_{i=1}^ℓ T_{i,a_{i,1}} · ... · ∏_{i=1}^ℓ T_{i,a_{i,m}} that we defined in the proofs of Theorems 3.2, 4.1 and 5.1 and the Y-extensions f̂_j of the f_j, where Y = X × [0,1)^{ℓm} is endowed with the probability measure ν = µ × λ^{ℓm}. By the definition of the action, the first coordinate of T_{i_0,a_{i_0,j_0}} evaluated at the point (x, b_{1,1}, ..., b_{ℓ,1}, b_{1,2}, ..., b_{ℓ,m}) ∈ Y is T_{i_0}^{[a_{i_0,j_0} + b_{i_0,j_0}]} x, the ((j_0 − 1)ℓ + i_0 + 1)-th coordinate is equal to {a_{i_0,j_0} + b_{i_0,j_0}}, while in all the other coordinates we have the identity map, mapping b_{i,j} to itself.

So, without loss of generality, in order to study the transformations T_{i,s}, s ∈ R, which we will essentially use in the proofs of Theorems 1.5 and 1.6, we restrict our study to the case ℓ = m = 1, studying the transformation S = T_s, where s ∈ (0, +∞) (in the case where s < 0, we set S = T_{−s}^{−1}). Hence, we study the transformation S(x, b) = (T^{[s+b]} x, {s+b}). Making use of the relations [s + {s′}] + [s′] = [s + s′] and {s + {s′}} = {s + s′}, we have that S^n(x, b) = (T^{[ns+b]} x, {ns+b}), and so S^n f̂(x, b) = T^{[ns+b]} f(x).

The next important lemma gives a relation between |||f̂|||_{k,ν,S} and |||f|||_{k,µ,T} (recall the definitions and remarks from Subsection 2.2).

Lemma 5.3.
With the previous terminology, for any k ∈ N there exists a constant c = c(k, s) such that |||f̂|||_{k,ν,S} ≤ c |||f|||_{k+1,µ,T}.

Proof. Let c_k = (k+1)^k, c_{k,s} = c_k · s^k ([1/s] + 1)^k, F = f ⊗ f̄ and R = T × T. By the definition of the seminorm |||·|||_k, by Lemma 5.2, the Cauchy-Schwarz inequality and the remarks in Subsection 2.2, we have (writing U-lim_{n_1,...,n_k} E_{n_1,...,n_k} for the iterated limit, analogously to the U-limsup notation of Lemma 5.2)

|||f̂|||_{k,ν,S}^{2^k} = U-lim_{n_1,...,n_k} E_{n_1,...,n_k} ∫ ∏_{ε∈{0,1}^k} C^{|ε|} S^{ε·n} f̂ dν
≤ U-limsup_{n_1,...,n_k} E_{n_1,...,n_k} | ∫ ∏_{ε∈{0,1}^k} C^{|ε|} T^{[(ε·n)s + b]} f dν |
≤ c_k max_{e_ε ∈ {0,...,k}} U-limsup_{n_1,...,n_k} E_{n_1,...,n_k} | ∫ ∏_{ε∈{0,1}^k} C^{|ε|} T^{Σ_{i=1}^k [ε_i n_i s] + e_ε} f dµ |
≤ c_{k,s} max_{e_ε ∈ {0,...,k}} U-limsup_{n_1,...,n_k} E_{n_1,...,n_k} | ∫ ∏_{ε∈{0,1}^k} C^{|ε|} T^{ε·n + e_ε} f dµ |
≤ c_{k,s} max_{e_ε ∈ {0,...,k}} ( U-limsup_{n_1,...,n_k} E_{n_1,...,n_k} | ∫ ∏_{ε∈{0,1}^k} C^{|ε|} T^{ε·n + e_ε} f dµ |^2 )^{1/2}
= c_{k,s} max_{e_ε ∈ {0,...,k}} ( U-limsup_{n_1,...,n_k} E_{n_1,...,n_k} ∫ ∏_{ε∈{0,1}^k} C^{|ε|} R^{ε·n + e_ε} F d(µ × µ) )^{1/2};

using Relation (10) from [11] and Condition (1) of Lemma 3.9 from [11], this last term is bounded by

c_{k,s} max_{e_ε ∈ {0,...,k}} ( ∏_{ε∈{0,1}^k} ||| R^{e_ε} F |||_{k,µ×µ,R} )^{1/2} = c_{k,s} ( |||F|||_{k,µ×µ,R}^{2^k} )^{1/2} ≤ c_{k,s} |||f|||_{k+1,µ,T}^{2^k},

where we have used the fact that |||F|||_{k,µ×µ,R} = |||f ⊗ f̄|||_{k,µ×µ,T×T} ≤ |||f|||_{k+1,µ,T}^2. □

In order to state Proposition 5.4 below, which is the analogue of Proposition 2.3, we need the following notation. Let (X, 𝒳, µ) be a probability space.
If (T_t)_{t∈R} is a measure preserving flow and p(t) = a_r t^r + ... + a_1 t + a_0 ∈ R[t], we write

(16) T_{p(t)} = T_{a_r}^{t^r} · ... · T_{a_1}^{t} · T_{a_0}.

We assign to the polynomial p(t) an (r+1)-tuple of polynomials ~p = (p_r(t), ..., p_0(t)), where we put p_i(t) = t^i if a_i ≠ 0 and p_i(t) = 0 otherwise.

We are now ready to define the notion of an R-nice family.

Definition.
Let ℓ, m ∈ N. The family of m ℓ-tuples of polynomials of maximum degree d in R[t]

(P_1, ..., P_m) = ((p_{1,1}, ..., p_{ℓ,1}), ..., (p_{1,m}, ..., p_{ℓ,m}))

is called an R-nice family if the respective family of m (d+1)ℓ-tuples

(~P_1, ..., ~P_m) := ((~p_{1,1}, ..., ~p_{ℓ,1}), ..., (~p_{1,m}, ..., ~p_{ℓ,m})),

where we complete the coordinates with zeros from the right so that every vector ~p_{i,j} has d + 1 coordinates, is a nice family (in Z[t]).
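As a concrete illustration of the flow notation (16) and of the tuple assignment just defined (the polynomial below is our own example, not one taken from the paper): for p(t) = 3t^2 − t we have a_2 = 3, a_1 = −1, a_0 = 0, and so

```latex
T_{p(t)} \;=\; T_{3}^{\,t^{2}}\, T_{-1}^{\,t}, \qquad
\vec{p} \;=\; \bigl(p_2(t),\, p_1(t),\, p_0(t)\bigr) \;=\; \bigl(t^{2},\; t,\; 0\bigr).
```

The nonzero coefficients a_2 and a_1 contribute t^2 and t respectively, while a_0 = 0 contributes the zero polynomial; if the maximum degree of the family were d = 3, the vector ~p would be completed with one more zero from the right to reach d + 1 = 4 coordinates.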
We will now prove a proposition which implies Theorem 1.6.
Proposition 5.4.
Let ℓ, m ∈ N, let (X, 𝒳, µ, T_1, ..., T_ℓ) be a system, let (P_1, ..., P_m) be an R-nice family of ℓ-tuples of polynomials in R[t] with maximum degree d, and let f_1, ..., f_m ∈ L^∞(µ). Then there exists k = k(d, ℓ, m) ∈ N such that if |||f_1|||_{k,µ,T_1} = 0, then the averages

(1/(N−M)) Σ_{n=M}^{N−1} ( ∏_{i=1}^ℓ T_i^{[p_{i,1}(n)]} ) f_1 · ... · ( ∏_{i=1}^ℓ T_i^{[p_{i,m}(n)]} ) f_m

converge to 0 in L^2(µ) as N − M → ∞.

Proof.
We have to show, for the action ∏_{i=1}^ℓ T_{i,a_{i,1}} · ... · ∏_{i=1}^ℓ T_{i,a_{i,m}}, where a_{i,j} = p_{i,j}, and the Y-extensions f̂_j of the f_j, all defined in the proof of Theorem 3.2, that Condition (i) of Theorem 5.1 holds; this gives the result, since we have already seen, in the proof of Theorem 1.4, that Condition (ii) holds for real polynomials. According to the hypothesis, there exists k = k(d, ℓ, m) ∈ N such that |||f_1|||_{k,µ,T_1} = 0. Using Lemma 5.3, we have that |||f̂_1|||_{k−1,ν,S} = 0, where, if a_{1,r} is the leading coefficient of p_{1,1}, S can be chosen to be equal to T_{1,|a_{1,r}|}. Writing each T_{i,p_{i,j}} as in (16) and using the fact that (P_1, ..., P_m) is an R-nice family in R[t], we get the respective nice family (~P_1, ..., ~P_m) in Z[t]. Condition (i) of Theorem 5.1 now follows from Proposition 2.3. □
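Condition (ii) for real valued polynomials, invoked above via the proof of Theorem 1.4, can also be checked numerically through Weyl equidistribution. A minimal sketch; the polynomial p, the helper name, and the use of a natural density over an initial segment (rather than the upper Banach density) are our own illustrative choices:

```python
import math

def density_near_one(p, delta, N):
    """Fraction of n in {1, ..., N} whose fractional part {p(n)}
    lands in the bad window [1 - delta, 1)."""
    hits = sum(1 for n in range(1, N + 1) if (p(n) % 1.0) >= 1.0 - delta)
    return hits / N

# p(n) = sqrt(2) n^2 + n/3 has an irrational non-constant coefficient,
# so (p(n)) is equidistributed (mod 1) by Weyl's theorem, and the
# density of the bad window should be comparable to delta.
p = lambda n: math.sqrt(2) * n * n + n / 3.0
for delta in (0.1, 0.01):
    d = density_near_one(p, delta, 100_000)
    assert 0.3 * delta < d < 3.0 * delta  # shrinks with delta, as Condition (ii) requires
```

For a polynomial with all coefficients rational the sequence {p(n)} is periodic, so the window [1 − δ, 1) is simply missed for small δ, which is the "obvious" case mentioned in the proof of Theorem 1.4.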
Theorem 1.6 now follows from Proposition 5.4, exactly as Corollary 5.2 in [6] follows from Proposition 5.1 in [6].
Proof of Theorem 1.6.
We apply Proposition 5.4 to the family (P_1, ..., P_ℓ), where P_1 = (p_1, 0, ..., 0), P_2 = (0, p_2, 0, ..., 0), ..., P_ℓ = (0, ..., 0, p_ℓ), which is an R-nice family. □

Before proving Corollary 1.7 via Theorem 1.6 and closing this section, we recall the notion of a weakly mixing system.
Definition ([10]). Let (X, 𝒳, µ, T) be a system. If for any two functions f, g ∈ L^2(µ) we have

lim_{N→∞} (1/N) Σ_{n=1}^N | ∫ f · T^n g dµ − ∫ f dµ ∫ g dµ | = 0,

then T is called a weakly mixing transformation and (X, 𝒳, µ, T) a weakly mixing system.

It is an immediate consequence of the definition of the seminorms |||·|||_k that, for a weakly mixing system (X, 𝒳, µ, T), we have |||f|||_{k,µ,T} = | ∫ f dµ | for every k ∈ N.

Proof of Corollary 1.7.
We can assume that the integral of some f_i is 0, by using the elementary identity from [10]

∏_{i=1}^k a_i − ∏_{i=1}^k b_i = Σ_{j=1}^k ( ∏_{i=1}^{j−1} a_i ) (a_j − b_j) ( ∏_{i=j+1}^k b_i ),

where we have set ∏_{i=1}^0 a_i = ∏_{i=k+1}^k b_i = 1. We can actually assume that ∫ f_1 dµ = 0. Then |||f_1|||_{k,µ,T_1} = 0 for all k. The result now follows from Theorem 1.6. □

6. Proof of main results
In this last section we give the proofs of Theorems 1.1, 1.2, 1.3 and 1.5.

Theorem 1.1 will follow from Theorem 2.1, since, as we have already seen, every sequence of the form (1) is k-regular (for all k) and k-weak-anti-uniform, for some k depending on the integers ℓ, m and the maximum degree of the polynomials p_{i,j}. Theorem 1.2 will follow from Theorems 1.4 and 4.1, using also results from [8]. Theorem 1.3 will follow from Theorem 1.2 and the analogous result, Theorem 1.4 of [8]. Finally, Theorem 1.5 will follow from Theorem 5.1 via Theorem 2.2 and Lemma 5.3.
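Several of the arguments collected here (Lemma 5.3 and the proofs of Theorems 1.5 and 1.6) rest on the identity S^n(x, b) = (T^{[ns+b]} x, {ns+b}) for the skew product S(x, b) = (T^{[s+b]} x, {s+b}) of Section 5. The following minimal sketch checks it numerically, taking for T the shift x ↦ x + 1 on Z (a toy choice of ours, so that powers of T are visible as integer translations); the values of s and b are arbitrary illustrations:

```python
import math

def skew_step(x, b, s):
    """One application of S(x, b) = (T^[s+b] x, {s+b}), where T x = x + 1
    on the integers, so that T^k x = x + k."""
    return x + math.floor(s + b), (s + b) % 1.0

s, b0, x0 = math.sqrt(3), 0.3, 0
x, b = x0, b0
for n in range(1, 41):
    x, b = skew_step(x, b, s)
    # closed form: S^n(x0, b0) = (T^[n s + b0] x0, {n s + b0})
    assert x == x0 + math.floor(n * s + b0)
    assert abs(b - ((n * s + b0) % 1.0)) < 1e-9
```

The check implicitly uses the relations [s + {s′}] + [s′] = [s + s′] and {s + {s′}} = {s + s′} invoked in the text.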
Proof of Theorem 1.1.
Since any sequence (a(n)) of the form (1) is k-weak-anti-uniform and k-regular, for some k depending on the integers ℓ, m and the maximum degree of the polynomials p_{i,j}, as we showed at the end of Sections 3 and 4, we can apply Theorem 2.1 to get the conclusion. □

In order to prove Theorem 1.2, we will make use of the following Hilbert-space variant of van der Corput's estimate (see [8]).
Lemma 6.1.
Let (v_n) be a bounded sequence of vectors in an inner product space and (I_N) a sequence of intervals with lengths tending to infinity. Then

limsup_{N→∞} || (1/|I_N|) Σ_{n∈I_N} v_n ||^2 ≤ 4 limsup_{H→∞} (1/H) Σ_{h=1}^H limsup_{N→∞} | (1/|I_N|) Σ_{n∈I_N} ⟨v_{n+h}, v_n⟩ |.

Proof of Theorem 1.2.
According to Theorem 2.1, in order to get the conclusion, we have to show that the sequence ∫ f_0 · ( ∏_{i=1}^ℓ T_i^{[a_{i,1} n]} ) f_1 · ... · ( ∏_{i=1}^ℓ T_i^{[a_{i,m} n]} ) f_m dµ is (m+1)-regular and (m+1)-weak-anti-uniform.

The (m+1)-regularity follows from Section 3, making use of Proposition 3.1 and the convergence result we get from Theorem 1.4.

According to Theorem 4.1, since, as we saw in the proof of Theorem 1.4, real valued polynomials satisfy Condition (ii) of that result, if S_{i,j} = T_{i,a_{i,j}} for every i, j, we have to prove that the sequence ã(n) = ∫ f_0 · ( ∏_{i=1}^ℓ S_{i,1}^n ) f_1 · ... · ( ∏_{i=1}^ℓ S_{i,m}^n ) f_m dµ is (m+1)-anti-uniform. This follows from Subsection 2.3.1 of [8] by induction, successively applying Lemma 6.1 and using the van der Corput operation (at most) m times in order to reduce the linear polynomial iterates that appear in ã(n) to constant ones. □

Proof of Theorem 1.3.
From the definitions, it is immediate that B_k^{‖·‖} ⊆ C_k^{‖·‖}. From Theorem 1.2, we also have (for ℓ = m = k) that C_k^{‖·‖} ⊆ A_k^{‖·‖}. The result now follows, since A_k^{‖·‖} = B_k^{‖·‖} (by Theorem 1.4 in [8]). □
We will close this article with the proof of Theorem 1.5.
Proof of Theorem 1.5.
Analogously to Proposition 5.4, we have to show, for the action T_{1,a_1 n^{r_1}} · ... · T_{ℓ,a_ℓ n^{r_ℓ}} (we have set a_{i,i} = a_i n^{r_i} and a_{i,j} = 0 for all i ≠ j) and the Y-extensions f̂_j of the f_j, all defined in the proof of Theorem 3.2, that Condition (i) of Theorem 5.1 holds; this gives the result, since we have already seen, in the proof of Theorem 1.4, that Condition (ii) holds for real polynomials. According to the hypothesis, there exists k = k(d, max r_i) ∈ N such that |||f_i|||_{k,µ,T_i} = 0 for some 1 ≤ i ≤ ℓ. Using Lemma 5.3, we have that |||f̂_i|||_{k−1,ν,S_i} = 0, where S_i can be chosen to be equal to T_{i,|a_i|}. Since T_{i,a_i n^{r_i}} = T_{i,a_i}^{n^{r_i}} and all the r_i are pairwise distinct, Condition (i) of Theorem 5.1 follows from Theorem 2.2. □

References

[1] V. Bergelson. Weakly mixing PET.
Ergodic Theory Dynam. Systems 7 (1987), no. 3, 337–349.
[2] V. Bergelson, B. Host, B. Kra, with an appendix by I. Ruzsa. Multiple recurrence and nilsequences. Invent. Math. 160 (2005), no. 2, 261–303.
[3] V. Bergelson, A. Leibman. Distribution of values of bounded generalized polynomials. Acta Math. 198 (2007), 155–230.
[4] V. Bergelson, R. McCutcheon. Idempotent ultrafilters, multiple weak mixing and Szemerédi's theorem for generalized polynomials. J. Analyse Math. (2010), 77–130.
[5] M. Boshernitzan, R. L. Jones, M. Wierdl. Integer and fractional parts of good averaging sequences in ergodic theory. Convergence in Ergodic Theory and Probability (Caribbean Studies), 1996.
[6] Q. Chu, N. Frantzikinakis, B. Host. Ergodic averages of commuting transformations with distinct degree polynomial iterates. Proc. London Math. Soc. (3) 102 (2011), 801–842.
[7] N. Frantzikinakis. Some open problems on multiple ergodic averages. arXiv:1103.3808.
[8] N. Frantzikinakis. Multiple correlation sequences and nilsequences. Invent. Math. 202 (2015), no. 2, 875–892.
[9] N. Frantzikinakis, B. Host, B. Kra. The polynomial multidimensional Szemerédi theorem along shifted primes. Israel J. Math. (2013), no. 1, 331–348.
[10] H. Furstenberg, Y. Katznelson, D. Ornstein. The ergodic theoretical proof of Szemerédi's theorem. Bull. Amer. Math. Soc. 7 (1982), 527–552.
[11] B. Host, B. Kra. Nonconventional ergodic averages and nilmanifolds. Annals of Math. 161 (2005), no. 1, 397–488.
[12] B. Host, B. Kra. Uniformity seminorms on ℓ^∞ and applications. J. Analyse Math. 108 (2009), 219–276.
[13] A. Leibman. Multiple polynomial sequences and nilsequences. Ergodic Theory Dynam. Systems 30 (2010), no. 3, 841–854.
[14] A. Leibman. Nilsequences, null-sequences, and multiple correlation sequences. Ergodic Theory Dynam. Systems 35 (2015), no. 1, 176–191.
[15] E. Lesigne. On the sequence of integer parts of a good sequence for the ergodic theorem. Comment. Math. Univ. Carolin. 36 (1995), no. 4, 737–743.
[16] M. Walsh. Norm convergence of nilpotent ergodic averages. Annals of Mathematics (2) 175 (2012), no. 3, 1667–1688.
[17] M. Wierdl. Personal communication.

(Andreas Koutsogiannis)
The Ohio State University, Department of Mathematics, Columbus, Ohio, USA
E-mail address: