Conditional Square Functions, the Sine-Cosine Decomposition for Hardy Martingales and Dyadic Perturbation
Paul F. X. Müller∗

October 16, 2016
Abstract
We prove that the P-norm estimates between a Hardy martingale and its cosine part are stable under dyadic perturbations.

AMS Subject Classification 2000:
Key-words:
Hardy Martingales, Martingale Inequalities, Embedding.
1 Introduction

Hardy martingales developed alongside Banach spaces of analytic functions and played an important role in establishing their isomorphic invariants. For instance, those martingales were employed in the construction of subspaces of $L^1/H^1$ isomorphic to $L^1$. An integrable Hardy martingale $F = (F_k)$ satisfies the $L^1$ estimate
$$\big\| \sup_k |F_k| \big\|_{L^1} \le e \, \sup_k \| F_k \|_{L^1},$$
and it may be decomposed into the sum of Hardy martingales $F = G + B$ such that
$$\Big\| \Big( \sum_k \mathbb{E}_{k-1} |\Delta_k G|^2 \Big)^{1/2} \Big\|_{L^1} + \sum_k \| \Delta_k B \|_{L^1} \le C \, \| F \|_{L^1}.$$
The martingale $G$ obtained in this decomposition satisfies moreover
$$\Big\| \Big( \sum_k \mathbb{E}_{k-1} |\Delta_k G|^2 \Big)^{1/2} \Big\|_{L^1} \le C \, \Big\| \Big( \sum_k \mathbb{E}_{k-1} |\Im( w_{k-1} \Delta_k G )|^2 \Big)^{1/2} \Big\|_{L^1}$$
for every adapted sequence $(w_k)$ satisfying $|w_k| \ge 1/C$. A proof of Bourgain's theorem that $L^1$ embeds into $L^1/H^1$ may be obtained in the following way:

1. Use as starting point the estimates of the Garnett–Jones theorem.
2. Prove stability under dyadic perturbation for the Davis and Garsia inequalities (DGI).
3. Prove stability under dyadic perturbation of the martingale transform estimates.

We determined the extent to which the DGI are stable under dyadic perturbation, and we showed how the above strategy actually gives an isomorphism from $L^1$ into a subspace of $L^1/H^1$. In the present paper we turn to the martingale transform estimates and verify that they are indeed stable under dyadic perturbations.

∗ Supported by the Austrian Science Foundation (FWF) Pr.Nr. FWFP28352-N32.
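The $L^1$ maximal estimate above can be illustrated numerically. The sketch below is only an illustration, not part of the argument: it assumes numpy, and it uses the standard product example of a Hardy martingale, $F_k = \prod_{j \le k} (1 + a e^{i\theta_j})$ with independent uniform angles $\theta_j$, whose conditional increments $F_{k-1} \, a \, e^{i\theta_k}$ are analytic in $\theta_k$ with vanishing mean.

```python
import numpy as np

# Monte Carlo check of  || sup_k |F_k| ||_1 <= e * sup_k ||F_k||_1  for the
# toy Hardy martingale F_k = prod_{j<=k} (1 + a*exp(i*theta_j)) on T^N.
rng = np.random.default_rng(2)
n, samples, a = 8, 20000, 0.5
theta = rng.uniform(0.0, 2.0 * np.pi, size=(samples, n))
F = np.cumprod(1.0 + a * np.exp(1j * theta), axis=1)   # F_1, ..., F_n per sample
lhs = np.abs(F).max(axis=1).mean()                     # estimate of E sup_k |F_k|
rhs = np.e * np.abs(F).mean(axis=0).max()              # e * sup_k E |F_k|
print(lhs <= rhs)                                      # the sample estimate respects the bound
```

With these parameters the sample estimate of $\mathbb{E} \sup_k |F_k|$ stays well below $e \cdot \sup_k \mathbb{E}|F_k|$, in line with the stated inequality.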
2 Preliminaries

Martingales and Transforms on $\mathbb{T}^{\mathbb{N}}$. Let $\mathbb{T} = \{ e^{i\theta} : \theta \in [0, 2\pi[ \}$ be the torus equipped with the normalized angular measure. Let $\mathbb{T}^{\mathbb{N}}$ be its countable product, equipped with the product Haar measure $\mathbb{P}$. We let $\mathbb{E}$ denote expectation with respect to $\mathbb{P}$. For fixed $k \in \mathbb{N}$, the cylinder sets $\{ (A_1, \dots, A_k, \mathbb{T}^{\mathbb{N}}) \}$, where $A_i$, $i \le k$, are measurable subsets of $\mathbb{T}$, form the $\sigma$-algebra $\mathcal{F}_k$. Thus we obtain a filtered probability space $(\mathbb{T}^{\mathbb{N}}, (\mathcal{F}_k), \mathbb{P})$. We let $\mathbb{E}_k$ denote the conditional expectation with respect to the $\sigma$-algebra $\mathcal{F}_k$. Let $G = (G_k)$ be an $L^1(\mathbb{T}^{\mathbb{N}})$-bounded martingale. Conditioned on $\mathcal{F}_{k-1}$, the martingale difference $\Delta_k G = G_k - G_{k-1}$ defines an element of $L^1_0(\mathbb{T})$, the Lebesgue space of integrable functions with vanishing mean. We define the previsible norm as
$$\| G \|_{P} = \Big\| \Big( \sum_{k=1}^{\infty} \mathbb{E}_{k-1} |\Delta_k G|^2 \Big)^{1/2} \Big\|_{L^1}, \qquad (2.1)$$
and refer to $\big( \sum_{k=1}^{\infty} \mathbb{E}_{k-1} |\Delta_k G|^2 \big)^{1/2}$ as the conditional square function of $G$. For any bounded and adapted sequence $W = (w_k)$ we define the martingale transform operator $T_W$ by
$$T_W(G) = \Im \Big[ \sum w_{k-1} \Delta_k G \Big]. \qquad (2.2)$$
Garsia [5] is our reference for martingale inequalities.

Sine-Cosine decomposition.
Let $G = (G_k)$ be a martingale on $\mathbb{T}^{\mathbb{N}}$ with respect to the canonical product filtration $(\mathcal{F}_k)$. Let $U = (U_k)$ be the martingale defined by averaging,
$$U_k(x, y) = \frac{1}{2} \big[ G_k(x, y) + G_k(x, \bar{y}) \big], \qquad (2.3)$$
where $x \in \mathbb{T}^{k-1}$, $y \in \mathbb{T}$. The martingale $U$ is called the cosine part of $G$. Putting $V_k = G_k - U_k$ we obtain the corresponding sine-martingale $V = (V_k)$, and the sine-cosine decomposition of $G$, defined by $G = U + V$. By construction we have
$$\Delta V_k(x, \bar{y}) = - \Delta V_k(x, y) \quad \text{and} \quad U_k(x, \bar{y}) = U_k(x, y)$$
for any $k \in \mathbb{N}$.

3 Martingale Estimates

The Hilbert transform.
The Hilbert transform on $L^2(\mathbb{T})$ is defined as the Fourier multiplier
$$H(e^{in\theta}) = -i \, \mathrm{sign}(n) \, e^{in\theta}.$$
Let $1 \le p \le \infty$. The Hardy space $H^p_0(\mathbb{T}) \subset L^p_0(\mathbb{T})$ consists of those $p$-integrable functions of vanishing mean for which the harmonic extension to the unit disk is analytic. See [4]. For $h \in H^2_0(\mathbb{T})$ let $y = \Im h$. The Hilbert transform recovers $h$ from its imaginary part $y$: we have $h = -Hy + iy$ and $\| h \|_2 = \sqrt{2} \, \| y \|_2$. For $w \in \mathbb{C}$, $|w| = 1$, we therefore have
$$\| h \|_2 = \sqrt{2} \, \| y \|_2 = \sqrt{2} \, \| \Im( w \cdot h ) \|_2.$$

Hardy martingales. An $L^1(\mathbb{T}^{\mathbb{N}})$-bounded $(\mathcal{F}_k)$-martingale $G = (G_k)$ is called a Hardy martingale if, conditioned on $\mathcal{F}_{k-1}$, the martingale difference $\Delta_k G$ defines an element of $H^1_0(\mathbb{T})$. See [2], [3], [6, 7, 8]. Since the Hilbert transform, applied to functions with vanishing mean, preserves the $L^2$ norm, we have
$$\mathbb{E}_{k-1} |\Delta_k U|^2 = \mathbb{E}_{k-1} |\Im( w_{k-1} \Delta_k G )|^2$$
for each adapted sequence $W = (w_k)$ with $|w_k| = 1$, and consequently,
$$\Big\| \Big( \sum \mathbb{E}_{k-1} |\Delta_k U|^2 \Big)^{1/2} \Big\|_{L^1} = \Big\| \Big( \sum \mathbb{E}_{k-1} |\Im( w_{k-1} \Delta_k G )|^2 \Big)^{1/2} \Big\|_{L^1}. \qquad (3.1)$$
We restate (3.1) as $\| U \|_{P} = \| T_W(G) \|_{P}$, where $T_W(G) = \Im [ \sum w_{k-1} \Delta_k G ]$. In this paper we show that the lower $P$-norm estimate $\| U \|_{P} \le \| T_W(G) \|_{P}$ is stable under dyadic perturbation.

Dyadic martingales.
The dyadic sigma-algebra on $\mathbb{T}^{\mathbb{N}}$ is defined with Rademacher functions. For $x = (x_k) \in \mathbb{T}^{\mathbb{N}}$ define $\cos_k(x) = \Re x_k$ and $\sigma_k(x) = \mathrm{sign}(\cos_k(x))$. We let $\mathcal{D}$ be the sigma-algebra generated by $\{ \sigma_k : k \in \mathbb{N} \}$ and call it the dyadic sigma-algebra on $\mathbb{T}^{\mathbb{N}}$. Let $G \in L^1(\mathbb{T}^{\mathbb{N}})$ have the sine-cosine decomposition $G = U + V$; then $\mathbb{E}( U_k | \mathcal{D} ) = \mathbb{E}( G_k | \mathcal{D} )$ for $k \in \mathbb{N}$, and hence
$$U - \mathbb{E}( U | \mathcal{D} ) + V = G - \mathbb{E}( G | \mathcal{D} ).$$
Our principal result asserts stability of (3.1) under dyadic perturbations, as follows:
Theorem 3.1.
Let $G = (G_k)_{k=1}^{n}$ be a Hardy martingale and let $U = (U_k)_{k=1}^{n}$ be its cosine martingale given by (2.3). Then, for any adapted sequence $W = (w_k)$ satisfying $|w_k| = 1$, we have
$$\| U - \mathbb{E}( U | \mathcal{D} ) \|_{P} \le C \, \| T_W( G - \mathbb{E}( G | \mathcal{D} ) ) \|_{P}^{1/2} \, \| G \|_{P}^{1/2}, \qquad (3.2)$$
where $T_W$ is the martingale transform operator defined by (2.2).

Define $\sigma \in L^2(\mathbb{T})$ by $\sigma(\zeta) = \mathrm{sign} \, \Re \zeta$. Note that $\sigma(\bar{\zeta}) = \sigma(\zeta)$ for all $\zeta \in \mathbb{T}$. For $f, g \in L^2(\mathbb{T})$ we put $\langle f, g \rangle = \int_{\mathbb{T}} f \bar{g} \, dm$.

Lemma 3.2.
Let $h \in H^2_0(\mathbb{T})$, and $u(z) = (h(z) + h(\bar{z}))/2$. Then for $w, b \in \mathbb{C}$ with $|w| = 1$,
$$\Im\big( w \cdot ( \langle u, \sigma \rangle - b ) \big)^2 + \Re\big( w \cdot \langle u, \sigma \rangle \big)^2 + \int_{\mathbb{T}} | u - \langle u, \sigma \rangle \sigma |^2 \, dm = \int_{\mathbb{T}} \Im\big( w \cdot ( h - b\sigma ) \big)^2 \, dm.$$

Proof.
First put $w_0 = 1_{\mathbb{T}}$ and $w_1 = \sigma$, and choose an orthonormal system $\{ w_k : k \ge 2 \}$ so that $\{ w_k : k \ge 1 \}$ is an orthonormal basis for the subspace of $L^2_0(\mathbb{T})$ consisting of functions invariant under the conjugation $\zeta \mapsto \bar{\zeta}$. Then $\{ w_k, H w_k : k \ge 1 \}$, where $H$ is the Hilbert transform, is an orthonormal basis of $L^2_0(\mathbb{T})$. Moreover, in the Hardy space $H^2_0(\mathbb{T})$ the analytic system $\{ w_k + i H w_k : k \ge 1 \}$ is an orthogonal basis with $\| w_k + i H w_k \|_2 = \sqrt{2}$, $k \ge 1$.

Fix $h \in H^2_0(\mathbb{T})$ and $w, b \in \mathbb{C}$ with $|w| = 1$. Clearly, by replacing $h$ by $wh$ and $b$ by $wb$, it suffices to prove the lemma for $w = 1$. Since $\int u = 0$ we have
$$u = \sum_{n=1}^{\infty} c_n w_n.$$
We apply the Hilbert transform (so that $h = u + iHu$) and rearrange terms to get
$$h - b\sigma = ( c_1 - b ) \sigma + i c_1 H\sigma + \sum_{n=2}^{\infty} c_n ( w_n + i H w_n ). \qquad (3.3)$$
Then, taking imaginary parts gives
$$\Im( h - b\sigma ) = \Im( c_1 - b ) \, \sigma + \Re( c_1 ) \, H\sigma + \sum_{n=2}^{\infty} \big[ \Im( c_n ) \, w_n + \Re( c_n ) \, H w_n \big]. \qquad (3.4)$$
By orthogonality the identity (3.4) yields
$$\int_{\mathbb{T}} \Im( h - b\sigma )^2 \, dm = \Im( c_1 - b )^2 + \Re( c_1 )^2 + \sum_{n=2}^{\infty} | c_n |^2. \qquad (3.5)$$
On the other hand, since $\int u = 0$, $c_1 = \langle u, \sigma \rangle$, and $w_1 = \sigma$, we get
$$\int_{\mathbb{T}} | u - \langle u, \sigma \rangle \sigma |^2 \, dm = \sum_{n=2}^{\infty} | c_n |^2. \qquad (3.6)$$
Comparing the equations (3.5) and (3.6) completes the proof.

We use below some arithmetic, which we isolate first.

Lemma 3.3.
Let $\mu, b \in \mathbb{C}$ and
$$a = |\mu| + \frac{ | \mu - b |^2 }{ |\mu| + |b| }. \qquad (3.7)$$
Then for any $w \in \mathbb{T}$,
$$( a - |b| )^2 \le 4 \, \big( \Im( w \cdot ( \mu - b ) )^2 + \Re( w \cdot \mu )^2 \big), \qquad (3.8)$$
and
$$| \mu - b |^2 \le 2 \, ( a^2 - |\mu|^2 ). \qquad (3.9)$$

Proof.
By rotation invariance it suffices to prove (3.8) for $w = 1$. Let $\mu = m_1 + i m_2$ and $b = b_1 + i b_2$. By definition (3.7), we have
$$a - |b| = \frac{ |\mu|^2 - |b|^2 + | \mu - b |^2 }{ |\mu| + |b| }.$$
Expand and regroup the numerator:
$$|\mu|^2 - |b|^2 + | \mu - b |^2 = 2 m_1 ( m_1 - b_1 ) + 2 m_2 ( m_2 - b_2 ). \qquad (3.10)$$
By the Cauchy–Schwarz inequality, the right hand side of (3.10) is bounded by
$$2 \big( m_1^2 + ( m_2 - b_2 )^2 \big)^{1/2} \big( ( m_1 - b_1 )^2 + m_2^2 \big)^{1/2}.$$
Note that $m_1 = \Re \mu$ and $m_2 - b_2 = \Im( \mu - b )$. It remains to observe that
$$\big( ( m_1 - b_1 )^2 + m_2^2 \big)^{1/2} \le |\mu| + |b|,$$
or equivalently
$$m_1^2 + m_2^2 - 2 m_1 b_1 + b_1^2 \le |\mu|^2 + 2 |\mu| |b| + |b|^2,$$
which is obviously true. Hence $\big| a - |b| \big| \le 2 \big( \Re( \mu )^2 + \Im( \mu - b )^2 \big)^{1/2}$, which is (3.8) for $w = 1$.

Next we turn to verifying (3.9). We have $a^2 - |\mu|^2 = ( a + |\mu| )( a - |\mu| )$, hence
$$a^2 - |\mu|^2 = \Big[ 2 |\mu| + \frac{ | \mu - b |^2 }{ |\mu| + |b| } \Big] \frac{ | \mu - b |^2 }{ |\mu| + |b| }. \qquad (3.11)$$
In view of (3.11) we get (3.9) by showing that
$$2 |\mu|^2 + 2 |\mu| |b| + | \mu - b |^2 \ge \frac{1}{2} \big( |\mu| + |b| \big)^2. \qquad (3.12)$$
The left hand side of (3.12) is larger than $|\mu|^2 + |b|^2$, while the right hand side of (3.12) is smaller than $|\mu|^2 + |b|^2$.

We merge the inequalities of Lemma 3.3 with the identity in Lemma 3.2.
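Both inequalities of Lemma 3.3 are elementary, and a quick numerical spot check is easy to set up. The sketch below is an illustration only (assuming numpy; the small additive tolerance guards against floating-point roundoff): it samples random $\mu, b \in \mathbb{C}$ and unimodular $w$, and tests (3.8) and (3.9) as stated above.

```python
import numpy as np

# Spot check of Lemma 3.3 on random data, where
#   a = |mu| + |mu - b|^2 / (|mu| + |b|),
#   (3.8): (a - |b|)^2 <= 4*( Im(w(mu - b))^2 + Re(w mu)^2 ),
#   (3.9): |mu - b|^2 <= 2*(a^2 - |mu|^2).
rng = np.random.default_rng(1)
ok_38 = ok_39 = True
for _ in range(10000):
    mu = complex(rng.normal(), rng.normal())
    b = complex(rng.normal(), rng.normal())
    w = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi))      # |w| = 1
    a = abs(mu) + abs(mu - b) ** 2 / (abs(mu) + abs(b))
    ok_38 &= (a - abs(b)) ** 2 <= 4.0 * ((w * (mu - b)).imag ** 2
                                         + (w * mu).real ** 2) + 1e-9
    ok_39 &= abs(mu - b) ** 2 <= 2.0 * (a ** 2 - abs(mu) ** 2) + 1e-9
print(bool(ok_38), bool(ok_39))
```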
Proposition 3.4.
Let $b \in \mathbb{C}$ and $h \in H^2_0(\mathbb{T})$. If $u(z) = (h(z) + h(\bar{z}))/2$ and
$$a = | \langle u, \sigma \rangle | + \frac{ | \langle u, \sigma \rangle - b |^2 }{ | \langle u, \sigma \rangle | + |b| },$$
then
$$\int_{\mathbb{T}} | u - b\sigma |^2 \, dm \le 2 \, \big( a^2 - | \langle u, \sigma \rangle |^2 \big) + \int_{\mathbb{T}} | u - \langle u, \sigma \rangle \sigma |^2 \, dm, \qquad (3.13)$$
and for all $w \in \mathbb{C}$ with $|w| = 1$,
$$\frac{ ( a - |b| )^2 }{4} + \int_{\mathbb{T}} | u - \langle u, \sigma \rangle \sigma |^2 \, dm \le \int_{\mathbb{T}} \Im\big( w \cdot ( h - b\sigma ) \big)^2 \, dm. \qquad (3.14)$$

Proof.
Put
$$J = \int_{\mathbb{T}} \Im\big( w \cdot ( h - b\sigma ) \big)^2 \, dm. \qquad (3.15)$$
The proof exploits the basic identities for the integrals $J$ and $\int_{\mathbb{T}} | u - b\sigma |^2 \, dm$, and intertwines them with the arithmetic (3.7)–(3.9).

Step 1.
Use the straightforward identity
$$\int_{\mathbb{T}} | u - b\sigma |^2 \, dm = | \langle u, \sigma \rangle - b |^2 + \int_{\mathbb{T}} | u - \langle u, \sigma \rangle \sigma |^2 \, dm. \qquad (3.16)$$
Apply (3.9), so that $| \langle u, \sigma \rangle - b |^2 \le 2 ( a^2 - | \langle u, \sigma \rangle |^2 )$; hence by (3.16) we get (3.13),
$$\int_{\mathbb{T}} | u - b\sigma |^2 \, dm \le 2 \, \big( a^2 - | \langle u, \sigma \rangle |^2 \big) + \int_{\mathbb{T}} | u - \langle u, \sigma \rangle \sigma |^2 \, dm.$$

Step 2.
The identity of Lemma 3.2 gives
$$\Im\big( w \cdot ( \langle u, \sigma \rangle - b ) \big)^2 + \Re\big( w \cdot \langle u, \sigma \rangle \big)^2 + \int_{\mathbb{T}} | u - \langle u, \sigma \rangle \sigma |^2 \, dm = J. \qquad (3.17)$$
Apply (3.8) with $\mu = \langle u, \sigma \rangle$ to the left hand side in (3.17), and get (3.14),
$$\frac{ ( a - |b| )^2 }{4} + \int_{\mathbb{T}} | u - \langle u, \sigma \rangle \sigma |^2 \, dm \le J.$$

Proof of Theorem 3.1.
Let $\{ g_k \}$ be the martingale difference sequence of the Hardy martingale $G = (G_k)$, and let $\{ u_k \}$ be the martingale difference sequence of the associated cosine martingale $U = (U_k)$. By convexity we have
$$\mathbb{E} \Big( \sum_{k=1}^{\infty} | \mathbb{E}_{k-1}( u_k \sigma_k ) |^2 \Big)^{1/2} = \mathbb{E} \, \mathbb{E}\Big( \Big( \sum_{k=1}^{\infty} | \mathbb{E}_{k-1}( u_k \sigma_k ) |^2 \Big)^{1/2} \,\Big|\, \mathcal{D} \Big) \ge \mathbb{E} \Big( \sum_{k=1}^{\infty} \big| \mathbb{E}\big( \mathbb{E}_{k-1}( u_k \sigma_k ) \,\big|\, \mathcal{D} \big) \big|^2 \Big)^{1/2}.$$
Put $b_k = \mathbb{E}( \mathbb{E}_{k-1}( u_k \sigma_k ) \,|\, \mathcal{D} )$ and note that $\mathbb{E}( u_k \,|\, \mathcal{D} ) = b_k \sigma_k$.

Step 1.
Let $Y = \big( \sum_{k=1}^{\infty} | \mathbb{E}_{k-1}( u_k \sigma_k ) |^2 \big)^{1/2}$ and $Z = \big( \sum_{k=1}^{\infty} | b_k |^2 \big)^{1/2}$. Then, restating the above convexity estimate, we have
$$\mathbb{E}( Y ) \ge \mathbb{E}( Z ). \qquad (3.18)$$

Step 2.
Since $\mathbb{E}( g_k \,|\, \mathcal{D} ) = \mathbb{E}( u_k \,|\, \mathcal{D} )$, the square of the conditional square function of $T_W( G - \mathbb{E}( G | \mathcal{D} ) )$ coincides with
$$\sum \mathbb{E}_{k-1} \big| \Im\big( w_{k-1} \cdot ( g_k - b_k \sigma_k ) \big) \big|^2. \qquad (3.19)$$

Step 3.
The sequence $\{ u_k - b_k \sigma_k \}$ is the martingale difference sequence of $U - \mathbb{E}( U | \mathcal{D} )$. The square of its conditional square function is hence given by
$$\sum \mathbb{E}_{k-1} | u_k - b_k \sigma_k |^2. \qquad (3.20)$$
Following the pattern of (3.7), define
$$a_k = | \mathbb{E}_{k-1}( u_k \sigma_k ) | + \frac{ | \mathbb{E}_{k-1}( u_k \sigma_k ) - b_k |^2 }{ | \mathbb{E}_{k-1}( u_k \sigma_k ) | + | b_k | },$$
and
$$v_k = u_k - \mathbb{E}_{k-1}( u_k \sigma_k ) \, \sigma_k, \qquad r_k^2 = \mathbb{E}_{k-1} | v_k |^2.$$
By (3.13),
$$\mathbb{E}_{k-1} | u_k - b_k \sigma_k |^2 \le 2 \, \big( a_k^2 + r_k^2 - | \mathbb{E}_{k-1}( u_k \sigma_k ) |^2 \big). \qquad (3.21)$$

Step 4.
With $X = \big( \sum_{k=1}^{\infty} ( a_k^2 + r_k^2 ) \big)^{1/2}$, we have the obvious pointwise estimate $X \ge Y$. Taking into account (3.21) gives
$$\| U - \mathbb{E}( U | \mathcal{D} ) \|_{P} \le \sqrt{2} \, \mathbb{E}\big( ( X^2 - Y^2 )^{1/2} \big) \le \sqrt{2} \, \big( \mathbb{E}( X - Y ) \big)^{1/2} \big( \mathbb{E}( X + Y ) \big)^{1/2}. \qquad (3.22)$$
The factor $\mathbb{E}( X + Y )$ in (3.22) admits an upper bound by
$$\mathbb{E}( X + Y ) \le C \, \| U \|_{P} \le C \, \| G \|_{P}. \qquad (3.23)$$

Step 5.
Next we turn to estimates for $\mathbb{E}( X - Y )$. By (3.18), $\mathbb{E}( X - Y ) \le \mathbb{E}( X - Z )$, and by the triangle inequality,
$$X - Z \le \Big( \sum_{k=1}^{\infty} \big( ( a_k - | b_k | )^2 + r_k^2 \big) \Big)^{1/2}.$$
By (3.14),
$$( a_k - | b_k | )^2 + r_k^2 \le 4 \, \mathbb{E}_{k-1} \big| \Im\big( w_{k-1} \cdot ( g_k - b_k \sigma_k ) \big) \big|^2,$$
and hence
$$\mathbb{E}( X - Z ) \le C \, \| T_W( G - \mathbb{E}( G | \mathcal{D} ) ) \|_{P}.$$
Invoking (3.22) and (3.23) completes the proof.
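The identity of Lemma 3.2, which drives Step 2, can also be verified numerically. The sketch below is an illustration only (assuming numpy): it samples a random analytic trigonometric polynomial $h$ on a uniform grid of odd size, forms its cosine part $u$ and $\sigma(\zeta) = \mathrm{sign}\,\Re\zeta$, and compares both sides of the identity, which then agree up to floating-point roundoff.

```python
import numpy as np

# Check  Im(w(<u,s>-b))^2 + Re(w<u,s>)^2 + ||u - <u,s>s||^2 = ||Im(w(h - b*s))||^2
# for an analytic polynomial h, its cosine part u, and s = sign(cos(theta)).
rng = np.random.default_rng(0)
M = 1001                                   # odd grid size: cos(theta_j) never vanishes
theta = 2.0 * np.pi * np.arange(M) / M
N = 20                                     # degree of the analytic polynomial
c = rng.normal(size=N) + 1j * rng.normal(size=N)
h = sum(c[n - 1] * np.exp(1j * n * theta) for n in range(1, N + 1))
u = 0.5 * (h + h[(-np.arange(M)) % M])     # u(z) = (h(z) + h(conj z)) / 2
s = np.sign(np.cos(theta))                 # sigma(zeta) = sign(Re zeta)
b, w = 0.7 - 0.3j, np.exp(1j * 0.4)        # arbitrary b and unimodular w
c1 = (u * s).mean()                        # <u, sigma> for the normalized measure
lhs = ((w * (c1 - b)).imag ** 2 + (w * c1).real ** 2
       + (np.abs(u - c1 * s) ** 2).mean())
rhs = ((w * (h - b * s)).imag ** 2).mean()
print(abs(lhs - rhs))                      # tiny: the identity holds on the grid
```

On such a grid the even/odd orthogonality and the discrete trigonometric orthogonality used in the proof are exact, so the discrepancy is pure roundoff.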
References

[1] J. Bourgain. Embedding $L^1$ in $L^1/H^1$. Trans. Amer. Math. Soc., 278(2):689–702, 1983.

[2] D. J. H. Garling. On martingales with values in a complex Banach space.
Math. Proc. Cambridge Philos. Soc., 104(2):399–406, 1988.

[3] D. J. H. Garling. Hardy martingales and the unconditional convergence of martingales.
Bull. London Math. Soc., 23(2):190–192, 1991.
[4] J. B. Garnett. Bounded analytic functions, volume 96 of
Pure and Applied Mathematics. Academic Press Inc. [Harcourt Brace Jovanovich Publishers], New York, 1981.

[5] A. M. Garsia.
Martingale inequalities: Seminar notes on recent progress. W. A. Benjamin, Inc., Reading, Mass.–London–Amsterdam, 1973. Mathematics Lecture Notes Series.

[6] P. F. X. Müller. A decomposition for Hardy martingales.
Indiana Univ. Math. J., 61(5):x1–x15, 2012.

[7] P. F. X. Müller. A decomposition for Hardy martingales II.
Math. Proc. Cambridge Philos. Soc., 157(2):189–207, 2014.

[8] P. F. X. Müller. A decomposition for Hardy martingales III.