Boundedness of Journé operators with matrix weights
Komla Domelevo, Spyridon Kakaroumpas, Stefanie Petermichl, Odí Soler i Gibert
Abstract.
We develop a biparameter theory for matrix weights and show various bounds for Journé operators under the assumption of the matrix Muckenhoupt condition.
Contents
Notation
1. Introduction
2. Background
3. Biparameter matrix $A_p$ weights
4. Dyadic strong Christ–Goldberg maximal function
5. Matrix-weighted square functions
6. $L^p$ matrix-weighted bounds for biparameter Haar multipliers
7. $L^p$ matrix-weighted bounds for paraproduct free Journé operators
8. $L^2$ matrix-weighted bounds for general Journé operators
9. Appendix
References

Notation

$\mathbb{Z}_+$: the set of non-negative integers;
$\mathbf{1}_E$: characteristic function of a set $E$;
$\mathrm{d}x$: integration with respect to Lebesgue measure;
$|E|$: $n$-dimensional Lebesgue measure of a measurable set $E \subseteq \mathbb{R}^n$;
$(f)_E$: average with respect to Lebesgue measure, $(f)_E := \frac{1}{|E|}\int_E f(x)\,\mathrm{d}x$;
$\fint_E \cdot\,\mathrm{d}x$: integral with respect to the normalized Lebesgue measure on a set $E$ of positive finite measure, $\fint_E f(x)\,\mathrm{d}x := \frac{1}{|E|}\int_E f(x)\,\mathrm{d}x = (f)_E$;
$|e|$: usual Euclidean norm of a vector $e \in \mathbb{C}^d$;
$\langle e, f\rangle$: usual Euclidean pairing of vectors $e, f \in \mathbb{C}^d$;
$M_d(\mathbb{C})$: set of all $d \times d$-matrices with complex entries;
$|A|$: usual matrix norm (i.e. largest singular value) of a matrix $A \in M_d(\mathbb{C})$;
$I_d$: the identity $d \times d$-matrix;
$\|f\|_X$: norm of the element $f$ in a Banach space $X$;
$L^\infty_c$: the space of compactly supported $L^\infty$ functions;
$L^p(W)$: matrix-weighted Lebesgue space, $\|f\|^p_{L^p(W)} := \int |W(x)^{1/p} f(x)|^p\,\mathrm{d}x$;
$(f,g)$: usual $L^2$-pairing, $(f,g) := \int \langle f(x), g(x)\rangle\,\mathrm{d}x$, where $f, g$ take values in $\mathbb{C}^d$;
$p'$: Hölder conjugate exponent to $p$, $1/p + 1/p' = 1$;
$I_-$, $I_+$: left, respectively right half of an interval $I \subseteq \mathbb{R}$;
$h^0_I$, $h^1_I$: $L^2$-normalized cancellative and non-cancellative respectively Haar functions for an interval $I \subseteq \mathbb{R}$, $h^0_I := \frac{\mathbf{1}_{I_+} - \mathbf{1}_{I_-}}{\sqrt{|I|}}$, $h^1_I := \frac{\mathbf{1}_I}{\sqrt{|I|}}$; for simplicity we denote $h_I := h^0_I$;
$f_I$: usual Haar coefficient of a function $f \in L^2(\mathbb{R}; \mathbb{C}^d)$, $f_I := \int_{\mathbb{R}} h_I(x) f(x)\,\mathrm{d}x$;
$h^{\varepsilon_1\varepsilon_2}_R$: any of the four $L^2$-normalized Haar functions for a rectangle $R$ in $\mathbb{R}^2$, $h^{\varepsilon_1\varepsilon_2}_R := h^{\varepsilon_1}_I \otimes h^{\varepsilon_2}_J$, where $R = I \times J$ and $\varepsilon_1, \varepsilon_2 \in \{0, 1\}$; for simplicity we denote $h_R := h^{00}_R$;
$f_R$: usual (biparameter) Haar coefficient of a function $f \in L^2(\mathbb{R}^2; \mathbb{C}^d)$, $f_R := \int_{\mathbb{R}^2} h_R(x) f(x)\,\mathrm{d}x$.

The notation $x \lesssim_{a,b,\ldots} y$ means $x \le Cy$ with a constant $0 < C < \infty$ depending only on the quantities $a, b, \ldots$; the notation $x \gtrsim_{a,b,\ldots} y$ means $y \lesssim_{a,b,\ldots} x$. We use $x \sim_{a,b,\ldots} y$ if both $x \lesssim_{a,b,\ldots} y$ and $x \gtrsim_{a,b,\ldots} y$ hold.

The authors are partially supported by ERC project CHRiSHarMa no. DLV-682402 and the Alexander von Humboldt Stiftung.

1. Introduction
The theory of weights has drawn much attention in recent years. In the scalar-valued setting, we say that a non-negative locally integrable function $w$ on $\mathbb{R}$ is an $A_p$ weight if
\[
\sup_I\, (w)_I \big(w^{-p'/p}\big)_I^{p/p'} < \infty,
\]
where the supremum runs over intervals $I \subseteq \mathbb{R}$ and $(\cdot)_I$ returns the average of a function over the interval $I$ (here $1 < p < \infty$ is fixed, and $p' := p/(p-1)$). It is classical that this is a necessary and sufficient condition for square functions, the Hardy–Littlewood maximal function, the Hilbert transform and Calderón–Zygmund operators to be bounded on the weighted Lebesgue space $L^p(w)$, $1 < p < \infty$.

Inspired by applications to multivariate stationary stochastic processes, a theory of matrix $A_2$ weights was developed by S. Treil and A. Volberg [38], where a necessary and sufficient condition for the boundedness of the Hilbert transform on $L^2$ matrix-weighted Lebesgue spaces was found: the finiteness of the matrix $A_2$ characteristic. This was a difficult task at that time and required a number of new ideas. With considerable extra effort, the result was extended to the $L^p$ case, $1 < p < \infty$, by F. Nazarov and Treil [32], with an alternative proof given by Volberg [40]. M. Christ and M. Goldberg [5] made highly interesting contributions to the field with their work on matrix-weighted maximal functions. The field has recently attracted renewed interest in light of the quest for sharp weighted norm estimates. In the scalar case, the classical questions were settled between 2000 and 2012 [17], [18], [36], but they mostly remain an unsolved mystery in the matrix case. For example, the dyadic square function was shown to have the same bound as in the scalar case [20], [37], but the bound for the Hilbert transform or even the Haar multiplier is still open, with the current best estimates missing that of the scalar case by a long shot; see [31] for $p = 2$ and [4] for $1 < p < \infty$ (and in the generality of Orlicz matrix bump conditions).
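To fix ideas, the scalar condition can be tested numerically on a concrete weight. The following Python sketch (illustrative only: the quadrature scheme and all function names are ours) evaluates the $A_2$ quantity of the power weight $w(x) = |x|^{1/2}$ over intervals centered at its singularity:

```python
import numpy as np

def avg(fn, a, b, n=200_000):
    # Average of fn over [a, b] with respect to normalized Lebesgue
    # measure, via a midpoint rule on a uniform grid.
    x = np.linspace(a, b, n, endpoint=False) + (b - a) / (2 * n)
    return fn(x).mean()

def ap_quantity(w, a, b, p=2.0):
    # The A_p quantity (w)_I * ((w^{-p'/p}))_I^{p/p'} for I = [a, b].
    pc = p / (p - 1)  # Hoelder conjugate exponent p'
    return avg(w, a, b) * avg(lambda x: w(x) ** (-pc / p), a, b) ** (p / pc)

# w(x) = |x|^{1/2} is a classical A_2 weight on the real line.
w = lambda x: np.abs(x) ** 0.5

# By scale invariance of power weights the quantity is the same on every
# interval [-h, h]; a direct computation gives the exact value 4/3.
vals = [ap_quantity(w, -h, h) for h in (0.1, 1.0, 10.0)]
```

Indeed, $(w)_{[-h,h]}\,(w^{-1})_{[-h,h]} = \tfrac{2}{3}h^{1/2} \cdot 2h^{-1/2} = \tfrac{4}{3}$ for every $h > 0$, and the three computed values agree with this up to quadrature error.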
However, the approaches in [4], [31], both relying on a rather general convex body sparse domination principle established in [31], naturally cover all Calderón–Zygmund operators.

In this paper, we initiate the investigation of biparameter matrix-weighted estimates. In other words, we develop a biparameter $A_p$ class and prove various square function estimates, estimates for martingale multipliers and their generalizations, the dyadic shifts, and deduce via a representation theorem estimates for Journé operators. Various different approaches can be made to work, but they share a common feature: the use of various matrix-weighted biparameter square function expressions. One important feature of square functions is the possibility of using Khintchine's inequality to approach higher-parameter cases elegantly. Another is the natural place that square functions occupy in the biparameter theory. The concept of stopping time is more delicate in higher parameters, and a bottom-up approach via square functions is the natural way to obtain, for example, an atomic decomposition of $H^1$ spaces in several parameters. While square functions also played a role in the one-parameter estimate of [31], they could have been completely avoided via a different approach using maximal functions. Indeed, typically the use of square functions does not result in sharp norm estimates, but in the matrix case the use of maximal functions does not seem to help either. As explained before, in the two-parameter setting considered here, there are numerous good reasons why square functions should arise.

Let us give some overview of the central objects in this paper. The unweighted $L^p$ theory of biparameter singular integrals was developed in the 80s, with a beautiful unified definition found by J.-L. Journé [28]; we refer to the introduction therein for an intuitive historic perspective. The questions on weighted estimates arose and some turned out to be difficult.
The biparameter $A_p$ condition takes a supremum over rectangles of arbitrary eccentricity instead of intervals (or cubes). A version of the weighted Theorem 1.1 in the scalar setting first appeared in R. Fefferman and E. M. Stein [6], with restrictive assumptions on the kernel. Subsequently, the kernel assumptions were weakened significantly by R. Fefferman in [7], at the cost of assuming that the weight belongs to the more restrictive class $A_{p/2}$. This was due to the use of his sharp function $f \mapsto M_S(f^2)^{1/2}$, where $M_S$ is the strong maximal function. Finally, R. Fefferman improved his own result in [8], where he showed that the $A_p$ class suffices, and obtained the full statement of the weighted theorem. This was achieved by a difficult bootstrapping argument based on his previous result [7]. In 2017, alternative arguments for the weighted estimates of Journé operators were given independently in [3] and [15], both benefiting strongly from a simple but extremely useful generalization of the use of Haar shifts [18], [35] to Journé operators [30], [34]. Indeed, H. Martikainen's work [30] (and Y. Ou's work [34] in higher parameters) has given us a wonderful new tool to tackle Journé-type singular integrals. Aside from a fused coefficient in the Haar representation, the approximating biparameter Haar shifts still bear a faint resemblance to tensor products. Both [3] and [15] use this representation formula for their proofs, and furthermore both arguments use "shifted" adapted square functions. The argument of [3] features a very beautiful sparse domination by means of adapted square functions. Thanks to these fairly recent developments in product theory, we can now tackle a problem that would have seemed completely out of reach 20 years ago. It remains remarkable that one can obtain an $L^p$ weighted theory in the matrix case and in product spaces.
We obtain square function estimates first, then provide several different ways to pass to Journé operators. One approach gives a result of independent interest: a sparse domination by means of matrix-weighted adapted square functions, inspired by [3].

Our main result is the following.

Theorem 1.1.
Let $1 < p < \infty$, and let $W, U$ be biparameter $d \times d$-matrix valued $A_p$ weights on $\mathbb{R}^2$ with $[W, U']_{A_p} < \infty$. Let $T$ be any Journé operator on $\mathbb{R}^2$.

(1) If $T$ is paraproduct free, then
\[
\|T\|_{L^p(W) \to L^p(W)} \lesssim_{d,p,T} [W]_{A_p}^{\alpha_1(p) + \alpha_2(p)}
\]
and
\[
\|T\|_{L^p(U) \to L^p(W)} \lesssim_{d,p,T} [W, U']_{A_p}^{1/p}\, [W]_{A_p}^{\alpha_1(p)}\, [U]_{A_p}^{\alpha_2(p)},
\]
where $\alpha_1(p), \alpha_2(p)$ are given by (7.5), (7.6) respectively.

(2) If $p = 2$, then
\[
\|T\|_{L^2(W) \to L^2(W)} \lesssim_{d,T} [W]_{A_2}^{\beta}
\quad\text{and}\quad
\|T\|_{L^2(U) \to L^2(W)} \lesssim_{d,T} [W, U']_{A_2}^{1/2}\, [W]_{A_2}^{\beta_1}\, [U]_{A_2}^{\beta_2},
\]
where the exponents $\beta, \beta_1, \beta_2$ are specified in Section 8.

See Subsection 2.4 for a brief overview of Journé operators and the explanation of the terminology "paraproduct free". Let us note that in the special case that $T$ is paraproduct free, we obtain slightly better estimates in part (2) of Theorem 1.1 (see (7.3) and (7.4) below). It should be noted that we do not expect any of the exponents of $[W]_{A_p}$, $[U]_{A_p}$ appearing in Theorem 1.1 to be optimal (even for $p = 2$). Along the way, we establish $L^p$ matrix-weighted bounds for biparameter Haar multipliers and biparameter dyadic shifts (see Subsections 6.1 and 6.2 respectively), for the biparameter analog of the modified Christ–Goldberg dyadic maximal function (see Proposition 4.1), as well as for a class of biparameter paraproducts (see 8.2.1). Moreover, we obtain $L^p$ upper and lower matrix-weighted estimates for biparameter square functions (see Section 5).

One of the proofs we give for part (1) of Theorem 1.1 involves a sparse domination result involving matrix-weighted "shifted" biparameter dyadic square functions for biparameter cancellative Haar shifts, inspired by [3]; see Theorems 7.7 and 7.8. We also prove an analogous sparse domination result involving matrix-weighted dyadic square functions for biparameter Haar multipliers; see Theorems 6.1 and 6.2.

Although we restrict our attention to the product space $\mathbb{R} \times \mathbb{R}$, our arguments work in any biparameter product space $\mathbb{R}^n \times \mathbb{R}^m$, with the usual modifications.

1.1. Difficulties of proofs.
We give here a brief discussion of the main difficulties of our proofs.

First of all, we need to develop a theory of biparameter matrix $A_p$ weights almost from scratch. The powerful machinery of reducing operators developed by Goldberg in [10] works in great generality and therefore allows one to easily define biparameter matrix $A_p$ characteristics. However, since in many estimates for biparameter operators we rely on iterating known estimates for one-parameter counterparts, we also have to study the one-parameter matrix characteristics that one gets when one fixes one variable of a biparameter weight, or when one "averages" over one variable. While in the scalar setting this can be readily done by just using the Lebesgue differentiation theorem (see [15]), in the matrix setting several subtleties arise, partly due to lack of commutativity, and partly due to several measurability difficulties, absent in the case $p = 2$.

Second, biparameter paraproducts pose significant difficulties, especially for $p \neq 2$. While in the scalar setting there are well-understood ways of estimating such operators (see [15]), in the present matrix-weighted setting even defining them in a reasonable way is challenging. While one can consider several different variants of them, it seems that only particular formulations of them are appropriate for estimating Journé operators. See Section 8 for more details.

Moreover, as of now there is no extrapolation scheme for matrix-weighted estimates, even in the one-parameter setting. That means that proving $L^2$ estimates does not immediately imply $L^p$ estimates for $p$ different from 2. Thus, significant additional effort and new ideas are needed to tackle the case $p \neq 2$. In addition, we need different sparse domination results depending on whether one is interested in the one-weight or the two-weight situation, so we do not really get "one" sparse domination bound per operator (also unlike [3]).
Here we recall the definition and some basic facts about dyadic cubes and rectangles and the Haar system.

2.1. Dyadic grids.
For definiteness, in what follows intervals in $\mathbb{R}$ will always be assumed to be left-closed, right-open and bounded. A cube in $\mathbb{R}^n$ will be a set of the form $Q = I_1 \times \ldots \times I_n$, where $I_k$, $k = 1, \ldots, n$, are intervals in $\mathbb{R}$ of the same length. A rectangle in $\mathbb{R}^n \times \mathbb{R}^m$ (with sides parallel to the coordinate axes) will be a set of the form $R = R_1 \times R_2$, where $R_1$ is a cube in $\mathbb{R}^n$ and $R_2$ is a cube in $\mathbb{R}^m$.

A collection $\mathcal{D}$ of intervals in $\mathbb{R}$ will be said to be a dyadic grid in $\mathbb{R}$ if one can write $\mathcal{D} = \bigcup_{k \in \mathbb{Z}} \mathcal{D}_k$, such that the following hold:
● for all $k \in \mathbb{Z}$, $\mathcal{D}_k$ forms a partition of $\mathbb{R}$, and all intervals in $\mathcal{D}_k$ have the same length $2^{-k}$;
● for all $k \in \mathbb{Z}$, every $J \in \mathcal{D}_k$ can be written as a union of exactly 2 intervals in $\mathcal{D}_{k+1}$.

We say that $\mathcal{D}$ is the standard dyadic grid in $\mathbb{R}$ if
\[
\mathcal{D} := \{[m 2^{-k}, (m+1) 2^{-k}) : k, m \in \mathbb{Z}\}.
\]
A collection $\mathcal{D}$ of cubes in $\mathbb{R}^n$ will be said to be a dyadic grid in $\mathbb{R}^n$ if for some dyadic grids $\mathcal{D}^1, \ldots, \mathcal{D}^n$ in $\mathbb{R}$ one can write $\mathcal{D} = \bigcup_{k \in \mathbb{Z}} \mathcal{D}_k$, where
\[
\mathcal{D}_k = \{I_1 \times \ldots \times I_n : I_i \in \mathcal{D}^i,\ |I_i| = 2^{-k},\ i = 1, \ldots, n\}.
\]
We say that $\mathcal{D}$ is the standard dyadic grid in $\mathbb{R}^n$ if
\[
\mathcal{D} := \{[m_1 2^{-k}, (m_1+1) 2^{-k}) \times \cdots \times [m_n 2^{-k}, (m_n+1) 2^{-k}) : k, m_1, \ldots, m_n \in \mathbb{Z}\}.
\]
If $\mathcal{D}$ is a dyadic grid in $\mathbb{R}^n$, then we denote
\[
\mathrm{ch}_i(Q) := \{K \in \mathcal{D} : K \subseteq Q,\ |K| = 2^{-in}|Q|\}, \quad Q \in \mathcal{D},\ i = 0, 1, 2, \ldots.
\]
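The grids above are easy to manipulate programmatically; the following Python sketch (a toy implementation, with names of our choosing) produces one generation of the standard dyadic grid in $\mathbb{R}$ and the children families $\mathrm{ch}_i$ in the one-dimensional case $n = 1$:

```python
from fractions import Fraction as F

def dyadic_intervals(k, lo, hi):
    # Generation-k intervals [m 2^{-k}, (m+1) 2^{-k}) of the standard
    # dyadic grid covering [lo, hi); assumes k >= 0 and that lo, hi are
    # integer multiples of 2^{-k}.
    h = F(1, 2 ** k)
    return [(m * h, (m + 1) * h) for m in range(int(lo / h), int(hi / h))]

def children(I, i=1):
    # ch_i(I) in dimension n = 1: the 2^i grid descendants K of I = [a, b)
    # with |K| = 2^{-i} |I|.
    a, b = I
    h = (b - a) / 2 ** i
    return [(a + m * h, a + (m + 1) * h) for m in range(2 ** i)]

gen2 = dyadic_intervals(2, 0, 1)   # the four intervals of length 1/4 in [0, 1)
kids = children((F(0), F(1)), 2)   # ch_2([0, 1)): the same four intervals
```

Endpoints are kept as exact rationals, so the partition and nestedness properties can be checked without rounding.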
A collection $\mathcal{D}$ is said to be a product dyadic grid in $\mathbb{R}^n \times \mathbb{R}^m$ if for a dyadic grid $\mathcal{D}^1$ in $\mathbb{R}^n$ and a dyadic grid $\mathcal{D}^2$ in $\mathbb{R}^m$ we have
\[
\mathcal{D} := \{R_1 \times R_2 : R_i \in \mathcal{D}^i,\ i = 1, 2\},
\]
and in this case we write (slightly abusing the notation) $\mathcal{D} = \mathcal{D}^1 \times \mathcal{D}^2$.

If $\mathcal{D}$ is a product dyadic grid in $\mathbb{R}^n \times \mathbb{R}^m$, then we denote
\[
\mathrm{ch}_i(R) := \{Q_1 \times Q_2 : Q_j \in \mathrm{ch}_{i_j}(R_j),\ j = 1, 2\}, \quad R \in \mathcal{D},\ i = (i_1, i_2),\ i_1, i_2 = 0, 1, 2, \ldots.
\]
We say that $R$ is the $i$-th ancestor of $P$ in $\mathcal{D}$ if $P \in \mathrm{ch}_i(R)$.

2.2. Sparse families.
Consider a dyadic grid $\mathcal{D}$ in $\mathbb{R}^n$. Following the terminology of [31, Subsection 2], we say that a family $\mathcal{S}$ of dyadic cubes in $\mathbb{R}^n$ is dyadically $\delta$-sparse (where $0 < \delta < 1$) if
\[
\sum_{K \in \mathrm{ch}_{\mathcal{S}}(Q)} |K| \le (1 - \delta)|Q|, \quad \forall Q \in \mathcal{S},
\]
where $\mathrm{ch}_{\mathcal{S}}(Q)$ is the family of all maximal cubes in $\mathcal{S}$ that are strictly contained in $Q$.

Consider now a product dyadic grid $\mathcal{D}$ in $\mathbb{R}^n \times \mathbb{R}^m$ and a collection $\mathcal{S}$ of rectangles in $\mathcal{D}$. Let also $0 < \delta < 1$. We say that $\mathcal{S}$ is a dyadic Carleson family with constant $\delta$ if
\[
\sum_{\substack{R \in \mathcal{S} \\ R \subseteq \Omega}} |R| \le \frac{1}{\delta}|\Omega|
\]
for all open sets $\Omega \subseteq \mathbb{R}^n \times \mathbb{R}^m$. On the other hand, we say that $\mathcal{S}$ is a weakly $\delta$-sparse family if for each $R \in \mathcal{S}$ there is a measurable set $E_R \subseteq R$ with $|E_R| \ge \delta|R|$ and such that the sets $\{E_R\}_{R \in \mathcal{S}}$ are pairwise disjoint.

It was proved by A. K. Lerner and F. Nazarov [29] that a collection of dyadic cubes $\mathcal{S}$ is a dyadic Carleson family if and only if it is a weakly sparse family of dyadic cubes. Later on, T. S. Hänninen [13] proved that a collection of dyadic rectangles $\mathcal{S}$ is a dyadic Carleson family with constant $\delta$ if and only if it is a weakly $\delta$-sparse family. In fact, he shows his result for collections of general Borel sets. Observe as well that, if one defines one-parameter analogues of dyadic Carleson families and weakly sparse families of dyadic cubes, these are not equivalent to dyadically sparse families. In fact, a dyadically sparse family is also weakly sparse, but the converse only holds if the weakly sparse constant satisfies $\delta > 1/2$.
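The one-parameter sparseness condition can be illustrated on a toy family; the sketch below (our own brute-force check, quadratic in the size of the family) verifies the dyadic $\delta$-sparseness inequality directly:

```python
from fractions import Fraction as F

def is_dyadically_sparse(family, delta):
    # Checks sum_{K in ch_S(Q)} |K| <= (1 - delta) |Q| for every Q in the
    # family, where ch_S(Q) collects the maximal members strictly inside Q.
    inside = lambda K, Q: Q[0] <= K[0] and K[1] <= Q[1] and K != Q
    for Q in family:
        strict = [K for K in family if inside(K, Q)]
        maximal = [K for K in strict if not any(inside(K, L) for L in strict)]
        if sum(b - a for (a, b) in maximal) > (1 - delta) * (Q[1] - Q[0]):
            return False
    return True

# [0, 1) together with two of its four grandchildren: the maximal cubes
# strictly inside [0, 1) cover only half of it, so the family is
# dyadically (1/2)-sparse but not delta-sparse for any delta > 1/2.
S = [(F(0), F(1)), (F(0), F(1, 4)), (F(1, 2), F(3, 4))]
```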
2.3. Haar systems.

2.3.1. Haar system on $\mathbb{R}$. Let $\mathcal{D}$ be any dyadic grid in $\mathbb{R}$. For any interval $I \in \mathcal{D}$, $h^0_I$, $h^1_I$ will denote, respectively, the $L^2$-normalized cancellative and non-cancellative Haar functions over the interval $I \in \mathcal{D}$, that is
\[
h^0_I := \frac{\mathbf{1}_{I_+} - \mathbf{1}_{I_-}}{\sqrt{|I|}}, \qquad h^1_I := \frac{\mathbf{1}_I}{\sqrt{|I|}}
\]
(so $h^0_I$ has mean 0). For simplicity we denote $h_I := h^0_I$. For any function $f \in L^2(\mathbb{R})$, we denote $f_I := (f, h_I)$, $I \in \mathcal{D}$. We will also denote by $Q_I$ the projection onto the one-dimensional subspace spanned by $h_I$, so that
\[
Q_I f := f_I h_I, \quad f \in L^2(\mathbb{R}).
\]
It is well-known that one has the expansion
\[
f = \sum_{I \in \mathcal{D}} f_I h_I, \quad \forall f \in L^2(\mathbb{R})
\]
in the $L^2(\mathbb{R})$-sense, and that the system $\{h_I\}_{I \in \mathcal{D}}$ forms an orthonormal basis of $L^2(\mathbb{R})$. Of course, all these notations and facts extend to $\mathbb{C}^d$-valued functions in the obvious coordinate-wise way.

2.3.2. Haar system on $\mathbb{R}^n$. Let $\mathcal{D}$ be any dyadic grid in $\mathbb{R}^n$. Denote $E := \{0,1\}^n \setminus \{(1, \ldots, 1)\}$, which we call the set of one-parameter signatures. Consider a cube $I = I_1 \times \cdots \times I_n \in \mathcal{D}$. For $\varepsilon = (\varepsilon_1, \ldots, \varepsilon_n) \in E$, we denote by $h^\varepsilon_I$ the $L^2$-normalized cancellative Haar function over the cube $I$ defined by
\[
h^\varepsilon_I(x) := h^{\varepsilon_1}_{I_1}(x_1) \cdots h^{\varepsilon_n}_{I_n}(x_n), \quad x = (x_1, \ldots, x_n) \in \mathbb{R}^n.
\]
In some particular cases, we will need to consider the set $\widetilde{E} = E \cup \{(1, \ldots, 1)\}$ of extended one-parameter signatures. If this is the case and $\varepsilon = (1, \ldots, 1)$, we say that $h^\varepsilon_I = h^1_{I_1}(x_1) \cdots h^1_{I_n}(x_n)$ is the $L^2$-normalized non-cancellative Haar function over the cube $I$. For any function $f \in L^2(\mathbb{R}^n)$ and $\varepsilon \in E$, we denote $f^\varepsilon_I := (f, h^\varepsilon_I)$, $I \in \mathcal{D}$. Analogously to what we did before, we will also denote by $Q^\varepsilon_I$ the projection onto the one-dimensional subspace spanned by $h^\varepsilon_I$, that is $Q^\varepsilon_I f := f^\varepsilon_I h^\varepsilon_I$, $f \in L^2(\mathbb{R}^n)$.
It is well-known that one has the expansion
\[
f = \sum_{I \in \mathcal{D}} \sum_{\varepsilon \in E} f^\varepsilon_I h^\varepsilon_I, \quad \forall f \in L^2(\mathbb{R}^n)
\]
in the $L^2(\mathbb{R}^n)$-sense, and that the system $\{h^\varepsilon_I : I \in \mathcal{D},\ \varepsilon \in E\}$ forms an orthonormal basis of $L^2(\mathbb{R}^n)$. Of course, all these notations and facts extend to $\mathbb{C}^d$-valued functions in the obvious coordinate-wise way.
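The orthonormality facts just recalled are easy to verify numerically; the following sketch (a discretization of $[0,1)$ aligned with the dyadic grid, with names of our choosing) checks mean zero, normalization and orthogonality for a few one-dimensional Haar functions:

```python
import numpy as np

N = 2 ** 10                      # resolution of the discretization of [0, 1)
x = (np.arange(N) + 0.5) / N     # cell midpoints

def haar(a, b):
    # L^2-normalized cancellative Haar function h_I over I = [a, b):
    # -1/sqrt(|I|) on the left half I_-, +1/sqrt(|I|) on the right half I_+.
    ind = ((x >= a) & (x < b)).astype(float)
    left = ((x >= a) & (x < (a + b) / 2)).astype(float)
    return (ind - 2 * left) / np.sqrt(b - a)

def ip(f, g):
    # Discretized L^2([0, 1)) pairing (f, g).
    return (f * g).sum() / N

h1 = haar(0.0, 1.0)
h2 = haar(0.0, 0.5)
h3 = haar(0.5, 1.0)
```

Because all interval endpoints here are multiples of $1/N$, the discrete pairings reproduce the continuous ones exactly.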
2.3.3. Haar system on the product space $\mathbb{R} \times \mathbb{R}$. Let $\mathcal{D} = \mathcal{D}^1 \times \mathcal{D}^2$ be any product grid in $\mathbb{R} \times \mathbb{R}$. If $R = I \times J \in \mathcal{D}$, we denote by $h^{\varepsilon_1 \varepsilon_2}_R$ any of the four $L^2$-normalized Haar functions over $R$ defined by
\[
h^{\varepsilon_1 \varepsilon_2}_R := h^{\varepsilon_1}_I \otimes h^{\varepsilon_2}_J, \quad \varepsilon_1, \varepsilon_2 \in \{0, 1\},
\]
that is
\[
h^{\varepsilon_1 \varepsilon_2}_R(t_1, t_2) = h^{\varepsilon_1}_I(t_1)\, h^{\varepsilon_2}_J(t_2), \quad (t_1, t_2) \in \mathbb{R}^2.
\]
For simplicity we denote $h_R := h^{00}_R$. For any function $f \in L^2(\mathbb{R}^2)$, we denote
\[
f^{\varepsilon_1 \varepsilon_2}_R := (f, h^{\varepsilon_1 \varepsilon_2}_R), \quad R \in \mathcal{D},\ \varepsilon_1, \varepsilon_2 \in \{0, 1\},
\]
and we will often use the simplification $f_R := (f, h_R)$. We will also denote by $Q_R$ the projection onto the one-dimensional subspace spanned by $h_R$,
\[
Q_R f := f_R h_R, \quad f \in L^2(\mathbb{R}^2).
\]
From the corresponding one-parameter facts we immediately deduce the expansion
\[
f = \sum_{R \in \mathcal{D}} f_R h_R, \quad \forall f \in L^2(\mathbb{R}^2)
\]
in the $L^2(\mathbb{R}^2)$-sense, and that the system $\{h_R\}_{R \in \mathcal{D}}$ forms an orthonormal basis of $L^2(\mathbb{R}^2)$. For $I \in \mathcal{D}^1$, $J \in \mathcal{D}^2$ we denote by $Q^1_I$, $Q^2_J$ the operators acting on functions $f \in L^2(\mathbb{R}^2)$ by
\[
Q^1_I f(t_1, t_2) = Q_I(f(\cdot, t_2))(t_1), \qquad Q^2_J f(t_1, t_2) = Q_J(f(t_1, \cdot))(t_2).
\]
Thus, if $R = I \times J$ then $Q_R = Q^1_I Q^2_J = Q^2_J Q^1_I$. Moreover, we denote
\[
f_I(t_2) := (f(\cdot, t_2), h_I), \qquad f_J(t_1) := (f(t_1, \cdot), h_J).
\]
Of course, all these notations and facts extend to $\mathbb{C}^d$-valued functions in the obvious coordinate-wise way.

2.3.4. Haar system on the product space $\mathbb{R}^n \times \mathbb{R}^m$. Let $\mathcal{D} = \mathcal{D}^1 \times \mathcal{D}^2$ be any product grid in $\mathbb{R}^n \times \mathbb{R}^m$. Denote
\[
E := \big(\{0,1\}^n \setminus \{(1, \ldots, 1)\}\big) \times \big(\{0,1\}^m \setminus \{(1, \ldots, 1)\}\big),
\]
which we shall call the set of biparameter signatures. It can also be the case that we need to consider the set $\widetilde{E} = \{0,1\}^n \times \{0,1\}^m$ of extended biparameter signatures. Let us remark that the context will always make clear the dimensions of the underlying Euclidean space that we are using. It will also be clear from the context whether we are treating the one-parameter or the biparameter case. Thus, the dimensions of the rectangles in the product grid $\mathcal{D}$ will be clear from the context, as well as whether $E$ and $\widetilde{E}$ refer to one-parameter or biparameter sets of signatures and their dimensions.

If $R = I \times J \in \mathcal{D}$, we denote by $h^\varepsilon_R$, with $\varepsilon = (\varepsilon_1, \varepsilon_2) \in \widetilde{E}$, the $L^2$-normalized Haar function over $R$ defined by $h^\varepsilon_R = h^{\varepsilon_1}_I \otimes h^{\varepsilon_2}_J$, that is
\[
h^\varepsilon_R(t_1, t_2) = h^{\varepsilon_1}_I(t_1)\, h^{\varepsilon_2}_J(t_2), \quad (t_1, t_2) \in \mathbb{R}^n \times \mathbb{R}^m.
\]
When $\varepsilon \in E$, we say that the Haar function $h^\varepsilon_R$, with $R \in \mathcal{D}$, is a cancellative Haar function; otherwise, we say that the Haar function is non-cancellative. For any function $f \in L^2(\mathbb{R}^n \times \mathbb{R}^m)$, we denote $f^\varepsilon_R := (f, h^\varepsilon_R)$, for $R \in \mathcal{D}$ and $\varepsilon \in \widetilde{E}$. As before, we will also denote by $Q^\varepsilon_R$ the projection onto the one-dimensional subspace spanned by $h^\varepsilon_R$, that is
\[
Q^\varepsilon_R f := f^\varepsilon_R h^\varepsilon_R, \quad f \in L^1_{\mathrm{loc}}(\mathbb{R}^n \times \mathbb{R}^m).
\]
Again from the corresponding one-parameter facts, we immediately deduce the expansion
\[
f = \sum_{R \in \mathcal{D}} \sum_{\varepsilon \in E} f^\varepsilon_R h^\varepsilon_R, \quad \forall f \in L^2(\mathbb{R}^n \times \mathbb{R}^m)
\]
in the $L^2$-sense, and that the system $\{h^\varepsilon_R : R \in \mathcal{D},\ \varepsilon \in E\}$ forms an orthonormal basis of $L^2(\mathbb{R}^n \times \mathbb{R}^m)$. For $I \in \mathcal{D}^1$, $J \in \mathcal{D}^2$ and $\varepsilon = (\varepsilon_1, \varepsilon_2) \in \widetilde{E}$, we denote by $Q^{\varepsilon_1,1}_I$, $Q^{\varepsilon_2,2}_J$ the operators acting on functions $f \in L^1_{\mathrm{loc}}(\mathbb{R}^n \times \mathbb{R}^m)$ by
\[
Q^{\varepsilon_1,1}_I f(t_1, t_2) = Q^{\varepsilon_1}_I(f(\cdot, t_2))(t_1), \qquad Q^{\varepsilon_2,2}_J f(t_1, t_2) = Q^{\varepsilon_2}_J(f(t_1, \cdot))(t_2).
\]
Thus, if $R = I \times J$ and $\varepsilon \in \widetilde{E}$, then $Q^\varepsilon_R = Q^{\varepsilon_1,1}_I Q^{\varepsilon_2,2}_J$. Moreover, we denote
\[
f^{\varepsilon_1,1}_I(t_2) := (f(\cdot, t_2), h^{\varepsilon_1}_I), \qquad f^{\varepsilon_2,2}_J(t_1) := (f(t_1, \cdot), h^{\varepsilon_2}_J).
\]
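As a concrete check of the signature bookkeeping, the four Haar functions $h^{\varepsilon_1\varepsilon_2}_R$ over a fixed rectangle $R = I \times J$ form an orthonormal family; the sketch below (a toy discretization of $[0,1)^2$, with names of our choosing) computes their Gram matrix:

```python
import numpy as np

N = 2 ** 6
x = (np.arange(N) + 0.5) / N

def haar1d(a, b, eps):
    # eps = 0: cancellative h^0_I; eps = 1: non-cancellative h^1_I.
    ind = ((x >= a) & (x < b)).astype(float)
    if eps == 1:
        return ind / np.sqrt(b - a)
    left = ((x >= a) & (x < (a + b) / 2)).astype(float)
    return (ind - 2 * left) / np.sqrt(b - a)

def haar2d(I, J, e1, e2):
    # h^{e1 e2}_R = h^{e1}_I tensor h^{e2}_J on R = I x J, as an N x N array.
    return np.outer(haar1d(*I, e1), haar1d(*J, e2))

def ip2(F, G):
    return (F * G).sum() / N ** 2   # discretized L^2([0,1)^2) pairing

R = ((0.0, 0.5), (0.5, 1.0))
fns = [haar2d(*R, e1, e2) for e1 in (0, 1) for e2 in (0, 1)]
gram = np.array([[ip2(F, G) for G in fns] for F in fns])
```

Since the pairings factor through the two coordinates, the Gram matrix is the identity.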
Of course, all these notations and facts extend to $\mathbb{C}^d$-valued functions in the obvious coordinate-wise way.

2.4. Journé operators.
Consider functions $f = f_1 \otimes f_2$ and $g = g_1 \otimes g_2$, where $f_1, g_1 \in C^\infty_c(\mathbb{R}^n)$ and $f_2, g_2 \in C^\infty_c(\mathbb{R}^m)$. We say that a linear operator $T : C^\infty_c(\mathbb{R}^n) \otimes C^\infty_c(\mathbb{R}^m) \to (C^\infty_c(\mathbb{R}^n) \otimes C^\infty_c(\mathbb{R}^m))'$ is a Journé operator if it has certain weak boundedness and cancellation properties and it also has a kernel representation
\[
\langle Tf, g\rangle = \int f(y) K(x, y) g(x)\,\mathrm{d}x\,\mathrm{d}y
\]
whenever $\mathrm{supp}(f_1) \cap \mathrm{supp}(g_1) = \emptyset$ and $\mathrm{supp}(f_2) \cap \mathrm{supp}(g_2) = \emptyset$, where $K$ satisfies some growth and regularity conditions (see [15] or [30] for details). We say that a Journé operator $T$ is paraproduct free if it satisfies
\[
T(\cdot \otimes 1) = T(1 \otimes \cdot) = T^*(\cdot \otimes 1) = T^*(1 \otimes \cdot) = 0.
\]
Martikainen [30] proved a representation theorem for Journé operators in terms of quickly decaying averages of Haar shifts. Namely, let $\omega_n$ parametrize the random dyadic grids $\mathcal{D}_n$ in $\mathbb{R}^n$ and $\omega_m$ parametrize the random dyadic grids $\mathcal{D}_m$ in $\mathbb{R}^m$ (for a precise definition, see [33]). Given $i = (i_1, i_2), j = (j_1, j_2) \in \mathbb{Z}_+^2$, let us denote by $S^{i,j}_{\mathcal{D}_n, \mathcal{D}_m}$ a biparameter Haar shift of complexity $(i, j)$ with respect to the product dyadic grid $\mathcal{D} = \mathcal{D}_n \times \mathcal{D}_m$ on the product space $\mathbb{R}^n \times \mathbb{R}^m$, that is an operator of the form
\[
S^{i,j}_{\mathcal{D}_n, \mathcal{D}_m} f = \sum_{R \in \mathcal{D}} \sum_{P \in \mathrm{ch}_i(R)} \sum_{Q \in \mathrm{ch}_j(R)} a^\varepsilon_{RPQ}\, f^\varepsilon_P\, h^\varepsilon_Q, \quad f \in L^2(\mathbb{R}^{n+m}),
\]
where summation over all possible signatures is implicit and
\[
|a^\varepsilon_{RPQ}| \le \frac{\sqrt{|P||Q|}}{|R|} = 2^{-(i_1+j_1)n/2}\, 2^{-(i_2+j_2)m/2}.
\]
Observe that it can either happen that all non-zero coefficients $a^\varepsilon_{RPQ}$ correspond to cancellative Haar functions, in which case the shift is called cancellative, or that there appear non-cancellative terms in the sum, in which case it is called a non-cancellative shift. Martikainen's representation theorem [30] states that for a Journé operator $T$ there exist biparameter Haar shifts $S^{i,j}_{\mathcal{D}_n, \mathcal{D}_m}$ such that
\[
(2.1)\qquad \langle Tf, g\rangle = C_T\, \mathbb{E}_{\omega_n} \mathbb{E}_{\omega_m} \sum_{i, j \in \mathbb{Z}_+^2} 2^{-\max(i_1, i_2)\delta/2}\, 2^{-\max(j_1, j_2)\delta/2}\, \langle S^{i,j}_{\mathcal{D}_n, \mathcal{D}_m} f, g\rangle,
\]
with expectation taken with respect to the standard probability measure on the space of random dyadic grids. Moreover, non-cancellative Haar shifts can only appear for $i = (0,0)$ or $j = (0,0)$. In the particular case of paraproduct free Journé operators, it follows from Martikainen's proof [30] that all Haar shifts in this expression will be of cancellative type.
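The normalization of the coefficients is what makes Haar shifts uniformly bounded on unweighted $L^2$. The one-parameter toy model below (our own discretized implementation on $[0,1)$, with random signs for the coefficients $a_{RPQ}$; it is not the biparameter shift of the representation theorem) illustrates that a cancellative shift of complexity $(i,j)$ with $|a_{RPQ}| \le \sqrt{|P||Q|}/|R| = 2^{-(i+j)/2}$ is an $L^2$ contraction: the blocks associated to different $R$ have mutually orthogonal input and output spans, and each block has Hilbert–Schmidt norm at most 1.

```python
import numpy as np

N = 2 ** 8
x = (np.arange(N) + 0.5) / N
rng = np.random.default_rng(0)

def haar(a, b):
    ind = ((x >= a) & (x < b)).astype(float)
    left = ((x >= a) & (x < (a + b) / 2)).astype(float)
    return (ind - 2 * left) / np.sqrt(b - a)

def ip(f, g):
    return (f * g).sum() / N

def haar_shift(f, i, j, kmax=4):
    # One-parameter cancellative Haar shift of complexity (i, j) on [0, 1):
    #   S f = sum_R sum_{P in ch_i(R)} sum_{Q in ch_j(R)} a_{RPQ} (f, h_P) h_Q,
    # with |a_{RPQ}| = sqrt(|P||Q|)/|R| = 2^{-(i+j)/2} and random signs.
    # Only generations 0 <= k < kmax of R are used.
    out = np.zeros_like(f)
    for k in range(kmax):
        for m in range(2 ** k):
            a, b = m / 2 ** k, (m + 1) / 2 ** k
            ch = lambda c: [(a + t * (b - a) / 2 ** c,
                             a + (t + 1) * (b - a) / 2 ** c)
                            for t in range(2 ** c)]
            for P in ch(i):
                cP = ip(f, haar(*P))
                for Q in ch(j):
                    sign = rng.choice((-1.0, 1.0))
                    out += sign * 2 ** (-(i + j) / 2) * cP * haar(*Q)
    return out

f = rng.standard_normal(N)
norm = lambda g: np.sqrt(ip(g, g))
Sf = haar_shift(f, 1, 1)
```

Note also that a cancellative shift annihilates constants, since every Haar coefficient of a constant vanishes.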
Biparameter matrix $A_p$ weights

In what follows, we denote by $\{e_1, \ldots, e_d\}$ the standard basis of $\mathbb{C}^d$. We will often be using the fact that
\[
(3.1)\qquad |A| \sim_d \sum_{k=1}^d |Ae_k|, \quad \forall A \in M_d(\mathbb{C}).
\]
In particular, if
$A, B, C, D \in M_d(\mathbb{C})$ are self-adjoint matrices such that $|Ae| \sim_d |Ce|$ and $|Be| \sim_d |De|$, for all $e \in \mathbb{C}^d$, then
\[
(3.2)\qquad |AB| \sim_d \sum_{k=1}^d |ABe_k| \sim_d \sum_{k=1}^d |CBe_k| \sim_d |CB| = |(CB)^*| = |BC| \sim_d \sum_{k=1}^d |BCe_k| \sim_d \sum_{k=1}^d |DCe_k| \sim_d |DC|.
\]

3.1. Matrix-weighted Lebesgue spaces.
A function $W$ on $\mathbb{R}^n$ is said to be a $d \times d$-matrix valued weight if it is a locally integrable $M_d(\mathbb{C})$-valued function such that $W(x)$ is a positive-definite matrix for a.e. $x \in \mathbb{R}^n$.

Given a $d \times d$-matrix valued weight $W$ on $\mathbb{R}^n$ and $1 < p < \infty$, we define
\[
\|f\|_{L^p(W)} := \left(\int_{\mathbb{R}^n} |W(x)^{1/p} f(x)|^p\,\mathrm{d}x\right)^{1/p},
\]
for all $\mathbb{C}^d$-valued measurable functions $f$ on $\mathbb{R}^n$. Assume that the (a.e. defined) function $W' := W^{-p'/p} = W^{-1/(p-1)}$, where $p' := p/(p-1)$, is also a $d \times d$-matrix valued weight on $\mathbb{R}^n$. Then, we notice that for all (suitable) $\mathbb{C}^d$-valued functions $f$ on $\mathbb{R}^n$, we have
\begin{align*}
\|f\|_{L^p(W)} &= \sup\left\{\left|\int_{\mathbb{R}^n} \langle W(x)^{1/p} f(x), g(x)\rangle\,\mathrm{d}x\right| : \|g\|_{L^{p'}(\mathbb{R}^n; \mathbb{C}^d)} = 1\right\} \\
&= \sup\left\{\left|\int_{\mathbb{R}^n} \langle f(x), h(x)\rangle\,\mathrm{d}x\right| : \|W^{-1/p} h\|_{L^{p'}(\mathbb{R}^n; \mathbb{C}^d)} = 1\right\} \\
&= \sup\left\{\left|\int_{\mathbb{R}^n} \langle f(x), h(x)\rangle\,\mathrm{d}x\right| : \|h\|_{L^{p'}(W')} = 1\right\}.
\end{align*}
Therefore, under the standard (unweighted) $L^2(\mathbb{R}^n; \mathbb{C}^d)$-pairing, we have $(L^p(W))^* = L^{p'}(W')$.

3.2. Reducing matrices.
In this subsection we recall a few facts about reducing matrices that are important for our purposes. They can all be found in [10], [23], [25], [27], among others. We refer the reader to these references and the references therein for more details and additional properties of reducing matrices. The following lemma is trivial, but we state and prove it precisely for the sake of clarity.
Lemma 3.1.
Let $H$ be a Hilbert space. Let $T, S : H \to H$ be two invertible bounded linear operators. Assume that $\|T(h)\|_H \le \|S(h)\|_H$, for all $h \in H$. Then $\|(S^*)^{-1}(h)\|_H \le \|(T^*)^{-1}(h)\|_H$, for all $h \in H$.

Proof. This is immediate from the functional calculus for bounded linear operators on Hilbert spaces. Nevertheless, we give a direct argument. For all $h \in H$ and for all $g \in H$, we have
\[
|\langle h, g\rangle_H| = |\langle T^*(T^*)^{-1}(h), g\rangle_H| = |\langle (T^*)^{-1}(h), T(g)\rangle_H| \le \|(T^*)^{-1}(h)\|_H \cdot \|T(g)\|_H \le \|(T^*)^{-1}(h)\|_H \cdot \|S(g)\|_H,
\]
therefore
\[
|\langle (S^*)^{-1}(h), g\rangle_H| = |\langle h, S^{-1}(g)\rangle_H| \le \|(T^*)^{-1}(h)\|_H \cdot \|g\|_H, \quad \forall h, g \in H,
\]
which immediately implies $\|(S^*)^{-1}(h)\|_H \le \|(T^*)^{-1}(h)\|_H$, for all $h \in H$. □

Let $1 < p < \infty$. Let $E$ be a bounded measurable subset of $\mathbb{R}^n$ of nonzero measure. In what follows, $E$ will always be considered to be equipped with normalized Lebesgue measure. Let $W$ be an $M_d(\mathbb{C})$-valued function on $E$ that is integrable over $E$ (meaning that $\fint_E |W(x)|\,\mathrm{d}x < \infty$) and that takes a.e. values in the set of positive-definite $d \times d$-matrices. Notice that by the spectral theorem we have $|W(x)^{1/p}|^p = |W(x)|$, for a.e. $x \in E$, therefore $W^{1/p}$ is $p$-integrable over $E$. It is proved in [10, Proposition 1.2] (in the generality of abstract norms on $\mathbb{C}^d$), as an application of the standard properties of John ellipsoids, that there exists a (not necessarily unique) positive-definite matrix $W_E \in M_d(\mathbb{C})$ such that
\[
(3.3)\qquad \left(\fint_E |W(x)^{1/p} e|^p\,\mathrm{d}x\right)^{1/p} \le |W_E e| \le \sqrt{d} \left(\fint_E |W(x)^{1/p} e|^p\,\mathrm{d}x\right)^{1/p}, \quad \forall e \in \mathbb{C}^d.
\]
We emphasize that normalized Lebesgue measure over $E$ is used in (3.3), and that the constants in the inequalities in (3.3) do not depend on $W$, the set $E$ or $p$: all this information has been absorbed in $W_E$. The matrix $W_E$ is called a reducing matrix (or reducing operator) of $W$ over $E$ corresponding to the exponent $p$. If $d = 1$, i.e. $W$ is scalar-valued, then one can clearly take (and we will always be taking) $W_E := ((W)_E)^{1/p}$. Moreover, if $p = 2$, then
\[
\fint_E |W(x)^{1/2} e|^2\,\mathrm{d}x = \fint_E \langle W(x) e, e\rangle\,\mathrm{d}x = \langle (W)_E e, e\rangle = |((W)_E)^{1/2} e|^2, \quad \forall e \in \mathbb{C}^d,
\]
and thus in this special case one can take (and we will always be taking) $W_E := ((W)_E)^{1/2}$.

We note that for any $A \in M_d(\mathbb{C})$, using (3.1) we get
\[
(3.4)\qquad \left(\fint_E |W(x)^{1/p} A|^p\,\mathrm{d}x\right)^{1/p} \sim_{d,p} \sum_{k=1}^d \left(\fint_E |W(x)^{1/p} A e_k|^p\,\mathrm{d}x\right)^{1/p} \sim_d \sum_{k=1}^d |W_E A e_k| \sim_d |W_E A|.
\]
If the function $W' := W^{-1/(p-1)}$ also happens to be integrable over $E$, then we let $W'_E$ be the reducing matrix of $W'$ over $E$ corresponding to the exponent $p' := p/(p-1)$, so that
\[
|W'_E e| \sim_d \left(\fint_E |W'(x)^{1/p'} e|^{p'}\,\mathrm{d}x\right)^{1/p'} = \left(\fint_E |W(x)^{-1/p} e|^{p'}\,\mathrm{d}x\right)^{1/p'}, \quad \forall e \in \mathbb{C}^d,
\]
and moreover, for all measurable $\mathbb{C}^d$-valued functions $f$ on $E$ we have, by Hölder's inequality,
\begin{align*}
\fint_E |f(x)|\,\mathrm{d}x &\le \fint_E |W(x)^{-1/p}| \cdot |W(x)^{1/p} f(x)|\,\mathrm{d}x \le \left(\fint_E |W(x)^{-1/p}|^{p'}\,\mathrm{d}x\right)^{1/p'} \left(\fint_E |W(x)^{1/p} f(x)|^p\,\mathrm{d}x\right)^{1/p} \\
&= \left(\fint_E |W'(x)|\,\mathrm{d}x\right)^{1/p'} \left(\fint_E |W(x)^{1/p} f(x)|^p\,\mathrm{d}x\right)^{1/p},
\end{align*}
therefore if $\|f\|_{L^p(W)} := \left(\fint_E |W(x)^{1/p} f(x)|^p\,\mathrm{d}x\right)^{1/p} < \infty$, then $f \in L^1(E; \mathbb{C}^d)$.

The following lemma is stated and proved for $p = 2$ in [38] and for general $1 < p < \infty$ in [10, Proposition 2.1] (adapting the ideas from [38, Lemma 2.1]). For the sake of completeness and clarity we give the full statement and proof here. Let $A_E$ be the averaging operator over $E$, i.e. $A_E$ acts on functions $f \in L^1(E; \mathbb{C}^d)$ by $A_E f := (f)_E \mathbf{1}_E$.
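For $p = 2$ the reducing matrix is completely explicit, and (3.3) can then be checked numerically; the sketch below (a discrete toy weight, with normalized counting measure standing in for normalized Lebesgue measure; all names are ours) verifies the exact identity $\fint_E |W(x)^{1/2} e|^2\,\mathrm{d}x = |W_E e|^2$:

```python
import numpy as np

rng = np.random.default_rng(7)
n, d = 50, 3

# A toy matrix weight on a finite set E = {x_1, ..., x_n}: each value is a
# random positive-definite d x d matrix.
Ws = np.array([(lambda B: B @ B.T + 0.1 * np.eye(d))(rng.standard_normal((d, d)))
               for _ in range(n)])

def psd_sqrt(M):
    # Positive square root of a symmetric positive-definite matrix.
    lam, U = np.linalg.eigh(M)
    return U @ np.diag(np.sqrt(lam)) @ U.T

# For p = 2 one can take W_E := ((W)_E)^{1/2}, the square root of the
# average; then avg |W(x)^{1/2} e|^2 = <(W)_E e, e> = |W_E e|^2 for every e.
W_E = psd_sqrt(Ws.mean(axis=0))

e = rng.standard_normal(d)
lhs = np.sqrt(np.mean([e @ W @ e for W in Ws]))   # (avg |W^{1/2} e|^2)^{1/2}
rhs = np.linalg.norm(W_E @ e)                     # |W_E e|
```

For general $p$ no such closed form is available, which is why (3.3) only asserts existence up to the factor $\sqrt{d}$.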
Set
\[
C_E := \fint_E \left(\fint_E |W(x)^{1/p} W'(y)^{1/p'}|^{p'}\,\mathrm{d}y\right)^{p/p'}\mathrm{d}x.
\]
Then, $A_E$ is bounded as an operator acting from $L^p(W)$ into $L^p(W)$ if and only if $C_E < \infty$, and in this case $W'$ is integrable over $E$ and
\[
\|A_E\|_{L^p(W) \to L^p(W)} \sim_{p,d} |W'_E W_E| \sim_{p,d} C_E^{1/p}.
\]

Proof.
First of all, note that if $C_E < \infty$, then there exists $x_0 \in E$ such that $W(x_0)$ is invertible and $\fint_E |W(x_0)^{1/p} W'(y)^{1/p'}|^{p'}\,\mathrm{d}y < \infty$, so it follows by the spectral theorem that
\[
\fint_E |W'(y)|\,\mathrm{d}y = \fint_E |W'(y)^{1/p'}|^{p'}\,\mathrm{d}y \le |W(x_0)^{-1/p}|^{p'} \fint_E |W(x_0)^{1/p} W'(y)^{1/p'}|^{p'}\,\mathrm{d}y < \infty,
\]
therefore $W'$ is integrable over $E$.

Assume now that $A_E$ is bounded from $L^p(W)$ into $L^p(W)$. For all $\varepsilon > 0$, set
\[
W_\varepsilon(x) := (W(x)^{1/p} + \varepsilon I_d)^p, \quad \text{for a.e. } x \in E.
\]
By the spectral theorem we deduce that for all $\varepsilon > 0$ the functions $W_\varepsilon$, $W'_\varepsilon$ are integrable over $E$ (in fact $W'_\varepsilon$ is bounded) and
\[
W_\varepsilon(x)^{2/p} = (W(x)^{1/p} + \varepsilon I_d)^2 \ge W(x)^{2/p} + \varepsilon^2 I_d, \quad \text{for a.e. } x \in E.
\]
In particular, for all $\varepsilon > 0$, for a.e. $x \in E$, we have
\[
|W(x)^{1/p} e|^2 = \langle W(x)^{2/p} e, e\rangle \le \langle W_\varepsilon(x)^{2/p} e, e\rangle = |W_\varepsilon(x)^{1/p} e|^2, \quad \forall e \in \mathbb{C}^d,
\]
therefore
\[
\fint_E |W(x)^{1/p} e|^p\,\mathrm{d}x \le \fint_E |W_\varepsilon(x)^{1/p} e|^p\,\mathrm{d}x, \quad \forall e \in \mathbb{C}^d,
\]
which implies by the definition of reducing matrices that $|W_E e| \lesssim_{p,d} |(W_\varepsilon)_E e|$, for all $e \in \mathbb{C}^d$, where $(W_\varepsilon)_E$ is the reducing matrix of $W_\varepsilon$ over $E$ corresponding to the exponent $p$, and therefore by Lemma 3.1 we deduce $|W_E^{-1} e| \gtrsim_{p,d} |(W_\varepsilon)_E^{-1} e|$, for all $e \in \mathbb{C}^d$.

Set $D := \|A_E\|_{L^p(W) \to L^p(W)} < \infty$. Then, for all $\varepsilon > 0$ and all $f \in L^p(W_\varepsilon)$, we have
\begin{align*}
\|A_E(f)\|^p_{L^p(W_\varepsilon)} &= \fint_E |(W(x)^{1/p} + \varepsilon I_d)(f)_E|^p\,\mathrm{d}x \lesssim_p \|A_E(f)\|^p_{L^p(W)} + \varepsilon^p \|A_E(f)\|^p_{L^p(E; \mathbb{C}^d)} \\
&\le D^p \fint_E |W(x)^{1/p} f(x)|^p\,\mathrm{d}x + \varepsilon^p \fint_E |f(x)|^p\,\mathrm{d}x \le (D^p + 1) \fint_E |W_\varepsilon(x)^{1/p} f(x)|^p\,\mathrm{d}x = (D^p + 1)\|f\|^p_{L^p(W_\varepsilon)},
\end{align*}
which shows that $\|A_E\|_{L^p(W_\varepsilon) \to L^p(W_\varepsilon)} \lesssim_p (D^p + 1)^{1/p}$. Then, for all $\varepsilon > 0$,
\begin{align*}
\|A_E\|_{L^p(W_\varepsilon) \to L^p(W_\varepsilon)} &= \sup_{\|f\|_{L^p(W_\varepsilon)} = 1} \left(\fint_E |W_\varepsilon(x)^{1/p}(f)_E|^p\,\mathrm{d}x\right)^{1/p} \sim_{p,d} \sup_{\|f\|_{L^p(W_\varepsilon)} = 1} |(W_\varepsilon)_E (f)_E| \\
&= \sup_{\|f\|_{L^p(W_\varepsilon)} = 1} \sup_{e \in \mathbb{C}^d \setminus \{0\}} \frac{|\langle (W_\varepsilon)_E (f)_E, e\rangle|}{|e|} = \sup_{\|f\|_{L^p(W_\varepsilon)} = 1} \sup_{e \in \mathbb{C}^d \setminus \{0\}} \frac{|\langle (f)_E, e\rangle|}{|(W_\varepsilon)_E^{-1} e|} \\
&= \sup_{e \in \mathbb{C}^d \setminus \{0\}} \sup_{\|f\|_{L^p(W_\varepsilon)} = 1} \frac{\left|\fint_E \langle f(x), e\rangle\,\mathrm{d}x\right|}{|(W_\varepsilon)_E^{-1} e|} = \sup_{e \in \mathbb{C}^d \setminus \{0\}} \sup_{\|f\|_{L^p(E; \mathbb{C}^d)} = 1} \frac{\left|\fint_E \langle W_\varepsilon(x)^{-1/p} f(x), e\rangle\,\mathrm{d}x\right|}{|(W_\varepsilon)_E^{-1} e|} \\
&= \sup_{e \in \mathbb{C}^d \setminus \{0\}} \frac{\|W_\varepsilon^{-1/p} e\|_{L^{p'}(E; \mathbb{C}^d)}}{|(W_\varepsilon)_E^{-1} e|}.
\end{align*}
It follows that for all $\varepsilon > 0$ we have
\[
\fint_E |W_\varepsilon(x)^{-1/p} e|^{p'}\,\mathrm{d}x \lesssim_{p,d} (D^p + 1)^{p'/p} |(W_\varepsilon)_E^{-1} e|^{p'} \lesssim_{p,d} (D^p + 1)^{p'/p} |W_E^{-1} e|^{p'}, \quad \forall e \in \mathbb{C}^d,
\]
therefore applying (3.1) we get
\[
\fint_E |W'_\varepsilon(x)|\,\mathrm{d}x = \fint_E |W_\varepsilon(x)^{-1/p}|^{p'}\,\mathrm{d}x \lesssim_{p,d} (D^p + 1)^{p'/p} |W_E^{-1}|^{p'},
\]
thus letting $\varepsilon \to 0^+$ and using Fatou's lemma we deduce that $W'$ is integrable over $E$.

Thus, both conditions $\|A_E\|_{L^p(W) \to L^p(W)} < \infty$ and $C_E < \infty$ imply separately that $W'$ is integrable over $E$, so we may assume this in what follows. Then, the previous computations are all valid with $W_\varepsilon$ replaced by $W$, and we can continue them to get
\[
\|A_E\|_{L^p(W) \to L^p(W)} \sim_{p,d} \sup_{e \in \mathbb{C}^d \setminus \{0\}} \frac{\|W^{-1/p} e\|_{L^{p'}(E; \mathbb{C}^d)}}{|W_E^{-1} e|} \sim_{p,d} \sup_{e \in \mathbb{C}^d \setminus \{0\}} \frac{|W'_E e|}{|W_E^{-1} e|} = |W'_E W_E|.
\]
Moreover, using (3.4) repeatedly, we observe that
\begin{align*}
C_E^{1/p} &= \left(\fint_E \left(\fint_E |W(x)^{1/p} W(y)^{-1/p}|^{p'}\,\mathrm{d}y\right)^{p/p'}\mathrm{d}x\right)^{1/p} = \left(\fint_E \left(\fint_E |W(y)^{-1/p} W(x)^{1/p}|^{p'}\,\mathrm{d}y\right)^{p/p'}\mathrm{d}x\right)^{1/p} \\
&\sim_{p,d} \left(\fint_E |W'_E W(x)^{1/p}|^p\,\mathrm{d}x\right)^{1/p} = \left(\fint_E |W(x)^{1/p} W'_E|^p\,\mathrm{d}x\right)^{1/p} \sim_{d,p} |W_E W'_E| = |W'_E W_E| \\
&\sim_{d,p} \left(\fint_E \left(\fint_E |W(x)^{-1/p} W(y)^{1/p}|^p\,\mathrm{d}y\right)^{p'/p}\mathrm{d}x\right)^{1/p'},
\end{align*}
concluding the proof. □

Let us observe that, more generally, if $V$ is another integrable $M_d(\mathbb{C})$-valued function on $E$ that takes a.e.
values in the set of positive definite d × d -matrices, then denotingby V E the reducing matrix of V over E corresponding to the exponent p ′ and using(3.4) again, we obtain ( ⨏ E ( ⨏ E ∣ W ( x ) / p V ( y ) / p ′ ∣ p ′ d y ) p / p ′ d x ) / p = ( ⨏ E ( ⨏ E ∣ V ( y ) / p ′ W ( x ) / p ∣ p ′ d y ) p / p ′ d x ) / p ∼ p,d ( ⨏ E ∣ V E W ( x ) / p ∣ p d x ) / p = ( ⨏ E ∣ W ( x ) / p V E ∣ p d x ) / p ∼ ∣ W E V E ∣ = ∣ V E W E ∣ ∼ d,p ( ⨏ E ( ⨏ E ∣ V ( x ) / p ′ W ( y ) / p ∣ p d y ) p ′ / p d x ) / p ′ , In the rest of this subsection, we assume that C E ∶ = ⨏ E ( ⨏ E ∣ W ( x ) / p W ′ ( y ) / p ′ ∣ p ′ d y ) p / p ′ d x < ∞ . Note that it is clear by the definitions that we can always choose W ′′ E = W E . Lemma 3.3.
Consider the scalar-valued function $u := |W^{-1/p} W_E|^{p'} = |W_E W^{-1/p}|^{p'}$. Then
\[
\Big( \fint_E u(x)\,\mathrm{d}x\Big) \Big( \fint_E u(x)^{-1/(p'-1)}\,\mathrm{d}x\Big)^{p'-1} \lesssim_{p,d} C_E^{p'/p}.
\]
Proof. We have
\[
\fint_E u(x)\,\mathrm{d}x \sim_{p,d} |W'_E W_E|^{p'} \sim_{p,d} C_E^{p'/p}.
\]
Moreover, since for any invertible $A \in M_d(\mathbb{C})$ we have $|A|^{-1} \le |A^{-1}|$, we obtain
\[
\fint_E u(x)^{-1/(p'-1)}\,\mathrm{d}x = \fint_E |W_E W(x)^{-1/p}|^{-p}\,\mathrm{d}x \le \fint_E |(W_E W(x)^{-1/p})^{-1}|^{p}\,\mathrm{d}x = \fint_E |W(x)^{1/p} W_E^{-1}|^{p}\,\mathrm{d}x \sim_{p,d} 1,
\]
where we used (3.4) in the last step, concluding the proof. $\square$

Lemma 3.4.
(1) Assume that $d = 1$. Denote $w := W$. Then $(w)_E^{1/p} \le C_E^{1/p} (w^{1/p})_E$.
(2) Let $e \in \mathbb{C}^d \setminus \{0\}$ be arbitrary. Set $w_e := |W^{1/p} e|^p$. Then
\[
\Big( \fint_E w_e(x)\,\mathrm{d}x\Big) \Big( \fint_E w_e^{-1/(p-1)}(x)\,\mathrm{d}x\Big)^{p-1} \lesssim_{p,d} C_E.
\]
(3) For all $e \in \mathbb{C}^d$, there holds $|W_E e| \lesssim_{p,d} C_E^{1/p} (|W^{1/p} e|)_E$.

Proof. (1) Note that if $d = 1$, then $C_E = (w)_E \big( (w^{-1/(p-1)})_E\big)^{p-1}$, thus
\[
(w)_E^{1/p} = C_E^{1/p} \big( (w^{-1/(p-1)})_E\big)^{-(p-1)/p} \le C_E^{1/p} \big( (w^{-1/(p-1)})^{-(p-1)/p}\big)_E = C_E^{1/p} (w^{1/p})_E,
\]
where in the second step we used Jensen's inequality for the convex power $-(p-1)/p < 0$.
(2) Note first that $w_e$ is integrable over $E$. Moreover, we have $|e| \le |W^{-1/p}|\cdot|W^{1/p} e|$, i.e. $|W^{1/p} e|^{-1} \le |W^{-1/p}|\cdot|e|^{-1}$, therefore
\[
w'_e := w_e^{-1/(p-1)} = |W^{1/p} e|^{-p/(p-1)} \le |W^{-1/p}|^{p/(p-1)} |e|^{-p/(p-1)} = |W'|\cdot|e|^{-p/(p-1)},
\]
therefore $w'_e \in L^1(E)$ as well. Abusing notation, we denote by $A_E$ the averaging operator over $E$ acting on scalar-valued functions $f \in L^1(E)$, explicitly $A_E f := (f)_E \mathbf{1}_E$. We note that since $w_e$ is scalar-valued, the weighted norm $L^p(w_e)$ simply becomes
\[
\|f\|_{L^p(w_e)} = \Big( \fint_E |f(x)|^p w_e(x)\,\mathrm{d}x\Big)^{1/p} = \Big( \fint_E |W(x)^{1/p} f(x)\, e|^p\,\mathrm{d}x\Big)^{1/p} = \|fe\|_{L^p(W)},
\]
for scalar-valued functions $f$ on $E$. Thus, for all $f \in L^p(w_e)$, we have
\[
\|A_E f\|^p_{L^p(w_e)} = \fint_E |W(x)^{1/p} (f)_E\, e|^p\,\mathrm{d}x = \|A_E(fe)\|^p_{L^p(W)} \lesssim_{d,p} C_E \|fe\|^p_{L^p(W)} = C_E \|f\|^p_{L^p(w_e)}.
\]
The desired conclusion follows immediately from Lemma 3.2.
(3) Immediate by combining (1) and (2) with definition (3.3). $\square$
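As a sanity check on Lemma 3.4 (1), the following computation verifies the inequality $(w)_E^{1/p} \le C_E^{1/p} (w^{1/p})_E$ in a concrete scalar case; the power weight used here is an illustrative choice of ours, not taken from the text.

```latex
% Scalar sanity check for Lemma 3.4 (1): d = 1, p = 2, E = (0,1), w(t) = t^{1/2}.
% Here C_E = (w)_E (w^{-1/(p-1)})_E^{p-1} = (w)_E (w^{-1})_E.
\[
  (w)_E = \int_0^1 t^{1/2}\,\mathrm{d}t = \tfrac23, \qquad
  (w^{-1})_E = \int_0^1 t^{-1/2}\,\mathrm{d}t = 2, \qquad
  C_E = \tfrac23 \cdot 2 = \tfrac43.
\]
% The two sides of the inequality (w)_E^{1/2} <= C_E^{1/2} (w^{1/2})_E:
\[
  (w)_E^{1/2} = \sqrt{\tfrac23} \approx 0.816, \qquad
  C_E^{1/2}\,(w^{1/2})_E = \sqrt{\tfrac43}\int_0^1 t^{1/4}\,\mathrm{d}t
  = \sqrt{\tfrac43}\cdot\tfrac45 \approx 0.924,
\]
% so the inequality holds, the gap being accounted for by Jensen's inequality.
```

The same computation with $w \equiv 1$ gives equality on both sides, consistent with $C_E = 1$ for constant weights.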
Note that for all $e \in \mathbb{C}^d$ we have, by the definition (3.3) of reducing matrices and Jensen's inequality,
\[
(3.5) \qquad |W_E e| \ge \big( |W^{1/p} e|^p\big)_E^{1/p} \ge \big( |W^{1/p} e|\big)_E \ge |(W^{1/p})_E\, e|.
\]

Lemma 3.5.
There holds
\[
|W_E^{-1} e| \le |W'_E e| \lesssim_{p,d} C_E^{1/p} |W_E^{-1} e|, \qquad \forall e \in \mathbb{C}^d.
\]
Moreover
\[
|(W'_E)^{-1} e| \le |W_E e|, \qquad \forall e \in \mathbb{C}^d.
\]
Proof. Let $e \in \mathbb{C}^d \setminus \{0\}$ be arbitrary. Set $f := W^{1/p} W_E^{-1} e / |e|$. Then $\|f\|_{L^p(E;\mathbb{C}^d)} \le 1$, thus using the definition (3.3) of reducing matrices in the first step and Hölder's inequality in the second step we obtain
\[
|W'_E W_E e| \ge \Big( \fint_E |W(x)^{-1/p} W_E e|^{p'}\,\mathrm{d}x\Big)^{1/p'} \ge \Big| \fint_E \langle W(x)^{-1/p} W_E e, f(x)\rangle\,\mathrm{d}x\Big| = |e|,
\]
so $|W'_E W_E e| \ge |e|$. This implies directly $|W_E^{-1} e| \le |W'_E e|$, for all $e \in \mathbb{C}^d$. Moreover, we have
\[
|W'_E e| \le |W'_E W_E|\cdot|W_E^{-1} e| \lesssim_{d,p} C_E^{1/p} |W_E^{-1} e|, \qquad \forall e \in \mathbb{C}^d,
\]
concluding the proof of the first estimate. The second estimate follows from the first coupled with Lemma 3.1. $\square$

Notice that Lemma 3.5 implies in particular that
\[
(3.6) \qquad |W'_E W_E| \ge 1.
\]
3.2. “Iterating” reducing operators. Let
$E, F$ be measurable subsets of $\mathbb{R}^n$, $\mathbb{R}^m$ respectively with $0 < |E|, |F| < \infty$. Let $1 < p < \infty$. Let $W$ be an $M_d(\mathbb{C})$-valued integrable function on $E \times F$ taking a.e. values in the set of positive-definite $d\times d$-matrices. For all $x_1 \in E$, set $W_{x_1}(x_2) := W(x_1,x_2)$, $x_2 \in F$. For a.e. $x_1 \in E$, denote by $W_{x_1,F}$ the reducing operator of $W_{x_1}$ over $F$ with respect to the exponent $p$. If $p = 2$, then
\[
W_{x_1,F} := \big( (W(x_1,\cdot))_F\big)^{1/2},
\]
and therefore the function $W_{x_1,F}$ is clearly measurable in $x_1$. In fact, it is not hard to see that for any $1 < p < \infty$ one can choose the reducing operator $W_{x_1,F}$ in a way that is measurable in $x_1$. We supply the details in the appendix. Set $W_F(x_1) := (W_{x_1,F})^p$, for a.e. $x_1 \in E$. Then $W_F \in L^1(E; M_d(\mathbb{C}))$, since
\[
\fint_E |W_F(x_1)^{1/p} e|^p\,\mathrm{d}x_1 = \fint_E |W_{x_1,F}\, e|^p\,\mathrm{d}x_1 \sim_{p,d} \fint_E \Big( \fint_F |W(x_1,x_2)^{1/p} e|^p\,\mathrm{d}x_2\Big)\mathrm{d}x_1 = \fint_{E\times F} |W(x_1,x_2)^{1/p} e|^p\,\mathrm{d}x < \infty,
\]
for all $e \in \mathbb{C}^d$. In particular, this computation shows that
\[
(3.7) \qquad |W_{F,E}\, e| \sim_{p,d} |W_{E\times F}\, e|, \qquad \forall e \in \mathbb{C}^d,
\]
where $W_{F,E}$ is the reducing operator of $W_F$ over $E$ with respect to the exponent $p$, and $W_{E\times F}$ is the reducing operator of $W$ over $E\times F$ with respect to the exponent $p$. Notice now that
\[
W'_F(x_1) := (W_F(x_1))^{-1/(p-1)} = W_{x_1,F}^{-p'}, \qquad \text{for a.e. } x_1 \in E.
\]
Thus, for all $e \in \mathbb{C}^d$, applying Lemma 3.5 in the second step below we have
\[
\fint_E |W'_F(x_1)^{1/p'} e|^{p'}\,\mathrm{d}x_1 = \fint_E |W_{x_1,F}^{-1} e|^{p'}\,\mathrm{d}x_1 \le \fint_E |W'_{x_1,F}\, e|^{p'}\,\mathrm{d}x_1 \sim_{p,d} \fint_E \Big( \fint_F |W'_{x_1}(x_2)^{1/p'} e|^{p'}\,\mathrm{d}x_2\Big)\mathrm{d}x_1 = \fint_{E\times F} |W'(x_1,x_2)^{1/p'} e|^{p'}\,\mathrm{d}x < \infty,
\]
where $W'_{x_1} := (W_{x_1})^{-1/(p-1)}$, and $W'_{x_1,F}$ is the reducing operator of $W'_{x_1}$ over $F$ with respect to the exponent $p'$, for a.e. $x_1 \in E$. Thus $W'_F \in L^1(E; M_d(\mathbb{C}))$, and in particular the last computation shows that
\[
(3.8) \qquad |W'_{F,E}\, e| \lesssim_{p,d} |W'_{E\times F}\, e|, \qquad \forall e \in \mathbb{C}^d,
\]
where $W'_{F,E}$ is the reducing operator of $W'_F$ over $E$ with respect to the exponent $p'$, and $W'_{E\times F}$ is the reducing operator of $W' := W^{-1/(p-1)}$ over $E\times F$ with respect to the exponent $p'$. Combining (3.7) and (3.8) with (3.2), we deduce
\[
(3.9) \qquad |W'_{F,E} W_{F,E}| \lesssim_{p,d} |W'_{E\times F} W_{E\times F}|.
\]
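In the special case $p = 2$, the comparability (3.7) can be checked by hand and is in fact an identity; the following short derivation (our own illustration, using the $p = 2$ formula for reducing operators quoted above) makes this explicit.

```latex
% For p = 2 the reducing operator over a set is the square root of the average:
% W_{x_1,F} = ((W(x_1,.))_F)^{1/2}, so W_F(x_1) = (W_{x_1,F})^2 = (W(x_1,.))_F.
\[
  W_{F,E} = \big( (W_F)_E \big)^{1/2}
          = \Big( \fint_E \fint_F W(x_1,x_2)\,\mathrm{d}x_2\,\mathrm{d}x_1 \Big)^{1/2}
          = \big( (W)_{E\times F} \big)^{1/2}
          = W_{E\times F},
\]
% by Fubini; thus for p = 2 the two sides of (3.7) agree exactly, while for
% general p one only retains comparability up to constants depending on p, d.
```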
Of course, analogous facts hold if one “iterates” the operation of taking reducing operators in the reverse order. These observations will be crucial for handling the “mixed” operators of Section 8, which are given as iterations of one-parameter operators, each acting in a different direction.

3.3.
One-parameter matrix $A_p$ weights. Let $W$ be a $d\times d$-matrix valued weight on $\mathbb{R}^n$, i.e. $W$ is a locally integrable $M_d(\mathbb{C})$-valued function on $\mathbb{R}^n$ that takes a.e. values in the set of positive-definite $d\times d$-matrices. We say that $W$ is a $d\times d$-matrix valued $A_p$ weight if
\[
[W]_{A_p} := \sup_Q \fint_Q \Big( \fint_Q |W(x)^{1/p} W(y)^{-1/p}|^{p'}\,\mathrm{d}y\Big)^{p/p'}\mathrm{d}x < \infty,
\]
where the supremum is taken over all cubes $Q$ in $\mathbb{R}^n$. Note that Lemma 3.2 shows that if $W$ is a $d\times d$-matrix valued $A_p$ weight on $\mathbb{R}^n$, then $W' := W^{-1/(p-1)}$ is a $d\times d$-matrix valued $A_{p'}$ weight on $\mathbb{R}^n$ with $[W']^{1/p'}_{A_{p'}} \sim_{p,d} [W]^{1/p}_{A_p}$, and
\[
[W]_{A_p} \sim_{p,d} \sup_Q |W'_Q W_Q|^p,
\]
where the reducing matrices for $W$ correspond to the exponent $p$, and those for $W'$ correspond to the exponent $p'$.

More generally, if $V, W$ are $d\times d$-matrix valued weights on $\mathbb{R}^n$ we define
\[
[W,V]_{A_p} := \sup_Q \fint_Q \Big( \fint_Q |W(x)^{1/p} V(y)^{1/p'}|^{p'}\,\mathrm{d}y\Big)^{p/p'}\mathrm{d}x,
\]
and we notice that by the observation immediately after Lemma 3.2 we have
\[
(3.10) \qquad [W,V]_{A_p} \sim_{p,d} \sup_Q |V_Q W_Q|^p,
\]
where the reducing matrices for $W$ correspond to the exponent $p$, and those for $V$ correspond to the exponent $p'$. Notice that (3.10), together with the identity $|V_Q W_Q| = |W_Q V_Q|$, implies that $[W,V]^{1/p}_{A_p} \sim_{p,d} [V,W]^{1/p'}_{A_{p'}}$.

If $\mathcal{D}$ is any dyadic grid in $\mathbb{R}^n$, we define
\[
[W]_{A_p,\mathcal{D}} := \sup_{Q\in\mathcal{D}} \fint_Q \Big( \fint_Q |W(x)^{1/p} W(y)^{-1/p}|^{p'}\,\mathrm{d}y\Big)^{p/p'}\mathrm{d}x, \qquad [W,V]_{A_p,\mathcal{D}} := \sup_{Q\in\mathcal{D}} \fint_Q \Big( \fint_Q |W(x)^{1/p} V(y)^{1/p'}|^{p'}\,\mathrm{d}y\Big)^{p/p'}\mathrm{d}x,
\]
and we say that $W$ is a $\mathcal{D}$-dyadic $d\times d$-matrix valued $A_p$ weight if $[W]_{A_p,\mathcal{D}} < \infty$.

3.4. Biparameter matrix $A_p$ weights. Let $W$ be a $d\times d$-matrix valued weight on $\mathbb{R}^n\times\mathbb{R}^m$. We say that $W$ is a $d\times d$-matrix valued biparameter $A_p$ weight if
\[
[W]_{A_p} := \sup_R \fint_R \Big( \fint_R |W(x)^{1/p} W(y)^{-1/p}|^{p'}\,\mathrm{d}y\Big)^{p/p'}\mathrm{d}x < \infty,
\]
where the supremum is taken over all rectangles $R$ in $\mathbb{R}^n\times\mathbb{R}^m$ (with sides parallel to the coordinate axes). Note that if $W$ is a $d\times d$-matrix valued biparameter $A_p$ weight on $\mathbb{R}^n\times\mathbb{R}^m$, then by Lemma 3.2 we deduce that $W' := W^{-1/(p-1)}$ is a $d\times d$-matrix valued biparameter $A_{p'}$ weight on $\mathbb{R}^n\times\mathbb{R}^m$ with $[W']^{1/p'}_{A_{p'}} \sim_{d,p} [W]^{1/p}_{A_p}$, and
\[
[W]_{A_p} \sim_{p,d} \sup_R |W'_R W_R|^p,
\]
where the reducing matrices for $W$ correspond to the exponent $p$, and those for $W'$ correspond to the exponent $p'$. If $\mathbf{D}$ is any product dyadic grid in $\mathbb{R}^n\times\mathbb{R}^m$, we define
\[
[W]_{A_p,\mathbf{D}} := \sup_{R\in\mathbf{D}} \fint_R \Big( \fint_R |W(x)^{1/p} W(y)^{-1/p}|^{p'}\,\mathrm{d}y\Big)^{p/p'}\mathrm{d}x;
\]
we say that $W$ is a $\mathbf{D}$-dyadic biparameter $d\times d$-matrix valued $A_p$ weight if $[W]_{A_p,\mathbf{D}} < \infty$.

3.4.1. Two-weight biparameter matrix $A_p$-characteristics. Let again $1 < p < \infty$. If $V, W$ are $d\times d$-matrix valued weights on $\mathbb{R}^n\times\mathbb{R}^m$ we define
\[
(3.11) \qquad [W,V]_{A_p} := \sup_R \fint_R \Big( \fint_R |W(x)^{1/p} V(y)^{1/p'}|^{p'}\,\mathrm{d}y\Big)^{p/p'}\mathrm{d}x,
\]
and we notice that by the observation immediately after Lemma 3.2 we have
\[
(3.12) \qquad [W,V]_{A_p} \sim_{p,d} \sup_R |V_R W_R|^p,
\]
where the reducing matrices for $W$ correspond to the exponent $p$, and those for $V$ correspond to the exponent $p'$. Notice that (3.12), together with the identity $|V_R W_R| = |W_R V_R|$, implies that $[W,V]^{1/p}_{A_p} \sim_{p,d} [V,W]^{1/p'}_{A_{p'}}$. If $\mathbf{D}$ is any product dyadic grid in $\mathbb{R}^n\times\mathbb{R}^m$, we also define
\[
(3.13) \qquad [W,V]_{A_p,\mathbf{D}} := \sup_{R\in\mathbf{D}} \fint_R \Big( \fint_R |W(x)^{1/p} V(y)^{1/p'}|^{p'}\,\mathrm{d}y\Big)^{p/p'}\mathrm{d}x,
\]
where the reducing matrices for $W$ correspond to the exponent $p$, and those for $V$ correspond to the exponent $p'$.

3.4.2. Matrix biparameter $A_p$ implies uniform matrix $A_p$ in each coordinate.

Lemma 3.6.
Let $W, V$ be $d\times d$-matrix valued weights on $\mathbb{R}^{n+m}$.
(1) Let $\mathbf{D} = \mathcal{D}_1 \times \mathcal{D}_2$ be any product dyadic grid in $\mathbb{R}^n\times\mathbb{R}^m$. Then, there holds
\[
[W(\cdot,x_2), V(\cdot,x_2)]_{A_p,\mathcal{D}_1} \lesssim_{d,p} [W,V]_{A_p,\mathbf{D}}, \quad \text{for a.e. } x_2 \in \mathbb{R}^m, \qquad [W(x_1,\cdot), V(x_1,\cdot)]_{A_p,\mathcal{D}_2} \lesssim_{d,p} [W,V]_{A_p,\mathbf{D}}, \quad \text{for a.e. } x_1 \in \mathbb{R}^n.
\]
(2) There holds
\[
[W(\cdot,x_2), V(\cdot,x_2)]_{A_p} \lesssim_{d,p,n} [W,V]_{A_p}, \quad \text{for a.e. } x_2 \in \mathbb{R}^m, \qquad [W(x_1,\cdot), V(x_1,\cdot)]_{A_p} \lesssim_{d,p,m} [W,V]_{A_p}, \quad \text{for a.e. } x_1 \in \mathbb{R}^n.
\]
Proof. (1) We identify $\mathbb{Q}^d$ with the subset $\mathbb{Q}^d + i\mathbb{Q}^d$ of $\mathbb{C}^d$ throughout the proof. Recall that we denote by $\{e_1,\dots,e_d\}$ the standard basis for $\mathbb{C}^d$.

Assume that $[W,V]_{A_p,\mathbf{D}} < \infty$. We only show that $[W(\cdot,x_2), V(\cdot,x_2)]_{A_p,\mathcal{D}_1} \lesssim_{p,d} [W,V]_{A_p,\mathbf{D}}$ for a.e. $x_2 \in \mathbb{R}^m$, the proof of the other estimate being symmetric. Since $W, V$ are locally integrable over $\mathbb{R}^n\times\mathbb{R}^m$ and $\mathbb{R}^m, \mathbb{R}^n$ can be covered by countably many balls, we deduce that there exists a Lebesgue measurable subset $A$ of $\mathbb{R}^m$ with $|\mathbb{R}^m \setminus A| = 0$ such that $W_{x_2} := W(\cdot,x_2)$ and $V_{x_2} := V(\cdot,x_2)$ are locally integrable over $\mathbb{R}^n$, for all $x_2 \in A$.

For each $Q \in \mathcal{D}_1$ and each $e \in \mathbb{Q}^d$, the function
\[
F_{Q,e}(x_2) := \int_Q |W(x_1,x_2)^{1/p} e|^p\,\mathrm{d}x_1, \qquad x_2 \in A,
\]
is locally integrable over $\mathbb{R}^m$; let $A_{Q,e}$ be the set of all its Lebesgue points. Moreover, for each $Q \in \mathcal{D}_1$ and each $e \in \mathbb{Q}^d$, the function
\[
G_{Q,e}(x_2) := \int_Q |V(x_1,x_2)^{1/p'} e|^{p'}\,\mathrm{d}x_1, \qquad x_2 \in A,
\]
is locally integrable over $\mathbb{R}^m$; let $B_{Q,e}$ be the set of all its Lebesgue points. Set
\[
C := A \cap \Big( \bigcap_{Q\in\mathcal{D}_1,\, e\in\mathbb{Q}^d} A_{Q,e}\Big) \cap \Big( \bigcap_{Q\in\mathcal{D}_1,\, e\in\mathbb{Q}^d} B_{Q,e}\Big).
\]
Then $C$ is a Lebesgue measurable subset of $\mathbb{R}^m$ with $|\mathbb{R}^m \setminus C| = 0$. We will now show that
\[
[W(\cdot,x_2), V(\cdot,x_2)]_{A_p,\mathcal{D}_1} \lesssim_{p,d} [W,V]_{A_p,\mathbf{D}}, \qquad \forall x_2 \in C.
\]
Let $I \in \mathcal{D}_1$ and $x_2 \in C$ be arbitrary. Pick a sequence $(J_\ell)_{\ell=1}^\infty$ of cubes in $\mathcal{D}_2$ shrinking to $x_2$. Then, for all $e \in \mathbb{Q}^d$ we have
\[
\lim_{\ell\to\infty} \fint_{J_\ell} \Big( \fint_I |W(x_1,y_2)^{1/p} e|^p\,\mathrm{d}x_1\Big)\mathrm{d}y_2 = \fint_I |W(x_1,x_2)^{1/p} e|^p\,\mathrm{d}x_1 \sim_{p,d} |W_{x_2,I}\, e|^p,
\]
where $W_{x_2,I}$ is the reducing matrix of $W_{x_2}$ over $I$ with respect to the exponent $p$, and
\[
\fint_{J_\ell} \Big( \fint_I |W(x_1,y_2)^{1/p} e|^p\,\mathrm{d}x_1\Big)\mathrm{d}y_2 \sim_{p,d} |W_{I\times J_\ell}\, e|^p, \qquad \forall \ell = 1,2,\dots,
\]
where $W_{I\times J_\ell}$ is the reducing matrix of $W$ over $I\times J_\ell$ with respect to the exponent $p$, for all $\ell = 1,2,\dots$. In particular, for each $k = 1,\dots,d$ there exists $0 < c_k < \infty$ such that $|W_{I\times J_\ell}\, e_k| \le c_k$ for all $\ell = 1,2,\dots$. Then, by (3.1) we have
\[
|W_{I\times J_\ell}| \sim_d \sum_{k=1}^d |W_{I\times J_\ell}\, e_k| \le \sum_{k=1}^d c_k < \infty, \qquad \forall \ell = 1,2,\dots.
\]
Thus, the sequence of matrices $(W_{I\times J_\ell})_{\ell=1}^\infty$ is bounded in matrix norm, therefore by compactness we can extract a subsequence $(W_{I\times J_{\ell_k}})_{k=1}^\infty$ converging in matrix norm to some self-adjoint matrix $W_{I,0} \in M_d(\mathbb{C})$. Then, by density of $\mathbb{Q}^d$ in $\mathbb{C}^d$ we deduce
\[
|W_{I,0}\, e| \sim_{p,d} |W_{x_2,I}\, e|, \qquad \forall e \in \mathbb{C}^d.
\]
Working similarly we can extract a further subsequence $(V_{I\times J_{\ell_{t_k}}})_{k=1}^\infty$ converging in matrix norm to some self-adjoint matrix $V_{I,0} \in M_d(\mathbb{C})$ for which
\[
|V_{I,0}\, e| \sim_{p,d} |V_{x_2,I}\, e|, \qquad \forall e \in \mathbb{C}^d,
\]
where $V_{I\times J_\ell}$ is the reducing matrix of $V$ over $I\times J_\ell$ with respect to the exponent $p'$, for all $\ell = 1,2,\dots$, and $V_{x_2,I}$ is the reducing matrix of the weight $V_{x_2}$ over $I$ with respect to the exponent $p'$. It follows that
\[
|V_{x_2,I} W_{x_2,I}| \sim_{p,d} |V_{I,0} W_{I,0}| = \lim_{k\to\infty} |V_{I\times J_{\ell_{t_k}}} W_{I\times J_{\ell_{t_k}}}| \lesssim_{p,d} [W,V]^{1/p}_{A_p,\mathbf{D}},
\]
where in the first step we used (3.2), yielding the desired result.
(2) This follows by using the well-known one-third trick together with the dyadic $A_p$ characteristics. $\square$

In the special case $V := W'$, a converse to Lemma 3.6 is true as well.

Lemma 3.7.
Let $W$ be any $d\times d$-matrix valued weight on $\mathbb{R}^{n+m}$. Then, there holds
\[
[W]_{A_p} \lesssim_{d,p} \Big( \operatorname*{ess\,sup}_{x_2\in\mathbb{R}^m} [W(\cdot,x_2)]_{A_p}\Big) \Big( \operatorname*{ess\,sup}_{x_1\in\mathbb{R}^n} [W(x_1,\cdot)]_{A_p}\Big).
\]
The dyadic version of this is true as well.

Proof. It follows immediately by noticing that if $Q = Q_1\times Q_2$ is any rectangle in $\mathbb{R}^n\times\mathbb{R}^m$, then
\[
A_Q f(x_1,x_2) = A_{Q_1}\big( E_{Q_2}(\cdot,x_2)\big)(x_1), \qquad E_{Q_2}(y_1,x_2) := A_{Q_2}\big( f(y_1,\cdot)\big)(x_2),
\]
where $A_Q, A_{Q_1}, A_{Q_2}$ denote the averaging operators over $Q, Q_1, Q_2$ respectively, and then iterating Lemma 3.2. $\square$

3.5. $A_p$ characteristics of “sliced” reducing operators. Let $1 < p < \infty$. Let $W$ be a biparameter matrix $A_p$ weight on $\mathbb{R}^n\times\mathbb{R}^m$. For a.e. $x_1 \in \mathbb{R}^n$, set $W_{x_1}(x_2) := W(x_1,x_2)$, $x_2 \in \mathbb{R}^m$. Fix any cube $Q_2$ in $\mathbb{R}^m$. For a.e. $x_1 \in \mathbb{R}^n$, let $W_{x_1,Q_2}$ be the reducing operator of $W_{x_1}$ over $Q_2$ with respect to the exponent $p$. Set $W_{Q_2}(x_1) := (W_{x_1,Q_2})^p$, for a.e. $x_1 \in \mathbb{R}^n$. Then, by (3.9) we deduce
\[
(3.14) \qquad [W_{Q_2}]_{A_p} \lesssim_{p,d} [W]_{A_p}.
\]
Of course, the dyadic version of this is also true. Moreover, both versions remain true if one “slices” with respect to the first coordinate instead of the second one.

4. Dyadic strong Christ–Goldberg maximal function
4.1. Dyadic Christ–Goldberg maximal function. Let $\mathcal{D}$ be any dyadic grid in $\mathbb{R}^n$. Let $1 < p < \infty$, and let $W$ be a $d\times d$-matrix valued weight on $\mathbb{R}^n$. Consider the (dyadic) Christ–Goldberg maximal function corresponding to the weight $W$ (and the exponent $p$),
\[
M^W_{\mathcal{D}} f(x) := \sup_{I\in\mathcal{D}} \big( |W(x)^{1/p} f|\big)_I\, \mathbf{1}_I(x), \qquad x \in \mathbb{R}^n.
\]
Consider also the (dyadic) modified Christ–Goldberg maximal function
\[
\widetilde{M}^W_{\mathcal{D}} f := \sup_{I\in\mathcal{D}} \big( |W_I f|\big)_I\, \mathbf{1}_I,
\]
where the reducing operators for $W$ correspond to the exponent $p$. Both of these operators were defined by M. Christ and M. Goldberg for $p = 2$ in [5], and by Goldberg for general $1 < p < \infty$ in [10]. Christ–Goldberg [5] (for $p = 2$) and Goldberg (for general $1 < p < \infty$) proved that if $[W]_{A_p,\mathcal{D}} < \infty$, then
\[
\|M^W_{\mathcal{D}}\|_{L^p(W)\to L^p(\mathbb{R}^n)},\ \|\widetilde{M}^W_{\mathcal{D}}\|_{L^p(W)\to L^p(\mathbb{R}^n)} < \infty.
\]
In fact, J. Isralowitz and K. Moen [26] proved the (already in the scalar case sharp) bound
\[
(4.1) \qquad \|M^W_{\mathcal{D}}\|_{L^p(W)\to L^p(\mathbb{R}^n)} \lesssim_{p,d,n} [W]^{1/(p-1)}_{A_p,\mathcal{D}},
\]
and J. Isralowitz, H. K. Kwon and S. Pott [25] proved the (already in the scalar case sharp) bound
\[
(4.2) \qquad \|\widetilde{M}^W_{\mathcal{D}}\|_{L^p(W)\to L^p(\mathbb{R}^n)} \lesssim_{n,p,d} [W]^{1/(p-1)}_{A_p,\mathcal{D}}.
\]
4.2. Modified dyadic strong Christ–Goldberg maximal function.
Let $\mathbf{D}$ be any product dyadic grid in $\mathbb{R}^n\times\mathbb{R}^m$. Let $1 < p < \infty$, and let $W$ be a $d\times d$-matrix valued weight on $\mathbb{R}^n\times\mathbb{R}^m$. Consider the modified dyadic strong Christ–Goldberg maximal function
\[
\widetilde{M}^W_{\mathbf{D}} f := \sup_{R\in\mathbf{D}} \big( |W_R f|\big)_R\, \mathbf{1}_R,
\]
where reducing operators for $W$ correspond to the exponent $p$.

Proposition 4.1. There holds
\[
\|\widetilde{M}^W_{\mathbf{D}}\|_{L^p(W)\to L^p(\mathbb{R}^{n+m})} \lesssim_{p,d,n,m} [W]^{\frac{p+1}{p(p-1)}}_{A_p,\mathbf{D}}.
\]
Proof.
The proof will be a straightforward adaptation of the one-parameter case due to Isralowitz–Kwon–Pott [25]. For completeness, we include all details. From Lemma 3.3 we get
\[
\big[ |W^{-1/p} W_R|^{p'}\big]_{A_{p'},\mathbf{D}} \le C(p,d)\, [W]^{p'/p}_{A_p,\mathbf{D}}, \qquad \forall R \in \mathbf{D},
\]
where $0 < C(p,d) < \infty$ depends only on $p$ and $d$. The function $|W^{-1/p} W_R|^{p'}$ is therefore a scalar $\mathbf{D}$-dyadic biparameter $A_{p'}$ weight, and thus satisfies the reverse Hölder inequality from [15, Proposition 2.2] (for the reader's convenience, we include the statement and proof in the appendix), for all $R \in \mathbf{D}$. Thus, setting
\[
\varepsilon := \frac{1}{2^{\max(m,n)+1}\, C(p,d)\, [W]^{p'/p}_{A_p,\mathbf{D}}},
\]
we have
\[
\Big( \fint_R |W_R W(x)^{-1/p}|^{p'(1+\varepsilon)}\,\mathrm{d}x\Big)^{\frac{1}{p'(1+\varepsilon)}} = \bigg( \Big( \fint_R |W(x)^{-1/p} W_R|^{p'(1+\varepsilon)}\,\mathrm{d}x\Big)^{\frac{1}{1+\varepsilon}}\bigg)^{1/p'} \lesssim_{p,n,m} \Big( \fint_R |W^{-1/p} W_R|^{p'}\,\mathrm{d}x\Big)^{1/p'} \sim_{p,d} |W'_R W_R| \lesssim_{p,d} [W]^{1/p}_{A_p,\mathbf{D}},
\]
for all $R \in \mathbf{D}$. It follows by Hölder's inequality that
\[
\big( |W_R f|\big)_R \le \Big( \fint_R |W_R W(x)^{-1/p}|^{p'(1+\varepsilon)}\,\mathrm{d}x\Big)^{\frac{1}{p'(1+\varepsilon)}} \Big( \fint_R |W^{1/p} f(x)|^{\frac{p(1+\varepsilon)}{1+p\varepsilon}}\,\mathrm{d}x\Big)^{\frac{1+p\varepsilon}{p(1+\varepsilon)}} \lesssim_{p,d} [W]^{1/p}_{A_p,\mathbf{D}}\, M_{a,\mathbf{D}}\big( |W^{1/p} f|\big)(x),
\]
for all $x \in R$ and all $R \in \mathbf{D}$, where
\[
a := \frac{p(1+\varepsilon)}{1+p\varepsilon} \in (1,p) \qquad \text{and} \qquad M_{a,\mathbf{D}}\, g := \sup_{R\in\mathbf{D}} \big( |g|^a\big)^{1/a}_R\, \mathbf{1}_R.
\]
Thus
\[
\widetilde{M}^W_{\mathbf{D}} f \lesssim_{p,d,n,m} [W]^{1/p}_{A_p,\mathbf{D}}\, M_{a,\mathbf{D}}\big( |W^{1/p} f|\big).
\]
Notice that by (3.6) we have $\varepsilon \le c(p,d)$, for some finite positive constant $c(p,d)$ depending only on $p, d$. Recall also that by iterating the classical (unweighted) bounds for the dyadic Hardy–Littlewood maximal function (see e.g. [11, Theorem 2.1.6] and [11, Exercise 2.1.12]) we get that the usual dyadic strong maximal function $M_{\mathbf{D}}\, g := \sup_{R\in\mathbf{D}} (|g|)_R\, \mathbf{1}_R$ satisfies the bound
\[
\|M_{\mathbf{D}}\|_{L^q(\mathbb{R}^{n+m})\to L^q(\mathbb{R}^{n+m})} \le 4\, (q')^{2/q}, \qquad \forall\ 1 < q < \infty.
\]
Therefore, we have
\[
\|M_{a,\mathbf{D}}\, g\|^p_{L^p(\mathbb{R}^{n+m})} = \|M_{\mathbf{D}}(|g|^a)\|^{p/a}_{L^{p/a}(\mathbb{R}^{n+m})} \le 4^{p/a} \big( (p/a)'\big)^{2}\, \||g|^a\|^{p/a}_{L^{p/a}(\mathbb{R}^{n+m})} \lesssim_{p,d} \varepsilon^{-2}\, \||g|^a\|^{p/a}_{L^{p/a}(\mathbb{R}^{n+m})} \sim_{p,d,n,m} [W]^{2/(p-1)}_{A_p,\mathbf{D}}\, \|g\|^p_{L^p(\mathbb{R}^{n+m})},
\]
where in the step $\lesssim_{p,d}$ we used the facts that $\frac{p}{a} < p$ and that
\[
(p/a)' = \frac{1+p\varepsilon}{(p-1)\varepsilon} \le \frac{1 + p\, c(p,d)}{p-1}\, \varepsilon^{-1}.
\]
Since $\| |W^{1/p} f| \|_{L^p(\mathbb{R}^{n+m})} = \|f\|_{L^p(W)}$, we deduce
\[
\|\widetilde{M}^W_{\mathbf{D}}\|_{L^p(W)\to L^p(\mathbb{R}^{n+m})} \lesssim_{p,d,n,m} [W]^{\frac1p + \frac{2}{p(p-1)}}_{A_p,\mathbf{D}} = [W]^{\frac{p+1}{p(p-1)}}_{A_p,\mathbf{D}},
\]
concluding the proof. $\square$

5. Matrix-weighted square functions
5.1. Matrix-weighted vector-valued extensions for linear operators.

Lemma 5.1.
Let $W$ be a $d\times d$-matrix weight on $\mathbb{R}^n$. Let $T: L^p(W) \to L^p(\mathbb{R}^n; H)$ be a bounded linear operator, where $H$ is any Hilbert space. Then
\[
\Big\| \Big( \sum_{k=1}^\infty \|T(f_k)\|_H^2\Big)^{1/2}\Big\|_{L^p(\mathbb{R}^n)} \lesssim_p \|T\| \cdot \Big\| \Big( \sum_{k=1}^\infty |W^{1/p} f_k|^2\Big)^{1/2}\Big\|_{L^p(\mathbb{R}^n)},
\]
for any sequence $(f_k)_{k=1}^\infty$ in $L^p(W)$, where the implied constant does not depend on the Hilbert space $H$ or on $d$.

Proof. First of all, by the Monotone Convergence Theorem it suffices to prove only that
\[
\Big\| \Big( \sum_{k=1}^N \|T(f_k)\|_H^2\Big)^{1/2}\Big\|_{L^p(\mathbb{R}^n)} \lesssim_p \|T\| \cdot \Big\| \Big( \sum_{k=1}^N |W^{1/p} f_k|^2\Big)^{1/2}\Big\|_{L^p(\mathbb{R}^n)}, \qquad \forall N = 1,2,\dots,
\]
provided the implied constant does not depend on $N$. We fix now a positive integer $N$. Below, $(\varepsilon_k)_{k}$ denotes a sequence of independent random signs on a probability space $(\Omega,\mathbb{P})$. Then, using Khintchine's inequalities for Hilbert spaces in both steps $\sim_p$ below (see e.g. [21]), we have
\begin{align*}
\Big\| \Big( \sum_{k=1}^N \|T(f_k)\|_H^2\Big)^{1/2}\Big\|^p_{L^p(\mathbb{R}^n)} &\sim_p \Big\| \Big( \int_\Omega \Big\| \sum_{k=1}^N \varepsilon_k(\omega)\, T(f_k)\Big\|_H^p\,\mathrm{d}\mathbb{P}(\omega)\Big)^{1/p}\Big\|^p_{L^p(\mathbb{R}^n)} = \int_\Omega \Big\| T\Big( \sum_{k=1}^N \varepsilon_k(\omega) f_k\Big)\Big\|^p_{L^p(\mathbb{R}^n;H)}\,\mathrm{d}\mathbb{P}(\omega) \\
&\le \|T\|^p \int_\Omega \Big\| \sum_{k=1}^N \varepsilon_k(\omega) f_k\Big\|^p_{L^p(W)}\,\mathrm{d}\mathbb{P}(\omega) = \|T\|^p \int_\Omega \int_{\mathbb{R}^n} \Big| W(x)^{1/p} \sum_{k=1}^N \varepsilon_k(\omega) f_k(x)\Big|^p\,\mathrm{d}x\,\mathrm{d}\mathbb{P}(\omega) \\
&= \|T\|^p \int_{\mathbb{R}^n} \int_\Omega \Big| \sum_{k=1}^N \varepsilon_k(\omega)\, W(x)^{1/p} f_k(x)\Big|^p\,\mathrm{d}\mathbb{P}(\omega)\,\mathrm{d}x \sim_p \|T\|^p \int_{\mathbb{R}^n} \Big( \sum_{k=1}^N |W(x)^{1/p} f_k(x)|^2\Big)^{p/2}\mathrm{d}x,
\end{align*}
concluding the proof. $\square$

5.2. One-parameter square functions.
Fix any dyadic grid $\mathcal{D}$ in $\mathbb{R}^n$. We define the standard (unweighted) square function
\[
S_{\mathcal{D}} f(x) := \Big( \sum_{Q\in\mathcal{D}} \sum_{\varepsilon\in\mathcal{E}} |f^\varepsilon_Q|^2\, \frac{\mathbf{1}_Q(x)}{|Q|}\Big)^{1/2}, \qquad f \in L^2(\mathbb{R}^n;\mathbb{C}^d),
\]
and we recall that standard (unweighted) dyadic Littlewood–Paley theory gives
\[
\|S_{\mathcal{D}} f\|_{L^p(\mathbb{R}^n)} \sim_{n,p,d} \|f\|_{L^p(\mathbb{R}^n)}.
\]
Let $1 < p < \infty$, and let $W$ be a $d\times d$-matrix valued weight on $\mathbb{R}^n$. In the theory of matrix valued weights, it is usually convenient to define the matrix-weighted square function corresponding to $W$ (and the exponent $p$) by
\[
S_{\mathcal{D},W} f(x) := \Big( \sum_{Q\in\mathcal{D}} \sum_{\varepsilon\in\mathcal{E}} |W_Q f^\varepsilon_Q|^2\, \frac{\mathbf{1}_Q(x)}{|Q|}\Big)^{1/2}, \qquad x \in \mathbb{R}^n,\ f \in L^2(\mathbb{R}^n;\mathbb{C}^d),
\]
where $W_Q$ is the reducing operator of the weight $W$ over $Q$ corresponding to the exponent $p$ (so if $p = 2$, then just $W_Q = ((W)_Q)^{1/2}$). Notice that the definition takes into account the exponent $p$. The direct generalization of the square function in the scalar context is
\[
\widetilde S_{\mathcal{D},W} f(x) := \Big( \sum_{Q\in\mathcal{D}} \sum_{\varepsilon\in\mathcal{E}} |W(x)^{1/p} f^\varepsilon_Q|^2\, \frac{\mathbf{1}_Q(x)}{|Q|}\Big)^{1/2}, \qquad x \in \mathbb{R}^n,\ f \in L^2(\mathbb{R}^n;\mathbb{C}^d).
\]
Notice that in the special case $p = 2$,
\[
\|S_{\mathcal{D},W} f\|^2_{L^2(\mathbb{R}^n)} = \sum_{Q\in\mathcal{D}} \sum_{\varepsilon\in\mathcal{E}} |((W)_Q)^{1/2} f^\varepsilon_Q|^2 = \sum_{Q\in\mathcal{D}} \sum_{\varepsilon\in\mathcal{E}} \langle (W)_Q f^\varepsilon_Q, f^\varepsilon_Q\rangle = \sum_{Q\in\mathcal{D}} \sum_{\varepsilon\in\mathcal{E}} \big( \langle W f^\varepsilon_Q, f^\varepsilon_Q\rangle\big)_Q = \sum_{Q\in\mathcal{D}} \sum_{\varepsilon\in\mathcal{E}} \big( |W^{1/2} f^\varepsilon_Q|^2\big)_Q = \|\widetilde S_{\mathcal{D},W} f\|^2_{L^2(\mathbb{R}^n)}.
\]
It will be useful in the following to view $\widetilde S_{\mathcal{D},W}$ as the linear operator $\vec{\widetilde S}_{\mathcal{D},W}$ given by
\[
\vec{\widetilde S}_{\mathcal{D},W}(f)(x) := \big\{ W(x)^{1/p} f^\varepsilon_Q\, h^\varepsilon_Q(x)\big\}_{(Q,\varepsilon)\in\mathcal{D}\times\mathcal{E}}, \qquad x \in \mathbb{R}^n,
\]
in the sense that $\widetilde S_{\mathcal{D},W}(f)(x) = \|\vec{\widetilde S}_{\mathcal{D},W}(f)(x)\|_{\ell^2(\mathcal{D}\times\mathcal{E};\mathbb{C}^d)}$. In particular,
\[
\|\widetilde S_{\mathcal{D},W}(f)\|_{L^p(\mathbb{R}^n)} = \|\vec{\widetilde S}_{\mathcal{D},W}(f)\|_{L^p(\mathbb{R}^n;\,\ell^2(\mathcal{D}\times\mathcal{E};\mathbb{C}^d))}.
\]
It has been proved in [20], [37] that if $p = 2$, then
\[
(5.1) \qquad \|\widetilde S_{\mathcal{D},W} f\|_{L^2(\mathbb{R}^n)} \lesssim_{n,d} [W]_{A_2,\mathcal{D}}\, \|f\|_{L^2(W)}, \qquad \forall f \in L^2(\mathbb{R}^n;\mathbb{C}^d),
\]
and this estimate is sharp (even in the scalar case). For general $1 < p < \infty$, it has been proved in [24] that
\[
\|\widetilde S_{\mathcal{D},W} f\|_{L^p(\mathbb{R}^n)} \lesssim_{p,d,n} [W]^{\gamma(p)}_{A_p,\mathcal{D}}\, \|f\|_{L^p(W)}, \qquad \forall f \in L^2(\mathbb{R}^n;\mathbb{C}^d),
\]
where
\[
(5.2) \qquad \gamma(p) := \begin{cases} \dfrac{1}{p-1}, & \text{if } 1 < p \le 2,\\[6pt] \dfrac12 + \dfrac{1}{p(p-1)}, & \text{if } p > 2.\end{cases}
\]
It is noted in [24] that while this estimate is sharp in the regime $1 < p \le 2$, it is not known whether it is sharp for $p > 2$.

5.3. Biparameter square functions.
Let $\mathbf{D} = \mathcal{D}_1\times\mathcal{D}_2$ be any product dyadic grid of rectangles in $\mathbb{R}^n\times\mathbb{R}^m$. We define the standard (unweighted) biparameter square function
\[
S_{\mathbf{D}} f(x) := \Big( \sum_{R\in\mathbf{D}} \sum_{\varepsilon\in\mathcal{E}} |f^\varepsilon_R|^2\, \frac{\mathbf{1}_R(x)}{|R|}\Big)^{1/2}, \qquad f \in L^2(\mathbb{R}^{n+m};\mathbb{C}^d),
\]
and we recall that standard (unweighted) dyadic Littlewood–Paley theory gives
\[
\|S_{\mathbf{D}} f\|_{L^p(\mathbb{R}^{n+m})} \sim_{p,d,n,m} \|f\|_{L^p(\mathbb{R}^{n+m})},
\]
for all (suitable) functions $f$. Let $1 < p < \infty$, and let $W$ be a $d\times d$-matrix valued weight on $\mathbb{R}^n\times\mathbb{R}^m$. We define the biparameter matrix-weighted square function corresponding to $W$ (and the exponent $p$) by
\[
S_{\mathbf{D},W} f(x) := \Big( \sum_{R\in\mathbf{D}} \sum_{\varepsilon\in\mathcal{E}} |W_R f^\varepsilon_R|^2\, \frac{\mathbf{1}_R(x)}{|R|}\Big)^{1/2}, \qquad f \in L^2(\mathbb{R}^{n+m};\mathbb{C}^d),
\]
where reducing operators for $W$ are taken with respect to the exponent $p$. The direct generalization of the biparameter square function in the scalar context is
\[
\widetilde S_{\mathbf{D},W} f(x) := \Big( \sum_{R\in\mathbf{D}} \sum_{\varepsilon\in\mathcal{E}} |W(x)^{1/p} f^\varepsilon_R|^2\, \frac{\mathbf{1}_R(x)}{|R|}\Big)^{1/2}, \qquad x \in \mathbb{R}^{n+m},\ f \in L^2(\mathbb{R}^{n+m};\mathbb{C}^d).
\]
As before, in the special case $p = 2$ we have $\|S_{\mathbf{D},W} f\|_{L^2(\mathbb{R}^{n+m})} = \|\widetilde S_{\mathbf{D},W} f\|_{L^2(\mathbb{R}^{n+m})}$.

Lemma 5.2.
Let $1 < p < \infty$, and let $W$ be a $\mathbf{D}$-dyadic biparameter $d\times d$-matrix valued $A_p$ weight on $\mathbb{R}^n\times\mathbb{R}^m$. Then, there holds
\[
\|\widetilde S_{\mathbf{D},W} f\|_{L^p(\mathbb{R}^{n+m})} \lesssim_{d,p,n,m} [W]^{2\gamma(p)}_{A_p,\mathbf{D}}\, \|f\|_{L^p(W)}, \qquad \forall f \in L^\infty_c(\mathbb{R}^{n+m};\mathbb{C}^d).
\]
Proof. The proof will rely on Lemma 5.1. We have
\begin{align*}
\|\widetilde S_{\mathbf{D},W} f\|^p_{L^p(\mathbb{R}^{n+m})} &= \int_{\mathbb{R}^n\times\mathbb{R}^m} \Big( \sum_{R\in\mathbf{D}} \sum_{\varepsilon\in\mathcal{E}} |W(x_1,x_2)^{1/p} f^\varepsilon_R|^2\, \frac{\mathbf{1}_R(x_1,x_2)}{|R|}\Big)^{p/2}\mathrm{d}x_1\,\mathrm{d}x_2 \\
&= \int_{\mathbb{R}^n} \Big( \int_{\mathbb{R}^m} \Big( \sum_{R_1\in\mathcal{D}_1,\, \varepsilon_1\in\mathcal{E}_1}\ \sum_{R_2\in\mathcal{D}_2,\, \varepsilon_2\in\mathcal{E}_2} |W(x_1,x_2)^{1/p} f^{\varepsilon_1\varepsilon_2}_{R_1\times R_2}|^2\, \frac{\mathbf{1}_{R_1}(x_1)\,\mathbf{1}_{R_2}(x_2)}{|R_1|\cdot|R_2|}\Big)^{p/2}\mathrm{d}x_2\Big)\mathrm{d}x_1 \\
&= \int_{\mathbb{R}^n} \Big( \int_{\mathbb{R}^m} \Big( \sum_{R_1,\varepsilon_1} \big( \widetilde S_{\mathcal{D}_2,\, W(x_1,\cdot)}\big( (Q^{1,\varepsilon_1}_{R_1} f)(x_1,\cdot)\big)(x_2)\big)^2\Big)^{p/2}\mathrm{d}x_2\Big)\mathrm{d}x_1 \\
&= \int_{\mathbb{R}^n} \Big( \int_{\mathbb{R}^m} \Big( \sum_{R_1,\varepsilon_1} \big\| \vec{\widetilde S}_{\mathcal{D}_2,\, W(x_1,\cdot)}\big( (Q^{1,\varepsilon_1}_{R_1} f)(x_1,\cdot)\big)(x_2)\big\|^2_{\ell^2(\mathcal{D}_2\times\mathcal{E}_2)}\Big)^{p/2}\mathrm{d}x_2\Big)\mathrm{d}x_1,
\end{align*}
where we recall that $Q^{1,\varepsilon_1}_{R_1} f(y_1,y_2) := Q^{\varepsilon_1}_{R_1}(f(\cdot,y_2))(y_1) = (f(\cdot,y_2), h^{\varepsilon_1}_{R_1})\, h^{\varepsilon_1}_{R_1}(y_1)$. Recalling that $[W(x_1,\cdot)]_{A_p,\mathcal{D}_2} \lesssim_{p,d} [W]_{A_p,\mathbf{D}}$ for a.e. $x_1 \in \mathbb{R}^n$ (see Lemma 3.6), as well as the one-parameter result from [24], we obtain in view of Lemma 5.1
\begin{align*}
\|\widetilde S_{\mathbf{D},W} f\|^p_{L^p(\mathbb{R}^{n+m})} &\lesssim_{d,p,n} [W]^{p\gamma(p)}_{A_p,\mathbf{D}} \int_{\mathbb{R}^n} \Big( \int_{\mathbb{R}^m} \Big( \sum_{R_1,\varepsilon_1} |W(x_1,x_2)^{1/p} (Q^{1,\varepsilon_1}_{R_1} f)(x_1,x_2)|^2\Big)^{p/2}\mathrm{d}x_2\Big)\mathrm{d}x_1 \\
&= [W]^{p\gamma(p)}_{A_p,\mathbf{D}} \int_{\mathbb{R}^m} \Big( \int_{\mathbb{R}^n} \Big( \sum_{R_1,\varepsilon_1} |W(x_1,x_2)^{1/p}\, Q^{\varepsilon_1}_{R_1}(f(\cdot,x_2))(x_1)|^2\Big)^{p/2}\mathrm{d}x_1\Big)\mathrm{d}x_2 \\
&= [W]^{p\gamma(p)}_{A_p,\mathbf{D}} \int_{\mathbb{R}^m} \|\widetilde S_{\mathcal{D}_1,\, W(\cdot,x_2)}(f(\cdot,x_2))\|^p_{L^p(\mathbb{R}^n)}\,\mathrm{d}x_2 \\
&\lesssim_{m,d,p} [W]^{2p\gamma(p)}_{A_p,\mathbf{D}} \int_{\mathbb{R}^m} \|f(\cdot,x_2)\|^p_{L^p(W(\cdot,x_2))}\,\mathrm{d}x_2 = [W]^{2p\gamma(p)}_{A_p,\mathbf{D}}\, \|f\|^p_{L^p(W)},
\end{align*}
where in the last $\lesssim_{m,d,p}$ we used again the one-parameter result from [24], concluding the proof. $\square$

Lemma 5.3.
Let $1 < p < \infty$, and let $W$ be a $\mathbf{D}$-dyadic biparameter $d\times d$-matrix valued $A_p$ weight on $\mathbb{R}^n\times\mathbb{R}^m$. Then, there holds
\[
\|S_{\mathbf{D},W} f\|_{L^p(\mathbb{R}^{n+m})} \lesssim_{d,p,n,m} [W]^{1/p}_{A_p,\mathbf{D}}\, \|\widetilde S_{\mathbf{D},W} f\|_{L^p(\mathbb{R}^{n+m})}, \qquad \forall f \in L^2(\mathbb{R}^{n+m};\mathbb{C}^d).
\]
Proof. First of all, by the Monotone Convergence Theorem we can without loss of generality assume that $f$ has only finitely many nonzero Haar coefficients. Now, using Lemma 3.4 (3) we get
\[
\|S_{\mathbf{D},W} f\|^p_{L^p(\mathbb{R}^{n+m})} = \int_{\mathbb{R}^n\times\mathbb{R}^m} \Big( \sum_{R\in\mathbf{D}} \sum_{\varepsilon\in\mathcal{E}} |W_R f^\varepsilon_R|^2\, \frac{\mathbf{1}_R(x)}{|R|}\Big)^{p/2}\mathrm{d}x \lesssim_{p,d} [W]_{A_p,\mathbf{D}} \int_{\mathbb{R}^n\times\mathbb{R}^m} \Big( \sum_{R\in\mathbf{D}} \sum_{\varepsilon\in\mathcal{E}} \big( (|W^{1/p} f^\varepsilon_R|)_R\big)^2\, \frac{\mathbf{1}_R(x)}{|R|}\Big)^{p/2}\mathrm{d}x.
\]
Thus, by standard (unweighted) dyadic Littlewood–Paley theory, the proof will be complete once we prove that
\[
\|F\|_{L^p(\mathbb{R}^{n+m})} := \Big\| \sum_{R\in\mathbf{D}} \sum_{\varepsilon\in\mathcal{E}} \big( |W^{1/p} f^\varepsilon_R|\big)_R\, h^\varepsilon_R \Big\|_{L^p(\mathbb{R}^{n+m})} \lesssim_{d,p,n,m} \|\widetilde S_{\mathbf{D},W} f\|_{L^p(\mathbb{R}^{n+m})}.
\]
We use duality for that. Let $g \in L^{p'}(\mathbb{R}^{n+m})$. Then, we have
\begin{align*}
|(F,g)| &\le \sum_{R\in\mathbf{D}} \sum_{\varepsilon\in\mathcal{E}} \big( |W^{1/p} f^\varepsilon_R|\big)_R \cdot |g^\varepsilon_R| = \int_{\mathbb{R}^n\times\mathbb{R}^m} \sum_{R\in\mathbf{D}} \sum_{\varepsilon\in\mathcal{E}} |W^{1/p}(x) f^\varepsilon_R|\, \frac{\mathbf{1}_R(x)}{\sqrt{|R|}} \cdot |g^\varepsilon_R|\, \frac{\mathbf{1}_R(x)}{\sqrt{|R|}}\,\mathrm{d}x \\
&\le \int_{\mathbb{R}^n\times\mathbb{R}^m} (\widetilde S_{\mathbf{D},W} f)(x)\, (S_{\mathbf{D}}\, g)(x)\,\mathrm{d}x \le \|\widetilde S_{\mathbf{D},W} f\|_{L^p(\mathbb{R}^{n+m})}\, \|S_{\mathbf{D}}\, g\|_{L^{p'}(\mathbb{R}^{n+m})} \sim_{p,d,n,m} \|\widetilde S_{\mathbf{D},W} f\|_{L^p(\mathbb{R}^{n+m})}\, \|g\|_{L^{p'}(\mathbb{R}^{n+m})},
\end{align*}
concluding the proof. $\square$

Combining the previous lemmas, we deduce the following.
Corollary 5.4.
Let $1 < p < \infty$, and let $W$ be a $\mathbf{D}$-dyadic biparameter $d\times d$-matrix valued $A_p$ weight on $\mathbb{R}^n\times\mathbb{R}^m$. Then, there holds
\[
\|S_{\mathbf{D},W} f\|_{L^p(\mathbb{R}^{n+m})} \lesssim_{d,p,n,m} [W]^{\frac1p + 2\gamma(p)}_{A_p,\mathbf{D}}\, \|f\|_{L^p(W)}, \qquad \forall f \in L^\infty_c(\mathbb{R}^{n+m};\mathbb{C}^d).
\]
Using the above upper bounds and duality, we can deduce lower bounds.
Corollary 5.5.
Let $1 < p < \infty$, and let $W$ be a $\mathbf{D}$-dyadic biparameter $d\times d$-matrix valued $A_p$ weight on $\mathbb{R}^n\times\mathbb{R}^m$. Then, there holds
\[
\|f\|_{L^p(W)} \lesssim_{d,p,n,m} [W]^{\frac{2\gamma(p')}{p-1}}_{A_p,\mathbf{D}}\, \|\widetilde S_{\mathbf{D},W} f\|_{L^p(\mathbb{R}^{n+m})}, \qquad \forall f \in L^\infty_c(\mathbb{R}^{n+m};\mathbb{C}^d),
\]
\[
\|f\|_{L^p(W)} \lesssim_{d,p,n,m} [W]^{\frac1p + \frac{2\gamma(p')}{p-1}}_{A_p,\mathbf{D}}\, \|S_{\mathbf{D},W} f\|_{L^p(\mathbb{R}^{n+m})}, \qquad \forall f \in L^\infty_c(\mathbb{R}^{n+m};\mathbb{C}^d).
\]
Proof. We use duality. Let $f, g \in L^\infty_c(\mathbb{R}^{n+m};\mathbb{C}^d)$. Then, we have
\begin{align*}
|(f,g)| &= \Big| \sum_{R\in\mathbf{D}} \sum_{\varepsilon\in\mathcal{E}} \langle f^\varepsilon_R, g^\varepsilon_R\rangle\Big| \le \sum_{R\in\mathbf{D}} \sum_{\varepsilon\in\mathcal{E}} \fint_R |W(x)^{1/p} f^\varepsilon_R|\cdot|W(x)^{-1/p} g^\varepsilon_R|\,\mathrm{d}x \\
&= \sum_{R\in\mathbf{D}} \sum_{\varepsilon\in\mathcal{E}} \int_{\mathbb{R}^{n+m}} |W(x)^{1/p} f^\varepsilon_R|\, \frac{\mathbf{1}_R(x)}{\sqrt{|R|}}\cdot |W'(x)^{1/p'} g^\varepsilon_R|\, \frac{\mathbf{1}_R(x)}{\sqrt{|R|}}\,\mathrm{d}x \le \int_{\mathbb{R}^{n+m}} \big( \widetilde S_{\mathbf{D},W} f(x)\big)\big( \widetilde S_{\mathbf{D},W'}\, g(x)\big)\,\mathrm{d}x \\
&\le \|\widetilde S_{\mathbf{D},W} f\|_{L^p(\mathbb{R}^{n+m})}\, \|\widetilde S_{\mathbf{D},W'}\, g\|_{L^{p'}(\mathbb{R}^{n+m})} \lesssim_{p,d,n,m} \|\widetilde S_{\mathbf{D},W} f\|_{L^p(\mathbb{R}^{n+m})}\, [W']^{2\gamma(p')}_{A_{p'},\mathbf{D}}\, \|g\|_{L^{p'}(W')} \\
&\sim_{p,d} \|\widetilde S_{\mathbf{D},W} f\|_{L^p(\mathbb{R}^{n+m})}\, [W]^{\frac{2\gamma(p')}{p-1}}_{A_p,\mathbf{D}}\, \|g\|_{L^{p'}(W')},
\end{align*}
so $\|f\|_{L^p(W)} \lesssim_{p,d,n,m} [W]^{\frac{2\gamma(p')}{p-1}}_{A_p,\mathbf{D}} \|\widetilde S_{\mathbf{D},W} f\|_{L^p(\mathbb{R}^{n+m})}$. The proof of the estimate involving $S_{\mathbf{D},W}$ is similar. It suffices just to notice that for all $R \in \mathbf{D}$ and for all $\varepsilon \in \mathcal{E}$ we have
\[
|\langle f^\varepsilon_R, g^\varepsilon_R\rangle| \le |W_R f^\varepsilon_R|\cdot|(W_R)^{-1} g^\varepsilon_R| \le |W_R f^\varepsilon_R|\cdot|W'_R g^\varepsilon_R|,
\]
where in the second inequality we used Lemma 3.5. $\square$

In particular, the set of all finite linear combinations of (bicancellative) $\mathbf{D}$-Haar functions is dense in $L^p(W)$. Using the above one-weight upper bounds, we can deduce two-weight upper bounds.

Corollary 5.6.
Let $W, U$ be $d\times d$-matrix valued weights on $\mathbb{R}^{n+m}$. Then, there holds
\[
\|S_{\mathbf{D},W} f\|_{L^p(\mathbb{R}^{n+m})} \lesssim_{d,p,n,m} [W,U']^{1/p}_{A_p,\mathbf{D}}\, [U]^{\frac1p + 2\gamma(p)}_{A_p,\mathbf{D}}\, \|f\|_{L^p(U)}.
\]
If $p = 2$, then
\[
\|S_{\mathbf{D},W} f\|_{L^2(\mathbb{R}^{n+m})} \lesssim_{d,n,m} [W,U']^{1/2}_{A_2,\mathbf{D}}\, [U]^{2}_{A_2,\mathbf{D}}\, \|f\|_{L^2(U)}.
\]
Proof. We notice that
\begin{align*}
S_{\mathbf{D},W} f &= \Big( \sum_{R\in\mathbf{D}} |W_R f_R|^2\, \frac{\mathbf{1}_R}{|R|}\Big)^{1/2} \le \Big( \sum_{R\in\mathbf{D}} |W_R U_R^{-1}|^2\cdot|U_R f_R|^2\, \frac{\mathbf{1}_R}{|R|}\Big)^{1/2} \lesssim_{d,p} \Big( \sum_{R\in\mathbf{D}} |W_R U'_R|^2\cdot|U_R f_R|^2\, \frac{\mathbf{1}_R}{|R|}\Big)^{1/2} \\
&\lesssim_{p,d} [W,U']^{1/p}_{A_p,\mathbf{D}} \Big( \sum_{R\in\mathbf{D}} |U_R f_R|^2\, \frac{\mathbf{1}_R}{|R|}\Big)^{1/2} = [W,U']^{1/p}_{A_p,\mathbf{D}}\, S_{\mathbf{D},U} f,
\end{align*}
where in the first $\lesssim_{d,p}$ we used Lemma 3.5. We saw above that
\[
\|S_{\mathbf{D},U} f\|_{L^p(\mathbb{R}^{n+m})} \lesssim_{p,d,n,m} [U]^{\frac1p + 2\gamma(p)}_{A_p,\mathbf{D}}\, \|f\|_{L^p(U)},
\]
and that if $p = 2$,
\[
\|S_{\mathbf{D},U} f\|_{L^2(\mathbb{R}^{n+m})} = \|\widetilde S_{\mathbf{D},U} f\|_{L^2(\mathbb{R}^{n+m})} \lesssim_{d,n,m} [U]^{2}_{A_2,\mathbf{D}}\, \|f\|_{L^2(U)},
\]
yielding the desired result. $\square$

6. $L^p$ matrix-weighted bounds for biparameter Haar multipliers

6.1. One-weight sparse domination and bounds for Haar multipliers.
Let $\mathbf{D}$ be any product dyadic grid in $\mathbb{R}\times\mathbb{R}$. Let $\varepsilon = (\varepsilon_R)_{R\in\mathbf{D}}$ be a sequence of complex numbers such that $C := \sup_{R\in\mathbf{D}} |\varepsilon_R| < \infty$. Let $\mathcal{H}$ be the set of all finite linear combinations of (bicancellative) $\mathbf{D}$-Haar functions on $\mathbb{R}\times\mathbb{R}$. Set
\[
T_\varepsilon f := \sum_{R\in\mathbf{D}} \varepsilon_R f_R\, h_R, \qquad f \in L^2(\mathbb{R}^2;\mathbb{C}^d).
\]
Theorem 6.1. Let $1 < p < \infty$, and let $W$ be a $\mathbf{D}$-dyadic biparameter $d\times d$-matrix $A_p$ weight on $\mathbb{R}\times\mathbb{R}$. For all $f, g \in \mathcal{H}$ and for $0 < \delta < 1/2$, there exists a weakly $\delta$-sparse family $\mathcal{S}$ of dyadic rectangles (depending on $W, d, f, g, p, \delta$), such that
\[
|(T_\varepsilon f, g)| \lesssim_{\delta,d} C \sum_{R\in\mathcal{S}} \big( \widetilde S_{\mathbf{D},W} f\big)_R \big( \widetilde S_{\mathbf{D},W'}\, g\big)_R\, |R|.
\]
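To orient the reader, consider the unweighted scalar specialization (our own illustration, not a statement from the text): for $d = 1$ and $W \equiv 1$ one has $W' \equiv 1$ and both weighted square functions collapse to the unweighted one.

```latex
% Scalar, unweighted case d = 1, W = 1 (so W' = 1 and W(x)^{1/p} = 1):
% both \widetilde S_{D,W} and \widetilde S_{D,W'} reduce to S_D, and the
% theorem reads
\[
  |(T_\varepsilon f, g)| \lesssim_{\delta}
  C \sum_{R \in \mathcal{S}} (S_{\mathbf D} f)_R\, (S_{\mathbf D}\, g)_R\, |R|,
\]
% i.e. the form of sparse domination obtained in the scalar argument of [3]
% that the proof below adapts to the matrix-weighted setting.
```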
See Subsection 2.2 for a brief overview of the concept of sparse families. Note thatin the estimate of the theorem, the implied constant does not depend on W .The proof of the theorem is actually a straightforward adaptation of the scalar caseof [3] that is done in subsection 2.1 of that paper. Here we explain the main steps forthe reader’s convenience, emphasizing the necessary changes with respect to [3].Fix f, g ∈ H and 0 < δ < /
2. One can then pick k ∈ { , , , } and pairwise disjointdyadic rectangles Q , . . . , Q k in R for which there exists no R ∈ D with Q i ∪ Q j ⊆ R if i ≠ j, i, j = , . . . , k, and such that every dyadic rectangle over which at least one of f, g has a non-zero Haar coefficient is contained in one of Q , . . . , Q k . Thus ( T ε f, g ) = k ∑ j = ∑ R ∈ D R ⊆ Q j ε R ⟨ f R , g R ⟩ Thus, it suffices to prove a sparse bound as in Theorem 6.1 for each of the terms ∑ R ∈ D R ⊆ Q j ε R ⟨ f R , g R ⟩ , provided the obtained sparse family consists of dyadic subrectangles of Q j , for each j = , . . . , k . We will do this only for j = k =
0, since the proof for anyother case would be identical. In other words, without loss of generality we assume thatevery dyadic rectangle over which at least one of f, g has a non-zero Haar coefficient iscontained in Q .We begin by estimating ∣( T ε f, g )∣ ≤ C ∑ R ∈ D ∣⟨ f R , g R ⟩∣ . We notice that for all R ∈ D and for all x ∈ R we have a R ∶ = ∣⟨ f R , g R ⟩∣ = ∣⟨ W ( x ) / p f R , W ( x ) − / p g R ⟩∣ ≤ ∣ W ( x ) / p f R ∣ ⋅ ∣ W ′ ( x ) / p ′ g R ∣ . This is the first change compared to [3, Subsection 2.1]. There is only one morechange. Define a f ∶ = c ( ̃ S D ,W f ) Q , a g ∶ = c ( ̃ S D ,W ′ g ) Q , where c is some large finitepositive constant to be chosen below. Next, as in [3, Subsection 2.1], setΩ k ∶ = { x ∈ Q ∶ ̃ S D ,W f > k a f } ∪ { x ∈ Q ∶ ̃ S D ,W ′ g > k a g } , ∀ k = , , , . . . , take the previous constant c to be large enough (independently of both f and g andthe weight W ) so that ∣ Ω ∣ ≤ ∣ Q ∣/ R ∶ = { R ∈ D ∶ R ⊆ Q , ∣ R ∩ Ω ∣ ≤ ∣ R ∣/ } and F k ∶ = { R ∈ D ∶ R ⊆ Q , ∣ R ∩ Ω k ∣ > ∣ R ∣/ ∣ R ∩ Ω k + ∣ ≤ ∣ R ∣/ } . Observe that, since both f and g are in H , there are finitely many rectangles in R and F k , for each k ≥ , that correspond to non-zero Haar coefficients of either f or g. The estimate of ∑ R ∈ D a R follows the same lines as in [3, Subsection 2.1]. Note that,given f, g ∈ H , there exists k such that F k does not contribute to the previous sum for k ≥ k (again because both f and g have finitely many non-zero Haar coefficients).Thus, we split the sum as ∑ R ∈ D a R = ∑ R ∈R a R + k ∑ k = ∑ R ∈F k a R . In our vector-valued setting, the estimate for the first sum becomes ∑ R ∈R a R = ∑ R ∈R ∫ R ∖ Ω a R R ∖ Ω ( x )∣ R ∖ Ω ∣ d x ≤ ∑ R ∈R ∫ R ∖ Ω a R R ( x )∣ R ∣ d x ≤ ∑ R ∈R ∫ Q ∖ Ω ∣ W ( x ) / p f R ∣ ⋅ ∣ W ′ ( x ) / p ′ g R ∣ R ( x )∣ R ∣ d x ≤ ∫ Q ∖ Ω ̃ S D ,W f ( x ) ̃ S D ,W ′ g ( x ) d x ≤ c ∣ Q ∣( ̃ S D ,W f ) Q ( ̃ S D ,W ′ g ) Q . 
Next, using the Córdoba–Fefferman selection algorithm, one defines a weakly sparse family of dyadic rectangles that will bound the second sum in the exact same way as in [3, Subsection 2.1]. First take an ordering of the rectangles R^{k₀}_i ∈ ℱ_{k₀} and let R^{k₀}_1 be the first of them. Assuming that R^{k₀}_{n−1} has been chosen, with n ≥ 2, take R^{k₀}_n to be the minimal rectangle with respect to the previous ordering such that

|R^{k₀}_n ∩ ⋃_{j<n} R^{k₀}_j| < (1 − δ)|R^{k₀}_n|.

Let S_{k₀} = {R^{k₀}_i}_i be the collection of rectangles selected in the weakly δ-sparse family so far. Then one continues choosing the rectangles in the sparse family inductively over the collections ℱ_k. Namely, assume that we have considered up to the collection ℱ_{m+1} in this process, with m < k₀, and selected the set of rectangles S_{m+1} so far. Order the rectangles R^m_i ∈ ℱ_m, for example with the same criterion as before. Set R^m_1 to be the first rectangle with respect to this ordering such that

|R^m_1 ∩ ⋃_{R ∈ S_{m+1}} R| < (1 − δ)|R^m_1|

and, assuming that R^m_{n−1} has been chosen, with n ≥ 2, take R^m_n to be the minimal rectangle with respect to the previous ordering such that

|R^m_n ∩ (⋃_{j<n} R^m_j ∪ ⋃_{R ∈ S_{m+1}} R)| < (1 − δ)|R^m_n|.

Then, let S_m = {R^m_i}_i ∪ S_{m+1}. Our sparse family is S = S₀. Indeed, note that for each rectangle R^m_i ∈ S we can choose the set E(R^m_i) = R^m_i ∖ (⋃_{j<i} R^m_j ∪ ⋃_{R ∈ S_{m+1}} R), so that the collection {E(R)}_{R ∈ S} is pairwise disjoint and, by construction, |E(R)| ≥ δ|R| for all R ∈ S.
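The selection rule above is easy to prototype. The following sketch (ours, a simplified single-collection illustration, not code from the paper) runs the greedy selection over a finite list of axis-parallel rectangles, each represented as a set of unit cells, and checks that every selected rectangle retains a fresh set E(R) of measure at least δ|R|:

```python
# Hypothetical illustration of the greedy Cordoba--Fefferman selection: scan
# rectangles in a fixed order and keep R only if the part of R already covered
# by previously kept rectangles has measure < (1 - delta)*|R|.

def select_sparse(rectangles, delta):
    """rectangles: list of frozensets of unit cells; returns kept indices."""
    covered = set()          # union of all previously selected rectangles
    kept = []
    for idx, rect in enumerate(rectangles):
        if len(rect & covered) < (1 - delta) * len(rect):
            kept.append(idx)
            covered |= rect
    return kept

def fresh_part(rectangles, kept, i):
    """E(R_i): cells of R_i not covered by earlier selected rectangles."""
    earlier = set().union(*(rectangles[j] for j in kept if j < i))
    return rectangles[i] - earlier

def cells(x0, x1, y0, y1):
    return frozenset((x, y) for x in range(x0, x1) for y in range(y0, y1))

rects = [cells(0, 4, 0, 4), cells(1, 5, 1, 5), cells(0, 2, 0, 2), cells(6, 8, 6, 8)]
delta = 0.25
kept = select_sparse(rects, delta)          # -> [0, 1, 3]
for i in kept:
    # by construction each kept rectangle owns |E(R)| >= delta * |R| fresh cells
    assert len(fresh_part(rects, kept, i)) >= delta * len(rects[i])
```

The third rectangle is rejected because it is entirely covered by the first, exactly as in the proof: rejected rectangles are the ones later controlled through the selected family.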
To estimate the second sum, split it as

∑_{k=0}^{k₀} ∑_{R = R^k_i ∈ S} a_R + ∑_{k=0}^{k₀} ∑_{R ∈ ℱ_k, R ∉ S} a_R.

To bound the first of these two sums we use the defining properties of any given rectangle R = R^k_i ∈ S to get

a_R = ∫_{R^k_i ∩ Ω^c_{k+1}} a_R 1_{R^k_i ∩ Ω^c_{k+1}}(x)/|R^k_i ∩ Ω^c_{k+1}| dx ≤ 2 ∫_{R^k_i ∩ Ω^c_{k+1}} a_R 1_R(x)/|R| dx ≤ 2 ∫_{R^k_i ∩ Ω^c_{k+1}} |W(x)^{1/p} f_R| ⋅ |W′(x)^{1/p′} g_R| 1_R(x)/|R| dx ≤ 2 ∫_{R^k_i ∩ Ω^c_{k+1}} S̃_{𝒟,W} f(x) S̃_{𝒟,W′} g(x) dx.

From here, the reasoning is exactly the same as in [3]. For x ∈ Ω^c_{k+1} we have both S̃_{𝒟,W} f(x) ≲ 2^k a_f and S̃_{𝒟,W′} g(x) ≲ 2^k a_g and, since R^k_i ∈ ℱ_k, either S̃_{𝒟,W} f(x) ≳ 2^k a_f or S̃_{𝒟,W′} g(x) ≳ 2^k a_g for x in some set which covers at least a quarter of R^k_i ∩ Ω^c_{k+1}. We can assume that S̃_{𝒟,W} f(x) ≳ 2^k a_f, as the other case is argued in the same way. Thus one gets

∫_{R^k_i ∩ Ω^c_{k+1}} S̃_{𝒟,W} f(x) S̃_{𝒟,W′} g(x) dx ≲ (2^k (S̃_{𝒟,W} f)_{Q₁} |R^k_i|) (1/|R^k_i|) ∫_{R^k_i} S̃_{𝒟,W′} g(x) dx ≲ (S̃_{𝒟,W} f)_{R^k_i} (S̃_{𝒟,W′} g)_{R^k_i} |R^k_i|,

from which the sparse bound follows. For the second sum, since R ∉ S, assuming that R ∈ ℱ_k, we have that

|R ∩ ⋃_{m ≥ k} ⋃_i (R^m_i ∩ Ω^c_{m+1})| ≥ (1/2 − δ)|R|, if δ < 1/2.

Thus, the second sum is bounded by

(1/2 − δ)^{−1} ∑_k ∑_{R ∈ ℱ_k, R ∉ S} a_R |R ∩ ⋃_{m ≥ k} ⋃_i (R^m_i ∩ Ω^c_{m+1})| / |R| ≤ (1/2 − δ)^{−1} ∑_k ∑_{R ∈ ℱ_k, R ∉ S} a_R ∑_{m ≥ k} ∑_i |R ∩ R^m_i ∩ Ω^c_{m+1}| / |R| ≤ (1/2 − δ)^{−1} ∑_k ∑_{m ≥ k} ∑_i ∫_{R^m_i ∩ Ω^c_{m+1}} ( ∑_{R ∈ ℱ_k, R ∉ S} a_R 1_R(x)/|R| ) dx.

Finally, the same argument as for the previous sum gives the result.

6.1.1.
One weight bounds for Haar multipliers.
Let 1 < p < ∞. Let ε = (ε_R)_{R∈𝒟} be a sequence of complex numbers such that C := sup_{R∈𝒟} |ε_R| < ∞. We will prove that

∥T_ε∥_{L^p(W) → L^p(W)} ≲_{p,d} C [W]_{A_p,𝒟}^{α(p)},

where

(6.1) α(p) := γ(p) + γ(p′)/(p − 1),

using the sparse domination of Theorem 6.1. Thus, it suffices to prove that if S is any weakly sparse family of dyadic rectangles (say of sparseness constant 1/2), then

∑_{R∈S} (S̃_{𝒟,W} f)_R (S̃_{𝒟,W′} g)_R |R| ≲_{p,d} [W]_{A_p,𝒟}^{α(p)} ∥f∥_{L^p(W)} ∥g∥_{L^{p′}(W′)}.

We have

∑_{R∈S} (S̃_{𝒟,W} f)_R (S̃_{𝒟,W′} g)_R |R| ≲ ∑_{R∈S} (S̃_{𝒟,W} f)_R (S̃_{𝒟,W′} g)_R |E(R)| = ∫_{ℝ²} ∑_{R∈S} (S̃_{𝒟,W} f)_R (S̃_{𝒟,W′} g)_R 1_{E(R)}(x) dx ≤ ∫_{ℝ²} ∑_{R∈S} M_𝒟(S̃_{𝒟,W} f)(x) M_𝒟(S̃_{𝒟,W′} g)(x) 1_{E(R)}(x) dx ≤ ∫_{ℝ²} M_𝒟(S̃_{𝒟,W} f)(x) M_𝒟(S̃_{𝒟,W′} g)(x) dx ≤ ∥M_𝒟(S̃_{𝒟,W} f)∥_{L^p(ℝ²)} ∥M_𝒟(S̃_{𝒟,W′} g)∥_{L^{p′}(ℝ²)} ≲ ∥S̃_{𝒟,W} f∥_{L^p(ℝ²)} ∥S̃_{𝒟,W′} g∥_{L^{p′}(ℝ²)} ≲_{d,p} [W]_{A_p,𝒟}^{γ(p)} ∥f∥_{L^p(W)} [W′]_{A_{p′},𝒟}^{γ(p′)} ∥g∥_{L^{p′}(W′)} ∼_{d,p} [W]_{A_p,𝒟}^{α(p)} ∥f∥_{L^p(W)} ∥g∥_{L^{p′}(W′)},

where {E(R)}_{R∈S} are the pairwise disjoint sets related to the weakly sparse family S, M_𝒟 denotes the usual (unweighted) dyadic strong maximal function in ℝ²,

M_𝒟(h) := sup_{R∈𝒟} (|h|)_R 1_R,

and in the first ≲_{d,p} we used Lemma 5.2. In particular, in the special case p = 2,

∥T_ε∥_{L²(W)→L²(W)} ≲_d C [W]_{A₂,𝒟}.

Two-weight sparse domination and bounds for Haar multipliers.
Let 1 < p < ∞, and let W, U be 𝒟-dyadic biparameter d × d-matrix valued A_p weights on ℝ × ℝ with [W, U′]_{A_p,𝒟} < ∞.

Theorem 6.2.
For all functions f, g ∈ ℋ and for 0 < δ < 1/2, there exists a δ-weakly sparse family S of dyadic rectangles (depending on W, U, d, f, g, δ), such that

|(T_ε(f), g)| ≲_{δ,d} C [W, U′]_{A_p,𝒟}^{1/p} ∑_{R∈S} (S_{𝒟,U} f)_R (S_{𝒟,W′} g)_R |R|.

It will be obvious how to modify the proof in Subsection 6.1 to prove this theorem, once we explain how one can estimate the terms a_R := |⟨f_R, g_R⟩|. We have

a_R = |⟨U_R f_R, (U_R)^{−1} g_R⟩| ≤ |U_R f_R| ⋅ |(U_R)^{−1} (W′_R)^{−1}| ⋅ |W′_R g_R|.

Observe that

|(U_R)^{−1} (W′_R)^{−1}| ≲_d |U′_R (W′_R)^{−1}| = |(W′_R)^{−1} U′_R| ≲_d |W″_R U′_R| = |W_R U′_R| ≲_d [W, U′]_{A_p}^{1/p},

where in the first and the second ≲_d we used Lemma 3.5 coupled with (3.1). The desired conclusion now follows from the same computations as in Subsection 6.1. The only thing that one has to keep in mind is to make the necessary modifications to take into account the different square functions appearing here. Namely, one has to define a_f = c (S_{𝒟,U} f)_{Q₁} and a_g = c (S_{𝒟,W′} g)_{Q₁}, with c some large constant to be chosen in the same way. The same change applies to the sets Ω_k, so that here one takes

Ω_k := { x ∈ Q₁ : S_{𝒟,U} f > 2^k a_f } ∪ { x ∈ Q₁ : S_{𝒟,W′} g > 2^k a_g }, ∀ k = 0, 1, 2, . . . .

The rest of the argument remains the same. Using the theorem, similarly to the one-weight case we deduce for p = 2

∥T_ε∥_{L²(U)→L²(W)} ≲_d C [W, U′]_{A₂,𝒟}^{1/2} [W]_{A₂,𝒟}^{1/2} [U]_{A₂,𝒟}^{1/2},

and for general 1 < p < ∞ the bound

∥T_ε∥_{L^p(U)→L^p(W)} ≲_{p,d} C [W, U′]_{A_p,𝒟}^{1/p} [W]_{A_p,𝒟}^{1/p + γ(p′)/(p−1)} [U]_{A_p,𝒟}^{1/p + γ(p)}.

Matrix-weighted bounds without relying on sparse domination.
It is worth noting that one can get weighted bounds for Haar multipliers directly, without relying on sparse bounds. Namely, set g := T_ε f. Then, using Corollary 5.5 in the first step and Corollary 5.6 in the last step we have

∥g∥_{L^p(W)} ≲_{p,d,n,m} [W]_{A_p,𝒟}^{1/p + γ(p′)/(p−1)} ∥S_{𝒟,W} g∥_{L^p(ℝ^{n+m})} ≤ C [W]_{A_p,𝒟}^{1/p + γ(p′)/(p−1)} ∥S_{𝒟,W} f∥_{L^p(ℝ^{n+m})} ≲_{p,d,n,m} C [W]_{A_p,𝒟}^{1/p + γ(p′)/(p−1)} [W, U′]_{A_p,𝒟}^{1/p} [U]_{A_p,𝒟}^{1/p + γ(p)} ∥f∥_{L^p(U)},

and if p = 2

∥g∥_{L²(W)} ≲_{d,n,m} [W]_{A₂,𝒟} ∥S_{𝒟,W} g∥_{L²(ℝ^{n+m})} ≤ C [W]_{A₂,𝒟} ∥S_{𝒟,W} f∥_{L²(ℝ^{n+m})} ≲_{d,n,m} C [W]_{A₂,𝒟} [W, U′]_{A₂,𝒟}^{1/2} [U]_{A₂,𝒟}^{1/2} ∥f∥_{L²(U)}.

Similarly of course we get the one-weight bound (applying Corollary 5.5 for S̃_{𝒟,W} instead of S_{𝒟,W}, and Lemma 5.2 instead of Corollary 5.4)

∥g∥_{L^p(W)} ≲_{p,d,n,m} C [W]_{A_p,𝒟}^{γ(p) + γ(p′)/(p−1)} ∥f∥_{L^p(W)},

and if p = 2

∥g∥_{L²(W)} ≲_{d,n,m} C [W]_{A₂,𝒟} ∥f∥_{L²(W)}.

7. L^p matrix-weighted bounds for paraproduct free Journé operators

One-parameter matrix-weighted shifted square functions.
Let 𝒟 be any dyadic grid in ℝ. Let i, j be non-negative integers. The standard (unweighted) shifted square function corresponding to the shift parameters i, j is given by

S^{i,j}_𝒟 f := ( ∑_{I∈𝒟} ( ∑_{K∈ch_i(I)} |f_K| )² ∑_{L∈ch_j(I)} 1_L/|L| )^{1/2}, f ∈ L²(ℝ; ℂ^d).

Let 1 < p < ∞, and let W be a d × d-matrix valued weight on ℝ. Define the matrix-weighted shifted square function corresponding to the weight W (and the exponent p) and the shift parameters i, j by

S^{i,j}_{𝒟,W} f := ( ∑_{I∈𝒟} ( ∑_{K∈ch_i(I)} |W_I f_K| )² ∑_{L∈ch_j(I)} 1_L/|L| )^{1/2}, f ∈ L²(ℝ; ℂ^d),

where reducing operators for W are taken with respect to the exponent p. We have of course also the alternative definition

S̃^{i,j}_{𝒟,W} f(x) := ( ∑_{I∈𝒟} ( ∑_{K∈ch_i(I)} |W(x)^{1/p} f_K| )² ∑_{L∈ch_j(I)} 1_L(x)/|L| )^{1/2}, x ∈ ℝ, f ∈ L²(ℝ; ℂ^d).
Both of these definitions follow [14], up to incorporating the weights in the operator. Consider also the variant of S̃^{i,j}_{𝒟,W}

S̃^{i,j,1}_{𝒟,W} f(x) := ( ∑_{I∈𝒟} |W(x)^{1/p} ∑_{K∈ch_i(I)} f_K|² ∑_{L∈ch_j(I)} 1_L(x)/|L| )^{1/2}, x ∈ ℝ, f ∈ L²(ℝ; ℂ^d).

Let us remark that in fact

S^{i,j}_{𝒟,W} f = 2^{j/2} ( ∑_{I∈𝒟} ( ∑_{K∈ch_i(I)} |W_I f_K| )² 1_I/|I| )^{1/2},

S̃^{i,j}_{𝒟,W} f(x) = 2^{j/2} ( ∑_{I∈𝒟} ( ∑_{K∈ch_i(I)} |W(x)^{1/p} f_K| )² 1_I(x)/|I| )^{1/2},

S̃^{i,j,1}_{𝒟,W} f(x) = 2^{j/2} ( ∑_{I∈𝒟} |W(x)^{1/p} ∑_{K∈ch_i(I)} f_K|² 1_I(x)/|I| )^{1/2}.

We note that we can view S̃^{i,j,1}_{𝒟,W} as a linear vector valued operator. Namely, define

⃗S̃^{i,j,1}_{𝒟,W} f(x) := { 2^{j/2} W(x)^{1/p} ( ∑_{K∈ch_i(I)} f_K ) h_I(x) }_{I∈𝒟}, x ∈ ℝ.

It is then clear that S̃^{i,j,1}_{𝒟,W} f(x) = ∥⃗S̃^{i,j,1}_{𝒟,W} f(x)∥_{ℓ²(𝒟; ℂ^d)}. Below we obtain L^p matrix-weighted bounds for S̃^{i,j}_{𝒟,W}.

7.1.1. Sparse domination.
Let 1 < p < ∞. Let 𝒟 be any dyadic grid in ℝ. Let W be a 𝒟-dyadic d × d-matrix valued A_p weight on ℝ. For all J ∈ 𝒟, define the localized operator

S̃^{i,j}_{𝒟,W,J} f(x) := ( ∑_{R∈𝒟(J)} ( ∑_{P∈ch_i(R)} |W(x)^{1/p} f_P| )² ( ∑_{Q∈ch_j(R)} 1_Q(x)/|Q| ) )^{1/2} = 2^{j/2} ( ∑_{R∈𝒟(J)} ( ∑_{P∈ch_i(R)} |W(x)^{1/p} f_P| )² 1_R(x)/|R| )^{1/2},

for f ∈ L²(ℝ; ℂ^d). Given any dyadically sparse collection S of dyadic intervals in ℝ, following [24] we define the sparse positive operator

A_{S,W} f(x) := ( ∑_{L∈S} ((|W_L f|)_L)² |W(x)^{1/p} W_L^{−1}|² 1_L(x) )^{1/2}.

See Subsection 2.2 for a brief overview of the concept of sparse families.
Proposition 7.1.
Let J ∈ 𝒟. Let i, j be non-negative integers. Then, for all ε ∈ (0, 1) and for all f ∈ L²(ℝ; ℂ^d), there exists a dyadically ε-sparse collection S of dyadic subintervals of J (depending on ε, J, p, W, f and i), such that

S̃^{i,j}_{𝒟,W,J} f ≲_{p,d,ε} 2^i 2^{(i+j)/2} A_{S,W} f.

The proof will be an adaptation of the proof of pointwise sparse domination of the usual matrix-weighted square function in [24, Section 2], in the same way that the proof in [3, Proposition 5.3] is an adaptation of the proof in [20]. Set S₀ := {J}. Let S₁ be the family of all maximal dyadic subintervals L of J such that

∑_{I∈𝒟(J), L⊆I} ( ∑_{P∈ch_i(I)} |W_J f_P| )² / |I| > c² 2^{3i} ((|W_J f|)_J)²,

for some large enough constant c > 1 (to be chosen below, depending only on ε and d). It is clear that

I ⊆ { S^{i,j}_𝒟(1_J W_J f) > c ⋅ 2^i 2^{(i+j)/2} (|W_J f|)_J }, ∀ I ∈ S₁

(we emphasize that W_J is a constant matrix). It was proved in [2, Proposition 3.6] that

∥S^{i,j}_𝒟∥_{L¹(ℝ;ℂ)→L^{1,∞}(ℝ)} ≲ 2^i 2^{(i+j)/2},

which trivially implies ∥S^{i,j}_𝒟∥_{L¹(ℝ;ℂ^d)→L^{1,∞}(ℝ)} ≲_d 2^i 2^{(i+j)/2}. It follows that

∑_{L∈S₁} |L| ≲_d |J|/c,

therefore ∑_{L∈S₁} |L| ≤ (1 − ε)|J| provided c is chosen to be large enough (depending only on d and ε). Set F := 𝒟(J) ∖ (⋃_{L∈S₁} 𝒟(L)). Set also E := ⋃_{L∈S₁} L. Then, we can write

S̃^{i,j}_{𝒟,W,J} f(x)² = 2^j ∑_{L∈F} ( ∑_{I∈ch_i(L)} |W(x)^{1/p} f_I| )² 1_{L∩E}(x)/|L|   (7.1)
+ 2^j ∑_{L∈F} ( ∑_{I∈ch_i(L)} |W(x)^{1/p} f_I| )² 1_{L∖E}(x)/|L|
+ 2^j ∑_{L∈S₁} ∑_{I∈𝒟(L)} ( ∑_{K∈ch_i(I)} |W(x)^{1/p} f_K| )² 1_I(x)/|I|.
In each of the terms

2^j ∑_{I∈𝒟(L)} ( ∑_{K∈ch_i(I)} |W(x)^{1/p} f_K| )² 1_I(x)/|I|, L ∈ S₁,

we just iterate the construction. So it remains to estimate the first two terms on the right-hand side of (7.1). Call them A(x) and B(x) respectively. To estimate the term A(x), we notice that A(x) vanishes outside E, and that for all I ∈ S₁ and for all x ∈ I, if L ∈ F is such that x ∈ L, then L ∩ I ≠ ∅, therefore necessarily I ⊊ L, therefore

A(x) = 2^j ∑_{L∈F, I⊊L} ( ∑_{K∈ch_i(L)} |W(x)^{1/p} f_K| )² / |L| ≤ 2^j ∑_{L∈F, I⊊L} ( ∑_{K∈ch_i(L)} |W_J f_K| )² |W(x)^{1/p} W_J^{−1}|² / |L| = 2^j ∑_{L∈F, Î⊆L} ( ∑_{K∈ch_i(L)} |W_J f_K| )² |W(x)^{1/p} W_J^{−1}|² / |L| ≤ c² ⋅ 2^{2i} 2^{i+j} |W(x)^{1/p} W_J^{−1}|² ((|W_J f|)_J)² 1_J(x),

by the maximality of I (here Î denotes the dyadic parent of I). To estimate B(x), we notice that for all x ∈ J ∖ E, we have

∑_{I∈𝒟(J), L⊆I} ( ∑_{P∈ch_i(I)} |W_J f_P| )² / |I| ≤ c² ⋅ 2^{3i} ((|W_J f|)_J)²,

for all L ∈ 𝒟(J) with x ∈ L, therefore if (L_n)_{n=1}^∞ is the strictly decreasing (maybe finite) sequence of all intervals in F containing x, then

∑_{I∈𝒟(J), L_k⊆I} ( ∑_{P∈ch_i(I)} |W_J f_P| )² / |I| ≤ c² ⋅ 2^{3i} ((|W_J f|)_J)², ∀ k = 1, 2, . . . ,

therefore

B(x) = ∑_{k=1}^∞ 2^j ( ∑_{I∈ch_i(L_k)} |W(x)^{1/p} f_I| )² / |L_k| ≤ |W(x)^{1/p} W_J^{−1}|² lim_{k→∞} ∑_{l=1}^k 2^j ( ∑_{I∈ch_i(L_l)} |W_J f_I| )² / |L_l| ≤ |W(x)^{1/p} W_J^{−1}|² lim_{k→∞} 2^j ∑_{I∈𝒟(J), L_k⊆I} ( ∑_{K∈ch_i(I)} |W_J f_K| )² / |I| ≤ c² 2^{2i} 2^{i+j} |W(x)^{1/p} W_J^{−1}|² ((|W_J f|)_J)² 1_J(x),

concluding the proof.

L^p weighted bounds.

Let p, 𝒟, W be as before. Let f ∈ L²(ℝ; ℂ^d). Note that an application of the Monotone Convergence Theorem gives

∥S̃^{i,j}_{𝒟,W} f∥_{L^p(ℝ)} = lim_{N→∞} ∥S̃^{i,j}_{𝒟,W,N} f∥_{L^p(ℝ)},

where

S̃^{i,j}_{𝒟,W,N} f := 2^{j/2} ( ∑_{J∈𝒟_{−N}} ∑_{R∈𝒟(J)} ( ∑_{P∈ch_i(R)} |W(x)^{1/p} f_P| )² 1_R(x)/|R| )^{1/2},

and 𝒟_{−N} := { J ∈ 𝒟 : |J| = 2^N }, for all N = 1, 2, . . . .
It is clear by disjointness of the intervals J ∈ 𝒟_{−N} that

∥S̃^{i,j}_{𝒟,W,N} f∥^p_{L^p(ℝ)} = ∑_{J∈𝒟_{−N}} ∥1_J S̃^{i,j}_{𝒟,W,N} f∥^p_{L^p(ℝ)} = ∑_{J∈𝒟_{−N}} ∥S̃^{i,j}_{𝒟,W,J} f∥^p_{L^p(ℝ)} = ∑_{J∈𝒟_{−N}} ∥S̃^{i,j}_{𝒟,W,J}(f 1_J)∥^p_{L^p(ℝ)}.

It was proved in [24, Section 3] that

∥A_{S,W}∥_{L^p(W)→L^p(ℝ)} ≲_{p,d} [W]_{A_p,𝒟}^{γ(p)}.

It follows by the sparse domination from above that

∥S̃^{i,j}_{𝒟,W,J} f∥_{L^p(ℝ)} ≲_{p,d} 2^i 2^{(i+j)/2} [W]_{A_p,𝒟}^{γ(p)} ∥f 1_J∥_{L^p(W)}.

It then follows by disjointness of the intervals in 𝒟_{−N} that

∥S̃^{i,j}_{𝒟,W,N} f∥_{L^p(ℝ)} ≲_{p,d} 2^i 2^{(i+j)/2} [W]_{A_p,𝒟}^{γ(p)} ∥f∥_{L^p(W)},

therefore

(7.2) ∥S̃^{i,j}_{𝒟,W}∥_{L^p(W)→L^p(ℝ)} ≲_{p,d} 2^i 2^{(i+j)/2} [W]_{A_p,𝒟}^{γ(p)}.

7.2. Biparameter matrix-weighted shifted square functions.
Let 𝒟 = 𝒟₁ × 𝒟₂ be any dyadic product grid in ℝ × ℝ. Let i = (i₁, i₂), j = (j₁, j₂) be pairs of non-negative integers. The standard (unweighted) biparameter shifted square function corresponding to the shift parameters i, j is given by

S^{i,j}_𝒟 f := ( ∑_{R∈𝒟} ( ∑_{P∈ch_i(R)} |f_P| )² ∑_{Q∈ch_j(R)} 1_Q/|Q| )^{1/2}, f ∈ L²(ℝ²; ℂ^d).

Let 1 < p < ∞, and let W be a d × d-matrix valued weight on ℝ². Define the biparameter matrix-weighted shifted square function corresponding to the weight W (and the exponent p) and the shift parameters i, j by

S^{i,j}_{𝒟,W} f := ( ∑_{R∈𝒟} ( ∑_{P∈ch_i(R)} |W_R f_P| )² ∑_{Q∈ch_j(R)} 1_Q/|Q| )^{1/2}, f ∈ L²(ℝ²; ℂ^d),

where reducing operators for W are taken with respect to the exponent p. We have of course also the alternative definition

S̃^{i,j}_{𝒟,W} f(x) := ( ∑_{R∈𝒟} ( ∑_{P∈ch_i(R)} |W(x)^{1/p} f_P| )² ∑_{Q∈ch_j(R)} 1_Q(x)/|Q| )^{1/2},

for x ∈ ℝ² and f ∈ L²(ℝ²; ℂ^d). Both of these definitions follow [15], up to incorporating the weights in the operator. Consider also the variant of S̃^{i,j}_{𝒟,W}

S̃^{i,j,1}_{𝒟,W} f(x) := ( ∑_{R∈𝒟} |W(x)^{1/p} ∑_{P∈ch_i(R)} f_P|² ∑_{Q∈ch_j(R)} 1_Q(x)/|Q| )^{1/2}

for x ∈ ℝ² and f ∈ L²(ℝ²; ℂ^d). Let us remark that in fact

S^{i,j}_{𝒟,W} f = 2^{(j₁+j₂)/2} ( ∑_{R∈𝒟} ( ∑_{P∈ch_i(R)} |W_R f_P| )² 1_R/|R| )^{1/2},

S̃^{i,j}_{𝒟,W} f(x) = 2^{(j₁+j₂)/2} ( ∑_{R∈𝒟} ( ∑_{P∈ch_i(R)} |W(x)^{1/p} f_P| )² 1_R(x)/|R| )^{1/2},

S̃^{i,j,1}_{𝒟,W} f(x) = 2^{(j₁+j₂)/2} ( ∑_{R∈𝒟} |W(x)^{1/p} ∑_{P∈ch_i(R)} f_P|² 1_R(x)/|R| )^{1/2}.

We will now investigate L^p matrix-weighted bounds for these operators. We begin with the case p = 2.

Lemma 7.2.
Assume p = 2. Let W, U be d × d-matrix valued weights on ℝ². Then, there holds

∥S̃^{i,j}_{𝒟,W} f∥_{L²(ℝ²)} ≲_d ∥S^{i,j}_{𝒟,W} f∥_{L²(ℝ²)} ≲_d 2^{(i₁+i₂+j₁+j₂)/2} [W]_{A₂,𝒟}^{1/2} ∥f∥_{L²(W)}

and

∥S̃^{i,j}_{𝒟,W} f∥_{L²(ℝ²)} ≲_d [W, U′]_{A₂,𝒟}^{1/2} ∥S^{i,j}_{𝒟,U} f∥_{L²(ℝ²)} ≲_d 2^{(i₁+i₂+j₁+j₂)/2} [W, U′]_{A₂,𝒟}^{1/2} [U]_{A₂,𝒟}^{1/2} ∥f∥_{L²(U)},

for all f ∈ L^∞_c(ℝ²; ℂ^d).

Proof.
First of all, we have

∥S̃^{i,j}_{𝒟,W} f∥²_{L²(ℝ²)} = 2^{j₁+j₂} ∑_{R∈𝒟} ⨏_R ( ∑_{P∈ch_i(R)} |W(x)^{1/2} f_P| )² dx ≤ 2^{j₁+j₂} ∑_{R∈𝒟} ⨏_R ( ∑_{P∈ch_i(R)} |W(x)^{1/2} U_R^{−1}| ⋅ |U_R f_P| )² dx = 2^{j₁+j₂} ∑_{R∈𝒟} ( ∑_{P∈ch_i(R)} |U_R f_P| )² ⨏_R |W(x)^{1/2} U_R^{−1}|² dx ∼_d 2^{j₁+j₂} ∑_{R∈𝒟} ( ∑_{P∈ch_i(R)} |U_R f_P| )² |W_R U_R^{−1}|² ≲_d 2^{j₁+j₂} ∑_{R∈𝒟} ( ∑_{P∈ch_i(R)} |U_R f_P| )² |W_R U′_R|² ≲_d 2^{j₁+j₂} [W, U′]_{A₂,𝒟} ∑_{R∈𝒟} ( ∑_{P∈ch_i(R)} |U_R f_P| )² = [W, U′]_{A₂,𝒟} ∥S^{i,j}_{𝒟,U} f∥²_{L²(ℝ²)}.

In the first ∼_d we used the definition of reducing matrices. Moreover, in the first ≲_d we used Lemma 3.5 coupled with (3.2) (and the fact that reducing matrices are self-adjoint). Similarly we obtain

∥S̃^{i,j}_{𝒟,W} f∥_{L²(ℝ²)} ≲_d ∥S^{i,j}_{𝒟,W} f∥_{L²(ℝ²)},

using the definition of reducing matrices coupled with (3.4) to deduce that

⨏_R |W(x)^{1/2} W_R^{−1}|² dx ∼_d 1, ∀ R ∈ 𝒟.

Moreover, we have

∥S^{i,j}_{𝒟,W} f∥²_{L²(ℝ²)} = 2^{j₁+j₂} ∑_{R∈𝒟} ( ∑_{P∈ch_i(R)} |W_R f_P| )² ≤ 2^{j₁+j₂} ∑_{R∈𝒟} ( ∑_{P∈ch_i(R)} |W_R W_P^{−1}| ⋅ |W_P f_P| )² ≤ 2^{j₁+j₂} ∑_{R∈𝒟} ( ∑_{P∈ch_i(R)} |W_P f_P|² )( ∑_{P∈ch_i(R)} |W_R W_P^{−1}|² ).

We notice that for all R ∈ 𝒟, we have

∑_{P∈ch_i(R)} |W_R W_P^{−1}|² = ∑_{P∈ch_i(R)} |W_P^{−1} W_R|² ≲_d ∑_{P∈ch_i(R)} |W′_P W_R|² ∼_d ∑_{P∈ch_i(R)} ⨏_P |W(x)^{−1/2} W_R|² dx = 2^{i₁+i₂} ⨏_R |W(x)^{−1/2} W_R|² dx ≲_d 2^{i₁+i₂} |W′_R W_R|² ≲_d 2^{i₁+i₂} [W]_{A₂,𝒟},

where in the first ≲_d we used Lemma 3.5 coupled with (3.1). Thus

∥S^{i,j}_{𝒟,W} f∥²_{L²(ℝ²)} ≲_d 2^{i₁+i₂+j₁+j₂} [W]_{A₂,𝒟} ∑_{R∈𝒟} ∑_{P∈ch_i(R)} |W_P f_P|² = 2^{i₁+i₂+j₁+j₂} [W]_{A₂,𝒟} ∥S_{𝒟,W} f∥²_{L²(ℝ²)} ∼_d 2^{i₁+i₂+j₁+j₂} [W]_{A₂,𝒟} ∥S̃_{𝒟,W} f∥²_{L²(ℝ²)} ≲_d 2^{i₁+i₂+j₁+j₂} [W]_{A₂,𝒟} ∥f∥²_{L²(W)},

where in the last ≲_d we used Lemma 5.2. This computation of course remains valid if W is replaced by U, concluding the proof. □

We now turn to the case of general 1 < p < ∞ (in the one-weight setting).
Broadly speaking, the proof is an adaptation of the proof of [15, Proposition 3.1].

Lemma 7.3.
Let 1 < p < ∞, and let W be a 𝒟-dyadic biparameter d × d-matrix valued A_p weight on ℝ × ℝ. Then, there exists a sequence ε = (ε_R)_{R∈𝒟} in {−1, 1}, depending on d, i, W and f, such that

∥S^{i,j}_{𝒟,W} f∥_{L^p(ℝ²)} ≲_d [W]_{A_p,𝒟}^{1/p} ∥S̃^{i,j,1}_{𝒟,W}(T_ε f)∥_{L^p(ℝ²)}, ∀ f ∈ L²(ℝ²; ℂ^d).

Proof.
By Lemma 7.4 below, we have that for all R ∈ 𝒟, there exists a sequence (ε^R_P)_{P∈ch_i(R)} in {−1, 1}, depending on d, i, W and f, such that

∑_{P∈ch_i(R)} |W_R f_P| ∼_d |∑_{P∈ch_i(R)} ε^R_P W_R f_P|.

Clearly, every rectangle P ∈ 𝒟 has a unique i-th ancestor P^{(i)} in 𝒟. Therefore, we can consider the sequence ε = (ε_P)_{P∈𝒟} in {−1, 1} given by ε_P := ε^{P^{(i)}}_P, ∀ P ∈ 𝒟. Consider the martingale transform f̃ := T_ε f of f. Then clearly

S^{i,j}_{𝒟,W} f ∼_d 2^{(j₁+j₂)/2} ( ∑_{R∈𝒟} |W_R ∑_{P∈ch_i(R)} f̃_P|² 1_R/|R| )^{1/2},

where f̃_P := ε_P f_P, P ∈ 𝒟, are the Haar coefficients of f̃. Then, we have

∥S^{i,j}_{𝒟,W} f∥^p_{L^p(ℝ²)} ∼_d 2^{p(j₁+j₂)/2} ∫_{ℝ×ℝ} ( ∑_{R∈𝒟} |W_R ∑_{P∈ch_i(R)} f̃_P|² 1_R(x)/|R| )^{p/2} dx ≲_{p,d} 2^{p(j₁+j₂)/2} [W]_{A_p,𝒟} ∫_{ℝ×ℝ} ( ∑_{R∈𝒟} ((|W^{1/p} ∑_{P∈ch_i(R)} f̃_P|)_R)² 1_R(x)/|R| )^{p/2} dx,

where we applied Lemma 3.4 to get from the first to the second line. Thus, the proof will be complete once we prove that

∫_{ℝ×ℝ} ( ∑_{R∈𝒟} ((|W^{1/p} ∑_{P∈ch_i(R)} f̃_P|)_R)² 1_R(x)/|R| )^{p/2} dx ≲_{p,d} 2^{−p(j₁+j₂)/2} ∥S̃^{i,j,1}_{𝒟,W} f̃∥^p_{L^p(ℝ²)}.

By the Monotone Convergence Theorem, we can without loss of generality assume that f (and thus f̃) has only finitely many nonzero Haar coefficients. Then, by standard (unweighted) dyadic Littlewood–Paley theory we only need to prove that

∥F∥_{L^p(ℝ²)} := ∥ ∑_{R∈𝒟} (|W^{1/p} ∑_{P∈ch_i(R)} f̃_P|)_R h_R ∥_{L^p(ℝ²)} ≲_{d,p} ∥2^{−(j₁+j₂)/2} S̃^{i,j,1}_{𝒟,W} f̃∥_{L^p(ℝ²)}.

We use duality for that. Let g ∈ L^{p′}(ℝ²). Then, we have

|(F, g)| ≤ ∑_{R∈𝒟} (|W^{1/p} ∑_{P∈ch_i(R)} f̃_P|)_R ⋅ |g_R| = ∫_{ℝ²} ∑_{R∈𝒟} |W(x)^{1/p} ∑_{P∈ch_i(R)} f̃_P| |h_R(x)| ⋅ |g_R| |h_R(x)| dx ≤ ∫_{ℝ×ℝ} 2^{−(j₁+j₂)/2} (S̃^{i,j,1}_{𝒟,W} f̃)(x) S_𝒟 g(x) dx ≤ ∥2^{−(j₁+j₂)/2} S̃^{i,j,1}_{𝒟,W} f̃∥_{L^p(ℝ²)} ∥S_𝒟 g∥_{L^{p′}(ℝ²)} ∼_{p,d} ∥2^{−(j₁+j₂)/2} S̃^{i,j,1}_{𝒟,W} f̃∥_{L^p(ℝ²)} ∥g∥_{L^{p′}(ℝ²)},

yielding the desired result. □

Lemma 7.4.
Let v₁, . . . , v_k be vectors in ℂ^d. Then, there exist ε₁, . . . , ε_k ∈ {−1, 1}, such that

∑_{i=1}^k |v_i| ∼_d |∑_{i=1}^k ε_i v_i|.

Proof.
For each i = 1, . . . , k, write w¹_i := Re(v_i) and w²_i := Im(v_i). Also, for each i = 1, . . . , k and j = 1, 2, express the vector w^j_i in ℝ^d in components as w^j_i = (w^j_{i,1}, . . . , w^j_{i,d}). Then, it is clear that

∑_{i=1}^k |v_i| ≤ C(d) ∑_{i=1}^k ∑_{ℓ=1}^d ∑_{j=1}^2 |w^j_{i,ℓ}|.

Therefore, there exist ℓ₀ ∈ {1, . . . , d} and j₀ ∈ {1, 2}, such that

∑_{i=1}^k |w^{j₀}_{i,ℓ₀}| ≥ 1/(2dC(d)) ∑_{i=1}^k |v_i|.

Taking ε_i := sgn(w^{j₀}_{i,ℓ₀}), for all i = 1, . . . , k (where by convention sgn(0) := 1), we obtain |∑_{i=1}^k ε_i v_i| ≥ |∑_{i=1}^k ε_i w^{j₀}_{i,ℓ₀}| = ∑_{i=1}^k |w^{j₀}_{i,ℓ₀}| ≳_d ∑_{i=1}^k |v_i|; the converse estimate |∑_{i=1}^k ε_i v_i| ≤ ∑_{i=1}^k |v_i| is trivial. □
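The sign selection in Lemma 7.4 is entirely constructive, and a short sketch (ours, not code from the paper) makes the constant explicit: picking the single real coordinate with the largest total mass and aligning all signs with it guarantees ∑|v_i| ≤ 2d |∑ ε_i v_i|.

```python
# Sketch of the sign selection in Lemma 7.4 (our illustration): pick the real
# coordinate (real/imaginary part, component l) carrying the largest total
# mass and align every sign with it.  Then sum_i |v_i| <= 2d * |sum_i eps_i v_i|.

def choose_signs(vectors):
    """vectors: nonempty list of equal-length tuples of complex numbers."""
    d = len(vectors[0])
    best_mass, best = -1.0, None
    for part in (lambda z: z.real, lambda z: z.imag):
        for l in range(d):
            mass = sum(abs(part(v[l])) for v in vectors)
            if mass > best_mass:
                best_mass, best = mass, (part, l)
    part, l = best
    return [1 if part(v[l]) >= 0 else -1 for v in vectors]   # sgn(0) := 1

def norm(v):
    return sum(abs(z) ** 2 for z in v) ** 0.5

vectors = [(1 + 2j, -3 + 0j), (0.5 - 1j, 2 + 2j), (-1 + 0j, 1j)]
eps = choose_signs(vectors)
total = [sum(e * v[t] for e, v in zip(eps, vectors)) for t in range(2)]
assert sum(norm(v) for v in vectors) <= 2 * 2 * norm(total) + 1e-9   # here 2d with d = 2
assert norm(total) <= sum(norm(v) for v in vectors) + 1e-9           # trivial direction
```

The lower bound holds because |∑ ε_i v_i| dominates the chosen coordinate of the sum, which by construction equals the selected mass, itself at least a 1/(2d)-fraction of ∑|v_i|.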
Let 1 < p < ∞. Assume that there exist β(p) ∈ (0, ∞) and a function c : {0, 1, 2, . . .} → (0, ∞), such that

∥S̃^{k,l,1}_{𝒟,V}∥_{L^p(V)→L^p(ℝ)} ≲_{p,d} c(k) 2^{l/2} [V]_{A_p,𝒟}^{β(p)},

for any (one-parameter) 𝒟-dyadic d × d-matrix valued A_p weight V on ℝ, for any grid 𝒟 and for any non-negative integers k, l. Then, there holds

∥S̃^{i,j,1}_{𝒟,W}∥_{L^p(W)→L^p(ℝ²)} ≲_{p,d} c(i₁) c(i₂) 2^{(j₁+j₂)/2} [W]_{A_p,𝒟}^{β(p)},

for any 𝒟-dyadic biparameter d × d-matrix valued A_p weight W on ℝ × ℝ.
Proof.
The proof will rely on Lemma 5.1. Let f ∈ L ∞ c ( R n + m ; C d ) . We have2 − p ( j + j )/ ∥ ̃ S i,j , D ,W f ∥ pL p ( R ) = ∫ R × R ( ∑ R ∈ D ∣ ∑ P ∈ ch i ( R ) W ( x , x ) / p f P ∣ R ( x , x )∣ R ∣ ) p / d x d x = ∫ R ( ∫ R ( ∑ R ∈D R ∈D ∣ ∑ P ∈ ch i ( R ) P ∈ ch i ( R ) W ( x , x ) / p f P × P ∣ R ( x ) R ( x )∣ R ∣ ⋅ ∣ R ∣ ) p / d x ) d x = − pj / ∫ R ( ∫ R ( ∑ R ∈D R ∈D ∣ ∑ P ∈ ch i ( R ) W ( x , x ) / p ( F R ( x , ⋅ )) P ∣ R ( x )∣ R ∣ ) p / d x ) d x = − p ( j + j )/ ∫ R ( ∫ R ( ∑ R ∈D ( ̃ S i ,j , D ,W ( x , ⋅ ) ( F R ( x , ⋅ ))( x )) ) p / d x ) d x = − p ( j + j )/ ∫ R ( ∫ R ( ∑ R ∈D ∥ ⃗̃ S i ,j , D ,W ( x , ⋅ ) ( F R ( x , ⋅ ))( x )∥ ℓ ( D ; C d ) ) p / d x ) d x , where F R ( y , y ) ∶ = j / ∑ P ∈ ch i ( R ) f P ( y ) h R ( y ) , and we recall that f P ( y ) ∶ = ( f ( ⋅ , y ) , h P ) . Recalling that [ W ( x , ⋅ )] A p , D ≤ [ W ] A p , D , for a.e. x ∈ R n , we obtain by the assump-tions and in view of Lemma 5.12 − p ( j + j )/ ∥ ̃ S i,j ,W f ∥ pL p ( R ) ≲ d,p c ( i ) p − pj / [ W ] β ( p ) A p , D ∫ R ( ∫ R ( ∑ R ∈D ∣ W ( x , x ) / p F R ( x , x )∣ ) p / d x ) d x = c ( i ) p [ W ] β ( p ) A p , D ∫ R ( ∫ R ( ∑ R ∈D ∣ ∑ P ∈ ch i ( R ) W ( x , x ) / p f P ( x )∣ R ( x )∣ R ∣ ) p / d x ) d x = c ( i ) p − pj / [ W ] β ( p ) A p , D ∫ R ∥ ̃ S i ,j , D ,W ( ⋅ ,x ) ( f ( ⋅ , x ))∥ pL p ( R ) d x ≲ p,d c ( i ) p c ( i ) p [ W ] β ( p ) A p , D ∫ R ∥ f ( ⋅ , x )∥ pL p ( W ( ⋅ ,x )) d x = c ( i ) p c ( i ) p [ W ] β ( p ) A p , D ∥ f ∥ pL p ( W ) , concluding the proof. (cid:3) Corollary 7.6.
Let 1 < p < ∞, and let W be a 𝒟-dyadic biparameter d × d-matrix valued A_p weight on ℝ × ℝ. Then, there holds

∥S^{i,j}_{𝒟,W}∥_{L^p(W)→L^p(ℝ²)} ≲_{p,d} 2^{i₁} 2^{i₂} 2^{(i₁+j₁+i₂+j₂)/2} [W]_{A_p,𝒟}^{1/p + γ(p) + α(p)}.

Proof.
Immediate by combining Lemmas 7.3 and 7.5 with the weighted bounds for martingale transforms of the previous section, (7.2), and the fact that S̃^{k,l,1}_{𝒟,V} ≤ S̃^{k,l}_{𝒟,V}. □

Sparse domination for cancellative Haar shifts.
Let 𝒟 = 𝒟₁ × 𝒟₂ be any product dyadic grid in ℝ × ℝ. Let i = (i₁, i₂) and j = (j₁, j₂) be pairs of non-negative integers. We define the cancellative biparameter dyadic shift of complexity (i, j) by

T^{i,j} f := ∑_{R∈𝒟} ∑_{P∈ch_i(R)} ∑_{Q∈ch_j(R)} a_{PQR} f_P h_Q, f ∈ L²(ℝ^{n+m}; ℂ^d),

where the a_{PQR} are complex numbers satisfying the bound

|a_{PQR}| ≤ √(|P| ⋅ |Q|)/|R| = 2^{−(i₁+i₂+j₁+j₂)/2}.

Let 1 < p < ∞, and let W be any 𝒟-dyadic biparameter d × d-matrix valued A_p weight on ℝ × ℝ.

Theorem 7.7.
For all f, g ∈ ℋ and for 0 < δ < 1/2, there exists a weakly δ-sparse family S of dyadic rectangles (depending on W, d, f, g, δ), such that

|(T^{i,j} f, g)| ≲_{δ,d} C 2^{−(i₁+i₂+j₁+j₂)} ∑_{R∈S} (S^{i,j}_{𝒟,W} f)_R (S^{j,i}_{𝒟,W′} g)_R |R|.

As in the case of the martingale transforms, the proof will be a straightforward adaptation of the proof in the scalar case in [3, Subsection 3.2]. Here, as we did for the two-weight bounds for martingale transforms in Subsection 6.2, we only explain the necessary changes with respect to the one-weight bounds in Subsection 6.1. We begin by writing

(T^{i,j} f, g) = ∑_{R∈𝒟} ∑_{P∈ch_i(R)} ∑_{Q∈ch_j(R)} a_{PQR} ⟨f_P, g_Q⟩.

The only estimate that is different when compared to [3] is the one for the terms a_{PQR} ⟨f_P, g_Q⟩, where P ∈ ch_i(R) and Q ∈ ch_j(R). In our case, we have

|a_{PQR} ⟨f_P, g_Q⟩| ≤ 2^{−(i₁+i₂+j₁+j₂)/2} |⟨W_R f_P, (W_R)^{−1} g_Q⟩| ≤ 2^{−(i₁+i₂+j₁+j₂)/2} |W_R f_P| ⋅ |(W_R)^{−1} g_Q| ≲_d 2^{−(i₁+i₂+j₁+j₂)/2} |W_R f_P| ⋅ |W′_R g_Q|,

where in the last step we applied Lemma 3.5. Now, one proceeds as in the proof of Theorem 6.1 with the necessary modifications to take into account the shifted square functions. That is, one defines a_f = c (S^{i,j}_{𝒟,W} f)_{Q₁} and a_g = c (S^{j,i}_{𝒟,W′} g)_{Q₁} for some large constant c that will be chosen as in Subsection 6.1. Moreover, define

Ω_k := { x ∈ Q₁ : S^{i,j}_{𝒟,W} f > 2^k a_f } ∪ { x ∈ Q₁ : S^{j,i}_{𝒟,W′} g > 2^k a_g }, ∀ k = 0, 1, 2, . . . .

Then the argument follows the same steps as before.
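The role of the normalization |a_{PQR}| ≤ √(|P||Q|)/|R| is transparent already in a one-parameter toy model (ours, not code from the paper): with this normalization each block of the shift has Hilbert–Schmidt norm at most 1, so the shift is an L² contraction uniformly in the complexity.

```python
import math

# Toy one-parameter cancellative Haar shift of complexity (i, j) on [0,1),
# discretized into N = 2**n cells (our illustration, not the paper's code).
# With |a| = sqrt(|P||Q|)/|R| each block has Hilbert-Schmidt norm <= 1,
# forcing ||T f||_2 <= ||f||_2.

n, i, j = 5, 1, 2
N = 2 ** n

def haar(m, k):
    """L2-normalized Haar function of the dyadic interval [k 2^-m, (k+1) 2^-m)."""
    size = N >> m
    amp = 1.0 / math.sqrt(size / N)            # 1/sqrt(|I|)
    h = [0.0] * N
    for t in range(k * size, k * size + size // 2):
        h[t] = -amp                            # left half I_-
    for t in range(k * size + size // 2, (k + 1) * size):
        h[t] = amp                             # right half I_+
    return h

def dot(f, g):
    return sum(a * b for a, b in zip(f, g)) / N    # integral over [0,1)

def shift(f):
    g = [0.0] * N
    for m in range(n - max(i, j)):             # generations of R whose children resolve
        for k in range(2 ** m):
            for ki in range(k * 2 ** i, (k + 1) * 2 ** i):      # P in ch_i(R)
                fP = dot(f, haar(m + i, ki))
                for kj in range(k * 2 ** j, (k + 1) * 2 ** j):  # Q in ch_j(R)
                    a = 2.0 ** (-(i + j) / 2)  # = sqrt(|P||Q|)/|R|, extremal coefficient
                    hQ = haar(m + j, kj)
                    for t in range(N):
                        g[t] += a * fP * hQ[t]
    return g

f = [t / N + (t / N) ** 2 for t in range(N)]   # any test signal
g = shift(f)
norm2 = lambda h: math.sqrt(dot(h, h))
assert norm2(g) <= norm2(f) + 1e-9             # contractivity from the normalization
```

Since P determines its i-th ancestor R and Q determines its j-th ancestor, the blocks over distinct R share neither inputs nor outputs, so the operator norm is controlled by the worst block; this is the same structural fact used throughout the shift estimates above.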
As an example, in this case, the contribution of those rectangles in ℛ to |(T^{i,j} f, g)| would be bounded by

∑_{R∈ℛ} ∑_{P∈ch_i(R)} ∑_{Q∈ch_j(R)} |a_{PQR} ⟨f_P, g_Q⟩| ≲_{p,d} 2^{−(i₁+i₂+j₁+j₂)/2} ∫_{Q₁∖Ω₀} ∑_{R∈ℛ} ∑_{P∈ch_i(R), Q∈ch_j(R)} |W_R f_P| ⋅ |W′_R g_Q| 1_R(x)/|R| dx = 2^{−(i₁+i₂+j₁+j₂)/2} ∫_{Q₁∖Ω₀} ∑_{R∈ℛ} ( ∑_{P∈ch_i(R)} |W_R f_P| )( ∑_{Q∈ch_j(R)} |W′_R g_Q| ) 1_R(x)/|R| dx ≤ 2^{−(j₁+j₂+i₁+i₂)} ∫_{Q₁∖Ω₀} (S^{i,j}_{𝒟,W} f)(x) (S^{j,i}_{𝒟,W′} g)(x) dx ≲ 2^{−(j₁+j₂+i₁+i₂)} |Q₁| (S^{i,j}_{𝒟,W} f)_{Q₁} (S^{j,i}_{𝒟,W′} g)_{Q₁}.

Two-weight sparse domination.
Let 1 < p < ∞, and let W, U be any 𝒟-dyadic biparameter d × d-matrix valued A_p weights on ℝ × ℝ with [W, U′]_{A_p,𝒟} < ∞.

Theorem 7.8.
For all f, g ∈ ℋ and for 0 < δ < 1/2, there exists a weakly δ-sparse family S of dyadic rectangles (depending on W, U, d, f, g, δ), such that

|(T^{i,j} f, g)| ≲_{δ,d} C [W, U′]_{A_p,𝒟}^{1/p} 2^{−(i₁+i₂+j₁+j₂)} ∑_{R∈S} (S^{i,j}_{𝒟,U} f)_R (S^{j,i}_{𝒟,W′} g)_R |R|.

The proof is as for Theorem 7.7, with the same modifications as in the two-weight case for Haar multipliers (Theorem 6.2).

7.4. L² matrix-weighted bounds for cancellative Haar shifts.

Using the sparse bound.
By the above sparse bounds and Lemma 7.2 we deduce immediately the one-weight L² bound

∥T^{i,j}∥_{L²(W)→L²(W)} ≲_d [W]_{A₂,𝒟},

as well as the two-weight L² bound

∥T^{i,j}∥_{L²(U)→L²(W)} ≲_d [W, U′]_{A₂,𝒟}^{1/2} [W]_{A₂,𝒟}^{1/2} [U]_{A₂,𝒟}^{1/2}.

In particular, this is a slight improvement with respect to the one-weight bound obtained by Barron–Pipher [3] in the scalar setting using the same methods. Coupling these bounds with Martikainen's representation (2.1), we conclude that if T is any paraproduct free Journé operator on ℝ × ℝ, then

∥T∥_{L²(W)→L²(W)} ≲_d [W]_{A₂}

and

∥T∥_{L²(U)→L²(W)} ≲_d [W, U′]_{A₂}^{1/2} [W]_{A₂}^{1/2} [U]_{A₂}^{1/2}.

Without relying on the sparse bound.
Here we show that the approach of [15, Section 7] for deducing weighted bounds for biparameter cancellative shifts in the scalar-weighted setting can be adapted to the matrix-weighted setting for p = 2. Let

T^{i,j} f := ∑_{R∈𝒟} ∑_{P∈ch_i(R)} ∑_{Q∈ch_j(R)} a_{PQR} f_P h_Q, f ∈ L²(ℝ²; ℂ^d),

where i, j are pairs of non-negative integers and the a_{PQR} are complex numbers satisfying the bound

|a_{PQR}| ≤ √(|P| ⋅ |Q|)/|R| = 2^{−(i₁+i₂+j₁+j₂)/2}.

Let W, U be 𝒟-dyadic biparameter d × d-matrix valued A₂-weights on ℝ² such that [W, U′]_{A₂,𝒟} < ∞. By Corollary 5.5 we have

∥T^{i,j} f∥_{L²(W)} ≲_d [W]_{A₂,𝒟} ∥S̃_{𝒟,W}(T^{i,j} f)∥_{L²(ℝ²)}.

Recall that every Q ∈ 𝒟 has a unique j-th ancestor in 𝒟. Therefore, we have

(S̃_{𝒟,W}(T^{i,j} f)(x))² = ∑_{R∈𝒟} ∑_{Q∈ch_j(R)} |W(x)^{1/2} ∑_{P∈ch_i(R)} a_{PQR} f_P|² 1_Q(x)/|Q| ≤ 2^{−(i₁+j₁+i₂+j₂)} ∑_{R∈𝒟} ∑_{Q∈ch_j(R)} ( ∑_{P∈ch_i(R)} |W(x)^{1/2} f_P| )² 1_Q(x)/|Q| = 2^{−(i₁+j₁+i₂+j₂)} (S̃^{i,j}_{𝒟,W}(f)(x))².

By Lemma 7.2 we have

∥S̃^{i,j}_{𝒟,W} f∥_{L²(ℝ²)} ≲_d 2^{(i₁+j₁+i₂+j₂)/2} [W, U′]_{A₂,𝒟}^{1/2} [U]_{A₂,𝒟}^{1/2} ∥f∥_{L²(U)}.

It follows that

∥T^{i,j}∥_{L²(U)→L²(W)} ≲_d [W, U′]_{A₂,𝒟}^{1/2} [U]_{A₂,𝒟}^{1/2} [W]_{A₂,𝒟}.

Note that a similar argument yields the one-weight bound

∥T^{i,j}∥_{L²(W)→L²(W)} ≲_d [W]_{A₂,𝒟}^{3/2}.

Coupled with Martikainen's representation (2.1), we conclude that if T is any paraproduct free Journé operator on ℝ × ℝ, then

(7.3) ∥T∥_{L²(W)→L²(W)} ≲_{d,T} [W]_{A₂}^{3/2} and ∥T∥_{L²(U)→L²(W)} ≲_{d,T} [W, U′]_{A₂}^{1/2} [W]_{A₂} [U]_{A₂}^{1/2}.

Note that since T* is also a paraproduct free Journé operator, by duality we have

∥T∥_{L²(U)→L²(W)} = ∥T*∥_{L²(W′)→L²(U′)} ≲_{d,T} [U′, (W′)′]_{A₂}^{1/2} [U′]_{A₂} [W′]_{A₂}^{1/2} ∼_d [W, U′]_{A₂}^{1/2} [U]_{A₂} [W]_{A₂}^{1/2},

and therefore we obtain the bound

(7.4) ∥T∥_{L²(U)→L²(W)} ≲_{d,T} [W, U′]_{A₂}^{1/2} min([W]_{A₂} [U]_{A₂}^{1/2}, [W]_{A₂}^{1/2} [U]_{A₂}).

L^p matrix-weighted bounds for cancellative Haar shifts.

Relying on the sparse bound for biparameter cancellative shifts.
Let i, j be pairs of non-negative integers. By the biparameter sparse bounds for the biparameter cancellative Haar shift T^{i,j} and Corollary 7.6 we deduce the one-weight bound

∥T^{i,j}∥_{L^p(W)→L^p(W)} ≲_{p,d} 2^{i₁} 2^{i₂} 2^{j₁} 2^{j₂} [W]_{A_p,𝒟}^{α₁(p) + α₂(p)},

as well as the two-weight bound

∥T^{i,j}∥_{L^p(U)→L^p(W)} ≲_{p,d} 2^{i₁} 2^{i₂} 2^{j₁} 2^{j₂} [W, U′]_{A_p,𝒟}^{1/p} [W]_{A_p,𝒟}^{α₁(p)} [U]_{A_p,𝒟}^{α₂(p)},

where

(7.5) α₁(p) := (1/p′ + γ(p′) + α(p′))/(p − 1) = 1/(p′(p − 1)) + 2γ(p′)/(p − 1) + γ(p),

(7.6) α₂(p) := 1/p + γ(p) + α(p) = 1/p + 2γ(p) + γ(p′)/(p − 1)

(note that α₁(p) = α₂(p′)/(p − 1)). In view of Martikainen's representation (2.1), for any paraproduct free Journé operator T on ℝ² we deduce in the one-matrix weighted setting the bound

∥T∥_{L^p(W)→L^p(W)} ≲_{p,d,T} [W]_{A_p}^{α₁(p) + α₂(p)},

and in the two-matrix weighted setting the bound

∥T∥_{L^p(U)→L^p(W)} ≲_{p,d,T} [W, U′]_{A_p}^{1/p} [W]_{A_p}^{α₁(p)} [U]_{A_p}^{α₂(p)}.

Without relying on the sparse bound for biparameter cancellative Haar shifts.
It is worth noting that one can get L^p weighted bounds for cancellative Haar shifts without relying on the sparse domination of these shifts, by using a factorization trick similar to the one used in [15, Proposition 7.6], and also implicitly used in [3] (and in the sparse domination above). So let as previously

T^{i,j} f := ∑_{R∈𝒟} ∑_{P∈ch_i(R)} ∑_{Q∈ch_j(R)} a_{PQR} f_P h_Q, f ∈ L²(ℝ²; ℂ^d),

where i, j are pairs of non-negative integers, and the a_{PQR} are complex numbers satisfying the bound

|a_{PQR}| ≤ √(|P| ⋅ |Q|)/|R| = 2^{−(i₁+i₂+j₁+j₂)/2}.

Let 1 < p < ∞, and let W, U be 𝒟-dyadic biparameter d × d-matrix valued A_p-weights on ℝ² such that [W, U′]_{A_p,𝒟} < ∞. Similarly to above we have

|(T^{i,j} f, g)| ≲_{p,d} [W, U′]_{A_p,𝒟}^{1/p} 2^{−(i₁+i₂+j₁+j₂)/2} ∫_{ℝ²} ∑_{R∈𝒟} ∑_{P∈ch_i(R), Q∈ch_j(R)} |U_R f_P| ⋅ |W′_R g_Q| 1_R(x)/|R| dx = [W, U′]_{A_p,𝒟}^{1/p} 2^{−(i₁+i₂+j₁+j₂)/2} ∫_{ℝ²} ∑_{R∈𝒟} ( ∑_{P∈ch_i(R)} |U_R f_P| )( ∑_{Q∈ch_j(R)} |W′_R g_Q| ) 1_R(x)/|R| dx ≤ [W, U′]_{A_p,𝒟}^{1/p} 2^{−(j₁+j₂+i₁+i₂)} ∫_{ℝ²} (S^{i,j}_{𝒟,U} f)(x) (S^{j,i}_{𝒟,W′} g)(x) dx ≤ [W, U′]_{A_p,𝒟}^{1/p} 2^{−(j₁+j₂+i₁+i₂)} ∥S^{i,j}_{𝒟,U} f∥_{L^p(ℝ²)} ∥S^{j,i}_{𝒟,W′} g∥_{L^{p′}(ℝ²)}.

Similarly we have of course

|(T^{i,j} f, g)| ≲_{p,d} 2^{−(j₁+j₂+i₁+i₂)} ∥S^{i,j}_{𝒟,W} f∥_{L^p(ℝ²)} ∥S^{j,i}_{𝒟,W′} g∥_{L^{p′}(ℝ²)}.

Using the weighted bounds for S^{i,j}_{𝒟,U}, S^{i,j}_{𝒟,W}, S^{j,i}_{𝒟,W′} from Corollary 7.6 we deduce immediately weighted bounds for T^{i,j}, and thus by Martikainen's representation (2.1) for any paraproduct free Journé operator on ℝ².

8. L² matrix-weighted bounds for general Journé operators

In this section we obtain L² matrix-weighted bounds for general (not necessarily paraproduct free) Journé operators. In view of the previous section, it suffices only to estimate the non-cancellative terms in Martikainen's representation (2.1). We accomplish this by using factorization tricks inspired by [15], without relying on sparse domination.

8.1.
Auxiliary mixed operators.
Let $\mathcal{D} = \mathcal{D}_1\times\mathcal{D}_2$ be any product dyadic grid in $\mathbb{R}^2$. Let $W$ be any $\mathcal{D}$-dyadic biparameter $A_2$ weight on $\mathbb{R}^2$. For a.e. $x_2\in\mathbb{R}$, we denote by $W_{x_2,I}$ the reducing operator of the weight $W_{x_2}(x_1) := W(x_1,x_2)$, $x_1\in\mathbb{R}$, over the interval $I\subseteq\mathbb{R}$ with respect to the exponent 2. For a.e. $x_1\in\mathbb{R}$, $W_{x_1,J}$ is defined in a symmetric way, for all intervals $J\subseteq\mathbb{R}$.

8.1.1. Mixed square function-maximal function operators.
Define
\[
[S\widetilde{M}]_{\mathcal{D},W}f := \Big(\sum_{I\in\mathcal{D}_1}\Big(\sup_{J\in\mathcal{D}_2}|W_{I\times J}(f_I)_J|\,\mathbf{1}_J(x_2)\Big)^2\frac{\mathbf{1}_I(x_1)}{|I|}\Big)^{1/2},\qquad (x_1,x_2)\in\mathbb{R}\times\mathbb{R},
\]
\[
[\widetilde{M}S]_{\mathcal{D},W}f := \Big(\sum_{J\in\mathcal{D}_2}\Big(\sup_{I\in\mathcal{D}_1}|W_{I\times J}(f_J)_I|\,\mathbf{1}_I(x_1)\Big)^2\frac{\mathbf{1}_J(x_2)}{|J|}\Big)^{1/2},\qquad (x_1,x_2)\in\mathbb{R}\times\mathbb{R}.
\]
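For orientation, the reducing operators entering these definitions can be made concrete at the exponent 2: for a matrix weight $W$ and an interval $I$, the average $\fint_I|W(x)^{1/2}e|^2\,dx$ equals $\langle (W)_I e,e\rangle$, so the positive square root of the averaged matrix is an exact reducing operator. The following sketch checks this numerically for a toy piecewise-constant weight (numpy assumed; this explicit normalization of "reducing operator" is our illustrative assumption, not a quotation of the paper's definition).

```python
import numpy as np

# A toy matrix weight on I = [0, 1): piecewise constant on the two halves.
W1 = np.array([[2.0, 1.0], [1.0, 2.0]])
W2 = np.array([[5.0, 0.0], [0.0, 1.0]])

# For exponent 2, (1/|I|) int_I |W(x)^{1/2} e|^2 dx = <(W)_I e, e>, so
# A := ((W)_I)^{1/2} satisfies (avg |W^{1/2} e|^2)^{1/2} = |A e| exactly.
avg = 0.5 * (W1 + W2)                           # (W)_I
evals, evecs = np.linalg.eigh(avg)
A = evecs @ np.diag(np.sqrt(evals)) @ evecs.T   # matrix square root of (W)_I

rng = np.random.default_rng(0)
for _ in range(100):
    e = rng.standard_normal(2)
    lhs = np.sqrt(0.5 * (e @ W1 @ e + e @ W2 @ e))  # averaged L^2(W) norm of e
    rhs = np.linalg.norm(A @ e)
    assert abs(lhs - rhs) < 1e-10
```

The same computation, applied slice by slice, is what produces the operators $W_{x_2,I}$ above.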
We estimate $\|[S\widetilde{M}]_{\mathcal{D},W}\|_{L^2(W)\to L^2(\mathbb{R}^2)}$. We have
\[
\|[S\widetilde{M}]_{\mathcal{D},W}f\|^2_{L^2(\mathbb{R}^2)} = \int_{\mathbb{R}^2}\sum_{I\in\mathcal{D}_1}\Big(\sup_{J\in\mathcal{D}_2}|W_{I\times J}(f_I)_J|\,\mathbf{1}_J(x_2)\Big)^2\frac{\mathbf{1}_I(x_1)}{|I|}\,dx_1\,dx_2 = \sum_{I\in\mathcal{D}_1}\Big\|\sup_{J\in\mathcal{D}_2}|W_{I\times J}(f_I)_J|\,\mathbf{1}_J\Big\|^2_{L^2(\mathbb{R})}.
\]
Define $W_I(x_2) := (W_{x_2,I})^2$, for a.e. $x_2\in\mathbb{R}$. Denote by $W_{I,J}$ the reducing operator of $W_I$ over $J$ with respect to the exponent 2. Applying (3.7) we obtain
\[
\sup_{J\in\mathcal{D}_2}|W_{I\times J}(f_I)_J|\,\mathbf{1}_J(x_2) \sim_d \sup_{J\in\mathcal{D}_2}|W_{I,J}(f_I)_J|\,\mathbf{1}_J(x_2) \le \widetilde{M}^{W_I}_{\mathcal{D}_2}(f_I)(x_2).
\]
Recall that by Lemma 3.6 we have $[W_{x_1}]_{A_2,\mathcal{D}_2}\lesssim_d [W]_{A_2,\mathcal{D}}$, for a.e. $x_1\in\mathbb{R}$. Thus
\[
\sum_{I\in\mathcal{D}_1}\Big\|\sup_{J\in\mathcal{D}_2}|W_{I\times J}(f_I)_J|\,\mathbf{1}_J\Big\|^2_{L^2(\mathbb{R})} \lesssim_d \sum_{I\in\mathcal{D}_1}\|\widetilde{M}^{W_I}_{\mathcal{D}_2}(f_I)\|^2_{L^2(\mathbb{R})} \lesssim_d \sum_{I\in\mathcal{D}_1}[W_I]_{A_2,\mathcal{D}_2}\|f_I\|^2_{L^2(W_I)} \lesssim_d [W]_{A_2,\mathcal{D}}\sum_{I\in\mathcal{D}_1}\|f_I\|^2_{L^2(W_I)}
\]
\[
= [W]_{A_2,\mathcal{D}}\sum_{I\in\mathcal{D}_1}\int_{\mathbb{R}}|W_{x_2,I}f_I(x_2)|^2\,dx_2 = [W]_{A_2,\mathcal{D}}\int_{\mathbb{R}}\Big(\int_{\mathbb{R}}\sum_{I\in\mathcal{D}_1}|W_{x_2,I}f_I(x_2)|^2\frac{\mathbf{1}_I(x_1)}{|I|}\,dx_1\Big)dx_2
\]
\[
= [W]_{A_2,\mathcal{D}}\int_{\mathbb{R}}\|\widetilde{S}_{\mathcal{D}_1,W(\cdot,x_2)}(f(\cdot,x_2))\|^2_{L^2(\mathbb{R})}\,dx_2 \lesssim_d [W]^2_{A_2,\mathcal{D}}\int_{\mathbb{R}}\|f(\cdot,x_2)\|^2_{L^2(W(\cdot,x_2))}\,dx_2 = [W]^2_{A_2,\mathcal{D}}\|f\|^2_{L^2(W)},
\]
where in the second $\lesssim_d$ we applied (4.2), in the third $\lesssim_d$ we applied (3.14), and in the last $\lesssim_d$ we applied (5.1). Thus
\[
\|[S\widetilde{M}]_{\mathcal{D},W}\|_{L^2(W)\to L^2(\mathbb{R}^2)} \lesssim_d [W]_{A_2,\mathcal{D}}.
\]
The same bound is true for $[\widetilde{M}S]_{\mathcal{D},W}$ (the proof is symmetric).

8.1.2. Mixed shifted square function-maximal function operators.
Let $i$ be a non-negative integer. Define
\[
[S^i\widetilde{M}]_{\mathcal{D},W}f(x) := \Big(\sum_{R_1\in\mathcal{D}_1}\Big(\sum_{P\in\operatorname{ch}_i(R_1)}\sup_{R_2\in\mathcal{D}_2}|W_{x_2,R_1}(f_P)_{R_2}|\,\mathbf{1}_{R_2}(x_2)\Big)^2\frac{\mathbf{1}_{R_1}(x_1)}{|R_1|}\Big)^{1/2},
\]
\[
[\widetilde{M}S^i]_{\mathcal{D},W}f(x) := \Big(\sum_{R_2\in\mathcal{D}_2}\Big(\sum_{P\in\operatorname{ch}_i(R_2)}\sup_{R_1\in\mathcal{D}_1}|W_{x_1,R_2}(f_P)_{R_1}|\,\mathbf{1}_{R_1}(x_1)\Big)^2\frac{\mathbf{1}_{R_2}(x_2)}{|R_2|}\Big)^{1/2}.
\]
Let us estimate $\|[S^i\widetilde{M}]_{\mathcal{D},W}\|_{L^2(W)\to L^2(\mathbb{R}^2)}$. We will apply the same trick as in the proof of Lemma 7.2. We have
\[
\|[S^i\widetilde{M}]_{\mathcal{D},W}f\|^2_{L^2(\mathbb{R}^2)} = \int_{\mathbb{R}}\sum_{R_1\in\mathcal{D}_1}\Big(\sum_{P\in\operatorname{ch}_i(R_1)}\sup_{R_2\in\mathcal{D}_2}|W_{x_2,R_1}(f_P)_{R_2}|\,\mathbf{1}_{R_2}(x_2)\Big)^2dx_2
\]
\[
\le \int_{\mathbb{R}}\sum_{R_1\in\mathcal{D}_1}\Big(\sum_{P\in\operatorname{ch}_i(R_1)}\sup_{R_2\in\mathcal{D}_2}|W_{x_2,R_1}W^{-1}_{x_2,P}|\cdot|W_{x_2,P}(f_P)_{R_2}|\,\mathbf{1}_{R_2}(x_2)\Big)^2dx_2
\]
\[
\le \int_{\mathbb{R}}\sum_{R_1\in\mathcal{D}_1}\Big(\sum_{P\in\operatorname{ch}_i(R_1)}|W_{x_2,R_1}W^{-1}_{x_2,P}|^2\Big)\Big(\sum_{P\in\operatorname{ch}_i(R_1)}\Big(\sup_{R_2\in\mathcal{D}_2}|W_{x_2,P}(f_P)_{R_2}|\,\mathbf{1}_{R_2}(x_2)\Big)^2\Big)dx_2,
\]
where we applied the Cauchy-Schwarz inequality in the last step. As in the proof of Lemma 7.2 we have
\[
\sum_{P\in\operatorname{ch}_i(R_1)}|W_{x_2,R_1}W^{-1}_{x_2,P}|^2 \lesssim_d 2^i[W_{x_2}]_{A_2,\mathcal{D}_1} \lesssim_d 2^i[W]_{A_2,\mathcal{D}},
\]
where we used Lemma 3.6 in the last step. Define $W_P(x_2) := (W_{x_2,P})^2$, for a.e. $x_2\in\mathbb{R}$. Then
\[
\|[S^i\widetilde{M}]_{\mathcal{D},W}f\|^2_{L^2(\mathbb{R}^2)} \lesssim_d 2^i[W]_{A_2,\mathcal{D}}\int_{\mathbb{R}}\sum_{R_1\in\mathcal{D}_1}\sum_{P\in\operatorname{ch}_i(R_1)}\Big(\sup_{R_2\in\mathcal{D}_2}|W_{x_2,P}(f_P)_{R_2}|\,\mathbf{1}_{R_2}(x_2)\Big)^2dx_2
\]
\[
= 2^i[W]_{A_2,\mathcal{D}}\sum_{P\in\mathcal{D}_1}\|\widetilde{M}^{W_P}_{\mathcal{D}_2}(f_P)\|^2_{L^2(\mathbb{R})} \lesssim_d 2^i[W]_{A_2,\mathcal{D}}\sum_{P\in\mathcal{D}_1}[W_P]_{A_2,\mathcal{D}_2}\|f_P\|^2_{L^2(W_P)} \lesssim_d 2^i[W]^2_{A_2,\mathcal{D}}\sum_{P\in\mathcal{D}_1}\int_{\mathbb{R}}|W_{x_2,P}f_P(x_2)|^2dx_2
\]
\[
= 2^i[W]^2_{A_2,\mathcal{D}}\int_{\mathbb{R}}\Big(\int_{\mathbb{R}}\sum_{P\in\mathcal{D}_1}|W_{x_2,P}f_P(x_2)|^2\frac{\mathbf{1}_P(x_1)}{|P|}dx_1\Big)dx_2 = 2^i[W]^2_{A_2,\mathcal{D}}\int_{\mathbb{R}}\|\widetilde{S}_{\mathcal{D}_1,W(\cdot,x_2)}(f(\cdot,x_2))\|^2_{L^2(\mathbb{R})}dx_2 \lesssim_d 2^i[W]^3_{A_2,\mathcal{D}}\|f\|^2_{L^2(W)}.
\]
Thus
\[
\|[S^i\widetilde{M}]_{\mathcal{D},W}\|_{L^2(W)\to L^2(\mathbb{R}^2)} \lesssim_d 2^{i/2}[W]^{3/2}_{A_2,\mathcal{D}}.
\]
The same bound is true for $[\widetilde{M}S^i]_{\mathcal{D},W}$ (the proof is symmetric).

8.1.3. Mixed shifted square function-square function operators.
Let $j$ be a non-negative integer. Define
\[
[S^j\widetilde{S}]_{\mathcal{D},W}f(x) := \Big(\sum_{R_1\in\mathcal{D}_1}\Big(\sum_{P\in\operatorname{ch}_j(R_1)}\Big(\sum_{R_2\in\mathcal{D}_2}|W_{x_2,R_1}f_{P\times R_2}|^2\frac{\mathbf{1}_{R_2}(x_2)}{|R_2|}\Big)^{1/2}\Big)^2\frac{\mathbf{1}_{R_1}(x_1)}{|R_1|}\Big)^{1/2},
\]
\[
[\widetilde{S}S^j]_{\mathcal{D},W}f(x) := \Big(\sum_{R_2\in\mathcal{D}_2}\Big(\sum_{P\in\operatorname{ch}_j(R_2)}\Big(\sum_{R_1\in\mathcal{D}_1}|W_{x_1,R_2}f_{R_1\times P}|^2\frac{\mathbf{1}_{R_1}(x_1)}{|R_1|}\Big)^{1/2}\Big)^2\frac{\mathbf{1}_{R_2}(x_2)}{|R_2|}\Big)^{1/2}.
\]
Let us estimate $\|[S^j\widetilde{S}]_{\mathcal{D},W}\|_{L^2(W)\to L^2(\mathbb{R}^2)}$. We will again apply the same trick as in the proof of Lemma 7.2. Define $W_P(x_2) := (W_{x_2,P})^2$, for a.e. $x_2\in\mathbb{R}$. Then, we have
\[
\|[S^j\widetilde{S}]_{\mathcal{D},W}f\|^2_{L^2(\mathbb{R}^2)} = \int_{\mathbb{R}}\sum_{R_1\in\mathcal{D}_1}\Big(\sum_{P\in\operatorname{ch}_j(R_1)}\Big(\sum_{R_2\in\mathcal{D}_2}|W_{x_2,R_1}f_{P\times R_2}|^2\frac{\mathbf{1}_{R_2}(x_2)}{|R_2|}\Big)^{1/2}\Big)^2dx_2
\]
\[
\le \int_{\mathbb{R}}\sum_{R_1\in\mathcal{D}_1}\Big(\sum_{P\in\operatorname{ch}_j(R_1)}\Big(\sum_{R_2\in\mathcal{D}_2}|W_{x_2,R_1}W^{-1}_{x_2,P}|^2\cdot|W_{x_2,P}f_{P\times R_2}|^2\frac{\mathbf{1}_{R_2}(x_2)}{|R_2|}\Big)^{1/2}\Big)^2dx_2
\]
\[
\le \int_{\mathbb{R}}\sum_{R_1\in\mathcal{D}_1}\Big(\sum_{P\in\operatorname{ch}_j(R_1)}|W_{x_2,R_1}W^{-1}_{x_2,P}|^2\Big)\Big(\sum_{P\in\operatorname{ch}_j(R_1)}\sum_{R_2\in\mathcal{D}_2}|W_{x_2,P}f_{P\times R_2}|^2\frac{\mathbf{1}_{R_2}(x_2)}{|R_2|}\Big)dx_2
\]
\[
\lesssim_d 2^j[W]_{A_2,\mathcal{D}}\int_{\mathbb{R}}\sum_{R_1\in\mathcal{D}_1}\sum_{P\in\operatorname{ch}_j(R_1)}\sum_{R_2\in\mathcal{D}_2}|W_{x_2,P}f_{P\times R_2}|^2\frac{\mathbf{1}_{R_2}(x_2)}{|R_2|}dx_2 = 2^j[W]_{A_2,\mathcal{D}}\sum_{P\in\mathcal{D}_1}\|\widetilde{S}_{\mathcal{D}_2,W_P}(f_P)\|^2_{L^2(\mathbb{R})}
\]
\[
\lesssim_d 2^j[W]_{A_2,\mathcal{D}}\sum_{P\in\mathcal{D}_1}[W_P]_{A_2,\mathcal{D}_2}\|f_P\|^2_{L^2(W_P)} \lesssim_d 2^j[W]^2_{A_2,\mathcal{D}}\sum_{P\in\mathcal{D}_1}\int_{\mathbb{R}}|W_{x_2,P}f_P(x_2)|^2dx_2
\]
\[
= 2^j[W]^2_{A_2,\mathcal{D}}\int_{\mathbb{R}}\|\widetilde{S}_{\mathcal{D}_1,W(\cdot,x_2)}(f(\cdot,x_2))\|^2_{L^2(\mathbb{R})}dx_2 \lesssim_d 2^j[W]^3_{A_2,\mathcal{D}}\|f\|^2_{L^2(W)}.
\]
Thus
\[
\|[S^j\widetilde{S}]_{\mathcal{D},W}\|_{L^2(W)\to L^2(\mathbb{R}^2)} \lesssim_d 2^{j/2}[W]^{3/2}_{A_2,\mathcal{D}}.
\]
The same bound is true for $[\widetilde{S}S^j]_{\mathcal{D},W}$ (the proof is symmetric).

8.2. Estimating non-cancellative terms for $p=2$. Here we estimate the various non-cancellative terms in Martikainen's representation (2.1) for $p=2$. We split them into different classes following [15, Subsection 7.4].

8.2.1.
Full standard paraproducts.
Let $\mathcal{D} = \mathcal{D}_1\times\mathcal{D}_2$ be a product dyadic grid in $\mathbb{R}\times\mathbb{R}$. Let $a\in L^2(\mathbb{R}^2)$ be such that $\|a\|_{\mathrm{BMO}_{\mathrm{prod}},\mathcal{D}}\le 1$. Consider the paraproduct
\[
\Pi^{(1)}_af := \sum_{R\in\mathcal{D}}a_R(f)_Rh_R.
\]
We have
\[
(\Pi^{(1)}_af,g) = \sum_{R\in\mathcal{D}}a_R\langle (f)_R,g_R\rangle.
\]
Let $1<p<\infty$, and let $W,U$ be $\mathcal{D}$-dyadic biparameter $d\times d$-matrix valued $A_p$ weights with $[W,U']_{A_p,\mathcal{D}}<\infty$. Then, by the well-known (scalar) $H^1_{\mathcal{D}}(\mathbb{R}^2)$-$\mathrm{BMO}_{\mathcal{D}}(\mathbb{R}^2)$ duality (see [15]) we have
\[
|(\Pi^{(1)}_af,g)| \le \sum_{R\in\mathcal{D}}|a_R|\cdot|\langle(f)_R,g_R\rangle| \lesssim_{p,d} [W,U']^{1/p}_{A_p,\mathcal{D}}\sum_{R\in\mathcal{D}}|a_R|\cdot|U_R(f)_R|\cdot|W'_Rg_R|
\]
\[
\lesssim [W,U']^{1/p}_{A_p,\mathcal{D}}\|a\|_{\mathrm{BMO}_{\mathrm{prod}},\mathcal{D}}\int_{\mathbb{R}^2}\Big(\sum_{R\in\mathcal{D}}\big(|U_R(f)_R|\cdot|W'_Rg_R|\big)^2\frac{\mathbf{1}_R(x)}{|R|}\Big)^{1/2}dx
\]
\[
\le [W,U']^{1/p}_{A_p,\mathcal{D}}\|a\|_{\mathrm{BMO}_{\mathrm{prod}},\mathcal{D}}\int_{\mathbb{R}^2}\widetilde{M}^U_{\mathcal{D}}f(x)\Big(\sum_{R\in\mathcal{D}}|W'_Rg_R|^2\frac{\mathbf{1}_R(x)}{|R|}\Big)^{1/2}dx
\]
\[
= [W,U']^{1/p}_{A_p,\mathcal{D}}\|a\|_{\mathrm{BMO}_{\mathrm{prod}},\mathcal{D}}\int_{\mathbb{R}^2}\widetilde{M}^U_{\mathcal{D}}f(x)\,S_{\mathcal{D},W'}g(x)\,dx \le [W,U']^{1/p}_{A_p,\mathcal{D}}\|\widetilde{M}^U_{\mathcal{D}}f\|_{L^p(\mathbb{R}^2)}\|S_{\mathcal{D},W'}g\|_{L^{p'}(\mathbb{R}^2)}.
\]
Similarly of course we obtain
\[
|(\Pi^{(1)}_af,g)| \lesssim_{p,d} \|\widetilde{M}^W_{\mathcal{D}}f\|_{L^p(\mathbb{R}^2)}\|S_{\mathcal{D},W'}g\|_{L^{p'}(\mathbb{R}^2)}.
\]
Therefore, we deduce the one-weight bound
\[
\|\Pi^{(1)}_a\|_{L^p(W)\to L^p(W)} \lesssim_{p,d} [W]^{\frac{p+1}{p(p-1)}+\frac{1}{p}+\frac{\gamma(p')}{p-1}}_{A_p,\mathcal{D}},
\]
as well as the two-weight bound
\[
\|\Pi^{(1)}_a\|_{L^p(U)\to L^p(W)} \lesssim_{p,d} [W,U']^{1/p}_{A_p,\mathcal{D}}\,[U]^{\frac{p+1}{p(p-1)}}_{A_p,\mathcal{D}}\,[W]^{\frac{1}{p}+\frac{\gamma(p')}{p-1}}_{A_p,\mathcal{D}}.
\]
For $p=2$,
\[
\|\Pi^{(1)}_a\|_{L^2(W)\to L^2(W)} \lesssim_d [W]^{5/2}_{A_2,\mathcal{D}},\qquad\text{and}\qquad \|\Pi^{(1)}_a\|_{L^2(U)\to L^2(W)} \lesssim_d [W,U']^{1/2}_{A_2,\mathcal{D}}\,[U]^{3/2}_{A_2,\mathcal{D}}\,[W]_{A_2,\mathcal{D}}.
\]
The weighted estimates for $\Pi^{(1)}_a$ imply by duality analogous weighted estimates for the paraproduct $\Pi^{(2)}_a$ given by
\[
\Pi^{(2)}_af := \sum_{R\in\mathcal{D}}a_Rf_R\frac{\mathbf{1}_R}{|R|},
\]
since $(\Pi^{(2)}_a)^* = \Pi^{(1)}_{\bar{a}}$ in the (unweighted) $L^2(\mathbb{R}^2;\mathbb{C}^d)$ sense, where $\bar{a}$ denotes the complex conjugate of $a$. In particular, for $p=2$,
\[
\|\Pi^{(2)}_a\|_{L^2(U)\to L^2(W)} = \|\Pi^{(1)}_{\bar{a}}\|_{L^2(W')\to L^2(U')} \lesssim_d [U',(W')']^{1/2}_{A_2,\mathcal{D}}\,[W']^{3/2}_{A_2,\mathcal{D}}\,[U']_{A_2,\mathcal{D}} \sim_d [W,U']^{1/2}_{A_2,\mathcal{D}}\,[W]^{3/2}_{A_2,\mathcal{D}}\,[U]_{A_2,\mathcal{D}},
\]
and similarly $\|\Pi^{(2)}_a\|_{L^2(W)\to L^2(W)} \lesssim_d [W]^{5/2}_{A_2,\mathcal{D}}$.
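The duality identity $(\Pi^{(1)}_af,g)=\sum_Ra_R\langle(f)_R,g_R\rangle$ that the estimate starts from can be sanity-checked in a scalar, one-parameter toy model on a finite dyadic grid on $[0,1)$ (the code and all its names are illustrative assumptions, not the paper's setup; numpy assumed):

```python
import numpy as np

n, N = 4, 16                 # dyadic levels 0..n-1 on [0, 1), N = 2**n cells
dx = 1.0 / N
rng = np.random.default_rng(1)
f = rng.standard_normal(N)   # step functions, constant on each grid cell
g = rng.standard_normal(N)

def haar(k, m):
    """L^2-normalized Haar function of I = [m 2^-k, (m+1) 2^-k), sampled on the grid."""
    h = np.zeros(N)
    L = N >> k                      # number of grid cells in I
    h[m*L : m*L + L//2] = 1.0
    h[m*L + L//2 : (m+1)*L] = -1.0
    return h / np.sqrt(L * dx)

Pi_f = np.zeros(N)
rhs = 0.0
for k in range(n):                  # every dyadic interval with >= 2 cells
    for m in range(2 ** k):
        h = haar(k, m)
        L = N >> k
        a = rng.standard_normal()           # symbol coefficient a_I
        avg_f = f[m*L:(m+1)*L].mean()       # (f)_I
        g_I = np.sum(h * g) * dx            # Haar coefficient of g
        Pi_f += a * avg_f * h               # Pi_a f = sum_I a_I (f)_I h_I
        rhs += a * avg_f * g_I
lhs = np.sum(Pi_f * g) * dx         # (Pi_a f, g)
assert abs(lhs - rhs) < 1e-10
```

The biparameter, matrix-weighted case treated above replaces the scalar coefficients by vectors and inserts the reducing operators $U_R$, $W'_R$ into each term of the sum.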
Of course, one can also obtain weighted estimates for $\Pi^{(2)}_a$ independently, working similarly to above.

8.2.2. Full mixed paraproducts.
Let $\mathcal{D} = \mathcal{D}_1\times\mathcal{D}_2$ be a product dyadic grid in $\mathbb{R}\times\mathbb{R}$. Let $a\in L^2(\mathbb{R}^2)$ be such that $\|a\|_{\mathrm{BMO}_{\mathrm{prod}},\mathcal{D}}\le 1$. Consider the paraproduct
\[
\Pi^{(3)}_af := \sum_{R=R_1\times R_2\in\mathcal{D}}a_R(f_{R_1})_{R_2}\frac{\mathbf{1}_{R_1}}{|R_1|}\otimes h_{R_2}.
\]
We have
\[
(\Pi^{(3)}_af,g) = \sum_{R=R_1\times R_2\in\mathcal{D}}a_R\langle(f_{R_1})_{R_2},(g_{R_2})_{R_1}\rangle.
\]
Let $W,U$ be $\mathcal{D}$-dyadic biparameter $d\times d$-matrix valued $A_2$ weights on $\mathbb{R}^2$ with $[W,U']_{A_2,\mathcal{D}}<\infty$. Then, by the well-known (scalar) $H^1_{\mathcal{D}}(\mathbb{R}^2)$-$\mathrm{BMO}_{\mathcal{D}}(\mathbb{R}^2)$ duality (see [15]) we have
\[
|(\Pi^{(3)}_af,g)| \lesssim_d [W,U']^{1/2}_{A_2,\mathcal{D}}\sum_{R=R_1\times R_2\in\mathcal{D}}|a_R|\cdot|U_{R_1\times R_2}(f_{R_1})_{R_2}|\cdot|W'_{R_1\times R_2}(g_{R_2})_{R_1}|
\]
\[
\lesssim [W,U']^{1/2}_{A_2,\mathcal{D}}\|a\|_{\mathrm{BMO}_{\mathrm{prod}},\mathcal{D}}\int_{\mathbb{R}^2}\Big(\sum_{R=R_1\times R_2\in\mathcal{D}}\big(|U_{R_1\times R_2}(f_{R_1})_{R_2}|\cdot|W'_{R_1\times R_2}(g_{R_2})_{R_1}|\big)^2\frac{\mathbf{1}_R(x)}{|R|}\Big)^{1/2}dx
\]
\[
\le [W,U']^{1/2}_{A_2,\mathcal{D}}\int_{\mathbb{R}^2}\Big(\sum_{R=R_1\times R_2\in\mathcal{D}}\big(F_{R_1}(x_2)\,G_{R_2}(x_1)\big)^2\frac{\mathbf{1}_R(x)}{|R|}\Big)^{1/2}dx
\]
\[
= [W,U']^{1/2}_{A_2,\mathcal{D}}\int_{\mathbb{R}^2}[S\widetilde{M}]_{\mathcal{D},U}f(x)\,[\widetilde{M}S]_{\mathcal{D},W'}g(x)\,dx \le [W,U']^{1/2}_{A_2,\mathcal{D}}\,\|[S\widetilde{M}]_{\mathcal{D},U}f\|_{L^2(\mathbb{R}^2)}\|[\widetilde{M}S]_{\mathcal{D},W'}g\|_{L^2(\mathbb{R}^2)},
\]
where
\[
F_{R_1}(x_2) := \sup_{R_2\in\mathcal{D}_2}|U_{R_1\times R_2}(f_{R_1})_{R_2}|\,\mathbf{1}_{R_2}(x_2),\qquad x_2\in\mathbb{R},\ R_1\in\mathcal{D}_1,
\]
\[
G_{R_2}(x_1) := \sup_{R_1\in\mathcal{D}_1}|W'_{R_1\times R_2}(g_{R_2})_{R_1}|\,\mathbf{1}_{R_1}(x_1),\qquad x_1\in\mathbb{R},\ R_2\in\mathcal{D}_2.
\]
Similarly of course we obtain
\[
|(\Pi^{(3)}_af,g)| \lesssim_d \|[S\widetilde{M}]_{\mathcal{D},W}f\|_{L^2(\mathbb{R}^2)}\|[\widetilde{M}S]_{\mathcal{D},W'}g\|_{L^2(\mathbb{R}^2)}.
\]
Therefore, we deduce the one-weight bound
\[
\|\Pi^{(3)}_a\|_{L^2(W)\to L^2(W)} \lesssim_d [W]^2_{A_2,\mathcal{D}},
\]
as well as the two-weight bound
\[
\|\Pi^{(3)}_a\|_{L^2(U)\to L^2(W)} \lesssim_d [W,U']^{1/2}_{A_2,\mathcal{D}}\,[U]_{A_2,\mathcal{D}}\,[W]_{A_2,\mathcal{D}}.
\]
These weighted estimates for $\Pi^{(3)}_a$ imply the same weighted estimates for the paraproduct $\Pi^{(4)}_a$ given by
\[
\Pi^{(4)}_af := \sum_{R=R_1\times R_2\in\mathcal{D}}a_R(f_{R_2})_{R_1}\,h_{R_1}\otimes\frac{\mathbf{1}_{R_2}}{|R_2|},
\]
since $(\Pi^{(3)}_a)^* = \Pi^{(4)}_{\bar{a}}$ in the (unweighted) $L^2(\mathbb{R}^2;\mathbb{C}^d)$ sense, where $\bar{a}$ denotes again the complex conjugate of $a$ (of course, one can also obtain weighted estimates for $\Pi^{(4)}_a$ independently, working similarly to above).

8.2.3. Partial paraproducts.
Let $\mathcal{D} = \mathcal{D}_1\times\mathcal{D}_2$ be any product dyadic grid in $\mathbb{R}^2$. Consider the shifted partial paraproduct
\[
S^{i,j}_{\mathcal{D}}f = \sum_{R_1\in\mathcal{D}_1}\ \sum_{\substack{P\in\operatorname{ch}_i(R_1)\\ Q\in\operatorname{ch}_j(R_1)}}\ \sum_{R_2\in\mathcal{D}_2}(a_{PQR_1})_{R_2}\,f_{P\times R_2}\,h_Q\otimes\frac{\mathbf{1}_{R_2}}{|R_2|},
\]
where $i,j$ are non-negative integers, and for every $P$, $Q$, $R_1$, $a_{PQR_1}$ is a function in $L^2(\mathbb{R})$ with Haar coefficients $(a_{PQR_1})_{R_2}$, $R_2\in\mathcal{D}_2$, such that $\|a_{PQR_1}\|_{\mathrm{BMO}_{\mathcal{D}_2}(\mathbb{R})}\le 2^{-(i+j)/2}$. We factorize the duality form $(S^{i,j}_{\mathcal{D}}f,g)$, adapting [15, Proposition 7.6].

Let $W,U$ be $\mathcal{D}$-dyadic biparameter $d\times d$-matrix valued $A_2$ weights on $\mathbb{R}^2$ with $[W,U']_{A_2,\mathcal{D}}<\infty$. For a.e. $x_2\in\mathbb{R}$, we denote by $W_{x_2,I}$ the reducing operator of the weight $W_{x_2}(x_1) := W(x_1,x_2)$, $x_1\in\mathbb{R}$, over the interval $I$ with respect to the exponent 2. Define $W'_{x_2,I}$, $U_{x_2,I}$, $U'_{x_2,I}$ corresponding to the weights $W'$, $U$, $U'$ respectively in a similar way. By the classical $H^1_{\mathcal{D}_2}(\mathbb{R})$-$\mathrm{BMO}_{\mathcal{D}_2}(\mathbb{R})$ duality we have
\[
|(S^{i,j}_{\mathcal{D}}f,g)| \le \sum_{R_1\in\mathcal{D}_1}\sum_{\substack{P\in\operatorname{ch}_i(R_1)\\ Q\in\operatorname{ch}_j(R_1)}}\sum_{R_2\in\mathcal{D}_2}|(a_{PQR_1})_{R_2}|\cdot|\langle f_{P\times R_2},(g_Q)_{R_2}\rangle|
\]
\[
\lesssim \sum_{R_1\in\mathcal{D}_1}\sum_{\substack{P\in\operatorname{ch}_i(R_1)\\ Q\in\operatorname{ch}_j(R_1)}}\|a_{PQR_1}\|_{\mathrm{BMO}_{\mathcal{D}_2}(\mathbb{R})}\int_{\mathbb{R}}\Big(\sum_{R_2\in\mathcal{D}_2}|\langle f_{P\times R_2},(g_Q)_{R_2}\rangle|^2\frac{\mathbf{1}_{R_2}(x_2)}{|R_2|}\Big)^{1/2}dx_2
\]
\[
\le 2^{-(i+j)/2}\sum_{R_1\in\mathcal{D}_1}\sum_{\substack{P\in\operatorname{ch}_i(R_1)\\ Q\in\operatorname{ch}_j(R_1)}}\int_{\mathbb{R}}\Big(\sum_{R_2\in\mathcal{D}_2}|\langle f_{P\times R_2},(g_Q)_{R_2}\rangle|^2\frac{\mathbf{1}_{R_2}(x_2)}{|R_2|}\Big)^{1/2}dx_2.
\]
We observe that for a.e. $x_2\in\mathbb{R}$ we have
\[
|\langle f_{P\times R_2},(g_Q)_{R_2}\rangle| = |\langle U_{x_2,R_1}f_{P\times R_2},U^{-1}_{x_2,R_1}(g_Q)_{R_2}\rangle| \le |U_{x_2,R_1}f_{P\times R_2}|\cdot|U^{-1}_{x_2,R_1}(W'_{x_2,R_1})^{-1}|\cdot|W'_{x_2,R_1}(g_Q)_{R_2}|
\]
\[
\lesssim_d |U_{x_2,R_1}f_{P\times R_2}|\cdot|U'_{x_2,R_1}W_{x_2,R_1}|\cdot|W'_{x_2,R_1}(g_Q)_{R_2}| \lesssim_d [W(\cdot,x_2),U'(\cdot,x_2)]^{1/2}_{A_2,\mathcal{D}_1}\,|U_{x_2,R_1}f_{P\times R_2}|\cdot|W'_{x_2,R_1}(g_Q)_{R_2}|
\]
\[
\lesssim_d [W,U']^{1/2}_{A_2,\mathcal{D}}\,|U_{x_2,R_1}f_{P\times R_2}|\cdot|W'_{x_2,R_1}(g_Q)_{R_2}|,
\]
where in the first $\lesssim_d$ we applied Lemma 3.5 (coupled with (3.1)) twice, and in the last $\lesssim_d$ we applied Lemma 3.6. It follows that
\[
2^{-(i+j)/2}\sum_{R_1\in\mathcal{D}_1}\sum_{\substack{P\in\operatorname{ch}_i(R_1)\\ Q\in\operatorname{ch}_j(R_1)}}\int_{\mathbb{R}}\Big(\sum_{R_2\in\mathcal{D}_2}|\langle f_{P\times R_2},(g_Q)_{R_2}\rangle|^2\frac{\mathbf{1}_{R_2}(x_2)}{|R_2|}\Big)^{1/2}dx_2
\]
\[
\lesssim_d 2^{-(i+j)/2}[W,U']^{1/2}_{A_2,\mathcal{D}}\sum_{R_1\in\mathcal{D}_1}\sum_{\substack{P\in\operatorname{ch}_i(R_1)\\ Q\in\operatorname{ch}_j(R_1)}}\int_{\mathbb{R}}\Big(\sum_{R_2\in\mathcal{D}_2}\big(|U_{x_2,R_1}f_{P\times R_2}|\cdot|W'_{x_2,R_1}(g_Q)_{R_2}|\big)^2\frac{\mathbf{1}_{R_2}(x_2)}{|R_2|}\Big)^{1/2}dx_2.
\]
We have
\[
\sum_{R_1\in\mathcal{D}_1}\sum_{\substack{P\in\operatorname{ch}_i(R_1)\\ Q\in\operatorname{ch}_j(R_1)}}\int_{\mathbb{R}}\Big(\sum_{R_2\in\mathcal{D}_2}\big(|U_{x_2,R_1}f_{P\times R_2}|\cdot|W'_{x_2,R_1}(g_Q)_{R_2}|\big)^2\frac{\mathbf{1}_{R_2}(x_2)}{|R_2|}\Big)^{1/2}dx_2
\]
\[
\le \int_{\mathbb{R}}\sum_{R_1\in\mathcal{D}_1}\sum_{\substack{P\in\operatorname{ch}_i(R_1)\\ Q\in\operatorname{ch}_j(R_1)}}\Big(\sup_{R_2\in\mathcal{D}_2}|W'_{x_2,R_1}(g_Q)_{R_2}|\,\mathbf{1}_{R_2}(x_2)\Big)\Big(\sum_{R_2\in\mathcal{D}_2}|U_{x_2,R_1}f_{P\times R_2}|^2\frac{\mathbf{1}_{R_2}(x_2)}{|R_2|}\Big)^{1/2}dx_2
\]
\[
= \int_{\mathbb{R}^2}\sum_{R_1\in\mathcal{D}_1}\Big(\sum_{Q\in\operatorname{ch}_j(R_1)}\sup_{R_2\in\mathcal{D}_2}|W'_{x_2,R_1}(g_Q)_{R_2}|\,\mathbf{1}_{R_2}(x_2)\Big)\Big(\sum_{P\in\operatorname{ch}_i(R_1)}\Big(\sum_{R_2\in\mathcal{D}_2}|U_{x_2,R_1}f_{P\times R_2}|^2\frac{\mathbf{1}_{R_2}(x_2)}{|R_2|}\Big)^{1/2}\Big)\frac{\mathbf{1}_{R_1}(x_1)}{|R_1|}\,dx
\]
\[
\le \int_{\mathbb{R}^2}[S^i\widetilde{S}]_{\mathcal{D},U}f(x)\,[S^j\widetilde{M}]_{\mathcal{D},W'}g(x)\,dx \le \|[S^i\widetilde{S}]_{\mathcal{D},U}f\|_{L^2(\mathbb{R}^2)}\|[S^j\widetilde{M}]_{\mathcal{D},W'}g\|_{L^2(\mathbb{R}^2)}.
\]
In conclusion,
\[
|(S^{i,j}_{\mathcal{D}}f,g)| \lesssim_d [W,U']^{1/2}_{A_2,\mathcal{D}}\,2^{-(i+j)/2}\,\|[S^i\widetilde{S}]_{\mathcal{D},U}f\|_{L^2(\mathbb{R}^2)}\|[S^j\widetilde{M}]_{\mathcal{D},W'}g\|_{L^2(\mathbb{R}^2)}.
\]
Similarly of course we have the one-weight factorization
\[
|(S^{i,j}_{\mathcal{D}}f,g)| \lesssim_d 2^{-(i+j)/2}\,\|[S^i\widetilde{S}]_{\mathcal{D},W}f\|_{L^2(\mathbb{R}^2)}\|[S^j\widetilde{M}]_{\mathcal{D},W'}g\|_{L^2(\mathbb{R}^2)}.
\]
Since the factor $2^{-(i+j)/2}$ coming from the BMO normalization cancels against the factors $2^{i/2}$, $2^{j/2}$ in the bounds of Subsection 8.1, we deduce
\[
\|S^{i,j}_{\mathcal{D}}\|_{L^2(U)\to L^2(W)} \lesssim_d [W,U']^{1/2}_{A_2,\mathcal{D}}\,[W]^{3/2}_{A_2,\mathcal{D}}\,[U]^{3/2}_{A_2,\mathcal{D}}\qquad\text{and}\qquad \|S^{i,j}_{\mathcal{D}}\|_{L^2(W)\to L^2(W)} \lesssim_d [W]^{3}_{A_2,\mathcal{D}}.
\]
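The linear algebra driving the first display above is simply that for a positive definite (Hermitian) matrix $A$ one has $\langle u,v\rangle=\langle Au,A^{-1}v\rangle$, hence $|\langle u,v\rangle|\le|Au|\cdot|A^{-1}v|$; this is the step that lets the reducing operators be inserted into the pairing. A quick numerical check (numpy assumed; the matrix below is an arbitrary stand-in for a reducing operator):

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((3, 3))
A = M @ M.T + 3 * np.eye(3)        # symmetric positive definite "reducing operator"
Ainv = np.linalg.inv(A)

for _ in range(100):
    u, v = rng.standard_normal(3), rng.standard_normal(3)
    # <u, v> = <A u, A^{-1} v> since A is symmetric
    assert abs(u @ v - (A @ u) @ (Ainv @ v)) < 1e-8
    # hence the Cauchy-Schwarz bound used in the duality estimate
    assert abs(u @ v) <= np.linalg.norm(A @ u) * np.linalg.norm(Ainv @ v) + 1e-12
```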
Working similarly, we obtain the same weighted estimates for shifted partial paraproducts $S^{i,j,*}_{\mathcal{D}}$ of the form
\[
S^{i,j,*}_{\mathcal{D}}f = \sum_{R_2\in\mathcal{D}_2}\ \sum_{\substack{P\in\operatorname{ch}_i(R_2)\\ Q\in\operatorname{ch}_j(R_2)}}\ \sum_{R_1\in\mathcal{D}_1}(a_{PQR_2})_{R_1}\,f_{R_1\times P}\,\frac{\mathbf{1}_{R_1}}{|R_1|}\otimes h_Q,
\]
where $i,j$ are non-negative integers, and for every $P$, $Q$, $R_2$, $a_{PQR_2}$ is a function in $L^2(\mathbb{R})$ with Haar coefficients $(a_{PQR_2})_{R_1}$, $R_1\in\mathcal{D}_1$, such that $\|a_{PQR_2}\|_{\mathrm{BMO}_{\mathcal{D}_1}(\mathbb{R})}\le 2^{-(i+j)/2}$.

8.3. $L^2$ matrix-weighted bounds for general Journé operators. Combining the above with the results of the previous section, we obtain the following: for $p=2$, if $T$ is any (not necessarily paraproduct free) Journé operator on $\mathbb{R}^2$, then
\[
\|T\|_{L^2(U)\to L^2(W)} \lesssim_{d,T} [W,U']^{1/2}_{A_2}\,[W]^{3/2}_{A_2}\,[U]^{3/2}_{A_2}\qquad\text{and}\qquad \|T\|_{L^2(W)\to L^2(W)} \lesssim_{d,T} [W]^{3}_{A_2}.
\]

9. Appendix
9.1. Reverse Hölder inequality for biparameter dyadic $A_p$ weights. For the reader's convenience, we give here the statement and proof of the reverse Hölder inequality for scalar biparameter dyadic $A_p$ weights due to Holmes–Petermichl–Wick [15, Proposition 2.2]. First of all, if $\mathcal{D}$ is any dyadic grid in $\mathbb{R}^n$ and $w$ is any weight on $\mathbb{R}^n$, we define the dyadic $A_\infty$ characteristic
\[
[w]_{A_\infty,\mathcal{D}} := \sup_{Q\in\mathcal{D}}\frac{(M_{\mathcal{D},Q}w)_Q}{(w)_Q},
\]
where $M_{\mathcal{D},Q}$ denotes the dyadic Hardy–Littlewood maximal function adapted to $Q$,
\[
M_{\mathcal{D},Q}f := \sup_{\substack{P\in\mathcal{D}\\ P\subseteq Q}}(f)_P\,\mathbf{1}_P.
\]
A tiny modification of [31, Lemma 4.1] shows that $[w]_{A_\infty,\mathcal{D}}\le[w]_{A_p,\mathcal{D}}$, for all $1<p<\infty$. We now give the statement of the sharp reverse Hölder inequality for one-parameter dyadic $A_p$ weights from [31] (attributed therein to [19], [39]).

Lemma 9.1.
Let $\mathcal{D}$ be any dyadic grid in $\mathbb{R}^n$, and let $w$ be a weight on $\mathbb{R}^n$ with $[w]_{A_\infty,\mathcal{D}}<\infty$. Then, for all
\[
\delta\in\Big(0,\frac{1}{2^{n+1}[w]_{A_\infty,\mathcal{D}}}\Big),
\]
there holds
\[
(w^{1+\delta})_Q \le 2\,(w)_Q^{1+\delta},\qquad\forall Q\in\mathcal{D}.
\]
We now state and prove the reverse Hölder inequality for scalar biparameter dyadic $A_p$ weights due to Holmes–Petermichl–Wick [15, Proposition 2.2], with emphasis on explicitly tracking constants.

Lemma 9.2 ([15]). Let $\mathcal{D} := \mathcal{D}_1\times\mathcal{D}_2$ be any product dyadic grid in $\mathbb{R}^n\times\mathbb{R}^m$. Let $1<p<\infty$, and let $w$ be a weight on $\mathbb{R}^{n+m}$ with $[w]_{A_p,\mathcal{D}}<\infty$. Then, for all
\[
\delta\in\Big(0,\frac{1}{2^{\max(m,n)+1}[w]_{A_p,\mathcal{D}}}\Big),
\]
there holds
\[
(w^{1+\delta})_R \le 4\,(w)_R^{1+\delta},\qquad\forall R\in\mathcal{D}.
\]

Proof.
We follow the proof of [15, Proposition 2.2]. For a.e. $x_1\in\mathbb{R}^n$, define
\[
w_{x_1}(x_2) := w(x_1,x_2),\qquad x_2\in\mathbb{R}^m.
\]
For a.e. $x_1\in\mathbb{R}^n$, we have that $w_{x_1}$ is a weight on $\mathbb{R}^m$, and since $w$ is scalar valued it is trivial by classical dyadic Lebesgue differentiation to see that $[w_{x_1}]_{A_p,\mathcal{D}_2}\le[w]_{A_p,\mathcal{D}}$. Therefore, for a.e. $x_1\in\mathbb{R}^n$ we have
\[
\delta < \frac{1}{2^{m+1}[w_{x_1}]_{A_p,\mathcal{D}_2}} \le \frac{1}{2^{m+1}[w_{x_1}]_{A_\infty,\mathcal{D}_2}},
\]
and therefore, for all $R = R_1\times R_2\in\mathcal{D}$,
\[
(w^{1+\delta})_R = \fint_{R_1}((w_{x_1})^{1+\delta})_{R_2}\,dx_1 \le 2\fint_{R_1}(w_{x_1})_{R_2}^{1+\delta}\,dx_1 = 2\,(v_{R_2}^{1+\delta})_{R_1},
\]
where $v_{R_2}(x_1) := (w(x_1,\cdot))_{R_2}$, for a.e. $x_1\in\mathbb{R}^n$. We notice that for all $P\in\mathcal{D}_1$, we have $(v_{R_2})_P = (w)_{P\times R_2}$, and also by Jensen's inequality (since $x\mapsto x^{-1/(p-1)}$ is convex on $(0,\infty)$)
\[
v_{R_2}(x_1)^{-1/(p-1)} = (w(x_1,\cdot))_{R_2}^{-1/(p-1)} \le \big((w(x_1,\cdot))^{-1/(p-1)}\big)_{R_2},
\]
therefore $(v_{R_2}^{-1/(p-1)})_P \le (w^{-1/(p-1)})_{P\times R_2}$. We deduce $[v_{R_2}]_{A_p,\mathcal{D}_1}\le[w]_{A_p,\mathcal{D}}$. Therefore
\[
\delta < \frac{1}{2^{n+1}[v_{R_2}]_{A_p,\mathcal{D}_1}} \le \frac{1}{2^{n+1}[v_{R_2}]_{A_\infty,\mathcal{D}_1}},
\]
and therefore
\[
(v_{R_2}^{1+\delta})_{R_1} \le 2\,(v_{R_2})_{R_1}^{1+\delta} = 2\,(w)_R^{1+\delta},
\]
concluding the proof. □
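The proof above runs on one elementary identity: the average of $w$ over a rectangle equals the average over the first factor of the slice averages $v_{R_2}(x_1) = (w(x_1,\cdot))_{R_2}$. In discrete form, with a weight sampled on a grid (an illustrative toy, not the paper's setting), this is just "mean of row means equals overall mean":

```python
import numpy as np

rng = np.random.default_rng(3)
w = rng.random((8, 16)) + 0.1       # samples of a weight on a rectangle P x Q

# (w)_{P x Q} equals the P-average of the slice averages v(x1) = (w(x1, .))_Q,
# which is exactly the identity the biparameter proof iterates on.
v = w.mean(axis=1)                  # v(x1) for each sample x1 in P
assert abs(w.mean() - v.mean()) < 1e-12
```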
Here we show that if a family of norms is measurably parametrized, then one can choose a reducing matrix for each of them in a way that is measurable with respect to the parameter. We fix a positive integer $d$ throughout this subsection. All metric spaces will be assumed to be equipped with the Borel $\sigma$-algebra with respect to the topology induced by their metric, unless explicitly stated otherwise.

9.2.1. Hausdorff distance.
Let $F = \mathbb{R}$ or $F = \mathbb{C}$. Recall for any $x\in F^d$ and for any nonempty $K\subseteq F^d$ the notation
\[
\operatorname{dist}(x,K) := \inf_{y\in K}|x-y|.
\]
Let $X(F^d)$ be the set of all nonempty compact subsets of $F^d$. For all $K_1,K_2\in X(F^d)$, we define the Hausdorff distance of $K_1,K_2$ by
\[
\delta(K_1,K_2) := \max\Big(\sup_{y\in K_1}\operatorname{dist}(y,K_2),\ \sup_{z\in K_2}\operatorname{dist}(z,K_1)\Big).
\]
Then, it is very well known that $(X(F^d),\delta)$ is a metric space. Moreover, it is easy to see that $(X(F^d),\delta)$ is separable; for example, the family of all finite subsets of $(\mathbb{Q}^d+i\mathbb{Q}^d)\cap F^d$ is dense in $(X(F^d),\delta)$. See [1] for a more general version of this fact, as well as for additional properties of the Hausdorff distance.

9.2.2. Ellipsoids.
Let $F = \mathbb{R}$ or $F = \mathbb{C}$. Let $B_d(F) := \{v\in F^d \colon |v|\le 1\}$ be the unit ball in $F^d$. We identify linear maps $A\colon F^d\to F^d$ with $d\times d$-matrices with entries in $F$ in the natural way. A subset $E$ of $F^d$ is said to be a centrally symmetric ellipsoid in $F^d$ if there exists an $F$-linear map $A\colon F^d\to F^d$ such that
\[
E = A\,B_d(F) := \{Av \colon v\in B_d(F)\}.
\]
Simple facts of linear algebra, see for instance [9, page 304], imply that if $A,B\in M_d(F)$ are such that $E = A\,B_d(F) = B\,B_d(F)$, then $AA^* = BB^*$ and $E = (AA^*)^{1/2}B_d(F)$. In particular, if $E$ is any centrally symmetric ellipsoid in $F^d$, then there exists a unique positive semidefinite matrix $A\in M_d(F)$ such that $E = A\,B_d(F)$. Denote by $PS_d(F)$ the set of all positive semidefinite matrices in $M_d(F)$, and by $E_d(F)$ the set of all centrally symmetric ellipsoids in $F^d$. Then, the map $M_{F,d}\colon E_d(F)\to PS_d(F)$ given by $M_{F,d}(A\,B_d(F)) := A$, for all $A\in PS_d(F)$, is a well-defined bijection. A centrally symmetric ellipsoid $E$ in $F^d$ is said to be nondegenerate if $E = A\,B_d(F)$ for some invertible linear map $A\colon F^d\to F^d$. Let $E^*_d(F)$ be the set of all nondegenerate centrally symmetric ellipsoids in $F^d$. Notice that $E\in E_d(F)$ is nondegenerate if and only if $M_{F,d}(E)$ is positive definite. We denote by $P_d(F)$ the set of all positive definite matrices in $M_d(F)$.

9.2.3. John ellipsoids.
Let $K$ be any convex compact subset of $\mathbb{R}^d$ such that $0\in\operatorname{Int}(K)$ and
\[
-K := \{-v \colon v\in K\}\subseteq K.
\]
Then, it is well known that there exists a unique $E\in E^*_d(\mathbb{R})$ such that $E\subseteq K$ and
\[
|E| = \max\{|F| \colon F\in E^*_d(\mathbb{R}),\ F\subseteq K\}.
\]
The ellipsoid $E$ is usually called the John ellipsoid of $K$. It is well known, see [10], [31], that
\[
(9.1)\qquad E\subseteq K\subseteq\sqrt{d}\,E.
\]
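As a concrete instance of (9.1): for $K = [-1,1]^2$, the unit ball of the sup-norm on $\mathbb{R}^2$, the John ellipsoid is the Euclidean unit disk, and (9.1) reduces to the elementary inequalities $\|e\|_\infty\le|e|\le\sqrt{2}\,\|e\|_\infty$. A numerical spot check (numpy assumed; this worked example is ours, not taken from [10] or [31]):

```python
import numpy as np

# For r = sup-norm on R^2 the unit ball K is the square [-1, 1]^2, whose John
# ellipsoid is the Euclidean unit disk, so here A = I and (9.1) reads
#   ||e||_inf <= |e| <= sqrt(2) ||e||_inf   for every e in R^2.
rng = np.random.default_rng(4)
for _ in range(1000):
    e = rng.standard_normal(2)
    r = np.max(np.abs(e))            # r(e) = ||e||_inf
    assert r <= np.linalg.norm(e) + 1e-12
    assert np.linalg.norm(e) <= np.sqrt(2) * r + 1e-12
```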
Let $C_d(\mathbb{R})$ be the set of all convex compact subsets $K$ of $\mathbb{R}^d$ such that $0\in\operatorname{Int}(K)$ and $-K\subseteq K$. Then, it is well known, see [16], that the map $J_d\colon C_d(\mathbb{R})\to E^*_d(\mathbb{R})$ sending each $K\in C_d(\mathbb{R})$ to its John ellipsoid is continuous with respect to the Hausdorff distance.

9.2.4. John ellipsoids for unit balls of norms on $\mathbb{R}^d$. Let $r$ be any norm on $\mathbb{R}^d$. Let $K := \{v\in\mathbb{R}^d \colon r(v)\le 1\}$ be the unit ball of $r$. Clearly, $K\in C_d(\mathbb{R})$. Let $E$ be the John ellipsoid of $K$. Set $A := M_{\mathbb{R},d}(E)$. Then, (9.1) implies that for all $e\in\mathbb{R}^d$, we have
\[
|e|\le 1 \Rightarrow r(Ae)\le 1\qquad\text{and}\qquad r(e)\le 1 \Rightarrow |A^{-1}e|\le\sqrt{d}.
\]
It follows that
\[
r(e) \le |A^{-1}e| \le \sqrt{d}\,r(e),\qquad\forall e\in\mathbb{R}^d.
\]

9.2.5. Measurable choice of reducing matrix for measurably parametrized norms on $\mathbb{R}^d$. Let $(X,\mathcal{F})$ be any measurable space. Let $r\colon X\times\mathbb{R}^d\to[0,\infty)$ be a function such that $r(x,\cdot)$ is a norm on $\mathbb{R}^d$, for all $x\in X$, and $r(\cdot,y)$ is a measurable function on $X$, for all $y\in\mathbb{R}^d$. For all $x\in X$, we let $K_x$ be the unit ball of $r(x,\cdot)$, $E_x$ be the John ellipsoid of $K_x$, and we set $A_x := M_{\mathbb{R},d}(E_x)$.

Proposition 9.3.
The map $\mathcal{R}\colon X\to P_d(\mathbb{R})$ given by $\mathcal{R}(x) := A_x$, for all $x\in X$, is measurable.

The proof will be accomplished in several steps. We first consider the map $\mathcal{K}\colon X\to C_d(\mathbb{R})$ given by $\mathcal{K}(x) := K_x$, for all $x\in X$. Then, it is clear that $\mathcal{R} = M_{\mathbb{R},d}\circ J_d\circ\mathcal{K}$. Recall that $J_d\colon C_d(\mathbb{R})\to E^*_d(\mathbb{R})$ is continuous, thus measurable.

Lemma 9.4 ([9]). The map $M_{\mathbb{R},d}\colon E_d(\mathbb{R})\to PS_d(\mathbb{R})$ is continuous. In fact, there holds
\[
\delta(E,F) \le |A-B| \lesssim_d \delta(E,F),
\]
where $E = A\,B_d(\mathbb{R})$ and $F = B\,B_d(\mathbb{R})$, for all $A,B\in PS_d(\mathbb{R})$, so $M_{\mathbb{R},d}\colon E_d(\mathbb{R})\to PS_d(\mathbb{R})$ is a homeomorphism.

Therefore, $M_{\mathbb{R},d}|_{E^*_d(\mathbb{R})}\colon E^*_d(\mathbb{R})\to P_d(\mathbb{R})$ is continuous, thus measurable. Thus, it suffices to prove that the map $\mathcal{K}\colon X\to C_d(\mathbb{R})$ is measurable.

Lemma 9.5.
Let $K$ be any compact convex subset of $\mathbb{R}^d$ with $\operatorname{Int}(K)\ne\emptyset$. Then, $K\cap\mathbb{Q}^d$ is dense in $K$. In particular,
\[
\operatorname{dist}(x,K) = \inf_{y\in K\cap\mathbb{Q}^d}|y-x|,\qquad\forall x\in\mathbb{R}^d.
\]

Proof.
It is a well-known fact of convex geometry, see e.g. [12, Proposition 3.1], that $K = \operatorname{Cl}(\operatorname{Int}(K))$. Since $H := \operatorname{Int}(K)$ is an open subset of $\mathbb{R}^d$, we have that $H\cap\mathbb{Q}^d$ is dense in $H$. The desired result follows. □

Since the function $\operatorname{dist}(\cdot,K)$ on $\mathbb{R}^d$ is continuous, for any nonempty $K\subseteq\mathbb{R}^d$, the previous lemma immediately implies the following corollary.

Corollary 9.6.
Let $K_1,K_2$ be any two convex compact subsets of $\mathbb{R}^d$ with $\operatorname{Int}(K_i)\ne\emptyset$, $i=1,2$. Then
\[
\delta(K_1,K_2) = \max\Big(\sup_{y\in K_1\cap\mathbb{Q}^d}\operatorname{dist}(y,K_2),\ \sup_{z\in K_2\cap\mathbb{Q}^d}\operatorname{dist}(z,K_1)\Big).
\]
We can now prove that the map $\mathcal{K}$ is measurable.

Lemma 9.7.
The map $\mathcal{K}\colon X\to C_d(\mathbb{R})$ given by $\mathcal{K}(x) := K_x$, for all $x\in X$, is measurable.

Proof. Since $(X(\mathbb{R}^d),\delta)$ is a separable metric space, it follows that $(C_d(\mathbb{R}),\delta)$ is also a separable metric space. Therefore, it suffices to prove that for all $K\in C_d(\mathbb{R})$ and for all $\varepsilon>0$, the inverse image along $\mathcal{K}$ of the closed $\delta$-ball of radius $\varepsilon$ and center $K$,
\[
\{x\in X \colon \delta(K_x,K)\le\varepsilon\},
\]
belongs to $\mathcal{F}$. We have
\[
\{x\in X \colon \delta(K_x,K)\le\varepsilon\} = \Big\{x\in X \colon \sup_{y\in K\cap\mathbb{Q}^d}\operatorname{dist}(y,K_x)\le\varepsilon\Big\}\cap\Big\{x\in X \colon \sup_{z\in K_x\cap\mathbb{Q}^d}\operatorname{dist}(z,K)\le\varepsilon\Big\}.
\]
Notice that
\[
\Big\{x\in X \colon \sup_{y\in K\cap\mathbb{Q}^d}\operatorname{dist}(y,K_x)\le\varepsilon\Big\} = \bigcap_{y\in K\cap\mathbb{Q}^d}\ \bigcap_{n=1}^{\infty}\Big\{x\in X \colon \operatorname{dist}(y,K_x)<\varepsilon+\tfrac{1}{n}\Big\},
\]
and
\[
\Big\{x\in X \colon \operatorname{dist}(y,K_x)<\varepsilon+\tfrac{1}{n}\Big\} = \Big\{x\in X \colon \inf_{z\in K_x\cap\mathbb{Q}^d}|y-z|<\varepsilon+\tfrac{1}{n}\Big\} = \bigcup_{\substack{z\in\mathbb{Q}^d\\ |y-z|<\varepsilon+\frac{1}{n}}}\{x\in X \colon z\in K_x\} = \bigcup_{\substack{z\in\mathbb{Q}^d\\ |y-z|<\varepsilon+\frac{1}{n}}}\{x\in X \colon r(x,z)\le 1\},
\]
for all $y\in\mathbb{R}^d$ and for all $n=1,2,\ldots$. Thus $\{x\in X \colon \sup_{y\in K\cap\mathbb{Q}^d}\operatorname{dist}(y,K_x)\le\varepsilon\}\in\mathcal{F}$. Moreover, we have
\[
\Big\{x\in X \colon \sup_{z\in K_x\cap\mathbb{Q}^d}\operatorname{dist}(z,K)\le\varepsilon\Big\} = \bigcap_{\substack{z\in\mathbb{Q}^d\\ \operatorname{dist}(z,K)>\varepsilon}}\{x\in X \colon z\notin K_x\} = \bigcap_{\substack{z\in\mathbb{Q}^d\\ \operatorname{dist}(z,K)>\varepsilon}}\{x\in X \colon r(x,z)>1\}.
\]
It follows that $\{x\in X \colon \sup_{z\in K_x\cap\mathbb{Q}^d}\operatorname{dist}(z,K)\le\varepsilon\}\in\mathcal{F}$ as well, concluding the proof. □

9.2.6. Identifying $\mathbb{C}^d$ with $\mathbb{R}^{2d}$. Recall that we identify complex $d\times d$-matrices with $\mathbb{C}$-linear maps on $\mathbb{C}^d$ in the natural way, and that we identify $(2d)\times(2d)$-matrices with linear maps on $\mathbb{R}^{2d}$ in the natural way. Let $B = \{e_1,ie_1,\ldots,e_d,ie_d\}$ be the standard $\mathbb{R}$-basis of $\mathbb{C}^d$, and let $B' = \{e_1,\ldots,e_{2d}\}$ be the standard basis of $\mathbb{R}^{2d}$. For all $A\in M_d(\mathbb{C})$, it is easy to see by direct computation that $[A]_{BB} = H(A)$, where $H\colon M_d(\mathbb{C})\to M_{2d}(\mathbb{R})$ is the natural ring monomorphism induced by the ring monomorphism $H\colon\mathbb{C}\to M_2(\mathbb{R})$ given by
\[
H(z) := \begin{bmatrix}\operatorname{Re}(z) & -\operatorname{Im}(z)\\ \operatorname{Im}(z) & \operatorname{Re}(z)\end{bmatrix},\qquad\forall z\in\mathbb{C}.
\]
Consider the map $\mathcal{R}\colon\mathbb{C}^d\to\mathbb{R}^{2d}$ given by
\[
\mathcal{R}(z_1,\ldots,z_d) := (\operatorname{Re}(z_1),\operatorname{Im}(z_1),\ldots,\operatorname{Re}(z_d),\operatorname{Im}(z_d)),\qquad\forall z\in\mathbb{C}^d.
\]
Clearly, $\mathcal{R}$ is an $\mathbb{R}$-linear isometric isomorphism, and it is also measure-preserving. In particular, $\mathcal{R}(B_d(\mathbb{C})) = B_{2d}(\mathbb{R})$.
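The map $H$ just described is easy to implement and exercise; the sketch below (numpy assumed; the helper name `H` simply mirrors the text) verifies that it is multiplicative and that it intertwines the adjoint with the transpose:

```python
import numpy as np

def H(A):
    """Realification of a complex matrix: each entry z becomes the 2x2 block
    [[Re z, -Im z], [Im z, Re z]], with blocks in interleaved coordinates."""
    d = A.shape[0]
    out = np.zeros((2 * d, 2 * d))
    out[0::2, 0::2] = A.real
    out[0::2, 1::2] = -A.imag
    out[1::2, 0::2] = A.imag
    out[1::2, 1::2] = A.real
    return out

rng = np.random.default_rng(5)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# H is a ring homomorphism and intertwines adjoint with transpose.
assert np.allclose(H(A @ B), H(A) @ H(B))
assert np.allclose(H(A).T, H(A.conj().T))
assert np.allclose(H(np.eye(3, dtype=complex)), np.eye(6))
```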
Note that $\mathcal{R}$ induces the map $X(\mathcal{R})\colon X(\mathbb{C}^d)\to X(\mathbb{R}^{2d})$ given by $X(\mathcal{R})(K) := \mathcal{R}(K)$, for all $K\in X(\mathbb{C}^d)$, which is an isometric isomorphism with respect to the Hausdorff distance. Notice that for all $\mathbb{C}$-linear maps $A\colon\mathbb{C}^d\to\mathbb{C}^d$, the map $\mathcal{R}A\mathcal{R}^{-1}\colon\mathbb{R}^{2d}\to\mathbb{R}^{2d}$ is linear, and
\[
[\mathcal{R}A\mathcal{R}^{-1}]_{B'B'} = [\mathcal{R}]_{B'B}[A]_{BB}[\mathcal{R}^{-1}]_{BB'} = I_{2d}\cdot H(A)\cdot I_{2d}^{-1} = H(A).
\]
Notice also that for all $A\in M_d(\mathbb{C})$, we have $H(A)^{T} = H(\bar{A}^{T}) = H(A^{*})$, where $\bar{A}$ denotes the complex conjugate of $A$, $A^{T}$ denotes the transpose of $A$, and $A^{*}$ denotes the adjoint of $A$. Therefore, if $A\in M_d(\mathbb{C})$ is Hermitian, then $H(A)$ is symmetric. Let now $A\in PS_d(\mathbb{C})$. Then, we can write $A = UDU^{*}$, where $U$ is a complex $d\times d$-unitary matrix and $D$ is a $d\times d$-diagonal matrix with real non-negative entries. Then, $H(D)$ is a $(2d)\times(2d)$-diagonal matrix with real non-negative entries. Moreover, we have $H(U)^{T}H(U) = H(U^{*}U) = H(I_d) = I_{2d}$, so $H(U)$ is a $(2d)\times(2d)$-orthogonal matrix. Thus $H(A) = H(U)H(D)H(U)^{T}\in PS_{2d}(\mathbb{R})$.

9.2.7. Complex John ellipsoids.
Let $C_d(\mathbb{C})$ be the set of all compact convex subsets $K$ of $\mathbb{C}^d$ such that $0\in\operatorname{Int}(K)$ and
\[
zK := \{zv \colon v\in K\}\subseteq K,\qquad\text{for all } z\in\mathbb{T} := \{w\in\mathbb{C} \colon |w|=1\}.
\]
Let $K\in C_d(\mathbb{C})$. Clearly, $\widetilde{K} := \mathcal{R}(K)\in C_{2d}(\mathbb{R})$. Let $\widetilde{E}$ be the John ellipsoid of $\widetilde{K}$ in $\mathbb{R}^{2d}$. Set $E := \mathcal{R}^{-1}(\widetilde{E})$. We claim that $E$ is an ellipsoid in $\mathbb{C}^d$. Set $\widetilde{A} := M_{\mathbb{R},2d}(\widetilde{E})$ and $A := \mathcal{R}^{-1}\widetilde{A}\mathcal{R}$. It is obvious that $A\colon\mathbb{C}^d\to\mathbb{C}^d$ is an $\mathbb{R}$-linear map. Moreover, we have
\[
E = \mathcal{R}^{-1}(\widetilde{E}) = \mathcal{R}^{-1}\widetilde{A}\,B_{2d}(\mathbb{R}) = \mathcal{R}^{-1}\widetilde{A}\mathcal{R}\,B_d(\mathbb{C}) = A\,B_d(\mathbb{C}).
\]
Therefore, it suffices to prove that $A\colon\mathbb{C}^d\to\mathbb{C}^d$ is $\mathbb{C}$-linear. For every $z\in\mathbb{T}$, consider the $\mathbb{C}$-linear map $L_z\colon\mathbb{C}^d\to\mathbb{C}^d$ given by $L_z(v) := zv$, for all $v\in\mathbb{C}^d$. Then it is easy to see that $\mathcal{R}L_z = \widetilde{L}_z\mathcal{R}$, where $\widetilde{L}_z := H(L_z)\colon\mathbb{R}^{2d}\to\mathbb{R}^{2d}$ is the block-diagonal matrix with $d$ diagonal blocks
\[
\begin{bmatrix}\operatorname{Re}(z) & -\operatorname{Im}(z)\\ \operatorname{Im}(z) & \operatorname{Re}(z)\end{bmatrix},
\]
for all $z\in\mathbb{T}$. Clearly, for all $z\in\mathbb{T}$, $\widetilde{L}_z\colon\mathbb{R}^{2d}\to\mathbb{R}^{2d}$ is an orthogonal map, because $|z|=1$. Therefore, for all $z\in\mathbb{T}$, $\widetilde{L}_z(\widetilde{E})$ is the John ellipsoid in $\mathbb{R}^{2d}$ of $\widetilde{L}_z(\widetilde{K})$. Notice that by assumption
\[
\widetilde{L}_z(\widetilde{K}) = \widetilde{L}_z\mathcal{R}(K) = \mathcal{R}L_z(K) = \mathcal{R}(K) = \widetilde{K},\qquad\forall z\in\mathbb{T}.
\]
Therefore, by uniqueness of the John ellipsoid we deduce $\widetilde{L}_z(\widetilde{E}) = \widetilde{E}$, for all $z\in\mathbb{T}$. Notice that $\widetilde{L}_z^{T} = \widetilde{L}_{\bar{z}} = \widetilde{L}_z^{-1}$, for all $z\in\mathbb{T}$. Then, for all $z\in\mathbb{T}$, $\widetilde{L}_z\widetilde{A}(\widetilde{L}_z)^{T}\in PS_{2d}(\mathbb{R})$ (because $\widetilde{A}\in PS_{2d}(\mathbb{R})$) and
\[
\widetilde{L}_z\widetilde{A}(\widetilde{L}_z)^{T}B_{2d}(\mathbb{R}) = \widetilde{L}_z\widetilde{A}\widetilde{L}_{\bar{z}}B_{2d}(\mathbb{R}) = \widetilde{L}_z\widetilde{A}\,B_{2d}(\mathbb{R}) = \widetilde{L}_z\widetilde{E} = \widetilde{E}.
\]
By uniqueness of $\widetilde{A}$ we deduce $\widetilde{L}_z\widetilde{A}\widetilde{L}_{\bar{z}} = \widetilde{A}$, therefore $\widetilde{L}_z\widetilde{A} = \widetilde{A}\widetilde{L}_z$, for all $z\in\mathbb{T}$. Thus
\[
AL_z = \mathcal{R}^{-1}\widetilde{A}\mathcal{R}L_z = \mathcal{R}^{-1}\widetilde{A}\widetilde{L}_z\mathcal{R} = \mathcal{R}^{-1}\widetilde{L}_z\widetilde{A}\mathcal{R} = L_z\mathcal{R}^{-1}\widetilde{A}\mathcal{R} = L_zA,\qquad\forall z\in\mathbb{T},
\]
proving that $A\colon\mathbb{C}^d\to\mathbb{C}^d$ is $\mathbb{C}$-linear (since $A\colon\mathbb{C}^d\to\mathbb{C}^d$ is already known to be $\mathbb{R}$-linear). In the sequel, we will say that $E$ is the complex John ellipsoid of $K$. Let $J_{\mathbb{C},d}\colon C_d(\mathbb{C})\to E^{*}_d(\mathbb{C})$ be the map sending each $K\in C_d(\mathbb{C})$ to its complex John ellipsoid.
Since $\mathcal{R}$ is an isometry, we have $\delta(E,F) = \delta(\mathcal{R}(E),\mathcal{R}(F))$, for all nonempty compact subsets $E,F$ of $\mathbb{C}^d$. Therefore, since $J_{2d}\colon C_{2d}(\mathbb{R})\to E^{*}_{2d}(\mathbb{R})$ is continuous with respect to the Hausdorff distance, it follows that $J_{\mathbb{C},d}\colon C_d(\mathbb{C})\to E^{*}_d(\mathbb{C})$ is also continuous with respect to the Hausdorff distance.

9.2.8. From ellipsoids in $\mathbb{C}^d$ to ellipsoids in $\mathbb{R}^{2d}$. Let $E\in E_d(\mathbb{C})$, and let $A := M_{\mathbb{C},d}(E)$, so $E = A\,B_d(\mathbb{C})$. Set $\widetilde{E} := \mathcal{R}(E)$. Then
\[
\widetilde{E} = \mathcal{R}A\mathcal{R}^{-1}B_{2d}(\mathbb{R}),
\]
so since $\mathcal{R}\colon\mathbb{C}^d\to\mathbb{R}^{2d}$ is an $\mathbb{R}$-linear isometric isomorphism, it follows that $\widetilde{E}$ is an ellipsoid in $\mathbb{R}^{2d}$. Since $A\in PS_d(\mathbb{C})$, by the above we have $[\mathcal{R}A\mathcal{R}^{-1}]_{B'B'} = H(A)\in PS_{2d}(\mathbb{R})$. Thus
\[
M_{\mathbb{R},2d}(\widetilde{E}) = H(A) = H(M_{\mathbb{C},d}(E)).
\]
This implies that $M_{\mathbb{C},d} = H^{-1}\circ M_{\mathbb{R},2d}\circ X(\mathcal{R})|_{E_d(\mathbb{C})}$. Since $H\colon M_d(\mathbb{C})\to M_{2d}(\mathbb{R})$ is a homeomorphism onto its image, $X(\mathcal{R})\colon X(\mathbb{C}^d)\to X(\mathbb{R}^{2d})$ is an isometric isomorphism with respect to the Hausdorff distance, and $M_{\mathbb{R},2d}\colon E_{2d}(\mathbb{R})\to PS_{2d}(\mathbb{R})$ is continuous, it follows that $M_{\mathbb{C},d}\colon E_d(\mathbb{C})\to PS_d(\mathbb{C})$ is continuous. In fact, since $M_{\mathbb{R},2d}$ is a homeomorphism, it follows that $M_{\mathbb{C},d}\colon E_d(\mathbb{C})\to PS_d(\mathbb{C})$ is also a homeomorphism.

9.2.9. John ellipsoids for unit balls of norms on $\mathbb{C}^d$. Let $r$ be any norm on $\mathbb{C}^d$. Let $K := \{v\in\mathbb{C}^d \colon r(v)\le 1\}$ be the unit ball of $r$. Clearly, $K\in C_d(\mathbb{C})$. Let $E$ be the complex John ellipsoid of $K$. Set $A := M_{\mathbb{C},d}(E)$. Then, (9.1) implies that
\[
\mathcal{R}(E)\subseteq\mathcal{R}(K)\subseteq\sqrt{2d}\,\mathcal{R}(E),\qquad\text{therefore}\qquad E\subseteq K\subseteq\sqrt{2d}\,E.
\]
In fact, it follows from the above that $E$ is a maximum volume centrally symmetric nondegenerate complex ellipsoid contained in $K$, therefore by the proof of [10, Proposition 1.2] we have
\[
E\subseteq K\subseteq\sqrt{d}\,E.
\]
Thus, for all $e\in\mathbb{C}^d$, we have $|e|\le 1\Rightarrow r(Ae)\le 1$ and $r(e)\le 1\Rightarrow|A^{-1}e|\le\sqrt{d}$. It follows that $r(e)\le|A^{-1}e|\le\sqrt{d}\,r(e)$, for all $e\in\mathbb{C}^d$.

9.2.10.
Measurable choice of reducing matrix for measurably parametrized norms on $\mathbb{C}^d$. Let $(X,\mathcal{F})$ be any measurable space. Let $r\colon X\times\mathbb{C}^d\to[0,\infty)$ be a function such that $r(x,\cdot)$ is a norm on $\mathbb{C}^d$, for all $x\in X$, and $r(\cdot,y)$ is a measurable function on $X$, for all $y\in\mathbb{C}^d$. For all $x\in X$, we let $K_x$ be the unit ball of $r(x,\cdot)$, $E_x$ be the complex John ellipsoid of $K_x$, and we set $A_x := M_{\mathbb{C},d}(E_x)$.

Proposition 9.8. The map $\mathcal{R}\colon X\to P_d(\mathbb{C})$ given by $\mathcal{R}(x) := A_x$, for all $x\in X$, is measurable.

Consider the map $\mathcal{K}\colon X\to C_d(\mathbb{C})$ given by $\mathcal{K}(x) := K_x$, for all $x\in X$. Then, it is clear that $\mathcal{R} = M_{\mathbb{C},d}\circ J_{\mathbb{C},d}\circ\mathcal{K}$. We already saw above that $M_{\mathbb{C},d}|_{E^{*}_d(\mathbb{C})}\colon E^{*}_d(\mathbb{C})\to P_d(\mathbb{C})$ and $J_{\mathbb{C},d}\colon C_d(\mathbb{C})\to E^{*}_d(\mathbb{C})$ are continuous, therefore measurable. Moreover, similarly to the real case we have that $\mathcal{K}\colon X\to C_d(\mathbb{C})$ is measurable, yielding the desired result.

References

[1] E. Aamari,
Hausdorff Distance and Measurability , Lecture notes 2 for thecourse
Geometric Inference (Sorbonne U., M2 Stats & M2A), available at [2] A. Barron,
Sparse Bounds in Harmonic Analysis and Semiperiodic Estimates, PhD Thesis, Brown University (May 2019). [3] A. Barron, J. Pipher,
Sparse Domination for Bi-Parameter Operators Using Square Functions ,preprint, available at arXiv:1709.05009[4] D. Cruz-Uribe, J. Isralowitz, K. Moen,
Two-weight bump conditions for matrix weights , Int.Equat. and Oper. Theory , 90 (2018) [5] M. Christ, M. Goldberg,
Vector A_2 Weights and a Hardy–Littlewood Maximal Function, Trans. Amer. Math. Soc.
353 (2001), 1995 – 2002.[6] R. Fefferman, E. Stein,
Singular integrals on product spaces , Adv. Math. , 45 (1982), no. 2, 117 –143.[7] R. Fefferman,
Harmonic Analysis on Product Spaces , Ann. of Math. (2)
126 (1987), no. 1, 109 –130.[8] R. Fefferman, A p weights and singular integrals , Amer. J. Math.
110 (1988), no. 5, 975 – 987.[9] J.-L. Goffin, A. J. Hoffman,
On the Relationship between the Hausdorff Distance and MatrixDistances of Ellipsoids , Lin. Alg. and its Appl. , vol. 52–53, 301–313 (1983)[10] M. Goldberg,
Matrix A p weights via maximal functions , Pac. J. of Math. , 211 (2003), no. 2, 201– 220[11] L. Grafakos,
Classical Fourier Analysis , Third Edition, Springer (2014)[12] P. M. Gruber,
Convex and Discrete Geometry, Springer, 2007. [13] T. Hänninen,
Equivalence of sparse and Carleson coefficients for general sets , Ark. Mat. , 56(2018), 333 – 339.[14] I. Holmes, M. T. Lacey, B. Wick,
Commutators in the two-weight setting , Math. Ann. , 367 (2017),51 – 80.[15] I. Holmes, S. Petermichl, B. Wick,
Weighted Little BMO and two-weight inequalities for Journé commutators, Anal. and PDE, 11 (2018), no. 7, 1693 – 1740. [16] O. Mordhorst,
New results on affine invariant points , Israel J. of Math. , 219, 529–548 (2017)[17] S. Hukovic, S. Treil, A. Volberg,
The Bellman functions and sharp weighted inequalities for square functions, in Complex analysis, operators, and related topics, Oper. Theory Adv. Appl., 113 (2000), 97 – 113. [18] T. Hytönen, The sharp weighted bound for general Calderón–Zygmund operators, Ann. of Math., 175 (2012), no. 3, 1473 – 1506. [19] T. Hytönen, C. Pérez, E. Rela, Sharp reverse Hölder properties for A_∞ weights on spaces of homogeneous type, J. of Funct. Anal., 263 (2012), no. 12, 3883 – 3899. [20] T. Hytönen, S. Petermichl, A. Volberg, The Sharp Square Function Estimate with Matrix Weights, Discr. Anal., (2019), no. 2, 1 – 8. [21] T. Hytönen, J. van Neerven, M. Veraar, L. Weis, Analysis in Banach spaces, Volume II: Probabilistic Methods and Operator Theory, Springer (2017). [22] J. Isralowitz,
Matrix weighted Triebel–Lizorkin Bounds: A simple stopping time proof , preprint,available at arXiv:1507.06700[23] J. Isralowitz,
Boundedness of commutators and H -BMO duality in the two-weight matrixweighted setting , Int. Eq. and Oper. Th. , 89 (2017), 257 – 287.[24] J. Isralowitz,
Sharp Matrix Weighted Weak and Strong Type Inequalities for the Dyadic SquareFunction , Pot. Anal. , 53 (2020), 1529 – 1540.[25] J. Isralowitz, H.-K. Kwon, S. Pott,
Matrix weighted norm inequalities for commutators and para-products with matrix symbols , J. of London Math. Soc. , 96 (2017), no. 1, 243 – 270.[26] J. Isralowitz, K. Moen,
Matrix weighted Poincar´e inequalities and applications to degenerateelliptic systems , Indiana U. Math. J. , 68 (2019), no. 5, 1327 – 1377.[27] J. Isralowitz, S. Pott, S. Treil,
Commutators in the two scalar and matrix weighted setting ,preprint, available at arXiv:2001.11182[28] J.-L. Journ´e,
Calder´on–Zygmund operators on product spaces , Rev. Math. Iber. , 3 (1985), no. 1,55 – 91.[29] A. K. Lerner, F. Nazarov,
Intuitive dyadic calculus: the basics , preprint, available atarXiv:1508.05639[30] Henri Martikainen,
Representation of bi-parameter singular integrals by dyadic operators , Adv.Math.
229 (2012), no. 3, 1734 – 1761.[31] F. Nazarov, S. Petermichl, S. Treil, A. Volberg,
Convex Body Domination and weighted Estimateswith Matrix Weights , Adv. in Math. , 318 (2017), 279 – 306.
OUNDEDNESS OF JOURN´E OPERATORS WITH MATRIX WEIGHTS 67 [32] F. Nazarov, S. Treil,
The hunt for a Bellman function: applications to estimates for singularintegral operators and to other classical problems of harmonic analysis , Algebra i Analiz , 8 (1996),no. 5, 32 – 162.[33] F. Nazarov, S. Treil, A. Volberg,
The Tb-theorem on non-homogeneous spaces , textitActa Math.,vol. 190, n. 2, 151–239 (2003)[34] Y. Ou,
Multi-parameter singular integral operators and representation theorem , Rev. Mat.Iberoam. , 33 (2017), no. 1, 325 – 350.[35] S. Petermichl,
Dyadic Shifts and a Logarithmic Estimate for Hankel Operators with Matrix Sym-bol , Comptes Rendus Acad. Sci. Paris , 1 (2000), no. 1, 455 – 460.[36] S. Petermichl,
The sharp bound for the Hilbert transform on weighted Lebesgue spaces in termsof the classical A p characteristic , Amer. J. Math. , 129 (2007), no. 5, 1355 – 1375.[37] S. Treil,
Mixed A − A ∞ estimates of the nonhomogeneous vector square function with matrixweights , preprint, available at arXiv:1705.08854[38] S. Treil, A. Volberg, Wavelets and the Angle Between Past and Future , J. of Funct. Anal. , 143(1997), no. 2, 269 – 308.[39] V. Vasyunin,
The sharp constant in the reverse H¨older inequality for Muckenhoupt weights , St.Petersburg Math. J. , 15 (2004), 49 – 79.[40] A. Volberg,