Well-Localized Operators on Matrix Weighted $L^2$ Spaces
arXiv [math.CA]
KELLY BICKEL† AND BRETT D. WICK‡

Abstract.
Nazarov-Treil-Volberg recently proved an elegant two-weight $T1$ theorem for "almost diagonal" operators that played a key role in the proof of the $A_2$ conjecture for dyadic shifts and related operators. In this paper, we obtain a generalization of their $T1$ theorem to the setting of matrix weights. Our theorem does differ slightly from the scalar results, a fact attributable almost completely to differences between the scalar and matrix Carleson Embedding Theorems. The main tools include a reduction to the study of well-localized operators, a new system of Haar functions adapted to matrix weights, and a matrix Carleson Embedding Theorem.

1. Introduction
In this paper, the dimension $d$ is fixed and $L^2$ will denote $L^2(\mathbb{R}, \mathbb{C}^d)$, namely the set of vector-valued functions satisfying
\[ \|f\|^2_{L^2} \equiv \int_{\mathbb{R}} \|f(x)\|^2_{\mathbb{C}^d}\,dx < \infty. \]
We will be primarily interested in matrix weights, $d \times d$ positive definite matrix-valued functions with locally integrable entries. Given such a weight $W$, let $L^2(W)$ be the set of functions satisfying
\[ \|f\|^2_{L^2(W)} \equiv \int_{\mathbb{R}} \big\| W(x)^{1/2} f(x) \big\|^2_{\mathbb{C}^d}\,dx = \int_{\mathbb{R}} \langle W(x) f(x), f(x) \rangle_{\mathbb{C}^d}\,dx < \infty. \]
Given matrix weights $V$ and $W$, a natural question is: when does a bounded operator $T$ mapping $L^2$ to itself extend to a bounded operator mapping $L^2(W)$ to $L^2(V)$, and what is the norm of $T$ as a map from $L^2(W)$ to $L^2(V)$?

If we consider the special one-dimensional case when $V = W = w$, this question has a classical answer. Indeed, a Calderón-Zygmund operator $T$ extends to a bounded operator on $L^2(w)$ if and only if $w$ is an $A_2$ Muckenhoupt weight, namely:
\[ [w]_{A_2} \equiv \sup_I \langle w \rangle_I \langle w^{-1} \rangle_I < \infty, \]
where the supremum is taken over all intervals $I$ and $\langle w \rangle_I \equiv \frac{1}{|I|} \int_I w(x)\,dx$. In contrast, the question of the operator norm of $T$ on $L^2(w)$, and its sharp dependence on $[w]_{A_2}$, called the $A_2$ conjecture, remained open for decades. Lacey-Petermichl-Reguera made substantial progress on this question in [8] by establishing the sharp bound for dyadic shifts and, as a corollary, obtained new proofs of the bound for simple Calderón-Zygmund operators including the Hilbert transform, Riesz transforms, and Beurling transform. Their proof rested on an

Date: July 31, 2018.
† Research supported in part by National Science Foundation DMS grants.
‡ Research supported in part by National Science Foundation DMS grant.

elegant two-weight $T1$ theorem due to Nazarov-Treil-Volberg [11] coupled with technical testing estimates. Using a refined method of decomposing Calderón-Zygmund operators as sums of dyadic shifts and an improvement of the Lacey-Petermichl-Reguera estimates, Hytönen resolved the $A_2$ conjecture in 2012 in [4] and showed
\[ \| T \|_{L^2(w) \to L^2(w)} \lesssim [w]_{A_2} \]
for all Calderón-Zygmund operators $T$.

We are interested in the analogue of the $A_2$ conjecture in the setting of matrix weights. However, due to complications arising in the matrix case, the current literature is less developed. Still, the boundedness of Calderón-Zygmund operators is known. In 1997, Treil-Volberg showed in [14] that the Hilbert transform $H$ extends to a bounded operator on $L^2(W)$ if and only if $W$ is an $A_2$ matrix weight, i.e. if and only if
\[ [W]_{A_2} \equiv \sup_I \big\| \langle W \rangle_I^{1/2} \langle W^{-1} \rangle_I^{1/2} \big\|^2 < \infty, \]
where $\| \cdot \|$ denotes the norm of the matrix acting on $\mathbb{C}^d$. Soon after, Nazarov-Treil [12] extended this result to general (classical) Calderón-Zygmund operators and, in the interim, the study of operators on matrix-weighted spaces has received a great deal of attention; see [2, 3, 5, 9, 10, 15]. However, the question of the sharp dependence on $[W]_{A_2}$ is still open and this seems to be a very difficult problem. In [1], the two authors with S. Petermichl showed that
\[ \| H \|_{L^2(W) \to L^2(W)} \lesssim [W]_{A_2}^{3/2} \log [W]_{A_2} \]
for all $A_2$ matrix weights $W$, but this bound is unlikely to be sharp. Rather, a proof yielding a sharp estimate would likely follow, as in the scalar case, from the combination of (1) a sharp $T1$ theorem and (2) appropriate testing estimates.
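Before turning to the main results, it may help to see the $A_2$ quantities concretely. The following Python sketch is our own illustration, not part of the paper's argument; the sample weight, grid size, and dyadic depth are arbitrary choices. It approximates the matrix $A_2$ characteristic of a $2 \times 2$ weight by taking the supremum over dyadic subintervals of $[0,1)$:

```python
import numpy as np

def sqrtm_psd(M):
    """Positive square root of a symmetric PSD matrix via eigendecomposition."""
    lam, U = np.linalg.eigh(M)
    return (U * np.sqrt(np.clip(lam, 0.0, None))) @ U.T

def matrix_a2(samples, levels=4):
    """Approximate [W]_{A2} = sup_I || <W>_I^{1/2} <W^{-1}>_I^{1/2} ||^2,
    the supremum over dyadic subintervals of [0,1); `samples` holds values
    of W at the midpoints of a uniform grid."""
    inv = np.array([np.linalg.inv(S) for S in samples])
    n, best = len(samples), 0.0
    for lev in range(levels + 1):
        size = n // 2 ** lev
        for k in range(2 ** lev):
            block = slice(k * size, (k + 1) * size)
            A = sqrtm_psd(samples[block].mean(axis=0))   # <W>_I^{1/2}
            B = sqrtm_psd(inv[block].mean(axis=0))       # <W^{-1}>_I^{1/2}
            best = max(best, np.linalg.norm(A @ B, 2) ** 2)
    return best

# sample weight: a rotating frame applied to diag(1, 4); it is uniformly
# elliptic, so the characteristic is finite, and it is always >= 1
def W(x):
    c, s = np.cos(np.pi * x), np.sin(np.pi * x)
    R = np.array([[c, -s], [s, c]])
    return R @ np.diag([1.0, 4.0]) @ R.T

xs = (np.arange(64) + 0.5) / 64
print(matrix_a2(np.array([W(x) for x in xs])))
```

For $d = 1$ the same routine reduces to $\sup_I \langle w \rangle_I \langle w^{-1} \rangle_I$, the scalar Muckenhoupt condition above; the matrix Jensen inequality guarantees the computed value is at least $1$.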
The goal of this paper is to establish the $T1$ theorem and, specifically, to obtain matrix generalizations of the two-weight $T1$ theorems of Nazarov-Treil-Volberg from [11] about "almost diagonal" operators including Haar multipliers and dyadic shifts. These generalizations are interesting in their own right because they give two-weight results for all pairs of matrix $A_2$ weights, which is a new development. It seems possible that, as in the scalar case, these $T1$ theorems will prove a robust tool for studying the dependence of operator norms on the $A_2$ characteristic. Before discussing the main results in more detail, we require several definitions.

1.1. The Main Results.
Throughout the paper, $\mathcal{D}$ denotes the standard dyadic grid on $\mathbb{R}$ and $A \lesssim B$ means $A \le C(d) B$, where $C(d)$ is an (absolute) dimensional constant. For $I \in \mathcal{D}$, let $h_I$ be the standard Haar function defined by
\[ h_I \equiv |I|^{-1/2} \big( \mathbb{1}_{I_+} - \mathbb{1}_{I_-} \big), \]
where $I_+$ is the right half of $I$ and $I_-$ is the left half of $I$. To the dyadic grid $\mathcal{D}$, associate the unique binary tree where each $I$ is connected to its two children $I_-$ and $I_+$. Given that dyadic tree, let $d_{tree}(I,J)$ denote the "tree distance" between $I$ and $J$, namely, the number of edges on the shortest path connecting $I$ and $J$. The "almost diagonal" operators of interest possess a band structure defined as follows:

Definition 1.1.
A bounded operator $T$ on $L^2$ is called a band operator with radius $r$ if $T$ satisfies
\[ \langle T h_I e, h_J v \rangle_{L^2} = 0 \]
for all intervals
$I, J \in \mathcal{D}$ with $d_{tree}(I,J) > r$ and vectors $e, v \in \mathbb{C}^d$.

Given a matrix weight $W$ and interval $I$ in $\mathcal{D}$, define the matrices:
\[ W(I) \equiv \int_I W(x)\,dx \quad \text{and} \quad \langle W \rangle_I \equiv \frac{1}{|I|} \int_I W(x)\,dx = \frac{W(I)}{|I|}. \]
In this paper, we will only consider weights $W$ with the property of being an $A_2$ weight, and without loss of generality, we can focus on the question of when a band operator $T$ extends to a bounded operator from $L^2(W^{-1})$ to $L^2(V)$ with norm $C$ for matrix weights $V, W$.
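As an aside, the tree distance appearing in Definition 1.1 is easy to compute if one encodes the dyadic interval $[k 2^{-n}, (k+1) 2^{-n})$ by the pair $(n, k)$. The short sketch below is our own illustration; the encoding is an assumption, not notation from the paper:

```python
def d_tree(I, J):
    """Number of edges on the shortest path between dyadic intervals in the
    binary tree, where (n, k) encodes [k 2^{-n}, (k+1) 2^{-n})."""
    dist = 0
    while I != J:
        # ascend whichever interval is deeper (ties: ascend I); two distinct
        # intervals at equal depth are neither's ancestor, so both must move up
        if I[0] >= J[0]:
            I = (I[0] - 1, I[1] // 2)
        else:
            J = (J[0] - 1, J[1] // 2)
        dist += 1
    return dist

# a band operator with radius r ignores pairs of intervals with d_tree > r:
print(d_tree((1, 0), (1, 1)))  # siblings: distance 2
print(d_tree((2, 0), (2, 3)))  # distance 4, through the root of [0, 1)
```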
It is not hard to show that $T$ extends to such a bounded operator from $L^2(W^{-1})$ to $L^2(V)$ with norm $C$ precisely when
\[ \big\| M_{V^{1/2}} T M_{W^{1/2}} \big\|_{L^2 \to L^2} = C, \]
where $M_U$ denotes multiplication by the matrix-valued function $U$. The main results of this paper are then the following theorems.
Theorem 1.2.
Let $W, V$ be matrix $A_2$ weights and let $T$ be a band operator with radius $r$. Then $M_{V^{1/2}} T M_{W^{1/2}}$ extends to a bounded operator on $L^2$ if and only if
\[ \| T W \mathbb{1}_I e \|_{L^2(V)} \le A\, \langle W(I) e, e \rangle_{\mathbb{C}^d}^{1/2} \quad (1) \]
\[ \| T^* V \mathbb{1}_I e \|_{L^2(W)} \le A\, \langle V(I) e, e \rangle_{\mathbb{C}^d}^{1/2} \quad (2) \]
for all intervals $I \in \mathcal{D}$ and vectors $e \in \mathbb{C}^d$. Furthermore,
\[ \big\| M_{V^{1/2}} T M_{W^{1/2}} \big\|_{L^2 \to L^2} \le r\, C(d) \big( A\, B(W) + A\, B(V) \big), \]
where $C(d)$ is a dimensional constant and $B(W)$ and $B(V)$ are constants depending on $W$ and $V$ from an application of the matrix Carleson Embedding Theorem.

The definitions of the constants $B(W)$ and $B(V)$ are given in Theorem 3.4, the matrix Carleson Embedding Theorem used in this paper, and discussed further in Remark 3.5. As in [11], the conditions of Theorem 1.2 can be relaxed slightly to yield the following result:

Theorem 1.3.
Let $W, V$ be matrix $A_2$ weights and let $T$ be a band operator with radius $r$. Then $M_{V^{1/2}} T M_{W^{1/2}}$ extends to a bounded operator on $L^2$ if and only if the following two conditions hold:

(i) For all intervals $I \in \mathcal{D}$ and vectors $e \in \mathbb{C}^d$,
\[ \| \mathbb{1}_I T W \mathbb{1}_I e \|_{L^2(V)} \le A\, \langle W(I) e, e \rangle_{\mathbb{C}^d}^{1/2}, \]
\[ \| \mathbb{1}_I T^* V \mathbb{1}_I e \|_{L^2(W)} \le A\, \langle V(I) e, e \rangle_{\mathbb{C}^d}^{1/2}. \]

(ii) For all intervals $I, J \in \mathcal{D}$ satisfying $2^{-r} |I| \le |J| \le 2^r |I|$ and vectors $e, \nu \in \mathbb{C}^d$,
\[ \big| \langle T W \mathbb{1}_I e, \mathbb{1}_J \nu \rangle_{L^2(V)} \big| \le A\, \langle W(I) e, e \rangle_{\mathbb{C}^d}^{1/2} \langle V(J) \nu, \nu \rangle_{\mathbb{C}^d}^{1/2}. \]

Furthermore,
\[ \big\| M_{V^{1/2}} T M_{W^{1/2}} \big\|_{L^2 \to L^2} \le r\, C(d) \big( A\, B(W) + A\, B(V) + A \big), \]
where $C(d)$ is a dimensional constant and $B(W)$ and $B(V)$ are constants depending on $W$ and $V$ from an application of the matrix Carleson Embedding Theorem.
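The necessity direction of these testing conditions can be seen in a small discrete model. The Python sketch below is our own toy setup, not the paper's construction: $[0,8)$ split into unit cells, piecewise-constant matrix weights, and a random matrix standing in for $T$. In this model, the quantity in testing condition (1) is dominated by the operator norm of $M_{V^{1/2}} T M_{W^{1/2}}$, mirroring the easy direction of Theorem 1.2:

```python
import numpy as np

rng = np.random.default_rng(7)
n, d = 8, 2                              # 8 unit cells on [0, 8), C^2-valued data

def rand_spd():
    G = rng.standard_normal((d, d))
    return G @ G.T + np.eye(d)

def sqrtm(M):
    lam, U = np.linalg.eigh(M)
    return (U * np.sqrt(lam)) @ U.T

def block_diag(mats):
    out = np.zeros((n * d, n * d))
    for i, M in enumerate(mats):
        out[i * d:(i + 1) * d, i * d:(i + 1) * d] = M
    return out

W = [rand_spd() for _ in range(n)]       # piecewise-constant matrix weights
V = [rand_spd() for _ in range(n)]
T = rng.standard_normal((n * d, n * d))  # generic stand-in for the operator

C = np.linalg.norm(block_diag([sqrtm(Vi) for Vi in V]) @ T
                   @ block_diag([sqrtm(Wi) for Wi in W]), 2)

# testing condition (1) on the "interval" I = [0, 4) in a random direction e
e = rng.standard_normal(d)
f = np.zeros((n, d))
for i in range(4):
    f[i] = W[i] @ e                      # f = W 1_I e, cell by cell
Tf = (T @ f.ravel()).reshape(n, d)
lhs = np.sqrt(sum(g @ V[i] @ g for i, g in enumerate(Tf)))   # ||T W 1_I e||_{L^2(V)}
rhs = np.sqrt(sum(e @ W[i] @ e for i in range(4)))           # <W(I)e, e>^{1/2}
assert lhs <= C * rhs + 1e-9             # testing constant is at most the norm
```

The inequality holds because $W \mathbb{1}_I e = M_{W^{1/2}} (W^{1/2} \mathbb{1}_I e)$ and $\| W^{1/2} \mathbb{1}_I e \|_{L^2}^2 = \langle W(I) e, e \rangle$, so the testing quantity is the image of a vector of norm $\langle W(I) e, e \rangle^{1/2}$ under $M_{V^{1/2}} T M_{W^{1/2}}$.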
Remark 1.4.
An observant reader and expert in the area will notice that Theorems 1.2 and 1.3 are strictly weaker than the results of Nazarov-Treil-Volberg [11] in two respects. First, our results are only proved for pairs $V, W$ of matrix $A_2$ weights and second, they introduce additional constants $B(V)$ and $B(W)$ in the norm estimates, which do not come from the testing conditions.

However, it is worth pointing out that both of these shortcomings are the direct result of differences between the scalar Carleson Embedding Theorem and the current matrix Carleson Embedding Theorem. In the scalar case, the Carleson Embedding Theorem holds for all weights and the embedding constant is an absolute multiple of the constant obtained from the testing condition. In the matrix case, the current Carleson Embedding Theorem, Theorem 3.4, is only known for matrix $A_2$ weights and the embedding constant is the testing constant times an additional constant $B(W)$, depending upon the weight $W$.

A careful reading of our paper reveals that, if one can improve the underlying matrix Carleson Embedding Theorem in these two respects, then our arguments will give $T1$ theorems with sharp constants that hold for all pairs of matrix weights. It then seems likely that these results could be used as a tool to approach the matrix $A_2$ conjecture, at least in the setting of dyadic shifts and related operators. Indeed, the authors recently learned that Amalia Culiuc and Sergei Treil have obtained an improved Carleson Embedding Theorem for arbitrary matrix weights in the more general non-homogeneous setting. The two authors with Culiuc and Treil are currently investigating the behavior of well-localized operators in this more general setting.

It is also worth observing that related and interesting results are obtained by R. Kerr in [6, 7]. He shows that band operators on $L^2$ will be bounded from $L^2(W)$ to $L^2(V)$ if the matrix weights $V$ and $W$ are both in the matrix analogue of $A_\infty$ (denoted $A_{2,\infty}$) and satisfy a joint $A_2$ condition.

Remark 1.5.
If the entries of $W, V$ are not locally square-integrable, i.e. not in $L^2_{loc}(\mathbb{R})$, one needs to be a little careful about interpreting the expressions on the left-hand sides of (1) and (2) and the analogous expressions in Theorem 1.3. This technicality can be handled in a way similar to that found in [11]. Indeed, observe that if $W, W'$ are matrix weights satisfying $W' \le W$, then
\[ \big\| M_{W'^{1/2}} T^* M_{V^{1/2}} \big\|_{L^2 \to L^2} \le \big\| M_{W^{1/2}} T^* M_{V^{1/2}} \big\|_{L^2 \to L^2}, \]
and taking adjoints gives
\[ \big\| M_{V^{1/2}} T M_{W'^{1/2}} \big\|_{L^2 \to L^2} \le \big\| M_{V^{1/2}} T M_{W^{1/2}} \big\|_{L^2 \to L^2}. \]
Now, to interpret the first necessary condition appropriately, let $\{W_n\}$ be a sequence of matrix weights with entries in $L^2_{loc}(\mathbb{R})$ increasing to $W$. Then, the boundedness of $M_{V^{1/2}} T M_{W^{1/2}}$ implies that
\[ \| T W_n \mathbb{1}_I e \|_{L^2(V)} \le C < \infty \]
for some constant $C$ uniformly in $n$. It is not difficult to show that this implies $\{ M_{V^{1/2}} T W_n \mathbb{1}_I e \}$ has a limit in $L^2$, which is independent of the sequence $\{W_n\}$ chosen. So, there is no ambiguity in calling this limit function $V^{1/2} T W \mathbb{1}_I e$ and interpreting the left-hand side of (1) as its $L^2$ norm. The dual expressions are interpreted analogously. We can similarly interpret the term in (ii) from Theorem 1.3 as the inner product between $V^{1/2} T W \mathbb{1}_I e$ and $V^{1/2} \mathbb{1}_J \nu$ in $L^2$.
To interpret the sufficient condition, fix any sequences $\{W_n\}$ and $\{V_n\}$ with entries in $L^2_{loc}(\mathbb{R})$ increasing to $W$ and $V$ respectively. Conditions (1) and (2) can be interpreted as the estimates
\[ \| T W_n \mathbb{1}_I e \|_{L^2(V_n)} \le A\, \langle W_n(I) e, e \rangle_{\mathbb{C}^d}^{1/2}, \]
\[ \| T^* V_n \mathbb{1}_I e \|_{L^2(W_n)} \le A\, \langle V_n(I) e, e \rangle_{\mathbb{C}^d}^{1/2}, \]
which are uniform in $n$, $e$, and $I$. Then Theorem 1.2 gives the bound for $\| M_{V_n^{1/2}} T M_{W_n^{1/2}} \|_{L^2 \to L^2}$, which implies the desired bound for $\| M_{V^{1/2}} T M_{W^{1/2}} \|_{L^2 \to L^2}$. The analogous interpretations of the expressions in Theorem 1.3 should also be clear.

1.2. Summary and Outline of the Paper.
The remainder of the paper consists of the proofs of Theorems 1.2 and 1.3. To outline the proof technique, assume that $W$, $V$ are matrix $A_2$ weights. It is not hard to show that $M_{V^{1/2}} T M_{W^{1/2}} : L^2 \to L^2$ is bounded with operator norm $C$ if and only if the operator $T_W \equiv T M_W : L^2(W) \to L^2(V)$ satisfies
\[ \| T_W \|_{L^2(W) \to L^2(V)} = C. \]
Because $T$ is a band operator, $T_W$ will have a particularly nice structure. Following the language and proof strategy of Nazarov-Treil-Volberg [11], we will show $T_W$ is well-localized. Section 4 contains the details of well-localized operators, their connections to band operators, and the analogues of Theorems 1.2 and 1.3 for well-localized operators. We call these results Theorems 4.2 and 4.3. These theorems will immediately imply our main results: Theorems 1.2 and 1.3.

In Sections 2 and 3, the paper develops the tools needed to prove Theorems 4.2 and 4.3. In Section 2, we define and outline the properties of a system of Haar functions adapted to a general matrix weight $W$. This system appears to be new in the context of matrix weights. We also require a matrix Carleson Embedding Theorem. We use the ideas of Treil-Volberg [14] and Isralowitz-Kwon-Pott [5] to obtain such a theorem with the best known constant. Details are given in Section 3.

Section 5 contains the proofs of Theorems 4.2 and 4.3. The well-localized structure of $T_W$ makes $T_W$ amenable to separate analyses of its diagonal part and its upper and lower triangular parts, which behave like nice paraproducts. We compute the norm by duality and, as part of the argument, decompose the functions in question relative to weighted Haar bases adapted to $W$ and $V$ respectively. To control the upper and lower triangular pieces, we define associated paraproducts and show they are bounded using the testing hypotheses and the matrix Carleson Embedding Theorem. We bound the diagonal pieces using the well-localized structure of $T_W$ coupled with properties of the system of Haar functions and the given testing conditions.

2. Weighted Haar Basis
Let $W$ be a matrix weight, and let $\| \cdot \|$ denote the operator norm of a matrix on $\mathbb{C}^d$. In this section, we construct a set of disbalanced Haar functions adapted to $W$, which we denote $\mathcal{H}_W$. First, fix $J \in \mathcal{D}$ and let $v^1_J, \ldots, v^d_J$ be a set of orthonormal eigenvectors of the positive matrix:
\[ W(J_-) W(J_+)^{-1} W(J_-) + W(J_-) = \big( W(J_-) W(J_+)^{-1} + W(J_+) W(J_+)^{-1} \big) W(J_-) = W(J) W(J_+)^{-1} W(J_-). \tag{3} \]
Furthermore, for $1 \le j \le d$, define the constant
\[ w^j_J \equiv \big\| \big( W(J) W(J_+)^{-1} W(J_-) \big)^{1/2} v^j_J \big\|^2. \]
Since the matrix (3) is positive and $v^j_J$ is a normalized eigenvector, it follows that:
\[ (w^j_J)^{-1} v^j_J = \big( W(J) W(J_+)^{-1} W(J_-) \big)^{-1} v^j_J \qquad \forall\, 1 \le j \le d. \]

Definition 2.1.
For each $J \in \mathcal{D}$, define the vector-valued Haar functions on $J$ adapted to $W$ as follows:
\[ h^{W,j}_J \equiv (w^j_J)^{-1/2} \big( W(J_+)^{-1} W(J_-) v^j_J\, \mathbb{1}_{J_+} - v^j_J\, \mathbb{1}_{J_-} \big) \qquad \forall\, 1 \le j \le d. \tag{4} \]
If the constant function $\mathbb{1}_{[0,\infty)} e$ is in $L^2(W)$ for any nonzero $e$ in $\mathbb{C}^d$, let $\{ e_1, \ldots, e_{p_1} \}$ be an orthonormal basis of the subspace of vectors $e \in \mathbb{C}^d$ satisfying $\mathbb{1}_{[0,\infty)} e \in L^2(W)$. Define $h^{W,i}_1 \equiv c_i\, \mathbb{1}_{[0,\infty)} e_i$ for $i = 1, \ldots, p_1$, where $c_i$ is chosen so that $\| h^{W,i}_1 \|_{L^2(W)} = 1$. Define the functions $h^{W,i}_2 \equiv \tilde{c}_i\, \mathbb{1}_{(-\infty,0)} \nu_i$ for $i = 1, \ldots, p_2$, where $\{ \nu_1, \ldots, \nu_{p_2} \}$ is an orthonormal basis of the subspace of vectors $\nu \in \mathbb{C}^d$ satisfying $\mathbb{1}_{(-\infty,0)} \nu \in L^2(W)$, in an analogous way. Define $\mathcal{H}_W$, the system of Haar functions adapted to $W$, by:
\[ \mathcal{H}_W \equiv \big\{ h^{W,j}_J \big\} \cup \big\{ h^{W,i}_k \big\}. \]
One should notice that if the constant functions $\mathbb{1}_{[0,\infty)} e$ and $\mathbb{1}_{(-\infty,0)} e$ are not in $L^2(W)$ for any nonzero $e \in \mathbb{C}^d$, then $\mathcal{H}_W = \{ h^{W,j}_J \}$. We now show that $\mathcal{H}_W$ is an orthonormal basis of $L^2(W)$.

Lemma 2.2.
The system $\mathcal{H}_W$ is an orthonormal system in $L^2(W)$.

Proof.
We first prove that the system $\{ h^{W,j}_J \}$ is orthogonal. Fix $h^{W,j}_J$ and $h^{W,i}_I$. First, assume $I \ne J$. Then, one interval must be strictly contained in the other because otherwise, the inner product trivially vanishes by support conditions. Without loss of generality, assume $I \subsetneq J$. This implies that $h^{W,j}_J$ equals a constant vector on $I$, which we will denote by $e$. Then
\begin{align*}
\big\langle h^{W,i}_I, h^{W,j}_J \big\rangle_{L^2(W)} &= \int_I \big\langle W(x) h^{W,i}_I, e \big\rangle_{\mathbb{C}^d}\,dx \\
&= \int_I (w^i_I)^{-1/2} \big\langle W(x) \big( W(I_+)^{-1} W(I_-) v^i_I\, \mathbb{1}_{I_+} - v^i_I\, \mathbb{1}_{I_-} \big), e \big\rangle_{\mathbb{C}^d}\,dx \\
&= (w^i_I)^{-1/2} \big\langle W(I_+) W(I_+)^{-1} W(I_-) v^i_I, e \big\rangle_{\mathbb{C}^d} - (w^i_I)^{-1/2} \big\langle W(I_-) v^i_I, e \big\rangle_{\mathbb{C}^d} = 0.
\end{align*}
One should notice that the definition of $e$ played no role; in fact, the above arguments show that each $h^{W,j}_J$ has mean zero with respect to $W$. Now assume $I = J$ and $i \ne j$. Observe that:
\begin{align*}
\big\langle h^{W,i}_J, h^{W,j}_J \big\rangle_{L^2(W)} &= \int_J \big\langle W(x) h^{W,i}_J, h^{W,j}_J \big\rangle_{\mathbb{C}^d}\,dx \\
&= (w^j_J)^{-1/2} (w^i_J)^{-1/2} \int_J \big\langle W(x) \big( W(J_+)^{-1} W(J_-) v^i_J\, \mathbb{1}_{J_+} - v^i_J\, \mathbb{1}_{J_-} \big),\, W(J_+)^{-1} W(J_-) v^j_J\, \mathbb{1}_{J_+} - v^j_J\, \mathbb{1}_{J_-} \big\rangle_{\mathbb{C}^d}\,dx \\
&= (w^j_J)^{-1/2} (w^i_J)^{-1/2} \Big( \big\langle W(J_+) W(J_+)^{-1} W(J_-) v^i_J,\, W(J_+)^{-1} W(J_-) v^j_J \big\rangle_{\mathbb{C}^d} + \big\langle W(J_-) v^i_J, v^j_J \big\rangle_{\mathbb{C}^d} \Big) \\
&= (w^j_J)^{-1/2} (w^i_J)^{-1/2} \big\langle \big( W(J_-) W(J_+)^{-1} W(J_-) + W(J_-) \big) v^i_J, v^j_J \big\rangle_{\mathbb{C}^d} = 0,
\end{align*}
since $v^i_J$ and $v^j_J$ are orthonormal eigenvectors of $W(J_-) W(J_+)^{-1} W(J_-) + W(J_-)$.

Since each $h^{W,j}_J$ has mean zero with respect to $W$ and since each $h^{W,j}_J$ is supported in either $(-\infty,0)$ or $[0,\infty)$, it is clear that
\[ \big\langle h^{W,j}_J, h^{W,i}_k \big\rangle_{L^2(W)} = 0 \qquad \forall\, J \in \mathcal{D} \text{ and for all indices } i, j, k. \]
By construction, it is also clear that $\{ h^{W,j}_k \}$ is an orthonormal set in $L^2(W)$. Finally, to see that $\{ h^{W,j}_J \}$ is normalized, fix $h^{W,j}_J$ and observe that
\begin{align*}
\big\langle h^{W,j}_J, h^{W,j}_J \big\rangle_{L^2(W)} &= (w^j_J)^{-1} \big\langle \big( W(J_-) W(J_+)^{-1} W(J_-) + W(J_-) \big) v^j_J, v^j_J \big\rangle_{\mathbb{C}^d} \\
&= \big\langle \big( W(J_-) W(J_+)^{-1} W(J_-) + W(J_-) \big) \big( W(J_-) W(J_+)^{-1} W(J_-) + W(J_-) \big)^{-1} v^j_J, v^j_J \big\rangle_{\mathbb{C}^d} = 1,
\end{align*}
using the properties of $v^j_J$ and the definition of $w^j_J$. This completes the proof. □

Lemma 2.3.
The orthonormal system $\mathcal{H}_W$ is complete in $L^2(W)$.

Proof.
Fix $f$ in $L^2(W)$, and assume $f$ is orthogonal to every function in $\mathcal{H}_W$. Specifically, $f$ is orthogonal to the set $\{ h^{W,j}_J \}$. Then, for each $J \in \mathcal{D}$ and $j = 1, \ldots, d$,
\[ \big\langle f, h^{W,j}_J \big\rangle_{L^2(W)} = 0. \]
Multiplying by a constant gives:
\begin{align*}
0 &= |J_-|^{-1} \big\langle W(J_+)^{-1} W(J_-) v^j_J\, \mathbb{1}_{J_+} - v^j_J\, \mathbb{1}_{J_-},\, f \big\rangle_{L^2(W)} \\
&= |J_-|^{-1} \int_J \big\langle W(J_+)^{-1} W(J_-) v^j_J\, \mathbb{1}_{J_+} - v^j_J\, \mathbb{1}_{J_-},\, W(x) f(x) \big\rangle_{\mathbb{C}^d}\,dx \\
&= \big\langle W(J_+)^{-1} W(J_-) v^j_J, \langle W f \rangle_{J_+} \big\rangle_{\mathbb{C}^d} - \big\langle v^j_J, \langle W f \rangle_{J_-} \big\rangle_{\mathbb{C}^d} \\
&= \big\langle v^j_J,\, W(J_-) W(J_+)^{-1} \langle W f \rangle_{J_+} - \langle W f \rangle_{J_-} \big\rangle_{\mathbb{C}^d}.
\end{align*}
Since this holds for each $j$ and $v^1_J, \ldots, v^d_J$ is an orthonormal basis of $\mathbb{C}^d$, we can conclude that
\[ \langle W f \rangle_{J_-} = W(J_-) W(J_+)^{-1} \langle W f \rangle_{J_+}. \tag{5} \]
Adding $\langle W f \rangle_{J_+}$ to both sides gives
\[ 2 \langle W f \rangle_J = W(J_-) W(J_+)^{-1} \langle W f \rangle_{J_+} + \langle W f \rangle_{J_+} = \big( W(J_-) W(J_+)^{-1} + W(J_+) W(J_+)^{-1} \big) \langle W f \rangle_{J_+}. \]
Rearranging, by factoring out $W(J_+)^{-1}$ on the right from the term in parentheses and using the definitions, gives
\[ \langle W \rangle_J^{-1} \langle W f \rangle_J = \langle W \rangle_{J_+}^{-1} \langle W f \rangle_{J_+}. \]
Solving (5) for $\langle W f \rangle_{J_+}$ and using analogous arguments, one can show:
\[ \langle W \rangle_J^{-1} \langle W f \rangle_J = \langle W \rangle_{J_-}^{-1} \langle W f \rangle_{J_-}. \]
Now fix any $x, y \in (0, \infty)$ and choose some dyadic interval $J$ so that $x, y \in J$. Define two sequences of dyadic intervals
\[ J = I_0 \supsetneq I_1 \supsetneq I_2 \supsetneq \cdots \supsetneq I_i \supsetneq I_{i+1} \supsetneq \cdots \quad \text{and} \quad J = K_0 \supsetneq K_1 \supsetneq K_2 \supsetneq \cdots \supsetneq K_k \supsetneq K_{k+1} \supsetneq \cdots \]
such that each $I_i$ is a parent of $I_{i+1}$ and $x \in I_i$ for all $i$ and, similarly, each $K_k$ is a parent of $K_{k+1}$ and $y$ is in each $K_k$. Our previous arguments imply that
\[ \langle W \rangle_{I_i}^{-1} \langle W f \rangle_{I_i} = \langle W \rangle_J^{-1} \langle W f \rangle_J = \langle W \rangle_{K_k}^{-1} \langle W f \rangle_{K_k} \qquad \forall\, i, k \in \mathbb{N}. \]
Now we can use the Lebesgue Differentiation Theorem to conclude that
\[ W(x)^{-1} W(x) f(x) = W(y)^{-1} W(y) f(y) \]
for almost every $x, y$ in $(0,\infty)$, and so $f(x) = f(y)$ for almost every $x, y$ in $[0,\infty)$. Analogous arguments imply $f$ must be constant on $(-\infty,0)$. But, by assumption, $f$ is also orthogonal to the set $\{ h^{W,i}_k \}$, which implies $f$ is orthogonal to all of the nonzero constant functions supported on $[0,\infty)$ or $(-\infty,0)$ that lie in $L^2(W)$. Thus, we can conclude $f \equiv 0$. □

We require one additional fact about the weighted Haar system:
Lemma 2.4.
The orthonormal system $\mathcal{H}_W$ satisfies
\[ \big\| W(J_-)^{1/2} h^{W,j}_J(J_-) \big\|_{\mathbb{C}^d} \le C(d) \quad \text{and} \quad \big\| W(J_+)^{1/2} h^{W,j}_J(J_+) \big\|_{\mathbb{C}^d} \le C(d) \]
for all $J \in \mathcal{D}$ and $1 \le j \le d$, where $h^{W,j}_J(J_\pm)$ is the constant value $h^{W,j}_J$ takes on $J_\pm$.

Proof. We only prove the first inequality, as the second is proved similarly. First, recall that $W(J) W(J_+)^{-1} W(J_-)$ is a positive matrix and hence $W(J_-)^{-1} W(J_+) W(J)^{-1}$ is positive as well. Now, observe that
\begin{align*}
\big\| W(J_-)^{1/2} h^{W,j}_J(J_-) \big\|^2_{\mathbb{C}^d} &\le \big\| W(J_-)^{1/2} \big( W(J) W(J_+)^{-1} W(J_-) \big)^{-1/2} \big\|^2 \\
&= \big\| W(J_-)^{1/2} W(J_-)^{-1} W(J_+) W(J)^{-1} W(J_-)^{1/2} \big\| \\
&\le C(d)\, \mathrm{Tr} \big( W(J_-)^{1/2} W(J_-)^{-1} W(J_+) W(J)^{-1} W(J_-)^{1/2} \big) \\
&= C(d)\, \mathrm{Tr} \big( W(J)^{-1/2} W(J_+) W(J)^{-1/2} \big) \\
&\le C(d) \big\| W(J)^{-1/2} W(J_+) W(J)^{-1/2} \big\| \\
&\le C(d) \big\| W(J)^{-1/2} W(J) W(J)^{-1/2} \big\| = C(d),
\end{align*}
where we used the fact that trace and operator norm are equivalent (up to a dimensional constant) for positive matrices. This completes the proof. □
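The linear algebra in Lemmas 2.2 and 2.4 can be verified numerically in a one-interval model. The sketch below is our own illustration under the simplifying assumption $|J_-| = |J_+| = 1$, so that $W(J_-)$ and $W(J_+)$ are simply two random positive definite matrices; it checks the $W$-mean-zero property, the $L^2(W)$ Gram identity, and a size bound of the type in Lemma 2.4:

```python
import numpy as np

rng = np.random.default_rng(3)
d = 3

def rand_spd():
    G = rng.standard_normal((d, d))
    return G @ G.T + np.eye(d)

Wm, Wp = rand_spd(), rand_spd()        # play the roles of W(J_-) and W(J_+)
M = Wm @ np.linalg.inv(Wp) @ Wm + Wm   # the positive matrix (3)
w, Vecs = np.linalg.eigh(M)            # eigenvalues w_J^j, eigenvectors v_J^j

# columns of A / B hold the constant values of h^{W,j}_J on J_+ / J_-, per (4)
A = np.linalg.inv(Wp) @ Wm @ Vecs / np.sqrt(w)
B = -Vecs / np.sqrt(w)

# W-mean zero: W(J_+) h(J_+) + W(J_-) h(J_-) = 0  (first step of Lemma 2.2)
assert np.allclose(Wp @ A + Wm @ B, 0)

# orthonormality in L^2(W): <h_i, h_j> = <W(J_+) a_i, a_j> + <W(J_-) b_i, b_j>
G = A.T @ Wp @ A + B.T @ Wm @ B
assert np.allclose(G, np.eye(d))

# a Lemma 2.4-type bound: ||W(J_-)^{1/2} h(J_-)||^2 and ||W(J_+)^{1/2} h(J_+)||^2
# stay below the dimensional constant (here we check the bound d)
for j in range(d):
    assert B[:, j] @ Wm @ B[:, j] <= d + 1e-9
    assert A[:, j] @ Wp @ A[:, j] <= d + 1e-9
```

The Gram identity holds because the matrix in (3) diagonalizes in the basis $v^1_J, \ldots, v^d_J$, which is exactly how the normalizing constants $w^j_J$ were chosen.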
Remark 2.5.
In the proofs of Theorems 4.2 and 4.3, we will expand functions in $L^2(W)$ with respect to the basis $\mathcal{H}_W$. Specifically, if $f \in L^2(W)$, we can expand $f$ as
\[ f = \sum_{J \in \mathcal{D}} \sum_{1 \le j \le d} \big\langle f, h^{W,j}_J \big\rangle_{L^2(W)} h^{W,j}_J + \sum_{1 \le k \le 2} \sum_{1 \le j \le p_k} \big\langle f, h^{W,j}_k \big\rangle_{L^2(W)} h^{W,j}_k. \]
This means that for $K \in \mathcal{D}$, we can express the weighted average of $f$ on $K$ as
\begin{align*}
\langle W \rangle_K^{-1} \langle W f \rangle_K &= \sum_{J \in \mathcal{D}} \sum_{1 \le j \le d} \big\langle f, h^{W,j}_J \big\rangle_{L^2(W)} \langle W \rangle_K^{-1} \big\langle W h^{W,j}_J \big\rangle_K + \sum_{1 \le k \le 2} \sum_{1 \le j \le p_k} \big\langle f, h^{W,j}_k \big\rangle_{L^2(W)} \langle W \rangle_K^{-1} \big\langle W h^{W,j}_k \big\rangle_K \\
&= \sum_{J : K \subsetneq J} \sum_{1 \le j \le d} \big\langle f, h^{W,j}_J \big\rangle_{L^2(W)} h^{W,j}_J(K) + \sum_{1 \le k \le 2} \sum_{1 \le j \le p_k} \big\langle f, h^{W,j}_k \big\rangle_{L^2(W)} h^{W,j}_k(K),
\end{align*}
where $h^{W,j}_J(K)$ is the constant value that $h^{W,j}_J$ takes on $K$ and $h^{W,j}_k(K)$ is the constant value that $h^{W,j}_k$ takes on $K$. Now, assume $f$ is compactly supported, so that we can find two dyadic intervals $I_1 \subset [0,\infty)$ and $I_2 \subset (-\infty,0)$ such that $\mathrm{supp}(f) \subseteq I_1 \cup I_2$. For $I \in \mathcal{D}$, define the weighted expectation of $f$ on $I$ by
\[ E^W_I f \equiv \big( \langle W \rangle_I^{-1} \langle W f \rangle_I \big) \mathbb{1}_I. \]
Then, we can write $f$ as
\begin{align*}
f &= \sum_{J \in \mathcal{D}} \sum_{1 \le j \le d} \big\langle f, h^{W,j}_J \big\rangle_{L^2(W)} h^{W,j}_J + \sum_{1 \le k \le 2} \sum_{1 \le j \le p_k} \big\langle f, h^{W,j}_k \big\rangle_{L^2(W)} h^{W,j}_k \\
&= \sum_{J : J \subseteq I_1 \cup I_2} \sum_{1 \le j \le d} \big\langle f, h^{W,j}_J \big\rangle_{L^2(W)} h^{W,j}_J + \sum_{1 \le \ell \le 2} \big( \langle W \rangle_{I_\ell}^{-1} \langle W f \rangle_{I_\ell} \big) \mathbb{1}_{I_\ell} \\
&= \sum_{J : J \subseteq I_1 \cup I_2} \sum_{1 \le j \le d} \big\langle f, h^{W,j}_J \big\rangle_{L^2(W)} h^{W,j}_J + \sum_{1 \le \ell \le 2} E^W_{I_\ell} f. \tag{6}
\end{align*}

3. Matrix Carleson Embedding Theorem
Let $W$ be a matrix weight such that for all positive semi-definite matrices $A$ and intervals $J \in \mathcal{D}$, there is a uniform constant $C$ satisfying
\[ \frac{1}{|J|} \int_J \| A\, W(x)\, A \|^2\,dx \le C \left( \frac{1}{|J|} \int_J \| A\, W(x)\, A \|\,dx \right)^2. \tag{7} \]
Define $[W]_R$ to be the smallest such constant $C$. Treil-Volberg's arguments in Lemma 3.5 and Lemma 3.6 in [14] show that, if $W$ is an $A_2$ matrix weight, then
\[ [W]_R \le C(d) [W]_{A_2}. \tag{8} \]
In Theorem 6.1 in [14], Treil-Volberg prove an embedding theorem for a specific sequence of positive semi-definite matrices. Their arguments generalize easily to arbitrary sequences of matrices, yielding the following matrix Carleson Embedding Theorem:
Theorem 3.1.
Let $W$ be a matrix weight satisfying (7) and let $\{ A_I \}_{I \in \mathcal{D}}$ be a sequence of positive semi-definite $d \times d$ matrices. Then
\[ \sum_{I \in \mathcal{D}} \big\langle A_I \langle f \rangle_I, \langle f \rangle_I \big\rangle_{\mathbb{C}^d} \le C\, \| f \|^2_{L^2(W^{-1})} \quad \text{if} \quad \frac{1}{|J|} \sum_{I : I \subseteq J} \big\| \langle W \rangle_I^{1/2} A_I \langle W \rangle_I^{1/2} \big\| \le C_0 \quad \forall\, J \in \mathcal{D}, \]
where $C = C_0\, C(d) [W]_R$ and $C(d)$ is a dimensional constant.

It should be noted that in [5], Isralowitz-Kwon-Pott obtained a more general version of Theorem 3.1, which holds for all $A_p$ matrix weights.

Remark 3.2.
Treil-Volberg's arguments in [14] actually establish a seemingly stronger result. Namely, they show that if $\{ B_I \}_{I \in \mathcal{D}}$ is a sequence of positive semi-definite matrices, then
\[ \sum_{I \in \mathcal{D}} \big\| \langle W \rangle_I^{-1/2} B_I \langle W \rangle_I^{-1/2} \big\| \, \big\| \langle W \rangle_I^{-1/2} \big\langle W^{1/2} g \big\rangle_I \big\|^2_{\mathbb{C}^d} \le C\, \| g \|^2_{L^2} \quad \text{if} \quad \frac{1}{|J|} \sum_{I : I \subseteq J} \big\| \langle W \rangle_I^{-1/2} B_I \langle W \rangle_I^{-1/2} \big\| \le C_0, \tag{9} \]
for all $J \in \mathcal{D}$. To recover Theorem 3.1 from (9), note that
\[ \sum_{I \in \mathcal{D}} \big\langle \langle W \rangle_I^{-1} B_I \langle W \rangle_I^{-1} \big\langle W^{1/2} g \big\rangle_I, \big\langle W^{1/2} g \big\rangle_I \big\rangle_{\mathbb{C}^d} \le \sum_{I \in \mathcal{D}} \big\| \langle W \rangle_I^{-1/2} B_I \langle W \rangle_I^{-1/2} \big\| \, \big\| \langle W \rangle_I^{-1/2} \big\langle W^{1/2} g \big\rangle_I \big\|^2_{\mathbb{C}^d}. \]
If one is given $\{ A_I \}_{I \in \mathcal{D}}$ and $f \in L^2(W^{-1})$, then pairing the above inequality with (9) using $B_I \equiv \langle W \rangle_I A_I \langle W \rangle_I$ and $g \equiv W^{-1/2} f$ gives the inequalities in Theorem 3.1.

Equation (9) is proved via arguments similar to those used in [13] to establish the standard Carleson Embedding Theorem. Specifically, Treil-Volberg define an associated embedding operator and show it is bounded using the Senichkin-Vinogradov Test:

Theorem 3.3 (Senichkin-Vinogradov Test). Let $Z$ be a measure space, and let $k$ be a locally summable, nonnegative, measurable function on $Z \times Z$. If
\[ \int_Z k(s,t)\, k(s,x)\,ds \le C \big[ k(x,t) + k(t,x) \big] \quad \text{a.e. on } Z, \]
then for all nonnegative $g \in L^2(Z)$,
\[ \int_Z \int_Z k(s,t)\, g(s)\, g(t)\,ds\,dt \le C\, \| g \|^2_{L^2(Z)}. \]

For the ease of the reader, we sketch the proof of (9). We focus on the first half of the proof, as the second half is given in detail in [14].
Proof.
First define $\mu_I \equiv \big\| \langle W \rangle_I^{-1/2} B_I \langle W \rangle_I^{-1/2} \big\|$. Then, by assumption, $\{ \mu_I \}_{I \in \mathcal{D}}$ is a scalar Carleson sequence with testing constant $C_0$. Define the embedding operator $\mathcal{J} : L^2 \to \ell^2(\{\mu_I\}, \mathbb{C}^d)$ by
\[ \mathcal{J} f = \Big\{ \langle W \rangle_I^{-1/2} \big\langle W^{1/2} f \big\rangle_I \Big\}_{I \in \mathcal{D}} \]
and observe that (9) is equivalent to $\mathcal{J}$ having operator norm bounded by $\sqrt{C}$. To prove the norm bound, one shows that the formal adjoint $\mathcal{J}^* : \ell^2(\{\mu_I\}, \mathbb{C}^d) \to L^2$ defined by
\[ \mathcal{J}^* \{ \alpha_I \} \equiv \sum_{I \in \mathcal{D}} \frac{\mu_I}{|I|}\, \mathbb{1}_I\, W^{1/2} \langle W \rangle_I^{-1/2} \alpha_I \qquad \forall\, \{ \alpha_I \} \in \ell^2(\{\mu_I\}, \mathbb{C}^d) \]
has the desired norm bound. First observe that
\[ \mathcal{J} \mathcal{J}^* \{ \alpha_I \} = \bigg\{ \langle W \rangle_J^{-1/2} \sum_{I \in \mathcal{D}} \frac{\mu_I}{|I|} \big\langle W \mathbb{1}_I \big\rangle_J \langle W \rangle_I^{-1/2} \alpha_I \bigg\}_{J \in \mathcal{D}}. \]
One can use this to immediately show that for any $\{ \alpha_I \}$ in $\ell^2(\{\mu_I\}, \mathbb{C}^d)$,
\begin{align*}
\big\| \mathcal{J}^* \{ \alpha_I \} \big\|^2_{L^2} &= \big\langle \mathcal{J} \mathcal{J}^* \{ \alpha_I \}, \{ \alpha_I \} \big\rangle_{\ell^2(\{\mu_I\}, \mathbb{C}^d)} \\
&= \sum_{J \in \mathcal{D}} \sum_{I : I \subseteq J} \frac{\mu_I \mu_J}{|J|} \big\langle \langle W \rangle_J^{-1/2} \langle W \rangle_I^{1/2} \alpha_I, \alpha_J \big\rangle_{\mathbb{C}^d} + \sum_{I \in \mathcal{D}} \sum_{J : J \subsetneq I} \frac{\mu_I \mu_J}{|I|} \big\langle \langle W \rangle_J^{1/2} \langle W \rangle_I^{-1/2} \alpha_I, \alpha_J \big\rangle_{\mathbb{C}^d}.
\end{align*}
Now, for $K, L \in \mathcal{D}$, define $T_{LK}$ by
\[ T_{LK} \equiv \frac{1}{|L|} \big\| \langle W \rangle_K^{1/2} \langle W \rangle_L^{-1/2} \big\| = \frac{1}{|L|} \big\| \langle W \rangle_L^{-1/2} \langle W \rangle_K^{1/2} \big\| \quad \text{if } K \subseteq L, \]
and $T_{LK} = 0$ otherwise. By symmetry in the sums, it is easy to show that
\[ \big\| \mathcal{J}^* \{ \alpha_I \} \big\|^2_{L^2} \le 2 \sum_{J \in \mathcal{D}} \sum_{I : I \subseteq J} \mu_I \mu_J\, T_{JI}\, \| \alpha_I \|_{\mathbb{C}^d} \| \alpha_J \|_{\mathbb{C}^d}. \tag{10} \]
Thus, the result will be proved if one can show that the right-hand side of (10) is bounded by $C \| \{ \alpha_I \} \|^2_{\ell^2(\{\mu_I\}, \mathbb{C}^d)}$. This is where one uses the Senichkin-Vinogradov Test. Let $Z$ be $\mathcal{D}$, the set of dyadic intervals, with point mass $\mu_I$ on each interval $I$. Then $L^2(Z)$ is equivalent to $\ell^2(\{\mu_I\}, \mathbb{C})$. Indeed, $\{ \beta_I \} \in \ell^2(\{\mu_I\}, \mathbb{C})$ if and only if the function $\beta$ defined by $\beta(I) = \beta_I$ is in $L^2(Z)$. Moreover, $\| \{ \beta_I \} \|_{\ell^2(\{\mu_I\}, \mathbb{C})} = \| \beta \|_{L^2(Z)}$, so we can treat these as the same objects. Now, define the nonnegative function $k : Z \times Z \to \mathbb{R}_+$ by
\[ k(K, L) \equiv \sum_{J \in \mathcal{D}} \sum_{I : I \subseteq J} T_{JI}\, \delta_I(K)\, \delta_J(L), \]
where $\delta_I(K) = 1$ if $K = I$ and zero otherwise. Fix a sequence $\{ \alpha_I \} \in \ell^2(\{\mu_I\}, \mathbb{C}^d)$. Then the sequence $\{ a_I \}$ defined by $a_I \equiv \| \alpha_I \|_{\mathbb{C}^d}$ is a nonnegative sequence in $\ell^2(\{\mu_I\}, \mathbb{C})$ or, equivalently, $a$ (defined by $a(I) = a_I$) is a nonnegative function in $L^2(Z)$, and the norms of the two sequences are equal. It is easy to show that
\[ \int_Z \int_Z k(K, L)\, a(K)\, a(L)\,dK\,dL = \sum_{J \in \mathcal{D}} \sum_{I : I \subseteq J} \mu_I \mu_J\, T_{JI}\, a_I a_J = \sum_{J \in \mathcal{D}} \sum_{I : I \subseteq J} \mu_I \mu_J\, T_{JI}\, \| \alpha_I \|_{\mathbb{C}^d} \| \alpha_J \|_{\mathbb{C}^d}, \]
which is exactly the object we need to control. Indeed, if we can establish the conditions of the Senichkin-Vinogradov test with constant $C$, then the result will be proved. Let us first rewrite the desired conditions. The definition of $k$ implies that
\[ \int_Z k(K, J)\, k(K, J')\,dK = \sum_{I : I \subseteq J, J'} T_{JI}\, T_{J'I}\, \mu_I \qquad \forall\, J, J' \in \mathcal{D}. \]
Again using the definition of $k$, we have
\[ k(J, J') + k(J', J) = T_{JJ'} + T_{J'J} \qquad \forall\, J, J' \in \mathcal{D}. \]
Since we only sum over dyadic $I \subseteq J \cap J'$, to have a nonzero sum, we must have $J \subseteq J'$ or $J' \subseteq J$. Without loss of generality, assume $J' \subseteq J$. Then, to establish the conditions of the Senichkin-Vinogradov test, one must simply show:
\[ \sum_{I : I \subseteq J'} T_{JI}\, T_{J'I}\, \mu_I = \sum_{I : I \subseteq J'} \mu_I\, \frac{1}{|J|} \big\| \langle W \rangle_J^{-1/2} \langle W \rangle_I^{1/2} \big\| \, \frac{1}{|J'|} \big\| \langle W \rangle_{J'}^{-1/2} \langle W \rangle_I^{1/2} \big\| \le C\, \frac{1}{|J|} \big\| \langle W \rangle_J^{-1/2} \langle W \rangle_{J'}^{1/2} \big\|. \]
This inequality is proven in detail in [14]. The proof uses simple results about matrix weights, including the fact that all matrix $A_2$ weights satisfy a reverse Hölder estimate as in (7). The reverse Hölder estimate is used to turn the sum of interest into a sum of averages of a function weighted by the constants $\mu_I$. Since $\{ \mu_I \}_{I \in \mathcal{D}}$ is a scalar Carleson sequence, one can use the scalar Carleson Embedding Theorem to complete the proof. □

Using Theorem 3.1 and ideas from [5], we now obtain the following Carleson Embedding Theorem. Its testing conditions are particularly well-suited to the objects appearing in the proofs of Theorems 4.2 and 4.3, the well-localized analogues of Theorems 1.2 and 1.3.
Theorem 3.4.
Let $W$ be an $A_2$ matrix weight and let $\{ A_I \}_{I \in \mathcal{D}}$ be a sequence of positive semi-definite $d \times d$ matrices. Then
\[ \sum_{I \in \mathcal{D}} \big\langle A_I \langle f \rangle_I, \langle f \rangle_I \big\rangle_{\mathbb{C}^d} \le C\, \| f \|^2_{L^2(W^{-1})} \quad \text{if} \quad \frac{1}{|J|} \sum_{I : I \subseteq J} \langle W \rangle_I A_I \langle W \rangle_I \le C_0\, \langle W \rangle_J \quad \forall\, J \in \mathcal{D}, \]
where $C = C_0\, C(d) [W]_R [W]_{A_2}$.

The existence of Theorem 3.4, albeit with a different constant, is mentioned by Isralowitz-Kwon-Pott in the final remarks of [5]. Indeed, according to these remarks, if one modifies their previous arguments and tracks all constants closely, one could obtain this Carleson Embedding Theorem with constant $C(d) [W]_{A_2}^2$. However, in light of Equation (8), our constant is very likely smaller than the one appearing in [5]. As the details of the proof are not given in [5] and we obtain a different constant, we include the proof here.
Remark 3.5.
In Theorems 1.2, 1.3 and Theorems 4.2, 4.3, the constants $B(W)$ and $B(V)$ appear. Since dimensional constants are already included in the statements of those theorems, it should be clear from Theorem 3.4 that $B(W) = [W]_R [W]_{A_2}$ and $B(V) = [V]_R [V]_{A_2}$.

Now, to prove Theorem 3.4, we need the decaying stopping tree from Isralowitz-Kwon-Pott. Specifically, fix $I \in \mathcal{D}$ and let $\mathcal{J}(I)$ be the collection of maximal dyadic $J \subseteq I$ such that
\[ \big\| \langle W \rangle_J^{-1/2} \langle W \rangle_I^{1/2} \big\|^2 > \lambda \quad \text{or} \quad \big\| \langle W \rangle_J^{1/2} \langle W \rangle_I^{-1/2} \big\|^2 > \lambda, \]
for a constant $\lambda > 1$ to be determined later. Set $\mathcal{F}(I)$ to be the collection of $J \subseteq I$ such that $J$ is not contained in any interval in $\mathcal{J}(I)$. It is clear that $I$ is always in $\mathcal{F}(I)$. Set $\mathcal{J}^0(I) \equiv \{ I \}$.
Inductively define $\mathcal{J}^j(I)$ and $\mathcal{F}^j(I)$ by
\[ \mathcal{J}^j(I) = \bigcup_{J \in \mathcal{J}^{j-1}(I)} \mathcal{J}(J) \quad \text{and} \quad \mathcal{F}^j(I) = \bigcup_{J \in \mathcal{J}^{j-1}(I)} \mathcal{F}(J). \]
One can then prove the following lemma.
Lemma 3.6 (Lemma 2.1, [5]). Given the stopping-tree set-up, if $\lambda = 4C(d)[W]_{A_2}$, then
$$\bigg|\bigcup_{J\in\mathcal J_j(I)} J\bigg| \le 2^{-j}|I| \quad \forall\, I\in\mathcal D.$$
We can now provide the proof of Theorem 3.4:
Proof of Theorem 3.4. Using the equivalence, up to a dimensional constant, of norm and trace for positive semi-definite matrices, our hypothesis implies
$$\sum_{I:\,I\subseteq K} \left\|\langle W\rangle_K^{-1/2}\langle W\rangle_I A_I \langle W\rangle_I \langle W\rangle_K^{-1/2}\right\| \lesssim C_1 |K| \quad\forall\, K\in\mathcal D.$$
We will use this to obtain the testing condition from Theorem 3.1. Specifically, fix $J\in\mathcal D$. Then
$$\begin{aligned}
\frac{1}{|J|}\sum_{I:\,I\subseteq J} \left\|\langle W\rangle_I^{1/2} A_I \langle W\rangle_I^{1/2}\right\| &= \frac{1}{|J|}\sum_{j=1}^\infty \sum_{K\in\mathcal J_{j-1}(J)} \sum_{I\in\mathcal F(K)} \left\|\langle W\rangle_I^{1/2} A_I \langle W\rangle_I^{1/2}\right\|\\
&\le \frac{1}{|J|}\sum_{j=1}^\infty \sum_{K\in\mathcal J_{j-1}(J)} \sum_{I\in\mathcal F(K)} \left\|\langle W\rangle_I^{-1/2}\langle W\rangle_K^{1/2}\right\| \left\|\langle W\rangle_K^{-1/2}\langle W\rangle_I A_I \langle W\rangle_I \langle W\rangle_K^{-1/2}\right\| \left\|\langle W\rangle_K^{1/2}\langle W\rangle_I^{-1/2}\right\|\\
&= \frac{1}{|J|}\sum_{j=1}^\infty \sum_{K\in\mathcal J_{j-1}(J)} \sum_{I\in\mathcal F(K)} \left\|\langle W\rangle_K^{1/2}\langle W\rangle_I^{-1/2}\right\|^2 \left\|\langle W\rangle_K^{-1/2}\langle W\rangle_I A_I \langle W\rangle_I \langle W\rangle_K^{-1/2}\right\|\\
&\lesssim \frac{[W]_{A_2}}{|J|}\sum_{j=1}^\infty \sum_{K\in\mathcal J_{j-1}(J)} \sum_{I\in\mathcal F(K)} \left\|\langle W\rangle_K^{-1/2}\langle W\rangle_I A_I \langle W\rangle_I \langle W\rangle_K^{-1/2}\right\|\\
&\le \frac{[W]_{A_2}}{|J|}\sum_{j=1}^\infty \sum_{K\in\mathcal J_{j-1}(J)} \sum_{I:\,I\subseteq K} \left\|\langle W\rangle_K^{-1/2}\langle W\rangle_I A_I \langle W\rangle_I \langle W\rangle_K^{-1/2}\right\|\\
&\lesssim \frac{C_1[W]_{A_2}}{|J|}\sum_{j=1}^\infty \sum_{K\in\mathcal J_{j-1}(J)} |K| \le C_1[W]_{A_2}\sum_{j=1}^\infty 2^{-(j-1)} = 2C_1[W]_{A_2}.
\end{aligned}$$
In the fourth line from the top we use the stopping criteria, which introduces the value $[W]_{A_2}$; the final line uses Lemma 3.6 to control $\sum_{K\in\mathcal J_{j-1}(J)}|K|$. Pairing this estimate with Theorem 3.1 gives the desired result. $\Box$

Remark 3.7.
As mentioned in [5], one can prove a version of Lemma 3.6 for $A_{2,\infty}$ weights using Lemma 3.1 in [15]. Recall from [15] that $W$ is an $A_{2,\infty}$ weight if there is some constant $C$ such that
$$\exp\left(\frac{1}{|I|}\int_I \log\left\|W(t)^{-1/2}x\right\|^2 dt\right) \le C\left\|\langle W\rangle_I^{-1/2}x\right\|^2 \quad\forall\, x\in\mathbb C^d,\ I\in\mathcal D.$$
Denote the smallest such $C$ by $[W]_{A_{2,\infty}}$. As is shown in [15], if $W\in A_2$, then $W\in A_{2,\infty}$ with $[W]_{A_{2,\infty}} \le [W]_{A_2}$. If one tracks the constant in Lemma 3.1 from [15] and uses it in the proof of Lemma 2.1 in [5], one can obtain Lemma 3.6 with $\lambda = C(d)[W]^d_{A_{2,\infty}}$. Then the proof of Theorem 3.4 immediately shows that Theorem 3.4 also holds with constant $C = C_1 C(d)[W]_{R}[W]^d_{A_{2,\infty}}$.

4. Well-Localized Operators
We say an operator $T_W$ acts formally from $L^2(W)$ to $L^2(V)$ if the bilinear form $\langle T_W \mathbf 1_I e, \mathbf 1_J v\rangle_{L^2(V)}$ is well-defined for all $I,J\in\mathcal D$ and $e,v\in\mathbb C^d$. Then, the formal adjoint $T^*_V$ is defined by
$$\left\langle T^*_V \mathbf 1_I e, \mathbf 1_J v\right\rangle_{L^2(W)} \equiv \left\langle \mathbf 1_I e, T_W \mathbf 1_J v\right\rangle_{L^2(V)}.$$
Given this, we can define:
Definition 4.1. An operator $T_W$ acting (formally) from $L^2(W)$ to $L^2(V)$ is called $r$-lower triangular if for all $1\le j\le d$ and $I,J\in\mathcal D$ with $|J| \le |I|$ and all $e\in\mathbb C^d$, $T_W$ satisfies
$$\left\langle T_W \mathbf 1_I e, h^{V,j}_J\right\rangle_{L^2(V)} = 0$$
whenever $J \not\subseteq I^{(r+1)}$, or $|J| \le 2^{-r}|I|$ and $J \not\subseteq I$. Here $\{h^{V,j}_J\}$ is the set of $V$-weighted Haar functions on $J$ as defined in (4) and $I^{(r+1)}$ is the $(r+1)$th ancestor of $I$. We say $T_W$ is well-localized with radius $r$ if both $T_W$ and its formal adjoint $T^*_V$ are $r$-lower triangular.

This definition of well-localized is slightly different from the one appearing in [11]. Indeed, to define lower triangular, Nazarov-Treil-Volberg only impose conditions on $T_W$ when $|J| < |I|$, rather than $|J| \le |I|$. Nevertheless, their ideas are clearly the correct ones and their definition is essentially correct; the difference is likely attributable to a typographical error. Still, after establishing the related proofs, we do point out the necessity of having conditions for $|J| \le |I|$ in Remark 5.5.

The main results about well-localized operators are the following two theorems, which are the well-localized analogues of Theorems 1.2 and 1.3:

Theorem 4.2.
Let $V, W$ be matrix $A_2$ weights, and assume $T_W$ is a well-localized operator of radius $r$ acting formally from $L^2(W)$ to $L^2(V)$. Then $T_W$ extends to a bounded operator from $L^2(W)$ to $L^2(V)$ if and only if
$$\left\|T_W \mathbf 1_I e\right\|^2_{L^2(V)} \le A_1 \left\langle W(I)e, e\right\rangle_{\mathbb C^d} \quad\text{and}\quad \left\|T^*_V \mathbf 1_I e\right\|^2_{L^2(W)} \le A_1 \left\langle V(I)e, e\right\rangle_{\mathbb C^d}$$
for all $I\in\mathcal D$ and $e\in\mathbb C^d$. Furthermore,
$$\|T_W\|_{L^2(W)\to L^2(V)} \le 2^r C(d)\left((A_1 B(W))^{1/2} + (A_1 B(V))^{1/2}\right),$$
where $C(d)$ is a dimensional constant and $B(W)$ and $B(V)$ are constants depending on $W$ and $V$ from an application of the matrix Carleson Embedding Theorem.

Theorem 4.3.
Let $V, W$ be matrix $A_2$ weights, and assume $T_W$ is a well-localized operator of radius $r$ acting formally from $L^2(W)$ to $L^2(V)$. Then $T_W$ extends to a bounded operator from $L^2(W)$ to $L^2(V)$ if and only if the following two conditions hold:

(i) For all intervals $I\in\mathcal D$ and $e\in\mathbb C^d$,
$$\left\|\mathbf 1_I T_W \mathbf 1_I e\right\|^2_{L^2(V)} \le A_1\left\langle W(I)e,e\right\rangle_{\mathbb C^d} \quad\text{and}\quad \left\|\mathbf 1_I T^*_V \mathbf 1_I e\right\|^2_{L^2(W)} \le A_1\left\langle V(I)e,e\right\rangle_{\mathbb C^d}.$$

(ii) For all intervals $I, J$ in $\mathcal D$ satisfying $2^{-r}|I| \le |J| \le 2^{r}|I|$ and vectors $e,\nu$ in $\mathbb C^d$,
$$\left|\left\langle T_W \mathbf 1_I e, \mathbf 1_J \nu\right\rangle_{L^2(V)}\right|^2 \le A_2\left\langle W(I)e,e\right\rangle_{\mathbb C^d}\left\langle V(J)\nu,\nu\right\rangle_{\mathbb C^d}.$$

Furthermore,
$$\|T_W\|_{L^2(W)\to L^2(V)} \le 2^r C(d)\left((A_1 B(W))^{1/2} + (A_1 B(V))^{1/2} + A_2^{1/2}\right),$$
where $C(d)$ is a dimensional constant and $B(W)$ and $B(V)$ are constants depending on $W$ and $V$ from an application of the matrix Carleson Embedding Theorem.

Theorems 1.2 and 1.3 will follow immediately from these theorems once we establish the following lemma:
Lemma 4.4. If $V, W$ are matrix weights whose entries are in $L^2_{loc}(\mathbb R)$ and if $T$ is a band operator of radius $r$, then $T_W$ is a well-localized operator of radius $r$ acting formally from $L^2(W)$ to $L^2(V)$.

Proof. Assume $T: L^2 \to L^2$ is a band operator with radius $r$, and $W, V$ are matrix weights whose entries are in $L^2_{loc}$. Then the operators $T_W \equiv TM_W$ and $T^*_V \equiv T^*M_V$ act formally from $L^2(W)$ to $L^2(V)$ and from $L^2(V)$ to $L^2(W)$ respectively, since
$$\left\langle T_W\mathbf 1_Ie, V\mathbf 1_J\nu\right\rangle_{L^2} = \left\langle T_W\mathbf 1_Ie, \mathbf 1_J\nu\right\rangle_{L^2(V)} \quad\text{and}\quad \left\langle W\mathbf 1_Ie, T^*_V\mathbf 1_J\nu\right\rangle_{L^2} = \left\langle \mathbf 1_Ie, T^*_V\mathbf 1_J\nu\right\rangle_{L^2(W)}$$
are well-defined. To show $T_W$ is a well-localized operator with radius $r$, by symmetry, it suffices to show that $T_W$ is $r$-lower triangular. First, fix an orthonormal basis $\{e_i\}_{i=1}^d$ of $\mathbb C^d$ and for $I\in\mathcal D$, define $H_I \equiv \{h_Ie_i\}_{1\le i\le d}$. Then we can write
$$T = \sum_{I,J\in\mathcal D} T_{IJ}, \quad\text{where}\quad T_{IJ}: H_I \to H_J,$$
and each $T_{IJ}$ is given by
$$T_{IJ} = \sum_{1\le i,j\le d} \left\langle Th_Ie_i, h_Je_j\right\rangle_{L^2} \left\langle\,\cdot\,, h_Ie_i\right\rangle_{L^2}\, h_Je_j.$$
Since the entries of $W$ are in $L^2_{loc}(\mathbb R)$, $W\mathbf 1_Ie$ is in $L^2$ and so $T_W\mathbf 1_Ie \equiv T(W\mathbf 1_Ie)$ makes sense for each $I\in\mathcal D$ and $e\in\mathbb C^d$. Given $h^{V,j}_J$, a vector-valued Haar function on $J$ adapted to $V$, one can write
$$\left\langle T_W\mathbf 1_Ie, h^{V,j}_J\right\rangle_{L^2(V)} = \left\langle T_W\mathbf 1_Ie, Vh^{V,j}_J\right\rangle_{L^2} \le \left\|T_W\mathbf 1_Ie\right\|_{L^2}\left\|Vh^{V,j}_J\right\|_{L^2} < \infty,$$
where the first factor is finite because $T$ is bounded on $L^2$ and the second factor is finite because $h^{V,j}_J$ is bounded and the entries of $V$ are in $L^2_{loc}(\mathbb R)$. Given that, we are justified in expanding $T$ with respect to the Haar basis to obtain
$$\left\langle T_W\mathbf 1_Ie, h^{V,j}_J\right\rangle_{L^2(V)} = \sum_{K,L\in\mathcal D}\left\langle T_{KL}W\mathbf 1_Ie, h^{V,j}_J\right\rangle_{L^2(V)} = \sum_{K,L\in\mathcal D}\sum_{1\le k,\ell\le d}\left\langle Th_Ke_k, h_Le_\ell\right\rangle_{L^2}\left\langle W\mathbf 1_Ie, h_Ke_k\right\rangle_{L^2}\left\langle h_Le_\ell, h^{V,j}_J\right\rangle_{L^2(V)}.$$
Observe that $\langle T_{KL}W\mathbf 1_Ie, h^{V,j}_J\rangle_{L^2(V)}$ is zero if $d_{tree}(K,L) > r$, if $I\cap K = \emptyset$, or if $L\not\subseteq J$. So, we only need consider terms where $d_{tree}(K,L)\le r$, $I\cap K\ne\emptyset$, and $L\subseteq J$.

To show $T_W$ is $r$-lower triangular, let $|J|\le|I|$. First, assume that $J\not\subseteq I^{(r+1)}$ and, by way of contradiction, assume there is a nonzero term $\langle T_{KL}W\mathbf 1_Ie, h^{V,j}_J\rangle_{L^2(V)}$ in the above sum for some $K,L\in\mathcal D$. By our previous assertions, we must have
$$|K| \le 2^r|L| \le 2^r|J| \le 2^{r+1}|I|.$$
Since $I\cap K\ne\emptyset$, this implies that $K\subseteq I^{(r+1)}$. Since $L\subseteq J$, $|L|\le|I|$ and $L\not\subseteq I^{(r+1)}$. But this immediately implies that $d_{tree}(K,L)\ge r+1$, a contradiction.

Similarly, assume $|J|\le 2^{-r}|I|$ and $J\not\subseteq I$ and, by contradiction, assume there is a nonzero term $\langle T_{KL}W\mathbf 1_Ie, h^{V,j}_J\rangle_{L^2(V)}$ for some $K, L$. Then $|L|\le 2^{-r}|I|$ and $L\not\subseteq I$. Furthermore, since $d_{tree}(K,L)\le r$, this implies $|K|\le|I|$, so $K\subseteq I$. But $|L|\le 2^{-r}|I|$, $L\not\subseteq I$, and $K\subseteq I$ imply that $d_{tree}(K,L)\ge r+1$, a contradiction.

Thus, $T_W$ is $r$-lower triangular and symmetric arguments give the result for $T^*_V$. This implies $T_W$ is well-localized with radius $r$. $\Box$

Remark 4.5.
In Theorems 4.2 and 4.3, one must interpret the testing conditions correctly when the matrix weights' entries are not in $L^2_{loc}(\mathbb R)$. We already outlined the remedy for this problem in Remark 1.5. Similarly, one should notice that Lemma 4.4 only handles the case where the matrix weights have entries in $L^2_{loc}(\mathbb R)$. Nevertheless, this result is sufficient to allow us to pass from Theorems 4.2 and 4.3 to Theorems 1.2 and 1.3. This is easy to see since, as detailed in Remark 1.5, we interpret all statements about weights with locally integrable (but not necessarily square-integrable) entries in Theorems 1.2 and 1.3 using limits of weights with entries in $L^2_{loc}(\mathbb R)$.

5. Proofs of Theorems 4.2 and 4.3
Paraproducts.
To prove Theorems 4.2 and 4.3, we require several results about related paraproducts. As before, let $T_W$ be a well-localized operator of radius $r$ acting formally from $L^2(W)$ to $L^2(V)$ with formal adjoint $T^*_V$. Using these operators, define the following paraproducts:
$$\Pi_W f \equiv \sum_{I\in\mathcal D}\sum_{\substack{1\le j\le d\\ J\subseteq I:\,|J|=2^{-r}|I|}}\left\langle T_W E^W_I f, h^{V,j}_J\right\rangle_{L^2(V)} h^{V,j}_J, \qquad \Pi_V g \equiv \sum_{I\in\mathcal D}\sum_{\substack{1\le j\le d\\ J\subseteq I:\,|J|=2^{-r}|I|}}\left\langle T^*_V E^V_I g, h^{W,j}_J\right\rangle_{L^2(W)} h^{W,j}_J$$
for $f\in L^2(W)$ and $g\in L^2(V)$. Recall that the $W$-weighted expectation of $f$ on $I$ is defined by $E^W_I f \equiv \langle W\rangle_I^{-1}\langle Wf\rangle_I\mathbf 1_I$. Now, observe that, as demonstrated by the following lemma, these paraproducts mimic the behavior of $T_W$ and $T^*_V$ respectively.

Lemma 5.1.
Let $I, J\in\mathcal D$ and let $\Pi_W$ be the paraproduct defined above using the well-localized operator $T_W$ with radius $r$ acting (formally) from $L^2(W)$ to $L^2(V)$. If $|J|\ge 2^{-r}|I|$, then
$$\left\langle \Pi_W h^{W,i}_I, h^{V,j}_J\right\rangle_{L^2(V)} = 0 \quad\forall\, 1\le i,j\le d.$$
If $|J| < 2^{-r}|I|$, then
$$\left\langle \Pi_W h^{W,i}_I, h^{V,j}_J\right\rangle_{L^2(V)} = \left\langle T_W h^{W,i}_I, h^{V,j}_J\right\rangle_{L^2(V)} \quad\forall\, 1\le i,j\le d.$$
If $J\not\subseteq I$, then both sides of the equality are zero. Furthermore, analogous statements hold for the paraproduct $\Pi_V$ and formal adjoint $T^*_V$.

Proof. First, observe that
$$\left\langle \Pi_W h^{W,i}_I, h^{V,j}_J\right\rangle_{L^2(V)} = \sum_{K\in\mathcal D}\sum_{\substack{1\le\ell\le d\\ L\subseteq K:\,|L|=2^{-r}|K|}} \left\langle T_W E^W_K h^{W,i}_I, h^{V,\ell}_L\right\rangle_{L^2(V)}\left\langle h^{V,\ell}_L, h^{V,j}_J\right\rangle_{L^2(V)} = \left\langle T_W E^W_{J^{(r)}} h^{W,i}_I, h^{V,j}_J\right\rangle_{L^2(V)},$$
where $J^{(r)}$ is the $r$th ancestor of $J$. Now assume $|J|\ge 2^{-r}|I|$ or $J\not\subseteq I$. Then, either $I\subseteq J^{(r)}$ or $I\cap J^{(r)} = \emptyset$. In either case, $E^W_{J^{(r)}} h^{W,i}_I = 0$, so the corresponding inner product is zero.

Now assume $|J| < 2^{-r}|I|$, so that $|J| \le 2^{-r}|I_-| = 2^{-r}|I_+|$. If $J\not\subseteq I$, then $J\not\subseteq I_-, I_+$ and since $T_W$ is well-localized with radius $r$,
$$\left\langle T_W h^{W,i}_I, h^{V,j}_J\right\rangle_{L^2(V)} = \left\langle T_W h^{W,i}_I(I_-)\mathbf 1_{I_-}, h^{V,j}_J\right\rangle_{L^2(V)} + \left\langle T_W h^{W,i}_I(I_+)\mathbf 1_{I_+}, h^{V,j}_J\right\rangle_{L^2(V)} = 0.$$
This gives equality if $J\not\subseteq I$. Now assume $|J| < 2^{-r}|I|$ and $J\subseteq I$. Then
$$\left\langle \Pi_W h^{W,i}_I, h^{V,j}_J\right\rangle_{L^2(V)} = \left\langle T_W E^W_{J^{(r)}} h^{W,i}_I, h^{V,j}_J\right\rangle_{L^2(V)} = \left\langle T_W h^{W,i}_I\big(J^{(r)}\big)\mathbf 1_{J^{(r)}}, h^{V,j}_J\right\rangle_{L^2(V)} = \left\langle T_W h^{W,i}_I, h^{V,j}_J\right\rangle_{L^2(V)},$$
since for all $I'\subseteq I\setminus J^{(r)}$, the tree distance $d_{tree}(I',J) > r$ and so
$$\left\langle T_W h^{W,i}_I(I')\mathbf 1_{I'}, h^{V,j}_J\right\rangle_{L^2(V)} = 0.$$
Analogous statements hold for $\Pi_V$, since it is defined using the operator $T^*_V$, which is also well-localized with radius $r$. $\Box$

Now, we show that the testing condition (i) from Theorem 4.3, and hence the stronger testing condition from Theorem 4.2, implies the boundedness of the paraproducts $\Pi_W$ and $\Pi_V$. We state the result for $\Pi_W$, but analogous arguments give the result for $\Pi_V$.

Lemma 5.2.
Let $\Pi_W$ be the paraproduct defined above and assume that the well-localized operator $T_W$ satisfies
$$\left\|\mathbf 1_I T_W\mathbf 1_Ie\right\|^2_{L^2(V)} \le C_1\left\langle W(I)e,e\right\rangle_{\mathbb C^d} \quad\forall\, I\in\mathcal D,\ e\in\mathbb C^d.$$
Then $\Pi_W$ is bounded from $L^2(W)$ to $L^2(V)$ and
$$\left\|\Pi_W\right\|^2_{L^2(W)\to L^2(V)} \le C(d)\,C_1 B(W),$$
where $B(W)$ is the constant obtained from applying the matrix Carleson Embedding Theorem.

Proof. Fix $f\in L^2(W)$, which implies $Wf\in L^2(W^{-1})$, and observe that
$$\left\|\Pi_W f\right\|^2_{L^2(V)} = \sum_{K\in\mathcal D}\sum_{\substack{1\le\ell\le d\\ L\subseteq K:\,|L|=2^{-r}|K|}} \left|\left\langle T_W E^W_K f, h^{V,\ell}_L\right\rangle_{L^2(V)}\right|^2 = \sum_{K\in\mathcal D}\sum_{\substack{1\le\ell\le d\\ L\subseteq K:\,|L|=2^{-r}|K|}} \left|\left\langle E^W_K f, T^*_V h^{V,\ell}_L\right\rangle_{L^2(W)}\right|^2 = \sum_{K\in\mathcal D}\sum_{\substack{1\le\ell\le d\\ L\subseteq K:\,|L|=2^{-r}|K|}} \left|\left\langle \langle W\rangle_K^{-1}\langle Wf\rangle_K, \alpha_{L,\ell}\right\rangle_{\mathbb C^d}\right|^2,$$
where we have set $\alpha_{L,\ell}$ to be the vector
$$\alpha_{L,\ell} \equiv \int_{L^{(r)}} W(x)\, T^*_V h^{V,\ell}_L(x)\, dx.$$
And so, letting $(\alpha_{L,\ell})^*$ denote the $1\times d$ adjoint row vector corresponding to $\alpha_{L,\ell}$, we have
$$\left\|\Pi_W f\right\|^2_{L^2(V)} = \sum_{K\in\mathcal D}\sum_{\substack{1\le\ell\le d\\ L\subseteq K:\,|L|=2^{-r}|K|}}\left\langle \alpha_{L,\ell}(\alpha_{L,\ell})^* \langle W\rangle_K^{-1}\langle Wf\rangle_K, \langle W\rangle_K^{-1}\langle Wf\rangle_K\right\rangle_{\mathbb C^d} = \sum_{K\in\mathcal D}\left\langle A_K\langle Wf\rangle_K, \langle Wf\rangle_K\right\rangle_{\mathbb C^d},$$
where we have set
$$A_K \equiv \sum_{\substack{1\le\ell\le d\\ L\subseteq K:\,|L|=2^{-r}|K|}} \langle W\rangle_K^{-1}\,\alpha_{L,\ell}(\alpha_{L,\ell})^*\,\langle W\rangle_K^{-1}.$$
This is exactly the setup where we can apply Theorem 3.4. Specifically, we need to show that for all $J\in\mathcal D$,
$$\sum_{K\subseteq J} \langle W\rangle_K A_K \langle W\rangle_K \le C_1\, W(J).$$
To prove this matrix inequality, fix $e\in\mathbb C^d$ and observe that
$$\sum_{K\subseteq J}\left\langle \langle W\rangle_K A_K\langle W\rangle_K e, e\right\rangle_{\mathbb C^d} = \sum_{K\subseteq J}\sum_{\substack{1\le\ell\le d\\ L\subseteq K:\,|L|=2^{-r}|K|}}\left\langle \alpha_{L,\ell}(\alpha_{L,\ell})^* e, e\right\rangle_{\mathbb C^d} = \sum_{K\subseteq J}\sum_{\substack{1\le\ell\le d\\ L\subseteq K:\,|L|=2^{-r}|K|}}\left|\left\langle \alpha_{L,\ell}, e\right\rangle_{\mathbb C^d}\right|^2 = \sum_{K\subseteq J}\sum_{\substack{1\le\ell\le d\\ L\subseteq K:\,|L|=2^{-r}|K|}}\left|\left\langle h^{V,\ell}_L, T_W\mathbf 1_K e\right\rangle_{L^2(V)}\right|^2.$$
Notice that as $T_W$ is $r$-lower triangular and $L\subseteq K$ with $|L| = 2^{-r}|K|$, we have that
$$\left\langle h^{V,\ell}_L, T_W\mathbf 1_{J\setminus K}e\right\rangle_{L^2(V)} = \sum_{I\subseteq J:\, I\ne K,\ |I|=|K|}\left\langle h^{V,\ell}_L, T_W\mathbf 1_Ie\right\rangle_{L^2(V)} = 0.$$
This means that
$$\sum_{K\subseteq J}\left\langle \langle W\rangle_K A_K\langle W\rangle_K e, e\right\rangle_{\mathbb C^d} = \sum_{K\subseteq J}\sum_{\substack{1\le\ell\le d\\ L\subseteq K:\,|L|=2^{-r}|K|}}\left|\left\langle h^{V,\ell}_L, T_W\mathbf 1_Je\right\rangle_{L^2(V)}\right|^2 \le \left\|\mathbf 1_J T_W\mathbf 1_Je\right\|^2_{L^2(V)} \le C_1\left\langle W(J)e,e\right\rangle_{\mathbb C^d}.$$
Since $e\in\mathbb C^d$ was arbitrary, the matrix inequality follows, so we can apply Theorem 3.4 to obtain
$$\left\|\Pi_W f\right\|^2_{L^2(V)} = \sum_{K\in\mathcal D}\left\langle A_K\langle Wf\rangle_K,\langle Wf\rangle_K\right\rangle_{\mathbb C^d} \le C(d)\,C_1B(W)\left\|Wf\right\|^2_{L^2(W^{-1})} = C(d)\,C_1B(W)\left\|f\right\|^2_{L^2(W)},$$
as desired. $\Box$

Small Lemmas.
In this subsection, we verify several small lemmas that are trivial in the scalar situation. As before, $T_W$ is a well-localized operator with radius $r$ that satisfies the testing conditions from Theorem 4.2 or 4.3.

Lemma 5.3.
Let $T_W$ be a well-localized operator with radius $r$ acting (formally) from $L^2(W)$ to $L^2(V)$ that satisfies the testing condition from Theorem 4.2 with constant $A_1$. Then
$$\left|\left\langle T_W h^{W,i}_I, h^{V,j}_J\right\rangle_{L^2(V)}\right|^2 \le C(d)A_1 \quad\forall\, I,J\in\mathcal D,\ 1\le i,j\le d.$$
Similarly, if $T_W$ satisfies the testing condition (ii) from Theorem 4.3 with constant $A_2$, then
$$\left|\left\langle T_W h^{W,i}_I, h^{V,j}_J\right\rangle_{L^2(V)}\right|^2 \le C(d)A_2 \quad\forall\, I,J\in\mathcal D,\ 1\le i,j\le d.$$

Proof. For the first part of the lemma, we can use Cauchy-Schwarz to obtain:
$$\left|\left\langle T_W h^{W,i}_I, h^{V,j}_J\right\rangle_{L^2(V)}\right| \le \left\|T_W h^{W,i}_I\right\|_{L^2(V)} \le \left\|T_W h^{W,i}_I(I_-)\mathbf 1_{I_-}\right\|_{L^2(V)} + \left\|T_W h^{W,i}_I(I_+)\mathbf 1_{I_+}\right\|_{L^2(V)}.$$
It suffices to prove the desired bound for one term in the sum, since the arguments are symmetric. Using the testing condition and Lemma 2.4, we have:
$$\left\|T_W h^{W,i}_I(I_-)\mathbf 1_{I_-}\right\|^2_{L^2(V)} \le A_1\left\langle W(I_-)h^{W,i}_I(I_-), h^{W,i}_I(I_-)\right\rangle_{\mathbb C^d} = A_1\left\|W(I_-)^{1/2}h^{W,i}_I(I_-)\right\|^2_{\mathbb C^d} \le C(d)A_1,$$
which completes the first part of the lemma. For the second part, we can write:
$$\begin{aligned}
\left|\left\langle T_W h^{W,i}_I, h^{V,j}_J\right\rangle_{L^2(V)}\right| &\le \left|\left\langle T_W h^{W,i}_I(I_-)\mathbf 1_{I_-}, h^{V,j}_J(J_-)\mathbf 1_{J_-}\right\rangle_{L^2(V)}\right| + \left|\left\langle T_W h^{W,i}_I(I_-)\mathbf 1_{I_-}, h^{V,j}_J(J_+)\mathbf 1_{J_+}\right\rangle_{L^2(V)}\right|\\
&\quad + \left|\left\langle T_W h^{W,i}_I(I_+)\mathbf 1_{I_+}, h^{V,j}_J(J_-)\mathbf 1_{J_-}\right\rangle_{L^2(V)}\right| + \left|\left\langle T_W h^{W,i}_I(I_+)\mathbf 1_{I_+}, h^{V,j}_J(J_+)\mathbf 1_{J_+}\right\rangle_{L^2(V)}\right|.
\end{aligned}$$
By Lemma 2.4 and testing hypothesis (ii), we can conclude:
$$\left|\left\langle T_W h^{W,i}_I(I_-)\mathbf 1_{I_-}, h^{V,j}_J(J_-)\mathbf 1_{J_-}\right\rangle_{L^2(V)}\right|^2 \le A_2\left\langle W(I_-)h^{W,i}_I(I_-), h^{W,i}_I(I_-)\right\rangle_{\mathbb C^d}\left\langle V(J_-)h^{V,j}_J(J_-), h^{V,j}_J(J_-)\right\rangle_{\mathbb C^d} = A_2\left\|W(I_-)^{1/2}h^{W,i}_I(I_-)\right\|^2_{\mathbb C^d}\left\|V(J_-)^{1/2}h^{V,j}_J(J_-)\right\|^2_{\mathbb C^d} \le C(d)A_2.$$
The other three terms in the sum can be handled similarly. $\Box$
Lemma 5.4. Let $f\in L^2(W)$. Then for all $I\in\mathcal D$,
$$|I|\left\|\langle W\rangle_I^{-1/2}\langle Wf\rangle_I\right\|^2_{\mathbb C^d} \le C(d)\left\|f\mathbf 1_I\right\|^2_{L^2(W)}.$$

Proof. Using Hölder's inequality and the fact that $\langle W\rangle_I^{-1/2}W(x)\langle W\rangle_I^{-1/2}$ is positive a.e., we can compute
$$\begin{aligned}
|I|\left\|\langle W\rangle_I^{-1/2}\langle Wf\rangle_I\right\|^2_{\mathbb C^d} &= |I|^{-1}\left\|\int_I \langle W\rangle_I^{-1/2}W(x)f(x)\,dx\right\|^2_{\mathbb C^d}\\
&\le |I|^{-1}\left(\int_I \left\|\langle W\rangle_I^{-1/2}W(x)^{1/2}\right\|\left\|W(x)^{1/2}f(x)\right\|_{\mathbb C^d} dx\right)^2\\
&\le |I|^{-1}\left(\int_I \left\|\langle W\rangle_I^{-1/2}W(x)^{1/2}\right\|^2 dx\right)\left(\int_I \left\|W(x)^{1/2}f(x)\right\|^2_{\mathbb C^d} dx\right)\\
&= \left(|I|^{-1}\int_I \left\|\langle W\rangle_I^{-1/2}W(x)\langle W\rangle_I^{-1/2}\right\| dx\right)\left\|f\mathbf 1_I\right\|^2_{L^2(W)}\\
&\le C(d)\left\|f\mathbf 1_I\right\|^2_{L^2(W)}\left\||I|^{-1}\int_I \langle W\rangle_I^{-1/2}W(x)\langle W\rangle_I^{-1/2}\, dx\right\| = C(d)\left\|f\mathbf 1_I\right\|^2_{L^2(W)},
\end{aligned}$$
which gives the needed inequality. $\Box$
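In the scalar case $d = 1$, Lemma 5.4 is exactly the Cauchy-Schwarz inequality $(\int_I wf)^2 \le (\int_I w)(\int_I wf^2)$. A quick numerical confirmation on random samples (all names illustrative, not from the paper):

```python
import numpy as np

# Scalar case of Lemma 5.4: for d = 1 the claim reads
#   |I| * <w>_I^{-1} <w f>_I^2  <=  ||f 1_I||_{L^2(w)}^2,
# which follows from Cauchy-Schwarz applied to w^{1/2} * (w^{1/2} f).
rng = np.random.default_rng(1)
N = 1024
dx = 1.0 / N                        # I = [0, 1), so |I| = 1
w = np.exp(rng.standard_normal(N))  # positive weight samples
f = rng.standard_normal(N)

avg_w = np.sum(w) * dx              # <w>_I
avg_wf = np.sum(w * f) * dx         # <w f>_I
lhs = avg_wf ** 2 / avg_w           # |I| <w>_I^{-1} <w f>_I^2 with |I| = 1
rhs = np.sum(w * f ** 2) * dx       # ||f 1_I||_{L^2(w)}^2
assert lhs <= rhs
```

In the scalar case the constant is $1$; the dimensional constant $C(d)$ only enters for genuinely matrix-valued $W$.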
Proofs of Theorems 4.2 and 4.3.
We first prove Theorem 4.2:
Proof.
We prove $T_W$ extends to a bounded operator from $L^2(W)$ to $L^2(V)$ using duality. Specifically, we show
$$\left|\left\langle T_W f, g\right\rangle_{L^2(V)}\right| \le C\|f\|_{L^2(W)}\|g\|_{L^2(V)} \qquad (11)$$
for a fixed constant $C$ and all $f$ and $g$ in dense sets of $L^2(W)$ and $L^2(V)$ respectively. Without loss of generality, we can assume $f$ and $g$ are compactly supported and so, we can choose disjoint $I_1, I_2\in\mathcal D$ such that $\mathrm{supp}(f), \mathrm{supp}(g)\subseteq I_1\cup I_2$ and $|I_1| = |I_2| = 2^m$, for some $m\in\mathbb N$. Using (6), we can write
$$f = f_1 + f_2 = \sum_{\substack{I:\,|I|\le 2^m\\ 1\le i\le d}}\left\langle f, h^{W,i}_I\right\rangle_{L^2(W)} h^{W,i}_I + \sum_{k=1}^2 E^W_{I_k}f, \qquad (12)$$
$$g = g_1 + g_2 = \sum_{\substack{J:\,|J|\le 2^m\\ 1\le j\le d}}\left\langle g, h^{V,j}_J\right\rangle_{L^2(V)} h^{V,j}_J + \sum_{\ell=1}^2 E^V_{I_\ell}g. \qquad (13)$$
Using these decompositions, it suffices to show
$$\left|\left\langle T_W f_i, g_j\right\rangle_{L^2(V)}\right| \le C\|f\|_{L^2(W)}\|g\|_{L^2(V)} \quad\forall\, 1\le i,j\le 2.$$
First, consider $f_1$ and $g_1$. Using Lemma 5.1, we can write
$$\begin{aligned}
\left\langle T_W f_1, g_1\right\rangle_{L^2(V)} &= \sum_{\substack{I:\,|I|\le 2^m\\ 1\le i\le d}}\sum_{\substack{J:\,|J|\le 2^m\\ 1\le j\le d}} \left\langle f, h^{W,i}_I\right\rangle_{L^2(W)}\left\langle g, h^{V,j}_J\right\rangle_{L^2(V)}\left\langle T_W h^{W,i}_I, h^{V,j}_J\right\rangle_{L^2(V)}\\
&= \left\langle \Pi_W f_1, g_1\right\rangle_{L^2(V)} + \left\langle f_1, \Pi_V g_1\right\rangle_{L^2(W)} + \sum_{\substack{I:\,|I|\le 2^m\\ 1\le i\le d}}\sum_{\substack{J:\,|J|\le 2^m\\ 2^{-r}|I|\le|J|\le 2^r|I|\\ 1\le j\le d}}\left\langle f, h^{W,i}_I\right\rangle_{L^2(W)}\left\langle g, h^{V,j}_J\right\rangle_{L^2(V)}\left\langle T_W h^{W,i}_I, h^{V,j}_J\right\rangle_{L^2(V)},
\end{aligned}$$
where Lemma 5.1 identifies the contribution of the scales $|J| < 2^{-r}|I|$ with $\langle\Pi_W f_1, g_1\rangle_{L^2(V)}$ and, symmetrically, the contribution of the scales $|I| < 2^{-r}|J|$ with $\langle f_1, \Pi_V g_1\rangle_{L^2(W)}$. Lemma 5.2 implies that
$$\left|\left\langle \Pi_W f_1, g_1\right\rangle_{L^2(V)}\right| + \left|\left\langle f_1, \Pi_V g_1\right\rangle_{L^2(W)}\right| \le C(d)\left((A_1B(W))^{1/2} + (A_1B(V))^{1/2}\right)\|f\|_{L^2(W)}\|g\|_{L^2(V)}.$$
So, we just need to bound the last sum. We first apply Cauchy-Schwarz and exploit symmetry in the sums to obtain:
$$\begin{aligned}
&\sum_{\substack{I:\,|I|\le 2^m\\ 1\le i\le d}}\sum_{\substack{J:\,|J|\le 2^m\\ 2^{-r}|I|\le|J|\le 2^r|I|\\ 1\le j\le d}}\left|\left\langle f, h^{W,i}_I\right\rangle_{L^2(W)}\right|\left|\left\langle g, h^{V,j}_J\right\rangle_{L^2(V)}\right|\left|\left\langle T_W h^{W,i}_I, h^{V,j}_J\right\rangle_{L^2(V)}\right|\\
&\quad\le \Bigg(\sum_{\substack{I:\,|I|\le 2^m\\ 1\le i\le d}}\sum_{\substack{J:\,|J|\le 2^m\\ 2^{-r}|I|\le|J|\le 2^r|I|\\ 1\le j\le d}}\left|\left\langle f, h^{W,i}_I\right\rangle_{L^2(W)}\right|^2\left|\left\langle T_W h^{W,i}_I, h^{V,j}_J\right\rangle_{L^2(V)}\right|\Bigg)^{1/2} \times \Bigg(\sum_{\substack{J:\,|J|\le 2^m\\ 1\le j\le d}}\sum_{\substack{I:\,|I|\le 2^m\\ 2^{-r}|J|\le|I|\le 2^r|J|\\ 1\le i\le d}}\left|\left\langle g, h^{V,j}_J\right\rangle_{L^2(V)}\right|^2\left|\left\langle T_W h^{W,i}_I, h^{V,j}_J\right\rangle_{L^2(V)}\right|\Bigg)^{1/2}. \qquad (14)
\end{aligned}$$
Now, fix $I\in\mathcal D$. Since $T_W$ is well-localized, it is not hard to show that there are only finitely many $J$ satisfying $2^{-r}|I|\le|J|\le 2^r|I|$ such that $\langle T_W h^{W,i}_I, h^{V,j}_J\rangle_{L^2(V)} \ne 0$. Specifically, the number of such $J$ will always be bounded by a fixed constant times $2^r$. Similarly, if we fix $J$, there are only finitely many $I$ satisfying $2^{-r}|J|\le|I|\le 2^r|J|$ such that
$$\left\langle T_W h^{W,i}_I, h^{V,j}_J\right\rangle_{L^2(V)} = \left\langle h^{W,i}_I, T^*_V h^{V,j}_J\right\rangle_{L^2(W)} \ne 0.$$
The number of such $I$ will also be bounded by a fixed constant times $2^r$. Thus, we can use the testing conditions and Lemma 5.3 to estimate
$$(14) \le 2^rC(d)A_1^{1/2}\|f\|_{L^2(W)}\|g\|_{L^2(V)}.$$
The other terms are much simpler. First observe that for each $k, \ell$:
$$\begin{aligned}
\left|\left\langle T_W E^W_{I_k}f, E^V_{I_\ell}g\right\rangle_{L^2(V)}\right| &\le \left\|T_W E^W_{I_k}f\right\|_{L^2(V)}\left\|\langle V\rangle_{I_\ell}^{-1}\langle Vg\rangle_{I_\ell}\mathbf 1_{I_\ell}\right\|_{L^2(V)}\\
&\le A_1^{1/2}\left\|W(I_k)^{1/2}\langle W\rangle_{I_k}^{-1}\langle Wf\rangle_{I_k}\right\|_{\mathbb C^d}\left\|V(I_\ell)^{1/2}\langle V\rangle_{I_\ell}^{-1}\langle Vg\rangle_{I_\ell}\right\|_{\mathbb C^d}\\
&= A_1^{1/2}\,|I_k|^{1/2}\left\|\langle W\rangle_{I_k}^{-1/2}\langle Wf\rangle_{I_k}\right\|_{\mathbb C^d}\,|I_\ell|^{1/2}\left\|\langle V\rangle_{I_\ell}^{-1/2}\langle Vg\rangle_{I_\ell}\right\|_{\mathbb C^d}\\
&\le A_1^{1/2}C(d)\|f\|_{L^2(W)}\|g\|_{L^2(V)},
\end{aligned}$$
by Lemma 5.4. This immediately implies the desired bound for $\langle T_W f_2, g_2\rangle_{L^2(V)}$. The mixed terms are similarly straightforward. Specifically, observe that
$$\left|\left\langle T_W f_2, g_1\right\rangle_{L^2(V)}\right| \le \|g\|_{L^2(V)}\sum_{k=1}^2\left\|T_W E^W_{I_k}f\right\|_{L^2(V)} \le A_1^{1/2}C(d)\|f\|_{L^2(W)}\|g\|_{L^2(V)},$$
using the arguments that appeared in the previous bound. Similarly,
$$\left|\left\langle T_W f_1, g_2\right\rangle_{L^2(V)}\right| = \left|\left\langle f_1, T^*_V g_2\right\rangle_{L^2(W)}\right| \le \|f\|_{L^2(W)}\sum_{\ell=1}^2\left\|T^*_V E^V_{I_\ell}g\right\|_{L^2(W)} \le A_1^{1/2}C(d)\|f\|_{L^2(W)}\|g\|_{L^2(V)},$$
using Lemma 5.4 and the testing condition on $T^*_V$. This completes the proof. $\Box$
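The counting arguments above depend only on dyadic bookkeeping: ancestors, containment, and the exclusion zones of Definition 4.1. A minimal sketch of that bookkeeping, with illustrative names (the helper `must_vanish` encodes only the vanishing conditions for $|J| \le |I|$; none of this code is from the paper):

```python
from dataclasses import dataclass

# A dyadic interval I = [k 2^{-n}, (k+1) 2^{-n}) stored as (n, k);
# larger n means smaller interval.
@dataclass(frozen=True)
class Dyadic:
    n: int   # generation: |I| = 2^{-n}
    k: int   # position within that generation

    def ancestor(self, steps):
        return Dyadic(self.n - steps, self.k >> steps)

    def contains(self, other):
        # self contains other iff other's ancestor at self's generation is self
        return other.n >= self.n and other.ancestor(other.n - self.n) == self

def must_vanish(I, J, r):
    """True when Definition 4.1 forces <T_W 1_I e, h_J> = 0 (for |J| <= |I|)."""
    assert J.n >= I.n                        # only defined for |J| <= |I|
    if not I.ancestor(r + 1).contains(J):    # J outside I^{(r+1)}
        return True
    if J.n >= I.n + r and not I.contains(J): # |J| <= 2^{-r}|I| and J not in I
        return True
    return False

I = Dyadic(3, 5)                             # I = [5/8, 6/8)
assert must_vanish(I, Dyadic(3, 0), r=1)     # J = [0, 1/8): outside I^{(2)}
assert not must_vanish(I, Dyadic(4, 10), r=1)  # child of I: no condition imposed
```

For each fixed `I`, the `J` of comparable size with `must_vanish(I, J, r) == False` all sit inside `I.ancestor(r + 1)`, which is the source of the finite counts used to control (14).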
We now turn to the proof of Theorem 4.3.
Proof.
This theorem is established in basically the same manner as Theorem 4.2. We simply need to check that the weaker conditions (i) and (ii) in Theorem 4.3 allow us to deduce the same estimates. As before, we establish boundedness by duality as in (11), fix $f, g$ compactly supported in $I_1\cup I_2$ with $|I_1| = |I_2| = 2^m$, and decompose $f = f_1 + f_2$ and $g = g_1 + g_2$ as in (12) and (13). As before,
$$\left\langle T_W f_1, g_1\right\rangle_{L^2(V)} = \left\langle \Pi_W f_1, g_1\right\rangle_{L^2(V)} + \left\langle f_1, \Pi_V g_1\right\rangle_{L^2(W)} + \sum_{\substack{I:\,|I|\le 2^m\\ 1\le i\le d}}\sum_{\substack{J:\,|J|\le 2^m\\ 2^{-r}|I|\le|J|\le 2^r|I|\\ 1\le j\le d}}\left\langle f, h^{W,i}_I\right\rangle_{L^2(W)}\left\langle g, h^{V,j}_J\right\rangle_{L^2(V)}\left\langle T_W h^{W,i}_I, h^{V,j}_J\right\rangle_{L^2(V)}.$$
The first two terms can be controlled by testing hypothesis (i) and Lemma 5.2. For the sum, we can use Lemma 5.3 and testing hypothesis (ii) to conclude
$$\left|\left\langle T_W h^{W,i}_I, h^{V,j}_J\right\rangle_{L^2(V)}\right|^2 \le C(d)A_2.$$
Since $T_W$ is still well-localized with radius $r$, we can use the strategy from the proof of Theorem 4.2 to immediately conclude:
$$\left|\left\langle T_W f_1, g_1\right\rangle_{L^2(V)}\right| \le 2^rC(d)\left((A_1B(W))^{1/2} + (A_1B(V))^{1/2} + A_2^{1/2}\right)\|f\|_{L^2(W)}\|g\|_{L^2(V)}.$$
The other terms are also straightforward. First observe that since $|I_k| = |I_\ell|$, assumption (ii) paired with Lemma 5.4 implies that for each $k, \ell$:
$$\left|\left\langle T_W E^W_{I_k}f, E^V_{I_\ell}g\right\rangle_{L^2(V)}\right| \le A_2^{1/2}\left\|W(I_k)^{1/2}\langle W\rangle_{I_k}^{-1}\langle Wf\rangle_{I_k}\right\|_{\mathbb C^d}\left\|V(I_\ell)^{1/2}\langle V\rangle_{I_\ell}^{-1}\langle Vg\rangle_{I_\ell}\right\|_{\mathbb C^d} = A_2^{1/2}\,|I_k|^{1/2}\left\|\langle W\rangle_{I_k}^{-1/2}\langle Wf\rangle_{I_k}\right\|_{\mathbb C^d}\,|I_\ell|^{1/2}\left\|\langle V\rangle_{I_\ell}^{-1/2}\langle Vg\rangle_{I_\ell}\right\|_{\mathbb C^d} \le A_2^{1/2}C(d)\|f\|_{L^2(W)}\|g\|_{L^2(V)}. \qquad (15)$$
This immediately gives the desired bound for $\langle T_W f_2, g_2\rangle_{L^2(V)}$. The mixed terms require a bit more work. We consider $\langle T_W f_2, g_1\rangle_{L^2(V)}$; the other term can be handled analogously. Observe that
$$\begin{aligned}
\left|\left\langle T_W f_2, g_1\right\rangle_{L^2(V)}\right| &\le \sum_{k=1}^2\sum_{\substack{J:\,|J|\le 2^m\\ 1\le j\le d}}\left|\left\langle g, h^{V,j}_J\right\rangle_{L^2(V)}\right|\left|\left\langle T_W E^W_{I_k}f, h^{V,j}_J\right\rangle_{L^2(V)}\right|\\
&= \sum_{k=1}^2\sum_{\substack{J:\,J\subseteq I_k\\ 1\le j\le d}}\left|\left\langle g, h^{V,j}_J\right\rangle_{L^2(V)}\right|\left|\left\langle T_W E^W_{I_k}f, h^{V,j}_J\right\rangle_{L^2(V)}\right| \qquad (16)\\
&\quad + \sum_{k=1}^2\sum_{\substack{J:\,|J|\le 2^m,\ J\not\subseteq I_k\\ 1\le j\le d}}\left|\left\langle g, h^{V,j}_J\right\rangle_{L^2(V)}\right|\left|\left\langle T_W E^W_{I_k}f, h^{V,j}_J\right\rangle_{L^2(V)}\right|. \qquad (17)
\end{aligned}$$
We have to handle (16) and (17) separately. To handle (16), simply use Cauchy-Schwarz, Lemma 5.4, and assumption (i) to conclude
$$\sum_{k=1}^2\sum_{\substack{J:\,J\subseteq I_k\\ 1\le j\le d}}\left|\left\langle g, h^{V,j}_J\right\rangle_{L^2(V)}\right|\left|\left\langle T_W E^W_{I_k}f, h^{V,j}_J\right\rangle_{L^2(V)}\right| \le \sum_{k=1}^2\left\|\mathbf 1_{I_k}T_W E^W_{I_k}f\right\|_{L^2(V)}\left\|\mathbf 1_{I_k}g\right\|_{L^2(V)} \le A_1^{1/2}\|g\|_{L^2(V)}\sum_{k=1}^2 |I_k|^{1/2}\left\|\langle W\rangle_{I_k}^{-1/2}\langle Wf\rangle_{I_k}\right\|_{\mathbb C^d} \le A_1^{1/2}C(d)\|f\|_{L^2(W)}\|g\|_{L^2(V)}.$$
Now, consider (17). Since $T_W$ is well-localized with radius $r$, one can easily show that for each $I_k$, there are at most a fixed constant times $2^r$ intervals $J$ that satisfy $\langle T_W E^W_{I_k}f, h^{V,j}_J\rangle_{L^2(V)} \ne 0$, $|J|\le 2^m$, and $J\not\subseteq I_k$. Indeed, for the inner product to be nonzero, $J$ must satisfy $J\subseteq I_k^{(r+1)}$ and $|J| > 2^{-r}|I_k|$. Now, using assumption (ii), Lemma 5.4, and Lemma 2.4, we can establish the following sequence:
$$\begin{aligned}
(17) &= \sum_{k=1}^2\sum_{\substack{J:\,2^{-r}|I_k|<|J|\le|I_k|\\ J\subseteq I_k^{(r+1)},\ J\not\subseteq I_k\\ 1\le j\le d}}\left|\left\langle g, h^{V,j}_J\right\rangle_{L^2(V)}\right|\left|\left\langle T_W E^W_{I_k}f, h^{V,j}_J\right\rangle_{L^2(V)}\right|\\
&\le \|g\|_{L^2(V)}\sum_{k=1}^2\sum_{\substack{J:\,2^{-r}|I_k|<|J|\le|I_k|\\ J\subseteq I_k^{(r+1)},\ J\not\subseteq I_k\\ 1\le j\le d}}\left|\left\langle T_W E^W_{I_k}f, h^{V,j}_J\right\rangle_{L^2(V)}\right|\\
&\le A_2^{1/2}\|g\|_{L^2(V)}\sum_{k=1}^2\sum_{\substack{J:\,2^{-r}|I_k|<|J|\le|I_k|\\ J\subseteq I_k^{(r+1)},\ J\not\subseteq I_k\\ 1\le j\le d}} |I_k|^{1/2}\left\|\langle W\rangle_{I_k}^{-1/2}\langle Wf\rangle_{I_k}\right\|_{\mathbb C^d}\left(\left\|V(J_-)^{1/2}h^{V,j}_J(J_-)\right\|_{\mathbb C^d} + \left\|V(J_+)^{1/2}h^{V,j}_J(J_+)\right\|_{\mathbb C^d}\right)\\
&\le 2^rC(d)A_2^{1/2}\|g\|_{L^2(V)}\|f\|_{L^2(W)},
\end{aligned}$$
which completes the proof. $\Box$

Remark 5.5.
As mentioned earlier, our definition of well-localized is slightly different from the one appearing in [11], where Nazarov-Treil-Volberg only impose conditions on $T_W$ when $|J| < |I|$, rather than $|J|\le|I|$. The difference is likely attributable to a typographical error and their ideas are essentially correct.

However, to see why imposing conditions only when $|J| < |I|$ is not quite sufficient, let us consider the role of the well-localized property in the proofs of Theorems 4.2 and 4.3. It is used to show that for each fixed $I$, there is at most a finite number of $J$ with $2^{-r}|I|\le|J|\le 2^r|I|$ such that
$$\left\langle T_W h^{W,i}_I, h^{V,j}_J\right\rangle_{L^2(V)} \ne 0.$$
This allows one to control related sums given in (14). However, the definition of well-localized given by Nazarov-Treil-Volberg is not quite enough for this, as it does not handle the case where $|I| = |J|$. In this case, one would need control over terms such as
$$\left|\left\langle T_W h^{W,i}_I(I_+)\mathbf 1_{I_+}, h^{V,j}_J\right\rangle_{L^2(V)}\right| \quad\text{or}\quad \left|\left\langle h^{W,i}_I, T^*_V h^{V,j}_J(J_+)\mathbf 1_{J_+}\right\rangle_{L^2(W)}\right|,$$
which are not addressed in their definition of well-localized since $|I_+| < |J|$ and $|J_+| < |I|$. This case is no longer a problem if we impose conditions on all $I, J$ with $|J|\le|I|$ as in Definition 4.1.

For an example of what can go wrong, fix $K_0\in\mathcal D$. Fix a sequence $\{c_K\}$ in $\ell^2(\mathcal D)$ with no zero terms, and define the operator $T: L^2(\mathbb R)\to L^2(\mathbb R)$ by
$$Th_{K_0} \equiv \sum_{K:\,|K|=|K_0|} c_K h_K \quad\text{and}\quad Th_L \equiv 0 \ \text{ for }\ L\ne K_0.$$
It is not difficult to show $T$ is well-localized (with any radius $r$) from $L^2(\mathbb R)$ to $L^2(\mathbb R)$ according to the definition in [11]. Indeed, if $|J| < |I|$, then
$$\left\langle T\mathbf 1_I, h_J\right\rangle_{L^2} = 0 = \left\langle T^*\mathbf 1_I, h_J\right\rangle_{L^2}.$$
To see these equalities, first write
$$\mathbf 1_I = \sum_{K:\,I\subsetneq K}\left\langle \mathbf 1_I, h_K\right\rangle_{L^2}\, h_K.$$
Thus, if $I$ is not strictly contained in $K_0$, then $T\mathbf 1_I = 0$. So, we can assume $I\subsetneq K_0$. Then $|J| < |I| < |K_0|$, so
$$\left\langle T\mathbf 1_I, h_J\right\rangle_{L^2} = \sum_{K:\,|K|=|K_0|}\left\langle \mathbf 1_I, h_{K_0}\right\rangle_{L^2}\, c_K\left\langle h_K, h_J\right\rangle_{L^2} = 0.$$
Now consider $T^*$. If $|J| < |I|$ and $J\ne K_0$, then
$$\left\langle T^*\mathbf 1_I, h_J\right\rangle_{L^2} = \left\langle \mathbf 1_I, Th_J\right\rangle_{L^2} = \left\langle \mathbf 1_I, 0\right\rangle_{L^2} = 0$$
immediately. If $J = K_0$, then
$$\left\langle T^*\mathbf 1_I, h_J\right\rangle_{L^2} = \sum_{K:\,|K|=|K_0|} \overline{c_K}\left\langle \mathbf 1_I, h_K\right\rangle_{L^2} = 0,$$
since $|K| = |J| < |I|$ implies $K\subseteq I$ or $K\cap I = \emptyset$, and in either case $\langle\mathbf 1_I, h_K\rangle_{L^2} = 0$. However, for this operator $T$,
$$\left\langle Th_{K_0}, h_J\right\rangle_{L^2} = c_J \ne 0$$
for all $J$ with $|J| = |K_0|$. Since there is an infinite number of such $J$, this means we could not use the well-localized property to control the sums from (14) for this operator.

Remark 5.6.
In this paper, we only considered band operators defined on $L^2(\mathbb R, \mathbb C^d)$. However, we anticipate that these T1 theorems will generalize without substantial difficulty to band operators on $L^2(\mathbb R^n, \mathbb C^d)$. One must define a slightly more complicated Haar system, but in general, the tools and proof strategy seem to work without issue.
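Returning to Remark 5.5, the counterexample operator $T$ there can be checked numerically. The sketch below truncates to a finite grid and to four intervals $K$ at the generation of $K_0$; the grid size, the coefficients, and all helper names are illustrative assumptions, not the paper's:

```python
import numpy as np

# Numerical check of the counterexample in Remark 5.5 (scalar, unweighted):
# T sends the single Haar function h_{K0} to sum_K c_K h_K over intervals K
# of the same generation, and kills every other Haar function.
n = 6
N = 2 ** n                              # grid: 2^n cells of [0, 1)
dx = 1.0 / N

def haar(level, k):
    """L^2-normalized Haar function of I = [k 2^-level, (k+1) 2^-level)."""
    h = np.zeros(N)
    width = N >> level
    lo = k * width
    h[lo:lo + width // 2] = 1.0
    h[lo + width // 2:lo + width] = -1.0
    return h * (2.0 ** (level / 2))     # so that ||h_I||_{L^2} = 1

def ip(u, v):                           # L^2(0,1) inner product on the grid
    return np.sum(u * v) * dx

K0 = (2, 1)                             # the distinguished interval K0 = [1/4, 1/2)
c = {k: 1.0 / (k + 1) for k in range(4)}    # nonzero coefficients c_K

def T(x):
    coeff = ip(x, haar(*K0))
    return coeff * sum(c[k] * haar(2, k) for k in range(4))

# T(1_I) is nonzero for I strictly inside K0, yet it pairs to zero with
# every strictly smaller Haar function, as in the NTV-style verification:
one_I = np.zeros(N); one_I[16:24] = 1.0     # I = [1/4, 3/8), strictly inside K0
assert np.linalg.norm(T(one_I)) > 0.1
assert abs(ip(T(one_I), haar(4, 4))) < 1e-12    # J = [1/4, 5/16), |J| < |I|
# ... but T h_{K0} pairs nontrivially with EVERY h_J of the same generation:
for k in range(4):
    assert abs(ip(T(haar(*K0)), haar(2, k)) - c[k]) < 1e-12
```

This exhibits exactly the failure described in the remark: the equal-scale pairings $\langle Th_{K_0}, h_J\rangle = c_J$ never vanish, so no finite-count argument of the type used for (14) is available.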
References

[1] K. Bickel, S. Petermichl, and B.D. Wick. Bounds for the Hilbert transform with matrix $A_2$ weights. Preprint, available at http://arxiv.org/abs/1402.3886.
[2] M. Christ and M. Goldberg. Vector $A_2$ weights and a Hardy-Littlewood maximal function. Trans. Amer. Math. Soc. (2001), no. 5, 1995-2002.
[3] M. Goldberg. Matrix $A_p$ weights via maximal functions. Pacific J. Math. (2003), no. 2, 201-220.
[4] T.P. Hytönen. The sharp weighted bound for general Calderón-Zygmund operators. Ann. of Math. (2) (2012), no. 3, 1473-1506.
[5] J. Isralowitz, H. Kwon, and S. Pott. A matrix weighted T1 theorem for matrix kerneled Calderón-Zygmund operators I. Preprint, available at http://arxiv.org/abs/1401.6570.
[6] R. Kerr. Toeplitz products and two-weight inequalities on spaces of vector-valued functions. Thesis (Ph.D.), University of Glasgow, 2011.
[7] R. Kerr. Martingale transforms, the dyadic shift and the Hilbert transform: a sufficient condition for boundedness between matrix weighted spaces. Preprint, available at http://arxiv.org/abs/0906.4028.
[8] M. Lacey, S. Petermichl, and M.C. Reguera. Sharp $A_2$ inequality for Haar shift operators. Math. Ann. (2010), no. 1, 127-141.
[9] M. Lauzon and S. Treil. Scalar and vector Muckenhoupt weights. Indiana Univ. Math. J. (2007), no. 4, 1989-2015.
[10] F. Nazarov, G. Pisier, S. Treil, and A. Volberg. Sharp estimates in vector Carleson imbedding theorem and for vector paraproducts. J. Reine Angew. Math. (2002), 147-171.
[11] F. Nazarov, S. Treil, and A. Volberg. Two weight inequalities for individual Haar multipliers and other well-localized operators. Math. Res. Lett. (2008), no. 3, 583-597.
[12] F. Nazarov and S. Treil. The hunt for a Bellman function: applications to estimates for singular integral operators and to other classical problems of harmonic analysis (Russian). Algebra i Analiz (1996), no. 5, 32-162; translation in St. Petersburg Math. J. (1997), no. 5, 721-824.
[13] N.K. Nikolskiĭ. Treatise on the Shift Operator. Translated from the Russian by Jaak Peetre. Grundlehren der Mathematischen Wissenschaften (Fundamental Principles of Mathematical Sciences). Springer-Verlag, Berlin, 1986.
[14] S. Treil and A. Volberg. Wavelets and the angle between past and future. J. Funct. Anal. (1997), no. 2, 269-308.
[15] A. Volberg. Matrix $A_p$ weights via S-functions. J. Amer. Math. Soc. (1997), no. 2, 445-466.

Kelly Bickel, Department of Mathematics, Bucknell University, 701 Moore Ave, Lewisburg, PA 17837
E-mail address: [email protected]

Brett D. Wick, School of Mathematics, Georgia Institute of Technology, 686 Cherry Street, Atlanta, GA USA 30332-0160
E-mail address: [email protected]