Random Sampling in Reproducing Kernel Subspace of Mixed Lebesgue Spaces
Prashant Goyal Dhiraj Patel Sivananthan Sampath
Abstract
In this article, we consider random sampling in a reproducing kernel space V, defined as the range of an idempotent integral operator on mixed Lebesgue spaces. First, we approximate each element of V by an element of a finite-dimensional subspace of V. Similar to the classical case, we show that if the sample sizes in both variables are large enough, then the sampling inequality holds with high probability.
1 Introduction

Sampling theory is one of the important parts of signal processing. Recently, random sampling has been used widely in compressed sensing [8, 11] and image processing [9]. Random sampling has been studied extensively in vast domains such as multivariate trigonometric polynomials [2], bandlimited functions [4, 5], and signals in shift-invariant subspaces or reproducing kernel subspaces of the Lebesgue spaces $L^p(\mathbb{R}^d)$ [14, 24, 28, 30, 16, 18, 22, 27].

In recent times, function spaces with mixed norms have been studied extensively for their usefulness in many fields, such as partial differential equations, random sampling and wavelet analysis. Benedek and Panzone in [7] introduced the spaces $L^{\vec p}(\mathbb{R}^n)$, which generalize the $L^p$ spaces by replacing the constant exponent $p$ with an exponent vector $\vec p$. Galmarino and Panzone in [15] extended the mixed Lebesgue spaces to $L^p(\mathbb{R}^n)$ with $p$ a sequence. A lot of work has been done since then; see, for instance, [3, 1, 12]. Torres and Ward studied the Plancherel-Polya inequality and the wavelet characterization of $L^{\vec p}$ spaces [26]. Motivated by mixed Lebesgue spaces, numerous other function spaces with mixed norms, such as Besov spaces, Sobolev spaces and Bessel potential spaces, were introduced by Besov, Il'in and Nikol'skii in [6]. A Schur test for mixed-norm spaces is proved in [25]. The interested reader can look into [17] for more particulars related to mixed Lebesgue spaces.

In recent years, random sampling has drawn attention in the mixed Lebesgue spaces $L^{p,q}(\mathbb{R}^{d+1})$. The motivation for considering random sampling in $L^{p,q}(\mathbb{R}^{d+1})$ comes from the fact that some signals are time-varying in reality, that is, the signal lives in the time and space domains at the same time. Rubio de Francia, Ruiz and Torrea in [13] studied the Calderon-Zygmund theory for operator-valued kernels. The nonuniform sampling and reconstruction problem in shift-invariant subspaces of mixed Lebesgue spaces was considered by Li, Liu, Liu and Zhang [23].
Jiang and Sun in [20] defined reproducing kernel subspaces of mixed Lebesgue spaces and studied their frame properties to show that the reproducing kernel subspace has a finite rate of innovation. They also introduced a semi-adaptive sampling scheme for time-space signals in a reproducing kernel subspace. Kumar, Patel and Sampath [21] considered the sampling and average sampling problems in a reproducing kernel subspace of a mixed Lebesgue space. In [19], Jiang and Li considered the problem of random sampling in multiply generated shift-invariant subspaces of mixed Lebesgue spaces.

In this article, we consider random sampling in a reproducing kernel space. Indeed, we prove the following result.

Theorem 1.1.
Let $1 \le p, q < \infty$. Suppose that the kernel satisfies the regularity and decay conditions stated in Section 2, and that $\{(x_j, y_k) : j, k \in \mathbb{N}\}$ is a sequence of independent random variables uniformly distributed over the cube $C_{K,L}$. Then, for any $0 < \mu < 1$, the sampling inequality
$$ a(1-\mu)\,\|f\|_{L^{p,q}(\mathbb{R}^{d+1})} \le \big\|\{f(x_j,y_k)\}_{j=1,2,\dots,n;\,k=1,2,\dots,m}\big\|_{\ell^{p,q}} \le b\,\|f\|_{L^{p,q}(\mathbb{R}^{d+1})} $$
holds uniformly for all $f \in V(K,L,\delta)$ with probability at least
$$ 1 - a_0\exp\left( - \frac{b_0\,\mu^{2}\,mn\,(1-\delta)^{2pq} D^{2(1-pq)} K^{\frac{d}{p}-2dp} L^{\frac1q-2q}}{1 + \mu\,(1-\delta)^{pq} D^{1-pq} K^{\frac{d}{p}-dp} L^{\frac1q-q}} \right), $$
where $a_0, b_0$ are positive constants depending on $d$, $K$, $L$ and $\delta$ (see Lemma 4.4). Here, the positive constants $a$, $b$ are
$$ a = m^{\frac1q} n^{\frac1p}\,(1-\delta)^{pq} D^{1-pq} L^{-q} K^{-dp}, \qquad b = mn\,K^{-\frac{d}{p}} L^{-\frac1q}\Big( 1 + \mu\,(1-\delta)^{pq} D^{1-pq} K^{\frac{d}{p}-dp} L^{\frac1q-q} \Big). $$

We apply a technique similar to the classical case; see [24]. It is very interesting to compare the two outcomes in terms of the order of the sampling inequality: in the classical case, the order of the sampling inequality is $O(R^{n})$, whereas in our case nothing definite can be said about the order. This happens because of the assumption on the random variables; see Section 3 below for details.

This article is divided into four sections. Section 2 is dedicated to preliminaries, which include the basic definitions and results required for our article. In Section 3, we establish the finite-dimensional approximation of the reproducing kernel space. In the last section, we give the details of random sampling and prove Theorem 1.1.

2 Preliminaries

In this section, we give some definitions, notation and a few results needed for this article.
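The two mixed norms defined next (Definitions 2.1 and 2.2) are the basic objects used throughout. As a concrete reference point, here is a small numerical sketch of both, under our own assumptions ($d = 1$, a hypothetical test function, and a uniform grid; none of this is part of the paper's results):

```python
import numpy as np

def mixed_norm(F, dx, dy, p, q):
    """Discrete analogue of the mixed norm of Definition 2.1,
    ( integral over y of ( integral over x of |f(x,y)|^p dx )^(q/p) dy )^(1/q).

    F[i, j] = f(x_j, y_i): rows index y, columns index x (so d = 1 here).
    """
    inner = (np.abs(F) ** p).sum(axis=1) * dx      # inner integral for each y
    return float(((inner ** (q / p)).sum() * dy) ** (1.0 / q))

def seq_mixed_norm(c, p, q):
    """Discrete mixed norm of Definition 2.2 for a finite array c[k, j]:
    ( sum over k of ( sum over j of |c(j,k)|^p )^(q/p) )^(1/q)."""
    inner = (np.abs(c) ** p).sum(axis=1)
    return float((inner ** (q / p)).sum() ** (1.0 / q))

# Hypothetical test function f(x, y) = exp(-x^2 - |y|) on the grid [-5, 5]^2.
x = np.linspace(-5, 5, 2001)
y = np.linspace(-5, 5, 2001)
dx, dy = x[1] - x[0], y[1] - y[0]
F = np.exp(-x[None, :] ** 2 - np.abs(y[:, None]))

# Consistency check: for p = q = 2 the mixed norm is the plain L^2 norm.
assert abs(mixed_norm(F, dx, dy, 2, 2) - np.sqrt((F ** 2).sum() * dx * dy)) < 1e-9

# l^{1,1} of a sample array is just the absolute sum.
c = np.array([[1.0, -2.0], [0.5, 0.0]])
assert seq_mixed_norm(c, 1, 1) == 3.5
```

For $p = q$ both norms collapse to the usual $L^p$ and $\ell^p$ norms, which the assertions above verify in the simplest cases.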
Definition 2.1.
For $1 \le p, q < \infty$, we define the mixed Lebesgue space as the collection of all measurable functions $f = f(x,y)$ defined on $\mathbb{R}^d \times \mathbb{R}$ such that
$$ \|f\|_{L^{p,q}}^{q} = \int_{\mathbb{R}} \left( \int_{\mathbb{R}^d} |f(x,y)|^{p}\,dx \right)^{\frac{q}{p}} dy < \infty. $$
We denote it by $L^{p,q}(\mathbb{R}^{d+1}) = L^{p,q}$, and $\|f\|_{L^{\infty,\infty}}$ is the essential supremum of $f$ on $\mathbb{R}^d \times \mathbb{R}$.

Definition 2.2.
For $1 \le p, q < \infty$, we define the mixed Lebesgue space for sequences as the collection of all functions $c(j,k)$ defined on $\mathbb{Z}^d \times \mathbb{Z}$ such that
$$ \|c\|_{\ell^{p,q}}^{q} = \sum_{k \in \mathbb{Z}} \left( \sum_{j \in \mathbb{Z}^d} |c(j,k)|^{p} \right)^{\frac{q}{p}} < \infty, $$
and we denote it by $\ell^{p,q}(\mathbb{Z}^d \times \mathbb{Z}) = \ell^{p,q}$; $\|c\|_{\ell^{\infty,\infty}}$ is the supremum of $c$ over $\mathbb{Z}^d \times \mathbb{Z}$.

We define the idempotent integral operator $T$ from $L^{p,q}$ to $L^{p,q}$, $1 \le p, q < \infty$, as
$$ Tf(x,y) = \int_{\mathbb{R}} \int_{\mathbb{R}^d} K(x,y;t,s)\, f(t,s)\,dt\,ds, $$
where the symmetric kernel $K$ is defined on $(\mathbb{R}^d \times \mathbb{R}) \times (\mathbb{R}^d \times \mathbb{R})$ and satisfies the regularity conditions
$$ \|K\|_{W} = \Big\| \|K(x,y;t,s)\|_{W_{x,t}} \Big\|_{W_{y,s}} < \infty \quad \text{and} \quad \lim_{\delta \to 0} \|\omega_{\delta}(K)\|_{W} = 0, $$
together with the decay condition
$$ |K(x,y;t,s)| \le \frac{c}{(1+\|x-t\|)^{\alpha}\,(1+\|y-s\|)^{\beta}}, \tag{2.1} $$
where $\alpha > \frac{d}{p'} + d + 1$, $\beta > \frac{1}{q'} + 2$, $\frac1p + \frac1{p'} = 1$, $\frac1q + \frac1{q'} = 1$, and $\|x\| = \sum_{i=1}^{d} |x(i)|$ for $x = (x(1), x(2), \dots, x(d)) \in \mathbb{R}^d$. Moreover,
$$ \omega_{\delta}(K)(x,y;t,s) = \sup_{|(x',y')| < \delta,\ |(t',s')| < \delta} \big| K(x+x', y+y'; t+t', s+s') - K(x,y;t,s) \big| $$
and
$$ \|K\|_{W} = \max\Big\{ \sup_{x \in \mathbb{R}^d} \|K(x,\cdot)\|_{L^{1}(\mathbb{R}^d)},\ \sup_{y \in \mathbb{R}^d} \|K(\cdot,y)\|_{L^{1}(\mathbb{R}^d)} \Big\}. $$
We define the range space $V$ of the operator $T$ as
$$ V = \big\{ Tf : f \in L^{p,q}(\mathbb{R}^{d+1}) \big\} = \big\{ f \in L^{p,q}(\mathbb{R}^{d+1}) : Tf = f \big\}. $$
It can easily be verified that $V$ is a closed reproducing kernel subspace of $L^{p,q}(\mathbb{R}^{d+1})$.

Let $C_{K,L} = [-\frac{K}{2}, \frac{K}{2}]^d \times [-\frac{L}{2}, \frac{L}{2}]$ be a subset of $\mathbb{R}^{d+1}$. It is enough to show the sampling inequality for the set
$$ V(K,L,\delta) = \Bigg\{ f \in V : (1-\delta)\|f\|_{L^{p,q}(\mathbb{R}^{d+1})} \le \Bigg( \int_{-\frac{L}{2}}^{\frac{L}{2}} \Bigg( \int_{[-\frac{K}{2},\frac{K}{2}]^d} |f(x,y)|^{p}\,dx \Bigg)^{\frac{q}{p}} dy \Bigg)^{\frac{1}{q}} \Bigg\} $$
for some $0 < \delta < 1$.

Definition 2.3. A set
$\Gamma = \{\gamma_{j,k} = (x_j, y_k) : x_j \in \mathbb{R}^d,\, y_k \in \mathbb{R},\, j, k \in \mathbb{Z}\} \subset \mathbb{R}^{d+1}$ is said to be relatively separated in both variables if
$$ A_{\Gamma}(\delta') = \sup_{x \in \mathbb{R}^d} \sum_{j \in \mathbb{Z}} \chi_{B(x_j,\delta')}(x) < \infty, \qquad B_{\Gamma}(\delta') = \sup_{y \in \mathbb{R}} \sum_{k \in \mathbb{Z}} \chi_{B(y_k,\delta')}(y) < \infty $$
for some $\delta' > 0$. Also, $\delta'$ is a gap for $\Gamma$ if
$$ C_{\Gamma}(\delta') = \inf_{x \in \mathbb{R}^d} \sum_{j \in \mathbb{Z}} \chi_{B(x_j,\delta')}(x) > 0, \qquad D_{\Gamma}(\delta') = \inf_{y \in \mathbb{R}} \sum_{k \in \mathbb{Z}} \chi_{B(y_k,\delta')}(y) > 0, $$
where $B(x_j,\delta')$, $B(y_k,\delta')$ are the balls with centres $x_j$, $y_k$ and radius $\delta'$.

As the kernel satisfies the decay and regularity conditions, by [29] there exists a relatively separated set $\Gamma = \{\gamma_{j,k} = (x_j, y_k) : x_j \in \mathbb{R}^d,\, y_k \in \mathbb{R},\, j, k \in \mathbb{Z}\} \subset \mathbb{R}^{d+1}$ with positive gap $\delta_0$ and two families $\Phi = \{\phi_\gamma\}_{\gamma \in \Gamma} \subseteq L^{p,q}(\mathbb{R}^{d+1})$ and $\tilde\Phi = \{\tilde\phi_\gamma\}_{\gamma \in \Gamma} \subseteq L^{p',q'}(\mathbb{R}^{d+1})$ such that every $f \in V$ can be expressed as
$$ f(x,y) = \sum_{\gamma \in \Gamma} \langle f, \tilde\phi_\gamma \rangle\, \phi_\gamma(x,y). \tag{2.2} $$

Definition 2.4 ([29]). Let $V$ be a Banach subspace of $L^{p,q}(\mathbb{R}^{d+1})$. A family $\tilde\Phi = \{\tilde\phi_\gamma\}_{\gamma \in \Gamma} \subseteq L^{p',q'}(\mathbb{R}^{d+1})$ is a $(p,q)$-frame for $V$ if there exist positive constants $A$ and $B$ such that
$$ A\|f\|_{L^{p,q}} \le \big\| \{\langle f, \tilde\phi_\gamma \rangle\}_{\gamma \in \Gamma} \big\|_{\ell^{p,q}} \le B\|f\|_{L^{p,q}}, \qquad \forall\, f \in V. \tag{2.3} $$
We define the subspace $V_N$ of $V$ for a positive real number $N$ as
$$ V_N = \Bigg\{ \sum_{\gamma \in \Gamma \cap [-N,N]^{d+1}} c_\gamma\, \phi_\gamma : c_\gamma \in \mathbb{R} \Bigg\}. $$

Proposition 2.5 ([7]). Let $1 \le p, q \le \infty$, $\frac1p + \frac1{p'} = 1$, $\frac1q + \frac1{q'} = 1$. Then
$$ \|fg\|_{L^{1,1}} \le \|f\|_{L^{p,q}}\, \|g\|_{L^{p',q'}}. $$

3 Finite-dimensional approximation

In this section we approximate functions of $V$ by elements of $V_N$. We prove the following lemma.

Lemma 3.1.
For a given $\epsilon > 0$ and $f \in V$, we have
$$ \Bigg\| f - \sum_{\gamma \in \Gamma \cap [-N,N]^{d+1}} \langle f, \tilde\phi_\gamma \rangle\, \phi_\gamma \Bigg\|_{L^{p,q}(C_{K,L})} < \epsilon $$
whenever
$$ N > \frac{K+L}{2} + d + 1 + \left( \frac{2^{\frac{d}{p'}+\frac{1}{q'}}\, B\,\|f\|_{L^{p,q}(\mathbb{R}^{d+1})}\, C\, K^{\frac{d}{p}}\, \omega_\alpha^{\frac{1}{p'}}}{\epsilon} \right)^{\frac{p'}{\alpha p' - d}} + \left( \frac{2^{\frac{d}{p'}+\frac{1}{q'}}\, B\,\|f\|_{L^{p,q}(\mathbb{R}^{d+1})}\, C\, L^{\frac{1}{q}}\, \omega_\beta^{\frac{1}{q'}}}{\epsilon} \right)^{\frac{q'}{\beta q' - 1}}. $$
Proof. The proof of this lemma is similar to that of [24, Lemma 2.1]. Given $f \in V$, we consider $f_N \in V_N$ defined by
$$ f_N(x,y) = \sum_{\gamma \in \Gamma \cap [-N,N]^{d+1}} \langle f, \tilde\phi_\gamma \rangle\, \phi_\gamma(x,y). \tag{3.1} $$
Then by (2.2), (3.1), Proposition 2.5 and (2.3), we have
$$ |f(x,y) - f_N(x,y)| = \Bigg| \sum_{\gamma \in \Gamma \setminus [-N,N]^{d+1}} \langle f, \tilde\phi_\gamma \rangle\, \phi_\gamma(x,y) \Bigg| \le B\,\|f\|_{L^{p,q}(\mathbb{R}^{d+1})} \Bigg( \sum_{k} \Big( \sum_{j} |\phi_{\gamma_{j,k}}(x,y)|^{p'} \Big)^{\frac{q'}{p'}} \Bigg)^{\frac{1}{q'}}, $$
where the sums run over indices with $\gamma_{j,k} \in \Gamma \setminus [-N,N]^{d+1}$. By the construction in [29], each $\phi_\gamma$ is an average of the kernel over a cube of side $\delta_0$ centred at $\gamma = (\lambda_1, \lambda_2)$:
$$ \phi_\gamma(x,y) = \delta_0^{-1}\,\delta_0^{-d} \int_{-\frac{\delta_0}{2}}^{\frac{\delta_0}{2}} \int_{[-\frac{\delta_0}{2},\frac{\delta_0}{2}]^d} K(\lambda_1 + z_1,\, \lambda_2 + z_2;\, x,\, y)\,dz_1\,dz_2. $$
Therefore, from (2.1),
$$ |\phi_\gamma(x,y)| \le \frac{c}{(1 + \|\lambda_2 - y\| - \delta_0)^{\beta}\,(1 + \|\lambda_1 - x\| - d\delta_0)^{\alpha}}. $$
Comparing the sum over $\lambda_1$ with an integral over the complement of a cube (each $\lambda_1$ lies in a cube $B_{\delta_0}(\lambda_1)$ of side $\delta_0$, and these cubes overlap at most $A_\Gamma(\delta')$ times), we get, for $(x,y) \in C_{K,L}$,
$$ \sum_{\lambda_1 \notin [-N,N]^d} |\phi_\gamma(x,y)|^{p'} \le \frac{2^{d}\,c^{p'}\,A_\Gamma(\delta')\,\omega_\alpha}{(1 + \|\lambda_2 - y\| - \delta_0)^{p'\beta}\,\big(N - \delta_0 - \tfrac{K}{2} - d\big)^{\alpha p' - d}}, $$
where $\omega_\alpha = (\alpha p' - 1)(\alpha p' - 2)\cdots(\alpha p' - d)$. The sum over $\lambda_2$ is handled in the same way, producing the factor $B_\Gamma(\delta')\,\omega_\beta\,\big(N - \delta_0 - \tfrac{L}{2}\big)^{-(\beta q' - 1)}$ with $\omega_\beta = \beta q' - 1$. Writing $C^{q'} = c^{q'}\,A_\Gamma(\delta')^{\frac{q'}{p'}}\,B_\Gamma(\delta')$, we obtain
$$ |f(x,y) - f_N(x,y)| \le B\,\|f\|_{L^{p,q}(\mathbb{R}^{d+1})}\, 2^{\frac{d}{p'}+\frac{1}{q'}}\, C\, \frac{\omega_\alpha^{\frac{1}{p'}}}{\big(N - \tfrac{K}{2} - d - 1\big)^{\frac{\alpha p' - d}{p'}}}\cdot \frac{\omega_\beta^{\frac{1}{q'}}}{\big(N - \tfrac{L}{2} - 1\big)^{\frac{\beta q' - 1}{q'}}}. \tag{3.2} $$
Therefore,
$$ \|f - f_N\|^{q}_{L^{p,q}(C_{K,L})} \le K^{\frac{dq}{p}}\, L\, B^{q}\,\|f\|^{q}_{L^{p,q}(\mathbb{R}^{d+1})}\, 2^{q(\frac{d}{p'}+\frac{1}{q'})}\, C^{q}\, \frac{\omega_\alpha^{\frac{q}{p'}}}{\big(N - \tfrac{K}{2} - d - 1\big)^{\frac{q(\alpha p' - d)}{p'}}}\cdot \frac{\omega_\beta^{\frac{q}{q'}}}{\big(N - \tfrac{L}{2} - 1\big)^{\frac{q(\beta q' - 1)}{q'}}} < \epsilon^{q} $$
whenever
$$ N > \frac{K+L}{2} + d + 1 + \left( \frac{2^{\frac{d}{p'}+\frac{1}{q'}}\, B\,\|f\|_{L^{p,q}(\mathbb{R}^{d+1})}\, C\, K^{\frac{d}{p}}\, \omega_\alpha^{\frac{1}{p'}}}{\epsilon} \right)^{\frac{p'}{\alpha p' - d}} + \left( \frac{2^{\frac{d}{p'}+\frac{1}{q'}}\, B\,\|f\|_{L^{p,q}(\mathbb{R}^{d+1})}\, C\, L^{\frac{1}{q}}\, \omega_\beta^{\frac{1}{q'}}}{\epsilon} \right)^{\frac{q'}{\beta q' - 1}}. \qquad \square $$

We recall the following result from [10].
Proposition 3.2.
Let $Y$ be a Banach space of dimension $d$ and let $B(0,R)$ denote the closed ball of radius $R$ centred at the origin. Then the minimum number of open balls of radius $r$ needed to cover the ball $B(0,R)$ is bounded by $\big(\frac{2R}{r} + 1\big)^{d}$.

We can show the following results for the mixed Lebesgue space similarly to the Lebesgue space; see [24].

Lemma 3.3. If $f \in V(K,L,\delta)$, then $\|f\|_{L^{\infty,\infty}(C_{K,L})} \le D\,\|f\|_{L^{p,q}(C_{K,L})}$, where
$$ D = \frac{\sup_{(x,y) \in C_{K,L}} \|K(x,y;\cdot,\cdot)\|_{L^{p',q'}(\mathbb{R}^{d+1})}}{1-\delta}. $$

Lemma 3.4.
The set $V(K,L,\delta)$ is totally bounded with respect to $\|\cdot\|_{L^{\infty,\infty}(C_{K,L})}$.

Remark 3.5. If
$$ N = \frac{K+L}{2} + 2 + \Big[ 2^{\frac{d}{p'}+\frac{1}{q'}}\, B\,\|f\|_{L^{p,q}(\mathbb{R}^{d+1})}\, C \big( \omega_\alpha^{\frac{1}{p'}} + \omega_\beta^{\frac{1}{q'}} \big) \Big]^{\frac{1}{d+1}}\, \epsilon^{-\frac{1}{d+1}}, $$
then the dimension of $V_N$ is bounded by
$$ \dim V_N \le (2N)^{d+1}\, N(\Gamma) \le 2^{d+1} N(\Gamma)\Big[ (K+L+2)^{d+1} + \frac{C_1}{\epsilon} \Big] =: c_\epsilon, $$
where $N(\Gamma) = \sup_{k \in \mathbb{Z}^{d+1}} \#\big( (k + [-\frac12,\frac12]^{d+1}) \cap \Gamma \big)$ and
$$ C_1 = 2^{d+1}\Big[ 2^{\frac{d}{p'}+\frac{1}{q'}}\, B\,\|f\|_{L^{p,q}(\mathbb{R}^{d+1})}\, C \big( K^{\frac{d}{p}}\omega_\alpha^{\frac{1}{p'}} + L^{\frac{1}{q}}\omega_\beta^{\frac{1}{q'}} \big) \Big]. $$
Note that this choice of $N$ is made just to simplify the computations. A different value can also be considered, according to the bound on $N$; it will not affect the main result.
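Proposition 3.2, which drives the covering-number estimate in Remark 3.6 below, can be sanity-checked numerically in the simplest case. The following is a sketch of our own, in dimension one with the absolute-value norm:

```python
import math

# In R^1, cover the ball B(0, R) = [-R, R] by intervals of radius r, with
# centres placed at -R + r, -R + 3r, -R + 5r, ...: each interval has length
# 2r, so ceil(R / r) intervals suffice.  Proposition 3.2 predicts a bound of
# (2R/r + 1)^d with d = 1, which is never smaller.
def intervals_needed(R, r):
    return math.ceil(R / r)

for R, r in [(1.0, 0.1), (5.0, 0.3), (2.0, 1.0), (3.0, 0.5)]:
    assert intervals_needed(R, r) <= 2 * R / r + 1
```

The gap between $\lceil R/r \rceil$ and $(2R/r + 1)^d$ widens quickly with the dimension $d$, which is why the covering number, and hence $N(\epsilon)$ in Remark 3.6, grows exponentially in $\dim V_N$.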
Remark 3.6. If $N(\epsilon)$ denotes the number of elements of $A(\epsilon)$, that is, the minimum number of open balls of radius $\epsilon$ needed to cover $B(0, D+\epsilon)$ with respect to $\|\cdot\|_{L^{\infty,\infty}(C_{K,L})}$, then from Proposition 3.2 we have
$$ N(\epsilon) \le \exp\Big( c_\epsilon \log\frac{8D}{\epsilon} \Big). $$

4 Random sampling

Let $\{(x_j, y_k) : j, k \in \mathbb{N}\}$ be an infinite sequence of independent and identically distributed random variables, uniformly distributed over the cube $C_{K,L}$. For every $f \in V$, we define the random variable
$$ Z_{j,k}(f) = |f(x_j, y_k)| - \frac{1}{K^d L}\int\!\!\int_{C_{K,L}} |f(x,y)|\,dx\,dy, $$
whose expectation is $\mathbb{E}(Z_{j,k}(f)) = 0$; see [19] for details. We recall the following result from [19].

Lemma 4.1. For $1 \le p, q < \infty$, we have
$$ \big\|\{f(x_j,y_k)\}_{j=1,2,\dots,n;\,k=1,2,\dots,m}\big\|_{\ell^{p,q}} \le \sum_{k=1}^{m}\sum_{j=1}^{n}|f(x_j,y_k)| \le m^{1-\frac1q}\, n^{1-\frac1p}\, \big\|\{f(x_j,y_k)\}_{j=1,2,\dots,n;\,k=1,2,\dots,m}\big\|_{\ell^{p,q}}, \tag{4.1} $$
$$ \|f\|_{L^{1,1}(C_{K,L})} \le K^{d(1-\frac1p)}\, L^{1-\frac1q}\, \|f\|_{L^{p,q}(C_{K,L})}, \tag{4.2} $$
$$ \|f\|^{pq}_{L^{p,q}(C_{K,L})} \le K^{pd(1-\frac1p)}\, L^{q(1-\frac1q)}\, D^{pq-1}\, \|f\|_{L^{1,1}(C_{K,L})}. \tag{4.3} $$

We prove the following lemma, similar to the $L^p$ case.

Lemma 4.2.
For $f, g \in V$ with $\|f\|_{L^{p,q}(\mathbb{R}^{d+1})} = \|g\|_{L^{p,q}(\mathbb{R}^{d+1})} = 1$ and $j, k \in \mathbb{N}$, the following inequalities hold:

1. $\mathrm{Var}(Z_{j,k}(f)) \le K^{-\frac{d}{p}} L^{-\frac1q} \sup_{x \in \mathbb{R}^d,\, y \in \mathbb{R}} \|K(x,y;\cdot,\cdot)\|_{L^{p',q'}(\mathbb{R}^{d+1})}$;
2. $\|Z_{j,k}(f)\|_{L^{\infty,\infty}(C_{K,L})} \le \sup_{x \in \mathbb{R}^d,\, y \in \mathbb{R}} \|K(x,y;\cdot,\cdot)\|_{L^{p',q'}(\mathbb{R}^{d+1})}$;
3. $\mathrm{Var}(Z_{j,k}(f) - Z_{j,k}(g)) \le 2K^{-\frac{d}{p}} L^{-\frac1q}\, \|f-g\|_{L^{\infty,\infty}(C_{K,L})}$;
4. $\|Z_{j,k}(f) - Z_{j,k}(g)\|_{L^{\infty,\infty}(C_{K,L})} \le \|f-g\|_{L^{\infty,\infty}(C_{K,L})}$.

Proof. (1). From Lemma 3.3, we get
$$ \mathrm{Var}(Z_{j,k}(f)) = \mathbb{E}\big[ \big( |f(x_j,y_k)| - \mathbb{E}(|f(x_j,y_k)|) \big)^{2} \big] = \mathbb{E}\big( |f(x_j,y_k)|^{2} \big) - \big[ \mathbb{E}(|f(x_j,y_k)|) \big]^{2} \le \mathbb{E}\big( |f(x_j,y_k)|^{2} \big) $$
$$ = \frac{1}{K^d L}\int\!\!\int_{C_{K,L}} |f(x,y)|^{2}\,dx\,dy \le \frac{1}{K^d L}\,\|f\|_{L^{\infty,\infty}(C_{K,L})}\,\|f\|_{L^{1,1}(C_{K,L})} \le K^{-\frac{d}{p}} L^{-\frac1q} \sup_{x \in \mathbb{R}^d,\, y \in \mathbb{R}} \|K(x,y;\cdot,\cdot)\|_{L^{p',q'}(\mathbb{R}^{d+1})}, $$
by (4.2) and (4.4) below.

(2). Since $f \in V$, $\|f\|_{L^{p,q}(\mathbb{R}^{d+1})} = 1$, and
$$ \|f\|_{L^{\infty,\infty}(C_{K,L})} \le \sup_{x \in \mathbb{R}^d,\, y \in \mathbb{R}} \|K(x,y;\cdot,\cdot)\|_{L^{p',q'}(\mathbb{R}^{d+1})}\,\|f\|_{L^{p,q}(\mathbb{R}^{d+1})}, \tag{4.4} $$
we have
$$ \|Z_{j,k}(f)\|_{L^{\infty,\infty}(C_{K,L})} = \sup\Bigg| |f(x_j,y_k)| - \frac{1}{K^d L}\int\!\!\int_{C_{K,L}} |f(x,y)|\,dx\,dy \Bigg| \le \max\Bigg\{ \|f\|_{L^{\infty,\infty}(C_{K,L})},\ \frac{1}{K^d L}\|f\|_{L^{1,1}(C_{K,L})} \Bigg\} \le \sup_{x \in \mathbb{R}^d,\, y \in \mathbb{R}} \|K(x,y;\cdot,\cdot)\|_{L^{p',q'}(\mathbb{R}^{d+1})}. $$

(3). Similarly to part (1), we have
$$ \mathrm{Var}\big( Z_{j,k}(f) - Z_{j,k}(g) \big) = \mathbb{E}\big( (|f(x_j,y_k)| - |g(x_j,y_k)|)^{2} \big) - \big[ \mathbb{E}(|f(x_j,y_k)| - |g(x_j,y_k)|) \big]^{2} $$
$$ \le \frac{1}{K^d L}\int\!\!\int_{C_{K,L}} \big| |f(x,y)| - |g(x,y)| \big|\,\big( |f(x,y)| + |g(x,y)| \big)\,dx\,dy \le 2K^{-\frac{d}{p}} L^{-\frac1q}\,\|f-g\|_{L^{\infty,\infty}(C_{K,L})}. $$

(4). Similarly to part (2), we have
$$ \|Z_{j,k}(f) - Z_{j,k}(g)\|_{L^{\infty,\infty}(C_{K,L})} \le \max\Bigg\{ \|f-g\|_{L^{\infty,\infty}(C_{K,L})},\ \frac{1}{K^d L}\|f-g\|_{L^{1,1}(C_{K,L})} \Bigg\} \le \|f-g\|_{L^{\infty,\infty}(C_{K,L})}. \qquad \square $$

Let us denote $k = \sup_{x \in \mathbb{R}^d,\, y \in \mathbb{R}} \|K(x,y;\cdot,\cdot)\|_{L^{p',q'}(\mathbb{R}^{d+1})}$. We recall the following Bernstein inequality.
Theorem 4.3 (Bernstein's inequality). Let $Z_{j,k}$, $j = 1, 2, \dots, n$; $k = 1, 2, \dots, m$, be independent random variables with expected value $\mathbb{E}(Z_{j,k}) = 0$ for all $j, k$. Assume that $\mathrm{Var}(Z_{j,k}) \le \sigma^{2}$ and $|Z_{j,k}| \le M$ almost surely for all $j, k$. Then for $\lambda > 0$ we have
$$ P\Bigg( \Bigg| \sum_{k=1}^{m}\sum_{j=1}^{n} Z_{j,k} \Bigg| > \lambda \Bigg) \le 2\exp\left( - \frac{\lambda^{2}}{2mn\sigma^{2} + \frac{2}{3}M\lambda} \right). $$
With the help of Theorem 4.3, we prove the following lemma, which is required for our main result.
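First, a quick empirical sanity check of the Bernstein bound of Theorem 4.3, in a toy setup entirely of our own choosing (uniform variables on $[-1,1]$, so $\sigma^2 = 1/3$ and $M = 1$):

```python
import numpy as np

rng = np.random.default_rng(1)

# Z uniform on [-1, 1]: E Z = 0, Var Z = sigma^2 = 1/3, |Z| <= M = 1.
m, n, trials = 20, 20, 20000
sigma2, M = 1.0 / 3.0, 1.0

# Simulate the double sum over the m x n array, once per trial.
S = rng.uniform(-1.0, 1.0, size=(trials, m * n)).sum(axis=1)

lam = 40.0
empirical = float((np.abs(S) > lam).mean())
# Bernstein tail bound of Theorem 4.3:
bound = 2.0 * np.exp(-lam ** 2 / (2 * m * n * sigma2 + (2.0 / 3.0) * M * lam))
print(empirical, bound)
assert empirical <= bound
```

The observed tail frequency stays below the Bernstein bound, as it must; the bound is loose here by roughly an order of magnitude, which is typical for exponential tail inequalities.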
Lemma 4.4.
Let $(x_j, y_k)$, $j, k \in \mathbb{N}$, be a sequence of i.i.d. random variables drawn uniformly from $C_{K,L}$. Then there exist constants $a, b > 0$, depending on $d$, $K$, $L$, $\delta$, such that
$$ P\Bigg( \sup_{f \in V(K,L,\delta)} \Bigg| \sum_{k=1}^{m}\sum_{j=1}^{n} Z_{j,k}(f) \Bigg| > \lambda \Bigg) \le a\exp\left( - \frac{b\,\lambda^{2}}{mnK^{-\frac{d}{p}}L^{-\frac1q} + \lambda} \right). $$

Proof. For simplicity, we divide the proof into the following steps.

Step 1. Let $f \in V(K,L,\delta)$. Lemma 3.4 implies that we can construct a sequence $\{f_l\}_{l \in \mathbb{N}}$ with $f_l \in A(2^{-l})$ and $\|f - f_l\|_{L^{\infty,\infty}(C_{K,L})} < 2^{-l}$. Therefore,
$$ Z_{j,k}(f) = Z_{j,k}(f_1) + \sum_{l=2}^{\infty}\big( Z_{j,k}(f_l) - Z_{j,k}(f_{l-1}) \big). $$
In fact, the partial sums
$$ s_m(f) = Z_{j,k}(f_1) + \sum_{l=2}^{m}\big( Z_{j,k}(f_l) - Z_{j,k}(f_{l-1}) \big) = Z_{j,k}(f_m) \tag{4.5} $$
satisfy $\|Z_{j,k}(f) - Z_{j,k}(f_m)\|_{L^{\infty,\infty}(C_{K,L})} \le \|f - f_m\|_{L^{\infty,\infty}(C_{K,L})} \to 0$ as $m \to \infty$. Now, we consider the events
$$ E = \Bigg\{ \sup_{f \in V(K,L,\delta)} \Bigg| \sum_{k=1}^{m}\sum_{j=1}^{n} Z_{j,k}(f) \Bigg| > \lambda \Bigg\}, \qquad E_1 = \Bigg\{ \exists\, f_1 \in A\Big(\tfrac12\Big) : \Bigg| \sum_{k=1}^{m}\sum_{j=1}^{n} Z_{j,k}(f_1) \Bigg| > \frac{\lambda}{2} \Bigg\}, $$
and, for $l > 1$,
$$ E_l = \Bigg\{ \exists\, f_l \in A(2^{-l}),\ f_{l-1} \in A(2^{-l+1}) \text{ with } \|f_l - f_{l-1}\|_{L^{\infty,\infty}(C_{K,L})} \le 3\cdot 2^{-l} : \Bigg| \sum_{k=1}^{m}\sum_{j=1}^{n}\big( Z_{j,k}(f_l) - Z_{j,k}(f_{l-1}) \big) \Bigg| > \frac{\lambda}{2l^{2}} \Bigg\}. $$
Claim: $E \subseteq \cup_{l=1}^{\infty} E_l$; in particular, if $P(E) > 0$, then $P(E_l) > 0$ for some $l \ge 1$. Indeed,
if none of the events $E_l$ occurs, then for every $f \in V(K,L,\delta)$, from (4.5), we have
$$ \Bigg| \sum_{k=1}^{m}\sum_{j=1}^{n} Z_{j,k}(f) \Bigg| \le \Bigg| \sum_{k=1}^{m}\sum_{j=1}^{n} Z_{j,k}(f_1) \Bigg| + \sum_{l=2}^{\infty}\Bigg| \sum_{k=1}^{m}\sum_{j=1}^{n}\big( Z_{j,k}(f_l) - Z_{j,k}(f_{l-1}) \big) \Bigg| \le \frac{\lambda}{2} + \sum_{l=2}^{\infty}\frac{\lambda}{2l^{2}} < \lambda, $$
which verifies the claim.

Step 2. Applying Theorem 4.3 to $Z_{j,k}(f_1)$ together with Lemma 4.2(1, 2), we get
$$ P\Bigg( \Bigg| \sum_{k=1}^{m}\sum_{j=1}^{n} Z_{j,k}(f_1) \Bigg| > \frac{\lambda}{2} \Bigg) \le 2\exp\left( - \frac{\lambda^{2}/4}{2mnK^{-\frac{d}{p}}L^{-\frac1q}k + \frac{k\lambda}{3}} \right) \le 2\exp\left( - \frac{k^{-1}\lambda^{2}}{8\big( mnK^{-\frac{d}{p}}L^{-\frac1q} + \lambda \big)} \right). $$
Hence
$$ P(E_1) \le 2N\Big(\tfrac12\Big)\exp\left( - \frac{k^{-1}\lambda^{2}}{8\big( mnK^{-\frac{d}{p}}L^{-\frac1q} + \lambda \big)} \right). $$

Step 3. The probability of $E_l$ is estimated in the same way as in Step 2. By Theorem 4.3 and Lemma 4.2(3, 4), together with $\|f_l - f_{l-1}\|_{L^{\infty,\infty}(C_{K,L})} \le 3\cdot 2^{-l}$, we have
$$ P\Bigg( \Bigg| \sum_{k=1}^{m}\sum_{j=1}^{n}\big( Z_{j,k}(f_l) - Z_{j,k}(f_{l-1}) \big) \Bigg| > \frac{\lambda}{2l^{2}} \Bigg) \le 2\exp\left( - \frac{2^{l}\,\lambda^{2}}{48\,l^{4}\big( mnK^{-\frac{d}{p}}L^{-\frac1q} + \lambda \big)} \right), $$
so that
$$ P(E_l) \le 2\,N(2^{-l})\,N(2^{-l+1})\exp\left( - \frac{2^{l}\,\lambda^{2}}{48\,l^{4}\big( mnK^{-\frac{d}{p}}L^{-\frac1q} + \lambda \big)} \right). $$
From Remark 3.6,
$$ N(2^{-l}) \le \exp\Big( 2^{d+1}N(\Gamma)\big[ (K+L+2)^{d+1} + C_1 2^{l} \big]\big( (l+3)\log 2 + \log D \big) \Big), $$
and similarly for $N(2^{-l+1})$; hence
$$ N(2^{-l})\,N(2^{-l+1}) \le \exp\Big( 2^{d+2}N(\Gamma)\big[ (K+L+2)^{d+1} + C_1 2^{l} \big]\big( (2l+5)\log 2 + 2\log D \big) \Big). $$
Writing
$$ \phi(\lambda) = \frac{\lambda^{2}}{mnK^{-\frac{d}{p}}L^{-\frac1q} + \lambda}, $$
factoring $2^{l}$ out of the exponent, and collecting the remaining $l$-dependent factors, which are bounded uniformly in $l$, into positive constants $C_2$ and $C_3$ (depending only on $d$, $N(\Gamma)$, $K$, $L$, $C_1$ and $D$), we obtain
$$ P(E_l) \le 2\exp\big( -2^{l}\big( C_2\,\phi(\lambda) - C_3 \big) \big) $$
for every $\lambda$ large enough that $C_2\,\phi(\lambda) - C_3 > 0$.

Step 4. Since $E \subseteq \cup_{l=1}^{\infty} E_l$, we get
$$ P(E) \le P(E_1) + \sum_{l=2}^{\infty} P(E_l), \qquad \sum_{l=2}^{\infty} P(E_l) \le \frac{2}{\big( C_2\phi(\lambda) - C_3 \big)\log 2}\exp\big( -4\big( C_2\phi(\lambda) - C_3 \big) \big). $$
Choosing $\lambda$ large enough that $C_2\phi(\lambda) - C_3 > 1$, this gives
$$ \sum_{l=2}^{\infty} P(E_l) \le 2\exp(C_3)\exp\left( - \frac{C_2\,\lambda^{2}}{mnK^{-\frac{d}{p}}L^{-\frac1q} + \lambda} \right), \qquad P(E_1) \le 2N\Big(\tfrac12\Big)\exp\left( - \frac{k^{-1}\lambda^{2}}{8\big( mnK^{-\frac{d}{p}}L^{-\frac1q} + \lambda \big)} \right). $$
Let $a = \max\big\{ 2\exp(C_3),\ 2N(\tfrac12) \big\}$ and $b = \min\big\{ C_2,\ \tfrac{1}{8k} \big\}$. Then
$$ P(E) \le a\exp\left( - \frac{b\,\lambda^{2}}{mnK^{-\frac{d}{p}}L^{-\frac1q} + \lambda} \right). $$
We are only left to estimate $a$. Expanding $C_3$, $N(\tfrac12)$ and $C_1$ via the covering estimates above, we find that, for $K, L > 1$, there exists a constant $M > 0$, depending only on $d$, $p$, $q$, $N(\Gamma)$, $B$, $C$, $\omega_\alpha$, $\omega_\beta$ and $D$, such that $a \le \exp\big( M(K+L)^{d+1} \big)$. $\square$

Now we give a detailed proof of our main result.

Proof of Theorem 1.1.
Let $X = \mu\,mn\,D^{1-pq}(1-\delta)^{pq}\,L^{-q}K^{-dp}$. The event
$$ E = \Bigg\{ \sup_{f \in V(K,L,\delta)} \Bigg| \sum_{k=1}^{m}\sum_{j=1}^{n} Z_{j,k}(f) \Bigg| > X \Bigg\} $$
is the complement of the event
$$ E^{\complement} = \Bigg\{ \sup_{f \in V(K,L,\delta)} \Bigg| \sum_{k=1}^{m}\sum_{j=1}^{n} Z_{j,k}(f) \Bigg| \le X \Bigg\}. $$
Let $\{(x_j, y_k) : j, k \in \mathbb{N}\}$ be a random sample set for which the event $E^{\complement}$ occurs. Then, for every $f \in V(K,L,\delta)$ with $\|f\|_{L^{p,q}(\mathbb{R}^{d+1})} = 1$,
$$ \Bigg| \sum_{k=1}^{m}\sum_{j=1}^{n} |f(x_j,y_k)| - \frac{mn}{K^d L}\int\!\!\int_{C_{K,L}} |f(x,y)|\,dx\,dy \Bigg| \le X. $$
Therefore,
$$ \frac{mn}{K^d L}\int\!\!\int_{C_{K,L}} |f(x,y)|\,dx\,dy - X \le \sum_{k=1}^{m}\sum_{j=1}^{n} |f(x_j,y_k)| \le \frac{mn}{K^d L}\int\!\!\int_{C_{K,L}} |f(x,y)|\,dx\,dy + X. \tag{4.6} $$
Inequalities (4.1), (4.2), (4.3) and (4.6) yield
$$ m^{\frac1q-1} n^{\frac1p-1}\Big( mn\,(1-\delta)^{pq} D^{1-pq} L^{-q} K^{-dp} - X \Big) \le \big\|\{f(x_j,y_k)\}_{j=1,2,\dots,n;\,k=1,2,\dots,m}\big\|_{\ell^{p,q}} \le mn\,K^{-\frac{d}{p}} L^{-\frac1q} + X. $$
Hence, the sampling inequality of Theorem 1.1 holds, with the constants $a$ and $b$ given there, with probability
$$ P(E^{\complement}) = 1 - P(E) > 1 - a\exp\left( - \frac{b\,X^{2}}{mnK^{-\frac{d}{p}}L^{-\frac1q} + X} \right), $$
where $a$, $b$ are the constants of Lemma 4.4; substituting the value of $X$ gives the probability bound stated in Theorem 1.1. $\square$
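The mechanism behind the theorem can be made concrete with a toy simulation (entirely our own construction: $d = 1$, $p = q = 2$, and a fixed hypothetical signal concentrated on $C_{K,L}$). It tracks the normalized $\ell^{p,q}$ norm of the sample matrix and the frequency of the deviation event behind $P(E^{\complement})$ as the sample size grows:

```python
import numpy as np

rng = np.random.default_rng(0)

K = L = 4.0
f = lambda x, y: np.exp(-x ** 2 - np.abs(y))        # toy signal, d = 1

def sample_lpq_norm(xs, ys, p, q):
    """l^{p,q} norm of the sample matrix {f(x_j, y_k)}_{j,k}."""
    vals = np.abs(f(xs[None, :], ys[:, None]))       # shape (m, n)
    inner = (vals ** p).sum(axis=1)
    return float((inner ** (q / p)).sum() ** (1.0 / q))

# Mean of |f| over C_{K,L}, estimated once on a dense grid.
g = np.linspace(-K / 2, K / 2, 2001)
true_mean = np.abs(f(g[None, :], g[:, None])).mean()

def success_rate(n, mu=0.25, trials=500):
    """Fraction of n x n uniform random samples whose empirical average of
    |f(x_j, y_k)| deviates from the true mean by less than mu * mean."""
    ok = 0
    for _ in range(trials):
        xs = rng.uniform(-K / 2, K / 2, n)
        ys = rng.uniform(-L / 2, L / 2, n)
        emp = np.abs(f(xs[None, :], ys[:, None])).mean()
        ok += abs(emp - true_mean) < mu * true_mean
    return ok / trials

# For p = q = 2 the normalized sample norm stabilizes as m = n grows (law of
# large numbers), and the success frequency of the deviation event grows.
rates = []
for n in (5, 20, 80):
    xs = rng.uniform(-K / 2, K / 2, n)
    ys = rng.uniform(-L / 2, L / 2, n)
    rates.append(success_rate(n))
    print(n, sample_lpq_norm(xs, ys, 2, 2) / n, rates[-1])
```

The printed success rate increases towards $1$ as $n$ grows, in line with Remark 4.5 below; the normalized sample norm settles near $\big( \frac{1}{KL}\int\!\!\int_{C_{K,L}} |f|^2 \big)^{1/2}$, which is what the two-sided bound of Theorem 1.1 controls uniformly over $V(K,L,\delta)$.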
The probability tends to $1$ as the sample size increases.

Acknowledgement
The second author acknowledges the Council of Scientific and Industrial Research for financial support. The first and third authors acknowledge financial support through project no. CRG/2019/002412, funded by the Department of Science and Technology, Government of India.

References

[1] Adams, D.R., Bagby, R.J. and Ziemer, W.P., 1974. Translation-dilation invariant estimates for Riesz potentials. Indiana University Mathematics Journal, 23(11), pp.1051-1067.
[2] Bass, R.F. and Gröchenig, K., 2005. Random sampling of multivariate trigonometric polynomials. SIAM Journal on Mathematical Analysis, 36(3), pp.773-795.
[3] Benedek, A., Calderón, A.P. and Panzone, R., 1962. Convolution operators on Banach space valued functions. Proceedings of the National Academy of Sciences of the United States of America, 48(3), p.356.
[4] Bass, R.F. and Gröchenig, K., 2010. Random sampling of bandlimited functions. Israel Journal of Mathematics, 177(1), pp.1-28.
[5] Bass, R.F. and Gröchenig, K., 2013. Relevant sampling of band-limited functions. Illinois Journal of Mathematics, 57(1), pp.43-58.
[6] Besov, O.V. and Il'in, V.P., 1979. Integral Representations of Functions and Imbedding Theorems.
[7] Benedek, A. and Panzone, R., 1961. The space $L^p$, with mixed norm. Duke Mathematical Journal, 28(3), pp.301-324.
[8] Candès, E.J., Romberg, J. and Tao, T., 2006. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 52(2), pp.489-509.
[9] Chan, S.H., Zickler, T. and Lu, Y.M., 2014. Monte Carlo non-local means: Random sampling for large-scale image filtering. IEEE Transactions on Image Processing, 23(8), pp.3711-3725.
[10] Cucker, F. and Zhou, D.X., 2007. Learning Theory: An Approximation Theory Viewpoint (Vol. 24). Cambridge University Press.
[11] Eldar, Y.C., 2009.
Compressed sensing of analog signals in shift-invariant spaces. IEEE Transactions on Signal Processing, 57(8), pp.2986-2997.
[12] Fernandez, D.L., 1987. Vector-valued singular integral operators on $L^p$-spaces with mixed norms and applications. Pacific Journal of Mathematics, 129(2), pp.257-275.
[13] Rubio de Francia, J.L., Ruiz, F.J. and Torrea, J.L., 1986. Calderón-Zygmund theory for operator-valued kernels. Advances in Mathematics, 62(1), pp.7-48.
[14] Führ, H. and Xian, J., 2019. Relevant sampling in finitely generated shift-invariant spaces. Journal of Approximation Theory, 240, pp.1-15.
[15] Galmarino, A.R. and Panzone, R., 1965. $L^p$-spaces with mixed norm, for P a sequence. Journal of Mathematical Analysis and Applications, 10(3), pp.494-518.
[16] Han, D., Nashed, M.Z. and Sun, Q., 2009. Sampling expansions in reproducing kernel Hilbert and Banach spaces. Numerical Functional Analysis and Optimization, 30(9-10), pp.971-987.
[17] Huang, L. and Yang, D. On function spaces with mixed norms - a survey. J. Math. Study, 54(3), pp.1-75.
[18] Jiang, Y., 2016. Time sampling and reconstruction in weighted reproducing kernel subspaces. Journal of Mathematical Analysis and Applications, 444(2), pp.1380-1402.
[19] Jiang, Y. and Li, W. Random sampling in multiply generated shift-invariant subspaces of mixed Lebesgue spaces $L^{p,q}(\mathbb{R} \times \mathbb{R}^d)$. Journal of Computational and Applied Mathematics, 386, p.113237.
[20] Jiang, Y. and Sun, W., 2020. Adaptive sampling of time-space signals in a reproducing kernel subspace of mixed Lebesgue space. Banach Journal of Mathematical Analysis, 14(3), pp.821-841.
[21] Kumar, A., Patel, D. and Sampath, S., 2020. Sampling and reconstruction in reproducing kernel subspaces of mixed Lebesgue spaces. Journal of Pseudo-Differential Operators and Applications, 11(2), pp.843-868.
[22] Nashed, M.Z. and Sun, Q., 2010. Sampling and reconstruction of signals in a reproducing kernel subspace of $L^p(\mathbb{R}^d)$.
Journal of Functional Analysis, 258(7), pp.2422-2452.
[23] Li, R., Liu, B., Liu, R. and Zhang, Q., 2017. Nonuniform sampling in principal shift-invariant subspaces of mixed Lebesgue spaces $L^{p,q}(\mathbb{R}^{d+1})$. Journal of Mathematical Analysis and Applications, 453(2), pp.928-941.
[24] Patel, D. and Sampath, S., 2020. Random sampling in reproducing kernel subspaces of $L^p(\mathbb{R}^n)$. Journal of Mathematical Analysis and Applications, p.124270.
[25] Samarah, S., Obeidat, S. and Salman, R., 2005. A Schur test for weighted mixed-norm spaces. Analysis Mathematica, 31(4), pp.277-289.
[26] Torres, R.H. and Ward, E.L., 2015. Leibniz's rule, sampling and wavelets on mixed Lebesgue spaces. Journal of Fourier Analysis and Applications, 21(5), pp.1053-1076.
[27] Xian, J., 2010. Weighted sampling and reconstruction in weighted reproducing kernel spaces. Journal of Mathematical Analysis and Applications, 367(1), pp.34-42.
[28] Yang, J., 2019. Random sampling and reconstruction in multiply generated shift-invariant spaces. Analysis and Applications, 17(02), pp.323-347.
[29] Jiang, Y. and Sun, W., 2020. Adaptive sampling of time-space signals in a reproducing kernel subspace of mixed Lebesgue space. Banach Journal of Mathematical Analysis, 14(3), pp.821-841.
[30] Yang, J. and Wei, W., 2013. Random sampling in shift invariant spaces. Journal of Mathematical Analysis and Applications, 398(1), pp.26-34.

Goyal Prashant, Patel Dhiraj, Sampath Sivananthan
Department of Mathematics
Indian Institute of Technology Delhi
New Delhi-110016, India
[email protected]