Constrained Sampling: Optimum Reconstruction in Subspace with Minimax Regret Constraint
Bashir Sadeghi, Runyi Yu, and Vishnu Naresh Boddeti
Abstract—This paper considers the problem of optimum reconstruction in generalized sampling-reconstruction processes (GSRPs). We propose constrained GSRP, a novel framework that minimizes the reconstruction error for inputs in a subspace, subject to a constraint on the maximum regret-error for any other signal in the entire signal space. This framework addresses the primary limitation of existing GSRPs (consistent, subspace and minimax regret), namely, the assumption that the a priori subspace is either fully known or fully ignored. We formulate constrained GSRP as a constrained optimization problem, the solution to which turns out to be a convex combination of the subspace and the minimax regret samplings. Detailed theoretical analysis of the reconstruction error shows that constrained sampling achieves a reconstruction that is 1) (sub)optimal for signals in the input subspace, 2) robust for signals around the input subspace, and 3) reasonably bounded for any other signals with a simple choice of the constraint parameter. Experimental results on sampling-reconstruction of a Gaussian input and a speech signal demonstrate the effectiveness of the proposed scheme.
Index Terms—Consistent sampling, constrained optimization, generalized sampling-reconstruction processes, minimax regret sampling, oblique projection, orthogonal projection, reconstruction error, subspace sampling.
I. INTRODUCTION
Sampling is the backbone of many applications in digital communications and signal processing; for example, sampling rate conversion for software radio [1], biomedical imaging [2], image super-resolution [3], machine learning and signal processing on graphs [4], [5], etc. Many of the systems involved in these applications can be modeled as the generalized sampling-reconstruction process (GSRP) shown in Fig. 1. A typical GSRP consists of a sampling operator S* associated with a sampling subspace S in a Hilbert space H, a reconstruction operator W associated with a reconstruction subspace W ⊆ H, and a correction digital filter Q. For a given subspace W, orthogonal projection onto W minimizes the reconstruction error in W, as measured by the norm of H. As a result, orthogonal projection is considered to be the best possible GSRP. However, the orthogonal projection is not possible unless the reconstruction space is a subspace of the sampling space [6], i.e., W ⊆ S. Therefore, many solutions have been developed for the GSRP problem under different assumptions on S, W and the input subspace. These solutions can be categorized into consistent, subspace, and minimax regret samplings.

Fig. 1. A typical GSRP: S* is a sampling operator, Q is a linear discrete-time correction filter, and W a reconstruction operator.

When the inclusion property (W ⊆ S) does not hold, but one still wants to have the effect of orthogonal projection for any signal in the reconstruction space, Unser et al. [7], [8] introduced the notion of consistent sampling for shiftable spaces. This sampling strategy was later developed and generalized by Eldar and co-authors [9]–[12]. Common to this body of work is the assumption that the subspace W and the orthogonal complement S⊥ of the subspace S satisfy the so-called direct-sum condition, i.e., W ⊕ S⊥ = H. This implies that W and S⊥ uniquely decompose H. When the direct-sum condition is relaxed to a simple sum condition W + S⊥ = H, consistent sampling can still be developed in finite-dimensional spaces [13], [14]. Further generalizations of consistent sampling, where even the sum condition is not satisfied, can be found in [15], [16].

In many instances and for various reasons, the reconstruction space W can be different from the input subspace A, which models input signals based on a priori knowledge. On one hand, this may be the case due to limitations of physical devices. On the other hand, it can also be advantageous to select suitable reconstruction spaces. For example, band-limited signals are often used to model natural signals. In this case, the sinc function as a generator for the corresponding input space A suffers from slow convergence in reconstruction; it is preferable to use a different generator that has short support (thus allowing fast reconstruction) for the reconstruction space W. Eldar and Dvorkind in [6] introduced subspace sampling and showed that orthogonal projection onto the reconstruction space for signals belonging to the a priori subspace is feasible under the direct-sum condition between A and S⊥ (i.e., A ⊕ S⊥ = H). The subspace A can be learned empirically or from a training dataset [17]. Nevertheless, it would still be subject to uncertainties due to, for example, learning imperfection, noise or hardware inability to sample at the Nyquist rate.
Knyazev et al. [17] used a convex combination of the consistent and subspace GSRPs to address the uncertainty of the a priori subspace. However, the reconstruction errors of consistent sampling and subspace sampling can be arbitrarily large if the angle between the reconstruction (or a priori) subspace and the sampling subspace approaches 90° [6]. Minimax regret sampling was introduced by Eldar and Dvorkind [6] to address the possibility of large errors
associated with consistent (and subspace) sampling for signals away from the sampling subspace. It minimizes the maximum (worst-case) regret-error, i.e., the distance of the reconstructed signal from the orthogonal projection. The minimax regret sampling, however, is found to be conservative as it ignores the a priori information on the input signals.

In the aforementioned GSRPs, the a priori subspace is assumed to be either fully known or fully ignored, which is not practically realizable. In addition, the angle between the sampling space and the input space cannot be controlled (it can get arbitrarily close to 90°). In this paper, we introduce constrained sampling to address these limitations. We design a reconstruction that is robust (in the sense of the angle between the sampling and input spaces) for signals that approximately lie in the a priori subspace. To this end, we introduce a new sampling strategy that exploits the a priori subspace information while enjoying the reasonably bounded error (for any input) of the minimax regret sampling. This is done by minimizing the reconstruction error for signals lying in the a priori subspace while constraining the maximum regret-error to be below a certain level for any signal in H. The solution is shown to be a convex combination of the minimax regret and subspace samplings. To be specific, given an input x, the reconstruction of the proposed constrained sampling is given as the convex combination

x_λ = λ x_sub + (1 − λ) x_reg,  λ ∈ [0, 1]   (1)

where x_sub and x_reg are the reconstructions of the subspace and minimax regret samplings, respectively. The result is illustrated in Fig. 2 for a simple case where H = R² and the a priori subspace A is equal to the reconstruction subspace W (therefore, subspace sampling is the same as consistent sampling). In the figure, x is the input signal; x_opt = P_W x is the optimal reconstruction, i.e., the orthogonal projection of x onto W; x_sub = P_{WS⊥} x is the oblique projection onto W along the orthogonal complement of S; and x_reg = P_W P_S x is the result of two successive orthogonal projections. The figure shows that, as a combination of x_sub and x_reg, our constrained sampling x_λ can potentially be very close to the orthogonal projection. This desirable feature will also be demonstrated in the two examples in Section VI.

The main contributions of this paper can be summarized as follows:
1) We propose and solve a constrained optimization problem which yields a reconstruction that is (sub)optimal for signals in the input subspace and robust for any other input signals.
2) The solution to the optimization problem leads to a new sampling strategy (i.e., the constrained sampling) which has the consistent (or subspace) and minimax regret samplings as special cases.
3) We provide a detailed analysis of reconstruction errors, and obtain reconstruction guarantees in the form of lower and upper bounds on the errors.

The organization of this paper is as follows. In Sections II and III, we provide preliminaries and discuss related work, respectively. The proposed constrained sampling is described in Section IV.

Fig. 2. Comparison of several sampling schemes: x_opt = P_W x is the orthogonal projection of x onto W; x_sub = P_{WS⊥} x is the oblique projection onto W along S⊥; and x_reg = P_W P_S x is the orthogonal projection onto S followed by the orthogonal projection onto W.
Our constrained reconstruction x_λ is a simple convex combination of x_sub and x_reg and can be expressed as P_W P_{BS⊥} x for some subspace B given in Section IV. Note that x_λ can potentially get very close to x_opt.

In Section V, we obtain lower and upper bounds on the reconstruction error of the constrained GSRP. We then present two illustrative examples to demonstrate the effectiveness of the new sampling scheme in Section VI. Finally, we conclude the paper in Section VII.

II. PRELIMINARIES
A. Notation
We denote the sets of real and integer numbers by R and Z, respectively. Let (H, ⟨·,·⟩) be a Hilbert space with the norm ‖·‖ induced by the inner product ⟨·,·⟩. We assume throughout the paper that H is infinite-dimensional unless otherwise stated. Vectors in H are represented by lowercase letters (e.g., x, v). Capital letters are used to represent operators (e.g., S, W). The (closed) subspaces of H are denoted by capital calligraphic letters (e.g., S, W). S⊥ is the orthogonal complement of S in H. For a linear operator V, its range and nullspace are denoted by R(V) (or V) and N(V), respectively. In particular, the Hilbert space of continuous-time square-integrable functions (discrete-time square-summable sequences, resp.) is denoted by L² (ℓ², resp.). At a particular time instant t ∈ R (n ∈ Z, resp.), the value of a signal x ∈ L² (d ∈ ℓ², resp.) is denoted by x(t) (d[n], resp.).

B. Subspaces and Projections
Given two subspaces V₁, V₂ satisfying the direct-sum condition, i.e.,

V₁ ⊕ V₂ = H,

we can define an oblique projection onto V₁ along V₂. Let it be denoted by P_{V₁V₂}. By definition [6], P_{V₁V₂} is the unique operator satisfying

P_{V₁V₂} x = x for x ∈ V₁,  and  P_{V₁V₂} x = 0 for x ∈ V₂.

As a result, we have

R(P_{V₁V₂}) = V₁,  N(P_{V₁V₂}) = V₂.
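In finite dimensions the oblique projection has a closed form that is easy to experiment with. The following NumPy sketch (an illustration under the stated direct-sum assumption, not code from the paper; the matrices are random stand-ins) builds P_{V₁V₂} from a basis of the range and a basis of the nullspace, and checks the defining property:

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
d = 5
V1 = rng.standard_normal((d, 2))     # basis of the range V_1
V2 = rng.standard_normal((d, 3))     # basis of the nullspace V_2 (generically V_1 + V_2 = R^d)

M = null_space(V2.T)                 # orthonormal basis of V_2-perp
P = V1 @ np.linalg.solve(M.T @ V1, M.T)   # oblique projector P_{V1 V2}

x1 = V1 @ rng.standard_normal(2)     # a vector in V_1
x2 = V2 @ rng.standard_normal(3)     # a vector in V_2
assert np.allclose(P @ x1, x1)       # P x = x on V_1
assert np.allclose(P @ x2, 0)        # P x = 0 on V_2
assert np.allclose(P @ P, P)         # idempotent, matching the characterization below
```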
Any projection P can be written, in terms of its range and nullspace, as P = P_{R(P)N(P)}. By exchanging the roles of V₁ and V₂, we also have the oblique projection P_{V₂V₁}, and

P_{V₁V₂} + P_{V₂V₁} = I   (2)

where I : H → H is the identity operator. In particular, if V₂ = V₁⊥, then the oblique projections reduce to orthogonal ones, and (2) specializes to

P_V + P_{V⊥} = I.   (3)

An important characterization of projections is that a linear operator P : H → H is an oblique projection if and only if P² = P [18]. Note that the sum of two projections is generally not a projection. Nevertheless, the following result states that their convex combination remains a projection if both share the same nullspace. This result will be useful in our study of constrained sampling.

Proposition 1:
Let P₁ and P₂ be two projections. If N(P₁) = N(P₂), then the following statements hold.
1) P₁P₂ = P₁ and P₂P₁ = P₂.
2) P_λ = λP₁ + (1 − λ)P₂ is a projection for any λ ∈ R.

Proof:
1) From (2), it follows that

P₁P₂ = P₁(I − P_{N(P₂)R(P₂)}) = P₁ − P₁P_{N(P₂)R(P₂)}.

If N(P₁) = N(P₂), then the last term vanishes. Hence, P₁P₂ = P₁. Similarly, we have P₂P₁ = P₂.
2) It can be readily verified that P_λ² = P_λ in view of the result in 1).

As consequences of Proposition 1, the following equalities hold, which will be used in Section IV:

P_{V₂} P_{V₁V₂⊥} = P_{V₂}   (4)

and

P_{V₁V₂⊥} P_{V₂} = P_{V₁V₂⊥}.   (5)
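Proposition 1 is easy to confirm numerically; the sketch below (random illustrative subspaces, not from the paper) constructs two projections sharing a nullspace and checks both parts:

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(1)
d = 6
N = rng.standard_normal((d, 3))      # the common nullspace of P1 and P2
M = null_space(N.T)                  # orthonormal basis of its orthogonal complement
R1, R2 = rng.standard_normal((d, 3)), rng.standard_normal((d, 3))

proj = lambda R: R @ np.linalg.solve(M.T @ R, M.T)   # projector with range R, nullspace N
P1, P2 = proj(R1), proj(R2)

assert np.allclose(P1 @ P2, P1)      # part 1: P1 P2 = P1
assert np.allclose(P2 @ P1, P2)      #         P2 P1 = P2
lam = 0.3
P = lam * P1 + (1 - lam) * P2
assert np.allclose(P @ P, P)         # part 2: the combination is again a projection
```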
C. Angle between Subspaces

The notion of the angle between two subspaces characterizes how far apart they are.
Consider a subspace V ⊂ H and a nonzero vector x ∈ H. The angle between x and V, denoted by (x, V), is defined by

cos(x, V) := ‖P_V x‖ / ‖x‖   (6)

or equivalently

sin(x, V) := ‖P_{V⊥} x‖ / ‖x‖.   (7)

Let V₁, V₂ ⊂ H be two subspaces. Following [6], the (maximal principal) angle between V₁ and V₂, denoted by (V₁, V₂), is defined by

cos(V₁, V₂) := inf_{0≠x∈V₁} ‖P_{V₂} x‖ / ‖x‖   (8)

or equivalently

sin(V₁, V₂) := sup_{0≠x∈V₁} ‖P_{V₂⊥} x‖ / ‖x‖.   (9)

This angle can also be characterized via any linear operator B whose range is equal to V₁:

cos(V₁, V₂) = inf_{x∉N(B)} ‖P_{V₂} Bx‖ / ‖Bx‖   (10)

or equivalently

sin(V₁, V₂) = sup_{x∉N(B)} ‖P_{V₂⊥} Bx‖ / ‖Bx‖.   (11)

Note that (V₁, V₂) ≠ (V₂, V₁) in general. However, if their orthogonal complements are used instead, the order can be exchanged [6], [7]:

(V₁, V₂) = (V₂⊥, V₁⊥).   (12)

Moreover, under the direct-sum condition, commutativity holds [19]:

(V₁, V₂) = (V₂, V₁)  if  V₁ ⊕ V₂⊥ = H.   (13)

The angle between subspaces allows lower and upper bounds for the orthogonal projection of signals in V₁:

cos(V₁, V₂) ‖x‖ ≤ ‖P_{V₂} x‖ ≤ sin(V₁, V₂⊥) ‖x‖,  x ∈ V₁   (14)

and, for any signal in H, via a linear operator B with R(B) = V₁:

cos(V₁, V₂) ‖Bx‖ ≤ ‖P_{V₂} Bx‖ ≤ sin(V₁, V₂⊥) ‖Bx‖,  x ∈ H.   (15)

For oblique projections, the following bounds are proven in [6]:

‖P_{V₂⊥} x‖ / sin(V₁, V₂) ≤ ‖P_{V₁V₂} x‖ ≤ ‖P_{V₂⊥} x‖ / cos(V₁, V₂⊥).   (16)
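SciPy exposes principal angles directly via scipy.linalg.subspace_angles, whose largest entry matches the maximal principal angle used in (8)–(9) when the two subspaces have equal dimension. The sketch below (random illustrative subspaces) cross-checks it against the infimum definition by random search:

```python
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(2)
d = 8
V1, V2 = rng.standard_normal((d, 3)), rng.standard_normal((d, 3))

theta = subspace_angles(V1, V2)[0]          # maximal principal angle, in radians
Q2, _ = np.linalg.qr(V2)
P2 = Q2 @ Q2.T                              # orthogonal projector onto V_2

xs = V1 @ rng.standard_normal((3, 20000))   # random vectors in V_1
ratios = np.linalg.norm(P2 @ xs, axis=0) / np.linalg.norm(xs, axis=0)
print(np.cos(theta), ratios.min())          # inf over V_1 of ||P_{V2}x||/||x|| is cos(V1, V2)
```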
III. RELATED WORK

In this section, we review four important sampling schemes, namely, orthogonal, consistent, subspace, and minimax regret samplings. For comparison, some properties of these schemes are summarized in Table I, along with the properties of our constrained sampling framework.
A. Generalized Sampling-Reconstruction Processes
Consider the GSRP in Fig. 1, where x, x_r ∈ H are the input and output signals, respectively; S* and W are the sampling and reconstruction operators, respectively; and Q : ℓ² → ℓ² is a bounded linear operator which acts as a correction filter.

Sampling and reconstruction spaces are usually restricted by acquisition and reconstruction devices or algorithms and are not free to be designed. Therefore, we assume that S* and W are given in terms of the sampling space S and the reconstruction space W, respectively. Let W be spanned by a set of vectors {w_n}_{n∈I}, where I ⊆ Z is an index set. Then W : ℓ²(I) → H can be described by the synthesis operator

W : c ↦ Wc = Σ_{n∈I} c[n] w_n,  c ∈ ℓ²(I).
TABLE I
SAMPLING SCHEMES AND THEIR PROPERTIES

| Sampling Scheme | GSRP T | Optimal in A? | Error bounded?ᵃ |
|---|---|---|---|
| Orthogonalᵇ | P_W | optimal | bounded |
| Consistent | P_{WS⊥} | optimal | unbounded |
| Subspace | P_W P_{AS⊥} | optimal | unbounded |
| Regret | P_W P_S | non-optimal | bounded |
| Constrained | λP_W P_{AS⊥} + (1 − λ)P_W P_S | sub-optimal | bounded |

ᵃ Regardless of (A, S).
ᵇ This is the optimal sampling scheme but is possible only if W ⊆ S.
Note that the range of W is W. Similarly, let S be spanned by vectors {s_n}_{n∈I}. Then S* : H → ℓ²(I) can be described by the adjoint (analysis) operator

S* : x ↦ S*x = c,  c[n] = ⟨x, s_n⟩,  n ∈ I,  x ∈ H   (17)

since, by definition of the adjoint operator [20], ⟨Sa, x⟩ = ⟨a, S*x⟩_{ℓ²} for all x ∈ H, a ∈ ℓ²(I). In (17), c represents the sample sequence due to the sampling operation on x ∈ H, i.e., c = S*x. Note that if c = S*x, then for any v ∈ S⊥ it holds that c = S*(x + v), since the orthogonal complement S⊥ is the nullspace of S*, i.e., N(S*) = S⊥ [20].

We assume throughout the paper that the set {w_n} constitutes a frame of W, that is, there exist two constants 0 < α ≤ β < ∞ such that

α‖x‖² ≤ Σ_{n∈I} |⟨x, w_n⟩|² ≤ β‖x‖²,  x ∈ W.

The set {s_n} is also assumed to be a frame of S.

The overall GSRP can be described as a linear operator T : H → H,

T : x ↦ x_r = WQS*x,  x ∈ H.   (18)

The reconstruction quality of the GSRP can be studied via the error system

E := I − T = I − WQS*.   (19)

For any input x ∈ H, the reconstruction error signal is given as Ex = x − x_r.
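In a finite-dimensional stand-in for H, the analysis and synthesis operators are simply matrices whose columns are the frame vectors, and the whole GSRP (18) is three matrix products. A minimal sketch (illustrative shapes and random frames, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
d, m = 12, 4
S = rng.standard_normal((d, m))   # columns s_n span the sampling space S
W = rng.standard_normal((d, m))   # columns w_n span the reconstruction space W

x = rng.standard_normal(d)        # input signal in H = R^d
c = S.T @ x                       # analysis: c[n] = <x, s_n>, Eq. (17)

Q = np.eye(m)                     # placeholder correction filter, to be designed below
x_r = W @ (Q @ c)                 # reconstruction x_r = W Q S* x, Eq. (18)
e = x - x_r                       # error signal E x, Eq. (19)
```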
B. Orthogonal Projection

Consider the optimal reconstruction of a signal x by the GSRP in Fig. 1. Since x_r ∈ W, the (norm of the) error Ex is minimized by the orthogonal projection of x onto W: x_r = P_W x, and therefore the optimal error system is

E_opt := I − P_W = P_{W⊥}.   (20)

For each x ∈ H, the optimal error signal is

E_opt x = P_{W⊥} x.   (21)

The orthogonal projection P_W can be represented in terms of the analysis and synthesis operators as [6]

P_W = W(W*W)†W*   (22)

where "†" denotes the Moore-Penrose pseudoinverse.

According to [6], P_W is subject to a fundamental limitation on the GSRP. Specifically, unless the reconstruction subspace is a subset of the sampling subspace, i.e.,

W ⊆ S,   (23)

there exists no correction filter Q that renders the GSRP T the orthogonal projection P_W.

Acknowledging the optimality as well as the limitation of the orthogonal projection, we now introduce the difference between T and P_W, which is, in the spirit of [6], referred to as the regret-error system:

R := P_W − T = P_W − WQS*.   (24)

Then the regret-error signal is given as

Rx = P_W x − x_r = (P_W − WQS*)x.   (25)

It is important to note that the two error systems are related as

E = R + P_{W⊥}.   (26)

As the optimal sampling, the orthogonal projection P_W enjoys the following two desirable properties:
1) Error-free in W: Ex = 0 for any x ∈ W; and
2) Least error for x ∈ H: Ex = E_opt x for any x ∈ H.

Consequently, ‖Ex‖ ≤ ‖x‖ for any x ∈ H.
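Formula (22) translates directly into NumPy; the sketch below (illustrative, with a random frame W) verifies that the resulting operator is an orthogonal projection and that it minimizes the distance to W:

```python
import numpy as np

rng = np.random.default_rng(4)
d, m = 12, 4
W = rng.standard_normal((d, m))

P_W = W @ np.linalg.pinv(W.T @ W) @ W.T    # Eq. (22); equals W @ np.linalg.pinv(W)

assert np.allclose(P_W @ P_W, P_W)         # idempotent
assert np.allclose(P_W, P_W.T)             # self-adjoint, hence an orthogonal projection

x = rng.standard_normal(d)
cands = W @ rng.standard_normal((m, 1000)) # random points of W
d_opt = np.linalg.norm(x - P_W @ x)        # distance from x to its projection
assert (np.linalg.norm(x[:, None] - cands, axis=0) >= d_opt - 1e-9).all()
```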
C. Consistent Sampling

Consistent sampling achieves the property of being error-free in W without requiring the inclusion condition (23) needed for the orthogonal projection.

Under the assumption of the direct-sum condition

W ⊕ S⊥ = H,   (27)

it is shown in [9] that the correction filter

Q_con := (S*W)†   (28)

leads to an error-free reconstruction for input signals in W. The resulting GSRP is found to be an oblique projection:

T_con := W(S*W)†S* = P_{WS⊥}.   (29)

As a result, it is sample consistent, i.e.,

S*(T_con x) = S*(x − P_{S⊥W} x) = S*x,  x ∈ H

where we used (2) and the fact that N(S*) = S⊥.
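A finite-dimensional check of (28)–(29) and of sample consistency (all matrices are random illustrative frames, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(5)
d, m = 10, 4
S, W = rng.standard_normal((d, m)), rng.standard_normal((d, m))

Q_con = np.linalg.pinv(S.T @ W)            # Eq. (28)
T_con = W @ Q_con @ S.T                    # equals P_{W S-perp} under condition (27)

x = rng.standard_normal(d)
w = W @ rng.standard_normal(m)             # a signal in W
assert np.allclose(T_con @ w, w)           # error-free reconstruction in W
assert np.allclose(S.T @ (T_con @ x), S.T @ x)   # sample consistency
assert np.allclose(T_con @ T_con, T_con)   # an oblique projection, Eq. (29)
```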
The consistent error system is

E_con := I − P_{WS⊥} = P_{S⊥W}   (30)

and the corresponding regret-error system also has a simple form:

R_con := P_W P_{S⊥W}   (31)

since, from (24), (3), and (5), we have

R_con = P_W − T_con = P_W − P_{WS⊥} = P_W − P_W P_{WS⊥} = P_W(I − P_{WS⊥}) = P_W P_{S⊥W}.

Therefore, E_con x = R_con x = 0 for any x ∈ W.

The absolute error for any input can be decomposed as

‖E_con x‖² = ‖P_{W⊥} x‖² + ‖P_W P_{S⊥W} x‖²,  x ∈ H.   (32)

And the regret-error is

‖R_con x‖ = ‖P_W P_{S⊥W} x‖,  x ∈ H.   (33)

From [6], the absolute error can be bounded in terms of the subspace angles as

‖E_opt x‖ / sin(W⊥, S) ≤ ‖E_con x‖ ≤ ‖E_opt x‖ / cos(W, S).   (34)

The regret-error is shown in Section V to be bounded as

[cos(W⊥, S)/sin(W⊥, S)] ‖P_{W⊥} x‖ ≤ ‖R_con x‖ ≤ [sin(W, S)/cos(W, S)] ‖P_{W⊥} x‖.   (35)

It is clear from the left-hand sides of (34) and (35) that the absolute error and the regret-error for x ∈ W⊥ can be arbitrarily large if the angle (W⊥, S) approaches zero.

D. Subspace Sampling
The result on consistent sampling in the preceding section has been extended in [6] to any input subspace
A ⊂ H that satisfies the direct-sum condition with S⊥, i.e., A ⊕ S⊥ = H. Recall that the subspace A models the input signals based on our a priori knowledge. Let {a_n} be a frame of the subspace A, and denote the corresponding synthesis operator by A. Then the correction filter

Q_sub := (W*W)†W*A(S*A)†   (36)

renders the GSRP the product of two projection operators:

T_sub := W(W*W)†W*A(S*A)†S* = P_W P_{AS⊥}.   (37)

The regret-error system now is

R_sub := P_W − T_sub = P_W − P_W P_{AS⊥} = P_W P_{S⊥A}.   (38)

And the error system is

E_sub := P_{W⊥} + P_W P_{S⊥A}.   (39)

Accordingly, the absolute error and the regret-error are given, respectively, by

‖E_sub x‖² = ‖P_{W⊥} x‖² + ‖P_W P_{S⊥A} x‖²,  x ∈ H

and

‖R_sub x‖ = ‖P_W P_{S⊥A} x‖,  x ∈ H.

The regret-error satisfies the following error bounds:

[cos(W⊥, S)/sin(A⊥, S)] ‖P_{A⊥} x‖ ≤ ‖R_sub x‖ ≤ [sin(W, S)/cos(A, S)] ‖P_{A⊥} x‖   (40)

which will be shown in Section V.

For any x ∈ A, it holds that P_{S⊥A} x = 0; thus E_sub x = E_opt x and R_sub x = 0. This implies that the optimum reconstruction is achieved for any x ∈ A. However, the reconstruction error E_sub x for x ∈ A⊥ can still be excessively large if the angle (A⊥, S) is very small, as can be seen from (40).

Recall that the filter Q_sub is the minimizer of the reconstruction error for any input x ∈ A; it is the solution to the following optimization problem [6]:

min_Q ‖Ex‖,  x ∈ A.   (41)
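The subspace-sampling filter (36) is again a few matrix products; the sketch below (random illustrative frames) checks the key property that the reconstruction equals the orthogonal projection P_W x for x ∈ A:

```python
import numpy as np

rng = np.random.default_rng(6)
d, m = 10, 4
S, W, A = (rng.standard_normal((d, m)) for _ in range(3))

pinv = np.linalg.pinv
P_W = W @ pinv(W)                                   # orthogonal projector onto W
Q_sub = pinv(W.T @ W) @ W.T @ A @ pinv(S.T @ A)     # Eq. (36)
T_sub = W @ Q_sub @ S.T                             # = P_W P_{A S-perp}, Eq. (37)

a = A @ rng.standard_normal(m)                      # a signal in the a-priori subspace A
assert np.allclose(T_sub @ a, P_W @ a)              # optimum reconstruction on A
```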
E. Minimax Regret Sampling

Introduced in [6], minimax regret sampling alleviates the drawback of large errors associated with the consistent and subspace samplings. This is achieved by minimizing the maximum regret-error rather than the absolute error.

Consider the optimization problem

min_Q max_{x∈D} ‖Rx‖   (42)

where

D := {x ∈ H : ‖x‖ ≤ L, c = S*x}   (43)

and the scalar L > 0 is introduced as a norm bound to limit the contribution of inputs x ∈ S⊥ and to ensure that the maximum regret-error in (42) is bounded; L should also be sufficiently large to render D non-empty. Interestingly, the solution to (42) is shown to be independent of L [6]. The minimax regret solution is found to be

Q_reg := (W*W)†W*S(S*S)†.   (44)

Consequently, the GSRP becomes the product of two orthogonal projections:

T_reg := WQ_reg S* = P_W P_S.   (45)

Hence, the regret-error system is

R_reg := P_W − T_reg = P_W P_{S⊥}.   (46)

And the error system is

E_reg := P_{W⊥} + P_W P_{S⊥}.   (47)

Moreover, the regret-error is shown in [6] to be bounded as

cos(W⊥, S) ‖P_{S⊥} x‖ ≤ ‖R_reg x‖ ≤ sin(W, S) ‖P_{S⊥} x‖.   (48)
Clearly,

‖R_reg x‖ ≤ ‖x‖,  x ∈ H.   (49)

And

‖E_reg x‖ ≤ √2 ‖x‖,  x ∈ H   (50)

since ‖E_reg x‖² = ‖P_{W⊥} x‖² + ‖R_reg x‖² ≤ ‖x‖² + sin²(W, S)‖P_{S⊥} x‖² ≤ 2‖x‖².

The above error estimates imply that T_reg results in a good reconstruction for every x ∈ H, at the cost of introducing error for x ∈ W (or A). Since T_reg does not differentiate between input signals, it can be very conservative for signals in the input subspace.
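Similarly for the minimax regret filter (44); the sketch below (random illustrative frames, not the paper's code) verifies (45) and the norm bound (49) on random inputs:

```python
import numpy as np

rng = np.random.default_rng(7)
d, m = 10, 4
S, W = rng.standard_normal((d, m)), rng.standard_normal((d, m))

pinv = np.linalg.pinv
P_W, P_S = W @ pinv(W), S @ pinv(S)
Q_reg = pinv(W.T @ W) @ W.T @ S @ pinv(S.T @ S)     # Eq. (44)
T_reg = W @ Q_reg @ S.T

assert np.allclose(T_reg, P_W @ P_S)                # Eq. (45)
for _ in range(100):
    x = rng.standard_normal(d)
    r = np.linalg.norm(P_W @ x - T_reg @ x)         # regret-error, Eq. (46)
    assert r <= np.linalg.norm(x) + 1e-12           # Eq. (49)
```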
IV. CONSTRAINED RECONSTRUCTION

Suppose that we know a priori that the input signal x is close to A (i.e., (x, A) is small), but does not necessarily lie in A. This is relevant since, in many practical scenarios, input signals cannot be exactly modeled as elements of A, for example when A is learned from a training set and is only approximately described as an input subspace. It is also technically necessary when, for example, the sampling hardware is unable to sample at the Nyquist rate or the input signal is only approximately bandlimited. We can seek a correction filter that improves on the conservativeness of the regret sampling while, at the same time, achieving minimum error for each x ∈ A as in the case of subspace sampling. In other words, we wish to reach a trade-off between achieving the two properties of the orthogonal projection P_W. It should be noted that we assume the direct-sum property (i.e., A ⊕ S⊥ = H) holds throughout the paper.

To this end, we propose the following optimization problem:

min_Q ‖Ex‖,  x ∈ A ∩ D   (51)
s.t. max_{x∈D} ‖Rx‖ ≤ β(c)

where D is given in (43), and β represents an appropriate bound that depends on the sample sequence c. By restricting x to D, we imply that all such input signals give the same sequence c, which is assumed to be given (see [6]). Our problem is to find a correction filter Q that minimizes the reconstruction error subject to the minimax regret constraint. We note that the union of such D's over all c ∈ R(S*) is equal to the entire signal space H. The above optimization problem (51) encapsulates two desiderata: (1) optimum reconstruction in A through the objective, and (2) minimax recovery for all inputs in H through the constraint.

The regret-error in the above constraint can be relaxed to the error between the GSRP itself and the minimax regret reconstruction (rather than the orthogonal projection), i.e.,

max_{x∈D} ‖P_W P_S x − WQS*x‖ = ‖P_W S(S*S)†c − WQc‖.   (52)

Not only does this relaxation allow a simple and elegant solution to our search for an alternative sampling scheme, it is also supported by the following arguments. On one hand, from the triangle inequality, we have

max_{x∈D} ‖Rx‖ = max_{x∈D} ‖P_W x − WQS*x‖
≤ max_{x∈D} { ‖P_W P_S x − WQS*x‖ + ‖P_W x − P_W P_S x‖ }
= ‖P_W S(S*S)†c − WQc‖ + max_{x∈D} ‖P_W x − P_W P_S x‖.   (53)

On the other hand, it is shown in Appendix A that

max_{x∈D} ‖Rx‖ ≥ (1/√2) ( ‖P_W S(S*S)†c − WQc‖ + max_{x∈D} ‖P_W x − P_W P_S x‖ ).   (54)

We complete the argument by noting that the last terms in (53) and (54) are independent of the correction filter Q.

In view of the above discussion, we now present the constrained optimization problem as follows:

min_Q ‖x − WQc‖,  x ∈ A ∩ D   (55)
s.t. ‖P_W S(S*S)†c − WQc‖ ≤ β(c),

which leads to an adequate approximation of the optimization problem in (51). The upper bound β(c) in (55) needs to be properly chosen. Let us consider two extreme cases: β(c) = 0 and β(c) = ∞. If β(c) = 0, the strict constraint implies that the solution to (55) is the standard minimax regret filter in (44). On the other hand, if β(c) = ∞ (i.e., the constraint is removed), then the objective function in (55) is minimized by the correction filter Q_sub of the subspace sampling, which is given in (36). In this case, the left-hand side of the constraint in (55) becomes

β₀(c) := ‖P_W S(S*S)†c − P_W A(S*A)†c‖.
(56)

From the above discussion, we conclude that the upper bound in (55) can be set to β(c) = λβ₀(c) for some parameter λ ∈ [0, 1]. Accordingly, we present the constrained optimization problem (55) and its solution in the next theorem.

Theorem 1:
Consider the constrained sampling problem

min_Q ‖x − WQc‖,  x ∈ A ∩ D   (57)
s.t. ‖P_W S(S*S)†c − WQc‖ ≤ λβ₀(c).

A solution to it is given by

Q_λ := λQ_sub + (1 − λ)Q_reg.   (58)

Proof: It is proved in Appendix B.

Following Theorem 1, the constrained GSRP can be expressed as

T_λ := λT_sub + (1 − λ)T_reg.   (59)

The constrained GSRP T_λ can be simplified to a compact expression. Define B as the convex combination of two projections:

B := λP_{AS⊥} + (1 − λ)P_S.   (60)
In view of (37) and (45), the GSRP can be further expressed compactly as

T_λ = P_W B.   (61)

The next result states that B is in fact also an oblique projection, with nullspace S⊥.

Proposition 2:
The linear operator B defined in (60) is given as

B = P_{BS⊥}   (62)

where B = R(B).

Proof: It is proved in Appendix C.

Following Proposition 2, the resulting constrained GSRP can be nicely described as the product of two projections:

T_λ = P_W P_{BS⊥}.   (63)

Then the regret-error system is

R_λ := P_W P_{S⊥B}.   (64)

And the error system is given as

E_λ := P_{W⊥} + P_W P_{S⊥B}.   (65)

In view of (26), and similar to the case of subspace sampling, the reconstruction error is given by

‖E_λ x‖² = ‖P_{W⊥} x‖² + ‖P_W P_{S⊥B} x‖²,  x ∈ H   (66)

and the regret-error is

‖R_λ x‖ = ‖P_W P_{S⊥B} x‖,  x ∈ H.   (67)

It is interesting to see that all the GSRPs discussed have the same expression as in (63). When λ = 0, then B = S and T_λ = T_reg; and when λ = 1, then B = A and T_λ = T_sub, which becomes T_con if additionally A = W. This shows that our constrained sampling generalizes all the other three samplings. Regarding these two particular values of λ, we recall that if the input signals can be precisely modelled by A, then the subspace sampling should be chosen for the reconstruction. On the other hand, if no a priori information about the input signal is available, it is better to choose the minimax regret sampling.

The description of T_λ in (63) shows that the constrained sampling is essentially a subspace sampling with a new, modified subspace B, which is comprised of all the convex combinations of vectors of A and S according to (60). Thus B is closer to S than A is, i.e., (B, S) < (A, S), leading to a more robust sampling strategy (i.e., better reconstruction for signals not in A; further explanation of this observation will be given in Section V following the error analysis). A geometrical illustration of all the sampling schemes is provided in Fig. 3.

It should be noted that, since P_{S⊥B} is still an oblique projection, the error Ex can still be very large in general. However, we shall show in the next section that this concern can be removed by properly choosing the value of the parameter λ; one such choice is λ = cos(A, S).

Fig. 3. An illustration of sampling schemes: S is the sampling space, W is the reconstruction space and A is the input space. x_opt = P_W x, x_sub = P_W P_{AS⊥} x, x_reg = P_W P_S x, and x_λ = P_W P_{BS⊥} x, where P_{BS⊥} = λP_{AS⊥} + (1 − λ)P_S. Note that the constrained reconstruction x_λ has the potential to approach the optimum reconstruction x_opt.
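Putting the pieces together, the constrained filter (58) is a one-line convex combination of the two filters built earlier. The sketch below (random illustrative frames; the input is constructed to be near A) sweeps λ and reports the reconstruction error against the optimal one:

```python
import numpy as np

rng = np.random.default_rng(8)
d, m = 10, 4
S, W, A = (rng.standard_normal((d, m)) for _ in range(3))

pinv = np.linalg.pinv
P_W = W @ pinv(W)
Q_sub = pinv(W.T @ W) @ W.T @ A @ pinv(S.T @ A)     # Eq. (36)
Q_reg = pinv(W.T @ W) @ W.T @ S @ pinv(S.T @ S)     # Eq. (44)

x = A @ rng.standard_normal(m) + 0.1 * rng.standard_normal(d)  # input close to A

for lam in (0.0, 0.25, 0.5, 0.75, 1.0):
    Q_lam = lam * Q_sub + (1 - lam) * Q_reg         # Eq. (58)
    x_r = W @ Q_lam @ (S.T @ x)                     # constrained reconstruction
    print(f"lambda={lam:4.2f}  ||x - x_r|| = {np.linalg.norm(x - x_r):.4f}")
print("optimal   ||x - P_W x|| =", np.linalg.norm(x - P_W @ x))
```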
V. ANALYSIS ON RECONSTRUCTION ERRORS
This section presents the error performance of the proposed constrained sampling. First, we compare the reconstruction error of constrained sampling with those of the subspace and minimax regret samplings.
Proposition 3:
The reconstruction error of constrained sampling is upper-bounded by a convex combination of the corresponding errors of the subspace and minimax regret samplings as follows:

‖E_λ x‖ ≤ λ‖E_sub x‖ + (1 − λ)‖E_reg x‖,  x ∈ H.   (68)

The regret-error of constrained sampling is similarly upper-bounded:

‖R_λ x‖ ≤ λ‖R_sub x‖ + (1 − λ)‖R_reg x‖,  x ∈ H.   (69)

Proof: In view of the definitions of the error systems involved, we have

E_λ = I − T_λ = λE_sub + (1 − λ)E_reg

and similarly

R_λ = P_W − T_λ = λR_sub + (1 − λ)R_reg.

The results then readily follow from the triangle inequality for norms.

Proposition 3 implies that the reconstruction error of constrained sampling can never be larger than both of the other two corresponding errors at the same time.

Next, we present bounds on the regret-error of the constrained sampling T_λ by examining the regret-error system R_λ.

Theorem 2:
For any x ∈ H, the regret-error of constrained sampling is bounded as

α_λ ‖P_{B⊥} x‖ ≤ ‖R_λ x‖ ≤ β_λ ‖P_{B⊥} x‖   (70)

where the scalars are

α_λ = ( 1 + λ² cos²(A⊥, S)/sin²(A⊥, S) )^{1/2} cos(W⊥, S)

and

β_λ = ( 1 + λ² sin²(A, S)/cos²(A, S) )^{1/2} sin(W, S).

TABLE II
SAMPLING STRATEGIES AND THEIR REGRET-ERRORS

| Sampling Scheme | GSRP T | Correction Filter Q | ‖Rx‖ = ‖P_W x − Tx‖ᵃ | Lower Bound | Upper Bound |
|---|---|---|---|---|---|
| Orthogonalᵇ | P_W | (W*W)†W*S(S*S)† | 0 | — | — |
| Consistent | P_{WS⊥} | (S*W)† | ‖P_W P_{S⊥W} x‖ | [cos(W⊥,S)/sin(W⊥,S)]‖P_{W⊥}x‖ | [sin(W,S)/cos(W,S)]‖P_{W⊥}x‖ |
| Subspace | P_W P_{AS⊥} | (W*W)†W*A(S*A)† | ‖P_W P_{S⊥A} x‖ | [cos(W⊥,S)/sin(A⊥,S)]‖P_{A⊥}x‖ | [sin(W,S)/cos(A,S)]‖P_{A⊥}x‖ |
| Regret | P_W P_S | (W*W)†W*S(S*S)† | ‖P_W P_{S⊥} x‖ | cos(W⊥,S)‖P_{S⊥}x‖ | sin(W,S)‖P_{S⊥}x‖ |
| Constrainedᶜ | P_W P_{BS⊥} | λ(W*W)†W*A(S*A)† + (1−λ)(W*W)†W*S(S*S)† | ‖P_W P_{S⊥B} x‖ | (1+λ²cos²(A⊥,S)/sin²(A⊥,S))^{1/2} cos(W⊥,S)‖P_{B⊥}x‖ | (1+λ²sin²(A,S)/cos²(A,S))^{1/2} sin(W,S)‖P_{B⊥}x‖ |
| Constrained, x ∈ A | P_W − (1−λ)P_W P_{S⊥} | same as above | (1−λ)‖P_W P_{S⊥} x‖ | (1−λ)cos(A,S⊥)cos(W⊥,S)‖x‖ | (1−λ)sin(A,S)sin(W,S)‖x‖ |

ᵃ The absolute error is given by ‖Ex‖² = ‖x − Tx‖² = ‖P_{W⊥}x‖² + ‖Rx‖².
ᵇ This is the optimal sampling scheme but is possible only if W ⊆ S. The corresponding reconstruction error is ‖Ex‖ = ‖P_{W⊥}x‖.
ᶜ The modified subspace is B = R(λP_{AS⊥} + (1 − λ)P_S), λ ∈ [0, 1].

Proof: First of all, since R(P_{S⊥B}) = S⊥, it follows from (67) and (15) that

cos(W⊥, S) ‖P_{S⊥B} x‖ ≤ ‖R_λ x‖ ≤ sin(W, S) ‖P_{S⊥B} x‖.   (71)

Moreover, from (16) and (12), it follows that

‖P_{B⊥} x‖ / sin(B⊥, S) ≤ ‖P_{S⊥B} x‖ ≤ ‖P_{B⊥} x‖ / cos(B, S).   (72)

Consequently, the regret-error enjoys the following estimates:

[cos(W⊥, S)/sin(B⊥, S)] ‖P_{B⊥} x‖ ≤ ‖R_λ x‖ ≤ [sin(W, S)/cos(B, S)] ‖P_{B⊥} x‖.   (73)

We complete the proof by simplifying the above bounds using the following estimates of the trigonometric functions involving the subspace B:
( 1 + λ² sin²(A, S)/cos²(A, S) )^{−1/2} ≤ cos(B, S) ≤ ( 1 + λ² cos²(A, S⊥)/sin²(A, S⊥) )^{−1/2}   (74)

and

( 1 + λ² sin²(A, S)/cos²(A, S) )^{−1/2} ≤ sin(B⊥, S) ≤ ( 1 + λ² cos²(A⊥, S)/sin²(A⊥, S) )^{−1/2}   (75)

which are proved in Appendices D and E, respectively.

Note that the bounds in Theorem 2 specialize to those for the other sampling schemes if λ = 0 or 1. Furthermore, it is important to point out that (B, S) ≤ (A, S) for any λ ∈ [0, 1], since

cos²(B, S) ≥ cos²(A, S) / ( cos²(A, S) + λ² sin²(A, S) ) ≥ cos²(A, S)

in view of the lower bound of (74) and the inequality cos²(A, S) + λ² sin²(A, S) ≤ 1 for λ ∈ [0, 1]. In other words, the modified subspace B inclines more towards S than the input subspace A does. This explains from another perspective why the constrained sampling generally leads to a smaller maximum possible error than subspace sampling.

It is pointed out that with the simple choice of parameter

0 ≤ λ ≤ cos(A, S)   (76)

the regret-error in (70) is seen to be bounded as

‖R_λ x‖ ≤ √2 ‖x‖,  x ∈ H.   (77)

Then, the absolute error is bounded as

‖E_λ x‖ ≤ √3 ‖x‖,  x ∈ H.   (78)

Finally, we turn to bounds on the reconstruction errors for signals in the input subspace A. If x ∈ A, then

‖R_λ x‖ = ‖P_W P_{S⊥B} x‖ = ‖P_W [λP_{S⊥A} + (1 − λ)P_{S⊥}] x‖ = (1 − λ)‖P_W P_{S⊥} x‖ ≤ (1 − λ) sin(S⊥, W⊥) ‖P_{S⊥} x‖

where the first step is from (67) and the second step is from (60). Thus, using (12) and (14), we obtain an upper bound on the regret-error:

‖R_λ x‖ ≤ (1 − λ) sin(A, S) sin(W, S) ‖x‖,  x ∈ A.   (79)

Similarly, we can also obtain a lower bound on the regret-error:

‖R_λ x‖ ≥ (1 − λ) cos(A, S⊥) cos(W⊥, S) ‖x‖,  x ∈ A.   (80)
It then follows, from (26), (66), and (14), that the absolute error is bounded as

α_A ‖x‖ ≤ ‖E_λ x‖ ≤ β_A ‖x‖,  x ∈ A   (81)

where the scalars are

α_A = ( cos²(A, W⊥) + (1 − λ)² cos²(A, S⊥) cos²(W⊥, S) )^{1/2}

and

β_A = ( sin²(A, W) + (1 − λ)² sin²(A, S) sin²(W, S) )^{1/2}.

Table II summarizes the key results on all the sampling schemes considered in this paper.
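The bounds of Theorem 2 can be probed numerically. The sketch below is a finite-dimensional stand-in (equal-dimensional random subspaces so that the angles are non-degenerate; all names are illustrative) that computes α_λ and β_λ from principal angles and compares them with the observed regret-error ratios; it is an empirical check, not a proof:

```python
import numpy as np
from scipy.linalg import null_space, orth, subspace_angles

rng = np.random.default_rng(9)
d, m = 6, 3
S, W, A = (rng.standard_normal((d, m)) for _ in range(3))
lam = 0.4

P_S = S @ np.linalg.pinv(S)
P_W = W @ np.linalg.pinv(W)
P_AS = A @ np.linalg.inv(S.T @ A) @ S.T        # oblique projector P_{A S-perp}
B_op = lam * P_AS + (1 - lam) * P_S            # Eq. (60)
Bb = orth(B_op)                                # orthonormal basis of B = R(B)
P_Bp = np.eye(d) - Bb @ Bb.T                   # orthogonal projector P_{B-perp}

ang = lambda U, V: subspace_angles(U, V)[0]    # maximal principal angle
Wp, Ap = null_space(W.T), null_space(A.T)      # bases of W-perp and A-perp

alpha = np.sqrt(1 + lam**2 / np.tan(ang(Ap, S))**2) * np.cos(ang(Wp, S))
beta  = np.sqrt(1 + lam**2 * np.tan(ang(A, S))**2) * np.sin(ang(W, S))

T_lam = P_W @ B_op                             # constrained GSRP, Eq. (63)
xs = rng.standard_normal((d, 5000))
r = np.linalg.norm(P_W @ xs - T_lam @ xs, axis=0)
ratio = r / np.linalg.norm(P_Bp @ xs, axis=0)
print(alpha, ratio.min(), ratio.max(), beta)   # expect alpha <= min <= max <= beta
```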
VI. EXAMPLES
We now provide two illustrative examples, which consider the reconstruction of a typical Gaussian signal and of a speech signal. These examples demonstrate the effectiveness of the proposed constrained sampling.
A. Gaussian Signal
Most natural signals are approximately band-limited and can be adequately modelled as Gaussian signals. We now consider the reconstruction of a Gaussian signal of unit energy:

x(t) = (2/(πσ²))^{1/4} exp(−t²/σ²)   (82)

where σ is a fixed value between 0 and 1.

Assume that the sampling period T is one (i.e., the Nyquist radian frequency is π) and the sampling space S is the shiftable subspace generated by the B-spline of order zero:

s(t) = β⁰(t) = 1 for t ∈ [−0.5, 0.5), and 0 otherwise.   (83)

In other words, S is spanned by the frame vectors {β⁰(t − n)}_{n∈Z}. Since x has most of its energy at frequencies below π, it is reasonable to assume that A is the subspace of π-bandlimited signals. In this situation, we have cos(A, S) = 2/π ≈ 0.64, which can be calculated by [7]

cos²(A, S) = inf_{ω∈[0,2π)} |Σ_{n∈Z} ŝ*(ω + 2πn) â(ω + 2πn)|² / ( Σ_{n∈Z} |ŝ(ω + 2πn)|² Σ_{n∈Z} |â(ω + 2πn)|² )

where "^" represents the Fourier transform and a(t) = sinc(t). We further assume that the reconstruction space W is the shiftable subspace generated by the cubic B-spline [21]

w(t) = β³(t) = [β⁰ ∗ β⁰ ∗ β⁰ ∗ β⁰](t)   (84)

where "∗" is the convolution operator.

Fig. 4 presents the signal-to-noise ratio (SNR), SNR = 20 log₁₀(‖x‖/‖Ex‖) dB, of the reconstruction error Ex for the sampling schemes. We can observe from Fig. 4 that 1) the performance of the constrained sampling is never below that of the minimax regret sampling regardless of the value of λ, demonstrating the conservativeness of the regret sampling for inputs close to A; 2) the constrained sampling achieves a better reconstruction than the subspace sampling over a wide interval of λ; and 3) with the simple choice λ = cos(A, S), the constrained sampling improves upon both the subspace and the minimax regret samplings.

Fig. 4. Reconstruction error ‖Ex‖ of a Gaussian signal for all four sampling schemes (S and W are generated by β⁰ and β³, respectively, and A is the π-bandlimited subspace).

We recall that the Gaussian signal in (82) is quite close to the π-bandlimited subspace A since (x, A) ≈ 14°. This closeness explains the worst performance of the minimax regret sampling, which does not take advantage of any a priori information on the input x; its SNR falls below that of the subspace sampling. On the other hand, since x does not completely belong to A, the performance of the subspace sampling is also improved upon by our constrained sampling, which is capable of limiting the reconstruction error due to the frequency content beyond π. The improvement can be significant if the parameter λ is properly selected. Furthermore, it is worth pointing out the existence of an optimal value of λ, close to cos(A, S), at which ‖E_λ x‖ comes very close to the optimal error ‖E_opt x‖, demonstrating the high potential of constrained sampling in approaching the orthogonal projection.
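The Fourier-domain formula above is easy to evaluate numerically. The sketch below does so for the order-zero B-spline and the π-bandlimited space; the value it returns, 2/π ≈ 0.6366, is the cos(A, S) quoted above (the grid resolution and alias truncation are assumptions of this sketch):

```python
import numpy as np

s_hat = lambda w: np.sinc(w / (2 * np.pi))     # FT of beta^0: sin(w/2)/(w/2)

w = np.linspace(1e-6, np.pi - 1e-6, 10_000)    # a-hat vanishes outside |w| <= pi,
n = np.arange(-50, 51)[:, None]                # so only the n = 0 alias contributes
num = np.abs(s_hat(w)) ** 2                    # |sum_n s-hat* a-hat|^2 on |w| < pi
den = (s_hat(w + 2 * np.pi * n) ** 2).sum(0)   # sum_n |s-hat|^2 (sum_n |a-hat|^2 = 1)

cosAS = np.sqrt((num / den).min())
print(cosAS, 2 / np.pi)                        # both are approximately 0.6366
```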
B. Speech Signal

In this example, the input signal is chosen to be a speech signal sampled on a fine grid at a rate of several kHz. (The speech data is downloaded from https://catalog.ldc.upenn.edu/.) Since the sampling rate is sufficiently high, the discrete-time speech signal x[n] accurately approximates the continuous-time signal x(t) on the fine grid. We assume that the sampling process S* is an integration over one sampling period T:

c[n] = (1/T) ∫_{nT−T/2}^{nT+T/2} x(t) dt,

where T = 1/4000 s. This is equivalent to assuming s(t) = (1/T)β⁰(t/T), or to discrete-time filtering on the fine grid with a short moving-average (boxcar) filter followed by decimation.

Since the original continuous-time signal is sampled at a rate of several kHz, we assume that the subspace A is a space of correspondingly bandlimited signals. For calculation, we use a zero-phase discrete-time FIR low-pass filter to simulate A on the fine grid. The selected A is equivalent to a continuous-time low-pass filter with support t ∈ [−T, T] which approximates sinc(4t/T). For the synthesis, we let w_n(t) = w(t − nT), where w(t) is chosen to have a time support of t ∈ [−T, T] and to render a low-pass filter with cutoff (i.e., Nyquist) frequency 1/(2T). On the fine grid, this synthesis process is implemented via a discrete-time low-pass FIR filter.

In the experiment, following [6], we randomly chose several segments (each consisting of a fixed number of consecutive samples) of the speech signal. The segments are found to be far away from the a priori subspace A. Fig. 5 shows the reconstruction errors (averaged over all the segments) of the sampling schemes. As expected, the minimax regret sampling outperforms the subspace sampling, and accordingly our constrained sampling always outperforms the subspace sampling (see also Proposition 3). Moreover, when λ is small enough, the constrained sampling also outperforms the minimax regret sampling; in particular, this holds for the simple choice λ = cos(A, S). Also note that at the optimum value of λ, the reconstruction error of the constrained sampling is only marginally away from that of the orthogonal projection. This result again shows the potential of the constrained sampling in approaching the optimal reconstruction.

Fig. 5. Reconstruction error ‖Ex‖ of a speech signal for all four sampling schemes (S is generated by β⁰(t/T), W is generated by a non-ideal low-pass filter with time support [−T, T], T = 1/4000 s, and A is a bandlimited subspace).

The two examples above clearly demonstrate the effectiveness of the proposed constrained sampling over the minimax regret and subspace samplings when the input signals can be modelled (properly to some extent, but not precisely) by a subspace. The results show that the constrained sampling is robust to model uncertainties and that it can potentially approach the optimal reconstruction when the parameter λ is made adaptive to the input characteristics, even if the input is away from the input subspace.
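The integration sampler above reduces, on the fine grid, to a block average; a minimal sketch (the fine-grid rate and the test signal are assumptions of this illustration, not the data used in the paper):

```python
import numpy as np

fine_rate = 16000                 # assumed fine-grid rate (illustrative only)
coarse_rate = 4000                # GSRP sampling rate 1/T
L = fine_rate // coarse_rate      # fine-grid samples per sampling period T

rng = np.random.default_rng(10)
x = rng.standard_normal(4 * fine_rate)          # stand-in for a speech segment

# c[n] = (1/T) * integral of x over one period, i.e., mean of L consecutive samples
c = x[: len(x) // L * L].reshape(-1, L).mean(axis=1)
# A correction filter Q_lambda and a low-pass synthesis filter would then be
# applied to c, as described in the text, to produce the reconstruction.
```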
VII. CONCLUSIONS

This paper re-examined the sampling schemes for generalized sampling-reconstruction processes (GSRPs). Existing GSRPs, namely the consistent, subspace, and minimax regret GSRPs, assume that the input subspace is either fully known or completely ignored. To address this limitation, we proposed constrained sampling, a new sampling scheme that is designed to minimize the reconstruction error for inputs that lie within a known subspace while simultaneously bounding the maximum regret-error for all other signals. The constrained sampling formulation leads to a convex combination of the subspace and the minimax regret samplings. It also yields an equivalent subspace sampling process with a modified input subspace. The constrained sampling is shown to be 1) (sub)optimal for signals in the input subspace, 2) robust for signals around the input subspace, 3) reasonably bounded for any signal in the entire space, and 4) flexible and easy to implement as a combination of the subspace and regret samplings. We also presented a detailed theoretical analysis of the reconstruction error of the proposed sampling. Additionally, we demonstrated the effectiveness of constrained sampling through two illustrative examples. Our results suggest that the proposed sampling could potentially approach the optimum reconstruction (i.e., the orthogonal projection). It would be intriguing to study the optimal selection of the parameter in the convex combination when more a priori information about the input signals becomes available.
APPENDIX A
PROOF OF INEQUALITY (54)

As in the corresponding proof in [6], we represent any x in D = {x : ‖x‖ ≤ L, c = S*x} as

x = P_S x + P_{S⊥} x = S(S*S)†c + v

for some v in G := {v ∈ S⊥ : ‖v‖² ≤ L² − ‖S(S*S)†c‖²}. Let a_c := WQc − P_W S(S*S)†c. Then

‖Rx‖² = ‖P_W x − WQS*x‖² = ‖P_W S(S*S)†c + P_W v − WQc‖² = ‖P_W v − a_c‖² = ‖P_W v‖² − 2ℜ{⟨P_W v, a_c⟩} + ‖a_c‖².

Let

v₀ := −(⟨P_W v, a_c⟩ / |⟨P_W v, a_c⟩|) v.
Clearly, ‖v₀‖ = ‖v‖, and v₀ ∈ G if and only if v ∈ G. Consequently,

max_{x∈D} ‖Rx‖² = max_{v∈G} { ‖P_W v‖² + 2|⟨P_W v, a_c⟩| + ‖a_c‖² }
≥ ‖a_c‖² + max_{v∈G} ‖P_W v‖²
= ‖a_c‖² + max_{x∈D} ‖P_W(x − P_S x)‖²
= ‖WQc − P_W S(S*S)†c‖² + max_{x∈D} ‖P_W x − P_W P_S x‖².

On the other hand, since for any complex numbers z₁ and z₂ we have |z₁|² + |z₂|² ≥ (1/2)(|z₁| + |z₂|)², we get

max_{x∈D} ‖Rx‖ ≥ (1/√2) ( ‖WQc − P_W S(S*S)†c‖ + max_{x∈D} ‖P_W x − P_W P_S x‖ ).

The proof is complete.
APPENDIX B
PROOF OF THEOREM 1

Let c ∈ R(S*) be any given sample sequence. We first show that A ∩ D (in the objective function) contains only one element. If x ∈ A, then under the direct-sum property A ⊕ S⊥ = H, we have

x = P_{AS⊥} x = A(S*A)†S*x.

On the other hand, if x ∈ D, then S*x = c according to the definition of D in (43). Therefore,

x = A(S*A)†c.   (85)

For the constraint, we denote the set of admissible correction filters that satisfy the regret constraint as

D_Q := {Q : ‖P_W S(S*S)†c − WQc‖ ≤ λβ₀(c)}

where β₀(c) is given in (56) and λ ∈ [0, 1]. The optimization problem in (57) now becomes

min_{Q∈D_Q} ‖A(S*A)†c − WQc‖.   (86)

Invoking the orthogonal decomposition of A(S*A)†c − WQc onto W and W⊥ and using the triangle inequality, we have, for any Q ∈ D_Q, that the objective function in (86) satisfies

‖A(S*A)†c − WQc‖² = ‖P_W A(S*A)†c − WQc‖² + ‖P_{W⊥} A(S*A)†c‖²
≥ | ‖P_W S(S*S)†c − WQc‖ − ‖P_W S(S*S)†c − P_W A(S*A)†c‖ |² + ‖P_{W⊥} A(S*A)†c‖²
= | ‖P_W S(S*S)†c − WQc‖ − β₀(c) |² + ‖P_{W⊥} A(S*A)†c‖²
≥ (1 − λ)² β₀²(c) + ‖P_{W⊥} A(S*A)†c‖².   (87)

Substituting Q_λ = λQ_sub + (1 − λ)Q_reg into (86), we see that the lower bound in (87) is attained. That completes the proof.

APPENDIX C
PROOF OF PROPOSITION 2

Since P_{AS⊥} and P_S have the same nullspace S⊥, applying Proposition 1 to B in (60) shows that B is also a projection. It remains to be shown that N(B) = S⊥. It suffices to show that Bx = 0 if and only if P_S x = 0, which can be proved by an alternative expression of B (in terms of P_S and P_{S⊥}P_{AS⊥}):

B = λP_{AS⊥} + (1 − λ)P_S
= λP_{AS⊥} + (1 − λ)P_S P_{AS⊥}
= [λI + (1 − λ)P_S] P_{AS⊥}
= [P_S + λ(I − P_S)] P_{AS⊥}
= [P_S + λP_{S⊥}] P_{AS⊥}
= P_S + λP_{S⊥} P_{AS⊥}   (88)

where the second step is from (4), the second-to-last step is due to (3), and the last step is from (4). For any x ∈ H, since P_S x and P_{S⊥}P_{AS⊥} x are perpendicular to each other, the statement then follows immediately. The proof is complete.

APPENDIX D
PROOF OF THE BOUNDS OF cos(B, S) IN (74)

Since N(B) = S⊥, we have from (10) that

cos(B, S) = inf_{x∉S⊥} f(x)   (89)

where

f(x) := ‖P_S Bx‖ / ‖Bx‖.   (90)

Since B = P_S + λP_{S⊥}P_{AS⊥} (see (88)), we have

f(x) = ‖P_S(P_S + λP_{S⊥}P_{AS⊥})x‖ / ‖P_S x + λP_{S⊥}P_{AS⊥} x‖
= ‖P_S x‖ / ( ‖P_S x‖² + λ²‖P_{S⊥}P_{AS⊥} x‖² )^{1/2}
= ( 1 + λ² ‖P_{S⊥}P_{AS⊥} x‖² / ‖P_S x‖² )^{−1/2}   (91)

where the second step for the denominator is due to the orthogonality of P_{S⊥}P_{AS⊥} x to P_S x. From (14), it holds that

cos(A, S⊥) ‖P_{AS⊥} x‖ ≤ ‖P_{S⊥}P_{AS⊥} x‖ ≤ sin(A, S) ‖P_{AS⊥} x‖.   (92)

Then, from (16), it follows that P_{AS⊥} x satisfies

‖P_S x‖ / sin(A, S⊥) ≤ ‖P_{AS⊥} x‖ ≤ ‖P_S x‖ / cos(A, S).   (93)

Combining (92) and (93) yields

[cos(A, S⊥)/sin(A, S⊥)] ‖P_S x‖ ≤ ‖P_{S⊥}P_{AS⊥} x‖ ≤ [sin(A, S)/cos(A, S)] ‖P_S x‖.   (94)

As a result, we have from (91) that
( 1 + λ² sin²(A, S)/cos²(A, S) )^{−1/2} ≤ f(x) ≤ ( 1 + λ² cos²(A, S⊥)/sin²(A, S⊥) )^{−1/2}.   (95)

Then (74) follows immediately from (89) and (95).

APPENDIX E
PROOF OF THE BOUNDS OF sin(B⊥, S) IN (75)

Since N(P_{B⊥S}) = S, we have from (11) that

sin(B⊥, S) = sup_{x∉S} g(x)   (96)

where

g(x) := ‖P_{S⊥}P_{B⊥S} x‖ / ‖P_{B⊥S} x‖.   (97)

According to [18], the adjoint operator of any projection P_{V₁V₂} is also a projection, and furthermore

P*_{V₁V₂} = P_{V₂⊥V₁⊥}.   (98)

Hence,

P_{B⊥S} = I − P_{SB⊥} = I − B*
= I − (λP_{AS⊥} + (1 − λ)P_S)*
= I − (λP_{SA⊥} + (1 − λ)P_S)
= λP_{A⊥S} + (1 − λ)P_{S⊥}
= λP_{A⊥S} + (1 − λ)P_{S⊥}P_{A⊥S}
= [λI + (1 − λ)P_{S⊥}] P_{A⊥S}
= [P_{S⊥} + λP_S] P_{A⊥S}
= P_{S⊥} + λP_S P_{A⊥S}.

Note that g(x) in (97) has the same form as f(x) in (90), except that all the subspaces involved are replaced by their respective orthogonal complements. Using (95) and noting that

(A⊥, S⊥) = (S, A) = (A, S),

we finally obtain
( 1 + λ² sin²(A, S)/cos²(A, S) )^{−1/2} ≤ g(x) ≤ ( 1 + λ² cos²(A⊥, S)/sin²(A⊥, S) )^{−1/2}.

Then, inequality (75) follows immediately.

REFERENCES
[1] T. Hentschel and G. Fettweis, "Sample rate conversion for software radio," IEEE Commun. Mag., vol. 38, no. 8, pp. 142–150, 2000.
[2] T. M. Lehmann, C. Gonner, and K. Spitzer, "Survey: Interpolation methods in medical image processing," IEEE Trans. Med. Imag., vol. 18, no. 11, pp. 1049–1075, 1999.
[3] S. Farsiu, D. Robinson, M. Elad, and P. Milanfar, "Advances and challenges in super-resolution," Int. J. Imag. Syst. Technol., vol. 14, no. 2, pp. 47–57, 2004.
[4] A. Ortega, P. Frossard, J. Kovačević, J. M. Moura, and P. Vandergheynst, "Graph signal processing: Overview, challenges, and applications," Proc. IEEE, vol. 106, no. 5, pp. 808–828, 2018.
[5] S. Chen, R. Varma, A. Sandryhaila, and J. Kovačević, "Discrete signal processing on graphs: Sampling theory," IEEE Trans. Signal Process., vol. 63, no. 24, pp. 6510–6523, 2015.
[6] Y. C. Eldar and T. G. Dvorkind, "A minimum squared-error framework for generalized sampling," IEEE Trans. Signal Process., vol. 54, no. 6, pp. 2155–2167, 2006.
[7] M. Unser and A. Aldroubi, "A general sampling theory for nonideal acquisition devices," IEEE Trans. Signal Process., vol. 42, no. 11, pp. 2915–2925, 1994.
[8] M. Unser and J. Zerubia, "A generalized sampling theory without band-limiting constraints," IEEE Trans. Circuits Syst. II, Analog Digit. Signal Process., vol. 45, no. 8, pp. 959–969, 1998.
[9] Y. C. Eldar, "Sampling with arbitrary sampling and reconstruction spaces and oblique dual frame vectors," J. Fourier Anal. Appl., vol. 9, no. 1, pp. 77–96, 2003.
[10] Y. C. Eldar, "Sampling without input constraints: Consistent reconstruction in arbitrary spaces," in Sampling, Wavelets, and Tomography, pp. 33–60, Springer, 2004.
[11] T. G. Dvorkind, Y. C. Eldar, and E. Matusiak, "Nonlinear and nonideal sampling: Theory and methods," IEEE Trans. Signal Process., vol. 56, no. 12, pp. 5874–5890, 2008.
[12] Y. C. Eldar and M. Unser, "Nonideal sampling and interpolation from noisy observations in shift-invariant spaces," IEEE Trans. Signal Process., vol. 54, no. 7, pp. 2636–2651, 2006.
[13] A. Hirabayashi and M. Unser, "Consistent sampling and signal recovery," IEEE Trans. Signal Process., vol. 55, no. 8, pp. 4104–4115, 2007.
[14] T. G. Dvorkind and Y. C. Eldar, "Robust and consistent sampling," IEEE Signal Process. Lett., vol. 16, no. 9, pp. 739–742, 2009.
[15] M. L. Arias and C. Conde, "Generalized inverses and sampling problems," J. Math. Anal. Appl., vol. 398, no. 2, pp. 744–751, 2013.
[16] K. H. Kwon and D. G. Lee, "Generalized consistent sampling in abstract Hilbert spaces," J. Math. Anal. Appl., vol. 433, no. 1, pp. 375–391, 2016.
[17] A. Knyazev, A. Gadde, H. Mansour, and D. Tian, "Guided signal reconstruction theory," arXiv preprint arXiv:1702.00852, 2017.
[18] O. Christensen and Y. C. Eldar, "Oblique dual frames and shift-invariant spaces," Appl. Comput. Harmon. Anal., vol. 17, no. 1, pp. 48–68, 2004.
[19] W.-S. Tang, "Oblique projections, biorthogonal Riesz bases and multiwavelets in Hilbert spaces," Proc. Amer. Math. Soc., vol. 128, no. 2, pp. 463–473, 2000.
[20] E. Kreyszig, Introductory Functional Analysis with Applications. New York: Wiley, 1978.
[21] M. Unser, "Splines: A perfect fit for signal and image processing," IEEE Signal Process. Mag., vol. 16, no. 6, pp. 22–38, 1999.
[22] B. Adcock, A. C. Hansen, and C. Poon, "Beyond consistent reconstructions: Optimality and sharp bounds for generalized sampling, and application to the uniform resampling problem," SIAM J. Math. Anal., vol. 45, no. 5, pp. 3132–3167, 2013.
[23] A. Anis, A. Gadde, and A. Ortega, "Efficient sampling set selection for bandlimited graph signals using graph spectral proxies," IEEE Trans. Signal Process., vol. 64, no. 14, pp. 3775–3789, 2016.
[24] T. Blu, P. Thévenaz, and M. Unser, "Linear interpolation revitalized," IEEE Trans. Image Process., vol. 13, no. 5, pp. 710–719, 2004.
[25] T. Blu and M. Unser, "Quantitative Fourier analysis of approximation techniques: Part I, Interpolators and projectors," IEEE Trans. Signal Process., vol. 47, no. 10, pp. 2783–2795, 1999.
[26] T. G. Dvorkind, H. Kirshner, Y. C. Eldar, and M. Porat, "Minimax approximation of representation coefficients from generalized samples," IEEE Trans. Signal Process., vol. 55, no. 9, pp. 4430–4443, 2007.
[27] Y. C. Eldar, "Mean-squared error sampling and reconstruction in the presence of noise," IEEE Trans. Signal Process., vol. 54, no. 12, pp. 4619–4633, 2006.
[28] Y. C. Eldar and T. Michaeli, "Beyond bandlimited sampling," IEEE Signal Process. Mag., vol. 26, no. 3, pp. 48–68, 2009.
[29] B. B. Haro and M. Vetterli, "Sampling continuous-time sparse signals: A frequency-domain perspective," IEEE Trans. Signal Process., vol. 66, no. 6, pp. 1410–1424, 2018.
[30] A. Hirabayashi, "Consistent sampling and efficient signal reconstruction," IEEE Signal Process. Lett., vol. 16, no. 12, pp. 1023–1026, 2009.
[31] A. Knyazev, A. Jujunashvili, and M. Argentati, "Angles between infinite dimensional subspaces with applications to the Rayleigh-Ritz and alternating projectors methods," arXiv preprint arXiv:0705.1023, 2007.
[32] T. Košir and M. Omladič, "Normalized tight vs. general frames in sampling problems," Adv. Oper. Theory, vol. 2, no. 2, pp. 114–125, 2017.
[33] B. Sadeghi and R. Yu, "Shift-variance and cyclostationarity of linear periodically shift-variant systems," 2013.
[34] B. Sadeghi and R. Yu, "Shift-variance and nonstationarity of linear periodically shift-variant systems and applications to generalized sampling-reconstruction processes," IEEE Trans. Signal Process., vol. 64, no. 6, pp. 1493–1506, 2016.
[35] B. Sadeghi, R. Yu, and R. Wang, "Shifting interpolation kernel toward orthogonal projection," IEEE Trans. Signal Process., vol. 66, no. 1, pp. 101–112, 2018.
[36] J. Shi, X. Liu, L. He, M. Han, Q. Li, and N. Zhang, "Sampling and reconstruction in arbitrary measurement and approximation spaces associated with linear canonical transform," IEEE Trans. Signal Process., vol. 64, no. 24, pp. 6379–6391, 2016.
[37] M. Unser, "Sampling—50 years after Shannon," Proc. IEEE, vol. 88, pp. 569–587, 2000.
[38] M. Vetterli, J. Kovačević, and V. K. Goyal, Foundations of Signal Processing. Cambridge, U.K.: Cambridge Univ. Press, 2014.
[39] L. Xu, R. Tao, and F. Zhang, "Multichannel consistent sampling and reconstruction associated with linear canonical transform," IEEE Signal Process. Lett., vol. 24, no. 5, pp. 658–662, 2017.