Oversampled Adaptive Sensing via a Predefined Codebook
Ali Bereyhi, Saba Asaad, and Ralf R. Müller
Friedrich-Alexander Universität Erlangen-Nürnberg, Germany
{ali.bereyhi, saba.asaad, ralf.r.mueller}@fau.de
Abstract—Oversampled adaptive sensing (OAS) is a Bayesian framework recently proposed for effective sensing of structured signals in a time-limited setting. In contrast to conventional blind oversampling, OAS uses the prior information on the signal to construct posterior beliefs sequentially. These beliefs help in constructive oversampling which iteratively evolves through a sequence of time subframes. The initial studies of OAS consider the idealistic assumption of full control on the sensing coefficients, which is not feasible in many applications. In this work, we extend the initial investigations on OAS to more realistic settings in which the sensing coefficients are selected from a predefined set of possible choices, referred to as the codebook. We extend the OAS framework to these settings and compare its performance with classical non-adaptive approaches.
Index Terms—Oversampled adaptive sensing, sparse recovery, Bayesian inference, stepwise regression.
I. INTRODUCTION
Consider the following classical problem which arises in several sensing scenarios: A set of signal samples $x_1, \ldots, x_N$ are to be sensed in a noisy environment via $K$ sensors within a limited time frame. Each sensor is tunable and can observe various linear combinations of the signal samples. The ultimate goal is to collect observations from which the signal samples are recovered with minimum distortion. There are in general two approaches to address this goal:

1) The undersampled non-adaptive approach, in which the sensors are tuned once at the beginning of the time frame and kept fixed for the whole duration. In this case, $K$ high-quality observations, i.e., observations with high signal-to-noise ratio (SNR), are collected. These are then given to an estimator to recover the signal samples.

2) The oversampled adaptive sensing (OAS) framework [1]–[3], in which the given time frame is divided into multiple subframes. In each subframe, $K$ observations are collected. These observations, along with those collected in previous subframes, are used to decide on a new tuning strategy for the next subframe. Clearly, a larger number of observations are collected in this case. This increase in the number of observations is obtained at the expense of lower quality for each particular observation: since subframes are shorter than the whole time frame, the noise variance is higher, which results in lower SNR for a particular observation. This point is illustrated in Section II.

This work has been presented at the 2021 IEEE International Symposium on Joint Communications & Sensing (JC&S). The link to the final version in the proceedings will be available later.
Despite initial studies on adaptive approaches for signal sensing, e.g., [4]–[6], it was commonly believed in the literature that adaptation does not result in a significant performance enhancement, the assumption being that the trade-off between the quality and the quantity of the observations leads to no performance gain. In [1], this myth was shown to be wrong. OAS illustrates this trade-off between the number of observations and their quality as follows: For cases with some prior information on the signal samples, e.g., sparse recovery, the adaptation can result in a significant enhancement. Nevertheless, in the lack of prior information, the conventional belief holds.

The superiority of the OAS framework is intuitively understood as follows: In the presence of prior information, several samples, for instance the zero samples in the example of sparse recovery, are effectively recovered in the primary subframes. By omitting those samples, the space of unknowns shrinks, and hence the recovery can be performed effectively.

A. Contributions
The initial studies of OAS consider no restrictions on the sensor tuning. This is an idealistic assumption which is not necessarily satisfied in practice; see for instance the example given in Section I-B. It is hence more realistic to assume that the tuning of sensors is controlled by a predefined codebook which includes the available choices of tuning. Given this new restriction, this study develops an OAS algorithm which collects observations in each subframe using a predefined codebook. The performance of the proposed algorithm is investigated in various respects, and the impacts of limited codebook size are clarified through several experiments.
B. Motivation and Applications
To give an intuition on sensing via a predefined codebook, let us consider the following example: A signal is sampled via a preinstalled sensor network within a limited time frame. At a given time, only a subset of sensors is active in the network. To apply OAS, the time frame is divided into several subframes, and the set of active sensors is altered in each subframe. Clearly, the sensing procedure in this setting is restricted by the preinstalled network and cannot be changed arbitrarily.

Similar to the given example, in several practical scenarios, sensing is performed by means of a preinstalled setting whose possible configurations for observing a signal are restricted by a codebook. Examples of such scenarios are found in radar and positioning systems. In these applications, it is often the case that prior information is available on the target signal; for instance, in many radar systems, the observed signal is known to be sparse. The main motivation of this study is to extend the scope of OAS to these applications.
C. Notation
Scalars, vectors and matrices are shown with non-bold, bold lower-case and bold upper-case letters, respectively. $\mathbf{I}_K$ and $\mathbf{0}_{K \times N}$ are the $K \times K$ identity matrix and the $K \times N$ all-zero matrix, respectively. $\mathbf{A}^\mathsf{T}$ denotes the transpose of $\mathbf{A}$. The set of real numbers is shown by $\mathbb{R}$. We use the shortened notation $[N]$ to represent $\{1, \ldots, N\}$.

II. PROBLEM FORMULATION
Consider $\mathbf{x} \in \mathbb{R}^N$ containing $N$ signal samples. We postulate that the samples are independent and identically distributed (i.i.d.) with prior distribution $q(x)$; this postulation is not necessarily matched to the true distribution, see [1] for detailed illustrations. These samples are observed via $K$ sensors within a restricted time interval of duration $T$. A particular sensor $k \in [K]$ observes a noisy linear combination of the signal samples, i.e.,
$$y_k = \mathbf{a}_k^\mathsf{T} \mathbf{x} + z_k \qquad (1)$$
for additive noise $z_k$ and the vector of coefficients $\mathbf{a}_k \in \mathbb{R}^N$:

1) The noise power is inversely proportional to the observation time: Assume that sensor $k$ operates for a time frame of length $t \leq T$. Then, $z_k$ is assumed to be zero-mean Gaussian with variance $\sigma^2(t) = \sigma^2 / t$ for some $\sigma^2 \geq 0$.

2) The coefficient vectors $\mathbf{a}_k$ for $k \in [K]$ are selected from a predefined codebook $\mathcal{C} = \{\mathbf{c}_1, \ldots, \mathbf{c}_S\}$, where $S \geq K$ and $\mathbf{c}_s \in \mathbb{R}^N$ for $s \in [S]$. This means that
$$\{\mathbf{a}_1, \ldots, \mathbf{a}_K\} \subseteq \mathcal{C} = \{\mathbf{c}_1, \ldots, \mathbf{c}_S\}. \qquad (2)$$

There are two key points in the sensing model (1) which deviate from the classical models:

1) The sensing quality is time-dependent.

2) The sensing is performed using a predefined codebook.

The time-dependent model follows the fact that the SNR of a particular observation scales linearly with the duration of the observation. This is a typical assumption in OAS. The latter point is the key difference between the considered system model and the typical settings considered for OAS.
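To make the sensing model concrete, the following Python sketch simulates (1) for a codebook-constrained sensor array with time-dependent noise. The function and variable names are ours, chosen for illustration; this is a minimal sketch of the model, not an implementation from the paper.

```python
import numpy as np

def sense(x, codebook, indices, t, sigma2=1.0, rng=None):
    """Collect noisy observations y_k = a_k^T x + z_k as in (1).

    The sensing vectors are rows of the predefined codebook, and the
    noise variance scales as sigma2 / t: a shorter observation time t
    yields noisier observations."""
    rng = np.random.default_rng() if rng is None else rng
    A = codebook[indices, :]  # K x N sensing matrix with rows drawn from C
    z = rng.normal(0.0, np.sqrt(sigma2 / t), size=A.shape[0])
    return A @ x + z

# Example: S = 16 codewords for N = 8 samples, K = 4 active sensors.
rng = np.random.default_rng(0)
C = rng.standard_normal((16, 8))
x = rng.standard_normal(8)
y = sense(x, C, indices=[0, 3, 7, 11], t=0.5, rng=rng)
```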
III. OAS FRAMEWORK

In a nutshell, the OAS framework can be represented via the following steps:

(a) The time frame is divided into $M$ subframes.

(b) In subframe $m \in [M]$, the sensors observe
$$\mathbf{y}_m = \mathbf{A}_m \mathbf{x} + \mathbf{z}_m, \qquad (3)$$
where $\mathbf{z}_m \sim \mathcal{N}\!\left(\mathbf{0},\, M\sigma^2(T)\, \mathbf{I}_K\right)$, the factor $M$ arising because each subframe has duration $T/M$, and
$$\mathbf{A}_m = [\mathbf{a}_1(m), \ldots, \mathbf{a}_K(m)]^\mathsf{T} \qquad (4)$$
with $\{\mathbf{a}_1(m), \ldots, \mathbf{a}_K(m)\} \subseteq \mathcal{C}$ containing the vectors selected in subframe $m$.

(c) A processing unit collects the observations up to subframe $m$ in a matrix of stacked observations
$$\mathsf{Y}_m := [\mathbf{y}_1, \ldots, \mathbf{y}_m]. \qquad (5)$$
It then uses a Bayesian estimator to calculate some soft information. In general, the soft information is written as
$$\hat{\mathbf{x}}_m = \mathbb{E}\{\mathbf{x} \,|\, \mathsf{Y}_m, \mathsf{A}_m\}, \qquad (6)$$
where $\mathsf{A}_m$ denotes the collection of sensing matrices up to subframe $m$, i.e.,
$$\mathsf{A}_m = \{\mathbf{A}_1, \ldots, \mathbf{A}_m\}, \qquad (7)$$
and the expectation is taken with respect to the posterior distribution $p(\mathbf{x} \,|\, \mathsf{Y}_m, \mathsf{A}_m)$, which is calculated from the likelihood using the postulated prior distribution $q(x)$.

(d) The processing unit specifies the sensing matrix of subframe $m+1$ using the soft information; this loop structure is sketched in code after this list. This means that
$$\mathbf{A}_{m+1} = f_{\mathrm{Adp}}(\hat{\mathbf{x}}_m) \qquad (8)$$
for some adaptation function $f_{\mathrm{Adp}}(\cdot)$.
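The interplay of steps (a)–(d) is easiest to see as a loop. Below is a structural Python sketch of this iteration; `sense_fn`, `estimate_fn`, and `adapt_fn` are hypothetical placeholders for (3), (6) and (8), so this is a skeleton of the framework rather than any specific algorithm.

```python
import numpy as np

def oas_loop(sense_fn, estimate_fn, adapt_fn, A_init, M):
    """Skeleton of the OAS iteration, steps (a)-(d)."""
    A = A_init            # initial sensing matrix, rows from the codebook
    Ys, As = [], []       # stacked observations Y_m and sensing matrices A_m
    x_hat = None
    for m in range(M):
        Ys.append(sense_fn(A))                        # subframe observation, eq. (3)
        As.append(A)
        x_hat = estimate_fn(np.column_stack(Ys), As)  # soft estimate, eq. (6)
        A = adapt_fn(x_hat)                           # next sensing matrix, eq. (8)
    return x_hat
```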
A. Performance Characterization

The conventional metric which quantifies the recovery performance is the average distortion: Let $\Delta(\cdot\,;\cdot): \mathbb{R} \times \mathbb{R} \mapsto \mathbb{R}^+_0$ be a distortion function, for instance the Euclidean distance. The distortion between two vectors $\mathbf{x}, \hat{\mathbf{x}} \in \mathbb{R}^N$ with respect to $\Delta(\cdot\,;\cdot)$ is determined as
$$\Delta(\mathbf{x}; \hat{\mathbf{x}}) = \sum_{n=1}^{N} \Delta(x_n; \hat{x}_n). \qquad (9)$$
In an OAS-based algorithm, the signal samples are finally recovered as $\mathbf{x}^\star = g(\hat{\mathbf{x}}_M)$, where $g(\cdot)$ is a decision function, for example a hard or soft thresholding function, used to recover the signal samples from the soft estimate $\hat{\mathbf{x}}_M$. Consequently, the average distortion is
$$D = \mathbb{E}\{\Delta(\mathbf{x}; \mathbf{x}^\star)\}. \qquad (10)$$
A non-adaptive recovery technique can be seen as an OAS-based algorithm with a single time subframe. The performance in this case is hence characterized by setting $M = 1$.
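As a minimal illustration of (9) and (10), the empirical average distortion can be computed as below; the squared-error choice of $\Delta$ and the names are ours.

```python
import numpy as np

def average_distortion(x, x_star, delta=lambda a, b: (a - b) ** 2):
    """Empirical counterpart of (9)-(10): sum the per-sample distortion
    over each vector, then average over realizations (batch axis 0)."""
    per_vector = np.sum(delta(x, x_star), axis=-1)  # eq. (9)
    return float(np.mean(per_vector))               # Monte-Carlo estimate of (10)
```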
IV. OAS VIA A PREDEFINED CODEBOOK

In settings with no predefined codebook, designing OAS-based algorithms deals with high degrees of freedom; in fact, sensing matrices are freely constructed in such scenarios. With a predefined codebook, algorithm design is significantly restricted, since the degrees of freedom are limited by the codebook. In the sequel, we discuss the design via a sample algorithm which performs OAS via a predefined codebook. The algorithm is given in Algorithm 1. In this algorithm:
• The selector operator $\mathrm{Sel}(\mathcal{X})$ (transcribed in code after this list), for an index set $\mathcal{X} \subseteq [N]$ with $L$ indices, returns an $L \times N$ matrix. Setting $\mathbf{P} = \mathrm{Sel}(\mathcal{X})$, the $\ell$-th row of $\mathbf{P}$ is the standard basis vector of $\mathbb{R}^N$ whose non-zero entry occurs at the $\ell$-th index of $\mathcal{X}$; e.g., for $N = 4$ and $\mathcal{X} = \{1, 3\}$, we have
$$\mathrm{Sel}(\mathcal{X}) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}. \qquad (11)$$

• The posterior distribution $p(u \,|\, y, \sigma^2)$ is a scalar distribution and is given by
$$p(u \,|\, y, \sigma^2) = \frac{\exp\left\{-\frac{(y-u)^2}{2\sigma^2}\right\} q(u)}{\int \exp\left\{-\frac{(y-u)^2}{2\sigma^2}\right\} q(u)\, \mathrm{d}u}.$$

• $\mathbb{A}(\mathcal{C}, K, \mathcal{F}, \hat{\mathbf{x}})$ is a selection algorithm selecting $K$ vectors from the codebook $\mathcal{C}$ in order to sense the signal samples which are indexed by $\mathcal{F}$, using the current estimate $\hat{\mathbf{x}}$.
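The selector operator is straightforward to transcribe; the helper below (the name and the 1-based indexing convention are ours, matching the paper's notation) reproduces the example in (11).

```python
import numpy as np

def sel(X, N):
    """Selector operator Sel(X): the ell-th row is the standard basis
    vector of R^N whose non-zero entry sits at the ell-th index of X."""
    X = sorted(X)  # indices are 1-based, as in the paper
    P = np.zeros((len(X), N))
    for ell, n in enumerate(X):
        P[ell, n - 1] = 1.0
    return P

# sel({1, 3}, 4) -> [[1., 0., 0., 0.],
#                    [0., 0., 1., 0.]]
```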
A. Derivation of Algorithm 1

In a nutshell, Algorithm 1 follows these steps: It first focuses on a subset of $L \leq K$ signal samples via worst-case adaptation and selects $K$ vectors from the codebook to sense them. Using the fact that recovery of this subset is a determined problem, the algorithm uses linear filtering to decouple the observations. It then cancels out the other $N - L$ signal samples using the estimates obtained in previous subframes and estimates the $L$ samples via a scalar Bayesian estimator, assuming that the residual terms are Gaussian.

To derive the update rules given in the algorithm, consider subframe $m - 1$, at which the signal samples are estimated as $\hat{x}_1, \ldots, \hat{x}_N$ and posterior distortions $d_1, \ldots, d_N$ are calculated for the signal samples. The posterior distortion $d_n$ for the $n$-th signal sample is determined using $y_n$, which is a statistic on $x_n$, and the estimated noise variance $\hat{\sigma}^2_n$ as
$$d_n = \int (u - \hat{x}_n)^2\, p\!\left(u \,|\, y_n, \hat{\sigma}^2_n\right) \mathrm{d}u. \qquad (12)$$
The statistic and the estimated noise variance are calculated from $\mathsf{Y}_{m-1}$ and $\mathsf{A}_{m-1}$ in a way that will be clarified in the sequel. For $L \leq K$, the algorithm finds the $L$ samples whose posterior distortions are maximum. The indices of these samples are collected in $\mathcal{F}_m \subseteq [N]$. A selection algorithm $\mathbb{A}(\mathcal{C}, K, \mathcal{F}_m, \hat{\mathbf{x}})$ is then used to select $K$ vectors from the codebook $\mathcal{C}$. At this point, we consider a generic selection algorithm and leave the discussion on the design of $\mathbb{A}(\mathcal{C}, K, \mathcal{F}_m, \hat{\mathbf{x}})$ for later.

Let $\mathbf{A}_m$ be the collection of vectors selected in subframe $m$. The observations are hence given by (3), which read
$$\mathbf{y}_m = \mathbf{A}_m \mathbf{x} + \mathbf{z}_m \qquad (13a)$$
$$\phantom{\mathbf{y}_m} = \left(\mathbf{A}_m \mathbf{P}_m^\mathsf{T} \mathbf{P}_m + \mathbf{A}_m \mathbf{E}_m^\mathsf{T} \mathbf{E}_m\right) \mathbf{x} + \mathbf{z}_m \qquad (13b)$$
where $\mathbf{P}_m = \mathrm{Sel}(\mathcal{F}_m)$ and $\mathbf{E}_m = \mathrm{Sel}([N] \setminus \mathcal{F}_m)$. Defining the matrices $\mathbf{Q}_m = \mathbf{A}_m \mathbf{P}_m^\mathsf{T}$ and $\mathbf{W}_m = \mathbf{A}_m \mathbf{E}_m^\mathsf{T}$, we can write
$$\mathbf{y}_m = \mathbf{Q}_m \breve{\mathbf{x}}_m + \mathbf{W}_m \bar{\mathbf{x}}_m + \mathbf{z}_m \qquad (14)$$
where $\breve{\mathbf{x}}_m = \mathbf{P}_m \mathbf{x}$ contains the $L$ samples selected by $\mathcal{F}_m$ and $\bar{\mathbf{x}}_m = \mathbf{E}_m \mathbf{x}$ consists of the remaining $N - L$ signal samples.
Algorithm 1: OAS via a Predefined Codebook
Input: $K$, $L$, codebook $\mathcal{C}$, and postulated prior $q(x_n)$.
Initiate: For $n \in [N]$, set $d_n = +\infty$, $\hat{x}_n = y_n = \hat{\sigma}^2_n = 0$, and $\mathcal{M}_n(0) = \emptyset$.
for $m \in [M]$ do
1) Sort $d_1, \ldots, d_N$ as $d_{i^m_1}, \ldots, d_{i^m_N}$ such that $d_{i^m_1} \geq \ldots \geq d_{i^m_N}$.
2) Set $\mathcal{F}_m$ by worst-case adaptation as $\mathcal{F}_m = \{i^m_1, \ldots, i^m_L\}$, and update $\mathcal{M}_n(m) = \mathcal{M}_n(m-1) \cup \{m\}$ for $n \in \mathcal{F}_m$.
3) Set $\mathbf{A}_m = [\mathbf{a}_1(m), \ldots, \mathbf{a}_K(m)]^\mathsf{T}$ where $\{\mathbf{a}_1(m), \ldots, \mathbf{a}_K(m)\} = \mathbb{A}(\mathcal{C}, K, \mathcal{F}_m, \hat{\mathbf{x}})$.
4) Sense the samples for duration $T/M$ via $\mathbf{A}_m$.
5) Set $\mathbf{P}_m = \mathrm{Sel}(\mathcal{F}_m)$ and $\mathbf{E}_m = \mathrm{Sel}([N] \setminus \mathcal{F}_m)$. Let $\tilde{\mathbf{x}}_m = \mathbf{E}_m \hat{\mathbf{x}}$, $\mathbf{Q}_m = \mathbf{A}_m \mathbf{P}_m^\mathsf{T}$ and $\mathbf{W}_m = \mathbf{A}_m \mathbf{E}_m^\mathsf{T}$, and set $\mathbf{F}_m = \left(\mathbf{Q}_m^\mathsf{T} \mathbf{Q}_m\right)^{-1} \mathbf{Q}_m^\mathsf{T}$.
6) Decouple the observations as $\hat{\mathbf{y}}_m = \mathbf{F}_m (\mathbf{y}_m - \mathbf{W}_m \tilde{\mathbf{x}}_m)$.
7) For $\ell \in [L]$, update
$$y_{i^m_\ell} = y_{i^m_\ell} + \hat{y}_{m,\ell}, \qquad \hat{\sigma}^2_{i^m_\ell} = \hat{\sigma}^2_{i^m_\ell} + M \|\mathbf{f}_{m,\ell}\|^2 \sigma^2$$
with $\mathbf{f}^\mathsf{T}_{m,\ell}$ being the $\ell$-th row of $\mathbf{F}_m$.
8) For $n \in \mathcal{F}_m$, update
$$\hat{x}_n = \int u\, p\!\left(u \,\Big|\, \frac{y_n}{|\mathcal{M}_n(m)|}, \frac{\hat{\sigma}^2_n}{|\mathcal{M}_n(m)|^2}\right) \mathrm{d}u$$
$$d_n = \int (u - \hat{x}_n)^2\, p\!\left(u \,\Big|\, \frac{y_n}{|\mathcal{M}_n(m)|}, \frac{\hat{\sigma}^2_n}{|\mathcal{M}_n(m)|^2}\right) \mathrm{d}u$$
end for

Since the entries of $\bar{\mathbf{x}}_m$ are not estimated in this subframe, we approximate them by the estimates of the previous subframe, i.e., $\bar{\mathbf{x}}_m \approx \tilde{\mathbf{x}}_m = \mathbf{E}_m \hat{\mathbf{x}}$. We hence can write
$$\mathbf{y}_m - \mathbf{W}_m \tilde{\mathbf{x}}_m \approx \mathbf{Q}_m \breve{\mathbf{x}}_m + \mathbf{z}_m. \qquad (15)$$
Noting that $\mathbf{Q}_m \in \mathbb{R}^{K \times L}$ with $K \geq L$, we can conclude that, for a proper selection of codewords, $\mathbf{Q}_m^\mathsf{T} \mathbf{Q}_m$ is full-rank, and thus we can calculate the pseudo-inverse of $\mathbf{Q}_m$ as
$$\mathbf{F}_m = \left(\mathbf{Q}_m^\mathsf{T} \mathbf{Q}_m\right)^{-1} \mathbf{Q}_m^\mathsf{T}. \qquad (16)$$
Consequently, the observations can be decoupled into $L$ samples with additive noise as
$$\hat{\mathbf{y}}_m = \mathbf{F}_m (\mathbf{y}_m - \mathbf{W}_m \tilde{\mathbf{x}}_m) \qquad (17)$$
$$\phantom{\hat{\mathbf{y}}_m} \approx \breve{\mathbf{x}}_m + \mathbf{F}_m \mathbf{z}_m. \qquad (18)$$
We now consider the mismatched assumption that $\mathbf{F}_m \mathbf{z}_m$ is a Gaussian vector with independent entries; such an assumption is asymptotically correct for large random codebooks with i.i.d. entries. As a result, the $\ell$-th entry of $\hat{\mathbf{y}}_m$ can be written as
$$\hat{y}_{m,\ell} \approx x_{i^m_\ell} + \hat{z}_{m,\ell} \qquad (19)$$
where $i^m_\ell$ denotes the $\ell$-th entry of $\mathcal{F}_m$, and the decoupled noise $\hat{z}_{m,\ell}$ is zero-mean Gaussian with variance $M \|\mathbf{f}_{m,\ell}\|^2 \sigma^2$. Here, $\mathbf{f}^\mathsf{T}_{m,\ell}$ denotes the $\ell$-th row of $\mathbf{F}_m$.

We now use $\hat{y}_{m,\ell}$ to update the statistic on $x_{i^m_\ell}$ by adding the decoupled observation corresponding to $x_{i^m_\ell}$ to the current statistic: Let $y_{i^m_\ell}(m-1)$ denote the statistic on $x_{i^m_\ell}$ collected in subframes $1, \ldots, m-1$. We update this parameter as
$$y_{i^m_\ell}(m) = y_{i^m_\ell}(m-1) + \hat{y}_{m,\ell}. \qquad (20)$$
Let $\mathcal{M}_n(m) \subseteq [m]$ collect all subframes at which $n \in \mathcal{F}_m$, i.e., $x_n$ is selected to be sensed in these subframes. Moreover, assume that $\hat{\sigma}^2_n(m)$ denotes the sum of all decoupled noise variances in $\mathcal{M}_n(m)$ which correspond to index $n$. After updating our statistic on $x_{i^m_\ell}$, we have
$$y_{i^m_\ell}(m) \approx |\mathcal{M}_n(m)|\, x_{i^m_\ell} + z_{i^m_\ell}(m) \qquad (21)$$
where $z_{i^m_\ell}(m)$ is given by
$$z_{i^m_\ell}(m) = z_{i^m_\ell}(m-1) + \hat{z}_{i^m_\ell}(m). \qquad (22)$$
Hence, the estimated noise variance is updated as
$$\hat{\sigma}^2_{i^m_\ell}(m) = \hat{\sigma}^2_{i^m_\ell}(m-1) + M \|\mathbf{f}_{m,\ell}\|^2 \sigma^2. \qquad (23)$$
Using the Bayesian estimator, we finally update the estimate $\hat{x}_n$ and the posterior distortion $d_n$ via the updated statistics $y_n$ for $n \in \mathcal{F}_m$. This concludes Algorithm 1.
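The updates (15)–(23) map almost line by line onto code. The sketch below (hypothetical names; `posterior_moments` stands for the generic scalar posterior of Section IV) illustrates one subframe of Algorithm 1 — decoupling, statistic accumulation, and re-estimation — under the same assumptions as the derivation; it is not the authors' reference implementation.

```python
import numpy as np

def subframe_update(y_m, A_m, F_idx, x_hat, d, y_stat, var_stat, counts,
                    M, sigma2, posterior_moments):
    """One subframe of Algorithm 1: decouple via (16)-(18), accumulate
    the statistics (20) and (23), and re-estimate the selected samples."""
    rest = np.setdiff1d(np.arange(x_hat.size), F_idx)
    Q = A_m[:, F_idx]              # Q_m = A_m P_m^T
    W = A_m[:, rest]               # W_m = A_m E_m^T
    F = np.linalg.pinv(Q)          # F_m = (Q^T Q)^{-1} Q^T, eq. (16)
    y_dec = F @ (y_m - W @ x_hat[rest])   # decoupled observations, eq. (17)
    for ell, n in enumerate(F_idx):
        y_stat[n] += y_dec[ell]                           # eq. (20)
        var_stat[n] += M * np.sum(F[ell] ** 2) * sigma2   # eq. (23)
        counts[n] += 1
        # step 8: scalar Bayesian re-estimation with normalized statistic
        x_hat[n], d[n] = posterior_moments(y_stat[n] / counts[n],
                                           var_stat[n] / counts[n] ** 2)
```

Here `posterior_moments` returns the posterior mean and variance of the scalar channel; for the sparse-Gaussian prior of Section V it reduces to the closed forms in (27).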
V. EXAMPLE OF SPARSE RECOVERY

Initial investigations demonstrate that OAS can significantly outperform classical sparse recovery approaches. This is due to the prior information on the sparsity of the signal samples. In this section, we employ our algorithm to address this example. For a sparse signal, the prior distribution of the samples is
$$q(x) = (1 - \rho)\, \delta(x) + \rho\, \phi(x) \qquad (24)$$
for a sparsity factor $\rho \in [0, 1]$ and a distribution $\phi(x)$ satisfying
$$\int_{0^-}^{0^+} \phi(x)\, \mathrm{d}x = 0. \qquad (25)$$
The model indicates that the samples are zero with probability $1 - \rho$ and are drawn from the distribution $\phi(x)$ with probability $\rho$. In the sequel, we consider an i.i.d. sparse-Gaussian signal for which the samples are i.i.d. and
$$\phi(x) = \frac{1}{\sqrt{2\pi}} \exp\left\{-\frac{x^2}{2}\right\}. \qquad (26)$$
Considering the system model, the main task in this example is to recover the samples of a given sparse signal using the observations made within a time frame of length $T$ via the $K$ active sensors. This task can be addressed in two ways:

• Invoking the theory of compressive sensing, one can select a sensing matrix from the codebook and collect $K$ observations. A classical sparse recovery algorithm, e.g., LASSO, is then used to recover the signal samples.

• An alternative approach is to use an OAS algorithm, e.g., Algorithm 1, to adaptively estimate the signal samples.

We investigate these two approaches in the sequel.
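For concreteness, a sparse-Gaussian vector following (24)–(26) can be drawn as below; the values and the naming are of our choosing, purely for illustration.

```python
import numpy as np

def sparse_gaussian(N, rho, rng=None):
    """Draw an i.i.d. sparse-Gaussian signal per (24)-(26): each sample
    is 0 with probability 1 - rho and standard Gaussian otherwise."""
    rng = np.random.default_rng() if rng is None else rng
    support = rng.random(N) < rho      # Bernoulli(rho) support pattern
    return np.where(support, rng.standard_normal(N), 0.0)

x = sparse_gaussian(N=200, rho=0.1, rng=np.random.default_rng(1))
```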
A. Sparse Recovery via OAS
For a sparse-Gaussian prior distribution, the Bayesian estimation in Algorithm 1 reduces to
$$\hat{x}_n = \mathrm{G}\!\left(\frac{y_n}{|\mathcal{M}_n(m)|} \,\Big|\, \frac{\hat{\sigma}^2_n}{|\mathcal{M}_n(m)|^2}\right), \qquad d_n = \mathrm{E}\!\left(\frac{y_n}{|\mathcal{M}_n(m)|} \,\Big|\, \frac{\hat{\sigma}^2_n}{|\mathcal{M}_n(m)|^2}\right),$$
where $\mathrm{G}(y \,|\, \sigma^2)$ and $\mathrm{E}(y \,|\, \sigma^2)$ are given by
$$\mathrm{G}(y \,|\, \sigma^2) = \frac{\mathrm{I}_1(y \,|\, \sigma^2)}{\mathrm{J}(y \,|\, \sigma^2)}\, y \qquad (27a)$$
$$\mathrm{E}(y \,|\, \sigma^2) = \frac{\mathrm{I}_1(y \,|\, \sigma^2)}{\mathrm{J}(y \,|\, \sigma^2)} \left( \sigma^2 + \frac{\mathrm{I}_0(y \,|\, \sigma^2)}{\mathrm{J}(y \,|\, \sigma^2)}\, y^2 \right) \qquad (27b)$$
with $\mathrm{I}_0(y \,|\, \sigma^2)$, $\mathrm{I}_1(y \,|\, \sigma^2)$ and $\mathrm{J}(y \,|\, \sigma^2)$ being
$$\mathrm{I}_0(y \,|\, \sigma^2) = \frac{1 - \rho}{\sqrt{\sigma^2}} \exp\left\{-\frac{y^2}{2\sigma^2}\right\} \qquad (28a)$$
$$\mathrm{I}_1(y \,|\, \sigma^2) = \frac{\rho}{\sqrt{1 + \sigma^2}} \exp\left\{-\frac{y^2}{2(1 + \sigma^2)}\right\} \qquad (28b)$$
$$\mathrm{J}(y \,|\, \sigma^2) = \left(1 + \sigma^2\right)\left[\mathrm{I}_0(y \,|\, \sigma^2) + \mathrm{I}_1(y \,|\, \sigma^2)\right]. \qquad (28c)$$
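Since (27) and (28) are simply the posterior mean and variance of a scalar Gaussian channel with a Bernoulli-Gaussian input, they admit a direct implementation; the sketch below uses our naming and omits numerical safeguards.

```python
import numpy as np

def bg_posterior(y, sigma2, rho):
    """Posterior mean G(y|sigma2) and variance E(y|sigma2) for the
    sparse-Gaussian prior, following (27)-(28)."""
    I0 = (1 - rho) / np.sqrt(sigma2) * np.exp(-y**2 / (2 * sigma2))
    I1 = rho / np.sqrt(1 + sigma2) * np.exp(-y**2 / (2 * (1 + sigma2)))
    J = (1 + sigma2) * (I0 + I1)
    G = I1 * y / J                              # eq. (27a)
    E = (I1 / J) * (sigma2 + (I0 / J) * y**2)   # eq. (27b)
    return G, E
```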
VI. NUMERICAL EXPERIMENTS

For the numerical investigations, we consider a sparse-Gaussian signal with $N = 200$ samples and sparsity factor $\rho$. The noise variance for an interval of length $T$ is $\sigma^2$; by dividing the time frame into $M$ subframes, the noise variance in each subframe rises to $M\sigma^2$. The compression rate is further defined as $R = N/K$. The codebook is generated randomly from an i.i.d. Gaussian ensemble; this means that the vectors in $\mathcal{C}$ are independent and have i.i.d. zero-mean Gaussian entries with variance $1/\sqrt{K}$.

As benchmarks, we consider a classic compressive sensing setting in which the $K$ observations are collected using $K$ randomly selected vectors. We evaluate the performance for two recovery algorithms: 1) LASSO, in which the samples are recovered via regularized $\ell_1$-norm minimization, and 2) the minimum mean squared error (MMSE) estimator, in which the samples are recovered via the optimal
Bayesian estimator. It is worth mentioning that the computational complexity of the former algorithm is moderate, while the latter approach is not numerically tractable. We hence use the asymptotic characterizations for these schemes [7], [8].
Fig. 1: MSE versus $L$ for various compression rates: (a) $R = 1$, (b) $R = 2$, (c) $R = 4$, and (d) $R = 5$. Here, the time frame is divided into $M = 80$ subframes and $S = 1000$.

The performance is evaluated in terms of the average mean squared error (MSE), given by the average distortion with
$$\Delta(x_n; \hat{x}_n) = \frac{1}{N} \left| x_n - \hat{x}_n \right|^2. \qquad (29)$$
The first experiment considers a codebook of size $S = 1000$ and compares the classical approaches with an OAS scheme in which $M = 80$. The results are shown in Fig. 1 for multiple compression rates. The OAS scheme uses Algorithm 1 with a random selection algorithm, i.e., the $K$ vectors are selected randomly in each subframe. As the results show, the OAS-based algorithm performs close to LASSO at low compression rates, while at $R = 5$ it even outperforms the MMSE bound.

The behavior illustrated in Fig. 1 is consistent with the prior investigations on OAS with no predefined codebooks [1]–[3]. In fact, those initial cases can be seen as a special case of OAS via a predefined codebook whose size goes to infinity. To investigate the performance degradation imposed by the codebook restriction, we further plot the MSE achieved via the OAS-based algorithm against the codebook size in Fig. 2. In this figure, we set $R = 4$ and $M = 60$; $L$ is further set to a fixed fraction of $K$. As the figure shows, the MSE drops fast and converges to the asymptotic value which specifies the OAS performance with no predefined codebooks.

A similar behavior is observed in Fig. 3, in which the MSE is plotted against the number of subframes for two different codebook sizes. Here, we set $R = 4$ and the same fixed ratio of $L$ to $K$. From the figure, it is observed that the OAS algorithm converges to its asymptotic value relatively fast, and a further increase in the number of subframes improves the performance negligibly.
Fig. 2: MSE against the codebook size $S$. Here, the compression rate is set to $R = 4$, the number of subframes is $M = 60$, and $L$ is a fixed fraction of $K$.
A. Impact of Selection Algorithm

Figs. 1-3 consider a random selection algorithm. The performance of the OAS algorithm can be further improved by developing a more efficient selection algorithm. Such an algorithm can be developed by defining a concept of optimality. In the sequel, we give an instance of such algorithms.

From the derivations in Section IV-A, we know that Algorithm 1 cancels out the residual samples in each subframe using the estimates given in the previous subframes; see (15). This means that the algorithm relies on the postulation that the estimated samples converge to good estimates as the algorithm evolves. Nevertheless, the approximate approach leads to unwanted interference terms in each subframe.

The interference in subframe $m$ is ideally suppressed if the residual samples lie in the kernel of $\mathbf{W}_m$, i.e., $\mathbf{W}_m \bar{\mathbf{x}}_m = \mathbf{0}$.
Fig. 3: MSE against the number of subframes $M$ for different codebook sizes ($S = 500$ and $S = 1000$), along with the LASSO and MMSE baselines. Here, we set $R = 4$ and $L$ to a fixed fraction of $K$.
Algorithm 2: Adaptation Algorithm $\mathbb{A}(\mathcal{C}, K, \mathcal{F}, \hat{\mathbf{x}})$
Input: $K$, codebook $\mathcal{C}$, and index set $\mathcal{F} \subset [N]$.
Initiate: Set $\mathbf{a}_1 = \mathbf{c}_i$ with $\mathbf{c}_i$ being selected at random from $\mathcal{C}$. Let $\mathcal{I}_1 = [S] \setminus \{i\}$, $\mathbf{P} = \mathrm{Sel}(\mathcal{F})$, $\mathbf{E} = \mathrm{Sel}([N] \setminus \mathcal{F})$ and $\tilde{\mathbf{x}} = \mathbf{E} \hat{\mathbf{x}}$.
for $k \in [2 : K]$ do
1) Set $\mathbf{A}_k = [\mathbf{a}_1, \ldots, \mathbf{a}_{k-1}]$ and let $\mathbf{v}_k = \mathbf{P} \mathbf{A}_k \mathbf{A}_k^\mathsf{T} \mathbf{E}^\mathsf{T} \tilde{\mathbf{x}}$.
2) For $\mathbf{u} \in \mathbb{R}^N$, let $f_k(\mathbf{u}) = \left\| \left(\mathbf{u}^\mathsf{T} \mathbf{E}^\mathsf{T} \tilde{\mathbf{x}}\right) \mathbf{P} \mathbf{u} + \mathbf{v}_k \right\|$.
3) Set $\mathbf{a}_k = \mathbf{c}_{i_k}$, where $i_k = \operatorname{argmin}_{i \in \mathcal{I}_{k-1}} f_k(\mathbf{c}_i)$.
4) Update $\mathcal{I}_k = \mathcal{I}_{k-1} \setminus \{i_k\}$.
end for

However, this constraint is not necessarily fulfilled, since 1) the residual samples are unknown, and 2) $\mathbf{A}_m$ is constructed from the codebook. The first issue is addressed by following the approximation used in Algorithm 1, i.e., replacing $\bar{\mathbf{x}}_m$ with $\tilde{\mathbf{x}}_m$. For the second issue, an optimal approach is to search the codebook such that the sum-power of the interference terms is minimized, i.e., setting $\mathbf{A}_m = [\mathbf{a}_1(m), \ldots, \mathbf{a}_K(m)]^\mathsf{T}$ where
$$\mathbf{a}_1(m), \ldots, \mathbf{a}_K(m) = \operatorname{argmin}_{\mathbf{u}_1, \ldots, \mathbf{u}_K \in \mathcal{C}} \left\| \mathbf{U} \mathbf{E}_m^\mathsf{T} \tilde{\mathbf{x}}_m \right\|^2 \qquad (30)$$
with $\mathbf{U} = [\mathbf{u}_1, \ldots, \mathbf{u}_K]^\mathsf{T}$.

The selection algorithm in (30) deals with an exhaustive search whose number of choices grows exponentially with the codebook size. For practical scenarios with large codebooks, this selection algorithm is not numerically tractable. One can hence approximate the solution with a greedy algorithm. An example is Algorithm 2, which uses stepwise regression; a Python sketch of this greedy selection follows.
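Under the same approximations, Algorithm 2 admits a compact implementation. The sketch below (our naming) greedily accumulates $\mathbf{v}_k$ and picks, in each step, the codeword minimizing $f_k$; it is an illustration of the stepwise selection, not a tuned implementation.

```python
import numpy as np

def stepwise_select(C, K, F_idx, x_hat, rng=None):
    """Greedy codeword selection following Algorithm 2.

    C is the S x N codebook, F_idx holds the (0-based) indices of the
    L samples to be sensed, and x_hat is the current soft estimate."""
    rng = np.random.default_rng() if rng is None else rng
    S, N = C.shape
    r = x_hat.copy()
    r[F_idx] = 0.0                        # r = E^T x_tilde (residual samples only)
    chosen = [int(rng.integers(S))]       # first codeword picked at random
    v = C[chosen[0]] * (C[chosen[0]] @ r) # running v_k = A_k A_k^T E^T x_tilde
    available = set(range(S)) - set(chosen)
    for _ in range(1, K):
        # f_k(u) = || (u^T E^T x_tilde) P u + v_k ||, restricted to rows F_idx
        best = min(available,
                   key=lambda i: np.linalg.norm((C[i] @ r) * C[i][F_idx]
                                                + v[F_idx]))
        chosen.append(best)
        available.remove(best)
        v = v + C[best] * (C[best] @ r)   # accumulate for the next step
    return C[chosen, :]                   # selected rows form A_m
```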
Fig. 4: MSE versus $L$ for random and stepwise adaptations, along with the LASSO and MMSE baselines. Here, the compression rate is set to $R = 4$, the codebook size is $S = 500$, and the number of subframes is $M = 20$.

Fig. 4 compares the performance of Algorithm 2 with random selection. Here, Algorithm 1 is run with both of the selection algorithms for $R = 4$, a codebook of size $S = 500$, and $M = 20$ subframes. As the figure shows, the greedy algorithm outperforms the random selection, which agrees with the intuition.

VII. CONCLUSIONS
In the presence of a predefined codebook, the design of OAS-based algorithms deals with more challenges. This is due to the lower degrees of freedom provided by the setting. Although this restriction leads to a degraded performance, the same behavior as the one observed in the basic form of the OAS framework is reported. The performance degradation can further be compensated by an effective design of the selection algorithm and a higher number of subframes. These are feasible means in many applications for which OAS seems to be a good candidate.

REFERENCES
[1] R. R. Müller, A. Bereyhi, and C. Mecklenbräuker, “Oversampled adaptive sensing,” Proc. Inf. Theory and Applications Workshop (ITA), pp. 1–7, Feb. 2018, USA.
[2] R. R. Müller, A. Bereyhi, and C. F. Mecklenbräuker, “Oversampled adaptive sensing with random projections: Analysis and algorithmic approaches,” Proc. IEEE Int. Symp. on Signal Processing and Information Technology (ISSPIT), pp. 336–341, Dec. 2018, Louisville, USA.
[3] A. Bereyhi and R. R. Müller, “An adaptive Bayesian framework for recovery of sources with structured sparsity,” Proc. 8th Int. Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), pp. 71–75, Dec. 2019.
[4] J. D. Haupt, R. G. Baraniuk, R. M. Castro, and R. D. Nowak, “Compressive distilled sensing: Sparse recovery using adaptivity in compressive measurements,” Proc. 43rd Asilomar Conf. on Signals, Systems and Computers, pp. 1551–1555, Nov. 2009, Pacific Grove, CA, USA.
[5] J. Haupt, R. Nowak, and R. Castro, “Adaptive sensing for sparse signal recovery,” Proc. 13th Digital Signal Processing Workshop and 5th IEEE Signal Processing Education Workshop, pp. 702–707, Jan. 2009, Marco Island, FL, USA.
[6] M. L. Malloy and R. D. Nowak, “Near-optimal adaptive compressed sensing,” IEEE Trans. Inf. Theory, vol. 60, no. 7, pp. 4001–4012, Jul. 2014.
[7] A. Bereyhi, R. R. Müller, and H. Schulz-Baldes, “Statistical mechanics of MAP estimation: General replica ansatz,”
IEEE Trans. Inf. Theory, vol. 65, no. 12, pp. 7896–7934, Dec. 2019.
[8] A. Bereyhi, R. Müller, and H. Schulz-Baldes, “RSB decoupling property of MAP estimators,” Proc. IEEE Information Theory Workshop (ITW), Sep. 2016, Cambridge, UK.