
Publication


Featured research published by Sina Jafarpour.


IEEE Journal of Selected Topics in Signal Processing | 2010

Construction of a Large Class of Deterministic Sensing Matrices That Satisfy a Statistical Isometry Property

A. Robert Calderbank; Stephen D. Howard; Sina Jafarpour

Compressed Sensing aims to capture attributes of k-sparse signals using very few measurements. In the standard compressed sensing paradigm, the N × C measurement matrix Φ is required to act as a near isometry on the set of all k-sparse signals (restricted isometry property or RIP). Although it is known that certain probabilistic processes generate N × C matrices that satisfy RIP with high probability, there is no practical algorithm for verifying whether a given sensing matrix Φ has this property, crucial for the feasibility of the standard recovery algorithms. In contrast, this paper provides simple criteria that guarantee that a deterministic sensing matrix satisfying these criteria acts as a near isometry on an overwhelming majority of k-sparse signals; in particular, most such signals have a unique representation in the measurement domain. Probability still plays a critical role, but it enters the signal model rather than the construction of the sensing matrix. An essential element in our construction is that we require the columns of the sensing matrix to form a group under pointwise multiplication. The construction allows recovery methods for which the expected performance is sub-linear in C, and only quadratic in N, as compared to the super-linear complexity in C of the Basis Pursuit or Matching Pursuit algorithms; the focus on expected performance is more typical of mainstream signal processing than the worst case analysis that prevails in standard compressed sensing. Our framework encompasses many families of deterministic sensing matrices, including those formed from discrete chirps, Delsarte-Goethals codes, and extended BCH codes.
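The statistical isometry property described in the abstract lends itself to a simple empirical check: draw many random k-sparse signals and see how tightly ||Φx||/||x|| concentrates around 1. The sketch below does this with a random ±1/√N matrix standing in for the paper's deterministic constructions; the matrix, dimensions, and trial count are illustrative assumptions, not the paper's setup.

```python
import math
import random

random.seed(0)

N, C, k = 64, 256, 4  # measurements, ambient dimension, sparsity

# Illustrative stand-in matrix with +/-1/sqrt(N) entries (NOT one of the
# paper's deterministic constructions, just a placeholder for the check).
Phi = [[random.choice((-1.0, 1.0)) / math.sqrt(N) for _ in range(C)]
       for _ in range(N)]

def measure(x):
    return [sum(Phi[i][j] * x[j] for j in range(C)) for i in range(N)]

def norm(v):
    return math.sqrt(sum(t * t for t in v))

# Draw random k-sparse signals (uniform support, Gaussian entries) and
# record how far ||Phi x|| / ||x|| strays from 1.
ratios = []
for _ in range(200):
    x = [0.0] * C
    for j in random.sample(range(C), k):
        x[j] = random.gauss(0.0, 1.0)
    ratios.append(norm(measure(x)) / norm(x))

worst = max(abs(r - 1.0) for r in ratios)
print(f"max |ratio - 1| over 200 trials: {worst:.3f}")
```

A deterministic StRIP matrix would pass the same Monte Carlo check, with the guarantee coming from the structure of its columns rather than from chance.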


IEEE Transactions on Information Theory | 2009

Efficient and Robust Compressed Sensing Using Optimized Expander Graphs

Sina Jafarpour; Weiyu Xu; Babak Hassibi; A. Robert Calderbank

Expander graphs have been recently proposed to construct efficient compressed sensing algorithms. In particular, it has been shown that any n-dimensional vector that is k-sparse can be fully recovered using O(k log n) measurements and only O(k log n) simple recovery iterations. In this paper, we improve upon this result by considering expander graphs with expansion coefficient beyond 3/4 and show that, with the same number of measurements, only O(k) recovery iterations are required, which is a significant improvement when n is large. In fact, full recovery can be accomplished by at most 2k very simple iterations. The number of iterations can be reduced arbitrarily close to k, and the recovery algorithm can be implemented very efficiently using a simple priority queue with total recovery time O(n log(n/k)). We also show that by tolerating a small penalty on the number of measurements, and not on the number of recovery iterations, one can use the efficient construction of a family of expander graphs to come up with explicit measurement matrices for this method. We compare our result with other recently developed expander-graph-based methods and argue that it compares favorably both in terms of the number of required measurements and in terms of the time complexity and the simplicity of recovery. Finally, we will show how our analysis extends to give a robust algorithm that finds the position and sign of the k significant elements of an almost k-sparse signal and then, using very simple optimization techniques, finds a k-sparse signal which is close to the best k-term approximation of the original signal.
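The measurement matrices in this line of work are adjacency matrices of left-regular bipartite expander graphs: each signal entry connects to d measurements, and the matrix is sparse and binary. The sketch below builds a random left-d-regular graph (a random graph of this kind is an expander with high probability, though this is not the paper's optimized construction) and takes measurements with it; all dimensions are illustrative.

```python
import random

random.seed(1)

n, m, d = 40, 20, 5  # signal length, measurements, left degree

# Each of the n left nodes (signal entries) connects to d of the m right
# nodes (measurements); a random left-regular graph is an expander w.h.p.
neighbors = [random.sample(range(m), d) for _ in range(n)]

def measure(x):
    """y_i = sum of x_j over edges (j, i): the 0/1 adjacency matrix."""
    y = [0.0] * m
    for j, nbrs in enumerate(neighbors):
        for i in nbrs:
            y[i] += x[j]
    return y

# A 2-sparse test signal: each measurement touched by a nonzero entry
# sees that value added in (possibly colliding with the other entry).
x = [0.0] * n
x[3], x[17] = 2.0, -1.0
y = measure(x)
print(sum(1 for v in y if v != 0.0), "of", m, "measurements are nonzero")
```

The recovery algorithm of the paper exploits exactly this sparsity: each iteration inspects the few measurements touching a candidate entry, which is what makes a priority-queue implementation with O(n log(n/k)) total time possible.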


Journal of Communications and Networks | 2010

Why Gabor frames? Two fundamental measures of coherence and their role in model selection

Waheed U. Bajwa; A. Robert Calderbank; Sina Jafarpour

The problem of model selection arises in a number of contexts, such as subset selection in linear regression, estimation of structures in graphical models, and signal denoising. This paper studies non-asymptotic model selection for the general case of arbitrary (random or deterministic) design matrices and arbitrary nonzero entries of the signal. In this regard, it generalizes the notion of incoherence in the existing literature on model selection and introduces two fundamental measures of coherence, termed the worst-case coherence and the average coherence, among the columns of a design matrix. It utilizes these two measures of coherence to provide an in-depth analysis of a simple, model-order agnostic one-step thresholding (OST) algorithm for model selection and proves that OST is feasible for exact as well as partial model selection as long as the design matrix obeys an easily verifiable property, termed the coherence property. One of the key insights offered by the ensuing analysis is that OST can successfully carry out model selection even when methods based on convex optimization such as the lasso fail due to rank deficiency of the submatrices of the design matrix. In addition, the paper establishes that if the design matrix has reasonably small worst-case and average coherence, then OST performs near-optimally when either (i) the energy of any nonzero entry of the signal is close to the average signal energy per nonzero entry or (ii) the signal-to-noise ratio in the measurement system is not too high. Finally, two other key contributions of the paper are that (i) it provides bounds on the average coherence of Gaussian matrices and Gabor frames, and (ii) it extends the results on model selection using OST to low-complexity, model-order agnostic recovery of sparse signals with arbitrary nonzero entries. In particular, this part of the analysis implies that an Alltop Gabor frame together with OST can successfully carry out model selection and recovery of sparse signals irrespective of the phases of the nonzero entries, even if the number of nonzero entries scales almost linearly with the number of rows of the Alltop Gabor frame.
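One-step thresholding is as simple as its name suggests: correlate the observation with every column of the design matrix and keep the columns whose correlation clears a threshold. The toy sketch below uses random unit-norm columns in place of a Gabor frame, a noiseless signal model, and a hand-picked threshold; none of these choices come from the paper.

```python
import math
import random

random.seed(2)

N, C = 256, 512  # rows and columns of the design matrix

# Unit-norm random columns (an illustrative stand-in for the Gabor
# frames analyzed in the paper).
X = []
for _ in range(C):
    col = [random.gauss(0.0, 1.0) for _ in range(N)]
    s = math.sqrt(sum(t * t for t in col))
    X.append([t / s for t in col])

def ost(y, threshold):
    """One-step thresholding: correlate y with every column and keep
    the indices whose |<x_j, y>| exceeds the threshold."""
    return {j for j, col in enumerate(X)
            if abs(sum(c * t for c, t in zip(col, y))) > threshold}

# Noiseless synthetic model: y is a +/-2-weighted sum of a few columns.
support = {5, 40, 99}
y = [0.0] * N
for j in support:
    for i in range(N):
        y[i] += 2.0 * X[j][i]

estimate = ost(y, threshold=1.0)  # threshold hand-tuned for this demo
print("recovered support:", sorted(estimate))
```

The single pass over the columns is what makes OST "low-complexity" relative to the lasso: there is no optimization loop, just C inner products and a comparison.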


IEEE Transactions on Signal Processing | 2011

Performance Bounds for Expander-Based Compressed Sensing in Poisson Noise

Maxim Raginsky; Sina Jafarpour; Zachary T. Harmany; Roummel F. Marcia; Rebecca Willett; A. Robert Calderbank

This paper provides performance bounds for compressed sensing in the presence of Poisson noise using expander graphs. The Poisson noise model is appropriate for a variety of applications, including low-light imaging and digital streaming, where the signal-independent and/or bounded noise models used in the compressed sensing literature are no longer applicable. In this paper, we develop a novel sensing paradigm based on expander graphs and propose a maximum a posteriori (MAP) algorithm for recovering sparse or compressible signals from Poisson observations. The geometry of the expander graphs and the positivity of the corresponding sensing matrices play a crucial role in establishing the bounds on the signal reconstruction error of the proposed algorithm. We support our results with experimental demonstrations of reconstructing average packet arrival rates and instantaneous packet counts at a router in a communication network, where the arrivals of packets in each flow follow a Poisson process.
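In the Poisson model each counter observes a count drawn from a Poisson distribution whose mean is the corresponding entry of Ax, with A nonnegative. The sketch below simulates this for packet-rate-style data using a random left-regular 0/1 matrix as a stand-in for the paper's expander-based construction; the graph, rates, and dimensions are illustrative assumptions.

```python
import math
import random

random.seed(3)

def poisson(lam):
    """Sample a Poisson variate via Knuth's multiplication method."""
    if lam <= 0:
        return 0
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1

n, m, d = 30, 12, 4  # flows, measurements, edges per flow

# Sparse 0/1 sensing matrix from a random left-d-regular bipartite graph,
# mirroring the positivity of the expander-based matrices in the paper.
neighbors = [random.sample(range(m), d) for _ in range(n)]

# Nonnegative "packet arrival rate" vector with a few active flows.
x = [0.0] * n
x[2], x[11], x[19] = 7.0, 3.0, 5.0

rates = [0.0] * m
for j, nbrs in enumerate(neighbors):
    for i in nbrs:
        rates[i] += x[j]

# Each counter observes a Poisson count with mean (Ax)_i.
y = [poisson(r) for r in rates]
print("expected totals:", [round(r, 1) for r in rates])
print("observed counts:", y)
```

Note that the noise is signal-dependent: counters with larger means fluctuate more, which is exactly why the bounded-noise analyses from the standard compressed sensing literature do not apply here.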


arXiv: Information Theory | 2010

Reed-Muller sensing matrices and the LASSO

A. Robert Calderbank; Sina Jafarpour

We construct two families of deterministic sensing matrices where the columns are obtained by exponentiating codewords in the quaternary Delsarte-Goethals code DG(m, r). This method of construction results in sensing matrices with low coherence and spectral norm. The first family, which we call Delsarte-Goethals frames, are 2^m-dimensional tight frames with redundancy 2^(rm). The second family, which we call Delsarte-Goethals sieves, are obtained by subsampling the column vectors in a Delsarte-Goethals frame. Different rows of a Delsarte-Goethals sieve may not be orthogonal, and we present an effective algorithm for identifying all pairs of non-orthogonal rows. The pairs turn out to be duplicate measurements, and eliminating them leads to a tight frame. Experimental results suggest that all DG(m, r) sieves with m ≤ 15 and r ≥ 2 are tight frames; there are no duplicate rows. For both families of sensing matrices, we measure accuracy of reconstruction (statistical 0-1 loss) and complexity (average reconstruction time) as a function of the sparsity level k. Our results show that DG frames and sieves outperform random Gaussian matrices in terms of noiseless and noisy signal recovery using the LASSO.
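The LASSO objective used in these recovery experiments, min_x ½||y − Ax||² + λ||x||₁, can be minimized by iterative soft thresholding (ISTA). The sketch below runs ISTA with a random ±1/√N matrix standing in for a Delsarte-Goethals frame; the step size, λ, dimensions, and iteration count are hand-picked for the demo, not taken from the paper.

```python
import math
import random

random.seed(5)

N, C = 20, 40
lam, step, iters = 0.05, 0.05, 300  # hand-picked; step must be <= 1/||A||^2

# Random +/-1/sqrt(N) matrix standing in for a Delsarte-Goethals frame.
A = [[random.choice((-1.0, 1.0)) / math.sqrt(N) for _ in range(C)]
     for _ in range(N)]
AT = [list(r) for r in zip(*A)]  # transpose

def matvec(M, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def soft(t, tau):
    """Soft-thresholding operator, the proximal map of the l1 norm."""
    return math.copysign(max(abs(t) - tau, 0.0), t)

def objective(x):
    r = [yi - ai for yi, ai in zip(y, matvec(A, x))]
    return 0.5 * sum(t * t for t in r) + lam * sum(abs(t) for t in x)

# Noiseless 2-sparse ground truth.
x_true = [0.0] * C
x_true[7], x_true[30] = 1.5, -2.0
y = matvec(A, x_true)

# ISTA: gradient step on the quadratic term, then soft-threshold.
x = [0.0] * C
start = objective(x)
for _ in range(iters):
    residual = [yi - ai for yi, ai in zip(y, matvec(A, x))]
    grad = matvec(AT, residual)  # A^T (y - Ax)
    x = [soft(xj + step * gj, step * lam) for xj, gj in zip(x, grad)]

print(f"objective: {start:.3f} -> {objective(x):.3f}")
```

The point of the paper's low-coherence constructions is that, with columns this incoherent, the LASSO's minimizer lands on (or very near) the true sparse vector; the solver itself is generic.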


International Symposium on Information Theory | 2010

Model selection: Two fundamental measures of coherence and their algorithmic significance

Waheed U. Bajwa; A. Robert Calderbank; Sina Jafarpour

The problem of model selection arises in a number of contexts, such as compressed sensing, subset selection in linear regression, estimation of structures in graphical models, and signal denoising. This paper generalizes the notion of incoherence in the existing literature on model selection and introduces two fundamental measures of coherence, termed the worst-case coherence and the average coherence, among the columns of a design matrix. In particular, it utilizes these two measures of coherence to provide an in-depth analysis of a simple one-step thresholding (OST) algorithm for model selection. One of the key insights offered by the ensuing analysis is that OST is feasible for model selection as long as the design matrix obeys an easily verifiable property. In addition, the paper also characterizes the model-selection performance of OST in terms of the worst-case coherence, μ, and establishes that OST performs near-optimally in the low signal-to-noise ratio regime for N × C design matrices with μ = O(N^(-1/2)). Finally, in contrast to some of the existing literature on model selection, the analysis in the paper is non-asymptotic in nature, it does not require knowledge of the true model order, it is applicable to generic (random or deterministic) design matrices, and it neither requires submatrices of the design matrix to have full rank, nor does it assume a statistical prior on the values of the nonzero entries of the data vector.
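Both coherence measures are cheap to compute for unit-norm columns: the worst-case coherence μ is the largest |⟨x_i, x_j⟩| over distinct column pairs, and the average coherence ν is the largest per-column magnitude of the averaged signed inner products (this normalization by C − 1 is my reading of the definition in the abstract). The sketch below evaluates both on a small random design matrix; the dimensions are illustrative.

```python
import math
import random

random.seed(6)

N, C = 16, 32

# Unit-norm random columns (stand-in for an arbitrary design matrix).
cols = []
for _ in range(C):
    v = [random.gauss(0.0, 1.0) for _ in range(N)]
    s = math.sqrt(sum(t * t for t in v))
    cols.append([t / s for t in v])

def inner(a, b):
    return sum(x * y for x, y in zip(a, b))

# Worst-case coherence: largest |<x_i, x_j>| over distinct column pairs.
mu = max(abs(inner(cols[i], cols[j]))
         for i in range(C) for j in range(C) if i != j)

# Average coherence: largest per-column average of signed inner products.
nu = max(abs(sum(inner(cols[i], cols[j]) for j in range(C) if j != i))
         for i in range(C)) / (C - 1)

print(f"worst-case coherence mu = {mu:.3f}, average coherence nu = {nu:.4f}")
```

Because the signed inner products cancel in the average, ν is typically far smaller than μ, which is precisely the gap the paper's "coherence property" exploits.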


International Symposium on Information Theory | 2010

Sparse reconstruction via the Reed-Muller Sieve

A. Robert Calderbank; Stephen D. Howard; Sina Jafarpour

This paper introduces the Reed Muller Sieve, a deterministic measurement matrix for compressed sensing. The columns of this matrix are obtained by exponentiating codewords in the quaternary second order Reed Muller code of length N. For k = O(N), the Reed Muller Sieve improves upon prior methods for identifying the support of a k-sparse vector by removing the requirement that the signal entries be independent. The Sieve also enables local detection; an algorithm is presented with complexity N^2 log N that detects the presence or absence of a signal at any given position in the data domain without explicitly reconstructing the entire signal. Reconstruction is shown to be resilient to noise in both the measurement and data domains; the ℓ2/ℓ2 error bounds derived in this paper are tighter than the ℓ2/ℓ1 bounds arising from random ensembles and the ℓ1/ℓ1 bounds arising from expander-based ensembles.


IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing | 2009

A sublinear algorithm for sparse reconstruction with ℓ2/ℓ2 recovery guarantees

Robert Calderbank; Stephen Howard; Sina Jafarpour

Compressed Sensing aims to capture attributes of a sparse signal using very few measurements. Candès and Tao showed that sparse reconstruction is possible if the sensing matrix acts as a near isometry on all k-sparse signals. This property holds with overwhelming probability if the entries of the matrix are generated by an iid Gaussian or Bernoulli process. There has been significant recent interest in an alternative signal processing framework: exploiting deterministic sensing matrices that with overwhelming probability act as a near isometry on k-sparse vectors with uniformly random support, a geometric condition that is called the Statistical Restricted Isometry Property or StRIP. This paper considers a family of deterministic sensing matrices satisfying the StRIP that are based on Delsarte-Goethals codes (binary chirps) and a k-sparse reconstruction algorithm with sublinear complexity. In the presence of stochastic noise in the data domain, this paper derives bounds on the ℓ2 accuracy of approximation in terms of the ℓ2 norm of the measurement noise and the accuracy of the best k-sparse approximation, also measured in the ℓ2 norm. This type of ℓ2/ℓ2 bound is tighter than the standard ℓ2/ℓ1 or ℓ1/ℓ1 bounds.


International Conference on Acoustics, Speech, and Signal Processing | 2012

Finding needles in compressed haystacks

A. Robert Calderbank; Sina Jafarpour

In this paper, we investigate the problem of compressed learning, i.e., learning directly in the compressed domain. In particular, we provide tight bounds demonstrating that the linear kernel SVM classifier in the measurement domain, with high probability, has true accuracy close to the accuracy of the best linear threshold classifier in the data domain. Furthermore, we show that for a family of well-known deterministic compressed sensing matrices, compressed learning is provided on the fly. Finally, we support our claims with experimental results from a texture analysis application.
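The core idea of compressed learning is that a linear classifier trained on the measurements Ax can do nearly as well as one trained on the raw data x. The toy sketch below compresses linearly separable data with a random Gaussian projection and trains a simple perceptron (standing in for the paper's SVM) entirely in the measurement domain; all dimensions and the data model are illustrative assumptions.

```python
import math
import random

random.seed(7)

d, m = 50, 12  # data dimension, number of compressed measurements

# Random Gaussian projection standing in for the sensing matrix.
A = [[random.gauss(0.0, 1.0) / math.sqrt(m) for _ in range(d)]
     for _ in range(m)]

def compress(x):
    return [sum(A[i][j] * x[j] for j in range(d)) for i in range(m)]

# Linearly separable toy data: the label is the sign of x[0].
data = []
for _ in range(100):
    s = random.choice((-1, 1))
    x = [s * 1.0] + [random.gauss(0.0, 0.1) for _ in range(d - 1)]
    data.append((compress(x), s))

# Train a perceptron directly on the compressed measurements.
w = [0.0] * m
for _ in range(20):
    for z, s in data:
        if s * sum(wi * zi for wi, zi in zip(w, z)) <= 0:
            w = [wi + s * zi for wi, zi in zip(w, z)]

acc = sum(1 for z, s in data
          if s * sum(wi * zi for wi, zi in zip(w, z)) > 0) / len(data)
print(f"training accuracy in the measurement domain: {acc:.2f}")
```

The classifier never sees the 50-dimensional data, only its 12 measurements, which is what "learning in the compressed domain" means operationally.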


International Symposium on Information Theory | 2012

Beyond worst-case reconstruction in deterministic compressed sensing

Sina Jafarpour; Marco F. Duarte; A. Robert Calderbank

The role of random measurement in compressive sensing is analogous to the role of random codes in coding theory. In coding theory, decoders that can correct beyond the minimum distance of a code allow random codes to achieve the Shannon limit. In compressed sensing, the counterpart of minimum distance is the spark of the measurement matrix, i.e., the size of the smallest set of linearly dependent columns. This paper constructs a family of measurement matrices where the columns are formed by exponentiating codewords from a classical binary error-correcting code of block length M. The columns can be partitioned into mutually unbiased bases, and the spark of the corresponding measurement matrix is shown to be O(√M) by identifying a configuration of columns that plays a role similar to that of the Dirac comb in classical Fourier analysis. Further, an explicit basis for the null space of these measurement matrices is given in terms of indicator functions of binary self-dual codes. Reliable reconstruction of k-sparse inputs is shown for k of order M/log(M) which is best possible and far beyond the worst case lower bound provided by the spark.
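The spark, the size of the smallest linearly dependent set of columns, can be computed by brute force for tiny matrices, which makes the definition concrete even though it is hopeless at the scale of the matrices in the paper. The sketch below uses exact rational arithmetic so rank decisions involve no floating-point tolerance; the example matrix is a made-up toy.

```python
from fractions import Fraction
from itertools import combinations

def rank(vectors):
    """Exact rank over the rationals via Gaussian elimination."""
    rows = [[Fraction(v) for v in vec] for vec in vectors]
    r = 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                f = rows[i][c] / rows[r][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
        if r == len(rows):
            break
    return r

def spark(columns):
    """Size of the smallest linearly dependent subset of columns
    (len(columns) + 1 if every subset is independent)."""
    for size in range(1, len(columns) + 1):
        for subset in combinations(columns, size):
            if rank(subset) < size:
                return size
    return len(columns) + 1

# Tiny example: any two of these columns are independent, all three are not.
cols = [(1, 0), (0, 1), (1, 1)]
print("spark =", spark(cols))  # -> 3
```

The exponential cost of this search is exactly why the paper's explicit identification of a dependent configuration (the analogue of the Dirac comb) is needed to pin down the spark of its structured matrices.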

Collaboration


Sina Jafarpour's top co-authors and their affiliations.

Top Co-Authors:
- Rebecca Willett (University of Wisconsin-Madison)
- Volkan Cevher (École Polytechnique Fédérale de Lausanne)
- Marco F. Duarte (University of Massachusetts Amherst)