Sohail Bahmani
Carnegie Mellon University
Publications
Featured research published by Sohail Bahmani.
Siam Journal on Imaging Sciences | 2015
Sohail Bahmani; Justin K. Romberg
In this paper we analyze the blind deconvolution of an image and an unknown blur in a coded imaging system. The measurements consist of subsampled convolution of an unknown blurring kernel with multiple random binary modulations (coded masks) of the image. To perform the deconvolution, we consider a standard lifting of the image and the blurring kernel that transforms the measurements into a set of linear equations in the matrix formed by their outer product. Any rank-one solution to this system of equations provides a valid pair of an image and a blur. We first express the necessary and sufficient conditions for the uniqueness of a rank-one solution under some additional assumptions (uniform subsampling and no limit on the number of coded masks). These conditions are a special case of a previously established result regarding identifiability in the matrix completion problem. We also characterize a low-dimensional subspace model for the blur kernel that is sufficient to guarantee identifiability, including…
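The lifting step can be illustrated numerically: a bilinear measurement of the pair (h, x) is exactly a linear measurement of the rank-one matrix h xᵀ. The generic Gaussian vectors below are stand-ins for the coded-mask structure, so this is a sketch of the idea, not the paper's measurement model:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, num_meas = 8, 8, 5

# Hypothetical ground truth: blur kernel h and (vectorized) image x
h = rng.standard_normal(n)
x = rng.standard_normal(m)

# Generic measurement vectors (placeholders for the coded-mask structure)
B = rng.standard_normal((num_meas, n))
C = rng.standard_normal((num_meas, m))

# Bilinear measurements of the pair (h, x)
y_bilinear = (B @ h) * (C @ x)

# Lifting: the same numbers are linear measurements of the rank-one matrix X = h x^T
X = np.outer(h, x)
y_linear = np.array([B[i] @ X @ C[i] for i in range(num_meas)])

assert np.allclose(y_bilinear, y_linear)
```

Any rank-one X consistent with these linear equations factors back into a valid (blur, image) pair, which is why identifiability reduces to uniqueness of the rank-one solution.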
IEEE Transactions on Information Theory | 2016
Sohail Bahmani; Petros T. Boufounos; Bhiksha Raj
Several convex formulation methods have been proposed previously for statistical estimation with structured sparsity as the prior. These methods often require a carefully tuned regularization parameter, whose tuning can be a cumbersome or heuristic exercise. Furthermore, the estimate that these methods produce might not belong to the desired sparsity model, albeit accurately approximating the true parameter. Therefore, greedy-type algorithms can be more desirable for estimating structured-sparse parameters. So far, these greedy methods have mostly focused on linear statistical models. In this paper, we study projected gradient descent with a non-convex structured-sparse parameter model as the constraint set. Should the cost function have a stable model-restricted Hessian, the algorithm produces an approximation of the desired minimizer. As an example, we elaborate on the application of the main results to estimation in generalized linear models.
Applied and Computational Harmonic Analysis | 2013
Sohail Bahmani; Bhiksha Raj
In this paper we study the performance of the Projected Gradient Descent (PGD) algorithm for ℓp-constrained least squares problems that arise in the framework of compressed sensing. Relying on the restricted isometry property, we provide convergence guarantees for this algorithm for the entire range 0 ≤ p ≤ 1 that include and generalize the existing results for the iterative hard thresholding algorithm and provide a new accuracy guarantee for the iterative soft thresholding algorithm as special cases. Our results suggest that in this group of algorithms, as p increases from zero to one, the conditions required to guarantee accuracy become stricter and robustness to noise deteriorates.
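At the p = 1 endpoint, the PGD projection is the Euclidean projection onto an ℓ1 ball, which amounts to soft thresholding at a data-dependent level; this is the link to iterative soft thresholding. A minimal sketch using the standard sort-based projection (a generic routine, not code from the paper):

```python
import numpy as np

def project_l1_ball(v, tau):
    # Euclidean projection onto {x : ||x||_1 <= tau}.
    # Reduces to soft thresholding at a level theta chosen from the data.
    if np.abs(v).sum() <= tau:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]          # magnitudes in decreasing order
    css = np.cumsum(u)
    rho = np.nonzero(u - (css - tau) / (np.arange(len(u)) + 1) > 0)[0][-1]
    theta = (css[rho] - tau) / (rho + 1)  # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

v = np.array([3.0, -1.0, 0.5])
p = project_l1_ball(v, 2.0)
# The projection saturates the l1 constraint and shrinks entries toward zero
assert abs(np.abs(p).sum() - 2.0) < 1e-12
```

The p = 0 endpoint replaces this shrinkage with hard thresholding (keep the largest entries unchanged), which is the iterative hard thresholding special case.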
ieee international workshop on computational advances in multi sensor adaptive processing | 2015
Sohail Bahmani; Justin K. Romberg
We introduce a technique for estimating a structured covariance matrix from observations of a random vector which have been sketched. Each observed random vector xt is reduced to a single number by taking its inner product against one of a number of pre-selected vectors aℓ. These observations are used to form estimates of linear observations of the covariance matrix Σ, which is assumed to be simultaneously sparse and low-rank. We show that if the sketching vectors aℓ have a special structure, then we can use a straightforward two-stage algorithm that exploits this structure. We show that the estimate is accurate when the number of sketches is proportional to the maximum of the rank times the number of significant rows/columns of Σ. Moreover, our algorithm takes direct advantage of the low-rank structure of Σ by only manipulating matrices that are far smaller than the original covariance matrix.
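The first stage rests on a simple identity: the average of the squared sketches ⟨aℓ, xt⟩² estimates the quadratic form aℓᵀΣaℓ, i.e., a linear measurement of Σ. A minimal numeric check with a rank-one Σ (the sizes and sampling model here are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)
d, T = 5, 200_000

# Hypothetical rank-one covariance Sigma = u u^T
u = rng.standard_normal(d)
Sigma = np.outer(u, u)

# Samples x_t = g_t * u with g_t ~ N(0, 1), so Cov(x_t) = Sigma
g = rng.standard_normal(T)
X = g[:, None] * u[None, :]

# Sketch every sample against one fixed vector a: one scalar per sample
a = rng.standard_normal(d)
sketches = X @ a

estimate = np.mean(sketches ** 2)   # empirical estimate of a^T Sigma a
truth = a @ Sigma @ a
assert abs(estimate - truth) / abs(truth) < 0.05
```

The second stage then recovers the sparse-and-low-rank Σ from a collection of such linear measurements; exploiting the structure of the aℓ is what keeps all the matrices involved small.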
IEEE Transactions on Computational Imaging | 2015
Sohail Bahmani; Justin K. Romberg
We investigate the problem of reconstructing signals from a subsampled convolution of their modulated versions and a known filter. The problem is studied as it applies to a specific imaging architecture that relies on spatial phase modulation by randomly coded “masks.” The diversity induced by the random masks is deemed to improve the conditioning of the deconvolution problem while maintaining sampling efficiency. We analyze a linear model of the imaging system, where the joint effect of the spatial modulation, blurring, and spatial subsampling is represented concisely by a measurement matrix. We provide a bound on the conditioning of this measurement matrix in terms of the number of masks K, the dimension (i.e., the pixel count) of the scene image L, and certain characteristics of the blurring kernel and subsampling operator. The derived bound shows that stable deconvolution is possible with high probability even if the number of masks K is as small as L log L / N, meaning that the total number of (scalar) measurements is within a logarithmic factor of the image size. Furthermore, beyond a critical number of masks determined by the extent of blurring and subsampling, every additional mask improves the conditioning of the measurement matrix. We also consider the scenario where the target image is known to be sparse. We show that under mild conditions on the blurring kernel, with high probability the measurement matrix is a restricted isometry when the number of masks is within a logarithmic factor of the sparsity of the scene image. Therefore, the scene image can be reconstructed using any of the well-known sparse recovery algorithms such as basis pursuit. The bound on the required number of masks grows linearly in the sparsity of the scene image but logarithmically in its ambient dimension. The bound provides a quantitative view of the effect of the blurring and subsampling on the required number of masks, which is critical for designing efficient imaging systems.
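The linear model can be sketched in one dimension: each mask branch modulates the scene by a random ±1 pattern, convolves with the blur, and subsamples, and stacking the branches gives the measurement matrix. The sizes below are toy values and the random kernel is not a physical point-spread function:

```python
import numpy as np

rng = np.random.default_rng(3)
L, K, N = 16, 8, 4            # pixel count, number of masks, samples kept per mask

h = rng.standard_normal(L)    # illustrative blur kernel

# Circulant matrix of h: H @ x is the circular convolution of h with x
H = np.stack([np.roll(h, s) for s in range(L)], axis=1)

S = np.eye(L)[:: L // N]      # subsampling operator: keep every (L/N)-th sample

# Each branch: random +-1 mask, then blur, then subsample
masks = rng.choice([-1.0, 1.0], size=(K, L))
A = np.vstack([S @ H @ np.diag(m) for m in masks])   # (K*N) x L measurement matrix

x = rng.standard_normal(L)
y = A @ x
x_hat = np.linalg.lstsq(A, y, rcond=None)[0]         # K*N = 2L measurements here
assert np.allclose(x_hat, x)
```

With K·N ≥ L and well-conditioned A, least squares recovers the scene exactly from noiseless data; the paper's bounds quantify how K, the blur, and the subsampling control that conditioning.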
IEEE Transactions on Image Processing | 2010
Sohail Bahmani; Ivan V. Bajic; Atousa Hajshirmohammadi
In this paper we present joint decoding of JPEG2000 bitstreams and Reed-Solomon codes in the context of unequal loss protection. Using error resilience features of JPEG2000 bitstreams, the joint decoder helps to restore the erased symbols when the Reed-Solomon decoder fails to retrieve them on its own. However, the joint decoding process might become time-consuming due to a search through the set of possible erased symbols. We propose the use of smaller codeblocks and transmission of a relatively small amount of side information with high reliability as two approaches to accelerate the joint decoding process. The accelerated joint decoder can deliver essentially the same quality enhancement as the nonaccelerated one, while operating several times faster.
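The core of the joint decoder is a search over candidate values for the erased symbols, accepted or rejected by a source-level validity check. The toy sketch below uses a hypothetical `is_valid` predicate standing in for JPEG2000's error-resilience checks; it is an illustration of the search structure, not the actual decoder:

```python
import itertools

def joint_decode(received, erased_positions, alphabet, is_valid):
    # Try every combination of values for the erased symbols and return
    # the first completion that the source-level check accepts.
    for candidate in itertools.product(alphabet, repeat=len(erased_positions)):
        trial = list(received)
        for pos, val in zip(erased_positions, candidate):
            trial[pos] = val
        if is_valid(trial):
            return trial
    return None

# Hypothetical example: two erased symbols over a 10-letter alphabet
secret = [1, 7, 3, 9]
result = joint_decode([1, None, 3, None], [1, 3], range(10),
                      lambda t: t == secret)
assert result == secret
```

The search space grows exponentially in the number of erased symbols per codeblock, which is exactly why the proposed accelerations (smaller codeblocks and reliable side information that prunes candidates) matter.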
international conference on acoustics, speech, and signal processing | 2008
Sohail Bahmani; Ivan V. Bajic; Atousa Hajshirmohammadi
This paper presents a method for joint decoding of JPEG2000 bitstreams and Reed-Solomon codes in the context of unequal loss protection. When the Reed-Solomon decoder is unable to retrieve the erased source symbols, the proposed joint decoder searches through the set of possible erased source symbols, making use of error resilience features of JPEG2000 to retrieve correct symbols. The joint decoder can be used as an add-on module to some of the existing schemes for unequal loss protection, and can improve the PSNR of decoded images by over 10 dB in some cases.
ieee global conference on signal and information processing | 2013
Alireza Aghasi; Sohail Bahmani; Justin K. Romberg
The main result of this paper is a tight convex envelope for row-sparse, rank-one matrices, a set of major interest in signal recovery applications. The resulting convexification turns out to be the ℓ1 norm of the matrix. This result highlights the fact that a joint convexification approach may not significantly improve the signal recovery process.
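One reason the entrywise ℓ1 norm is a natural candidate here is that on rank-one matrices it separates into a product of vector ℓ1 norms, so it simultaneously penalizes sparsity in both factors. A quick numeric check of that identity (generic random factors, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
u = rng.standard_normal(4)
v = rng.standard_normal(6)
M = np.outer(u, v)                       # rank-one matrix u v^T

# Entrywise l1 norm of u v^T factors: sum_ij |u_i v_j| = ||u||_1 * ||v||_1
assert np.isclose(np.abs(M).sum(), np.abs(u).sum() * np.abs(v).sum())
```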
international conference on communications | 2009
Sohail Bahmani; Ivan V. Bajic; Atousa Hajshirmohammadi
In this paper we present improvements to the recently proposed joint decoding of JPEG2000 bitstreams and Reed-Solomon codes in the context of unequal loss protection. Using error resilience features of JPEG2000 bitstreams, the joint decoder helps to restore the erased symbols when the Reed-Solomon decoder fails to retrieve them on its own. We make use of the ability of the JPEG2000 decoder to provide rough error localization within a coding pass to speed up the search for erased symbol values. In addition, we show how transmitting a relatively small amount of side information with high reliability may help the joint decoder by reducing the size of the search space and bypassing some of the JPEG2000 decoding iterations needed to verify the correctness of the restored source information. The improved joint decoder is up to 20 times faster than the previous one.
Foundations of Computational Mathematics | 2018
Sohail Bahmani; Justin K. Romberg
We consider the question of estimating a solution to a system of equations that involve convex nonlinearities, a problem that is common in machine learning and signal processing. Because of these nonlinearities, conventional estimators based on empirical risk minimization generally involve solving a non-convex optimization program. We propose anchored regression, a new approach based on convex programming that amounts to maximizing a linear functional (perhaps augmented by a regularizer) over a convex set. The proposed convex program is formulated in the natural space of the problem, and avoids the introduction of auxiliary variables, making it computationally favorable. Working in the native space also provides great flexibility as structural priors (e.g., sparsity) can be seamlessly incorporated. For our analysis, we model the equations as being drawn from a fixed set according to a probability law. Our main results provide guarantees on the accuracy of the estimator in terms of the number of equations we are solving, the amount of noise present, a measure of statistical complexity of the random equations, and the geometry of the regularizer at the true solution. We also provide recipes for constructing the anchor vector (that determines the linear functional to maximize) directly from the observed data.
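When the convex nonlinearities are absolute values of linear forms, f_j(x) = |⟨b_j, x⟩|, the anchored program (maximize ⟨a, x⟩ over the set where f_j(x) ≤ y_j) is a linear program. The sketch below uses a hypothetical true solution, an anchor taken as its direction, and SciPy's LP solver (an assumed dependency); it illustrates the shape of the convex program, not the paper's anchor-construction recipes:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
n, m = 2, 40

x_star = np.array([1.0, -0.5])          # hypothetical true solution
B = rng.standard_normal((m, n))          # rows b_j of the "equations"
y = np.abs(B @ x_star)                   # noiseless observations f_j(x_star)

a = x_star / np.linalg.norm(x_star)      # anchor assumed to correlate with x_star

# Anchored regression: maximize <a, x> over {x : |b_j^T x| <= y_j}.
# The two-sided constraints |b_j^T x| <= y_j are linear, so this is an LP.
res = linprog(c=-a,                      # linprog minimizes, so negate the anchor
              A_ub=np.vstack([B, -B]),
              b_ub=np.concatenate([y, y]),
              bounds=[(None, None)] * n) # unbounded variables (default is x >= 0)
assert res.success
assert np.allclose(res.x, x_star, atol=1e-6)
```

Every constraint is active at x_star, so x_star sits on the boundary of the feasible set; with enough generic equations the anchor direction lies in the normal cone there and the maximizer is exactly x_star, which is the mechanism behind the recovery guarantees.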