
Publications

Featured research published by Angshul Majumdar.


Magnetic Resonance Imaging | 2011

An algorithm for sparse MRI reconstruction by Schatten p-norm minimization.

Angshul Majumdar; Rabab K. Ward

In recent years, there has been a concerted effort to reduce the MR scan time. Signal processing research aims at reducing the scan time by acquiring less K-space data. The image is reconstructed from the subsampled K-space data by employing compressed sensing (CS)-based reconstruction techniques. In this article, we propose an alternative approach to CS-based reconstruction. The proposed approach exploits the rank deficiency of the MR images to reconstruct the image. This requires minimizing the rank of the image matrix subject to data constraints, which is unfortunately a nondeterministic polynomial time (NP) hard problem. Therefore, we propose to replace the NP hard rank minimization problem with its nonconvex surrogate: Schatten p-norm minimization. The same approach can be used for denoising MR images as well. Since there is no existing algorithm to solve the Schatten p-norm minimization problem, we derive an efficient first-order algorithm. Experiments on MR brain scans show that the reconstruction and denoising accuracy of our method is on par with that of CS-based methods, while our method is considerably faster.
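The core operation in a first-order scheme of this kind is a singular-value shrinkage step. Below is a minimal, hedged sketch (not the authors' exact algorithm): it applies a common fixed-point heuristic for the scalar Schatten-p shrinkage to the singular values of a noisy low-rank matrix, illustrating the denoising use case.

```python
import numpy as np

def schatten_p_shrink(X, tau, p=0.5, iters=10):
    """Shrink each singular value sigma toward a minimizer of
    0.5*(s - sigma)^2 + tau*s^p via the fixed-point heuristic
    s <- max(sigma - tau*p*s^(p-1), 0)."""
    U, sigma, Vt = np.linalg.svd(X, full_matrices=False)
    s = sigma.copy()
    for _ in range(iters):
        s = np.maximum(sigma - tau * p * np.maximum(s, 1e-12) ** (p - 1), 0.0)
    return U @ (s[:, None] * Vt)

rng = np.random.default_rng(0)
L = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))  # rank-2 "image"
noisy = L + 0.1 * rng.standard_normal((30, 30))
denoised = schatten_p_shrink(noisy, tau=1.0)
print(np.linalg.norm(denoised - L), np.linalg.norm(noisy - L))
```

With p < 1 the small (noise) singular values are driven to zero while the large (signal) ones are barely shrunk, which is the rank-deficiency prior at work.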


International Conference on Acoustics, Speech, and Signal Processing | 2009

Classification via group sparsity promoting regularization

Angshul Majumdar; Rabab K. Ward

Recently, a new classification assumption was proposed in [1]: the training samples of a particular class approximately form a linear basis for any test sample belonging to that class. The classification algorithm in [1] was based on the idea that all the correlated training samples belonging to the correct class are used to represent the test sample. Lasso regularization was proposed to select the representative training samples from the entire training set (consisting of all the training samples). Lasso, however, tends to select a single sample from a group of correlated training samples and thus does not promote the representation of the test sample in terms of all the training samples from the correct group. To overcome this problem, we propose two alternative regularization methods, Elastic Net and Sum-Over-l2-norm. Both regularization methods favor the selection of multiple correlated training samples to represent the test sample. Experimental results on benchmark datasets show that our regularization methods give better recognition results than the method of [1].
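The contrast between Lasso and group-l2 selection can be seen directly in their proximal operators. The sketch below (an illustration, not the paper's classifier) shows that soft thresholding keeps a single coefficient from a block of strong correlated coefficients, while block soft thresholding keeps or discards each block as a whole:

```python
import numpy as np

def prox_l1(z, t):
    """Soft thresholding: the Lasso prox, promotes element-wise sparsity."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_group_l2(z, t, groups):
    """Block soft thresholding: shrinks each group's l2 norm, so a group of
    correlated coefficients survives or vanishes together."""
    out = np.zeros_like(z)
    for g in groups:
        norm = np.linalg.norm(z[g])
        if norm > t:
            out[g] = (1 - t / norm) * z[g]
    return out

z = np.array([0.9, 1.0, 1.1, 0.1, 0.05, 0.02])  # group 0 strong, group 1 weak
groups = [slice(0, 3), slice(3, 6)]
lasso = prox_l1(z, 1.0)            # keeps only one entry of the strong group
group = prox_group_l2(z, 1.0, groups)  # keeps the whole strong group
print(lasso, group)
```

This is exactly the behavior the abstract describes: group-sparsity promoting regularization represents the test sample using all correlated training samples of the selected class.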


Signal Processing | 2010

Compressed sensing of color images

Angshul Majumdar; Rabab K. Ward

This work proposes a method for color imaging via compressive sampling. Random projections from each of the color channels are acquired separately. The problem is to reconstruct the original color image from the randomly projected (sub-sampled) data. Since each of the color channels is sparse in some domain (DCT, wavelet, etc.), one way to approach the reconstruction problem is to apply sparse optimization algorithms. We note that the color channels are highly correlated and propose an alternative reconstruction method based on group sparse optimization. Two new non-convex group sparse optimization methods are proposed in this work. Experimental results show that incorporating group sparsity into the reconstruction problem produces significant improvement (more than 1 dB PSNR) over ordinary sparse recovery algorithms.
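A minimal sketch of the joint-recovery idea, under simplifying assumptions (1-D signals standing in for images, a convex l2,1 solver with least-squares debiasing rather than the paper's non-convex methods): the three "channels" share one sparsity support, and row-sparse recovery exploits that shared structure.

```python
import numpy as np
rng = np.random.default_rng(1)

n, m, k = 100, 50, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)        # per-channel random projection
support = rng.choice(n, size=k, replace=False)
X = np.zeros((n, 3))                                # 3 color channels, shared support
X[support] = rng.uniform(1, 2, size=(k, 3)) * rng.choice([-1, 1], size=(k, 3))
Y = A @ X                                           # measurements of R, G, B

def row_soft(Z, t):
    """Prox of the l2,1 norm: shrink each row's l2 norm (keep/kill whole rows)."""
    norms = np.linalg.norm(Z, axis=1, keepdims=True)
    return np.maximum(1 - t / np.maximum(norms, 1e-12), 0.0) * Z

def group_ista(Y, A, lam, step, iters=1000):
    X = np.zeros((A.shape[1], Y.shape[1]))
    for _ in range(iters):
        X = row_soft(X + step * A.T @ (Y - A @ X), step * lam)
    return X

Xg = group_ista(Y, A, lam=0.1, step=0.1)
rows = np.linalg.norm(Xg, axis=1) > 0.1 * np.linalg.norm(Xg, axis=1).max()
Xhat = np.zeros_like(Xg)
Xhat[rows], *_ = np.linalg.lstsq(A[:, rows], Y, rcond=None)  # debias on detected support
rel_err = np.linalg.norm(Xhat - X) / np.linalg.norm(X)
print(rel_err)
```

All three channels are recovered from one joint support estimate; channel-by-channel l1 recovery would have to find the support three times independently.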


Signal Processing | 2011

Some empirical advances in matrix completion

Angshul Majumdar; Rabab K. Ward

Solving the matrix completion problem via rank minimization is NP hard. Recent studies have shown that this problem can be addressed as a convex nuclear-norm minimization problem, albeit at an increase in the required number of samples. This paper proposes a non-convex optimization problem (a variant of convex nuclear-norm minimization) for the solution of matrix completion. It also develops a fast numerical algorithm to solve the optimization. This empirical study shows that significant improvement can be achieved by the proposed method compared to the previous ones. The number of required samples is also dependent on the type of sampling scheme used. This work shows that blue-noise sampling schemes yield more accurate matrix completion results than the popular uniform random sampling.
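For reference, the convex baseline the paper improves upon can be sketched as a singular value thresholding heuristic (an illustration, not the paper's non-convex variant): alternately shrink the singular values and re-impose the observed entries.

```python
import numpy as np
rng = np.random.default_rng(2)

n, r = 20, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-2 target
mask = rng.random((n, n)) < 0.7                                # ~70% entries observed

def complete(M, mask, tau=0.1, iters=500):
    """Nuclear-norm-style completion: soft-threshold the singular values,
    then restore the known entries, and repeat."""
    X = np.where(mask, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)
        X = np.where(mask, M, X)     # data constraint on observed entries
    return X

Xhat = complete(M, mask)
err = np.linalg.norm((Xhat - M)[~mask]) / np.linalg.norm(M[~mask])
print(err)
```

The error is measured only on the unobserved entries, since the observed ones are imposed exactly.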


Canadian Journal of Electrical and Computer Engineering | 2009

Fast group sparse classification

Angshul Majumdar; Rabab K. Ward

A recent work proposed a novel Group Sparse Classifier (GSC) based on the assumption that the training samples of a particular class approximately form a linear basis for any test sample belonging to that class. The GSC requires solving an NP hard group-sparsity promoting optimization problem; thus a convex relaxation of the problem was proposed. The convex optimization problem, however, needs to be solved by quadratic programming and hence requires a large amount of computational time. To overcome this, we propose novel greedy (sub-optimal) algorithms that directly address the NP hard minimization problem. We call the classifiers based on these greedy group-sparsity promoting algorithms Fast Group Sparse Classifiers (FGSC). This work shows that the FGSC has nearly the same accuracy (at the 95% confidence level) as the GSC, but is much faster (by nearly two orders of magnitude). When certain conditions hold, the GSC and the FGSC are robust to dimensionality reduction via random projection; by robust, we mean that the classification accuracy is approximately the same before and after random projection. The robustness of these classifiers is proved theoretically and validated by thorough experimentation.
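The greedy idea can be sketched in a few lines (a toy illustration under assumed data, not the paper's FGSC algorithms): instead of a quadratic program, pick the class block most correlated with the test sample and refit it by least squares.

```python
import numpy as np
rng = np.random.default_rng(3)

d, per_class, n_classes = 30, 6, 3
# Training dictionary: columns grouped by class; samples within a class are correlated
blocks = []
for _ in range(n_classes):
    base = rng.standard_normal((d, 1))
    blocks.append(base + 0.3 * rng.standard_normal((d, per_class)))
D = np.hstack(blocks)
groups = [slice(c * per_class, (c + 1) * per_class) for c in range(n_classes)]

def greedy_group_classify(y, D, groups):
    """One greedy group-selection step: choose the block with the largest
    correlation with y, then solve an ordinary least-squares fit on it."""
    scores = [np.linalg.norm(D[:, g].T @ y) for g in groups]
    c = int(np.argmax(scores))
    coef, *_ = np.linalg.lstsq(D[:, groups[c]], y, rcond=None)
    return c, np.linalg.norm(y - D[:, groups[c]] @ coef)

# Test sample: a combination of class-1 training samples plus noise
y = blocks[1] @ rng.uniform(0.5, 1.0, per_class) + 0.05 * rng.standard_normal(d)
label, resid = greedy_group_classify(y, D, groups)
print(label)
```

Each step costs one matrix-vector product and one small least-squares solve, which is where the roughly two-orders-of-magnitude speedup over quadratic programming comes from.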


IEEE Transactions on Medical Imaging | 2012

Compressed Sensing Based Real-Time Dynamic MRI Reconstruction

Angshul Majumdar; Rabab K. Ward; Tyseer Aboulnasr

This work addresses the problem of real-time online reconstruction of dynamic magnetic resonance imaging sequences. The proposed method reconstructs the difference between the previous and the current image frames. This difference image is sparse. We recover the sparse difference image from its partial k-space scans by using a nonconvex compressed sensing algorithm. As no previous algorithm was fast enough for real-time reconstruction, we derive a novel algorithm for this purpose. Our proposed method has been compared against state-of-the-art offline and online reconstruction methods. The accuracy of the proposed method is lower than that of offline methods but noticeably higher than that of the online techniques. For real-time reconstruction we are also concerned about the reconstruction speed: our method is capable of reconstructing 128 × 128 images at 6 frames/s, 180 × 180 images at 5 frames/s and 256 × 256 images at 2.5 frames/s.
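The difference-frame trick can be sketched on a 1-D toy problem (with iterative hard thresholding standing in for the paper's nonconvex solver, and a simulated partial DFT standing in for the k-space scan): subtracting the known previous frame's k-space turns the problem into recovery of a sparse difference.

```python
import numpy as np
rng = np.random.default_rng(4)

n, m, k = 64, 40, 4
F = np.fft.fft(np.eye(n)) / np.sqrt(n)              # unitary DFT matrix
Fp = F[rng.choice(n, size=m, replace=False)]        # partial k-space sampling

x_prev = rng.standard_normal(n)                     # previous (known) frame
diff = np.zeros(n)
diff[rng.choice(n, size=k, replace=False)] = rng.uniform(1, 2, size=k)
x_curr = x_prev + diff                              # current frame: sparse change

y_diff = Fp @ x_curr - Fp @ x_prev                  # k-space data of the difference

def iht(y, A, k, iters=200):
    """Iterative hard thresholding: gradient step, keep the k largest entries."""
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(iters):
        x = x + A.conj().T @ (y - A @ x)
        keep = np.argsort(np.abs(x))[-k:]
        pruned = np.zeros_like(x)
        pruned[keep] = x[keep]
        x = pruned
    return x

x_hat = x_prev + iht(y_diff, Fp, k).real            # add recovered change back
rel_err = np.linalg.norm(x_hat - x_curr) / np.linalg.norm(x_curr)
print(rel_err)
```

Because only the sparse change must be recovered, far fewer k-space samples per frame suffice, which is what makes the online, frame-by-frame setting feasible.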


Systems, Man, and Cybernetics | 2010

Robust Classifiers for Data Reduced via Random Projections

Angshul Majumdar; Rabab K. Ward

The computational cost for most classification algorithms is dependent on the dimensionality of the input samples. As the dimensionality could be high in many cases, particularly those associated with image classification, reducing the dimensionality of the data becomes a necessity. The traditional dimensionality reduction methods are data dependent, which poses certain practical problems. Random projection (RP) is an alternative dimensionality reduction method that is data independent and bypasses these problems. The nearest neighbor classifier has been used with the RP method in classification problems. To obtain higher recognition accuracy, this study looks at the robustness of RP dimensionality reduction for several recently proposed classifiers: the sparse classifier (SC), the group SC (along with their fast versions), and the nearest subspace classifier. Theoretical proofs are offered regarding the robustness of these classifiers to RP. The theoretical results are confirmed by experimental evaluations.
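The data-independence of RP is easy to demonstrate: a Gaussian projection matrix, drawn without ever looking at the data, approximately preserves pairwise distances (the Johnson-Lindenstrauss property that underpins such robustness arguments). A small numerical check:

```python
import numpy as np
rng = np.random.default_rng(5)

d, d_red, n_pts = 2000, 300, 20
X = rng.standard_normal((n_pts, d))
P = rng.standard_normal((d, d_red)) / np.sqrt(d_red)   # data-independent projection
Xr = X @ P                                             # reduced-dimension data

def pairwise_dists(Z):
    return np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)

iu = np.triu_indices(n_pts, 1)
distortion = np.abs(pairwise_dists(Xr)[iu] / pairwise_dists(X)[iu] - 1.0)
print(distortion.max())
```

Every pairwise distance survives the 2000-to-300 reduction to within a small relative distortion, which is why distance- and residual-based classifiers behave almost identically before and after projection.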


Journal of Magnetic Resonance | 2011

Accelerating multi-echo T2 weighted MR imaging: analysis prior group-sparse optimization.

Angshul Majumdar; Rabab K. Ward

This work addresses the problem of reconstructing multi-echo T2-weighted MR images from partially sampled K-space data. Previous studies in reconstructing MR images from partial samples of the K-space used Compressed Sensing (CS) techniques to exploit the spatial correlation of the images (leading to sparsity in a transform domain). Such techniques can be employed to reconstruct the individual T2-weighted images. However, in the current context, the different images are not independent; they are images of the same cross section and hence are highly correlated. In this work, we exploit not only the spatial correlation within each image, but also the correlation between the images, to achieve even better reconstruction results. For individual MR images, CS-based techniques lead to a sparsity promoting optimization problem in a transform domain. In this paper, we show how to extend the same framework to incorporate correlation between images, leading to group sparsity promoting optimization. Group sparsity promoting optimization is popularly formulated as a synthesis prior problem. The synthesis prior formulation for group sparsity leads to superior reconstruction results compared to ordinary sparse reconstruction. However, in this paper we show that when group sparsity is framed as an analysis prior problem, the reconstruction results are even better for a proper choice of the sparsifying transform. An interesting observation of this work is that when the same sampling pattern is used to sample the K-space for all the T2-weighted echoes, group sparsity does not yield any noticeable improvement; but when different sampling patterns are used for different echoes, our proposed group sparsity promoting formulation yields significant improvement (in terms of Normalized Mean Squared Error) over previous CS-based techniques.


Magnetic Resonance Imaging | 2011

Joint reconstruction of multiecho MR images using correlated sparsity.

Angshul Majumdar; Rabab K. Ward

This work addresses the problem of reconstructing multiple T1- or T2-weighted images of the same anatomical cross section from partially sampled K-space data. Previous studies in reconstructing magnetic resonance (MR) images from partial samples of the K-space used compressed sensing (CS) techniques to exploit the spatial correlation of the images (leading to sparsity in the wavelet domain). Such techniques can be employed to reconstruct the individual T1- or T2-weighted images. However, in the current context, the different images are not really independent; they are images of the same cross section and, hence, are highly correlated. We exploit the correlation between the images, along with the spatial correlation within the images, to achieve better reconstruction results than exploiting spatial correlation only. For individual MR images, CS-based techniques lead to a sparsity-promoting optimization problem in the wavelet domain. In this article, we show that the same framework can be extended to incorporate correlation between images, leading to group/row sparsity-promoting optimization. Algorithms for solving such optimization problems have already been developed in the CS literature. We show that significant improvement in reconstruction accuracy can be achieved by considering the correlation between different T1- and T2-weighted images. For the same reconstruction accuracy, our proposed group sparse formulation requires 33% fewer K-space samples than simple sparsity-promoting reconstruction. Moreover, the reconstruction time of our proposed method is about two to four times shorter than that of the previous method.


International Conference on Acoustics, Speech, and Signal Processing | 2012

Synthesis and analysis prior algorithms for joint-sparse recovery

Angshul Majumdar; Rabab K. Ward

This paper proposes a Majorization-Minimization approach for solving the synthesis and analysis prior joint-sparse multiple measurement vector reconstruction problem. The proposed synthesis prior algorithm yields the same results as the Spectral Projected Gradient (SPG) method. The analysis prior algorithm is the first to be proposed for this problem; it yields considerably better results than the proposed synthesis prior algorithm. For problems of a given size, the run times of our proposed algorithms are fixed, unlike SPG, whose reconstruction time also depends on the support size of the vectors.
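The Majorization-Minimization recipe behind such algorithms can be sketched for the plain l1 case (the joint-sparse version replaces the scalar soft threshold with a row threshold; this is an illustration of the MM principle, not the paper's algorithm): majorize the data term with a constant a > ||A||², so each surrogate minimization reduces to one soft-thresholded gradient step, and the objective never increases.

```python
import numpy as np
rng = np.random.default_rng(6)

m, n = 40, 80
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, size=5, replace=False)] = rng.standard_normal(5)
y = A @ x_true
lam = 0.1
a = 1.01 * np.linalg.norm(A, 2) ** 2   # majorizer constant: any a > ||A||_2^2

def objective(x):
    return 0.5 * np.linalg.norm(y - A @ x) ** 2 + lam * np.abs(x).sum()

x = np.zeros(n)
objs = [objective(x)]
for _ in range(100):
    # Minimizing the separable surrogate reduces to one soft-thresholding step
    b = x + A.T @ (y - A @ x) / a
    x = np.sign(b) * np.maximum(np.abs(b) - lam / a, 0.0)
    objs.append(objective(x))
print(objs[0], objs[-1])
```

Because each iteration costs two fixed matrix-vector products, the per-problem run time is fixed regardless of the solution's support size, which is the property the abstract contrasts with SPG.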

Collaboration

Top co-authors of Angshul Majumdar:

Rabab K. Ward (University of British Columbia)
Hemant Kumar Aggarwal (Indraprastha Institute of Information Technology)
Anupriya Gogna (Indraprastha Institute of Information Technology)
Richa Singh (Indraprastha Institute of Information Technology)
Mayank Vatsa (Indraprastha Institute of Information Technology)
Vanika Singhal (Indraprastha Institute of Information Technology)
Ankita Shukla (Indraprastha Institute of Information Technology)
Jyoti Maggu (Indraprastha Institute of Information Technology)
Kavya Gupta (Indraprastha Institute of Information Technology)
Snigdha Tariyal (Indraprastha Institute of Information Technology)