
Publication


Featured research published by Mostafa Rahmani.


IEEE Transactions on Signal Processing | 2017

Randomized Robust Subspace Recovery and Outlier Detection for High Dimensional Data Matrices

Mostafa Rahmani; George K. Atia

This paper explores and analyzes two randomized designs for robust principal component analysis (PCA) employing low-dimensional data sketching. In one design, a data sketch is constructed using random column sampling followed by low-dimensional embedding, while in the other, sketching is based on random column and row sampling. Both designs are shown to bring about substantial savings in complexity and memory requirements for robust subspace learning over conventional approaches that use the full-scale data. A characterization of the sample and computational complexity of both designs is derived in the context of two distinct outlier models, namely, the sparse and independent outlier models. The proposed randomized approach can provably recover the correct subspace with computational and sample complexities that depend only weakly on the size of the data (only through the coherence parameters). The results of the mathematical analysis are confirmed through numerical simulations using both synthetic and real data.
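The two sketching designs described above can be illustrated with a short numpy sketch; the function names and sketch sizes below are illustrative choices, not the paper's notation:

```python
import numpy as np

def sketch_design_1(D, n_cols, embed_dim, rng):
    """Design 1: random column sampling followed by a low-dimensional
    random embedding of the sampled columns."""
    cols = rng.choice(D.shape[1], size=n_cols, replace=False)
    # Gaussian embedding matrix, scaled so column norms are preserved
    # in expectation.
    Phi = rng.standard_normal((embed_dim, D.shape[0])) / np.sqrt(embed_dim)
    return Phi @ D[:, cols]

def sketch_design_2(D, n_cols, n_rows, rng):
    """Design 2: sketching via random column and random row sampling."""
    cols = rng.choice(D.shape[1], size=n_cols, replace=False)
    rows = rng.choice(D.shape[0], size=n_rows, replace=False)
    return D[:, cols], D[rows, :]
```

Either sketch is then handed to a robust subspace learning routine in place of the full-scale matrix.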


IEEE Transactions on Signal Processing | 2017

Innovation Pursuit: A New Approach to Subspace Clustering

Mostafa Rahmani; George K. Atia

In subspace clustering, a group of data points belonging to a union of subspaces are assigned membership to their respective subspaces. This paper presents a new approach dubbed Innovation Pursuit (iPursuit) to the problem of subspace clustering using a new geometrical idea whereby subspaces are identified based on their relative novelties. We present two frameworks in which the idea of innovation pursuit is used to distinguish the subspaces. Underlying the first framework is an iterative method that finds the subspaces consecutively by solving a series of simple linear optimization problems, each searching for a direction of innovation in the span of the data potentially orthogonal to all subspaces except for the one to be identified in one step of the algorithm. A detailed mathematical analysis is provided establishing sufficient conditions for iPursuit to correctly cluster the data. The proposed approach can provably yield exact clustering even when the subspaces have significant intersections. It is shown that the complexity of the iterative approach scales only linearly in the number of data points and subspaces, and quadratically in the dimension of the subspaces. The second framework integrates iPursuit with spectral clustering to yield a new variant of spectral-clustering-based algorithms. The numerical simulations with both real and synthetic data demonstrate that iPursuit can often outperform the state-of-the-art subspace clustering algorithms, more so for subspaces with significant intersections, and that it significantly improves the state-of-the-art result for subspace-segmentation-based face clustering.
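One direction-search step can be sketched as a linear program: minimize the l1 norm of the projections of the data on a direction `c`, subject to `c` having unit projection on a chosen point `q`. This is a hedged reading of the idea; the paper's exact constraint set and iteration are richer:

```python
import numpy as np
from scipy.optimize import linprog

def innovation_direction(D, q):
    """Minimize ||D.T @ c||_1 subject to q @ c = 1.

    D : d x n data matrix (columns are data points)
    q : a selected data point (length-d vector)
    At the optimum, c tends to be orthogonal to the other subspaces,
    so |c . x| vanishes for points outside q's subspace.
    """
    d, n = D.shape
    # LP variables [c (d), t (n)] with t >= |D.T c| elementwise.
    c_obj = np.concatenate([np.zeros(d), np.ones(n)])
    A_ub = np.block([[D.T, -np.eye(n)], [-D.T, -np.eye(n)]])
    b_ub = np.zeros(2 * n)
    A_eq = np.concatenate([q, np.zeros(n)])[None, :]
    res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] * (d + n), method="highs")
    return res.x[:d]
```

Thresholding `D.T @ c` then separates the points of the identified subspace from the rest.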


IEEE Transactions on Signal Processing | 2017

Coherence Pursuit: Fast, Simple, and Robust Principal Component Analysis

Mostafa Rahmani; George K. Atia

This paper presents a remarkably simple, yet powerful, algorithm for robust principal component analysis (PCA), termed Coherence Pursuit (CoP). As inliers lie in a low-dimensional subspace and are mostly correlated, an inlier is likely to have strong mutual coherence with a large number of data points. By contrast, outliers either do not admit low-dimensional structures or form small clusters. In either case, an outlier is unlikely to bear strong resemblance to a large number of data points. Accordingly, CoP sets an outlier apart from an inlier by comparing their coherence with the rest of the data points. The mutual coherences are computed by forming the Gram matrix of the normalized data points. Subsequently, the sought subspace is recovered from the span of the subset of the data points that exhibit strong coherence with the rest of the data. As CoP only involves one simple matrix multiplication, it is significantly faster than the state-of-the-art robust PCA algorithms. We derive analytical performance guarantees for CoP under different models for the distributions of inliers and outliers in both noise-free and noisy settings. CoP is the first robust PCA algorithm that is simultaneously non-iterative, provably robust to both unstructured and structured outliers, and able to tolerate a large number of unstructured outliers.
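A minimal sketch of the CoP procedure as described above: normalize the data, form the Gram matrix, score each point by its coherence with the rest, and recover the subspace from the top-scoring points. The parameter names and the choice of the l2 row norm as the coherence score are illustrative; the paper analyzes the score choices and thresholds in detail:

```python
import numpy as np

def coherence_pursuit(X, rank, n_inliers):
    """Sketch of Coherence Pursuit (CoP).

    X         : d x n matrix whose columns are data points
    rank      : dimension of the sought inlier subspace
    n_inliers : number of high-coherence columns to retain (a
                user-chosen parameter in this sketch)
    """
    # Normalize each column to unit l2 norm.
    Xn = X / np.linalg.norm(X, axis=0, keepdims=True)
    # One matrix multiplication: the Gram matrix of mutual coherences.
    G = Xn.T @ Xn
    np.fill_diagonal(G, 0.0)            # ignore self-coherence
    # Score each point by its coherence with the rest of the data.
    scores = np.linalg.norm(G, axis=1)
    keep = np.argsort(scores)[-n_inliers:]
    # Recover the subspace from the span of the high-coherence columns.
    U, _, _ = np.linalg.svd(Xn[:, keep], full_matrices=False)
    return U[:, :rank], keep
```

Inliers in a shared low-dimensional subspace score high; outliers, bearing little resemblance to the bulk of the data, score low and are dropped.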


IEEE International Conference on Electro/Information Technology | 2016

Sparsity-based error detection in DC power flow state estimation

M.H. Amini; Mostafa Rahmani; Kianoosh G. Boroojeni; George K. Atia; S. Sitharama Iyengar; Orkun Karabasoglu

This paper presents a new approach for identifying the measurement error in the DC power flow state estimation problem. The proposed algorithm exploits the singularity of the impedance matrix and the sparsity of the error vector by posing the DC power flow problem as a sparse vector recovery problem that leverages the structure of the power system and uses l1-norm minimization for state estimation. This approach can provably compute the measurement errors exactly, and its performance is robust to the arbitrary magnitudes of the measurement errors. Hence, the proposed approach can detect the noisy elements if the measurements are contaminated with additive white Gaussian noise plus sparse noise with large magnitude, which could be caused by data injection attacks. The effectiveness of the proposed sparsity-based decomposition-DC power flow approach is demonstrated on the IEEE 118-bus and 300-bus test systems.
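The l1-norm minimization step can be sketched as a linear program. This is a generic sparse-residual recovery sketch, not the paper's power-system formulation; `H` stands in for the measurement matrix of the linearized model `z = H x + e` with `e` sparse but of arbitrary magnitude:

```python
import numpy as np
from scipy.optimize import linprog

def l1_state_estimation(H, z):
    """Solve min_x ||z - H x||_1 as a linear program; the residual then
    exposes the grossly corrupted measurements."""
    m, n = H.shape
    # LP variables [x (n), t (m)] with t >= |z - Hx| elementwise.
    c = np.concatenate([np.zeros(n), np.ones(m)])
    A_ub = np.block([[H, -np.eye(m)], [-H, -np.eye(m)]])
    b_ub = np.concatenate([z, -z])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (n + m), method="highs")
    x_hat = res.x[:n]
    return x_hat, z - H @ x_hat
```

Entries of the residual with large magnitude flag the erroneous measurements, robustly to how large those errors are.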


Asilomar Conference on Signals, Systems and Computers | 2015

Randomized subspace learning approach for high dimensional low rank plus sparse matrix decomposition

Mostafa Rahmani; George K. Atia

In this paper, a randomized algorithm for high dimensional low rank plus sparse matrix decomposition is proposed. Existing decomposition methods are not scalable to big data since they rely on using the whole data to extract the low-rank/sparse components, and are based on an optimization problem whose dimensionality is equal to the dimension of the given data. We reformulate the low rank plus sparse matrix decomposition problem as a column-row subspace learning problem. It is shown that when the column/row subspace of the low rank matrix is incoherent with the standard basis, the column/row subspace can be obtained from a small random subset of the columns/rows of the given data matrix. Thus, the high dimensional matrix decomposition problem is converted to a subspace learning problem, which is a low-dimensional optimization problem, and the proposed method uses a small random subset of the data rather than the whole big data matrix. In the provided analysis, it is shown that the sufficient number of randomly sampled columns/rows scales linearly with the rank and the coherence parameter of the low rank component.
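The central observation — that a small random subset of columns of an incoherent low-rank matrix already spans its column subspace — can be sketched as follows. For brevity, the sparse component and the robust decomposition applied to the sampled columns are omitted:

```python
import numpy as np

def column_subspace_from_sample(D, rank, n_cols, rng):
    """Learn the column subspace of a low-rank matrix from a small
    random subset of its columns."""
    idx = rng.choice(D.shape[1], size=n_cols, replace=False)
    # SVD of the sampled columns; the leading left singular vectors
    # span the column subspace of the low rank matrix.
    U, _, _ = np.linalg.svd(D[:, idx], full_matrices=False)
    return U[:, :rank]
```

The same step applied to rows yields the row subspace, after which the full decomposition reduces to a low-dimensional problem over the learned subspaces.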


IEEE Signal Processing Letters | 2017

Spatial Random Sampling: A Structure-Preserving Data Sketching Tool

Mostafa Rahmani; George K. Atia

Random column sampling is not guaranteed to yield data sketches that preserve the underlying structures of the data and may not sample sufficiently from less-populated data clusters. Also, adaptive sampling can often provide accurate low rank approximations, yet may fall short of producing descriptive data sketches, especially when the cluster centers are linearly dependent. Motivated by that, this letter introduces a novel randomized column sampling tool dubbed spatial random sampling (SRS), in which data points are sampled based on their proximity to randomly sampled points on the unit sphere. The most compelling feature of SRS is that the corresponding probability of sampling from a given data cluster is proportional to the surface area the cluster occupies on the unit sphere, independently of the size of the cluster population. Although it is fully randomized, SRS is shown to provide descriptive and balanced data representations. The proposed idea addresses a pressing need in data science and holds potential to inspire many novel approaches for analysis of big data.
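A minimal sketch of SRS, assuming columns are data points; names and parameters are illustrative:

```python
import numpy as np

def spatial_random_sampling(X, n_samples, rng):
    """Sketch of Spatial Random Sampling (SRS).

    X : d x n data matrix (columns are data points, assumed nonzero).
    For each random point on the unit sphere, pick the data point
    whose normalized version is closest to it.
    """
    d, n = X.shape
    # Project data points onto the unit sphere.
    Xn = X / np.linalg.norm(X, axis=0, keepdims=True)
    # Random points drawn uniformly on the unit sphere.
    W = rng.standard_normal((d, n_samples))
    W /= np.linalg.norm(W, axis=0, keepdims=True)
    # Closest data point = largest inner product with each random point.
    picked = np.argmax(W.T @ Xn, axis=1)
    return np.unique(picked)
```

Because each random point selects its nearest data point on the sphere, a cluster's sampling probability tracks the surface area it occupies rather than how many points it contains, so small clusters are not starved.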


Sensor Array and Multichannel Signal Processing Workshop | 2016

A subspace method for array covariance matrix estimation

Mostafa Rahmani; George K. Atia

This paper introduces a subspace method for the estimation of an array covariance matrix. When the received signals are uncorrelated, it is shown that the array covariance matrices lie in a special subspace defined through all possible correlation vectors of the received signals and whose dimension is typically much smaller than the ambient dimension. Based on this observation, a subspace-based covariance matrix estimator is proposed as a solution to a semi-definite convex optimization problem. While the optimization problem has no closed-form solution, a nearly optimal closed-form solution that is easily implementable is proposed. The proposed approach is shown to yield higher estimation accuracy than conventional approaches since it eliminates the estimation error that does not lie in the subspace of the true covariance matrices. The numerical examples demonstrate that the proposed estimator can significantly improve the estimation quality of the covariance matrix.
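The projection idea can be sketched for a uniform linear array, using a grid of steering vectors as a stand-in for the paper's correlation-vector subspace. This is an illustrative simplification, not the paper's exact construction:

```python
import numpy as np

def subspace_covariance_estimate(R_hat, steering_grid):
    """Project the sample covariance R_hat onto the subspace spanned by
    the rank-one matrices a a^H over a grid of steering vectors a
    (columns of steering_grid)."""
    m = R_hat.shape[0]
    # Basis: each column is vec(a a^H) for one grid steering vector.
    B = np.column_stack([np.outer(a, a.conj()).ravel()
                         for a in steering_grid.T])
    # Closed-form least-squares projection onto span(B).
    coef, *_ = np.linalg.lstsq(B, R_hat.ravel(), rcond=None)
    return (B @ coef).reshape(m, m)
```

Since the projection leaves any covariance in the subspace unchanged, estimation error components lying outside the subspace of valid covariance matrices are eliminated, which is the source of the accuracy gain.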


International Workshop on Machine Learning for Signal Processing | 2015

Randomized robust subspace recovery for big data

Mostafa Rahmani; George K. Atia

In this paper, a randomized PCA algorithm that is robust to the presence of outliers and whose complexity is independent of the dimension of the given data matrix is proposed. Using random sampling and random embedding techniques, the given data matrix is reduced to a small compressed matrix. A subspace learning approach is proposed to extract the column subspace of the low rank matrix from the compressed data. Two ideas for robust subspace learning are proposed to work under two different model assumptions. The first idea is based on the linear dependence between the columns of the low rank matrix, and the second is based on the independence between the column subspace of the low rank matrix and the subspace spanned by the outlying columns. We derive sufficient conditions that guarantee the performance of the proposed approach with high probability. It is shown that the proposed algorithm can successfully identify the outliers using only roughly O(r²) random linear observations of the data, where r is the rank of the low rank matrix, and provably achieves notable speedups over existing approaches.


2015 IEEE Signal Processing and Signal Processing Education Workshop (SP/SPE) | 2015

Analysis of randomized robust PCA for high dimensional data

Mostafa Rahmani; George K. Atia

Robust Principal Component Analysis (PCA) (or robust subspace recovery) is a particularly important problem in unsupervised learning pertaining to a broad range of applications. In this paper, we analyze a randomized robust subspace recovery algorithm to show that its complexity is independent of the size of the data matrix. Exploiting the intrinsic low-dimensional geometry of the low rank matrix, the big data matrix is first reduced to a smaller compressed matrix. This is accomplished by selecting a small random subset of the columns of the given data matrix, which is then projected into a random low-dimensional subspace. In the next step, a convex robust PCA algorithm is applied to the compressed data to learn the column subspace of the low rank matrix. We derive new sufficient conditions showing that the number of linear observations and the complexity of the randomized algorithm do not depend on the size of the given data.


IEEE Signal Processing Letters | 2017

Subspace Clustering via Optimal Direction Search

Mostafa Rahmani; George K. Atia

This paper presents a new spectral-clustering-based approach to the subspace clustering problem in which the data lies in the union of an unknown number of unknown linear subspaces. Underpinning the proposed method is a convex program for optimal direction search, which for each data point d, finds an optimal direction in the span of the data that has minimum projection on the other data points and non-vanishing projection on d. The obtained directions are subsequently leveraged to identify a neighborhood set for each data point. An Alternating Direction Method of Multipliers (ADMM) framework is provided to efficiently solve for the optimal directions. The proposed method is shown to often outperform the existing subspace clustering methods, particularly for unwieldy scenarios involving high levels of noise and close subspaces, and yields the state-of-the-art results for the problem of face clustering using subspace segmentation.

Collaboration


Dive into Mostafa Rahmani's collaborations.

Top Co-Authors

George K. Atia
University of Central Florida

Andre Beckus
University of Central Florida

Kianoosh G. Boroojeni
Florida International University

M.H. Amini
Florida International University

Orkun Karabasoglu
Carnegie Mellon University

S. Sitharama Iyengar
Florida International University