Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Vishal Monga is active.

Publication


Featured research published by Vishal Monga.


IEEE Transactions on Information Forensics and Security | 2007

Robust and Secure Image Hashing via Non-Negative Matrix Factorizations

Vishal Monga; M. K. Mihcak

In this paper, we propose the use of non-negative matrix factorization (NMF) for image hashing. In particular, we view images as matrices and the goal of hashing as a randomized dimensionality reduction that retains the essence of the original image matrix while preventing intentional attacks of guessing and forgery. Our work is motivated by the fact that standard rank-reduction techniques, such as QR and singular value decomposition, produce low-rank bases which do not respect the structure (i.e., non-negativity for images) of the original data. We observe that NMFs have two very desirable properties for secure image hashing applications: 1) the additivity property resulting from the non-negativity constraints results in bases that capture local components of the image, thereby significantly reducing misclassification, and 2) the effect of geometric attacks on images in the spatial domain manifests (approximately) as independent identically distributed noise on NMF vectors, allowing the design of detectors that are both computationally simple and, at the same time, optimal in the sense of minimizing error probabilities. Receiver operating characteristic analysis over a large image database reveals that the proposed algorithms significantly outperform existing approaches for image hashing.
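The NMF-hashing idea above can be sketched in a few lines: factorize keyed pseudorandom subimages and binarize the resulting coefficients. This is a minimal numpy sketch, not the paper's exact construction; the subimage count, the rank, the multiplicative-update NMF, and the median quantization rule are all illustrative assumptions.

```python
import numpy as np

def nmf(V, r, iters=200, seed=0):
    """Rank-r non-negative factorization V ~ W @ H via Lee-Seung
    multiplicative updates (a simple stand-in for any NMF solver)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 1e-3
    H = rng.random((r, n)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

def nmf_hash(image, r=2, key=42):
    """Binary hash: pseudorandomly select subimages (driven by a secret
    key), factorize each, and quantize the NMF coefficients against
    their median. Parameters here are illustrative, not the paper's."""
    rng = np.random.default_rng(key)          # secret key drives randomness
    m, n = image.shape
    feats = []
    for _ in range(4):                        # 4 keyed 8x8 subimages
        i = rng.integers(0, m - 8)
        j = rng.integers(0, n - 8)
        _, H = nmf(image[i:i + 8, j:j + 8].astype(float) + 1e-6, r)
        feats.append(H.ravel())
    v = np.concatenate(feats)
    return (v > np.median(v)).astype(np.uint8)  # coarse binary quantization
```

Because the subimage selection is keyed, an attacker without the key cannot predict which regions feed the hash, which is the source of the scheme's security.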


IEEE Transactions on Image Processing | 2006

Perceptual Image Hashing Via Feature Points: Performance Evaluation and Tradeoffs

Vishal Monga; Brian L. Evans

We propose an image hashing paradigm using visually significant feature points. The feature points should be largely invariant under perceptually insignificant distortions. To satisfy this, we propose an iterative feature detector to extract significant geometry-preserving feature points. We apply probabilistic quantization on the derived features to introduce randomness, which, in turn, reduces vulnerability to adversarial attacks. The proposed hash algorithm withstands standard benchmark (e.g., Stirmark) attacks, including compression, geometric distortions of scaling and small-angle rotation, and common signal-processing operations. Content-changing (malicious) manipulations of image data are also accurately detected. Detailed statistical analysis in the form of receiver operating characteristic (ROC) curves is presented and reveals the success of the proposed scheme in achieving perceptual robustness while avoiding misclassification.


IEEE Transactions on Information Forensics and Security | 2006

A clustering based approach to perceptual image hashing

Vishal Monga; Arindam Banerjee; Brian L. Evans

A perceptual image hash function maps an image to a short binary string based on the image's appearance to the human eye. Perceptual image hashing is useful in image databases, watermarking, and authentication. In this paper, we decouple image hashing into feature extraction (intermediate hash) followed by data clustering (final hash). For any perceptually significant feature extractor, we propose a polynomial-time heuristic clustering algorithm that automatically determines the final hash length needed to satisfy a specified distortion. We prove that the decision version of our clustering problem is NP-complete. Based on the proposed algorithm, we develop two variations to facilitate perceptual robustness versus fragility tradeoffs. We validate the perceptual significance of our hash by testing under Stirmark attacks. Finally, we develop randomized clustering algorithms for the purposes of secure image hashing.
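The feature-extraction/clustering decoupling above can be illustrated with a toy final-hash stage: intermediate-hash vectors that fall within a distortion budget of each other should map to the same final hash value. The greedy rule below is only a stand-in for the paper's polynomial-time heuristic; `greedy_cluster_hash` and its `eps` distortion budget are illustrative assumptions.

```python
import numpy as np

def greedy_cluster_hash(features, eps):
    """Greedy clustering in the spirit of the paper's heuristic: assign
    each intermediate-hash vector to the first cluster whose center is
    within eps, else open a new cluster. The cluster index is the final
    hash value, and the hash length (in bits) emerges automatically
    from eps rather than being fixed in advance."""
    centers, labels = [], []
    for f in features:
        d = [np.linalg.norm(f - c) for c in centers]
        if d and min(d) <= eps:
            labels.append(int(np.argmin(d)))
        else:
            centers.append(f.astype(float))
            labels.append(len(centers) - 1)
    # bits needed to index the clusters = final hash length
    bits = max(1, int(np.ceil(np.log2(max(2, len(centers))))))
    return labels, bits
```

With a small `eps`, perceptually distinct images split into many clusters (a long, fragile hash); a large `eps` merges near-duplicates into one hash value (a short, robust hash), which is exactly the robustness/fragility tradeoff the paper's two variations expose.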


international conference on image processing | 2004

Robust perceptual image hashing using feature points

Vishal Monga; Brian L. Evans

Perceptual image hashing maps an image to a fixed-length binary string based on the image's appearance to the human eye, and has applications in image indexing, authentication, and watermarking. We present a general framework for perceptual image hashing using feature points. The feature points should be largely invariant under perceptually insignificant distortions. To satisfy this, we propose an iterative feature detector to extract significant geometry-preserving feature points. We apply probabilistic quantization on the derived features to enhance perceptual robustness further. The proposed hash algorithm withstands standard benchmark (e.g., Stirmark) attacks including compression, geometric distortions of scaling and small-angle rotation, and common signal-processing operations. Content-changing (malicious) manipulations of image data are also accurately detected.


IEEE Geoscience and Remote Sensing Letters | 2013

Exploiting Sparsity in Hyperspectral Image Classification via Graphical Models

Umamahesh Srinivas; Yi Chen; Vishal Monga; Nasser M. Nasrabadi; Trac D. Tran

A significant recent advance in hyperspectral image (HSI) classification relies on the observation that the spectral signature of a pixel can be represented by a sparse linear combination of training spectra from an overcomplete dictionary. A spatiospectral notion of sparsity is further captured by developing a joint sparsity model, wherein spectral signatures of pixels in a local spatial neighborhood (of the pixel of interest) are constrained to be represented by a common collection of training spectra, albeit with different weights. A challenging open problem is to effectively capture the class conditional correlations between these multiple sparse representations corresponding to different pixels in the spatial neighborhood. We propose a probabilistic graphical model framework to explicitly mine the conditional dependences between these distinct sparse features. Our graphical models are synthesized using simple tree structures which can be discriminatively learnt (even with limited training samples) for classification. Experiments on benchmark HSI data sets reveal significant improvements over existing approaches in classification rates as well as robustness to choice of training.
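The joint sparsity model described above constrains the pixels in a spatial neighborhood to share one common support of training spectra. The sketch below shows only that underlying joint-sparse-representation baseline via a simultaneous orthogonal matching pursuit (SOMP) step and a smallest-residual decision rule; it does not implement the paper's graphical-model stage, and `somp`, `classify`, and the sparsity level `k` are illustrative stand-ins.

```python
import numpy as np

def somp(D, Y, k):
    """Simultaneous OMP: all columns of Y (pixels in a neighborhood)
    share one support of size k drawn from dictionary D."""
    support, R = [], Y.astype(float).copy()
    for _ in range(k):
        # pick the atom most correlated with the joint residual
        scores = np.linalg.norm(D.T @ R, axis=1)
        scores[support] = -np.inf            # never re-pick an atom
        support.append(int(np.argmax(scores)))
        Ds = D[:, support]
        X, *_ = np.linalg.lstsq(Ds, Y, rcond=None)
        R = Y - Ds @ X
    return support, X

def classify(D, class_of_atom, Y, k=3):
    """Label the neighborhood with the class whose selected atoms
    explain it best (smallest joint reconstruction residual)."""
    support, X = somp(D, Y, k)
    best, label = np.inf, None
    for c in set(class_of_atom):
        idx = [i for i, s in enumerate(support) if class_of_atom[s] == c]
        r = np.linalg.norm(Y - D[:, [support[i] for i in idx]] @ X[idx])
        if r < best:
            best, label = r, c
    return label
```

The paper's contribution sits on top of this: the per-pixel sparse representations become features whose class-conditional dependences are modeled with discriminatively learned tree-structured graphs.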


IEEE Transactions on Circuits and Systems for Video Technology | 2014

Adaptive Sparse Representations for Video Anomaly Detection

Xuan Mo; Vishal Monga; Raja Bala; Zhigang Fan

Video anomaly detection can be used in the transportation domain to identify unusual patterns such as traffic violations, accidents, unsafe driver behavior, street crime, and other suspicious activities. A common class of approaches relies on object tracking and trajectory analysis. Very recently, sparse reconstruction techniques have been employed in video anomaly detection. The fundamental underlying assumption of these methods is that any new feature representation of a normal/anomalous event can be approximately modeled as a (sparse) linear combination of prelabeled feature representations (of previously observed events) in a training dictionary. Sparsity can be a powerful prior on model coefficients, but challenges remain in the detection of anomalies involving multiple objects and in the ability of the linear sparsity model to effectively allow for class separation. The proposed research addresses both of these issues. First, we develop a new joint sparsity model for anomaly detection that enables the detection of joint anomalies involving multiple objects. This extension is highly nontrivial since it leads to a new simultaneous sparsity problem that we solve using a greedy pursuit technique. Second, we introduce nonlinearity into (i.e., kernelize) the linear sparsity model to enable superior class separability and hence anomaly detection. We extensively test on several real-world video datasets involving both single and multiple object anomalies. Results show marked improvements in detection of anomalies in both supervised and unsupervised scenarios when using the proposed sparsity models.


IEEE Transactions on Aerospace and Electronic Systems | 2014

SAR Automatic Target Recognition Using Discriminative Graphical Models

Umamahesh Srinivas; Vishal Monga; Raghu G. Raj

The problem of automatically classifying sensed imagery such as synthetic aperture radar (SAR) into a canonical set of target classes is widely known as automatic target recognition (ATR). A typical ATR algorithm comprises the extraction of a meaningful set of features from target imagery followed by a decision engine that performs class assignment. While ATR algorithms have significantly increased in sophistication over the past two decades, two outstanding challenges have been identified in the rich body of ATR literature: 1) the desire to mine complementary merits of distinct feature sets (also known as feature fusion), and 2) the ability of the classifier to excel even as training SAR images are limited. We propose to apply recent advances in probabilistic graphical models to address these challenges. In particular we develop a two-stage target recognition framework that combines the merits of distinct SAR image feature representations with discriminatively learned graphical models. The first stage projects the SAR image chip to informative feature spaces that yield multiple complementary SAR image representations. The second stage models each individual representation using graphs and combines these initially disjoint and simple graphs into a thicker probabilistic graphical model by leveraging a recent advance in discriminative graph learning. Experimental results on the benchmark moving and stationary target acquisition and recognition (MSTAR) data set confirm the benefits of our framework over existing ATR algorithms in terms of improvement in recognition rates. The proposed graphical classifiers are particularly robust when feature dimensionality is high and number of training images is small, a commonly observed constraint in SAR imagery-based target recognition.


international conference on multimedia and expo | 2005

Image Authentication Under Geometric Attacks Via Structure Matching

Vishal Monga; Divyanshu Vats; Brian L. Evans

Surviving geometric attacks in image authentication is considered to be of great importance. This is because of the vulnerability of classical watermarking and digital signature based schemes to geometric image manipulations, particularly local geometric attacks. In this paper, we present a general framework for image content authentication using salient feature points. We first develop an iterative feature detector based on an explicit modeling of the human visual system. Then, we compare features from two images by developing a generalized Hausdorff distance measure. The use of such a distance measure is crucial to the robustness of the scheme, and accounts for feature detector failure or occlusion, which previously proposed methods do not address. The proposed algorithm withstands standard benchmark (e.g., Stirmark) attacks including compression, common signal-processing operations, global as well as local geometric transformations, and even hard-to-model distortions such as print and scan. Content-changing (malicious) manipulations of image data are also accurately detected.
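The generalized Hausdorff comparison can be sketched as a rank-order (partial) Hausdorff distance: replacing the max over nearest-neighbour distances with a quantile means a fraction of missed or occluded feature points cannot dominate the score. This is a minimal numpy sketch under that assumption; the quantile parameter `q` is illustrative, not the paper's setting.

```python
import numpy as np

def partial_hausdorff(A, B, q=0.8):
    """Rank-q partial Hausdorff distance between 2-D point sets A and B:
    the q-th quantile (rather than the max) of nearest-neighbour
    distances, taken symmetrically in both directions."""
    # pairwise distances: d[i, j] = ||A[i] - B[j]||
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    hAB = np.quantile(d.min(axis=1), q)   # A -> B direction
    hBA = np.quantile(d.min(axis=0), q)   # B -> A direction
    return max(hAB, hBA)
```

With q = 1 this reduces to the classical Hausdorff distance, where a single spurious or missed feature point can blow up the score; q < 1 is what makes the matcher tolerant of detector failure.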


computer vision and pattern recognition | 2017

NTIRE 2017 Challenge on Single Image Super-Resolution: Methods and Results

Radu Timofte; Eirikur Agustsson; Luc Van Gool; Ming-Hsuan Yang; Lei Zhang; Bee Lim; Sanghyun Son; Heewon Kim; Seungjun Nah; Kyoung Mu Lee; Xintao Wang; Yapeng Tian; Ke Yu; Yulun Zhang; Shixiang Wu; Chao Dong; Liang Lin; Yu Qiao; Chen Change Loy; Woong Bae; Jaejun Yoo; Yoseob Han; Jong Chul Ye; Jae Seok Choi; Munchurl Kim; Yuchen Fan; Jiahui Yu; Wei Han; Ding Liu; Haichao Yu

This paper reviews the first challenge on single image super-resolution (restoration of rich details in a low-resolution image) with focus on proposed solutions and results. A new DIVerse 2K resolution image dataset (DIV2K) was employed. The challenge had 6 competitions divided into 2 tracks with 3 magnification factors each. Track 1 employed the standard bicubic downscaling setup, while Track 2 had unknown downscaling operators (blur kernel and decimation) that were learnable from low- and high-resolution training image pairs. Each competition had about 100 registered participants, and 20 teams competed in the final testing phase. The results gauge the state of the art in single image super-resolution.


IEEE Transactions on Medical Imaging | 2016

Histopathological Image Classification Using Discriminative Feature-Oriented Dictionary Learning

Tiep Huu Vu; Hojjat Seyed Mousavi; Vishal Monga; Ganesh Rao; U. K. Arvind Rao

In histopathological image analysis, feature extraction for classification is a challenging task due to the diversity of histology features suitable for each problem as well as the presence of rich geometrical structures. In this paper, we propose an automatic feature discovery framework via learning class-specific dictionaries and present a low-complexity method for classification and disease grading in histopathology. Essentially, our Discriminative Feature-oriented Dictionary Learning (DFDL) method learns class-specific dictionaries such that under a sparsity constraint, the learned dictionaries allow representing a new image sample parsimoniously via the dictionary corresponding to the class identity of the sample. At the same time, the dictionary is designed to be poorly capable of representing samples from other classes. Experiments on three challenging real-world image databases: 1) histopathological images of intraductal breast lesions, 2) mammalian kidney, lung and spleen images provided by the Animal Diagnostics Lab (ADL) at Pennsylvania State University, and 3) brain tumor images from The Cancer Genome Atlas (TCGA) database, reveal the merits of our proposal over state-of-the-art alternatives. Moreover, we demonstrate that DFDL exhibits a more graceful decay in classification accuracy as the number of training images shrinks, which is highly desirable in practice, where generous training data are often not available.
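Once class-specific dictionaries are in hand, the DFDL decision rule assigns a sample to the class whose dictionary represents it with the smallest sparse reconstruction residual. The sketch below shows only that decision stage (the discriminative dictionary learning itself is omitted); `residual`, the greedy OMP step, and the sparsity level `k` are illustrative assumptions.

```python
import numpy as np

def residual(Dc, y, k=2):
    """Reconstruction error of sample y using at most k atoms of the
    class dictionary Dc, selected by greedy orthogonal matching pursuit."""
    support, r = [], y.astype(float).copy()
    for _ in range(min(k, Dc.shape[1])):
        j = int(np.argmax(np.abs(Dc.T @ r)))  # most correlated atom
        if j not in support:
            support.append(j)
        Ds = Dc[:, support]
        x, *_ = np.linalg.lstsq(Ds, y, rcond=None)
        r = y - Ds @ x
    return np.linalg.norm(r)

def dfdl_classify(class_dicts, y, k=2):
    """Predicted class = the one whose (learned) dictionary yields the
    smallest sparse residual; class_dicts maps label -> dictionary."""
    errs = {c: residual(Dc, y, k) for c, Dc in class_dicts.items()}
    return min(errs, key=errs.get)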

Collaboration


Dive into the Vishal Monga's collaboration.

Top Co-Authors

Avatar

Umamahesh Srinivas

Pennsylvania State University

View shared research outputs
Top Co-Authors

Avatar

Muralidhar Rangaswamy

Air Force Research Laboratory

View shared research outputs
Top Co-Authors

Avatar

Hojjat Seyed Mousavi

Pennsylvania State University

View shared research outputs
Top Co-Authors

Avatar

Brian L. Evans

University of Texas at Austin

View shared research outputs
Top Co-Authors

Avatar
Top Co-Authors

Avatar

Trac D. Tran

Johns Hopkins University

View shared research outputs
Top Co-Authors

Avatar

Bosung Kang

Pennsylvania State University

View shared research outputs
Top Co-Authors

Avatar

Raghu G. Raj

United States Naval Research Laboratory

View shared research outputs
Researchain Logo
Decentralizing Knowledge