Parul Shah
Indian Institute of Technology Bombay
Publication
Featured research published by Parul Shah.
Signal, Image and Video Processing | 2013
Parul Shah; S. N. Merchant; Uday B. Desai
Image fusion has been receiving increasing attention in the research community with the aim of investigating general formal solutions to a wide spectrum of applications. The objective of this work is to formulate a method that can efficiently fuse multifocus as well as multispectral images for context enhancement and thus can be used by different applications. We propose a novel pixel fusion rule based on multiresolution decomposition of the source images using wavelet, wavelet-packet, and contourlet transforms. To compute the fused pixel value, we take a weighted average of the source pixels, where the weight given to a pixel is adaptively decided based on the significance of the pixel, which in turn is decided by the corresponding child pixels in the finer resolution bands. The fusion performance has been extensively tested on different types of images, viz. multifocus images, medical images (CT and MRI), as well as IR-visible surveillance images. Several pairs of images were fused to compare the results quantitatively as well as qualitatively with various recently published methods. The analysis shows that for all the image types under consideration, the proposed method increases the quality of the fused image significantly, both visually and quantitatively, by preserving all the relevant information. The major achievement is an average 50% reduction in artifacts.
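The child-based adaptive weighting is specific to the paper, but the overall multiresolution pipeline it builds on (decompose, fuse subbands, reconstruct) can be sketched with a one-level Haar transform. The maximum-magnitude detail rule below is a common simplification standing in for the paper's adaptive rule, and all names are illustrative:

```python
import numpy as np

S = np.sqrt(2.0)

def haar2(x):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH) subbands."""
    lo = (x[:, 0::2] + x[:, 1::2]) / S          # row-wise average
    hi = (x[:, 0::2] - x[:, 1::2]) / S          # row-wise difference
    ll = (lo[0::2] + lo[1::2]) / S
    lh = (lo[0::2] - lo[1::2]) / S
    hl = (hi[0::2] + hi[1::2]) / S
    hh = (hi[0::2] - hi[1::2]) / S
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Inverse of haar2: reconstructs the image from its subbands."""
    lo = np.empty((ll.shape[0] * 2, ll.shape[1]))
    lo[0::2], lo[1::2] = (ll + lh) / S, (ll - lh) / S
    hi = np.empty_like(lo)
    hi[0::2], hi[1::2] = (hl + hh) / S, (hl - hh) / S
    x = np.empty((lo.shape[0], lo.shape[1] * 2))
    x[:, 0::2], x[:, 1::2] = (lo + hi) / S, (lo - hi) / S
    return x

def fuse(x, y):
    """Fuse two registered images: average the approximations,
    keep the larger-magnitude detail coefficient per position."""
    cx, cy = haar2(x), haar2(y)
    ll = (cx[0] + cy[0]) / 2
    details = [np.where(np.abs(a) >= np.abs(b), a, b)
               for a, b in zip(cx[1:], cy[1:])]
    return ihaar2(ll, *details)
```

As a sanity check, fusing an image with itself reproduces the image, since the transform pair is exactly invertible.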
International Journal of Wavelets, Multiresolution and Information Processing | 2010
Parul Shah; S. N. Merchant; Uday B. Desai
This paper presents two methods for fusion of infrared (IR) and visible surveillance images. The first method combines the Curvelet Transform (CT) with the Discrete Wavelet Transform (DWT). As wavelets do not represent long edges well while curvelets are challenged by small features, our objective is to combine both to achieve better performance. The second approach uses the Discrete Wavelet Packet Transform (DWPT), which provides multiresolution in the high-frequency bands as well and hence handles edges better. The performance of the proposed methods has been extensively tested on a number of multimodal surveillance images and compared with various existing transform-domain fusion methods. Experimental results show that evaluation based on the normally used criteria, such as entropy, gradient, and contrast, is not enough, as in some cases these criteria are not consistent with the visual quality. They also demonstrate that the Petrovic and Xydeas image fusion metric is a more appropriate criterion for fusion of IR and visible images, as in all the tested fused images the visual quality agrees with the Petrovic and Xydeas metric evaluation. The analysis shows that there is a significant increase in the quality of the fused image, both visually and quantitatively. The major achievement of the proposed fusion methods is their reduced artifacts, one of the most desired features for fusion used in surveillance applications.
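The entropy and average-gradient indices that the abstract argues are insufficient on their own are easy to state; a minimal sketch follows, where the 256-bin histogram and forward-difference gradient are illustrative choices, not the paper's exact definitions:

```python
import numpy as np

def entropy(img):
    """Shannon entropy of the grey-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def mean_gradient(img):
    """Average magnitude of the forward-difference gradient."""
    f = np.asarray(img, dtype=float)
    gx = np.diff(f, axis=1)[:-1, :]   # horizontal differences
    gy = np.diff(f, axis=0)[:, :-1]   # vertical differences
    return float(np.mean(np.sqrt(gx ** 2 + gy ** 2)))
```

Both indices are zero for a flat image and grow with texture, which is precisely why they can reward noise as readily as genuine detail.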
international conference on vehicular electronics and safety | 2009
Parul Shah; Sunil Karamchandani; Taskeen Nadkar; Nikita Gulechha; Kaushik Koli; Ketan Lad
The automatic detection and recognition of car number plates has become an important application of artificial vision systems. Since license plates can be replaced, stolen, or simply tampered with, they are not the ultimate answer for vehicle identification. The objective is to develop a system whereby the vehicle identification number (VIN), or vehicle chassis number, is digitally photographed and then identified electronically by segmenting the characters from the embossed VIN. In this paper we present a novel algorithm for vehicle chassis number identification based on optical character recognition (OCR) using an artificial neural network. The algorithm is tested on over a thousand vehicle images under different ambient illumination. While capturing these images, the VIN was kept in focus, while the angle of view and the distance from the vehicle varied according to the experimental setup. These images were subjected to pre-processing, which comprises standard image processing algorithms. The resultant images were then fed to the proposed OCR system. The OCR system is a three-layer artificial neural network (ANN) with topology 504-600-10. The major achievement of this work is the rate of correct identification, which is 95.49% with zero false identifications.
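The 504-600-10 topology can be sketched as a plain feed-forward pass. The random weights and sigmoid activation below are placeholders, since the paper does not publish its trained parameters or activation function; this shows only the layer structure, not the 95.49%-accurate recognizer:

```python
import numpy as np

rng = np.random.default_rng(0)
# 504 inputs (flattened character glyph), 600 hidden units, 10 outputs
W1, b1 = rng.normal(0, 0.05, (600, 504)), np.zeros(600)
W2, b2 = rng.normal(0, 0.05, (10, 600)), np.zeros(10)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def recognize(glyph):
    """Forward pass through the 504-600-10 network:
    returns the index of the most active output unit."""
    hidden = sigmoid(W1 @ glyph + b1)
    scores = sigmoid(W2 @ hidden + b2)
    return int(np.argmax(scores))
```

A trained network would replace the random `W1`, `W2` with weights learned from labeled, segmented character images.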
international conference on industrial and information systems | 2008
Parul Shah; Pranali Choudhari; Suresh Sivaraman
In this paper we present a novel method for digital audio steganography where encrypted covert data is embedded by adaptively modifying wavelet packet coefficients of the host audio signal. The major contribution of the proposed scheme is the technique introduced for adaptively modifying the host audio to embed the covert data. The modification of the host audio is done by imposing a constraint which forces the modified value to be in the same range as its neighborhood. Due to this constraint, the noise introduced by embedding is very low compared to existing methods. The main advantages of the proposed embedding scheme are superior Signal to Noise Ratio (SNR) values, good hiding capacity, and speed. Listening test results also show that distortions in the stego audio are imperceptible relative to the original audio, even at the highest hiding capacity. Our method also achieves zero bit error in the recovered data, which is one of the most desired features of any steganography technique.
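As a much-simplified stand-in for the adaptive coefficient modification described above, the sketch below embeds one bit per coefficient by parity quantization; the paper's neighborhood-range constraint and the wavelet-packet decomposition are omitted, and the function names and step size are illustrative:

```python
import numpy as np

def embed_bit(coeff, bit, step=0.1):
    """Move the coefficient to the nearest quantization level
    whose parity encodes `bit` (0 or 1)."""
    q = int(np.round(coeff / step))
    if q % 2 != bit:
        q += 1 if coeff / step > q else -1   # nearest level with right parity
    return q * step

def extract_bit(coeff, step=0.1):
    """Recover the embedded bit from the level's parity."""
    return int(np.round(coeff / step)) % 2
```

The round trip is exact, and the distortion per coefficient is at most one quantization step, which illustrates why coefficient-domain embedding can keep the added noise low.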
Signal, Image and Video Processing | 2013
Parul Shah; Chilamakuri Chandra Sekhar Reddy; S. N. Merchant; Uday B. Desai
Camouflage is an attempt to conceal a target by making it look similar to the background, so as to make detection and recognition of the target difficult for the observer (a man, a machine, or both). Detecting a camouflaged target in a video captured in the visible band is a big challenge. One can use infrared (IR) video, where the visibility of the same target is much better, but the problem with the IR band is that most of the contrast, color, and edge information about the background is lost, and thus localizing the target becomes difficult. The objective of this work is to fuse registered videos captured in the visible and IR bands such that the target is no longer camouflaged and hence clearly visible to the human monitor. More importantly, this should be done without losing the background details. We have proposed four different video fusion methods. All the proposed methods are intensity invariant, so that camouflaged targets are detected independent of the illumination conditions. The performance of these methods has been compared using the Wang-Bovik and Petrovic-Xydeas fusion metrics along with other information-theoretic indices. Experimental video results show that the proposed methods improve perception and thus facilitate detection and localization of a camouflaged target. Moreover, the fused video has minimal artifacts, as indicated by the highest peak signal-to-noise ratio, which is one of the most desired qualities of a good fusion method.
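The peak signal-to-noise ratio used above as the artifact index is the standard definition; a minimal implementation follows, where the function name and the 8-bit peak value are assumptions:

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference
    frame and a fused/test frame."""
    ref = np.asarray(reference, dtype=float)
    tst = np.asarray(test, dtype=float)
    mse = np.mean((ref - tst) ** 2)
    if mse == 0:
        return float("inf")   # identical frames
    return float(10 * np.log10(peak ** 2 / mse))
```

Higher PSNR means the fused frame deviates less from the reference, i.e. fewer fusion artifacts.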
international conference on multimedia and expo | 2011
Parul Shah; S. N. Merchant; Uday B. Desai
In this paper we present a spatial domain method to fuse multifocus images. Here the fused image pixel is selected from one of the source images based on a novel selection criterion that utilizes the statistical properties of the neighborhood. The eigenvalue of the unbiased estimate of the covariance matrix of an image block depends on the strength of edges in the block and thus provides a good basis for selecting/rejecting a pixel, giving preference to the pixel with the higher eigenvalue and thus the sharper neighborhood. To prevent a noise pixel from being selected as a fused image pixel, a continuity constraint is imposed on the selection criterion. The performance of the method has been extensively tested on several pairs of multifocus images and compared quantitatively with existing methods. Experimental results show that the proposed method improves fusion quality by reducing loss of information by almost 50% and noise by more than 95%. The results also show that evaluation based on widely used criteria like entropy, gradient, and deviation may not be enough, as in some cases these criteria are not consistent with the ground truth. They demonstrate that the Petrovic metrics correlate with the ground truth as well as with visual quality.
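The eigenvalue-based selection can be sketched directly: estimate the unbiased covariance of each neighborhood, compare the dominant eigenvalues, and keep the pixel from the sharper source. The continuity constraint is omitted here, and the window size and function names are illustrative:

```python
import numpy as np

def dominant_eigenvalue(block):
    """Largest eigenvalue of the unbiased covariance estimate of the
    block's columns; stronger intensity variation gives a larger value."""
    c = np.cov(np.asarray(block, dtype=float), rowvar=False)
    return float(np.linalg.eigvalsh(np.atleast_2d(c)).max())

def fuse_multifocus(a, b, k=3):
    """Pick each fused pixel from the source whose k-by-k neighborhood
    has the larger dominant eigenvalue (i.e. is locally sharper)."""
    r = k // 2
    pa = np.pad(np.asarray(a, dtype=float), r, mode="edge")
    pb = np.pad(np.asarray(b, dtype=float), r, mode="edge")
    out = np.empty(np.shape(a), dtype=float)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            wa = pa[i:i + k, j:j + k]
            wb = pb[i:i + k, j:j + k]
            out[i, j] = (a[i][j] if dominant_eigenvalue(wa) >= dominant_eigenvalue(wb)
                         else b[i][j])
    return out
```

The loop is written for clarity, not speed; a production version would vectorize the per-window covariance computation and add the paper's continuity check to suppress isolated noise pixels.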
Signal, Image and Video Processing | 2014
Parul Shah; T. V. Srikanth; S. N. Merchant; Uday B. Desai
Image fusion has been receiving increasing attention in the research community with the aim of investigating general formal solutions to a wide spectrum of applications such as multifocus, multiexposure, multispectral …
2011 IEEE 10th IVMSP Workshop: Perception and Visual Signal Analysis | 2011
Parul Shah; T. V. Srikanth; S. N. Merchant; Uday B. Desai
international conference on information and communication security | 2009
Krishnendu Saha; Parul Shah; S. N. Merchant; Uday B. Desai
international conference on information and communication security | 2011
Parul Shah; Maxime Drumetz; S. N. Merchant; Uday B. Desai