Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Soundararajan Ezekiel is active.

Publication


Featured research published by Soundararajan Ezekiel.


Proceedings of SPIE | 2010

Wavelet-based image registration

Christopher Paulson; Soundararajan Ezekiel; Dapeng Wu

Image registration is a fundamental enabling technology in computer vision. Developing an accurate image registration algorithm will significantly improve techniques for computer vision problems such as tracking, fusion, change detection, and autonomous navigation. In this paper, our goal is to develop an algorithm that is robust, automatic, can perform multi-modality registration, reduces the Root Mean Square Error (RMSE) below 4, increases the Peak Signal to Noise Ratio (PSNR) above 34, and uses the wavelet transformation. Preliminary results show that the algorithm achieves a PSNR of approximately 36.7 and an RMSE of approximately 3.7. This paper provides a comprehensive discussion of a wavelet-based registration algorithm for remote sensing applications.
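The RMSE and PSNR figures quoted above follow from the standard definitions; a minimal sketch (the images and the noise level here are illustrative, not from the paper):

```python
import numpy as np

def rmse(ref, img):
    # Root Mean Square Error between a reference image and a test image.
    return np.sqrt(np.mean((ref.astype(float) - img.astype(float)) ** 2))

def psnr(ref, img, peak=255.0):
    # Peak Signal to Noise Ratio in dB, relative to the maximum pixel value.
    e = rmse(ref, img)
    return float("inf") if e == 0 else 20.0 * np.log10(peak / e)

# Illustrative data: a random "reference" image and a noisy copy.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64)).astype(float)
noisy = ref + rng.normal(0.0, 3.7, size=ref.shape)
```

With noise of standard deviation 3.7 the measured RMSE sits near 3.7 and the PSNR near 20·log10(255/3.7) ≈ 36.8 dB, the same scale as the figures quoted in the abstract.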


Applied Imagery Pattern Recognition Workshop | 2013

Multi-scale decomposition tool for Content Based Image Retrieval

Soundararajan Ezekiel; Mark G. Alford; David D. Ferris; Eric K. Jones; Adnan Bubalo; Mark Gorniak; Erik Blasch

Content Based Image Retrieval (CBIR) is a technical area focused on answering the “who, what, where, and when” questions associated with imagery. A multi-scale feature extraction scheme based on wavelet and Contourlet transforms is proposed to reliably extract objects in images. First, we explore the Contourlet transformation in association with a Pulse Coupled Neural Network (PCNN), while the second technique is based on Rescaled Range (R/S) analysis. Both methods provide flexible multi-resolution decomposition and directional feature extraction, and are suitable for image fusion. The Contourlet transformation is conceptually similar to a wavelet transformation, but simpler, faster, and less redundant. R/S analysis uses the range R of cumulative deviations from the mean, divided by the standard deviation S, to calculate the scaling exponent, or Hurst exponent, H. Following the original work of Hurst, the exponent H provides a quantitative measure of the persistence of similarities in a signal. For images, if the information exhibits self-similarity and fractal correlation, then H gives a measure of the smoothness of the objects. The experimental results demonstrate that our proposed approach has promising applications for CBIR. We apply our multi-scale decomposition approach to images with simple thresholding of wavelet/curvelet coefficients for visually sharper object outlines, salient extraction of object edges, and increased perceptual quality. We further explore these approaches to segment images, and the empirical results reported here are encouraging for determining who or what is in an image.
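The R/S estimate of the Hurst exponent described above can be sketched directly: the range of cumulative deviations divided by the standard deviation, averaged per window size, with H as the log-log slope. A minimal 1-D version (window sizes and data are illustrative):

```python
import numpy as np

def hurst_rs(x, window_sizes=(8, 16, 32, 64, 128)):
    # Rescaled Range (R/S) estimate of the Hurst exponent H:
    # per window, R = range of cumulative deviations from the mean,
    # S = standard deviation; H is the slope of log(R/S) vs log(n).
    x = np.asarray(x, float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            dev = np.cumsum(w - w.mean())
            r = dev.max() - dev.min()
            s = w.std()
            if s > 0:
                rs_vals.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    return np.polyfit(log_n, log_rs, 1)[0]

rng = np.random.default_rng(1)
h_noise = hurst_rs(rng.normal(size=4096))              # white noise: H near 0.5
h_walk = hurst_rs(np.cumsum(rng.normal(size=4096)))    # random walk: persistent, H near 1
```

White noise shows no persistence (H around 0.5), while an integrated random walk is strongly persistent (H approaching 1), matching Hurst's interpretation of the exponent.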


National Aerospace and Electronics Conference | 2014

No-reference objective blur metric based on the notion of wavelet gradient magnitude edge width

Soundararajan Ezekiel; Kyle Harrity; Mark G. Alford; Erik Blasch; David D. Ferris; Adnan Bubalo

In the past decade, the number and popularity of digital cameras has increased manyfold, increasing the demand for blur metrics and quality assessment techniques to evaluate digital images. There is still no widely accepted industry standard by which an image's blur content may be assessed, so it is imperative that better, more reliable, no-reference metrics be created to fill this gap. In this paper, a new wavelet-based scheme is proposed as a blur metric. This method does not rely on subjective testing other than for verification. After applying the discrete wavelet transform to an image, we use adaptive thresholding to identify edge regions in the horizontal, vertical, and diagonal sub-images. For each sub-image, we exploit the fact that detected edges can be separated into connected components; perceptually, blur is most apparent on edge regions. From these regions it is possible to compute properties of the edge such as length and width. The length and width can then be used to measure the area of a blurred region, which in turn yields the number of blurred pixels for each connected region. Ideally, an edge point is represented by only a single pixel, so if a found edge has a width greater than one it likely contains blur. To avoid skewing the results, a one-by-n rectangle is removed from the computed blur area. The areas are summed to give the total blur pixel count per image. Using a series of test images, we determined the blur pixel ratio as the number of blur pixels divided by the total pixels in an image.
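A simplified sketch of the blur-pixel-ratio idea, assuming a one-level Haar DWT and treating each horizontal run of thresholded detail coefficients as one edge (the paper's connected-component analysis is reduced here to 1-D runs):

```python
import numpy as np

def haar_dwt2(img):
    # One level of the 2-D Haar DWT: approximation plus H/V/D detail bands.
    img = img.astype(float)
    lo = (img[0::2, :] + img[1::2, :]) / 2
    hi = (img[0::2, :] - img[1::2, :]) / 2
    ll = (lo[:, 0::2] + lo[:, 1::2]) / 2
    lh = (lo[:, 0::2] - lo[:, 1::2]) / 2
    hl = (hi[:, 0::2] + hi[:, 1::2]) / 2
    hh = (hi[:, 0::2] - hi[:, 1::2]) / 2
    return ll, lh, hl, hh

def blur_pixel_ratio(img):
    # Adaptive threshold on each detail band, then treat every horizontal
    # run of edge pixels as one edge: an ideal edge is one pixel wide, so a
    # run of width w contributes w - 1 blur pixels (the "one-by-n" removal
    # from the paper, reduced to 1-D runs).
    _, lh, hl, hh = haar_dwt2(img)
    blur = 0
    for band in (lh, hl, hh):
        mag = np.abs(band)
        mask = mag > mag.mean() + mag.std()
        for row in mask:
            run = 0
            for p in list(row) + [False]:   # sentinel closes a trailing run
                if p:
                    run += 1
                elif run:
                    blur += run - 1
                    run = 0
    return blur / img.size

# Illustrative images: a sharp vertical step and a ramp-blurred version.
sharp = np.zeros((32, 32)); sharp[:, 16:] = 255.0
blurred = np.clip((np.arange(32) - 12) / 7.0, 0, 1) * 255.0 * np.ones((32, 1))
```

On these toy images the sharp step yields a ratio of zero while the ramp-blurred step yields a positive ratio, the increasing relationship the metric relies on.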


National Aerospace and Electronics Conference | 2014

No-reference blur metric using double-density and dual-tree two-dimensional wavelet transformation

Soundararajan Ezekiel; Kyle Harrity; Erik Blasch; Adnan Bubalo

Over the past decade the digital camera has become widely available in many devices such as cell phones and computers. The perceptual quality of digital images is therefore an important and necessary requirement when evaluating them. To improve the quality of images captured with a camera, we must identify and measure the artifacts that cause blur within the images. Blur spreads pixel intensity and arises from multiple sources; the most common types are object motion, defocus, and camera motion. In the last two decades, the discrete wavelet transformation (DWT) has become a cutting-edge technology in the signal and image processing field for applications such as denoising. The disadvantage of the DWT is that it cannot directly reveal blur in its coefficients. In this paper, we propose a novel framework for a blur metric for an image. Our approach is based on the double-density dual-tree two-dimensional wavelet transformation (D3TDWT), which combines the properties of both the double-density DWT and the dual-tree DWT. We also utilize the gradient to evaluate blurring artifacts and measure image quality.


National Aerospace and Electronics Conference | 2014

No-reference multi-scale blur metric

Kyle Harrity; Soundararajan Ezekiel; Michael H Ferris; Maria Cornacchia; Erik Blasch

In recent years, digital cameras have been widely used for image capture. These devices are built into cell phones, laptops, tablets, webcams, etc. Image quality is an important characteristic for any digital image analysis. Historically, techniques to assess image quality for these mobile products require a standard image to be used as a reference; in that case, Root Mean Square Error and Peak Signal to Noise Ratio can be employed to measure image quality. However, these methods are not valid if there is no reference image. Recent studies show that the Contourlet transform, a multi-scale transformation that extends the two-dimensional wavelet transformation, can operate on an image at different noise levels without a reference image. In this paper, we develop a no-reference blur metric for digital images based on edges and noise in images. In our approach, a Contourlet transformation is applied to the blurred image, using a Laplacian Pyramid and Directional Filter Banks to obtain various image representations. The Laplacian Pyramid is a difference of Gaussian Pyramids between two consecutive levels: at each level in the Gaussian Pyramid, the image is smoothed with two Gaussians of different sizes, the results are subtracted and subsampled, and the input image is decomposed into directional sub-bands. Directional filter banks are designed to capture high-frequency components representing the directionality of the images, similar to the detail coefficients in a wavelet transformation. We focus on measuring blur for each level and direction at the finest level of the image to assess its quality. Using the ratio of blur pixels to total pixels, we compare our results, which require no reference image, to standard full-reference image statistics. The results demonstrate that our proposed no-reference metric has an increasing relationship with the blurriness of an image and is more sensitive to blur than the correlation full-reference metric.
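The difference-of-Gaussians construction of one Laplacian Pyramid level can be sketched as follows (the sigmas are illustrative and the subsampling step is omitted for brevity):

```python
import numpy as np

def gauss_kernel(sigma):
    # Normalized 1-D Gaussian kernel with radius 3*sigma.
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def smooth(img, sigma):
    # Separable Gaussian smoothing with edge padding (output size = input size).
    k = gauss_kernel(sigma)
    r = len(k) // 2
    conv = lambda v: np.convolve(np.pad(v, r, mode="edge"), k, mode="valid")
    return np.apply_along_axis(conv, 0, np.apply_along_axis(conv, 1, img))

def dog_level(img, sigma_fine=1.0, sigma_coarse=2.0):
    # One difference-of-Gaussians level: the band-pass detail retained by a
    # Laplacian Pyramid level (subsampling omitted).
    return smooth(img, sigma_fine) - smooth(img, sigma_coarse)
```

A flat region produces zero response, while an intensity edge produces a localized band-pass response, which is what the subsequent directional filtering and blur measurement operate on.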


National Aerospace and Electronics Conference | 2014

Denoising one-dimensional signals with curvelets and contourlets

Ryan Moore; Soundararajan Ezekiel; Erik Blasch

Fast Fourier Transforms (FFTs) and Discrete Wavelet Transformations (DWTs) have been routinely used as methods of denoising signals. DWT limitations include the inability to detect contours, curves, and directional information in multi-dimensional signals. In the past decade, two new approaches have surfaced: curvelets, developed by Candès, and contourlets, developed by Do et al. The typical applications of contourlets and curvelets involve denoising two-dimensional image data. We explore the use of curvelets and contourlets for the one-dimensional (1D) denoising problem. Working with seismic data, we introduce various types of noise and apply the wavelet, curvelet, and contourlet transforms to each signal. We test multiple decomposition levels and different thresholding values. The benchmark for determining the effectiveness of each transform is the peak signal-to-noise ratio (PSNR) between the original signal and the denoised signal. The proposed denoising methods demonstrate that contourlets and curvelets are viable alternatives to the DWT and FFT for signal processing. The initial results indicate that the contourlet and curvelet methods yield a higher PSNR and lower error than the DWT and FFT for 1D data.
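For context, the DWT baseline mentioned above can be sketched as 1-D Haar soft-threshold denoising scored by PSNR (the signal, noise level, and universal-threshold rule are illustrative choices, not the paper's exact setup):

```python
import numpy as np

SQRT2 = np.sqrt(2.0)

def haar1d(x):
    # One level of the orthonormal 1-D Haar DWT: (approximation, detail).
    x = np.asarray(x, float)
    return (x[0::2] + x[1::2]) / SQRT2, (x[0::2] - x[1::2]) / SQRT2

def ihaar1d(a, d):
    out = np.empty(2 * len(a))
    out[0::2] = (a + d) / SQRT2
    out[1::2] = (a - d) / SQRT2
    return out

def soft(c, t):
    # Soft thresholding shrinks coefficients toward zero by t.
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def denoise(x, levels=3):
    a, details = np.asarray(x, float), []
    for _ in range(levels):
        a, d = haar1d(a)
        details.append(d)
    sigma = np.median(np.abs(details[0])) / 0.6745   # robust noise estimate
    t = sigma * np.sqrt(2.0 * np.log(len(x)))        # universal threshold
    for d in reversed(details):
        a = ihaar1d(a, soft(d, t))
    return a

def psnr(ref, est):
    return 20.0 * np.log10(np.abs(ref).max() / np.sqrt(np.mean((ref - est) ** 2)))

rng = np.random.default_rng(2)
clean = np.sin(2 * np.pi * 4 * np.linspace(0, 1, 1024))
noisy = clean + rng.normal(0.0, 0.3, 1024)
```

Soft-thresholding the detail bands raises the PSNR of the smooth test signal by several dB over the noisy input, the same PSNR benchmark used to compare the transforms in the paper.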


Applied Imagery Pattern Recognition Workshop | 2013

Multi-perspective anomaly prediction using neural networks

Aaron Waibel; Abdullah Alshehri; Soundararajan Ezekiel

In this paper, we introduce a technique for predicting anomalies in a signal by observing relationships between multiple meaningful transformations of the signal, called perspectives. In particular, we use the Fourier transform to provide a holistic view of the frequencies present in a signal, along with a wavelet-denoised signal that is filtered to locate anomalous peaks. We then input these perspectives into a feedforward neural network to recognize patterns in the relationship between the perspectives and the presence of anomalies. The neural network is trained using a supervised learning algorithm for a given data set. Once trained, the neural network outputs the probability of a significant event occurring later in the signal based on anomalies occurring in its early part. A large collection of seismic signals was used in this study to illustrate the underlying methodology. Using this method we were able to achieve 54.7% accuracy in predicting anomalies later in a seismic signal. The techniques we present in this paper, with some refinement, can readily be applied to detect anomalies in seismic, electrocardiogram, electroencephalogram, and other non-stationary signals.
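A toy version of this pipeline, assuming hand-made "perspective" features (high-frequency FFT energy and the peak of a moving-average-smoothed signal standing in for wavelet denoising) and a small feedforward network trained by gradient descent; everything here is synthetic and illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def features(sig):
    # Two illustrative "perspectives": high-frequency FFT energy, and the
    # peak of a moving-average-smoothed signal (a crude stand-in for
    # wavelet denoising).
    spec = np.abs(np.fft.rfft(sig))
    smoothed = np.convolve(sig, np.ones(5) / 5.0, mode="same")
    return np.array([spec[len(spec) // 2:].sum(), np.abs(smoothed).max()])

def make_signal(anomalous):
    # Synthetic signal; anomalous ones carry a sharp spike.
    sig = rng.normal(0.0, 0.3, 128)
    if anomalous:
        sig[rng.integers(20, 100)] += 5.0
    return sig

X = np.array([features(make_signal(i % 2 == 1)) for i in range(400)])
y = (np.arange(400) % 2).astype(float)
X = (X - X.mean(0)) / X.std(0)

# One hidden layer, sigmoid output: probability of a significant event.
W1, b1 = rng.normal(0.0, 0.5, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0.0, 0.5, 8), 0.0
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
for _ in range(2000):                      # gradient descent on cross-entropy
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    g = (p - y) / len(y)
    gh = np.outer(g, W2) * (1.0 - h**2)
    W2 -= 0.5 * h.T @ g; b2 -= 0.5 * g.sum()
    W1 -= 0.5 * X.T @ gh; b1 -= 0.5 * gh.sum(0)

accuracy = float(np.mean((p > 0.5) == (y == 1)))
```

On this easily separable synthetic data the network converges to high training accuracy; the paper's 54.7% figure reflects the much harder real seismic prediction task.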


Proceedings of SPIE | 2015

Information fusion performance evaluation for motion imagery data using mutual information: initial study

Samuel Grieggs; Michael J. McLaughlin; Soundararajan Ezekiel; Erik Blasch

As technology and internet use grow at an exponential rate, video and imagery data are becoming increasingly important. Various techniques such as Wide Area Motion Imagery (WAMI), Full Motion Video (FMV), and Hyperspectral Imaging (HSI) are used to collect motion data and extract relevant information. Detecting and identifying a particular object in imagery data is an important step in understanding visual imagery, such as content-based image retrieval (CBIR). Imagery data is segmented, automatically analyzed, and stored in a dynamic and robust database. In our system, we seek to utilize image fusion methods, which require quality metrics. Many Image Fusion (IF) algorithms have been proposed, but only a few metrics are used to evaluate their performance. In this paper, we seek a robust, objective metric to evaluate the performance of IF algorithms that compares the outcome of a given algorithm to ground truth and reports several types of errors. Given the ground truth of motion imagery data, it computes detection failure, false alarm, precision, and recall metrics, background and foreground region statistics, and splits and merges of foreground regions. Using the Structural Similarity Index (SSIM), Mutual Information (MI), and entropy metrics, experimental results demonstrate the effectiveness of the proposed methodology for object detection, activity exploitation, and CBIR.
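Of the metrics above, mutual information is straightforward to sketch from a joint histogram (the bin count and images are illustrative):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    # Mutual information (bits) from the joint histogram of two images.
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                       # skip empty cells to avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
unrelated = rng.random((64, 64))
mi_self = mutual_information(img, img)        # bounded by the image's entropy
mi_none = mutual_information(img, unrelated)  # near zero for independent data
```

A fused image that preserves source content scores high MI against that source, while an unrelated image scores near zero, which is why MI serves as a fusion quality metric.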


National Aerospace and Electronics Conference | 2014

Double-density dual-tree wavelet-based polarimetry analysis

Kyle Harrity; Soundararajan Ezekiel; Adnan Bubalo; Erik Blasch; Mark G. Alford

For the past two decades, the Discrete Wavelet Transformation (DWT) has been successfully applied to many fields. For image processing applications, the DWT can produce non-redundant representations of an input image with greater performance than other wavelet methods. Further, the DWT provides better spatial and spectral localization of the image representation, capable of revealing smaller changes, trends, and breakdown points that classical methods often miss. However, the DWT has its own limitations and disadvantages, such as the lack of shift invariance: if the input signal or image is shifted, the wavelet coefficients change substantially rather than simply shifting. The DWT also lacks the ability to represent directional features. The Double-Density Dual-Tree Discrete Wavelet Transformation (D3TDWT) is a relatively new and enhanced version of the DWT with two scaling functions and four distinct wavelets, designed so that one pair of wavelets is offset from the other pair, with the first pair lying in between the second. In this paper, we propose a D3TDWT polarimetry analysis method to analyze Long Wave Infrared (LWIR) polarimetry imagery to discriminate objects such as people and vehicles from background clutter. The D3TDWT method can be applied to a wide range of applications such as change detection, shape extraction, target recognition, and simultaneous tracking and identification.
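The lack of shift invariance noted above is easy to demonstrate with the plain critically sampled DWT; a one-sample shift of a step signal changes the Haar detail coefficients completely (this illustrates the DWT limitation the D3TDWT is designed to mitigate, not the D3TDWT itself):

```python
import numpy as np

def haar_detail(x):
    # Detail coefficients of a one-level 1-D Haar DWT.
    x = np.asarray(x, float)
    return (x[0::2] - x[1::2]) / np.sqrt(2.0)

# A step edge, and the same edge moved by a single sample.
step = np.zeros(64); step[32:] = 1.0
shifted = np.zeros(64); shifted[33:] = 1.0

# The unshifted step falls on a pair boundary and yields zero detail
# energy; the one-sample shift yields a large coefficient instead of a
# shifted copy of the same coefficients.
energy = float(np.sum(haar_detail(step) ** 2))
energy_shifted = float(np.sum(haar_detail(shifted) ** 2))
```

The two nearly identical inputs produce detail energies of 0 and 0.5, exactly the kind of instability that redundant transforms such as the dual-tree family avoid.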


Proceedings of SPIE | 2014

Wavelet-based polarimetry analysis

Soundararajan Ezekiel; Kyle Harrity; Waleed Farag; Mark G. Alford; David D. Ferris; Erik Blasch

Wavelet transformation has become a cutting-edge and promising approach in the field of image and signal processing. A wavelet is a waveform of effectively limited duration that has an average value of zero. Wavelet analysis is done by breaking up the signal into shifted and scaled versions of the original signal. The key advantage of a wavelet is that it is capable of revealing smaller changes, trends, and breakdown points that are not revealed by other techniques such as Fourier analysis. The phenomenon of polarization has been studied for quite some time and is a very useful tool for target detection and tracking. Long Wave Infrared (LWIR) polarization is beneficial for detecting camouflaged objects and is a useful approach for identifying and distinguishing manmade objects from natural clutter. In addition, the Stokes polarization parameters, which are calculated from 0°, 45°, 90°, 135°, right-circular, and left-circular intensity measurements, provide spatial orientations of target features and suppress natural features. In this paper, we propose a wavelet-based polarimetry analysis (WPA) method to analyze LWIR polarimetry imagery to discriminate targets such as dismounts and vehicles from background clutter. The derived parameters can be used for image thresholding and segmentation. Experimental results show that wavelet-based polarimetry analysis is efficient and can be used in a wide range of applications such as change detection, shape extraction, target recognition, and feature-aided tracking.
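The linear Stokes parameters and the derived Degree of Linear Polarization (DoLP) can be sketched from the four linear polarizer measurements (the circular components are omitted here, and the function names are illustrative):

```python
import numpy as np

def stokes(i0, i45, i90, i135):
    # Linear Stokes parameters from four polarizer-angle intensity images.
    s0 = (i0 + i90 + i45 + i135) / 2.0   # total intensity (two estimates averaged)
    s1 = i0 - i90                        # horizontal vs. vertical preference
    s2 = i45 - i135                      # +45 deg vs. -45 deg preference
    return s0, s1, s2

def dolp(s0, s1, s2):
    # Degree of Linear Polarization in [0, 1]: typically high for smooth
    # manmade surfaces and low for natural clutter.
    return np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
```

Fully horizontally polarized light (i0=1, i45=0.5, i90=0, i135=0.5) gives DoLP = 1, while equal intensities at all four angles give DoLP = 0; thresholding the DoLP image is one way to separate manmade targets from background.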

Collaboration


Dive into Soundararajan Ezekiel's collaboration.

Top Co-Authors

Erik Blasch, Air Force Research Laboratory
Mark G. Alford, Air Force Research Laboratory
Adnan Bubalo, Air Force Research Laboratory
Michael Giansiracusa, Indiana University of Pennsylvania
Adam Lutz, Indiana University of Pennsylvania
Kyle Harrity, Indiana University of Pennsylvania
Maria Cornacchia, Air Force Research Laboratory
Neal Messer, Indiana University of Pennsylvania
David D. Ferris, Air Force Research Laboratory
Aaron Waibel, Indiana University of Pennsylvania