Network


Latest external collaborations at the country level. Click on a dot to dive into the details.

Hotspot


Dive into the research topics where Adnan Bubalo is active.

Publication


Featured research published by Adnan Bubalo.


International Conference on Information Fusion | 2008

Curvature nonlinearity measure and filter divergence detector for nonlinear tracking problems

Ruixin Niu; Pramod K. Varshney; Mark G. Alford; Adnan Bubalo; Eric K. Jones; Maria Scalzo

Several nonlinear filtering techniques are investigated for nonlinear tracking problems. Experimental results show that for a weakly nonlinear tracking problem, the extended Kalman filter and the unscented Kalman filter are good choices, while a particle filter should be used for problems with strong nonlinearity. To quantitatively determine the nonlinearity of a nonlinear tracking problem, we propose two types of measures: one is the differential geometry curvature measure and the other is based on the normalized innovation squared (NIS) of the Kalman filter. Simulation results show that both measures can effectively quantify the nonlinearity of the problem. The NIS is capable of detecting filter divergence online, while simulations indicate that the curvature measure is more suitable for quantifying the nonlinearity of a tracking problem.
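The NIS statistic the abstract refers to can be sketched in a few lines. This is a minimal illustration, not the paper's code; the function name and interface are mine.

```python
import numpy as np

def nis(innovation, S):
    # Normalized innovation squared: d^T S^-1 d, where d is the measurement
    # innovation and S its predicted covariance. For a consistent filter,
    # NIS is chi-square distributed with dim(d) degrees of freedom, so a
    # sustained run of large values flags filter divergence online.
    d = np.asarray(innovation, dtype=float).reshape(-1, 1)
    S = np.asarray(S, dtype=float)
    return float(d.T @ np.linalg.inv(S) @ d)
```

A scalar innovation of 2.0 with predicted variance 4.0, for example, gives a NIS of 1.0, right at its expected value for one degree of freedom.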


Proceedings of SPIE | 2011

Measures of Nonlinearity for Single Target Tracking Problems

Eric K. Jones; Maria Scalzo; Adnan Bubalo; Mark G. Alford; Benjamin Arthur

The tracking of objects and phenomena exhibiting nonlinear motion is a topic that has application in many areas ranging from military surveillance to weather forecasting. Observed nonlinearities can come not only from the nonlinear dynamic motion of the object, but also from nonlinearities in the measurement model. Many techniques have been developed that attempt to deal with this issue, including various filters such as the Extended Kalman Filter (EKF) and the Unscented Kalman Filter (UKF), both variants of the Kalman Filter (KF), as well as other filters such as the Particle Filter (PF). Determining the effectiveness of any of these techniques in nonlinear scenarios is not straightforward: testing needs to be performed against scenarios whose degree of nonlinearity is known if reliable assessments of nonlinear mitigation techniques are to be made. In this effort, three techniques were investigated regarding their ability to provide useful measures of nonlinearity for representative scenarios: the Parameter Effects Curvature (PEC), the Normalized Estimation Error Squared (NEES), and the Normalized Innovation Squared (NIS). Results indicated that the NEES was the most effective, although it does require truth values in its formulation.
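The NEES measure the study found most effective is straightforward to state; this is a minimal sketch under my own naming, not the paper's implementation.

```python
import numpy as np

def nees(x_true, x_est, P):
    # Normalized estimation error squared: e^T P^-1 e with e = x_true - x_est.
    # As the abstract notes, this requires ground-truth state values, unlike
    # the NIS, which only needs the filter's innovations.
    e = (np.asarray(x_true, float) - np.asarray(x_est, float)).reshape(-1, 1)
    P = np.asarray(P, float)
    return float(e.T @ np.linalg.inv(P) @ e)
```

For a consistent estimator, NEES averages to the state dimension, so windowed averages far above that value indicate the filter's covariance understates its actual error.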


Applied Imagery Pattern Recognition Workshop | 2013

Multi-scale decomposition tool for Content Based Image Retrieval

Soundararajan Ezekiel; Mark G. Alford; David D. Ferris; Eric K. Jones; Adnan Bubalo; Mark Gorniak; Erik Blasch

Content Based Image Retrieval (CBIR) is a technical area focused on answering the "Who, What, Where, and When" questions associated with imagery. A multi-scale feature extraction scheme based on wavelet and Contourlet transforms is proposed to reliably extract objects in images. First, we explore the Contourlet transformation in association with a Pulse Coupled Neural Network (PCNN), while the second technique is based on Rescaled Range (R/S) Analysis. Both methods provide flexible multi-resolution decomposition and directional feature extraction, and are suitable for image fusion. The Contourlet transformation is conceptually similar to a wavelet transformation, but simpler, faster, and less redundant. The R/S analysis uses the range R of cumulative deviations from the mean, divided by the standard deviation S, to calculate the scaling exponent, or Hurst exponent, H. Following the original work of Hurst, the exponent H provides a quantitative measure of the persistence of similarities in a signal. For images, if the information exhibits self-similarity and fractal correlation, then H gives a measure of the smoothness of the objects. The experimental results demonstrate that our proposed approach has promising applications for CBIR. We apply our multiscale decomposition approach to images with simple thresholding of wavelet/curvelet coefficients for visually sharper object outlines, salient extraction of object edges, and increased perceptual quality. We further explore these approaches to segment images, and the empirical results reported here are encouraging for determining who or what is in the image.
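The R/S estimate of the Hurst exponent described above can be sketched for a 1-D signal; window sizes and the averaging scheme here are illustrative choices, not the paper's.

```python
import numpy as np

def rescaled_range(x):
    # R/S statistic for one window: range of cumulative deviations from
    # the mean, divided by the standard deviation (assumes non-constant data).
    x = np.asarray(x, float)
    z = np.cumsum(x - x.mean())
    return (z.max() - z.min()) / x.std()

def hurst(x, sizes=(8, 16, 32, 64)):
    # Estimate H as the slope of log(R/S) versus log(window size),
    # averaging R/S over non-overlapping windows of each size.
    x = np.asarray(x, float)
    rs = []
    for n in sizes:
        chunks = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
        rs.append(np.mean([rescaled_range(c) for c in chunks]))
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return slope
```

White noise should yield H near 0.5 (small-sample bias pushes it slightly higher), while persistent, self-similar signals yield H closer to 1, which is how H serves as the smoothness measure the abstract describes.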


National Aerospace and Electronics Conference | 2014

No-reference objective blur metric based on the notion of wavelet gradient, magnitude edge width

Soundararajan Ezekiel; Kyle Harrity; Mark G. Alford; Erik Blasch; David D. Ferris; Adnan Bubalo

In the past decade, the number and popularity of digital cameras has increased many fold, increasing the demand for blur metrics and quality assessment techniques to evaluate digital images. There is still no widely accepted industry standard by which an image's blur content may be assessed, so it is imperative that better, more reliable, no-reference metrics be created to fill this gap. In this paper, a new wavelet-based scheme is proposed as a blur metric. This method does not rely on subjective testing other than for verification. After applying the discrete wavelet transform to an image, we use adaptive thresholding to identify edge regions in the horizontal, vertical, and diagonal sub-images. For each sub-image, we utilize the fact that detected edges can be separated into connected components; perceptually, blur is most apparent on edge regions. From these regions it is possible to compute properties of an edge such as its length and width, which can then be used to measure the area of a blurred region, which in turn yields the number of blurred pixels for each connected region. Ideally, an edge point is represented by only a single pixel, so if a found edge has a width greater than one, it likely contains blur. In order not to skew our results, a one-by-n rectangle is removed from the computed blur area. The areas are summed, which represents the total blur pixel count per image. Using a series of test images, we determined the blur pixel ratio as the number of blurred pixels to the total pixels in an image.
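The core idea of the blur pixel ratio can be illustrated with a heavily simplified stand-in: a plain horizontal gradient replaces the wavelet sub-images, and per-row runs of edge pixels replace connected components, but the width-minus-one accounting mirrors the one-by-n removal the abstract describes.

```python
import numpy as np

def blur_pixel_ratio(img, thresh=0.1):
    # Flag gradient pixels above a threshold as edge pixels, then scan each
    # row: an ideal edge is one pixel wide, so any wider run of edge pixels
    # contributes (width - 1) blurred pixels, mirroring the removal of the
    # one-by-n ideal-edge strip from the blur area.
    img = np.asarray(img, float)
    edge = np.abs(np.diff(img, axis=1)) > thresh
    blurred = 0
    for row in edge:
        run = 0
        for v in row:
            if v:
                run += 1
            else:
                blurred += max(run - 1, 0)
                run = 0
        blurred += max(run - 1, 0)
    return blurred / img.size
```

A perfectly sharp step edge scores 0, while a ramp that spreads the same transition over several pixels scores in proportion to the extra edge width.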


National Aerospace and Electronics Conference | 2014

No-reference blur metric using double-density and dual-tree two-dimensional wavelet transformation

Soundararajan Ezekiel; Kyle Harrity; Erik Blasch; Adnan Bubalo

Over the past decade, the digital camera has become widely available in many devices such as cell phones and computers. Therefore, evaluating the perceptual quality of digital images is an important and necessary requirement. To improve the quality of images captured with a camera, we must identify and measure the artifacts that cause blur within the images. Blur is mainly a spreading of pixel intensity and can come from multiple sources; the most common types are object motion, defocus, and camera motion. In the last two decades, the discrete wavelet transformation (DWT) has become a cutting-edge technology in the signal and image processing field for such applications as denoising. The disadvantage of the DWT is that it is not able to directly observe blur coefficients. In this paper, we propose a novel framework for a blur metric for an image. Our approach is based on the double-density dual-tree two-dimensional wavelet transformation (D3TDWT), which combines the properties of both the double-density DWT and the dual-tree DWT. We also utilize the gradient to evaluate blurring artifacts and measure image quality.
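A plain one-level Haar DWT, used here as a simple stand-in for the D3TDWT, already shows why wavelet detail bands make a usable blur cue: blurring drains energy from the detail sub-images. The score below is my own toy metric, not the paper's.

```python
import numpy as np

def haar_dwt2(img):
    # One-level 2-D Haar DWT: approximation (ll) plus horizontal (lh),
    # vertical (hl), and diagonal (hh) detail sub-images.
    img = np.asarray(img, float)
    a = (img[0::2, :] + img[1::2, :]) / 2.0
    d = (img[0::2, :] - img[1::2, :]) / 2.0
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def blur_score(img):
    # Crude no-reference score: fraction of total coefficient energy held
    # by the detail bands; blurrier images score lower.
    ll, lh, hl, hh = haar_dwt2(img)
    detail = np.sum(lh ** 2) + np.sum(hl ** 2) + np.sum(hh ** 2)
    return detail / (detail + np.sum(ll ** 2))
```

Smoothing an image with even a small averaging kernel measurably lowers this score, which is the behavior a gradient-on-wavelet blur metric exploits.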


Applied Imagery Pattern Recognition Workshop | 2014

Modified deconvolution using wavelet image fusion

Michael J. McLaughlin; En-Ui Lin; Erik Blasch; Adnan Bubalo; Maria Cornacchia; Mark G. Alford; Millicent Thomas

Image quality is affected by two predominant factors: noise and blur. Blur typically manifests itself as a smoothing of edges and can be described as the convolution of an image with an unknown blur kernel. The inverse of convolution is deconvolution, a difficult process even in the absence of noise, which aims to recover the true image. Removing blur from an image has two stages: identifying or approximating the blur kernel, then performing a deconvolution of the estimated kernel and blurred image. Blur removal is often an iterative process, with successive approximations of the kernel leading to optimal results. However, it is unlikely that a given image is blurred uniformly; in real-world situations most images are already blurred due to object motion or camera motion/defocus. Deconvolution, a computationally expensive process, will sharpen blurred regions, but can also degrade the regions previously unaffected by blur. To remedy these limitations, we propose a novel modified deconvolution using wavelet image fusion (moDuWIF) to remove blur from a no-reference image. First, we estimate the blur kernel, and then we perform a deconvolution. Finally, wavelet techniques are implemented to fuse the blurred and deblurred images. The details in the blurred image that are lost by deconvolution are recovered, and the sharpened features in the deblurred image are retained. The proposed technique is evaluated using several metrics and compared to standard approaches. Our results show that this approach has potential applications in many fields, including medical imaging, topography, and computer vision.
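The fusion stage can be sketched with a Haar wavelet as a stand-in for whatever wavelet family the paper uses: average the approximation bands of the blurred and deblurred images, keep the larger-magnitude detail coefficient from either, and invert. The fusion rule here is a common generic choice, assumed rather than taken from the paper.

```python
import numpy as np

def haar_dwt2(img):
    # One-level 2-D Haar transform: approximation plus three detail bands.
    img = np.asarray(img, float)
    a = (img[0::2, :] + img[1::2, :]) / 2.0
    d = (img[0::2, :] - img[1::2, :]) / 2.0
    return ((a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0,
            (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0)

def haar_idwt2(ll, lh, hl, hh):
    # Exact inverse of haar_dwt2.
    h, w = ll.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2] = ll + lh; a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh; d[:, 1::2] = hl - hh
    img = np.empty((2 * h, 2 * w))
    img[0::2, :] = a + d
    img[1::2, :] = a - d
    return img

def fuse(img_a, img_b):
    # Average the approximation bands; keep the larger-magnitude detail
    # coefficient from either image, so deconvolved sharpness is retained
    # where it exists and original detail survives where deconvolution lost it.
    A, B = haar_dwt2(img_a), haar_dwt2(img_b)
    ll = (A[0] + B[0]) / 2.0
    details = [np.where(np.abs(p) >= np.abs(q), p, q) for p, q in zip(A[1:], B[1:])]
    return haar_idwt2(ll, *details)
```

Fusing an image with itself reproduces it exactly, which is a quick sanity check that the transform pair round-trips.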


Applied Imagery Pattern Recognition Workshop | 2014

Multi-resolution deblurring

Michael J. McLaughlin; En-Ui Lin; Erik Blasch; Adnan Bubalo; Maria Cornacchia; Mark G. Alford; Millicent Thomas

As technology advances, blur remains an ever-present issue in the image processing field. A blurred image is mathematically expressed as the convolution of a blur function with a sharp image, plus noise. Removing blur from an image has been widely researched and is still important as new images are collected. Without a reference image, identifying, measuring, and removing blur from a given image is very challenging. Deblurring involves estimating the blur kernel to match various types of blur, including camera motion/defocus and object motion. Various blur kernels have been studied over many years, but the most common function is the Gaussian. Once the blur kernel (function) is estimated, a deconvolution is performed with the kernel and the blurred image. Many existing methods operate in this manner; however, they remove blur from the blurred region but also alter the un-blurred regions of the image. This pixel alteration occurs because the actual intensity values of pixels are easily distorted when used directly in the deblurring process. The method proposed in this paper uses multi-resolution analysis (MRA) techniques to separate blur, edge, and noise coefficients. Deconvolution with the estimated blur kernel is then performed on these coefficients instead of the actual pixel intensity values before reconstructing the image. Additional steps are taken to retain the quality of un-blurred regions of the blurred image. Experimental results on simulated and real data show that our approach achieves higher quality results than previous approaches on various blurry and noisy images, using several metrics including mutual information and structural similarity based metrics.
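The deconvolution step that both this paper and the previous one build on can be illustrated with frequency-domain Wiener deconvolution, a standard textbook stand-in (the papers do not specify this particular deconvolution).

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, nsr=1e-3):
    # Wiener deconvolution: F = H* G / (|H|^2 + NSR), where H is the kernel
    # spectrum, G the blurred image spectrum, and NSR an assumed
    # noise-to-signal ratio that regularizes near-zero kernel frequencies.
    H = np.fft.fft2(kernel, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = np.conj(H) * G / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(F))
```

With a known kernel and light noise, the reconstruction is substantially closer to the sharp image than the blurred input, though frequencies the kernel annihilates cannot be recovered, which is one reason deconvolution degrades regions it should leave alone.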


Proceedings of SPIE | 2009

Adaptive Filtering for Single Target Tracking

Maria Scalzo; Gregory Horvath; Eric K. Jones; Adnan Bubalo; Mark G. Alford; Ruixin Niu; Pramod K. Varshney

Many algorithms may be applied to solve the target tracking problem, including the Kalman Filter and different types of nonlinear filters, such as the Extended Kalman Filter (EKF), Unscented Kalman Filter (UKF) and Particle Filter (PF). This paper describes an intelligent algorithm that was developed to elegantly select the appropriate filtering technique depending on the problem and the scenario, based upon a sliding window of the Normalized Innovation Squared (NIS). This technique shows promise for the single target, single radar tracking problem domain. Future work is planned to expand the use of this technique to multiple targets and multiple sensors.
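A toy version of such a selection rule can be sketched as follows. The thresholds and the three-way split are illustrative assumptions, not the paper's algorithm; only the idea of comparing a windowed average NIS against its chi-square expectation comes from the abstract.

```python
import numpy as np

def select_filter(nis_window, dim, consistent=2.0, marginal=4.0):
    # Average the NIS over the sliding window and normalize by the innovation
    # dimension: a consistent filter's NIS is chi-square with `dim` degrees
    # of freedom, so its windowed average should hover near `dim`.
    avg = float(np.mean(nis_window)) / dim
    if avg <= consistent:
        return "KF/EKF"  # innovations look consistent: keep the cheap filter
    if avg <= marginal:
        return "UKF"     # mild mismatch: move to the unscented filter
    return "PF"          # strong mismatch: fall back to a particle filter
```

In use, the window would be refreshed each scan with the latest NIS value, so a transient maneuver that inflates the innovations triggers a switch to a heavier filter and the system relaxes back once the innovations settle.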


National Aerospace and Electronics Conference | 2014

Double-density dual-tree wavelet-based polarimetry analysis

Kyle Harrity; Soundararajan Ezekiel; Adnan Bubalo; Erik Blasch; Mark G. Alford

For the past two decades, the Discrete Wavelet Transformation (DWT) has been successfully applied to many fields. For image processing applications, the DWT can produce non-redundant representations of an input image with greater performance than other wavelet methods. Further, the DWT provides better spatial and spectral localization of the image representation, capable of revealing smaller changes, trends, and breakdown points that classical methods often miss. However, the DWT has its own limitations and disadvantages, such as a lack of shift invariance: if the input signal or image is shifted, the wavelet coefficients change markedly with that shift. The DWT also lacks the ability to represent directional cases. The Double-Density Dual-Tree Discrete Wavelet Transformation (D3TDWT) is a relatively new and enhanced version of the DWT with two scaling functions and four distinct wavelets, designed so that one pair of wavelets is offset from the other pair, with the first pair lying in between the second. In this paper, we propose a D3TDWT polarimetry analysis method to analyze Long Wave Infrared (LWIR) polarimetry imagery to discriminate objects such as people and vehicles from background clutter. The D3TDWT method can be applied to a wide range of applications such as change detection, shape extraction, target recognition, and simultaneous tracking and identification.
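The shift-variance limitation the abstract attributes to the ordinary DWT is easy to demonstrate with a one-level 1-D Haar transform (the D3TDWT itself is not implemented here):

```python
import numpy as np

def haar_details(x):
    # Detail coefficients of a one-level, decimated 1-D Haar DWT.
    x = np.asarray(x, float)
    return (x[0::2] - x[1::2]) / 2.0

# A one-sample shift of the same edge moves its energy to a different
# detail coefficient: the decimated DWT is not shift invariant.
edge = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0])
d0 = haar_details(edge)               # [0, -0.5, 0, 0]
d1 = haar_details(np.roll(edge, 1))   # [0, 0, 0, 0.5]
```

The same edge produces entirely different coefficient patterns depending on its alignment with the decimation grid, which is the behavior redundant transforms like the D3TDWT are designed to suppress.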


Proceedings of SPIE | 2016

A comparative study of multi-focus image fusion validation metrics

Michael Giansiracusa; Adam Lutz; Neal Messer; Soundararajan Ezekiel; Mark G. Alford; Erik Blasch; Adnan Bubalo; Michael Manno

Fusion of visual information from multiple sources is relevant for security, transportation, and safety applications. One way that image fusion can be particularly useful is when fusing imagery data from multiple levels of focus. Different focus levels create different visual qualities for different regions in the imagery, which can provide much more visual information to analysts when fused. Multi-focus image fusion would benefit a user through automation, which requires evaluating the fused images to determine whether they have properly fused the focused regions of each image. Many no-reference metrics, such as information-theory-based, image-feature-based, and structural-similarity-based metrics, have been developed to accomplish such comparisons. However, an accurate assessment of visual quality is hard to scale, which requires validating these metrics for different types of applications. To do this, human-perception-based validation methods have been developed, particularly using receiver operating characteristic (ROC) curves and the area under them (AUC). Our study uses these to analyze the effectiveness of no-reference image fusion metrics applied to multi-resolution fusion methods, in order to determine which should be used when dealing with multi-focus data. Preliminary results show that the Tsallis, SF, and spatial frequency metrics are consistent with the image quality and peak signal-to-noise ratio (PSNR).
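Two of the metrics named above, PSNR and spatial frequency (SF), have standard definitions that can be sketched directly; these are textbook formulations, not code from the paper.

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    # Peak signal-to-noise ratio in dB against a reference image;
    # infinite for identical images.
    mse = np.mean((np.asarray(ref, float) - np.asarray(img, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def spatial_frequency(img):
    # Spatial frequency: RMS of the row-wise and column-wise first
    # differences, a no-reference activity measure (higher = more detail).
    img = np.asarray(img, float)
    rf = np.mean(np.diff(img, axis=1) ** 2)
    cf = np.mean(np.diff(img, axis=0) ** 2)
    return np.sqrt(rf + cf)
```

Note the asymmetry the study turns on: PSNR needs a reference image, whereas SF is computed from the fused image alone, which is what makes no-reference metrics attractive for automated fusion pipelines.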

Collaboration


Dive into Adnan Bubalo's collaboration.

Top Co-Authors

Mark G. Alford (Air Force Research Laboratory)
Erik Blasch (Air Force Research Laboratory)
Soundararajan Ezekiel (Indiana University of Pennsylvania)
Eric K. Jones (Air Force Research Laboratory)
Maria Scalzo (Air Force Research Laboratory)
Maria Cornacchia (Air Force Research Laboratory)
Adam Lutz (Indiana University of Pennsylvania)
Michael J. McLaughlin (Indiana University of Pennsylvania)
Neal Messer (Indiana University of Pennsylvania)
Kyle Harrity (Indiana University of Pennsylvania)