
Publication


Featured research published by Mark G. Alford.


Proceedings of the IEEE | 1997

Distributed fusion architectures and algorithms for target tracking

Martin E. Liggins; Chee-Yee Chong; Ivan Kadar; Mark G. Alford; Vincent C. Vannicola; Stelios C. A. Thomopoulos

Modern surveillance systems often utilize multiple physically distributed sensors of different types to provide complementary and overlapping coverage on targets. In order to generate target tracks and estimates, the sensor data need to be fused. While a centralized processing approach is theoretically optimal, there are significant advantages in distributing the fusion operations over multiple processing nodes. This paper discusses architectures for distributed fusion, whereby each node processes the data from its own set of sensors and communicates with other nodes to improve on the estimates. The information graph is introduced as a way of modeling information flow in distributed fusion systems and for developing algorithms. Fusion for target tracking involves two main operations: estimation and association. Distributed estimation algorithms based on the information graph are presented for arbitrary fusion architectures and related to linear and nonlinear distributed estimation results. The distributed data association problem is discussed in terms of track-to-track association likelihoods. Distributed versions of two popular tracking approaches (joint probabilistic data association and multiple hypothesis tracking) are then presented, and examples of applications are given.
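A core idea behind information-graph fusion is that two nodes can combine estimates in information (inverse-covariance) form by adding their information and subtracting the information they share, so common data is not double-counted. A minimal sketch with hypothetical numbers, not the paper's full algorithm:

```python
import numpy as np

def fuse_information(y1, Y1, y2, Y2, yc, Yc):
    """Fuse two node estimates in information form, subtracting the
    common information identified via the information graph:
        Y_f = Y1 + Y2 - Yc,   y_f = y1 + y2 - yc,
    where Y = P^{-1} (information matrix) and y = P^{-1} x.
    """
    return y1 + y2 - yc, Y1 + Y2 - Yc

# Illustrative example: two nodes share a common prior (made-up numbers).
P_prior = np.eye(2) * 4.0
Yc = np.linalg.inv(P_prior)
yc = Yc @ np.zeros(2)

# Each node has refined the shared prior with its own sensor data.
Y1 = Yc + np.eye(2) * 1.0; y1 = Y1 @ np.array([1.0, 0.5])
Y2 = Yc + np.eye(2) * 2.0; y2 = Y2 @ np.array([0.8, 0.6])

yf, Yf = fuse_information(y1, Y1, y2, Y2, yc, Yc)
x_fused = np.linalg.solve(Yf, yf)  # fused state estimate
```

The fused information matrix equals the prior plus both nodes' independent sensor contributions, exactly once each; keeping only `Y1 + Y2` would count the shared prior twice.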


Enabling Technologies for Law Enforcement and Security | 1997

Concealed weapon detection: an image fusion approach

Mucahit K. Uner; Liane C. Ramac; Pramod K. Varshney; Mark G. Alford

This paper presents an approach to image fusion for concealed weapon detection (CWD) applications. In this work, we use image fusion to combine complementary image information from different sensors to obtain a single composite image with more detailed and complete information content. As a result of this processing, the new images are more useful for human perception and for automatic computer analysis tasks such as feature extraction and object recognition. In the fusion process, the images are first decomposed using the wavelet transform. Then, at each lower resolution, the images are fused using several feature selection algorithms. The final composite image is obtained by taking the inverse wavelet transform of the fused wavelet coefficients. This technique has been applied to real data obtained from IR sensors. Special emphasis is placed on situations when weapons may not be completely visible to the sensors. Fusion results that demonstrate the utility of our approach are presented.
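The decompose-fuse-invert pipeline can be sketched with a one-level Haar transform and a simple max-magnitude selection rule; the paper uses general wavelets and several feature-selection algorithms, so this is only an illustrative stand-in:

```python
import numpy as np

def haar2d(x):
    """One-level 2D Haar decomposition (image sides must be even)."""
    a = (x[:, ::2] + x[:, 1::2]) / 2   # row averages
    d = (x[:, ::2] - x[:, 1::2]) / 2   # row differences
    LL = (a[::2] + a[1::2]) / 2        # approximation
    LH = (a[::2] - a[1::2]) / 2        # horizontal detail
    HL = (d[::2] + d[1::2]) / 2        # vertical detail
    HH = (d[::2] - d[1::2]) / 2        # diagonal detail
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    h, w = LL.shape
    a = np.empty((2 * h, w)); d = np.empty((2 * h, w))
    a[::2], a[1::2] = LL + LH, LL - LH
    d[::2], d[1::2] = HL + HH, HL - HH
    x = np.empty((2 * h, 2 * w))
    x[:, ::2], x[:, 1::2] = a + d, a - d
    return x

def fuse(img1, img2):
    """Average the approximations, keep the larger-magnitude detail
    coefficient at each position, then invert the transform."""
    c1, c2 = haar2d(img1), haar2d(img2)
    pick = lambda u, v: np.where(np.abs(u) >= np.abs(v), u, v)
    fused = [(c1[0] + c2[0]) / 2] + [pick(u, v) for u, v in zip(c1[1:], c2[1:])]
    return ihaar2d(*fused)
```

Fusing an image with itself reproduces the image exactly, which is a quick sanity check that the transform pair is lossless.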


Proceedings of SPIE | 1998

Morphological filters and wavelet-based image fusion for concealed weapons detection

Liane C. Ramac; Mucahit K. Uner; Pramod K. Varshney; Mark G. Alford; David D. Ferris

When viewing a scene for an object recognition task, one imaging sensor may not provide all the information needed for recognition. One way to obtain more information is to use multiple sensors. These sensors should provide images that contain complementary information about the same scene. After preprocessing the source images, we use image fusion to combine the information from the different sensors. The images to be fused may have some details such as shadows, wrinkles, imaging artifacts, etc., that are not needed in the final fused image. One application of morphological filters is to remove objects of a given size range from the image. Therefore, we use morphological filters in conjunction with wavelets to improve the recognition performance after fusion. After morphological filtering, wavelets are used to construct multiresolution representations of the source images. Once the source images are decomposed, the details are combined to form a composite decomposed image. This method allows details at different levels to be combined independently so that important information is maintained in the final composite image. We are developing image fusion algorithms for concealed weapon detection (CWD) applications. Fusion is useful in situations where the sensor types have different properties, e.g., IR and MMW sensors. Fusing these types of images results in composite images which contain more complete information for CWD applications such as detection of concealed weapons on a person. In this paper we present our most recent results in this area.
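The size-selective removal the abstract describes is grayscale morphological opening: erosion followed by dilation suppresses bright features smaller than the structuring element while leaving larger regions intact. A minimal numpy sketch with a flat k x k element (the paper's actual filters and element shapes may differ):

```python
import numpy as np

def _window_stack(img, k):
    # All k*k shifted views of the edge-padded image, stacked on axis 0.
    p = k // 2
    pad = np.pad(img, p, mode='edge')
    H, W = img.shape
    return np.stack([pad[i:i + H, j:j + W] for i in range(k) for j in range(k)])

def erode(img, k=3):
    """Grayscale erosion: pointwise minimum over the k x k neighborhood."""
    return _window_stack(img, k).min(axis=0)

def dilate(img, k=3):
    """Grayscale dilation: pointwise maximum over the k x k neighborhood."""
    return _window_stack(img, k).max(axis=0)

def opening(img, k=3):
    """Opening (erode then dilate): removes bright features smaller than k x k."""
    return dilate(erode(img, k), k)
```

An isolated bright pixel disappears under a 3 x 3 opening, while the interior of a large bright region survives, which is exactly the size selectivity used to discard wrinkles and small artifacts before fusion.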


International Conference on Image Processing | 1999

Registration and fusion of infrared and millimeter wave images for concealed weapon detection

Pramod K. Varshney; Hua Mei Chen; Liane C. Ramac; Mucahit K. Uner; David D. Ferris; Mark G. Alford

We present an approach to automatically register and fuse IR and MMW images for concealed weapon detection. The distortion between the two images is assumed to be a rigid body transformation without rotation and we assume that the scale factor can be found from both the sensor parameters and the distance ratio of the object to the two sensors. Our registration procedure involves image segmentation, binary correlation and other image processing algorithms. Our fusion method involves a pyramidal image decomposition scheme based on the wavelet transform. Performance of the image registration and image fusion algorithm is illustrated through an example.
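A translation-only registration via binary correlation can be sketched as follows: cross-correlate the two silhouettes with FFTs and read the displacement off the correlation peak. This is a simplified stand-in for the paper's procedure, which also involves segmentation and scale estimation:

```python
import numpy as np

def estimate_shift(ref, mov):
    """Estimate the (row, col) translation of `mov` relative to `ref`
    from the peak of their circular cross-correlation (computed by FFT)."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))).real
    d = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    # Wrap peak coordinates into the symmetric range, then negate:
    # this correlation convention peaks at minus the true displacement.
    for axis, n in enumerate(ref.shape):
        if d[axis] > n // 2:
            d[axis] -= n
    return tuple(int(-s) for s in d)

# Hypothetical binary silhouettes: a block, and the same block shifted.
ref = np.zeros((32, 32)); ref[8:12, 10:14] = 1.0
mov = np.roll(ref, (3, 5), axis=(0, 1))
```

Applying the estimated shift to `ref` reproduces `mov`, confirming the recovered translation.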


International Conference on Image Processing | 1999

Image processing tools for the enhancement of concealed weapon detection

Mohamed Adel Slamani; Pramod K. Varshney; Raghuveer M. Rao; Mark G. Alford; David D. Ferris

A number of technologies are being developed for Concealed Weapon Detection (CWD). Use of appropriate processing techniques will be very important to the success of such technologies. This article describes digital image processing procedures currently being investigated to enhance the detection of weapons concealed underneath clothing.


International Conference on Information Fusion | 2008

Curvature nonlinearity measure and filter divergence detector for nonlinear tracking problems

Ruixin Niu; Pramod K. Varshney; Mark G. Alford; Adnan Bubalo; Eric K. Jones; Maria Scalzo

Several nonlinear filtering techniques are investigated for nonlinear tracking problems. Experimental results show that for a weakly nonlinear tracking problem, the extended Kalman filter and the unscented Kalman filter are good choices, while a particle filter should be used for problems with strong nonlinearity. To quantitatively determine the nonlinearity of a nonlinear tracking problem, we propose two types of measures: one is the differential geometry curvature measure and the other is based on the normalized innovation squared (NIS) of the Kalman filter. Simulation results show that both measures can effectively quantify the nonlinearity of the problem. The NIS is capable of detecting filter divergence online, while simulations indicate that the curvature measure is more suitable for quantifying the nonlinearity of a tracking problem.
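The NIS statistic and its use as an online divergence detector can be sketched in a few lines. For a consistent filter the NIS follows a chi-square distribution with the measurement dimension as degrees of freedom, so a windowed average above the chi-square quantile flags divergence (threshold value and window policy here are illustrative choices, not the paper's):

```python
import numpy as np

CHI2_95_DOF2 = 5.991  # 95% chi-square quantile for a 2-D measurement

def nis(innovation, S):
    """Normalized innovation squared: nu' S^{-1} nu,
    where nu is the innovation and S its covariance."""
    return float(innovation @ np.linalg.solve(S, innovation))

def diverged(innovations, S_list, threshold=CHI2_95_DOF2):
    """Flag divergence when the average NIS over a window of updates
    exceeds the chi-square consistency threshold."""
    avg = np.mean([nis(v, S) for v, S in zip(innovations, S_list)])
    return avg > threshold
```

Small innovations relative to their predicted covariance keep the average NIS near the measurement dimension; persistently large innovations push it past the threshold.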


Proceedings of SPIE | 2011

Measures of Nonlinearity for Single Target Tracking Problems

Eric K. Jones; Maria Scalzo; Adnan Bubalo; Mark G. Alford; Benjamin Arthur

The tracking of objects and phenomena exhibiting nonlinear motion is a topic that has application in many areas ranging from military surveillance to weather forecasting. Observed nonlinearities can come not only from the nonlinear dynamic motion of the object, but also from nonlinearities in the measurement model. Many techniques have been developed that attempt to deal with this issue, including the development of various types of filters, such as the Extended Kalman Filter (EKF) and the Unscented Kalman Filter (UKF), variants of the Kalman Filter (KF), as well as other filters such as the Particle Filter (PF). Determining the effectiveness of any of these techniques in nonlinear scenarios is not straightforward. Testing needs to be accomplished against scenarios whose degree of nonlinearity is known. This is necessary if reliable assessments of the effectiveness of nonlinear mitigation techniques are to be accomplished. In this effort, three techniques were investigated regarding their ability to provide useful measures of nonlinearity for representative scenarios. These techniques were the Parameter Effects Curvature (PEC), the Normalized Estimation Error Squared (NEES), and the Normalized Innovation Squared (NIS). Results indicated that the NEES was the most effective, although it does require truth values in its formulation.
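The NEES, which the study found most effective, requires the true state: it normalizes the estimation error by the filter's own covariance, and for a consistent filter its Monte Carlo average approaches the state dimension. A minimal sketch with illustrative numbers:

```python
import numpy as np

def nees(err, P):
    """Normalized estimation error squared: e' P^{-1} e,
    where e = x_true - x_estimate (truth is required)."""
    return float(err @ np.linalg.solve(P, err))

# For a consistent filter, NEES averaged over Monte Carlo runs approaches
# the state dimension (here 2). Illustrative check: draw errors from the
# filter's own covariance, so the filter is consistent by construction.
rng = np.random.default_rng(42)
P = np.array([[2.0, 0.5], [0.5, 1.0]])
errors = rng.multivariate_normal(np.zeros(2), P, size=2000)
avg_nees = np.mean([nees(e, P) for e in errors])  # should be close to 2
```

An average well above the state dimension indicates the filter is overconfident (covariance too small), which is how NEES exposes the effect of unmodeled nonlinearity.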


Applied Imagery Pattern Recognition Workshop | 2013

Multi-scale decomposition tool for Content Based Image Retrieval

Soundararajan Ezekiel; Mark G. Alford; David D. Ferris; Eric K. Jones; Adnan Bubalo; Mark Gorniak; Erik Blasch

Content Based Image Retrieval (CBIR) is a technical area focused on answering the “Who, What, Where and When” questions associated with imagery. A multi-scale feature extraction scheme based on wavelet and Contourlet transforms is proposed to reliably extract objects in images. First, we explore the Contourlet transform in association with a Pulse Coupled Neural Network (PCNN), while the second technique is based on Rescaled Range (R/S) Analysis. Both methods provide flexible multi-resolution decomposition and directional feature extraction, and are suitable for image fusion. The Contourlet transform is conceptually similar to a wavelet transform, but simpler, faster and less redundant. R/S analysis uses the range R of cumulative deviations from the mean, divided by the standard deviation S, to calculate a scaling exponent known as the Hurst exponent, H. Following the original work of Hurst, the exponent H provides a quantitative measure of the persistence of similarities in a signal. For images, if the information exhibits self-similarity and fractal correlation, then H gives a measure of the smoothness of the objects. The experimental results demonstrate that our proposed approach has promising applications for CBIR. We apply our multiscale decomposition approach to images with simple thresholding of wavelet/curvelet coefficients to obtain visually sharper object outlines, salient extraction of object edges, and increased perceptual quality. We further explore these approaches for image segmentation, and the empirical results reported here are encouraging for determining who or what is in the image.
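The R/S computation described above, with H estimated as the log-log slope of the mean rescaled range against window size, can be sketched for a 1-D signal (window sizes here are illustrative; the paper applies the idea to image data):

```python
import numpy as np

def hurst_rs(x, window_sizes=(8, 16, 32, 64, 128)):
    """Estimate the Hurst exponent H as the slope of log(R/S) vs log(n)."""
    log_n, log_rs = [], []
    for n in window_sizes:
        ratios = []
        for i in range(len(x) // n):
            chunk = x[i * n:(i + 1) * n]
            z = np.cumsum(chunk - chunk.mean())   # cumulative deviations
            r = z.max() - z.min()                 # range R
            s = chunk.std()                       # standard deviation S
            if s > 0:
                ratios.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(ratios)))
    return float(np.polyfit(log_n, log_rs, 1)[0])
```

White noise gives H near 0.5 (no persistence), while its cumulative sum, a Brownian-motion-like signal, yields a markedly higher estimate, reflecting persistent trends.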


Proceedings of SPIE, the International Society for Optical Engineering | 2006

Regularized multitarget particle filter for sensor management

A. El-Fallah; A. Zatezalo; R. Mahler; R. K. Mehra; Mark G. Alford

Sensor management in support of Level 1 data fusion (multisensor integration) or Level 2 data fusion (situation assessment) requires a computationally tractable multitarget filter. The theoretically optimal approach to this multitarget filtering is a suitable generalization of the recursive Bayes nonlinear filter. However, this optimal filter is so computationally challenging that it must usually be approximated. We report on an approximation of multitarget nonlinear filtering for sensor management that is based on a particle filter implementation of Stein-Winter probability hypothesis densities (PHDs). Our main focus is on the operational utility of the implementation, and on its computational efficiency and robustness for sensor management applications. We present a multitarget particle filter (PF) implementation of the PHD that includes clustering, regularization, and computational efficiency improvements. We present some open problems and suggest future developments. Sensor management demonstrations using a simulated multitarget scenario are presented.
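A defining property of the PHD is that its integral (here, the sum of particle weights) gives the expected number of targets. A bare single-step PHD measurement update on a 1-D particle approximation, with illustrative parameters and none of the paper's clustering or regularization machinery:

```python
import numpy as np

def phd_update(particles, weights, measurements, p_d=0.9, kappa=1e-3, sigma=1.0):
    """One PHD measurement update for a particle approximation.
    1-D positions, Gaussian likelihood, detection probability p_d,
    and constant clutter intensity kappa. Weights sum to the expected
    number of targets."""
    def g(z):  # measurement likelihood evaluated at every particle
        return np.exp(-0.5 * ((z - particles) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    new_w = (1.0 - p_d) * weights                      # missed-detection term
    for z in measurements:
        gz = g(z)
        new_w = new_w + p_d * gz * weights / (kappa + p_d * np.sum(gz * weights))
    return new_w

# Illustrative scenario: two well-separated targets, one detection each.
rng = np.random.default_rng(7)
particles = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(10.0, 1.0, 500)])
weights = np.full(1000, 2.0 / 1000)    # prior expected target count: 2
posterior = phd_update(particles, weights, measurements=[0.0, 10.0])
expected_targets = posterior.sum()     # stays near 2 when both are detected
```

With one detection per target and negligible clutter, the posterior weight sum remains close to the true target count, which is the quantity sensor management schemes built on the PHD typically exploit.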


Proceedings of SPIE | 2001

Scientific performance evaluation for distributed sensor management and adaptive data fusion

Adel El-Fallah; Ravi B. Ravichandran; Raman K. Mehra; John R. Hoffman; Tim Zajic; Chad A. Stelzig; Ronald P. S. Mahler; Mark G. Alford

For the last two years at this conference, we have described the implementation of a unified, scientific approach to performance measurement for data fusion algorithms based on finite-set statistics (FISST). FISST makes it possible to directly extend Shannon-type information metrics to multisource, multitarget problems. In previous papers we described the application of information Measures of Effectiveness (MoEs) to multisource-multitarget data fusion and to non-distributed sensor management. In this follow-on paper we show how to generalize this work to distributed sensor management and adaptive data fusion.

Collaboration

Top co-authors of Mark G. Alford:

- Adnan Bubalo (Air Force Research Laboratory)
- Erik Blasch (Air Force Research Laboratory)
- Soundararajan Ezekiel (Indiana University of Pennsylvania)
- David D. Ferris (Air Force Research Laboratory)
- Eric K. Jones (Air Force Research Laboratory)
- Maria Scalzo (Air Force Research Laboratory)
- Adam Lutz (Indiana University of Pennsylvania)
- Maria Cornacchia (Air Force Research Laboratory)
- Neal Messer (Indiana University of Pennsylvania)