
Publication


Featured research published by Joseph A. O'Sullivan.


IEEE Transactions on Signal Processing | 1992

Deblurring subject to nonnegativity constraints

Donald L. Snyder; Timothy J. Schulz; Joseph A. O'Sullivan

Csiszár's I-divergence is used as a discrepancy measure for deblurring subject to the constraint that all functions involved are nonnegative. An iterative algorithm is proposed for minimizing this measure. It is shown that every function in the sequence is nonnegative and that the sequence converges monotonically to a global minimum. Other properties of the algorithm are established, including lower bounds on the improvement in the I-divergence at each step of the algorithm and on the difference between the I-divergence at step k and at the limit point. A method for regularizing the solution is proposed.
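The multiplicative iteration described in the abstract can be sketched in a few lines. This is a minimal illustration assuming a discrete kernel with unit column sums, in the spirit of the Richardson-Lucy-type update the paper analyzes; the function names `deblur_idiv` and `idiv` are ours, not the paper's:

```python
import math

def deblur_idiv(g, h, n_iter=2000):
    """Multiplicative update that minimizes Csiszar's I-divergence I(g || Hf)
    over nonnegative f (a Richardson-Lucy-type iteration).
    g: observed nonnegative data (length m)
    h: m-by-n kernel with unit column sums (sum over y of h[y][x] == 1)"""
    m, n = len(h), len(h[0])
    f = [sum(g) / n] * n                           # strictly positive start
    for _ in range(n_iter):
        Hf = [sum(h[y][x] * f[x] for x in range(n)) for y in range(m)]
        ratio = [g[y] / Hf[y] if Hf[y] > 0 else 0.0 for y in range(m)]
        # f stays nonnegative: each entry is multiplied by a nonnegative factor
        f = [f[x] * sum(h[y][x] * ratio[y] for y in range(m)) for x in range(n)]
    return f

def idiv(g, q):
    """I(g || q) = sum_y g log(g/q) - g + q; terms with g[y] == 0 contribute q[y]."""
    return sum((gy * math.log(gy / qy) if gy > 0 else 0.0) - gy + qy
               for gy, qy in zip(g, q))
```

On exact, noise-free data the iterates drive the I-divergence toward zero while remaining nonnegative throughout, which is the monotone-convergence behavior the paper proves.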


IEEE Transactions on Information Forensics and Security | 2012

ECG Biometric Recognition: A Comparative Analysis

Ikenna Odinaka; Po-Hsiang Lai; Alan D. Kaplan; Joseph A. O'Sullivan; Erik J. Sirevaag; John W. Rohrbaugh

The electrocardiogram (ECG) is an emerging biometric modality that has seen about 13 years of development in peer-reviewed literature, and as such deserves a systematic review and discussion of the associated methods and findings. In this paper, we review most of the techniques that have been applied to ECG-based biometric recognition. In particular, we categorize the methodologies based on the features and the classification schemes. Finally, a comparative analysis of the authentication performance of a few of the ECG biometric systems is presented, using our in-house database. The comparative study includes the cases where training and testing data come from the same and different sessions (days). The authentication results show that most of the algorithms that have been proposed for ECG-based biometrics perform well when the training and testing data come from the same session. However, when training and testing data come from different sessions, a performance degradation occurs. Multiple training sessions were incorporated to diminish the loss in performance. That notwithstanding, only a few of the proposed ECG recognition algorithms appear to be able to support performance improvement due to multiple training sessions. Only three of these algorithms produced equal error rates (EERs) in the single digits, including an EER of 5.5% using a method proposed by us.
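The equal error rate quoted above is the operating point where the false-accept rate equals the false-reject rate. A minimal threshold-sweep computation (our own sketch for illustration, not the paper's evaluation code):

```python
def equal_error_rate(genuine, impostor):
    """Sweep thresholds to find where the false-accept rate (impostor
    scores accepted) equals the false-reject rate (genuine scores rejected).
    Convention: a higher score means more likely genuine."""
    best_gap, eer = 2.0, 0.0
    for t in sorted(set(genuine) | set(impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)
        frr = sum(s < t for s in genuine) / len(genuine)
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer
```

Perfectly separated score distributions give an EER of 0; overlap between genuine and impostor scores (the cross-session case in the paper) pushes the EER up.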


IEEE Transactions on Aerospace and Electronic Systems | 2000

Automatic target recognition using sequences of high resolution radar range-profiles

Steven P. Jacobs; Joseph A. O'Sullivan

In this paper, we address the problem of joint tracking and recognition of a target using a sequence of high resolution radar (HRR) range-profiles. The likelihood function for the scene configuration combines a dynamics-based prior on the sequence of target orientations with a likelihood for range-profiles given the target orientation. A deterministic model and a conditionally Gaussian model for range-profiles are introduced, and the likelihood functions under each model are compared. Simulations are presented demonstrating recognition of mobile aircraft and ground targets, and results showing performance of the algorithm are given in terms of the expected angular estimation error and the rate of correct recognition.
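Joint tracking and recognition of this kind can be illustrated with a toy Viterbi search over orientation sequences: a Markov prior on orientation combined with a per-profile likelihood, maximized per target class. The deterministic-profile model with Gaussian noise, the two-orientation library, and all names below are illustrative assumptions, not the paper's implementation:

```python
import math

def target_loglik(profiles, library, trans_logp, sigma=1.0):
    """Best log score over orientation tracks (Viterbi) for one target:
    a Markov prior on orientation plus a Gaussian profile likelihood.
    profiles: observed range-profiles; library[a]: mean profile at orientation a;
    trans_logp[a][b]: log prior of moving from orientation a to b."""
    def obs_logp(r, mean):
        return -sum((ri - mi) ** 2 for ri, mi in zip(r, mean)) / (2 * sigma ** 2)
    A = len(library)
    score = [obs_logp(profiles[0], library[a]) for a in range(A)]
    for r in profiles[1:]:
        score = [max(score[a] + trans_logp[a][b] for a in range(A))
                 + obs_logp(r, library[b]) for b in range(A)]
    return max(score)

def recognize(profiles, libraries, trans_logp, sigma=1.0):
    """Pick the target class whose best orientation track explains the sequence."""
    return max(libraries,
               key=lambda t: target_loglik(profiles, libraries[t], trans_logp, sigma))
```

The dynamics-based prior rewards orientation tracks that change smoothly, which is what couples the tracking and recognition problems in the paper's formulation.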


Medical Physics | 2006

Properties of preprocessed sinogram data in x-ray computed tomography

Bruce R. Whiting; Parinaz Massoumzadeh; Orville A. Earl; Joseph A. O'Sullivan; Donald L. Snyder; Jeffrey F. Williamson

The accurate determination of x-ray signal properties is important to several computed tomography (CT) research and development areas, notably for statistical reconstruction algorithms and dose-reduction simulation. The most commonly used model of CT signal formation, assuming monoenergetic x-ray sources with quantum counting detectors obeying simple Poisson statistics, does not reflect the actual physics of CT acquisition. This paper describes a more accurate model, taking into account the energy-integrating detection process, nonuniform flux profiles, and data-conditioning processes. Methods are developed to experimentally measure and theoretically calculate statistical distributions, as well as techniques to analyze CT signal properties. Results indicate the limitations of current models and suggest improvements for the description of CT signal properties.
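The gap between the simple Poisson counting model and an energy-integrating detector can be seen in a toy compound-Poisson simulation: the recorded signal is a Poisson number of photons times random deposited energies, so a broad spectrum inflates the variance-to-mean ratio above the monoenergetic value. The spectrum, flux level, and all parameters below are invented for illustration, not measured CT quantities:

```python
import math
import random

def simulate_detector(lam, energies, weights, n=20000, seed=1):
    """Energy-integrating detector as a compound Poisson process:
    photon count ~ Poisson(lam); each photon deposits an energy drawn
    from a discrete spectrum; the recorded signal is the summed energy."""
    rng = random.Random(seed)
    signals = []
    L = math.exp(-lam)
    for _ in range(n):
        k, p = 0, 1.0                    # Knuth's Poisson sampler
        while True:
            p *= rng.random()
            if p <= L:
                break
            k += 1
        signals.append(sum(rng.choices(energies, weights)[0] for _ in range(k)))
    return signals

def mean_var(xs):
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / len(xs)
```

For a compound Poisson signal, Var/Mean equals E[E^2]/E[E], which exceeds the monoenergetic value whenever the spectrum has spread; this is one way the simple Poisson counting model misstates CT signal statistics.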


IEEE Transactions on Medical Imaging | 2007

Alternating Minimization Algorithms for Transmission Tomography

Joseph A. O'Sullivan; Jasenka Benac

A family of alternating minimization algorithms for finding maximum-likelihood estimates of attenuation functions in transmission X-ray tomography is described. The model from which the algorithms are derived includes polyenergetic photon spectra, background events, and nonideal point spread functions. The maximum-likelihood image reconstruction problem is reformulated as a double minimization of the I-divergence. A novel application of the convex decomposition lemma results in an alternating minimization algorithm that monotonically decreases the objective function. Each step of the minimization is in closed form. The family of algorithms includes variations that use ordered subset techniques for increasing the speed of convergence. Simulations demonstrate the ability to correct the cupping artifact due to beam hardening and the ability to reduce streaking artifacts that arise from beam hardening and background events.
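A heavily simplified version of such an update can be sketched for the monoenergetic, background-free case: predict counts by Beer's law, then apply a closed-form multiplicative-in-the-log correction to every pixel simultaneously. The step-size bound and the exact update form here are our reconstruction in the spirit of the paper, not its polyenergetic algorithm:

```python
import math

def am_transmission(y, d, A, n_iter=500):
    """Monoenergetic, background-free alternating-minimization sketch.
    y[i]: measured counts on ray i; d[i]: blank-scan counts on ray i;
    A[i][j]: intersection length of ray i with pixel j."""
    m, n = len(A), len(A[0])
    Z = max(sum(row) for row in A)   # conservative step-size bound (max ray sum)
    mu = [0.0] * n
    for _ in range(n_iter):
        # predicted counts under Beer's law
        q = [d[i] * math.exp(-sum(A[i][j] * mu[j] for j in range(n)))
             for i in range(m)]
        # closed-form simultaneous update; enforce nonnegative attenuation
        mu = [max(0.0, mu[j] + math.log(
                  sum(A[i][j] * q[i] for i in range(m)) /
                  sum(A[i][j] * y[i] for i in range(m))) / Z)
              for j in range(n)]
    return mu
```

The update leaves mu unchanged exactly when predicted and measured counts agree along every backprojected ray, so the true attenuation map is a fixed point for noise-free data.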


IEEE Transactions on Information Theory | 1998

Information-theoretic image formation

Joseph A. O'Sullivan; Richard E. Blahut; Donald L. Snyder

The emergent role of information theory in image formation is surveyed. Unlike the subject of information-theoretic communication theory, information-theoretic imaging is far from a mature subject. The possible role of information theory in problems of image formation is to provide a rigorous framework for defining the imaging problem, for defining measures of optimality used to form estimates of images, for addressing issues associated with the development of algorithms based on these optimality criteria, and for quantifying the quality of the approximations. The definition of the imaging problem consists of an appropriate model for the data and an appropriate model for the reproduction space, which is the space within which image estimates take values. Each problem statement has an associated optimality criterion that measures the overall quality of an estimate. The optimality criteria include maximizing the likelihood function and minimizing mean squared error for stochastic problems, and minimizing squared error and discrimination for deterministic problems. The development of algorithms is closely tied to the definition of the imaging problem and the associated optimality criterion. Algorithms with a strong information-theoretic motivation are obtained by the method of expectation maximization. Related alternating minimization algorithms are discussed. In quantifying the quality of approximations, global and local measures are discussed. Global measures include the (mean) squared error and discrimination between an estimate and the truth, and probability of error for recognition or hypothesis testing problems. Local measures include Fisher information.


IEEE Transactions on Aerospace and Electronic Systems | 2001

SAR ATR performance using a conditionally Gaussian model

Joseph A. O'Sullivan; Michael D. DeVore; Vikas S. Kedia; Michael I. Miller

A family of conditionally Gaussian signal models for synthetic aperture radar (SAR) imagery is presented, extending a related class of models developed for high resolution radar range profiles. This signal model is robust with respect to the variations of the complex-valued radar signals due to the coherent combination of returns from scatterers as those scatterers move through relative distances on the order of a wavelength of the transmitted signal (target speckle). The target type and the relative orientations of the sensor, target, and ground plane parameterize the conditionally Gaussian model. Based upon this model, algorithms to jointly estimate both the target type and pose are developed. Performance results for both target pose estimation and target recognition are presented for publicly released data from the MSTAR program.
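Under a conditionally Gaussian model, each complex pixel is zero-mean Gaussian with a variance set by target type and pose, so joint estimation reduces to comparing log-likelihoods over a library of variance images. A toy sketch, with invented variance images and names:

```python
import math

def cg_loglik(pixels, var_image):
    """Log-likelihood of complex SAR pixels under a conditionally Gaussian
    model: each pixel ~ CN(0, v) with v taken from the variance image."""
    return -sum(math.log(math.pi * v) + abs(z) ** 2 / v
                for z, v in zip(pixels, var_image))

def classify(pixels, var_images):
    """var_images: dict (target, pose) -> variance image; returns the
    maximum-likelihood (target, pose) pair."""
    return max(var_images, key=lambda k: cg_loglik(pixels, var_images[k]))
```

Modeling only the variance, not the complex amplitude itself, is what makes the model robust to speckle: the coherent phase of each scatterer return is integrated out.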


IEEE Transactions on Magnetics | 2003

Iterative detection and decoding for separable two-dimensional intersymbol interference

Yunxiang Wu; Joseph A. O'Sullivan; Naveen Singla; Ronald S. Indeck

We introduce two detection methods for uncoded two-dimensional (2-D) intersymbol interference (ISI) channels. The detection methods are suitable for a special case of 2-D ISI channels where the channel response is separable. In this case, the 2-D ISI is treated as the concatenation of two one-dimensional ISI channels. The first method uses equalization to reduce the ISI in one of the two dimensions followed by use of a maximum a posteriori (MAP) detector for the ISI in the other dimension. The second method employs modified MAP algorithms to reduce the ISI in each dimension. The implementation complexity of the two methods grows exponentially in the ISI length in either the row or column dimension. We develop two iterative decoding schemes based on these detection methods and low-density parity-check codes as error correction codes. Simulation results show that the bit-error-rate performance loss caused by the 2-D ISI for the separable channel response considered is less than 1 dB over a channel without ISI. This motivates equalizing a general 2-D ISI channel response to a nearby separable matrix.
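Separability is what allows the 2-D ISI to be treated as two cascaded 1-D channels: when the 2-D response is an outer product of a row response and a column response, filtering every row and then every column reproduces the full 2-D convolution. A small pure-Python check of this identity (our own sketch):

```python
def conv1d(seq, h):
    """Full 1-D linear convolution."""
    out = [0.0] * (len(seq) + len(h) - 1)
    for i, s in enumerate(seq):
        for j, hj in enumerate(h):
            out[i + j] += s * hj
    return out

def channel_2d(page, h2d):
    """Direct 2-D ISI: every written bit smears into its neighbours."""
    R, C, K, L = len(page), len(page[0]), len(h2d), len(h2d[0])
    y = [[0.0] * (C + L - 1) for _ in range(R + K - 1)]
    for m in range(R):
        for n in range(C):
            for k in range(K):
                for l in range(L):
                    y[m + k][n + l] += h2d[k][l] * page[m][n]
    return y

def channel_separable(page, h_row, h_col):
    """Separable response h2d[k][l] = h_col[k] * h_row[l]:
    filter every row with h_row, then every column with h_col."""
    rows = [conv1d(r, h_row) for r in page]
    cols = [conv1d(c, h_col) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]
```

This factorization is also why the detector complexity grows exponentially only in the 1-D ISI length of each dimension rather than in the full 2-D support.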


information processing in sensor networks | 2004

Co-Grid: an efficient coverage maintenance protocol for distributed sensor networks

Guoliang Xing; Chenyang Lu; Robert Pless; Joseph A. O'Sullivan

Wireless sensor networks often face the critical challenge of sustaining long-term operation on limited battery energy. Coverage maintenance protocols can effectively prolong network lifetime by maintaining sufficient sensing coverage over a region using a small number of active nodes while scheduling the others to sleep. We present a novel distributed coverage maintenance protocol called the coordinating grid (Co-Grid). In contrast to existing coverage maintenance protocols, which are based on simpler detection models, Co-Grid adopts a distributed detection model based on data fusion that is more consistent with many distributed sensing applications. Co-Grid organizes the network into coordinating fusion groups located on overlapping virtual grids. Through coordination among neighboring fusion groups, Co-Grid achieves a number of active nodes comparable to that of a centralized algorithm, while reducing the network (re-)configuration time by orders of magnitude. Co-Grid is especially suitable for large and energy-constrained sensor networks that require quick (re-)configuration in response to node failures and environmental changes. We validate our claims by both theoretical analysis and simulations.
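A data-fusion detection model of the kind described can be illustrated as value fusion: a point is covered when the summed attenuated signal energy seen by the active nodes clears a detection threshold, so several weak nearby nodes can jointly cover a point that none covers alone. The decay law, source energy, and threshold below are hypothetical illustration parameters, not Co-Grid's actual model:

```python
def fused_measure(point, nodes, source_energy=100.0, decay=2.0):
    """Value fusion: active nodes report the signal energy a hypothetical
    source at `point` would produce; energies attenuate with distance
    (1 / (1 + d**decay)) and are summed across nodes."""
    x, y = point
    return sum(source_energy / (1.0 + ((x - nx) ** 2 + (y - ny) ** 2) ** (decay / 2))
               for nx, ny in nodes)

def is_covered(point, nodes, threshold):
    """The point is covered when the fused measure clears the threshold."""
    return fused_measure(point, nodes) >= threshold
```

This is what distinguishes fusion-based coverage from simple disk models: coverage is a property of a group of nodes, which is why the protocol schedules nodes in fusion groups rather than individually.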


Medical Physics | 2002

Prospects for quantitative computed tomography imaging in the presence of foreign metal bodies using statistical image reconstruction.

Jeffrey F. Williamson; Bruce R. Whiting; Jasenka Benac; Ryan Murphy; G. James Blaine; Joseph A. O'Sullivan; David G. Politte; Donald L. Snyder

X-ray computed tomography (CT) images of patients bearing metal intracavitary applicators or other metal foreign objects exhibit severe artifacts including streaks and aliasing. We have systematically evaluated via computer simulations the impact of scattered radiation, the polyenergetic spectrum, and measurement noise on the performance of three reconstruction algorithms: conventional filtered backprojection (FBP), deterministic iterative deblurring, and a new iterative algorithm, alternating minimization (AM), based on a CT detector model that includes noise, scatter, and polyenergetic spectra. Contrary to the dominant view of the literature, FBP streaking artifacts are due mostly to mismatches between FBP's simplified model of CT detector response and the physical process of signal acquisition. Artifacts on AM images are significantly mitigated, as this algorithm substantially reduces detector-model mismatches. However, metal artifacts are reduced to acceptable levels only when prior knowledge of the metal object in the patient, including its pose, shape, and attenuation map, is used to constrain AM's iterations. AM image reconstruction, in combination with object-constrained CT to estimate the pose of metal objects in the patient, is a promising approach for effectively mitigating metal artifacts and making quantitative estimation of tissue attenuation coefficients a clinical possibility.

Collaboration

Top co-authors of Joseph A. O'Sullivan:

David G. Politte, Washington University in St. Louis
Donald L. Snyder, Washington University in St. Louis
Bruce R. Whiting, Washington University in St. Louis
Jeffrey F. Williamson, Virginia Commonwealth University
Yuan-Chuan Tai, Washington University in St. Louis
Heyu Wu, Washington University in St. Louis
Po-Hsiang Lai, Washington University in St. Louis
Debashish Pal, Washington University in St. Louis