Publication


Featured research published by Ikenna Odinaka.


IEEE Transactions on Information Forensics and Security | 2012

ECG Biometric Recognition: A Comparative Analysis

Ikenna Odinaka; Po-Hsiang Lai; Alan D. Kaplan; Joseph A. O'Sullivan; Erik J. Sirevaag; John W. Rohrbaugh

The electrocardiogram (ECG) is an emerging biometric modality that has seen about 13 years of development in peer-reviewed literature, and as such deserves a systematic review and discussion of the associated methods and findings. In this paper, we review most of the techniques that have been applied to the use of the electrocardiogram for biometric recognition. In particular, we categorize the methodologies based on the features and the classification schemes. Finally, a comparative analysis of the authentication performance of a few of the ECG biometric systems is presented, using our in-house database. The comparative study includes the cases where training and testing data come from the same and different sessions (days). The authentication results show that most of the algorithms that have been proposed for ECG-based biometrics perform well when the training and testing data come from the same session. However, when training and testing data come from different sessions, a performance degradation occurs. Multiple training sessions were incorporated to diminish the loss in performance. That notwithstanding, only a few of the proposed ECG recognition algorithms appear to be able to support performance improvement due to multiple training sessions. Only three of these algorithms produced equal error rates (EERs) in the single digits, including an EER of 5.5% using a method proposed by us.
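
As a rough illustration of the equal error rate metric reported above (not the paper's code), here is a minimal sketch that estimates the EER from genuine and impostor match scores; the score distributions below are hypothetical.

```python
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    """Estimate the EER: the operating point where false accept rate equals false reject rate."""
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    best_gap, eer = np.inf, 1.0
    for t in thresholds:
        far = np.mean(impostor_scores >= t)   # impostors wrongly accepted
        frr = np.mean(genuine_scores < t)     # genuine users wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

# Hypothetical similarity scores (higher means more likely a match)
rng = np.random.default_rng(0)
genuine = rng.normal(2.0, 1.0, 1000)
impostor = rng.normal(0.0, 1.0, 1000)
print(f"EER ~ {equal_error_rate(genuine, impostor):.3f}")
```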


International Workshop on Information Forensics and Security | 2010

ECG biometrics: A robust short-time frequency analysis

Ikenna Odinaka; Po-Hsiang Lai; Alan D. Kaplan; Joseph A. O'Sullivan; Erik J. Sirevaag; Sean D. Kristjansson; Amanda K. Sheffield; John W. Rohrbaugh

In this paper, we present the results of an analysis of the electrocardiogram (ECG) as a biometric using a novel short-time frequency method with robust feature selection. Our proposed method incorporates heartbeats from multiple days and fuses information. Single lead ECG signals from a comparatively large sample of 269 subjects that were sampled from the general population were collected on three separate occasions over a seven-month period. We studied the impact of long-term variability, health status, data fusion, the number of training and testing heartbeats, and database size on ECG biometric performance. The proposed method achieves 5.58% equal error rate (EER) in verification, 76.9% accuracy in rank-1 recognition, and 93.5% accuracy in rank-15 recognition when training and testing heartbeats are from different days. If training and testing heartbeats are collected on the same day, we achieve 0.37% EER and 99% recognition accuracy for decisions based on a single heartbeat.
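
As a hedged sketch of what a short-time frequency representation of a single-lead ECG segment might look like in code, the snippet below computes log-spectrogram features with SciPy; the sampling rate, window length, synthetic signal, and variance-based selection rule are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 500  # assumed sampling rate in Hz (not taken from the paper)
t = np.arange(0, 2.0, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)  # stand-in for a real single-lead ECG

# Short-time frequency representation: log power in overlapping windows
f, seg_times, Sxx = spectrogram(ecg, fs=fs, nperseg=128, noverlap=96)
log_power = np.log(Sxx + 1e-12)

# A simple "robust" feature vector could keep only bins whose variance across
# windows is low (i.e., stable over the segment); this selection rule is illustrative.
stable_bins = np.argsort(log_power.var(axis=1))[:20]
features = log_power[stable_bins].mean(axis=1)
print(features.shape)  # (20,)
```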


IEEE Transactions on Information Forensics and Security | 2015

Cardiovascular Biometrics: Combining Mechanical and Electrical Signals

Ikenna Odinaka; Joseph A. O'Sullivan; Erik J. Sirevaag; John W. Rohrbaugh

The electrical signal originating from the heart, the electrocardiogram (ECG), has been examined for its potential use as a biometric. Recent ECG studies have shown that an intersession authentication performance of less than 6% equal error rate (EER) can be achieved using training data from two days while testing with data from a third day. More recently, a mechanical measurement of cardiovascular activity, the laser Doppler vibrometry (LDV) signal, was proposed by our group as a biometric trait. The intersession authentication performance of the LDV biometric system is comparable to that of the ECG biometric system. Combining both the electrical and mechanical aspects of the cardiovascular system, an overall improvement in authentication performance can be attained. In particular, the multibiometric system achieves ~2% EER. Moreover, in the identification mode, with a testing database containing 200 individuals, the rank-1 accuracy improves from ~80% for each individual biometric system, to ~92% for the multibiometric system. Although there are implementation issues that would need to be resolved before this combined method could be applied in the field, this report establishes the basis and utility of the method in principle, and it identifies effective signal analysis approaches.
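
One common way to combine two modalities such as ECG and LDV is score-level fusion; the sum rule after z-normalization shown below is only an illustration under assumed match scores, not necessarily the fusion rule used in the paper.

```python
import numpy as np

def znorm(scores):
    """Normalize match scores so the two modalities are on a comparable scale."""
    return (scores - scores.mean()) / (scores.std() + 1e-12)

# Hypothetical per-trial match scores from the two modalities
ecg_scores = np.array([1.8, 0.2, 2.5, -0.3])
ldv_scores = np.array([0.9, -0.1, 1.7, 0.4])

# Sum-rule fusion: combine normalized scores, then threshold as usual
fused = znorm(ecg_scores) + znorm(ldv_scores)
accept = fused > 0.5  # hypothetical decision threshold
print(fused, accept)
```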


Optics Express | 2016

Snapshot fan beam coded aperture coherent scatter tomography

Mehadi Hassan; Joel A. Greenberg; Ikenna Odinaka; David J. Brady

We use coherently scattered X-rays to measure the molecular composition of an object throughout its volume. We image a planar slice of the object in a single snapshot by illuminating it with a fan beam and placing a coded aperture between the object and the detectors. We characterize the system and demonstrate a resolution of 13 mm in range and 2 mm in cross-range and a fractional momentum transfer resolution of 15%. In addition, we show that this technique allows a 100x speedup compared to previously-studied pencil beam systems using the same components. Finally, by scanning an object through the beam, we image the full 4-dimensional data cube (3 spatial and 1 material dimension) for complete volumetric molecular imaging.


Nuclear Science Symposium and Medical Imaging Conference | 2015

Spectrally grouped total variation reconstruction for scatter imaging using ADMM

Ikenna Odinaka; Yan Kaganovsky; Joel A. Greenberg; Mehadi Hassan; David G. Politte; Joseph A. O'Sullivan; Lawrence Carin; David J. Brady

We consider X-ray coherent scatter imaging, where the goal is to reconstruct momentum transfer profiles (spectral distributions) at each spatial location from multiplexed measurements of scatter. Each material is characterized by a unique momentum transfer profile (MTP) which can be used to discriminate between different materials. We propose an iterative image reconstruction algorithm based on a Poisson noise model that can account for photon-limited measurements as well as various second order statistics of the data. To improve image quality, previous approaches use edge-preserving regularizers to promote piecewise constancy of the image in the spatial domain while treating each spectral bin separately. Instead, we propose spectrally grouped regularization that promotes piecewise constant images along the spatial directions but also ensures that the MTPs of neighboring spatial bins are similar, if they contain the same material. We demonstrate that this group regularization results in improvement of both spectral and spatial image quality. We pursue an optimization transfer approach where convex decompositions are used to lift the problem such that all hyper-voxels can be updated in parallel and in closed-form. The group penalty introduces a challenge since it is not directly amenable to these decompositions. We use the alternating direction method of multipliers (ADMM) to replace the original problem with an equivalent sequence of sub-problems that are amenable to convex decompositions, leading to a highly parallel algorithm. We demonstrate the performance on real data.
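
A minimal sketch of the ingredient that distinguishes spectrally grouped regularization from bin-by-bin regularization: spatial differences are stacked across all spectral bins and shrunk jointly, so an edge is kept or suppressed for the entire momentum transfer profile at once. The toy sizes and the plain vector soft-threshold below are illustrative assumptions, not the paper's ADMM implementation.

```python
import numpy as np

def group_soft_threshold(d, tau):
    """Shrink a stack of spatial differences jointly across spectral bins.

    d has shape (num_bins, num_edges): one finite difference per spatial edge,
    per spectral bin. The l2 norm is taken over the spectral axis so that an
    edge is either retained or suppressed for the whole momentum transfer
    profile, rather than bin by bin.
    """
    norms = np.linalg.norm(d, axis=0, keepdims=True)       # (1, num_edges)
    scale = np.maximum(1.0 - tau / (norms + 1e-12), 0.0)   # vector soft-threshold
    return scale * d

# Toy example: 4 spectral bins, 5 spatial edges
rng = np.random.default_rng(1)
d = rng.normal(size=(4, 5))
z = group_soft_threshold(d, tau=1.0)
print(np.linalg.norm(d, axis=0))
print(np.linalg.norm(z, axis=0))  # small-norm edges are driven to zero jointly
```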


Allerton Conference on Communication, Control, and Computing | 2010

On estimating biometric capacity: An example based on LDV biometrics

Ikenna Odinaka; Po-Hsiang Lai; Alan D. Kaplan; Joseph A. O'Sullivan

It is known that when a pattern recognition system is not subject to memory constraints on pattern representations and sensory information is not compressed, the recognition rate is bounded by the mutual information between the memory and sensory representations. In this paper, we investigate the recognition rates of Laser Doppler Vibrometry (LDV) signals obtained from 285 individuals. We consider four cases corresponding to four different assumptions as to the structure of the data source and noisy measurements, and present the results of the mutual information bounds. In particular, we show the bounds of the recognition rates for each feature of the LDV signal.
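
For intuition only, here is a sketch of a per-feature mutual information bound under a scalar Gaussian source observed in additive Gaussian noise; this simple model and the variances below are assumptions for illustration, not the four measurement-structure cases analyzed in the paper.

```python
import numpy as np

def gaussian_mi_bits(signal_var, noise_var):
    """Mutual information (in bits) between memory and sensory representations
    for a scalar Gaussian source observed in additive Gaussian noise."""
    return 0.5 * np.log2(1.0 + signal_var / noise_var)

# Hypothetical per-feature signal and noise variances for an LDV feature set
signal_var = np.array([4.0, 2.5, 1.0, 0.3])
noise_var = np.array([1.0, 1.0, 1.0, 1.0])
per_feature_bits = gaussian_mi_bits(signal_var, noise_var)

# Under an independence assumption the bits add, and 2**I bounds the number
# of reliably distinguishable subjects.
total_bits = per_feature_bits.sum()
print(per_feature_bits, total_bits, 2 ** total_bits)
```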


Proceedings of SPIE | 2016

Coded aperture x-ray diffraction imaging with transmission computed tomography side-information

Ikenna Odinaka; Joel A. Greenberg; Yan Kaganovsky; Andrew D. Holmgren; Mehadi Hassan; David G. Politte; Joseph A. O'Sullivan; Lawrence Carin; David J. Brady

Coded aperture X-ray diffraction (coherent scatter spectral) imaging provides fast and dose-efficient measurements of the molecular structure of an object. The information provided is spatially-dependent and material-specific, and can be utilized in medical applications requiring material discrimination, such as tumor imaging. However, current coded aperture coherent scatter spectral imaging systems assume a uniformly or weakly attenuating object, and are plagued by image degradation due to non-uniform self-attenuation. We propose accounting for such non-uniformities in the self-attenuation by utilizing an X-ray computed tomography (CT) image (reconstructed attenuation map). In particular, we present an iterative algorithm for coherent scatter spectral image reconstruction, which incorporates the attenuation map at different stages, resulting in more accurate coherent scatter spectral images in comparison to their uncorrected counterparts. The algorithm is based on a spectrally grouped edge-preserving regularizer, where the neighborhood edge weights are determined by spatial distances and attenuation values.
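
A small sketch of how neighborhood edge weights could combine spatial distance and attenuation similarity; the Gaussian form and the widths below are assumptions for illustration, since the abstract does not specify the exact weighting.

```python
import numpy as np

def edge_weight(xi, xj, mu_i, mu_j, sigma_d=1.0, sigma_mu=0.05):
    """Weight for the regularizer edge between voxels i and j.

    Closer voxels with similar attenuation values get larger weights, so the
    spectrally grouped penalty smooths within a material but not across a
    material boundary. The Gaussian kernels and widths are illustrative.
    """
    d2 = np.sum((np.asarray(xi) - np.asarray(xj)) ** 2)   # squared spatial distance
    a2 = (mu_i - mu_j) ** 2                               # squared attenuation difference
    return np.exp(-d2 / (2 * sigma_d**2)) * np.exp(-a2 / (2 * sigma_mu**2))

# Neighboring voxels within one material (similar attenuation) vs. across a boundary
print(edge_weight((0, 0), (0, 1), 0.20, 0.21))  # large weight
print(edge_weight((0, 0), (0, 1), 0.20, 0.45))  # near zero
```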


Proceedings of SPIE | 2015

Alternating minimization algorithm with iteratively reweighted quadratic penalties for compressive transmission tomography

Yan Kaganovsky; Soysal Degirmenci; Shaobo Han; Ikenna Odinaka; David G. Politte; David J. Brady; Joseph A. O'Sullivan; Lawrence Carin

We propose an alternating minimization (AM) algorithm for estimating attenuation functions in x-ray transmission tomography using priors that promote sparsity in the pixel/voxel differences domain. As opposed to standard maximum-a-posteriori (MAP) estimation, we use the automatic relevance determination (ARD) framework. In the ARD approach, sparsity (or compressibility) is promoted by introducing latent variables which serve as the weights of quadratic penalties, with one weight for each pixel/voxel; these weights are then automatically learned from the data. This leads to an algorithm where the quadratic penalty is reweighted in order to effectively promote sparsity. In addition to the usual object estimate, ARD also provides measures of uncertainty (posterior variances) which are used at each iteration to automatically determine the trade-off between data fidelity and the prior, thus potentially circumventing the need for any tuning parameters. We apply the convex decomposition lemma in a novel way and derive a separable surrogate function that leads to a parallel algorithm. We propose an extension of branchless distance-driven forward/back-projections which allows us to considerably speed up the computations associated with the posterior variances. We also study the acceleration of the algorithm using ordered subsets.
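
To convey the flavor of iteratively reweighted quadratic penalties, here is a minimal 1D denoising sketch in which each quadratic difference penalty gets its own data-driven weight; it omits the tomographic forward model, the ARD posterior variances, and ordered subsets, so it is only an illustration of the reweighting idea, not the paper's algorithm.

```python
import numpy as np

def irls_smooth(y, lam=2.0, n_iter=20, eps=1e-3):
    """Iteratively reweighted quadratic smoothing of a 1D signal.

    Each difference x[i+1] - x[i] carries its own quadratic weight, re-estimated
    from the current estimate, so large jumps (edges) are penalized less than
    small fluctuations. Each iteration solves
        (I + lam * D^T W D) x = y
    where D takes adjacent differences and W holds the current weights.
    """
    x = y.copy()
    n = y.size
    for _ in range(n_iter):
        d = np.diff(x)
        w = 1.0 / (d**2 + eps)                 # reweight each quadratic penalty
        A = np.eye(n)
        for i in range(n - 1):
            A[i, i] += lam * w[i]
            A[i + 1, i + 1] += lam * w[i]
            A[i, i + 1] -= lam * w[i]
            A[i + 1, i] -= lam * w[i]
        x = np.linalg.solve(A, y)
    return x

# Noisy piecewise-constant test signal
rng = np.random.default_rng(2)
y = np.concatenate([np.zeros(50), np.ones(50)]) + 0.2 * rng.standard_normal(100)
print(np.round(irls_smooth(y)[[25, 75]], 2))  # roughly 0 and 1
```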


IEEE Transactions on Computational Imaging | 2017

Joint System and Algorithm Design for Computationally Efficient Fan Beam Coded Aperture X-Ray Coherent Scatter Imaging

Ikenna Odinaka; Joseph A. O'Sullivan; David G. Politte; Kenneth P. MacCabe; Yan Kaganovsky; Joel A. Greenberg; Manu N. Lakshmanan; Kalyani Krishnamurthy; Anuj J. Kapadia; Lawrence Carin; David J. Brady

In X-ray coherent scatter tomography, tomographic measurements of the forward scatter distribution are used to infer scatter densities within a volume. A radiopaque 2D pattern placed between the object and the detector array enables the disambiguation between different scatter events. The use of a fan beam source illumination to speed up data acquisition relative to a pencil beam presents computational challenges. To facilitate the use of iterative algorithms based on a penalized Poisson log-likelihood function, efficient computational implementations of the forward and backward models are needed. Our proposed implementation exploits physical symmetries and structural properties of the system and suggests a joint system-algorithm design, where the system design choices are influenced by computational considerations, and in turn lead to reduced reconstruction time. Computational-time speedups of approximately 146 and 32 are achieved in the computation of the forward and backward models, respectively. Results validating the forward model and reconstruction algorithm are presented on simulated analytic and Monte Carlo data.
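
A minimal sketch of an unpenalized Poisson log-likelihood reconstruction loop with a dense stand-in forward matrix; the actual system exploits symmetries and structure that this toy example does not, so the matrix, sizes, and step size below are purely illustrative assumptions.

```python
import numpy as np

def poisson_neg_log_likelihood(x, A, y, eps=1e-12):
    """Negative Poisson log-likelihood (up to a constant) for mean A @ x."""
    mean = A @ x + eps
    return np.sum(mean - y * np.log(mean))

def gradient(x, A, y, eps=1e-12):
    """One forward projection (A @ x) and one back projection (A.T @ ...) per evaluation."""
    mean = A @ x + eps
    return A.T @ (1.0 - y / mean)

# Toy system: 6 measurements, 4 scatter-density unknowns (a dense stand-in for
# the structured, symmetry-exploiting forward model described in the paper)
rng = np.random.default_rng(3)
A = rng.uniform(0.0, 1.0, size=(6, 4))
x_true = np.array([1.0, 2.0, 0.5, 1.5])
y = rng.poisson(A @ x_true).astype(float)

# A few projected-gradient steps as a minimal reconstruction loop
x = np.ones(4)
for _ in range(200):
    x = np.maximum(x - 0.05 * gradient(x, A, y), 0.0)  # enforce nonnegative densities
print(np.round(x, 2), poisson_neg_log_likelihood(x, A, y))
```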


Proceedings of SPIE | 2016

Multi-view coded aperture coherent scatter tomography

Andrew D. Holmgren; Ikenna Odinaka; Joel A. Greenberg; David J. Brady

We use coded apertures and multiple views to create a compressive coherent scatter computed tomography (CSCT) system. Compared with other CSCT systems, we reduce object dose and scan time. Previous work on coded aperture tomography resulted in a resolution anisotropy that caused poor or unusable momentum transfer resolution in certain cases. Complementary and multiple views resolve the resolution issues, while still providing the ability to perform snapshot tomography by adding sources and detectors.

Collaboration


Dive into Ikenna Odinaka's collaborations.

Top Co-Authors

Joseph A. O'Sullivan
Washington University in St. Louis

David G. Politte
Washington University in St. Louis

Alan D. Kaplan
Washington University in St. Louis

Erik J. Sirevaag
Washington University in St. Louis