Publications


Featured research published by John A. Onofrey.


IEEE Transactions on Medical Imaging | 2015

Low-Dimensional Non-Rigid Image Registration Using Statistical Deformation Models From Semi-Supervised Training Data

John A. Onofrey; Xenophon Papademetris; Lawrence H. Staib

Accurate and robust image registration is a fundamental task in medical image analysis applications, and requires non-rigid transformations with a large number of degrees of freedom. Statistical deformation models (SDMs) attempt to learn the distribution of non-rigid deformations, and can be used both to reduce the transformation dimensionality and to constrain the registration process. However, high-dimensional SDMs are difficult to train because the number of available training samples is typically orders of magnitude smaller than the transformation dimensionality. In this paper, we utilize both a small set of annotated imaging data and a large set of unlabeled data to effectively learn an SDM of non-rigid transformations in a semi-supervised training (SST) framework. We demonstrate results applying this framework to inter-subject registration of skull-stripped, magnetic resonance (MR) brain images. Our approach makes use of 39 labeled MR datasets to create a set of supervised registrations, which we augment with a set of over 1200 unsupervised registrations using unlabeled MRIs. Through leave-one-out cross-validation, we show that SST of a non-rigid SDM results in a robust registration algorithm with significantly improved accuracy compared to standard, intensity-based registration, and does so with a 99% reduction in transformation dimensionality.
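
The dimensionality-reduction idea at the core of this paper (and several below) can be sketched in a few lines: run PCA over vectorized training displacement fields and keep only the leading modes. The following is a minimal, illustrative construction over synthetic data; all sizes and variable names are assumptions, not the authors' implementation.

```python
# Minimal sketch: learning a statistical deformation model (SDM) by PCA
# over vectorized displacement fields. Synthetic stand-in data only.
import numpy as np

rng = np.random.default_rng(0)

# Suppose each training registration yields a dense displacement field
# of shape (D, H, W, 3); we flatten each field into one row.
n_train, field_dim = 200, 16 * 16 * 16 * 3      # toy sizes
U = rng.standard_normal((n_train, field_dim))   # stand-in for real fields

mean_field = U.mean(axis=0)
U_centered = U - mean_field

# SVD gives the principal deformation modes; keep the K modes that
# explain most of the variance, reducing transformation dimensionality.
_, s, Vt = np.linalg.svd(U_centered, full_matrices=False)
explained = np.cumsum(s**2) / np.sum(s**2)
K = int(np.searchsorted(explained, 0.95)) + 1   # e.g., 95% of variance
modes = Vt[:K]                                  # (K, field_dim)

# Any deformation is now parameterized by K coefficients c:
#   u(c) = mean_field + c @ modes
c = rng.standard_normal(K)
u = mean_field + c @ modes
print(f"{K} modes replace {field_dim} raw parameters "
      f"({100 * (1 - K / field_dim):.1f}% reduction)")
```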


Medical Image Computing and Computer-Assisted Intervention (MICCAI) | 2013

Learning Nonrigid Deformations for Constrained Multi-modal Image Registration

John A. Onofrey; Lawrence H. Staib; Xenophon Papademetris

We present a new strategy to constrain nonrigid registrations of multi-modal images using a low-dimensional statistical deformation model, and test it by registering pre-operative and post-operative images from epilepsy patients. For patients who may undergo surgical resection for treatment, the current gold standard for identifying seizure regions involves craniotomy and implantation of intracranial electrodes. To guide surgical resection, surgeons utilize pre-op anatomical and functional MR images in conjunction with post-implantation MR and CT images. The electrode positions from the CT image need to be registered to the pre-op functional and structural MR images. The post-op MRI serves as an intermediate registration step between the pre-op MR and CT images. In this work, we propose to bypass the post-op MR registration step and directly register the pre-op MR and post-op CT images using a low-dimensional nonrigid registration that captures the gross deformation caused by electrode implantation. We learn the nonrigid deformation characteristics from a principal component analysis of a set of training deformations and demonstrate results using clinical data. We show that our technique significantly outperforms both standard rigid and nonrigid intensity-based registration methods in terms of mean and maximum registration error.
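
One way to read the "constrained registration" step is as optimization over the handful of PCA mode coefficients rather than a dense deformation field. Below is a hedged toy sketch in that spirit: synthetic 2D images, random deformation modes, and a sum-of-squared-differences cost standing in for the multi-modal similarity a real MR-CT registration would need.

```python
# Sketch: registration constrained to SDM modes, posed as optimization
# over K coefficients. All data are synthetic; SSD stands in for a
# multi-modal similarity metric.
import numpy as np
from scipy.ndimage import map_coordinates
from scipy.optimize import minimize

rng = np.random.default_rng(1)
shape = (32, 32)
fixed = rng.random(shape)

K = 5
grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape], indexing="ij"))
modes = rng.standard_normal((K, 2) + shape) * 0.5   # toy deformation modes

def warp(image, c):
    disp = np.tensordot(c, modes, axes=1)           # (2, H, W) displacement
    return map_coordinates(image, grid + disp, order=1, mode="nearest")

c_true = rng.standard_normal(K)
moving = warp(fixed, -c_true)     # crude inverse, good enough for a demo

def cost(c):
    return np.mean((warp(moving, c) - fixed) ** 2)  # SSD data term

res = minimize(cost, np.zeros(K), method="Powell")
print("recovered coefficients:", np.round(res.x, 2))
```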


International Symposium on Biomedical Imaging (ISBI) | 2015

Learning nonrigid deformations for constrained point-based registration for image-guided MR-TRUS prostate intervention

John A. Onofrey; Lawrence H. Staib; Saradwata Sarkar; Rajesh Venkataraman; Xenophon Papademetris

This paper presents and validates a low-dimensional nonrigid registration method for fusing magnetic resonance imaging (MRI) and trans-rectal ultrasound (TRUS) in image-guided prostate biopsy. Prostate cancer is one of the most prevalent forms of cancer and the second leading cause of cancer-related death in men in the United States. Conventional clinical practice uses TRUS to guide prostate biopsies when there is a suspicion of cancer. Pre-procedural MRI can reveal lesions and may be fused with intra-procedural TRUS imaging to provide patient-specific localization of lesions for targeting. The state-of-the-art MRI-TRUS nonrigid image fusion process relies upon semi-automated segmentation of the prostate in both the MRI and TRUS images. In this paper, we develop a fast, automated nonrigid registration approach to MRI-TRUS fusion based on a statistical deformation model of intra-procedural deformations derived from a clinical sample.
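
Since the registration here is point-based, the constrained fit reduces to solving for mode coefficients that move the MR prostate surface points onto the TRUS surface. A hedged toy version, assuming perfect correspondences and random stand-in points and modes:

```python
# Sketch: point-based registration constrained to SDM modes -- fit mode
# coefficients so warped MR surface points match the TRUS surface.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(8)
n_pts, K = 100, 8
pts = rng.uniform(-30, 30, (n_pts, 3))              # MR surface points (mm)
modes = rng.standard_normal((K, n_pts, 3)) * 0.3    # toy per-point modes

c_true = rng.standard_normal(K)
target = pts + np.tensordot(c_true, modes, axes=1)  # deformed TRUS surface

def residual(c):
    # Distance between warped MR points and TRUS target points.
    return (pts + np.tensordot(c, modes, axes=1) - target).ravel()

fit = least_squares(residual, np.zeros(K))
print("coefficient error:", np.round(fit.x - c_true, 3))
```

Because the model is linear in the coefficients, the least-squares fit is fast, which is what makes a low-dimensional point-based formulation attractive intra-procedurally.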


International MICCAI Workshop on Medical Computer Vision | 2013

Semi-supervised Learning of Nonrigid Deformations for Image Registration

John A. Onofrey; Lawrence H. Staib; Xenophon Papademetris

Large medical image databases have made vast amounts of neuroimaging data accessible and freely available to the research community. In this paper, we harness both the large quantity of unlabeled anatomical MR brain scans from the 1000 Functional Connectomes Project (FCP1000) database and the smaller, but richly annotated, brain images from the LONI Probabilistic Brain Atlas (LPBA40) database to learn a statistical deformation model (SDM) of nonrigid transformations in a semi-supervised learning (SSL) framework. We make use of the 39 labeled LPBA40 MR datasets to create a set of supervised registrations and augment these with a set of unsupervised registrations using 1247 unlabeled MRIs from the FCP1000. We show through leave-one-out cross-validation that SSL of a nonrigid SDM results in a registration algorithm with significantly improved accuracy compared to standard, intensity-based registration, and does so with a 99% reduction in transformation dimensionality.
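
The leave-one-out protocol mentioned here can be skeletonized as below. Note that this toy version scores how well the held-out deformation is reconstructed by the learned modes, a simpler proxy than the paper's registration-accuracy evaluation; the data and the retained mode count are random stand-ins and assumptions.

```python
# Sketch: leave-one-out evaluation of a learned SDM (reconstruction-error
# proxy; synthetic stand-in data).
import numpy as np

fields = np.random.default_rng(7).standard_normal((39, 3000))  # 39 labeled registrations

errors = []
for i in range(len(fields)):
    train = np.delete(fields, i, axis=0)       # build the SDM without subject i
    mean = train.mean(axis=0)
    modes = np.linalg.svd(train - mean, full_matrices=False)[2][:20]
    held_out = fields[i] - mean
    recon = (held_out @ modes.T) @ modes       # project onto the retained modes
    errors.append(np.linalg.norm(held_out - recon))

print(f"mean leave-one-out reconstruction error: {np.mean(errors):.2f}")
```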


Medical Image Analysis | 2017

Learning Non-rigid Deformations for Robust, Constrained Point-based Registration in Image-Guided MR-TRUS Prostate Intervention

John A. Onofrey; Lawrence H. Staib; Saradwata Sarkar; Rajesh Venkataraman; Cayce Nawaf; Preston Sprenkle; Xenophon Papademetris

Highlights:
- Model non-rigid deformations typically encountered when fusing pre-procedure MR and intra-procedure TRUS images for image-guided prostate biopsy.
- A large database of clinical prostate biopsy interventions is used to train a statistical deformation model (SDM).
- The SDM prevents the registration process from failing in the presence of prostate gland segmentation errors.
- Rigorous validation using synthetic data and clinical landmarks demonstrates accurate, reliable, robust, and consistent registration results.

Accurate and robust non-rigid registration of pre-procedure magnetic resonance (MR) imaging to intra-procedure trans-rectal ultrasound (TRUS) is critical for image-guided biopsies of prostate cancer. Prostate cancer is one of the most prevalent forms of cancer and the second leading cause of cancer-related death in men in the United States. TRUS-guided biopsy is the current clinical standard for prostate cancer diagnosis and assessment. State-of-the-art, clinical MR-TRUS image fusion relies upon semi-automated segmentations of the prostate in both the MR and the TRUS images to perform non-rigid surface-based registration of the gland. Segmentation of the prostate in TRUS imaging is itself a challenging task and prone to high variability. These segmentation errors can lead to poor registration and subsequently poor localization of biopsy targets, which may result in false-negative cancer detection. In this paper, we present a non-rigid surface registration approach to MR-TRUS fusion based on a statistical deformation model (SDM) of intra-procedural deformations derived from clinical training data. Synthetic validation experiments quantifying registration volume-of-interest overlaps of the PI-RADS parcellation standard, and tests using clinical landmark data, demonstrate that our use of an SDM for registration, with a median target registration error of 2.98 mm, is significantly more accurate than the current clinical method. Furthermore, we show that the low-dimensional SDM registration results are robust to segmentation errors that are not uncommon in clinical TRUS data.
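
The reported median target registration error (2.98 mm) is the kind of statistic computed from paired anatomical landmarks. A minimal sketch, with random points standing in for the clinical landmarks:

```python
# Sketch: target registration error (TRE) from paired landmarks.
# Landmarks here are random stand-ins; in the paper these would be
# corresponding anatomical points in the MR and registered TRUS images.
import numpy as np

rng = np.random.default_rng(2)
n_landmarks = 25
fixed_pts = rng.uniform(0, 60, size=(n_landmarks, 3))              # mm, in MR space
registered_pts = fixed_pts + rng.normal(0, 2.0, (n_landmarks, 3))  # after fusion

tre = np.linalg.norm(registered_pts - fixed_pts, axis=1)           # per-landmark error
print(f"median TRE: {np.median(tre):.2f} mm, max: {tre.max():.2f} mm")
```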


NeuroImage: Clinical | 2016

Learning intervention-induced deformations for non-rigid MR-CT registration and electrode localization in epilepsy patients.

John A. Onofrey; Lawrence H. Staib; Xenophon Papademetris

This paper describes a framework for learning a statistical model of non-rigid deformations induced by interventional procedures. We make use of this learned model to perform constrained non-rigid registration of pre-procedural and post-procedural imaging. We demonstrate results applying this framework to non-rigidly register post-surgical computed tomography (CT) brain images to pre-surgical magnetic resonance images (MRIs) of epilepsy patients who had intra-cranial electroencephalography electrodes surgically implanted. Deformations caused by this surgical procedure, imaging artifacts caused by the electrodes, and the use of multi-modal imaging data make non-rigid registration challenging. Our results show that the use of our proposed framework to constrain the non-rigid registration process results in significantly improved and more robust registration performance compared to using standard rigid and non-rigid registration methods.
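
A common way such a learned model "constrains" a registration is through a prior on the mode coefficients scaled by their training variances, so deformations that were rare in training become expensive. This is a generic formulation, shown only as a sketch; the eigenvalues and weight below are assumptions, not values from the paper.

```python
# Sketch: statistically regularized registration cost -- data term plus a
# Mahalanobis penalty on SDM mode coefficients (illustrative values).
import numpy as np

eigenvalues = np.array([9.0, 4.0, 1.0, 0.25])    # assumed PCA mode variances

def regularized_cost(c, data_term, lam=0.1):
    """Image-similarity term plus a penalty that grows for deformations
    unlike those seen in training."""
    prior = np.sum(c ** 2 / eigenvalues)
    return data_term + lam * prior

print(regularized_cost(np.array([1.0, 1.0, 1.0, 1.0]), data_term=2.5))
```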


International Symposium on Biomedical Imaging (ISBI) | 2013

Fast nonrigid image registration using statistical deformation models learned from richly-annotated data

John A. Onofrey; Lawrence H. Staib; Xenophon Papademetris

Nonrigid image registrations require a large number of degrees of freedom (DoFs) to capture intersubject anatomical variations. With such high DoFs and lack of anatomical correspondences, algorithms may not converge to the globally optimal solution. In this work, we propose a fast, two-step nonrigid registration procedure with low DoFs to accurately register brain images. Our method makes use of a statistical deformation model based upon a principal component analysis of deformations learned from a manually-segmented dataset to perform an initial registration. We then follow with a low DoF nonrigid transformation to complete the registration. Our results show the same registration accuracy in terms of volume of interest overlap as high DoF transformations, but with a 96% reduction in DoF and 98% decrease in computation time.
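
The quoted 96% DoF reduction is easy to make concrete with a toy count; the grid size and mode count below are illustrative assumptions, not the paper's configuration.

```python
# Back-of-the-envelope DoF comparison: a dense B-spline control grid
# versus a handful of SDM modes (assumed numbers, for scale only).
control_grid = (7, 7, 7)                    # assumed free-form deformation grid
dense_dofs = 3 * control_grid[0] * control_grid[1] * control_grid[2]  # x, y, z per point
sdm_modes = 40                              # assumed retained PCA modes

reduction = 100 * (1 - sdm_modes / dense_dofs)
print(f"dense: {dense_dofs} DoFs, SDM: {sdm_modes} DoFs -> {reduction:.0f}% fewer")
```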


International Workshop on Simulation and Synthesis in Medical Imaging | 2016

MRI-TRUS Image Synthesis with Application to Image-Guided Prostate Intervention

John A. Onofrey; Ilkay Oksuz; Saradwata Sarkar; Rajesh Venkataraman; Lawrence H. Staib; Xenophon Papademetris

Accurate and robust fusion of pre-procedure magnetic resonance imaging (MRI) to intra-procedure trans-rectal ultrasound (TRUS) imaging is necessary for image-guided prostate cancer biopsy procedures. The current clinical standard for image fusion relies on non-rigid surface-based registration between semi-automatically segmented prostate surfaces in both the MRI and TRUS. This surface-based registration method does not take advantage of internal anatomical prostate structures, which have the potential to provide useful information for image registration. However, non-rigid, multi-modal intensity-based MRI-TRUS registration is challenging due to the highly non-linear intensity relationship between MRI and TRUS. In this paper, we present preliminary work using image synthesis to cast this problem into a mono-modal registration task: we use a large database of over 100 clinical MRI-TRUS image pairs to learn a joint model of MR-TRUS appearance. Given an MRI, we use this learned joint appearance model to synthesize the patient's corresponding TRUS image, with which we could then perform mono-modal intensity-based registration. We present preliminary results of this approach.
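
A joint appearance model can be realized in many ways; one of the simplest, shown here purely as a hedged illustration, is nearest-neighbour lookup in a database of paired MR/TRUS patches. Random arrays stand in for the clinical image pairs, and this generic patch regressor is not necessarily the model the paper trains.

```python
# Sketch: MR -> TRUS patch synthesis via nearest-neighbour lookup in a
# paired-patch database (generic stand-in for a joint appearance model).
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(3)
n_pairs, patch_dim = 5000, 5 * 5 * 5             # 5^3 voxel patches, flattened

mr_patches = rng.random((n_pairs, patch_dim))    # from co-registered MR
trus_patches = rng.random((n_pairs, patch_dim))  # matching TRUS patches

index = NearestNeighbors(n_neighbors=5).fit(mr_patches)

def synthesize(mr_patch):
    """Predict a TRUS patch as the average of the TRUS patches whose
    paired MR patches look most like the query."""
    _, idx = index.kneighbors(mr_patch[None, :])
    return trus_patches[idx[0]].mean(axis=0)

fake_trus = synthesize(rng.random(patch_dim))
print(fake_trus.shape)   # (125,) -> one synthesized TRUS patch
```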


The Journal of Nuclear Medicine | 2018

Respiratory Motion Compensation for PET/CT with Motion Information Derived from Matched Attenuation Corrected Gated PET Data

Yihuan Lu; Kathryn Fontaine; Tim Mulnix; John A. Onofrey; Silin Ren; Vladimir Y. Panin; Judson Jones; Michael E. Casey; Robert Barnett; Peter L. Kench; Roger Fulton; Richard E. Carson; Chi Liu

Respiratory motion degrades the detection and quantification capabilities of PET/CT imaging. Moreover, mismatch between a fast helical CT image and a time-averaged PET image due to respiratory motion results in additional attenuation correction artifacts and inaccurate localization. Current motion compensation approaches typically have three limitations: the mismatch among respiration-gated PET images and the CT attenuation correction (CTAC) map can introduce artifacts in the gated PET reconstructions that can subsequently affect the accuracy of the motion estimation; sinogram-based correction approaches do not correct for intragate motion due to intracycle and intercycle breathing variations; and the mismatch between the PET motion compensation reference gate and the CT image can cause an additional CT-mismatch artifact. In this study, we established a motion correction framework to address these limitations.

Methods: In the proposed framework, the combined emission-transmission reconstruction algorithm was used for phase-matched gated PET reconstructions to facilitate the motion model building. An event-by-event nonrigid respiratory motion compensation method with correlations between internal organ motion and external respiratory signals was used to correct both intracycle and intercycle breathing variations. The PET reference gate was automatically determined by a newly proposed CT-matching algorithm. We applied the new framework to 13 human datasets with 3 different radiotracers and 323 lesions and compared its performance with CTAC and non-attenuation correction (NAC) approaches. Validation using 4-dimensional CT was performed for one lung cancer dataset.

Results: For the ten 18F-FDG studies, the proposed method outperformed (P < 0.006) both the CTAC and the NAC methods in terms of region-of-interest-based SUVmean, SUVmax, and SUV ratio improvements over no motion correction (SUVmean: 19.9% vs. 14.0% vs. 13.2%; SUVmax: 15.5% vs. 10.8% vs. 10.6%; SUV ratio: 24.1% vs. 17.6% vs. 16.2%, for the proposed, CTAC, and NAC methods, respectively). The proposed method increased SUV ratios over no motion correction for 94.4% of lesions, compared with 84.8% and 86.4% using the CTAC and NAC methods, respectively. For the two 18F-fluoropropyl-(+)-dihydrotetrabenazine studies, the proposed method reduced the CT-mismatch artifacts in the lower lung where the CTAC approach failed and maintained the quantification accuracy of bone marrow where the NAC approach failed. For the 18F-FMISO study, the proposed method outperformed both the CTAC and the NAC methods in terms of motion estimation accuracy at two lung lesion locations.

Conclusion: The proposed PET/CT respiratory event-by-event motion-correction framework, with motion information derived from matched attenuation-corrected PET data, provides image quality superior to that of the CTAC and NAC methods for multiple tracers.
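
The gated reconstructions that anchor this framework start from assigning list-mode events to respiratory gates. Below is a minimal amplitude-based gating sketch with a synthetic respiratory trace; the paper's event-by-event correction with internal-external motion correlation goes well beyond this.

```python
# Sketch: amplitude-based respiratory gating of PET list-mode events.
# A synthetic sinusoidal trace stands in for the external respiratory signal.
import numpy as np

rng = np.random.default_rng(4)
t = np.sort(rng.uniform(0, 60, size=100_000))   # event timestamps (s)
resp = np.sin(2 * np.pi * t / 4.0)              # ~4 s breathing cycle

n_gates = 6
edges = np.quantile(resp, np.linspace(0, 1, n_gates + 1))  # equal-count bins
gate = np.clip(np.searchsorted(edges, resp, side="right") - 1, 0, n_gates - 1)

for g in range(n_gates):
    print(f"gate {g}: {np.sum(gate == g)} events")
```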


International Conference on Information Processing in Medical Imaging (IPMI) | 2015

Segmenting the Brain Surface from CT Images with Artifacts Using Dictionary Learning for Non-rigid MR-CT Registration.

John A. Onofrey; Lawrence H. Staib; Xenophon Papademetris

This paper presents a dictionary learning-based method to segment the brain surface in post-surgical CT images of epilepsy patients following surgical implantation of electrodes. Using the electrodes identified in the post-implantation CT, surgeons require accurate registration with pre-implantation functional and structural MR imaging to guide surgical resection of epileptic tissue. In this work, we use a surface-based registration method to align the MR and CT brain surfaces. The key challenge here is not the registration, but rather the extraction of the cortical surface from the CT image, which includes missing parts of the skull and artifacts introduced by the electrodes. To segment the brain from these images, we propose learning a model of appearance that captures both the normal tissue and the artifacts found along this brain surface boundary. Using clinical data, we demonstrate that our method both accurately extracts the brain surface and better localizes electrodes than intensity-based rigid and non-rigid registration methods.
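
One hedged sketch of the dictionary-learning ingredient: learn a patch dictionary and use sparse-reconstruction error as an appearance score, so patches the "brain surface" dictionary reconstructs poorly can be flagged. The classification-by-reconstruction-error scheme below is a generic pattern, not the paper's exact pipeline, and the data are random stand-ins for CT patches.

```python
# Sketch: scoring CT patches against a learned patch dictionary via
# sparse-reconstruction error (synthetic stand-in data).
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(5)
train = rng.random((2000, 64))                 # 8x8 brain-boundary patches

dico = MiniBatchDictionaryLearning(
    n_components=50, alpha=1.0, batch_size=64, random_state=0
).fit(train)

test = rng.random((10, 64))                    # patches to classify
codes = dico.transform(test)                   # sparse codes
recon = codes @ dico.components_               # reconstruct from dictionary
errors = np.linalg.norm(test - recon, axis=1)  # reconstruction error per patch
print(np.round(errors, 3))
```

In practice one could train separate dictionaries for normal tissue and for electrode-artifact appearance and compare the two reconstruction errors per patch.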
