Publications


Featured research published by Jonathan Chappelow.


Journal of Magnetic Resonance Imaging | 2012

Central gland and peripheral zone prostate tumors have significantly different quantitative imaging signatures on 3 tesla endorectal, in vivo T2-weighted MR imagery

Satish Viswanath; Nicholas B. Bloch; Jonathan Chappelow; Robert Toth; Neil M. Rofsky; Elizabeth M. Genega; Robert E. Lenkinski; Anant Madabhushi

To identify and evaluate textural quantitative imaging signatures (QISes) for tumors occurring within the central gland (CG) and peripheral zone (PZ) of the prostate, respectively, as seen on in vivo 3 Tesla (T) endorectal T2‐weighted (T2w) MRI.


Computerized Medical Imaging and Graphics | 2011

Determining histology-MRI slice correspondences for defining MRI-based disease signatures of prostate cancer

Gaoyu Xiao; B. Nicolas Bloch; Jonathan Chappelow; Elizabeth M. Genega; Neil M. Rofsky; Robert E. Lenkinski; John E. Tomaszewski; Michael Feldman; Mark A. Rosen; Anant Madabhushi

Mapping the spatial disease extent in a certain anatomical organ/tissue from histology images to radiological images is important in defining the disease signature in the radiological images. One such scenario is in the context of men with prostate cancer who have had pre-operative magnetic resonance imaging (MRI) before radical prostatectomy. For these cases, the prostate cancer extent from ex vivo whole-mount histology is to be mapped to in vivo MRI. The need for determining radiology-image-based disease signatures is important for (a) training radiologist residents and (b) for constructing an MRI-based computer aided diagnosis (CAD) system for disease detection in vivo. However, a prerequisite for this data mapping is the determination of slice correspondences (i.e. indices of each pair of corresponding image slices) between histological and magnetic resonance images. The explicit determination of such slice correspondences is especially indispensable when an accurate 3D reconstruction of the histological volume cannot be achieved because of (a) the limited number of tissue slices with unknown inter-slice spacing, and (b) obvious histological image artifacts (tissue loss or distortion). In clinical practice, the histology-MRI slice correspondences are often determined visually by experienced radiologists and pathologists working in unison, but this procedure is laborious and time-consuming. We present an iterative method to automatically determine slice correspondence between images from histology and MRI via a group-wise comparison scheme, followed by 2D and 3D registration. The image slice correspondences obtained using our method were compared with the ground truth correspondences determined via consensus of multiple experts over a total of 23 patient studies. In most instances, the results of our method were very close to the results obtained via visual inspection by these experts.
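The core idea of a group-wise slice-correspondence search (score every candidate offset of the histology stack against the MRI stack and keep the best) can be sketched as follows. This is a minimal illustration, not the authors' method: it assumes equal inter-slice spacing and uses normalized cross-correlation as a stand-in similarity measure.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized 2D slices."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def best_slice_offset(histo_stack, mri_stack):
    """Group-wise search: try every offset of the (shorter) histology stack
    against the MRI stack and keep the offset maximizing the summed
    pairwise slice similarity."""
    n_h, n_m = len(histo_stack), len(mri_stack)
    best, best_off = -np.inf, 0
    for off in range(n_m - n_h + 1):
        score = sum(ncc(h, mri_stack[off + i]) for i, h in enumerate(histo_stack))
        if score > best:
            best, best_off = score, off
    return best_off
```

In practice the similarity measure and the handling of unknown inter-slice spacing are where the real difficulty lies; the sketch only conveys the exhaustive group-wise scoring step.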


Proceedings of SPIE | 2009

Integrating Structural and Functional Imaging for Computer Assisted Detection of Prostate Cancer on Multi-Protocol In Vivo 3 Tesla MRI

Satish Viswanath; B. Nicolas Bloch; Mark A. Rosen; Jonathan Chappelow; Robert Toth; Neil M. Rofsky; Robert E. Lenkinski; Elizabeth M. Genega; Arjun Kalyanpur; Anant Madabhushi

Screening and detection of prostate cancer (CaP) currently lacks an image-based protocol which is reflected in the high false negative rates currently associated with blinded sextant biopsies. Multi-protocol magnetic resonance imaging (MRI) offers high resolution functional and structural data about internal body structures (such as the prostate). In this paper we present a novel comprehensive computer-aided scheme for CaP detection from high resolution in vivo multi-protocol MRI by integrating functional and structural information obtained via dynamic-contrast enhanced (DCE) and T2-weighted (T2-w) MRI, respectively. Our scheme is fully-automated and comprises (a) prostate segmentation, (b) multimodal image registration, and (c) data representation and multi-classifier modules for information fusion. Following prostate boundary segmentation via an improved active shape model, the DCE/T2-w protocols and the T2-w/ex vivo histological prostatectomy specimens are brought into alignment via a deformable, multi-attribute registration scheme. T2-w/histology alignment allows for the mapping of true CaP extent onto the in vivo MRI, which is used for training and evaluation of a multi-protocol MRI CaP classifier. The meta-classifier used is a random forest constructed by bagging multiple decision tree classifiers, each trained individually on T2-w structural, textural and DCE functional attributes. 3-fold classifier cross validation was performed using a set of 18 images derived from 6 patient datasets on a per-pixel basis. Our results show that CaP detection obtained by integrating T2-w structural and textural data with DCE functional data (area under the ROC curve of 0.815) significantly outperforms detection based on either of the individual modalities (0.704 (T2-w) and 0.682 (DCE)).
It was also found that a meta-classifier trained directly on integrated T2-w and DCE data (data-level integration) significantly outperformed a decision-level meta-classifier, constructed by combining the classifier outputs from the individual T2-w and DCE channels.
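The distinction between data-level and decision-level fusion can be made concrete with a toy sketch. The paper uses a random forest; here a simple nearest-centroid classifier stands in for it, purely to show where the two fusion strategies differ (concatenate features before training vs. average per-channel posteriors after training). All names and data shapes below are illustrative assumptions.

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Per-class centroids; a toy stand-in for the paper's random forest."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_centroid_prob(model, X):
    """Soft posteriors from negative distances via a softmax-style normalization."""
    d = np.stack([np.linalg.norm(X - mu, axis=1) for mu in model.values()], axis=1)
    e = np.exp(-d)
    return e / e.sum(axis=1, keepdims=True)

def data_level(Xt2, Xdce, y, Xt2_te, Xdce_te):
    """Data-level integration: concatenate T2-w and DCE feature vectors,
    then train a single classifier on the joint representation."""
    m = nearest_centroid_fit(np.hstack([Xt2, Xdce]), y)
    return nearest_centroid_prob(m, np.hstack([Xt2_te, Xdce_te]))

def decision_level(Xt2, Xdce, y, Xt2_te, Xdce_te):
    """Decision-level integration: train one classifier per channel,
    then combine (here: average) their posterior outputs."""
    p1 = nearest_centroid_prob(nearest_centroid_fit(Xt2, y), Xt2_te)
    p2 = nearest_centroid_prob(nearest_centroid_fit(Xdce, y), Xdce_te)
    return (p1 + p2) / 2
```

The paper's finding is that the data-level variant, which lets the classifier model cross-channel interactions, outperformed the decision-level variant.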


Medical Image Computing and Computer Assisted Intervention | 2008

A Comprehensive Segmentation, Registration, and Cancer Detection Scheme on 3 Tesla In Vivo Prostate DCE-MRI

Satish Viswanath; B. Nicolas Bloch; Elizabeth M. Genega; Neil M. Rofsky; Robert E. Lenkinski; Jonathan Chappelow; Robert Toth; Anant Madabhushi

Recently, high resolution 3 Tesla (T) Dynamic Contrast-Enhanced MRI (DCE-MRI) of the prostate has emerged as a promising modality for detecting prostate cancer (CaP). Computer-aided diagnosis (CAD) schemes for DCE-MRI data have thus far been primarily developed for breast cancer and typically involve model fitting of dynamic intensity changes as a function of contrast agent uptake by the lesion. Comparatively there is relatively little work in developing CAD schemes for prostate DCE-MRI. In this paper, we present a novel unsupervised detection scheme for CaP from 3 T DCE-MRI which comprises 3 distinct steps. First, a multi-attribute active shape model is used to automatically segment the prostate boundary from 3 T in vivo MR imagery. A robust multimodal registration scheme is then used to non-linearly align corresponding whole mount histological and DCE-MRI sections from prostatectomy specimens to determine the spatial extent of CaP. Non-linear dimensionality reduction schemes such as locally linear embedding (LLE) have been previously shown to be useful in projecting such high dimensional biomedical data into a lower dimensional subspace while preserving the non-linear geometry of the data manifold. DCE-MRI data is embedded via LLE and then classified via unsupervised consensus clustering to identify distinct classes. Quantitative evaluation on 21 histology-MRI slice pairs against registered CaP ground truth estimates yielded a maximum CaP detection accuracy of 77.20% while the popular three time point (3TP) scheme yielded an accuracy of 67.37%.
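The consensus clustering step can be illustrated with a co-association sketch: run a base clusterer many times, record how often each pair of samples lands in the same cluster, and cluster the resulting consensus matrix. This is a generic illustration only; the paper's actual scheme operates on LLE embeddings of DCE time series, and k-means here is merely a convenient base clusterer.

```python
import numpy as np

def kmeans_labels(X, k, rng, iters=25):
    """Minimal k-means (random init from the data) returning labels."""
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def consensus_cluster(X, k=2, runs=15, seed=0):
    """Average co-association over repeated clustering runs, then cluster the
    rows of the consensus matrix to obtain the final stable grouping."""
    rng = np.random.default_rng(seed)
    n = len(X)
    co = np.zeros((n, n))
    for _ in range(runs):
        lab = kmeans_labels(X, k, rng)
        co += (lab[:, None] == lab[None, :]).astype(float)
    co /= runs
    return kmeans_labels(co, k, rng), co
```

The appeal of the consensus matrix is that pairs consistently grouped together across unstable individual runs accumulate high co-association, making the final partition less sensitive to any single initialization.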


Medical Physics | 2012

Concurrent segmentation of the prostate on MRI and CT via linked statistical shape models for radiotherapy planning.

Najeeb Chowdhury; Robert Toth; Jonathan Chappelow; Sung Kim; Sabin Motwani; Salman Punekar; Haibo Lin; Stefan Both; Neha Vapiwala; Stephen M. Hahn; Anant Madabhushi

PURPOSE: Prostate gland segmentation is a critical step in prostate radiotherapy planning, where dose plans are typically formulated on CT. Pretreatment MRI is now beginning to be acquired at several medical centers. Delineation of the prostate on MRI is acknowledged as being significantly simpler to perform, compared to delineation on CT. In this work, the authors present a novel framework for building a linked statistical shape model (LSSM), a statistical shape model (SSM) that links the shape variation of a structure of interest (SOI) across multiple imaging modalities. This framework is particularly relevant in scenarios where accurate boundary delineations of the SOI on one of the modalities may not be readily available, or may be difficult to obtain, for training a SSM. In this work the authors apply the LSSM in the context of multimodal prostate segmentation for radiotherapy planning, where the prostate is concurrently segmented on MRI and CT.

METHODS: The framework comprises a number of logically connected steps. The first step utilizes multimodal registration of MRI and CT to map 2D boundary delineations of the prostate from MRI onto corresponding CT images, for a set of training studies. Hence, the scheme obviates the need for expert delineations of the gland on CT for explicitly constructing a SSM for prostate segmentation on CT. The delineations of the prostate gland on MRI and CT allow for 3D reconstruction of the prostate shape, which facilitates the building of the LSSM. In order to perform concurrent prostate MRI and CT segmentation using the LSSM, the authors employ a region-based level set approach in which the evolving prostate boundary is deformed to simultaneously fit the MRI and CT images, in which voxels are classified as either part of the prostate or outside the prostate. The classification is facilitated by using a combination of MRI-CT probabilistic spatial atlases and a random forest classifier, driven by gradient and Haar features.

RESULTS: The authors acquire a total of 20 MRI-CT patient studies and use the leave-one-out strategy to train and evaluate four different LSSMs. First, a fusion-based LSSM (fLSSM) is built using expert ground truth delineations of the prostate on MRI alone, where the ground truth for the gland on CT is obtained via coregistration of the corresponding MRI and CT slices. The authors compare the fLSSM against another LSSM (xLSSM), where expert delineations of the gland on both MRI and CT are employed in the model building; the xLSSM represents the idealized LSSM. The authors also compare the fLSSM against an exclusively CT-based SSM (ctSSM), built from expert delineations of the gland on CT alone. In addition, two LSSMs trained using trainee delineations (tLSSM) on CT are compared with the fLSSM. The results indicate that the xLSSM, tLSSMs, and the fLSSM perform equivalently, all of them outperforming the ctSSM.

CONCLUSIONS: The fLSSM provides an accurate alternative to SSMs that require careful expert delineations of the SOI that may be difficult or laborious to obtain. Additionally, the fLSSM has the added benefit of providing concurrent segmentations of the SOI on multiple imaging modalities.
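The SSM core of this framework is the classic principal-component analysis of aligned landmark vectors. The sketch below shows only that core step; the "linked" aspect would, presumably, concatenate each patient's MRI and CT shape vectors before the PCA so that modes capture correlated cross-modality variation (an assumption on our part, not the paper's exact construction).

```python
import numpy as np

def build_ssm(shapes):
    """shapes: (n_shapes, n_coords) array of aligned, flattened landmark
    vectors. Returns the mean shape, the principal modes of variation
    (rows of Vt), and the per-mode variances."""
    mean = shapes.mean(axis=0)
    X = shapes - mean
    # eigen-decomposition of the shape covariance via SVD of centered data
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    eigvals = (S ** 2) / max(len(shapes) - 1, 1)
    return mean, Vt, eigvals

def reconstruct(mean, modes, b):
    """Generate a shape from mode weights b: x = mean + sum_i b_i * mode_i."""
    return mean + b @ modes[: len(b)]
```

Segmentation then amounts to searching over the low-dimensional weight vector b (constrained by the mode variances) rather than over free-form boundaries.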


Computerized Medical Imaging and Graphics | 2011

HistoStitcher©: An interactive program for accurate and rapid reconstruction of digitized whole histological sections from tissue fragments

Jonathan Chappelow; John E. Tomaszewski; Michael Feldman; Natalie Shih; Anant Madabhushi

We present an interactive program called HistoStitcher© for accurate and rapid reassembly of histology fragments into a pseudo-whole digitized histological section. HistoStitcher© provides both an intuitive graphical interface to assist the operator in performing the stitch of adjacent histology fragments by selecting pairs of anatomical landmarks, and a set of computational routines for determining and applying an optimal linear transformation to generate the stitched image. Reconstruction of whole histological sections from images of slides containing smaller fragments is required in applications where preparation of whole sections of large tissue specimens is not feasible or efficient, and such whole mounts are required to facilitate (a) disease annotation and (b) image registration with radiological images. Unlike manual reassembly of image fragments in a general purpose image editing program (such as Photoshop), HistoStitcher© provides memory efficient operation on high resolution digitized histology images and a highly flexible stitching process capable of producing more accurate results in less time. Further, by parameterizing the series of transformations determined by the stitching process, the stitching parameters can be saved, loaded at a later time, refined, reapplied to multi-resolution scans, or quickly transmitted to another site. In this paper, we describe in detail the design of HistoStitcher© and the mathematical routines used for calculating the optimal image transformation, and demonstrate its operation for stitching high resolution histology quadrants of a prostate specimen to form a digitally reassembled whole histology section, for 8 different patient studies. To evaluate stitching quality, a 6 point scoring scheme, which assesses the alignment and continuity of anatomical structures important for disease annotation, is employed by three independent expert pathologists. For 6 studies compared with this scheme, reconstructed sections generated via HistoStitcher© scored higher than reconstructions generated by an expert pathologist using Photoshop.
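An "optimal linear transformation" from paired anatomical landmarks has a standard closed-form solution. The sketch below uses the least-squares similarity transform (the Procrustes/Umeyama solution for scale, rotation, and translation); HistoStitcher's exact formulation may differ, so treat this as a generic illustration of the landmark-based stitching step.

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares similarity transform (scale s, rotation R, translation t)
    mapping src landmarks onto dst; both are (n, 2) arrays of point pairs."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    n = len(src)
    Sigma = B.T @ A / n                      # cross-covariance (dst vs src)
    U, S, Vt = np.linalg.svd(Sigma)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    Dsign = np.array([1.0, d])               # reflection guard
    R = U @ np.diag(Dsign) @ Vt              # optimal rotation
    var_src = (A ** 2).sum() / n
    s = (S * Dsign).sum() / var_src          # isotropic scale
    t = mu_d - s * R @ mu_s                  # translation
    return s, R, t

def apply_transform(pts, s, R, t):
    """Apply x -> s * R x + t to each row of pts."""
    return s * pts @ R.T + t
```

Because the fit is closed-form, the operator's landmark picks translate immediately into a stitch, and the recovered (s, R, t) parameters are exactly the kind of compact description that can be saved and reapplied to multi-resolution scans.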


International Symposium on Biomedical Imaging | 2007

A combined feature ensemble based mutual information scheme for robust inter-modal, inter-protocol image registration

Jonathan Chappelow; Anant Madabhushi; Mark A. Rosen; John E. Tomaszewski; Michael Feldman

In this paper we present a new robust method for medical image registration called combined feature ensemble mutual information (COFEMI). While mutual information (MI) has become arguably the most popular similarity metric for image registration, intensity based MI schemes have been found wanting in inter-modal and inter-protocol image registration, especially when (1) significant image differences across modalities (e.g. pathological and radiological studies) exist, and (2) when imaging artifacts have significantly altered the characteristics of one or both of the images to be registered. Intensity-based MI registration methods operate by maximization of MI between two images A and B. The COFEMI scheme extracts over 450 feature representations of image B that provide additional information about A not conveyed by image B alone and are more robust to the artifacts affecting original intensity image B. COFEMI registration operates by maximization of combined mutual information (CMI) of the image A with the feature ensemble associated with B. The combination of information from several feature images provides a more robust similarity metric compared to the use of a single feature image or the original intensity image alone. We also present a computer-assisted scheme for determining an optimal set of maximally informative features for use with our CMI formulation. We quantitatively and qualitatively demonstrate the improvement in registration accuracy by using our COFEMI scheme over the traditional intensity-based MI scheme in registering (1) prostate whole mount histological sections with corresponding magnetic resonance imaging (MRI) slices; and (2) phantom brain T1 and T2 MRI studies, which were adversely affected by imaging artifacts.
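The MI and combined-MI quantities driving this registration can be estimated from intensity histograms. Below is a minimal histogram-based sketch: `mutual_information` is the standard pairwise estimate, and `combined_mi` estimates I(A; F1,...,Fk) from a (k+1)-dimensional joint histogram. The coarse binning and small-k restriction are our simplifications, not COFEMI's actual estimator.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """MI between two images, estimated from their joint intensity histogram."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

def combined_mi(a, feats, bins=8):
    """I(A; F1,...,Fk) via a (k+1)-dimensional joint histogram; coarse bins
    keep the estimate tractable for small feature ensembles."""
    data = np.stack([a.ravel()] + [f.ravel() for f in feats], axis=1)
    h, _ = np.histogramdd(data, bins=bins)
    p = h / h.sum()
    pa = p.sum(axis=tuple(range(1, p.ndim)))   # marginal of A
    pf = p.sum(axis=0)                         # joint marginal of the features
    nz = p > 0
    outer = pa.reshape([-1] + [1] * (p.ndim - 1)) * pf[None, ...]
    return float((p[nz] * np.log(p[nz] / outer[nz])).sum())
```

A registration loop would then perturb the transform parameters and maximize `combined_mi` between the fixed image and the transformed feature ensemble; the ensemble's robustness comes from features that survive the artifacts corrupting the raw intensities.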


Proceedings of SPIE | 2011

Enhanced multi-protocol analysis via intelligent supervised embedding (EMPrAvISE): detecting prostate cancer on multi-parametric MRI

Satish Viswanath; B. Nicolas Bloch; Jonathan Chappelow; Pratik Patel; Neil M. Rofsky; Robert E. Lenkinski; Elizabeth M. Genega; Anant Madabhushi

Currently, there is significant interest in developing methods for quantitative integration of multi-parametric (structural, functional) imaging data with the objective of building automated meta-classifiers to improve disease detection, diagnosis, and prognosis. Such techniques are required to address the differences in dimensionalities and scales of individual protocols, while deriving an integrated multi-parametric data representation which best captures all disease-pertinent information available. In this paper, we present a scheme called Enhanced Multi-Protocol Analysis via Intelligent Supervised Embedding (EMPrAvISE): a powerful, generalizable framework applicable to a variety of domains for multi-parametric data representation and fusion. Our scheme utilizes an ensemble of embeddings (via dimensionality reduction, DR); thereby exploiting the variance amongst multiple uncorrelated embeddings in a manner similar to ensemble classifier schemes (e.g. Bagging, Boosting). We apply this framework to the problem of prostate cancer (CaP) detection on 12 pre-operative in vivo multi-parametric (T2-weighted, Dynamic Contrast Enhanced, and Diffusion-weighted) 3 Tesla magnetic resonance imaging (MRI) studies, in turn comprising a total of 39 2D planar MR images. We first align the different imaging protocols via automated image registration, followed by quantification of image attributes from individual protocols. Multiple embeddings are generated from the resultant high-dimensional feature space which are then combined intelligently to yield a single stable solution. Our scheme is employed in conjunction with graph embedding (for DR) and probabilistic boosting trees (PBTs) to detect CaP on multi-parametric MRI. Finally, a probabilistic pairwise Markov Random Field algorithm is used to apply spatial constraints to the result of the PBT classifier, yielding a per-voxel classification of CaP presence.
Per-voxel evaluation of detection results against ground truth for CaP extent on MRI (obtained by spatially registering pre-operative MRI with available whole-mount histological specimens) reveals that EMPrAvISE yields a statistically significant improvement (AUC=0.77) over classifiers constructed from individual protocols (AUC=0.62, 0.62, 0.65, for T2w, DCE, DWI respectively) as well as one trained using multi-parametric feature concatenation (AUC=0.67).
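One simple way to combine an ensemble of embeddings into a single stable solution is to average their pairwise-distance matrices and re-embed the consensus distances. The sketch below does exactly that with random-feature-subset PCA embeddings and classical MDS; this is a generic stand-in we chose for illustration, not EMPrAvISE's supervised graph-embedding fusion.

```python
import numpy as np

def pca_embed(X, d=2):
    """Project centered data onto its top-d principal directions."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:d].T

def ensemble_embedding(X, d=2, n_embeds=10, frac=0.6, seed=0):
    """Build embeddings from random feature subsets, average their pairwise
    distance matrices, then re-embed the consensus distances (classical MDS)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    D = np.zeros((n, n))
    for _ in range(n_embeds):
        cols = rng.choice(p, max(1, int(frac * p)), replace=False)
        E = pca_embed(X[:, cols], d)
        D += np.linalg.norm(E[:, None, :] - E[None, :, :], axis=2)
    D /= n_embeds
    # classical MDS on the consensus distance matrix
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:d]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))
```

The point of the ensemble, here as in the paper, is that structure consistently present across many uncorrelated embeddings survives the averaging while embedding-specific noise cancels out.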


Proceedings of SPIE | 2009

COLLINARUS: collection of image-derived non-linear attributes for registration using splines

Jonathan Chappelow; B. Nicolas Bloch; Neil M. Rofsky; Elizabeth M. Genega; Robert E. Lenkinski; William C. DeWolf; Satish Viswanath; Anant Madabhushi

We present a new method for fully automatic non-rigid registration of multimodal imagery, including structural and functional data, that utilizes multiple textural feature images to drive an automated spline based non-linear image registration procedure. Multimodal image registration is significantly more complicated than registration of images from the same modality or protocol on account of difficulty in quantifying similarity between different structural and functional information, and also due to possible physical deformations resulting from the data acquisition process. The COFEMI technique for feature ensemble selection and combination has been previously demonstrated to improve rigid registration performance over intensity-based MI for images of dissimilar modalities with visible intensity artifacts. Hence, we present here the natural extension of feature ensembles for driving automated non-rigid image registration in our new technique termed Collection of Image-derived Non-linear Attributes for Registration Using Splines (COLLINARUS). Qualitative and quantitative evaluation of the COLLINARUS scheme is performed on several sets of real multimodal prostate images and synthetic multiprotocol brain images. Multimodal (histology and MRI) prostate image registration is performed for 6 clinical data sets comprising a total of 21 groups of in vivo structural (T2-w) MRI, functional dynamic contrast enhanced (DCE) MRI, and ex vivo whole mount histology (WMH) images with cancer present. Our method determines a non-linear transformation to align WMH with the high resolution in vivo T2-w MRI, followed by mapping of the histopathologic cancer extent onto the T2-w MRI. The cancer extent is then mapped from T2-w MRI onto DCE-MRI using the combined non-rigid and affine transformations determined by the registration.
Evaluation of prostate registration is performed by comparison with the 3 time point (3TP) representation of functional DCE data, which provides an independent estimate of cancer extent. The set of synthetic multiprotocol images, acquired from the BrainWeb Simulated Brain Database, comprises 11 pairs of T1-w and proton density (PD) MRI of the brain. Following the application of a known warping to misalign the images, non-rigid registration was then performed to recover the original, correct alignment of each image pair. Quantitative evaluation of brain registration was performed by direct comparison of (1) the recovered deformation field to the applied field and (2) the original undeformed and recovered PD MRI. For each of the data sets, COLLINARUS is compared with the MI-driven counterpart of the B-spline technique. In each of the quantitative experiments, registration accuracy was found to be significantly (p < 0.05) higher for COLLINARUS compared with MI-driven B-spline registration. Over 11 slices, the mean absolute error in the deformation field recovered by COLLINARUS was found to be 0.8830 mm.


International Conference of the IEEE Engineering in Medicine and Biology Society | 2011

Spatially weighted mutual information (SWMI) for registration of digitally reconstructed ex vivo whole mount histology and in vivo prostate MRI

Pratik Patel; Jonathan Chappelow; John E. Tomaszewski; Michael Feldman; Mark A. Rosen; Natalie Shih; Anant Madabhushi

In this work, we present a scheme for the registration of digitally reconstructed whole mount histology (WMH) to pre-operative in vivo multiprotocol prostate MR imagery (T2w and DCE) using spatially weighted mutual information (SWMI). Spatial alignment of ex vivo histological sections to pre-operative in vivo MRI for prostate cancer (CaP) patients undergoing radical prostatectomy is a necessary first step in the discovery of quantitative multiprotocol MRI signatures for CaP. This may be done by spatially mapping delineated extent of disease on ex vivo histopathology onto pre-operative in vivo MRI via image registration. Apart from the challenges in spatially registering multi-modal data (histology and MRI) on account of (a) modality specific differences and (b) deformation due to the endorectal coil and tissue loss on histology, another complication is that the ex vivo histological sections, in the lab, are usually obtained as quadrants. This means they need to be reconstituted as a pseudo-whole mount histologic section (WMHS) prior to registration with MRI. An additional challenge is that most registration techniques rely on availability of the pre-segmented prostate capsule on T2w MRI. The novel contribution of this paper is that it leverages a spatially weighted mutual information (SWMI) scheme to automatically register and map CaP extent from WMHS onto pre-operative, multiprotocol MRI. The SWMI scheme obviates the need for pre-segmentation of the prostate capsule on MRI. Additionally, we leverage a program developed by our group, HistoStitcher©, for interactive stitching of individual histology quadrants to digitally reconstruct the pseudo WMHS. Our registration methodology comprises the following main steps: (1) affine registration of T2w and DCE MRI, (2) affine registration of stitched WMHS to multiprotocol T2w and DCE MRI, and (3) multimodal image registration of WMHS to multiprotocol T2w and DCE MRI using SWMI.
We quantitatively and qualitatively evaluated all aspects of our methodology in the multimodal registration of a total of 7 corresponding histology and MRI sections from 2 different patients. For the 7 studies, we obtained an average Hausdorff distance of 1.85 mm, mean absolute distance of 0.99 mm, RMS of 1.65 mm, and DICE of 0.83, when comparing the capsular alignment on MRI to histology.
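The evaluation metrics quoted above (DICE overlap, mean/maximum boundary distances) are standard and easy to state precisely. A minimal sketch, with Hausdorff distance computed between boundary point sets by brute force:

```python
import numpy as np

def dice(mask_a, mask_b):
    """DICE overlap between two binary segmentation masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two 2D boundary point sets:
    the larger of the two directed worst-case nearest-neighbor distances."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

Mean absolute distance is simply the average, rather than the maximum, of those nearest-neighbor distances; the Hausdorff value is the more pessimistic of the two since a single outlying boundary point dominates it.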

Collaboration

Top co-authors of Jonathan Chappelow:

Anant Madabhushi, Case Western Reserve University
Elizabeth M. Genega, Beth Israel Deaconess Medical Center
Michael Feldman, University of Pennsylvania
Neil M. Rofsky, University of Texas Southwestern Medical Center
Robert E. Lenkinski, University of Texas Southwestern Medical Center
Satish Viswanath, Case Western Reserve University
Mark A. Rosen, University of Pennsylvania