Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where John W. Hoppin is active.

Publication


Featured research published by John W. Hoppin.


Inhalation Toxicology | 2012

Regional particle size dependent deposition of inhaled aerosols in rats and mice

Philip J. Kuehl; Tamara Anderson; Gabriel Candelaria; Benjamin Gershman; Ky Harlin; Jacob Hesterman; Thomas D. Holmes; John W. Hoppin; Christian Lackas; Jeffrey P. Norenberg; Hongang Yu; Jacob D. McDonald

Context: The current data analysis tools in nuclear medicine have not been used to evaluate intra-organ regional deposition patterns of pharmaceutical aerosols in preclinical species. Objective: This study evaluates aerosol deposition patterns as a function of particle size in rats and mice using novel image analysis techniques. Materials and Methods: Mice and rats were exposed to radiolabeled polydisperse aerosols at 0.5, 1.0, 3.0, and 5.0 µm MMAD, followed by SPECT/CT imaging for deposition analysis. Images were quantified for both macro deposition patterns and regional deposition using the LRRI-developed Onion Model. Results: The deposition fraction in both rats and mice was shown to increase as the particle size decreased, with greater lung deposition in rats at all particle sizes. The Onion Model indicated that the smaller particle sizes resulted in increased peripheral deposition. Discussion: These data contrast with the commonly used 10% deposition fraction for all aerosols between 1.0 and 5.0 µm and indicate that the lung deposition fraction in this range does change with particle size. When compared to historical data, the 1.0, 3.0, and 5.0 µm particles result in similar lung deposition fractions; however, the 0.5 µm lung deposition fraction is markedly different. This is probably because the current aerosols were polydisperse, to reflect current pharmaceutical aerosols, whereas the historical data were generated with monodisperse aerosols. Conclusion: The deposition patterns of aerosols between 0.5 and 5.0 µm showed an increase in both overall and peripheral deposition as the particle size decreased. The Onion Model allows a more complex analysis of regional deposition in preclinical models.
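As a rough illustration of the shell-based regional analysis described above (not the LRRI Onion Model itself), the sketch below assumes a hypothetical segmented lung mask and a co-registered SPECT volume, peels the mask into concentric layers by repeated binary erosion, and reports the fraction of counts in each layer from periphery to core. The layer count and erosion depth are arbitrary choices.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def onion_layers(lung_mask, n_layers=4):
    """Split a binary lung mask into concentric layers (outermost first) by repeated
    binary erosion. Purely illustrative; not the LRRI Onion Model implementation."""
    layers = []
    current = lung_mask.astype(bool)
    for _ in range(n_layers - 1):
        inner = binary_erosion(current, iterations=3)   # peel a few voxels off the surface
        layers.append(current & ~inner)                 # the shell that was just removed
        current = inner
    layers.append(current)                              # whatever remains is the core
    return layers

def regional_fractions(spect, lung_mask, n_layers=4):
    """Fraction of total lung counts in each concentric shell, peripheral to central."""
    mask = lung_mask.astype(bool)
    total = spect[mask].sum()
    return [spect[layer].sum() / total for layer in onion_layers(mask, n_layers)]

# Toy example: a spherical 'lung' filled with Poisson counts.
z, y, x = np.ogrid[-20:21, -20:21, -20:21]
mask = (x**2 + y**2 + z**2) <= 18**2
spect = np.random.poisson(10.0, mask.shape) * mask
print(regional_fractions(spect, mask, n_layers=4))
```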


Information Processing in Medical Imaging | 2002

Objective comparison of quantitative imaging modalities without the use of a gold standard

John W. Hoppin; Matthew A. Kupinski; George A. Kastis; Eric Clarkson; Harrison H. Barrett

Imaging is often used for the purpose of estimating the value of some parameter of interest. For example, a cardiologist may measure the ejection fraction (EF) of the heart in order to know how much blood is being pumped out of the heart on each stroke. In clinical practice, however, it is difficult to evaluate an estimation method because the gold standard is not known, e.g., a cardiologist does not know the true EF of a patient. Thus, researchers have often evaluated an estimation method by plotting its results against the results of another (more accepted) estimation method, which amounts to using one set of estimates as the pseudo-gold standard. In this paper, we present a maximum-likelihood approach for evaluating and comparing different estimation methods without the use of a gold standard, with specific emphasis on the problem of evaluating EF estimation methods. Results of numerous simulation studies are presented and indicate that the method can precisely and accurately estimate the parameters of a regression line without a gold standard, i.e., without the x axis.
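The no-gold-standard approach can be sketched with a small simulation. The code below is only an illustration of the idea, not the authors' implementation: it assumes beta-distributed true ejection fractions, models two hypothetical modalities as linear transformations of the truth plus Gaussian noise, and recovers the regression parameters by maximizing a likelihood in which the unknown truth is integrated out numerically. All parameter values, the quadrature grid, and the optimizer choice are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import beta, norm

rng = np.random.default_rng(0)

# Simulate "true" ejection fractions (used only to generate data, never by the fit)
# and two hypothetical modalities, each a noisy linear transformation of the truth.
n_patients = 200
ef_true = rng.beta(4.0, 2.5, size=n_patients)
slopes, intercepts, noise = np.array([1.1, 0.9]), np.array([-0.05, 0.08]), np.array([0.04, 0.06])
data = slopes * ef_true[:, None] + intercepts + rng.normal(0.0, noise, size=(n_patients, 2))

# Quadrature grid on (0, 1) used to integrate out the unknown truth.
nodes = np.linspace(0.005, 0.995, 200)
dx = nodes[1] - nodes[0]

def neg_log_likelihood(params):
    a1, b1, s1, a2, b2, s2, al, be = params
    prior = beta.pdf(nodes, abs(al) + 1e-6, abs(be) + 1e-6)       # assumed truth distribution
    l1 = norm.pdf(data[:, 0:1], a1 * nodes + b1, abs(s1) + 1e-9)  # (patients x nodes)
    l2 = norm.pdf(data[:, 1:2], a2 * nodes + b2, abs(s2) + 1e-9)
    marginal = (l1 * l2 * prior).sum(axis=1) * dx                 # truth integrated out
    return -np.log(marginal + 1e-300).sum()

start = [1.0, 0.0, 0.1, 1.0, 0.0, 0.1, 2.0, 2.0]
fit = minimize(neg_log_likelihood, start, method="Nelder-Mead",
               options={"maxiter": 20000, "fatol": 1e-7, "xatol": 1e-7})
print("estimated slope/intercept/noise per modality:", np.round(fit.x[:6], 3))
```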


Academic Radiology | 2002

Estimation in Medical Imaging without a Gold Standard

Matthew A. Kupinski; John W. Hoppin; Eric Clarkson; Harrison H. Barrett; George A. Kastis

RATIONALE AND OBJECTIVES: In medical imaging, physicians often estimate a parameter of interest (eg, cardiac ejection fraction) for a patient to assist in establishing a diagnosis. Many different estimation methods may exist, but rarely can one be considered a gold standard. Therefore, evaluation and comparison of different estimation methods are difficult. The purpose of this study was to examine a method of evaluating different estimation methods without use of a gold standard. MATERIALS AND METHODS: This method is equivalent to fitting regression lines without the x axis. To use this method, multiple estimates of the clinical parameter of interest for each patient of a given population were needed. The authors assumed the statistical distribution for the true values of the clinical parameter of interest was a member of a given family of parameterized distributions. Furthermore, they assumed a statistical model relating the clinical parameter to the estimates of its value. Using these assumptions and observed data, they estimated the model parameters and the parameters characterizing the distribution of the clinical parameter. RESULTS: The authors applied the method to simulated cardiac ejection fraction data with varying numbers of patients, numbers of modalities, and levels of noise. They also tested the method on both linear and nonlinear models and characterized the performance of this method compared to that of conventional regression analysis by using x-axis information. Results indicate that the method follows trends similar to those of conventional regression analysis as patients and noise vary, although conventional regression analysis outperforms the method presented because it uses the gold standard, which the authors assume is unavailable. CONCLUSION: The method accurately estimates model parameters. These estimates can be used to rank the systems for a given estimation task.


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 2003

Experimental determination of object statistics from noisy images

Matthew A. Kupinski; Eric Clarkson; John W. Hoppin; Liying Chen; Harrison H. Barrett

Modern imaging systems rely on complicated hardware and sophisticated image-processing methods to produce images. Owing to this complexity in the imaging chain, there are numerous variables in both the hardware and the software that need to be determined. We advocate a task-based approach to measuring and optimizing image quality in which one analyzes the ability of an observer to perform a task. Ideally, a task-based measure of image quality would account for all sources of variation in the imaging system, including object variability. Often, researchers ignore object variability even though it is known to have a large effect on task performance. The more accurate the statistical description of the objects, the more believable the task-based results will be. We have developed methods to fit statistical models of objects, using only noisy image data and a well-characterized imaging system. The results of these techniques could eventually be used to optimize both the hardware and the software components of imaging systems.
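A drastically simplified sketch of this idea (not the authors' models): it assumes 1-D objects that are Gaussian bumps with normally distributed amplitude, a well-characterized system consisting of a known blur plus Gaussian read noise of known level, and it recovers the object-model parameters (amplitude mean and spread) from the noisy images alone by projecting onto the known system response and moment matching.

```python
import numpy as np

rng = np.random.default_rng(3)

# Object model: 1-D Gaussian bumps whose amplitude is drawn from N(mu_A, sigma_A^2).
# Imaging system (assumed well characterized): known blur plus Gaussian read noise.
npix = 64
x = np.linspace(-1.0, 1.0, npix)
bump = np.exp(-(x / 0.15) ** 2)
blur = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 0.1) ** 2)
blur /= blur.sum(axis=1, keepdims=True)
template = blur @ bump                      # noiseless image of a unit-amplitude object
sigma_n = 2.0                               # known read-noise level

# Generate noisy images; the true amplitude statistics are what we try to recover.
mu_A_true, sigma_A_true, n_images = 10.0, 2.5, 5000
amps = rng.normal(mu_A_true, sigma_A_true, n_images)
images = amps[:, None] * template + rng.normal(0.0, sigma_n, (n_images, npix))

# Project each noisy image onto the known system response. The projection is Gaussian
# with mean c*mu_A and variance c^2*sigma_A^2 + sigma_n^2*c, where c = |template|^2,
# so the object-model parameters follow from the sample mean and variance.
y = images @ template
c = template @ template
mu_A_hat = y.mean() / c
sigma_A_hat = np.sqrt(max(y.var() - sigma_n**2 * c, 0.0)) / c
print("estimated amplitude mean and spread:", round(mu_A_hat, 2), round(sigma_A_hat, 2))
```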


IEEE Transactions on Medical Imaging | 2005

Noise characterization of block-iterative reconstruction algorithms: II. Monte Carlo simulations

Edward J. Soares; Stephen J. Glick; John W. Hoppin

In Soares et al. (2000), the ensemble statistical properties of the rescaled block-iterative expectation-maximization (RBI-EM) reconstruction algorithm and rescaled block-iterative simultaneous multiplicative algebraic reconstruction technique (RBI-SMART) were derived. Included in this analysis were the special cases of RBI-EM, maximum-likelihood EM (ML-EM) and ordered-subset EM (OS-EM), and the special case of RBI-SMART, SMART. Explicit expressions were found for the ensemble mean, covariance matrix, and probability density function of RBI reconstructed images, as a function of iteration number. The theoretical formulations relied on one approximation, namely that the noise in the reconstructed image was small compared to the mean image. We evaluate the predictions of the theory by using Monte Carlo methods to calculate the sample statistical properties of each algorithm and then compare the results with the theoretical formulations. In addition, the validity of the approximation will be justified.
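A minimal sketch of the Monte Carlo side of this kind of study (not the authors' implementation): the ML-EM special case is run on many Poisson noise realizations of a toy 1-D system so that the sample mean and variance of the reconstructions can be compared against theoretical predictions. The system size, blur width, iteration count, and number of realizations are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny 1-D toy system: 32-pixel object, 48 detector bins, Gaussian-blur system matrix.
npix, nbins = 32, 48
x = np.linspace(0.0, 1.0, npix)
centers = np.linspace(0.0, 1.0, nbins)
A = np.exp(-0.5 * ((centers[:, None] - x[None, :]) / 0.05) ** 2)
A /= A.sum(axis=0, keepdims=True)          # each pixel's sensitivity sums to 1

f_true = 50.0 * (np.exp(-((x - 0.35) / 0.08) ** 2) + 0.5)
g_mean = A @ f_true

def ml_em(g, n_iter=20):
    """Standard ML-EM (equivalently OS-EM with a single subset)."""
    f = np.ones(npix)
    sens = A.sum(axis=0)                   # A^T 1
    for _ in range(n_iter):
        f = f / sens * (A.T @ (g / (A @ f)))
    return f

# Monte Carlo: reconstruct many Poisson realizations of the same mean data, then
# compare the sample mean and variance of the reconstructions with theory.
recons = np.array([ml_em(rng.poisson(g_mean)) for _ in range(500)])
print("sample mean (first 5 pixels):    ", recons.mean(axis=0)[:5])
print("sample variance (first 5 pixels):", recons.var(axis=0)[:5])
```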


Medical Imaging 2003: Image Perception, Observer Performance, and Technology Assessment | 2003

Evaluating estimation techniques in medical imaging without a gold standard: experimental validation

John W. Hoppin; Matthew A. Kupinski; Donald W. Wilson; Todd Peterson; Benjamin Gershman; George A. Kastis; Eric Clarkson; Lars R. Furenlid; Harrison H. Barrett

Imaging is often used for the purpose of estimating the value of some parameter of interest. For example, a cardiologist may measure the ejection fraction (EF) of the heart to quantify how much blood is being pumped out of the heart on each stroke. In clinical practice, however, it is difficult to evaluate an estimation method because the gold standard is not known, e.g., a cardiologist does not know the true EF of a patient. An estimation method is typically evaluated by plotting its results against the results of another (more accepted) estimation method. This approach results in the use of one set of estimates as the pseudo-gold standard. We have developed a maximum-likelihood approach for comparing different estimation methods without the use of a gold standard. In previous work we have presented the results of numerous simulation studies indicating that the method can precisely and accurately estimate the parameters of a regression line without a gold standard, i.e., without the x-axis. To further validate our method, we have designed an experiment performing volume estimation using a physical phantom and two imaging systems (SPECT and CT).


Medical Imaging 2003: Image Perception, Observer Performance, and Technology Assessment | 2003

Optimizing imaging hardware for estimation tasks

Matthew A. Kupinski; Eric Clarkson; Kevin Gross; John W. Hoppin

Medical imaging is often performed for the purpose of estimating a clinically relevant parameter. For example, cardiologists are interested in the cardiac ejection fraction, the fraction of blood pumped out of the left ventricle at the end of each heart cycle. Even when the primary task of the imaging system is tumor detection, physicians frequently want to estimate parameters of the tumor, e.g. size and location. For signal-detection tasks, we advocate that the performance of an ideal observer be employed as the figure of merit for optimizing medical imaging hardware. We have examined the use of the minimum variance of the ideal, unbiased estimator as a figure of merit for hardware optimization. The minimum variance of the ideal, unbiased estimator can be calculated using the Fisher information matrix. To account for both image noise and object variability, we used a statistical method known as Markov-chain Monte Carlo. We employed a lumpy object model and simulated imaging systems to compute our figures of merit. We have demonstrated the use of this method in comparing imaging systems for estimation tasks.
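A minimal sketch of the Fisher-information figure of merit for a purely Poisson data model (object variability, which the paper handles with Markov-chain Monte Carlo, is ignored here): a hypothetical Gaussian "tumor" parameterized by amplitude, location, and size is imaged through a toy system with Gaussian blur, the Fisher information matrix is built from finite-difference derivatives of the mean data, and two blur settings are compared by the Cramér-Rao bound on the size estimate.

```python
import numpy as np

def mean_image(theta, blur_sigma, grid=np.linspace(-1.0, 1.0, 32)):
    """Mean detector data for a Gaussian 'tumor' (amplitude, x, y, size) seen through
    a toy system with Gaussian blur; a flat background keeps all means positive."""
    amp, x0, y0, size = theta
    xx, yy = np.meshgrid(grid, grid)
    width2 = size**2 + blur_sigma**2            # object width combined with system blur
    img = amp * np.exp(-((xx - x0)**2 + (yy - y0)**2) / (2.0 * width2))
    return img.ravel() + 1.0

def fisher_information(theta, blur_sigma, eps=1e-5):
    """Poisson-data FIM: F_jk = sum_i (1/gbar_i) * dgbar_i/dtheta_j * dgbar_i/dtheta_k."""
    gbar = mean_image(theta, blur_sigma)
    grads = []
    for j in range(len(theta)):
        tp, tm = np.array(theta, float), np.array(theta, float)
        tp[j] += eps
        tm[j] -= eps
        grads.append((mean_image(tp, blur_sigma) - mean_image(tm, blur_sigma)) / (2 * eps))
    grads = np.array(grads)                     # (n_params, n_pixels)
    return (grads / gbar) @ grads.T

theta = [100.0, 0.1, -0.2, 0.15]                # hypothetical amplitude, x, y, size
for blur in (0.05, 0.15):
    crb = np.linalg.inv(fisher_information(theta, blur))
    print(f"blur={blur}: Cramer-Rao bound on the size estimate = {crb[3, 3]:.3e}")
```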


Medical Imaging 2003: Image Perception, Observer Performance, and Technology Assessment | 2003

Assessing the accuracy of estimates of the likelihood ratio

Eric Clarkson; Matthew A. Kupinski; John W. Hoppin

There are many methods to estimate, from ensembles of signal-present and signal-absent images, the area under the receiver operating characteristic curve for an observer in a detection task. For the ideal observer on realistic detection tasks, all of these methods are time consuming due to the difficulty in calculating the ideal-observer test statistic. There are relations, in the form of equations and inequalities, that can be used to check these estimates by comparing them to other quantities that can also be estimated from the ensembles. This is especially useful for evaluating these estimates for any possible bias due to small sample sizes or errors in the calculation of the likelihood ratio. This idea is demonstrated with a simulation of an idealized single photon emission detector array viewing a possible signal in a two-dimensional lumpy activity distribution.
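Standard likelihood-ratio identities give an example of the kind of consistency check the abstract refers to: under the signal-absent distribution the mean of the likelihood ratio is exactly 1, under the signal-present distribution the mean of its reciprocal is exactly 1, and E0[Λ²] = E1[Λ]. The sketch below checks sampled estimates of Λ against these identities for a toy Gaussian detection task where Λ is known in closed form; the dimensions and signal strength are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy detection task where the likelihood ratio is known exactly: i.i.d. unit-variance
# Gaussian pixels, with a known uniform signal added under the signal-present hypothesis.
n, dim, signal = 20000, 16, 0.2
s = np.full(dim, signal)
g0 = rng.normal(0.0, 1.0, (n, dim))            # signal-absent images
g1 = rng.normal(0.0, 1.0, (n, dim)) + s        # signal-present images

def log_lr(g):
    # log of N(g; s, I) / N(g; 0, I)
    return g @ s - 0.5 * (s @ s)

lr0, lr1 = np.exp(log_lr(g0)), np.exp(log_lr(g1))

# Identities any correct likelihood-ratio calculation must satisfy (up to sample noise).
print("E0[L]           (should be ~1):", lr0.mean())
print("E1[1/L]         (should be ~1):", (1.0 / lr1).mean())
print("E0[L^2] - E1[L] (should be ~0):", (lr0**2).mean() - lr1.mean())

# Wilcoxon (Mann-Whitney) estimate of the ideal-observer AUC for the same ensembles.
auc = (lr1[:5000, None] > lr0[None, :5000]).mean()
print("Wilcoxon AUC estimate:", auc)
```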


Proceedings of SPIE | 2015

Quantifying and reducing uncertainties in cancer therapy

Harrison H. Barrett; David S. Alberts; Zhonglin Liu; Luca Caucci; John W. Hoppin

There are two basic sources of uncertainty in cancer chemotherapy: how much of the therapeutic agent reaches the cancer cells, and how effective it is in reducing or controlling the tumor when it gets there. There is also a concern about adverse effects of the therapy drug. Similarly in external-beam radiation therapy or radionuclide therapy, there are two sources of uncertainty: delivery and efficacy of the radiation absorbed dose, and again there is a concern about radiation damage to normal tissues. The therapy operating characteristic (TOC) curve, developed in the context of radiation therapy, is a plot of the probability of tumor control vs. the probability of normal-tissue complications as the overall radiation dose level is varied, e.g. by varying the beam current in external-beam radiotherapy or the total injected activity in radionuclide therapy. The TOC can be applied to chemotherapy with the administered drug dosage as the variable. The area under a TOC curve (AUTOC) can be used as a figure of merit for therapeutic efficacy, analogous to the area under an ROC curve (AUROC), which is a figure of merit for diagnostic efficacy. In radiation therapy AUTOC can be computed for a single patient by using image data along with radiobiological models for tumor response and adverse side effects. In this paper we discuss the potential of using mathematical models of drug delivery and tumor response with imaging data to estimate AUTOC for chemotherapy, again for a single patient. This approach provides a basis for truly personalized therapy and for rigorously assessing and optimizing the therapy regimen for the particular patient. A key role is played by Emission Computed Tomography (PET or SPECT) of radiolabeled chemotherapy drugs.


Journal of Medical Imaging | 2016

Therapy operating characteristic curves: tools for precision chemotherapy.

Harrison H. Barrett; David S. Alberts; Luca Caucci; John W. Hoppin

The therapy operating characteristic (TOC) curve, developed in the context of radiation therapy, is a plot of the probability of tumor control versus the probability of normal-tissue complications as the overall radiation dose level is varied, e.g., by varying the beam current in external-beam radiotherapy or the total injected activity in radionuclide therapy. This paper shows how TOC can be applied to chemotherapy with the administered drug dosage as the variable. The area under a TOC curve (AUTOC) can be used as a figure of merit for therapeutic efficacy, analogous to the area under an ROC curve (AUROC), which is a figure of merit for diagnostic efficacy. In radiation therapy, AUTOC can be computed for a single patient by using image data along with radiobiological models for tumor response and adverse side effects. The mathematical analogy between response of observers to images and the response of tumors to distributions of a chemotherapy drug is exploited to obtain linear discriminant functions from which AUTOC can be calculated. Methods for using mathematical models of drug delivery and tumor response with imaging data to estimate patient-specific parameters that are needed for calculation of AUTOC are outlined. The implications of this viewpoint for clinical trials are discussed.
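A minimal sketch of a TOC curve and its area, using generic radiobiological surrogates rather than the patient-specific models discussed in the paper: tumor control is modeled with a Poisson TCP curve, normal-tissue complications with a logistic NTCP curve, the overall dose level is swept, and AUTOC is computed by trapezoidal integration. All model forms and parameter values are hypothetical.

```python
import numpy as np

def tcp(dose, n_clonogens=1e7, alpha=0.30):
    """Poisson tumor-control probability: every clonogen must be sterilized."""
    return np.exp(-n_clonogens * np.exp(-alpha * dose))

def ntcp(dose, d50=65.0, k=6.0):
    """Logistic normal-tissue complication probability."""
    return 1.0 / (1.0 + (d50 / np.maximum(dose, 1e-9)) ** k)

def autoc(doses, **ntcp_kwargs):
    """Area under the TOC curve: TCP plotted against NTCP as the dose level is swept."""
    p_tc, p_ntc = tcp(doses), ntcp(doses, **ntcp_kwargs)
    xs = np.concatenate(([0.0], p_ntc, [1.0]))    # NTCP increases monotonically with dose
    ys = np.concatenate(([0.0], p_tc, [1.0]))
    return float(np.sum(np.diff(xs) * (ys[1:] + ys[:-1]) / 2.0))  # trapezoidal rule

doses = np.linspace(0.0, 150.0, 600)              # hypothetical overall dose levels
print("AUTOC, baseline normal-tissue model:            ", autoc(doses))
print("AUTOC, better-spared normal tissue (higher d50):", autoc(doses, d50=85.0))
```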

Collaboration


Dive into John W. Hoppin's collaboration.

Top Co-Authors

Kelly Davis Orcutt

Massachusetts Institute of Technology
