Publication


Featured research published by Garry Chinn.


IEEE Transactions on Medical Imaging | 2009

Fast, Accurate and Shift-Varying Line Projections for Iterative Reconstruction Using the GPU

Guillem Pratx; Garry Chinn; Peter D. Olcott; Craig S. Levin

List-mode processing provides an efficient way to deal with sparse projections in iterative image reconstruction for emission tomography. An issue often reported is the tremendous amount of computation required by such algorithms: each recorded event requires several back- and forward line projections. We investigated the use of the programmable graphics processing unit (GPU) to accelerate the line-projection operations and implement fully-3D list-mode ordered-subsets expectation-maximization for positron emission tomography (PET). We designed a reconstruction approach that incorporates resolution kernels, which model the spatially-varying physical processes associated with photon emission, transport and detection. Our development is particularly suitable for applications where the projection data are sparse, such as high-resolution, dynamic, and time-of-flight PET reconstruction. The GPU approach runs more than 50 times faster than an equivalent CPU implementation while image quality and accuracy are virtually identical. This paper describes in detail how the GPU can be used to accelerate the line-projection operations, even when the lines of response have arbitrary endpoint locations and shift-varying resolution kernels are used. A quantitative evaluation is included to validate the correctness of this new approach.
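To make the core loop concrete, here is a minimal CPU sketch of one list-mode MLEM update built from line projections. It is illustrative only: the function names are ours, the sensitivity image is a crude placeholder, and the paper's resolution kernels and GPU thread mapping are not reproduced.

```python
import numpy as np

def sample_line(p0, p1, n_samples):
    """Equally spaced sample points along a line of response from p0 to p1."""
    t = np.linspace(0.0, 1.0, n_samples)
    return p0[None, :] + t[:, None] * (p1 - p0)[None, :]

def listmode_mlem_update(image, lors, n_samples=64):
    """One list-mode MLEM update built from line projections (2-D sketch).

    image : 2-D array, current activity estimate
    lors  : list of (p0, p1) endpoint pairs in pixel coordinates
    """
    shape = np.array(image.shape)
    ratio_backproj = np.zeros_like(image)
    # Placeholder sensitivity image; a real system computes this from geometry.
    sens = np.full_like(image, len(lors) / image.size)
    for p0, p1 in lors:
        pts = np.round(sample_line(np.asarray(p0, float),
                                   np.asarray(p1, float), n_samples)).astype(int)
        ok = ((pts >= 0) & (pts < shape)).all(axis=1)   # keep samples inside the image
        pts = pts[ok]
        fwd = image[pts[:, 0], pts[:, 1]].sum()         # forward line projection
        if fwd > 0:
            # Backproject the event's ratio along the same line.
            ratio_backproj[pts[:, 0], pts[:, 1]] += 1.0 / fwd
    return image * ratio_backproj / sens
```

In the paper's GPU formulation, each line projection of this loop becomes an independent parallel work item, which is what yields the reported >50x speedup.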


IEEE Nuclear Science Symposium | 2006

A Method to Include Single Photon Events in Image Reconstruction for a 1 mm Resolution PET System Built with Advanced 3-D Positioning Detectors

Garry Chinn; Angela M. K. Foudray; Craig S. Levin

We are developing cadmium zinc telluride detectors with three-dimensional photon positioning capabilities for high-resolution PET imaging. These detectors exhibit high spatial resolution (1 mm), energy resolution (2.5% full width at half maximum for 511 keV photons), and the ability to resolve individual Compton interactions within the detector. Using these measurements, non-coincident single photons can be reconstructed by estimating the incoming direction of the photon using the kinematics of Compton scatter within the detector. In this paper, we investigated image reconstruction methods for combining two different types of measurements: conventional coincidence photon events and non-coincident single photon events. We introduce a new image reconstruction method that uses a Bayesian projector function. Using Monte Carlo simulated data generated by GATE (Geant4), we showed that this new approach has the potential to improve contrast and resolution with comparable signal-to-noise ratio.
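As a rough illustration of the single-photon handling described above, the sketch below builds the incident-direction cone from the first two interaction positions and the first energy deposit using Compton kinematics. The naming is ours, and the paper's Bayesian projector, which weights such cones during reconstruction, is not reproduced.

```python
import numpy as np

ME_C2 = 511.0  # electron rest energy, keV

def compton_cone(pos1, pos2, e1, e_total=511.0):
    """Incident-direction cone for a non-coincident single photon.

    pos1, pos2 : 3-D positions of the first and second interactions
    e1         : energy deposited at the first (Compton) interaction, keV
    Returns (apex, axis, half_angle) or None if kinematically infeasible.
    """
    e_scat = e_total - e1                                # photon energy after scatter
    cos_theta = 1.0 - ME_C2 * (1.0 / e_scat - 1.0 / e_total)
    if not -1.0 <= cos_theta <= 1.0:
        return None                                      # energies inconsistent with Compton scatter
    axis = np.asarray(pos1, float) - np.asarray(pos2, float)
    axis /= np.linalg.norm(axis)                         # points back along the scattered ray
    return np.asarray(pos1, float), axis, float(np.arccos(cos_theta))
```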


IEEE Transactions on Medical Imaging | 2013

Sparse Signal Recovery Methods for Multiplexing PET Detector Readout

Garry Chinn; Peter D. Olcott; Craig S. Levin

Nuclear medicine imaging detectors are commonly multiplexed to reduce the number of readout channels. Because the underlying detector signals have a sparse representation, sparse recovery methods such as compressed sensing may be used to develop new multiplexing schemes. Random methods may be used to create sensing matrices that satisfy the restricted isometry property; however, the restricted isometry property provides little guidance for developing multiplexing networks with good signal-to-noise recovery capability. In this work, we describe compressed sensing in a maximum likelihood framework and develop a new method for constructing multiplexing (sensing) matrices that recover signals more accurately, in a mean-square-error sense, than randomly constructed sensing matrices. Signals can then be recovered by maximum likelihood estimation constrained to the support recovered by either greedy ℓ0 iterative algorithms or ℓ1-norm minimization techniques. We show that this new method for constructing and decoding sensing matrices recovers signals with 4%-110% higher SNR than random Gaussian sensing matrices, up to 100% higher SNR than partial DCT sensing matrices, 50%-2400% higher SNR than cross-strip multiplexing, and 22%-210% higher SNR than Anger multiplexing for photoelectric events.
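The decoding pipeline named in the abstract, greedy ℓ0 support recovery followed by support-constrained maximum likelihood, can be sketched as follows. Under white Gaussian readout noise the constrained ML estimate is simply a least-squares refit on the recovered support; the paper's novel sensing-matrix construction is not reproduced here, and the function name is ours.

```python
import numpy as np

def omp_then_ml(A, y, k):
    """Greedy l0 support recovery (orthogonal matching pursuit), then a
    least-squares refit on the support, i.e. the support-constrained ML
    estimate under white Gaussian readout noise.

    A : (m, n) multiplexing (sensing) matrix
    y : (m,) measured channel values
    k : assumed number of active pixels (sparsity), k >= 1
    """
    support, resid = [], y.astype(float).copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ resid)))   # column most correlated with residual
        support.append(j)
        As = A[:, support]
        coef, *_ = np.linalg.lstsq(As, y, rcond=None)
        resid = y - As @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef                             # ML refit restricted to the support
    return x
```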


IEEE Transactions on Medical Imaging | 2011

A Maximum NEC Criterion for Compton Collimation to Accurately Identify True Coincidences in PET

Garry Chinn; Craig S. Levin

In this work, we propose a new method to increase the accuracy of identifying true coincidence events for positron emission tomography (PET). This approach requires 3-D detectors with the ability to position each photon interaction in multi-interaction photon events. When multiple interactions occur in the detector, the incident direction of the photon can be estimated using the Compton scatter kinematics (Compton collimation). If the estimated incident direction of the photon deviates from colinearity with a second, coincident photon by less than a certain angular threshold, the line of response between the two photons is identified as a true coincidence and used for image reconstruction. We present an algorithm for choosing the incident photon direction window threshold that maximizes the noise equivalent counts (NEC) of the PET system. For simulated data, the direction window removed 56%-67% of random coincidences from image reconstruction while retaining >94% of true coincidences, and accurately extracted 70% of true coincidences from multiple coincidences.
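A minimal sketch of the threshold selection, assuming the standard definition NEC = T²/(T + S + R) and precomputed angular deviations from colinearity for simulated true and random coincidences (both assumptions ours; the paper's exact criterion may differ in detail):

```python
import numpy as np

def best_direction_window(dtheta_true, dtheta_rand, scatter_counts=0):
    """Choose the angular window maximizing NEC = T^2 / (T + S + R).

    dtheta_true, dtheta_rand : angular deviations from colinearity (rad) of
    simulated true and random coincidences; scatter_counts is held fixed.
    """
    best_nec, best_th = -np.inf, None
    for th in np.linspace(0.0, np.pi, 512):
        T = np.count_nonzero(dtheta_true <= th)   # accepted trues
        R = np.count_nonzero(dtheta_rand <= th)   # accepted randoms
        denom = T + scatter_counts + R
        nec = T ** 2 / denom if denom > 0 else 0.0
        if nec > best_nec:
            best_nec, best_th = nec, th
    return best_th
```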


IEEE Nuclear Science Symposium | 2011

Algorithms that exploit multi-interaction photon events in sub-millimeter resolution CZT detectors for PET

Garry Chinn; Craig S. Levin

We are investigating a new cadmium zinc telluride (CZT) detector module with 0.5 mm³ reconstructed spatial resolution for small-animal PET and plant imaging. Inter-pixel scatter will degrade the reconstructed spatial resolution and contrast resolution or, if the events are discarded, reduce the sensitivity. CZT also has relatively poor time resolution, leading to reduced contrast from random coincidences. We will use the kinematics of Compton scatter to estimate the position of inter-pixel scatter events so that these events can be used without degrading the spatial resolution. Further, we can estimate the incident photon direction from Compton kinematics to reject random coincidence photons. We evaluate the performance of our signal processing algorithms on a new split-layer design. The addition of Compton kinematics yielded up to a 50% improvement in contrast and contrast-to-noise ratio (CNR) over processing without Compton kinematics.
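One ingredient of such processing is deciding which of two interactions came first. A simple energy-only feasibility test from Compton kinematics is sketched below (our naming; in practice the geometric angle between interactions is also compared against the kinematic one):

```python
ME_C2 = 511.0  # electron rest energy, keV

def feasible_orderings(e_a, e_b, e_total=511.0):
    """Energy-only test of which of two deposits can be the first Compton
    interaction of a photon of energy e_total. Deposits above the Compton
    edge (~340.7 keV for 511 keV photons) cannot come first.
    """
    orders = []
    for first, label in ((e_a, "a first"), (e_b, "b first")):
        e_scat = e_total - first
        if e_scat <= 0:
            continue
        cos_theta = 1.0 - ME_C2 * (1.0 / e_scat - 1.0 / e_total)
        if -1.0 <= cos_theta <= 1.0:
            orders.append(label)
    return orders

# Example: a 400 keV + 111 keV pair can only be sequenced "b first".
print(feasible_orderings(400.0, 111.0))
```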


IEEE Nuclear Science Symposium | 2011

Compressed sensing for the multiplexing of PET detectors

Peter D. Olcott; Garry Chinn; Craig S. Levin

Compressed sensing can be used to multiplex a large number of individual readout sensors to significantly reduce the number of readout channels in a large-area PET block detector. The compressed sensing framework treats PET data acquisition as a sparse readout problem and achieves sub-Nyquist-rate sampling, where the Nyquist rate is determined by the pixel pitch of the individual SiPM sensors. The sensing matrix is fabricated using discrete elements or wires that uniquely connect pixels to readout channels. By analyzing the recorded magnitudes on several ADC channels, the original pixel values can be recovered even though they have been scrambled through the sensing matrix. In a PET block detector design comprising 128 SiPM pixels arranged in a 16 × 8 array, compressed sensing can provide a higher multiplexing ratio (128:16) than Anger logic (128:32) or cross-strip readout (128:24) while resolving multiple simultaneous hits. Unlike Anger and cross-strip multiplexing, compressed sensing can recover the positions and magnitudes of simultaneous, multiple-pixel hits. Decoding multiple pixel hits can be used to improve the positioning of events in light-sharing designs, inter-crystal scatter events, or events that pile up in the detector. A Monte Carlo simulation of a compressed sensing multiplexed circuit design for a 16 × 8 array of SiPM pixels was performed. Noise sources from the SiPM pixel (dark counts) and from the readout channel (thermal noise) were included in the simulation. Two different crystal designs were also simulated: a one-to-one coupled design with 128 scintillation crystals, and a 3:2 light-sharing design with 196 crystals. With an input SNR of 37 dB (experimentally measured from a single SiPM pixel), all crystals were clearly decoded by the compressed sensing multiplexing, with a decoded SNR of the sum signal of 30.6 ± 0.1 dB for the one-to-one coupling and 26.1 ± 0.1 dB for the three-to-two coupling. For a 10% energy resolution, an SNR greater than 20 dB is needed to accurately recover the energy.
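For intuition, here is a toy decode in the compressed sensing style the abstract describes: two simultaneous pixel hits recovered from 16 channels by ℓ1 minimization (basis pursuit) cast as a linear program. The Gaussian matrix is a stand-in for the physically wired sensing matrix, and the noiseless equality constraint is a simplification.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """l1 recovery of a sparse pixel vector:  min ||x||_1  s.t.  A x = y.

    Split x = u - v with u, v >= 0 and solve the equivalent linear program.
    Noiseless sketch; a noisy readout would call for a LASSO-type solver.
    """
    m, n = A.shape
    c = np.ones(2 * n)                          # minimize sum(u) + sum(v) = ||x||_1
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * n), method="highs")
    return res.x[:n] - res.x[n:]

# Toy decode: 16 channels reading 128 pixels, two simultaneous hits.
rng = np.random.default_rng(0)
A = rng.standard_normal((16, 128)) / 4.0        # stand-in for the wired sensing matrix
x_true = np.zeros(128)
x_true[[10, 90]] = [1.0, 0.7]                   # two pixels fire at once
x_hat = basis_pursuit(A, A @ x_true)            # recovers both positions and magnitudes
```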


IEEE Nuclear Science Symposium | 2006

Accurately Positioning and Incorporating Tissue-Scattered Photons into PET Image Reconstruction

Garry Chinn; Angela M. K. Foudray; Craig S. Levin

We are developing cadmium zinc telluride detectors with three-dimensional positioning capabilities for high-resolution PET imaging. These detectors exhibit high spatial resolution (1 mm), energy resolution (2.5% full width at half maximum for 511 keV photons), and the ability to identify the 3-D coordinates of individual Compton and photoelectric interactions within the detector. They can operate in conventional PET mode, measuring photons in coincidence, or as a Compton camera for single-photon events. In this work, we show how the capabilities of this detector can be used to reconstruct tissue-scatter coincidence events. We present a scatter projector function for positioning tissue-scatter coincidence events in the field of view. Using Monte Carlo simulated data generated by GATE (Geant4), we showed that this new approach might be used to increase the number of usable counts in PET.
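The kinematic handle on tissue scatter can be shown in a few lines: the measured (reduced) energy of a 511 keV photon that scattered once in tissue fixes its scatter angle, which in turn constrains where the annihilation point can lie. The sketch below gives only the angle; the paper's scatter projector combines it with detector geometry to position the event, and the function name is ours.

```python
import numpy as np

ME_C2 = 511.0  # electron rest energy, keV

def tissue_scatter_angle(e_measured):
    """Scatter angle (rad) of a 511 keV annihilation photon that Compton-
    scattered once in tissue, from its measured reduced energy in keV:
        E' = 511 / (2 - cos(theta))  =>  cos(theta) = 2 - 511 / E'.
    Returns None outside the single-scatter range (170.3 - 511 keV).
    """
    cos_theta = 2.0 - ME_C2 / e_measured
    if not -1.0 <= cos_theta <= 1.0:
        return None
    return float(np.arccos(cos_theta))
```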


Proceedings of the National Academy of Sciences of the United States of America | 2013

Single-photon sampling architecture for solid-state imaging sensors

Ewout van den Berg; Emmanuel J. Candès; Garry Chinn; Craig S. Levin; Peter D. Olcott; Carlos Sing-Long

Significance: We propose a highly compressed readout architecture for arrays of imaging sensors capable of detecting individual photons. By exploiting sparseness properties of the input signal, our architecture can provide the same information content as conventional readout designs while using orders of magnitude fewer output channels. This is achieved using a unique interconnection topology based on group-testing theoretical considerations. Unlike existing designs, this promises a low-cost sensor with high fill factor and high photon sensitivity, potentially enabling increased spatial and temporal resolution in a number of imaging applications, including positron-emission tomography and light detection and ranging.

Advances in solid-state technology have enabled the development of silicon photomultiplier sensor arrays capable of sensing individual photons. Combined with high-frequency time-to-digital converters (TDCs), this technology opens up the prospect of sensors capable of recording with high accuracy both the time and location of each detected photon. Such a capability could lead to significant improvements in imaging accuracy, especially for applications operating with low photon fluxes such as light detection and ranging and positron-emission tomography. The demands placed on on-chip readout circuitry impose stringent trade-offs between fill factor and spatiotemporal resolution, causing many contemporary designs to severely underuse the technology's full potential. Concentrating on the low-photon-flux setting, this paper leverages results from group testing and proposes an architecture for a highly efficient readout of pixels using only a small number of TDCs. We provide optimized design instances for various sensor parameters and compute explicit upper and lower bounds on the number of TDCs required to uniquely decode a given maximum number of simultaneous photon arrivals. To illustrate the strength of the proposed architecture, we give a typical design that digitizes a 60 × 60 photodiode sensor using only 142 TDCs and guarantees registration and unique recovery of up to four simultaneous photon arrivals using a fast decoding algorithm. By contrast, a cross-strip design requires 120 TDCs and cannot uniquely decode any simultaneous photon arrivals. In realistic simulations of scintillation events in clinical positron-emission tomography, this design recovers the spatiotemporal location of 99.98% of all detected photons.
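A toy version of the group-testing idea: wire each pixel to a small subset of TDCs so that each pixel has a binary codeword, and decode a frame by checking which codewords are covered by the set of fired TDCs. The random wiring below is our stand-in, not the paper's optimized design; only the array and TDC counts are taken from the abstract.

```python
import numpy as np

def random_code(n_pixels, n_tdcs, weight, seed=0):
    """Wire each pixel to `weight` randomly chosen TDCs; row i is the
    binary codeword of pixel i."""
    rng = np.random.default_rng(seed)
    code = np.zeros((n_pixels, n_tdcs), dtype=bool)
    for i in range(n_pixels):
        code[i, rng.choice(n_tdcs, size=weight, replace=False)] = True
    return code

def cover_decode(code, fired):
    """Report pixel i as hit if every TDC it is wired to has fired.
    Exact for superimposed (d-disjunct) codes; a random code decodes
    correctly only with high probability."""
    return np.where((~code | fired[None, :]).all(axis=1))[0]

# Toy frame: 60x60 = 3600 pixels on 142 TDCs (counts from the abstract;
# the random wiring is ours, not the paper's optimized design).
code = random_code(3600, 142, weight=6)
fired = code[[125, 2410]].any(axis=0)   # union pattern of two simultaneous photons
print(cover_decode(code, fired))        # almost surely prints [125 2410]
```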


Physics in Medicine and Biology | 2016

Feasibility study of Compton cameras for x-ray fluorescence computed tomography with humans

Don Vernekohl; Moiz Ahmad; Garry Chinn; Lei Xing

X-ray fluorescence imaging is a promising imaging technique able to depict the spatial distributions of low amounts of molecular agents in vivo. Currently, the translation of the technique to preclinical and clinical applications is hindered by long scanning times, as objects are scanned with flux-limited narrow pencil beams. This study presents a novel imaging approach combining x-ray fluorescence computed tomography (XFCT) with Compton imaging: Compton cameras leverage the imaging performance of XFCT and abolish the need for pencil-beam excitation. The study examines the potential of this new imaging approach on the basis of Monte Carlo simulations. It first shows that slice/fan-beam x-ray excitation has advantages in image reconstruction, in both processing time and image quality, over traditional volumetric Compton imaging. In a second experiment, the feasibility of the approach for clinical applications with tracer agents made from gold nanoparticles is examined in a simulated lung scan scenario. The high energy of characteristic x-ray photons from gold is advantageous for deep tissue penetration and gives lower angular blurring in the Compton camera. It is found that Doppler broadening in the first detector stage of the Compton camera makes the largest contribution to the angular blurring, physically limiting the spatial resolution. Based on the spatial resolution test, resolutions on the order of one centimeter are achievable with the approach in the center of the lung. The concept of Compton imaging allows one to distinguish, to some extent, between scattered photons and x-ray fluorescence photons based on their difference in emission position. The results predict that molecular sensitivities down to 240 pM l⁻¹ for 5 mm diameter lesions at 15 mGy are achievable for 50 nm diameter gold nanoparticles. A 45-fold speed-up in data acquisition compared to traditional pencil-beam XFCT could be achieved for lung imaging, at the cost of a small decrease in sensitivity.
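The detector-energy-resolution contribution to the angular blurring can be estimated by propagating errors through the Compton relation; Doppler broadening, which the study finds dominant, adds on top of this. The photon energy and resolution values below are assumed for illustration (gold Kα is near 68 keV):

```python
import numpy as np

ME_C2 = 511.0  # electron rest energy, keV

def angular_blur_from_energy(e0, e_scat, sigma_e):
    """1-sigma angular blur (rad) of a Compton cone due to the scatter
    detector's energy resolution, by error propagation of
        cos(theta) = 1 - me*c^2 * (1/E' - 1/E0).
    Doppler broadening, which the study finds dominant, adds on top.
    """
    cos_t = 1.0 - ME_C2 * (1.0 / e_scat - 1.0 / e0)
    sin_t = np.sqrt(max(0.0, 1.0 - cos_t ** 2))
    return ME_C2 * sigma_e / (e_scat ** 2 * sin_t)

# Example (assumed values): ~68 keV gold K-alpha photon, 30-degree scatter,
# 1 keV (1-sigma) energy resolution in the first detector stage.
e0 = 68.0
e_scat = e0 / (1.0 + (e0 / ME_C2) * (1.0 - np.cos(np.radians(30.0))))  # Compton formula
print(np.degrees(angular_blur_from_energy(e0, e_scat, 1.0)))
```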


IEEE Nuclear Science Symposium | 2011

Fast and accurate 3D Compton cone projections on GPU using CUDA

Jingyu Cui; Garry Chinn; Craig S. Levin

We present a fast and accurate method for reconstructing single photons detected by a Compton camera using 3D cone projection operations formulated to run on a graphics processing unit (GPU) with the compute unified device architecture (CUDA) framework. With these projection operations, the image quality and accuracy of modalities such as positron emission tomography (PET) can be improved by incorporating Compton scatter events. We also use Monte Carlo simulation to produce a model of the blurring effects caused by the limited energy and spatial resolution of the detectors, improving the quality and accuracy of the reconstructed images. The blur model is incorporated into the cone projections on a cone-by-cone basis. Our method overcomes challenges such as compute-thread divergence and exploits GPU capabilities such as shared memory and texture memory; challenges unique to projecting cones rather than lines are also addressed for the GPU. The projection operations are integrated into a list-mode ordered-subsets expectation-maximization (OSEM) framework to reconstruct images from a Compton camera. The algorithm with the blurring model achieves an average 17.3% improvement in contrast-to-noise ratio (CNR) compared with images reconstructed without the blurring model. The whole reconstruction algorithm takes 2.2 seconds per iteration to process 50,000 cones in a 96×96×32 image on an NVIDIA GeForce GTX 480 GPU, including forward projection, backprojection, and the multiplicative update. On a single-core, state-of-the-art central processing unit (CPU), the same task with the same level of accuracy in blur modeling takes 3.1 hours. Images generated by the CPU and GPU implementations of the same blurring model are virtually identical, with a root-mean-square (RMS) deviation of 0.01%.
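The cone projection with a blur kernel can be sketched on the CPU as a vectorized weight over voxels. The Gaussian in cos(angle) below is our stand-in for the paper's Monte Carlo-derived blur model, and the function name is ours; the GPU version parallelizes this work per cone and per voxel.

```python
import numpy as np

def backproject_cone(image_shape, voxel_coords, apex, axis, cos_half_angle,
                     sigma=0.05):
    """Backproject one Compton cone into an image (vectorized CPU sketch).

    Each voxel gets a Gaussian weight in (cos(angle to axis) - cos(half
    angle)); the Gaussian is a stand-in for a Monte Carlo-derived blur
    kernel. voxel_coords is (N, 3) with N = prod(image_shape).
    """
    d = voxel_coords - apex[None, :]
    r = np.maximum(np.linalg.norm(d, axis=1), 1e-9)
    cos_a = (d @ axis) / r                    # cosine of each voxel's angle to the axis
    w = np.exp(-0.5 * ((cos_a - cos_half_angle) / sigma) ** 2)
    return w.reshape(image_shape)
```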
