
Publication


Featured research published by Daniel David Harrison.


Medical Physics | 2011

Pulse pileup statistics for energy discriminating photon counting x‐ray detectors

Adam S. Wang; Daniel David Harrison; Vladimir Lobastov; J. Eric Tkaczyk

PURPOSE: Energy discriminating photon counting x-ray detectors can be subject to a wide range of flux rates if applied in clinical settings. Even when the incident rate is a small fraction of the detector's maximum periodic rate N0, pulse pileup leads to count rate losses and spectral distortion. Although the deterministic effects can be corrected, the detrimental effect of pileup on image noise is not well understood and may limit the performance of photon counting systems. Therefore, the authors devise a method to determine the detector count statistics and imaging performance. METHODS: The detector count statistics are derived analytically for an idealized pileup model with delta pulses of a nonparalyzable detector. These statistics are then used to compute the performance (e.g., contrast-to-noise ratio) for both single-material and material decomposition contrast detection tasks via the Cramér-Rao lower bound (CRLB) as a function of the detector input count rate. With more realistic unipolar and bipolar pulse pileup models of a nonparalyzable detector, the imaging task performance is determined by Monte Carlo simulations and also approximated by a multinomial method based solely on the mean detected output spectrum. Photon counting performance at different count rates is compared with ideal energy integration, which is unaffected by count rate. RESULTS: The authors found that an ideal photon counting detector with perfect energy resolution outperforms energy integration for our contrast detection tasks, but when the input count rate exceeds 20% of N0, many of these benefits disappear. The benefit with iodine contrast falls rapidly with increased count rate, while water contrast is not as sensitive to count rate. The performance with a delta pulse model is overoptimistic when compared to the more realistic bipolar pulse model. The multinomial approximation predicts imaging performance very close to the prediction from Monte Carlo simulations. The monoenergetic image with maximum contrast-to-noise ratio from dual energy imaging with ideal photon counting is only slightly better than with dual kVp energy integration, and with a bipolar pulse model, energy integration outperforms photon counting for this particular metric because of the count rate losses. However, the material resolving capability of photon counting can be superior to energy integration with dual kVp even in the presence of pileup because of the energy information available to photon counting. CONCLUSIONS: A computationally efficient multinomial approximation of the count statistics that is based on the mean output spectrum can accurately predict imaging performance. This enables photon counting system designers to directly relate the effect of pileup to its impact on imaging statistics and to determine how best to take advantage of the benefits of energy discriminating photon counting detectors, such as material separation with spectral imaging.
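The count-rate loss of the nonparalyzable detector model underlying this abstract can be reproduced with a short Monte Carlo sketch. This is not the authors' analytic delta-pulse derivation; the input rate and dead time below are hypothetical round numbers chosen only to illustrate the model.

```python
import numpy as np

rng = np.random.default_rng(0)

def recorded_rate(true_rate, dead_time, t_total):
    """Monte Carlo output count rate of a nonparalyzable detector.

    Photon arrivals are Poisson with rate `true_rate`; after each recorded
    count the detector is dead for `dead_time`, and photons arriving inside
    that window are lost without extending it (nonparalyzable model).
    """
    n = rng.poisson(true_rate * t_total)
    arrivals = np.sort(rng.uniform(0.0, t_total, n))
    recorded, next_live = 0, 0.0
    for t in arrivals:
        if t >= next_live:
            recorded += 1
            next_live = t + dead_time
    return recorded / t_total

rate, tau = 2.0e6, 100e-9                  # 2 Mcps input, 100 ns dead time
sim = recorded_rate(rate, tau, t_total=0.2)
analytic = rate / (1.0 + rate * tau)       # classic nonparalyzable dead-time formula
print(sim, analytic)                       # simulated vs. analytic output rate
```

The simulated output rate should match the well-known dead-time expression m = n/(1 + n·τ) to within statistical error, which is the deterministic count-loss effect the paper then extends to full count statistics and imaging metrics.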


IEEE Nuclear Science Symposium | 2007

Inverse geometry CT: The next-generation CT architecture?

B. De Man; Samit Kumar Basu; Paul F. FitzGerald; Daniel David Harrison; Maria Iatrou; Kedar Bhalchandra Khare; James Walter Leblanc; Bob Senzig; Colin Richard Wilson; Zhye Yin; Norbert J. Pelc

We present a new system architecture for X-ray computed tomography (CT). A multi-source inverse-geometry CT scanner is composed of a large distributed X-ray source with an array of discrete electron emitters and focal spots, and a high frame-rate flat-panel X-ray detector. In this work we study the advantages and the challenges of this new architecture. We predict potential breakthroughs in volumetric coverage, dose efficiency, and spatial resolution. We also present experimental results obtained with a universal benchtop system.


IEEE Transactions on Information Theory | 1990

Analysis and further results on adaptive entropy-coded quantization

Daniel David Harrison; James W. Modestino

Buffer underflow and overflow problems associated with entropy coding are completely eliminated by effectively imposing reflecting walls at the buffer endpoints. Synchronous operation of the AECQ (adaptive entropy-coded quantizer) encoder and decoder is examined in detail, and it is shown that synchronous operation is easily achieved without side information. A method is developed to explicitly solve for the buffer-state probability distribution and the resulting average distortion when memoryless buffer-state feedback is used as well as when the source is stationary and memoryless. This method is then used as a tool in the design of low-distortion AECQ systems, with particular attention given to developing source scale-invariant distortion performance. It is shown that the introduction of reflecting buffer walls in a properly designed AECQ system results in a very small rate-distortion performance penalty and that the resulting AECQ system can be an extremely simple and effective solution to the stationary memoryless source-coding problem for a wide range of source types. Operation with nonstationary sources is also examined.
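The reflecting-wall idea can be illustrated with a toy buffer simulation: variable-length codewords fill the buffer, a fixed channel rate drains it, and clamping the state at the endpoints guarantees that underflow and overflow never occur. This is a generic sketch, not the AECQ design itself; the buffer size, channel rate, codeword lengths, and probabilities below are all hypothetical, and the adaptive quantizer feedback the paper analyzes is omitted.

```python
import random

random.seed(1)

B = 64                                   # buffer size in bits (hypothetical)
R = 3                                    # channel rate: bits drained per sample
lengths = [1, 2, 3, 4, 5, 6]             # entropy-code word lengths (hypothetical)
probs   = [0.4, 0.25, 0.15, 0.1, 0.06, 0.04]

state = B // 2                           # start half full
hist = [0] * (B + 1)
for _ in range(100_000):
    fill = random.choices(lengths, probs)[0]  # bits produced for this sample
    state = state + fill - R
    state = max(0, min(B, state))             # reflecting walls at 0 and B
    hist[state] += 1

# Empirical buffer-state distribution; no underflow or overflow can occur.
total = sum(hist)
p = [h / total for h in hist]
print(max(p))
```

The paper solves for this buffer-state distribution analytically; the random walk above only demonstrates that the reflecting walls keep the state in [0, B] by construction.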


Proceedings of SPIE | 1993

Tracking in a high-clutter environment: simulation results characterizing a bi-level MHT algorithm

David S. K. Chan; Daniel David Harrison; David Allen Langan

As detection processing becomes increasingly advanced, for example, in infrared search and track (IRST) systems, the detection threshold becomes the bottleneck to overall system performance. Significantly reducing this threshold requires the capability to track targets in a high clutter environment. In theory, the multiple hypothesis tracking (MHT) algorithm is a solution to this problem. However, in practice, MHT in its basic form becomes computationally prohibitive for all but low to moderate false alarm densities. In this paper, we evaluate a computationally feasible alternate form, which we call a bi-level MHT algorithm. The basic form of this algorithm has been previously proposed, but results on its performance have been lacking. In addition to describing an implementation of a bi-level MHT algorithm, this paper presents Monte Carlo simulation results characterizing the performance of the algorithm and demonstrates the tradeoff between track acquisition range and false track rate for a simple IRST fly-by scenario.
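The core branch-and-prune step that makes basic MHT expensive (and that the bi-level variant restructures) can be sketched generically. This is not the bi-level algorithm evaluated in the paper; the gate radius, distance-based scoring, missed-detection penalty, and beam width below are hypothetical simplifications.

```python
import heapq
import math

def gate(track_pos, detections, gate_radius):
    """Keep only detections inside the track's association gate."""
    return [d for d in detections if math.dist(track_pos, d) <= gate_radius]

def extend_hypotheses(hyps, detections, gate_radius, beam=3, miss_penalty=3.0):
    """One scan step: branch every hypothesis on each gated detection plus a
    missed-detection branch, then prune to the `beam` best-scoring hypotheses."""
    new = []
    for score, track in hyps:
        new.append((score - miss_penalty, track))          # missed-detection branch
        for d in gate(track[-1], detections, gate_radius):
            new.append((score - math.dist(track[-1], d), track + [d]))
    return heapq.nlargest(beam, new, key=lambda h: h[0])

hyps = [(0.0, [(0.0, 0.0)])]                # one track starting at the origin
scans = [[(1.0, 0.1), (5.0, 5.0)],          # detections per scan; (5, 5) and
         [(2.1, 0.0), (9.0, 9.0)]]          # (9, 9) are clutter outside the gate
for dets in scans:
    hyps = extend_hypotheses(hyps, dets, gate_radius=3.0)
print(hyps[0][1])                           # highest-scoring track after two scans
```

Without the pruning (`beam`), the hypothesis count grows combinatorially with the false alarm density, which is exactly the practical limitation the paper's bi-level structure addresses.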


Physics in Medicine and Biology | 2014

A multi-source inverse-geometry CT system: initial results with an 8 spot x-ray source array

Jongduk Baek; Bruno De Man; Jorge Uribe; Randy Scott Longtin; Daniel David Harrison; Joseph Reynolds; Bogdan Neculaes; Kristopher John Frutschy; Louis Paul Inzinna; Antonio Caiafa; Robert Senzig; Norbert J. Pelc

We present initial experimental results of a rotating-gantry multi-source inverse-geometry CT (MS-IGCT) system. The MS-IGCT system was built with a single module of 2 × 4 x-ray sources and a 2D detector array. It produced a 75 mm in-plane field-of-view (FOV) with 160 mm axial coverage in a single gantry rotation. To evaluate system performance, a 2.5 inch diameter uniform PMMA cylinder phantom, a 200 µm diameter tungsten wire, and a euthanized rat were scanned. Each scan acquired 125 views per source and the gantry rotation time was 1 s per revolution. Geometric calibration was performed using a bead phantom. The scanning parameters were 80 kVp, 125 mA, and a 5.4 µs pulse per source location per view. A data normalization technique was applied to the acquired projection data, and beam hardening and spectral nonlinearities of each detector channel were corrected. For image reconstruction, the projection data of each source row were rebinned into a full cone beam data set, and the FDK algorithm was used. The reconstructed volumes from the upper and lower source rows shared an overlap volume which was combined in image space. The images of the uniform PMMA cylinder phantom showed good uniformity and no apparent artifacts. The measured in-plane MTF showed 13 lp/cm at 10% cutoff, in good agreement with expectations. The rat data were also reconstructed reliably. The initial experimental results from this rotating-gantry MS-IGCT system demonstrated its ability to image a complex anatomical object without any significant image artifacts and to achieve high image resolution and large axial coverage in a single gantry rotation.
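A 10% MTF cutoff like the one quoted above is read off the Fourier transform of a measured point- or line-spread profile (here, from the tungsten wire). The procedure can be sketched as follows; the Gaussian line-spread function and its width are hypothetical stand-ins, not the paper's wire measurement.

```python
import numpy as np

# Hypothetical detector line-spread function: Gaussian with sigma = 0.25 mm,
# sampled on a 0.05 mm pitch (both values assumed for illustration).
pixel = 0.05                               # sampling pitch in mm
x = np.arange(-256, 256) * pixel
sigma = 0.25
lsf = np.exp(-x**2 / (2 * sigma**2))

# The MTF is the magnitude of the Fourier transform of the LSF,
# normalized to 1 at zero frequency.
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]
freqs = np.fft.rfftfreq(x.size, d=pixel)   # spatial frequency in cycles/mm

# First frequency where the MTF drops below 10%, converted from lp/mm to lp/cm.
cutoff = freqs[np.argmax(mtf < 0.10)]
print(round(cutoff * 10, 1), "lp/cm at 10% MTF")
```

For a Gaussian LSF the cutoff can be checked analytically (MTF = exp(-2·pi²·sigma²·f²)), which is a useful sanity check before applying the same FFT recipe to a real wire profile.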


IEEE Nuclear Science Symposium | 2009

Multi-source inverse-geometry CT: From system concept to research prototype

Bruno De Man; Antonio Caiafa; Yang Cao; Kristopher John Frutschy; Daniel David Harrison; Lou Inzinna; Randy Scott Longtin; Bogdan Neculaes; Joseph Reynolds; Jaydeep Roy; Jonathan David Short; Jorge Uribe; William Waters; Zhye Yin; Xi Zhang; Yun Zou; Bob Senzig; Jongduk Baek; Norbert J. Pelc

Third-generation CT architectures are approaching fundamental limits. Dose-efficiency is limited by finite detector efficiency and by limited control over the X-ray flux spatial profile. Increasing the volumetric coverage comes with increased scattered radiation, cone-beam artifacts, Heel effect, wasted dose and cost. Spatial resolution is limited by focal spot size and detector cell size. Temporal resolution is limited by mechanical constraints, and alternative geometries such as electron-beam CT and dual-source CT come with severe tradeoffs in terms of image quality, dose-efficiency and complexity. The concept of multi-source inverse-geometry CT (IGCT) breaks through several of the above limitations [1-3], promising a low-dose high image quality volumetric CT architecture. In this paper, we present recent progress with the design and integration efforts of the first gantry-based multi-source CT scanner.


Nuclear Science Symposium and Medical Imaging Conference | 2010

Multisource inverse-geometry CT — Prototype system integration

Jorge Uribe; Joseph Reynolds; Louis Paul Inzinna; Randy Scott Longtin; Daniel David Harrison; Bruno De Man; Bogdan Neculaes; Antonio Caiafa; William Waters; Kristopher John Frutschy; Robert Senzig; Jongduk Baek; Norbert J. Pelc

Today's 3rd-generation CT scanners have one or two X-ray tubes, with one focal spot or “source” per vacuum chamber or “tube”. Our first multi-source inverse-geometry CT prototype has eight X-ray sources. We have demonstrated multisource imaging with an 8-spot X-ray tube on a stationary gantry and a rotating phantom. We present an update on the development of the gantry-based multi-source CT scanner: we combine the multi-source X-ray tube and gantry rotation, producing the first multi-source gantry-based CT scanner prototype. Currently the system is being upgraded to 32 X-ray sources to provide a larger field-of-view and to demonstrate the concept of a virtual bowtie.


Medical Imaging 2007: Physics of Medical Imaging | 2007

Atomic number resolution for three spectral CT imaging systems

J. Eric Tkaczyk; Rogerio Rodrigues; Jeffery Shaw; Jonathan Short; Yanfeng Du; Xiaoye Wu; Deborah Walter; William Macomber Leue; Daniel David Harrison; Peter Michael Edic

The material specificity of computed tomography is quantified using an experimental benchtop imaging system and a physics-based system model. The apparatus is operated with different detector and system configurations, each giving X-ray energy spectral information but with different overlap among the energy-bin weightings and noise statistics. Multislice computed tomography sinograms are acquired using dual kVp, sequential source filters, or a detector with two scintillator/photodiode layers. Basis-material and atomic number images are created by first applying a material decomposition algorithm followed by filtered backprojection. CT imaging of phantom materials with known elemental composition and density was used for model validation. X-ray scatter levels are measured with a beam-blocking technique and the impact on material accuracy is quantified. The image noise is related to the intensity and spectral characteristics of the X-ray source. For optimal energy separation, adequate image noise is required, so the system must be optimized to deliver the appropriate high mA at both energies. The dual kVp method offers the opportunity to separately engineer the photon flux at low and high kVp. As a result, an optimized system can achieve superior material specificity in a system with limited acquisition time or dose. In contrast, the dual-layer and sequential acquisition modes rely on a material absorption mechanism that yields weaker energy separation and lower overall performance.
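The basis-material decomposition step can be illustrated in its simplest linearized form: with two spectral measurements per ray, the two basis-material line integrals follow from a 2×2 solve. The attenuation coefficients below are hypothetical effective values chosen for illustration; a real implementation must also handle the polychromatic spectra and beam hardening the paper corrects for.

```python
import numpy as np

# Hypothetical effective mass-attenuation coefficients (cm^2/g) of two basis
# materials (water, iodine) at the low- and high-kVp effective energies.
A = np.array([[0.25, 5.0],     # [mu_water_low,  mu_iodine_low ]
              [0.18, 2.0]])    # [mu_water_high, mu_iodine_high]

# Simulated log-attenuation measurements for a ray crossing 10 g/cm^2 of
# water-equivalent material and 0.02 g/cm^2 of iodine.
true_t = np.array([10.0, 0.02])
L = A @ true_t                 # log(I0/I) at low and high kVp (linearized model)

# Material decomposition: invert the 2x2 system for each measurement pair.
t_hat = np.linalg.solve(A, L)
print(t_hat)                   # recovered basis-material line integrals
```

Atomic-number images then follow from the ratio of the recovered basis-material densities; the quality of that step depends on how well separated the two spectral weightings are, which is the comparison the paper makes across the three acquisition modes.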


Proceedings of SPIE | 2012

Initial results with a multisource inverse-geometry CT system

Jongduk Baek; Norbert J. Pelc; Bruno Kristiaan Bernard DeMan; Jorge Uribe; Daniel David Harrison; Joseph Reynolds; Bogdan Neculaes; Louis Paul Inzinna; Antonio Caiafa

The multi-source inverse-geometry CT (MS-IGCT) system is composed of multiple sources and a small 2D detector array. An experimental MS-IGCT system was built and we report initial results with 2×4 x-ray sources, a 75 mm in-plane field-of-view (FOV) and 160 mm z-coverage in a single gantry rotation. To evaluate the system performance, experimental data were acquired from several phantoms and a post-mortem rat. Before image reconstruction, geometric calibration, data normalization, beam hardening correction and detector spectral calibration were performed. For reconstruction, the projection data were rebinned into two full cone beam data sets, and the FDK algorithm was used. The reconstructed volumes from the upper and lower source rows shared an overlap volume which was combined in image space. The reconstructed images of the uniform cylinder phantom showed good uniformity of the reconstructed values without any artifacts. The rat data were also reconstructed reliably. The initial experimental results from this rotating-gantry MS-IGCT system demonstrated its ability to image a complex anatomical object without any significant image artifacts and to ultimately achieve large volumetric coverage in a single gantry rotation.


Optics Express | 2015

Raw data normalization for a multi source inverse geometry CT system

Jongduk Baek; Bruno De Man; Daniel David Harrison; Norbert J. Pelc

A multi-source inverse-geometry CT (MS-IGCT) system consists of a small 2D detector array and multiple x-ray sources. During data acquisition, each source is activated sequentially and may have random intensity fluctuations relative to its nominal intensity. While a conventional 3rd-generation CT system uses a reference channel to monitor source intensity fluctuation, each MS-IGCT source illuminates only a small portion of the entire field-of-view (FOV). Therefore, it is difficult for all sources to illuminate the reference channel, and projection data computed by standard normalization using the flat-field data of each source contain errors that can cause significant artifacts. In this work, we present a raw data normalization algorithm to reduce the image artifacts caused by source intensity fluctuation. The proposed method was tested using computer simulations with a uniform water phantom and a Shepp-Logan phantom, and experimental data of an ice-filled PMMA phantom and a rabbit. The effect on image resolution and the robustness to noise were tested using the MTF and the standard deviation of the reconstructed noise image. With intensity fluctuation and no correction, reconstructed images from simulation and experimental data show high-frequency artifacts and ring artifacts, which are removed effectively by the proposed method. It is also observed that the proposed method does not degrade image resolution and is very robust to the presence of noise.
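The gain problem the abstract describes, and a reference-based fix, can be sketched in a toy form. Note this is not the proposed algorithm: the paper's point is that MS-IGCT sources generally cannot all see a reference channel, so the authors estimate the fluctuation differently; here, purely for illustration, a few unattenuated edge channels are assumed available, and all geometry and values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

views, channels = 8, 64
flat = np.full(channels, 1000.0)           # flat-field (air) counts per channel

# A simple object attenuates only the central channels; edge channels see air.
obj = np.ones(channels)
obj[16:48] = 0.3
ideal = flat * obj                         # noiseless detected counts

# Each view fires one source whose intensity fluctuates by a few percent.
gains = 1.0 + 0.03 * rng.standard_normal(views)
raw = gains[:, None] * ideal[None, :]

# Standard flat-field normalization leaves the per-view gain error in the data:
line_integrals_bad = -np.log(raw / flat)

# Toy correction: estimate each view's gain from channels known to see air,
# then fold that estimate into the normalization.
air = np.r_[0:8, 56:64]
est = raw[:, air].mean(axis=1) / flat[air].mean()
line_integrals = -np.log(raw / (est[:, None] * flat))

print(np.abs(line_integrals - (-np.log(obj))).max())   # residual error
```

In this noiseless toy the per-view gains are recovered exactly, so the corrected line integrals match the true object; the paper's contribution is achieving a comparable correction without dedicated reference channels, and verifying resolution and noise robustness experimentally.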
