Publication


Featured research published by Laurel Jeffers Orr.


IEEE International Conference on High Performance Computing, Data, and Analytics | 2012

An Irregular Approach to Large-Scale Computed Tomography on Multiple Graphics Processors Improves Voxel Processing Throughput

Edward Steven Jimenez; Laurel Jeffers Orr; Kyle R. Thompson

While much work has been done on applying GPU technology to computed tomography (CT) reconstruction algorithms, many of these implementations focus on smaller datasets better suited to medical applications. This paper proposes an irregular approach to the algorithm design that exploits the GPU hardware's unique cache structure and employs small prefetches of x-ray image data on the host, uploaded to the GPUs while the devices operate on large contiguous sub-volumes of the reconstruction. This approach improves overall cache hit rates and thus the performance of the GPU's massively multithreaded environment. Overall, small prefetches of x-ray image data improved the volumetric pixel (voxel) processing rate compared to large data prefetches, which would minimize data transfers and kernel launches. Additionally, the approach does not sacrifice performance on small datasets and is thus suitable for both medical and industrial applications. This work utilizes the CUDA programming environment and Nvidia's Tesla GPUs.
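
As a rough illustration of the prefetch-overlap idea described above, the following Python sketch double-buffers small batches of projection data through a bounded queue so that host-side loading overlaps the (stand-in) reconstruction work. All names, sizes, and the placeholder arithmetic are hypothetical; the paper's actual implementation is CUDA-based.

```python
import queue
import threading
import numpy as np

# Hypothetical sketch of the small-prefetch pipeline: a host thread
# loads small batches of x-ray projections into a bounded queue
# (double buffering) while the consumer, standing in for the GPU,
# accumulates them into a large contiguous sub-volume.

PREFETCH_CHUNK = 8            # small batch of projections per upload
NUM_PROJECTIONS = 64

def load_projection(i):
    # Stand-in for reading one x-ray image from disk.
    return np.random.rand(512, 512).astype(np.float32)

def prefetcher(out_q):
    for start in range(0, NUM_PROJECTIONS, PREFETCH_CHUNK):
        batch = [load_projection(i) for i in range(start, start + PREFETCH_CHUNK)]
        out_q.put(np.stack(batch))   # small transfer keeps caches warm
    out_q.put(None)                  # sentinel: no more data

def backproject(subvolume, batch):
    # Placeholder for the reconstruction kernel operating on the
    # sub-volume resident on the device.
    subvolume += batch.mean()
    return subvolume

q = queue.Queue(maxsize=2)           # at most two batches in flight
threading.Thread(target=prefetcher, args=(q,), daemon=True).start()

subvolume = np.zeros((256, 256, 256), dtype=np.float32)
while (batch := q.get()) is not None:
    subvolume = backproject(subvolume, batch)   # overlaps the next prefetch
```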


Nuclear Science Symposium and Medical Imaging Conference | 2014

Object composition identification via mediated-reality supplemented radiographs

Edward Steven Jimenez; Laurel Jeffers Orr; Kyle R. Thompson

This exploratory work investigates the feasibility of extracting linear attenuation functions with respect to energy from a multi-channel radiograph of a homogeneous object of interest. The approach simulates the entire imaging system, combines it with a digital phantom of the object, and leverages this information along with the acquired multi-channel image. This synergistic combination allows improved estimates not only of the attenuation at an effective energy but across the entire spectrum of energy incident on the detector elements. Material composition identification from radiographs would have wide applications in both medicine and industry. This work focuses on industrial radiography applications and analyses a range of materials that vary in attenuative properties. It shows that iterative solvers hold encouraging potential to fully solve for the linear attenuation profile of the object and material of interest when the imaging system is characterized with respect to the initial source x-ray energy spectrum, scan geometry, and an accurate digital phantom.
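
To make the iterative-solver idea concrete, here is a minimal Python sketch, with an assumed source spectrum, path length, and ground-truth profile, that recovers a per-energy attenuation profile from a simulated multi-channel measurement via bounded least squares. It illustrates the shape of the problem, not the paper's actual solver.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative only: recover a per-energy attenuation profile mu(E) for
# a homogeneous object from a simulated multi-channel radiograph, given
# a characterized system (source spectrum, path length from a phantom).

energies = np.linspace(20, 160, 15)             # keV channel centers (assumed)
spectrum = np.exp(-(energies - 60.0)**2 / 800)  # assumed source spectrum S(E)
path_len = 2.5                                  # cm through the object (assumed)

def forward(mu):
    # Beer-Lambert attenuation per energy channel.
    return spectrum * np.exp(-mu * path_len)

# Synthetic "measurement" from an assumed ground-truth profile.
mu_true = 0.5 * (energies / energies[0]) ** -1.5
measured = forward(mu_true)

# Iterative, constrained solve: least squares with mu(E) >= 0.
result = least_squares(lambda mu: forward(mu) - measured,
                       x0=np.full_like(energies, 0.1),
                       bounds=(0.0, np.inf))
print(np.max(np.abs(result.x - mu_true)))       # ~0: profile recovered
```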


Proceedings of SPIE | 2014

Exploring mediated reality to approximate x-ray attenuation coefficients from radiographs

Edward Steven Jimenez; Laurel Jeffers Orr; Megan Lea Morgan; Kyle R. Thompson

Estimation of the x-ray attenuation properties of an object with respect to the energy emitted from the source is a challenging task for traditional Bremsstrahlung sources. This exploratory work attempts to estimate the x-ray attenuation profile for the energy range of a given Bremsstrahlung profile. Previous work has shown that calculating a single effective attenuation value for a polychromatic source is not accurate due to the non-linearities associated with the image formation process. Instead, we completely characterize the imaging system virtually and utilize an iterative search method/constrained optimization technique to approximate the attenuation profile of the object of interest. This work presents preliminary results from various approaches that were investigated. The early results illustrate the challenges associated with these techniques and the potential for obtaining an accurate estimate of the attenuation profile for objects composed of homogeneous materials.
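
A tiny numeric demo (all values assumed) of the non-linearity mentioned above: for a polychromatic beam, a single "effective" attenuation value drifts with object thickness because the softer part of the spectrum is absorbed first (beam hardening), which is why a full attenuation profile is sought instead.

```python
import numpy as np

# Assumed three-bin spectrum; attenuation falls with energy, as is
# typical for x-ray energies in this range.
energies = np.array([40.0, 80.0, 120.0])   # keV
weights  = np.array([0.5, 0.3, 0.2])       # spectral weights (sum to 1)
mu       = np.array([0.90, 0.40, 0.25])    # 1/cm per bin

for L in (0.5, 2.0, 8.0):                  # object thickness in cm
    I = np.sum(weights * np.exp(-mu * L))  # polychromatic transmission
    mu_eff = -np.log(I) / L                # naive single-value estimate
    print(f"L = {L:4.1f} cm  ->  effective mu = {mu_eff:.3f} 1/cm")
# The "effective" mu shrinks as L grows: no single constant fits all
# thicknesses, hence the search for the full profile mu(E).
```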


Proceedings of SPIE | 2013

Rethinking the Union of Computed Tomography Reconstruction and GPGPU Computing

Edward Steven Jimenez; Laurel Jeffers Orr

This work will present the utilization of the massively multi-threaded environment of graphics processors (GPUs) to improve the computation time needed to reconstruct large computed tomography (CT) datasets and the arising challenges for system implementation. Intelligent algorithm design for massively multi-threaded graphics processors differs greatly from traditional CPU algorithm design. Although a brute-force port of a CPU algorithm to a GPU kernel may yield non-trivial performance gains, further measurable gains can be achieved by designing the algorithm with consideration given to the computing architecture. Previous work has shown that CT reconstruction on GPUs becomes an irregular problem for large datasets (10 GB-4 TB) [1]; thus memory bandwidth at the host and device levels becomes a significant bottleneck for industrial CT applications. We present a set of GPU reconstruction kernels that utilize various GPU-specific optimizations and measure their performance impact.
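
A back-of-envelope roofline estimate helps show why memory bandwidth, rather than arithmetic, dominates here. All figures below are assumed for illustration, not taken from the paper: backprojection performs only a handful of FLOPs per voxel update while touching several bytes, so its attainable rate is capped by bandwidth long before the compute peak.

```python
# All numbers assumed for illustration; none come from the paper.
flops_per_update = 10      # interpolate + accumulate per voxel update
bytes_per_update = 8       # float32 voxel read + write
intensity = flops_per_update / bytes_per_update   # FLOPs per byte

peak_flops = 4.0e12        # assumed device compute peak, FLOP/s
mem_bw     = 200.0e9       # assumed device memory bandwidth, B/s
ridge      = peak_flops / mem_bw   # FLOPs/byte at the roofline knee

attainable = min(peak_flops, intensity * mem_bw)
print(f"intensity {intensity:.2f} FLOP/B vs ridge {ridge:.1f} FLOP/B: "
      f"attainable {attainable / 1e9:.0f} GFLOP/s (bandwidth-bound)")
```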


Proceedings of SPIE | 2013

Preparing for the 100-megapixel detector: reconstructing a multi-terabyte computed-tomography dataset

Laurel Jeffers Orr; Edward Steven Jimenez

Although there has been progress in applying GPU technology to computed tomography (CT) reconstruction algorithms, much of the work has concentrated on optimizing reconstruction performance for smaller, medical-scale datasets. Industrial CT datasets can vary widely in size and number of projections. With new advancements in high-resolution cameras, it is entirely possible that the industrial CT community may soon need to pursue a 100-megapixel detector for CT applications. To reconstruct such a massive dataset, simply adding extra GPUs would not be an option, as memory and storage bottlenecks would result in prolonged periods of GPU downtime, negating performance gains. Additionally, current reconstruction algorithms would not be sufficient due to various bottlenecks in the processor hardware. Past work has shown that CT reconstruction is an irregular problem for large-scale datasets on a GPU due to the massively parallel environment. This work proposes a high-performance, multi-GPU, modularized approach to reconstruction in which computation, memory transfers, and disk I/O are optimized to occur in parallel while accommodating the irregular nature of the computation kernel. Our approach utilizes a dynamic MIMD-type architecture in a hybrid environment of CUDA and OpenMP. The modularized approach improved load balancing and performance such that a 1-trillion-voxel volume was reconstructed from 10,000 100-megapixel projections in less than a day.
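
Taking the figures quoted in the abstract at face value, a quick arithmetic check shows the sustained update rate such a reconstruction implies (in standard backprojection, each projection contributes to essentially every voxel in the field of view):

```python
voxels      = 1.0e12          # one trillion voxel volume
projections = 10_000          # 100-megapixel projections
seconds     = 24 * 3600       # "less than a day"

updates = voxels * projections            # total voxel-projection updates
print(f"{updates / seconds:.2e} voxel updates per second sustained")
# -> ~1.16e11 updates/s, sustained across GPUs, transfers, and disk I/O
```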


Very Large Data Bases | 2017

Probabilistic database summarization for interactive data exploration

Laurel Jeffers Orr; Magdalena Balazinska; Dan Suciu

We present a probabilistic approach to generating a small, queryable summary of a dataset for interactive data exploration. Departing from traditional summarization techniques, we use the Principle of Maximum Entropy to generate a probabilistic representation of the data that can be used to give approximate query answers. We develop the theoretical framework and formulation of our probabilistic representation and show how to use it to answer queries. We then present solving techniques and give three critical optimizations to improve preprocessing time and query accuracy. Lastly, we experimentally evaluate our work using a 5 GB dataset of flights within the United States and a 210 GB dataset from an astronomy particle simulation. While our current work only supports linear queries, we show that our technique can successfully answer queries faster than sampling while introducing, on average, no more error than sampling, and can better distinguish between rare and nonexistent values.
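
Not the paper's system, but a minimal sketch of the maximum-entropy idea: fit the maximum-entropy distribution consistent with a few stored statistics (here, 1-D marginals of a two-attribute table, via iterative proportional fitting) and answer linear COUNT-style queries from the model rather than the base data. Attribute names and counts are made up.

```python
import numpy as np

# Assumed summary statistics: 1-D marginal counts over two attributes.
row_marginal = np.array([30.0, 50.0, 20.0])   # e.g. counts per "origin"
col_marginal = np.array([60.0, 40.0])         # e.g. counts per "delayed"

# Iterative proportional fitting converges to the maximum-entropy joint
# distribution consistent with the stored marginals.
P = np.ones((3, 2))                            # start uniform (max entropy)
for _ in range(50):
    P *= (row_marginal / P.sum(axis=1))[:, None]
    P *= (col_marginal / P.sum(axis=0))[None, :]

# Approximate a linear query, COUNT(origin=1 AND delayed=0), from the
# model alone; the base data is never scanned.
print(P[1, 0])
```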


Proceedings of SPIE | 2014

Irregular large-scale computed tomography on multiple graphics processors improves energy-efficiency metrics for industrial applications

Edward Steven Jimenez; Eric Goodman; Ryeojin Park; Laurel Jeffers Orr; Kyle R. Thompson

This paper will investigate energy efficiency for various real-world industrial computed-tomography reconstruction algorithms, in both CPU- and GPU-based implementations. This work shows that the energy required for a given reconstruction depends on performance and problem size. There are many ways to describe performance and energy efficiency, so this work investigates multiple metrics, including performance-per-watt, energy-delay product, and energy consumption. This work found that irregular GPU-based approaches [1] realized tremendous savings in energy consumption when compared to CPU implementations, while also significantly improving the performance-per-watt and energy-delay product metrics. Additional energy savings and metric improvements were realized in the GPU-based reconstructions by improving storage I/O through a parallel MIMD-like modularization of the compute and I/O tasks.
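
For concreteness, here is how the three metrics named above relate, using made-up runtime, power, and volume figures rather than the paper's measurements:

```python
# Made-up figures for a CPU and a GPU reconstruction of the same volume.
def metrics(runtime_s, avg_power_w, voxels):
    energy = runtime_s * avg_power_w                          # joules
    return {
        "energy_J":      energy,
        "perf_per_watt": (voxels / runtime_s) / avg_power_w,  # voxels/s/W
        "energy_delay":  energy * runtime_s,                  # J*s (lower is better)
    }

cpu = metrics(runtime_s=36_000, avg_power_w=300, voxels=1e11)
gpu = metrics(runtime_s=1_800,  avg_power_w=600, voxels=1e11)
for key in cpu:
    print(f"{key:14s}  CPU {cpu[key]:.3e}   GPU {gpu[key]:.3e}")
```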


Nuclear Science Symposium and Medical Imaging Conference | 2014

Cluster-based approach to a multi-GPU CT reconstruction algorithm

Laurel Jeffers Orr; Edward Steven Jimenez; Kyle R. Thompson

Conventional CPU-based algorithms for computed tomography reconstruction lack the computational efficiency necessary to process large, industrial datasets in a reasonable amount of time. Specifically, a single-pass, trillion volumetric pixel (voxel) reconstruction requires months to complete on a high-performance CPU-based workstation. An optimized, single-workstation multi-GPU approach has shown performance increases of 2-3 orders of magnitude; however, reconstruction of future-size, trillion-voxel datasets can still take an entire day. This paper details an approach that further decreases runtime and allows for more diverse workstation environments by using a cluster of GPU-capable workstations. Due to the irregularity of the reconstruction tasks throughout the volume, using a cluster of multi-GPU nodes requires inventive topological structuring and data partitioning to avoid network bottlenecks and achieve optimal GPU utilization. This paper covers the cluster layout and non-linear weighting scheme used in this high-performance multi-GPU CT reconstruction algorithm and presents experimental results from reconstructing two large-scale datasets to evaluate the approach's performance and applicability to future-size datasets. Specifically, our approach yields up to a 20 percent improvement for large-scale data.
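
A minimal sketch of the cost-weighted partitioning idea: assuming (hypothetically) that per-slice reconstruction cost peaks near the volume's center, slices are split across nodes by equal cumulative weighted cost rather than equal slice counts. The weighting function below is illustrative, not the paper's.

```python
import numpy as np

n_slices, n_nodes = 1024, 4
z = np.linspace(-1.0, 1.0, n_slices)
cost = 1.0 + 2.0 * (1.0 - z**2)     # assumed per-slice cost, peaked mid-volume

# Cut the cumulative cost curve into n_nodes equal-cost spans.
targets = np.linspace(0.0, cost.sum(), n_nodes + 1)[1:-1]
bounds = np.searchsorted(np.cumsum(cost), targets)
for node, part in enumerate(np.split(np.arange(n_slices), bounds)):
    print(f"node {node}: slices {part[0]}-{part[-1]} "
          f"({cost[part].sum():.1f} weighted cost)")
```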


Nuclear Science Symposium and Medical Imaging Conference | 2013

A high-performance and energy-efficient CT reconstruction algorithm for multi-terabyte datasets

Edward Steven Jimenez; Laurel Jeffers Orr; Kyle R. Thompson; Ryeojin Park

There has been much work implementing various GPU-based computed tomography reconstruction algorithms for medical applications, showing tremendous improvement in computational performance. While many of these reconstruction algorithms could also be applied to industrial-scale datasets, the performance gains may be modest to non-existent due to a combination of algorithmic, hardware, or scalability limitations. Previous work presented an irregular, dynamic approach to GPU reconstruction kernel execution for industrial-scale reconstructions that dramatically improved voxel processing throughput. However, the improved kernel execution magnified other system bottlenecks, such as host memory bandwidth and storage read/write bandwidth, thus hindering performance gains. This paper presents a multi-GPU reconstruction algorithm capable of efficiently reconstructing large volumes (between 64 gigavoxels and 1 teravoxel) not only faster than traditional CPU- and GPU-based reconstruction algorithms but also while consuming significantly less energy. The reconstruction algorithm exploits the irregular kernel approach from previous work, a modularized MIMD-like environment, heterogeneous parallelism, and macro- and micro-scale dynamic task allocation. The result is a portable and flexible reconstruction algorithm capable of executing on a wide range of architectures, including mobile computers, workstations, supercomputers, and modestly-sized heterogeneous or homogeneous clusters with any number of graphics processors.
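
As a structural sketch of macro-scale dynamic task allocation (names and timings hypothetical): workers of unequal speed pull sub-volume tasks from a shared queue, so faster devices naturally absorb more work without any static partition.

```python
import queue
import threading
import time

tasks = queue.Queue()
for chunk_id in range(32):
    tasks.put(chunk_id)                   # one sub-volume task per entry

done = []
def worker(name, speed):
    while True:
        try:
            chunk = tasks.get_nowait()    # dynamic: pull work when free
        except queue.Empty:
            return
        time.sleep(0.01 / speed)          # stand-in for the kernel
        done.append((name, chunk))

speeds = [1.0, 2.0, 4.0]                  # heterogeneous device speeds
threads = [threading.Thread(target=worker, args=(f"gpu{i}", s))
           for i, s in enumerate(speeds)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print({f"gpu{i}": sum(1 for n, _ in done if n == f"gpu{i}")
       for i in range(len(speeds))})      # faster devices took more chunks
```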


Very Large Data Bases | 2015

Explaining query answers with explanation-ready databases

Sudeepa Roy; Laurel Jeffers Orr; Dan Suciu

Collaboration


Dive into Laurel Jeffers Orr's collaborations.

Top Co-Authors

Kyle R. Thompson (Sandia National Laboratories)
Dan Suciu (University of Washington)
Eric Goodman (Sandia National Laboratories)
Ismael Perez (Sandia National Laboratories)
Jennifer Ortiz (University of Washington)