Eric C. Olson
University of Chicago
Publications
Featured research published by Eric C. Olson.
Journal of Parallel and Distributed Computing | 2013
Nicholas T. Karonis; Kirk L. Duffin; Caesar E. Ordonez; B. Erdelyi; Thomas D. Uram; Eric C. Olson; G. Coutrakon; Michael E. Papka
Proton computed tomography (pCT) is an imaging modality that has been in development to support targeted dose delivery in proton therapy. It aims to accurately map the distribution of relative stopping power. Because protons traverse material media in non-linear paths, pCT requires individual proton processing. Image reconstruction then becomes a time-consuming process. Clinical-use scenarios that require images from billions of protons in less than ten or fifteen minutes have motivated us to use distributed and hardware-accelerated computing methods to achieve fast image reconstruction. Combined use of MPI and GPUs demonstrates that clinically viable image reconstruction is possible. On a 60-node CPU/GPU computer cluster, we achieved efficient strong and weak scaling when reconstructing images from two billion histories in under seven minutes. This represents a significant improvement over the previous state-of-the-art in pCT, which took almost seventy minutes to reconstruct an image from 131 million histories on a single-CPU, single-GPU computer.
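The abstract does not include code, but the distribution pattern it describes (partitioning proton histories across nodes and combining partial images) can be sketched roughly as below, assuming mpi4py and NumPy. The function process_histories_on_gpu is a hypothetical stand-in for the GPU reconstruction kernel, and the simple summation stands in for the actual (iterative) reconstruction step.

```python
# Minimal sketch of the MPI work-distribution pattern described above:
# each rank builds a partial image from its share of proton histories,
# and the partial images are combined on the root rank.
# Assumes mpi4py and NumPy; process_histories_on_gpu is a hypothetical
# placeholder for the GPU reconstruction kernel.
import numpy as np
from mpi4py import MPI


def process_histories_on_gpu(histories, shape):
    """Hypothetical GPU step: turn a block of proton histories into a
    partial relative-stopping-power image."""
    partial = np.zeros(shape, dtype=np.float32)
    # ... launch GPU kernels over `histories` here ...
    return partial


def reconstruct(all_histories, image_shape):
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Each rank takes an interleaved slice of the proton histories.
    my_histories = all_histories[rank::size]
    partial = process_histories_on_gpu(my_histories, image_shape)

    # Combine the per-rank partial images into the final image on rank 0.
    image = np.zeros(image_shape, dtype=np.float32) if rank == 0 else None
    comm.Reduce(partial, image, op=MPI.SUM, root=0)
    return image
```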
Journal of Physics: Conference Series | 2008
Mark Hereld; Eric C. Olson; Michael E. Papka; Thomas D. Uram
Connecting expensive and scarce visual data analysis resources to end-users is a major challenge today. We describe a flexible mechanism for meeting this challenge based on commodity compression technologies for streaming video. The advantages of this approach include simplified application development, access to generic client components for viewing, and simplified incorporation of improved codecs as they become available. In this paper we report newly acquired experimental results for two different applications being developed to exploit this approach and test its merits. One is based on a new plugin for ParaView that adds video streaming cleanly and transparently to existing applications. The other is a custom volume rendering application with new remote capabilities. Using typical datasets under realistic conditions, we find the performance for both is satisfactory.
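As a rough illustration of the streaming pattern (not the actual ParaView plugin or volume renderer from the paper), the server side can be pictured as a loop that renders a frame, compresses it with a commodity codec, and pushes it over a socket to a generic viewing client. In the sketch below, render_frame and encode_frame are hypothetical placeholders.

```python
# Sketch of a server-side remote-visualization loop: render, encode with
# a commodity video codec, and stream length-prefixed packets to a
# generic client. render_frame and encode_frame are hypothetical.
import socket
import struct


def stream_frames(host, port, render_frame, encode_frame):
    sock = socket.create_connection((host, port))
    try:
        while True:
            frame = render_frame()          # raw image from the renderer
            if frame is None:
                break                       # renderer has no more frames
            packet = encode_frame(frame)    # compressed bitstream (e.g. H.264)
            # Length-prefixed framing lets the client split the byte stream.
            sock.sendall(struct.pack("!I", len(packet)) + packet)
    finally:
        sock.close()
```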
ieee symposium on large data analysis and visualization | 2011
Mark Hereld; Joseph A. Insley; Eric C. Olson; Michael E. Papka; Venkatram Vishwanath; Michael L. Norman; Rick Wagner
Simulations running on the top supercomputers are routinely producing multi-terabyte data sets. Enabling scientists, at their home institutions, to analyze, visualize and interact with these data sets as they are produced is imperative to the scientific discovery process. We report on interactive visualizations of large simulations performed on Kraken at the National Institute for Computational Sciences using the parallel cosmology code Enzo, with grid sizes ranging from 1024^3 to 6400^3. In addition to the asynchronous rendering of over 570 timesteps of a 4096^3 simulation (150 TB in total), we developed the ability to stream the rendering result to multipanel display walls, with full interactive control of the renderer(s).
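Because each timestep can be rendered independently, the asynchronous rendering described above can be pictured as a pool of workers pulling timesteps off a job list and producing frames out of order. The sketch below uses a Python process pool; render_timestep and its output paths are hypothetical placeholders for the actual parallel renderer.

```python
# Sketch of asynchronous per-timestep rendering: each timestep is an
# independent job, so a worker pool can render them out of order while
# new data continues to arrive. render_timestep is a hypothetical
# placeholder for the real renderer invocation.
from concurrent.futures import ProcessPoolExecutor, as_completed


def render_timestep(step):
    """Hypothetical: render one timestep to an image and return its path."""
    out_path = f"frames/step_{step:05d}.png"
    # ... invoke the volume renderer for `step` and write out_path ...
    return out_path


def render_all(timesteps, workers=8):
    with ProcessPoolExecutor(max_workers=workers) as pool:
        futures = {pool.submit(render_timestep, s): s for s in timesteps}
        for fut in as_completed(futures):
            print("finished timestep", futures[fut], "->", fut.result())
```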
ieee international conference on high performance computing data and analytics | 2012
Kirk L. Duffin; Nicholas T. Karonis; Caesar E. Ordonez; Michael E. Papka; G. Coutrakon; B. Erdelyi; Eric C. Olson; Thomas D. Uram
Proton computed tomography (pCT) is an imaging modality being developed to support targeted dose delivery in proton therapy. It aims to accurately map the distribution of relative stopping power in the imaged body. Because protons traverse material in non-linear paths, pCT requires individual proton processing, and image reconstruction becomes a time-consuming process. We discuss transforming single-CPU/GPU image reconstruction implementations into a hybrid multi-CPU/GPU approach, and demonstrate a reduction in computation time from almost 7 hours down to 53 seconds.
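For a sense of scale, the reported reduction from roughly 7 hours to 53 seconds corresponds to a speedup of about 475x; the small calculation below simply restates that arithmetic using the numbers in the abstract.

```python
# Speedup implied by the numbers reported above (~7 hours -> 53 seconds).
baseline_s = 7 * 3600      # ~7 hours on a single-CPU/GPU implementation
hybrid_s = 53              # hybrid multi-CPU/GPU implementation
print(f"speedup ~ {baseline_s / hybrid_s:.0f}x")   # -> ~475x
```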
teragrid conference | 2011
Mark Hereld; Michael E. Papka; Joseph A. Insley; Michael L. Norman; Eric C. Olson; Rick Wagner
The top supercomputers typically have aggregate memories in excess of 100 TB, with simulations running on these systems producing datasets of comparable size. The size of these datasets and the speed with which they are produced define the minimum performance that modern analysis and visualization must achieve. We report on interactive visualizations of large simulations performed on Kraken at the National Institute for Computational Sciences using the parallel cosmology code Enzo, with grid sizes ranging from 1024^3 to 6400^3. In addition to the asynchronous rendering of over 570 timesteps of a 4096^3 simulation (150 TB in total), we developed the ability to stream the rendering result to multi-panel display walls, with full interactive control of the renderer(s).
Proceedings of the 2009 Workshop on Ultrascale Visualization | 2009
Mark Hereld; Joseph A. Insley; Eric C. Olson; Michael E. Papka; Thomas D. Uram; Venkatram Vishwanath
Increasingly massive datasets produced by simulations raise the question: How will we connect these data to the computational and display resources that support visualization and analysis? This question is driving research into new approaches to allocating computational, storage, and network resources. In this paper we explore potential solutions that couple system resources in new ways. Examples of what we mean by resource-coupled computations abound. For example, remote visualization is an activity that may couple data and large computation resources at the shared facility to client software and display hardware at the remote site. In situ analysis and visualization contemporaneously merges simulation and analysis onto the shared resource of the supercomputing platform. Co-analysis approaches seek to directly couple simulations running on a primary supercomputer to live analysis running on an optimized visualization and analysis platform over a high-performance network. Consequently, we are working on a systems approach to modeling the end-to-end activity of extracting understanding from computational models. In this paper we present our methods and results from experiments.
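One way to picture the co-analysis coupling described above is a simulation process that ships each timestep over a high-performance network to a separate analysis process as soon as it is produced, rather than only writing it to disk. The sketch below is purely illustrative, using plain sockets and NumPy; hostnames, ports, message format, and array shapes are all hypothetical.

```python
# Illustrative sketch of co-analysis coupling: the simulation side ships
# each freshly computed timestep to an analysis/visualization host.
# Hostnames, ports, and array shapes are hypothetical.
import socket
import struct
import numpy as np


def send_timestep(sock, step, field):
    """Send one timestep's field array, length-prefixed, to the analysis host."""
    payload = field.astype(np.float32).tobytes()
    header = struct.pack("!Iq", step, len(payload))
    sock.sendall(header + payload)


def simulation_loop(analysis_host="viz-cluster", port=9000, steps=10):
    sock = socket.create_connection((analysis_host, port))
    try:
        for step in range(steps):
            field = np.random.rand(64, 64, 64)   # stand-in for a real solver step
            send_timestep(sock, step, field)     # co-analysis: ship it immediately
    finally:
        sock.close()
```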
Studies in health technology and informatics | 2007
Jonathan C. Silverstein; Colin Walsh; Fred Dech; Eric C. Olson; Michael E. Papka; Nigel M. Parsad; Rick Stevens
Studies in health technology and informatics | 2008
Jonathan C. Silverstein; Colin Walsh; Fred Dech; Eric C. Olson; Nigel M. Parsad; Rick Stevens
Archive | 2012
Kirk L. Duffin; Nicholas T. Karonis; Caesar E. Ordonez; Michael E. Papka; G. Coutrakon; B. Erdelyi; Eric C. Olson; Thomas D. Uram
ieee international conference on high performance computing data and analytics | 2011
Joseph A. Insley; Rick Wagner; Robert Harkness; Daniel R. Reynolds; Michael L. Norman; Mark Hereld; Eric C. Olson; Michael E. Papka; Venkatram Vishwanath