Greg Humphreys
University of Virginia
Publications
Featured research published by Greg Humphreys.
international conference on computer graphics and interactive techniques | 2002
Greg Humphreys; Mike Houston; Ren Ng; Randall J. Frank; Sean Ahern; P. D. Kirchner; James T. Klosowski
We describe Chromium, a system for manipulating streams of graphics API commands on clusters of workstations. Chromium's stream filters can be arranged to create sort-first and sort-last parallel graphics architectures that, in many cases, support the same applications while using only commodity graphics accelerators. In addition, these stream filters can be extended programmatically, allowing the user to customize the stream transformations performed by nodes in a cluster. Because our stream processing mechanism is completely general, any cluster-parallel rendering algorithm can be either implemented on top of or embedded in Chromium. In this paper, we give examples of real-world applications that use Chromium to achieve good scalability on clusters of workstations, and describe other potential uses of this stream processing technology. By completely abstracting the underlying graphics architecture, network topology, and API command processing semantics, we allow a variety of applications to run in different environments.
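To make the stream-filter idea concrete, here is a minimal C++ sketch, assuming a chain of in-process filter objects. This is not Chromium's actual SPU interface (its real filters are loaded as shared libraries); the Command and StreamFilter types are hypothetical illustrations of a filter that transforms or forwards API commands downstream.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Hypothetical representation of one graphics API command in the stream.
struct Command {
    std::string name;            // e.g. "glVertex3f"
    std::vector<float> args;
};

// A stream filter consumes commands and (optionally) forwards them downstream.
class StreamFilter {
public:
    explicit StreamFilter(StreamFilter* next = nullptr) : next_(next) {}
    virtual ~StreamFilter() = default;
    virtual void dispatch(const Command& cmd) {
        if (next_) next_->dispatch(cmd);   // default: pass through unchanged
    }
protected:
    StreamFilter* next_;
};

// Example user-supplied transformation: log every command before forwarding.
class LoggingFilter : public StreamFilter {
public:
    using StreamFilter::StreamFilter;
    void dispatch(const Command& cmd) override {
        std::cout << "cmd: " << cmd.name << "\n";
        StreamFilter::dispatch(cmd);
    }
};

int main() {
    StreamFilter sink;                 // terminal node (e.g. a renderer)
    LoggingFilter logger(&sink);       // user-defined stream transformation
    logger.dispatch({"glClear", {}});
    logger.dispatch({"glVertex3f", {0.f, 1.f, 0.f}});
}
```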
international conference on computer graphics and interactive techniques | 2001
Greg Humphreys; Matthew Eldridge; Ian Buck; Gordon Stoll; Matthew Everett; Pat Hanrahan
We describe WireGL, a system for scalable interactive rendering on a cluster of workstations. WireGL provides the familiar OpenGL API to each node in a cluster, virtualizing multiple graphics accelerators into a sort-first parallel renderer with a parallel interface. We also describe techniques for reassembling an output image from a set of tiles distributed over a cluster. Using flexible display management, WireGL can drive a variety of output devices, from standalone displays to tiled display walls. By combining the power of virtual graphics, the familiarity and ordered semantics of OpenGL, and the scalability of clusters, we are able to create time-varying visualizations that sustain rendering performance over 70,000,000 triangles per second at interactive refresh rates using 16 compute nodes and 16 rendering nodes.
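The core of any sort-first renderer is deciding which tiles (and hence which rendering servers) each primitive's screen-space bounding box touches. The sketch below shows that overlap test under simple assumptions: a W x H display split into uniform tiles, row-major tile indexing, and an exclusive upper bound on the box. The helper name is illustrative, not WireGL's API.

```cpp
#include <vector>

// Hypothetical screen-space bounding box in pixels; x1/y1 are exclusive.
struct Bounds { int x0, y0, x1, y1; };

// Compute which tiles of a W x H display, partitioned into tileW x tileH
// tiles, a primitive's bounding box overlaps. A sort-first renderer then
// sends the primitive only to the servers owning those tiles.
std::vector<int> overlappedTiles(const Bounds& b, int W, int H,
                                 int tileW, int tileH) {
    int cols = (W + tileW - 1) / tileW;
    std::vector<int> tiles;
    for (int ty = b.y0 / tileH; ty <= (b.y1 - 1) / tileH; ++ty)
        for (int tx = b.x0 / tileW; tx <= (b.x1 - 1) / tileW; ++tx)
            tiles.push_back(ty * cols + tx);  // row-major tile index
    return tiles;
}

int main() {
    // A box spanning two 256x256 tiles of a 1024x1024 display.
    auto t = overlappedTiles({200, 10, 300, 100}, 1024, 1024, 256, 256);
    return (int)t.size();                     // expect 2
}
```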
international conference on computer graphics and interactive techniques | 2006
Daniel Dunbar; Greg Humphreys
Sampling distributions with blue noise characteristics are widely used in computer graphics. Although Poisson-disk distributions are known to have excellent blue noise characteristics, they are generally regarded as too computationally expensive to generate in real time. We present a new method for sampling by dart-throwing in O(N log N) time and introduce a novel and efficient variation for generating Poisson-disk distributions in O(N) time and space.
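For contrast with the paper's method, here is the standard grid-accelerated rejection test for dart throwing, not the scalloped-sector structure the authors introduce. It illustrates the role a spatial data structure plays: with cell size r/sqrt(2), each cell holds at most one accepted sample, so each dart's conflict check touches a constant number of cells.

```cpp
#include <cmath>
#include <cstdlib>
#include <utility>
#include <vector>

// Grid-accelerated Poisson-disk rejection test over the unit square.
struct Grid {
    float r, cell; int n;
    std::vector<int> cells;                        // index into pts, or -1
    std::vector<std::pair<float, float>> pts;
    explicit Grid(float radius)
        : r(radius), cell(radius / std::sqrt(2.f)),
          n((int)std::ceil(1.f / cell)), cells(n * n, -1) {}
    bool tryInsert(float x, float y) {
        int cx = (int)(x / cell), cy = (int)(y / cell);
        for (int j = cy - 2; j <= cy + 2; ++j)     // samples within r lie
            for (int i = cx - 2; i <= cx + 2; ++i) // at most 2 cells away
            {
                if (i < 0 || j < 0 || i >= n || j >= n) continue;
                int k = cells[j * n + i];
                if (k >= 0) {
                    float dx = pts[k].first - x, dy = pts[k].second - y;
                    if (dx * dx + dy * dy < r * r) return false; // too close
                }
            }
        cells[cy * n + cx] = (int)pts.size();
        pts.push_back({x, y});
        return true;
    }
};

int main() {
    Grid g(0.05f);                    // minimum distance between samples
    for (int i = 0; i < 100000; ++i)  // throw darts; each check is O(1)
        g.tryInsert((float)rand() / RAND_MAX, (float)rand() / RAND_MAX);
    return (int)g.pts.size();
}
```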
international conference on computer graphics and interactive techniques | 2008
Toshiya Hachisuka; Wojciech Jarosz; Richard Peter Weistroffer; Kevin Dale; Greg Humphreys; Matthias Zwicker; Henrik Wann Jensen
We present a new adaptive sampling strategy for ray tracing. Our technique is specifically designed to handle multidimensional sample domains, and it is well suited for efficiently generating images with effects such as soft shadows, motion blur, and depth of field. These effects are problematic for existing image-based adaptive sampling techniques as they operate on pixels, which are possibly noisy results of a Monte Carlo ray tracing process. Our sampling technique operates on samples in the multidimensional space given by the rendering equation and as a consequence the value of each sample is noise-free. Our algorithm consists of two passes. In the first pass we adaptively generate samples in the multidimensional space, focusing on regions where the local contrast between samples is high. In the second pass we reconstruct the image by integrating the multidimensional function along all but the image dimensions. We perform a high quality anisotropic reconstruction by determining the extent of each sample in the multidimensional space using a structure tensor. We demonstrate our method on scenes with 3- to 5-dimensional sample domains, including soft shadows, motion blur, and depth of field. The results show that our method uses fewer samples than Mitchell's adaptive sampling technique while producing images with less noise.
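A toy 1D version of the first pass's refinement rule may help: split the interval whose endpoint contrast is largest, so samples concentrate near discontinuities. The paper operates in the full multidimensional domain of the rendering equation with anisotropic reconstruction; this sketch only shows the contrast-driven sample placement, with a made-up integrand f.

```cpp
#include <cmath>
#include <cstdio>
#include <queue>

// Made-up integrand with a discontinuity at x = 0.5.
static double f(double x) { return x < 0.5 ? 0.0 : std::sin(20.0 * x); }

struct Interval {
    double a, b, fa, fb;
    double contrast() const { return std::fabs(fa - fb) * (b - a); }
    bool operator<(const Interval& o) const { return contrast() < o.contrast(); }
};

int main() {
    std::priority_queue<Interval> q;
    q.push({0.0, 1.0, f(0.0), f(1.0)});
    for (int budget = 0; budget < 64; ++budget) {   // fixed sample budget
        Interval iv = q.top(); q.pop();             // highest-contrast region
        double m = 0.5 * (iv.a + iv.b), fm = f(m);  // new sample at midpoint
        q.push({iv.a, m, iv.fa, fm});
        q.push({m, iv.b, fm, iv.fb});
    }
    std::printf("placed %zu intervals\n", q.size());
}
```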
IEEE Computer | 2007
David Luebke; Greg Humphreys
GPUs have moved away from the traditional fixed-function 3D graphics pipeline toward a flexible general-purpose computational engine. Today, GPUs can implement many parallel algorithms directly using graphics hardware. Well-suited algorithms that leverage all the underlying computational horsepower often achieve tremendous speedups. Truly, the GPU is the first widely deployed commodity desktop parallel computer.
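The GPGPU programming model the article surveys boils down to running one small, independent computation per data element. A CPU-side sketch of SAXPY, a classic introductory example in this literature, shows the shape: on a GPU the loop body would become a kernel executed by thousands of threads.

```cpp
#include <cstdio>
#include <vector>

int main() {
    std::vector<float> x(1 << 20, 1.0f), y(1 << 20, 2.0f);
    const float a = 3.0f;
    for (size_t i = 0; i < x.size(); ++i)   // each iteration is independent,
        y[i] = a * x[i] + y[i];             // so all can run in parallel
    std::printf("y[0] = %f\n", y[0]);       // 3*1 + 2 = 5
}
```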
conference on high performance computing (supercomputing) | 2000
Greg Humphreys; Ian Buck; Matthew Eldridge; Pat Hanrahan
We describe a novel distributed graphics system that allows an application to render to a large tiled display. Our system, called WireGL, uses a cluster of off-the-shelf PCs connected with a high-speed network. WireGL allows an unmodified existing application to achieve scalable output resolution on such a display. This paper presents an efficient sorting algorithm that minimizes network traffic for a scalable display. We demonstrate that for most applications, our system provides scalable output resolution with minimal performance impact.
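A hedged sketch of the traffic-minimizing idea: commands are buffered into one bucket per rendering server, and a bucket costs network bandwidth only if some primitive actually overlapped that server's tiles (the overlap test itself is sketched after the WireGL abstract above). The types and protocol here are illustrative, not WireGL's actual wire format.

```cpp
#include <cstdint>
#include <vector>

struct Bucket { std::vector<uint8_t> bytes; };

class Sorter {
public:
    explicit Sorter(int numServers) : buckets_(numServers) {}
    // 'servers' comes from the tile-overlap test for this primitive.
    void encode(const std::vector<uint8_t>& cmd, const std::vector<int>& servers) {
        for (int s : servers)
            buckets_[s].bytes.insert(buckets_[s].bytes.end(),
                                     cmd.begin(), cmd.end());
    }
    size_t flush() {                      // stand-in for the network send
        size_t sent = 0;
        for (auto& b : buckets_) {
            sent += b.bytes.size();       // only non-empty buckets cost traffic
            b.bytes.clear();
        }
        return sent;
    }
private:
    std::vector<Bucket> buckets_;
};

int main() {
    Sorter sorter(16);
    sorter.encode({1, 2, 3, 4}, {0, 5});  // this primitive hits servers 0 and 5
    return (int)sorter.flush();           // 8 bytes total, not 4 x 16
}
```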
international symposium on computer architecture | 2007
Kristen R. Walcott; Greg Humphreys; Sudhanva Gurumurthi
Transient faults due to particle strikes are a key challenge in microprocessor design. Driven by exponentially increasing transistor counts, per-chip faults are a growing burden. To protect against soft errors, redundancy techniques such as redundant multithreading (RMT) are often used. However, these techniques assume that the probability that a structural fault will result in a soft error (i.e., the Architectural Vulnerability Factor (AVF)) is 100 percent, unnecessarily draining processor resources. Due to the high cost of redundancy, there have been efforts to throttle RMT at runtime. To date, these methods have not incorporated an AVF model and therefore tend to be ad hoc. Unfortunately, computing the AVF of complex microprocessor structures (e.g., the issue queue) can be quite involved. To provide probabilistic guarantees about fault tolerance, we have created a rigorous characterization of AVF behavior that can be easily implemented in hardware. We experimentally demonstrate AVF variability within and across the SPEC2000 benchmarks and identify strong correlations between structural AVF values and a small set of processor metrics. Using these simple indicators as predictors, we create a proof-of-concept RMT implementation that demonstrates that AVF prediction can be used to maintain a low fault tolerance level without significant performance impact.
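The prediction idea can be sketched as follows: estimate a structure's AVF from a few cheap runtime metrics with an offline-trained model, and enable redundant execution only when the estimate crosses a reliability target. The metrics, weights, and threshold below are placeholders, not the paper's fitted coefficients.

```cpp
#include <cstdio>

// Hypothetical runtime indicators correlated with AVF.
struct Metrics { double occupancy, ipc, branchMiss; };

// Placeholder linear model standing in for an offline-trained predictor.
double predictAVF(const Metrics& m) {
    return 0.05 + 0.6 * m.occupancy - 0.1 * m.ipc + 0.2 * m.branchMiss;
}

int main() {
    const double target = 0.20;               // tolerate estimated AVF <= 20%
    Metrics now{0.45, 1.8, 0.03};             // sampled each control interval
    bool enableRMT = predictAVF(now) > target; // throttle redundancy otherwise
    std::printf("RMT %s\n", enableRMT ? "on" : "off");
}
```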
international conference on computer graphics and interactive techniques | 2000
Ian Buck; Greg Humphreys; Pat Hanrahan
As networks get faster, it becomes more feasible to render large data sets remotely. For example, it is useful to run large scientific simulations on remote compute servers but visualize the results of those simulations on one or more local displays. The WireGL project at Stanford is researching new techniques for rendering over a network. For many applications, we can render remotely over a gigabit network to a tiled display with little or no performance loss over running locally. One of the elements of WireGL that makes this performance possible is our ability to track the graphics state of a running application. In this paper, we will describe our techniques for tracking state, as well as efficient algorithms for computing the difference between two graphics contexts. This fast differencing operation allows WireGL to transmit less state data over the network by updating server state lazily. It also allows our system to context switch between multiple graphics applications several million times per second without flushing the hardware accelerator. This results in substantial performance gains when sharing a remote display between multiple clients.
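A flat-state sketch of the differencing operation described above, under simplifying assumptions: each context is a fixed-size vector of state values, and a context switch transmits only the entries where the two contexts disagree. The real system tracks OpenGL state hierarchically with dirty bits, which is what makes the diff so fast; this version just shows the operation itself.

```cpp
#include <array>
#include <cstdio>

constexpr int kStateSize = 8;                 // toy state vector
using Context = std::array<int, kStateSize>;

// Returns the number of state updates that actually had to be sent.
int switchContext(const Context& current, const Context& target) {
    int sent = 0;
    for (int i = 0; i < kStateSize; ++i)
        if (current[i] != target[i]) {
            // send(i, target[i]);            // would go over the network
            ++sent;
        }
    return sent;
}

int main() {
    Context a{1, 2, 3, 4, 5, 6, 7, 8};
    Context b{1, 2, 9, 4, 5, 6, 7, 8};        // differs in one entry
    std::printf("updates sent: %d\n", switchContext(a, b)); // 1, not 8
}
```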
ACM Journal on Computing and Cultural Heritage | 2009
David Koller; Bernard Frischer; Greg Humphreys
The increasing creation of 3D cultural heritage models has resulted in a need for the establishment of centralized digital archives. We advocate open repositories of scientifically authenticated 3D models based on the example of traditional scholarly journals, with standard mechanisms for preservation, peer review, publication, updating, and dissemination of the 3D models. However, fully realizing this vision will require addressing a number of related research challenges. In this article, we first give a brief background of the virtual heritage discipline, and characterize the need for centralized 3D archives, including a preliminary needs assessment survey of virtual heritage practitioners. Then we describe several existing 3D cultural heritage repositories, and enumerate a number of technical research challenges that should be addressed to realize an ideal archive. These challenges include digital rights management for the 3D models, clear depiction of uncertainty in 3D reconstructions, version control for 3D models, effective metadata structures, long-term preservation, interoperability, and 3D searching. Other concerns are provision for the application of computational analysis tools, and the organizational structure of a peer-reviewed 3D model archive.
ieee visualization | 2004
Nate Hoobler; Greg Humphreys; Maneesh Agrawala
We present a system for enhancing observation of user interactions in virtual environments. In particular, we focus on analyzing behavior patterns in the popular team-based first-person perspective game Return to Castle Wolfenstein: Enemy Territory. This game belongs to a genre characterized by two moderate-sized teams (usually 6 to 12 players each) competing over a set of objectives. Our system allows spectators to visualize global features such as large-scale behaviors and team strategies, as opposed to the limited, local view that traditional spectating modes provide. We also add overlay visualizations of semantic information related to the action that might be important to a spectator in order to reduce the information overload that plagues traditional overview visualizations. These overlays can visualize information about abstract concepts such as player distribution over time and areas of intense combat activity, and also highlight important features like player paths, fire coverage, etc. This added information allows spectators to identify important game events more easily and reveals large-scale player behaviors that might otherwise be overlooked.
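One of the overlays mentioned above, "areas of intense combat activity", is essentially a decaying heatmap. A minimal sketch, with grid resolution and decay rate as illustrative choices rather than the paper's parameters:

```cpp
#include <cstdio>
#include <vector>

// Accumulate event positions into a coarse 2D grid; fade old activity so the
// overlay reflects recent combat rather than the whole match.
class Heatmap {
public:
    Heatmap(int w, int h) : w_(w), h_(h), cells_(w * h, 0.f) {}
    void addEvent(float x, float y) {          // x, y in [0,1) map coordinates
        int cx = (int)(x * w_), cy = (int)(y * h_);
        cells_[cy * w_ + cx] += 1.f;
    }
    void decay(float k) {                      // called once per frame
        for (float& c : cells_) c *= k;
    }
    float at(int cx, int cy) const { return cells_[cy * w_ + cx]; }
private:
    int w_, h_;
    std::vector<float> cells_;
};

int main() {
    Heatmap combat(64, 64);
    combat.addEvent(0.25f, 0.75f);             // e.g. recorded firefights
    combat.addEvent(0.26f, 0.74f);
    combat.decay(0.95f);
    std::printf("%.2f\n", combat.at(16, 48));
}
```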