Steven G. Parker
Nvidia
Publication
Featured research published by Steven G. Parker.
high performance distributed computing | 1999
Rob Armstrong; A. Geist; K. Keahey; S. Kohn; L. McInnes; Steven G. Parker; B. Smolinski
Describes work in progress to develop a standard for interoperability among high-performance scientific components. This research stems from the growing recognition that the scientific community needs to better manage the complexity of multidisciplinary simulations and better address scalable performance issues on parallel and distributed architectures. The driving force for this is the need for fast connections among components that perform numerically intensive work and for parallel collective interactions among components that use multiple processes or threads. This paper focuses on the areas we believe are most crucial in this context, namely an interface definition language that supports scientific abstractions for specifying component interfaces and a port connection model for specifying component interactions.
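The provides/uses port connection model described above can be illustrated with a minimal sketch. All class and method names here are hypothetical stand-ins; the actual CCA work defines its own interface definition language and framework services:

```python
class Port:
    """Base class for a typed connection point between components."""

class SolverPort(Port):
    """A hypothetical scientific interface that a component may provide."""
    def solve(self, rhs):
        raise NotImplementedError

class LinearSolver(SolverPort):
    """Component implementation that *provides* a SolverPort."""
    def solve(self, rhs):
        # Toy stand-in for a numerically intensive kernel.
        return [x / 2.0 for x in rhs]

class Framework:
    """Connects a component's 'uses' port to another component's 'provides' port."""
    def __init__(self):
        self.provided = {}

    def add_provides_port(self, name, port):
        self.provided[name] = port

    def get_port(self, name):
        # Fast in-process connection: hand back a direct reference,
        # avoiding marshalling between co-located components.
        return self.provided[name]

fw = Framework()
fw.add_provides_port("solver", LinearSolver())
solver = fw.get_port("solver")    # the "uses" side of the connection
print(solver.solve([2.0, 4.0]))   # → [1.0, 2.0]
```

The point of the pattern is that the using component depends only on the abstract port type, so implementations can be swapped without recompiling callers.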
international conference on computer graphics and interactive techniques | 2010
Steven G. Parker; James Bigler; Andreas Dietrich; Heiko Friedrich; Jared Hoberock; David Luebke; David Kirk McAllister; Morgan McGuire; R. Keith Morley; Austin Robison; Martin Stich
The NVIDIA® OptiX™ ray tracing engine is a programmable system designed for NVIDIA GPUs and other highly parallel architectures. The OptiX engine builds on the key observation that most ray tracing algorithms can be implemented using a small set of programmable operations. Consequently, the core of OptiX is a domain-specific just-in-time compiler that generates custom ray tracing kernels by combining user-supplied programs for ray generation, material shading, object intersection, and scene traversal. This enables the implementation of a highly diverse set of ray tracing-based algorithms and applications, including interactive rendering, offline rendering, collision detection systems, artificial intelligence queries, and scientific simulations such as sound propagation. OptiX achieves high performance through a compact object model and application of several ray tracing-specific compiler optimizations. For ease of use it exposes a single-ray programming model with full support for recursion and a dynamic dispatch mechanism similar to virtual function calls.
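The key observation in the abstract, that a ray tracer can be assembled from a small set of user-supplied programs (ray generation, intersection, shading), can be sketched in miniature. This is an illustration of the idea only; the names below are hypothetical and are not the OptiX API:

```python
# Illustrative sketch of a programmable ray pipeline; hypothetical names,
# not the OptiX API (which JIT-compiles user programs into GPU kernels).

def make_tracer(raygen, intersect, closest_hit, miss):
    """Compose user-supplied programs into a single trace function."""
    def trace(pixel):
        ray = raygen(pixel)     # ray generation program
        hit = intersect(ray)    # object intersection program
        return closest_hit(ray, hit) if hit else miss(ray)
    return trace

# User-supplied programs for a trivial scene: a horizontal plane at y = 0.
def raygen(pixel):
    x, y = pixel
    return {"origin": (x, 1.0, y), "dir": (0.0, -1.0, 0.0)}

def intersect(ray):
    oy, dy = ray["origin"][1], ray["dir"][1]
    if dy == 0.0:
        return None
    t = -oy / dy
    return {"t": t} if t > 0.0 else None

def closest_hit(ray, hit):
    return (0.5, 0.5, 0.5)   # flat gray shading

def miss(ray):
    return (0.0, 0.0, 0.0)   # background color

trace = make_tracer(raygen, intersect, closest_hit, miss)
print(trace((0, 0)))   # → (0.5, 0.5, 0.5)
```

Swapping in a different `closest_hit` or `intersect` yields a different renderer without touching the core dispatch, which is the flexibility the abstract attributes to the program-based design.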
ieee visualization | 1998
Steven G. Parker; Peter Shirley; Yarden Livnat; Charles D. Hansen; Peter-Pike J. Sloan
We show that it is feasible to perform interactive isosurfacing of very large rectilinear datasets with brute-force ray tracing on a conventional (distributed) shared-memory multiprocessor machine. Rather than generate geometry representing the isosurface and render with a z-buffer, for each pixel we trace a ray through a volume and do an analytic isosurface intersection computation. Although this method has a high intrinsic computational cost, its simplicity and scalability make it ideal for large datasets on current high-end systems. Incorporating simple optimizations, such as volume bricking and a shallow hierarchy, enables interactive rendering (i.e. 10 frames per second) of the 1 GByte full resolution Visible Woman dataset on an SGI Reality Monster. The graphics capabilities of the Reality Monster are used only for display of the final color image.
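The analytic per-pixel intersection step can be sketched as follows. The paper solves the exact cubic arising from trilinear interpolation inside a cell; this simplified sketch instead treats the scalar value as linear along the ray segment through one cell, which shows the structure of the computation:

```python
def isosurface_hit_linear(v_in, v_out, t_in, t_out, iso):
    """Solve for the ray parameter where a linearly varying scalar value
    crosses the isovalue inside one cell.  Simplified sketch: the original
    method solves the exact cubic for trilinear interpolation."""
    if (v_in - iso) * (v_out - iso) > 0.0:
        return None                      # no sign change: no crossing here
    if v_out == v_in:
        return t_in                      # constant segment sitting on iso
    s = (iso - v_in) / (v_out - v_in)    # fraction along the segment
    return t_in + s * (t_out - t_in)

# Ray enters a cell at t=2 with value 0.2 and exits at t=3 with value 0.8;
# the 0.5 isosurface is crossed exactly halfway through the cell.
print(isosurface_hit_linear(0.2, 0.8, 2.0, 3.0, 0.5))   # → 2.5
```

Because each pixel's computation is independent, the whole image parallelizes trivially across processors, which is why the brute-force approach scales so well on the shared-memory machines the paper targets.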
ieee international conference on high performance computing data and analytics | 2006
Benjamin A. Allan; Robert C. Armstrong; David E. Bernholdt; Felipe Bertrand; Kenneth Chiu; Tamara L. Dahlgren; Kostadin Damevski; Wael R. Elwasif; Thomas Epperly; Madhusudhan Govindaraju; Daniel S. Katz; James Arthur Kohl; Manoj Kumar Krishnan; Gary Kumfert; J. Walter Larson; Sophia Lefantzi; Michael J. Lewis; Allen D. Malony; Lois C. Mclnnes; Jarek Nieplocha; Boyana Norris; Steven G. Parker; Jaideep Ray; Sameer Shende; Theresa L. Windus; Shujia Zhou
The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.
international conference on computer graphics and interactive techniques | 2006
Ingo Wald; Thiago Ize; Andrew E. Kensler; Aaron Knoll; Steven G. Parker
We present a new approach to interactive ray tracing of moderate-sized animated scenes based on traversing frustum-bounded packets of coherent rays through uniform grids. By incrementally computing the overlap of the frustum with a slice of grid cells, we accelerate grid traversal by more than a factor of 10, and achieve ray tracing performance competitive with the fastest known packet-based kd-tree ray tracers. The ability to efficiently rebuild the grid on every frame enables this performance even for fully dynamic scenes that typically challenge interactive ray tracing systems.
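The underlying uniform-grid traversal that the frustum method amortizes across a packet can be sketched for a single ray with a standard 3D digital differential analyzer (DDA). This is the scalar baseline, not the paper's incremental frustum-slice overlap computation:

```python
import math

def grid_cells_along_ray(origin, direction, cell_size, n_cells, t_max):
    """Enumerate uniform-grid cells pierced by a ray (3D DDA).
    The paper traverses whole frustum-bounded packets slice by slice;
    this is the per-ray traversal that packet methods accelerate."""
    cell = [int(origin[a] // cell_size) for a in range(3)]
    step, t_next, t_delta = [0] * 3, [math.inf] * 3, [math.inf] * 3
    for a in range(3):
        if direction[a] > 0:
            step[a] = 1
            t_next[a] = ((cell[a] + 1) * cell_size - origin[a]) / direction[a]
            t_delta[a] = cell_size / direction[a]
        elif direction[a] < 0:
            step[a] = -1
            t_next[a] = (cell[a] * cell_size - origin[a]) / direction[a]
            t_delta[a] = -cell_size / direction[a]
    visited = []
    while all(0 <= cell[a] < n_cells for a in range(3)):
        visited.append(tuple(cell))
        a = min(range(3), key=lambda i: t_next[i])   # axis of next crossing
        if t_next[a] > t_max:
            break
        cell[a] += step[a]
        t_next[a] += t_delta[a]
    return visited

# An axis-aligned ray through a 4x4x4 grid of unit cells.
print(grid_cells_along_ray((0.5, 0.5, 0.5), (1.0, 0.0, 0.0), 1.0, 4, 10.0))
# → [(0, 0, 0), (1, 0, 0), (2, 0, 0), (3, 0, 0)]
```

The grid's appeal for dynamic scenes, as the abstract notes, is that rebuilding it each frame is a linear-time binning pass, far cheaper than rebuilding a kd-tree.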
eurographics | 1999
Bruce Walter; George Drettakis; Steven G. Parker
Interactive rendering requires rapid visual feedback. The render cache is a new method for achieving this when using high-quality pixel-oriented renderers, such as ray tracing, that are usually considered too slow for interactive use. The render cache provides visual feedback at a rate faster than the renderer can generate complete frames, at the cost of producing approximate images during camera and object motion. The method works both by caching previous results and reprojecting them to estimate the current image, and by directing the renderer's sampling to more rapidly improve subsequent images. Our implementation demonstrates an interactive application working with both ray tracing and path tracing renderers in situations where they would normally be considered too expensive. Moreover, we accomplish this using a software-only implementation without the use of 3D graphics hardware.
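The reprojection half of the idea can be sketched with a pinhole camera: cached shaded 3D points are splatted into the new frame, and pixels that receive no cached point become holes the renderer is directed to re-sample first. The camera model and names below are simplified assumptions, not the paper's implementation:

```python
def reproject(cache, camera_z, width, height, focal):
    """Splat cached (point, color) samples into the current frame through a
    pinhole camera at (0, 0, camera_z) looking down +z.  Pixels that no
    cached point lands in stay None: holes for the renderer to re-sample."""
    frame = [[None] * width for _ in range(height)]
    for (x, y, z), color in cache:
        depth = z - camera_z
        if depth <= 0:
            continue                     # point is behind the new viewpoint
        u = int(width / 2 + focal * x / depth)
        v = int(height / 2 + focal * y / depth)
        if 0 <= u < width and 0 <= v < height:
            frame[v][u] = color
    return frame

# One cached point straight ahead of the camera maps to the image center.
cache = [((0.0, 0.0, 5.0), (1.0, 1.0, 1.0))]
frame = reproject(cache, camera_z=0.0, width=8, height=8, focal=4.0)
print(frame[4][4])   # → (1.0, 1.0, 1.0)
```

Reprojection is cheap (a few multiplies per cached sample), which is what lets the display loop run decoupled from, and much faster than, the underlying renderer.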
Modern software tools for scientific computing | 1997
Steven G. Parker; David M. Weinstein; Christopher R. Johnson
We present the design, implementation and application of SCIRun, a scientific programming environment that allows the interactive construction, debugging, and steering of large-scale scientific computations. Using this “computational workbench,” a scientist can design and modify simulations interactively via a dataflow programming model. SCIRun enables scientists to design and modify model geometry, interactively change simulation parameters and boundary conditions, and interactively visualize geometric models and simulation results. We discuss the ubiquitous roles SCIRun plays as a computational tool (e.g. resource manager, thread scheduler, development environment), and how we have applied an object oriented design (implemented in C++) to the scientific computing process. Finally, we demonstrate the application of SCIRun to large scale problems in computational medicine.
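The dataflow programming model and computational-steering idea can be sketched in a few lines: modules form a graph, executing a module pulls results through its upstream dependencies, and steering amounts to changing a parameter and re-executing the affected part of the network. This is an illustration of the model, not SCIRun's C++ architecture:

```python
class Module:
    """A dataflow node: recomputes its output when executed, pulling
    fresh results from its upstream modules first."""
    def __init__(self, fn, *upstream):
        self.fn, self.upstream = fn, upstream
        self.output = None

    def execute(self):
        args = [m.execute() for m in self.upstream]   # pull-style evaluation
        self.output = self.fn(*args)
        return self.output

# A toy network: data source -> scaling stage.
source = Module(lambda: [1.0, 2.0, 3.0])
scale_factor = 10.0                      # a "steerable" simulation parameter
scale = Module(lambda xs: [scale_factor * x for x in xs], source)
print(scale.execute())                   # → [10.0, 20.0, 30.0]

scale_factor = 0.5                       # steer the computation...
print(scale.execute())                   # ...and re-execute → [0.5, 1.0, 1.5]
```

A production system adds dirty-flag propagation so only modules downstream of a changed parameter actually re-execute, but the pull model above is the core contract between modules.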
interactive 3d graphics and games | 1999
Steven G. Parker; William Martin; Peter-Pike J. Sloan; Peter Shirley; Brian E. Smits; Charles D. Hansen
We examine a rendering system that interactively ray traces an image on a conventional multiprocessor. The implementation is “brute force” in that it explicitly traces rays through every screen pixel, yet pays careful attention to system resources for acceleration. The design of the system is described, along with issues related to material models, lighting and shadows, and frameless rendering. The system is demonstrated for several different types of input scenes. CR Categories: I.3.0 [Computer Graphics]: General; I.3.6 [Computer Graphics]: Methodology and Techniques.
high performance distributed computing | 2000
J. Davison de St. Germain; John McCorquodale; Steven G. Parker; Christopher R. Johnson
Describes Uintah, a component-based visual problem-solving environment (PSE) that is designed to specifically address the unique problems of massively parallel computation on tera-scale computing platforms. Uintah supports the entire life-cycle of scientific applications by allowing scientific programmers to quickly and easily develop new techniques, debug new implementations and apply known algorithms to solve novel problems. Uintah is built on three principles: (1) as much as possible, the complexities of parallel execution should be handled for the scientist, (2) the software should be reusable at the component level, and (3) scientists should be able to dynamically steer and visualize their simulation results as the simulation executes. To provide this functionality, Uintah builds upon the best features of the SCIRun (Scientific Computing and Imaging Run-time) PSE and the DoE (Department of Energy) Common Component Architecture (CCA).
2006 IEEE Symposium on Interactive Ray Tracing | 2006
James Bigler; Abe Stephens; Steven G. Parker
We describe the software architecture of the Manta interactive ray tracer and its application in engineering and scientific visualization. Although numerous ray tracing software packages have been developed, much of the traditional design wisdom needs to be updated to provide support for interactivity, high degrees of parallelism, and modern packet-based acceleration structures. We discuss situations that are normally not considered when designing a batch ray tracer and present methods to overcome those challenges. This paper advocates a forward-looking programming model for interactive ray tracing that uses reconfigurable components to achieve flexibility while maintaining scalability on large numbers of processors. Manta employs data structures motivated by modern microprocessor design that can exploit instruction-level parallelism. We discuss the design tradeoffs and the performance achieved for this system.
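The kind of ILP-friendly data structure the abstract alludes to is typically a structure-of-arrays ray packet, where each ray component is stored contiguously so vector units can process several rays per instruction. The layout below is an illustrative sketch, not Manta's actual classes:

```python
class RayPacket:
    """Structure-of-arrays ray packet: each coordinate stored contiguously
    so a vectorizing compiler (or numpy) can process whole lanes at once.
    Illustrative layout only, not Manta's actual implementation."""
    def __init__(self, origins, directions):
        # Transpose array-of-structs input into parallel component arrays.
        self.ox, self.oy, self.oz = (list(c) for c in zip(*origins))
        self.dx, self.dy, self.dz = (list(c) for c in zip(*directions))

    def point_at(self, i, t):
        """Evaluate origin + t * direction for ray i of the packet."""
        return (self.ox[i] + t * self.dx[i],
                self.oy[i] + t * self.dy[i],
                self.oz[i] + t * self.dz[i])

pkt = RayPacket(origins=[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
                directions=[(0.0, 0.0, 1.0), (0.0, 0.0, 1.0)])
print(pkt.point_at(1, 2.0))   # → (1.0, 0.0, 2.0)
```

In C++ the same layout would be fixed-size aligned arrays per component, letting an intersection kernel loop over lanes with SSE-width strides instead of chasing per-ray objects.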