Publications


Featured research published by David Luebke.


Eurographics | 2007

A Survey of General-Purpose Computation on Graphics Hardware

John D. Owens; David Luebke; Naga K. Govindaraju; Mark J. Harris; Jens H. Krüger; Aaron E. Lefohn; Timothy John Purcell

The rapid increase in the performance of graphics hardware, coupled with recent improvements in its programmability, has made graphics hardware a compelling platform for computationally demanding tasks in a wide variety of application domains. In this report, we describe, summarize, and analyze the latest research in mapping general-purpose computation to graphics hardware.


International Conference on Computer Graphics and Interactive Techniques | 1997

View-dependent simplification of arbitrary polygonal environments

David Luebke; Carl Erikson

This paper describes hierarchical dynamic simplification (HDS), a new approach to the problem of simplifying arbitrary polygonal environments. HDS is dynamic, retessellating the scene continually as the user's viewing position shifts, and global, processing the entire database without first decomposing the environment into individual objects. The resulting system enables real-time display of very complex polygonal CAD models consisting of thousands of parts and millions of polygons. HDS supports various preprocessing algorithms and various run-time criteria, providing a general framework for dynamic view-dependent simplification. Briefly, HDS works by clustering vertices together in a hierarchical fashion. The simplification process continually queries this hierarchy to generate a scene containing only those polygons that are important from the current viewpoint. When the volume of space associated with a vertex cluster occupies less than a user-specified amount of the screen, all vertices within that cluster are collapsed together and degenerate polygons are filtered out. HDS maintains an active list of visible polygons for rendering. Since frame-to-frame movements typically involve small changes in viewpoint, and therefore modify this list by only a few polygons, the method takes advantage of temporal coherence for greater speed.
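
A minimal sketch of the screen-space test described above, using hypothetical Cluster and Camera types: a cluster's bounding sphere is projected to the screen, and the cluster is folded (its vertices collapsed to a representative) only when its projected extent falls below the user-specified threshold. This is an illustration of the idea, not the paper's implementation.

#include <cmath>

struct Camera  { float px, py, pz; float fovY; int screenHeight; };   // fovY in radians
struct Cluster { float cx, cy, cz; float radius; };                   // bounding sphere

// Approximate projected diameter of the cluster's bounding sphere, in pixels.
float projectedExtent(const Cluster& c, const Camera& cam) {
    float dx = c.cx - cam.px, dy = c.cy - cam.py, dz = c.cz - cam.pz;
    float dist = std::sqrt(dx*dx + dy*dy + dz*dz);
    if (dist <= c.radius) return 1e9f;                     // camera inside the cluster: never fold
    float angle = 2.0f * std::asin(c.radius / dist);       // angle subtended by the sphere
    return angle / cam.fovY * cam.screenHeight;            // convert to pixels
}

// Fold the cluster (collapse its vertices) when it is smaller on screen than the threshold.
bool shouldFold(const Cluster& c, const Camera& cam, float thresholdPixels) {
    return projectedExtent(c, cam) < thresholdPixels;
}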


International Conference on Computer Graphics and Interactive Techniques | 2010

OptiX: a general purpose ray tracing engine

Steven G. Parker; James Bigler; Andreas Dietrich; Heiko Friedrich; Jared Hoberock; David Luebke; David Kirk McAllister; Morgan McGuire; R. Keith Morley; Austin Robison; Martin Stich

The NVIDIA® OptiX™ ray tracing engine is a programmable system designed for NVIDIA GPUs and other highly parallel architectures. The OptiX engine builds on the key observation that most ray tracing algorithms can be implemented using a small set of programmable operations. Consequently, the core of OptiX is a domain-specific just-in-time compiler that generates custom ray tracing kernels by combining user-supplied programs for ray generation, material shading, object intersection, and scene traversal. This enables the implementation of a highly diverse set of ray tracing-based algorithms and applications, including interactive rendering, offline rendering, collision detection systems, artificial intelligence queries, and scientific simulations such as sound propagation. OptiX achieves high performance through a compact object model and application of several ray tracing-specific compiler optimizations. For ease of use it exposes a single-ray programming model with full support for recursion and a dynamic dispatch mechanism similar to virtual function calls.
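
The "small set of programmable operations" mentioned in the abstract can be illustrated with a toy, CPU-side sketch. This is not the OptiX API (which just-in-time compiles user programs into GPU kernels); it only shows the structure of an engine core that is generic over user-supplied ray generation, intersection, and shading programs.

#include <functional>
#include <optional>
#include <vector>

struct Ray   { float origin[3]; float dir[3]; };
struct Hit   { float t; int primitive; };
struct Pixel { float r, g, b; };

// User-supplied "programs" combined by the engine into a complete renderer.
struct Programs {
    std::function<Ray(int x, int y)>              rayGen;     // produce a primary ray per pixel
    std::function<std::optional<Hit>(const Ray&)> intersect;  // traverse the scene, return closest hit
    std::function<Pixel(const Ray&, const Hit&)>  closestHit; // shade a hit point
    std::function<Pixel(const Ray&)>              miss;       // background / environment color
};

// The "engine" core: the same loop serves any algorithm expressed as these programs.
std::vector<Pixel> render(int w, int h, const Programs& p) {
    std::vector<Pixel> image(w * h);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            Ray ray = p.rayGen(x, y);
            auto hit = p.intersect(ray);
            image[y * w + x] = hit ? p.closestHit(ray, *hit) : p.miss(ray);
        }
    return image;
}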


IEEE Computer Graphics and Applications | 2001

A developer's survey of polygonal simplification algorithms

David Luebke

Polygonal models currently dominate interactive computer graphics. This is chiefly because of their mathematical simplicity: polygonal models lend themselves to simple, regular rendering algorithms that embed well in hardware, which has in turn led to widely available polygon rendering accelerators for every platform. Unfortunately, the complexity of these models, measured by the number of polygons, seems to grow faster than the ability of our graphics hardware to render them interactively. Put another way, the number of polygons we want always seems to exceed the number of polygons we can afford. Polygonal simplification techniques offer one solution for developers grappling with complex models. These methods simplify the polygonal geometry of small, distant, or otherwise unimportant portions of the model, seeking to reduce the rendering cost without a significant loss in the scene's visual content. The article surveys polygonal simplification algorithms, identifies the issues in picking an algorithm, relates the strengths and weaknesses of different approaches, and describes several published algorithms.
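
As a concrete illustration of the "small, distant, or otherwise unimportant" criterion, a hypothetical discrete level-of-detail switch might select a coarser version of an object as its projected screen size shrinks. This sketch is not from the survey; the types and thresholds are assumptions.

#include <cmath>
#include <vector>

struct Mesh;                                       // polygon data, omitted
struct LodLevel { const Mesh* mesh; float minPixels; };   // coarser meshes have smaller minPixels

// levels is ordered fine -> coarse; pixelsPerRadian converts angular size to pixels.
const Mesh* selectLod(const std::vector<LodLevel>& levels,
                      float objectRadius, float distance, float pixelsPerRadian) {
    float projected = 2.0f * std::atan(objectRadius / distance) * pixelsPerRadian;
    for (const LodLevel& level : levels)
        if (projected >= level.minPixels) return level.mesh;   // finest level still justified
    return levels.back().mesh;                                 // object is tiny: coarsest level
}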


International Conference on Computer Graphics and Interactive Techniques | 2004

GPGPU: general purpose computation on graphics hardware

David Luebke; Mark J. Harris; Jens H. Krüger; Timothy John Purcell; Naga K. Govindaraju; Ian Buck; Cliff Woolley; Aaron E. Lefohn

The graphics processor (GPU) on today's commodity video cards has evolved into an extremely powerful and flexible processor. The latest graphics architectures provide tremendous memory bandwidth and computational horsepower, with fully programmable vertex and pixel processing units that support vector operations up to full IEEE floating point precision. High level languages have emerged for graphics hardware, making this computational power accessible. Architecturally, GPUs are highly parallel streaming processors optimized for vector operations, with both MIMD (vertex) and SIMD (pixel) pipelines. Not surprisingly, these processors are capable of general-purpose computation beyond the graphics applications for which they were designed. Researchers have found that exploiting the GPU can accelerate some problems by over an order of magnitude over the CPU. However, significant barriers still exist for the developer who wishes to use the inexpensive power of commodity graphics hardware, whether for in-game simulation of physics or for conventional computational science. These chips are designed for and driven by video game development; the programming model is unusual, the programming environment is tightly constrained, and the underlying architectures are largely secret. The GPU developer must be an expert in computer graphics and its computational idioms to make effective use of the hardware, and still pitfalls abound. This course provides a detailed introduction to general purpose computation on graphics hardware (GPGPU). We emphasize core computational building blocks, ranging from linear algebra to database queries, and review the tools, perils, and tricks of the trade in GPU programming. Finally, we present some interesting and important case studies on general-purpose applications of graphics hardware. The course presenters are experts on general-purpose GPU computation from academia and industry, and have presented papers and tutorials on the topic at SIGGRAPH, Graphics Hardware, Game Developers Conference, and elsewhere.
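
The data-parallel idiom the course describes can be shown in miniature: general-purpose work is expressed as one small kernel applied independently to every element of a stream, which is what maps onto the GPU's parallel pixel pipeline. The sketch below uses plain C++ for clarity rather than GPU code.

#include <vector>

// SAXPY (y = a*x + y) written so that every iteration is independent: on the GPU
// the loop body would become a fragment/compute kernel, one invocation per element.
void saxpy(float a, const std::vector<float>& x, std::vector<float>& y) {
    for (std::size_t i = 0; i < y.size(); ++i)   // each element processed independently
        y[i] = a * x[i] + y[i];                  // -> trivially parallel across GPU threads
}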


Computer Graphics Forum | 2009

Fast BVH Construction on GPUs

Christian Lauterbach; Michael Garland; Shubhabrata Sengupta; David Luebke; Dinesh Manocha

We present two novel parallel algorithms for rapidly constructing bounding volume hierarchies on manycore GPUs. The first uses a linear ordering derived from spatial Morton codes to build hierarchies extremely quickly and with high parallel scalability. The second is a top-down approach that uses the surface area heuristic (SAH) to build hierarchies optimized for fast ray tracing. Both algorithms are combined into a hybrid algorithm that removes existing bottlenecks to GPU construction performance and scalability, leading to significantly decreased build times. The resulting hierarchies are close in quality to optimized SAH hierarchies, but the construction process is substantially faster, leading to a significant net benefit when both construction and traversal cost are accounted for. Our preliminary results show that current GPU architectures can compete with CPU implementations of hierarchy construction running on multicore systems. In practice, we can construct hierarchies of models with up to several million triangles and use them for fast ray tracing or other applications.
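
A sketch of the linear-ordering step described above, assuming primitive centroids normalized to the unit cube: each centroid is converted to a 30-bit Morton code by interleaving the bits of its quantized coordinates, and sorting primitives by this key yields the spatially coherent order from which the hierarchy is emitted. The bit-spreading constants are the standard ones for 10-bit-per-axis Morton codes; this is an illustration, not the paper's GPU implementation.

#include <algorithm>
#include <cstdint>

// Spread the lower 10 bits of v so two zero bits separate each original bit.
std::uint32_t expandBits(std::uint32_t v) {
    v = (v * 0x00010001u) & 0xFF0000FFu;
    v = (v * 0x00000101u) & 0x0F00F00Fu;
    v = (v * 0x00000011u) & 0xC30C30C3u;
    v = (v * 0x00000005u) & 0x49249249u;
    return v;
}

// 30-bit Morton code for a point in [0,1]^3 (e.g., a centroid normalized to the scene bounds).
std::uint32_t morton3D(float x, float y, float z) {
    x = std::min(std::max(x * 1024.0f, 0.0f), 1023.0f);
    y = std::min(std::max(y * 1024.0f, 0.0f), 1023.0f);
    z = std::min(std::max(z * 1024.0f, 0.0f), 1023.0f);
    std::uint32_t xx = expandBits(static_cast<std::uint32_t>(x));
    std::uint32_t yy = expandBits(static_cast<std::uint32_t>(y));
    std::uint32_t zz = expandBits(static_cast<std::uint32_t>(z));
    return xx * 4 + yy * 2 + zz;
}
// Sorting primitives by morton3D(centroid) gives the linear order used to build the LBVH.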


Interactive 3D Graphics and Games | 1995

Portals and mirrors: simple, fast evaluation of potentially visible sets

David Luebke; Chris Georges

We describe an approach for determining potentially visible sets in dynamic architectural models. Our scheme divides the models into cells and portals, computing a conservative estimate of which cells are visible at render time. The technique is simple to implement and can be easily integrated into existing systems, providing increased interactive performance on large architectural models.
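
A sketch of the render-time traversal the abstract describes, under simplifying assumptions: the view region and each portal are reduced to screen-space rectangles, whereas a real system clips the actual portal polygons. Starting from the viewer's cell, neighboring cells are added to the potentially visible set only through portals that still overlap the progressively narrowed view region.

#include <algorithm>
#include <set>
#include <vector>

struct Rect {
    float x0, y0, x1, y1;
    bool overlaps(const Rect& o) const {
        return x0 < o.x1 && o.x0 < x1 && y0 < o.y1 && o.y0 < y1;
    }
    Rect intersect(const Rect& o) const {
        return { std::max(x0, o.x0), std::max(y0, o.y0),
                 std::min(x1, o.x1), std::min(y1, o.y1) };
    }
};

struct Cell;
struct Portal { Cell* target; Rect bounds; };   // bounds: projected portal rectangle
struct Cell   { std::vector<Portal> portals; };

// Conservative potentially visible set: a cell is included only if a chain of
// portals from the viewer's cell keeps overlapping the narrowed view region.
void collectVisible(Cell* cell, const Rect& view, std::set<Cell*>& pvs) {
    pvs.insert(cell);
    for (const Portal& p : cell->portals) {
        if (!view.overlaps(p.bounds)) continue;                   // portal culled: stop here
        if (pvs.count(p.target)) continue;                        // simple revisit guard
        collectVisible(p.target, view.intersect(p.bounds), pvs);  // recurse with narrowed region
    }
}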


International Conference on Embedded Networked Sensor Systems | 2005

A high-accuracy, low-cost localization system for wireless sensor networks

Radu Stoleru; Tian He; John A. Stankovic; David Luebke

The problem of localization of wireless sensor nodes has long been regarded as very difficult to solve, when considering the realities of real-world environments. In this paper, we formally describe, design, implement and evaluate a novel localization system, called Spotlight. Our system uses the spatio-temporal properties of well-controlled events in the network (e.g., light) to obtain the locations of sensor nodes. We demonstrate that a high accuracy in localization can be achieved without the aid of expensive hardware on the sensor nodes, as required by other localization systems. We evaluate the performance of our system in deployments of Mica2 and XSM motes. Through performance evaluations of a real system deployed outdoors, we obtain a 20 cm localization error. A sensor network, with any number of nodes, deployed in a 2500 m² area, can be localized in under 10 minutes, using a device that costs less than $1000. To the best of our knowledge, this is the first report of a sub-meter localization error, obtained in an outdoor environment, without equipping the wireless sensor nodes with specialized ranging hardware.
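
A sketch of the Spotlight idea under strong simplifying assumptions (a single light beam swept at constant speed along one axis of the deployment area): each node only timestamps the moment it detects the light, and the base station, which knows where the beam was at every instant, maps that timestamp back to a position. The types and the constant-speed schedule are assumptions for illustration, not the system's actual event-distribution functions.

// Simplified event-driven localization: a beam sweeps the field along x at constant speed.
struct SweepSchedule {
    double startTime;   // when the sweep began (seconds)
    double startX;      // beam position at startTime (meters)
    double speed;       // beam speed along x (meters/second)
};

// The base station inverts the schedule: detection time -> beam position.
double localizeX(const SweepSchedule& s, double detectionTime) {
    return s.startX + s.speed * (detectionTime - s.startTime);
}
// A second sweep along y (not shown) supplies the other coordinate.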


Eurographics Symposium on Rendering Techniques | 2001

Perceptually-Driven Simplification for Interactive Rendering

David Luebke; Benjamin Hallen

We present a framework for accelerating interactive rendering, grounded in psychophysical models of visual perception. This framework is applicable to multiresolution rendering techniques that use a hierarchy of local simplification operations. Our method drives those local operations directly by perceptual metrics; the effect of each simplification on the final image is considered in terms of the contrast the operation will induce in the image and the spatial frequency of the resulting change. A simple and conservative perceptual model determines under what conditions the simplification operation will be perceptible, enabling imperceptible simplification in which operations are performed only when judged imperceptible. Alternatively, simplifications may be ordered according to their perceptibility, providing a principled approach to best-effort rendering. We demonstrate this framework applied to view-dependent polygonal simplification. Our approach addresses many interesting topics in the acceleration of interactive rendering, including imperceptible simplification, silhouette preservation, and gaze-directed rendering.
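
A sketch of the kind of conservative test described above, with an entirely hypothetical threshold curve (the paper uses a proper psychophysical contrast sensitivity model): a local simplification is applied only if the contrast it would induce, at the spatial frequency of the change, falls below the predicted perception threshold.

#include <algorithm>
#include <cmath>

// Placeholder contrast threshold as a function of spatial frequency (cycles per degree).
// This curve is a stand-in for illustration, not the model used in the paper.
float thresholdContrast(float cyclesPerDegree) {
    float sensitivity = 100.0f * std::exp(-0.3f * cyclesPerDegree);  // sensitivity falls at high frequency
    return 1.0f / std::max(sensitivity, 1.0f);
}

// Conservative rule: perform the fold only when it is predicted to be imperceptible.
bool imperceptible(float inducedContrast, float cyclesPerDegree) {
    return inducedContrast < thresholdContrast(cyclesPerDegree);
}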


IEEE Computer | 2007

How GPUs Work

David Luebke; Greg Humphreys


Collaboration


Dive into David Luebke's collaborations.

Top Co-Authors


Benjamin Watson

North Carolina State University
