Michael C. Doggett
Lund University
Publications
Featured research published by Michael C. Doggett.
International Conference on Computer Graphics and Interactive Techniques | 2000
Michael C. Doggett; Johannes Hirche
Displacement mapping is an effective technique for encoding the high levels of detail found in today's triangle-based surface models. Extending the hardware rendering pipeline to handle displacement maps as geometric primitives allows highly detailed models to be constructed without requiring large numbers of triangles to be passed from the CPU to the graphics pipeline. We present a new approach based on recursive tessellation that adapts to the surface complexity described by the displacement map. We also ensure that the displaced mesh is tessellated to a resolution appropriate to the current viewpoint. Our tessellation scheme performs all tests only on triangle edges to avoid generating cracks on the displaced surface. The decision to insert a vertex is based on two comparisons, one involving the average height surrounding the vertices and one involving the normals at the vertices. Individually, each test fails to tessellate a mesh satisfactorily, but their combination achieves good results. We propose several additions to the typical hardware rendering pipeline in order to achieve displacement map rendering in hardware. The mesh tessellation is placed within the rendering pipeline so that we can take advantage of the pre-existing vertex transformation units to perform the setup calculations for our view-dependent test. Our method adds only simple arithmetic and comparison operations to the graphics pipeline and makes use of existing units for calculations wherever possible.
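The core of the scheme is the per-edge insertion decision. The sketch below is an illustration of such a test; the threshold parameters, helper names, and the exact way the two tests are combined are assumptions rather than the paper's precise formulation.

```cpp
// Hypothetical sketch of a per-edge vertex-insertion test combining a
// height comparison and a normal comparison. Thresholds are illustrative.
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Decide whether to insert a new vertex at the midpoint of an edge.
// avgHeightA/avgHeightB: average displacement-map height around each
// endpoint; midHeight: height sampled at the edge midpoint;
// normalA/normalB: the endpoint vertex normals.
bool shouldSplitEdge(float avgHeightA, float avgHeightB, float midHeight,
                     const Vec3& normalA, const Vec3& normalB,
                     float heightThreshold, float normalThreshold)
{
    // Test 1: does the midpoint height deviate from the endpoint average?
    float avg = 0.5f * (avgHeightA + avgHeightB);
    bool heightDiffers = std::fabs(midHeight - avg) > heightThreshold;

    // Test 2: do the endpoint normals diverge (surface curvature)?
    bool normalsDiverge = dot(normalA, normalB) < normalThreshold;

    // Either test alone under-tessellates; their combination works better.
    return heightDiffers || normalsDiverge;
}
```

Because the test is evaluated only on edges, the two triangles sharing an edge always reach the same decision, which is what prevents cracks in the displaced surface.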
ACM Transactions on Graphics | 2011
Jonathan Ragan-Kelley; Jaakko Lehtinen; Jiawen Chen; Michael C. Doggett
We propose a generalized approach to decoupling shading from visibility sampling in graphics pipelines, which we call decoupled sampling. Decoupled sampling enables stochastic supersampling of motion and defocus blur at reduced shading cost, as well as controllable or adaptive shading rates which trade off shading quality for performance. It can be thought of as a generalization of multisample antialiasing (MSAA) to support complex and dynamic mappings from visibility to shading samples, as introduced by motion and defocus blur and adaptive shading. It works by defining a many-to-one hash from visibility to shading samples, and using a buffer to memoize shading samples and exploit reuse across visibility samples. Decoupled sampling is inspired by the Reyes rendering architecture, but like traditional graphics pipelines, it shades fragments rather than micropolygon vertices, decoupling shading from the geometry sampling rate. Also unlike Reyes, decoupled sampling only shades fragments after precise computation of visibility, reducing overshading. We present extensions of two modern graphics pipelines to support decoupled sampling: a GPU-style sort-last fragment architecture, and a Larrabee-style sort-middle pipeline. We study the architectural implications of decoupled sampling and blur, and derive end-to-end performance estimates on real applications through an instrumented functional simulator. We demonstrate high-quality motion and defocus blur, as well as variable and adaptive shading rates.
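The memoization step can be pictured as a small cache keyed by the many-to-one hash from visibility samples to shading samples. The sketch below is a CPU-side illustration with assumed types and key layout, not the pipeline hardware described in the paper.

```cpp
// Illustrative shading-sample memoization: many visibility samples hash
// to one shading sample, which is shaded once and reused thereafter.
#include <cstddef>
#include <cstdint>
#include <unordered_map>

struct ShadingKey {
    uint32_t primitiveId;
    int16_t  u, v;                       // quantized shading-space coords
    bool operator==(const ShadingKey& o) const {
        return primitiveId == o.primitiveId && u == o.u && v == o.v;
    }
};

struct ShadingKeyHash {
    size_t operator()(const ShadingKey& k) const {
        return (size_t(k.primitiveId) * 73856093u) ^
               (size_t(uint16_t(k.u)) * 19349663u) ^
               (size_t(uint16_t(k.v)) * 83492791u);
    }
};

struct Color { float r, g, b, a; };

class ShadingCache {
public:
    // Return the memoized color if this shading sample was already
    // evaluated; otherwise shade it once and store the result.
    template <typename ShadeFn>
    Color lookupOrShade(const ShadingKey& key, ShadeFn shade) {
        auto it = cache.find(key);
        if (it != cache.end())
            return it->second;           // reuse across visibility samples
        Color c = shade(key);            // shade exactly once per key
        cache.emplace(key, c);
        return c;
    }
private:
    std::unordered_map<ShadingKey, Color, ShadingKeyHash> cache;
};
```

Motion and defocus blur only change how many visibility samples map onto each key; the lookup itself stays the same, which is what keeps the shading cost roughly independent of the visibility sampling rate.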
International Conference on Computer Graphics and Interactive Techniques | 2001
Montserrat Bóo; Margarita Amor; Michael C. Doggett; Johannes Hirche; Wolfgang Strasser
Adaptive subdivision of triangular meshes is highly desirable for surface generation algorithms, including adaptive displacement mapping, in which a highly detailed model can be constructed from a coarse triangle mesh and a displacement map. The communication requirements between the CPU and the graphics pipeline can be reduced if more detailed and complex surfaces are generated, as in displacement mapping, by an adaptive tessellation unit that is part of the graphics pipeline. Generating subdivision surfaces requires a large amount of memory in which multiple arbitrary accesses to neighbouring vertices are needed to calculate the new vertices. In this paper we present a meshing scheme and a new architecture for the implementation of adaptive subdivision of triangular meshes that allows quick access using a small memory, making it feasible in hardware, while at the same time allowing new vertices to be adaptively inserted. The architecture is regular and characterized by efficient data management that minimizes data storage and avoids the wait cycles that would be associated with the multiple data accesses required for traditional subdivision. This architecture is presented as an improvement for adaptive displacement mapping algorithms, but could also be used for adaptive subdivision surface generation in hardware.
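To give an idea of the meshing side, the sketch below shows how a triangle might be re-triangulated from per-edge split flags so that neighbouring triangles agree along shared edges; the data layout is an assumption, and the two-split case is left out of this simplified illustration.

```cpp
// Minimal sketch of adaptive triangle subdivision driven by per-edge
// split flags, so that split and unsplit neighbours share identical
// edge vertices (no cracks). Indexing conventions are illustrative.
#include <array>
#include <vector>

struct Triangle { int v0, v1, v2; };

// splitFlag[i] marks edge i of the triangle (v0-v1, v1-v2, v2-v0);
// midpoint[i] is the index of the already-inserted midpoint vertex for
// that edge (only valid when the flag is set).
std::vector<Triangle> subdivide(const Triangle& t,
                                const std::array<bool, 3>& splitFlag,
                                const std::array<int, 3>& midpoint)
{
    std::vector<Triangle> out;
    int nSplit = splitFlag[0] + splitFlag[1] + splitFlag[2];

    if (nSplit == 0) {
        out.push_back(t);                       // leave triangle as-is
    } else if (nSplit == 3) {
        int m01 = midpoint[0], m12 = midpoint[1], m20 = midpoint[2];
        out.push_back({t.v0, m01, m20});        // regular 1-to-4 split
        out.push_back({m01, t.v1, m12});
        out.push_back({m20, m12, t.v2});
        out.push_back({m01, m12, m20});
    } else if (nSplit == 1) {
        // One split edge: split the triangle in two across that edge.
        if (splitFlag[0]) {
            out.push_back({t.v0, midpoint[0], t.v2});
            out.push_back({midpoint[0], t.v1, t.v2});
        } else if (splitFlag[1]) {
            out.push_back({t.v1, midpoint[1], t.v0});
            out.push_back({midpoint[1], t.v2, t.v0});
        } else {
            out.push_back({t.v2, midpoint[2], t.v1});
            out.push_back({midpoint[2], t.v0, t.v1});
        }
    } else {
        // Two split edges: fan into three triangles so the shared edge
        // vertices still match the neighbours (omitted in this sketch).
    }
    return out;
}
```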
Simulation Modelling Practice and Theory | 2005
Robert Strzodka; Michael C. Doggett; Andreas Kolb
Graphics processing units (GPUs) have emerged as powerful parallel processors in recent years. Although floating point computations and high-level programming languages are now available, the efficient use of the enormous computing power of GPUs still requires a significant amount of graphics-specific knowledge. The paper explains how to use GPUs for scientific computations without graphics-specific terminology. It offers an algorithmic view of GPUs with comparisons to cache-aware and parallel programming of CPUs. Two typical simulation techniques, namely grid-based and particle-based methods, are discussed.
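As a concrete example of the grid-based class of methods, the loop below performs one Jacobi-style diffusion step; every cell depends only on the previous grid, which is exactly the data-parallel structure that maps each cell to one GPU thread or fragment. The kernel is a generic illustration, not code from the paper.

```cpp
// One step of 2D heat diffusion (Jacobi update) over a regular grid.
// Each interior cell is computed independently from the old grid, so
// the loop body is what a per-cell GPU kernel would contain.
#include <vector>

void diffuseStep(const std::vector<float>& src, std::vector<float>& dst,
                 int width, int height, float alpha)
{
    for (int y = 1; y < height - 1; ++y) {
        for (int x = 1; x < width - 1; ++x) {
            int i = y * width + x;
            // 5-point stencil: read-only neighbours from the old grid.
            float laplacian = src[i - 1] + src[i + 1]
                            + src[i - width] + src[i + width]
                            - 4.0f * src[i];
            dst[i] = src[i] + alpha * laplacian;   // independent write
        }
    }
}
```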
SIGGRAPH/Eurographics Conference on Graphics Hardware | 2002
Michael Meißner; Urs Kanus; Gregor Wetekam; Johannes Hirche; Alexander Ehlert; Wolfgang Straßer; Michael C. Doggett; P. Forthmann; R. Proksa
This paper presents a reconfigurable, hardware-accelerated volume rendering system for high-quality perspective ray casting. The volume rendering accelerator performs ray casting by calculating the path of the ray through the volume using a programmable Xilinx Virtex FPGA, which provides fast design changes and low-cost development. Volume datasets are stored on the card in low-profile DIMMs with standard connectors, allowing both large datasets of up to 1 GByte with 32 bits per voxel and easy upgrades to larger memory capacities. Per-sample Phong shading and post-classification are performed in hardware, giving immediate feedback to changes in the visualization of a dataset. Adding new features, such as pre-integrated classification, can be accomplished using the existing card without expensive and time-consuming redesigns. The card can also be used for medical image reconstruction by reconfiguring the FPGA, broadening its usefulness for end users. For the first time, users are able to generate high-quality perspective images as required for applications such as virtual endoscopy and colonoscopy, and for stereoscopic image generation.
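The per-sample work described above (resampling, post-classification, Phong shading, compositing) follows the usual ray-casting loop. The sketch below is a software illustration of that loop with stand-in helper functions; it is not the FPGA design itself.

```cpp
// Conceptual per-ray loop: sample the volume, post-classify through a
// transfer function, Phong-shade, and composite front to back.
struct Vec3 { float x, y, z; };
struct RGBA { float r, g, b, a; };

// Stand-in helpers: real implementations would fetch from the volume,
// the classification lookup table, and the lighting unit.
static float sampleVolume(const Vec3&)                    { return 0.5f; }
static Vec3  sampleGradient(const Vec3&)                  { return {0, 0, 1}; }
static RGBA  classify(float d)                            { return {d, d, d, d}; }
static RGBA  phongShade(RGBA c, const Vec3&, const Vec3&) { return c; }

RGBA castRay(Vec3 pos, const Vec3& step, const Vec3& viewDir, int numSteps)
{
    RGBA accum = {0, 0, 0, 0};
    for (int i = 0; i < numSteps && accum.a < 0.99f; ++i) {  // early termination
        float density = sampleVolume(pos);           // resample the volume
        RGBA  c = classify(density);                 // post-classification
        c = phongShade(c, sampleGradient(pos), viewDir);

        float w = (1.0f - accum.a) * c.a;            // front-to-back blend
        accum.r += w * c.r;
        accum.g += w * c.g;
        accum.b += w * c.b;
        accum.a += w;

        pos.x += step.x; pos.y += step.y; pos.z += step.z;
    }
    return accum;
}
```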
International Symposium on Circuits and Systems | 1999
Michael C. Doggett; M. Meissner
This paper presents a memory addressing design that uses buffering to achieve a cache hit ratio of 95% for a PCI-based volume rendering hardware accelerator. The target system for this memory interface is VIZARD II, a second-generation PCI board using several Xilinx chips. To improve the performance of this and possibly other hardware accelerators, a cubic addressing scheme is presented that improves the ratio of cache hits to misses. To further improve performance, the cubic addressing is coupled with several FIFO buffers to minimise the pipeline-stalling effect of cache misses in the eight parallel memory modules. This combination of addressing scheme and memory access buffering raises the cache hit ratio from 63% to 95%. Most volume rendering systems are fully pipelined and can utilise the design presented here to increase the number of frames per second and the quality of rendered images.
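A cubic addressing scheme of this kind can be illustrated by storing the volume in small sub-cubes so that the samples a ray takes in quick succession fall into the same block. The block size and layout below are assumptions for illustration, not the paper's exact parameters.

```cpp
// Illustration of cubic (blocked) addressing versus a plain linear
// layout. Assumes volume dimensions are multiples of the block size.
#include <cstddef>

constexpr int BLOCK = 4;   // 4^3 = 64 voxels per cubic block

// Linear layout: rows of x, then y, then z; poor locality along a ray.
size_t linearAddress(int x, int y, int z, int dimX, int dimY)
{
    return (size_t(z) * dimY + y) * dimX + x;
}

// Cubic layout: block index first, then the offset inside the block,
// so neighbouring samples usually share a block (and a cache line).
size_t cubicAddress(int x, int y, int z, int dimX, int dimY)
{
    int blocksX = dimX / BLOCK, blocksY = dimY / BLOCK;
    size_t blockIndex =
        (size_t(z / BLOCK) * blocksY + y / BLOCK) * blocksX + x / BLOCK;
    size_t offset =
        (size_t(z % BLOCK) * BLOCK + y % BLOCK) * BLOCK + x % BLOCK;
    return blockIndex * (BLOCK * BLOCK * BLOCK) + offset;
}
```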
Computer Graphics Forum | 2001
Michael C. Doggett; Anders M. Kugler; Wolfgang Straßer
This paper presents a novel algorithm and architectures for perspective-correct displacement of the surface geometry of a polygonal model using a displacement map. This new displaced surface geometry is passed on to a traditional rendering pipeline. The algorithm uses a multiple-pass approach in which the geometry is displaced in the first pass and the displaced geometry is then rendered. The significant features of the algorithm are that the surface is displaced after its triangle mesh is transformed into screen space, and that it uses only bilinear interpolation for calculating the displaced geometry, allowing a cheap incremental scan-line implementation. A hardware architecture based on this algorithm is presented along with possible alternative implementations. The technique presented here allows greater photorealism by using increased detail without an increase in bandwidth for geometry or calculation time for transformation.
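The incremental scan-line idea can be sketched as follows: the base position and normal are interpolated from the left to the right edge of a span with constant per-pixel increments, and each pixel's displaced point is the base position offset along the normal by the displacement-map height. The attribute set and names are assumptions for illustration.

```cpp
// One scan line of incremental displacement: one divide per span,
// only additions and one height lookup per pixel.
#include <vector>

struct Vec3 { float x, y, z; };

// heights must hold spanLength + 1 displacement samples for the span.
void displaceScanLine(const Vec3& basePosL, const Vec3& basePosR,
                      const Vec3& normalL,  const Vec3& normalR,
                      const float* heights, int spanLength,
                      std::vector<Vec3>& out)
{
    float inv = spanLength > 0 ? 1.0f / float(spanLength) : 0.0f;
    Vec3 dPos = {(basePosR.x - basePosL.x) * inv,
                 (basePosR.y - basePosL.y) * inv,
                 (basePosR.z - basePosL.z) * inv};
    Vec3 dNrm = {(normalR.x - normalL.x) * inv,
                 (normalR.y - normalL.y) * inv,
                 (normalR.z - normalL.z) * inv};

    Vec3 pos = basePosL, nrm = normalL;
    for (int i = 0; i <= spanLength; ++i) {
        float h = heights[i];                   // displacement map sample
        out.push_back({pos.x + h * nrm.x,       // displaced point
                       pos.y + h * nrm.y,
                       pos.z + h * nrm.z});
        pos.x += dPos.x; pos.y += dPos.y; pos.z += dPos.z;
        nrm.x += dNrm.x; nrm.y += dNrm.y; nrm.z += dNrm.z;
    }
}
```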
Sketch-Based Interfaces and Modeling | 2013
Philip Buchanan; Ramakrishnan Mukundan; Michael C. Doggett
In this paper we present a new method for automatically constructing 3D meshes from a single input image. With the increasing content demands of modern digital entertainment and the expectation of involvement from users, automatic artist-free systems are an important step in allowing user generated content and rapid game prototyping. Our system proposes a novel heuristic for the creation of a 3D mesh from a single piece of non-occluding 2D concept art. By extracting a skeleton structure, approximating the 3D orientation and analysing line curvature properties, appropriate centrepoints can be found around which to create the cross-sectional slices used to build a final triangle mesh. Our results show that a single 2D input image can be used to generate a rigged 3D low-polygon model suitable for use in realtime applications.
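One building block of such a pipeline is turning each skeleton centrepoint into a cross-sectional ring of vertices; adjacent rings are then stitched into the final triangle mesh. The fragment below is a deliberately simplified illustration (fixed slice orientation, given radius), not the paper's heuristic.

```cpp
// One cross-sectional slice: a ring of vertices around a skeleton
// centre point. Here the slice is assumed to lie in a plane facing +Z;
// a real implementation would orient it along the local skeleton
// direction and estimate the radius from the concept art.
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

std::vector<Vec3> makeCrossSection(const Vec3& centre, float radius,
                                   int segments)
{
    std::vector<Vec3> ring;
    ring.reserve(segments);
    for (int i = 0; i < segments; ++i) {
        float angle = 2.0f * 3.14159265f * float(i) / float(segments);
        ring.push_back({centre.x + radius * std::cos(angle),
                        centre.y + radius * std::sin(angle),
                        centre.z});
    }
    return ring;   // consecutive rings are stitched with triangle strips
}
```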
Eurographics | 1995
Michael C. Doggett; Graham R. Hellestrand
This paper describes a new architecture for generating smoothly shaded two-dimensional images of volume data. This architecture fits into an image synthesis pipeline and uses only simple arithmetic operations and a look-up table to generate 2-D images in real time. The shading algorithm is an extension of the grey-level gradient algorithm for shading volume data. The shading technique produces smooth images for voxelized geometrical data and sampled volume data. Image synthesis from volume data in real time is an important technique in visualization and graphics systems.
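The grey-level gradient idea estimates a surface normal directly from the scalar data by central differences and feeds it into a simple lighting model. The sketch below replaces the look-up-table evaluation with a direct dot product for clarity; the names and structure are illustrative.

```cpp
// Grey-level gradient shading at one voxel: central-difference gradient
// followed by a Lambertian (diffuse) term.
#include <cmath>
#include <cstddef>
#include <vector>

struct Volume {
    std::vector<float> data;
    int dimX, dimY, dimZ;
    float at(int x, int y, int z) const {
        return data[(size_t(z) * dimY + y) * dimX + x];
    }
};

// Diffuse intensity at an interior voxel for a given light direction
// (lx, ly, lz), which is assumed to be normalized.
float shadeVoxel(const Volume& vol, int x, int y, int z,
                 float lx, float ly, float lz)
{
    // Central-difference (grey-level) gradient.
    float gx = vol.at(x + 1, y, z) - vol.at(x - 1, y, z);
    float gy = vol.at(x, y + 1, z) - vol.at(x, y - 1, z);
    float gz = vol.at(x, y, z + 1) - vol.at(x, y, z - 1);

    float len = std::sqrt(gx * gx + gy * gy + gz * gz);
    if (len == 0.0f) return 0.0f;                // flat region, no surface

    // Lambertian term; clamp back-facing gradients to zero.
    float diffuse = (gx * lx + gy * ly + gz * lz) / len;
    return diffuse > 0.0f ? diffuse : 0.0f;
}
```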
The Visual Computer | 2015
Per Ganestam; Michael C. Doggett
We present a new method for real-time rendering of multiple recursions of reflections and refractions. The method uses the strengths of real-time ray tracing for objects close to the camera, by storing them in a per-frame constructed bounding volume hierarchy (BVH). For objects further from the camera, rasterization is used to create G-buffers which store an image-based representation of the scene outside the near objects. Rays that exit the BVH continue tracing in the G-buffers' perspective space using ray marching, and can even be reflected back into the BVH. Our hybrid renderer is, to our knowledge, the first method to merge real-time ray tracing techniques with image-based rendering to achieve smooth transitions from accurately ray-traced foreground objects to image-based representations in the background. We are able to achieve more complex reflections and refractions than existing screen-space techniques, and support reflections from off-screen objects. Our results demonstrate that our algorithm is capable of rendering multiple-bounce reflections and refractions, for scenes with millions of triangles, at 720p resolution and above 30 FPS.
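The hybrid control flow can be summarised as: trace the ray in the per-frame BVH while it intersects near objects, and once it exits, continue by ray marching the G-buffer representation. The sketch below shows that decision with stand-in functions; it is a structural illustration, not the authors' implementation.

```cpp
// Structural sketch of the hybrid near/far rendering decision.
struct Ray   { float origin[3], dir[3]; };
struct Color { float r, g, b; };
struct Hit   {
    bool  found;          // ray hit near geometry in the BVH
    Color surface;        // shaded surface colour at the hit
    bool  hasSecondary;   // reflection or refraction was generated
    Ray   secondary;      // the secondary ray
    float blend;          // how much of the secondary result to mix in
};

// Stand-ins for the two renderers: near-object ray tracing in the
// per-frame BVH, and ray marching the image-based G-buffer scene.
static Hit   traceBVH(const Ray&)        { return {false, {0, 0, 0}, false, {}, 0.0f}; }
static Color rayMarchGBuffer(const Ray&) { return {0, 0, 0}; }

Color shadeRay(const Ray& ray, int depth)
{
    if (depth <= 0) return {0, 0, 0};

    Hit hit = traceBVH(ray);
    if (!hit.found)
        return rayMarchGBuffer(ray);     // ray left the BVH: far scene

    Color c = hit.surface;
    if (hit.hasSecondary) {
        // The secondary ray may hit the BVH again or in turn fall
        // through to the G-buffer path.
        Color b = shadeRay(hit.secondary, depth - 1);
        c.r += hit.blend * (b.r - c.r);
        c.g += hit.blend * (b.g - c.g);
        c.b += hit.blend * (b.b - c.b);
    }
    return c;
}
```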