Publications


Featured research published by Morgan McGuire.


International Conference on Computer Graphics and Interactive Techniques | 2010

OptiX: a general purpose ray tracing engine

Steven G. Parker; James Bigler; Andreas Dietrich; Heiko Friedrich; Jared Hoberock; David Luebke; David Kirk McAllister; Morgan McGuire; R. Keith Morley; Austin Robison; Martin Stich

The NVIDIA® OptiX™ ray tracing engine is a programmable system designed for NVIDIA GPUs and other highly parallel architectures. The OptiX engine builds on the key observation that most ray tracing algorithms can be implemented using a small set of programmable operations. Consequently, the core of OptiX is a domain-specific just-in-time compiler that generates custom ray tracing kernels by combining user-supplied programs for ray generation, material shading, object intersection, and scene traversal. This enables the implementation of a highly diverse set of ray tracing-based algorithms and applications, including interactive rendering, offline rendering, collision detection systems, artificial intelligence queries, and scientific simulations such as sound propagation. OptiX achieves high performance through a compact object model and application of several ray tracing-specific compiler optimizations. For ease of use it exposes a single-ray programming model with full support for recursion and a dynamic dispatch mechanism similar to virtual function calls.
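
The OptiX API itself is exposed through C/C++ and CUDA; the Python sketch below only illustrates the single-ray programming model the abstract describes, in which user-supplied programs for ray generation, intersection, closest-hit shading, and misses are composed by a recursive trace() call. The toy sphere scene and every function name here are illustrative assumptions, not the OptiX API.

```python
# Conceptual sketch only, NOT the OptiX API: user-supplied "programs" are
# combined by a recursive trace() call, mirroring the single-ray model.

def intersect_sphere(origin, direction, center=(0.0, 0.0, 3.0), radius=1.0):
    """Toy 'intersection program': returns the hit distance t, or None."""
    o = [origin[i] - center[i] for i in range(3)]
    b = 2.0 * sum(o[i] * direction[i] for i in range(3))
    c = sum(x * x for x in o) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - disc ** 0.5) / 2.0
    return t if t > 1e-4 else None

def miss(direction):
    """Toy 'miss program': constant sky brightness."""
    return 0.2

def closest_hit(t, origin, direction, depth):
    """Toy 'closest-hit program': shades the hit and recursively traces a
    secondary ray, illustrating the recursion support the engine exposes."""
    if depth >= 2:
        return 0.5
    hit = tuple(origin[i] + t * direction[i] for i in range(3))
    return 0.5 + 0.5 * trace(hit, (0.0, 1.0, 0.0), depth + 1)

def trace(origin, direction, depth=0):
    """Engine core: run the intersection program, then dispatch to the
    closest-hit or miss program (a stand-in for dynamic dispatch)."""
    t = intersect_sphere(origin, direction)
    return closest_hit(t, origin, direction, depth) if t is not None else miss(direction)

def ray_generation(width=4, height=4):
    """Toy 'ray generation program': one primary ray per pixel."""
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            d = (x / width - 0.5, y / height - 0.5, 1.0)
            norm = sum(v * v for v in d) ** 0.5
            row.append(trace((0.0, 0.0, 0.0), tuple(v / norm for v in d)))
        image.append(row)
    return image

if __name__ == "__main__":
    for row in ray_generation():
        print(["%.2f" % v for v in row])
```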


International Conference on Computer Graphics and Interactive Techniques | 2005

Defocus video matting

Morgan McGuire; Wojciech Matusik; Hanspeter Pfister; John F. Hughes

Video matting is the process of pulling a high-quality alpha matte and foreground from a video sequence. Current techniques require either a known background (e.g., a blue screen) or extensive user interaction (e.g., to specify known foreground and background elements). The matting problem is generally under-constrained, since not enough information has been collected at capture time. We propose a novel, fully autonomous method for pulling a matte using multiple synchronized video streams that share a point of view but differ in their plane of focus. The solution is obtained by directly minimizing the error in filter-based image formation equations, which are over-constrained by our rich data stream. Our system solves the fully dynamic video matting problem without user assistance: both the foreground and background may be high frequency and have dynamic content, the foreground may resemble the background, and the scene is lit by natural (as opposed to polarized or collimated) illumination.
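
The "filter-based image formation equations" can be sketched schematically as follows (notation mine, not the paper's exact formulation): the pinhole stream constrains the standard matting equation, the foreground- and background-focused streams observe the out-of-focus layer blurred by defocus kernels, and the matte is recovered by minimizing the total modeling error across all streams.

```latex
% Schematic only; the paper's actual formation model and kernels differ.
\begin{aligned}
I_{\mathrm{pinhole}}           &= \alpha F + (1-\alpha)\,B \\
I_{\mathrm{foreground\ focus}} &\approx \alpha F + \bigl[(1-\alpha)\,B\bigr] \otimes h_B \\
I_{\mathrm{background\ focus}} &\approx \bigl[\alpha F\bigr] \otimes h_F + (1-\alpha)\,B \\[2pt]
(\hat{\alpha},\hat{F},\hat{B}) &= \arg\min_{\alpha,\,F,\,B}\;
  \sum_{k\in\mathrm{streams}} \bigl\lVert I_k^{\mathrm{observed}} - I_k^{\mathrm{model}} \bigr\rVert^2
\end{aligned}
```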


High Performance Graphics | 2009

Hardware-accelerated global illumination by image space photon mapping

Morgan McGuire; David Luebke

We describe an extension to photon mapping that recasts the most expensive steps of the algorithm -- the initial and final photon bounces -- as image-space operations amenable to GPU acceleration. This enables global illumination for real-time applications as well as accelerating it for offline rendering. Image Space Photon Mapping (ISPM) rasterizes a light-space bounce map of emitted photons surviving initial-bounce Russian roulette sampling on a GPU. It then traces photons conventionally on the CPU. Traditional photon mapping estimates final radiance by gathering photons from a k-d tree. ISPM instead scatters indirect illumination by rasterizing an array of photon volumes. Each volume bounds a filter kernel based on the a priori probability density of each photon path. These two steps exploit the fact that initial path segments from point lights and final ones into a pinhole camera each have a common center of projection. An optional step uses joint bilateral upsampling of irradiance to reduce the fill requirements of rasterizing photon volumes. ISPM preserves the accurate and physically-based nature of photon mapping, supports arbitrary BSDFs, and captures both high- and low-frequency illumination effects such as caustics and diffuse color interreflection. An implementation on a consumer GPU and 8-core CPU renders high-quality global illumination at up to 26 Hz at HD (1920x1080) resolution, for complex scenes containing moving objects and lights.
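
As a small, isolated illustration of one step named above, the sketch below shows initial-bounce Russian roulette on its own: each emitted photon survives with probability equal to an assumed surface reflectivity, and survivors are reweighted so the overall estimate stays unbiased. The albedo value and data layout are assumptions, not the paper's implementation.

```python
import random

def russian_roulette_bounce(photons, albedo=0.6, rng=random.Random(0)):
    """photons: list of dicts with a 'power' entry. Returns surviving photons."""
    survivors = []
    for p in photons:
        if rng.random() < albedo:              # photon continues past the first bounce
            q = dict(p)
            q["power"] = p["power"] / albedo   # reweight to compensate for terminations
            survivors.append(q)
        # otherwise the photon is absorbed and contributes nothing further
    return survivors

if __name__ == "__main__":
    emitted = [{"power": 1.0} for _ in range(1000)]
    surviving = russian_roulette_bounce(emitted)
    print(len(surviving), "photons survive; total power ~",
          round(sum(p["power"] for p in surviving), 1))
```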


Journal of Visual Communication and Image Representation | 2003

Analysis of image registration noise due to rotationally dependent aliasing

Harold S. Stone; Bo Tao; Morgan McGuire

This paper investigates factors that degrade the precision of image registration based on phase correlation. The major sources of error are interpolation error and rotationally dependent aliasing. The latter error stems from the fact that the discrete Fourier transform does not commute with the rotation of sampled images, whereas in the continuous domain the corresponding operations do commute. We show through a series of examples how much the various sources of error contribute to phase-correlation registration, and we demonstrate constructive techniques for improving precision and signal-to-noise ratio in the registration process. Since rotationally dependent aliasing is exacerbated by the presence of high frequencies, the examples demonstrate that the use of a Blackman window removes spurious high frequencies in the spectral leakage created by the image boundary and greatly reduces aliasing effects. Since the remaining aliasing effects are strongest in the low frequencies of the Fourier transform, their effects can be reduced to a negligible amount by removing frequencies within a radius of N/4 of the Fourier-domain origin. A third technique is to perform phase correlation over half the Fourier plane rather than over the full plane, which more than doubles the signal-to-noise ratio of phase correlation. For an example image, the combination of techniques improved the phase-correlation signal-to-noise ratio from 8.5 to 172 and raised the peak from 0.348 to 0.885, which are substantially higher values than previously reported.
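
A hedged numpy sketch of the pipeline the abstract suggests: phase correlation with a Blackman window applied before the transform and with frequencies inside a radius of N/4 of the origin removed. The noise test image and parameter choices are illustrative; this is not the authors' code.

```python
import numpy as np

def phase_correlate(a, b, low_freq_radius_frac=0.25):
    """Estimate the translation of a relative to b by windowed phase correlation."""
    n = a.shape[0]
    w = np.blackman(n)
    window = np.outer(w, w)                          # suppress boundary spectral leakage
    cross = np.fft.fft2(a * window) * np.conj(np.fft.fft2(b * window))
    cross /= np.abs(cross) + 1e-12                   # keep phase only
    fy, fx = np.meshgrid(np.fft.fftfreq(n) * n, np.fft.fftfreq(n) * n, indexing="ij")
    cross[np.hypot(fx, fy) < low_freq_radius_frac * n] = 0.0   # drop radius N/4 around DC
    surface = np.real(np.fft.ifft2(cross))
    return np.unravel_index(np.argmax(surface), surface.shape), surface

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.standard_normal((128, 128))
    shifted = np.roll(img, shift=(5, 9), axis=(0, 1))
    peak, _ = phase_correlate(shifted, img)
    print("estimated shift:", peak)                  # should land at or near (5, 9)
```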


IEEE Computer Graphics and Applications | 2007

Optical Splitting Trees for High-Precision Monocular Imaging

Morgan McGuire; Wojciech Matusik; Hanspeter Pfister; Billy Chen; John F. Hughes; Shree K. Nayar

In this article, we consider the design of monocular multiview optical systems that form optical splitting trees, where the optical path topology takes the shape of a tree because of recursive beam splitting. Designing optical splitting trees is challenging when many views with specific spectral properties are required. We introduce a manual design paradigm for optical splitting trees and a computer-assisted design tool to create efficient splitting-tree cameras. The tool accepts as input a specification for each view and a set of weights describing the user's relative affinity for efficiency, measurement accuracy, and economy. An optimizer then searches for a design that maximizes these weighted priorities. Our tool's output is a splitting-tree design that implements the input specification and an analysis of the efficiency of each root-to-leaf path. Automatically designed trees appear comparable to those designed by hand; we even show some cases where they are superior. With the help of the optimizer, the system demonstrates high dynamic range, focusing, matting, and hybrid imaging implemented on a single, reconfigurable camera containing eight sensors.
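
A minimal sketch of the weighted design objective implied above: candidate splitting-tree designs are scored by a weighted sum of efficiency, measurement accuracy, and economy, and the best-scoring candidate is kept. The scoring terms, candidate values, and brute-force search are assumptions for illustration, not the paper's optimizer.

```python
def score(design, w_efficiency, w_accuracy, w_economy):
    """design: hypothetical dict with per-criterion values in [0, 1]."""
    return (w_efficiency * design["efficiency"]
            + w_accuracy * design["accuracy"]
            + w_economy * design["economy"])

def best_design(candidates, weights):
    """Pick the candidate that maximizes the user's weighted priorities."""
    return max(candidates, key=lambda d: score(d, *weights))

if __name__ == "__main__":
    # Toy candidate splitting-tree designs with made-up criterion values.
    candidates = [
        {"name": "A", "efficiency": 0.90, "accuracy": 0.60, "economy": 0.40},
        {"name": "B", "efficiency": 0.60, "accuracy": 0.90, "economy": 0.70},
        {"name": "C", "efficiency": 0.50, "accuracy": 0.50, "economy": 0.95},
    ]
    print(best_design(candidates, (0.2, 0.6, 0.2))["name"])   # weights favor accuracy
```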


International Conference on Computer Graphics and Interactive Techniques | 2006

Dynamo: dynamic, data-driven character control with adjustable balance

Pawel Wrotek; Odest Chadwicke Jenkins; Morgan McGuire

Dynamo (DYNAmic MOtion capture) is an approach to controlling animated characters in a dynamic virtual world. Leveraging existing methods, characters are simultaneously physically simulated and driven to perform kinematic motion (from mocap or other sources). Continuous simulation allows characters to interact more realistically than methods that alternate between ragdoll simulation and pure motion capture. The novel contributions of Dynamo are world-space torques for increased stability and a weak root spring for plausible balance. Promoting joint target angles from the traditional parent-bone reference frame to the world-space reference frame allows a character to set and maintain poses robust to dynamic interactions. It also produces physically plausible transitions between motions without explicit blending. These properties are maintained over a wide range of servo gain constants, making Dynamo significantly easier to tune than parent-space control systems. The weak root spring tempers our world-space model to account for external constraints that should break balance. This root spring provides an adjustable parameter that allows characters to fall when significantly unbalanced or struck with extreme force. We demonstrate Dynamo through in-game simulations of characters walking, running, jumping, and fighting on uneven terrain while experiencing dynamic external forces. We show that an implementation using standard physics (ODE) and graphics (G3D/OpenGL) engines can drive game-like applications with hundreds of rigid bodies and tens of characters, using about 0.002s of CPU time per frame.
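
A hedged sketch of the world-space servo idea: a PD controller drives each bone toward a target orientation expressed in the world frame rather than the parent-bone frame, so the torque does not depend on where the parent happens to be. The axis-angle error computation and the gain values are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def axis_angle_error(r_current, r_target):
    """Axis * angle of the rotation taking the current world orientation to the
    target world orientation; both arguments are 3x3 rotation matrices."""
    r = r_target @ r_current.T
    angle = np.arccos(np.clip((np.trace(r) - 1.0) / 2.0, -1.0, 1.0))
    if angle < 1e-8:
        return np.zeros(3)
    axis = np.array([r[2, 1] - r[1, 2], r[0, 2] - r[2, 0], r[1, 0] - r[0, 1]])
    return axis / (2.0 * np.sin(angle)) * angle

def world_space_servo_torque(r_current, r_target, omega_world, kp=200.0, kd=20.0):
    """PD torque in world space: spring toward the target pose, damp the
    bone's current world-space angular velocity."""
    return kp * axis_angle_error(r_current, r_target) - kd * omega_world

if __name__ == "__main__":
    # Target: a 30-degree rotation about the world z axis; the bone is at rest.
    c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
    target = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    print(world_space_servo_torque(np.eye(3), target, np.zeros(3)))
```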


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1999

The translation sensitivity of wavelet-based registration

Harold S. Stone; J. Le Moigne; Morgan McGuire

This paper studies the effects of image translation on wavelet-based image registration. The main result is that the normalized correlation coefficients of low-pass Haar and Daubechies wavelet subbands are essentially insensitive to translations for features larger than twice the wavelet blocksize. The third-level low-pass subbands produce a correlation peak that varies with translation between 0.7 and 1.0, with an average in excess of 0.9. Translation sensitivity is limited to the high-pass subband, and even this subband is potentially useful. The correlation peak for high-pass subbands derived from first- and second-level low-pass subbands ranges from about 0.0 to 1.0, with an average of about 0.5 for Daubechies and 0.7 for Haar. We use a mathematical model to develop these results, and confirm them on real data.
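
A hedged numpy sketch of the kind of measurement the abstract summarizes: compute the third-level Haar low-pass subbands (2x2 block averaging per level) of an image and of a translated copy, then report their normalized correlation. The synthetic smooth test image and the shift are illustrative assumptions.

```python
import numpy as np

def haar_lowpass(img, levels=3):
    """Repeated 2x2 block averaging = the Haar low-pass subband at each level."""
    out = img.astype(float)
    for _ in range(levels):
        out = 0.25 * (out[0::2, 0::2] + out[1::2, 0::2]
                      + out[0::2, 1::2] + out[1::2, 1::2])
    return out

def normalized_correlation(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Smooth synthetic image, so features are larger than twice the blocksize.
    fy, fx = np.meshgrid(np.fft.fftfreq(256), np.fft.fftfreq(256), indexing="ij")
    lowpass = np.exp(-np.hypot(fx, fy) * 100.0)
    img = np.real(np.fft.ifft2(np.fft.fft2(rng.standard_normal((256, 256))) * lowpass))
    shifted = np.roll(img, shift=(3, 5), axis=(0, 1))
    print("third-level low-pass correlation: %.3f"
          % normalized_correlation(haar_lowpass(img), haar_lowpass(shifted)))
```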


IEEE Transactions on Geoscience and Remote Sensing | 2000

Techniques for multiresolution image registration in the presence of occlusions

Morgan McGuire; Harold S. Stone

This paper describes and compares image registration techniques for situations in which one or both candidate images are partially occluded. Both fractional masks (introduced here) and binary masks improve registration accuracy. Efficient mask-based algorithms operate at low resolution on low-pass wavelet subbands and correlate images in the frequency domain.
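
A hedged sketch of masked frequency-domain correlation in the spirit of the abstract: occluded pixels get zero weight, both the correlation sum and the per-shift count of jointly unoccluded pixels are computed with FFTs, and the normalized surface is searched for its peak. This illustrates the idea only; it is not the paper's algorithm and it omits the wavelet low-pass stage.

```python
import numpy as np

def masked_correlation(a, mask_a, b, mask_b):
    """Per-shift correlation over jointly unoccluded pixels, normalized by the
    number of such pixels at each shift (all circular sums done with FFTs)."""
    F = np.fft.fft2
    corr = np.real(np.fft.ifft2(F(a * mask_a) * np.conj(F(b * mask_b))))
    overlap = np.real(np.fft.ifft2(F(mask_a) * np.conj(F(mask_b))))
    return corr / np.maximum(overlap, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    base = rng.standard_normal((64, 64))
    shifted = np.roll(base, shift=(4, 7), axis=(0, 1))
    mask_shifted = np.ones_like(base)
    mask_shifted[:20, :20] = 0.0           # occlude one corner of the shifted image
    mask_base = np.ones_like(base)
    mask_base[-20:, -20:] = 0.0            # occlude a different corner of the other
    score = masked_correlation(shifted, mask_shifted, base, mask_base)
    print("estimated shift:", np.unravel_index(np.argmax(score), score.shape))
    # expected to land at or near (4, 7) despite the occlusions
```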


Interactive 3D Graphics and Games | 2011

Subpixel reconstruction antialiasing for deferred shading

Matthäus G. Chajdas; Morgan McGuire; David Luebke

Subpixel Reconstruction Antialiasing (SRAA) combines single-pixel (1x) shading with subpixel visibility to create antialiased images without increasing the shading cost. SRAA targets deferred-shading renderers, which cannot use multisample antialiasing. SRAA operates as a post-process on a rendered image with superresolution depth and normal buffers, so it can be incorporated into an existing renderer without modifying the shaders. In this way SRAA resembles Morphological Antialiasing (MLAA), but the new algorithm can better respect geometric boundaries and has a fixed runtime independent of scene and image complexity. SRAA benefits shading-bound applications. For example, our implementation evaluates SRAA in 1.8 ms (1280x720) to yield antialiasing quality comparable to 4-16x shading. Thus SRAA would produce a net speedup over supersampling for applications that spend 1 ms or more on shading; for comparison, most modern games spend 5-10 ms shading. We also describe simplifications that increase performance by reducing quality.
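
A hedged, heavily simplified sketch of the reconstruction idea: each subpixel in a 2x superresolution geometry buffer borrows shading from nearby pixel centers, weighted by how similar its depth and normal are to theirs. The Gaussian weights, the 3x3 neighborhood, and the buffer layout are assumptions for illustration, not the actual SRAA filter.

```python
import numpy as np

def sraa_like_reconstruct(shaded, depth_hi, normal_hi, sigma_d=0.05, sigma_n=0.2):
    """shaded: (H, W, 3) one shaded color per pixel; depth_hi: (2H, 2W) and
    normal_hi: (2H, 2W, 3) superresolution geometry. Returns a (2H, 2W, 3)
    image where every subpixel is a geometry-aware blend of nearby shading."""
    H, W, _ = shaded.shape
    out = np.zeros_like(normal_hi)
    for sy in range(2 * H):
        for sx in range(2 * W):
            d, n = depth_hi[sy, sx], normal_hi[sy, sx]
            acc = np.zeros(3)
            wsum = 0.0
            for py in range(max(sy // 2 - 1, 0), min(sy // 2 + 2, H)):
                for px in range(max(sx // 2 - 1, 0), min(sx // 2 + 2, W)):
                    dc = depth_hi[2 * py, 2 * px]       # geometry at the pixel center
                    nc = normal_hi[2 * py, 2 * px]
                    w = (np.exp(-((d - dc) / sigma_d) ** 2)
                         * np.exp(-np.sum((n - nc) ** 2) / sigma_n ** 2))
                    acc += w * shaded[py, px]
                    wsum += w
            out[sy, sx] = acc / max(wsum, 1e-8)
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    shaded = rng.random((8, 8, 3))
    depth_hi = rng.random((16, 16))
    normal_hi = rng.standard_normal((16, 16, 3))
    normal_hi /= np.linalg.norm(normal_hi, axis=-1, keepdims=True)
    print(sraa_like_reconstruct(shaded, depth_hi, normal_hi).shape)   # (16, 16, 3)
```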


Interactive 3D Graphics and Games | 2011

A local image reconstruction algorithm for stochastic rendering

Peter Shirley; Timo Aila; Jonathan Cohen; Eric Enderton; Samuli Laine; David Luebke; Morgan McGuire

Stochastic renderers produce unbiased but noisy images of scenes that include the advanced camera effects of motion and defocus blur and possibly other effects such as transparency. We present a simple algorithm that selectively adds bias in the form of image-space blur to pixels that are unlikely to have high-frequency content in the final image. For each pixel, we sweep once through a fixed neighborhood of samples in front-to-back order, using a simple accumulation scheme. We achieve good-quality images with only 16 samples per pixel, making the algorithm potentially practical for interactive stochastic rendering in the near future.
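
A hedged sketch of one plausible "simple accumulation scheme": the samples gathered from a pixel's neighborhood are swept in front-to-back order with over-style compositing and an early exit once the pixel is effectively opaque. The per-sample coverage model is an assumption for illustration, not the paper's reconstruction filter.

```python
import numpy as np

def reconstruct_pixel(samples):
    """samples: iterable of (depth, coverage, rgb) tuples from the pixel's fixed
    neighborhood, with coverage in [0, 1]. Front-to-back accumulation: nearer
    samples progressively occlude farther ones."""
    color = np.zeros(3)
    transmittance = 1.0
    for depth, coverage, rgb in sorted(samples, key=lambda s: s[0]):
        color += transmittance * coverage * np.asarray(rgb)
        transmittance *= 1.0 - coverage
        if transmittance < 1e-3:          # early exit: pixel is effectively opaque
            break
    return color

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    # 16 stochastic samples per pixel, matching the count quoted in the abstract.
    samples = [(rng.random(), rng.random(), rng.random(3)) for _ in range(16)]
    print(reconstruct_pixel(samples))
```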

Collaboration


Dive into Morgan McGuire's collaborations.

Top Co-Authors


Wojciech Matusik

Massachusetts Institute of Technology
