
Publication


Featured research published by Leonard McMillan.


International Conference on Computer Graphics and Interactive Techniques | 1995

Plenoptic modeling: an image-based rendering system

Leonard McMillan; Gary Bishop

Image-based rendering is a powerful new approach for generating real-time photorealistic computer graphics. It can provide convincing animations without an explicit geometric representation. We use the “plenoptic function” of Adelson and Bergen to provide a concise problem statement for image-based rendering paradigms, such as morphing and view interpolation. The plenoptic function is a parameterized function for describing everything that is visible from a given point in space. We present an image-based rendering system based on sampling, reconstructing, and resampling the plenoptic function. In addition, we introduce a novel visible surface algorithm and a geometric invariant for cylindrical projections that is equivalent to the epipolar constraint defined for planar projections.


International Conference on Computer Graphics and Interactive Techniques | 2000

Image-based visual hulls

Wojciech Matusik; Chris Buehler; Ramesh Raskar; Steven J. Gortler; Leonard McMillan

In this paper, we describe an efficient image-based approach to computing and shading visual hulls from silhouette image data. Our algorithm takes advantage of epipolar geometry and incremental computation to achieve a constant rendering cost per rendered pixel. It does not suffer from the computational complexity, limited resolution, or quantization artifacts of previous volumetric approaches. We demonstrate the use of this algorithm in a real-time virtualized reality application running off a small number of video streams.
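The defining property of a visual hull is that a 3D point belongs to it exactly when its projection falls inside every silhouette. A minimal membership test can be sketched as follows (illustrative only; the function name and interfaces are assumptions, and the paper's actual algorithm works per viewing ray with epipolar geometry rather than per point):

```python
import numpy as np

def in_visual_hull(point, silhouettes, projections):
    """Return True if a 3D point projects inside every silhouette.

    point       : (3,) world-space position
    silhouettes : list of 2D boolean masks (True = foreground)
    projections : list of 3x4 camera projection matrices
    (Hypothetical helper for illustration.)
    """
    p = np.append(point, 1.0)  # homogeneous coordinates
    for mask, P in zip(silhouettes, projections):
        u, v, w = P @ p
        if w <= 0:
            return False  # behind this camera
        x, y = int(round(u / w)), int(round(v / w))
        h, wd = mask.shape
        if not (0 <= x < wd and 0 <= y < h) or not mask[y, x]:
            return False  # outside this silhouette cone
    return True
```

A point is rejected as soon as any single view excludes it, which is what makes silhouette-based carving cheap compared with volumetric accumulation.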


International Conference on Computer Graphics and Interactive Techniques | 2001

Unstructured lumigraph rendering

Chris Buehler; Michael Bosse; Leonard McMillan; Steven J. Gortler; Michael F. Cohen

We describe an image-based rendering approach that generalizes many current image-based rendering algorithms, including light field rendering and view-dependent texture mapping. In particular, it allows for lumigraph-style rendering from a set of input cameras in arbitrary configurations (i.e., not restricted to a plane or to any specific manifold). In the case of regular and planar input camera positions, our algorithm reduces to a typical lumigraph approach. When presented with fewer cameras and good approximate geometry, our algorithm behaves like view-dependent texture mapping. The algorithm achieves this flexibility because it is designed to meet a set of specific goals that we describe. We demonstrate this flexibility with a variety of examples.
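The key mechanism behind this generalization is a per-pixel camera blending field: each input camera gets a penalty (e.g., angular deviation from the desired ray), and blending weights fall smoothly to zero at the k-th smallest penalty while summing to one. A simplified sketch under those assumptions (the paper combines several penalty terms; this uses a single scalar penalty per camera):

```python
import numpy as np

def blending_weights(penalties, k=3):
    """k-nearest camera blending in the spirit of unstructured
    lumigraph rendering: weights fade to zero at the k-th smallest
    penalty and are normalized to sum to one.  (Simplified sketch.)
    """
    p = np.asarray(penalties, dtype=float)
    kth = np.sort(p)[min(k, len(p)) - 1]      # k-th smallest penalty
    thresh = max(kth, 1e-9)                   # guard against zero penalties
    w = np.clip(1.0 - p / thresh, 0.0, None)  # fades to zero at threshold
    total = w.sum()
    return w / total if total > 0 else w
```

Because the weight of a camera reaches exactly zero as it leaves the k-nearest set, cameras can enter and exit the blend without popping, which is what makes the field usable for arbitrary camera configurations.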


International Conference on Computer Graphics and Interactive Techniques | 2003

A data-driven reflectance model

Wojciech Matusik; Hanspeter Pfister; Matthew Brand; Leonard McMillan

We present a generative model for isotropic bidirectional reflectance distribution functions (BRDFs) based on acquired reflectance data. Instead of using analytical reflectance models, we represent each BRDF as a dense set of measurements. This allows us to interpolate and extrapolate in the space of acquired BRDFs to create new BRDFs. We treat each acquired BRDF as a single high-dimensional vector taken from a space of all possible BRDFs. We apply both linear (subspace) and non-linear (manifold) dimensionality reduction tools in an effort to discover a lower-dimensional representation that characterizes our measurements. We let users define perceptually meaningful parametrization directions to navigate in the reduced-dimension BRDF space. On the low-dimensional manifold, movement along these directions produces novel but valid BRDFs.
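The linear (subspace) step described above can be sketched with PCA via an SVD of the mean-centered measurement vectors. This is an illustrative sketch only: function names and shapes are assumptions, and the paper additionally applies non-linear manifold methods that this does not cover.

```python
import numpy as np

def brdf_subspace(samples, n_components):
    """Linear (subspace) reduction of acquired BRDF vectors via PCA.

    samples : (n_brdfs, n_measurements) array, one BRDF per row
    Returns the mean vector, the top principal directions, and the
    low-dimensional coordinates of each BRDF.
    """
    mean = samples.mean(axis=0)
    centered = samples - mean
    # SVD of the centered data: rows of vt are principal directions
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]
    coords = centered @ basis.T
    return mean, basis, coords

def reconstruct(mean, basis, coord):
    """Map a low-dimensional coordinate back to a full BRDF vector."""
    return mean + coord @ basis
```

Interpolating or extrapolating the low-dimensional coordinates and mapping back through `reconstruct` is what lets new reflectance vectors be synthesized from the acquired ones.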


International Conference on Computer Graphics and Interactive Techniques | 2000

Dynamically reparameterized light fields

Aaron Isaksen; Leonard McMillan; Steven J. Gortler

This research further develops the light field and lumigraph image-based rendering methods and extends their utility. We present alternate parameterizations that permit 1) interactive rendering of moderately sampled light fields of scenes with significant, unknown depth variation and 2) low-cost, passive autostereoscopic viewing. Using a dynamic reparameterization, these techniques can be used to interactively render photographic effects such as variable focus and depth-of-field within a light field. The dynamic parameterization is independent of scene geometry and does not require actual or approximate geometry of the scene. We explore the frequency domain and ray-space aspects of dynamic reparameterization, and present an interactive rendering technique that takes advantage of today's commodity rendering hardware.
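The variable-focus effect comes from choosing, at render time, which plane the data cameras are resampled against: intersect the desired ray with a movable focal plane, then sample each data camera toward that intersection point. A minimal geometric sketch, assuming a focal plane at constant z (the names and interface here are illustrative, not the paper's code):

```python
import numpy as np

def focal_plane_lookup(ray_origin, ray_dir, camera_pos, focal_depth):
    """Dynamic reparameterization sketch: intersect the desired ray
    with a focal plane at z = focal_depth, then return the unit
    direction a data camera at camera_pos must sample to hit that
    same point.  Scene content near the focal plane reconstructs
    sharply; content far from it blurs, like depth of field.
    """
    o = np.asarray(ray_origin, dtype=float)
    d = np.asarray(ray_dir, dtype=float)
    t = (focal_depth - o[2]) / d[2]   # parametric plane intersection
    hit = o + t * d                   # point on the focal plane
    v = hit - np.asarray(camera_pos, dtype=float)
    return v / np.linalg.norm(v)      # unit sampling direction
```

Moving `focal_depth` interactively is what makes the parameterization "dynamic": no scene geometry is consulted, only the plane the rays are resampled through.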


Interactive 3D Graphics and Games | 1997

Post-rendering 3D warping

William R. Mark; Leonard McMillan; Gary Bishop

A pair of rendered images and their Z-buffers contain almost all of the information necessary to re-render from nearby viewpoints. For the small changes in viewpoint that occur in a fraction of a second, this information is sufficient for high quality re-rendering with cost independent of scene complexity. Re-rendering from previously computed views allows an order-of-magnitude increase in apparent frame rate over that provided by conventional rendering alone. It can also compensate for system latency in local or remote display. We use McMillan and Bishop’s image warping algorithm to re-render, allowing us to compensate for viewpoint translation as well as rotation. We avoid occlusion-related artifacts by warping two different reference images and compositing the results. This paper explains the basic design of our system and provides details of our reconstruction and multi-image compositing algorithms. We present our method for selecting reference image locations and the heuristic we use for any portions of the scene which happen to be occluded in both reference images. We also discuss properties of our technique which make it suitable for real-time implementation, and briefly describe our simpler real-time remote display system.
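The core per-pixel step of such a warp is reprojection: back-project a source pixel using its Z-buffer depth, then project the resulting 3D point into the nearby destination view. A minimal sketch under simple pinhole assumptions (shared intrinsics, 4x4 world-from-camera poses; this illustrates only the reprojection, not the paper's reconstruction or two-image compositing):

```python
import numpy as np

def warp_point(pixel, depth, K, pose_src, pose_dst):
    """Re-project one source pixel with known depth into a nearby
    destination view (the basic step of post-rendering 3D warping).

    pixel    : (u, v) source pixel coordinates
    depth    : depth of that pixel along the source viewing ray
    K        : 3x3 camera intrinsics (shared by both views)
    pose_*   : 4x4 world-from-camera matrices
    """
    u, v = pixel
    # back-project to a 3D point in source camera coordinates
    p_cam = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    p_world = pose_src @ np.append(p_cam, 1.0)
    # re-project into the destination camera
    p_dst = np.linalg.inv(pose_dst) @ p_world
    q = K @ p_dst[:3]
    return q[:2] / q[2]
```

Because each warped pixel costs a fixed number of matrix operations regardless of what the scene contains, the re-rendering cost is independent of scene complexity, as the abstract notes.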


Symposium on Computer Animation | 2002

Stable real-time deformations

Matthias Müller; Julie Dorsey; Leonard McMillan; Robert Jagnow; Barbara Cutler

The linear strain measures that are commonly used in real-time animations of deformable objects yield fast and stable simulations. However, they are not suitable for large deformations. Recently, more realistic results have been achieved in computer graphics by using Green's non-linear strain tensor, but the non-linearity makes the simulation more costly and introduces numerical problems. In this paper, we present a new simulation technique that is stable and fast like linear models, but without the disturbing artifacts that occur with large deformations. As a precomputation step, a linear stiffness matrix is computed for the system. At every time step of the simulation, we compute a tensor field that describes the local rotations of all the vertices in the mesh. This field allows us to compute the elastic forces in a non-rotated reference frame while using the precomputed stiffness matrix. The method can be applied to both finite element models and mass-spring systems. Our approach provides robustness, speed, and a realistic appearance in the simulation of large deformations.
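The trick of evaluating elastic forces in a non-rotated frame can be sketched for a single vertex: un-rotate the current position with the estimated local rotation, measure the displacement against the rest state with the precomputed linear stiffness matrix, then rotate the resulting force back. This single-vertex sketch is an assumption-laden illustration; the paper assembles the same idea per element over a full finite element or mass-spring system.

```python
import numpy as np

def warped_elastic_force(R, K, x, x0):
    """Elastic force with the local rotation factored out.

    R  : estimated local rotation at the vertex (3x3)
    K  : precomputed linear stiffness block (3x3)
    x  : current position
    x0 : rest position
    """
    # un-rotate, apply the linear model, rotate the force back
    return -R @ K @ (R.T @ x - x0)
```

A pure rigid rotation produces zero force (no ghost forces), while genuine stretch is still penalized linearly, which is why the method stays as stable as a linear model under large rotations.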


Eurographics | 2002

A real-time distributed light field camera

Jason Yang; Matthew Everett; Chris Buehler; Leonard McMillan

We present the design and implementation of a real-time, distributed light field camera. Our system allows multiple viewers to navigate virtual cameras in a dynamically changing light field that is captured in real-time. Our light field camera consists of 64 commodity video cameras that are connected to off-the-shelf computers. We employ a distributed rendering algorithm that allows us to overcome the data bandwidth problems inherent in dynamic light fields. Our algorithm works by selectively transmitting only those portions of the video streams that contribute to the desired virtual views. This technique not only reduces the total bandwidth, but it also allows us to scale the number of cameras in our system without increasing network bandwidth. We demonstrate our system with a number of examples.


Eurographics Symposium on Rendering Techniques | 2001

Polyhedral visual hulls for real-time rendering

Wojciech Matusik; Chris Buehler; Leonard McMillan

We present new algorithms for creating and rendering visual hulls in real-time. Unlike voxel or sampled approaches, we compute an exact polyhedral representation for the visual hull directly from the silhouettes. This representation has a number of advantages: 1) it is a view-independent representation, 2) it is well-suited to rendering with graphics hardware, and 3) it can be computed very quickly. We render these visual hulls with a view-dependent texturing strategy, which takes into account visibility information that is computed during the creation of the visual hull. We demonstrate these algorithms in a system that asynchronously renders dynamically created visual hulls in real-time. Our system outperforms similar systems of comparable computational power.


Nature Genetics | 2011

Subspecific origin and haplotype diversity in the laboratory mouse

Hyuna Yang; Jeremy R. Wang; John P. Didion; Ryan J. Buus; Timothy A. Bell; Catherine E. Welsh; François Bonhomme; Alex Hon-Tsen Yu; Michael W. Nachman; Jaroslav Piálek; Priscilla K. Tucker; Pierre Boursot; Leonard McMillan; Gary A. Churchill; Fernando Pardo-Manuel de Villena

Here we provide a genome-wide, high-resolution map of the phylogenetic origin of the genome of most extant laboratory mouse inbred strains. Our analysis is based on the genotypes of wild-caught mice from three subspecies of Mus musculus. We show that classical laboratory strains are derived from a few fancy mice with limited haplotype diversity. Their genomes are overwhelmingly Mus musculus domesticus in origin, and the remainder is mostly of Japanese origin. We generated genome-wide haplotype maps based on identity by descent from fancy mice and show that classical inbred strains have limited and non-randomly distributed genetic diversity. In contrast, wild-derived laboratory strains represent a broad sampling of diversity within M. musculus. Intersubspecific introgression is pervasive in these strains, and contamination by laboratory stocks has played a role in this process. The subspecific origin, haplotype diversity and identity by descent maps can be visualized using the Mouse Phylogeny Viewer (see URLs).

Collaboration


Dive into Leonard McMillan's collaborations.

Top Co-Authors

Fernando Pardo-Manuel de Villena (University of North Carolina at Chapel Hill)
Wei Wang (University of California)
Timothy A. Bell (University of North Carolina at Chapel Hill)
Jingyi Yu (University of Delaware)
Darla R. Miller (University of North Carolina at Chapel Hill)
John P. Didion (University of North Carolina at Chapel Hill)
Andrew P. Morgan (University of North Carolina at Chapel Hill)
David L. Aylor (North Carolina State University)