Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Gary Bishop is active.

Publication


Featured research published by Gary Bishop.


International Conference on Computer Graphics and Interactive Techniques | 1995

Plenoptic modeling: an image-based rendering system

Leonard McMillan; Gary Bishop

Image-based rendering is a powerful new approach for generating real-time photorealistic computer graphics. It can provide convincing animations without an explicit geometric representation. We use the “plenoptic function” of Adelson and Bergen to provide a concise problem statement for image-based rendering paradigms, such as morphing and view interpolation. The plenoptic function is a parameterized function for describing everything that is visible from a given point in space. We present an image-based rendering system based on sampling, reconstructing, and resampling the plenoptic function. In addition, we introduce a novel visible surface algorithm and a geometric invariant for cylindrical projections that is equivalent to the epipolar constraint defined for planar projections.
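
As an illustration of the plenoptic framing (not code from the paper; the helper below and its parameters are hypothetical), this Python sketch treats a cylindrical panorama as one discrete sample of the plenoptic function, recording what is visible in every direction from one point and mapping a viewing direction to panorama coordinates.

    import numpy as np

    def direction_to_cylinder_pixel(d, width, height, v_fov):
        """Map a unit viewing direction onto a cylindrical panorama.

        The panorama is one discrete sample of the plenoptic function:
        it stores what is seen from a single center of projection for
        every direction (theta, phi). Hypothetical helper, not the
        paper's implementation.
        """
        x, y, z = d
        theta = np.arctan2(x, z)             # azimuth around the cylinder axis
        phi = np.arctan2(y, np.hypot(x, z))  # elevation above the horizontal
        u = (theta + np.pi) / (2 * np.pi) * width
        v = (0.5 - phi / v_fov) * height     # assumes |phi| <= v_fov / 2
        return int(u) % width, int(np.clip(v, 0, height - 1))

    # Example: the direction along +z lands in the middle of the panorama.
    print(direction_to_cylinder_pixel((0.0, 0.0, 1.0), width=2048, height=512, v_fov=np.pi / 2))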


Interactive 3D Graphics and Games | 1997

Post-rendering 3D warping

William R. Mark; Leonard McMillan; Gary Bishop

A pair of rendered images and their Z-buffers contain almost all of the information necessary to re-render from nearby viewpoints. For the small changes in viewpoint that occur in a fraction of a second, this information is sufficient for high-quality re-rendering with cost independent of scene complexity. Re-rendering from previously computed views allows an order-of-magnitude increase in apparent frame rate over that provided by conventional rendering alone. It can also compensate for system latency in local or remote display. We use McMillan and Bishop’s image warping algorithm to re-render, allowing us to compensate for viewpoint translation as well as rotation. We avoid occlusion-related artifacts by warping two different reference images and compositing the results. This paper explains the basic design of our system and provides details of our reconstruction and multi-image compositing algorithms. We present our method for selecting reference image locations and the heuristic we use for any portions of the scene that happen to be occluded in both reference images. We also discuss properties of our technique that make it suitable for real-time implementation, and briefly describe our simpler real-time remote display system.
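
The following Python sketch shows the general shape of forward 3D image warping from a Z-buffer, with a per-pixel depth test to resolve overlaps. It is a toy restatement of the idea, not McMillan and Bishop's actual algorithm, and the function name and interfaces are hypothetical.

    import numpy as np

    def forward_warp(image, depth, K, R, t):
        """Forward-warp a rendered reference image to a nearby viewpoint.

        Each reference pixel is unprojected using its Z-buffer depth,
        transformed by the relative pose (R, t), and splatted into the
        new view; a per-pixel z-test keeps the nearest surface. A toy
        sketch of 3D image warping, not the paper's algorithm.
        """
        h, w = depth.shape
        out = np.zeros_like(image)
        zbuf = np.full((h, w), np.inf)
        Kinv = np.linalg.inv(K)
        for v in range(h):
            for u in range(w):
                p = depth[v, u] * (Kinv @ np.array([u, v, 1.0]))  # unproject
                q = K @ (R @ p + t)                               # reproject
                if q[2] <= 0:
                    continue  # behind the new camera
                u2, v2 = int(q[0] / q[2]), int(q[1] / q[2])
                if 0 <= u2 < w and 0 <= v2 < h and q[2] < zbuf[v2, u2]:
                    zbuf[v2, u2] = q[2]   # nearest surface wins the z-test
                    out[v2, u2] = image[v, u]
        return out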


International Conference on Computer Graphics and Interactive Techniques | 1994

Improving static and dynamic registration in an optical see-through HMD

Ronald Azuma; Gary Bishop

In Augmented Reality, see-through HMDs superimpose virtual 3D objects on the real world. This technology has the potential to enhance a user's perception of and interaction with the real world. However, many Augmented Reality applications will not be accepted until we can accurately register virtual objects with their real counterparts. In previous systems, such registration was achieved only from a limited range of viewpoints, when the user kept his head still. This paper offers improved registration in two areas. First, our system demonstrates accurate static registration across a wide variety of viewing angles and positions. An optoelectronic tracker provides the required range and accuracy. Three calibration steps determine the viewing parameters. Second, dynamic errors that occur when the user moves his head are reduced by predicting future head locations. Inertial sensors mounted on the HMD aid head-motion prediction. Accurate determination of prediction distances requires low-overhead operating systems and the elimination of unpredictable sources of latency. On average, prediction with inertial sensors produces errors 2-3 times lower than prediction without inertial sensors and 5-10 times lower than using no prediction at all. Future steps that may further improve registration are outlined.
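
A minimal sketch of the prediction idea, assuming a constant-acceleration motion model (the predictor in the paper is more sophisticated): inertial sensors supply the derivative terms, so the system can render for the head pose expected after the known system latency rather than the last measured pose.

    import numpy as np

    def predict_position(position, velocity, acceleration, dt):
        """Constant-acceleration extrapolation of head position.

        Gyroscope and accelerometer readings supply the derivative
        terms; dt is the system latency to predict across. A toy
        sketch, not the paper's predictor.
        """
        return position + velocity * dt + 0.5 * acceleration * dt**2

    # Example: 60 ms of latency with the head translating at 0.5 m/s.
    p = np.array([0.0, 1.6, 0.0])
    v = np.array([0.5, 0.0, 0.0])
    a = np.array([0.0, 0.0, 0.0])
    print(predict_position(p, v, a, dt=0.060))  # -> [0.03 1.6  0. ]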


International Conference on Computer Graphics and Interactive Techniques | 1997

SCAAT: incremental tracking with incomplete information

Greg Welch; Gary Bishop

The Kalman filter provides a powerful mathematical framework within which a minimum mean-square-error estimate of a user's position and orientation can be tracked using a sequence of single sensor observations, as opposed to groups of observations. We refer to this new approach as single-constraint-at-a-time or SCAAT tracking. The method improves accuracy by properly assimilating sequential observations, filtering sensor measurements, and concurrently autocalibrating mechanical or electrical devices. The method facilitates user motion prediction and multisensor data fusion, and in systems where the observations are only available sequentially, it provides estimates at a higher rate and with lower latency than a multiple-constraint approach. Improved accuracy is realized primarily for three reasons. First, the method avoids mathematically treating truly sequential observations as if they were simultaneous. Second, because each estimate is based on the observation of an individual device, perceived error (statistically unusual estimates) can be more directly attributed to the corresponding device. This can be used for concurrent autocalibration, which can be elegantly incorporated into the existing Kalman filter. Third, the Kalman filter inherently addresses the effects of noisy device measurements. Beyond accuracy, the method nicely facilitates motion prediction because the Kalman filter already incorporates a model of the user's dynamics, and because it provides smoothed estimates of the user state, including potentially unmeasured elements. Finally, in systems where the observations are only available sequentially, the method can be used to weave together information from individual devices in a very flexible manner, producing a new estimate as soon as each individual observation becomes available, thus facilitating multisensor data fusion and improving estimate rates and latencies. The most significant aspect of this work is the introduction and exploration of the SCAAT approach to 3D tracking for virtual environments. However, we also believe that this work may prove to be of interest to the larger scientific and engineering community in addressing a more general class of tracking and estimation problems.
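
For readers unfamiliar with the filter mechanics, here is a generic single-observation Kalman step in Python that illustrates the SCAAT idea of folding in one constraint at a time. This is a textbook sketch, not UNC's implementation; the variable names and shapes are assumptions.

    import numpy as np

    def scaat_step(x, P, z, H, Q, R_noise, F):
        """One SCAAT-style Kalman step with a single observation.

        Rather than batching a full group of sensor readings, the
        state is predicted forward to the observation time and then
        corrected with whichever single constraint (measurement z with
        model H) is available right now. Generic Kalman sketch.
        """
        # Predict: advance the state estimate to the observation time.
        x = F @ x
        P = F @ P @ F.T + Q
        # Correct: fold in the single new measurement.
        y = z - H @ x                    # innovation
        S = H @ P @ H.T + R_noise        # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
        x = x + K @ y
        P = (np.eye(len(x)) - K @ H) @ P
        return x, P

Because each correction uses one device's observation, a statistically unusual innovation y points directly at that device, which is what makes the concurrent autocalibration described above possible.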


International Conference on Computer Graphics and Interactive Techniques | 2000

Relief texture mapping

Manuel M. Oliveira; Gary Bishop; David K. McAllister

We present an extension to texture mapping that supports the representation of 3-D surface details and view motion parallax. The results are correct for viewpoints that are static or moving, far away or nearby. Our approach is very simple: a relief texture (a texture extended with an orthogonal displacement per texel) is mapped onto a polygon using a two-step process. First, it is converted into an ordinary texture using a surprisingly simple 1-D forward transform. The resulting texture is then mapped onto the polygon using standard texture mapping. The 1-D warping functions work in texture coordinates to handle the parallax and visibility changes that result from the 3-D shape of the displacement surface. The subsequent texture-mapping operation handles the transformation from texture to screen coordinates.
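
A simplified sketch of a 1-D forward pre-warp over one texture row, assuming the view dependence is folded into two constants; the exact warping functions in the paper differ, and every name below is hypothetical.

    import numpy as np

    def prewarp_row(colors, displ, k1, k3):
        """Pre-warp one row of a relief texture into an ordinary texture.

        Each texel slides along the row by an amount that depends on
        its height-field displacement and on the viewing direction
        (folded into k1 and k3 here); processing texels in occlusion-
        compatible order lets the first writer at each slot win. A
        simplified 1-D forward warp, not the paper's exact transform.
        """
        n = len(colors)
        out = np.zeros_like(colors)
        written = np.zeros(n, dtype=bool)
        for u in range(n):
            u_new = (u + k1 * displ[u]) / (1.0 + k3 * displ[u])
            j = int(round(u_new))
            if 0 <= j < n and not written[j]:
                out[j] = colors[u]
                written[j] = True  # nearest texel already claimed this slot
        return out

Because the warp is 1-D, visibility within a row reduces to a single ordered pass, which is what makes the transform cheap enough to run before standard texture mapping.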


International Conference on Computer Graphics and Interactive Techniques | 1998

Multiple-center-of-projection images

Paul Rademacher; Gary Bishop

In image-based rendering, images acquired from a scene are used to represent the scene itself. A number of reference images are required to fully represent even the simplest scene. This leads to a number of problems during image acquisition and subsequent reconstruction. We present the multiple-center-of-projection image, a single image acquired from multiple locations, which solves many of the problems of working with multiple range images. This work develops and discusses multiple-center-of-projection images, and explains their advantages over conventional range images for image-based rendering. The contributions include greater flexibility during image acquisition and improved image reconstruction due to greater connectivity information. We discuss the acquisition and rendering of multiple-center-of-projection datasets, and the associated sampling issues. We also discuss the unique epipolar and correspondence properties of this class of image.
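
To make the camera model concrete, the sketch below builds per-pixel rays for a strip-style image in which every column has its own center of projection sampled along a camera path. The path function and parameterization are hypothetical illustrations, not the paper's camera model.

    import numpy as np

    def mcop_rays(path_fn, width, height, v_fov):
        """Build per-pixel rays for a multiple-center-of-projection image.

        Unlike a pinhole image, each column i has its own center of
        projection path_fn(i) along a continuous camera path, so
        adjacent columns stay coherent while the image as a whole sees
        the scene from many locations. Hypothetical illustration.
        """
        origins = np.zeros((height, width, 3))
        dirs = np.zeros((height, width, 3))
        for i in range(width):
            center, forward, up = path_fn(i)      # pose of column i on the path
            for j in range(height):
                phi = (0.5 - j / height) * v_fov  # vertical angle in the column
                d = forward * np.cos(phi) + up * np.sin(phi)
                origins[j, i] = center
                dirs[j, i] = d / np.linalg.norm(d)
        return origins, dirs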


Pacific Conference on Computer Graphics and Applications | 2002

Real-time consensus-based scene reconstruction using commodity graphics hardware

Ruigang Yang; Greg Welch; Gary Bishop

We present a novel use of commodity graphics hardware that effectively combines a plane-sweeping algorithm with view synthesis for real-time, online 3D scene acquisition and view synthesis. Using real-time imagery from a few calibrated cameras, our method can generate new images from nearby viewpoints, estimate a dense depth map from the current viewpoint, or create a textured triangular mesh. We can do this without prior geometric information or any user interaction, in real time and online. The heart of our method is using programmable pixel shader technology to square intensity differences between reference image pixels, and then to choose final colors (or depths) that correspond to the minimum difference, i.e., the most consistent color. In this paper we describe the method, place it in the context of related work in computer graphics and computer vision, and present results.
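
A CPU sketch in numpy of the consensus test the paper maps onto pixel shaders: for each candidate depth plane, score how well the projected reference-camera colors agree and keep the best plane per pixel. The input layout is an assumption made for illustration.

    import numpy as np

    def plane_sweep_depth(ref_pixels_per_plane):
        """Pick, per pixel, the depth plane where the cameras agree most.

        ref_pixels_per_plane[d] holds, for candidate depth plane d, the
        stack of reference-camera colors projected onto that plane
        (shape: n_cameras x H x W x 3). Low summed squared difference
        from the mean color means the cameras are consistent there; the
        winning plane index per pixel is the depth estimate. A CPU
        sketch with a hypothetical input layout, not shader code.
        """
        scores = []
        for colors in ref_pixels_per_plane:         # one stack per depth plane
            mean = colors.mean(axis=0)              # consensus color candidate
            ssd = ((colors - mean) ** 2).sum(axis=(0, -1))
            scores.append(ssd)                      # low score = agreement
        return np.argmin(np.stack(scores), axis=0)  # per-pixel best plane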


International Conference on Computer Graphics and Interactive Techniques | 1999

LDI tree: a hierarchical representation for image-based rendering

Chun Fa Chang; Gary Bishop; Anselmo Lastra

Using multiple reference images in 3D image warping has been a challenging problem. Recently, the Layered Depth Image (LDI) was proposed by Shade et al. to merge multiple reference images under a single center of projection, while maintaining the simplicity of warping a single reference image. However, it does not consider the issue of sampling rate. We present the LDI tree, which combines a hierarchical space partitioning scheme with the concept of the LDI. It preserves the sampling rates of the reference images by adaptively selecting an LDI in the LDI tree for each pixel. When rendering from the LDI tree, we only have to traverse the LDI tree to the levels that are comparable to the sampling rate of the output image. We also present a progressive refinement feature and a “gap filling” algorithm implemented by pre-filtering the LDI tree. We show that the amount of memory required has the same order of growth as the 2D reference images. This also bounds the complexity of rendering time to be less than that of rendering directly from all reference images.
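
A structural sketch, in Python, of what an LDI-tree-like hierarchy might look like: a space-partitioning node that owns its own layered samples and is traversed only until its sampling rate matches the output image. The fields and method names are hypothetical, not the paper's data structure.

    from dataclasses import dataclass, field

    @dataclass
    class LDINode:
        """One cell of an LDI tree: a partitioning node holding its own LDI.

        Each node stores layered depth samples at that cell's resolution;
        children refine the sampling rate. Rendering descends only until
        a node's rate matches the output image, then warps that node's
        samples. A structural sketch with hypothetical fields.
        """
        bounds: tuple                                  # (min_xyz, max_xyz) of the cell
        samples: list = field(default_factory=list)    # layered depth samples
        children: list = field(default_factory=list)   # up to 8 child refinements

        def collect(self, target_rate, own_rate):
            # Stop descending once this node matches the output sampling rate.
            if own_rate >= target_rate or not self.children:
                return self.samples
            out = []
            for child in self.children:
                out += child.collect(target_rate, own_rate * 2)
            return out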


Presence: Teleoperators & Virtual Environments | 2001

High-Performance Wide-Area Optical Tracking: The HiBall Tracking System

Greg Welch; Gary Bishop; Leandra Vicci; Stephen Brumback; Kurtis Keller; D'nardo Colucci

Since the early 1980s, the Tracker Project at the University of North Carolina at Chapel Hill has been working on wide-area head tracking for virtual and augmented environments. Our long-term goal has been to achieve the high performance required for accurate visual simulation throughout our entire laboratory, beyond into the hallways, and eventually even outdoors. In this article, we present results and a complete description of our most recent electro-optical system, the HiBall Tracking System. In particular, we discuss motivation for the geometric configuration and describe the novel optical, mechanical, electronic, and algorithmic aspects that enable unprecedented speed, resolution, accuracy, robustness, and flexibility.


Virtual Reality Software and Technology | 1999

The HiBall Tracker: high-performance wide-area tracking for virtual and augmented environments

Greg Welch; Gary Bishop; Leandra Vicci; Stephen Brumback; Kurtis Keller; D'nardo Colucci

Our HiBall Tracking System generates over 2000 head-pose estimates per second with less than one millisecond of latency, and less than 0.5 millimeters and 0.02 degrees of position and orientation noise, everywhere in a 4.5 by 8.5 meter room. The system is remarkably responsive and robust, enabling VR applications and experiments that previously would have been difficult or even impossible. Previously we published descriptions of only the Kalman filter-based software approach that we call Single-Constraint-at-a-Time tracking. In this paper we describe the complete tracking system, including the novel optical, mechanical, electrical, and algorithmic aspects that enable the unparalleled performance.

Collaboration


Dive into Gary Bishop's collaborations.

Top Co-Authors

Greg Welch (University of Central Florida)

Leonard McMillan (University of North Carolina at Chapel Hill)

Henry Fuchs (University of North Carolina at Chapel Hill)

Jinghe Zhang (University of North Carolina at Chapel Hill)

William R. Mark (University of North Carolina at Chapel Hill)

Manuel M. Oliveira (Universidade Federal do Rio Grande do Sul)

Leandra Vicci (University of North Carolina at Chapel Hill)

Kurtis Keller (University of North Carolina at Chapel Hill)

Richard Superfine (University of North Carolina at Chapel Hill)