Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Greg Welch is active.

Publication


Featured research published by Greg Welch.


international conference on computer graphics and interactive techniques | 1998

The office of the future: a unified approach to image-based modeling and spatially immersive displays

Ramesh Raskar; Greg Welch; Matt Cutts; Adam Lake; Lev Stesin; Henry Fuchs

This paper introduces ideas, proposed technologies, and initial results for an "office of the future": an ordinary office in which the walls, desk, and other everyday surfaces double as display surfaces. Cameras and imperceptible structured light capture the geometry and reflectance of the visible surfaces in real time, and projectors render head-tracked, stereoscopic imagery directly onto those surfaces, unifying image-based modeling with spatially immersive displays for applications such as tele-immersive collaboration.


International Journal of Computer Vision | 2008

Detailed Real-Time Urban 3D Reconstruction from Video

Marc Pollefeys; David Nistér; Jan Michael Frahm; Amir Akbarzadeh; Philippos Mordohai; Brian Clipp; Chris Engels; David Gallup; Seon Joo Kim; Paul Merrell; C. Salmi; Sudipta N. Sinha; B. Talton; Liang Wang; Qingxiong Yang; Henrik Stewenius; Ruigang Yang; Greg Welch; Herman Towles

The paper presents a system for automatic, geo-registered, real-time 3D reconstruction of urban scenes from video. The system collects video streams, as well as GPS and inertial measurements, in order to place the reconstructed models in geo-registered coordinates. It is designed using current state-of-the-art real-time modules for all processing steps, and employs commodity graphics hardware and standard CPUs to achieve real-time performance. We present the main considerations in designing the system and the steps of the processing pipeline. Our system extends existing algorithms to meet the robustness and variability necessary to operate out of the lab. To account for the large dynamic range of outdoor videos, the processing pipeline estimates global camera gain changes in the feature-tracking stage and efficiently compensates for these in stereo estimation without impacting real-time performance. The required accuracy for many applications is achieved with a two-step stereo reconstruction process exploiting the redundancy across frames. We show results on real video sequences comprising hundreds of thousands of frames.
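The gain-compensation step lends itself to a short illustration. The sketch below is a simplified stand-in for the paper's approach (the function names and the single global-gain model are assumptions, not the authors' formulation): it estimates one multiplicative gain between frames from intensities at tracked features, then folds that gain into a sum-of-squared-differences matching cost.

```python
import numpy as np

def estimate_gain(ref_intensities, cur_intensities):
    """Least-squares estimate of a single global gain g such that
    cur ~= g * ref, using intensities sampled at tracked feature points.
    (Illustrative only; hypothetical simplification of the paper's method.)"""
    ref = np.asarray(ref_intensities, dtype=np.float64)
    cur = np.asarray(cur_intensities, dtype=np.float64)
    return float(ref @ cur) / float(ref @ ref)

def matching_cost(ref_patch, cur_patch, gain):
    """Sum of squared differences after compensating the current frame
    for the estimated gain change."""
    return float(np.sum((cur_patch / gain - ref_patch) ** 2))

# Toy usage: the second frame is 1.3x brighter than the first.
rng = np.random.default_rng(0)
ref = rng.uniform(50, 200, size=100)
cur = 1.3 * ref + rng.normal(0, 1, size=100)
g = estimate_gain(ref, cur)           # recovers ~1.3
print(g, matching_cost(ref, cur, g))  # cost is small after compensation
```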


IEEE Computer Graphics and Applications | 2002

Motion tracking: no silver bullet, but a respectable arsenal

Greg Welch; Eric Foxlin

This article introduces the physical principles underlying the variety of approaches to motion tracking. Although no single technology will work for all purposes, certain methods work quite well for specific applications.


international conference on computer graphics and interactive techniques | 1997

SCAAT: incremental tracking with incomplete information

Greg Welch; Gary Bishop

The Kalman filter provides a powerful mathematical framework within which a minimum mean-square-error estimate of a user's position and orientation can be tracked using a sequence of single sensor observations, as opposed to groups of observations. We refer to this new approach as single-constraint-at-a-time or SCAAT tracking. The method improves accuracy by properly assimilating sequential observations, filtering sensor measurements, and concurrently autocalibrating mechanical or electrical devices. The method facilitates user motion prediction, multisensor data fusion, and, in systems where the observations are only available sequentially, provides estimates at a higher rate and with lower latency than a multiple-constraint approach. Improved accuracy is realized primarily for three reasons. First, the method avoids mathematically treating truly sequential observations as if they were simultaneous. Second, because each estimate is based on the observation of an individual device, perceived error (statistically unusual estimates) can be more directly attributed to the corresponding device. This can be used for concurrent autocalibration, which can be elegantly incorporated into the existing Kalman filter. Third, the Kalman filter inherently addresses the effects of noisy device measurements. Beyond accuracy, the method nicely facilitates motion prediction because the Kalman filter already incorporates a model of the user's dynamics, and because it provides smoothed estimates of the user state, including potentially unmeasured elements. Finally, in systems where the observations are only available sequentially, the method can be used to weave together information from individual devices in a very flexible manner, producing a new estimate as soon as each individual observation becomes available, thus facilitating multisensor data fusion and improving the estimate rates and latencies. The most significant aspect of this work is the introduction and exploration of the SCAAT approach to 3D tracking for virtual environments. However, I also believe that this work may prove to be of interest to the larger scientific and engineering community in addressing a more general class of tracking and estimation problems.
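The core of the approach is compact enough to sketch. Below is a minimal, illustrative Kalman filter (an assumed 1D constant-velocity model with made-up noise values, not the authors' implementation) that assimilates a single scalar observation per update, the defining feature of SCAAT tracking:

```python
import numpy as np

def scaat_step(x, P, z, h, dt, q=1e-3, r=1e-2):
    """One SCAAT update: time-propagate the state, then assimilate a single
    scalar observation z with measurement row h (here h = [1, 0] observes
    position only, i.e. incomplete information about the full state)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-velocity model
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])          # process noise
    x = F @ x                                    # predict state
    P = F @ P @ F.T + Q                          # predict covariance
    H = np.atleast_2d(h)
    S = float(H @ P @ H.T) + r                   # innovation variance
    K = (P @ H.T) / S                            # Kalman gain (2x1)
    x = x + (K * (z - float(H @ x))).ravel()     # assimilate one observation
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Track a target moving at 0.5 units/s from noisy sequential observations.
x, P = np.zeros(2), np.eye(2)
for k in range(100):
    z = 0.5 * (k + 1) * 0.01 + np.random.normal(0, 0.1)
    x, P = scaat_step(x, P, z, h=[1.0, 0.0], dt=0.01)
print(x)  # position near 0.5, velocity estimate converging toward 0.5
```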


ieee visualization | 1999

Multi-projector displays using camera-based registration

Ramesh Raskar; Michael S. Brown; Ruigang Yang; Wei-Chao Chen; Greg Welch; Herman Towles; B. Scales; Henry Fuchs

Conventional projector-based display systems are typically designed around precise and regular configurations of projectors and display surfaces. While this results in rendering simplicity and speed, it also means painstaking construction and ongoing maintenance. In previously published work, we introduced a vision of projector-based displays constructed from a collection of casually arranged projectors and display surfaces. In this paper, we present flexible yet practical methods for realizing this vision, enabling low-cost mega-pixel display systems with large physical dimensions, higher resolution, or both. The techniques afford new opportunities to build personal 3D visualization systems in offices, conference rooms, theaters, or even your living room. As a demonstration of the simplicity and effectiveness of the methods that we continue to perfect, we show in the included video that a 10-year-old child can construct and calibrate a two-camera, two-projector, head-tracked display system, all in about 15 minutes.
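For the common case of a roughly planar display surface, camera-based registration reduces to estimating one homography per projector. The sketch below uses made-up correspondences (the paper's full method also handles irregular surfaces, which this does not attempt) to estimate the homography with OpenCV and pre-warp a framebuffer so the projected image lands correctly on the surface:

```python
import numpy as np
import cv2

# Hypothetical correspondences: projector pixels -> camera (surface) pixels,
# e.g. from fiducials projected by the projector and detected by a camera.
proj_pts = np.array([[0, 0], [1024, 0], [1024, 768], [0, 768],
                     [512, 384]], dtype=np.float32)
cam_pts = np.array([[90, 120], [880, 95], [905, 700], [70, 680],
                    [490, 400]], dtype=np.float32)

# H maps projector pixels to surface coordinates as seen by the camera.
H, _ = cv2.findHomography(proj_pts, cam_pts, cv2.RANSAC)

frame = np.full((768, 1024, 3), 255, dtype=np.uint8)  # desired image
# Pre-warp by the inverse homography so that, once projected, each
# projector pixel deposits the desired color at its surface location.
prewarped = cv2.warpPerspective(frame, np.linalg.inv(H), (1024, 768))
```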


pacific conference on computer graphics and applications | 2002

Real-time consensus-based scene reconstruction using commodity graphics hardware

Ruigang Yang; Greg Welch; Gary Bishop

We present a novel use of commodity graphics hardware that effectively combines a plane-sweeping algorithm with view synthesis for real-time, on-line 3D scene acquisition and view synthesis. Using real-time imagery from a few calibrated cameras, our method can generate new images from nearby viewpoints, estimate a dense depth map from the current viewpoint, or create a textured triangular mesh. We can do this without prior geometric information or requiring any user interaction, in real time and on line. The heart of our method is using programmable pixel shader technology to square intensity differences between reference image pixels, and then to choose final colors (or depths) that correspond to the minimum difference, i.e. the most consistent color. In this paper we describe the method, place it in the context of related work in computer graphics and computer vision, and present results.
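The consensus step is easy to illustrate in scalar form. The following CPU sketch (the paper runs the equivalent in pixel shaders on graphics hardware; `warp_to_plane` is a hypothetical helper standing in for the per-camera homography warps) sweeps a set of depth planes and keeps, per pixel, the depth with the most consistent color:

```python
import numpy as np

def plane_sweep(ref_images, warp_to_plane, depths):
    """ref_images: list of HxW grayscale images from calibrated cameras.
    warp_to_plane(img, d): hypothetical helper that reprojects an image
    via the plane at depth d (a homography per camera in a real system).
    Returns per-pixel depth estimates chosen by minimum color variance."""
    h, w = ref_images[0].shape
    best_cost = np.full((h, w), np.inf)
    best_depth = np.zeros((h, w))
    for d in depths:
        warped = [warp_to_plane(img, d) for img in ref_images]
        mean = np.mean(warped, axis=0)
        # Sum of squared differences from the mean across cameras:
        # small when the reprojected colors agree, i.e. are consistent.
        cost = sum(np.square(wimg - mean) for wimg in warped)
        mask = cost < best_cost
        best_cost[mask] = cost[mask]
        best_depth[mask] = d
    return best_depth

# Trivial usage with an identity "warp" just to exercise the code path:
imgs = [np.random.rand(4, 4) for _ in range(3)]
print(plane_sweep(imgs, lambda img, d: img, depths=[1.0, 2.0, 4.0]))
```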


international symposium on 3d data processing visualization and transmission | 2006

Towards Urban 3D Reconstruction from Video

Amir Akbarzadeh; Jan Michael Frahm; Philippos Mordohai; Brian Clipp; Chris Engels; David Gallup; Paul Merrell; M. Phelps; Sudipta N. Sinha; B. Talton; Liang Wang; Qingxiong Yang; Henrik Stewenius; Ruigang Yang; Greg Welch; Herman Towles; David Nistér; Marc Pollefeys

The paper introduces a data collection system and a processing pipeline for automatic geo-registered 3D reconstruction of urban scenes from video. The system collects multiple video streams, as well as GPS and INS measurements, in order to place the reconstructed models in geo-registered coordinates. Besides high quality in terms of both geometry and appearance, we aim at real-time performance. Even though our processing pipeline is currently far from real-time, we select techniques and design processing modules that can achieve fast performance on multiple CPUs and GPUs, aiming at real-time performance in the near future. We present the main considerations in designing the system and the steps of the processing pipeline. We show results on real video sequences captured by our system.


Presence: Teleoperators & Virtual Environments | 2001

High-Performance Wide-Area Optical Tracking: The HiBall Tracking System

Greg Welch; Gary Bishop; Leandra Vicci; Stephen Brumback; Kurtis Keller; D'nardo Colucci

Since the early 1980s, the Tracker Project at the University of North Carolina at Chapel Hill has been working on wide-area head tracking for virtual and augmented environments. Our long-term goal has been to achieve the high performance required for accurate visual simulation throughout our entire laboratory, beyond into the hallways, and eventually even outdoors. In this article, we present results and a complete description of our most recent electro-optical system, the HiBall Tracking System. In particular, we discuss motivation for the geometric configuration and describe the novel optical, mechanical, electronic, and algorithmic aspects that enable unprecedented speed, resolution, accuracy, robustness, and flexibility.


virtual reality software and technology | 1999

The HiBall Tracker: high-performance wide-area tracking for virtual and augmented environments

Greg Welch; Gary Bishop; Leandra Vicci; Stephen Brumback; Kurtis Keller; D'nardo Colucci

Our HiBall Tracking System generates over 2000 head-pose estimates per second with less than one millisecond of latency, and less than 0.5 millimeters and 0.02 degrees of position and orientation noise, everywhere in a 4.5 by 8.5 meter room. The system is remarkably responsive and robust, enabling VR applications and experiments that previously would have been difficult or even impossible. Previously we published descriptions of only the Kalman filter-based software approach that we call Single-Constraint-at-a-Time tracking. In this paper we describe the complete tracking system, including the novel optical, mechanical, electrical, and algorithmic aspects that enable the unparalleled performance.


international conference on computer vision | 2005

Ensuring color consistency across multiple cameras

Adrian Ilie; Greg Welch

Most multi-camera vision applications assume a single common color response for all cameras. However, different cameras, even of the same type, can exhibit radically different color responses, and the differences can cause significant errors in scene interpretation. To address this problem we have developed a robust system aimed at inter-camera color consistency. Our method consists of two phases: an iterative closed-loop calibration phase that searches for the per-camera hardware register settings that best balance linearity and dynamic range, followed by a refinement phase that computes the per-camera parametric values for an additional software-based color mapping.
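The second (software) phase can be sketched briefly. The example below is a hedged simplification, assuming a per-channel linear mapping a*c + b fit by least squares; the paper's parametric form may differ, and its first phase, a closed-loop search over hardware register settings, depends on the camera API and is omitted here:

```python
import numpy as np

def fit_color_mapping(measured, target):
    """Least-squares fit of gain a and offset b per channel so that
    a * measured + b ~= target. measured/target: N x 3 RGB samples
    of the same calibration colors."""
    coeffs = []
    for ch in range(3):
        A = np.column_stack([measured[:, ch], np.ones(len(measured))])
        (a, b), *_ = np.linalg.lstsq(A, target[:, ch], rcond=None)
        coeffs.append((a, b))
    return coeffs

# Toy example: two cameras viewing the same 24 calibration colors,
# each with its own (hypothetical) linear response.
rng = np.random.default_rng(1)
true_colors = rng.uniform(0, 1, size=(24, 3))
cam_a = 0.9 * true_colors + 0.05
cam_b = 1.1 * true_colors - 0.02
target = (cam_a + cam_b) / 2   # map both cameras toward the common mean
for cam in (cam_a, cam_b):
    print(fit_color_mapping(cam, target))
```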

Collaboration


Dive into Greg Welch's collaborations.

Top Co-Authors

Henry Fuchs (University of North Carolina at Chapel Hill)
Herman Towles (University of North Carolina at Chapel Hill)
Gary Bishop (University of North Carolina at Chapel Hill)
Adrian Ilie (University of North Carolina at Chapel Hill)
Ramesh Raskar (Massachusetts Institute of Technology)
Bruce A. Cairns (University of North Carolina at Chapel Hill)
Andrei State (University of North Carolina at Chapel Hill)
Jinghe Zhang (University of North Carolina at Chapel Hill)
Gerd Bruder (University of Central Florida)