Publication


Featured research published by Gaile G. Gordon.


Computer Vision and Pattern Recognition | 1998

Integrated person tracking using stereo, color, and pattern detection

Trevor Darrell; Gaile G. Gordon; Michael Harville; John Iselin Woodfill

We present an approach to real-time person tracking in crowded and/or unknown environments using integration of multiple visual modalities. We combine stereo, color, and face detection modules into a single robust system, and show an initial application in an interactive, face-responsive display. Dense, real-time stereo processing is used to isolate users from other objects and people in the background. Skin-hue classification identifies and tracks likely body parts within the silhouette of a user. Face pattern detection discriminates and localizes the face within the identified body parts. Faces and bodies of users are tracked over several temporal scales: short-term (user stays within the field of view), medium-term (user exits/reenters within minutes), and long-term (user returns after hours or days). Short-term tracking is performed using simple region position and size correspondences, while medium- and long-term tracking are based on statistics of user appearance. We discuss the failure modes of each individual module, describe our integration method, and report results with the complete system in trials with thousands of users.
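
The staged narrowing the abstract describes (stereo isolates the user, color narrows to body parts, pattern detection localizes the face) can be sketched roughly as follows. The thresholds, the hue encoding, and the omission of a real face detector are simplifying assumptions, not the paper's implementation:

    import numpy as np

    def segment_user(depth, hue, bg_depth=3.0, skin_range=(0.0, 0.1)):
        # Stage 1 (stereo): pixels with valid depth nearer than the empty
        # scene are treated as the user's silhouette.
        silhouette = (depth > 0) & (depth < bg_depth)
        # Stage 2 (color): skin-hue pixels, considered only inside the
        # silhouette, mark likely body parts.
        skin = (hue >= skin_range[0]) & (hue <= skin_range[1]) & silhouette
        # Stage 3 (pattern): a face detector would now scan only the skin
        # regions, far cheaper than searching the whole frame.
        return silhouette, skin

The temporal scales would then hang off these masks: region position and size for frame-to-frame correspondence, appearance statistics for recognizing a returning user.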


Computer Vision and Pattern Recognition | 1992

Face recognition based on depth and curvature features

Gaile G. Gordon

Face recognition from a representation based on features extracted from range images is explored. Depth and curvature features have several advantages over more traditional intensity-based features. Specifically, curvature descriptors have the potential for higher accuracy in describing surface-based events, are better suited to describe properties of the face in areas such as the cheeks, forehead, and chin, and are viewpoint invariant. Faces are represented in terms of a vector of feature descriptors. Comparison between two faces is made based on their relationship in the feature space. The author provides a detailed analysis of the accuracy and discrimination of the particular features extracted, and the effectiveness of the recognition system for a test database of 24 faces. Recognition rates are in the range of 80% to 100%. In many cases, feature accuracy is limited more by surface resolution than by the extraction process.
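
Since faces are reduced to vectors of feature descriptors, recognition reduces to comparison in that feature space. A minimal sketch, assuming a Euclidean nearest-neighbor rule (the paper states only that faces are compared by their relationship in the feature space):

    import numpy as np

    def recognize(probe, gallery):
        # probe: (d,) vector of depth/curvature descriptors for the query face.
        # gallery: dict of identity -> (d,) descriptor vector.
        dists = {name: np.linalg.norm(probe - v) for name, v in gallery.items()}
        return min(dists, key=dists.get)   # closest identity in feature space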


Computer Vision and Pattern Recognition | 1999

Background estimation and removal based on range and color

Gaile G. Gordon; Trevor Darrell; Michael Harville; John Iselin Woodfill

Background estimation and removal based on the joint use of range and color data produces results superior to those achievable with either data source alone. This is increasingly relevant as inexpensive, real-time, passive range systems become more accessible through novel hardware and increased CPU processing speeds. Range is a powerful signal for segmentation which is largely independent of color, and hence is not affected by the classic color segmentation problems of shadows and objects with color similar to the background. However, range alone is also not sufficient for good segmentation: depth measurements are rarely available at all pixels in the scene, and foreground objects may be indistinguishable in depth when they are close to the background. Color segmentation is complementary in these cases. Surprisingly, little work has been done to date on joint range and color segmentation. We describe and demonstrate a background estimation method based on a multidimensional (range and color) clustering at each image pixel. Segmentation of the foreground in a given frame is performed via comparison with background statistics in range and normalized color. Important implementation issues such as treatment of shadows and low-confidence measurements are discussed in detail.
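
A rough sketch of the complementary foreground test, with per-pixel background means and deviations assumed to come from a training sequence; modeling each channel as an independent Gaussian is a simplification of the paper's multidimensional clustering:

    import numpy as np

    def foreground_mask(depth, color, bg_d, bg_c, sd_d, sd_c, k=3.0):
        valid = depth > 0                    # stereo yields no depth at some pixels
        # Range test: applies only where a depth measurement exists.
        fg_depth = valid & (np.abs(depth - bg_d) > k * sd_d)
        # Normalized-color test: carries the decision where depth is missing
        # or the object sits at background depth, and vice versa.
        fg_color = np.abs(color - bg_c).max(axis=-1) > k * sd_c
        return fg_depth | fg_color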


Computer Vision and Pattern Recognition | 2004

Tyzx DeepSea High Speed Stereo Vision System

John Iselin Woodfill; Gaile G. Gordon; Ron Buck

This paper describes the DeepSea Stereo Vision System, which makes the use of high-speed 3D images practical in many application domains. This system is based on the DeepSea processor, an ASIC which computes absolute depth from simultaneously captured left and right images at high frame rates, with low latency and low power. The chip is capable of running at 200 frames per second with 512x480 images, with only 13 scan lines of latency between data input and first depth output. The DeepSea Stereo Vision System includes a stereo camera, onboard image rectification, and an interface to a general-purpose processor over a PCI bus. We conclude by describing several applications implemented with the DeepSea system, including person tracking, obstacle detection for autonomous navigation, and gesture recognition.
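
A quick check of what those figures imply, assuming the 13-line latency is measured in frame-synchronous scan-line times with blanking ignored:

    frames_per_s = 200
    width, height = 512, 480
    pixels_per_s = frames_per_s * width * height     # 49,152,000 depth pixels/s
    latency_s = 13 / (height * frames_per_s)         # ~135 microseconds
    print(f"{pixels_per_s / 1e6:.1f} Mpix/s, {latency_s * 1e6:.0f} us latency")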


International Conference on Computer Vision | 1999

3D pose tracking with linear depth and brightness constraints

Michael Harville; Ali Rahimi; Trevor Darrell; Gaile G. Gordon; John Iselin Woodfill

This paper explores the direct motion estimation problem assuming that video-rate depth information is available from either stereo cameras or other sensors. We use these depth measurements in the traditional linear brightness constraint equations, and we introduce a new depth constraint equation. As a result, estimation of certain types of motion, such as translation in depth and rotations out of the image plane, becomes more robust. We derive linear brightness and depth change constraint equations that govern the velocity field in 3-D for both perspective and orthographic camera projection models. These constraints are integrated jointly over image regions according to a rigid-body motion model, yielding a single linear system to robustly track 3D object pose. Results are shown for tracking the pose of faces in sequences of synthetic and real images.
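
A simplified reconstruction of how the two linear constraints combine into one system for the six rigid-motion parameters. Orthographic projection is assumed, so a point's image velocity equals its (x, y) world velocity; the weighting and robustification in the paper are omitted:

    import numpy as np

    def skew(p):
        return np.array([[0, -p[2], p[1]], [p[2], 0, -p[0]], [-p[1], p[0], 0]])

    def rigid_pose_step(Ix, Iy, It, Zx, Zy, Zt, P):
        # Per-pixel constraints, with 3D velocity (u, v, w) = t + omega x p:
        #   brightness:  Ix*u + Iy*v + It = 0
        #   depth:       Zx*u + Zy*v - w + Zt = 0
        # P: (n, 3) 3D point per pixel; the gradients are flattened arrays.
        rows, rhs = [], []
        for i, p in enumerate(P):
            J = np.hstack([np.eye(3), -skew(p)])     # d(u,v,w)/d(t,omega), 3x6
            rows.append(Ix[i] * J[0] + Iy[i] * J[1]); rhs.append(-It[i])
            rows.append(Zx[i] * J[0] + Zy[i] * J[1] - J[2]); rhs.append(-Zt[i])
        theta, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
        return theta                                 # (tx, ty, tz, wx, wy, wz)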


Computer Vision and Pattern Recognition | 2006

The Tyzx DeepSea G2 Vision System, A Taskable, Embedded Stereo Camera

John Iselin Woodfill; Gaile G. Gordon; Dave Jurasek; Terrance Brown; Ron Buck

Our goal is to build vision systems suitable for deployment in devices that operate in demanding, dynamic, variably lit, real-world environments. For such systems to be successful, not only must they perform their visual analysis well and robustly, but they must also be small and cheap and consume little power. Further, since volume deployments of such vision systems are still nascent, the systems must be taskable: flexible enough to support many different uses. We have met our goal with the Tyzx DeepSea G2 Stereo Vision System, an embedded stereo camera consisting of two CMOS imagers, a DeepSea II stereo ASIC, an FPGA, a DSP/co-processor, and a PowerPC running Linux, connected via Ethernet. It is made practically taskable by the definition of a set of configurable visual primitives supported by specific hardware acceleration. These primitives include stereo correlation, color and depth background modeling, and 2D and 3D quantized representations or projections of the range data. We have defined a common programming interface in which the visual primitives are available both in traditional workstation environments, supported in software, and on the G2 with hardware acceleration. Single G2s are deployed in mobile platforms such as robots and automobiles, while networks of G2s are deployed in tracking systems in public and private sites around the world.
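
The notion of taskable primitives behind a common programming interface might be pictured as below; this configuration interface is entirely hypothetical and illustrative, not the actual Tyzx API:

    # Hypothetical sketch only; not the real Tyzx programming interface.
    class VisionPipeline:
        def __init__(self, backend):
            self.backend = backend      # "workstation" (software) or "g2" (HW)
            self.stages = []

        def add(self, primitive, **params):
            self.stages.append((primitive, params))
            return self

    pipe = (VisionPipeline(backend="g2")
            .add("stereo_correlation")
            .add("background_model", use_color=True, use_depth=True)
            .add("projection_3d", grid="2d_quantized"))

The point of the abstraction is that the same pipeline description runs in software on a workstation or with hardware acceleration on the G2.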


International Symposium on Mixed and Augmented Reality | 2002

The use of dense stereo range data in augmented reality

Gaile G. Gordon; Mark Billinghurst; Melanie Bell; John Woodfill; Bill Kowalik; Alex Erendi; Janet Tilander

This paper describes an augmented reality system that incorporates a real-time dense stereo vision system. Analysis of range and intensity data is used to perform two functions: 1) 3D detection and tracking of the user's fingertip or a pen to provide natural 3D pointing gestures, and 2) computation of the 3D position and orientation of the user's viewpoint without the need for fiducial mark calibration procedures or manual initialization. The paper describes the stereo depth camera, the algorithms developed for pointer tracking and camera pose tracking, and demonstrates their use within an application in the field of oil and gas exploration.
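
One crude illustration of 3D pointer-tip detection from range data; picking the region point farthest from the palm center is a stand-in heuristic, not the paper's algorithm:

    import numpy as np

    def pointer_tip(points, palm_center):
        # points: (n, 3) 3D points of the segmented hand/pen region.
        d = np.linalg.norm(points - palm_center, axis=1)
        return points[np.argmax(d)]      # extremal point taken as the tip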


International Conference on Image Processing | 2001

Adaptive video background modeling using color and depth

Michael Harville; Gaile G. Gordon; John Iselin Woodfill

A new algorithm for background estimation and removal in video sequences obtained with stereo cameras is presented. Per-pixel Gaussian mixtures are used to model recent scene observations in the combined space of depth and luminance-invariant color. These mixture models adapt over time, and are used to build a new model of the background at each time step. This combination in itself is novel, but we also introduce the idea of modulating the learning rate of the background model according to the scene activity level on a per-pixel basis, so that dynamic foreground objects are incorporated into the background more slowly than are static scene changes. Our results show much greater robustness than prior state-of-the-art methods to challenging phenomena such as video displays, non-static background objects, areas of high foreground traffic, and similar color of foreground and background. Our method is also well-suited for use in real-time systems.
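
A single-pixel sketch of the activity-modulated update. One adaptive Gaussian stands in for the paper's per-pixel mixture, and the particular scaling of the learning rate by activity is an assumption; only the idea of slowing adaptation where foreground traffic is high comes from the paper:

    import numpy as np

    def update_pixel(mean, var, obs, base_alpha=0.01, activity=0.0):
        # obs: (depth, luminance-invariant color, ...) observation vector.
        alpha = base_alpha / (1.0 + 10.0 * activity)   # busy pixels adapt slowly
        mean = (1 - alpha) * mean + alpha * obs
        var = (1 - alpha) * var + alpha * (obs - mean) ** 2
        return mean, var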


IEEE Workshop on Visual Surveillance | 1998

Robust, real-time people tracking in open environments using integrated stereo, color, and face detection

Trevor Darrell; Gaile G. Gordon; John Iselin Woodfill; Harlyn Baker; Michael Harville

We present an approach to robust, real-time person tracking in crowded and/or unknown environments using multimodal integration. We combine stereo, color, and face detection modules into a single robust system, and show an initial application for an interactive display where the user sees his face distorted into various comic poses in real time. Stereo processing is used to isolate the figure of a user from other objects and people in the background. Skin-hue classification identifies and tracks likely body parts within the foreground region, and face pattern detection discriminates and localizes the face within the tracked body parts. We discuss the failure modes of these individual components, and report results with the complete system in trials with thousands of users.


Electronic Imaging | 2008

Person and gesture tracking with smart stereo cameras

Gaile G. Gordon; Xiangrong Chen; Ron Buck

Physical security increasingly involves sophisticated, real-time visual tracking of a person's location inside a given environment, often in conjunction with biometrics and other security-related technologies. However, demanding real-world conditions like crowded rooms, changes in lighting, and physical obstructions have proven extremely challenging for 2D computer vision technology. In contrast, 3D imaging technology is not affected by constant changes in lighting and apparent color, and thus allows tracking accuracy to be maintained in dynamically lit environments. In addition, person tracking with a 3D stereo camera can provide the location and movement of each individual very precisely, even in a very crowded environment. 3D vision requires only that the subject be partially visible to a single stereo camera to be correctly tracked; multiple cameras are used to extend the system's operational footprint and to contend with heavy occlusion. A successful person tracking system must not only perform visual analysis robustly, but also be small, cheap, and consume relatively little power. The TYZX Embedded 3D Vision systems are well suited to the low-power, small-footprint, and low-cost requirements of these types of volume applications. Several security-focused organizations, including the U.S. Government, have deployed TYZX 3D stereo vision systems in security applications. 3D image data is also advantageous in the related application area of gesture tracking. Visual (uninstrumented) tracking of natural hand gestures and movement provides new opportunities for interactive control, including video gaming, location-based entertainment, and interactive displays. 2D images have been used to extract the location of hands within a plane, but 3D hand location enables a much broader range of interactive applications. In this paper, we provide background on the TYZX smart stereo camera platform, describe the person tracking and gesture tracking systems implemented on this platform, and discuss some deployed applications.
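
The step from 2D to 3D hand location rests on standard stereo triangulation: with focal length f and baseline B, a measured disparity d gives depth Z = f*B/d, and the pinhole model recovers X and Y. A minimal version:

    def pixel_to_3d(x, y, disparity, f, baseline, cx, cy):
        # f and disparity in pixels; baseline sets the units of the result.
        Z = f * baseline / disparity
        X = (x - cx) * Z / f
        Y = (y - cy) * Z / f
        return X, Y, Z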

Collaboration


Dive into Gaile G. Gordon's collaborations.

Top Co-Authors

John Iselin Woodfill, Interval Research Corporation
Trevor Darrell, University of California
John Woodfill, University of Washington
Alex Erendi, University of Washington
Bill Kowalik, University of Washington
Franklin C. Crow, Interval Research Corporation