Publications


Featured research published by Christopher O. Jaynes.


IEEE Visualization | 2001

Dynamic shadow removal from front projection displays

Christopher O. Jaynes; Stephen B. Webb; R. Matt Steele; Michael S. Brown; W. Brent Seales

Front-projection display environments suffer from a fundamental problem: users and other objects in the environment can easily and inadvertently block projectors, creating shadows on the displayed image. We introduce a technique that detects and corrects transient shadows in a multi-projector display. Our approach is to minimize the difference between predicted (generated) and observed (camera) images by continuous modification of the projected image values for each display device. Using an automatically derived relative position of cameras and projectors in the display environment and a straightforward color correction scheme, the system renders an expected image for each camera location. Cameras observe the displayed image, which is compared with the expected image to detect shadowed regions. These regions are transformed to the appropriate projector frames, where corresponding pixel values are increased. In display regions where more than one projector contributes to the image, shadow regions are eliminated. We demonstrate an implementation of the technique in a multi-projector system and speculate that the general predictive monitoring framework introduced here is capable of addressing more general radiometric consistency problems.
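
A minimal sketch of the detect-and-compensate loop described above, assuming a precomputed camera-to-projector homography and a color-corrected predicted image are already available from calibration; the function and variable names are illustrative, not taken from the paper.

```python
import cv2
import numpy as np

def compensate_shadows(predicted, observed, H_cam_to_proj, proj_image,
                       threshold=30, boost=1.3):
    """Detect under-illuminated regions in the camera view (3-channel images)
    and boost the corresponding projector pixels."""
    # Difference between what the camera should see and what it actually sees.
    diff = cv2.absdiff(predicted, observed)
    shadow_mask = (diff.mean(axis=2) > threshold).astype(np.uint8) * 255

    # Transfer the shadow mask from the camera frame to the projector frame.
    h, w = proj_image.shape[:2]
    mask_proj = cv2.warpPerspective(shadow_mask, H_cam_to_proj, (w, h))

    # Increase projector output where the displayed image was blocked.
    out = proj_image.astype(np.float32)
    out[mask_proj > 0] = np.clip(out[mask_proj > 0] * boost, 0, 255)
    return out.astype(np.uint8)
```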


Machine Vision and Applications | 2008

Object matching in disjoint cameras using a color transfer approach

Kideog Jeong; Christopher O. Jaynes

Object appearance models are a consequence of illumination, viewing direction, camera intrinsics, and other conditions that are specific to a particular camera. As a result, a model acquired in one view is often inappropriate for use in other viewpoints. In this work we treat this appearance model distortion between two non-overlapping cameras as one in which some unknown color transfer function warps a known appearance model from one view to another. We demonstrate how to recover this function in the case where the distortion function is approximated as general affine and object appearance is represented as a mixture of Gaussians. Appearance models are brought into correspondence by searching for a bijection function that best minimizes an entropic metric for model dissimilarity. These correspondences lead to a solution for the transfer function that brings the parameters of the models into alignment in the UV chromaticity plane. Finally, a set of these transfer functions acquired from a collection of object pairs are generalized to a single camera-pair-specific transfer function via robust fitting. We demonstrate the method in the context of a video surveillance network and show that recognition of subjects in disjoint views can be significantly improved using the new color transfer approach.
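
As an illustration of the final robust-fitting step, the sketch below fits a single affine map in the UV chromaticity plane from matched Gaussian component means pooled over many object pairs; the component bijection is assumed to have been found already, and all names are hypothetical.

```python
import numpy as np

def fit_affine_transfer(uv_src, uv_dst):
    """Least-squares affine map (2x2 A, 2-vector b) taking uv_src -> uv_dst.

    uv_src, uv_dst: (N, 2) arrays of matched component means in the
    UV chromaticity plane, pooled over many object pairs.
    """
    ones = np.ones((uv_src.shape[0], 1))
    X = np.hstack([uv_src, ones])                         # (N, 3) design matrix
    params, *_ = np.linalg.lstsq(X, uv_dst, rcond=None)   # (3, 2) solution
    A, b = params[:2].T, params[2]
    return A, b

# A robust version would wrap this in RANSAC or an M-estimator so that badly
# matched object pairs do not dominate the camera-pair transfer function.
```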


Computer Vision and Image Understanding | 2003

Recognition and reconstruction of buildings from multiple aerial images

Christopher O. Jaynes; Edward M. Riseman; Allen R. Hanson

We present a model-based approach to the automatic detection and reconstruction of buildings from aerial imagery. Buildings are first segmented from the scene in an optical image, followed by a reconstruction process that makes use of a corresponding digital elevation map (DEM). Initially, each segmented DEM region likely to contain a building rooftop is indexed into a database of parameterized surface models that represent different building shape classes such as peaked, flat, or curved roofs. Given a set of indexed models, each is fit to the elevation data using a robust iterative procedure that determines the precise position and shape of the building rooftop. The indexed model that converges to the data with the lowest residual fit error is then added to the scene by extruding the fit rooftop surfaces to a local ground plane. The approach is based on the observation that a significant amount of rooftop variation can be modeled as the union of a small set of parameterized models and their combinations. By first recognizing the rooftop as one of several potential rooftop shapes and fitting only these surfaces, the technique remains robust while still capable of reconstructing a wide variety of building types. In contrast to earlier approaches that presuppose a particular class of rooftops to be reconstructed (e.g., flat roofs), the algorithm is capable of reconstructing a variety of building types including peaked, flat, multi-level flat, and curved surfaces. The approach is evaluated on two datasets. Recognition rates for the different building rooftop classes and reconstruction accuracy are reported.
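
The model-selection step can be pictured with a schematic fit-and-compare loop like the one below; the flat and peaked surface functions here stand in for the paper's parameterized rooftop classes and are assumptions made purely for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def flat_roof(params, x, y):
    a, b, c = params                       # plane z = a*x + b*y + c
    return a * x + b * y + c

def peaked_roof(params, x, y):
    a, c, x0, h = params                   # symmetric gable with ridge at x = x0
    return c + h - a * np.abs(x - x0)

MODELS = {"flat": (flat_roof, np.zeros(3)),
          "peaked": (peaked_roof, np.array([0.1, 0.0, 0.0, 5.0]))}

def fit_rooftop(x, y, z):
    """Fit each candidate surface to DEM samples (x, y, z); return the best."""
    best = None
    for name, (surface, p0) in MODELS.items():
        def residual(p):
            return surface(p, x, y) - z
        fit = least_squares(residual, p0, loss="huber")    # robust iterative fit
        rms = np.sqrt(np.mean(fit.fun ** 2))
        if best is None or rms < best[2]:
            best = (name, fit.x, rms)
    return best                                            # (class, params, residual)
```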


IEEE Transactions on Visualization and Computer Graphics | 2004

Camera-based detection and removal of shadows from interactive multiprojector displays

Christopher O. Jaynes; Stephen B. Webb; R.M. Steele

Front-projection displays are a cost-effective and increasingly popular method for large format visualization and immersive rendering of virtual models. New approaches to projector tiling, automatic calibration, and color balancing have made multiprojector display systems feasible without undue infrastructure changes and maintenance. As a result, front-projection displays are being used to generate seamless, visually immersive worlds for virtual reality and visualization applications with reasonable cost and maintenance overhead. However, these systems suffer from a fundamental problem: Users and other objects in the environment can easily and inadvertently block projectors, creating shadows on the displayed image. Shadows occlude potentially important information and detract from the sense of presence an immersive display may have conveyed. We introduce a technique that detects and corrects shadows in a multiprojector display while it is in use. Cameras observe the display and compare observations with an expected image to detect shadowed regions. These regions are transformed to the appropriate projector frames, where corresponding pixel values are increased and/or attenuated. In display regions where more than one projector contributes to the image, shadow regions are eliminated.
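
A rough sketch of the redistribution idea in overlap regions: blend weights are zeroed for a projector wherever it is occluded and renormalized over the projectors that still have a clear path, so the summed intensity stays roughly constant. The per-pixel weight representation and names are assumptions, not the paper's implementation.

```python
import numpy as np

def redistribute(weights, occluded_masks):
    """weights: (P, H, W) blend weights per projector, summing to 1 per pixel.
    occluded_masks: (P, H, W) booleans, True where a projector is blocked."""
    w = np.where(occluded_masks, 0.0, weights)      # attenuate blocked projectors
    total = w.sum(axis=0, keepdims=True)
    visible = total > 1e-6                          # at least one clear light path
    # Boost the remaining projectors; fall back to the original weights where
    # every projector is occluded.
    w = np.where(visible, w / np.maximum(total, 1e-6), weights)
    return w
```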


International Symposium on Mixed and Augmented Reality | 2006

The universal media book: tracking and augmenting moving surfaces with projected information

Shilpi Gupta; Christopher O. Jaynes

We explore the integration of projected imagery with a physical book that acts as a tangible interface to multimedia data. Using a camera and projector pair, a tracking framework is presented wherein the 3D positions of planar pages are monitored as they are turned back and forth by a user, and data is correctly warped and projected onto each page at interactive rates. The book pages are blank, so traditional approaches to tracking physical features on the display surface do not apply. Instead, in each frame, feature points are independently extracted from the camera and projector images and matched in order to recover the geometry of the pages in motion. The book can be loaded with multimedia content, including images, videos, and volumetric datasets (in which case a page can be removed from the book and used to navigate through a virtual 3D volume).
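
A minimal sketch of the per-frame warp once correspondences are available: a homography mapping the content's reference frame to the recovered page location in the projector frame is estimated, and the content is pre-warped by it. Recovering those correspondences on blank pages is the paper's contribution and is not reproduced here; names are illustrative.

```python
import cv2
import numpy as np

def prewarp_content(content, page_pts_ref, page_pts_proj, proj_size):
    """page_pts_ref: (N, 2) page corners in the content's coordinate frame.
    page_pts_proj: (N, 2) the same corners in the projector frame, recovered
    from the camera/projector feature matches (N >= 4)."""
    pts_ref = np.asarray(page_pts_ref, dtype=np.float32)
    pts_proj = np.asarray(page_pts_proj, dtype=np.float32)
    H, _ = cv2.findHomography(pts_ref, pts_proj, cv2.RANSAC)
    # Warping the multimedia content by H places it on the moving page.
    return cv2.warpPerspective(content, H, proj_size)
```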


European Conference on Computer Vision | 2006

Overconstrained linear estimation of radial distortion and multi-view geometry

R. Matt Steele; Christopher O. Jaynes

This paper introduces a new method for simultaneous estimation of lens distortion and multi-view geometry using only point correspondences. The new technique has significant advantages over the current state of the art in that it makes more effective use of correspondences arising from any number of views. Multi-view geometry in the presence of lens distortion can be expressed as a set of point correspondence constraints that are quadratic in the unknown distortion parameter. Previous work has demonstrated how the system can be solved efficiently as a quadratic eigenvalue problem by operating on the normal equations of the system. Although this approach is appropriate for situations in which only a minimal set of matchpoints is available, it does not take full advantage of extra correspondences in overconstrained situations, resulting in significant bias and many potential solutions. The new technique operates directly on the initial constraint equations and solves the quadratic eigenvalue problem in the case of rectangular matrices. The method is shown to exhibit significantly less bias on both controlled and real-world data and, in the case of a moving camera where additional views serve to constrain the number of solutions, achieves an accurate estimate of both geometry and distortion.
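
For orientation, the sketch below solves the standard square quadratic eigenvalue problem that the prior work mentioned above relies on, via the usual companion linearization; the paper's actual contribution, a rectangular (overconstrained) variant, is not reproduced. Matrix names and sizes are generic placeholders.

```python
# The distortion parameter lambda appears quadratically in the constraints:
#     (A0 + lambda * A1 + lambda**2 * A2) f = 0
import numpy as np
from scipy.linalg import eig

def solve_qep(A0, A1, A2):
    """Return eigenvalues (candidate distortion parameters) of the square QEP."""
    n = A0.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    # Companion linearization:  L z = lambda * B z  with  z = [lambda*f; f]
    L = np.block([[-A1, -A0], [I, Z]])
    B = np.block([[A2, Z], [Z, I]])
    vals, _ = eig(L, B)
    return vals[np.isfinite(vals)]     # drop infinite eigenvalues if A2 is singular
```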


Computer Vision and Pattern Recognition | 2005

Feature uncertainty arising from covariant image noise

R.M. Steele; Christopher O. Jaynes

Uncertainty estimates related to the position of image features are seeing increasing use in several computer vision problems. Many of these problems have been recast from standard least-squares model fitting to techniques that minimize the Mahalanobis distance, which weighs each error vector by the covariance of the observations. These include structure from motion and traditional geometric camera calibration. Uncertainty estimates previously derived for the case of corner localization are based on implicit assumptions that preclude sophisticated image noise models, and the uncertainties associated with these features tend to be overestimated. In this work, we introduce a new formulation for feature location uncertainty that supports arbitrary pixel covariance to derive a more accurate positional uncertainty estimate. The method is developed and evaluated in the case of a traditional interest operator that is in widespread use. Results show that uncertainty estimates based on this new formulation better reflect the error distribution in feature location.
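
A small sketch of the weighting the abstract refers to: instead of a plain sum of squared residuals, each feature's error vector is weighted by the inverse of its positional covariance, which is exactly the quantity the paper aims to estimate more accurately. Variable names are illustrative.

```python
import numpy as np

def mahalanobis_cost(observed, predicted, covariances):
    """observed, predicted: (N, 2) feature positions; covariances: (N, 2, 2)."""
    cost = 0.0
    for x, x_hat, cov in zip(observed, predicted, covariances):
        r = x - x_hat
        cost += r @ np.linalg.solve(cov, r)    # r^T * cov^{-1} * r
    return cost

# With identical isotropic covariances this reduces (up to scale) to ordinary
# least squares; anisotropic per-feature covariances let well-localized
# corners dominate the fit.
```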


Proceedings of the Workshop on Virtual Environments 2003 | 2003

The Metaverse: a networked collection of inexpensive, self-configuring, immersive environments

Christopher O. Jaynes; Williams B. Seales; Kenneth L. Calvert; Zongming Fei; James Griffioen

Immersive projection-based display environments have been growing steadily in popularity. However, these systems have, for the most part, been confined to laboratories or other special-purpose uses and have had relatively little impact on human-computer interaction or user-to-user communication/collaboration models. Before large-scale deployment and adoption of these technologies can occur, some key technical issues must be resolved. We address these issues in the design of the Metaverse. In particular, the Metaverse system supports automatic self-calibration of an arbitrary number of projectors, thereby simplifying system setup and maintenance. The Metaverse also supports novel communication models that enhance the scalability of the system and facilitate collaboration between Metaverse portals. Finally, we describe a prototype implementation of the Metaverse.


Computer Vision and Pattern Recognition | 2006

A Joint Illumination and Shape Model for Visual Tracking

Amit A. Kale; Christopher O. Jaynes

Visual tracking involves generating an inference about the motion of an object from measured image locations in a video sequence. In this paper we present a unified framework that incorporates shape and illumination in the context of visual tracking. The contribution of the work is twofold. First, we introduce a multiplicative, low-dimensional model of illumination that is defined by a linear combination of a set of smoothly changing basis functions. Second, we show that a small number of centroids in this new space can be used to represent the illumination conditions existing in the scene. These centroids can be learned from ground truth and are shown to generalize well to other objects of the same class for the scene. Finally, we show how this illumination model can be combined with shape in a probabilistic sampling framework. Results of the joint shape-illumination model are demonstrated in the context of vehicle and face tracking in challenging conditions.
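
A loose sketch of the multiplicative illumination idea: the appearance of a tracked template is modulated pixelwise by a smooth illumination field expressed as a linear combination of a few basis functions. The low-order polynomial basis used here is an assumption made only for illustration, not the paper's basis.

```python
import numpy as np

def illumination_field(coeffs, h, w):
    """Smooth multiplicative field on an h x w template from basis weights."""
    y, x = np.mgrid[0:h, 0:w]
    x, y = x / max(w - 1, 1), y / max(h - 1, 1)          # normalize to [0, 1]
    basis = np.stack([np.ones_like(x), x, y, x * y])     # smoothly varying basis
    return np.tensordot(coeffs, basis, axes=1)           # (h, w) field

def relight(template, coeffs):
    """Apply the multiplicative illumination model to a grayscale template."""
    return template * illumination_field(coeffs, *template.shape)
```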


Image and Vision Computing | 2004

Multi-view calibration from planar motion trajectories

Christopher O. Jaynes

We present a technique for the registration of a network of surveillance cameras through the automatic alignment of observed planar motion trajectories. The algorithm addresses the problem of recovering the relative pose of several stationary, networked cameras whose intrinsic parameters are known. Each camera tracks several objects to produce a set of image trajectories. Using temporal and geometric constraints derived from the trajectory and a network synchronization signal, overlapping viewing frustums are determined and corresponding cameras are calibrated. Full calibration is a two-stage process. Initially, the relative orientation of each camera to the local ground plane is computed in order to recover the projective mapping of image points to world trajectories embedded on a nominal plane of correct orientation. Given the relative camera-to-plane orientation, projectively unwarped trajectory curves can then be robustly matched by solving for the similarity transform that brings them into absolute alignment. Registration aligns n cameras with respect to each other in a single camera frame (that of the reference camera). The approach recovers both the epipolar geometry between all cameras and the camera-to-ground rotation for each camera independently. After calibration, points that are known to lie on a world ground plane can be directly back-projected into each of the camera frames. These tracked points are known to be in spatial and temporal correspondence, supporting multi-view surveillance and motion understanding tasks. The algorithm is demonstrated for two-, three-, and five-camera scenarios by tracking pedestrians as they move through a surveillance area and matching the resulting trajectories.
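
A compact sketch of the trajectory-alignment step: once each camera's tracks have been projectively unwarped onto a nominal ground plane, two cameras are registered by the 2D similarity transform (scale, rotation, translation) that best aligns corresponding track points. This is the classical Procrustes/Umeyama solution; the robust matching of which track corresponds to which is not shown, and names are illustrative.

```python
import numpy as np

def similarity_align(P, Q):
    """Find s, R, t minimizing sum ||s * R @ P[i] + t - Q[i]||^2 for matched
    2D points P, Q of shape (N, 2)."""
    mu_p, mu_q = P.mean(axis=0), Q.mean(axis=0)
    Pc, Qc = P - mu_p, Q - mu_q
    U, S, Vt = np.linalg.svd(Qc.T @ Pc)                     # cross-covariance SVD
    D = np.diag([1.0, np.sign(np.linalg.det(U @ Vt))])      # keep a proper rotation
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / np.sum(Pc ** 2)
    t = mu_q - s * R @ mu_p
    return s, R, t
```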

Collaboration


Christopher O. Jaynes's top co-authors and their affiliations.

Michael S. Brown

National University of Singapore

Allen R. Hanson

University of Massachusetts Amherst

Edward M. Riseman

University of Massachusetts Amherst
