Publication


Featured research published by R. Matt Steele.


IEEE Visualization | 2001

Dynamic shadow removal from front projection displays

Christopher O. Jaynes; Stephen B. Webb; R. Matt Steele; Michael S. Brown; W. Brent Seales

Front-projection display environments suffer from a fundamental problem: users and other objects in the environment can easily and inadvertently block projectors, creating shadows on the displayed image. We introduce a technique that detects and corrects transient shadows in a multi-projector display. Our approach is to minimize the difference between predicted (generated) and observed (camera) images by continuous modification of the projected image values for each display device. We speculate that the general predictive monitoring framework introduced here is capable of addressing more general radiometric consistency problems. Using the automatically derived relative positions of cameras and projectors in the display environment and a straightforward color correction scheme, the system renders an expected image for each camera location. Cameras observe the displayed image, which is compared with the expected image to detect shadowed regions. These regions are transformed to the appropriate projector frames, where corresponding pixel values are increased. In display regions where more than one projector contributes to the image, shadow regions are eliminated. We demonstrate an implementation of the technique in a multi-projector system.
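
The predict-compare-correct loop can be pictured in a few lines. The sketch below is a minimal illustration, not the paper's implementation: the warp callables standing in for the calibrated camera-to-projector mappings, and the scalar gain and threshold, are assumptions.

    import numpy as np

    def shadow_correction_step(expected, observed, proj_frames, warps,
                               gain=0.5, thresh=0.1):
        """One iteration of the predict/compare/correct loop.

        expected, observed: HxW float images in [0, 1] for one camera.
        proj_frames: list of float projector framebuffers in [0, 1].
        warps: per-projector callables that resample a camera-space image
               into that projector's frame (e.g. through a calibrated
               homography).
        """
        # Shadowed pixels appear darker than the prediction.
        deficit = np.clip(expected - observed, 0.0, None)
        deficit[deficit < thresh] = 0.0
        for frame, warp in zip(proj_frames, warps):
            # Raise pixel values in the shadow region of each projector;
            # an unblocked projector that overlaps the region then
            # supplies the missing light.
            frame += gain * warp(deficit)
            np.clip(frame, 0.0, 1.0, out=frame)
        return proj_frames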


European Conference on Computer Vision | 2006

Overconstrained linear estimation of radial distortion and multi-view geometry

R. Matt Steele; Christopher O. Jaynes

This paper introduces a new method for simultaneous estimation of lens distortion and multi-view geometry using only point correspondences. The new technique has significant advantages over the current state of the art in that it makes more effective use of correspondences arising from any number of views. Multi-view geometry in the presence of lens distortion can be expressed as a set of point correspondence constraints that are quadratic in the unknown distortion parameter. Previous work has demonstrated how the system can be solved efficiently as a quadratic eigenvalue problem by operating on the normal equations of the system. Although this approach is appropriate for situations in which only a minimal set of matchpoints is available, it does not take full advantage of extra correspondences in overconstrained situations, resulting in significant bias and many potential solutions. The new technique operates directly on the initial constraint equations and solves the quadratic eigenvalue problem in the case of rectangular matrices. The method is shown to exhibit significantly less bias on both controlled and real-world data and, in the case of a moving camera where additional views serve to constrain the number of solutions, an accurate estimate of both geometry and distortion is achieved.
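
For intuition, the square-matrix form of such a quadratic eigenvalue problem, (A0 + λA1 + λ²A2)x = 0, can be solved by companion linearization as sketched below. This covers only the standard minimal-set case; the paper's contribution, solving the problem directly for rectangular coefficient matrices arising from extra correspondences, is not reproduced here.

    import numpy as np
    from scipy.linalg import eig

    def solve_qep(A0, A1, A2):
        """Solve (A0 + lam*A1 + lam^2*A2) x = 0 for square n x n
        coefficients via first companion linearization."""
        n = A0.shape[0]
        I, Z = np.eye(n), np.zeros((n, n))
        # Generalized eigenproblem L1 z = lam * L2 z with z = [x; lam*x].
        L1 = np.block([[Z, I], [-A0, -A1]])
        L2 = np.block([[I, Z], [Z, A2]])
        lams, vecs = eig(L1, L2)
        # Physically meaningful distortion values are the real eigenvalues;
        # the top half of each eigenvector is the corresponding x.
        return lams, vecs[:n]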


IETE Journal of Research | 2002

A scalable framework for high-resolution immersive displays

Christopher O. Jaynes; Stephen B. Webb; R. Matt Steele

We introduce an immersive display framework that is scalable, easily reconfigurable, and does not constrain the display surface geometry. The system achieves very high-resolution display through synchronized rendering and display from multiple PCs and light projectors. The projectors can be placed in a loose configuration and calibrated at run time. A full display is composed of these underlying display devices by blending overlapping regions and pre-warping imagery to correct for distortions due to display surface shape and the viewer's position. The effect is a perceptually correct display of a single high-resolution frame buffer. A major contribution of the work is the addition of cameras into the display environment that assist in calibration of projector positions and the automatic recovery of the display surface shape. In addition, a straightforward synchronization framework is introduced that facilitates communication between the multiple rendering elements for calibration, tracking the user's viewing position, and synchronous rendering of a uniform, perceptually correct image.
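
Blending overlapping projector regions is commonly done with distance-based feathering; the sketch below shows one plausible weighting scheme (the distance-transform ramp is an illustrative choice, not necessarily the paper's blend function).

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def blend_weights(masks):
        """Feathered blend weights for overlapping projectors.

        masks: list of HxW boolean coverage masks, one per projector,
               defined on a common display-space grid.
        Returns one HxW float weight map per projector; weights sum to 1
        wherever at least one projector covers the pixel.
        """
        # Weight each projector by distance to the edge of its coverage,
        # so contributions ramp down smoothly toward the seams.
        ramps = [distance_transform_edt(m) for m in masks]
        total = np.maximum(sum(ramps), 1e-9)
        return [r / total for r in ramps]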


Computer Vision and Pattern Recognition | 2009

Color calibration of multi-projector displays through automatic optimization of hardware settings

R. Matt Steele; Mao Ye; Ruigang Yang

We describe a system that performs automatic, camera-based photometric projector calibration by adjusting hardware settings (e.g. brightness and contrast). The approach has two basic advantages over software-correction methods. First, there is no software interface imposed on graphical programs: all imagery displayed on the projector benefits from the calibration immediately, without render-time overhead or code changes. Second, the approach benefits from the fact that projector hardware settings are typically capable of expanding or shifting color gamuts (e.g. trading off maximum brightness versus darkness of black levels), something that software methods, which can only shrink gamuts, cannot do. In practice this means that hardware settings can potentially match colors between projectors while maintaining a larger overall color gamut (e.g. better contrast) than software-only correction can. The prototype system is fully automatic. The space of hardware settings is explored by using a computer-controlled universal remote to navigate each projector's menu system. An off-the-shelf camera observes each projector's response curves. A cost function is computed for the curves based on their similarity to each other, as well as intrinsic characteristics, including color balance, black level, gamma, and dynamic range. An approximate optimum is found using a heuristic combinatorial search. Results show significant qualitative improvements in the absolute colors, as well as the color consistency, of the display.
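
A plausible shape for the cost function and search is sketched below. The term weights, the response-curve representation, and the measure callable (which would drive the remote, capture curves with the camera, and evaluate the cost) are all illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def curve_cost(curves, w_sim=1.0, w_black=0.5, w_range=0.5):
        """Cost for a set of measured per-projector response curves.

        curves: dict projector_id -> (256,) array of camera-measured
                luminance for each input level.
        Penalizes inter-projector dissimilarity and poor intrinsic
        characteristics (high black level, small dynamic range).
        """
        stack = np.stack(list(curves.values()))
        spread = ((stack - stack.mean(axis=0)) ** 2).mean()
        black = stack[:, 0].mean()                 # average black level
        rng = (stack[:, -1] - stack[:, 0]).mean()  # average dynamic range
        return w_sim * spread + w_black * black - w_range * rng

    def greedy_search(settings_space, measure):
        """Coordinate descent over discrete menu settings.

        settings_space: dict setting name -> list of allowed values.
        measure: callable(settings) -> cost.
        """
        current = {k: v[0] for k, v in settings_space.items()}
        best = measure(current)
        improved = True
        while improved:
            improved = False
            for name, values in settings_space.items():
                for v in values:
                    trial = dict(current, **{name: v})
                    cost = measure(trial)
                    if cost < best:
                        best, current, improved = cost, trial, True
        return current, best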


Computer Vision and Pattern Recognition | 2003

Parametric Subpixel Matchpoint Recovery with Uncertainty Estimation: A Statistical Approach

R. Matt Steele; Christopher O. Jaynes

We present a novel matchpoint acquisition method capable of producing accurate correspondences at subpixel precision. Given the known representation of the point to be matched, such as a projected fiducial in a structured light system, the method estimates the fiducial location and its expected uncertainty. Improved matchpoint precision has application in a number of calibration tasks, and uncertainty estimates can be used to significantly improve overall calibration results. A simple parametric model captures the relationship between the known fiducial and its corresponding position, shape, and intensity on the image plane. For each matchpoint pair, these unknown model parameters are recovered using maximum likelihood estimation to determine a subpixel center for the fiducial. The uncertainty of the matchpoint center is estimated by performing forward error analysis on the expected image noise. Uncertainty estimates used in conjunction with the accurate matchpoints can improve calibration accuracy for multi-view systems.
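
As an illustration, fitting an isotropic Gaussian blob by nonlinear least squares (maximum likelihood under Gaussian image noise) and propagating the noise variance through the Jacobian gives a subpixel center with a covariance estimate. The Gaussian fiducial model below is an assumption for the sketch, not necessarily the paper's parametric model.

    import numpy as np
    from scipy.optimize import least_squares

    def fit_fiducial(patch):
        """Return the subpixel center (cx, cy) of a blob in an image
        patch, plus a 2x2 covariance estimate of that center."""
        h, w = patch.shape
        yy, xx = np.mgrid[0:h, 0:w]

        def residuals(p):
            cx, cy, sigma, amp, bg = p
            model = bg + amp * np.exp(
                -((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * sigma ** 2))
            return (model - patch).ravel()

        p0 = [w / 2, h / 2, 2.0, patch.max() - patch.min(), patch.min()]
        sol = least_squares(residuals, p0)

        # Forward error analysis: estimate the image-noise variance from
        # the residuals and propagate it through the model Jacobian.
        dof = max(patch.size - len(p0), 1)
        noise_var = (sol.fun ** 2).sum() / dof
        cov = noise_var * np.linalg.inv(sol.jac.T @ sol.jac)
        return sol.x[:2], cov[:2, :2]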


Pattern Recognition Letters | 2007

Center-of-mass variation under projective transformation

R. Matt Steele; Christopher O. Jaynes

Accurate feature detection and localization is fundamentally important to computer vision, and feature locations act as input to many algorithms including camera calibration, structure recovery, and motion estimation. Unfortunately, feature localizers in common use are typically not projectively invariant, even in the idealized case of a continuous image. This results in feature location estimates that contain bias that can influence the higher-level algorithms that make use of them. While this behavior has been studied in the case of ellipse centroids and then used in a practical calibration algorithm, those results do not trivially generalize to the center of mass of a radially symmetric intensity distribution. This paper introduces the generalized result of feature location bias with respect to perspective distortion and applies it to several specific radially symmetric intensity distributions. The impact on calibration is then evaluated. Finally, an initial study is conducted comparing calibration results obtained using center of mass to those obtained with an ellipse detector. Results demonstrate that feature localization error, over a range of increasingly large projective distortions, can be stabilized at less than a tenth of a pixel, versus errors that can grow to larger than a pixel in the uncorrected case.
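
The bias is easy to reproduce numerically: warp a radially symmetric blob through a homography and compare its center of mass against the homography-mapped true center. The sketch below does exactly that on a finite grid (window truncation aside); with the identity homography the bias is zero, while a mild perspective component such as H = [[1, 0, 0], [0, 1, 0], [1e-3, 0, 1]] produces a visible offset.

    import numpy as np

    def centroid_bias(H, sigma=5.0, size=201):
        """Bias of the center of mass of a projectively warped Gaussian
        blob, relative to the homography-mapped true center."""
        c = (size - 1) / 2.0
        yy, xx = np.mgrid[0:size, 0:size].astype(float)

        # Inverse-warp each output pixel to the source plane and sample
        # the radially symmetric intensity there.
        Hinv = np.linalg.inv(H)
        pts = np.stack([xx.ravel(), yy.ravel(), np.ones(size * size)])
        src = Hinv @ pts
        sx, sy = src[0] / src[2], src[1] / src[2]
        img = np.exp(-((sx - c) ** 2 + (sy - c) ** 2)
                     / (2 * sigma ** 2)).reshape(size, size)

        # Center of mass of the warped image ...
        m = img.sum()
        com = np.array([(img * xx).sum() / m, (img * yy).sum() / m])
        # ... versus the true center mapped through H.
        t = H @ np.array([c, c, 1.0])
        return com - t[:2] / t[2]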


International Conference on Computer Graphics and Interactive Techniques | 2003

Interactive light field display from a cluster of projectors

R. Matt Steele; Christopher O. Jaynes

We are developing a novel display system that physically realizes a sampling of the light field emitted by a three-dimensional scene. An array of projectors, each with a two-dimensional framebuffer, populates the 4D space of the light field. The view of the scene is simultaneously correct for all head positions within a volume. This eliminates the need for head tracking, produces binocular disparities without the need for glasses, and supports any number of viewers. Input to our display can be streamed from a light field sensor [1] or rendered efficiently in parallel using a cluster of standard computer graphics pipelines. For static scenes, no run-time rendering is necessary. We demonstrate the feasibility of the approach using a prototype cluster of projectors.
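
The 4D indexing can be made concrete with a two-plane parameterization, sketched below; the particular choice of planes (projector plane at z = 0, display plane at z = 1) is an assumption for illustration.

    import numpy as np

    def ray_for_pixel(proj_pos, pixel_pos):
        """A light-field ray indexed by its intersections (s, t) with the
        projector plane z=0 and (u, v) with the display plane z=1.

        proj_pos:  (s, t) location of a projector.
        pixel_pos: (u, v) display-plane point lit by one of its pixels.
        """
        s, t = proj_pos
        u, v = pixel_pos
        origin = np.array([s, t, 0.0])
        direction = np.array([u - s, v - t, 1.0])
        return origin, direction / np.linalg.norm(direction)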


Proceedings of the 5th ACM/IEEE International Workshop on Projector-Camera Systems | 2008

Reducing resolution loss in two-pass rendering by optimal view directions and display-surface partitioning

R. Matt Steele; Christopher O. Jaynes; Ruigang Yang

We describe a method for reducing the amount of aliasing or resolution loss in two-pass rendering for distortion and alignment correction of a projector-based display. Resolution loss is caused by the fact that the second rendering pass must resample the result of the first rendering pass, and the two procedures in general have sampling rates that vary differently. We show that for a flat display surface, it is possible to choose a viewing direction for the first-pass render so that its sampling-rate variations cancel with variations of the second-pass sampling rate. This means that the first pass effectively samples the projector framebuffer evenly, so an appropriate resolution for the first-pass render will provide uniformly low aliasing over the entire framebuffer. We also show that, for flat display surfaces, this choice of view direction can be combined with an appropriate first-pass intrinsics matrix to eliminate the need for a second pass. The resulting single-pass rendering algorithm is very similar to existing single-pass techniques, but has a few advantages that we discuss. For a non-flat display surface, relative sampling cannot be made perfectly uniform, but the optimal view direction for a best-fit plane provides an approximate solution when the display surface is almost flat. Although the approximation is poor when the display surface differs radically from a plane, we describe a technique for those cases, which automatically subdivides the display surface into partitions that are approximately planar. In common display-surface configurations, great improvements in rendering quality are obtained by using two or three partitions, which incurs only modest rendering overhead.
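
The sampling-rate variation at stake is the local area scale, i.e. the Jacobian determinant, of each pass's mapping. A sketch for the flat-surface (homography) case follows: evaluating it across the framebuffer shows how unevenly a pass samples its source, and the paper's observation is that a well-chosen first-pass view direction makes the two passes' variations cancel.

    import numpy as np

    def homography_jacobian_det(H, x, y):
        """Local area scale of the homography H at pixel (x, y); ratios
        of these values across the frame measure sampling non-uniformity."""
        w = H[2, 0] * x + H[2, 1] * y + H[2, 2]
        u = H[0, 0] * x + H[0, 1] * y + H[0, 2]
        v = H[1, 0] * x + H[1, 1] * y + H[1, 2]
        # Derivatives of (u/w, v/w) by the quotient rule.
        J = np.array([
            [(H[0, 0] * w - u * H[2, 0]) / w**2,
             (H[0, 1] * w - u * H[2, 1]) / w**2],
            [(H[1, 0] * w - v * H[2, 0]) / w**2,
             (H[1, 1] * w - v * H[2, 1]) / w**2],
        ])
        return np.linalg.det(J)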


Presence: Teleoperators & Virtual Environments | 2005

Rapidly deployable multiprojector immersive displays

Christopher O. Jaynes; R. Matt Steele; Stephen B. Webb

Immersive, multiprojector systems are a compelling alternative to traditional head-mounted displays and have been growing steadily in popularity. However, the vast majority of these systems have been confined to laboratories or other special-purpose facilities and have had little impact on general human-computer and human-human communication models. Cost, infrastructure requirements, and maintenance are all obstacles to the widespread deployment of immersive displays. We address these issues in the design and implementation of the Metaverse. The Metaverse system focuses on a multiprojector scalable display framework that supports automatic detection of devices as they are added to or removed from the display environment. Multiple cameras support calibration over wide fields of view for immersive applications with little or no input from the user. The approach is demonstrated on a 24-projector display environment that can be scaled on the fly, reconfigured, and redeployed according to user needs. Using our method, subpixel calibration is possible with little or no user input. Because little effort is required by the user to either install or reconfigure the projectors, rapid deployment of large, immersive displays in somewhat unconstrained environments is feasible.


Archive | 2002

An Open Development Environment for Evaluation of Video Surveillance Systems

Christopher O. Jaynes; Stephen B. Webb; R. Matt Steele; Quanren Xiong

Collaboration


Dive into R. Matt Steele's collaborations.

Top Co-Authors

Mao Ye
University of Kentucky

Michael S. Brown
National University of Singapore