Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Stephen B. Webb is active.

Publications


Featured research published by Stephen B. Webb.


IEEE Visualization | 2001

Dynamic shadow removal from front projection displays

Christopher O. Jaynes; Stephen B. Webb; R. Matt Steele; Michael S. Brown; W. Brent Seales

Front-projection display environments suffer from a fundamental problem: users and other objects in the environment can easily and inadvertently block projectors, creating shadows on the displayed image. We introduce a technique that detects and corrects transient shadows in a multi-projector display. Our approach is to minimize the difference between predicted (generated) and observed (camera) images by continuous modification of the projected image values for each display device. We speculate that the general predictive monitoring framework introduced here is capable of addressing more general radiometric consistency problems. Using an automatically-derived relative position of cameras and projectors in the display environment and a straightforward color correction scheme, the system renders an expected image for each camera location. Cameras observe the displayed image, which is compared with the expected image to detect shadowed regions. These regions are transformed to the appropriate projector frames, where corresponding pixel values are increased. In display regions where more than one projector contributes to the image, shadow regions are eliminated. We demonstrate an implementation of the technique in a multiprojector system.
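The detect-and-correct loop described in the abstract can be pictured with a minimal, hypothetical NumPy sketch. The function names, threshold, and step size below are assumptions for illustration only; the actual system compares images in calibrated camera and projector frames rather than pixel-aligned arrays:

```python
import numpy as np

def detect_shadow_mask(predicted, observed, threshold=30):
    """Flag pixels where the observed camera image is markedly
    darker than the rendered prediction (candidate shadow)."""
    diff = predicted.astype(np.int16) - observed.astype(np.int16)
    return diff > threshold  # True where expected light is missing

def boost_projector_image(proj_image, shadow_mask, step=16):
    """Raise pixel values of a redundant projector inside the
    shadowed region; iterating drives the predicted/observed
    difference toward zero."""
    out = proj_image.astype(np.int16)
    out[shadow_mask] = np.clip(out[shadow_mask] + step, 0, 255)
    return out.astype(np.uint8)

# Toy example: a flat gray display with a dark occluded patch.
predicted = np.full((8, 8), 200, dtype=np.uint8)
observed = predicted.copy()
observed[2:5, 2:5] = 60  # simulated shadow cast by an occluder
mask = detect_shadow_mask(predicted, observed)
corrected = boost_projector_image(np.full((8, 8), 128, dtype=np.uint8), mask)
```

Because the boost is applied incrementally each frame, transient shadows fade smoothly instead of producing a visible step change.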


IEEE Transactions on Visualization and Computer Graphics | 2004

Camera-based detection and removal of shadows from interactive multiprojector displays

Christopher O. Jaynes; Stephen B. Webb; R.M. Steele

Front-projection displays are a cost-effective and increasingly popular method for large format visualization and immersive rendering of virtual models. New approaches to projector tiling, automatic calibration, and color balancing have made multiprojector display systems feasible without undue infrastructure changes and maintenance. As a result, front-projection displays are being used to generate seamless, visually immersive worlds for virtual reality and visualization applications with reasonable cost and maintenance overhead. However, these systems suffer from a fundamental problem: Users and other objects in the environment can easily and inadvertently block projectors, creating shadows on the displayed image. Shadows occlude potentially important information and detract from the sense of presence an immersive display may have conveyed. We introduce a technique that detects and corrects shadows in a multiprojector display while it is in use. Cameras observe the display and compare observations with an expected image to detect shadowed regions. These regions are transformed to the appropriate projector frames, where corresponding pixel values are increased and/or attenuated. In display regions where more than one projector contributes to the image, shadow regions are eliminated.


IETE Journal of Research | 2002

A scalable framework for high-resolution immersive displays

Christopher O. Jaynes; Stephen B. Webb; R. Matt Steele

We introduce an immersive display framework that is scalable, easily reconfigurable, and does not constrain the display surface geometry. The system achieves very high-resolution display through synchronized rendering and display from multiple PCs and light projectors. The projectors can be placed in a loose configuration and calibrated at run time. A full display is composed of these underlying display devices by blending overlapping regions and pre-warping imagery to correct for distortions due to display surface shape and the viewer's position. The effect is a perceptually correct display of a single high-resolution frame buffer. A major contribution of the work is the addition of cameras into the display environment that assist in calibration of projector positions and the automatic recovery of the display surface shape. In addition, a straightforward synchronization framework is introduced that facilitates communication between the multiple rendering elements for calibration, tracking the user's viewing position, and synchronous rendering of a uniform, perceptually correct image.
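For the special case of a planar display surface, the pre-warping step amounts to mapping pixels through a 3×3 homography between each projector's frame and the global display frame. A minimal sketch (the matrix values below are hypothetical; real systems estimate them from camera observations of projected calibration targets):

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2-D points through a 3x3 homography using
    homogeneous coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # lift to (x, y, 1)
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # divide out w

# Hypothetical projector whose frame is offset (50, 20) pixels
# from the global display coordinate system.
H = np.array([[1.0, 0.0, 50.0],
              [0.0, 1.0, 20.0],
              [0.0, 0.0,  1.0]])
corners = np.array([[0.0, 0.0], [1024.0, 768.0]])
mapped = apply_homography(H, corners)
```

In practice the inverse mapping is applied per pixel when rendering, so each projector draws exactly its portion of the shared high-resolution frame buffer.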


Communications of the ACM | 2001

Building large-format displays for digital libraries

Michael S. Brown; W. Brent Seales; Stephen B. Webb; Christopher O. Jaynes

Digital libraries house media of ever-increasing complexity, from models of statues to virtual archeological landscapes. Future digital archives will only increase in resolution and complexity as acquisition hardware improves and data becomes more intricate. Although digital content is constantly improving, the typical display device (the desktop monitor) remains relatively unchanged. The problem is that the spatial resolution and color quality of digital media surpass the display real estate of typical monitors. It is common for objects such as artwork and manuscripts to be digitized at a much higher resolution than can be displayed on even the best monitors. This forces the data to be down-sampled, or restricts viewing to selected regions of interest at a time. Even with improved resolution, small display areas are often less than compelling. For example, the high-resolution facsimile of Michelangelo's 17-foot statue of David [4] is somewhat less spectacular when displayed on a high-resolution 20-inch monitor. The simple fact is that monitors have become under-powered technologies for the visualization capabilities now needed to fully appreciate digital archives.

The immediate reaction is to enlist the help of large-format displays for visualization available as video walls, domes, and immersive environments such as the CAVE. These environments provide a compelling sense of presence and help break the "window on worlds" paradigm. Unfortunately, these technologies also present serious challenges that limit their accessibility and usefulness to a small number of institutions. Of primary concern is the cost. Initial costs of projectors, rendering hardware, and subsequent recurring operational expenses are fiscally prohibitive. A second concern is the renovation necessary for installing a large-format display system. Because most systems rely on mechanically aligned projectors and rigidly constructed display surfaces, room modification and projection surface infrastructure are necessary. An additional difficulty is the complexity of continuous operation. These systems are not for novice users and can quickly become unusable without expert maintenance and tuning.

Our goal is to bring large-format displays to the digital library community by breaking down the barriers that make them expensive and difficult to set up and run. We are attacking the price/performance barrier by engineering displays built entirely from commodity components and assembled in a scalable configuration. Specifically, commodity light projectors, unlike monitors, can be positioned collectively to form a single logical desktop (see Figure 1). Inexpensive video cards can perform at the level required to drive a projector. Groups of PCs, each driving a projector and communicating via a local area network, are inexpensive and powerful [2, 6]. While the physical alignment of projectors creates a very compelling display, it is tedious and requires major effort. Within the digital library community, where libraries must operate and maintain equipment, it is important to enable novice technicians to build up and maintain a display by a more casual placement of projectors.


Presence: Teleoperators & Virtual Environments | 2005

Rapidly deployable multiprojector immersive displays

Christopher O. Jaynes; R. Matt Steele; Stephen B. Webb

Immersive, multiprojector systems are a compelling alternative to traditional head-mounted displays and have been growing steadily in popularity. However, the vast majority of these systems have been confined to laboratories or other special-purpose facilities and have had little impact on general human-computer and human-human communication models. Cost, infrastructure requirements, and maintenance are all obstacles to the widespread deployment of immersive displays. We address these issues in the design and implementation of the Metaverse. The Metaverse system focuses on a multiprojector scalable display framework that supports automatic detection of devices as they are added to or removed from the display environment. Multiple cameras support calibration over wide fields of view for immersive applications with little or no input from the user. The approach is demonstrated on a 24-projector display environment that can be scaled on the fly, reconfigured, and redeployed according to user needs. Using our method, subpixel calibration is possible with little or no user input. Because little effort is required by the user to either install or reconfigure the projectors, rapid deployment of large, immersive displays in somewhat unconstrained environments is feasible.
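One way to picture wide-field-of-view calibration with multiple cameras is as chaining pairwise projective mappings into a single global display frame: each camera sees only part of the display, but projector-to-camera and camera-to-global homographies compose into one projector-to-global mapping. A hypothetical sketch (the matrices below are invented for illustration):

```python
import numpy as np

def compose(*homographies):
    """Chain coordinate-frame mappings left to right:
    x_global = H1 @ H2 @ ... @ x_local."""
    H = np.eye(3)
    for Hi in homographies:
        H = H @ Hi
    return H / H[2, 2]  # normalize so H[2, 2] == 1

# Camera 1 covers a region offset 100 px into the global frame;
# a projector maps into camera 1 with a scale of 0.5 and a shift.
H_cam1_to_global = np.array([[1.0, 0.0, 100.0],
                             [0.0, 1.0,   0.0],
                             [0.0, 0.0,   1.0]])
H_proj_to_cam1 = np.array([[0.5, 0.0, 10.0],
                           [0.0, 0.5,  5.0],
                           [0.0, 0.0,  1.0]])
H_proj_to_global = compose(H_cam1_to_global, H_proj_to_cam1)
```

Because composition is just matrix multiplication, adding or removing a device only requires estimating one new pairwise mapping, which is what makes on-the-fly reconfiguration practical.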


Archive | 2000

Object specific information relaying system

Christopher O. Jaynes; Stephen B. Webb


Archive | 2002

An Open Development Environment for Evaluation of Video Surveillance Systems

Christopher O. Jaynes; Stephen B. Webb; R. Matt Steele; Quanren Xiong


Archive | 2007

Alignment optimization in image display systems employing multi-camera image acquisition

Christopher O. Jaynes; Stephen B. Webb


Archive | 2010

Hybrid system for multi-projector geometry calibration

Christopher O. Jaynes; Stephen B. Webb


Archive | 2007

Multi-projector intensity blending system

Christopher O. Jaynes; Stephen B. Webb

Collaboration


Dive into Stephen B. Webb's collaborations.

Top Co-Authors

Michael S. Brown

National University of Singapore


R.M. Steele

University of Kentucky
