Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Steven A. Shafer is active.

Publication


Featured research published by Steven A. Shafer.


Ubiquitous Computing | 2000

EasyLiving: Technologies for Intelligent Environments

Barry Brumitt; Brian Meyers; John Krumm; Amanda Kern; Steven A. Shafer

The EasyLiving project is concerned with development of an architecture and technologies for intelligent environments which allow the dynamic aggregation of diverse I/O devices into a single coherent user experience. Components of such a system include middleware (to facilitate distributed computing), world modelling (to provide location-based context), perception (to collect information about world state), and service description (to support decomposition of device control, internal logic, and user interface). This paper describes the current research in each of these areas, highlighting some common requirements for any intelligent environment.
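As a rough illustration of how those four roles might be separated, here is a minimal Python sketch; the class and method names below are assumptions made for illustration, not the actual EasyLiving interfaces.

```python
from typing import Protocol

# Illustrative decomposition of the roles named in the abstract (world model,
# perception, service description); middleware would route messages between them.
# All names here are hypothetical, not the EasyLiving API.

class WorldModel(Protocol):
    """Geometric world model providing location-based context."""
    def update(self, entity: str, location: tuple[float, float, float]) -> None: ...
    def locate(self, entity: str) -> tuple[float, float, float]: ...

class Perception(Protocol):
    """Sensing component that reports observed world state into the model."""
    def observe(self, world: WorldModel) -> None: ...

class Service(Protocol):
    """Service description separating device control, internal logic, and UI."""
    def capabilities(self) -> list[str]: ...
    def invoke(self, capability: str, **arguments) -> None: ...

# Distributed middleware would push Perception updates into the WorldModel and
# dispatch a user's intent to whichever Service is nearest to that user.
```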


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1988

Vision and navigation for the Carnegie-Mellon Navlab

Charles E. Thorpe; Martial Hebert; Takeo Kanade; Steven A. Shafer

A distributed architecture built around the CODGER (communication database with geometric reasoning) knowledge database is described for a mobile robot system that includes both perception and navigation tools. Results are described for vision and navigation tests on a mobile testbed whose capabilities are based on two types of vision algorithms: color vision for road following, and 3-D vision for obstacle detection and avoidance. The perception modules are integrated into a system that allows the vehicle to drive continuously in an actual outdoor environment, navigating on roads while avoiding obstacles.


IEEE International Workshop on Visual Surveillance | 2000

Multi-camera multi-person tracking for EasyLiving

John Krumm; Steve Harris; Brian Meyers; Barry Brumitt; Michael Hale; Steven A. Shafer

While intelligent environments are often cited as a reason for doing work on visual person-tracking, really making an intelligent environment exposes many real-world problems in visual tracking that must be solved to make the technology practical. In the context of our EasyLiving project in intelligent environments, we created a practical person-tracking system that solves most of the real-world problems. It uses two sets of color stereo cameras for tracking multiple people during live demonstrations in a living room. The stereo images are used for locating people, and the color images are used for maintaining their identities. The system runs quickly enough to make the room feel responsive, and it tracks multiple people standing, walking, sitting, occluding, and entering and leaving the space.
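As a minimal illustration of the division of labor the abstract describes (stereo supplies locations, color maintains identities), the Python sketch below associates new detections with existing person tracks by mixing ground-plane distance with color-histogram similarity. The cost function and thresholds are assumptions, not the EasyLiving tracker's actual logic.

```python
import numpy as np

def associate(tracks, detections, max_dist=0.75, w_color=0.5):
    """Greedy association of new detections with existing person tracks.

    tracks / detections: lists of dicts, each with a ground-plane 'pos'
    (from stereo) and a normalized color 'hist' (from the color images).
    The cost mixes Euclidean distance with histogram dissimilarity; this is
    an illustrative scheme, not the EasyLiving system's own procedure.
    """
    assigned = {}
    free = set(range(len(detections)))
    for ti, t in enumerate(tracks):
        best, best_cost = None, np.inf
        for di in free:
            d = detections[di]
            dist = np.linalg.norm(np.asarray(t["pos"]) - np.asarray(d["pos"]))
            if dist > max_dist:
                continue
            # 1 - histogram intersection: 0 for identical color signatures.
            color = 1.0 - np.minimum(t["hist"], d["hist"]).sum()
            cost = dist + w_color * color
            if cost < best_cost:
                best, best_cost = di, cost
        if best is not None:
            assigned[ti] = best
            free.remove(best)
    return assigned, free   # matched pairs, plus unmatched detections (new people)
```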


International Journal of Computer Vision | 1989

A physical approach to color image understanding

Gudrun Klinker; Steven A. Shafer; Takeo Kanade

In this paper, we present an approach to color image understanding that can be used to segment and analyze surfaces with color variations due to highlights and shading. The work is based on a theory, the Dichromatic Reflection Model, which describes the color of the reflected light as a mixture of light from surface reflection (highlights) and body reflection (object color). In the past, we have shown how the dichromatic theory can be used to separate a color image into two intrinsic reflection images: an image of just the highlights, and the original image with the highlights removed. At that time, the algorithm could only be applied to hand-segmented images. This paper shows how the same reflection model can be used to integrate color image segmentation into the image analysis. The result is a color image understanding system, capable of generating physical descriptions of the reflection processes occurring in the scene. Such descriptions include the intrinsic reflection images, an image segmentation, and symbolic information about the object and highlight colors. This line of research can lead to physics-based image understanding methods that are both more reliable and more useful than traditional methods.
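As a hedged sketch of the per-pixel step implied by the Dichromatic Reflection Model, the Python below decomposes each color into a matte and a highlight component, assuming the body and highlight colors are already known; the full method in the paper estimates those colors, and the segmentation, from the image itself.

```python
import numpy as np

def decompose_dichromatic(pixels, body_color, highlight_color):
    """Split RGB pixels into matte and highlight components.

    Per the dichromatic model, each color is treated as
        c = m_b * body_color + m_s * highlight_color.
    body_color and highlight_color are assumed known here (the paper's
    algorithm recovers them from planar color clusters).
    """
    pixels = np.asarray(pixels, dtype=float)                  # (N, 3) RGB samples
    basis = np.stack([body_color, highlight_color], axis=1)   # (3, 2)
    # Least-squares coefficients (m_b, m_s) for every pixel.
    coeffs, *_ = np.linalg.lstsq(basis, pixels.T, rcond=None)
    m_b, m_s = coeffs                                          # each of shape (N,)
    matte = np.outer(m_b, body_color)                          # highlight-free image
    highlight = np.outer(m_s, highlight_color)                 # highlights only
    return matte, highlight

# Toy usage: a reddish object under roughly white illumination.
body = np.array([0.9, 0.3, 0.2])
light = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)
obs = np.array([[0.95, 0.42, 0.33],    # mostly matte pixel
                [1.00, 0.80, 0.75]])   # pixel with a strong highlight
matte, highlight = decompose_dichromatic(obs, body, light)
```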


International Conference on Robotics and Automation | 1987

Error modeling in stereo navigation

Larry Matthies; Steven A. Shafer

In stereo navigation, a mobile robot estimates its position by tracking landmarks with on-board cameras. Previous systems for stereo navigation have suffered from poor accuracy, in part because they relied on scalar models of measurement error in triangulation. Using three-dimensional (3D) Gaussian distributions to model triangulation error is shown to lead to much better performance. How to compute the error model from image correspondences, estimate robot motion between frames, and update the global positions of the robot and the landmarks over time are discussed. Simulations show that, compared to scalar error models, the 3D Gaussian reduces the variance in robot position estimates and better distinguishes rotational from translational motion. A short indoor run with real images supported these conclusions and computed the final robot position to within two percent of distance and one degree of orientation. These results illustrate the importance of error modeling in stereo vision for this and other applications.
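A minimal sketch of the 3-D Gaussian triangulation error model, assuming a simple rectified stereo geometry and isotropic pixel noise; the Jacobian is taken numerically here, which is an illustrative shortcut rather than the paper's formulation.

```python
import numpy as np

def triangulate(xl, xr, y, f=500.0, b=0.2):
    """Rectified-stereo triangulation (hypothetical simple camera model).

    xl, xr: horizontal pixel coordinates in the left/right images,
    y: vertical coordinate (shared after rectification),
    f: focal length in pixels, b: baseline in meters.
    """
    d = xl - xr                       # disparity
    Z = f * b / d
    return np.array([xl * Z / f, y * Z / f, Z])

def triangulation_covariance(xl, xr, y, pixel_sigma=0.5, eps=1e-3):
    """3-D Gaussian error model: Sigma_p = J Sigma_m J^T.

    Sigma_m is the covariance of the measurement (xl, xr, y); the Jacobian J
    of the triangulation function is estimated by finite differences.
    """
    m = np.array([xl, xr, y], dtype=float)
    p0 = triangulate(*m)
    J = np.zeros((3, 3))
    for i in range(3):
        dm = m.copy()
        dm[i] += eps
        J[:, i] = (triangulate(*dm) - p0) / eps
    sigma_m = (pixel_sigma ** 2) * np.eye(3)
    return p0, J @ sigma_m @ J.T

point, cov = triangulation_covariance(320.0, 300.0, 10.0)
# The covariance is strongly elongated along the viewing ray (depth axis),
# which is exactly what a scalar error model cannot represent.
```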


International Journal of Computer Vision | 1992

The measurement of highlights in color images

Gudrun Klinker; Steven A. Shafer; Takeo Kanade

In this paper, we present an approach to color image understanding that accounts for color variations due to highlights and shading. We demonstrate that the reflected light from every point on a dielectric object, such as plastic, can be described as a linear combination of the object color and the highlight color. The colors of all light rays reflected from one object then form a planar cluster in the color space. The shape of this cluster is determined by the object and highlight colors and by the object shape and illumination geometry. We present a method that exploits the difference between object color and highlight color to separate the color of every pixel into a matte component and a highlight component. This generates two intrinsic images, one showing the scene without highlights, and the other one showing only the highlights. The intrinsic images may be a useful tool for a variety of algorithms in computer vision, such as stereo vision, motion analysis, shape from shading, and shape from highlights. Our method combines the analysis of matte and highlight reflection with a sensor model that accounts for camera limitations. This enables us to successfully run our algorithm on real images taken in a laboratory setting. We show and discuss the results.
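To make the planar-cluster claim concrete, the sketch below fits the color plane of one object's pixels with an SVD and reports how planar the cluster is; it illustrates the geometric statement only, not the segmentation or sensor-modeling machinery of the paper.

```python
import numpy as np

def fit_color_plane(pixels):
    """Fit the plane through the origin that best contains an object's RGB cluster.

    Under the dichromatic model every color is a non-negative combination of
    the body color and the highlight color, so the cluster should be planar.
    The two leading right singular vectors span the estimated plane; the
    smallest singular value measures departure from planarity (noise, clipping).
    """
    P = np.asarray(pixels, dtype=float)        # (N, 3) RGB samples from one object
    _, s, Vt = np.linalg.svd(P, full_matrices=False)
    plane_basis = Vt[:2]                       # spans body/highlight directions
    planarity_residual = s[2] / (s.sum() + 1e-12)
    return plane_basis, planarity_residual
```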


Computer Vision and Pattern Recognition | 1993

Depth from focusing and defocusing

Yalin Xiong; Steven A. Shafer

The problem of obtaining depth information from focusing and defocusing is studied. In depth from focusing, instead of the Fibonacci search, which is often trapped in local maxima, a combination of Fibonacci search and curve fitting is proposed; this combination leads to an unprecedentedly accurate result. A blurring model that takes geometric blurring as well as imaging blurring into consideration is proposed for calibration. In spectrogram-based depth from defocusing, a maximal resemblance estimation method is proposed to reduce or eliminate the window effect.
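A small sketch of the coarse-search-plus-curve-fitting idea for depth from focusing: sample a sharpness criterion over focus positions, then fit a parabola around the best sample to interpolate the peak. The focus measure and the quadratic fit below are assumptions standing in for the paper's specific choices.

```python
import numpy as np

def sharpness(image):
    """Simple focus measure: variance of a Laplacian response (an assumed criterion)."""
    lap = (np.roll(image, 1, 0) + np.roll(image, -1, 0) +
           np.roll(image, 1, 1) + np.roll(image, -1, 1) - 4 * image)
    return lap.var()

def refine_focus(positions, scores):
    """Pick the best sampled focus position, then fit a parabola through it and
    its two neighbours to interpolate the peak. This mirrors the idea of pairing
    a discrete search with curve fitting, not the paper's exact search schedule."""
    i = int(np.argmax(scores))
    if i == 0 or i == len(scores) - 1:
        return positions[i]                      # peak lies at the sampled boundary
    x = np.asarray(positions[i - 1:i + 2], dtype=float)
    y = np.asarray(scores[i - 1:i + 2], dtype=float)
    a, b, _ = np.polyfit(x, y, 2)                # y = a*x^2 + b*x + c
    return -b / (2 * a)                          # vertex of the fitted parabola
```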


Human Factors in Computing Systems | 2003

XWand: UI for intelligent spaces

Andrew D. Wilson; Steven A. Shafer

The XWand is a novel wireless sensor package that enables styles of natural interaction with intelligent environments. For example, a user may point the wand at a device and control it using simple gestures. The XWand system leverages the intelligence of the environment to best determine the user's intention. We detail the hardware device, signal processing algorithms to recover position and orientation, gesture recognition techniques, a multimodal (wand and speech) computational architecture, and a preliminary user study examining pointing performance under conditions of tracking availability and audio feedback.
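As a hedged illustration of the pointing step, the sketch below casts a ray from a recovered wand pose and selects the registered device lying closest to that ray; the device names, coordinates, and selection rule are hypothetical, not the XWand system's actual procedure.

```python
import numpy as np

def pointed_device(wand_pos, wand_dir, devices, max_offset=0.3):
    """Return the device the wand is most plausibly pointing at.

    wand_pos: 3-D wand position; wand_dir: direction the wand points;
    devices: {name: 3-D location} from the environment's world model.
    A device is a candidate if its perpendicular distance to the wand ray
    is below max_offset (meters). Illustrative rule only.
    """
    wand_pos = np.asarray(wand_pos, float)
    d = np.asarray(wand_dir, float)
    d = d / np.linalg.norm(d)
    best, best_offset = None, np.inf
    for name, loc in devices.items():
        v = np.asarray(loc, float) - wand_pos
        along = v @ d
        if along <= 0:                            # device is behind the wand
            continue
        offset = np.linalg.norm(v - along * d)    # distance from the pointing ray
        if offset < max_offset and offset < best_offset:
            best, best_offset = name, offset
    return best

# Hypothetical room layout (meters).
devices = {"lamp": (2.0, 1.0, 1.5), "media_player": (0.0, 3.0, 1.0)}
target = pointed_device((1.0, 1.0, 1.2), (1.0, 0.0, 0.3), devices)   # -> "lamp"
```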


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 1994

What is the center of the image?

Reg G. Willson; Steven A. Shafer

To model the way that cameras project the three-dimensional world into a two-dimensional image we need to know the camera’s image center. First-order models of lens behavior, such as the pinhole-camera model and the thin-lens model, suggest that the image center is a single, fixed, and intrinsic parameter of the lens. On closer inspection, however, we find that there are many possible definitions for image center. Most image centers do not have the same coordinates and, moreover, move as lens parameters are changed. We present a taxonomy that includes 15 techniques for measuring image center. Several techniques are applied to a precision automated zoom lens, and experimental results are shown.


Workshop on Perceptive User Interfaces | 2001

Visual panel: virtual mouse, keyboard and 3D controller with an ordinary piece of paper

Zhengyou Zhang; Ying Wu; Ying Shan; Steven A. Shafer

This paper presents a vision-based interface system, VISUAL PANEL, which employs an arbitrary quadrangle-shaped panel (e.g., an ordinary piece of paper) and a tip pointer (e.g., a fingertip) as an intuitive, wireless and mobile input device. The system can accurately and reliably track the panel and the tip pointer. The panel tracking continuously determines the projective mapping between the panel at its current position and the display, which in turn maps the tip position to the corresponding position on the display. By detecting clicking and dragging actions, the system can fulfill many tasks such as controlling a remote large display and simulating a physical keyboard. Users can naturally use their fingers or other tip pointers to issue commands and type text. Furthermore, by tracking the 3D position and orientation of the visual panel, the system can also provide 3D information, serving as a virtual joystick, to control 3D virtual objects.
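A minimal sketch of the projective mapping at the core of the system: estimate the homography from the four tracked panel corners to the display corners (via the standard DLT), then map the fingertip position to a cursor position. The corner coordinates below are made up for illustration.

```python
import numpy as np

def homography_from_quad(src, dst):
    """Direct Linear Transform: homography mapping 4 src corners onto 4 dst corners."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def map_point(H, p):
    """Apply the homography to a 2-D point (with homogeneous normalization)."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# Hypothetical numbers: detected panel corners in the camera image and the
# corners of a 1920x1080 display; the tracked fingertip maps to a cursor position.
panel = [(210, 140), (620, 160), (600, 450), (190, 430)]
screen = [(0, 0), (1920, 0), (1920, 1080), (0, 1080)]
H = homography_from_quad(panel, screen)
cursor = map_point(H, (400, 300))
```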

Collaboration


Dive into Steven A. Shafer's collaborations.

Top Co-Authors

Takeo Kanade, Carnegie Mellon University
Carol L. Novak, Carnegie Mellon University
Charles E. Thorpe, Carnegie Mellon University
Reg G. Willson, Carnegie Mellon University
Yalin Xiong, Carnegie Mellon University
Douglas A. Reece, Carnegie Mellon University
Martial Hebert, Carnegie Mellon University