Richard S. Wallace
New York University
Publications
Featured research published by Richard S. Wallace.
IEEE Computer Graphics and Applications | 1993
Takaaki Akimoto; Yasuhito Suenaga; Richard S. Wallace
Model-based encoding of human facial features for narrowband visual communication is described. Based on an already prepared 3D human model, this coding method detects and understands a person's body motion and facial expressions. It expresses the essential information as compact codes and transmits it. At the receiving end, this code becomes the basis for modifying the 3D model of the person and thereby generating lifelike human images. The feature extraction used by the system to acquire data for regions or edges that express the eyes, nose, mouth, and outlines of the face and hair is discussed. The way in which the system creates a 3D model of the person by using the features extracted in the first part to modify a generic head model is also discussed.
International Journal of Computer Vision | 1994
Richard S. Wallace; Ping-Wen Ong; Benjamin B. Bederson; Eric L. Schwartz
This paper describes a graph-based approach to image processing, intended for use with images obtained from sensors having space variant sampling grids. The connectivity graph (CG) is presented as a fundamental framework for posing image operations for any kind of space variant sensor. Partially motivated by the observation that human vision is strongly space variant, a number of research groups have been experimenting with space variant sensors. Such systems cover wide solid angles yet maintain high acuity in their central regions. Implementation of space variant systems poses at least two outstanding problems. First, such a system must be active, in order to utilize its high acuity region; second, there are significant image processing problems introduced by the non-uniform pixel size, shape and connectivity. Familiar image processing operations such as connected components, convolution, template matching, and even image translation, take on new and different forms when defined on space variant images. The present paper provides a general method for space variant image processing, based on a connectivity graph which represents the neighbor-relations in an arbitrarily structured sensor. We illustrate this approach with the following applications: (1) Connected components is reduced to its graph theoretic counterpart. We illustrate this on a logmap sensor, which possesses a difficult topology due to the branch cut associated with the complex logarithm function. (2) We show how to write local image operators in the connectivity graph that are independent of the sensor geometry. (3) We relate the connectivity graph to pyramids over irregular tessellations, and implement a local binarization operator in a 2-level pyramid. (4) Finally, we expand the connectivity graph into a structure we call a transformation graph, which represents the effects of geometric transformations in space variant image sensors.
Using the transformation graph, we define an efficient algorithm for matching in the logmap images and solve the template matching problem for space variant images. Because of the very small number of pixels typical of logarithmic structured space variant arrays, the connectivity graph approach to image processing is suitable for real-time implementation, and provides a generic solution to a wide range of image processing applications with space variant sensors.
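The reduction of connected components to its graph-theoretic counterpart, as in point (1) above, can be sketched with a breadth-first flood fill over an adjacency-list connectivity graph. The representation below (integer pixel ids, a dict of neighbor lists) is illustrative, not the paper's data structure; the point is that the operator never assumes a rectangular grid.

```python
from collections import deque

def connected_components(neighbors, foreground):
    """Label connected components on an arbitrary connectivity graph.

    neighbors:  dict mapping each pixel id to a list of adjacent pixel ids
                (the connectivity graph; valid for any sensor geometry).
    foreground: set of pixel ids that are "on" after binarization.
    Returns a dict mapping each foreground pixel id to a component label.
    """
    labels = {}
    next_label = 0
    for seed in foreground:
        if seed in labels:
            continue
        labels[seed] = next_label
        queue = deque([seed])
        while queue:                      # breadth-first flood fill
            p = queue.popleft()
            for q in neighbors[p]:
                if q in foreground and q not in labels:
                    labels[q] = next_label
                    queue.append(q)
        next_label += 1
    return labels

# Toy graph: pixels 0-1-2 form a chain, pixel 3 is isolated,
# so 0, 1, 2 share one label and 3 gets its own.
g = {0: [1], 1: [0, 2], 2: [1], 3: []}
print(connected_components(g, {0, 1, 2, 3}))
```

Because the neighbor relation is supplied as data, the same routine works unchanged across the logmap branch cut, where grid-based scanline algorithms break down.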
international conference on robotics and automation | 1986
Richard S. Wallace; K. Matsuzaki; Yoshimasa Goto; Jill D. Crisman; Jon A. Webb; Takeo Kanade
We report progress in visual road following by autonomous robot vehicles. We present results and work in progress in the areas of system architecture, image rectification and camera calibration, oriented edge tracking, color classification and road-region segmentation, extracting geometric structure, and the use of a map. In test runs of an outdoor robot vehicle, the Terregator, under control of the Warp computer, we have demonstrated continuous motion vision-guided road-following at speeds up to 1.08 km/hour with image processing and steering servo loop times of 3 sec.
international conference on robotics and automation | 1994
Benjamin B. Bederson; Richard S. Wallace; Eric L. Schwartz
A pan-tilt mechanism is a computer-controlled actuator designed to point an object such as a camera sensor. For applications in active vision, a pan-tilt mechanism should be accurate, fast, small, inexpensive and have low power requirements. The authors have designed and constructed a new type of actuator meeting these requirements, which incorporates both pan and tilt into a single, two-degree-of-freedom device. The spherical pointing motor (SPM) consists of three orthogonal motor windings in a permanent magnetic field, configured to move a small camera mounted on a gimbal. It is an absolute positioning device and is run open-loop. The SPM is capable of panning and tilting a load of 15 grams, for example a CCD image sensor, at rotational velocities of several hundred degrees per second with a repeatability of 0.15°. The authors have also built a miniature camera consisting of a single CCD sensor chip and miniature lens assembly that fits on the rotor of this motor. In this paper, the authors discuss the theory of the SPM, which includes its basic electromagnetic principles, and derive the relationship between applied currents and resultant motor position. The authors present an automatic calibration procedure and discuss open- and closed-loop control strategies. Finally, the authors present the physical characteristics and results of their prototype.
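The current-to-position relationship derived in the paper is not reproduced here, but the open-loop idea can be sketched under a simplified model: the rotor magnet aligns with the net field of the three orthogonal windings, and that field direction is proportional to the current vector, so pointing at a given pan and tilt means driving the currents along the desired unit direction. All names and conventions below are illustrative assumptions, not the authors' calibration.

```python
import math

def spm_currents(pan_deg, tilt_deg, i_max=1.0):
    """Open-loop winding currents for a spherical pointing motor
    (simplified, hypothetical model: rotor aligns with the coil field,
    whose direction is proportional to the current vector)."""
    pan = math.radians(pan_deg)
    tilt = math.radians(tilt_deg)
    # Unit vector of the desired pointing direction.
    dx = math.cos(tilt) * math.cos(pan)
    dy = math.cos(tilt) * math.sin(pan)
    dz = math.sin(tilt)
    # Drive each orthogonal winding with the matching component.
    return (i_max * dx, i_max * dy, i_max * dz)

ix, iy, iz = spm_currents(45.0, 30.0)
```

In practice the paper's automatic calibration would replace this idealized proportionality with a measured mapping, and closed-loop strategies would correct residual error; the sketch only shows why three orthogonal windings suffice for two degrees of freedom.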
international conference on pattern recognition | 1992
Benjamin B. Bederson; Richard S. Wallace; Eric L. Schwartz
The authors have developed a prototype miniaturized active vision system whose sensor architecture is based on a logarithmically structured space-variant pixel geometry. This system integrates a CCD sensor, miniature pan-tilt actuator, controller, general purpose processors and display. Due to the ability of space-variant sensors to cover large work-spaces yet provide high acuity with an extremely small number of pixels, space-variant active vision system architectures provide the potential for radical reductions in system size and cost. The authors describe a prototype space-variant active vision system which performs tasks such as moving object tracking and functions as a video telephone. The potential application domains for systems of this type include vision systems for mobile robots and robot manipulators, traffic monitoring systems, security and surveillance, and consumer video communications.<<ETX>>
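The logarithmically structured space-variant sampling behind this line of work can be illustrated with a minimal log-polar indexing function: pixels at radius r from the fovea fall into rings spaced by log(r), and the angle selects a wedge. All parameters here (ring and wedge counts, fovea radius) are illustrative, not the actual sensor's geometry.

```python
import math

def logmap_index(x, y, cx, cy, rho0=1.0, n_rings=32, n_wedges=64, r_max=100.0):
    """Map a uniform-image pixel (x, y) to a (ring, wedge) logmap cell.

    Rings are spaced logarithmically in radius between rho0 and r_max,
    so the few available pixels concentrate acuity near the center
    (cx, cy) while still covering the whole field.
    """
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy)
    if r < rho0:
        return None  # inside the fovea: handled separately (uniform sampling)
    ring = int(n_rings * math.log(r / rho0) / math.log(r_max / rho0))
    wedge = int(((math.atan2(dy, dx) + 2 * math.pi) % (2 * math.pi))
                / (2 * math.pi) * n_wedges)
    return min(ring, n_rings - 1), wedge
```

Building the space-variant image from a uniform one then amounts to averaging all uniform pixels that map to the same (ring, wedge) cell; note how few cells (32 × 64 here) cover the whole field.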
machine vision applications | 1995
Benjamin B. Bederson; Richard S. Wallace; Eric L. Schwartz
We have developed a prototype for a miniaturized, active vision system with a sensor architecture based on a logarithmically structured, space-variant pixel geometry. The central part of the image has a high resolution, and the periphery has a smoothly falling resolution. The human visual system uses a similar image architecture. Our system integrates a miniature CCD-based camera, a novel pan-tilt actuator/controller, general purpose processors, a video-telephone modem and a display. Due to the ability of space-variant sensors to cover large work spaces, yet provide high acuity with an extremely small number of pixels, architectures with space-variant, active vision systems provide a potential for reductions in system size and cost of several orders of magnitude. Cortex-I takes up less than a third of a cubic foot, including camera, actuators, control, computers, and power supply, and was built for a (one-off) parts cost of roughly US$2000. In this paper, we describe several applications that we have developed for Cortex-I, such as tracking moving objects, visual attention, pattern recognition (license plate reading), and video-telephone communications (teleoperation). We report here on the design of the camera and optics (8 × 8 × 8 mm), a method to convert the uniform image to a space-variant image, and a new miniature pan-tilt actuator, the spherical pointing motor (SPM) (4 × 5 × 6 cm). Finally, we discuss applications for motion tracking and license plate reading. Potential application domains for systems of this type include vision systems for mobile robots and robot manipulators, traffic monitoring systems, security and surveillance, telerobotics, and consumer video communications. The long-range goal of this project is to demonstrate that major new applications of robotics will become feasible when small, low-cost, machine-vision systems can be mass produced. We use the term “commodity robotics” to express the expected impact of the possibilities for opening up new application niches in robotics and machine vision, for what has until now been an expensive, and therefore limited, technology.
international conference on robotics and automation | 1987
Richard S. Wallace
international conference on pattern recognition | 1990
Richard S. Wallace; Takeo Kanade
This paper presents a color vision based robot road following program. The program uses an adaptive color model to classify pixels as road or shoulder features. To update the color statistics, the program tracks the shape of the road through the image sequence. As long as the program has either a reasonable color model from the previous image or a reasonable estimation of the expected road position, it can steer a robot vehicle along a road reliably even in the presence of surface color variation, fluctuations in illumination conditions and deviations in sensor response. We present results from several test runs of a robot vehicle in a natural outdoor environment. The paper includes a discussion of failure modes of the program that have been catalogued and analyzed in order to guide future developments.
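The adaptive color model is described above only at a high level; a minimal classify-then-update loop with one Gaussian per class might look like the sketch below. The single-Gaussian assumption, class names, and blending factor are all illustrative, not the actual program's statistics.

```python
import math

class ColorClass:
    """Single-Gaussian color model for one class (road or shoulder) --
    a simplified stand-in for the program's adaptive color statistics."""

    def __init__(self, mean, var):
        self.mean = list(mean)   # per-channel mean, e.g. (r, g, b)
        self.var = list(var)     # per-channel variance

    def log_likelihood(self, pixel):
        ll = 0.0
        for x, m, v in zip(pixel, self.mean, self.var):
            ll += -0.5 * ((x - m) ** 2 / v + math.log(2 * math.pi * v))
        return ll

    def update(self, pixels, alpha=0.2):
        """Blend in statistics from pixels labeled in the current frame,
        tracking slow drift in surface color and illumination."""
        n = len(pixels)
        for c in range(len(self.mean)):
            m = sum(p[c] for p in pixels) / n
            v = sum((p[c] - m) ** 2 for p in pixels) / n + 1e-6
            self.mean[c] = (1 - alpha) * self.mean[c] + alpha * m
            self.var[c] = (1 - alpha) * self.var[c] + alpha * v

def classify(pixel, road, shoulder):
    """Maximum-likelihood decision between the two class models."""
    if road.log_likelihood(pixel) >= shoulder.log_likelihood(pixel):
        return "road"
    return "shoulder"
```

Each frame, pixels inside the tracked road shape would be fed to `road.update` and the rest to `shoulder.update`, which is what lets the classifier survive gradual color and illumination changes.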
Computer Vision, Graphics, and Image Processing | 1989
Richard S. Wallace; Jon A. Webb; I-Chen Wu
A two-step procedure that finds natural clusters in geometric point data is described. The first step computes a hierarchical cluster tree minimizing an entropy objective function. The second step recursively explores the tree for a level clustering having minimum description length. Together, these two steps find natural clusters without requiring a user to specify threshold parameters or so-called magic numbers. In particular, the method automatically determines the number of clusters in the input data. The first step exploits a new hierarchical clustering procedure called numerical iterative hierarchical clustering (NIHC). The output of NIHC is a cluster tree. The second step in the procedure searches the tree for a minimum-description-length (MDL) level clustering. The MDL formulation, equivalent to maximizing the posterior probability, is suited to the clustering problem because it defines a natural prior distribution.
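The MDL level-selection step can be illustrated on 1-D data: score each candidate level clustering from the tree by a two-part code length and keep the minimum, which fixes the number of clusters with no user threshold. The cost formula below (per-cluster parameter cost plus Gaussian data cost with a quantization floor) is a simplified stand-in for the paper's criterion, and the cluster tree is built by hand rather than by NIHC.

```python
import math

def mdl_cost(clusters, eps=0.5):
    """Two-part description length of a flat clustering of 1-D points:
    model bits (one mean per cluster at 0.5*log2(n) bits each) plus
    data bits under a per-cluster Gaussian, with variance floored at
    eps**2 to reflect finite measurement precision."""
    n = sum(len(c) for c in clusters)
    model_bits = len(clusters) * 0.5 * math.log2(n)
    data_bits = 0.0
    for c in clusters:
        mean = sum(c) / len(c)
        var = max(sum((x - mean) ** 2 for x in c) / len(c), eps ** 2)
        data_bits += 0.5 * len(c) * math.log2(2 * math.pi * math.e * var)
    return model_bits + data_bits

def best_level(levels):
    """Return the level clustering with minimum description length."""
    return min(levels, key=mdl_cost)

# Two well-separated 1-D groups; three candidate levels of a cluster tree.
points = [0.0, 0.1, 0.2, 10.0, 10.1, 10.2]
levels = [
    [points],                                   # root: everything together
    [points[:3], points[3:]],                   # the natural 2-cluster level
    [points[:2], points[2:3], points[3:5], points[5:]],  # over-split leaves
]
```

The one-cluster level pays heavily in data bits for its huge variance, while the four-cluster level pays extra model bits for no gain, so the two-cluster level minimizes the total.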
international conference on robotics and automation | 1992
Benjamin B. Bederson; Richard S. Wallace; Eric L. Schwartz
One of the most important obstacles standing in the way of widespread use of parallel computers for low-level vision is the lack of a programming language suited to low-level vision that can be mapped efficiently onto different computer architectures. The Apply language has been designed and implemented to perform such operations. We demonstrate its capabilities by comparing the performance of the Hughes HBA, the Carnegie Mellon Warp machine, and the Sun 3 on a large set of low-level vision programs. To demonstrate its efficiency, we also compare the performance of the Sun code with routines of similar function written by professional programmers.
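The programming model behind Apply is that the programmer writes only a single-pixel function of a local input window, and the system supplies the iteration, border handling, and mapping onto each target architecture. That model can be sketched in miniature as follows; this is an illustrative Python harness, not Apply syntax or its compiler.

```python
def apply_op(image, radius, fn, border=0.0):
    """Apply-style harness: fn computes one output pixel from the
    (2*radius+1)-square input window centered there; the harness owns
    iteration and border handling, so fn stays architecture-neutral."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            win = [[image[y + dy][x + dx]
                    if 0 <= y + dy < h and 0 <= x + dx < w else border
                    for dx in range(-radius, radius + 1)]
                   for dy in range(-radius, radius + 1)]
            out[y][x] = fn(win)
    return out

def mean3x3(win):
    """Example per-pixel operator: 3x3 box average."""
    return sum(sum(row) for row in win) / 9.0
```

Because every output pixel is computed independently from its window, a compiler for such a language is free to partition the loop across processors, which is exactly what makes one source program portable across machines as different as the Warp and a Sun 3.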