Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jill D. Crisman is active.

Publication


Featured research published by Jill D. Crisman.


International Conference on Robotics and Automation | 1993

SCARF: a color vision system that tracks roads and intersections

Jill D. Crisman; Charles E. Thorpe

SCARF, a color vision system that recognizes difficult roads and intersections, is presented. It has been integrated into several navigation systems that drive a robot vehicle, the Navlab, on a variety of roads in many different weather conditions. SCARF recognizes roads that have degraded surfaces and edges with no lane markings in difficult shadow conditions. It also recognizes intersections with or without predictions from the navigation system. This is the first system that detects intersections in images without a priori knowledge of the intersection shape and location. SCARF uses Bayesian classification to determine a road-surface likelihood for each pixel in a reduced color image. It then evaluates a number of road and intersection candidates by matching an ideal road-surface likelihood image with the results from the Bayesian classification. The best matching candidate is passed to a path-planning system that navigates the robot vehicle on the road or intersection. The SCARF system is described in detail, results on a variety of images are presented, and Navlab test runs using SCARF are discussed.
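The per-pixel Bayesian classification step described above can be sketched as follows. This is a minimal illustration that assumes a single Gaussian color model per class (road vs. off-road), which is an assumption made here for brevity, not SCARF's published implementation:

```python
import numpy as np

def road_likelihood(image, road_mean, road_cov, offroad_mean, offroad_cov,
                    prior_road=0.5):
    """Posterior probability of 'road' for every pixel, via Bayes' rule.

    image: (H, W, 3) float array of color values.
    Each class is modeled as one Gaussian in color space (an
    illustrative assumption, not SCARF's exact model).
    """
    def gaussian_pdf(x, mean, cov):
        d = x - mean                                  # (H, W, 3) deviations
        inv = np.linalg.inv(cov)
        # squared Mahalanobis distance per pixel
        m = np.einsum('hwi,ij,hwj->hw', d, inv, d)
        norm = np.sqrt((2 * np.pi) ** 3 * np.linalg.det(cov))
        return np.exp(-0.5 * m) / norm

    p_road = gaussian_pdf(image, road_mean, road_cov) * prior_road
    p_off = gaussian_pdf(image, offroad_mean, offroad_cov) * (1 - prior_road)
    return p_road / (p_road + p_off + 1e-12)          # normalize posteriors
```

The resulting likelihood image is what a candidate-matching stage would compare against ideal road-shape templates.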


International Conference on Robotics and Automation | 1991

UNSCARF: a color vision system for the detection of unstructured roads

Jill D. Crisman; Charles E. Thorpe

The problem of navigating a robot vehicle on unstructured roads that have no lane markings, may have degraded surfaces and edges, and may be partially obscured by strong shadows is addressed. These conditions cause many road following systems to fail. The authors have built a system, UNSCARF, which is based on pattern recognition techniques, for successfully navigating on a variety of unstructured roads. UNSCARF does not need a road location prediction to find the location of the road; therefore, UNSCARF can be used as a bootstrapping system. It uses a clustering technique to group pixels with similar colors and locations. It then matches models of road shape to locate the roads in the image. These methods are more robust in noisy conditions than other road interpretation techniques. UNSCARF has been integrated into a navigation system that has successfully driven a test vehicle in many types of weather conditions.
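Grouping pixels by similar color and location can be sketched with a simple k-means-style pass over joint (color, position) features. This is an illustrative stand-in for UNSCARF's unsupervised clustering, not the authors' algorithm; the feature weighting and the deterministic initialization are assumptions made here:

```python
import numpy as np

def segment_color_location(image, k=4, weight=1.0, iters=10):
    """Cluster pixels on (R, G, B, weight*x, weight*y) features.

    image: (H, W, 3) float array. Returns an (H, W) integer label map.
    Centers are initialized from evenly spaced pixels (a simplifying
    choice for determinism, not part of the original method).
    """
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats = np.concatenate(
        [image.reshape(-1, 3),
         (weight * xs).reshape(-1, 1),
         (weight * ys).reshape(-1, 1)], axis=1).astype(float)

    idx = np.linspace(0, len(feats) - 1, k).astype(int)
    centers = feats[idx].copy()
    for _ in range(iters):
        # assign each pixel to its nearest cluster center
        d = np.linalg.norm(feats[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute centers; keep the old center if a cluster empties
        for j in range(k):
            if np.any(labels == j):
                centers[j] = feats[labels == j].mean(axis=0)
    return labels.reshape(h, w)
```

A road-shape matching stage would then score combinations of the resulting regions against road models.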


International Conference on Robotics and Automation | 1986

Progress in robot road-following

Richard S. Wallace; K. Matsuzaki; Yoshimasa Goto; Jill D. Crisman; Jon A. Webb; Takeo Kanade

We report progress in visual road following by autonomous robot vehicles. We present results and work in progress in the areas of system architecture, image rectification and camera calibration, oriented edge tracking, color classification and road-region segmentation, extracting geometric structure, and the use of a map. In test runs of an outdoor robot vehicle, the Terregator, under control of the Warp computer, we have demonstrated continuous motion vision-guided road-following at speeds up to 1.08 km/hour with image processing and steering servo loop times of 3 sec.


IEEE Robotics & Automation Magazine | 1996

Graspar: a flexible, easily controllable robotic hand

Jill D. Crisman; Chaitanya Kanojia; Ibrahim Zeid

With only one motor per finger and a simple, easy-to-maintain mechanical structure, the Graspar robotic hand uses minimal computation to provide secure grasping of objects ranging from eggs and light bulbs to tennis rackets, coffee pots and stuffed toys. We first discuss currently developed hands and show the motivation for our work. Next, we present the mechanical structure of our robotic hand and the antagonistic tendoning system. We then show how this structure moves and complies to the object surface. We discuss how we chose the pulley diameters for our tendoning system by maximizing the workspace of the fingers while mechanically ensuring that the hand cannot collide with itself. Our control algorithm, which uses simple mechanical switches, is presented.


1988 Robotics Conferences | 1989

Color Vision For Road Following

Jill D. Crisman; Charles E. Thorpe

At Carnegie Mellon University, we have two new vision systems for outdoor road following. The first system, called SCARF (Supervised Classification Applied to Road Following), is designed to be fast and robust when the vehicle is running in both sunshine and shadows under constant illumination. The second system, UNSCARF (UNSupervised Classification Applied to Road Following), is slower, but provides good results even if the sun is alternately covered by clouds or uncovered. SCARF incorporates results from our previous experience with road tracking by supervised classification. It is an adaptive supervised classification scheme using color data from two cameras to form a new six-dimensional color space. The road is localized by a Hough space technique. SCARF is specifically designed for fast implementation on the WARP supercomputer, an experimental parallel architecture developed at Carnegie Mellon. UNSCARF uses an unsupervised classification algorithm to group the pixels in the image into regions. The road is detected by finding the set of regions which, grouped together, best match the road shape. UNSCARF can be expanded easily to perform unsupervised classification on any number of features, and to use any combination of constraints to select the best combination of regions. The basic unsupervised classification segmentation will also have applications outside the realm of road following.


IEEE Intelligent Transportation Systems | 1997

An easy to install camera calibration for traffic monitoring

Ender Kivanc Bas; Jill D. Crisman

In this paper, a calibration procedure for a single camera overlooking traffic is described. This procedure does not require the technician to measure any corresponding points or use special calibration targets. The technician installing the system simply measures the height and tilt of the camera and selects the road edges in an image. From these inputs, we use the vanishing point to compute the focal length and pan of the camera. We show that, using the calibration parameters, the projection equations can measure distances and speeds of vehicles.
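The vanishing-point geometry behind this kind of calibration can be sketched as follows. The formulas used here (f = |y_vp| / tan(tilt), pan = atan(x_vp·cos(tilt) / f), with image coordinates relative to the principal point and y increasing downward) are a standard back-of-the-envelope derivation for a pitched camera viewing a flat road, not the authors' published procedure:

```python
import numpy as np

def calibrate_from_road_edges(edge1, edge2, tilt_rad):
    """Recover focal length (pixels) and pan (radians) from road edges.

    edge1, edge2: two points each, ((x0, y0), (x1, y1)), on the left and
    right road edges, in image coordinates relative to the principal
    point. tilt_rad is the measured downward camera tilt.
    """
    def line(p, q):
        # homogeneous line through two image points
        return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

    vp = np.cross(line(*edge1), line(*edge2))   # lines meet at the vanishing point
    x_vp, y_vp = vp[0] / vp[2], vp[1] / vp[2]

    f = abs(y_vp) / np.tan(tilt_rad)            # horizon offset gives focal length
    pan = np.arctan2(x_vp * np.cos(tilt_rad), f)
    return f, pan
```

With f and pan known, together with the measured camera height, ground-plane distances (and hence vehicle speeds across frames) follow from back-projection.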


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1991

The Warp machine on Navlab

Jill D. Crisman; Jon A. Webb

The authors review the history of the Carnegie-Mellon Warp machine on Navlab, an autonomous land vehicle, and describe three Navlab vision systems implemented on the Warp machine. They then critically evaluate components of Warp in light of this experience. The Warp machine was used to implement stereo vision for obstacle avoidance and color-based road-following systems. The stereo-vision system was FIDO, which is derived from some of the earliest work in vision-guided robot vehicle navigation. Two color-based road following systems were implemented; one adapted conventional vision techniques to the problem of road recognition and the other used a neural network-based technique to learn road following online. Finally, the authors discuss the value of applications integration with machine development, discuss the limitations of the attached processor model, and give recommendations for future systems.


Lecture Notes in Computer Science | 1998

Progress on the Deictically Controlled Wheelchair

Jill D. Crisman; Michael E. Cleary

We aim to develop a robot which can be commanded simply and accurately, especially by users with reduced mobility. Our shared control approach divides task responsibilities between the user (high level) and the robot (low level). A video interface shared between the user and robot enables the use of a deictic interface. The paper describes our progress toward this goal in several areas. A complete command set has been developed which uses minimal environmental features. Our video tracking algorithms have proven robust on the types of targets used by the commands. Specialized hardware and new tactile and acoustic sensors have been developed. These and other advances are discussed, as well as planned work.


Image and Vision Computing | 1998

The deictically controlled wheelchair

Jill D. Crisman; Michael E. Cleary; Juan Carlos Rojas

We are developing a ‘gopher’ wheelchair robot which can be used as an aid for disabled individuals. The robot uses a shared control architecture where the robot and the human user share the responsibility for a retrieve-and-replace task. The medium of the interactive interface between the robot and the user is stereo video images. In addition, the stereo cameras serve as a primary sensor to detect and track targets which guide the robot's low-level servoing. The person is responsible for selecting objects or targets in the environment and then instructing the robot how to move relative to these targets. This paper first describes the hardware and the control interface of this human-robot system. The description here focuses on the system's video algorithms for tracking and evaluating targets. The system builds a binary shape model for each target selected by the user. It also forms a color mapping used to highlight the target in the image. This mapping is used on subsequent images to create a binary image which can be quickly matched with the target's shape model. We have tested this tracking algorithm on videotaped image sequences and on some runs with our wheelchair mobile robot. Our initial results show that this algorithm is reasonably robust for various types of edge and corner targets necessary for navigation.
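Matching a binary shape model against a color-highlighted binary image can be sketched as an exhaustive agreement count over offsets. This is a simplified illustration of the matching idea only; a practical tracker would restrict the search to a window around the target's previous position, and the scoring rule here is an assumption, not the paper's algorithm:

```python
import numpy as np

def match_binary_template(binary_image, template):
    """Find the offset where a binary shape model best matches a frame.

    binary_image: (H, W) 0/1 array produced by the color mapping.
    template: (h, w) 0/1 shape model. Score = number of agreeing
    pixels at each offset; returns the (row, col) of the best match.
    """
    H, W = binary_image.shape
    h, w = template.shape
    best, best_pos = -1, (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            score = np.sum(binary_image[r:r + h, c:c + w] == template)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```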


Intelligent Robots and Systems | 1995

A color projection for fast generic target tracking

Yue Du; Jill D. Crisman

We present a piecewise linear projection of the 3D color space, which we call categorical color, that greatly reduces the computation required to use color information in robot vision tasks. This 24-bit to 6-bit projection is inspired by the way humans name colors. This projection is developed to provide generic target tracking in real time. A generic target in our system is defined by a user selecting a distinctive object in the window of a color image. The system has no a priori models of object shapes or colors. Therefore, the generic target tracking must perform robustly, in real time, using only the initial example appearance of the target object. To evaluate the performance of our piecewise linear projection on the task of generic target tracking, we compare similar RGB, intensity, and categorical color algorithms. Rather than simply observing the located target, we have developed a quantitative method for evaluating generic target tracking algorithms. By using this procedure, we show that categorical color is a better feature for generic tracking than RGB and gray-level.
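The idea of collapsing 24-bit color into a handful of human-named categories can be illustrated with a toy quantizer. The thresholds and category set below are invented for this sketch; they are not the piecewise linear projection from the paper:

```python
import colorsys

def categorical_color(r, g, b):
    """Map an 8-bit RGB triple to a small set of named color categories.

    A toy hue/saturation/value bucketing, shown only to illustrate
    projecting 24-bit color down to a few categories.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    if v < 0.2:
        return "black"                  # very dark pixels
    if s < 0.15:
        return "white" if v > 0.8 else "gray"   # desaturated pixels
    hue = h * 360.0
    if hue < 30 or hue >= 330:
        return "red"
    if hue < 90:
        return "yellow"
    if hue < 150:
        return "green"
    if hue < 270:
        return "blue"
    return "magenta"
```

Because the output alphabet is tiny, downstream matching can compare category labels directly instead of full-color distances, which is the source of the speedup the abstract describes.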

Collaboration


Dive into Jill D. Crisman's collaborations.

Top Co-Authors

Joseph Ayers, Northeastern University
Charles E. Thorpe, Carnegie Mellon University
Jon A. Webb, Carnegie Mellon University
Ibrahim Zeid, Northeastern University
Yue Du, Northeastern University
Jiansu Lai, Northeastern University