Publication


Featured research published by Marilyn Nashman.


international conference on robotics and automation | 1994

A discriminating feature tracker for vision-based autonomous driving

Henry Schneiderman; Marilyn Nashman

A new vision-based technique for autonomous driving is described. This approach explicitly addresses and compensates for two forms of uncertainty: uncertainty about changes in road direction and uncertainty in the measurements of the road derived from each image. Autonomous driving has been demonstrated on both local roads and highways at speeds up to 100 km/h. The algorithm has performed well in the presence of non-ideal road conditions including gaps in the lane markers, sharp curves, shadows, cracks in the pavement, and wet roads. It has also performed well in rain, at dusk, and in nighttime driving with headlights.
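
As a rough illustration of the idea (not the authors' implementation), the sketch below shows a one-dimensional Kalman-style update in Python, where a process-noise term stands in for uncertainty about changes in road direction and a measurement-noise term for uncertainty in each per-image road measurement; all names and noise values are assumed.

```python
# Minimal sketch (not the paper's code): a 1-D Kalman-style update that
# captures the two uncertainties the abstract names -- uncertainty in how the
# road direction changes between frames (process noise q) and uncertainty in
# each image measurement of the road (measurement noise r).

def update_lane_estimate(x, p, z, q=4.0, r=2.0):
    """x: current estimate of lane-marker position (pixels),
    p: its variance, z: new measurement from this frame."""
    # Predict: the road direction may have changed, so inflate the variance.
    p_pred = p + q
    # Correct: weight the new measurement by the relative confidences.
    k = p_pred / (p_pred + r)          # Kalman gain
    x_new = x + k * (z - x)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

if __name__ == "__main__":
    x, p = 320.0, 10.0                 # hypothetical initial state
    for z in [322.0, 325.0, 331.0]:    # noisy per-frame measurements
        x, p = update_lane_estimate(x, p, z)
        print(round(x, 1), round(p, 2))
```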


intelligent vehicles symposium | 1993

Real-time Visual Processing For Autonomous Driving

Marilyn Nashman; Henry Schneiderman

This paper describes a visual processing algorithm that supports autonomous road following. The algorithm requires that lane markings be present and attempts to track the lane markings on both lane boundaries. The algorithm has been used as part of a complete system to drive an autonomous vehicle, the High Mobility Multipurpose Wheeled Vehicle (HMMWV).


workshop on applications of computer vision | 1992

Visual processing for autonomous driving

Henry Schneiderman; Marilyn Nashman

Describes a visual processing algorithm that supports autonomous road following. The algorithm requires that lane markings be present and attempts to track the lane markings on both lane boundaries. There are three stages of computation: extracting edges, matching the extracted edge points to a geometric model of the road, and updating the geometric road model. All processing is confined to the 2-D image plane. No information about the motion of the vehicle is used. This algorithm has been implemented and tested using videotaped road scenes. It performs robustly for both highways and rural roads. The algorithm runs at a sampling rate of 15 Hz and has a worst-case latency of 139 milliseconds (ms). The algorithm is implemented under the NASA/NBS Standard Reference Model for Telerobotic Control System Architecture (NASREM) and runs on a dedicated vision processing engine and a VME-based microprocessor system.
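
The following Python sketch is a hypothetical rendering of the three stages named in the abstract, assuming a straight-line boundary model x = m*y + b in image coordinates; the edge threshold, matching gate, and synthetic test frame are illustrative and not taken from the paper.

```python
# Minimal sketch (illustrative, not the paper's implementation) of the three
# stages: extract edges, match edge points to a geometric boundary model,
# and update the model. All quantities stay in the 2-D image plane.
import numpy as np

def extract_edges(image, thresh=30.0):
    """Stage 1: horizontal-gradient edge points, returned as (row, col)."""
    grad = np.abs(np.diff(image.astype(float), axis=1))
    rows, cols = np.nonzero(grad > thresh)
    return np.stack([rows, cols], axis=1)

def match_to_model(edges, m, b, gate=5.0):
    """Stage 2: keep only edge points near the predicted boundary x = m*y + b."""
    predicted_cols = m * edges[:, 0] + b
    keep = np.abs(edges[:, 1] - predicted_cols) < gate
    return edges[keep]

def update_model(matched, m, b):
    """Stage 3: least-squares refit of the boundary; keep the prior if too few points."""
    if len(matched) < 2:
        return m, b
    A = np.stack([matched[:, 0], np.ones(len(matched))], axis=1)
    (m, b), *_ = np.linalg.lstsq(A, matched[:, 1], rcond=None)
    return m, b

# Usage on a synthetic frame containing one bright, slightly slanted stripe.
frame = np.zeros((240, 320))
for r in range(240):
    frame[r, 150 + r // 10] = 255.0
m, b = 0.1, 148.0                      # prior model from the previous frame
edges = extract_edges(frame)
m, b = update_model(match_to_model(edges, m, b), m, b)
print(m, b)
```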


1982 Technical Symposium East | 1982

Six-Dimensional Vision System

James S. Albus; Ernest W. Kent; Marilyn Nashman; P Mansbach; L. Palombo; Michael Shneier

There are six degrees of freedom that define the position and orientation of any object relative to a robot gripper. All six need to be determined for the robot to grasp the object in a uniquely specified manner. A robot vision system under development at the National Bureau of Standards is designed to measure all six of these degrees of freedom using two frames of video data taken sequentially from the same camera position. The system employs structured light techniques; in the first frame, the scene is illuminated by two parallel planes of light, and in the second frame by a point source of light.
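
A minimal sketch of the underlying structured-light geometry, assuming a pinhole camera at the origin and one known plane of light; the focal length, plane parameters, and pixel coordinates below are made-up values, and the full six-degree-of-freedom solution in the paper combines many such points across the two frames.

```python
# Illustrative sketch (not the NBS system): recover the 3-D position of a
# point seen on a structured-light plane by intersecting the camera ray
# through its pixel with the known plane of light, in camera coordinates.
import numpy as np

def ray_plane_point(pixel, focal, plane_n, plane_d):
    """pixel: (u, v) image coordinates; camera at origin looking down +z.
    The light plane satisfies plane_n . X = plane_d in camera coordinates."""
    u, v = pixel
    ray = np.array([u / focal, v / focal, 1.0])   # direction of the viewing ray
    t = plane_d / np.dot(plane_n, ray)            # scale where the ray meets the plane
    return t * ray                                # 3-D point in the camera frame

point = ray_plane_point((40.0, -25.0), focal=500.0,
                        plane_n=np.array([0.0, 0.7071, 0.7071]),
                        plane_d=1.0)
print(point)
```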


Sensor fusion and decentralized control in autonomous robotic systems. Conference | 1997

Unique sensor fusion system for coordinate-measuring machine tasks

Marilyn Nashman; Billibon Yoshimi; Tsai Hong Hong; William G. Rippey; Martin Herman

This paper describes a real-time hierarchical system that fuses data from vision and touch sensors to improve the performance of a coordinate measuring machine (CMM) used for dimensional inspection tasks. The system consists of sensory processing, world modeling, and task decomposition modules. It uses the strengths of each sensor -- the precision of the CMM scales and the analog touch probe, and the global information provided by the low-resolution camera -- to improve the speed and flexibility of the inspection task. In the experiment described, the vision module performs all computations in image coordinate space. The part's boundaries are extracted during an initialization process, and the probe's position is then continuously updated as it scans and measures the part surface. The system fuses the estimated probe velocity and distance to the part boundary in image coordinates with the estimated velocity and probe position provided by the CMM controller. The fused information provides feedback to the monitor controller as it guides the touch probe to scan the part. We also discuss integrating information from the vision system and the probe to autonomously collect data for 2-D to 3-D calibration, and work to register computer-aided design (CAD) models with images of parts in the workplace.
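
One simple way to picture the fusion step is a variance-weighted average of the two probe-position estimates, sketched below; this is an illustrative assumption rather than the system's actual filter, and the variances shown are invented.

```python
# Minimal sketch (assumed, not the paper's filter): variance-weighted fusion
# of two estimates of the probe position -- one from the low-resolution
# camera, one from the CMM controller -- so the more precise source dominates.
def fuse(est_vision, var_vision, est_cmm, var_cmm):
    w_v = 1.0 / var_vision
    w_c = 1.0 / var_cmm
    fused = (w_v * est_vision + w_c * est_cmm) / (w_v + w_c)
    fused_var = 1.0 / (w_v + w_c)
    return fused, fused_var

# The CMM scales are far more precise than the camera, so the fused value
# stays close to the CMM reading while the camera supplies coarse context.
print(fuse(est_vision=10.3, var_vision=0.25, est_cmm=10.012, var_cmm=0.0001))
```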


applied imagery pattern recognition workshop | 1994

Visual road following without 3D reconstruction

Martin Herman; Daniel Raviv; Henry Schneiderman; Marilyn Nashman

The traditional approach to visual road following involves reconstructing a 3D model of the road. The model is in a world or vehicle-centered coordinate system, and it is symbolic, iconic, or a combination of both. Road-following commands (as well as other commands, e.g., obstacle avoidance) are then generated from this 3D model. Here we discuss an alternative approach in which a minimal road model is generated. The model contains only task-relevant information and a minimum of vision processing is performed to extract this information in the form of visual cues represented in the 2D image coordinate system. This approach leads to rapid and continuous update of the road model from the visual data. It results in inexpensive, fast, and robust computations. Road following is achieved by servoing on the visual cues in the 2D model. This approach results in a tight coupling of perception and action. In this paper, two specific examples of road following that use this approach are presented. In the first example, we show that road-following commands can be generated from visual cues consisting of the projection into the image of the tangent point on the edge of the road, along with the optical flow of this point. Using this cue, the resulting servo loop is very simple and fast. In the second example, we show that lane markings can be robustly tracked in real time while confining all processing to the 2D image plane. Neither knowledge of vehicle motion nor a calibrated camera is required. This system has been used to drive a vehicle up to 80 km/hr under various road conditions. The algorithm runs at a 15 Hz update rate.
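
A hedged sketch of the image-plane servoing idea follows: steer directly on the tracked cue's image position and optical flow, with no 3-D reconstruction and no camera calibration. The gains, sign convention, and signal names are hypothetical.

```python
# Illustrative sketch (hypothetical gains and signals) of a 2-D image-plane
# servo of the kind the paper advocates: the command depends only on where
# the road cue appears in the image and how fast it is moving there.
def steering_command(cue_x, cue_flow_x, desired_x=0.0, kp=0.004, kd=0.02):
    """cue_x: image x-coordinate of the tracked cue (pixels from centre),
    cue_flow_x: its horizontal optical flow (pixels per frame)."""
    error = cue_x - desired_x
    return -(kp * error + kd * cue_flow_x)   # steering rate, arbitrary units

print(steering_command(cue_x=35.0, cue_flow_x=4.0))
```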


international symposium on intelligent control | 1988

Low data rate remote vehicle driving

Martin Herman; Karen Chaconas; Marilyn Nashman; Tsai-Hong Hong

Several algorithms that have been implemented as possible candidates for a hybrid video compression system to be used for remote driving of a ground vehicle are described. The algorithms have been implemented on the Pipelined Image Processing Engine (PIPE), a real-time image processing machine. The PIPE has been integrated with a remote control vehicle system, and the algorithms were evaluated by means of real-world remote driving experiments. These experiments have shown that remote vehicle driving is difficult enough without degrading the imagery through compression algorithms; the degraded imagery makes driving even more difficult. The following difficulties were found in driving in cross-country terrain using either the full video or the compressed video: global relative vehicle location is very difficult for the driver to obtain; the orientation of the local ground surface is very difficult to obtain; ditches, gullies, and other obstacles are difficult to distinguish; and the range of objects from the vehicle is difficult to determine. It appears that performing compression by transmitting images at a rate of, at most, a few per second and then providing a realistic video simulation to the operator may be one of the most effective ways of performing video compression.
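
As a back-of-envelope illustration of why frame-rate reduction dominates the bandwidth budget, the snippet below compares full-rate video with a few frames per second for an assumed monochrome frame size; the numbers are not from the paper.

```python
# Rough sketch with assumed image dimensions and bit depth: the raw data
# rate scales linearly with frame rate, so sending only a few frames per
# second already gives a large reduction before any image coding is applied.
width, height, bits_per_pixel = 512, 480, 8    # assumed monochrome frame
bits_per_frame = width * height * bits_per_pixel
for fps in (30, 2):                            # full-rate video vs. a few frames/s
    print(f"{fps:2d} frames/s -> {fps * bits_per_frame / 1e6:.1f} Mbit/s")
```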


Three-Dimensional Machine Perception | 1981

Real-Time Three-Dimensional Vision For Parts Acquisition

James S. Albus; R. Haar; Marilyn Nashman; Michael Shneier; S. Nagalia

The National Bureau of Standards is developing a vision system for use in an automated factory environment. The emphasis of the project is on the real-time acquisition of three-dimensional parts using visual feedback. The system employs multiple light sources in conjunction with object models to establish the position and orientation of an object in the camera's field of view. A flood flash enables shape information to be obtained from an image, while a plane of light can be used to find the three-dimensional positions of points on the object. Because there are only a small number of object types and the objects all have pre-defined nominal locations, a model can be used to predict how the scene should look from a given viewpoint using a particular light source. This prediction can be compared with the actual image, and the differences used to establish position information. Models are expected to be particularly useful in reducing the number of views of an object necessary to calculate its three-dimensional position.
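
The prediction-and-comparison step can be pictured as aligning a model-predicted silhouette with the observed image; the sketch below does this for a one-dimensional slice by cross-correlation. It is illustrative only and not the NBS system's actual method.

```python
# Toy sketch: estimate how far a part is from its nominal location by finding
# the shift that best aligns a model-predicted silhouette with the observed
# image, in the spirit of the prediction/comparison step described above.
import numpy as np

def estimate_shift(predicted, observed):
    """1-D slice example: return the shift (pixels) that best aligns them."""
    scores = [np.sum(predicted * np.roll(observed, -s)) for s in range(-10, 11)]
    return range(-10, 11)[int(np.argmax(scores))]

template = np.zeros(64); template[20:30] = 1.0     # predicted part silhouette
image = np.roll(template, 4)                       # part is 4 pixels off nominal
print(estimate_shift(template, image))             # -> 4
```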


applied imagery pattern recognition workshop | 1997

Real-time visual processing in support of autonomous driving

Marilyn Nashman; Henry Schneiderman

Autonomous driving provides an effective way to address traffic concerns such as safety and congestion. There has been increasing interest in the development of autonomous driving in recent years. Interest has included high-speed driving on highways, urban driving, and navigation through less structured off-road environments. The primary challenge in autonomous driving is developing perception techniques that are reliable under the extreme variability of outdoor conditions in any of these environments. Roads vary in appearance. Some are smooth and well marked, while others have cracks and potholes or are unmarked. Shadows, glare, varying illumination, dirt or foreign matter, other vehicles, rain, and snow also affect road appearance. This paper describes a visual processing algorithm that supports autonomous driving. The algorithm requires that lane markings be present and attempts to track the lane markings on each of two lane boundaries in the lane of travel. There are three stages of visual processing computation: extracting edges, determining which edges correspond to lane markers, and updating geometric models of the lane markers. A fourth stage computes a steering command for the vehicle based on the updated road model. All processing is confined to the 2-D image plane. No information about the motion of the vehicle is used. This algorithm has been used as part of a complete system to drive an autonomous vehicle, a high mobility multipurpose wheeled vehicle (HMMWV). Autonomous driving has been demonstrated on both local roads and highways at speeds up to 100 kilometers per hour (km/h). The algorithm has performed well in the presence of non-ideal road conditions including gaps in the lane markers, sharp curves, shadows, cracks in the pavement, wet roads, rain, dusk, and nighttime driving. The algorithm runs at a sampling rate of 15 Hz and has a worst case processing delay time of 150 milliseconds. Processing is implemented under the NASA/NBS Standard Reference Model for Telerobotic Control System Architecture (NASREM) architecture and runs on a dedicated image processing engine and a VME-based microprocessor system.
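
As a toy illustration of the fourth stage, the sketch below converts the left and right lane-boundary positions at a fixed look-ahead row into a steering correction; the image geometry, gain, and sign convention are all assumed.

```python
# Minimal sketch (hypothetical geometry): derive a steering correction from
# the updated lane-boundary models by checking where the lane centre falls
# relative to the image centre at a chosen look-ahead row.
def steer_from_lane(left_col, right_col, image_centre=160.0, gain=0.01):
    lane_centre = 0.5 * (left_col + right_col)
    return -gain * (lane_centre - image_centre)   # sign convention assumed

# Boundary columns at the look-ahead row, taken from the updated models.
print(steer_from_lane(left_col=120.0, right_col=210.0))
```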


Sensor Fusion III: 3D Perception and Recognition | 1991

Three-dimensional position determination from motion

Marilyn Nashman; Karen Chaconas

The analysis of sequences of images over time provides a means of extracting meaningful information that is used to compute and track the three-dimensional position of a moving object. This paper describes an application in which sensory feedback based on time-varying camera images is used to provide position information to a manipulator control system. The system operates in a real-time environment and provides updated information at a rate that permits intelligent trajectory planning by the control system.
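
A minimal sketch of the kind of time-varying feedback involved, assuming a hypothetical 15 Hz frame rate: successive position measurements yield a velocity estimate and a one-frame-ahead prediction that a control system could use for trajectory planning. Names and numbers are illustrative only.

```python
# Illustrative sketch (assumed frame rate and data): turn per-frame position
# measurements of a moving object into a velocity estimate and a short-term
# prediction for the manipulator control system.
def track(positions, dt=1.0 / 15.0):
    """positions: list of (x, y, z) measurements in metres, one per frame."""
    (x0, y0, z0), (x1, y1, z1) = positions[-2], positions[-1]
    vel = ((x1 - x0) / dt, (y1 - y0) / dt, (z1 - z0) / dt)
    pred = (x1 + vel[0] * dt, y1 + vel[1] * dt, z1 + vel[2] * dt)
    return vel, pred

print(track([(0.10, 0.00, 0.50), (0.12, 0.01, 0.50)]))
```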

Collaboration


Dive into Marilyn Nashman's collaborations.

Top Co-Authors

Martin Herman, National Institute of Standards and Technology
Karen Chaconas, National Institute of Standards and Technology
Tsai Hong Hong, National Institute of Standards and Technology
James S. Albus, National Institute of Standards and Technology
Billibon Yoshimi, National Institute of Standards and Technology
David Coombs, National Institute of Standards and Technology
Ernest W. Kent, National Institute of Standards and Technology
Karl Murphy, National Institute of Standards and Technology
Steven Legowik, National Institute of Standards and Technology