
Publication


Featured research published by Michael O. Shneier.


International Conference on Intelligent Transportation Systems | 2006

Color model-based real-time learning for road following

Ceryen Tan; Tsai Hong; Tommy Chang; Michael O. Shneier

Road following is a skill vital to the development and deployment of autonomous vehicles. Over the past few decades, a large number of road-following computer vision systems have been developed. All of these systems have limitations in their capabilities, arising from assumptions of idealized conditions: they depend on highly structured roads, road homogeneity, simplified road shapes, and idealized lighting, and so are effective only in specialized cases in the real world. This paper proposes a vision system that deals with many of these limitations, accurately segmenting unstructured, nonhomogeneous roads of arbitrary shape under various lighting conditions. The system uses color classification and learning to construct and use multiple road and background models. Color models are constructed on a frame-by-frame basis and used to segment each color image into road and background by estimating the probability that a pixel belongs to a particular model. The models are constructed and learned independently of road shape, allowing the segmentation of arbitrary road shapes. Temporal fusion is used to stabilize the results. Preliminary testing demonstrates the system's effectiveness on roads not handled by previous systems.
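The abstract does not include an implementation; the sketch below illustrates the core mechanism with a single Gaussian color model per class (the paper maintains multiple road and background models, rebuilt each frame) and simple exponential temporal fusion. All names and constants are illustrative assumptions.

```python
import numpy as np

class GaussianColorModel:
    """One Gaussian color model; the paper builds several such models
    per class, frame by frame, from sampled image regions."""
    def __init__(self, pixels):
        # pixels: (N, 3) RGB samples from a training region
        self.mean = pixels.mean(axis=0)
        cov = np.cov(pixels, rowvar=False) + 1e-6 * np.eye(3)
        self.inv_cov = np.linalg.inv(cov)
        self.norm = 1.0 / np.sqrt((2 * np.pi) ** 3 * np.linalg.det(cov))

    def likelihood(self, image):
        # image: (H, W, 3) float array; returns per-pixel likelihood (H, W)
        d = image.reshape(-1, 3) - self.mean
        m = np.einsum('ni,ij,nj->n', d, self.inv_cov, d)  # squared Mahalanobis
        return (self.norm * np.exp(-0.5 * m)).reshape(image.shape[:2])

def segment(image, road, background, prev_prob=None, alpha=0.7):
    """Per-pixel road probability with simple temporal fusion."""
    p_road, p_bg = road.likelihood(image), background.likelihood(image)
    prob = p_road / (p_road + p_bg + 1e-12)
    if prev_prob is not None:
        prob = alpha * prob + (1 - alpha) * prev_prob  # stabilize across frames
    return prob > 0.5, prob
```

In practice the road model might be seeded from a region directly ahead of the vehicle; because the models ignore shape entirely, the resulting mask can follow roads of arbitrary geometry.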


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1985

Describing a Robot's Workspace Using a Sequence of Views from a Moving Camera

Tsai-Hong Hong; Michael O. Shneier

This correspondence describes a method of building and maintaining a spatial representation for the workspace of a robot, using a sensor that moves about in the world. From the known camera position at which an image is obtained, and the two-dimensional silhouettes in the image, a series of cones is projected to describe the possible positions of the objects in the space. When an object is seen from several viewpoints, the intersections of the cones constrain the position and size of the object. After several views have been processed, the representation of the object begins to resemble its true shape. At all times, the spatial representation contains the best guess at the true situation in the world, with uncertainties in position and shape explicitly represented. An octree is used as the data structure for the representation. It not only provides a relatively compact representation, but also allows fast access to information and enables large parts of the workspace to be ignored. The purpose of constructing this representation is not so much to recognize objects as to describe the volumes in the workspace that are occupied and those that are empty. This enables trajectory planning to be carried out, and also provides a means of spatially indexing objects without needing to represent the objects at an extremely fine resolution. The spatial representation is one part of a more complex representation of the workspace used by the sensory system of a robot manipulator in understanding its environment.
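As a rough illustration of the volume-intersection idea, the sketch below carves a flat array of voxel centers; the paper instead stores occupancy in an octree, which compresses large empty regions and speeds access. The projection matrices and silhouette masks are assumed given from calibration and prior segmentation.

```python
import numpy as np

def carve(voxels, views):
    """Intersection-of-cones occupancy: a voxel stays occupied only if it
    projects inside the object's silhouette in every view so far.
    voxels: (N, 3) voxel-center coordinates in the world frame
    views:  list of (P, sil) pairs, with P a 3x4 camera projection matrix
            (known camera position) and sil a boolean (H, W) silhouette mask."""
    occupied = np.ones(len(voxels), dtype=bool)
    homo = np.hstack([voxels, np.ones((len(voxels), 1))])  # homogeneous coords
    for P, sil in views:
        h, w = sil.shape
        proj = homo @ P.T                      # project all voxel centers
        inside = np.zeros(len(voxels), dtype=bool)
        front = proj[:, 2] > 1e-9              # in front of the camera
        u = np.round(proj[front, 0] / proj[front, 2]).astype(int)
        v = np.round(proj[front, 1] / proj[front, 2]).astype(int)
        ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        idx = np.flatnonzero(front)[ok]
        inside[idx] = sil[v[ok], u[ok]]
        occupied &= inside                     # carve away voxels outside the cone
    return occupied
```

Each new view can only remove voxels, so the estimate converges from the outside in toward the true shape, exactly the behavior the abstract describes.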


Annual International Symposium on Aerospace/Defense Sensing, Simulation, and Controls | 2002

Road detection and tracking for autonomous mobile robots

Tsai Hong Hong; Christopher Rasmussen; Tommy Chang; Michael O. Shneier

As part of the Army's Demo III project, a sensor-based system has been developed to identify roads and to enable a mobile robot to drive along them. A ladar sensor, which produces range images, and a color camera are used in conjunction to locate the road surface and its boundaries. Sensing is used to constantly update an internal world model of the road surface. The world model is used to predict the future position of the road and to focus the attention of the sensors on the relevant regions in their respective images. The world model also determines the most suitable algorithm for locating and tracking road features in the images, based on the current task and sensing information. The planner uses information from the world model to determine the best path for the vehicle along the road. Several different algorithms have been developed and tested on a diverse set of road sequences. The road types include some paved roads with lanes, but most of the sequences are of unpaved roads, including dirt and gravel roads. The algorithms compute various features of the road images, including smoothness in the world model map and in the range domain, and color features and texture in the color domain. Performance in road detection and tracking is described, and examples are shown of the system in action.
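The abstract names the cue families (smoothness in the range domain; color and texture in the color domain) but not how they are combined. The toy score below, including its weights and scale constants, is purely an assumption for illustration.

```python
import numpy as np

def road_score(elevation_patch, rgb_patch, road_color, w_smooth=0.5, w_color=0.5):
    """Toy fusion of range-domain smoothness and color-domain similarity.
    Weights and scale constants are illustrative assumptions."""
    smooth = np.exp(-np.var(elevation_patch))           # flat ground scores high
    dist = np.linalg.norm(rgb_patch.reshape(-1, 3).mean(axis=0) - road_color)
    color = np.exp(-dist / 50.0)                        # near the road color scores high
    return w_smooth * smooth + w_color * color

# Example: a flat, road-colored patch scores close to 1.
patch_z = np.full((8, 8), 0.02)                         # elevations in meters
patch_rgb = np.full((8, 8, 3), 128.0)                   # gray gravel
print(road_score(patch_z, patch_rgb, np.array([130.0, 128.0, 125.0])))
```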


Autonomous Robots | 2008

Learning traversability models for autonomous mobile vehicles

Michael O. Shneier; Tommy Chang; Tsai Hong; William P. Shackleford; Roger V. Bostelman; James S. Albus

Autonomous mobile robots need to adapt their behavior to the terrain over which they drive, and to predict the traversability of the terrain so that they can effectively plan their paths. Such robots usually make use of a set of sensors to investigate the terrain around them and build up an internal representation that enables them to navigate. This paper addresses the question of how to use sensor data to learn properties of the environment and use this knowledge to predict which regions of the environment are traversable. The approach makes use of sensed information from range sensors (stereo or ladar), color cameras, and the vehicle’s navigation sensors. Models of terrain regions are learned from subsets of pixels that are selected by projection into a local occupancy grid. The models include color and texture as well as traversability information obtained from an analysis of the range data associated with the pixels. The models are learned without supervision, deriving their properties from the geometry and the appearance of the scene. The models are used to classify color images and assign traversability costs to regions. The classification does not use the range or position information, but only color images. Traversability determined during the model-building phase is stored in the models. This enables classification of regions beyond the range of stereo or ladar using the information in the color images. The paper describes how the models are constructed and maintained, how they are used to classify image regions, and how the system adapts to changing environments. Examples are shown from the implementation of this algorithm in the DARPA Learning Applied to Ground Robots (LAGR) program, and an evaluation of the algorithm against human-provided ground truth is presented.
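A minimal sketch of the unsupervised model-building loop described here: color prototypes paired with traversability costs learned from range analysis, then reused to classify color-only pixels beyond sensor range. The matching rule, thresholds, and field names are assumptions, not the paper's.

```python
import numpy as np

class TerrainModel:
    """One learned terrain class: a color prototype plus a traversability
    cost derived from range data at model-building time."""
    def __init__(self, color, cost):
        self.color = np.asarray(color, dtype=float)  # mean RGB
        self.cost = float(cost)                      # 0 = easy, 1 = impassable
        self.count = 1

def learn(models, pixel_color, range_cost, match_dist=30.0):
    """Unsupervised update: match the pixel to the nearest model or spawn
    a new one; running means keep the models adapted to the scene."""
    if models:
        d = [np.linalg.norm(m.color - pixel_color) for m in models]
        i = int(np.argmin(d))
        if d[i] < match_dist:
            m = models[i]
            m.count += 1
            m.color += (pixel_color - m.color) / m.count
            m.cost += (range_cost - m.cost) / m.count
            return
    models.append(TerrainModel(pixel_color, range_cost))

def classify(models, pixel_color):
    """Color-only classification: beyond ladar/stereo range, the stored
    cost substitutes for direct geometry."""
    i = int(np.argmin([np.linalg.norm(m.color - pixel_color) for m in models]))
    return models[i].cost

models = []
learn(models, np.array([120., 110., 90.]), range_cost=0.1)  # flat dirt
learn(models, np.array([40., 90., 35.]), range_cost=0.9)    # tall grass
print(classify(models, np.array([118., 112., 92.])))        # ~0.1: traversable
```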


Computer Vision, Graphics, and Image Processing | 1986

Model-based strategies for high-level robot vision

Michael O. Shneier; Ronald Lumia; Ernest W. Kent

The higher levels of a sensory system for a robot manipulator are described. The sensory system constructs and maintains a representation of the world in a form suitable for fast responses to questions posed by other robot subsystems. This is achieved by separating the sensing processes from the descriptive processes, allowing questions to be answered without waiting for the sensors to respond. Four groups of processes are described. Predictive processes (world modellers) are needed to set up initial expectations about the world and to generate predictions about sensor responses. Processes are also needed to analyze the sensory input. They make use of the predictions in analyzing the world. A third essential function is matching, which involves comparing the sensed data with the expectations, and provides errors that help to servo the models to the world. Finally, the descriptive process constructs and maintains the internal representation of the world. It constructs the representation from the sensed information and the expectations, and contains at all times everything known about the world. The sensory system is responsive to changes in the world, but can also deal with interruptions in sensing, and can supply information that may not be available by sensing the world directly.
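The four process groups form a predict-match-update cycle. The sketch below shrinks that cycle to one dimension, with a simple alpha filter standing in for the full world model; it is a schematic of the control flow only, not the paper's system.

```python
class WorldModel:
    """Minimal predict/match/update loop in one dimension, illustrating
    the four process groups: prediction, sensory analysis, matching,
    and description (maintaining the representation)."""
    def __init__(self, x0=0.0, v0=0.0, alpha=0.3):
        self.x, self.v, self.alpha = x0, v0, alpha

    def predict(self, dt=1.0):
        # Predictive process: expectation about the next observation.
        return self.x + self.v * dt

    def update(self, measurement, dt=1.0):
        expected = self.predict(dt)
        error = measurement - expected            # matching: sensed vs. expected
        self.x = expected + self.alpha * error    # description: best current guess
        self.v += self.alpha * error / dt         # servo the model to the world
        return error

model = WorldModel()
for z in [1.0, 2.1, 2.9, 4.2]:   # simulated sensor readings
    model.update(z)
print(round(model.x, 2))          # the representation answers queries
                                  # without waiting for new sensing
```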


International Conference on Robotics and Automation | 2002

Feature detection and tracking for mobile robots using a combination of ladar and color images

Tsai-Hong Hong; Tommy Chang; Christopher Rasmussen; Michael O. Shneier

In an outdoor, off-road mobile robotics environment, it is important to identify objects that can affect the vehicle's ability to traverse its planned path, and to determine their three-dimensional characteristics. In this paper, a combination of three elements is used to accomplish this task. An imaging ladar collects range images of the scene. A color camera, whose position relative to the ladar is known, is used to gather color images. Information extracted from these sensors is used to build a world model, a representation of the current state of the world. The world model is used actively in the sensing to predict what should be visible in each of the sensors during the next imaging cycle. The paper explains how the combined use of these three types of information leads to a robust understanding of the local environment surrounding the robotic vehicle for two important tasks: puddle/pond avoidance and road sign detection.
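Registering the two sensors comes down to projecting each ladar return into the color image using the known relative pose. A standard pinhole-projection sketch, with made-up calibration values:

```python
import numpy as np

def ladar_to_pixel(point, R, t, K):
    """Project a 3-D ladar return into the color image, given the
    camera-from-ladar rotation R, translation t, and camera intrinsics K
    (the abstract states the relative pose is known from calibration).
    Returns (u, v) pixel coordinates, or None if behind the camera."""
    p_cam = R @ np.asarray(point) + t      # ladar frame -> camera frame
    if p_cam[2] <= 0:
        return None
    uvw = K @ p_cam                        # pinhole projection
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

# Illustrative calibration: 500-pixel focal length, ladar 10 cm from camera.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
R, t = np.eye(3), np.array([0.1, 0.0, 0.0])
print(ladar_to_pixel([1.0, 0.5, 5.0], R, t, K))   # pixel where this return lands
```

Once each range point has a pixel, color and range evidence can be attached to the same world-model cell, which is what enables cases like puddles (ladar dropouts plus sky-colored reflections) to be recognized.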


Proceedings of SPIE | 2002

Hierarchical world model for an autonomous scout vehicle

Tsai Hong Hong; Stephen B. Balakirsky; Elena R. Messina; Tommy Chang; Michael O. Shneier

This paper describes a world model that combines a variety of sensed inputs and a priori information and is used to generate on-road and off-road autonomous driving behaviors. The system is designed in accordance with the principles of the 4D/RCS architecture. The world model is hierarchical, with the resolution and scope at each level designed to minimize computational resource requirements and to support planning functions for that level of the control hierarchy. The sensory processing system that populates the world model fuses inputs from multiple sensors and extracts feature information, such as terrain elevation, cover, road edges, and obstacles. Feature information from digital maps, such as road networks, elevation, and hydrology, is also incorporated into this rich world model. The various features are maintained in different layers that are registered together to provide maximum flexibility in generating vehicle plans depending on mission requirements. The paper includes discussion of how the maps are built and how the objects and features of the world are represented. Functions for maintaining the world model are discussed. The world model described herein is being developed for the Army Research Laboratory's Demo III Autonomous Scout Vehicle experiment.
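A minimal sketch of the hierarchy's organizing rule: each level trades resolution for scope, and feature layers are registered on the same grid. The level sizes and layer names below are illustrative, not the Demo III values.

```python
import numpy as np

class MapLevel:
    """One level of a hierarchical world model: coarser resolution but
    wider scope higher up. The paper's layers include terrain elevation,
    cover, road edges, obstacles, and digital-map features."""
    def __init__(self, cell_size_m, extent_m, layers=('elevation', 'obstacle')):
        n = int(extent_m / cell_size_m)
        self.cell_size = cell_size_m
        self.layers = {name: np.zeros((n, n)) for name in layers}

    def cell(self, x, y):
        # Register vehicle-relative world coordinates to this level's grid.
        n = next(iter(self.layers.values())).shape[0]
        return int(x / self.cell_size) + n // 2, int(y / self.cell_size) + n // 2

# Three levels: fine/near for servoing, coarse/far for route planning.
hierarchy = [MapLevel(0.2, 40), MapLevel(1.0, 200), MapLevel(10.0, 2000)]
for level in hierarchy:
    i, j = level.cell(5.0, -3.0)
    level.layers['obstacle'][i, j] = 1.0   # same world point at each resolution
```

Keeping the cell count roughly constant per level is what bounds the computational cost while still giving the top-level planner kilometers of scope.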


Unmanned Ground Vehicle Technology Conference | 2003

Repository of sensor data for autonomous driving research

Michael O. Shneier; Tommy Chang; Tsai Hong Hong; Geraldine S. Cheok; Harry A. Scott; Steven Legowik; Alan M. Lytle

We describe a project to collect and disseminate sensor data for autonomous mobility research. Our goals are to provide data of known accuracy and precision to researchers and developers, so that algorithms can be developed using realistically difficult sensory data. This enables quantitative comparisons of algorithms by running them on the same data, allows groups that lack equipment to participate in mobility research, and speeds technology transfer by providing industry with metrics for comparing algorithm performance. Data are collected using the NIST High Mobility Multi-purpose Wheeled Vehicle (HMMWV), an instrumented vehicle that can be driven manually or autonomously, both on roads and off. The vehicle can mount multiple sensors and provides highly accurate position and orientation information as data are collected. The sensors on the HMMWV include an imaging ladar, a color camera, color stereo, an inertial navigation system (INS), and the Global Positioning System (GPS). Also available are a high-resolution scanning ladar, a line-scan ladar, and a multi-camera panoramic sensor. The sensors are characterized by collecting data from calibrated courses containing known objects. For some of the data, ground truth will be collected from site surveys. Access to the data is through a web-based query interface. Additional information stored with the sensor data includes navigation and timing data, sensor-to-vehicle coordinate transformations for each sensor, and sensor calibration information. Several sets of data have already been collected, and the web query interface has been developed. Data collection is an ongoing process, and where appropriate, NIST will work with other groups to collect data for specific applications using third-party sensors.
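Using the stored metadata amounts to chaining the per-sensor calibration transform with the vehicle pose logged at collection time. A sketch with placeholder transforms (not NIST calibration data):

```python
import numpy as np

def to_world(points, sensor_to_vehicle, vehicle_to_world):
    """Map (N, 3) sensor-frame points to world coordinates by chaining
    the stored sensor-to-vehicle calibration with the vehicle pose,
    both expressed as 4x4 homogeneous transforms."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (homo @ sensor_to_vehicle.T @ vehicle_to_world.T)[:, :3]

# Illustrative values: ladar mounted 1.5 m above the vehicle origin,
# vehicle 100 m east of the site datum.
sensor_to_vehicle = np.eye(4); sensor_to_vehicle[2, 3] = 1.5
vehicle_to_world = np.eye(4); vehicle_to_world[0, 3] = 100.0
print(to_world(np.array([[2.0, 0.0, 0.0]]), sensor_to_vehicle, vehicle_to_world))
```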


Journal of Field Robotics | 2006

Learning in a hierarchical control system: 4D/RCS in the DARPA LAGR program

James S. Albus; Roger V. Bostelman; Tommy Chang; Tsai Hong Hong; William P. Shackleford; Michael O. Shneier

The Defense Advanced Research Projects Agency (DARPA) Learning Applied to Ground Robots (LAGR) program aims to develop algorithms for autonomous vehicle navigation that learn how to operate in complex terrain. Over many years, the National Institute of Standards and Technology (NIST) has developed a reference model control system architecture called 4D/RCS that has been applied to many kinds of robot control, including autonomous vehicle control. For the LAGR program, NIST has embedded learning into a 4D/RCS controller to enable the small robot used in the program to learn to navigate through a range of terrain types. The vehicle learns in several ways, including learning by example, learning by experience, and learning how to optimize traversal. Learning takes place in the sensory processing, world modeling, and behavior generation parts of the control system. The 4D/RCS architecture is explained in the paper, its application to LAGR is described, and the learning algorithms are discussed. Results are shown of the performance of the NIST control system on independently conducted tests. Further work on the system and its learning capabilities is discussed.
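As a toy illustration of "learning by experience" in this setting, the fragment below adjusts a cost-map cell after each traversal outcome; the update rule and constants are assumptions, not the NIST implementation.

```python
# Cells the robot crosses without trouble get cheaper for the planner;
# cells where it slips or bumps get dearer. Learning rate is illustrative.
def update_cost(cost_map, cell, succeeded, lr=0.2):
    target = 0.0 if succeeded else 1.0
    i, j = cell
    cost_map[i][j] += lr * (target - cost_map[i][j])

cost_map = [[0.5] * 4 for _ in range(4)]          # unknown terrain starts mid-cost
update_cost(cost_map, (1, 2), succeeded=True)     # smooth traversal
update_cost(cost_map, (3, 0), succeeded=False)    # wheel slip / bumper hit
print(cost_map[1][2], cost_map[3][0])             # 0.4 and 0.6
```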


Journal of Field Robotics | 2007

Applying SCORE to field-based performance evaluations of soldier worn sensor technologies

Craig I. Schlenoff; Michelle Potts Steves; Brian A. Weiss; Michael O. Shneier; Ann M. Virts

Soldiers are often asked to perform missions that last many hours and are extremely stressful. After a mission is complete, the soldiers are typically asked to provide a report describing the most important things that happened during the mission. Due to the various stresses associated with military missions, there are undoubtedly many instances in which important information is missed or not reported and, therefore, not available for use when planning future missions. The ASSIST (Advanced Soldier Sensor Information System and Sensors Technology) program is addressing this challenge by instrumenting soldiers with sensors that they can wear directly on their uniforms. During the mission, the sensors continuously record what is going on around the soldier. With this information, soldiers are able to give more accurate reports without relying solely on their memory. In order for systems like this (often termed autonomous or intelligent systems) to be successful, they must be comprehensively and quantitatively evaluated to ensure that they will function appropriately and as expected in a wartime environment. The primary contribution of this paper is to introduce and define a framework and approach to performance evaluation called SCORE (System, Component, and Operationally Relevant Evaluation) and describe the results of applying it to evaluate the ASSIST technology. As the name implies, SCORE is built around the premise that, in order to get a true picture of how a system performs in the field, it must be evaluated at the component level, the system level, and in operationally relevant environments. The SCORE framework provides proven techniques to aid in the performance evaluation of many types of intelligent systems. To date, SCORE has only been applied to technologies under development (formative evaluation), but the authors believe that this approach would lend itself equally well to the evaluation of technologies ready to be fielded (summative evaluation).

Collaboration


Dive into Michael O. Shneier's collaborations.

Top Co-Authors

Tommy Chang, National Institute of Standards and Technology
Tsai Hong Hong, National Institute of Standards and Technology
William P. Shackleford, National Institute of Standards and Technology
Roger V. Bostelman, National Institute of Standards and Technology
Tsai Hong, National Institute of Standards and Technology
Craig I. Schlenoff, National Institute of Standards and Technology
Ernest W. Kent, National Institute of Standards and Technology
James S. Albus, National Institute of Standards and Technology
Geraldine S. Cheok, National Institute of Standards and Technology
Harry A. Scott, National Institute of Standards and Technology