
Publication


Featured research published by Tommy Chang.


International Conference on Intelligent Transportation Systems | 2006

Color model-based real-time learning for road following

Ceryen Tan; Tsai Hong; Tommy Chang; Michael O. Shneier

Road following is a skill vital to the development and deployment of autonomous vehicles. Over the past few decades, a large number of road-following computer vision systems have been developed. All of these systems have limitations in their capabilities, arising from assumptions of idealized conditions: they depend on highly structured roads, road homogeneity, simplified road shapes, and idealized lighting conditions, and in the real world they are effective only in specialized cases. This paper proposes a vision system that deals with many of these limitations, accurately segmenting unstructured, nonhomogeneous roads of arbitrary shape under various lighting conditions. The system uses color classification and learning to construct and use multiple road and background models. Color models are constructed on a frame-by-frame basis and used to segment each color image into road and background by estimating the probability that a pixel belongs to a particular model. The models are constructed and learned independently of road shape, allowing the segmentation of arbitrary road shapes. Temporal fusion is used to stabilize the results. Preliminary testing demonstrates the system's effectiveness on roads not handled by previous systems.
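The per-pixel decision this abstract describes can be illustrated with a minimal sketch, assuming a single Gaussian color model per class and pre-selected road and background training pixels (the paper maintains multiple models per class and relearns them frame by frame):

    import numpy as np

    def fit_color_model(pixels):
        # pixels: (N, 3) float array of RGB samples for one class
        mean = pixels.mean(axis=0)
        cov = np.cov(pixels, rowvar=False) + 1e-6 * np.eye(3)  # regularized
        return mean, np.linalg.inv(cov), np.linalg.det(cov)

    def log_likelihood(image, model):
        # per-pixel Gaussian log-likelihood under one color model
        mean, cov_inv, cov_det = model
        diff = image.reshape(-1, 3).astype(np.float64) - mean
        maha = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
        return -0.5 * (maha + np.log(cov_det) + 3 * np.log(2 * np.pi))

    def segment_road(image, road_model, background_model):
        # label a pixel road when the road model explains it better
        road_ll = log_likelihood(image, road_model)
        bg_ll = log_likelihood(image, background_model)
        return (road_ll > bg_ll).reshape(image.shape[:2])

Refitting the models on each frame, as the paper does, lets the classifier track gradual changes in road appearance and lighting.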


Annual International Symposium on Aerospace/Defense Sensing, Simulation, and Controls | 2002

Road detection and tracking for autonomous mobile robots

Tsai Hong Hong; Christopher Rasmussen; Tommy Chang; Michael O. Shneier

As part of the Army's Demo III project, a sensor-based system has been developed to identify roads and to enable a mobile robot to drive along them. A ladar sensor, which produces range images, and a color camera are used in conjunction to locate the road surface and its boundaries. Sensing is used to constantly update an internal world model of the road surface. The world model is used to predict the future position of the road and to focus the attention of the sensors on the relevant regions in their respective images. The world model also determines the most suitable algorithm for locating and tracking road features in the images based on the current task and sensing information. The planner uses information from the world model to determine the best path for the vehicle along the road. Several different algorithms have been developed and tested on a diverse set of road sequences. The road types include some paved roads with lanes, but most of the sequences are of unpaved roads, including dirt and gravel roads. The algorithms compute various features of the road images, including smoothness in the world model map and in the range domain, and color features and texture in the color domain. Performance in road detection and tracking is described, and examples are shown of the system in action.
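A minimal sketch of the prediction step, assuming a pinhole camera and a known 4x4 world-to-camera transform (both are placeholders; the paper's world model and sensor interfaces are richer):

    import numpy as np

    def project_points(points_cam, fx, fy, cx, cy):
        # pinhole projection of 3-D points in the camera frame to pixels
        z = points_cam[:, 2]
        return np.stack([fx * points_cam[:, 0] / z + cx,
                         fy * points_cam[:, 1] / z + cy], axis=1)

    def road_roi(road_points_world, T_cam_from_world, fx, fy, cx, cy, margin=20):
        # project the world model's predicted road points into the next
        # image and return a bounding box to focus sensor processing on
        h = np.hstack([road_points_world, np.ones((len(road_points_world), 1))])
        pts_cam = (T_cam_from_world @ h.T).T[:, :3]
        pix = project_points(pts_cam[pts_cam[:, 2] > 0], fx, fy, cx, cy)
        u0, v0 = pix.min(axis=0) - margin
        u1, v1 = pix.max(axis=0) + margin
        return int(u0), int(v0), int(u1), int(v1)

Restricting feature search to this region is what lets the world model focus the attention of the sensors from one imaging cycle to the next.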


Autonomous Robots | 2008

Learning traversability models for autonomous mobile vehicles

Michael O. Shneier; Tommy Chang; Tsai Hong; William P. Shackleford; Roger V. Bostelman; James S. Albus

Autonomous mobile robots need to adapt their behavior to the terrain over which they drive, and to predict the traversability of the terrain so that they can effectively plan their paths. Such robots usually make use of a set of sensors to investigate the terrain around them and build up an internal representation that enables them to navigate. This paper addresses the question of how to use sensor data to learn properties of the environment and use this knowledge to predict which regions of the environment are traversable. The approach makes use of sensed information from range sensors (stereo or ladar), color cameras, and the vehicle's navigation sensors. Models of terrain regions are learned from subsets of pixels that are selected by projection into a local occupancy grid. The models include color and texture as well as traversability information obtained from an analysis of the range data associated with the pixels. The models are learned without supervision, deriving their properties from the geometry and the appearance of the scene. The models are used to classify color images and assign traversability costs to regions. The classification does not use the range or position information, but only color images. Traversability determined during the model-building phase is stored in the models. This enables classification of regions beyond the range of stereo or ladar using the information in the color images. The paper describes how the models are constructed and maintained, how they are used to classify image regions, and how the system adapts to changing environments. Examples are shown from the implementation of this algorithm in the DARPA Learning Applied to Ground Robots (LAGR) program, and an evaluation of the algorithm against human-provided ground truth is presented.
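A minimal sketch of the classify-by-color, look-up-traversability idea; the mean-color models, example colors, and costs below are invented placeholders (the paper's models also carry texture, and are learned without supervision from range data):

    import numpy as np

    class TerrainModel:
        # one learned model: mean color plus the traversability cost that
        # was measured from range data during the model-building phase
        def __init__(self, mean_color, traversability):
            self.mean_color = np.asarray(mean_color, dtype=float)
            self.traversability = traversability

    def classify_pixel(rgb, models):
        # nearest color model wins; no range data is needed here, which is
        # why stored traversability extends beyond stereo/ladar range
        dists = [np.linalg.norm(rgb - m.mean_color) for m in models]
        return models[int(np.argmin(dists))]

    models = [TerrainModel([90, 110, 60], traversability=0.9),    # grass-like
              TerrainModel([120, 120, 120], traversability=0.2)]  # rock-like
    cost = classify_pixel(np.array([95.0, 108.0, 63.0]), models).traversability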


International Conference on Robotics and Automation | 2002

Feature detection and tracking for mobile robots using a combination of ladar and color images

Tsai-Hong Hong; Tommy Chang; Christopher Rasmussen; Michael O. Shneier

In an outdoor, off-road mobile robotics environment, it is important to identify objects that can affect the vehicle's ability to traverse its planned path, and to determine their three-dimensional characteristics. In the paper, a combination of three elements is used to accomplish this task. An imaging ladar collects range images of the scene. A color camera, whose position relative to the ladar is known, is used to gather color images. Information extracted from these sensors is used to build a world model, a representation of the current state of the world. The world model is used actively in the sensing to predict what should be visible in each of the sensors during the next imaging cycle. The paper explains how the combined use of these three types of information leads to a robust understanding of the local environment surrounding the robotic vehicle for two important tasks: puddle/pond avoidance and road sign detection.
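The registration that makes this fusion possible can be sketched as follows, assuming the ladar-to-camera transform and camera intrinsic matrix K are known from calibration (the variable names are illustrative):

    import numpy as np

    def colorize_ladar(points_ladar, T_cam_from_ladar, K, image):
        # map each ladar return into the color image and attach the pixel
        # color to the 3-D point, giving colored range data for the world model
        h = np.hstack([points_ladar, np.ones((len(points_ladar), 1))])
        pts_cam = (T_cam_from_ladar @ h.T).T[:, :3]
        in_front = pts_cam[:, 2] > 0
        uv = (K @ pts_cam[in_front].T).T
        uv = (uv[:, :2] / uv[:, 2:3]).astype(int)
        rows, cols = image.shape[:2]
        ok = ((uv[:, 0] >= 0) & (uv[:, 0] < cols) &
              (uv[:, 1] >= 0) & (uv[:, 1] < rows))
        return points_ladar[in_front][ok], image[uv[ok, 1], uv[ok, 0]]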


Proceedings of SPIE | 2002

Hierarchical world model for an autonomous scout vehicle

Tsai Hong Hong; Stephen B. Balakirsky; Elena R. Messina; Tommy Chang; Michael O. Shneier

This paper describes a world model that combines a variety of sensed inputs and a priori information and is used to generate on-road and off-road autonomous driving behaviors. The system is designed in accordance with the principles of the 4D/RCS architecture. The world model is hierarchical, with the resolution and scope at each level designed to minimize computational resource requirements and to support planning functions for that level of the control hierarchy. The sensory processing system that populates the world model fuses inputs from multiple sensors and extracts feature information, such as terrain elevation, cover, road edges, and obstacles. Feature information from digital maps, such as road networks, elevation, and hydrology, is also incorporated into this rich world model. The various features are maintained in different layers that are registered together to provide maximum flexibility in generation of vehicle plans depending on mission requirements. The paper includes discussion of how the maps are built and how the objects and features of the world are represented. Functions for maintaining the world model are discussed. The world model described herein is being developed for the Army Research Laboratory's Demo III Autonomous Scout Vehicle experiment.
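The resolution/scope trade-off can be pictured as a stack of registered grids, one per control level; the cell sizes, extents, and layer names below are illustrative, not the 4D/RCS specification:

    import numpy as np

    class GridLevel:
        # one level of the hierarchy: fine and narrow near the vehicle for
        # local control, coarse and wide at higher levels for route planning
        def __init__(self, cell_size_m, extent_m, layer_names):
            n = int(extent_m / cell_size_m)
            self.cell_size = cell_size_m
            # registered feature layers share one grid at this level
            self.layers = {name: np.zeros((n, n)) for name in layer_names}

        def cell(self, x_m, y_m):
            # vehicle-centered indexing into any layer at this level
            n = next(iter(self.layers.values())).shape[0]
            return (int(x_m / self.cell_size) + n // 2,
                    int(y_m / self.cell_size) + n // 2)

    hierarchy = [GridLevel(0.2, 40, ["elevation", "obstacle"]),
                 GridLevel(4.0, 800, ["elevation", "road", "hydrology"])]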


Unmanned Ground Vehicle Technology Conference | 2003

Repository of sensor data for autonomous driving research

Michael O. Shneier; Tommy Chang; Tsai Hong Hong; Geraldine S. Cheok; Harry A. Scott; Steven Legowik; Alan M. Lytle

We describe a project to collect and disseminate sensor data for autonomous mobility research. Our goals are to provide data of known accuracy and precision to researchers and developers to enable algorithms to be developed using realistically difficult sensory data. This enables quantitative comparisons of algorithms by running them on the same data, allows groups that lack equipment to participate in mobility research, and speeds technology transfer by providing industry with metrics for comparing algorithm performance. Data are collected using the NIST High Mobility Multi-purpose Wheeled Vehicle (HMMWV), an instrumented vehicle that can be driven manually or autonomously both on roads and off. The vehicle can mount multiple sensors and provides highly accurate position and orientation information as data are collected. The sensors on the HMMWV include an imaging ladar, a color camera, color stereo, and an inertial navigation system (INS) and Global Positioning System (GPS). Also available are a high-resolution scanning ladar, a line-scan ladar, and a multi-camera panoramic sensor. The sensors are characterized by collecting data from calibrated courses containing known objects. For some of the data, ground truth will be collected from site surveys. Access to the data is through a web-based query interface. Additional information stored with the sensor data includes navigation and timing data, sensor-to-vehicle coordinate transformations for each sensor, and sensor calibration information. Several sets of data have already been collected and the web query interface has been developed. Data collection is an ongoing process, and where appropriate, NIST will work with other groups to collect data for specific applications using third-party sensors.
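One way to picture an entry in such a repository is the record below; the field names are illustrative guesses, not NIST's actual schema:

    from dataclasses import dataclass, field

    @dataclass
    class SensorDataRecord:
        # raw data plus the metadata needed to reproduce an experiment
        sensor: str                # e.g. "imaging_ladar", "color_stereo"
        timestamp_s: float         # synchronized collection time
        vehicle_pose: tuple        # INS/GPS position and orientation
        sensor_to_vehicle: list    # 4x4 extrinsic transform, row-major
        calibration: dict = field(default_factory=dict)
        data_path: str = ""        # location of the raw image or scan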


Journal of Field Robotics | 2006

Learning in a hierarchical control system: 4D/RCS in the DARPA LAGR program

James S. Albus; Roger V. Bostelman; Tommy Chang; Tsai Hong Hong; William P. Shackleford; Michael O. Shneier

The Defense Advanced Research Projects Agency (DARPA) Learning Applied to Ground Robots (LAGR) program aims to develop algorithms for autonomous vehicle navigation that learn how to operate in complex terrain. Over many years, the National Institute of Standards and Technology (NIST) has developed a reference model control system architecture called 4D/RCS that has been applied to many kinds of robot control, including autonomous vehicle control. For the LAGR program, NIST has embedded learning into a 4D/RCS controller to enable the small robot used in the program to learn to navigate through a range of terrain types. The vehicle learns in several ways. These include learning by example, learning by experience, and learning how to optimize traversal. Learning takes place in the sensory processing, world modeling, and behavior generation parts of the control system. The 4D/RCS architecture is explained in the paper, its application to LAGR is described, and the learning algorithms are discussed. Results are shown of the performance of the NIST control system on independently conducted tests. Further work on the system and its learning capabilities is discussed.
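The control loop of one node in such a hierarchy can be sketched as below; the class and callable names are placeholders, not the NIST implementation:

    class RcsNode:
        # one 4D/RCS node: sensory processing (sp) updates the world model
        # (wm), and behavior generation (bg) decomposes the goal into a
        # subgoal for the node below; learning can live in any of the three
        def __init__(self, sp, wm, bg, child=None):
            self.sp, self.wm, self.bg, self.child = sp, wm, bg, child

        def cycle(self, observations, goal):
            self.wm.update(self.sp(observations))
            subgoal = self.bg(self.wm, goal)
            if self.child is not None:
                return self.child.cycle(observations, subgoal)
            return subgoal  # the lowest level emits actuator commands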


Unmanned Ground Vehicle Technology Conference | 2003

Using a priori data for prediction and object recognition in an autonomous mobile vehicle

Christopher J. Scrapper; Ayako Takeuchi; Tommy Chang; Tsai Hong Hong; Michael O. Shneier

A robotic vehicle needs to understand the terrain and features around it if it is to be able to navigate complex environments such as road systems. By taking advantage of the fact that such vehicles also need accurate knowledge of their own location and orientation, we have developed a sensing and object recognition system based on information about the area where the vehicle is expected to operate. The information is collected through aerial surveys, from maps, and by previous traverses of the terrain by the vehicle. It takes the form of terrain elevation information, feature information (roads, road signs, trees, ponds, fences, etc.) and constraint information (e.g., one-way streets). We have implemented such an a priori database using One Semi-Automated Forces (OneSAF), a military simulation environment. Using the Inertial Navigation System and Global Positioning System (GPS) on the NIST High Mobility Multi-purpose Wheeled Vehicle (HMMWV) to provide indexing into the database, we extract all the elevation and feature information for a region surrounding the vehicle as it moves about the NIST campus. This information has also been mapped into the sensor coordinate systems. For example, processing the information from an imaging Laser Detection And Ranging (LADAR) that scans a region in front of the vehicle has been greatly simplified by generating a prediction image by scanning the corresponding region in the a priori model. This allows the system to focus the search for a particular feature in a small region around where the a priori information predicts it will appear. It also permits immediate identification of features that match the expectations. Results indicate that this processing can be performed in real time.
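A minimal sketch of the prediction step, assuming the a priori database is a flat list of (x, y, label) features in world coordinates (the OneSAF-backed database described above is far richer):

    import numpy as np

    def predict_features(feature_db, vehicle_xy, heading_rad, radius_m=50.0):
        # pull a priori features near the INS/GPS pose and express them in
        # the vehicle frame, so sensor processing can search small windows
        # around each predicted location instead of the whole image
        c, s = np.cos(-heading_rad), np.sin(-heading_rad)
        nearby = []
        for x, y, label in feature_db:
            dx, dy = x - vehicle_xy[0], y - vehicle_xy[1]
            if dx * dx + dy * dy <= radius_m ** 2:
                nearby.append((c * dx - s * dy, s * dx + c * dy, label))
        return nearby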


Performance Metrics for Intelligent Systems | 2008

Dynamic 6DOF metrology for evaluating a visual servoing system

Tommy Chang; Tsai Hong; Michael O. Shneier; German Holguin; Johnny Park; Roger D. Eastman

In this paper we demonstrate the use of a dynamic, six-degree-of-freedom (6DOF) laser tracker to empirically evaluate the performance of a real-time visual servoing implementation, with the objective of establishing a general method for evaluating real-time 6DOF dimensional measurements. The laser tracker provides highly accurate ground truth reference measurements of position and orientation of an object under motion, and can be used as an objective standard for calibration and evaluation of visual servoing and robot control algorithms. The real-time visual servoing implementation used in this study was developed at the Purdue Robot Vision Lab with a subsumptive, hierarchical, and distributed vision-based architecture. Data were taken simultaneously from the laser tracker and visual servoing implementation, enabling comparison of the data streams.
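The core of such an evaluation is time-aligning the two pose streams and differencing them. A simplified position-only sketch (the study covers full 6DOF, and linear interpolation is an assumption):

    import numpy as np

    def interp_position(gt_times, gt_positions, t):
        # linearly interpolate (N, 3) ground-truth positions to time t
        return np.array([np.interp(t, gt_times, gt_positions[:, k])
                         for k in range(3)])

    def position_errors(gt_times, gt_positions, est_times, est_positions):
        # per-sample Euclidean error of the visual-servoing estimates
        # against the laser-tracker reference measurements
        return np.array([np.linalg.norm(p - interp_position(gt_times,
                                                            gt_positions, t))
                         for t, p in zip(est_times, est_positions)])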


Performance Metrics for Intelligent Systems | 2009

Performance measurements for evaluating static and dynamic multiple human detection and tracking systems in unstructured environments

Barry A. Bodt; Richard Camden; Harry A. Scott; Adam Jacoff; Tsai Hong; Tommy Chang; Rick Norcross; Tony Downs; Ann M. Virts

The Army Research Laboratory (ARL) Robotics Collaborative Technology Alliance (CTA) conducted an assessment and evaluation of multiple algorithms for real-time detection of pedestrians in Laser Detection and Ranging (LADAR) and video sensor data taken from a moving platform. The algorithms were developed by Robotics CTA members and then assessed in field experiments jointly conducted by the National Institute of Standards and Technology (NIST) and ARL. A robust, accurate and independent pedestrian tracking system was developed to provide ground truth. The ground truth was used to evaluate the CTA member algorithms for uncertainty and error in their results. A real-time display system was used to provide early detection of errors in data collection.
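Scoring a detector against ground-truth tracks reduces, per frame, to gated matching. A greedy nearest-neighbor sketch, where the 0.75 m gate is an invented threshold rather than the Robotics CTA protocol:

    import numpy as np

    def score_frame(detections, ground_truth, gate_m=0.75):
        # detections and ground_truth: lists of (x, y) positions in meters;
        # returns true positives, false positives, and false negatives
        unmatched = list(ground_truth)
        tp = 0
        for d in detections:
            if not unmatched:
                break
            dists = [np.hypot(d[0] - g[0], d[1] - g[1]) for g in unmatched]
            j = int(np.argmin(dists))
            if dists[j] <= gate_m:
                tp += 1
                unmatched.pop(j)
        return tp, len(detections) - tp, len(ground_truth) - tp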

Collaboration


Dive into Tommy Chang's collaborations.

Top Co-Authors

Michael O. Shneier, National Institute of Standards and Technology
Tsai Hong Hong, National Institute of Standards and Technology
Roger V. Bostelman, National Institute of Standards and Technology
Tsai Hong, National Institute of Standards and Technology
James S. Albus, National Institute of Standards and Technology
William P. Shackleford, National Institute of Standards and Technology
Harry A. Scott, National Institute of Standards and Technology
Roger D. Eastman, Loyola University Maryland
Christopher J. Scrapper, National Institute of Standards and Technology
Marilyn N. Abrams, National Institute of Standards and Technology