Publication


Featured research published by Tsai Hong.


IEEE Transactions on Intelligent Transportation Systems | 2012

A Learning Approach Towards Detection and Tracking of Lane Markings

Raghuraman Gopalan; Tsai Hong; Michael Shneier; Ramalingam Chellappa

Road scene analysis is a challenging problem that has applications in autonomous navigation of vehicles. An integral component of this system is the robust detection and tracking of lane markings. It is a hard problem primarily due to large appearance variations in lane markings caused by factors such as occlusion (traffic on the road), shadows (from objects like trees), and changing lighting conditions of the scene (transition from day to night). In this paper, we address these issues through a learning-based approach using visual inputs from a camera mounted in front of a vehicle. We propose the following: 1) a pixel-hierarchy feature descriptor to model the contextual information shared by lane markings with the surrounding road region; 2) a robust boosting algorithm to select relevant contextual features for detecting lane markings; and 3) particle filters to track the lane markings, without knowledge of vehicle speed, by assuming the lane markings to be static through the video sequence and then learning the possible road scene variations from the statistics of tracked model parameters. We investigate the effectiveness of our algorithm on challenging daylight and night-time road video sequences.
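
As an illustration of the tracking stage described above, here is a minimal sequential importance resampling (SIR) particle filter in Python/NumPy. The two-parameter lane model (offset, slope), the random-walk motion model, and the Gaussian measurement likelihood are simplifying assumptions for this sketch, not the pixel-hierarchy descriptor or boosting machinery from the paper.

    import numpy as np

    def particle_filter_track(observations, n_particles=500,
                              process_std=0.02, meas_std=0.1, seed=0):
        # Track hypothetical lane-model parameters (offset, slope) with a
        # basic SIR particle filter.
        rng = np.random.default_rng(seed)
        particles = rng.normal(0.0, 1.0, size=(n_particles, 2))  # broad init
        weights = np.full(n_particles, 1.0 / n_particles)
        estimates = []
        for z in observations:  # z: noisy (offset, slope) from a detector
            # Predict: markings are quasi-static, so a small random walk
            # stands in for frame-to-frame scene variation.
            particles += rng.normal(0.0, process_std, size=particles.shape)
            # Update: Gaussian likelihood of the measurement.
            d2 = np.sum((particles - np.asarray(z)) ** 2, axis=1)
            weights *= np.exp(-0.5 * d2 / meas_std ** 2)
            weights /= weights.sum()
            # Resample when the effective sample size collapses.
            if 1.0 / np.sum(weights ** 2) < n_particles / 2:
                idx = rng.choice(n_particles, size=n_particles, p=weights)
                particles = particles[idx]
                weights = np.full(n_particles, 1.0 / n_particles)
            estimates.append(weights @ particles)  # posterior mean
        return np.array(estimates)

    # Noisy measurements of a slowly drifting lane boundary.
    truth = np.column_stack([np.linspace(0.0, 0.3, 50), np.full(50, 0.1)])
    meas = truth + np.random.default_rng(1).normal(0.0, 0.1, truth.shape)
    print(particle_filter_track(meas)[-1])  # close to [0.3, 0.1]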


International Conference on Intelligent Transportation Systems | 2006

Color model-based real-time learning for road following

Ceryen Tan; Tsai Hong; Tommy Chang; Michael O. Shneier

Road following is a skill vital to the development and deployment of autonomous vehicles. Over the past few decades, a large number of road-following computer vision systems have been developed. All of these systems have limitations arising from assumptions of idealized conditions: they depend on highly structured roads, road homogeneity, simplified road shapes, and idealized lighting, so in the real world they are effective only in specialized cases. This paper proposes a vision system that overcomes many of these limitations, accurately segmenting unstructured, nonhomogeneous roads of arbitrary shape under various lighting conditions. The system uses color classification and learning to construct and use multiple road and background models. Color models are constructed on a frame-by-frame basis and used to segment each color image into road and background by estimating the probability that a pixel belongs to a particular model. The models are constructed and learned independently of road shape, allowing the segmentation of arbitrary road shapes. Temporal fusion is used to stabilize the results. Preliminary testing demonstrates the system's effectiveness on roads not handled by previous systems.
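
A minimal sketch of the per-pixel color-model classification idea, assuming a single Gaussian color model per class; the paper itself builds and maintains multiple road and background models per frame and fuses results temporally.

    import numpy as np

    def fit_color_model(pixels):
        # Fit one Gaussian color model (mean + covariance) to an (N, 3)
        # array of RGB samples -- a stand-in for the paper's richer
        # multiple-model scheme.
        mean = pixels.mean(axis=0)
        cov = np.cov(pixels, rowvar=False) + 1e-6 * np.eye(3)  # regularize
        return mean, np.linalg.inv(cov), np.linalg.slogdet(cov)[1]

    def log_likelihood(image, model):
        # Per-pixel Gaussian log-likelihood for an (H, W, 3) image.
        mean, inv_cov, logdet = model
        d = image.reshape(-1, 3) - mean
        maha = np.einsum('ni,ij,nj->n', d, inv_cov, d)
        return (-0.5 * (maha + logdet)).reshape(image.shape[:2])

    def segment_road(image, road_pixels, bg_pixels):
        # Label a pixel 'road' where the road model is more likely than
        # the background model.
        road_model = fit_color_model(road_pixels)
        bg_model = fit_color_model(bg_pixels)
        return log_likelihood(image, road_model) > log_likelihood(image, bg_model)

    # Synthetic example: grayish road vs. greenish background samples.
    rng = np.random.default_rng(0)
    road = rng.normal([120, 120, 120], 10, (500, 3))
    bg = rng.normal([60, 160, 60], 10, (500, 3))
    img = np.zeros((4, 4, 3))
    img[:2] = [120, 120, 120]
    img[2:] = [60, 160, 60]
    print(segment_road(img, road, bg))  # True (road) in the top two rows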


Autonomous Robots | 2008

Learning traversability models for autonomous mobile vehicles

Michael O. Shneier; Tommy Chang; Tsai Hong; William P. Shackleford; Roger V. Bostelman; James S. Albus

Autonomous mobile robots need to adapt their behavior to the terrain over which they drive, and to predict the traversability of the terrain so that they can effectively plan their paths. Such robots usually make use of a set of sensors to investigate the terrain around them and build up an internal representation that enables them to navigate. This paper addresses the question of how to use sensor data to learn properties of the environment and use this knowledge to predict which regions of the environment are traversable. The approach makes use of sensed information from range sensors (stereo or ladar), color cameras, and the vehicle’s navigation sensors. Models of terrain regions are learned from subsets of pixels that are selected by projection into a local occupancy grid. The models include color and texture as well as traversability information obtained from an analysis of the range data associated with the pixels. The models are learned without supervision, deriving their properties from the geometry and the appearance of the scene. The models are used to classify color images and assign traversability costs to regions. The classification does not use the range or position information, but only color images. Traversability determined during the model-building phase is stored in the models. This enables classification of regions beyond the range of stereo or ladar using the information in the color images. The paper describes how the models are constructed and maintained, how they are used to classify image regions, and how the system adapts to changing environments. Examples are shown from the implementation of this algorithm in the DARPA Learning Applied to Ground Robots (LAGR) program, and an evaluation of the algorithm against human-provided ground truth is presented.
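
A toy sketch of the core idea, that traversability learned where range data is available can later be predicted from color alone. The mean-color models and fixed costs below are illustrative stand-ins for the paper's color, texture, and range-derived models.

    import numpy as np

    class TerrainModel:
        # A toy terrain model: a mean color plus a traversability cost
        # learned from range data. The paper's models also carry texture
        # and are maintained over time.
        def __init__(self, color_samples, traversability_cost):
            self.mean_color = np.asarray(color_samples, float).mean(axis=0)
            self.cost = traversability_cost  # e.g. from range-data analysis

    def classify_by_color(pixel_color, models):
        # Assign a pixel to the nearest model in color space and return
        # that model's stored traversability cost -- no range data needed.
        dists = [np.linalg.norm(np.asarray(pixel_color, float) - m.mean_color)
                 for m in models]
        return models[int(np.argmin(dists))].cost

    # Build models where range data is available (near field) ...
    grass = TerrainModel([[70, 150, 60], [80, 160, 70]], traversability_cost=0.2)
    rocks = TerrainModel([[130, 125, 120], [140, 135, 130]], traversability_cost=0.9)

    # ... then predict traversability beyond sensor range from color alone.
    print(classify_by_color([75, 155, 65], [grass, rocks]))    # 0.2 (traversable)
    print(classify_by_color([135, 130, 125], [grass, rocks]))  # 0.9 (avoid)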


Proceedings of SPIE, the International Society for Optical Engineering | 2007

Super-resolution enhancement of flash LADAR range data

Gavin Rosenbush; Tsai Hong; Roger D. Eastman

Flash LADAR systems are becoming increasingly popular for robotics applications. However, they generally provide a low-resolution range image because of the limited number of pixels available on the focal plane array. In this paper, the application of image super-resolution algorithms to improve the resolution of range data is examined. Super-resolution algorithms are compared for their use on range data, and the frequency-domain method is selected. Four low-resolution range images, which are slightly shifted and rotated from the reference image, are registered using Fourier transform properties, and the super-resolution image is built using non-uniform interpolation. Image super-resolution algorithms are typically rated subjectively based on the perceived visual quality of their results. In this work, quantitative methods for evaluating the performance of these algorithms on range data are developed. Edge detection in the range data is used as a benchmark of the data improvement provided by super-resolution. The results show that super-resolution of range data provides the same advantage as image super-resolution, namely increased image fidelity.
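
The frequency-domain registration step can be illustrated with phase correlation, which recovers translation from the phase of the cross-power spectrum. This sketch handles integer shifts only; the paper's registration also accounts for rotation.

    import numpy as np

    def phase_correlation_shift(img, ref):
        # Estimate the integer (dy, dx) translation of img relative to ref
        # from the phase of the cross-power spectrum.
        F1, F2 = np.fft.fft2(img), np.fft.fft2(ref)
        cross = F1 * np.conj(F2)
        cross /= np.abs(cross) + 1e-12      # keep phase only
        corr = np.fft.ifft2(cross).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        h, w = ref.shape                    # wrap large shifts to negative
        return dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx

    # Recover a known shift of a synthetic low-resolution range image.
    rng = np.random.default_rng(0)
    ref = rng.normal(size=(64, 64))
    moved = np.roll(ref, shift=(3, -5), axis=(0, 1))
    print(phase_correlation_shift(moved, ref))  # (3, -5)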


Applied Optics | 2010

Super-resolution for flash ladar imagery

Shuowen Hu; S. Susan Young; Tsai Hong; Joseph P. Reynolds; Keith Krapels; Brian Miller; James D. Thomas; Oanh Nguyen

Flash ladar systems are compact devices with high frame rates that hold promise for robotics applications, but these devices suffer from poor spatial resolution. This work develops a wavelet preprocessing stage to enhance registration of multiple frames and applies super-resolution to improve the resolution of flash ladar range imagery. The triangle orientation discrimination methodology was used for a subjective evaluation of the effectiveness of super-resolution for flash ladar. Results show statistically significant increases in the probability of target discrimination at all target ranges, as well as a reduction in subject response times for super-resolved imagery.
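
One plausible reading of the wavelet preprocessing stage, sketched with a hand-rolled one-level Haar transform: soft-threshold the detail subbands to suppress range noise before frames are registered. The transform, threshold, and test data are all illustrative assumptions, not the paper's actual pipeline.

    import numpy as np

    def haar2(x):
        # One-level 2D Haar transform: approximation plus horizontal,
        # vertical, and diagonal detail subbands.
        a = (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 4
        h = (x[0::2, 0::2] - x[0::2, 1::2] + x[1::2, 0::2] - x[1::2, 1::2]) / 4
        v = (x[0::2, 0::2] + x[0::2, 1::2] - x[1::2, 0::2] - x[1::2, 1::2]) / 4
        d = (x[0::2, 0::2] - x[0::2, 1::2] - x[1::2, 0::2] + x[1::2, 1::2]) / 4
        return a, h, v, d

    def ihaar2(a, h, v, d):
        # Invert haar2.
        x = np.empty((a.shape[0] * 2, a.shape[1] * 2))
        x[0::2, 0::2] = a + h + v + d
        x[0::2, 1::2] = a - h + v - d
        x[1::2, 0::2] = a + h - v - d
        x[1::2, 1::2] = a - h - v + d
        return x

    def wavelet_denoise(frame, thresh=0.1):
        # Soft-threshold the detail subbands to suppress range noise
        # before registration.
        a, h, v, d = haar2(frame)
        soft = lambda c: np.sign(c) * np.maximum(np.abs(c) - thresh, 0.0)
        return ihaar2(a, soft(h), soft(v), soft(d))

    # A smooth noisy range patch: thresholding the detail subbands
    # roughly halves the mean per-pixel error here.
    rng = np.random.default_rng(0)
    clean = np.full((8, 8), 2.0)
    noisy = clean + rng.normal(0.0, 0.05, clean.shape)
    print(np.abs(noisy - clean).mean(),
          np.abs(wavelet_denoise(noisy) - clean).mean())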


Performance Metrics for Intelligent Systems | 2008

Dynamic 6DOF metrology for evaluating a visual servoing system

Tommy Chang; Tsai Hong; Michael O. Shneier; German Holguin; Johnny Park; Roger D. Eastman

In this paper we demonstrate the use of a dynamic, six-degree-of-freedom (6DOF) laser tracker to empirically evaluate the performance of a real-time visual servoing implementation, with the objective of establishing a general method for evaluating real-time 6DOF dimensional measurements. The laser tracker provides highly accurate ground truth reference measurements of position and orientation of an object under motion, and can be used as an objective standard for calibration and evaluation of visual servoing and robot control algorithms. The real-time visual servoing implementation used in this study was developed at the Purdue Robot Vision Lab with a subsumptive, hierarchical, and distributed vision-based architecture. Data were taken simultaneously from the laser tracker and visual servoing implementation, enabling comparison of the data streams.
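
A sketch of the comparison step: given time-aligned pose streams from the laser tracker (treated as ground truth) and the visual servoing system, compute per-sample position and orientation residuals. The pose representation and the assumption that the streams are already time-aligned are simplifications of this sketch.

    import numpy as np

    def pose_errors(truth, measured):
        # Each pose is a (position, rotation-matrix) pair. Returns
        # per-sample position error and orientation error (radians).
        pos_err, ang_err = [], []
        for (p_t, R_t), (p_m, R_m) in zip(truth, measured):
            pos_err.append(np.linalg.norm(np.asarray(p_t) - np.asarray(p_m)))
            R_rel = np.asarray(R_t).T @ np.asarray(R_m)   # residual rotation
            cos_a = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
            ang_err.append(np.arccos(cos_a))
        return np.array(pos_err), np.array(ang_err)

    def rot_z(a):
        # Rotation about the z axis by angle a (radians).
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    # Example: a 1 mm position offset and a 0.5 degree yaw error.
    truth = [([0.0, 0.0, 1.0], rot_z(0.0))]
    measured = [([0.001, 0.0, 1.0], rot_z(np.radians(0.5)))]
    p, a = pose_errors(truth, measured)
    print(p[0], np.degrees(a[0]))  # ~0.001 m, ~0.5 deg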


International Symposium on Safety, Security, and Rescue Robotics | 2005

3D range imaging for urban search and rescue robotics research

Roger V. Bostelman; Tsai Hong; Raj Madhavan; Brian Weiss

Urban search and rescue (USAR) operations can be extremely dangerous for human rescuers during disaster response. Human task forces, carrying the necessary tools and equipment and having the required skills and techniques, are deployed for the rescue of victims of structural collapse. Instead of sending human rescuers into such dangerous structures, it is hoped that robots will one day meet the requirements to perform such tasks so that rescuers are not put at risk. Recently, the National Institute of Standards and Technology (NIST), sponsored by the Defense Advanced Research Projects Agency, created reference test arenas that simulate collapsed structures for evaluating the performance of autonomous mobile robots performing USAR tasks. At the same time, the NIST Industrial Autonomous Vehicles Project has been studying advanced 3D range sensors for improved robot safety in warehouses and manufacturing environments. This paper discusses combined applications in which advanced 3D range sensors also show promise for improving robot performance in collapsed-structure navigation and rescue operations.


International Conference on Robotics and Automation | 2007

Training and optimization of operating parameters for flash LADAR cameras

Michael Price; Jacqueline Kenney; Roger D. Eastman; Tsai Hong

Flash LADAR cameras based on continuous-wave, time-of-flight range measurement deliver fast 3D imaging for robot applications including mapping, localization, obstacle detection, and object recognition. The accuracy of the range values produced depends on characteristics of the scene as well as on dynamically adjustable operating parameters of the cameras. To set these parameters optimally during camera operation, we have devised and implemented an optimization algorithm in a modular, extensible architecture for real-time applications including robot control. The approach uses two components: offline nonlinear optimization to minimize the range error for a training set of simple scenes, followed by an online, real-time algorithm that references the training data to set camera parameters. We quantify the effectiveness of our approach and highlight topics of interest for future research.
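
A minimal sketch of the two-component approach under an invented error model: offline search for the best integration time per training scene, then a cheap online table lookup. The error model, the choice of parameter, and the grid search are illustrative stand-ins for the paper's nonlinear optimizer and actual camera parameters.

    import numpy as np

    def range_error(integration_time, reflectivity):
        # Hypothetical error model: short integration is noisy, long
        # integration saturates bright (high-reflectivity) targets.
        noise = 0.05 / (integration_time * reflectivity + 1e-6)
        saturation = 0.02 * integration_time * reflectivity
        return noise + saturation

    def train_offline(reflectivities, times=np.linspace(0.1, 10.0, 200)):
        # Offline stage: for each training scene, pick the integration
        # time that minimizes the error model (grid search stands in for
        # the paper's nonlinear optimizer).
        return {r: times[np.argmin(range_error(times, r))]
                for r in reflectivities}

    def set_parameters_online(observed_reflectivity, table):
        # Online stage: look up the nearest trained scene and return its
        # optimized parameter -- cheap enough for a real-time loop.
        nearest = min(table, key=lambda r: abs(r - observed_reflectivity))
        return table[nearest]

    table = train_offline([0.2, 0.5, 1.0])
    print(set_parameters_online(0.45, table))  # time tuned for the 0.5 scene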


Performance Metrics for Intelligent Systems | 2009

Performance measurements for evaluating static and dynamic multiple human detection and tracking systems in unstructured environments

Barry A. Bodt; Richard Camden; Harry A. Scott; Adam Jacoff; Tsai Hong; Tommy Chang; Rick Norcross; Tony Downs; Ann M. Virts

The Army Research Laboratory (ARL) Robotics Collaborative Technology Alliance (CTA) conducted an assessment and evaluation of multiple algorithms for real-time detection of pedestrians in Laser Detection and Ranging (LADAR) and video sensor data taken from a moving platform. The algorithms were developed by Robotics CTA members and then assessed in field experiments jointly conducted by the National Institute of Standards and Technology (NIST) and ARL. A robust, accurate and independent pedestrian tracking system was developed to provide ground truth. The ground truth was used to evaluate the CTA member algorithms for uncertainty and error in their results. A real-time display system was used to provide early detection of errors in data collection.
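
A sketch of one frame of such an evaluation: greedily match detections to ground-truth pedestrian positions within a distance gate and count hits, false alarms, and misses. The gate value and the matching rule are assumptions of this sketch, not the CTA evaluation protocol.

    import numpy as np

    def score_frame(gt_positions, detections, gate=0.5):
        # Greedy nearest-neighbor matching of detections to ground-truth
        # pedestrian positions within a distance gate (meters). Returns
        # (true positives, false positives, misses) for one frame.
        gt = [np.asarray(g, float) for g in gt_positions]
        tp = fp = 0
        for d in detections:
            d = np.asarray(d, float)
            if gt:
                i = int(np.argmin([np.linalg.norm(d - g) for g in gt]))
                if np.linalg.norm(d - gt[i]) <= gate:
                    tp += 1
                    gt.pop(i)   # each ground-truth person matched once
                    continue
            fp += 1
        return tp, fp, len(gt)  # remaining ground truth are misses

    # Example frame: two pedestrians, one good detection, one false alarm.
    tp, fp, misses = score_frame([(2.0, 3.0), (5.0, 1.0)],
                                 [(2.1, 3.1), (9.0, 9.0)])
    print(tp, fp, misses)  # 1 1 1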


Performance Metrics for Intelligent Systems | 2012

Technology readiness levels for randomized bin picking

Jeremy A. Marvel; Kamel S. Saidi; Roger Eastman; Tsai Hong; Geraldine S. Cheok; Elena R. Messina

A proposal to apply Technology Readiness Levels to unstructured bin picking is discussed. A special session was held during the 2012 Performance Metrics for Intelligent Systems workshop to discuss the challenges and opportunities associated with the bin-picking problem, and to identify the potential for applying an industry-wide standardized assessment and reporting framework, such as Technology Readiness Levels, to bin picking. Representative experts from government, academia, and industry were assembled as a special panel to share their insights into the challenge.

Collaboration


Dive into Tsai Hong's collaborations.

Top Co-Authors

Tommy Chang, National Institute of Standards and Technology
Michael O. Shneier, National Institute of Standards and Technology
Roger V. Bostelman, National Institute of Standards and Technology
Harry A. Scott, National Institute of Standards and Technology
Roger D. Eastman, Loyola University Maryland
Geraldine S. Cheok, National Institute of Standards and Technology
Kamel S. Saidi, National Institute of Standards and Technology
Raj Madhavan, National Institute of Standards and Technology
Elena R. Messina, National Institute of Standards and Technology
James S. Albus, National Institute of Standards and Technology