Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Tsai-Hong Hong is active.

Publication


Featured research published by Tsai-Hong Hong.


International Conference on Computer Vision | 1995

Real-time obstacle avoidance using central flow divergence and peripheral flow

David Coombs; Martin Herman; Tsai-Hong Hong; Marilyn Nashman

The lure of using motion vision as a fundamental element in the perception of space drives this effort to use flow features as the sole cues for robot mobility. Real-time estimates of image flow and flow divergence provide the robot's sense of space. The robot steers down a conceptual corridor, comparing left and right peripheral flows. Large central flow divergence warns the robot of impending collisions at dead ends; when this occurs, the robot turns around and resumes wandering. Behavior is generated by directly using flow-based information in the 2D image sequence; no 3D reconstruction is attempted. Active mechanical gaze stabilization simplifies the visual interpretation problem by reducing camera rotation. By combining corridor following and dead-end deflection, the robot has wandered around the lab at 30 cm/s for as long as 20 minutes without collision. The ability to support this behavior in real time with current equipment promises expanded capabilities as computational power increases.
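The control loop the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the threshold, gain, and return convention are hypothetical placeholders.

```python
def steer_from_flow(left_flow_mag, right_flow_mag, central_divergence,
                    div_threshold=0.5, gain=1.0):
    """Sketch of flow-based corridor following (illustrative parameters).

    Larger peripheral flow on one side means that side's wall is closer,
    so the robot turns away from it. Large central flow divergence signals
    an impending head-on collision (a dead end), so the robot turns around.
    Returns a turn-rate command (positive = turn left) or 'turn_around'.
    """
    if central_divergence > div_threshold:
        return "turn_around"  # dead-end deflection
    # Balance strategy: steer toward the side with the smaller flow.
    return gain * (right_flow_mag - left_flow_mag)

# Right wall closer (larger right-side flow) -> positive command, turn left.
cmd = steer_from_flow(left_flow_mag=0.2, right_flow_mag=0.8,
                      central_divergence=0.1)
```

The two behaviors (corridor following and dead-end deflection) combine exactly as in the abstract: the divergence check takes priority, and the peripheral-flow balance handles steering otherwise.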


Computer Vision and Image Understanding | 1998

Accuracy vs Efficiency Trade-offs in Optical Flow Algorithms

Hongche Liu; Tsai-Hong Hong; Martin Herman; Theodore (Ted) Camus; Rama Chellappa

There have been two thrusts in the development of optical flow algorithms. One has emphasized higher accuracy; the other, faster implementation. These two thrusts, however, have been pursued independently, without addressing the accuracy vs. efficiency trade-offs. Although the accuracy-efficiency characteristic is algorithm dependent, an understanding of the general pattern is crucial in evaluating an algorithm for real-world tasks, which often pose various performance requirements. This paper addresses many implementation issues that have often been neglected in previous research, including temporal filtering of the output stream, algorithm flexibility, robustness to noise, subsampling, etc. Their impacts on accuracy and/or efficiency are emphasized. We present a survey of different approaches toward the goal of higher performance and present experimental studies on accuracy vs. efficiency trade-offs. A detailed analysis of how this trade-off affects algorithm design is presented in a case study involving two state-of-the-art optical flow algorithms: a gradient-based and a correlation-based method. The goal of this paper is to bridge the gap between the accuracy- and the efficiency-oriented approaches.
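Accuracy in such comparisons is commonly scored with the average angular error of Barron et al. (1994), the evaluation scheme cited elsewhere in these abstracts. A minimal sketch of the per-pixel metric (the aggregation over an image is left out):

```python
import numpy as np

def angular_error_deg(u_est, v_est, u_true, v_true):
    """Angular error (Barron et al., 1994) between estimated and true flow.

    Each 2-D flow vector (u, v) is embedded as a 3-D direction (u, v, 1),
    and the error is the angle between the two unit directions. The
    embedding keeps the metric finite even for near-zero flow.
    """
    a = np.array([u_est, v_est, 1.0])
    b = np.array([u_true, v_true, 1.0])
    cos_angle = a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# Identical vectors give zero error; orthogonal unit flows give 60 degrees.
err = angular_error_deg(0.5, -0.25, 0.5, -0.25)
```

Pairing this accuracy score with wall-clock runtime per frame is one simple way to place an algorithm on the accuracy-efficiency plane the paper studies.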


European Conference on Computer Vision | 1996

Accuracy vs. Efficiency Trade-offs in Optical Flow Algorithms

Hongche Liu; Tsai-Hong Hong; Martin Herman; Rama Chellappa

There have been two thrusts in the development of optical flow algorithms. One has emphasized higher accuracy; the other, faster implementation. These two thrusts, however, have been pursued independently, without addressing the accuracy vs. efficiency trade-offs. Although the accuracy-efficiency characteristic is algorithm dependent, an understanding of the general pattern is crucial in evaluating an algorithm for real-world tasks, which often pose various performance requirements. This paper addresses many implementation issues that have often been neglected in previous research, including subsampling, temporal filtering of the output stream, and algorithm flexibility and robustness. Their impacts on accuracy and/or efficiency are emphasized. We present a critical survey of different approaches toward the goal of higher performance and present experimental studies on accuracy vs. efficiency trade-offs. The goal of this paper is to bridge the gap between the accuracy- and the efficiency-oriented approaches.


International Journal of Computer Vision | 1997

A General Motion Model and Spatio-Temporal Filters for Computing Optical Flow

Hongche Liu; Tsai-Hong Hong; Martin Herman; Rama Chellappa

Traditional optical flow algorithms assume local image translational motion and apply simple image filtering techniques. Recent studies have taken two separate approaches toward improving the accuracy of computed flow: the application of spatio-temporal filtering schemes and the use of advanced motion models such as the affine model. Each has achieved some improvement over traditional algorithms in specialized situations, but the computation of accurate optical flow for general motion has been elusive. In this paper, we exploit the interdependency between these two approaches and propose a unified approach. The general motion model we adopt characterizes arbitrary 3-D steady motion. Under perspective projection, we derive an image motion equation that describes the spatio-temporal relation of gray-scale intensity in an image sequence, thus making the utilization of 3-D filtering possible. However, to accommodate this motion model, we need to extend the filter design to derive additional motion constraint equations. Using Hermite polynomials, we design differentiation filters whose orthogonality and Gaussian derivative properties ensure numerical stability; a recursive relation facilitates application of the general nonlinear motion model, while separability promotes efficiency. The resulting algorithm produces accurate optical flow and other useful motion parameters. It is evaluated quantitatively using the scheme established by Barron et al. (1994) and qualitatively with real images.
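The Gaussian-derivative and separability properties mentioned above can be illustrated with a small sketch. This is not the paper's filter design: the closed-form Hermite expressions are replaced by numerical differentiation of a sampled Gaussian, and the filter sizes are arbitrary.

```python
import numpy as np

def gaussian_derivative_filter(order, sigma=1.0, radius=4):
    """1-D derivative-of-Gaussian filter of the given order.

    The n-th Gaussian derivative equals a Hermite polynomial times a
    Gaussian, which is the property the paper's filters exploit; here we
    simply differentiate the sampled Gaussian numerically as a sketch.
    """
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    f = g
    for _ in range(order):
        f = np.gradient(f, x)  # repeated numerical differentiation
    return f

# Separable application: convolve rows with a first-derivative kernel and
# columns with a smoothing kernel to get a horizontal image gradient.
image = np.outer(np.ones(16), np.arange(16, dtype=float))  # ramp in x
dx = gaussian_derivative_filter(1)
g0 = gaussian_derivative_filter(0)
rows = np.apply_along_axis(lambda r: np.convolve(r, dx, mode="same"), 1, image)
grad_x = np.apply_along_axis(lambda c: np.convolve(c, g0, mode="same"), 0, rows)
```

On the ramp image the recovered interior gradient is close to the true slope of 1, and the separable row/column passes cost far less than an equivalent full 2-D (or 3-D spatio-temporal) convolution, which is the efficiency argument behind separability.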


Computer Vision and Image Understanding | 1998

Motion-model-based boundary extraction and a real-time implementation

Hongche Liu; Tsai-Hong Hong; Martin Herman; Rama Chellappa

Motion boundary extraction and optical flow computation are two subproblems of the motion recovery problem that cannot be solved independently of one another, yet they have traditionally been treated separately. A popular recent approach uses an iterative scheme that consists of motion boundary extraction and optical flow computation components and refines each result through iteration. We present a local, noniterative algorithm that simultaneously extracts motion boundaries and computes optical flow. This is achieved by modeling 3-D Hermite polynomial decompositions of image sequences representing the perspective projection of 3-D general motion. Local model parameters are used to determine whether motion should be estimated or motion boundaries should be extracted in each neighborhood. A definite advantage of this noniterative algorithm is its efficiency, as demonstrated by a real-time implementation and supporting experimental results.
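The per-neighborhood decision rule can be sketched as follows. The `fit_model` callable and the residual threshold are hypothetical stand-ins for the paper's Hermite-decomposition model fit, shown only to make the control flow concrete.

```python
def flow_or_boundary(block, fit_model, residual_threshold=0.1):
    """Sketch of a local decide-per-neighborhood rule (hypothetical API).

    `fit_model` fits a local single-motion model to a 3-D intensity block
    and returns (flow, residual). A low residual means one coherent motion
    explains the block, so flow is reported; a high residual suggests a
    motion boundary crosses the block, so a boundary is reported instead.
    """
    flow, residual = fit_model(block)
    if residual <= residual_threshold:
        return ("flow", flow)
    return ("boundary", None)

# Demo with a dummy model that reports a good fit.
label, flow = flow_or_boundary(block=None,
                               fit_model=lambda b: ((1.0, 0.0), 0.02))
```

Because each neighborhood is classified independently in one pass, no iteration between the two subproblems is needed, which is the efficiency point the abstract makes.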


International Conference on Pattern Recognition | 1994

A generalized motion model for estimating optical flow using 3-D Hermite polynomials

Hongche Liu; Tsai-Hong Hong; Martin Herman; Rama Chellappa

Classic optical flow algorithms assume local image translational motion and apply some primitive image smoothing. Recent studies have taken two separate approaches toward improving accuracy: the application of spatio-temporal filtering schemes and the use of generalized motion models such as the affine model. Each has achieved improvement in its specialized situations. We analyze the interdependency between them and propose a unified theory. The generalized motion model we adopt characterizes arbitrary 3D steady motion. Under perspective projection, we derive an image motion equation that describes the spatio-temporal relation in an image sequence, thus making 3D spatio-temporal filtering possible. Hence we establish a theory of Hermite polynomial differentiation filters, whose orthogonality and Gaussian derivative properties ensure numerical stability. The use of higher-order motion constraint equations to accommodate more complex motion is justified by the algorithm's reliable performance, as demonstrated by evaluating our algorithm in the scheme established by Barron et al. (1994).


Computer Vision and Pattern Recognition | 1992

Kinematic calibration of an active camera system

Gin-Shu Young; Tsai-Hong Hong; Martin Herman; Jackson C. S. Yang

A technique for the calibration of an active camera system is presented. The calibration of the manipulator, camera-to-manipulator, camera, and base-to-world transforms is treated in a unified and elegant way. In this approach, the camera frames and manipulator link frames are all related to the world frame, so the camera-to-manipulator and base-to-world calibrations are straightforward. The approach is simple, since it takes the form of one equation solving for one parameter. Two experiments that verify the accuracy of the technique are reported.
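The frame-chaining idea, relating every frame to the world frame so that an unknown transform falls out by composition, can be sketched with homogeneous transforms. The poses below are illustrative (identity rotations, made-up translations), not values from the paper.

```python
import numpy as np

def homogeneous(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# If the world-to-camera pose and the world-to-manipulator-link pose are
# both known (each expressed in the world frame), the fixed camera-to-link
# transform is recovered by composing them, no coupled solve needed.
world_T_link = homogeneous(np.eye(3), np.array([1.0, 0.0, 0.5]))
world_T_cam = homogeneous(np.eye(3), np.array([1.0, 0.2, 0.5]))
cam_T_link = np.linalg.inv(world_T_cam) @ world_T_link
```

With both poses expressed in the common world frame, the camera-to-manipulator transform reduces to a single matrix product, which is the sense in which the abstract calls this step "very straightforward".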


International Journal of Computer Vision | 1998

New Visual Invariants for Terrain Navigation Without 3D Reconstruction

Gin-Shu Young; Martin Herman; Tsai-Hong Hong; David Jiang; Jackson C. S. Yang

For autonomous vehicles to achieve terrain navigation, obstacles must be discriminated from terrain before any path planning and obstacle avoidance activity is undertaken. In this paper, a novel approach to obstacle detection has been developed. The method finds obstacles in the 2D image space, as opposed to 3D reconstructed space, using optical flow. Our method assumes that both obstacle-free terrain regions and regions with obstacles will be visible in the imagery; the goal is therefore to discriminate between terrain regions with obstacles and terrain regions without obstacles. Our method uses new visual linear invariants based on optical flow. Employing the linear invariance property, obstacles can be directly detected by using reference flow lines obtained from measured optical flow. The main features of this approach are: (1) 2D visual information (i.e., optical flow) is directly used to detect obstacles; no range, 3D motion, or 3D scene geometry is recovered; (2) knowledge about the camera-to-ground coordinate transformation is not required; (3) knowledge about vehicle (or camera) motion is not required; (4) the method is valid for the vehicle (or camera) undergoing general six-degree-of-freedom motion; (5) the error sources involved are reduced to a minimum, because the only information required is one component of optical flow. Numerous experiments using both synthetic and real image data are presented. The method is demonstrated in both ground and air vehicle scenarios.
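The reference-flow-line idea can be sketched as an outlier test: if one flow component varies linearly along an image line over flat terrain, samples that break the linear pattern are obstacle candidates. A plain least-squares fit stands in for the paper's actual procedure, and the threshold is an illustrative placeholder.

```python
import numpy as np

def detect_obstacles(coords, flow_component, deviation_threshold=0.2):
    """Flag flow samples that break a linear-in-image-coordinate pattern.

    Fits a reference line (flow = a * coord + b) by least squares to one
    optical-flow component sampled along an image line, then flags samples
    whose residual from that line is large as obstacle candidates.
    """
    A = np.vstack([coords, np.ones_like(coords)]).T
    coeffs, *_ = np.linalg.lstsq(A, flow_component, rcond=None)
    residuals = flow_component - A @ coeffs
    return np.abs(residuals) > deviation_threshold

coords = np.arange(10, dtype=float)
flow = 0.1 * coords + 0.5   # linear pattern: obstacle-free terrain
flow[6] += 1.0              # an obstacle perturbs the flow locally
mask = detect_obstacles(coords, flow)
```

Note that only one flow component enters the test, matching feature (5) of the abstract: no range, 3D motion, or camera-to-ground geometry is recovered.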


International Symposium on Computer Vision | 1995

Motion-model-based boundary extraction

Hongche Liu; Tsai-Hong Hong; Martin Herman; Rama Chellappa

Motion boundary extraction and optical flow computation are two subproblems of the motion recovery problem that cannot be solved independently of one another. We present a local, non-iterative algorithm that extracts motion boundaries and computes optical flow simultaneously. This is achieved by modeling a 3-D image intensity block with a general motion model that presumes locally coherent motion. Local motion coherence, which is measured by the fitness of the motion model, is the criterion we use to determine whether motion should be estimated. If not, then motion boundaries should be located. The motion boundary extraction algorithm is evaluated quantitatively and qualitatively against other existing algorithms in a scheme originally developed for edge detection.


International Conference on Image Processing | 1995

Spatio-temporal filters for transparent motion segmentation

Hongche Liu; Tsai-Hong Hong; Martin Herman; Rama Chellappa

An image is ideally a projection of the 3-D scene. However, the imaging process is always imperfect and constrained by the physical environment, for example, viewing through a window with reflections. This paper is concerned with image sequences acquired in such situations, a phenomenon known as transparency. When it occurs, the image sequence contains undesirable transparent motion, for example, of the window reflections. This complicates the already difficult motion estimation problem. We present an algorithm to segment transparent motion based on a spatio-temporal filtering technique: 3-D Hermite polynomial differentiation filters. With motion segmentation accomplished, we can then focus on the scene analysis. The implementation of our algorithm is fast and accurate.

Collaboration


Dive into Tsai-Hong Hong's collaborations.

Top Co-Authors

- Martin Herman, National Institute of Standards and Technology
- Hongche Liu, National Institute of Standards and Technology
- Gin-Shu Young, National Institute of Standards and Technology
- Karen Chaconas, National Institute of Standards and Technology
- Marilyn Nashman, National Institute of Standards and Technology
- David Coombs, National Institute of Standards and Technology
- David Jiang, National Institute of Standards and Technology
- Theodore (Ted) Camus, National Institute of Standards and Technology