Yuntao Cui
Michigan State University
Publications
Featured research published by Yuntao Cui.
computer vision and pattern recognition | 1996
Yuntao Cui; John J. Weng
This paper presents a prediction-and-verification segmentation scheme using attention images from multiple fixations. A major advantage of this scheme is that it can handle a large number of different deformable objects presented in complex backgrounds. The scheme is also relatively efficient, since the segmentation is guided by past knowledge through the prediction-and-verification process. The system has been tested on segmenting hands in sequences of intensity images, where each sequence represents a hand sign. The experimental results showed a 95% correct segmentation rate with a 3% false rejection rate.
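As a rough illustration of the prediction-and-verification idea, here is a minimal Python sketch; the predict_mask and verify_score functions are hypothetical placeholders, not the authors' implementation.

import numpy as np

def segment_by_prediction_verification(attention_images, predict_mask,
                                        verify_score, threshold=0.9):
    # For each fixation's attention image, predict a segmentation mask from
    # past (training) knowledge, then verify it against the input and keep
    # the best-scoring mask.
    best_mask, best_score = None, -np.inf
    for attention in attention_images:
        mask = predict_mask(attention)         # prediction from learned exemplars
        score = verify_score(attention, mask)  # verification against the image
        if score > best_score:
            best_mask, best_score = mask, score
    return best_mask if best_score >= threshold else None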
international conference on pattern recognition | 1996
Yuntao Cui; John J. Weng
In this paper, we present a three-stage framework for analyzing time-varying image sequences. The focus of this paper is the second stage: segmentation. We propose a prediction-and-verification segmentation scheme that efficiently utilizes attention images from multiple fixations. The experimental results show a 95% correct segmentation rate with a 3% false rejection rate on 805 test images. Hand-sign recognition based on the segmentation results shows that the system achieves good performance on this very difficult vision task.
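A minimal sketch of how the three stages might compose, assuming hypothetical attend, segment, and recognize callables (this paper supplies the second stage):

def analyze_sequence(frames, attend, segment, recognize):
    # Stage 1: attention selects fixations; stage 2: segmentation extracts
    # the hand; stage 3: recognition classifies the hand sign.
    signs = []
    for frame in frames:
        fixations = attend(frame)
        hand_mask = segment(frame, fixations)
        signs.append(recognize(hand_mask))
    return signs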
Pattern Recognition Letters | 1996
Yuntao Cui; John J. Weng; Herbert Reynolds
In this paper, we propose an unbiased minimum variance estimator for the parameters of an ellipse. A space decomposition scheme is presented to direct the search for the optimal parameters. Experimental results show a dramatic improvement over existing weighted least-sum-of-squares approaches, especially when the ellipse is occluded.
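For context, a standard algebraic least-squares conic fit (not the paper's unbiased minimum variance estimator, and without its space decomposition search) is the kind of baseline such work compares against; a minimal NumPy sketch:

import numpy as np

def fit_ellipse_algebraic(x, y):
    # Fit the conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 by minimising
    # the algebraic residual ||D p|| subject to ||p|| = 1.
    D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]  # conic coefficients (a, b, c, d, e, f)

# Example: a noisy, half-occluded ellipse (only a 180-degree arc observed),
# the situation in which plain algebraic fits are known to degrade.
t = np.linspace(0.0, np.pi, 100)
x = 5.0 * np.cos(t) + np.random.normal(0.0, 0.05, t.size)
y = 3.0 * np.sin(t) + np.random.normal(0.0, 0.05, t.size)
print(fit_ellipse_algebraic(x, y))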
international symposium on computer vision | 1995
Yuntao Cui; John J. Weng
In this paper, we consider the problem of segmenting 2D objects from intensity fovea images based on learning. During training, we apply the Karhunen-Loeve projection to the training set to obtain a set of eigenvectors, and also construct a space decomposition tree to achieve logarithmic retrieval time complexity. The eigenvectors are used to reconstruct the test fovea image. We then apply a spring network model to the reconstructed image to generate a polygon mask. After applying the mask to the test image, we search the space decomposition tree for the nearest neighbor to segment the object from the background. The system was tested on segmenting 25 classes of different hand shapes. The experimental results show a 97% correct rate for hands present in the training set (residual errors are due to background effects) and a 93% correct rate for hands not used in the training phase.
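A minimal sketch of the learning and retrieval steps described above; the array shapes, brute-force search, and default subspace size are illustrative assumptions, and the paper's space decomposition tree would make retrieval logarithmic rather than linear.

import numpy as np

def learn_subspace(train, k=20):
    # Karhunen-Loeve (PCA) basis from flattened training fovea images
    # (rows of `train`): mean image plus the top-k right singular vectors.
    mean = train.mean(axis=0)
    _, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
    return mean, Vt[:k]

def reconstruct(image, mean, basis):
    # Project the test fovea image onto the eigenvectors and reconstruct it.
    coeffs = basis @ (image - mean)
    return mean + basis.T @ coeffs

def nearest_training_sample(image, train, mean, basis):
    # Nearest neighbour in the subspace, here by brute force.
    q = basis @ (image - mean)
    proj = (train - mean) @ basis.T
    return int(np.argmin(np.linalg.norm(proj - q, axis=1)))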
Image and Vision Computing | 1997
George C. Stockman; Jin-Long Chen; Yuntao Cui; Herbert M. Reynolds
Methods are given for measuring 3D points on human drivers of automobiles. The points are natural body features marked by special targets placed on the body. The measurements are needed to improve comfort and accommodation in automotive seat design. The measurement methods required hardware instrumentation in the automobile and the development of algorithms for off-line processing of the acquired data. This paper describes the use of multiple cameras to measure 3D locations within the driver's workspace. Results show that measurement error in the X, Y and Z coordinates is expected to be less than 1.0 mm on the lab bench and 2.0 mm in the car, and that combined 3D error is expected to be less than 2.0 mm and 3.0 mm, respectively.
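A minimal sketch of the kind of two-camera measurement involved, assuming calibrated 3x4 projection matrices P1 and P2 (a standard linear triangulation, not necessarily the authors' exact algorithm):

import numpy as np

def triangulate(P1, P2, x1, x2):
    # Linear (DLT) triangulation of one target point seen at pixel
    # coordinates x1 in camera 1 and x2 in camera 2.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # 3D point in the lab/car reference frame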
IS&T/SPIE's Symposium on Electronic Imaging: Science & Technology | 1995
Yuntao Cui; John J. Weng
We consider the task of passive navigation, where a stereo visual sensor system moves around an unknown scene. To guide autonomous navigation, it is important to build a visual map that records the location and shape of the objects in the scene and their world coordinates. The extended global visual map is an integration of local maps. The approach described in this paper integrates the processes of motion estimation, stereo matching, temporal tracking, and Delaunay triangulation interpolation. Through stereo matching, each frame (a stereo image pair) produces a set of 3D points of the current scene. The global structure of these 3D points is obtained using the results of the motion estimation. Delaunay tetrahedralization interpolates the three-dimensional data points with a simplicial polyhedral surface. The experiment includes 151 frames of stereo images acquired from a moving mobile robot.
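A minimal sketch of the map-integration step described above, assuming the motion estimate for the current frame is a rotation R and translation t into the world frame, and using a 2.5D Delaunay triangulation over the ground-plane projection as a simplified stand-in for the paper's tetrahedralization-based surface interpolation:

import numpy as np
from scipy.spatial import Delaunay

def integrate_frame(world_points, frame_points, R, t):
    # Transform 3D points reconstructed from the current stereo pair into
    # world coordinates and append them to the global map.
    return np.vstack([world_points, frame_points @ R.T + t])

def surface_mesh(world_points):
    # Interpolate the accumulated points with a simplicial surface by
    # triangulating their (x, y) projection.
    return Delaunay(world_points[:, :2]).simplices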
Archive | 1998
Herbert M. Reynolds; Robert Kerr; Raymond R. Brodeur; Khaldoun Rayes; Douglas Neal; Yuntao Cui
SAE transactions | 1996
Raymond R. Brodeur; Yuntao Cui; Herbert M. Reynolds
Archive | 1998
Raymond R. Brodeur; Yuntao Cui; Robert Kerr; Douglas Neal; Khaldoun Rayes; Herbert M. Reynolds
FGR | 1995
Yuntao Cui; John Juyang Weng