Cheng-Yuan Tang
Huafan University
Publication
Featured research published by Cheng-Yuan Tang.
intelligent information hiding and multimedia signal processing | 2009
Gang-Zeng Mao; Yi-Leh Wu; Maw-Kae Hor; Cheng-Yuan Tang
Most hand detection and tracking algorithms can only be applied against fairly simple and uniform backgrounds. We propose combining a modified version of the object detection method of Viola and Jones with skin-color detection to perform hand detection and tracking against complex backgrounds. Our experimental results show that the proposed method is effective at near real-time speed (15 frames per second).
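The paper combines a Viola–Jones detector with skin-color detection; a minimal sketch of the skin-color stage is shown below, assuming a common YCbCr threshold model (the exact ranges are an assumption, not taken from the paper):

```python
import numpy as np

# Assumed skin chrominance ranges; common literature values, not the
# paper's own thresholds.
CB_RANGE = (77, 127)
CR_RANGE = (133, 173)

def skin_mask(rgb):
    """Return a boolean mask marking skin-colored pixels.

    rgb: array of shape (..., 3) with channel values in [0, 255].
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Standard RGB -> YCbCr chrominance conversion (ITU-R BT.601)
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return ((CB_RANGE[0] <= cb) & (cb <= CB_RANGE[1]) &
            (CR_RANGE[0] <= cr) & (cr <= CR_RANGE[1]))
```

Pixels passing the mask would then be intersected with the Viola–Jones detection windows to reject background false positives.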
international conference on machine learning and cybernetics | 2008
Cheng-Yuan Tang; Yi-Leh Wu; Maw-Kae Hor; Wen-Hung Wang
There remain many difficult problems in computer vision research, such as object recognition, three-dimensional reconstruction, and object tracking, and solving these problems often relies on image matching as a foundation. The scale invariant feature transform (SIFT) algorithm has been widely used in image matching applications. The SIFT algorithm can successfully extract the most descriptive feature points in given input images taken from different viewpoints. However, the performance of the original SIFT algorithm degrades under the influence of noise. We propose to modify the SIFT algorithm to produce better invariant feature points for image matching under noise. We also propose to employ the Earth mover's distance (EMD) as the measure of similarity between two descriptors. We present extensive experimental results to demonstrate the performance of the proposed methods in image matching under noise.
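SIFT descriptors are 128-dimensional, but for equal-mass one-dimensional histograms the EMD has a simple closed form: the L1 distance between the cumulative distributions. A minimal sketch (the reduction to binned 1-D histograms is our simplification; the paper compares full descriptors):

```python
import numpy as np

def emd_1d(p, q):
    """Earth mover's distance between two 1-D histograms of equal mass.

    In one dimension the optimal transport cost reduces to the L1
    distance between the cumulative sums of the two histograms.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.abs(np.cumsum(p - q)).sum())
```

Unlike a bin-wise Euclidean distance, EMD accounts for cross-bin shifts, which is why it can be more robust when noise perturbs descriptor bins.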
Expert Systems With Applications | 2011
Yi-Leh Wu; Cheng-Yuan Tang; Maw-Kae Hor; Pei-Fen Wu
Feature selection plays an important role in image retrieval systems. A better selection of features usually results in higher retrieval accuracy. This work tries to select the best feature set from a total of 78 low-level image features, including regional, color, and textural features, using genetic algorithms (GA). However, the GA is known to be slow to converge. In this work we propose two directions to improve the convergence time of the GA. First, we employ the Taguchi method to reduce the number of offspring that must be tested in every generation of the GA. Second, we propose to use an alternative measure, Hubert's Γ statistic, to evaluate the fitness of each offspring instead of evaluating the retrieval accuracy directly. The experimental results show that the proposed techniques improve GA-based feature selection in both time and accuracy.
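Hubert's Γ statistic, used here as a fitness surrogate, measures how well pairwise distances agree with a partition of the data. A minimal sketch of its normalized form, assuming Euclidean distances and a same/different-cluster indicator (the paper's exact formulation may differ):

```python
import numpy as np

def hubert_gamma(points, labels):
    """Normalized Hubert Γ statistic: Pearson correlation between the
    pairwise distance matrix and a different-cluster indicator matrix,
    taken over all unordered point pairs."""
    points = np.asarray(points, dtype=float)
    labels = np.asarray(labels)
    n = len(points)
    iu = np.triu_indices(n, k=1)  # unordered pairs (i < j)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)[iu]
    diff = (labels[:, None] != labels[None, :]).astype(float)[iu]
    return float(np.corrcoef(dist, diff)[0, 1])
```

A value near 1 means points in different groups are consistently farther apart than points in the same group, which is why the statistic can stand in for a direct retrieval-accuracy evaluation.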
international conference on machine learning and cybernetics | 2014
Wei-Chih Hung; Fan Shen; Yi-Leh Wu; Maw-Kae Hor; Cheng-Yuan Tang
Recently, Activity Recognition (AR) has become a popular research topic because of the increasing availability of sensors in consumer products, such as GPS sensors, vision sensors, audio sensors, light sensors, temperature sensors, direction sensors, and acceleration sensors. The availability of a variety of sensors creates many new opportunities for data mining applications. This paper proposes a mobile phone-based system that employs the accelerometer and gyroscope signals for AR. To evaluate the proposed system, we employ a data set in which 30 volunteers performed daily activities such as walking, lying, walking upstairs, sitting, and standing. The results show that the features extracted from the gyroscope enhance the classification accuracy in terms of recognizing dynamic activities such as walking and walking upstairs. A comparison study shows that the recognition accuracies of the proposed framework using various classification algorithms are higher than those of previous works.
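Sensor-based AR pipelines typically segment each stream into sliding windows and extract per-window statistics before classification. A minimal sketch under that assumption (the mean/std/magnitude features and the window size are illustrative, not the paper's exact feature set):

```python
import numpy as np

def window_features(accel, gyro, win=128, step=64):
    """Extract simple per-window statistics from accelerometer and
    gyroscope streams, each of shape (n_samples, 3).

    Returns one row of features per sliding window: for each sensor,
    the per-axis mean and std plus mean/std/max of the sample magnitude.
    """
    feats = []
    n = min(len(accel), len(gyro))
    for start in range(0, n - win + 1, step):
        row = []
        for sig in (accel[start:start + win], gyro[start:start + win]):
            mag = np.linalg.norm(sig, axis=1)  # per-sample magnitude
            row.extend([sig.mean(axis=0), sig.std(axis=0),
                        [mag.mean(), mag.std(), mag.max()]])
        feats.append(np.concatenate(row))
    return np.array(feats)
```

Each feature row would then be fed to a classifier; the paper's observation is that including the gyroscope-derived columns improves accuracy on dynamic activities.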
intelligent information hiding and multimedia signal processing | 2009
Kai-En Tsay; Yi-Leh Wu; Maw-Kae Hor; Cheng-Yuan Tang
Current research on personal photo management suffers from two problems: (1) lack of training data, and (2) no consolidated reference for classification. In this paper, we propose an automated annotation framework to address these problems. The framework is composed of three main components: the context information generator, the semantic concept detector, and the face recognition model. By assigning multiple labels to each photo, the framework makes the photo collection more structured and searchable. Our experimental results show that the techniques used in this framework are promising.
Multimedia Tools and Applications | 2014
Yi-Leh Wu; Chun-Tsai Yeh; Wei-Chih Hung; Cheng-Yuan Tang
In recent years, research on human-computer interaction has become popular, most of it using body movements, gestures, or eye gaze direction. Gaze estimation remains an active research domain. We propose an efficient method to estimate the eye gaze point. We first locate the eye region by modifying the characteristics of the Active Appearance Model (AAM). Then, employing a Support Vector Machine (SVM), we estimate five gaze directions through classification. The original 68 facial feature points in the AAM are modified into 36 eye feature points. According to the two-dimensional coordinates of the feature points, we classify the different directions of eye gaze. The modified 36 feature points describe the contour of the eyes, the iris size, the iris location, and the position of the pupils. In addition, camera resolution does not affect our method's ability to determine the line-of-sight direction accurately. The final results show independence among the classifications, fewer classification errors, and more accurate estimation of the gaze directions.
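The 36 eye feature points, flattened into a 72-dimensional vector, feed a five-way direction classifier. A minimal sketch using a nearest-centroid classifier as a stand-in for the paper's SVM (the point layout and labels here are invented for illustration):

```python
import numpy as np

def train_centroids(X, y):
    """X: (n_samples, 72) flattened eye feature points; y: direction labels.
    Returns the label set and one mean feature vector per direction."""
    labels = np.unique(y)
    return labels, np.array([X[y == c].mean(axis=0) for c in labels])

def predict(labels, centroids, x):
    """Assign x to the direction whose centroid is nearest."""
    return labels[np.argmin(np.linalg.norm(centroids - x, axis=1))]
```

An SVM would replace the centroid rule with max-margin hyperplanes, but the input representation (flattened 2-D point coordinates) is the same.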
Journal of Information Science and Engineering | 2008
Cheng-Yuan Tang; Yi-Leh Wu; Yueh-Hung Lai
In this paper, we present the use of two evolutionary algorithms to estimate fundamental matrices. We first propose a modification of the Hybrid Taguchi Genetic Algorithm (HTGA) that employs a single objective function, either geometric or algebraic distance, for optimization. We then propose to use a multi-objective optimization algorithm, Intelligent Multi-Objective Evolutionary Algorithm (IMOEA), to optimize both geometric and algebraic distances concurrently. Our experiments show that the proposed modified HTGA (MHTGA) and IMOEA produce more accurate estimation of fundamental matrices than the traditional Genetic Algorithm (GA) and the original HTGA do.
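The two objective functions optimized above are the algebraic and geometric epipolar distances. A minimal sketch of both residuals for a single correspondence, using the standard first-order (Sampson) approximation for the geometric term (the papers' exact cost aggregation over all correspondences is not reproduced):

```python
import numpy as np

def algebraic_distance(F, x1, x2):
    """Algebraic epipolar residual x2^T F x1 for homogeneous points."""
    return float(x2 @ F @ x1)

def sampson_distance(F, x1, x2):
    """First-order geometric (Sampson) approximation of the epipolar
    distance for homogeneous points x1, x2."""
    r = x2 @ F @ x1
    l1 = F @ x1    # epipolar line of x1 in image 2
    l2 = F.T @ x2  # epipolar line of x2 in image 1
    return float(r**2 / (l1[0]**2 + l1[1]**2 + l2[0]**2 + l2[1]**2))
```

A single-objective GA would minimize one of these sums over all correspondences; the multi-objective IMOEA searches for solutions that trade the two off on a Pareto front.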
international conference on machine learning and cybernetics | 2015
Wei-Tung Wang; Yi-Leh Wu; Cheng-Yuan Tang; Maw-Kae Hor
Clustering is a task that aims to group data objects into several groups. DBSCAN is a density-based clustering method; however, it requires two parameters that are difficult to determine. DBSCAN also has difficulty finding clusters when the density varies within the dataset. In this paper, we modify the original DBSCAN so that it can determine appropriate eps values according to the data distribution and can cluster data whose density varies. The main idea is to run DBSCAN with different eps and MinPts values. We also modify the calculation of MinPts so that DBSCAN achieves better clustering results. We conducted several experiments to evaluate the performance. The results suggest that our proposed DBSCAN can automatically decide appropriate eps and MinPts values and can detect clusters with different density levels.
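To make the role of the two parameters concrete, here is a minimal self-contained DBSCAN sketch (brute-force neighbor search; the paper's automatic eps/MinPts selection and multi-density handling are not reproduced):

```python
import numpy as np

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: eps is the neighborhood radius, min_pts the
    density threshold for a core point. Returns per-point cluster ids;
    -1 marks noise."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    neighbors = [np.flatnonzero(row <= eps) for row in dist]
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or len(neighbors[i]) < min_pts:
            continue  # already assigned, or not a core point
        labels[i] = cluster  # seed a new cluster from core point i
        queue = list(neighbors[i])
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if len(neighbors[j]) >= min_pts:
                    queue.extend(neighbors[j])  # expand through cores
        cluster += 1
    return labels
```

With a single global eps, a radius suited to a dense cluster fragments a sparse one; this is the failure mode the paper's per-density eps selection targets.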
Archive | 2013
Wei-Syun Lin; Yi-Leh Wu; Wei-Chih Hung; Cheng-Yuan Tang
We present a novel way to use the Scale Invariant Feature Transform (SIFT) on binary images. To the best of our knowledge, we are the first to employ SIFT on binary images for hand gesture recognition, providing more accurate results compared to traditional template matching approaches. Template matching approaches suffer many restrictions, such as rotation being limited to less than 15° and sensitivity to variation in scale. In contrast, our proposed approach is robust against rotation, scaling, and illumination conditions, and can recognize hand gestures in real time with only an off-the-shelf camera such as a webcam. The proposed approach employs SIFT features on binary images, k-means clustering to map keypoints into a unified-dimensional histogram vector (bag of words), and a Support Vector Machine (SVM) to classify the different hand gestures.
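The bag-of-words step maps a variable number of keypoint descriptors to a fixed-length vector by quantizing each descriptor against a k-means codebook. A minimal sketch of that step (assuming the codebook has already been learned; SIFT extraction itself is not reproduced):

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Quantize local descriptors against a k-means codebook and return
    a normalized bag-of-words histogram.

    descriptors: (n, d) local features (e.g. 128-D SIFT descriptors)
    codebook:    (k, d) cluster centers from k-means
    """
    # squared distance from every descriptor to every codeword
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)  # nearest codeword per descriptor
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

The resulting fixed-length histogram is what the SVM classifies, which is why images with different keypoint counts become comparable.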
Pattern Recognition Letters | 2011
Maw-Kae Hor; Cheng-Yuan Tang; Yi-Leh Wu; Kai-Hsuan Chan; Jeng-Jiun Tsai
This paper proposes robust refinement methods to improve the popular patch-based multi-view 3D reconstruction algorithm by Furukawa and Ponce (2008). Specifically, a new method is proposed to improve robustness by removing outliers with a filtering approach. In addition, this work proposes dividing the 3D points into several buckets, applying the sparse bundle adjustment (SBA) algorithm to each bucket individually, removing the outliers, and finally merging the results. The residuals are used to filter potential outliers, reducing the re-projection error that serves as the performance measure of the refinement. In our experiments, the original mean re-projection error is about 47.6; after applying the proposed methods, the mean error is reduced to 2.13.
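The residual-based filtering step can be sketched as follows: compare each 3D point's projection with its observed image point and drop points whose residual exceeds a threshold (the threshold value is an assumption; the bucketed SBA itself is not reproduced):

```python
import numpy as np

def filter_by_residual(projected, observed, threshold):
    """Keep only points whose re-projection residual is below
    `threshold` pixels.

    projected, observed: (n, 2) arrays of 2-D image points.
    Returns (inlier mask, mean residual of the kept points).
    """
    residuals = np.linalg.norm(projected - observed, axis=1)
    mask = residuals < threshold
    return mask, float(residuals[mask].mean())
```

In the paper's pipeline this filtering is applied per bucket between SBA passes, so gross outliers cannot dominate the least-squares refinement.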