
Publication


Featured research published by Maw-Kae Hor.


intelligent information hiding and multimedia signal processing | 2009

Real-Time Hand Detection and Tracking against Complex Background

Gang-Zeng Mao; Yi-Leh Wu; Maw-Kae Hor; Cheng-Yuan Tang

Most hand detection and tracking algorithms can only be applied against fairly simple and uniform backgrounds. We propose to combine a modified version of the object detection method of Viola and Jones with skin-color detection to perform hand detection and tracking against complex backgrounds. Our experimental results show that the proposed method is effective at near real-time speed (15 frames per second).
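
The combination described above can be illustrated with a minimal sketch: a cascade detector proposes candidate windows and a skin-color mask filters them. The cascade file name ("hand_cascade.xml"), the YCrCb thresholds, and the skin-ratio parameter are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch: Viola-Jones style cascade detection filtered by a skin-color
# mask. The cascade file and thresholds below are hypothetical placeholders.
import cv2
import numpy as np

cascade = cv2.CascadeClassifier("hand_cascade.xml")  # hypothetical model file

def detect_hands(frame, min_skin_ratio=0.3):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    candidates = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    # Skin mask in YCrCb space; thresholds are illustrative, not the paper's.
    ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

    hands = []
    for (x, y, w, h) in candidates:
        patch = skin[y:y + h, x:x + w]
        # Keep a detection only if enough of the window is skin-colored.
        if patch.size and np.count_nonzero(patch) / patch.size >= min_skin_ratio:
            hands.append((x, y, w, h))
    return hands
```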


international conference on machine learning and cybernetics | 2008

Modified sift descriptor for image matching under interference

Cheng-Yuan Tang; Yi-Leh Wu; Maw-Kae Hor; Wen-Hung Wang

There remain many difficult problems in computer vision research, such as object recognition, three-dimensional reconstruction, and object tracking, and solving them often relies on image matching. The scale-invariant feature transform (SIFT) algorithm has been widely used for image matching. The SIFT algorithm can successfully extract the most descriptive feature points from input images taken from different viewpoints. However, the performance of the original SIFT algorithm degrades under the influence of noise. We propose to modify the SIFT algorithm to produce better invariant feature points for image matching under noise. We also propose to employ the Earth mover's distance (EMD) as the measure of similarity between two descriptors. We present extensive experimental results to demonstrate the performance of the proposed methods for image matching under noise.
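
To make the EMD-based matching idea concrete, here is a minimal sketch that extracts standard SIFT descriptors and compares them with a 1-D Earth mover's distance, treating each 128-D descriptor as a histogram over its bins. The descriptor modification proposed in the paper is not reproduced, and the matching threshold is an illustrative assumption.

```python
# Minimal sketch: SIFT descriptors compared with an EMD-style similarity
# instead of Euclidean distance (the paper's modified descriptor is omitted).
import cv2
import numpy as np
from scipy.stats import wasserstein_distance

def sift_descriptors(image_path):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()                     # requires OpenCV >= 4.4
    keypoints, descriptors = sift.detectAndCompute(img, None)
    return keypoints, descriptors

def emd_distance(d1, d2):
    bins = np.arange(d1.shape[0])
    # 1-D EMD between two descriptors interpreted as histograms over bins.
    return wasserstein_distance(bins, bins, u_weights=d1 + 1e-9, v_weights=d2 + 1e-9)

def match(desc_a, desc_b, threshold=2.0):        # threshold is illustrative
    matches = []
    for i, da in enumerate(desc_a):
        dists = [emd_distance(da, db) for db in desc_b]
        j = int(np.argmin(dists))
        if dists[j] < threshold:
            matches.append((i, j))
    return matches
```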


Expert Systems With Applications | 2011

Feature selection using genetic algorithm and cluster validation

Yi-Leh Wu; Cheng-Yuan Tang; Maw-Kae Hor; Pei-Fen Wu

Feature selection plays an important role in image retrieval systems; a better selection of features usually results in higher retrieval accuracy. This work selects the best feature set from a total of 78 low-level image features, including regional, color, and texture features, using the genetic algorithm (GA). However, the GA is known to be slow to converge. In this work we propose two directions to improve the convergence time of the GA. First, we employ the Taguchi method to reduce the number of offspring that must be tested in every generation of the GA. Second, we propose to use an alternative measure, Hubert's Γ statistic, to evaluate the fitness of each offspring instead of evaluating the retrieval accuracy directly. The experimental results show that the proposed techniques improve GA-based feature selection in both time and accuracy.
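
The core idea, scoring a binary feature mask with a cluster-validity statistic instead of retrieval accuracy, can be sketched as below. The Taguchi-method offspring reduction is not reproduced, and the GA operators, population size, and Hubert's Γ computation (Pearson correlation between pairwise distances and a same/different-cluster indicator) are simplified assumptions.

```python
# Minimal sketch: GA feature selection scored with a Hubert's Gamma style
# cluster-validity statistic (a simplification of the paper's approach).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr
from sklearn.cluster import KMeans

def hubert_gamma(X, n_clusters=3):
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
    dists = pdist(X)                                      # pairwise distances
    # 0 if two points share a cluster, 1 otherwise.
    diff = pdist(labels.reshape(-1, 1).astype(float), metric="hamming")
    return pearsonr(dists, diff)[0]                       # normalized Gamma

def ga_select(X, n_features, pop=20, gens=30, rng=np.random.default_rng(0)):
    population = rng.integers(0, 2, size=(pop, n_features)).astype(bool)
    for _ in range(gens):
        scores = np.array([hubert_gamma(X[:, m]) if m.any() else -1.0
                           for m in population])
        parents = population[np.argsort(scores)[::-1][:pop // 2]]
        # One-point crossover plus bit-flip mutation on the elite half.
        children = parents.copy()
        cut = rng.integers(1, n_features)
        children[:, cut:] = parents[::-1, cut:]
        children ^= rng.random(children.shape) < 0.01
        population = np.vstack([parents, children])
    final = [hubert_gamma(X[:, m]) if m.any() else -1.0 for m in population]
    return population[int(np.argmax(final))]              # best feature mask
```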


international conference on machine learning and cybernetics | 2014

Activity Recognition with sensors on mobile devices

Wei-Chih Hung; Fan Shen; Yi-Leh Wu; Maw-Kae Hor; Cheng-Yuan Tang

Recently, activity recognition (AR) has become a popular research topic and has gained attention because of the increasing availability of sensors in consumer products, such as GPS, vision, audio, light, temperature, direction, and acceleration sensors. This variety of sensors creates many new opportunities for data mining applications. This paper proposes a mobile-phone-based system that employs accelerometer and gyroscope signals for AR. To evaluate the proposed system, we employ a data set in which 30 volunteers performed daily activities such as walking, lying, walking upstairs, sitting, and standing. The results show that the features extracted from the gyroscope enhance the classification accuracy for dynamic activities such as walking and walking upstairs. A comparison study shows that the recognition accuracies of the proposed framework, using various classification algorithms, are higher than those of previous works.
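
A minimal sketch of the sensor-fusion pipeline follows: window the combined accelerometer and gyroscope streams, extract simple per-axis statistics, and feed them to an off-the-shelf classifier. The data layout, window sizes, feature set, and classifier choice are assumptions for illustration, not the paper's exact design.

```python
# Minimal sketch: windowed statistical features from a 6-axis IMU stream
# (acc x/y/z + gyro x/y/z) fed to a generic classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(signal, window=128, step=64):
    """signal: (n_samples, 6) array of accelerometer and gyroscope axes."""
    feats = []
    for start in range(0, len(signal) - window + 1, step):
        w = signal[start:start + window]
        # Mean, standard deviation, and mean absolute value per axis.
        feats.append(np.concatenate([w.mean(0), w.std(0), np.abs(w).mean(0)]))
    return np.asarray(feats)

# Hypothetical training call with pre-segmented data and activity labels y:
# X = window_features(raw_imu)                     # (n_windows, 18)
# clf = RandomForestClassifier(n_estimators=200).fit(X, y)
# predictions = clf.predict(window_features(new_imu))
```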


intelligent information hiding and multimedia signal processing | 2009

Personal Photo Organizer Based on Automated Annotation Framework

Kai-En Tsay; Yi-Leh Wu; Maw-Kae Hor; Cheng-Yuan Tang

Current research on personal photo management suffers from two problems: (1) a lack of training data, and (2) no consolidated reference for classification. In this paper, we propose an automated annotation framework to address these problems. The framework is composed of three main components: a context information generator, a semantic concept detector, and a face recognition model. By assigning multiple labels to each photo, the framework makes the photo collection more structured and searchable. Our experimental results show that the techniques used in this framework are promising.
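
The three-component architecture can be outlined with a small sketch in which each component contributes labels and the framework unions them into a multi-label annotation. All component implementations below are hypothetical stand-ins, not the paper's models.

```python
# Minimal sketch of the three-component annotation pipeline; the component
# bodies are placeholder stubs, not the paper's detectors.
from dataclasses import dataclass, field

@dataclass
class Photo:
    path: str
    exif: dict
    labels: set = field(default_factory=set)

def context_labels(photo):            # e.g., time/place inferred from EXIF
    return {photo.exif.get("place", "unknown-place")}

def concept_labels(photo):            # e.g., output of a scene classifier
    return {"outdoor"}                # placeholder prediction

def face_labels(photo):               # e.g., names from a face recognizer
    return {"person:alice"}           # placeholder prediction

def annotate(photo):
    # Multi-label assignment: union of the three components' outputs.
    photo.labels |= context_labels(photo) | concept_labels(photo) | face_labels(photo)
    return photo
```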


international conference on machine learning and cybernetics | 2015

Adaptive density-based spatial clustering of applications with noise (DBSCAN) according to data

Wei-Tung Wang; Yi-Leh Wu; Cheng-Yuan Tang; Maw-Kae Hor

Clustering aims to group data objects into several groups. DBSCAN is a density-based clustering method, but it requires two parameters that are hard to set, and it has difficulty finding clusters when the density varies within the dataset. In this paper, we modify the original DBSCAN so that it can determine appropriate eps values according to the data distribution and can cluster data whose density varies. The main idea is to run DBSCAN with different eps and MinPts values. We also modify the calculation of MinPts so that DBSCAN achieves better clustering results. We conducted several experiments to evaluate the performance. The results suggest that the proposed DBSCAN can automatically decide appropriate eps and MinPts values and can detect clusters with different density levels.
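
A minimal sketch of the adaptive idea: estimate eps from the k-nearest-neighbour distances, cluster, then re-run DBSCAN with a relaxed eps on the points still labelled as noise so that lower-density clusters can emerge. The eps/MinPts selection rules below (median k-distance, fixed growth factor) are simplifying assumptions, not the paper's exact rules.

```python
# Minimal sketch: multi-pass DBSCAN with a data-driven eps estimate.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neighbors import NearestNeighbors

def estimate_eps(X, min_pts):
    dists, _ = NearestNeighbors(n_neighbors=min_pts).fit(X).kneighbors(X)
    return float(np.median(dists[:, -1]))           # median k-distance

def adaptive_dbscan(X, min_pts=5, rounds=3, growth=1.5):
    labels = np.full(len(X), -1)
    next_label, eps = 0, estimate_eps(X, min_pts)
    for _ in range(rounds):
        noise = labels == -1
        if noise.sum() < min_pts:
            break
        sub = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(X[noise])
        clustered = sub != -1
        idx = np.flatnonzero(noise)[clustered]
        labels[idx] = sub[clustered] + next_label
        next_label = labels.max() + 1
        eps *= growth                                # relax eps for sparser regions
    return labels
```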


Pattern Recognition Letters | 2011

Robust refinement methods for camera calibration and 3D reconstruction from multiple images

Maw-Kae Hor; Cheng-Yuan Tang; Yi-Leh Wu; Kai-Hsuan Chan; Jeng-Jiun Tsai

This paper proposes robust refinement methods to improve the popular patch-based multi-view 3D reconstruction algorithm of Furukawa and Ponce (2008). Specifically, a new method is proposed to improve robustness by removing outliers with a filtering approach. In addition, this work proposes dividing the 3D points into several buckets, applying the sparse bundle adjustment (SBA) algorithm to each bucket individually, removing the outliers, and finally merging the results. The residuals are used to filter potential outliers and thereby reduce the re-projection error, which serves as the performance measure of the refinement. In our experiments, the original mean re-projection error is about 47.6; after applying the proposed methods, the mean error is reduced to 2.13.
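
The residual-based filtering step alone can be sketched as follows: compute re-projection residuals for each 3D point and drop points whose residual exceeds a robust threshold before re-running bundle adjustment. The bucketing and the SBA call are assumed to be provided elsewhere, `project` is a hypothetical pinhole projection, and the median-plus-MAD threshold is an illustrative choice.

```python
# Minimal sketch: robust residual-based outlier filtering before refinement.
import numpy as np

def reprojection_residuals(points_3d, observations, project):
    """observations: (n, 2) observed pixels; project: hypothetical 3-D -> 2-D map."""
    projected = np.array([project(p) for p in points_3d])
    return np.linalg.norm(projected - observations, axis=1)

def filter_outliers(points_3d, observations, project, k=3.0):
    res = reprojection_residuals(points_3d, observations, project)
    # Median + k * MAD keeps the threshold insensitive to the outliers themselves.
    thresh = np.median(res) + k * np.median(np.abs(res - np.median(res)))
    keep = res <= thresh
    return points_3d[keep], observations[keep]
```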


EURASIP Journal on Advances in Signal Processing | 2010

Automatic image interpolation using homography

Yi-Leh Wu; Cheng-Yuan Tang; Maw-Kae Hor; Chi-Tsung Liu

While taking photographs, we often face the problem that unwanted foreground objects (e.g., vehicles, signs, and pedestrians) occlude the main subject(s). We propose to apply image interpolation (also known as inpainting) techniques to remove unwanted objects from photographs and to automatically patch the vacancy after the unwanted objects are removed. When only a single image is given and too much information is lost after the unwanted objects are removed, the patching results are usually unsatisfactory. The proposed inpainting techniques employ homographic constraints to incorporate multiple images taken from different viewpoints. Our experimental results show that the proposed techniques effectively reduce the search for potential patches across multiple input images and select the best patches for the missing regions.
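
The homography-based patching idea can be sketched as below: align an auxiliary view to the target view with a RANSAC homography and copy pixels from the warped auxiliary image into the masked (removed-object) region. The feature detector, matcher, and RANSAC threshold are illustrative choices, not the paper's exact pipeline.

```python
# Minimal sketch: fill a masked region of the target image from a second view
# registered with a RANSAC-estimated homography.
import cv2
import numpy as np

def fill_from_other_view(target, auxiliary, mask):
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(cv2.cvtColor(target, cv2.COLOR_BGR2GRAY), None)
    k2, d2 = sift.detectAndCompute(cv2.cvtColor(auxiliary, cv2.COLOR_BGR2GRAY), None)

    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)
    src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Homography mapping the auxiliary view onto the target view.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    warped = cv2.warpPerspective(auxiliary, H, (target.shape[1], target.shape[0]))

    patched = target.copy()
    patched[mask > 0] = warped[mask > 0]      # copy only the missing region
    return patched
```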


Pattern Recognition Letters | 2013

Robust trifocal tensor constraints for structure from motion estimation

Kai-Hsuan Chan; Cheng-Yuan Tang; Maw-Kae Hor; Yi-Leh Wu

Accurate camera parameter estimation is important in multi-view stereo. In this paper, we use three-view relations, specifically the trifocal tensor, to improve Bundler, a popular structure-from-motion (SfM) system, in estimating accurate camera parameters. We propose a novel method, Robust Orthogonal Particle Swarm Optimization (ROPSO), to estimate a robust and accurate trifocal tensor. In ROPSO, we formulate trifocal tensor estimation as a global optimization problem and use particle swarm optimization (PSO) for the parameter search. An orthogonal array is used to select representative initial particles in the PSO for more stable results. In the experiments, we use simulated and real ground-truth data for statistical analysis. The experimental results show that the proposed ROPSO estimates the trifocal tensor more accurately than traditional methods and has a higher probability of finding the optimal solution. With the trifocal tensor estimated by the proposed method, the SfM estimation errors are effectively reduced: the average re-projection error drops from 21.5 pixels to less than 1 pixel.
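
The PSO search that ROPSO builds on can be sketched generically: particles are candidate parameter vectors and the cost is a user-supplied reprojection error. The trifocal tensor parameterization, the orthogonal-array initialization, and the PSO coefficients below are all simplifying assumptions, not the paper's implementation.

```python
# Minimal sketch: a generic PSO loop that could search over a 27-entry
# trifocal tensor parameterization given a reprojection-error cost.
import numpy as np

def pso_minimize(cost, dim, n_particles=30, iters=200, rng=np.random.default_rng(0)):
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))       # particle positions
    v = np.zeros_like(x)                                  # velocities
    pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pbest_cost)].copy()

    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        c = np.array([cost(p) for p in x])
        improved = c < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], c[improved]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest

# Usage with a hypothetical cost: best = pso_minimize(lambda t: np.sum(t ** 2), dim=27)
```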


visual communications and image processing | 2011

Robust Orthogonal Particle Swarm Optimization for estimating the fundamental matrix

Kai-Hsuan Chan; Cheng-Yuan Tang; Yi-Leh Wu; Maw-Kae Hor

In this paper, we present a novel method that combines particle swarm optimization (PSO) with the least median of squares (LMedS) and an orthogonal array to improve the estimation of the fundamental matrix. We first translate the fundamental matrix estimation problem into a PSO problem, and then use the orthogonal array and LMedS to make the initial particles more robust. Our experiments show that the proposed ROPSO produces more accurate estimates of the fundamental matrix than the traditional LMedS and has a higher probability of finding the optimal solution.
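
The LMedS criterion being optimized can be sketched as the median of the squared Sampson (epipolar) distances of a candidate fundamental matrix over all correspondences; a PSO loop like the one sketched earlier could minimize it. OpenCV's built-in LMedS solver serves as a baseline for comparison; the rest of the ROPSO machinery is not reproduced here.

```python
# Minimal sketch: LMedS-style cost for a candidate fundamental matrix.
import cv2
import numpy as np

def lmeds_sampson_cost(F, pts1, pts2):
    """pts1, pts2: (n, 2) corresponding points; F: 3x3 candidate matrix."""
    x1 = np.hstack([pts1, np.ones((len(pts1), 1))])
    x2 = np.hstack([pts2, np.ones((len(pts2), 1))])
    Fx1, Ftx2 = x1 @ F.T, x2 @ F                  # epipolar lines in each image
    num = np.sum(x2 * Fx1, axis=1) ** 2
    den = Fx1[:, 0] ** 2 + Fx1[:, 1] ** 2 + Ftx2[:, 0] ** 2 + Ftx2[:, 1] ** 2
    return float(np.median(num / den))            # LMedS: median, not mean

# Baseline estimate for comparison (OpenCV's built-in LMedS solver):
# F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_LMEDS)
```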

Collaboration


Dive into Maw-Kae Hor's collaborations.

Top Co-Authors

Yi-Leh Wu (National Taiwan University of Science and Technology)
Kai-Hsuan Chan (National Chengchi University)
Hungmin Hsu (National Chengchi University)
Wei-Chih Hung (National Taiwan University of Science and Technology)
Yifan Peng (National Chengchi University)
Fan Shen (National Taiwan University of Science and Technology)
Gang-Zeng Mao (National Taiwan University of Science and Technology)
Hung-Wei Chen (National Taiwan University of Science and Technology)