Nelson Hon Ching Yung
University of Hong Kong
Publications
Featured research published by Nelson Hon Ching Yung.
International Conference on Pattern Recognition | 2004
Xiao-Chen He; Nelson Hon Ching Yung
Corners play an important role in object identification methods used in machine vision and image processing systems. Single-scale feature detectors struggle to detect both fine and coarse features at the same time, whereas multi-scale feature detection is inherently able to solve this problem. This paper proposes an improved multi-scale corner detector with a dynamic region of support, based on the curvature scale space (CSS) technique. The proposed detector first uses an adaptive local curvature threshold instead of the single global threshold of the original and enhanced CSS methods. Second, the angles of corner candidates are checked in a dynamic region of support to eliminate falsely detected corners. The proposed method has been evaluated over a number of images and compared with some popular corner detectors. The results show that it offers a robust and effective solution for images containing features of widely different sizes.
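The adaptive local threshold can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the function name, window size, and constant `c` are my own, and a real CSS detector computes curvature on Gaussian-smoothed contours first. A point survives only if its curvature exceeds a multiple of the mean curvature in its local neighbourhood, rather than a single global cut-off:

```python
def adaptive_corner_candidates(curvature, window=3, c=1.5):
    """Keep indices whose curvature exceeds c times the mean curvature
    in a local window (a sketch of an adaptive local threshold; the
    parameter values here are illustrative, not from the paper)."""
    n = len(curvature)
    candidates = []
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        local_mean = sum(curvature[lo:hi]) / (hi - lo)
        if curvature[i] > c * local_mean:
            candidates.append(i)
    return candidates

# A sharp curvature peak at index 3 stands out against its neighbourhood:
print(adaptive_corner_candidates([0.1, 0.1, 0.2, 2.0, 0.2, 0.1, 0.1]))  # [3]
```

A single global threshold low enough to keep a fine corner would also keep noise elsewhere on the contour; normalising by the local mean adapts the cut-off per region, which is the point of the adaptive scheme.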
Optical Engineering | 2008
Xiaochen He; Nelson Hon Ching Yung
This paper proposes a curvature-based corner detector that detects both fine and coarse features accurately at low computational cost. First, it extracts contours from a Canny edge map. Second, it computes the absolute value of curvature of each point on a contour at a low scale and regards local maxima of absolute curvature as initial corner candidates. Third, it uses an adaptive curvature threshold to remove round corners from the initial list. Finally, false corners due to quantization noise and trivial details are eliminated by evaluating the angles of corner candidates in a dynamic region of support. The proposed detector was compared with popular corner detectors on planar curves and gray-level images, respectively, in a subjective manner as well as with a feature correspondence test. Results reveal that the proposed detector performs extremely well in both fields.
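The curvature step in the pipeline above can be illustrated with a discrete turning-angle estimate on a polyline contour. This is a simplified sketch under my own naming; the paper evaluates curvature at a low scale after smoothing, which this omits:

```python
import math

def discrete_curvature(contour, k=1):
    """Absolute turning angle at each interior point of a polyline,
    a common discrete proxy for contour curvature. Local maxima of
    this quantity play the role of initial corner candidates."""
    kappa = []
    for i in range(k, len(contour) - k):
        (x0, y0), (x1, y1), (x2, y2) = contour[i - k], contour[i], contour[i + k]
        a1 = math.atan2(y1 - y0, x1 - x0)   # incoming direction
        a2 = math.atan2(y2 - y1, x2 - x1)   # outgoing direction
        d = abs(a2 - a1)
        kappa.append(min(d, 2 * math.pi - d))  # wrap to [0, pi]
    return kappa

# An L-shaped contour: the 90-degree bend produces the single maximum.
kappa = discrete_curvature([(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)])
```

On the straight runs the turning angle is zero; at the bend it is pi/2, so the local-maximum rule picks out exactly the corner.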
Image and Vision Computing | 2011
Henry Y. T. Ngan; Grantham K. H. Pang; Nelson Hon Ching Yung
This paper provides a review of automated fabric defect detection methods developed in recent years. Fabric defect detection, as a popular topic in automation, is a necessary and essential step of quality control in the textile manufacturing industry. Broadly categorized, the major group of these methods is non-motif-based while a minor group is motif-based. Non-motif-based approaches are conventional, whereas the motif-based approach is novel in using the motif as its basic manipulation unit. Compared with previously published review papers on fabric inspection, this paper first offers an up-to-date survey of different defect detection methods and describes their characteristics, strengths, and weaknesses. Second, it employs a wider classification of methods, dividing them into seven approaches (statistical, spectral, model-based, learning, structural, hybrid, and motif-based), and performs a comparative study across these methods. Third, it presents a qualitative analysis accompanied by results, including the detection success rate of every method reviewed. Lastly, insights, synergy, and future research directions are discussed. This paper should benefit researchers and practitioners alike in the image processing and computer vision fields in understanding the characteristics of the different defect detection approaches.
Systems, Man, and Cybernetics | 2003
Cang Ye; Nelson Hon Ching Yung; Danwei Wang
Fuzzy logic systems are promising for efficient obstacle avoidance. However, it is difficult to maintain the correctness, consistency, and completeness of a fuzzy rule base constructed and tuned by a human expert. A reinforcement learning method is capable of learning the fuzzy rules automatically. However, it incurs a heavy learning phase and may result in an insufficiently learned rule base due to the curse of dimensionality. In this paper, we propose a neural fuzzy system with mixed coarse learning and fine learning phases. In the first phase, a supervised learning method is used to determine the membership functions for the input and output variables simultaneously. After sufficient training, fine learning is applied, which employs a reinforcement learning algorithm to fine-tune the membership functions for the output variables. To ensure sufficient learning, a new learning method using a modification of Sutton and Barto's model is proposed to strengthen the exploration. Through this two-step tuning approach, the mobile robot is able to perform collision-free navigation. To deal with the difficulty of acquiring a large amount of training data with high consistency for supervised learning, we developed a virtual environment (VE) simulator, which is able to provide desktop virtual environment (DVE) and immersive virtual environment (IVE) visualization. By having a skilled human operator drive a mobile robot in the virtual environment (DVE/IVE), training data are readily obtained and used to train the neural fuzzy system.
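The coarse-then-fine idea can be caricatured on a single tunable output parameter. This is only a toy under my own names and numbers: the coarse phase imitates supervised learning from demonstrations, and the fine phase uses a greedy accept-if-better exploration rule as a much-simplified stand-in for the paper's reinforcement fine-tuning of the output membership functions:

```python
import random

def coarse_then_fine(targets, reward_fn, episodes=200, lr=0.5, sigma=0.2, seed=0):
    """Two-phase tuning of one parameter theta (illustrative sketch).

    Phase 1 (coarse, supervised): move theta toward demonstrated targets.
    Phase 2 (fine, exploratory): perturb theta and keep perturbations
    that increase the reward -- a greedy stand-in for reinforcement
    fine-tuning, not the Sutton-Barto update used in the paper.
    """
    rng = random.Random(seed)
    theta = 0.0
    for t in targets:                        # coarse supervised phase
        theta += lr * (t - theta)
    coarse = theta
    for _ in range(episodes):                # fine exploratory phase
        trial = theta + rng.gauss(0.0, sigma)
        if reward_fn(trial) > reward_fn(theta):
            theta = trial
    return coarse, theta

# Demonstrations cluster around 0.8, but the true optimum sits at 1.0;
# the fine phase closes the gap left by imperfect demonstrations.
coarse, fine = coarse_then_fine([0.7, 0.8, 0.9], lambda a: -abs(a - 1.0))
```

The division of labour mirrors the abstract: demonstrations get the parameters into the right region quickly, and exploration then refines them against the task's own reward.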
Systems, Man, and Cybernetics | 2000
Andrew H. S. Lai; Nelson Hon Ching Yung
This paper describes a novel lane detection algorithm for visual traffic surveillance applications under the auspices of intelligent transportation systems. Traditional lane detection methods for vehicle navigation typically use spatial masks to isolate instantaneous lane information from on-vehicle camera images. When surveillance is concerned, complete lane and multiple lane information is essential for tracking vehicles and monitoring lane change frequency from overhead cameras, where traditional methods become inadequate. The algorithm presented in this paper extracts complete multiple lane information by utilizing prominent orientation and length features of lane markings and curb structures to discriminate against other minor features. Essentially, edges are first extracted from the background of a traffic sequence, then thinned and approximated by straight lines. From the resulting set of straight lines, orientation and length discriminations are carried out three-dimensionally with the aid of two-dimensional (2-D) to three-dimensional (3-D) coordinate transformation and K-means clustering. By doing so, edges with strong orientation and length affinity are retained and clustered, while short and isolated edges are eliminated. Overall, the merits of this algorithm are as follows. First, it works well under practical visual surveillance conditions. Second, using K-means for clustering offers a robust approach. Third, the algorithm is efficient as it only requires one image frame to determine the road center lines. Fourth, it computes multiple lane information simultaneously. Fifth, the center lines determined are accurate enough for the intended application.
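The clustering step can be illustrated with a minimal 1-D k-means over segment orientations. This is a sketch under my own names (the paper clusters in 3-D after coordinate transformation, and combines orientation with length); it shows how edges with strong orientation affinity group together while outliers land in a separate cluster:

```python
def kmeans_1d(values, k=2, iters=20):
    """Minimal 1-D k-means: group scalar features (here, line-segment
    orientations in degrees) by affinity. Crude min/max init suits the
    two-cluster case used in this example."""
    centers = [min(values), max(values)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            j = min(range(k), key=lambda c: abs(v - centers[c]))
            groups[j].append(v)                      # nearest-center assignment
        centers = [sum(g) / len(g) if g else centers[j]
                   for j, g in enumerate(groups)]    # recompute means
    return centers, groups

# Near-parallel lane-marking segments (2-4 degrees) separate cleanly
# from clutter at a very different orientation (88-91 degrees):
centers, groups = kmeans_1d([2, 3, 4, 88, 89, 91])
```

Because lane markings in a rectified road plane share nearly the same direction, their orientations form a tight cluster; segments that do not join any tight cluster are the short, isolated edges the algorithm discards.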
IEEE Intelligent Transportation Systems | 2001
Andrew H. S. Lai; George S. K. Fung; Nelson Hon Ching Yung
This paper presents a vision-based dimension estimation method for vehicle type classification. Our method extracts moving vehicles from traffic image sequences and fits them with a simple deformable vehicle model. Using a set of coordinate mapping functions derived from a calibrated camera model, and relying on a shadow removal method, the vehicle's width, length, and height are estimated. Our experimental tests show that the modeling method is effective and that the estimation accuracy is sufficient for general vehicle type classification.
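The role of the calibrated mapping can be conveyed with a deliberately idealized case. Assume a pinhole camera looking straight down from a known height, so one scale factor maps pixels to metres on the road plane; this toy (names and geometry mine, far simpler than the paper's calibrated camera model) shows how image measurements become metric dimensions:

```python
def ground_length(p1_px, p2_px, focal_px, height_m):
    """Metric distance on the road plane between two image points, for
    an idealized downward-looking pinhole camera with focal length in
    pixels and camera height in metres. A toy stand-in for the
    coordinate mapping functions of a fully calibrated camera."""
    scale = height_m / focal_px                  # metres per pixel on the ground
    dx = (p2_px[0] - p1_px[0]) * scale
    dy = (p2_px[1] - p1_px[1]) * scale
    return (dx * dx + dy * dy) ** 0.5

# 400 px at focal length 1000 px, camera 10 m up -> 4.0 m, a plausible car length:
length_m = ground_length((100, 200), (500, 200), 1000, 10)
```

A real roadside camera views the scene obliquely, so the paper needs a full calibrated model (and shadow removal) rather than a single scale factor, but the principle of inverting the projection is the same.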
IEEE Transactions on Intelligent Transportation Systems | 2004
Clement Chun Cheong Pang; William Wai Leung Lam; Nelson Hon Ching Yung
This paper presents a novel method for resolving the occlusion of vehicles seen in a sequence of traffic images taken from a single roadside-mounted camera. Its concept is built upon a previously proposed vehicle-segmentation method, which is able to extract the vehicle shape from the background accurately, without the effect of shadows and other visual artifacts. Based on the segmented shape, and on the fact that the shape can be represented by a simple cubical model, we propose a two-step method: first, detect the curvature of the shape contour to generate a data set of the occluded vehicles and, second, decompose it into individual vehicle models using a vanishing point in three dimensions and the set of curvature points of the composite model. The proposed method has been tested on a number of monocular traffic-image sequences and found to detect the presence of occlusion correctly and to resolve most of the occlusion cases involving two vehicles. It fails only when the occlusion is very severe. Further analysis of vehicle dimensions also shows that the average estimation accuracies for vehicle width, length, and height are 94.78%, 94.09%, and 95.44%, respectively.
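One concrete curvature cue for occlusion is a concavity on the merged blob's contour: two convex vehicle shapes fused together create a reflex vertex where they meet. The sketch below (my own simplification, on a polygon rather than a pixel contour) finds such vertices with a cross-product test:

```python
def concave_vertices(polygon):
    """Indices of concave (reflex) vertices of a simple polygon given in
    counter-clockwise order. Deep concavities on a merged silhouette are
    the kind of contour-curvature cue that signals two occluding shapes
    (illustrative sketch, not the paper's detector)."""
    n = len(polygon)
    concave = []
    for i in range(n):
        x0, y0 = polygon[i - 1]          # previous vertex (wraps around)
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]    # next vertex
        cross = (x1 - x0) * (y2 - y1) - (y1 - y0) * (x2 - x1)
        if cross < 0:                    # clockwise turn inside a CCW polygon
            concave.append(i)
    return concave

# An L-shaped (CCW) blob: the inner corner at index 3 is the reflex vertex.
reflex = concave_vertices([(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)])
```

A convex silhouette yields no reflex vertices, so finding one on a vehicle blob is evidence that the blob is a composite of more than one object.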
Systems, Man, and Cybernetics | 1999
Nelson Hon Ching Yung; Cang Ye
In this paper, an alternative training approach to the EEM-based training method is presented and a fuzzy reactive navigation architecture is described. The new training method learns 270 times faster and incurs only 4% of the learning cost of the EEM method. It also offers very reliable convergence of learning, a very high proportion of learned rules (98.8%), and high adaptability. Using the rule base learned from the new method, the proposed fuzzy reactive navigator fuses the obstacle avoidance behaviour and goal seeking behaviour to determine its control actions, where adaptability is achieved with the aid of an environment evaluator. A comparison of this navigator using the rule bases obtained from the new training method and the EEM method shows that the new navigator guarantees a solution and that its solution is more acceptable.
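Behaviour fusion of the kind described can be sketched as a weighted blend of two steering commands, with the weight standing in for the environment evaluator. All names and the linear weighting are my own illustration, not the paper's fuzzy formulation:

```python
def fused_steering(avoid_cmd, goal_cmd, obstacle_distance, d_safe=1.0):
    """Blend obstacle-avoidance and goal-seeking steering commands
    (degrees). The weight plays the role of an environment evaluator:
    the closer the nearest obstacle, the more avoidance dominates.
    Linear weighting is an illustrative simplification of fuzzy fusion."""
    w = max(0.0, min(1.0, obstacle_distance / d_safe))  # 0 = danger, 1 = clear
    return w * goal_cmd + (1.0 - w) * avoid_cmd

# Clear path: follow the goal. Obstacle touching: pure avoidance.
# Halfway: a compromise between the two behaviours.
clear   = fused_steering(-30.0, 10.0, 2.0)   # 10.0
danger  = fused_steering(-30.0, 10.0, 0.0)   # -30.0
halfway = fused_steering(-30.0, 10.0, 0.5)   # -10.0
```

In the actual system the evaluator adapts this trade-off from the sensed environment rather than from a fixed linear ramp, but the structure of the fusion is the same: two behaviours propose actions and a context-dependent weight arbitrates.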
IEEE Transactions on Intelligent Transportation Systems | 2007
Clement Chun Cheong Pang; William Wai Leung Lam; Nelson Hon Ching Yung
This paper proposes a novel method for accurately counting the number of vehicles that are involved in multiple-vehicle occlusions, based on the resolvability of each occluded vehicle, as seen in a monocular traffic image sequence. Assuming that the occluded vehicles are segmented from the road background by a previously proposed vehicle segmentation method and that a deformable model is geometrically fitted onto the occluded vehicles, the proposed method first deduces the number of vertices per individual vehicle from the camera configuration. Second, a contour description model is utilized to describe the direction of the contour segments with respect to its vanishing points, from which individual contour descriptions and the vehicle count are determined. Third, it assigns a resolvability index to each occluded vehicle based on a resolvability model, from which each occluded vehicle model is resolved and the vehicle dimensions are measured. The proposed method has been tested on 267 sets of real-world monocular traffic images containing 3074 vehicles with multiple-vehicle occlusions and is found to be 100% accurate in calculating vehicle count, in comparison with human inspection. By comparing the estimated dimensions of the resolved generalized deformable model of the vehicle with the actual dimensions published by the manufacturers, the root-mean-square errors for width, length, and height estimation are found to be 48, 279, and 76 mm, respectively.
IEEE Transactions on Intelligent Transportation Systems | 2010
Lu Wang; Nelson Hon Ching Yung
The extraction of moving objects from their background is a challenging task in visual surveillance. As a single threshold often fails to resolve ambiguities and correctly segment the object, in this paper, we propose a new method that uses three thresholds to accurately classify pixels as foreground or background. These thresholds are adaptively determined by considering the distributions of differences between the input and background images and are used to generate three boundary sets. These boundary sets are then merged to produce a final boundary set that represents the boundaries of the moving objects. The merging step proceeds by first identifying boundary segment pairs that are significantly inconsistent. Then, for each inconsistent boundary segment pair, its associated curvature, edge response, and shadow index are used as criteria to evaluate the probable location of the true boundary. The resulting boundary is finally refined by estimating the width of the halo-like boundary and referring to the foreground edge map. Experimental results show that the proposed method consistently performs well under different illumination conditions, including indoor, outdoor, moderate, sunny, rainy, and dim cases. By comparing with a ground truth in each case, both the classification error rate and the displacement error indicate accurate detection, showing substantial improvement over other existing methods.
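The first step above, classifying difference values against three thresholds to induce three candidate foreground masks, can be sketched directly. Names are mine, and the paper derives the three thresholds adaptively from the difference distribution rather than taking them as given:

```python
def threshold_masks(diff, thresholds):
    """One binary foreground mask per threshold, from background-difference
    values. Each mask induces one of the three boundary sets the method
    then merges; low thresholds over-segment (halos, shadows) while high
    thresholds under-segment (holes in the object)."""
    return [[1 if d > t else 0 for d in diff] for t in thresholds]

# Four pixels with increasing difference from the background model:
masks = threshold_masks([2, 8, 15, 30], (5, 10, 20))
# masks[0] (loose) marks three pixels foreground; masks[2] (strict) only one.
```

The disagreement between the loose and strict masks is exactly where a single threshold is ambiguous, which is why the method reasons about inconsistent boundary pairs using curvature, edge response, and a shadow index instead of picking one cut-off.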