
Publication


Featured research published by Kin-Man Lam.


Pattern Recognition | 1996

Locating and extracting the eye in human face images

Kin-Man Lam; Hong Yan

Facial feature extraction is an important step in automated visual interpretation and human face recognition. Among the facial features, the eye plays the most important part in the recognition process. The deformable template can be used in extracting the eye boundaries. However, the weaknesses of the deformable template are that the processing time is lengthy and that its success relies on the initial position of the template. In this paper, the head boundary is first located in a head-and-shoulders image. The approximate positions of the eyes are estimated by means of average anthropometric measures. Corners, the salient features of the eyes, are detected and used to set the initial parameters of the eye templates. The corner detection scheme introduced in this paper can provide accurate information about the corners. Based on the corner positions, we can accurately locate the templates in relation to the eye images and greatly reduce the processing time for the templates. The performance of the deformable template is assessed with and without using the information on corner positions. Experiments show that a saving in execution time of about 40% on average and a better eye boundary representation can be achieved by using the corner information.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1998

An analytic-to-holistic approach for face recognition based on a single frontal view

Kin-Man Lam; Hong Yan

We propose an analytic-to-holistic approach which can identify faces at different perspective variations. The database for the test consists of 40 frontal-view faces. The first step is to locate 15 feature points on a face. A head model is proposed, and the rotation of the face can be estimated using geometrical measurements. The positions of the feature points are adjusted so that their corresponding positions for the frontal view are approximated. These feature points are then compared with the feature points of the faces in a database using a similarity transform. In the second step, we set up windows for the eyes, nose, and mouth. These feature windows are compared with those in the database by correlation. Results show that this approach can achieve a similar level of performance from different viewing directions of a face. Under different perspective variations, the overall recognition rates are over 84 percent and 96 percent for the first and the first three likely matched faces, respectively.
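The similarity-transform comparison of feature points described above can be illustrated with a classical least-squares (Procrustes) fit of one 2-D point set onto another. The point set and parameters below are purely illustrative, not the paper's 15 facial feature points.

```python
import numpy as np

def similarity_align(src, dst):
    """Least-squares similarity transform (scale, rotation, translation)
    mapping 2-D points src onto dst (classical Procrustes fit)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    # Optimal rotation via SVD of the cross-covariance matrix.
    u, s, vt = np.linalg.svd(src_c.T @ dst_c)
    r = (u @ vt).T
    if np.linalg.det(r) < 0:          # guard against a reflection
        vt[-1] *= -1
        s[-1] *= -1
        r = (u @ vt).T
    scale = s.sum() / (src_c ** 2).sum()
    t = dst.mean(axis=0) - scale * (r @ src.mean(axis=0))
    return scale, r, t

# Hypothetical feature points: recover the transform from a rotated,
# scaled and shifted copy of the same points.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
theta = np.deg2rad(10)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
moved = 1.5 * pts @ rot.T + np.array([2.0, -1.0])
scale, r, t = similarity_align(pts, moved)
aligned = scale * pts @ r.T + t
residual = np.abs(aligned - moved).max()
```

Because `moved` is an exact similarity transform of `pts`, the fit recovers it to machine precision; with noisy feature points the same routine gives the least-squares alignment.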


Pattern Recognition | 2005

Face recognition under varying illumination based on a 2D face shape model

Xudong Xie; Kin-Man Lam

This paper proposes a novel illumination compensation algorithm, which can compensate for the uneven illuminations on human faces and reconstruct face images in normal lighting conditions. A simple yet effective local contrast enhancement method, namely block-based histogram equalization (BHE), is first proposed. The resulting image processed using BHE is then compared with the original face image processed using histogram equalization (HE) to estimate the category of its light source. In our scheme, we divide the light source for a human face into 65 categories. Based on the category identified, a corresponding lighting compensation model is used to reconstruct an image that will visually be under normal illumination. In order to eliminate the influence of uneven illumination while retaining the shape information about a human face, a 2D face shape model is used. Experimental results show that, with the use of principal component analysis for face recognition, the recognition rate can be improved by 53.3% to 62.6% when our proposed algorithm for lighting compensation is used.
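The BHE step can be sketched as ordinary histogram equalization applied independently to image blocks. The block size below is an illustrative assumption, and the paper's subsequent comparison with globally equalized images for light-source classification (the 65 categories) is not reproduced.

```python
import numpy as np

def hist_equalize(block):
    """Plain histogram equalization of an 8-bit image patch."""
    hist = np.bincount(block.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = cdf / cdf[-1]                     # normalize to [0, 1]
    return (cdf[block] * 255).astype(np.uint8)

def block_hist_equalize(img, block=32):
    """Apply histogram equalization independently to non-overlapping
    blocks (the block size is an illustrative choice)."""
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = img[y:y + block, x:x + block]
            out[y:y + block, x:x + block] = hist_equalize(patch)
    return out

# Toy example: a horizontal intensity ramp gets its contrast stretched
# within each block rather than globally.
img = np.tile(np.arange(64, dtype=np.uint8), (64, 1))
result = block_hist_equalize(img)
```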


Pattern Recognition | 2001

An efficient algorithm for human face detection and facial feature extraction under different conditions

Kwok-Wai Wong; Kin-Man Lam; Wan-Chi Siu

In this paper, an efficient algorithm for human face detection and facial feature extraction is devised. Firstly, the locations of the face regions are detected using a genetic algorithm and the eigenface technique. The genetic algorithm is applied to search for possible face regions in an image, while the eigenface technique is used to determine the fitness of the regions. As the genetic algorithm is computationally intensive, the search space is reduced and limited to the eye regions so that the required computation time is greatly reduced. Possible face candidates are then further verified by measuring their symmetry and determining the existence of the different facial features. Furthermore, in order to improve the reliability of detection, the effects of lighting and face orientation are also considered and addressed.


Pattern Recognition Letters | 2006

An efficient illumination normalization method for face recognition

Xudong Xie; Kin-Man Lam

In this paper, an efficient representation method insensitive to varying illumination is proposed for human face recognition. Theoretical analysis based on the human face model and the illumination model shows that the effects of varying lighting on a human face image can be modeled by a sequence of multiplicative and additive noises. Instead of computing these noises, which is very difficult for real applications, we aim to reduce or even remove their effect. In our method, a local normalization technique is applied to an image, which can effectively and efficiently eliminate the effect of uneven illuminations while keeping the local statistical properties of the processed image the same as in the corresponding image under normal lighting condition. After processing, the image under varying illumination will have similar pixel values to the corresponding image that is under normal lighting condition. Then, the processed images are used for face recognition. The proposed algorithm has been evaluated based on the Yale database, the AR database, the PIE database, the YaleB database and the combined database by using different face recognition methods such as PCA, ICA and Gabor wavelets. Consistent and promising results were obtained, which show that our method can effectively eliminate the effect of uneven illumination and greatly improve the recognition results.
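The local normalization idea can be sketched as zero-mean, unit-variance scaling over a sliding window, which removes slowly varying lighting while keeping local texture. The window size and the brute-force box filtering below are illustrative assumptions, not the paper's exact setting.

```python
import numpy as np

def local_normalize(img, win=7, eps=1e-6):
    """Zero-mean, unit-variance normalization over a sliding window
    (window size is an illustrative choice)."""
    img = img.astype(np.float64)
    pad = win // 2
    padded = np.pad(img, pad, mode='reflect')
    h, w = img.shape
    mean = np.empty_like(img)
    sq_mean = np.empty_like(img)
    # Brute-force local mean and mean-square; fine for a demo image.
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + win, x:x + win]
            mean[y, x] = patch.mean()
            sq_mean[y, x] = (patch ** 2).mean()
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
    return (img - mean) / (std + eps)

# Toy example: random texture plus a left-to-right lighting ramp;
# after normalization the ramp is largely gone.
rng = np.random.default_rng(0)
texture = rng.uniform(0, 50, (32, 32))
lighting = np.linspace(20, 200, 32)[None, :]
img = texture + lighting
norm = local_normalize(img)
```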


Pattern Recognition Letters | 2004

Optimal sampling of Gabor features for face recognition

Dang-Hui Liu; Kin-Man Lam; Lansun Shen

The Gabor feature is effective for facial image representation, while linear discriminant analysis (LDA) can extract the most discriminant information from the Gabor feature for face recognition. In practice, the dimension of a Gabor feature vector is so high that the computation and memory requirements are prohibitively large. To reduce the dimension, one simple scheme is to extract the Gabor feature at sub-sampled positions, usually in a regular grid, in a face region. However, this scheme is not effective enough and degrades the recognition performance. In this paper, we propose a method to determine the optimal position for extracting the Gabor feature such that the number of feature points is as small as possible while the representation capability of the points is as high as possible. The subsampled positions of the feature points are determined by a mask generated from a set of training images by means of principal component analysis (PCA). With the feature vector of reduced dimension, a subspace LDA is applied for face recognition, i.e., PCA is first used to reduce the dimension of the Gabor feature vectors generated from the subsampled positions, and then a common LDA is applied. Experimental results show that the new sampling method is simple, and effective for both dimension reduction and image representation. The recognition rate based on our proposed scheme is also higher than that achieved using a regular sampling method in a face region.
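The subspace LDA pipeline (PCA for dimension reduction, then a common LDA) can be sketched with scikit-learn. The synthetic class-clustered vectors below merely stand in for Gabor features sampled at the mask positions; the dimensions and class counts are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Toy stand-in for sampled Gabor features: 3 "subjects",
# 20 samples each, 100-dimensional feature vectors.
rng = np.random.default_rng(0)
centers = rng.normal(0, 5, (3, 100))
X = np.vstack([c + rng.normal(0, 1, (20, 100)) for c in centers])
y = np.repeat([0, 1, 2], 20)

# Subspace LDA: PCA first reduces the dimension, then LDA finds the
# most discriminant directions inside the PCA subspace.
pca = PCA(n_components=30)
X_pca = pca.fit_transform(X)
lda = LinearDiscriminantAnalysis(n_components=2)   # at most classes - 1
X_lda = lda.fit_transform(X_pca, y)
acc = lda.score(X_pca, y)
```

Running PCA first keeps the within-class scatter matrix well conditioned, which is the usual motivation for subspace LDA when the raw feature dimension exceeds the number of training samples.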


Pattern Recognition | 2003

Extraction of the Euclidean skeleton based on a connectivity criterion

Wai-Pak Choi; Kin-Man Lam; Wan-Chi Siu

The skeleton is essential for general shape representation. The commonly required properties of a skeletonization algorithm are that the extracted skeleton should be accurate; robust to noise, position and rotation; able to reconstruct the original object; and able to produce a connected skeleton in order to preserve its topological and hierarchical properties. However, the use of a discrete image presents many problems that may influence the extraction of the skeleton. Moreover, most methods are memory-intensive and computationally intensive, and require a complex data structure. In this paper, we propose a fast, efficient and accurate skeletonization method for the extraction of a well-connected Euclidean skeleton based on a signed sequential Euclidean distance map. A connectivity criterion is proposed, which can be used to determine independently whether a given pixel is a skeleton point. The criterion is based on a set of point pairs along the object boundary, which are the nearest contour points to the pixel under consideration and its 8 neighbors. Our proposed method generates a connected Euclidean skeleton of single-pixel width without requiring a linking algorithm or an iterative process. Experiments show that the runtime of our algorithm is faster than that of the distance transformation and is linearly proportional to the number of pixels in an image.
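The first stage, a Euclidean distance map of the binary object, can be sketched with SciPy. The ridge detection below is only a crude local-maximum stand-in for the paper's connectivity criterion, which additionally guarantees a connected, single-pixel-wide skeleton.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, maximum_filter

# A filled rectangle as a toy binary object.
obj = np.zeros((40, 60), dtype=bool)
obj[5:35, 5:55] = True

# Euclidean distance of each object pixel to the nearest background pixel.
dist = distance_transform_edt(obj)

# Crude ridge detection: keep object pixels that are local maxima of the
# distance map (NOT the paper's boundary-point-pair criterion).
ridge = (dist == maximum_filter(dist, size=3)) & obj
```

Skeleton points lie on ridges of this distance map; the paper's contribution is a per-pixel test on nearest boundary-point pairs that decides skeleton membership independently, avoiding a separate linking pass.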


IEEE Transactions on Systems, Man, and Cybernetics | 2016

A Level Set Approach to Image Segmentation With Intensity Inhomogeneity

Kaihua Zhang; Lei Zhang; Kin-Man Lam; David Zhang

It is often difficult to accurately segment images with intensity inhomogeneity, because most representative algorithms are region-based and depend on the intensity homogeneity of the object of interest. In this paper, we present a novel level set method for image segmentation in the presence of intensity inhomogeneity. The inhomogeneous objects are modeled as Gaussian distributions of different means and variances, and a sliding window is used to map the original image into another domain, where the intensity distribution of each object is still Gaussian but better separated. The means of the Gaussian distributions in the transformed domain can be adaptively estimated by multiplying a bias field with the original signal within the window. A maximum likelihood energy functional is then defined on the whole image region, which combines the bias field, the level set function, and the piecewise constant function approximating the true image signal. The proposed level set method can be directly applied to simultaneous segmentation and bias correction for 3T and 7T magnetic resonance images. Extensive evaluations on synthetic and real images demonstrate the superiority of the proposed method over other representative algorithms.


Pattern Recognition | 2005

Illumination invariant face recognition

Dang-Hui Liu; Kin-Man Lam; Lansun Shen

The appearance of a face will vary drastically when the illumination changes. Variations in lighting conditions make face recognition an even more challenging and difficult task. In this paper, we propose a novel approach to handle the illumination problem. Our method can restore a face image captured under arbitrary lighting conditions to one with frontal illumination by using a ratio-image between the face image and a reference face image, both of which are blurred by a Gaussian filter. An iterative algorithm is then used to update the reference image, which is reconstructed from the restored image by means of principal component analysis (PCA), in order to obtain a visually better restored image. Image processing techniques are also used to improve the quality of the restored image. To evaluate the performance of our algorithm, restored images with frontal illumination are used for face recognition by means of PCA. Experimental results demonstrate that face recognition using our method can achieve a higher recognition rate based on the Yale B database and the Yale database. Our algorithm has several advantages over other previous algorithms: (1) it does not need to estimate the face surface normals and the light source directions, (2) it does not need many images captured under different lighting conditions for each person, nor a set of bootstrap images that includes many images with different illuminations, and (3) it does not need to detect accurate positions of some facial feature points or to warp the image for alignment, etc.
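One restoration step of the ratio-image idea can be sketched as multiplying the input image by the ratio of the Gaussian-blurred reference to the Gaussian-blurred input, which transfers the reference's low-frequency lighting onto the input. The blur width is an illustrative choice, and the paper's iterative PCA-based update of the reference image is not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def restore_illumination(img, ref, sigma=8.0, eps=1e-6):
    """One ratio-image restoration step: img * blur(ref) / blur(img),
    so the large-scale lighting of ref replaces that of img while
    high-frequency facial detail in img is kept."""
    img = img.astype(np.float64)
    ref = ref.astype(np.float64)
    ratio = gaussian_filter(ref, sigma) / (gaussian_filter(img, sigma) + eps)
    return img * ratio

# Toy example: a side-lit version of a uniformly lit "face".
frontal = np.full((64, 64), 128.0)
side_lit = frontal * np.linspace(0.3, 1.0, 64)[None, :]   # dark on the left
restored = restore_illumination(side_lit, frontal)
```

On this toy input the slowly varying side lighting is largely divided out, pulling the mean intensity back toward the uniformly lit reference.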


IEEE Transactions on Circuits and Systems for Video Technology | 2005

A new key frame representation for video segment retrieval

Kin-Wai Sze; Kin-Man Lam; Guoping Qiu

In this paper, we propose an optimal key frame representation scheme based on global statistics for video shot retrieval. Each pixel in this optimal key frame is constructed by considering the probability of occurrence of those pixels at the corresponding pixel position among the frames in a video shot. Therefore, this constructed key frame is called temporally maximum occurrence frame (TMOF), which is an optimal representation of all the frames in a video shot. The retrieval performance of this representation scheme is further improved by considering the k pixel values with the largest probabilities of occurrence and the highest peaks of the probability distribution of occurrence at each pixel position for a video shot. The corresponding schemes are called k-TMOF and k-pTMOF, respectively. These key frame representation schemes are compared to other histogram-based techniques for video shot representation and retrieval. In the experiments, three video sequences in the MPEG-7 content set were used to evaluate the performances of the different key frame representation schemes. Experimental results show that our proposed representations outperform the alpha-trimmed average histogram for video retrieval.
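The basic TMOF construction can be sketched directly: for each pixel position, take the intensity that occurs most often across the frames of the shot. The single gray channel and the toy shot below are illustrative; the k-TMOF and k-pTMOF extensions are not reproduced.

```python
import numpy as np

def tmof(frames):
    """Temporally maximum occurrence frame: at each pixel position,
    pick the intensity with the highest probability of occurrence
    across the frames of a shot (one gray channel)."""
    frames = np.asarray(frames)               # shape (n_frames, h, w)
    n, h, w = frames.shape
    out = np.empty((h, w), dtype=frames.dtype)
    flat = frames.reshape(n, -1)
    for i in range(flat.shape[1]):
        vals, counts = np.unique(flat[:, i], return_counts=True)
        out.flat[i] = vals[counts.argmax()]
    return out

# Toy shot: 5 frames, mostly constant, with one outlier "flash" frame
# that the per-pixel mode ignores.
shot = np.full((5, 4, 4), 100, dtype=np.uint8)
shot[2] = 250
key = tmof(shot)
```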

Collaboration

Kin-Man Lam's top co-authors and their affiliations.

Wan-Chi Siu, Hong Kong Polytechnic University
Guoping Qiu, University of Nottingham
Lansun Shen, Beijing University of Technology
Hong Yan, City University of Hong Kong
Muwei Jian, Shandong University of Finance and Economics
Junyu Dong, Ocean University of China
Jiwei Hu, Hong Kong Polytechnic University
Kwok-Wai Wong, Hong Kong Polytechnic University