Guangda Su
Tsinghua University
Publications
Featured research published by Guangda Su.
International Conference on Machine Learning and Cybernetics | 2005
Yafeng Deng; Guangda Su; Jun Zhou; Bo Fu
In this paper, a fast and robust method for face detection in video is introduced. To speed up detection, we integrate motion energy into a cascade-structured classifier to reject most of the candidate windows. Motion energy, which represents the extent of motion in a candidate region, can be computed efficiently with an integral motion image and thus greatly accelerates the evaluation procedure. By dividing the system state into three modes and processing input images according to the current state, we handle special situations such as still faces, repeated detection failures, and input without evident change, yielding a robust system suitable for real applications. Because the system does not depend on any assumed motion model, it is not limited to particular motion patterns in speed or direction. The approach is implemented to detect faces in real situations; processing takes about 6-22 ms with a detection rate of 99%.
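As an illustration of the pre-rejection step described above, the following is a minimal Python sketch of computing motion energy with an integral motion image and discarding low-motion candidate windows; the frame-difference motion map, window format, and threshold are illustrative assumptions, and the cascade classifier itself is not shown.

```python
import numpy as np

def integral_image(motion_map):
    # Cumulative sums over rows and columns, padded with a zero row/column
    # so the sum over any rectangle can be read with four lookups.
    ii = np.cumsum(np.cumsum(motion_map, axis=0), axis=1)
    return np.pad(ii, ((1, 0), (1, 0)), mode="constant")

def motion_energy(ii, x, y, w, h):
    # Sum of the motion map inside window (x, y, w, h) in O(1).
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def prefilter_windows(prev_frame, frame, windows, threshold=50.0):
    # Reject candidate windows whose motion energy is too low, so that only
    # promising windows reach the slower cascade-structured classifier.
    motion_map = np.abs(frame.astype(np.float32) - prev_frame.astype(np.float32))
    ii = integral_image(motion_map)
    return [win for win in windows if motion_energy(ii, *win) >= threshold]
```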
Neurocomputing | 2007
Jun Zhou; Guangda Su; Chunhong Jiang; Yafeng Deng; Congcong Li
This paper presents a novel face and fingerprint identity authentication system based on multi-route detection. To exclude the influence of pose on face recognition, a multi-route detection module is adopted, and parallel processing technology is used to speed up the authentication process. Fusion of face and fingerprint by a support vector machine (SVM), together with a new normalization method, improves the authentication accuracy. A new concept of the optimal two-dimensional face is proposed to improve the performance of the dynamic face authentication system. Experiments on a real database show that the proposed system achieves better results than face-only or fingerprint-only systems.
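A hedged sketch of SVM-based score-level fusion is shown below; the face and fingerprint matching scores, the min-max normalization (a stand-in for the paper's normalization method), and the toy training data are all illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def minmax_normalize(scores):
    # Map raw matcher scores to [0, 1]; a simple stand-in for the paper's
    # normalization method, which is not reproduced here.
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

# Hypothetical training data: one row per authentication attempt with
# (face score, fingerprint score); labels mark genuine (1) vs impostor (0).
face_scores = np.array([0.91, 0.40, 0.85, 0.30, 0.77, 0.25])
finger_scores = np.array([0.88, 0.35, 0.70, 0.45, 0.90, 0.20])
labels = np.array([1, 0, 1, 0, 1, 0])

X = np.column_stack([minmax_normalize(face_scores),
                     minmax_normalize(finger_scores)])

# The SVM learns the accept/reject boundary in the fused 2-D score space.
fusion_svm = SVC(kernel="rbf").fit(X, labels)
print(fusion_svm.predict([[0.8, 0.75]]))
```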
International Conference on Machine Learning and Cybernetics | 2002
Huimin Ma; Guangda Su; Jun-Yan Wang; Zheng Ni
This paper presents a new remote detection system for defects in the upper portion of glass bottles on a production line. The system uses eight video cameras installed beside the production line to capture images of the mouth, lip and neck of each bottle. Two computers then collect and process these images to detect any defects. The detection process has two parts: one detects defects in the mouth and lip of a glass bottle using images captured from the top of the bottle; the other detects defects in the neck and shoulder of a bottle by processing images shot of the upper side of the bottle. The processing includes image orientation, preprocessing, and image recognition. The system sends a control signal to the mainframe once it recognizes a defective bottle, which can then be ejected from the production line. Test results are given to show the system's recognition rate for all defects occurring in the upper part of a glass bottle.
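Below is a purely hypothetical skeleton of the two-part inspection flow; the detector callbacks stand in for the paper's image orientation, preprocessing and recognition steps, which are not detailed in the abstract.

```python
def inspect_bottle(top_images, side_images,
                   detect_mouth_lip_defect, detect_neck_shoulder_defect):
    # A bottle is defective if any top-view image shows a mouth/lip defect
    # or any upper-side-view image shows a neck/shoulder defect.
    if any(detect_mouth_lip_defect(img) for img in top_images):
        return True
    return any(detect_neck_shoulder_defect(img) for img in side_images)

def run_inspection(bottles, detectors, send_eject_signal):
    # For each bottle on the line, inspect its captured images and signal
    # the mainframe to eject it if a defect is recognized.
    detect_mouth_lip, detect_neck_shoulder = detectors
    for bottle_id, top_images, side_images in bottles:
        if inspect_bottle(top_images, side_images,
                          detect_mouth_lip, detect_neck_shoulder):
            send_eject_signal(bottle_id)
```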
IEEE Signal Processing Letters | 2015
Jiansheng Chen; Yu Deng; Gaocheng Bai; Guangda Su
Face image quality is an important factor affecting the accuracy of automatic face recognition. Practical recognition systems can usually capture multiple face images from each subject, so selecting face images with high quality for recognition is a promising strategy for improving system performance. We propose a learning-to-rank based framework for assessing face image quality. The proposed method is simple and can adapt to different recognition methods. Experimental results demonstrate its effectiveness in improving the robustness of face detection and recognition.
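The abstract does not specify the ranking model, so the sketch below uses a generic RankSVM-style pairwise formulation as one plausible instance of learning to rank for quality assessment; the feature vectors and the assumed quality ordering are hypothetical.

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_pairwise_ranker(features, quality_order):
    # RankSVM-style sketch: learn weights w such that w.x_better > w.x_worse
    # for every ordered pair; `quality_order` lists image indices from the
    # assumed best to the assumed worst quality.
    diffs, targets = [], []
    for i in range(len(quality_order)):
        for j in range(i + 1, len(quality_order)):
            better, worse = quality_order[i], quality_order[j]
            diffs.append(features[better] - features[worse]); targets.append(1)
            diffs.append(features[worse] - features[better]); targets.append(-1)
    return LinearSVC(C=1.0).fit(np.array(diffs), np.array(targets)).coef_.ravel()

def quality_score(w, x):
    # Higher score means higher predicted face image quality; the best of
    # several captured frames can then be selected for recognition.
    return float(np.dot(w, x))
```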
International Conference on Machine Learning and Cybernetics | 2005
Meng; Guangda Su; Congcong Li; Bo Fu; Jun Zhou
This paper presents a high-performance face recognition system whose face database contains 2.5 million faces. At this scale, conventional recognition approaches meet with great difficulties: the identification rate of most algorithms may decline significantly, and querying a large-scale database may be quite time-consuming. In our system, a special distributed parallel architecture is proposed to speed up the computation. Furthermore, a multimodal part face recognition method based on principal component analysis (MMP-PCA) is adopted to perform the recognition task, and MMX technology is introduced to accelerate the matching procedure. Practical results show that this system has excellent recognition performance: when searching among 2,560,000 faces on 6 PC servers with Xeon 2.4 GHz CPUs, the query time is only 1.094 s and the identification rate is above 85% in most cases. Moreover, the greatest advantage of this system is that it not only increases recognition speed but also removes the upper limit on face database capacity, which can therefore be extended to an arbitrarily large size.
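A rough sketch of the shard-and-merge search pattern is given below, assuming faces already projected into a PCA subspace and nearest-neighbor matching; the actual MMP-PCA part-based features and MMX-level optimizations are not reproduced.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def match_shard(args):
    # Match the probe against one shard of the projected gallery and return
    # (global index, similarity) for the best local candidates.
    probe, shard, offset, k = args
    dists = np.linalg.norm(shard - probe, axis=1)
    best = np.argsort(dists)[:k]
    return [(offset + i, -dists[i]) for i in best]

def distributed_search(probe, gallery, n_workers=6, k=10):
    # Split the gallery across workers (standing in for the PC servers),
    # search the shards in parallel, then merge into a global top-k list.
    shards = np.array_split(gallery, n_workers)
    offsets = np.cumsum([0] + [len(s) for s in shards[:-1]])
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        parts = pool.map(match_shard,
                         [(probe, s, o, k) for s, o in zip(shards, offsets)])
    merged = [cand for part in parts for cand in part]
    return sorted(merged, key=lambda c: -c[1])[:k]
```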
International Conference on Machine Learning and Cybernetics | 2004
Chun-Hong Jiang; Guangda Su
Biometric-based person identity verification is gaining more and more attention, and it has been shown that combining different biometric modalities achieves better performance than a single modality. To improve verification accuracy, this paper combines face and fingerprint for person identity verification. Several multimodal biometric information fusion strategies, including the sum rule (SR), weighted sum rule (WSR), Fisher linear discriminant analysis (FLDA) and support vector machine (SVM), are evaluated, and a new method for data normalization in verification systems is proposed. Experimental results demonstrate the effectiveness of fusing multiple biometrics compared with a single biometric, as well as the improved verification performance obtained with the new data normalization method. In our experiments, the SVM, SR, WSR and FLDA fusion methods perform in that order, from best to worst.
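As an illustration of the simpler fusion rules named above, here is a small sketch of the sum rule and weighted sum rule on z-score-normalized scores; the normalization and the 0.6/0.4 weighting are illustrative assumptions rather than the paper's proposed method.

```python
import numpy as np

def zscore(scores):
    # Illustrative z-score normalization; the paper proposes its own
    # normalization method, which is not reproduced here.
    s = np.asarray(scores, dtype=float)
    return (s - s.mean()) / (s.std() + 1e-12)

def sum_rule(face_scores, finger_scores):
    # SR: unweighted average of the normalized modality scores.
    return (zscore(face_scores) + zscore(finger_scores)) / 2.0

def weighted_sum_rule(face_scores, finger_scores, w_face=0.6):
    # WSR: weight each modality, e.g. according to its individual accuracy;
    # the 0.6/0.4 split here is only an example.
    return w_face * zscore(face_scores) + (1.0 - w_face) * zscore(finger_scores)

# Accept an attempt when its fused score clears a decision threshold.
fused = weighted_sum_rule([0.9, 0.2, 0.7], [0.8, 0.3, 0.6])
decisions = fused >= 0.0
```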
2014 International Conference on Smart Computing | 2014
Yunfan Liu; Xueshi Hou; Jiansheng Chen; Chang Yang; Guangda Su; Weibei Dou
Facial expression recognition has important practical applications. In this paper, we propose a method based on the combination of optical flow and a deep neural network, the stacked sparse autoencoder (SAE). This method classifies facial expressions into six categories (happiness, sadness, anger, fear, disgust and surprise). To extract the representation of facial expressions, we choose the optical flow method because it can analyze video image sequences effectively and reduce the influence of differences in personal appearance on facial expression recognition. We then train the stacked SAE with the optical flow field as input to extract high-level features. For classification, we apply a softmax classifier on the top layer of the stacked SAE. The method is evaluated on the Extended Cohn-Kanade Dataset (CK+), and the expression classification results show that the SAE performs the classification effectively. Further experiments (transformation and purification) illustrate the feature extraction and input reconstruction ability of the SAE.
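The sketch below covers the optical-flow front end and the softmax classification stage, using OpenCV's Farneback flow as a stand-in for the paper's flow computation; the stacked sparse autoencoder, which would sit between the two stages to learn high-level features, is only indicated in a comment.

```python
import cv2
import numpy as np
from sklearn.linear_model import LogisticRegression

def flow_feature(neutral_frame, apex_frame):
    # Dense optical flow between a neutral frame and the expression apex,
    # flattened into one feature vector (dx, dy per pixel).
    prev = cv2.cvtColor(neutral_frame, cv2.COLOR_BGR2GRAY)
    curr = cv2.cvtColor(apex_frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return flow.reshape(-1)

def train_expression_classifier(X, y):
    # X: one flow feature vector per sequence; y: labels 0..5 for happiness,
    # sadness, anger, fear, disgust and surprise. In the paper a stacked
    # sparse autoencoder is trained on the flow fields first and the softmax
    # classifier sits on its top layer; here multinomial logistic regression
    # (a softmax classifier) is applied to the raw flow features instead.
    return LogisticRegression(max_iter=1000).fit(X, y)
```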
Computer Vision and Pattern Recognition | 2014
Chang Yang; Jiansheng Chen; Nan Su; Guangda Su
For each person, there exist large unstructured photo collections in personal photo albums. We call these photos Hetero-source images; they carry abundant shape and texture information about the specific face. In this paper, we propose a novel 3D face modeling method that combines the normal map of Hetero-source images with the fitting result from a single image to achieve more accurate 3D shape estimates. Building on recent research showing that the set of images of convex Lambertian surfaces under general illumination can be well approximated by low-order spherical harmonics, we first incorporate spherical harmonics into the 3D morphable model to initialize the 3D shape. The fitting result, however, suffers from model dominance and lacks fine details. The normal map inferred from Hetero-source image shading constraints makes it possible to improve local details and counteract model dominance. We estimate the normal map, which contains more accurate orientation information, by alternating optimization and apply it to refine the preliminary 3D surface. Experimental results on both synthetic and real-world data demonstrate that our method captures discriminating facial features and outperforms single-image fitting in accuracy.
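The following sketch shows the low-order spherical-harmonics shading model the abstract relies on, assuming grayscale intensities, per-pixel albedo and unit normals; the 3D morphable model fitting and the alternating normal-map optimization are not reproduced.

```python
import numpy as np

def sh_basis(normals):
    # First nine real spherical harmonic basis functions evaluated at the
    # given unit surface normals (N x 3 array of x, y, z components).
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    return np.stack([
        0.282095 * np.ones_like(x),        # Y_0,0
        0.488603 * y,                      # Y_1,-1
        0.488603 * z,                      # Y_1,0
        0.488603 * x,                      # Y_1,1
        1.092548 * x * y,                  # Y_2,-2
        1.092548 * y * z,                  # Y_2,-1
        0.315392 * (3.0 * z ** 2 - 1.0),   # Y_2,0
        1.092548 * x * z,                  # Y_2,1
        0.546274 * (x ** 2 - y ** 2),      # Y_2,2
    ], axis=1)

def fit_lighting(intensities, normals, albedo):
    # Least-squares estimate of the nine lighting coefficients under the
    # Lambertian low-order approximation I = albedo * (B(n) . l).
    B = albedo[:, None] * sh_basis(normals)
    coeffs, *_ = np.linalg.lstsq(B, intensities, rcond=None)
    return coeffs

def shade(normals, albedo, coeffs):
    # Predicted intensities under the estimated lighting; comparing these
    # with observed pixels is what drives the normal-map refinement.
    return albedo * (sh_basis(normals) @ coeffs)
```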
Tsinghua Science & Technology | 2013
Jing Wang; Guangda Su; Ying Xiong; Jiansheng Chen; Yan Shang; Jiongxin Liu; Xiaolong Ren
Sparse Representation based Classification (SRC) has emerged as a new paradigm for solving recognition problems. This paper presents a constraint sampling feature extraction method that improves the SRC recognition rate by combining texture and shape features. Tests show that combining constraint sampling with facial alignment achieves very high recognition accuracy on both the AR face database (99.52%) and the CAS-PEAL face database (99.54%).
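For reference, a minimal sketch of standard SRC with an l1 solver is given below; the constraint sampling features and texture/shape combination proposed in the paper are assumed to have already produced the columns of `train_matrix`.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(test_vec, train_matrix, train_labels, alpha=0.01):
    # Standard SRC: code the test sample as a sparse linear combination of
    # all training samples (one normalized sample per column), then assign
    # the class whose samples yield the smallest reconstruction residual.
    lasso = Lasso(alpha=alpha, max_iter=10000)
    lasso.fit(train_matrix, test_vec)
    coeffs = lasso.coef_
    residuals = {}
    for label in np.unique(train_labels):
        mask = (train_labels == label)
        reconstruction = train_matrix[:, mask] @ coeffs[mask]
        residuals[label] = np.linalg.norm(test_vec - reconstruction)
    return min(residuals, key=residuals.get)
```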
Pattern Recognition | 2013
Gang Zhang; Jiansheng Chen; Guangda Su; Jing Liu
Double-pupil location is a prerequisite for face normalization and a key step in face recognition. Usually, face detection and location methods can only detect a face region and a single eye in a face image with pose variation. This paper presents a novel method for double-pupil location. The detected pupil of the single eye is first located accurately to reduce the effect of location offset and overlay. The center of the face region and that of the detected pupil are then used to compute the possible location of the other, unknown pupil. A single-pupil detection method is applied at that location to determine whether a pupil is present, and the best pupil candidate is taken as the pupil of the other eye. To evaluate the double-pupil location and single-pupil detection methods, both a face pose image database subset from our laboratory and a subset of an open head pose image database are used. Experiments show that double-pupil location can be used not only for face images rotated about the vertical axis, but also for those rotated about both the vertical and horizontal axes.
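A simplified sketch of the geometric idea is given below, assuming the two pupils are roughly symmetric about the face-region center; `detect_pupil` is a hypothetical single-pupil detector callback returning a confidence score or None.

```python
import numpy as np

def predict_second_pupil(face_center, known_pupil):
    # Reflect the detected pupil through the face-region center to predict
    # where the other pupil should lie (an assumed symmetry prior, not the
    # paper's exact geometric computation).
    fc = np.asarray(face_center, dtype=float)
    p1 = np.asarray(known_pupil, dtype=float)
    return 2.0 * fc - p1

def locate_second_pupil(face_center, known_pupil, detect_pupil, radius=8):
    # Search a small neighborhood around the predicted position with a
    # single-pupil detector and keep the best-scoring candidate, if any.
    cx, cy = predict_second_pupil(face_center, known_pupil)
    best, best_score = None, float("-inf")
    for dx in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            x, y = int(round(cx)) + dx, int(round(cy)) + dy
            score = detect_pupil(x, y)
            if score is not None and score > best_score:
                best, best_score = (x, y), score
    return best
```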