Publication


Featured research published by Chao-Fa Chuang.


Information Sciences | 2004

Automatic extraction of head and face boundaries and facial features

Frank Y. Shih; Chao-Fa Chuang

This paper presents a novel approach for the extraction of the human head, face, and facial features. In the double-threshold method, the high-thresholded image is used to trace the head boundary and the low-thresholded image is used to scan the face boundary. We obtain facial feature candidates, eliminate noise, and apply x- and y-projections to extract facial features such as the eyes, nostrils, and mouth. Because the chin has low contrast in some face images, its boundary cannot be completely detected; an elliptic model is used to repair it. When noise or clustered feature candidates remain, we apply a geometric face model to locate the facial features and an elliptic model to trace the face boundary. A Gabor filter algorithm is adopted to locate the two eyes. We have tested our algorithm on more than 100 FERET face images. Experimental results show that our algorithm extracts the human head, face, and facial features successfully.
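
As a rough illustration of the projection step described above, the sketch below locates candidate facial-feature bands by summing binarized pixels along rows and columns. The function name, threshold, and toy input are illustrative assumptions, not the paper's actual implementation or parameters.

```python
import numpy as np

def projection_feature_bands(face_bin, min_strength=0.2):
    """Locate candidate facial-feature rows/columns in a binary face patch.

    face_bin: 2-D array, 1 where a pixel is a dark feature candidate
              (e.g. from the low-thresholded image), 0 elsewhere.
    Returns the row and column indices whose projection reaches a fraction
    of the strongest response (eyes, nostrils, and mouth form such bands).
    """
    y_proj = face_bin.sum(axis=1)          # one value per row
    x_proj = face_bin.sum(axis=0)          # one value per column
    rows = np.where(y_proj >= min_strength * y_proj.max())[0]
    cols = np.where(x_proj >= min_strength * x_proj.max())[0]
    return rows, cols

# Toy example: a 6x6 patch with a dark horizontal "eye band" on row 2.
patch = np.zeros((6, 6), dtype=int)
patch[2, 1:5] = 1
print(projection_feature_bands(patch))
```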


International Journal of Pattern Recognition and Artificial Intelligence | 2008

Performance comparisons of facial expression recognition in JAFFE database

Frank Y. Shih; Chao-Fa Chuang; Patrick S. P. Wang

Facial expression provides an important behavioral measure for studies of emotion, cognitive processes, and social interaction. Facial expression recognition has recently become a promising research area; its applications include human-computer interfaces, human emotion analysis, and medical care. In this paper, we investigate various feature representation and expression classification schemes to recognize seven facial expressions, namely happy, neutral, angry, disgust, sad, fear, and surprise, in the JAFFE database. Experimental results show that the method combining 2D-LDA (linear discriminant analysis) and SVM (support vector machine) outperforms the others. Its recognition rate is 95.71% using the leave-one-out strategy and 94.13% using the cross-validation strategy, and it takes only 0.0357 seconds to process one image of size 256 × 256.
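
A minimal sketch of this kind of pipeline, assuming scikit-learn's standard LDA as a stand-in for 2D-LDA and randomly generated arrays in place of the JAFFE images (which must be obtained separately); the numbers it prints will not reproduce the reported rates.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Placeholder data standing in for flattened expression images:
# 70 samples, 256-dimensional features, 7 expression classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(70, 256))
y = np.repeat(np.arange(7), 10)

# LDA reduces the images to at most (n_classes - 1) discriminant directions,
# then an SVM separates the seven expressions in that reduced space.
clf = make_pipeline(LinearDiscriminantAnalysis(n_components=6),
                    SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print("leave-one-out accuracy:", scores.mean())
```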


Pattern Recognition | 2006

Rapid and Brief Communication: Recognizing facial action units using independent component analysis and support vector machine

Chao-Fa Chuang; Frank Y. Shih

Facial expression provides a crucial behavioral measure for studies of human emotion, cognitive processes, and social interaction. In this paper, we focus on recognizing facial action units (AUs), which represent subtle changes of facial expression. We adopt ICA (independent component analysis) as the feature extraction and representation method and an SVM (support vector machine) as the pattern classifier. Compared with three existing systems, those of Tian, Donato, and Bazzo, our proposed system achieves the highest recognition rates. Furthermore, the proposed system is fast, taking only 1.8 ms to classify a test image.
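
Along the same lines, the sketch below pairs ICA-based feature extraction with a linear SVM, using scikit-learn's FastICA and synthetic data as stand-ins for the paper's AU-labeled face images; names and sizes are assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Placeholder data standing in for cropped face images labeled with an AU.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 400))        # 200 images, flattened to 400 dims
y = rng.integers(0, 2, size=200)       # AU present / absent

# FastICA projects each image onto statistically independent components;
# the SVM then classifies images in that component space.
model = make_pipeline(FastICA(n_components=20, max_iter=1000, random_state=0),
                      SVC(kernel="linear"))
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model.fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```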


Artificial Intelligence in Medicine | 2006

Machine recognition and representation of neonatal facial displays of acute pain

Sheryl Brahnam; Chao-Fa Chuang; Frank Y. Shih; Melinda R. Slack

OBJECTIVE: It has been reported in the medical literature that health care professionals have difficulty distinguishing a newborn's facial expressions of pain from facial reactions to other stimuli. Although a number of pain instruments have been developed to assist health professionals, studies demonstrate that health professionals are not entirely impartial in their assessment of pain and fail to capitalize on all the information exhibited in a newborn's facial displays. This study tackles these problems by applying three different state-of-the-art face classification techniques to the task of distinguishing a newborn's facial expressions of pain.

METHODS: The facial expressions of 26 neonates between 18 hours and 3 days old were photographed while experiencing the pain of a heel lance and a variety of stressors, including transport from one crib to another (a disturbance that can provoke crying that is not in response to pain), an air stimulus on the nose, and friction on the external lateral surface of the heel. Three face classification techniques, principal component analysis (PCA), linear discriminant analysis (LDA), and support vector machine (SVM), were used to classify the faces.

RESULTS: In our experiments, the best recognition rates of pain versus nonpain (88.00%), pain versus rest (94.62%), pain versus cry (80.00%), pain versus air puff (83.33%), and pain versus friction (93.00%) were obtained from an SVM with a polynomial kernel of degree 3. The SVM outperformed two methods commonly used in face classification, PCA and LDA, each using the L1 distance metric.

CONCLUSION: The results of this study indicate that the application of face classification techniques to pain assessment and management is a promising area of investigation.
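
For illustration, the sketch below contrasts the two kinds of classifiers compared in the study: an SVM with a degree-3 polynomial kernel versus a PCA projection followed by an L1 nearest-neighbor match. The data are synthetic placeholders, so the printed scores are meaningless; only the model setup mirrors the description above.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-in for flattened face images labeled pain / nonpain.
rng = np.random.default_rng(2)
X = rng.normal(size=(120, 300))
y = rng.integers(0, 2, size=120)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# SVM with a polynomial kernel of degree 3, as in the reported best result.
svm = SVC(kernel="poly", degree=3).fit(X_tr, y_tr)

# Eigenface-style baseline: PCA projection with an L1 (manhattan) nearest neighbor.
pca_l1 = make_pipeline(PCA(n_components=20),
                       KNeighborsClassifier(n_neighbors=1, metric="manhattan"))
pca_l1.fit(X_tr, y_tr)

print("SVM (poly-3):", svm.score(X_te, y_te))
print("PCA + L1 NN :", pca_l1.score(X_te, y_te))
```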


Decision Support Systems | 2007

Machine assessment of neonatal facial expressions of acute pain

Sheryl Brahnam; Chao-Fa Chuang; Randall S. Sexton; Frank Y. Shih

We propose that a machine assessment system of neonatal expressions of pain be developed to assist clinicians in diagnosing pain. The facial expressions of 26 neonates (aged 18-72 hours) were photographed while experiencing the acute pain of a heel lance and three nonpain stressors. Four algorithms were evaluated on out-of-sample observations: PCA, LDA, SVMs, and NNSOA. NNSOA provided the best classification rate for pain versus nonpain (90.20%), followed by an SVM with a linear kernel (82.35%). We believe these results indicate a high potential for developing a decision support system for diagnosing neonatal pain from images of neonatal facial displays.
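
The evaluation protocol here is a plain out-of-sample comparison of several classifiers. A sketch of such a harness is below, with scikit-learn's MLPClassifier standing in for the NNSOA-trained network (NNSOA is not a standard library component) and synthetic data in place of the neonatal photographs; it is an assumption-laden illustration, not the paper's code.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Placeholder image features and pain / nonpain labels.
rng = np.random.default_rng(3)
X = rng.normal(size=(150, 200))
y = rng.integers(0, 2, size=150)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

candidates = {
    "PCA + LDA": make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis()),
    "linear SVM": SVC(kernel="linear"),
    "MLP (NNSOA stand-in)": MLPClassifier(hidden_layer_sizes=(32,),
                                          max_iter=500, random_state=0),
}
# Fit each candidate on the training split and score it on held-out data.
for name, model in candidates.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: {model.score(X_te, y_te):.3f}")
```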


International Journal of Pattern Recognition and Artificial Intelligence | 2008

Extracting faces and facial features from color images

Frank Y. Shih; Shouxian Cheng; Chao-Fa Chuang; Patrick S. P. Wang

In this paper, we present image processing and pattern recognition techniques to extract human faces and facial features from color images. First, we segment a color image into skin and non-skin regions with a Gaussian skin-color model. Then we apply mathematical morphology and region-filling techniques for noise removal and hole filling. We determine whether a skin region is a face candidate by its size and shape. Principal component analysis (PCA) is used to verify face candidates. We create an ellipse model to roughly locate the eye and mouth areas, and apply a support vector machine (SVM) to classify them. Finally, we develop knowledge rules to verify the eyes. Experimental results show that our algorithm achieves an accuracy of 96.7% in face detection and 90.0% in facial feature extraction.
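
A hedged sketch of the first two stages, skin-color segmentation and morphological clean-up, is given below. The Gaussian chrominance model, its parameter values, and the helper name are assumptions for illustration, not the trained model from the paper.

```python
import numpy as np
from scipy import ndimage

def skin_mask(image_rgb, mean, cov_inv, threshold=4.0):
    """Segment skin pixels with a single-Gaussian chrominance model.

    image_rgb: HxWx3 uint8 array. mean / cov_inv describe an assumed
    Gaussian over normalized (r, g) chrominance; a pixel counts as skin
    when its squared Mahalanobis distance to the mean is below threshold.
    """
    rgb = image_rgb.astype(float) + 1e-6
    s = rgb.sum(axis=2, keepdims=True)
    rg = (rgb / s)[..., :2] - mean                     # chrominance offsets
    d2 = np.einsum("...i,ij,...j->...", rg, cov_inv, rg)
    mask = d2 < threshold

    # Morphological noise removal and hole filling, as in the pipeline above.
    mask = ndimage.binary_opening(mask, np.ones((3, 3)))
    mask = ndimage.binary_fill_holes(mask)
    return mask

# Illustrative (untrained) model parameters and a tiny dummy image.
mean = np.array([0.42, 0.31])
cov_inv = np.linalg.inv(np.array([[0.004, 0.0], [0.0, 0.003]]))
print(skin_mask(np.full((4, 4, 3), 180, dtype=np.uint8), mean, cov_inv))
```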


Pattern Recognition Letters | 2005

A modified regulated morphological corner detector

Frank Y. Shih; Chao-Fa Chuang; Vijayalakshmi Gaddipati

Corner detection is extremely important in many image processing applications such as matching, representation, recognition, registration, and camera calibration. In this paper, we propose a modified regulated morphological corner detector with an adjustable strictness parameter. Experimental results show that the operator is simple, produces good-quality corners, and can be computed at low cost.
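
The sketch below illustrates the general idea behind regulated morphological corner detection rather than the paper's modified operator: it assumes regulated erosion and dilation that tolerate up to (strictness - 1) mismatches under the structuring element, and marks corners of a binary shape as the pixels removed by the regulated opening.

```python
import numpy as np
from scipy.ndimage import correlate

def regulated_erosion(binary, selem, strictness=1):
    """Keep a pixel if at most (strictness - 1) structuring-element
    positions miss the foreground; strictness=1 is ordinary erosion."""
    hits = correlate(binary.astype(int), selem, mode="constant", cval=0)
    return hits >= selem.sum() - (strictness - 1)

def regulated_dilation(binary, selem, strictness=1):
    """Set a pixel if at least `strictness` structuring-element
    positions hit the foreground; strictness=1 is ordinary dilation."""
    hits = correlate(binary.astype(int), selem, mode="constant", cval=0)
    return hits >= strictness

def corner_response(binary, selem, strictness=1):
    """Corners of a binary shape appear where the shape differs from its
    (regulated) opening, which rounds off sharp convexities."""
    opened = regulated_dilation(regulated_erosion(binary, selem, strictness),
                                selem, strictness)
    return binary & ~opened

# Toy example: opening a filled square with a plus-shaped structuring
# element removes exactly its four corner pixels.
shape = np.zeros((9, 9), dtype=bool)
shape[2:7, 2:7] = True
plus = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])
print(corner_response(shape, plus).astype(int))
```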


Journal of Information Science and Engineering | 2007

An intelligent sensor network for object detection, classification and recognition

Frank Y. Shih; Yi-Ta Wu; Chao-Fa Chuang; Jiann-Liang Chen; Hsi-Feng Lu; Yao-Chung Chang


Lecture Notes in Computer Science | 2006

SVM classification of neonatal facial images of pain

Sheryl Brahnam; Chao-Fa Chuang; Frank Y. Shih; Melinda R. Slack


Archive | 2006

Facial feature representation and recognition

Frank Y. Shih; Chao-Fa Chuang

Collaboration


Dive into Chao-Fa Chuang's collaborations.

Top Co-Authors

Frank Y. Shih

New Jersey Institute of Technology

Sheryl Brahnam

Missouri State University

Shouxian Cheng

New Jersey Institute of Technology

Vijayalakshmi Gaddipati

New Jersey Institute of Technology

Yi-Ta Wu

New Jersey Institute of Technology

Hsi-Feng Lu

National Dong Hwa University

Jiann-Liang Chen

National Taiwan University of Science and Technology
