
Publication


Featured research published by Kyong I. Chang.


Computer Vision and Image Understanding | 2006

A survey of approaches and challenges in 3D and multi-modal 3D + 2D face recognition

Kevin W. Bowyer; Kyong I. Chang; Patrick J. Flynn

This survey focuses on recognition performed by matching models of the three-dimensional shape of the face, either alone or in combination with matching corresponding two-dimensional intensity images. Research trends to date are summarized, and challenges confronting the development of more accurate three-dimensional face recognition are identified. These challenges include the need for better sensors, improved recognition algorithms, and more rigorous experimental methodology.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2003

Comparison and combination of ear and face images in appearance-based biometrics

Kyong I. Chang; Kevin W. Bowyer; Sudeep Sarkar; Barnabas Victor

Researchers have suggested that the ear may have advantages over the face for biometric recognition. Our previous experiments with ear and face recognition, using the standard principal component analysis approach, showed lower recognition performance using ear images. We report results of similar experiments on larger data sets that are more rigorously controlled for relative quality of face and ear images. We find that recognition performance is not significantly different between the face and the ear, for example, 70.5 percent versus 71.6 percent, respectively, in one experiment. We also find that multimodal recognition using both the ear and face results in statistically significant improvement over either individual biometric, for example, 90.9 percent in the analogous experiment.


Digital Mammography / IWDM | 1998

Current Status of the Digital Database for Screening Mammography

Michael D. Heath; Kevin W. Bowyer; Daniel B. Kopans; P. Kegelmeyer; Richard H. Moore; Kyong I. Chang; S. Munishkumaran

The Digital Database for Screening Mammography (DDSM) is a resource for use by researchers investigating mammogram image analysis. In particular, the resource is focused on the context of image analysis to aid in screening for breast cancer. The database now contains substantial numbers of “normal” and “cancer” cases. This paper describes recent improvements and additions to the DDSM.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2005

An evaluation of multimodal 2D+3D face biometrics

Kyong I. Chang; Kevin W. Bowyer; Patrick J. Flynn

We report on the largest experimental study to date in multimodal 2D+3D face recognition, involving 198 persons in the gallery and either 198 or 670 time-lapse probe images. PCA-based methods are used separately for each modality and match scores in the separate face spaces are combined for multimodal recognition. Major conclusions are: 1) 2D and 3D have similar recognition performance when considered individually, 2) combining 2D and 3D results using a simple weighting scheme outperforms either 2D or 3D alone, 3) combining results from two or more 2D images using a similar weighting scheme also outperforms a single 2D image, and 4) combined 2D+3D outperforms the multi-image 2D result. This is the first (so far, only) work to present such an experimental control to substantiate multimodal performance improvement.
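The rank-one recognition rate reported throughout these studies counts a probe as recognized when its nearest gallery entry (the rank-one match) belongs to the correct subject. A minimal sketch, with a toy distance matrix standing in for real match scores (the function name and data are illustrative, not the authors' code):

```python
import numpy as np

def rank_one_rate(distances, probe_labels, gallery_labels):
    """distances[i, j] = distance from probe i to gallery entry j.
    A probe scores a hit if its nearest gallery entry has the right label."""
    nearest = np.argmin(distances, axis=1)
    hits = np.asarray(gallery_labels)[nearest] == np.asarray(probe_labels)
    return hits.mean()

# Toy setup: 3 gallery subjects, 4 probes; the last probe (subject 1)
# is closest to gallery entry 0, so it is misrecognized.
d = np.array([[0.1, 0.9, 0.8],
              [0.7, 0.2, 0.9],
              [0.8, 0.9, 0.1],
              [0.3, 0.4, 0.9]])
rate = rank_one_rate(d, probe_labels=[0, 1, 2, 1], gallery_labels=[0, 1, 2])
print(rate)  # 0.75
```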


international conference on pattern recognition | 2004

A survey of approaches to three-dimensional face recognition

Kevin W. Bowyer; Kyong I. Chang; Patrick J. Flynn

The vast majority of face recognition research has focused on the use of two-dimensional intensity images, and is covered in existing survey papers. This survey focuses on face recognition using three-dimensional data, either alone or in combination with two-dimensional intensity images. Challenges involved in developing more accurate three-dimensional face recognition are identified.


workshop on applications of computer vision | 2002

Does colorspace transformation make any difference on skin detection

Min C. Shin; Kyong I. Chang; Leonid V. Tsap

Skin detection is an important step in many computer vision algorithms. It is usually performed at the pixel level and involves a colorspace transformation followed by a classification step. A colorspace transformation is commonly assumed to increase separability between the skin and non-skin classes, to increase similarity among different skin tones, and to yield robust performance under varying illumination, yet these assumptions are rarely justified. In this work, we examine whether colorspace transformation actually delivers these benefits by computing four separability measurements on a large dataset of 805 images spanning different skin tones and illumination conditions. Surprisingly, the results indicate that most colorspace transformations do not provide the benefits that have been assumed.
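The core question can be sketched as follows: compute a separability score for skin versus non-skin pixels before and after a colorspace transformation and compare. This is a minimal illustration only; the Fisher-style scatter ratio, the rg-chromaticity transform, and the synthetic pixel data below are assumptions for the sketch, not the paper's four measures or its 805-image dataset.

```python
import numpy as np

def fisher_separability(skin, nonskin):
    """Per-channel ratio of squared between-class mean distance to the
    pooled within-class variance, averaged over channels."""
    num = (skin.mean(axis=0) - nonskin.mean(axis=0)) ** 2
    den = skin.var(axis=0) + nonskin.var(axis=0) + 1e-12
    return float(np.mean(num / den))

def to_rg_chromaticity(rgb):
    """Example transformation that discards intensity:
    r = R/(R+G+B), g = G/(R+G+B)."""
    s = rgb.sum(axis=1, keepdims=True) + 1e-12
    return (rgb / s)[:, :2]

# Synthetic pixels: skin tones vary mostly in brightness,
# non-skin pixels are uniform random colors.
rng = np.random.default_rng(1)
base_skin = np.array([0.8, 0.5, 0.4])
brightness = rng.uniform(0.3, 1.0, size=(500, 1))
skin = base_skin * brightness + 0.02 * rng.normal(size=(500, 3))
nonskin = rng.uniform(0, 1, size=(500, 3))

print("RGB separability:", round(fisher_separability(skin, nonskin), 3))
print("rg  separability:", round(fisher_separability(
    to_rg_chromaticity(skin), to_rg_chromaticity(nonskin)), 3))
```

Running both scores on real labeled pixels, rather than this synthetic data, is how one would test whether a given transformation actually helps, which is the comparison the paper carries out at scale.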


international soi conference | 2003

Multimodal 2D and 3D biometrics for face recognition

Kyong I. Chang; Kevin W. Bowyer; Patrick J. Flynn

Results are presented for the largest experimental study to date that investigates the comparison and combination of 2D and 3D face data for biometric recognition. To our knowledge, this is also the only such study to incorporate significant time lapse between gallery and probe image acquisition. Recognition results are presented for gallery and probe datasets of 166 subjects imaged in both 2D and 3D, with six to thirteen weeks time lapse between gallery and probe images of a given subject. Using a PCA-based approach tuned separately for 2D and for 3D, we find no statistically significant difference between the rank-one recognition rates of 83.1% for 2D and 83.7% for 3D. Using a certainty-weighted sum-of-distance approach to combining 2D and 3D, we find a multimodal rank-one recognition rate of 92.8%, which is statistically significantly greater than either 2D or 3D alone.
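The pipeline described above, PCA tuned per modality with the resulting distances combined by a weighted sum, can be sketched as follows. This is a simplified illustration under stated assumptions: the min-max score normalization, equal weights, and toy data are placeholders, not the authors' certainty-weighted implementation.

```python
import numpy as np

def pca_distances(gallery, probe, n_components=3):
    """Project the gallery and one probe into a PCA face space built
    from the gallery, and return the probe's distance to each entry."""
    mean = gallery.mean(axis=0)
    centered = gallery - mean
    # Principal axes via SVD of the mean-centered gallery.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]
    g = centered @ basis.T
    p = (probe - mean) @ basis.T
    return np.linalg.norm(g - p, axis=1)

def fuse(d2d, d3d, w2d=0.5, w3d=0.5):
    """Normalize each modality's distances to [0, 1], then combine with
    a weighted sum; the smallest fused distance is the rank-one match."""
    n2d = (d2d - d2d.min()) / (np.ptp(d2d) + 1e-12)
    n3d = (d3d - d3d.min()) / (np.ptp(d3d) + 1e-12)
    return w2d * n2d + w3d * n3d

# Toy data: 4 gallery subjects with one feature vector per modality;
# the probe is a noisy copy of subject 2 in both modalities.
rng = np.random.default_rng(0)
gal2d = rng.normal(size=(4, 16))
gal3d = rng.normal(size=(4, 16))
probe2d = gal2d[2] + 0.05 * rng.normal(size=16)
probe3d = gal3d[2] + 0.05 * rng.normal(size=16)
fused = fuse(pca_distances(gal2d, probe2d), pca_distances(gal3d, probe3d))
print(int(np.argmin(fused)))  # index of the rank-one match
```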


ieee international conference on automatic face gesture recognition | 2004

Multi-biometrics using facial appearance, shape and temperature

Kyong I. Chang; Kevin W. Bowyer; Patrick J. Flynn; Xin Chen

We present results of the first study to examine individual and multi-modal face recognition using 2D, 3D and infrared images of the same set of subjects. Each sensor captures different aspects of human facial features; appearance in intensity representing surface reflectance from a light source, shape data representing depth values from the camera, and the pattern of heat emitted, respectively. We employ a database containing a gallery set of 127 images and an accumulated time-lapse probe set of 297 images. Using a PCA-based approach tuned separately for 2 D, 3D and IR, we find rank-one recognition rates of 90.6% for 2D, 91.9% for 3D and 71.0% for IR. Combining each pair of modalities, we find a multi-modal rank-one recognition rate of 98.7% for 2D-3D, 96.6% for 2D-IR and 98.0% for 3D-IR. When all three modalities are combined, we obtain 100% recognition. The results shown in this study appear to support the conclusion that the path to higher accuracy and robustness in biometrics involves use of multiple biometrics rather than the best possible sensor and algorithm for a single biometric.


Biometric Technology for Human Identification | 2004

Evaluation of multimodal biometrics using appearance, shape, and temperature

Kyong I. Chang; Kevin W. Bowyer; Patrick J. Flynn; Xin Chen

This study considers face recognition using multiple imaging modalities. Face recognition is performed using a PCA-based algorithm on each of three individual modalities: normal 2D intensity images, range images representing 3D shape, and infra-red images representing the pattern of heat emission. The algorithm is separately tuned for each modality. For each modality, the gallery consists of one image of each of the same 127 persons, and the probe set consists of 297 images of these subjects, acquired with one or more weeks time lapse. In this experiment, we find a rank-one recognition rate of 71% for infra-red, 91% for 2D, 92% for 3D. We also consider the multi-modal combination of each pair of modalities, and find a rank-one recognition rate of 97% for 2D plus infra-red, 98% for 3D plus infra-red, and 99% for 3D plus 2D. The combination of all three modalities yields a rank-one recognition rate of 100%. We conclude that multi-modal face recognition appears to offer great potential for improved accuracy over using a single 2D image. Larger and more challenging experiments are needed in order to explore this potential.


Archive | 2003

Face Recognition Using 2D and 3D Facial Data

Kyong I. Chang; Kevin W. Bowyer; Patrick J. Flynn

Collaboration


Top co-authors of Kyong I. Chang:

Barnabas Victor (University of South Florida)
Leonid V. Tsap (Lawrence Livermore National Laboratory)
Michael D. Heath (University of South Florida)