
Publication


Featured research published by Unsang Park.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2010

Age-Invariant Face Recognition

Unsang Park; Yiying Tong; Anil K. Jain

One of the challenges in automatic face recognition is to achieve temporal invariance. In other words, the goal is to come up with a representation and matching scheme that is robust to changes due to facial aging. Facial aging is a complex process that affects both the 3D shape of the face and its texture (e.g., wrinkles). These shape and texture changes degrade the performance of automatic face recognition systems. However, facial aging has not received substantial attention compared to other facial variations due to pose, lighting, and expression. We propose a 3D aging modeling technique and show how it can be used to compensate for the age variations to improve the face recognition performance. The aging modeling technique adapts view-invariant 3D face models to the given 2D face aging database. The proposed approach is evaluated on three different databases (i.e., FG-NET, MORPH, and BROWNS) using FaceVACS, a state-of-the-art commercial face recognition engine.


IEEE Transactions on Information Forensics and Security | 2011

Periocular Biometrics in the Visible Spectrum

Unsang Park; Raghavender R. Jillela; Arun Ross; Anil K. Jain

The term periocular refers to the facial region in the immediate vicinity of the eye. Acquisition of the periocular biometric is expected to require less subject cooperation while permitting a larger depth of field compared to traditional ocular biometric traits (viz., iris, retina, and sclera). In this work, we study the feasibility of using the periocular region as a biometric trait. Global and local information are extracted from the periocular region using texture and point operators resulting in a feature set for representing and matching this region. A number of aspects are studied in this work, including the 1) effectiveness of incorporating the eyebrows, 2) use of side information (left or right) in matching, 3) manual versus automatic segmentation schemes, 4) local versus global feature extraction schemes, 5) fusion of face and periocular biometrics, 6) use of the periocular biometric in partially occluded face images, 7) effect of disguising the eyebrows, 8) effect of pose variation and occlusion, 9) effect of masking the iris and eye region, and 10) effect of template aging on matching performance. Experimental results show a rank-one recognition accuracy of 87.32% using 1136 probe and 1136 gallery periocular images taken from 568 different subjects (2 images/subject) in the Face Recognition Grand Challenge (version 2.0) database with the fusion of three different matchers.
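The abstract mentions extracting local texture information from the periocular region. A common texture operator for this kind of task is the local binary pattern (LBP); the sketch below is a generic, minimal LBP histogram plus a chi-square comparison, not the paper's exact descriptor configuration.

```python
import numpy as np

def lbp_histogram(patch):
    """Basic 8-neighbour local binary pattern histogram for one grayscale patch.

    Each interior pixel is compared against its 8 neighbours; neighbours
    that are >= the centre contribute one bit to an 8-bit code.
    """
    h, w = patch.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Offsets of the 8 neighbours, clockwise from top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = patch[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = patch[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= centre).astype(np.uint8) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two normalised histograms (smaller = more similar)."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))
```

In a periocular pipeline, such histograms would typically be computed per sub-block of the eye region and concatenated before matching.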


IEEE Transactions on Information Forensics and Security | 2010

Face Matching and Retrieval Using Soft Biometrics

Unsang Park; Anil K. Jain

Soft biometric traits embedded in a face (e.g., gender and facial marks) are ancillary information and are not fully distinctive by themselves in face-recognition tasks. However, this information can be explicitly combined with face matching score to improve the overall face-recognition accuracy. Moreover, in certain application domains, e.g., visual surveillance, where a face image is occluded or is captured in off-frontal pose, soft biometric traits can provide even more valuable information for face matching or retrieval. Facial marks can also be useful to differentiate identical twins whose global facial appearances are very similar. The similarities found from soft biometrics can also be useful as a source of evidence in courts of law because they are more descriptive than the numerical matching scores generated by a traditional face matcher. We propose to utilize demographic information (e.g., gender and ethnicity) and facial marks (e.g., scars, moles, and freckles) for improving face image matching and retrieval performance. An automatic facial mark detection method has been developed that uses (1) the active appearance model for locating primary facial features (e.g., eyes, nose, and mouth), (2) the Laplacian-of-Gaussian blob detection, and (3) morphological operators. Experimental results based on the FERET database (426 images of 213 subjects) and two mugshot databases from the forensic domain (1225 images of 671 subjects and 10 000 images of 10 000 subjects, respectively) show that the use of soft biometric traits is able to improve the face-recognition performance of a state-of-the-art commercial matcher.
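The abstract describes combining a face matching score with ancillary soft-biometric information. A minimal sketch of that idea, assuming the face score is already normalised to [0, 1] and each soft trait (gender, ethnicity, individual facial marks) yields a binary agree/disagree; the weighted-sum rule and the weight value are illustrative, not the paper's exact fusion scheme.

```python
def fuse_scores(face_score, soft_matches, soft_weight=0.1):
    """Combine a primary face-matcher score in [0, 1] with a list of
    binary soft-biometric agreements using a simple weighted-sum rule.

    With no soft information available, the face score passes through
    unchanged, so soft traits only refine (never replace) the matcher.
    """
    if not soft_matches:
        return face_score
    soft_score = sum(soft_matches) / len(soft_matches)
    return (1 - soft_weight) * face_score + soft_weight * soft_score
```

For example, a 0.8 face score with two of three soft traits agreeing fuses to roughly 0.787 under the default weight.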


IEEE Transactions on Information Forensics and Security | 2010

Soft Biometric Traits for Continuous User Authentication

Koichiro Niinuma; Unsang Park; Anil K. Jain

Most existing computer and network systems authenticate a user only at the initial login session. This could be a critical security weakness, especially for high-security systems, because it enables an impostor to access the system resources until the initial user logs out. This situation is encountered when the logged-in user takes a short break without logging out or an impostor coerces the valid user to allow access to the system. To address this security flaw, we propose a continuous authentication scheme that continuously monitors and authenticates the logged-in user. Previous methods for continuous authentication primarily used hard biometric traits, specifically fingerprint and face, to continuously authenticate the initially logged-in user. However, the use of these biometric traits is not only inconvenient to the user, but is also not always feasible due to the user's posture in front of the sensor. To mitigate this problem, we propose a new framework for continuous user authentication that primarily uses soft biometric traits (e.g., color of the user's clothing and facial skin). The proposed framework automatically registers (enrolls) soft biometric traits every time the user logs in and fuses soft biometric matching with the conventional authentication schemes, namely password and face biometric. The proposed scheme has high tolerance to the user's posture in front of the computer system. Experimental results show the effectiveness of the proposed method for continuous user authentication.
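The key mechanism in the abstract is per-session enrollment: soft traits such as clothing color are registered at each login and then checked periodically. A toy sketch of that loop, assuming a color histogram as the soft trait and histogram intersection as the similarity measure; the class names and threshold are illustrative, not from the paper.

```python
import numpy as np

class SoftBiometricSession:
    """Toy continuous-authentication session: enroll a clothing/skin
    colour histogram at login, then verify periodic captures against it."""

    def __init__(self, threshold=0.6):
        self.threshold = threshold
        self.template = None

    def enroll(self, histogram):
        """Register the soft-biometric template for this login session."""
        h = np.asarray(histogram, dtype=float)
        self.template = h / h.sum()

    def verify(self, histogram):
        """Histogram intersection in [0, 1]; True keeps the session alive."""
        h = np.asarray(histogram, dtype=float)
        h = h / h.sum()
        similarity = np.minimum(self.template, h).sum()
        return similarity >= self.threshold
```

Re-enrolling at every login is what makes a transient trait like clothing color usable: the template only has to stay valid for one session.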


international conference on biometrics theory applications and systems | 2009

Periocular biometrics in the visible spectrum: A feasibility study

Unsang Park; Arun Ross; Anil K. Jain

The periocular biometric refers to the facial region in the immediate vicinity of the eye. Acquisition of the periocular biometric does not require high user cooperation or a close capture distance, unlike other ocular biometrics (e.g., iris, retina, and sclera). We study the feasibility of using periocular images of an individual as a biometric trait. Global and local information are extracted from the periocular region using texture and point operators, resulting in a feature set that can be used for matching. The effect of fusing these feature sets is also studied. The experimental results show a 77% rank-1 recognition accuracy using 958 images captured from 30 different subjects.


IEEE Transactions on Information Forensics and Security | 2011

A Discriminative Model for Age Invariant Face Recognition

Zhifeng Li; Unsang Park; Anil K. Jain

Aging variation poses a serious problem to automatic face recognition systems. Most of the face recognition studies that have addressed the aging problem are focused on age estimation or aging simulation. Designing an appropriate feature representation and an effective matching framework for age invariant face recognition remains an open problem. In this paper, we propose a discriminative model to address face matching in the presence of age variation. In this framework, we first represent each face by designing a densely sampled local feature description scheme, in which scale invariant feature transform (SIFT) and multi-scale local binary patterns (MLBP) serve as the local descriptors. By densely sampling the two kinds of local descriptors from the entire facial image, sufficient discriminatory information, including the distribution of the edge direction in the face image (which is expected to be age invariant), can be extracted for further analysis. Since both SIFT-based local features and MLBP-based local features span a high-dimensional feature space, to avoid the overfitting problem, we develop an algorithm, called multi-feature discriminant analysis (MFDA), to process these two local feature spaces in a unified framework. The MFDA is an extension and improvement of LDA using multiple features combined with two different random sampling methods in feature and sample space. By randomly sampling the training set as well as the feature space, multiple LDA-based classifiers are constructed and then combined to generate a robust decision via a fusion rule. Experimental results show that our approach outperforms a state-of-the-art commercial face recognition engine on two public domain face aging data sets: MORPH and FG-NET. We also compare the performance of the proposed discriminative model with a generative aging model. A fusion of discriminative and generative models further improves the face matching accuracy in the presence of aging.
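The random-subspace idea in MFDA (train many classifiers on random feature subsets, then fuse them) can be sketched in a few lines. This is a deliberately simplified stand-in: a nearest-class-mean classifier replaces the per-subspace LDA the paper uses, and scores are fused with a plain sum rule.

```python
import numpy as np

def random_subspace_ensemble(X_train, y_train, X_test, n_clf=5, frac=0.5, seed=0):
    """Illustration of random-subspace classification with sum-rule fusion.

    Each of the n_clf weak classifiers sees only a random fraction of the
    features; their per-class similarity scores (negative distances to the
    class means) are summed before the final decision.
    """
    rng = np.random.default_rng(seed)
    classes = np.unique(y_train)
    n_feat = X_train.shape[1]
    k = max(1, int(frac * n_feat))
    fused = np.zeros((X_test.shape[0], classes.size))
    for _ in range(n_clf):
        idx = rng.choice(n_feat, size=k, replace=False)
        means = np.stack([X_train[y_train == c][:, idx].mean(axis=0)
                          for c in classes])
        # (n_test, 1, k) - (1, n_class, k) -> distances per test/class pair.
        d = np.linalg.norm(X_test[:, idx][:, None, :] - means[None, :, :], axis=2)
        fused += -d
    return classes[np.argmax(fused, axis=1)]
```

The random sampling over features (and, in the paper, also over training samples) is what keeps each high-dimensional sub-classifier from overfitting.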


computer vision and pattern recognition | 2012

Multimodal feature fusion for robust event detection in web videos

Pradeep Natarajan; Shuang Wu; Shiv Naga Prasad Vitaladevuni; Xiaodan Zhuang; Stavros Tsakalidis; Unsang Park; Rohit Prasad; Premkumar Natarajan

Combining multiple low-level visual features is a proven and effective strategy for a range of computer vision tasks. However, limited attention has been paid to combining such features with information from other modalities, such as audio and videotext, for large scale analysis of web videos. In our work, we rigorously analyze and combine a large set of low-level features that capture appearance, color, motion, audio and audio-visual co-occurrence patterns in videos. We also evaluate the utility of high-level (i.e., semantic) visual information obtained from detecting scene, object, and action concepts. Further, we exploit multimodal information by analyzing available spoken and videotext content using state-of-the-art automatic speech recognition (ASR) and videotext recognition systems. We combine these diverse features using a two-step strategy employing multiple kernel learning (MKL) and late score level fusion methods. Based on the TRECVID MED 2011 evaluations for detecting 10 events in a large benchmark set of ~45000 videos, our system showed the best performance among the 19 international teams.
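The second stage of the fusion strategy above is late score-level fusion: each modality produces its own event score, and the scores are combined afterwards. A minimal weighted-sum sketch, assuming per-modality scores in [0, 1]; the handling of absent modalities (e.g., a video with no detectable videotext) by renormalising the remaining weights is one common convention, not necessarily the paper's.

```python
def late_fusion(modality_scores, weights):
    """Weighted late fusion of per-modality detection scores for one video.

    `modality_scores` maps modality name -> score in [0, 1] or None when
    that modality produced nothing; missing modalities drop out and the
    remaining weights are renormalised.
    """
    present = {m: s for m, s in modality_scores.items() if s is not None}
    total_w = sum(weights[m] for m in present)
    return sum(weights[m] * s for m, s in present.items()) / total_w
```

The earlier MKL stage would instead combine kernels *before* training a classifier; late fusion like this operates purely on the final scores.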


Proceedings of SPIE, the International Society for Optical Engineering | 2008

Fingerprint verification using SIFT features

Unsang Park; Sharath Pankanti; Anil K. Jain

Fingerprints are being extensively used for person identification in a number of commercial, civil, and forensic applications. Most of the current fingerprint verification systems utilize features that are based on minutiae points and ridge patterns. While minutiae based fingerprint verification systems have shown fairly high accuracies, further improvements in their performance are needed for acceptable performance, especially in applications involving very large scale databases. In an effort to extend the existing technology for fingerprint verification, we propose a new representation and matching scheme for fingerprint using Scale Invariant Feature Transformation (SIFT). We extract characteristic SIFT feature points in scale space and perform matching based on the texture information around the feature points using the SIFT operator. A systematic strategy of applying SIFT to fingerprint images is proposed. Using a public domain fingerprint database (FVC 2002), we demonstrate that the proposed approach complements the minutiae based fingerprint representation. Further, the combination of SIFT and conventional minutiae based system achieves significantly better performance than either of the individual schemes.
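Matching SIFT feature points between two fingerprints typically relies on comparing descriptor vectors; a standard way to keep only distinctive correspondences is Lowe's ratio test. The sketch below assumes descriptors have already been extracted (rows of an array, e.g. 128-D SIFT vectors) and is a generic illustration rather than the paper's full matching scheme.

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Lowe-style ratio test between two sets of local descriptors.

    A descriptor in A is accepted only when its nearest neighbour in B is
    clearly closer than the second-nearest, which suppresses ambiguous
    matches in repetitive ridge texture.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        if len(order) > 1 and dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches
```

The number (or total quality) of surviving matches can then serve as a SIFT-based score to be fused with a conventional minutiae score.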


international conference on pattern recognition | 2006

ViSE: Visual Search Engine Using Multiple Networked Cameras

Unsang Park; Anil K. Jain; Itaru Kitahara; Kiyoshi Kogure; Norihiro Hagita

We propose a visual search engine (ViSE) as a semi-automatic component in a surveillance system using networked cameras. ViSE assists operators in monitoring large volumes of captured video by tracking and finding people in the video based on their primitive features, with the interaction of a human operator. We address the issues of object detection and tracking, shadow suppression, and color-based recognition for the proposed system. Experimental results on a set of video data with ten subjects showed that ViSE retrieves correct candidates with 83% recall at 83% precision.
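The recall and precision figures quoted above are the standard retrieval metrics. For reference, a minimal computation of both from a retrieved candidate list and a ground-truth relevant set (illustrative only, not ViSE code):

```python
def precision_recall(retrieved, relevant):
    """Precision (fraction of retrieved items that are relevant) and
    recall (fraction of relevant items that were retrieved)."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall
```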


IEEE MultiMedia | 2012

Face Matching and Retrieval in Forensics Applications

Anil K. Jain; Brendan Klare; Unsang Park

This article surveys forensic face-recognition approaches and the challenges they face in improving matching and retrieval results as well as processing low-quality images.

Collaboration


Dive into Unsang Park's collaboration.

Top Co-Authors


Anil K. Jain

Michigan State University
