Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Yooyoung Lee is active.

Publication


Featured research published by Yooyoung Lee.


Computer Vision and Image Understanding | 2013

Sensitivity analysis for biometric systems: A methodology based on orthogonal experiment designs

Yooyoung Lee; James J. Filliben; Ross J. Micheals; P. Jonathon Phillips

The purpose of this paper is to introduce an effective and structured methodology for carrying out a biometric system sensitivity analysis. The goal of sensitivity analysis is to provide the researcher/developer with insight into and understanding of the key factors (algorithmic, subject-based, procedural, image quality, environmental, among others) that affect the matching performance of the biometric system under study. The proposed methodology consists of two steps: (1) the design and execution of orthogonal fractional factorial experiment designs, which allow the scientist to efficiently investigate the effect of a large number of factors, and their interactions, simultaneously; and (2) the use of a select set of statistical data analysis graphical procedures which are fine-tuned to unambiguously highlight important factors, important interactions, and locally optimal settings. We illustrate this methodology by application to a study of VASIR (Video-based Automated System for Iris Recognition), a NIST iris-based biometric system. In particular, we investigated k=8 algorithmic factors from the VASIR system by constructing a 2^(6-1) x 3^1 x 4^1 orthogonal fractional factorial design, generating the corresponding performance data, and applying an appropriate set of analysis graphics to determine the relative importance of the eight factors, the relative importance of the 28 two-term interactions, and the locally best settings of the eight factors. The results showed that VASIR's performance was primarily driven by six of the eight factors, along with four two-term interactions. A virtue of our two-step methodology is that it is systematic and general, and hence may be applied with equal rigor and effectiveness to other biometric systems, such as fingerprint, face, voice, and DNA.
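The two-level half-fraction at the core of a design like the one above can be sketched in a few lines. This is an illustrative construction, not the paper's software, using the common defining relation in which the sixth factor is generated as the product of the first five:

```python
from itertools import product

# Sketch of a 2^(6-1) orthogonal fractional factorial design:
# 32 runs for six two-level factors (coded -1/+1), rather than
# the 64 runs of the full 2^6 design. The sixth factor F is
# generated via the defining relation F = A*B*C*D*E.
def half_fraction(k=6):
    runs = []
    for base in product((-1, 1), repeat=k - 1):
        f = 1
        for level in base:
            f *= level  # F = product of A..E
        runs.append(base + (f,))
    return runs

design = half_fraction()
print(len(design))  # 32
```

The full design in the paper additionally crosses a half-fraction like this with one three-level and one four-level factor.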


Journal of Research of the National Institute of Standards and Technology | 2013

VASIR: An Open-Source Research Platform for Advanced Iris Recognition Technologies

Yooyoung Lee; Ross J. Micheals; James J. Filliben; P. Jonathon Phillips

The performance of iris recognition systems is frequently affected by input image quality, which in turn is vulnerable to less-than-optimal conditions due to illumination, environment, and subject characteristics (e.g., distance, movement, face/body visibility, blinking). VASIR (Video-based Automatic System for Iris Recognition) is a state-of-the-art NIST-developed iris recognition software platform designed to systematically address these vulnerabilities. We developed VASIR as a research tool that will not only provide a reference (to assess the relative performance of alternative algorithms) for the biometrics community, but will also advance (via this new emerging iris recognition paradigm) NIST's measurement mission. VASIR is designed to accommodate both ideal (e.g., classical still images) and less-than-ideal images (e.g., face-visible videos). VASIR has three primary modules: 1) Image Acquisition, 2) Video Processing, and 3) Iris Recognition. Each module consists of several sub-components that have been optimized by use of rigorous orthogonal experiment design and analysis techniques. We evaluated VASIR performance using the MBGC (Multiple Biometric Grand Challenge) NIR (Near-Infrared) face-visible video dataset and the ICE (Iris Challenge Evaluation) 2005 still-based dataset. The results showed that even though VASIR was primarily developed and optimized for the less-constrained video case, it still achieved high verification rates for the traditional still-image case. For this reason, VASIR may be used as an effective baseline for the biometrics community to evaluate their algorithm performance, and thus serves as a valuable research platform.
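The three-module structure described above can be read as a simple pipeline. The sketch below only illustrates the data flow; `acquire`, `process`, and `recognize` are hypothetical stand-ins, not VASIR's actual API:

```python
# Illustrative data flow through VASIR's three primary modules.
# The three callables are placeholders for the Image Acquisition,
# Video Processing, and Iris Recognition modules respectively.
def iris_pipeline(video, acquire, process, recognize):
    frames = acquire(video)        # 1) Image Acquisition: frames from video or still
    iris_images = process(frames)  # 2) Video Processing: e.g. best-frame selection
    return recognize(iris_images)  # 3) Iris Recognition: segmentation and matching

# Example wiring with trivial stand-ins:
score = iris_pipeline("clip.mp4",
                      acquire=lambda v: [v],
                      process=lambda fs: fs,
                      recognize=lambda imgs: len(imgs))
print(score)  # 1
```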


International Joint Conference on Biometrics (IJCB) | 2014

The IJCB 2014 PaSC video face and person recognition competition

J. Ross Beveridge; Hao Zhang; Patrick J. Flynn; Yooyoung Lee; Venice Erin Liong; Jiwen Lu; Marcus de Assis Angeloni; Tiago de Freitas Pereira; Haoxiang Li; Gang Hua; Vitomir Struc; Janez Krizaj; P. Jonathon Phillips

The Point-and-Shoot Face Recognition Challenge (PaSC) is a performance evaluation challenge including 1401 videos of 265 people acquired with handheld cameras and depicting people engaged in activities with non-frontal head pose. This report summarizes the results from a competition using this challenge problem. In the Video-to-video Experiment a person in a query video is recognized by comparing the query video to a set of target videos. Both target and query videos are drawn from the same pool of 1401 videos. In the Still-to-video Experiment the person in a query video is to be recognized by comparing the query video to a larger target set consisting of still images. Algorithm performance is characterized by verification rate at a false accept rate of 0.01 and associated receiver operating characteristic (ROC) curves. Participants were provided eye coordinates for video frames. Results were submitted by four institutions: (i) Advanced Digital Science Center, Singapore; (ii) CPqD, Brazil; (iii) Stevens Institute of Technology, USA; and (iv) University of Ljubljana, Slovenia. Most competitors demonstrated video face recognition performance superior to the baseline provided with PaSC. The results represent the best performance to date on the handheld video portion of the PaSC.
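The headline metric above, verification rate at a fixed false accept rate, can be estimated from raw similarity scores roughly as follows. This is an illustrative sketch assuming higher scores mean a better match, not the PaSC evaluation code:

```python
def verification_rate(genuine, impostor, far=0.01):
    """Estimate the verification rate at a given false accept rate.

    genuine:  similarity scores for same-person comparisons
    impostor: similarity scores for different-person comparisons
    """
    imp = sorted(impostor, reverse=True)
    k = int(far * len(imp))
    # Threshold chosen so at most `far` of impostor scores exceed it.
    threshold = imp[k] if k < len(imp) else imp[-1]
    return sum(g > threshold for g in genuine) / len(genuine)
```

Sweeping `far` from 0 to 1 and plotting the resulting verification rates traces out the ROC curve reported in the competition.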


International Joint Conference on Biometrics (IJCB) | 2014

Generalizing face quality and factor measures to video.

Yooyoung Lee; P. Jonathon Phillips; James J. Filliben; J. Ross Beveridge; Hao Zhang

Methods for assessing the impact of factors and image-quality metrics for still face images are well-understood. The extension of these factors and quality measures to faces in video has not, however, been explored. We present a specific methodology for carrying out this extension from still to video. Using the Point-and-Shoot Challenge (PaSC) dataset, our study investigates the effect of nine factors on three face recognition algorithms, and identifies the most important factors for algorithm performance in video. We also evaluate four factor metrics for characterizing a single video as well as two comparative metrics for pairs of videos. For video-based face recognition, the analysis shows that distribution-based metrics are generally more effective in quantifying factor values than algorithm-dependent metrics. For predicting face recognition performance in video, we observe that the face detection confidence and face size factors are potentially useful quality measures. From our data, we also find that males are easier to identify than females, and Asians easier to identify than Caucasians. Finally, for this PaSC video dataset, face recognition algorithm performance is primarily driven by environment and sensor factors.
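A "distribution-based" factor metric in the sense above summarizes a per-frame measurement over the whole video with a statistic of its distribution. The sketch below uses a hypothetical per-frame face-size measurement and the median, an illustrative choice rather than the paper's exact definition, together with one possible comparative metric for a pair of videos:

```python
from statistics import median

# Distribution-based per-video factor metric: summarize per-frame
# measurements (here, hypothetical face sizes in pixels) by the
# median over all frames of the video.
def video_factor(per_frame_values):
    return median(per_frame_values)

# Comparative metric for a video pair: a weakest-link summary,
# i.e. the smaller of the two per-video values (illustrative choice,
# not necessarily one of the paper's two pair metrics).
def pair_factor(video_a, video_b):
    return min(video_factor(video_a), video_factor(video_b))
```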


International Conference on Biometrics: Theory, Applications and Systems (BTAS) | 2013

A baseline for assessing biometrics performance robustness: A case study across seven iris datasets

Yooyoung Lee; James J. Filliben; Ross J. Micheals; Michael D. Garris; P. Jonathon Phillips

We examine the robustness of algorithm performance over multiple datasets collected with different sensors. This study provides insight as to whether algorithm performance derived from traditional controlled-environment studies will robustly extrapolate to more challenging stand-off/real-world environments. We argue that a systematic methodology is critical in assuring the validity of algorithmic conclusions over the broader arena of applications. We present a structured evaluation protocol and demonstrate its utility by comparing the performance of an open-source algorithm over seven diverse datasets, spanning six different sensors (three stationary, one handheld, and two stand-off). We also provide baseline results for the ranking of the seven datasets as measured by four performance metrics. Finally, we compare our protocol-based ranking with a parallel ranking based on an independent survey of biometrics experts, demonstrating high correlation between the two rankings.
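Agreement between a protocol-based ranking and an expert-survey ranking can be quantified with a rank correlation. A minimal sketch using Spearman's formula for rankings without ties (the abstract does not specify which correlation measure the paper used):

```python
# Spearman rank correlation for two rankings without ties,
# e.g. seven datasets ranked by a performance metric vs. by
# an expert survey. Returns 1.0 for identical rankings and
# -1.0 for fully reversed ones.
def spearman(rank_a, rank_b):
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d2 / (n * (n * n - 1))

print(spearman([1, 2, 3, 4, 5, 6, 7], [2, 1, 3, 4, 5, 7, 6]))
```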


NIST Interagency/Internal Report (NISTIR) - 7777 | 2011

Robust Iris Recognition Baseline for the Grand Challenge

Yooyoung Lee; Ross J. Micheals; James J. Filliben; P. J. Phillips


NIST Interagency/Internal Report (NISTIR) - 7828 | 2011

Ocular and Iris Recognition Baseline Algorithm

Yooyoung Lee; Ross J. Micheals; James J. Filliben; P. Jonathon Phillips; Hassan Sahibzada


NIST Interagency/Internal Report (NISTIR) - 8004 | 2014

Identifying Face Quality and Factor Measures for Video

Yooyoung Lee; P. Jonathon Phillips; James J. Filliben; J. Ross Beveridge; Hao Zhang


NIST Interagency/Internal Report (NISTIR) - 7855 | 2012

Sensitivity Analysis for Biometric Systems: A Methodology Based on Orthogonal Experiment Designs

Yooyoung Lee; James J. Filliben; Ross J. Micheals; P. J. Phillips


Archive | 2011

Robust Iris Recognition Baseline for the Ocular Challenge

Yooyoung Lee; Ross J. Micheals; James J. Filliben; P. J. Phillips

Collaboration


Dive into Yooyoung Lee's collaborations.

Top Co-Authors

James J. Filliben

National Institute of Standards and Technology

P. Jonathon Phillips

National Institute of Standards and Technology

Ross J. Micheals

National Institute of Standards and Technology

Hao Zhang

Colorado State University

J. Ross Beveridge

Colorado State University

Hassan Sahibzada

National Institute of Standards and Technology

Haoxiang Li

Stevens Institute of Technology