Publication


Featured research published by Yung-Hui Li.


International Conference on Acoustics, Speech, and Signal Processing | 2006

Illumination Tolerant Face Recognition Using a Novel Face From Sketch Synthesis Approach and Advanced Correlation Filters

Yung-Hui Li; Marios Savvides; Vijayakumar Bhagavatula

The current state-of-the-art approach to face sketch recognition transforms all test face images into sketches and then performs recognition in the sketch domain using the composite sketch. We propose the opposite, which has advantages in a real-time system: we generate a realistic face image from the composite sketch using a hybrid subspace method and then build an illumination-tolerant correlation filter that can recognize the person under different illumination variations in surveillance video footage. We show how effectively the proposed algorithm works on the CMU PIE (pose, illumination, and expression) database.
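
To make the matching step concrete, the following is a minimal sketch of frequency-domain correlation matching in Python: an averaged-spectrum template stands in for the advanced correlation filter designs used in the paper, and the peak-to-sidelobe ratio serves as the match score. All names and parameters here are illustrative, not the authors' implementation.

    import numpy as np

    def build_average_filter(train_images):
        # train_images: equally sized 2-D float arrays of one subject
        spectra = [np.fft.fft2(img) for img in train_images]
        return np.mean(spectra, axis=0)            # simple averaged template

    def peak_to_sidelobe_ratio(corr_plane):
        # Sharp, isolated peaks indicate an authentic match
        peak = corr_plane.max()
        sidelobe = np.delete(corr_plane.ravel(), corr_plane.argmax())
        return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-8)

    def match_score(filt, test_image):
        # Cross-correlation computed in the frequency domain via the FFT
        corr = np.fft.ifft2(np.fft.fft2(test_image) * np.conj(filt)).real
        return peak_to_sidelobe_ratio(np.fft.fftshift(corr))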


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2013

An Automatic Iris Occlusion Estimation Method Based on High-Dimensional Density Estimation

Yung-Hui Li; Marios Savvides

Iris masks play an important role in iris recognition. They indicate which parts of the iris texture map are useful and which parts are occluded or contaminated by noisy image artifacts such as eyelashes, eyelids, eyeglass frames, and specular reflections. The accuracy of the iris mask is extremely important: the performance of an iris recognition system decreases dramatically when the iris mask is inaccurate, even when the best recognition algorithm is used. Traditionally, rule-based algorithms have been used to estimate iris masks from iris images, but the accuracy of masks generated this way is questionable. In this work, we propose to use Figueiredo and Jain's Gaussian Mixture Models (FJ-GMMs) to model the underlying probabilistic distributions of both valid and invalid regions of iris images. We also explored possible features and found that a Gabor Filter Bank (GFB) provides the most discriminative information for our goal. Finally, we applied Simulated Annealing (SA) to optimize the parameters of the GFB in order to achieve the best recognition rate. Experimental results show that the masks generated by the proposed algorithm increase the iris recognition rate on both the ICE2 and UBIRIS datasets, verifying the effectiveness and importance of the proposed method for iris occlusion estimation.
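
As a rough illustration of this pipeline, the sketch below labels pixels with Gabor-filter-bank features and one Gaussian mixture per class. scikit-learn's standard EM GaussianMixture stands in for the Figueiredo-Jain variant, and the filter-bank parameters are arbitrary rather than the simulated-annealing-optimized ones from the paper.

    import numpy as np
    from skimage.filters import gabor
    from sklearn.mixture import GaussianMixture

    def gabor_features(img, freqs=(0.1, 0.2), thetas=(0, np.pi / 4, np.pi / 2)):
        # Per-pixel magnitude responses of a small Gabor filter bank
        maps = []
        for f in freqs:
            for t in thetas:
                real, imag = gabor(img, frequency=f, theta=t)
                maps.append(np.hypot(real, imag))
        return np.stack(maps, axis=-1).reshape(-1, len(maps))

    def fit_occlusion_models(imgs, masks, n_components=3):
        # masks: 1 = valid iris texture, 0 = occluded; one GMM per class
        feats = np.vstack([gabor_features(i) for i in imgs])
        labels = np.concatenate([m.ravel() for m in masks])
        valid = GaussianMixture(n_components).fit(feats[labels == 1])
        occluded = GaussianMixture(n_components).fit(feats[labels == 0])
        return valid, occluded

    def predict_mask(img, valid, occluded):
        # Keep a pixel where the valid-texture model is more likely
        f = gabor_features(img)
        return (valid.score_samples(f) > occluded.score_samples(f)).reshape(img.shape)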


Computer Vision and Pattern Recognition | 2007

Kernel Fukunaga-Koontz Transform Subspaces For Enhanced Face Recognition

Yung-Hui Li; Marios Savvides

The traditional linear Fukunaga-Koontz transform (FKT) (Fukunaga and Koontz, 1970) is a powerful approach for building discriminative subspaces. Previous work has successfully extended FKT to deal with the small-sample-size problem. In this paper, we extend the traditional linear FKT to multi-class problems and to higher-dimensional (kernel) subspaces, thereby providing enhanced discrimination ability. We verify the effectiveness of the proposed kernel Fukunaga-Koontz transform in face recognition applications; however, the proposed non-linear generalization can be applied to other domain-specific problems.
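
For reference, the classic two-class linear FKT can be sketched in a few lines of numpy; the paper's actual contribution, the kernelized multi-class extension, is not reproduced here.

    import numpy as np

    def fkt(X1, X2, eps=1e-8):
        # X1, X2: (n_samples, n_features) data matrices for the two classes
        S1 = np.cov(X1, rowvar=False)
        S2 = np.cov(X2, rowvar=False)
        d, V = np.linalg.eigh(S1 + S2)             # eigendecompose the summed covariance
        keep = d > eps
        W = V[:, keep] / np.sqrt(d[keep])          # whitening transform P^(-1/2)
        # In the whitened space the class covariances share eigenvectors and their
        # eigenvalues sum to one, so directions that best represent class 1 are
        # exactly the ones that least represent class 2.
        lam, U = np.linalg.eigh(W.T @ S1 @ W)
        return W @ U, lam                          # basis columns sorted by lam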


Applied Imagery Pattern Recognition Workshop | 2008

Investigating useful and distinguishing features around the eyelash region

Yung-Hui Li; Marios Savvides; Tsuhan Chen

Traditionally, iris recognition has focused on analyzing and extracting features from the iris texture. We propose to investigate the regions around the eyelashes and extract useful information that helps perform ethnic classification. The proposed algorithm is easy to implement and effective. First, we locate the eyelash region by using an Active Shape Model (ASM) to model the eyelid boundary. Second, we extract local patches around the landmarks. After image processing, we are able to separate the eyelashes and extract features from their directions. These features are descriptive and can be used to train classifiers. Experimental results show that our method can perform East-Asian/Caucasian classification with up to 93% accuracy, which shows that the proposed method is useful and promising.
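
A rough sketch of the feature and classification stage is given below, assuming the eyelid landmarks are already available (the ASM fitting step is omitted) and using a gradient-orientation histogram as a stand-in for the eyelash-direction features described in the paper.

    import numpy as np
    from sklearn.svm import SVC

    def eyelash_descriptor(img, landmarks, half=16, bins=18):
        # Landmarks are assumed to lie at least `half` pixels from the border
        gy, gx = np.gradient(img.astype(float))
        angle, mag = np.arctan2(gy, gx), np.hypot(gx, gy)
        feats = []
        for (x, y) in landmarks:
            sl = (slice(y - half, y + half), slice(x - half, x + half))
            h, _ = np.histogram(angle[sl], bins=bins, range=(-np.pi, np.pi),
                                weights=mag[sl])
            feats.append(h / (h.sum() + 1e-8))     # orientation histogram per patch
        return np.concatenate(feats)

    # Usage sketch: descriptors from labelled eye images train a plain classifier.
    # X = np.array([eyelash_descriptor(im, lm) for im, lm in training_set])
    # clf = SVC(kernel="rbf").fit(X, y)            # y: 0 = East-Asian, 1 = Caucasian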


IEEE Transactions on Systems, Man, and Cybernetics | 2016

Extending the Capture Volume of an Iris Recognition System Using Wavefront Coding and Super-Resolution

Sheng-Hsun Hsieh; Yung-Hui Li; Chung-Hao Tien; Chin-Chen Chang

Iris recognition has gained increasing popularity over the last few decades; however, the stand-off distance of a conventional iris recognition system is too short, which limits its applications. In this paper, we propose a novel hardware-software hybrid method to increase the stand-off distance of an iris recognition system. On the hardware side, we use an optimized wavefront coding technique to extend the depth of field. To compensate for the image blur caused by wavefront coding, on the software side the proposed system uses a local patch-based super-resolution method to restore the blurred image to its clear version. The collaborative effect of the new hardware design and software post-processing showed great potential in our experiments, and the results show that such improvement cannot be achieved by a hardware-only or software-only design. The proposed system increases the capture volume of a conventional iris recognition system by three times while maintaining the system's high recognition rate.
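
The software half of the idea can be caricatured as follows: a dictionary of corresponding blurred/sharp patch pairs is built from training data, and each patch of the wavefront-coded test image is replaced by the sharp counterpart of its nearest blurred neighbour. This naive substitution only gestures at the local patch-based super-resolution method actually used; all parameters are illustrative.

    import numpy as np

    def extract_patches(img, size=8, step=4):
        coords, patches = [], []
        for y in range(0, img.shape[0] - size + 1, step):
            for x in range(0, img.shape[1] - size + 1, step):
                coords.append((y, x))
                patches.append(img[y:y + size, x:x + size].ravel())
        return coords, np.array(patches)

    def restore(blurred_test, blurred_train, sharp_train, size=8, step=4):
        # blurred_train and sharp_train are a registered image pair
        _, dict_blur = extract_patches(blurred_train, size, step)
        _, dict_sharp = extract_patches(sharp_train, size, step)
        out = np.zeros_like(blurred_test, dtype=float)
        weight = np.zeros_like(out)
        for (y, x), p in zip(*extract_patches(blurred_test, size, step)):
            nearest = np.argmin(((dict_blur - p) ** 2).sum(axis=1))
            out[y:y + size, x:x + size] += dict_sharp[nearest].reshape(size, size)
            weight[y:y + size, x:x + size] += 1
        return out / np.maximum(weight, 1)         # average overlapping patches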


International Conference on Acoustics, Speech, and Signal Processing | 2009

A pixel-wise, learning-based approach for occlusion estimation of iris images in polar domain

Yung-Hui Li; Marios Savvides

On normalized iris images there are many kinds of noise, such as eyelids, eyelashes, shadows, and specular reflections, that often occlude the true iris texture. If a high recognition rate is desired, those occluded areas must be estimated accurately so that they can be excluded during the matching stage. In this paper, we propose a unified, probabilistic, learning-based approach that estimates all kinds of occlusion within one model. Experiments show that our method not only estimates occlusion very accurately but also does so at high speed, which makes it useful for practical iris recognition systems.
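
A minimal sketch of the pixel-wise idea is shown below, with logistic regression over simple per-pixel statistics standing in for the probabilistic model of the paper; the feature choices and window sizes are assumptions made for illustration.

    import numpy as np
    from scipy.ndimage import uniform_filter
    from sklearn.linear_model import LogisticRegression

    def pixel_features(polar_img):
        # One row per pixel: intensity, local mean, local variance
        img = polar_img.astype(float)
        mean = uniform_filter(img, size=7)
        var = uniform_filter(img ** 2, size=7) - mean ** 2
        return np.stack([img, mean, var], axis=-1).reshape(-1, 3)

    def train_occlusion_classifier(polar_imgs, masks):
        # masks: 1 = occluded, 0 = valid iris texture, per pixel
        X = np.vstack([pixel_features(p) for p in polar_imgs])
        y = np.concatenate([m.ravel() for m in masks])
        return LogisticRegression(max_iter=1000).fit(X, y)

    def estimate_mask(clf, polar_img):
        return clf.predict(pixel_features(polar_img)).reshape(polar_img.shape)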


International Conference on Acoustics, Speech, and Signal Processing | 2014

Heterogeneous iris recognition using heterogeneous eigeniris and sparse representation

Bo-Ren Zheng; Dai-Yan Ji; Yung-Hui Li

When the iris images for training and testing are acquired by different iris image sensors, the recognition rate is degraded compared to the case where both sets of images are acquired by the same sensor. This problem is called “heterogeneous iris recognition”. In this paper, we propose two novel patch-based heterogeneous dictionary learning methods using heterogeneous eigeniris and sparse representation, which learn the basic atoms of iris textures across different image sensors and build connections between them. Once such connections are built, it is possible at the testing stage to hallucinate (synthesize) iris images across different sensors. By matching training images with hallucinated images, the recognition rate can be successfully enhanced. Experimenting with an iris database consisting of 3015 images, we show that the proposed method using sparse representation decreases the EER by 23.9% relative, which proves the effectiveness of the proposed image hallucination method.
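
The coupled-dictionary idea can be sketched as follows: corresponding patches from sensor A and sensor B are concatenated, a single dictionary is learned over the concatenation, and at test time the sparse code of an A-patch against the A-half is reused with the B-half to hallucinate the B-domain patch. This is an illustrative variant built on scikit-learn, not the exact method of the paper.

    import numpy as np
    from sklearn.decomposition import DictionaryLearning, SparseCoder

    def learn_coupled_dictionaries(patches_a, patches_b, n_atoms=64):
        # patches_a[i] and patches_b[i] must depict the same iris region
        joint = np.hstack([patches_a, patches_b])        # (n_pairs, 2 * patch_dim)
        dl = DictionaryLearning(n_components=n_atoms).fit(joint)
        d = patches_a.shape[1]
        return dl.components_[:, :d], dl.components_[:, d:]   # D_A, D_B

    def hallucinate_b_from_a(patch_a, D_A, D_B, n_nonzero=5):
        # Normalise the A-half atoms for OMP coding, then undo the scaling
        norms = np.linalg.norm(D_A, axis=1, keepdims=True) + 1e-12
        coder = SparseCoder(dictionary=D_A / norms, transform_algorithm="omp",
                            transform_n_nonzero_coefs=n_nonzero)
        code = coder.transform(patch_a.reshape(1, -1)) / norms.T
        return (code @ D_B).ravel()                      # synthesised sensor-B patch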


Archive | 2007

Frequency Domain Face Recognition

Marios Savvides; Ramamurthy Bhagavatula; Yung-Hui Li; Ramzi Abiantun

In the ever-expanding field of biometrics, the choice of which biometric modality or modalities to use is a difficult one. While a particular biometric modality might offer superior discriminative properties (or be more stable over a longer period of time) compared to another modality, it might also be considerably harder to acquire. As such, the use of the human face as a biometric modality offers the attractive combination of significant discrimination with the least amount of intrusiveness. In this sense, the majority of biometric systems whose primary modality is the face emphasize analysis of the spatial representation of the face, i.e., the intensity image. While varying and significant levels of performance have been achieved with spatial 2-D data, there is significant theoretical work and empirical evidence supporting the use of a frequency domain representation to achieve greater face recognition performance. The Fourier transform allows us to quickly and easily obtain raw frequency data that is significantly more discriminative (after appropriate data manipulation) than the raw spatial data from which it was derived. We can further increase discrimination through additional signal transforms and feature extraction algorithms designed for the frequency domain, achieving significantly improved performance and distortion tolerance compared to their spatial domain counterparts. In this chapter we review, outline, and present theory and results that elaborate on frequency domain processing and representations for enhanced face recognition. The second section is a brief literature review of various face recognition algorithms. The third section focuses on two points: a review of commonly used algorithms such as Principal Component Analysis (PCA) (Turk and Pentland, 1991) and Fisher Linear Discriminant Analysis (FLDA) (Belhumeur et al., 1997), and their novel use in conjunction with frequency domain processed data to enhance the face recognition ability of these algorithms. A comparison of performance using spatial versus processed and un-processed frequency domain data is presented. The fourth section is a thorough analysis and derivation of a family of advanced frequency domain matching algorithms collectively known as Advanced Correlation Filters (ACFs). It is in this section that the most significant discussion occurs, as ACFs represent the latest advances in frequency domain facial recognition algorithms with specifically built-in distortion tolerance. In the fifth section we present results of more recent research involving ACFs and face recognition. The final
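
A minimal sketch of the frequency-domain preprocessing discussed here: represent each face by the log-magnitude of its 2-D Fourier transform and feed that representation to a standard subspace method (PCA below; FLDA would be analogous). The feature choice and parameters are illustrative only.

    import numpy as np
    from sklearn.decomposition import PCA

    def frequency_features(face_img):
        # Log-magnitude spectrum as the frequency-domain face representation
        spectrum = np.fft.fftshift(np.fft.fft2(face_img))
        return np.log1p(np.abs(spectrum)).ravel()

    def train_subspace(face_imgs, n_components=50):
        X = np.array([frequency_features(f) for f in face_imgs])
        return PCA(n_components=n_components).fit(X)

    # At test time, project frequency_features(test_face) with .transform() and
    # match by nearest neighbour (or a trained classifier) in the subspace.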


International Symposium on Photoelectronic Detection and Imaging 2013: Infrared Imaging and Applications | 2013

Biometric iris image acquisition system with wavefront coding technology

Sheng-Hsun Hsieh; Hsi-Wen Yang; Shao-Hung Huang; Yung-Hui Li; Chung-Hao Tien

Biometric signatures for identity recognition have been practiced for centuries. Basically, the personal attributes used for a biometric identification system can be classified into two areas: one is based on physiological attributes, such as DNA, facial features, retinal vasculature, fingerprint, hand geometry and iris texture; the other is based on individual behavioral attributes, such as signature, keystroke, voice and gait style. Among these features, iris recognition is one of the most attractive approaches due to its randomness, texture stability over a lifetime, high entropy density and non-invasive acquisition. While the performance of iris recognition on high-quality images is well investigated, few studies have addressed how iris recognition performs on non-ideal image data, especially when the data is acquired under challenging conditions such as long working distance, dynamic movement of subjects, and uncontrolled illumination. There are three main contributions in this paper. Firstly, the optical system parameters, such as magnification and field of view, were optimally designed through first-order optics. Secondly, the irradiance constraints were derived from the optical conservation theorem; through the relationship between the subject and the detector, we could estimate the limit on working distance once the camera lens and CCD sensor were known. The working distance is set to 3 m in our system, with a pupil diameter of 86 mm and CCD irradiance of 0.3 mW/cm2. Finally, we employed a hybrid scheme combining eye tracking with a pan-and-tilt system, wavefront coding technology, filter optimization, and post-processing for signal recognition to implement a robust iris recognition system in dynamic operation. The blurred image was restored to ensure recognition accuracy over a 3 m working distance with 400 mm focal length and F/6.3 optics. The simulation results as well as experiments validate the proposed coded-aperture imaging system, in which the imaging volume is extended 2.57 times over traditional optics while keeping sufficient recognition accuracy.
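
As a rough first-order sanity check of the quoted geometry, the thin-lens equation with the stated 400 mm focal length and 3 m working distance gives the image distance and magnification below (the actual design uses a fuller first-order analysis, so this is only an approximation).

    f = 400.0                           # focal length in mm
    u = 3000.0                          # working (object) distance in mm
    v = 1.0 / (1.0 / f - 1.0 / u)       # image distance from the thin-lens equation
    m = v / u                           # transverse magnification
    print(f"image distance ~ {v:.1f} mm, magnification ~ {m:.3f}")
    # -> image distance ~ 461.5 mm, magnification ~ 0.154, so an iris of roughly
    #    12 mm maps to about 1.8 mm on the sensor, which sets the resolution budget.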


Proceedings of SPIE, the International Society for Optical Engineering | 2006

Faces from sketches: a subspace synthesis approach

Yung-Hui Li; Marios Savvides

In real-life scenarios, we may need to perform face recognition for identification when only a sketch of the face is available; for example, when police try to identify a suspect from a sketch drawn by an artist according to witness descriptions, what they have in hand is a sketch of the suspect together with many real face images acquired from video surveillance. So far, the state-of-the-art approach to this problem transforms all real face images into sketches and performs recognition in the sketch domain. We propose the opposite, which is a better approach: we generate a realistic face image from the composite sketch using a hybrid subspace method and then build an illumination-tolerant correlation filter that can recognize the person under different illumination variations. We show experimental results on the CMU PIE (Pose, Illumination, and Expression) database demonstrating the effectiveness of our novel approach.
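
The subspace-synthesis idea can be illustrated with a small eigentransformation-style sketch: express the input sketch as a weighted combination of training sketches and apply the same weights to the paired training photos. This shows the general concept only; the paper's hybrid subspace method is more involved.

    import numpy as np

    def synthesize_photo(sketch, train_sketches, train_photos, reg=1e-3):
        # train_sketches / train_photos: (n_pairs, n_pixels); row i is a pair
        mean_s = train_sketches.mean(axis=0)
        S = train_sketches - mean_s
        s = sketch - mean_s
        # Ridge-regularised least-squares weights reconstructing the input sketch
        G = S @ S.T + reg * np.eye(S.shape[0])
        w = np.linalg.solve(G, S @ s)
        # Apply the same combination weights in the photo domain
        P = train_photos - train_photos.mean(axis=0)
        return train_photos.mean(axis=0) + w @ P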

Collaboration


Dive into Yung-Hui Li's collaborations.

Top Co-Authors

Marios Savvides, Carnegie Mellon University
Chung-Hao Tien, National Chiao Tung University
Sheng-Hsun Hsieh, National Chiao Tung University
Bo-Ren Zheng, National Central University
Che Wun Chiou, Chien Hsin University of Science and Technology
Chiou-Yng Lee, Lunghwa University of Science and Technology