Network


Latest external collaborations at the country level.

Hotspot


Research topics where Raghavender R. Jillela is active.

Publication


Featured research published by Raghavender R. Jillela.


IEEE Transactions on Information Forensics and Security | 2011

Periocular Biometrics in the Visible Spectrum

Unsang Park; Raghavender R. Jillela; Arun Ross; Anil K. Jain

The term periocular refers to the facial region in the immediate vicinity of the eye. Acquisition of the periocular biometric is expected to require less subject cooperation while permitting a larger depth of field compared to traditional ocular biometric traits (viz., iris, retina, and sclera). In this work, we study the feasibility of using the periocular region as a biometric trait. Global and local information are extracted from the periocular region using texture and point operators resulting in a feature set for representing and matching this region. A number of aspects are studied in this work, including the 1) effectiveness of incorporating the eyebrows, 2) use of side information (left or right) in matching, 3) manual versus automatic segmentation schemes, 4) local versus global feature extraction schemes, 5) fusion of face and periocular biometrics, 6) use of the periocular biometric in partially occluded face images, 7) effect of disguising the eyebrows, 8) effect of pose variation and occlusion, 9) effect of masking the iris and eye region, and 10) effect of template aging on matching performance. Experimental results show a rank-one recognition accuracy of 87.32% using 1136 probe and 1136 gallery periocular images taken from 568 different subjects (2 images/subject) in the Face Recognition Grand Challenge (version 2.0) database with the fusion of three different matchers.
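
As a rough illustration of the texture and point operators mentioned above, the following Python sketch extracts a global LBP histogram and local SIFT keypoints from a periocular image. The specific operators, parameters, and file names here are illustrative assumptions, not the exact configuration used in the paper.

import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def periocular_features(gray):
    # Global texture descriptor: uniform LBP histogram over the whole region.
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=np.arange(0, 11), density=True)
    # Local descriptor: SIFT keypoints and descriptors (a point operator).
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    return hist, keypoints, descriptors

# Example usage (hypothetical file name):
# gray = cv2.imread("periocular.png", cv2.IMREAD_GRAYSCALE)
# global_hist, kps, descs = periocular_features(gray)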


International Conference on Pattern Recognition | 2010

On the Fusion of Periocular and Iris Biometrics in Non-ideal Imagery

Damon L. Woodard; Shrinivas J. Pundlik; Philip E. Miller; Raghavender R. Jillela; Arun Ross

Human recognition based on the iris biometric is severely impacted when encountering non-ideal images of the eye characterized by occluded irises, motion and spatial blur, poor contrast, and illumination artifacts. This paper discusses the use of the periocular region surrounding the iris, along with the iris texture patterns, in order to improve the overall recognition performance in such images. Periocular texture is extracted from a small, fixed region of the skin surrounding the eye. Experiments on the images extracted from the Near Infra-Red (NIR) face videos of the Multi Biometric Grand Challenge (MBGC) dataset demonstrate that valuable information is contained in the periocular region and it can be fused with the iris texture to improve the overall identification accuracy in non-ideal situations.
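
A minimal sketch of the idea, assuming the eye center is already known: crop a fixed rectangular skin region around the eye, then combine a periocular match score with an iris match score by a weighted sum. The region size and fusion weight below are placeholders, not values from the paper.

def crop_periocular(frame, eye_center, width=120, height=80):
    # frame: NumPy image array; eye_center: (x, y) pixel coordinates.
    x, y = eye_center
    x0 = max(x - width // 2, 0)
    y0 = max(y - height // 2, 0)
    return frame[y0:y0 + height, x0:x0 + width]

def fuse_scores(iris_score, periocular_score, weight=0.5):
    # Weighted-sum score-level fusion of the iris and periocular matchers.
    return weight * iris_score + (1.0 - weight) * periocular_score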


International Conference on Biometrics | 2012

Matching highly non-ideal ocular images: An information fusion approach

Arun Ross; Raghavender R. Jillela; Jonathon M. Smereka; Vishnu Naresh Boddeti; B. V. K. Vijaya Kumar; Ryan T. Barnard; Xiaofei Hu; Paul Pauca; Robert J. Plemmons

We consider the problem of matching highly non-ideal ocular images where the iris information cannot be reliably used. Such images are characterized by non-uniform illumination, motion and de-focus blur, off-axis gaze, and non-linear deformations. To handle these variations, a single feature extraction and matching scheme is not sufficient. Therefore, we propose an information fusion framework where three distinct feature extraction and matching schemes are utilized in order to handle the significant variability in the input ocular images. The Gradient Orientation Histogram (GOH) scheme extracts the global information in the image; the modified Scale Invariant Feature Transform (SIFT) extracts local edge anomalies in the image; and a Probabilistic Deformation Model (PDM) handles nonlinear deformations observed in image pairs. The simple sum rule is used to combine the match scores generated by the three schemes. Experiments on the extremely challenging Face and Ocular Challenge Series (FOCS) database and a subset of the Face Recognition Grand Challenge (FRGC) database confirm the efficacy of the proposed approach to perform ocular recognition.
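
The score combination step can be illustrated with a short Python sketch: each matcher's scores are min-max normalized (an assumption made here so the scores share a common range) and then combined with the simple sum rule.

import numpy as np

def min_max_normalize(scores):
    scores = np.asarray(scores, dtype=float)
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo) if hi > lo else np.zeros_like(scores)

def sum_rule(score_lists):
    # Each element of score_lists holds one matcher's scores for the same comparisons.
    normalized = [min_max_normalize(s) for s in score_lists]
    return np.sum(normalized, axis=0)

# fused = sum_rule([goh_scores, sift_scores, pdm_scores])  # hypothetical score arrays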


International Conference on Biometrics: Theory, Applications and Systems | 2012

Mitigating effects of plastic surgery: Fusing face and ocular biometrics

Raghavender R. Jillela; Arun Ross

The task of successfully matching face images obtained before and after plastic surgery is a challenging problem. The degree to which a face is altered depends on the type and number of plastic surgeries performed, and it is difficult to model such variations. Existing approaches use learning-based methods that are either computationally expensive or rely on a set of training images. In this work, a fusion approach is proposed that combines information from the face and ocular regions to enhance recognition performance in the identification mode. The proposed approach provides the highest reported recognition performance on a publicly accessible plastic surgery database, with a rank-one accuracy of 87.4%. Compared to existing approaches, the proposed approach is not learning-based and reduces computational requirements. Furthermore, a systematic study of the matching accuracies corresponding to various types of surgeries is presented.
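
Since results are reported in the identification mode, a brief sketch of how rank-one accuracy can be computed from a probe-by-gallery matrix of fused scores may be helpful; this is generic evaluation code, not the paper's implementation.

import numpy as np

def rank_one_accuracy(similarity, true_gallery_index):
    # similarity: (num_probes, num_gallery) array of fused match scores.
    # true_gallery_index: array giving each probe's correct gallery entry.
    predicted = np.argmax(similarity, axis=1)
    return float(np.mean(predicted == true_gallery_index))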


Pattern Recognition Letters | 2015

Segmenting iris images in the visible spectrum with applications in mobile biometrics

Raghavender R. Jillela; Arun Ross

Highlights: an overview of mobile-device-based biometric recognition; an outline of the various steps involved in iris recognition; and a review of some popular iris segmentation algorithms applicable to images acquired in the visible (VIS) spectrum.

The widespread use of mobile devices with Internet connectivity has resulted in the storage and transmission of sensitive data. This has heightened the need to perform reliable user authentication on mobile devices in order to prevent an adversary from accessing such data. Biometrics, the science of recognizing individuals based on their biological and behavioral traits, has the potential to be leveraged for this purpose. In this work, we briefly discuss the suitability of using the iris texture for biometric recognition on mobile devices. One of the critical components of an iris recognition system is the segmentation module, which separates the iris from other ocular attributes. Since current mobile devices acquire color images of an object, we review the literature on automated iris segmentation in the visible spectrum. The goal is to convey the possibility of successfully incorporating iris recognition in mobile devices.
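
As one concrete baseline among the reviewed approaches, the following sketch estimates a circular iris boundary in a visible-spectrum image with OpenCV's Hough circle transform. The channel choice and parameters are illustrative guesses; the VIS segmentation algorithms surveyed in the paper are considerably more involved.

import cv2
import numpy as np

def segment_iris_vis(bgr_image):
    # The red channel often shows better iris/sclera contrast in VIS images.
    red = cv2.medianBlur(bgr_image[:, :, 2], 7)
    circles = cv2.HoughCircles(red, cv2.HOUGH_GRADIENT, dp=2, minDist=100,
                               param1=100, param2=50, minRadius=30, maxRadius=120)
    if circles is None:
        return None
    x, y, r = np.round(circles[0, 0]).astype(int)
    mask = np.zeros(red.shape, dtype=np.uint8)
    cv2.circle(mask, (x, y), r, 255, thickness=-1)
    return mask  # binary mask of the estimated iris region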


Workshop on Applications of Computer Vision | 2011

Information fusion in low-resolution iris videos using Principal Components Transform

Raghavender R. Jillela; Arun Ross; Patrick J. Flynn

The focus of this work is on improving the recognition performance of low-resolution iris video frames acquired under varying illumination. To facilitate this, an image-level fusion scheme with modest computational requirements is proposed. The proposed algorithm uses the evidence of multiple image frames of the same iris to extract discriminatory information via the Principal Components Transform (PCT). Experimental results on a subset of the MBGC NIR iris database demonstrate the utility of this scheme to achieve improved recognition accuracy when low-resolution probe images are compared against high-resolution gallery images.
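
A minimal sketch of principal-components-based image-level fusion, assuming the low-resolution frames are already registered: the fused frame is the projection of the frame stack onto its first principal component. The paper's exact PCT formulation may differ.

import numpy as np

def pct_fuse(frames):
    # frames: (N, H, W) stack of registered low-resolution iris frames.
    n, h, w = frames.shape
    X = frames.reshape(n, -1).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean                    # center each pixel across the frames
    cov = Xc @ Xc.T                  # small N x N covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    weights = eigvecs[:, -1]         # eigenvector of the largest eigenvalue
    fused = weights @ Xc + mean      # linear combination of the input frames
    return fused.reshape(h, w)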


Pattern Recognition | 2017

Long range iris recognition

Kien Nguyen; Clinton Fookes; Raghavender R. Jillela; Sridha Sridharan; Arun Ross

The term iris refers to the highly textured annular portion of the human eye that is externally visible. An iris recognition system exploits the richness of these textural patterns to distinguish individuals. Iris recognition systems are being used in a number of human recognition applications such as access control, national ID schemes, border control, etc. To capture the rich textural information of the iris pattern regardless of the eye color, traditional iris recognition systems utilize near-infrared (NIR) sensors to acquire images of the iris. This, however, restricts the iris image acquisition distance to close quarters (less than 1m). Over the last several years, there have been numerous attempts to design and implement iris recognition systems that operate at longer standoff distances ranging from 1m to 60m. Such long range iris acquisition and recognition systems can provide high user convenience and improved throughput. This paper reviews the state-of-the-art design and implementation of iris-recognition-at-a-distance (IAAD) systems. In this regard, the design of such a system from both the image acquisition (hardware) and image processing (algorithms) perspectives are presented. The major contributions of this paper include: (1) discussing the significance and applications of IAAD systems in the context of human recognition, (2) providing a review of existing IAAD systems, (3) presenting a complete solution to the design problem of an IAAD system, from both hardware and algorithmic perspectives, (4) discussing the use of additional ocular information, along with iris, for improving IAAD accuracy, and (5) discussing the current research challenges and providing recommendations for future research in IAAD.


International Conference on Image Processing | 2014

Matching face against iris images using periocular information

Raghavender R. Jillela; Arun Ross

We consider the problem of matching face against iris images using ocular information. In biometrics, face and iris images are typically acquired using sensors operating in the visible (VIS) and near-infrared (NIR) spectra, respectively. This presents a challenging problem of matching images corresponding to different biometric modalities, imaging spectra, and spatial resolutions. We propose the use of ocular traits that are common between face and iris images (viz., the iris and the ocular region) to perform matching. Iris matching is performed using commercial software, while ocular regions are matched using three different techniques: Local Binary Patterns (LBP), Normalized Gradient Correlation (NGC), and Joint Dictionary-based Sparse Representation (JDSR). Experimental results on a database containing 1358 images of 704 subjects indicate that the ocular region can provide better performance than the iris biometric under a challenging cross-modality matching scenario.
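
Of the three region-matching techniques, Normalized Gradient Correlation is easy to illustrate: image gradients are normalized to unit magnitude and correlated. The sketch below is a bare-bones version of that idea, not the implementation used in the paper.

import numpy as np

def ngc_similarity(img1, img2, eps=1e-8):
    def unit_gradients(img):
        gy, gx = np.gradient(img.astype(float))
        g = gx + 1j * gy                 # complex gradient field
        return g / (np.abs(g) + eps)     # normalize to unit magnitude
    g1, g2 = unit_gradients(img1), unit_gradients(img2)
    # Correlation of normalized gradients; 1.0 means identical gradient fields.
    return float(np.mean(np.real(g1 * np.conj(g2))))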


International Symposium on Neural Networks | 2009

Adaptive frame selection for improved face recognition in low-resolution videos

Raghavender R. Jillela; Arun Ross

Performing face detection and identification in low-resolution videos (e.g., surveillance videos) is a challenging task. The task entails extracting an unknown face image from the video and comparing it against identities in the gallery database. To facilitate biometric recognition in such videos, fusion techniques may be used to consolidate the facial information of an individual, available across successive low-resolution frames. For example, super-resolution schemes can be used to improve the spatial resolution of facial objects contained in these videos (image-level fusion). However, the output of the super-resolution routine can be significantly affected by large changes in facial pose in the constituent frames. To mitigate this concern, an adaptive frame selection technique is developed in this work. The proposed technique automatically disregards frames that can cause severe artifacts in the super-resolved output, by examining the optical flow matrices pertaining to successive frames. Experimental results demonstrate an improvement in the identification performance when the proposed technique is used to automatically select the input frames necessary for super-resolution. In addition, improvements in output image quality and computation time are observed. The paper also compares image-level fusion against score-level fusion where the low-resolution frames are first spatially interpolated and the simple sum rule is used to consolidate the match scores corresponding to the interpolated frames. On comparing the two fusion methods, it is observed that score-level fusion outperforms image-level fusion.
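
The frame selection idea can be sketched as follows: dense optical flow is computed between successive frames, and frames whose average flow magnitude exceeds a threshold are discarded before super-resolution. The flow parameters and threshold below are illustrative assumptions.

import cv2
import numpy as np

def select_frames(gray_frames, max_mean_flow=2.0):
    # gray_frames: list of 8-bit grayscale frames from the video.
    selected = [gray_frames[0]]
    for frame in gray_frames[1:]:
        flow = cv2.calcOpticalFlowFarneback(selected[-1], frame, None,
                                            pyr_scale=0.5, levels=3, winsize=15,
                                            iterations=3, poly_n=5,
                                            poly_sigma=1.2, flags=0)
        magnitude = np.linalg.norm(flow, axis=2).mean()
        if magnitude <= max_mean_flow:   # keep frames with modest pose change
            selected.append(frame)
    return selected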


Handbook of Iris Recognition | 2013

Methods for Iris Segmentation

Raghavender R. Jillela; Arun Ross

Under ideal image acquisition conditions, the iris biometric has been observed to provide high recognition performance compared to other biometric traits. Such a performance is possible by accurately segmenting the iris region from the given ocular image. This chapter discusses the challenges associated with the segmentation process, along with some of the prominent iris segmentation techniques proposed in the literature. The methods are presented according to their suitability for segmenting iris images acquired under different wavelengths of illumination. Furthermore, methods to refine and evaluate the output of the iris segmentation routine are presented. The goal of this chapter is to provide a brief overview of the progress made in iris segmentation.
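
As a simple example of evaluating a segmentation output, the sketch below scores a predicted iris mask against a ground-truth mask using intersection over union; the chapter's own refinement and evaluation methods may differ.

import numpy as np

def segmentation_iou(predicted_mask, ground_truth_mask):
    # Both inputs are binary masks of the same size.
    pred = predicted_mask.astype(bool)
    truth = ground_truth_mask.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return float(intersection / union) if union else 1.0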

Collaboration


Raghavender R. Jillela's collaborators and their affiliations.

Top Co-Authors

Arun Ross, Michigan State University
Paul Pauca, Wake Forest University
Xiaofei Hu, Wake Forest University
Anil K. Jain, Michigan State University