Sriram Pavan Tankasala
University of Missouri–Kansas City
Publications
Featured research published by Sriram Pavan Tankasala.
IEEE International Conference on Technologies for Homeland Security | 2012
Vikas Gottemukkula; Sashi K. Saripalle; Sriram Pavan Tankasala; Reza Derakhshani; Raghunandan Pasula; Arun Ross
Ocular biometrics refers to the imaging and use of characteristic features of the eyes for personal identification. Traditionally, the iris has been viewed as a powerful ocular biometric cue. However, the iris is typically imaged in the near infrared (NIR) spectrum. RGB images of the iris, acquired in the visible spectrum, offer limited biometric information for dark-colored irides. In this work, we explore the possibility of performing ocular biometric recognition in the visible spectrum by utilizing the iris in conjunction with the vasculature observed in the white of the eye. We design a weighted fusion scheme to combine the information originating from these two modalities. Experiments on a dataset of 50 subjects indicate that such a fusion scheme improves the equal error rate by a margin of 4.5% over an iris-only approach.
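The weighted fusion of iris and vasculature match scores can be sketched as follows; the min-max normalization and the example weight are illustrative assumptions, since the paper's exact fusion parameters are not reproduced here:

```python
import numpy as np

def fuse_scores(iris_scores, vasc_scores, w=0.5):
    """Weighted-sum fusion of two match-score arrays (higher = better match)."""
    iris = np.asarray(iris_scores, dtype=float)
    vasc = np.asarray(vasc_scores, dtype=float)

    def norm(s):
        # Min-max normalize each modality to [0, 1] before mixing,
        # so neither score range dominates the weighted sum.
        rng = s.max() - s.min()
        return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)

    return w * norm(iris) + (1.0 - w) * norm(vasc)

# Three comparison pairs scored by each modality, then fused:
fused = fuse_scores([0.2, 0.9, 0.5], [0.1, 0.8, 0.9], w=0.5)
```

A decision threshold is then applied to the fused score rather than to either modality alone.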
IEEE International Conference on Image Information Processing | 2011
Sriram Pavan Tankasala; Plamen Doynov; Reza Derakhshani; Arun Ross; Simona Crihalmeanu
Besides the iris, conjunctival vasculature may also be used for ocular biometric recognition. Conjunctival vessel patterns can be easily observed in the visible spectrum and can compensate for off-angle or otherwise occluded iridial texture. In this paper, classification of conjunctival vasculature using the Gray Level Co-occurrence Matrix (GLCM) is studied. Statistical features of the GLCM, i.e., contrast, correlation, energy, and homogeneity, were used in conjunction with Fisher linear discriminant analysis (LDA) and regularized neural network classifiers to recognize textures arising from conjunctival vessels. The individual classifiers achieved a test-set equal error rate (EER) of 13.97% and an area under the receiver operating characteristic curve (ROC AUC) of 0.9333; these figures improved to 11.9% and 0.9504 after match-score-level fusion of the LDA and neural network scores.
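The four GLCM statistics named above have standard definitions; a minimal sketch for a single pixel offset (the paper's quantization level and offsets are not specified here, so the defaults are assumptions):

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Normalized Gray Level Co-occurrence Matrix for one (dx, dy) offset."""
    img = np.asarray(img)
    P = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1
    return P / P.sum()

def glcm_stats(P):
    """Contrast, correlation, energy, homogeneity - the four features used."""
    n = P.shape[0]
    i, j = np.mgrid[0:n, 0:n]
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * P).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * P).sum())
    contrast = ((i - j) ** 2 * P).sum()
    correlation = (((i - mu_i) * (j - mu_j) * P).sum() / (sd_i * sd_j)
                   if sd_i > 0 and sd_j > 0 else 1.0)
    energy = (P ** 2).sum()
    homogeneity = (P / (1.0 + np.abs(i - j))).sum()
    return contrast, correlation, energy, homogeneity
```

On a constant image the matrix is concentrated on the diagonal, so contrast is 0 and energy and homogeneity are 1 - a quick sanity check on the feature definitions.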
IEEE International Conference on Technologies for Homeland Security | 2012
Sriram Pavan Tankasala; Vikas Gottemukkula; Sashi K. Saripalle; Venkata Goutam Nalamati; Reza Derakhshani; Raghunandan Pasula; Arun Ross
We design and implement a hyper-focal imaging system for acquiring iris images in the visible spectrum. The proposed system uses a Canon T2i DSLR camera and an Okii controller to capture videos of the ocular region at multiple focal lengths. The ensuing frames are fused to yield a single image with higher fidelity. Further, the proposed setup extends the imaging depth of field (DOF), thereby preempting the need for expensive cameras with increased DOF. Experiments demonstrate the benefits of a hyper-focal system over a traditional fixed-focus system for iris recognition in the visible spectrum.
IEEE International Conference on Technologies for Homeland Security | 2013
Sriram Pavan Tankasala; Plamen Doynov; Reza Derakhshani
Directional pyramidal filter banks are proposed as feature extractors for ocular vascular biometrics. Apart from the red, green, and blue (RGB) format, we analyze the significance of using HSV, YCbCr, and the layer combinations (R+Cr)/2, (G+Cr)/2, and (B+Cr)/2. For classification, Linear Discriminant Analysis (LDA) is used. We outline the advantages of a Contourlet transform implementation for eye vein biometrics, based on vascular patterns seen on the white of the eye. The performance of the proposed algorithm is evaluated using Receiver Operating Characteristic (ROC) curves; area under the curve (AUC), equal error rate (EER), and decidability values are used as performance metrics. The dataset consists of more than 1600 still images and video frames acquired in two separate sessions from 40 subjects. All images were captured from a distance of 5 feet using a DSLR camera with an attached white LED light source. We evaluate and discuss the results of cross matching features extracted from still images and video recordings of conjunctival vasculature patterns. The best AUC value of 0.9999, with an EER of 0.064%, resulted from using the Cb layer of the YCbCr color space. The best (lowest) EER of 0.032% was obtained, with an AUC value of 0.9998, using the green layer of the RGB images.
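The layer combinations such as (G+Cr)/2 mix an RGB channel with the Cr chroma channel; a sketch of the conversion and blend, assuming the full-range ITU-R BT.601 YCbCr definition (the paper's exact conversion constants are not stated here):

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Full-range ITU-R BT.601 RGB -> YCbCr (values in 0..255)."""
    rgb = np.asarray(rgb, dtype=float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def layer_combo(rgb, channel):
    """Blend like (R+Cr)/2, (G+Cr)/2, or (B+Cr)/2: one single-layer input
    for the directional filter bank."""
    idx = {"R": 0, "G": 1, "B": 2}[channel]
    cr = rgb_to_ycbcr(rgb)[..., 2]
    return (np.asarray(rgb, dtype=float)[..., idx] + cr) / 2.0
```

The blended layer is then fed to the filter bank in place of a raw color channel.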
IEEE International Conference on Technologies for Homeland Security | 2011
Vikas Gottemukkula; Sashikanth Saripalle; Reza Derakhshani; Sriram Pavan Tankasala
Noting the advantages of texture-based features over the structural descriptors of vascular trees, we investigated texture-based features from the gray level co-occurrence matrix (GLCM) and various wavelet packet energies to classify retinal vasculature for biometric identification. Wavelet packet energy features were generated by Daubechies, Coiflets, and Reverse Biorthogonal wavelets. Two different entropy methods, Shannon and logarithm of energy, were used to prune the wavelet packet decomposition trees. Next, wrapper methods were used for classification-guided feature selection. Features were ranked based on area under the receiver operating characteristic curve, Bhattacharyya, and t-test metrics. Using the ranked lists, wrapper methods were applied in conjunction with Naïve Bayesian, k-nearest neighbor (k-NN), and Support Vector Machine (SVM) classifiers. The best results were achieved by features from the Reverse Biorthogonal 2.4 wavelet packet decomposition in conjunction with a nearest neighbor classifier, yielding a 3-fold cross-validation accuracy of 99.42% with a sensitivity and specificity of 98.33% and 99.47%, respectively.
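Wavelet packet energy features of the kind described above can be sketched with the Haar (Db1) wavelet standing in for the Reverse Biorthogonal family, and without the entropy-based tree pruning; both simplifications are assumptions for illustration:

```python
import numpy as np

def haar_step(x):
    """One Haar (Db1) analysis step: orthonormal approximation/detail halves."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def packet_energies(x, depth=2):
    """Energies of all nodes at the given depth of a full wavelet packet tree.
    Each leaf's energy becomes one entry of the feature vector."""
    nodes = [np.asarray(x, dtype=float)]
    for _ in range(depth):
        nxt = []
        for n in nodes:
            a, d = haar_step(n)
            nxt += [a, d]
        nodes = nxt
    return [float((n ** 2).sum()) for n in nodes]
```

Because the Haar transform is orthonormal, the leaf energies sum to the signal's total energy, which makes the feature vector easy to sanity-check.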
International Symposium on Neural Networks | 2009
Christopher T. Lovelace; Reza Derakhshani; Sriram Pavan Tankasala; Diane L. Filion
In this paper, we show the feasibility of using high-speed video to measure startle eyeblinks as a new augmentative modality for biometric security, as blinks can reveal emotional states of interest in security screenings through nonintrusive measurement. Using neural networks as classifiers, this initial study shows that upper-eyelid tracking at 250 frames per second can categorize startle blinks with accuracies comparable to those of the well-established but intrusive EMG-based measures of the muscles responsible for eyelid closure.
IET Biometrics | 2017
Sriram Pavan Tankasala; Plamen Doynov; Simona Crihalmeanu; Reza Derakhshani
The vascular patterns seen on the white of the eye, mainly in the conjunctival and episcleral layers, are termed ocular surface vasculature (OSV). OSV is visible in images captured with commercial RGB cameras, and its unique texture can be used for biometric recognition. This study demonstrates the capabilities of the curvelet transform for OSV feature extraction. Non-linear feature enhancement and feature mapping in the curvelet domain are shown to be effective in differentiating OSV texture. Linear discriminant analysis and similarity metrics are used for matching, and match scores are fused across multiple gaze directions for both eyes. Using a multi-distance dataset of 50 volunteers, in which eye images were acquired from 30, 150, and 250 cm using a DSLR camera, a best equal error rate (EER) of 0.2% is obtained. Using a second dataset of 40 volunteers acquired from 150 cm with a DSLR, a best EER of 3.1% is obtained. For a 216-participant dataset of ocular images acquired using cellular phones from close proximity, an EER of 0.9% is obtained. The proposed methodology was also tested on the publicly available UBIRIS V1 dataset, yielding an EER of 0.7%. The experimental results support the theoretically formulated advantages of the curvelet transform and its capability in successfully extracting curved structures when applied to OSV patterns.
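The EER figures quoted throughout these papers come from genuine and impostor score distributions; a minimal sketch of how an EER is read off empirical scores (a threshold sweep rather than any particular paper's exact procedure):

```python
import numpy as np

def eer(genuine, impostor):
    """Approximate equal error rate from match-score samples
    (higher score = stronger match)."""
    g = np.asarray(genuine, dtype=float)
    i = np.asarray(impostor, dtype=float)
    best_gap, best_eer = np.inf, 1.0
    for t in np.unique(np.concatenate([g, i])):
        frr = float(np.mean(g < t))    # genuines falsely rejected
        far = float(np.mean(i >= t))   # impostors falsely accepted
        if abs(far - frr) < best_gap:  # closest point to FAR == FRR
            best_gap, best_eer = abs(far - frr), (far + frr) / 2.0
    return best_eer
```

Perfectly separated score distributions give an EER of 0; overlap pushes the EER up toward 0.5.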
International Symposium on Neural Networks | 2016
Reza Derakhshani; Sriram Pavan Tankasala; Simona Crihalmeanu; Arun Ross; Rohit Krishna
Vascular Similarity Measurement (VSM) is an important tool in many biomedical applications. However, designing a robust computational VSM remains a challenge. We investigate different wavelet families and their orders to find their efficacy as feature extractors for computational VSM. Using a 50-subject dataset of RGB ocular surface vasculature images, we show that a compact feature vector composed of wavelet packet energies derived from Db1 wavelets, in conjunction with Fisher linear discriminant analysis and judged by the ensuing ROCs, is best suited for this task. Coif1 and Rbio2.4 were found to be the next best two wavelets for this purpose. Repetition of the same experiments using neural networks confirmed the optimality of the above suite of features for VSM.
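Fisher linear discriminant analysis, used as the classifier here and in several of the papers above, projects feature vectors onto the direction that maximizes between-class over within-class scatter. A minimal two-class sketch (the regularization term is an assumption for numerical stability):

```python
import numpy as np

def fisher_direction(X1, X2, reg=1e-6):
    """Fisher LDA direction for two classes of feature vectors (rows = samples)."""
    X1, X2 = np.asarray(X1, dtype=float), np.asarray(X2, dtype=float)
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # Pooled within-class scatter matrix.
    Sw = (np.atleast_2d(np.cov(X1.T, bias=True)) * len(X1)
          + np.atleast_2d(np.cov(X2.T, bias=True)) * len(X2))
    Sw = Sw + reg * np.eye(len(m1))
    # Closed form: w is proportional to Sw^{-1} (m1 - m2).
    w = np.linalg.solve(Sw, m1 - m2)
    return w / np.linalg.norm(w)
```

Projecting both classes onto `w` turns the multi-dimensional feature vectors into one-dimensional scores that a threshold (or ROC sweep) can then separate.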
IEEE International Conference on Technologies for Homeland Security | 2015
Sriram Pavan Tankasala; Plamen Doynov
In this paper, we present the results of a study on the utilization of conjunctival vasculature patterns as a biometric modality for personal identification. The visible red blood vessel patterns on the sclera of the eye are gaining acceptance as a biometric modality due to their proven uniqueness and easy accessibility for imaging in the visible spectrum. After acquisition, the images of conjunctival vascular patterns are enhanced using a difference of Gaussians (DoG). Feature extraction is performed using a multi-scale, multi-directional shear operator (the Shearlet transform). Linear discriminant analysis (LDA), neural networks (NN), and pairwise distance metrics are used for classification. In the study, images of 50 subjects were acquired with a DSLR camera at different gazes and multiple distances (the CIBIT-I dataset). Additionally, the performance of the proposed algorithms is tested on different-gaze images acquired from 35 subjects using an iPhone (the CIBIT-II dataset). ROC analysis is used to assess classification performance; areas under the curve (AUC) and equal error rates (EER) are reported for all acquisition scenarios and processing algorithms. The best EER of 0.29% is obtained for the CIBIT-I dataset using NN, and a 2.44% EER for the CIBIT-II dataset using LDA.
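The difference-of-Gaussians enhancement step subtracts a coarse blur from a fine blur, leaving a band-pass image that emphasizes vessel-scale detail. A sketch using separable convolutions (the two sigma values are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def gaussian_kernel1d(sigma):
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur via row then column 1-D convolutions."""
    k = gaussian_kernel1d(sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1,
                              np.asarray(img, dtype=float))
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def dog_enhance(img, sigma_fine=1.0, sigma_coarse=3.0):
    """Difference of Gaussians: band-pass that suppresses both flat regions
    and very coarse illumination gradients."""
    return blur(img, sigma_fine) - blur(img, sigma_coarse)
```

On a constant region the two blurs cancel, so the DoG response away from the image border is zero, which is why the enhancement isolates vessel edges.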
2016 IEEE Symposium on Technologies for Homeland Security (HST) | 2016
Plamen Doynov; Sriram Pavan Tankasala
Defocus and motion blur are common distortions in ocular biometric image collections, especially for acquisition in less constrained environments. An accurate estimate of the degradation parameters can be used for image quality assessment and for restoration. Computationally fast methods are applicable to real-time image quality evaluation and feedback during the acquisition process. ISO/IEC 29794-6 (SC 37 N4302) specifies a computational method for sharpness as a focus-quality metric, but a method for motion blur calculation is not defined. In this paper, we report the performance of fast, non-referenced sharpness and motion blur estimation algorithms with application to ocular biometrics, review current techniques for blur estimation, and report the comparable accuracy of the proposed methods over multiple degrees of degradation.
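A common no-reference sharpness measure of the fast kind discussed above is the variance of the image Laplacian; this sketch is an illustration of that family of metrics, not the ISO/IEC 29794-6 formula:

```python
import numpy as np

def sharpness(img):
    """Variance-of-Laplacian focus measure: higher = sharper.
    Uses the 4-neighbor discrete Laplacian on interior pixels."""
    img = np.asarray(img, dtype=float)
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())
```

Defocus blur attenuates high spatial frequencies, so the Laplacian response (and hence its variance) drops as an image goes out of focus, giving a quick quality score for live acquisition feedback.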