Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Sushma Venkatesh is active.

Publication


Featured research published by Sushma Venkatesh.


2017 IEEE International Conference on Identity, Security and Behavior Analysis (ISBA) | 2017

On the vulnerability of extended Multispectral face recognition systems towards presentation attacks

Ramachandra Raghavendra; Kiran B. Raja; Sushma Venkatesh; Faouzi Alaya Cheikh; Christoph Busch

Presentation attacks (a.k.a. direct attacks or spoofing attacks) against face recognition systems have emerged as a serious security threat. To mitigate these attacks on conventional face recognition systems, several Presentation Attack Detection (PAD) algorithms have been developed that address various Presentation Attack Instruments (PAIs), including 3D face masks, 2D photos, wrapped photos and electronic displays. In this paper, we demonstrate and evaluate the vulnerability of an extended multispectral face recognition system. The extended multispectral system captures the face image across several spectral bands; we therefore study each of these bands for its vulnerability to presentation attacks. We employed a commercial multispectral camera, the SpectraCam™, which captures seven different spectral bands, to collect both bona fide (a.k.a. live, normal or real) samples and artefact (or spoof) face samples. Extensive experiments are carried out on the newly compiled database to provide insights into the vulnerability of the extended multispectral face system towards PAIs generated using printers. We created the face artefacts using two different printers: a laser printer and an inkjet printer. Further, we also evaluated the state-of-the-art PAD algorithms that are widely employed in conventional face PAD systems. Our study reveals the vulnerability of the extended multispectral face recognition system to print attacks. The results obtained with the state-of-the-art PAD algorithms further indicate the challenge of detecting presentation attacks in extended multispectral face recognition systems.
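
A vulnerability study of this kind typically reports how often attack presentations are accepted by the verifier at its operating threshold. The sketch below is a minimal illustration of such a per-band check using an IAPMR-style rate (the fraction of artefact comparison scores that exceed the threshold); the function names, the dictionary layout and the idea of a per-band threshold are assumptions for illustration, not the paper's exact protocol.

```python
# Minimal sketch of a per-band vulnerability check from comparison scores.
# iapmr(): share of artefact (attack) presentations accepted by the verifier;
# per_band_vulnerability() repeats this for every spectral band.
# The data layout (dict of band -> scores) is an illustrative assumption.
import numpy as np

def iapmr(attack_scores, threshold):
    """Fraction of attack presentations whose comparison score passes the threshold."""
    attack_scores = np.asarray(attack_scores, dtype=float)
    return float(np.mean(attack_scores >= threshold))

def per_band_vulnerability(band_attack_scores, band_thresholds):
    """band_attack_scores: dict band -> attack comparison scores;
    band_thresholds: dict band -> verification threshold for that band."""
    return {band: iapmr(scores, band_thresholds[band])
            for band, scores in band_attack_scores.items()}
```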


Pattern Recognition Letters | 2017

Multi-patch deep sparse histograms for iris recognition in visible spectrum using collaborative subspace for robust verification

Kiran B. Raja; Ramachandra Raghavendra; Sushma Venkatesh; Christoph Busch

Highlights: multi-patch and holistic deep sparse image features for improved iris recognition using a collaborative subspace; explores colour channels along with a patch-based approach for better feature representation; extensive analysis and results presented for both the MICHE-I and MICHE-II databases; high verification accuracy (MICHE-I and II) with single-sample iris enrolment.

The challenge of recognizing the iris in visible-spectrum images captured with smartphones stems from heavily degraded data (due to reflections, partial closure of the eyes and pupil dilation caused by light), where the iris texture is either not visible or visible only to a very low extent. In order to perform reliable verification, the set of extracted features should be robust and unique, so as to obtain high similarity scores between different samples of the same subject while obtaining high dissimilarity scores between samples of different subjects. In this work, we propose multi-patch deep features using deep sparse filters to obtain robust features for reliable iris recognition. Further, we propose to represent them in a collaborative subspace to perform classification via maximized likelihood, even under single-sample enrolment. Through a set of extensive experiments on the MICHE-I iris dataset, we demonstrate the robustness of the newly proposed scheme, which achieves a high verification rate (GMR > 95%) with a low Equal Error Rate (EER < 2%). Further, the robustness of the proposed feature representation is reiterated by employing simple distance measures, which outperform the state-of-the-art techniques. Additionally, the scheme is tested on the MICHE-II challenge evaluation dataset, where the results are promising, with GMR = 100% on a limited sub-corpus of the iPhone data.
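
The collaborative-subspace classification step can be pictured as a regularised least-squares coding of a probe feature vector over the whole gallery, followed by class-wise reconstruction residuals. The sketch below shows that generic collaborative representation classifier over precomputed feature vectors; the multi-patch deep sparse feature extraction itself is not reproduced, and the regularisation weight `lam` is an illustrative choice.

```python
# Minimal sketch of collaborative representation classification (CRC) over
# precomputed feature vectors. The paper's deep sparse multi-patch features
# are assumed to have been extracted already; `lam` is illustrative.
import numpy as np

def crc_classify(gallery, labels, probe, lam=0.01):
    """gallery: (d, n) matrix of enrolled feature vectors (one column per sample),
    labels: length-n array of class labels, probe: (d,) probe feature vector."""
    D = np.asarray(gallery, dtype=float)
    labels = np.asarray(labels)
    # Ridge-regularised coding of the probe over the whole gallery.
    coeffs = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ probe)
    # Assign the class whose gallery columns best reconstruct the probe.
    best_label, best_residual = None, np.inf
    for c in np.unique(labels):
        idx = labels == c
        residual = np.linalg.norm(probe - D[:, idx] @ coeffs[idx])
        if residual < best_residual:
            best_label, best_residual = c, residual
    return best_label, best_residual
```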


International Conference on Signal-Image Technology and Internet-Based Systems (SITIS) | 2016

Mutual Information Based Multispectral Image Fusion for Improved Face Recognition

Ramachandra Raghavendra; Sushma Venkatesh; Kiran B. Raja; Faouzi Alaya Cheikh; Christoph Busch

Multispectral face images captured in more than one spectral band are known to provide reliable person verification, especially under varying illumination conditions. In this paper, we present an extended multispectral face recognition framework that combines face images captured in six different spectral bands: 425 nm, 475 nm, 525 nm, 570 nm, 625 nm and 680 nm. We propose a novel image fusion scheme that combines the information from the different spectra of multispectral face images. The proposed scheme first selects the two images, out of the set of all spectral images, that carry the highest information as quantified by an entropy measure. The two selected images are combined by decomposing them using the Discrete Wavelet Transform (DWT) to obtain sub-bands, which are then fused using a weighted-sum rule. The weights are computed automatically on each of these sub-bands by measuring their dependency using correlation and wavelet energy. Extensive experiments are carried out on a newly constructed multispectral face database collected with a commercial multispectral sensor, the SpectraCam™ from Pixelteq, and both qualitative and quantitative results of the proposed image fusion scheme are presented. A comprehensive comparative analysis is performed against four different state-of-the-art schemes. The obtained results justify the efficacy of the proposed system for robust multispectral face recognition.
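
The fusion pipeline described above (entropy-based selection of the two most informative bands, single-level DWT decomposition, weighted-sum fusion of the sub-bands) can be sketched as follows. PyWavelets is assumed for the DWT, greyscale float arrays are assumed for the bands, and the energy-proportional weights stand in for the paper's correlation/wavelet-energy measure.

```python
# Minimal sketch of the entropy-select-then-DWT-fuse idea, assuming greyscale
# float arrays and a single-level Haar decomposition (PyWavelets).
# The energy-proportional sub-band weights are a simplified stand-in for the
# paper's correlation / wavelet-energy weighting.
import numpy as np
import pywt

def band_entropy(img, bins=256):
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def fuse_multispectral(bands, wavelet="haar"):
    """bands: list of 2-D arrays, one per spectral band; returns the fused image."""
    # Keep the two bands carrying the most information (highest entropy).
    top = sorted(bands, key=band_entropy, reverse=True)[:2]
    cA1, (cH1, cV1, cD1) = pywt.dwt2(top[0], wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(top[1], wavelet)

    def weighted_sum(a, b):
        ea, eb = np.sum(a ** 2), np.sum(b ** 2)
        w = ea / (ea + eb + 1e-12)
        return w * a + (1.0 - w) * b

    fused = (weighted_sum(cA1, cA2),
             (weighted_sum(cH1, cH2), weighted_sum(cV1, cV2), weighted_sum(cD1, cD2)))
    return pywt.idwt2(fused, wavelet)
```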


Pattern Recognition | 2018

Improved ear verification after surgery - An approach based on collaborative representation of locally competitive features

Ramachandra Raghavendra; Kiran B. Raja; Sushma Venkatesh; Christoph Busch

The ear is a promising biometric modality that has demonstrated good recognition performance. In this paper, we investigate a novel and challenging problem: verifying a subject (or user) based on ear characteristics after the subject has undergone ear surgery. Ear surgery is performed to reconstruct abnormal ear structures, both locally and globally, to beautify the overall appearance of the ear. Whether performed for beautification or correction, ear surgery alters the original ear characteristics to a degree that challenges the comparison and, subsequently, the verification performance of ear recognition systems. This work presents a new database of images from 211 subjects with surgically altered ears, along with the corresponding pre- and post-surgery samples. We then propose a novel scheme for ear verification based on features extracted using a bank of filters learnt with the Topographic Locally Competitive Algorithm (T-LCA); comparison is carried out using the Robust Probabilistic Collaborative Representation Classifier (R-ProCRC). Extensive experiments are carried out on both the clean (normal) and the surgically altered ear databases to evaluate the performance of the proposed ear verification scheme. We also present a comprehensive performance analysis by comparing the proposed ear recognition scheme with eight different state-of-the-art ear verification systems. Furthermore, we present a new scheme to detect both deformed and surgically altered ears using one-class classification. Experimental results indicate the magnitude of the problem in verifying surgically altered ears and signify the need for considerable research in this direction.
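
The one-class detection of deformed or surgically altered ears mentioned at the end of the abstract can be illustrated with a generic outlier detector: fit a one-class model on features of normal ears only and flag probes it rejects. The sketch below uses scikit-learn's OneClassSVM as a stand-in; the T-LCA feature extraction is assumed to happen elsewhere, and `nu` is an illustrative setting.

```python
# Minimal sketch of one-class detection of altered ears: train on normal-ear
# features only and treat outliers as deformed / surgically altered.
# OneClassSVM is a generic stand-in; the paper's T-LCA features and exact
# one-class classifier are not reproduced here.
import numpy as np
from sklearn.svm import OneClassSVM

def fit_normal_ear_model(normal_ear_features, nu=0.05):
    """normal_ear_features: (n_samples, n_features) array from normal ears."""
    return OneClassSVM(kernel="rbf", nu=nu, gamma="scale").fit(normal_ear_features)

def is_altered(model, probe_features):
    # predict() returns -1 for outliers, i.e. ears the normal-ear model rejects.
    return model.predict(np.atleast_2d(probe_features)) == -1
```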


Scandinavian Conference on Image Analysis | 2017

Collaborative Representation of Statistically Independent Filters’ Response: An Application to Face Recognition Under Illicit Drug Abuse Alterations

Raghavendra Ramachandra; Kiran B. Raja; Sushma Venkatesh; Christoph Busch

Face biometrics is widely deployed in many security and surveillance applications that demand a secure and reliable authentication service. The performance of face recognition systems is primarily based on the analysis of texture and geometric variations of the face. Continuous and extensive consumption of illicit drugs results in significant deformation of both the texture and the geometric characteristics of a face and thus imposes additional challenges in accurately identifying subjects who abuse drugs. This work proposes a novel scheme to improve the robustness of face recognition systems against the variations caused by prolonged use of illicit drugs. The proposed scheme is based on the collaborative representation of statistically independent filters whose responses are computed on face images captured before and after substance (or drug) abuse. Extensive experiments are carried out on the publicly available Illicit Drug Abuse Database (DAD), which comprises face images from 100 subjects. The obtained results indicate better performance of the proposed scheme when compared with six different state-of-the-art approaches, including a commercial face recognition system.
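
The "statistically independent filters" can be obtained, for example, by running independent component analysis on face patches, so each learnt basis vector acts as a filter whose responses feed the collaborative representation. The sketch below uses scikit-learn's FastICA on randomly sampled patches; the patch size, number of filters and sampling budget are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of learning statistically independent filters from face
# patches with FastICA. Patch size, number of filters and patches-per-image
# are illustrative; the downstream collaborative representation step over the
# filter responses is not shown.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.feature_extraction.image import extract_patches_2d

def learn_ica_filters(face_images, patch_size=(8, 8), n_filters=64, seed=0):
    rng = np.random.RandomState(seed)
    patches = np.vstack([
        extract_patches_2d(img, patch_size, max_patches=200, random_state=rng)
        .reshape(-1, patch_size[0] * patch_size[1])
        for img in face_images
    ]).astype(np.float64)
    patches -= patches.mean(axis=1, keepdims=True)  # remove per-patch DC offset
    ica = FastICA(n_components=n_filters, random_state=seed, max_iter=500)
    ica.fit(patches)
    # Each row of components_ is one statistically independent filter.
    return ica.components_.reshape(n_filters, *patch_size)
```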


International Conference on Information Fusion | 2017

Extended multispectral face presentation attack detection: An approach based on fusing information from individual spectral bands

Ramachandra Raghavendra; Kiran B. Raja; Sushma Venkatesh; Christoph Busch

Multispectral face recognition systems are widely used in various access control applications. The vulnerability of multispectral face recognition sensors to low-cost Presentation Attack Instruments (PAIs), such as printed photos, has emerged as a serious security threat. In this paper, we present a novel framework to detect presentation attacks against an extended multispectral face sensor. The proposed framework stems from the idea of exploiting the complementary information available from the different bands of an extended multispectral face sensor. To this end, two different frameworks are proposed: the first is based on image fusion and the second builds on Presentation Attack Detection (PAD) score-level fusion. Extensive experiments are carried out on the extended multispectral face sensor database comprising 50 subjects and two different presentation attacks generated using printed photo artefacts. The obtained results indicate the superior performance of PAD score-level fusion in detecting both known and unknown attacks.
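
The PAD score-level fusion framework can be pictured as normalising the per-band PAD scores to a common range and then combining them with a simple sum (mean) rule before thresholding. The sketch below assumes min-max normalisation, equal band weights and a placeholder threshold; the paper's per-band PAD algorithms and fusion weights are not reproduced.

```python
# Minimal sketch of PAD score-level fusion across spectral bands with min-max
# normalisation and the sum (mean) rule. The per-band PAD detectors, the
# normalisation ranges and the decision threshold are placeholders.
import numpy as np

def minmax(score, lo, hi):
    return (float(score) - lo) / (hi - lo + 1e-12)

def fuse_pad_scores(band_scores, band_ranges, threshold=0.5):
    """band_scores: list of raw PAD scores, one per spectral band;
    band_ranges: list of (min, max) score ranges estimated on training data."""
    normalised = [minmax(s, lo, hi) for s, (lo, hi) in zip(band_scores, band_ranges)]
    fused = float(np.mean(normalised))
    return fused, fused >= threshold  # True -> classified as a presentation attack
```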


Computer Vision and Pattern Recognition | 2017

Face Presentation Attack Detection by Exploring Spectral Signatures

Ramachandra Raghavendra; Kiran B. Raja; Sushma Venkatesh; Christoph Busch

Presentation attacks on face recognition systems are well studied in the biometrics community, resulting in various techniques for detecting such attacks. Low-cost presentation attacks (e.g., print attacks) on face recognition systems have been demonstrated for systems operating in the visible, multispectral (visible and near-infrared) and extended multispectral (more than two spectral bands spanning from the visible to the near-infrared range, commonly 500 nm-1000 nm) domains. In this paper, we propose a novel method to detect presentation attacks on extended multispectral face recognition systems. The proposed method is based on characterising the reflectance properties of the captured image through its spectral signature. The spectral signature is then classified using a linear Support Vector Machine (SVM) to decide whether the presented sample is an artefact or bona fide. Since the reflectance properties of human skin and artefact materials differ, the proposed method can efficiently detect presentation attacks on the extended multispectral system. Extensive experiments are carried out on a publicly available extended multispectral database (EMSPAD) comprising 50 subjects with two different Presentation Attack Instruments (PAIs) generated using two different printers. A comparative analysis is presented against contemporary schemes based on image fusion and PAD score-level fusion. Based on the obtained results, the proposed method shows the best performance in detecting both known and unknown attacks.
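
A spectral-signature detector of this kind can be reduced to a very small pipeline: summarise each spectral band of the capture by a scalar reflectance statistic (giving, e.g., a 7-dimensional signature for a 7-band image) and feed the signatures to a linear SVM. The sketch below uses the per-band mean as that statistic and scikit-learn's LinearSVC; this summarisation is a simplification of the paper's characterisation of reflectance.

```python
# Minimal sketch of spectral-signature PAD: per-band mean reflectance as the
# signature, linear SVM as the classifier. The signature definition is a
# simplification of the paper's reflectance characterisation.
import numpy as np
from sklearn.svm import LinearSVC

def spectral_signature(multispectral_face):
    """multispectral_face: array of shape (bands, H, W); returns a (bands,) vector."""
    bands = multispectral_face.shape[0]
    return multispectral_face.reshape(bands, -1).mean(axis=1)

def train_spectral_pad(bona_fide_samples, artefact_samples):
    X = np.array([spectral_signature(s)
                  for s in list(bona_fide_samples) + list(artefact_samples)])
    y = np.array([0] * len(bona_fide_samples) + [1] * len(artefact_samples))  # 1 = attack
    return LinearSVC(C=1.0).fit(X, y)
```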


2017 5th International Workshop on Biometrics and Forensics (IWBF) | 2017

Transferable deep convolutional neural network features for fingervein presentation attack detection

Ramachandra Raghavendra; Sushma Venkatesh; Kiran B. Raja; Christoph Busch

Presentation attacks (or spoofing) on finger-vein biometric capture devices are gaining increased attention because of their wider deployment in multiple secure applications. In this work, we propose a novel method for fingervein Presentation Attack Detection (PAD) by exploring the transfer-learning ability of Deep Convolutional Neural Networks (CNNs). To this end, we consider the pre-trained AlexNet architecture and augment it with seven additional layers to improve reliability and reduce the over-fitting problem. We then fine-tune the modified CNN architecture with fingervein presentation attack samples to adapt it to fingervein PAD. Extensive experiments are carried out using two different fingervein presentation attack databases with two different fingervein artefact species generated using two different kinds of printers. The obtained results show consistently high performance of the proposed scheme on both databases, which further indicates its robustness and efficiency.
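
The transfer-learning idea (start from ImageNet-pretrained AlexNet, keep the convolutional features, and train a new head for the two-class bona fide vs. attack decision) can be sketched in PyTorch as below. The paper augments AlexNet with seven additional layers; the small head here is a simplified stand-in, and the `weights="IMAGENET1K_V1"` argument assumes a recent torchvision release.

```python
# Minimal PyTorch sketch of AlexNet transfer learning for fingervein PAD:
# freeze the pretrained convolutional features and train a small new
# classification head. The head is a simplified stand-in for the paper's
# seven added layers.
import torch.nn as nn
from torchvision import models

def build_fingervein_pad_net(freeze_features=True):
    net = models.alexnet(weights="IMAGENET1K_V1")  # ImageNet-pretrained backbone
    if freeze_features:
        for p in net.features.parameters():
            p.requires_grad = False  # fine-tune only the new head
    net.classifier = nn.Sequential(
        nn.Dropout(0.5),
        nn.Linear(256 * 6 * 6, 512),
        nn.ReLU(inplace=True),
        nn.Dropout(0.5),
        nn.Linear(512, 2),  # bona fide vs. presentation attack
    )
    return net
```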


Computer Vision and Pattern Recognition | 2017

Transferable Deep-CNN Features for Detecting Digital and Print-Scanned Morphed Face Images

Ramachandra Raghavendra; Kiran B. Raja; Sushma Venkatesh; Christoph Busch


International Conference on Information Fusion | 2018

Fusion of Multi-Scale Local Phase Quantization Features for Face Presentation Attack Detection

Ramachandra Raghavendra; Sushma Venkatesh; Kiran B. Raja; Pankaj Shivdayal Wasnik; Martin Stokkenes; Christoph Busch

Collaboration


Dive into Sushma Venkatesh's collaborations.

Top Co-Authors

Christoph Busch, Norwegian University of Science and Technology
Kiran B. Raja, Norwegian University of Science and Technology
Ramachandra Raghavendra, Norwegian University of Science and Technology
Faouzi Alaya Cheikh, Norwegian University of Science and Technology
Raghavendra Ramachandra, Norwegian University of Science and Technology
Martin Stokkenes, Norwegian University of Science and Technology
Pankaj Shivdayal Wasnik, Norwegian University of Science and Technology