On the Robustness of Face Recognition Algorithms Against Attacks and Bias

Richa Singh, Akshay Agarwal, Maneet Singh, Shruti Nagpal, Mayank Vatsa
IIIT-Delhi, India; IIT-Jodhpur, India
{rsingh, akshaya, maneets, shrutin, mayank}@iiitd.ac.in

Abstract
Face recognition algorithms have demonstrated very high recognition performance, suggesting suitability for real world applications. Despite the enhanced accuracies, the robustness of these algorithms against attacks and bias has been challenged. This paper summarizes the different ways in which the robustness of a face recognition algorithm is challenged, which can severely affect its intended working. Different types of attacks, such as physical presentation attacks, disguise/makeup, digital adversarial attacks, and morphing/tampering using GANs, are discussed. We also present a discussion on the effect of bias on face recognition models and showcase that factors such as age and gender variations affect the performance of modern algorithms. The paper also presents the potential reasons for these challenges and some of the future research directions for increasing the robustness of face recognition models.
Introduction
Face is one of the most commonly used and widely explored biometric modalities for person authentication. Recent advances in machine learning, especially deep learning, coupled with the availability of sophisticated hardware and abundant data, have led to the development of several face recognition algorithms achieving superlative performance (Parkhi, Vedaldi, and Zisserman 2015; Amos et al. 2016). Due to these advancements, automated face recognition systems are now utilized in several real world scenarios, ranging from photo tagging in social media and photo organization in mobile devices to critical law enforcement applications of missing person search and suspect identification. While recognition accuracy is one of the key metrics to evaluate the effectiveness of a face recognition model, its robustness with respect to different types of data variations must also be evaluated.

Robustness of a face recognition algorithm refers to its ability to handle intentional and unintentional variations in the input space. Figure 1 presents illustrative class boundaries learned by a face recognition model for three classes.
Figure 1: An illustration where a face recognition algorithm learns the decision boundary to classify/identify three classes/subjects. The singularities between the boundaries give rise to the vulnerable points.

The data points which lie in-between the boundaries (i.e., the maroon crosses) can potentially be used to challenge the robustness of the algorithm (intentionally or unintentionally). Intentional variations refer to the samples/variations introduced by an adversary who attempts to attack a face recognition algorithm for obtaining unauthorized access. For example, the robustness of a recognition algorithm can be affected by different kinds of impersonation techniques such as spoofing or disguise variations. On the other hand, unintentional variations refer to the changes brought to the input image without the intent of fooling the system, for example, variations due to unintended occlusion, disease-correcting facial plastic surgery, and images captured across different characteristics such as ethnicity and gender.

This research presents an overview of different techniques which challenge the robustness of a face recognition algorithm, along with the potential solutions provided in the literature. As part of this research, the variations observed by a face recognition system have broadly been categorized as: (i) robustness against an external adversary (i.e., attacking face recognition algorithms) and (ii) robustness with respect to bias. As mentioned earlier, an adversary can create data variations to fool the face recognition system, and this can be accomplished either via (a) physical attacks or (b) digital attacks. Physical attacks refer to techniques where changes are made to the physical appearance of a face before capturing an image. Presentation attacks, variations due to disguise/make-up, and intentional plastic surgery are some of the key techniques for physical attacks.

Figure 2: Face recognition systems are susceptible to physical attacks, where physical modifications are made to the face, such as (a) presentation attacks using masks and (b) variations due to disguise accessories used for obfuscation and impersonation.

Digital attacks refer to the changes made to the captured face image, which can result in a different output from a face recognition system as compared to the original image. Examples include adversarial attacks such as the universal perturbation (Moosavi-Dezfooli et al. 2017) and the l2-attack (Carlini and Wagner 2017), as well as the variations brought about by morphing/re-touching/tampering (Yuan et al. 2019; Scherhag et al. 2019). Finally, robustness with respect to "bias" has also been studied in this research. Biased behavior of models is a relatively recent area of research which has garnered substantial attention due to its widespread impact on society. The inability of a recognition model to perform well for a particular subset of the population has caused concern in the community. Therefore, there is a need for an in-depth understanding of the biased behavior shown by face recognition algorithms.

The remainder of this paper is organized as follows: the next section elaborates upon physical attacks, followed by a section on digital attacks. The robustness of face recognition models with respect to different biases is discussed thereafter, followed by the path forward for future research.

Robustness Against Physical Attacks
Physical attacks refer to the variations brought to the physical self before capturing the input data for a face recognition system. In terms of a face recognition pipeline, modifications are performed at the sensor level, such that a modified face image is acquired by the recognition system. A face recognition system can be attacked with the intention of impersonating another individual (in order to obtain unauthorized access) or obfuscating one's identity. As demonstrated in Figure 2, physical attacks generally correspond to: (i) presentation attacks, (ii) disguised faces, and (iii) variations due to plastic surgery. The following subsections elaborate upon these physical attacks in terms of the associated literature and implications.
Presentation Attacks
According to the ISO/IEC JTC1 SC37 Biometrics 2016 Standards, a presentation attack (Figure 2(a)) can be defined as "an alteration in the face acquisition system with the intention of modifying its intended working". As mentioned earlier, the aim of presentation attacks can be two-fold: (i) impersonation, where an attacker wants to acquire the identity of someone else for illegal access, and (ii) obfuscation, where the person wants to hide his/her own identity. In the literature, Marcel et al. (2018) showed that face recognition systems are vulnerable to various presentation attacks ranging from cost-effective 2D photo mediums to sophisticated 3D silicone masks. The first public 2D photo print-attack database, namely the NUAA photo imposter (PI) database, was released in 2010. Later, the Print-Attack (Anjos and Marcel 2011), CASIA-FASD (Zhang et al. 2012), and Replay-Attack (Chingovska, Anjos, and Marcel 2012) databases were developed to showcase the challenging nature of these physical attacks. The above-listed databases, focusing only on 2D presentation mediums, generally suffer from texture loss, image quality degradation, and Moiré patterns. To overcome these limitations, and with the advancement in 3D technology, 3D masks have been deployed to attack face recognition systems. Both commercial and deep learning based face recognition systems are found to be vulnerable to such presentation attacks. In 2017, Manjani et al. (2017) prepared the first silicone mask attack database containing videos captured in unconstrained settings using various YouTube links. In total, the database consists of videos of real faces and of people wearing silicone masks. Later, several other databases have been prepared showcasing the challenging nature of 3D mask attacks.

In the literature, several image-feature based algorithms have been developed to detect presentation attacks (Galbally, Marcel, and Fierrez 2014; Ramachandra and Busch 2017). These algorithms can be grouped into the pre-deep-learning and post-deep-learning eras. The pre-deep-learning algorithms are generally based on hand-crafted features: texture features, motion features, image quality features, or a combination of them. In the post-deep-learning era, architectures such as deep dictionary learning, convolutional neural networks (CNNs), and long short-term memory (LSTM) networks have been applied for feature extraction and detection of presentation attacks. For instance, Manjani et al. (2017) presented a novel detection algorithm using multiple-level dictionary learning. On the proposed silicone mask database, the error rate of the proposed algorithm is lower than that of the then best-performing algorithm in the seen setting; in open-set testing as well, its error rate is lower than that of existing algorithms. Generally, PAD algorithms suffer from low generalizability, where the defense might work perfectly on a seen database and seen attack but fail under unseen settings; this scenario is popularly referred to as open-set, where during testing the defense algorithm is provided with images of an unseen database or attack.

Face recognition systems are not limited to operating in the daytime under the visible spectrum; they are also deployed to operate in the infra-red spectrum (night time). Agarwal et al. (2017b) presented the first-ever multi-spectral video-based face presentation attack database. Real and presentation attack videos are captured in the visible (VIS), near infra-red (NIR), and thermal spectrum.
Presentation attack mediums include 3D latex masks and 2D hard resin masks of famous celebrities. Face recognition experiments performed using a commercial-off-the-shelf (COTS) system and hand-crafted texture features show that the performance of the system drops when attack videos are provided for recognition. Further, to secure the system from presentation attacks, several existing texture-based feature extractors are implemented. It is found that presentation attack detection is highest in the thermal spectrum and lowest in the NIR spectrum. The best performance is obtained by the combination of multi-resolution wavelet decomposition and texture features via a gray-level co-occurrence matrix (GLCM) (Agarwal, Singh, and Vatsa 2016). This was followed by a novel feature-aggregation based presentation attack detection algorithm for 2D and 3D attack mediums (Siddiqui et al. 2016). After performing motion magnification to enhance the micro patterns in the videos, a linear SVM classifier is trained over texture and motion features separately, followed by score-level fusion.

Recently, new concerns have been raised regarding the robustness of face recognition systems. The PAD algorithms, which are used to protect the face recognition algorithms, are themselves vulnerable to attacks and unseen distribution samples. Agarwal et al. (2019) showed that a face recognition system can be made vulnerable by tampering with the feature extraction block of PAD algorithms. A convolutional autoencoder based mapping has been learned to map the features of the fake class to the feature distribution of the real class. Given these vulnerabilities, it is our belief that future research should focus primarily on developing (i) robust PAD algorithms and (ii) universal detectors (Mehta et al. 2019) capable of handling multiple attacks.
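As an illustration of the hand-crafted texture pipeline described above, the following minimal Python sketch extracts GLCM statistics from grayscale face crops and trains a linear SVM to separate real from attack samples. It is a simplified, assumed configuration (feature choices, offsets, and data handling are illustrative), not the exact setup of the cited works.

# Minimal sketch of a texture-based presentation attack detector:
# GLCM statistics as features, a linear SVM as the classifier.
# Inputs are assumed to be lists of 8-bit grayscale face crops.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import LinearSVC

def glcm_features(gray_face):
    # Co-occurrence statistics at several offsets and angles.
    glcm = graycomatrix(gray_face,
                        distances=[1, 2, 4],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def train_pad(real_faces, attack_faces):
    # Label real as 0 and attack as 1, then fit a linear SVM.
    X = np.vstack([glcm_features(f) for f in real_faces + attack_faces])
    y = np.array([0] * len(real_faces) + [1] * len(attack_faces))
    return LinearSVC(C=1.0).fit(X, y)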
Disguise and Make-up
Face recognition systems are often presented with the challenge of recognizing disguised faces (Figure 2(b)). Disguised face recognition is accompanied by the inherent characteristic of intent. Disguise accessories can be used intentionally or unintentionally to obfuscate different face regions. Intentionally, disguise accessories can be used to impersonate another person in order to gain unauthorized access. Disguise accessories can also be used to obfuscate one's identity by hiding certain face parts. Similarly, the usage of accessories such as sunglasses, hats, or scarves can also result in unintentional disguised faces. The combination of different types of disguises, their usage, and varying intent results in the vast variations observed in disguised faces, which often challenge the robustness of face recognition systems.

The AR Face dataset (Martinez and Benavente 1998) is one of the initial datasets containing face images with disguise variations. It contains images pertaining to 126 subjects, captured in constrained settings with a fixed set of disguise accessories (sunglasses and scarves). The dataset has served a pivotal role in promoting research on disguised face recognition; however, the constrained nature of the dataset resulted in algorithms achieving high recognition performance very quickly (almost 95% (Singh, Vatsa, and Noore 2009b)). The AR Face dataset was superseded by other more challenging datasets, which facilitated research in the direction of disguised face recognition. In 2013, the IIITD In and Beyond Visible Spectrum Disguise database (I2BVSD) (Dhamecha et al. 2013) was released for understanding and evaluating disguised face recognition in the visible and thermal spectra. The dataset continues to be one of the seminal datasets for evaluating face recognition systems under disguise variations. In 2014, Dhamecha et al. (2014) compared human performance and machine performance for the task of disguised face recognition on this dataset. The algorithm proposed by the authors identified disguised facial regions and performed person recognition using the non-disguised facial regions only.

Up till 2016, most of the research on disguised face recognition involved face images captured in relatively constrained settings. Wang and Kumar (2016) presented the Disguised and Makeup Faces Dataset containing 2,460 face images of 410 subjects. The dataset contains images collected from the Internet with variations across different disguise accessories and makeup. In 2018, the Disguised Faces in the Wild (DFW) dataset (Singh et al. 2019b) was released as part of the International Workshop on DFW held in conjunction with CVPR2018. The DFW dataset is a first-of-its-kind dataset containing 11,157 face images of 1,000 subjects. It is the first dataset to contain multiple images for each subject, along the lines of normal, validation, disguised, and impersonator images. Recently, the concept of the DFW dataset was further extended, and the DFW2019 dataset (Singh et al. 2019a) was presented as part of the DFW2019 competition at ICCV2019. This includes additional sets of plastic surgery and bridal makeup images. While the top performing teams in the competition demonstrated high verification performance at higher False Acceptance Rates (Deng and Zafeiriou 2019; Singh et al. 2019a), analysis of the submissions demonstrates low performance (less than 10% True Acceptance Rate) at 0% False Acceptance Rate, a metric often used in stricter settings such as access control in highly secure locations. The key observation formed in the two competitions is that impersonators are the most difficult subset.

Plastic Surgery
Plastic surgery is another covariate of face recognition which challenges the robustness of automated recognition systems. Plastic surgery is often performed to modify face parts such as the nose, eyes, lips, ears, or bone structure. Post surgery, an individual can demonstrate large permanent variations in the face shape or different facial regions, thereby resulting in low intra-class similarity. In 2009, plastic surgery was established as a challenging covariate for face recognition, which required dedicated research attention (Singh, Vatsa, and Noore 2009a). In 2010, the first publicly available plastic surgery dataset was released (Singh et al. 2010), containing pre- and post-surgery images of 900 subjects. It continues to be the only available dataset for this problem and is still heavily used by researchers across the world.

The release of the IIITD Plastic Surgery dataset instigated the development of several face recognition algorithms capable of handling variations due to plastic surgery (Nappi, Ricciardi, and Tistarelli 2016). Bhatt et al. (2013) proposed a multi-objective evolutionary granular algorithm for matching faces before and after plastic surgery. The algorithm demonstrated improved recognition performance as compared to the then state-of-the-art results. Marsico et al. (2015) proposed region-based strategies for face recognition under variations due to plastic surgery. Suri et al. (2018) proposed a COST framework (COlour, Shape, and Texture) for matching pre and post plastic surgery face images. For detecting faces which have undergone plastic surgery, a Multiple Projective Dictionary Learning based technique has been proposed, followed by a face verification pipeline utilizing the information from the altered and non-altered regions (Kohli, Yadav, and Noore 2015). A high accuracy of almost 98% is achieved in the plastic surgery detection task. Recently, the DFW2019 competition (Singh et al. 2019a) also contained a protocol for recognizing images under plastic surgery variations, where deep learning based baseline algorithms show around 50% verification accuracy at 0.01% False Acceptance Rate. It is our belief that the availability of these face datasets will enable deep learning based face recognition systems to encode the covariate of plastic surgery and improve the results.
Robustness Against Digital Attacks
Digital attacks correspond to the variations introduced in the acquired image before presenting it to the face recognition system. With the availability of several image modification tools, it has become relatively easy for attackers to digitally modify a face image. As shown in Figure 3, digital attacks can broadly be classified into: (i) adversarial attacks and (ii) alterations (morphing/re-touching/tampering). The following subsections elaborate upon each type of digital attack and its related literature.
Adversarial Attacks
Despite the high classification performance obtained by deep learning techniques (Majumdar, Singh, and Vatsa 2016; He et al. 2015; Silver et al. 2016), they are highly susceptible to changes in the input space (Figure 3(c)). Szegedy et al. (2014) demonstrated the vulnerabilities of CNN models by introducing a minute noise or perturbation into the input image. Karahan et al. (2016) showed that deep face recognition algorithms are susceptible to image degradation based effects such as Gaussian noise, contrast, blur, and facial part occlusions. It is also observed that the accuracies of GoogLeNet (Szegedy et al. 2015) and VGG-Face (Parkhi, Vedaldi, and Zisserman 2015) degrade with color balance manipulation. Dabouei et al. (2019) perturbed face images by manipulating various facial landmarks and demonstrated that such geometric attacks achieve high success rates against state-of-the-art face recognition networks.

Figure 3: Digital attacks: (a) morphing, (b) retouching, (c) adversarial perturbation. In each row, the image(s) in the blue box represent the real image, and the remaining are attack images.

Goswami et al. (2018) showed that several commercial and deep CNN based face recognition algorithms are vulnerable to different adversarial attacks at the (i) image level and (ii) face level. In the extended work (Goswami et al. 2019), the authors proposed two defense algorithms: (i) an adversarial perturbation detection algorithm utilizing the intermediate filter maps of a CNN, and (ii) a mitigation algorithm for recognizing adversarial faces. To mitigate the effect of adversarial noise, the most affected filter maps of a CNN model are selectively dropped out, and matching is performed using the unaffected filter maps. In another work, Agarwal et al. (2018) demonstrated that attacks performed using image-agnostic perturbations (i.e., one noise across multiple images) can be detected using a computationally efficient algorithm based on the data distribution. Further, Goel et al. (2018) developed the first benchmark toolbox of algorithms for adversarial generation, detection, and mitigation for face recognition. Recently, Goel et al. (2019) presented a blockchain-based security mechanism to protect against attacks on face recognition. Layers of a CNN are converted into blocks similar to the blocks in a blockchain. Each block contains the data, a hash function, and public and private cryptographic keys to identify any possible tampering. The proposed network is resilient to any kind of tampering, including modifications to the CNN weights. While defense against adversarial samples is of utmost importance, researchers have also focused on evaluating the adversarial robustness of a model (Carlini et al. 2019). It is our belief that going forward, researchers should focus more on understanding the cause of adversaries (Gilmer et al. 2019) and providing robust defense mechanisms (Athalye, Carlini, and Wagner 2018).
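To make the notion of a gradient-based adversarial perturbation concrete, the following PyTorch sketch implements the fast gradient sign method (FGSM), a representative attack rather than any specific attack surveyed above; the model and input tensors are assumed placeholders.

# One-step FGSM: perturb x by epsilon * sign(dLoss/dx), a small
# change that increases the classification loss and can flip the
# predicted identity. `model` is any differentiable face classifier
# (an assumption); x is a normalized face tensor, y its label.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Keep pixel values in a valid range; [0, 1] is assumed here.
    return x_adv.clamp(0, 1).detach()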
Morphing, Re-touching, and Tampering

Ferrara, Franco, and Maltoni (2014) first demonstrated the vulnerability of commercial face recognition systems to morphed faces. Agarwal et al. (2017a) generated the first video-based morphed face database using the popular social messaging application Snapchat. The database contains videos pertaining to several unique subjects. Further, the effect of face morphing is demonstrated using a commercial face recognition system and the in-built iPhone face unlocking system. It is observed that both systems are unable to protect themselves from morphed images. Recently, Majumdar et al. (2019) performed an enhanced study on face morphing through two operations: (i) morphing two identities by blending them in a certain proportion, and (ii) partially replacing a particular part of the face with that of a different identity. The vulnerability of two deep face recognition algorithms, OpenFace (Amos et al. 2016) and VGG-Face, is evaluated on the tampered database. Blending and replacement of the eye region show the highest impact on recognition performance, and both networks demonstrate a substantial drop in accuracy. Further, to protect the integrity of these algorithms, a novel Siamese detection network is proposed which utilizes the RGB and high-pass filtered images for tamper detection. Jain, Singh, and Vatsa (2018) proposed an algorithm for detecting synthetic face images generated using StarGAN (Choi et al. 2018). A support vector machine classifier is trained for binary classification over the softmax probabilities given by the CNN.

Similar to morphing, facial retouching is an important application, particularly in the fashion and beauty product industry. In 2016, Bharati et al. (2016) prepared one of the most extensive facial retouching databases. The authors demonstrated that retouched face images can significantly degrade the matching accuracy of a commercial system. A deep Boltzmann machine based model was proposed for detecting retouched images. The proposed architecture is able to perfectly detect the existing makeup based retouched images. In the follow-up work (Bharati et al. 2017), the facial retouching database was extended to cover multiple demographic groups. A novel semi-supervised network is also proposed for the detection of makeup and retouched images, which shows superior performance compared to existing algorithms.

Recently, GAN based techniques such as the FSGAN (Nirkin, Keller, and Hassner 2019) have been shown to generate seemingly real content, making it challenging even for humans to identify fake images. Moreover, the rise of DeepFakes (Amerini et al. 2019; Li and Lyu 2019) and other sophisticated morphing techniques demands robust solutions for the detection of fake content.
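The blending operation underlying such morphing attacks can be illustrated with a short Python sketch: a convex combination of two aligned face images. Landmark-based warping, which full morphing pipelines also perform, is assumed to have been done beforehand.

# Sketch of the blending step of a face morph: a pixel-wise convex
# combination of two aligned uint8 face images of equal size.
# alpha=0 returns face_a, alpha=1 returns face_b.
import numpy as np

def blend_faces(face_a, face_b, alpha=0.5):
    assert face_a.shape == face_b.shape
    mix = (1.0 - alpha) * face_a.astype(np.float32) \
          + alpha * face_b.astype(np.float32)
    return np.clip(mix, 0, 255).astype(np.uint8)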
Robustness Against Bias

Another less explored yet crucial field for assessing the robustness of face recognition systems is their invariance to the presence of bias (as per the Oxford dictionary, bias is defined as the inclination or prejudice for or against one person or group, especially in a way considered to be unfair). Recently, multiple incidents have highlighted the presence of bias in existing machine learning based systems for face analysis. Amazon's face recognition software, Rekognition, despite being easy to use, made erroneous predictions for 28 members of Congress and confused them with images of publicly available mugshots. Moreover, even though only 20% of the members of Congress are people of color, almost 40% of the false matches belonged to them (Figure 4(a)) (Wong 2019). In the literature, Buolamwini and Gebru (2018) demonstrated the biased performance of three commercial software packages for gender classification. These algorithms performed poorly on darker skinned females as compared to lighter skinned males. The authors also introduced a new database, Pilot Parliaments Benchmark (PPB), which was labeled using the six-point Fitzpatrick scale for skin color. Based on this labeled data, further analysis was performed with respect to the skin tone of the subject to study the bias in existing systems.

Figure 4: (a) Recent incidents have demonstrated racial bias in existing face recognition systems. (b) Nagpal et al. (2019) have demonstrated bias due to race and age in deep learning models.

Following these observations, researchers have presented techniques to mitigate the effect of bias in face analysis tasks. A joint learning and unlearning framework (Alvi, Zisserman, and Nellaker 2018) has been proposed for eliminating bias from CNN models for age, gender, race, and pose classification from face images. A joint loss is used to optimize the network: the primary loss focuses on the task of classification, while the additional loss enforces the learnt representations to be invariant to the secondary task and the variations in the data. Ryu, Adam, and Mitchell (2018) proposed the InclusiveFaceNet model, which utilized transfer learning to learn attribute prediction models for various subgroups across gender and ethnicity. The Multi-task Convolutional Neural Network (MTCNN) (Das, Dantcheva, and Bremond 2018) is another framework proposed to learn unbiased feature representations. It jointly learns to predict the gender, ethnicity, and age from the input. Joint learning results in improved learning across sub-groups, which reduces the biased behavior of the model towards a particular sub-group. Another research thread to mitigate learning biased representations involves pre-processing the data to obtain fair representations. Amini et al. (2019) presented a pre-processing technique to de-bias face detection algorithms. The algorithm learns the latent structure of the training data with respect to the ethnicity and gender of the subject via variational autoencoders, which is later utilized to re-weight samples in order to obtain fair representations.

Limited research has focused on understanding the effect of bias in face recognition. Recently, we have (Nagpal et al. 2019) presented a first-of-its-kind in-depth analysis of bias in deep learning based face recognition algorithms. We have analyzed deep learning models for the existence of bias with respect to the race and age of individuals (Figure 4(b)). It is observed that, similar to humans, deep learning based face recognition models appear to undergo the phenomena of "own-race" and "own-age" bias, where they suffer a drop in accuracy while recognizing individuals of a different race or age than those seen during training. Feature visualizations further demonstrate an inherent bias in deep learning networks, wherein they appear to focus on race-specific discriminative facial regions. These findings suggest an immediate need for researchers to focus on eliminating bias from face recognition models in order to develop fairer systems.
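A simple diagnostic in the spirit of such bias analyses is to break recognition accuracy down by demographic subgroup and report the worst-to-best gap, as in the following Python sketch; the inputs are assumed arrays of predictions, ground-truth labels, and subgroup tags, not any specific protocol from the cited works.

# Per-subgroup accuracy and the max-min accuracy gap: a large gap
# indicates that the model favors some demographic subgroups.
import numpy as np

def subgroup_accuracy_gap(predictions, labels, subgroups):
    accs = {}
    for g in np.unique(subgroups):
        mask = subgroups == g
        accs[g] = float(np.mean(predictions[mask] == labels[mask]))
    gap = max(accs.values()) - min(accs.values())
    return accs, gap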
Discussion
While deep learning based face recognition models have achieved very high performance on "seen" distributions and have learnt to predict unseen classes under certain variations, they still show poor generalizability on unseen variations. This singularity can be exploited by an adversary to attack the models or can unintentionally yield biased decisions. For example, attacks such as adversarial perturbations, deepfakes, morphing/tampering using GANs, and silicone mask based physical presentation attacks have already been used to fool face recognition models. Future research should focus on two important aspects: (i) developing methods to compute the robustness level of an algorithm and to assess whether the algorithm would show biased behavior, and (ii) developing robust defense mechanisms to build trustworthy face recognition systems. Finally, the research community will benefit from novel databases and benchmarking protocols focusing on identifying the singular points of face recognition algorithms.
Acknowledgement
The authors are partially supported through the Infosys CAI at IIIT-Delhi, India. A. Agarwal is partly supported by the Visvesvaraya PhD Fellowship, and S. Nagpal is supported via the TCS PhD Fellowship. M. Vatsa is also supported through the Swarnajayanti Fellowship by the Government of India.
References
Agarwal, A.; Singh, R.; Vatsa, M.; and Noore, A. 2017a. SWAPPED! Digital face presentation attack detection via weighted local magnitude pattern. In IEEE/IAPR IJCB, 659-665.
Agarwal, A.; Yadav, D.; Kohli, N.; Singh, R.; Vatsa, M.; and Noore, A. 2017b. Face presentation attack with latex masks in multispectral videos. In IEEE CVPRW, 81-89.
Agarwal, A.; Singh, R.; Vatsa, M.; and Ratha, N. 2018. Are image-agnostic universal adversarial perturbations for face recognition difficult to detect? In IEEE BTAS, 1-7.
Agarwal, A.; Sehwag, A.; Vatsa, M.; and Singh, R. 2019. Deceiving the protector: Fooling face presentation attack detection algorithms. In IEEE/IAPR ICB.
Agarwal, A.; Singh, R.; and Vatsa, M. 2016. Face anti-spoofing using Haralick features. In IEEE BTAS, 1-6.
Alvi, M.; Zisserman, A.; and Nellaker, C. 2018. Turning a blind eye: Explicit removal of biases and variation from deep neural network embeddings. In ECCVW, 556-572.
Amerini, I.; Galteri, L.; Caldelli, R.; and Del Bimbo, A. 2019. Deepfake video detection through optical flow based CNN. In IEEE ICCVW.
Amini, A.; Soleimany, A. P.; Schwarting, W.; Bhatia, S. N.; and Rus, D. 2019. Uncovering and mitigating algorithmic bias through learned latent structure. In AAAI/ACM AIES, 289-295.
Amos, B.; Ludwiczuk, B.; Satyanarayanan, M.; et al. 2016. OpenFace: A general-purpose face recognition library with mobile applications. CMU School of Computer Science.
Anjos, A., and Marcel, S. 2011. Counter-measures to photo attacks in face recognition: A public database and a baseline. In IEEE IJCB, 1-7.
Athalye, A.; Carlini, N.; and Wagner, D. 2018. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In ICML, 274-283.
Bharati, A.; Singh, R.; Vatsa, M.; and Bowyer, K. W. 2016. Detecting facial retouching using supervised deep learning. IEEE TIFS.
Bharati, A.; Vatsa, M.; Singh, R.; Bowyer, K. W.; and Tong, X. 2017. Demography-based facial retouching detection using subclass supervised sparse autoencoder. In IEEE IJCB, 474-482.
Bhatt, H. S.; Bharadwaj, S.; Singh, R.; and Vatsa, M. 2013. Recognizing surgically altered face images using multiobjective evolutionary algorithm. IEEE TIFS.
Buolamwini, J., and Gebru, T. 2018. Gender shades: Intersectional accuracy disparities in commercial gender classification. In ACM FAT*, volume 81, 77-91.
Carlini, N., and Wagner, D. 2017. Towards evaluating the robustness of neural networks. In IEEE S&P, 39-57.
Carlini, N.; Athalye, A.; Papernot, N.; Brendel, W.; Rauber, J.; Tsipras, D.; Goodfellow, I. J.; Madry, A.; and Kurakin, A. 2019. On evaluating adversarial robustness. CoRR abs/1902.06705.
Chingovska, I.; Anjos, A.; and Marcel, S. 2012. On the effectiveness of local binary patterns in face anti-spoofing. In IEEE BIOSIG, 1-7.
Choi, Y.; Choi, M.; Kim, M.; Ha, J.-W.; Kim, S.; and Choo, J. 2018. StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation. In IEEE CVPR, 8789-8797.
Dabouei, A.; Soleymani, S.; Dawson, J.; and Nasrabadi, N. 2019. Fast geometrically-perturbed adversarial faces. In IEEE WACV, 1979-1988.
Das, A.; Dantcheva, A.; and Bremond, F. 2018. Mitigating bias in gender, age and ethnicity classification: A multi-task convolution neural network approach. In ECCVW, 573-585.
Deng, J., and Zafeiriou, S. 2019. ArcFace for disguised face recognition. In ICCVW.
Dhamecha, T. I.; Nigam, A.; Singh, R.; and Vatsa, M. 2013. Disguise detection and face recognition in visible and thermal spectrums. In ICB.
Dhamecha, T. I.; Singh, R.; Vatsa, M.; and Kumar, A. 2014. Recognizing disguised faces: Human and machine evaluation. PLOS ONE.
Ferrara, M.; Franco, A.; and Maltoni, D. 2014. The magic passport. In IEEE/IAPR IJCB, 1-7.
Galbally, J.; Marcel, S.; and Fierrez, J. 2014. Biometric antispoofing methods: A survey in face recognition. IEEE Access.
Gilmer, J.; Ford, N.; Carlini, N.; and Cubuk, E. 2019. Adversarial examples are a natural consequence of test error in noise. In ICML, 2280-2289.
Goel, A.; Singh, A.; Agarwal, A.; Vatsa, M.; and Singh, R. 2018. SmartBox: Benchmarking adversarial detection and mitigation algorithms for face recognition. In IEEE BTAS, 1-7.
Goel, A.; Agarwal, A.; Vatsa, M.; Singh, R.; and Ratha, N. 2019. Securing CNN model and biometric template using blockchain. In IEEE BTAS.
Goswami, G.; Ratha, N.; Agarwal, A.; Singh, R.; and Vatsa, M. 2018. Unravelling robustness of deep learning based face recognition against adversarial attacks. AAAI.
Goswami, G.; Agarwal, A.; Ratha, N.; Singh, R.; and Vatsa, M. 2019. Detecting and mitigating adversarial perturbations for robust face recognition. IJCV.
He, K.; Zhang, X.; Ren, S.; and Sun, J. 2015. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In IEEE ICCV, 1026-1034.
Jain, A.; Singh, R.; and Vatsa, M. 2018. On detecting GANs and retouching based synthetic alterations. In IEEE BTAS, 1-7.
Karahan, S.; Yildirum, M. K.; Kirtac, K.; Rende, F. S.; Butun, G.; and Ekenel, H. K. 2016. How image degradations affect deep CNN-based face recognition? In IEEE BIOSIG, 1-5.
Kohli, N.; Yadav, D.; and Noore, A. 2015. Multiple projective dictionary learning to detect plastic surgery for face verification. IEEE Access.
Li, Y., and Lyu, S. 2019. Exposing DeepFake videos by detecting face warping artifacts. In IEEE CVPRW, 46-52.
Majumdar, P.; Agarwal, A.; Singh, R.; and Vatsa, M. 2019. Evading face recognition via partial tampering of faces. In IEEE CVPRW.
Majumdar, A.; Singh, R.; and Vatsa, M. 2016. Face verification via class sparsity based supervised encoding. IEEE T-PAMI.
Manjani, I.; Tariyal, S.; Vatsa, M.; Singh, R.; and Majumdar, A. 2017. Detecting silicone mask-based presentation attack via deep dictionary learning. IEEE TIFS.
Marcel, S.; Nixon, M. S.; Fierrez, J.; and Evans, N., eds. 2018. Handbook of Biometric Anti-Spoofing: Presentation Attack Detection. Springer International Publishing. ISBN 978-3319926261.
Marsico, M. D.; Nappi, M.; Riccio, D.; and Wechsler, H. 2015. Robust face recognition after plastic surgery using region-based approaches. PR.
Martinez, A., and Benavente, R. 1998. The AR face database. CVC Technical Report.
Mehta, S.; Uberoi, A.; Agarwal, A.; Vatsa, M.; and Singh, R. 2019. Crafting a panoptic face presentation attack detector. In IEEE/IAPR ICB.
Moosavi-Dezfooli, S.-M.; Fawzi, A.; Fawzi, O.; and Frossard, P. 2017. Universal adversarial perturbations. In IEEE CVPR, 1765-1773.
Nagpal, S.; Singh, M.; Singh, R.; and Vatsa, M. 2019. Deep learning for face recognition: Pride or prejudiced? arXiv:1904.01219.
Nappi, M.; Ricciardi, S.; and Tistarelli, M. 2016. Deceiving faces: When plastic surgery challenges face recognition. Img. and Vis. Comp.
Nirkin, Y.; Keller, Y.; and Hassner, T. 2019. FSGAN: Subject agnostic face swapping and reenactment. In IEEE ICCV, 7184-7193.
Parkhi, O. M.; Vedaldi, A.; and Zisserman, A. 2015. Deep face recognition. In BMVC, 41.1-41.12.
Ramachandra, R., and Busch, C. 2017. Presentation attack detection methods for face recognition systems: A comprehensive survey. ACM Computing Surveys.
Ryu, H. J.; Adam, H.; and Mitchell, M. 2018. InclusiveFaceNet: Improving face attribute detection with race and gender diversity. In FAT/ML.
Scherhag, U.; Rathgeb, C.; Merkle, J.; Breithaupt, R.; and Busch, C. 2019. Face recognition systems under morphing attacks: A survey. IEEE Access.
Siddiqui, T. A.; Bharadwaj, S.; Dhamecha, T. I.; Agarwal, A.; Vatsa, M.; Singh, R.; and Ratha, N. 2016. Face anti-spoofing with multifeature videolet aggregation. In IEEE/IAPR ICPR, 1035-1040.
Silver, D.; Huang, A.; Maddison, C. J.; Guez, A.; Sifre, L.; Van Den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M.; et al. 2016. Mastering the game of Go with deep neural networks and tree search. Nature.
Singh, R.; Vatsa, M.; Bhatt, H. S.; Bharadwaj, S.; Noore, A.; and Nooreyezdan, S. S. 2010. Plastic surgery: A new dimension to face recognition. IEEE TIFS.
Singh, M.; Singh, R.; Vatsa, M.; et al. 2019a. Disguised faces in the wild 2019. In IEEE ICCVW.
Singh, M.; Singh, R.; Vatsa, M.; Ratha, N. K.; and Chellappa, R. 2019b. Recognizing disguised faces in the wild. IEEE T-BIOM.
Singh, R.; Vatsa, M.; and Noore, A. 2009a. Effect of plastic surgery on face recognition: A preliminary study. In IEEE CVPRW, 72-77.
Singh, R.; Vatsa, M.; and Noore, A. 2009b. Face recognition with disguise and single gallery images. Img. and Vis. Comp.
Suri, S.; Sankaran, A.; Vatsa, M.; and Singh, R. 2018. On matching faces with alterations due to plastic surgery and disguise. In IEEE BTAS, 1-7.
Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; and Fergus, R. 2014. Intriguing properties of neural networks. ICLR.
Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; and Rabinovich, A. 2015. Going deeper with convolutions. In IEEE CVPR, 1-9.
Wang, T. Y., and Kumar, A. 2016. Recognizing human faces under disguise and makeup. In IEEE ISBA.
Yuan, X.; He, P.; Zhu, Q.; and Li, X. 2019. Adversarial examples: Attacks and defenses for deep learning. IEEE TNNLS.
Zhang, Z.; Yan, J.; Liu, S.; Lei, Z.; Yi, D.; and Li, S. Z. 2012. A face antispoofing database with diverse attacks. In IEEE/IAPR ICB.