CelebA-Spoof Challenge 2020 on Face Anti-Spoofing: Methods and Results
Yuanhan Zhang, Zhenfei Yin, Jing Shao, Ziwei Liu, Shuo Yang, Yuanjun Xiong, Wei Xia, Yan Xu, Man Luo, Jian Liu, Jianshu Li, Zhijun Chen, Mingyu Guo, Hui Li, Junfu Liu, Pengfei Gao, Tianqi Hong, Hao Han, Shijie Liu, Xinhua Chen, Di Qiu, Cheng Zhen, Dashuang Liang, Yufeng Jin, Zhanlong Hao
Abstract
As facial interaction systems are prevalently deployed, the security and reliability of these systems have become a critical issue, attracting substantial research effort. Among these efforts, face anti-spoofing emerges as an important area, whose objective is to identify whether a presented face is live or spoof. Recently, a large-scale face anti-spoofing dataset, CelebA-Spoof, comprising 625,537 pictures of 10,177 subjects, has been released. It is the largest face anti-spoofing dataset in terms of the number of images and subjects. This paper reports the methods and results of the CelebA-Spoof Challenge 2020 on Face Anti-Spoofing, which employs the CelebA-Spoof dataset. Model evaluation is conducted online on a hidden test set. A total of 134 participants registered for the competition, and 19 teams made valid submissions. We analyze the top-ranked solutions and present a discussion of future work directions.
1. Introduction
Face interaction systems have become an essential part of real-life applications, with successful deployments in electronic identity authentication. Meanwhile, it is challenging to deal with Presentation Attacks (PA) [2] in practical usage. In order to protect our privacy and property from being illegally used by others, Face Anti-Spoofing (FAS) [5, 11, 1], which aims to determine whether a presented face belongs to an attacker or a genuine client, has emerged as a crucial technique and attracted extensive interest in recent years [7].

• Yuanhan Zhang is with Beijing Jiaotong University and SenseTime Research.
• Zhenfei Yin and Jing Shao are with SenseTime Research.
• Ziwei Liu is with S-Lab, Nanyang Technological University.
• Shuo Yang, Yuanjun Xiong, and Wei Xia are with Amazon Web Services.
• Yan Xu, Man Luo, Jian Liu, Jianshu Li, Zhijun Chen and Mingyu Guo are with ZOLOZ.
• Hui Li, Junfu Liu, Pengfei Gao, Tianqi Hong, Hao Han and Shijie Liu are with Meituan.
• Xinhua Chen, Di Qiu, Cheng Zhen, Dashuang Liang, Yufeng Jin, and Zhanlong Hao are with the Vision Intelligence Center of Meituan.

Leveraging the CelebA-Spoof [15] dataset, we organize the CelebA-Spoof Challenge 2020 on Face Anti-Spoofing (CelebA-Spoof Challenge), collocated with the Workshop on Sensing, Understanding and Synthesizing Humans at ECCV 2020. The goal of this challenge is to boost research on face anti-spoofing. Specifically, CelebA-Spoof comprises 625,537 pictures of 10,177 subjects, making it the largest face anti-spoofing dataset in terms of the number of images and subjects. The dataset also features a hidden test set containing around 30,000 images for the online evaluation of this challenge. The construction of the hidden test set is the same as that of the public dataset. In the following sections, we describe the challenge, analyze the top-ranked solutions, and provide discussion to draw conclusions from the competition and outline future work directions.
2. Challenge Overview
The CelebA-Spoof Challenge is hosted on the CodaLab platform. After registering for the CelebA-Spoof Challenge, each team is allowed to submit their models to Amazon Web Services (AWS), and each team is allocated one 16 GB Tesla V100 GPU to perform online evaluation on the hidden test set. Encrypted prediction files containing the result for each sample in the hidden test set are sent to the teams by automatic email once their requested online evaluation has finished. Teams are required to upload their encrypted prediction files to the CodaLab platform for the ranking. (Workshop website: https://sense-human.github.io/. Challenge website: https://competitions.codalab.org/competitions/26210. Online evaluation: https://aws.amazon.com.)

The CelebA-Spoof Challenge 2020 on Face Anti-Spoofing employs the CelebA-Spoof dataset [15], which was proposed at ECCV 2020. CelebA-Spoof is a large-scale face anti-spoofing dataset with 625,537 images from 10,177 subjects, annotated with 43 rich attributes covering face, illumination, environment and spoof types. The live images are selected from the CelebA dataset [10]; the spoof images of CelebA-Spoof were collected and annotated by us. Among the 43 rich attributes, 40 belong to live images, covering all facial components and accessories such as skin, nose, eyes, eyebrows, lips, hair, hat, and eyeglasses; 3 belong to spoof images, namely spoof type, environment and illumination condition. CelebA-Spoof can be used to train and evaluate face anti-spoofing algorithms. The hidden test set is devised for the CelebA-Spoof Challenge, and its data construction is the same as that of the public test set. All teams participating in the CelebA-Spoof Challenge are restricted to training their algorithms on the publicly available CelebA-Spoof training set.
Considering face anti-spoofing as a binary classification task, we use TPR@FPR as the evaluation criterion. Specifically, the spoof class is Positive and the live class is Negative.
FPR = FP / (FP + TN),    TPR = TP / (TP + FN)    (1)
Here, FPR, TPR, TP, TN, FP and FN denote False Positive Rate, True Positive Rate, True Positive, True Negative, False Positive and False Negative, respectively. TPR@FPR=5E-3 determines the final ranking. Besides, we also provide TPR@FPR=1E-3 and TPR@FPR=1E-4. Among all uploaded results, if two entries have the same TPR@FPR=5E-3, the one with the higher TPR@FPR=1E-4 achieves the higher ranking. The CelebA-Spoof Challenge lasted for nine weeks, from August 28, 2020 to October 31, 2020. During the challenge, participants had access to the public CelebA-Spoof dataset, and they were restricted to using the public CelebA-Spoof training set to train their models. The challenge results were announced on February 10, 2021. A total of 134 participants registered for the competition, and 19 teams made valid submissions.
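For concreteness, the sketch below shows one way to compute TPR at a given FPR budget. It is a minimal NumPy illustration, assuming scores are spoof confidences and labels use 1 for spoof; it is not the official evaluation code, and the function name is ours.

import numpy as np

def tpr_at_fpr(scores, labels, target_fpr=5e-3):
    # scores: model confidence that an image is spoof (positive class)
    # labels: 1 for spoof (positive), 0 for live (negative)
    scores = np.asarray(scores, dtype=np.float64)
    labels = np.asarray(labels, dtype=np.int64)
    order = np.argsort(-scores)            # sweep thresholds from high to low score
    sorted_labels = labels[order]
    tp = np.cumsum(sorted_labels == 1)     # true positives accepted so far
    fp = np.cumsum(sorted_labels == 0)     # false positives accepted so far
    tpr = tp / max((labels == 1).sum(), 1)
    fpr = fp / max((labels == 0).sum(), 1)
    valid = fpr <= target_fpr              # thresholds that respect the FPR budget
    return float(tpr[valid].max()) if valid.any() else 0.0

# Final ranking uses tpr_at_fpr(scores, labels, 5e-3);
# tpr_at_fpr(scores, labels, 1e-4) serves as the tie-breaker.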
3. Results and Solutions
Among the 19 teams that made valid submissions, many participants achieved promising results. We show the final results of the top-5 teams in Table 1. In the following sections, we present the solutions of the top-3 entries.
Table 1. Final results of the top-5 teams in the CelebA-Spoof Challenge 2020 on Face Anti-Spoofing.
Ranking | Team Name | User Name | TPR (%) ↑ at FPR=1E-3 | TPR (%) ↑ at FPR=5E-3 | TPR (%) ↑ at FPR=1E-4

First-Place Solution
Team members: Yan Xu, Man Luo, Jian Liu, Jianshu Li, Zhijun Chen, Mingyu Guo
Figure 1. The framework of the first-place solution.
General Method Description.
The champion team proposes a robust method for face anti-spoofing. It has two components: 1) Spoof modeling, in which they adopt several state-of-the-art models to predict the spoof cue of each test image; and 2) Spoof fusion, in which they propose a heuristic voting strategy for robustly combining multiple scores.
Spoof Modeling.
In order to obtain the spoof cue of attacked images, they combine a bag of state-of-the-art models to find the spoof evidence of each test image in this competition. Specifically, they propose a novel framework named FOCUS (Finding spOof CUe for face anti-Spoofing) to handle the face anti-spoofing problem. A multi-task learning based model, AENet [15], a binary classification based model, ResNet [8], and an attack-type classification based model are adopted to enhance the ability to detect the spoof cue. Furthermore, a noise print method is adopted to recognize the device type of attacked images.

• FOCUS:
As shown in Figure 1, inspired by [15, 4], they propose a novel framework, FOCUS, for face anti-spoofing. It mainly includes two modules: a spoof cue generator and an auxiliary classifier. The spoof cue generator adopts a U-Net structure with an encoder and a decoder to generate a spoof cue of the same size as the input image. A regression loss is applied during training to minimize the spoof cue of live images, while no constraint is applied to spoof images. In order to improve generalization to unknown attack types, they design a two-path encoder and adopt ResNet18-CDC [13] as the backbone of each encoder. In addition, they introduce a reflection map [14] and a depth map [9] in the latent space of the encoder, and adopt 3D geometric information as an auxiliary constraint. As a result, the features of the latent space have higher responses on spoof images. In the decoder, they introduce a multi-branch ArcFace loss [3] to improve the compactness within the live class and the separation between the live and spoof classes. As the auxiliary classifier, they design a binary classification model connected after the generator to assist the end-to-end training of the whole framework.

• AENet:
AENet [15] is adopted to predict the spoof score of each test image.

• ResNet:
Through bad-case analysis, they find that AENet is not good at detecting spoof images in mask and outdoor scenarios. To this end, they deploy a binary classification model, ResNet-18 [8], to enhance the spoof-detection ability. During training, the spoof training samples are drawn only from mask and outdoor attacks. Furthermore, focal loss is adopted to alleviate over-fitting to easy samples. Meanwhile, a series of data augmentation strategies, such as random crop, image flip and color distortion, are adopted to improve its generalization ability.

• Attack types:
By analyzing the CelebA-Spoof training data, they find that different spoof images share similar attack clues, such as similar display borders, similar backgrounds, and similar paper printing edges. To this end, they train an attack-type based model to predict the attack clue. Specifically, they first remove the foreground area containing the face, since the spoof clue is a feature of the attack image itself. Then they train a classification model over the various spoof types.

• Noise Print:
The digital imaging pipelines of different cameras share common processes such as data compression, interpolation, and gamma correction, and also have unique processes that offer more advanced functionality. These unique processes vary from camera model to camera model, so images acquired by different cameras carry artifacts peculiar to the camera itself, which can be exploited for face anti-spoofing. In this competition, one pattern was observed in the training set: live images are collected from the internet or social media, while spoof images are directly captured by device cameras, e.g., phone, pad or PC cameras. They find that different device cameras leave different noise prints, and therefore use the noise print as a feature to represent the camera type. To extract the noise print, they first apply a DCT transformation and quantization to the image; then 64 frequency density histograms are calculated over the 8x8 macroblocks of DCT coefficients. For each frequency density histogram, an FFT is applied and the number of peaks exceeding a pre-defined threshold t is counted. Finally, a 64-dimensional vector represents the noise print of each camera type. During training, they first divide the training set into four groups: group 1 from live images, group 2 from spoof images captured by phones, group 3 from spoof images captured by pads, and group 4 from spoof images captured by PCs. The noise prints of the four groups are then extracted and fed to a network that distinguishes the distributions of the different noise prints.
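A rough NumPy/SciPy sketch of this noise-print descriptor is given below. The quantization step, the histogram bin count, and the peak threshold t are placeholders, since the exact values are not reported, and the function name is ours.

import numpy as np
from scipy.fft import dctn, fft

def noise_print_feature(gray, block=8, num_bins=32, peak_thresh=5.0, q_step=10.0):
    # gray: 2-D grayscale image array; we only keep the region divisible into 8x8 blocks.
    h = gray.shape[0] // block * block
    w = gray.shape[1] // block * block
    img = np.asarray(gray[:h, :w], dtype=np.float64)
    # Per-block 8x8 DCT followed by a simple uniform quantization.
    blocks = img.reshape(h // block, block, w // block, block).transpose(0, 2, 1, 3)
    coeffs = dctn(blocks, axes=(-2, -1), norm='ortho')
    coeffs = np.round(coeffs / q_step).reshape(-1, block * block)   # (n_blocks, 64)
    feature = np.zeros(block * block)
    for k in range(block * block):
        hist, _ = np.histogram(coeffs[:, k], bins=num_bins)          # frequency density histogram
        spectrum = np.abs(fft(hist.astype(np.float64)))
        # Count spectral magnitudes above threshold t (a proxy for the number of peaks).
        feature[k] = np.sum(spectrum > peak_thresh)
    return feature                                                    # 64-D noise-print descriptor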
Spoof Fusion.
To obtain the best TAR (true accept rate) at a given FAR (false accept rate) in the face anti-spoofing task, they propose a heuristic voting scheme at the score level for robustly combining different models. They first normalize all confidence scores of each trained model to [0, 1], assign the model with the best performance as the main model, and regard the others as auxiliary models. Then, the confidence score is changed to 0 or 1 if all models have similar prediction ranges: the score is amended to 0 or 1 when the auxiliary models are strongly confident that the face is live or spoof, respectively. Images whose scores are not close to 0 or 1 are considered hard cases, because they lie near the decision boundary of each model; for these cases, the scores are re-arranged to around 0.1.
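A minimal sketch of such a voting rule is shown below. The 0.9/0.1 confidence thresholds and the exact snapping behavior are assumptions on our part, since the heuristic is only described qualitatively.

import numpy as np

def heuristic_vote(main_score, aux_scores, hi=0.9, lo=0.1):
    # main_score: normalized score in [0, 1] of the best single model (higher = spoof)
    # aux_scores: normalized scores of the auxiliary models
    aux = np.asarray(aux_scores, dtype=np.float64)
    # Auxiliary models strongly agree on spoof / live: snap the final score.
    if np.all(aux >= hi):
        return 1.0
    if np.all(aux <= lo):
        return 0.0
    # Hard case: every model sits near its decision boundary, push towards ~0.1.
    if lo < main_score < hi and np.all((aux > lo) & (aux < hi)):
        return 0.1
    # Otherwise keep the main model's opinion.
    return float(main_score)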
Implementation details:
For the fusion strategy, they adopt a heuristic voting scheme to obtain the best trade-off between TAR and FAR; please refer to the "Testing description" for more details. The fusion strategy allows them to achieve their best TAR at FAR = 5E-3 and FAR = 1E-4.

FOCUS is implemented in PyTorch and trained end-to-end. In the training stage, models are trained with the Adam optimizer, and the initial learning rate (lr) and weight decay (wd) are 2E-4 and 5E-5, respectively. They train models for a maximum of 25 epochs, while the lr decays every 6 epochs by a factor of 0.3. During training, the samples are resampled to keep the live-spoof ratio close to 1:1. The training batch size is 64 on four 1080Ti GPUs. They initialize the ResNet18-CDC backbone in the encoders with the MSRA method. Their pipeline takes about 24 hours for training and 0.8 s for testing each image (not including pre-processing).
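The reported hyper-parameters roughly translate into the PyTorch setup sketched below. The weighted-sampler implementation of the 1:1 live-spoof resampling and the helper name are our own illustration, not the team's code.

import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

def build_training_setup(model, dataset, labels):
    # labels: per-sample class ids (e.g. 1 = live, 0 = spoof), used to rebalance to ~1:1
    labels_t = torch.as_tensor(labels, dtype=torch.long)
    class_count = torch.bincount(labels_t, minlength=2).float().clamp(min=1.0)
    weights = 1.0 / class_count[labels_t]                    # inverse-frequency sample weights
    sampler = WeightedRandomSampler(weights, num_samples=len(labels_t), replacement=True)
    loader = DataLoader(dataset, batch_size=64, sampler=sampler, num_workers=4)

    # Adam with the reported initial lr and weight decay.
    optimizer = torch.optim.Adam(model.parameters(), lr=2e-4, weight_decay=5e-5)
    # lr decays every 6 epochs by a factor of 0.3, for at most 25 epochs.
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=6, gamma=0.3)
    return loader, optimizer, scheduler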
Second-Place Solution
Team members: Hui Li, Junfu Liu, Pengfei Gao, Tianqi Hong, Hao Han, Shijie Liu

General Method Description.
In this challenge, they adopt five different models and ensemble them with a "weighting-after-sorting" strategy for face anti-spoofing. In the training and test stages of several networks, they use patches to keep the models focused on spoof cues instead of other irrelevant face features, which makes the trained networks more robust and generalizable.

Figure 2. The framework of the second-place solution.

In the fusion stage, they propose a novel ensemble strategy named "weighting-after-sorting": the output scores of the different models are first sorted, the top k scores are selected, and different weights, searched by the Particle Swarm Optimization (PSO) algorithm, are assigned to them. This strategy is rank-specific instead of model-specific, which further enhances the performance of their method.
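A minimal sketch of the fusion rule (excluding the PSO search for the weights) might look as follows; whether the rank weights are normalized is an assumption.

import numpy as np

def weighting_after_sorting(scores, rank_weights):
    # scores: spoof scores of one image from the individual models (any order)
    # rank_weights: k weights (e.g. found by PSO) for the k highest-ranked scores
    s = np.sort(np.asarray(scores, dtype=np.float64))[::-1]   # sort scores in descending order
    w = np.asarray(rank_weights, dtype=np.float64)
    top = s[:len(w)]                                          # keep only the top-k scores
    return float(np.dot(top, w) / w.sum())                    # rank-specific weighted fusion

# e.g. fused = weighting_after_sorting(six_model_scores, pso_weights)  # k = 4 in the final run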
Training Description.
They used five single models for the subsequent model ensemble. The details of the single models are as follows.

• CDCNpp:
They used the Central Difference Convolutional Network [13]. Instead of training on the whole image, they used random patches of the face images as inputs. They trained CDCNpp on two scales of patches, 64×64 and 96×96. They adopted grayscale images as the depth supervision in CDCNpp; the sizes of the grayscale images are 16×16 and 24×24, respectively.

• LGSC:
They adopted LGSC [4]. The input images were resized to 224×224 and the model was initialized with a pre-trained ResNet18 model. They used a batch-balanced sampler to balance the positive and negative samples in each batch.

• SeResNet50:
They adopted SeResNet50 for simple binary classification, using images resized to 224×224 as inputs and a pre-trained SeResNet50 model.

• EfficientNet-b7:
All settings [12] are the same as for SeResNet50, with 224×224 inputs and a model pre-trained on ImageNet.

• SeResNeXt50:
Training took random patches of size 64×64 as inputs. To take advantage of other supervision information, such as the spoof types and illumination types provided in the training set, they adopted multi-task learning similar to AENet [15] and added two fully connected layers at the tail of SeResNeXt50, predicting the spoof types and the illumination types, respectively. The losses are all softmax cross-entropy losses, with fixed weights on the spoof-type and illumination-type losses.
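A minimal sketch of such auxiliary heads and their combined loss is given below; the feature dimension, class counts, and auxiliary loss weights are placeholders, not the values used by the team.

import torch
import torch.nn as nn

class MultiTaskHead(nn.Module):
    # Auxiliary heads in the spirit of AENet: live/spoof plus spoof-type and
    # illumination-type classification on top of a shared backbone feature.
    def __init__(self, feat_dim=2048, num_spoof_types=11, num_illum=5):
        super().__init__()
        self.live_spoof = nn.Linear(feat_dim, 2)
        self.spoof_type = nn.Linear(feat_dim, num_spoof_types)
        self.illumination = nn.Linear(feat_dim, num_illum)

    def forward(self, feat):
        return self.live_spoof(feat), self.spoof_type(feat), self.illumination(feat)

def multi_task_loss(outputs, targets, w_type=0.1, w_illum=0.1):
    # All three terms are softmax cross-entropy; the auxiliary weights are placeholders.
    ce = nn.CrossEntropyLoss()
    ls, st, il = outputs
    y_ls, y_st, y_il = targets
    return ce(ls, y_ls) + w_type * ce(st, y_st) + w_illum * ce(il, y_il)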
Figure 3. The framework of the second-place solution.

Table 2. Image transforms in the training stage of the second-place solution.
Methods: CDCNpp, LGSC, SeResNet50, EfficientNet-b7, SeResNeXt50
RandomHorizontalFlip: ✓ ✓ ✓ ✓ ✓
RandomRotation: ✓ ✓ ✓ ✓ ✓
RandomErasing: ✓
Cutout: ✓ ✓ ✓ ✓
ColorJitter: ✓ ✓ ✓ ✓ ✓
Mixup: ✓ ✓
Testing Description.
The testing strategies of the single models above are as follows:
• CDCNpp: The input image is first split into 3×3 parts; the upper-left 64×64 corner and the lower-right 96×96 corner of each part are cropped to generate test patches for the two CDCNpp models trained on different patch sizes. Figure 3 illustrates how the patches are generated. Horizontal flipping is applied to the 96×96 patches. For each CDCNpp, the mean value over the nine patches of a test image is taken as the prediction score of the image.
• SeResNeXt50: The center 64×64 part of the image is cropped and flipped horizontally. The prediction score of SeResNeXt50 is the mean value over the two patches.
• Others: For the other models mentioned above, the input images are resized to 224×224 and then fed to LGSC, SeResNet50 and EfficientNet-b7.
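A sketch of this grid-and-corner patch generation is given below, assuming each grid cell is at least 96 pixels on a side; the helper name is ours.

import numpy as np

def grid_corner_patches(image, grid=3, small=64, large=96):
    # image: H x W x C array. Split it into a grid x grid layout and, for each cell,
    # crop a small patch from the upper-left corner and a large patch from the
    # lower-right corner (for the CDCNpp models trained on 64- and 96-pixel patches).
    h, w = image.shape[:2]
    ch, cw = h // grid, w // grid
    small_patches, large_patches = [], []
    for i in range(grid):
        for j in range(grid):
            cell = image[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw]
            small_patches.append(cell[:small, :small])
            large_patches.append(cell[-large:, -large:])
    return small_patches, large_patches

# The per-image score of each CDCNpp is the mean of its nine patch scores, e.g.:
# score = float(np.mean([model(p) for p in small_patches]))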
Implementation details.
As introduced above, they test an image with six models (two CDCNpp models trained on different patch sizes and four other models) and obtain six scores for the input image belonging to the spoof class. They propose a novel "weighting-after-sorting" strategy for the model ensemble. Specifically, they first sort the six scores in descending order and then select the top k scores for score fusion. They use the Particle Swarm Optimization (PSO) algorithm to find the k weights assigned to the top k scores at different ranks that give the best performance on the validation set. With this strategy, the weights are not model-specific but rank-specific. Consider a spoof image for which only one model gives a high score: that model may not carry enough weight in a conventional model-specific fusion strategy, but its score will certainly be noticed in the weighting-after-sorting strategy. k was set to 4 in their final submission. Their pipeline takes about 18 hours for training and 0.076 s for testing each image (pre-processing included).

Third-Place Solution
Team members: Xinhua Chen, Di Qiu, Cheng Zhen, Dashuang Liang, Yufeng Jin, Zhanlong Hao
Figure 4. The framework of the third-place solution.
General Method Description.
As shown in Fig. 5, they propose a novel method based on fusing CDCN [13] and DAN [6]. CDCN captures detailed patterns by aggregating both intensity and gradient information, while DAN, with its self-attention mechanism, enhances the discriminative ability of feature representations through spatial and channel inter-dependencies. Combined, they significantly improve face anti-spoofing performance by modeling rich contextual information over local intensity and gradient features. In addition, motivated by the insight that the artificial spoofing information in an image is independent of its semantic information, they use facial patches [1] as model inputs, with the intention of decoupling the spoofing feature from the full-face feature. The full-face image is divided into many facial patches, which forces the proposed networks to focus on spoof-specific discriminative information. Finally, a multi-scale strategy is introduced to generate multi-scale patches from the original image: random crops are applied to produce patches at five scales, i.e., 32×32, 48×48, 64×64, 112×112 and 128×128, as the input sizes for the proposed CNN-based networks. In general, they propose a multi-scale patch-based CDC-DAN method for face anti-spoofing.

Figure 5. The backbone of the proposed CDC-DAN.
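A minimal sketch of the multi-scale random patch sampling described above is given below; the number of patches drawn per scale is an assumption.

import random

def sample_multi_scale_patches(image, scales=(32, 48, 64, 112, 128), per_scale=1):
    # image: H x W x C array. Randomly crop square patches at each of the five scales
    # used by the multi-scale patch-based CDC-DAN networks.
    h, w = image.shape[:2]
    patches = []
    for s in scales:
        for _ in range(per_scale):
            top = random.randint(0, max(h - s, 0))
            left = random.randint(0, max(w - s, 0))
            # If the image is smaller than the scale, the crop is simply smaller;
            # the team handles this by mirror-based enlargement (see Training Description).
            patches.append(image[top:top + s, left:left + s])
    return patches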
Training Description.
In the training stage, to reduce over-fitting, the convolutional neural networks are trained with data augmentation. For face anti-spoofing, they find mixed-example data augmentation very useful: they use a variety of methods, including cutout, vh-mixup, mixed-concat, random square and random interval, to generate mixed-example augmented data, which yields improvements over models trained without any form of mixed-example data augmentation. Moreover, if an image is smaller than the preset input size, they enlarge it by mirroring instead of scaling, which greatly improves performance.
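A minimal sketch of the mirror-based enlargement is given below, assuming a single reflection suffices (i.e., the crop is at least half of the target size); the helper name is ours.

import numpy as np

def enlarge_by_mirroring(image, target_h, target_w):
    # Pad a too-small face crop up to the network input size by reflecting its
    # borders, instead of rescaling it (which would distort the spoof texture).
    h, w = image.shape[:2]
    pad_h = max(target_h - h, 0)
    pad_w = max(target_w - w, 0)
    pads = ((pad_h // 2, pad_h - pad_h // 2), (pad_w // 2, pad_w - pad_w // 2))
    if image.ndim == 3:
        pads = pads + ((0, 0),)     # do not pad the channel axis
    return np.pad(image, pads, mode='reflect')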
Testing Description.
In the test stage, they do not use any data augmentation. They only adjust the image size by mirroring to meet the different network input requirements. Then, they uniformly sample multiple small patches of different sizes and feed them into the corresponding neural network models. Finally, they ensemble the outputs of the different patches and different models.
Implementation details.
As shown in Fig. 5, their fusion strategy is a simple method that adjusts the weights of the CNN models based on the best performance on the validation dataset. The main benefit of fusion is that the averaged prediction performs better than any contributing member of the ensemble; the mechanism for improving performance through fusion is usually the reduction of the variance of the predictions made by each individual model. CDC-DAN takes 2 days to train on 8 GPUs, SE-ResNeXt26 takes 12 hours on 8 GPUs, and the light-weight network takes 4 hours on 4 GPUs.

4. Discussion
The winning solutions described above achieved promising results in the CelebA-Spoof Challenge. These solutions focus on different aspects of developing a robust and efficient face anti-spoofing model. To briefly sum up, two key points are essential for improving performance on the face anti-spoofing task.
1) Spoofing Cues Model:
Besides commonly used deep learning models such as ResNet and EfficientNet, these solutions not only build on recently published models [4, 13], but also devise novel frameworks for detecting spoofing cues, such as the attack-type based model and noise print based model of the first-place solution, and the CDC-DAN proposed by the third-place solution.
2) Ensemble Strategy:
The winning methods leverage different ensemble strategies to boost performance, such as the heuristic voting scheme of the first-place solution and the "weighting-after-sorting" strategy of the second-place solution. Moreover, we believe there is still much room for improvement in future face anti-spoofing challenges. For example,
1) Size:
The size of the hidden test set could be larger in the future.
2) Diversity:
The live images could be more realistic instead of being inherited from CelebA [10].
Acknowledgments.
We thank Amazon Web Services for sponsoring the prize of this challenge. Besides, we sincerely thank the DeeperForensics Challenge (https://competitions.codalab.org/competitions/25228) for its codebase, and especially Zhengkui Guo and Liming Jiang for helpful discussions.

References

[1] Yousef Atoum, Yaojie Liu, Amin Jourabloo, and Xiaoming Liu. Face anti-spoofing using patch and depth-based CNNs. In IJCB, pages 319–328. IEEE, 2017.
[2] Josef Bigun, Hartwig Fronthaler, and Klaus Kollreider. Assuring liveness in biometric identity authentication by real-time face tracking. CIHSPS, pages 104–111, 2004.
[3] Jiankang Deng, Jia Guo, Niannan Xue, and Stefanos Zafeiriou. ArcFace: Additive angular margin loss for deep face recognition. In CVPR, 2019.
[4] Haocheng Feng, Zhibin Hong, Haixiao Yue, Yang Chen, Keyao Wang, Junyu Han, Jingtuo Liu, and Errui Ding. Learning generalized spoof cues for face anti-spoofing. arXiv preprint arXiv:2005.03922, 2020.
[5] Robert W. Frischholz and Alexander Werner. Avoiding replay-attacks in a face recognition system using head-pose estimation. In IEEE International SOI Conference, pages 234–235. IEEE, 2003.
[6] Jun Fu, Jing Liu, Haijie Tian, Yong Li, Yongjun Bao, Zhiwei Fang, and Hanqing Lu. Dual attention network for scene segmentation. In CVPR, pages 3146–3154, 2019.
[7] Javier Galbally, Sébastien Marcel, and Julian Fierrez. Biometric antispoofing methods: A survey in face recognition. IEEE Access, 2:1530–1552, 2014.
[8] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pages 770–778, 2016.
[9] Taewook Kim, YongHyun Kim, Inhan Kim, and Daijin Kim. BASN: Enriching feature representation using bipartite auxiliary supervisions for face anti-spoofing. In ICCVW, 2019.
[10] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In ICCV, 2015.
[11] Stephanie A. C. Schuckers. Spoofing and anti-spoofing measures. Information Security Technical Report, 7(4):56–62, 2002.
[12] Mingxing Tan and Quoc Le. EfficientNet: Rethinking model scaling for convolutional neural networks. In ICML, pages 6105–6114. PMLR, 2019.
[13] Zitong Yu, Chenxu Zhao, Zezheng Wang, Yunxiao Qin, Zhuo Su, Xiaobai Li, Feng Zhou, and Guoying Zhao. Searching central difference convolutional networks for face anti-spoofing. In CVPR, pages 5295–5305, 2020.
[14] Xuaner Zhang, Ren Ng, and Qifeng Chen. Single image reflection separation with perceptual losses. In CVPR, pages 4786–4794, 2018.
[15] Yuanhan Zhang, Zhenfei Yin, Yidong Li, Guojun Yin, Junjie Yan, Jing Shao, and Ziwei Liu. CelebA-Spoof: Large-scale face anti-spoofing dataset with rich annotations. In ECCV, 2020.