Perception of Image Features in Post-Mortem Iris Recognition: Humans vs Machines
Mateusz Trokielewicz, Biometrics and Machine Intelligence Lab, Research and Academic Computer Network, Kolska 12, 01045 Warsaw, Poland, [email protected]
Adam Czajka, Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN 46556, USA, [email protected]
Piotr Maciejewicz, Department of Ophthalmology, Medical University of Warsaw, Lindleya 4, 02005 Warsaw, Poland, [email protected]
Abstract
Post-mortem iris recognition can offer an additional forensic method of personal identification. However, contrary to the already well-established human examination of fingerprints, making iris recognition human-interpretable is harder, and it has therefore never been applied in forensic proceedings. There is no strong consensus among biometric experts on which iris features, especially those in iris images acquired post-mortem, are the most important for human experts solving an iris recognition task. This paper explores two ways of broadening this knowledge: (a) with an eye tracker, the salient features used by humans comparing iris images on a screen are extracted, and (b) class activation maps produced by a convolutional neural network solving the iris recognition task are analyzed. Both humans and the deep learning-based solution were examined with the same set of iris image pairs. This made it possible to compare the attention maps and conclude that (a) a deep learning-based method can offer human-interpretable decisions backed by visual explanations pointing a human examiner to salient regions, and (b) in many cases humans and the machine used different features, which means that a deep learning-based method can offer complementary support to human experts. This paper offers the first known to us human-interpretable comparison of machine-based and human-based post-mortem iris recognition, along with trained models annotating salient iris image regions.
1. Introduction
Recent research has unveiled the potential of the iris in post-mortem identification and verification of humans [22, 21, 2, 23]. These studies, conducted both in mortuary, cold-storage conditions and in an uncontrolled outside environment, have shown that correct matches can be obtained with cadaver irises even three weeks after death. However, existing iris matchers are weakly suited to this task, with error rates growing as the time since the subject's death increases. There are also no human-interpretable post-mortem iris recognition methods reported in the literature to help human examiners in their work. If post-mortem iris biometrics can be successfully implemented, it could be a valuable addition to the forensic expert's set of identification methods, proving useful in cases when other methods, such as DNA or dental records, are unavailable or difficult to apply. It is easy to imagine a hypothetical natural disaster victim search, where a fast positive identification can free up valuable resources of emergency response teams and let them proceed with shorter delay.

At the same time, simply providing a machine-backed decision on to whom the iris might belong would not be considered sufficient during courthouse proceedings, similarly to the case of fingerprints, where automated fingerprint identification systems (AFIS) serve only as assistance to the human expert, who makes the final decision. This use case drives the motivation of this work, in which we propose an algorithm incorporating a deep convolutional neural network (DCNN) for cadaver iris recognition that, in addition to its class-wise prediction, also offers a visualization of the salient regions used by the classifier. Furthermore, we compare attention maps generated by the neural network with attention maps obtained from human subjects with the use of an eye tracker device, to gain insight into how differently a machine and humans perform in this task, which iris regions they deem important, and whether the two methods can complement each other.

With this paper, we try to deliver answers to the following two questions:

Q1. Which iris regions contribute the most to the class-wise prediction made by a DCNN trained for iris recognition?

Q2. How do the DCNN-generated attention maps compare to the maps obtained from an eye tracker device recording a human's eye gaze during an iris recognition task?

To our knowledge, this is the first work analyzing differences between attention to iris features in a DCNN classifier and in human subjects. We also make the trained DCNN classifier, annotating salient iris features, available along with this paper at: http://zbum.ia.pw.edu.pl/EN/node/46.
2. Related work
Sansola [1] used an IriShield M2120U iris recognition camera together with the IriCore matching software in experiments involving 43 subjects who had their irises photographed at different post-mortem time intervals. Depending on the post-mortem interval, the method yielded 19-30% false non-matches and no false matches. Saripalle et al. [15] used ex-vivo eyes of domestic pigs, concluding that irises slowly degrade after being taken out of the body and lose their biometric capabilities 6 to 8 hours after death. Ross [13] drew some conclusions on the development of corneal opacity and the fadeout of the pupillary and limbic boundaries in post-mortem samples. Trokielewicz et al. have shown that the iris can still serve as a biometric identifier for 27 hours after death [22], even with existing iris matchers. Later, they showed that correct matches can still be expected 17 days after a subject's death [21]. A database of 1330 near-infrared and visible-light post-mortem iris images acquired from 17 cadavers was offered to the scientific community. A recent study by Trokielewicz et al. [23], employing images collected up to 34 days post-mortem from 37 cadavers, shows that iris recognition occasionally works even 21 days after a subject's demise.

Bolme et al. [2] pioneered the analysis of how fast faces, fingerprints, and irises lose their biometric capabilities during human decomposition in a natural, outdoor environment and in different weather conditions. The authors showed that the irises degraded quickly regardless of the temperature, typically becoming useless only a few days after placement. A recent paper by Sauerwein et al. [16] followed these experiments, showing that irises stay readable for up to 34 days after death when cadavers are kept in outdoor conditions during winter. Their readability, however, was assessed by human experts, not by specialized iris recognition algorithms.

Some advancements have recently been made in automated post-mortem iris biometrics: an algorithm for cadaver iris image segmentation, said to effectively learn specific post-mortem deformations of the iris texture and successfully exclude them during segmentation, was proposed by Trokielewicz et al. [20], and a method for detecting iris images coming from post-mortem subjects correctly detects almost 99% of cadaver sample presentations [24].
Over the last few years, several deep learning-based approaches to iris recognition have been proposed as an alternative to typical methods employing conventional, hand-crafted iris feature representations, such as those based on the works of John Daugman [4, 5]. Minaee et al. [10] extracted features from the entire eye region using a pre-trained network based on the VGG-Net architecture [18] with no fine-tuning, and an SVM applied as a classifier. Their solution, tested on the CASIA-Iris-1000 database (20,000 iris images from 1,000 subjects) and the IIT Delhi database (2,240 images from 224 subjects), reached 88% and 98% recognition rates, respectively.

Gangwar and Joshi [6] introduced DeepIrisNet, comprising two convolutional architectures built specifically for the purpose of iris recognition: one a typical, pyramid-like structure of stacked convolutional layers, and the other an inception-style network coupled with stacked convolutional layers. These were trained in a closed-set scenario; the softmax layer was then removed and the output from the last dense layer was extracted to provide a 4096-dimensional vector of iris features, compared using Euclidean distance. An equal error rate (EER) of 1.82% was reported.

Liu et al. [8] introduced a DeepIris network designed for iris images coming from two different sensors, with different resolution, quality, etc. The solution depends on pairs of features that are learnt from the data. The experiments involved a CNN architecture comprising several convolutional layers, trained and tested on subject-disjoint subsets of the Q-FIRE and CASIA cross-sensor datasets. An EER of 0.15% is reported.

Zhao and Kumar [25] proposed a fully-convolutional network architecture for iris masking and representation, trained with the use of a triplet loss function with bit-shifting and iris masking. The approach employs binarization of the network output and additional masking of the 'less reliable' bits in the feature map, similarly to the concept of ignoring fragile bits in the iris code [7]. This method gave EERs of 0.73% and 0.99% for the IITD and ND-Iris-0405 datasets, and 2.28% and 3.85% for the more challenging WVU Non-ideal and CASIA.v4-distance datasets.

Nguyen et al. [11] explored off-the-shelf features obtained from selected modern CNN architectures, coupled with a multi-class SVM classifier. The best performing models include DenseNet (best), ResNet, and Inception. Good recognition rates are reported, nearing 99% for the two databases used in the paper, namely LG2200 and CASIA-Iris-Thousand, albeit for a closed-set experiment.
3. Datasets of cadaver iris images
In this work we take advantage of two publicly available, subject-disjoint datasets of iris images collected from cadaver eyes: Warsaw-BioBase-PostMortem-Iris v1 and v2*, which combined contain 1200 near-infrared (NIR) images and 1787 visible-light images obtained from 37 subjects. These samples were collected in mortuary conditions over a period reaching up to 34 days post-mortem, in multiple sessions across different time horizons after death. Each cadaver eye was imaged multiple times in several (from 2 to 13) acquisition sessions. A total of 72 eyes are represented in the data, since two subjects had only one of their eyes photographed. In addition, data for one of the classes had to be removed from the analysis, as it was represented by only a single NIR sample. Thus, the final database used in this study consists of 1199 NIR samples and 1780 visible-light samples, representing 71 distinct eyes. Since left and right irises are different, we assume that each eye represents a separate identity, or class. For the purpose of both training the machine classifier and the experiments involving human subjects, the images were manually segmented with circular approximations of the iris boundaries and cropped to a square.

* http://zbum.ia.pw.edu.pl/EN/node/46
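As an aside, the cropping step is straightforward once a circular approximation of the outer iris boundary is available. The sketch below is a minimal illustration; the function name and the margin parameter are our own and are not part of the datasets' annotation format:

```python
import numpy as np

def crop_iris_square(image: np.ndarray, cx: int, cy: int, r: int,
                     margin: float = 1.2) -> np.ndarray:
    """Crop a square region around a circular iris approximation.

    (cx, cy) is the annotated iris center and r its radius; a margin
    greater than 1 keeps some context around the limbus.
    """
    half = int(r * margin)
    h, w = image.shape[:2]
    # Clamp the crop window to the image bounds.
    x0, x1 = max(cx - half, 0), min(cx + half, w)
    y0, y1 = max(cy - half, 0), min(cy + half, h)
    return image[y0:y1, x0:x1]
```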
4. DCNN-based iris classifier
For the purpose of constructing our classifier, we take advantage of the VGG-16 model [18], pre-trained on the ImageNet database of natural images. The number of network outputs was adapted to the number of individual eyes, and fine-tuning was performed with the Warsaw-BioBase-PostMortem-Iris v1 and v2 datasets comprising images of 71 eyes. Such an approach has been found by many researchers to be the best way to adapt a CNN to a new domain, with high chances of obtaining a model with sufficient generalization capabilities. Ten independent train/test data splits were created by randomly assigning 80% of the data in each class to the training subset and the remaining 20% to the testing subset. The training took 30 epochs in each train/test split and involved stochastic gradient descent with momentum. Experiments were repeated three times with different types of iris image data: near-infrared images (NIR), red-channel images extracted from high-resolution RGB images (R), and a combined dataset of both types of data (mixed).

During testing, softmax outputs were used to plot the receiver operating characteristic (ROC) curves for our DCNN-based classifier for the three types of training data. These, together with the classification accuracies (proportions of samples correctly classified by the network out of all test samples in a given train/test split) and equal error rates, are shown in Fig. 1. The model trained with R images performs best, with an EER as low as 1.74% and an average classification accuracy of 90.7%, which can be attributed to the better quality offered by these images, at least in early acquisition sessions. Also, most of the subjects in the experimental database (29 out of 37) had lightly-colored eyes (i.e., gray, blue, or light-green), which are known to offer better visibility of the iris texture under visible-light illumination. This prevalence of light-colored eyes can lead to an overestimation of the classification accuracy when compared to a more diverse population. The model trained with NIR data performs worse (EER = 5.73% and accuracy of 73.1%), while the model employing both kinds of data is only slightly worse than the R model, offering an EER of 2.5% and an accuracy of 84.2%. These results allow us to conclude that the DCNN model offers a decent post-mortem iris recognition tool that will be used in the core component of this research presented in the next Section. Note that the observed results, on average worse than usually observed in iris recognition, correspond to a challenging biometric task: post-mortem iris recognition.
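A minimal sketch of this fine-tuning setup in the Keras/TensorFlow environment (the one used for the attention-map experiments in Sec. 5) is given below. Only the VGG-16 backbone, the 71-way output, the 30 epochs, and SGD with momentum are fixed by the text above; the dense-head layout, learning rate, momentum, and batch size shown here are illustrative assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

NUM_EYES = 71  # one class per distinct eye in the Warsaw datasets

# ImageNet-pretrained VGG-16 backbone; the original 1000-way head is
# replaced with a 71-way softmax matching the number of eyes.
backbone = tf.keras.applications.VGG16(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
x = layers.Flatten()(backbone.output)
x = layers.Dense(4096, activation="relu")(x)  # head layout is assumed
outputs = layers.Dense(NUM_EYES, activation="softmax")(x)
model = models.Model(backbone.input, outputs)

# SGD with momentum, as in the paper; the exact momentum, learning
# rate, and mini-batch size here are placeholder values.
model.compile(
    optimizer=optimizers.SGD(learning_rate=1e-3, momentum=0.9),
    loss="categorical_crossentropy",   # expects one-hot eye labels
    metrics=["accuracy"])

# model.fit(train_images, train_labels, epochs=30, batch_size=16)
```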
5. Humans vs machines
In this Section, we employ two methods, namely the Grad-CAM algorithm described in Sec. 5.1 and the eye tracking technique, to obtain attention maps highlighting regions of the iris image considered important when making the decision by the machine and by humans.
Basic DCNN designs do not provide a human-interpretable explanation for their decisions, which makes them unsuitable for assisting human experts in a courtroom scenario, because a softmax output cannot be expected to convince a jury of a person's innocence or guilt.
Figure 1: Performance of our DCNN-based classifier in terms of classification accuracy (left) and Receiver Operating Characteristic (ROC) curves with Equal Error Rates (right), when trained on near-infrared (NIR), red-channel (R), and NIR+R (mixed) images.

For the identification of discriminative image regions, decisive for the model prediction, class activation mapping (CAM) techniques have been proposed, first introduced by Zhou et al. [26]. The authors achieve this by removing the fully-connected layers and replacing them with global average pooling followed by a softmax layer. As a result, image regions that are important for discrimination are highlighted with a heatmap. Selvaraju et al. improved Zhou's method with Grad-CAM [17], which does not require any changes to the network architecture and yields coarse heatmaps highlighting the regions that contribute the most to the model's prediction.

By using these methods with our DCNN cadaver iris classifier, we are able to provide the human expert with more knowledge on why a probe iris is assigned to a given class, in addition to stating which class it most likely belongs to. To obtain machine-based attention maps, we take advantage of the method introduced in [17], training the classification network in the same manner as described in Section 4. Adapted code from [12] is used for the implementation in the Keras/TensorFlow environment [3, 9]. A modified training procedure is employed here: the subset of data that was used in the gaze-tracking part of the experiments constitutes the testing subset, and the remaining data is assigned to the training subset. The training samples are segmented manually.
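For illustration, a minimal Grad-CAM sketch in Keras/TensorFlow is shown below. This is our own condensed version, not the adapted code from [12]; it assumes the functional VGG-16 model from Sec. 4, with 'block5_conv3' (the stock VGG-16 name) as the last convolutional layer:

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, conv_layer_name="block5_conv3",
             class_index=None):
    """Coarse Grad-CAM heatmap [17] for one (H, W, 3) image, in [0, 1]."""
    grad_model = tf.keras.models.Model(
        model.inputs,
        [model.get_layer(conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        if class_index is None:
            class_index = tf.argmax(preds[0])  # winning class
        class_score = preds[:, class_index]
    # Gradient of the class score w.r.t. the conv feature maps,
    # averaged spatially to yield one weight per channel.
    grads = tape.gradient(class_score, conv_out)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))
    # Channel-weighted sum of feature maps, ReLU, normalize to [0, 1].
    cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
    cam = cam / (tf.reduce_max(cam) + 1e-8)
    return cam.numpy()  # upsample to the input size for overlay
```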
We set up an experiment employing an eye tracker device to collect attention maps from human subjects performing an iris recognition task. Eye tracking enables following a person's gaze as he or she is looking around a screen, and calculating the numerical coordinates of the gaze with respect to the screen coordinate system, thus enabling a fairly precise analysis of what the user is looking at in any given moment. This is often used in psychological studies, marketing research, and software usability studies, but the applications extend far beyond that, from OS navigation and gaming controls to enabling computer use for severely disabled people. For the purpose of this study we selected the EyeTribe device [19]. After a calibration procedure, the device outputs gaze coordinates in the form of (x, y) points as a function of time, which can then be processed into an attention map. These coordinates represent two types of gaze: fixations and saccades. Fixations occur when the person is looking at something, focusing the gaze on it. The opposite of fixations are saccades, consisting of larger eye movements, when the gaze is being moved between fixation points. As it has been shown that little to no visual processing occurs during saccades [14], we focus on the fixation periods when analyzing the gaze data, assuming that these periods contain the most useful information. Cluster analysis was then applied to the raw data to find salient image regions by grouping together fixation points arranged similarly on the iris texture. These provided us with the regions used by human subjects during their comparison efforts, as depicted in Fig. 2 (a clustering sketch is given at the end of this subsection).

Figure 2: Example attention maps for the same iris image pair coming from cadaver eyes, recorded during the experiment. Green and red dots represent the raw gaze fixation points (within and outside of the iris, respectively), whereas the yellow circles denote the averaged fixation regions generated by clustering the raw data and drawing an arbitrarily sized circle around the cluster center.

During this experiment, 28 subjects were asked to classify selected post-mortem iris image pairs as either genuine (same eye) or impostor (different eyes). Each subject could take as much time as they deemed necessary to come up with their decision. The image pairs were randomly selected from the Warsaw-BioBase-PostMortem-Iris v1 and v2 datasets, as shown in Fig. 2. Since the Grad-CAM technique gives us the activation maps for the winning class, and our intent is to demonstrate and compare the correct and incorrect behaviors of the network, we evaluate the human-based attention maps from those pairs that were genuine as ground truth, but which were classified by humans as either genuine (correct) or impostor (incorrect).

Fig. 3 presents the decision-making accuracy achieved by the network, by pairs of human examiners (not necessarily the same), and by the ensemble of the machine solution and the two humans, with respect to the post-mortem interval (PMI). This is aggregated for the five cadaver eyes used in the attention map analysis described in the subsequent Section. Notably, there is no clear trend visible, i.e., a longer PMI does not clearly contribute to lower decision-making accuracy. Also, applying the OR rule to the machine and human decisions rectified most of the recognition errors, which may suggest that the machine classifier could serve as an aid to the human expert.
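The fixation-clustering step mentioned above can be sketched as follows. The paper does not name the clustering algorithm, so DBSCAN is used here as a stand-in (it conveniently discards isolated, saccade-like points as noise), and the eps and min_samples values are illustrative:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def fixation_regions(gaze_xy: np.ndarray, eps: float = 25.0,
                     min_samples: int = 5):
    """Group raw fixation points (N, 2), in screen pixels, into regions.

    eps is the neighborhood radius in pixels and min_samples the
    smallest group treated as a fixation region; DBSCAN labels sparse
    points as noise (-1), which we drop.
    """
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(gaze_xy)
    regions = []
    for k in set(labels) - {-1}:
        pts = gaze_xy[labels == k]
        center = pts.mean(axis=0)
        radius = np.linalg.norm(pts - center, axis=1).max()
        # A circle around the cluster center, as in Fig. 2; the paper
        # notes the drawn circle size is arbitrary.
        regions.append((center, radius))
    return regions
```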
In this Section, we present selected human-based attention maps and compare them with class activation maps generated by the machine solution, similarly to those described in Sec. 5.1, in four situations, namely:

• when the DCNN misclassified a sample, but the human subject provided a correct decision, Fig. 6,
• when both the DCNN and the human subject provided a correct decision (same eye or different eyes on the presented pictures), Fig. 5,
• when the human subject made a mistake, but the DCNN was correct, Fig. 4,
• when both the human subject and the DCNN made a mistake, Fig. 7.

Figure 3: Decision accuracy achieved by the DCNN, by pairs of humans, and by the ensemble of the DCNN and the two humans, with respect to the post-mortem interval (PMI).

For each of the above, we consider two sub-cases: 1) when machine- and human-based attention maps are similar, and 2) when machine- and human-based attention maps point to different iris regions. By inspecting these 8 cases in total, represented by 24 samples, we investigate the differences and similarities between the human's and the DCNN's attention to iris features, and see whether the attention maps correspond to each other when the decision was correct and when it was not. For the DCNN, a correct answer means giving the correct class-wise prediction. For experiments with human subjects, it means giving the correct genuine/impostor prediction.

Figure 4 shows cases in which the human subject gave the correct decision and the DCNN solution failed, despite attending a similar iris region as the human subject did (left pair). On the right, the DCNN also failed, but this time different attention maps are presented. Notably, both the machine and the human attended multiple iris regions, yet only the human subject was able to give a correct answer.

Figure 4: Human (gaze-tracking) and DCNN-based (CAM) attention maps when the human subject provided a correct decision, but the DCNN was wrong. Samples with similar maps are on the left, samples with dissimilar maps are on the right.

Cases for which the DCNN model gave correct answers are illustrated in Fig. 6. On the top left, both the DCNN and the human subject attend a large, circular region in the middle part of the iris; however, only the DCNN comes up with the correct solution. On the top right, the DCNN attends only a small portion of the iris and still provides a correct answer, compared to the human subject, who fails despite attending more iris regions.

Figure 6: Cases when the DCNN was correct, whereas the human subject was wrong.

Fig. 5 shows samples for which both the DCNN and the human subject were able to give correct decisions, supported by similar, and by different, attention maps.

Figure 5: Cases when both the DCNN and the human subject provided a correct decision.

Finally, in Fig. 7 we show two samples for which both methods yielded incorrect results, supported by rather sparse (human subject on the left), but also by dense, attention maps (DCNN on the left, human subject on the right).

Figure 7: Cases when both the DCNN and the human subject provided an incorrect decision.

In addition to a qualitative (visual) assessment of the correspondence of the DCNN- and eye tracking-based salient regions, we provide a quantitative measure of how well these regions overlap, as a geometric average $q$ of probability estimates $p_c$ for class activation maps and $p_e$ for eye tracking-based maps:

$$q = \sum_{i=1}^{N} \sum_{j=1}^{M} \sqrt{p_c(i,j)\, p_e(i,j)} \quad (1)$$

where $\sum_{i,j} p_c(i,j) = 1$, $\sum_{i,j} p_e(i,j) = 1$, and $N, M$ determine the iris image size. Maps $p_c$ and $p_e$ may be interpreted as the probability of how important a given image region was for the network and for the human, respectively. While $p_c$ is a rescaled class activation map (to make the sum of its elements equal to 1), $p_e$ comes from fixation points convolved with an appropriate Gaussian function to account for the eye tracker uncertainty on the HD screen used in this work. Values of $q$ close to zero denote low overlap between human-driven and DCNN salient regions; values of $q$ close to one denote a very high level of agreement between the DCNN model and the human. These numerical assessments of saliency region similarity are given for each image pair in Figs. 4 through 7. Fig. 8 shows $p_c$, $p_e$, and the square root of their product for an example iris image.

Figure 8: Estimation of salient regions obtained from eye tracking ($p_e$, left) and from CAM ($p_c$, middle) for the same image. The square root of the product of $p_c$ and $p_e$ illustrates the spatial agreement of salient regions between human and DCNN (right).

The results described in this Section can be summarized in a few observations. First, although we were able to find samples for which the DCNN-based and human-based attention maps were strikingly similar, we were also able to find those that were region-disjoint. This suggests that DCNN-based visualization of salient iris regions may be complementary to what humans perceive as important in their judgements. Second, both the machine-based and human-based attention maps seem to omit the outer regions of the iris near the iris-sclera boundary, suggesting that the discriminatory capacity of these areas for post-mortem iris samples may be limited.
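Eq. (1) and the construction of $p_e$ translate directly into a few lines of code. In the sketch below, the Gaussian sigma standing in for the eye tracker uncertainty is an assumed value, since the exact figure is not reproduced here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaze_probability_map(points, shape, sigma=30.0):
    """Fixation points -> p_e: unit impulses blurred by a Gaussian
    whose sigma (pixels, assumed here) models tracker uncertainty."""
    m = np.zeros(shape)
    for x, y in points:
        # Clamp points to the map bounds before accumulating.
        iy = min(max(int(y), 0), shape[0] - 1)
        ix = min(max(int(x), 0), shape[1] - 1)
        m[iy, ix] += 1.0
    m = gaussian_filter(m, sigma)
    return m / m.sum()  # normalize so the map sums to 1

def overlap_q(p_c: np.ndarray, p_e: np.ndarray) -> float:
    """Eq. (1): geometric-average overlap of two normalized maps;
    0 means disjoint salient regions, 1 means identical maps."""
    return float(np.sqrt(p_c * p_e).sum())
```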
6. Discussion and Conclusions
This study shows that, despite the inherent difficulty of post-mortem iris image data, a DCNN-based classifier, fine-tuned to work with cadaver iris images, is able to efficiently learn discriminatory iris features and, when equipped with a class activation map generation technique, can back its decisions in a human-intelligible way. The second deliverable of this study is the comparison between computer- and human-generated attention maps, the latter obtained with a gaze-tracking device. These experiments are important in the sense that we are not aware of any other papers studying human-based attention maps obtained during a gaze-tracking study with humans asked to classify iris images as genuine or impostor, compared to what machines are doing.
One conclusion of this study is that the appearance, similarity, or density of human-driven and machine-driven maps seem not to correspond in any clear way to the decisions made by either humans or machines. As for the similarities observed between humans and the neural network, both 'examiners' tend to focus on a limited number of iris areas (often just one), which is the opposite of typical iris code-based methods (such as Daugman's), which analyze the entire non-occluded portion of the iris annulus (sometimes additionally limited to 'non-fragile' iris code bits). This may suggest that an effective way of post-mortem iris recognition may be based on sparse coding (such as minutiae-based coding in fingerprints, or keypoint-based object recognition) rather than on dense, iris code-based algorithms. The second conclusion is that both humans and the DCNN focused more on the inner/middle part of the iris, which suggests that the outer parts (close to the sclera) may be less effective in post-mortem iris recognition.
The third conclusion from this work is that the salient regions proposed by the DCNN and those identified from human eye gaze do not overlap in general; hence, the computer-aided visual cues may potentially constitute a valuable addition to the forensic examiner's expertise, as they can highlight important discriminatory regions that the human expert might otherwise miss.
The fourth conclusion from this study is that human subjects can provide an incorrect decision even after spending considerable time observing many iris regions. Thus, we may hazard a guess that iris features 'extracted' by non-expert human subjects do not always allow for post-mortem iris recognition, and additional training, similar to that of forensic experts dealing with fingerprints, may be necessary to become effective in recognizing post-mortem iris images.
References

[1] A. Sansola. Postmortem iris recognition and its application in human identification. MSc Thesis, Boston University, 2015.
[2] D. S. Bolme et al. Impact of environmental factors on biometric matching during human decomposition. IEEE 8th International Conference on Biometrics Theory, Applications and Systems (BTAS 2016), September 6-9, 2016, Buffalo, USA, 2016.
[3] F. Chollet. Keras: Deep Learning library for Theano and TensorFlow. 2015.
[4] J. Daugman. High confidence visual recognition of persons by a test of statistical independence. IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(11):1148–1161, Nov 1993.
[5] J. Daugman. New methods in iris recognition. IEEE Transactions on Systems, Man, and Cybernetics – Part B: Cybernetics, 37(5):1167–1175, 2007.
[6] A. Gangwar and A. Joshi. DeepIrisNet: Deep iris representation with applications in iris recognition and cross-sensor iris recognition. IEEE International Conference on Image Processing (ICIP 2016), 2016.
[7] K. P. Hollingsworth, K. W. Bowyer, and P. J. Flynn. Using fragile bit coincidence to improve iris recognition. Proceedings of the 3rd IEEE International Conference on Biometrics: Theory, Applications and Systems, 2009.
[8] N. Liu, M. Zhang, H. Li, Z. Sun, and T. Tan. DeepIris: Learning pairwise filter bank for heterogeneous iris verification. Pattern Recognition Letters, 82:154–161, 2016.
[9] M. Abadi et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems, 2015. tensorflow.org.
[10] S. Minaee, A. Abdolrashidiy, and Y. Wang. An Experimental Study of Deep Convolutional Features For Iris Recognition. IEEE Signal Processing in Medicine and Biology Symposium (SPMB 2016), 2016.
[11] K. Nguyen, C. Fookes, A. Ross, and S. Sridharan. Iris Recognition With Off-the-Shelf CNN Features: A Deep Learning Perspective. IEEE Access, 6:848–855, 2018.
[12] V. Petsiuk. Keras implementation of GradCAM. Accessed: April 4, 2018.
[13] A. Ross. Iris as a Forensic Modality: The Path Forward.
[14] D. Salvucci and J. Goldberg. Identifying fixations and saccades in eye-tracking protocols. In Eye Tracking Research and Applications Symposium, pages 71–78, 2000.
[15] S. K. Saripalle, A. McLaughlin, R. Krishna, A. Ross, and R. Derakhshani. Post-mortem Iris Biometric Analysis in Sus scrofa domesticus. IEEE 7th International Conference on Biometrics Theory, Applications and Systems (BTAS 2015), September 8-11, 2015, Arlington, USA, 2015.
[16] K. Sauerwein, T. B. Saul, D. W. Steadman, and C. B. Boehnen. The effect of decomposition on the efficacy of biometrics for positive identification. Journal of Forensic Sciences, 62(6):1599–1602, 2017.
[17] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. IEEE International Conference on Computer Vision (ICCV), 2017.
[18] K. Simonyan and A. Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition. International Conference on Learning Representations, 2014.
[19] TheEyeTribe. The EyeTribe Documentation and API Reference. https://github.com/EyeTribe/documentation
[20] M. Trokielewicz, A. Czajka, and P. Maciejewicz. Data-driven segmentation of post-mortem iris images. Intl Workshop on Biometrics and Forensics (IWBF 2018), June 7-8, 2018, Sassari, Italy.
[21] M. Trokielewicz, A. Czajka, and P. Maciejewicz. Human Iris Recognition in Post-mortem Subjects: Study and Database. IEEE 8th International Conference on Biometrics Theory, Applications and Systems (BTAS 2016), 2016.
[22] M. Trokielewicz, A. Czajka, and P. Maciejewicz. Post-mortem Human Iris Recognition. IEEE International Conference on Biometrics (ICB 2016), 2016.
[23] M. Trokielewicz, A. Czajka, and P. Maciejewicz. Iris recognition after death. IEEE Transactions on Information Forensics and Security, 14(6):1501–1514, 2018.
[24] M. Trokielewicz, A. Czajka, and P. Maciejewicz. Presentation Attack Detection for Cadaver Iris. IEEE 9th International Conference on Biometrics Theory, Applications and Systems (BTAS 2018), 2018.
[25] Z. Zhao and A. Kumar. Towards More Accurate Iris Recognition Using Deeply Learned Spatially Corresponding Features. IEEE International Conference on Computer Vision (ICCV 2017), 2017.
[26] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba. Learning Deep Features for Discriminative Localization. IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016), 2016.