Cross-spectral Iris Recognition for Mobile Applications using High-quality Color Images
Mateusz Trokielewicz and Ewelina Bartuzi
Biometrics Laboratory, Research and Academic Computer Network, Warsaw, Poland
Institute of Control and Computation Engineering, Warsaw University of Technology, Warsaw, Poland
Abstract – With the recent shift towards mobile computing, new challenges for biometric authentication appear on the horizon. This paper provides a comprehensive study of cross-spectral iris recognition in a scenario in which high quality color images obtained with a mobile phone are used against enrollment images collected in typical, near-infrared setups. Grayscale conversion of the color images that employs selective RGB channel choice depending on the iris coloration is shown to improve the recognition accuracy for some combinations of eye colors and matching software, when compared to using the red channel only, with equal error rates driven down to as low as 2%. The authors are not aware of any other paper focusing on cross-spectral iris recognition in a scenario with near-infrared enrollment using a professional iris recognition setup followed by mobile-based verification employing color images.
Keywords – biometrics, cross-spectral, iris recognition, mobile technologies, smartphones.
1. Introduction
In recent decades, biometric authentication and identification of humans has received considerable interest as a fast, safe, and convenient way of replacing password-, token-, or key-based security measures. One of the most accurate biometric methods is iris recognition, whose concept was first conceived by ophthalmologists Safir and Flom [1] and later patented and implemented by Daugman [2], [3]. The iris, a part of the uvea, is located at the back of the anterior chamber of the eye, protected from the outside environment by the eyelashes, tear film, cornea, and aqueous humor. Its usefulness as a biometric identifier is attributed to the intricate patterns of the trabecular meshwork found in the front part of the organ. These patterns are developed in the embryonic stage and have low genotype dependence, thus providing enough unique features for high confidence classification. Iris texture is also believed to be exceptionally stable in time and virtually impossible to alter without inflicting extensive damage to the eye.
Iris recognition biometric systems usually take advantage of images collected under near-infrared (NIR) illumination. This is due to certain light absorption properties of melanin, the pigment to which the iris owes its appearance. While the absorption is significant for light from the visible spectrum, it is almost negligible for longer wavelengths. Higher reflectance enables good visibility of iris texture details even for highly pigmented (dark) irises. For this reason most commercial iris cameras collect images under illumination from the 700–900 nm wavelength range. However, with the recent shift in consumer computing towards mobile devices, visible light and cross-spectral iris recognition has received considerable attention.

This study aims at analyzing the cross-spectral performance of state-of-the-art iris recognition methods when applied to visible spectrum and near-infrared images. To the best of our knowledge, this is the first analysis of its kind incorporating high quality, flash-illuminated visible light iris images obtained using a mobile phone, which are then matched against NIR-illuminated enrollment samples. If good performance of the recognition methods can be achieved, it could pave the way for low-effort, real world applications, such as user authentication on a mobile device, which would serve as a remote verification terminal complementing a typical enrollment setup employing a NIR camera. We envisage a scenario in which the enrollment stage is performed using a professional NIR setup for purposes such as government-issued IDs, travel documents, etc. Then, mobile authentication could be performed using a phone or a tablet whenever the user deems it necessary.
As of August 2016, there are only three iris recognition enabled mobile phones with the capability to obtain iris images in near infrared: Fujitsu NX F-04G ([4], available only in Japan), Microsoft Lumia 950 and 950XL ([5], employing Windows Hello iris-based authentication, currently in beta), and Samsung Galaxy Note 7 [6]. Due to this limited availability of NIR iris scanning equipment in mobile devices, a possibility of cross-spectral iris matching using color images is explored. Therefore, this paper offers three main contributions:
• evaluation of cross-spectral iris comparisons in a mobile recognition scenario,
• analysis of how the eye color influences cross-spectral iris matching,
• selection of the most efficient RGB channel depending on the eye color in order to optimize cross-spectral iris matching.
This paper is organized as follows: Section 2 presents an overview of the current state-of-the-art research related to this topic. Section 3 offers a brief explanation of how the iris color in the human eye is determined. The multispectral database of iris images, data subset creation with respect to the eye color, and image preprocessing using selective RGB channel grayscale transformation are described in Section 4. Experimental methodology and software tools are characterized in Section 5, while Section 6 presents an overview of results. Finally, relevant conclusions are drawn in Section 7.
2. Related work
Probably the first systematic study devoted to multispectral iris biometrics was that of Boyce et al. [7], in which the authors studied the reflectance response of the iris tissue depending on the spectral channel employed: red, green, blue, and infrared. Results of matching performance evaluation across channels and wavelengths are reported, with the conclusion that the decrease in matching accuracy is smallest for spectral channels that are closest to each other wavelength-wise. A technique for improving the recognition accuracy by employing histogram equalization in the CIE L*a*b color space is shown. The authors also present insight on how iris recognition could benefit from multispectral fusion in both the segmentation and matching domains. Park et al. [8] explored fusing multispectral iris information as a countermeasure against spoofing. Iris features extracted from images acquired almost simultaneously, both in a low wavelength band and in a high wavelength band, are fused together in an attempt to create a method that would differentiate between real and counterfeit samples without compromising the recognition accuracy. Spectral variations found in real images obtained under different illumination conditions are said to offer enough variability to achieve this. Ross et al. [9] were the first to investigate multispectral iris recognition using wavelengths longer than 900 nm, proving that images obtained in the range of 900–1400 nm can offer iris texture visibility good enough for biometric applications. At the same time, the authors argue that the iris gives different responses at different wavelengths, which could prove useful for improving segmentation algorithms. Intra-spectral (i.e., between images obtained in the same wavelength) and inter-spectral (i.e., between images obtained in two different wavelengths) genuine and impostor comparisons were generated.
This revealed that despite inter-spectral genuine comparison distributions being shifted towards the relevant impostor distributions, nearly perfect separation between genuine and impostor distributions can be achieved with the use of multispectral fusion at the score level. Burge et al. [10] studied the iris texture appearance depending on the eye color combined with the light wavelength employed for imaging. A method of approximating a NIR image from a visible light image is presented, together with multispectral iris fusion designed to create a high confidence image that would improve cross-spectral matching accuracy. Zuo et al. [11] attempted to predict NIR images from color images using predictive image mapping, to compare them against the enrolled typical NIR image. This method is said to outperform matching between the NIR channel and the red channel by roughly 10%.

Recently, advancements have also been made in the field of visible light iris recognition applications in more practical scenarios, including mobile devices such as smartphones and tablets. Several databases have been released, including the UPOL database of iris images obtained using an ophthalmology device [12], the UBIRISv1 database, representing images obtained using a Nikon E5700 camera, and the UBIRISv2 database, which gathers images captured on-the-move and at-a-distance [13], [14]. These databases represent images obtained in very unconstrained conditions, and therefore usually of low quality.
Recently, we have published the first database of high quality smartphone iris images available to researchers: the Warsaw-BioBase-Smartphone-Iris dataset [15], which comprises images obtained with an iPhone 5s phone with embedded flash illumination (available online at [16]). Challenges related to visible light iris recognition were extensively studied by Proença et al., including the amount of information that can be extracted from such images [17], possible improvements to the segmentation stage [18], and methods for image quality assessment to discard samples of exceptionally poor quality [19]. Santos et al. explored possible visible light illumination setups in search of an optimal solution for unconstrained iris acquisition [20]. Segmentation of noisy, low quality iris images was also studied by Radu et al. [21] and Frucci et al. [22]. Raja et al. explored visible spectrum iris recognition using a light field camera [23] and white LED illumination [24]. They also investigated the possibility of deploying iris recognition on mobile devices using deep sparse filtering [25] and K-means clustering [26], reporting promising results such as an EER as low as 0.31% when visible spectrum, smartphone-obtained images are used for recognition. The feasibility of face and iris biometrics implementations in mobile devices was also studied by De Marsico et al. [27]. Our own experiments devoted to this field of research have shown that intra-wavelength visible spectrum iris recognition is possible when high-quality, color iris images obtained using a modern smartphone are used with the existing state-of-the-art methods, which are typically designed for NIR images [15], [28].
3. Anatomical Background of Iris Color
The iris consists of two major layers: the outer stroma, a meshwork of interlacing blood vessels, collagen fibers, and sometimes melanin particles, and the inner epithelium, which connects to the muscles that control the pupil aperture. The epithelium itself contains dark brown pigments regardless of the observed eye color. The eye color perceived by a human observer depends mainly on the amount of melanin that can be found in the stroma: the more melanin in the stroma, the darker the iris appears, due to absorption of incoming light by melanin. The blue hue of the iris, however, is attributed to light being scattered by the stroma, with more scattering occurring at higher frequencies, hence the color blue. This phenomenon, called the Tyndall effect, is similar to Rayleigh scattering and occurs in colloidal solutions, where the scattering particles are smaller than the scattered light wavelengths. Green eye color, on the other hand, is a result of combining these two phenomena (melanin light absorption and Tyndall scattering) [29].
4. Multispectral Database of Iris Images
For the purpose of this study a multispectral database of iris images has been collected, comprising NIR-illuminated images of standard quality (as recommended by the ISO/IEC standard regarding biometric sample quality [30]) and high quality images obtained in visible light. 36 people presenting 72 different irises participated in the experiment. IrisGuard AD100, a two-eye NIR iris recognition camera, has been employed to capture six NIR-illuminated bitmap images per subject. Color images were acquired using the rear camera of an Apple iPhone 5s (8-megapixel, JPG-compressed), with flash enabled. Data acquisition with the phone produced at least three images for each eye. In total, 432 near-infrared images acquired by a professional iris recognition camera and 272 color photos taken with a mobile phone were collected.

Images were then divided into three separable groups with respect to the eye color: the blue eyes subset, the green eyes subset, and the brown/hazel eyes subset, comprising 32 blue eyes, 18 green eyes, and 22 brown, hazel or mixed-color eyes, respectively. Commercially available iris recognition software is typically built to cooperate with data compatible with the ISO/IEC standard specification. Color images were thus cropped to VGA resolution (640 × 480 pixels) and then converted to grayscale using selective RGB channel decomposition. The red, green, and blue channels of the RGB color space were extracted separately for each of the three eye color groups. Figure 1 presents sample images obtained using both cameras employed in this study, together with images extracted from each of the three RGB channels.

Fig. 1. Sample images obtained using both cameras employed in this study, together with grayscale images extracted from each of the three RGB channels.
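The selective RGB channel grayscale transformation described above can be sketched as follows. This is an illustrative reconstruction only; the function name and array layout are our own assumptions, not part of the software used in the study.

```python
import numpy as np

# Index of each channel in an (H, W, 3) RGB array.
CHANNEL_INDEX = {"red": 0, "green": 1, "blue": 2}

def to_grayscale_by_channel(rgb_image: np.ndarray, channel: str) -> np.ndarray:
    """Convert a color iris image to grayscale by keeping a single RGB channel.

    rgb_image is expected as an (H, W, 3) uint8 array in RGB channel order.
    Returns an (H, W) grayscale image built from the requested channel.
    """
    if channel not in CHANNEL_INDEX:
        raise ValueError(f"unknown channel: {channel}")
    return rgb_image[:, :, CHANNEL_INDEX[channel]].copy()
```

In the experiments, all three channels are extracted for every eye color subset, so a routine like this would simply be called once per channel before matching.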
5. Experimental Methodology
For the most comprehensive analysis, three commercial, state-of-the-art iris recognition methods and one algorithm of academic origin have been employed. This section briefly characterizes each of these solutions.

Monro Iris Recognition Library (MIRLIN) is a commercially available product, offered on the market by FotoNation (formerly SmartSensors) [31] as an SDK (Software Development Kit). Its methodology incorporates calculating a binary iris code based on the output of a discrete cosine transform (DCT) applied to overlapping iris image patches [32]. The resulting binary iris templates are compared using the XOR operation, and comparison scores are generated in the form of fractional Hamming distance, i.e., the proportion of disagreeing bits in the two iris codes. With this metric in place, we should expect values close to zero for genuine (i.e., same-eye) comparisons, and values around 0.5 for impostor (i.e., different-eye) comparisons. The latter is due to the fact that comparing bits in iris codes of two different irises can be depicted as comparing two sequences of independent Bernoulli trials (such as symmetric coin tosses).
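The fractional Hamming distance used by such matchers can be expressed in a few lines. The sketch below is a generic reconstruction of the metric, not MIRLIN's implementation, and omits the occlusion masks that production matchers apply.

```python
import numpy as np

def fractional_hamming_distance(code_a: np.ndarray, code_b: np.ndarray) -> float:
    """Fraction of disagreeing bits between two binary iris codes.

    Genuine (same-eye) pairs should score near 0; impostor pairs near 0.5,
    since unrelated bits agree or disagree like independent coin tosses.
    """
    if code_a.shape != code_b.shape:
        raise ValueError("iris codes must have the same shape")
    return float(np.count_nonzero(code_a ^ code_b) / code_a.size)
```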
IriCore employs a proprietary and unpublished recognition methodology. Similarly to MIRLIN, it is offered on the market in the form of an SDK by IriTech [33]. With this matcher, values between 0 and 1.1 should be expected for same-eye comparisons, while different-eye comparisons should yield scores between 1.1 and 2.0.
The third method involved in this study, VeriEye, is available commercially from Neurotechnology [34] and, similarly to the IriCore method, the precise mechanisms of its recognition methodology are not disclosed in any scientific papers. The manufacturer, however, claims to employ active shape modeling for iris localization using non-circular approximations of the pupillary and limbic iris boundaries. VeriEye, contrary to the two previous methods, returns comparison scores in the form of a similarity metric: the higher the score, the better the match. A perfect non-match should return a score equal to zero.

The last method employed for the purpose of this study is Open Source for IRIS (OSIRIS), developed within the BioSecure project [35] and offered by its authors as a free, open-source solution. OSIRIS follows the well-known concept originating in the works of Daugman, incorporating image segmentation and normalization by unwrapping the iris region into a rectangular image using Daugman's rubber sheet model. Encoding of the iris is carried out using phase quantization of multiple Gabor wavelet filtering outcomes, while matching is performed using the XOR operation, with normalized Hamming distance as the output dissimilarity metric. As in the MIRLIN method, values close to zero are expected for genuine comparisons, while impostor comparisons should typically produce results around 0.5; however, due to shifting the iris code in search of the best match as a countermeasure against eye rotation, impostor score distributions will more likely be centered around 0.4 to 0.45.
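The rotation compensation mentioned above, taking the minimum Hamming distance over circular shifts of one iris code, can be sketched as follows. The function name and the shift range are illustrative assumptions, not OSIRIS internals.

```python
import numpy as np

def shifted_hamming_distance(code_a: np.ndarray,
                             code_b: np.ndarray,
                             max_shift: int = 8) -> float:
    """Minimum fractional Hamming distance over circular bit shifts of code_b.

    Shifting compensates for in-plane eye rotation between captures. Taking
    the minimum over several shifts biases impostor scores slightly below 0.5,
    which is why impostor distributions tend to center around 0.4-0.45.
    """
    best = 1.0
    for s in range(-max_shift, max_shift + 1):
        hd = np.count_nonzero(code_a ^ np.roll(code_b, s)) / code_a.size
        best = min(best, hd)
    return best
```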
As this study aims at quantifying cross-spectral iris recognition accuracy in a scenario that would mimic potential real-world applications, where mobile-based verification would complement a typical enrollment using professional iris recognition hardware operating in NIR, the following experiments are performed. NIR images obtained using the IrisGuard AD100 camera are used as gallery (enrollment) samples. Visible light images obtained with the iPhone 5s are used as probe (verification) samples. All possible genuine and impostor comparisons are generated for all three subsets of eyes and all three RGB channels, for each of the four iris recognition methods employed. Thus, 36 Receiver Operating Characteristic (ROC) curves can be constructed (4 methods × 3 RGB channels × 3 eye color subsets).
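Given the genuine and impostor score sets produced by each method/channel/subset combination, the equal error rate can be estimated by sweeping a threshold over the observed scores. This is a generic sketch of EER estimation under that setup, not the evaluation code used in the study; the `higher_is_better` flag accommodates similarity scorers such as VeriEye alongside distance-based matchers.

```python
import numpy as np

def equal_error_rate(genuine, impostor, higher_is_better=False):
    """Approximate the EER: the operating point where FAR equals FRR.

    For distance-like scores (e.g. Hamming distance) lower means a better
    match; set higher_is_better=True for similarity scores instead.
    """
    genuine = np.asarray(genuine, dtype=float)
    impostor = np.asarray(impostor, dtype=float)
    if higher_is_better:
        # Negate so that lower always means "more genuine".
        genuine, impostor = -genuine, -impostor
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    best_gap, eer = np.inf, 1.0
    for t in thresholds:
        far = np.mean(impostor <= t)  # impostors wrongly accepted
        frr = np.mean(genuine > t)    # genuine samples wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return float(eer)
```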
6. Results
Figures 2–5 illustrate ROC curves obtained when generating genuine and impostor score distributions for each RGB channel and each eye color subset. EER-wise, the red channel offers the best performance in most method/subset combinations. There are, however, a few exceptions to this behavior. For the blue eyes subset, the green channel provides recognition accuracy very similar to that of the red channel (slightly better for the MIRLIN matcher, slightly worse for the VeriEye matcher, and the same for the remaining two methods). Interestingly, for the MIRLIN matcher, the blue channel gives the same EER as the green channel, and a better one than the red channel. For the green eyes subset, the red channel offers significantly better performance than the other channels for the OSIRIS and MIRLIN matchers. However, for the VeriEye matcher, using the green channel instead decreased the EER from 5% to 2%, a significant improvement over the recognition accuracy offered by the red channel. The brown/hazel eyes subset, unsurprisingly, achieves its optimal recognition performance for the red channel, as it offers significantly better iris pattern visibility than the other two channels. The IriCore method, however, seems to be less susceptible to the type of the input data, as the decrease in performance for the green channel is much lower than in the remaining three methods (compared to the red channel).

Fig. 2. ROC curves for the OSIRIS matcher and scores obtained when matching different RGB channels of samples from the: (a) blue eyes, (b) green eyes, and (c) brown/hazel eyes subsets. Red channel scores are denoted with a solid red line, blue channel scores with a dotted blue line, and green channel scores with a dashed green line. Equal error rates: (a) blue 0.13, red 0.08, green 0.08; (b) blue 0.44, red 0.06, green 0.18; (c) blue 0.44, red 0.04, green 0.21.

Fig. 3. Same as in Fig. 2, but for the VeriEye matcher: (a) blue eyes (EER: blue 0.19, red 0.07, green 0.08), (b) green eyes (EER: blue 0.48, red 0.05, green 0.02), and (c) brown/hazel eyes (EER: blue 0.52, red 0.05, green 0.24).

Fig. 4. Same as in Fig. 2, but for the MIRLIN matcher: (a) blue eyes (EER: blue 0.08, red 0.09, green 0.08), (b) green eyes (EER: blue 0.27, red 0.03, green 0.10), and (c) brown/hazel eyes (EER: blue 0.40, red 0.08, green 0.19).

Fig. 5. Same as in Fig. 2, but for the IriCore matcher: (a) blue eyes (EER: blue 0.10, red 0.08, green 0.08), (b) green eyes (EER: blue 0.26, red 0.02, green 0.02), and (c) brown/hazel eyes (EER: blue 0.49, red 0.03, green 0.08).
7. Conclusions
This study provides a valuable analysis of cross-spectral iris recognition, in which high quality visible light images obtained with a mobile phone are used as verification counterparts for enrollment samples obtained in NIR. The red channel performs best in general; however, in selected cases, employing the green or the blue channel of the RGB color space when converting the color image to grayscale is shown to improve recognition accuracy. For some combinations of recognition methods and eye colors, recognition accuracy can be improved this way for blue and green eyes, while dark brown and hazel eyes generally perform best when the red channel is used.

The experiments revealed that cross-spectral iris recognition in the discussed scenario is perfectly viable, with equal error rates not exceeding 7%, 2%, and 3% for the blue, green, and brown/hazel eyes, respectively, when optimal combinations of the recognition method and grayscale transformation are selected. As this incorporated only a simple selection of the RGB channel best suited for a given eye color, future experiments employing more advanced image processing could bring the error rates even lower. This certainly lets us view cross-spectral iris recognition using mobile phones with a great dose of optimism.
Acknowledgement
The authors would like to thank Dr. Adam Czajka for his valuable comments that contributed to the quality of this paper. We are also grateful for the help of the Biometrics Scientific Club members at the Warsaw University of Technology in building the database used in this study.
References

[1] L. Flom and A. Safir, "Iris recognition system", United States Patent US 4641349, 1987.
[2] J. Daugman, "Biometric personal identification system based on iris analysis", United States Patent US 5291560, 1994.
[3] J. G. Daugman, "High confidence visual recognition of persons by a test of statistical independence", IEEE Trans. Pattern Anal. and Machine Intell.
[7] C. Boyce, A. Ross, M. Monaco, L. Hornak, and X. Hu, "Multispectral iris analysis: A preliminary study", in Proc. Conf. Comp. Vision & Pattern Recogn. Worksh. CVPRW'06, New York, NY, USA, 2006 (doi: 10.1109/CVPRW.2006.141).
[8] J. H. Park and M. G. Kang, "Multispectral iris authentication system against counterfeit attack using gradient-based image fusion", Optical Engin., vol. 46, no. 11, 2007 (doi: 10.1117/1.2802367).
[9] A. Ross, R. Pasula, and L. Hornak, "Exploring multispectral iris recognition beyond 900nm", in Proc. IEEE 3rd Int. Conf. Biometrics: Theory, Appl. and Syst. BTAS 2009, Washington, DC, USA, 2009.
[10] M. J. Burge and M. K. Monaco, "Multispectral iris fusion for enhancement, interoperability, and cross wavelength matching", in Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XV, S. S. Shen and P. E. Lewis, Eds., Proc. of SPIE, vol. 7334, 73341D, 2009 (doi: 10.1117/12.819058).
[11] J. Zuo, F. Nicolo, and N. A. Schmid, "Cross spectral iris matching based on predictive image mapping", Washington, DC, USA, 2010.
[12] M. Dobes, L. Machala, P. Tichavsky, and J. Pospisil, "Human eye iris recognition using the mutual information", Optik, vol. 115, no. 9, pp. 399–404, 2004.
[13] H. Proença and L. A. Alexandre, "UBIRIS: A noisy iris image database", Tech. Rep., ISBN: 972-99548-0-1, University of Beira Interior, Portugal, 2005.
[14] H. Proença, S. Filipe, R. Santos, J. Oliveira, and L. A. Alexandre, "The UBIRIS.v2: A database of visible wavelength iris images captured on-the-move and at-a-distance", IEEE Trans. Pattern Anal. and Machine Intell., vol. 32, no. 8, pp. 1529–1535, 2010.
[15] M. Trokielewicz, "Iris recognition with a database of iris images obtained in visible light using smartphone camera", in Proc. IEEE Int. Conf. on Ident., Secur. and Behavior Anal. ISBA 2016, Sendai, Japan, 2016.
[16] Warsaw-BioBase-Smartphone-Iris-v1.0 [Online]. Available: http://zbum.ia.pw.edu.pl/en/node/46
[17] H. Proença, "On the feasibility of the visible wavelength, at-a-distance and on-the-move iris recognition", in Proc. IEEE Symp. Series on Computat. Intell. in Biometr.: Theory, Algorithms, & Appl. SSCI 2009, Nashville, TN, USA, 2009, vol. 1, pp. 9–15.
[18] H. Proença, "Iris recognition: On the segmentation of degraded images acquired in the visible wavelength", IEEE Trans. Pattern Anal. and Machine Intell., vol. 32, no. 8, pp. 1502–1516, 2010.
[19] H. Proença, "Quality assessment of degraded iris images acquired in the visible wavelength", IEEE Trans. Inform. Forens. and Secur., vol. 6, no. 1, pp. 82–95, 2011.
[20] G. Santos, M. V. Bernardo, H. Proença, and P. T. Fiadeiro, "Iris recognition: Preliminary assessment about the discriminating capacity of visible wavelength data", in Proc. 6th IEEE Int. Worksh. Multim. Inform. Process. and Retrieval MIPR 2010, Taichung, Taiwan, 2010, pp. 324–329.
[21] P. Radu, K. Sirlantzis, G. Howells, S. Hoque, and F. Deravi, "A colour iris recognition system employing multiple classifier techniques", Elec. Lett. Comp. Vision and Image Anal., vol. 12, no. 2, pp. 54–65, 2013.
[22] M. Frucci, C. Galdi, M. Nappi, D. Riccio, and G. Sanniti di Baja, "IDEM: Iris detection on mobile devices", in Proc. 22nd Int. Conf. Pattern Recogn. ICPR 2014, Stockholm, Sweden, 2014.
[23] K. Raja, R. Raghavendra, F. Cheikh, B. Yang, and C. Busch, "Robust iris recognition using light field camera", in The 7th Colour and Visual Comput. Symp. CVCS 2013, Gjøvik, Norway, 2013.
[24] K. Raja, R. Raghavendra, and C. Busch, "Iris imaging in visible spectrum using white LED", Arlington, VA, USA, 2015.
[25] K. Raja, R. Raghavendra, V. Vemuri, and C. Busch, "Smartphone based visible iris recognition using deep sparse filtering", Pattern Recogn. Lett., vol. 57, pp. 33–42, 2014.
[26] K. B. Raja, R. Raghavendra, and C. Busch, "Smartphone based robust iris recognition in visible spectrum using clustered K-mean features", in Proc. IEEE Worksh. Biometr. Measur. and Syst. for Secur. and Med. Appl. BioMS 2014, Rome, Italy, 2014, pp. 15–21.
[27] M. De Marsico, C. Galdi, M. Nappi, and D. Riccio, "FIRME: Face and iris recognition engagement", Image and Vis. Comput., vol. 32, no. 12, pp. 1161–1172, 2014.
[28] M. Trokielewicz, E. Bartuzi, K. Michowska, A. Andrzejewska, and M. Selegrat, "Exploring the feasibility of iris recognition for visible spectrum iris images obtained using smartphone camera", in Photonics Applications in Astronomy, Communications, Industry, and High-Energy Physics Experiments 2015, R. S. Romaniuk, Ed., Proc. of SPIE, vol. 9662, 2015 (doi: 10.1117/12.2205913).
[29] P. van Slembrouck, "Structural Eye Color is Amazing" [Online]. Available: http://medium.com/@ptvan/structural-eye-color-is-amazing-24f47723bf9a (accessed Aug. 8, 2016).
[30] ISO/IEC 19794-6:2011, Information technology – Biometric data interchange formats – Part 6: Iris image data, 2011.
[31] Smart Sensors Ltd., MIRLIN SDK, version 2.23, 2013.
[32] D. M. Monro, S. Rakshit, and D. Zhang, "DCT-based iris recognition", IEEE Trans. Pattern Anal. and Machine Intell.
[33] IriTech Inc., IriCore SDK.
[34] Neurotechnology, VeriEye SDK.
[35] BioSecure project, OSIRIS (Open Source for IRIS) reference system.
Mateusz Trokielewicz received his B.Sc. and M.Sc. in Biomedical Engineering from the Faculty of Mechatronics and the Faculty of Electronics and Information Technology at the Warsaw University of Technology, respectively. He is currently with the Biometrics Laboratory at the Research and Academic Computer Network and with the Institute of Control and Computation Engineering at the Warsaw University of Technology, where he is pursuing his Ph.D. in Biometrics. His current professional interests include iris biometrics and its reliability against biological processes, such as aging and diseases, and iris recognition on mobile devices.
E-mail: [email protected]
Biometrics Laboratory
Research and Academic Computer Network (NASK)
Kolska st 12
01-045 Warsaw, Poland
Institute of Control and Computation Engineering
Warsaw University of Technology
Nowowiejska st 15/19
00-665 Warsaw, Poland