
Publication


Featured research published by Nathan D. Kalka.


Proceedings of SPIE, the International Society for Optical Engineering | 2006

Image quality assessment for iris biometric

Nathan D. Kalka; Jinyu Zuo; Natalia A. Schmid; Bojan Cukic

Iris recognition, the ability to recognize and distinguish individuals by their iris pattern, is the most reliable biometric in terms of recognition and identification performance. However, the performance of these systems is degraded by poor-quality imaging. In this work, we extend previous research efforts on iris quality assessment by analyzing the effect of seven quality factors on the performance of a traditional iris recognition system: defocus blur, motion blur, off-angle presentation, occlusion, specular reflection, lighting, and pixel count. We conclude that defocus blur, motion blur, and off-angle presentation affect recognition performance the most. We further design a fully automated iris image quality evaluation block that operates in two steps. First, each factor is estimated individually; second, the estimated factors are fused using a Dempster-Shafer theory approach to evidential reasoning. The designed block is tested on two datasets, CASIA 1.0 and a dataset collected at WVU. Considerable improvement in recognition performance is demonstrated when poor-quality images, as identified by our quality metric, are removed. The upper bound on the processing complexity required to evaluate the quality of a single image is O(n² log n), that of a 2D fast Fourier transform.
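The O(n² log n) bound the abstract cites corresponds to the 2D fast Fourier transform underlying the blur estimate. A minimal sketch of one such spectral sharpness score, assuming a simple high-frequency energy ratio with a made-up cutoff; the paper's actual defocus measure is not reproduced here.

```python
# Score sharpness by the fraction of spectral energy outside a
# low-frequency window of the 2D FFT (the O(n^2 log n) step).
# The cutoff value and the synthetic images are illustrative.
import numpy as np

def defocus_score(img, cutoff=0.25):
    """Fraction of FFT energy outside a centered low-frequency square."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spec.shape
    ch, cw = int(h * cutoff), int(w * cutoff)
    total = spec.sum()
    low = spec[h // 2 - ch : h // 2 + ch, w // 2 - cw : w // 2 + cw].sum()
    return float((total - low) / total)

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))   # noise image: energy spread to high frequencies
blurred = np.ones((64, 64))    # flat image: essentially all energy at DC
```

A sharper image concentrates more energy in high spatial frequencies, so `defocus_score(sharp)` exceeds `defocus_score(blurred)`.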


IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans | 2010

Estimating and Fusing Quality Factors for Iris Biometric Images

Nathan D. Kalka; Jinyu Zuo; Natalia A. Schmid; Bojan Cukic

Iris recognition, the ability to recognize and distinguish individuals by their iris pattern, is one of the most reliable biometrics in terms of recognition and identification performance. However, the performance of these systems is affected by poor-quality imaging. In this paper, we extend iris quality assessment research by analyzing the effect of quality factors such as defocus blur, off-angle presentation, occlusion/specular reflection, lighting, and iris resolution on the performance of a traditional iris recognition system. We further design a fully automated iris image quality evaluation block that estimates defocus blur, motion blur, off-angle presentation, occlusion, lighting, specular reflection, and pixel count. First, each factor is estimated individually; the second step then fuses the estimated factors using a Dempster-Shafer theory approach to evidential reasoning. The designed block is evaluated on three datasets: the Institute of Automation, Chinese Academy of Sciences (CASIA) 3.0 interval subset, the West Virginia University (WVU) non-ideal iris dataset, and the Iris Challenge Evaluation (ICE) 1.0 dataset made available by the National Institute of Standards and Technology (NIST). Considerable improvement in recognition performance is demonstrated when poor-quality images selected by our quality metric are removed. The upper bound on the computational complexity required to evaluate the quality of a single image is O(n² log n).
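The second-step fusion named in the abstract can be illustrated with Dempster's rule of combination. This is a minimal sketch, assuming a two-element frame of discernment {good, poor} and made-up mass assignments from two hypothetical factor estimators; it does not reproduce the paper's formulation.

```python
# Dempster's rule of combination over a frame of discernment
# {good, poor}. The mass values below (from a hypothetical defocus
# estimator and a hypothetical occlusion estimator) are illustrative.

def combine(m1, m2):
    """Fuse two mass functions; subsets of the frame are frozensets."""
    fused = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass assigned to disjoint hypotheses
    # Normalize by the non-conflicting mass
    return {s: v / (1.0 - conflict) for s, v in fused.items()}

G, P = frozenset({"good"}), frozenset({"poor"})
GP = G | P  # ignorance: "either good or poor"

defocus = {G: 0.6, P: 0.1, GP: 0.3}    # hypothetical estimator outputs
occlusion = {G: 0.7, P: 0.2, GP: 0.1}

fused = combine(defocus, occlusion)
```

Because both estimators lean toward "good", the fused belief in "good" is higher than either input alone, which is the behavior that makes evidential fusion attractive for combining weak per-factor quality scores.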


2006 Biometrics Symposium: Special Session on Research at the Biometric Consortium Conference | 2006

A Robust IRIS Segmentation Procedure for Unconstrained Subject Presentation

Jinyu Zuo; Nathan D. Kalka; Natalia A. Schmid

As a biometric, the iris is among the most reliable with respect to performance. This reliability, however, is a function of the ideality of the data, so a robust segmentation algorithm is required to handle non-ideal data. In this paper, a segmentation methodology is proposed that utilizes shape, intensity, and location information intrinsic to the pupil and iris. The virtue of this methodology lies in its capability to reliably segment non-ideal imagery simultaneously affected by factors such as specular reflection, blur, lighting variation, and off-angle presentation. We demonstrate the robustness of our segmentation methodology by evaluating ideal and non-ideal datasets, namely CASIA, Iris Challenge Evaluation (ICE), WVU, and WVU Off-angle. Furthermore, we compare our performance to that of the Camus and Wildes and Libor Masek algorithms. We demonstrate increases in segmentation performance of 7.02%, 8.16%, 20.84%, and 26.61% over the aforementioned algorithms when evaluating these datasets, respectively.


Computer Vision and Image Understanding | 2008

Improving long range and high magnification face recognition: Database acquisition, evaluation, and enhancement

Yi Yao; Besma R. Abidi; Nathan D. Kalka; Natalia A. Schmid; Mongi A. Abidi

In this paper, we describe a face video database, UTK-LRHM, acquired from long distances and with high magnifications. Both indoor and outdoor sequences are collected under uncontrolled surveillance conditions. To our knowledge, it is the first database to provide face images from long distances (indoor: 10-16 m; outdoor: 50-300 m). The corresponding system magnifications range from 3× to 20× indoors and up to 284× outdoors. This database has applications in experiments on human identification and authentication in long-range surveillance and wide-area monitoring. Deteriorations unique to long-range and high-magnification face images are investigated in terms of face recognition rates based on the UTK-LRHM database. Magnification blur is shown to be a major degradation source, the effect of which is quantified using a novel blur assessment measure and alleviated via adaptive deblurring algorithms. A comprehensive processing algorithm, including frame selection, enhancement, and super-resolution, is introduced for long-range and high-magnification face images with a large variety of resolutions. Experimental results using face images of the UTK-LRHM database demonstrate a significant improvement in recognition rates after assessment and enhancement of degradations.


2007 IEEE Workshop on Automatic Identification Advanced Technologies | 2007

Protecting Iris Images through Asymmetric Digital Watermarking

Nick Bartlow; Nathan D. Kalka; Bojan Cukic; Arun Ross

When biometric systems require raw images to be stored in centralized databases, it is imperative that appropriate measures are taken to secure these images. A combination of asymmetric digital watermarking and cryptography can serve as a powerful mechanism for facilitating such security needs. The combination of these techniques enables the system to handle many issues associated with storing and using raw biometric data. In this paper, we propose a framework that encodes voice feature descriptors in raw iris images thereby offering an example of a secure biometric system. The contributions of this work are as follows: application of biometric watermarking to iris images in order to provide an added level of authentication; a mechanism to validate the originating source of iris images; understanding levels in which watermarks can be compromised in a biometric system; and implementation of an asymmetric watermarking framework.


International Conference on Pattern Recognition | 2010

Cross-Spectral Face Verification in the Short Wave Infrared (SWIR) Band

Thirimachos Bourlai; Nathan D. Kalka; Arun Ross; Bojan Cukic; Lawrence A. Hornak

The problem of face verification across the short wave infrared (SWIR) spectrum is studied in order to illustrate the advantages and limitations of SWIR face verification. The contributions of this work are two-fold. First, a database of 50 subjects is assembled and used to illustrate the challenges associated with the problem. Second, a set of experiments is performed in order to demonstrate the possibility of SWIR cross-spectral matching. Experiments also show that images captured under different SWIR wavelengths can be matched to visible images with promising results. Finally, the role of multispectral fusion in improving recognition performance in SWIR images is illustrated. To the best of our knowledge, this is the first time cross-spectral SWIR face recognition has been investigated in the open literature.


International Conference on Biometrics: Theory, Applications and Systems | 2009

An automated method for predicting iris segmentation failures

Nathan D. Kalka; Nick Bartlow; Bojan Cukic

Arguably the most important task in iris recognition systems involves localization of the iris region of interest, a process known as iris segmentation. Research has found that segmentation results are a dominant factor that drives iris recognition matching performance. This work proposes techniques based on probabilistic intensity features and geometric features to arrive at scores indicating the success of both pupil and iris segmentation. The technique is fully automated and therefore requires no human supervision or manual evaluation. This work also presents a machine learning approach which utilizes the pupil and iris scores to arrive at an overall iris segmentation result prediction. We test the techniques using two iris segmentation algorithms of varying performance on two publicly available iris datasets. Our analysis shows that the approach is capable of arriving at segmentation scores suitable for predicting both the success and failure of pupil or iris segmentation. The proposed machine learning approach achieves an average classification accuracy of 98.45% across the four combinations of algorithms and datasets tested when predicting overall segmentation results. Finally, we present one potential application of the technique specific to iris match score performance and outline many other potential uses for the algorithm.
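The second-stage prediction described above, mapping a pupil score and an iris score to an overall segmentation success call, can be sketched as a simple learned threshold model. The logistic form, the weights, and the example scores below are illustrative assumptions, not the paper's trained classifier.

```python
# A toy stand-in for the machine learning stage: a logistic model over
# the two per-region segmentation scores. Weights, bias, and the 0.5
# decision threshold are made up for illustration.
import math

def predict_success(pupil_score, iris_score, w=(4.0, 3.0), b=-3.5):
    """Return (success_prediction, probability) for a score pair in [0, 1]."""
    z = w[0] * pupil_score + w[1] * iris_score + b
    p = 1.0 / (1.0 + math.exp(-z))
    return p >= 0.5, p

ok, p = predict_success(0.9, 0.8)   # both regions scored well
bad, q = predict_success(0.2, 0.3)  # both regions scored poorly
```

In practice the learner, features, and threshold would be fit per segmentation algorithm and dataset, which is consistent with the per-combination accuracies the abstract reports.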


2006 Biometrics Symposium: Special Session on Research at the Biometric Consortium Conference | 2006

High Magnification and Long Distance Face Recognition: Database Acquisition, Evaluation, and Enhancement

Yi Yao; Besma R. Abidi; Nathan D. Kalka; Natalia A. Schmid; Mongi A. Abidi

In this paper, we describe a face video database obtained from long distances and with high magnifications, IRIS-LDHM. Both indoor and outdoor sequences are collected under uncontrolled surveillance conditions. The significance of this database lies in the fact that it is the first to provide face images from long distances (indoor: 10-20 m; outdoor: 50-300 m). The corresponding system magnification is elevated from less than 3× to 20× indoors and up to 375× outdoors. The database has applications in experiments on human identification and authentication in long-range surveillance and wide-area monitoring. The database will be made public to the research community to support long-range face-related research. Deteriorations unique to high-magnification and long-range face images are investigated in terms of face recognition rates. Magnification blur is shown to be an additional major degradation source, which can be alleviated via blur assessment and deblurring algorithms. Experimental results validate a relative improvement of up to 25% in recognition rates after assessment and enhancement of degradations.


Computer Vision and Pattern Recognition | 2017

IARPA Janus Benchmark-B Face Dataset

Cameron Whitelam; Emma Taborsky; Austin Blanton; Brianna Maze; Jocelyn C. Adams; Timothy Miller; Nathan D. Kalka; Anil K. Jain; James A. Duncan; Kristen Allen; Jordan Cheney; Patrick J. Grother

Despite the importance of rigorous testing data for evaluating face recognition algorithms, all major publicly available faces-in-the-wild datasets are constrained by the use of a commodity face detector, which limits, among other conditions, pose, occlusion, expression, and illumination variations. In 2015, the NIST IJB-A dataset, which consists of 500 subjects, was released to mitigate these constraints. However, the relatively low number of impostor and genuine matches per split in the IJB-A protocol limits the evaluation of an algorithm at operationally relevant assessment points. This paper builds upon IJB-A and introduces the IARPA Janus Benchmark-B (NIST IJB-B) dataset, a superset of IJB-A. IJB-B consists of 1,845 subjects with human-labeled ground truth face bounding boxes, eye/nose locations, and covariate metadata such as occlusion, facial hair, and skin tone for 21,798 still images and 55,026 frames from 7,011 videos. IJB-B was also designed to have a more uniform geographic distribution of subjects across the globe than that of IJB-A. Test protocols for IJB-B represent operational use cases including access point identification, forensic quality media searches, surveillance video searches, and clustering. Finally, all images and videos in IJB-B are published under a Creative Commons distribution license and, therefore, can be freely distributed among the research community.


Archive | 2011

Ascertaining Human Identity in Night Environments

Thirimachos Bourlai; Nathan D. Kalka; Deng Cao; B. Decann; Zain Jafri; F. Nicolo; Cameron Whitelam; J. Zuo; Donald A. Adjeroh; Bojan Cukic; Jeremy M. Dawson; Lawrence A. Hornak; Arun Ross; Natalia A. Schmid

Understanding patterns of human activity from the fusion of multimodal sensor surveillance sources is an important capability. Most related research emphasizes improvement in the performance of biometric systems in controlled conditions characterized by suitable lighting and favorable acquisition distances. However, the need for monitoring humans in night environments is of equal if not greater importance. This chapter presents techniques for the extraction, processing, and matching of biometrics under adverse night conditions in the presence of either natural or artificial illumination. Our work includes capture, analysis, and evaluation of a broad range of electromagnetic bands suitable for night-time image acquisition, including visible light, near infrared (IR), extended near IR, and thermal IR. We develop algorithms for human detection and tracking from night-time imagery at ranges between 5 and 200 meters. Identification algorithms include face, iris, and gait recognition, supplemented by soft biometric features. Our preliminary research highlights the challenges of performing human identification in night-time environments.

Collaboration


Dive into Nathan D. Kalka's collaborations.

Top Co-Authors

Bojan Cukic (University of North Carolina at Charlotte)

Arun Ross (Michigan State University)

Nick Bartlow (West Virginia University)

Anil K. Jain (Michigan State University)

Jinyu Zuo (West Virginia University)