Bradford T. Ulery
Noblis
Publications
Featured research published by Bradford T. Ulery.
Proceedings of the National Academy of Sciences of the United States of America | 2011
Bradford T. Ulery; R. Austin Hicklin; JoAnn Buscaglia; Maria Antonia Roberts
The interpretation of forensic fingerprint evidence relies on the expertise of latent print examiners. The National Research Council of the National Academies and the legal and forensic sciences communities have called for research to measure the accuracy and reliability of latent print examiners’ decisions, a challenging and complex problem in need of systematic analysis. Our research is focused on the development of empirical approaches to studying this problem. Here, we report on the first large-scale study of the accuracy and reliability of latent print examiners’ decisions, in which 169 latent print examiners each compared approximately 100 pairs of latent and exemplar fingerprints from a pool of 744 pairs. The fingerprints were selected to include a range of attributes and quality encountered in forensic casework, and to be comparable to searches of an automated fingerprint identification system containing more than 58 million subjects. This study evaluated examiners on key decision points in the fingerprint examination process; procedures used operationally include additional safeguards designed to minimize errors. Five examiners made false positive errors for an overall false positive rate of 0.1%. Eighty-five percent of examiners made at least one false negative error for an overall false negative rate of 7.5%. Independent examination of the same comparisons by different participants (analogous to blind verification) was found to detect all false positive errors and the majority of false negative errors in this study. Examiners frequently differed on whether fingerprints were suitable for reaching a conclusion.
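The overall error rates reported above (0.1% false positive, 7.5% false negative) are tabulated from individual examiner decisions. The Python sketch below is illustrative only: the decision counts are hypothetical, not the study data, and it simplifies the denominators (the study computed rates only over comparisons that reached a conclusion). It shows how individualizations of nonmated pairs and exclusions of mated pairs translate into overall rates.

    # Illustrative sketch only: hypothetical decisions, not the study data.
    # A false positive is an individualization of a nonmated pair; a false
    # negative is an exclusion of a mated pair.
    from dataclasses import dataclass

    @dataclass
    class Decision:
        mated: bool        # True if the latent and exemplar share a source
        conclusion: str    # "individualization", "exclusion", or "inconclusive"

    def error_rates(decisions):
        """Return (false positive rate, false negative rate)."""
        nonmated = [d for d in decisions if not d.mated]
        mated = [d for d in decisions if d.mated]
        false_pos = sum(d.conclusion == "individualization" for d in nonmated)
        false_neg = sum(d.conclusion == "exclusion" for d in mated)
        fpr = false_pos / len(nonmated) if nonmated else 0.0
        fnr = false_neg / len(mated) if mated else 0.0
        return fpr, fnr

    # Hypothetical counts chosen to mirror the reported rates (1/1000, 75/1000).
    sample = ([Decision(False, "individualization")] * 1
              + [Decision(False, "exclusion")] * 999
              + [Decision(True, "exclusion")] * 75
              + [Decision(True, "individualization")] * 925)
    print(error_rates(sample))  # (0.001, 0.075) -> 0.1% and 7.5%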
Forensic Science International | 2015
Bradford T. Ulery; R. Austin Hicklin; Maria Antonia Roberts; JoAnn Buscaglia
After the initial analysis of a latent print, an examiner will sometimes revise the assessment during comparison with an exemplar. Changes between analysis and comparison may indicate that the initial analysis of the latent was inadequate, or that confirmation bias may have affected the comparison. 170 volunteer latent print examiners, each randomly assigned 22 pairs of prints from a pool of 320 total pairs, provided detailed markup documenting their interpretations of the prints and the bases for their comparison conclusions. We describe changes in value assessments and markup of features and clarity. When examiners individualized, they almost always added or deleted minutiae (90.3% of individualizations); every examiner revised at least some markups. For inconclusive and exclusion determinations, changes were less common, and features were added more frequently when the image pair was mated (same source). Even when individualizations were based on eight or fewer corresponding minutiae, in most cases some of those minutiae had been added during comparison. One erroneous individualization was observed: the markup changes were notably extreme, and almost all of the corresponding minutiae had been added during comparison. Latents assessed to be of value for exclusion only (VEO) during analysis were often individualized when compared to a mated exemplar (26%); in our previous work, where examiners were not required to provide markup of features, VEO individualizations were much less common (1.8%).
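A measure such as the 90.3% figure above (individualizations in which minutiae were added or deleted between analysis and comparison) can be computed by comparing each examiner's analysis-phase and comparison-phase minutia sets. The sketch below is a hypothetical illustration: the record structure, field names, and example coordinates are assumptions, not the study's data format.

    # Hypothetical illustration; field names and records are assumptions.
    def markup_change_rate(examinations):
        """Fraction of individualizations whose minutia set changed
        between the analysis and comparison phases."""
        indiv = [e for e in examinations if e["conclusion"] == "individualization"]
        changed = sum(e["analysis_minutiae"] != e["comparison_minutiae"]
                      for e in indiv)
        return changed / len(indiv) if indiv else 0.0

    # Two individualizations, one with a minutia added during comparison:
    examinations = [
        {"conclusion": "individualization",
         "analysis_minutiae": {(10, 12), (31, 40)},
         "comparison_minutiae": {(10, 12), (31, 40), (55, 8)}},
        {"conclusion": "individualization",
         "analysis_minutiae": {(10, 12), (31, 40)},
         "comparison_minutiae": {(10, 12), (31, 40)}},
        {"conclusion": "exclusion",
         "analysis_minutiae": {(5, 5)},
         "comparison_minutiae": {(5, 5)}},
    ]
    print(markup_change_rate(examinations))  # -> 0.5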
Data in Brief | 2016
Bradford T. Ulery; R. Austin Hicklin; Maria Antonia Roberts; JoAnn Buscaglia
The data in this article supports the research paper entitled “Interexaminer variation of minutia markup on latent fingerprints” [1] and describes the variability in minutia markup during both analysis of the latents and comparison between latents and exemplars. The data was collected in the “White Box Latent Print Examiner Study,” in which each of 170 volunteer latent print examiners provided detailed markup documenting their examinations of latent-exemplar pairs of prints randomly assigned from a pool of 320 pairs. Each examiner examined 22 latent-exemplar pairs; an average of 12 examiners marked each latent.
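The reported average of about 12 examiners per latent follows from the assignment counts, assuming each latent appears in a single latent-exemplar pair; the quick check below uses only the figures given above.

    # Consistency check using the figures in the text.
    examiners = 170
    pairs_per_examiner = 22
    pool_size = 320
    print(examiners * pairs_per_examiner / pool_size)  # 11.6875, i.e. roughly 12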
Forensic Science International | 2017
Bradford T. Ulery; R. Austin Hicklin; Maria Antonia Roberts; JoAnn Buscaglia
Exclusion is the determination by a latent print examiner that two friction ridge impressions did not originate from the same source. The concept and terminology of exclusion vary among agencies. Much of the literature on latent print examination focuses on individualization, and much less attention has been paid to exclusion. This experimental study assesses the associations between a variety of factors and exclusion determinations. Although erroneous exclusions are more likely to occur on some images and for some examiners, they were widely distributed among images and examiners. Measurable factors found to be associated with exclusion rates include the quality of the latent, value determinations, analysis minutia count, comparison difficulty, and the presence of cores or deltas. An understanding of these associations will help explain the circumstances under which errors are more likely to occur and when determinations are less likely to be reproduced by other examiners; the results should also lead to improved effectiveness and efficiency of training and casework quality assurance. This research is intended to assist examiners in improving the examination process and provide information to the broader community regarding the accuracy, reliability, and implications of exclusion decisions.
NIST Interagency/Internal Report (NISTIR) - 7123 | 2004
Charles L. Wilson; R. Austin Hicklin; Harold Korves; Bradford T. Ulery; Melissa Zoepfl; Mike Bone; Patrick J. Grother; Ross J. Micheals; Steve Otto; Craig I. Watson
PLOS ONE | 2012
Bradford T. Ulery; R. Austin Hicklin; JoAnn Buscaglia; Maria Antonia Roberts
NIST Interagency/Internal Report (NISTIR) - 7271 | 2005
Austin Hicklin; Bradford T. Ulery; Craig I. Watson
Forensic Science International | 2013
Bradford T. Ulery; R. Austin Hicklin; George Ihor Kiebuzinski; Maria Antonia Roberts; JoAnn Buscaglia
PLOS ONE | 2014
Bradford T. Ulery; R. Austin Hicklin; Maria Antonia Roberts; JoAnn Buscaglia
Forensic Science International | 2016
Bradford T. Ulery; R. Austin Hicklin; Maria Antonia Roberts; JoAnn Buscaglia