JoAnn Buscaglia
Federal Bureau of Investigation
Publications
Featured research published by JoAnn Buscaglia.
Proceedings of the National Academy of Sciences of the United States of America | 2011
Bradford T. Ulery; R. Austin Hicklin; JoAnn Buscaglia; Maria Antonia Roberts
The interpretation of forensic fingerprint evidence relies on the expertise of latent print examiners. The National Research Council of the National Academies and the legal and forensic sciences communities have called for research to measure the accuracy and reliability of latent print examiners’ decisions, a challenging and complex problem in need of systematic analysis. Our research is focused on the development of empirical approaches to studying this problem. Here, we report on the first large-scale study of the accuracy and reliability of latent print examiners’ decisions, in which 169 latent print examiners each compared approximately 100 pairs of latent and exemplar fingerprints from a pool of 744 pairs. The fingerprints were selected to include a range of attributes and quality encountered in forensic casework, and to be comparable to searches of an automated fingerprint identification system containing more than 58 million subjects. This study evaluated examiners on key decision points in the fingerprint examination process; procedures used operationally include additional safeguards designed to minimize errors. Five examiners made false positive errors for an overall false positive rate of 0.1%. Eighty-five percent of examiners made at least one false negative error for an overall false negative rate of 7.5%. Independent examination of the same comparisons by different participants (analogous to blind verification) was found to detect all false positive errors and the majority of false negative errors in this study. Examiners frequently differed on whether fingerprints were suitable for reaching a conclusion.
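The reported error rates are straightforward proportions over comparison decisions. As a minimal illustration of that arithmetic (the counts below are hypothetical placeholders, not the study's data):

```python
# Error-rate arithmetic as described in the abstract; all counts are
# hypothetical placeholders, not data from the study.
false_positives = 5          # erroneous individualizations of non-mated pairs
nonmated_comparisons = 4083  # comparisons of prints from different sources
false_negatives = 450        # erroneous exclusions of mated pairs
mated_comparisons = 6000     # comparisons of prints from the same source

fpr = false_positives / nonmated_comparisons   # abstract reports ~0.1% overall
fnr = false_negatives / mated_comparisons      # abstract reports ~7.5% overall
print(f"false positive rate: {fpr:.2%}")
print(f"false negative rate: {fnr:.2%}")
```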
Journal of Forensic Sciences | 2005
Christopher Latkoczy; Stefan Becker; Marc Dücking; Detlef Günther; Jurian Hoogewerff; José R. Almirall; JoAnn Buscaglia; Andrew Dobney; Robert D. Koons; Shirly Montero; Gerard van der Peijl; Wilfried Stoecklein; Tatiana Trejos; John Watling; V. Zdanowicz
Forensic analysis of glass samples was performed in different laboratories within the NITE-CRIME (Natural Isotopes and Trace Elements in Criminalistics and Environmental Forensics) European Network, using a variety of Laser Ablation Inductively Coupled Plasma Mass Spectrometry (LA-ICP-MS) systems. The main objective of the interlaboratory tests was to cross-validate the different combinations of laser ablation systems with different ICP-MS instruments. A first study using widely available samples, such as the NIST SRM 610 and NIST SRM 612 reference glasses, led to deviations of up to 60% in the determined trace element concentrations amongst the laboratories. Extensive discussion among the laboratories and the production of new glass reference standards (FGS 1 and FGS 2) established an improved analytical protocol, which was tested on a well-characterized float glass sample (FG 10-1 from the BKA Wiesbaden collection). Subsequent interlaboratory tests produced improved results for nearly all elements, with deviations of < 10%, demonstrating that LA-ICP-MS can deliver absolute quantitative measurements of major, minor, and trace elements in float glass samples for forensic and other purposes.
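As a rough sketch of how such interlaboratory deviations can be quantified (the concentrations and reference value below are invented, not results from the exercise):

```python
# Per-element interlaboratory deviation; all values are invented examples.
import statistics

reported = [41.2, 39.8, 43.0, 40.5]  # hypothetical ppm values from four labs
reference = 41.0                     # assumed reference concentration

for value in reported:
    deviation = abs(value - reference) / reference
    print(f"{value:5.1f} ppm -> {deviation:.1%} from reference")

rsd = statistics.stdev(reported) / statistics.mean(reported)
print(f"interlaboratory RSD: {rsd:.1%}")
```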
Forensic Science International | 2012
Amanda Hepler; Christopher P. Saunders; Linda J. Davis; JoAnn Buscaglia
Score-based approaches for computing forensic likelihood ratios are becoming more prevalent in the forensic literature. When two items of evidential value are entangled via a score function, several nuances arise when attempting to model the score behavior under the competing source-level propositions. Specific assumptions must be made in order to appropriately model the numerator and denominator probability distributions. This process is fairly straightforward for the numerator of the score-based likelihood ratio, entailing the generation of a database of scores obtained by pairing items of evidence from the same source. However, this process presents ambiguities for the denominator database generation - in particular, how best to generate a database of scores between two items of different sources. Many alternatives have appeared in the literature, three of which we will consider in detail. They differ in their approach to generating denominator databases, by pairing (1) the item of known source with randomly selected items from a relevant database; (2) the item of unknown source with randomly generated items from a relevant database; or (3) two randomly generated items. When the two items differ in type, perhaps one having higher information content, these three alternatives can produce very different denominator databases. While each of these alternatives has appeared in the literature, the decision of how to generate the denominator database is often made without calling attention to the subjective nature of this process. In this paper, we compare each of the three methods (and the resulting score-based likelihood ratios), which can be thought of as three distinct interpretations of the denominator proposition. Our goal in performing these comparisons is to illustrate the effect that subtle modifications of these propositions can have on inferences drawn from the evidence evaluation procedure. The study was performed using a data set composed of cursive writing samples from over 400 writers. We found that, when provided with the same two items of evidence, the three methods would often lead to differing conclusions (with rates of disagreement ranging from 0.005 to 0.48). Rates of misleading evidence and Tippett plots are both used to characterize the range of behavior of the methods over questioned documents of varying size. The appendix shows that the three score-based likelihood ratios are theoretically very different not only from each other, but also from the likelihood ratio; as a consequence, each displays drastically different behavior.
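A minimal sketch of the three denominator-database constructions, assuming a generic comparison function `score` and a list `population` standing in for the relevant database; all names are illustrative, not the paper's implementation:

```python
# Three ways to build a database of different-source scores; `score` and
# `population` are generic placeholders, not the paper's implementation.
import random

def denominator_scores(known, unknown, population, score, method, n_pairs=1000):
    """Different-source score database under one of three pairing schemes."""
    scores = []
    for _ in range(n_pairs):
        a = random.choice(population)
        if method == 1:    # (1) known-source item vs. random database items
            scores.append(score(known, a))
        elif method == 2:  # (2) unknown-source item vs. random database items
            scores.append(score(unknown, a))
        else:              # (3) two randomly drawn items
            scores.append(score(a, random.choice(population)))
    return scores

# toy demonstration: scalar "items" compared by negative absolute difference
population = [random.gauss(0, 1) for _ in range(500)]
db = denominator_scores(known=0.2, unknown=0.9, population=population,
                        score=lambda a, b: -abs(a - b), method=1)
```

In practice, each scheme would also need to guarantee that paired items truly come from different sources; the sketch omits that bookkeeping.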
Journal of Forensic Sciences | 2002
Robert D. Koons; JoAnn Buscaglia
The concentrations of ten elements in 209 unrelated glass specimens received as evidence were used to assess the frequencies of errors of false association (Type II errors) using several comparison criteria at specified significance levels (Type I errors). Pairwise comparisons of the samples using either the equal-variance t-test or Welch's modification (unequal variances) result in a small number of errors of false association, even when adjusting the significance level (Bonferroni correction) for multivariate comparisons. At the 95% confidence level (overall Type I error of 0.05, or individual element comparison error of 0.005), only two Type II errors are made in 21,736 comparisons (0.009%) when using the equal-variance t-test for comparison of sample means. In this study, the range overlap test using three replicate measurements per specimen results in no errors of false association. Most specimen pairs in this data set are readily discriminated either by differences in the concentrations of several elements or by an extremely large difference in the concentration of one or more elements.
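A brief sketch of the two comparison criteria on hypothetical replicate data (three replicates per specimen, invented concentrations; the study used ten elements, so its Bonferroni-corrected per-element level was 0.005):

```python
# Welch's t-test with Bonferroni correction, plus the range overlap test;
# the replicate concentrations are invented for illustration.
from scipy import stats

specimen_a = {"Ca": [8.21, 8.19, 8.24], "Fe": [0.41, 0.40, 0.42]}  # wt%, invented
specimen_b = {"Ca": [8.20, 8.22, 8.18], "Fe": [0.55, 0.54, 0.56]}

alpha_per_element = 0.05 / len(specimen_a)  # Bonferroni: overall 0.05 split up

def range_overlap(x, y):
    """Range overlap test: the replicate ranges must intersect to match."""
    return max(min(x), min(y)) <= min(max(x), max(y))

for element in specimen_a:
    a, b = specimen_a[element], specimen_b[element]
    t_stat, p = stats.ttest_ind(a, b, equal_var=False)  # Welch's modification
    verdict = "differs" if p < alpha_per_element else "indistinguishable"
    print(f"{element}: p = {p:.4f} ({verdict}); "
          f"range overlap = {range_overlap(a, b)}")
```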
Forensic Science International | 2013
R. Austin Hicklin; JoAnn Buscaglia; Maria Antonia Roberts
The ability of friction ridge examiners to correctly discern and make use of the ridges and associated features in finger or palm impressions is limited by clarity. The clarity of an impression relates to the examiner's confidence that the presence, absence, and attributes of features can be correctly discerned. Despite the importance of clarity in the examination process, there have not previously been standard methods for assessing clarity in friction ridge impressions. We introduce a process for annotation, analysis, and interchange of friction ridge clarity information that can be applied to latent or exemplar impressions. This paper: (1) describes a method for evaluating the clarity of friction ridge impressions by using color-coded annotations that can be used by examiners or automated systems; (2) discusses algorithms for overall clarity metrics based on manual or automated clarity annotation; and (3) defines a method of quantifying the correspondence of clarity when comparing a pair of friction ridge images, based on clarity annotation and the resulting metrics. Uses of this approach include the interchange of data among examiners, quality assurance, metrics, and aiding automated fingerprint matching.
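To make the idea of an overall clarity metric concrete, here is an illustrative sketch only: the clarity levels, weights, and aggregation below are invented stand-ins, not the annotation scheme or metrics defined in the paper:

```python
# Invented clarity levels and weights; the paper defines its own color-coded
# scheme and metrics, which this sketch does not reproduce.
CLARITY_WEIGHTS = {
    "no_ridge_information": 0.0,
    "debatable_ridge_flow": 0.25,
    "clear_ridge_flow": 0.5,
    "clear_minutiae": 1.0,
}

def overall_clarity(annotation_map):
    """Average clarity weight over a 2-D map of per-region clarity labels."""
    labels = [label for row in annotation_map for label in row]
    return sum(CLARITY_WEIGHTS[label] for label in labels) / len(labels)

example_map = [["clear_minutiae", "clear_ridge_flow"],
               ["debatable_ridge_flow", "no_ridge_information"]]
print(f"overall clarity: {overall_clarity(example_map):.2f}")  # 0.44
```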
Journal of Analytical Atomic Spectrometry | 2013
Tatiana Trejos; Robert Koons; Peter Weis; Stefan Becker; Ted Berman; Claude Dalpe; Marc Duecking; JoAnn Buscaglia; Tiffany Eckert-Lumsdon; Troy Ernst; Christopher Hanlon; Alex Heydon; Kim Mooney; Randall W. Nelson; Kristine Olsson; Emily R. Schenk; Christopher Palenik; Edward Chip Pollock; David Rudell; Scott Ryland; Anamary Tarifa; Melissa Valadez; Andrew van Es; V. Zdanowicz; José R. Almirall
Four interlaboratory tests were designed to evaluate the performance of match criteria for forensic comparisons of the elemental composition of glass by μ-XRF, solution nebulization ICP-MS (SN-ICP-MS), LA-ICP-OES, and LA-ICP-MS. A total of 24 analysts in 18 laboratories participated in the tests. Glass specimens were selected to study the capabilities of the techniques to discriminate glass produced in the same manufacturing plant at different time intervals and to associate samples that originated from a single source. The assessment of the effectiveness of several match criteria included: confidence interval (±6s, ±5s, ±4s, ±3s, ±2s), modified confidence interval, t-test, range overlap, and Hotelling's T². Error rates are reported for each of these criteria. Recommended match criteria were those found to produce the lowest combinations of Type I and Type II error rates. Performance of the studied match criteria was dependent on the homogeneity of the glass sources, the repeatability between analytical measurements, and the number of elements that were measured. The best results for μ-XRF data were obtained using spectral overlay followed by a ±3s confidence interval or range overlap. For ICP-based measurements, a wider match criterion, such as a modified confidence interval based on a fixed minimum relative standard deviation (±4s, >3–5% RSD), is recommended due to the inherent precision of those methods (typically <1–5% RSD) and the greater number of elements measured. Glass samples that were manufactured in different plants, or at the same plant weeks or months apart, were readily differentiated by elemental composition when analyzed by these sensitive methods.
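A minimal sketch of a modified confidence interval criterion with a fixed minimum RSD floor, under assumed inputs (the function, replicate values, and 3% floor are illustrative, not the study's exact protocol):

```python
# Modified +/-4s interval with a fixed minimum RSD floor; inputs are invented.
import statistics

def matches(known_reps, questioned_mean, k=4, min_rsd=0.03):
    """Match if the questioned mean lies within mean +/- k*s of the known
    specimen, where s is floored at min_rsd * mean to avoid over-narrow
    intervals from the very low RSDs typical of ICP methods."""
    mean = statistics.mean(known_reps)
    s = max(statistics.stdev(known_reps), min_rsd * mean)
    return abs(questioned_mean - mean) <= k * s

known_sr = [312.0, 314.5, 311.2]  # hypothetical Sr concentrations, ppm
print(matches(known_sr, 318.0))   # True: inside the widened +/-4s interval
```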
Forensic Science International | 2015
Bradford T. Ulery; R. Austin Hicklin; Maria Antonia Roberts; JoAnn Buscaglia
After the initial analysis of a latent print, an examiner will sometimes revise the assessment during comparison with an exemplar. Changes between analysis and comparison may indicate that the initial analysis of the latent was inadequate, or that confirmation bias may have affected the comparison. A total of 170 volunteer latent print examiners, each randomly assigned 22 pairs of prints from a pool of 320 pairs, provided detailed markup documenting their interpretations of the prints and the bases for their comparison conclusions. We describe changes in value assessments and markup of features and clarity. When examiners individualized, they almost always added or deleted minutiae (90.3% of individualizations); every examiner revised at least some markups. For inconclusive and exclusion determinations, changes were less common, and features were added more frequently when the image pair was mated (same source). Even when individualizations were based on eight or fewer corresponding minutiae, in most cases some of those minutiae had been added during comparison. One erroneous individualization was observed: the markup changes were notably extreme, and almost all of the corresponding minutiae had been added during comparison. Latents assessed to be of value for exclusion only (VEO) during analysis were often individualized when compared to a mated exemplar (26%); in our previous work, where examiners were not required to provide markup of features, VEO individualizations were much less common (1.8%).
Journal of Forensic Sciences | 2005
Robert D. Koons; JoAnn Buscaglia
The concentrations of seven elements in projectile lead specimens received as evidence were used to assess the frequency of occurrence of two unrelated samples having indistinguishable compositions. A set of data from 1837 samples was selected for this study from a sampling of 23,054 lead bullets and shot pellets received as evidence in the FBI Laboratory over the period 1989 through 2002. The method used for selection of samples from case submissions ensured that no two samples of the same general type from the same case were included and that no bias was introduced concerning representation of manufacturers or production sources. A total of 1,686,366 pairwise lead sample comparisons were made using the concentrations of the elements Sb, Cu, As, Ag, Bi, Sn, and Cd, with a match criterion of two times the sum of the standard deviations of the paired samples. Of the 1837 samples, 1397, or 76%, are distinguishable from every other sample in this study. The total number of indistinguishable sample pairs is 674, for a frequency of 1 in every 2502 comparisons. The frequency of occurrence of matching samples decreases as the number of measured elements increases and as the precision of the measurements improves. For bullets in which all seven elements were determined, the match frequency is 1 in 7284. Compositional comparison of bullet lead provides a reliable, highly significant point of evidentiary comparison of potential sources of crime-related bullets.
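The match criterion lends itself to a short sketch: per element, the two means must differ by no more than twice the sum of the two samples' standard deviations (the concentrations below are invented):

```python
# "2 x sum of standard deviations" match criterion; values are invented.
import statistics

def indistinguishable(sample_a, sample_b):
    """Match only if every element's means differ by no more than twice the
    sum of the two samples' standard deviations."""
    for element in sample_a:
        mean_a = statistics.mean(sample_a[element])
        mean_b = statistics.mean(sample_b[element])
        s_a = statistics.stdev(sample_a[element])
        s_b = statistics.stdev(sample_b[element])
        if abs(mean_a - mean_b) > 2 * (s_a + s_b):
            return False  # distinguishable on this element
    return True

a = {"Sb": [0.74, 0.76, 0.75], "Sn": [0.021, 0.022, 0.020]}  # hypothetical wt%
b = {"Sb": [0.75, 0.77, 0.74], "Sn": [0.035, 0.036, 0.034]}
print(indistinguishable(a, b))  # False: the Sn concentrations differ
```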
Journal of Forensic Sciences | 2011
Christopher P. Saunders; Linda J. Davis; JoAnn Buscaglia
The proposition that writing profiles are unique is considered a key premise underlying forensic handwriting comparisons. An empirical study cannot validate this proposition because of the impossibility of observing sample documents written by every individual. The goal of this paper is to illustrate what can be stated about the individuality of writing profiles using a database of handwriting samples and an automated comparison procedure. In this paper, we provide a strategy for bounding the probability of observing two writers with indistinguishable writing profiles (regardless of the comparison methodology used) with a random match probability that can be estimated statistically. We illustrate computation of this bound using a convenience sample of documents and an automated comparison procedure based on Pearson's chi-squared statistic applied to frequency distributions of letter shapes extracted from handwriting samples. We also show how this bound can be used when designing an empirical study of individuality.
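As a toy illustration of the comparison procedure's core statistic (the shape categories and counts are invented; the paper's feature extraction is not reproduced here):

```python
# Pearson's chi-squared comparison of two letter-shape frequency
# distributions; the categories and counts are invented.
from scipy.stats import chi2_contingency

writer_1 = [120, 45, 80, 15]  # hypothetical counts over four shape categories
writer_2 = [130, 40, 70, 25]

chi2, p, dof, expected = chi2_contingency([writer_1, writer_2])
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# a small p-value suggests the two writing profiles are distinguishable
```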
Forensic Science International | 2012
Linda J. Davis; Christopher P. Saunders; Amanda Hepler; JoAnn Buscaglia
The likelihood ratio paradigm has been studied as a means for quantifying the strength of evidence for a variety of forensic evidence types. Although the concept of a likelihood ratio as a comparison of the plausibility of evidence under two propositions (or hypotheses) is straightforward, a number of issues arise when one considers how to go about estimating a likelihood ratio. In this paper, we illustrate one possible approach to estimating a likelihood ratio in comparative handwriting analysis. The novelty of our proposed approach lies in generating simulated writing samples from a collection of writing samples from a known source to form a database for estimating the distribution associated with the numerator of the likelihood ratio. We illustrate this approach using documents collected from 432 writers under controlled conditions.
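A compact sketch of the resampling idea, under strong simplifying assumptions: documents are flat lists of writing "units", and `score` is a generic comparison function; none of these names come from the paper:

```python
# Simulated same-source samples for estimating the likelihood-ratio numerator;
# representing documents as flat lists of units is an assumption.
import random

def simulate_sample(known_units, size):
    """Pseudo-document assembled by resampling a known writer's units."""
    return random.choices(known_units, k=size)

def numerator_scores(questioned, known_units, score, n_sim=1000):
    """Scores between the questioned document and simulated same-source
    samples; their distribution estimates the numerator of the LR."""
    return [score(questioned, simulate_sample(known_units, len(questioned)))
            for _ in range(n_sim)]
```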