Publication


Featured research published by Simone Gittelson.


Journal of Forensic Sciences | 2012

Bayesian Networks and the Value of the Evidence for the Forensic Two-Trace Transfer Problem

Simone Gittelson; Alex Biedermann; Silvia Bozza; Franco Taroni

Forensic scientists face increasingly complex inference problems for evaluating likelihood ratios (LRs) for an appropriate pair of propositions. Up to now, scientists and statisticians have derived LR formulae using an algebraic approach. However, this approach reaches its limits when addressing cases with an increasing number of variables and dependence relationships between these variables. In this study, we suggest using a graphical approach, based on the construction of Bayesian networks (BNs). We first construct a BN that captures the problem, and then deduce the expression for calculating the LR from this model to compare it with existing LR formulae. We illustrate this idea by applying it to the evaluation of an activity level LR in the context of the two-trace transfer problem. Our approach allows us to relax assumptions made in previous LR developments, produce a new LR formula for the two-trace transfer problem, and generalize this scenario to n traces.
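
To make the graphical approach concrete, here is a minimal sketch (not the paper's actual network) of how a likelihood ratio can be read off a small probabilistic model by direct computation; the parameters t (transfer probability), b (background probability), and gamma (matching-type frequency), and their values, are illustrative assumptions only:

```python
# A minimal sketch (not the paper's network): reading a likelihood ratio
# off a small probabilistic model by direct computation. The parameters
# t, b, gamma and their values are illustrative assumptions only.

def lr_activity_level(t: float, b: float, gamma: float) -> float:
    """LR for 'a trace matching the suspect was found at the scene'.

    t     -- probability the alleged activity transferred the suspect's material
    b     -- probability some background trace is present regardless
    gamma -- frequency of the matching trace type in the population
    """
    # Hp: the suspect performed the activity. The matching trace arose
    # either by transfer, or (if no transfer) as matching background.
    p_e_given_hp = t + (1 - t) * b * gamma
    # Hd: someone else did. Only matching background explains the finding.
    p_e_given_hd = b * gamma
    return p_e_given_hp / p_e_given_hd

print(lr_activity_level(t=0.6, b=0.05, gamma=0.01))  # ~1200
```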


Journal of Forensic Sciences | 2016

A Practical Guide for the Formulation of Propositions in the Bayesian Approach to DNA Evidence Interpretation in an Adversarial Environment

Simone Gittelson; Tim Kalafut; Steven Myers; Duncan Taylor; Tacha Hicks; Franco Taroni; I.W. Evett; Jo-Anne Bright; John Buckleton

The interpretation of complex DNA profiles is facilitated by a Bayesian approach. This approach requires the development of a pair of propositions: one aligned to the prosecution case and one to the defense case. This note explores the issue of proposition setting in an adversarial environment through a series of examples. A set of guidelines generalizes how to formulate propositions when there is a single person of interest and when there are multiple individuals of interest. Additional explanations cover how to handle multiple defense propositions, relatives, and the transition from subsource level to activity level propositions. The propositions depend on the case information and the allegations of each of the parties. The prosecution proposition is usually known. The authors suggest selecting a sensible defense proposition that is consistent with the defense's stance when it is known, and with a realistic defense position otherwise.
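
As a toy illustration of why the choice of defense proposition matters, the following sketch contrasts the LR obtained against an unknown, unrelated alternative source with the LR against a sibling; both probability values are hypothetical:

```python
# Illustrative only: how the defense proposition changes the LR for the
# same matching single-source profile. Both numbers are hypothetical.

profile_freq = 1e-9      # assumed frequency of the profile (unrelated person)
p_match_sibling = 1e-3   # assumed probability a full sibling shares the profile

# Hp: the person of interest is the source, so P(E | Hp) = 1 here.
lr_vs_unknown = 1.0 / profile_freq      # Hd: an unknown, unrelated person
lr_vs_sibling = 1.0 / p_match_sibling   # Hd: a sibling of the person of interest

print(f"LR vs unknown person: {lr_vs_unknown:.0e}")  # 1e+09
print(f"LR vs sibling:        {lr_vs_sibling:.0e}")  # 1e+03
```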


Forensic Science International | 2016

Low-template DNA: A single DNA analysis or two replicates?

Simone Gittelson; Carolyn R. Steffen; Michael D. Coble

This study investigates the following two questions: (1) Should the DNA analyst concentrate the DNA extract into a single amplification, or should he or she split it up to do two replicates? (2) Given the electropherogram obtained from a first analysis, is it worthwhile for the DNA analyst to invest in obtaining a second replicate? A decision-theoretic approach addresses these questions by quantitatively expressing the expected net gain (ENG) of each DNA analysis of interest. The results indicate that two replicates generally have a greater ENG than a single DNA analysis for DNA quantities capable of producing two replicates having an average allelic peak height as low as 43 RFU. This supports the position that two replicates increase the information content relative to a single analysis.
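
A minimal sketch of the decision-theoretic comparison described above, using invented probabilities, gains, and costs (the paper derives these quantities from peak-height data and a full model):

```python
# Made-up probabilities, gains and costs; the paper derives these from
# peak-height data and a full decision-theoretic model.

def expected_net_gain(p_informative: float, value_of_info: float,
                      cost: float) -> float:
    """ENG = expected value of information (EVOI) minus the analysis cost."""
    return p_informative * value_of_info - cost

# Assumption: splitting the extract costs a second amplification, but two
# replicates jointly yield an informative result more often than one.
eng_single = expected_net_gain(p_informative=0.80, value_of_info=100.0, cost=10.0)
eng_two = expected_net_gain(p_informative=0.92, value_of_info=100.0, cost=20.0)

print(f"ENG single analysis: {eng_single}")  # 70.0
print(f"ENG two replicates:  {eng_two}")     # 72.0 -> prefer two replicates
```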


Artificial Intelligence and Law | 2013

Modeling the forensic two-trace problem with Bayesian networks

Simone Gittelson; Alex Biedermann; Silvia Bozza; Franco Taroni

The forensic two-trace problem is a perplexing inference problem introduced by Evett (J Forensic Sci Soc 27:375–381, 1987). Different possible ways of wording the competing pair of propositions (i.e., one proposition advanced by the prosecution and one proposition advanced by the defence) led to different quantifications of the value of the evidence (Meester and Sjerps in Biometrics 59:727–732, 2003). Here, we re-examine this scenario with the aim of clarifying the interrelationships that exist between the different solutions and, in this way, producing a global vision of the problem. We propose to investigate the different expressions for evaluating the value of the evidence by using a graphical approach, i.e., Bayesian networks, to model the rationale behind each of the proposed solutions and the assumptions made about the unknown parameters in this problem.
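
For one of the proposition formulations discussed in this literature (the suspect is one of the two offenders, versus both offenders are unknown), the value of the evidence can be derived by direct enumeration; the following sketch, with assumed type frequencies gamma_a and gamma_b, reproduces the familiar 1/(2*gamma) form for that proposition pair:

```python
# Not from the paper: direct enumeration of one version of the scenario.
# Two offenders each left one trace; trace 1 is of type A, trace 2 of
# type B, and the suspect is of type A. gamma_a, gamma_b are assumed
# population frequencies of the two types.

def two_trace_value(gamma_a: float, gamma_b: float) -> float:
    # Hp: the suspect is one of the two offenders, either one with
    # probability 1/2. Being type A, he can only have left trace 1.
    p_e_given_hp = 0.5 * gamma_b + 0.5 * 0.0
    # Hd: both offenders are unknown members of the population.
    p_e_given_hd = gamma_a * gamma_b
    return p_e_given_hp / p_e_given_hd  # simplifies to 1 / (2 * gamma_a)

print(two_trace_value(gamma_a=0.01, gamma_b=0.05))  # 50.0 = 1 / (2 * 0.01)
```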


Forensic Science International | 2013

Decision-theoretic reflections on processing a fingermark

Simone Gittelson; Silvia Bozza; Alex Biedermann; Franco Taroni

A recent publication in this journal [1] presented the results of a field study that revealed the data provided by fingermarks that were not processed in a forensic science laboratory. In their study, the authors were interested in the usefulness of these additional data for determining whether such fingermarks would have been worth submitting to the fingermark processing workflow. Taking these ideas as a starting point, this communication places the fingermark in the context of a case brought before a court, and examines the question of processing or not processing a fingermark from a decision-theoretic point of view. The decision-theoretic framework presented provides an answer to this question in the form of a quantified expression of the expected value of information (EVOI) associated with the processed fingermark, which can then be compared with the cost of processing the mark.
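
The decision rule described above can be sketched as follows: process the mark only if the EVOI exceeds the processing cost. All utilities and probabilities in this sketch are hypothetical placeholders, not values from the paper:

```python
# Hypothetical utilities and probabilities; the paper develops these
# quantities formally for the fingermark processing decision.

def expected_value_of_information(p_useful: float, utility_with_info: float,
                                  utility_without_info: float) -> float:
    """EVOI: expected utility gain from processing the fingermark."""
    expected_with = (p_useful * utility_with_info
                     + (1 - p_useful) * utility_without_info)
    return expected_with - utility_without_info

processing_cost = 5.0
evoi = expected_value_of_information(p_useful=0.3,
                                     utility_with_info=50.0,
                                     utility_without_info=20.0)
decision = "process the mark" if evoi > processing_cost else "do not process"
print(f"EVOI = {evoi}: {decision}")  # EVOI = 9.0: process the mark
```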


Forensic Science International-genetics | 2014

Decision analysis for the genotype designation in low-template-DNA profiles.

Simone Gittelson; Alex Biedermann; Silvia Bozza; Franco Taroni

What genotype should the scientist specify for conducting a database search to try to find the source of a low-template-DNA (lt-DNA) trace? When the scientist answers this question, he or she makes a decision. Here, we approach this decision problem from a normative point of view by defining a decision-theoretic framework for answering this question for one locus. This framework combines the probability distribution describing the uncertainty over the trace donor's possible genotypes with a loss function describing the scientist's preferences concerning the false exclusions and false inclusions that may result from the database search. According to this approach, the scientist should choose the genotype designation that minimizes the expected loss. To illustrate the results produced by this approach, we apply it to two hypothetical cases: (1) the case of observing one peak for allele x_i on a single electropherogram, and (2) the case of observing one peak for allele x_i on one replicate, and a pair of peaks for alleles x_i and x_j, i ≠ j, on a second replicate. Given that the probabilities of allele drop-out are defined as functions of the observed peak heights, the threshold values marking the turning points when the scientist should switch from one designation to another are derived in terms of the observed peak heights. For each case, sensitivity analyses show the impact of the model's parameters on these threshold values. The results support the conclusion that the procedure should not focus on a single threshold value for making this decision for all alleles, all loci, and in all laboratories.
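
A toy, single-locus version of this decision rule, with assumed equal priors, assumed losses, and a deliberately crude posterior for the dropout explanation (the paper's model is richer), is sketched below:

```python
# Toy single-locus version with assumed equal priors, assumed losses, and
# a deliberately crude model: one peak observed for allele x_i, dropout
# probability d for a second allele. The paper's model is richer.

def designation_losses(d: float, loss_false_exclusion: float = 10.0,
                       loss_false_inclusion: float = 1.0) -> dict:
    # Posterior that the donor is heterozygous (x_i, x_j): a homozygote
    # explains the single peak directly, a heterozygote needs one dropout.
    p_het = d / (d + 1.0)
    p_hom = 1.0 - p_het
    return {
        "(x_i, x_i)": p_het * loss_false_exclusion,  # excludes true het donors
        "(x_i, F)": p_hom * loss_false_inclusion,    # wildcard risks inclusions
    }

for d in (0.02, 0.50):  # dropout is in reality a function of peak height
    losses = designation_losses(d)
    best = min(losses, key=losses.get)
    print(f"dropout={d}: choose {best}  {losses}")
```

With low dropout probability the homozygote designation minimizes expected loss; as dropout grows, the wildcard designation takes over, illustrating the threshold behavior the abstract describes.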


Data in Brief | 2016

Expected net gain data of low-template DNA analyses

Simone Gittelson; Carolyn R. Steffen; Michael D. Coble

Low-template DNA analyses are affected by stochastic effects which can produce a configuration of peaks in the electropherogram (EPG) that is different from the genotype of the DNA's donor. A probabilistic and decision-theoretic model can quantify the expected net gain (ENG) of performing a DNA analysis by the difference between the expected value of information (EVOI) and the cost of performing the analysis. This article presents data on the ENG of performing DNA analyses of low-template DNA for a single amplification, two replicate amplifications, and for a second replicate amplification given the result of a first analysis. The data were obtained using the AmpFlSTR Identifiler Plus and Promega PowerPlex 16 HS amplification kits, an ABI 3130xl genetic analyzer, and Applied Biosystems' GeneMapper ID-X software. These data are supplementary to an original research article investigating, from a decision-theoretic point of view, whether a forensic DNA analyst should perform a single DNA analysis or two replicate analyses, entitled "Low-template DNA: a single DNA analysis or two replicates?" (Gittelson et al., 2016) [1].


Forensic Science International-genetics | 2017

The factor of 10 in forensic DNA match probabilities

Simone Gittelson; Tamyra R. Moretti; Anthony J. Onorato; Bruce Budowle; Bruce S. Weir; John Buckleton

An update was performed of the classic experiments that led to the view that profile probability assignments are usually within a factor of 10 of each other. The data used in this study consist of 15 Identifiler loci collected from a wide range of forensic populations. Following Budowle et al. [1], the terms cognate and non-cognate are used. The cognate database is the database from which the profiles are simulated. The profile probability assignment was usually larger in the cognate database. In 44-65% of the cases, the profile probability for 15 loci in the non-cognate database was within a factor of 10 of the profile probability in the cognate database. This proportion was between 60% and 80% when the FBI and NIST data were used as the non-cognate databases. A second experiment compared the match probability assignment using a generalised database and recommendation 4.2 from NRC II (the 4.2 assignment) with a proxy for the matching proportion developed using subpopulation allele frequencies and the product rule. The findings support the conclusion that the 4.2 assignment has a large conservative bias. These results are in agreement with previous research results.
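
The factor-of-10 comparison itself is straightforward to state: compute the product-rule profile probability under each database and check whether the two assignments differ by no more than one order of magnitude. A sketch with invented allele frequencies:

```python
import math

def profile_probability(genotypes, freqs):
    """Product rule: p^2 for homozygotes, 2*p*q for heterozygotes."""
    prob = 1.0
    for (a, b), locus_freqs in zip(genotypes, freqs):
        pa, pb = locus_freqs[a], locus_freqs[b]
        prob *= pa * pa if a == b else 2 * pa * pb
    return prob

genotypes = [("A", "B"), ("C", "C")]                 # two toy loci
cognate = [{"A": 0.10, "B": 0.20}, {"C": 0.30}]      # database of origin
non_cognate = [{"A": 0.15, "B": 0.12}, {"C": 0.25}]  # comparison database

p_cog = profile_probability(genotypes, cognate)
p_non = profile_probability(genotypes, non_cognate)
within_factor_10 = abs(math.log10(p_cog / p_non)) <= 1.0
print(p_cog, p_non, within_factor_10)  # 0.0036 0.00225 True
```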


Journal of Forensic Sciences | 2018

The Probabilistic Genotyping Software STRmix: Utility and Evidence for its Validity

John S. Buckleton; Jo-Anne Bright; Simone Gittelson; Tamyra R. Moretti; Anthony J. Onorato; Frederick R. Bieber; Bruce Budowle; Duncan Taylor

Forensic DNA interpretation is transitioning from manual interpretation based usually on binary decision-making toward computer-based systems that model the probability of the profile given different explanations for it, termed probabilistic genotyping (PG). Decision-making by laboratories to implement probability-based interpretation should be based on scientific principles for validity and on information that supports its utility, such as criteria to support admissibility. The principles behind STRmix™ are outlined in this study and include standard mathematics and modeling of peak heights and variability in those heights. All PG methods generate a likelihood ratio (LR) and require the formulation of propositions. Principles underpinning formulations of propositions include the identification of reasonably assumed contributors. Substantial data have been produced that support the precision, error rate, and reliability of PG, and in particular, STRmix™. A current issue is access to the code and the quality processes used while coding. There are substantial data that describe the performance, strengths, and limitations of STRmix™, one of the available PG software packages.
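
A highly simplified sketch of the continuous PG idea, not of STRmix™ itself: candidate genotype sets are weighted by how well they explain the observed peak heights, and the LR is formed from these weights. All numbers below are invented:

```python
# Invented weights and probabilities; STRmix models peak heights in far
# more detail. w_j approximates P(peak heights | genotype set j).
weights = {"set1": 0.70, "set2": 0.25, "set3": 0.05}

# P(genotype set j | H): the person of interest's genotype appears only
# in set1 under Hp; under Hd the sets carry population probabilities.
p_set_given_hp = {"set1": 1.00, "set2": 0.00, "set3": 0.00}
p_set_given_hd = {"set1": 0.01, "set2": 0.60, "set3": 0.39}

numerator = sum(weights[j] * p_set_given_hp[j] for j in weights)
denominator = sum(weights[j] * p_set_given_hd[j] for j in weights)
print("LR =", numerator / denominator)  # ~4.0
```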


Forensic Science International | 2018

A response to "Likelihood ratio as weight of evidence: A closer look" by Lund and Iyer.

Simone Gittelson; Charles E.H. Berger; Graham Jackson; I.W. Evett; Christophe Champod; Bernard Robertson; James M. Curran; Duncan Taylor; Bruce S. Weir; Michael D. Coble; John Buckleton

Recently, Lund and Iyer (L&I) raised an argument regarding the use of likelihood ratios in court. In our view, their argument is based on a lack of understanding of the paradigm. L&I argue that the decision maker should not accept the expert's likelihood ratio without further consideration. This is agreed by all parties. In normal practice, there is often considerable and proper exploration in court of the basis for any probabilistic statement. We conclude that L&I argue against a practice that does not exist and which no one advocates. Further, we conclude that the most informative summary of evidential weight is the likelihood ratio. We state that this is the summary that should be presented to a court in every scientific assessment of evidential weight, with supporting information about how it was constructed and on what it was based.
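
The paradigm referred to here is the odds form of Bayes' theorem, in which the expert reports the likelihood ratio and the decision maker supplies the prior odds:

```latex
\underbrace{\frac{\Pr(H_p \mid E)}{\Pr(H_d \mid E)}}_{\text{posterior odds}}
=
\underbrace{\frac{\Pr(E \mid H_p)}{\Pr(E \mid H_d)}}_{\text{likelihood ratio}}
\times
\underbrace{\frac{\Pr(H_p)}{\Pr(H_d)}}_{\text{prior odds}}
```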

Collaboration


Dive into Simone Gittelson's collaborations.

Top Co-Authors

John Buckleton

National Institute of Standards and Technology

Michael D. Coble

National Institute of Standards and Technology

Anthony J. Onorato

Federal Bureau of Investigation

Bruce Budowle

University of North Texas Health Science Center

Bruce S. Weir

University of Washington
