Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Colin Aitken is active.

Publication


Featured research published by Colin Aitken.


Archive | 2006

Bayesian Networks and Probabilistic Inference in Forensic Science

F. Taroni; Colin Aitken; Paolo Garbolino; Alex Biedermann

Preface. Foreword.
1. The logic of uncertainty. 1.1 Uncertainty and probability. 1.2 Reasoning under uncertainty. 1.3 Frequencies and probabilities. 1.4 Induction and probability. 1.5 Further readings.
2. The logic of Bayesian networks. 2.1 Reasoning with graphical models. 2.2 Reasoning with Bayesian networks. 2.3 Further readings.
3. Evaluation of scientific evidence. 3.1 Introduction. 3.2 The value of evidence. 3.3 Relevant propositions. 3.4 Pre-assessment of the case. 3.5 Evaluation using graphical models.
4. Bayesian networks for evaluating scientific evidence. 4.1 Issues in one-trace transfer cases. 4.2 When evidence has more than one component: footwear marks evidence. 4.3 Scenarios with more than one stain.
5. DNA evidence. 5.1 DNA likelihood ratio. 5.2 Network approaches to the DNA likelihood ratio. 5.3 Missing suspect. 5.4 Analysis when the alternative proposition is that a sibling of the suspect left the stain. 5.5 Interpretation with more than two propositions. 5.6 Evaluation of evidence with more than two propositions. 5.7 Partial matches. 5.8 Mixtures. 5.9 Relatedness testing. 5.10 Database search. 5.11 Error rates. 5.12 Sub-population and co-ancestry coefficient. 5.13 Further reading.
6. Transfer evidence. 6.1 Assessment of transfer evidence under crime level propositions. 6.2 Assessment of transfer evidence under activity level propositions. 6.3 Cross- or two-way transfer of evidential material. 6.4 Increasing the level of detail of selected nodes. 6.5 Missing evidence.
7. Aspects of the combination of evidence. 7.1 Introduction. 7.2 A difficulty in combining evidence. 7.3 The likelihood ratio and the combination of evidence. 7.4 Combination of distinct items of evidence.
8. Pre-assessment. 8.1 Introduction. 8.2 Pre-assessment. 8.3 Pre-assessment for a fibres scenario. 8.4 Pre-assessment in a cross-transfer scenario. 8.5 Pre-assessment with multiple propositions. 8.6 Remarks.
9. Qualitative and sensitivity analyses. 9.1 Qualitative probability models. 9.2 Sensitivity analyses.
10. Continuous networks. 10.1 Introduction. 10.2 Samples and estimates. 10.3 Measurements. 10.4 Use of a continuous distribution which is not normal. 10.5 Appendix.
11. Further applications. 11.1 Offender profiling. 11.2 Decision making.
Bibliography. Author Index. Subject Index.
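
The evidential calculus running through these chapters is the odds form of Bayes' theorem: posterior odds equal prior odds multiplied by the likelihood ratio V = P(E|Hp)/P(E|Hd). A minimal sketch of that update, using invented prior and likelihood-ratio values, is:

```python
# Minimal sketch of the odds form of Bayes' theorem:
# posterior odds = prior odds x likelihood ratio. The numbers are illustrative only.

def posterior_probability(prior_prob: float, likelihood_ratio: float) -> float:
    """Update P(Hp) given a likelihood ratio LR = P(E|Hp) / P(E|Hd)."""
    prior_odds = prior_prob / (1.0 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

if __name__ == "__main__":
    # Hypothetical case: prior probability 0.01 that the suspect is the source,
    # and evidence with a likelihood ratio of 1000.
    print(posterior_probability(0.01, 1000.0))  # ~0.910
```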


Biometrics | 1989

The evolving role of statistical assessments as evidence in the courts

Colin Aitken; Stephen E. Fienberg



Archive | 2010

Data analysis in forensic science: a Bayesian decision perspective

Franco Taroni; Silvia Bozza; Alex Biedermann; Paolo Garbolino; Colin Aitken

Foreword. Preface.
I The Foundations of Inference and Decision in Forensic Science.
1 Introduction. 1.1 The Inevitability of Uncertainty. 1.2 Desiderata in Evidential Assessment. 1.3 The Importance of the Propositional Framework and the Nature of Evidential Assessment. 1.4 From Desiderata to Applications. 1.5 The Bayesian Core of Forensic Science. 1.6 Structure of the Book.
2 Scientific Reasoning and Decision Making. 2.1 Coherent Reasoning Under Uncertainty. 2.2 Coherent Decision Making Under Uncertainty of Reasoning. 2.3 Scientific Reasoning as Coherent Decision Making. 2.4 Forensic Reasoning as Coherent Decision Making.
3 Concepts of Statistical Science and Decision Theory. 3.1 Random Variables and Distribution Functions. 3.2 Statistical Inference and Decision Theory. 3.3 The Bayesian Paradigm. 3.4 Bayesian Decision Theory. 3.5 R Code.
II Forensic Data Analysis.
4 Point Estimation. 4.1 Introduction. 4.2 Bayesian Decision for a Proportion. 4.3 Bayesian Decision for a Poisson Mean. 4.4 Bayesian Decision for Normal Mean. 4.5 R Code.
5 Credible Intervals. 5.1 Introduction. 5.2 Credible Intervals. 5.3 Decision-Theoretic Evaluation of Credible Intervals. 5.4 R Code.
6 Hypothesis Testing. 6.1 Introduction. 6.2 Bayesian Hypothesis Testing. 6.3 One-sided testing. 6.4 Two-Sided Testing. 6.5 R Code.
7 Sampling. 7.1 Introduction. 7.2 Sampling Inspection. 7.3 Graphical Models for Sampling Inspection. 7.4 Sampling Inspection under a Decision-Theoretic Approach. 7.5 R Code.
8 Classification of Observations. 8.1 Introduction. 8.2 Standards of Coherent Classification. 8.3 Comparing Models using Discrete Data. 8.4 Comparison of Models using Continuous Data. 8.5 Non-Normal Distributions and Cocaine on Bank Notes. 8.6 A note on Multivariate Continuous Data. 8.7 R Code.
9 Bayesian Forensic Data Analysis: Conclusions and Implications. 9.1 Introduction. 9.2 What is the Past and Current Position of Statistics in Forensic Science? 9.3 Why Should Forensic Scientists Conform to a Bayesian Framework for Inference and Decision Making? 9.4 Why Regard Probability as a Personal Degree of Belief? 9.5 Why Should Scientists be Aware of Decision Analysis? 9.6 How to Implement Bayesian Inference and Decision Analysis?
A Discrete Distributions. B Continuous Distributions.
Bibliography. Author Index. Subject Index.
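
Chapters 4 to 8 are built around conjugate Bayesian updating followed by a decision-theoretic step. As a minimal illustration of the kind of calculation in Chapter 4.2 (Bayesian Decision for a Proportion), the sketch below, which is not taken from the book, updates a Beta prior with binomial data and reports the posterior mean (the Bayes estimate under quadratic loss) and a credible interval:

```python
# Sketch of a Bayesian point estimate for a proportion, assuming a conjugate Beta
# prior and quadratic loss; the prior and the data are invented for illustration.
from scipy import stats

def beta_posterior(successes: int, trials: int, a_prior: float = 1.0, b_prior: float = 1.0):
    """Return the Beta posterior for a binomial proportion under a Beta(a, b) prior."""
    return stats.beta(a_prior + successes, b_prior + trials - successes)

if __name__ == "__main__":
    # Hypothetical inspection: 18 of 20 sampled units test positive.
    post = beta_posterior(successes=18, trials=20)
    print("posterior mean (Bayes estimate under quadratic loss):", post.mean())
    print("95% credible interval:", post.interval(0.95))
```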


Journal of Forensic Sciences | 2007

A Two‐Level Model for Evidence Evaluation

Colin Aitken; Grzegorz Zadora; David Lucy

A random effects model using two levels of hierarchical nesting has been applied to the calculation of a likelihood ratio as a solution to the problem of comparison between two sets of replicated multivariate continuous observations where it is unknown whether the sets of measurements shared a common origin. Replicate measurements from a population of such measurements allow the calculation of both within‐group and between‐group variances/covariances. The within‐group distribution has been modelled assuming a Normal distribution, and the between‐group distribution has been modelled using a kernel density estimation procedure. A graphical method of estimating the dependency structure among the variables has been used to reduce this highly multivariate problem to several problems of lower dimension. The approach was tested using a database comprising measurements of eight major elements from each of four fragments from each of 200 glass objects and found to perform well compared with previous approaches, achieving a 15.2% false‐positive rate and a 5.5% false‐negative rate. The modelling was then applied to two examples of casework in which glass found at the scene of the criminal activity has been compared with that found in association with a suspect.
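
A heavily simplified, univariate sketch of this two-level likelihood ratio is given below. It replaces the paper's kernel density estimate for the between-group distribution with a normal distribution, treats the variance components as known, and uses invented numbers throughout:

```python
# Much-simplified univariate sketch of a two-level likelihood ratio:
# within-group variation is normal and the between-group distribution is taken to be
# normal here (the paper uses a kernel density estimate). All numbers are illustrative.
import numpy as np
from scipy.stats import norm, multivariate_normal

def two_level_lr(mean_control, n_control, mean_recovered, n_recovered,
                 within_var, between_var, pop_mean):
    """LR = f(control, recovered | same source) / [f(control) * f(recovered)]."""
    v1 = within_var / n_control + between_var
    v2 = within_var / n_recovered + between_var
    # Same source: both group means share one latent source mean, which induces a
    # covariance between them equal to the between-group variance.
    joint_cov = np.array([[v1, between_var], [between_var, v2]])
    numerator = multivariate_normal.pdf([mean_control, mean_recovered],
                                        mean=[pop_mean, pop_mean], cov=joint_cov)
    # Different sources: the two group means are independent draws from the population.
    denominator = (norm.pdf(mean_control, loc=pop_mean, scale=np.sqrt(v1)) *
                   norm.pdf(mean_recovered, loc=pop_mean, scale=np.sqrt(v2)))
    return numerator / denominator

if __name__ == "__main__":
    # Hypothetical measurements on an arbitrary standardised scale.
    print(two_level_lr(mean_control=1.2, n_control=4, mean_recovered=1.25, n_recovered=4,
                       within_var=0.01, between_var=1.0, pop_mean=0.0))
```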


Journal of Forensic Sciences | 1999

Sampling-how big a sample?

Colin Aitken

It is thought that, in a consignment of discrete units, a certain proportion of the units contain illegal material. A sample of the consignment is to be inspected. Various methods for the determination of the sample size are compared. The consignment will be considered as a random sample from some super-population of units, a certain proportion of which contain drugs. For large consignments, a probability distribution, known as the beta distribution, for the proportion of the consignment which contains illegal material is obtained. This distribution is based on prior beliefs about the proportion. Under certain specific conditions the beta distribution gives the same numerical results as an approach based on the binomial distribution. The binomial distribution provides a probability for the number of units in a sample which contain illegal material, conditional on knowing the proportion of the consignment which contains illegal material. This is in contrast to the beta distribution which provides probabilities for the proportion of a consignment which contains illegal material, conditional on knowing the number of units in the sample which contain illegal material. The interpretation when the beta distribution is used is much more intuitively satisfactory. It is also much more flexible in its ability to cater for prior beliefs which may vary given the different circumstances of different crimes. For small consignments, a distribution, known as the beta-binomial distribution, for the number of units in the consignment which are found to contain illegal material, is obtained, based on prior beliefs about the number of units in the consignment which are thought to contain illegal material. As with the beta and binomial distributions for large samples, it is shown that, in certain specific conditions, the beta-binomial and hypergeometric distributions give the same numerical results. However, the beta-binomial distribution, as with the beta distribution, has a more intuitively satisfactory interpretation and greater flexibility. The beta and the beta-binomial distributions provide methods for the determination of the minimum sample size to be taken from a consignment in order to satisfy a certain criterion. The criterion requires the specification of a proportion and a probability.
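
For large consignments the criterion can be read directly off the Beta posterior: find the smallest sample size n such that, if all n inspected units contain illegal material, the posterior probability that the consignment proportion exceeds a specified value is at least the required probability. A sketch, assuming a uniform Beta(1, 1) prior purely for illustration:

```python
# Sketch of the sample-size criterion: smallest n such that, if every one of the n
# inspected units contains illegal material, the Beta posterior gives probability at
# least p that the consignment proportion exceeds theta0. Uniform prior assumed here.
from scipy.stats import beta

def min_sample_size(theta0: float, p: float, a: float = 1.0, b: float = 1.0) -> int:
    """Smallest n with P(theta > theta0 | n positives out of n, Beta(a, b) prior) >= p."""
    n = 0
    while beta.sf(theta0, a + n, b) < p:
        n += 1
    return n

if __name__ == "__main__":
    # E.g. to be 95% sure that more than half the consignment contains drugs,
    # inspect 4 units (all of which must test positive) under a uniform prior.
    print(min_sample_size(theta0=0.5, p=0.95))  # 4
```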


Journal of Forensic Sciences | 2013

Information-theoretical assessment of the performance of likelihood ratio computation methods

Daniel Ramos; Joaquin Gonzalez-Rodriguez; Grzegorz Zadora; Colin Aitken

Performance of likelihood ratio (LR) methods for evidence evaluation has been represented in the past using, for example, Tippett plots. We propose empirical cross‐entropy (ECE) plots as a metric of accuracy based on the statistical theory of proper scoring rules, interpretable as information given by the evidence according to information theory, which quantify calibration of LR values. We present results with a case example using a glass database from real casework, comparing performance with both Tippett and ECE plots. We conclude that ECE plots allow clearer comparisons of LR methods than previous metrics, allowing a theoretical criterion to determine whether a given method should be used for evidence evaluation or not, which is an improvement over Tippett plots. A set of recommendations for the use of the proposed methodology by practitioners is also given.
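
A sketch of the empirical cross-entropy computation at one prior probability is shown below; an ECE plot repeats this over a range of prior log-odds. The formula is the standard logarithmic-scoring-rule form of ECE, and the LR values are invented:

```python
# Sketch of empirical cross-entropy (ECE) at a single prior probability, in bits.
# lr_same: LR values from comparisons where Hp (same source) is true;
# lr_diff: LR values from comparisons where Hd (different source) is true.
import numpy as np

def empirical_cross_entropy(lr_same, lr_diff, prior_p):
    lr_same = np.asarray(lr_same, dtype=float)
    lr_diff = np.asarray(lr_diff, dtype=float)
    prior_odds = prior_p / (1.0 - prior_p)
    term_p = np.mean(np.log2(1.0 + 1.0 / (lr_same * prior_odds)))  # penalty when Hp is true
    term_d = np.mean(np.log2(1.0 + lr_diff * prior_odds))          # penalty when Hd is true
    return prior_p * term_p + (1.0 - prior_p) * term_d

if __name__ == "__main__":
    # Hypothetical LR values: large for same-source comparisons, small for different-source.
    print(empirical_cross_entropy(lr_same=[50, 200, 8], lr_diff=[0.02, 0.5, 0.1], prior_p=0.5))
```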


Computational Statistics & Data Analysis | 2006

Evaluation of transfer evidence for three-level multivariate data with the use of graphical models

Colin Aitken; David Lucy; G. Zadora; J. M. Curran

The evaluation of measurements on characteristics of trace evidence found at a crime scene and on a suspect is an important part of forensic science. There are commonly three levels of variation in such evidence. First, there is measurement error on the individual items. Then, individual items are gathered in groups and there is variation within and between groups. There are also commonly many variables which can be measured on the items, such as elemental or chemical composition. There are usually inadequate data to enable proper estimation of a full parametric model to be made. A method is described here for evaluating the evidence by means of a likelihood ratio. The likelihood ratio compares the probability of the measurements on the evidence assuming a common source for evidence from the crime scene and evidence associated with the suspect with the probability of the measurements on the evidence assuming different sources for the crime scene and suspect evidence. It is a well-documented measure of the value of the evidence. A three-level model for multivariate normal data is described. The structure of the data is determined through consideration of the inverse of the covariance matrix from which a graphical model may be determined. This enables a considerable reduction in the parameterisation from the full model whilst retaining a credible dependence structure, not recognised in a model which assumes full independence. The model for the structure of the data thus obtained provides a novel solution to a problem in forensic science where full independence is often assumed for multivariate data. The performances of the derived models are investigated on a data set provided by a European forensic science laboratory.
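
The structure-selection step, inspecting the inverse of the covariance matrix to decide which dependencies between variables to retain, can be sketched as follows. The synthetic data and the 0.1 partial-correlation threshold are purely illustrative and are not taken from the paper:

```python
# Sketch of structure selection from the precision matrix: small partial correlations
# are treated as conditional independences, suggesting edges to drop from the graph.
import numpy as np

def partial_correlations(data: np.ndarray) -> np.ndarray:
    """Partial correlation matrix derived from the inverse sample covariance matrix."""
    precision = np.linalg.inv(np.cov(data, rowvar=False))
    d = np.sqrt(np.diag(precision))
    return -precision / np.outer(d, d)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical data: three variables driven by a common factor plus one independent one.
    z = rng.normal(size=(500, 1))
    data = np.hstack([z + 0.3 * rng.normal(size=(500, 1)),
                      z + 0.3 * rng.normal(size=(500, 1)),
                      2 * z + 0.3 * rng.normal(size=(500, 1)),
                      rng.normal(size=(500, 1))])
    pcor = partial_correlations(data)
    edges = np.abs(pcor) > 0.1     # keep an edge where the partial correlation is non-negligible
    np.fill_diagonal(edges, False)
    print(np.round(pcor, 2))
    print(edges)
```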


Journal of The Forensic Science Society | 1989

Probabilistic reasoning in evidential assessment

Colin Aitken; Alexander Gammerman

Computational procedures, based on probabilities for incorporating and assessing evidence obtained in the course of a criminal investigation, are described. These procedures depend on a network approach. Suspects and individual pieces of evidence are the nodes of the network. Causal connections are the links of the network. Various applications of the procedures are discussed in the context of one artificial example.
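
A toy version of such a network, with a hypothesis node, an intermediate transfer node and an evidence node linked causally, can be queried by brute-force enumeration. The conditional probabilities below are invented for illustration only:

```python
# Toy network: offender -> transfer -> evidence found, queried by enumeration.
from itertools import product

# P(offender), P(transfer | offender), P(evidence found | transfer); invented values.
p_offender = 0.10
p_transfer_given = {True: 0.80, False: 0.05}   # False branch allows innocent acquisition
p_found_given = {True: 0.90, False: 0.10}

def joint(offender: bool, transfer: bool, found: bool) -> float:
    p = p_offender if offender else 1 - p_offender
    p *= p_transfer_given[offender] if transfer else 1 - p_transfer_given[offender]
    p *= p_found_given[transfer] if found else 1 - p_found_given[transfer]
    return p

# P(offender | evidence found), summing out the unobserved transfer node.
numer = sum(joint(True, t, True) for t in (True, False))
denom = sum(joint(o, t, True) for o, t in product((True, False), repeat=2))
print(numer / denom)  # ~0.37 with these invented numbers
```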


Theoretical Population Biology | 2003

A graphical model for the evaluation of cross-transfer evidence in DNA profiles

Colin Aitken; Franco Taroni; Paolo Garbolino

The role of graphical models in the assessment of transfer evidence is described with particular reference to the role of cross-transfer evidence. The issues involved in the determination of factors (nodes), associations (links) and probabilities to be included are discussed. Four types of subjective probabilities are of particular interest: those for transfer, persistence and recovery; innocent acquisition; relevance; innocent presence. Examples are given to illustrate the roles of various aspects of the suspect's and victim's lifestyles and the investigation of the evidence found on the suspect and victim in assessing the probability of the ultimate issue, that the suspect committed the crime.


Journal of Forensic Sciences | 2005

Decision analysis in forensic science.

Franco Taroni; Silvia Bozza; Colin Aitken

Forensic scientists are routinely faced with the problems of making decisions under circumstances of uncertainty (i.e., to perform or not perform a test). A decision making model in forensic science is proposed, illustrated with an example from the field of forensic genetics. The approach incorporates available evidence and associated uncertainties with the assessment of utilities (or desirability of the consequences). The paper examines a general example for which identification will be made of the decision maker, the possible actions, the uncertain states of nature, the possible source of evidence and the kind of utility assessments required. It is argued that a formal approach can help to clarify the decision process and give a coherent means of combining elements to reach a decision.
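
The core calculation behind such a model is expected utility: each possible action is scored by summing utility times probability over the uncertain states of nature, and the action with the highest expected utility is chosen. A sketch with invented probabilities and utilities, for a hypothetical decision on whether to perform an additional test:

```python
# Sketch of an expected-utility decision; all probabilities and utilities are invented.
state_probs = {"suspect_is_source": 0.7, "suspect_not_source": 0.3}

# utilities[action][state]: desirability of each consequence on an arbitrary 0-1 scale
utilities = {
    "perform_test": {"suspect_is_source": 0.9, "suspect_not_source": 0.8},
    "do_not_test":  {"suspect_is_source": 0.6, "suspect_not_source": 1.0},
}

def expected_utility(action: str) -> float:
    return sum(state_probs[s] * utilities[action][s] for s in state_probs)

for action in utilities:
    print(action, round(expected_utility(action), 3))
print("chosen action:", max(utilities, key=expected_utility))
```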

Collaboration


Dive into Colin Aitken's collaborations.

Top Co-Authors

Paolo Garbolino
Università Iuav di Venezia

Grzegorz Zadora
University of Silesia in Katowice

Daniel Ramos
Autonomous University of Madrid

F. Taroni
University of Lausanne