Publication


Featured research published by Dan E. Krane.


Journal of Forensic Sciences | 2015

Letter to the Editor—Context Management Toolbox: A Linear Sequential Unmasking (LSU) Approach for Minimizing Cognitive Bias in Forensic Decision Making.

Itiel E. Dror; William C. Thompson; Christian A. Meissner; Irv Kornfield; Dan E. Krane; Michael J. Saks; D. Michael Risinger

J Forensic Sci, July 2015, Vol. 60, No. 4, doi: 10.1111/1556-4029.12805. Available online at: onlinelibrary.wiley.com

Sir,

The 2009 NAS report (1) criticized forensic scientists for making insufficient efforts to reduce their vulnerability to cognitive and contextual bias. Over the past few years, however, the field has begun to take steps to address this issue. There have been major workshops on cognitive bias, and the Organization of Scientific Area Committees (OSAC; see http://www.nist.gov/forensics/osac/hfc.cfm), as well as the National Commission on Forensic Science, have created committees on Human Factors that are specifically charged with examining this issue. A number of tools and methods for minimizing bias are under consideration. Some of these tools have already been implemented in a few forensic laboratories. In general, these tools are designed to protect and enhance the independence of mind of forensic examiners, particularly those who rely on subjective judgment to make their decisions.

Several types of contextual information are of concern, as illustrated in Fig. 1. We organize them into a taxonomy of five levels (based on a four-level taxonomy suggested by Stoel et al. [2]). The five-level taxonomy differentiates task-irrelevant information that may be conveyed to an analyst by the trace evidence itself (Level 1), the reference samples (Level 2), the case information (Level 3), examiners’ base rate expectations that arise from their experience (e.g., when the examiner expects a particular result—Level 4), and organizational and cultural factors (Level 5).

A variety of tools are available for addressing cognitive bias. Different tools are useful for managing exposure to each level of task-irrelevant information. For example, the use of case managers (3,4) is a straightforward tool for dealing with bias from case information (Level 3). In general, these procedures are designed to prevent contextual bias by protecting the examiner from exposure to task-irrelevant information. However, it is important to note that some types of information, while potentially biasing, may also be task relevant (5). These types of biasing information are more difficult to deal with. For example, in some instances, evidence that analysts must examine to perform their duties may contain information that is potentially biasing.

This can pertain to cases in which Level 1 information, the trace evidence being evaluated, contains contextual information (e.g., blood spatter patterns that contain information about the nature of the crime, or handwriting and voice samples in which the meaning of the words is potentially biasing). Reference samples are another example of relevant material that is also potentially biasing (Level 2). These samples are clearly relevant because the analyst must compare them to trace evidence samples to determine whether they are similar enough to conclude that they come from the same source. But it is possible that an analyst’s interpretation of the trace evidence might inadvertently be influenced by knowing the characteristics of the reference samples.

The authors of this letter are drawn primarily from members of the OSAC Human Factors Committee, the Human Factors Subcommittee of the National Commission on Forensic Science, and authors of the original Sequential Unmasking letter.
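The five-level taxonomy pairs naturally with level-specific bias-management tools. As a minimal illustrative sketch (not from the letter itself), the following models the taxonomy as an enumeration and maps each level to one candidate tool; apart from the case-manager pairing for Level 3, which the letter names, the pairings are assumptions for illustration.

```python
# Illustrative sketch: the five-level taxonomy of task-irrelevant contextual
# information, with one hypothetical bias-management tool per level. Only the
# case-manager pairing for Level 3 comes directly from the letter.
from enum import IntEnum

class ContextLevel(IntEnum):
    TRACE_EVIDENCE = 1          # information conveyed by the trace evidence itself
    REFERENCE_SAMPLES = 2       # information conveyed by reference samples
    CASE_INFORMATION = 3        # information about the case
    BASE_RATE_EXPECTATIONS = 4  # expectations arising from the examiner's experience
    ORGANIZATIONAL_FACTORS = 5  # organizational and cultural factors

TOOLS = {
    ContextLevel.TRACE_EVIDENCE: "masking of biasing content in the trace itself",
    ContextLevel.REFERENCE_SAMPLES: "linear sequential unmasking (LSU)",
    ContextLevel.CASE_INFORMATION: "case manager model",
    ContextLevel.BASE_RATE_EXPECTATIONS: "blind verification / evidence lineups",
    ContextLevel.ORGANIZATIONAL_FACTORS: "laboratory culture and training reforms",
}

if __name__ == "__main__":
    for level in ContextLevel:
        print(f"Level {level.value} ({level.name}): {TOOLS[level]}")
```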


Science | 2009

Time for DNA Disclosure

Dan E. Krane; V. Bahn; David J. Balding; B. Barlow; H. Cash; B. L. Desportes; P. D'Eustachio; Keith Devlin; Travis E. Doom; Itiel E. Dror; Simon Ford; C. Funk; Jason R. Gilder; G. Hampikian; Keith Inman; Allan Jamieson; P. E. Kent; Roger Koppl; Irving L. Kornfield; Sheldon Krimsky; Jennifer L. Mnookin; Laurence D. Mueller; E. Murphy; David R. Paoletti; Dmitri A. Petrov; Michael L. Raymer; D. M. Risinger; Alvin E. Roth; Norah Rudin; W. Shields

The legislation that established the U.S. National DNA Index System (NDIS) in 1994 explicitly anticipated that database records would be available for purposes of research and quality control “if personally identifiable information is removed” [42 U.S.C. Sec 14132(b)(3)(D)]. However, the Federal


International Journal of Legal Medicine | 2009

Comments on the review of low copy number testing

Jason R. Gilder; Roger Koppl; Irving L. Kornfield; Dan E. Krane; Laurence D. Mueller; William C. Thompson

Dear Sir,

A challenge to the reliability of low copy number (LCN) DNA profiling in the trial of Sean Hoey in Belfast Crown Court in Northern Ireland (R v Hoey [2007] NICC 49, 20 December, 2007) prompted the UK’s new Forensic Science Regulator (Andrew Rennison) to commission a review of low template DNA profiling techniques. That review [2], conducted by Professor Brian Caddy (with the assistance of Dr. Adrian Linacre and Dr. Graham Taylor), was released on 12 April, 2008 and concluded that LCN DNA profiling is “robust” and “fit for purpose.” Yet, the review accepts that the evidence presented in Sean Hoey’s trial was insufficient to establish the validity of the technique. It also enumerates 21 recommendations for specific improvements that should be undertaken to improve the methodology, including such basic steps as the development of a consensus on the interpretation of test results and efforts to establish “best practices” for interpretation.

We believe the conclusions of the review are inconsistent with its recommendations in a number of respects. For example, it is difficult to see how a forensic technique could be deemed adequately validated for use in the courtroom when there is not yet a consensus on how its results should be interpreted. The review thus raises important issues about what it means for a forensic science technique to be validated. It also establishes grounds for concern about the way that LCN DNA test results have been interpreted in earlier cases.

We are concerned that the review team relied only on input regarding the merits of LCN approaches from organizations that are dedicated to promoting its use by law enforcement. Consultation with known critics of the technique (or even a review of their published works) would have provided the reviewers with a broader perspective of what work remains to be done before the approach can become generally accepted within the international scientific community. There are in fact things about LCN approaches upon which the reviewers and critics do agree. For instance, the caution that “[p]ublicizing the potential of the application of LCN typing without describing its limitations may cause misunderstanding” [1] is consistent with the review’s recommendations 1, 3, and 13. But given the conclusion that “[t]he method cannot be used for exculpatory purposes” [1], the review’s ultimate conclusion that LCN testing is “fit for purpose” leaves the important but unanswered question of “what is that purpose?”


Science & Justice | 2014

Regarding Champod, editorial: “Research focused mainly on bias will paralyse forensic science”

D. Michael Risinger; William C. Thompson; Allan Jamieson; Roger Koppl; Irving L. Kornfield; Dan E. Krane; Jennifer L. Mnookin; Robert Rosenthal; Michael J. Saks; Sandy L. Zabell

Dear Dr. Barron,

Regarding Champod, editorial: “Research focused mainly on bias will paralyse forensic science.”

In 2009, a report of the (U.S.) National Research Council declared that “[t]he forensic science disciplines are just beginning to become aware of contextual bias and the dangers it poses” [1]. The report called for additional research and discussion of how best to address this problem. Since that time, the literature on the topic of contextual bias in forensic science has begun to expand, and some laboratories are beginning to change procedures to address the problem. In his recent editorial in Science and Justice, Christophe Champod suggests that this trend has gone too far and threatens to “paralyse forensic science” [2]. We think his arguments are significantly overstated and deserve forceful refutation, lest they stand in the way of meaningful progress on this important issue.

Dr. Champod opens by acknowledging that forensic scientists are vulnerable to bias. He says that he does not “want to minimize the importance of [research on this issue] and how it contributes to a better management of forensic science…” He continues by asking “...but should research remain focused on processes, or should we not move on to the basic understanding of the forensic traces?” He then comments on risks of “being focused on bias only.”

By framing the matter in this way, Dr. Champod creates a false dichotomy, and implies facts about the current state of funding and research that are simply not the case. He seems to be saying that currently all or most research funding and publication is directed toward problems of bias, and little or none toward “basic understanding of the forensic traces.” Dr. Champod should know this is not the case, however, since (among other things) he is a co-author of a marvelous recently-released empirical study on fingerprint analysis funded by the (U.S.) National Institute of Justice [3]. Any perusal of NIJ grants, or the contents of leading forensic science journals, would not support Dr. Champod’s apparent view of the current research world.

It would of course be a mistake for all of the available funding for research on forensic science topics to be devoted to the potential effects of bias, but again, this is neither the case currently nor is it in our opinion likely to become the case in the future. To discuss the risks of focusing “on bias only” is simply to raise a straw man when no one, not even the most ardent supporter of sequential unmasking or other approaches to the control of biasing information in forensic science practice, suggests focusing research “on bias only.”

That said, we do believe that the research record both in forensic science and in a variety of other scientific areas has reached a point that clearly establishes the pressing need for all forensic areas to address the problem of contextual bias. As Andrew Rennison, who was then the forensic science regulator for England and Wales, told the plenary session of the American Academy of Forensic Sciences in February, “we don’t need more research on this issue, what we need is action.” This is not to say that further research on bias and its effects is not valuable, and


Forensic Science Policy & Management: An International Journal | 2015

Do Observer Effects Matter? A Comment on Langenburg, Bochet, and Ford

Roger Koppl; D. Charlton; Irving L. Kornfield; Dan E. Krane; M. Risinger; C. Robertson; Michael J. Saks; William C. Thompson

ABSTRACT We identify methodological problems in Langenburg et al. (2014), which undermine its conclusions about the size of the observer effect problem and the importance of sequential unmasking as a solution. The scoring method of Langenburg et al. (2014) appears to be subjective. The classification of cases is not congruent with the three keys to observer effects in forensic science: the analyst’s state of expectation, the analyst’s state of desire, and the degree of ambiguity in the evidence being examined. Nor does the paper adequately support its claim, “[I]t has been asserted that the high context/high interaction cases are essentially where there is the most danger of bias.” While the paper tends to minimize concern over observer effects, the evidence in it seems to support the view that fingerprint analysts look to contextual information to help them make decisions.
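The three keys named above (expectation, desire, and ambiguity) invite a simple formalization of when observer-effect risk is concentrated. The sketch below is a hypothetical illustration only, not from the comment: the 0-to-1 scales, the threshold, and the rule that all three keys must be elevated are assumptions made for this example.

```python
# Hypothetical sketch: scoring a case along the three keys to observer
# effects (expectation, desire, ambiguity). Scales and threshold are
# illustrative assumptions, not measurements from the literature.
from dataclasses import dataclass

@dataclass(frozen=True)
class ObserverEffectRisk:
    expectation: float  # strength of the analyst's expected result, 0..1
    desire: float       # strength of the analyst's preferred result, 0..1
    ambiguity: float    # ambiguity of the evidence being examined, 0..1

    def is_high_risk(self, threshold: float = 0.5) -> bool:
        """Treat risk as concentrated only when all three keys are elevated."""
        return min(self.expectation, self.desire, self.ambiguity) >= threshold

# An unambiguous comparison stays low-risk even under strong expectations.
assert not ObserverEffectRisk(0.9, 0.9, 0.1).is_high_risk()
assert ObserverEffectRisk(0.8, 0.7, 0.6).is_high_risk()
```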


Journal of Forensic Sciences | 2018

Letter to the Editor—Appropriate Standards for Verification and Validation of Probabilistic Genotyping Systems

Nathaniel Adams; Roger Koppl; Dan E. Krane; William C. Thompson; Sandy L. Zabell

Sir,

As the President’s Council of Advisors on Science and Technology (PCAST) recently noted, the interpretation of mixed DNA samples can be extremely difficult, particularly when there is uncertainty about the number of contributors and whether all of the contributors’ alleles have been detected (1). One promising way to address this problem is the development of automated systems for probabilistic genotyping (PG) (2), but how will we know that these automated systems work properly? If conventional approaches that rely on human judgment are not up to the task, how will we assess the validity of PG systems? These are essential questions for forensic scientists to consider as standards for DNA testing are developed through groups such as OSAC and SWGDAM. They will also be important for courts to consider when they assess the admissibility of evidence generated by PG systems.

We urge forensic scientists interested in these questions to pay close attention to the standards for software validation that have been developed by the Institute of Electrical and Electronics Engineers (IEEE). Software-based PG approaches are necessarily rooted in collaboration between experts in the areas of molecular biology, population genetics, statistics, forensic science, computer science, and software engineering. While it is important to consider the perspectives of all of these disciplines on the validation issue, we think that the perspectives of software engineers are particularly important. Decades of experience with software failures have led to established practices for what is commonly known as verification and validation (V&V) of software. We urge that those practices be followed when evaluating PG systems.

In the world of software engineering, verification and validation entail “evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements” (3). IEEE Standard 1012-2012, IEEE Standard for System and Software Verification and Validation (4), spells out universally applicable and broadly accepted software V&V standards. It requires that each software component be assigned an integrity level that increases from 1 to 4 depending on the consequences of a failure. Consequences are categorized as “negligible,” “minor,” “critical” (causing “major and permanent injury, partial loss of mission, major system damage, or major financial or social loss”), or “catastrophic” (causing “loss of human life, complete mission failure, loss of system security and safety, or extensive financial or social loss”). As the “criticality” of software increases across these integrity levels, the intensity and rigor of the V&V tasks required by the standards also increase. A PG system used to generate evidence in criminal cases should clearly be assigned a very high integrity level and hence would warrant intense and rigorous V&V under these standards.

As the integrity level increases, it also becomes more important that V&V be independent of software development. IEEE Standard 1012-2012 describes three distinct dimensions of independence in the V&V process. Technical independence involves utilizing “personnel who are not involved in the development of the software” (5). Managerial independence is accomplished when V&V responsibilities are administered by an organization that is separate from the organizations that develop and manage the software system. Financial independence requires that “control of the V&V budget be vested in an organization independent of the development organization” (5). As a general rule of thumb, software developers working in areas as varied as word processing and satellite communications expect that 10–50% of their budget is set aside for V&V processes; for example, 35% of IBM’s budget for developing the Space Shuttle’s flight software was allocated to its V&V team (6).

Software defects can be caused by flaws in both the design and the implementation (coding) of a system or component. For PG systems, there is a multitude of potential points of failure that warrant independent evaluation. For example, is the modeling of stutter artifacts used the best available, coded as designed, or even appropriate to the problem? Does the PG algorithm systematically favor inclusions? How likely are false negatives and positives? Would outside experts agree with the software’s decisions at each stage of analysis? And so on. It is important to know both the conceptual model being used and whether the software implements that model reliably (2). Without independent verification and validation, we can only hope and trust. In any scientific endeavor, it is better to verify and validate.

The failure of PG software developers to complete appropriate V&V processes can translate directly to severe financial hardships or even the loss of liberty or life. Hence, V&V of PG systems requires urgent attention from the forensic science community. Users of PG software should demand that PG developers utilize software industry standards (e.g., IEEE) for the development, documentation, verification, validation, acquisition, use, and maintenance of their systems—in the same way that they expect and demand that companies that provide their reagents and instruments adhere to rigorous quality assurance and quality control practices. Standards-setting bodies like OSAC should incorporate the IEEE standards into their standards for software validation. Accrediting bodies such as ASCLD, ANAB, and UKAS should require accredited laboratories to use only PG systems that can demonstrate compliance with appropriate software industry standards. Defense experts and attorneys should insist on complete documentation of V&V processes for any PG systems used in criminal cases in which they are involved. Criminal defendants should have access to documentation generated during V&V processes. Without independent V&V, courts will only have the self-interested assurance of the software developers themselves that the system works properly. Given the critical importance of PG systems in criminal justice, those assurances are not good enough. History provides many examples of the failure of engineered systems (7), which underlines the importance of validation and

*Co-authors Nathaniel Adams and Dan Krane declare a conflict of interest, in that they are employed by Forensic Bioinformatic Services, Inc., an independent forensic DNA consulting firm.
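The letter’s central mechanism, assigning an integrity level (1 to 4) from the consequence category of a failure and scaling V&V rigor and independence accordingly, can be summarized in a short sketch. The code below is an illustration of that idea under stated assumptions, not an implementation of IEEE 1012-2012: the independence flags and the matching rule are simplifications made for this example.

```python
# Illustrative sketch: consequence categories drive integrity levels, and
# higher levels warrant more independent V&V. Consequence names follow the
# letter's quotations; the independence flags are simplifying assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class IntegrityLevel:
    level: int            # 1 (lowest) to 4 (highest)
    consequence: str      # consequence category of a failure
    independent_vv: bool  # whether independent V&V is expected (assumption)

LEVELS = [
    IntegrityLevel(1, "negligible", False),
    IntegrityLevel(2, "minor", False),
    IntegrityLevel(3, "critical", True),      # e.g., major and permanent injury
    IntegrityLevel(4, "catastrophic", True),  # e.g., loss of human life
]

def classify(consequence: str) -> IntegrityLevel:
    """Return the integrity level for a given consequence category."""
    for lvl in LEVELS:
        if lvl.consequence == consequence:
            return lvl
    raise ValueError(f"unknown consequence category: {consequence!r}")

# A PG system generating evidence in criminal cases risks loss of liberty
# or life on failure, so it would plausibly sit at the top of the scale.
pg = classify("catastrophic")
assert pg.level == 4 and pg.independent_vv
```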


Journal of Forensic Sciences | 2011

Commentary on: Thornton JI. Letter to the editor--a rejection of "working blind" as a cure for contextual bias. J Forensic Sci 2010;55(6):1663.

William C. Thompson; Simon Ford; Jason R. Gilder; Keith Inman; Allan Jamieson; Roger Koppl; Irving L. Kornfield; Dan E. Krane; Jennifer L. Mnookin; D. Michael Risinger; Norah Rudin; Michael J. Saks; Sandy L. Zabell

Sir,

In a recent letter (1) on the subject of contextual bias, Dr. John Thornton criticized what he called the “working blind” approach. According to Thornton, some commentators (he does not say who) have suggested that forensic scientists should know nothing about the case they are working on “apart from that which is absolutely necessary to conduct the indicated analysis and examination.” This “blind” approach is dangerous, Thornton argues, because forensic scientists need to know the facts of a case to make reasonable judgments about what specimens to test and how to test them.

Thornton’s argument is correct, but he is attacking a straw man. As far as we know, no one has suggested that the individuals who decide what specimens to collect at a crime scene, or what analyses and examinations to perform on those specimens, should be blind to the facts of the case. What we, and others, have proposed is that individuals be blind to unnecessary contextual information when performing analytical tests and when making interpretations that require subjective judgment (2–5).

One obvious way for forensic scientists to be “blind” during the analytical and interpretational phases of their work is to separate functions in the laboratory. Under what has been called the case manager approach (2–5), there would be two possible roles that a forensic scientist could perform. The case manager would “communicate with police officers and detectives, participate in decisions about what specimens to collect at crime scenes and how to test those specimens, and manage the flow of work to the laboratory” (5). The analyst would perform analytical tests and comparisons on specimens submitted to the laboratory in accordance with the instructions of the case manager. Under this model, the analyst can be blind to unnecessary contextual facts, while the case manager remains fully informed. A well-trained examiner could perform either role on different cases. The roles could be rotated among laboratory examiners to allow the laboratory access to the full breadth of expertise available; this would also allow the examiners to acquire and maintain a diversity of skills.

Some of us have proposed a procedure called sequential unmasking as a means of minimizing contextual bias (6–8). Thornton mentions sequential unmasking but has not described it correctly. The purpose of sequential unmasking is not to provide analysts an opportunity to “determine whether tests that they have already run have been appropriate” (1). The purpose of sequential unmasking is to protect analysts from being biased unintentionally by information irrelevant to the exercise of their expertise or information that may have avoidable biasing effects if seen too early in the process of analysis. As an illustration, we presented a protocol that would prevent a DNA analyst from being influenced inappropriately by knowledge of reference profiles while making critical subjective judgments about the interpretation of evidentiary profiles. Aspects of this particular sequential unmasking approach have already been adopted by some laboratories in the U.S. in accordance with 2010 SWGDAM guideline 3.6.1, which states: “to the extent possible, DNA typing results from evidentiary samples are interpreted before comparison with any known samples, other than those of assumed contributors” (http://www.fbi.gov/about-us/lab/codis/swgdaminterpretation-guidelines). However, the approach is by no means limited to DNA. We believe similar sequential unmasking protocols can and should be developed for other forensic science disciplines.

Sequential unmasking is not a call for uninformed decision making. We believe that analysts should have access to whatever information is actually necessary to conduct a thorough and appropriate analysis at whatever point that information becomes necessary. We recognize that difficult decisions will need to be made about what information is domain relevant and about when and how to “unmask” information that, while relevant, also has biasing potential. We believe that forensic scientists should be actively discussing these questions, rather than arguing that such a discussion is unnecessary.

Calls for greater use of blind procedures to increase scientific rigor in forensic testing have indeed become more common in recent years. We were pleased that Dr. Thornton reported encountering such calls “everywhere we now turn,” although we were disappointed that a scientist with his distinguished record of contributions to the field remains unpersuaded of their value. The only argument Thornton offers in opposition is the mistaken claim that forensic scientists can “vanquish” bias by force of will. As he put it: “I reject the insinuation that we do not have the wit or the intellectual capacity to deal with bias, of whatever sort” (1).

Let us be clear. We are not “insinuating” that forensic scientists lack this intellectual capacity; we are asserting that it is a proven and well-accepted scientific fact that all human beings, including forensic scientists, lack this capacity. Cognitive scientists and psychologists who study the operation of the human mind in judgmental tasks have shown repeatedly that people lack conscious awareness of factors that influence them (9–16). People often believe they were influenced by factors that did not affect their judgments and believe they were not influenced by factors that did affect their judgments. This research has a clear implication for the present discussion: contextual bias cannot be conquered by force of will because people are not consciously aware of the extent to which they are influenced by contextual factors.

The inevitability of contextual bias is recognized and accepted in most scientific fields. Imagine the reaction in the medical community if a medical researcher claimed that he need not use blind procedures in his clinical trials because he is a person of integrity who will not allow himself to be biased. The claim would not only be rejected, but it would also likely invoke ridicule from professional colleagues. Forensic scientists who claim to be able to avoid contextual bias through force of will are making a claim contrary to well-established scientific facts concerning human judgment. If science is to progress, erroneous statements of this type must be rebutted forcefully even when (perhaps especially when) they are made by respected, senior scientists.
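The sequential unmasking protocol described in this letter is, at bottom, an ordering constraint: the analyst commits an interpretation of the evidentiary profile before the reference profiles are revealed. The following is a hypothetical illustration of that constraint; the class and method names are inventions for this example, not part of any published protocol or laboratory system.

```python
# Hypothetical sketch of the ordering constraint behind sequential unmasking:
# an interpretation of the evidentiary profile must be committed before any
# reference (known) profile is unmasked. All names here are illustrative.
class SequentialUnmaskingError(Exception):
    pass

class CaseFile:
    def __init__(self, evidentiary_profile, reference_profiles):
        self._evidence = evidentiary_profile
        self._references = reference_profiles  # masked until interpretation is locked
        self._interpretation = None

    def interpret_evidence(self, interpretation):
        """Record the analyst's interpretation of the evidentiary profile.
        Once recorded, it is locked and cannot be silently revised."""
        if self._interpretation is not None:
            raise SequentialUnmaskingError("interpretation already locked")
        self._interpretation = interpretation

    def unmask_references(self):
        """Reference profiles become visible only after the evidentiary
        interpretation has been committed."""
        if self._interpretation is None:
            raise SequentialUnmaskingError(
                "interpret the evidentiary profile before unmasking references")
        return self._references

# Usage: interpreting first, then unmasking, succeeds; the reverse raises.
case = CaseFile(evidentiary_profile={"D8S1179": {13, 14}},
                reference_profiles=[{"D8S1179": {13, 14}}])
case.interpret_evidence({"D8S1179": {13, 14}})
refs = case.unmask_references()
```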


The Champion | 2003

Evaluating Forensic DNA Evidence, Part 2

William C. Thompson; Simon Ford; Travis E. Doom; Michael L. Raymer; Dan E. Krane


Archive | 2003

Physical Mapping of DNA

Dan E. Krane; Michael L. Raymer


Archive | 2009

CS 271/BIO 371: Introduction to Bioinformatics

Michael L. Raymer; Dan E. Krane

Collaboration


Dive into Dan E. Krane's collaborations.

Top Co-Authors

Keith Inman

California State University
