Publications


Featured research published by Jennifer L. Mnookin.


Law, Probability and Risk | 2010

The Use of Technology in Human Expert Domains: Challenges and Risks Arising from the Use of Automated Fingerprint Identification Systems in Forensic Science

Itiel E. Dror; Jennifer L. Mnookin

Cognitive technologies have increased in sophistication and use, to the point of interactively collaborating and distributing cognition between technology and humans. The use of Automated Fingerprint Identification Systems (AFIS), computerized databases of fingerprints, by latent fingerprint experts is a par excellence illustration of such a partnership in forensic investigations. However, the deployment and use of cognitive technology is not a simple matter. If a technology is going to be used to its maximum potential, we must first understand the implications and consequences of using it and make whatever adaptations are necessary, both to the technology and to the way humans work with it. As we demonstrate with AFIS, latent fingerprint identification has been transformed by technology, but the strategies used by the humans who work with this technology have not been adequately modified and adjusted in response to these transformations. For example, the chances that an AFIS search will produce prints with incidental similarities (i.e., that highly similar, look-alike prints from different sources will result from an AFIS search) have not been sufficiently investigated or explored. This risk, as well as others, may mean that the use of AFIS introduces new concerns into the process of latent fingerprint identification, some of which may even increase the chances of making erroneous identifications. Only by appropriate and explicit adaptation to the new potential and the new challenges posed by the new technology will AFIS and other cognitive technologies produce efficient and effective partnerships.


Science | 2009

Time for DNA Disclosure

Dan E. Krane; V. Bahn; David J. Balding; B. Barlow; H. Cash; B. L. Desportes; P. D'Eustachio; Keith Devlin; Travis E. Doom; Itiel E. Dror; Simon Ford; C. Funk; Jason R. Gilder; G. Hampikian; Keith Inman; Allan Jamieson; P. E. Kent; Roger Koppl; Irving L. Kornfield; Sheldon Krimsky; Jennifer L. Mnookin; Laurence D. Mueller; E. Murphy; David R. Paoletti; Dmitri A. Petrov; Michael L. Raymer; D. M. Risinger; Alvin E. Roth; Norah Rudin; W. Shields

The legislation that established the U.S. National DNA Index System (NDIS) in 1994 explicitly anticipated that database records would be available for purposes of research and quality control “if personally identifiable information is removed” [42 U.S.C. Sec 14132(b)(3)(D)]. However, the Federal


PLOS ONE | 2014

Forensic comparison and matching of fingerprints: using quantitative image measures for estimating error rates through understanding and predicting difficulty.

Philip J. Kellman; Jennifer L. Mnookin; Gennady Erlikhman; Patrick Garrigan; Tandra Ghose; Everett Mettler; David Charlton; Itiel E. Dror

Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. 
The results indicate the plausibility of using objective image metrics to predict expert performance and subjective assessment of difficulty in fingerprint comparisons.


Science & Justice | 2014

Regarding Champod, editorial: “Research focused mainly on bias will paralyse forensic science”

D. Michael Risinger; William C. Thompson; Allan Jamieson; Roger Koppl; Irving L. Kornfield; Dan E. Krane; Jennifer L. Mnookin; Robert Rosenthal; Michael J. Saks; Sandy L. Zabell

Dear Dr. Barron, Regarding Champod, editorial: “Research focused mainly on bias will paralyse forensic science.” In 2009, a report of the (U.S.) National Research Council declared that “[t]he forensic science disciplines are just beginning to become aware of contextual bias and the dangers it poses” [1]. The report called for additional research and discussion of how best to address this problem. Since that time, the literature on the topic of contextual bias in forensic science has begun to expand, and some laboratories are beginning to change procedures to address the problem. In his recent editorial in Science and Justice, Christophe Champod suggests that this trend has gone too far and threatens to “paralyse forensic science” [2]. We think his arguments are significantly overstated and deserve forceful refutation, lest they stand in the way of meaningful progress on this important issue. Dr. Champod opens by acknowledging that forensic scientists are vulnerable to bias. He says that he does not “want to minimize the importance of [research on this issue] and how it contributes to a better management of forensic science…” He continues by asking “...but should research remain focused on processes, or should we not move on to the basic understanding of the forensic traces?” He then comments on risks of “being focused on bias only.” By framing the matter in this way, Dr. Champod creates a false dichotomy, and implies facts about the current state of funding and research that are simply not the case. He seems to be saying that currently all or most research funding and publication is directed toward problems of bias, and little or none toward “basic understanding of the forensic traces.” Dr. Champod should know this is not the case, however, since (among other things) he is a co-author of a marvelous recently-released empirical study on fingerprint analysis funded by the (U.S.) National Institute of Justice [3]. 
Any perusal of NIJ grants, or of the contents of leading forensic science journals, would not support Dr. Champod’s apparent view of the current research world. It would of course be a mistake for all of the available funding for research on forensic science topics to be devoted to the potential effects of bias, but again, this is neither the case currently nor, in our opinion, likely to become the case in the future. To discuss the risks of focusing “on bias only” is simply to raise a straw man, when no one, not even the most ardent supporter of sequential unmasking or other approaches to the control of biasing information in forensic science practice, suggests focusing research “on bias only.” That said, we do believe that the research record, both in forensic science and in a variety of other scientific areas, has reached a point that clearly establishes the pressing need for all forensic areas to address the problem of contextual bias. As Andrew Rennison, who was then the forensic science regulator for England and Wales, told the plenary session of the American Academy of Forensic Sciences in February, “we don’t need more research on this issue, what we need is action.” This is not to say that further research on bias and its effects is not valuable, and


Journal of Forensic Sciences | 2011

Commentary on: Thornton JI. Letter to the editor: a rejection of “working blind” as a cure for contextual bias. J Forensic Sci 2010;55(6):1663.

William C. Thompson; Simon Ford; Jason R. Gilder; Keith Inman; Allan Jamieson; Roger Koppl; Irving L. Kornfield; Dan E. Krane; Jennifer L. Mnookin; D. Michael Risinger; Norah Rudin; Michael J. Saks; Sandy L. Zabell

Sir, In a recent letter (1) on the subject of contextual bias, Dr. John Thornton criticized what he called the “working blind” approach. According to Thornton, some commentators (he does not say who) have suggested that forensic scientists should know nothing about the case they are working on “apart from that which is absolutely necessary to conduct the indicated analysis and examination.” This “blind” approach is dangerous, Thornton argues, because forensic scientists need to know the facts of a case to make reasonable judgments about what specimens to test and how to test them. Thornton’s argument is correct, but he is attacking a straw man. As far as we know, no one has suggested that the individuals who decide what specimens to collect at a crime scene, or what analyses and examinations to perform on those specimens, should be blind to the facts of the case. What we, and others, have proposed is that individuals be blind to unnecessary contextual information when performing analytical tests and when making interpretations that require subjective judgment (2–5). One obvious way for forensic scientists to be “blind” during the analytical and interpretational phases of their work is to separate functions in the laboratory. Under what has been called the case manager approach (2–5), there would be two possible roles that a forensic scientist could perform. The case manager would “communicate with police officers and detectives, participate in decisions about what specimens to collect at crime scenes and how to test those specimens, and manage the flow of work to the laboratory” (5). The analyst would perform analytical tests and comparisons on specimens submitted to the laboratory in accordance with the instructions of the case manager. Under this model, the analyst can be blind to unnecessary contextual facts, while the case manager remains fully informed. A well-trained examiner could perform either role on different cases. 
The roles could be rotated among laboratory examiners to allow the laboratory access to the full breadth of expertise available; this would also allow the examiners to acquire and maintain a diversity of skills. Some of us have proposed a procedure called sequential unmasking as a means of minimizing contextual bias (6–8). Thornton mentions sequential unmasking but has not described it correctly. The purpose of sequential unmasking is not to provide analysts an opportunity to “determine whether tests that they have already run have been appropriate” (1). The purpose of sequential unmasking is to protect analysts from being biased unintentionally by information irrelevant to the exercise of their expertise or information that may have avoidable biasing effects if seen too early in the process of analysis. As an illustration, we presented a protocol that would prevent a DNA analyst from being influenced inappropriately by knowledge of reference profiles while making critical subjective judgments about the interpretation of evidentiary profiles. Aspects of this particular sequential unmasking approach have already been adopted by some laboratories in the U.S. in accordance with 2010 SWGDAM guideline 3.6.1, which states: “to the extent possible, DNA typing results from evidentiary samples are interpreted before comparison with any known samples, other than those of assumed contributors” (http://www.fbi.gov/about-us/lab/codis/swgdaminterpretation-guidelines). However, the approach is by no means limited to DNA. We believe similar sequential unmasking protocols can and should be developed for other forensic science disciplines. Sequential unmasking is not a call for uninformed decision making. We believe that analysts should have access to whatever information is actually necessary to conduct a thorough and appropriate analysis at whatever point that information becomes necessary. 
We recognize that difficult decisions will need to be made about what information is domain relevant and about when and how to “unmask” information that, while relevant, also has biasing potential. We believe that forensic scientists should be actively discussing these questions, rather than arguing that such a discussion is unnecessary. Calls for greater use of blind procedures to increase scientific rigor in forensic testing have indeed become more common in recent years. We were pleased that Dr. Thornton reported encountering such calls “everywhere we now turn,” although we were disappointed that a scientist with his distinguished record of contributions to the field remains unpersuaded of their value. The only argument Thornton offers in opposition is the mistaken claim that forensic scientists can “vanquish” bias by force of will. As he put it: “I reject the insinuation that we do not have the wit or the intellectual capacity to deal with bias, of whatever sort” (1). Let us be clear. We are not “insinuating” that forensic scientists lack this intellectual capacity; we are asserting that it is a proven and well-accepted scientific fact that all human beings, including forensic scientists, lack this capacity. Cognitive scientists and psychologists who study the operation of the human mind in judgmental tasks have shown repeatedly that people lack conscious awareness of factors that influence them (9–16). People often believe they were influenced by factors that did not affect their judgments and believe they were not influenced by factors that did affect their judgments. This research has a clear implication for the present discussion: contextual bias cannot be conquered by force of will because people are not consciously aware of the extent to which they are influenced by contextual factors. The inevitability of contextual bias is recognized and accepted in most scientific fields. 
Imagine the reaction in the medical community if a medical researcher claimed that he need not use blind procedures in his clinical trials because he is a person of integrity who will not allow himself to be biased. The claim would not only be rejected; it would likely provoke ridicule from professional colleagues. Forensic scientists who claim to be able to avoid contextual bias through force of will are making a claim contrary to well-established scientific facts concerning human judgment. If science is to progress, erroneous statements of this type must be rebutted forcefully, even when (perhaps especially when) they are made by respected, senior scientists.


Journal of Law and the Biosciences | 2016

Forensic bitemark identification: weak foundations, exaggerated claims

Michael J. Saks; Thomas D. Albright; Thomas L. Bohan; Barbara E. Bierer; C. Michael Bowers; Mary A. Bush; Peter J. Bush; Arturo Casadevall; Simon A. Cole; M. Bonner Denton; Shari Seidman Diamond; Rachel Dioso-Villa; Jules Epstein; David L. Faigman; Lisa Faigman; Stephen E. Fienberg; Brandon L. Garrett; Paul C. Giannelli; Henry T. Greely; Edward J. Imwinkelried; Allan Jamieson; Karen Kafadar; Jerome P. Kassirer; Jonathan J. Koehler; David Korn; Jennifer L. Mnookin; Alan B. Morrison; Erin Murphy; Nizam Peerwani; Joseph L. Peterson

Several forensic sciences, especially of the pattern-matching kind, are increasingly seen to lack the scientific foundation needed to justify continuing admission as trial evidence. Indeed, several have been abolished in the recent past. A likely next candidate for elimination is bitemark identification. A number of DNA exonerations have occurred in recent years for individuals convicted based on erroneous bitemark identifications. Intense scientific and legal scrutiny has resulted. An important National Academies review found little scientific support for the field. The Texas Forensic Science Commission recently recommended a moratorium on the admission of bitemark expert testimony. The California Supreme Court has a case before it that could start a national dismantling of forensic odontology. This article describes the (legal) basis for the rise of bitemark identification and the (scientific) basis for its impending fall. The article explains the general logic of forensic identification and the claims of bitemark identification, and reviews relevant empirical research on bitemark identification—highlighting both the lack of research and the lack of support provided by what research does exist. The rise and possible fall of bitemark identification evidence has broader implications—highlighting the weak scientific culture of forensic science and the law’s difficulty in evaluating and responding to unreliable and unscientific evidence.


Supreme Court Review | 2013

Confronting Science: Expert Evidence and the Confrontation Clause

Jennifer L. Mnookin; David H. Kaye

In Crawford v Washington, the Supreme Court substantially changed its understanding of how the Confrontation Clause applies to hearsay evidence. Since then, the Court has issued three bitterly contested expert-evidence-related Confrontation Clause decisions, and each one has generated at least as many questions as answers. This article analyzes this trilogy of cases, especially the most recent, Williams v Illinois. In Williams, the Court issued a bewildering array of opinions in which majority support for admitting the opinion of a DNA analyst about tests that she did not perform was awkwardly knitted together out of several incompatible doctrinal bases. The most prominent and fully developed argument for admission was that the references to the work of the analysts who actually did the testing but who never testified were admitted for a purpose other than their truth. Although we maintain that this argument is, on the facts of Williams, implausible, we also recognize that in other, relatively limited instances, expert basis evidence might legitimately be introduced for a purpose other than its truth. After striving for precision on this doctrinal point, we step back and suggest that the ongoing anxiety about how to think about expert evidence and the Confrontation Clause exists in large part because the Court has yet to face directly a set of larger, background concerns. There is significant uncertainty about how, and to what extent, scientific evidence should be treated as special or distinct from other kinds of evidence for confrontation purposes. We suggest that scientific and expert evidence might warrant some limited special treatment, based on what we see as one of the most critical dimensions of scientific knowledge production — that it is a collective, rather than an individual enterprise. 
Recognizing that scientists inevitably rely and build on the facts, data, opinions, and test results of others, we suggest that courts should engage in a modest form of scientific exceptionalism within Confrontation Clause jurisprudence, through efforts to create procedures that respect the fundamental values of the Confrontation Clause but also adapt, when necessary, to the epistemic structures and processes of science.


UCLA Law Review | 2011

The Need for a Research Culture in the Forensic Sciences

Jennifer L. Mnookin; Simon A. Cole; Itiel E. Dror; Barry A. J. Fisher; Max M. Houck; Keith Inman; David H. Kaye; Jonathan J. Koehler; Glenn Langenburg; D. Michael Risinger; Norah Rudin; Jay A. Siegel; David A. Stoney

Collaboration


Top co-authors of Jennifer L. Mnookin:

Itiel E. Dror (University College London)

Dan E. Krane (Wright State University)

Norah Rudin (Indiana University Bloomington)

David H. Kaye (Pennsylvania State University)