
Publication


Featured research published by Nigel W. Bond.


Applied Cognitive Psychology | 1999

Beliefs and data on the relationship between consistency and accuracy of eyewitness testimony

Neil Brewer; Rob Potter; Ronald P. Fisher; Nigel W. Bond; Mary A. Luszcz

Two studies concerned with the consistency and accuracy of eyewitness testimony were conducted. In Study 1, potential jurors indicated the degree to which they considered various witness on-stand behaviours to indicate testimonial accuracy. Witness statements that were inconsistent with previous statements were considered the strongest indicators of inaccuracy. Study 2 examined the relationship between consistency and accuracy of testimony. Witnesses viewed a film of a robbery and were interviewed twice (2 weeks apart) about the crime in a 4 (interview format) × 2 (interview occasion) design. Regardless of whether consistency was operationalised in terms of direct contradictions between interviews or degree of agreement on detail across interviews, no more than 10% of the variance in overall accuracy rate was explained by any individual measure. Number of contradictions and overall agreement between interviews did, however, make additive contributions to the prediction of overall accuracy. Higher correlations were also detected between contradiction-based consistency measures and accuracy rates at the second interview. Neither consistency nor accuracy for specific testimonial dimensions was predictive of accuracy on other dimensions or of overall accuracy.
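
As an illustration of the variance-explained claims in this abstract, here is a minimal sketch (simulated data and hypothetical variable names, not the study's materials or code) that fits each consistency measure alone and then the additive model, reporting the share of variance in accuracy each explains.

import numpy as np

rng = np.random.default_rng(0)
n = 100  # hypothetical number of witnesses

# Simulate consistency measures and an accuracy rate for each witness.
contradictions = rng.poisson(3, n)    # direct contradictions between interviews
agreement = rng.uniform(0.5, 1.0, n)  # proportion of details repeated across interviews
accuracy = 0.7 - 0.02 * contradictions + 0.15 * agreement + rng.normal(0, 0.1, n)

def r_squared(X, y):
    """Share of variance in y explained by a least-squares fit on X."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

print("contradictions alone:", r_squared(contradictions, accuracy))
print("agreement alone:", r_squared(agreement, accuracy))
print("additive model:", r_squared(np.column_stack([contradictions, agreement]), accuracy))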


Journal of the Royal Statistical Society: Series A (Statistics in Society) | 2003

A multilevel cross-classified modelling approach to peer review of grant proposals: the effects of assessor and researcher attributes on assessor ratings

Upali W. Jayasinghe; Herbert W. Marsh; Nigel W. Bond

The peer review of grant proposals is very important to academics from all disciplines. Although there is limited research on the reliability of assessments of grant proposals, previously reported single-rater reliabilities have been disappointingly low (between 0.17 and 0.37). We found that the single-rater reliability of the overall assessor rating for Australian Research Council grants was 0.21 for the social sciences and humanities (2,870 ratings, 1,928 assessors and 687 proposals) and 0.19 for the sciences (7,153 ratings, 4,295 assessors and 1,644 proposals). We used a multilevel cross-classification approach (level 1, the assessor-proposal cross-classification; level 2, field of study), taking into account that 34% of the assessors evaluated more than one proposal. Researcher-nominated assessors (those chosen by the authors of the research proposal) gave higher ratings than panel-nominated assessors chosen by the Australian Research Council, and proposals from more prestigious universities received higher ratings. In the social sciences and humanities, the status of Australian universities had significantly more effect on Australian assessors than on overseas assessors. In the sciences, ratings were higher when assessors rated fewer proposals (and so apparently had a more limited frame of reference for making such ratings) and when researchers were professors rather than non-professors. In particular, the methodology of this large-scale study is applicable to other settings where peer review is employed as a selection process, such as publications, job interviews, the awarding of prizes and election to prestigious societies. Copyright 2003 Royal Statistical Society.
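
A minimal sketch, not the authors' code, of how single-rater reliability can be estimated from a cross-classified model in which proposals and assessors are crossed random effects: the reliability is the proportion of rating variance attributable to proposals. The simulated data, effect sizes, and the statsmodels variance-components encoding of the crossed design are all illustrative assumptions.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_prop, n_assess = 60, 40
pairs = [(p, a) for p in range(n_prop)
         for a in rng.choice(n_assess, size=3, replace=False)]
df = pd.DataFrame(pairs, columns=["proposal", "assessor"])

prop_eff = rng.normal(0, 0.6, n_prop)      # true proposal quality
assess_eff = rng.normal(0, 0.7, n_assess)  # assessor leniency/severity
df["rating"] = (prop_eff[df.proposal] + assess_eff[df.assessor]
                + rng.normal(0, 1.0, len(df)))

# Crossed random effects expressed as variance components within a single
# dummy group; fit.vcomp follows the (alphabetical) component names.
model = smf.mixedlm("rating ~ 1", df, groups=np.ones(len(df)),
                    vc_formula={"assessor": "0 + C(assessor)",
                                "proposal": "0 + C(proposal)"})
fit = model.fit()
var_a, var_p = fit.vcomp   # assessor and proposal variance components
var_e = fit.scale          # residual variance
print("single-rater reliability:", var_p / (var_p + var_a + var_e))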


Educational Evaluation and Policy Analysis | 2001

Peer review in the funding of research in higher education: The Australian experience

Upali W. Jayasinghe; Herbert W. Marsh; Nigel W. Bond

In this article we evaluate the peer review process used to fund Australian university research across all disciplines. Peer reviews of research proposals (2,989 proposals, 6,233 external reviewers) submitted to the Australian Research Council (ARC) are related to characteristics of the researchers and of external reviewers. The reliability of the peer reviews was disappointingly low (interrater agreement of .53 for researcher ratings based on an average of 4.3 external reviewers per proposal). The gender and age of a researcher and the number of researchers on a research team did not affect the probability that funding would be granted, but professors were more likely to be funded than nonprofessors. Australian external reviewers gave lower ratings than did non-Australian reviewers, particularly those from North America. The number of external reviewers for each proposal and the number of proposals assessed by each external reviewer had small negative effects on ratings. Researcher-nominated external reviewers (those chosen by the authors of a research proposal) gave higher, less-reliable ratings than did panel-nominated external reviewers chosen by the ARC. To improve the reliability of peer reviews, we offer the following recommendations: (a) Researcher-nominated reviewers should not be used; (b) there should be more reviews per proposal; and (c) a smaller number of more highly selected reviewers should perform most of the reviews within each subdiscipline, thereby providing greater control over error associated with individual reviewers.
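
The multi-reviewer agreement here and the single-rater reliabilities in the authors' related work are linked by the Spearman-Brown formula; the sketch below (my illustration, not a computation from the paper) shows that a reliability of .53 for an average of about 4.3 reviewers backs out to a single-rater reliability near the ~0.2 the authors report elsewhere.

def spearman_brown(rho_1: float, k: float) -> float:
    """Reliability of the mean of k raters, given single-rater reliability."""
    return k * rho_1 / (1 + (k - 1) * rho_1)

def single_rater(rho_k: float, k: float) -> float:
    """Invert Spearman-Brown: single-rater reliability from a k-rater value."""
    return rho_k / (k - (k - 1) * rho_k)

print(single_rater(0.53, 4.3))    # ~0.21
print(spearman_brown(0.21, 4.3))  # ~0.53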


Higher Education Research & Development | 1997

Approaches to Studying and Academic Performance in a Traditional Psychology Course

Stephen C. Provost; Nigel W. Bond

Richardson's short version of the Approaches to Studying Inventory (ASI) was administered to psychology students at the commencement of the semester of study. This inventory seeks to indicate the degree to which students employ a reproducing (i.e., surface) or meaning (i.e., deep) approach to learning. Scores for meaning orientation did not predict academic performance in any way, while there was a very small negative relationship between reproducing orientation and academic achievement. The internal reliabilities of the subscales within the meaning and reproducing orientations were not satisfactory.
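
The internal-reliability check reported here is conventionally Cronbach's alpha; below is a minimal sketch with invented data (not the study's) showing how alpha is computed for the items of one hypothetical ASI subscale, with weakly coherent items yielding an unsatisfactory value.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(2)
latent = rng.normal(size=(100, 1))                        # 100 hypothetical students
subscale = latent + rng.normal(scale=1.5, size=(100, 4))  # 4 noisy subscale items
print(cronbach_alpha(subscale))  # low alpha for a weakly coherent subscale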


Australian Psychologist | 2007

Peer review process: Assessments by applicant‐nominated referees are biased, inflated, unreliable and invalid

Herbert W. Marsh; Nigel W. Bond; Upali W. Jayasinghe

How trustworthy are peer reviews by applicant-nominated assessors (ANAs)? For Australian Research Council (ARC) proposals (N = 2,330) with at least one ANA and one panel-nominated assessor (PNA), ANAs gave substantially higher ratings in all nine discipline panels (covering the sciences, social sciences, and humanities). Compared to reviews by PNAs, ANA ratings were less related to ratings by other assessors, less related to the ARC final assessment, and contributed to the unreliability of peer reviews. Furthermore, when the same assessor acted as an ANA and as a PNA for different proposals, ratings in the ANA role were biased whereas those by the same person in the PNA role were not. ANA ratings of ARC grant proposals are biased, inflated, unreliable, and invalid, leading the ARC to abandon the use of ANAs. Particularly if replicated in other situations, these results have important implications for other evaluations based on ANAs.


Scientometrics | 2006

A new reader trial approach to peer review in funding research grants: An Australian experiment

Upali W. Jayasinghe; Herbert W. Marsh; Nigel W. Bond

Peer reviews are highly valued in academic life, but are notoriously unreliable. A major problem is the substantial measurement error due to idiosyncratic responses when large numbers of different assessors each evaluate only one or a few submissions (e.g., journal articles, grants). To address this problem, the main funding body of academic research in Australia trialed a "reader system" in which each of a small number of senior academics read all proposals within their subdiscipline. The traditional peer review process for 1996 (2,989 proposals, 6,233 assessors) resulted in unacceptably low reliabilities comparable with those found in other research (0.475 for the research project, 0.572 for the researcher). For proposals from psychology and education in 1997, the new reader system resulted in substantially higher reliabilities: 0.643 and 0.881, respectively. In comparison to the traditional peer review approach, the new reader system is substantially more reliable, timely, and cost-efficient, and it is applicable to many peer review situations.
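
A simulated sketch (my illustration, not the trial's data) of why the reader system can raise reliability: when the same few readers rate every proposal, each reader's overall leniency is shared across all proposals and drops out of between-proposal comparisons, whereas one-off assessors inject fresh leniency noise into every proposal's mean rating.

import numpy as np

rng = np.random.default_rng(3)
n_prop, k = 500, 4
quality = rng.normal(size=n_prop)  # true proposal quality

def mean_ratings(shared_readers: bool) -> np.ndarray:
    if shared_readers:  # reader system: the same k readers rate all proposals
        leniency = rng.normal(0, 1, (1, k))
    else:               # traditional system: fresh assessors for each proposal
        leniency = rng.normal(0, 1, (n_prop, k))
    ratings = quality[:, None] + leniency + rng.normal(0, 1, (n_prop, k))
    return ratings.mean(axis=1)

for shared in (False, True):
    r = np.corrcoef(quality, mean_ratings(shared))[0, 1]
    label = "shared readers" if shared else "one-off assessors"
    print(label, "reliability ~", round(r ** 2, 3))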


Journal of Cognitive Engineering and Decision Making | 2013

Measuring Relative Cue Strength as a Means of Validating an Inventory of Expert Offender Profiling Cues

Ben W. Morrison; Mark W. Wiggins; Nigel W. Bond; Michael D. Tyler

Cues have been identified as important precursors to successful diagnoses among expert practitioners. However, current approaches to the identification of expert cues typically rely on subjective methods, making the validity of cues difficult to establish. The present research examined the utility of a Paired-Concept Association Task (P-CAT) as a basis for discriminating expert and novice cue activation in the context of offender profiling. Three studies are reported: Study 1A employed a cognitive interview to acquire the cue-based concepts used by experts and novices; Study 1B presented pairs of these concepts as part of the P-CAT, which recorded response latency; and Study 1C employed a survey to further gauge participants' perceptions of the concepts. The results revealed differences between experts and novices both in the cue-based associations activated and in the response latencies recorded by the P-CAT. The P-CAT accurately discriminated expert from novice cue activation and consequently offers a new method for objectively validating expert cue use.
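
As a sketch of the latency logic behind the P-CAT (hypothetical numbers, not the study's data or materials): shorter response latencies to related concept pairs are taken to indicate stronger, more readily activated cue associations, so expert-novice differences can be tested by comparing latency distributions.

import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Hypothetical response latencies (ms) for related cue-outcome concept pairs.
expert_rt = rng.normal(900, 150, 30)   # invented expert profiler latencies
novice_rt = rng.normal(1100, 200, 30)  # invented novice latencies

t, p = stats.ttest_ind(expert_rt, novice_rt, equal_var=False)  # Welch's t-test
print(f"Welch t = {t:.2f}, p = {p:.4f}")  # faster expert responses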


Journal of Informetrics | 2011

Gender differences in peer reviews of grant applications: A substantive-methodological synergy in support of the null hypothesis model

Herbert W. Marsh; Upali W. Jayasinghe; Nigel W. Bond

Peer review serves a gatekeeper role as the final arbiter of what is valued in academia, but it is widely criticized for potential biases, particularly in relation to gender. In this substantive-methodological synergy, we demonstrate methodological and multilevel statistical approaches to testing a null hypothesis model of the effect of researcher gender on peer reviews of grant proposals, based on 10,023 reviews by 6,233 external assessors of 2,331 proposals from social science, humanities, and science disciplines. Using multilevel cross-classified models, we show support for the null hypothesis model, which posits that researcher gender has no significant effect on proposal outcomes. Furthermore, these non-effects of gender generalize across assessor gender (contrary to a matching hypothesis), discipline, assessors chosen by the researchers themselves versus those chosen by the funding agency, and the country of the assessor. Given the large, diverse sample, the powerful statistical analyses, and the support for generalizability, these results, coupled with findings from previous research, offer strong support for the null hypothesis model of no gender differences in peer reviews of grant proposals.


American Psychologist | 2008

Improving the Peer-Review Process for Grant Applications: Reliability, Validity, Bias, and Generalizability

Herbert W. Marsh; Upali W. Jayasinghe; Nigel W. Bond


Target: International Journal of Translation Studies | 2011

Interpreting accent in the courtroom

Sandra Beatriz Hale; Nigel W. Bond; Jeanna Sutton

Collaboration


Dive into Nigel W. Bond's collaborations.

Top Co-Authors

Herbert W. Marsh (Australian Catholic University)
Upali W. Jayasinghe (University of New South Wales)
Ben W. Morrison (Australian College of Applied Psychology)
Michael D. Tyler (University of Western Sydney)
Sandra Beatriz Hale (University of New South Wales)