Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Michael T. Bradley is active.

Publication


Featured research published by Michael T. Bradley.


Perceptual and Motor Skills | 1987

Machiavellianism, the Control Question Test and the Detection of Deception

Michael T. Bradley; K. I. Klohn

Individuals differing in levels of Machiavellianism were involved in a mock crime psychophysiological detection of deception study. It was hypothesized that those scoring high on Machiavellianism would perceive detection results as more accurately reflecting their actual guilt or innocence, especially under conditions of high arousal, than those with low scores. The hypothesis was based on assumptions that subjects must appropriately discriminate between crime-relevant and irrelevant questions, that this discrimination is moderately difficult with Control Question Tests, and that high-Mach scorers under arousing conditions will make this discrimination more readily than low-Mach scorers. Partial support for the hypothesis was found in that guilty high-Mach scorers were more accurately detected than guilty low-Mach scorers. This result did not hold for innocent Mach scorers, and there was no augmentation of the effect in conditions designed to increase emotional arousal.


Journal of Applied Psychology | 1992

Awareness of crime-relevant information and the Guilty Knowledge Test

Michael T. Bradley; J. Rettinger

The effects of awareness of crime-relevant information on the detection of deception with the Guilty Knowledge Test were examined. Student subjects were assigned to 1 of 3 groups: a guilty group, members of which committed a mock crime; an innocent group aware of details about the crime; or an innocent group unaware of such information. After following instructions, subjects were tested on the polygraph with a 10-item Guilty Knowledge Test and were offered $20.00 for an innocent test outcome. Skin resistance response scores of guilty subjects lying about crime-relevant information were higher than the scores of innocent informed subjects, whose scores in turn were higher than those of innocent unaware subjects.


Perceptual and Motor Skills | 2008

Accuracy of Effect Size Estimates from Published Psychological Research

Andrew Brand; Michael T. Bradley; Lisa A. Best; George Stoica

A Monte-Carlo simulation was used to model the biasing of effect sizes in published studies. The findings from the simulation indicate that, when a predominant bias to publish studies with statistically significant results is coupled with inadequate statistical power, there will be an overestimation of effect sizes. The consequences such an effect size overestimation will then have on meta-analyses and power analyses are highlighted and discussed along with measures which can be taken to reduce the problem.
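As a rough illustration of the mechanism this abstract describes, the sketch below simulates underpowered two-group studies and "publishes" only those reaching p < .05; the parameters (true d = 0.3, n = 20 per group, alpha = .05) are assumptions chosen for illustration, not values taken from the paper.

```python
# Minimal sketch of publication bias plus low power inflating published effect sizes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_d, n, alpha, n_studies = 0.3, 20, 0.05, 20_000

published = []
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n)
    treatment = rng.normal(true_d, 1.0, n)
    t, p = stats.ttest_ind(treatment, control)
    # Pooled-SD Cohen's d for this simulated study
    sp = np.sqrt((control.var(ddof=1) + treatment.var(ddof=1)) / 2)
    d = (treatment.mean() - control.mean()) / sp
    if p < alpha:                      # only "significant" studies get published
        published.append(d)

print(f"true d = {true_d}, mean published d = {np.mean(published):.2f}")
# With n = 20 per group the test is underpowered, so the mean published d is
# noticeably larger than the true d.
```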


Journal of General Psychology | 2010

Multiple Trials May Yield Exaggerated Effect Size Estimates

Andrew Brand; Michael T. Bradley; Lisa A. Best; George Stoica

Published psychological research attempting to support the existence of small and medium effect sizes may not have enough participants to do so accurately, and thus, repeated trials or the use of multiple items may be used in an attempt to obtain significance. Through a series of Monte-Carlo simulations, this article describes the results of multiple trials or items on effect size estimates when the averages and aggregates of a dependent measure are analyzed. The simulations revealed a large increase in observed effect size estimates when the numbers of trials or items in an experiment were increased. Overestimation effects are mitigated by correlations between trials or items, but remain substantial in some cases. Some concepts, such as a P300 wave or a test score, are best defined as a composite of measures. Troubles may arise in more exploratory research where the interrelations among trials or items may not be well described.
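The inflation mechanism can be sketched directly: averaging more trials shrinks trial-level noise in the denominator of a standardized effect size. The simulation below is a minimal sketch with assumed parameters (between-subject SD tau, trial-noise SD sigma_e, a mean difference of 0.2), not the authors' simulation code.

```python
# Sketch: Cohen's d computed on the average of k trials grows with k.
import numpy as np

rng = np.random.default_rng(1)
n, mu_diff, tau, sigma_e = 30, 0.2, 1.0, 2.0   # tau: between-subject SD, sigma_e: trial-noise SD

def cohens_d(a, b):
    sp = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (b.mean() - a.mean()) / sp

for k in (1, 5, 25, 100):                       # number of trials averaged per subject
    d_vals = []
    for _ in range(2_000):
        # stable subject-level scores plus independent trial-level noise
        g1 = rng.normal(0.0, tau, n)[:, None] + rng.normal(0.0, sigma_e, (n, k))
        g2 = rng.normal(mu_diff, tau, n)[:, None] + rng.normal(0.0, sigma_e, (n, k))
        d_vals.append(cohens_d(g1.mean(axis=1), g2.mean(axis=1)))
    print(f"k = {k:3d} trials: mean observed d = {np.mean(d_vals):.2f}")
# d grows with k because averaging shrinks trial-level noise in the denominator; the
# inflation is weaker when trials are highly intercorrelated (tau large relative to sigma_e).
```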


Journal of General Psychology | 2012

More Voodoo Correlations: When Average-Based Measures Inflate Correlations

Andrew Brand; Michael T. Bradley

A Monte-Carlo simulation was conducted to assess the extent to which a correlation estimate can be inflated when an average-based measure is used in a commonly employed correlational design. The results from the simulation reveal that the inflation of the correlation estimate can be substantial, up to 76%. Additionally, data were re-analyzed from two previously published studies to determine the extent to which the correlation estimate was inflated due to the use of an average-based measure. The re-analyses reveal that correlation estimates had been inflated by just over 50% in both studies. Although these findings are disconcerting, we are somewhat comforted by the fact that there is a simple and easy analysis that can be employed to prevent the inflation of the correlation estimate that we have simulated and observed.
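A similar sketch applies to correlations: averaging trials strips measurement noise from the predictor, so the correlation computed on the average exceeds the single-trial correlation. The parameters below (50 subjects, 20 trials, a true correlation of .3 with the stable score) are illustrative assumptions, not the study's design.

```python
# Sketch: an average-based measure yields a larger correlation than a single noisy trial.
import numpy as np

rng = np.random.default_rng(2)
n, k, rho, sigma_e = 50, 20, 0.3, 2.0     # 50 subjects, 20 trials, true r with the stable score

r_single, r_avg = [], []
for _ in range(5_000):
    t = rng.normal(0.0, 1.0, n)                               # stable subject-level score
    y = rho * t + np.sqrt(1 - rho**2) * rng.normal(0.0, 1.0, n)
    trials = t[:, None] + rng.normal(0.0, sigma_e, (n, k))    # noisy trial-level observations
    r_single.append(np.corrcoef(trials[:, 0], y)[0, 1])       # correlation using one trial
    r_avg.append(np.corrcoef(trials.mean(axis=1), y)[0, 1])   # correlation using the k-trial mean

print(f"mean r, single trial:    {np.mean(r_single):.2f}")
print(f"mean r, k-trial average: {np.mean(r_avg):.2f}")
# Averaging strips trial-level noise from the predictor, so the average-based correlation
# is substantially larger than the single-trial correlation.
```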


Perceptual and Motor Skills | 1997

Estimating the effect of the file drawer problem in meta-analysis.

Michael T. Bradley; R. D. Gupta

Although meta-analysis appears to be a useful technique to verify the existence of an effect and to summarize large bodies of literature, there are problems associated with its use and interpretation. Amongst difficulties is the “file drawer problem.” With this problem it is assumed that a certain percentage of studies are not published or are not available to be included in any given meta-analysis. We present a cautionary table to quantify the magnitude of this problem. The table shows that distortions exaggerating the effect size are substantial and that the exaggerations of effects are strongest when the true effect size approaches zero. A meta-analysis could be very misleading were the true effect size close to zero.
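The following hedged sketch, with an assumed design of n = 20 per group and alpha = .05, shows why censoring nonsignificant studies exaggerates small effects most: when the true effect is near zero, only unusually extreme samples clear the significance threshold.

```python
# Sketch: mean effect size among significant-only studies, by true effect size.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, alpha = 20, 0.05

for true_d in (0.0, 0.2, 0.5, 0.8):
    sig_d = []
    for _ in range(10_000):
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(true_d, 1.0, n)
        if stats.ttest_ind(b, a).pvalue < alpha:
            sp = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
            sig_d.append(abs(b.mean() - a.mean()) / sp)
    print(f"true d = {true_d:.1f}: mean |d| among significant studies = {np.mean(sig_d):.2f}")
# Relative exaggeration is largest as the true effect approaches zero, which is the
# pattern the cautionary table quantifies.
```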


Perceptual and Motor Skills | 2004

Diagnosing Estimate Distortion Due to Significance Testing in Literature on Detection of Deception

Michael T. Bradley; George Stoica

Journals typically report or feature study results that are significant by a statistical test criterion. This is a bias that prevents obtaining precise estimates of the magnitude of any underlying effect. It is severe with small effect sizes and small numbers of measurements. To illustrate the problem and a diagnostic technique, results of published studies on the detection of deception are graphed. The literature contains large effect sizes affirming that deceptive responses, in contrast to truthful responses, are associated with more reactive skin resistance responses. These effect sizes, when graphed on the x-axis against n on the y-axis, are distributed as funnel graphs. A subset of studies shows support for predicted small to medium effects on different physiological measures, individual differences, and condition manipulations. These effect sizes, graphed against sample ns, follow negative correlations, suggesting that effect sizes from published values of t, F, and z are exaggerations.
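The funnel-graph diagnostic can be mimicked with a small simulation: among significant-only studies of a modest true effect, observed effect sizes correlate negatively with sample size. The parameters below (true d = 0.2, per-group n between 10 and 120) are assumptions for illustration.

```python
# Sketch: the negative d-versus-n relation that flags significance-filtered literatures.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
true_d, alpha = 0.2, 0.05

ns, ds = [], []
while len(ds) < 300:                         # collect 300 "published" (significant) studies
    n = int(rng.integers(10, 120))           # per-group sample size varies across studies
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(true_d, 1.0, n)
    if stats.ttest_ind(b, a).pvalue < alpha:
        sp = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        ds.append((b.mean() - a.mean()) / sp)
        ns.append(n)

print(f"correlation between observed d and n: {np.corrcoef(ds, ns)[0, 1]:.2f}")
# A clearly negative correlation among published effects is the warning sign described
# above: small-n studies appear in print only when their effect sizes are exaggerated.
```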


Social Science Computer Review | 2012

Assessing the Effects of Technical Variance on the Statistical Outcomes of Web Experiments Measuring Response Times

Andrew Brand; Michael T. Bradley

A simulation was conducted to assess the effect of technical variance on the statistical power of web experiments measuring response times. The results of the simulation showed that technical variance reduced the statistical power and the accuracy of the effect size estimate by a negligible magnitude. This finding therefore suggests that researchers’ preconceptions concerning the unsuitability of web experiments for conducting research using response time as a dependent measure are misguided.
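A minimal sketch of the simulation idea, with assumed rather than reported parameters (a 30 ms effect, 100 ms of between-subject RT variability, 20 ms of added technical timing noise), shows why the power loss is negligible.

```python
# Sketch: statistical power with and without added technical timing noise.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n, rt_diff, sd_rt, sd_tech, alpha = 40, 30.0, 100.0, 20.0, 0.05   # all times in ms

def power(extra_sd, reps=5_000):
    hits = 0
    for _ in range(reps):
        a = rng.normal(500.0, sd_rt, n) + rng.normal(0.0, extra_sd, n)
        b = rng.normal(500.0 + rt_diff, sd_rt, n) + rng.normal(0.0, extra_sd, n)
        hits += stats.ttest_ind(b, a).pvalue < alpha
    return hits / reps

print(f"power without technical variance: {power(0.0):.2f}")
print(f"power with technical variance:    {power(sd_tech):.2f}")
# The added timing noise is small relative to ordinary between-subject RT variability,
# so the reduction in power is negligible, which is the paper's central finding.
```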


Perceptual and Motor Skills | 2002

A Monte-Carlo estimation of effect size distortion due to significance testing.

Michael T. Bradley; D. Smith; George Stoica

A Monte-Carlo study was done with true effect sizes in deviation units ranging from 0 to 2 and a variety of sample sizes. The purpose was to assess the amount of bias created by considering only effect sizes that passed a statistical cut-off criterion of α = .05. The deviation values obtained at the .05 level, jointly determined by the set effect sizes and sample sizes, are presented. This table is useful when summarizing sets of studies to judge whether published results reflect an accurate appraisal of an underlying effect or a distorted estimate expected because significant studies are published and nonsignificant results are not. The table shows that the magnitudes of error are substantial with small sample sizes and inherently small effect sizes. Thus, reviews based on published literature could be misleading and especially so if true effect sizes were close to zero. A researcher should be particularly cautious of small sample sizes showing large effect sizes when larger samples indicate diminishing smaller effects.
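As a back-of-envelope complement to the Monte-Carlo table described above (not a reproduction of it), one can compute the smallest observed Cohen's d that can reach p < .05 in a two-group t-test for a given per-group n.

```python
# Sketch: minimum observed |d| that clears the .05 threshold, as a function of n.
import numpy as np
from scipy import stats

alpha = 0.05
for n in (10, 20, 50, 100, 500):
    t_crit = stats.t.ppf(1 - alpha / 2, df=2 * n - 2)
    d_min = t_crit * np.sqrt(2 / n)          # |d| needed for two-tailed significance
    print(f"n = {n:4d} per group: minimum significant |d| ~ {d_min:.2f}")
# With n = 10 only |d| of roughly 0.9 or more can be significant, so a literature built
# from small significant studies necessarily over-represents large effect sizes.
```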


Perceptual and Motor Skills | 1996

The Control Question Test in Polygraphic Examinations with Actual Controls for Truth

Michael T. Bradley; Vance V. MacLaren; M. E. Black


Collaboration


Dive into Michael T. Bradley's collaborations.

Top Co-Authors

George Stoica, University of New Brunswick
Lisa A. Best, University of New Brunswick
A. Luke MacNeill, University of New Brunswick
M. E. Black, University of New Brunswick
Vance V. MacLaren, University of New Brunswick
David Trafimow, New Mexico State University
Igor Dolgov, New Mexico State University