
Publications


Featured research published by Harold Pashler.


Psychological Bulletin | 1994

Dual-task interference in simple tasks: data and theory

Harold Pashler

People often have trouble performing 2 relatively simple tasks concurrently. The causes of this interference and its implications for the nature of attentional limitations have been controversial for 40 years, but recent experimental findings are beginning to provide some answers. Studies of the psychological refractory period effect indicate a stubborn bottleneck encompassing the process of choosing actions and probably memory retrieval generally, together with certain other cognitive operations. Other limitations associated with task preparation, sensory-perceptual processes, and timing can generate additional and distinct forms of interference. These conclusions challenge widely accepted ideas about attentional resources and probe reaction time methodologies. They also suggest new ways of thinking about continuous dual-task performance, effects of extraneous stimulation (e.g., stop signals), and automaticity. Implications for higher mental processes are discussed.


Perspectives on Psychological Science | 2009

Puzzlingly High Correlations in fMRI Studies of Emotion, Personality, and Social Cognition

Edward Vul; Christine R. Harris; Piotr Winkielman; Harold Pashler

Functional magnetic resonance imaging (fMRI) studies of emotion, personality, and social cognition have drawn much attention in recent years, with high-profile studies frequently reporting extremely high (e.g., >.8) correlations between brain activation and personality measures. We show that these correlations are higher than should be expected given the (evidently limited) reliability of both fMRI and personality measures. The high correlations are all the more puzzling because method sections rarely contain much detail about how the correlations were obtained. We surveyed authors of 55 articles that reported findings of this kind to determine a few details on how these correlations were computed. More than half acknowledged using a strategy that computes separate correlations for individual voxels and reports means of only those voxels exceeding chosen thresholds. We show how this nonindependent analysis inflates correlations while yielding reassuring-looking scattergrams. This analysis technique was used to obtain the vast majority of the implausibly high correlations in our survey sample. In addition, we argue that, in some cases, other analysis problems likely created entirely spurious correlations. We outline how the data from these studies could be reanalyzed with unbiased methods to provide accurate estimates of the correlations in question and urge authors to perform such reanalyses. The underlying problems described here appear to be common in fMRI research of many kinds—not just in studies of emotion, personality, and social cognition.
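The inflation mechanism the abstract describes can be demonstrated with a minimal simulation (illustrative only, not the authors' code or data): even when every true brain-behavior correlation is zero, selecting voxels whose sample correlation passes a threshold and then averaging only those survivors yields an impressively high reported correlation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_voxels = 16, 10_000

# Null data: voxel activations and the behavioral score are independent,
# so every true brain-behavior correlation is exactly zero.
activations = rng.standard_normal((n_voxels, n_subjects))
behavior = rng.standard_normal(n_subjects)

# Pearson correlation of each voxel with the behavioral measure,
# computed from z-scores (population standard deviation, ddof=0).
behavior_z = (behavior - behavior.mean()) / behavior.std()
act_z = (activations - activations.mean(axis=1, keepdims=True)) \
        / activations.std(axis=1, keepdims=True)
r = act_z @ behavior_z / n_subjects

# Nonindependent analysis: keep only voxels whose correlation passes a
# threshold, then report the mean correlation of the survivors.
threshold = 0.6
selected = r[r > threshold]
print(f"{selected.size} voxels pass, mean r = {selected.mean():.2f}")
```

By construction the mean of the selected voxels exceeds the threshold, so the "result" looks like a strong brain-behavior correlation even though the average correlation across all voxels is essentially zero.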


Psychological Bulletin | 2006

Distributed practice in verbal recall tasks: A review and quantitative synthesis

Nicholas J. Cepeda; Harold Pashler; Edward Vul; John T. Wixted; Doug Rohrer

The authors performed a meta-analysis of the distributed practice effect to illuminate the effects of temporal variables that have been neglected in previous reviews. This review found 839 assessments of distributed practice in 317 experiments located in 184 articles. Effects of spacing (consecutive massed presentations vs. spaced learning episodes) and lag (less spaced vs. more spaced learning episodes) were examined, as were expanding interstudy interval (ISI) effects. Analyses suggest that ISI and retention interval operate jointly to affect final-test retention; specifically, the ISI producing maximal retention increased as retention interval increased. Areas needing future research and theoretical implications are discussed.


Attention Perception & Psychophysics | 1988

Familiarity and visual change detection

Harold Pashler

Detection of change when one display of familiar objects replaces another display might be based purely upon visual codes, or also on identity information (i.e., knowing what was present where in the initial display). Displays of 10 alphanumeric characters were presented and, after a brief offset, were presented again in the same position, with or without a change in a single character. Subjects’ accuracy in change detection did not suggest preservation of any more information than is usually available in whole report, except with the briefest of offsets (under 50 msec). Stimulus duration had only modest effects. The interaction of masking with offset duration followed the pattern previously observed with unfamiliar visual stimuli (Phillips, 1974). Accuracy was not reduced by reflection of the characters about a horizontal axis, suggesting that categorical information contributed negligibly. Detection of change appears to depend upon capacity-limited visual memory; (putative) knowledge of what identities are present in different display locations does not seem to contribute.


Psychological Review | 2000

How persuasive is a good fit? A comment on theory testing

Seth Roberts; Harold Pashler

Quantitative theories with free parameters often gain credence when they closely fit data. This is a mistake. A good fit reveals nothing about the flexibility of the theory (how much it cannot fit), the variability of the data (how firmly the data rule out what the theory cannot fit), or the likelihood of other outcomes (perhaps the theory could have fit any plausible result), and a reader needs all 3 pieces of information to decide how much the fit should increase belief in the theory. The use of good fits as evidence is not supported by philosophers of science nor by the history of psychology; there seem to be no examples of a theory supported mainly by good fits that has led to demonstrable progress. A better way to test a theory with free parameters is to determine how the theory constrains possible outcomes (i.e., what it predicts), assess how firmly actual outcomes agree with those constraints, and determine if plausible alternative outcomes would have been inconsistent with the theory, allowing for the variability of the data.
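The abstract's central point, that a flexible theory can fit almost anything, can be illustrated with a toy example (purely illustrative; not from the article): a polynomial with as many free parameters as data points "fits" pure noise essentially perfectly, so the excellent fit tells us nothing about the theory.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 8)
y = rng.standard_normal(8)   # pure noise: no lawful relation to x at all

# A "theory" with 8 free parameters: a degree-7 polynomial.
coeffs = np.polyfit(x, y, deg=7)
y_hat = np.polyval(coeffs, x)

ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"R^2 = {r_squared:.4f}")   # near-perfect fit to structureless data
```

This is exactly the diagnostic the authors propose: before a good fit counts as evidence, one must ask what outcomes the theory could *not* have fit.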


Perspectives on Psychological Science | 2012

Editors’ Introduction to the Special Section on Replicability in Psychological Science: A Crisis of Confidence?

Harold Pashler; Eric-Jan Wagenmakers

Is there currently a crisis of confidence in psychological science reflecting an unprecedented level of doubt among practitioners about the reliability of research findings in the field? It would certainly appear that there is. These doubts emerged and grew as a series of unhappy events unfolded in 2011: the Diederik Stapel fraud case (see Stroebe, Postmes, & Spears, 2012, this issue), the publication in a major social psychology journal of an article purporting to show evidence of extrasensory perception (Bem, 2011) followed by widespread public mockery (see Galak, LeBoeuf, Nelson, & Simmons, in press; Wagenmakers, Wetzels, Borsboom, & van der Maas, 2011), reports by Wicherts and colleagues that psychologists are often unwilling or unable to share their published data for reanalysis (Wicherts, Bakker, & Molenaar, 2011; see also Wicherts, Borsboom, Kats, & Molenaar, 2006), and the publication of an important article in Psychological Science showing how easily researchers can, in the absence of any real effects, nonetheless obtain statistically significant differences through various questionable research practices (QRPs) such as exploring multiple dependent variables or covariates and only reporting these when they yield significant results (Simmons, Nelson, & Simonsohn, 2011). 
For those psychologists who expected that the embarrassments of 2011 would soon recede into memory, 2012 offered instead a quick plunge from bad to worse, with new indications of outright fraud in the field of social cognition (Simonsohn, 2012), an article in Psychological Science showing that many psychologists admit to engaging in at least some of the QRPs examined by Simmons and colleagues (John, Loewenstein, & Prelec, 2012), troubling new meta-analytic evidence suggesting that the QRPs described by Simmons and colleagues may even be leaving telltale signs visible in the distribution of p values in the psychological literature (Masicampo & Lalande, in press; Simonsohn, 2012), and an acrimonious dust-up in science magazines and blogs centered around the problems some investigators were having in replicating well-known results from the field of social cognition (Bower, 2012; Yong, 2012). Although the very public problems experienced by psychology over this 2-year period are embarrassing to those of us working in the field, some have found comfort in the fact that, over the same period, similar concerns have been arising across the scientific landscape (triggered by revelations that will be described shortly). Some of the suspected causes of unreplicability, such as publication bias (the tendency to publish only positive findings) have been discussed for years; in fact, the phrase file-drawer problem was first coined by a distinguished psychologist several decades ago (Rosenthal, 1979). However, many have speculated that these problems have been exacerbated in recent years as academia reaps the harvest of a hypercompetitive academic climate and an incentive scheme that provides rich rewards for overselling one’s work and few rewards at all for caution and circumspection (see Giner-Sorolla, 2012, this issue). 
Equally disturbing, investigators seem to be replicating each other's work even less often than they did in the past, again presumably reflecting an incentive scheme gone askew (a point discussed in several articles in this issue, e.g., Makel, Plucker, & Hegarty, 2012). The frequency with which errors appear in the psychological literature is not presently known, but a number of facts suggest it might be disturbingly high. Ioannidis (2005) has shown through simple mathematical modeling that any scientific field that ignores replication can easily come to the miserable state wherein (as the title of his most famous article puts it) “most published research findings are false” (see also Ioannidis, 2012, this issue, and Pashler & Harris, 2012, this issue). Meanwhile, reports emerging from cancer research have made such grim scenarios seem more plausible: In 2012, several large pharmaceutical companies revealed that their efforts to replicate exciting preclinical findings from published academic studies in cancer biology were only rarely verifying the original results (Begley & Ellis, 2012; see also Osherovich, 2011; Prinz, Schlange, & Asadullah, 2011).


Quarterly Journal of Experimental Psychology | 1989

Chronometric evidence for central postponement in temporally overlapping tasks

Harold Pashler; James C. Johnston

When the stimuli from two tasks arrive in rapid succession (the overlapping tasks paradigm), response delays are typically observed. Two general types of models have been proposed to account for these delays. Postponement models suppose that processing stages in the second task are delayed due to a single-channel bottleneck. Capacity-sharing models suppose that processing on both tasks occurs at reduced rates because of sharing of common resources. Postponement models make strong and distinctive predictions for the behaviour of variables slowing particular second-task stages, when assessed in single- and dual-task conditions. In Experiment 1, subjects were required to make manual classification responses to a tone (S1) and a letter (S2), presented at stimulus onset asynchronies of 50, 100, and 400 msec, making R1 responses to S1 as promptly as possible. The second response, R2, but not R1, was delayed in the dual task condition, and the effects of two S2 variables (degradation and repetition) on R2 response times in dual- and single-task conditions closely matched the predictions of a postponement model with a processing bottleneck at the decision/response-selection stage. In Experiment 2, subjects were encouraged to emit both responses close together in time. Use of this response grouping procedure had little effect on the magnitude of R2 response times, or on the pattern of stimulus factor effects on R2, supporting the hypothesis that the same underlying postponement process was operating. R1 response times were, however, dramatically delayed, and were now affected by S2 difficulty variables. The results provide strong support for postponement models of dual-task interference in the overlapping tasks paradigm, even when response times are delayed on both tasks.
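The postponement model's signature prediction can be sketched in a few lines of code (a toy model with made-up stage durations, not the parameters estimated in the experiments): because Task 2's central stage must wait for Task 1's central stage to finish, slowing an early Task 2 stage (e.g., by degrading S2) is absorbed into the waiting time at short stimulus onset asynchronies (SOAs) but shows its full effect at long SOAs.

```python
# Toy single-channel bottleneck model of the overlapping-tasks (PRP) paradigm.
# Stage durations (msec) are illustrative assumptions only.
def rt2(soa, p1=100, c1=150, p2=100, c2=150, m2=100):
    central1_done = p1 + c1                  # Task 1 occupies the bottleneck
    central2_start = max(soa + p2, central1_done)
    return central2_start + c2 + m2 - soa    # RT2 measured from S2 onset

# Degrading S2 lengthens its perceptual stage p2 by 50 msec.
for soa in (50, 100, 400):
    print(soa, rt2(soa), rt2(soa, p2=150))
```

At SOA = 50, the 50-msec degradation effect vanishes into the bottleneck slack (identical RT2); at SOA = 400 it carries through in full, which is the pattern the abstract reports for the dual- versus single-task conditions.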


Memory & Cognition | 1992

The influence of retrieval on retention

Mark Carrier; Harold Pashler

Four experiments tested the hypothesis that successful retrieval of an item from memory affects retention only because the retrieval provides an additional presentation of the target item. Two methods of learning paired associates were compared. In the pure study trial (pure ST condition) method, both items of a pair were presented simultaneously for study. In the test trial/study trial (TTST condition) method, subjects attempted to retrieve the response term during a period in which only the stimulus term was present (and the response term of the pair was presented after a 5-sec delay). Final retention of target items was tested with cued-recall tests. In Experiment 1, there was a reliable advantage in final testing for nonsense-syllable/number pairs in the TTST condition over pairs in the pure ST condition. In Experiment 2, the same result was obtained with Eskimo/English word pairs. This benefit of the TTST condition was not apparently different for final retrieval after 5 min or after 24 h. Experiments 3 and 4 ruled out two artifactual explanations of the TTST advantage observed in the first two experiments. Because performing a memory retrieval (TTST condition) led to better performance than pure study (pure ST condition), the results reject the hypothesis that a successful retrieval is beneficial only to the extent that it provides another study experience.


Attention Perception & Psychophysics | 1992

Improvement in line orientation discrimination is retinally local but dependent on cognitive set

Ling-po Shiu; Harold Pashler

The ability of human observers to discriminate the orientation of a pair of straight lines differing by 3° improved with practice. The improvement did not transfer across hemifield or across quadrants within the same hemifield. The practice effect occurred whether or not observers were given feedback. However, orientation discrimination did not improve when observers attended to brightness rather than orientation of the lines. This suggests that cognitive set affects tuning in retinally local orientation channels (perhaps by guiding some form of unsupervised learning mechanism) and that retinotopic feature extraction may not be wholly preattentive.


Perspectives on Psychological Science | 2012

Is the Replicability Crisis Overblown? Three Arguments Examined

Harold Pashler; Christine R. Harris

We discuss three arguments voiced by scientists who view the current outpouring of concern about replicability as overblown. The first idea is that the adoption of a low alpha level (e.g., 5%) puts reasonable bounds on the rate at which errors can enter the published literature, making false-positive effects rare enough to be considered a minor issue. This, we point out, rests on statistical misunderstanding: The alpha level imposes no limit on the rate at which errors may arise in the literature (Ioannidis, 2005b). Second, some argue that whereas direct replication attempts are uncommon, conceptual replication attempts are common—providing an even better test of the validity of a phenomenon. We contend that performing conceptual rather than direct replication attempts interacts insidiously with publication bias, opening the door to literatures that appear to confirm the reality of phenomena that in fact do not exist. Finally, we discuss the argument that errors will eventually be pruned out of the literature if the field would just show a bit of patience. We contend that there are no plausible concrete scenarios to back up such forecasts and that what is needed is not patience, but rather systematic reforms in scientific practice.
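The statistical misunderstanding behind the first argument can be made concrete with a short calculation in the style of Ioannidis (2005). The numbers below are illustrative assumptions, not figures from the article: the alpha level caps false positives per tested null hypothesis, but the share of *published positive findings* that are false also depends on statistical power and on how many tested hypotheses are true to begin with.

```python
# Illustrative assumptions (not from the article):
alpha = 0.05         # false-positive rate per tested null effect
power = 0.35         # probability of detecting a real effect
prior_true = 0.10    # fraction of tested hypotheses that are actually true

true_positives = prior_true * power          # real effects, detected
false_positives = (1 - prior_true) * alpha   # null effects, "detected"
false_share = false_positives / (true_positives + false_positives)
print(f"False findings among published positives: {false_share:.0%}")
```

With these (not implausible) inputs, over half of the positive findings entering the literature would be false, despite the nominal 5% alpha, which is precisely why the alpha level imposes no limit on the error rate in the literature.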

Collaboration

Harold Pashler's top co-authors:

Doug Rohrer (University of South Florida)
Michael C. Mozer (University of Colorado Boulder)
Edward Vul (University of California)
Liqiang Huang (The Chinese University of Hong Kong)
John T. Wixted (University of California)
Noriko Coburn (University of California)