
Publication


Featured research published by Brian J. Scholl.


Trends in Cognitive Sciences | 2000

Perceptual causality and animacy

Brian J. Scholl; Patrice D. Tremoulet

Certain simple visual displays consisting of moving 2-D geometric shapes can give rise to percepts with high-level properties such as causality and animacy. This article reviews recent research on such phenomena, which began with the classic work of Michotte and of Heider and Simmel. The importance of such phenomena stems in part from the fact that these interpretations seem to be largely perceptual in nature - to be fairly fast, automatic, irresistible and highly stimulus driven - despite the fact that they involve impressions typically associated with higher-level cognitive processing. This research suggests that just as the visual system works to recover the physical structure of the world by inferring properties such as 3-D shape, so too does it work to recover the causal and social structure of the world by inferring properties such as causality and animacy.


Psychological Review | 2005

What You See Is What You Set: Sustained Inattentional Blindness and the Capture of Awareness.

Steven B. Most; Brian J. Scholl; Erin R. Clifford; Daniel J. Simons

This article reports a theoretical and experimental attempt to relate and contrast 2 traditionally separate research programs: inattentional blindness and attention capture. Inattentional blindness refers to failures to notice unexpected objects and events when attention is otherwise engaged. Attention capture research has traditionally used implicit indices (e.g., response times) to investigate automatic shifts of attention. Because attention capture usually measures performance whereas inattentional blindness measures awareness, the 2 fields have existed side by side with no shared theoretical framework. Here, the authors propose a theoretical unification, adapting several important effects from the attention capture literature to the context of sustained inattentional blindness. Although some stimulus properties can influence noticing of unexpected objects, the most influential factor affecting noticing is a person's own attentional goals. The authors conclude that many--but not all--aspects of attention capture apply to inattentional blindness but that these 2 classes of phenomena remain importantly distinct.


Journal of Experimental Psychology: General | 2005

The Automaticity of Visual Statistical Learning

Nicholas B. Turk-Browne; Justin A. Junge; Brian J. Scholl

The visual environment contains massive amounts of information involving the relations between objects in space and time, and recent studies of visual statistical learning (VSL) have suggested that this information can be automatically extracted by the visual system. The experiments reported in this article explore the automaticity of VSL in several ways, using both explicit familiarity and implicit response-time measures. The results demonstrate that (a) the input to VSL is gated by selective attention, (b) VSL is nevertheless an implicit process because it operates during a cover task and without awareness of the underlying statistical patterns, and (c) VSL constructs abstracted representations that are then invariant to changes in extraneous surface features. These results fuel the conclusion that VSL both is and is not automatic: It requires attention to select the relevant population of stimuli, but the resulting learning then occurs without intent or awareness.
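
To make the notion of statistical structure concrete, here is a minimal, illustrative Python sketch of one common VSL design: shapes grouped into fixed triplets, so that transitions within a triplet are far more predictable than transitions between triplets. The triplet layout and shape labels are assumptions for illustration, not details taken from this abstract.

    # Illustrative sketch only; the triplet design and labels are assumptions,
    # not details from the abstract.
    import random
    from collections import defaultdict

    shapes = list("ABCDEFGHIJKL")                          # 12 arbitrary shape labels
    triplets = [shapes[i:i + 3] for i in range(0, 12, 3)]  # 4 fixed triplets

    # Build a "structured" stream by concatenating randomly chosen triplets.
    stream = []
    for _ in range(1000):
        stream.extend(random.choice(triplets))

    # Estimate transitional probabilities P(next | current) from the stream.
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(stream, stream[1:]):
        counts[a][b] += 1

    def transition_prob(a, b):
        total = sum(counts[a].values())
        return counts[a][b] / total if total else 0.0

    print(transition_prob("A", "B"))   # within-triplet transition: ~1.0
    print(transition_prob("C", "D"))   # between-triplet transition: ~0.25

An unstructured control stream would simply shuffle the same shapes, flattening all transitional probabilities toward chance; regularities of roughly this kind are what VSL experiments manipulate.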


Cognitive Psychology | 1999

Tracking Multiple Items Through Occlusion: Clues to Visual Objecthood

Brian J. Scholl; Zenon W. Pylyshyn

In three experiments, subjects attempted to track multiple items as they moved independently and unpredictably about a display. Performance was not impaired when the items were briefly (but completely) occluded at various times during their motion, suggesting that occlusion is taken into account when computing enduring perceptual objecthood. Unimpaired performance required the presence of accretion and deletion cues along fixed contours at the occluding boundaries. Performance was impaired when items were present in the visual field at the same times and to the same degrees as in the occlusion conditions, but disappeared and reappeared in ways which did not implicate the presence of occluding surfaces (e.g., by imploding and exploding into and out of existence instead of accreting and deleting along a fixed contour). Unimpaired performance did not require visible occluders (i.e., Michotte's tunnel effect) or globally consistent occluder positions. We discuss implications of these results for theories of objecthood in visual attention.


Psychological Science | 2001

How Not to Be Seen: The Contribution of Similarity and Selective Ignoring to Sustained Inattentional Blindness

Steven B. Most; Daniel J. Simons; Brian J. Scholl; Rachel Jimenez; Erin R. Clifford; Christopher F. Chabris

When people attend to objects or events in a visual display, they often fail to notice an additional, unexpected, but fully visible object or event in the same display. This phenomenon is now known as inattentional blindness. We present a new approach to the study of sustained inattentional blindness for dynamic events in order to explore the roles of similarity, distinctiveness, and attentional set in the detection of unexpected objects. In Experiment 1, we found that the similarity of an unexpected object to other objects in the display influences attentional capture: The more similar an unexpected object is to the attended items, and the greater its difference from the ignored items, the more likely it is that people will notice it. Experiment 2 explored whether this effect of similarity is driven by selective ignoring of irrelevant items or by selective focusing on attended items. The results of Experiment 3 suggest that the distinctiveness of the unexpected object alone cannot entirely account for the similarity effects found in the first two experiments; when attending to black items or white items in a dynamic display, nearly 30% of observers failed to notice a bright red cross move across the display, even though it had a unique color, luminance, shape, and motion trajectory and was visible for 5 s. Together, the results suggest that inattentional blindness for ongoing dynamic events depends both on the similarity of the unexpected object to the other objects in the display and on the observer's attentional set.


Journal of Cognitive Neuroscience | 2009

Neural evidence of statistical learning: Efficient detection of visual regularities without awareness

Nicholas B. Turk-Browne; Brian J. Scholl; Marvin M. Chun; Marcia K. Johnson

Our environment contains regularities distributed in space and time that can be detected by way of statistical learning. This unsupervised learning occurs without intent or awareness, but little is known about how it relates to other types of learning, how it affects perceptual processing, and how quickly it can occur. Here we use fMRI during statistical learning to explore these questions. Participants viewed statistically structured versus unstructured sequences of shapes while performing a task unrelated to the structure. Robust neural responses to statistical structure were observed, and these responses were notable in four ways: First, responses to structure were observed in the striatum and medial temporal lobe, suggesting that statistical learning may be related to other forms of associative learning and relational memory. Second, statistical regularities yielded greater activation in category-specific visual regions (object-selective lateral occipital cortex and word-selective ventral occipito-temporal cortex), demonstrating that these regions are sensitive to information distributed in time. Third, evidence of learning emerged early during familiarization, showing that statistical learning can operate very quickly and with little exposure. Finally, neural signatures of learning were dissociable from subsequent explicit familiarity, suggesting that learning can occur in the absence of awareness. Overall, our findings help elucidate the underlying nature of statistical learning.


The Journal of Neuroscience | 2010

Implicit Perceptual Anticipation Triggered by Statistical Learning

Nicholas B. Turk-Browne; Brian J. Scholl; Marcia K. Johnson; Marvin M. Chun

Our environments are highly regular in terms of when and where objects appear relative to each other. Statistical learning allows us to extract and represent these regularities, but how this knowledge is used by the brain during ongoing perception is unclear. We used rapid event-related fMRI to measure hemodynamic responses to individual visual images in a continuous stream that contained sequential contingencies. Sixteen human observers encountered these statistical regularities while performing an unrelated cognitive task, and were unaware of their existence. Nevertheless, the right anterior hippocampus showed greater hemodynamic responses to predictive stimuli, providing evidence for implicit anticipation as a consequence of unsupervised statistical learning. Hippocampal anticipation based on predictive stimuli correlated with subsequent processing of the predicted stimuli in occipital and parietal cortex, and anticipation in additional brain regions correlated with facilitated object recognition as reflected in behavioral priming. Additional analyses suggested that implicit perceptual anticipation does not contribute to explicit familiarity, but can result in predictive potentiation of category-selective ventral visual cortex. Overall, these findings show that future-oriented processing can arise incidentally during the perception of statistical regularities.


Psychological Science | 2003

Attentive Tracking of Objects Versus Substances

Kristy vanMarle; Brian J. Scholl

Recent research in vision science, infant cognition, and word learning suggests a special role for the processing of discrete objects. But what counts as an object? Answers to this question often depend on contrasting object-based processing with the processing of spatial areas or unbound visual features. In infant cognition and word learning, though, another salient contrast has been between rigid cohesive objects and nonsolid substances. Whereas objects may move from one location to another, a nonsolid substance must pour from one location to another. In the study reported here, we explored whether attentive tracking processes are sensitive to dynamic information of this type. Using a multiple-object tracking task, we found that subjects could easily track four items in a display of eight identical unpredictably moving entities that moved as discrete objects from one location to another, but could not track similar entities that noncohesively “poured” from one location to another—even when the items in both conditions followed the same trajectories at the same speeds. Other conditions revealed that this inability to track multiple “substances” stemmed not from violations of rigidity or cohesiveness per se, because subjects were able to track multiple noncohesive collections and multiple nonrigid deforming objects. Rather, the impairment was due to the dynamic extension and contraction during the substancelike motion, which rendered the location of the entity ambiguous. These results demonstrate a convergence between processes of midlevel adult vision and infant cognition, and in general help to clarify what can count as a persisting dynamic object of attention.


Journal of Experimental Psychology: General | 2005

The Role of Salience in the Extraction of Algebraic Rules

Ansgar D. Endress; Brian J. Scholl; Jacques Mehler

Recent research suggests that humans and other animals have sophisticated abilities to extract both statistical dependencies and rule-based regularities from sequences. Most of this research stresses the flexibility and generality of such processes. Here the authors take up an equally important project, namely, to explore the limits of such processes. As a case study for rule-based generalizations, the authors demonstrate that only repetition-based structures with repetitions at the edges of sequences (e.g., ABCDEFF but not ABCDDEF) can be reliably generalized, although token repetitions can easily be discriminated at both sequence edges and middles. This finding suggests limits on rule-based sequence learning and new interpretations of earlier work alleging rule learning in infants. Rather than implementing a computerlike, formal process that operates over all patterns equally well, rule-based learning may be a highly constrained and piecemeal process driven by perceptual primitives--specialized type operations that are highly sensitive to perceptual factors.
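
The edge constraint can be stated very concretely. The sketch below is illustrative only: the two example sequences come from the abstract, but the helper functions are hypothetical, not from the paper.

    # Illustrative helpers (hypothetical, not from the paper).
    def repetition_positions(seq):
        """Indices i where seq[i] == seq[i + 1], i.e., immediate token repetitions."""
        return [i for i in range(len(seq) - 1) if seq[i] == seq[i + 1]]

    def repetition_at_edge(seq):
        """True if an immediate repetition occurs at the start or end of the sequence."""
        return any(i == 0 or i == len(seq) - 2 for i in repetition_positions(seq))

    print(repetition_at_edge("ABCDEFF"))   # True:  repetition at the final edge
    print(repetition_at_edge("ABCDDEF"))   # False: repetition in the middle

The paper's claim is that only structures like the first are reliably generalized, even though both kinds of repetition are easy to detect in isolation.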


Journal of Experimental Psychology: Learning, Memory and Cognition | 2008

Multidimensional Visual Statistical Learning

Nicholas B. Turk-Browne; Phillip Isola; Brian J. Scholl; Teresa A. Treat

Recent studies of visual statistical learning (VSL) have demonstrated that statistical regularities in sequences of visual stimuli can be automatically extracted, even without intent or awareness. Despite much work on this topic, however, several fundamental questions remain about the nature of VSL. In particular, previous experiments have not explored the underlying units over which VSL operates. In a sequence of colored shapes, for example, does VSL operate over each feature dimension independently, or over multidimensional objects in which color and shape are bound together? The studies reported here demonstrate that VSL can be both object-based and feature-based, in systematic ways based on how different feature dimensions covary. For example, when each shape covaried perfectly with a particular color, VSL was object-based: Observers expressed robust VSL for colored-shape sub-sequences at test but failed when the test items consisted of monochromatic shapes or color patches. When shape and color pairs were partially decoupled during learning, however, VSL operated over features: Observers expressed robust VSL when the feature dimensions were tested separately. These results suggest that VSL is object-based, but that sensitivity to feature correlations in multidimensional sequences (possibly another form of VSL) may in turn help define what counts as an object.
