Publication


Featured research published by Michael A. Cohen.


Trends in Cognitive Sciences | 2012

The attentional requirements of consciousness

Michael A. Cohen; Patrick Cavanagh; Marvin M. Chun; Ken Nakayama

It has been widely claimed that attention and awareness are doubly dissociable and that there is no causal relation between them. In support of this view are numerous claims of attention without awareness, and awareness without attention. Although there is evidence that attention can operate on or be drawn to unconscious stimuli, various recent findings demonstrate that there is no empirical support for awareness without attention. To properly test for awareness without attention, we propose that a stimulus be studied using a battery of tests based on diverse, mainstream paradigms from the current attention literature. When this type of analysis is performed, the evidence is fully consistent with a model in which attention is necessary, but not sufficient, for awareness.


Psychological Science | 2011

Natural-Scene Perception Requires Attention

Michael A. Cohen; George A. Alvarez; Ken Nakayama

Is visual attention required for visual consciousness? In the past decade, many researchers have claimed that awareness can arise in the absence of attention. This claim is largely based on the notion that natural scene (or “gist”) perception occurs without attention. This article presents evidence against this idea. We show that when observers perform a variety of demanding, sustained-attention tasks, inattentional blindness occurs for natural scenes. In addition, scene perception is impaired under dual-task conditions, but only when the primary task is sufficiently demanding. This finding suggests that previous studies that have been interpreted as demonstrating scene perception without attention failed to fully engage attention and that natural-scene perception does indeed require attention. Thus, natural-scene perception is not a preattentive process and cannot be used to support the idea of awareness without attention.


Journal of Vision | 2010

Distinguishing between parallel and serial accounts of multiple object tracking

Piers D. L. Howe; Michael A. Cohen; Yair Pinto; Todd S. Horowitz

Humans can track multiple moving objects. Is this accomplished by attending to all the objects at the same time or do we attend to each object in turn? We addressed this question using a novel application of the classic simultaneous-sequential paradigm. We considered a display in which objects moved for only part of the time. In one condition, the objects moved sequentially, whereas in the other condition they all moved and paused simultaneously. A parallel model would predict that the targets are tracked independently, so the tracking of one target should not be influenced by the movement of another target. Thus, one would expect equal performance in the two conditions. Conversely, a simple serial account of object tracking would predict that an observer's accuracy should be greater in the sequential condition because in that condition, at any one time, fewer targets are moving and thus need to be attended. In fact, in our experiments we observed performance in the simultaneous condition to be equal to or greater than the performance in the sequential condition. This occurred regardless of the number of targets or how the targets were positioned in the visual field. These results are more directly in line with a parallel account of multiple object tracking.


Psychonomic Bulletin & Review | 2011

Auditory and visual memory in musicians and nonmusicians

Michael A. Cohen; Karla K. Evans; Todd S. Horowitz; Jeremy M. Wolfe

Numerous studies have shown that musicians outperform nonmusicians on a variety of tasks. Here we provide the first evidence that musicians have superior auditory recognition memory for both musical and nonmusical stimuli, compared to nonmusicians. However, this advantage did not generalize to the visual domain. Previously, we showed that auditory recognition memory is inferior to visual recognition memory. Would this be true even for trained musicians? We compared auditory and visual memory in musicians and nonmusicians using familiar music, spoken English, and visual objects. For both groups, memory for the auditory stimuli was inferior to memory for the visual objects. Thus, although considerable musical training is associated with better musical and nonmusical auditory memory, it does not increase the ability to remember sounds to the levels found with visual stimuli. This suggests a fundamental capacity difference between auditory and visual recognition memory, with a persistent advantage for the visual domain.


Psychonomic Bulletin & Review | 2008

Telephone conversation impairs sustained visual attention via a central bottleneck

Melina A. Kunar; Randall Carter; Michael A. Cohen; Todd S. Horowitz

Recent research has shown that holding telephone conversations disrupts one’s driving ability. We asked whether this effect could be attributed to a visual attention impairment. In Experiment 1, participants conversed on a telephone or listened to a narrative while engaged in multiple object tracking (MOT), a task requiring sustained visual attention. We found that MOT was disrupted in the telephone conversation condition, relative to single-task MOT performance, but that listening to a narrative had no effect. In Experiment 2, we asked which component of conversation might be interfering with MOT performance. We replicated the conversation and single-task conditions of Experiment 1 and added two conditions in which participants heard a sequence of words over a telephone. In the shadowing condition, participants simply repeated each word in the sequence. In the generation condition, participants were asked to generate a new word based on each word in the sequence. Word generation interfered with MOT performance, but shadowing did not. The data indicate that telephone conversation disrupts attention at a central stage, the act of generating verbal stimuli, rather than at a peripheral stage, such as listening or speaking.


Attention Perception & Psychophysics | 2010

Direction information in multiple object tracking is limited by a graded resource

Todd S. Horowitz; Michael A. Cohen

Is multiple object tracking (MOT) limited by a fixed set of structures (slots), a limited but divisible resource, or both? Here, we answer this question by measuring the precision of the direction representation for tracked targets. The signature of a limited resource is a decrease in precision as the square root of the tracking load. The signature of fixed slots is a fixed precision. Hybrid models predict a rapid decrease to asymptotic precision. In two experiments, observers tracked moving disks and reported target motion direction by adjusting a probe arrow. We derived the precision of representation of correctly tracked targets using a mixture distribution analysis. Precision declined with target load according to the square-root law up to six targets. This finding is inconsistent with both pure and hybrid slot models. Instead, directional information in MOT appears to be limited by a continuously divisible resource.
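The competing predictions described above can be sketched numerically. In this hedged illustration, a limited-resource model has the standard deviation of direction reports grow with the square root of the tracking load, while a fixed-slot model predicts flat precision for correctly tracked targets; the 10-degree baseline SD and the six-target range are illustrative assumptions, not the paper's fitted values.

```python
import math

def resource_sd(n, base_sd=10.0):
    """Limited-resource model: the SD of direction reports grows
    with the square root of the number of tracked targets,
    so precision falls as 1/sqrt(n)."""
    return base_sd * math.sqrt(n)

def slot_sd(n, base_sd=10.0):
    """Fixed-slot model: precision of correctly tracked targets is
    constant; extra targets beyond the slot limit are simply
    guessed rather than tracked less precisely."""
    return base_sd

# Compare predicted report SD (degrees) across tracking loads
for n in range(1, 7):
    print(f"load {n}: resource SD = {resource_sd(n):5.1f}, slot SD = {slot_sd(n):5.1f}")
```

The paper's result, precision declining by the square-root law up to six targets, matches the `resource_sd` curve rather than the flat `slot_sd` line.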


Quarterly Journal of Experimental Psychology | 2009

The speed of free will

Todd S. Horowitz; Jeremy M. Wolfe; George A. Alvarez; Michael A. Cohen; Yoana Kuzmova

Do voluntary and task-driven shifts of attention have the same time course? In order to measure the time needed to voluntarily shift attention, we devised several novel visual search tasks that elicited multiple sequential attentional shifts. Participants could only respond correctly if they attended to the right place at the right time. In control conditions, search tasks were similar but participants were not required to shift attention in any order. Across five experiments, voluntary shifts of attention required 200–300 ms. Control conditions yielded estimates of 35–100 ms for task-driven shifts. We suggest that the slower speed of voluntary shifts reflects the “clock speed of free will”. Wishing to attend to something takes more time than shifting attention in response to sensory input.


Attention Perception & Psychophysics | 2011

The what–where trade-off in multiple-identity tracking

Michael A. Cohen; Yair Pinto; Piers D. L. Howe; Todd S. Horowitz

Observers are poor at reporting the identities of objects that they have successfully tracked (Pylyshyn, Visual Cognition, 11, 801–822, 2004; Scholl & Pylyshyn, Cognitive Psychology, 38, 259–290, 1999). Consequently, it has been claimed that objects are tracked in a manner that does not encode their identities (Pylyshyn, 2004). Here, we present evidence that disputes this claim. In a series of experiments, we show that attempting to track the identities of objects can decrease an observer’s ability to track the objects’ locations. This indicates that the mechanisms that track, respectively, the locations and identities of objects draw upon a common resource. Furthermore, we show that this common resource can be voluntarily distributed between the two mechanisms. This is clear evidence that the location- and identity-tracking mechanisms are not entirely dissociable.


Proceedings of the National Academy of Sciences of the United States of America | 2014

Processing multiple visual objects is limited by overlap in neural channels

Michael A. Cohen; Talia Konkle; Juliana Y. Rhee; Ken Nakayama; George A. Alvarez

Significance: Human cognition is inherently limited: only a finite amount of visual information can be processed at a given instant. What determines those limits? Here, we show that more objects can be processed when they are from different stimulus categories than when they are from the same category. This behavioral benefit maps directly onto the functional organization of the ventral visual pathway. These results suggest that our ability to process multiple items at once is limited by the extent to which those items compete with one another for neural representation. Broadly, these results provide strong evidence that the capacities and limitations of human behavior can be inferred from our functional neural architecture.

High-level visual categories (e.g., faces, bodies, scenes, and objects) have separable neural representations across the visual cortex. Here, we show that this division of neural resources affects the ability to simultaneously process multiple items. In a behavioral task, we found that performance was superior when items were drawn from different categories (e.g., two faces/two scenes) compared to when items were drawn from one category (e.g., four faces). The magnitude of this mixed-category benefit depended on which stimulus categories were paired together (e.g., faces and scenes showed a greater behavioral benefit than objects and scenes). Using functional neuroimaging (i.e., functional MRI), we showed that the size of the mixed-category benefit was predicted by the amount of separation between neural response patterns, particularly within occipitotemporal cortex. These results suggest that the ability to process multiple items at once is limited by the extent to which those items are represented by separate neural populations.
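Separation between neural response patterns, as used above, is commonly quantified as a correlation distance between voxel-wise activation vectors. This is a minimal sketch of that standard measure, not the paper's actual analysis pipeline; the two five-voxel patterns are hypothetical.

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length response patterns."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sx = [v - mx for v in x]
    sy = [v - my for v in y]
    num = sum(a * b for a, b in zip(sx, sy))
    den = math.sqrt(sum(a * a for a in sx) * sum(b * b for b in sy))
    return num / den

def pattern_separation(p1, p2):
    """Correlation distance (1 - r): higher values mean the two
    categories evoke more separable neural response patterns."""
    return 1.0 - pearson_r(p1, p2)

# Hypothetical voxel responses to two stimulus categories
faces = [1.2, 0.8, 0.1, 0.0, 0.9]
scenes = [0.0, 0.1, 1.1, 1.3, 0.2]
print(pattern_separation(faces, scenes))
```

Under the paper's account, category pairs with larger values of this kind of separation measure should show a larger mixed-category behavioral benefit.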


Attention Perception & Psychophysics | 2011

Does visual expertise improve visual recognition memory?

Karla K. Evans; Michael A. Cohen; Rosemary H. Tambouret; Todd S. Horowitz; Erica Kreindel; Jeremy M. Wolfe

In general, humans have impressive recognition memory for previously viewed pictures. Many people spend years becoming experts in highly specialized image sets. For example, cytologists are experts at searching micrographs filled with potentially cancerous cells and radiologists are experts at searching mammograms for indications of cancer. Do these experts develop robust visual long-term memory for their domain of expertise? If so, is this expertise specific to the trained image class, or do such experts possess generally superior visual memory? We tested recognition memory of cytologists, radiologists, and controls with no medical experience for three visual stimulus classes: isolated objects, scenes, and mammograms or micrographs. Experts were better than control observers at recognizing images from their domain, but their memory for those images was not particularly good (d′ ≈ 1.0) and was much worse than memory for objects or scenes (d′ > 2.0). Furthermore, experts were not better at recognizing scenes or isolated objects than control observers.
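The d′ values quoted above come from standard signal detection theory: sensitivity is the difference between the z-transformed hit rate and false-alarm rate. This sketch uses illustrative hit and false-alarm rates chosen only to land near the reported d′ levels; they are not the study's data.

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Illustrative rates: modest memory (d' near 1.0) vs. strong memory (d' above 2.0)
print(round(d_prime(0.69, 0.31), 2))  # roughly the expert level for medical images
print(round(d_prime(0.89, 0.11), 2))  # roughly the level for objects and scenes
```

A d′ of 0 means hits and false alarms are equally likely (no memory), so the gap between roughly 1.0 and above 2.0 reflects a substantial difference in recognition sensitivity.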

Collaboration

Dive into Michael A. Cohen's collaborations.

Top co-authors:

Todd S. Horowitz, Brigham and Women's Hospital
Jeremy M. Wolfe, Brigham and Women's Hospital
Yair Pinto, University of Amsterdam