Mazyar Fallah
University of Michigan
Publications
Featured research published by Mazyar Fallah.
Neuron | 2003
Tirin Moore; Katherine M. Armstrong; Mazyar Fallah
Covert spatial attention produces biases in perceptual performance and neural processing of behaviorally relevant stimuli in the absence of overt orienting movements. The neural mechanism that gives rise to these effects is poorly understood. This paper surveys past evidence of a relationship between oculomotor control and visual spatial attention and more recent evidence of a causal link between the control of saccadic eye movements by frontal cortex and covert visual selection. Both suggest that the mechanism of covert spatial attention emerges as a consequence of the reciprocal interactions between neural circuits primarily involved in specifying the visual properties of potential targets and those involved in specifying the movements needed to fixate them.
The Journal of Neuroscience | 2007
Clara Bodelón; Mazyar Fallah; John H. Reynolds
The visual system decomposes stimuli into their constituent features, represented by neurons with different feature selectivities. How the signals carried by these feature-selective neurons are integrated into coherent object representations is unknown. To constrain the set of possible integrative mechanisms, we quantified the temporal resolution of perception for color, orientation, and conjunctions of these two features. We find that temporal resolution is measurably higher for each feature than for their conjunction, indicating that time is required to integrate features into a perceptual whole. This finding places temporal limits on the mechanisms that could mediate this form of perceptual integration.
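As a rough illustration of how a temporal resolution of the kind measured here can be quantified, the sketch below fits a descending logistic to accuracy as a function of stimulus alternation rate and reads off the rate at which performance falls to a criterion level. The proportion-correct data, the 75% criterion, and the assumption of a two-alternative (chance = 0.5) judgment are illustrative, not values from the paper.

```python
# Hedged sketch: estimating a temporal resolution threshold from accuracy
# measured at several stimulus alternation rates. All numbers are invented.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(rate_hz, threshold, slope):
    """Descending logistic: accuracy falls from near-perfect toward chance
    (0.5) as the alternation rate increases; accuracy is 75% at threshold."""
    return 0.5 + 0.5 / (1.0 + np.exp(slope * (rate_hz - threshold)))

# Invented proportion-correct data for a single-feature judgment.
rates = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])   # alternation rate (Hz)
accuracy = np.array([0.98, 0.95, 0.85, 0.70, 0.58, 0.52])

(threshold_hz, slope), _ = curve_fit(psychometric, rates, accuracy, p0=[7.0, 1.0])
print(f"Estimated temporal resolution: ~{threshold_hz:.1f} Hz (75%-correct point)")
```

Running the same fit on single-feature and conjunction data and comparing the two thresholds would express the paper's key contrast in these terms.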
Nature Neuroscience | 2006
Heather Jordan; Mazyar Fallah; Gene R. Stoner
Human observers adapted to complex biological motions that distinguish males from females: viewing the gait of one gender biased judgments of subsequent gaits toward the opposite gender. This adaptation was not simply due to local features of the stimuli but instead relied upon the global motion of the figures. These results suggest the existence of neurons selective for gender and demonstrate that gender-from-motion judgments are not fixed but depend upon recent viewing history.
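Aftereffects of this kind are commonly quantified as a shift in the point of subjective equality (PSE) along a morph continuum between male and female walkers. The sketch below, with invented response proportions, shows that generic analysis; it is not the paper's own procedure.

```python
# Hypothetical sketch: compare PSEs of "judged male" psychometric curves
# before and after adapting to a male gait. Data are invented.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, pse, slope):
    """Proportion of 'male' judgments vs. morph level
    (0 = extreme female walker, 1 = extreme male walker)."""
    return 1.0 / (1.0 + np.exp(-slope * (x - pse)))

morph = np.linspace(0.0, 1.0, 9)
baseline = np.array([0.02, 0.05, 0.12, 0.30, 0.52, 0.71, 0.88, 0.95, 0.98])
# After adapting to a male gait, ambiguous walkers are judged female more
# often, so the curve shifts toward the male end of the continuum.
post_adapt = np.array([0.01, 0.03, 0.06, 0.15, 0.33, 0.55, 0.76, 0.90, 0.97])

(base_pse, _), _ = curve_fit(logistic, morph, baseline, p0=[0.5, 10.0])
(post_pse, _), _ = curve_fit(logistic, morph, post_adapt, p0=[0.5, 10.0])
print(f"PSE shift after adapting to a male gait: {post_pse - base_pse:+.2f} morph units")
```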
Proceedings of the National Academy of Sciences of the United States of America | 2007
Mazyar Fallah; Gene R. Stoner; John H. Reynolds
Macaque visual area V4 has been implicated in the selective processing of stimuli. Prior studies of selection in area V4 have used spatially separate stimuli, thus confounding selection of retinotopic location with selection of the stimulus at that location. We asked whether V4 neurons can selectively respond to one of two differently colored stimuli even when they are spatially superimposed. We find that delaying one of the two stimuli leads to selective processing of the delayed stimulus by area V4 neurons. This selective processing persists when the stimuli move together across the visual field, thereby successively activating different populations of neurons. We also find that this effect is not a spatially global form of feature-based selection. We conclude that selective processing in area V4 is neither exclusively spatial nor feature-based and may thus be surface- or object-based.
Vision Research | 2003
Jude F. Mitchell; Gene R. Stoner; Mazyar Fallah; John H. Reynolds
When two differently colored, superimposed patterns of dots rotate in opposite directions, this yields the percept of two superimposed transparent surfaces. If observers are cued to attend to one set of dots, they are impaired in making judgments about the other set. Since the two sets of dots are overlapping, the cueing effect cannot be explained by spatial attention. This has led to the interpretation that the impairment reflects surface-based attentional selection. However, recent single-unit recording studies in monkeys have found that attention can modulate the gain of neurons tuned for features such as color. Thus, rather than reflecting the selection of a surface, the behavioral effects might simply reflect a reduction in the gain of color channels selective for the color of the uncued set of dots (feature-based attention), as if viewing the surfaces through a colored filter. If so, then the impairment should be eliminated when the two surfaces are made the same color. Instead, we find that the impairment persists with no reduction in strength. Our findings thus rule out the color gain explanation.
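For readers who want a concrete picture of the stimulus class described above, here is a minimal sketch that generates two superimposed dot surfaces rotating in opposite directions. The dot count, field radius, rotation speed, and color labels are arbitrary assumptions, not the parameters used in the study.

```python
# Minimal sketch of a transparent-motion stimulus: two superimposed dot
# fields rotating in opposite directions. Parameters are illustrative.
import numpy as np

def make_dot_field(n_dots=100, radius=1.0, seed=0):
    """Random dot positions, in polar coordinates, uniform over a disc."""
    rng = np.random.default_rng(seed)
    angles = rng.uniform(0.0, 2.0 * np.pi, n_dots)
    radii = radius * np.sqrt(rng.uniform(0.0, 1.0, n_dots))
    return angles, radii

def rotate_frames(angles, radii, deg_per_frame, n_frames):
    """Return an (n_frames, n_dots, 2) array of x,y positions for a
    rigidly rotating dot surface."""
    step = np.deg2rad(deg_per_frame)
    frames = []
    for f in range(n_frames):
        a = angles + f * step
        frames.append(np.column_stack([radii * np.cos(a), radii * np.sin(a)]))
    return np.stack(frames)

# Two surfaces: one rotates clockwise, the other counterclockwise.
a1, r1 = make_dot_field(seed=1)
a2, r2 = make_dot_field(seed=2)
surface_red = rotate_frames(a1, r1, deg_per_frame=+2.0, n_frames=60)
surface_green = rotate_frames(a2, r2, deg_per_frame=-2.0, n_frames=60)
print(surface_red.shape, surface_green.shape)  # (60, 100, 2) each
```

Rendering the two coordinate streams in different colors (or, as in the key condition, the same color) on each frame yields the percept of two transparent surfaces sliding over one another.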
Frontiers in Computational Neuroscience | 2014
Carolyn J. Perry; Mazyar Fallah
The visual system is split into two processing streams: a ventral stream that receives color and form information and a dorsal stream that receives motion information. Each stream processes that information hierarchically, with each stage building upon the previous one. In the ventral stream, this leads to the formation of object representations that ultimately allow for object recognition regardless of changes in the surrounding environment. In the dorsal stream, this hierarchical processing has classically been thought to lead to the computation of complex motion in three dimensions. However, there is evidence that both dorsal and ventral stream information is integrated into motion computation processes, giving rise to intermediate object representations that facilitate object selection and decision-making mechanisms in the dorsal stream. First, we review the hierarchical processing of motion along the dorsal stream and the building up of object representations along the ventral stream. Then, we discuss recent work on the integration of ventral and dorsal stream features that leads to intermediate object representations in the dorsal stream. Finally, we propose a framework describing how, and at what stage, different features are integrated into dorsal visual stream object representations. Determining how features are integrated along the dorsal stream is necessary to understand not only how the dorsal stream builds up an object representation but also which computations are performed on object representations rather than on local features.
PLOS ONE | 2010
Illia Tchernikov; Mazyar Fallah
Visual processing of color starts at the cones in the retina and continues through ventral stream visual areas, called the parvocellular pathway. Motion processing also starts in the retina but continues through dorsal stream visual areas, called the magnocellular system. Color and motion processing are functionally and anatomically discrete. Previously, motion processing areas MT and MST have been shown to have no color selectivity to a moving stimulus; the neurons were colorblind whenever color was presented along with motion. This occurs when the stimuli are luminance-defined against the background and is considered achromatic motion processing. Is motion processing independent of color processing? We find that motion processing is intrinsically modulated by color. Color modulated the smooth pursuit eye movements produced upon saccading to an aperture containing a surface of coherently moving dots on a black background. Furthermore, when two surfaces that differed in color were present, one surface was automatically selected on the basis of a color hierarchy. The strength of that selection depended upon the distance between the two colors in color space. A quantifiable color hierarchy for automatic target selection has wide-ranging implications, from sports to advertising to human-computer interfaces.
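The paper relates selection strength to the separation between the two colors in color space. The actual space and metric are not specified here, so the sketch below uses a simple hue-angle distance in HSV (via the standard-library colorsys module) purely as a stand-in to show how color pairs could be ranked by separation.

```python
# Illustrative only: ranking color pairs by hue separation as a stand-in
# for the color-distance measure described above. The metric and example
# colors are assumptions; the study's actual color space may differ.
import colorsys

def hue_angle_deg(rgb):
    """Hue of an RGB triple (components in 0..1), in degrees."""
    h, _, _ = colorsys.rgb_to_hsv(*rgb)
    return h * 360.0

def hue_distance(rgb_a, rgb_b):
    """Shortest angular distance between two hues, in degrees."""
    d = abs(hue_angle_deg(rgb_a) - hue_angle_deg(rgb_b)) % 360.0
    return min(d, 360.0 - d)

red, green, blue, yellow = (1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 0)
pairs = {"red vs green": (red, green),
         "red vs yellow": (red, yellow),
         "green vs blue": (green, blue)}

# Larger separations would be expected to drive stronger automatic
# selection of one surface over the other.
for name, (a, b) in sorted(pairs.items(), key=lambda kv: -hue_distance(*kv[1])):
    print(f"{name}: {hue_distance(a, b):.0f} deg apart")
```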
Progress in Brain Research | 2005
Gene R. Stoner; Jude F. Mitchell; Mazyar Fallah; John H. Reynolds
Visuomotor processing is selective: only a small subset of the stimuli that impinge on the retinae reach perceptual awareness and/or elicit behavioral responses. Both binocular rivalry and attention involve visual selection, but they affect perception quite differently. During rivalry, awareness alternates between different stimuli presented to the two eyes. In contrast, attending to one of two stimuli impairs discrimination of the ignored stimulus without causing it to perceptually disappear. We review experiments demonstrating that, despite their phenomenological differences, attention and rivalry depend upon shared competitive selection mechanisms. These experiments, moreover, reveal stimulus selection that is surface-based and requires coordination between the different neuronal populations that respond as a surface changes its attributes (type of motion) over time. This surface-based selection, in turn, biases interocular competition, favoring the eye whose image is consistent with the selected surface. The review ends with speculation about the role of the thalamus in mediating this dynamic coordination, as well as thoughts about what underlies the differences in the phenomenology of selective attention and rivalry.
Frontiers in Psychology | 2012
Carolyn J. Perry; Mazyar Fallah
When two superimposed surfaces of dots move in different directions, the perceived directions are shifted away from each other. This perceptual illusion has been termed direction repulsion and is thought to be due to mutual inhibition between the representations of the two directions. It has further been shown that a speed difference between the two surfaces attenuates direction repulsion. As speed and direction are both necessary components of representing motion, the reduction in direction repulsion can be attributed to the additional motion information strengthening the representations of the two directions and thus reducing the mutual inhibition. We tested whether bottom-up attention and top-down task demands, in the form of color differences between the two surfaces, would also enhance motion processing and thereby reduce direction repulsion. We found that the addition of a color difference neither improved direction discrimination nor reduced direction repulsion. However, we did find that adding a color difference improved performance on the task. We hypothesized that this performance difference was due to the limited presentation time of the stimuli. We tested this in a follow-up experiment in which we varied the presentation time to determine the duration needed to successfully perform the task with and without the color difference. As expected, color segmentation reduced the amount of time needed to process and encode both directions of motion. Thus, we find a dissociation between the effects of attention on the speed of processing and on the conscious perception of direction. We propose four potential mechanisms wherein color speeds figure-ground segmentation of an object, attentional switching between objects, direction discrimination, and/or the accumulation of motion information for decision-making, without affecting conscious perception of direction. Potential neural bases are also explored.
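The mutual-inhibition account mentioned above can be illustrated with a toy population model: each surface evokes a bump of activity across direction-tuned units, each bump is suppressed by a scaled copy of the other, and the decoded directions shift apart. The tuning width, inhibition strength, and population-vector readout below are arbitrary modeling assumptions, not the mechanism proposed in the paper.

```python
# Toy model of direction repulsion via mutual inhibition between the
# population responses to two superimposed motion directions.
import numpy as np

prefs = np.arange(0.0, 360.0, 1.0)        # preferred directions of model units (deg)

def circ_diff(a, b):
    """Signed circular difference in degrees, wrapped to [-180, 180)."""
    return (a - b + 180.0) % 360.0 - 180.0

def population_response(direction, width=30.0):
    """Gaussian bump of activity over the direction-tuned population."""
    return np.exp(-0.5 * (circ_diff(prefs, direction) / width) ** 2)

def decoded_direction(activity):
    """Population-vector readout of the (non-negative) activity profile."""
    rad = np.deg2rad(prefs)
    return np.rad2deg(np.arctan2(np.sum(activity * np.sin(rad)),
                                 np.sum(activity * np.cos(rad)))) % 360.0

dir_a, dir_b = 80.0, 100.0                # two superimposed motion directions
inhibition = 0.5                          # cross-surface inhibition strength (assumed)

resp_a = np.clip(population_response(dir_a) - inhibition * population_response(dir_b), 0, None)
resp_b = np.clip(population_response(dir_b) - inhibition * population_response(dir_a), 0, None)

print(f"surface A decoded at {decoded_direction(resp_a):.1f} deg (true {dir_a})")
print(f"surface B decoded at {decoded_direction(resp_b):.1f} deg (true {dir_b})")
```

In this toy setup, weakening the inhibition term (as extra segmentation cues might do) shrinks the repulsion, which is the intuition the abstract builds on.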
Journal of Neurophysiology | 2015
Carolyn J. Perry; Lauren E. Sergio; J. Douglas Crawford; Mazyar Fallah
Often, the brain receives more sensory input than it can process simultaneously. Spatial attention helps overcome this limitation by preferentially processing input from a behaviorally relevant location. Recent neuropsychological and psychophysical studies suggest that attention is deployed to near-hand space much as the oculomotor system deploys attention to an upcoming gaze position. Here we provide the first neuronal evidence that the presence of a nearby hand enhances orientation selectivity in early visual processing area V2. When the hand was placed outside the receptive field, responses to the preferred orientation were significantly enhanced without a corresponding significant increase at the orthogonal orientation. Consequently, there was also a significant sharpening of orientation tuning. In addition, the presence of the hand reduced neuronal response variability. These results indicate that attention is automatically deployed to the space around a hand, improving orientation selectivity. Importantly, this appears to be optimal for motor control of the hand, as opposed to oculomotor mechanisms, which enhance responses without sharpening orientation selectivity. Effector-based mechanisms for visual enhancement thus support not only the spatiotemporal dissociation of gaze and reach but also the optimization of vision for their separate requirements in guiding movements.
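A common way to quantify the kind of tuning sharpening reported here is to fit a von Mises-style orientation tuning curve to responses recorded with and without the hand near the receptive field and to compare the concentration parameter (a larger kappa means narrower tuning). The firing rates below are synthetic and the analysis is only a sketch of that general approach, not the paper's method.

```python
# Hedged sketch: fit 180-deg-periodic tuning curves to synthetic responses
# and compare peak response and tuning sharpness (kappa) across conditions.
import numpy as np
from scipy.optimize import curve_fit

orientations = np.arange(0, 180, 22.5)            # tested orientations (deg)

def tuning(theta_deg, baseline, amplitude, pref_deg, kappa):
    """Orientation tuning with a 180-deg period (von Mises on 2*theta)."""
    delta = np.deg2rad(2.0 * (theta_deg - pref_deg))
    return baseline + amplitude * np.exp(kappa * (np.cos(delta) - 1.0))

# Synthetic mean firing rates (spikes/s): hand-near responses are larger
# at the preferred orientation but similar at the orthogonal orientation.
far_rates  = np.array([12, 18, 30, 45, 52, 44, 28, 16])
near_rates = np.array([12, 20, 36, 60, 72, 58, 34, 15])

p0 = [10.0, 40.0, 90.0, 2.0]
far_fit, _ = curve_fit(tuning, orientations, far_rates, p0=p0, maxfev=10000)
near_fit, _ = curve_fit(tuning, orientations, near_rates, p0=p0, maxfev=10000)

print(f"hand far : peak ~{far_fit[0] + far_fit[1]:.0f} sp/s, kappa {far_fit[3]:.2f}")
print(f"hand near: peak ~{near_fit[0] + near_fit[1]:.0f} sp/s, kappa {near_fit[3]:.2f}")
```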