
Publication


Featured research published by Arash Afraz.


Journal of Vision | 2009

The gender-specific face aftereffect is based in retinotopic not spatiotopic coordinates across several natural image transformations.

Arash Afraz; Patrick Cavanagh

In four experiments, we measured the gender-specific face aftereffect following subjects' eye movements, head rotations, or head movements toward the display, and following movement of the adapting stimulus itself to a new test location. In all experiments, the face aftereffect was strongest at the retinal position, orientation, and size of the adaptor. There was no advantage for the spatiotopic location in any experiment, nor was there an advantage for the location newly occupied by the adapting face after it moved in the final experiment. Nevertheless, the aftereffect showed a broad gradient of transfer across location, orientation, and size that, although centered on the retinotopic values of the adapting stimulus, covered ranges far exceeding the tuning bandwidths of neurons in early visual cortices. These results are consistent with a high-level site of adaptation (e.g., the FFA) where units of face analysis have modest coverage of the visual field, centered in retinotopic coordinates, but relatively broad tolerance for variations in size and orientation.


Proceedings of the National Academy of Sciences of the United States of America | 2015

Optogenetic and pharmacological suppression of spatial clusters of face neurons reveal their causal role in face gender discrimination

Arash Afraz; Edward S. Boyden; James J. DiCarlo

Significance: There exist subregions of the primate brain that contain neurons that respond more to images of faces than to other objects. These subregions are thought to support face-detection and discrimination behaviors. Although the role of these areas in telling faces apart from other objects is supported by direct evidence, their causal role in distinguishing faces from each other lacks direct experimental evidence. Using optogenetics, here we reveal their causal role in face-discrimination behavior and provide a mechanistic explanation for the process. This study is the first documentation of behavioral effects of optogenetic intervention in primate object-recognition behavior. The methods developed here facilitate the use of the technical advantages of optogenetics for future studies of high-level vision.

Neurons that respond more to images of faces than to nonface objects were identified in the inferior temporal (IT) cortex of primates three decades ago. Although it is hypothesized that perceptual discrimination between faces depends on the neural activity of IT subregions enriched with “face neurons,” such a causal link has not been directly established. Here, using optogenetic and pharmacological methods, we reversibly suppressed the neural activity in small subregions of the IT cortex of macaque monkeys performing a facial gender-discrimination task. Each type of intervention independently demonstrated that suppression of IT subregions enriched in face neurons induced a contralateral deficit in face gender-discrimination behavior. The same neural suppression of other IT subregions produced no detectable change in behavior. These results establish a causal link between the neural activity in IT face neuron subregions and face gender-discrimination behavior. Moreover, the demonstration that brief neural suppression of specific spatial subregions of IT induces behavioral effects opens the door for applying the technical advantages of optogenetics to a systematic attack on the causal relationship between IT cortex and high-level visual perception.


Journal of Vision | 2008

Topography of the motion aftereffect with and without eye movements.

Ali Ezzati; Ashkan Golzar; Arash Afraz

Although much is known about various properties of the motion aftereffect (MAE), there has been no systematic study of its topographic organization. In the current study, we first constructed a topographic map of the MAE to investigate its spatial properties in detail. To obtain a fine-grained map, we measured the MAE with small test stimuli presented at different loci after adaptation to motion in a large region of the visual field. We found that the strength of the MAE is highest on the internal edge of the adapted area. Our results show a sharper aftereffect boundary for shearing motion than for compression and expansion boundaries. In the second experiment, using a similar paradigm, we investigated topographic deformation of the MAE area after a single saccadic eye movement. Surprisingly, we found that the topographic map of the MAE splits into two separate regions after the saccade: one corresponds to the retinal location of the adapted stimulus and the other matches the spatial location of the adapted region on the display screen. The effect was stronger at the retinotopic location. The third experiment is essentially a replication of the second in a smaller zone, confirming the earlier results in individual subjects. Because the eccentricity of the spatiotopic area differed from that of the retinotopic area in the second experiment, Experiment 3 controlled for the effect of eccentricity and confirmed the major results of the second experiment.


Neuron | 2017

Navigating the Neural Space in Search of the Neural Code

Mehrdad Jazayeri; Arash Afraz

The advent of powerful perturbation tools, such as optogenetics, has created new frontiers for probing causal dependencies in neural and behavioral states. These approaches have significantly enhanced the ability to characterize the contribution of different cells and circuits to neural function in health and disease. They have shifted the emphasis of research toward causal interrogations and increased the demand for more precise and powerful tools to control and manipulate neural activity. Here, we clarify the conditions under which measurements and perturbations support causal inferences. We note that the brain functions at multiple scales and that causal dependencies may be best inferred with perturbation tools that interface with the system at the appropriate scale. Finally, we develop a geometric framework to facilitate the interpretation of causal experiments when brain perturbations do or do not respect the intrinsic patterns of brain activity. We describe the challenges and opportunities of applying perturbations in the presence of dynamics, and we close with a general perspective on navigating the activity space of neurons in the search for neural codes.


Trends in Cognitive Sciences | 2010

Attention Pointers: Response to Mayo and Sommer

Patrick Cavanagh; Amelia R. Hunt; Arash Afraz; Martin Rolfs

Mayo and Sommer [1] raise several interesting questions about our opinion article [2]. First, they propose that the receptive field (RF) sizes in saccade control areas are too large to support the localization of attentional benefits seen in behavioral studies. Clearly the RF sizes in these areas are large, but attentional resolution is correspondingly extremely crude [3]. However, the RF sizes in frontal eye fields (FEF) and lateral intraparietal (LIP) area are perhaps twice as large as the corresponding ‘attentional field’ at the same eccentricity.


Trends in Cognitive Sciences | 2010

Attentional pointers: response to Melcher

Patrick Cavanagh; Amelia R. Hunt; Arash Afraz; Martin Rolfs

Melcher appears to have misunderstood our opinion piece on attention pointers [1], or perhaps we did not state it clearly enough. We did not claim that when it comes to visual stability, the attention system does the work. The oculomotor system does the work and the attention system comes along for the ride. The performance benefits that comprise the central properties of spatial attention appear to be parasitic on the functions of the eye movement control system [2]. As reviewed by Awh et al.[3], stimulation of cells in the saccade control centers triggers attentional benefits at corresponding retinotopic locations.


Cold Spring Harbor Symposia on Quantitative Biology | 2014

Neural Mechanisms Underlying Visual Object Recognition

Arash Afraz; Daniel Yamins; James J. DiCarlo

Invariant visual object recognition and the underlying neural representations are fundamental to higher-level human cognition. To understand these neural underpinnings, we combine human and monkey psychophysics, large-scale neurophysiology, neural perturbation methods, and computational modeling to construct falsifiable, predictive models that aim to fully account for the neural encoding and decoding processes that underlie visual object recognition. A predictive encoding model must minimally describe the transformation of the retinal image to population patterns of neural activity along the entire cortical ventral stream of visual processing and must accurately predict the responses to any retinal image. A predictive decoding model must minimally describe the transformation from those population patterns of neural activity to observed object recognition behavior (i.e., subject reports), and, given that population pattern of activity, it must accurately predict behavior for any object recognition task. To date, we have focused on core object recognition, a remarkable behavior that is accomplished with image viewing durations of <200 msec. Our work thus far reveals that the neural encoding process is reasonably well explained by a largely feed-forward, highly complex, multistaged nonlinear neural network: the current best neuronal simulation models predict approximately one-half of the relevant neuronal response variance across the highest levels of the ventral stream (areas V4 and IT). Remarkably, however, the decoding process from IT to behavior for all object recognition tasks tested thus far is very accurately predicted by simple direct linear conversion of the inferior temporal neural population state to behavior choice. We have recently examined the behavioral consequences of direct suppression of IT neural activity using pharmacological and optogenetic methods and find them to be well explained by the same linear decoding model.


Proceedings of the National Academy of Sciences of the United States of America | 2015

Head to toe, in the head

Arash Afraz

Sometime around 250,000 y ago, primates started talking to each other (1). Before that time, facial expressions and body language were the main modes of communication among primates. Even today, in the presence of our sophisticated language system, face and body gestures play a major role in human communication. If someone tells you that she is not bored with a conversation but her half-open eyelids, raised eyebrows, dropped shoulders, and the way she puts her hand under her chin “tell” you the opposite, you would probably trust the ancient signal more than the modern sounds that we call words. In a recent PNAS article, Fisher and Freiwald (2) might have unveiled where in the brain such signals are encoded.


Journal of Vision | 2010

The face aftereffect spreads over changes in position, orientation and size in retinotopic, not space- or object-based coordinates

Arash Afraz; Patrick Cavanagh

We examined the coordinate frame of face aftereffects (FAE) by measuring the FAE following eye movement, head movement, head rotation, or stimulus movement. 1) Following adaptation to a face at one location, subjects made a saccade. A test face was then presented either at the same location as the adaptor on the screen, the same retinal location as the adaptor, or a different location with the same eccentricity as the adaptor. 2) Subjects tilted their head 45 deg to the right during adaptation, then tilted their head 45 deg to the left to view the test. The test was displayed at various orientations, including the same retinal angle as the adaptor or the same angle on the screen as the adaptor. 3) Following adaptation, subjects moved their head and halved their distance to the monitor. The test face was then presented at various sizes, including the same screen size and the same retinal size (half of the screen size) as the adaptor. 4) The adapting face turned around after adaptation, showing the blank back of the head, and moved to a new location. The test face was then presented at various locations, including the original and the new location of the adaptor (where only the back of the head had been presented). In all experiments, the face aftereffect was strongest at the retinal position/angle/size of the adaptor. There was substantial spread of the FAE across location, size, and orientation, but no additional benefit was found for test locations, sizes, or orientations fixed in display-based or object-based coordinates rather than retinal coordinates. Our findings suggest that face analysis is grounded in a retinotopic coordinate frame and that spatiotopy across head and eye movements is not constructed at the level of visual feature/object analysis.


Trends in Cognitive Sciences | 2010

Visual stability based on remapping of attention pointers

Patrick Cavanagh; Amelia R. Hunt; Arash Afraz; Martin Rolfs

Collaboration


Dive into Arash Afraz's collaborations.

Top Co-Authors

James J. DiCarlo, Massachusetts Institute of Technology
Martin Rolfs, Humboldt University of Berlin
Ali Ezzati, Albert Einstein College of Medicine
Edith Reshef, McGovern Institute for Brain Research
Edward S. Boyden, Massachusetts Institute of Technology