Arash Yazdanbakhsh
Boston University
Publication
Featured research published by Arash Yazdanbakhsh.
Vision Research | 2005
Stephen Grossberg; Arash Yazdanbakhsh
The 3D LAMINART neural model is developed to explain how the visual cortex gives rise to 3D percepts of stratification, transparency, and neon color spreading in response to 2D pictures and 3D scenes. Such percepts are sensitive to whether contiguous image regions have the same contrast polarity and ocularity. The model predicts how like-polarity competition at V1 simple cells in layer 4 may cause these percepts when it interacts with other boundary and surface processes in V1, V2, and V4. The model also explains how: the Metelli Rules cause transparent percepts, bistable transparency percepts arise, and attention influences transparency reversal.
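The Metelli Rules mentioned in this abstract can be stated quantitatively with Metelli's episcotister model, in which a filter of transmittance alpha and own luminance t overlays two background regions a and b, producing regions p and q. A minimal sketch of that check follows; the function name, variable names, and tolerance logic are illustrative assumptions, not taken from the paper:

```python
def metelli_transparency(a, b, p, q):
    """Check whether four region luminances support a transparency
    percept under Metelli's episcotister model:
        p = alpha*a + (1 - alpha)*t
        q = alpha*b + (1 - alpha)*t
    Returns (alpha, t) when the solution is physically valid
    (0 < alpha < 1, t >= 0), else None."""
    if a == b:
        return None  # the two background regions must differ
    alpha = (p - q) / (a - b)          # filter transmittance
    if not 0 < alpha < 1:
        return None                    # polarity/magnitude constraint violated
    t = (p - alpha * a) / (1 - alpha)  # filter's own luminance
    if t < 0:
        return None                    # luminance cannot be negative
    return alpha, t

# A filter that halves contrast and adds no light of its own:
print(metelli_transparency(100, 20, 50, 10))  # -> (0.5, 0.0)
```

Reversing the contrast polarity of the overlaid regions (swapping p and q) makes alpha negative and the check fail, matching the rule that transparency requires preserved contrast polarity.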
The Journal of Neuroscience | 2005
Bevil R. Conway; Akiyoshi Kitaoka; Arash Yazdanbakhsh; Christopher C. Pack; Margaret S. Livingstone
Most people see movement in Figure 1, although the image is static. Motion is seen from black → blue → white → yellow → black. Many hypotheses for the illusory motion have been proposed, although none had been tested physiologically. We found that the illusion works well even if it is achromatic: yellow is replaced with light gray, and blue is replaced with dark gray. We show that the critical feature for inducing illusory motion is the luminance relationship of the static elements. Illusory motion is seen from black → dark gray → white → light gray → black. In psychophysical experiments, we found that each of the four pairs of adjacent elements, when presented alone, produced illusory motion consistent with the original illusion, a result not expected from any current model. We also show that direction-selective neurons in macaque visual cortex gave directional responses to the same static element pairs, also in a direction consistent with the illusory motion. This is the first demonstration of directional responses by single neurons to static displays, and it supports a model in which low-level, first-order motion detectors interpret contrast-dependent differences in response timing as motion. We demonstrate that this illusion is a static version of four-stroke apparent motion.
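The proposed mechanism, in which first-order motion detectors read contrast-dependent response latencies as motion, can be illustrated with a toy model. The latency function, its parameter values, and the direction rule below are illustrative assumptions, not fitted to the paper's data; the only idea taken from the abstract is that unequal-contrast static elements evoke responses with different timing, which a motion detector can misread as a moving stimulus:

```python
def response_latency(contrast, base_ms=60.0, gain_ms=30.0):
    """Toy latency model: response latency shrinks as contrast grows.
    base_ms and gain_ms are arbitrary illustrative values."""
    return base_ms + gain_ms * (1.0 - contrast)

def apparent_motion_direction(c_left, c_right):
    """Toy direction rule: a first-order motion detector signals
    motion from the element that responds first toward the one
    that responds later; equal latencies yield no motion."""
    t_left = response_latency(c_left)
    t_right = response_latency(c_right)
    if t_left < t_right:
        return "left->right"
    if t_right < t_left:
        return "right->left"
    return "none"

# A high-contrast element beside a low-contrast one:
print(apparent_motion_direction(0.9, 0.3))  # -> left->right
```

In this sketch a static row of elements ordered high-to-low contrast produces a consistent chain of pairwise direction signals, which is the sense in which the illusion behaves like a static version of four-stroke apparent motion.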
Nature Neuroscience | 2006
Arash Yazdanbakhsh; Margaret S. Livingstone
Common situations that result in different perceptions of grouping and border ownership, such as shadows and occlusion, have distinct sign-of-contrast relationships at their edge-crossing junctions. Here we report a property of end stopping in V1 that distinguishes among different sign-of-contrast situations, thereby obviating the need for explicit junction detectors. We show that the inhibitory effect of the end zones in end-stopped cells is highly selective for the relative sign of contrast between the central activating stimulus and stimuli presented at the end zones. Conversely, the facilitatory effect of end zones in length-summing cells is not selective for the relative sign of contrast between the central activating stimulus and stimuli presented at the end zones. This finding indicates that end stopping belongs in the category of cortical computations that are selective for sign of contrast, such as direction selectivity and disparity selectivity, but length summation does not.
Neuroscience Letters | 2008
Arash Yazdanbakhsh; Simone Gori
When a line extends beyond the width of an aperture, its direction of motion cannot be detected correctly: only the component of motion perpendicular to the line is detectable (the aperture problem). Early visual areas face the same aperture problem because their receptive field sizes are relatively small. This susceptibility of early visual areas to the aperture problem opens an opportunity to measure the aperture width of a receptive field psychophysically, which can in turn be used to estimate receptive field size. We found that an already established visual illusion (the rotating tilted lines illusion, or RTLI) can be used to measure the aperture size and hence to estimate receptive field size. To do so, we conducted a psychophysical experiment in which the radii and tilted-line length of the RTLI were systematically varied. Our psychophysical estimate of receptive field size corresponds closely with previous measurements obtained with electrophysiological and fMRI methods.
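The aperture problem described above has a simple geometric statement: within the aperture only the velocity component normal to the line is recoverable, so the perceived velocity is the projection of the true velocity onto the line's unit normal. A minimal sketch, with illustrative names (the paper itself presents no code):

```python
import math

def perceived_velocity(vx, vy, line_angle_deg):
    """Project the true velocity (vx, vy) onto the unit normal of a
    line oriented at line_angle_deg. Inside an aperture only this
    normal component is detectable (the aperture problem)."""
    theta = math.radians(line_angle_deg)
    nx, ny = -math.sin(theta), math.cos(theta)  # unit normal to the line
    s = vx * nx + vy * ny                       # signed speed along the normal
    return s * nx, s * ny

# A line tilted 45 deg moving purely rightward: only the component
# along the line's normal is seen through the aperture.
px, py = perceived_velocity(1.0, 0.0, 45.0)
print(round(px, 6), round(py, 6))  # -> 0.5 -0.5
```

Motion that is already parallel to the normal projects onto itself, which is why a line drifting perpendicular to its own orientation is the one case the aperture reports veridically.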
Neural Networks | 2004
Arash Yazdanbakhsh; Stephen Grossberg
Perceptual grouping is well known to be a fundamental process during visual perception, notably grouping across scenic regions that do not receive contrastive visual inputs. Illusory contours are a classical example of such groupings. Recent psychophysical and neurophysiological evidence has shown that the grouping process can facilitate rapid synchronization of the cells that are bound together by a grouping, even when the grouping must be completed across regions that receive no contrastive inputs. Synchronous grouping can thereby bind together different object parts that may have become desynchronized due to a variety of factors, and can enhance the efficiency of cortical transmission. Neural models of perceptual grouping have clarified how such fast synchronization may occur by using bipole grouping cells, whose predicted properties have been supported by psychophysical, anatomical, and neurophysiological experiments. These models have not, however, incorporated some of the realistic constraints under which groupings in the brain occur, notably the measured spatial extent of long-range interactions in layer 2/3 of a grouping network, and realistic synaptic and axonal signaling delays within and across cells in different cortical layers. This work addresses the questions: Can long-range interactions that obey the bipole constraint achieve fast synchronization under realistic anatomical and neurophysiological constraints that initially desynchronize grouping signals? Can the cells that synchronize retain their analog sensitivity to changing input amplitudes? Can the grouping process complete and synchronize illusory contours across gaps in bottom-up inputs? Our simulations show that the answer to all of these questions is yes.
Perception | 2008
Simone Gori; Arash Yazdanbakhsh
Gori and Hamburger (2006, Perception 35 853–857) devised a new visual illusion of relative motion elicited by the observer's motion. We propose that the systematic error of direction discrimination found by Lorenceau et al (1993, Vision Research 33 1207–1217) can explain this illusion. The neural correlate of such a systematic error with respect to the two types of neurons in the primary visual cortex, namely end-stopped and contour cells, is discussed.
PLOS ONE | 2016
Kameron K. Clayton; Jayaganesh Swaminathan; Arash Yazdanbakhsh; Jennifer Zuk; Aniruddh D. Patel; Gerald Kidd
The goal of this study was to investigate how cognitive factors influence performance in a multi-talker, “cocktail-party” like environment in musicians and non-musicians. This was achieved by relating performance in a spatial hearing task to cognitive processing abilities assessed using measures of executive function (EF) and visual attention in musicians and non-musicians. For the spatial hearing task, a speech target was presented simultaneously with two intelligible speech maskers that were either colocated with the target (0° azimuth) or were symmetrically separated from the target in azimuth (at ±15°). EF assessment included measures of cognitive flexibility, inhibitory control, and auditory working memory. Selective attention was assessed in the visual domain using a multiple object tracking (MOT) task. For the MOT task, the observers were required to track target dots (n = 1, 2, 3, 4, 5) in the presence of interfering distractor dots. Musicians performed significantly better than non-musicians in the spatial hearing task. For the EF measures, musicians showed better performance on measures of auditory working memory compared to non-musicians. Furthermore, across all individuals, a significant correlation was observed between performance on the spatial hearing task and measures of auditory working memory. This result suggests that individual differences in performance in a cocktail party-like environment may depend in part on cognitive factors such as auditory working memory. Performance in the MOT task did not differ between groups. However, across all individuals, a significant correlation was found between performance in the MOT and spatial hearing tasks. A stepwise multiple regression analysis revealed that musicianship and performance on the MOT task significantly predicted performance on the spatial hearing task.
Overall, these findings confirm the relationship between musicianship and cognitive factors including domain-general selective attention and working memory in solving the “cocktail party problem”.
Journal of Vision | 2012
Oliver W. Layton; Ennio Mingolla; Arash Yazdanbakhsh
Humans are capable of rapidly determining whether regions in a visual scene appear as figures in the foreground or as background, yet how figure-ground segregation occurs in the primate visual system is unknown. Figures in the environment are perceived to own their borders, and recent neurophysiology has demonstrated that certain cells in primate visual area V2 have border-ownership selectivity. We present a dynamic model based on physiological data that indicates areas V1, V2, and V4 act as an interareal network to determine border-ownership. Our model predicts that competition between curvature-sensitive cells in V4 that have on-surround receptive fields of different sizes can determine likely figure locations and rapidly propagate the information interareally to V2 border-ownership cells that receive contrast information from V1. In the model, border-ownership is an emergent property produced by the dynamic interactions between V1, V2, and V4, one which could not be determined by any single cortical area alone.
Frontiers in Psychology | 2015
Stephen Grossberg; Karthik Srinivasan; Arash Yazdanbakhsh
How does the brain maintain stable fusion of 3D scenes when the eyes move? Every eye movement causes each retinal position to process a different set of scenic features, and thus the brain needs to binocularly fuse new combinations of features at each position after an eye movement. Despite these breaks in retinotopic fusion due to each movement, previously fused representations of a scene in depth often appear stable. The 3D ARTSCAN neural model proposes how the brain does this by unifying concepts about how multiple cortical areas in the What and Where cortical streams interact to coordinate processes of 3D boundary and surface perception, spatial attention, invariant object category learning, predictive remapping, eye movement control, and learned coordinate transformations. The model explains data from single neuron and psychophysical studies of covert visual attention shifts prior to eye movements. The model further clarifies how perceptual, attentional, and cognitive interactions among multiple brain regions (LGN, V1, V2, V3A, V4, MT, MST, PPC, LIP, ITp, ITa, SC) may accomplish predictive remapping as part of the process whereby view-invariant object categories are learned. These results build upon earlier neural models of 3D vision and figure-ground separation and the learning of invariant object categories as the eyes freely scan a scene. A key process concerns how an object's surface representation generates a form-fitting distribution of spatial attention, or attentional shroud, in parietal cortex that helps maintain the stability of multiple perceptual and cognitive processes. Predictive eye movement signals maintain the stability of the shroud, as well as of binocularly fused perceptual boundaries and surface representations.
Neuropsychologia | 2015
Mirella Díaz-Santos; Bo Cao; Arash Yazdanbakhsh; Daniel Norton; Sandra Neargarder; Alice Cronin-Golomb
Parkinson's disease (PD) is associated with motor and non-motor rigidity symptoms (e.g., cognitive and personality rigidity). This raises the question of whether rigidity in PD also extends to perception, and if so, whether perceptual, cognitive, and personality rigidities are correlated. Bistable stimuli were presented to 28 non-demented individuals with PD and 26 normal control adults (NC). Necker cube perception and binocular rivalry were examined during passive viewing, and the Necker cube was additionally used for two volitional-control conditions: hold one percept in front, and switch between the two percepts. Relative to passive viewing, the PD group was significantly less able than the NC group to reduce dominance durations in the switch condition, indicating perceptual rigidity. Tests of cognitive flexibility and a personality questionnaire were administered to explore the association with perceptual rigidity. Cognitive flexibility was not correlated with perceptual rigidity in either group. Personality (novelty seeking) correlated with dominance durations during passive Necker viewing in PD but not NC. The results indicate the presence of perceptual rigidity in mild-to-moderate PD and suggest shared neural substrates with novelty seeking, but functional divergence from those supporting cognitive flexibility. The possibility is raised that perceptual rigidity may be a harbinger of cognitive inflexibility later in the disease course.