Anna Kosovicheva
Northeastern University
Publications
Featured research published by Anna Kosovicheva.
Journal of Vision | 2011
Anna Kosovicheva; Gerrit W. Maus; Stuart Anstis; Patrick Cavanagh; Peter U. Tse; David Whitney
Motion can bias the perceived location of a stationary stimulus (Whitney & Cavanagh, 2000), but whether this occurs at a high level of representation or at early, retinotopic stages of visual processing remains an open question. As coding of orientation emerges early in visual processing, we tested whether motion could influence the spatial location at which orientation adaptation is seen. Specifically, we examined whether the tilt aftereffect (TAE) depends on the perceived location of the adapting stimulus, its retinal location, or both. We used the flash-drag effect (FDE) to produce a shift in the perceived position of the adaptor away from its retinal location. Subjects viewed a patterned disk that oscillated clockwise and counterclockwise while adapting to a small disk containing a tilted linear grating that was flashed briefly at the moment of the rotation reversals. The FDE biased the perceived location of the grating in the direction of the disk's motion immediately following the flash, allowing dissociation between the retinal and perceived locations of the adaptor. Brief test gratings were subsequently presented at one of three locations: the retinal location of the adaptor, its perceived location, or an equidistant control location (the antiperceived location). Measurements of the TAE at each location demonstrated that the TAE was strongest at the retinal location and larger at the perceived location than at the antiperceived location. This indicates a skew in the spatial tuning of the TAE consistent with the FDE. Together, our findings suggest that motion can bias the location of low-level adaptation.
Frontiers in Behavioral Neuroscience | 2012
Anna Kosovicheva; Summer Sheremata; Ariel Rokem; Ayelet N. Landau; Michael A. Silver
Acetylcholine (ACh) reduces the spatial spread of excitatory fMRI responses in early visual cortex and receptive field size of V1 neurons. We investigated the perceptual consequences of these physiological effects of ACh with surround suppression and crowding, two phenomena that involve spatial interactions between visual field locations. Surround suppression refers to the reduction in perceived stimulus contrast by a high-contrast surround stimulus. For grating stimuli, surround suppression is selective for the relative orientations of the center and surround, suggesting that it results from inhibitory interactions in early visual cortex. Crowding refers to impaired identification of a peripheral stimulus in the presence of flankers and is thought to result from excessive integration of visual features. We increased synaptic ACh levels by administering the cholinesterase inhibitor donepezil to healthy human subjects in a placebo-controlled, double-blind design. In Experiment 1, we measured surround suppression of a central grating using a contrast discrimination task with three conditions: (1) surround grating with the same orientation as the center (parallel), (2) surround orthogonal to the center, or (3) no surround. Contrast discrimination thresholds were higher in the parallel than in the orthogonal condition, demonstrating orientation-specific surround suppression (OSSS). Cholinergic enhancement decreased thresholds only in the parallel condition, thereby reducing OSSS. In Experiment 2, subjects performed a crowding task in which they reported the identity of a peripheral letter flanked by letters on either side. We measured the critical spacing between the targets and flanking letters that allowed reliable identification. Cholinergic enhancement with donepezil had no effect on critical spacing. Our findings suggest that ACh reduces spatial interactions in tasks involving segmentation of visual field locations but that these effects may be limited to early visual cortical processing.
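The OSSS effect described above is quantified by comparing contrast-discrimination thresholds across surround conditions. As a minimal sketch only (not the paper's actual analysis), a log-ratio index is one conventional way to express orientation specificity; the function name `osss_index` and the choice of log ratio are assumptions for illustration:

```python
import math

def osss_index(parallel_threshold, orthogonal_threshold):
    """Orientation-specific surround suppression (OSSS) expressed as the
    log ratio of contrast-discrimination thresholds with a parallel vs.
    an orthogonal surround. Positive values indicate stronger suppression
    from an iso-oriented surround; zero indicates no orientation
    specificity."""
    return math.log10(parallel_threshold / orthogonal_threshold)
```

On this index, a cholinergic reduction in the parallel-condition threshold (with the orthogonal condition unchanged) shows up directly as a smaller OSSS value.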
Attention Perception & Psychophysics | 2014
Anna Kosovicheva; Benjamin Wolfe; David Whitney
Saccades are made thousands of times a day and are the principal means of localizing objects in our environment. However, the saccade system faces the challenge of accurately localizing objects that are constantly moving relative to the eye and head. Any delays in processing could cause errors in saccadic localization. To compensate for these delays, the saccade system might use one or more sources of information to predict future target locations, including changes in the position of the object over time, or its motion. Another possibility is that motion influences the represented position of the object for saccadic targeting, without requiring an actual change in target position. We tested whether the saccade system can use motion-induced position shifts to update the represented spatial location of a saccade target, using stationary Gabor patches with drifting carriers and either a soft or a hard aperture as saccade targets. In both conditions, the aperture always remained at a fixed retinal location. The soft-aperture Gabor patch produced an illusory position shift, whereas the hard-aperture stimulus contained the same motion signals but produced a smaller illusory position shift. Thus, motion energy and target location were equated across conditions, but a substantial position shift was generated in only one of them. We measured saccadic localization of these targets and found that saccades were indeed shifted, but only with the soft-aperture Gabor patch. Our results suggest that motion shifts the programmed locations of saccade targets, and that this remapped location guides saccadic localization.
Current Biology | 2017
Anna Kosovicheva; David Whitney
Perceptual processes in human observers vary considerably across a number of domains, producing idiosyncratic biases in the appearance of ambiguous figures [1], faces [2], and a number of visual illusions [3-6]. This work has largely emphasized object and pattern recognition, suggesting that higher-level processes are the ones most likely to produce individual differences. However, substantial variation in the anatomy and physiology of the visual system [4,7,8] suggests that individual differences may be found in even more basic visual tasks. Supporting this idea, we demonstrate observer-specific biases in a fundamental visual task: object localization throughout the visual field. We show that localization judgments of briefly presented targets produce idiosyncratic signatures of perceptual distortion in each observer, suggesting that even the most basic visual judgments, such as object location, can differ substantially between individuals.
Psychonomic Bulletin & Review | 2018
Mauro Manassi; Alina Liberman; Anna Kosovicheva; Kathy Zhang; David Whitney
Observers perceive objects in the world as stable over space and time, even though the visual experience of those objects is often discontinuous and distorted due to masking, occlusion, camouflage, or noise. How are we able to easily and quickly achieve stable perception in spite of this constantly changing visual input? It was previously shown that observers experience serial dependence in the perception of features and objects, an effect that extends up to 15 seconds back in time. Here, we asked whether the visual system utilizes an object’s prior physical location to inform future position assignments in order to maximize location stability of an object over time. To test this, we presented subjects with small targets at random angular locations relative to central fixation in the peripheral visual field. Subjects reported the perceived location of the target on each trial by adjusting a cursor’s position to match its location. Subjects made consistent errors when reporting the perceived position of the target on the current trial, mislocalizing it toward the position of the target in the preceding two trials (Experiment 1). This pull in position perception occurred even when a response was not required on the previous trial (Experiment 2). In addition, we show that serial dependence in perceived position occurs immediately after stimulus presentation, and it is a fast stabilization mechanism that does not require a delay (Experiment 3). This indicates that serial dependence occurs for position representations and facilitates the stable perception of objects in space. Taken together with previous work, our results show that serial dependence occurs at many stages of visual processing, from initial position assignment to object categorization.
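The mislocalization toward preceding targets described above can be quantified from raw trial data by pairing each trial's report error with the previous target's angular offset from the current one. The sketch below is a minimal illustration under assumed conventions (angles in degrees, a simple signed-pull summary statistic); the function names and the summary measure are not from the paper:

```python
def wrap_deg(a):
    """Wrap an angular difference into [-180, 180) degrees."""
    return (a + 180.0) % 360.0 - 180.0

def serial_dependence_pairs(targets, responses):
    """For each trial n >= 1, pair the previous target's angular offset
    relative to the current target with the report error on trial n."""
    return [(wrap_deg(targets[n - 1] - targets[n]),
             wrap_deg(responses[n] - targets[n]))
            for n in range(1, len(targets))]

def mean_pull(pairs):
    """Average report error signed by the direction of the previous
    target: positive values mean reports were pulled toward the
    preceding location, as in a serial-dependence effect."""
    vals = [err if off > 0 else -err for off, err in pairs if off != 0]
    return sum(vals) / len(vals)
```

A fuller analysis would typically plot error against the previous-minus-current offset and fit a tuned curve, but the signed mean already distinguishes attraction (positive) from repulsion (negative).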
Psychological Science | 2018
Zhimin Chen; Anna Kosovicheva; Benjamin Wolfe; Patrick Cavanagh; Andrei Gorea; David Whitney
Visual space is perceived as continuous and stable even though visual inputs from the left and right visual fields are initially processed separately within the two cortical hemispheres. In the research reported here, we examined whether the visual system utilizes a dynamic recalibration mechanism to integrate these representations and to maintain alignment across the visual fields. Subjects adapted to randomly oriented moving lines that straddled the vertical meridian; these lines were vertically offset between the left and right hemifields. Subsequent vernier alignment judgments revealed a negative aftereffect: An offset in the same direction as the adaptation was required to correct the perceived misalignment. This aftereffect was specific to adaptation to vertical, but not horizontal, misalignments and also occurred following adaptation to movie clips and patterns without coherent motion. Our results demonstrate that the visual system unifies the left and right halves of visual space by continuously recalibrating the alignment of elements across the visual fields.
Cognitive Research: Principles and Implications | 2016
Benjamin Wolfe; Jonathan Dobres; Anna Kosovicheva; Ruth Rosenholtz; Bryan Reimer
Aging-related changes in the visual system diminish the capacity to perceive the world with the ease and fidelity that younger adults are accustomed to. Among the many consequences of this, older adults find that text they could once read easily becomes difficult to read, even with sufficient acuity correction. Building on previous work examining visual factors in legibility, we examine potential causes of these age-related effects in the absence of other ocular pathology. We asked participants to discriminate words from non-words in a lexical decision task. The stimuli were either blurred or presented in a noise field to simulate, respectively, decreased sensitivity to fine detail (loss of acuity) and detuning of visually selective neurons. We then used the differences in performance between older and younger participants to suggest how older participants' performance could be approximated, in order to facilitate maximally usable designs.
Journal of Vision | 2015
Anna Kosovicheva; Benjamin Wolfe; Patrick Cavanagh; Andrei Gorea; David Whitney
Visual input from the left and right visual fields is initially processed separately in the two cortical hemispheres, yet the visual system integrates these representations into a single continuous percept of space. In order to represent alignment and symmetry across the visual field, the visual system may continually recalibrate visual information across the hemifields. If so, any differences, such as misalignments across the two hemifields, should be adaptable. To test this, observers adapted to a set of large randomly rotating and moving colored lines in a circular Gaussian contrast aperture on a dark background, while performing a target detection task at fixation. The stimulus was split across the vertical meridian such that the lines in the left hemifield were shifted 1.8° higher than the lines in the right hemifield, or vice versa. An occluder strip (3.5° wide) eliminated visibility of the discontinuity in the lines at the vertical meridian, and observers were tested in a dark room with neutral density filter goggles to eliminate visual references. After 8 minutes of initial adaptation, observers performed a Vernier discrimination task in which they judged the relative positions of two brief (83 ms) horizontal lines straddling the vertical meridian. Vernier judgments reflected a negative aftereffect; an average shift of 0.08° in the direction of adaptation was required to null the perceived misalignment (p = 0.006). We replicated this result with adaptation to natural movies with the left and right halves of the image vertically misaligned. Results showed that a Vernier offset of 0.07° in the direction of adaptation was necessary to cancel the perceived misalignment (p = 0.02). Our results indicate that the visual system computes and dynamically recalibrates the relative alignment of elements across the visual fields: a mechanism that would help achieve and maintain continuous and stable perception of space. Meeting abstract presented at VSS 2015.
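Nulling the perceived misalignment, as in the Vernier task above, amounts to finding the physical offset at which the two response alternatives are equally likely (the point of subjective equality). A minimal sketch, assuming simple linear interpolation across tested offsets rather than the psychometric-function fit a study like this would typically use; the function name is hypothetical:

```python
def pse_by_interpolation(offsets, p_top):
    """Estimate the point of subjective equality (PSE): the physical
    vernier offset at which one response alternative crosses 50%.
    `offsets` must be sorted ascending; `p_top` holds the proportion of
    that response at each offset, in [0, 1]. Linearly interpolates
    between the two tested offsets that bracket the 50% point."""
    for x0, y0, x1, y1 in zip(offsets, p_top, offsets[1:], p_top[1:]):
        if (y0 - 0.5) * (y1 - 0.5) <= 0 and y0 != y1:
            return x0 + (0.5 - y0) * (x1 - x0) / (y1 - y0)
    raise ValueError("no 50% crossing within the tested range")
```

A nonzero PSE after adaptation is the aftereffect: the physical offset needed to cancel the perceived misalignment.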
Journal of Vision | 2010
Anna Kosovicheva; Francesca C. Fortenbaugh; Lynn C. Robertson