
Publication


Featured research published by Yoana Kuzmova.


Psychological Science | 2010

Color Channels, Not Color Appearance or Color Categories, Guide Visual Search for Desaturated Color Targets

Delwin T. Lindsey; Angela M. Brown; Ester Reijnen; Anina N. Rich; Yoana Kuzmova; Jeremy M. Wolfe

In this article, we report that in visual search, desaturated reddish targets are much easier to find than other desaturated targets, even when perceptual differences between targets and distractors are carefully equated. Observers searched for desaturated targets among mixtures of white and saturated distractors. Reaction times were hundreds of milliseconds faster for the most effective (reddish) targets than for the least effective (purplish) targets. The advantage for desaturated reds did not reflect an advantage for the lexical category “pink,” because reaction times did not follow named color categories. Many pink stimuli were not found quickly, and many quickly found stimuli were not labeled “pink.” Other possible explanations (e.g., linear-separability effects) also failed. Instead, we propose that guidance of visual search for desaturated colors is based on a combination of low-level color-opponent signals that is different from the combinations that produce perceived color. We speculate that this guidance might reflect a specialization for human skin.


Quarterly Journal of Experimental Psychology | 2009

The speed of free will

Todd S. Horowitz; Jeremy M. Wolfe; George A. Alvarez; Michael A. Cohen; Yoana Kuzmova

Do voluntary and task-driven shifts of attention have the same time course? In order to measure the time needed to voluntarily shift attention, we devised several novel visual search tasks that elicited multiple sequential attentional shifts. Participants could only respond correctly if they attended to the right place at the right time. In control conditions, search tasks were similar but participants were not required to shift attention in any order. Across five experiments, voluntary shifts of attention required 200–300 ms. Control conditions yielded estimates of 35–100 ms for task-driven shifts. We suggest that the slower speed of voluntary shifts reflects the “clock speed of free will”. Wishing to attend to something takes more time than shifting attention in response to sensory input.


Vision Research | 2011

Can we track holes?

Todd S. Horowitz; Yoana Kuzmova

The evidence is mixed as to whether the visual system treats objects and holes differently. We used a multiple object tracking task to test the hypothesis that figural objects are easier to track than holes. Observers tracked four of eight items (holes or objects). We used an adaptive algorithm to estimate the speed allowing 75% tracking accuracy. In Experiments 1-5, the distinction between holes and figures was accomplished by pictorial cues, while red-cyan anaglyphs were used to provide the illusion of depth in Experiment 6. We variously used Gaussian pixel noise, photographic scenes, or synthetic textures as backgrounds. Tracking was more difficult when a complex background was visible, as opposed to a blank background. Tracking was easier when disks carried fixed, unique markings. When these factors were controlled for, tracking holes was no more difficult than tracking figures, suggesting that they are equivalent stimuli for tracking purposes.
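The abstract does not specify which adaptive algorithm estimated the speed yielding 75% tracking accuracy. One common choice that converges on 75% is a Kaernbach-style weighted up-down staircase; the sketch below is illustrative only (the function and parameter names are hypothetical, not the authors' code):

```python
def weighted_staircase(run_trial, start_speed=10.0, step=0.5, n_trials=60):
    """Weighted up-down staircase targeting 75% accuracy.

    run_trial(speed) -> True if the observer tracked all targets correctly.
    After a correct trial, speed rises by `step` (harder); after an error it
    falls by 3 * step (easier), so the procedure equilibrates where
    0.75 * step == 0.25 * (3 * step), i.e. at 75% accuracy.
    """
    speed = start_speed
    history = []
    for _ in range(n_trials):
        correct = run_trial(speed)
        history.append(speed)
        speed = speed + step if correct else max(0.5, speed - 3 * step)
    # Estimate the threshold as the mean speed over the second half of the run
    tail = history[n_trials // 2:]
    return sum(tail) / len(tail)
```

For example, with a simulated observer who tracks correctly below some speed limit, the staircase settles near that limit.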


Journal of Vision | 2010

Predictability matters for multiple object tracking

Todd S. Horowitz; Yoana Kuzmova

Most accounts of multiple object tracking (MOT) suggest that only the spatial arrangement of objects at any one time is important for explaining performance. In contrast, we argue that observers predict future target positions. Previously this proposition was tested by studying the recovery of targets after a period of invisibility (Fencsik, Klieger, & Horowitz, 2007; Keane & Pylyshyn, 2006). Here, we test the predictive hypothesis in a continuous tracking paradigm. In two experiments, we asked observers to track three out of twelve moving disks for three to six seconds, and varied the average turn angle. We held speed constant at 8°/s, but direction for each disk changed with probability .025 on each 13.33 ms frame. Observers marked all targets at the end of the trial. Experiment 1 used turn angles of 0°, 30°, and 90°, while Experiment 2 used 0°, 15°, 30°, 45°, 60°, 75°, and 90°. Turn angle was fixed for all objects within a trial but varied across trials. In both experiments, accuracy was maximal at 0° and declined as turn angle increased (Exp 1: p = .001; Exp 2: p = .001). In Experiment 2, the steepest decline in accuracy was from 0° to 30°, while accuracy was roughly constant from 45° to 90°. These data demonstrate that it is easier to track predictably moving targets. Since velocity, density, and other factors known to affect MOT performance were constant, this suggests that observers predict target motion online to improve tracking. Furthermore, the pattern of data in Experiment 2 is compatible with a model in which the visual system assumes that target trajectories will vary only within a narrow 30° band.
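The motion model described above (constant 8°/s speed; on each 13.33 ms frame, a turn of a fixed angle with probability .025) can be sketched as a short simulation. This is a reconstruction from the abstract's parameters, not the authors' code, and all names are illustrative:

```python
import math
import random

def simulate_disk(turn_angle_deg, duration_s=4.0, speed=8.0,
                  frame_s=0.01333, p_turn=0.025, seed=0):
    """Generate one disk trajectory under the abstract's motion model.

    The disk moves at a constant `speed` (deg/s); on each frame it turns by
    +/- turn_angle_deg with probability p_turn, otherwise keeps its heading.
    Returns the list of (x, y) positions, starting at the origin.
    """
    rng = random.Random(seed)
    x = y = 0.0
    heading = rng.uniform(0.0, 2.0 * math.pi)
    path = [(x, y)]
    for _ in range(int(duration_s / frame_s)):
        if rng.random() < p_turn:
            heading += math.radians(rng.choice([-1, 1]) * turn_angle_deg)
        x += speed * frame_s * math.cos(heading)
        y += speed * frame_s * math.sin(heading)
        path.append((x, y))
    return path
```

With turn_angle_deg = 0 the disk moves in a straight line, matching the most predictable condition; larger angles yield increasingly jagged, harder-to-predict paths.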


Psychonomic Bulletin & Review | 2011

How many pixels make a memory? Picture memory for small pictures

Jeremy M. Wolfe; Yoana Kuzmova

Torralba (Visual Neuroscience, 26, 123–131, 2009) showed that, if the resolution of images of scenes were reduced to the information present in very small “thumbnail images,” those scenes could still be recognized. The objects in those degraded scenes could be identified, even though it would be impossible to identify them if they were removed from the scene context. Can tiny and/or degraded scenes be remembered, or are they, like brief presentations, identified but not remembered? We report that memory for tiny and degraded scenes parallels the recognizability of those scenes. You can remember a scene to approximately the degree to which you can classify it. Interestingly, there is a striking asymmetry in memory when scenes are not the same size on their initial appearance and subsequent test. Memory for a large, full-resolution stimulus can be tested with a small, degraded stimulus. However, memory for a small stimulus is not retrieved when it is tested with a large stimulus.


Attention, Perception, & Psychophysics | 2011

Visual search for arbitrary objects in real scenes

Jeremy M. Wolfe; George A. Alvarez; Ruth Rosenholtz; Yoana Kuzmova; Ashley M. Sherman


Wiley Interdisciplinary Reviews: Cognitive Science | 2011

Visual attention

Karla K. Evans; Todd S. Horowitz; Piers D. L. Howe; Riccardo Pedersini; Ester Reijnen; Yair Pinto; Yoana Kuzmova; Jeremy M. Wolfe


Journal of Vision | 2010

Search for arbitrary objects in natural scenes is remarkably efficient

Jeremy M. Wolfe; George A. Alvarez; Ruth Rosenholtz; Aude Oliva; Antonio Torralba; Yoana Kuzmova; Max Uhlenhuth


Journal of Vision | 2010

PINK: the most colorful mystery in visual search

Yoana Kuzmova; Jeremy M. Wolfe; Anina N. Rich; Angela M. Brown; Delwin T. Lindsey; Ester Reijnen


Vision Research | 2009

In visual search, guidance by surface type is different than classic guidance.

Jeremy M. Wolfe; Ester Reijnen; Michael J. Van Wert; Yoana Kuzmova

Collaboration


Dive into Yoana Kuzmova's collaborations.

Top Co-Authors

Jeremy M. Wolfe (Brigham and Women's Hospital)

Todd S. Horowitz (Brigham and Women's Hospital)

Ruth Rosenholtz (Massachusetts Institute of Technology)

Antonio Torralba (Massachusetts Institute of Technology)