M. Pilar Aivar
Autonomous University of Madrid
Publications
Featured research published by M. Pilar Aivar.
Vision Research | 2006
Dirk Kerzel; M. Pilar Aivar; Nathalie E. Ziegler; Eli Brenner
Targets that are briefly flashed during smooth pursuit eye movements are mislocalized in the direction of motion (forward shift) and away from the fovea (spatial expansion). Hansen [Hansen, R. M. (1979). Spatial localization during pursuit eye movements. Vision Research 19(11), 1213-1221] reported that these errors are not present for fast motor responses in the dark, whereas Rotman et al. [Rotman, G., Brenner, E., Smeets, J. B. (2004). Quickly tapping targets that are flashed during smooth pursuit reveals perceptual mislocalizations. Experimental Brain Research 156(4), 409-414] reported that they are present for fast motor responses in the light. To evaluate whether the lighting conditions are the critical factor, we asked observers to point to the positions of flashed objects during smooth pursuit either in the dark or with the room lights on. In a first experiment, the flash, which could appear at 1 of 15 different positions, was always shown when the eye had reached a certain spatial position. We found a forward bias and spatial expansion that were independent of the target and ambient luminance. In a second experiment, the flash was always shown at the same retinal position, but the spatial position of the eye at the moment of flash presentation was varied. In this case we found differences between the luminance conditions in how the errors depended on pursuit velocity and on position along the trajectory. We also found specific conditions in which people did not mislocalize the target in the direction of pursuit at all. These findings may account for the above-mentioned discrepancy. We conclude that although the lighting conditions do influence the localization errors under some circumstances, such errors are certainly not absent whenever the experiment is conducted in the dark.
Journal of Vision | 2016
Chia-Ling Li; M. Pilar Aivar; Dmitry Kit; Matthew Tong; Mary Hayhoe
The role of memory in guiding attention allocation in daily behaviors is not well understood. In experiments with two-dimensional (2D) images, there is mixed evidence about the importance of memory. Because the stimulus context in laboratory experiments and daily behaviors differs extensively, we investigated the role of memory in visual search in both 2D and three-dimensional (3D) environments. A 3D immersive virtual apartment composed of two rooms was created, and a parallel 2D visual search experiment using snapshots from the 3D environment was developed. Eye movements were tracked in both experiments. Repeated searches for geometric objects were performed to assess the role of spatial memory. Subsequently, subjects searched for realistic context objects to test for incidental learning. Our results show that subjects learned the room-target associations in 3D but less so in 2D. Gaze was increasingly restricted to relevant regions of the room with experience in both settings. Search for local contextual objects, however, was not facilitated by early experience. Incidental fixations to context objects do not necessarily benefit search performance. Together, these results demonstrate that memory for global aspects of the environment guides search by restricting allocation of attention to likely regions, whereas task relevance determines what is learned from the active search experience. Behaviors in 2D and 3D environments are comparable, although there is greater use of memory in 3D.
Frontiers in Psychology | 2013
Annie J. Olmstead; Navin Viswanathan; M. Pilar Aivar; Sarath Manuel
Experiments investigating phonetic convergence in conversation often focus on interlocutors with similar phonetic inventories. Extending these experiments to those with dissimilar inventories requires understanding the capacity of speakers to imitate native and non-native phones. In the present study, we tested native Spanish and native English speakers to determine whether imitation of non-native tokens differs qualitatively from imitation of native tokens. Participants imitated a [ba]–[pa] continuum that varied in voice onset time (VOT) from −60 ms (prevoiced, Spanish [b]) to +60 ms (long lag, English [p]) such that the continuum consisted of some tokens that were native to Spanish speakers and some that were native to English speakers. Analysis of the imitations showed two critical results. First, both groups of speakers demonstrated sensitivity to VOT differences in tokens that fell within their native regions of the VOT continuum (prevoiced region for Spanish and long lag region for English). Second, neither group of speakers demonstrated such sensitivity to VOT differences among tokens that fell in their non-native regions of the continuum. These results show that, even in an intentional imitation task, speakers cannot accurately imitate non-native tokens, but are clearly flexible in producing native tokens. Implications of these findings are discussed with reference to the constraints on convergence in interlocutors from different linguistic backgrounds.
Scientific Reports | 2018
Chia-Ling Li; M. Pilar Aivar; Matthew H. Tong; Mary Hayhoe
Search is a central visual function. Most of what is known about search derives from experiments where subjects view 2D displays on computer monitors. In the natural world, however, search involves movement of the body in large-scale spatial contexts, and it is unclear how this might affect search strategies. In this experiment, we explore the nature of memory representations developed when searching in an immersive virtual environment. By manipulating target location, we demonstrate that search depends on episodic spatial memory as well as learnt spatial priors. Subjects rapidly learned the large-scale structure of the space, with shorter paths and less head rotation to find targets. These results suggest that spatial memory of the global structure allows a search strategy that involves efficient attention allocation based on the relevance of scene regions. Thus spatial memory may allow less energetically costly search strategies.
Journal of Vision | 2015
Chia-Ling Li; M. Pilar Aivar; Matthew Tong; Mary Hayhoe
Previous studies have indicated an effect of memory for both context and targets on search in 2D images of naturalistic scenes. However, recent results in 3D immersive environments failed to show much effect of context (Li et al., JOV, 2014). To examine whether this reflects differences between 2D and 3D environments, we ran a 2D experiment designed to parallel our previous 3D virtual reality environment. Subjects viewed 2D snapshots taken from the two rooms in the 3D immersive environment and then searched those images for a series of targets. The number of fixations required to locate the targets improved rapidly and was similar in both 2D and 3D environments. Interestingly, most of the improvement reflects learning to choose the correct room to look for a given target. Once in the correct room, search was very rapid and objects were located within 3-5 fixations in either environment. Previous exposure (one minute) to the context did not facilitate subsequent search. This was true for both 2D and 3D. In addition, there was little or no effect of experience with the environment on subsequent search for contextual objects in the scene. Even after 24 search trials, the number of fixations required to locate contextual objects in the room was close to values found with no experience. Incidental fixations made during previous trials also do not seem to benefit search much (though a small effect is detectable). Thus, search in both 2D and 3D environments is very comparable, and the primary effect of experience on search depends on task relevance (i.e., previously searched objects are easily remembered, but other objects are not). We speculate that the effects of context either require much more extensive experience, or else a pre-exposure that immediately precedes the search episode. Meeting abstract presented at VSS 2015.
Journal of Vision | 2015
M. Pilar Aivar; Chia-Ling Li; Dmitry Kit; Matthew Tong; Mary Hayhoe
Measurement of eye movements has revealed rapid development of memory for object locations in 3D immersive environments. To examine the nature of that representation, and to see if memory is coded with respect to the 3D coordinates of the room, head position was recorded while participants performed a visual search task in an immersive virtual reality apartment. The apartment had two rooms, connected by a corridor. Participants searched the apartment for a series of geometric target objects. Some target objects were always placed at the same location (stable objects), while others appeared at a new location in each trial (random objects). We analyzed whether body movements showed changes that reflected memory for target location. In each trial we calculated how far the participant's trajectory deviated from a straight path to the target object. Changes in head orientation from the moment the room was entered to the moment the target was reached were also computed. We found that the average deviation from the straight path was larger and more variable for random target objects (0.47 vs. 0.31 m). Also, the point of maximum deviation from the straight path occurred earlier for random objects than for stable objects (at 42% vs. 52% of the total trajectory). On room entry, lateral head deviation from the room center was already larger for stable objects than for random objects (18° vs. 10°). Thus for random objects participants moved toward the center of the room until the target was located, while for stable objects they were more likely to follow a straight trajectory from first entry. We conclude that memory for target location is coded with respect to room coordinates and is revealed by body orientation at first entry. The visually guided component of search seems to be relatively unimportant or occurs very quickly upon entry. Meeting abstract presented at VSS 2015.
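The straight-path deviation measure described in this abstract is straightforward to reproduce. Below is a minimal sketch, assuming head positions are given as an (N, 2) array of floor-plane coordinates in meters sampled from room entry to target; the function name and array layout are illustrative, not the authors' analysis code.

```python
import numpy as np

def max_path_deviation(trajectory, target):
    # Hypothetical sketch (not the authors' code): maximum perpendicular
    # distance of a 2D head trajectory from the straight line joining
    # room entry to the target, plus where along the path it occurs.
    start = trajectory[0]
    line = target - start                       # entry-to-target vector
    rel = trajectory - start
    # Perpendicular distance of each sample from the straight path
    # (magnitude of the 2D cross product divided by the path length).
    dists = np.abs(rel[:, 0] * line[1] - rel[:, 1] * line[0]) / np.linalg.norm(line)
    i = int(np.argmax(dists))
    # Arc length travelled up to each sample, to express the location of
    # the maximum deviation as a fraction of the total trajectory.
    steps = np.linalg.norm(np.diff(trajectory, axis=0), axis=1)
    arc = np.concatenate(([0.0], np.cumsum(steps)))
    return dists[i], arc[i] / arc[-1]

# Example: a trajectory that bows 0.5 m away from a straight 4 m path.
t = np.linspace(0, 1, 100)
traj = np.column_stack((4 * t, 0.5 * np.sin(np.pi * t)))
dev, frac = max_path_deviation(traj, np.array([4.0, 0.0]))
print(f"max deviation {dev:.2f} m at {frac:.0%} of the path")
```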
Experimental Brain Research | 2008
M. Pilar Aivar; Eli Brenner; Jeroen B. J. Smeets
Vision Research | 2015
M. Pilar Aivar; Eli Brenner; Jeroen B. J. Smeets
Behavioral and Brain Sciences | 2007
David Travieso; M. Pilar Aivar; Antoni Gomila
Journal of Vision | 2016
M. Pilar Aivar; Chia-Ling Li; Matthew Tong; Dmitry Kit; Mary Hayhoe