Publication


Featured research published by Joan López-Moliner.


Journal of Vision | 2007

Motion signal and the perceived positions of moving objects.

Daniel Linares; Joan López-Moliner; Alan Johnston

When a flash is presented in spatial alignment with a moving stimulus, the flash appears to lag behind (the flash-lag effect). The motion of the object can influence the position of the flash, but there may also be a reciprocal effect of the flash on the moving object. Here, we demonstrate that this is the case. We show that when a flash is presented near the moving object, the flash-lag effect does not depend greatly on the duration of the preflash trajectory. However, when the flash is presented sufficiently far from the moving object, the flash-lag effect increases with the duration of the preflash trajectory until it reaches an asymptotic level. We also show that this interaction with a near flash can occur even when the flash is task irrelevant. Finally, using the motion aftereffect, we demonstrate that motion signals are involved in the time evolution of the flash-lag effect.


Journal of Experimental Psychology: General | 2007

Modes of Executive Control in Sequence Learning: From Stimulus-Based to Plan-Based Control

Elisabet Tubau; Bernhard Hommel; Joan López-Moliner

The authors argue that human sequential learning is often but not always characterized by a shift from stimulus- to plan-based action control. To diagnose this shift, they manipulated the frequency of 1st-order transitions in a repeated manual left-right sequence, assuming that performance is sensitive to frequency-induced biases under stimulus- but not plan-based control. Indeed, frequency biases tended to disappear with practice, but only for explicit learners. This tendency was facilitated by visual-verbal target stimuli, response-contingent sounds, and intentional instructions and hampered by auditory (but not visual) noise. Findings are interpreted within an event-coding model of action control, which holds that plans for sequences of discrete actions are coded phonetically, integrating order and relative timing. The model distinguishes between plan acquisition, linked to explicit knowledge, and plan execution, linked to the action control mode.


Journal of Vision | 2007

Interceptive timing: Prior knowledge matters

Joan López-Moliner; David T. Field; John P. Wann

Fast interceptive actions, such as catching a ball, rely upon accurate and precise information from vision. Recent models rely on flexible combinations of visual angle and its rate of expansion, of which the tau parameter is a specific case. When an object approaches an observer, however, its trajectory may introduce bias into tau-like parameters, rendering these computations unacceptable as the sole source of information for action. Here we show that observers' knowledge of object size influences their action timing, and that known size combined with image expansion simplifies the computations required to make interceptive actions and provides a route for experience to influence them.
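For reference, the optical quantities these models combine are easy to write down. The sketch below is only a geometry illustration under a small-angle assumption, not the authors' model: tau is the ratio of optical angle to its rate of expansion, and a known physical size lets absolute distance and approach speed be recovered from the same two optical variables.

```python
def optical_variables(size, distance, speed):
    """Small-angle optics for a head-on approach (illustration only).

    size:     physical diameter of the object (m)
    distance: current distance from the eye (m)
    speed:    constant approach speed (m/s)
    """
    theta = size / distance                  # optical angle (rad)
    theta_dot = size * speed / distance**2   # rate of expansion (rad/s)
    tau = theta / theta_dot                  # = distance / speed: time to contact
    return theta, theta_dot, tau

# If physical size is known, absolute distance and speed can be
# recovered from the optical variables alone:
theta, theta_dot, tau = optical_variables(size=0.07, distance=3.0, speed=10.0)
print(0.07 / theta)                   # distance: 3.0 m
print(0.07 * theta_dot / theta**2)    # approach speed: 10.0 m/s
print(tau)                            # time to contact: 0.3 s
```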


Journal of Vision | 2012

Seeing the last part of a hitting movement is enough to adapt to a temporal delay.

C. de la Malla; Joan López-Moliner; Eli Brenner

Being able to see the object that you are aiming for is evidently useful for guiding the hand to a moving object. We examined to what extent seeing the moving hand also influences performance. Subjects tried to intercept moving targets while either instantaneous or delayed feedback about the moving hand was provided at certain times. After each attempt, subjects had to indicate whether they thought they had hit the target, had passed ahead of it, or had passed behind it. Providing visual feedback early in the movement enabled subjects to use visual information about the moving hand to correct their movements. Providing visual feedback when the moving hand passed the target helped them judge how they had performed. Performance was almost as good when visual feedback about the moving hand was provided only when the hand was passing the target as when it was provided throughout the movement. We conclude that seeing the temporal relationship between the hand and the target as the hand crosses the target's path is instrumental for adapting to a temporal delay.
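A minimal sketch of the kind of feedback-delay manipulation described, assuming hand position is sampled at a fixed rate and rendered from a buffer; the function name and rates here are hypothetical, not the authors' apparatus:

```python
from collections import deque

def make_delayed_feedback(delay_s, sample_rate_hz):
    """Render the hand cursor from a buffer so that what is shown
    lags the real hand by delay_s seconds (hypothetical sketch)."""
    n = max(1, round(delay_s * sample_rate_hz))
    buffer = deque(maxlen=n + 1)      # keep n+1 samples: buffer[0] lags by n
    def render(hand_sample):
        buffer.append(hand_sample)
        return buffer[0]              # oldest retained sample = delayed view
    return render

render = make_delayed_feedback(delay_s=0.2, sample_rate_hz=100)
for i in range(30):
    shown = render(i)                 # once the buffer fills, shown == i - 20
```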


Vision Research | 2006

The flash-lag effect is reduced when the flash is perceived as a sensory consequence of our action.

Joan López-Moliner; Daniel Linares

The flash-lag effect (FLE) is defined as an error in localization that consists of perceiving a flashed object to lag behind a moving one when both are presented in physical alignment. Previous studies have addressed the question of whether it is the predictability of the flash or that of the moving object that modulates the magnitude of the error. However, the case when the flash is self-generated, and hence can be internally predicted, has not yet been addressed. In Experiment 1, we compare four conditions: flash unpredictable, flash externally predicted by a beep, flash internally generated (and predicted) by pressing a key, and flash triggered by a key press but temporally unpredictable. The FLE was significantly reduced only when the flash was internally predictable. In Experiment 2, we rule out the possibility that the reduction of the FLE was due to the use of the key press as a temporal marker. We conclude that when the flash is perceived as a sensory consequence of our own action, its detection can be speeded up, thereby resulting in a reduction of the FLE. A third experiment supports this interpretation. The mechanism by virtue of which the detection is accelerated could be related to efferent signals from motor areas predicting the sensory consequences of our actions.


Journal of Vision | 2006

Perceptual asynchrony between color and motion with a single direction change

Daniel Linares; Joan López-Moliner

When a stimulus repeatedly and rapidly changes color (e.g., between red and green) and motion direction (e.g., upward and downward) at the same frequency, observers are most likely to pair colors and motion directions when the direction changes lead the color changes by approximately 80 ms. This is the color-motion asynchrony illusion. According to the differential processing time model, the illusion arises because the neural activity leading to the perceptual experience of motion requires more time than that leading to the experience of color. Alternatively, the time marker model attributes the misbinding to a failure in matching different sorts of changes at rapid alternations. Here, running counter to the time marker model, we demonstrate that the illusion can arise with a single direction change. Using this simplified version of the illusion, we also show that, although some form of visual masking takes place between colors, the measured asynchrony genuinely reflects processing-time differences.
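Read as the differential processing time model names it, the ~80 ms asynchrony is simply a latency difference. The toy calculation below uses hypothetical latencies chosen only to reproduce that figure; it is not a fitted model.

```python
def motion_lead_for_simultaneity_ms(motion_latency_ms, color_latency_ms):
    """If motion takes longer to reach awareness than color, the two
    changes feel paired when the motion change physically leads by
    the latency difference."""
    return motion_latency_ms - color_latency_ms

# Hypothetical latencies, chosen only to reproduce the ~80 ms figure:
print(motion_lead_for_simultaneity_ms(150, 70))  # 80
```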


Journal of Neurophysiology | 2013

Sound-driven enhancement of vision: disentangling detection-level from decision-level contributions.

Alexis Pérez-Bellido; Salvador Soto-Faraco; Joan López-Moliner

Cross-modal enhancement can be mediated both by higher-order effects due to attention and decision making and by detection-level, stimulus-driven interactions. However, the contribution of each of these sources to behavioral improvements has not been conclusively determined and quantified separately. Here, we apply a psychophysical analysis based on Piéron functions in order to separate stimulus-dependent changes from those accounted for by decision-level contributions. Participants performed a simple visual speeded detection task on Gabor patches of different spatial frequencies and contrast values, presented with and without accompanying sounds. On one hand, we identified an additive cross-modal improvement in mean reaction times across all types of visual stimuli that is well explained by interactions not strictly based on stimulus-driven modulations (e.g., a reduction of temporal uncertainty and motor times). On the other hand, we singled out an audio-visual benefit that strongly depended on stimulus features such as frequency and contrast. This particular enhancement was selective to low spatial frequency stimuli, optimized for magnocellular sensitivity. We therefore conclude that the contributions of detection-stage interactions and of decisional processes in response selection to audio-visual enhancement can be separated and are expressed in partly different aspects of visual processing.
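A Piéron function ties simple reaction time to stimulus intensity, commonly written RT(I) = R0 + k·I^(−β); under this reading, an additive sound-induced benefit moves the asymptote R0, whereas a detection-level interaction changes k or β. A minimal fitting sketch on synthetic values (assumed functional form and made-up numbers, not the paper's data):

```python
import numpy as np
from scipy.optimize import curve_fit

def pieron(intensity, r0, k, beta):
    """Pieron's law: RT = r0 + k * I**(-beta). r0 is the irreducible
    (asymptotic) RT; k and beta govern how RT falls with intensity."""
    return r0 + k * intensity ** (-beta)

# Synthetic contrast/RT values for illustration (not the paper's data):
contrast = np.array([0.05, 0.1, 0.2, 0.4, 0.8])
rt_ms = pieron(contrast, r0=250.0, k=30.0, beta=0.8)
rt_ms += np.random.default_rng(0).normal(0.0, 3.0, contrast.size)

params, _ = curve_fit(pieron, contrast, rt_ms, p0=[200.0, 20.0, 1.0])
print("fitted r0, k, beta:", params)  # an additive sound benefit would
                                      # lower r0; a detection-level one
                                      # would change k or beta
```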


Vision Research | 2002

Speed of response initiation in a time-to-contact discrimination task reflects the use of η

Joan López-Moliner; Claude Bonnet

Avoiding collisions and making interceptions seem to require an organism to estimate the time that will elapse before an object arrives at the point of observation (time-to-contact). The most prominent account of precise timing has been the tau hypothesis. However, recent studies demonstrate that tau is not the only source of information for judging time-to-contact. By measuring reaction time in a time-to-contact discrimination task, we show that the eta function, a specific combination of optical size and rate of expansion, explains both accuracy and the observed RT pattern. The results conform to the hypothesis that observers initiate the response when eta reaches a threshold value.
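The abstract does not spell out eta; one published form (Hatsopoulos, Gabbiani & Laurent, 1995) is η(t) = C·θ̇(t)·e^(−αθ(t)), and the sketch below assumes that form. Its useful property is that, under small-angle optics, η peaks when the optical angle reaches θ = 2/α regardless of approach speed, so thresholding η supports anticipatory timing.

```python
import numpy as np

def eta(theta, theta_dot, alpha, c=1.0):
    """Assumed form of the eta function (Hatsopoulos et al., 1995):
    eta = c * theta_dot * exp(-alpha * theta)."""
    return c * theta_dot * np.exp(-alpha * theta)

# Head-on approach: object of size s (m) at speed v (m/s), contact at t = 0.
s, v = 0.07, 10.0
t = np.linspace(-1.0, -0.05, 500)     # seconds before contact
d = -v * t                            # distance to the eye
theta = s / d                         # optical size (small-angle approx.)
theta_dot = s * v / d**2              # rate of expansion

values = eta(theta, theta_dot, alpha=20.0)
print("eta peaks", -t[values.argmax()], "s before contact")  # ~0.07 s
```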


Journal of Vision | 2007

Vision affects how fast we hear sounds move

Joan López-Moliner; Salvador Soto-Faraco

There is a growing body of knowledge about the behavioral and neural correlates of cross-modal interactions in the perception of motion direction, as well as about the computations that underlie unimodal visual speed processing. Yet, the multisensory contributions to the perception of motion speed remain largely uncharted. Here we show that visual motion information exerts a profound influence on the perception of auditory speed. Moreover, our results suggest that this influence is specifically caused by visual velocity rather than by earlier, more local, frequency-based components of visual motion. The way in which visual speed information affects how fast we hear a sound move can be well described by a weighted average model that takes into account the visual speed signal in the computation of auditory speed.
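The weighted-average model can be read as standard cue combination: perceived auditory speed is a convex mixture of the auditory and visual speed signals. A minimal sketch under that assumption (the weight value is illustrative, not the paper's fit):

```python
def perceived_auditory_speed(v_auditory, v_visual, w_visual):
    """Weighted-average model: the reported auditory speed is pulled
    toward the concurrent visual speed. w_visual (0..1) is the weight
    given to vision, a free parameter in such models."""
    assert 0.0 <= w_visual <= 1.0
    return (1.0 - w_visual) * v_auditory + w_visual * v_visual

# A faster visual motion makes the same sound seem to move faster:
print(perceived_auditory_speed(v_auditory=10.0, v_visual=20.0, w_visual=0.4))  # 14.0
```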


ACM Transactions on Applied Perception | 2015

The Effects of Visuomotor Calibration to the Perceived Space and Body, through Embodiment in Immersive Virtual Reality

Elena Kokkinara; Mel Slater; Joan López-Moliner

We easily adapt to changes in the environment that involve cross-sensory discrepancies (e.g., between vision and proprioception). Adaptation can lead to changes in motor commands so that the experienced sensory consequences are appropriate for the new environment (e.g., we program a movement differently while wearing prisms that shift our visual space). In addition to these motor changes, perceptual judgments of space can also be altered (e.g., how far can I reach with my arm?). However, in previous studies that assessed perceptual judgments of space after visuomotor adaptation, the manipulation was always a planar spatial shift, and changes in body perception could not be assessed directly. In this study, we investigated the effects of velocity-dependent (spatiotemporal) and spatial scaling distortions of arm movements on space and body perception, taking advantage of immersive virtual reality. Exploiting the perceptual illusion of embodiment in an entire virtual body, we endowed subjects with new spatiotemporal or spatial 3D mappings between motor commands and their sensory consequences. The results imply that spatiotemporal manipulations that made the virtual arm move 2 and 4 times faster than the real arm can significantly change participants' proprioceptive judgments of a virtual object's size without affecting perceived body ownership, although they did affect the sense of agency over the movements. Equivalent spatial manipulations of 11 and 22 degrees of angular offset also had a significant effect on the perceived size of the virtual object; however, the mismatched information affected neither the sense of body ownership nor agency. We conclude that adaptation to spatial and spatiotemporal distortions can similarly change our perception of space, although spatiotemporal distortions are more easily detected.
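A hedged sketch of the two kinds of remapping described: a velocity gain (spatiotemporal) versus a fixed angular offset (spatial). The rotation axis and the example numbers are assumptions for illustration, not the study's implementation:

```python
import numpy as np

def spatiotemporal_distortion(real_velocity, gain):
    """Velocity-dependent remapping: the virtual arm moves `gain`
    times faster than the real arm (gains of 2 and 4 in the study)."""
    return gain * np.asarray(real_velocity)

def spatial_distortion(real_position, offset_deg):
    """Spatial remapping: rotate the real hand position by a fixed
    angular offset (11 or 22 degrees in the study). Rotating about
    the vertical (y) axis is an assumption for illustration."""
    a = np.radians(offset_deg)
    rot_y = np.array([[ np.cos(a), 0.0, np.sin(a)],
                      [ 0.0,       1.0, 0.0      ],
                      [-np.sin(a), 0.0, np.cos(a)]])
    return rot_y @ np.asarray(real_position)

print(spatiotemporal_distortion([0.1, 0.0, 0.2], gain=2.0))
print(spatial_distortion([0.3, 0.0, 0.4], offset_deg=22.0))
```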

Collaboration


Dive into Joan López-Moliner's collaborations.

Top Co-Authors


Eli Brenner

VU University Amsterdam
