Thomas U. Otto
Paris Descartes University
Publications
Featured research published by Thomas U. Otto.
Vision Research | 2006
Haluk Ogmen; Thomas U. Otto; Michael H. Herzog
The human visual system computes features of moving objects with high precision despite the fact that these features can change or blend into each other in the retinotopic image. Very little is known about how the human brain accomplishes this complex feat. Using a Ternus-Pikler display, introduced by Gestalt psychologists about a century ago, we show that human observers can perceive features of moving objects at locations where these features are not present. More importantly, our results indicate that these non-retinotopic feature attributions are not errors caused by the limitations of the perceptual system but follow rules of perceptual grouping. From a computational perspective, our data imply sophisticated real-time transformations of retinotopic relations in the visual cortex. Our results suggest that the human motion and form systems interact with each other to remap the retinotopic projection of physical space in order to maintain the identity of moving objects in perceptual space.
Journal of Vision | 2006
Thomas U. Otto; Haluk Ogmen; Michael H. Herzog
How features are attributed to objects is one of the most puzzling issues in the neurosciences. A deeply entrenched view is that features are perceived at the locations where they are presented. Here, we show that features in motion displays can be systematically attributed from one location to another even though the elements possessing the features are invisible. Furthermore, features can be integrated across locations. Feature mislocalizations are usually treated as errors and limits of the visual system. On the contrary, we show that the non-retinotopic feature attributions reported herein follow rules of grouping precisely, suggesting that they reflect a fundamental computational strategy rather than errors of visual processing.
Vision Research | 2006
Thomas U. Otto; Michael H. Herzog; Manfred Fahle; Li Zhaoping
In perceptual learning, stimuli are usually assumed to be presented at a constant retinal location during training. However, due to tremor, drift, and microsaccades of the eyes, the same stimulus covers different retinal positions on sequential trials. Because of these variations, the mathematical decision problem changes from linear to non-linear. This non-linearity implies three predictions. First, varying the spatial position of a stimulus within a moderate range does not impair perceptual learning. Second, improvement for one stimulus variant can yield negative transfer to other variants. Third, interleaved training with two stimulus variants yields no or strongly diminished learning. Using a bisection task, we found psychophysical evidence for the first and third predictions. However, contrary to the second prediction, no negative transfer was found.
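To make the linear-to-non-linear point concrete, here is a minimal sketch (an illustration with assumed stimulus geometry, noise level, and a plain least-squares readout; it is not the model used in the study): a bisection stimulus is rendered as three line profiles on a one-dimensional "retina", and a single fixed linear decision rule is fit and tested with and without positional jitter.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy parameters (assumptions for this sketch, not values from the study)
    N_PIX, OUTER, OFFSET, SIGMA, NOISE = 64, 20, 2, 1.5, 0.4

    def render(center, offset):
        # Two outer lines at center +/- OUTER and a middle line displaced by `offset`,
        # each drawn as a Gaussian luminance profile, plus pixel noise.
        x = np.arange(N_PIX)
        img = sum(np.exp(-0.5 * ((x - p) / SIGMA) ** 2)
                  for p in (center - OUTER, center + offset, center + OUTER))
        return img + rng.normal(0.0, NOISE, N_PIX)

    def make_trials(n, jitter):
        offsets = rng.choice([-OFFSET, OFFSET], n)          # left vs. right bisection offset
        centers = N_PIX // 2 + rng.integers(-jitter, jitter + 1, n)
        X = np.array([render(c, o) for c, o in zip(centers, offsets)])
        return X, np.sign(offsets)

    def linear_readout_accuracy(jitter, n_train=2000, n_test=2000):
        # Fit one fixed linear template (least squares) and test it on new trials.
        Xtr, ytr = make_trials(n_train, jitter)
        Xte, yte = make_trials(n_test, jitter)
        A = np.hstack([Xtr, np.ones((n_train, 1))])          # bias term
        w, *_ = np.linalg.lstsq(A, ytr, rcond=None)
        pred = np.sign(np.hstack([Xte, np.ones((n_test, 1))]) @ w)
        return np.mean(pred == yte)

    for jitter in (0, 8):
        print(f"positional jitter +/-{jitter} px: linear readout accuracy "
              f"= {linear_readout_accuracy(jitter):.2f}")

With the stimulus at a fixed position, a single linear template suffices; once the whole stimulus jitters, no fixed template aligns with the informative pixels on every trial, so the performance of any linear rule degrades. That is the sense in which positional variation turns the decision problem non-linear.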
The Journal of Neuroscience | 2013
Thomas U. Otto; Brice Dassy; Pascal Mamassian
The combined use of multisensory signals is often beneficial. Based on neuronal recordings in the superior colliculus of cats, three basic rules were formulated to describe the effectiveness of multisensory signals: the enhancement of neuronal responses to multisensory compared with unisensory signals is largest when signals occur at the same location (“spatial rule”), when signals are presented at the same time (“temporal rule”), and when signals are rather weak (“principle of inverse effectiveness”). These rules are also considered with respect to multisensory benefits as observed with behavioral measures, but do they capture these benefits best? To uncover the principles that rule benefits in multisensory behavior, we here investigated the classical redundant signal effect (RSE; i.e., the speedup of response times in multisensory compared with unisensory conditions) in humans. Based on theoretical considerations using probability summation, we derived two alternative principles to explain the effect. First, the “principle of congruent effectiveness” states that the benefit in multisensory behavior (here the speedup of response times) is largest when behavioral performance in corresponding unisensory conditions is similar. Second, the “variability rule” states that the benefit is largest when performance in corresponding unisensory conditions is unreliable. We then tested these predictions in two experiments, in which we manipulated the relative onset and the physical strength of distinct audiovisual signals. Our results, which are based on a systematic analysis of response time distributions, show that the RSE follows these principles very well, thereby providing compelling evidence in favor of probability summation as the underlying combination rule.
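As a rough illustration of what probability summation predicts here (a toy sketch with assumed Gaussian response-time distributions and a made-up helper name, rse_gain; it is not the analysis reported in the paper), the redundant condition can be modelled as an independent race in which the faster of the two unisensory responses triggers the response:

    import numpy as np

    rng = np.random.default_rng(1)
    N = 100_000  # simulated trials per condition

    def rse_gain(mu_a, mu_v, sigma=60.0):
        # Probability summation / independent race: the redundant response time is
        # the minimum of the auditory and visual response times on each trial.
        rt_a = rng.normal(mu_a, sigma, N)      # toy unisensory RT distributions (ms)
        rt_v = rng.normal(mu_v, sigma, N)
        rt_av = np.minimum(rt_a, rt_v)         # the faster signal wins the race
        return min(mu_a, mu_v) - rt_av.mean()  # speed-up relative to the faster channel

    # "Principle of congruent effectiveness": the predicted benefit is largest when
    # the two unisensory conditions are about equally fast.
    for delta in (0, 50, 100, 200):
        print(f"unisensory asymmetry {delta:3d} ms -> gain {rse_gain(400, 400 + delta):5.1f} ms")

    # "Variability rule": for matched means, the predicted benefit grows with the
    # trial-to-trial variability of the unisensory response times.
    for sigma in (20, 60, 120):
        print(f"unisensory RT sd {sigma:3d} ms -> gain {rse_gain(400, 400, sigma):5.1f} ms")

Because the redundant response time is the minimum of the two unisensory times, the predicted speed-up shrinks as the channels become mismatched (the faster channel almost always wins the race) and grows with their variability, which is the pattern the two proposed principles describe.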
Journal of Vision | 2008
Khatuna Parkosadze; Thomas U. Otto; Maka Malania; A. Kezeli; Michael H. Herzog
In perceptual learning, performance often improves within a short time if only one stimulus variant is presented, such as a line-bisection stimulus with one outer-line distance. However, performance stagnates if two bisection stimuli with two outer-line distances are presented randomly interleaved. Recently, S. G. Kuai, J. Y. Zhang, S. A. Klein, D. M. Levi, and C. Yu (2005) proposed that learning under roving conditions is impossible in general. Contrary to this proposition, we show here that perceptual learning with bisection stimuli under roving is possible with extensive training of 18,000 trials. Despite this extensive training, the improvement of performance is still largely specific. Furthermore, this improvement cannot be explained by an accommodation to stimulus uncertainty caused by roving.
Journal of Experimental Psychology: Human Perception and Performance | 2009
Thomas U. Otto; Haluk Ogmen; Michael H. Herzog
The perception of a visual target can be strongly influenced by flanking stimuli. In static displays, performance on the target improves when the distance to the flanking elements increases, presumably because feature pooling and integration vanish with distance. Here, we studied feature integration with dynamic stimuli. We show that features of single elements presented within a continuous motion stream are integrated largely independently of spatial distance (and orientation). Hence, space-based models of feature integration cannot be extended to dynamic stimuli. We suggest that feature integration is guided by perceptual grouping operations that maintain the identity of perceptual objects over space and time.
NeuroImage | 2009
Gijs Plomp; Manuel Mercier; Thomas U. Otto; Olaf Blanke; Michael H. Herzog
When presented with dynamic scenes, the brain integrates visual elements across space and time. Such non-retinotopic processing has been intensively studied from a psychophysical point of view, but little is known about the underlying neural processes. Here we used high-density EEG to reveal neural correlates of non-retinotopic feature integration. In an offset-discrimination task we presented sequences of lines for which feature integration depended on a small, endogenous shift of attention. Attention effects were observed in the stimulus-locked evoked potentials but non-retinotopic feature integration was reflected in voltage topographies time-locked to the behavioral response, lasting for about 400 ms. Statistical parametric mapping of estimated current densities revealed that this integration reduced electrical activity in an extensive network of brain areas, with the effects progressing from high-level visual, via frontal, to central ones. The results suggest that endogenously timed neural processes, rather than bottom-up ones, underlie non-retinotopic feature integration.
Psychological Science | 2010
Thomas U. Otto; Haluk Ogmen; Michael H. Herzog
Perceptual learning is the ability to improve perception through practice. Perceptual learning is usually specific for the task and features learned. For example, improvements in performance for a certain stimulus do not transfer if the stimulus is rotated by 90° or is presented at a different location. These findings are usually taken as evidence that orientation-specific, retinotopic encoding processes are changed during training. In this study, we used a novel masking paradigm in which the offset of an invisible, oblique vernier stimulus was perceived in an aligned vertical or horizontal flanking stimulus presented at a different location. Our results show that learning is specific for the perceived orientation of the vernier offset but not for its actual orientation and location. Specific encoding processes cannot be invoked to explain this improvement. We propose that perceptual learning involves changes in non-retinotopic, attentional readout processes.
Journal of Vision | 2010
Thomas U. Otto; Haluk Ogmen; Michael H. Herzog
Features of moving objects are non-retinotopically integrated along their motion trajectories, as demonstrated by a variety of recent studies. The mechanisms of non-retinotopic feature integration are largely unknown. Here, we investigated the role of attention in non-retinotopic feature integration using the sequential metacontrast paradigm. A central line was offset either to the left or to the right. A sequence of flanking lines followed, eliciting the percept of two diverging motion streams. Although the central line was invisible, its offset was perceived within the streams. Observers attended to one stream. If an offset was introduced to one of the flanking lines in the attended stream, this offset was integrated with the central-line offset. No integration occurred when the offset was in the non-attended stream. Here, we manipulated the allocation of attention using an auditory cueing paradigm. First, we show that mandatory non-retinotopic integration occurred even when the cue came long after the motion sequence. Second, we used more than two streams, of which two could merge. Offsets in different streams were integrated when the streams merged. However, offsets of one stream were not integrated when this stream had to be ignored. We propose a hierarchical two-stage model in which motion grouping determines mandatory feature integration while attention selects motion streams for optional feature integration.
Frontiers in Psychology | 2012
Michael H. Herzog; Thomas U. Otto; Haluk Ogmen
To investigate the integration of features, we have developed a paradigm in which an element is rendered invisible by visual masking. Still, the features of the element are visible as part of other display elements presented at different locations and times (sequential metacontrast). In this sense, we can “transport” features non-retinotopically across space and time. The features of the invisible element integrate with features of other elements if and only if the elements belong to the same spatio-temporal group. The mechanisms of this kind of feature integration seem to be quite different from classical mechanisms proposed for feature binding. We propose that feature processing, binding, and integration occur concurrently during processes that group elements into wholes.