Michael H. Herzog
École Polytechnique Fédérale de Lausanne
Publications
Featured research published by Michael H. Herzog.
Journal of Vision | 2009
Frank Scharnowski; Johannes Rüter; Jacob Jolij; Frouke Hermens; Thomas Kammer; Michael H. Herzog
The human brain analyzes a visual object first by basic feature detectors. On the object's way to a conscious percept, these features are integrated in subsequent stages of the visual hierarchy. The time course of this feature integration is largely unknown. To shed light on the temporal dynamics of feature integration, we applied transcranial magnetic stimulation (TMS) to a feature fusion paradigm. In feature fusion, two stimuli that differ in one feature are presented in rapid succession such that they are not perceived individually but as one single stimulus. The fused percept is an integration of the features of both stimuli. Here, we show that TMS can modulate this integration for a surprisingly long period of time, even though the individual stimuli themselves are not consciously perceived. Hence, our results reveal a long-lasting integration process of unconscious feature traces.
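To make the fusion logic concrete, here is a minimal toy sketch in Python (an illustration, not a model from the paper): the fused percept is treated as a weighted average of the two vernier offsets, and shifting the relative weight of the two feature traces, as a well-timed perturbation might, biases the percept toward one stimulus or the other. All parameter values are hypothetical.

```python
import numpy as np

def fused_offset(offset_first, offset_second, w_first=0.5):
    """Toy fusion rule: the perceived offset of the fused percept is a
    weighted average of the two vernier offsets (in arcmin).
    w_first is the relative weight given to the first stimulus (0..1)."""
    return w_first * offset_first + (1.0 - w_first) * offset_second

# Two verniers with opposite offsets, e.g. +1 and -1 arcmin (hypothetical values).
first, second = +1.0, -1.0

# If a perturbation changes the relative weighting of the two unconscious
# feature traces, the fused percept shifts accordingly.
for w in np.linspace(0.3, 0.7, 5):
    print(f"weight of first stimulus = {w:.2f} -> fused offset = "
          f"{fused_offset(first, second, w):+.2f} arcmin")
```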
Proceedings of the National Academy of Sciences of the United States of America | 2001
Michael H. Herzog; Christof Koch
We characterize a class of spatio-temporal illusions with two complementary properties. Firstly, if a vernier stimulus is flashed for a short time on a monitor and is followed immediately by a grating, the latter can express features of the vernier, such as its offset, its orientation, or its motion (feature inheritance). Yet the vernier stimulus itself remains perceptually invisible. Secondly, the vernier can be rendered visible by presenting gratings with a larger number of elements (shine-through). Under these conditions, subjects perceive two independent “objects”, each carrying its own features. A transition between these two domains can be effected by subtle changes in the spatio-temporal layout of the grating. This should allow psychophysicists and electrophysiologists to investigate feature binding in a precise and quantitative manner.
Biological Cybernetics | 1998
Michael H. Herzog; Manfred Fahle
We investigated the roles of feedback and attention in training a vernier discrimination task as an example of perceptual learning. Human learning, even of simple stimuli such as verniers, relies on more complex mechanisms than previously expected, ruling out simple neural network models. These findings are not just an empirical oddity but evidence that present models fail to reflect some important characteristics of the learning process. We list some of the problems of neural networks and develop a new model that solves them by incorporating top-down mechanisms. Unlike in neural networks, learning in our model is not driven by the set of stimuli alone: internal estimations of performance and knowledge about the task are also incorporated. Our model implies that, under certain conditions, the detectability of only some of the stimuli is enhanced, while the overall improvement of performance is attributed to a change of decision criteria. An experiment confirms this prediction.
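As an illustration of the criterion argument, here is a minimal signal-detection sketch under assumed parameters (not the model developed in the paper): with sensitivity d' held fixed, merely moving the decision criterion changes overall percent correct, so measured performance can change without any change in the detectability of the stimuli.

```python
from math import erf, sqrt

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def percent_correct(d_prime, criterion, p_right=0.5):
    """'Right-offset' trials draw an internal response from N(d_prime, 1),
    'left-offset' trials from N(0, 1); the observer answers 'right' when
    the internal response exceeds the criterion."""
    p_hit = 1.0 - phi(criterion - d_prime)   # correct answers on right-offset trials
    p_correct_rejection = phi(criterion)     # correct answers on left-offset trials
    return p_right * p_hit + (1.0 - p_right) * p_correct_rejection

# Sensitivity is fixed at d' = 1; only the criterion moves (hypothetical values).
for c in (0.5, 1.0, 1.5):
    print(f"criterion = {c:.1f} -> proportion correct = {percent_correct(1.0, c):.3f}")
```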
Journal of Vision | 2009
Toni P. Saarela; Bilge Sayim; Gerald Westheimer; Michael H. Herzog
In crowding, neighboring elements impair the perception of a peripherally presented target. Crowding is often regarded as a consequence of spatial pooling of information that leads to the perception of textural wholes. We studied the effects of stimulus configuration on crowding using Gabor stimuli. In accordance with previous studies, contrast and orientation discrimination of a Gabor target were impaired in the presence of flanking Gabors of equal length. The stimulus configuration was then changed (1) by making the flankers either shorter or longer than the target or (2) by constructing each flanker from two or three small Gabors. These simple configural changes greatly reduced or even abolished crowding, even though the orientation, spatial frequency, and phase of the stimuli were unchanged. The results challenge simple pooling explanations for crowding. We propose that crowding is weak whenever the target stands out from the stimulus array and strong when the target groups with the flanking elements to form a coherent texture.
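For readers who want to recreate this kind of stimulus, here is a minimal Gabor-patch generator (a sketch with illustrative parameter values, not the exact stimuli of the study): a sinusoidal carrier windowed by a Gaussian envelope, with the carrier orientation as the feature to be discriminated.

```python
import numpy as np

def gabor(size=128, wavelength=16.0, sigma=20.0, orientation_deg=0.0, phase=0.0):
    """Sinusoidal carrier multiplied by a circular Gaussian envelope.
    size: patch width/height in pixels; wavelength: carrier period in pixels;
    sigma: std. dev. of the envelope in pixels; orientation/phase of the carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    theta = np.deg2rad(orientation_deg)
    x_rot = x * np.cos(theta) + y * np.sin(theta)      # coordinate along the carrier
    carrier = np.cos(2.0 * np.pi * x_rot / wavelength + phase)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return carrier * envelope  # contrast values in [-1, 1]

target = gabor(orientation_deg=5.0)    # slightly tilted target
flanker = gabor(orientation_deg=0.0)   # vertical flanker with the same spatial frequency
```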
Nature | 2002
Michael H. Herzog; Manfred Fahle
Perception of a visual target and the responses of cortical neurons can be strongly influenced by a context surrounding the target. This observation relates to the fundamental issue of how cortical neurons code objects of the external world. In high-contrast regimes, embedding a target in an iso-oriented context reduces neural responses and deteriorates performance in psychophysical experiments. Performance with orthogonal surrounds is better than with iso-oriented ones. This contextual interference is often postulated to be caused by long- or short-range interactions between neurons tuned to orientation. Here we show, using a new illusion called ‘shine-through’ as a sensitive psychophysical probe, that the orientation difference between target and context does not determine performance. Instead, contextual modulation depends on the overall spatial structure of the context. We propose that contextual suppression vanishes if the contextual elements are grouped into an independent and coherent object.
Vision Research | 2006
Haluk Ogmen; Thomas U. Otto; Michael H. Herzog
The human visual system computes features of moving objects with high precision, despite the fact that these features can change or blend into each other in the retinotopic image. Very little is known about how the human brain accomplishes this complex feat. Using a Ternus-Pikler display, introduced by Gestalt psychologists about a century ago, we show that human observers can perceive features of moving objects at locations where these features are not present. More importantly, our results indicate that these non-retinotopic feature attributions are not errors caused by the limitations of the perceptual system but follow rules of perceptual grouping. From a computational perspective, our data imply sophisticated real-time transformations of retinotopic relations in the visual cortex. Our results suggest that the human motion and form systems interact with each other to remap the retinotopic projection of physical space in order to maintain the identity of moving objects in perceptual space.
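A minimal sketch of the display geometry and the correspondence it induces (illustrative values, not the exact parameters of the study): three elements are displaced by one inter-element spacing between frames; with a blank interval between the frames the row appears to move as a group, and a feature imposed on an element in frame 1 is perceived on its group-motion partner in frame 2, i.e. at a different retinotopic location.

```python
# Ternus-Pikler display sketch (positions in arbitrary units; values are illustrative).
spacing = 1.0
frame1 = [0.0, 1.0, 2.0]                  # three elements in frame 1
frame2 = [p + spacing for p in frame1]    # the whole row shifted right in frame 2

# Group motion (typically seen with a blank inter-stimulus interval):
# element i of frame 1 corresponds to element i of frame 2.
group_motion = {i: i for i in range(3)}

# Element motion (typically seen without an inter-stimulus interval):
# the two elements at overlapping positions stay put and only the outer
# element appears to jump from one end of the row to the other.
element_motion = {0: 2, 1: 0, 2: 1}

# Non-retinotopic feature attribution: a feature shown on the central element
# of frame 1 is perceived on its group-motion partner, not at the same
# retinal position.
feature_on = 1
partner = group_motion[feature_on]
print(f"feature presented at x = {frame1[feature_on]:.1f} "
      f"is perceived at x = {frame2[partner]:.1f}")
```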
Vision Research | 1999
Michael H. Herzog; Manfred Fahle
We investigate the influence of biased feedback on decision and learning processes in a vernier discrimination task. Subjects adjust their decision criteria, and hence their responses, according to biased external feedback. However, they do not use learning processes to encode incorrectly classified stimuli. As soon as correct feedback is restored, observers regain their original performance, indicating an involvement of internal criteria. If the external feedback is switched off instead of being corrected, the rebound is less vigorous. The findings contradict predictions of supervised neural network models.
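A toy simulation of the criterion-adjustment idea (a hedged sketch, not the paper's analysis; all numbers are hypothetical): the observer answers 'right' when an internal response exceeds a criterion and nudges the criterion whenever the external feedback labels the response an error, so systematically biased feedback drags the criterion, and hence the response proportions, away from their unbiased values.

```python
import random

def simulate(n_trials=5000, d_prime=1.0, bias=0.3, lr=0.02, seed=1):
    """Toy observer: respond 'right' if the internal response exceeds the criterion.
    Feedback is biased: a fraction `bias` of correct 'right' responses is
    falsely signalled as an error, which pushes the criterion upward."""
    rng = random.Random(seed)
    criterion = d_prime / 2.0                    # start at the unbiased criterion
    for _ in range(n_trials):
        stim_right = rng.random() < 0.5
        internal = rng.gauss(d_prime if stim_right else 0.0, 1.0)
        resp_right = internal > criterion
        feedback_error = (resp_right != stim_right)
        if resp_right and not feedback_error and rng.random() < bias:
            feedback_error = True                # biased feedback: correct response called wrong
        if feedback_error:
            # shift the criterion away from the response that was 'punished'
            criterion += lr if resp_right else -lr
    return criterion

print(f"criterion after unbiased feedback: {simulate(bias=0.0):.2f}")
print(f"criterion after biased feedback:   {simulate(bias=0.3):.2f}")
```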
Vision Research | 2009
Kristoffer C. Aberg; Elisa M. Tartaglia; Michael H. Herzog
In most models of perceptual learning, the amount of improvement in performance does not depend on the regime of stimulus presentations but only on the sheer number of trials. Here, we kept the number of stimulus presentations constant while varying the number of trials per session. We show that a minimal number of stimulus presentations per session is necessary, that transfer depends strongly on the presentation regime, and that sleep has only weak effects, if any.
Journal of Vision | 2006
Thomas U. Otto; Haluk Ogmen; Michael H. Herzog
How features are attributed to objects is one of the most puzzling issues in the neurosciences. A deeply entrenched view is that features are perceived at the locations where they are presented. Here, we show that features in motion displays can be systematically attributed from one location to another, although the elements possessing the features are invisible. Furthermore, features can be integrated across locations. Feature mislocalizations are usually treated as errors and limits of the visual system. On the contrary, we show that the nonretinotopic feature attributions reported herein follow rules of grouping precisely, suggesting that they reflect a fundamental computational strategy and not errors of visual processing.
Vision Research | 2001
Michael H. Herzog; Manfred Fahle; Christof Koch
When a vernier stimulus is presented for a short time and followed by a grating comprising five straight lines, the vernier remains invisible but may bequeath its offset to the grating (feature inheritance). For more than seven grating elements, the vernier is rendered visible as a shine-through element. However, shine-through depends strongly on the spatio-temporal layout of the grating. Here, we show that spatially inhomogeneous gratings diminish shine-through and vernier discrimination. Even subtle deviations, in the range of a few minutes of arc, matter. However, longer presentation times of the vernier regenerate shine-through. Feature inheritance and shine-through may become useful tools in investigating such different topics as the time course of information processing, feature binding, attention, and masking.
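The regimes described here can be summarized in a small decision sketch (a qualitative restatement of the text above, not a quantitative model; the duration threshold is purely illustrative):

```python
def predicted_percept(n_elements, homogeneous=True, vernier_duration_ms=20):
    """Qualitative summary of the regimes described above (illustrative only)."""
    if n_elements <= 5:
        return "feature inheritance: vernier invisible, its offset bequeathed to the grating"
    if n_elements <= 7:
        return "regime not specified in the abstracts above"
    if not homogeneous and vernier_duration_ms < 40:   # 40 ms is an illustrative threshold
        return "shine-through diminished by the spatial inhomogeneity of the grating"
    return "shine-through: the vernier is visible as its own element"

print(predicted_percept(25, homogeneous=False, vernier_duration_ms=20))
```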