
Publications


Featured research published by Manfred Fahle.


Psychological Science | 2000

Grasping Visual Illusions: No Evidence for a Dissociation Between Perception and Action

Volker H. Franz; Karl R. Gegenfurtner; Heinrich H. Bülthoff; Manfred Fahle

Neuropsychological studies prompted the theory that the primate visual system might be organized into two parallel pathways, one for conscious perception and one for guiding action. Supporting evidence in healthy subjects seemed to come from a dissociation in visual illusions: In previous studies, the Ebbinghaus (or Titchener) illusion deceived perceptual judgments of size, but only marginally influenced the size estimates used in grasping. Contrary to those results, the findings from the present study show that there is no difference in the sizes of the perceptual and grasp illusions if the perceptual and grasping tasks are appropriately matched. We show that the differences found previously can be accounted for by a hitherto unknown, nonadditive effect in the illusion. We conclude that the illusion does not provide evidence for the existence of two distinct pathways for perception and action in the visual system.


Vision Research | 1993

Long-term learning in vernier acuity: Effects of stimulus orientation, range and of feedback

Manfred Fahle; Shimon Edelman

In hyperacuity, as in many other tasks, performance improves with practice. To better understand the underlying mechanisms, we measured thresholds of 41 inexperienced observers for the discrimination of vernier displacements. In spite of considerable inter-individual differences, mean thresholds decreased monotonically over the 10,000 stimuli presented to each observer, if stimulus orientation was constant. Generalization of learning seemed to be possible across offset-ranges, but not across orientations. Learning was slightly faster with error feedback than without it in one experiment. These results effectively constrain the range of conceivable models for learning of hyperacuity.


Vision Research | 1995

Fast perceptual learning in hyperacuity

Manfred Fahle; Shimon Edelman; Tomaso Poggio

We investigated fast improvement of visual performance in several hyperacuity tasks such as vernier acuity and stereoscopic depth perception in almost 100 observers. Results indicate that the fast phase of perceptual learning, occurring within less than 1 hr of training, is specific for the visual field position and for the particular hyperacuity task, but is only partly specific for the eye trained and for the offset tested. Learning occurs without feedback. We conjecture that the site of learning may be quite early in the visual pathway.


Current Opinion in Neurobiology | 2005

Perceptual learning: specificity versus generalization

Manfred Fahle

Perceptual learning improves performance on many tasks, from orientation discrimination to the identification of faces. Although conventional wisdom considered sensory cortices as hard-wired, the specificity of improvement achieved through perceptual learning indicates an involvement of early sensory cortices. These cortices might be more plastic than previously assumed, and both sum-potential and single cell recordings indeed demonstrate plasticity of neuronal responses of these sensory cortices. However, for learning to be optimally useful, it must generalize to other tasks. Further research on perceptual learning should therefore, in my opinion, investigate first, the conditions for generalization of training-induced improvement, second, its use for teaching and rehabilitation, and third, its dependence on pharmacological agents.


Proceedings of the Royal Society of London. Series B, Biological Sciences | 1981

Visual Hyperacuity: Spatiotemporal Interpolation in Human Vision

Manfred Fahle; Tomaso Poggio

Stroboscopic presentation of a moving object can be interpolated by our visual system into the perception of continuous motion. The precision of this interpolation process has been explored by measuring the vernier discrimination threshold for targets displayed stroboscopically at a sequence of stations. The vernier targets, moving at constant velocity, were presented either with a spatial offset or with a temporal offset or with both. The main results are: (1) vernier acuity for spatial offset is rather invariant over a wide range of velocities and separations between the stations (see Westheimer & McKee 1975); (2) vernier acuity for temporal offset depends on spatial separation and velocity. At each separation there is an optimal velocity such that the strobe interval is roughly constant at about 30 ms; optimal acuity decreases with increasing separation; (3) blur of the vernier pattern decreases acuity for spatial offsets, but improves acuity for temporal offsets (at high velocities and large separations); (4) a temporal offset exactly compensates the equivalent (at the given velocity) spatial offset only for a small separation and optimal velocity; otherwise the spatial offset dominates. A theoretical analysis of the interpolation problem suggests a computational scheme based on the assumption of constant velocity motion. This assumption reflects a constraint satisfied in normal vision over the short times and small distances normally relevant for the interpolation process. A reasonable implementation of this scheme only requires a set of independent, direction selective spatiotemporal channels, that is receptive fields with the different sizes and temporal properties revealed by psychophysical experiments. It is concluded that sophisticated mechanisms are not required to account for the main properties of vernier acuity with moving targets. It is furthermore suggested that the spatiotemporal channels of human vision may be the interpolation filters themselves. Possible neurophysiological implications are briefly discussed.
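The computational scheme described above rests on a constant-velocity assumption, under which a temporal offset between vernier segments corresponds to an equivalent spatial offset of dx = v * dt, and positions between stroboscopic stations are recovered by interpolation. The following sketch illustrates that relationship and a simple linear interpolation between stations; the function names, units, and numbers are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def equivalent_spatial_offset(velocity_deg_per_s, temporal_offset_ms):
    """Spatial offset (in arcsec) that a temporal offset mimics for a target
    moving at constant velocity: dx = v * dt."""
    dt_s = temporal_offset_ms / 1000.0
    return velocity_deg_per_s * dt_s * 3600.0  # degrees -> arcseconds

def interpolate_position(strobe_times_ms, strobe_positions_deg, query_times_ms):
    """Constant-velocity (linear) interpolation of target position between
    stroboscopic stations."""
    return np.interp(query_times_ms, strobe_times_ms, strobe_positions_deg)

# Illustrative example: stations every 30 ms, target moving at 4 deg/s.
times = np.arange(0.0, 150.0, 30.0)        # strobe times in ms
positions = 4.0 * times / 1000.0           # positions in deg (constant velocity)
print(interpolate_position(times, positions, [45.0]))  # position halfway between stations
print(equivalent_spatial_offset(4.0, 7.5))             # 108 arcsec at this velocity
```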


Journal of Vision | 2004

Perceptual learning: A case for early selection

Manfred Fahle

Perceptual learning is any relatively permanent change of perception as a result of experience. Visual learning leads to sometimes dramatic and quite fast improvements of performance in perceptual tasks, such as hyperacuity discriminations. The improvement is often very specific for the exact task trained, for the precise stimulus orientation, for the stimulus position in the visual field, and for the eye used during training. This specificity indicates that the underlying changes in the nervous system are located at least partly at the level of the primary visual cortex. The dependence of learning on error feedback and on attention, on the other hand, proves the importance of top-down influences from higher cortical centers. In summary, perceptual learning seems to rely at least partly on changes at a relatively early level of cortical information processing (early selection), such as the primary visual cortex, under the control of top-down influences (selection and shaping). An alternative explanation based on late selection is discussed.


Vision Research | 1996

The Influence of Temporal Phase Differences on Texture Segmentation

Ute Leonards; Wolf Singer; Manfred Fahle

Scene segmentation and perceptual grouping are important operations in visual processing. Pattern elements constituting individual perceptual objects need to be segregated from those of other objects and the background and have to be bound together for further joint evaluation. Both textural (spatial) and temporal cues are exploited for this grouping operation. Thus, pattern elements might get bound that share certain textural features and/or appear in close spatial or temporal contiguity. However, results on the involvement of temporal cues in perceptual grouping are contradictory [Kiper et al. (1991). Society for Neuroscience Abstracts, 17, 1209; Fahle (1993). Proceedings of the Royal Society of London B, 254, 199-203]. We therefore reinvestigated the relative contributions of temporal and spatial cues and their interactions in a texture-segmentation paradigm. Our data show that the visual system can segregate figures solely on the basis of temporal cues if the temporal offset between figure and ground elements exceeds 10 msec. Moreover, segregation of figures defined by orientation differences among pattern elements is facilitated by additional temporal cues if these define the same figure. If temporal and textural cues define different figures, the two cues compete and only the more salient pattern is perceived. By contrast, the detection of a figure defined by orientation is not impaired by conflicting temporal cues if these do not define a figure themselves and do not exceed offset intervals of 100 msec. These results indicate the existence of a flexible binding mechanism that exploits both temporal and textural cues either alone or in combination if they serve perceptual grouping but can exclude either of the two cues if they are in conflict or do not define a figure. It is proposed that this flexibility is achieved by the implementation of two segmentation mechanisms which differ in their sensitivity for spatial and temporal cues and interact in a facultative way.


Current Biology | 1996

No transfer of perceptual learning between similar stimuli in the same retinal position

Manfred Fahle; Michael J. Morgan

Background: Recent experiments have demonstrated a remarkable amount of specificity in the learning of simple visual tasks in humans, as well as considerable plasticity of receptive fields in the visual cortex of adult monkeys. Here, we tested the specificity of improvement through learning in the performance of human observers on two tasks using almost identical stimuli. Results: Two groups, of six observers each, were trained in two hyperacuity tasks: three-dot bisection and three-dot vernier discrimination. The groups started with different tasks, and switched tasks after one hour of training. Training improved performance significantly, in spite of considerable variability between observers, but improvement did not generalize from one of these tasks to the other. This result indicates that perceptual learning can be extremely stimulus specific, and that deviations from the same standard but in orthogonal directions require completely new training. Conclusions: Learning is not based on the development of a more exact map of positional information, or on training to fixate or accommodate the eye, but on a better discrimination between the stimuli using one specific stimulus dimension. We also demonstrate that observers differ considerably, not only in their speed of learning, but also in their relative level of performance on the two similar tasks.


Biological Cybernetics | 1998

Modeling perceptual learning: difficulties and how they can be overcome

Michael H. Herzog; Manfred Fahle

We investigated the roles of feedback and attention in training a vernier discrimination task as an example of perceptual learning. Human learning, even of simple stimuli such as verniers, relies on more complex mechanisms than previously expected, ruling out simple neural network models. These findings are not just an empirical oddity but evidence that present models fail to reflect some important characteristics of the learning process. We list some of the problems of neural networks and develop a new model that solves them by incorporating top-down mechanisms. In contrast to neural networks, learning in our model is not driven by the set of stimuli alone; internal estimates of performance and knowledge about the task are also incorporated. Our model implies that under certain conditions the detectability of only some of the stimuli is enhanced, while the overall improvement of performance is attributed to a change of decision criteria. An experiment confirms this prediction.
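The model's central prediction, that overall performance can rise through a shift of decision criteria while the detectability of the stimuli stays unchanged, can be illustrated with a toy signal-detection simulation. This is only a rough sketch under assumed parameters (a two-alternative vernier task with unit-variance noise), not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(0)

def percent_correct(d_prime, criterion, n_trials=100_000):
    """Two-alternative vernier task as signal detection: 'left' offsets draw
    internal responses from N(-d'/2, 1), 'right' offsets from N(+d'/2, 1);
    the observer responds 'right' whenever the response exceeds the criterion."""
    side = rng.integers(0, 2, n_trials)               # 0 = left offset, 1 = right offset
    signal = rng.normal((side - 0.5) * d_prime, 1.0)  # noisy internal response
    response = (signal > criterion).astype(int)
    return (response == side).mean()

# Sensitivity (d') is held fixed; only the decision criterion changes from a
# strongly biased to an unbiased setting, yet percent correct improves.
print(percent_correct(d_prime=1.0, criterion=1.0))  # ~0.62 with a biased criterion
print(percent_correct(d_prime=1.0, criterion=0.0))  # ~0.69 with an unbiased criterion
```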


Perception | 1994

Human Pattern Recognition: Parallel Processing and Perceptual Learning

Manfred Fahle

A new theory of visual object recognition by Poggio et al. that is based on multidimensional interpolation between stored templates requires fast, stimulus-specific learning in the visual cortex. Indeed, performance in a number of perceptual tasks improves as a result of practice. We distinguish between two phases of learning a vernier-acuity task, a fast phase that takes place within less than 20 min and a slow phase that continues over 10 h of training and probably beyond. The improvement is specific for relatively ‘simple’ features, such as the orientation of the stimulus presented during training, for the position in the visual field, and for the eye through which learning occurred. Some of these results are simulated by means of a computer model that relies on object recognition by multidimensional interpolation between stored templates. Orientation specificity of learning is also found in a jump-displacement task. In parallel with the improvement in performance, cortical potentials evoked by the jump displacement tend to decrease in latency and to increase in amplitude as a result of training. The distribution of potentials over the brain changes significantly as a result of repeated exposure to the same stimulus. The results of both psychophysical and electrophysiological experiments indicate that some form of perceptual learning might occur very early during cortical information processing. The hypothesis that vernier breaks are detected ‘early’ during pattern recognition is supported by the fact that reaction times for the detection of verniers depend hardly at all on the number of stimuli presented simultaneously. Hence, vernier breaks can be detected in parallel at different locations in the visual field, indicating that deviation from straightness is an elementary feature for visual pattern recognition in humans that is detected at an early stage of pattern recognition. Several results obtained during the last few years are reviewed, some new results are presented, and all these results are discussed with regard to their implications for models of pattern recognition.
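The template-interpolation account referred to at the start of this abstract can be sketched as a toy recognizer that interpolates between stored feature vectors with Gaussian radial basis functions. The feature vectors, labels, and width parameter below are made-up assumptions for illustration, not the model used in the paper:

```python
import numpy as np

def rbf_recognizer(templates, labels, sigma=1.0):
    """Return a classifier that interpolates between stored templates using
    Gaussian radial basis functions and takes a weighted vote over labels."""
    templates = np.asarray(templates, dtype=float)

    def predict(stimulus):
        stimulus = np.asarray(stimulus, dtype=float)
        d2 = ((templates - stimulus) ** 2).sum(axis=1)   # squared distances to templates
        weights = np.exp(-d2 / (2.0 * sigma ** 2))       # RBF activations
        weights /= weights.sum()                         # normalize to sum to 1
        scores = {}
        for w, label in zip(weights, labels):
            scores[label] = scores.get(label, 0.0) + w   # weighted vote per class
        return max(scores, key=scores.get)

    return predict

# Toy usage: two stored 2-D "views" per class; a novel view is classified by
# interpolating between the stored templates.
predict = rbf_recognizer([[0, 0], [1, 0], [5, 5], [6, 5]],
                         ["A", "A", "B", "B"], sigma=1.5)
print(predict([0.4, 0.2]))   # -> "A"
print(predict([5.5, 4.8]))   # -> "B"
```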

Collaboration


Dive into Manfred Fahle's collaborations.

Top Co-Authors

Michael H. Herzog
École Polytechnique Fédérale de Lausanne

Christof Koch
Allen Institute for Brain Science

Tomaso Poggio
Massachusetts Institute of Technology