Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Veith Weilnhammer is active.

Publication


Featured research published by Veith Weilnhammer.


The Journal of Neuroscience | 2013

Frontoparietal Cortex Mediates Perceptual Transitions in Bistable Perception

Veith Weilnhammer; Karin Ludwig; Guido Hesselmann; Philipp Sterzer

During bistable vision, perception oscillates between two mutually exclusive percepts despite constant sensory input. Greater BOLD responses in frontoparietal cortex have been shown to be associated with endogenous perceptual transitions compared with “replay” transitions designed to closely match bistability in both perceptual quality and timing. It has remained controversial, however, whether this enhanced activity reflects causal influences of these regions on processing at the sensory level or, alternatively, an effect of stimulus differences that result in, for example, longer durations of perceptual transitions in bistable perception compared with replay conditions. Using a rotating Lissajous figure in an fMRI experiment on 15 human participants, we controlled for potential confounds of differences in transition duration and confirmed previous findings of greater activity in frontoparietal areas for transitions during bistable perception. In addition, we applied dynamic causal modeling to identify the neural model that best explains the observed BOLD signals in terms of effective connectivity. We found that enhanced activity for perceptual transitions is associated with a modulation of top-down connectivity from frontal to visual cortex, thus arguing for a crucial role of frontoparietal cortex in perceptual transitions during bistable perception.
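The dynamic causal modeling (DCM) logic behind this result can be sketched as a bilinear state equation in which a transition-related input modulates top-down coupling. The following is a minimal illustration with hypothetical connectivity values, not the study's actual model:

```python
# Minimal sketch of the bilinear DCM state equation dx/dt = (A + u*B) @ x,
# where A is intrinsic connectivity and B the modulation of top-down
# coupling by perceptual transitions (input u). All values hypothetical.

def dcm_step(x, u, A, B, dt=0.1):
    """One Euler step of the bilinear neural state equation."""
    n = len(x)
    return [x[i] + dt * sum((A[i][j] + u * B[i][j]) * x[j] for j in range(n))
            for i in range(n)]

# Two regions: x[0] = frontal, x[1] = visual cortex.
A = [[-0.5, 0.0],
     [0.0, -0.5]]           # intrinsic self-decay only
B = [[0.0, 0.0],
     [0.4, 0.0]]            # transitions boost frontal -> visual coupling

x0 = [1.0, 1.0]
with_transition = dcm_step(x0, u=1.0, A=A, B=B)
without_transition = dcm_step(x0, u=0.0, A=A, B=B)
```

In this toy version, visual-cortex activity rises relative to baseline only when the transition input engages the top-down connection, mirroring the modulation-of-connectivity interpretation.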


PLOS Computational Biology | 2017

A predictive coding account of bistable perception - a model-based fMRI study

Veith Weilnhammer; Heiner Stuke; Guido Hesselmann; Philipp Sterzer; Katharina Schmack

In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we turned to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model’s predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants’ perception. The predictive coding model was further validated by formal comparison against established models of bistable perception based on mutual inhibition and adaptation, noise, or a combination of adaptation and noise. Most importantly, model-based analyses of the fMRI data revealed that prediction error time-courses derived from the predictive coding model correlated with neural signal time-courses in bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated a superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception.
Taken together, our current work provides a theoretical framework that allows for the analysis of behavioural and neural data using a predictive coding perspective on bistable perception. In this, our approach posits a crucial role of prediction error signalling for the resolution of perceptual ambiguities.
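The core mechanism, prediction errors from residual evidence for the suppressed percept accumulating until they force a transition, can be illustrated with a deliberately simplified, deterministic sketch. Parameter values are hypothetical and this is not the published model:

```python
# Simplified sketch (not the published model): residual evidence for the
# suppressed percept generates prediction error that accumulates until it
# forces a perceptual transition. All parameters are hypothetical.

def simulate_bistable(n_steps=40, gain=0.5, threshold=1.0):
    """Return the time steps at which the percept switches."""
    percept = 0          # currently dominant interpretation (0 or 1)
    error = 0.0          # accumulated prediction error
    transitions = []
    for t in range(n_steps):
        residual_evidence = 0.5   # constant ambiguous input: the suppressed
                                  # percept is never fully explained away
        error += gain * residual_evidence
        if error >= threshold:    # error is resolved by a perceptual switch
            percept = 1 - percept
            error = 0.0
            transitions.append(t)
    return transitions

transitions = simulate_bistable()
```

A noise term on the residual evidence would turn the fixed switch interval into the stochastic phase-duration distributions characteristic of real bistable perception.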


Frontiers in Human Neuroscience | 2016

Learning What to See in a Changing World

Katharina Schmack; Veith Weilnhammer; Jakob Heinzle; Klaas E. Stephan; Philipp Sterzer

Visual perception is strongly shaped by expectations, but it is poorly understood how such perceptual expectations are learned in our dynamic sensory environment. Here, we applied a Bayesian framework to investigate whether perceptual expectations are continuously updated from different aspects of ongoing experience. In two experiments, human observers performed an associative learning task in which rapidly changing expectations about the appearance of ambiguous stimuli were induced. We found that perception of ambiguous stimuli was biased by both learned associations and previous perceptual outcomes. Computational modeling revealed that perception was best explained by a model that continuously updated priors from associative learning and perceptual history and combined these priors with the current sensory information in a probabilistic manner. Our findings suggest that the construction of visual perception is a highly dynamic process that incorporates rapidly changing expectations from different sources in a manner consistent with Bayesian learning and inference.
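A minimal sketch of the winning model's logic, a prior continuously updated from perceptual history and combined with the current sensory evidence via Bayes' rule, might look like this (learning rate and values are hypothetical, and the published model also updates from associative learning):

```python
# Hedged sketch, not the published model: a prior over a binary percept is
# updated from recent perceptual outcomes and combined with the current
# sensory likelihood via Bayes' rule.

def posterior(prior, likelihood):
    """Bayes' rule for a binary percept."""
    p = prior * likelihood
    return p / (p + (1 - prior) * (1 - likelihood))

def update_prior(prior, outcome, lr=0.3):
    """Delta-rule update of the prior toward the latest outcome (0 or 1)."""
    return prior + lr * (outcome - prior)

prior = 0.5
for outcome in [1, 1, 0, 1]:       # hypothetical sequence of percepts
    prior = update_prior(prior, outcome)

p_percept = posterior(prior, likelihood=0.5)   # fully ambiguous stimulus
```

With a fully ambiguous stimulus (likelihood 0.5), the posterior equals the learned prior, which is exactly why perception of ambiguous stimuli is biased by previous perceptual outcomes in this scheme.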


PLOS Computational Biology | 2017

Psychotic Experiences and Overhasty Inferences Are Related to Maladaptive Learning

Heiner Stuke; Hannes Stuke; Veith Weilnhammer; Katharina Schmack

Theoretical accounts suggest that an alteration in the brain’s learning mechanisms might lead to overhasty inferences, resulting in psychotic symptoms. Here, we sought to elucidate the suggested link between maladaptive learning and psychosis. Ninety-eight healthy individuals with varying degrees of delusional ideation and hallucinatory experiences performed a probabilistic reasoning task that allowed us to quantify overhasty inferences. Replicating previous results, we found a relationship between psychotic experiences and overhasty inferences during probabilistic reasoning. Computational modelling revealed that the behavioral data was best explained by a novel computational learning model that formalizes the adaptiveness of learning by a non-linear distortion of prediction error processing, where an increased non-linearity implies a growing resilience against learning from surprising and thus unreliable information (large prediction errors). Most importantly, a decreased adaptiveness of learning predicted delusional ideation and hallucinatory experiences. Our current findings provide a formal description of the computational mechanisms underlying overhasty inferences, thereby empirically substantiating theories that link psychosis to maladaptive learning.
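The model's key idea, a non-linear distortion that down-weights large, surprising prediction errors, might be sketched as follows. The specific weighting function here is illustrative, not the paper's formalization:

```python
# Illustrative sketch, not the published model: a delta-rule learner whose
# prediction errors are shrunk non-linearly, so that large (surprising,
# hence unreliable) errors drive less learning.

def adaptive_update(value, outcome, lr=0.5, nonlinearity=2.0):
    """Update `value` toward `outcome`, down-weighting large errors.
    Larger `nonlinearity` means stronger resilience against outliers."""
    pe = outcome - value
    weight = 1.0 / (1.0 + nonlinearity * pe * pe)   # hypothetical form
    return value + lr * weight * pe

# nonlinearity = 0 reduces to a plain delta rule; a large nonlinearity
# barely moves the estimate after an extreme outcome.
plain = adaptive_update(0.0, 1.0, lr=0.5, nonlinearity=0.0)
guarded = adaptive_update(0.0, 1.0, lr=0.5, nonlinearity=8.0)
```

In this reading, a *decreased* nonlinearity corresponds to the maladaptive, overhasty learning that the study links to delusional ideation and hallucinatory experiences.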


The Journal of Neuroscience | 2018

The Neural Correlates of Hierarchical Predictions for Perceptual Decisions

Veith Weilnhammer; Heiner Stuke; Philipp Sterzer; Katharina Schmack

Sensory information is inherently noisy, sparse, and ambiguous. In contrast, visual experience is usually clear, detailed, and stable. Bayesian theories of perception resolve this discrepancy by assuming that prior knowledge about the causes underlying sensory stimulation actively shapes perceptual decisions. The CNS is believed to entertain a generative model aligned to dynamic changes in the hierarchical states of our volatile sensory environment. Here, we used model-based fMRI to study the neural correlates of the dynamic updating of hierarchically structured predictions in male and female human observers. We devised a crossmodal associative learning task with covertly interspersed ambiguous trials in which participants engaged in hierarchical learning based on changing contingencies between auditory cues and visual targets. By inverting a Bayesian model of perceptual inference, we estimated individual hierarchical predictions, which significantly biased perceptual decisions under ambiguity. Although “high-level” predictions about the cue–target contingency correlated with activity in supramodal regions such as orbitofrontal cortex and hippocampus, dynamic “low-level” predictions about the conditional target probabilities were associated with activity in retinotopic visual cortex. Our results suggest that our CNS updates distinct representations of hierarchical predictions that continuously affect perceptual decisions in a dynamically changing environment.

SIGNIFICANCE STATEMENT

Bayesian theories posit that our brain entertains a generative model to provide hierarchical predictions regarding the causes of sensory information. Here, we use behavioral modeling and fMRI to study the neural underpinnings of such hierarchical predictions.
We show that “high-level” predictions about the strength of dynamic cue–target contingencies during crossmodal associative learning correlate with activity in orbitofrontal cortex and the hippocampus, whereas “low-level” conditional target probabilities were reflected in retinotopic visual cortex. Our findings empirically corroborate theorizations on the role of hierarchical predictions in visual perception and contribute substantially to a longstanding debate on the link between sensory predictions and orbitofrontal or hippocampal activity. Our work fundamentally advances the mechanistic understanding of perceptual inference in the human brain.
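The two-level structure, a "high-level" belief about the cue–target contingency generating "low-level" conditional target predictions, can be sketched in a few lines. This is an illustrative delta-rule caricature; the paper inverts a full Bayesian model:

```python
# Illustrative two-level sketch, not the paper's Bayesian model: the
# high-level contingency belief generates a low-level target prediction
# for each cue, and the prediction error updates the high-level belief.

def hierarchical_step(contingency, cue, target, lr=0.1):
    """One trial: predict the target from the cue, then update the
    contingency belief from the prediction error."""
    p_target = contingency if cue == 1 else 1.0 - contingency  # low level
    pe = target - p_target
    contingency += lr * (pe if cue == 1 else -pe)              # high level
    return contingency, p_target

belief = 0.5
for cue, target in [(1, 1), (0, 0), (1, 1)]:   # hypothetical trial sequence
    belief, p = hierarchical_step(belief, cue, target)
```

On ambiguous trials, `p_target` would act as the prediction that biases the perceptual decision, which is how the model-derived predictions enter the fMRI analysis.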


PLOS ONE | 2016

Perceptual Stability of the Lissajous Figure Is Modulated by the Speed of Illusory Rotation

Veith Weilnhammer; Philipp Sterzer; Guido Hesselmann

Lissajous figures represent ambiguous structure-from-motion stimuli rotating in depth and have proven to be a versatile tool to explore the cognitive and neural mechanisms underlying bistable perception. They are generated by the intersection of two sinusoids with perpendicular axes and increasing phase-shift whose frequency determines the speed of illusory 3D rotation. Recently, we found that Lissajous figures of higher shifting frequencies elicited longer perceptual phase durations and tentatively proposed a “representational momentum” account. In this study, our aim was twofold. First, we aimed to gather more behavioral evidence related to the perceptual dynamics of the Lissajous figure by simultaneously varying its shifting frequency and size. Using a conventional analysis, we investigated the effects of our experimental manipulations on transition probability (i.e., the probability that the current percept will change at the next critical stimulus configuration). Second, we sought to test the impact of our experimental factors on the occurrence of transitions in bistable perception by means of a Bayesian approach that can be used to directly quantify the impact of contextual cues on perceptual stability. We thereby estimated the implicit prediction of perceptual stability and how it is modulated by experimental manipulations.
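The conventional transition-probability measure described above can be computed directly from a sequence of reported percepts; a trivial sketch with a hypothetical report sequence:

```python
def transition_probability(percepts):
    """Probability that the current percept changes at the next critical
    stimulus configuration, estimated as the fraction of consecutive
    reports that differ."""
    changes = sum(a != b for a, b in zip(percepts, percepts[1:]))
    return changes / (len(percepts) - 1)

p = transition_probability([0, 0, 1, 1, 1, 0])   # hypothetical reports
```

The Bayesian approach in the study goes further by treating stability as an implicit prior and asking how experimental factors such as shifting frequency and size modulate it.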


PLOS Computational Biology | 2017

Correction: Psychotic Experiences and Overhasty Inferences Are Related to Maladaptive Learning

Heiner Stuke; Hannes Stuke; Veith Weilnhammer; Katharina Schmack

[This corrects the article DOI: 10.1371/journal.pcbi.1005328.]


Perception | 2015

Contextual modulation of effective connectivity in primary visual cortex in schizophrenia

Veith Weilnhammer; Kiley Seymour; Philipp Sterzer



Schizophrenia Bulletin | 2018

Delusion Proneness is Linked to a Reduced Usage of Prior Beliefs in Perceptual Decisions

Heiner Stuke; Veith Weilnhammer; Philipp Sterzer; Katharina Schmack


Vision Research | 2014

Revisiting the Lissajous figure as a tool to study bistable perception

Veith Weilnhammer; Karin Ludwig; Philipp Sterzer; Guido Hesselmann

Collaboration


Dive into Veith Weilnhammer's collaborations.

Top Co-Authors

Hannes Stuke

Free University of Berlin

Karin Ludwig

Humboldt University of Berlin