
Publication


Featured research published by Sophie M. Wuerger.


Journal of Vision | 2009

Multivoxel fMRI analysis of color tuning in human primary visual cortex

Laura M. Parkes; Jan-Bernard C. Marsman; D. C. Oxley; John Yannis Goulermas; Sophie M. Wuerger

We use multivoxel pattern analysis (MVPA) to study the spatial clustering of color-selective neurons in the human brain. Our main objective was to investigate whether MVPA reveals the spatial arrangements of color-selective neurons in human primary visual cortex (V1). We measured the distributed fMRI activation patterns for different color stimuli (Experiment 1: cardinal colors (to which the LGN is known to be tuned), Experiment 2: perceptual hues) in V1. Our two main findings were that (i) cone-opponent cardinal color modulations produce highly reproducible patterns of activity in V1, but these were not unique to each color. This suggests that V1 neurons with tuning characteristics similar to those found in LGN are not spatially clustered. (ii) Unique activation patterns for perceptual hues in V1 support current evidence for a spatially clustered hue map. We believe that our work is the first to show evidence of spatial clustering of neurons with similar color preferences in human V1.
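The multivoxel pattern analysis applied here asks whether distributed voxel activation patterns discriminate stimulus classes. A toy, correlation-based sketch with synthetic "voxel" patterns (all names and values are illustrative assumptions, not the paper's pipeline):

```python
import random

def correlate(a, b):
    """Pearson correlation between two voxel-pattern vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def classify(pattern, templates):
    """Assign a pattern to the class whose template it correlates with most."""
    return max(templates, key=lambda label: correlate(pattern, templates[label]))

random.seed(0)
# Synthetic class templates: each "hue" biases a different subset of voxels
red_t = [1.0] * 10 + [0.0] * 10
blue_t = [0.0] * 10 + [1.0] * 10
templates = {"red": red_t, "blue": blue_t}
noisy_red = [v + random.gauss(0, 0.3) for v in red_t]
print(classify(noisy_red, templates))  # expect "red" while the noise is low
```

Patterns that are reproducible but shared across stimuli correlate equally with every template; only class-unique patterns, as the paper reports for perceptual hues, are separable by a classifier of this kind.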


Neuroreport | 2001

Cross-modal integration of auditory and visual motion signals

Georg Meyer; Sophie M. Wuerger

Real-world moving objects are usually defined by correlated information in multiple sensory modalities such as vision and hearing. The aim of our study was to assess whether simultaneous auditory supra-threshold motion introduces a bias or affects the sensitivity in a visual motion detection task. We demonstrate a bias in the perceived direction of visual motion that is consistent with the direction of the auditory motion (audio-visual motion capture). This bias effect is robust and occurs even if the auditory and visual motion signals come from different locations or move at different speeds. We also show that visual motion detection thresholds are higher for consistent auditory motion than for inconsistent motion, provided the stimuli move at the same speed and are co-localised.


Experimental Brain Research | 2005

Low-Level Integration of Auditory and Visual Motion Signals Requires Spatial Co-Localisation

Georg Meyer; Sophie M. Wuerger; Florian Röhrbein; Christoph Zetzsche

It is well known that the detection thresholds for stationary auditory and visual signals are lower if the signals are presented bimodally rather than unimodally, provided the signals coincide in time and space. Recent work on auditory–visual motion detection suggests that the facilitation seen for stationary signals is not seen for motion signals. We investigate the conditions under which motion perception also benefits from the integration of auditory and visual signals. We show that the integration of cross-modal local motion signals that are matched in position and speed is consistent with thresholds predicted by a neural summation model. If the signals are presented in different hemi-fields, move in different directions, or both, then behavioural thresholds are predicted by a probability-summation model. We conclude that cross-modal signals have to be co-localised and co-incident for effective motion integration. We also argue that facilitation is only seen if the signals contain all localisation cues that would be produced by physical objects.


Vision Research | 2005

The cone inputs to the unique-hue mechanisms

Sophie M. Wuerger; Philip Atkinson; Simon J. Cropper

Our aim was to characterise the chromatic mechanisms that yield the four unique hues: red, green, yellow and blue. We measured the null planes for all four unique hues and report the following two main results. (1) We confirm that three chromatic mechanisms are required to account for the four unique hues. These three chromatic mechanisms do not coincide with the chromatic tuning found in parvocellular LGN neurones, i.e., neurones tuned to L-M and S-(L+M); these subcortical chromatic mechanisms are hence not the neural substrate of the perceptual unique hues and further higher-order colour mechanisms need to be postulated. Our results are consistent with the idea that the two higher-order colour mechanisms that yield unique red and unique green respectively combine the incremental and decremental responses of the subcortical chromatic mechanisms with different weights. In contrast, unique yellow and unique blue can be explained by postulating a single higher-order chromatic mechanism that combines the incremental and decremental subcortical chromatic responses with similar weights. (2) The variability between observers is small when expressed in terms of perceptual errors, which is consistent with the hypothesis that the colour vision system in adult humans is able to recalibrate itself based on prior visual experience.


Perception & Psychophysics | 2003

The integration of auditory and visual motion signals at threshold

Sophie M. Wuerger; Markus Hofbauer; Georg Meyer

To interpret our environment, we integrate information from all our senses. For moving objects, auditory and visual motion signals are correlated and provide information about the speed and the direction of the moving object. We investigated at what level the auditory and the visual modalities interact and whether the human brain integrates only motion signals that are ecologically valid. We found that the sensitivity for identifying motion was improved when motion signals were provided in both modalities. This improvement in sensitivity can be explained by probability summation. That is, auditory and visual stimuli are combined at a decision level, after the stimuli have been processed independently in the auditory and the visual pathways. Furthermore, this integration is direction-blind and is not restricted to ecologically valid motion signals.
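The probability-summation account can be sketched with the standard independent-detectors formula: if either channel alone suffices for detection, the bimodal probability is 1 − (1 − p_a)(1 − p_v). The probabilities below are made-up illustration values, not the study's data:

```python
def prob_summation(p_a, p_v):
    """Bimodal detection probability when the auditory and visual
    channels detect independently and either one counts as a hit."""
    return 1 - (1 - p_a) * (1 - p_v)

# Hypothetical unimodal detection probabilities near threshold
p_auditory, p_visual = 0.5, 0.5
print(prob_summation(p_auditory, p_visual))  # 0.75: better than either alone
```

The gain comes purely from having two independent chances to detect, with no cross-talk between pathways, which is why it is consistent with combination at a decision level.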


Behavioral and Cognitive Neuroscience Reviews | 2005

The perception of motion in chromatic stimuli.

Simon J. Cropper; Sophie M. Wuerger

The issue of whether there is a motion mechanism sensitive to purely chromatic stimuli has been pertinent for the past 30 or more years. The aim of this review is to examine why such different conclusions have been drawn in the literature and to reach some reconciliation. The review critically examines the behavioral evidence and concludes that there is a purely chromatic motion mechanism but that it is limited to the fovea. Examination of motion performance for chromatic and luminance stimuli provides convincing evidence that there are at least two different mechanisms for the two kinds of stimuli. The authors further argue that the chromatic mechanism may be at a particular disadvantage when the integration of multiple local motion signals is required. Finally, the authors present a descriptive model that may go some way toward explaining the reasons for the differences in collected data outlined in this article.


Vision Research | 1995

Proximity judgments in color space: tests of a Euclidean color geometry.

Sophie M. Wuerger; Laurence T. Maloney; John Krauskopf

We describe two tests of the hypothesis that human judgments of the proximity of colors are consistent with a Euclidean geometry on color matching space. The first test uses proximity judgments to measure the angle between any two intersecting lines in color space. Pairwise estimates of the angles between three lines in a plane were made in order to test the additivity of angles. Three different color proximity tasks were considered. Additivity failed for each of the three proximity tasks. Secondly, we tested a prediction concerning the growth of the variability of judgments of similarity with the distance between the test and reference stimuli. The Euclidean hypothesis was also rejected by this test. The results concerning the growth of variability are consistent with the assumption that observers use a city-block metric when judging the proximity of colored lights.
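The contrast between a Euclidean (L2) and a city-block (L1) metric can be made concrete in a small sketch; the coordinates below are arbitrary illustration points in a three-dimensional color space, not the experimental stimuli:

```python
def euclidean(a, b):
    """L2 metric: square root of the sum of squared coordinate differences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def city_block(a, b):
    """L1 ('city-block') metric: sum of absolute coordinate differences."""
    return sum(abs(x - y) for x, y in zip(a, b))

# Two hypothetical points in a three-dimensional color space
c1, c2 = (0.0, 0.0, 0.0), (1.0, 1.0, 1.0)
print(euclidean(c1, c2))   # ≈ 1.732 (sqrt of 3)
print(city_block(c1, c2))  # 3.0
```

The two metrics agree along a single coordinate axis but diverge for oblique directions, which is what makes proximity judgments between lines of different orientations a diagnostic test.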


Journal of Vision | 2011

Event-related potentials reveal an early advantage for luminance contours in the processing of objects.

Jasna Martinovic; Justyna Mordal; Sophie M. Wuerger

Detection and identification of objects are the most crucial goals of visual perception. We studied the role of luminance and chromatic information for object processing by comparing performance for familiar, meaningful object contours with that for novel, non-object contours. Comparisons were made between full-color and reduced-color object (or non-object) contours. Full-color stimuli contained both chromatic and luminance information, whereas luminance information was absent in the reduced-color stimuli. All stimuli were made equally salient by fixing them at multiples of discrimination threshold contrast. In a subsequent electroencephalographic experiment, observers were asked to classify contours as objects or non-objects. An advantage in accuracy was found for full-color stimuli over the reduced-color stimuli, but only if the contours depicted objects as opposed to non-objects. Event-related potentials revealed the neural correlate of this object-specific luminance advantage. The amplitude of the centro-occipital N1 component was modulated by stimulus class, with the effect being driven by the presence of luminance information. We conclude that high-level discrimination processes in the cortex start relatively early and exhibit object-selective effects only in the presence of luminance information. This is consistent with the superiority of luminance in subserving object identification processes.


Vision Research | 1996

Color Appearance Changes Resulting from Iso-luminant Chromatic Adaptation

Sophie M. Wuerger

By means of asymmetric color matching, the effects of steady-state chromatic adaptation on the color appearance of briefly presented chromatic flashes were investigated. The adapting and test lights were of equal luminance (35 cd/m²) and differed from the standard grey adapting light either along the L-2M (red and green), or along the S-(L+M) (yellow and violet) line. The red (green) adapting light results in 6% positive (negative) L cone contrast and 11% negative (positive) M cone contrast with respect to the grey adapting light. The violet (yellowish) adapting light yields a positive (negative) S cone contrast of 50% relative to the standard adapting light. The main findings are: (i) iso-luminant adapting lights that differ only in their L-2M signal (red and green) resulted in asymmetric matches that differ mainly in the L-2M coordinate; (ii) iso-luminant adapting lights that differ in their S cone excitation only (yellow and violet) result in asymmetric matches that differ mainly in their S cone coordinate; (iii) the largest difference between test and match coordinates is found in the S cone signal for violet adaptation; (iv) the luminance differences of the asymmetric matches are within 1% of the mean luminance and are mostly non-systematic; (v) adaptation to iso-luminant red and green lights yields adaptational changes mainly in the L cones and not in the M cones; (vi) substantial quantitative deviations from a von Kries law are observed for L cone signals for red and green adaptation and for S cone decrements under yellow adaptation; (vii) S cone-isolating adapting lights result in small additive shifts in the S cone matches; adapting lights differing only in the L and M cone signal from the standard grey adapting light yield additive shifts only in the L and M cone matches.
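The von Kries law against which the matches in (vi) are tested predicts that each cone class is rescaled independently by the change in adapting excitation. A minimal sketch with made-up LMS values (arbitrary units, not the paper's measurements):

```python
def von_kries_match(test_lms, adapt_lms, ref_adapt_lms):
    """Von Kries rule: each cone signal is scaled independently by the
    ratio of the reference to the current adapting excitation."""
    return tuple(t * r / a for t, a, r in zip(test_lms, adapt_lms, ref_adapt_lms))

# Hypothetical LMS excitations (arbitrary units)
test = (10.0, 8.0, 4.0)    # test flash under the colored adapting light
adapt = (12.0, 9.0, 5.0)   # colored adapting light
ref = (11.0, 9.0, 6.0)     # standard grey adapting light
print(von_kries_match(test, adapt, ref))
```

A purely multiplicative rule of this form cannot produce the additive shifts reported in (vii), which is one way such deviations from von Kries behavior show up.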


Journal of Cognitive Neuroscience | 2011

Interactions between auditory and visual semantic stimulus classes: Evidence for common processing networks for speech and body actions

Georg Meyer; Mark W. Greenlee; Sophie M. Wuerger

Incongruencies between auditory and visual signals negatively affect human performance and cause selective activation in neuroimaging studies; therefore, they are increasingly used to probe audiovisual integration mechanisms. An open question is whether the increased BOLD response reflects computational demands in integrating mismatching low-level signals or reflects simultaneous unimodal conceptual representations of the competing signals. To address this question, we explore the effect of semantic congruency within and across three signal categories (speech, body actions, and unfamiliar patterns) for signals with matched low-level statistics. In a localizer experiment, unimodal (auditory and visual) and bimodal stimuli were used to identify ROIs. All three semantic categories cause overlapping activation patterns. We find no evidence for areas that show greater BOLD response to bimodal stimuli than predicted by the sum of the two unimodal responses. Conjunction analysis of the unimodal responses in each category identifies a network including posterior temporal, inferior frontal, and premotor areas. Semantic congruency effects are measured in the main experiment. We find that incongruent combinations of two meaningful stimuli (speech and body actions) but not combinations of meaningful with meaningless stimuli lead to increased BOLD response in the posterior STS (pSTS) bilaterally, the left SMA, the inferior frontal gyrus, the inferior parietal lobule, and the anterior insula. These interactions are not seen in premotor areas. Our findings are consistent with the hypothesis that pSTS and frontal areas form a recognition network that combines sensory categorical representations (in pSTS) with action hypothesis generation in inferior frontal gyrus/premotor areas. We argue that the same neural networks process speech and body actions.

Collaboration


Dive into Sophie M. Wuerger's collaboration.

Top Co-Authors

Georg Meyer | University of Liverpool

Dimosthenis Karatzas | Autonomous University of Barcelona

Chenyang Fu | University of Liverpool