Publications


Featured research published by Georg Meyer.


Neuroreport | 2001

Cross-modal integration of auditory and visual motion signals

Georg Meyer; Sophie M. Wuerger

Real-world moving objects are usually defined by correlated information in multiple sensory modalities such as vision and hearing. The aim of our study was to assess whether simultaneous auditory supra-threshold motion introduces a bias or affects the sensitivity in a visual motion detection task. We demonstrate a bias in the perceived direction of visual motion that is consistent with the direction of the auditory motion (audio-visual motion capture). This bias effect is robust and occurs even if the auditory and visual motion signals come from different locations or move at different speeds. We also show that visual motion detection thresholds are higher for consistent auditory motion than for inconsistent motion, provided the stimuli move at the same speed and are co-localised.


Experimental Brain Research | 2005

Low-Level Integration of Auditory and Visual Motion Signals Requires Spatial Co-Localisation

Georg Meyer; Sophie M. Wuerger; Florian Röhrbein; Christoph Zetzsche

It is well known that the detection thresholds for stationary auditory and visual signals are lower if the signals are presented bimodally rather than unimodally, provided the signals coincide in time and space. Recent work on auditory–visual motion detection suggests that the facilitation seen for stationary signals is not seen for motion signals. We investigate the conditions under which motion perception also benefits from the integration of auditory and visual signals. We show that the integration of cross-modal local motion signals that are matched in position and speed is consistent with thresholds predicted by a neural summation model. If the signals are presented in different hemi-fields, move in different directions, or both, then behavioural thresholds are predicted by a probability-summation model. We conclude that cross-modal signals have to be co-localised and co-incident for effective motion integration. We also argue that facilitation is only seen if the signals contain all localisation cues that would be produced by physical objects.
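
For orientation, the two benchmark models can be written in their textbook forms (the exact parameterisation used in the paper may differ). Probability summation assumes independent detection in each modality, combined only at the decision stage; neural (linear) summation assumes the signals are pooled before detection, so sensitivities add:

    P_{AV} = 1 - (1 - P_A)(1 - P_V)   % probability summation
    d'_{AV} = d'_A + d'_V             % neural (linear) summation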


Attention Perception & Psychophysics | 2003

The integration of auditory and visual motion signals at threshold

Sophie M. Wuerger; Markus Hofbauer; Georg Meyer

To interpret our environment, we integrate information from all our senses. For moving objects, auditory and visual motion signals are correlated and provide information about the speed and the direction of the moving object. We investigated at what level the auditory and the visual modalities interact and whether the human brain integrates only motion signals that are ecologically valid. We found that the sensitivity for identifying motion was improved when motion signals were provided in both modalities. This improvement in sensitivity can be explained by probability summation. That is, auditory and visual stimuli are combined at a decision level, after the stimuli have been processed independently in the auditory and the visual pathways. Furthermore, this integration is direction-blind and is not restricted to ecologically valid motion signals.


IEEE Transactions on Speech and Audio Processing | 1998

Improvement of speech spectrogram accuracy by the method of reassignment

Fabrice Plante; Georg Meyer; William A. Ainsworth

An improvement of the speech spectrogram based on the method of reassignment is presented. This method consists of moving each point of the spectrogram to a new point that represents the distribution of the energy in the time-frequency window more accurately. Examples of natural speech show an improvement of the energy localization in both time and frequency domains. This allows a better description of speech features.
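
As an illustration of the general technique (not the authors' original implementation), a reassigned spectrogram of a speech recording can be computed with off-the-shelf tools; the sketch below assumes librosa (0.7 or later) and uses a placeholder file name.

    import numpy as np
    import librosa

    # Load a short speech recording (the path is a placeholder).
    y, sr = librosa.load("speech.wav", sr=None)

    # Conventional short-time magnitude spectrogram: energy is reported at
    # the centre of each fixed time-frequency grid cell.
    S = np.abs(librosa.stft(y, n_fft=512, hop_length=64))

    # Reassigned spectrogram: each point's energy is moved to the local
    # centre of gravity of the energy distribution within its analysis
    # window, sharpening both harmonics and transients.
    freqs, times, mags = librosa.reassigned_spectrogram(
        y, sr=sr, n_fft=512, hop_length=64)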


PLOS ONE | 2013

Shared Brain Lateralization Patterns in Language and Acheulean Stone Tool Production: A Functional Transcranial Doppler Ultrasound Study

Natalie Uomini; Georg Meyer

Background: The popular theory that complex tool-making and language co-evolved in the human lineage rests on the hypothesis that both skills share underlying brain processes and systems. However, language and stone tool-making have so far only been studied separately using a range of neuroimaging techniques and diverse paradigms.

Methodology/Principal Findings: We present the first-ever study of brain activation that directly compares active Acheulean tool-making and language. Using functional transcranial Doppler ultrasonography (fTCD), we measured brain blood flow lateralization patterns (hemodynamics) in subjects who performed two tasks designed to isolate the planning component of Acheulean stone tool-making and cued word generation as a language task. We show highly correlated hemodynamics in the initial 10 seconds of task execution.

Conclusions/Significance: Stone tool-making and cued word generation cause common cerebral blood flow lateralization signatures in our participants. This is consistent with a shared neural substrate for prehistoric stone tool-making and language, and is compatible with language evolution theories that posit a co-evolution of language and manual praxis. In turn, our results support the hypothesis that aspects of language might have emerged as early as 1.75 million years ago, with the start of Acheulean technology.


European Journal of Nuclear Medicine and Molecular Imaging | 2001

Comparison of visual and ROI-based brain tumour grading using 18F-FDG PET: ROC analyses

Philipp T. Meyer; Mathias Schreckenberger; Uwe Spetzger; Georg Meyer; Osama Sabri; Keyvan Setani; Thomas Zeggel; U. Buell

Several studies have suggested that the use of simple visual interpretation criteria for the investigation of brain tumours by positron emission tomography with fluorine-18 fluorodeoxyglucose (FDG-PET) might be similarly or even more accurate than quantitative or semi-quantitative approaches. We investigated this hypothesis by comparing the accuracy of FDG-PET brain tumour grading using a proposed six-step visual grading scale (VGS; applied by three independent observers unaware of the clinical history and the results of histopathology) and three different region of interest (ROI) ratios (maximal tumour uptake compared with contralateral tissue [Tu/Tis], grey matter [Tu/GM] and white matter [Tu/WM]). The patient population comprised 47 patients suffering from 17 benign (7 gliomas of grade II, 10 non-gliomatous tumours) and 30 malignant (23 gliomas of grade III–IV, 7 non-gliomatous tumours) tumours. The VGS results were highly correlated with the different ROI ratios (R=0.91 for Tu/GM, R=0.82 for Tu/WM, and R=0.79 for Tu/Tis), and high inter-observer agreement was achieved (κ=0.63, 0.76 and 0.81 for the three observers). The mean ROI ratios and VGS readings of gliomatous and non-gliomatous lesions were not significantly different. For all measures, high-grade lesions showed significantly higher FDG uptake than low-grade lesions (P<0.005 to P<0.0001, depending on the measure used). Nominal logistic regressions and receiver operating characteristic (ROC) analyses were used to calculate cut-off values to differentiate low- from high-grade lesions. The predicted (by ROC) diagnostic sensitivity/specificity of the different tests (cut-off ratios shown in parentheses) were: Tu/GM: 0.87/0.85 (0.7), Tu/WM: 0.93/0.80 (1.3), Tu/Tis: 0.80/0.80 (0.8) and VGS: 0.84/0.95 (uptake < GM, but >> WM). The VGS yielded the highest Az (±SE) value (i.e. area under the ROC curve as a measure of predicted accuracy), 0.97±0.03, which showed a strong tendency towards being significantly greater than the Az of Tu/Tis (0.88±0.06; P=0.06). Tu/GM (0.92±0.04) and Tu/WM (0.91±0.05) reached intermediate Az values (not significantly different from any other value). We conclude that the VGS represents a measure at least as accurate as the Tu/GM and Tu/WM ratios. The Tu/Tis ratio is less valid owing to the high dependence on the location of the lesion. Depending on the investigator's experience and the structure of the lesions, the easily used VGS might be the most favourable grading criterion.
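
A minimal sketch of this kind of ROC analysis, using scikit-learn; the grades and Tu/GM ratios below are invented purely for illustration and are not the study's data.

    import numpy as np
    from sklearn.metrics import roc_curve, auc

    # Illustrative data only: 1 = high-grade lesion, 0 = low-grade lesion,
    # paired with a tumour/grey-matter (Tu/GM) uptake ratio per patient.
    grade = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 1])
    tu_gm = np.array([0.4, 0.5, 0.6, 0.8, 0.7, 0.9, 1.1, 1.3, 1.6, 2.0])

    # ROC curve and area under it (Az) for the Tu/GM ratio as a grading test.
    fpr, tpr, thresholds = roc_curve(grade, tu_gm)
    az = auc(fpr, tpr)

    # Cut-off that maximises sensitivity + specificity (Youden index).
    cutoff = thresholds[np.argmax(tpr - fpr)]
    print(f"Az = {az:.2f}, cut-off Tu/GM = {cutoff:.2f}")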


Journal of Cognitive Neuroscience | 2011

Interactions between auditory and visual semantic stimulus classes: Evidence for common processing networks for speech and body actions

Georg Meyer; Mark W. Greenlee; Sophie M. Wuerger

Incongruencies between auditory and visual signals negatively affect human performance and cause selective activation in neuroimaging studies; therefore, they are increasingly used to probe audiovisual integration mechanisms. An open question is whether the increased BOLD response reflects computational demands in integrating mismatching low-level signals or reflects simultaneous unimodal conceptual representations of the competing signals. To address this question, we explore the effect of semantic congruency within and across three signal categories (speech, body actions, and unfamiliar patterns) for signals with matched low-level statistics. In a localizer experiment, unimodal (auditory and visual) and bimodal stimuli were used to identify ROIs. All three semantic categories cause overlapping activation patterns. We find no evidence for areas that show greater BOLD response to bimodal stimuli than predicted by the sum of the two unimodal responses. Conjunction analysis of the unimodal responses in each category identifies a network including posterior temporal, inferior frontal, and premotor areas. Semantic congruency effects are measured in the main experiment. We find that incongruent combinations of two meaningful stimuli (speech and body actions) but not combinations of meaningful with meaningless stimuli lead to increased BOLD response in the posterior STS (pSTS) bilaterally, the left SMA, the inferior frontal gyrus, the inferior parietal lobule, and the anterior insula. These interactions are not seen in premotor areas. Our findings are consistent with the hypothesis that pSTS and frontal areas form a recognition network that combines sensory categorical representations (in pSTS) with action hypothesis generation in inferior frontal gyrus/premotor areas. We argue that the same neural networks process speech and body actions.


Information Fusion | 2004

Continuous audio–visual digit recognition using N-best decision fusion

Georg Meyer; Jeffrey B. Mulligan; Sophie M. Wuerger

Audio–visual speech recognition systems can be categorised into systems that integrate audio–visual features before decisions are made (feature fusion) and those that integrate decisions of separate recognisers for each modality (decision fusion). Decision fusion has been applied at the level of individual analysis time frames, phone segments and for isolated word recognition but in its basic form cannot be used for continuous speech recognition because of the combinatorial explosion of possible word string hypotheses that have to be evaluated. We present a case for decision fusion at the utterance level and propose an algorithm that can be applied efficiently to continuous speech recognition tasks, which we call N-best decision fusion. The system was tested on a single-speaker, continuous digit recognition task where the audio stream was contaminated by additive multi-speaker babble noise. The audio–visual recognition system resulted in lower word error rates for all signal-to-noise conditions tested compared to the audio-alone system. The magnitude of the improvement was dependent on the signal-to-noise ratio.
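
A minimal sketch of utterance-level decision fusion over two N-best lists, under simple assumptions (a fixed linear weighting of per-modality log-likelihoods and a back-off score for hypotheses one modality did not propose); the paper's exact scoring rule may differ.

    def nbest_decision_fusion(audio_nbest, visual_nbest, audio_weight=0.7):
        """Fuse two N-best lists (word string -> log-likelihood) at the
        utterance level and return the highest-scoring word string."""
        # Hypotheses proposed by either recogniser.
        hypotheses = set(audio_nbest) | set(visual_nbest)
        # Back-off score for a hypothesis missing from one modality's list.
        floor = min(list(audio_nbest.values()) + list(visual_nbest.values())) - 10.0

        def fused_score(h):
            a = audio_nbest.get(h, floor)
            v = visual_nbest.get(h, floor)
            return audio_weight * a + (1.0 - audio_weight) * v

        return max(hypotheses, key=fused_score)

    # Toy example: the visual stream disambiguates the digit string in noise.
    audio = {"one two three": -110.0, "one two five": -108.0}
    visual = {"one two three": -95.0, "one two five": -120.0}
    print(nbest_decision_fusion(audio, visual))   # -> "one two three"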


Frontiers in Integrative Neuroscience | 2013

Combined diffusion-weighted and functional magnetic resonance imaging reveals a temporal-occipital network involved in auditory-visual object processing

Anton L. Beer; Tina Plank; Georg Meyer; Mark W. Greenlee

Functional magnetic resonance imaging (MRI) showed that the superior temporal and occipital cortex are involved in multisensory integration. Probabilistic fiber tracking based on diffusion-weighted MRI suggests that multisensory processing is supported by white matter connections between auditory cortex and the temporal and occipital lobe. Here, we present a combined functional MRI and probabilistic fiber tracking study that reveals multisensory processing mechanisms that remained undetected by either technique alone. Ten healthy participants passively observed visually presented lip or body movements, heard speech or body action sounds, or were exposed to a combination of both. Bimodal stimulation engaged a temporal-occipital brain network including the multisensory superior temporal sulcus (msSTS), the lateral superior temporal gyrus (lSTG), and the extrastriate body area (EBA). A region-of-interest (ROI) analysis showed multisensory interactions (e.g., subadditive responses to bimodal compared to unimodal stimuli) in the msSTS, the lSTG, and the EBA region. Moreover, sounds elicited responses in the medial occipital cortex. Probabilistic tracking revealed white matter tracts between the auditory cortex and the medial occipital cortex, the inferior occipital cortex (IOC), and the superior temporal sulcus (STS). However, STS terminations of auditory cortex tracts showed limited overlap with the msSTS region. Instead, msSTS was connected to primary sensory regions via intermediate nodes in the temporal and occipital cortex. Similarly, the lSTG and EBA regions showed limited direct white matter connections but instead were connected via intermediate nodes. Our results suggest that multisensory processing in the STS is mediated by separate brain areas that form a distinct network in the lateral temporal and inferior occipital cortex.


Neuropsychologia | 2013

The time course of auditory–visual processing of speech and body actions: Evidence for the simultaneous activation of an extended neural network for semantic processing

Georg Meyer; Neil Harrison; Sophie M. Wuerger

An extensive network of cortical areas is involved in multisensory object and action recognition. This network draws on inferior frontal, posterior temporal, and parietal areas; activity is modulated by familiarity and the semantic congruency of auditory and visual component signals even if semantic incongruences are created by combining visual and auditory signals representing very different signal categories, such as speech and whole body actions. Here we present results from a high-density ERP study designed to examine the time-course and source location of responses to semantically congruent and incongruent audiovisual speech and body actions to explore whether the network involved in action recognition consists of a hierarchy of sequentially activated processing modules or a network of simultaneously active processing sites. We report two main results: (1) There are no significant early differences in the processing of congruent and incongruent audiovisual action sequences. The earliest difference between congruent and incongruent audiovisual stimuli occurs between 240 and 280 ms after stimulus onset in the left temporal region. Between 340 and 420 ms, semantic congruence modulates responses in central and right frontal areas. Late differences (after 460 ms) occur bilaterally in frontal areas. (2) Source localisation (dipole modelling and LORETA) reveals that an extended network encompassing inferior frontal, temporal, parasagittal, and superior parietal sites is simultaneously active between 180 and 420 ms to process auditory–visual action sequences. Early activation (before 120 ms) can be explained by activity in mainly sensory cortices. The simultaneous activation of an extended network between 180 and 420 ms is consistent with models that posit parallel processing of complex action sequences in frontal, temporal and parietal areas rather than models that postulate hierarchical processing in a sequence of brain regions.

Collaboration


Dive into Georg Meyer's collaboration.

Top Co-Authors

Mark White
University of Liverpool

Neil Harrison
Liverpool Hope University