Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jochem W. Rieger is active.

Publication


Featured research published by Jochem W. Rieger.


Nature Neuroscience | 2010

Categorical speech representation in human superior temporal gyrus

Edward F. Chang; Jochem W. Rieger; Keith Johnson; Mitchel S. Berger; Nicholas M. Barbaro; Robert T. Knight

Speech perception requires the rapid and effortless extraction of meaningful phonetic information from a highly variable acoustic signal. A powerful example of this phenomenon is categorical speech perception, in which a continuum of acoustically varying sounds is transformed into perceptually distinct phoneme categories. We found that the neural representation of speech sounds is categorically organized in the human posterior superior temporal gyrus. Using intracranial high-density cortical surface arrays, we found that listening to synthesized speech stimuli varying in small and acoustically equal steps evoked distinct and invariant cortical population response patterns that were organized by their sensitivities to critical acoustic features. Phonetic category boundaries were similar between neurometric and psychometric functions. Although speech-sound responses were distributed, spatially discrete cortical loci were found to underlie specific phonetic discrimination. Our results provide direct evidence for acoustic-to-higher-order phonetic-level encoding of speech sounds in human language receptive cortex.


The Journal of Neuroscience | 2007

Audiovisual Temporal Correspondence Modulates Human Multisensory Superior Temporal Sulcus Plus Primary Sensory Cortices

Toemme Noesselt; Jochem W. Rieger; Mircea Ariel Schoenfeld; Martin Kanowski; Hermann Hinrichs; Hans-Jochen Heinze; Jon Driver

The brain should integrate related but not unrelated information from different senses. Temporal patterning of inputs to different modalities may provide critical information about whether those inputs are related or not. We studied effects of temporal correspondence between auditory and visual streams on human brain activity with functional magnetic resonance imaging (fMRI). Streams of visual flashes with irregularly jittered, arrhythmic timing could appear on right or left, with or without a stream of auditory tones that coincided perfectly when present (highly unlikely by chance), were noncoincident with vision (different erratic, arrhythmic pattern with same temporal statistics), or an auditory stream appeared alone. fMRI revealed blood oxygenation level-dependent (BOLD) increases in multisensory superior temporal sulcus (mSTS), contralateral to a visual stream when coincident with an auditory stream, and BOLD decreases for noncoincidence relative to unisensory baselines. Contralateral primary visual cortex and auditory cortex were also affected by audiovisual temporal correspondence or noncorrespondence, as confirmed in individuals. Connectivity analyses indicated enhanced influence from mSTS on primary sensory areas, rather than vice versa, during audiovisual correspondence. Temporal correspondence between auditory and visual streams affects a network of both multisensory (mSTS) and sensory-specific areas in humans, including even primary visual and auditory cortex, with stronger responses for corresponding and thus related audiovisual inputs.


Current Biology | 2000

Sensory and cognitive contributions of color to the recognition of natural scenes

Karl R. Gegenfurtner; Jochem W. Rieger

Although color plays a prominent part in our subjective experience of the visual world, the evolutionary advantage of color vision is still unclear [1] [2], with most current answers pointing towards specialized uses, for example to detect ripe fruit amongst foliage [3] [4] [5] [6]. We investigated whether color has a more general role in visual recognition by looking at the contribution of color to the encoding and retrieval processes involved in pattern recognition [7] [8] [9]. Recognition accuracy was higher for color images of natural scenes than for luminance-matched black and white images, and color information contributed to both components of the recognition process. Initially, color leads to an image-coding advantage at the very early stages of sensory processing, most probably by easing the image-segmentation task. Later, color leads to an advantage in retrieval, presumably as the result of an enhanced image representation in memory due to the additional attribute. Our results ascribe color vision a general role in the processing of visual form, starting at the very earliest stages of analysis: color helps us to recognize things faster and to remember them better.


The Journal of Neuroscience | 2006

The Neural Site of Attention Matches the Spatial Scale of Perception

Jens-Max Hopf; Steven J. Luck; Kai Boelmans; Mircea Ariel Schoenfeld; Carsten N. Boehler; Jochem W. Rieger; Hans-Jochen Heinze

What is the neural locus of visual attention? Here we show that the locus is not fixed but instead changes rapidly to match the spatial scale of task-relevant information in the current scene. To accomplish this, we obtained electrical, magnetic, and hemodynamic measures of attention from human subjects while they detected large-scale or small-scale targets within multiscale stimulus patterns. Subjects did not know the scale of the target before stimulus onset, and yet the neural locus of attention-related activity between 250 and 300 ms varied according to the scale of the target. Specifically, maximal attention-related activity spread from a high-level, relatively anterior visual area (the lateral occipital complex) for large-scale targets to include a lower-level, more posterior area (visual area V4) for small-scale targets. This rapid change indicates that the neural locus of attention in visual cortex is not static but is instead determined rapidly and dynamically by means of an interaction between top-down task information and local information about the current visual input.


Frontiers in Neuroinformatics | 2009

PyMVPA: A Unifying Approach to the Analysis of Neuroscientific Data

Michael Hanke; Yaroslav O. Halchenko; Per B. Sederberg; Ingo Fründ; Jochem W. Rieger; Christoph Herrmann; James V. Haxby; Stephen José Hanson; Stefan Pollmann

The Python programming language is steadily increasing in popularity as the language of choice for scientific computing. The ability of this scripting environment to access a huge code base in various languages, combined with its syntactical simplicity, make it the ideal tool for implementing and sharing ideas among scientists from numerous fields and with heterogeneous methodological backgrounds. The recent rise of reciprocal interest between the machine learning (ML) and neuroscience communities is an example of the desire for an inter-disciplinary transfer of computational methods that can benefit from a Python-based framework. For many years, a large fraction of both research communities have addressed, almost independently, very high-dimensional problems with almost completely non-overlapping methods. However, a number of recently published studies that applied ML methods to neuroscience research questions attracted a lot of attention from researchers from both fields, as well as the general public, and showed that this approach can provide novel and fruitful insights into the functioning of the brain. In this article we show how PyMVPA, a specialized Python framework for machine learning based data analysis, can help to facilitate this inter-disciplinary technology transfer by providing a single interface to a wide array of machine learning libraries and neural data-processing methods. We demonstrate the general applicability and power of PyMVPA via analyses of a number of neural data modalities, including fMRI, EEG, MEG, and extracellular recordings.
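
In practice, the "single interface" the paper describes boils down to wrapping data in a dataset object and handing it, together with a classifier, to a cross-validation object. The following is a minimal sketch of such an analysis in the PyMVPA 2 style; the file names ('bold.nii.gz', 'attrs.txt', 'mask.nii.gz') are placeholder assumptions, not materials from the paper.

```python
# Minimal cross-validated fMRI decoding sketch in the PyMVPA 2 style.
# All file names below are placeholders, not data from the paper.
import numpy as np
from mvpa2.suite import (CrossValidation, LinearCSVMC, NFoldPartitioner,
                         SampleAttributes, fmri_dataset)

# Per-volume stimulus labels and run (chunk) assignments.
attrs = SampleAttributes('attrs.txt')

# One sample per fMRI volume, restricted to a brain mask.
ds = fmri_dataset(samples='bold.nii.gz',
                  targets=attrs.targets,
                  chunks=attrs.chunks,
                  mask='mask.nii.gz')

# Linear SVM, cross-validated across runs; score each fold by accuracy.
clf = LinearCSVMC()
cv = CrossValidation(clf, NFoldPartitioner(),
                     errorfx=lambda p, t: np.mean(p == t))

accuracies = cv(ds)  # one accuracy value per held-out run
print(np.mean(accuracies))
```

Because the dataset abstraction is shared, the paper's point is that the same cross-validation object can be reused largely unchanged once EEG, MEG, or extracellular data are wrapped in a dataset.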


Frontiers in Neuroengineering | 2014

Decoding spectrotemporal features of overt and covert speech from the human cortex

Stéphanie Martin; Peter Brunner; Chris Holdgraf; Hans-Jochen Heinze; Nathan E. Crone; Jochem W. Rieger; Robert T. Knight; Brian N. Pasley

Auditory perception and auditory imagery have been shown to activate overlapping brain regions. We hypothesized that these phenomena also share a common underlying neural representation. To assess this, we used intracranial electrocorticography (ECoG) recordings from epileptic patients performing an out-loud or a silent reading task. In these tasks, short stories scrolled across a video screen in two conditions: subjects read the same stories both aloud (overt) and silently (covert). In a control condition the subject remained in a resting state. We first built a high gamma (70–150 Hz) neural decoding model to reconstruct spectrotemporal auditory features of self-generated overt speech. We then evaluated whether this same model could reconstruct auditory speech features in the covert speech condition. Two speech models were tested: a spectrogram and a modulation-based feature space. For the overt condition, reconstruction accuracy was evaluated as the correlation between original and predicted speech features, and was significant in each subject (p < 10⁻⁵; paired two-sample t-test). For the covert speech condition, dynamic time warping was first used to realign the covert speech reconstruction with the corresponding original speech from the overt condition. Reconstruction accuracy was then evaluated as the correlation between original and reconstructed speech features. Covert reconstruction accuracy was compared to the accuracy obtained from reconstructions in the baseline control condition. Reconstruction accuracy for the covert condition was significantly better than for the control condition (p < 0.005; paired two-sample t-test). The superior temporal gyrus and pre- and post-central gyri provided the highest reconstruction information. The relationship between overt and covert speech reconstruction depended on anatomy. These results provide evidence that auditory representations of covert speech can be reconstructed from models that are built from an overt speech data set, supporting a partially shared neural substrate.
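
At its core, the reconstruction step described above is a regression from (lagged) neural features to spectrogram bins, evaluated by correlating predicted with original features. The sketch below illustrates that general technique with ridge regression on synthetic arrays; it is an illustration under stated assumptions (array shapes, lag count, scikit-learn's Ridge), not the authors' actual decoding model.

```python
# Illustrative spectrogram reconstruction from high-gamma power.
# Synthetic data and ridge regression; not the paper's exact model.
import numpy as np
from sklearn.linear_model import Ridge

def lag_features(X, n_lags):
    """Stack time-lagged copies of X (time x channels) column-wise."""
    T, C = X.shape
    lagged = np.zeros((T, C * n_lags))
    for lag in range(n_lags):
        lagged[lag:, lag * C:(lag + 1) * C] = X[:T - lag]
    return lagged

rng = np.random.default_rng(0)
hg = rng.standard_normal((2000, 64))    # high-gamma power: time x electrodes
spec = rng.standard_normal((2000, 32))  # target spectrogram: time x freq bins

X = lag_features(hg, n_lags=10)
train, test = slice(0, 1500), slice(1500, 2000)

model = Ridge(alpha=1.0).fit(X[train], spec[train])
pred = model.predict(X[test])

# Reconstruction accuracy: correlation per frequency bin, then averaged.
r = [np.corrcoef(pred[:, f], spec[test, f])[0, 1] for f in range(spec.shape[1])]
print(np.mean(r))
```

For the covert condition, the paper additionally applies dynamic time warping before computing these correlations, since covert reconstructions are not time-locked to the overt recordings.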


NeuroImage | 2012

Single trial discrimination of individual finger movements on one hand: a combined MEG and EEG study.

F. Quandt; Christoph Reichert; Hermann Hinrichs; Hans-Jochen Heinze; Robert T. Knight; Jochem W. Rieger

It is crucial to understand what brain signals can be decoded from single trials with different recording techniques for the development of Brain-Machine Interfaces. A specific challenge for non-invasive recording methods is activations confined to small spatial areas on the cortex, such as the finger representation of one hand. Here we study the information content of single-trial brain activity in non-invasive MEG and EEG recordings elicited by finger movements of one hand. We investigate the feasibility of decoding which of four fingers of one hand performed a slight button press. With MEG we demonstrate reliable discrimination of single button presses performed with the thumb, the index, the middle, or the little finger (average over all subjects and fingers 57%, best subject 70%, empirical guessing level: 25.1%). EEG decoding performance was less robust (average over all subjects and fingers 43%, best subject 54%, empirical guessing level 25.1%). Spatiotemporal patterns of amplitude variations in the time series provided the best information for discriminating finger movements. Non-phase-locked changes of mu and beta oscillations were less predictive. Movement-related high gamma oscillations were observed in average induced oscillation amplitudes in the MEG but did not provide sufficient information about the finger's identity in single trials. Importantly, pre-movement neuronal activity provided information about the preparation of the movement of a specific finger. Our study demonstrates the potential of non-invasive MEG to provide informative features for individual finger control in a Brain-Machine Interface neuroprosthesis.
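
Decoding analyses of this kind typically flatten each trial's spatiotemporal amplitude pattern into a feature vector, train a multiclass classifier, and compare cross-validated accuracy against the empirical guessing level. A minimal sketch of such a pipeline on synthetic epochs follows; the shrinkage-LDA classifier, the data dimensions, and the use of scikit-learn are assumptions made here, not the authors' pipeline.

```python
# Illustrative single-trial decoding of four finger movements from
# MEG/EEG-like epochs (synthetic data; not the paper's pipeline).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 32, 50
epochs = rng.standard_normal((n_trials, n_channels, n_times))
labels = rng.integers(0, 4, n_trials)  # thumb, index, middle, little

# Spatiotemporal amplitude features: flatten each epoch to one vector.
X = epochs.reshape(n_trials, -1)

# Shrinkage LDA copes with many more features than trials.
clf = make_pipeline(StandardScaler(),
                    LinearDiscriminantAnalysis(solver='lsqr', shrinkage='auto'))

scores = cross_val_score(clf, X, labels, cv=5)
print(scores.mean())  # compare against the ~25% empirical guessing level
```

On real recordings the interesting quantity, as in the paper, is how far such accuracies exceed an empirical chance level estimated, for example, by label permutation.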


Epilepsy & Behavior | 2012

Proceedings of the Third International Workshop on Advances in Electrocorticography

Anthony L. Ritaccio; Michael S. Beauchamp; Conrado A. Bosman; Peter Brunner; Edward F. Chang; Nathan E. Crone; Aysegul Gunduz; Disha Gupta; Robert T. Knight; Eric C. Leuthardt; Brian Litt; Daniel W. Moran; Jeffrey G. Ojemann; Josef Parvizi; Nick F. Ramsey; Jochem W. Rieger; Jonathan Viventi; Bradley Voytek; Justin C. Williams

The Third International Workshop on Advances in Electrocorticography (ECoG) was convened in Washington, DC, on November 10-11, 2011. As in prior meetings, a true multidisciplinary fusion of clinicians, scientists, and engineers from many disciplines gathered to summarize contemporary experiences in brain surface recordings. The proceedings of this meeting serve as evidence of a very robust and transformative field but will yet again require revision to incorporate the advances that the following year will surely bring.


Perception | 1997

Interpolation Processes in the Perception of Real and Illusory Contours

Karl R. Gegenfurtner; Joel E. Brown; Jochem W. Rieger

The spatial and temporal characteristics of mechanisms that bridge gaps between line segments were determined. The presentation time that was necessary for localisation and identification of a triangular shape made up of pacmen, pacmen with lines, lines, line segments (corners), or pacmen with circles (amodal completion) was measured. The triangle was embedded in a field of distractors made up of the same components but at random orientations. Subjects had to indicate whether the triangle was on the left or on the right of the display (localisation) and whether it was pointing upward or downward (identification). Poststimulus masks consisted of pinwheels for the pacmen stimuli or wheels defined by lines. Stimuli were presented on a grey background and defined by luminance or isoluminant contrast. Thresholds were fastest when the triangle was defined by real contours, as for the pacmen with lines (105 ms) and the lines only (92 ms), slightly slower for corners (118 ms) and pacmen (136 ms), and much slower for the amodally completed pacmen (285 ms). For all inducer types localisation was about 20 ms faster than identification. In a second experiment the relative length of the gap between inducers was varied. Thresholds increased as a function of gap length, indicating that the gaps between the inducers need to be interpolated. There was no significant difference in the speed of this interpolation process between the pacman stimuli and the line-segment stimuli. About 40 ms were required to interpolate 1 deg of visual angle, corresponding to about one third of the distance between inducers. In a third experiment, it was found that processing of isoluminant stimuli was as fast as for low-contrast luminance stimuli, when targets were defined by real contours (lines), but much slower for illusory contours (pacmen). The conclusion is that the time necessary to interpolate a contour depends greatly on the spatial configuration of the stimulus. Since interpolation is faster for the line-segment stimuli, which do not elicit the percept of an illusory contour, the interpolation process seems to be independent of the formation of illusory contours.


Journal of Vision | 2005

The dynamics of visual pattern masking in natural scene processing: A magnetoencephalography study

Jochem W. Rieger; Christoph Braun; Heinrich H. Bülthoff; Karl R. Gegenfurtner

We investigated the dynamics of natural scene processing and mechanisms of pattern masking in a scene-recognition task. Psychophysical recognition performance and the magnetoencephalogram (MEG) were recorded simultaneously. Photographs of natural scenes were briefly displayed and in the masked condition immediately followed by a pattern mask. Viewing the scenes without masking elicited a transient occipital activation that started approximately 70 ms after the pattern onset, peaked at 110 ms, and ended after 170 ms. When a mask followed the target an additional transient could be reliably identified in the MEG traces. We assessed psychophysical performance levels at different latencies of this transient. Recognition rates were reduced only when the additional activation produced by the pattern mask overlapped with the initial 170 ms of occipital activation from the target. Our results are commensurate with an early cortical locus of pattern masking and indicate that 90 ms of undistorted cortical processing is necessary to reliably recognize a scene. Our data also indicate that as little as 20 ms of undistorted processing is sufficient for above-chance discrimination of a scene from a distracter.

Collaboration


Dive into Jochem W. Rieger's collaborations.

Top Co-Authors

Hans-Jochen Heinze, Otto-von-Guericke University Magdeburg
Hermann Hinrichs, Otto-von-Guericke University Magdeburg
Anirudh Unni, University of Oldenburg
Christoph Reichert, Otto-von-Guericke University Magdeburg
Klas Ihme, German Aerospace Center
Mircea Ariel Schoenfeld, Otto-von-Guericke University Magdeburg
Rudolf Kruse, Otto-von-Guericke University Magdeburg
Cristiano Micheli, Radboud University Nijmegen