Publication


Featured research published by Christian A. Kell.


Nature Neuroscience | 2002

Rhythmic gene expression in pituitary depends on heterologous sensitization by the neurohormone melatonin

Charlotte von Gall; Martine L. Garabette; Christian A. Kell; Sascha Frenzel; Faramarz Dehghani; Petra-Maria Schumm-Draeger; David R. Weaver; Horst-Werner Korf; Michael H. Hastings; Jörg H. Stehle

In mammals, many daily cycles are driven by a central circadian clock, which is based on the cell-autonomous rhythmic expression of clock genes. It is not clear, however, how peripheral cells are able to interpret the rhythmic signals disseminated from this central oscillator. Here we show that cycling expression of the clock gene Period1 in rodent pituitary cells depends on the heterologous sensitization of the adenosine A2b receptor, which occurs through the nocturnal activation of melatonin mt1 receptors. Eliminating the impact of the neurohormone melatonin simultaneously suppresses the expression of Period1 and evokes an increase in the release of pituitary prolactin. Our findings reveal a mechanism by which two convergent signals interact temporally to establish high-amplitude, precise, and robust cycles of gene expression.


Proceedings of the National Academy of Sciences of the United States of America | 2008

Spontaneous local variations in ongoing neural activity bias perceptual decisions

Guido Hesselmann; Christian A. Kell; Evelyn Eger; Andreas Kleinschmidt

Neural variability in responding to identical repeated stimuli has been related to trial-by-trial fluctuations in ongoing activity, yet the neural and perceptual consequences of these fluctuations remain poorly understood. Using functional neuroimaging, we recorded brain activity in subjects who reported perceptual decisions on an ambiguous figure, Rubin's vase-faces picture, which was briefly presented at variable intervals of ≥20 s. Prestimulus activity in the fusiform face area, a cortical region preferentially responding to faces, was higher when subjects subsequently perceived faces instead of the vase. This finding suggests that endogenous variations in prestimulus neuronal activity biased subsequent perceptual inference. We then went on to show that pre- and poststimulus activity interact in a nonlinear way, furnishing evidence that evoked sensory responses and the ensuing perceptual decisions depend upon the prestimulus context in which they occur.


The Journal of Neuroscience | 2009

Dual Neural Routing of Visual Facilitation in Speech Processing

Luc H. Arnal; Benjamin Morillon; Christian A. Kell; Anne-Lise Giraud

Viewing our interlocutor facilitates speech perception, unlike, for instance, when we speak on the telephone. Several neural routes and mechanisms could account for this phenomenon. Using magnetoencephalography, we show that when the interlocutor is visible, latencies of auditory responses (M100) shorten in proportion to how predictable speech is from the visual input, whether or not the auditory signal is congruent with it. Incongruence of auditory and visual input affected auditory responses ∼20 ms after latency shortening was detected, indicating that initial content-dependent auditory facilitation by vision is followed by a feedback signal that reflects the error between expected and received auditory input (prediction error). We then used functional magnetic resonance imaging and confirmed that distinct routes of visual information to auditory processing underlie these two functional mechanisms. Functional connectivity between visual motion and auditory areas depended on the degree of visual predictability, whereas connectivity between the superior temporal sulcus and both auditory and visual motion areas was driven by audiovisual (AV) incongruence. These results establish two distinct mechanisms by which the brain uses potentially predictive visual information to improve auditory perception. A fast direct corticocortical pathway conveys visual motion parameters to auditory cortex, and a slower and indirect feedback pathway signals the error between visual prediction and auditory input.


The Journal of Neuroscience | 2008

Ongoing Activity Fluctuations in hMT+ Bias the Perception of Coherent Visual Motion

Guido Hesselmann; Christian A. Kell; Andreas Kleinschmidt

We have recently shown that intrinsic fluctuations of ongoing activity during baseline have an impact on perceptual decisions reported for an ambiguous visual stimulus (Hesselmann et al., 2008). To test whether this result generalizes from the visual object domain to other perceptual and neural systems, the current study investigated the effect of ongoing signal fluctuations in motion-sensitive brain regions on the perception of coherent visual motion. We determined motion coherence thresholds individually for each subject using a dynamic random dot display. During functional magnetic resonance imaging (fMRI), brief events of subliminal, supraliminal, and periliminal coherent motion were presented with long and variable interstimulus intervals between them. On each trial, subjects reported whether they had perceived “coherent” or “random” motion, and fMRI signal time courses were analyzed separately as a function of stimulus and percept type. In the right motion-sensitive occipito-temporal cortex (hMT+), coherent percepts of periliminal stimuli yielded a larger stimulus-evoked response than random percepts. Prestimulus baseline activity in this region was also significantly higher in these coherent trials than in random trials. As in our previous study, however, the relation between ongoing and evoked activity was not additive but interacted with perceptual outcome. Our data thus suggest that endogenous fluctuations in baseline activity have a generic effect on subsequent perceptual decisions. Although mainstream analytical techniques used in functional neuroimaging do not capture this nonadditive effect of baseline on evoked response, it is in accord with postulates from theoretical frameworks such as predictive coding.


Proceedings of the National Academy of Sciences of the United States of America | 2008

Simulation of talking faces in the human brain improves auditory speech recognition

Katharina von Kriegstein; Özgür Dogan; Martina Grüter; Anne-Lise Giraud; Christian A. Kell; Thomas Grüter; Andreas Kleinschmidt; Stefan J. Kiebel

Human face-to-face communication is essentially audiovisual. Typically, people talk to us face-to-face, providing concurrent auditory and visual input. Understanding someone is easier when there is visual input, because visual cues like mouth and tongue movements provide complementary information about speech content. Here, we hypothesized that, even in the absence of visual input, the brain optimizes both auditory-only speech and speaker recognition by harvesting speaker-specific predictions and constraints from distinct visual face-processing areas. To test this hypothesis, we performed behavioral and neuroimaging experiments in two groups: subjects with a face recognition deficit (prosopagnosia) and matched controls. The results show that observing a specific person talking for 2 min improves subsequent auditory-only speech and speaker recognition for this person. In both prosopagnosics and controls, behavioral improvement in auditory-only speech recognition was based on an area typically involved in face-movement processing. Improvement in speaker recognition was only present in controls and was based on an area involved in face-identity processing. These findings challenge current unisensory models of speech processing, because they show that, in auditory-only speech, the brain exploits previously encoded audiovisual correlations to optimize communication. We suggest that this optimization is based on speaker-specific audiovisual internal models, which are used to simulate a talking face.


The Journal of Neuroscience | 2005

The sensory cortical representation of the human penis: revisiting somatotopy in the male homunculus

Christian A. Kell; Katharina von Kriegstein; Alexander Rösler; Andreas Kleinschmidt; Helmut Laufs

Pioneering mapping studies of the human cortex have established the notion of somatotopy in sensory representation, which culminated in Penfield and Rasmussen's famous sensory homunculus diagram. However, regarding the primary cortical representation of the genitals, classical and modern findings appear to be at odds with the principle of somatotopy, often assigning it to the cortex on the mesial wall. Using functional neuroimaging, we established a mediolateral sequence of somatosensory foot, penis, and lower abdominal wall representations on the contralateral postcentral gyrus in primary sensory cortex, and a bilateral secondary somatosensory representation in the parietal operculum.


Cerebral Cortex | 2011

Lateralization of Speech Production Starts in Sensory Cortices—A Possible Sensory Origin of Cerebral Left Dominance for Speech

Christian A. Kell; Benjamin Morillon; Frédérique Kouneiher; Anne-Lise Giraud

Speech production is a left-lateralized brain function, which could arise from left dominance in speech executive processes, in sensory processes, or in both. Using functional magnetic resonance imaging in healthy subjects, we show that sensory cortices already lateralize when speaking is merely intended, whereas the frontal cortex lateralizes only when speech is acted out. This sequence of lateralization, first temporal and then frontal, suggests that functional lateralization of the auditory cortex could drive hemispheric specialization for speech production.


Headache | 2004

Water‐Deprivation Headache: A New Headache With Two Variants

Joseph N. Blau; Christian A. Kell; Julia M. Sperling

Objective.—To describe a new type of headache induced by water deprivation.


The Journal of Neuroscience | 2013

Affective and Sensorimotor Components of Emotional Prosody Generation

Swann Pichon; Christian A. Kell

Although advances have been made regarding how the brain perceives emotional prosody, the neural bases involved in the generation of affective prosody remain unclear and debated. Two models have been forged on the basis of clinical observations: a first model proposes that the right hemisphere sustains production and comprehension of emotional prosody, while a second model proposes that emotional prosody relies heavily on the basal ganglia. Here, we tested their predictions in two functional magnetic resonance imaging experiments that used a cue-target paradigm, which allows distinguishing affective from sensorimotor aspects of emotional prosody generation. Both experiments show that when participants prepare for emotional prosody, the bilateral ventral striatum is specifically activated and connected to the temporal poles and anterior insula, regions in which lesions frequently cause dysprosody. The bilateral dorsal striatum is more sensitive to cognitive and motor aspects of emotional prosody preparation and production and is more strongly connected to the sensorimotor speech network compared with the ventral striatum. Right lateralization during increased prosodic processing is confined to the posterior superior temporal sulcus, a region previously associated with perception of emotional prosody. Our data thus provide physiological evidence supporting both models and suggest that the bilateral basal ganglia are involved in modulating motor behavior as a function of affective state. Right lateralization of cortical regions mobilized for prosody control could point to efficient processing of slowly changing acoustic speech parameters in the ventral stream and thus identify sensorimotor processing as an important factor contributing to right lateralization of prosody.


NeuroImage: Clinical | 2014

Pathomechanisms and compensatory efforts related to Parkinsonian speech

Christiane Arnold; Johannes Gehrig; Suzana Gispert; Carola Seifried; Christian A. Kell

Voice and speech in Parkinson's disease (PD) patients are classically affected by hypophonia, dysprosody, and dysarthria. The underlying pathomechanisms of these disabling symptoms are not well understood. To identify functional anomalies related to pathophysiology and compensation, we compared speech-related brain activity and effective connectivity in early PD patients who had not yet developed voice or speech symptoms with that of matched controls. During fMRI, 20 PD patients ON and OFF levodopa and 20 control participants read 75 sentences covertly, overtly with neutral intonation, or overtly with happy intonation. A cue-target reading paradigm allowed task preparation to be dissociated from execution. We found pathologically reduced striato-prefrontal preparatory effective connectivity in early PD patients, associated with subcortical (OFF state) or cortical (ON state) compensatory networks. While speaking, PD patients showed signs of diminished monitoring of external auditory feedback. During generation of affective prosody, reduced functional coupling between the ventral and dorsal striatum was observed. Our results suggest three pathomechanisms affecting speech in PD: diminished energization due to striato-prefrontal hypo-connectivity, together with dysfunctional self-monitoring mechanisms, could underlie hypophonia; dysarthria may result from fading speech motor representations that are not sufficiently updated by external auditory feedback; and a pathological interplay between the limbic and sensorimotor striatum could interfere with affective modulation of speech routines, affecting emotional prosody generation. However, early PD patients show compensatory mechanisms that could help improve future speech therapies.

Collaboration


Dive into Christian A. Kell's collaborations.

Top Co-Authors

Anne-Lise Giraud, Goethe University Frankfurt
Johannes Gehrig, Goethe University Frankfurt
Christiane Arnold, Goethe University Frankfurt
Jörg H. Stehle, Goethe University Frankfurt
Jochen Kaiser, Goethe University Frankfurt
Julia Restle, Goethe University Frankfurt
Katrin Neumann, Goethe University Frankfurt
Marion Behrens, Goethe University Frankfurt