J Schultz
Max Planck Society
Publications
Featured research published by J Schultz.
Neuron | 2005
J Schultz; K. J. Friston; John P. O’Doherty; Daniel M. Wolpert; Chris Frith
An essential, evolutionarily stable feature of brain function is the detection of animate entities, and one of the main cues to identify them is their movement. We developed a model of a simple interaction between two objects, in which the correlation between their movements controlled the amount of interactivity and animacy observers attributed to them. Functional magnetic resonance imaging revealed that activation in the posterior superior temporal sulcus and gyrus (pSTS/pSTG) increased with the degree of correlated motion between the two objects. This activation increase did not differ between explicit and implicit tasks performed while observing the interacting objects. These data suggest that the pSTS and pSTG play a role in the automatic identification of animate entities by responding directly to an objective movement characteristic that induces the percept of animacy, such as the amount of interactivity between two moving objects.
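A minimal sketch of such a stimulus, assuming a hypothetical parameterization in which a mixing weight c sets the velocity correlation between the two dots (the published model may differ):

    import numpy as np

    def correlated_motion(n_steps=500, c=0.5, noise=1.0, seed=0):
        """Two 2D dot trajectories whose velocity correlation is set by c in [0, 1].

        c = 0 gives independent random walks; c = 1 makes dot B copy dot A's
        velocity. Hypothetical parameterization for illustration only.
        """
        rng = np.random.default_rng(seed)
        v_a = rng.normal(0, noise, size=(n_steps, 2))    # velocities of dot A
        v_ind = rng.normal(0, noise, size=(n_steps, 2))  # independent component for dot B
        v_b = c * v_a + (1 - c) * v_ind                  # mixing weight c sets the correlation
        pos_a = np.cumsum(v_a, axis=0)                   # integrate velocities into positions
        pos_b = np.cumsum(v_b, axis=0) + np.array([20.0, 0.0])  # offset so the dots stay distinct
        return pos_a, pos_b

Varying c while holding the speed statistics roughly constant would give the graded changes in perceived interactivity that the design relies on.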
Journal of Cognitive Neuroscience | 2004
J Schultz; Hiroshi Imamizu; Mitsuo Kawato; Chris Frith
Previous functional imaging experiments in humans showed activation increases in the posterior superior temporal gyrus and sulcus during observation of geometrical shapes whose movements appear intentional or goal-directed. We modeled a chase scenario between two objects, in which the chasing object used different strategies to reach the target object: The chaser either followed the target's path or appeared to predict its end position. Activation in the superior temporal gyrus of human observers was greater when the chaser adopted a predict rather than a follow strategy. Attending to the chaser's strategy induced slightly greater activation in the left superior temporal gyrus than attending to the outcome of the chase. These data implicate the superior temporal gyrus in the identification of objects displaying complex goal-directed motion.
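A sketch of the two chasing strategies, assuming a constant-speed chaser and linear extrapolation for the predict strategy (illustrative, not the authors' stimulus code):

    import numpy as np

    def chase(target_path, strategy="follow", speed=1.5, lookahead=20):
        """Move a chaser toward a target along a precomputed (n, 2) target path.

        'follow' heads toward the target's current position; 'predict' heads
        toward a position extrapolated from the target's current velocity.
        """
        chaser = np.zeros(2)
        trace = [chaser.copy()]
        for t in range(1, len(target_path)):
            if strategy == "follow":
                goal = target_path[t]                         # aim at where the target is now
            else:
                velocity = target_path[t] - target_path[t - 1]
                goal = target_path[t] + lookahead * velocity  # aim at where it is heading
            step = goal - chaser
            norm = np.linalg.norm(step)
            if norm > 0:
                chaser = chaser + speed * step / norm         # constant-speed pursuit
            trace.append(chaser.copy())
        return np.array(trace)

The same target path can thus be chased under both strategies, keeping everything except the chaser's apparent foresight constant.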
Experimental Brain Research | 2009
J Schultz; K. S. Pilz
The ability to perceive facial motion is important to successfully interact in social environments. Previously, imaging studies have investigated neural correlates of facial motion primarily using abstract motion stimuli. Here, we studied how the brain processes natural non-rigid facial motion in direct comparison to static stimuli and matched phase-scrambled controls. As predicted from previous studies, dynamic faces elicit higher responses than static faces in lateral temporal areas corresponding to hMT+/V5 and STS. Interestingly, individually defined, static-face-sensitive regions in bilateral fusiform gyrus and left inferior occipital gyrus also respond more to dynamic than static faces. These results suggest integration of form and motion information during the processing of dynamic faces even in ventral temporal and inferior lateral occipital areas. In addition, our results show that dynamic stimuli are a robust tool to localize areas related to the processing of static and dynamic face information.
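Phase-scrambled controls of this kind are typically built by randomizing the Fourier phase of each frame while keeping its amplitude spectrum, which preserves low-level image statistics; a minimal sketch, assuming grayscale frames as 2D arrays (generic technique, not the authors' exact code):

    import numpy as np

    def phase_scramble(image, seed=0):
        """Randomize the Fourier phase of an image, keeping its amplitude spectrum.

        Preserves contrast and spatial-frequency content while destroying
        recognizable structure.
        """
        rng = np.random.default_rng(seed)
        spectrum = np.fft.fft2(image)
        # The phase of a real noise image is conjugate-symmetric, so the output stays real.
        random_phase = np.angle(np.fft.fft2(rng.normal(size=image.shape)))
        scrambled = np.fft.ifft2(np.abs(spectrum) * np.exp(1j * (np.angle(spectrum) + random_phase)))
        return scrambled.real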
Cerebral Cortex | 2013
J Schultz; Matthias Brockhaus; H. H. Bülthoff; K. S. Pilz
Facial motion carries essential information about other people's emotions and intentions. Most previous studies have suggested that facial motion is mainly processed in the superior temporal sulcus (STS), but several recent studies have also shown involvement of ventral temporal face-sensitive regions. It is not yet known whether the increased response to facial motion is due to an increased amount of static information in the stimulus, to the deformation of the face over time, or to increased attentional demands. We presented nonrigidly moving faces and control stimuli to participants performing a demanding task unrelated to the face stimuli. We manipulated the amount of static information by using movies with different frame rates. The fluidity of the motion was manipulated by presenting movies with frames either in the order in which they were recorded or in scrambled order. Results confirm higher activation for moving compared with static faces in STS and, under certain conditions, in ventral temporal face-sensitive regions. Activation was maximal at a frame rate of 12.5 Hz and smaller for scrambled movies. These results indicate that both the amount of static information and the fluid facial motion per se are important factors for the processing of dynamic faces.
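Both manipulations reduce to index operations on an ordered frame stack; a sketch, assuming frames stored as a NumPy array of shape (n_frames, height, width):

    import numpy as np

    def make_conditions(frames, step=2, seed=0):
        """Derive the two manipulations from an ordered stack of movie frames.

        Keeping every step-th frame lowers the effective frame rate (less static
        information per second); permuting the frame order keeps the frames but
        destroys the fluidity of the motion. Illustrative sketch only.
        """
        rng = np.random.default_rng(seed)
        subsampled = frames[::step]                         # e.g. 25 Hz -> 12.5 Hz for step=2
        scrambled = frames[rng.permutation(len(frames))]    # same content, scrambled order
        return subsampled, scrambled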
Frontiers in Human Neuroscience | 2014
Janina Esins; J Schultz; Christian Wallraven; I Bülthoff
Congenital prosopagnosia (CP), an innate impairment in recognizing faces, as well as the other-race effect (ORE), a disadvantage in recognizing faces of foreign races, both affect face recognition abilities. Are the same face processing mechanisms affected in both situations? To investigate this question, we tested three groups of 21 participants: German congenital prosopagnosics, South Korean participants, and German controls, on three different tasks involving faces and objects. First, we tested all participants on the Cambridge Face Memory Test, in which they had to recognize Caucasian target faces in a 3-alternative forced-choice task. German controls performed better than Koreans, who performed better than prosopagnosics. In the second experiment, participants rated the similarity of Caucasian faces that differed parametrically in either features or second-order relations (configuration). Prosopagnosics were less sensitive to configuration changes than both other groups. In addition, while all groups were more sensitive to changes in features than in configuration, this difference was smaller in Koreans. In the third experiment, participants had to learn exemplars of artificial objects, natural objects, and faces and recognize them among distractors of the same category. Here, prosopagnosics performed worse than participants in the other two groups only when tested on face stimuli. In sum, Koreans and prosopagnosic participants differed from German controls in different ways in all tests. This suggests that German congenital prosopagnosics perceive Caucasian faces differently than do Korean participants. Importantly, our results suggest that different processing impairments underlie the ORE and CP.
NeuroImage | 2012
H. B. Helbig; Marc O. Ernst; Emiliano Ricciardi; Pietro Pietrini; Axel Thielscher; Katja M. Mayer; J Schultz; Uta Noppeney
Behaviourally, humans have been shown to integrate multisensory information in a statistically optimal fashion by averaging the individual unisensory estimates according to their relative reliabilities. This form of integration is optimal in that it yields the most reliable (i.e., least variable) multisensory percept. The present study investigates the neural mechanisms underlying integration of visual and tactile shape information at the macroscopic scale of the regional BOLD response. Observers discriminated the shapes of ellipses that were presented bimodally (visual-tactile) or visually alone. A 2 × 5 factorial design manipulated (i) the presence vs. absence of tactile shape information and (ii) the reliability of the visual shape information (five levels). We then investigated whether regional activations underlying tactile shape discrimination depended on the reliability of visual shape. Indeed, in primary somatosensory cortices (bilateral BA2) and the superior parietal lobe, the responses to tactile shape input were increased when the reliability of visual shape information was reduced. Conversely, tactile inputs suppressed visual activations in the right posterior fusiform gyrus when the visual signal was blurred and unreliable. Somatosensory and visual cortices may sustain integration of visual and tactile shape information either via direct connections from visual areas or top-down effects from higher order parietal areas.
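The reliability-weighted averaging referred to here is the standard maximum-likelihood cue-combination rule; a minimal sketch of the computation (textbook model, not the study's analysis code):

    def integrate_cues(visual_est, visual_var, tactile_est, tactile_var):
        """Fuse two unisensory estimates by weighting each with its reliability.

        Reliability is the inverse variance; the fused percept is less variable
        than either cue alone.
        """
        w_visual = (1 / visual_var) / (1 / visual_var + 1 / tactile_var)
        fused = w_visual * visual_est + (1 - w_visual) * tactile_est
        fused_var = (visual_var * tactile_var) / (visual_var + tactile_var)
        return fused, fused_var

    # Blurring the visual cue (raising its variance) shifts weight toward touch:
    print(integrate_cues(1.0, 0.01, 1.2, 0.04))  # vision reliable -> fused near 1.0
    print(integrate_cues(1.0, 0.04, 1.2, 0.01))  # vision blurred  -> fused near 1.2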
Journal of Vision | 2013
J Schultz; H. H. Bülthoff
Identifying moving things in the environment is a priority for animals, as these could be prey, predators, or mates. When the shape of a moving object is hard to see, motion becomes an important cue to distinguish animate from inanimate things. We report a new stimulus in which a single moving dot evokes a reasonably strong percept of animacy by mimicking the motion of naturally occurring stimuli, with minimal context information. Stimulus movements are controlled by an equation such that changes in a single movement parameter lead to gradual changes in animacy judgments with minimal changes in low-level stimulus properties. An infinite number of stimuli can be created between the animate and inanimate extremes. A series of experiments confirm the strength of the percept and show that observers tend to follow the stimulus with their eye gaze. However, eye movements are not necessary for perceptual judgments, as forced fixation on the display center only slightly reduces the amplitude of percept changes. Withdrawing attentional resources from the animacy judgment using a simultaneous secondary task further reduces percept amplitudes without abolishing them. This stimulus could open new avenues for the principled study of animacy judgments based on object motion only.
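The abstract does not reproduce the movement equation, but the idea can be sketched with a hypothetical parameterization in which a single parameter alpha shifts the dot from smooth, inertia-like drift toward self-propelled heading changes:

    import numpy as np

    def dot_trajectory(n_steps=600, alpha=0.5, speed=2.0, seed=0):
        """Single-dot motion whose character is governed by one parameter.

        alpha = 0 yields straight, inertia-like motion (tends to look inanimate);
        alpha = 1 yields frequent self-generated heading changes (tends to look
        animate). Hypothetical equation; the published stimulus defines its own.
        """
        rng = np.random.default_rng(seed)
        heading = 0.0
        pos = np.zeros((n_steps, 2))
        for t in range(1, n_steps):
            heading += alpha * rng.normal(0.0, 0.6)  # alpha scales heading variability
            pos[t] = pos[t - 1] + speed * np.array([np.cos(heading), np.sin(heading)])
        return pos

Intermediate alpha values then populate the continuum between the inanimate and animate extremes.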
The Journal of Neuroscience | 2015
Junsuk Kim; J Schultz; Tim Rohe; Christian Wallraven; Seong Whan Lee; H. H. Bülthoff
Emotions can be aroused by various kinds of stimulus modalities. Recent neuroimaging studies indicate that several brain regions represent emotions at an abstract level, i.e., independently of the sensory cues from which they are perceived (e.g., face, body, or voice stimuli). If emotions are indeed represented at such an abstract level, then these abstract representations should also be activated by the memory of an emotional event. We tested this hypothesis by asking human participants to learn associations between emotional stimuli (videos of faces or bodies) and non-emotional stimuli (fractals). After successful learning, fMRI signals were recorded during the presentations of emotional stimuli and emotion-associated fractals. We tested whether emotions could be decoded from fMRI signals evoked by the fractal stimuli using a classifier trained on the responses to the emotional stimuli (and vice versa). This was implemented as a whole-brain searchlight multivoxel pattern analysis, which revealed successful emotion decoding in four brain regions: posterior cingulate cortex (PCC), precuneus, medial prefrontal cortex (MPFC), and angular gyrus. The same analysis run only on responses to emotional stimuli revealed clusters in PCC, precuneus, and MPFC. Multidimensional scaling analysis of the activation patterns revealed clear clustering of responses by emotion across stimulus types. Our results suggest that PCC, precuneus, and MPFC contain representations of emotions that can be evoked by stimuli that carry emotional information themselves or by stimuli that evoke memories of emotional stimuli, while angular gyrus is more likely to take part in emotional memory retrieval.
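The cross-decoding logic can be sketched with synthetic data standing in for the voxel patterns of one searchlight sphere (a scikit-learn classifier is assumed; this is not the study's pipeline):

    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    n_trials, n_voxels = 40, 50                 # one searchlight sphere's worth of data
    labels = np.tile([0, 1], n_trials // 2)     # e.g. two emotion categories

    # Synthetic trial-by-voxel response patterns for the two stimulus types.
    emotion_patterns = rng.normal(size=(n_trials, n_voxels))
    fractal_patterns = rng.normal(size=(n_trials, n_voxels))

    # Train on responses to emotional stimuli, test on the associated fractals,
    # and vice versa; above-chance accuracy implies a shared emotion code.
    to_fractals = LinearSVC().fit(emotion_patterns, labels).score(fractal_patterns, labels)
    to_emotions = LinearSVC().fit(fractal_patterns, labels).score(emotion_patterns, labels)
    print(to_fractals, to_emotions)             # near chance (0.5) for random data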
Frontiers in Human Neuroscience | 2016
Stephan de la Rosa; Frieder L. Schillinger; H. H. Bülthoff; J Schultz; Kamil Uludag
Mirror neurons (MNs) are considered to be the supporting neural mechanism for action understanding. MNs have been identified in monkey area F5. The identification of MNs in the human homolog of monkey area F5, Brodmann area 44/45 (BA 44/45), has proven methodologically difficult. Cross-modal functional MRI (fMRI) adaptation studies supporting the existence of MNs restricted their analysis to a priori candidate regions, whereas studies that failed to find evidence used non-object-directed actions (NDAs). We tackled these limitations by using object-directed actions (ODAs) differing only in terms of their object-directedness, in combination with a cross-modal adaptation paradigm and a whole-brain analysis. Additionally, we tested voxels' blood oxygenation level-dependent (BOLD) response patterns for several properties previously reported as typical MN response properties. Our results revealed 52 voxels in the left inferior frontal gyrus (IFG; particularly BA 44/45) which respond to both motor and visual stimulation and exhibit cross-modal adaptation between the execution and observation of the same action. These results demonstrate that part of the human IFG, specifically BA 44/45, has BOLD response characteristics very similar to those of monkey area F5.
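The adaptation criterion can be sketched as a per-voxel comparison: responses to an observed action should be suppressed when the same action was just executed. A hypothetical helper, assuming trial-by-voxel BOLD amplitude arrays (illustrative, not the study's analysis):

    import numpy as np
    from scipy import stats

    def crossmodal_adaptation_voxels(same_trials, diff_trials, alpha=0.05):
        """Flag voxels whose response to an observed action is suppressed when
        the same action was just executed (cross-modal adaptation).

        Inputs are (n_trials, n_voxels) BOLD amplitudes for 'same action' and
        'different action' trials.
        """
        t, p = stats.ttest_ind(diff_trials, same_trials, axis=0)
        return (t > 0) & (p < alpha)  # adaptation predicts different > same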
Journal of Autism and Developmental Disorders | 2014
Nicole David; J Schultz; Elizabeth Milne; Odette Schunke; Daniel Schöttle; Alexander Münchau; Markus Siegel; Kai Vogeley; Andreas K. Engel
Individuals with an autism spectrum disorder (ASD) show hallmark deficits in social perception. These difficulties might also reflect fundamental deficits in integrating visual signals. We contrasted predictions of a social perception deficit account and a spatial–temporal integration deficit account. Participants with ASD and matched controls performed two tasks: the first required spatiotemporal integration of global motion signals without social meaning, the second required processing of socially relevant local motion. The ASD group differed from controls only in the evaluation of social motion. In addition, gray matter volume in the temporal–parietal junction correlated positively with accuracy in social motion perception in the ASD group. Our findings suggest that social–perceptual difficulties in ASD cannot be reduced to deficits in spatial–temporal integration.