
Publications


Featured research published by Jeremy I. Skipper.


NeuroImage | 2005

Listening to talking faces: motor cortical activation during speech perception

Jeremy I. Skipper; Howard C. Nusbaum; Steven L. Small

Neurophysiological research suggests that understanding the actions of others harnesses neural circuits that would be used to produce those actions directly. We used fMRI to examine brain areas active during language comprehension in which the speaker was seen and heard while talking (audiovisual) or heard but not seen (audio-alone) or when the speaker was seen talking with the audio track removed (video-alone). We found that audiovisual speech perception activated a network of brain regions that included cortical motor areas involved in planning and executing speech production and areas subserving proprioception related to speech production. These regions included the posterior part of the superior temporal gyrus and sulcus, the pars opercularis, premotor cortex, adjacent primary motor cortex, somatosensory cortex, and the cerebellum. Activity in premotor cortex and posterior superior temporal gyrus and sulcus was modulated by the number of visually distinguishable phonemes in the stories. None of these regions was activated to the same extent in the audio- or video-alone conditions. These results suggest that integrating observed facial movements into the speech perception process involves a network of multimodal brain regions associated with speech production and that these areas contribute less to speech perception when only auditory signals are present. This distributed network could participate in recognition processing by interpreting visual information about mouth movements as phonetic information based on motor commands that could have generated those movements.


Human Brain Mapping | 2009

Co‐speech gestures influence neural activity in brain regions associated with processing semantic information

Anthony Steven Dick; Susan Goldin-Meadow; Uri Hasson; Jeremy I. Skipper; Steven L. Small

Everyday communication is accompanied by visual information from several sources, including co‐speech gestures, which provide semantic information listeners use to help disambiguate the speaker's message. Using fMRI, we examined how gestures influence neural activity in brain regions associated with processing semantic information. The BOLD response was recorded while participants listened to stories under three audiovisual conditions and one auditory‐only (speech alone) condition. In the first audiovisual condition, the storyteller produced gestures that naturally accompany speech. In the second, the storyteller made semantically unrelated hand movements. In the third, the storyteller kept her hands still. In addition to inferior parietal and posterior superior and middle temporal regions, bilateral posterior superior temporal sulcus and left anterior inferior frontal gyrus responded more strongly to speech when it was further accompanied by gesture, regardless of the semantic relation to speech. However, the right inferior frontal gyrus was sensitive to the semantic import of the hand movements, demonstrating more activity when hand movements were semantically unrelated to the accompanying speech. These findings show that perceiving hand movements during speech modulates the distributed pattern of neural activation involved in both biological motion perception and discourse comprehension, suggesting listeners attempt to find meaning, not only in the words speakers produce, but also in the hand movements that accompany speech.


Current Biology | 2009

Gestures Orchestrate Brain Networks for Language Understanding

Jeremy I. Skipper; Susan Goldin-Meadow; Howard C. Nusbaum; Steven L. Small

Although the linguistic structure of speech provides valuable communicative information, nonverbal behaviors can offer additional, often disambiguating cues. In particular, being able to see the face and hand movements of a speaker facilitates language comprehension [1]. But how does the brain derive meaningful information from these movements? Mouth movements provide information about phonological aspects of speech [2-3]. In contrast, cospeech gestures display semantic information relevant to the intended message [4-6]. We show that when language comprehension is accompanied by observable face movements, there is strong functional connectivity between areas of cortex involved in motor planning and production and posterior areas thought to mediate phonological aspects of speech perception. In contrast, language comprehension accompanied by cospeech gestures is associated with tuning of and strong functional connectivity between motor planning and production areas and anterior areas thought to mediate semantic aspects of language comprehension. These areas are not tuned to hand and arm movements that are not meaningful. Results suggest that when gestures accompany speech, the motor system works with language comprehension areas to determine the meaning of those gestures. Results also suggest that the cortical networks underlying language comprehension, rather than being fixed, are dynamically organized by the type of contextual information available to listeners during face-to-face communication.


Action to Language via the Mirror Neuron System (pp. 250-286) | 2006

Lending a helping hand to hearing: Another motor theory of speech perception

Jeremy I. Skipper; Howard C. Nusbaum; Steven L. Small

“… any comprehensive account of how speech is perceived should encompass audiovisual speech perception. The ability to see as well as hear has to be integral to the design, not merely a retro-fitted after-thought.” (Summerfield, 1987)

The “lack of invariance problem” and multisensory speech perception: In speech there is a many-to-many mapping between acoustic patterns and phonetic categories. That is, similar acoustic properties can be assigned to different phonetic categories, or quite distinct acoustic properties can be assigned to the same linguistic category. Attempting to solve this “lack of invariance problem” has framed much of the theoretical debate in speech research over the years. Indeed, most theories may be characterized by how they deal with this “problem.” Nonetheless, there is little evidence for even a single invariant acoustic property that uniquely identifies phonetic features and that is used by listeners (though see Blumstein and Stevens, 1981; Stevens and Blumstein, 1981). Phonetic constancy can be achieved in spite of this lack of invariance by viewing speech perception as an active process (Nusbaum and Magnuson, 1997). Active processing models like the one described here derive from Helmholtz, who described visual perception as a process of “unconscious inference” (see Hatfield, 2002). That is, visual perception is the result of forming and testing hypotheses about the inherently ambiguous information available to the retina.


NeuroImage | 2008

Improving the Analysis, Storage and Sharing of Neuroimaging Data using Relational Databases and Distributed Computing

Uri Hasson; Jeremy I. Skipper; Michael Wilde; Howard C. Nusbaum; Steven L. Small

The increasingly complex research questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, to allow collaborative research, and to achieve these aims securely and with minimum management overhead. Here we present an approach that overcomes many current limitations in data analysis and data sharing. This approach is based on open source database management systems that support complex data queries as an integral part of data analysis, flexible data sharing, and parallel and distributed data processing using cluster computing and Grid computing resources. We assess the strengths of these approaches as compared to current frameworks based on storage of binary or text files. We then describe in detail the implementation of such a system and provide a concrete description of how it was used to enable a complex analysis of fMRI time series data.
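The abstract above describes an approach rather than a specific schema, so the following is only a minimal, hypothetical sketch of the general idea it argues for: storing fMRI time series in a relational database so that queries become an integral part of the analysis. SQLite stands in for whatever DBMS the authors used, and the table and column names are invented for illustration.

```python
# Illustrative sketch only: table/column names ("timeseries", "roi", "bold")
# are hypothetical, and SQLite stands in for the database system used by the
# authors. The point is simply that a query can itself be an analysis step.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE timeseries (
           subject  TEXT,    -- subject identifier
           roi      TEXT,    -- region-of-interest label
           volume   INTEGER, -- acquisition index (TR number)
           bold     REAL     -- preprocessed BOLD value
       )"""
)

# A few fake rows standing in for preprocessed ROI-averaged signal.
rows = [("sub01", "STG", t, 100.0 + 0.5 * t) for t in range(5)] + \
       [("sub01", "IFG", t, 98.0 + 0.3 * t) for t in range(5)]
conn.executemany("INSERT INTO timeseries VALUES (?, ?, ?, ?)", rows)

# Per-ROI mean BOLD for one subject, computed inside the database.
for roi, mean_bold in conn.execute(
    """SELECT roi, AVG(bold)
       FROM timeseries
       WHERE subject = 'sub01'
       GROUP BY roi"""
):
    print(roi, round(mean_bold, 2))
```

In a real deployment the same queries could be issued by many collaborators against a shared server, and independent subsets of the data could be farmed out to cluster or Grid workers, which is the sharing and distributed-computing argument the paper makes.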


Brain and Language | 2017

The hearing ear is always found close to the speaking tongue: Review of the role of the motor system in speech perception

Jeremy I. Skipper; Joseph T. Devlin; Daniel R. Lametti

HIGHLIGHTS: The role of the motor system in speech perception is reviewed. Distributed speech production regions/networks ubiquitously participate in perception. These regions/networks are specific to production and vary dynamically with context. The data are consistent with sensorimotor/complex network models of speech perception. Existing models of the organization of language and the brain fail to explain the results.

ABSTRACT: Does "the motor system" play "a role" in speech perception? If so, where, how, and when? We conducted a systematic review that addresses these questions using both qualitative and quantitative methods. The qualitative review of behavioural, computational modelling, non-human animal, brain damage/disorder, electrical stimulation/recording, and neuroimaging research suggests that distributed brain regions involved in producing speech play specific, dynamic, and contextually determined roles in speech perception. The quantitative review employed region- and network-based neuroimaging meta-analyses and a novel text mining method to describe the relative contributions of nodes in distributed brain networks. Supporting the qualitative review, results show a specific functional correspondence between regions involved in non-linguistic movement of the articulators, covertly and overtly producing speech, and the perception of both nonword and word sounds. This distributed set of cortical and subcortical speech production regions is ubiquitously active and forms multiple networks whose topologies dynamically change with listening context. Results are inconsistent with motor-only and acoustic-only models of speech perception and with classical and contemporary dual-stream models of the organization of language and the brain. Instead, results are more consistent with complex network models in which multiple speech-production-related networks and subnetworks dynamically self-organize to constrain interpretation of indeterminate acoustic patterns as listening context requires.
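As a rough illustration of the kind of network-level description the quantitative review refers to, the sketch below ranks nodes of a small, invented co-activation network by weighted degree under two hypothetical listening contexts. The region labels, edge weights, and the choice of node strength as the "contribution" measure are assumptions for illustration, not the paper's method.

```python
# Toy sketch: how the relative contribution of nodes in a co-activation
# network can shift with listening context. All numbers are invented.
import numpy as np

regions = ["M1", "PMC", "IFG", "STG", "SMG"]

# Hypothetical symmetric co-activation (edge weight) matrices, one per context.
quiet = np.array([
    [0.0, 0.6, 0.2, 0.5, 0.1],
    [0.6, 0.0, 0.4, 0.3, 0.2],
    [0.2, 0.4, 0.0, 0.2, 0.1],
    [0.5, 0.3, 0.2, 0.0, 0.4],
    [0.1, 0.2, 0.1, 0.4, 0.0],
])
noisy = np.array([
    [0.0, 0.8, 0.6, 0.7, 0.3],
    [0.8, 0.0, 0.7, 0.4, 0.3],
    [0.6, 0.7, 0.0, 0.3, 0.2],
    [0.7, 0.4, 0.3, 0.0, 0.5],
    [0.3, 0.3, 0.2, 0.5, 0.0],
])

def node_strength(adjacency):
    """Weighted degree (strength) of each node: one crude index of its
    relative contribution to the network."""
    return adjacency.sum(axis=1)

for name, adj in [("quiet", quiet), ("noisy", noisy)]:
    ranked = sorted(zip(regions, node_strength(adj)), key=lambda x: -x[1])
    print(name, [(r, round(s, 2)) for r, s in ranked])
```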


The Journal of Neuroscience | 2010

Domain General Change Detection Accounts for “Dishabituation” Effects in Temporal–Parietal Regions in Functional Magnetic Resonance Imaging Studies of Speech Perception

Jason D. Zevin; Jianfeng Yang; Jeremy I. Skipper; Bruce D. McCandliss

Functional magnetic resonance imaging (fMRI) studies of speech sound categorization often compare conditions in which a stimulus is presented repeatedly to conditions in which multiple stimuli are presented. This approach has established that a set of superior temporal and inferior parietal regions respond more strongly to conditions containing stimulus change. Here, we examine whether this contrast is driven by habituation to a repeating condition or by selective responding to change. Experiment 1 directly tests this by comparing the observed response to long trains of stimuli against a constructed hemodynamic response modeling the hypothesis that no habituation occurs. The results are consistent with the view that enhanced responses to conditions involving phonemic variability reflect change detection. In a second experiment, the specificity of these responses to linguistically relevant stimulus variability was studied by including a condition in which the talker, rather than phonemic category, was variable from stimulus to stimulus. In this context, strong change detection responses were observed to changes in talker, but not to changes in phoneme category. The results prompt a reconsideration of two assumptions common to fMRI studies of speech sound categorization: they suggest that temporoparietal responses in passive paradigms such as those used here are better characterized as reflecting change detection than habituation, and that their apparent selectivity to speech sound categories may reflect a more general preference for variability in highly salient or behaviorally relevant stimulus dimensions.
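The Experiment 1 comparison rests on a predicted time course built under the assumption that every stimulus in a long train evokes the same response. The sketch below shows that general construction, convolving a train of identical events with a canonical HRF; the event timings, HRF parameters, and sampling rate are illustrative assumptions, not the published design.

```python
# Hedged sketch of a "no habituation" reference model: convolve a long train
# of identical stimulus events with a canonical double-gamma HRF. Parameter
# values and timings are illustrative only.
import math
import numpy as np

TR = 1.0                      # sampling interval in seconds (assumed)
n_scans = 120

def canonical_hrf(t, peak=6.0, undershoot=16.0, ratio=1.0 / 6.0):
    """SPM-style double-gamma hemodynamic response function."""
    pos = t ** (peak - 1) * np.exp(-t) / math.gamma(peak)
    neg = t ** (undershoot - 1) * np.exp(-t) / math.gamma(undershoot)
    return pos - ratio * neg

# Stimulus train: one event every 2 s for the first 40 s, all with identical
# amplitude, which is exactly the no-habituation hypothesis.
stimulus = np.zeros(n_scans)
stimulus[np.arange(0, 40, 2)] = 1.0

hrf = canonical_hrf(np.arange(0, 32, TR))
predicted = np.convolve(stimulus, hrf)[:n_scans]

# The observed response would then be compared against `predicted`: a
# systematically smaller observed response late in the train would indicate
# habituation; equivalence would favour change detection.
print(predicted.round(3)[:10])
```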


Quarterly Journal of Experimental Psychology | 2011

When less is heard than meets the ear: Change deafness in a telephone conversation

Kimberly M. Fenn; Hadas Shintel; Alexandra S. Atkins; Jeremy I. Skipper; Veronica C. Bond; Howard C. Nusbaum

During a conversation, we hear the sound of the talker as well as the intended message. Traditional models of speech perception posit that acoustic details of a talker's voice are not encoded with the message, whereas more recent models propose that talker identity is automatically encoded. When shadowing speech, listeners often fail to detect a change in talker identity. The present study was designed to investigate whether talker changes would be detected when listeners are actively engaged in a normal conversation, and visual information about the speaker is absent. Participants were called on the phone, and during the conversation the experimenter was surreptitiously replaced by another talker. Participants rarely noticed the change. However, when explicitly monitoring for a change, detection increased. Voice memory tests suggested that participants remembered only coarse information about both voices, rather than fine details. This suggests that although listeners are capable of change detection, voice information is not continuously monitored at a fine-grain level of acoustic representation during natural conversation and is not automatically encoded. Conversational expectations may shape the way we direct attention to voice characteristics and perceive differences in voice.


Philosophical Transactions of the Royal Society B | 2014

Echoes of the spoken past: how auditory cortex hears context during speech perception

Jeremy I. Skipper

What do we hear when someone speaks and what does auditory cortex (AC) do with that sound? Given how meaningful speech is, it might be hypothesized that AC is most active when other people talk so that their productions get decoded. Here, neuroimaging meta-analyses show the opposite: AC is least active and sometimes deactivated when participants listened to meaningful speech compared to less meaningful sounds. Results are explained by an active hypothesis-and-test mechanism where speech production (SP) regions are neurally re-used to predict auditory objects associated with available context. By this model, more AC activity for less meaningful sounds occurs because predictions are less successful from context, requiring further hypotheses be tested. This also explains the large overlap of AC co-activity for less meaningful sounds with meta-analyses of SP. An experiment showed a similar pattern of results for non-verbal context. Specifically, words produced less activity in AC and SP regions when preceded by co-speech gestures that visually described those words compared to those words without gestures. Results collectively suggest that what we ‘hear’ during real-world speech perception may come more from the brain than our ears and that the function of AC is to confirm or deny internal predictions about the identity of sounds.
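The hypothesis-and-test account above can be caricatured in a few lines: treat AC "activity" as the residual error left once context-based predictions are subtracted from the incoming signal, so that richer context leaves less to explain. The toy sketch below does only that; the quantities and the proportional-prediction rule are invented for illustration and do not model real BOLD data.

```python
# Toy caricature of the hypothesis-and-test account: auditory cortex
# "activity" as prediction error after context explains part of the signal.
# All values and the prediction rule are made up for illustration.
import numpy as np

rng = np.random.default_rng(1)
sound = rng.normal(0, 1, 100)            # stand-in for an acoustic pattern

def ac_activity(signal, context_strength):
    """Residual error once context predicts a proportion of the signal.
    context_strength in [0, 1]: 1 = rich context, 0 = no usable context."""
    prediction = context_strength * signal
    return float(np.mean(np.abs(signal - prediction)))

for label, strength in [("speech preceded by descriptive gesture", 0.8),
                        ("meaningful speech alone", 0.6),
                        ("less meaningful sound", 0.2)]:
    print(label, round(ac_activity(sound, strength), 3))
```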


bioRxiv | 2017

A Core Speech Circuit Between Primary Motor, Somatosensory, And Auditory Cortex: Evidence From Connectivity And Genetic Descriptions

Jeremy I. Skipper; Uri Hasson

What adaptations allow humans to produce and perceive speech so effortlessly? We show that speech is supported by a largely undocumented core of structural and functional connectivity between the central sulcus (CS or primary motor and somatosensory cortex) and the transverse temporal gyrus (TTG or primary auditory cortex). Anatomically, we show that CS and TTG cortical thickness covary across individuals and that they are connected by white matter tracts. Neuroimaging network analyses confirm the functional relevance and specificity of these structural relationships. Specifically, the CS and TTG are functionally connected at rest, during natural audiovisual speech perception, and are coactive over a large variety of linguistic stimuli and tasks. Importantly, across structural and functional analyses, connectivity of regions immediately adjacent to the TTG is with premotor and prefrontal regions rather than the CS. Finally, we show that this structural/functional CS-TTG relationship is mediated by a constellation of genes associated with vocal learning and disorders of efference copy. We propose that this core circuit constitutes an interface for rapidly exchanging articulatory and acoustic information and discuss implications for current models of speech.
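For readers unfamiliar with the two kinds of relationship being tested here, the sketch below shows, with simulated numbers only, how across-subject structural covariance of cortical thickness and within-subject functional connectivity each reduce to a correlation. The data-generating values are invented assumptions, not the paper's data or pipeline.

```python
# Minimal sketch of the two relationships described above, on simulated data:
# (1) across-subject covariance of CS and TTG cortical thickness, and
# (2) functional connectivity as the correlation of their time series.
import numpy as np

rng = np.random.default_rng(0)

# (1) Structural covariance: thickness (mm) for N subjects in the two regions,
# generated with a shared component so they covary (illustrative values).
n_subjects = 50
shared = rng.normal(0, 0.15, n_subjects)
cs_thickness = 2.4 + shared + rng.normal(0, 0.1, n_subjects)
ttg_thickness = 2.6 + shared + rng.normal(0, 0.1, n_subjects)
structural_r = np.corrcoef(cs_thickness, ttg_thickness)[0, 1]

# (2) Functional connectivity: correlation of the two regions' resting-state
# time series within a single subject (again simulated with a shared signal).
n_volumes = 200
common = rng.normal(0, 1, n_volumes)
cs_ts = common + rng.normal(0, 1, n_volumes)
ttg_ts = common + rng.normal(0, 1, n_volumes)
functional_r = np.corrcoef(cs_ts, ttg_ts)[0, 1]

print(f"structural covariance r = {structural_r:.2f}")
print(f"functional connectivity r = {functional_r:.2f}")
```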

Collaboration


Dive into Jeremy I. Skipper's collaborations.

Top Co-Authors

Jason D. Zevin
University of Southern California

Anthony Steven Dick
Florida International University

Hia Datta
City University of New York