Publication


Featured research published by Kayoko Okada.


Brain and Language | 2011

Conduction aphasia, sensory-motor integration, and phonological short-term memory - An aggregate analysis of lesion and fMRI data

Bradley R. Buchsbaum; Juliana V. Baldo; Kayoko Okada; Karen Faith Berman; Nina F. Dronkers; Mark D’Esposito; Gregory Hickok

Conduction aphasia is a language disorder characterized by frequent speech errors, impaired verbatim repetition, a deficit in phonological short-term memory, and naming difficulties in the presence of otherwise fluent and grammatical speech output. While traditional models of conduction aphasia have typically implicated white matter pathways, recent advances in lesion reconstruction methodology applied to groups of patients have implicated left temporoparietal zones. Parallel work using functional magnetic resonance imaging (fMRI) has pinpointed a region in the posteriormost portion of the left planum temporale, area Spt, which is critical for phonological working memory. Here we show that the region of maximal lesion overlap in a sample of 14 patients with conduction aphasia perfectly circumscribes area Spt, as defined in an aggregate fMRI analysis of 105 subjects performing a phonological working memory task. We provide a review of the evidence supporting the idea that Spt is an interface site for the integration of sensory and vocal tract-related motor representations of complex sound sequences, such as speech and music, and show how the symptoms of conduction aphasia can be explained by damage to this system.
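
The aggregate lesion analysis described above reduces to a voxel-wise count: stack each patient's binary lesion mask, already registered to a common template space, and count at every voxel how many patients are lesioned there; the peak of that map is the region of maximal overlap. Below is a minimal sketch of that computation in Python with NumPy, using synthetic masks in place of real data (real masks would typically be loaded from NIfTI files, e.g., with nibabel):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for 14 patients' binary lesion masks, assumed
# already registered to a common template space (True = lesioned voxel).
shape = (64, 64, 40)
masks = rng.random((14, *shape)) > 0.95  # sparse random "lesions"

# Voxel-wise overlap map: how many patients are lesioned at each voxel.
overlap = masks.sum(axis=0)

# The region of maximal overlap is where the most patients share damage;
# in the study, this region circumscribed area Spt.
max_n = overlap.max()
print(f"Maximal overlap: {max_n} of {masks.shape[0]} patients")
peak_voxels = np.argwhere(overlap == max_n)  # coordinates of peak voxels
```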


Journal of Neurophysiology | 2009

Area Spt in the Human Planum Temporale Supports Sensory-Motor Integration for Speech Processing

Gregory Hickok; Kayoko Okada; John T. Serences

Processing incoming sensory information and transforming this input into appropriate motor responses is a critical and ongoing aspect of our moment-to-moment interaction with the environment. While the neural mechanisms in the posterior parietal cortex (PPC) that support the transformation of sensory inputs into simple eye or limb movements have received a great deal of empirical attention, in part because these processes are easy to study in nonhuman primates, little work has been done on sensory-motor transformations in the domain of speech. Here we used functional magnetic resonance imaging and multivariate analysis techniques to demonstrate that a region of the planum temporale (Spt) shows distinct spatial activation patterns during sensory and motor aspects of a speech task. This result suggests that just as the PPC supports sensorimotor integration for eye and limb movements, area Spt forms part of a sensory-motor integration circuit for the vocal tract.


Cerebral Cortex | 2010

Hierarchical Organization of Human Auditory Cortex: Evidence from Acoustic Invariance in the Response to Intelligible Speech

Kayoko Okada; Feng Rong; Jon Venezia; William Matchin; I-Hui Hsieh; Kourosh Saberi; John T. Serences; Gregory Hickok

Hierarchical organization of human auditory cortex has been inferred from functional imaging observations that core regions respond to simple stimuli (tones) whereas downstream regions are selectively responsive to more complex stimuli (band-pass noise, speech). It is assumed that core regions code low-level features, which are combined at higher levels in the auditory system to yield more abstract neural codes. However, this hypothesis has not been critically evaluated in the auditory domain. We assessed sensitivity to acoustic variation within intelligible versus unintelligible speech using functional magnetic resonance imaging and a multivariate pattern analysis. Core auditory regions on the dorsal plane of the superior temporal gyrus exhibited high levels of sensitivity to acoustic features, whereas downstream auditory regions in both anterior superior temporal sulcus and posterior superior temporal sulcus (pSTS) bilaterally showed greater sensitivity to whether speech was intelligible or not and less sensitivity to acoustic variation (acoustic invariance). Acoustic invariance was most pronounced in more posterior STS regions of both hemispheres, which we argue support phonological-level representations. This finding provides direct evidence for a hierarchical organization of human auditory cortex and clarifies the cortical pathways supporting the processing of intelligible speech.
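
The logic of the multivariate pattern analysis can be made concrete: within a region of interest, train a classifier to discriminate different acoustic variants of the same intelligible speech; above-chance decoding indicates sensitivity to acoustic features, while chance-level decoding indicates acoustic invariance. Here is a minimal sketch with scikit-learn, using synthetic data in place of real voxel patterns (the region names and effect sizes are illustrative only):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def decode_acoustic_form(patterns, labels):
    """Cross-validated accuracy for classifying acoustic variants
    from multivoxel activity patterns within one region of interest."""
    clf = SVC(kernel="linear")
    return cross_val_score(clf, patterns, labels, cv=5).mean()

# Synthetic stand-in data: 100 trials x 50 voxels per region, with
# labels marking two acoustic variants of the same speech stimuli.
labels = rng.integers(0, 2, size=100)

# A "core"-like region whose patterns track acoustic form...
core = rng.normal(size=(100, 50)) + labels[:, None] * 0.8
# ...and a "pSTS"-like region whose patterns do not (acoustic invariance).
psts = rng.normal(size=(100, 50))

print("core decoding accuracy:", decode_acoustic_form(core, labels))  # well above 0.5
print("pSTS decoding accuracy:", decode_acoustic_form(psts, labels))  # near chance (0.5)
```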


Brain and Language | 2008

Bilateral Capacity for Speech Sound Processing in Auditory Comprehension: Evidence from Wada Procedures

Gregory Hickok; Kayoko Okada; William B. Barr; J. Pa; Corianne Rogalsky; K.M. Donnelly; L. Barde; Arthur C. Grant

Data from lesion studies suggest that the ability to perceive speech sounds, as measured by auditory comprehension tasks, is supported by temporal lobe systems in both the left and right hemisphere. For example, patients with left temporal lobe damage and auditory comprehension deficits (i.e., Wernicke's aphasics) nonetheless comprehend isolated words better than one would expect if their speech perception system had been largely destroyed (70-80% accuracy). Further, when comprehension fails in such patients, their errors are more often semantically based than phonemically based. The question addressed by the present study is whether this ability of the right hemisphere to process speech sounds is a result of plastic reorganization following chronic left hemisphere damage, or whether the ability exists in undamaged language systems. We sought to test these possibilities by studying auditory comprehension in acute left versus right hemisphere deactivation during Wada procedures. A series of 20 patients undergoing clinically indicated Wada procedures were asked to listen to an auditorily presented stimulus word, and then point to its matching picture on a card that contained the target picture, a semantic foil, a phonemic foil, and an unrelated foil. This task was performed under three conditions: baseline, during left carotid injection of sodium amytal, and during right carotid injection of sodium amytal. Overall, left hemisphere injection led to a significantly higher error rate than right hemisphere injection. However, consistent with lesion work, the majority (75%) of these errors were semantic in nature. These findings suggest that auditory comprehension deficits are predominantly semantic in nature, even following acute left hemisphere disruption. This, in turn, supports the hypothesis that the right hemisphere is capable of speech sound processing in the intact brain.


Brain and Language | 2006

Left posterior auditory-related cortices participate both in speech perception and speech production: Neural overlap revealed by fMRI

Kayoko Okada; Gregory Hickok

Recent neuroimaging studies and neuropsychological data suggest that there are regions in posterior auditory cortex that participate both in speech perception and speech production. An outstanding question is whether the same neural regions support both perception and production or whether there exist discrete cortical fields subserving these functions. Previous neurophysiological studies suggest that there is indeed regional overlap between these systems, but those studies used a rehearsal task to assess production. The present study addressed this question in an event-related fMRI experiment in which subjects listened to speech and, in separate trials, performed a covert object naming task. Single subject analysis revealed regions of coactivation for speech perception and production in the left posterior superior temporal sulcus (pSTS), left area Spt (a region in the Sylvian fissure at the parietal-temporal boundary), and left inferior frontal gyrus. These results are consistent with lesion data and previous physiological data indicating that posterior auditory cortex plays a role in both reception and expression of speech. We discuss these findings within the context of a neuroanatomical framework that proposes these neural sites are part of an auditory-motor integration system.


PLOS ONE | 2013

An fMRI Study of Audiovisual Speech Perception Reveals Multisensory Interactions in Auditory Cortex

Kayoko Okada; Jonathan H. Venezia; William Matchin; Kourosh Saberi; Gregory Hickok

Research on the neural basis of speech-reading implicates a network of auditory language regions involving inferior frontal cortex, premotor cortex and sites along superior temporal cortex. In audiovisual speech studies, neural activity is consistently reported in posterior superior temporal sulcus (pSTS) and this site has been implicated in multimodal integration. Traditionally, multisensory interactions are considered high-level processing that engages heteromodal association cortices (such as STS). Recent work, however, challenges this notion and suggests that multisensory interactions may occur in low-level unimodal sensory cortices. While previous audiovisual speech studies demonstrate that high-level multisensory interactions occur in pSTS, what remains unclear is how early in the processing hierarchy these multisensory interactions may occur. The goal of the present fMRI experiment is to investigate how visual speech can influence activity in auditory cortex above and beyond its response to auditory speech. In an audiovisual speech experiment, subjects were presented with auditory speech with and without congruent visual input. Holding the auditory stimulus constant across the experiment, we investigated how the addition of visual speech influences activity in auditory cortex. We demonstrate that congruent visual speech increases the activity in auditory cortex.


Human Brain Mapping | 2012

Comparison of the Neural Correlates of Retrieval Success in Tests of Cued Recall and Recognition Memory

Kayoko Okada; Kaia L. Vilberg; Michael D. Rugg

The neural correlates of successful retrieval on tests of word stem recall and recognition memory were compared. In the recall test, subjects viewed word stems, half of which were associated with studied items and half with unstudied items, and for each stem attempted to recall a corresponding study word. In the recognition test, old/new judgments were made on old and new words. The neural correlates of successful retrieval were identified by contrasting activity elicited by correctly endorsed test items. Old > new effects common to the two tasks were found in medial and lateral parietal and right entorhinal cortex. Common new > old effects were identified in medial and left frontal cortex, and left anterior intraparietal sulcus. Greater old > new effects were evident for cued recall in inferior parietal regions abutting those demonstrating common effects, whereas larger new > old effects were found for recall in left frontal cortex and the anterior cingulate. New > old effects were also found for the recall task in right lateral anterior prefrontal cortex, where they were accompanied by old > new effects during recognition. It is concluded that successful recall and recognition are associated with enhanced activity in a common set of recollection-sensitive parietal regions, and that the greater activation in these regions during recall reflects the greater dependence of that task on recollection. Larger new > old effects during recall are interpreted as reflections of the greater opportunity for iterative retrieval attempts when retrieval cues are partial rather than copy cues.
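
In practice, a retrieval-success effect of this kind reduces, per region, to comparing subjects' activity estimates for correctly endorsed old items against those for correctly endorsed new items. A minimal sketch of that comparison with SciPy, using hypothetical per-subject beta estimates (the numbers below are synthetic, not the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical per-subject mean beta estimates in one parietal ROI
# for correctly endorsed old vs. new items (n = 16 subjects).
beta_old = rng.normal(loc=0.6, scale=0.3, size=16)
beta_new = rng.normal(loc=0.2, scale=0.3, size=16)

# Paired t-test across subjects: positive t => an old > new effect.
t, p = stats.ttest_rel(beta_old, beta_new)
print(f"old > new: t = {t:.2f}, p = {p:.4f}")
```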


Psychonomic Bulletin & Review | 2018

Neural evidence for predictive coding in auditory cortex during speech production

Kayoko Okada; William Matchin; Gregory Hickok

Recent models of speech production suggest that motor commands generate forward predictions of the auditory consequences of those commands, that these forward predictions can be used to monitor and correct speech output, and that this system is hierarchically organized (Hickok, Houde, & Rong, Neuron, 69(3), 407–422, 2011; Pickering & Garrod, Behavioral and Brain Sciences, 36(4), 329–347, 2013). Recent psycholinguistic research has shown that internally generated speech (i.e., imagined speech) produces different types of errors than does overt speech (Oppenheim & Dell, Cognition, 106(1), 528–537, 2008; Oppenheim & Dell, Memory & Cognition, 38(8), 1147–1160, 2010). These studies suggest that articulated speech might involve predictive coding at additional levels relative to imagined speech. The current fMRI experiment investigates neural evidence of predictive coding in speech production. Twenty-four participants from UC Irvine were recruited for the study. Participants were scanned while they were visually presented with a sequence of words that they reproduced in sync with a visual metronome. On each trial, they were cued either to silently articulate the sequence or to imagine the sequence without overt articulation. As expected, silent articulation and imagined speech both engaged a left hemisphere network previously implicated in speech production. A contrast of silent articulation with imagined speech revealed greater activation for articulated speech in inferior frontal cortex, premotor cortex and the insula in the left hemisphere, consistent with greater articulatory load. Although both conditions were silent, this contrast also produced significantly greater activation in auditory cortex in dorsal superior temporal gyrus in both hemispheres. We suggest that these activations reflect forward predictions arising from additional levels of the perceptual/motor hierarchy that are involved in monitoring the intended speech output.


Neuropsychologia | 2016

An fMRI study of perception and action in deaf signers.

Kayoko Okada; Corianne Rogalsky; Lucinda O'Grady; Leila Hanaumi; Ursula Bellugi; David P. Corina; Gregory Hickok

Since the discovery of mirror neurons, there has been a great deal of interest in understanding the relationship between perception and action, and the role of the human mirror system in language comprehension and production. Two questions have dominated research. One concerns the role of Broca's area in speech perception. The other concerns the role of the motor system more broadly in understanding action-related language. The current study investigates both of these questions in a way that bridges research on language with research on manual actions. We studied the neural basis of observing and executing American Sign Language (ASL) object and action signs. In an fMRI experiment, deaf signers produced signs depicting actions and objects as well as observed/comprehended signs of actions and objects. Different patterns of activation were found for observation and execution, although with overlap in Broca's area, providing prima facie support for the claim that the motor system participates in language perception. In contrast, we found no evidence that action-related signs differentially involved the motor system compared to object-related signs. These findings are discussed in the context of lesion studies of sign language execution and observation. In this broader context, we conclude that the activation in Broca's area during ASL observation is not causally related to sign language understanding.


Neuroreport | 2006

Identification of lexical-phonological networks in the superior temporal sulcus using functional magnetic resonance imaging.

Kayoko Okada; Gregory Hickok

Collaboration


Dive into Kayoko Okada's collaborations.

Top Co-Authors

Gregory Hickok, University of California

Kourosh Saberi, University of California

Kevin R. Smith, University of California

Arthur C. Grant, SUNY Downstate Medical Center

Colin Humphries, Medical College of Wisconsin