Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Hiroto Kawasaki is active.

Publication


Featured research published by Hiroto Kawasaki.


Nature Neuroscience | 2001

Single-neuron responses to emotional visual stimuli recorded in human ventral prefrontal cortex

Hiroto Kawasaki; Ralph Adolphs; Olaf Kaufman; Hanna Damasio; Antonio R. Damasio; Mark A. Granner; Hans Bakken; Tomokatsu Hori; Matthew A. Howard

Both lesion and functional imaging studies in humans, as well as neurophysiological studies in nonhuman primates, demonstrate the importance of the prefrontal cortex in representing the emotional value of sensory stimuli. Here we investigated single-neuron responses to emotional stimuli in an awake person with normal intellect. Recording from neurons within healthy tissue in ventral sites of the right prefrontal cortex, we found short-latency (120–160 ms) responses selective for aversive visual stimuli.


The Journal of Neuroscience | 2009

Temporal envelope of time-compressed speech represented in the human auditory cortex.

Kirill V. Nourski; Richard A. Reale; Hiroyuki Oya; Hiroto Kawasaki; Christopher K. Kovach; Haiming Chen; Matthew A. Howard; John F. Brugge

Speech comprehension relies on temporal cues contained in the speech envelope, and the auditory cortex has been implicated as playing a critical role in encoding this temporal information. We investigated auditory cortical responses to speech stimuli in subjects undergoing invasive electrophysiological monitoring for pharmacologically refractory epilepsy. Recordings were made from multicontact electrodes implanted in Heschl's gyrus (HG). Speech sentences, time compressed from 0.75 to 0.20 of natural speaking rate, elicited average evoked potentials (AEPs) and increases in event-related band power (ERBP) of cortical high-frequency (70–250 Hz) activity. Cortex of posteromedial HG, the presumed core of human auditory cortex, represented the envelope of speech stimuli in the AEP and ERBP. Envelope following in ERBP, but not in AEP, was evident in both language-dominant and -nondominant hemispheres for relatively high degrees of compression where speech was not comprehensible. Compared to posteromedial HG, responses from anterolateral HG—an auditory belt field—exhibited longer latencies, lower amplitudes, and little or no time locking to the speech envelope. The ability of the core auditory cortex to follow the temporal speech envelope over a wide range of speaking rates leads us to conclude that such capacity in itself is not a limiting factor for speech comprehension.
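The abstract above refers to event-related band power (ERBP) of high-frequency activity. A common way to compute such a band-power envelope is bandpass filtering followed by a Hilbert transform; the sketch below illustrates that generic approach and is not the paper's actual pipeline (the band edges, filter order, and baseline convention here are illustrative assumptions).

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_power_envelope(x, fs, lo=70.0, hi=250.0, order=4):
    """Bandpass-filter a signal and return its instantaneous power envelope."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, x)               # zero-phase bandpass
    return np.abs(hilbert(filtered)) ** 2      # squared analytic amplitude

def erbp_db(x, fs, baseline_samples):
    """Band power expressed in dB relative to a pre-stimulus baseline."""
    power = band_power_envelope(x, fs)
    baseline = power[:baseline_samples].mean()
    return 10.0 * np.log10(power / baseline)

# Synthetic demo: low-level noise with a 100 Hz "high-gamma" burst
# in the second half of the record.
fs = 1000
t = np.arange(2 * fs) / fs
rng = np.random.default_rng(0)
x = rng.standard_normal(t.size) * 0.1
x[fs:] += np.sin(2 * np.pi * 100 * t[fs:])
erbp = erbp_db(x, fs, baseline_samples=fs // 2)
```

The burst region then shows a clear power increase in dB over the baseline half of the record.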


PLOS ONE | 2008

Decoding Face Information in Time, Frequency and Space from Direct Intracranial Recordings of the Human Brain

Naotsugu Tsuchiya; Hiroto Kawasaki; Hiroyuki Oya; Matthew A. Howard; Ralph Adolphs

Faces are processed by a neural system with distributed anatomical components, but the roles of these components remain unclear. A dominant theory of face perception postulates independent representations of invariant aspects of faces (e.g., identity) in ventral temporal cortex including the fusiform gyrus, and changeable aspects of faces (e.g., emotion) in lateral temporal cortex including the superior temporal sulcus. Here we recorded neuronal activity directly from the cortical surface in 9 neurosurgical subjects undergoing epilepsy monitoring while they viewed static and dynamic facial expressions. Applying novel decoding analyses to the power spectrogram of electrocorticograms (ECoG) from over 100 contacts in ventral and lateral temporal cortex, we found better representation of both invariant and changeable aspects of faces in ventral than lateral temporal cortex. Critical information for discriminating faces from geometric patterns was carried by power modulations between 50 and 150 Hz. For both static and dynamic face stimuli, we obtained a higher decoding performance in ventral than lateral temporal cortex. For discriminating fearful from happy expressions, critical information was carried by power modulation between 60 and 150 Hz and below 30 Hz, and again better decoded in ventral than lateral temporal cortex. Task-relevant attention improved decoding accuracy more than 10% across a wide frequency range in ventral but not at all in lateral temporal cortex. Spatial searchlight decoding showed that decoding performance was highest around the middle fusiform gyrus. Finally, we found that the right hemisphere, in general, showed superior decoding to the left hemisphere. Taken together, our results challenge the dominant model for independent face representation of invariant and changeable aspects: information about both face attributes was better decoded from a single region in the middle fusiform gyrus.
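Decoding studies like the one above classify trials from spectral power features. As a minimal, generic illustration of that idea (not the authors' method), the sketch below runs a leave-one-out nearest-centroid decoder on simulated two-class power features; the trial counts, feature dimensions, and class separation are all assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: trials x (contacts * frequency bins) power features
# for two stimulus classes (e.g., two facial expressions).
n_trials, n_features = 40, 20
class_a = rng.normal(0.0, 1.0, (n_trials, n_features))
class_b = rng.normal(1.0, 1.0, (n_trials, n_features))   # shifted mean
X = np.vstack([class_a, class_b])
y = np.array([0] * n_trials + [1] * n_trials)

def loo_nearest_centroid(X, y):
    """Leave-one-out nearest-centroid decoding accuracy."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i              # hold out trial i
        centroids = [X[mask & (y == c)].mean(axis=0) for c in (0, 1)]
        dists = [np.linalg.norm(X[i] - c) for c in centroids]
        correct += int(np.argmin(dists) == y[i])
    return correct / len(y)

acc = loo_nearest_centroid(X, y)
```

Because the simulated classes differ in mean, the decoder performs well above the 50% chance level; with identical class distributions it would hover near chance.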


Journal of Neurophysiology | 2009

Coding of Repetitive Transients by Auditory Cortex on Heschl's Gyrus

John F. Brugge; Kirill V. Nourski; Hiroyuki Oya; Richard A. Reale; Hiroto Kawasaki; Mitchell Steinschneider; Matthew A. Howard

The capacity of auditory cortex on Heschl's gyrus (HG) to encode repetitive transients was studied in human patients undergoing surgical evaluation for medically intractable epilepsy. Multicontact depth electrodes were chronically implanted in gray matter of HG. Bilaterally presented stimuli were click trains varying in rate from 4 to 200 Hz. Averaged evoked potentials (AEPs) and event-related band power (ERBP), computed from responses at each of 14 recording sites, identified two auditory fields. A core field, which occupies posteromedial HG, was characterized by a robust polyphasic AEP on which could be superimposed a frequency following response (FFR). The FFR was prominent at click rates below approximately 50 Hz, decreased rapidly as click rate was increased, but could reliably be detected at click rates as high as 200 Hz. These data are strikingly similar to those obtained by others in the monkey under essentially the same stimulus conditions, indicating that mechanisms underlying temporal processing in the auditory core may be highly conserved across primate species. ERBP, which reflects increases or decreases of both phase-locked and non-phase-locked power within given frequency bands, showed stimulus-related increases in gamma band frequencies as high as 250 Hz. The AEPs recorded in a belt field anterolateral to the core were typically of low amplitude, showing little or no evidence of short-latency waves or an FFR, even at the lowest click rates used. The non-phase-locked component of the response extracted from the ERBP showed a robust, long-latency response occurring here in response to the highest click rates in the series.


Neuroscience | 2007

Auditory-visual processing represented in the human superior temporal gyrus

Richard A. Reale; Gemma A. Calvert; Thomas Thesen; Hiroto Kawasaki; Hiroyuki Oya; Matthew A. Howard; John F. Brugge

In natural face-to-face communication, speech perception utilizes both auditory and visual information. We described previously an acoustically responsive area on the posterior lateral surface of the superior temporal gyrus (field PLST) that is distinguishable on physiological grounds from other auditory fields located within the superior temporal plane. Considering the empirical findings in humans and non-human primates of cortical locations responsive to heard sounds and/or seen sound-sources, we reasoned that area PLST would also contain neural signals reflecting audiovisual speech interactions. To test this hypothesis, event related potentials (ERPs) were recorded from area PLST using chronically implanted multi-contact subdural surface-recording electrodes in patient-subjects undergoing diagnosis and treatment of medically intractable epilepsy, and cortical ERP maps were acquired during five contrasting auditory, visual and bimodal speech conditions. Stimulus conditions included consonant-vowel (CV) syllable sounds alone, silent seen speech or CV sounds paired with a female face articulating matched or mismatched syllables. Data were analyzed using a MANOVA framework, with the results from planned comparisons used to construct cortical significance maps. Our findings indicate that evoked responses recorded from area PLST to auditory speech stimuli are influenced significantly by the addition of visual images of the moving lower face and lips, either articulating the audible syllable or carrying out a meaningless (gurning) motion. The area of cortex exhibiting this audiovisual influence was demonstrably greater in the speech-dominant hemisphere.


Current Biology | 2010

Direct Recordings of Pitch Responses from Human Auditory Cortex

Timothy D. Griffiths; Sukhbinder Kumar; William Sedley; Kirill V. Nourski; Hiroto Kawasaki; Hiroyuki Oya; Roy D. Patterson; John F. Brugge; Matthew A. Howard

Pitch is a fundamental percept with a complex relationship to the associated sound structure [1]. Pitch perception requires brain representation of both the structure of the stimulus and the pitch that is perceived. We describe direct recordings of local field potentials from human auditory cortex made while subjects perceived the transition between noise and a noise with a regular repetitive structure in the time domain at the millisecond level called regular-interval noise (RIN) [2]. RIN is perceived to have a pitch when the rate is above the lower limit of pitch [3], at approximately 30 Hz. Sustained time-locked responses are observed to be related to the temporal regularity of the stimulus, commonly emphasized as a relevant stimulus feature in models of pitch perception (e.g., [1]). Sustained oscillatory responses are also demonstrated in the high gamma range (80–120 Hz). The regularity responses occur irrespective of whether the response is associated with pitch perception. In contrast, the oscillatory responses only occur for pitch. Both responses occur in primary auditory cortex and adjacent nonprimary areas. The research suggests that two types of pitch-related activity occur in humans in early auditory cortex: time-locked neural correlates of stimulus regularity and an oscillatory response related to the pitch percept.


The Journal of Neuroscience | 2011

Value Encoding in Single Neurons in the Human Amygdala during Decision Making

Antonio Rangel; Hiroyuki Oya; Hiroto Kawasaki; Matthew A. Howard

A growing consensus suggests that the brain makes simple choices by assigning values to the stimuli under consideration and then comparing these values to make a decision. However, the network involved in computing the values has not yet been fully characterized. Here, we investigated whether the human amygdala plays a role in the computation of stimulus values at the time of decision making. We recorded single neuron activity from the amygdala of awake patients while they made simple purchase decisions over food items. We found 16 amygdala neurons, located primarily in the basolateral nucleus, that responded linearly to the values assigned to individual items.


PLOS ONE | 2011

Human auditory cortical activation during self-vocalization

Jeremy D. W. Greenlee; Adam W. Jackson; Fangxiang Chen; Charles R. Larson; Hiroyuki Oya; Hiroto Kawasaki; Haiming Chen; Matthew A. Howard

During speaking, auditory feedback is used to adjust vocalizations. The brain systems mediating this integrative ability have been investigated using a wide range of experimental strategies. In this report we examined how vocalization alters speech-sound processing within auditory cortex by directly recording evoked responses to vocalizations and playback stimuli using intracranial electrodes implanted in neurosurgery patients. Several new findings resulted from these high-resolution invasive recordings in human subjects. Suppressive effects of vocalization were found to occur only within circumscribed areas of auditory cortex. In addition, at a smaller number of sites, the opposite pattern was seen; cortical responses were enhanced during vocalization. This increase in activity was reflected in high gamma power changes, but was not evident in the averaged evoked potential waveforms. These new findings support forward models for vocal control in which efference copies of premotor cortex activity modulate sub-regions of auditory cortex.


NeuroImage | 2011

Manifestation of ocular-muscle EMG contamination in human intracranial recordings

Christopher K. Kovach; Naotsugu Tsuchiya; Hiroto Kawasaki; Hiroyuki Oya; Matthew A. Howard; Ralph Adolphs

It is widely assumed that intracranial recordings from the brain are only minimally affected by contamination due to ocular-muscle electromyogram (oEMG). Here we show that this is not always the case. In intracranial recordings from five surgical epilepsy patients we observed that eye movements caused a transient biphasic potential at the onset of a saccade, resembling the saccadic spike potential commonly seen in scalp EEG, accompanied by an increase in broadband power between 20 and 200 Hz. Using concurrently recorded eye movements and high-density intracranial EEG (iEEG) we developed a detailed overview of the spatial distribution and temporal characteristics of the saccade-related oculomotor signal within recordings from ventral, medial and lateral temporal cortex. The occurrence of the saccadic spike was not explained solely by reference contact location, and was observed near the temporal pole for small (<2 deg) amplitude saccades and over a broad area for larger saccades. We further examined the influence of saccade-related oEMG contamination on measurements of spectral power and interchannel coherence. Contamination manifested in both spectral power and coherence measurements, in particular, over the anterior half of the ventral and medial temporal lobe. Next, we compared methods for removing the contaminating signal and found that nearest-neighbor bipolar re-referencing and ICA filtering were effective for suppressing oEMG at locations far from the orbits, but tended to leave some residual contamination at the temporal pole. Finally, we show that genuine cortical broadband gamma responses observed in averaged data from ventral temporal cortex can bear a striking similarity in time course and band-width to oEMG contamination recorded at more anterior locations. 
We conclude that eye movement-related contamination should be ruled out when reporting high gamma responses in human intracranial recordings, especially those obtained near anterior and medial temporal lobe.
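The abstract above mentions nearest-neighbor bipolar re-referencing as one way to suppress contamination shared across contacts. The generic operation is simply differencing adjacent contacts; the sketch below demonstrates that idea on synthetic data (the contact count, sampling rate, and artifact model are illustrative assumptions, not the paper's recordings).

```python
import numpy as np

def bipolar_rereference(data):
    """Nearest-neighbor bipolar re-referencing along the contact axis.

    data: array of shape (n_contacts, n_samples). Returns an
    (n_contacts - 1, n_samples) array of differences between adjacent
    contacts, which cancels any signal common to neighboring contacts
    (e.g., a shared reference or a far-field muscle artifact).
    """
    return np.diff(data, axis=0)

# Demo: an artifact added identically to every contact vanishes after
# re-referencing, while a signal local to one contact survives.
fs, n = 500, 1000
t = np.arange(n) / fs
artifact = np.sin(2 * np.pi * 60 * t)            # shared across contacts
data = np.tile(artifact, (4, 1))
data[2] += 0.5 * np.sin(2 * np.pi * 10 * t)      # local signal on contact 2
bipolar = bipolar_rereference(data)
```

After differencing, the pair not touching contact 2 is flat, while pairs involving contact 2 retain the local 10 Hz signal.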


Cerebral Cortex | 2011

Intracranial Study of Speech-Elicited Activity on the Human Posterolateral Superior Temporal Gyrus

Mitchell Steinschneider; Kirill V. Nourski; Hiroto Kawasaki; Hiroyuki Oya; John F. Brugge; Matthew A. Howard

To clarify speech-elicited response patterns within auditory-responsive cortex of the posterolateral superior temporal (PLST) gyrus, time-frequency analyses of event-related band power in the high gamma frequency range (75–175 Hz) were performed on the electrocorticograms recorded from high-density subdural grid electrodes in 8 patients undergoing evaluation for medically intractable epilepsy. Stimuli were 6 stop consonant-vowel (CV) syllables that varied in their consonant place of articulation (POA) and voice onset time (VOT). Initial augmentation was maximal over several centimeters of PLST, lasted about 400 ms, and was often followed by suppression and a local outward expansion of activation. Maximal gamma power overlapped either the Nα or Pβ deflections of the average evoked potential (AEP). Correlations were observed between the relative magnitudes of gamma band responses elicited by unvoiced stop CV syllables (/pa/, /ka/, /ta/) and their corresponding voiced stop CV syllables (/ba/, /ga/, /da/), as well as by the VOT of the stimuli. VOT was also represented in the temporal patterns of the AEP. These findings, obtained in the passive awake state, indicate that PLST discriminates acoustic features associated with POA and VOT and serve as a benchmark against which task-related speech activity can be compared.

Collaboration


Dive into Hiroto Kawasaki's collaboration.

Top Co-Authors

Matthew A. Howard

University of Iowa Hospitals and Clinics

Brian J. Dlouhy

Roy J. and Lucille A. Carver College of Medicine

Ralph Adolphs

California Institute of Technology

Yasunori Nagahama

University of Iowa Hospitals and Clinics

John F. Brugge

University of Wisconsin-Madison