

Publication


Featured research published by Shigeto Furukawa.


Journal of the Acoustical Society of America | 2005

Acoustical cues for sound localization by the Mongolian gerbil, Meriones unguiculatus

Katuhiro Maki; Shigeto Furukawa

The present study measured the head-related transfer functions (HRTFs) of the Mongolian gerbil for various sound-source directions, and explored acoustical cues for sound localization that could be available to the animals. The HRTF exhibited spectral notches for frequencies above 25 kHz. The notch frequency varied systematically with source direction, and thereby characterized the source directions well. The frequency dependence of the acoustical axis, the direction for which the HRTF amplitude was maximal, was relatively irregular and inconsistent between ears and animals. The frequency-by-frequency plot of the interaural level difference (ILD) exhibited positive and negative peaks, with maximum values of 30 dB at around 30 kHz. The ILD peak frequency had a relatively irregular spatial distribution, implying a poor sound localization cue. The binaural acoustical axis (the direction with the maximum ILD magnitude) showed relatively orderly clustering around certain frequencies, the pattern being fairly consistent among animals. The interaural time differences (ITDs) were also measured and fell within a ±120 μs range. When two different animal postures were compared (i.e., standing on the hind legs versus prone), small but consistent differences were found for the lower rear directions in the HRTF amplitudes, the ILDs, and the ITDs.
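As a rough illustration of the ITD measurement mentioned above, the sketch below estimates an ITD as the lag that maximizes the cross-correlation between the two ears' impulse responses. The signals, sampling rate, and sign convention are hypothetical; the paper's actual measurement procedure is not given in this abstract.

```python
import numpy as np

def estimate_itd(left_ir, right_ir, fs):
    """Estimate the interaural time difference (ITD) in seconds as the lag
    that maximizes the cross-correlation of the two ear signals.
    Positive ITD here means the sound reaches the left ear first."""
    corr = np.correlate(right_ir, left_ir, mode="full")
    lag = np.argmax(corr) - (len(left_ir) - 1)  # lag in samples
    return lag / fs

# Synthetic check: the right-ear response is the left-ear response delayed
# by 12 samples, so the estimated ITD should be +60 us at 200 kHz.
fs = 200_000                      # sampling rate in Hz (hypothetical)
rng = np.random.default_rng(0)
left = rng.standard_normal(1024)
right = np.concatenate([np.zeros(12), left])[:1024]
print(f"ITD = {estimate_itd(left, right, fs) * 1e6:.0f} microseconds")
```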


The Neuroscientist | 2002

Book Review: Cortical Neurons That Localize Sounds

John C. Middlebrooks; Li Xu; Shigeto Furukawa; Ewan A. Macpherson

Efforts to locate a cortical map of auditory space generally have proven unsuccessful. At moderate sound levels, cortical neurons generally show large or unbounded spatial receptive fields. Within those large receptive fields, however, changes in sound location result in systematic changes in the temporal firing patterns, such that single-neuron firing patterns can signal the locations of sound sources throughout as much as 360 degrees of auditory space. Neurons in the cat’s auditory cortex accurately signal the locations of broadband sounds, which human listeners also localize accurately. Conversely, in response to filtered sounds that produce spatial illusions in human listeners, neurons signal systematically incorrect locations that can be predicted by a model that also predicts the listeners’ illusory reports. These results from the cat’s auditory cortex, as well as more limited results from nonhuman primates, suggest a model in which the location of any particular sound source is represented in a distributed fashion within individual auditory cortical areas and among multiple cortical areas.


JARO: Journal of the Association for Research in Otolaryngology | 2000

Auditory Cortical Images of Tones and Noise Bands

Julie G. Arenberg; Shigeto Furukawa; John C. Middlebrooks

We examined the representation of stimulus center frequencies by the distribution of cortical activity. Recordings were made from the primary auditory cortex (area A1) of ketamine-anesthetized guinea pigs. Cortical images of tones and noise bands were visualized as the simultaneously recorded spike activity of neurons at 16 sites along the tonotopic gradient of cortical frequency representation. The cortical image of a pure tone showed a restricted focus of activity along the tonotopic gradient. As the stimulus frequency was increased, the location of the activation focus shifted from rostral to caudal. When cochlear activation was broadened by increasing the stimulus level or bandwidth, the cortical image broadened. An artificial neural network algorithm was used to quantify the accuracy of center-frequency representation by small populations of cortical neurons. The artificial neural network identified stimulus center frequency based on single-trial spike counts at as few as ten sites. The performance of the artificial neural network under various conditions of stimulus level and bandwidth suggests that the accuracy of representation of center frequency is largely insensitive to changes in the width of cortical images.
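The decoding idea can be sketched with a simple nearest-centroid classifier standing in for the paper's artificial neural network, applied to synthetic single-trial spike counts at 16 recording sites. All numbers below (tuning widths, rates, trial counts) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sites, n_freqs, n_trials = 16, 5, 40

# Synthetic tuning: each stimulus center frequency drives a Gaussian bump
# of activity at a different position along the 16 recording sites,
# mimicking the tonotopic shift of the cortical image described above.
site_pos = np.arange(n_sites)
centers = np.linspace(2, 13, n_freqs)
rates = 8 * np.exp(-0.5 * ((site_pos[None, :] - centers[:, None]) / 2.0) ** 2)

# Single-trial spike counts: Poisson noise around the tuning profiles.
counts = rng.poisson(rates[:, None, :], size=(n_freqs, n_trials, n_sites))

# A nearest-centroid decoder stands in for the paper's artificial neural
# network: train on half the trials, classify the rest from single trials.
train_x, test_x = counts[:, :20], counts[:, 20:]
centroids = train_x.mean(axis=1)                       # (n_freqs, n_sites)
dist = ((test_x[:, :, None, :] - centroids[None, None]) ** 2).sum(axis=-1)
pred = dist.argmin(axis=-1)                            # decoded frequency index
truth = np.arange(n_freqs)[:, None]
print("single-trial decoding accuracy:", (pred == truth).mean())
```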


Psychonomic Bulletin & Review | 2016

Correspondences among pupillary dilation response, subjective salience of sounds, and loudness

Hsin-I Liao; Shunsuke Kidani; Makoto Yoneya; Makio Kashino; Shigeto Furukawa

A pupillary dilation response is known to be evoked by salient deviant or contrast auditory stimuli, but so far a direct link between it and subjective salience has been lacking. In the first two experiments, participants listened to various environmental sounds while their pupillary responses were recorded. In separate sessions, participants performed subjective pairwise-comparison tasks on the sounds with respect to their salience, loudness, vigorousness, preference, beauty, annoyance, and hardness. The pairwise-comparison data were converted to ratings on the Thurstone scale. The results showed a close link between subjective judgments of salience and loudness. The pupil dilated in response to the sound presentations, regardless of sound type. Most importantly, this pupillary dilation response to an auditory stimulus positively correlated with the subjective salience, as well as the loudness, of the sounds (Exp. 1). When the loudnesses of the sounds were identical, the pupil responses to each sound were similar and were not correlated with the subjective judgments of salience or loudness (Exp. 2). This finding was further confirmed by analyses based on individual stimulus pairs and participants. In Experiment 3, when salience and loudness were manipulated by systematically changing the sound pressure level and acoustic characteristics, the pupillary dilation response reflected the changes in both manipulated factors. A regression analysis showed a nearly perfect linear correlation between the pupillary dilation response and loudness. The overall results suggest that the pupillary dilation response reflects the subjective salience of sounds, which is defined, or is heavily influenced, by loudness.
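The conversion from pairwise comparisons to a Thurstone scale can be sketched as follows. This is standard Case V scaling applied to a hypothetical win matrix; the paper's exact scaling procedure may differ.

```python
from statistics import NormalDist
import numpy as np

def thurstone_case_v(wins):
    """Thurstone Case V scaling: z-transform the pairwise choice
    proportions, then average each row of the z matrix.
    wins[i, j] = number of times stimulus i was judged over stimulus j."""
    wins = np.asarray(wins, dtype=float)
    n = wins + wins.T                      # comparisons per pair
    p = np.where(n > 0, wins / np.where(n > 0, n, 1), 0.5)
    np.fill_diagonal(p, 0.5)
    p = p.clip(0.01, 0.99)                 # avoid infinite z-scores
    z = np.vectorize(NormalDist().inv_cdf)(p)
    return z.mean(axis=1)                  # one scale value per stimulus

# Hypothetical data: 3 sounds, 20 comparisons per pair; sound 0 was judged
# more salient than sound 1, which was judged more salient than sound 2.
wins = [[0, 15, 18],
        [5, 0, 14],
        [2, 6, 0]]
scale = thurstone_case_v(wins)
print("scale values:", scale)
```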


Frontiers in Neuroengineering | 2012

Photosensitive-polyimide based method for fabricating various neural electrode architectures.

Yasuhiro X. Kato; Shigeto Furukawa; Kazuyuki Samejima; Naoyuki Hironaka; Makio Kashino

An extensive photosensitive-polyimide (PSPI)-based method for designing and fabricating various neural electrode architectures was developed. The method aims to broaden the design flexibility and expand the fabrication capability for neural electrodes, to improve the quality of recorded signals, and to integrate other functions. After characterizing the PSPI's properties for micromachining processes, we successfully designed and fabricated various neural electrodes, even on a non-flat substrate, using only one PSPI as an insulation material and without time-consuming dry etching processes. The fabricated neural electrodes were an electrocorticogram (ECoG) electrode, a mesh intracortical electrode with a unique lattice-like mesh structure to fixate neural tissue, and a guide cannula electrode with recording microelectrodes placed on the curved surface of a guide cannula as a microdialysis probe. In vivo neural recordings in anesthetized rats demonstrated that these electrodes can record neural activities repeatedly without breakage or mechanical failure, which promises stable recordings over long periods of time. These successes suggest that PSPI-based fabrication is a powerful method, permitting flexible design and easy optimization of electrode architectures for a variety of electrophysiological experiments with improved neural recording performance.


Hearing Research | 2014

Independent or integrated processing of interaural time and level differences in human auditory cortex

Christian F. Altmann; Satoshi Terada; Makio Kashino; Kazuhiro Goto; Tatsuya Mima; Hidenao Fukuyama; Shigeto Furukawa

Sound localization in the horizontal plane is mainly determined by interaural time differences (ITDs) and interaural level differences (ILDs). Both cues yield an estimate of sound source location, and in many real-life situations the two cues are roughly congruent. When stimulating listeners with headphones it is possible to counterbalance the two cues, so-called ITD/ILD trading. This phenomenon speaks for integrated ITD/ILD processing at the behavioral level. However, it is unclear at what stages of the auditory processing stream ITD and ILD cues are integrated to provide a unified percept of sound lateralization. Therefore, we set out to test, with human electroencephalography, for integrated versus independent ITD/ILD processing at the level of preattentive cortical processing by measuring the mismatch negativity (MMN) to changes in sound lateralization. We presented a series of diotic standards (perceived at a midline position) that were interrupted by deviants that entailed a change in either (a) ITD only, (b) ILD only, (c) congruent ITD and ILD, or (d) counterbalanced ITD/ILD (ITD/ILD trading). The sound stimuli were either (i) pure tones with a frequency of 500 Hz, or (ii) amplitude-modulated tones with a carrier frequency of 4000 Hz and a modulation frequency of 125 Hz. We observed significant MMN for the ITD/ILD-traded deviants both for the 500 Hz pure tones and for the 4000 Hz amplitude-modulated tones. This speaks for independent processing of ITD and ILD at the level of the MMN within auditory cortex. However, the combined ITD/ILD cues elicited smaller MMN than the sum of the MMNs induced by the ITD and ILD cues presented in isolation for 500 Hz, but not for 4000 Hz, suggesting independent processing for the higher frequency only. Thus, the two markers for independent processing, additivity and cue conflict, led to contradictory conclusions, with a dissociation between the lower (500 Hz) and higher (4000 Hz) frequency bands.
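The two markers described above can be made concrete with toy numbers. The MMN amplitudes and the criteria below are entirely hypothetical, not the paper's data; the sketch only illustrates the inference logic.

```python
# Hypothetical grand-average MMN amplitudes in microvolts (MMN is a
# negative deflection, so more negative = larger response).
mmn = {"ITD": -1.2, "ILD": -1.5, "combined": -1.9, "traded": -0.8}

# Marker 1 (cue conflict): a reliable MMN to the ITD/ILD-traded deviant
# argues for independent coding, because an integrated code would place
# the traded deviant at the midline, just like the standard.
traded_mmn_present = mmn["traded"] < -0.5           # hypothetical criterion

# Marker 2 (additivity): under fully independent coding, the combined-cue
# MMN should approximate the sum of the single-cue MMNs.
additive = abs(mmn["combined"] - (mmn["ITD"] + mmn["ILD"])) < 0.3

print("traded-deviant MMN present:", traded_mmn_present)
print("combined MMN additive:", additive)
```

With these toy values the traded deviant elicits an MMN (pointing to independence) while the combined MMN is subadditive (pointing away from it), mirroring the conflicting pattern the abstract reports at 500 Hz.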


Journal of the Acoustical Society of America | 2005

Reducing individual differences in the external-ear transfer functions of the Mongolian gerbil

Katuhiro Maki; Shigeto Furukawa

This study examines individual differences in the directional transfer functions (DTFs), the directional components of head-related transfer functions of gerbils, and seeks a method for reducing these differences. The difference between the DTFs of a given animal pair was quantified by the intersubject spectral difference (ISSD), which is the variance in the difference spectra of DTFs for frequencies between 5 and 45 kHz and for 361 source directions. An attempt was made to reduce the ISSD by scaling the DTFs of one animal in frequency and/or rotating the DTFs along the source coordinate sphere. The ISSD was reduced by a median of 12% after optimal frequency scaling alone, by a median of 19% after optimal spatial rotation alone, and by a median of 36% after simultaneous frequency scaling and spatial rotation. The optimal scaling factor (OSF) and the optimal coordinate rotation (OCR) correlated strongly with differences in head width and pinna angles (i.e., pinna inclination around the vertical and front-back axes), respectively. Thus, linear equations were derived to estimate the OSF and OCR from these anatomical measurements. The ISSD could be reduced by a median of 22% based on the estimated OSF and OCR.
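The frequency-scaling step can be sketched as a grid search that minimizes the ISSD, simplified here to a single source direction with toy spectra (the study optimizes over 361 directions and also rotates the coordinate sphere).

```python
import numpy as np

def issd(spec_a, spec_b):
    """Inter-subject spectral difference: variance of the dB difference
    spectrum between two animals' directional transfer functions
    (simplified here to a single source direction)."""
    return np.var(spec_a - spec_b)

def best_frequency_scale(freqs, spec_a, spec_b, factors):
    """Grid-search the frequency-scaling factor applied to spec_b that
    minimizes the ISSD against spec_a."""
    scores = [issd(spec_a, np.interp(freqs, freqs * k, spec_b))
              for k in factors]
    return factors[int(np.argmin(scores))]

# Toy DTF magnitudes in dB: animal B's spectrum is animal A's with all
# spectral features scaled in frequency by a factor of 1.1 (hypothetical).
freqs = np.linspace(5e3, 45e3, 200)        # 5-45 kHz, as in the study
spec_a = 10 * np.sin(freqs / 4e3)
spec_b = 10 * np.sin(freqs * 1.1 / 4e3)

factors = np.linspace(0.8, 1.25, 91)       # candidate scaling factors
k = best_frequency_scale(freqs, spec_a, spec_b, factors)
print("optimal scaling factor:", round(k, 2))
```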


Hearing Research | 2016

Subcortical correlates of auditory perceptual organization in humans

Shimpei Yamagishi; Sho Otsuka; Shigeto Furukawa; Makio Kashino

To make sense of complex auditory scenes, the auditory system sequentially organizes auditory components into perceptual objects or streams. In the conventional view of this process, the cortex plays a major role in perceptual organization, and subcortical mechanisms merely provide the cortex with acoustical features. Here, we show that the neural activities of the brainstem are linked to perceptual organization, which alternates spontaneously for human listeners without any stimulus change. The stimulus used in the experiment was an unchanging sequence of repeated triplet tones, which can be interpreted as either one or two streams. Listeners were instructed to report their perceptual state whenever they experienced perceptual switching between one and two streams throughout the stimulus presentation. Simultaneously, we recorded event-related potentials with scalp electrodes. We measured the frequency-following response (FFR), which is considered to originate from the brainstem. We also assessed thalamo-cortical activity through the middle-latency response (MLR). The results demonstrate that the FFR and MLR varied with the state of auditory stream perception. In addition, we found that the MLR change precedes the FFR change at perceptual switching from a one-stream to a two-stream percept. This suggests that there are top-down influences on brainstem activity from the thalamo-cortical pathway. These findings are consistent with the idea of a distributed, hierarchical neural network for perceptual organization and suggest that the network extends to the brainstem level.
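One common way to quantify an FFR-like measure is the spectral amplitude of the averaged EEG at the stimulus frequency. The sketch below does this on synthetic data; the sampling rate, stimulus frequency, and signal-to-noise ratio are all hypothetical, and the study's actual analysis pipeline is not given in this abstract.

```python
import numpy as np

fs = 10_000    # EEG sampling rate in Hz (hypothetical)
f0 = 200       # stimulus frequency tracked by the brainstem (hypothetical)
t = np.arange(0, 0.5, 1 / fs)

# Synthetic averaged EEG: a weak phase-locked component at f0 buried in noise.
rng = np.random.default_rng(3)
eeg = 0.2 * np.sin(2 * np.pi * f0 * t) + rng.standard_normal(t.size)

# FFR-like measure: spectral amplitude of the averaged EEG at the stimulus
# frequency (normalized so a sine of amplitude A reads as A).
spectrum = np.abs(np.fft.rfft(eeg)) / t.size * 2
fft_freqs = np.fft.rfftfreq(t.size, 1 / fs)
ffr = spectrum[np.argmin(np.abs(fft_freqs - f0))]
print(f"FFR amplitude at {f0} Hz: {ffr:.3f} (arbitrary units)")
```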


Frontiers in Neuroscience | 2014

Factors that account for inter-individual variability of lateralization performance revealed by correlations of performance among multiple psychoacoustical tasks

Atsushi Ochi; Tatsuya Yamasoba; Shigeto Furukawa

This study explored the source of inter-listener variability in the performance of lateralization tasks based on interaural time or level differences (ITDs or ILDs) by examining correlations of performance between pairs of multiple psychoacoustical tasks. The ITD, ILD, Time, and Level tasks were intended to measure sensitivities to ITD; to ILD; to the temporal fine structure or envelope of the stimulus encoded by neural phase locking; and to stimulus level, respectively. Stimuli in low- and high-frequency regions were tested. The low-frequency stimulus was a harmonic complex (F0 = 100 Hz) that was spectrally shaped for the frequency region around the 11th harmonic. The high-frequency stimulus was a “transposed stimulus,” a 4-kHz tone amplitude-modulated with a half-wave rectified 125-Hz sinusoid. The task procedures were essentially the same for the low- and high-frequency stimuli. Generally, the thresholds for pairs of ITD and ILD tasks, across cues or frequencies, exhibited significant positive correlations, suggesting a common mechanism across cues and frequencies underlying the lateralization tasks. For the high-frequency stimulus, there was a significant positive correlation of performance between the ITD and Time tasks. A significant positive correlation was also found for the pair of ILD and Level tasks with the low-frequency stimulus. These results indicate that the inter-listener variability of ITD and ILD sensitivities could be partially accounted for by the variability of monaural efficiency of neural phase locking and intensity coding, respectively, depending on frequency.
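The correlational logic can be sketched with simulated listeners whose task thresholds share a common underlying factor. All numbers below are hypothetical; the sketch only shows how a shared mechanism produces a positive inter-task correlation across listeners.

```python
import numpy as np

rng = np.random.default_rng(2)
n_listeners = 25

# Hypothetical log-transformed thresholds: a shared binaural-efficiency
# factor plus task-specific noise, mimicking the common mechanism that a
# positive inter-task correlation would suggest.
common = rng.standard_normal(n_listeners)
itd_thresholds = common + 0.5 * rng.standard_normal(n_listeners)
ild_thresholds = common + 0.5 * rng.standard_normal(n_listeners)

# Pearson correlation of thresholds across listeners, as in the
# task-pair analyses described above.
r = np.corrcoef(itd_thresholds, ild_thresholds)[0, 1]
print(f"inter-listener threshold correlation: r = {r:.2f}")
```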


PLOS ONE | 2016

A Role of Medial Olivocochlear Reflex as a Protection Mechanism from Noise-Induced Hearing Loss Revealed in Short-Practicing Violinists

Sho Otsuka; Minoru Tsuzaki; Junko Sonoda; Satomi Tanaka; Shigeto Furukawa

Previous studies have indicated that extended exposure to high levels of sound might increase the risk of hearing loss among professional symphony orchestra musicians. One of the major problems associated with musicians’ hearing loss is the difficulty of estimating its risk simply on the basis of the physical amount of exposure, i.e., the exposure level and duration. The aim of this study was to examine whether measurement of the medial olivocochlear reflex (MOCR), which is assumed to protect the cochlea from acoustic damage, could enable us to assess the risk of hearing loss among musicians. To test this, we compared the MOCR strength and the hearing deterioration caused by one hour of instrument practice. The participants were music university students majoring in violin, whose left ear is exposed to intense violin sounds (broadband sounds containing a significant number of high-frequency components) during regular instrument practice. Audiograms and click-evoked otoacoustic emissions (CEOAEs) were measured before and after a one-hour violin practice. Exposure was larger at the left ear than at the right ear, and we observed a left-ear-specific temporary threshold shift (TTS) after the violin practice. Left-ear CEOAEs decreased proportionally to the TTS. The exposure level, however, could not entirely explain the inter-individual variation in the TTS and the decrease in CEOAEs. On the other hand, the MOCR strength could predict the size of the TTS and the CEOAE decrease. Our findings imply that, among other factors, the MOCR is a promising measure for assessing the risk of hearing loss among musicians.
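The TTS computation, the pre/post difference in audiometric thresholds, can be sketched with made-up data; all frequencies and dB values below are hypothetical.

```python
import numpy as np

freqs_khz = np.array([1, 2, 3, 4, 6, 8])

# Hypothetical audiograms (hearing level in dB, left ear) measured before
# and after a one-hour practice session; all values are made up.
pre = np.array([5.0, 5.0, 10.0, 10.0, 5.0, 10.0])
post = np.array([5.0, 10.0, 20.0, 25.0, 15.0, 10.0])

tts = post - pre                  # temporary threshold shift per frequency
worst = freqs_khz[np.argmax(tts)]
print("TTS (dB):", tts)
print(f"largest shift: {tts.max():.0f} dB at {worst} kHz")
```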

Collaboration


Dive into Shigeto Furukawa's collaborations.

Top Co-Authors

Makio Kashino
Tokyo Institute of Technology

Katuhiro Maki
Nippon Telegraph and Telephone

Shimpei Yamagishi
Tokyo Institute of Technology

Makoto Yoneya
Tokyo Institute of Technology