
Publications


Featured research published by Satoshi Imaizumi.


Neuroreport | 1997

Vocal identification of speaker and emotion activates different brain regions

Satoshi Imaizumi; Koichi Mori; Shigeru Kiritani; Ryuta Kawashima; Motoaki Sugiura; Hiroshi Fukuda; Kengo Itoh; Takashi Kato; Akinori Nakamura; Kentaro Hatano; Shozo Kojima; Katsuki Nakamura

Regional cerebral blood flow was measured in six healthy volunteers by positron emission tomography during identification of speaker and emotion from spoken words. The speaker identification task activated several audio-visual multimodal areas, particularly the temporal poles in both hemispheres, which may be involved in connecting vocal attributes with the visual representations of speakers. The emotion identification task activated regions in the cerebellum and the frontal lobe, suggesting that these regions are functionally related in the processing of emotion. The results suggest that different anatomical structures contribute to the vocal identification of speaker and emotion.


Neuroreport | 1998

Task-dependent laterality for cue decoding during spoken language processing

Satoshi Imaizumi; Koichi Mori; Shigeru Kiritani; Hiroshi Hosoi; Mitsuo Tonoike

The task-dependent laterality of the auditory cortices was investigated by measuring the magnetic fields elicited by three forms of a Japanese verb, which differed in terms of prosodic and phonetic cues. Significant task-dependent magnetic fields were found in both hemispheres during a prosody-related task, but only in the left hemisphere during a phoneme-related task. The latency was similar to that of the mismatch negativity, which reflects the neural activity of automatic cue decoding. These results suggest that task-dependent schemata are activated at least partially in parallel with automatic cue-decoding processes, such that those in the left hemisphere process linguistic information irrespective of acoustic cues, whereas those in the right hemisphere process prosodic information.


Neuroreport | 2001

Ultrasound activates the auditory cortex of profoundly deaf subjects

Satoshi Imaizumi; Hiroshi Hosoi; Takefumi Sakaguchi; Yoshiaki Watanabe; Norihiro Sadato; Satoshi Nakamura; Atsuo Waki; Yoshiharu Yonekura

Using three-dimensional PET, the cortical areas activated by bone-conducted ultrasound were measured in five profoundly deaf subjects and compared with the cortical areas of normal-hearing subjects activated by stimuli delivered through bone-conducted ultrasonic, air-conducted, bone-conducted, and vibro-tactile hearing aids. All of the hearing aids, including the ultrasonic hearing aid, consistently activated the medial portion of the primary auditory cortex of the normal volunteers. The same cortical area was also significantly activated in the profoundly deaf subjects, although the percentage increase in regional cerebral blood flow (rCBF) was smaller than in normal subjects. These results suggest that extra-cochlear routes convey information to the primary auditory cortex and can produce a detectable sound sensation even in profoundly deaf subjects, who themselves reported such a sensation.


Japanese Journal of Applied Physics | 2002

Inner Head Acoustic Field for Bone-Conducted Sound Calculated by Finite-Difference Time-Domain Method

Takefumi Sakaguchi; Takahito Hirano; Yoshiaki Watanabe; Tadashi Nishimura; Hiroshi Hosoi; Satoshi Imaizumi; Seiji Nakagawa; Mitsuo Tonoike

The acoustic field arising within the head for bone-conducted sound at various stimulation locations was calculated using the finite-difference time-domain (FDTD) technique; three slightly different stimulation locations near the left mastoid were used for audible-frequency and ultrasonic-frequency stimulation. The calculated sound fields in the plane containing the cochleae showed considerably different characteristics at the two stimulation frequencies. For audible-frequency stimulation, the distribution differed negligibly across stimulation locations. In contrast, for ultrasonic-frequency stimulation, the distribution shifted considerably with each change of stimulation location. These results account for the shifting sound image perceived for bone-conducted ultrasound, and the negligibly shifting sound image perceived for bone-conducted audible sound, when the stimulation location is slightly changed.
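
For readers unfamiliar with the method, FDTD marches pressure and particle velocity forward in time on a staggered grid. Below is a minimal one-dimensional Python sketch of that update loop; the tissue parameters, grid size, source frequency, and hard-source excitation are illustrative assumptions, not the paper's three-dimensional head model.

```python
import numpy as np

# Minimal 1D acoustic FDTD sketch (illustrative only; the study used a
# full 3D head model). Pressure p and particle velocity v live on a
# staggered grid and are updated in leapfrog fashion.
c = 1500.0          # sound speed in soft tissue, m/s (assumed)
rho = 1000.0        # density, kg/m^3 (assumed)
dx = 1e-3           # spatial step, m
dt = 0.5 * dx / c   # time step satisfying the CFL stability condition
nx, nt = 400, 800   # grid points and time steps (illustrative)

p = np.zeros(nx)        # pressure at integer grid points
v = np.zeros(nx + 1)    # velocity at half-integer grid points

f0 = 30e3               # source frequency, Hz (ultrasonic range, assumed)
for n in range(nt):
    # update velocity from the pressure gradient
    v[1:-1] -= (dt / (rho * dx)) * (p[1:] - p[:-1])
    # update pressure from the velocity divergence
    p -= (rho * c**2 * dt / dx) * (v[1:] - v[:-1])
    # hard source at one end, standing in for the mastoid stimulation point
    p[0] = np.sin(2 * np.pi * f0 * n * dt)

print(f"peak |p| on the grid after {nt} steps: {np.abs(p).max():.3f}")
```

Shifting the source index (here fixed at p[0]) would mimic the slightly different stimulation locations compared in the study.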


Journal of the Acoustical Society of America | 1995

Listener adaptive characteristics of vowel devoicing in Japanese dialogue

Satoshi Imaizumi; Akiko Hayashi; Toshisada Deguchi

Listener-adaptive characteristics of Japanese vowel devoicing were investigated by analyzing (i) dialogues between professional teachers and hearing-impaired (HI) or normal-hearing (NH) children, and (ii) speech samples read by the teachers as fast and clearly as possible (RD). The teachers reduced the devoicing rate and lengthened the moras more in the HI than in the NH samples, and even more in the HI than in the RD samples. A logistic regression analysis of the devoicing rate suggests that the speech-rate dependency of the devoicing rate differs among the HI, NH, and RD samples. When moras are lengthened, the predicted devoicing rate decreases more for the HI than for the NH samples, and even more for the HI than for the RD samples, suggesting that not only rate-dependent but also rate-independent adjustments significantly affect devoicing reduction. These results suggest the following: (1) professional teachers of hearing-impaired children reduce their devoicing not only by lengthening the interval of successive voicing/devoicing gestures, but also by resizing component gestures to some extent, probably to improve the listeners' comprehension; (2) vowel devoicing should be represented in terms of parameters of speech motor control; (3) it may be possible to develop an optimized communication method for hearing-impaired children by simulating such listener-adaptive adjustments in speech production.
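
As a rough illustration of the kind of logistic-regression analysis described above, the sketch below models devoicing probability as a function of mora duration and a speech-style indicator. The data, coefficients, and variable names are synthetic assumptions, not the study's corpus or model.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic illustration: probability of vowel devoicing as a function
# of mora duration, with an indicator for HI-directed speech.
rng = np.random.default_rng(0)
n = 300
mora_ms = rng.uniform(60, 200, n)      # mora duration in milliseconds
style_HI = rng.integers(0, 2, n)       # 1 = speech directed to HI children
# assumed effects: longer moras and HI-directed style lower devoicing odds
logit = 4.0 - 0.03 * mora_ms - 1.0 * style_HI
devoiced = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([mora_ms, style_HI]))
model = sm.Logit(devoiced, X).fit(disp=False)
# separate coefficients capture rate-dependent (duration) and
# rate-independent (style) contributions to devoicing reduction
print(model.params)
```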


Logopedics Phoniatrics Vocology | 2010

Perceptual evaluation of pathological voice quality: A comparative analysis between the RASATI and GRBASI scales

Emi Juliana Yamauchi; Satoshi Imaizumi; Hagino Maruyama; Tomoyuki Haji

To provide mutual understanding between different evaluation scales for pathological voice quality, comparative analyses between the GRBASI and RASATI systems were conducted. A total of 100 voice samples were rated by experienced Brazilian and Japanese listeners. Factor analysis with varimax rotation identified significant interrelations between the scales, with asthenia, instability, and roughness as common factors. Grade of hoarseness, included only in GRBASI, corresponds to a combination of roughness, breathiness, and instability. Harshness, included only in RASATI, can be predicted by breathiness combined with strain on the GRBASI scale. Roughness was found to be the most consistent factor and the easiest to identify for evaluators with different linguistic backgrounds.
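
A minimal sketch of factor analysis with varimax rotation on combined scale ratings is shown below, using scikit-learn. The rating matrix here is a random placeholder standing in for the 100 voice samples, and the item layout (6 GRBASI + 6 RASATI items) is an assumption about the combined data.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Placeholder ratings: 100 voice samples x 12 items
# (6 GRBASI + 6 RASATI); real data would be listener scores.
rng = np.random.default_rng(0)
ratings = rng.normal(size=(100, 12))

# three common factors, varimax-rotated for interpretability
fa = FactorAnalysis(n_components=3, rotation="varimax", random_state=0)
fa.fit(ratings)

# loadings: one row per item, one column per factor; items from the two
# scales loading on the same factor would mark a shared dimension such
# as roughness or instability
loadings = fa.components_.T
print(np.round(loadings, 2))
```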


Logopedics Phoniatrics Vocology | 2009

Voice as a tool communicating intentions.

Satoshi Imaizumi; Izumi Furuya; Kazuko Yamasaki

The ability to understand speakers’ intentions was examined in typically developing children (TDC), children with autism spectrum disorders (ASD), and children with attention deficit/hyperactivity disorder (ADHD). Four types of spoken phrases, expressing praise, sarcasm, blame, and banter, were presented, and the children were asked to judge whether the speaker was praising them or not, or blaming them or not. The children could correctly judge the speaker’s intention for congruent phrases such as praise and blame. TDC younger than 8 years scored significantly lower than older TDC on the sarcastic and banter phrases, which carry incongruent linguistic and affective valences. Children with ASD aged 10 years scored significantly lower than the age-matched TDC and ADHD groups.


Neuroreport | 2001

Elicitation of N400m in sentence comprehension due to lexical prosody incongruity

Ryoko Hayashi; Satoshi Imaizumi; Koichi Mori; Seiji Niimi; Shogo Ueno; Shigeru Kiritani

The role of lexical prosody in the semantic integration of spoken sentences consisting of a quiz stem and an answer word was investigated by analyzing the event-related magnetic response N400m. Three conditions regarding the relation between the quiz and the answer word were prepared: pitch–accent violation, phonemic violation, and no violation. Both the pitch–accent and phonemic violations elicited significant N400m without any significant differences in the peak latency or magnitude of the equivalent current dipoles, suggesting that the role of pitch–accent in semantic integration is equivalent to that of phonemes. However, the rates of violation detection and of successful N400m source estimation were lower for the pitch–accent violation than for the phonemic violation, suggesting differential neural processes for phonemic and prosodic cues.


Journal of the Acoustical Society of America | 1999

Development of adaptive phonetic gestures in children: Evidence from vowel devoicing in two different dialects of Japanese

Satoshi Imaizumi; Kiyoko Fuwa; Hiroshi Hosoi

High vowels between voiceless consonants are often devoiced in many languages, as well as in many dialects of Japanese. This phenomenon can be hypothesized to be a consequence of the adaptive organization of the laryngeal gestures to various conditions, including dialectal requirements. If this theory is correct, it may be possible to predict developmental changes in vowel devoicing based on the developmental improvement in the dialect-specific organization of the laryngeal gestures. To test this expectation, the developmental properties of vowel devoicing were investigated for 72 children of 4 and 5 years of age, and 37 adults in two dialects of Japanese. One was the Osaka dialect, with a low devoicing rate, and the other the Tokyo dialect, with a high devoicing rate. In the Tokyo dialect, the devoicing rate of children significantly increased and reached an adultlike level by the age of 5 years, whereas it remained low irrespective of age in Osaka. The vowel devoicing of 5-year-old children exhibited the same characteristics as that of the adults of their respective dialect. These results suggest that children growing up with the Tokyo dialect acquire the articulatory gestures which do not inhibit vowel devoicing by the age of 5 years, whereas children growing up with the Osaka dialect acquire those which inhibit the devoicing of vowels by the same age. The results fit in well with the predictions of the gestural account of vowel devoicing. It is also suggested that learning dialect-specific adaptive strategies to coordinate voicing and devoicing gestures as required to attain an adultlike vowel devoicing pattern is a long process: By the age of 5 years children have completed enough of this process to become members of their dialectal community.


International Conference on Acoustics, Speech, and Signal Processing | 1986

Acoustic measurement of pathological voice qualities for medical purposes

Satoshi Imaizumi

In order to develop an assessment system for voice quality to be used in voice clinics, acoustic correlates of pathological voices were investigated. The extracted measures were 1) modulation indices representing periodic variations in the pitch period, the amplitude, and the waveform; 2) a pitch perturbation quotient; 3) an amplitude perturbation quotient; 4) a distortion factor representing richness of harmonics; and 5) the additive noise level. It was found that these acoustic measures can indicate differences in the perceptual quality of pathological voices very well.
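
Two of the listed measures, the pitch and amplitude perturbation quotients, can be sketched as mean deviations from a local average over per-cycle estimates. The Python sketch below assumes a 5-point smoothing window and synthetic cycle data; the paper's exact definitions may differ.

```python
import numpy as np

def perturbation_quotient(x, k=5):
    """Mean absolute deviation of each value from a k-point local
    average, normalized by the overall mean, as a percentage."""
    kernel = np.ones(k) / k
    smoothed = np.convolve(x, kernel, mode="valid")
    centered = x[k // 2 : k // 2 + len(smoothed)]
    return 100 * np.mean(np.abs(centered - smoothed)) / np.mean(x)

# synthetic per-cycle estimates standing in for a sustained vowel:
# pitch periods around 8 ms and cycle peak amplitudes around 1.0
rng = np.random.default_rng(0)
periods = 0.008 + 0.0002 * rng.standard_normal(200)
amps = 1.0 + 0.05 * rng.standard_normal(200)

print(f"pitch perturbation quotient:     {perturbation_quotient(periods):.2f}%")
print(f"amplitude perturbation quotient: {perturbation_quotient(amps):.2f}%")
```

Larger quotients correspond to more irregular vocal-fold vibration, which is what makes such measures useful clinical correlates of perceived hoarseness.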

Collaboration


Dive into Satoshi Imaizumi's collaborations.

Top Co-Authors

Akiko Hayashi

Tokyo Gakugei University

Hiroyuki Muranaka

Tsukuba International University
