Takayuki Ito
Haskins Laboratories
Publications
Featured research published by Takayuki Ito.
Proceedings of the National Academy of Sciences of the United States of America | 2009
Takayuki Ito; Mark Tiede; David J. Ostry
Somatosensory signals from the facial skin and muscles of the vocal tract provide a rich source of sensory input in speech production. We show here that the somatosensory system is also involved in the perception of speech. We use a robotic device to create patterns of facial skin deformation that would normally accompany speech production. We find that when we stretch the facial skin while people listen to words, it alters the sounds they hear. The systematic perceptual variation we observe in conjunction with speech-like patterns of skin stretch indicates that somatosensory inputs affect the neural processing of speech sounds and shows the involvement of the somatosensory system in the perceptual processing of speech.
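To make the kind of analysis such a perturbation study implies concrete, the sketch below fits logistic identification curves to hypothetical word-identification proportions with and without skin stretch and compares the resulting category boundaries. The continuum values, response proportions, and function names are invented for illustration, not the study's materials or code.

```python
# Illustrative sketch only: fit logistic identification curves to hypothetical
# two-alternative word-identification data, with vs. without facial skin
# stretch, and compare category boundaries. All numbers are invented.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, boundary, slope):
    """Probability of one response alternative along the acoustic continuum."""
    return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

continuum = np.linspace(0.0, 1.0, 10)  # 10-step acoustic continuum
p_no_stretch = np.array([0.02, 0.05, 0.10, 0.25, 0.45, 0.65, 0.85, 0.92, 0.97, 0.99])
p_stretch    = np.array([0.03, 0.08, 0.18, 0.38, 0.60, 0.78, 0.90, 0.95, 0.98, 0.99])

(b0, s0), _ = curve_fit(logistic, continuum, p_no_stretch, p0=[0.5, 10.0])
(b1, s1), _ = curve_fit(logistic, continuum, p_stretch, p0=[0.5, 10.0])
print(f"category boundary shift under skin stretch: {b1 - b0:+.3f} continuum units")
```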
Journal of Phonetics | 2002
Hiroaki Gomi; Masaaki Honda; Takayuki Ito; Emi Z. Murano
The cooperative mechanisms in articulatory movements were examined by using mechanical perturbations during bilabial phonemic tasks. The first experiment compares the differences in compensatory responses during sustained productions of the bilabial fricative /ɸ/, for which lip constriction is required, and /a/, for which the lips and jaw are relatively relaxed. In the second experiment, we perturbed jaw movement with different load onsets in the sentence “kono /aɸaɸa/ mitai”. In both experiments, labial distances were recovered partly or fully by downward shifts of the upper lip. The upper lip response frequently preceded the EMG response observed in the sustained task. Additionally, the initial downward displacement of the upper lip was frequently larger when the load was applied during /ɸ/ than when it was applied during /a/, in both the sustained and sentence tasks. The stiffness variation estimated using a muscle linkage model indicates that stiffness increases for the bilabial phonemic task in order to robustly configure a labial constriction. The results suggest that the change in passive stiffness regulated by the muscle activation level is important in generating quick cooperative articulation.
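As a rough illustration of how such compensation can be quantified, the sketch below computes the fraction of a perturbation-induced aperture increase that is recovered by a downward upper-lip shift. The traces, time constants, and amplitudes are synthetic placeholders, not the study's recordings.

```python
# Illustrative sketch: quantify labial compensation after a jaw-load
# perturbation using synthetic position traces (mm). The study's actual
# signals, units, and time courses differ.
import numpy as np

t = np.linspace(0.0, 0.5, 500)                       # 500 samples over 0.5 s
jaw_drop = 4.0 * (1 - np.exp(-t / 0.03))             # load pushes the jaw down
upper_lip = -3.0 * (1 - np.exp(-(t - 0.05).clip(0) / 0.05))  # delayed lip lowering

aperture_change = jaw_drop + upper_lip               # net change in inter-lip distance
compensation = -upper_lip[-1] / jaw_drop[-1]         # fraction of the load recovered
print(f"steady-state compensation by the upper lip: {compensation:.0%}")
```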
Journal of Neurophysiology | 2012
Takayuki Ito; David J. Ostry
Interactions between auditory and somatosensory information are relevant to the neural processing of speech, since speech processing, and certainly speech production, involves both auditory information and inputs that arise from the muscles and tissues of the vocal tract. We previously demonstrated that somatosensory inputs associated with facial skin deformation alter the perceptual processing of speech sounds. We show here that the reverse is also true: speech sounds alter the perception of facial somatosensory inputs. As a somatosensory task, we used a robotic device to create patterns of facial skin deformation that would normally accompany speech production. We found that the perception of the facial skin deformation was altered by speech sounds in a manner that reflects the way in which auditory and somatosensory effects are linked in speech production. The modulation of orofacial somatosensory processing by auditory inputs was specific to speech and likewise to facial skin deformation. Somatosensory judgments were not affected when the skin deformation was delivered to the forearm or palm or when the facial skin deformation accompanied nonspeech sounds. The perceptual modulation that we observed in conjunction with speech sounds shows that speech sounds specifically affect neural processing in the facial somatosensory system and suggests the involvement of the somatosensory system in both the production and perceptual processing of speech.
Journal of Neurophysiology | 2010
Takayuki Ito; David J. Ostry
Motor learning is dependent on kinesthetic information that is obtained both from cutaneous afferents and from muscle receptors. In human arm movement, information from these two kinds of afferents is largely correlated. The facial skin offers a unique situation in which there are plentiful cutaneous afferents and essentially no muscle receptors and, accordingly, experimental manipulations involving the facial skin may be used to assess the possible role of cutaneous afferents in motor learning. We focus here on the information for motor learning provided by the deformation of the facial skin and the motion of the lips in the context of speech. We used a robotic device to slightly stretch the facial skin lateral to the side of the mouth in the period immediately preceding movement. We found that facial skin stretch increased lip protrusion in a progressive manner over the course of a series of training trials. The learning was manifest in a changed pattern of lip movement when measured after learning in the absence of load. The newly acquired motor plan generalized partially to another speech task that involved a lip movement of different amplitude. Control tests indicated that the primary source of the observed adaptation was sensory input from cutaneous afferents. The progressive increase in lip protrusion over the course of training fits with the basic idea that a change in sensory input is attributed to motor performance error. Sensory input, which in the present study precedes the target movement, is credited to the target-related motion, even though the skin stretch is released prior to movement initiation. This supports the idea that the nervous system generates motor commands on the assumption that sensory input and kinematic error are in register.
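A common way to summarize this kind of progressive adaptation is to fit an exponential learning curve to the trial-by-trial measure. The sketch below does so for synthetic lip-protrusion data; the trial counts, amplitudes, and function names are illustrative assumptions rather than the study's analysis.

```python
# Illustrative sketch: fit an exponential learning curve to synthetic
# trial-by-trial lip-protrusion changes from an adaptation experiment.
import numpy as np
from scipy.optimize import curve_fit

def learning_curve(trial, asymptote, gain, rate):
    """Protrusion change approaches `asymptote` as training proceeds."""
    return asymptote - gain * np.exp(-rate * trial)

rng = np.random.default_rng(0)
trials = np.arange(60)
protrusion = learning_curve(trials, 1.2, 1.2, 0.08) + rng.normal(0, 0.1, 60)  # mm

params, _ = curve_fit(learning_curve, trials, protrusion, p0=[1.0, 1.0, 0.1])
print(f"asymptotic increase: {params[0]:.2f} mm, learning rate: {params[2]:.3f}/trial")
```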
Biological Cybernetics | 2004
Takayuki Ito; Hiroaki Gomi; Masaaki Honda
Different kinds of articulators, such as the upper and lower lips, jaw, and tongue, are precisely coordinated in speech production. Based on a perturbation study of the production of a fricative consonant using the upper and lower lips, it has been suggested that increasing the stiffness of the muscle linkage between the upper lip and jaw is beneficial for maintaining the constriction area between the lips (Gomi et al. 2002). This hypothesis is crucial for examining the mechanism of speech motor control, that is, whether mechanical impedance is controlled for speech motor coordination. To test this hypothesis, in the current study we performed a dynamical simulation of lip compensatory movements based on a muscle linkage model and then evaluated the performance of the compensatory movements. The temporal pattern of muscle-linkage stiffness was obtained from the electromyogram (EMG) of the orbicularis oris superior (OOS) muscle by using a temporal transformation (second-order dynamics with time delay) from EMG to stiffness, whose parameters were experimentally determined. The dynamical simulation using stiffness estimated from empirical EMG successfully reproduced the temporal profile of the upper lip compensatory articulations. Moreover, the estimated stiffness variation contributed significantly to reproducing the functional modulation of the compensatory response. This result supports the idea that mechanical impedance contributes substantially to organizing coordination between the lips and jaw. The motor command would be programmed not only to generate movement in each articulator but also to regulate mechanical impedance among articulators for robust coordination in speech motor control.
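The EMG-to-stiffness transformation named above (second-order dynamics with a time delay) can be sketched as a discrete-time filter applied to rectified EMG. The natural frequency, damping, gain, and delay below are placeholder values, not the experimentally determined parameters.

```python
# Illustrative sketch: delayed second-order low-pass dynamics mapping rectified
# EMG to a stiffness estimate, in the spirit of the muscle-linkage model.
# All parameter values are placeholders.
import numpy as np

def emg_to_stiffness(emg, dt=0.001, wn=30.0, zeta=1.0, gain=500.0, delay=0.02):
    """Apply a pure time delay, then second-order dynamics (Euler integration)."""
    shifted = np.concatenate([np.zeros(int(delay / dt)), emg])[: len(emg)]
    k = np.zeros(len(emg))   # stiffness output
    dk = 0.0                 # its first derivative
    for i in range(1, len(emg)):
        ddk = wn**2 * (gain * shifted[i] - k[i - 1]) - 2 * zeta * wn * dk
        dk += ddk * dt
        k[i] = k[i - 1] + dk * dt
    return k

emg_burst = np.clip(np.sin(np.linspace(0, np.pi, 300)), 0, None)  # synthetic burst
stiffness = emg_to_stiffness(emg_burst)
print(f"peak stiffness estimate: {stiffness.max():.1f} N/m (arbitrary scaling)")
```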
Neuroreport | 2007
Takayuki Ito; Hiroaki Gomi
Owing to the lack of muscle spindles and tendon organs in the perioral system, cutaneous receptors may contribute to speech sensorimotor processes. We have investigated this possibility in the context of upper lip reflexes, which we have induced by unexpectedly stretching the facial skin lateral to the oral angle. Skin stretch at this location resulted in long-latency reflex responses that were similar to the cortical reflexes observed previously. This location reliably elicited the reflex response, whereas the skin above the oral angle and the skin on the cheek did not. The data suggest that cutaneous mechanoreceptors are narrowly tuned to deformation of the facial skin and provide kinesthetic information for rapid sensorimotor processing in speech.
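Reflex latencies of this kind are typically read off rectified, smoothed EMG as the first post-perturbation excursion above a baseline threshold. The sketch below illustrates that procedure on synthetic data; the sampling rate, threshold rule, and injected 60 ms response are assumptions for demonstration.

```python
# Illustrative sketch: estimate reflex onset latency as the first time smoothed,
# rectified EMG exceeds baseline mean + 4 SD after perturbation onset.
# All signals are synthetic.
import numpy as np

rng = np.random.default_rng(1)
fs = 2000                                    # sampling rate, Hz
t = np.arange(-0.1, 0.2, 1 / fs)             # skin stretch onset at t = 0
emg = np.abs(rng.normal(0.0, 1.0, t.size))   # rectified background activity
emg[t > 0.06] += 5.0                         # injected long-latency response ~60 ms

smoothed = np.convolve(emg, np.ones(20) / 20, mode="same")  # 10 ms moving average
baseline = smoothed[t < 0]
threshold = baseline.mean() + 4 * baseline.std()
crossing = (t >= 0) & (smoothed > threshold)
print(f"estimated reflex onset: {t[crossing][0] * 1000:.0f} ms after stretch onset")
```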
Frontiers in Psychology | 2014
Takayuki Ito; Vincent L. Gracco; David J. Ostry
Speech perception is known to rely on both auditory and visual information. However, sound-specific somatosensory input has also been shown to influence speech perceptual processing (Ito et al., 2009). In the present study, we further examined the relationship between somatosensory information and speech perceptual processing by testing the hypothesis that the temporal relationship between orofacial movement and sound processing contributes to somatosensory–auditory interaction in speech perception. We examined changes in event-related potentials (ERPs) in response to multisensory synchronous (simultaneous) and asynchronous (90 ms lag and lead) somatosensory and auditory stimulation, compared with unisensory auditory and somatosensory stimulation alone. We used a robotic device to apply facial skin deformations that were similar in timing and duration to those experienced in speech production. Following synchronous multisensory stimulation, the amplitude of the ERP was reliably different from the two unisensory potentials. More importantly, the magnitude of the ERP difference varied as a function of the relative timing of the somatosensory–auditory stimulation. Event-related activity change due to stimulus timing was seen between 160 and 220 ms following somatosensory onset, mostly around the parietal area. The results demonstrate a dynamic modulation of somatosensory–auditory convergence and suggest that the contribution of somatosensory information to speech perceptual processing depends on the specific temporal ordering of sensory inputs experienced in speech production.
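One common way to analyze such data is an additive-model test, in which the multisensory ERP is compared against the sum of the two unisensory ERPs within a post-onset window. The sketch below illustrates that analysis with synthetic waveforms and the 160–220 ms window mentioned above; the waveform shapes and amplitudes are invented, and the paper's actual comparison may differ.

```python
# Illustrative sketch: additive-model test for multisensory interaction.
# Compare the ERP to combined stimulation against the sum of the unisensory
# ERPs in a post-onset window. All waveforms are synthetic.
import numpy as np

fs = 500                                     # Hz
t = np.arange(-0.1, 0.4, 1 / fs)             # seconds, stimulus onset at 0

def erp(peak_time, amplitude):
    """Toy ERP: a Gaussian deflection (30 ms width) at `peak_time`."""
    return amplitude * np.exp(-((t - peak_time) ** 2) / (2 * 0.03**2))

auditory = erp(0.10, 4.0)                    # microvolts
somatosensory = erp(0.12, 3.0)
multisensory = erp(0.10, 4.0) + erp(0.12, 3.0) - erp(0.19, 1.5)  # sub-additive part

window = (t >= 0.16) & (t <= 0.22)           # 160-220 ms, as reported above
interaction = multisensory - (auditory + somatosensory)
print(f"mean interaction effect in window: {interaction[window].mean():.2f} uV")
```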
international workshop on variable structure systems | 1996
Kenzo Nonami; Takayuki Ito; Yasuhiro Kitamura; Kazunori Iwabuchi
This paper proposes a new sliding mode control method using H∞/μ control theory and shows its applications. The concept is based on a frequency-shaped approach. A conventional hyperplane consists of a desired reference model without dynamics; as a result, the sliding mode control system often becomes unstable due to spillover phenomena in the higher frequency region. The proposed design method, by contrast, can completely suppress such spillover because of the frequency-shaped hyperplane. It also provides robustness and robust performance under parameter variations on the hyperplane by minimizing the maximum singular value and the structured singular value from noise to the controlled variables. We applied the new method to a flexible structure, a four-story miniature test rig resembling a high-rise building, and to a positioning control system with a flexible arm. Simulations and experiments verified that the proposed sliding mode control method performs well and is very useful for suppressing spillover in the higher frequency region.
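For readers unfamiliar with the baseline technique, the sketch below simulates a conventional sliding mode controller for a double integrator; the paper's frequency-shaped hyperplane would replace the fixed sliding variable `s` with a dynamically filtered one. The plant, gains, and disturbance are illustrative choices, not the paper's test rigs.

```python
# Illustrative sketch: conventional sliding mode control of a double integrator
# x'' = u + d. Only the baseline is shown; the paper's method frequency-shapes
# the sliding hyperplane to suppress high-frequency spillover.
import numpy as np

dt, steps = 0.001, 5000
x, v = 1.0, 0.0                   # initial position error and velocity
c, k = 5.0, 3.0                   # hyperplane slope and switching gain

for i in range(steps):
    d = 0.5 * np.sin(2 * np.pi * i * dt)         # bounded disturbance, |d| < k
    s = c * x + v                                # sliding variable (hyperplane)
    u = -c * v - k * np.sign(s)                  # equivalent + switching control
    v += (u + d) * dt
    x += v * dt

print(f"final tracking error: {x:.4f}")
```

On the sliding surface s = 0, the error obeys x' = -c x and decays exponentially; the switching term keeps the state on the surface despite the bounded disturbance, which is the robustness property the paper builds on.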
Journal of Speech Language and Hearing Research | 2017
Mark R. van den Bunt; Margriet A. Groen; Takayuki Ito; Ana A. Francisco; Vincent L. Gracco; Kenneth R. Pugh; Ludo Verhoeven
Purpose The purpose of this study was to examine whether developmental dyslexia (DD) is characterized by deficiencies in the speech sensory and motor feedforward and feedback mechanisms involved in the modulation of phonological representations. Method A total of 42 adult native speakers of Dutch (22 adults with DD; 20 typically reading controls) were asked to produce /bep/ while the first formant (F1) of the /e/ was not altered (baseline), increased (ramp), held at maximal perturbation (hold), and not altered again (after-effect). The F1 of the produced utterance was measured for each trial, and the F1s produced during each phase were entered into a linear mixed-effects model. Results Participants with DD adapted more strongly during the ramp phase and returned to baseline to a lesser extent when feedback was back to normal (after-effect phase) compared with the typically reading group. A faster deviation from baseline during the ramp phase, a stronger adaptation response during the hold phase, and a slower return to baseline during the after-effect phase were associated with poorer reading and phonological abilities. Conclusion The data of the current study are consistent with the notion that the phonological deficit in DD is associated with a weaker sensorimotor magnet for phonological representations.
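The four-phase design can be made concrete with a small sketch that builds an F1 perturbation schedule and summarizes produced F1 per phase. The trial counts, the 30% shift, and the fixed 60% opposing response are invented parameters, not the study's values (and the sketch deliberately omits the after-effect persistence the study actually found).

```python
# Illustrative sketch: baseline/ramp/hold/after-effect F1 perturbation schedule
# with a simplified adaptive response. All parameters are invented; no
# persistence of adaptation is modeled, for simplicity.
import numpy as np

n = {"baseline": 20, "ramp": 30, "hold": 40, "after": 20}
perturb = np.concatenate([
    np.zeros(n["baseline"]),
    np.linspace(0, 0.30, n["ramp"]),         # F1 feedback gradually shifted up 30%
    np.full(n["hold"], 0.30),
    np.zeros(n["after"]),
])

rng = np.random.default_rng(2)
adaptation = -0.6 * perturb                  # speaker opposes ~60% of the shift
produced_f1 = 550 * (1 + adaptation) + rng.normal(0, 8, perturb.size)  # Hz

edges = np.cumsum([0] + list(n.values()))
for phase, a, b in zip(n, edges[:-1], edges[1:]):
    print(f"{phase:>8}: mean produced F1 = {produced_f1[a:b].mean():.0f} Hz")
```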
Journal of the Acoustical Society of America | 2015
Atsuo Suemitsu; Jianwu Dang; Takayuki Ito; Mark Tiede
Articulatory information can support learning or remediating pronunciation of a second language (L2). This paper describes an electromagnetic articulometer-based visual-feedback approach that presents an articulatory target in real time to facilitate L2 pronunciation learning. The approach trains learners to adjust their articulatory positions to match targets for an L2 vowel, estimated from productions of vowels that overlap in both L1 and L2. For Japanese learners of the American English vowel /æ/, training that included visual feedback improved pronunciation regardless of whether audio training was also included. Articulatory visual feedback is thus shown to be an effective method for facilitating L2 pronunciation learning.
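The target-estimation step can be illustrated with a toy example: fit a linear map from formant space to EMA sensor positions using vowels shared by L1 and L2, then apply it to the formants of the missing L2 vowel. The vowel data and the use of an affine least-squares map are assumptions made for illustration; the paper's actual estimation procedure may differ.

```python
# Illustrative sketch: estimate an articulatory target for an L2 vowel from
# vowels produced in both L1 and L2. All numbers are invented.
import numpy as np

# Shared vowels: rows = /i/, /e/, /a/, /o/, /u/ for a hypothetical speaker
formants = np.array([[280, 2250], [450, 1950], [750, 1250],
                     [480, 900], [320, 800]], float)        # F1, F2 in Hz
tongue_xy = np.array([[55.0, 12.0], [52.0, 8.0], [48.0, 1.0],
                      [45.0, 4.0], [43.0, 9.0]])            # mm, EMA tongue sensor

X = np.column_stack([formants, np.ones(len(formants))])     # affine design matrix
W, *_ = np.linalg.lstsq(X, tongue_xy, rcond=None)           # least-squares map

ae_formants = np.array([860, 1800, 1.0])                    # American English /ae/
target = ae_formants @ W
print(f"estimated /ae/ tongue target: x = {target[0]:.1f} mm, y = {target[1]:.1f} mm")
```

In the real-time feedback setting described above, a target estimated this way would be displayed alongside the learner's current sensor positions so they can adjust their articulation toward it.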