Toshinori Yoshioka
Queen's University
Publication
Featured research published by Toshinori Yoshioka.
Nature | 2000
Hiroshi Imamizu; Satoru Miyauchi; Tomoe Tamada; Yuka Sasaki; Ryousuke Takino; Benno Pütz; Toshinori Yoshioka; Mitsuo Kawato
Theories of motor control postulate that the brain uses internal models of the body to control movements accurately. Internal models are neural representations of how, for instance, the arm would respond to a neural command, given its current position and velocity. Previous studies have shown that the cerebellar cortex can acquire internal models through motor learning. Because the human cerebellum is involved in higher cognitive function as well as in motor control, we propose a coherent computational theory in which the phylogenetically newer part of the cerebellum similarly acquires internal models of objects in the external world. While human subjects learned to use a new tool (a computer mouse with a novel rotational transformation), cerebellar activity was measured by functional magnetic resonance imaging. As predicted by our theory, two types of activity were observed. One was spread over wide areas of the cerebellum and was precisely proportional to the error signal that guides the acquisition of internal models during learning. The other was confined to the area near the posterior superior fissure and remained even after learning, when the error levels had been equalized, thus probably reflecting an acquired internal model of the new tool.
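As a rough illustration of the kind of "novel rotational transformation" described above, the sketch below rotates each mouse displacement by a fixed angle before it drives the cursor. The angle, function name, and use of NumPy are illustrative assumptions; the paper does not specify this implementation.

```python
import numpy as np

def rotated_cursor_step(mouse_dx, mouse_dy, angle_deg=120.0):
    """Map a mouse displacement to a cursor displacement rotated by a fixed angle.

    The specific angle is an assumption for illustration; the abstract only
    states that the mouse carried a novel rotational transformation.
    """
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return rot @ np.array([mouse_dx, mouse_dy])

# A rightward hand movement produces an obliquely rotated cursor movement,
# which is what subjects must learn to compensate for.
print(rotated_cursor_step(1.0, 0.0))
```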
Proceedings of the National Academy of Sciences of the United States of America | 2003
Hiroshi Imamizu; Tomoe Kuroda; Satoru Miyauchi; Toshinori Yoshioka; Mitsuo Kawato
Human capabilities in dexterously manipulating many different tools suggest modular neural organization at functional levels, but anatomical modularity underlying the capabilities has yet to be demonstrated. Although modularity in phylogenetically older parts of the cerebellum is well known, comparable modularity in the lateral cerebellum for cognitive functions remains unknown. We investigated these issues by functional MRI (fMRI) based on our previous findings of a cerebellar internal model of a tool. After subjects intensively learned to manipulate two novel tools (the rotated mouse whose cursor appeared at a rotated position, and the velocity mouse whose cursor velocity was proportional to the mouse position), they could easily switch between the two. The lateral and posterior cerebellar activities for the two different tools were spatially segregated, and their overlaps were <10%, even at low statistical thresholds. Activities of the rotated mouse were more anterior and lateral than the velocity mouse activities. These results were consistent with predictions by the MOdular Selection And Identification Controller (MOSAIC) model that multiple internal models compete to partition sensory-motor experiences and their outputs are linearly combined for a particular context.
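The MOSAIC prediction mentioned above, that the outputs of multiple internal models are linearly combined for a given context, can be sketched as a responsibility-weighted sum in which each module's weight reflects how well its forward prediction matches the current observation. The Gaussian-softmax form and all parameter names below are simplifying assumptions, not the exact formulation used in the study.

```python
import numpy as np

def mosaic_combine(predictions, observation, controller_outputs, sigma=1.0):
    """Responsibility-weighted combination in the spirit of the MOSAIC model.

    predictions: forward-model predictions of the sensory state (one per module)
    observation: the actual sensory state
    controller_outputs: the paired controllers' motor commands
    Returns the blended motor command.
    """
    errors = (np.asarray(predictions) - observation) ** 2
    responsibilities = np.exp(-errors / (2 * sigma ** 2))
    responsibilities /= responsibilities.sum()
    return responsibilities @ np.asarray(controller_outputs)

# The module whose forward prediction best matches the observation dominates the output.
print(mosaic_combine([0.9, 0.1], observation=1.0, controller_outputs=[5.0, -5.0]))
```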
Progress in Brain Research | 2003
Mitsuo Kawato; Tomoe Kuroda; Hiroshi Imamizu; Eri Nakano; Satoru Miyauchi; Toshinori Yoshioka
Internal models are neural mechanisms that can mimic the input-output or output-input properties of the motor apparatus and external objects. Forward internal models predict sensory consequences from efference copies of motor commands. There is growing acceptance of the idea that forward models are important in sensorimotor integration as well as in higher cognitive function, but their anatomical loci and neural mechanisms are still largely unknown. Some of the most convincing evidence that the central nervous system (CNS) makes use of forward models in sensorimotor control comes from studies on grip force-load force coupling. We first present a brief review of recent computational and behavioral studies that provide decisive evidence for the utilization of forward models in grip force-load force coupling tasks. We then use functional magnetic resonance imaging (fMRI) to measure the brain activity related to this coupling and demonstrate that the cerebellum is the most likely site for forward models to be stored.
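As a minimal sketch of the grip force-load force coupling idea, assume a forward model that predicts the load force an intended vertical movement will generate and a controller that scales grip force to stay safely above the slip limit. The simplified physics and all numerical values below are illustrative assumptions, not parameters from the study.

```python
def grip_force_from_forward_model(mass, planned_acceleration, g=9.81,
                                  friction_coefficient=0.5, safety_margin=1.3):
    """Predictive grip-load coupling using a forward model (sketch).

    The forward model predicts the load force that the intended movement will
    produce (inertia plus gravity for a vertical lift), and grip force is set
    to exceed the minimum two-finger grip needed to prevent slip.
    """
    predicted_load = mass * (planned_acceleration + g)          # predicted load force (N)
    minimum_grip = predicted_load / (2 * friction_coefficient)  # slip limit for a pinch grip
    return safety_margin * minimum_grip

# Grip force rises in anticipation of the load, before any sensory feedback arrives.
print(grip_force_from_forward_model(mass=0.3, planned_acceleration=2.0))
```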
Nature Neuroscience | 2004
Rieko Osu; Satomi Hirai; Toshinori Yoshioka; Mitsuo Kawato
Studies have shown that humans cannot simultaneously learn opposing force fields or opposing visuomotor rotations, even when provided with arbitrary contextual information, probably because of interference in their working memory. In contrast, we found that subjects can adapt to two opposing force fields when provided with contextual cues and can consolidate motor memories if random and frequent switching occurs. Because significant aftereffects were seen, this study suggests that multiple internal models can be acquired simultaneously during learning and predictively switched, depending only on contextual information.
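The opposing force fields referred to above are typically velocity-dependent curl fields whose sign is reversed between contexts. The sketch below shows such a pair switched by a contextual cue; the gain and matrix form are common choices in this literature, used here as assumptions rather than the study's exact parameters.

```python
import numpy as np

def curl_field_force(velocity, cue):
    """Velocity-dependent curl force field, reversed by a contextual cue.

    velocity: hand velocity (vx, vy) in m/s
    cue: +1 or -1, selecting the clockwise or counter-clockwise field
    The gain of 13 N*s/m is a typical experimental value, assumed for illustration.
    """
    b = 13.0
    curl = np.array([[0.0, b], [-b, 0.0]])
    return cue * (curl @ np.asarray(velocity))

# The same movement experiences opposite perturbing forces under the two cues.
print(curl_field_force([0.0, 0.3], cue=+1), curl_field_force([0.0, 0.3], cue=-1))
```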
Journal of Biomechanics | 2000
Etienne Burdet; Rieko Osu; David W. Franklin; Toshinori Yoshioka; Theodore E. Milner; Mitsuo Kawato
Current methods for measuring stiffness during human arm movements are either limited to one-joint motions, or lead to systematic errors. The technique presented here enables a simple, accurate and unbiased measurement of endpoint stiffness during multi-joint movements. Using a computer-controlled mechanical interface, the hand is displaced relative to a prediction of the undisturbed trajectory. Stiffness is then computed as the ratio of restoring force to displacement amplitude. Because of the accuracy of the prediction (< 1 cm error after 200 ms) and the quality of the implementation, the movement is not disrupted by the perturbation. This technique requires only one-third as many trials to identify stiffness as the method of Gomi and Kawato (1997, Biological Cybernetics 76, 163-171) and may, therefore, be used to investigate the evolution of stiffness during motor adaptation.
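The abstract's definition of stiffness as the ratio of restoring force to displacement generalizes, for multi-joint movements, to a 2 x 2 endpoint stiffness matrix. A minimal least-squares sketch of that estimation is given below; it illustrates the idea only and is not the authors' exact procedure.

```python
import numpy as np

def estimate_endpoint_stiffness(displacements, restoring_forces):
    """Least-squares estimate of a 2x2 endpoint stiffness matrix.

    displacements: (n_trials, 2) hand displacements relative to the predicted
                   undisturbed trajectory (m)
    restoring_forces: (n_trials, 2) measured restoring forces (N)
    Solves F = -K x in the least-squares sense across trials.
    """
    x = np.asarray(displacements)
    f = np.asarray(restoring_forces)
    k_transposed, *_ = np.linalg.lstsq(x, -f, rcond=None)
    return k_transposed.T

# Check on synthetic data generated from a known stiffness matrix.
true_k = np.array([[300.0, 50.0], [50.0, 400.0]])
disp = np.random.default_rng(0).normal(scale=0.008, size=(40, 2))
force = -(disp @ true_k.T)
print(estimate_endpoint_stiffness(disp, force))
```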
The Journal of Neuroscience | 2004
Hiroshi Imamizu; Tomoe Kuroda; Toshinori Yoshioka; Mitsuo Kawato
An internal model is a neural mechanism that can mimic the input–output properties of a controlled object such as a tool. Recent research interests have moved on to how multiple internal models are learned and switched under a given context of behavior. Two representative computational models for task switching propose distinct neural mechanisms, thus predicting different brain activity patterns in the switching of internal models. In one model, called the mixture-of-experts architecture, switching is commanded by a single executive called a “gating network,” which is different from the internal models. In the other model, called the MOSAIC (MOdular Selection And Identification for Control), the internal models themselves play crucial roles in switching. Consequently, the mixture-of-experts model predicts that neural activities related to switching and internal models can be temporally and spatially segregated, whereas the MOSAIC model predicts that they are closely intermingled. Here, we directly examined the two predictions by analyzing functional magnetic resonance imaging activities during the switching of one common tool (an ordinary computer mouse) and two novel tools: a rotated mouse, the cursor of which appears in a rotated position, and a velocity mouse, the cursor velocity of which is proportional to the mouse position. The switching and internal model activities temporally and spatially overlapped each other in the cerebellum and in the parietal cortex, whereas the overlap was very small in the frontal cortex. These results suggest that switching mechanisms in the frontal cortex can be explained by the mixture-of-experts architecture, whereas those in the cerebellum and the parietal cortex are explained by the MOSAIC model.
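The two switching schemes contrasted above can be caricatured in a few lines: in a mixture-of-experts architecture a separate gating network maps contextual input directly to module weights, whereas in MOSAIC the weights (responsibilities) are derived from the internal models' own prediction errors. The softmax forms and the toy numbers below are simplifying assumptions, not the studies' implementations.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def gating_weights(context_features, gating_matrix):
    """Mixture-of-experts style switching: a dedicated gating network maps
    contextual input directly to module weights, independently of how well
    the internal models themselves are predicting."""
    return softmax(gating_matrix @ np.asarray(context_features))

def mosaic_weights(predictions, observation, sigma=1.0):
    """MOSAIC-style switching: responsibilities are derived from how well each
    internal model's forward prediction matches the observation."""
    return softmax(-(np.asarray(predictions) - observation) ** 2 / (2 * sigma ** 2))

# Illustrative comparison with arbitrary numbers: the gating route uses context
# alone, the MOSAIC route uses prediction error alone.
print(gating_weights([1.0, 0.0], np.array([[2.0, 0.0], [0.0, 2.0]])))
print(mosaic_weights([0.95, 0.30], observation=1.0))
```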
Neuroreport | 1999
Tomoe Tamada; Satoru Miyauchi; Hiroshi Imamizu; Toshinori Yoshioka; Mitsuo Kawato
The function of the lateral part of the human cerebellum was investigated through cerebro-cerebellar functional connectivity. We propose a laterality index method to reveal a functional and possibly anatomical pathway between the cerebral cortex and the cerebellum. The brain activity involved in learning a visually-guided tracking skill using a novel computer mouse was measured by functional magnetic resonance imaging. The imaging data analyzed using the method suggest that the simple lobule and semilunar lobule of the lateral cerebellum have connections with the pars opercularis and pars triangularis in the inferior frontal gyrus. A possible function of this cerebro-cerebellar communication loop is tool usage, which is in-between the cognitive and motor functions of the human cerebellum.
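The paper proposes a laterality index method for mapping cerebro-cerebellar connectivity. As a reference point only, the sketch below computes the standard laterality index LI = (L - R) / (L + R); the specific index defined in the paper may be computed differently.

```python
def laterality_index(left_activation, right_activation):
    """Generic laterality index, ranging from -1 (fully right-lateralized)
    to +1 (fully left-lateralized).

    This is the textbook form of a laterality index, given only as a reference
    point; it is not necessarily the exact index used in the study.
    """
    return (left_activation - right_activation) / (left_activation + right_activation)

print(laterality_index(120.0, 80.0))  # mildly left-lateralized activation
```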
Scientific Reports | 2015
Gowrishankar Ganesh; Atsushi Takagi; Rieko Osu; Toshinori Yoshioka; Mitsuo Kawato; Etienne Burdet
How do physical interactions with others change our own motor behavior? Utilizing a novel motor learning paradigm in which the hands of two individuals are physically connected without their conscious awareness, we investigated how the interaction forces from a partner adapt the motor behavior in physically interacting humans. We observed the motor adaptations during physical interactions to be mutually beneficial such that both the worse and better of the interacting partners improve motor performance during and after interactive practice. We show that these benefits cannot be explained by multi-sensory integration by an individual, but require physical interaction with a reactive partner. Furthermore, the benefits are determined by both the interacting partners' performance and the similarity of the partner's behavior to one's own. Our results demonstrate the fundamental neural processes underlying human physical interactions and suggest advantages of interactive paradigms for sport-training and physical rehabilitation.
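A minimal sketch of the covert physical connection described above: the two partners' hands are linked by a virtual spring rendered by their robotic interfaces, so each feels a force proportional to the positional difference between the hands. The stiffness value and the purely elastic form are assumptions for illustration; the study's actual coupling may differ in detail.

```python
import numpy as np

def coupling_forces(pos_a, pos_b, stiffness=100.0):
    """Virtual elastic coupling between two partners' hand positions.

    Each partner is pulled toward the other with a force proportional to the
    positional difference (Newton's third law gives equal and opposite forces).
    """
    pos_a, pos_b = np.asarray(pos_a), np.asarray(pos_b)
    force_on_a = stiffness * (pos_b - pos_a)
    return force_on_a, -force_on_a

# The partner who lags behind is pulled forward; the one ahead is pulled back.
print(coupling_forces([0.00, 0.01], [0.02, 0.03]))
```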
Neural Networks | 2008
Jo-Anne Ting; Aaron D'Souza; Kenji Yamamoto; Toshinori Yoshioka; Donna L. Hoffman; Shinji Kakei; Lauren E. Sergio; John F. Kalaska; Mitsuo Kawato; Peter L. Strick; Stefan Schaal
An increasing number of projects in neuroscience require statistical analysis of high-dimensional data, as, for instance, in the prediction of behavior from neural firing or in the operation of artificial devices from brain recordings in brain-machine interfaces. Although prevalent, classical linear analysis techniques are often numerically fragile in high dimensions due to irrelevant, redundant, and noisy information. We developed a robust Bayesian linear regression algorithm that automatically detects relevant features and excludes irrelevant ones, all in a computationally efficient manner. In comparison with standard linear methods, the new Bayesian method regularizes against overfitting, is computationally efficient (unlike previously proposed variational linear regression methods, it is suitable for data sets with large numbers of samples and a very high number of input dimensions) and is easy to use, thus demonstrating its potential as a drop-in replacement for other linear regression techniques. We evaluate our technique on synthetic data sets and on several neurophysiological data sets. For these neurophysiological data sets we address the question of whether EMG data collected from arm movements of monkeys can be faithfully reconstructed from neural activity in motor cortices. Results demonstrate the success of our newly developed method in comparison with other approaches in the literature and, from the neurophysiological point of view, confirm recent findings on the organization of the motor cortex. Finally, an incremental, real-time version of our algorithm demonstrates the suitability of our approach for real-time interfaces between brains and machines.
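The paper develops its own variational Bayesian regression algorithm. As an off-the-shelf stand-in for the same automatic-relevance-determination idea, the sketch below applies scikit-learn's ARDRegression to a synthetic "EMG from firing rates" problem in which only a few input dimensions are actually relevant. This is not the authors' method, only an illustration of the feature-pruning behavior the abstract describes.

```python
import numpy as np
from sklearn.linear_model import ARDRegression

# Synthetic stand-in for the EMG-from-neural-firing problem: many input
# dimensions, only a handful of which carry signal.
rng = np.random.default_rng(0)
n_samples, n_dims, n_relevant = 200, 100, 5
firing_rates = rng.normal(size=(n_samples, n_dims))
true_weights = np.zeros(n_dims)
true_weights[:n_relevant] = rng.normal(size=n_relevant)
emg = firing_rates @ true_weights + 0.1 * rng.normal(size=n_samples)

# ARD places an individual precision prior on each weight, driving the
# irrelevant weights toward zero while keeping the relevant ones.
model = ARDRegression().fit(firing_rates, emg)
print("non-negligible weights:", int(np.sum(np.abs(model.coef_) > 1e-2)))
```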
Nature Human Behaviour | 2017
Atsushi Takagi; Gowrishankar Ganesh; Toshinori Yoshioka; Mitsuo Kawato; Etienne Burdet
From a parent helping to guide their child during their first steps, to a therapist supporting a patient, physical assistance enabled by haptic interaction is a fundamental modus for improving motor abilities. However, what movement information is exchanged between partners during haptic interaction, and how this information is used to coordinate and assist others, remains unclear. Here, we propose a model in which haptic information, provided by touch and proprioception, enables interacting individuals to estimate the partner’s movement goal and use it to improve their own motor performance. We use an empirical physical interaction task to show that our model can explain human behaviours better than existing models of interaction in the literature. Furthermore, we experimentally verify our model by embodying it in a robot partner and checking that it induces the same improvements in motor performance and learning in a human individual as interacting with a human partner. These results promise collaborative robots that provide human-like assistance, and suggest that movement goal exchange is the key to physical assistance.
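A heavily simplified sketch of the goal-estimation idea described above: if the haptic coupling is approximately elastic, the interaction force can be inverted to recover the partner's hand position, which serves as a noisy cue to their movement goal and can be blended with one's own goal estimate. The coupling stiffness, weighting, and function names are all illustrative assumptions rather than the paper's model.

```python
import numpy as np

def infer_partner_cue(own_position, interaction_force, coupling_stiffness=100.0):
    """Invert an assumed elastic coupling: if the felt force is
    F = k * (x_partner - x_self), the partner's hand position follows, and it
    is treated here as a noisy cue to the partner's movement goal."""
    return np.asarray(own_position) + np.asarray(interaction_force) / coupling_stiffness

def blended_goal(own_goal_estimate, partner_cue, partner_weight=0.3):
    """Weighted blend of one's own goal estimate with the cue inferred from the
    partner, a simplified stand-in for the goal-integration idea in the abstract."""
    return ((1 - partner_weight) * np.asarray(own_goal_estimate)
            + partner_weight * np.asarray(partner_cue))

own_pos = np.array([0.10, 0.00])
cue = infer_partner_cue(own_pos, interaction_force=np.array([2.0, -1.0]))
print(blended_goal(np.array([0.12, 0.01]), cue))
```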
Collaboration
Dive into Toshinori Yoshioka's collaborations.
National Institute of Information and Communications Technology