Ken Takiyama
Tokyo University of Agriculture and Technology
Publications
Featured research published by Ken Takiyama.
Scientific Reports | 2016
Ken Takiyama; Yutaka Sakai
Motor learning in unimanual and bimanual planar reaching movements has been intensively investigated. Although distinct theoretical frameworks have been proposed for each of these reaching movements, the relationship between them remains unclear. In particular, the generalization of motor learning effects (transfer of learning effects) between unimanual and bimanual movements has yet to be successfully explained. Here, by extending a motor primitive framework, we analytically proved that the motor primitive framework can reproduce the generalization of learning effects between unimanual and bimanual movements if the mean activity of each primitive for unimanual movements is balanced with its mean activity for bimanual movements. In this balanced condition, the activity of each primitive is consistent with previously reported neuronal activity. The unimanual-bimanual balance leads to the testable prediction that generalization between unimanual and bimanual movements spreads more widely across reaching directions than generalization within either type of movement. Furthermore, the balanced motor primitive framework can reproduce another previously reported phenomenon: the learning of different force fields for unimanual and bimanual movements.
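As an informal illustration of how a primitive-based account yields generalization across reaching directions, the toy sketch below uses directionally tuned Gaussian primitives with an error-driven weight update; the number of primitives, tuning width, and learning rate are hypothetical illustrative values, not the model analyzed in the paper.

```python
# Toy sketch (not the authors' model): directionally tuned motor primitives and
# generalization of learning across reaching directions. The tuning width and
# learning rate are hypothetical illustrative values.
import numpy as np

n_primitives = 60
preferred = np.linspace(-np.pi, np.pi, n_primitives, endpoint=False)
sigma = np.deg2rad(30.0)           # assumed tuning width
eta = 0.1                          # assumed learning rate

def activity(direction):
    """Gaussian tuning of each primitive to the planned reach direction."""
    d = np.angle(np.exp(1j * (direction - preferred)))   # wrapped angular difference
    return np.exp(-d ** 2 / (2 * sigma ** 2))

w = np.zeros(n_primitives)          # primitive weights (learned compensation)
train_dir, perturbation = 0.0, 1.0  # train at 0 deg against a unit perturbation

for _ in range(200):                # trial-by-trial error-driven update
    g = activity(train_dir)
    error = perturbation - w @ g
    w += eta * error * g

# Generalization: the learned output decays with distance from the trained direction.
for test_deg in (0, 22.5, 45, 90):
    print(f"{test_deg:5.1f} deg -> learned output {w @ activity(np.deg2rad(test_deg)):.2f}")
```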
PLOS ONE | 2016
Ken Takiyama; Masahiro Shinya
Most motor learning experiments have been conducted in a laboratory setting. In this type of setting, a large and expensive manipulandum is frequently used, requiring a large budget and a wide open space. Subjects also need to travel to the laboratory, which is a burden for them; this burden is particularly severe for patients with neurological disorders. Here, we describe the development of a novel application based on Unity3D and smart devices, e.g., smartphones or tablet devices, that can be used to conduct motor learning experiments at any time and in any place, without requiring a large budget or a wide open space and without burdening subjects with travel. We refer to our application as POrtable Motor learning LABoratory, or PoMLab. PoMLab is a multiplatform application that is available and sharable for free. We investigated whether PoMLab could be an alternative to the laboratory setting using a visuomotor rotation paradigm, which induces sensory prediction error and enables the investigation of how subjects minimize that error. In the first experiment, subjects adapted to a constant visuomotor rotation that was abruptly applied at a specific trial. The learning curve for the first experiment could be modeled well using a state space model, a mathematical model that describes the motor learning process. In the second experiment, subjects adapted to a visuomotor rotation that increased gradually with each trial. The subjects adapted to the gradually increasing visuomotor rotation without becoming aware of it. These experimental results have previously been reported for conventional experiments conducted in a laboratory setting, and our PoMLab application could reproduce them. PoMLab can thus be considered an alternative to the laboratory setting. We also conducted follow-up experiments in university physical education classes. A state space model that was fit to the data obtained in the laboratory experiments could predict the learning curves obtained in the follow-up experiments. Further, we investigated the influence of the devices' vibration function, weight, and screen size on the learning curves. Finally, we compared the learning curves obtained in the PoMLab experiments with those obtained in conventional reaching experiments. The results of the in-class experiments show that PoMLab can be used to conduct motor learning experiments at any time and place.
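The state space model mentioned above is, in its standard single-state form, a two-parameter trial-by-trial update. The sketch below simulates it under an abrupt and a gradual visuomotor rotation; the retention factor and learning rate are hypothetical values, not parameters fitted to the PoMLab data.

```python
# Minimal sketch of a single-state state-space model of trial-by-trial motor
# learning: x[n+1] = A*x[n] + B*e[n], with sensory prediction error e[n] = r[n] - x[n].
# The values of A and B are hypothetical, not the fitted PoMLab parameters.
import numpy as np

def simulate(rotation, A=0.99, B=0.2):
    x = np.zeros(len(rotation) + 1)        # internal estimate of the rotation
    for n, r in enumerate(rotation):
        e = r - x[n]                       # sensory prediction error
        x[n + 1] = A * x[n] + B * e        # retention plus error-driven update
    return x[1:]

trials = 100
abrupt = np.full(trials, 30.0)             # 30 deg rotation applied at once
gradual = np.linspace(0.0, 30.0, trials)   # rotation ramped up over trials

print("final adaptation, abrupt :", simulate(abrupt)[-1].round(1), "deg")
print("final adaptation, gradual:", simulate(gradual)[-1].round(1), "deg")
```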
Scientific Reports | 2015
Ken Takiyama
Sensorimotor transformation is indispensable to the accurate motion of the human body in daily life. For instance, when we grasp an object, the distance from our hand to the object needs to be calculated by integrating multisensory inputs, and our motor system needs to appropriately activate the arm and hand muscles to minimize that distance. Sensorimotor transformation is implemented in our neural systems, and recent advances in measurement techniques have revealed an important property of these systems: a small percentage of neurons exhibits extensive activity while a large percentage shows little activity, i.e., sparse coding. However, we do not yet know the functional role of sparse coding in sensorimotor transformation. In this paper, I show that sparse coding enables complete and robust learning in sensorimotor transformation. In general, if a neural network is trained to maximize performance on training data, the network shows poor performance on test data. Nevertheless, sparse coding renders the performance of the network on training data compatible with its performance on test data. Furthermore, sparse coding can reproduce reported neural activities. Thus, I conclude that sparse coding is a necessary and biologically plausible factor in sensorimotor transformation.
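As a rough illustration of the comparison described above, the toy sketch below trains a linear readout on a random hidden layer whose activity is sparsified to different degrees and reports training versus test error. The encoding weights, target mapping, and sparsity levels are hypothetical and do not reproduce the paper's network.

```python
# Illustrative sketch (not the paper's model): a linear readout is trained on a
# random hidden layer whose activity is sparsified by keeping only the k most
# active units per input; training and test errors are then compared.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_train, n_test = 5, 300, 50, 500

W = rng.normal(size=(n_hidden, n_in))          # fixed random encoding weights

def hidden(X, k):
    """Rectified hidden activity, keeping only the k most active units per sample."""
    H = np.maximum(W @ X.T, 0.0)               # (n_hidden, n_samples)
    thresh = np.sort(H, axis=0)[-k]            # per-sample k-th largest value
    return np.where(H >= thresh, H, 0.0).T     # sparse code, (n_samples, n_hidden)

def target(X):                                 # assumed sensorimotor mapping
    return np.sin(X @ np.ones(n_in)) + 0.5 * X[:, 0] * X[:, 1]

Xtr = rng.uniform(-1, 1, (n_train, n_in))
Xte = rng.uniform(-1, 1, (n_test, n_in))
ytr, yte = target(Xtr), target(Xte)

for k in (300, 100, 30, 10):                   # dense -> increasingly sparse codes
    Htr, Hte = hidden(Xtr, k), hidden(Xte, k)
    w, *_ = np.linalg.lstsq(Htr, ytr, rcond=None)
    print(f"k={k:3d}  train MSE {np.mean((Htr @ w - ytr) ** 2):.3f}"
          f"  test MSE {np.mean((Hte @ w - yte) ** 2):.3f}")
```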
Neural Networks | 2017
Ken Takiyama
Despite a near-infinite number of possible movement trajectories, our body movements exhibit certain invariant features across individuals; for example, when grasping a cup, individuals choose an approximately linear path from the hand to the cup. Based on these experimental findings, many researchers have proposed optimization frameworks to determine desired movement trajectories. Successful conventional frameworks include the geodesic path, which considers the geometry of our complicated body dynamics, and stochastic frameworks, which consider movement variability. The former succeeds in explaining the kinematics of human reaching movements, and the latter succeed in explaining the variability of those movements. However, the conventional geodesic path framework does not consider variability, and the conventional stochastic frameworks do not consider the geometrical properties of our bodies. Thus, how to reconcile these two successful frameworks remains unclear. Here, I show that the conventional geodesic path can be interpreted as a Bayesian framework in which no uncertainty is considered. Hence, by introducing uncertainty into the framework, I propose a Bayesian geodesic path framework that can simultaneously consider the geometric properties of our bodies and movement variability. I demonstrate that the Bayesian geodesic path generates a mean movement trajectory that corresponds to the conventional geodesic path together with variability around that trajectory, thus explaining the characteristic variability in human reaching movements.
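A minimal way to picture a mean path plus principled variability is a Gaussian model over a discretized trajectory: a smoothness prior combined with start- and end-point observations yields a posterior whose mean is a deterministic smooth path and whose samples vary around it. The sketch below uses a simple acceleration penalty and hypothetical precision values as stand-ins for the geodesic machinery in the paper.

```python
# Toy sketch of the idea: a Gaussian (Bayesian) model over a 1D trajectory whose
# posterior mean is a deterministic smooth path and whose samples show movement
# variability. The smoothness and observation precisions are hypothetical.
import numpy as np

T = 50                                            # number of time steps
D2 = np.diff(np.eye(T), n=2, axis=0)              # second-difference (acceleration) operator
lam, obs_prec = 1e2, 1e4                          # assumed smoothness / endpoint precisions

P = lam * D2.T @ D2                               # prior precision penalizing acceleration
H = np.zeros((2, T)); H[0, 0] = H[1, -1] = 1.0    # observe the first and last samples
y = np.array([0.0, 1.0])                          # start at 0, end at 1

post_cov = np.linalg.inv(P + obs_prec * H.T @ H)  # Gaussian posterior over the trajectory
mean = post_cov @ (obs_prec * H.T @ y)            # posterior mean trajectory

rng = np.random.default_rng(1)
samples = rng.multivariate_normal(mean, post_cov, size=5)   # variable trajectories
print("endpoint error of the mean path :", abs(mean[-1] - 1.0))
print("across-sample s.d. at midpoint  :", samples[:, T // 2].std())
```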
Neural Networks | 2017
Ken Takiyama; Yutaka Sakai
Certain theoretical frameworks have successfully explained motor learning in either unimanual or bimanual movements. However, no single theoretical framework can comprehensively explain motor learning in both types of movement because the relationship between these two types of movement remains unclear. Although our recently proposed balanced motor primitive framework attempted to explain motor learning in unimanual and bimanual movements simultaneously, it focused only on a limited subset of bimanual movements and therefore did not elucidate the relationships between unimanual movements and various bimanual movements. Here, we extend the balanced motor primitive framework to simultaneously explain motor learning in unimanual and various bimanual movements as well as the transfer of learning effects between unimanual and various bimanual movements; these phenomena can be simultaneously explained if the mean activity of each primitive for various unimanual movements is balanced with the corresponding mean activity for various bimanual movements. Using this balanced condition, we can reproduce the results of prior behavioral and neurophysiological experiments. Furthermore, we demonstrate that the balanced condition can be implemented in a simple neural network model.
bioRxiv | 2018
Daisuke Furuki; Ken Takiyama
Motor variability is inevitable in our body movements and has been discussed from various perspectives in motor neuroscience and biomechanics; it can originate from the variability of neural activities, reflect the large number of degrees of freedom inherent in our body movements, decrease muscle fatigue, and facilitate motor learning. How to evaluate this motor variability is thus a fundamental question in motor neuroscience and biomechanics. Previous methods quantified (at least) two striking features of motor variability: a smaller variability in the task-relevant dimension than in the task-irrelevant dimension and a low-dimensional structure that has often been referred to as a synergy or principal component. However, these previous methods were not only unable to quantify these features simultaneously but were also applicable only under limited conditions (e.g., one method cannot consider the motion sequence and another cannot consider how each motion is relevant to performance). Here, we propose a flexible and straightforward machine learning technique that can quantify task-relevant variability, task-irrelevant variability, and the relevance of each principal component to task performance while considering the motion sequence and the relevance of each motion sequence to task performance in a data-driven manner. We validate our method by constructing a novel experimental setting to investigate goal-directed, whole-body movements. Further, our setting enables us to induce motor adaptation using perturbations and to evaluate how task-relevant and task-irrelevant variability are modulated through motor adaptation. Our method enables us to identify a novel property of motor variability: the modulation of these variabilities differs depending on the perturbation schedule. A constant perturbation increases task-relevant variability, whereas a gradually imposed perturbation increases task-irrelevant variability.
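One simple way to realize such a decomposition, sketched here under strong simplifying assumptions (simulated linear data, no motion sequence), is to regress performance on motion features and split each trial's deviation into the component along the fitted direction (task-relevant) and the orthogonal remainder (task-irrelevant). This is an illustrative reduction, not the authors' full method.

```python
# Toy sketch of a regression-based decomposition of motor variability: fit a
# linear map from motion features to performance, then separate trial-to-trial
# deviations into task-relevant and task-irrelevant parts. The simulated data
# and feature dimensions are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_features = 200, 6                      # e.g., joint angles at key frames

X = rng.normal(size=(n_trials, n_features))        # motion features per trial
true_w = np.array([1.0, -0.5, 0.0, 0.0, 0.2, 0.0]) # assumed ground-truth relevance
y = X @ true_w + 0.1 * rng.normal(size=n_trials)   # task performance

Xc, yc = X - X.mean(axis=0), y - y.mean()          # center the data
w_hat, *_ = np.linalg.lstsq(Xc, yc, rcond=None)    # estimated feature-to-performance map

u = w_hat / np.linalg.norm(w_hat)                  # performance-relevant direction
relevant = Xc @ u                                  # signed task-relevant deviations
irrelevant = Xc - np.outer(relevant, u)            # orthogonal, task-irrelevant deviations

print("task-relevant variance            :", relevant.var())
print("task-irrelevant variance (summed) :", np.mean(np.sum(irrelevant ** 2, axis=1)))
```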
bioRxiv | 2018
Keiji Ota; Ken Takiyama
Although optimal decision-making is essential for sports performance and fine motor control, it has been repeatedly confirmed that humans show a strong risk-seeking bias, selecting a risky strategy over the optimal solution. Despite such evidence, the ideal method to promote optimal decision-making remains unclear. Here, we propose that interactions with other people can influence motor decision-making and reduce risk-seeking bias. We developed a competitive reaching game (a variant of the “chicken game”) in which aiming for greater rewards increased the risk of receiving no reward and subjects competed for the total reward with an opponent. The game resembles situations in sports, such as a penalty kick in soccer, a serve in tennis, the strike zone in baseball, or take-off in ski jumping. In five different experiments, we demonstrated that, at the beginning of the competitive game, the subjects robustly switched from a risk-seeking strategy to a risk-averse strategy. Following this reversal of strategy, the subjects achieved optimal decision-making when competing with risk-averse opponents. This optimality was achieved through a non-linear influence of the opponent’s decisions on the subject’s decisions. These results suggest that interactions with others can alter human motor decision strategies and that competition with a risk-averse opponent is key to optimizing motor decision-making.
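The risk-reward structure of such an aiming game can be caricatured as follows: reward grows toward a boundary beyond which nothing is earned, and execution noise makes aiming at the boundary risky, so the expected-gain-maximizing aim lies inside it. The reward rule and noise level below are hypothetical, not the experiment's parameters.

```python
# Toy sketch of the risk-reward trade-off in an aiming task (hypothetical reward
# rule and motor-noise level): reward grows linearly up to a boundary and is zero
# beyond it, while Gaussian execution noise makes aiming at the boundary risky.
import numpy as np

sigma = 1.0                        # assumed motor (execution) noise
boundary = 10.0                    # aiming beyond this point yields no reward

def expected_reward(aim, n=4001):
    """Average reward over Gaussian execution noise around the aim point."""
    z = np.linspace(aim - 4 * sigma, aim + 4 * sigma, n)
    pdf = np.exp(-(z - aim) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
    reward = np.where(z <= boundary, np.maximum(z, 0.0), 0.0)
    return np.sum(reward * pdf) * (z[1] - z[0])

aims = np.linspace(6.0, 12.0, 121)
gains = np.array([expected_reward(a) for a in aims])
best = aims[gains.argmax()]
print(f"expected-gain-maximizing aim ~ {best:.2f} ({boundary - best:.2f} units inside the boundary)")
print(f"expected reward when aiming exactly at the boundary: {expected_reward(boundary):.2f}")
```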
Scientific Reports | 2018
Kotaro Ishii; Takuji Hayashi; Ken Takiyama
Humans and animals can flexibly switch rules to generate appropriate responses to the same sensory stimulus; e.g., we kick a soccer ball toward a friend on our team, but we kick the ball away from a friend who has been traded to an opposing team. Most motor learning experiments have relied on a fixed rule; therefore, the effects of switching rules on motor learning are unclear. Here, we study how available motor learning effects remain when the rule in the probe phase differs from the rule in the training phase. Our results suggest that switching rules makes learning effects partially, rather than fully, available. To understand the neural mechanisms underlying these results, we verify that a computational model can explain our experimental results when each neural unit shows different activity under different rules while the total population activity for the same planned movement remains the same. Thus, we conclude that switching rules modulates individual neural activities under the same population activity, resulting in a partial transfer of learning effects for the same planned movements. Our results indicate that sports training and rehabilitation should include various situations, even when the same motions are required.
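A schematic version of this model assumption, with hypothetical numbers, is sketched below: two rules evoke different per-unit activities with identical summed population activity, a readout is trained under one rule, and the learned output transfers only partially to the other rule.

```python
# Schematic sketch (hypothetical parameters, not the paper's model): unit
# activities differ between two rules while the summed population activity for
# the same planned movement is identical; a readout trained under rule A
# transfers only partially to rule B.
import numpy as np

rng = np.random.default_rng(3)
n_units = 100

g_A = rng.dirichlet(np.ones(n_units))      # unit activities under rule A (sum to 1)
g_B = rng.dirichlet(np.ones(n_units))      # different activities, same total (1)

w = np.zeros(n_units)
target, eta = 1.0, 0.2
for _ in range(100):                       # normalized error-driven training under rule A
    w += eta * (target - w @ g_A) * g_A / (g_A @ g_A)

print("output under rule A (trained) :", round(float(w @ g_A), 2))
print("output under rule B (transfer):", round(float(w @ g_B), 2))
print("population activity, A vs B   :", g_A.sum().round(2), g_B.sum().round(2))
```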
Scientific Reports | 2017
Daisuke Furuki; Ken Takiyama
Goal-directed whole-body movements are fundamental in our daily life, sports, music, art, and other activities. Goal-directed movements have been intensively investigated by focusing on simplified movements (e.g., arm-reaching movements or eye movements); however, the nature of goal-directed whole-body movements has not been sufficiently investigated because of the high-dimensional nonlinear dynamics and redundancy inherent in whole-body motion. One open question is how to overcome high-dimensional nonlinear dynamics and redundancy to achieve the desired performance. It is possible to approach the question by quantifying how the motions of each body part at each time point contribute to movement performance. Nevertheless, it is difficult to identify an explicit relation between each motion element (the motion of each body part at each time point) and performance as a result of the high-dimensional nonlinear dynamics and redundancy inherent in whole-body motion. The current study proposes a data-driven approach to quantify the relevance of each motion element to the performance. The current findings indicate that linear regression may be used to quantify this relevance without considering the high-dimensional nonlinear dynamics of whole-body motion.
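A toy version of the regression idea, using simulated data and a hypothetical ground truth, is sketched below: flatten the motion of each body part at each time point into a feature vector, regress trial-by-trial performance on it, and read the fitted coefficients as the relevance of each motion element.

```python
# Minimal sketch of the data-driven idea on simulated data (hypothetical setup):
# regress performance on motion elements (each body part at each time point) and
# interpret the fitted coefficients as each element's relevance.
import numpy as np

rng = np.random.default_rng(4)
n_trials, n_parts, n_times = 150, 4, 10           # e.g., 4 joints x 10 time points

motion = rng.normal(size=(n_trials, n_parts, n_times))
relevance_true = np.zeros((n_parts, n_times))     # assumed ground truth:
relevance_true[1, 7:] = 1.0                       # only joint 1, late in the movement, matters
perf = np.einsum('ijk,jk->i', motion, relevance_true) + 0.1 * rng.normal(size=n_trials)

X = motion.reshape(n_trials, -1)                  # flatten the motion elements
X, y = X - X.mean(axis=0), perf - perf.mean()
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
relevance_hat = coef.reshape(n_parts, n_times)

print("largest estimated relevance at (joint, time):",
      np.unravel_index(np.abs(relevance_hat).argmax(), relevance_hat.shape))
```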
Physical Review E | 2016
Ken Takiyama
Brain functions such as perception, motor control and learning, and decision making have been explained within a Bayesian framework: to decrease the effects of noise inherent in the human nervous system or the external environment, our brain integrates sensory and a priori information in a Bayesian optimal manner. However, it remains unclear how Bayesian computations are implemented in the brain. Here, I address this issue by analyzing a Mexican-hat-type neural network, which has been used as a model of the visual cortex, motor cortex, and prefrontal cortex. I analytically demonstrate that the dynamics of an order parameter in the model corresponds exactly to variational inference in a linear Gaussian state-space model, a form of Bayesian estimation, when the strength of recurrent synaptic connectivity is appropriately stronger than that of the external stimulus, a plausible condition in the brain. This exact correspondence reveals the relationship between the parameters of the Bayesian estimation and those of the neural network, providing insight for understanding brain functions.
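The Bayesian side of this correspondence can be made concrete with a scalar linear Gaussian state-space model and its Kalman-filter recursion. The sketch below uses hypothetical noise levels and illustrates the estimation problem, not the Mexican-hat network analysis itself.

```python
# Sketch of the Bayesian estimation side of the correspondence: a scalar linear
# Gaussian state-space model and its Kalman-filter updates (hypothetical noise
# levels), i.e., the computation the order-parameter dynamics is shown to implement.
import numpy as np

rng = np.random.default_rng(5)
T, a = 100, 0.95                 # number of steps / state-transition coefficient
q, r = 0.05, 0.5                 # assumed process and observation noise variances

x, y = np.zeros(T), np.zeros(T)
for t in range(1, T):            # simulate the latent state and noisy observations
    x[t] = a * x[t - 1] + np.sqrt(q) * rng.normal()
    y[t] = x[t] + np.sqrt(r) * rng.normal()

m, v = 0.0, 1.0                  # posterior mean and variance of the state
est = np.zeros(T)
for t in range(1, T):
    m_pred, v_pred = a * m, a * a * v + q        # predict
    k = v_pred / (v_pred + r)                    # Kalman gain
    m = m_pred + k * (y[t] - m_pred)             # update with the observation
    v = (1 - k) * v_pred
    est[t] = m

print("mean squared error, raw observations:", np.mean((y - x) ** 2).round(3))
print("mean squared error, Kalman estimate :", np.mean((est - x) ** 2).round(3))
```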