Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Yasuharu Koike is active.

Publication


Featured research published by Yasuharu Koike.


Biological Cybernetics | 1995

Estimation of dynamic joint torques and trajectory formation from surface electromyography signals using a neural network model

Yasuharu Koike; Mitsuo Kawato

In this study, human arm movement was reconstructed from electromyography (EMG) signals using a forward dynamics model acquired by an artificial neural network within a modular architecture. Dynamic joint torques at the elbow and shoulder were estimated for movements in the horizontal plane from the surface EMG signals of 10 flexor and extensor muscles. Using only the initial conditions of the arm and the EMG time course as input, the network reliably reconstructed a variety of movement trajectories. The results demonstrate that posture maintenance and multijoint movements, entailing complex via-point specification and co-contraction of muscles, can be accurately computed from multiple surface EMG signals. In addition to the model's empirical uses, such as calculation of arm stiffness during motion, it allows evaluation of hypothesized computational mechanisms of the central nervous system, such as virtual trajectory control and optimal trajectory planning.
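
As a rough illustration of this kind of EMG-to-torque mapping, the sketch below trains a small feedforward network on synthetic data in Python/NumPy. It is not the authors' modular forward dynamics model; the preprocessing, architecture, and data are placeholder assumptions.

# Minimal sketch: map 10-channel surface EMG features to two joint torques with
# a small feedforward network. Illustrative only; the original study used a
# modular neural-network forward dynamics model. Data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_emg, n_joints, n_hidden = 2000, 10, 2, 16

X = rng.random((n_samples, n_emg))              # stand-in for processed EMG
W_true = rng.normal(size=(n_emg, n_joints))
y = np.tanh(X @ W_true)                         # stand-in for joint torques

# Two-layer network, tanh hidden units, trained with plain gradient descent.
W1 = rng.normal(scale=0.1, size=(n_emg, n_hidden))
W2 = rng.normal(scale=0.1, size=(n_hidden, n_joints))
lr = 0.05
for epoch in range(500):
    H = np.tanh(X @ W1)                         # hidden activations
    pred = H @ W2                               # predicted joint torques
    err = pred - y
    grad_W2 = H.T @ err / n_samples
    grad_H = err @ W2.T * (1 - H ** 2)
    grad_W1 = X.T @ grad_H / n_samples
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2

print("final MSE:", np.mean((np.tanh(X @ W1) @ W2 - y) ** 2))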


Journal of Neurophysiology | 2009

A myokinetic arm model for estimating joint torque and stiffness from EMG signals during maintained posture.

Duk Shin; Jaehyo Kim; Yasuharu Koike

The perturbation method has been used to measure the stiffness of the human arm with a manipulator. Its results, however, are averages of stiffness over short perturbation intervals (<0.4 s) and also vary with muscle activation. We therefore propose a novel method for estimating static arm stiffness from muscle activation without the use of perturbation. We developed a mathematical muscle model based on anatomical and physiological data to estimate joint torque solely from EMG. This model expresses muscle tension as a quadratic function of muscle activation with parameters representing muscle properties. The parameters are acquired from the relation between EMG and measured torque. Using this model, we were able to reconstruct joint torque from EMG signals with or without co-contraction. Joint stiffness is obtained directly by analytic differentiation of this model. We confirmed that the proposed method can estimate joint torque, joint stiffness, and stiffness ellipses simultaneously for various postures with the same parameters, and that it produces results consistent with the conventional perturbation method.
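
The sketch below illustrates the general idea of a moment-arm muscle model with activation-dependent tension and analytically differentiated stiffness. The functional form, coefficients, and muscle set are placeholders for illustration only, not the published myokinetic model.

# Minimal sketch of a moment-arm muscle model: joint torque as a weighted sum
# of muscle tensions that depend on activation (quadratically) and on joint
# angle, with joint stiffness obtained by analytic differentiation.
# Coefficients and functional form are placeholders.
import numpy as np

moment_arm = np.array([0.03, -0.03, 0.025, -0.025])   # m; sign gives flexor/extensor
a = np.array([400.0, 400.0, 300.0, 300.0])            # quadratic activation gain
k_len = np.array([2000.0, 2000.0, 1500.0, 1500.0])    # tension change per unit length

def tension(u, theta):
    # muscle length change approximated as -moment_arm * theta
    return a * u**2 + k_len * u * (-moment_arm * theta)

def joint_torque(u, theta):
    return np.sum(moment_arm * tension(u, theta))

def joint_stiffness(u, theta):
    # K = -d(torque)/d(theta); only the length-dependent term survives
    return np.sum(moment_arm**2 * k_len * u)

u = np.array([0.3, 0.3, 0.2, 0.2])   # co-contraction pattern (EMG-derived activations)
print(joint_torque(u, theta=0.1), joint_stiffness(u, theta=0.1))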


IEEE Transactions on Biomedical Engineering | 2010

Application of Covariate Shift Adaptation Techniques in Brain–Computer Interfaces

Yan Li; Hiroyuki Kambara; Yasuharu Koike; Masashi Sugiyama

A phenomenon often found in session-to-session transfers of brain-computer interfaces (BCIs) is nonstationarity. It can be caused by the user's fatigue and changing attention level, differing electrode placements, and varying impedances, among other factors. Covariate shift adaptation is an effective method that can adapt to the testing sessions without the need for labeling the testing session data. The method was applied to a BCI Competition III dataset. Results showed that covariate shift adaptation compares favorably with the methods used in the BCI competition in coping with nonstationarities. Specifically, bagging combined with covariate shift adaptation helped to increase stability when applied to the competition dataset. An online experiment also demonstrated the effectiveness of the bagged covariate shift method. In summary, covariate shift adaptation is helpful for realizing adaptive BCI systems.
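
A minimal sketch of importance-weighted covariate shift adaptation on synthetic data follows. The density-ratio estimator (a domain classifier) and the task classifier are illustrative assumptions, not the estimators, features, or bagging scheme used in the paper.

# Minimal sketch of covariate shift adaptation by importance weighting:
# estimate w(x) = p_test(x) / p_train(x) with a domain classifier, then train
# the task classifier on importance-weighted training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic stand-ins for session-1 (train) and session-2 (test) features.
X_tr = rng.normal(loc=0.0, size=(300, 2))
y_tr = (X_tr[:, 0] + 0.3 * rng.normal(size=300) > 0).astype(int)
X_te = rng.normal(loc=0.5, size=(300, 2))        # shifted input distribution

# 1) Density-ratio estimate via a classifier separating train from test inputs.
X_dom = np.vstack([X_tr, X_te])
d_dom = np.r_[np.zeros(len(X_tr)), np.ones(len(X_te))]
dom_clf = LogisticRegression().fit(X_dom, d_dom)
p_test = dom_clf.predict_proba(X_tr)[:, 1]
weights = p_test / (1.0 - p_test + 1e-12)        # ~ p_test(x) / p_train(x)

# 2) Importance-weighted training of the task classifier.
task_clf = LogisticRegression().fit(X_tr, y_tr, sample_weight=weights)
print(task_clf.score(X_te, (X_te[:, 0] > 0).astype(int)))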


Neural Networks | 2009

2009 Special Issue: Single-trial classification of vowel speech imagery using common spatial patterns

Charles S. DaSalla; Hiroyuki Kambara; Makoto Sato; Yasuharu Koike

With the goal of providing a speech prosthesis for individuals with severe communication impairments, we propose a control scheme for brain-computer interfaces using vowel speech imagery. Electroencephalography was recorded in three healthy subjects for three tasks, imaginary speech of the English vowels /a/ and /u/, and a no action state as control. Trial averages revealed readiness potentials at 200 ms after stimulus and speech related potentials peaking after 350 ms. Spatial filters optimized for task discrimination were designed using the common spatial patterns method, and the resultant feature vectors were classified using a nonlinear support vector machine. Overall classification accuracies ranged from 68% to 78%. Results indicate significant potential for the use of vowel speech imagery as a speech prosthesis controller.
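
The sketch below shows the generic common spatial patterns plus SVM pipeline on synthetic two-class epochs. Channel counts, filtering, and classifier settings are placeholder assumptions rather than the study's configuration.

# Minimal sketch of common spatial patterns (CSP) + SVM on synthetic two-class
# "EEG" epochs. Illustrative only.
import numpy as np
from scipy.linalg import eigh
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_ch, n_t = 40, 8, 200
epochs_a = rng.normal(size=(n_trials, n_ch, n_t))
epochs_b = rng.normal(size=(n_trials, n_ch, n_t))
epochs_b[:, 0] *= 3.0                      # class B has extra variance on channel 0

def mean_cov(epochs):
    covs = [e @ e.T / np.trace(e @ e.T) for e in epochs]
    return np.mean(covs, axis=0)

Ca, Cb = mean_cov(epochs_a), mean_cov(epochs_b)
# Generalized eigenvectors of (Ca, Ca + Cb) give the CSP spatial filters.
vals, vecs = eigh(Ca, Ca + Cb)
filters = np.hstack([vecs[:, :2], vecs[:, -2:]]).T     # most discriminative pairs

def features(epochs):
    var = np.array([np.var(filters @ e, axis=1) for e in epochs])
    return np.log(var / var.sum(axis=1, keepdims=True))

X = np.vstack([features(epochs_a), features(epochs_b)])
y = np.r_[np.zeros(n_trials), np.ones(n_trials)]
clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))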


IEEE Virtual Reality Conference | 2002

Tension based 7-DOF force feedback device: SPIDAR-G

Seahak Kim; Shoichi Hasegawa; Yasuharu Koike; Makoto Sato

We demonstrate a new intuitive force feedback device for advanced VR applications. Force feedback for the device is tension based and is characterized by 7 degrees of freedom (DOF): 3 DOF for translation, 3 DOF for rotation, and 1 DOF for grasp. The SPIDAR-G (Space Interface Device for Artificial Reality with Grip) allows users to interact with virtual objects naturally by manipulating two hemispherical grips located in the center of the device frame. We show how to connect the strings between each vertex of the grip and each extremity of the frame in order to achieve force feedback. In addition, methodologies are discussed for calculating translation, orientation, and grasp from the lengths of the 8 strings connected to the motors and encoders on the frame. The SPIDAR-G exhibits smooth force feedback, minimized inertia, no backlash, scalability, and safety. These features are attributed to the strategic string arrangement and control, which results in stable haptic rendering. The design and control of the SPIDAR-G are described in detail, and a spatial graphical user interface system based on the proposed device is demonstrated. Experimental results validate the feasibility of the proposed device and reveal its applicability to virtual reality.
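
As a rough geometric illustration of tension-based force display, the sketch below sums per-string tensions along the string directions to obtain the net force and torque at the grip. The frame and grip geometry and the tension values are placeholders, not the actual SPIDAR-G layout or its control law.

# Minimal sketch: with grip attachment points and frame corners known, the
# resultant force and torque at the grip are sums of string tensions along
# each string's direction. Geometry and tensions here are placeholders.
import numpy as np

# Frame corners (8 motor/pulley positions, m) and grip attachment points (m).
frame = np.array([[sx, sy, sz] for sx in (-0.3, 0.3)
                               for sy in (-0.3, 0.3)
                               for sz in (-0.3, 0.3)])
grip_center = np.array([0.0, 0.0, 0.0])
attach = 0.02 * frame / np.linalg.norm(frame, axis=1, keepdims=True)  # on the grip
tension = np.full(8, 1.5)                       # N, commanded per-string tensions

dirs = frame - (grip_center + attach)
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # unit vectors along strings

force = (tension[:, None] * dirs).sum(axis=0)          # net force on the grip
torque = np.cross(attach, tension[:, None] * dirs).sum(axis=0)   # net torque
print("force [N]:", force, "torque [N*m]:", torque)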


Neuroscience Research | 2009

Phased processing of facial emotion: an ERP study.

Nugraha P. Utama; Atsushi Takemoto; Yasuharu Koike; Katsuki Nakamura

We examined the temporal characteristics of facial-emotion processing. The stimuli were morphed images spanning seven facial emotions (neutral, anger, happiness, disgust, sadness, surprise, and fear) at ten graded intensity levels, allowing parametric control of emotion category and intensity. Brain activity was recorded with electroencephalography while the subjects detected the facial emotion and assessed its intensity. We found that the temporal profile of detection was quite different from that of intensity assessment. A positive component 100 ms after stimulus onset (P100) was significantly correlated with the correct detection of facial emotion, whereas a negative component 170 ms after stimulus onset (N170) was significantly correlated with the assessment of intensity level. The sources of both the P100 and N170 signals were consistently localized to the right occipito-parietal region. We propose phased processing of facial emotion, in which rapid detection of any facial emotion occurs within 100 ms and detailed processing, including assessment of the intensity, occurs shortly afterwards.
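
The sketch below illustrates the general analysis style on synthetic data: average stimulus-locked epochs into ERPs, read out a component amplitude in a fixed window, and correlate it with behaviour. The windows, numbers, and channel handling are placeholder assumptions, not the study's pipeline.

# Minimal sketch: per-level ERP averages, P100 amplitude in an 80-120 ms
# window, and its correlation with a behavioural measure. Synthetic data.
import numpy as np

rng = np.random.default_rng(0)
fs, n_levels, n_trials, n_t = 500, 10, 60, 300          # 600 ms epochs at 500 Hz
t = np.arange(n_t) / fs
epochs = rng.normal(scale=2.0, size=(n_levels, n_trials, n_t))
# Inject a P100-like deflection whose size grows with intensity level.
for lev in range(n_levels):
    epochs[lev] += (0.5 + 0.3 * lev) * np.exp(-((t - 0.10) / 0.02) ** 2)

erps = epochs.mean(axis=1)                               # per-level ERP averages
win = (t >= 0.08) & (t <= 0.12)                          # P100 window
p100 = erps[:, win].max(axis=1)

behaviour = np.linspace(0.5, 0.95, n_levels) + 0.02 * rng.normal(size=n_levels)
r = np.corrcoef(p100, behaviour)[0, 1]
print("P100 amplitude vs. detection rate, r =", round(r, 2))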


International Conference of the IEEE Engineering in Medicine and Biology Society | 1992

Human hand stiffness during discrete point-to-point multi-joint movement

Hiroaki Gomi; Yasuharu Koike; Mitsuo Kawato

We compared dynamic hand stiffness during discrete multi-joint arm movement with static stiffness during posture control in order to investigate the computational strategy used by the central nervous system for arm control. Stiffness estimates during movement, obtained by applying small perturbations, were lower than those during static posture near the end of the movement. This result supports our simulation study, which indicated that the virtual equilibrium trajectory of the hand must be complicated in shape to compensate for the nonlinear interaction forces of the multi-joint arm, especially for fast movements.
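
A minimal sketch of perturbation-based stiffness estimation follows: small displacements and the corresponding restoring forces are related by a stiffness matrix that can be fitted by least squares. The synthetic 2-D example is illustrative, not the manipulator protocol used in the study.

# Minimal sketch: apply small displacement perturbations, measure restoring
# force, and fit dF ~ -K * dx by least squares. Synthetic 2-D example.
import numpy as np

rng = np.random.default_rng(0)
K_true = np.array([[300.0, 50.0],
                   [50.0, 200.0]])                       # hand stiffness, N/m
dx = 0.005 * rng.normal(size=(200, 2))                   # small hand displacements (m)
dF = -dx @ K_true.T + 0.05 * rng.normal(size=(200, 2))   # measured restoring forces (N)

# Least-squares fit of the stiffness matrix from (dx, dF) pairs.
K_est = -np.linalg.lstsq(dx, dF, rcond=None)[0].T
print(np.round(K_est, 1))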


PLOS ONE | 2013

Prediction of three-dimensional arm trajectories based on ECoG signals recorded from human sensorimotor cortex.

Yasuhiko Nakanishi; Takufumi Yanagisawa; Duk Shin; Ryohei Fukuma; Chao Chen; Hiroyuki Kambara; Natsue Yoshimura; Masayuki Hirata; Toshiki Yoshimine; Yasuharu Koike

Brain-machine interface techniques have been applied in a number of studies to control neuromotor prostheses and for neurorehabilitation in the hopes of providing a means to restore lost motor function. Electrocorticography (ECoG) has seen recent use in this regard because it offers a higher spatiotemporal resolution than non-invasive EEG and is less invasive than intracortical microelectrodes. Although several studies have already succeeded in the inference of computer cursor trajectories and finger flexions using human ECoG signals, precise three-dimensional (3D) trajectory reconstruction for a human limb from ECoG has not yet been achieved. In this study, we predicted 3D arm trajectories in time series from ECoG signals in humans using a novel preprocessing method and a sparse linear regression. Average Pearson’s correlation coefficients and normalized root-mean-square errors between predicted and actual trajectories were 0.44∼0.73 and 0.18∼0.42, respectively, confirming the feasibility of predicting 3D arm trajectories from ECoG. We foresee this method contributing to future advancements in neuroprosthesis and neurorehabilitation technology.
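
The sketch below illustrates sparse linear decoding in the spirit of the abstract: lagged features per electrode regressed onto a 3-D trajectory with an L1-penalized linear model. The features, lags, and the use of scikit-learn's Lasso are placeholder assumptions, not the paper's preprocessing or its specific sparse-regression algorithm.

# Minimal sketch: lagged "band-power" features regressed onto a 3-D trajectory
# with a sparse (L1-penalized) linear model. Synthetic data.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_t, n_elec, n_lags = 3000, 20, 5
power = rng.normal(size=(n_t, n_elec))                     # stand-in band power

# Build a lagged feature matrix: each row holds the current and past samples.
X = np.hstack([np.roll(power, lag, axis=0) for lag in range(n_lags)])[n_lags:]
W_true = np.zeros((X.shape[1], 3))
W_true[rng.choice(X.shape[1], 10, replace=False)] = rng.normal(size=(10, 3))
traj = X @ W_true + 0.1 * rng.normal(size=(len(X), 3))     # stand-in 3-D trajectory

model = Lasso(alpha=0.05).fit(X, traj)
pred = model.predict(X)
r = [np.corrcoef(pred[:, d], traj[:, d])[0, 1] for d in range(3)]
print("per-axis correlation:", np.round(r, 2))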


Neuroscience Research | 2006

Prediction of arm trajectory from a small number of neuron activities in the primary motor cortex

Yasuharu Koike; Hideaki Hirose; Yoshio Sakurai; Toshio Iijima

Monkey arm movement was reconstructed from neuron activities recorded in the primary motor cortex (M1). We recorded single-neuron activities from a monkey's M1 while the animal performed an arm reaching task. We also recorded electromyographic (EMG) activity and movement trajectories during the task. First, we reconstructed the EMG signals from the neuron activities. The EMG signals were reliably reconstructed with a linear summation of the neuron activities. Next, we reconstructed joint angles from the reconstructed EMG signals with an artificial neural network model. The reconstructed trajectories of the hand position and elbow position showed good correlation with the actual arm movement. This model appropriately reflected the anatomical characteristics.
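
A minimal sketch of the first stage described above follows: EMG envelopes reconstructed as a linear summation of smoothed spike counts, with the weights fitted by ordinary least squares on synthetic data. It omits the second-stage neural-network joint-angle model.

# Minimal sketch: reconstruct EMG envelopes as a linear summation of smoothed
# spike counts from a handful of M1 units. Synthetic data, plain least squares.
import numpy as np

rng = np.random.default_rng(0)
n_t, n_units, n_muscles = 2000, 12, 4
spikes = rng.poisson(lam=2.0, size=(n_t, n_units)).astype(float)

# Smooth spike counts with a simple moving average (stand-in for a filter).
kernel = np.ones(20) / 20
rates = np.apply_along_axis(lambda s: np.convolve(s, kernel, mode="same"), 0, spikes)

W_true = rng.random((n_units, n_muscles))
emg = rates @ W_true + 0.05 * rng.normal(size=(n_t, n_muscles))

# Linear summation model: weights fitted by least squares.
W_hat = np.linalg.lstsq(rates, emg, rcond=None)[0]
emg_hat = rates @ W_hat
print("reconstruction r:", np.round(np.corrcoef(emg_hat[:, 0], emg[:, 0])[0, 1], 3))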


Biological Cybernetics | 1995

A computational theory for movement pattern recognition based on optimal movement pattern generation

Yasuhiro Wada; Yasuharu Koike; Eric Vatikiotis-Bateson; Mitsuo Kawato

We have previously proposed an optimal trajectory and control theory for continuous movements, such as reaching or cursive handwriting. According to Marr's three-level description of brain function, our theory can be summarized as follows: (1) the computational theory is the minimum torque-change model; (2) the intermediate representation of a pattern is given as a set of via-points extracted from an example pattern; and (3) algorithm and hardware are provided by FIRM, a neural network that can generate and control minimum torque-change trajectories. In this paper, we propose a computational theory for movement pattern recognition that is based on our theory of optimal movement pattern generation. The three levels of the description of brain function in the recognition theory are tightly coupled with those for pattern generation. In recognition, the generation process and the recognition process are two flows of information in opposite directions within a single functional unit. In our theory, if the input movement trajectory data are identical to the optimal movement pattern reconstructed from an intermediate representation of some symbol, the input data are recognized as that symbol. If an error exists between the movement trajectory data and the generated trajectory, the putative symbol is corrected and the generation is repeated. In particular, we present concrete computational procedures for the recognition of connected cursive handwritten characters, as well as for the estimation of phonemic timing in natural speech. Our most important contribution is to demonstrate the computational realizability of the 'motor theory of movement pattern perception': the movement-pattern recognition process can be realized by actively recruiting the movement-pattern formation process. The way in which the formation process is utilized in pattern recognition in our theory suggests a duality between movement pattern formation and movement pattern perception.
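
As a rough illustration of smoothness-optimized via-point trajectory generation, the sketch below minimizes a jerk-based cost (a stand-in for the torque-change cost, which would require an arm dynamics model) subject to start, end, and via-point constraints; the discretization and constraint values are placeholder assumptions.

# Minimal sketch: minimize the sum of squared third differences of a 1-D
# trajectory subject to start, end, and via-point constraints, solved via the
# KKT system of the equality-constrained quadratic program.
import numpy as np

N = 100
D = np.diff(np.eye(N), n=3, axis=0)          # third-difference operator
# Equality constraints: x[0] = 0, x[N-1] = 1, via-point x[40] = 0.8
A = np.zeros((3, N)); b = np.array([0.0, 1.0, 0.8])
A[0, 0], A[1, -1], A[2, 40] = 1.0, 1.0, 1.0

KKT = np.block([[2 * D.T @ D, A.T],
                [A, np.zeros((3, 3))]])
rhs = np.concatenate([np.zeros(N), b])
x = np.linalg.solve(KKT, rhs)[:N]            # smooth trajectory through the via-point
print(x[0], x[40], x[-1])                    # 0.0, 0.8, 1.0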

Collaboration


Dive into Yasuharu Koike's collaborations.

Top Co-Authors

Hiroyuki Kambara, Tokyo Institute of Technology
Makoto Sato, Tokyo Institute of Technology
Duk Shin, Tokyo Institute of Technology
Natsue Yoshimura, Tokyo Institute of Technology
Shoichi Hasegawa, Tokyo Institute of Technology
Charles S. DaSalla, Tokyo Institute of Technology
Kyoungsik Kim, Tokyo Institute of Technology