Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Hiroyuki Kambara is active.

Publication


Featured research published by Hiroyuki Kambara.


IEEE Transactions on Biomedical Engineering | 2010

Application of Covariate Shift Adaptation Techniques in Brain–Computer Interfaces

Yan Li; Hiroyuki Kambara; Yasuharu Koike; Masashi Sugiyama

A phenomenon often found in session-to-session transfers of brain-computer interfaces (BCIs) is nonstationarity. It can be caused by fatigue and the changing attention level of the user, differing electrode placements, and varying impedances, among other reasons. Covariate shift adaptation is an effective method that can adapt to the testing sessions without the need to label the testing session data. The method was applied to a BCI Competition III dataset. Results showed that covariate shift adaptation compares favorably with the methods used in the BCI competition in coping with nonstationarities. Specifically, bagging combined with covariate shift helped to increase stability when applied to the competition dataset. An online experiment also demonstrated the effectiveness of the bagged covariate shift method. Thus, covariate shift adaptation is helpful for realizing adaptive BCI systems.
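The core of covariate shift adaptation is importance weighting: training-session samples are reweighted by the density ratio p_test(x)/p_train(x) so that statistics (or classifier losses) computed on training data match the shifted test session. A minimal sketch of that idea, assuming one-dimensional Gaussian session distributions; the toy data and variable names here are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for session-to-session nonstationarity: the test-session
# inputs are drawn from a shifted distribution.
x_train = rng.normal(0.0, 1.0, 1000)
x_test = rng.normal(0.5, 1.0, 1000)

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Importance weights w(x) = p_test(x) / p_train(x), with each session's
# density estimated by a fitted Gaussian.
w = (gaussian_pdf(x_train, x_test.mean(), x_test.std())
     / gaussian_pdf(x_train, x_train.mean(), x_train.std()))

# A test-session statistic estimated from training data alone:
plain_mean = x_train.mean()
weighted_mean = np.average(x_train, weights=w)
```

In a BCI setting the same weights would multiply each training sample's loss when refitting the classifier, which is what lets the decoder track the new session without labeled test data.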


Neural Networks | 2009

2009 Special Issue: Single-trial classification of vowel speech imagery using common spatial patterns

Charles S. DaSalla; Hiroyuki Kambara; Makoto Sato; Yasuharu Koike

With the goal of providing a speech prosthesis for individuals with severe communication impairments, we propose a control scheme for brain-computer interfaces using vowel speech imagery. Electroencephalography was recorded in three healthy subjects during three tasks: imagined speech of the English vowels /a/ and /u/, and a no-action state as control. Trial averages revealed readiness potentials 200 ms after stimulus and speech-related potentials peaking after 350 ms. Spatial filters optimized for task discrimination were designed using the common spatial patterns method, and the resulting feature vectors were classified using a nonlinear support vector machine. Overall classification accuracies ranged from 68% to 78%. Results indicate significant potential for the use of vowel speech imagery as a speech prosthesis controller.
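The common spatial patterns (CSP) method named above finds spatial filters that maximize the variance of one class while minimizing that of the other, via a generalized eigendecomposition of the class covariance matrices. A self-contained sketch on synthetic two-channel data (the data, shapes, and function names are our assumptions, not the paper's):

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=2):
    """Common spatial patterns: rows of the returned matrix are spatial
    filters that separate the two classes by variance."""
    mean_cov = lambda trials: np.mean([np.cov(t) for t in trials], axis=0)
    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenproblem  Ca w = lambda (Ca + Cb) w
    vals, vecs = eigh(ca, ca + cb)
    order = np.argsort(vals)
    half = n_filters // 2
    picks = np.concatenate([order[:half], order[-(n_filters - half):]])
    return vecs[:, picks].T

# Toy data: class A is strong on channel 0, class B on channel 1.
# Each trial is (n_channels, n_samples).
rng = np.random.default_rng(3)
trials_a = [np.vstack([2.0 * rng.normal(size=200), rng.normal(size=200)])
            for _ in range(20)]
trials_b = [np.vstack([rng.normal(size=200), 2.0 * rng.normal(size=200)])
            for _ in range(20)]
W = csp_filters(trials_a, trials_b)

# Log-variance features of the filtered signals, the usual input to the
# SVM stage described in the abstract.
feat = lambda trial: np.log(np.var(W @ trial, axis=1))
```

Filters are taken from both ends of the eigenvalue spectrum because the smallest and largest eigenvalues correspond to the most class-discriminative variance directions.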


PLOS ONE | 2013

Prediction of three-dimensional arm trajectories based on ECoG signals recorded from human sensorimotor cortex.

Yasuhiko Nakanishi; Takufumi Yanagisawa; Duk Shin; Ryohei Fukuma; Chao Chen; Hiroyuki Kambara; Natsue Yoshimura; Masayuki Hirata; Toshiki Yoshimine; Yasuharu Koike

Brain-machine interface techniques have been applied in a number of studies to control neuromotor prostheses and for neurorehabilitation in the hopes of providing a means to restore lost motor function. Electrocorticography (ECoG) has seen recent use in this regard because it offers a higher spatiotemporal resolution than non-invasive EEG and is less invasive than intracortical microelectrodes. Although several studies have already succeeded in the inference of computer cursor trajectories and finger flexions using human ECoG signals, precise three-dimensional (3D) trajectory reconstruction for a human limb from ECoG has not yet been achieved. In this study, we predicted 3D arm trajectories in time series from ECoG signals in humans using a novel preprocessing method and a sparse linear regression. Average Pearson’s correlation coefficients and normalized root-mean-square errors between predicted and actual trajectories were 0.44∼0.73 and 0.18∼0.42, respectively, confirming the feasibility of predicting 3D arm trajectories from ECoG. We foresee this method contributing to future advancements in neuroprosthesis and neurorehabilitation technology.
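Sparse linear regression, as used for the trajectory decoding above, fits a linear map while driving most coefficients to exactly zero, selecting a small subset of informative features. A minimal sketch using iterative soft-thresholding (ISTA) for the L1-penalized least-squares problem; the synthetic data and parameter values are ours, not the paper's:

```python
import numpy as np

def sparse_linear_regression(X, y, lam=0.05, n_iter=1000):
    """L1-penalized least squares via ISTA:
    a gradient step on the squared error followed by soft-thresholding."""
    n, d = X.shape
    w = np.zeros(d)
    step = n / np.linalg.norm(X, ord=2) ** 2  # 1 / Lipschitz constant
    for _ in range(n_iter):
        w = w - step * (X.T @ (X @ w - y)) / n
        w = np.sign(w) * np.maximum(np.abs(w) - lam * step, 0.0)
    return w

# Synthetic decoding problem: only 2 of 10 features drive the output,
# mimicking a decoder that should pick out a few informative channels.
rng = np.random.default_rng(4)
X = rng.normal(size=(200, 10))
w_true = np.zeros(10)
w_true[0], w_true[3] = 2.0, -3.0
y = X @ w_true + 0.1 * rng.normal(size=200)
w_hat = sparse_linear_regression(X, y)
```

The soft-threshold step zeroes out coefficients whose gradient signal is weaker than the penalty, which is what yields the sparsity.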


IEEE Transactions on Neural Systems and Rehabilitation Engineering | 2011

A Dictionary-Driven P300 Speller With a Modified Interface

Sercan Taha Ahi; Hiroyuki Kambara; Yasuharu Koike

P300 spellers are mainly composed of an interface, by which alphanumerical characters are presented to users, and a classification system, which identifies the target character from acquired EEG data. In this study, we proposed modifications both to the interface and to the classification system in order to reduce the number of required stimulus repetitions and consequently boost the information transfer rate. We initially incorporated a custom-built dictionary into the classification system and conducted a study on 14 healthy subjects who copy-spelled 15 four-letter words. With the dictionary incorporated, the mean accuracy at five trials increased from 72.86% to 95.71%. To further increase system performance, we first validated the hypothesis that, for a conventional P300 system, most target-error pairs lie on the same row or column. Based on this validated hypothesis, we then adjusted letter positions on the well-known A-to-Z interface. The same subjects spelled the same 15 words using the modified interface as well, and the mean information transfer rate at two trials reached 55.32 bits/min.
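One way a dictionary can be incorporated into a P300 classifier is to score whole candidate words by summing the per-position classifier scores of their letters, so that a weak letter is rescued by the rest of the word. A toy sketch of that idea, with handcrafted scores and a hypothetical `dictionary_decode` helper (not the paper's actual formulation):

```python
def dictionary_decode(letter_scores, dictionary):
    """Pick the dictionary word whose letters have the highest summed
    classifier scores, instead of trusting each letter independently."""
    word_score = lambda word: sum(pos.get(ch, 0.0)
                                  for pos, ch in zip(letter_scores, word))
    return max(dictionary, key=word_score)

# Per-position classifier scores for a 4-letter target; the last position
# is ambiguous between 'd' and 'e' on its own.
scores = [
    {"w": 0.9, "c": 0.2},
    {"o": 0.8, "a": 0.3},
    {"r": 0.9},
    {"d": 0.4, "e": 0.5},  # letter-wise argmax would pick 'e'
]

# Letter-by-letter decoding yields the non-word "wore" ...
letterwise = "".join(max(pos, key=pos.get) for pos in scores)
# ... while dictionary-constrained decoding recovers the target.
decoded = dictionary_decode(scores, ["word", "ward", "cord", "care"])
```

Restricting candidates to dictionary entries is exactly what lets accuracy rise at a fixed number of stimulus repetitions: the classifier no longer needs every single letter decision to be correct.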


PLOS ONE | 2012

Prediction of Muscle Activities from Electrocorticograms in Primary Motor Cortex of Primates

Duk Shin; Hidenori Watanabe; Hiroyuki Kambara; Atsushi Nambu; Tadashi Isa; Yukio Nishimura; Yasuharu Koike

Electrocorticography (ECoG) has drawn attention as an effective recording approach for brain-machine interfaces (BMIs). Previous studies have succeeded in classifying movement intention and predicting hand trajectories from ECoG. Despite such successes, however, considerable work remains before ECoG-based BMIs can be realized as neuroprosthetics. We developed a method to predict multiple muscle activities from ECoG measurements. We also verified that ECoG signals are effective for predicting muscle activities as time-varying series during sequential movements. ECoG signals were band-pass filtered into separate sensorimotor rhythm bands, z-score normalized, and smoothed with a Gaussian filter. We used sparse linear regression to find the best fit between ECoG frequency bands and electromyographic activity. The best average correlation coefficient and normalized root-mean-square error were 0.92±0.06 and 0.06±0.10, respectively, in the flexor digitorum profundus finger muscle. The δ (1.5∼4 Hz) and γ2 (50∼90 Hz) bands contributed significantly more strongly than the other frequency bands (P<0.001). These results demonstrate the feasibility of predicting muscle activity from ECoG signals in an online fashion.
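The preprocessing chain described above (band-pass into a rhythm band, z-score, Gaussian smoothing) can be sketched with standard scipy building blocks. The sampling rate, filter order, smoothing width, and the synthetic δ-band test signal below are our assumptions for illustration, not the paper's settings:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from scipy.ndimage import gaussian_filter1d

def preprocess_ecog(sig, fs, band, sigma_s=0.05):
    """Band-pass filter one channel into `band` (Hz), z-score it,
    then smooth with a Gaussian of width sigma_s seconds."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    x = sosfiltfilt(sos, sig)          # zero-phase band-pass
    x = (x - x.mean()) / x.std()       # z-score normalization
    return gaussian_filter1d(x, sigma=sigma_s * fs)

fs = 1000.0
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(1)
# A 3 Hz "delta" component buried in broadband noise
raw = np.sin(2 * np.pi * 3 * t) + rng.normal(size=t.size)
delta = preprocess_ecog(raw, fs, (1.5, 4.0))
```

Zero-phase filtering (`sosfiltfilt`) matters for decoding because it avoids introducing a frequency-dependent lag between the neural features and the muscle activity they are regressed against.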


Neural Networks | 2009

Learning and generation of goal-directed arm reaching from scratch

Hiroyuki Kambara; Kyoungsik Kim; Duk Shin; Makoto Sato; Yasuharu Koike

In this paper, we propose a computational model for arm reaching control and learning. Our model describes not only the mechanism of motor control but also that of learning. Although several motor control models have been proposed to explain the control mechanism underlying well-trained arm reaching movements, how the central nervous system (CNS) learns to control the body has not been fully considered. One of the great abilities of the CNS is that it can learn by itself how to control the body to execute required tasks. Our model is designed to improve control performance in the trial-and-error manner commonly seen in human motor skill learning. In this paper, we focus on a reaching task in the sagittal plane and show that our model can learn and generate accurate reaching toward various target points without prior knowledge of arm dynamics. Furthermore, by comparing the movement trajectories with those made by human subjects, we show that our model can reproduce human-like reaching motions without specifying desired trajectories.


Neuroscience Research | 2014

Decoding fingertip trajectory from electrocorticographic signals in humans.

Yasuhiko Nakanishi; Takufumi Yanagisawa; Duk Shin; Chao Chen; Hiroyuki Kambara; Natsue Yoshimura; Ryohei Fukuma; Haruhiko Kishima; Masayuki Hirata; Yasuharu Koike

Seeking to apply brain-machine interface technology in neuroprosthetics, a number of methods for predicting elbow and wrist trajectories have been proposed and have shown remarkable results. Recently, the prediction of hand trajectory and the classification of hand gestures or grasping types have attracted considerable attention. However, trajectory prediction for precise finger motion has remained a challenge. We propose a method for predicting fingertip motions from electrocorticographic signals in the human cortex. A patient performed extension/flexion tasks with three fingers. Average Pearson's correlation coefficients and normalized root-mean-square errors between decoded and actual trajectories were 0.83-0.90 and 0.24-0.48, respectively. To confirm generalizability to other users, we applied our method to the BCI Competition IV open datasets. Our method achieved fingertip trajectory prediction accuracy comparable to that of other results in the competition.


Biomedical Signal Processing and Control | 2015

Online classification algorithm for eye-movement-based communication systems using two temporal EEG sensors

Abdelkader Nasreddine Belkacem; Duk Shin; Hiroyuki Kambara; Natsue Yoshimura; Yasuharu Koike

Real-time classification of eye movements offers an effective mode for human–machine interaction, and many eye-based interfaces have been presented in the literature. However, such systems often require that sensors be attached around the eyes, which can be obtrusive and cause discomfort. Here, we used two electroencephalography sensors positioned over the temporal areas to perform real-time classification of eye blinks and five classes of eye movement direction. We applied a continuous wavelet transform for online detection and then extracted discriminative time-series features. Using linear classification, we obtained an average accuracy of 85.2% and sensitivity of 77.6% over all classes. The results showed that the proposed algorithm is efficient in the detection and classification of eye movements, providing high accuracy and low latency for single trials. This work demonstrates the promise of portable eye-movement-based communication systems and of the sensor positions, feature extraction, and classification methods used.
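Wavelet-based detection of eye events works by convolving the EEG with a transient-shaped wavelet at several scales and flagging samples where the response peaks. A minimal numpy sketch of that detection stage, using a Ricker ("Mexican hat") wavelet on a simulated blink deflection; the wavelet choice, scales, and signal are our illustrative assumptions:

```python
import numpy as np

def ricker(points, a):
    """Ricker ("Mexican hat") wavelet of scale a, unit L2 norm."""
    t = np.arange(points) - (points - 1) / 2
    amp = 2 / (np.sqrt(3 * a) * np.pi ** 0.25)
    return amp * (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def wavelet_response(sig, widths):
    """Max absolute wavelet coefficient across scales at each sample;
    peaks mark candidate eye-movement or blink onsets."""
    coefs = [np.convolve(sig, ricker(int(10 * w), w), mode="same")
             for w in widths]
    return np.abs(np.array(coefs)).max(axis=0)

rng = np.random.default_rng(2)
sig = 0.05 * rng.normal(size=1000)           # background EEG noise
idx = np.arange(1000)
sig += np.exp(-0.5 * ((idx - 500) / 15.0) ** 2)  # simulated blink at sample 500
resp = wavelet_response(sig, widths=[5, 10, 20])
onset = int(np.argmax(resp))
```

In the online system, the same response would be thresholded continuously, with the time-series features around each detected peak passed on to the linear classifier.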


international convention on rehabilitation engineering & assistive technology | 2009

Spatial filtering and single-trial classification of EEG during vowel speech imagery

Charles S. DaSalla; Hiroyuki Kambara; Yasuharu Koike; Makoto Sato

With the purpose of providing assistive technology for the communication impaired, we propose a control algorithm for speech prostheses using vowel speech imagery. Electroen-cephalograms were recorded in three healthy subjects during the performance of three tasks, imaginary speech of the English vowels /a/ and /u/, and a no action state as control. Speech related potentials were visualized by grand averaging in the time domain. Feature data was obtained by filtering the time series data using optimal spatial filters designed through the common spatial patterns method. Resultant feature vectors were classified using a nonlinear support vector machine. Overall classification accuracies ranged from 68 to 78%. Results indicate significant potential for the use of vowel speech imagery as a speech prosthesis controller.


Computational Intelligence and Neuroscience | 2015

Real-time control of a video game using eye movements and two temporal EEG sensors

Abdelkader Nasreddine Belkacem; Supat Saetia; Kalanyu Zintus-Art; Duk Shin; Hiroyuki Kambara; Natsue Yoshimura; Nasr-Eddine Berrached; Yasuharu Koike

EEG-controlled gaming applications range widely from strictly medical to completely nonmedical applications. Games can provide not only entertainment but also strong motivation for practice, thereby achieving better control with a rehabilitation system. In this paper we present real-time control of a video game with eye movements as an asynchronous, noninvasive communication system using two temporal EEG sensors. We used wavelets to detect the instant of eye movement and time-series characteristics to distinguish between six classes of eye movement. A control interface was developed to test the proposed algorithm in real-time experiments with open and closed eyes. Using visual feedback, a mean classification accuracy of 77.3% was obtained for control with six commands, and a mean classification accuracy of 80.2% was obtained using auditory feedback for control with five commands. The algorithm was then applied to control the direction and speed of character movement in a two-dimensional video game. Results showed that the proposed algorithm responded with efficient speed and timing at a bit rate of 30 bits/min, demonstrating its efficacy and robustness in real-time control.

Collaboration


Dive into Hiroyuki Kambara's collaborations.

Top Co-Authors

Yasuharu Koike (Tokyo Institute of Technology)
Duk Shin (Tokyo Institute of Technology)
Natsue Yoshimura (Tokyo Institute of Technology)
Makoto Sato (Tokyo Institute of Technology)
Yasuhiko Nakanishi (Tokyo Institute of Technology)
Chao Chen (Tokyo Institute of Technology)
Kyoungsik Kim (Tokyo Institute of Technology)
Atsushi Nambu (Graduate University for Advanced Studies)