Publications

Featured research published by Natsue Yoshimura.


PLOS ONE | 2013

Prediction of three-dimensional arm trajectories based on ECoG signals recorded from human sensorimotor cortex.

Yasuhiko Nakanishi; Takufumi Yanagisawa; Duk Shin; Ryohei Fukuma; Chao Chen; Hiroyuki Kambara; Natsue Yoshimura; Masayuki Hirata; Toshiki Yoshimine; Yasuharu Koike

Brain-machine interface techniques have been applied in a number of studies to control neuromotor prostheses and for neurorehabilitation in the hopes of providing a means to restore lost motor function. Electrocorticography (ECoG) has seen recent use in this regard because it offers a higher spatiotemporal resolution than non-invasive EEG and is less invasive than intracortical microelectrodes. Although several studies have already succeeded in the inference of computer cursor trajectories and finger flexions using human ECoG signals, precise three-dimensional (3D) trajectory reconstruction for a human limb from ECoG has not yet been achieved. In this study, we predicted 3D arm trajectories in time series from ECoG signals in humans using a novel preprocessing method and a sparse linear regression. Average Pearson's correlation coefficients and normalized root-mean-square errors between predicted and actual trajectories were 0.44-0.73 and 0.18-0.42, respectively, confirming the feasibility of predicting 3D arm trajectories from ECoG. We foresee this method contributing to future advancements in neuroprosthesis and neurorehabilitation technology.
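The two evaluation metrics reported above can be sketched in a few lines of Python. This is a generic illustration of Pearson's correlation coefficient and range-normalized RMSE, not the authors' code; the normalization convention (dividing by the range of the actual trajectory) is an assumption.

```python
import numpy as np

def pearson_r(y_true, y_pred):
    """Pearson correlation coefficient between two 1-D time series."""
    yt = y_true - y_true.mean()
    yp = y_pred - y_pred.mean()
    return float((yt @ yp) / (np.linalg.norm(yt) * np.linalg.norm(yp)))

def nrmse(y_true, y_pred):
    """Root-mean-square error normalized by the range of the actual series."""
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return float(rmse / (y_true.max() - y_true.min()))
```

A perfect prediction yields r = 1 and NRMSE = 0; each axis of a 3D trajectory would be scored separately.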


NeuroImage | 2012

Reconstruction of flexor and extensor muscle activities from electroencephalography cortical currents.

Natsue Yoshimura; Charles S. DaSalla; Takashi Hanakawa; Masa-aki Sato; Yasuharu Koike

The ability to reconstruct muscle activity time series from electroencephalography (EEG) may lead to drastic improvements in brain-machine interfaces (BMIs) by providing a means for realistic continuous reproduction of dexterous movements in human beings. However, it is considered difficult to isolate signals related to individual muscle activities from EEG because EEG sensors record a mixture of signals originating from many cortical regions. Here, we challenge this assumption by reconstructing agonist and antagonist muscle activities (i.e. filtered electromyography (EMG) signals) from EEG cortical currents estimated using a hierarchical Bayesian EEG inverse method. Results of 5 volunteer subjects performing isometric right wrist flexion and extension tasks showed that individual muscle activity time series, as well as muscle activities at different force levels, were well reconstructed using EEG cortical currents and with significantly higher accuracy than when directly reconstructing from EEG sensor signals. Moreover, spatial distribution of weight values for reconstruction models revealed that highly contributing cortical sources to flexion and extension tasks were mutually exclusive, even though they were mapped onto the same cortical region. These results suggest that EEG sensor signals were reasonably isolated into cortical currents using the applied method and provide the first evidence that agonist and antagonist muscle activity time series can be reconstructed using EEG cortical currents.


Neuroscience Research | 2014

Decoding fingertip trajectory from electrocorticographic signals in humans.

Yasuhiko Nakanishi; Takufumi Yanagisawa; Duk Shin; Chao Chen; Hiroyuki Kambara; Natsue Yoshimura; Ryohei Fukuma; Haruhiko Kishima; Masayuki Hirata; Yasuharu Koike

Seeking to apply brain-machine interface technology in neuroprosthetics, a number of methods for predicting trajectories of the elbow and wrist have been proposed and have shown remarkable results. Recently, the prediction of hand trajectory and the classification of hand gestures or grasping types have attracted considerable attention. However, trajectory prediction for precise finger motion has remained a challenge. We proposed a method for predicting fingertip motions from electrocorticographic signals in human cortex. A patient performed extension/flexion tasks with three fingers. Average Pearson's correlation coefficients and normalized root-mean-square errors between decoded and actual trajectories were 0.83-0.90 and 0.24-0.48, respectively. To confirm generalizability to other users, we applied our method to the BCI Competition IV open data sets. The prediction accuracy of our method for fingertip trajectories was comparable to that of the other results in the competition.


Biomedical Signal Processing and Control | 2015

Online classification algorithm for eye-movement-based communication systems using two temporal EEG sensors

Abdelkader Nasreddine Belkacem; Duk Shin; Hiroyuki Kambara; Natsue Yoshimura; Yasuharu Koike

Real-time classification of eye movements offers an effective mode for human–machine interaction, and many eye-based interfaces have been presented in the literature. However, such systems often require that sensors be attached around the eyes, which can be obtrusive and cause discomfort. Here, we used two electroencephalography sensors positioned over the temporal areas to perform real-time classification of eye blinks and five classes of eye-movement direction. We applied a continuous wavelet transform for online detection, then extracted discriminable time-series features. Using linear classification, we obtained an average accuracy of 85.2% and sensitivity of 77.6% over all classes. The results showed that the proposed algorithm was efficient in the detection and classification of eye movements, providing high accuracy and low latency for single trials. This work demonstrates the promise of portable eye-movement-based communication systems and of the sensor positions, feature extraction, and classification methods used.
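The detection stage described above (a wavelet transform followed by thresholding of the response) can be illustrated with a minimal numpy sketch. The Ricker wavelet, the single fixed scale, and the z-score threshold are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

def ricker(points, a):
    """Ricker (Mexican-hat) wavelet sampled at `points` positions, width `a`."""
    t = np.arange(points) - (points - 1) / 2.0
    return (1.0 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def detect_events(signal, scale=10, threshold=3.0):
    """Flag samples where the single-scale wavelet response exceeds
    `threshold` standard deviations; a stand-in for CWT-based detection
    of transient events such as eye blinks."""
    w = ricker(8 * scale, scale)
    resp = np.convolve(signal, w, mode="same")
    z = (resp - resp.mean()) / resp.std()
    return np.where(np.abs(z) > threshold)[0]
```

A real system would scan several scales and pass the flagged segments on to the time-series feature extractor and linear classifier.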


Computational Intelligence and Neuroscience | 2015

Real-time control of a video game using eye movements and two temporal EEG sensors

Abdelkader Nasreddine Belkacem; Supat Saetia; Kalanyu Zintus-Art; Duk Shin; Hiroyuki Kambara; Natsue Yoshimura; Nasr-Eddine Berrached; Yasuharu Koike

EEG-controlled gaming applications range widely from strictly medical to completely nonmedical applications. Games can provide not only entertainment but also strong motivation for practice, thereby achieving better control with a rehabilitation system. In this paper we present real-time control of a video game with eye movements, serving as an asynchronous and noninvasive communication system using two temporal EEG sensors. We used wavelets to detect the occurrence of eye movements and time-series characteristics to distinguish between six classes of eye movement. A control interface was developed to test the proposed algorithm in real-time experiments with open and closed eyes. Using visual feedback, a mean classification accuracy of 77.3% was obtained for control with six commands. Using auditory feedback, a mean classification accuracy of 80.2% was obtained for control with five commands. The algorithm was then applied to controlling the direction and speed of character movement in a two-dimensional video game. Results showed that the proposed algorithm had an efficient response speed and timing, with a bit rate of 30 bits/min, demonstrating its efficacy and robustness in real-time control.
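The reported bit rate can be related to classification accuracy via the Wolpaw information-transfer-rate formula, a common convention in the BCI literature; whether the paper computes its bit rate exactly this way is an assumption.

```python
import math

def itr_bits_per_selection(n_classes, accuracy):
    """Wolpaw ITR: bits conveyed by one selection among `n_classes`
    options made with the given classification accuracy."""
    if accuracy >= 1.0:
        return math.log2(n_classes)
    if accuracy <= 1.0 / n_classes:
        return 0.0  # at or below chance, no information is conveyed
    p, n = accuracy, n_classes
    return (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))
```

At the reported 77.3% accuracy over six commands, each selection carries about 1.29 bits, so a rate near 30 bits/min corresponds to roughly 23 selections per minute.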


Neuroscience Research | 2014

Decoding grasp force profile from electrocorticography signals in non-human primate sensorimotor cortex.

Chao Chen; Duk Shin; Hidenori Watanabe; Yasuhiko Nakanishi; Hiroyuki Kambara; Natsue Yoshimura; Atsushi Nambu; Tadashi Isa; Yukio Nishimura; Yasuharu Koike

The relatively low invasiveness of electrocorticography (ECoG) has made it a promising candidate for the development of practical, high-performance neural prosthetics. Recent ECoG-based studies have shown success in decoding hand and finger movements and muscle activity in reaching and grasping tasks. However, decoding of force profiles is still lacking. Here, we demonstrate that the lateral grasp force profile can be decoded using a sparse linear regression from 15- and 16-channel ECoG signals recorded from sensorimotor cortex in two non-human primates. The best average correlation coefficients of prediction after 10-fold cross validation were 0.82±0.09 and 0.79±0.15 for our monkeys A and B, respectively. These results show that the grasp force profile was successfully decoded from ECoG signals in reaching and grasping tasks and may potentially contribute to the development of more natural control methods for grasping in neural prosthetics.
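Sparse linear regression of the kind named in these decoding studies can be sketched with plain-numpy cyclic coordinate descent for the Lasso. This is a generic stand-in, not the authors' decoder; the regularization weight and iteration count are arbitrary.

```python
import numpy as np

def soft_threshold(x, lam):
    """Soft-thresholding operator, the proximal map of the L1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def lasso_cd(X, y, lam=0.05, n_iter=200):
    """Sparse linear regression (Lasso) by cyclic coordinate descent.
    Minimizes (1/2n)||y - Xw||^2 + lam * ||w||_1."""
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0) / n  # per-feature curvature
    for _ in range(n_iter):
        for j in range(d):
            if col_sq[j] == 0.0:
                continue
            r_j = y - X @ w + X[:, j] * w[j]  # residual excluding feature j
            rho = X[:, j] @ r_j / n
            w[j] = soft_threshold(rho, lam) / col_sq[j]
    return w
```

The L1 penalty zeroes the weights of uninformative channels, which is what makes this family of decoders attractive for multichannel ECoG: only a subset of electrodes ends up driving the force prediction.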


IEEE Access | 2016

Hybrid Control of a Vision-Guided Robot Arm by EOG, EMG, EEG Biosignals and Head Movement Acquired via a Consumer-Grade Wearable Device

Ludovico Minati; Natsue Yoshimura; Yasuharu Koike

Simultaneous acquisition of electrooculogram, jaw electromyogram, electroencephalogram, and head movement via consumer-grade wearable devices has become possible. Such devices offer new opportunities to deploy practical biosignal-based interfaces for assistive robots; however, they also pose challenges related to the available signals and their characteristics. In this proof-of-concept study, we demonstrate the possibility of successful control of a 5 + 1 degrees-of-freedom robot arm based on a consumer wireless headband in the form of four control modes predicated on distinct signal combinations. We propose a control approach hybrid at two levels, which seeks a compromise between robot controllability and maintaining the user goal rather than being process-focused. First, robot arm steering combines discrete and proportional aspects. Second, after the robot has been steered toward the approximate target direction, a sparse approach is followed and the user only needs to issue a single command, after which steering adjustment and grasping are performed automatically under stereoscopic vision guidance. We present in detail the associated algorithms, whose implementation is publicly available. Within this framework, we also demonstrate the control of arm posture and grasping force based, respectively, on object visual features and user input. We regard the interface proposed herein as a viable blueprint for future work on controlling wheelchair-mounted and meal-assisting robot arms.


Frontiers in Neuroscience | 2016

Decoding of Covert Vowel Articulation Using Electroencephalography Cortical Currents.

Natsue Yoshimura; Atsushi Nishimoto; Abdelkader Nasreddine Belkacem; Duk Shin; Hiroyuki Kambara; Takashi Hanakawa; Yasuharu Koike

With the goal of providing assistive technology for the communication impaired, we proposed electroencephalography (EEG) cortical currents as a new approach for EEG-based brain-computer interface spellers. EEG cortical currents were estimated with a variational Bayesian method that uses functional magnetic resonance imaging (fMRI) data as a hierarchical prior. EEG and fMRI data were recorded from ten healthy participants during covert articulation of Japanese vowels /a/ and /i/, as well as during a no-imagery control task. Applying a sparse logistic regression (SLR) method to classify the three tasks, mean classification accuracy using EEG cortical currents was significantly higher than that using EEG sensor signals and was also comparable to accuracies in previous studies using electrocorticography. SLR weight analysis revealed vertices of EEG cortical currents that were highly contributive to classification for each participant, and the vertices showed discriminative time series signals according to the three tasks. Furthermore, functional connectivity analysis focusing on the highly contributive vertices revealed positive and negative correlations among areas related to speech processing. As the same findings were not observed using EEG sensor signals, our results demonstrate the potential utility of EEG cortical currents not only for engineering purposes such as brain-computer interfaces but also for neuroscientific purposes such as the identification of neural signaling related to language processing.
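Sparse logistic regression of the type used above selects a small number of informative inputs while classifying. As a simplified illustration (binary rather than three-class, and plain proximal gradient descent rather than the variational Bayesian SLR used in the paper), an L1-regularized logistic regression can be written as:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sparse_logreg(X, y, lam=0.05, lr=0.2, n_iter=1000):
    """Binary L1-regularized logistic regression trained with proximal
    gradient descent (ISTA); labels y must be in {0, 1}."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (sigmoid(X @ w) - y) / n  # gradient of the log-loss
        w = w - lr * grad
        # proximal step: soft-thresholding drives uninformative weights to zero
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
    return w
```

The surviving nonzero weights play the same role as the highly contributive vertices analyzed in the weight analysis above: they identify which inputs carry task-discriminative information.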


International Conference on Virtual Rehabilitation | 2011

Usability of EEG cortical currents in classification of vowel speech imagery

Natsue Yoshimura; Aruha Satsuma; Charles S. DaSalla; Takashi Hanakawa; Masa-aki Sato; Yasuharu Koike

With the purpose of providing assistive technology for the communication impaired, we propose a new approach for speech prostheses using vowel speech imagery. Using a hierarchical Bayesian method, electroencephalography (EEG) cortical currents were estimated from EEG signals recorded from three healthy subjects during the performance of three tasks: imagined speech of the vowels /a/ and /u/, and a no-imagery state as control. The 3-task classification using a sparse logistic regression method with variational approximation (SLR-VAR) revealed that the mean classification accuracy using cortical currents was almost twice chance level and significantly higher than that using EEG sensor signals. The results suggest the possibility of using EEG cortical currents to discriminate multiple syllables by improving the spatial discrimination of EEG.


NeuroImage | 2014

Dissociable neural representations of wrist motor coordinate frames in human motor cortices

Natsue Yoshimura; Koji Jimura; Charles S. DaSalla; Duk Shin; Hiroyuki Kambara; Takashi Hanakawa; Yasuharu Koike

There is a growing interest in how the brain transforms body part positioning in the extrinsic environment into an intrinsic coordinate frame during motor control. To explore the human brain areas representing intrinsic and extrinsic coordinate frames, this fMRI study examined neural representation of motor cortices while human participants performed isometric wrist flexions and extensions in different forearm postures, thereby applying the same wrist actions (representing the intrinsic coordinate frame) to different movement directions (representing the extrinsic coordinate frame). Using sparse logistic regression, critical voxels involving pattern information that specifically discriminates wrist action (flexion vs. extension) and movement direction (upward vs. downward) were identified within the primary motor and premotor cortices. Analyses of classifier weights further identified contributions of the primary motor cortex to the intrinsic coordinate frame and the ventral and dorsal premotor cortex and supplementary motor area proper to the extrinsic coordinate frame. These results are consistent with existing findings using non-human primates and demonstrate the distributed representations of independent coordinate frames in the human brain.

Collaboration


Dive into Natsue Yoshimura's collaborations.

Top Co-Authors

Yasuharu Koike (Tokyo Institute of Technology)
Hiroyuki Kambara (Tokyo Institute of Technology)
Duk Shin (Tokyo Institute of Technology)
Yasuhiko Nakanishi (Tokyo Institute of Technology)
Chao Chen (Tokyo Institute of Technology)
Charles S. DaSalla (Tokyo Institute of Technology)
Kalanyu Zintus-Art (Tokyo Institute of Technology)
Atsushi Nambu (Graduate University for Advanced Studies)