Publication


Featured research published by Tianyou Yu.


IEEE Transactions on Biomedical Engineering | 2010

An EEG-Based BCI System for 2-D Cursor Control by Combining Mu/Beta Rhythm and P300 Potential

Yuanqing Li; Jinyi Long; Tianyou Yu; Zhu Liang Yu; Chuanchu Wang; Haihong Zhang; Cuntai Guan

Two-dimensional cursor control is an important and challenging issue in EEG-based brain-computer interfaces (BCIs). To address this issue, we propose a new approach that combines two brain signals: the mu/beta rhythm during motor imagery and the P300 potential. In particular, a motor imagery detection mechanism and a P300 detection mechanism are devised and integrated such that the user can use the two signals to control the horizontal and vertical movements of the cursor, respectively, simultaneously and independently, in a specially designed graphical user interface. A real-time BCI system based on this approach is implemented and evaluated through an online experiment in which six subjects performed 2-D control tasks. The results attest to the efficacy of the proposed approach in obtaining two independent control signals. Furthermore, they show that the system has merit compared with prior systems: it allows cursor movement between arbitrary positions.
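
The core idea, two detectors feeding two orthogonal axes, can be sketched in a few lines. Below is a minimal illustration (our own sketch, not the authors' implementation; the classifier scores, gains, and screen bounds are placeholders):

```python
import numpy as np

def update_cursor(pos, mi_score, p300_score, gain=(5.0, 5.0)):
    """Map two independent BCI outputs onto one 2-D cursor step.

    mi_score:   signed mu/beta-rhythm classifier output in [-1, 1]
                (left- vs. right-hand imagery) -> horizontal motion.
    p300_score: signed P300 detector output in [-1, 1] -> vertical motion.
    """
    step = np.array([gain[0] * mi_score, gain[1] * p300_score])
    return np.clip(pos + step, 0, 100)  # keep the cursor on a 100x100 screen

pos = np.array([50.0, 50.0])
pos = update_cursor(pos, mi_score=0.4, p300_score=-0.2)  # -> [52., 49.]
```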


IEEE Transactions on Neural Systems and Rehabilitation Engineering | 2012

A Hybrid Brain Computer Interface to Control the Direction and Speed of a Simulated or Real Wheelchair

Jinyi Long; Yuanqing Li; Hongtao Wang; Tianyou Yu; Jiahui Pan; Feng Li

Brain-computer interfaces (BCIs) are used to translate brain activity signals into control signals for external devices. Currently, it is difficult for BCI systems to provide the multiple independent control signals necessary for the multi-degree continuous control of a wheelchair. In this paper, we address this challenge by introducing a hybrid BCI that uses the motor imagery-based mu rhythm and the P300 potential to control a brain-actuated simulated or real wheelchair. The objective of the hybrid BCI is to provide a greater number of commands with increased accuracy to the BCI user. Our paradigm allows the user to control the direction (left or right turn) of the simulated or real wheelchair using left- or right-hand imagery. Furthermore, speed is controlled in a hybrid manner. To decelerate, the user imagines foot movement while ignoring the flashing buttons on the graphical user interface (GUI). If the user wishes to accelerate, then he/she pays attention to a specific flashing button without performing any motor imagery. Two experiments were conducted to assess the BCI control; both a simulated wheelchair in a virtual environment and a real wheelchair were tested. Subjects steered both the simulated and real wheelchairs effectively by controlling the direction and speed with our hybrid BCI system. Data analysis validated the use of our hybrid BCI system to control the direction and speed of a wheelchair.
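
Read as a decision rule, the paradigm maps the joint state of the two detectors to a command. A toy sketch of that mapping (our reading of the abstract; the labels and rule are hypothetical, not the authors' code):

```python
def wheelchair_command(mi_label, p300_detected):
    """Decision rule for the hybrid direction/speed paradigm (hypothetical).

    mi_label: 'left', 'right', 'foot', or 'idle' from the motor-imagery
              classifier; p300_detected: True if the P300 detector fires.
    """
    if mi_label == 'left':
        return 'turn_left'
    if mi_label == 'right':
        return 'turn_right'
    if mi_label == 'foot' and not p300_detected:
        return 'decelerate'  # foot imagery while ignoring the flashing buttons
    if mi_label == 'idle' and p300_detected:
        return 'accelerate'  # attend a flashing button, no motor imagery
    return 'keep_state'

print(wheelchair_command('foot', p300_detected=False))  # -> decelerate
```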


IEEE Transactions on Biomedical Engineering | 2012

Target Selection With Hybrid Feature for BCI-Based 2-D Cursor Control

Jinyi Long; Yuanqing Li; Tianyou Yu; Zhenghui Gu

To control a cursor on a monitor screen, a user generally needs to perform two tasks sequentially. The first task is to move the cursor to a target on the monitor screen (termed a 2-D cursor movement), and the second task is either to select a target of interest by clicking on it or to reject a target that is not of interest by not clicking on it. In a previous study, we implemented the former function in an EEG-based brain-computer interface system using motor imagery and the P300 potential to control the horizontal and vertical cursor movements, respectively. In this study, the target selection or rejection functionality is implemented using a hybrid feature from motor imagery and the P300 potential. Specifically, to select the target of interest, the user must focus his or her attention on a flashing button to evoke the P300 potential, while simultaneously maintaining an idle state of motor imagery. Otherwise, the user performs left-/right-hand motor imagery without paying attention to any buttons to reject the target. Our data analysis and online experimental results validate the effectiveness of our approach. The proposed hybrid feature is shown to be more effective than the use of either the motor imagery feature or the P300 feature alone. Eleven subjects attended our online experiment, in which a trial involved sequential 2-D cursor movement and target selection. The average duration of each trial and average accuracy of target selection were 18.19 s and 93.99%, respectively, and each target selection or rejection event was performed within 2 s.
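
One plausible realization of such a hybrid feature is to concatenate a motor-imagery feature vector with a P300 feature vector and train a single select/reject classifier. A minimal sketch under that assumption (the feature extractors and data below are stand-ins, not the paper's pipeline):

```python
import numpy as np
from sklearn.svm import SVC

# Stand-in feature extractors (a real system might use, e.g., CSP band power
# for motor imagery and downsampled epoch amplitudes for the P300).
def mi_features(trial):
    return trial.mean(axis=1)           # one value per channel

def p300_features(trial):
    return trial[:, ::50].ravel()       # coarse temporal samples

rng = np.random.default_rng(0)
trials = rng.standard_normal((40, 8, 500))  # 40 trials, 8 channels, 500 samples
labels = rng.integers(0, 2, 40)             # 1 = select, 0 = reject

# Hybrid feature: concatenate the two modality-specific vectors per trial.
X = np.array([np.concatenate([mi_features(t), p300_features(t)]) for t in trials])
clf = SVC(kernel='linear').fit(X, labels)
```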


Journal of Neural Engineering | 2012

Surfing the internet with a BCI mouse

Tianyou Yu; Yuanqing Li; Jinyi Long; Zhenghui Gu

In this paper, we present a new web browser based on a two-dimensional (2D) brain-computer interface (BCI) mouse, where our major concern is the selection of an intended target in a multi-target web page. A real-world web page may contain tens or even hundreds of targets, including hyperlinks, input elements, buttons, etc. In this case, a target filter designed in our system can be used to exclude most of the targets of no interest. Specifically, the user filters out the targets of no interest by inputting keywords with a P300-based speller, keeping only those targets that contain the keywords. Such filtering largely facilitates the target selection task based on our BCI mouse. When only a few targets remain in a web page (either an originally sparse page or a target-filtered page), the user moves the mouse toward the target of interest using his/her electroencephalographic signal. The horizontal and vertical movements are controlled by motor imagery and the P300 potential, respectively. If the mouse encounters a target of no interest, the user rejects it and continues to move the mouse; otherwise, the user selects the target and activates it. Through the combination of target filtering and a series of mouse movements and target selections/rejections, the user can select an intended target in a web page. Based on our browser system, common navigation functions, including moving backward and forward through the history, hyperlink selection, page scrolling, and text input, are available. The system has been tested on seven subjects. Experimental results not only validated the efficacy of the proposed method but also showed that free internet surfing with a BCI mouse is feasible.
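
The target filter itself reduces to a keyword match over the visible text of page elements; the keyword is what the user spells with the P300 speller. A minimal sketch (hypothetical data structures, not the browser's code):

```python
def filter_targets(targets, keyword):
    """Keep only page elements whose visible text contains the keyword
    (spelled letter by letter with the P300 speller in the real system)."""
    kw = keyword.lower()
    return [t for t in targets if kw in t['text'].lower()]

page = [{'text': 'Home'}, {'text': 'BCI news'}, {'text': 'Contact'}]
print(filter_targets(page, 'bci'))  # -> [{'text': 'BCI news'}]
```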


IEEE Transactions on Neural Systems and Rehabilitation Engineering | 2016

Control of a Wheelchair in an Indoor Environment Based on a Brain–Computer Interface and Automated Navigation

Rui Zhang; Yuanqing Li; Yongyong Yan; Hao Zhang; Shaoyu Wu; Tianyou Yu; Zhenghui Gu

The concept of controlling a wheelchair using brain signals is promising. However, the continuous control of a wheelchair based on unstable and noisy electroencephalogram signals is unreliable and generates a significant mental burden for the user. A feasible solution is to integrate a brain-computer interface (BCI) with automated navigation techniques. This paper presents a brain-controlled intelligent wheelchair with the capability of automatic navigation. Using an autonomous navigation system, candidate destinations and waypoints are automatically generated based on the existing environment. The user selects a destination using a motor imagery (MI)-based or P300-based BCI. According to the determined destination, the navigation system plans a short and safe path and navigates the wheelchair to the destination. During the movement of the wheelchair, the user can issue a stop command with the BCI. Using our system, the mental burden of the user can be substantially alleviated. Furthermore, our system can adapt to changes in the environment. Two experiments based on MI and P300 were conducted to demonstrate the effectiveness of our system.
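
The division of labor is the key architectural point: the BCI supplies only discrete decisions (pick a destination, stop), while the navigation stack does the continuous driving. A compact sketch of such a shared-control loop (all interfaces are hypothetical, not the paper's software):

```python
class Planner:
    """Stand-in for the autonomous navigation system."""
    def candidate_destinations(self):
        return ['door', 'desk', 'bed']     # generated from the mapped room
    def plan(self, dest):
        return [(1, 0), (2, 1), (3, 1)]    # a short, safe waypoint path

def run_trip(select_destination, stop_requested, drive_to, planner):
    dest = select_destination(planner.candidate_destinations())  # MI/P300 choice
    for waypoint in planner.plan(dest):
        drive_to(waypoint)                 # continuous control, no BCI needed
        if stop_requested():               # BCI-issued stop command
            return 'stopped_by_user'
    return 'arrived'

status = run_trip(lambda options: options[0], lambda: False,
                  lambda wp: None, Planner())
```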


Cognitive Neurodynamics | 2014

An asynchronous wheelchair control by hybrid EEG–EOG brain–computer interface

Hongtao Wang; Yuanqing Li; Jinyi Long; Tianyou Yu; Zhenghui Gu

Wheelchair control requires multiple degrees of freedom and fast intention detection, which makes electroencephalography (EEG)-based wheelchair control a big challenge. In our previous study, we achieved direction (turning left and right) and speed (acceleration and deceleration) control of a wheelchair using a hybrid brain-computer interface (BCI) combining motor imagery and P300 potentials. In this paper, we propose a hybrid EEG-EOG BCI that combines motor imagery, P300 potentials, and eye blinking to implement forward, backward, and stop control of a wheelchair. By performing the relevant activities, users (e.g., those with amyotrophic lateral sclerosis or locked-in syndrome) can navigate the wheelchair with seven steering behaviors. Experimental results on four healthy subjects not only demonstrate the efficiency and robustness of our brain-controlled wheelchair system but also indicate that all four subjects could control the wheelchair spontaneously and efficiently without any other assistance (e.g., an automatic navigation system).
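
On the EOG side, an eye blink appears as a large transient deflection, so even a crude amplitude threshold can serve as a detector. An illustrative sketch (the threshold-based method here is our assumption, not the paper's detector):

```python
import numpy as np

def detect_blinks(eog, fs=250, thresh_uv=100.0):
    """Flag samples where a single EOG channel deviates strongly from its
    median; blinks show up as large transient deflections."""
    peaks = np.flatnonzero(np.abs(eog - np.median(eog)) > thresh_uv)
    return peaks / fs                      # blink sample times in seconds

fs = 250
eog = 10 * np.random.default_rng(0).standard_normal(2 * fs)  # 2 s of noise, uV
eog[int(0.8 * fs):int(0.85 * fs)] += 300                     # synthetic blink
print(detect_blinks(eog, fs)[:3])          # times near 0.8 s
```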


IEEE Transactions on Biomedical Engineering | 2015

Enhanced Motor Imagery Training Using a Hybrid BCI With Feedback

Tianyou Yu; Jun Xiao; Fangyi Wang; Rui Zhang; Zhenghui Gu; Andrzej Cichocki; Yuanqing Li

Goal: Motor imagery-related mu/beta rhythms, which can be voluntarily modulated by subjects, have been widely used in EEG-based brain-computer interfaces (BCIs). Moreover, it has been suggested that motor imagery-specific EEG differences can be enhanced by feedback training. However, the differences observed in the EEGs of naive subjects are typically not sufficient to provide reliable EEG control and thus result in unintended feedback. Such feedback can frustrate subjects and impede training. In this study, a hybrid BCI paradigm combining motor imagery and steady-state visually evoked potentials (SSVEPs) has been proposed to provide effective continuous feedback for motor imagery training. Methods: During the initial training sessions, subjects must focus on flickering buttons to evoke SSVEPs as they perform motor imagery tasks. The output/feedback of the hybrid BCI is based on hybrid features consisting of motor imagery- and SSVEP-related brain signals. In this context, the SSVEP plays a more important role than motor imagery in generating feedback. As the training progresses, the subjects can gradually decrease their visual attention to the flickering buttons, provided that the feedback is still effective. In this case, the feedback is mainly based on motor imagery. Results: Our experimental results demonstrate that subjects generate distinguishable brain patterns of hand motor imagery after only five training sessions lasting approximately 1.5 h each. Conclusion: The proposed hybrid feedback paradigm can be used to enhance motor imagery training. Significance: This hybrid BCI system with feedback can effectively identify the intentions of the subjects.
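
One simple way to realize the described shift from SSVEP-dominated to motor-imagery-dominated feedback is a session-dependent convex combination of the two detector scores. A sketch under that assumption (the linear weighting schedule is ours; the paper's fusion rule may differ):

```python
def feedback_score(mi_score, ssvep_score, session, n_sessions=5):
    """Blend the two detector outputs; early sessions lean on the reliable
    SSVEP channel, later sessions on motor imagery (linear schedule assumed)."""
    w_mi = session / n_sessions            # grows from 0.2 to 1.0
    return w_mi * mi_score + (1 - w_mi) * ssvep_score

for s in range(1, 6):
    print(s, round(feedback_score(mi_score=0.3, ssvep_score=0.9, session=s), 2))
```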


Proceedings of the IEEE | 2016

Multimodal BCIs: Target Detection, Multidimensional Control, and Awareness Evaluation in Patients With Disorder of Consciousness

Yuanqing Li; Jiahui Pan; Jinyi Long; Tianyou Yu; Fei Wang; Zhu Liang Yu; Wei Wu

Despite rapid advances in the study of brain-computer interfaces (BCIs) in recent decades, two fundamental challenges, namely, improvement of target detection performance and multidimensional control, continue to be major barriers for further development and applications. In this paper, we review the recent progress in multimodal BCIs (also called hybrid BCIs), which may provide potential solutions for addressing these challenges. In particular, improved target detection can be achieved by developing multimodal BCIs that utilize multiple brain patterns, multimodal signals, or multisensory stimuli. Furthermore, multidimensional object control can be accomplished by generating multiple control signals from different brain patterns or signal modalities. Here, we highlight several representative multimodal BCI systems by analyzing their paradigm designs, detection/control methods, and experimental results. To demonstrate their practicality, we report several initial clinical applications of these multimodal BCI systems, including awareness evaluation/detection in patients with disorder of consciousness (DOC). As an evolving research area, the study of multimodal BCIs increasingly requires synergistic efforts from multiple disciplines for the exploration of the underlying brain mechanisms, the design of new effective paradigms and means of neurofeedback, and the expansion of the clinical applications of these systems.


Cerebral Cortex | 2015

Crossmodal Integration Enhances Neural Representation of Task-Relevant Features in Audiovisual Face Perception

Yuanqing Li; Jinyi Long; Biao Huang; Tianyou Yu; Wei Wu; Yongjian Liu; Changhong Liang; Pei Sun

Previous studies have shown that audiovisual integration improves identification performance and enhances neural activity in heteromodal brain areas, for example, the posterior superior temporal sulcus/middle temporal gyrus (pSTS/MTG). Furthermore, it has also been demonstrated that attention plays an important role in crossmodal integration. In this study, we considered crossmodal integration in audiovisual face perception and explored its effect on the neural representation of features. The audiovisual stimuli in the experiment consisted of facial movie clips that could be classified into 2 gender categories (male vs. female) or 2 emotion categories (crying vs. laughing). The visual/auditory-only stimuli were created from these movie clips by removing the auditory/visual contents. The subjects needed to make a judgment about the gender/emotion category for each movie clip in the audiovisual, visual-only, or auditory-only stimulus condition as functional magnetic resonance imaging (fMRI) signals were recorded. The neural representation of the gender/emotion feature was assessed using the decoding accuracy and the brain pattern-related reproducibility indices, obtained by a multivariate pattern analysis method from the fMRI data. In comparison to the visual-only and auditory-only stimulus conditions, we found that audiovisual integration enhanced the neural representation of task-relevant features and that feature-selective attention might play a modulatory role in audiovisual integration.
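
The decoding-accuracy index can be illustrated with a standard cross-validated classifier over voxel patterns. A minimal sketch on synthetic data (the paper's actual pipeline, features, and classifier may differ):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 200))    # 60 trials x 200 voxels (synthetic)
y = rng.integers(0, 2, 60)            # e.g., crying vs. laughing
X[y == 1, :10] += 0.8                 # inject a weak class-specific signal

acc = cross_val_score(LinearSVC(), X, y, cv=5).mean()
print(f"cross-validated decoding accuracy: {acc:.2f}")
```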


Cognitive Neurodynamics | 2011

Semi-supervised joint spatio-temporal feature selection for P300-based BCI speller

Jinyi Long; Zhenghui Gu; Yuanqing Li; Tianyou Yu; Feng Li; Ming Fu

In this paper, we address the important problem of feature selection for a P300-based brain-computer interface (BCI) speller system in several aspects. Firstly, time segment selection and electroencephalogram channel selection are jointly performed for better discriminability between the P300 and background signals. Secondly, given that labeled training data are often insufficient, we propose an iterative semi-supervised support vector machine for joint spatio-temporal feature selection and classification, in which both labeled training data and unlabeled test data are utilized. More importantly, the semi-supervised learning enables the adaptivity of the system. The performance of our algorithm has been evaluated through the analysis of a P300 dataset provided by BCI Competition 2005 and another dataset collected from an in-house P300 speller system. The results show that our algorithm for joint feature selection and classification achieves satisfactory performance while significantly reducing the training effort of the system. Furthermore, this algorithm has been implemented online, and the corresponding results demonstrate that it can improve the adaptivity of the P300-based BCI speller.
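
The semi-supervised step can be illustrated as self-training: fit on the labeled data, pseudo-label the confident unlabeled trials, and refit. A minimal sketch (the paper's joint spatio-temporal feature selection is omitted; the confidence threshold is a placeholder):

```python
import numpy as np
from sklearn.svm import SVC

def self_training_svm(X_lab, y_lab, X_unlab, n_iter=5, conf_thresh=0.9):
    """Iteratively augment the training set with the classifier's own
    confident predictions on unlabeled test trials, then refit."""
    X, y = X_lab, y_lab
    clf = SVC(kernel='linear', probability=True)
    for _ in range(n_iter):
        clf.fit(X, y)
        proba = clf.predict_proba(X_unlab)
        confident = proba.max(axis=1) > conf_thresh
        if not confident.any():
            break
        X = np.vstack([X_lab, X_unlab[confident]])
        y = np.concatenate([y_lab, clf.classes_[proba[confident].argmax(axis=1)]])
    return clf

rng = np.random.default_rng(0)
clf = self_training_svm(rng.standard_normal((20, 5)), rng.integers(0, 2, 20),
                        rng.standard_normal((30, 5)))
```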

Collaboration


Dive into Tianyou Yu's collaborations.

Top Co-Authors

Yuanqing Li (South China University of Technology)
Jinyi Long (South China University of Technology)
Zhenghui Gu (South China University of Technology)
Zhu Liang Yu (South China University of Technology)
Jiahui Pan (South China University of Technology)
Jun Xiao (South China University of Technology)
Hongtao Wang (South China University of Technology)
Ronghao Yu (South China University of Technology)
Feng Li (Changsha University of Science and Technology)