Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Mengfan Li is active.

Publication


Featured research published by Mengfan Li.


Revista De Informática Teórica E Aplicada | 2014

A P300 Model for Cerebot – A Mind-Controlled Humanoid Robot

Mengfan Li; Wei Li; Jing Zhao; Qing-Hao Meng; Ming Zeng; Genshe Chen

In this paper, we present a P300 model for control of Cerebot – a mind-controlled humanoid robot – including a procedure for acquiring P300 signals, a topographical distribution analysis of the P300 signals, and a classification approach for identifying subjects' mental activities regarding robot-walking behavior.


Robotics and Biomimetics | 2013

An OpenViBE-based brainwave control system for Cerebot

Jing Zhao; Qing-Hao Meng; Wei Li; Mengfan Li; Fuchun Sun; Genshe Chen

In this paper, we develop a brainwave-based control system for Cerebot, consisting of a humanoid robot and a Cerebus™ Data Acquisition System with up to 128 channels. Under the OpenViBE programming environment, the control system integrates OpenGL, OpenCV, WEBOTS, Choregraph, Central software, and user-developed programs in C++ and Matlab. The proposed system is easy to expand or upgrade. First, we describe the system structures for off-line analysis of acquired neural signals and for on-line control of a humanoid robot via brainwaves. Second, we discuss how to use the toolboxes provided with the OpenViBE environment to design three types of brainwave-based models: SSVEPs, P300s, and mu/beta rhythms. Finally, we use the Cerebot platform to investigate the three models by controlling four robot-walking behaviors: turning right, turning left, walking forward, and walking backward.


PLOS ONE | 2015

Comparative Study of SSVEP- and P300-based Models for the Telepresence Control of Humanoid Robots

Jing Zhao; Wei Li; Mengfan Li

In this paper, we evaluate the control performance of SSVEP (steady-state visual evoked potential)- and P300-based models using Cerebot, a mind-controlled humanoid robot platform. Seven subjects with diverse experience participated in experiments concerning the open-loop and closed-loop control of a humanoid robot via brain signals. The visual stimuli of both the SSVEP- and P300-based models were implemented on an LCD computer monitor with a refresh frequency of 60 Hz. Considering operation safety, we set a classification accuracy above 90.0% as the most important mandatory criterion for the telepresence control of the humanoid robot. The open-loop experiments demonstrated that the SSVEP model with at most four stimulus targets achieved an average accuracy of about 90%, whereas the P300 model with six or more stimulus targets under five repetitions per trial was able to achieve accuracy rates over 90.0%. Therefore, four SSVEP stimuli were used to control four types of robot behavior, while six P300 stimuli were chosen to control six types of robot behavior. The 4-class SSVEP and 6-class P300 models achieved average success rates of 90.3% and 91.3%, average response times of 3.65 s and 6.6 s, and average information transfer rates (ITR) of 24.7 bits/min and 18.8 bits/min, respectively. The closed-loop experiments addressed the telepresence control of the robot; the objective was to cause the robot to walk along a white lane marked in an office environment using live video feedback. Comparative studies reveal that the SSVEP model yielded a faster response to the subject's mental activity with less reliance on channel selection, whereas the P300 model was found to be suitable for more classifiable targets and required less training. To conclude, we discuss the existing SSVEP and P300 models for the control of humanoid robots, including the models proposed in this paper.
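The ITR figures quoted above are conventionally computed with the standard Wolpaw formula; the abstract does not reproduce it, so the sketch below only assumes that standard definition:

```python
import math

def itr_bits_per_min(n_classes: int, accuracy: float, trial_time_s: float) -> float:
    """Wolpaw information transfer rate for an n-class BCI.

    n_classes: number of selectable targets (e.g. 4 SSVEP or 6 P300 stimuli)
    accuracy: probability of a correct selection (0..1)
    trial_time_s: average time per selection in seconds
    """
    p = accuracy
    bits = math.log2(n_classes)
    if 0.0 < p < 1.0:  # the correction terms vanish at p = 1 and diverge at p = 0
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n_classes - 1))
    return bits * 60.0 / trial_time_s

# 4-class SSVEP model: 90.3% accuracy, 3.65 s per selection
print(round(itr_bits_per_min(4, 0.903, 3.65), 1))
```

Plugging in the 4-class SSVEP figures gives roughly 22.8 bits/min, in the same range as the reported 24.7 bits/min; the exact value depends on how the per-selection time is measured.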


Computational Intelligence and Neuroscience | 2017

Progress in EEG-Based Brain Robot Interaction Systems

Xiaoqian Mao; Mengfan Li; Wei Li; Linwei Niu; Bin Xian; Ming Zeng; Genshe Chen

The most popular noninvasive Brain Robot Interaction (BRI) technology uses the electroencephalogram- (EEG-) based Brain Computer Interface (BCI) to serve as an additional communication channel for robot control via brainwaves. This technology is promising for assisting elderly or disabled people in daily life. The key issue of a BRI system is to identify human mental activities by decoding brainwaves acquired with an EEG device. Compared with other BCI applications, such as word spellers, the development of these applications may be more challenging, since controlling robot systems via brainwaves must consider real-time feedback from the surrounding environment, robot mechanical kinematics and dynamics, as well as robot control architecture and behavior. This article reviews the major techniques needed for developing BRI systems. We first briefly introduce the background and development of mind-controlled robot technologies. Second, we discuss the EEG-based brain signal models with respect to generating principles, evoking mechanisms, and experimental paradigms. Subsequently, we review in detail the commonly used methods for decoding brain signals, namely preprocessing, feature extraction, and feature classification, and summarize several typical application examples. Next, we describe a few BRI applications, including wheelchairs, manipulators, drones, and humanoid robots, with respect to synchronous and asynchronous BCI-based techniques. Finally, we address some existing problems and challenges facing future BRI techniques.
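The decoding stages the review names (preprocessing, feature extraction, feature classification) can be illustrated with a deliberately minimal numpy sketch on synthetic single-channel data; this is a generic illustration of the pipeline, not any system from the reviewed literature, and the sampling rate and noise level are assumed:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250                      # sampling rate in Hz (assumed)
t = np.arange(fs) / fs        # 1-second epochs

def make_epoch(freq):
    """Synthetic single-channel epoch: a sinusoid buried in noise."""
    return np.sin(2 * np.pi * freq * t) + 0.5 * rng.standard_normal(fs)

def band_power_features(epoch):
    """Feature extraction: log power in a few canonical EEG bands."""
    spec = np.abs(np.fft.rfft(epoch)) ** 2
    freqs = np.fft.rfftfreq(len(epoch), 1 / fs)
    bands = [(4, 8), (8, 13), (13, 30)]   # theta, alpha, beta
    return np.array([np.log(spec[(freqs >= lo) & (freqs < hi)].sum())
                     for lo, hi in bands])

# Two "mental states": 10 Hz (alpha) vs 20 Hz (beta) dominant activity
train = {0: [band_power_features(make_epoch(10)) for _ in range(20)],
         1: [band_power_features(make_epoch(20)) for _ in range(20)]}
means = {c: np.mean(f, axis=0) for c, f in train.items()}

def classify(epoch):
    """Nearest-class-mean classification on band-power features."""
    f = band_power_features(epoch)
    return min(means, key=lambda c: np.linalg.norm(f - means[c]))

print(classify(make_epoch(10)), classify(make_epoch(20)))
```

Real systems replace each stage with stronger components (spatial filtering, LDA or SVM classifiers), but the three-stage structure is the same.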


International Journal of Neural Systems | 2016

Increasing N200 Potentials Via Visual Stimulus Depicting Humanoid Robot Behavior.

Mengfan Li; Wei Li; Huihui Zhou

Achieving recognizable visual event-related potentials plays an important role in improving the success rate of telepresence control of a humanoid robot via N200 or P300 potentials. The aim of this research is to investigate ways to induce N200 potentials with obvious features by flashing robot images (images with meaningful information) and by flashing pictures containing only solid color squares (pictures with incomprehensible information). Comparative studies have shown that robot images evoke N200 potentials with recognizable negative peaks at approximately 260 ms in the frontal and central areas. The negative peak amplitudes increase, on average, from 1.2 μV, induced by flashing the squares, to 6.7 μV, induced by flashing the robot images. The data analyses show that the N200 potentials induced by the robot image stimuli exhibit recognizable features. Compared with the square stimuli, the robot image stimuli increase the average accuracy rate by 9.92%, from 83.33% to 93.25%, and the average information transfer rate by 24.56 bits/min, from 72.18 bits/min to 96.74 bits/min, in a single repetition. This finding implies that the robot images might provide the subjects with more information for understanding the meanings of the visual stimuli and help them concentrate more effectively on their mental activities.
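Peak measurements like these (a negative deflection near 260 ms, amplitudes of a few microvolts) are typically obtained by averaging stimulus-locked epochs and searching for the minimum in a post-stimulus window. A sketch on synthetic data, with every number (sampling rate, noise level, peak shape) assumed rather than taken from the paper:

```python
import numpy as np

fs = 500                                  # sampling rate in Hz (assumed)
t = np.arange(int(0.6 * fs)) / fs         # 0..600 ms post-stimulus

def synthetic_epoch(rng, n200_uv=-6.7):
    """One stimulus-locked epoch: a Gaussian-shaped negative deflection
    centred at 260 ms (the N200) plus background noise."""
    erp = n200_uv * np.exp(-((t - 0.26) ** 2) / (2 * 0.03 ** 2))
    return erp + 3.0 * rng.standard_normal(t.size)

rng = np.random.default_rng(1)
# Averaging across epochs suppresses noise by ~1/sqrt(n) and leaves the ERP
avg = np.mean([synthetic_epoch(rng) for _ in range(30)], axis=0)

# Negative peak search restricted to a 200-320 ms window
win = (t >= 0.20) & (t <= 0.32)
peak_idx = np.argmin(avg[win])
peak_latency_ms = t[win][peak_idx] * 1000
peak_amplitude_uv = avg[win][peak_idx]
print(f"N200 peak: {peak_amplitude_uv:.1f} uV at {peak_latency_ms:.0f} ms")
```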


Frontiers in Systems Neuroscience | 2015

Control of humanoid robot via motion-onset visual evoked potentials

Wei Li; Mengfan Li; Jing Zhao

This paper investigates controlling humanoid robot behavior via motion-onset specific N200 potentials. In this study, N200 potentials are induced by moving a blue bar through robot images that intuitively represent the robot behaviors to be controlled with the mind. We present the individual impact of each subject on N200 potentials and discuss how to deal with this individuality to obtain high accuracy. The study results document an off-line average accuracy of 93% for hitting targets across five subjects, so we use this major component of the motion-onset visual evoked potential (mVEP) to code people's mental activities and to perform two types of on-line operation tasks: navigating a humanoid robot in an office environment with an obstacle and picking up an object. We discuss the factors that affect the on-line control success rate and the total time for completing an on-line operation task.


World Congress on Intelligent Control and Automation | 2014

SSVEP-based hierarchical architecture for control of a humanoid robot with mind

Jing Zhao; Qing-Hao Meng; Wei Li; Mengfan Li; Genshe Chen

In this paper, we present an SSVEP-based hierarchical architecture for controlling a humanoid robot with the mind, consisting of five decision layers from the robot-state level to the implementation level. This architecture is able to control a variety of humanoid robot behaviors at different levels to perform a complex task. We implement this hierarchical architecture on our Cerebot platform and test the control architecture in a multi-task experiment. We compare its control performance with that achieved under manual control by an experienced operator. The results show that an architecture that coordinates multi-layer decisions and fuses human and robot actions is a good way to address the limited information transfer rate (ITR) of mind-controlled humanoid robot systems using current EEG-based BCI technology.


Journal of Visualized Experiments | 2015

SSVEP-based Experimental Procedure for Brain-Robot Interaction with Humanoid Robots.

Jing Zhao; Wei Li; Xiaoqian Mao; Mengfan Li

Brain-Robot Interaction (BRI), which provides an innovative communication pathway between a human and a robotic device via brain signals, is promising for helping the disabled in their daily lives. The overall goal of our method is to establish an SSVEP-based experimental procedure by integrating multiple software programs, such as OpenViBE, Choregraph, and Central software, as well as user-developed programs written in C++ and MATLAB, to enable the study of brain-robot interaction with humanoid robots. This is achieved by first placing EEG electrodes on a human subject to measure the brain responses through an EEG data acquisition system. A user interface is used to elicit SSVEP responses and to display video feedback in the closed-loop control experiments. The second step is to record the EEG signals of first-time subjects, to analyze their SSVEP features offline, and to train the classifier for each subject. Next, the Online Signal Processor and the Robot Controller are configured for the online control of a humanoid robot. As the final step, the subject completes three specific closed-loop control experiments in different environments to evaluate the brain-robot interaction performance. The advantage of this approach is its reliability and flexibility, because it is developed by integrating multiple software programs. The results show that, using this approach, the subject is capable of interacting with the humanoid robot via brain signals. This allows the mind-controlled humanoid robot to perform typical tasks that are popular in robotics research and are helpful for assisting the disabled.
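For the offline SSVEP feature-analysis step, a common choice in the literature is canonical correlation analysis (CCA) against sine/cosine references at each candidate frequency. The paper builds on OpenViBE toolchains rather than this exact code, so the numpy sketch below is only illustrative, with synthetic two-channel data:

```python
import numpy as np

fs, dur = 250, 2.0                         # sampling rate and window (assumed)
t = np.arange(int(fs * dur)) / fs

def reference_signals(freq, n_harmonics=2):
    """Sine/cosine reference set at the stimulus frequency and its harmonics."""
    refs = []
    for h in range(1, n_harmonics + 1):
        refs += [np.sin(2 * np.pi * h * freq * t), np.cos(2 * np.pi * h * freq * t)]
    return np.column_stack(refs)

def max_canonical_corr(X, Y):
    """Largest canonical correlation = top singular value of Qx.T @ Qy,
    where Qx, Qy are orthonormal bases of the centred column spaces."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False).max()

def detect_ssvep(eeg, candidates):
    """Pick the candidate frequency whose references correlate best with the EEG."""
    return max(candidates, key=lambda f: max_canonical_corr(eeg, reference_signals(f)))

# Synthetic 2-channel EEG with a 10 Hz SSVEP buried in noise
rng = np.random.default_rng(2)
eeg = np.column_stack([np.sin(2 * np.pi * 10 * t + 0.3) + rng.standard_normal(t.size),
                       np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)])
print(detect_ssvep(eeg, [8.57, 10.0, 12.0, 15.0]))
```

Per-subject training, as described in the procedure, would then amount to choosing thresholds and window lengths on each subject's offline recordings.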


World Congress on Intelligent Control and Automation | 2014

Control of a humanoid robot via N200 potentials

Mengfan Li; Wei Li; Jing Zhao; Qing-Hao Meng; Genshe Chen

In this paper, we present an N200 model for controlling a humanoid robot with the mind. The N200 is a major component of the motion-onset visual evoked potential (mVEP), which can be used to detect a subject's intention. To acquire N200 potentials, we design a visual stimulus interface in which a stimulus is activated by a bar scanning over the image of a humanoid robot behavior. By analyzing the brain signals induced by this kind of stimulus and computing several system indices, the results of this study demonstrate that the designed interface can induce prominent N200 potentials, and that the P300 component also induced in this experiment can serve as an additional characteristic of the feature vector to contribute to the classification. To the best of our knowledge, this paper is the first report of applying an N200 model to control a humanoid robot with visual feedback in real time.


International Conference on Mechatronics and Machine Vision in Practice | 2016

Operating an underwater manipulator via P300 brainwaves

Jin Zhang; Wei Li; Jiancheng Yu; Xiaoqian Mao; Mengfan Li; Genshe Chen

It would be difficult and stressful for a single operator to operate an underwater manipulator with both hands in deep-sea environments while also having to monitor or manipulate additional equipment. In order to reduce the operating pressure and make full use of the operator's potential, in this paper we propose a control strategy for operating the underwater manipulator via P300 brainwaves, which provides the operator with a new way to operate the underwater manipulator without the need for both hands. In this case, the two hands can be used to manipulate other equipment. The manipulator is a master-slave servo hydraulic manipulator with seven functions, consisting of six degrees of freedom (DOFs) and a parallel gripper for manipulation. A P300 interface is designed by considering the operation tasks of the underwater manipulator. It is a 3×3 image matrix in which each image corresponds to an underwater manipulator behavior. An experimental platform, in which a virtual underwater manipulator is developed, is used for validating the feasibility and effectiveness of the proposed brainwave-based strategy. Eight subjects are invited to perform a typical underwater operational task, grasping a marine organism sample, on the virtual experimental platform. The experimental results demonstrate that the proposed method is feasible and effective.
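Matrix-style P300 interfaces like the 3×3 layout described are commonly decoded by intersecting the row and column whose flashes elicit the strongest classifier scores. Whether the authors flash rows/columns or individual images is not stated, and the behavior labels below are placeholders, so this is only a generic sketch of the matrix-decoding idea:

```python
import numpy as np

rng = np.random.default_rng(3)

# Placeholder labels for the 3x3 stimulus matrix; the paper does not list the
# actual manipulator behaviors, so these names are hypothetical.
matrix = np.array([["reach-left", "reach-forward", "reach-right"],
                   ["lower-arm", "grasp", "raise-arm"],
                   ["rotate-left", "rotate-right", "release"]])

def decode_p300(scores_rows, scores_cols):
    """Row/column P300 decoding: the attended cell is the intersection of the
    row and the column whose flashes drew the strongest classifier scores."""
    return matrix[np.argmax(scores_rows), np.argmax(scores_cols)]

# Simulated classifier scores for 5 repetitions of the 3 row and 3 column
# flashes; the subject attends the centre cell, boosting row 1 and column 1.
target_row, target_col = 1, 1
rows = 0.3 * rng.standard_normal((5, 3))
cols = 0.3 * rng.standard_normal((5, 3))
rows[:, target_row] += 1.0
cols[:, target_col] += 1.0
print(decode_p300(rows.sum(axis=0), cols.sum(axis=0)))
```

Summing scores over repetitions before taking the argmax is what makes accuracy climb with the number of repetitions per trial.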

Collaboration


Dive into Mengfan Li's collaborations.

Top Co-Authors

Wei Li, Tsinghua University

Huihui Zhou, McGovern Institute for Brain Research

Linwei Niu, West Virginia State University