Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Genshe Chen is active.

Publication


Featured research published by Genshe Chen.


Revista De Informática Teórica E Aplicada | 2014

A P300 Model for Cerebot – A Mind-Controlled Humanoid Robot

Mengfan Li; Wei Li; Jing Zhao; Qing-Hao Meng; Ming Zeng; Genshe Chen

In this paper, we present a P300 model for control of Cerebot, a mind-controlled humanoid robot, covering a procedure for acquiring P300 signals, an analysis of their topographical distribution, and a classification approach for identifying subjects' mental activities related to robot-walking behavior.
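
The classification step described above can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' code: epochs time-locked to each stimulus are averaged, and the stimulus whose average shows the strongest response in a window around 300 ms is taken as the attended target. The sampling rate and the 250-450 ms window are illustrative assumptions.

```python
SAMPLE_RATE_HZ = 250  # assumed EEG sampling rate

def average_epochs(epochs):
    """Point-wise average of repeated epochs (equal-length lists of samples)."""
    n = len(epochs)
    return [sum(col) / n for col in zip(*epochs)]

def p300_score(avg_epoch, start_ms=250, end_ms=450, rate=SAMPLE_RATE_HZ):
    """Mean amplitude of the averaged epoch inside the assumed P300 window."""
    lo = int(start_ms / 1000 * rate)
    hi = int(end_ms / 1000 * rate)
    window = avg_epoch[lo:hi]
    return sum(window) / len(window)

def pick_target(epochs_by_stimulus):
    """Return the stimulus id with the strongest P300-window response."""
    scores = {stim: p300_score(average_epochs(eps))
              for stim, eps in epochs_by_stimulus.items()}
    return max(scores, key=scores.get)
```

Averaging over repeated trials is what makes the small P300 deflection stand out from background EEG noise; the window bounds would be tuned per subject in practice.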


Robotics and Biomimetics | 2013

An OpenViBE-based brainwave control system for Cerebot

Jing Zhao; Qing-Hao Meng; Wei Li; Mengfan Li; Fuchun Sun; Genshe Chen

In this paper, we develop a brainwave-based control system for Cerebot, consisting of a humanoid robot and a Cerebus™ Data Acquisition System with up to 128 channels. Under the OpenViBE programming environment, the control system integrates OpenGL, OpenCV, WEBOTS, Choregraph, Central software, and user-developed programs in C++ and Matlab. The proposed system is easy to expand or upgrade. First, we describe the system structures for off-line analysis of acquired neural signals and for on-line control of a humanoid robot via brainwaves. Second, we discuss how to use the toolboxes provided with the OpenViBE environment to design three types of brainwave-based models: SSVEPs, P300s, and mu/beta rhythms. Finally, we use the Cerebot platform to investigate the three models by controlling four robot-walking behaviors: turning right, turning left, walking forward, and walking backward.


Computational Intelligence and Neuroscience | 2017

Progress in EEG-Based Brain Robot Interaction Systems

Xiaoqian Mao; Mengfan Li; Wei Li; Linwei Niu; Bin Xian; Ming Zeng; Genshe Chen

The most popular noninvasive Brain Robot Interaction (BRI) technology uses the electroencephalogram- (EEG-) based Brain Computer Interface (BCI) as an additional communication channel for robot control via brainwaves. This technology is promising for assisting elderly or disabled people in daily life. The key issue in a BRI system is identifying human mental activities by decoding brainwaves acquired with an EEG device. Compared with other BCI applications, such as word spellers, developing these applications may be more challenging, since controlling robot systems via brainwaves must account for real-time feedback from the surrounding environment, robot mechanical kinematics and dynamics, and robot control architecture and behavior. This article reviews the major techniques needed for developing BRI systems. We first briefly introduce the background and development of mind-controlled robot technologies. Second, we discuss EEG-based brain signal models with respect to generating principles, evoking mechanisms, and experimental paradigms. Subsequently, we review in detail commonly used methods for decoding brain signals, namely preprocessing, feature extraction, and feature classification, and summarize several typical application examples. Next, we describe a few BRI applications, including wheelchairs, manipulators, drones, and humanoid robots, with respect to synchronous and asynchronous BCI-based techniques. Finally, we address existing problems and challenges of BRI techniques.
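
The decoding chain the review describes (preprocessing, feature extraction, feature classification) can be sketched schematically. This is a deliberately reduced, hypothetical stand-in: real BRI systems use band-pass filters, spatial filters such as CSP, and classifiers such as LDA or SVM, whereas here each stage is the simplest possible placeholder.

```python
def preprocess(signal, k=3):
    """Smooth the raw signal with a k-point moving average (stand-in filter)."""
    half = k // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

def extract_features(signal):
    """Mean and variance of the preprocessed signal as a 2-D feature vector."""
    mean = sum(signal) / len(signal)
    var = sum((x - mean) ** 2 for x in signal) / len(signal)
    return (mean, var)

def nearest_mean_classify(features, class_means):
    """Assign the label whose stored mean feature vector is closest."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(class_means, key=lambda label: dist(features, class_means[label]))
```

The point is the pipeline shape, not the specific operators: each stage can be swapped for a stronger method without changing the overall structure.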


IEEE Transactions on Industrial Electronics | 2017

Development of a Virtual Platform for Telepresence Control of an Underwater Manipulator Mounted on a Submersible Vehicle

Jin Zhang; Wei Li; Jiancheng Yu; Qifeng Zhang; Shengguo Cui; Yam Li; Shuo Li; Genshe Chen

This paper develops a virtual platform of an underwater manipulator mounted on a submersible vehicle, built in the three-dimensional simulator "Webots", for teleoperation through a replica master arm. The graphical, kinematic, and dynamic models of the manipulator are based on a seven-function master-slave servo hydraulic manipulator with six degrees of freedom and a parallel gripper, while the "Jiaolong" manned deep-sea submersible, which operates down to 7000 m below the sea surface, is chosen as the carrier of the underwater manipulator. This study uses the virtual platform to train operators in telepresence control of the virtual manipulator for basic tasks in subsea environments; training must account for uncertain external disturbances and the visual effects of subsea environments. To demonstrate the feasibility and effectiveness of the virtual platform, we design two typical underwater tasks: grasping a marine-organism sample and reaching a given position. This paper presents comparative studies of 1) the performance of remotely controlling the virtual manipulator versus the real manipulator, and 2) the operating performance of three operators before and after training on the platform.


World Congress on Intelligent Control and Automation | 2014

SSVEP-based hierarchical architecture for control of a humanoid robot with mind

Jing Zhao; Qing-Hao Meng; Wei Li; Mengfan Li; Genshe Chen

In this paper, we present an SSVEP-based hierarchical architecture for mind control of a humanoid robot, consisting of five decision layers from the robot-state level to the implementation level. This architecture can control a variety of humanoid robot behaviors at different levels to perform a complex task. We implement the hierarchical architecture on our Cerebot platform and test it in a multi-task experiment, comparing its control performance with that achieved by an experienced operator using manual control. The results show that coordinating multi-layer decisions and fusing human and robot actions in this architecture is an effective way to address the limited information transfer rate (ITR) of mind-controlled humanoid robot systems built on current EEG-based BCI technology.


IEEE Transactions on Cognitive and Developmental Systems | 2017

Behavior-Based SSVEP Hierarchical Architecture for Telepresence Control of Humanoid Robot to Achieve Full-Body Movement

Jing Zhao; Wei Li; Xiaoqian Mao; Hong Hu; Linwei Niu; Genshe Chen

The challenge in telepresence control of a humanoid robot through a steady-state visual evoked potential (SSVEP) based model is to rapidly and accurately control the robot's full-body movement, because the subject must simultaneously recognize complex natural environments from live video feedback and activate the proper mental states by targeting the visual stimuli. To mitigate this problem, this paper presents a behavior-based hierarchical architecture, which coordinates a large number of robot behaviors using only the five most effective stimuli. We defined and implemented fourteen robot behaviors for motion control and object manipulation, encoded them through SSVEP visual stimuli, and classified them into four behavioral sets. We proposed switch mechanisms in the hierarchical architecture to coordinate these behaviors and control the full-body movement of a NAO humanoid robot. To improve operation performance, we investigated the individual sensitivities of visual stimuli and allocated the stimulus targets according to the frequency-response properties of individual subjects. We also compared different types of walking strategies. The experimental results showed that the behavior-based SSVEP hierarchical architecture enabled the humanoid robot to complete an operation task, including navigating to an object and picking it up, with a fast operation time and a low chance of collision in an environment cluttered with obstacles.
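
The switch mechanism that lets five stimuli address fourteen behaviors can be sketched as follows. This is a hypothetical illustration of the idea only: the set names, behaviors, and the convention of reserving one stimulus for switching sets are assumptions for the sketch, not details taken from the paper.

```python
# Illustrative behavior sets; the paper's actual fourteen behaviors and
# four sets are not reproduced here.
BEHAVIOR_SETS = {
    "locomotion": ["walk_forward", "walk_backward", "turn_left", "turn_right"],
    "manipulation": ["reach", "grasp", "release", "lift"],
}
SET_ORDER = ["locomotion", "manipulation"]
SWITCH_STIMULUS = 4  # assumed: the fifth stimulus cycles through the sets

class HierarchicalController:
    def __init__(self):
        self.set_index = 0

    @property
    def active_set(self):
        return SET_ORDER[self.set_index]

    def on_stimulus(self, stimulus):
        """Map a detected SSVEP stimulus (0-4) to a behavior or a set switch."""
        if stimulus == SWITCH_STIMULUS:
            self.set_index = (self.set_index + 1) % len(SET_ORDER)
            return ("switched_to", self.active_set)
        return ("behavior", BEHAVIOR_SETS[self.active_set][stimulus])
```

Reusing the same few stimuli across sets is what keeps the visual interface small while the behavior repertoire grows.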


International Workshop on Advanced Motion Control | 2016

Kinect-based control of a DaNI robot via body gesture

Xiaoqian Mao; Robert Swanson; Micheal Grishaber; Wei Li; Genshe Chen

This paper presents a body-gesture-controlled mobile robot system consisting of a DaNI robot, a Kinect sensor, and a pan-tilt camera. The Kinect sensor tracks the operator's skeleton and joints for controlling the robot, while the pan-tilt camera feeds back live video of the surrounding environment. The DaNI robot platform is a powerful tool for teaching robotics and mechatronics concepts or for developing a robot prototype with LabVIEW Robotics. The body gesture-based controller teleoperates the DaNI robot by interpreting the movements of the operator's two hands.
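
One way such a two-hand mapping could look is sketched below. This is a hypothetical illustration, not the paper's scheme: the coordinate convention (hand height relative to the shoulder, as reported by Kinect skeleton tracking), the raise threshold, and the command names are all assumptions.

```python
RAISE_THRESHOLD = 0.2  # assumed: metres above the shoulder counts as "raised"

def gesture_to_command(left_hand_y, right_hand_y, shoulder_y=0.0):
    """Interpret two hand heights as a drive command:
    both hands raised -> forward; one hand raised -> turn; none -> stop."""
    left_up = left_hand_y - shoulder_y > RAISE_THRESHOLD
    right_up = right_hand_y - shoulder_y > RAISE_THRESHOLD
    if left_up and right_up:
        return "forward"
    if left_up:
        return "turn_left"
    if right_up:
        return "turn_right"
    return "stop"
```

A threshold well above sensor noise keeps ordinary posture changes from being read as commands.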


Proceedings of SPIE | 2013

A fuzzy-logic based approach to color segmentation

Guoxin Zhao; Yunyi Li; Genshe Chen; Qing-Hao Meng; Wei Li

This paper presents a fuzzy-logic-based approach to segmenting color images in three steps. First, we discuss a fuzzy color extractor for extracting fuzzy color components. Second, we present techniques for iteratively generating reference patterns (seeds) used to extract the color components. Finally, we use the region-growing method to check the connectivity of segmented subimages. Unlike existing color segmentation methods, which produce a crisp segmentation of color images, the proposed approach generates a fuzzy color segmentation that can assign the same pixel to several fuzzy sets. As an example, we apply the approach to segmenting chemical plumes in images taken in undersea environments.
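
The fuzzy-extractor idea (a pixel belonging to several color sets at once) can be sketched as follows. This is a minimal, hypothetical illustration: the linear membership function, the city-block color distance, and the spread and threshold values are assumptions, and the region-growing step is omitted.

```python
def membership(pixel, seed, spread=300.0):
    """Fuzzy membership of an RGB pixel to a seed color: 1.0 at the seed,
    falling linearly to 0.0 at a city-block distance of `spread`."""
    dist = sum(abs(p - s) for p, s in zip(pixel, seed))
    return max(0.0, 1.0 - dist / spread)

def fuzzy_segment(image, seeds, threshold=0.1):
    """Label each pixel with every seed whose membership exceeds threshold,
    so one pixel may belong to several fuzzy color sets."""
    labels = []
    for row in image:
        labels.append([
            {name for name, seed in seeds.items()
             if membership(px, seed) > threshold}
            for px in row
        ])
    return labels
```

A crisp segmenter would force each pixel into exactly one class; here a mixed-color pixel keeps partial membership in every nearby class, which is what the region-growing stage would then resolve.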


Revista De Informática Teórica E Aplicada | 2017

Rotation Vector Sensor-Based Remote Control of a Mobile Robot via Google Glass

Xi Wen; Yu Song; Wei Li; Genshe Chen

Google Glass, as a representative wearable Android device, provides a new mode of human-machine interaction. In this paper, we propose a method for controlling a Surveyor mobile robot via Google Glass. To do so, we establish Wi-Fi communication between the Google Glass and the mobile robot, wear the Google Glass to observe the robot's state and its surroundings, and navigate the mobile robot through head movements detected by the rotation vector sensor on the Google Glass. The method frees both of the operator's hands while navigating the robot, with no need for a computer monitor. To demonstrate the flexibility of the proposed method, we control the robot through a maze in a simulated environment via the Google Glass.
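
The head-movement-to-command mapping could look like the sketch below. This is a hypothetical illustration only: the pitch/yaw convention, the dead zone, and the command names are assumptions, not details from the paper.

```python
DEAD_ZONE_DEG = 10.0  # assumed: ignore small involuntary head movements

def head_to_command(pitch_deg, yaw_deg):
    """Translate head orientation (degrees) into a drive command.
    Within the dead zone the robot stops; otherwise the larger of the
    pitch and yaw deflections selects the command."""
    if abs(pitch_deg) <= DEAD_ZONE_DEG and abs(yaw_deg) <= DEAD_ZONE_DEG:
        return "stop"
    if abs(pitch_deg) >= abs(yaw_deg):
        return "forward" if pitch_deg < -DEAD_ZONE_DEG else "backward"
    return "turn_left" if yaw_deg < 0 else "turn_right"
```

The dead zone is the key usability detail: without it, the natural jitter of a head-mounted sensor would keep the robot in constant motion.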


International Workshop on Advanced Motion Control | 2016

Rotation vector sensor-based remote control of a humanoid robot through a Google Glass

Xi Wen; Yu Song; Wei Li; Genshe Chen; Bin Xian

This paper develops a Google Glass-based system for hands-free remote control of humanoid robots. The system translates the operator's head movements into control instructions via the rotation vector sensor on the Google Glass. When a head movement is detected, the corresponding control instruction is sent to the robot over Wi-Fi. To verify the system, we design an experiment in which an operator navigates a NAO humanoid robot around obstacles. The results show that the robot smoothly passes all obstacles and walks to the destination with the operator's hands entirely free.

Collaboration


Dive into Genshe Chen's collaborations.

Top Co-Authors


Wei Li

Tsinghua University


Jiancheng Yu

Chinese Academy of Sciences


Linwei Niu

West Virginia State University


Jin Zhang

Chinese Academy of Sciences
