Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Xiaoqian Mao is active.

Publication


Featured research published by Xiaoqian Mao.


Computational Intelligence and Neuroscience | 2017

Progress in EEG-Based Brain Robot Interaction Systems

Xiaoqian Mao; Mengfan Li; Wei Li; Linwei Niu; Bin Xian; Ming Zeng; Genshe Chen

The most popular noninvasive Brain Robot Interaction (BRI) technology uses the electroencephalogram- (EEG-) based Brain Computer Interface (BCI) to serve as an additional communication channel for robot control via brainwaves. This technology is promising for assisting elderly or disabled people in daily life. The key issue of a BRI system is to identify human mental activities by decoding brainwaves acquired with an EEG device. Compared with other BCI applications, such as word spellers, the development of these applications may be more challenging since control of robot systems via brainwaves must consider real-time feedback from the surrounding environment, robot mechanical kinematics and dynamics, as well as robot control architecture and behavior. This article reviews the major techniques needed for developing BRI systems. We first briefly introduce the background and development of mind-controlled robot technologies. Second, we discuss EEG-based brain signal models with respect to generating principles, evoking mechanisms, and experimental paradigms. Subsequently, we review in detail commonly used methods for decoding brain signals, namely preprocessing, feature extraction, and feature classification, and summarize several typical application examples. Next, we describe a few BRI applications, including wheelchairs, manipulators, drones, and humanoid robots, with respect to synchronous and asynchronous BCI-based techniques. Finally, we address some existing problems and challenges of future BRI techniques.
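The decoding stages named in this abstract (preprocessing, feature extraction, feature classification) can be sketched minimally as below. The band limits, one-second window, and nearest-centroid classifier are illustrative assumptions for a motor-imagery-style signal, not the methods of any specific system in the review.

```python
import numpy as np

def bandpass_fft(signal, fs, low, high):
    """Crude FFT-mask bandpass filter (illustrative preprocessing step)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[(freqs < low) | (freqs > high)] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

def band_power_features(signal, fs, bands=((8, 12), (12, 30))):
    """Feature extraction: mean power in the mu (8-12 Hz) and beta (12-30 Hz) bands."""
    return np.array([np.mean(bandpass_fft(signal, fs, lo, hi) ** 2)
                     for lo, hi in bands])

def nearest_centroid_classify(features, centroids):
    """Feature classification: assign the feature vector to the nearest class centroid."""
    dists = [np.linalg.norm(features - c) for c in centroids]
    return int(np.argmin(dists))

# Toy example: a strong 10 Hz rhythm should yield a mu-dominated feature vector.
np.random.seed(0)
fs = 250
t = np.arange(fs) / fs
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(fs)
feats = band_power_features(eeg, fs)
```

Real BRI systems typically replace the FFT mask with IIR/FIR filters and the centroid rule with trained classifiers (LDA, SVM), but the three-stage structure is the same.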


IEEE Transactions on Cognitive and Developmental Systems | 2017

Behavior-Based SSVEP Hierarchical Architecture for Telepresence Control of Humanoid Robot to Achieve Full-Body Movement

Jing Zhao; Wei Li; Xiaoqian Mao; Hong Hu; Linwei Niu; Genshe Chen

The challenge in telepresence control of a humanoid robot through a steady-state visual evoked potential (SSVEP)-based model is to rapidly and accurately control full-body movement of the robot, because a subject has to synchronously recognize complex natural environments from live video feedback and activate the proper mental states by targeting the visual stimuli. To mitigate this problem, this paper presents a behavior-based hierarchical architecture that coordinates a large number of robot behaviors using only the five most effective stimuli. We defined and implemented fourteen robot behaviors for motion control and object manipulation, encoded them through the visual stimuli of SSVEPs, and classified them into four behavioral sets. We proposed switch mechanisms in the hierarchical architecture to coordinate these behaviors and control the full-body movement of a NAO humanoid robot. To improve operation performance, we investigated the individual sensitivities of visual stimuli and allocated the stimulus targets according to the frequency-responsive properties of individual subjects. We also compared different walking strategies. The experimental results showed that the behavior-based SSVEP hierarchical architecture enabled the humanoid robot to complete an operation task, including navigating to an object and picking it up, with a fast operation time and a low chance of collision in an environment cluttered with obstacles.
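The core idea above (a few stimuli selecting among many behaviors via set switching) can be sketched as a tiny state machine. The set contents, behavior names, and the choice of which stimulus acts as the switch command are all made up for illustration; only the counts (five stimuli, fourteen behaviors, four sets) follow the abstract.

```python
# Hypothetical behavior sets; names are illustrative, not the paper's.
BEHAVIOR_SETS = [
    ["walk_forward", "walk_backward", "turn_left", "turn_right"],
    ["side_step_left", "side_step_right", "stop", "stand_up"],
    ["look_left", "look_right", "look_up", "look_down"],
    ["reach", "grasp"],
]

class HierarchicalController:
    """Map a fixed set of SSVEP stimuli onto many behaviors via set switching."""

    def __init__(self, behavior_sets):
        self.sets = behavior_sets
        self.current = 0  # index of the active behavior set

    def on_stimulus(self, stimulus):
        # Stimulus 4 is reserved here as the switch command between sets.
        if stimulus == 4:
            self.current = (self.current + 1) % len(self.sets)
            return None  # no behavior triggered, just a set switch
        behaviors = self.sets[self.current]
        if stimulus < len(behaviors):
            return behaviors[stimulus]
        return None  # stimulus unused in this set

ctrl = HierarchicalController(BEHAVIOR_SETS)
first = ctrl.on_stimulus(0)   # selects the first behavior of set 0
ctrl.on_stimulus(4)           # switch to set 1
second = ctrl.on_stimulus(2)  # selects the third behavior of set 1
```

With 5 stimuli and 4 sets, one reserved switch stimulus gives 4 selectable behaviors per set, which is how 14 behaviors stay reachable from only 5 visual targets.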


international workshop on advanced motion control | 2016

Kinect-based control of a DaNI robot via body gesture

Xiaoqian Mao; Robert Swanson; Micheal Grishaber; Wei Li; Genshe Chen

This paper presents a body-gesture-controlled mobile robot system consisting of a DaNI robot, a Kinect sensor, and a pan-tilt camera. The Kinect sensor is used to track the skeleton and joints of an operator for controlling the robot, while the pan-tilt camera is used to feed back live video of the surrounding environment. The DaNI robot platform is a powerful tool for teaching robotics and mechatronics concepts or for developing a robot prototype with LabVIEW Robotics. The body-gesture-based controller telepresence-controls the DaNI robot by interpreting the movements of the operator's two hands.


international conference on swarm intelligence | 2016

A Fleet of Chemical Plume Tracers with the Distributed Architecture Built upon DaNI Robots

David Oswald; Henry Lin; Xiaoqian Mao; Wei Li; Linwei Niu; Xiaosu Chen

This paper presents a fleet of chemical plume tracers with a distributed architecture developed at California State University, Bakersfield (CSUB). Each chemical plume tracer, built upon a DaNI robot, integrates multiple sensors, including a wind sensor, chemical sensors, a wireless router, and a network camera. The DaNI robot is an advanced platform embedded with a single control board, the sbRIO-9632, consisting of a 400 MHz industrial processor, a 2M-gate Xilinx Spartan FPGA, and a variety of I/Os. In order to demonstrate the feasibility of the designed chemical plume tracers, experiments on moth-inspired plume tracing are conducted in a turbulent airflow environment. This fleet of chemical plume tracers is a powerful tool for investigating algorithms for tracking and mapping chemical plumes via swarm intelligence.


Journal of Visualized Experiments | 2015

SSVEP-based Experimental Procedure for Brain-Robot Interaction with Humanoid Robots.

Jing Zhao; Wei Li; Xiaoqian Mao; Mengfan Li

Brain-Robot Interaction (BRI), which provides an innovative communication pathway between a human and a robotic device via brain signals, holds promise for helping the disabled in their daily lives. The overall goal of our method is to establish an SSVEP-based experimental procedure by integrating multiple software programs, such as OpenViBE, Choregraphe, and Central, as well as user-developed programs written in C++ and MATLAB, to enable the study of brain-robot interaction with humanoid robots. This is achieved by first placing EEG electrodes on a human subject to measure the brain responses through an EEG data acquisition system. A user interface is used to elicit SSVEP responses and to display video feedback in the closed-loop control experiments. The second step is to record the EEG signals of first-time subjects, to analyze their SSVEP features offline, and to train the classifier for each subject. Next, the Online Signal Processor and the Robot Controller are configured for online control of a humanoid robot. As the final step, the subject completes three specific closed-loop control experiments within different environments to evaluate the brain-robot interaction performance. The advantage of this approach is its reliability and flexibility, because it is developed by integrating multiple software programs. The results show that, using this approach, the subject is capable of interacting with the humanoid robot via brain signals. This allows the mind-controlled humanoid robot to perform typical tasks that are popular in robotics research and are helpful in assisting the disabled.
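The offline SSVEP feature analysis mentioned above amounts to deciding which flicker frequency dominates the recorded EEG. A minimal spectral-power sketch follows; the candidate frequencies, window length, and 0.5 Hz tolerance are illustrative assumptions, and real procedures commonly use canonical correlation analysis or per-subject trained classifiers instead.

```python
import numpy as np

def detect_ssvep_target(eeg, fs, stimulus_freqs):
    """Return the index of the candidate stimulus frequency with the largest
    spectral power in the EEG window (a stand-in for offline SSVEP analysis)."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    # Sum power in a narrow band around each candidate stimulus frequency.
    powers = [spectrum[np.abs(freqs - f) < 0.5].sum() for f in stimulus_freqs]
    return int(np.argmax(powers))

# Toy example: a 7.5 Hz flicker response among candidates at 6, 7.5, and 10 Hz.
np.random.seed(1)
fs = 256
t = np.arange(2 * fs) / fs            # two-second analysis window
eeg = np.sin(2 * np.pi * 7.5 * t) + 0.2 * np.random.randn(len(t))
target = detect_ssvep_target(eeg, fs, [6.0, 7.5, 10.0])
```

In a closed-loop setup, the detected index would be passed to the Robot Controller as the subject's selected command.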


instrumentation and measurement technology conference | 2017

Navigation of a humanoid robot via head gestures based on global and local live videos on Google Glass

Zibo Wang; Xi Wen; Yu Song; Xiaoqian Mao; Wei Li; Genshe Chen

The navigation of a mobile robot is a central problem in robotics research. A smart wearable device, Google Glass, provides a new way to achieve human-machine interaction. This paper presents a navigation strategy for a NAO humanoid robot via head gestures, based on global and local live videos displayed on Google Glass. We develop a module that establishes a connection between the Google Glass and the robot and detects head gestures by fusing multi-sensor data through a complementary filter to eliminate drift of the head gesture reference. We conduct an obstacle avoidance task to validate the effectiveness of the control system. An operator wearing Google Glass was able to navigate the robot smoothly.
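The complementary-filter fusion described above (fast but drifting gyroscope integration stabilized by a drift-free absolute reference such as the accelerometer) can be sketched in a few lines. The 0.98 blend coefficient, 100 Hz rate, and constant-bias scenario are illustrative assumptions, not values from the paper.

```python
def complementary_filter(gyro_rates, ref_angles, dt, alpha=0.98):
    """Fuse gyroscope angular rate with an absolute angle reference.

    alpha close to 1 trusts the smooth (but drifting) integrated gyro;
    the remaining weight pulls the estimate toward the drift-free reference,
    bounding the accumulated drift.
    """
    angle = ref_angles[0]
    estimates = []
    for rate, ref in zip(gyro_rates, ref_angles):
        angle = alpha * (angle + rate * dt) + (1.0 - alpha) * ref
        estimates.append(angle)
    return estimates

# Toy example: a gyro with a constant 0.5 deg/s bias and no true rotation.
# Pure integration over 2 s would drift to 1.0 deg; the reference holds
# the estimate near its bounded steady-state value instead.
biased_gyro = [0.5] * 200        # deg/s readings (all bias)
reference = [0.0] * 200          # accelerometer says the head is level
est = complementary_filter(biased_gyro, reference, dt=0.01)
```

The steady state here is alpha * bias * dt / (1 - alpha) ≈ 0.245 deg, i.e. the drift is capped rather than growing without bound.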


instrumentation and measurement technology conference | 2017

An IFCE-based effective color tracking system for a humanoid robot in cluttered environments

Xiaoqian Mao; Huidong He; Wei Li; Genshe Chen

A variety of object tracking algorithms have been published in the field of image processing, but most of them are easily influenced by illumination intensity and may not be reliable in cluttered environments. This paper proposes a novel color tracking system that extracts an object of interest by color similarity, based on an Improved Fuzzy Color Extractor (IFCE). The IFCE uses the angle between two vectors to distinguish two pixels in RGB space: the length of a vector represents the illumination intensity of the pixel, and its direction corresponds to the color. Thus, the illumination intensity and the color of a pixel are separated, which adapts the IFCE to varying illumination conditions. Moreover, strict color discrimination is accomplished by setting the fuzzy parameters. A central vision tracking strategy performs the dynamic tracking process. We use the NAO robot to validate the proposed system by tracking a moving person wearing several color tags. The results show that the IFCE-based color tracking system is robust in cluttered environments with varying illumination intensity and that the NAO robot tracks the person effectively.
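The vector-angle idea at the heart of the IFCE (length encodes brightness, direction encodes color) can be illustrated directly. This sketch only shows the similarity measure; the IFCE's fuzzy parameterization and extraction logic are not reproduced, and any threshold applied to the angle would be an assumption.

```python
import math

def color_angle(p1, p2):
    """Angle in degrees between two RGB pixels viewed as 3-D vectors.

    Because the vector length encodes illumination intensity and the
    direction encodes color, the angle compares color while ignoring
    brightness.
    """
    dot = sum(a * b for a, b in zip(p1, p2))
    n1 = math.sqrt(sum(a * a for a in p1))
    n2 = math.sqrt(sum(b * b for b in p2))
    cos = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp for float safety
    return math.degrees(math.acos(cos))

# A bright and a dark pixel of the same red hue: tiny angle, same color.
same_hue = color_angle((200, 40, 40), (100, 20, 20))
# Red vs. blue: large angle, different colors regardless of brightness.
diff_hue = color_angle((200, 40, 40), (40, 40, 200))
```

This is why the measure tolerates illumination changes: scaling a pixel's RGB vector by any positive factor leaves the angle unchanged.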


Journal of Electrical and Computer Engineering | 2017

Visual Sensor Based Image Segmentation by Fuzzy Classification and Subregion Merge

Huidong He; Xiaoqian Mao; Wei Li; Linwei Niu; Genshe Chen

The extraction and tracking of targets in images captured by visual sensors have been studied extensively, and image segmentation plays an important role in such tracking systems. This paper presents a new approach to color image segmentation based on a fuzzy color extractor (FCE). Different from many existing methods, the proposed approach provides a new classification of pixels in a source color image, which usually classifies an individual pixel into several subimages by fuzzy sets. This approach exhibits two unique features, spatial proximity and color similarity, and mainly consists of two algorithms: CreateSubImage and MergeSubImage. We apply the FCE to segment the colors of test images from the UC Berkeley database in three different color spaces: RGB, HSV, and YUV. The comparative studies show that the FCE applied in the RGB space is superior to the HSV and YUV spaces. Finally, we compare the segmentation results with the Canny and LoG edge detection algorithms. The results show that the FCE-based approach performs best in color image segmentation.


Computational Intelligence and Neuroscience | 2017

Object Extraction in Cluttered Environments via a P300-Based IFCE

Xiaoqian Mao; Wei Li; Huidong He; Bin Xian; Ming Zeng; Huihui Zhou; Linwei Niu; Genshe Chen

One of the fundamental issues for robot navigation is to extract an object of interest from an image. The biggest challenges for extracting objects of interest are how to use a machine to model the objects in which a human is interested and extract them quickly and reliably under varying illumination conditions. This article develops a novel method for segmenting an object of interest in a cluttered environment by combining a P300-based brain computer interface (BCI) and an improved fuzzy color extractor (IFCE). The induced P300 potential identifies the corresponding region of interest and obtains the target of interest for the IFCE. The classification results not only represent the human mind but also deliver the associated seed pixel and fuzzy parameters to extract the specific objects in which the human is interested. Then, the IFCE is used to extract the corresponding objects. The results show that the IFCE delivers better performance than the BP network or the traditional FCE. The use of a P300-based IFCE provides a reliable solution for assisting a computer in identifying an object of interest within images taken under varying illumination intensities.


world congress on intelligent control and automation | 2016

Path finding for a NAO humanoid robot by fusing visual and proximity sensors

Xiaoqian Mao; Huidong He; Wei Li

Humanoid robot path finding is one of the core technologies in the robotics research domain. This paper presents an approach to finding a path for robot motion by fusing images taken by the NAO's camera with proximity information delivered by sonar sensors. The NAO robot captures an image of its surroundings, uses the fuzzy color extractor to segment its potential path colors, and fits a line to serve as the path using the least squares method. The NAO robot is then able to navigate automatically along the selected path. Experiments are conducted to navigate the NAO robot to a given destination and to grasp a box. In addition, the NAO robot uses its sonar sensors to detect a barrier, helping it pick up the box with its hands.
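The least-squares line-fitting step can be sketched as follows; the pixel coordinates are made up for illustration, standing in for the centers of path-colored segments extracted by the fuzzy color extractor.

```python
def least_squares_line(points):
    """Fit y = m*x + b to (x, y) points by ordinary least squares."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return m, b

# Toy example: noisy path-pixel centers lying roughly along y = 2x + 1.
pixels = [(0, 1.1), (1, 2.9), (2, 5.2), (3, 6.8), (4, 9.1)]
m, b = least_squares_line(pixels)
```

The fitted line gives the robot a single heading to follow, which is more robust than steering toward individual noisy pixel detections.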

Collaboration


Dive into Xiaoqian Mao's collaborations.

Top Co-Authors

Wei Li
Tsinghua University

Linwei Niu
West Virginia State University

Jiancheng Yu
Chinese Academy of Sciences

Jin Zhang
Chinese Academy of Sciences