Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Xin Ma is active.

Publication


Featured research published by Xin Ma.


IEEE Journal of Biomedical and Health Informatics | 2014

Depth-based human fall detection via shape features and improved extreme learning machine.

Xin Ma; Haibo Wang; Bingxia Xue; Mingang Zhou; Bing Ji; Yibin Li

Falls are one of the major causes of injury among elderly people. Using wearable devices for fall detection has a high cost and may inconvenience the daily lives of the elderly. In this paper, we present an automated fall detection approach that requires only a low-cost depth camera. Our approach combines two computer vision techniques: shape-based fall characterization and a learning-based classifier to distinguish falls from other daily actions. Given a fall video clip, we extract curvature scale space (CSS) features of human silhouettes at each frame and represent the action by a bag of CSS words (BoCSS). Then, we utilize the extreme learning machine (ELM) classifier to identify the BoCSS representation of a fall from those of other actions. To eliminate the sensitivity of ELM to its hyperparameters, we present a variable-length particle swarm optimization algorithm to optimize the number of hidden neurons and the corresponding input weights and biases of ELM. Using a low-cost Kinect depth camera, we build an action dataset that consists of six types of actions (falling, bending, sitting, squatting, walking, and lying) from ten subjects. Experiments on the dataset show that our approach can achieve up to 91.15% sensitivity, 77.14% specificity, and 86.83% accuracy. On a public dataset, our approach performs comparably to state-of-the-art fall detection methods that need multiple cameras.
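The bag-of-CSS-words step described in the abstract can be sketched as follows: each frame yields a CSS feature vector, frames are quantized against a learned codebook, and the clip becomes a normalized histogram of codeword counts. This is a minimal illustration of the general bag-of-words idea, not the paper's code; the names (`build_bocss`, `codebook`) and the tiny 2-D features are invented for the example.

```python
def nearest_codeword(frame_feature, codebook):
    """Index of the codeword closest to this frame's feature (squared Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda k: dist(frame_feature, codebook[k]))

def build_bocss(frame_features, codebook):
    """L1-normalized histogram of codeword assignments over all frames of a clip."""
    hist = [0.0] * len(codebook)
    for f in frame_features:
        hist[nearest_codeword(f, codebook)] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

codebook = [[0.0, 0.0], [1.0, 1.0]]                       # two toy codewords
frames = [[0.1, 0.0], [0.9, 1.1], [1.0, 0.9], [0.0, 0.2]] # four toy frames
print(build_bocss(frames, codebook))                      # [0.5, 0.5]
```

The resulting fixed-length histogram is what a classifier such as ELM would consume, regardless of how many frames the clip contains.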


international conference on control and automation | 2014

Combining Features for Chinese Sign Language Recognition with Kinect

Lubo Geng; Xin Ma; Bingxia Xue; Hanbo Wu; Jason Gu; Yibin Li

In this paper, we propose a novel three-dimensional combined-features method for sign language recognition. Based on the Kinect depth data and skeleton joint data, we acquire the 3D trajectories of the right hand, right wrist, and right elbow. To construct the feature vector, we combine location features with a spherical-coordinate feature representation. The spherical-coordinate representation effectively depicts the kinematic connectivity among hand, wrist, and elbow for recognition. Meanwhile, the 3D trajectory data acquired from the Kinect avoid interference from illumination changes and cluttered backgrounds. In experiments on a dataset of 20 gestures from Chinese sign language, the Extreme Learning Machine (ELM) is tested against the Support Vector Machine (SVM), and its superior recognition performance is verified.
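One plausible form of the spherical-coordinate feature mentioned above is to re-express each Cartesian joint offset (e.g., the hand relative to the elbow) as (radius, azimuth, elevation), which captures the relative geometry independent of absolute position. The function names below are illustrative, not from the paper's code.

```python
import math

def to_spherical(dx, dy, dz):
    """Cartesian offset -> (radius, azimuth, elevation)."""
    r = math.sqrt(dx * dx + dy * dy + dz * dz)
    azimuth = math.atan2(dy, dx)
    elevation = math.asin(dz / r) if r > 0 else 0.0
    return r, azimuth, elevation

def trajectory_features(hand, elbow):
    """Per-frame spherical offsets of the hand relative to the elbow."""
    return [to_spherical(hx - ex, hy - ey, hz - ez)
            for (hx, hy, hz), (ex, ey, ez) in zip(hand, elbow)]

hand = [(1.0, 0.0, 0.0), (0.0, 1.0, 1.0)]   # toy two-frame trajectory
elbow = [(0.0, 0.0, 0.0), (0.0, 0.0, 0.0)]
for r, az, el in trajectory_features(hand, elbow):
    print(round(r, 3), round(az, 3), round(el, 3))
```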


world congress on intelligent control and automation | 2014

Chinese sign language recognition with 3D hand motion trajectories and depth images

Lubo Geng; Xin Ma; Haibo Wang; Jason Gu; Yibin Li

Hand shape is an important part of sign language expression, and 3D hand motion trajectories also contain abundant information for interpreting the meaning of a sign. In this paper, a novel feature descriptor is proposed for sign language recognition: hand shape features extracted from depth images are combined with spherical coordinate (SPC) features extracted from the 3D hand motion trajectories to form the final feature representation. The new representation not only incorporates both spatial and temporal information to effectively depict the kinematic connectivity among hand, wrist, and elbow, but also avoids interference from illumination changes and cluttered backgrounds compared with other methods. A self-built dataset of 320 instances is used to evaluate the effectiveness of the combined features. In experiments with this dataset and different feature representations, the Extreme Learning Machine (ELM) is tested against the Support Vector Machine (SVM), and its superior performance is verified.
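One plausible reading of the feature-fusion step above is that the per-modality features (depth-image hand shape, SPC trajectory) are normalized separately and concatenated into the final descriptor. This is an assumption for illustration; the paper does not specify the normalization, and the function names are invented.

```python
def l2_normalize(v):
    """Scale a feature vector to unit L2 norm (pass zero vectors through)."""
    norm = sum(x * x for x in v) ** 0.5
    return [x / norm for x in v] if norm > 0 else list(v)

def fuse_features(shape_feat, spc_feat):
    """Concatenate per-modality L2-normalized features into one descriptor."""
    return l2_normalize(shape_feat) + l2_normalize(spc_feat)

desc = fuse_features([3.0, 4.0], [0.0, 2.0, 0.0])
print(desc)   # [0.6, 0.8, 0.0, 1.0, 0.0]
```

Normalizing each modality before concatenation keeps one feature type from dominating the other purely by scale.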


robotics and biomimetics | 2013

An improved extreme learning machine based on Variable-length Particle Swarm Optimization

Bingxia Xue; Xin Ma; Jason Gu; Yibin Li

Extreme Learning Machine (ELM) for the Single-hidden Layer Feedforward Neural Network (SLFN) has been attracting attention because of its faster learning speed and better generalization performance compared with traditional gradient-based learning algorithms. However, it has been proven that the generalization performance of an ELM classifier depends critically on the number of hidden neurons and the random determination of the input weights and hidden biases. In this paper, we propose a Variable-length Particle Swarm Optimization algorithm (VPSO) for ELM to automatically select the number of hidden neurons, as well as the corresponding input weights and hidden biases, for maximizing the ELM classifier's generalization performance. Experimental results have verified that the proposed VPSO-ELM scheme significantly improves the testing accuracy on classification problems.
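For context, the baseline ELM that VPSO tunes can be sketched in a few lines, assuming the standard formulation: input weights and hidden biases are drawn at random and frozen, and the output weights are solved analytically by least squares via the pseudo-inverse. This is the plain ELM, not the paper's VPSO variant, and the toy XOR data are invented for the example.

```python
import numpy as np

def elm_train(X, T, n_hidden, rng):
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (fixed)
    b = rng.normal(size=n_hidden)                 # random hidden biases (fixed)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                  # output weights by least squares
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])            # XOR targets
W, b, beta = elm_train(X, T, n_hidden=20, rng=rng)
pred = (elm_predict(X, W, b, beta) > 0.5).astype(float)
print(pred.ravel())                               # exact fit reproduces [0. 1. 1. 0.]
```

Because only `beta` is learned, and in closed form, training is fast; the cost, as the abstract notes, is that performance hinges on the random `W`, `b`, and the hidden-layer size, which is exactly what VPSO searches over.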


Control and Intelligent Systems | 2014

IMPROVED VARIABLE-LENGTH PARTICLE SWARM OPTIMIZATION FOR STRUCTURE-ADJUSTABLE EXTREME LEARNING MACHINE

Bingxia Xue; Xin Ma; Haibo Wang; Jason Gu; Yibin Li

Extreme learning machine (ELM) is one of the single hidden layer feed-forward neural networks (SLFNs). It has been widely used for multiclass classification because of its preferable generalization performance and faster learning speed. Its parameters (the input weights, hidden biases, and number of hidden neurons) have a great impact on the generalization performance of the ELM classifier. An improved variable-length particle swarm optimization (IVPSO) algorithm is proposed in this paper to automatically select the optimal structure of the ELM classifier (the number of hidden neurons with the corresponding input weights and hidden biases), maximizing the accuracy on validation data while minimizing the norm of the output weights. Experimental results verify that the new IVPSO-ELM algorithm significantly increases the testing accuracy on classification problems chosen from the UCI machine learning repository.
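The search loop that IVPSO builds on is ordinary particle swarm optimization: particles track personal and global bests and blend the two when updating their velocities. The toy sketch below uses fixed-length particles minimizing a sphere function; the variable-length encoding and the ELM-accuracy fitness from the paper are not reproduced, and the inertia/acceleration constants are conventional choices, not the paper's.

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal bests
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                      # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, val = pso_minimize(lambda x: sum(xi ** 2 for xi in x), dim=3)
print(round(val, 6))   # converges toward 0, the sphere-function minimum
```

In IVPSO-ELM, `f` would instead train an ELM from the particle's encoded weights and return a fitness combining validation accuracy and the output-weight norm, with particles of differing lengths encoding different hidden-layer sizes.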


international conference on robotics and automation | 2017

Correlation filter-based self-paced object tracking

Wenhui Huang; Jason Gu; Xin Ma; Yibin Li

Object tracking is an important capability for robots tasked with interacting with humans and the environment, and it enables robots to manipulate objects. In object tracking, selecting samples to learn a robust and efficient appearance model is a challenging task. Model learning determines both the strategy and frequency of model updating, which concerns many details that can affect the tracking results. In this paper, we propose an object tracking approach by formulating a new objective function that integrates the learning paradigm of self-paced learning into object tracking such that reliable samples can be automatically selected for model learning. Sample weights and model parameters can be learned by minimizing this single objective function under the framework of kernelized correlation filters. Moreover, a real-valued error-tolerant self-paced function with a constraint vector is proposed to combine prior knowledge, i.e., the characteristics of object tracking, with information learned during tracking. We demonstrate the robustness and efficiency of our object tracking approach on a recent object tracking benchmark data set: OTB 2013.
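The kernelized-correlation-filter framework the abstract mentions rests on solving a ridge regression over all circular shifts of the template, which diagonalizes in the Fourier domain. The sketch below shows that training step for a single-channel 1-D signal with a linear kernel; the real tracker is 2-D and multi-channel, and the signal, Gaussian label, and regularizer value here are invented for illustration.

```python
import numpy as np

def train_filter(x, y, lam=1e-2):
    """Per-frequency ridge-regression solution for a circulant correlation filter."""
    X, Y = np.fft.fft(x), np.fft.fft(y)
    return np.conj(X) * Y / (np.conj(X) * X + lam)

def respond(h_hat, z):
    """Correlation response of the trained filter on a new signal z."""
    return np.real(np.fft.ifft(h_hat * np.fft.fft(z)))

x = np.zeros(64); x[30:34] = 1.0                      # template with a "target"
y = np.exp(-0.5 * ((np.arange(64) - 32) / 2.0) ** 2)  # desired Gaussian response
h_hat = train_filter(x, y)
z = np.roll(x, 7)                                     # target shifted by 7 samples
shift = int(np.argmax(respond(h_hat, z))) - 32
print(shift)                                          # 7: the peak recovers the shift
```

The displacement of the response peak is how such trackers localize the object each frame; the self-paced objective in this paper decides which frames' samples are reliable enough to update `h_hat`.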


Journal of Electronic Imaging | 2017

Self-paced model learning for robust visual tracking

Wenhui Huang; Jason Gu; Xin Ma; Yibin Li

In visual tracking, learning a robust and efficient appearance model is a challenging task. Model learning determines both the strategy and the frequency of model updating, which contains many details that could affect the tracking results. Self-paced learning (SPL) has recently been attracting considerable interest in the fields of machine learning and computer vision. SPL is inspired by the learning principle underlying the cognitive process of humans, whose learning process is generally from easier samples to more complex aspects of a task. We propose a tracking method that integrates the learning paradigm of SPL into visual tracking, so reliable samples can be automatically selected for model learning. In contrast to many existing model learning strategies in visual tracking, we discover the missing link between sample selection and model learning, which are combined into a single objective function in our approach. Sample weights and model parameters can be learned by minimizing this single objective function. Additionally, to solve the real-valued learning weight of samples, an error-tolerant self-paced function that considers the characteristics of visual tracking is proposed. We demonstrate the robustness and efficiency of our tracker on a recent tracking benchmark data set with 50 video sequences.
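The easy-to-hard principle described above is easiest to see in the classic hard-threshold self-paced weighting: samples with loss below an "age" parameter lambda get weight 1, the rest get 0, and lambda grows so harder samples enter training gradually. The paper's real-valued error-tolerant function is a smooth generalization of this; the numbers below are invented.

```python
def spl_weights(losses, lam):
    """Hard self-paced weights: admit only samples easier than lambda."""
    return [1.0 if loss < lam else 0.0 for loss in losses]

def spl_schedule(losses, lam, growth, rounds):
    """Raise lambda each round so more (harder) samples join model learning."""
    admitted = []
    for _ in range(rounds):
        admitted.append(sum(spl_weights(losses, lam)))
        lam *= growth
    return admitted

losses = [0.1, 0.3, 0.5, 0.9, 1.4]
print(spl_schedule(losses, lam=0.2, growth=2.0, rounds=3))  # [1.0, 2.0, 3.0]
```

In the tracking setting, a sample's "loss" would be its disagreement with the current appearance model, so occluded or corrupted frames tend to be excluded from model updates until the model is mature.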


world congress on intelligent control and automation | 2014

Face recognition based on KPCA and SVM

Jianhua Dong; Jason Gu; Xin Ma; Yibin Li

The KPCA algorithm can handle the nonlinear characteristics that the PCA algorithm cannot, and the traditional curvelet decomposition algorithm cannot take full advantage of the fine-scale component information. We therefore put forward a KPCA algorithm and a data fusion algorithm. Through its internal nonlinear kernel function, KPCA is effective at extracting the face contour and curve detail information. The data fusion algorithm makes use of the different scales of the image decomposed by the curvelet transform, combined in certain proportions. The Support Vector Machine (SVM) has strong classification ability on small samples and an advantage in dealing with nonlinear, high-dimensional data. In this paper, the KPCA and SVM methods were combined and applied to face recognition. First, the low-frequency components of the face images decomposed by the curvelet transform were used; then feature vectors were extracted by KPCA, and the "one vs. one" strategy of SVM was chosen to perform recognition. Results on the ORL and Yale databases show the success of KPCA+SVM for face recognition. The curvelet faces were then reduced in dimension by PCA, and the coarse and fine information were combined by data fusion. Results on ORL show the success of data fusion for face recognition.
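The KPCA feature-extraction step can be sketched as follows, assuming the textbook formulation: build an RBF Gram matrix, center it in feature space, and project onto the top eigenvectors scaled by their eigenvalues. A real face pipeline would first apply the curvelet decomposition, which is omitted here, and the two-cluster toy data are invented.

```python
import numpy as np

def kpca(X, n_components, gamma=1.0):
    """Project training data onto the top kernel principal components (RBF kernel)."""
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))  # RBF Gram matrix
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one       # center in feature space
    vals, vecs = np.linalg.eigh(Kc)                  # eigenvalues ascending
    idx = np.argsort(vals)[::-1][:n_components]      # pick the largest ones
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                               # projected training data

X = np.vstack([np.random.default_rng(0).normal(0, 0.1, (5, 3)),   # cluster A
               np.random.default_rng(1).normal(3, 0.1, (5, 3))])  # cluster B
Z = kpca(X, n_components=2)
print(Z.shape)   # (10, 2); first component separates the two clusters
```

The projected vectors `Z` are what would be fed to the one-vs-one SVM in the recognition stage.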


international conference on robotics and automation | 2014

Intelligent mobility assisted mobile sensor network localization

Xin Ma; Mingang Zhou; Yibin Li; Jindong Tan

The trajectories of mobile seeds have a great influence on localization accuracy and efficiency. This paper presents a novel information-driven, intelligent mobility-assisted wireless sensor network localization algorithm. Without requiring any prior knowledge of the sensing field, the trajectories of seeds or pseudo-seeds (common sensors that have already been positioned) are scheduled dynamically based on the position estimates of neighboring non-positioned common sensors. With an information-theoretic utility measure as the objective function, mobile seeds or pseudo-seeds actively determine their motion directions to minimize the uncertainty in the position estimates of neighboring sensors. At the first level, seeds estimate the neighboring sensor nodes' positions from bearing measurements by means of extended Kalman filters and optimize their motion directions by maximizing the mutual information between the position estimates and the motions of the seeds. Afterwards, the seeds forward the position estimates to the corresponding sensor nodes, which then act as pseudo-seeds. By repeating this process at the following levels, all sensor nodes can obtain position estimates. Compared with heuristic-mobility and random-mobility-assisted mobile sensor network localization algorithms, the proposed algorithm requires fewer maneuvers of seeds or pseudo-seeds for quick convergence to good position estimates. Extensive simulations show that this algorithm can provide more accurate position estimates with fewer maneuvers, especially in the case of limited seeds.
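A toy version of the information-driven step: from candidate seed positions, pick the one whose predicted bearing measurement most shrinks the entropy (log-determinant of covariance) of a 2-D Gaussian position estimate, using the linearized scalar-measurement EKF update. The geometry and noise values below are made up for illustration; the paper's algorithm optimizes motion directions, not static positions.

```python
import math

def bearing_jacobian(seed, target):
    """Jacobian of bearing = atan2(dy, dx) w.r.t. the target position."""
    dx, dy = target[0] - seed[0], target[1] - seed[1]
    r2 = dx * dx + dy * dy
    return [-dy / r2, dx / r2]

def posterior_logdet(P, H, r):
    """log det of P' = P - P H^T (H P H^T + r)^-1 H P (scalar measurement)."""
    PHt = [P[0][0] * H[0] + P[0][1] * H[1], P[1][0] * H[0] + P[1][1] * H[1]]
    s = H[0] * PHt[0] + H[1] * PHt[1] + r            # innovation variance
    Pn = [[P[i][j] - PHt[i] * PHt[j] / s for j in range(2)] for i in range(2)]
    return math.log(Pn[0][0] * Pn[1][1] - Pn[0][1] * Pn[1][0])

P = [[4.0, 0.0], [0.0, 1.0]]                         # prior: x far more uncertain
target = (0.0, 0.0)
candidates = {(5.0, 0.0): "east of target", (0.0, 5.0): "north of target"}
best = min(candidates,
           key=lambda s: posterior_logdet(P, bearing_jacobian(s, target), r=0.01))
print(candidates[best])   # "north of target": that bearing constrains the x-axis
```

Because a bearing measurement constrains the direction perpendicular to the line of sight, the informative vantage point is the one whose perpendicular aligns with the most uncertain axis, which is exactly the behavior an information-theoretic utility selects for.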


International Journal of Advanced Robotic Systems | 2018

Active 6-D position-pose estimation of a spatial circle using monocular eye-in-hand system

Xin Ma; Junbing Feng; Yibin Li; Jindong Tan

Nuts and bolts are common components in assembly lines. Their position and pose estimation is a vital step for automatic assembling. Although many approaches using a monocular camera have been proposed, few works consider a monocular camera’s active movements for improving estimation accuracy. This article presents an active movement strategy for a monocular eye-in-hand camera for high position and pose estimation accuracy of a spatial circle. Extensive experiments are conducted to validate the effectiveness of the proposed method for position and pose estimation of circles printed on paper, real circular flat washers, and nuts.

Collaboration


Dive into Xin Ma's collaborations.

Top Co-Authors

Jason Gu

Dalhousie University