Peipei Yang
Chinese Academy of Sciences
Publications
Featured research published by Peipei Yang.
Neurocomputing | 2014
Kexin Xing; Peipei Yang; Jian Huang; Yongji Wang; Quanmin Zhu
This study proposes a real-time electromyogram (EMG) pattern recognition approach for the control of multifunction myoelectric hands. Time and frequency information is extracted by the wavelet packet transform (WPT), and the node energies of the WPT coefficients are selected as the features of the EMG signals. A novel feature selection method based on a depth-recursive search algorithm is then developed so that the high-dimensional features can be reduced by a supervised feature reduction algorithm. A support vector machine (SVM) is subsequently adopted to give the recognition result. In the experiment, a real-time EMG pattern recognition system is developed to control a virtual hand with EMG signals recorded from the antebrachium (forearm). The experimental results show both the high accuracy and the good real-time performance of the proposed method.
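A minimal sketch of this kind of pipeline is given below: WPT node energies as features, a supervised selector, and an SVM classifier. The wavelet ('db4'), decomposition level, selector (SelectKBest in place of the paper's depth-recursive search), and kernel are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline

def wpt_node_energy(signal, wavelet="db4", level=4):
    """Energy of each terminal wavelet-packet node, used as the EMG feature."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    return np.array([np.sum(np.square(node.data)) for node in wp.get_level(level)])

def extract_features(windows):
    """windows: (n_samples, window_length) array of raw EMG segments."""
    return np.vstack([wpt_node_energy(w) for w in windows])

# Hypothetical training data: EMG windows and their gesture labels.
X_train = extract_features(np.random.randn(100, 256))
y_train = np.random.randint(0, 4, size=100)

# Supervised feature reduction (stand-in for the recursive search) + SVM.
clf = make_pipeline(SelectKBest(f_classif, k=8), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
```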
International Conference on Neural Information Processing | 2011
Peipei Yang; Kaizhu Huang; Cheng-Lin Liu
Multi-task learning, the joint training of multiple related problems, usually leads to better performance by exploiting the information shared across all the problems. Metric learning, on the other hand, is an important research topic that has typically been studied in the traditional single-task setting. Targeting this gap, in this paper we propose a novel multi-task metric learning framework. Based on the assumption that the discriminative information of all the tasks can be retained in a low-dimensional common subspace, the proposed framework can readily extend many current metric learning approaches to the multi-task scenario. In particular, we apply the framework to a popular metric learning method, Large Margin Component Analysis (LMCA), and obtain a new model called multi-task LMCA (mtLMCA). In addition to learning an appropriate metric, this model optimizes the transformation matrix directly and performs surprisingly well compared to many competitive approaches. One appealing feature of mtLMCA is that it can learn a metric of low rank, which proves effective in suppressing noise and hence is more resistant to over-fitting. A series of experiments demonstrate the superiority of the proposed framework over four other comparison algorithms on both synthetic and real data.
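One common way to encode a shared low-dimensional subspace, sketched below, is to factor each task's transformation as a task-specific matrix applied on top of a shared low-rank projection; this is an illustrative parameterization, not necessarily the authors' exact mtLMCA formulation, and the matrices here are random stand-ins for learned parameters.

```python
import numpy as np

d, r, n_tasks = 50, 5, 3                       # input dim, shared-subspace dim, tasks
rng = np.random.default_rng(0)

L0 = rng.standard_normal((r, d))               # shared low-rank projection (common subspace)
B = [rng.standard_normal((r, r)) for _ in range(n_tasks)]  # task-specific parts

def task_distance(t, x, y):
    """Squared distance under task t's learned metric M_t = A_t^T A_t, A_t = B_t L0."""
    A_t = B[t] @ L0                            # rank-r transformation for task t
    diff = A_t @ (x - y)
    return float(diff @ diff)

x, y = rng.standard_normal(d), rng.standard_normal(d)
print(task_distance(0, x, y))
```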
Neural Computing and Applications | 2013
Peipei Yang; Kaizhu Huang; Cheng-Lin Liu
Metric learning has been widely studied in machine learning due to its ability to improve the performance of various algorithms. Meanwhile, multi-task learning usually leads to better performance by exploiting the information shared across all tasks. In this paper, we propose a novel framework that lets metric learning benefit from jointly training all tasks. Based on the assumption that discriminative information is retained in a common subspace for all tasks, our framework can readily extend many current metric learning methods. In particular, we apply our framework to the widely used Large Margin Component Analysis (LMCA) and obtain a new model called multi-task LMCA. It performs remarkably well compared to many competitive methods. Moreover, the method learns a low-rank metric directly, which acts as feature reduction and enables noise suppression and low storage cost. A series of experiments demonstrate the superiority of our method over three other comparison algorithms on both synthetic and real data.
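The "low-rank metric as feature reduction" point can be made concrete with the small sketch below: a rectangular transformation L (r x d, r << d) both defines the metric M = L^T L and projects data into an r-dimensional space, so only r*d numbers need to be stored. The matrix L here is random purely for illustration, standing in for a learned transformation.

```python
import numpy as np

d, r = 100, 10
rng = np.random.default_rng(1)
L = rng.standard_normal((r, d))      # stands in for the learned low-rank transformation

X = rng.standard_normal((500, d))    # high-dimensional inputs
Z = X @ L.T                           # reduced r-dimensional representation

# Distances under M = L^T L equal Euclidean distances in the reduced space.
i, j = 0, 1
m_dist = (X[i] - X[j]) @ (L.T @ L) @ (X[i] - X[j])
e_dist = np.sum((Z[i] - Z[j]) ** 2)
assert np.isclose(m_dist, e_dist)
```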
European Conference on Machine Learning | 2012
Peipei Yang; Kaizhu Huang; Cheng-Lin Liu
Multi-task learning has been widely studied in machine learning due to its capability to improve the performance of multiple related learning problems. However, few researchers have applied it to the important metric learning problem. In this paper, we propose to couple multiple related metric learning tasks with the von Neumann divergence. On one hand, the novel regularized approach extends previous methods from vector regularization to a general matrix regularization framework; on the other hand, and more importantly, by exploiting the von Neumann divergence as the regularizer, the new multi-task metric learning method preserves the data geometry well. This leads to more appropriate propagation of side information among tasks and offers the potential for further performance improvement. We introduce the concept of geometry preserving probability (PG) and show that our framework leads to a larger PG in theory. In addition, our formulation proves to be jointly convex, so the globally optimal solution is guaranteed. A series of experiments across very different disciplines verify that our proposed algorithm consistently outperforms current methods.
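For reference, the von Neumann divergence between two positive-definite matrices A and B is D(A, B) = tr(A log A - A log B - A + B). The sketch below computes it with SciPy's matrix logarithm; the random matrices are stand-ins for two tasks' metric matrices, and the coupling of tasks via this regularizer is not reproduced here.

```python
import numpy as np
from scipy.linalg import logm

def von_neumann_divergence(A, B):
    """D_vN(A, B) = tr(A log A - A log B - A + B) for positive-definite A, B."""
    return np.real(np.trace(A @ logm(A) - A @ logm(B) - A + B))

rng = np.random.default_rng(0)

def random_spd(d):
    Q = rng.standard_normal((d, d))
    return Q @ Q.T + d * np.eye(d)   # well-conditioned SPD matrix

M1, M2 = random_spd(5), random_spd(5)
print(von_neumann_divergence(M1, M2))   # nonnegative, zero iff M1 == M2
```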
IEEE Signal Processing Letters | 2014
Xu-Yao Zhang; Peipei Yang; Yan-Ming Zhang; Kaizhu Huang; Cheng-Lin Liu
This letter considers the combination of multiple classification and clustering results to improve prediction accuracy. First, an object-similarity graph is constructed from multiple clustering results. The labels predicted by the classification models are then propagated on this graph so that the prediction adaptively satisfies smoothness over the graph. The resulting convex learning problem is efficiently solved by a label propagation algorithm. A semi-supervised extension is also provided to further improve performance. Experiments on 11 tasks verify the validity of the proposed models.
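A minimal sketch of this idea follows: build a co-association similarity graph from several clusterings, then smooth the classifiers' score matrix over the graph with the standard closed-form label-propagation solution F = (I - alpha*S)^{-1} Y. The toy inputs, the value of alpha, and the normalization are assumptions for illustration, not the letter's exact formulation.

```python
import numpy as np

def coassociation(clusterings):
    """clusterings: list of 1-D cluster-label arrays; returns an n x n similarity matrix."""
    n = len(clusterings[0])
    W = np.zeros((n, n))
    for c in clusterings:
        W += (c[:, None] == c[None, :]).astype(float)   # co-membership counts
    return W / len(clusterings)

def propagate(W, Y, alpha=0.5):
    """Y: n x k matrix of classifier scores; returns scores smoothed over the graph."""
    d = np.maximum(W.sum(axis=1), 1e-12)
    S = W / np.sqrt(np.outer(d, d))                      # symmetric normalization
    n = W.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * S, Y)     # F = (I - alpha*S)^{-1} Y

rng = np.random.default_rng(0)
clusterings = [rng.integers(0, 3, 30) for _ in range(4)]  # toy clustering results
Y = rng.random((30, 3))                                   # toy classifier scores
F = propagate(coassociation(clusterings), Y)
pred = F.argmax(axis=1)
```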
Chinese Control and Decision Conference | 2013
Peipei Yang; Kexin Xing; Jian Huang; Yongji Wang
This paper proposes a novel feature reduction approach for real-time electromyogram (EMG) pattern recognition. Time and frequency information is extracted via wavelet packet transform (WPT) coefficients, and the node energy is used as the feature to compensate for the lack of translation invariance of the WPT. Non-parametric discriminant analysis (NDA) is then used for feature reduction. Because of some inherent properties of the packet node energy, the within-class scatter matrix is usually singular in this approach, which makes the feature projection unavailable. To solve this problem, a recursive algorithm is proposed that discards feature components which lead to singularity and carry relatively little discriminant information. Finally, a support vector machine (SVM) is used as the classifier to give the recognition result. The corresponding action pattern can be recognized within milliseconds. The experimental results show that the proposed method has strong robustness and good real-time performance.
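A hedged sketch of the singularity workaround appears below; it is not the paper's exact recursion. While the within-class scatter matrix is rank deficient, the feature with the smallest Fisher-style between/within ratio is discarded, after which a discriminant projection (sklearn's LDA as a stand-in for NDA) and an SVM are applied.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

def within_class_scatter(X, y):
    Sw = np.zeros((X.shape[1], X.shape[1]))
    for c in np.unique(y):
        Xc = X[y == c] - X[y == c].mean(axis=0)
        Sw += Xc.T @ Xc
    return Sw

def drop_until_nonsingular(X, y):
    """Return indices of kept features such that the within-class scatter is full rank."""
    keep = np.arange(X.shape[1])
    while np.linalg.matrix_rank(within_class_scatter(X[:, keep], y)) < len(keep):
        Xk = X[:, keep]
        classes = np.unique(y)
        within = np.mean([Xk[y == c].var(axis=0) for c in classes], axis=0)
        between = np.var([Xk[y == c].mean(axis=0) for c in classes], axis=0)
        ratio = between / (within + 1e-12)               # crude per-feature score
        keep = np.delete(keep, np.argmin(ratio))          # discard least discriminant
    return keep

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 16)); X[:, 5] = X[:, 3]      # induce singularity
y = rng.integers(0, 4, 60)
keep = drop_until_nonsingular(X, y)
clf = make_pipeline(LinearDiscriminantAnalysis(), SVC()).fit(X[:, keep], y)
```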
International Conference on Image and Graphics | 2017
Ting-Bing Xu; Peipei Yang; Xu-Yao Zhang; Cheng-Lin Liu
Deep neural networks (DNNs) have achieved remarkable success in many vision tasks. However, due to their dependence on large memory and high-performance GPUs, it is extremely hard to deploy DNNs on low-power devices. Many techniques have recently been proposed for compressing and accelerating deep neural networks. In particular, binarized weight networks, which store each weight using only one bit and replace complex floating-point operations with simple calculations, are attractive from the perspective of hardware implementation. In this paper, we propose a simple strategy to learn better binarized weight networks. Motivated by the observation that the stochastic binarization approach usually converges with real-valued weights close to the two boundaries \(\{-1, +1\}\) and gives better performance than deterministic binarization, we construct a margin-aware binarization strategy by adding a weight constraint to the objective function of the deterministic scheme so as to minimize the margins between the real-valued weights and the boundaries. This constraint can be realized easily by a Binary-L2 regularization without the cost of complex random number generation. Experimental results on the MNIST and CIFAR-10 datasets show that the proposed method yields better performance than recent network binarization schemes and the full-precision network counterpart.
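The sketch below illustrates the margin-aware idea in isolation: deterministic binarization w_b = sign(w) plus a "Binary-L2" penalty pulling the real-valued weights toward the boundaries \(\{-1, +1\}\). The concrete penalty form, lambda * sum((|w| - 1)^2), and the update step are illustrative assumptions, not necessarily the paper's exact formula or training procedure.

```python
import numpy as np

def binarize(w):
    """Deterministic binarization used in the forward pass."""
    return np.where(w >= 0, 1.0, -1.0)

def binary_l2_penalty(w, lam=1e-4):
    """Assumed penalty and its gradient w.r.t. the real-valued weights."""
    margin = np.abs(w) - 1.0
    loss = lam * np.sum(margin ** 2)
    grad = 2.0 * lam * margin * np.sign(w)
    return loss, grad

# Toy update step: the task-loss gradient would normally be added to `g_reg`
# before updating the real-valued weights kept alongside the binary ones.
w_real = np.random.randn(8)
_, g_reg = binary_l2_penalty(w_real)
w_real -= 0.1 * g_reg
w_bin = binarize(w_real)
```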
International Conference on Neural Information Processing | 2012
Peipei Yang; Xu-Yao Zhang; Kaizhu Huang; Cheng-Lin Liu
Multi-task learning (MTL) has drawn a lot of attention in machine learning. By training multiple tasks simultaneously, information can be better shared across tasks, which leads to significant performance improvements in many problems. However, most existing methods assume that all tasks are related or that their relationship follows a simple, pre-specified structure. In this paper, we propose a novel manifold-regularized framework for multi-task learning. Instead of assuming a simple relationship among tasks, we propose to learn the task decision functions and a manifold structure from data simultaneously. As the manifold can be arbitrarily complex, we show that our framework contains many recent MTL models, e.g. RegMTL and cCMTL, as special cases. The framework can be solved by alternately learning all the tasks and the manifold structure. In particular, learning all tasks under the manifold regularization can be solved as a single-task learning problem, while the manifold structure can be obtained by successive Bregman projections onto a convex feasible set. On both synthetic and real datasets, we show that our method outperforms the other competitive methods.
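The alternating scheme can be pictured with the simplified sketch below: (1) fix a task graph and solve a manifold-regularized least-squares problem per task, (2) refresh the graph from similarities between the learned task vectors. The graph update, the least-squares loss, and the omission of the Bregman projection are all simplifying assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d, n = 4, 10, 50
Xs = [rng.standard_normal((n, d)) for _ in range(T)]                     # per-task data
ys = [X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n) for X in Xs]

W = np.zeros((d, T))          # column t = weight vector of task t
lam, mu = 1.0, 1.0
for _ in range(10):
    # Graph update: similarity of task weight vectors -> graph Laplacian over tasks.
    S = np.exp(-np.square(W.T[:, None] - W.T[None, :]).sum(-1))
    L = np.diag(S.sum(1)) - S
    # Task update: block coordinate descent on
    # sum_t ||X_t w_t - y_t||^2 + lam ||W||_F^2 + mu tr(W L W^T).
    for t in range(T):
        A = Xs[t].T @ Xs[t] + (lam + mu * L[t, t]) * np.eye(d)
        b = Xs[t].T @ ys[t] - mu * (W @ L[:, t] - L[t, t] * W[:, t])
        W[:, t] = np.linalg.solve(A, b)
```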
Archive | 2009
Qi Xu; Kexin Xing; Yongji Wang; Jiping He; Jian Huang; Wu Jun; Peipei Yang; Rui Yang
Big Data Analytics | 2018
Peipei Yang; Kaizhu Huang; Amir Hussain