Huajin Tang
Sichuan University
Publications
Featured research published by Huajin Tang.
PLOS ONE | 2013
Qiang Yu; Huajin Tang; Kay Chen Tan; Haizhou Li
A new learning rule, precise-spike-driven (PSD) synaptic plasticity, is proposed for processing and memorizing spatiotemporal patterns. PSD is a supervised learning rule that is analytically derived from the traditional Widrow-Hoff rule and can be used to train neurons to associate an input spatiotemporal spike pattern with a desired spike train. Synaptic adaptation is driven by the error between the desired and the actual output spikes, with positive errors causing long-term potentiation and negative errors causing long-term depression. The amount of modification is proportional to an eligibility trace that is triggered by afferent spikes. The PSD rule is both computationally efficient and biologically plausible. The properties of this learning rule are investigated extensively through simulations, including its learning performance, its generality to different neuron models, its robustness to noise, its memory capacity, and the effects of its learning parameters. Experimental results show that the PSD rule is capable of spatiotemporal pattern classification and can even outperform a well-studied benchmark algorithm when combined with the proposed relative confidence criterion. The PSD rule is further validated on a practical optical character recognition problem, where it again achieves good recognition performance given a proper encoding. Finally, a detailed discussion compares the PSD rule with several related algorithms, including the tempotron, SPAN, Chronotron, and ReSuMe.
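The update rule described above admits a compact discrete-time reading: at each step, the error between desired and actual output spikes gates potentiation or depression, scaled by an exponentially decaying eligibility trace of each afferent. A minimal Python sketch of that reading follows; the function name, the trace kernel, and the parameter values are illustrative assumptions, not taken from the paper.

import numpy as np

def psd_update(afferent_spikes, desired_spikes, actual_spikes,
               weights, eta=0.01, tau=10.0, dt=1.0):
    # afferent_spikes: (n_synapses, n_steps) 0/1 input spike trains
    # desired_spikes, actual_spikes: (n_steps,) 0/1 output spike trains
    n_syn, n_steps = afferent_spikes.shape
    trace = np.zeros(n_syn)               # eligibility trace per synapse
    dw = np.zeros(n_syn)
    decay = np.exp(-dt / tau)             # exponential decay of the trace
    for t in range(n_steps):
        trace = trace * decay + afferent_spikes[:, t]
        err = desired_spikes[t] - actual_spikes[t]    # +1 -> LTP, -1 -> LTD
        dw += eta * err * trace           # modification proportional to trace
    return weights + dw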
IEEE Transactions on Systems, Man, and Cybernetics | 2017
Xi Peng; Zhiding Yu; Zhang Yi; Huajin Tang
Under the framework of graph-based learning, the key to robust subspace clustering and subspace learning is to obtain a good similarity graph that eliminates the effects of errors and retains only the connections between data points from the same subspace (i.e., intrasubspace data points). Recent works achieve good performance by modeling errors in their objective functions so as to remove them from the inputs. However, these approaches face two limitations: the structure of the errors must be known a priori, and a complex convex problem must be solved. In this paper, we present a novel method that eliminates the effects of errors from the projection space (representation) rather than from the input space. We first prove that the ℓ1-, ℓ2-, ℓ∞-, and nuclear-norm-based linear projection spaces share the property of intrasubspace projection dominance, i.e., the coefficients over intrasubspace data points are larger than those over intersubspace data points. Based on this property, we introduce a method to construct a sparse similarity graph, called the L2-graph, upon which subspace clustering and subspace learning algorithms are developed. We conduct comprehensive experiments on subspace learning, image clustering, and motion segmentation, reporting several quantitative benchmarks: classification/clustering accuracy, normalized mutual information, and running time. The results show that the L2-graph outperforms many state-of-the-art methods, including the L1-graph, low-rank representation (LRR), latent LRR, least squares regression, sparse subspace clustering, and locally linear representation.
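Intrasubspace projection dominance suggests a direct graph construction: code each sample over the remaining samples under an ℓ2 (ridge) penalty and keep only the largest coefficients. The Python sketch below is one plausible reading; the function name, the regularization weight lam, and the top-k truncation are assumptions for illustration rather than the paper's exact procedure.

import numpy as np

def l2_graph_affinity(X, lam=0.1, k=8):
    # X: (d, n) data matrix, one sample per column
    d, n = X.shape
    C = np.zeros((n, n))
    for i in range(n):
        idx = [j for j in range(n) if j != i]
        D = X[:, idx]                     # dictionary: all samples except x_i
        # ridge (ell_2-regularized) coding of x_i over the other samples
        c = np.linalg.solve(D.T @ D + lam * np.eye(n - 1), D.T @ X[:, i])
        mask = np.zeros(n - 1, dtype=bool)
        mask[np.argsort(np.abs(c))[-k:]] = True       # keep the k largest
        C[idx, i] = np.where(mask, np.abs(c), 0.0)
    return (C + C.T) / 2                  # symmetric affinity matrix

Spectral clustering on the resulting affinity matrix then yields the subspace segmentation.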
IEEE Transactions on Neural Networks | 2013
Qiang Yu; Huajin Tang; Kay Chen Tan; Haizhou Li
IEEE Transactions on Neural Networks | 2016
Xi Peng; Huajin Tang; Lei Zhang; Zhang Yi; Shijie Xiao
Under the framework of spectral clustering, the key to subspace clustering is building a similarity graph that describes the neighborhood relations among data points. Some recent works build the graph using sparse, low-rank, and ℓ2-norm-based representations and have achieved state-of-the-art performance. However, these methods suffer from two limitations. First, their time complexity is at least proportional to the cube of the data size, which makes them inefficient on large-scale problems. Second, they cannot cope with out-of-sample data that were not used to construct the similarity graph: to cluster each out-of-sample datum, they must recalculate the similarity graph and the cluster membership of the whole data set. In this paper, we propose a unified framework that makes representation-based subspace clustering algorithms feasible for both out-of-sample and large-scale data. Under our framework, the large-scale problem is tackled by converting it into an out-of-sample problem through sampling, clustering, coding, and classifying. Furthermore, we derive error bounds by treating each subspace as a point in a hyperspace. Extensive experimental results on various benchmark data sets show that our methods outperform several recently proposed scalable methods in clustering large-scale data sets.
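The sampling-clustering-coding-classifying strategy in the abstract can be read as a short pipeline: cluster a small in-sample subset, then label every remaining point by coding it over the in-sample data and voting with the coefficient mass per cluster. The Python sketch below is one plausible instantiation under those assumptions; the ridge coding step and the voting rule are illustrative choices, not the paper's exact algorithm.

import numpy as np
from sklearn.cluster import SpectralClustering

def scalable_subspace_cluster(X, n_clusters, n_landmarks=500, lam=0.1, seed=0):
    # X: (n, d) data, one sample per row
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    sample = rng.choice(n, size=min(n_landmarks, n), replace=False)  # sampling
    A = X[sample]                                    # in-sample subset
    # clustering: spectral clustering on the small in-sample subset only
    in_labels = SpectralClustering(n_clusters=n_clusters,
                                   affinity="nearest_neighbors").fit_predict(A)
    # coding + classifying: ridge-code each point over the in-sample data,
    # then vote with the absolute coefficient mass per cluster
    G = np.linalg.inv(A @ A.T + lam * np.eye(len(sample)))
    labels = np.empty(n, dtype=int)
    for i in range(n):
        c = np.abs(G @ (A @ X[i]))                   # ridge code of x_i
        scores = [c[in_labels == k].sum() for k in range(n_clusters)]
        labels[i] = int(np.argmax(scores))
    return labels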
IEEE Transactions on Neural Networks | 2015
Bo Zhao; Ruoxi Ding; Shoushun Chen; Bernabé Linares-Barranco; Huajin Tang
IEEE Transactions on Neural Networks | 2004
Huajin Tang; Kay Chen Tan; Zhang Yi
Neurocomputing | 2004
Kay Chen Tan; Huajin Tang; Zhang Yi
IEEE Transactions on Systems, Man, and Cybernetics | 2015
Vui Ann Shim; Kay Chen Tan; Huajin Tang
IEEE Transactions on Neural Networks | 2018
Xi Peng; Canyi Lu; Zhang Yi; Huajin Tang
Neural Computation | 2013
Jun Hu; Huajin Tang; Kay Chen Tan; Haizhou Li; Luping Shi
Primates perform remarkably well in cognitive tasks such as pattern recognition. Motivated by recent findings in biological systems, a unified and consistent feedforward network, with a proper encoding scheme and supervised temporal learning rules, is built for solving pattern recognition tasks. Temporal rules for processing precise spiking patterns have recently emerged as ways of emulating the brain's computation from its anatomy and physiology, and most of them can recognize different spatiotemporal patterns. However, it remains unclear whether these rules can cope with real-world stimuli such as images, and how such information is represented in the brain. To tackle these problems, a proper encoding method and a unified computational model with a consistent and efficient learning rule are proposed. Through encoding, external stimuli are converted into sparse representations that also possess invariance properties. These temporal patterns are then learned through biologically derived algorithms in the learning layer, and the final decision is presented through the readout layer. The model is evaluated on images of digits from the MNIST database. The results show that it recognizes images with a performance comparable to that of current benchmark algorithms, and they suggest a plausibility proof for a class of feedforward models of rapid and robust recognition in the brain.
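The encode-learn-readout pipeline above can be made concrete with a latency code: brighter pixels fire earlier, the resulting sparse spike patterns are trained with a temporal rule (for example, a PSD-style update like the sketch given earlier), and the readout picks the class whose neuron responds most. A schematic Python fragment, with every name and constant chosen purely for illustration:

import numpy as np

def latency_encode(image, n_steps=50):
    # latency code: brighter pixels fire earlier; dark pixels stay silent
    x = image.ravel().astype(float) / max(float(image.max()), 1.0)
    t = np.clip(((1.0 - x) * (n_steps - 1)).astype(int), 0, n_steps - 1)
    spikes = np.zeros((x.size, n_steps))
    spikes[np.arange(x.size), t] = (x > 0).astype(float)
    return spikes                         # (n_pixels, n_steps) spike pattern

def readout(spike_counts):
    # one trained neuron per class: report the most active one
    return int(np.argmax(spike_counts))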