Si-Wei Luo
Beijing Jiaotong University
Publications
Featured research published by Si-Wei Luo.
international conference on machine learning and cybernetics | 2005
Mei Tian; Si-Wei Luo; Ling-Zhi Liao
The purpose of this paper is to discuss the feasibility of using singular value decomposition (SVD) in image compression applications. A mistaken viewpoint about SVD-based image compression schemes is identified and corrected. The paper studies three SVD-based image compression schemes in depth and demonstrates the practical feasibility of SVD-based image compression.
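Although the abstract does not detail the three schemes, the common core of SVD-based compression is truncating the singular value expansion of an image. A minimal sketch, with the rank k and the random test image as illustrative assumptions:

```python
import numpy as np

def svd_compress(image: np.ndarray, k: int) -> np.ndarray:
    """Rank-k approximation of a grayscale image via truncated SVD."""
    U, s, Vt = np.linalg.svd(image, full_matrices=False)
    # Keep only the k largest singular values and their vectors.
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Illustrative usage with a random 256x256 "image".
img = np.random.rand(256, 256)
approx = svd_compress(img, k=32)
# Storage drops from 256*256 values to 32*(256 + 256 + 1).
print("relative error:", np.linalg.norm(img - approx) / np.linalg.norm(img))
```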
international conference on machine learning and cybernetics | 2004
Lian-Wei Zhao; Si-Wei Luo; Ling-Zhi Liao
Kernel principal component analysis (PCA) has been proposed as a nonlinear technique for dimensionality reduction. The basic idea is to map the input space into a feature space via a nonlinear mapping and then compute the principal components in that feature space. In this paper, we apply the kernel PCA technique to 3D object recognition and pose estimation, and present results of appearance-based object recognition obtained with a neural network architecture built on kernel PCA. By adopting a polynomial kernel, the principal components can be computed in the space spanned by high-order correlations of input pixels. We illustrate the potential of kernel PCA on a database of 1,440 images of 20 different objects. The excellent recognition rates achieved in all of the performed experiments indicate that the proposed method is well suited for object recognition and pose estimation.
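A minimal sketch of kernel PCA with a polynomial kernel, the projection step the paper builds on; the kernel degree, component count, and data below are illustrative assumptions, and the neural network classifier applied on top of the projections is not shown:

```python
import numpy as np

def kernel_pca(X: np.ndarray, degree: int = 3, n_components: int = 10) -> np.ndarray:
    """Project the rows of X onto the leading kernel principal components."""
    n = X.shape[0]
    K = (X @ X.T + 1.0) ** degree                 # polynomial kernel matrix
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one    # center in feature space
    eigvals, eigvecs = np.linalg.eigh(Kc)
    idx = np.argsort(eigvals)[::-1][:n_components]
    # Normalize eigenvectors so projections have the proper scale.
    alphas = eigvecs[:, idx] / np.sqrt(np.maximum(eigvals[idx], 1e-12))
    return Kc @ alphas

# Illustrative usage: 100 flattened 8x8 image patches.
X = np.random.rand(100, 64)
Z = kernel_pca(X, degree=3, n_components=10)
print(Z.shape)  # (100, 10)
```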
international conference on pattern recognition | 2006
Yu Zheng; Si-Wei Luo; Ziang Lv
To accelerate reinforcement learning, many types of function approximation are used to represent the state value. However, function approximation reduces the accuracy of the state values and makes convergence difficult. To resolve the trade-off between generalization and accuracy in reinforcement learning, we represent the state-action value with two CMAC networks that have different generalization parameters. The accuracy CMAC network can represent values exactly, which achieves precise control in the states around the target area, while the generalization CMAC network can extend experience to unknown areas and guide the learning of the accuracy network. The algorithm proposed in this paper effectively avoids the dilemma of trading off generalization against accuracy. Simulation results for the control of a double inverted pendulum show the effectiveness of the proposed algorithm.
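A minimal sketch of a one-dimensional CMAC (tile-coding) approximator; the two tile widths below stand in for the paper's two generalization parameters, and all constants are illustrative assumptions:

```python
import numpy as np

class CMAC:
    """1-D tile-coding approximator; wider tiles give broader generalization."""
    def __init__(self, n_tilings: int, tile_width: float,
                 low: float, high: float, lr: float = 0.1):
        self.w = np.zeros((n_tilings, int((high - low) / tile_width) + 2))
        self.tile_width, self.low, self.lr = tile_width, low, lr
        # Each tiling is offset by a fraction of the tile width.
        self.offsets = np.linspace(0.0, tile_width, n_tilings, endpoint=False)

    def _indices(self, x: float) -> np.ndarray:
        return ((x - self.low + self.offsets) // self.tile_width).astype(int)

    def predict(self, x: float) -> float:
        return float(self.w[np.arange(len(self.offsets)), self._indices(x)].sum())

    def update(self, x: float, target: float) -> None:
        err = target - self.predict(x)
        self.w[np.arange(len(self.offsets)), self._indices(x)] += \
            self.lr * err / len(self.offsets)

# Two networks over the same state range: fine tiles for accuracy,
# coarse tiles for generalization, as in the paper's scheme.
accurate = CMAC(n_tilings=8, tile_width=0.05, low=-1.0, high=1.0)
general  = CMAC(n_tilings=8, tile_width=0.5,  low=-1.0, high=1.0)
```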
international conference on machine learning and cybernetics | 2004
Yun-Hui Liu; Si-Wei Luo; Ai-Jun Li; Han-Bin Yu
The problem of determining the proper size of an artificial neural network is recognized to be crucial. One popular approach is pruning, which means training a larger-than-necessary network and removing unnecessary weights and nodes. Although pruning is commonly used in architecture learning of neural networks, there is still no theoretical framework for it. We give an information-geometric explanation of pruning. In the information-geometric framework, most kinds of neural networks form exponential or mixture manifolds, which have a natural hierarchical structure. In a hierarchical set of systems, a lower-order system is included in the parameter space of a larger one as a submanifold. Such a parameter space has rich geometrical structures that are responsible for the dynamic behaviors of learning. The pruning problem is formulated as iterative m-projections from the current manifold onto its submanifold, in which the divergence between the two manifolds is minimized; this means the network performance does not worsen over the entire pruning process. The result gives a geometric understanding of pruning and an information-geometric guideline for it, placing pruning on a firmer theoretical foundation.
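In standard information-geometric notation (a schematic sketch; the symbols follow the usual conventions rather than the paper's own), each pruning step projects the current distribution p onto the submanifold M' of smaller networks by minimizing the Kullback-Leibler divergence:

```latex
q^{\ast} \;=\; \operatorname*{arg\,min}_{q \,\in\, M'} D(p \,\Vert\, q),
\qquad
D(p \,\Vert\, q) \;=\; \int p(x)\,\log\frac{p(x)}{q(x)}\,dx
```

Iterating the m-projection down the nested hierarchy $M \supset M' \supset M'' \supset \cdots$ removes parameters while keeping the divergence, and hence the performance degradation, minimal at every step.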
international conference on neural networks and brain | 2005
Jian Yang; Si-Wei Luo
In this paper we make four modifications to the incremental ensemble learning algorithm Learn++: (1) we use a self-growing dynamic committee machine generated by error correlation partition (ECP) to construct individual hypotheses, avoiding the discarding of any hypothesis during learning; (2) to avoid the overall performance decline caused by a dataset belonging to a single class, we use appropriate negative instances obtained by ECP to help form classification boundaries; (3) we adopt a new voting-weight scheme with a penalty term that allows the voting weights to vary in response to the confidence with which an instance is classified; (4) we use a discrepancy measure to ensure differences between individual hypotheses so that the ensemble generalizes better. Experiments show that this new hybrid committee machine for incremental learning, Learn++.H, achieves further performance gains.
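The abstract does not spell out the penalty term in modification (3); the following is a minimal sketch of confidence-weighted voting of that general kind, with the confidence measure and penalty exponent as illustrative assumptions:

```python
import numpy as np

def vote(probs: np.ndarray, base_weights: np.ndarray, penalty: float = 0.5) -> int:
    """Combine classifiers, down-weighting those unconfident on this instance.

    probs: (n_classifiers, n_classes) per-classifier class probabilities.
    base_weights: (n_classifiers,) static voting weights, e.g. from training error.
    """
    confidence = probs.max(axis=1)            # each classifier's confidence
    # Penalty term: weight shrinks as per-instance confidence drops.
    w = base_weights * confidence ** penalty
    return int(np.argmax(w @ probs))          # weighted class scores

# Illustrative usage: three classifiers, three classes.
probs = np.array([[0.7, 0.2, 0.1],
                  [0.4, 0.5, 0.1],
                  [0.3, 0.3, 0.4]])
print(vote(probs, base_weights=np.array([1.0, 0.8, 0.6])))
```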
international conference on neural networks and brain | 2005
Yu Zheng; Si-Wei Luo; Ziang Lv
Controlling the inverted pendulum is one of the important application domains of reinforcement learning. This paper analyzes the negative effect of limit cycles on the control of the inverted pendulum. It points out that a limit cycle will make the Q-values converge to zero and destroy the stability of the optimal control policy; moreover, a higher degree of exploration cannot overcome this problem, but rather intensifies it. The paper discusses several solutions to the limit-cycle problem, which succeed in controlling the inverted pendulum system and keep the control policy stable.
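A minimal sketch of the mechanism analyzed here, under illustrative assumptions (tabular Q-learning, zero reward inside the cycle, discount factor below one): once the agent is trapped in a loop of unrewarded states, every update contracts the Q-values on the loop toward zero.

```python
import numpy as np

gamma, alpha = 0.9, 0.5
# Two states cycling into each other with a single action and zero reward.
Q = np.array([1.0, 1.0])   # initial Q-values on the cycle
for step in range(50):
    s, s_next = step % 2, (step + 1) % 2
    r = 0.0                                     # no reward inside the limit cycle
    Q[s] += alpha * (r + gamma * Q[s_next] - Q[s])
print(Q)  # both values have decayed toward zero: the fixed point of Q = gamma*Q
```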
international conference on machine learning and cybernetics | 2002
Ai-Jun Li; Yun-Hui Liu; Ying-Jian Qi; Si-Wei Luo
Sequence matching in time-series databases is one of the most important data mining applications. In this paper, we focus on subsequence matching and propose an efficient approach to comparing time series. To simplify the search process, we first use the KMP algorithm to carry out rough sequence matching. As KMP is a typical string-matching algorithm, we transform the time series into 0-1 strings, inspired by Keogh and Pazzani (1999), and then quickly search all roughly similar subsequences of the major sequence. Finally, to reduce the dimension of the raw time-series data, we use the Haar wavelet transform to represent the sequences to be compared and use the wavelet transform (WT) coefficients to compute the similarity of two sequences. Carrying out rough matching first reduces the number of wavelet transforms and speeds up the whole subsequence-matching process.
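A minimal sketch of the pipeline described above, under illustrative assumptions (binarization by the sign of successive differences, coarse Haar coefficients only, Euclidean distance on coefficients; the paper's exact choices are not given in the abstract):

```python
import numpy as np

def binarize(ts: np.ndarray) -> str:
    """0-1 string: 1 where the series rises, 0 where it falls or stays."""
    return ''.join('1' if d > 0 else '0' for d in np.diff(ts))

def kmp_search(text: str, pat: str) -> list[int]:
    """All start positions of pat in text (classic KMP)."""
    fail, k = [0] * len(pat), 0
    for i in range(1, len(pat)):
        while k and pat[i] != pat[k]:
            k = fail[k - 1]
        k += pat[i] == pat[k]
        fail[i] = k
    hits, k = [], 0
    for i, c in enumerate(text):
        while k and c != pat[k]:
            k = fail[k - 1]
        k += c == pat[k]
        if k == len(pat):
            hits.append(i - k + 1)
            k = fail[k - 1]
    return hits

def haar(ts: np.ndarray) -> np.ndarray:
    """Coarse Haar coefficients: pairwise averages halve the dimension."""
    pairs = ts[: len(ts) // 2 * 2].reshape(-1, 2)
    return pairs.mean(axis=1)

# Rough matching with KMP, then fine similarity on Haar coefficients.
major, query = np.sin(np.linspace(0, 20, 200)), np.sin(np.linspace(5, 7, 20))
for pos in kmp_search(binarize(major), binarize(query)):
    cand = major[pos : pos + len(query)]
    print(pos, np.linalg.norm(haar(cand) - haar(query)))
```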
international conference on neural networks and brain | 2005
Jian Yang; Si-Wei Luo
This paper presents evolving ensembles by boosting (EEB), a method for designing neural network ensembles that combines evolutionary learning, a boosting algorithm, and negative correlation learning (NCL). Its advantages include the following. First, NCL is used to encourage different individual networks in the ensemble to learn different parts of the training data; the individual networks are trained simultaneously, which gives them the opportunity to interact with each other and to specialize. Second, there are two levels of interaction: one caused by NCL, the other caused by a weight-update scheme similar to that of boosting, which forms the final combination; in this sense, the proposed EEB algorithm learns and combines individual networks in exactly the same process. Third, unlike other algorithms, the selection mechanism is based on a dynamic weight vector updated by boosting, which can fine-tune the contributions of the individual networks to the whole ensemble.
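A minimal sketch of the standard negative correlation learning penalty (as introduced by Liu and Yao; the lambda value and toy data here are illustrative assumptions):

```python
import numpy as np

def ncl_gradients(outputs: np.ndarray, y: float, lam: float = 0.5) -> np.ndarray:
    """Per-network output gradients under negative correlation learning.

    outputs: (n_networks,) predictions of the ensemble members for one example.
    The penalty p_i = (F_i - Fbar) * sum_{j != i} (F_j - Fbar) pushes members
    away from the ensemble mean, encouraging specialization.
    """
    fbar = outputs.mean()
    # dE_i/dF_i = (F_i - y) + lam * sum_{j != i} (F_j - Fbar)
    #           = (F_i - y) - lam * (F_i - fbar), since the deviations sum to zero
    return (outputs - y) - lam * (outputs - fbar)

# Illustrative usage: three members, one regression target.
print(ncl_gradients(np.array([0.9, 1.1, 1.3]), y=1.0))
```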
international conference on neural networks and brain | 2005
Ling-Zhi Liao; Si-Wei Luo; Lian-Wei Zhao; Mei Tan
In recent years, sparse coding and independent component analysis for natural scenes have succeeded in finding sets of basis functions that can effectively represent the input data, by supposing that the feature vectors of images should be sparse or independent. In this paper, we investigate efficient coding of natural images by making the assumptions of sparseness and independence on the activities of the basis functions over the image ensemble, without directly considering the statistics of the feature vectors of the images. Experimental results show that this approach also produces basis functions with properties similar to the receptive fields of simple cells in V1, and is thereby effective.
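A minimal sketch of the classic sparse coding objective in the style of Olshausen and Field; the paper's variant, which places the sparseness assumption on basis-function activities over the ensemble, differs in detail, and the optimizer and constants here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_basis, lam, lr = 64, 64, 0.1, 0.01
B = rng.standard_normal((n_pixels, n_basis)) * 0.1   # basis functions (columns)

def infer(x: np.ndarray, steps: int = 100) -> np.ndarray:
    """Sparse code a minimizing ||x - B a||^2 + lam * sum|a| (subgradient descent)."""
    a = np.zeros(n_basis)
    for _ in range(steps):
        a -= lr * (B.T @ (B @ a - x) + lam * np.sign(a))
    return a

# One learning pass over a batch: infer codes, then update the basis.
X = rng.standard_normal((32, n_pixels))              # stand-in for whitened 8x8 patches
for x in X:
    a = infer(x)
    B += lr * np.outer(x - B @ a, a)                 # gradient step on reconstruction error
B /= np.linalg.norm(B, axis=0, keepdims=True)        # renormalize basis functions
```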
international conference on machine learning and cybernetics | 2005
Jian Yang; Si-Wei Luo
The goal of model selection is to identify the model that generated the data. Goodness of a model is measured by generalization, which takes two opposing pressures into account: goodness of fit and model complexity. In this paper we take the neural network as an example and use the concept of curvature from differential geometry to explore the intrinsic model complexity, which is free of reparametrization. Through theoretical analysis, we then show that the future residual, which is qualified to measure generalization, can be expressed using the intrinsic curvature array of the model. From this we derive a new model selection criterion that not only considers factors such as the number of parameters, the sample size, and the functional form, but also comes with a very clear and intuitive geometric understanding of model selection.
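Schematically (the exact form of the paper's curvature term is not given in the abstract; the decomposition below is only the generic shape such criteria take, with the complexity penalty left abstract):

```latex
\mathrm{criterion}(M)
\;=\;
\underbrace{\sum_{i=1}^{n}\bigl(y_i - f_M(x_i)\bigr)^2}_{\text{goodness of fit}}
\;+\;
\underbrace{C(M, n)}_{\text{complexity penalty}}
```

In the paper's proposal, the penalty term depends on the intrinsic (reparametrization-invariant) curvature of the model manifold rather than on the raw parameter count alone.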