Yun-Hui Liu
Beijing Jiaotong University
Publications
Featured research published by Yun-Hui Liu.
Information Sciences | 2011
Yaping Huang; Jiali Zhao; Yun-Hui Liu; Siwei Luo; Qi Zou; Mei Tian
The temporal coherence principle is an attractive, biologically inspired learning rule for extracting slowly varying features from quickly varying input data. In this paper we develop a new Nonlinear Neighborhood Preserving (NNP) technique that uses the temporal coherence principle to find an optimal low-dimensional representation of the original high-dimensional data. NNP is based on a nonlinear expansion of the original input data, such as polynomials of a given degree. It can be solved as an eigenvalue problem, without gradient descent, and is guaranteed to find the global optimum. NNP can be viewed as a nonlinear dimensionality reduction framework that handles both time series and data sets without an obvious temporal structure. For different settings we introduce three NNP algorithms, named NNP-1, NNP-2, and NNP-3. The objective function of NNP-1 equals that of Slow Feature Analysis (SFA), and it works well for time series such as image sequences. NNP-2 artificially constructs time series consisting of neighboring points for data sets without a clear temporal structure, such as image data. NNP-3 is proposed for classification tasks; it simultaneously minimizes the distances between neighboring points in the embedding space and keeps the remaining points as far apart as possible. Furthermore, the kernel extension of NNP is also discussed. The proposed algorithms work very well on several image sequences and image data sets compared with other methods. We also perform classification on the MNIST handwritten digit database using the supervised NNP algorithms. The experimental results demonstrate that NNP is an effective technique for nonlinear dimensionality reduction.
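Since the abstract notes that NNP-1's objective coincides with SFA and is solved as an eigenvalue problem, a minimal sketch of that special case may help. The function and variable names below are illustrative assumptions, not the paper's code, and only the linear case is shown; NNP would first apply a polynomial expansion.

```python
# A sketch of the SFA-like linear special case (NNP-1): find projections
# that vary slowly in time by solving a generalized eigenvalue problem,
# with no gradient descent. Names are illustrative, not the paper's code.
import numpy as np
from scipy.linalg import eigh

def slow_features(X, n_components=2):
    """X: (T, d) time series; return the n_components slowest projections."""
    X = X - X.mean(axis=0)              # center the signal
    Xdot = np.diff(X, axis=0)           # temporal differences x(t+1) - x(t)
    A = Xdot.T @ Xdot / (len(X) - 1)    # covariance of the time derivative
    B = X.T @ X / len(X)                # covariance of the signal itself
    # Generalized eigenproblem A w = lambda B w; the smallest eigenvalues
    # give the slowest-varying directions (B assumed positive definite).
    vals, vecs = eigh(A, B)
    return X @ vecs[:, :n_components]

# For the nonlinear variants, X would first be expanded, e.g. with all
# monomials up to a given degree, before applying the same solver.
```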
international conference on machine learning and cybernetics | 2004
Yun-Hui Liu; Si-Wei Luo; Ai-Jun Li; Han-Bin Yu
The problem of determining the proper size of an artificial neural network is recognized to be crucial. One popular approach is pruning, which means training a larger-than-necessary network and removing unnecessary weights and nodes. Although pruning is commonly used in architecture learning for neural networks, it still lacks a theoretical framework. We give an information geometric explanation of pruning. In the information geometric framework, most kinds of neural networks form exponential or mixture manifolds, which have a natural hierarchical structure. In a hierarchical set of systems, a lower-order system is included in the parameter space of a larger one as a submanifold. Such a parameter space has rich geometrical structures that are responsible for the dynamic behaviors of learning. The pruning problem is formulated as iterative m-projections from the current manifold onto its submanifold, each of which minimizes the divergence between the two manifolds; this ensures that network performance does not worsen over the entire pruning process. The result gives a geometric understanding of pruning and an information geometric guideline for it, placing the practice on a firmer theoretical foundation.
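In symbols, one pruning step of this kind is the m-projection of the current network's distribution p onto the submanifold S of smaller networks. The standard form uses the Kullback-Leibler divergence; the abstract does not name the divergence explicitly, so KL is an assumption here.

```latex
% One pruning step as an m-projection: replace the current distribution p
% by the closest point q* of the pruned-network submanifold S, where
% closeness is measured by the KL divergence.
q^{*} = \operatorname*{arg\,min}_{q \in S} D_{\mathrm{KL}}(p \,\|\, q),
\qquad
D_{\mathrm{KL}}(p \,\|\, q) = \int p(x) \log \frac{p(x)}{q(x)}\, dx
```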
international conference on machine learning and cybernetics | 2003
Ai-Jun Li; Yun-Hui Liu; Siwei Luo
The decision tree-based neural network is introduced to combine neural networks with decision trees. Its key idea is to use a decision tree to determine the structure of the neural network: first construct a decision tree, then convert the tree into a network. Decision trees and neural networks for classification are similar and have equivalent representational properties, so a decision tree can provide a systematic design method for a neural network. We propose a new mapping between decision trees and neural networks that precisely specifies the number of units, layers, and connections of the network, as well as the initial settings of its parameters. Furthermore, we cite two theorems showing that the mapping is sound. In this paper, we use the decision tree-based neural network to solve the XOR problem.
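As a concrete illustration of such a mapping (in the spirit of classic tree-to-network constructions; the paper's exact construction may differ), internal tree nodes can become first-layer threshold units, leaves become AND units over their path conditions, and the output unit ORs the leaves of one class. A sketch for the XOR problem mentioned above:

```python
# A sketch of a classic tree-to-network mapping (entropy-net style):
# internal tree nodes -> first-layer threshold units, leaves -> AND units
# over their path conditions, output -> OR of the class-1 leaves. This is
# an illustration of the general idea, not the paper's exact construction.
import numpy as np

step = lambda z: (z > 0).astype(float)      # hard threshold activation

def tree_net_xor(x):
    """x: length-2 array of binary inputs; returns XOR(x[0], x[1])."""
    # Layer 1: one unit per internal test node ("is x_i > 0.5?").
    h = step(x - 0.5)
    # Layer 2: one AND unit per class-1 leaf of the tree,
    # (x1 AND NOT x2) and (NOT x1 AND x2).
    leaf1 = step(h[0] + (1.0 - h[1]) - 1.5)
    leaf2 = step((1.0 - h[0]) + h[1] - 1.5)
    # Output layer: OR of the class-1 leaves.
    return step(leaf1 + leaf2 - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print((a, b), int(tree_net_xor(np.array([a, b], dtype=float))))
# Prints the XOR truth table: 0, 1, 1, 0.
```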
international conference on machine learning and cybernetics | 2003
Hua Huang; Siwei Luo; Yun-Hui Liu
Research on neural networks has witnessed impressive progress in models, learning algorithms, and applications. Owing to its tolerance of ambiguity and fuzziness and its ability to approximate functions of arbitrary complexity, neural computing has achieved great success in various fields. However, serious limitations remain concerning accessibility, flexibility, scalability, and reliability. Efforts have been made to relieve these problems, of which the decomposition approach is typical. Although it works, the method is greatly constrained by the decomposition strategy, and the scalability of such systems is sacrificed. Previous research has focused on dividing the input space effectively to build modular systems for given tasks, while the design of the modules themselves has been neglected. In this paper, we introduce the notion of the autonomous artificial neural network (AANN) for the first time and use AANN units as the basic building blocks of a modular neural system. The difference between AANN units and conventional neural networks is that AANN units possess the ability of self-reflection. Unlike a conventional neural network, which directly gives an output for an input without commenting on its quality, an AANN unit comments on its own output, providing extra information such as whether the result is correct and to what degree it can be believed. With AANN units as building blocks, a modular neural system is relieved of many of these problems, and its scalability, reliability, and flexibility are greatly improved. We also propose a coding method to implement AANN units and build a neural network ensemble of AANN units that acquires knowledge progressively by inheriting the learned knowledge of the AANN units attached to it.
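The self-reflection idea can be made concrete with a small sketch: a unit that returns not only its prediction but also a self-assessment of how far that prediction can be believed. The confidence heuristic below (softmax margin) and all names are illustrative stand-ins, not the paper's coding method.

```python
# A minimal sketch of an AANN-style unit: it reports a prediction plus a
# comment on that prediction's believability. The softmax-margin
# confidence is an illustrative stand-in for the paper's coding method.
import numpy as np

class AANNUnit:
    def __init__(self, W, b):
        self.W, self.b = W, b                # a trained linear classifier

    def forward(self, x):
        logits = self.W @ x + self.b
        p = np.exp(logits - logits.max())
        p /= p.sum()                         # softmax class probabilities
        pred = int(np.argmax(p))
        top = np.sort(p)[::-1]
        margin = top[0] - top[1]             # gap between top two classes
        return pred, margin                  # prediction plus self-assessment

# An ensemble could route an input to another unit, or reject it outright,
# whenever the reporting unit's margin falls below a threshold.
```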
international conference on machine learning and cybernetics | 2003
Yun-Hui Liu; Siwei Luo; Ai-Jun Li; Hua Huang; Jin-Wei Wen
In this paper, an extendable hierarchical large-scale neural network model is developed based on a theoretical analysis from information geometry. In a hierarchical set of systems, a lower-order system is included in the parameter space of a larger one as a subset. Such a parameter space has rich geometrical structures that are responsible for the dynamic behaviors of learning. The extendable hierarchical large-scale neural network divides a task into small subtasks, each fulfilled by a small network, following the divide-and-conquer principle to improve on the performance of a single network. By studying the dual manifold architecture of a family of neural networks and analyzing the hierarchical expansion of this model in terms of information geometry, the paper proposes a new method for constructing the extendable hierarchical large-scale neural network model, whose knowledge can be increased and whose structure can be extended. The method helps explain the transformation mechanism of the human recognition system and contributes to a theory of the global architecture of neural networks.
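The nesting described above can be written compactly; in the notation below, S_k stands for the manifold of networks of order k (the indexing is ours, for illustration only):

```latex
% Hierarchy of network models as nested submanifolds: each lower-order
% model sits inside the parameter space of the next larger one.
S_1 \subset S_2 \subset \cdots \subset S_k \subset S_{k+1} \subset \cdots
```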
international conference on machine learning and cybernetics | 2002
Ai-Jun Li; Yun-Hui Liu; Ying-Jian Qi; Si-Wei Luo
Sequence matching in time series databases is one of the most important data mining applications. In this paper we focus on subsequence matching and propose an efficient approach to comparing time series. To simplify the search, we first use the KMP algorithm to perform rough sequence matching. Since KMP is a string-matching algorithm, we transform the time series into 0-1 strings, inspired by Keogh and Pazzani (1999), and quickly search for all roughly similar subsequences of the major sequence. Finally, to reduce the dimensionality of the raw time series data, we use the Haar wavelet transform to represent the sequences to be compared and use the wavelet transform (WT) coefficients to compute the similarity of two sequences. Carrying out rough matching first reduces the number of wavelet transforms and speeds up the whole subsequence matching process.
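A hedged sketch of this pipeline: encode each series as a 0-1 string by the sign of consecutive differences (in the spirit of Keogh and Pazzani, 1999), locate rough matches with KMP, then refine candidates by comparing Haar wavelet coefficients. The encoding and refinement details are illustrative assumptions, not the paper's exact procedure.

```python
# Rough-then-refine subsequence matching: bit-string KMP for candidates,
# Haar coefficients for the final similarity. Illustrative sketch only.
import numpy as np

def to_bits(series):
    """Encode rises as '1' and falls/plateaus as '0'."""
    return ''.join('1' if b > a else '0' for a, b in zip(series, series[1:]))

def kmp_search(text, pattern):
    """Return the start index of every occurrence of pattern in text."""
    fail, k = [0] * len(pattern), 0          # failure (prefix) function
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = fail[k - 1]
        k += pattern[i] == pattern[k]
        fail[i] = k
    hits, k = [], 0                          # scan the text
    for i, c in enumerate(text):
        while k and c != pattern[k]:
            k = fail[k - 1]
        k += c == pattern[k]
        if k == len(pattern):
            hits.append(i - k + 1)
            k = fail[k - 1]
    return hits

def haar_coeffs(x, levels=3):
    """Haar approximation coefficients (length a power of two)."""
    x = np.asarray(x, dtype=float)
    for _ in range(levels):
        x = (x[0::2] + x[1::2]) / np.sqrt(2)  # low-pass averaging step
    return x

# Rough matching runs on to_bits(...) strings via kmp_search; a Euclidean
# distance between haar_coeffs of query and candidate gives the final score.
```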
International Conference on High Performance Networking, Computing and Communication Systems | 2011
Yun-Hui Liu; Qi Zou; Siwei Luo
Correlation is an important tool in image processing and pattern recognition, and is widely applied in other image-related fields. The correlation between two images (cross-correlation) is regarded as an accurate measure of the similarity of those images, but the high computational cost of calculating it hinders its wide use. Calculating the correlation in the Fourier domain (Fourier cross-correlation, FCC) shortens the computation to some extent. To accelerate the FCC computation further, we implement it on a CUDA GPU and compare GPU and CPU performance. Our comparison shows a tenfold speedup when computing the FCC between 4096×4096-pixel images on an NVIDIA GeForce 9400. We obtained similar results when applying FCC to rotation cases. The GPU-accelerated FCC algorithm is shown to be effective in a template matching application.
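By the correlation theorem, cross-correlation in the spatial domain becomes a pointwise product with a complex conjugate in the frequency domain, so the whole correlation map costs two forward FFTs and one inverse FFT. A minimal CPU sketch with NumPy follows; the paper's CUDA kernel follows the same structure, and the test at the end is our own illustration.

```python
# Fourier cross-correlation (FCC): correlate two equally sized images
# over all circular shifts via the correlation theorem.
import numpy as np

def fourier_cross_correlation(image, template):
    """Return the full correlation map of two equally sized 2-D arrays."""
    F = np.fft.fft2(image)
    G = np.fft.fft2(template)
    # conj(F) * G in the frequency domain <=> correlation in the spatial
    # domain; the map's peak sits at the shift that aligns the two images.
    return np.fft.ifft2(np.conj(F) * G).real

img = np.random.rand(256, 256)
tpl = np.roll(img, (17, 5), axis=(0, 1))        # shifted copy as a test
corr = fourier_cross_correlation(img, tpl)
print(np.unravel_index(np.argmax(corr), corr.shape))   # -> (17, 5)
```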
international conference on machine learning and cybernetics | 2007
Yun-Hui Liu; Siwei Luo; Ziang Lv; Qi Zou
Recent psychological and neurobiological experimental results show that top-down information, such as attention and other higher cortical processes, plays an important role in perceptual learning, while current neural network models, mostly concerned only with bottom-up processing, do not incorporate top-down information. In this paper, we present a model of perceptual learning that takes top-down information into account and explain its mechanism within the framework of information geometry.
international conference on machine learning and cybernetics | 2006
Yun-Hui Liu; Siwei Luo; Ziang Lv; Hua Huang
Model selection is important for deciding among competing computational models in many scientific research domains, including cognitive processing. This paper presents an information geometric model selection criterion, IGMSC, and shows its application in cognition. IGMSC computes the geometric complexity of a model by regarding the model space as a manifold, and estimates the model-data geometric fitness by the divergence between the true distribution and the asymptotic distribution, endowing both complexity and fitness with clear geometric significance. A comparison experiment shows the effectiveness of IGMSC in cognition.
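For context, the geometric complexity term standard in this information geometric model selection literature measures the Fisher-metric volume of the model manifold. The formula below is that standard form (for k parameters and n observations), shown as an assumption since the abstract does not spell out IGMSC's exact expression:

```latex
% Standard geometric-complexity term: k parameters, n observations,
% Fisher information metric g(theta) on parameter space Theta.
% Shown as context only; not necessarily IGMSC's exact formula.
C(M) = \frac{k}{2}\ln\frac{n}{2\pi}
     + \ln \int_{\Theta} \sqrt{\det g(\theta)}\, d\theta
```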
ieee international conference on cognitive informatics | 2006
Ziang Lv; Siwei Luo; Yun-Hui Liu; Yu Zheng
Model selection is one of the central problems of machine learning. Its goal is to select, from a set of competing explanations, the one that best captures the underlying regularities of the given observations. The criterion of a good model is generalizability: we must strike a balance between goodness of fit and the complexity of the model to obtain good generalization. Most present methods agree in their treatment of goodness of fit and differ in how they measure complexity, but they focus only on the free parameters of the model; hence they cannot describe the model's intrinsic complexity, and they are not invariant under re-parameterization of the model. This paper uses a new geometrical method to study the complexity side of the model selection problem. We propose that the integral of the Gauss-Kronecker curvature of the statistical manifold is the natural measure of the non-linearity of the model's manifold. This approach provides a clear, intuitive understanding of the intrinsic complexity of the model. We use an experiment to verify the criterion based on this method.
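Written out (the notation is ours; the abstract states the proposal in words), the proposed measure integrates the Gauss-Kronecker curvature K over the statistical manifold M, with the volume element dV induced by the Fisher information metric:

```latex
% Proposed complexity measure: total Gauss-Kronecker curvature of the
% statistical manifold M, with Fisher-metric volume element dV.
C(M) = \int_{M} K \, dV
```

Because both K and dV are defined intrinsically on the manifold, this quantity is invariant under re-parameterization, which matches the motivation stated above.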