Publication


Featured research published by Chunzhi Jin.


Neural Networks | 2000

Universal learning network and its application to chaos control

Kotaro Hirasawa; Xiaofeng Wang; Junichi Murata; Jinglu Hu; Chunzhi Jin

Universal Learning Networks (ULNs) are proposed and their application to chaos control is discussed. ULNs provide a generalized framework to model and control complex systems. They consist of a number of interconnected nodes, where the nodes may have any continuously differentiable nonlinear functions in them and each pair of nodes can be connected by multiple branches with arbitrary time delays. Therefore, physical systems which can be described by differential or difference equations, and also their controllers, can be modeled in a unified way, so ULNs form a superset of neural networks and fuzzy neural networks. In order to optimize the ULNs, a generalized learning algorithm is derived, in which both the first order derivatives (gradients) and the higher order derivatives are incorporated. The derivatives are calculated by using forward or backward propagation schemes. These algorithms for calculating the derivatives are extended versions of Back Propagation Through Time (BPTT) and Real Time Recurrent Learning (RTRL) by Williams, in the sense that generalized node functions, generalized network connections with multiple branches of arbitrary time delays, generalized criterion functions, and higher order derivatives can be dealt with. As an application of ULNs, a chaos control method using the maximum Lyapunov exponent of ULNs is proposed. The maximum Lyapunov exponent of ULNs can be formulated by using higher order derivatives of ULNs, and the parameters of ULNs can be adjusted so that the maximum Lyapunov exponent approaches the target value. Simulation results show that a fully connected ULN with three nodes is able to display chaotic behaviors.
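The link between the maximum Lyapunov exponent and chaotic behavior can be illustrated on a far simpler system than a ULN. The sketch below estimates the exponent of the logistic map by averaging log|df/dx| along an orbit; the map and all numerical choices are illustrative assumptions, not taken from the paper.

```python
import math

def lyapunov_logistic(r, x0=0.3, n_warmup=200, n_steps=5000):
    """Estimate the maximum Lyapunov exponent of the logistic map
    x_{t+1} = r*x*(1-x) by averaging log|df/dx| along an orbit."""
    x = x0
    for _ in range(n_warmup):                    # discard the transient
        x = r * x * (1 - x)
    acc = 0.0
    for _ in range(n_steps):
        acc += math.log(abs(r * (1 - 2 * x)))    # log of the local stretching rate
        x = r * x * (1 - x)
    return acc / n_steps

lam_chaotic = lyapunov_logistic(4.0)   # positive: chaotic regime (close to ln 2)
lam_stable = lyapunov_logistic(2.5)    # negative: orbit settles on a fixed point
```

Driving such an exponent toward a positive or negative target value is, in miniature, what the paper does with the parameters of a ULN.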


Systems, Man and Cybernetics | 2000

Universal learning network and its application to robust control

Kotaro Hirasawa; Junichi Murata; Jinglu Hu; Chunzhi Jin

Universal learning networks (ULNs) and robust control system design are discussed. ULNs provide a generalized framework to model and control complex systems. They consist of a number of interconnected nodes, where the nodes may have any continuously differentiable nonlinear functions in them and each pair of nodes can be connected by multiple branches with arbitrary time delays. Therefore, physical systems which can be described by differential or difference equations, and also their controllers, can be modeled in a unified way, so ULNs constitute a superset of neural networks and fuzzy neural networks. In order to optimize the systems, a generalized learning algorithm is derived for the ULNs, in which both the first order derivatives (gradients) and the higher order derivatives are incorporated. The derivatives are calculated by using forward or backward propagation schemes. These algorithms for calculating the derivatives are extended versions of back propagation through time (BPTT) and real time recurrent learning (RTRL) by Williams, in the sense that generalized nonlinear functions and higher order derivatives are dealt with. As an application of ULNs, the higher order derivative, one of the distinguishing features of ULNs, is applied to realizing a robust control system in this paper. In addition, it is shown that higher order derivatives are effective tools for realizing sophisticated control of nonlinear systems. Other features of ULNs, such as multiple branches with arbitrary time delays and the use of a priori information, are discussed in other papers.


Neural Networks | 2001

Improvement of generalization ability for identifying dynamical systems by using universal learning networks

Kotaro Hirasawa; Sung-ho Kim; Jinglu Hu; Junichi Murata; Min Han; Chunzhi Jin

This paper studies how the generalization ability of models of dynamical systems can be improved by taking advantage of the second order derivatives of the outputs with respect to the external inputs. The proposed method can be regarded as a direct implementation of the well-known regularization technique using the higher order derivatives of Universal Learning Networks (ULNs). ULNs consist of a number of interconnected nodes, where the nodes may have any continuously differentiable nonlinear functions in them and each pair of nodes can be connected by multiple branches with arbitrary time delays. A generalized learning algorithm has been derived for the ULNs, in which both the first order derivatives (gradients) and the higher order derivatives are incorporated. First, the method for computing the second order derivatives of ULNs is discussed. Then, a new method for implementing the regularization term is presented. Finally, simulation studies on identification of a nonlinear dynamical system with noise are carried out to demonstrate the effectiveness of the proposed method. Simulation results show that the proposed method can improve the generalization ability of neural networks significantly, especially in that (1) a robust network can be obtained even when branches of the trained ULN are destroyed, and (2) the obtained performance does not depend on the initial parameter values.
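As a rough sketch of the regularization term, the snippet below penalizes the second derivative of a network output with respect to its external input, estimated by central differences on a generic one-hidden-layer tanh network. The paper computes these derivatives analytically for ULNs, so every name and constant here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x, W1, b1, w2):
    """A one-hidden-layer tanh network standing in for a ULN."""
    return np.tanh(x[:, None] * W1 + b1) @ w2

# Hypothetical parameters and noisy data from a smooth target function.
W1, b1, w2 = rng.normal(size=(1, 8)), rng.normal(size=8), rng.normal(size=8)
x = np.linspace(-1.0, 1.0, 50)
y = np.sin(3 * x) + 0.1 * rng.normal(size=x.size)

def regularized_loss(x, y, lam=1e-3, h=1e-3):
    """MSE plus a penalty on d^2 f / dx^2, the second derivative of the
    output w.r.t. the external input (here via central differences)."""
    f = model(x, W1, b1, w2)
    d2 = (model(x + h, W1, b1, w2) - 2 * f + model(x - h, W1, b1, w2)) / h**2
    return np.mean((f - y) ** 2) + lam * np.mean(d2 ** 2)

loss = regularized_loss(x, y)          # the quantity a trainer would minimize
```

Minimizing this loss trades data fit against output smoothness, which is the mechanism behind the improved generalization reported above.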


Systems, Man and Cybernetics | 2000

Chaos control on universal learning networks

Kotaro Hirasawa; Junichi Murata; Jinglu Hu; Chunzhi Jin

A new chaos control method is proposed which is useful for taking advantage of chaos and avoiding it. The proposed method is based on the following facts: (1) chaotic phenomena can be generated and eliminated by controlling the maximum Lyapunov exponent of the systems, and (2) the maximum Lyapunov exponent can be formulated and calculated by using higher-order derivatives of universal learning networks (ULNs). ULNs consist of a number of interconnected nodes which may have any continuously differentiable nonlinear functions in them and where each pair of nodes can be connected by multiple branches with arbitrary time delays. A generalized learning algorithm has been derived for the ULNs in which both first-order derivatives (gradients) and higher-order derivatives are incorporated. In simulations, parameters of ULNs with bounded node outputs were adjusted for the maximum Lyapunov exponent to approach the target value, and it has been shown that a fully-connected ULN with three sigmoidal function nodes is able to generate and eliminate chaotic behaviors by adjusting these parameters.


Society of Instrument and Control Engineers of Japan | 1999

A target tracking algorithm with range rate under the color measurement environment

Yaping Dai; Chunzhi Jin; Jinglu Hu; Kotaro Hirasawa; Zheng Liu

In this paper a new observation model is presented to improve state estimation and prediction in a target tracking problem. Compared with conventional approaches, the approach has the following distinguishing points. First, the measurement equation is set up in polar coordinates and combines the range rate measurement with the usual position measurements (range, azimuth, elevation angle, range rate). Second, the observation noise of the sensor data is treated as colored, so new linear state and observation equations can be obtained by incorporating the noise vector into the state vector; these equations satisfy the requirements of the Kalman filter. As a result, the accuracy of both the measurement and the prediction is increased.
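The colored-noise trick can be shown on a 1-D toy problem instead of the paper's polar-coordinate model: an AR(1) measurement noise n_{k+1} = a·n_k + w_k is appended to the state, after which the standard Kalman recursion applies unchanged. All matrices and constants below are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, a = 0.1, 0.8                      # sample time; AR(1) noise coefficient (assumed)

# Augmented state s = [position, velocity, colored noise n]: modeling
# n_{k+1} = a*n_k + w_k as part of the state turns the colored measurement
# noise into ordinary process noise, so the standard Kalman filter applies.
F = np.array([[1.0, dt, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, a]])
H = np.array([[1.0, 0.0, 1.0]])       # measurement = position + colored noise
Q = np.diag([1e-4, 1e-3, 0.04])       # process noise; 0.04 drives the AR(1) term
R = np.array([[1e-6]])                # (almost) no additional white noise

# Simulate a constant-velocity target observed through colored noise.
steps = 200
truth = np.zeros((steps, 3))
truth[0] = [0.0, 1.0, 0.0]
z = np.zeros(steps)
for k in range(1, steps):
    truth[k] = F @ truth[k - 1]
    truth[k, 2] = a * truth[k - 1, 2] + 0.2 * rng.normal()
    z[k] = H @ truth[k]

# Standard Kalman recursion on the augmented state.
s, P = np.zeros(3), np.eye(3)
est = np.zeros(steps)
for k in range(steps):
    s, P = F @ s, F @ P @ F.T + Q                 # predict
    innov = z[k] - H @ s                          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T / S                               # Kalman gain, shape (3, 1)
    s = s + (K * innov).ravel()                   # update
    P = (np.eye(3) - K @ H) @ P
    est[k] = s[0]

rmse = np.sqrt(np.mean((est[20:] - truth[20:, 0]) ** 2))
```

Because the augmented model knows the noise dynamics, the filter can separate the target's motion from the correlated sensor error rather than treating it as white.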


Systems, Man and Cybernetics | 2001

A new control method of nonlinear systems based on impulse responses of universal learning networks

Kotaro Hirasawa; Jinglu Hu; Junichi Murata; Chunzhi Jin

A new control method for nonlinear dynamic systems is proposed based on the impulse responses of universal learning networks (ULNs). ULNs form a superset of neural networks. They consist of a number of interconnected nodes, where the nodes may have any continuously differentiable nonlinear functions in them and each pair of nodes can be connected by multiple branches with arbitrary time delays. A generalized learning algorithm is derived for the ULNs, in which both the first order derivatives (gradients) and the higher order derivatives are incorporated. One of the distinguishing features of the proposed control method is that the impulse response of the system is treated as an extended part of the criterion function and can be calculated by using the higher order derivatives of ULNs. By using the impulse response in the criterion function, nonlinear dynamics with not only quick response but also quick damping and small steady-state error can be obtained more easily than with conventional nonlinear control systems that use quadratic-form criterion functions of state and control variables.
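The role of the impulse response as a criterion can be reduced to a linear toy: for a first-order loop x_{k+1} = a·x_k + b·u_k with feedback u_k = -g·x_k, the impulse response is (a - b·g)^k, and summing its squares scores how quickly the loop damps. The numbers below are assumptions; the paper evaluates the analogous quantity through higher order derivatives of a ULN.

```python
import numpy as np

a, b = 0.95, 0.5                          # assumed open-loop pole and input gain

def impulse_criterion(g, horizon=50):
    """Sum of squared impulse-response samples of the closed loop
    x_{k+1} = (a - b*g) * x_k; smaller means quicker damping."""
    pole = a - b * g
    h = pole ** np.arange(horizon)        # closed-loop impulse response
    return float(np.sum(h ** 2))

slow = impulse_criterion(0.1)             # pole 0.90: sluggish decay
fast = impulse_criterion(1.5)             # pole 0.20: rapid decay
```

Minimizing such a criterion over the feedback gain directly rewards quick damping, which a quadratic cost on states and controls only captures indirectly.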


IFAC Proceedings Volumes | 2001

On Bias Compensated Recursive Least-Squares Algorithm for FIR Adaptive Filtering

Lijuan Jia; Chunzhi Jin; Kiyoshi Wada

It is known that the recursive least squares (RLS) algorithm and the least mean square (LMS) algorithm lead to biased FIR filter coefficients in the presence of input and output noise. In this paper, a new type of bias compensated recursive least squares (BCRLS) algorithm is proposed to produce consistent results for adaptive FIR filtering in the input and output noise case. The proposed algorithm introduces an auxiliary estimator to estimate the unknown input noise variance. Owing to this, the bias of the RLS solution due to the input noise can be compensated to yield consistent filter coefficients. Computational simulations indicate that the proposed algorithm is very efficient.
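The compensation idea is easy to see in a batch setting: with white noise of variance σ² on the input regressors, the normal matrix acquires an extra N·σ²·I term in expectation, and subtracting it removes the bias. The sketch below uses a known σ² for a 3-tap FIR filter; the recursive algorithm in the paper additionally estimates σ² online with an auxiliary estimator.

```python
import numpy as np

rng = np.random.default_rng(2)

# True 3-tap FIR filter, white input, and noise-free output.
b_true = np.array([0.8, -0.5, 0.3])
N, sigma_in = 20000, 0.3
u = rng.normal(size=N + 2)
U = np.column_stack([u[2:], u[1:-1], u[:-2]])    # clean regressor matrix
y = U @ b_true

# The estimator only sees a noisy version of the input signal.
u_noisy = u + sigma_in * rng.normal(size=u.shape)
Un = np.column_stack([u_noisy[2:], u_noisy[1:-1], u_noisy[:-2]])

# Plain least squares: biased toward zero by the input noise.
b_ls = np.linalg.solve(Un.T @ Un, Un.T @ y)

# Bias-compensated least squares: remove the expected noise
# contribution N*sigma^2*I from the normal matrix.
b_bc = np.linalg.solve(Un.T @ Un - N * sigma_in**2 * np.eye(3), Un.T @ y)
```

With the numbers above, b_ls underestimates each tap by roughly a factor 1/(1 + σ²), while b_bc recovers the true coefficients up to sampling error.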


IFAC Proceedings Volumes | 1999

Control of nonlinear systems based on a probabilistic learning network

Jinglu Hu; Kotaro Hirasawa; Chunzhi Jin; Takuya Matsuoka

This paper presents a control design scheme for nonlinear systems based on a probabilistic learning network (ProNet) that is capable of dealing with stochastic signals. The plant and its controllers are described by a set of related equations and form a unified learning network (ProNet), where disturbances are treated as external inputs. Controller design is thus transferred to ProNet learning. By including a term that reduces the variance of the ProNet output during training, the trained ProNet has different sensitivities to signals of different frequencies. Taking advantage of this, a ProNet control system is designed to increase its robustness against disturbances.


International Symposium on Neural Networks | 1999

Universal learning networks with varying parameters

Kotaro Hirasawa; Jinglu Hu; Junichi Murata; Chunzhi Jin; Hironobu Etoh; Hironobu Katagiri

The universal learning network (ULN), a superset of supervised learning networks, has already been proposed. Parameters in a ULN are trained to optimize a criterion function, as in conventional neural networks, and after training they are used as constant parameters. In this paper, a method to alter the parameters depending on the network flows is presented to enhance the representation abilities of networks. In the proposed method there exist two kinds of networks: a basic network that includes the varying parameters, and a network that calculates the optimal varying parameters depending on the network flows of the basic network. It is also proposed that any type of network, such as fuzzy inference networks, radial basis function networks, and neural networks, can be used for the basic and parameter-calculation networks. Simulations in which the parameters of a neural network are altered by fuzzy inference networks show that, for the same number of varying parameters, the proposed networks have higher representation abilities than conventional networks.
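A minimal caricature of the two-network arrangement, with both networks reduced to hand-set scalar functions: the basic network's gain is recomputed at every step by a parameter-calculation function of the current network flow, instead of being a trained constant. Everything here is illustrative.

```python
import numpy as np

def parameter_net(flow):
    """Computes the varying parameter from the basic network's flow.
    (The paper suggests fuzzy inference or RBF networks for this role.)"""
    return 1.0 + 0.5 * np.tanh(flow)

def basic_net(x, steps=10):
    """A one-node recurrent basic network whose gain is recalculated
    from its own previous output (the 'network flow') at every step."""
    flow = x
    trace = [flow]
    for _ in range(steps):
        w = parameter_net(flow)       # varying parameter, not a trained constant
        flow = np.tanh(w * flow)
        trace.append(flow)
    return np.array(trace)

trace = basic_net(0.8)                # trajectory of the basic network's flow
```

Because the effective weight depends on the signal flowing through the network, the map applied at each step changes over time, which is the source of the enhanced representation ability claimed above.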


International Symposium on Neural Networks | 2000

A new method to prune the neural network

Weishui Wan; Kotaro Hirasawa; Jinglu Hu; Chunzhi Jin

Using the backpropagation (BP) algorithm to train neural networks is a widely adopted practice in both theory and applications. However, the distributed weight representation, that is, the weight matrix of the final network after BP training, is usually not sparse, which prohibits its use in discovering rules about the inherent functional relations between the input and output data; some kind of structure optimization is therefore needed. With this in mind, a new method to prune neural networks is proposed in this paper, based on statistical quantities of the networks. Compared with other known pruning methods, such as structural learning with forgetting and the RPROP algorithm, the proposed method attains comparable or even better results without an evident increase in computational load. Detailed simulations using the Iris data set support this assertion.
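A generic statistical pruning criterion (a stand-in, not the paper's exact quantities) can be sketched by ranking each hidden-to-output connection by the variance of its contribution w_j·h_j over the data and removing the weakest ones:

```python
import numpy as np

rng = np.random.default_rng(3)

# A random one-hidden-layer network; a few output weights are made
# negligible on purpose so there is something worth pruning.
X = rng.normal(size=(200, 4))
W1 = rng.normal(size=(4, 10))
w2 = rng.normal(size=10)
w2[rng.choice(10, size=4, replace=False)] *= 0.001

H = np.tanh(X @ W1)                            # hidden activations over the data
contrib_var = np.var(H * w2, axis=0)           # variance of each connection's contribution

keep = contrib_var > 1e-3 * contrib_var.max()  # prune low-contribution connections
w2_pruned = np.where(keep, w2, 0.0)

# Pruning should barely change the network output.
y_full, y_pruned = H @ w2, H @ w2_pruned
rel_err = np.linalg.norm(y_full - y_pruned) / np.linalg.norm(y_full)
```

Zeroing these connections sparsifies the weight matrix while leaving the input-output mapping nearly intact, which is the property that makes rule discovery feasible.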

Collaboration


Dive into Chunzhi Jin's collaborations.

Top Co-Authors


Lijuan Jia

Beijing Institute of Technology


Yaping Dai

Beijing Institute of Technology
