Chao-Chee Ku
Pennsylvania State University
Publications
Featured research published by Chao-Chee Ku.
IEEE Transactions on Nuclear Science | 1992
Chao-Chee Ku; Kwang Y. Lee; Robert M. Edwards
A novel approach to wide-range optimal reactor temperature control using diagonal recurrent neural networks (DRNNs) with an adaptive learning rate scheme is presented. The drawback of the usual feedforward neural network (FNN) is that it is a static mapping, requires a large number of neurons, and takes a long training time. The usual fixed learning rate based on an empirical trial-and-error scheme is slow and does not guarantee convergence. The DRNN performs dynamic mapping and requires far fewer neurons and weights, and thus converges faster than the FNN. A dynamic backpropagation algorithm coupled with an adaptive learning rate guarantees even faster convergence. The DRNN controller described includes both a neurocontroller and a neuroidentifier. A reference model which incorporates an optimal control law with improved reactor temperature response is used for training of the neurocontroller and neuroidentifier. Rapid convergence of this DRNN-based control system is demonstrated when used to improve reactor temperature performance.
international forum on applications of neural networks to power systems | 1991
Kwang Y. Lee; Y.T. Cha; Chao-Chee Ku
A study is made on the application of the artificial neural network (ANN) method to forecast the short-term load for a large power system. The load has two distinct patterns: weekday and weekend-day patterns. The weekend-day pattern includes Saturday, Sunday, and Monday loads. Three different ANN models are proposed, including two feedforward neural networks and one recurrent neural network. Inputs to the ANN are past loads and the output is the predicted load for a given day. The standard deviation and percent error of each model are compared.
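The abstract's evaluation metrics (percent error and standard deviation of the forecast error) can be sketched as follows; the load values here are hypothetical, not from the paper:

```python
import numpy as np

# Hypothetical hourly loads (MW): actual values vs. a model's one-day-ahead forecast.
actual   = np.array([310.0, 295.0, 288.0, 305.0, 350.0, 402.0])
forecast = np.array([305.0, 300.0, 290.0, 300.0, 345.0, 410.0])

percent_error = 100.0 * np.abs(actual - forecast) / actual  # per-hour percent error
error_std = (actual - forecast).std()                       # spread of forecast errors
```

Comparing models by both numbers, as the paper does, distinguishes a forecaster with small average error from one whose errors are merely consistent.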
international forum on applications of neural networks to power systems | 1993
Kwang Y. Lee; Tae-Il Choi; Chao-Chee Ku; June Ho Park
This paper presents a new approach for short-term load forecasting using a diagonal recurrent neural network (DRNN) with an adaptive learning rate. The fully connected recurrent neural network (FRNN), where all neurons are coupled to one another, is difficult to train and to converge in a short time. The DRNN is a modified model of the FRNN. It requires fewer weights than the FRNN, and rapid convergence has been demonstrated. A dynamic backpropagation algorithm coupled with an adaptive learning rate guarantees even faster convergence. To consider the effect of seasonal load variation on the accuracy of the proposed forecasting model, forecasting accuracy is evaluated throughout a whole year. Simulation results show that the forecast accuracy is improved.
conference on decision and control | 1992
Chao-Chee Ku; Kwang Y. Lee
A method of choosing learning rates adaptively in controlling a dynamic system using diagonal recurrent neural networks is presented. The architecture of the DRNN is a modified model of the fully connected recurrent neural network with one hidden layer. The hidden layer is comprised of self-recurrent neurons, each feeding its output only into itself and not to other neurons in the hidden layer. An unknown plant is identified by a diagonal recurrent neuroidentifier (DRNI), which provides the sensitivity information of the plant to a diagonal recurrent neurocontroller (DRNC). A dynamic backpropagation algorithm (DBP) is used to train both the DRNC and DRNI. The DRNN captures the dynamic nature of a system, and, since it is not fully connected, training is much faster than for the case of a fully connected recurrent neural network. For faster learning, an adaptive learning rate that guarantees convergence is developed. The proposed approach is applied to numerical problems. Simulation results are included.
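The self-recurrent structure described above (each hidden neuron feeding its output back only into itself) can be sketched as a forward step in NumPy; all sizes, weights, and inputs below are hypothetical:

```python
import numpy as np

def drnn_step(u, h_prev, W_in, w_diag, w_out):
    """One forward step of a diagonal recurrent neural network.

    Because each hidden neuron is coupled only to itself, the recurrent
    weights form a vector (the diagonal), not a full hidden-to-hidden matrix.
    """
    s = W_in @ u + w_diag * h_prev   # self-recurrence: elementwise, not fully connected
    h = np.tanh(s)                   # sigmoidal hidden activations
    y = w_out @ h                    # linear output neuron
    return y, h

# Hypothetical sizes: 2 inputs, 5 self-recurrent hidden neurons, 1 output.
rng = np.random.default_rng(0)
W_in = rng.normal(size=(5, 2))
w_diag = rng.normal(size=5)
w_out = rng.normal(size=5)

h = np.zeros(5)
for u in ([1.0, 0.0], [0.5, 0.5], [0.0, 1.0]):
    y, h = drnn_step(np.asarray(u), h, W_in, w_diag, w_out)
```

The diagonal recurrence is why the DRNN needs far fewer weights than a fully connected recurrent network while still retaining internal memory of past inputs.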
international symposium on neural networks | 1992
Chao-Chee Ku; Kwang Y. Lee
The recurrent neural network is proposed for system identification of nonlinear dynamic systems. When system identification is coupled with control problems, real-time performance is very important, and a neuroidentifier must be designed so that it converges without excessive training time. The neural network should also be simple and easily implemented. A novel neuroidentifier, the diagonal recurrent neural network (DRNN), that fulfils these requirements is proposed. A generalized algorithm, dynamic backpropagation, is developed to train the DRNN. The DRNN was used to identify nonlinear systems, and simulation showed promising results.
international symposium on neural networks | 1992
Chao-Chee Ku; Kwang Y. Lee
The authors present an approach for control and system identification using diagonal recurrent neural networks (DRNNs). An unknown plant is identified by a system identifier, called a diagonal recurrent neuroidentifier (DRNI), and provides information on the plant to a controller, called a diagonal recurrent neurocontroller (DRNC). A generalized algorithm, called the dynamic backpropagation algorithm, is developed to train both the DRNC and the DRNI. The DRNN captures the dynamic nature of a system and, since it is not fully connected, training is much faster than with a fully connected recurrent neural network.
advances in computing and communications | 1994
Chao-Chee Ku; Kwang Y. Lee
Convergence and the closed-loop stability property are established for a diagonal recurrent neural network (DRNN) based control system. Two DRNNs are utilized in the control system, one as an identifier called diagonal recurrent neuroidentifier (DRNI) and the other as a controller called diagonal recurrent neurocontroller (DRNC). A generalized dynamic backpropagation algorithm (DBP) is developed and used to train both DRNC and DRNI. Due to the recurrence, the DRNN can capture the dynamic behavior of a system and since it is not fully connected, the architecture is simpler than a fully connected recurrent neural network. Convergence theorems for the adaptive DBP algorithms are developed and the closed-loop stability is established for the DRNN based control system when the plant is BIBO stable.
international symposium on neural networks | 1994
Kwang Y. Lee; Tae-Il Choi; Chao-Chee Ku; June Ho Park
Different neural network architectures are presented for short-term load forecasting. The fully connected recurrent neural network (FRNN), where all neurons are coupled to one another, is difficult to train and to converge in a short time. The diagonal recurrent neural network (DRNN) is a modified model of the FRNN. It requires fewer weights than the FRNN, and rapid convergence has been demonstrated. A dynamic backpropagation algorithm coupled with an adaptive learning rate guarantees even faster convergence. Many experiments are conducted to provide the one-day-ahead load forecast, and the results are compared. The effects of temperature and functional-link net mapping are also studied by including them as the networks' inputs. The forecasting accuracy for weekend load can be improved by using a separate weekend load model.
international forum on applications of neural networks to power systems | 1993
Chao-Chee Ku; Kwang Y. Lee
A new neural network paradigm, the diagonal recurrent neural network (DRNN), is presented and used to design a neural network controller, which includes both a neuroidentifier (DRNI) and a neurocontroller (DRNC). An unknown plant is identified by the neuroidentifier, which provides the sensitivity information of the plant to the neurocontroller. A generalized dynamic backpropagation algorithm (DBP) is developed to train both the DRNC and DRNI. An adaptive learning rate scheme based on a Lyapunov function is developed. The use of adaptive learning rates not only accelerates the learning speed but also guarantees the convergence of the neural network.
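The abstract does not give the exact learning-rate formula; the sketch below is an illustrative form under the assumption that the Lyapunov analysis yields a convergence bound of the type eta < 2 / g_max**2, where g_max bounds the network output's sensitivity to the weights, and that the adaptive rule scales eta inversely with the current squared sensitivity:

```python
import numpy as np

def adaptive_lr(sensitivity, beta=1.0):
    """Illustrative adaptive learning rate (assumed form, not the paper's exact rule).

    With 0 < beta < 2, eta * ||g||^2 = beta < 2, which is the kind of
    condition a Lyapunov-based convergence proof typically requires.
    """
    g_sq = float(np.dot(sensitivity, sensitivity))  # squared sensitivity norm
    return beta / max(g_sq, 1e-8)                   # guard against a vanishing gradient

# Hypothetical sensitivity vector dy/dw at one training step.
g = np.array([0.3, -0.8, 0.5])
eta = adaptive_lr(g, beta=1.0)
```

Scaling the step inversely with the squared sensitivity keeps the weight update bounded: steep regions get small steps, flat regions get large ones, which is what "accelerates learning while guaranteeing convergence" amounts to in practice.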
international forum on applications of neural networks to power systems | 1993
Chao-Chee Ku; Kwang Y. Lee; Robert M. Edwards
A new approach for wide-range optimal reactor temperature control using diagonal recurrent neural networks (DRNNs) with an adaptive learning rate scheme is presented. The drawback of the usual feedforward neural network (FNN) is that it is a static mapping, requires a large number of neurons, and takes a long training time. The usual fixed learning rate based on an empirical trial-and-error scheme is slow and does not guarantee convergence. The dynamic backpropagation algorithm coupled with an adaptive learning rate guarantees even faster convergence. A reference model which incorporates an optimal control law with improved reactor temperature response is used for training of the neurocontroller and neuroidentifier. Rapid convergence of this DRNN-based control system is demonstrated when applied to improve reactor temperature performance.