
Publication


Featured research published by Changhua Yu.


Neurocomputing | 2006

An efficient hidden layer training method for the multilayer perceptron

Changhua Yu; Michael T. Manry; Jiang Li; Pramod Lakshmi Narasimha

The output-weight-optimization and hidden-weight-optimization (OWO-HWO) training algorithm for the multilayer perceptron alternately solves linear equations for output weights and reduces a separate hidden layer error function with respect to hidden layer weights. Here, three major improvements are made to OWO-HWO. First, a desired net function is derived. Second, using the classical mean square error, a weighted hidden layer error function is derived which de-emphasizes net function errors that correspond to saturated activation function values. Third, an adaptive learning factor based on the local shape of the error surface is used in hidden layer training. Faster learning convergence is experimentally verified using three training data sets.
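
The alternation described above can be shown in a brief sketch: output weights come from linear least squares (the OWO step), and hidden weights get an update driven by an error weighted by the activation derivative, so saturated units contribute little (the HWO step). This is a minimal illustration assuming a single sigmoid hidden layer and a fixed learning rate in place of the paper's adaptive learning factor; the helper name owo_hwo_epoch is ours, not the authors'.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def owo_hwo_epoch(X, T, W_h, lr=0.1):
    """One illustrative OWO-HWO style epoch (hypothetical helper, not the
    authors' code): OWO solves linear equations for the output weights,
    HWO takes a gradient step on a saturation-weighted hidden-layer error."""
    # Forward pass
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # augment inputs with bias
    net = Xb @ W_h.T                                # hidden net functions
    O = sigmoid(net)                                # hidden activations
    Ob = np.hstack([O, np.ones((O.shape[0], 1))])

    # OWO: output weights by linear least squares
    W_o = np.linalg.lstsq(Ob, T, rcond=None)[0].T

    # Desired net-function change, obtained by pushing the output error
    # back through the output weights (illustrative delta-rule style target).
    E = T - Ob @ W_o.T                              # output errors
    delta_net = E @ W_o[:, :-1]                     # drop the bias column

    # Weight by the activation derivative so saturated units
    # (derivative near zero) are de-emphasized.
    deriv = O * (1.0 - O)
    grad = ((deriv * delta_net).T @ Xb) / X.shape[0]

    # HWO: gradient-style step on the hidden weights (the paper uses an
    # adaptive learning factor; a fixed lr keeps the sketch short).
    W_h = W_h + lr * grad
    return W_h, W_o
```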


International Journal on Artificial Intelligence Tools | 2005

Prototype Classifier Design with Pruning

Jiang Li; Michael T. Manry; Changhua Yu; D. Randall Wilson

Algorithms reducing the storage requirement of the nearest neighbor classifier (NNC) can be divided into three main categories: fast searching algorithms, instance-based learning algorithms, and prototype-based algorithms. We propose an algorithm, LVQPRU, for pruning NNC prototype vectors, which yields a compact classifier with good performance. The basic condensing algorithm is applied to the initial prototypes to speed up the learning process. The learning vector quantization (LVQ) algorithm is utilized to fine-tune the remaining prototypes during each pruning iteration. We evaluate LVQPRU on several data sets along with 12 other algorithms using ten-fold cross-validation. Simulation results show that the proposed algorithm has high generalization accuracy and good storage reduction ratios.
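
A hedged sketch of the pruning loop follows: prototypes whose removal costs the least accuracy are dropped one at a time, and the survivors are fine-tuned with LVQ1-style updates between prunings. The condensing initialization is omitted, and the helper names (lvqpru_sketch, lvq1_finetune) are illustrative assumptions, not the published implementation.

```python
import numpy as np

def nnc_predict(X, protos, labels):
    """1-NN prediction against a prototype set."""
    d = ((X[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return labels[d.argmin(1)]

def lvq1_finetune(protos, labels, X, y, lr=0.05, epochs=5):
    """LVQ1-style fine-tuning: pull the winning prototype toward samples of
    its own class, push it away otherwise (illustrative, not the paper's code)."""
    for _ in range(epochs):
        for x, t in zip(X, y):
            w = ((protos - x) ** 2).sum(1).argmin()
            sign = 1.0 if labels[w] == t else -1.0
            protos[w] += sign * lr * (x - protos[w])
    return protos

def lvqpru_sketch(protos, labels, X, y, n_keep):
    """Greedy pruning loop in the spirit of LVQPRU (hypothetical helper):
    repeatedly drop the prototype whose removal hurts accuracy least,
    then fine-tune the survivors with LVQ."""
    protos, labels = protos.copy(), labels.copy()
    while len(protos) > n_keep:
        base = (nnc_predict(X, protos, labels) == y).mean()
        scores = []
        for i in range(len(protos)):
            keep = np.arange(len(protos)) != i
            acc = (nnc_predict(X, protos[keep], labels[keep]) == y).mean()
            scores.append(acc - base)
        worst = int(np.argmax(scores))               # least useful prototype
        keep = np.arange(len(protos)) != worst
        protos, labels = protos[keep], labels[keep]
        protos = lvq1_finetune(protos, labels, X, y)
    return protos, labels
```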


Asilomar Conference on Signals, Systems and Computers | 2002

A modified hidden weight optimization algorithm for feedforward neural networks

Changhua Yu; Michael T. Manry

The output weight optimization-hidden weight optimization (OWO-HWO) feedforward network training algorithm alternately solves linear equations for output weights and reduces a separate hidden layer error function with respect to hidden layer weights. Here, a new hidden layer error function is proposed which de-emphasizes net function errors that correspond to saturated activation function values. In addition, an adaptive learning rate based on the local shape of the error surface is used in hidden layer training. Faster learning convergence is experimentally verified.
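
The saturation weighting can be stated compactly; the snippet below is one illustrative form (assuming sigmoid hidden units), not the paper's exact error function, and it complements the OWO-HWO sketch shown earlier.

```python
import numpy as np

def weighted_hidden_error(delta_net, net):
    """Illustrative weighted hidden-layer error: net-function errors at
    saturated units (where the sigmoid derivative is near zero) contribute
    little to the objective."""
    o = 1.0 / (1.0 + np.exp(-net))      # sigmoid activation
    deriv = o * (1.0 - o)               # near zero when the unit is saturated
    return np.mean((deriv * delta_net) ** 2)
```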


International Journal on Artificial Intelligence Tools | 2005

Iterative Design of Neural Network Classifiers Through Regression

R. G. Gore; Jiang Li; Michael T. Manry; Li-min Liu; Changhua Yu; John Y. Wei

A method which modifies the objective function used for designing neural network classifiers is presented. The classical mean-square error criterion is relaxed by introducing two types of local error bias, which are treated like free parameters. Open and closed form solutions are given for finding these bias parameters. The new objective function is seamlessly integrated into existing training algorithms such as back propagation (BP), output weight optimization (OWO), and hidden weight optimization (HWO). The resulting algorithms are successfully applied in training neural net classifiers having a linear final layer. Classifiers are trained and tested on several data sets from pattern recognition applications. Improvement over classical iterative regression methods is clearly demonstrated.
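
One plausible form of such a relaxed criterion is sketched below: closed-form bias parameters absorb errors in the "harmless" direction (the correct class exceeding its high target, the other classes falling below their low targets). This is an assumption for illustration only and not necessarily the paper's exact formulation.

```python
import numpy as np

def relaxed_mse(Y, T, correct):
    """Hedged sketch of a relaxed MSE for classifiers.
    Y, T: network outputs and targets, shape (patterns, classes).
    correct: boolean mask, True at each pattern's true-class output.
    Bias parameters are chosen in closed form so they cancel the residual
    whenever the error points in a harmless direction, and are zero otherwise."""
    resid = T - Y
    bias = np.where(correct, np.minimum(resid, 0.0), np.maximum(resid, 0.0))
    return np.mean((resid - bias) ** 2)
```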


Neurocomputing | 2007

Convergent design of piecewise linear neural networks

Hema Chandrasekaran; Jiang Li; Walter H. Delashmit; Pramod Lakshmi Narasimha; Changhua Yu; Michael T. Manry

Piecewise linear networks (PLNs) are attractive because they can be trained quickly and provide good performance in many nonlinear approximation problems. Most existing design algorithms for piecewise linear networks are not convergent, are not optimal, or are not designed to handle noisy data. In this paper, four algorithms are presented which attack this problem. They are: (1) a convergent design algorithm which builds the PLN one module at a time using a branch and bound technique; (2) two pruning algorithms which eliminate less useful modules from the network; and (3) a sifting algorithm which picks the best networks out of the many designed. The performance of the PLN is compared with that of the multilayer perceptron (MLP) using several benchmark data sets. Numerical results demonstrate that piecewise linear networks are adequate for many approximation problems.
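
A much-simplified sketch of the piecewise linear network idea follows: the input space is partitioned around module centers and one linear model is fitted per module by least squares. The paper's convergent branch-and-bound design, pruning, and sifting steps are not reproduced here; fit_pln and predict_pln are illustrative names.

```python
import numpy as np

def fit_pln(X, y, n_modules=4, iters=10, seed=0):
    """Simplified piecewise linear network sketch (not the paper's convergent
    branch-and-bound design): alternate between assigning samples to the
    nearest module center and refitting each module's linear model."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_modules, replace=False)]
    Xb = np.hstack([X, np.ones((len(X), 1))])        # augment with bias
    for _ in range(iters):
        assign = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        models = []
        for m in range(n_modules):
            idx = assign == m
            if idx.sum() > Xb.shape[1]:
                w = np.linalg.lstsq(Xb[idx], y[idx], rcond=None)[0]
                centers[m] = X[idx].mean(0)
            else:
                w = np.zeros(Xb.shape[1])             # underpopulated module
            models.append(w)
    return centers, np.array(models)

def predict_pln(X, centers, models):
    """Each sample is mapped through the linear model of its nearest module."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    assign = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
    return np.einsum('nd,nd->n', Xb, models[assign])
```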


International Journal of Pattern Recognition and Artificial Intelligence | 2005

Effects of nonsingular preprocessing on feedforward network training

Changhua Yu; Michael T. Manry; Jiang Li

In the neural network literature, many preprocessing techniques, such as feature de-correlation, input unbiasing and normalization, are suggested to accelerate multilayer perceptron training. In this paper, we show that a network trained with an original data set and one trained with a linear transformation of the original data will go through the same training dynamics, as long as they start from equivalent states. Thus preprocessing techniques may not be helpful and are merely equivalent to using a different weight set to initialize the network. Theoretical analyses of such preprocessing approaches are given for conjugate gradient, back propagation and the Newton method. In addition, an efficient Newton-like training algorithm is proposed for hidden layer training. Experiments on various data sets confirm the theoretical analyses and verify the improvement of the new algorithm.
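
The "equivalent states" idea admits a small numeric check: a nonsingular linear transformation of the inputs is exactly compensated by a corresponding first-layer weight transformation, so the two networks compute identical net functions before training begins. The check below is illustrative and uses random data.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))            # original inputs
A = rng.normal(size=(3, 3))            # nonsingular linear preprocessing
W = rng.normal(size=(4, 3))            # first-layer weights, 4 hidden units

X_t = X @ A.T                          # transformed inputs: x' = A x
W_t = W @ np.linalg.inv(A)             # equivalent first-layer weights

# Both networks produce identical hidden-layer net functions, i.e. the
# preprocessing is absorbed by a different (equivalent) weight initialization.
assert np.allclose(X @ W.T, X_t @ W_t.T)
```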


Neural Processing Letters | 2017

Properties of a Batch Training Algorithm for Feedforward Networks

Melvin D. Robinson; Michael T. Manry; Sanjeev S. Malalur; Changhua Yu

We examine properties of a batch training algorithm known as output weight optimization-hidden weight optimization (OWO-HWO). Using the concept of equivalent networks, we analyze the effect of input transformation on back propagation (BP). We introduce a new theory of affine invariance and partial affine invariance for neural networks and prove this property for OWO-HWO. Finally, we relate HWO to BP and show that every iteration of HWO is equivalent to BP applied to whitening-transformed data. Experimental results validate the connection between OWO-HWO and OWO-BP.
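
The stated HWO-BP connection can be checked numerically under the assumption (consistent with the OWO-HWO abstracts above) that the HWO weight update solves linear equations whose coefficient matrix is the hidden-layer input autocorrelation. The variable names below are illustrative, not the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))           # hidden-layer inputs (augmented)
delta = rng.normal(size=(50, 3))       # hidden-layer deltas, 3 hidden units

R = X.T @ X / len(X)                   # input autocorrelation matrix
G = delta.T @ X / len(X)               # plain BP gradient for hidden weights

# HWO-style update: solve linear equations with the autocorrelation matrix.
dW_hwo = np.linalg.solve(R.T, G.T).T   # solves dW @ R = G

# BP gradient computed on whitened inputs, mapped back to the original space.
C = np.linalg.cholesky(R)              # R = C @ C.T
Xw = X @ np.linalg.inv(C).T            # whitened inputs (unit autocorrelation)
Gw = delta.T @ Xw / len(X)             # BP gradient in whitened coordinates
dW_bp_white = Gw @ np.linalg.inv(C)    # map the update back to original weights

# The two updates coincide, illustrating HWO as BP on whitened data.
assert np.allclose(dW_hwo, dW_bp_white)
```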


IEEE Transactions on Neural Networks | 2006

Feature Selection Using a Piecewise Linear Network

Jiang Li; Michael T. Manry; Pramod Lakshmi Narasimha; Changhua Yu


The Florida AI Research Society | 2004

Iterative improvement of neural classifiers

Jiang Li; Michael T. Manry; Li-min Liu; Changhua Yu; John Y. Wei


The Florida AI Research Society | 2004

Invariance of MLP Training to Input Feature De-correlation

Changhua Yu; Michael T. Manry; Jiang Li

Collaboration


Dive into Changhua Yu's collaborations.

Top Co-Authors

Michael T. Manry (University of Texas at Arlington)
Jiang Li (Old Dominion University)
Pramod Lakshmi Narasimha (University of Texas at Arlington)
John Y. Wei (University of Texas at Arlington)
Li-min Liu (University of Texas at Arlington)
Hema Chandrasekaran (Lawrence Livermore National Laboratory)
Melvin D. Robinson (University of Texas at Tyler)
Sanjeev S. Malalur (University of Texas at Arlington)