Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Zhen-Ping Lo is active.

Publication


Featured research published by Zhen-Ping Lo.


Biological Cybernetics | 1991

On the rate of convergence in topology preserving neural networks

Zhen-Ping Lo; Behnam Bavarian

A formal analysis of neighborhood interaction function selection in the topology preserving unsupervised neural network is presented in this paper. The definition of the neighborhood interaction function is motivated by anatomical evidence, as opposed to the uniform neighborhood interaction set in current use. By selecting a neighborhood interaction function whose amplitude of interaction decreases in the spatial domain, the topological order is always enforced and the rate of self-organization to the final equilibrium state is improved. Several simulations are carried out to show the improvement in rate when using a neighborhood interaction function versus a neighborhood interaction set. An error measure functional is further defined to compare the two approaches quantitatively.
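The contrast the abstract draws can be sketched in a one-dimensional Kohonen update. This is a minimal illustration, not the paper's implementation: the Gaussian form of the decreasing interaction function and all parameter values are assumptions.

```python
import numpy as np

def som_update(weights, x, winner, sigma, lr, use_gaussian=True):
    """One Kohonen update on a 1-D map.

    weights : (n_units, dim) prototype vectors
    x       : (dim,) input sample
    winner  : index of the best-matching unit
    If use_gaussian, the interaction amplitude decays with lateral
    distance from the winner (a decreasing function in the spatial
    domain); otherwise a uniform neighborhood set of radius sigma is used.
    """
    idx = np.arange(len(weights))
    d = np.abs(idx - winner)                  # lateral distance on the map
    if use_gaussian:
        h = np.exp(-d**2 / (2.0 * sigma**2))  # amplitude decreasing in space
    else:
        h = (d <= sigma).astype(float)        # uniform interaction set
    return weights + lr * h[:, None] * (x - weights)

# toy usage: map 1-D inputs onto a 10-unit chain
rng = np.random.default_rng(0)
w = rng.random((10, 1))
for t in range(200):
    x = rng.random(1)
    win = int(np.argmin(np.linalg.norm(w - x, axis=1)))
    w = som_update(w, x, win, sigma=2.0, lr=0.1)
```

With the Gaussian choice, units far from the winner receive a strictly smaller update, which is the mechanism the paper credits for enforcing topological order at every step.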


International Parallel Processing Symposium | 1991

Analysis of neighborhood interaction in Kohonen neural networks

Zhen-Ping Lo; Masahiro Fujita; Behnam Bavarian

A formal analysis of the neighborhood interaction in Kohonen neural networks is presented. The authors propose a new neighborhood interaction to improve the topological order of the neural network. The neighborhood interaction, which depends on lateral distance, is motivated by anatomical evidence, as opposed to the constant interaction in current use. The authors also show mathematically that using the new neighborhood interaction enforces the topological order in the neighborhood set at every iteration. One simulation is presented to show that the topological order is improved by using the new neighborhood interaction.


Computers & Electrical Engineering | 1993

Multiple job scheduling with artificial neural networks

Zhen-Ping Lo; Behnam Bavarian

This paper presents an application of neural networks to a multiple task scheduling problem. We take the crossbar Hopfield network used to solve the classical traveling salesman problem and extend it to a 3-D neuro-box network (NBN) to solve multiple task scheduling on multiple servers. The approach is presented in several stages of increasing difficulty: a brief review of the Hopfield network, the formulation of the traveling salesman problem on the Hopfield network, the extension to the multiple traveling salesman problem, and the formulation of the manufacturing task scheduling problem. At every step, the topology of the network, its energy function (the cost function to be minimized), the differential equations defining the characteristics of the neurons, and illustrative simulations are presented.
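The Hopfield TSP formulation the paper builds on rests on an energy function over a city-by-position state matrix. A minimal sketch of such an energy is below; the penalty coefficients and the two-penalty form are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def tsp_energy(v, dist, A=500.0, B=500.0, D=1.0):
    """Hopfield-style energy for the TSP on an n x n state matrix v,
    where v[i, t] ~ city i is visited at tour position t.

    The A and B penalty terms enforce one city per position and one
    position per city; the D term is the tour length. For a valid
    permutation matrix the penalties vanish and the energy equals
    D times the tour length, so gradient descent on this energy
    trades constraint satisfaction against tour cost.
    """
    n = len(dist)
    row = np.sum((v.sum(axis=1) - 1.0) ** 2)        # each city exactly once
    col = np.sum((v.sum(axis=0) - 1.0) ** 2)        # each position exactly once
    tour = 0.0
    for t in range(n):
        tour += v[:, t] @ dist @ v[:, (t + 1) % n]  # length of the closed tour
    return A * row + B * col + D * tour
```

The 3-D extension described in the abstract would add a third index (the server) to the state tensor and analogous penalty terms, but its exact form follows the paper.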


Systems, Man and Cybernetics | 1991

Job scheduling on parallel machines using simulated annealing

Zhen-Ping Lo; Behnam Bavarian

The authors consider the problem of scheduling a set of simultaneously available jobs on several parallel machines. Specifically, the minimization of the time to finish all jobs assigned to all machines under job deadline constraints is formulated for the n-jobs, m-machines problem. The simulated annealing and fast simulated annealing algorithms are reviewed and adapted for the scheduling problem. A large number of simulations were carried out, providing an empirical basis for comparing the application of classical simulated annealing and fast simulated annealing algorithms to the scheduling problem.
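The classical simulated annealing scheme described above can be sketched for the unconstrained makespan objective. This is a minimal sketch under assumed parameters (geometric cooling, single-job reassignment moves); the paper's deadline constraints and the fast-annealing variant are omitted.

```python
import math
import random

def makespan(assign, proc_times, m):
    """Completion time of the busiest machine."""
    loads = [0.0] * m
    for job, mach in enumerate(assign):
        loads[mach] += proc_times[job]
    return max(loads)

def sa_schedule(proc_times, m, steps=5000, t0=10.0, alpha=0.999, seed=0):
    """Classical simulated annealing for assigning n jobs to m parallel
    machines, minimizing the makespan. A move reassigns one random job;
    worse moves are accepted with probability exp(-delta / T)."""
    rng = random.Random(seed)
    n = len(proc_times)
    assign = [rng.randrange(m) for _ in range(n)]
    best, best_cost = assign[:], makespan(assign, proc_times, m)
    cost, T = best_cost, t0
    for _ in range(steps):
        j, new_m = rng.randrange(n), rng.randrange(m)
        old_m = assign[j]
        assign[j] = new_m
        new_cost = makespan(assign, proc_times, m)
        delta = new_cost - cost
        if delta <= 0 or rng.random() < math.exp(-delta / T):
            cost = new_cost
            if cost < best_cost:
                best, best_cost = assign[:], cost
        else:
            assign[j] = old_m   # reject the move
        T *= alpha              # geometric cooling schedule
    return best, best_cost
```

For example, `sa_schedule([4, 3, 3, 2, 2, 2], m=2)` should approach the balanced makespan of 8 (total work 16 split over 2 machines).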


Pattern Recognition Letters | 1991

Comparison of a neural network and a piecewise linear classifier

Zhen-Ping Lo; Behnam Bavarian

A neural algorithm proposed by Kohonen is used to design a neural classifier. We use the algorithm to classify a real multiple-class data set obtained from ship images and compare the results with those of a piecewise linear classifier. The error performance of the neural classifier is slightly better than that of the piecewise linear classifier; moreover, the training algorithm is much simpler and easier to implement.


International Symposium on Neural Networks | 1992

Derivation of learning vector quantization algorithms

Zhen-Ping Lo; Yaoqi Yu; Behnam Bavarian

A formal derivation of three learning rules for adapting the synaptic weight vectors of neurons representing the prototype vectors of the class distribution in a classifier is presented. A decision surface function and a set of adaptation algorithms for adjusting this surface are derived using the gradient-descent approach to minimize the classification error. This also provides a formal analysis of the Kohonen learning vector quantization (LVQ1 and LVQ2) algorithms. In particular, it is shown that, to minimize the classification error, one of the learning equations in the LVQ1 algorithm is not required. An application of the learning algorithms to designing a neural network classifier is presented. The performance of the classifier was tested and compared to the K-NN decision rule on the real Iris data set.
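The LVQ1 rule the paper analyzes moves the nearest prototype toward a correctly classified input and away from a misclassified one. A minimal sketch of one update step, with illustrative names and learning rate:

```python
import numpy as np

def lvq1_step(protos, labels, x, y, lr=0.05):
    """One LVQ1 update.

    protos : (k, dim) prototype (synaptic weight) vectors
    labels : (k,) class label of each prototype
    x, y   : input sample and its true class
    The nearest prototype moves toward x if its class matches y,
    and away from x otherwise.
    """
    i = int(np.argmin(np.linalg.norm(protos - x, axis=1)))
    sign = 1.0 if labels[i] == y else -1.0
    protos[i] += sign * lr * (x - protos[i])
    return protos
```

In practice the step is applied over many epochs with a decaying learning rate; classification then assigns each input the label of its nearest prototype.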


Systems, Man and Cybernetics | 1991

Analysis and application of self-organizing sensory mapping

Zhen-Ping Lo; M. Fujita; Behnam Bavarian

The authors present a mathematical analysis of the self-organizing sensory mapping first proposed by Kohonen. It is shown that using the sensory mapping learning rule is equivalent to minimizing an energy function of the network. The underlying work of Kohonen and the topology preserving networks are reviewed, along with the algorithm for implementing the network. The concept of the energy of a network is defined and a detailed analysis of the mapping algorithm is outlined.


International Parallel Processing Symposium | 1991

A neural algorithm for variable thresholding of images

Zhen-Ping Lo; Behnam Bavarian

A two-stage thresholding technique for gray-scale images is presented in this paper. The first stage is based on a conventional application of the histogram, which provides a fixed global threshold value. This threshold value is then assigned as the initial state of a set of neurons which process the image in parallel, in a horizontal scan, producing the binary image at the output. The state of the neurons is updated using the Kohonen self-organizing learning algorithm. This technique has two properties: first, it smooths spike noise; second, low-frequency illumination variation is compensated for, so the segmented binary image regions are not affected by lighting conditions. Several examples are processed and presented to show the performance of the algorithm.
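The two stages can be sketched roughly as follows. This is a loose reconstruction under stated assumptions: the iterated-means global threshold stands in for the paper's histogram method, and the one-neuron-per-column update rule is an assumed simplification of the Kohonen-style adaptation.

```python
import numpy as np

def global_threshold(img):
    """Stage 1: a simple histogram-style global threshold (iterated
    mean of the two class means), standing in for the paper's method."""
    t = img.mean()
    for _ in range(20):
        lo, hi = img[img <= t], img[img > t]
        if len(lo) == 0 or len(hi) == 0:
            break
        t = 0.5 * (lo.mean() + hi.mean())
    return t

def variable_threshold(img, lr=0.05):
    """Stage 2: one threshold 'neuron' per column, initialized to the
    global threshold and drifted toward observed intensities during a
    horizontal scan, so the threshold tracks slow illumination change."""
    h, w = img.shape
    thresh = np.full(w, global_threshold(img), dtype=float)
    out = np.zeros((h, w), dtype=np.uint8)
    for r in range(h):
        for c in range(w):
            out[r, c] = 1 if img[r, c] > thresh[c] else 0
            # Kohonen-style drift of the local state toward the input
            thresh[c] += lr * (img[r, c] - thresh[c])
    return out
```

Because each neuron's state follows a local running mean of intensity, a slow illumination gradient shifts the threshold along with it, which is the compensation property the abstract describes.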


Systems, Man and Cybernetics | 1991

Using neural networks to model the open environment for mobile robot navigation

Shahriar Najand; Zhen-Ping Lo; Behnam Bavarian

The application of the Kohonen self-organizing topology preserving neural network for learning and developing a minimal representation of the open environment for mobile robot navigation is presented. The input to the algorithm is the coordinates of randomly selected points in the open environment. No specific knowledge of the size, number, and shape of the obstacles is needed by the network. The parameter selection for the network is discussed and two illustrative examples are presented.


International Symposium on Neural Networks | 1992

Using the Kohonen topology preserving mapping network for learning the minimal environment representation

Shariar Najand; Zhen-Ping Lo; Behnam Bavarian

The authors present the application of the Kohonen self-organizing topology-preserving neural network for learning and developing a minimal representation of the open environment in mobile robot navigation. The input to the algorithm consists of the coordinates of randomly selected points in the open environment. No specific knowledge of the size, number, and shape of the obstacles is needed by the network. The parameter selection for the network is discussed. The neighborhood function, adaptation gain, and number of training sample points have a direct effect on the convergence and usefulness of the final representation. The environment dimensions and a measure of environment complexity are used to find approximate bounds and requirements on these parameters.
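The setup described above can be sketched as a 2-D Kohonen grid trained on points sampled only from free space; obstacle regions contribute no samples, so the prototypes settle into the open environment. All parameter schedules here (linearly decaying gain, shrinking neighborhood width) are assumptions in the spirit of the abstract, not the paper's values.

```python
import numpy as np

def train_environment_som(samples, grid=(6, 6), epochs=30, seed=0):
    """Fit a Kohonen grid to free-space points so the prototype vectors
    form a compact representation of the open environment. Both the
    adaptation gain and the neighborhood width decay over training."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    coords = np.array([(i, j) for i in range(rows) for j in range(cols)], float)
    w = rng.random((rows * cols, 2))
    n = len(samples)
    total = epochs * n
    step = 0
    for _ in range(epochs):
        for x in samples[rng.permutation(n)]:
            frac = step / total
            lr = 0.5 * (1 - frac) + 0.01              # decaying adaptation gain
            sigma = max(grid) / 2 * (1 - frac) + 0.5  # shrinking neighborhood
            win = np.argmin(np.linalg.norm(w - x, axis=1))
            d2 = np.sum((coords - coords[win]) ** 2, axis=1)
            h = np.exp(-d2 / (2 * sigma**2))          # grid-distance interaction
            w += lr * h[:, None] * (x - w)
            step += 1
    return w

# usage: sample points only from free space (outside a central box obstacle)
rng = np.random.default_rng(1)
pts = rng.random((500, 2)) * 10
free = pts[~((pts[:, 0] > 4) & (pts[:, 0] < 6)
             & (pts[:, 1] > 4) & (pts[:, 1] < 6))]
protos = train_environment_som(free)
```

No description of the obstacle is given to the network; its shape is reflected only in where samples are absent, matching the abstract's claim that no specific obstacle knowledge is needed.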

Collaboration


Dive into Zhen-Ping Lo's collaboration.

Top Co-Authors

Yaoqi Yu, University of California

M. Fujita, University of California

Shariar Najand, University of California