Sa Hyun Bang
University of Southern California
Publications
Featured research published by Sa Hyun Bang.
IEEE Transactions on Neural Networks | 1993
Joongho Choi; Sa Hyun Bang; Bing J. Sheu
An analog VLSI neural network processor was designed and fabricated for communication receiver applications. It does not require prior estimation of the channel characteristics. A powerful channel equalizer was implemented with this processor chip configured as a four-layered perceptron network. The compact synapse cell is realized with an enhanced wide-range Gilbert multiplier circuit. The output neuron consists of a linear current-to-voltage converter and a sigmoid function generator with a controllable voltage gain. Network training is performed by the modified Kalman neuro-filtering algorithm to speed up the convergence process for communication channels with intersymbol interference and white Gaussian noise. The learning process is carried out on the companion DSP board, which also stores the synapse weights for later use by the chip. The VLSI neural network processor chip occupies a silicon area of 4.6 mm × 6.8 mm and was fabricated in a 2-μm double-polysilicon CMOS technology. System analysis and experimental results are presented.
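The wide-range Gilbert multiplier synapse mentioned in the abstract can be sketched behaviorally. The following minimal model is an illustrative assumption, not taken from the paper (function name, bias current, and thermal voltage are all hypothetical): to first order, a Gilbert cell multiplies the tanh-compressed versions of its two input voltages.

```python
import numpy as np

def gilbert_synapse(v_in, v_weight, vt=0.026, i_bias=1e-6):
    """Behavioral sketch of a Gilbert-multiplier synapse cell.

    Output current ~ i_bias * tanh(v_in / 2Vt) * tanh(v_weight / 2Vt),
    where Vt is the thermal voltage (~26 mV at room temperature).
    Values are illustrative, not from the fabricated chip.
    """
    return i_bias * np.tanh(v_in / (2 * vt)) * np.tanh(v_weight / (2 * vt))
```

For inputs small compared with 2·Vt the tanh terms are nearly linear, so the cell behaves as a four-quadrant analog multiplier — the property a synapse (input × weight) relies on.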
IEEE Transactions on Neural Networks | 1996
Sa Hyun Bang; Bing J. Sheu; Tony H. Wu
An engineering annealing method for obtaining optimal solutions of cellular neural networks is presented. Cellular neural networks are very promising for solving many scientific problems in image processing, pattern recognition, and optimization through the use of stored programs with predetermined templates. Hardware annealing, a parallel version of mean-field annealing in analog networks, is a highly efficient method of finding optimal solutions of cellular neural networks. It does not require any stochastic procedure and hence can be very fast. The generalized energy function of the network is first increased by reducing the voltage gain of each neuron. The hardware annealing then searches for the global minimum energy state by continuously increasing the neuron gain. The process of global optimization by the proposed annealing can be described by the eigenvalue problems of a time-varying dynamic system. In typical non-optimization problems, it also provides enough stimulation to neurons frozen by ill-conditioned initial states.
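The gain-ramp procedure described above — start the neurons at low voltage gain, then raise the gain continuously while the analog dynamics relax — can be illustrated on a simple Hopfield-style network. This is a sketch under assumed dynamics du/dt = -u + W·v + b with v = tanh(g·u); the function name, parameter values, and network form are illustrative, not the paper's CNN formulation.

```python
import numpy as np

def hardware_annealing(W, b, steps=2000, dt=0.01, g_start=0.1, g_end=10.0):
    """Gain-ramp ("hardware") annealing on a Hopfield-style analog network.

    Assumed dynamics: du/dt = -u + W @ v + b, with v = tanh(g * u).
    The neuron gain g is swept from a low to a high value, mimicking
    the voltage-gain control described in the abstract.
    """
    rng = np.random.default_rng(0)
    u = rng.normal(scale=0.01, size=W.shape[0])   # near-zero initial state
    for g in np.linspace(g_start, g_end, steps):
        v = np.tanh(g * u)                        # neuron outputs
        u = u + dt * (-u + W @ v + b)             # Euler step of the dynamics
    return np.sign(np.tanh(g_end * u))            # final binary decisions
```

With the cooperative coupling W = [[0, 1], [1, 0]] and zero bias, both neurons settle to the same sign, which is the global minimum of the corresponding energy function.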
International Symposium on Circuits and Systems | 1995
Bing J. Sheu; Sa Hyun Bang; Wai-Chi Fang
A cellular neural network (CNN) is a locally connected, massively parallel computing system with simple synaptic operators, making it well suited for VLSI implementation in real-time, high-speed applications. A VLSI architecture for a continuous-time shift-invariant CNN with digitally programmable operators and optical inputs is proposed. Circuits with annealing capability are included to achieve optimal solutions for many selected applications.
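A continuous-time CNN of the kind this architecture implements follows the standard Chua–Yang state equation, dx/dt = -x + A*y + B*u + z, with a piecewise-linear output. A minimal Euler-integration sketch (template contents, grid size, and step size are illustrative assumptions, not the proposed chip's parameters):

```python
import numpy as np

def cnn_step(x, u, A, B, z=0.0, dt=0.05):
    """One Euler step of the standard CNN state equation:
    dx/dt = -x + A*y + B*u + z, with y = 0.5 * (|x + 1| - |x - 1|).

    x: cell states, u: input image, A/B: 3x3 feedback/control templates.
    """
    y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))      # piecewise-linear output
    def conv3(img, t):                              # 3x3 template correlation
        p = np.pad(img, 1)                          # zero-padded boundary cells
        out = np.zeros_like(img)
        for i in range(3):
            for j in range(3):
                out += t[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
        return out
    return x + dt * (-x + conv3(y, A) + conv3(u, B) + z)
```

With a self-feedback-only template (A center entry > 1), every cell state drives itself into saturation, which is the bistable behavior CNN cells rely on for binary outputs.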
International Symposium on Circuits and Systems | 1992
Sa Hyun Bang; Bing J. Sheu
A multilayer perceptron with the extended Kalman filter (EKF) training algorithm is investigated as a communication receiver in the presence of intersymbol interference and white Gaussian noise. Besides discussing the complexity of the EKF algorithm, it is shown that the EKF outperforms the conventional backpropagation (BP) training algorithm in that it requires fewer training steps and also trains properly in cases where BP fails. With 2500 training symbols, the EKF yielded about a 4-dB performance improvement over the conventional BP.
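EKF training treats the network weights as the state of a nonlinear filter, with each training symbol supplying a scalar measurement. A minimal sketch for a single sigmoid neuron follows; the multilayer and "modified" variants in these papers are more involved, and the function name and noise parameters here are illustrative assumptions.

```python
import numpy as np

def ekf_train(X, d, q=1e-4, r=0.1, p0=10.0):
    """Extended Kalman filter training of a single tanh neuron (sketch).

    State: weight vector w. Measurement model: d = tanh(w . x) + noise.
    q: process-noise variance, r: measurement-noise variance,
    p0: initial weight-covariance scale (all illustrative values).
    """
    n = X.shape[1]
    w = np.zeros(n)
    P = p0 * np.eye(n)                      # weight covariance
    for x, target in zip(X, d):
        y = np.tanh(w @ x)                  # predicted measurement
        H = (1.0 - y * y) * x               # Jacobian dy/dw
        s = H @ P @ H + r                   # innovation variance (scalar)
        K = (P @ H) / s                     # Kalman gain
        w = w + K * (target - y)            # measurement update
        P = P - np.outer(K, H @ P) + q * np.eye(n)
    return w
```

The per-symbol cost is O(n²) in the number of weights (the covariance update), versus O(n) for BP — the complexity trade-off the abstract alludes to in exchange for faster convergence.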
IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing | 1995
Sa Hyun Bang; O.T.-C. Chen; J.C.-F. Chang; Bing J. Sheu
In a multilevel neural network, each neuron produces a multi-bit output representation. The total network size can therefore be significantly smaller than that of a conventional network. The reduction in network size is a highly desirable feature in large-scale applications. The procedure for applying hardware annealing, by continuously changing the neuron gain from a low value to a certain high value to reach the globally optimal solution, is described. Several simulation results are also presented. The hardware annealing technique can be applied to the neurons in a parallel format and is much faster than the simulated annealing method on digital computers.
International Symposium on Circuits and Systems | 1994
Sa Hyun Bang; Bing J. Sheu; Josephine C.-F. Chang
The output of a multilevel neuron produces a multi-bit representation. The total network size with multilevel neurons can therefore be significantly reduced compared with a conventional network of two-level neurons. The reduction in network size benefits VLSI implementation. Due to the nonlinearity of the neuron transfer function, multiple local minima exist in the energy function of a multilevel analog-to-digital decision network. The procedure for applying hardware annealing, by continuously changing the neuron gain from a low value to a certain high value to reach the globally optimal solution, is described. Several simulation results are also presented. The parallel hardware annealing method is much faster than the simulated annealing method on digital computers.
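One common way to model a multilevel neuron transfer function is as a staircase of high-gain sigmoids, so that a single analog output encodes a multi-bit value. The sketch below is a hypothetical illustration (level placement, gain, and function name are assumptions, not the papers' circuit):

```python
import numpy as np

def multilevel_neuron(u, levels=4, gain=20.0):
    """Staircase transfer function: a sum of shifted high-gain sigmoids
    produces `levels` output plateaus at 0, 1, ..., levels - 1.
    Parameters are illustrative, not taken from the papers.
    """
    thresholds = np.arange(levels - 1) + 0.5      # step locations
    return sum(0.5 * (np.tanh(gain * (u - t)) + 1.0) for t in thresholds)
```

Because each step is a smooth sigmoid, lowering `gain` flattens the staircase back toward a single linear ramp — exactly the low-gain starting point that the hardware annealing procedure exploits before ramping the gain back up.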
International Symposium on Circuits and Systems | 1995
Bing J. Sheu; Robert Chen-Hao Chang; Tony H. Wu; Sa Hyun Bang
The hardware annealing method is effective in finding the optimal solutions for selected cellular neural network applications. The method is presented and analyzed in terms of the generalized energy, which corresponds to the cost function in optimization problems. The stability properties and parameters of annealed neural networks are described to serve as a guideline for practical applications.
International Symposium on Neural Networks | 1994
Sa Hyun Bang; Bing J. Sheu; Tony H. Wu
Hardware annealing, a parallel version of mean-field annealing in analog networks, is an efficient method of finding the optimal solutions for cellular neural networks. It does not require any stochastic procedure and hence can be very fast. Once the energy of the network is increased, the hardware annealing searches for the global minimum energy state by gradually increasing the neuron gain. In typical non-optimization problems, it also provides enough energy to neurons frozen by ill-conditioned initial states.
Custom Integrated Circuits Conference | 1993
Joongho Choi; Sa Hyun Bang; Bing J. Sheu
An analog VLSI neural network processor is developed for digital communication receiver applications without any need for a priori estimation of the channel characteristics. Network training is performed by the modified Kalman filtering algorithm to speed up the convergence process for communication channels with intersymbol interference and white Gaussian noise. The fabricated chip is based on a four-layered network running at 1 MHz in a 2-μm CMOS technology. Measured characteristics of the electrically programmable wide-range synapse cell, the input neuron, and the output neuron support precision network operation.
Archive | 1994
Bing J. Sheu; Sa Hyun Bang