Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Shih-Lin Hung is active.

Publication


Featured research published by Shih-Lin Hung.


Kybernetes | 1994

Machine learning: neural networks, genetic algorithms, and fuzzy systems

Hojjat Adeli; Shih-Lin Hung

Contents: Perceptron Learning with a Hidden Layer; An Object-Oriented Backpropagation Learning Model; Concurrent Backpropagation Learning Algorithms; An Adaptive Conjugate Gradient Learning Algorithm for Efficient Training of Neural Networks; A Concurrent Adaptive Conjugate Gradient Learning Algorithm on MIMD Shared Memory Machines; A Concurrent Genetic/Neural Network Learning Algorithm for MIMD Shared Memory Machines; A Hybrid Learning Algorithm for Distributed Memory Multicomputers; A Fuzzy Neural Network Learning Model; Appendices; References; Index.


IEEE Transactions on Neural Networks | 1994

A parallel genetic/neural network learning algorithm for MIMD shared memory machines

Shih-Lin Hung; Hojjat Adeli

A new algorithm is presented for training of multilayer feedforward neural networks by integrating a genetic algorithm with an adaptive conjugate gradient neural network learning algorithm. The parallel hybrid learning algorithm has been implemented in C on an MIMD shared memory machine (Cray Y-MP8/864 supercomputer). It has been applied to two different domains, engineering design and image recognition. The performance of the algorithm has been evaluated by applying it to three examples. The superior convergence property of the parallel hybrid neural network learning algorithm presented in this paper is demonstrated.
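A minimal sketch of the hybrid idea described in this abstract, not the authors' implementation: a small genetic algorithm explores the weight space of a tiny feedforward network, and the best individual is then refined by a gradient-based step (a plain numerical gradient stands in for the adaptive conjugate gradient rule). The network size, data, and all parameters below are illustrative assumptions.

# Hypothetical sketch of a hybrid GA + gradient training loop (not the paper's code).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)   # toy training set (XOR)
y = np.array([0.0, 1.0, 1.0, 0.0])

def unpack(w):                      # 2-3-1 feedforward network
    W1 = w[:6].reshape(2, 3); b1 = w[6:9]
    W2 = w[9:12];             b2 = w[12]
    return W1, b1, W2, b2

def forward(w, X):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def loss(w):
    return np.mean((forward(w, X) - y) ** 2)

# Stage 1: a genetic algorithm explores the weight space globally.
pop = rng.normal(0.0, 1.0, size=(40, 13))
for gen in range(100):
    fit = np.array([loss(w) for w in pop])
    parents = pop[np.argsort(fit)[:20]]                       # truncation selection
    children = parents[rng.integers(0, 20, 20)] + rng.normal(0, 0.2, (20, 13))
    pop = np.vstack([parents, children])                      # elitism + mutated offspring

best = pop[np.argmin([loss(w) for w in pop])].copy()

# Stage 2: gradient-based refinement of the best individual.
def num_grad(w, eps=1e-5):
    g = np.zeros_like(w)
    for i in range(w.size):
        d = np.zeros_like(w); d[i] = eps
        g[i] = (loss(w + d) - loss(w - d)) / (2 * eps)
    return g

for step in range(500):
    best -= 0.5 * num_grad(best)

print("final MSE:", loss(best))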


Neurocomputing | 1993

Parallel backpropagation learning algorithms on CRAY Y-MP8/864 supercomputer

Shih-Lin Hung; Hojjat Adeli

Parallel backpropagation neural network learning algorithms have been developed employing the vectorization and microtasking capabilities of vector MIMD machines. They have been implemented in C on a CRAY Y-MP8/864 supercomputer under the UNICOS operating system. The algorithms have been applied to two different domains, engineering design and image recognition, and their performance has been investigated. A maximum speedup of about 6.7 is achieved using eight processors for a large network with 5950 links with microtasking only. When vectorization is combined with microtasking, a maximum speedup of about 33 is realized using eight processors.
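The speedups quoted above come from restructuring the training computation for vector hardware. The following toy comparison, which is illustrative only and unrelated to the paper's C code, contrasts a loop-based and a vectorized forward pass; the array shapes are made up.

# Illustrative only: the kind of restructuring that vector machines reward.
import time
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(256, 85))      # 256 training patterns, 85 inputs (assumed sizes)
W = rng.normal(size=(85, 70))       # one weight layer

def forward_loops(X, W):
    out = np.zeros((X.shape[0], W.shape[1]))
    for p in range(X.shape[0]):             # pattern by pattern
        for j in range(W.shape[1]):         # unit by unit
            out[p, j] = np.tanh(np.dot(X[p], W[:, j]))
    return out

def forward_vectorized(X, W):
    return np.tanh(X @ W)                   # single matrix product

t0 = time.perf_counter(); a = forward_loops(X, W); t1 = time.perf_counter()
b = forward_vectorized(X, W);                       t2 = time.perf_counter()
assert np.allclose(a, b)
print(f"loops: {t1 - t0:.4f}s  vectorized: {t2 - t1:.4f}s")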


Computers & Structures | 2003

Detection of structural damage via free vibration responses generated by approximating artificial neural networks

C.Y. Kao; Shih-Lin Hung

This work presents a novel neural network-based approach for detecting structural damage. The proposed approach involves two steps. The first step, system identification, uses neural system identification networks (NSINs) to identify the undamaged and damaged states of a structural system. The second step, structural damage detection, uses the trained NSINs to generate free vibration responses under the same initial condition or impulsive force. Comparing the periods and amplitudes of the free vibration responses of the damaged and undamaged states allows the extent of the changes to be assessed. Numerical and experimental examples demonstrate the feasibility of applying the proposed method to detect structural damage.
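A hedged sketch of the comparison step: two identified models generate free vibration responses from the same initial condition, and the shift in period and amplitude indicates damage. Here simple damped oscillators stand in for the trained NSINs; the frequencies, damping ratios, and all parameters are illustrative assumptions.

# Comparing free-vibration responses of two identified states (illustrative stand-in).
import numpy as np

def free_vibration(freq_hz, damping, x0=1.0, dt=0.005, n=2000):
    """Free decay of a SDOF system released from displacement x0."""
    wn = 2 * np.pi * freq_hz
    wd = wn * np.sqrt(1 - damping**2)
    t = np.arange(n) * dt
    return t, x0 * np.exp(-damping * wn * t) * np.cos(wd * t)

def dominant_period(t, x):
    """Estimate the period from the spacing of successive positive peaks."""
    peaks = [i for i in range(1, len(x) - 1)
             if x[i] > x[i - 1] and x[i] > x[i + 1] and x[i] > 0]
    return np.mean(np.diff(t[np.array(peaks)]))

# Same initial condition for both identified states.
t, x_undamaged = free_vibration(2.0, 0.02)
_, x_damaged   = free_vibration(1.7, 0.05)   # stiffness loss lengthens the period

T_u = dominant_period(t, x_undamaged)
T_d = dominant_period(t, x_damaged)
print(f"period change: {100 * (T_d - T_u) / T_u:.1f}%")
print(f"peak-amplitude ratio after 2 s: "
      f"{abs(x_damaged[400:]).max() / abs(x_undamaged[400:]).max():.2f}")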


Neurocomputing | 1994

Object-oriented backpropagation and its application to structural design

Shih-Lin Hung; Hojjat Adeli

A multilayer neural network development environment, called ANNDE, is presented for implementing effective learning algorithms in the domain of engineering design using the object-oriented programming paradigm. It consists of five primary components: learning domain, neural nets, library of learning strategies, learning process, and analysis process. These components have been implemented as five classes in two object-oriented programming languages, C++ and G++. The library of learning strategies includes the generalized delta rule with error backpropagation. Several examples are presented for learning in the domain of structural engineering.
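A hypothetical skeleton of the kind of class decomposition the abstract lists (learning domain, neural nets, library of learning strategies, learning process, analysis process). The names and interfaces are assumptions for illustration and are not ANNDE's actual API.

# Illustrative class decomposition only, not ANNDE itself.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class LearningDomain:
    """Training patterns for a design domain (inputs and target outputs)."""
    inputs: List[List[float]]
    targets: List[List[float]]

@dataclass
class NeuralNet:
    """A multilayer feedforward network described by its layer sizes."""
    layer_sizes: List[int]
    weights: list = field(default_factory=list)

class StrategyLibrary:
    """Registry of learning strategies, e.g. backpropagation variants."""
    def __init__(self):
        self._strategies: Dict[str, Callable] = {}
    def register(self, name: str, fn: Callable):
        self._strategies[name] = fn
    def get(self, name: str) -> Callable:
        return self._strategies[name]

class LearningProcess:
    """Binds a network, a domain and a strategy, and runs training."""
    def __init__(self, net: NeuralNet, domain: LearningDomain, strategy: Callable):
        self.net, self.domain, self.strategy = net, domain, strategy
    def run(self, epochs: int):
        for _ in range(epochs):
            self.strategy(self.net, self.domain)

class AnalysisProcess:
    """Evaluates a trained network on the domain (error statistics, etc.)."""
    def evaluate(self, net: NeuralNet, domain: LearningDomain) -> float:
        return 0.0  # placeholder metric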


ieee international conference on high performance computing data and analytics | 1993

A Concurrent Adaptive Conjugate Gradient Learning Algorithm On Mimd Shared-Memory Machines

Hojjat Adeli; Shih-Lin Hung

A concurrent adaptive conjugate gradient learning algorithm has been developed for training multilayer feedforward neural networks and implemented in C on an MIMD shared-memory machine (CRAY Y-MP8/864 supercomputer). The learning algorithm has been applied to the domain of image recognition. The performance of the algorithm has been evaluated by applying it to two large-scale training examples with 2,304 training instances. The concurrent adaptive neural network algorithm has a superior convergence property compared with the concurrent momentum backpropagation algorithm. A maximum speedup of about 7.9 is achieved using eight processors for a large network with 4,160 links as a result of microtasking only. When vectorization is combined with microtasking, a maximum speedup of about 44 is realized using eight processors.
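For orientation, the following is a generic conjugate gradient update with a Polak-Ribiere direction and a crude step-halving rule, shown on a toy quadratic. It conveys the flavour of the learning rule named in the title but is not the authors' concurrent implementation; all parameters are illustrative.

# Generic conjugate-gradient minimization sketch (not the paper's algorithm).
import numpy as np

def conjugate_gradient_descent(loss, grad, w, iters=50):
    g = grad(w)
    d = -g                                    # initial search direction
    for _ in range(iters):
        step = 1.0
        while loss(w + step * d) > loss(w) and step > 1e-8:
            step *= 0.5                       # adapt the step until it improves
        w = w + step * d
        g_new = grad(w)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g + 1e-12))   # Polak-Ribiere+
        d = -g_new + beta * d
        g = g_new
    return w

# Tiny demo on a quadratic bowl standing in for a network's error surface.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
loss = lambda w: 0.5 * w @ A @ w
grad = lambda w: A @ w
print(conjugate_gradient_descent(loss, grad, np.array([4.0, -2.0])))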


Computer-aided Civil and Infrastructure Engineering | 2009

Identification of Time‐Variant Modal Parameters Using Time‐Varying Autoregressive with Exogenous Input and Low‐Order Polynomial Function

C. S. Huang; Shih-Lin Hung; W. C. Su; C. L. Wu

This work presents an approach that accurately identifies instantaneous modal parameters of a structure using a time-varying autoregressive with exogenous input (TVARX) model. By developing the equivalent relations between the equation of motion of a time-varying structural system and the TVARX model, this work proves that instantaneous modal parameters of a time-varying system can be directly estimated from the TVARX model coefficients established from displacement responses. A moving least-squares technique incorporating polynomial basis functions is adopted to approximate the coefficient functions of the TVARX model. The coefficient functions of the TVARX model are represented by polynomials having time-dependent coefficients, instead of constant coefficients as in traditional basis function expansion approaches, so that only low orders of polynomial basis functions are needed. Numerical studies are carried out to investigate the effects of parameters in the proposed approach on accurately determining instantaneous modal parameters. Numerical analyses also demonstrate that the proposed approach is superior to some published techniques (i.e., the recursive technique with a forgetting factor, the traditional basis function expansion approach, and the weighted basis function expansion approach) in accurately estimating instantaneous modal parameters of a structure. Finally, the proposed approach is applied to process measured data for a frame specimen subjected to a series of base excitations in shaking table tests. The specimen was damaged during testing. The identified instantaneous modal parameters are consistent with the observed physical phenomena.
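A simplified sketch of the core identification idea: estimate autoregressive coefficients from a displacement response by least squares and convert their roots into modal frequency and damping. The time-varying part of TVARX, where the coefficients are expanded in low-order polynomials via moving least squares, is omitted; the simulated signal and all parameters are illustrative assumptions.

# Time-invariant AR(2) modal identification sketch (the TVARX extension is omitted).
import numpy as np

dt = 0.01
fn, zeta = 2.5, 0.03                       # assumed modal frequency (Hz) and damping
wn = 2 * np.pi * fn
wd = wn * np.sqrt(1 - zeta**2)
t = np.arange(0, 20, dt)
x = np.exp(-zeta * wn * t) * np.cos(wd * t)                     # simulated free decay
x = x + 0.001 * np.random.default_rng(0).normal(size=t.size)    # light measurement noise

# Fit x[k] = a1*x[k-1] + a2*x[k-2] by least squares.
Y = x[2:]
Phi = np.column_stack([x[1:-1], x[:-2]])
a1, a2 = np.linalg.lstsq(Phi, Y, rcond=None)[0]

# Discrete-time roots -> continuous-time eigenvalues -> modal parameters.
roots = np.roots([1.0, -a1, -a2])
lam = np.log(roots[0]) / dt
print(f"identified frequency: {abs(lam.imag) / (2 * np.pi):.3f} Hz (true {fn}), "
      f"damping: {-lam.real / abs(lam):.3f} (true {zeta})")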


Neurocomputing | 1991

A model of perceptron learning with a hidden layer for engineering design

Shih-Lin Hung; Hojjat Adeli

A model of machine learning in engineering design, called PERHID, is presented based on the concept of the perceptron learning algorithm with a two-layer neural network. PERHID has been constructed by combining the perceptron with a single-layer AND neural net. The problem of structural design is cast in a form that can be described by a two-layer neural network. Some results from the PERHID learning model are presented in tabular form. The paper concludes with a comparison of learning by the previously developed single-layer perceptron and by PERHID.
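An illustrative sketch of the topology only: a layer of perceptron (threshold) units whose outputs feed a single AND unit. The weights below are hand-picked to show that the combination separates a pattern a single perceptron cannot; this is not the PERHID training procedure.

# Two-layer threshold topology: perceptron layer followed by an AND unit.
import numpy as np

step = lambda z: (z >= 0).astype(float)

# Hidden perceptron layer: each row is one threshold unit (weights), plus biases.
W1 = np.array([[ 1.0,  1.0],     # fires when x1 + x2 >= 0.5
               [-1.0, -1.0]])    # fires when x1 + x2 <= 1.5
b1 = np.array([-0.5, 1.5])

# Output AND unit: fires only when all hidden units fire.
w2 = np.array([1.0, 1.0])
b2 = -1.5

def perhid_like(x):
    h = step(W1 @ x + b1)
    return step(w2 @ h + b2)

# The combination realizes an XOR-style separation that a single perceptron cannot.
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, int(perhid_like(np.array(x, float))))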


Computer-Aided Engineering | 1993

Fuzzy Neural Network Learning Model for Image Recognition

Hojjat Adeli; Shih-Lin Hung

An unsupervised fuzzy neural network classification algorithm has been developed and applied to perform feature abstraction and classify a large number of training instances into a small number of clusters. A fuzzy neural network learning model has been developed by integrating the unsupervised fuzzy neural network classification algorithm with a genetic algorithm and an adaptive conjugate gradient neural network learning algorithm. The learning model has been applied to the domain of image recognition. The performance of the model has been evaluated by applying it to a large-scale training example with 2304 training instances. An average computational speedup of eight is achieved by the new algorithm.
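As a stand-in for the unsupervised fuzzy classification step, the compact fuzzy c-means loop below illustrates the idea of grouping training instances with graded memberships. It is not the algorithm developed in the paper, and the data and parameters are invented for illustration.

# Fuzzy c-means sketch: graded memberships instead of hard cluster assignments.
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per instance
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Toy data: three loose groups of "feature vectors".
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc, 0.3, (50, 2)) for loc in ([0, 0], [3, 0], [0, 3])])
centers, U = fuzzy_c_means(X)
print("cluster centers:\n", np.round(centers, 2))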


Engineering Optimization | 2013

Enhancing particle swarm optimization algorithm using two new strategies for optimizing design of truss structures

Y. C. Lu; J. C. Jan; Shih-Lin Hung; G. H. Hung

This work develops an augmented particle swarm optimization (AugPSO) algorithm using two new strategies: boundary-shifting and particle-position-resetting. The purpose of the algorithm is to optimize the design of truss structures. Inspired by a heuristic, the boundary-shifting approach forces particles to move to the boundary between feasible and infeasible regions in order to increase the convergence rate of the search. The purpose of the particle-position-resetting approach, motivated by the mutation scheme in genetic algorithms (GAs), is to increase the diversity of particles and to prevent particle solutions from falling into local minima. The performance of the AugPSO algorithm was tested on four benchmark truss design problems involving 10, 25, 72 and 120 bars. The convergence rates and final solutions achieved were compared among the simple PSO, the PSO with passive congregation (PSOPC) and the AugPSO algorithms. The numerical results indicate that the new AugPSO algorithm outperforms the simple PSO and PSOPC algorithms. The AugPSO achieved a new and superior optimal solution to the 120-bar truss design problem. Numerical analyses showed that the AugPSO algorithm is more robust than the PSO and PSOPC algorithms.
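A hedged sketch of a PSO loop with a particle-position-resetting step, re-randomizing a few particles each iteration in the spirit of the mutation-like strategy described above. The boundary-shifting strategy requires a constrained truss model and is not reproduced; the objective function and all parameters are illustrative.

# PSO with a simple particle-position-resetting step (illustrative, not AugPSO itself).
import numpy as np

def sphere(x):                       # stand-in objective (minimize the sum of squares)
    return np.sum(x ** 2, axis=-1)

rng = np.random.default_rng(0)
n, dim, lo, hi = 30, 10, -5.0, 5.0
x = rng.uniform(lo, hi, (n, dim))
v = np.zeros((n, dim))
pbest, pbest_f = x.copy(), sphere(x)
gbest = pbest[np.argmin(pbest_f)].copy()

for it in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)

    # Particle-position-resetting: re-randomize a few particles to keep diversity.
    reset = rng.random(n) < 0.05
    x[reset] = rng.uniform(lo, hi, (reset.sum(), dim))
    v[reset] = 0.0

    f = sphere(x)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best objective:", sphere(gbest))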

Collaboration


Dive into Shih-Lin Hung's collaborations.

Top Co-Authors

C. S. Huang, National Chiao Tung University
Tzu-Hsuan Lin, National Chiao Tung University
J. C. Jan, National Chiao Tung University
C. M. Wen, National Chiao Tung University
C. Y. Kao, National Chiao Tung University
W. C. Su, National Chiao Tung University