Network


Latest external collaborations at the country level. Click a dot to dive into the details.

Hotspot


Dive into the research topics where Jin-Tsong Jeng is active.

Publication


Featured research published by Jin-Tsong Jeng.


IEEE Transactions on Neural Networks | 2002

Robust support vector regression networks for function approximation with outliers

Chen-Chia Chuang; Shun-Feng Su; Jin-Tsong Jeng; Chih-Ching Hsiao

Support vector regression (SVR) employs the support vector machine (SVM) framework to tackle problems of function approximation and regression estimation, and has been shown to be robust against noise. However, when the parameters used in SVR are improperly selected, overfitting may still occur, and the selection of these parameters is not straightforward. Moreover, outliers may be taken as support vectors, and their inclusion can lead to severe overfitting. In this paper, a novel regression approach, termed the robust support vector regression (RSVR) network, is proposed to enhance the robustness of SVR. In this approach, traditional robust learning methods are employed to improve the learning performance for any selected parameters. The simulation results show that the RSVR network improves the performance of the learned systems in all cases. Moreover, even when training lasts for a long period, the testing error does not rise; in other words, the overfitting phenomenon is indeed suppressed.
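As a rough illustration of the setup this paper improves on (plain epsilon-SVR only, not the proposed RSVR network), the sketch below fits scikit-learn's SVR to noisy sinc data with one injected gross outlier; all parameter values are assumptions chosen for the demo.

```python
# Hedged sketch: plain epsilon-SVR on sinc data with a single injected outlier.
# This shows the baseline setup; the RSVR network itself is not implemented here.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
x = np.linspace(-5, 5, 100)
y = np.sinc(x) + rng.normal(0, 0.05, x.size)
y[50] += 3.0  # inject a single gross outlier

# C bounds each sample's dual coefficient, limiting the outlier's pull.
model = SVR(kernel="rbf", C=1.0, epsilon=0.05, gamma=1.0)
model.fit(x.reshape(-1, 1), y)
pred = model.predict(x.reshape(-1, 1))

# Error measured against the clean target; a robust variant aims to lower this.
clean_err = np.mean((pred - np.sinc(x)) ** 2)
```

Because the dual coefficients are bounded by C, a single outlier cannot pull the fit arbitrarily far, which is the robustness property the abstract refers to; the RSVR network strengthens this further with robust learning.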


Fuzzy Sets and Systems | 2003

Support vector interval regression networks for interval regression analysis

Jin-Tsong Jeng; Chen-Chia Chuang; Shun-Feng Su

In this paper, support vector interval regression networks (SVIRNs) are proposed for interval regression analysis. An SVIRN consists of two radial basis function networks: one identifies the upper bound of the data interval, and the other identifies the lower bound. Because the support vector regression (SVR) approach is equivalent to solving a linearly constrained quadratic programming problem, the number of hidden nodes and the initial values of the adjustable parameters can be easily obtained. Since the choice of the parameter ε in the SVR approach may seriously affect the modeling performance, a two-step approach is proposed to select a proper ε value. Applying the SVR approach with the selected ε yields an initial structure for the SVIRNs. Moreover, outliers do not significantly affect the upper and lower interval bounds obtained through the proposed two-step approach. Consequently, a traditional back-propagation (BP) learning algorithm can be used to adjust the initial network structure on training data sets with or without outliers. Because the SVR approach provides a good initial structure, the SVIRNs converge faster than conventional networks trained with BP or robust BP learning algorithms for interval regression analysis. Four examples are provided to show the validity and applicability of the proposed SVIRNs.
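The two-network idea (one model per interval bound) can be sketched with an analogous, simpler tool: two quantile regressors estimating the upper and lower sides of the data band. This is only a hedged analogy, not the SVIRN architecture; the quantile levels and data are assumptions for the demo.

```python
# Hedged analogy to the SVIRN idea: one regressor per interval bound,
# implemented here with quantile gradient boosting instead of RBF networks.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
x = np.linspace(0, 4, 200).reshape(-1, 1)
y = np.sin(x).ravel() + rng.normal(0, 0.2, 200)

upper = GradientBoostingRegressor(loss="quantile", alpha=0.9).fit(x, y)
lower = GradientBoostingRegressor(loss="quantile", alpha=0.1).fit(x, y)

u, l = upper.predict(x), lower.predict(x)
coverage = np.mean((y >= l) & (y <= u))  # fraction of data inside the band
```

Training the two bounds separately, as the paper does with its two RBF networks, lets each side of the interval adapt to asymmetric data spread.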


Neurocomputing | 2004

Annealing robust radial basis function networks for function approximation with outliers

Chen-Chia Chuang; Jin-Tsong Jeng; Pao-Tsun Lin

In this paper, annealing robust radial basis function networks (ARRBFNs) are proposed to address the problems of robust radial basis function networks (RBFNs) for function approximation with outliers. First, a support vector regression (SVR) approach is used to determine an initial structure for the ARRBFNs. Because the SVR approach is equivalent to solving a linearly constrained quadratic programming problem under a fixed SVR structure, the number of hidden nodes, the initial parameters, and the initial weights of the ARRBFNs are easily obtained. Second, the SVR results are used as the initial structure of the ARRBFNs, and an annealing robust learning algorithm (ARLA) is applied to adjust the parameters and weights of the ARRBFNs. That is, the ARLA is proposed to overcome the initialization and cut-off-point problems of robust learning algorithms. Hence, when the initial structure of the ARRBFNs is determined by the SVR approach, the ARRBFNs trained with the ARLA converge quickly and are robust against outliers. Simulation results are provided to show the validity and applicability of the proposed ARRBFNs.
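The SVR-based initialization step described above can be sketched as follows: fit an RBF-kernel SVR, reuse its support vectors as Gaussian hidden-node centers, then solve the output weights by least squares. This is only the initialization idea, not the full ARRBFN with ARLA, and the parameter values are assumptions.

```python
# Sketch of SVR-based RBFN initialization (not the full ARRBFN/ARLA pipeline).
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)
x = np.linspace(-3, 3, 120)
y = np.exp(-x**2) + rng.normal(0, 0.02, x.size)

gamma = 1.0
svr = SVR(kernel="rbf", C=5.0, epsilon=0.05, gamma=gamma)
svr.fit(x[:, None], y)
centers = svr.support_vectors_.ravel()  # hidden-node centers come from SVR

# Design matrix of Gaussian basis functions, then least-squares output weights.
Phi = np.exp(-gamma * (x[:, None] - centers[None, :]) ** 2)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
mse = np.mean((Phi @ w - y) ** 2)
```

Because the number of support vectors fixes the number of hidden nodes, the network structure falls out of the quadratic program rather than being guessed, which is the convenience the abstract highlights.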


IEEE Transactions on Systems, Man, and Cybernetics | 2012

Radial Basis Function Networks With Linear Interval Regression Weights for Symbolic Interval Data

Shun-Feng Su; Chen-Chia Chuang; Chin-Wang Tao; Jin-Tsong Jeng; Chih-Ching Hsiao

This paper introduces a new structure of radial basis function networks (RBFNs) that can successfully model symbolic interval-valued data. In the proposed structure, to handle symbolic interval data, the Gaussian functions required in the RBFNs are modified to use an interval distance measure, and the synaptic weights of the RBFNs are replaced by linear interval regression weights. In the linear interval regression weights, the lower and upper bounds of the interval-valued data as well as the center and range of the interval-valued data are considered. In addition, a two-stage learning mechanism is proposed. In stage 1, an initial structure (i.e., the number of hidden nodes and the adjustable parameters of the radial basis functions) is obtained by the interval competitive agglomeration clustering algorithm. In stage 2, a gradient-descent-type learning algorithm is applied to fine-tune the parameters of the radial basis functions and the coefficients of the linear interval regression weights. Various experiments are conducted, and the average behaviors of the root-mean-square error and the squared correlation coefficient, in the framework of a Monte Carlo experiment, are used as the performance indices. The results clearly show the effectiveness of the proposed structure.


Expert Systems With Applications | 2009

Hybrid robust approach for TSK fuzzy modeling with outliers

Chen-Chia Chuang; Jin-Tsong Jeng; Chin-Wang Tao

This study proposes a hybrid robust approach for constructing Takagi-Sugeno-Kang (TSK) fuzzy models with outliers. The approach consists of a robust fuzzy C-regression model (RFCRM) clustering algorithm in the coarse-tuning phase and an annealing robust back-propagation (ARBP) learning algorithm in the fine-tuning phase. The RFCRM clustering algorithm modifies the fuzzy C-regression models (FCRM) clustering algorithm by incorporating a robust mechanism, the input data distribution, and a robust similarity measure. Owing to the robust mechanism and the consideration of the input data distribution, the fuzzy subspaces and the parameters of the consequent-part functions are identified simultaneously by the proposed RFCRM clustering algorithm, and the obtained model is not significantly affected by outliers. Furthermore, the robust similarity measure is used in the clustering process to reduce redundant clusters. Consequently, the RFCRM clustering algorithm generates a better initialization for the TSK fuzzy models in the coarse-tuning phase. Then, an ARBP algorithm is employed to obtain a more precise model in the fine-tuning phase. Our simulation results clearly show that the proposed robust TSK fuzzy modeling approach is superior to existing approaches in learning speed and approximation accuracy.
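To make the TSK model class concrete, the sketch below implements plain first-order TSK inference in NumPy: Gaussian memberships fire each rule, and the output is the firing-strength-weighted average of the linear consequents. The rule parameters here are hand-picked assumptions, not the RFCRM/ARBP-identified values from the paper.

```python
# Minimal first-order TSK fuzzy inference (illustrative parameters only).
import numpy as np

def tsk_predict(x, centers, sigmas, coeffs):
    """x: (n,) inputs; rule i has Gaussian MF (centers[i], sigmas[i])
    and linear consequent y_i = coeffs[i, 0] + coeffs[i, 1] * x."""
    mu = np.exp(-((x[:, None] - centers[None, :]) ** 2)
                / (2 * sigmas[None, :] ** 2))           # firing strengths (n, r)
    cons = coeffs[None, :, 0] + coeffs[None, :, 1] * x[:, None]
    return (mu * cons).sum(axis=1) / mu.sum(axis=1)     # weighted average

centers = np.array([-1.0, 1.0])
sigmas = np.array([1.0, 1.0])
coeffs = np.array([[0.0, -1.0],    # rule 1: y = -x, active near x = -1
                   [0.0,  1.0]])   # rule 2: y =  x, active near x = +1
x = np.array([-1.0, 0.0, 1.0])
y = tsk_predict(x, centers, sigmas, coeffs)
```

The coarse-tuning phase in the paper decides how many such rules exist and where their memberships sit; the fine-tuning phase then adjusts these same parameters robustly.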


Applied Mathematics and Computation | 2009

Identification of MIMO systems using radial basis function networks with hybrid learning algorithm

Yu-Yi Fu; Chia-Ju Wu; Jin-Tsong Jeng; Chia-Nan Ko

When a radial basis function network (RBFN) is used for identification of a nonlinear multi-input multi-output (MIMO) system, the number of hidden layer nodes, the initial parameters of the kernel, and the initial weights of the network must be determined first. For this purpose, a systematic way that integrates the support vector regression (SVR) and the least squares regression (LSR) is proposed to construct the initial structure of the RBFN. The first step of the proposed method is to determine the number of hidden layer nodes and the initial parameters of the kernel by the SVR method. Then the weights of the RBFN are determined by solving a simple minimization problem based on the concept of LSR. After initialization, an annealing robust learning algorithm (ARLA) is then applied to train the RBFN. With the proposed initialization approach, one can find that the designed RBFN has few hidden layer nodes while maintaining good performance. To show the feasibility and superiority of the annealing robust radial basis function networks (ARRBFNs) for identification of MIMO systems, several illustrative examples are included.


Expert Systems With Applications | 2009

Robust neural-fuzzy method for function approximation

Horng-Lin Shieh; Ying-Kuei Yang; Po-Lun Chang; Jin-Tsong Jeng

The back-propagation (BP) algorithm for function approximation trains multi-layer feed-forward perceptrons to learn parameters from sampled data. The BP algorithm uses the least-squares method to obtain a set of weights minimizing the objective function. One of the main issues with the BP algorithm is dealing with data sets that have a variety of data distributions and are contaminated with noise and outliers. In this paper, to overcome the problems of function approximation for a nonlinear system with noise and outliers, a robust fuzzy clustering method is proposed to greatly mitigate the influence of noise and outliers, and a fuzzy-based data sifter (FDS) is then used to partition the nonlinear system's domain into several piecewise-linear subspaces, each represented by a neural network. Two experiments are illustrated, and the results show that the proposed approach performs well across various kinds of data domains with noise and outliers.


Applied Physics Letters | 2007

Work function tuning of the TixTayN metal gate electrode for advanced metal-oxide-semiconductor devices applications

Chin-Lung Cheng; Chien-Wei Liu; Jin-Tsong Jeng

Work function (WF) tuning of the TixTayN metal gate, ranging from 4.1 to 4.8 eV, has been observed using post-metal annealing (PMA). The mechanism behind the effectively tunable WF can be explained by the creation of extrinsic states, which are usually associated with the bonding defects formed at the TixTayN/SiO2 interface. The results show that electron trapping is generated in the gate dielectric during the PMA treatments. The reduction in equivalent oxide thickness with increasing PMA temperature can be attributed to the combination of the densification of the SiO2 and the high-k layer formed at the TixTayN/SiO2 interface.


Expert Systems With Applications | 2010

ARFNNs with SVR for prediction of chaotic time series with outliers

Yu-Yi Fu; Chia-Ju Wu; Jin-Tsong Jeng; Chia-Nan Ko

This paper demonstrates an approach to predicting chaotic time series with outliers using annealing robust fuzzy neural networks (ARFNNs). A combined model that merges support vector regression (SVR), radial basis function networks (RBFNs), and a simplified fuzzy inference system is used. SVR performs well at determining the number of rules in the simplified fuzzy inference system and the initial weights of the fuzzy neural networks (FNNs). Based on this initial structure, an annealing robust learning algorithm (ARLA) can then be used effectively to overcome outliers and adjust the parameters of the structure. Simulation results show the superiority of the proposed method, with different SVR settings, for training and prediction of chaotic time series with outliers.
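The prediction task itself can be illustrated with a plain SVR baseline (not the proposed ARFNN): embed a chaotic logistic-map series into delay vectors and do one-step-ahead prediction. The embedding dimension and SVR settings are assumptions for the demo.

```python
# Hedged baseline for chaotic time-series prediction: delay embedding + SVR.
# The ARFNN of the paper replaces this predictor with a robust fuzzy network.
import numpy as np
from sklearn.svm import SVR

n, d = 500, 3                                 # series length, embedding dim
s = np.empty(n)
s[0] = 0.2
for t in range(n - 1):
    s[t + 1] = 4.0 * s[t] * (1.0 - s[t])      # chaotic logistic map

# Rows are delay vectors (s[t], s[t+1], s[t+2]); targets are s[t+3].
X = np.column_stack([s[i:n - d + i] for i in range(d)])
y = s[d:]

split = 400
model = SVR(kernel="rbf", C=100.0, gamma=1.0, epsilon=0.01)
model.fit(X[:split], y[:split])
test_mse = np.mean((model.predict(X[split:]) - y[split:]) ** 2)
```

With clean data this works well; the paper's contribution is keeping such predictions stable when the training series also contains outliers.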


IEEE Transactions on Systems, Man, and Cybernetics | 2000

Control of magnetic bearing systems via the Chebyshev polynomial-based unified model (CPBUM) neural network

Jin-Tsong Jeng; Tsu-Tian Lee

A Chebyshev polynomial-based unified model (CPBUM) neural network is introduced and applied to control a magnetic bearing system. First, we show that the CPBUM neural network not only has the same universal approximation capability as conventional feedforward/recurrent neural networks but also learns faster. It is therefore more suitable for controller design than conventional feedforward/recurrent neural networks. Second, we propose an inverse-system method, based on the CPBUM neural network, to control a magnetic bearing system. The proposed controller has two structures, namely off-line and on-line learning structures, and we derive a new learning algorithm for each. The experimental results show that the proposed neural network architecture provides greater flexibility and better performance in controlling magnetic bearing systems.
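The core ingredient of such a model, a Chebyshev polynomial basis with trainable output weights, can be sketched as below; this shows the function-approximation idea only, not the CPBUM architecture or its controller structures, and the target function and degree are assumptions.

```python
# Sketch of Chebyshev-basis function approximation (not the CPBUM itself).
import numpy as np

def cheb_features(x, degree):
    """Chebyshev polynomials T_0..T_degree on [-1, 1] via the recurrence
    T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x)."""
    T = [np.ones_like(x), x]
    for _ in range(2, degree + 1):
        T.append(2 * x * T[-1] - T[-2])
    return np.column_stack(T[: degree + 1])

x = np.linspace(-1, 1, 200)
target = np.sin(np.pi * x)
Phi = cheb_features(x, degree=7)

# The expansion is linear in the weights, so training reduces to least squares.
w, *_ = np.linalg.lstsq(Phi, target, rcond=None)
approx_err = np.max(np.abs(Phi @ w - target))
```

Because the model is linear in its weights once the basis is fixed, training avoids the slow nonlinear optimization of conventional feedforward/recurrent networks, which is consistent with the faster learning the abstract reports.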

Collaboration


Dive into Jin-Tsong Jeng's collaborations.

Top Co-Authors

Shun-Feng Su, National Taiwan University of Science and Technology
Chin-Lung Cheng, National Formosa University
Chin-Wang Tao, National Ilan University
Chia-Ju Wu, National Yunlin University of Science and Technology
Chien-Wei Liu, National Cheng Kung University
Yu-Yi Fu, Nan Kai University of Technology
Bau-Tong Dai, National Cheng Kung University
Chia-Nan Ko, Nan Kai University of Technology