Publication


Featured research published by Yih-Lon Lin.


Expert Systems With Applications | 2008

A particle swarm optimization approach to nonlinear rational filter modeling

Yih-Lon Lin; Wei-Der Chang; Jer-Guang Hsieh

This paper presents a particle swarm optimization (PSO) algorithm for the parameter estimation problem of nonlinear dynamic rational filters. In modeling the nonlinear rational filter, the unknown filter parameters are arranged into a parameter vector, called a particle in PSO terminology. The proposed PSO algorithm applies velocity-updating and position-updating formulas to a population of many particles so that better particles are generated. Because the PSO algorithm manipulates the parameter vectors directly as real numbers rather than binary strings, implementing it in computer programs is straightforward. Finally, an illustrative example of nonlinear rational filter modeling demonstrates the validity of the proposed method in comparison with the traditional genetic algorithm.
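The velocity- and position-updating steps described above can be sketched in a few lines. This is a minimal, generic PSO minimizer, not the authors' implementation: the rational-filter objective is replaced by a simple quadratic stand-in, and the names and hyperparameters (`pso_minimize`, `w`, `c1`, `c2`) are illustrative assumptions.

```python
import numpy as np

def pso_minimize(objective, dim, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize `objective` over R^dim with a basic PSO.

    Each particle is a candidate parameter vector; velocities are
    updated toward the particle's own best and the swarm's best.
    """
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # velocity update: inertia + cognitive + social terms
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel                      # position update
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

# Toy stand-in for the filter-fitting objective: recover a known vector.
target = np.array([1.0, -2.0, 0.5])
found = pso_minimize(lambda p: np.sum((p - target) ** 2), dim=3)
```

Because each particle is a real-valued vector, the update needs no encoding or decoding step, which is the ease-of-implementation point the abstract makes against binary-coded genetic algorithms.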


IEEE Transactions on Neural Networks | 2008

Preliminary Study on Wilcoxon Learning Machines

Jer-Guang Hsieh; Yih-Lon Lin; Jyh-Horng Jeng

As is well known in statistics, linear regressors obtained with the rank-based Wilcoxon approach to linear regression are usually robust against (i.e., insensitive to) outliers. This motivates us to introduce the Wilcoxon approach to machine learning. Specifically, we investigate four new learning machines: the Wilcoxon neural network (WNN), the Wilcoxon generalized radial basis function network (WGRBFN), the Wilcoxon fuzzy neural network (WFNN), and the kernel-based Wilcoxon regressor (KWR). These provide alternative learning machines for general nonlinear learning problems. Simple weight-updating rules based on gradient descent are derived, and numerical examples compare the robustness against outliers of the various learning machines. Simulation results show that the proposed Wilcoxon learning machines have good robustness against outliers. We believe the Wilcoxon approach will provide a promising methodology for many machine learning problems.
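The rank-based criterion behind these machines can be illustrated with Jaeckel's dispersion, which (to my understanding) is the quantity the Wilcoxon approach minimizes in place of the sum of squared errors; this sketch and its function name are illustrative, not the paper's code.

```python
import numpy as np

def wilcoxon_dispersion(residuals):
    """Jaeckel's rank-based dispersion used in Wilcoxon regression.

    D(e) = sum_i a(R(e_i)) * e_i, with the Wilcoxon score
    a(i) = sqrt(12) * (i/(n+1) - 1/2). Scores grow with rank, so a
    single huge residual is penalized roughly linearly rather than
    quadratically, which is the source of the outlier robustness.
    """
    e = np.asarray(residuals, dtype=float)
    n = e.size
    ranks = np.argsort(np.argsort(e)) + 1        # 1..n ranks of residuals
    scores = np.sqrt(12.0) * (ranks / (n + 1) - 0.5)
    return float(np.sum(scores * e))

clean = np.array([0.1, -0.2, 0.05, -0.1, 0.15])
dirty = np.append(clean, 50.0)                    # one gross outlier
```

Compare `wilcoxon_dispersion(dirty)` with the squared-error loss `np.sum(dirty**2)`: the gross outlier dominates the latter but contributes only a bounded-score multiple of itself to the former.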


Digital Signal Processing | 2010

Genetic algorithm with a hybrid select mechanism for fractal image compression

Ming-Sheng Wu; Yih-Lon Lin

In this paper, a genetic algorithm (GA) with a hybrid selection mechanism is proposed to speed up the fractal encoder. First, all image blocks, both domain blocks and range blocks, are classified into three classes according to their discrete cosine transform (DCT) coefficients: smooth, horizontal/vertical edge, and diagonal/sub-diagonal edge. Then, during the GA evolution, the population of each generation is separated into two clans, a superior clan and an inferior clan, according to whether a chromosome's class matches that of the range block being encoded. The proposed hybrid selection mechanism selects appropriate parents from the two clans to reduce the number of MSE computations while maintaining the retrieved image quality. Experimental results show that, since the proposed GA method requires about half as many MSE computations as the traditional GA method, its encoding time is correspondingly shorter, while the retrieved image quality is almost the same, with at most slight degradation. Moreover, the proposed GA method encodes roughly 130 times faster than the full search method, and the retrieved image quality remains acceptable.
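The DCT-based block classification can be sketched as follows. The paper's exact coefficient tests are not reproduced here; comparing the two lowest AC coefficients, as below, is a plausible simplification of the idea, and the thresholds and function names are my assumptions.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II transform matrix."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def classify_block(block, smooth_thresh=1.0):
    """Classify a square image block by its two lowest AC DCT terms."""
    c = dct_matrix(block.shape[0])
    d = c @ block @ c.T
    v, h = abs(d[0, 1]), abs(d[1, 0])   # variation along each axis
    if v < smooth_thresh and h < smooth_thresh:
        return "smooth"
    if max(v, h) > 2.0 * min(v, h):
        return "horizontal/vertical edge"
    return "diagonal/sub-diagonal edge"

flat = np.full((8, 8), 100.0)                     # no variation at all
ramp = np.tile(np.linspace(0, 255, 8), (8, 1))    # variation along one axis
```

Restricting the domain-block search to blocks of the same class as the range block is what halves the MSE computations in the abstract's account.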


Neurocomputing | 2011

Three-parameter sequential minimal optimization for support vector machines

Yih-Lon Lin; Jer-Guang Hsieh; Hsu-Kun Wu; Jyh-Horng Jeng

The well-known sequential minimal optimization (SMO) algorithm is the most commonly used algorithm for numerical solution of support vector learning problems. At each iteration, the traditional SMO algorithm, called the 2PSMO algorithm in this paper, jointly optimizes only two chosen parameters. The two parameters are selected either heuristically or randomly, while the optimization with respect to them is performed analytically. In this paper, the 2PSMO algorithm is naturally generalized to a three-parameter sequential minimal optimization (3PSMO) algorithm, which jointly optimizes three chosen parameters at each iteration. As in the 2PSMO algorithm, the three parameters are selected either heuristically or randomly, and the optimization with respect to them is performed analytically. The main difference between the two algorithms is that each iteration of the 2PSMO algorithm optimizes over a line segment, whereas each iteration of the 3PSMO algorithm optimizes over a two-dimensional region consisting of infinitely many line segments, so the maximum can be attained more efficiently. The main updating formulae of both algorithms for each support vector learning problem are presented. To assess the efficiency of the 3PSMO algorithm relative to the 2PSMO algorithm, 14 benchmark datasets, 7 for classification and 7 for regression, are tested and the numerical performances compared. Simulation results demonstrate that the 3PSMO algorithm significantly outperforms the 2PSMO algorithm in both execution time and computational complexity.


Fuzzy Sets and Systems | 2010

On maximum likelihood fuzzy neural networks

Hsu-Kun Wu; Jer-Guang Hsieh; Yih-Lon Lin; Jyh-Horng Jeng

In this paper, M-estimators (M for maximum likelihood), used in robust regression theory for linear parametric regression problems, are generalized to nonparametric maximum likelihood fuzzy neural networks (MFNNs) for nonlinear regression problems, with particular emphasis on robustness against outliers. This provides alternative learning machines for general nonlinear learning problems. Simple weight-updating rules based on gradient descent and iteratively reweighted least squares (IRLS) are derived. Numerical examples compare the robustness against outliers of the usual fuzzy neural networks (FNNs) and the proposed MFNNs. Simulation results show that the proposed MFNNs have good robustness against outliers.
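The IRLS machinery that the paper carries over to fuzzy neural networks is easiest to see on a plain linear model. The sketch below fits a line by M-estimation with Huber weights; it illustrates the reweighting idea only, not the MFNN updating rules, and `irls_huber` and its tuning constant are assumptions.

```python
import numpy as np

def irls_huber(X, y, delta=1.345, iters=50):
    """M-estimation of linear regression weights via IRLS.

    Residuals inside [-delta, delta] get weight 1 (plain least
    squares); larger residuals are down-weighted as delta/|r|,
    which is the Huber psi function expressed as a weight.
    """
    w = np.linalg.lstsq(X, y, rcond=None)[0]       # least-squares start
    for _ in range(iters):
        r = y - X @ w
        a = np.abs(r)
        wt = np.where(a <= delta, 1.0, delta / np.maximum(a, 1e-12))
        W = np.diag(wt)
        w = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)  # weighted LS step
    return w

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
X = np.column_stack([np.ones_like(x), x])
y = 2.0 + 3.0 * x + 0.01 * rng.standard_normal(50)
y[::10] += 20.0                                    # gross outliers
w_ls = np.linalg.lstsq(X, y, rcond=None)[0]        # pulled by outliers
w_rob = irls_huber(X, y)                           # stays near (2, 3)
```

Each IRLS pass is just a weighted least-squares solve, which is why the same reweighting idea transplants cleanly onto gradient-trained networks.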


Computers & Mathematics With Applications | 2010

Robust estimation of parameter for fractal inverse problem

Yih-Lon Lin

In this paper, similarity measures for fractal image compression (FIC) that are robust against noise are introduced. In the proposed methods, robust estimation techniques from statistics are embedded into the encoding procedure of the fractal inverse problem to find the parameters, so that when the original image is corrupted by noise, the scheme is insensitive to the noise present in the corrupted image. This leads to a new concept of robust estimation for the fractal inverse problem. The proposed least absolute deviation (LAD), least trimmed squares (LTS), and Wilcoxon FIC are the first attempt toward the design of robust fractal image compression that can remove noise during the encoding process. The main disadvantage of robust FIC is its computational cost. To overcome this drawback, the particle swarm optimization (PSO) technique is utilized to reduce the search time. Simulation results show that the proposed FIC is robust against outliers in the image, and that the PSO method effectively reduces the encoding time while retaining the quality of the retrieved image.


Mathematical Problems in Engineering | 2013

Robust Template Decomposition without Weight Restriction for Cellular Neural Networks Implementing Arbitrary Boolean Functions Using Support Vector Classifiers

Yih-Lon Lin; Jer-Guang Hsieh; Jyh-Horng Jeng

If the given Boolean function is linearly separable, a robust uncoupled cellular neural network can be designed as a maximal margin classifier. On the other hand, if the Boolean function is linearly separable but has a small geometric margin, or if it is not linearly separable, a popular approach is to find a sequence of robust uncoupled cellular neural networks implementing it. In past research using this approach, the control template parameters and thresholds were restricted to a given finite set of integers, a restriction that is unnecessary for template design. In this study, we remove this restriction. Minterm- and maxterm-based decomposition algorithms utilizing soft margin and maximal margin support vector classifiers are proposed to design a sequence of robust templates implementing an arbitrary Boolean function. Several illustrative examples are simulated to demonstrate the efficiency of the proposed method by comparing our results with those produced by other decomposition methods with restricted weights.
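The linear-separability question that drives the whole decomposition can be probed with a single threshold unit. The paper trains support vector classifiers; the sketch below uses the simpler perceptron rule purely to illustrate the test (a perceptron finds a separating template exactly when one exists), with bipolar inputs as in uncoupled CNN templates. The function name and epoch budget are assumptions.

```python
import numpy as np
from itertools import product

def linear_template(truth_table, max_epochs=1000):
    """Try to realize a Boolean function of n bipolar inputs with one
    linear threshold unit, via the perceptron rule.

    Returns the weight vector (last entry = bias/threshold) if a
    separating template is found, or None if the epoch budget runs
    out, which suggests the function is not linearly separable.
    """
    n = len(truth_table).bit_length() - 1
    X = np.array([list(p) + [1] for p in product([-1, 1], repeat=n)], float)
    y = np.array(truth_table, float)
    w = np.zeros(n + 1)
    for _ in range(max_epochs):
        errors = 0
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:     # misclassified: perceptron update
                w += yi * xi
                errors += 1
        if errors == 0:
            return w
    return None

# Truth tables over inputs (-1,-1), (-1,1), (1,-1), (1,1):
AND = [-1, -1, -1, 1]   # linearly separable: one template suffices
XOR = [-1, 1, 1, -1]    # not separable: needs a template sequence
```

Functions like XOR, for which no single template exists, are exactly the cases the minterm- and maxterm-based decompositions handle with a sequence of templates.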


Neural Computing and Applications | 2017

NXOR- or XOR-based robust template decomposition for cellular neural networks implementing an arbitrary Boolean function via support vector classifiers

Yih-Lon Lin; Jer-Guang Hsieh; Ying-Sheng Kuo; Jyh-Horng Jeng

Robust template design for cellular neural networks (CNNs) implementing an arbitrary Boolean function is currently an active research area. If the given Boolean function is linearly separable, a single robust uncoupled CNN can be designed, preferably as a maximal margin classifier, to implement it. On the other hand, if the linearly separable Boolean function has a small geometric margin, or if the Boolean function is not linearly separable, a popular approach is to find a sequence of robust uncoupled CNNs implementing it. In past research using this approach, the control template parameters and thresholds were usually restricted to a given finite set of integers. In this study, we remove this unnecessary restriction. An NXOR- or XOR-based decomposition algorithm utilizing soft margin and maximal margin support vector classifiers is proposed to design a sequence of robust templates implementing an arbitrary Boolean function. Several illustrative examples are simulated to demonstrate the efficiency of the proposed method by comparing our results with those produced by other decomposition methods with restricted weights.


Soft Computing and Pattern Recognition | 2015

Preliminary study on QR code detection using HOG and AdaBoost

Yih-Lon Lin; Chung-Ming Sung

In this paper, an approach to QR code detection using histograms of oriented gradients (HOG) and AdaBoost is proposed. The approach has two steps. In the first step, feature vectors are extracted using HOG with various cell sizes and overlapping or non-overlapping blocks. In the second step, AdaBoost classifiers are trained on the HOG feature vectors and output targets, and the QR code position is then detected from the predicted outputs of the AdaBoost classifier. Experimental results show that the proposed method is an effective way to detect QR code positions, though the results reported here constitute only a preliminary study.
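The HOG feature-extraction step can be sketched from scratch. This is a deliberately simplified descriptor (per-cell orientation histograms with a single global normalization, no overlapping-block normalization), intended only to show what feeds the AdaBoost stage; cell size, bin count, and the function name are assumptions.

```python
import numpy as np

def hog_features(img, cell=8, bins=9):
    """Minimal HOG: per-cell histograms of gradient orientations,
    weighted by gradient magnitude, concatenated and L2-normalized."""
    img = img.astype(float)
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180       # unsigned orientation
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            m = mag[i:i + cell, j:j + cell].ravel()
            a = ang[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)
    f = np.concatenate(feats)
    return f / (np.linalg.norm(f) + 1e-12)

# A 16x16 test patch of vertical stripes: a caricature of the dense
# high-contrast edge pattern that makes QR codes stand out under HOG.
patch = np.tile([0.0, 255.0], (16, 8))
vec = hog_features(patch)                            # 2x2 cells * 9 bins
```

A fixed-length vector like this, computed over a sliding window, is what each boosted weak classifier votes on.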


Neurocomputing | 2015

On least trimmed squares neural networks

Yih-Lon Lin; Jer-Guang Hsieh; Jyh-Horng Jeng; Wen-Chin Cheng

In this paper, least trimmed squares (LTS) estimators, frequently used in robust (or resistant) linear parametric regression problems, are generalized to nonparametric LTS neural networks for nonlinear regression problems, with particular emphasis on robustness against outliers. This provides alternative learning machines for general nonlinear learning problems. Simple weight-updating rules based on gradient descent and iteratively reweighted least squares (IRLS) algorithms are provided. The important trimming percentage for the data at hand can be determined by cross-validation. Simulated and real-world data are used to illustrate the use of LTS neural networks, comparing the robustness against outliers of the usual neural networks with the least squares criterion and the proposed LTS neural networks. Simulation results show that the proposed LTS neural networks have good robustness against outliers.
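The LTS criterion itself is simple to state in code: sort the squared residuals and sum only the smallest ones, so the largest residuals (presumed outliers) never enter the loss. The sketch below shows the criterion only, not the network training rules; the function name and trimming fraction are illustrative.

```python
import numpy as np

def lts_loss(residuals, trim=0.2):
    """Least trimmed squares criterion: the sum of the smallest
    (1 - trim) fraction of squared residuals."""
    r2 = np.sort(np.square(np.asarray(residuals, dtype=float)))
    h = int(np.ceil((1.0 - trim) * r2.size))
    return float(r2[:h].sum())

clean = np.array([0.1, -0.2, 0.15, 0.05, -0.1])
dirty = np.append(clean, [30.0, -40.0])    # two gross outliers
```

With a trimming fraction large enough to cover the contamination, `lts_loss(dirty, trim=0.3)` equals the untrimmed loss on the clean residuals alone, which is exactly why the trimming percentage must be tuned (here, by cross-validation) to the data at hand.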

Collaboration

Top co-authors of Yih-Lon Lin:

Hsu-Kun Wu (National Sun Yat-sen University)
Wei-Chih Teng (National Sun Yat-sen University)