Jr-Syu Yang
Tamkang University
Publications
Featured research published by Jr-Syu Yang.
Neurocomputing | 2009
Chan-Yun Yang; Jr-Syu Yang; Jianjun Wang
Imbalanced dataset learning is an important practical issue in machine learning, even for support vector machines (SVMs). In this study, a well-known reference model for solving the problem, proposed by Veropoulos et al., is first studied. From the perspective of the loss function, the reference cost-sensitive prototype is identified as a penalty-regularized model. Intuitively, the loss function can change not only the penalty but also the margin to recover the biased decision boundary. This study focuses mainly on the effect of the margin and then extends the model to a more general modification. As proposed in the prototype, the modification first adopts an inversely proportional regularized penalty to re-weight the imbalanced classes. In addition to the penalty regularization, the modification then employs a margin compensation that makes the margin lopsided, which enables the decision boundary to drift. Two regularization factors, the penalty and the margin, are hence suggested for achieving an unbiased classification. The margin compensation, in association with the penalty regularization, is used here to calibrate and refine the biased decision boundary and further reduce the bias. With the area under the receiver operating characteristic curve (AuROC) used to examine performance, the modification shows relatively higher scores than the reference model, even though the optimal performance is achieved by the reference model. Some useful characteristics found empirically are also included, which may be convenient for future applications. The theoretical descriptions and experimental validations show the proposed model's potential to compete for highly unbiased accuracy on complex imbalanced datasets.
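A minimal sketch of the penalty-regularized (cost-sensitive) SVM idea the abstract builds on, evaluated with AuROC. The dataset, kernel, and parameter values are illustrative assumptions, not the paper's setup; scikit-learn's class_weight="balanced" is used here as a stand-in for the inversely proportional class penalty of the Veropoulos-style reference model, and the margin compensation proposed in the paper is not reproduced.

```python
# Hedged sketch: class-weighted SVM on an imbalanced toy problem, scored by AuROC.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

# Imbalanced toy problem with roughly a 9:1 majority/minority ratio (illustrative).
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" applies a penalty inversely proportional to class frequency,
# i.e. the penalty-regularization part of the reference cost-sensitive model.
clf = SVC(kernel="rbf", C=1.0, class_weight="balanced")
clf.fit(X_tr, y_tr)

# Decision-function scores are sufficient for computing the area under the ROC curve.
scores = clf.decision_function(X_te)
print("AuROC:", roc_auc_score(y_te, scores))
```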
systems, man and cybernetics | 2004
Z. M. Lin; Jr-Syu Yang; Chan-Yun Yang
Billiards is one of the most complex games to play in the real world. A player needs to visualize the situation between the balls and the pockets and to pocket the ball into the designated pocket based on his or her own experience. A billiard robot was developed to imitate the behavior of human beings playing billiards. The experimental setup comprises machine vision, decision-making, control, and actuating subsystems. The objective of this paper is to design a decision algorithm for the billiard robot using grey theory. The results indicate that the decision algorithm works very well in both simulation and experiment.
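The abstract does not spell out the paper's decision algorithm, so the following is only a hedged sketch of grey relational analysis, a common grey-theory decision procedure, applied to ranking candidate pockets. The feature set (cut angle, cue-ball travel distance, obstruction count), the cost-type treatment of all features, and the weights are hypothetical placeholders.

```python
# Hedged sketch: grey relational analysis (GRA) for ranking candidate shots.
import numpy as np

def grey_relational_grade(candidates, ideal, rho=0.5):
    """Rank rows of `candidates` by closeness to the `ideal` row.
    All features are treated as cost-type (smaller is better), which is why
    the ideal row is taken as the column-wise minimum."""
    data = np.vstack([ideal, candidates]).astype(float)
    # Min-max normalization per feature (column).
    norm = (data - data.min(axis=0)) / (np.ptp(data, axis=0) + 1e-12)
    ref, comp = norm[0], norm[1:]
    delta = np.abs(comp - ref)
    d_min, d_max = delta.min(), delta.max()
    coeff = (d_min + rho * d_max) / (delta + rho * d_max)
    return coeff.mean(axis=1)  # grey relational grade per candidate

# Hypothetical candidates: [cut angle (deg), travel distance (cm), obstructions].
shots = np.array([[10.0, 60.0, 0.0],
                  [35.0, 40.0, 1.0],
                  [55.0, 25.0, 0.0]])
ideal = shots.min(axis=0)  # best value of each cost-type feature
grades = grey_relational_grade(shots, ideal)
print("best pocket index:", int(grades.argmax()), grades)
```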
international conference on networking, sensing and control | 2004
Bo-Ru Cheng; Je-Ting Li; Jr-Syu Yang
A billiard robot is designed to imitate the learning ability of human beings playing billiards. The objective of this research is to design a neural-fuzzy compensator for this billiard robot to improve its billiards skill. First, a hitting-error prediction model is developed based on a recorded database of pocketing processes. Then, the predicted error is compensated by the fuzzy controller, which decides the cutting angle (hitting point) of the object ball automatically. Experiments and numerical analysis confirm sufficient accuracy to sink the ball into the designated pocket.
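A hedged sketch of the kind of fuzzy compensation the abstract describes: a predicted hitting error is passed through a small rule base that outputs a correction to the cutting angle. The membership ranges, the three-rule base, the Sugeno-style defuzzification, and the numeric values are all assumptions; the paper's learned error model and actual rule base are not reproduced.

```python
# Hedged sketch: fuzzy compensation of a predicted hitting error (degrees).
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def fuzzy_compensation(predicted_error_deg):
    """Weighted-average (Sugeno-style) defuzzification of a 3-rule base."""
    # Antecedents: predicted error is Negative / Zero / Positive.
    w_neg  = tri(predicted_error_deg, -6.0, -3.0,  0.0)
    w_zero = tri(predicted_error_deg, -3.0,  0.0,  3.0)
    w_pos  = tri(predicted_error_deg,  0.0,  3.0,  6.0)
    # Consequents: correction applied to the cutting angle (degrees, hypothetical).
    corrections = np.array([+2.5, 0.0, -2.5])
    weights = np.array([w_neg, w_zero, w_pos])
    return float((weights * corrections).sum() / (weights.sum() + 1e-12))

# Example: the error model predicts a miss of +1.8 degrees, so the
# compensator shifts the cutting angle in the opposite direction.
print(fuzzy_compensation(1.8))
```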
international symposium on neural networks | 2008
Chan-Yun Yang; Jianjun Wang; Jr-Syu Yang; Guo-Ding Yu
This paper surveys previous solutions and further proposes a new solution, based on cost-sensitive learning, for the imbalanced dataset learning problem in support vector machines. The general idea of the cost-sensitive approach is to adopt an inversely proportional penalization scheme for dealing with the problem, which forms a penalty-regularized model. In this paper, an additional margin compensation is included to achieve a more accurate solution. As is known, the margin plays an important role in drawing the decision boundary. This motivates the study to produce an imbalanced margin between the classes, which enables the decision boundary to shift. The imbalanced margin is hence allowed to recompense the overwhelmed class as a margin compensation. Incorporated with the penalty regularization, the margin compensation can moderately calibrate the decision boundary and can be utilized to refine the biased boundary. This effect decreases the need for a high penalty on the minority class and prevents the classification from the risk of overfitting. Experimental results show promising potential for future applications.
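One way to read the combination of penalty regularization and margin compensation is the following primal, a hedged sketch rather than the paper's exact formulation: the class-dependent penalties C+ and C- follow the Veropoulos-style inversely proportional scheme, and the class-dependent margin targets m_{+1}, m_{-1} (replacing the common target 1) are an assumed way of expressing the imbalanced margin between the classes.

```latex
\begin{aligned}
\min_{\mathbf{w},\,b,\,\boldsymbol{\xi}} \quad
  & \tfrac{1}{2}\lVert \mathbf{w}\rVert^{2}
    + C^{+}\!\!\sum_{i:\,y_i=+1}\!\xi_i
    + C^{-}\!\!\sum_{i:\,y_i=-1}\!\xi_i \\
\text{s.t.} \quad
  & y_i\bigl(\mathbf{w}^{\top}\mathbf{x}_i + b\bigr) \ge m_{y_i} - \xi_i,
    \qquad \xi_i \ge 0,\; i = 1,\dots,n .
\end{aligned}
```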
systems, man and cybernetics | 2005
Jr-Syu Yang; Yu-Seng Chang
The objective of this paper is to design a neural-fuzzy controller that lets a four-link robot stand up vertically and stably from a flat horizontal surface. The robot system consists of a structural mechanism, a PC, a tilt sensor, and three stepping motors. The sequence of the robot's standing behavior is designed and assigned by software developed on the PC. An artificial neural network (ANN) and a fuzzy control algorithm are applied to develop the standing controller for this robot. The position of the center of gravity (COG) is an important factor in determining the stability of the robot. Finally, the robot links are driven by the corresponding motors to demonstrate its dynamic behaviors successfully and automatically.
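A minimal sketch of the COG computation the abstract refers to: for a planar serial chain, the COG is the mass-weighted mean of the link centers, and a simple static-stability check asks whether its horizontal projection stays over the support (foot) interval. The link lengths, masses, angles, and support extent below are illustrative values, not the paper's hardware parameters, and the neural-fuzzy controller itself is not reproduced.

```python
# Hedged sketch: COG of a planar four-link chain and a static-stability check.
import numpy as np

def link_centres(joint_angles, lengths):
    """Centre point of each link of a planar serial chain rooted at (0, 0)."""
    centres, base, heading = [], np.zeros(2), 0.0
    for theta, L in zip(joint_angles, lengths):
        heading += theta                            # absolute link angle
        direction = np.array([np.cos(heading), np.sin(heading)])
        centres.append(base + 0.5 * L * direction)  # midpoint of the link
        base = base + L * direction                 # next joint position
    return np.array(centres)

def cog(joint_angles, lengths, masses):
    c = link_centres(joint_angles, lengths)
    m = np.asarray(masses, dtype=float)
    return (m[:, None] * c).sum(axis=0) / m.sum()

# Four-link chain part-way through a stand-up motion (illustrative values).
angles  = [np.pi / 3, np.pi / 6, np.pi / 6, np.pi / 12]   # radians
lengths = [0.10, 0.12, 0.12, 0.08]                        # metres
masses  = [0.30, 0.25, 0.25, 0.20]                        # kilograms
x_cog, _ = cog(angles, lengths, masses)
support = (-0.02, 0.06)                                   # hypothetical foot extent (m)
print("statically stable:", support[0] <= x_cog <= support[1])
```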
computational intelligence and security | 2005
Che-Chang Hsu; Chan-Yun Yang; Jr-Syu Yang
This paper proposes a hybrid two-stage support vector machine (SVM) method to increase classification accuracy. In this model, a filtering stage based on the k-nearest-neighbor (kNN) rule is employed to collect information from the training observations and to re-evaluate balance weights for the observations based on their influences. The balance weights change the policy of the discrete class label, so a novel idea of real-valued class labels for transferring the balance weights is proposed. Embedded in the class labels, the weights, given as penalties for the uncertain outliers, are incorporated into the quadratic programming of the SVM and produce a different hyperplane with higher accuracy. The adoption of the kNN rule in the filtering stage has the advantage of distinguishing the uncertain outliers in an independent way. The results show that the classification accuracy of the hybrid model is higher than that of the classical SVM.
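A hedged sketch of the two-stage idea: a kNN filtering pass measures how much each training example agrees with its neighborhood, and examples surrounded by the opposite class (likely uncertain outliers) are down-weighted before the SVM is trained. The paper carries these weights through real-valued class labels inside the quadratic program; scikit-learn has no such label mechanism, so this sketch approximates the effect with per-example sample weights. The value of k, the weighting rule, and the toy data are assumptions.

```python
# Hedged sketch: kNN filtering stage followed by a per-example-weighted SVM.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC

X, y = make_moons(n_samples=300, noise=0.3, random_state=1)

# Stage 1: kNN filter.  Each example's weight is the fraction of its k
# neighbours sharing its label (low agreement -> small weight).
k = 7
nn = NearestNeighbors(n_neighbors=k + 1).fit(X)   # +1 because the point itself is returned
_, idx = nn.kneighbors(X)
agreement = (y[idx[:, 1:]] == y[:, None]).mean(axis=1)
weights = 0.1 + 0.9 * agreement                   # keep all weights strictly positive

# Stage 2: classical SVM, with the filter's weights scaling each example's penalty.
clf = SVC(kernel="rbf", C=1.0)
clf.fit(X, y, sample_weight=weights)
print("training accuracy:", clf.score(X, y))
```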
computational intelligence and security | 2006
Chan-Yun Yang; Che-Chang Hsu; Jr-Syu Yang
This paper proposes a model that merges a non-parametric k-nearest-neighbor (kNN) method into an underlying support vector machine (SVM) to produce an instance-dependent loss function. In this model, a filtering stage of kNN searching is employed to collect information from the training examples and to produce a set of emphasized weights, which can be distributed to every example through a class of real-valued class labels. The emphasized weights change the policy of equal-valued impacts of the training examples and permit a more efficient way to utilize the information behind training examples with various significance levels. Owing to its property of estimating density locally, the kNN method has the advantage of distinguishing heterogeneous examples from regular examples by merely considering the situation of the examples themselves. The paper shows that the model is promising through both theoretical derivations and the consequent experimental results.
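A compact, hedged reading of "instance-dependent loss function": each example's hinge loss is scaled by a weight produced by the kNN stage from its local label agreement. The mapping g below is an unspecified placeholder; the paper's actual choice of g, and the way the weights are folded into real-valued class labels inside the quadratic program, are not reproduced here.

```latex
R_{emp}[f] \;=\; \sum_{i=1}^{n} w_i \,\max\bigl(0,\; 1 - y_i f(\mathbf{x}_i)\bigr),
\qquad
w_i \;=\; g\!\left(\tfrac{1}{k}\,\bigl|\{\, j \in N_k(\mathbf{x}_i) : y_j = y_i \,\}\bigr|\right),
```

where N_k(x_i) denotes the k nearest neighbours of x_i in the input space.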
international conference on networking, sensing and control | 2015
Wei-Chih Lin; Chan-Yun Yang; Gene Eu Jan; Jr-Syu Yang
In general, a classifier in statistical learning can be expressed as a regularized optimization problem, argmin_{f∈H} λ_Ω Ω[f] + R_emp[f], where λ_Ω is a regulator that balances the optimization between the Ω[f] and R_emp[f] terms. The regulator λ_Ω applies a single regularization over all the training patterns, so the optimization weights every training pattern with the same cost even though the patterns may differ in importance. Such differences may arise from differing sampling costs, different uncertainties behind the samples, or an imbalance between the adversary classes. Models capable of assigning individual costs to individual samples are therefore developed in this paper. By passing the regularization into the R_emp[f] term, this study proposes different regularization models of the support vector machine obtained by tuning a parameterized governing loss function. Since the loss function is a key to the success of the support vector machine, changing the loss function for individual examples extends the support vector machine to accomplish the missions mentioned above. This study investigates the properties that follow from the changes in the loss function and, in turn, demonstrates the feasibility of three kinds of related models.
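A hedged rendering of the per-example-cost idea described above: the uniform empirical risk is replaced by one in which each training pattern carries its own cost inside the loss. The per-sample cost c_i below is one illustrative reading; the concrete parameterization of the governing loss function is the paper's subject and is not reproduced.

```latex
f^{*} \;=\; \operatorname*{arg\,min}_{f \in \mathcal{H}}\;
  \lambda_{\Omega}\,\Omega[f] \;+\;
  \underbrace{\sum_{i=1}^{n} c_i\,\ell\bigl(y_i, f(\mathbf{x}_i)\bigr)}_{R_{emp}[f]\ \text{with per-sample costs}} ,
```

where c_i reflects the sampling cost, the uncertainty, or the class imbalance associated with the i-th training pattern.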
Seventh International Symposium on Precision Engineering Measurements and Instrumentation | 2011
Jr-Syu Yang; Chiun-Shiang Su; Chan-Yun Yang
The objective of this paper is to design a mobile robot with automatic motion behaviors and obstacle avoidance functions. The robot is also able to perform SLAM (simultaneous localization and mapping) in an unknown environment. The robot position is calculated by the developed software from the motor encoders. An obstacle avoidance controller is developed using fuzzy theory. A laser range finder (LRF) is installed on the robot, and its sensing data are used to compute the environmental information for the obstacle avoidance controller. Then, the ICP (iterative closest point) algorithm is applied to compare the position error of the environmental data in order to obtain the estimated position of the LRF. Finally, these estimated position data are used to compute the final SLAM result for this mobile robot. Both the simulation and experimental results show that the developed robot system works very well. Keywords: SLAM, obstacle avoidance, ICP (iterative closest point), LRF (laser range finder).
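A hedged sketch of one point-to-point ICP iteration of the kind used to align consecutive LRF scans: each source point is matched to its nearest target point, and the rigid 2-D transform is recovered with the SVD (Kabsch) solution. Real scan data, outlier rejection, and the convergence loop used in the paper are omitted; the toy data below are synthetic.

```python
# Hedged sketch: one point-to-point ICP iteration for 2-D scan alignment.
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target):
    """One ICP iteration: returns rotation R (2x2) and translation t (2,)."""
    matches = target[cKDTree(target).query(source)[1]]   # nearest-neighbour matching
    src_c, tgt_c = source.mean(axis=0), matches.mean(axis=0)
    H = (source - src_c).T @ (matches - tgt_c)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                              # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Toy example: the target scan is the source scan rotated by 5 degrees.
rng = np.random.default_rng(0)
source = rng.uniform(-1, 1, size=(200, 2))
theta = np.deg2rad(5.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
target = source @ R_true.T
# A single step only moves the estimate toward R_true; the full algorithm iterates.
R_est, t_est = icp_step(source, target)
print(np.round(R_est, 3), np.round(t_est, 3))
```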
Neural Processing Letters | 2009
Chan-Yun Yang; Che-Chang Hsu; Jr-Syu Yang
This paper presents a new model developed by merging a non-parametric k-nearest-neighbor (kNN) preprocessor into an underlying support vector machine (SVM) to provide shelters for meaningful training examples, especially for stray examples scattered among counterpart examples with different class labels. Motivated by the method of adding a heavier penalty to stray examples to attain a stricter loss function for optimization, the model acts to shelter such examples. The model consists of a filtering kNN emphasizer stage and a classical classification stage. First, the filtering kNN emphasizer stage is employed to collect information from the training examples and to produce emphasis weights for stray examples. Then, an underlying SVM with parameterized real-valued class labels is employed to carry those weights, representing the various emphasized levels of the examples, into the classification. The emphasized weights, given as heavier penalties, change the regularization in the quadratic programming of the SVM and bring the resultant decision function to a higher training accuracy. The novel idea of real-valued class labels for conveying the emphasized weights provides an effective way to pursue the classification solution informed by the additional information. The adoption of the kNN preprocessor as a filtering stage is effective since it is independent of the SVM in the classification stage. Owing to its property of estimating density locally, the kNN method has the advantage of distinguishing stray examples from regular examples by merely considering their circumstances in the input space. In this paper, detailed experimental results and a simulated application are given to address the corresponding properties. The results show that the model is promising in terms of its original expectations.