Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Wei-Yang Lin is active.

Publication


Featured research published by Wei-Yang Lin.


Systems, Man and Cybernetics | 2012

Machine Learning in Financial Crisis Prediction: A Survey

Wei-Yang Lin; Ya-Han Hu; Chih-Fong Tsai

For financial institutions, the ability to predict or forecast business failures is crucial, as incorrect decisions can have direct financial consequences. Bankruptcy prediction and credit scoring are the two major research problems in the accounting and finance domain. In the literature, a number of models have been developed to predict whether borrowers are in danger of bankruptcy and whether they should be considered a good or bad credit risk. Since the 1990s, machine-learning techniques, such as neural networks and decision trees, have been studied extensively as tools for bankruptcy prediction and credit score modeling. This paper reviews 130 related journal papers from the period between 1995 and 2010, focusing on the development of state-of-the-art machine-learning techniques, including hybrid and ensemble classifiers. Related studies are compared in terms of classifier design, datasets, baselines, and other experimental factors. This paper presents the current achievements and limitations associated with the development of bankruptcy-prediction and credit-scoring models employing machine learning. We also provide suggestions for future research.


Computer Vision and Pattern Recognition | 2006

Fusion of Summation Invariants in 3D Human Face Recognition

Wei-Yang Lin; Kin-Chung Wong; Nigel Boston; Yu Hen Hu

A novel family of 2D and 3D geometrically invariant features, called summation invariants, is proposed for the recognition of the 3D surface of human faces. Focusing on a rectangular region surrounding the nose of a 3D facial depth map, a subset of the so-called semi-local summation invariant features is extracted. Then the similarity between a pair of 3D facial depth maps is computed to determine whether they belong to the same person. Out of the many possible combinations of this set of features, we select, through careful experimentation, a subset that yields the best combined performance. Tested with the 3D facial data from the ongoing Face Recognition Grand Challenge v1.0 dataset, the proposed new features exhibit significant performance improvement over the baseline algorithm distributed with the dataset.


Systems, Man and Cybernetics | 2007

Optimal Linear Combination of Facial Regions for Improving Identification Performance

Kin-Chung Wong; Wei-Yang Lin; Yu Hen Hu; Nigel Boston; Xueqin Zhang

This paper presents a novel 3D multiregion face recognition algorithm that consists of new geometric summation invariant features and an optimal linear feature fusion method. A summation invariant, which captures local characteristics of a facial surface, is extracted from multiple subregions of a 3D range image as the discriminative features. Similarity scores between two range images are calculated from the selected subregions. A novel fusion method that is based on a linear discriminant analysis is developed to maximize the verification rate by a weighted combination of these similarity scores. Experiments on the Face Recognition Grand Challenge V2.0 dataset show that this new algorithm improves the recognition performance significantly in the presence of facial expressions.
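As a rough illustration of the fusion step described above, here is a minimal sketch of score fusion via Fisher linear discriminant analysis. The two-region scores and the genuine/impostor samples below are invented for illustration; only the idea of learning a weighted combination of per-region similarity scores follows the abstract.

```python
# Hedged sketch: learn Fisher LDA weights for fusing two per-region
# similarity scores. All numbers are synthetic, not the paper's data.

def mean(vs):
    return [sum(col) / len(vs) for col in zip(*vs)]

def scatter(vs, m):
    # Within-class scatter contribution (2x2 here).
    d = len(m)
    S = [[0.0] * d for _ in range(d)]
    for v in vs:
        diff = [v[i] - m[i] for i in range(d)]
        for i in range(d):
            for j in range(d):
                S[i][j] += diff[i] * diff[j]
    return S

def lda_weights(genuine, impostor):
    mg, mi = mean(genuine), mean(impostor)
    Sg, Si = scatter(genuine, mg), scatter(impostor, mi)
    Sw = [[Sg[i][j] + Si[i][j] for j in range(2)] for i in range(2)]
    # Closed-form w = Sw^{-1} (mg - mi) for the 2x2 case.
    det = Sw[0][0] * Sw[1][1] - Sw[0][1] * Sw[1][0]
    inv = [[Sw[1][1] / det, -Sw[0][1] / det],
           [-Sw[1][0] / det, Sw[0][0] / det]]
    dm = [mg[0] - mi[0], mg[1] - mi[1]]
    return [inv[0][0] * dm[0] + inv[0][1] * dm[1],
            inv[1][0] * dm[0] + inv[1][1] * dm[1]]

# Toy per-region similarity scores (e.g. nose region, eye region).
genuine  = [[0.9, 0.8], [0.85, 0.9], [0.8, 0.85]]
impostor = [[0.3, 0.4], [0.2, 0.35], [0.35, 0.3]]
w = lda_weights(genuine, impostor)
fused = lambda s: w[0] * s[0] + w[1] * s[1]
```

The learned weights separate genuine from impostor pairs better than any single region score would on its own, which is the point of the fusion stage.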


Pattern Recognition | 2013

Kernel-based representation for 2D/3D motion trajectory retrieval and classification

Wei-Yang Lin; Chung-Yang Hsieh

This paper proposes a novel kernel-space representation for motion trajectories. In contrast to most trajectory representation methods in the literature, our method is more generic in the sense that it can be applied to either 2D or 3D trajectories. In the proposed method, a trajectory is first projected by Kernel Principal Component Analysis (KPCA), which can be considered an implicit mapping to a much higher-dimensional feature space. The high dimensionality can effectively improve the accuracy of recognizing motion trajectories. Then, Nonparametric Discriminant Analysis (NDA) is used to extract the most discriminative features from the KPCA feature space. The synergistic effect of KPCA and NDA leads to better class separability and makes the proposed trajectory representation a more powerful discriminator. The experimental validation of the proposed method is conducted on the Australian Sign Language (ASL) dataset. The results show that our method performs significantly better, in both trajectory classification and retrieval, than state-of-the-art techniques.
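To make the KPCA step concrete, here is a hedged toy: trajectories are compared through an RBF kernel, the Gram matrix is centered, and the leading kernel principal component is found by power iteration. The four flattened trajectories are synthetic (not ASL data), and the NDA stage is omitted.

```python
# Hedged sketch of kernel PCA on trajectories: RBF Gram matrix,
# centering, and power iteration for the leading component.
import math

def rbf(x, y, gamma=0.5):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def kpca_first_component(X, gamma=0.5, iters=200):
    n = len(X)
    K = [[rbf(X[i], X[j], gamma) for j in range(n)] for i in range(n)]
    # Center the Gram matrix in feature space.
    row = [sum(r) / n for r in K]
    tot = sum(row) / n
    Kc = [[K[i][j] - row[i] - row[j] + tot for j in range(n)]
          for i in range(n)]
    # Power iteration for the leading eigenvector (start must not be
    # the all-ones vector, which centering annihilates).
    v = [float(i + 1) for i in range(n)]
    for _ in range(iters):
        w = [sum(Kc[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return Kc, v

# Two flattened 2D toy trajectories per class.
X = [[0.0, 0.0, 1.0, 1.0], [0.1, 0.0, 1.0, 1.1],   # class A
     [5.0, 5.0, 6.0, 6.0], [5.1, 5.0, 6.0, 6.1]]   # class B
Kc, v = kpca_first_component(X)
# Projection of each sample onto the leading kernel component.
proj = [sum(Kc[i][j] * v[j] for j in range(len(X))) for i in range(len(X))]
```

On this toy data the first kernel component already separates the two classes: same-class samples project with the same sign, cross-class samples with opposite signs.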


International Conference on Multimedia and Expo | 2007

3D Face Recognition Under Expression Variations using Similarity Metrics Fusion

Wei-Yang Lin; Kin-Chung Wong; Nigel Boston; Yu Hen Hu

We present a novel 3D face recognition method that incorporates summation invariant features extracted from multiple sub-regions of a facial range image, and optimal fusion of similarity scores between corresponding sub-regions. The key innovation of this paper is the development of a fusion-based face recognition algorithm that delivers significant performance enhancement while requiring very little computation. Experiments on the FRGC (Face Recognition Grand Challenge) version 2 dataset show that our algorithm improves the recognition performance significantly in the presence of facial expressions.


Medical Physics | 2012

Real-time automatic fiducial marker tracking in low contrast cine-MV images

Wei-Yang Lin; Shu-Fang Lin; Sheng-Chang Yang; Shu-Cheng Liou; Ravinder Nath; Wu Liu

PURPOSE To develop a real-time automatic method for tracking implanted radiographic markers in low-contrast cine-MV patient images used in image-guided radiation therapy (IGRT). METHODS Intrafraction motion tracking using radiotherapy beam-line MV images has gained some attention recently in IGRT because no additional imaging dose is introduced. However, MV images have much lower contrast than kV images; therefore, a robust and automatic algorithm for marker detection in MV images is a prerequisite. Previous marker detection methods are all based on template matching or its derivatives. Template matching must match an object shape that changes significantly with implantation and projection angle. While these methods require a large number of templates to cover various situations, they are often forced to use a smaller number to reduce the computational load, because they all require an exhaustive search of the region of interest. The authors solve this problem by the synergistic use of modern but well-tested computer vision and artificial intelligence techniques; specifically, the authors detect implanted markers using discriminant analysis for initialization and mean-shift feature-space analysis for sequential tracking. This novel approach avoids exhaustive search by exploiting the temporal correlation between consecutive frames, and makes it possible to perform more sophisticated detection at the beginning to improve accuracy, followed by ultrafast sequential tracking after initialization. The method was evaluated and validated using 1149 cine-MV images from two prostate IGRT patients and compared with manual marker detection results from six researchers. The average of the manual detection results is considered the ground truth for comparisons. RESULTS The average root-mean-square errors of our real-time automatic tracking method from the ground truth are 1.9 and 2.1 pixels for the two patients (0.26 mm/pixel). The standard deviations of the results from the six researchers are 2.3 and 2.6 pixels. The proposed framework takes about 128 ms to detect four markers in the first MV image and about 23 ms to track these markers in each of the subsequent images. CONCLUSIONS The unified framework for tracking of multiple markers presented here can achieve marker detection accuracy similar to manual detection even in low-contrast cine-MV images. It can cope with shape deformations of fiducial markers at different gantry angles. The fast processing speed reduces the image-processing portion of the system latency and can therefore improve the performance of real-time motion compensation.
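The sequential-tracking idea above can be sketched with a minimal mean-shift iteration: the window moves toward the local centroid of a weight map, so a marker found in one frame seeds the search in the next instead of an exhaustive scan. The synthetic "likelihood" image below stands in for real cine-MV marker evidence.

```python
# Hedged sketch of mean-shift tracking on a 2D weight map.
# The blob image is synthetic; real marker likelihoods would come
# from the detection stage.

def mean_shift(weights, x0, y0, radius=3, iters=20):
    h, w = len(weights), len(weights[0])
    x, y = x0, y0
    for _ in range(iters):
        sx = sy = sw = 0.0
        # Weighted centroid of the window around the current estimate.
        for i in range(max(0, y - radius), min(h, y + radius + 1)):
            for j in range(max(0, x - radius), min(w, x + radius + 1)):
                sx += weights[i][j] * j
                sy += weights[i][j] * i
                sw += weights[i][j]
        if sw == 0:
            break
        nx, ny = round(sx / sw), round(sy / sw)
        if (nx, ny) == (x, y):
            break  # converged
        x, y = nx, ny
    return x, y

# Synthetic marker likelihood: a bright 3x3 blob centered at (12, 8).
img = [[0.0] * 20 for _ in range(20)]
for i in range(7, 10):
    for j in range(11, 14):
        img[i][j] = 1.0
pos = mean_shift(img, 9, 6)  # seed from the previous-frame estimate
```

Starting from the previous-frame estimate (9, 6), the window shifts onto the blob and converges at its center, which is why only a small neighborhood ever needs to be examined per frame.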


Investigative Ophthalmology & Visual Science | 2011

Automatic Characterization of Classic Choroidal Neovascularization by Using AdaBoost for Supervised Learning

Chia-Ling Tsai; Yi-Lun Yang; Shih-Jen Chen; Kai-Shung Lin; Chih-Hao Chan; Wei-Yang Lin

PURPOSE To provide a computer-aided visualization tool for accurate diagnosis and quantification of choroidal neovascularization (CNV) on the basis of fluorescence leakage characteristics. METHODS All image frames of a fluorescein angiography (FA) sequence are first aligned and mapped to a global space. To automatically determine the severity of each pixel in the global space, and hence the extent of CNV, the system matches the intensity variation of each set of spatially corresponding pixels across the sequence with the targeted leakage pattern, learned from a sampled population graded by a retina specialist. The learning strategy, known as the AdaBoost algorithm, has 12 classifiers for 12 features that summarize the variation in fluorescence intensity over time. Given a new sequence, the severity map image is generated using the contribution scores of the 12 classifiers. Initialized with points of low and high severity, regions of CNV are delineated using the random walk algorithm. RESULTS A dataset of 33 FA sequences of classic CNV showed the average accuracy of CNV delineation to be 83.26%. In addition, the 30- to 60-second interval provided the most reliable information for differentiating CNV from the background. Using eight sequences from multiple visits of four patients for post-photodynamic therapy (PDT) evaluation, the statistics derived from the segmented regions correlate closely with the clinically observed changes. CONCLUSIONS The clinician can easily visualize the temporal characteristics of CNV fluorescence leakage using the severity map, which is a two-dimensional summary of a complete FA sequence. The computer-aided tool allows objective evaluation and computation of statistical data from the automatic delineation for surgical assessment.
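The boosting idea in the abstract can be illustrated with a minimal AdaBoost over one-feature threshold stumps: each round picks the stump with the lowest weighted error and re-weights the samples it misclassified. The two-feature data here are synthetic, not fluorescein-angiography intensity features.

```python
# Hedged toy AdaBoost with decision stumps; data and features are invented.
import math

def stump_predict(x, dim, thr, sign):
    return sign if x[dim] > thr else -sign

def train_adaboost(X, y, rounds=5):
    n = len(X)
    w = [1.0 / n] * n
    model = []
    for _ in range(rounds):
        best = None
        # Exhaustively pick the stump with the lowest weighted error.
        for dim in range(len(X[0])):
            for thr in sorted({x[dim] for x in X}):
                for sign in (1, -1):
                    err = sum(wi for wi, x, yi in zip(w, X, y)
                              if stump_predict(x, dim, thr, sign) != yi)
                    if best is None or err < best[0]:
                        best = (err, dim, thr, sign)
        err, dim, thr, sign = best
        err = max(err, 1e-9)  # avoid log(0) on a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        model.append((alpha, dim, thr, sign))
        # Re-weight: boost the samples this stump got wrong.
        w = [wi * math.exp(-alpha * yi * stump_predict(x, dim, thr, sign))
             for wi, x, yi in zip(w, X, y)]
        s = sum(w)
        w = [wi / s for wi in w]
    return model

def predict(model, x):
    score = sum(a * stump_predict(x, d, t, s) for a, d, t, s in model)
    return 1 if score > 0 else -1

# Toy two-feature samples with labels +1 / -1.
X = [[0.2, 1.0], [0.3, 0.8], [0.9, 0.1], [0.8, 0.2]]
y = [-1, -1, 1, 1]
model = train_adaboost(X, y)
```

The final prediction is a weighted vote of the stumps, mirroring how the paper's 12 per-feature classifiers contribute scores to the severity map.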


IEEE Transactions on Biomedical Engineering | 2012

Retinal Vascular Tree Reconstruction With Anatomical Realism

Kai-Shung Lin; Chia-Ling Tsai; Chih-Hsiangng Tsai; Michal Sofka; Shih-Jen Chen; Wei-Yang Lin

Motivated by the goals of automatically extracting vessel segments and constructing retinal vascular trees with anatomical realism, this paper presents and analyses an algorithm that combines vessel segmentation with grouping of the extracted vessel segments. The proposed method aims to restore the topology of the vascular trees with anatomical realism for clinical studies and diagnosis of retinal vascular diseases, which manifest abnormalities in the venous and/or arterial vascular systems. Vessel segments are grouped using an extended Kalman filter, which takes into account continuities in curvature, width, and intensity changes at a bifurcation or crossover point. At a junction, the proposed method applies the minimum-cost matching algorithm to resolve conflicts in grouping due to tracing errors. The system was trained with 20 images from the DRIVE dataset and tested using the remaining 20 images. The dataset contained a mixture of normal and pathological images. In addition, six pathological fluorescein angiogram sequences were also included in this study. The results were compared against the ground-truth images provided by a physician, achieving average success rates of 88.79% and 90.09%, respectively.
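The junction-resolution step can be sketched as a minimum-cost assignment: when several incoming vessel segments could continue into several outgoing ones, the globally cheapest pairing wins. The costs below are made-up "continuity" penalties, not the paper's curvature/width/intensity terms, and brute force suffices for the handful of segments meeting at one junction.

```python
# Hedged sketch of minimum-cost matching at a vessel junction.
from itertools import permutations

def min_cost_matching(cost):
    # Brute force over permutations; n is tiny at a single junction.
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_perm, best_cost = perm, c
    return list(best_perm), best_cost

# cost[i][j]: penalty for grouping incoming segment i with outgoing j
# (synthetic numbers; low = smooth continuation).
cost = [[0.1, 0.9, 0.8],
        [0.7, 0.2, 0.9],
        [0.8, 0.6, 0.3]]
match, total = min_cost_matching(cost)
```

A greedy per-segment choice can contradict itself at a junction; solving the assignment jointly is what resolves those conflicts.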


Multimedia Signal Processing | 2007

Fusion of Multiple Facial Regions for Expression-Invariant Face Recognition

Wei-Yang Lin; Ming-Yang Chen; Kerry R. Widder; Yu Hen Hu; Nigel Boston

In this paper, we describe a fusion-based face recognition method that is able to compensate for facial expressions even when the training samples contain only neutral expressions. The similarity metric between two facial images is calculated by combining the similarity scores of corresponding facial regions, e.g., the similarity between two mouths, the similarity between two noses, etc. In contrast with other approaches, where equal weights are assigned to each region, a novel fusion method based on linear discriminant analysis (LDA) is developed to maximize the verification performance. We also conduct a comparative study of various face recognition schemes, including the FRGC baseline algorithm, the fusion of multiple regions by the sum rule, and the fusion of multiple regions by LDA. Experiments on the FRGC (Face Recognition Grand Challenge) V2.0 dataset, containing 4007 face images recorded from 266 subjects, show that the proposed method significantly improves the verification performance in the presence of facial expressions.


Multimedia Tools and Applications | 2014

Facial expression recognition using bag of distances

Fu-Song Hsu; Wei-Yang Lin; Tzu-Wei Tsai

The automatic recognition of facial expressions is critical to applications that are required to recognize human emotions, such as multimodal user interfaces. A novel framework for recognizing facial expressions is presented in this paper. First, distance-based features are introduced and are integrated to yield an improved discriminative power. Second, a bag-of-distances model is applied to comprehend the training images and to construct codebooks automatically. Third, the combined distance-based features are transformed into mid-level features using the trained codebooks. Finally, a support vector machine (SVM) classifier for recognizing facial expressions is trained. The results of this study show that the proposed approach outperforms state-of-the-art methods in recognition rate on the CK+ dataset.
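The "bag" step above can be sketched in a few lines: frame-level distance features are assigned to their nearest codeword and pooled into a normalized histogram, the mid-level feature that would feed the classifier. The codebook and distance values below are invented, and the SVM stage is omitted.

```python
# Hedged sketch of bag-of-distances encoding; codebook and features
# are synthetic stand-ins for learned codewords and facial distances.

def nearest(codebook, v):
    # Index of the codeword closest to v in squared Euclidean distance.
    return min(range(len(codebook)),
               key=lambda k: sum((a - b) ** 2 for a, b in zip(codebook[k], v)))

def bag_of_distances(codebook, frames):
    hist = [0.0] * len(codebook)
    for v in frames:
        hist[nearest(codebook, v)] += 1
    total = sum(hist)
    return [h / total for h in hist]

# Toy codebook of 2D distance features (e.g. mouth width, brow height).
codebook = [[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]]
frames = [[0.1, 0.1], [0.9, 1.1], [1.1, 0.9], [2.1, -0.1]]
hist = bag_of_distances(codebook, frames)
```

The resulting histogram is a fixed-length vector regardless of how many frames a sequence contains, which is what makes it suitable as SVM input.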

Collaboration


Dive into Wei-Yang Lin's collaborations.

Top Co-Authors

- Nigel Boston, University of Wisconsin-Madison
- Yu Hen Hu, University of Wisconsin-Madison
- Shih-Jen Chen, Taipei Veterans General Hospital
- Fu-Song Hsu, National Chung Cheng University
- Chung-Yang Hsieh, National Chung Cheng University
- Kerry R. Widder, University of Wisconsin-Madison
- Kin-Chung Wong, University of Wisconsin-Madison
- Chih-Fong Tsai, National Central University