Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Yi-Ren Yeh is active.

Publication


Featured research published by Yi-Ren Yeh.


Pattern Recognition | 2013

Locality-sensitive dictionary learning for sparse representation based classification

Chia-Po Wei; Yu-Wei Chao; Yi-Ren Yeh; Yu-Chiang Frank Wang

Motivated by image reconstruction, sparse representation based classification (SRC) has been shown to be an effective method for applications such as face recognition. In this paper, we propose a locality-sensitive dictionary learning algorithm for SRC, in which the learned dictionary preserves local data structure and thus improves image classification. We provide closed-form solutions for both the dictionary update and sparse coding stages, and we enforce the data locality constraint throughout the learning process. In contrast to previous dictionary learning approaches based on sparse representation, which did not (or only partially) take data locality into consideration, our algorithm produces a more representative dictionary and thus achieves better performance. Experiments on face and handwritten digit recognition databases confirm that, for such reconstruction-based classification problems, our method performs better than or comparably to state-of-the-art SRC methods while requiring less training time for dictionary learning.
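
As a rough illustration of the data-locality constraint described above, the sketch below codes a sample with a locality-weighted ridge penalty, which keeps the coding step closed-form, and then applies the usual SRC residual rule. This is a minimal sketch, not the paper's dictionary learning algorithm: the function names and the parameters lam and sigma are hypothetical, and the dictionary update stage is omitted.

```python
import numpy as np

def locality_weighted_code(x, D, lam=0.1, sigma=1.0):
    """Code sample x over dictionary D (one atom per column) with a
    locality-weighted ridge penalty: atoms far from x cost more, which
    keeps the closed-form solution
        c = (D^T D + lam * diag(w)^2)^{-1} D^T x."""
    dists = np.linalg.norm(D - x[:, None], axis=0)   # distance from x to each atom
    w = np.exp(dists / sigma)                        # locality weights
    return np.linalg.solve(D.T @ D + lam * np.diag(w ** 2), D.T @ x)

def src_predict(x, D, atom_labels, lam=0.1, sigma=1.0):
    """SRC-style decision: reconstruct x from each class's atoms only and
    predict the class with the smallest reconstruction residual."""
    c = locality_weighted_code(x, D, lam, sigma)
    classes = np.unique(atom_labels)
    residuals = [np.linalg.norm(x - D[:, atom_labels == y] @ c[atom_labels == y])
                 for y in classes]
    return classes[int(np.argmin(residuals))]
```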


IEEE Transactions on Image Processing | 2014

Heterogeneous Domain Adaptation and Classification by Exploiting the Correlation Subspace

Yi-Ren Yeh; Chun-Hao Huang; Yu-Chiang Frank Wang

We present a novel domain adaptation approach for solving cross-domain pattern recognition problems, i.e., problems in which the data or features to be processed and recognized are collected from different domains of interest. Inspired by canonical correlation analysis (CCA), we utilize the derived correlation subspace as a joint representation for associating data across domains, and we advance reduced kernel techniques for kernel CCA (KCCA) when a nonlinear correlation subspace is desirable. These techniques not only make KCCA computationally more efficient but also alleviate potential over-fitting. Instead of directly performing recognition in the derived CCA subspace (as prior CCA-based domain adaptation methods did), we advocate exploiting the domain transfer ability of this subspace, in which each dimension has a unique capability for associating cross-domain data. In particular, we propose a novel support vector machine (SVM) with a correlation regularizer, named correlation-transfer SVM, which incorporates domain adaptation ability into the classifier design. We show that our approach can be successfully applied to a variety of cross-domain recognition tasks, such as cross-view action recognition, handwritten digit recognition with different features, and image-to-text or text-to-image classification. Our empirical results verify that the proposed method outperforms state-of-the-art domain adaptation approaches in terms of recognition performance.
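
To make the correlation-subspace idea concrete, here is a minimal plain-CCA baseline of the kind the paper builds on, assuming paired source/target samples (Xs_pair[i] corresponds to Xt_pair[i]), labeled source data (Xs, ys), and unlabeled target data Xt. It uses scikit-learn, all names below are hypothetical, and the correlation-transfer SVM regularizer itself is not reproduced here.

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.svm import LinearSVC

def project(A, A_train, rotations):
    """Center/scale with the CCA training statistics, then rotate into
    the correlation subspace (mirrors scikit-learn's own preprocessing)."""
    std = A_train.std(axis=0, ddof=1)
    std[std == 0.0] = 1.0
    return (A - A_train.mean(axis=0)) / std @ rotations

def cca_adapt_predict(Xs_pair, Xt_pair, Xs, ys, Xt, n_components=10):
    """Learn a correlation subspace from paired source/target samples,
    project labeled source and unlabeled target data into it, and train
    a linear SVM on the projected source data."""
    cca = CCA(n_components=n_components).fit(Xs_pair, Xt_pair)
    Zs = project(Xs, Xs_pair, cca.x_rotations_)   # source-side mapping
    Zt = project(Xt, Xt_pair, cca.y_rotations_)   # target-side mapping
    return LinearSVC().fit(Zs, ys).predict(Zt)
```

Note that CCA allows the two views to have different feature dimensions, which is what makes this baseline applicable to heterogeneous domains.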


International Conference on Image Processing | 2011

Locality-constrained group sparse representation for robust face recognition

Yu-Wei Chao; Yi-Ren Yeh; Yu-Wen Chen; Yuh-Jye Lee; Yu-Chiang Frank Wang

This paper presents a novel sparse representation for robust face recognition. We advance both group sparsity and data locality in a unified optimization framework, which produces a locality- and group-sensitive sparse representation (LGSR) for improved recognition. Empirical results confirm that LGSR not only outperforms state-of-the-art sparse-coding-based image classification methods but is also robust to variations such as lighting, pose, and facial details (e.g., with or without glasses), which are typically seen in real-world face recognition problems.
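
A minimal sketch of combining group sparsity with data locality, assuming each group collects one class's atoms. This is a generic proximal-gradient solver for a locality-weighted group lasso, not the paper's LGSR formulation; lam, sigma, and the weighting scheme are illustrative assumptions.

```python
import numpy as np

def locality_group_sparse_code(x, D, groups, lam=0.1, sigma=1.0, n_iter=200):
    """Proximal-gradient sketch of a locality- and group-sensitive code:
    minimize 0.5*||x - Dc||^2 + lam * sum_g w_g ||c_g||_2, where group g
    holds one class's atoms and the weight w_g grows with the distance
    from x to that group, so distant classes are shrunk toward zero."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2           # 1/L for the smooth term
    gids = np.unique(groups)
    w = {g: np.exp(np.linalg.norm(D[:, groups == g] - x[:, None],
                                  axis=0).mean() / sigma)
         for g in gids}                              # locality weight per group
    c = np.zeros(D.shape[1])
    for _ in range(n_iter):
        c -= step * (D.T @ (D @ c - x))              # gradient step on fit term
        for g in gids:                               # group soft-thresholding
            idx = groups == g
            ng = np.linalg.norm(c[idx])
            c[idx] *= max(0.0, 1.0 - step * lam * w[g] / (ng + 1e-12))
    return c
```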


IEEE Transactions on Multimedia | 2012

A Novel Multiple Kernel Learning Framework for Heterogeneous Feature Fusion and Variable Selection

Yi-Ren Yeh; Ting-Chu Lin; Yung-Yu Chung; Yu-Chiang Frank Wang

We propose a novel multiple kernel learning (MKL) algorithm with a group lasso regularizer, called group lasso regularized MKL (GL-MKL), for heterogeneous feature fusion and variable selection. For feature fusion, assigning a group of base kernels to each feature type in an MKL framework provides a robust way to fit data extracted from different feature domains. By adding a mixed-norm constraint (i.e., group lasso) as the regularizer, we enforce sparsity at the group/feature level and automatically learn a compact feature set for recognition. More precisely, our GL-MKL determines the optimal base kernels, including their associated weights and kernel parameters, and thereby improves recognition performance. GL-MKL can also be extended to heterogeneous variable selection problems, in which we aim to select a compact set of variables (i.e., feature attributes) for comparable or improved performance. Unlike prior sequential variable selection methods, our method does not need to exhaustively search the variable space, nor does it require any prior knowledge of the optimal size of the variable subset. To verify the effectiveness and robustness of GL-MKL, we conduct experiments on video and image datasets for heterogeneous feature fusion, and perform variable selection on various UCI datasets.
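
For intuition, the sketch below implements a classical alternating MKL scheme (in the style of SimpleMKL) with an L1-normalized kernel-weight update, which exhibits the sparsity-at-the-kernel-level effect that GL-MKL enforces at the group/feature level. It assumes binary labels and a list Ks of precomputed positive semi-definite kernel matrices, and it is not the paper's GL-MKL optimization.

```python
import numpy as np
from sklearn.svm import SVC

def mkl_sketch(Ks, y, C=1.0, n_iter=10):
    """Alternating MKL sketch for binary labels y: fix kernel weights beta,
    train an SVM on the combined kernel, then set beta_m proportional to
    ||w_m|| (with ||w_m||^2 = beta_m^2 * a^T K_m a), which drives
    uninformative base kernels toward zero weight."""
    beta = np.ones(len(Ks)) / len(Ks)
    for _ in range(n_iter):
        K = sum(b * Km for b, Km in zip(beta, Ks))
        svm = SVC(kernel="precomputed", C=C).fit(K, y)
        a = np.zeros(len(y))
        a[svm.support_] = svm.dual_coef_.ravel()     # signed dual variables
        wnorm = np.array([b * np.sqrt(max(a @ Km @ a, 0.0))
                          for b, Km in zip(beta, Ks)])
        beta = wnorm / (wnorm.sum() + 1e-12)         # L1 (simplex) update
    return svm, beta
```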


IEEE Transactions on Knowledge and Data Engineering | 2009

Nonlinear Dimension Reduction with Kernel Sliced Inverse Regression

Yi-Ren Yeh; Su-Yun Huang; Yuh-Jye Lee

Sliced inverse regression (SIR) is a renowned dimension reduction method for finding an effective low-dimensional linear subspace. Like many other linear methods, SIR can be extended to a nonlinear setting via the "kernel trick." The purpose of this paper is two-fold. First, we build kernel SIR in a reproducing kernel Hilbert space rigorously, for a more intuitive model explanation and theoretical development. Second, we focus on the implementation of kernel SIR for fast computation and numerical stability: we adopt a low-rank approximation of the huge, dense full kernel covariance matrix and a reduced singular value decomposition for extracting kernel SIR directions. We also explore kernel SIR's ability to combine with other linear learning algorithms for classification and regression, including multi-response regression. Numerical experiments show that kernel SIR is an effective kernel tool for nonlinear dimension reduction and that it readily combines with other linear algorithms to form a powerful toolkit for nonlinear data analysis.
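
A small sketch of the kernel SIR recipe, treating centered kernel rows as features and solving a regularized generalized eigenproblem. The paper's actual implementation uses a low-rank kernel approximation and a reduced SVD instead of the naive O(n^3) route shown here, and gamma, reg, and the slicing scheme are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.metrics.pairwise import rbf_kernel

def kernel_sir(X, y, n_slices=10, n_dirs=2, gamma=1.0, reg=1e-3):
    """Kernel SIR sketch: slice the sorted response, form the between-slice
    covariance of centered kernel features, and solve it against a
    regularized total covariance as a generalized eigenproblem."""
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    Z = H @ rbf_kernel(X, X, gamma=gamma) @ H      # centered kernel features
    M = np.zeros((n, n))                           # between-slice covariance
    for s in np.array_split(np.argsort(y), n_slices):
        m = Z[s].mean(axis=0)
        M += (len(s) / n) * np.outer(m, m)
    C = Z.T @ Z / n + reg * np.eye(n)              # regularized covariance
    _, vecs = eigh(M, C)                           # generalized eigenproblem
    return Z @ vecs[:, -n_dirs:]                   # top e.d.r. projections
```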


Neural Computation | 2009

Robust kernel principal component analysis

Su-Yun Huang; Yi-Ren Yeh; Shinto Eguchi

This letter discusses the robustness of kernel principal component analysis. We propose a class of robust procedures based on the eigenvalue decomposition of a weighted covariance. The proposed procedures place less weight on deviant patterns and are thus more resistant to data contamination and model deviation. We derive theoretical influence functions and present numerical examples. Both theoretical and numerical results indicate that the proposed robust method outperforms the conventional approach in being less sensitive to outliers. Our robust method and results also apply to functional principal component analysis.
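
The weighted-covariance idea can be sketched with a simple iteratively reweighted PCA: samples with large reconstruction error get smaller weights, so outliers pull the principal directions less. The paper performs this in the kernel-induced feature space with theoretically derived weights; the input-space version and the Gaussian weighting below are simplifying assumptions.

```python
import numpy as np

def robust_weighted_pca(X, n_components=2, n_iter=10, sigma=1.0):
    """Alternate between (i) eigendecomposing a weighted covariance and
    (ii) down-weighting samples with large reconstruction error, so that
    deviant patterns influence the principal directions less."""
    n = X.shape[0]
    w = np.ones(n) / n
    for _ in range(n_iter):
        mu = w @ X / w.sum()                         # weighted mean
        Xc = X - mu
        C = (Xc * w[:, None]).T @ Xc / w.sum()       # weighted covariance
        _, vecs = np.linalg.eigh(C)
        V = vecs[:, -n_components:]                  # top principal directions
        resid = np.linalg.norm(Xc - (Xc @ V) @ V.T, axis=1)
        w = np.exp(-resid ** 2 / (2 * sigma ** 2))   # deviant samples weigh less
    return mu, V, w
```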


International Conference on Computer Vision | 2012

Recognizing actions across cameras by exploring the correlated subspace

Chun-Hao Huang; Yi-Ren Yeh; Yu-Chiang Frank Wang

We present a novel transfer learning approach to cross-camera action recognition. Inspired by canonical correlation analysis (CCA), we first extract spatio-temporal visual words from videos captured at different views and derive a correlation subspace as a joint representation for the different bag-of-words models at each view. Unlike prior CCA-based approaches, which simply train standard classifiers such as SVMs in the resulting subspace, we exploit the domain transfer ability of CCA in the correlation subspace, in which each dimension has a different capability for correlating source and target data. We propose a novel SVM with a correlation regularizer that incorporates this ability into the classifier design. Experiments on the IXMAS dataset verify the effectiveness of our method, which outperforms state-of-the-art transfer learning approaches that do not take such domain transfer ability into consideration.
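
The paper folds the per-dimension correlation into the SVM's regularizer; a crude but runnable stand-in is to rescale each CCA dimension by its canonical correlation before training a standard SVM, as sketched below. Zs/Zt denote projected source/target features, and all names are hypothetical.

```python
import numpy as np
from sklearn.svm import LinearSVC

def per_dim_correlations(Zx, Zy):
    """Pearson correlation between paired projections, one value per CCA
    dimension: how strongly that dimension links the two views."""
    Zx = (Zx - Zx.mean(0)) / Zx.std(0)
    Zy = (Zy - Zy.mean(0)) / Zy.std(0)
    return (Zx * Zy).mean(0)

def correlation_weighted_predict(Zs, ys, Zt, rho):
    """Stand-in for the correlation regularizer: rescale the k-th CCA
    dimension by rho[k], so dimensions that correlate the two views
    strongly carry more weight in the SVM."""
    return LinearSVC().fit(Zs * rho, ys).predict(Zt * rho)
```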


Computer Vision and Pattern Recognition | 2016

Learning Cross-Domain Landmarks for Heterogeneous Domain Adaptation

Yao-Hung Hubert Tsai; Yi-Ren Yeh; Yu-Chiang Frank Wang

While domain adaptation (DA) aims to associate learning tasks across data domains, heterogeneous domain adaptation (HDA) particularly deals with learning from cross-domain data whose features are of different types. In other words, for HDA, data from the source and target domains are observed in separate feature spaces and thus exhibit distinct distributions. In this paper, we propose a novel learning algorithm, Cross-Domain Landmark Selection (CDLS), for solving this task. With the goal of deriving a domain-invariant feature subspace for HDA, CDLS identifies representative cross-domain data, including unlabeled data in the target domain, for performing adaptation, and determines the adaptation capability of each such cross-domain landmark accordingly. As a result, CDLS achieves promising performance compared to state-of-the-art HDA methods. We conduct classification experiments using data across different features, domains, and modalities, which successfully verify the effectiveness of our proposed method.


Pattern Recognition | 2013

A rank-one update method for least squares linear discriminant analysis with concept drift

Yi-Ren Yeh; Yu-Chiang Frank Wang

Linear discriminant analysis (LDA) is a popular supervised dimension reduction algorithm that projects data into a low-dimensional linear subspace in which the separation between projected data from different classes is improved. This subspace is typically determined by solving a generalized eigenvalue decomposition problem, whose high computational cost prohibits the use of LDA when the scale and dimensionality of the data are large. Building on the recent success of least squares LDA (LSLDA), we propose a novel rank-one update method with a simplified class indicator matrix, which allows the LSLDA model to be derived efficiently. Moreover, our LSLDA model can be extended to the learning task of concept drift, in which recently received data exhibit gradual or abrupt changes in distribution. In other words, our LSLDA observes and models changes in the data distribution while suppressing the dependency on outdated data. The proposed LSLDA benefits applications in streaming data classification and mining, and it can recognize data with newly added class labels during the learning process. Experimental results on both synthetic and real datasets (with and without concept drift) confirm the effectiveness of our proposed LSLDA.
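
Casting LDA as least squares regression onto a class-indicator target makes the rank-one update concrete: with A = X^T X and B = X^T Y, the model is W = A^{-1} B, and the Sherman-Morrison identity updates A^{-1} per sample in O(d^2) instead of refitting. The sketch below adds a simple forgetting factor as one way to decay outdated data; the paper's simplified indicator matrix and its drift handling differ in the details.

```python
import numpy as np

class StreamingLSLDA:
    """Least-squares LDA with rank-one (Sherman-Morrison) updates and an
    optional forgetting factor < 1 that decays old sufficient statistics,
    a simple device for tracking concept drift."""
    def __init__(self, n_features, n_classes, reg=1e-3, forget=1.0):
        self.A_inv = np.eye(n_features) / reg      # (X^T X + reg*I)^{-1}
        self.B = np.zeros((n_features, n_classes)) # X^T Y
        self.forget = forget

    def partial_fit(self, x, y):
        """Update with one sample x (shape d,) of integer class y."""
        self.A_inv /= self.forget                  # decay old statistics
        self.B *= self.forget
        Ax = self.A_inv @ x
        # Sherman-Morrison: (A + x x^T)^{-1} = A^{-1} - A^{-1}xx^T A^{-1}/(1 + x^T A^{-1} x)
        self.A_inv -= np.outer(Ax, Ax) / (1.0 + x @ Ax)
        e = np.zeros(self.B.shape[1])
        e[y] = 1.0                                 # class-indicator target
        self.B += np.outer(x, e)

    def predict(self, X):
        """Score with W = A^{-1} B and take the per-row argmax."""
        return (X @ (self.A_inv @ self.B)).argmax(axis=1)
```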


International Conference on Computer Vision | 2015

Unsupervised Domain Adaptation with Imbalanced Cross-Domain Data

Tzu-Ming Harry Hsu; Wei-Yu Chen; Cheng-An Hou; Yao-Hung Hubert Tsai; Yi-Ren Yeh; Yu-Chiang Frank Wang

We address a challenging unsupervised domain adaptation problem with imbalanced cross-domain data. In standard unsupervised domain adaptation, one typically has labeled data in the source domain and observes only unlabeled data in the target domain. However, most existing works do not consider scenarios in which the numbers of labels across domains differ, or in which the data in the source and/or target domains are collected from multiple datasets. To address these settings of imbalanced cross-domain data, we propose Closest Common Space Learning (CCSL), which associates such data while preserving label and structural information within and across domains. Experiments on multiple cross-domain visual classification tasks confirm that our method performs favorably against state-of-the-art approaches, especially when imbalanced cross-domain data are present.

Collaboration


Dive into Yi-Ren Yeh's collaborations.

Top Co-Authors

Yuh-Jye Lee
National Taiwan University of Science and Technology

Cheng-An Hou
Carnegie Mellon University

Chun-Hao Huang
National Cheng Kung University

Hsing-Kuo Pao
National Taiwan University of Science and Technology

Shi-Yen Tao
National Taiwan University

Wei-Yu Chen
National Taiwan University

Yu-Wei Chao
University of Michigan