
Publication


Featured research published by Kuang-Yu Chang.


Computer Vision and Pattern Recognition | 2011

Ordinal hyperplanes ranker with cost sensitivities for age estimation

Kuang-Yu Chang; Chu-Song Chen; Yi-Ping Hung

In this paper, we propose an ordinal hyperplane ranking algorithm called OHRank, which estimates human ages via facial images. The design of the algorithm is based on the relative order information among the age labels in a database. Each ordinal hyperplane separates all the facial images into two groups according to the relative order, and a cost-sensitive property is exploited to find better hyperplanes based on the classification costs. Human ages are inferred by aggregating a set of preferences from the ordinal hyperplanes with their cost sensitivities. Our experimental results demonstrate that the proposed approach outperforms conventional multiclass-based and regression-based approaches as well as recently developed ranking-based age estimation approaches.
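The aggregation step described in the abstract can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: each of the K-1 ordinal hyperplanes answers the binary question "is the age greater than rank k?", and the rank is inferred by counting positive answers. The paper's cost sensitivities enter during classifier training, which is omitted here.

```python
def aggregate_rank(decisions):
    """Fuse K-1 ordinal binary decisions into a rank estimate.

    decisions[k] is the signed output of the classifier answering
    "is the label greater than rank k+1?"; the predicted rank is
    one plus the number of positive answers.
    """
    return 1 + sum(1 for d in decisions if d > 0)
```

For example, signed outputs (+0.7, +0.2, -0.4, -0.9) from four hyperplanes yield rank 3.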


International Conference on Pattern Recognition | 2010

A Ranking Approach for Human Ages Estimation Based on Face Images

Kuang-Yu Chang; Chu-Song Chen; Yi-Ping Hung

In our daily life, it is much easier to tell which of two persons is older than to say exactly how old a person is. When inferring a person's age, we may compare his or her face with many people whose ages are known, obtaining a series of comparative results, and then conjecture the age based on these comparisons. This process involves numerous pairwise preferences obtained by a series of queries, where each query compares the target person's face to the faces in a database. In this paper, we propose a ranking-based framework consisting of a set of binary queries. Each query collects a binary-classification-based comparison result. All the query results are then fused to predict the age. Experimental results show that our approach performs better than traditional multi-class-based and regression-based approaches for age estimation.


IEEE Transactions on Geoscience and Remote Sensing | 2007

Feature Extractions for Small Sample Size Classification Problem

Bor-Chen Kuo; Kuang-Yu Chang

Much research has shown that the definitions of the within-class and between-class scatter matrices and the regularization technique are the key components in designing a feature extraction method for small sample size problems. In this paper, we illustrate the importance of another key component, the eigenvalue decomposition method, and propose a new regularization technique. In a hyperspectral image experiment, the effects of these three components of feature extraction are explored under ill-posed and poorly posed conditions. The experimental results show that different regularization methods need to cooperate with different eigenvalue decomposition methods to reach the best performance, that the proposed regularization method, regularized feature extraction (RFE), outperforms the others, and that the best feature extraction for a small sample size classification problem is RFE with nonparametric weighted scatter matrices.


IEEE Transactions on Image Processing | 2015

A Learning Framework for Age Rank Estimation Based on Face Images With Scattering Transform

Kuang-Yu Chang; Chu-Song Chen

This paper presents a cost-sensitive ordinal hyperplanes ranking algorithm for human age estimation based on face images. The proposed approach exploits relative-order information among the age labels for rank prediction. In our approach, the age rank is obtained by aggregating a series of binary classification results, where cost sensitivities among the labels are introduced to improve the aggregating performance. In addition, we give a theoretical analysis on designing the cost of each individual binary classifier so that the misranking cost is bounded by the total misclassification cost. An efficient descriptor, the scattering transform, which scatters Gabor coefficients and pools them with Gaussian smoothing in multiple layers, is evaluated for facial feature extraction. We show that this descriptor is a generalization of conventional bioinspired features and is more effective for face-based age inference. Experimental results demonstrate that our method outperforms the state-of-the-art age estimation approaches.


International Conference on Computer Vision | 2009

Multi-class multi-instance boosting for part-based human detection

Yu-Ting Chen; Chu-Song Chen; Yi-Ping Hung; Kuang-Yu Chang

With the purpose of designing a general learning framework for detecting human parts, we formulate this task as a classification problem over non-aligned training examples of multiple classes. We propose a new multi-class multi-instance boosting method, named MCMIBoost, for effective human parts detection in static images. MCMIBoost has two benefits. First, training examples are represented as a set of non-aligned instances, so that the alignment problem caused by human appearance variation can be handled. Second, instead of learning part detectors individually, MCMIBoost learns a unified detector for efficient detection, and uses the feature-sharing concept to design an efficient multi-class classifier. Experimental results on the MIT and INRIA datasets demonstrate the superior performance of the proposed method.


British Machine Vision Conference | 2015

Automatic Age Estimation from Face Images via Deep Ranking

Huei-Fang Yang; Bo-Yao Lin; Kuang-Yu Chang; Chu-Song Chen

This paper focuses on automatic age estimation (AAE) from face images, which amounts to determining the exact age or age group of a face image according to facial features. Although great effort has been devoted to AAE [1, 4, 6], it remains a challenging problem. The difficulties are due to large facial appearance variations resulting from a number of factors, e.g., aging and facial expressions. AAE algorithms need to overcome heterogeneity in facial appearance changes to provide accurate age estimates. To this end, we propose a generic, deep network model for AAE (see Figure 1). Given a face image, our network first extracts features from the face through a 3-layer scattering network (ScatNet) [2], then reduces the feature dimension by principal component analysis (PCA), and finally predicts the age via category-wise rankers constructed as a 3-layer fully-connected network. The contributions are: (1) our ranking method is point-wise and thus easily scales up to large-scale datasets; (2) our deep ranking model is general and can be applied to age estimation from faces with large facial appearance variations as a result of aging or facial expression changes; and (3) we show that the high-level concepts learned from large-scale neutral faces can be transferred to estimating ages from faces under expression changes, leading to improved performance. Our model has the following characteristics: (1) The scattering features are invariant to translation and small deformations. ScatNet is a deep convolutional network with specific characteristics. It uses predefined wavelets and computes scattering representations via a cascade of wavelet transforms and modulus pooling operators from shallow to deep layers. With the nonlinear modulus and averaging operators, ScatNet can produce representations that are discriminative as well as invariant to translation and small deformations.
As ScatNet provides fundamentally invariant representations for discriminative feature extraction, only the weights of the fully-connected layers are learned in our network model, which considerably reduces the training time. (2) The rank labels encoded in the network exploit the ordering relation among labels. Each category-wise ranker is an ordinal regression ranker. We encode the age rank based on the reduction framework [5]. Given a set of training samples X = {(x_i, y_i), i = 1, ..., N}, let x_i ∈ R^D be the input image and y_i ∈ {1, ..., K} its rank label, where K is the number of age ranks. For rank k, we separate X into two subsets, X_k^+ and X_k^-, as follows:

X_k^+ = {(x_i, +1) | y_i > k},  X_k^- = {(x_i, −1) | y_i ≤ k}.  (1)
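Equation (1) amounts to re-labeling the whole training set once per rank, producing one binary problem per rank threshold. A minimal sketch (hypothetical helper, not the authors' code):

```python
def split_for_rank(samples, k):
    """Build the two subsets of Eq. (1) for rank k.

    samples: list of (x, y) pairs with rank labels y in {1, ..., K}.
    Samples whose label exceeds k are relabeled +1, the rest -1,
    yielding the training set for the k-th binary ranker.
    """
    pos = [(x, +1) for x, y in samples if y > k]
    neg = [(x, -1) for x, y in samples if y <= k]
    return pos, neg
```

Training K-1 such binary classifiers and counting their positive answers recovers the rank.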


Systems, Man and Cybernetics | 2013

Intensity Rank Estimation of Facial Expressions Based on a Single Image

Kuang-Yu Chang; Chu-Song Chen; Yi-Ping Hung

In this paper, we propose a framework that estimates the discrete intensity rank of a facial expression based on a single image. For most people, judging whether an expression is more intense than another is easier than determining its real-valued intensity degree, and hence the relative order of two expressions is more distinguishable than the exact difference between them. We utilize the relative order to construct an image-based ranking approach for inferring the discrete ranks. The challenge for image-based approaches is to construct a representation for subtle expression changes. We employ an efficient descriptor, the scattering transform, which is translation invariant and can linearize deformations. This scattering representation recovers the lost high frequencies and retains discriminative power while remaining invariant. Our experimental results demonstrate that the proposed framework with the scattering transform outperforms the other compared feature descriptors and algorithms.


Circuits Systems and Signal Processing | 2014

Single-Pass K-SVD for Efficient Dictionary Learning

Kuang-Yu Chang; Cheng-Fu Lin; Chu-Song Chen; Yi-Ping Hung

Sparse representation has been widely used in machine learning, signal processing and communications. K-SVD, which generalizes k-means clustering, is one of the most famous algorithms for sparse representation and dictionary learning. K-SVD is an iterative method that alternates between encoding the data sparsely using the current dictionary and updating the dictionary based on the sparsely represented data. In this paper, we introduce a single-pass K-SVD method. In this method, the previous input data are first summarized as a condensed representation of weighted samples. We then develop a weighted K-SVD algorithm to learn a dictionary from the union of this representation and the newly input data. Experimental results show that our approach can approximate K-SVD's performance well while consuming considerably fewer storage resources.
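For orientation, the K-SVD alternation the abstract describes can be shown in its simplest special case: with 1-sparse codes it reduces to a k-means-like procedure where each atom is refit as the leading singular vector of its assigned samples. This is a toy sketch under that assumption, not the single-pass or weighted variants proposed in the paper, and all names are illustrative.

```python
import numpy as np

def ksvd_1sparse(Y, n_atoms, n_iter=20, seed=0):
    """Toy K-SVD with 1-sparse codes (the k-means-like special case).

    Y: (d, n) data matrix. Returns a dictionary D of unit-norm atoms,
    per-sample atom assignments, and the corresponding coefficients.
    """
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        # Sparse coding: each sample picks its best-matching atom.
        corr = D.T @ Y                        # (n_atoms, n) correlations
        assign = np.abs(corr).argmax(axis=0)
        # Dictionary update: refit each atom on its own samples via SVD.
        for j in range(n_atoms):
            idx = np.where(assign == j)[0]
            if idx.size == 0:
                continue
            U, _, _ = np.linalg.svd(Y[:, idx], full_matrices=False)
            D[:, j] = U[:, 0]
    # Final coding pass with the updated dictionary.
    corr = D.T @ Y
    assign = np.abs(corr).argmax(axis=0)
    coefs = corr[assign, np.arange(Y.shape[1])]
    return D, assign, coefs
```

Full K-SVD instead allows several atoms per sample (e.g. via orthogonal matching pursuit) and subtracts the other atoms' contributions before each rank-1 SVD update.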


Green Computing and Communications | 2014

Facial Expression Recognition via Discriminative Dictionary Learning

Kuang-Yu Chang; Chu-Song Chen

Dictionary learning has been applied to computer vision problems such as facial expression recognition. K-SVD is one of the state-of-the-art dictionary learning algorithms. However, K-SVD is unsupervised and focuses only on representational power. In this paper, we adopt label-consistent K-SVD with the scattering transform for facial expression recognition. In addition to reducing the reconstruction error, label-consistent K-SVD further combines the discriminative sparse-code error and the classification error in the optimization. Experimental results show that our approach can improve the performance of facial expression recognition when sparse coding is used.
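The three error terms the abstract mentions form a single weighted objective in label-consistent K-SVD. The sketch below evaluates them; the shapes and names follow the common formulation of label-consistent K-SVD and are assumptions for illustration, not code from the paper.

```python
import numpy as np

def lcksvd_objective(Y, D, X, Q, A, H, W, alpha, beta):
    """Evaluate the three label-consistent K-SVD error terms.

    Y: data matrix, D: dictionary, X: sparse codes,
    Q: ideal ("label-consistent") discriminative codes, A: linear map,
    H: one-hot label matrix, W: linear classifier;
    alpha, beta weight the two supervised penalties.
    """
    recon = np.linalg.norm(Y - D @ X) ** 2    # reconstruction error
    disc  = np.linalg.norm(Q - A @ X) ** 2    # discriminative sparse-code error
    clf   = np.linalg.norm(H - W @ X) ** 2    # classification error
    return recon + alpha * disc + beta * clf
```

Minimizing this sum jointly over D, A and W is what ties the learned codes to the expression labels rather than to reconstruction alone.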


International Geoscience and Remote Sensing Symposium | 2005

Exploring the effects of scatter matrices, eigenvalue decomposition methods, and regularization techniques in feature extractions for small sample size classification problem

Bor-Chen Kuo; Kuang-Yu Chang; Shu-Chuan Shih; Shih-Hsun Li

Much research shows that the definitions of the within-class and between-class scatter matrices, and the regularization techniques, are the key points in designing a feature extraction method for the small sample size classification problem. In this study, another key point, the eigenvalue decomposition method, is addressed, and the effects and performance of these three factors are explored.

Keywords: feature extraction; regularization; hyperspectral image classification; eigenvalue decomposition

Collaboration

Top Co-Authors

Yi-Ping Hung (National Taiwan University)
Cheng-Fu Lin (Center for Information Technology)
Li-Wei Ko (National Chiao Tung University)