Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Chu-Song Chen is active.

Publication


Featured research published by Chu-Song Chen.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1999

RANSAC-based DARCES: a new approach to fast automatic registration of partially overlapping range images

Chu-Song Chen; Yi-Ping Hung; Jen-Bo Cheng

In this paper, we propose a new method, the RANSAC-based DARCES method (data-aligned rigidity-constrained exhaustive search based on random sample consensus), which can solve the partially overlapping 3D registration problem without any initial estimate. For the noiseless case, the basic algorithm of our method is guaranteed to find the true solution, and its time complexity can be shown to be relatively low. A further characteristic is that our method can be used even when there are no local features in the 3D data sets.
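
Where the abstract stays at a high level, the loop below gives a rough feel for a RANSAC-style rigid alignment: hypothesize a three-point correspondence, estimate a rigid transform, and keep the hypothesis that aligns the most points. This is a minimal NumPy sketch, not the authors' DARCES search; in particular, the rigidity-constrained selection of the secondary and auxiliary control points is replaced here by plain random sampling, and all names and thresholds are illustrative.

```python
import numpy as np

def rigid_transform(A, B):
    """Least-squares rigid transform (Kabsch/SVD) mapping point set A onto B."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def ransac_register(src, dst, iters=2000, tol=0.01, seed=0):
    """Search for a rigid transform aligning random 3-point samples of src to dst."""
    rng = np.random.default_rng(seed)
    best = (np.eye(3), np.zeros(3), -1)
    for _ in range(iters):
        i = rng.choice(len(src), 3, replace=False)   # control points in src
        j = rng.choice(len(dst), 3, replace=False)   # hypothesized mates in dst
        R, t = rigid_transform(src[i], dst[j])
        moved = src @ R.T + t
        # count source points that land near some destination point
        d = np.min(np.linalg.norm(moved[:, None, :] - dst[None, :, :], axis=2), axis=1)
        inliers = int((d < tol).sum())
        if inliers > best[2]:
            best = (R, t, inliers)
    return best
```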


Computer Vision and Pattern Recognition | 2011

Ordinal hyperplanes ranker with cost sensitivities for age estimation

Kuang-Yu Chang; Chu-Song Chen; Yi-Ping Hung

In this paper, we propose an ordinal hyperplane ranking algorithm called OHRank, which estimates human ages via facial images. The design of the algorithm is based on the relative order information among the age labels in a database. Each ordinal hyperplane separates all the facial images into two groups according to the relative order, and a cost-sensitive property is exploited to find better hyperplanes based on the classification costs. Human ages are inferred by aggregating a set of preferences from the ordinal hyperplanes with their cost sensitivities. Our experimental results demonstrate that the proposed approach outperforms conventional multiclass-based and regression-based approaches as well as recently developed ranking-based age estimation approaches.
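
As a rough illustration of the ordinal-hyperplane idea, the sketch below trains one binary "older than k?" classifier per age threshold and infers the age by aggregating their votes. It is not the authors' OHRank implementation: the cost weighting (1 + |y - k|) is an assumed stand-in for the paper's cost sensitivities, scikit-learn's LinearSVC plays the role of the hyperplanes, and the training set is assumed to contain consecutive integer ages between min_age and max_age.

```python
import numpy as np
from sklearn.svm import LinearSVC

def fit_ordinal_rankers(X, y, min_age, max_age):
    """One binary 'older than k?' hyperplane per age threshold k."""
    rankers = []
    for k in range(min_age, max_age):
        target = (y > k).astype(int)        # ordinal subproblem at threshold k
        cost = 1.0 + np.abs(y - k)          # assumed cost: heavier far from the threshold
        rankers.append(LinearSVC().fit(X, target, sample_weight=cost))
    return rankers

def predict_age(rankers, X, min_age):
    """Aggregate the binary preferences into an age estimate."""
    votes = np.stack([clf.predict(X) for clf in rankers], axis=1)
    return min_age + votes.sum(axis=1)
```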


Computer Vision and Pattern Recognition | 2015

Deep learning of binary hash codes for fast image retrieval

Kevin Lin; Huei-Fang Yang; Jen-Hao Hsiao; Chu-Song Chen

Approximate nearest neighbor search is an efficient strategy for large-scale image retrieval. Encouraged by the recent advances in convolutional neural networks (CNNs), we propose an effective deep learning framework to generate binary hash codes for fast image retrieval. Our idea is that when the data labels are available, binary codes can be learned by employing a hidden layer whose activations represent the latent concepts that dominate the class labels. Using a CNN also allows the image representations to be learned. Unlike other supervised methods that require pairwise inputs for binary code learning, our method learns hash codes and image representations in a pointwise manner, making it suitable for large-scale datasets. Experimental results show that our method outperforms several state-of-the-art hashing algorithms on the CIFAR-10 and MNIST datasets. We further demonstrate its scalability and efficacy on a large-scale dataset of 1 million clothing images.
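
The latent-layer idea can be pictured with a small PyTorch sketch: a hidden sigmoid layer is inserted between a CNN backbone and the classifier, the network is trained with ordinary class labels, and the latent activations are thresholded at 0.5 to obtain binary codes. The backbone (resnet18), code length, and class count below are placeholders rather than the architecture used in the paper.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class DeepHashNet(nn.Module):
    def __init__(self, num_classes, code_bits=48):
        super().__init__()
        backbone = models.resnet18(weights=None)   # stand-in backbone (torchvision >= 0.13 API)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()
        self.backbone = backbone
        self.latent = nn.Sequential(nn.Linear(feat_dim, code_bits), nn.Sigmoid())
        self.classifier = nn.Linear(code_bits, num_classes)

    def forward(self, x):
        h = self.latent(self.backbone(x))          # latent layer for hash-like codes
        return self.classifier(h), h               # trained with plain cross-entropy

    @torch.no_grad()
    def hash_codes(self, x):
        _, h = self.forward(x)
        return (h > 0.5).to(torch.uint8)           # binarize activations for retrieval
```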


IEEE Transactions on Fuzzy Systems | 2012

Multiple Kernel Fuzzy Clustering

Hsin-Chien Huang; Yung-Yu Chuang; Chu-Song Chen

While fuzzy c-means is a popular soft-clustering method, its effectiveness is largely limited to spherical clusters. By applying kernel tricks, the kernel fuzzy c-means algorithm attempts to address this problem by mapping data with nonlinear relationships to appropriate feature spaces. Kernel combination, or selection, is crucial for effective kernel clustering. Unfortunately, for most applications, it is not easy to find the right combination. We propose a multiple kernel fuzzy c-means (MKFC) algorithm that extends the fuzzy c-means algorithm to a multiple-kernel-learning setting. By incorporating multiple kernels and automatically adjusting the kernel weights, MKFC is more immune to ineffective kernels and irrelevant features. This makes the choice of kernels less crucial. In addition, we show multiple kernel k-means to be a special case of MKFC. Experiments on both synthetic and real-world data demonstrate the effectiveness of the proposed MKFC algorithm.
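
To make the multiple-kernel setting concrete, the sketch below runs kernel fuzzy c-means on a fixed convex combination of precomputed Gram matrices. It deliberately omits MKFC's automatic kernel-weight update, which is the paper's main contribution, so it should be read as a simplified illustration only.

```python
import numpy as np

def combined_kernel(kernels, weights):
    """Convex combination K = sum_k w_k * K_k of precomputed Gram matrices."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * K for wi, K in zip(w, kernels))

def kernel_fcm(K, n_clusters, m=2.0, iters=50, seed=0):
    """Kernel fuzzy c-means on Gram matrix K; returns memberships of shape (C, n)."""
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    U = rng.dirichlet(np.ones(n_clusters), size=n).T        # random initial memberships
    for _ in range(iters):
        Um = U ** m
        dist = np.empty((n_clusters, n))
        for c in range(n_clusters):
            w = Um[c] / Um[c].sum()
            # squared distance to the (implicit) cluster center in feature space
            dist[c] = np.diag(K) - 2.0 * (K @ w) + w @ K @ w
        dist = np.maximum(dist, 1e-12)
        inv = dist ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(axis=0, keepdims=True)             # standard fuzzy update
    return U
```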


Pattern Recognition | 2007

Efficient hierarchical method for background subtraction

Yu-Ting Chen; Chu-Song Chen; Chun-Rong Huang; Yi-Ping Hung

Detecting moving objects by using an adaptive background model is a critical component of many vision-based applications. Most background models are maintained in pixel-based form, while some approaches have begun to study block-based representations, which are more robust to non-stationary backgrounds. In this paper, we propose a method that combines pixel-based and block-based approaches into a single framework. We show that an efficient hierarchical background model can be built by exploiting the fact that these two approaches are complementary to each other. In addition, a novel descriptor is proposed for block-based background modeling at the coarse level of the hierarchy. Quantitative evaluations show that the proposed hierarchical method provides better results than existing single-level approaches.
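
A coarse-to-fine background check of this kind can be sketched as follows: a block-level model screens blocks first, and only blocks flagged as changed are examined pixel by pixel. The paper's block descriptor and background models are replaced here by plain running means over grayscale frames, and all thresholds are illustrative.

```python
import numpy as np

class HierarchicalBG:
    """Coarse block-level screening followed by pixel-level checks (grayscale frames)."""
    def __init__(self, shape, block=16, alpha=0.02, block_thr=15.0, pix_thr=25.0):
        self.block, self.alpha = block, alpha
        self.block_thr, self.pix_thr = block_thr, pix_thr
        self.bg_pix = np.zeros(shape, np.float32)                         # pixel-level model
        self.bg_blk = np.zeros((shape[0] // block, shape[1] // block), np.float32)

    def apply(self, frame):
        frame = frame.astype(np.float32)
        b, (hb, wb) = self.block, self.bg_blk.shape
        blk = frame[:hb * b, :wb * b].reshape(hb, b, wb, b).mean(axis=(1, 3))
        changed = np.abs(blk - self.bg_blk) > self.block_thr              # coarse level
        mask = np.zeros(frame.shape, bool)
        for i, j in zip(*np.nonzero(changed)):                            # fine level only where needed
            patch = frame[i * b:(i + 1) * b, j * b:(j + 1) * b]
            bgp = self.bg_pix[i * b:(i + 1) * b, j * b:(j + 1) * b]
            mask[i * b:(i + 1) * b, j * b:(j + 1) * b] = np.abs(patch - bgp) > self.pix_thr
        # blend the new frame into both models (they start at zero, so the first
        # frames mostly serve to initialize them)
        self.bg_pix += self.alpha * (frame - self.bg_pix)
        self.bg_blk += self.alpha * (blk - self.bg_blk)
        return mask
```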


Image and Vision Computing | 1997

Range data acquisition using color structured lighting and stereo vision

Chu-Song Chen; Yi-Ping Hung; Chiann-Chu Chiang; Ja-Ling Wu

This paper presents a new color-lighting/stereo method for 3D range data acquisition that combines color structured lighting and stereo vision. A major advantage of using stereo vision together with color-stripe lighting is that there is no need to find the correspondence between the color stripes projected by the light source and the color stripes observed in the images. That is, the more difficult problem of finding the correct color-stripe correspondence between the light source and the image is replaced by an easier image-to-image stereo correspondence problem, which is also easier than traditional stereo correspondence because a good color pattern has been projected onto the object. Another advantage of using stereo vision is that there is no need to calibrate the position and orientation of each projected light stripe in 3D space. In this work, a pattern of color stripes is projected onto the objects when taking images; edge segments are then extracted from the acquired stereo image pair and used to find the correct stereo correspondence. A systematic procedure is proposed for generating good color-stripe patterns. To find the correct stereo correspondence, a global search method based on intra-scanline dynamic programming is adopted. A winner-take-all scheme using edge-based inter-scanline consistency is then proposed to refine the results obtained from the intra-scanline search. Experimental results show that the proposed method can successfully generate a dense range map from only one pair of stereo images.
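
The intra-scanline search can be illustrated with a small dynamic-programming alignment of the ordered edge features on one scanline; the color-stripe pattern generation and the inter-scanline winner-take-all refinement are not shown. The feature vectors and the occlusion cost below are placeholders.

```python
import numpy as np

def scanline_dp(left_feats, right_feats, occlusion_cost=1.0):
    """Align two ordered arrays of per-edge feature vectors on one scanline."""
    n, m = len(left_feats), len(right_feats)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, :] = np.arange(m + 1) * occlusion_cost
    D[:, 0] = np.arange(n + 1) * occlusion_cost
    back = np.zeros((n + 1, m + 1), dtype=int)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = D[i - 1, j - 1] + np.linalg.norm(left_feats[i - 1] - right_feats[j - 1])
            skip_l = D[i - 1, j] + occlusion_cost      # edge visible only in the left view
            skip_r = D[i, j - 1] + occlusion_cost      # edge visible only in the right view
            choices = (match, skip_l, skip_r)
            back[i, j] = int(np.argmin(choices))
            D[i, j] = choices[back[i, j]]
    # backtrack the matched pairs (i-1, j-1) wherever a "match" move was taken
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        if back[i, j] == 0:
            pairs.append((i - 1, j - 1)); i, j = i - 1, j - 1
        elif back[i, j] == 1:
            i -= 1
        else:
            j -= 1
    return pairs[::-1], D[n, m]
```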


IEEE Transactions on Image Processing | 2008

Fast Human Detection Using a Novel Boosted Cascading Structure With Meta Stages

Yu-Ting Chen; Chu-Song Chen

We propose a method that can detect humans in a single image based on a novel cascaded structure. In our approach, both intensity-based rectangle features and gradient-based 1-D features are employed in the feature pool for weak-learner selection. The Real AdaBoost algorithm is used to select critical features from the combined feature set and to learn the stage classifiers of the cascade from the training images. Instead of using the standard boosted cascade, the proposed method employs a novel cascaded structure that exploits both the stage-wise classification information and the inter-stage cross-reference information. We introduce meta stages to enhance the detection performance of a boosted cascade. Experimental results show that the proposed approach achieves high detection accuracy and efficiency.
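
A schematic of how such a cascade might be evaluated is sketched below: ordinary stages score the image window, while meta stages score the vector of confidences produced by earlier stages, and any stage can reject early. The stage objects and their score interface are hypothetical; this only mirrors the control flow described above, not the authors' detector.

```python
import numpy as np

def cascade_detect(window, stages, thresholds):
    """Return True if `window` passes every stage of the cascade; reject early otherwise."""
    history = []                                       # confidences from earlier stages
    for stage, thr in zip(stages, thresholds):
        if getattr(stage, "is_meta", False):
            score = stage.score(np.asarray(history))   # meta stage: cross-stage information
        else:
            score = stage.score(window)                # ordinary boosted stage
        history.append(score)
        if score < thr:
            return False                               # early rejection
    return True
```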


European Conference on Computer Vision | 2014

Cross-Age Reference Coding for Age-Invariant Face Recognition and Retrieval

Bor-Chun Chen; Chu-Song Chen; Winston H. Hsu

Recently, promising results have been reported in face recognition research. However, face recognition and retrieval across age is still challenging. Unlike prior methods that use complex models with strong parametric assumptions to model the aging process, we use a data-driven method to address this problem. We propose a novel coding framework called Cross-Age Reference Coding (CARC). By leveraging a large-scale image dataset freely available on the Internet as a reference set, CARC is able to encode the low-level feature of a face image with an age-invariant reference space. In the testing phase, the proposed method only requires a linear projection to encode the feature and is therefore highly scalable. To thoroughly evaluate our work, we introduce a new large-scale dataset for face recognition and retrieval across age called the Cross-Age Celebrity Dataset (CACD). The dataset contains more than 160,000 images of 2,000 celebrities with ages ranging from 16 to 62. To the best of our knowledge, it is by far the largest publicly available cross-age face dataset. Experimental results show that the proposed method achieves state-of-the-art performance on both our dataset and MORPH, another widely used dataset for face recognition across age.
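
The reference-coding step can be pictured with the toy encoder below: a face feature is coded against each age group's reference features and the codes are averaged over age groups. The ridge-regularized least-squares coding is an assumed stand-in for CARC's learned reference space, and it presumes the same number of reference people in every age group.

```python
import numpy as np

def carc_like_encode(x, reference_sets, lam=1.0):
    """x: (d,) face feature; reference_sets: one (n_refs, d) array per age group,
    with the same n_refs reference people in every group."""
    codes = []
    for R in reference_sets:
        # ridge-regularized coding of x against this age group's reference features
        A = R @ R.T + lam * np.eye(R.shape[0])
        codes.append(np.linalg.solve(A, R @ x))
    return np.mean(codes, axis=0)   # aggregate the codes across age groups
```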


Computer Vision and Pattern Recognition | 2009

Moving cast shadow detection using physics-based features

Jia-Bin Huang; Chu-Song Chen

Cast shadows induced by moving objects often cause serious problems for many vision applications. We present in this paper an online statistical learning approach to model the background appearance variations under cast shadows. Based on the bi-illuminant (i.e., direct light sources and ambient illumination) dichromatic reflection model, we derive physics-based color features under the assumptions of constant ambient illumination and light sources with common spectral power distributions. We first use one Gaussian mixture model (GMM) to learn the color features, which are constant regardless of the background surfaces or illuminant colors in a scene. Then, we build one pixel-based GMM for each pixel to learn the local shadow features. To overcome the slow convergence rate of conventional GMM learning, we update the pixel-based GMMs through confidence-rated learning. The proposed method can rapidly learn model parameters in an unsupervised way and adapt to illumination conditions or environment changes. Furthermore, we demonstrate that our method is robust to scenes with few foreground activities and to videos captured at low or unsteady frame rates.
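
The confidence-rated update can be illustrated with a small online GMM for a single pixel: the effective learning rate is large while a component has accumulated little evidence and decays toward a base rate, which is what lets the model converge faster than a fixed-rate update. The matching rule, feature dimension, and rates below are illustrative, and the physics-based color features themselves are not reproduced here.

```python
import numpy as np

class OnlineGMM:
    """Per-pixel mixture with a confidence-rated (count-based) learning rate."""
    def __init__(self, k=3, dim=2, base_lr=0.05):
        self.w = np.full(k, 1.0 / k)          # component weights
        self.mu = np.zeros((k, dim))          # component means
        self.var = np.ones((k, dim))          # diagonal variances
        self.counts = np.ones(k)              # evidence accumulated per component
        self.base_lr = base_lr

    def update(self, x):
        x = np.asarray(x, dtype=float)
        # match the closest component under a diagonal Mahalanobis-like distance
        d2 = np.sum((x - self.mu) ** 2 / self.var, axis=1)
        j = int(np.argmin(d2))
        self.counts[j] += 1
        lr = max(self.base_lr, 1.0 / self.counts[j])   # large while evidence is small
        self.w *= (1.0 - self.base_lr)
        self.w[j] += self.base_lr
        self.mu[j] += lr * (x - self.mu[j])
        self.var[j] += lr * ((x - self.mu[j]) ** 2 - self.var[j])
        return j, float(d2[j])
```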


Computer Vision and Pattern Recognition | 2008

An adaptive learning method for target tracking across multiple cameras

Kuan-Wen Chen; Chih-Chuan Lai; Yi-Ping Hung; Chu-Song Chen

This paper proposes an adaptive learning method for tracking targets across multiple cameras with disjoint views. Two visual cues are usually employed for tracking targets across cameras: the spatio-temporal cue and the appearance cue. To learn the relationships among cameras, traditional methods use batch-learning procedures or hand-labeled correspondences, which work well only within a short period of time. In this paper, we propose an unsupervised method that learns both spatio-temporal relationships and appearance relationships adaptively and can therefore be applied to long-term monitoring. Our method performs target tracking across multiple cameras while also accounting for environment changes, such as sudden changes in lighting. We also improve the estimation of the spatio-temporal relationships by using prior knowledge of the camera network topology.
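
One piece of the spatio-temporal learning can be sketched as an exponentially forgotten histogram of camera-to-camera transition times, updated from unsupervised exit/entry observations so that the estimate can drift with the environment. The class below is an illustrative stand-in rather than the paper's model, and the appearance-relationship learning is not shown.

```python
import numpy as np

class TransitionTimeModel:
    """Exponentially forgotten histogram of transition times between two camera views."""
    def __init__(self, max_seconds=120, bin_seconds=2, forget=0.999):
        self.bins = np.zeros(max_seconds // bin_seconds)
        self.bin_seconds = bin_seconds
        self.forget = forget

    def observe(self, exit_time, entry_time):
        """Update with one (possibly noisy) exit-to-entry observation."""
        idx = int((entry_time - exit_time) // self.bin_seconds)
        if 0 <= idx < len(self.bins):
            self.bins *= self.forget          # slowly forget old evidence
            self.bins[idx] += 1.0

    def transition_score(self, dt):
        """Normalized score that a camera-to-camera transition takes about dt seconds."""
        idx = int(dt // self.bin_seconds)
        total = self.bins.sum()
        if total == 0.0 or not (0 <= idx < len(self.bins)):
            return 0.0
        return float(self.bins[idx] / total)
```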

Collaboration


Dive into Chu-Song Chen's collaborations.

Top Co-Authors

Yi-Ping Hung (National Taiwan University)
Chun-Rong Huang (National Chung Hsing University)
Kevin Lin (National Taiwan University)
Wen-Yan Chang (National Taiwan University)
Jiun-Hung Chen (University of Washington)
Chih-Yi Chiu (National Chiayi University)
Jen-Hao Hsiao (National Taiwan University)