Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Te-Feng Su is active.

Publication


Featured research published by Te-Feng Su.


International Conference on Acoustics, Speech, and Signal Processing | 2011

Detecting moving objects from dynamic background with shadow removal

Shih-Chieh Wang; Te-Feng Su; Shang-Hong Lai

Background subtraction is commonly used to detect foreground objects in video surveillance. Traditional background subtraction methods are usually based on the assumption that the background is stationary, so they are not applicable to dynamic backgrounds, where the background image changes over time. In this paper, we propose an adaptive Local-Patch Gaussian Mixture Model (LPGMM) as the dynamic background model for detecting moving objects in video with a dynamic background. SVM classification is then employed to discriminate between foreground objects and shadow regions. Finally, we show experimental results on several video sequences to demonstrate the effectiveness and robustness of the proposed method.
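The core of any GMM-style background model is a per-pixel statistical test followed by a selective update of the background statistics. As a minimal sketch, assuming a single running Gaussian per pixel rather than the paper's local-patch mixture (the threshold `k` and learning rate `lr` are illustrative values, not the paper's):

```python
import numpy as np

def update_background(mean, var, frame, lr=0.05, k=2.5):
    """One step of a per-pixel running-Gaussian background model.
    Pixels more than k standard deviations from the mean are foreground;
    background statistics are updated only where the pixel matches."""
    diff = frame - mean
    fg = diff ** 2 > (k ** 2) * var          # foreground mask
    bg = ~fg
    mean = np.where(bg, mean + lr * diff, mean)
    var = np.where(bg, (1 - lr) * var + lr * diff ** 2, var)
    return mean, var, fg

# toy example: a static background with one bright intruding object pixel
mean = np.full((4, 4), 10.0)
var = np.full((4, 4), 1.0)
frame = mean.copy()
frame[1, 2] = 100.0
mean, var, fg = update_background(mean, var, frame)
print(fg[1, 2], fg[0, 0])   # True False
```

A real mixture model would keep several (mean, var, weight) triples per pixel or patch and match against the closest mode, but the detect-then-selectively-update loop is the same.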


Asian Conference on Computer Vision | 2010

Over-segmentation based background modeling and foreground detection with shadow removal by using hierarchical MRFs

Te-Feng Su; Yi-Ling Chen; Shang-Hong Lai

In this paper, we propose a novel over-segmentation based method for detecting foreground objects in a surveillance video by integrating background modeling with Markov Random Field classification. First, we introduce a fast affinity propagation clustering algorithm to produce an over-segmentation of a reference image, taking into account the color difference and spatial relationship between pixels. A background model is learned by using Gaussian Mixture Models over the color features of the segments to represent the time-varying background scene. Next, each segment is treated as a node in a Markov Random Field and assigned one of three states, foreground, shadow, or background, determined by hierarchical belief propagation. The relationship between neighboring regions is also considered to ensure the spatial coherence of segments. Finally, we demonstrate experimental results on several image sequences to show the effectiveness and robustness of the proposed method.
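The paper solves the segment-level MRF with hierarchical belief propagation; a much simpler Iterated Conditional Modes (ICM) sketch illustrates the same idea of a Potts smoothness term pulling an ambiguous segment toward its neighbors' label (the unary costs, edges, and weight below are made-up toy values, and ICM is a stand-in, not the paper's inference method):

```python
import numpy as np

def icm_labels(unary, edges, pairwise_w=1.0, iters=10):
    """Iterated Conditional Modes on a segment-level MRF.
    unary[i, k]: cost of assigning label k (0=bg, 1=shadow, 2=fg) to segment i;
    edges: neighboring segment pairs; a Potts term penalizes disagreeing neighbors."""
    labels = np.argmin(unary, axis=1)
    nbrs = {i: [] for i in range(len(unary))}
    for a, b in edges:
        nbrs[a].append(b)
        nbrs[b].append(a)
    for _ in range(iters):
        for i in range(len(unary)):
            costs = unary[i].copy()
            for j in nbrs[i]:
                costs += pairwise_w * (np.arange(unary.shape[1]) != labels[j])
            labels[i] = np.argmin(costs)
    return labels

# three segments in a row; the middle one is ambiguous, but both neighbors
# strongly prefer "foreground", so the pairwise term pulls it to label 2
unary = np.array([[5.0, 5.0, 0.0],
                  [2.0, 2.1, 2.0],   # near-tie on its own
                  [5.0, 5.0, 0.0]])
print(icm_labels(unary, [(0, 1), (1, 2)], pairwise_w=1.0))
```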


IEEE International Conference on Automatic Face and Gesture Recognition | 2013

Multi-attribute sparse representation with group constraints for face recognition under different variations

Chen-Kuo Chiang; Te-Feng Su; Chih Yen; Shang-Hong Lai

A novel multi-attribute sparse representation enforced with group constraints is proposed in this paper. Data with multiple attributes can be represented by individual binary matrices that indicate the group properties of each data sample. These attribute matrices are then incorporated into the l1-minimization formulation. The solution is obtained by jointly considering the data reconstruction error, the sparsity property, and the group constraints, making the basis selection in sparse coding more effective in terms of accuracy. The proposed optimization formulation with group constraints is simple yet very efficient for classification problems with multiple attributes. In addition, it can be rewritten in a modified sparse coding form so that any l1-minimization solver can be employed for the corresponding optimization problem. We demonstrate the performance of the proposed multi-attribute sparse representation algorithm through experiments on face recognition under different kinds of variations. Experimental results show that the proposed method is very competitive with state-of-the-art methods.
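One simple way to realize such group constraints is a weighted l1 penalty, where dictionary atoms outside the target's attribute group carry larger weights and are therefore suppressed by the solver. A hedged sketch using ISTA (the dictionary, weights, and signal below are synthetic, and this weighted-l1 form is an approximation of the paper's formulation, not a reproduction of it):

```python
import numpy as np

def group_weighted_ista(D, x, group_weights, lam=0.1, iters=200):
    """ISTA for  min_a 0.5*||x - D a||^2 + lam * sum_i w_i * |a_i|.
    Atoms sharing the target's attributes get small weights w_i, so the
    soft-threshold leaves them active while out-of-group atoms are shrunk."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2     # 1 / Lipschitz constant
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ a - x)
        z = a - step * grad
        thresh = lam * step * group_weights    # per-atom soft threshold
        a = np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)
    return a

rng = np.random.default_rng(0)
D = rng.normal(size=(20, 8))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
x = D[:, 1] * 2.0                         # signal built from atom 1
w = np.ones(8)
w[4:] = 10.0                              # atoms 4..7 belong to another group
a = group_weighted_ista(D, x, w)
print(np.argmax(np.abs(a)))
```

The heavily weighted out-of-group atoms are driven to (near) zero, so the code selects the same-group atom that actually generated the signal.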


Embedded Systems for Real-Time Multimedia | 2013

Design of vehicle detection methods with OpenCL programming on multi-core systems

Kai-Mao Cheng; Cheng-Yen Lin; Yu-Chun Chen; Te-Feng Su; Shang-Hong Lai; Jenq Kuen Lee

Vehicle detection methods play an important role in driver assistance systems, so developing a highly accurate and efficient vehicle detection system is crucial. One popular approach is the scanning method, which locates vehicles in the input images with a sliding-window search. Such methods provide a high detection rate, but classifying each sliding window is time-consuming, and the search time can become unacceptable as the search space grows. This presents an opportunity to exploit modern heterogeneous multi-core systems to accelerate the vehicle detection process. In this paper, we present a case study that accelerates a sliding-window based vehicle detection algorithm on a heterogeneous multi-core system using OpenCL. Unlike traditional detection algorithms, we integrate a width model into our vehicle detection method to reduce the search space. We give detailed execution profiling of each component of the original vehicle detection algorithm and explore the potential parallelism. The experiments are based on a heterogeneous multi-core platform that includes an Intel i5-2400 processor and an AMD HD6670 GPU, with an Open64-based OpenCL compiler employed to compile the CL code for the GPU. Significant speed-up is achieved with our parallelization and optimization: the maximum speed-ups for the vehicle detection kernel and the whole application are 17.1x and 16.7x, respectively.
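The sliding-window search that dominates the running time is embarrassingly parallel: each window position and scale is an independent classification, i.e. a natural OpenCL work-item. A small enumeration sketch (image size, window size, stride, and scales are hypothetical) shows how quickly the search space grows:

```python
def sliding_windows(img_w, img_h, win_w, win_h, stride, scales):
    """Enumerate all (x, y, scale) windows a detector must classify.
    Every tuple is independent work, which is why the search maps
    well onto one GPU work-item per window."""
    wins = []
    for s in scales:
        w, h = int(win_w * s), int(win_h * s)
        for y in range(0, img_h - h + 1, stride):
            for x in range(0, img_w - w + 1, stride):
                wins.append((x, y, s))
    return wins

# a single VGA frame already yields five figures' worth of windows
wins = sliding_windows(640, 480, 64, 64, stride=8, scales=[1.0, 1.5, 2.0])
print(len(wins))
```

The width model described above shrinks this set further by keeping, for each image row, only the window sizes consistent with an on-road vehicle at that row.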


Embedded Systems for Real-Time Multimedia | 2011

Support of software framework for embedded multi-core systems with Android environments

Yu-Hao Chang; Chi-Bang Kuan; Cheng-Yen Lin; Te-Feng Su; Chun-Ta Chen; Jyh-Shing Roger Jang; Shang-Hong Lai; Jenq Kuen Lee

Applications on mobile devices are becoming more complicated with the new wave of mobile applications. The computing power of embedded devices has grown with this trend, and embedded multi-core platforms are in a position to help boost system performance. Software frameworks that integrate multi-core platforms are often needed to boost system performance and reduce programming complexity. In this paper, we present a software framework based on Android and embedded multi-core systems. In the framework, we integrate a compiler toolchain for the multi-core programming environment, which includes DSP C/C++ compilers, a streaming RPC programming model, a debugger, an ESL simulator, and power management models. We also develop software frameworks for face detection, voice recognition, and mobile streaming management. These frameworks are designed as multi-core programs and illustrate the design flow for applications in embedded multi-core environments equipped with Android. We demonstrate the proposed mechanisms by implementing two applications, Face RMS and voice recognition, giving a case study of the software framework and design flow for emerging RMS-based and voice recognition applications on embedded multi-core systems running Android.


International Conference on Computer Vision | 2013

Multi-attributed Dictionary Learning for Sparse Coding

Chen-Kuo Chiang; Te-Feng Su; Chih Yen; Shang-Hong Lai

We present a multi-attributed dictionary learning algorithm for sparse coding. Considering training samples with multiple attributes, a new distance matrix is proposed that jointly incorporates data and attribute similarities. An objective function is then presented to learn category-dependent dictionaries that are compact (closeness of dictionary atoms based on data distance and attribute similarity), reconstructive (low reconstruction error with the correct dictionary), and label-consistent (encouraging the labels of dictionary atoms to be similar). We demonstrate our algorithm on action classification and face recognition tasks on several publicly available datasets. Experimental results show improved performance over previous dictionary learning methods, validating the effectiveness of the proposed algorithm.


Asia-Pacific Signal and Information Processing Association Annual Summit and Conference | 2013

Human segmentation from video by combining random walks with human shape prior adaption

Yutzu Lee; Te-Feng Su; Hong-Ren Su; Shang-Hong Lai; Tsung-Chan Lee; Ming-Yu Shih

In this paper, we propose an automatic human segmentation algorithm for video conferencing applications. Since humans are the principal subject in these videos, the proposed framework uses human shape cues to separate humans from a complex background and to replace or blur the background for immersive communication. We first detect the face position and size, track the human boundary across frames, and propagate the segmentation likelihood to the next frame to obtain the trimap used as input to the Random Walk algorithm. In addition, we include the gradient magnitude in the edge weights to enhance the Random Walk segmentation results. Finally, we demonstrate experimental results on several image sequences to show the effectiveness and robustness of the proposed method.
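The Random Walker assigns each unlabeled pixel the probability that a random walk starting there, biased by the edge weights, reaches a foreground seed before a background seed; this reduces to solving a linear system in the graph Laplacian restricted to the unseeded nodes. A minimal sketch on a 5-node chain graph (the weights and seeds are toy values; real images use a 2-D lattice with weights derived from color and, as above, gradient magnitude):

```python
import numpy as np

def random_walker_1d(edge_w, seeds):
    """Grady-style random walker on a chain graph.
    edge_w[i]: weight between node i and i+1 (large = similar pixels);
    seeds: node -> 1.0 for a foreground seed, 0.0 for a background seed.
    Returns, per node, P(hit a foreground seed before a background seed)."""
    n = len(edge_w) + 1
    L = np.zeros((n, n))                       # graph Laplacian
    for i, w in enumerate(edge_w):
        L[i, i] += w
        L[i + 1, i + 1] += w
        L[i, i + 1] -= w
        L[i + 1, i] -= w
    seeded = sorted(seeds)
    free = [i for i in range(n) if i not in seeds]
    b = -L[np.ix_(free, seeded)] @ np.array([seeds[i] for i in seeded])
    x = np.linalg.solve(L[np.ix_(free, free)], b)
    probs = np.zeros(n)
    for i in seeded:
        probs[i] = seeds[i]
    probs[free] = x
    return probs

# strong edges inside the "person", one weak edge at the object boundary
p = random_walker_1d([1.0, 1.0, 0.01, 1.0], {0: 1.0, 4: 0.0})
print(np.round(p, 2))
```

The probabilities stay near 1 up to the weak edge and drop sharply after it, so thresholding at 0.5 cuts the segmentation exactly at the low-weight boundary.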


International Conference on Image Processing | 2013

Efficient vehicle detection with adaptive scan based on perspective geometry

Yu-Chun Chen; Te-Feng Su; Shang-Hong Lai

Vehicle detection is an important research problem for Advanced Driver Assistance Systems aimed at improving driving safety. Most existing methods are based on the sliding-window search framework to locate vehicles in an image. However, such methods usually produce large numbers of false positives and are computationally intensive. In this paper, we propose an efficient vehicle detection algorithm that dramatically reduces the search space based on the perspective geometry of the road. In the training phase, we search a few images to locate all possible vehicle regions using a standard HOG-based vehicle detector. Pairs of vehicle candidates that satisfy the projective geometry constraints are used to estimate a linear model of vehicle width with respect to the y coordinate in the image. An adaptive scan strategy based on the estimated vehicle width model is then proposed to efficiently detect vehicles in an image. Experimental results show that the proposed algorithm improves both speed and accuracy compared to the standard sliding-window search strategy.
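Under a pinhole camera looking down a flat road, the image width of an on-road vehicle grows roughly linearly with its image row y, so the width model reduces to fitting width = a*y + b by least squares. A sketch with made-up (y, width) pairs standing in for verified detections:

```python
import numpy as np

# hypothetical (row, width) pairs from verified vehicle detections;
# rows farther down the image (larger y) are closer, hence wider
ys = np.array([100.0, 150.0, 200.0, 250.0, 300.0])
widths = np.array([40.0, 60.0, 80.0, 100.0, 120.0])

# least-squares fit of width = a * y + b
A = np.stack([ys, np.ones_like(ys)], axis=1)
(a, b), *_ = np.linalg.lstsq(A, widths, rcond=None)

def expected_width(y):
    """Predicted vehicle width at image row y, used to restrict
    the scan to plausible window sizes at that row."""
    return a * y + b

print(round(expected_width(175)))   # 70
```

At detection time, each image row is then scanned only with windows near `expected_width(y)`, which is what collapses the multi-scale search space.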


IEEE Transactions on Circuits and Systems for Video Technology | 2016

A Multiattribute Sparse Coding Approach for Action Recognition From a Single Unknown Viewpoint

Te-Feng Su; Chen-Kuo Chiang; Shang-Hong Lai

We propose a novel approach for view-independent action recognition using a multiattribute sparse representation enforced with group constraints. First, an oversegmentation-based background modeling and foreground detection approach is employed to extract silhouettes from action videos. Then, motion history images over multiple time intervals are computed to capture the motion and pose information in human activities. To obtain a more accurate and discriminative representation, we propose a multiattribute sparse representation for multiview action video classification. Actions with multiple attributes can be represented by individual attribute matrices that describe the group property of each action instance. These attribute matrices are incorporated into the l1-minimization formulation. The sparsity property and the group constraints make the basis selection in sparse coding more effective in terms of accuracy. Notably, our approach can operate even when the attributes in the training data are only partially labeled. Finally, we demonstrate the proposed algorithm through experiments on three multiview human action datasets to show the effectiveness and robustness of the proposed method.
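A motion history image (MHI) keeps, at each pixel, a value that is set to a time horizon tau whenever the pixel moves and decays otherwise; computing MHIs for several values of tau gives the multiple time intervals mentioned above. A minimal single-tau sketch (the motion masks and tau below are toy values):

```python
import numpy as np

def update_mhi(mhi, motion_mask, tau, delta=1.0):
    """Classic MHI update: moving pixels jump to tau, all others decay
    toward zero, so recent motion is bright and old motion fades."""
    return np.where(motion_mask, float(tau), np.maximum(mhi - delta, 0.0))

# two pixels: the left one moved 3 frames ago, the right one moves now
mhi = np.zeros((1, 2))
masks = [np.array([[True, False]]),
         np.array([[False, False]]),
         np.array([[False, False]]),
         np.array([[False, True]])]
for m in masks:
    mhi = update_mhi(mhi, m, tau=5)
print(mhi)   # [[2. 5.]]
```

A small tau captures only the most recent motion while a large tau accumulates the whole action, which is why combining several intervals preserves both pose and motion history.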


Computer Software and Applications Conference | 2015

Novel Facial Expression Recognition by Combining Action Unit Detection with Sparse Representation Classification

Te-Feng Su; Ching-Hua Weng; Shang-Hong Lai

This paper presents a multi-attribute sparse coding approach for facial expression recognition that treats Action Units (AUs) as attributes. AUs describe the movements of individual facial muscles and are detected from the corresponding attribute masks in this work. They are used not only to describe a group property that encourages basis selection from groups with the same AUs, but also to penalize the selection of atoms whose AU distance is far from the target instance. The group constraint and the AU similarity constraint are incorporated into the l1-minimization formulation to determine the optimal sparse representation of a facial expression. Finally, we demonstrate the proposed algorithm through experiments on two facial expression datasets to show the effectiveness and robustness of the proposed method.

Collaboration


Dive into Te-Feng Su's collaboration.

Top Co-Authors

Shang-Hong Lai (National Tsing Hua University)
Chen-Kuo Chiang (National Tsing Hua University)
Yutzu Lee (National Tsing Hua University)
Yu-Chun Chen (National Tsing Hua University)
Cheng-Yen Lin (National Tsing Hua University)
Chi-Bang Kuan (National Tsing Hua University)
Chih Yen (National Tsing Hua University)
Chih-Hsueh Duan (National Tsing Hua University)
Chin-Yun Fan (National Tsing Hua University)
Jenq Kuen Lee (National Tsing Hua University)