Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Wen-Yan Chang is active.

Publication


Featured research published by Wen-Yan Chang.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2004

On pose recovery for generalized visual sensors

Chu-Song Chen; Wen-Yan Chang

With the advances in imaging technologies for robot or machine vision, new imaging devices are being developed for robot navigation or image-based rendering. However, to satisfy design criteria such as image resolution or viewing range, these devices are not necessarily designed to follow the perspective rule, and thus the imaging rays may not pass through a common point. Such generalized imaging devices may not be perspective and, therefore, their poses cannot be estimated with traditional techniques. In this paper, we propose a systematic method for pose estimation of such a generalized imaging device. We formulate it as a nonperspective n-point (NPnP) problem. The case with exact solutions, n = 3, is investigated comprehensively. Approximate solutions can be found for n > 3 in a least-squared-error manner by combining an initial-pose-estimation procedure with an orthogonally iterative procedure. The proposed method can be applied not only to nonperspective imaging devices but also to perspective ones. Experimental results show that our approach solves the NPnP problem accurately.
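
To make the orthogonally iterative procedure concrete, here is a minimal numpy sketch, assuming each imaging ray is given by an origin c_i and a unit direction v_i (a hypothetical parameterization; the paper's exact n = 3 solution and its initialization procedure are omitted). It alternates between projecting the transformed world points onto their rays and solving the resulting absolute-orientation problem in closed form:

import numpy as np

def rigid_fit(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q (Kabsch)."""
    cp, cq = P.mean(0), Q.mean(0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    return R, cq - R @ cp

def npnp_orthogonal_iteration(P, C, V, iters=100):
    """Pose of a generalized camera: world points P (n,3), ray origins C (n,3),
    unit ray directions V (n,3). Alternates ray projection and rigid fitting."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        X = P @ R.T + t                      # world points in the device frame
        s = np.einsum('ij,ij->i', X - C, V)  # projection coefficient onto each ray
        Q = C + s[:, None] * V               # closest points on the rays
        R, t = rigid_fit(P, Q)
    return R, t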


Computer Vision and Pattern Recognition | 2005

Appearance-guided particle filtering for articulated hand tracking

Wen-Yan Chang; Chu-Song Chen; Yi-Ping Hung

We propose a model-based tracking method, called appearance-guided particle filtering (AGPF), which integrates both sequential motion-transition information and appearance information. A probability propagation model is derived from a Bayesian formulation for this framework, and a sequential Monte Carlo method is introduced for its realization. We apply the proposed method to articulated hand tracking and show that it performs better than methods that use only sequential motion-transition information or only appearance information.
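
As a rough illustration of the sequential Monte Carlo realization, the sketch below shows one generic propagate-weight-resample step, assuming a random-walk motion model and a caller-supplied appearance_score function as stand-ins for the paper's learned motion-transition and appearance models:

import numpy as np

rng = np.random.default_rng(0)

def agpf_like_step(particles, appearance_score, motion_std=0.05):
    """One sequential Monte Carlo step: propagate particles through a
    motion-transition model, weight them by an appearance likelihood,
    and resample. appearance_score(state) returns p(observation | state)."""
    n, d = particles.shape
    particles = particles + rng.normal(0.0, motion_std, size=(n, d))  # dynamics
    weights = np.array([appearance_score(p) for p in particles])
    weights /= weights.sum()
    idx = rng.choice(n, size=n, p=weights)  # resampling by weight
    return particles[idx]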


IEEE Transactions on Image Processing | 2008

Visual Tracking in High-Dimensional State Space by Appearance-Guided Particle Filtering

Wen-Yan Chang; Chu-Song Chen; Yong-Dian Jian

In this paper, we propose a new approach, appearance-guided particle filtering (AGPF), for high-degree-of-freedom visual tracking from an image sequence. The method adopts a set of known attractors in the state space and integrates both appearance and motion-transition information for visual tracking. A probability propagation model based on these two types of information is derived from a Bayesian formulation, and a particle filtering framework is developed to realize it. Experimental results demonstrate that the proposed method is effective for high-degree-of-freedom visual tracking problems, such as articulated hand tracking and lip-contour tracking.
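
The attractor idea can be sketched as a mixture proposal. This is a hypothetical illustration, assuming the attractors are known state vectors and using Gaussian perturbations in place of the paper's derived propagation model:

import numpy as np

rng = np.random.default_rng(1)

def attractor_guided_proposal(particles, attractors, alpha=0.3,
                              motion_std=0.05, attractor_std=0.02):
    """Mixture proposal: most particles follow the motion model, while a
    fraction alpha jump to the vicinity of known attractors in the state
    space, helping recovery in high-dimensional tracking problems."""
    n, d = particles.shape
    out = particles + rng.normal(0.0, motion_std, size=(n, d))
    jump = rng.random(n) < alpha
    if jump.any():
        picks = attractors[rng.integers(len(attractors), size=jump.sum())]
        out[jump] = picks + rng.normal(0.0, attractor_std, size=(jump.sum(), d))
    return out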


Systems, Man, and Cybernetics | 2009

Tracking by Parts: A Bayesian Approach With Component Collaboration

Wen-Yan Chang; Chu-Song Chen; Yi-Ping Hung

Instead of using global appearance information for visual tracking, as many methods do, we propose a tracking-by-parts (TBP) approach that uses partial appearance information for the task. The proposed method considers the collaborations between parts and derives a probability propagation framework by encoding spatial coherence in a Bayesian formulation. To resolve this formulation, a TBP particle-filtering method is introduced. Unlike existing methods that use the spatial-coherence relationship only for particle-weight estimation, our method further applies this relationship to state prediction based on system dynamics. The part-based information can thus be utilized efficiently, and tracking performance is improved. Experimental results show that our approach outperforms the factored-likelihood and particle-reweight methods, which use spatial coherence only for weight estimation.
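
One way to picture the spatial-coherence term is a particle weight that multiplies per-part appearance likelihoods with pairwise coherence potentials. The Gaussian potential below is an assumption for illustration, not the paper's exact formulation:

import numpy as np

def tbp_weight(part_states, part_likelihoods, neighbors, sigma=10.0):
    """Particle weight for tracking-by-parts: product of per-part appearance
    likelihoods and pairwise spatial-coherence terms that penalize deviation
    of inter-part offsets from their nominal values. neighbors is a list of
    (i, j, nominal_offset) tuples over linked parts."""
    w = np.prod(part_likelihoods)
    for (i, j, nominal_offset) in neighbors:
        d = part_states[i] - part_states[j] - nominal_offset
        w *= np.exp(-0.5 * (d @ d) / sigma**2)
    return w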


International Conference on Pattern Recognition | 2004

Pose estimation for multiple camera systems

Wen-Yan Chang; Chu-Song Chen

Pose estimation of a multiple camera system (MCS) is usually achieved either by solving the PnP problem or by finding the least-squared-error rigid transformation between two 3D point sets. These methods employ only partial information about an MCS, in which only a small number of features in one or two cameras can be utilized. To overcome this limitation, we propose a new pose estimation method that uses the complete information of an MCS. In our method, we treat the MCS as a single generalized camera and formulate the problem in a least-squared manner. An iterative algorithm is proposed for solving the least-squared problem. Experimental results show that the proposed method estimates MCS poses accurately.
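
To see how an MCS becomes a single generalized camera, the sketch below converts calibrated multi-camera observations into ray origins and directions, assuming the convention that camera k maps MCS coordinates X to R_k X + t_k. A generalized-camera solver such as the npnp_orthogonal_iteration sketch above can then consume the result:

import numpy as np

def mcs_to_generalized_rays(K_list, R_list, t_list, pixels_list):
    """Convert observations from a multiple camera system into generalized
    rays (origins C, unit directions V) in the MCS reference frame.
    K_list: intrinsics; R_list, t_list: extrinsics relative to the MCS frame;
    pixels_list: per-camera lists of (u, v) image observations."""
    C, V = [], []
    for K, R, t, pix in zip(K_list, R_list, t_list, pixels_list):
        center = -R.T @ t                       # camera center in the MCS frame
        for (u, v) in pix:
            d = R.T @ np.linalg.inv(K) @ np.array([u, v, 1.0])
            C.append(center)
            V.append(d / np.linalg.norm(d))
    return np.asarray(C), np.asarray(V)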


Asian Conference on Computer Vision | 2007

Analyzing facial expression by fusing manifolds

Wen-Yan Chang; Chu-Song Chen; Yi-Ping Hung

Feature representation and classification are two major issues in facial expression analysis. In the past, most methods used either holistic or local representations for analysis. In essence, local information mainly focuses on the subtle variations of expressions, while holistic representation stresses global diversity. To take advantage of both, a hybrid representation is suggested in this paper, and manifold learning is applied to characterize global and local information discriminatively. Unlike methods that use unsupervised manifold learning, the embedded manifolds of the hybrid representation are learned with a supervised manifold learning technique. To integrate these manifolds effectively, a fusion classifier is introduced, which helps to select suitable combination weights of facial components to identify an expression. Comprehensive comparisons on facial expression recognition demonstrate the effectiveness of our algorithm.
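
A toy version of the fusion step might look like the following, using scikit-learn's LDA as a stand-in for the paper's supervised manifold learner and fixed component weights in place of the learned fusion classifier:

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fuse_components(component_features, labels, weights):
    """Illustrative fusion: learn one supervised embedding/classifier per
    facial component (LDA here), then combine per-component class posteriors
    with fixed weights. component_features is a list of (n_samples, n_dims)
    matrices, one per component, over the same training samples."""
    models = [LinearDiscriminantAnalysis().fit(X, labels)
              for X in component_features]
    def predict(component_samples):
        # Weighted sum of per-component posteriors, then pick the best class.
        probs = sum(w * m.predict_proba(x.reshape(1, -1))
                    for w, m, x in zip(weights, models, component_samples))
        return int(np.argmax(probs))
    return predict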


International Conference on Robotics and Automation | 2002

Pose estimation for generalized imaging device via solving non-perspective N point problem

Chu-Song Chen; Wen-Yan Chang

In this paper, we present a systematic method for pose estimation of a generalized imaging device whose imaging rays need not pass through a common point. We formulate this as a non-perspective n-point (NPnP) problem. The case with exact solutions, n = 3, is investigated comprehensively. Approximate solutions can also be found for n > 3 with our approach in a least-squared-error manner. The proposed method can be applied not only to perspective imaging devices but also to non-perspective ones.


International Conference on Pattern Recognition | 2006

Discriminative Descriptor-Based Observation Model for Visual Tracking

Wen-Yan Chang; Chu-Song Chen; Yi-Ping Hung

Varying illumination and partial occlusion are two main difficulties in visual tracking. Existing methods based on global appearance information cannot handle these problems effectively, since appearance is sensitive to lighting and appearances under occlusion differ substantially. In this paper, we propose a descriptor-based dynamic tracking approach that can track objects under partial occlusion and varying illumination. Instead of the global appearance, an object is represented by a set of invariant feature descriptors generated from local regions around salient points. By integrating the local descriptor information into the observation model, our method remains effective under varying illumination and partial occlusion.
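
A simple descriptor-based observation model can be sketched with OpenCV's ORB features (a stand-in; the paper does not necessarily use ORB), scoring a candidate window by the mean matching distance of its local descriptors against the template:

import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=200)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def descriptor_likelihood(template_gray, candidate_gray, sigma=40.0):
    """Observation likelihood from local invariant descriptors: match the
    template's descriptors against the candidate window and convert the mean
    matching distance into a likelihood, so that a few occluded or
    lighting-corrupted regions do not dominate the score."""
    _, d1 = orb.detectAndCompute(template_gray, None)
    _, d2 = orb.detectAndCompute(candidate_gray, None)
    if d1 is None or d2 is None:
        return 1e-6  # no local evidence in one of the images
    matches = matcher.match(d1, d2)
    if not matches:
        return 1e-6
    mean_dist = np.mean([m.distance for m in matches])
    return float(np.exp(-mean_dist / sigma))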


Data Compression Conference | 2002

Compression of 3D objects with multistage color-depth panoramic maps

Chang-Ming Tsai; Wen-Yan Chang; Chu-Song Chen; Gregory Y. Tang

A new representation method, the multistage color-depth panoramic map (or panomap), is proposed for compressing 3D graphic objects. The idea is to transform a 3D graphic object, including both its shape and color information, into a single image. Existing image compression techniques can then be applied to the panomap structure, which achieves a highly efficient representation due to its regularity. In our experiments, compressing the color part of a panomap with a lossy method (JPEG) and the depth part with a lossless one (PNG) achieves good reconstruction quality at low bit rates.
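
The hybrid coding used in the experiments is easy to sketch with OpenCV: JPEG for the color part, lossless 16-bit PNG for the depth part (the multistage construction of the panomap itself is not shown):

import cv2
import numpy as np

def compress_panomap(color_bgr, depth, jpeg_quality=90):
    """Encode the color part lossily (JPEG) and the depth part losslessly
    (16-bit PNG). color_bgr: uint8 (H, W, 3); depth: uint16 (H, W)."""
    ok1, color_buf = cv2.imencode('.jpg', color_bgr,
                                  [cv2.IMWRITE_JPEG_QUALITY, jpeg_quality])
    ok2, depth_buf = cv2.imencode('.png', depth.astype(np.uint16))
    assert ok1 and ok2
    return color_buf.tobytes(), depth_buf.tobytes()

def decompress_panomap(color_bytes, depth_bytes):
    color = cv2.imdecode(np.frombuffer(color_bytes, np.uint8), cv2.IMREAD_COLOR)
    depth = cv2.imdecode(np.frombuffer(depth_bytes, np.uint8),
                         cv2.IMREAD_UNCHANGED)  # preserves 16-bit depth values
    return color, depth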


Asian Conference on Computer Vision | 2006

Attractor-Guided particle filtering for lip contour tracking

Yong-Dian Jian; Wen-Yan Chang; Chu-Song Chen

We present a lip contour tracking algorithm using attractor-guided particle filtering. Robustly tracking the lip contour is difficult because the contour is highly deformable and the contrast between skin and lip colors is very low, which often causes traditional blind segmentation-based algorithms to produce unstable and unrealistic results. Because the lip contour is constrained by the facial muscles, however, the tracking configuration space can be represented by a lower-dimensional manifold. With this observation, we take some representative lip shapes as attractors in this manifold. To resolve the low-contrast problem, we adopt a color feature selection algorithm that maximizes the separability between skin and lip colors. We then integrate the shape priors and the discriminative feature into the attractor-guided particle filtering framework to track the lip contour. Experimental results show that we can track the lip contour robustly and efficiently.
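
The color feature selection step can be illustrated with a Fisher-ratio search over candidate one-dimensional color projections; the linear-projection form and the candidate set are assumptions for illustration, not the paper's exact algorithm:

import numpy as np

def best_color_feature(skin_pixels, lip_pixels, candidates):
    """Pick the color combination that best separates skin from lip pixels
    by the Fisher discriminant ratio (between-class variance over
    within-class variance). skin_pixels, lip_pixels: (n, 3) arrays;
    candidates: iterable of 3-vectors defining candidate 1-D color features."""
    best, best_ratio = None, -np.inf
    for w in candidates:
        a, b = skin_pixels @ w, lip_pixels @ w
        ratio = (a.mean() - b.mean())**2 / (a.var() + b.var() + 1e-12)
        if ratio > best_ratio:
            best, best_ratio = w, ratio
    return best, best_ratio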

Collaboration


Dive into Wen-Yan Chang's collaborations.

Top Co-Authors

Yi-Ping Hung
National Taiwan University

Yong-Dian Jian
Georgia Institute of Technology

Chia-Han Chang
National Taiwan University

Chien-Nan Chou
National Taiwan University

Wei-Jia Huang
National Taiwan University

Wei-Ta Chu
National Chung Cheng University

Wei-Ting Peng
National Taiwan University

Gregory Y. Tang
National Taiwan University