Network


Latest external collaboration at the country level. Click on a dot to see the details.

Hotspot


Dive into the research topics where Baozong Yuan is active.

Publication


Featured research published by Baozong Yuan.


International Conference on Pattern Recognition | 2006

Better Foreground Segmentation for Static Cameras via New Energy Form and Dynamic Graph-cut

Yunda Sun; Bo Li; Baozong Yuan; Zhenjiang Miao; Chengkai Wan

In this paper, we propose a new foreground segmentation method for applications using static cameras. It formulates foreground segmentation as an energy minimization problem and produces much better results than conventional background subtraction methods. By integrating an improved likelihood term, a shadow elimination term, and a contrast term into the energy function, it also achieves more accurate segmentation than existing methods of the same type. Furthermore, real-time performance is made possible by employing a dynamic graph-cut algorithm. Quantitative and qualitative experiments on real videos demonstrate our improvements.
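As an illustration of such an energy form, a binary labeling with unary (likelihood) and pairwise (contrast) terms can be minimized exactly by an s-t min-cut. The sketch below is a toy reconstruction, not the paper's dynamic graph-cut: it uses a plain Edmonds-Karp max-flow and hypothetical costs on a short row of pixels.

```python
from collections import defaultdict, deque

def min_cut_labels(unary_fg, unary_bg, pairwise):
    """Binary labeling by s-t min-cut: cap(S->i) is the cost of labeling
    pixel i background, cap(i->T) the cost of labeling it foreground, and
    an undirected edge of weight pairwise[(i, j)] is paid when i, j differ."""
    n = len(unary_fg)
    S, T = n, n + 1
    cap = defaultdict(int)
    adj = defaultdict(set)

    def add(u, v, c):
        cap[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)

    for i in range(n):
        add(S, i, unary_bg[i])
        add(i, T, unary_fg[i])
    for (i, j), w in pairwise.items():
        add(i, j, w)
        add(j, i, w)

    # Edmonds-Karp max-flow over the residual graph
    flow = defaultdict(int)
    while True:
        parent = {S: None}
        queue = deque([S])
        while queue and T not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] - flow[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if T not in parent:
            break
        path, v = [], T
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[e] - flow[e] for e in path)
        for u, v in path:
            flow[(u, v)] += bottleneck
            flow[(v, u)] -= bottleneck

    # nodes still reachable from S sit on the source side: foreground
    reach = {S}
    queue = deque([S])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in reach and cap[(u, v)] - flow[(u, v)] > 0:
                reach.add(v)
                queue.append(v)
    return [i in reach for i in range(n)]
```

With strong foreground evidence on the first three pixels and a small contrast penalty between neighbors, the cut falls on the single cheapest boundary rather than flipping any pixel's likely label.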


International Conference on Signal Processing | 2004

A novel statistical linear discriminant analysis for image matrix: two-dimensional Fisherfaces

Ming Li; Baozong Yuan

In the pattern recognition field, how to extract proper features is a very important problem. In recent years, statistical methods have been widely studied and many feature extraction methods have been developed, such as PCA, ICA, and nonlinear PCA. However, traditional statistical methods always require the image to be transformed into a 1D vector first. This paper proposes a novel linear discriminant analysis for image matrices, which achieves better results than the traditional ones. Experiments also show that our method is effective.
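The core idea, scatter matrices computed on image matrices rather than flattened vectors, can be sketched for two classes of tiny 2x2 "images". This is a toy reconstruction under our own assumptions (the ridge term, helper names, and the two-class setup are illustrative, not the paper's formulation):

```python
import math

def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def mat_add(A, B):
    return [[x + y for x, y in zip(r, s)] for r, s in zip(A, B)]

def mat_sub(A, B):
    return [[x - y for x, y in zip(r, s)] for r, s in zip(A, B)]

def mean_image(imgs):
    n = len(imgs)
    return [[sum(im[i][j] for im in imgs) / n for j in range(len(imgs[0][0]))]
            for i in range(len(imgs[0]))]

def inv_2x2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def top_eigvec_2x2(M):
    # dominant eigenvector of a 2x2 matrix via the quadratic formula
    (a, b), (c, d) = M
    tr, det = a + d, a * d - b * c
    lam = (tr + math.sqrt(max(tr * tr - 4 * det, 0.0))) / 2
    if abs(b) > 1e-12:
        v = (b, lam - a)
    elif abs(c) > 1e-12:
        v = (lam - d, c)
    else:
        v = (1.0, 0.0) if a >= d else (0.0, 1.0)
    norm = math.hypot(*v)
    return (v[0] / norm, v[1] / norm)

def twod_lda_direction(class_a, class_b, ridge=1e-6):
    """Two-class 2D-LDA on 2x2 'images': within- and between-class scatter
    act on the image columns directly, with no flattening to a 1D vector."""
    ma, mb = mean_image(class_a), mean_image(class_b)
    sw = [[ridge, 0.0], [0.0, ridge]]          # ridge keeps Sw invertible
    for imgs, m in ((class_a, ma), (class_b, mb)):
        for im in imgs:
            dev = mat_sub(im, m)
            sw = mat_add(sw, matmul(transpose(dev), dev))
    diff = mat_sub(ma, mb)
    sb = matmul(transpose(diff), diff)          # between-class scatter
    return top_eigvec_2x2(matmul(inv_2x2(sw), sb))

def project(im, w):
    # each image projects to one short vector (one value per row)
    return [sum(x * wi for x, wi in zip(row, w)) for row in im]
```

On two toy classes that differ in which column carries the signal, the learned direction pushes their projections to opposite sides while keeping within-class projections together.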


International Conference on Signal Processing | 2008

Face recognition using direct LPP algorithm

Jiangfeng Chen; Bo Li; Baozong Yuan

Data under varying intrinsic features are empirically regarded as a high-dimensional nonlinear manifold in the observation space. Locality preserving projections (LPP) is a linear transform that optimally preserves the local structure of the data set and explicitly considers the manifold structure modeled by an adjacency graph. LPP has been applied successfully in many domains; however, it needs a PCA transform in advance to avoid a possible singularity problem. Further, LPP is non-orthogonal, which makes it difficult to reconstruct the data. Orthogonal LPP (OLPP) has more discriminating power than LPP, but experiments imply that its robustness should be improved, and it also needs a PCA transform in advance. Using PCA as preprocessing can reduce noise, but some discriminative information is abandoned as well. In this paper, we present an approach, direct LPP (DLPP), that extracts features from the original data set directly by solving a common eigenvalue problem of a symmetric positive semi-definite matrix. DLPP combines the strengths of LPP and OLPP. Moreover, DLPP is least-squares normalized orthogonal, while OLPP is not known to be optimal for LPP in any sense. Experimental results demonstrate the effectiveness and robustness of our proposed algorithm.
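The adjacency graph and graph Laplacian that LPP (and hence DLPP) builds on can be sketched as follows. The heat-kernel weighting and the choice of k are illustrative assumptions, and the eigen-solve step is omitted; the Laplacian produced is the symmetric positive semi-definite matrix the abstract refers to.

```python
import math

def knn_graph(points, k=2, t=1.0):
    """Heat-kernel adjacency over k nearest neighbours, symmetrized
    (an edge counts if either endpoint picks the other)."""
    n = len(points)
    W = [[0.0] * n for _ in range(n)]
    for i in range(n):
        dists = sorted(
            (sum((a - b) ** 2 for a, b in zip(points[i], points[j])), j)
            for j in range(n) if j != i)
        for d2, j in dists[:k]:
            w = math.exp(-d2 / t)
            W[i][j] = W[j][i] = max(W[i][j], w)
    return W

def laplacian(W):
    """Graph Laplacian L = D - W; symmetric, rows sum to zero, and
    x^T L x = 0.5 * sum_ij W_ij (x_i - x_j)^2 >= 0 (positive semi-definite)."""
    n = len(W)
    D = [sum(row) for row in W]
    return [[(D[i] if i == j else 0.0) - W[i][j] for j in range(n)]
            for i in range(n)]
```

On a small two-cluster point set, the resulting Laplacian exhibits exactly the properties the eigen-solver relies on: symmetry, zero row sums, and a non-negative quadratic form.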


The Visual Computer | 2008

Markerless human body motion capture using Markov random field and dynamic graph cuts

Chengkai Wan; Baozong Yuan; Zhenjiang Miao

Current vision-based human body motion capture methods usually rely on passive markers attached to key locations on the human body. However, such systems burden subjects with cumbersome markers and make it difficult to convert the marker data into kinematic motion. In this paper, we propose a new algorithm for markerless, computer vision-based human body motion capture. We compute a volume data (voxel) representation from the images using shape from silhouettes (SFS) and treat the volume data as a Markov random field (MRF). We then match a predefined human body model with pose parameters to the volume data, casting this matching as energy function minimization. The energy function is constructed as a 3D graph, the minimal energy is obtained via max-flow, and finally the human pose is recovered with the Powell algorithm.
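The shape-from-silhouettes step can be illustrated with two orthographic views: a voxel survives only if it projects inside every silhouette. This is a toy sketch with hypothetical binary masks; the body-model matching and graph-cut minimization are not reproduced here.

```python
def carve_voxels(size, silhouette_xy, silhouette_yz):
    """Shape from silhouettes with two orthographic cameras: keep a voxel
    (x, y, z) only if its projection lies inside both silhouette masks
    (front view projects onto x-y, side view onto y-z)."""
    voxels = set()
    for x in range(size):
        for y in range(size):
            for z in range(size):
                if silhouette_xy[x][y] and silhouette_yz[y][z]:
                    voxels.add((x, y, z))
    return voxels
```

With a one-cell front silhouette and a two-cell side silhouette, the carved volume is exactly their intersection.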


Congress on Image and Signal Processing | 2008

Segmentation of Moving Foreground Objects Using Codebook and Local Binary Patterns

Bo Li; Zhen Tang; Baozong Yuan; Zhenjiang Miao

Robust detection of moving objects in complex scenes is one of the most challenging issues in computer vision. In this paper, we present a novel texture-based approach to segmenting moving objects with a codebook and local binary patterns (LBP). Many moving-object segmentation algorithms use information only from a limited number of frames before the current image; our approach models the background over a long period with little memory. First, we construct a codebook model, a compressed form of the background model for long image sequences. A per-pixel single Gaussian model is built to deal with illumination changes. Using the correlation and texture of spatially proximal pixels, a local binary pattern background model is constructed. Finally, the current image is segmented into foreground and background by comparing it with the background model. Experiments show that the proposed approach achieves robust, promising results on real videos.
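The LBP texture feature at the heart of this approach can be sketched for a single 3x3 patch; the bit ordering here is one common convention, not necessarily the paper's. Note that the code is unchanged under a uniform brightness shift, which is what makes the feature useful under illumination change.

```python
def lbp_code(patch):
    """8-neighbour LBP of the centre of a 3x3 patch: each neighbour
    contributes a 1-bit if it is >= the centre, read clockwise from the
    top-left corner (top-left is the most significant bit)."""
    c = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for v in neighbours:
        code = (code << 1) | (1 if v >= c else 0)
    return code
```

A bright top-left edge yields the code 0b11100001; shifting every pixel by the same amount leaves that code intact.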


International Conference on Signal Processing | 2010

Simplified representation for 3D point cloud data

Lihui Wang; Jing Chen; Baozong Yuan

3D data representation is becoming more popular, but the enormous number of points makes model reconstruction and object recognition difficult. We present a simplification algorithm for 3D point cloud data that integrates a feature parameter with uniform spherical sampling. First, we define a feature parameter that combines the average distance between a point and its neighboring points, the included angle between their normals, and the point curvature. We then compute the density of the 3D point cloud and use it as the feature threshold, distinguishing feature points from non-feature points by comparing the feature parameter against this threshold. The non-feature points are simplified by uniform spherical sampling, and the sampled non-feature points together with the feature points form the final result. The experimental results demonstrate that our approach retains the sharp information of the 3D point cloud data.
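The feature/non-feature split can be sketched with a crude stand-in for the paper's feature parameter: the distance from a point to the centroid of its k nearest neighbours (near zero on flat regions, large at corners and boundaries). The paper's actual parameter also uses normals and curvature, and its sampling is spherical rather than the every-nth subsampling used here; both substitutions are our simplifications.

```python
import math

def simplify(points, k=2, threshold=0.5, keep_every=2):
    """Keep every point whose distance to the centroid of its k nearest
    neighbours exceeds the threshold (a crude curvature proxy for 'feature'
    points), and uniformly subsample the remaining non-feature points."""
    def dist(p, q):
        return math.dist(p, q)

    features, plain = [], []
    for p in points:
        nn = sorted((q for q in points if q != p), key=lambda q: dist(p, q))[:k]
        centroid = tuple(sum(c) / k for c in zip(*nn))
        (features if dist(p, centroid) > threshold else plain).append(p)
    return features + plain[::keep_every]
```

On an L-shaped 2D point set, the corner survives simplification while interior points on the flat runs may be dropped.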


Science in China Series F: Information Sciences | 2009

A moving object segmentation algorithm for static camera via active contours and GMM

Chengkai Wan; Baozong Yuan; Zhenjiang Miao

Moving object segmentation is one of the most challenging issues in computer vision. In this paper, we propose a new foreground segmentation algorithm for static cameras. It combines a Gaussian mixture model (GMM) with the active contours method and produces much better results than conventional background subtraction methods. It formulates foreground segmentation as an energy minimization problem and minimizes the energy function by curve evolution. Our algorithm integrates the GMM background model, a shadow elimination term, and a curve-evolution edge-stopping term into the energy function, achieving more accurate segmentation than existing methods of the same type. Promising results on real images demonstrate the potential of the presented method.
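The background-model side can be sketched with a per-pixel running Gaussian, a single-Gaussian simplification of the paper's GMM term; the 2.5-sigma test, learning rate, and initial variance are illustrative choices, not the paper's parameters.

```python
class PixelBackground:
    """Per-pixel running Gaussian background model: a pixel is foreground
    when its value deviates from the mean by more than 2.5 sigma, and only
    background-classified values are absorbed into the model."""

    def __init__(self, mean, var=25.0, alpha=0.05):
        self.mean, self.var, self.alpha = float(mean), var, alpha

    def is_foreground(self, value):
        return (value - self.mean) ** 2 > (2.5 ** 2) * self.var

    def update(self, value):
        if not self.is_foreground(value):
            d = value - self.mean
            self.mean += self.alpha * d            # drift toward observation
            self.var += self.alpha * (d * d - self.var)
```

After absorbing a stream of values near 100, the model still accepts 100 as background but flags a sudden 200 as foreground.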


International Conference on Signal Processing | 2008

Detection of coded concentric rings for camera calibration

Jing Chen; Lihui Wang; Baozong Yuan

A fast and accurate coded marker identification method is presented in this paper. The method combines image processing techniques with prior information about the marker's appearance and shape. The deformation of the circle caused by projection is taken into account so that the marker can be recognized correctly. The experiments demonstrate the robustness and accuracy of our method.
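One ingredient of coded-marker identification, reading the code ring rotation-invariantly, can be sketched as follows. The 8-bit cyclic code and the minimal-rotation canonicalization are a hypothetical example, not the paper's actual coding scheme.

```python
def canonical_ring_id(bits):
    """Rotation-invariant identity of a coded ring: the code is a cyclic
    bit string, so every rotation of the same marker must decode to the
    same canonical (minimal) integer value."""
    n = len(bits)
    value = lambda seq: int("".join(map(str, seq)), 2)
    return min(value(bits[i:] + bits[:i]) for i in range(n))
```

Two readings of the same marker taken at different in-plane rotations map to one ID, while a genuinely different code maps elsewhere.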


Tsinghua Science & Technology | 2011

Creating autonomous, perceptive and intelligent virtual humans in a real-time virtual environment

Weibin Liu; Liang Zhou; Weiwei Xing; Xingqi Liu; Baozong Yuan

Creating realistic virtual humans has long been a challenging objective in computer science research. This paper describes an integrated framework for modeling virtual humans with a high level of autonomy, seeking to reproduce human-like, believable behavior and movement in a virtual environment. The framework includes a visual and auditory information perception module, a decision-network-based behavior decision module, and a hierarchical autonomous motion control module, which cooperate to model realistic autonomous individual behavior for virtual humans in real-time interactive virtual environments. The framework was tested in a simulated virtual environment system to demonstrate its ability to create autonomous, perceptive, and intelligent virtual humans in real-time virtual environments.
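The three cooperating modules can be caricatured as a perceive-decide-act loop. Everything below is a toy stand-in: the range-based perception filter and the rule table (in place of the decision network) are our own illustrative assumptions.

```python
class VirtualHuman:
    """Toy perceive-decide-act pipeline mirroring the framework's three
    modules: perception filters stimuli by distance, a rule table stands
    in for the decision network, and 'motion control' records actions."""

    RULES = {"greeting": "wave", "obstacle": "step_aside"}

    def __init__(self, perception_range=5.0):
        self.perception_range = perception_range
        self.actions = []

    def perceive(self, stimuli):
        # keep only (event, distance) stimuli close enough to be noticed
        return [s for s, d in stimuli if d <= self.perception_range]

    def decide(self, percepts):
        return [self.RULES.get(p, "idle") for p in percepts]

    def act(self, decisions):
        self.actions.extend(decisions)
        return self.actions
```

A greeting nearby triggers a wave, while an obstacle beyond perception range is never acted on.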


Science in China Series F: Information Sciences | 2009

Markerless human motion capture by Markov random field and dynamic graph cuts with color constraints

Jia Li; Chengkai Wan; DianYong Zhang; Zhenjiang Miao; Baozong Yuan

Currently, many vision-based motion capture systems require passive markers attached to key locations on the human body. However, such systems are intrusive, which limits their application. The algorithm we use for human motion capture in this paper is based on a Markov random field (MRF) and dynamic graph cuts. It takes full account of the impact of 3D reconstruction error and integrates human motion capture and 3D reconstruction into an MRF-MAP framework. For more accurate and robust performance, we extend the algorithm by incorporating color constraints into the pose estimation process. The advantages of incorporating color constraints are demonstrated by experimental results on several video sequences.
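A color-constraint term can be sketched as a similarity between normalized color histograms; the Bhattacharyya coefficient below is one common choice and an assumption on our part, not necessarily the paper's exact formulation.

```python
import math

def bhattacharyya(h1, h2):
    """Bhattacharyya coefficient between two normalized histograms:
    1.0 for identical distributions, 0.0 for disjoint ones. Such a score
    can rate how well a candidate pose's predicted colors match the image."""
    return sum(math.sqrt(p * q) for p, q in zip(h1, h2))
```

Identical histograms score 1.0, disjoint ones 0.0, and partial overlap lands in between, giving the pose estimator a smooth color-consistency signal.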

Collaboration


Dive into Baozong Yuan's collaborations.

Top Co-Authors

Zhenjiang Miao, Beijing Jiaotong University
Weiwei Xing, Beijing Jiaotong University
Xiaofang Tang, Beijing Jiaotong University
Chengkai Wan, Beijing Jiaotong University
Bo Li, Tsinghua University
Jiangfeng Chen, Beijing Jiaotong University
Qiuqi Ruan, Beijing Jiaotong University
Lihui Wang, Beijing Jiaotong University
Ming Liu, Beijing Jiaotong University