
Publication


Featured research published by Bingpeng Ma.


British Machine Vision Conference | 2012

BiCov: a novel image representation for person re-identification and face verification

Bingpeng Ma; Yu Su; Frédéric Jurie

This paper proposes a novel image representation which can properly handle both background and illumination variations. It is therefore suited to person/face re-identification tasks, avoiding the use of any additional pre-processing steps such as foreground-background separation or face and body part segmentation. The representation relies on the combination of Biologically Inspired Features (BIF) and covariance descriptors used to compute the similarity of the BIF features at neighboring scales; hence, we refer to it as the BiCov representation. To show the effectiveness of BiCov, this paper conducts experiments on two person re-identification tasks (VIPeR and ETHZ) and one face verification task (LFW), on which it improves the current state-of-the-art performance.
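The covariance-descriptor half of BiCov can be sketched in a few lines. This is a minimal, illustrative version under simplifying assumptions: the per-pixel feature set below and the flat Frobenius comparison are stand-ins, not the paper's exact BIF features or similarity measure.

```python
import numpy as np

def region_covariance(patch):
    """Covariance descriptor of a grayscale patch: each pixel contributes
    a feature vector [x, y, intensity, |Ix|, |Iy|], and the patch is
    summarized by the covariance matrix of those vectors."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    gy, gx = np.gradient(patch)
    feats = np.stack([xs.ravel(), ys.ravel(), patch.ravel(),
                      np.abs(gx).ravel(), np.abs(gy).ravel()], axis=1)
    return np.cov(feats, rowvar=False)

# Compare two patches by the Frobenius distance between their descriptors
# (a simplified stand-in for the similarity used in the paper).
rng = np.random.default_rng(0)
a, b = rng.random((16, 16)), rng.random((16, 16))
dist = np.linalg.norm(region_covariance(a) - region_covariance(b))
```

Because the descriptor is a covariance matrix, it is largely insensitive to which pixels carry which values, which is what makes it robust to background clutter.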


International Conference on Computer Vision | 2012

Local descriptors encoded by Fisher vectors for person re-identification

Bingpeng Ma; Yu Su; Frédéric Jurie

This paper proposes a new descriptor for person re-identification building on recent advances in Fisher Vectors. Specifically, a simple vector of attributes consisting of the pixel coordinates, the intensity, and the first- and second-order derivatives is computed for each pixel of the image. These local descriptors are turned into Fisher Vectors before being pooled to produce a global representation of the image. The resulting Local Descriptors encoded by Fisher Vector (LDFV) have been validated through experiments on two person re-identification benchmarks (VIPeR and ETHZ), achieving state-of-the-art performance on both datasets.
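The per-pixel attribute vector described above is straightforward to compute; a minimal sketch is given below (the Fisher-vector encoding step, which requires a trained GMM, is omitted, and the derivative operator is an assumption):

```python
import numpy as np

def pixel_attributes(img):
    """7-D descriptor per pixel: (x, y, intensity, Ix, Iy, Ixx, Iyy),
    i.e. coordinates, intensity, and first/second-order derivatives."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    gy, gx = np.gradient(img)     # first-order derivatives
    gyy, _ = np.gradient(gy)      # second-order, vertical
    _, gxx = np.gradient(gx)      # second-order, horizontal
    return np.stack([xs, ys, img, gx, gy, gxx, gyy],
                    axis=-1).reshape(-1, 7)

descs = pixel_attributes(np.random.default_rng(0).random((32, 16)))
```

In the full pipeline, these local descriptors would be soft-assigned to the components of a Gaussian mixture model and aggregated into a single Fisher vector per image.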


Image and Vision Computing | 2014

Covariance Descriptor based on Bio-inspired Features for Person re-Identification and Face Verification

Bingpeng Ma; Yu Su; Frédéric Jurie

Avoiding the use of complicated pre-processing steps such as accurate face and body part segmentation or image normalization, this paper proposes a novel face/person image representation which can properly handle background and illumination variations. Denoted gBiCov, this representation relies on the combination of Biologically Inspired Features (BIF) and covariance descriptors [1]. More precisely, gBiCov is obtained by computing and encoding the difference between BIF features at different scales. The distance between two persons can then be efficiently measured by computing the Euclidean distance of their signatures, avoiding the time-consuming operations on the Riemannian manifold required by the use of covariance descriptors. In addition, the recently proposed KISSME framework [2] is adopted to learn a metric adapted to the representation. To show the effectiveness of gBiCov, experiments are conducted on three person re-identification tasks (VIPeR, i-LIDS and ETHZ) and one face verification task (LFW), on which competitive results are obtained. As an example, the matching rate at rank 1 on the VIPeR dataset is 31.11%, improving the best previously published result by more than 10 points.
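The idea of replacing Riemannian computations with a plain Euclidean distance can be illustrated with the standard log-Euclidean mapping of covariance matrices; this is a hypothetical sketch of that general trick, not the exact encoding used in gBiCov:

```python
import numpy as np

def log_euclidean_vec(cov, eps=1e-8):
    """Flatten a covariance matrix via its matrix logarithm, so that
    ordinary Euclidean distance between the resulting vectors respects
    the geometry of the space of covariance matrices."""
    w, v = np.linalg.eigh(cov + eps * np.eye(cov.shape[0]))
    log_cov = (v * np.log(w)) @ v.T
    iu = np.triu_indices(cov.shape[0])
    # off-diagonal entries appear twice in the symmetric matrix
    scale = np.where(iu[0] == iu[1], 1.0, np.sqrt(2.0))
    return log_cov[iu] * scale
```

Once every signature has been mapped this way, nearest-neighbor search and metric learning (e.g., KISSME) can operate in a flat vector space.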


International Conference on Pattern Recognition | 2006

Robust Head Pose Estimation Using LGBP

Bingpeng Ma; Wenchao Zhang; Shiguang Shan; Xilin Chen; Wen Gao

In this paper, we introduce a novel discriminative feature which is efficient for pose estimation. The multi-view face representation is based on local Gabor binary patterns (LGBP) and encodes the local facial characteristics into a compact feature histogram. In LGBP, Gabor filters extract features of the orientation of the head, and the local binary pattern (LBP) extracts features of the local facial orientation. To keep the spatial information of the multi-view face images, LGBP is applied to many sub-regions of the images. The combination of the two can faithfully represent the multi-view face images. Over the derived feature space, a radial basis function (RBF) kernel SVM classifier is trained to estimate pose. Extensive experiments demonstrate that this facial representation is effective for pose estimation.
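The LBP half of LGBP is easy to sketch; below is a minimal 8-neighbor version applied to a raw grayscale image (in LGBP the same operator would be applied to Gabor magnitude maps rather than to the image directly):

```python
import numpy as np

def lbp_codes(img):
    """8-neighbor Local Binary Pattern: each interior pixel gets an
    8-bit code, one bit per neighbor, set when that neighbor is >= the
    center pixel."""
    c = img[1:-1, 1:-1]
    neighbors = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
                 img[1:-1, 2:], img[2:, 2:], img[2:, 1:-1],
                 img[2:, :-2], img[1:-1, :-2]]
    code = np.zeros(c.shape, dtype=np.uint8)
    for bit, n in enumerate(neighbors):
        code |= (n >= c).astype(np.uint8) << np.uint8(bit)
    return code
```

Histograms of these codes, collected separately over sub-regions, are what preserve the spatial layout mentioned in the abstract.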


International Conference on Computer Vision | 2015

A Spatio-Temporal Appearance Representation for Video-Based Pedestrian Re-Identification

Kan Liu; Bingpeng Ma; Wei Zhang; Rui Huang

Pedestrian re-identification is a difficult problem due to the large variations in a person's appearance caused by different poses and viewpoints, illumination changes, and occlusions. Spatial alignment is commonly used to address these issues by treating the appearance of different body parts independently. However, a body part can also appear differently during different phases of an action. In this paper we consider the temporal alignment problem, in addition to the spatial one, and propose a new approach that takes the video of a walking person as input and builds a spatio-temporal appearance representation for pedestrian re-identification. In particular, given a video sequence, we exploit the periodicity exhibited by a walking person to generate a spatio-temporal body-action model, which consists of a series of body-action units corresponding to certain action primitives of certain body parts. Fisher vectors are learned and extracted from individual body-action units and concatenated into the final representation of the walking person. Unlike previous spatio-temporal features that only take into account local dynamic appearance information, our representation aligns the spatio-temporal appearance of a pedestrian globally. Extensive experiments on public datasets show the effectiveness of our approach compared with the state of the art.
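The walking periodicity exploited above can be estimated, for example, from a 1-D gait signal such as silhouette width over time. This is a hypothetical sketch of one such estimator; the paper's actual cue and temporal alignment procedure are more involved:

```python
import numpy as np

def walking_period(signal):
    """Estimate the period (in frames) of a roughly periodic 1-D signal
    from the dominant non-DC frequency of its FFT."""
    s = np.asarray(signal, dtype=float)
    s = s - s.mean()                       # remove the DC component
    spectrum = np.abs(np.fft.rfft(s))
    k = int(np.argmax(spectrum[1:])) + 1   # skip the DC bin
    return len(s) / k

t = np.arange(200)
period = walking_period(np.sin(2 * np.pi * t / 20.0))
```

With the period known, frames can be assigned to phases of the gait cycle, which is the kind of temporal alignment the body-action units rely on.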


Neurocomputing | 2014

Joint sparse representation for video-based face recognition

Zhen Cui; Hong Chang; Shiguang Shan; Bingpeng Ma; Xilin Chen

Video-based Face Recognition (VFR) can be converted into the problem of measuring the similarity of two image sets, where the examples from a video clip construct one image set. In this paper, we consider the face images from each clip as an ensemble and formulate VFR as a Joint Sparse Representation (JSR) problem. In JSR, to adaptively learn the sparse representation of a probe clip, we simultaneously consider class-level and atom-level sparsity: the former structures the enrolled clips using the structured sparse regularizer (the L2,1-norm), while the latter selects a few related examples using the sparse regularizer (the L1-norm). In addition, we pre-train a compact dictionary to accelerate the algorithm, and impose a non-negativity constraint on the recovered coefficients to encourage positive correlations in the representation. Classification is ruled in favor of the class with the lowest accumulated reconstruction error. We conduct extensive experiments on three real-world databases: Honda, MoBo and YouTube Celebrities (YTC). The results demonstrate that our method is more competitive than state-of-the-art VFR methods.

Highlights:
- Propose a reconstruction-based method for video-based face recognition, where all images from a probe video clip are jointly recovered.
- Use two sparse constraints to make the representation more credible.
- Develop an efficient optimization algorithm.
- Obtain more competitive performance on the challenging YTC dataset.
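The two sparsity levels correspond to two standard shrinkage (proximal) operators; a minimal sketch of both is shown below. How they are combined inside the JSR solver, and the non-negativity constraint, are not shown:

```python
import numpy as np

def prox_l21(W, t):
    """Proximal step for t * ||W||_{2,1}: rows are shrunk toward zero as
    groups, so whole rows vanish together (class-level sparsity)."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W * np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)

def prox_l1(W, t):
    """Proximal step for t * ||W||_1: entry-wise soft-thresholding, so
    individual coefficients vanish (atom-level sparsity)."""
    return np.sign(W) * np.maximum(np.abs(W) - t, 0.0)
```

In a proximal-gradient solver, each iteration alternates a gradient step on the reconstruction error with these shrinkage steps on the coefficient matrix.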


ACM Multimedia | 2014

Person Search in a Scene by Jointly Modeling People Commonness and Person Uniqueness

Yuanlu Xu; Bingpeng Ma; Rui Huang; Liang Lin

This paper presents a novel framework for a multimedia search task: searching a person in a scene using human body appearance. Existing works mostly focus on two independent problems related to this task, i.e., people detection and person re-identification. However, a sequential combination of these two components does not solve the person search problem seamlessly for two reasons: 1) the errors in people detection are carried into person re-identification unavoidably; 2) the setting of person re-identification is different from that of person search which is essentially a verification problem. To bridge this gap, we propose a unified framework which jointly models the commonness of people (for detection) and the uniqueness of a person (for identification). We demonstrate superior performance of our approach on public benchmarks compared with the sequential combination of the state-of-the-art detection and identification algorithms.


Neurocomputing | 2013

A novel feature descriptor based on biologically inspired feature for head pose estimation

Bingpeng Ma; Xiujuan Chai; Tianjiang Wang

This paper proposes a novel method to improve the accuracy of head pose estimation. Since biologically inspired features (BIF) have been demonstrated to be both effective and efficient for many visual tasks, we argue that BIF can be applied to the problem of head pose estimation. By combining BIF with the well-known local binary pattern (LBP) feature, we propose a novel feature descriptor named "local biologically inspired features" (LBIF). Considering that LBIF is of very high dimensionality, ensemble-based supervised methods are applied to reduce the dimension while at the same time improving its discriminative ability. Results obtained on two different databases show that the proposed LBIF feature achieves significant improvements over state-of-the-art methods of head pose estimation.


Neurocomputing | 2014

CovGa: A novel descriptor based on symmetry of regions for head pose estimation

Bingpeng Ma; Annan Li; Xiujuan Chai; Shiguang Shan

This paper proposes a novel method to estimate the head yaw rotation using the symmetry of regions. We argue that the symmetry of 2D regions located in the same horizontal row is more intrinsically relevant to the yaw rotation of the head than the symmetry of 1D signals, while at the same time being insensitive to the identity of the face. Specifically, the proposed method relies on the effective combination of Gabor filters and covariance descriptors. We first extract the multi-scale and multi-orientation Gabor representations of the input face image, and then use covariance descriptors to compute the symmetry between two regions in terms of Gabor representations under the same scale and orientation. Since the covariance matrix can alleviate the influence caused by rotations and illumination, the proposed method is robust to such variations. In addition, the proposed method is further improved by combining it with a metric learning method named KISS MEtric learning (KISSME). Experiments on four challenging databases demonstrate that the proposed method outperforms the state of the art.


Neurocomputing | 2015

Set-label modeling and deep metric learning on person re-identification

Hao Liu; Bingpeng Ma; Lei Qin; Junbiao Pang; Chunjie Zhang; Qingming Huang

Person re-identification aims at matching individuals across multiple non-overlapping adjacent cameras. By condensing multiple gallery images of a person into a whole, we propose a novel method named Set-Label Model (SLM) to improve the performance of person re-identification under the multi-shot setting. Moreover, we utilize mutual information to measure the relevance between a query image and the gallery sets. To decrease the computational complexity, we apply a Naive-Bayes Nearest-Neighbor algorithm to approximate the mutual-information value. To overcome the limitations of traditional linear metric learning, we further develop a deep non-linear metric learning (DeepML) approach based on Neighborhood Component Analysis and Deep Belief Networks. To evaluate the effectiveness of our proposed approaches, SLM and DeepML, we have carried out extensive experiments on two challenging datasets, i-LIDS and ETHZ. The experimental results demonstrate that the proposed methods obtain better performance than state-of-the-art methods.
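The Naive-Bayes Nearest-Neighbor approximation mentioned above reduces set matching to per-descriptor nearest-neighbor lookups. A minimal sketch of the classic NBNN decision rule is given below; the mutual-information weighting used in SLM is omitted, and the data here are synthetic:

```python
import numpy as np

def nbnn_match(query, gallery_sets):
    """For each query descriptor, accumulate its squared distance to the
    nearest descriptor of each gallery set; return the label whose set
    yields the lowest total reconstruction cost."""
    totals = {}
    for label, descs in gallery_sets.items():
        d2 = ((query[:, None, :] - descs[None, :, :]) ** 2).sum(-1)
        totals[label] = float(d2.min(axis=1).sum())
    return min(totals, key=totals.get)

rng = np.random.default_rng(0)
gallery = {"A": rng.normal(0.0, 0.1, (20, 8)),
           "B": rng.normal(5.0, 0.1, (20, 8))}
best = nbnn_match(rng.normal(5.0, 0.1, (5, 8)), gallery)
```

Because no per-class model is fit, the rule handles small gallery sets gracefully, which suits the multi-shot re-identification setting.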

Collaboration


Dive into Bingpeng Ma's collaborations.

Top Co-Authors

- Shiguang Shan (Chinese Academy of Sciences)
- Xilin Chen (Chinese Academy of Sciences)
- Qingming Huang (Chinese Academy of Sciences)
- Guorong Li (Chinese Academy of Sciences)
- Hong Chang (Chinese Academy of Sciences)
- Liang Zhang (Chinese Academy of Sciences)
- Xiujuan Chai (Chinese Academy of Sciences)
- Qi Tian (University of Texas at San Antonio)
- Jiazhong Chen (Huazhong University of Science and Technology)