
Publication


Featured research published by Ancong Wu.


Workshop on Applications of Computer Vision | 2016

An Enhanced Deep Feature Representation for Person Re-identification

Shangxuan Wu; Ying-Cong Chen; Xiang Li; Ancong Wu; Jinjie You; Wei-Shi Zheng

Feature representation and metric learning are two critical components in person re-identification models. In this paper, we focus on the feature representation and claim that hand-crafted histogram features can be complementary to Convolutional Neural Network (CNN) features. We propose a novel feature extraction model called Feature Fusion Net (FFN) for pedestrian image representation. In FFN, back propagation makes the CNN features constrained by the hand-crafted features. Utilizing color histogram features (RGB, HSV, YCbCr, Lab and YIQ) and texture features (multi-scale and multi-orientation Gabor features), we obtain a new deep feature representation that is more discriminative and compact. Experiments on three challenging datasets (VIPeR, CUHK01, PRID450s) validate the effectiveness of our proposal.
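
As a rough illustration of the fusion idea, the sketch below joins a stand-in CNN branch with a precomputed hand-crafted feature vector through a fully connected fusion layer, so gradients flow back into the CNN branch. The layer sizes and the hand-crafted dimensionality are placeholders, not the paper's exact FFN configuration.

```python
import torch
import torch.nn as nn

class FeatureFusionNet(nn.Module):
    """Minimal sketch of the FFN idea: a CNN branch and a hand-crafted branch
    (color histograms + Gabor responses, precomputed) are fused by a fully
    connected layer, so back propagation through the fusion layer also
    constrains the CNN branch. Dimensions are assumptions."""
    def __init__(self, handcrafted_dim=5376, fused_dim=4096):
        super().__init__()
        self.cnn = nn.Sequential(  # small stand-in CNN branch
            nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.fusion = nn.Linear(128 + handcrafted_dim, fused_dim)

    def forward(self, image, handcrafted):
        # handcrafted: precomputed histogram/Gabor vector for the same image
        fused = torch.cat([self.cnn(image), handcrafted], dim=1)
        return torch.relu(self.fusion(fused))

# usage with random inputs; 5376 is a placeholder hand-crafted dimensionality
net = FeatureFusionNet()
feat = net(torch.randn(8, 3, 224, 224), torch.randn(8, 5376))
print(feat.shape)  # torch.Size([8, 4096])
```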


Computer Vision and Pattern Recognition | 2016

Top-Push Video-Based Person Re-identification

Jinjie You; Ancong Wu; Xiang Li; Wei-Shi Zheng

Most existing person re-identification (re-id) models focus on matching still person images across disjoint camera views. Since only limited information can be exploited from still images, it is hard (if not impossible) to overcome the occlusion, pose and camera-view change, and lighting variation problems. In comparison, video-based re-id methods can utilize extra space-time information, which contains much richer cues for matching to overcome the mentioned problems. However, we find that when using video-based representation, some inter-class differences can be much more obscure than when using still-image-based representation, because different people could not only have similar appearance but also have similar motions and actions, which are hard to align. To solve this problem, we propose a top-push distance learning model (TDL), in which we integrate a top-push constraint for matching video features of persons. The top-push constraint enforces the optimization on top-rank matching in re-id, so as to make the matching model more effective towards selecting more discriminative features to distinguish different persons. Our experiments show that the proposed video-based re-id framework outperforms the state-of-the-art video-based re-id methods.
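
The top-push constraint can be expressed as a hinge loss against the closest negative per anchor. Below is a minimal PyTorch sketch of a loss in this style; the margin, the batch-level mining, and the omission of TDL's intra-class pulling term are assumptions of the sketch, not the paper's exact objective.

```python
import torch

def top_push_loss(features, labels, margin=1.0):
    """Sketch of a top-push style hinge loss: every positive pair must beat
    the *closest* negative (the top-rank impostor) by a margin. The full TDL
    objective also includes an intra-class pulling term, omitted here."""
    dist = torch.cdist(features, features)            # pairwise Euclidean distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=features.device)

    # Top-rank impostor per anchor: the minimum distance to any negative.
    neg_dist = dist.masked_fill(same, float('inf'))
    hardest_neg = neg_dist.min(dim=1).values          # shape (N,)

    # Hinge over all positive pairs (excluding self-pairs).
    pos_mask = same & ~eye
    anchors, positives = pos_mask.nonzero(as_tuple=True)
    violations = dist[anchors, positives] - hardest_neg[anchors] + margin
    return torch.relu(violations).mean()

# usage with random features; real inputs would be video-level features
feats = torch.randn(16, 128)
labels = torch.randint(0, 4, (16,))
print(top_push_loss(feats, labels))
```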


International Conference on Computer Vision | 2017

RGB-Infrared Cross-Modality Person Re-identification

Ancong Wu; Wei-Shi Zheng; Hong-Xing Yu; Shaogang Gong; Jian-Huang Lai

Person re-identification (Re-ID) is an important problem in video surveillance, aiming to match pedestrian images across camera views. Currently, most works focus on RGB-based Re-ID. However, in some applications, RGB images are not suitable, e.g. in a dark environment or at night, and infrared (IR) imaging becomes necessary in many visual systems. Matching RGB images with infrared images is then required; the two modalities are heterogeneous, with very different visual characteristics. For person Re-ID, this is a very challenging cross-modality problem that has not been studied so far. In this work, we address the RGB-IR cross-modality Re-ID problem and contribute a new multiple-modality Re-ID dataset named SYSU-MM01, including RGB and IR images of 491 identities from 6 cameras, giving in total 287,628 RGB images and 15,792 IR images. To explore the RGB-IR Re-ID problem, we evaluate existing popular cross-domain models, including three commonly used neural network structures (one-stream, two-stream and asymmetric FC layer), and analyse the relation between them. We further propose deep zero-padding for training a one-stream network towards automatically evolving domain-specific nodes in the network for cross-modality matching. Our experiments show that RGB-IR cross-modality matching is very challenging but still feasible, with the proposed model with deep zero-padding giving the best performance. Our dataset is available at http://isee.sysu.edu.cn/project/RGBIRReID.htm.
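
The zero-padding input encoding itself is simple: each input becomes a two-channel image with the other modality's channel zeroed, so filters in a single one-stream network can specialize per modality. A minimal sketch, assuming one-channel (grayscale) inputs for both modalities:

```python
import torch

def zero_pad_modalities(rgb_gray, ir):
    """Sketch of the deep zero-padding input encoding: each image becomes a
    two-channel tensor with the other modality's channel zeroed, so a single
    one-stream network can evolve modality-specific nodes. Using a grayscale
    version of the RGB image is an assumption of this sketch."""
    rgb_in = torch.cat([rgb_gray, torch.zeros_like(rgb_gray)], dim=1)  # [gray, 0]
    ir_in = torch.cat([torch.zeros_like(ir), ir], dim=1)               # [0, ir]
    return rgb_in, ir_in

rgb_gray = torch.rand(4, 1, 256, 128)  # grayscale images from RGB cameras
ir = torch.rand(4, 1, 256, 128)        # infrared images
rgb_in, ir_in = zero_pad_modalities(rgb_gray, ir)
print(rgb_in.shape, ir_in.shape)  # torch.Size([4, 2, 256, 128]) for both
```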


IEEE International Conference on Identity, Security and Behavior Analysis (ISBA) | 2015

Towards More Reliable Matching for Person Re-identification

Xiang Li; Ancong Wu; Mei Cao; Jinjie You; Wei-Shi Zheng

Person re-identification is an important problem of matching persons across non-overlapping camera views. However, re-identification is still far from achieving reliable matching. First, many existing approaches are whole-body-based matching, and how body parts could affect and assist the matching is still not clearly known. Second, the learned similarity measurement/metric is applied equally to each pair of probe and gallery images, and the bias of the measurement is not considered. In this paper, we address the above two problems in order to conduct a more reliable matching. More specifically, we propose a reliable integrated matching scheme (IMS), which uses body parts to assist matching of the whole body. Moreover, a sparsity-based confidence is also presented for regulating the learned metric to improve the matching reliability. Experiments conducted on three publicly available datasets confirm that the proposed scheme is effective for person re-identification.
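
As a loose illustration only, the sketch below combines whole-body and part-based distances and lets a per-probe confidence regulate how far the learned metric's scores are trusted. The combination rule, the choice of parts and the confidence handling are hypothetical placeholders, not the paper's IMS formulation.

```python
import numpy as np

def integrated_distance(d_whole, d_parts, confidence, part_weight=0.5):
    """Hypothetical integrated matching score: part distances assist the
    whole-body distance, and a per-probe confidence in [0, 1] regulates how
    much the learned metric is trusted (falling back towards a neutral score
    when confidence is low). All weights here are illustrative assumptions."""
    d_part = np.mean(d_parts, axis=0)                        # average over body parts
    d_combined = (1 - part_weight) * d_whole + part_weight * d_part
    return confidence * d_combined + (1 - confidence) * np.median(d_combined)

d_whole = np.random.rand(100)      # whole-body distances to 100 gallery images
d_parts = np.random.rand(3, 100)   # e.g. head/torso/legs distances (illustrative)
print(integrated_distance(d_whole, d_parts, confidence=0.8).shape)  # (100,)
```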


IEEE Transactions on Image Processing | 2017

Robust Depth-Based Person Re-Identification

Ancong Wu; Wei-Shi Zheng; Jian-Huang Lai

Person re-identification (re-id) aims to match people across non-overlapping camera views. So far, RGB-based appearance has been widely used in most existing works. However, when people appear under extreme illumination or change their clothes, RGB appearance-based re-id methods tend to fail. To overcome this problem, we propose to exploit depth information, which provides body shape and skeleton cues that are invariant to illumination and color change. More specifically, we exploit the depth voxel covariance descriptor and further propose a locally rotation-invariant depth shape descriptor, called the Eigen-depth feature, to describe pedestrian body shape. We prove that the distance between any two covariance matrices on the Riemannian manifold is equivalent to the Euclidean distance between the corresponding Eigen-depth features. Furthermore, we propose a kernelized implicit feature transfer scheme to estimate the Eigen-depth feature implicitly from an RGB image when depth information is not available. We find that combining the estimated depth features with RGB-based appearance features can sometimes help to better reduce the visual ambiguities of appearance features caused by illumination and similar clothes. The effectiveness of our models was validated on publicly available depth pedestrian datasets in comparison with related re-id methods.
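
One standard construction with exactly this property is the log-Euclidean metric on symmetric positive-definite matrices: vectorizing the matrix logarithm turns the Riemannian distance into a plain Euclidean one. Whether this matches the paper's Eigen-depth derivation in every detail is an assumption; the sketch below just checks the equivalence numerically.

```python
import numpy as np
from scipy.linalg import logm

def log_euclidean_vector(cov):
    """Vectorize the matrix logarithm of an SPD covariance; under the
    log-Euclidean metric, Euclidean distance between these vectors equals
    the Riemannian (log-Euclidean) distance between the matrices."""
    return logm(cov).real.ravel()

rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(200, 6)), rng.normal(size=(200, 6))
C1, C2 = np.cov(X1, rowvar=False), np.cov(X2, rowvar=False)

riemannian = np.linalg.norm(logm(C1).real - logm(C2).real, 'fro')
euclidean = np.linalg.norm(log_euclidean_vector(C1) - log_euclidean_vector(C2))
print(np.isclose(riemannian, euclidean))  # True
```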


Asian Conference on Pattern Recognition | 2015

Depth-Based Person Re-identification

Ancong Wu; Wei-Shi Zheng; Jian-Huang Lai

Person re-identification aims to match people across non-overlapping camera views. For this purpose, most works exploit appearance cues, assuming that the color of clothes is discriminative in the short term. However, when people appear under extreme illumination or change their clothes, appearance-based methods tend to fail. Fortunately, depth images provide body shape and skeleton information that is invariant to illumination and color, but only a few depth-based methods have been developed so far. In this paper, we propose a covariance-based rotation-invariant 3D descriptor called Eigen-depth to describe pedestrian body shape, and the property of rotation invariance is proven in theory. It is also insensitive to slight shape changes and invariant to color change and background. We combine our descriptor with a skeleton-based feature to obtain a complete representation of the human body. The effectiveness is validated on the RGBD-ID and BIWI RGBD-ID datasets.
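
The rotation-invariance claim rests on a basic fact: rotating the underlying 3D points maps the covariance C to R C Rᵀ, a similarity transform that preserves eigenvalues. A quick numeric check with a toy point cloud (a stand-in, not the full Eigen-depth pipeline):

```python
import numpy as np
from scipy.stats import ortho_group

rng = np.random.default_rng(1)
points = rng.normal(size=(500, 3))         # stand-in 3D body-surface points
R = ortho_group.rvs(3, random_state=1)     # random 3D rotation/reflection

C = np.cov(points, rowvar=False)
C_rot = np.cov(points @ R.T, rowvar=False) # covariance after rotating the points

# Eigenvalues are unchanged: cov(Rx) = R C R^T is a similarity transform.
print(np.allclose(np.sort(np.linalg.eigvalsh(C)),
                  np.sort(np.linalg.eigvalsh(C_rot))))  # True
```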


International Joint Conference on Artificial Intelligence | 2018

Adversarial Attribute-Image Person Re-identification

Zhou Yin; Wei-Shi Zheng; Ancong Wu; Hong-Xing Yu; Hai Wan; Xiaowei Guo; Feiyue Huang; Jian-Huang Lai

While attributes have been widely used for person re-identification (Re-ID), which aims at matching images of the same person across disjoint camera views, they are used either as extra features or for performing multi-task learning to assist the image-image matching task. However, how to find a set of person images according to a given attribute description, which is very practical in many surveillance applications, remains a rarely investigated cross-modality matching problem in person Re-ID. In this work, we present this challenge and formulate the task as a joint space learning problem. By imposing an attribute-guided attention mechanism for images and a semantically consistent adversary strategy for attributes, each modality, i.e., images and attributes, successfully learns semantically correlated concepts under the guidance of the other. We conducted extensive experiments on three attribute datasets and demonstrated that the proposed joint space learning method is so far the most effective method for the attribute-image cross-modality person Re-ID problem.
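
A minimal sketch of the joint-space idea follows: an image encoder and an attribute encoder project both modalities onto a shared unit sphere, trained so that matching pairs are close. The encoders, dimensions and cosine loss are placeholders; the paper's attribute-guided attention and semantic-consistent adversary are not reproduced here.

```python
import torch
import torch.nn as nn

class JointSpace(nn.Module):
    """Toy joint embedding: images and binary attribute vectors are mapped
    into one shared, L2-normalized space. Architecture is an assumption."""
    def __init__(self, n_attributes=30, dim=256):
        super().__init__()
        self.img_enc = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim))
        self.attr_enc = nn.Sequential(
            nn.Linear(n_attributes, 128), nn.ReLU(), nn.Linear(128, dim))

    def forward(self, images, attributes):
        z_img = nn.functional.normalize(self.img_enc(images), dim=1)
        z_attr = nn.functional.normalize(self.attr_enc(attributes), dim=1)
        return z_img, z_attr

model = JointSpace()
z_img, z_attr = model(torch.randn(8, 3, 128, 64), torch.rand(8, 30))
loss = (1 - (z_img * z_attr).sum(dim=1)).mean()  # pull matched pairs together
print(loss.item())
```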


European Conference on Computer Vision | 2018

Adversarial Open-World Person Re-Identification

Xiang Li; Ancong Wu; Wei-Shi Zheng

In a typical real-world application of re-id, a watch-list (gallery set) of a handful of target people (e.g. suspects) must be tracked among a large volume of non-target people across camera views; this is called open-world person re-id. Different from conventional (closed-world) person re-id, in the open-world setting a large portion of probe samples are not from target people. Moreover, it often happens that a non-target person looks similar to a target one and therefore seriously challenges a re-id system. In this work, we introduce a deep open-world group-based person re-id model based on adversarial learning to alleviate the attack problem caused by similar non-target people. The main idea is to learn to attack the feature extractor on the target people by using a GAN to generate very target-like images (imposters), while the model simultaneously makes the feature extractor learn to tolerate the attack by discriminative learning, so as to realize group-based verification. We call the proposed framework adversarial open-world person re-identification, and realize it with our Adversarial PersonNet (APN), which jointly learns a generator, a person discriminator, a target discriminator and a feature extractor; the feature extractor and the target discriminator share the same weights, so that the feature extractor learns to tolerate the attack by imposters for better group-based verification. While open-world person re-id is challenging, we show for the first time that an adversarial-based approach helps stabilize a person re-id system under imposter attack more effectively.
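
A rough sketch of the adversarial loop described above: a generator forges target-like imposters, while a target head on top of a shared feature extractor learns to keep rejecting them. All modules are toy placeholders; the paper's person discriminator and exact weight-sharing scheme are not reproduced.

```python
import torch
import torch.nn as nn

# Toy stand-ins for APN's components (dimensions are assumptions).
feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 32, 128))
target_head = nn.Linear(128, 1)   # target vs non-target, on shared features
generator = nn.Sequential(nn.Linear(64, 3 * 64 * 32), nn.Tanh())
bce = nn.BCEWithLogitsLoss()

real_targets = torch.rand(8, 3 * 64 * 32)   # stand-in target images (flattened)
imposters = generator(torch.randn(8, 64))   # forged target-like images

# Discriminative step: tolerate the attack by separating targets from imposters.
logits_real = target_head(feature_extractor(real_targets))
logits_fake = target_head(feature_extractor(imposters.detach()))
d_loss = bce(logits_real, torch.ones_like(logits_real)) + \
         bce(logits_fake, torch.zeros_like(logits_fake))

# Adversarial step: the generator tries to make imposters pass as targets.
g_loss = bce(target_head(feature_extractor(imposters)), torch.ones(8, 1))
print(d_loss.item(), g_loss.item())
```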


International Conference on Computer Vision | 2017

Cross-View Asymmetric Metric Learning for Unsupervised Person Re-Identification

Hong-Xing Yu; Ancong Wu; Wei-Shi Zheng


National Conference on Artificial Intelligence (AAAI) | 2018

Deep Low-Resolution Person Re-Identification

Jiening Jiao; Wei-Shi Zheng; Ancong Wu; Xiatian Zhu; Shaogang Gong

Collaboration


Dive into Ancong Wu's collaborations.

Top Co-Authors

Xiang Li
Sun Yat-sen University

Jinjie You
Sun Yat-sen University

Zhou Yin
Sun Yat-sen University

Shaogang Gong
Queen Mary University of London

Hai Wan
Sun Yat-sen University

Mei Cao
East China Normal University