Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Mang Ye is active.

Publication


Featured research published by Mang Ye.


IEEE Transactions on Multimedia | 2016

Zero-Shot Person Re-identification via Cross-View Consistency

Zheng Wang; Ruimin Hu; Chao Liang; Yi Yu; Junjun Jiang; Mang Ye; Jun Chen; Qingming Leng

Person re-identification, aiming to identify images of the same person from various cameras configured in different places, has attracted much attention in the multimedia retrieval community. In this problem, choosing a proper distance metric is a crucial aspect, and many classic methods utilize a uniform learnt metric. However, their performance is limited due to ignoring the zero-shot and fine-grained characteristics presented in real person re-identification applications. In this paper, we investigate two consistencies across two cameras, which are cross-view support consistency and cross-view projection consistency. The philosophy behind it is that, in spite of visual changes in two images of the same person under two camera views, the support sets in their respective views are highly consistent, and after being projected to the same view, their context sets are also highly consistent. Based on the above phenomena, we propose a data-driven distance metric (DDDM) method, re-exploiting the training data to adjust the metric for each query-gallery pair. Experiments conducted on three public data sets have validated the effectiveness of the proposed method, with a significant improvement over three baseline metric learning methods. In particular, on the public VIPeR dataset, the proposed method achieves an accuracy rate of 42.09% at rank-1, which outperforms the state-of-the-art methods by 4.29%.
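The cross-view support consistency described in the abstract can be illustrated with a small sketch (the function names and the use of Jaccard overlap are illustrative choices, not the paper's exact formulation): an image's support set is its k nearest neighbours within its own camera view, and two images of the same person should have highly overlapping support sets.

```python
def support_set(idx, dist_matrix, k):
    """Support set of image `idx`: its k nearest neighbours (by distance)
    within the same camera view, excluding itself."""
    order = sorted((j for j in range(len(dist_matrix[idx])) if j != idx),
                   key=lambda j: dist_matrix[idx][j])
    return set(order[:k])

def support_consistency(set_a, set_b):
    """Jaccard overlap of two support sets; 1.0 means fully consistent."""
    union = set_a | set_b
    return len(set_a & set_b) / len(union) if union else 0.0
```

A pair with consistency near 1.0 would be treated as a likely match; the paper's DDDM method goes further and adjusts the metric per query-gallery pair.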


IEEE Transactions on Multimedia | 2016

Person Reidentification via Ranking Aggregation of Similarity Pulling and Dissimilarity Pushing

Mang Ye; Chao Liang; Yi Yu; Zheng Wang; Qingming Leng; Chunxia Xiao; Jun Chen; Ruimin Hu

Person reidentification is a key technique to match different persons observed in nonoverlapping camera views. Many researchers treat it as a special object-retrieval problem, where ranking optimization plays an important role. Existing ranking optimization methods mainly utilize the similarity relationship between the probe and gallery images to optimize the original ranking list, but seldom consider the important dissimilarity relationship. In this paper, we propose to use both similarity and dissimilarity cues in a ranking optimization framework for person reidentification. Its core idea is that the true match should not only be similar to those strongly similar galleries of the probe, but also be dissimilar to those strongly dissimilar galleries of the probe. Furthermore, motivated by the philosophy of multiview verification, a ranking aggregation algorithm is proposed to enhance the detection of similarity and dissimilarity based on the following assumption: the true match should be similar to the probe in different baseline methods. In other words, if a gallery image is strongly similar to the probe in one method, while simultaneously strongly dissimilar to the probe in another method, it will probably be a wrong match of the probe. Extensive experiments conducted on public benchmark datasets and comparisons with different baseline methods have shown the great superiority of the proposed ranking optimization method.
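The pulling/pushing idea can be sketched in a few lines (the additive re-scoring, the choice of k, and all names below are illustrative assumptions, not the paper's exact algorithm): each gallery item's score is raised by its similarity to the probe's strongly similar galleries and lowered by its similarity to the strongly dissimilar ones.

```python
def optimize_ranking(probe_scores, gallery_scores, k=1):
    """Re-rank galleries for one probe.
    probe_scores[g]      : similarity of gallery g to the probe.
    gallery_scores[g][h] : similarity between galleries g and h.
    The top-k galleries act as "pullers", the bottom-k as "pushers"."""
    order = sorted(probe_scores, key=probe_scores.get, reverse=True)
    strong_sim, strong_dis = order[:k], order[-k:]
    revised = {}
    for g in probe_scores:
        pull = sum(gallery_scores[g][h] for h in strong_sim if h != g)
        push = sum(gallery_scores[g][h] for h in strong_dis if h != g)
        revised[g] = probe_scores[g] + pull - push
    return sorted(revised, key=revised.get, reverse=True)
```

In this toy setting a gallery that strongly resembles the probe's best match can overtake it after re-ranking, which is exactly the kind of revision the paper's framework performs.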


acm multimedia | 2015

Ranking Optimization for Person Re-identification via Similarity and Dissimilarity

Mang Ye; Chao Liang; Zheng Wang; Qingming Leng; Jun Chen

Person re-identification is a key technique to match different persons observed in non-overlapping camera views. Many researchers treat it as a special object retrieval problem, where ranking optimization plays an important role. Existing ranking optimization methods utilize the similarity relationship between the probe and gallery images to optimize the original ranking list, while the dissimilarity relationship is seldom investigated. In this paper, we propose to use both similarity and dissimilarity cues in a ranking optimization framework for person re-identification. Its core idea is based on the phenomenon that the true match should not only be similar to the strongly similar samples of the probe but also dissimilar to the strongly dissimilar samples. Extensive experiments have shown the great superiority of the proposed ranking optimization method.


conference on multimedia modeling | 2015

Coupled-View Based Ranking Optimization for Person Re-identification

Mang Ye; Jun Chen; Qingming Leng; Chao Liang; Zheng Wang; Kaimin Sun

Person re-identification aims to match different persons observed in non-overlapping camera views. Researchers have proposed many person descriptors based on global or local descriptions; although both kinds achieve satisfying matching results, their ranking lists usually vary considerably for the same query person. This motivates us to investigate an approach that aggregates them to optimize the original matching results. In this paper, we propose a coupled-view based ranking optimization method through cross KNN rank aggregation and graph-based re-ranking to revise the original ranking lists. Its core assumption is that images of the same person should share similar visual appearance in both global and local views. Extensive experiments on two datasets show the superiority of our proposed method, with an average improvement of 20-30% over the state-of-the-art methods at CMC@1.
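The aggregation premise, fusing the ranking lists produced by global and local descriptors, can be reduced to a minimal sketch (mean-rank fusion is a simple illustrative stand-in here; the paper's cross KNN rank aggregation and graph-based re-ranking are more elaborate):

```python
def aggregate_ranks(rank_lists):
    """Fuse several ranking lists (best first, over the same candidate set)
    by mean rank position: the lower the mean position, the better."""
    items = rank_lists[0]
    mean_rank = {i: sum(r.index(i) for r in rank_lists) / len(rank_lists)
                 for i in items}
    return sorted(items, key=lambda i: mean_rank[i])
```

A candidate ranked highly by both views stays on top, while one that only a single view favours is pulled down, which matches the coupled-view assumption above.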


international joint conference on artificial intelligence | 2018

Visible Thermal Person Re-Identification via Dual-Constrained Top-Ranking

Mang Ye; Zheng Wang; Xiangyuan Lan; Pong Chi Yuen

Cross-modality person re-identification between the thermal and visible domains is extremely important for night-time surveillance applications. Existing works in this field mainly focus on learning sharable feature representations to handle the cross-modality discrepancies. However, besides the cross-modality discrepancy caused by different camera spectra, visible thermal person re-identification also suffers from large cross-modality and intra-modality variations caused by different camera views and human poses. In this paper, we propose a dual-path network with a novel bi-directional dual-constrained top-ranking loss to learn discriminative feature representations. It is advantageous in two aspects: 1) it learns features end-to-end directly from the data without extra metric learning steps; 2) it simultaneously handles the cross-modality and intra-modality variations to ensure the discriminability of the learnt representations. Meanwhile, an identity loss is further incorporated to model the identity-specific information to handle large intra-class variations. Extensive experiments on two datasets demonstrate the superior performance compared to state-of-the-art methods.
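The dual constraint can be sketched as two hinge terms over one anchor (the exact loss, margins, and hard-example mining in the paper differ; everything below, including the margin values, is an illustrative assumption): one margin separates the anchor from its hardest cross-modality negative, another from its hardest same-modality negative.

```python
def dual_constrained_loss(anchor, cross_pos, cross_neg, intra_pos, intra_neg,
                          m_cross=0.5, m_intra=0.3):
    """Hinge-style sketch of a dual-constrained top-ranking loss.
    The hardest cross-modality negative must lie at least m_cross farther
    from the anchor than the hardest cross-modality positive, and likewise
    within the anchor's own modality with margin m_intra."""
    def d(x, y):  # Euclidean distance between feature vectors
        return sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5
    cross = max(0.0, m_cross + max(d(anchor, p) for p in cross_pos)
                             - min(d(anchor, n) for n in cross_neg))
    intra = max(0.0, m_intra + max(d(anchor, p) for p in intra_pos)
                             - min(d(anchor, n) for n in intra_neg))
    return cross + intra
```

Applying the same two constraints in both directions (visible anchors against thermal samples and vice versa) gives the "bi-directional" part of the loss.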


pacific rim conference on multimedia | 2015

Person Re-identification via Attribute Confidence and Saliency

Jun Liu; Chao Liang; Mang Ye; Zheng Wang; Yang Yang; Zhen Han; Kaimin Sun

Person re-identification is a problem of recognising and associating persons across different cameras. Existing methods usually take visual appearance features to address this issue, but such visual descriptions are sensitive to environmental variation. By comparison, semantic attributes are more robust in complicated environments. Therefore, several attribute-based methods have been introduced, but most of them ignore the diversity of different attributes. We characterize the diversity of attributes in two respects: the attribute confidence, which denotes the descriptive power, and the attribute saliency, which expresses the discriminative power. Specifically, the attribute confidence is determined by the performance of each attribute classifier, and the attribute saliency is defined by its occurrence frequency, similar to the IDF (Inverse Document Frequency) [1] idea in information retrieval. Then, each attribute is assigned an appropriate weighting according to its saliency and confidence when calculating similarity distances. Based on the above considerations, a novel person re-identification method is proposed. Experiments conducted on two benchmark datasets have validated the effectiveness of the proposed method.
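The confidence-times-saliency weighting can be sketched directly from the abstract (the multiplicative combination and the plain IDF form below are illustrative assumptions; the paper may combine the two factors differently):

```python
import math

def attribute_weights(accuracies, occurrence, n_people):
    """Per-attribute weight = confidence (classifier accuracy) times
    saliency (IDF of how many of n_people exhibit the attribute):
    rare, reliably detected attributes weigh the most."""
    return [acc * math.log(n_people / occ)
            for acc, occ in zip(accuracies, occurrence)]

def weighted_distance(a, b, weights):
    """Weighted Hamming-style distance between binary attribute vectors."""
    return sum(w for ai, bi, w in zip(a, b, weights) if ai != bi)
```

An attribute everyone shares gets IDF 0 and so contributes nothing to the distance, exactly as a non-discriminative attribute should.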


international joint conference on artificial intelligence | 2018

Cascaded SR-GAN for Scale-Adaptive Low Resolution Person Re-identification

Zheng Wang; Mang Ye; Fan Yang; Xiang Bai; Shin'ichi Satoh

Person re-identification (REID) is an important task in video surveillance and forensics applications. Most previous approaches are based on a key assumption that all person images have uniform and sufficiently high resolutions. In practice, various low resolutions and scale mismatches always exist in open-world REID. We term this problem Scale-Adaptive Low Resolution Person Re-identification (SALR-REID). The most intuitive way to address it is to increase the various low resolutions (not only low, but also of different scales) to a uniform high resolution. SRGAN is one of the most competitive image super-resolution deep networks, designed with a fixed upscaling factor. However, it is still not suitable for the SALR-REID task, which requires a network that not only synthesizes high-resolution images with different upscaling factors, but also extracts discriminative image features for judging a person's identity. (1) To promote the ability of scale-adaptive upscaling, we cascade multiple SRGANs in series. (2) To supplement the ability of image feature representation, we plug in a re-identification network. With a unified formulation, a Cascaded Super-Resolution GAN (CSRGAN) framework is proposed. Extensive evaluations on two simulated datasets and one public dataset demonstrate the advantages of our method over related state-of-the-art methods.
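The scale-adaptive part of the cascade reduces to a depth choice: with a fixed per-stage upscaling factor, the number of chained super-resolution stages depends on how far the input resolution is from the target. The helper below is a minimal illustrative sketch of that choice, not part of the CSRGAN formulation:

```python
def cascade_stages(input_size, target_size, factor=2):
    """How many fixed-factor super-resolution stages are needed to lift
    `input_size` to at least `target_size` (factor=2 mirrors a typical
    fixed upscaling factor per stage)."""
    stages, size = 0, input_size
    while size < target_size:
        size *= factor
        stages += 1
    return stages
```

So a 32-pixel-tall crop needs three x2 stages to reach 256, while an already-adequate input passes through zero stages, which is the scale-adaptivity the abstract describes.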


european conference on computer vision | 2018

Robust Anchor Embedding for Unsupervised Video Person re-IDentification in the Wild

Mang Ye; Xiangyuan Lan; Pong Chi Yuen

This paper addresses the scalability and robustness issues of estimating labels from imbalanced unlabeled data for unsupervised video-based person re-identification (re-ID). To this end, we propose a novel Robust AnChor Embedding (RACE) framework via deep feature representation learning for large-scale unsupervised video re-ID. Within this framework, anchor sequences representing different persons are first selected to formulate an anchor graph, which also initializes the CNN model to get discriminative feature representations for later label estimation. To accurately estimate labels from unlabeled sequences with noisy frames, robust anchor embedding is introduced based on the regularized affine hull. Efficiency is ensured with kNN anchor embedding instead of the whole anchor set under manifold assumptions. After that, a robust and efficient top-k counts label prediction strategy is proposed to predict the labels of unlabeled image sequences. With the newly estimated labeled sequences, the unified anchor embedding framework further facilitates the feature learning process. Extensive experimental results on the large-scale dataset show that the proposed method outperforms existing unsupervised video re-ID methods.
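A top-k counts label prediction can be sketched as frame-level voting (a simplified reading of the strategy named in the abstract; the paper's version operates on the anchor embedding, and all names here are illustrative): each frame votes for its nearest anchors, and the sequence takes the most-voted anchor's label, so a few noisy frames cannot flip the prediction.

```python
from collections import Counter

def topk_counts_label(frame_dists, k=1):
    """frame_dists: one dict per frame mapping anchor_id -> distance.
    Each frame votes for its k nearest anchors; the whole sequence is
    assigned the anchor that collects the most votes."""
    votes = Counter()
    for dists in frame_dists:
        nearest = sorted(dists, key=dists.get)[:k]
        votes.update(nearest)
    return votes.most_common(1)[0][0]
```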


pacific rim conference on multimedia | 2015

Adaptive Margin Nearest Neighbor for Person Re-Identification

Lei Yao; Jun Chen; Yi Yu; Zheng Wang; Wenxin Huang; Mang Ye; Ruimin Hu

Person re-identification is a challenging issue due to large visual appearance changes caused by variations in viewpoint, lighting, background clutter and occlusion among different cameras. Recently, Mahalanobis metric learning methods, which aim to find a global, linear transformation of the feature space between cameras [1, 2, 3, 4], have been widely used in person re-identification. In order to maximize the inter-class variation, general Mahalanobis metric learning methods usually push impostors (i.e., all negative samples that are nearer than the target neighbors) to a fixed threshold distance away, treating all these impostors equally without considering their diversity. However, for person re-identification, the discrepancies among impostors are useful for refining the ranking list. Motivated by this observation, we propose an Adaptive Margin Nearest Neighbor (AMNN) method for person re-identification. AMNN takes unequal treatment of each sample's impostors by pushing them to adaptive, variable margins away. Extensive comparative experiments conducted on two standard datasets have confirmed the superiority of the proposed method.
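The contrast with a fixed-threshold push can be sketched as a hinge loss with per-impostor margins. The abstract does not give AMNN's margin formula, so the scaling below (closer impostors get a larger margin) is a purely hypothetical stand-in for the idea of unequal treatment:

```python
def adaptive_margin_loss(d_target, impostor_dists, base_margin=1.0):
    """Hinge loss where each impostor gets its own margin instead of one
    fixed threshold. Hypothetical rule: impostors already closer than the
    target neighbor are pushed with a proportionally larger margin."""
    loss = 0.0
    for d_imp in impostor_dists:
        margin = base_margin * (1.0 + max(0.0, d_target - d_imp))
        loss += max(0.0, d_target + margin - d_imp)
    return loss
```

With a fixed margin both impostors below would be treated alike; here the one closer than the target neighbor incurs the whole penalty.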


conference on multimedia modeling | 2016

Spatial Constrained Fine-Grained Color Name for Person Re-identification

Yang Yang; Yuhong Yang; Mang Ye; Wenxin Huang; Zheng Wang; Chao Liang; Lei Yao; Chunjie Zhang

Person re-identification is a key technique to match different persons observed in non-overlapping camera views. It is a challenging problem due to the huge intra-class variations caused by illumination, poses, viewpoints, occlusions and so on. To address these issues, researchers have proposed many visual descriptors. However, these visual features may be unstable in complicated environments. Comparatively, semantic features can be a good supplement to visual feature descriptors thanks to their robustness. As a representative semantic feature, the color name is utilized in this paper. The color name is a semantic description of an image and shows good robustness to photometric variations. Traditional color name based methods are limited in discriminative power due to their finite color categories, only 11 or 16 kinds. In this paper, a new fine-grained color name approach based on the bag-of-words model is proposed. Moreover, spatial information, with its advantage in strengthening constraints among features in varying environments, is further applied to optimize our method. Extensive experiments conducted on benchmark datasets have shown the great superiority of the proposed method.
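The bag-of-words view of color names can be sketched in a few lines (the codebook here is hand-picked for illustration; the paper learns a fine-grained codebook and adds spatial constraints on top): each pixel is assigned to its nearest codebook color ("word"), and the image is described by the normalized word histogram.

```python
def color_histogram(pixels, codebook):
    """Assign each RGB pixel to its nearest codebook color and return the
    normalized bag-of-words histogram over the codebook."""
    def nearest(p):
        return min(range(len(codebook)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(p, codebook[i])))
    hist = [0.0] * len(codebook)
    for p in pixels:
        hist[nearest(p)] += 1
    total = sum(hist)
    return [h / total for h in hist]
```

Growing the codebook beyond the traditional 11 or 16 color names is what makes the representation fine-grained.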

Collaboration


Dive into Mang Ye's collaborations.

Top Co-Authors

Pong Chi Yuen

Hong Kong Baptist University


Xiangyuan Lan

Hong Kong Baptist University


Yi Yu

National Institute of Informatics
