Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Shengjin Wang is active.

Publication


Featured research published by Shengjin Wang.


computer vision and pattern recognition | 2014

Packing and Padding: Coupled Multi-index for Accurate Image Retrieval

Liang Zheng; Shengjin Wang; Ziqiong Liu; Qi Tian

In Bag-of-Words (BoW) based image retrieval, the SIFT visual word has low discriminative power, so false positive matches are prevalent. Apart from the information loss during quantization, another cause is that the SIFT feature only describes the local gradient distribution. To address this problem, this paper proposes a coupled Multi-Index (c-MI) framework to perform feature fusion at the indexing level. Basically, complementary features are coupled into a multi-dimensional inverted index. Each dimension of c-MI corresponds to one kind of feature, and the retrieval process votes for images that are similar in both the SIFT and the other feature spaces. Specifically, we exploit the fusion of a local color feature into c-MI. While the precision of visual matching is greatly enhanced, we adopt Multiple Assignment to improve recall. The joint cooperation of SIFT and color features significantly reduces the impact of false positive matches. Extensive experiments on several benchmark datasets demonstrate that c-MI improves retrieval accuracy significantly while consuming only half of the query time of the baseline. Importantly, we show that c-MI complements many prior techniques well. Assembling these methods, we obtain an mAP of 85.8% on Holidays and an N-S score of 3.85 on Ukbench, which compare favorably with the state of the art.
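
To make the indexing-level fusion concrete, here is a minimal sketch of a coupled multi-index: a 2-D inverted index keyed by (SIFT word, color word) pairs, so a vote is cast only when both feature spaces agree. The class and word IDs are hypothetical illustrations, not the authors' implementation.

from collections import defaultdict

# Minimal sketch of a coupled Multi-Index (c-MI): a 2-D inverted index
# keyed by (sift_word, color_word). Features are assumed to be already
# quantized to integer word IDs; illustrative only, not the paper's code.

class CoupledMultiIndex:
    def __init__(self):
        self.index = defaultdict(list)  # (sift_word, color_word) -> [image_id]

    def add(self, image_id, features):
        # features: iterable of (sift_word, color_word) pairs for one image
        for sift_w, color_w in features:
            self.index[(sift_w, color_w)].append(image_id)

    def query(self, features):
        # A vote is cast only when BOTH the SIFT word and the color word
        # match, which is what suppresses SIFT-only false positives.
        votes = defaultdict(int)
        for sift_w, color_w in features:
            for image_id in self.index.get((sift_w, color_w), ()):
                votes[image_id] += 1
        return sorted(votes.items(), key=lambda kv: -kv[1])

cmi = CoupledMultiIndex()
cmi.add("img0", [(3, 7), (5, 2)])
cmi.add("img1", [(3, 9), (5, 2)])
print(cmi.query([(3, 7), (5, 2)]))  # img0 agrees twice, img1 only once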


computer vision and pattern recognition | 2015

Query-adaptive late fusion for image search and person re-identification

Liang Zheng; Shengjin Wang; Lu Tian; Fei He; Ziqiong Liu; Qi Tian

Feature fusion has been proven effective [35, 36] in image search. Typically, it is assumed that the to-be-fused heterogeneous features work well by themselves for the query. However, in a more realistic situation, one does not know in advance whether a feature is effective or not for a given query. As a result, it is of great importance to identify feature effectiveness in a query-adaptive manner.
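
As a hedged sketch of what query-adaptive weighting can look like: an effective feature tends to produce a few strong matches followed by a sharp drop in its sorted score curve, while an ineffective one yields a flat curve. The heuristic below weights each feature by that head-to-tail contrast; it is a simplified stand-in for the paper's method, and all names are invented.

import numpy as np

# Hedged sketch of query-adaptive late fusion: weight each feature by how
# sharply its sorted score curve drops. A feature that works for this query
# separates a few strong matches from the rest; a flat curve suggests it is
# uninformative. Simplified proxy, not the paper's exact normalization.

def adaptive_weight(scores, k=10):
    s = np.sort(scores)[::-1]
    head, tail = s[:k].mean(), s[k:].mean() + 1e-8
    return max(head / tail - 1.0, 0.0)  # positive only if head >> tail

def late_fusion(score_lists):
    # score_lists: one 1-D score array per feature, same database order
    weights = np.array([adaptive_weight(s) for s in score_lists])
    weights /= weights.sum() + 1e-8
    normed = [s / (np.linalg.norm(s) + 1e-8) for s in score_lists]
    return sum(w * s for w, s in zip(weights, normed))

good = np.array([0.9, 0.85, 0.1, 0.08, 0.05] + [0.02] * 20)  # sharp drop
flat = np.full(25, 0.5)                                      # uninformative
print(late_fusion([good, flat]).argsort()[::-1][:3])  # ranking follows 'good'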


IEEE Transactions on Image Processing | 2014

Coupled binary embedding for large-scale image retrieval

Liang Zheng; Shengjin Wang; Qi Tian

Visual matching is a crucial step in image retrieval based on the bag-of-words (BoW) model. In the baseline method, two keypoints are considered a matching pair if their SIFT descriptors are quantized to the same visual word. However, the SIFT visual word has two limitations. First, it loses most of its discriminative power during quantization. Second, SIFT only describes the local texture feature. Both drawbacks impair the discriminative power of the BoW model and lead to false positive matches. To tackle this problem, this paper proposes to embed multiple binary features at the indexing level. To model the correlation between features, a multi-IDF scheme is introduced, through which different binary features are coupled into the inverted file. We show that matching verification methods based on binary features, such as Hamming embedding, can be effectively incorporated into our framework. As an extension, we explore the fusion of a binary color feature into image retrieval. The joint integration of the SIFT visual word and binary features greatly enhances the precision of visual matching, reducing the impact of false positive matches. Our method is evaluated through extensive experiments on four benchmark datasets (Ukbench, Holidays, DupImage, and MIR Flickr 1M). We show that our method significantly improves the baseline approach. In addition, large-scale experiments indicate that the proposed method requires acceptable memory usage and query time compared with other approaches. Further, when a global color feature is integrated, our method yields performance competitive with the state of the art.
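
The sketch below illustrates the verification idea the abstract refers to, in the style of Hamming embedding: each indexed feature stores a binary signature next to its visual word, and a word match counts only if the signatures are close in Hamming distance. It is a toy illustration with made-up signatures; the paper's multi-IDF coupling is omitted.

from collections import defaultdict

# Sketch of binary-signature verification inside an inverted file, in the
# spirit of Hamming embedding: a visual-word match is kept only if the
# stored signature is close to the query's in Hamming distance. Signatures
# are plain ints; IDF weighting is left out.

def hamming(a, b):
    return bin(a ^ b).count("1")

index = defaultdict(list)  # visual word -> [(image_id, signature)]

def add(image_id, features):
    for word, sig in features:  # features: (visual_word, binary_signature)
        index[word].append((image_id, sig))

def query(features, max_dist=2):
    votes = defaultdict(int)
    for word, sig in features:
        for image_id, db_sig in index.get(word, ()):
            if hamming(sig, db_sig) <= max_dist:  # verification step
                votes[image_id] += 1
    return sorted(votes.items(), key=lambda kv: -kv[1])

add("img0", [(3, 0b10110010), (7, 0b00001111)])
add("img1", [(3, 0b01001101)])
print(query([(3, 0b10110011)]))  # img0 passes verification, img1 is rejected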


systems man and cybernetics | 2010

Action and Gait Recognition From Recovered 3-D Human Joints

Junxia Gu; Xiaoqing Ding; Shengjin Wang; Youshou Wu

A common viewpoint-free framework that fuses pose recovery and classification for action and gait recognition is presented in this paper. First, a markerless pose recovery method is adopted to automatically capture 3-D human joint and pose parameter sequences from volume data. Second, multiple configuration features (combinations of joints) and movement features (position, orientation, and height of the body) are extracted from the recovered sequences. A hidden Markov model (HMM) and an exemplar-based HMM are then used to model the movement features and configuration features, respectively. Finally, actions are classified by a hierarchical classifier that fuses the movement and configuration features, and persons are recognized from their gait sequences using the configuration features. The effectiveness of the proposed approach is demonstrated with experiments on the INRIA Xmas Motion Acquisition Sequences (IXMAS) dataset.
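
A minimal sketch of the HMM classification recipe, assuming the hmmlearn package and synthetic feature sequences: one Gaussian HMM is trained per action class, and a test sequence is labeled by maximum log-likelihood. The paper's exemplar-based HMM and hierarchical fusion are not reproduced.

import numpy as np
from hmmlearn import hmm

# One Gaussian HMM per action class, trained on that class's feature
# sequences; a test sequence gets the label of the best-scoring model.
# Synthetic data stands in for the recovered joint/pose sequences.

rng = np.random.default_rng(0)

def make_sequences(mean, n=5, length=30, dim=4):
    return [mean + rng.normal(size=(length, dim)) for _ in range(n)]

train = {"walk": make_sequences(0.0), "wave": make_sequences(3.0)}

models = {}
for label, seqs in train.items():
    X, lengths = np.vstack(seqs), [len(s) for s in seqs]
    m = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=20)
    m.fit(X, lengths)
    models[label] = m

test = 3.0 + rng.normal(size=(30, 4))  # an unseen "wave"-like sequence
print(max(models, key=lambda lb: models[lb].score(test)))  # expected: wave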


european conference on computer vision | 2016

MARS: A Video Benchmark for Large-Scale Person Re-Identification

Liang Zheng; Zhi Bie; Yifan Sun; Jingdong Wang; Chi Su; Shengjin Wang; Qi Tian

This paper considers person re-identification (re-id) in videos. We introduce a new video re-id dataset, named Motion Analysis and Re-identification Set (MARS), a video extension of the Market-1501 dataset. To our knowledge, MARS is the largest video re-id dataset to date. Containing 1,261 IDs and around 20,000 tracklets, it provides rich visual information compared to image-based datasets. Meanwhile, MARS is a step closer to practice: the tracklets are generated automatically by the Deformable Part Model (DPM) pedestrian detector and the GMMCP tracker. A number of false detection/tracking results are also included as distractors, which would exist predominantly in practical video databases. An extensive evaluation of state-of-the-art methods, including space-time descriptors and CNNs, is presented. We show that a CNN in classification mode can be trained from scratch using the consecutive bounding boxes of each identity. The learned CNN embedding outperforms the other competing methods considerably and generalizes well to other video re-id datasets upon fine-tuning.
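
A hedged PyTorch sketch of the "CNN in classification mode" recipe: train a network to classify the 1,261 identities from individual bounding boxes, then discard the classifier and use the penultimate features, averaged over a tracklet, as the re-id embedding. The architecture choice (resnet18) and the random stand-in data are assumptions, not the authors' setup.

import torch
import torch.nn as nn
from torchvision import models

# Sketch of ID-classification training followed by embedding extraction.
# The data below is random; a real loader would yield tracklet frames and
# identity labels. This is illustrative, not the authors' training code.

num_ids = 1261
net = models.resnet18(weights=None)            # trained from scratch
net.fc = nn.Linear(net.fc.in_features, num_ids)
optimizer = torch.optim.SGD(net.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images, id_labels):
    optimizer.zero_grad()
    loss = criterion(net(images), id_labels)
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def embed(images):
    # Strip the classifier; average frame features over the tracklet.
    backbone = nn.Sequential(*list(net.children())[:-1])
    feats = backbone(images).flatten(1)        # (n_frames, 512)
    return feats.mean(dim=0)                   # one embedding per tracklet

frames = torch.randn(8, 3, 224, 224)           # stand-in for one tracklet
print(train_step(frames, torch.randint(0, num_ids, (8,))), embed(frames).shape)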


computer vision and pattern recognition | 2013

Lp-Norm IDF for Large Scale Image Search

Liang Zheng; Shengjin Wang; Ziqiong Liu; Qi Tian

The Inverse Document Frequency (IDF) is prevalently utilized in Bag-of-Words based image search. The basic idea is to assign less weight to terms with high frequency, and vice versa. However, the estimation of visual word frequency is coarse and heuristic; therefore, the effectiveness of the conventional IDF routine is marginal and far from optimal. To tackle this problem, this paper introduces a novel IDF expression based on the Lp-norm pooling technique. Carefully designed, the proposed IDF takes into account the term frequency, document frequency, the complexity of images, and the codebook information. Optimizing the IDF function towards an optimal balance between TF and IDF weights yields the so-called Lp-norm IDF (pIDF). We show that the conventional IDF is a special case of our generalized version, and that two novel IDFs, i.e., the average IDF and the max IDF, can also be derived from our formula. Further, by accounting for the term frequency in each image, the proposed Lp-norm IDF helps to alleviate the visual word burstiness phenomenon. Our method is evaluated through extensive experiments on three benchmark datasets (Oxford 5K, Paris 6K, and Flickr 1M). We report a performance improvement of as large as 27.1% over the baseline approach. Moreover, since the Lp-norm IDF is computed offline, no extra computation or memory cost is introduced to the system.
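
As a schematic illustration only (the paper's exact formulation and tuning strategy differ), one way to read "Lp-norm pooling" is to replace the raw document count in the IDF with an Lp pooling of per-image term frequencies; with binarized term frequencies and p = 1 this collapses back to the conventional IDF, while large p approaches a max-style IDF.

import numpy as np

# Schematic Lp-norm IDF over a term-frequency matrix tf of shape
# (n_images, n_words): the raw document count is replaced with an Lp
# pooling of per-image term frequencies. This follows the abstract's idea
# only; the paper's exact formula is an assumption here.

def lp_idf(tf, p):
    n_images = tf.shape[0]
    pooled = np.power(np.power(tf.astype(float), p).sum(axis=0), 1.0 / p)
    return np.log(n_images / (pooled + 1e-8))

tf = np.array([[2, 0, 1],
               [5, 1, 0],
               [1, 0, 1]])        # 3 images, 3 visual words

print(lp_idf(tf, p=3.0))          # Lp pooling of raw counts
print(lp_idf(tf > 0, p=1.0))      # binarized tf, p = 1: conventional IDF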


computer vision and pattern recognition | 2014

Bayes Merging of Multiple Vocabularies for Scalable Image Retrieval

Liang Zheng; Shengjin Wang; Wengang Zhou; Qi Tian

In the Bag-of-Words (BoW) model, the vocabulary is of key importance. Typically, multiple vocabularies are generated to correct quantization artifacts and improve recall. However, this routine is corrupted by vocabulary correlation, i.e., overlap among different vocabularies. Vocabulary correlation leads to an over-counting of the indexed features in the overlapped area, or the intersection set, thus compromising retrieval accuracy. In order to address the correlation problem while preserving the benefit of high recall, this paper proposes a Bayes merging approach to down-weight the indexed features in the intersection set. By explicitly modeling the correlation problem from a probabilistic view, a joint similarity at both the image and feature level is estimated for the indexed features in the intersection set. We evaluate our method on three benchmark datasets. Albeit simple, Bayes merging can be applied to various merging tasks, and consistently improves the baselines on multi-vocabulary merging. Moreover, Bayes merging is efficient in terms of both time and memory cost, and yields performance competitive with the state-of-the-art methods.
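
A toy sketch of the over-counting problem and a Bayes-merging-style fix: when a database feature is matched by the query under several vocabularies at once, it likely lies in their overlapped cells, so its votes are down-weighted rather than summed. The 1/k weight below is a simple heuristic stand-in for the paper's probabilistic estimate.

from collections import defaultdict

# Multi-vocabulary merging with down-weighting of the intersection set.
# A feature hit by k vocabularies contributes weight 1/k instead of k,
# so overlapping vocabularies no longer over-count it.

def merged_scores(match_lists):
    # match_lists: one set of matched (image_id, feature_id) per vocabulary
    hits = defaultdict(int)
    for matches in match_lists:
        for key in matches:
            hits[key] += 1
    votes = defaultdict(float)
    for (image_id, _), k in hits.items():
        votes[image_id] += 1.0 / k   # k vocabularies agreed -> weight 1/k
    return dict(votes)

voc_a = {("img0", 1), ("img0", 2), ("img1", 5)}
voc_b = {("img0", 1), ("img2", 9)}            # feature 1 of img0 overlaps
print(merged_scores([voc_a, voc_b]))          # img0 counted ~2, not 3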


Neurocomputing | 2015

A survey of recent advances in visual feature detection

Yali Li; Shengjin Wang; Qi Tian; Xiaoqing Ding

Feature detection is a fundamental and important problem in computer vision and image processing. It is a low-level processing step that serves as an essential part of computer vision based applications. The goal of this paper is to present a survey of recent progress and advances in visual feature detection. First, we describe the relations among edges, corners, and blobs from a psychological view. Second, we classify the algorithms for detecting edges, corners, and blobs into different categories and provide detailed descriptions of representative recent algorithms in each category. Considering that machine learning has become more involved in visual feature detection, we place particular emphasis on machine-learning-based feature detection methods. Third, evaluation standards and databases are also introduced. Through this survey we would like to present the recent progress in visual feature detection and identify future trends as well as challenges.
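
To make the edge/corner/blob taxonomy concrete, the snippet below runs one classical OpenCV detector from each family on a synthetic image; it illustrates the feature categories the survey discusses, not any specific method it proposes.

import cv2
import numpy as np

# One classical detector per family: Canny (edges), Shi-Tomasi (corners),
# and SimpleBlobDetector (blobs), run on a synthetic image so the snippet
# is self-contained; any grayscale image works.

img = np.full((200, 200), 255, np.uint8)
cv2.rectangle(img, (80, 80), (120, 120), 0, -1)   # dark square on white

edges = cv2.Canny(img, 100, 200)
corners = cv2.goodFeaturesToTrack(img, maxCorners=10,
                                  qualityLevel=0.01, minDistance=10)
blobs = cv2.SimpleBlobDetector_create().detect(img)

print(edges.sum() > 0, 0 if corners is None else len(corners), len(blobs))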


IEEE Transactions on Image Processing | 2014

Lp-Norm IDF for Scalable Image Retrieval

Liang Zheng; Shengjin Wang; Qi Tian

The inverse document frequency (IDF) is prevalently utilized in bag-of-words-based image retrieval. The basic idea is to assign less weight to terms with high frequency, and vice versa. However, in the conventional IDF routine, the estimation of visual word frequency is coarse and heuristic; therefore, its effectiveness is largely compromised and far from optimal. To address this problem, this paper introduces a novel IDF family based on the Lp-norm pooling technique. Carefully designed, the proposed IDF considers the term frequency, document frequency, the complexity of images, and the codebook information. We further propose a parameter tuning strategy, which helps to produce an optimal balance between TF and IDF weights, yielding the so-called Lp-norm IDF (pIDF). We show that the conventional IDF is a special case of our generalized version, and that two novel IDFs, i.e., the average IDF and the max IDF, can be defined from the concept of pIDF. Further, by accounting for the term frequency in each image, the proposed pIDF helps to alleviate the visual word burstiness phenomenon. Our method is evaluated through extensive experiments on four benchmark datasets (Oxford 5K, Paris 6K, Holidays, and Ukbench). We show that the pIDF works well on large-scale databases and when the codebook is trained on irrelevant data. We report a mean average precision improvement of as large as +13.0% over the baseline TF-IDF approach on a 1M dataset. In addition, the pIDF has a wide application scope, varying from buildings to general objects and scenes. When combined with postprocessing steps, we achieve results competitive with the state-of-the-art methods. Finally, since the pIDF is computed offline, no extra computation or memory cost is introduced to the system.
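
To see why a precomputed IDF variant adds no online cost, here is a baseline BoW scoring sketch: the similarity is a dot product of IDF-weighted, normalized term-frequency histograms, and the scoring code is identical whether the weights come from the conventional IDF or the pIDF. The toy vectors are illustrative.

import numpy as np

# Baseline BoW scoring: a dot product of IDF-weighted, normalized
# term-frequency histograms. The IDF vector is precomputed offline, so
# swapping the conventional IDF for the pIDF changes nothing at query
# time -- consistent with the zero-extra-cost claim above.

def bow_score(q_tf, db_tf, idf):
    q = q_tf * idf
    d = db_tf * idf
    q /= np.linalg.norm(q) + 1e-8
    d /= np.linalg.norm(d) + 1e-8
    return float(q @ d)

idf = np.array([0.1, 2.0, 1.5])    # any precomputed IDF variant
query = np.array([3.0, 1.0, 0.0])  # query word histogram (toy values)
image = np.array([2.0, 1.0, 1.0])  # database image histogram
print(bow_score(query, image, idf))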


computer vision and pattern recognition | 2016

Weakly Supervised Object Localization with Progressive Domain Adaptation

Dong Li; Jia-Bin Huang; Yali Li; Shengjin Wang; Ming-Hsuan Yang

We address the problem of weakly supervised object localization where only image-level annotations are available for training. Many existing approaches tackle this problem through object proposal mining. However, a substantial amount of noise in object proposals causes ambiguities for learning discriminative object models. Such approaches are sensitive to model initialization and often converge to an undesirable local minimum. In this paper, we address this problem by progressive domain adaptation with two main steps: classification adaptation and detection adaptation. In classification adaptation, we transfer a pre-trained network to our multi-label classification task for recognizing the presence of a certain object in an image. In detection adaptation, we first use a mask-out strategy to collect class-specific object proposals and apply multiple instance learning to mine confident candidates. We then use these selected object proposals to fine-tune all the layers, resulting in a fully adapted detection network. We extensively evaluate the localization performance on the PASCAL VOC and ILSVRC datasets and demonstrate significant performance improvement over the state-of-the-art methods.
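
A deliberately small caricature of the proposal-mining step in detection adaptation: for each class known from the image-level labels to be present, keep the proposal the classification-adapted network scores highest and treat it as a pseudo box label for fine-tuning. The scores below are random stand-ins for network outputs, and the mask-out collection step is omitted.

import numpy as np

# Per-image confident-candidate mining: for each class present in the
# image-level labels, select the highest-scoring object proposal as a
# pseudo ground-truth box for detector fine-tuning. Random scores stand
# in for the adapted network's outputs.

rng = np.random.default_rng(1)
n_proposals, n_classes = 50, 20
scores = rng.random((n_proposals, n_classes))   # proposal x class scores
image_labels = [3, 11]                          # classes present in the image

pseudo_boxes = {c: int(scores[:, c].argmax()) for c in image_labels}
print(pseudo_boxes)   # class -> index of its most confident proposal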

Collaboration


Dive into Shengjin Wang's collaborations.

Top Co-Authors

Qi Tian
University of Texas at San Antonio

Fei He
Tsinghua University