Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Cong Leng is active.

Publication


Featured research published by Cong Leng.


Computer Vision and Pattern Recognition | 2016

Quantized Convolutional Neural Networks for Mobile Devices

Jiaxiang Wu; Cong Leng; Yuhang Wang; Qinghao Hu; Jian Cheng

Recently, convolutional neural networks (CNNs) have demonstrated impressive performance in various computer vision tasks. However, high-performance hardware is typically indispensable for applying CNN models due to their high computational complexity, which limits their wider deployment. In this paper, we propose an efficient framework, namely Quantized CNN, to simultaneously speed up the computation and reduce the storage and memory overhead of CNN models. Both filter kernels in convolutional layers and weighting matrices in fully-connected layers are quantized, aiming at minimizing the estimation error of each layer's response. Extensive experiments on the ILSVRC-12 benchmark demonstrate a 4~6× speed-up and 15~20× compression with merely a one-percentage-point loss of classification accuracy. With our quantized CNN model, even mobile devices can accurately classify images within one second.
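
The quantization idea can be illustrated with a toy, hedged sketch: split a fully-connected weight matrix into groups of input dimensions, learn a small k-means codebook per group, and answer the layer with table lookups instead of full dot products. The sub-codebook size, the number of splits, and the plain k-means routine below are illustrative assumptions, not the paper's error-minimizing quantization scheme.

```python
# Minimal sketch (not the authors' code) of codebook-quantizing a
# fully-connected weight matrix in the spirit of Quantized CNN.
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd's k-means; returns (centroids, assignments)."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        a = d.argmin(1)
        for j in range(k):
            if (a == j).any():
                C[j] = X[a == j].mean(0)
    return C, a

def quantize_weights(W, n_splits=4, k=16):
    """Split the input dimension into groups and learn one codebook per group."""
    d_in, d_out = W.shape
    groups = np.array_split(np.arange(d_in), n_splits)
    codebooks, codes = [], []
    for g in groups:
        C, a = kmeans(W[g].T, k)      # quantize the columns restricted to this group
        codebooks.append(C)           # (k, |g|) codewords
        codes.append(a)               # (d_out,) codeword indices
    return groups, codebooks, codes

def approx_forward(x, groups, codebooks, codes):
    """Approximate x @ W using precomputed inner products with the codewords."""
    y = np.zeros(len(codes[0]))
    for g, C, a in zip(groups, codebooks, codes):
        lut = C @ x[g]                # k inner products per group
        y += lut[a]                   # table lookup instead of full dot products
    return y

rng = np.random.default_rng(1)
W = rng.standard_normal((256, 128))
x = rng.standard_normal(256)
groups, cbs, codes = quantize_weights(W)
err = np.linalg.norm(approx_forward(x, groups, cbs, codes) - x @ W) / np.linalg.norm(x @ W)
print(f"relative response error: {err:.3f}")
```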


Computer Vision and Pattern Recognition | 2014

Fast and Accurate Image Matching with Cascade Hashing for 3D Reconstruction

Jian Cheng; Cong Leng; Jiaxiang Wu; Hainan Cui; Hanqing Lu

Image matching is one of the most challenging stages in 3D reconstruction: it usually accounts for half of the computational cost, and inaccurate matching may lead to failure of the reconstruction. Fast and accurate image matching is therefore crucial for 3D reconstruction. In this paper, we propose a Cascade Hashing strategy to speed up image matching. To accelerate the matching, the proposed Cascade Hashing method is designed as a three-layer structure: hashing lookup, hashing remapping, and hashing ranking. Each layer adopts different measures and filtering strategies, which are demonstrated to be less sensitive to noise. Extensive experiments show that our approach accelerates image matching by hundreds of times compared with brute-force matching, and by ten times or more compared with Kd-tree based matching, while retaining comparable accuracy.
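
As a rough, hedged illustration of the cascade idea (not the released implementation), the sketch below filters candidates with short random-projection codes, remaps the survivors to longer codes for Hamming ranking, and verifies the top few with exact distances. The code lengths, the sign-of-random-projection hashes, and the candidate budget are assumptions.

```python
# Three-stage cascade in the spirit of Cascade Hashing:
# coarse hashing lookup -> remapping to long codes -> Hamming ranking -> exact check.
import numpy as np

rng = np.random.default_rng(0)

def binarize(X, P):
    """Sign-of-random-projection hash codes as a boolean matrix."""
    return (X @ P) > 0

def cascade_match(query, db, short_bits=8, long_bits=128, top=10):
    d = db.shape[1]
    P_short = rng.standard_normal((d, short_bits))
    P_long = rng.standard_normal((d, long_bits))

    # Layer 1: hashing lookup -- keep only points in the same coarse bucket.
    q_s, db_s = binarize(query[None], P_short)[0], binarize(db, P_short)
    cand = np.flatnonzero((db_s == q_s).all(axis=1))
    if cand.size == 0:                     # fall back to everything if the bucket is empty
        cand = np.arange(len(db))

    # Layers 2 and 3: remapping to long codes, then ranking by Hamming distance.
    q_l, db_l = binarize(query[None], P_long)[0], binarize(db[cand], P_long)
    ham = (db_l != q_l).sum(axis=1)
    ranked = cand[np.argsort(ham)[:top]]

    # Final verification with exact Euclidean distance on the surviving few.
    exact = np.linalg.norm(db[ranked] - query, axis=1)
    return ranked[np.argsort(exact)]

db = rng.standard_normal((5000, 128))      # stand-in for SIFT descriptors
print(cascade_match(db[42] + 0.01 * rng.standard_normal(128), db)[:3])
```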


Computer Vision and Pattern Recognition | 2015

Online sketching hashing

Cong Leng; Jiaxiang Wu; Jian Cheng; Xiao Bai; Hanqing Lu

Recently, hashing-based approximate nearest neighbor (ANN) search has attracted much attention. Many new algorithms have been developed and successfully applied to different applications. However, two critical problems are rarely addressed. First, in real-world applications the data often arrive in a streaming fashion, whereas most existing hashing methods are batch-based models. Second, when the dataset becomes huge, it is almost impossible to load all the data into memory to train hashing models. In this paper, we propose a novel approach that handles these two problems simultaneously based on the idea of data sketching. A sketch of a dataset preserves its major characteristics but with a significantly smaller size. With a small sketch, our method can learn hash functions in an online fashion while requiring rather low computational complexity and storage. Extensive experiments on two large-scale benchmarks and one synthetic dataset demonstrate the efficacy of the proposed method.
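
A minimal sketch of the general idea, under the assumption that a Frequent-Directions sketch stands in for whatever sketching scheme the paper actually uses: the stream is compressed into a small matrix whose Gram matrix approximates that of the full data, and PCA-style hash projections are then fit from the sketch alone. The sketch size and code length are arbitrary illustrative choices.

```python
# Hedged illustration: learn hash projections from a small streaming sketch
# instead of the full dataset.
import numpy as np

def frequent_directions(stream, ell):
    """Maintain an ell x d sketch B such that B^T B approximates the data's Gram matrix."""
    d = stream[0].shape[0]
    B = np.zeros((ell, d))
    for x in stream:
        zero_rows = np.flatnonzero(~B.any(axis=1))
        if zero_rows.size == 0:            # buffer full: shrink via SVD to free rows
            _, s, Vt = np.linalg.svd(B, full_matrices=False)
            s2 = np.maximum(s**2 - s[ell // 2]**2, 0.0)
            B = np.sqrt(s2)[:, None] * Vt
            zero_rows = np.flatnonzero(~B.any(axis=1))
        B[zero_rows[0]] = x                # append the new row into a freed slot
    return B

def hash_functions_from_sketch(B, n_bits):
    """Top right singular vectors of the sketch play the role of PCA directions."""
    _, _, Vt = np.linalg.svd(B, full_matrices=False)
    return Vt[:n_bits].T                   # (d, n_bits) projection matrix

rng = np.random.default_rng(0)
stream = rng.standard_normal((2000, 64))   # pretend this arrives one row at a time
B = frequent_directions(list(stream), ell=32)
W = hash_functions_from_sketch(B, n_bits=16)
codes = (stream @ W) > 0                   # binary codes for the whole collection
print(codes.shape)
```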


Computer Vision and Image Understanding | 2014

Semi-supervised multi-graph hashing for scalable similarity search

Jian Cheng; Cong Leng; Peng Li; Meng Wang; Hanqing Lu

Due to the explosive growth of multimedia content in recent years, scalable similarity search has attracted considerable attention in many large-scale multimedia applications. Among the different similarity search approaches, hashing-based approximate nearest neighbor (ANN) search has become very popular owing to its computational and storage efficiency. However, most existing hashing methods adopt a single modality or integrate multiple modalities naively, without exploiting the effect of different features. To address the problem of learning compact hash codes from multiple modalities, we propose a semi-supervised Multi-Graph Hashing (MGH) framework in this paper. Different from traditional methods, our approach can effectively integrate multiple modalities with optimized weights in a multi-graph learning scheme. In this way, the effects of different modalities can be adaptively modulated. Besides, semi-supervised information is incorporated into the unified framework, and a sequential learning scheme is adopted to learn complementary hash functions. The proposed framework enables direct and fast handling of query examples, so the binary codes learned by our approach are more effective for fast similarity search. Extensive experiments are conducted on two large public datasets to evaluate the performance of our approach, and the results demonstrate that it achieves promising results compared to state-of-the-art methods.
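
The following is a hedged, simplified stand-in for the multi-graph weighting idea, not the MGH algorithm itself: per-modality affinity graphs are combined with adaptive weights, a spectral relaxation yields real-valued bits, and modalities that fit the current embedding better receive larger weights. The RBF graphs, the weight-update rule, and the omission of the semi-supervised and sequential-learning parts are all assumptions made for brevity.

```python
# Toy multi-graph spectral hashing with adaptive modality weights.
import numpy as np

def rbf_graph(X, sigma=1.0):
    d2 = ((X[:, None] - X[None, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def multi_graph_hash(modalities, n_bits=8, iters=5, r=2.0):
    Ls = []
    for X in modalities:
        W = rbf_graph(X)
        Ls.append(np.diag(W.sum(1)) - W)              # unnormalized graph Laplacian
    alpha = np.full(len(Ls), 1.0 / len(Ls))           # modality weights
    for _ in range(iters):
        L = sum(a**r * Lm for a, Lm in zip(alpha, Ls))
        # Spectral relaxation: smallest non-trivial eigenvectors give real-valued bits.
        _, vecs = np.linalg.eigh(L)
        Y = vecs[:, 1:n_bits + 1]
        # Modalities that fit the current embedding better receive larger weights.
        fit = np.array([np.trace(Y.T @ Lm @ Y) for Lm in Ls])
        alpha = (1.0 / np.maximum(fit, 1e-12)) ** (1.0 / (r - 1))
        alpha /= alpha.sum()
    return (Y > 0), alpha                             # binary codes and learned weights

rng = np.random.default_rng(0)
visual = rng.standard_normal((200, 32))               # e.g. a visual feature
textual = rng.standard_normal((200, 16))              # e.g. a text feature
codes, weights = multi_graph_hash([visual, textual])
print(codes.shape, weights)
```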


Conference on Information and Knowledge Management | 2014

Supervised Hashing with Soft Constraints

Cong Leng; Jian Cheng; Jiaxiang Wu; Xi Zhang; Hanqing Lu

Due to its ability to preserve semantic similarity in Hamming space, supervised hashing has been extensively studied recently. Most existing approaches encourage two dissimilar samples to have maximum Hamming distance. This may lead to an unexpected consequence: two samples that are not necessarily similar to each other end up with the same code if they are both dissimilar to a third sample. Besides, existing methods treat all labeled pairs with equal importance without considering the semantic gap, which prevents them from thoroughly leveraging the supervised information. We present a general framework for supervised hashing that addresses these two limitations. We do not strictly require a dissimilar pair to have maximum Hamming distance; instead, a soft constraint, which can be viewed as a regularization to avoid over-fitting, is utilized. Moreover, we impose different weights on different training pairs, and these weights are automatically adjusted during the learning process. Experiments on two benchmarks show that the proposed method can easily outperform other state-of-the-art methods.
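
To make the two ideas concrete, here is a minimal, hedged sketch of the kind of objective the abstract describes (not the paper's exact formulation): similar pairs are pulled toward matching codes, dissimilar pairs are only softly pushed apart through a small regularization term, and per-pair weights are re-estimated from the current residuals. The tanh relaxation, the loss shapes, and the reweighting rule are assumptions.

```python
# Illustrative soft-constraint pairwise objective with adaptive pair weights.
import numpy as np

def pairwise_loss(W, X, pairs, labels, weights, n_bits, soft=0.1):
    """pairs: (m, 2) index array; labels: +1 similar / -1 dissimilar."""
    H = np.tanh(X @ W)                                  # relaxed codes in [-1, 1]
    ip = (H[pairs[:, 0]] * H[pairs[:, 1]]).sum(1) / n_bits
    sim = labels > 0
    loss_sim = weights[sim] * (ip[sim] - 1.0) ** 2      # hard target for similar pairs
    loss_dis = soft * weights[~sim] * np.maximum(ip[~sim], 0) ** 2  # soft push only
    return loss_sim.sum() + loss_dis.sum()

def reweight(W, X, pairs, labels, n_bits):
    """Give larger weight to pairs the current codes violate more strongly."""
    H = np.tanh(X @ W)
    ip = (H[pairs[:, 0]] * H[pairs[:, 1]]).sum(1) / n_bits
    residual = np.where(labels > 0, 1.0 - ip, np.maximum(ip, 0))
    return residual / residual.sum()

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
W = rng.standard_normal((20, 16))
pairs = rng.integers(0, 100, size=(50, 2))
labels = rng.choice([-1, 1], size=50)
weights = reweight(W, X, pairs, labels, n_bits=16)       # data-driven pair weights
print(pairwise_loss(W, X, pairs, labels, weights, n_bits=16))
```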


ACM Multimedia | 2015

Learning Deep Features For MSR-bing Information Retrieval Challenge

Qiang Song; Sixie Yu; Cong Leng; Jiaxiang Wu; Qinghao Hu; Jian Cheng

Two tasks were put forward in the MSR-Bing Grand Challenge 2015. To address the information retrieval task, we propose and integrate a series of methods using visual features obtained from convolutional neural network (CNN) models. In our experiments, we find that the ranking strategies of hierarchical clustering and PageRank are mutually complementary. The other task is fine-grained classification. In contrast to basic-level recognition, fine-grained classification aims to distinguish between different breeds, species, or product models, and often requires distinctions that must be conditioned on the object pose for reliable identification. Current state-of-the-art techniques rely heavily upon the use of part annotations, while the Bing datasets suffer from both the absence of part annotations and noisy backgrounds. In this paper, we propose a CNN-based feature representation for visual recognition using only image-level information. Our CNN model is pre-trained on a collection of clean datasets and fine-tuned on the Bing datasets. Furthermore, a multi-scale training strategy is adopted by simply resizing the input images to different scales and then merging the soft-max posteriors. We then implement our method as a unified visual recognition system on the Microsoft cloud service. Finally, our solution achieved top performance in both tasks of the contest.
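
The multi-scale merging step lends itself to a short, hedged sketch: run the same classifier on several resized copies of an image and average the soft-max posteriors. The classify callback, the scale list, and the toy scorer below are placeholders, not the system used in the challenge.

```python
# Merging soft-max posteriors across input scales.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def multi_scale_posterior(image, classify, scales=(224, 256, 288)):
    """classify(image, size) -> raw class scores; any CNN could play this role."""
    posteriors = [softmax(classify(image, size)) for size in scales]
    return np.mean(posteriors, axis=0)        # merge by averaging the posteriors

# Toy stand-in for a CNN whose scores vary mildly with the input scale.
rng = np.random.default_rng(0)

def toy_scores(image, size):
    return rng.standard_normal(10) + image.mean() + 0.001 * size

image = rng.standard_normal((64, 64, 3))
print(multi_scale_posterior(image, toy_scores).round(3))
```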


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2014

Random subspace for binary codes learning in large scale image retrieval

Cong Leng; Jian Cheng; Hanqing Lu

Due to their fast query speed and low storage cost, hashing-based approximate nearest neighbor search methods have attracted much attention recently. Many state-of-the-art methods are based on eigenvalue decomposition. In these approaches, the information captured in different dimensions is unbalanced, and generally most of the information is contained in the top eigenvectors. We demonstrate that this leads to an unexpected phenomenon: longer hash codes do not necessarily yield better performance. In this work, we introduce a random subspace strategy to address this limitation. First, a small fraction of the whole feature space is randomly sampled to train the hashing algorithm each time, and only the top eigenvectors are kept to generate one piece of short code. This process is repeated several times, and the resulting pieces of short code are concatenated into one long code. Theoretical analysis and experiments on two benchmarks confirm the effectiveness of the proposed strategy for hashing.
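
A minimal sketch of this strategy, with the subspace fraction, the number of pieces, and the bits per piece chosen arbitrarily for illustration: sample a random subset of feature dimensions, run PCA hashing on that subspace keeping only the top eigenvectors, and concatenate the short codes from each round into one long code.

```python
# Random-subspace code concatenation with PCA hashing per subspace.
import numpy as np

def pca_hash_codes(X, n_bits):
    """PCA hashing on one subspace: center, project onto top eigenvectors, threshold at zero."""
    Xc = X - X.mean(0)
    cov = Xc.T @ Xc / len(Xc)
    _, vecs = np.linalg.eigh(cov)
    P = vecs[:, ::-1][:, :n_bits]                # keep only the top eigenvectors
    return (Xc @ P) > 0

def random_subspace_codes(X, n_pieces=8, frac=0.5, bits_per_piece=8, seed=0):
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    pieces = []
    for _ in range(n_pieces):
        idx = rng.choice(d, size=int(frac * d), replace=False)    # sample a random subspace
        pieces.append(pca_hash_codes(X[:, idx], bits_per_piece))  # one piece of short code
    return np.hstack(pieces)                     # concatenate into one long code

rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 64))
codes = random_subspace_codes(X)
print(codes.shape)                               # 64 bits in total per sample
```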


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2014

Item group based pairwise preference learning for personalized ranking

Shuang Qiu; Jian Cheng; Ting Yuan; Cong Leng; Hanqing Lu


European Conference on Machine Learning | 2014

Learning binary codes with Bagging PCA

Cong Leng; Jian Cheng; Ting Yuan; Xiao Bai; Hanqing Lu


International Conference on Machine Learning | 2015

Hashing for Distributed Data

Cong Leng; Jiaxiang Wu; Jian Cheng; Xi Zhang; Hanqing Lu

Collaboration


Dive into Cong Leng's collaborations.

Top Co-Authors

Jian Cheng, Chinese Academy of Sciences
Hanqing Lu, Chinese Academy of Sciences
Jiaxiang Wu, Chinese Academy of Sciences
Qinghao Hu, Chinese Academy of Sciences
Ting Yuan, Chinese Academy of Sciences
Xi Zhang, Chinese Academy of Sciences
Yuhang Wang, Chinese Academy of Sciences
Meng Wang, Hefei University of Technology
Peng Li, Chinese Academy of Sciences