Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Qinghao Hu is active.

Publication


Featured research published by Qinghao Hu.


Computer Vision and Pattern Recognition | 2016

Quantized Convolutional Neural Networks for Mobile Devices

Jiaxiang Wu; Cong Leng; Yuhang Wang; Qinghao Hu; Jian Cheng

Recently, convolutional neural networks (CNNs) have demonstrated impressive performance in various computer vision tasks. However, high-performance hardware is typically indispensable for deploying CNN models due to their high computational complexity, which hinders their broader application. In this paper, we propose an efficient framework, namely Quantized CNN, to simultaneously speed up the computation and reduce the storage and memory overhead of CNN models. Both filter kernels in convolutional layers and weighting matrices in fully-connected layers are quantized, aiming at minimizing the estimation error of each layer's response. Extensive experiments on the ILSVRC-12 benchmark demonstrate 4~6× speed-up and 15~20× compression with merely a one-percentage-point loss in classification accuracy. With our quantized CNN model, even mobile devices can accurately classify images within one second.
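To make the idea concrete, here is a minimal Python sketch of codebook-based weight quantization in the product-quantization style the paper builds on. It is an illustration only: it minimizes the weight reconstruction error, whereas the paper minimizes the estimation error of each layer's response, and all shapes, the sub-vector size, and the codebook size are made up.

```python
# Minimal sketch (not the paper's exact algorithm): product-quantization-style
# weight quantization of a fully-connected layer, with the layer response
# error as a sanity check. All sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 256))   # weights of a hypothetical FC layer
x = rng.standard_normal(512)          # one input activation vector

def kmeans(data, k, iters=20):
    """Plain Lloyd's k-means; returns (centroids, assignments)."""
    centers = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        d = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)
        for j in range(k):
            pts = data[assign == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return centers, assign

# Split each weight column into sub-vectors and quantize each subspace with a
# small codebook, as in product quantization.
sub_dim, k = 8, 16
W_q = np.empty_like(W)
for s in range(0, W.shape[0], sub_dim):
    block = W[s:s + sub_dim].T                 # (out_dim, sub_dim) sub-vectors
    centers, assign = kmeans(block, k)
    W_q[s:s + sub_dim] = centers[assign].T     # replace sub-vectors by centroids

# Each 8-float sub-vector is now stored as one 4-bit code plus shared
# codebooks, and the layer response stays close to full precision.
err = np.linalg.norm(W.T @ x - W_q.T @ x) / np.linalg.norm(W.T @ x)
print(f"relative response error: {err:.3f}")
```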


Journal of Zhejiang University Science C | 2018

Recent advances in efficient computation of deep convolutional neural networks

Jian Cheng; Peisong Wang; Gang Li; Qinghao Hu; Hanqing Lu

Deep neural networks have evolved remarkably over the past few years and are currently the fundamental tools of many intelligent systems. At the same time, the computational complexity and resource consumption of these networks continue to increase. This poses a significant challenge to their deployment, especially in real-time applications or on resource-limited devices. Thus, network acceleration has become a hot topic within the deep learning community. As for hardware implementations of deep neural networks, a number of accelerators based on field-programmable gate arrays (FPGAs) or application-specific integrated circuits (ASICs) have been proposed in recent years. In this paper, we provide a comprehensive survey of recent advances in network acceleration, compression, and accelerator design from both the algorithm and hardware points of view. Specifically, we provide a thorough analysis of each of the following topics: network pruning, low-rank approximation, network quantization, teacher–student networks, compact network design, and hardware accelerators. Finally, we introduce and discuss a few possible future directions.
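As a taste of one of the surveyed techniques, the sketch below shows magnitude-based network pruning on a single weight matrix. The 90% sparsity target and the sizes are illustrative; real pipelines fine-tune the network after pruning so the surviving weights can compensate.

```python
# Minimal sketch of magnitude-based pruning: weights below a threshold are
# zeroed, trading a small accuracy drop for sparsity. Sizes and the 90%
# sparsity target are illustrative.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 128))           # a hypothetical layer's weights

sparsity = 0.9
thresh = np.quantile(np.abs(W), sparsity)     # keep only the largest 10%
mask = np.abs(W) >= thresh
W_pruned = W * mask

print(f"nonzeros kept: {mask.mean():.2%}")    # ~10% of weights survive
# In practice the pruned network is fine-tuned with the mask held fixed.
```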


ACM Multimedia | 2015

Learning Deep Features for MSR-Bing Information Retrieval Challenge

Qiang Song; Sixie Yu; Cong Leng; Jiaxiang Wu; Qinghao Hu; Jian Cheng

Two tasks were put forward in the MSR-Bing Grand Challenge 2015. To address the information retrieval task, we develop and integrate a series of methods based on visual features obtained from convolutional neural network (CNN) models. In our experiments, we find that the ranking strategies based on hierarchical clustering and PageRank are mutually complementary. The other task is fine-grained classification. In contrast to basic-level recognition, fine-grained classification aims to distinguish between different breeds, species, or product models, and often requires distinctions that must be conditioned on the object pose for reliable identification. Current state-of-the-art techniques rely heavily on part annotations, whereas the Bing datasets suffer from both the lack of part annotations and cluttered backgrounds. In this paper, we propose a CNN-based feature representation for visual recognition using only image-level information. Our CNN model is pre-trained on a collection of clean datasets and fine-tuned on the Bing datasets. Furthermore, a multi-scale training strategy is adopted by simply resizing the input images to different scales and then merging the soft-max posteriors. We then integrate our method into a unified visual recognition system on the Microsoft cloud service. Finally, our solution achieved top performance in both tasks of the contest.
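The multi-scale strategy mentioned above is easy to sketch: run the same classifier on several resized copies of an image and average the soft-max posteriors. In the sketch below, `model` is a hypothetical callable returning class logits, and the scale list is illustrative.

```python
# Minimal sketch of multi-scale prediction: resize the input to several
# scales and merge the soft-max posteriors by averaging.
import numpy as np
from PIL import Image

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def multi_scale_predict(model, image: Image.Image, scales=(224, 256, 288)):
    posteriors = []
    for s in scales:
        resized = image.resize((s, s), Image.BILINEAR)
        logits = model(np.asarray(resized, dtype=np.float32))  # hypothetical API
        posteriors.append(softmax(logits))
    return np.mean(posteriors, axis=0)   # merged posterior over classes
```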


ACM Transactions on Multimedia Computing, Communications, and Applications | 2017

DeepSearch: A Fast Image Search Framework for Mobile Devices

Peisong Wang; Qinghao Hu; Zhiwei Fang; Chaoyang Zhao; Jian Cheng

Content-based image retrieval (CBIR) is one of the most important applications of computer vision. In recent years, there have been many important advances in the development of CBIR systems, especially through Convolutional Neural Networks (CNNs) and other deep-learning techniques. On the other hand, current CNN-based CBIR systems suffer from the high computational complexity of CNNs. This problem becomes more severe as mobile applications grow increasingly popular. The current practice is to deploy the entire CBIR system on the server side while the client side serves only as an image provider, an architecture that increases the computational burden on the server, which needs to process thousands of requests per second. Moreover, sending images carries the risk of personal information leakage. As the need for mobile search expands, concerns about privacy are growing. In this article, we propose a fast image search framework, named DeepSearch, which makes complex CNN-based image search feasible on mobile phones. To cope with the huge computation of CNN models, we present a tensor Block Term Decomposition (BTD) approach as well as a nonlinear response reconstruction method to accelerate the CNNs involved in object detection and feature extraction. Extensive experiments on the ImageNet dataset and the Alibaba Large-scale Image Search Challenge dataset show that the proposed BTD acceleration approach can significantly speed up CNN models and thus makes CNN-based image search practical on common smartphones.
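The paper's actual decomposition is a tensor Block Term Decomposition of convolution kernels; a matrix analogue using a truncated SVD, sketched below with illustrative sizes, shows where the speed-up comes from: y = Wx costs m·n multiplies, while y = U(Vx) costs only r·(m + n) for a rank r much smaller than min(m, n).

```python
# Matrix analogue of low-rank layer acceleration (the paper uses a tensor BTD
# of conv kernels; this truncated SVD only illustrates the principle).
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 512))           # hypothetical FC weights
x = rng.standard_normal(512)

r = 64                                         # illustrative rank
U, s, Vt = np.linalg.svd(W, full_matrices=False)
U_r = U[:, :r] * s[:r]                         # fold singular values into U
V_r = Vt[:r]

y_full = W @ x
y_fast = U_r @ (V_r @ x)                       # two thin matmuls
print("multiplies:", W.size, "->", U_r.size + V_r.size)
print("relative error:", np.linalg.norm(y_full - y_fast) / np.linalg.norm(y_full))
```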


ACM Multimedia | 2017

Pseudo Label based Unsupervised Deep Discriminative Hashing for Image Retrieval

Qinghao Hu; Jiaxiang Wu; Jian Cheng; Lifang Wu; Hanqing Lu

Hashing methods play an important role in large-scale image retrieval. Traditional hashing methods use hand-crafted features to learn hash functions, which cannot capture high-level semantic information. Deep hashing algorithms use deep neural networks to learn the feature representation and hash functions simultaneously. Most of these algorithms exploit supervised information to train the deep network; however, supervised information is expensive to obtain. In this paper, we propose a pseudo-label-based unsupervised deep discriminative hashing algorithm. First, we cluster images via k-means and treat the cluster labels as pseudo labels. Then we train a deep hashing network with the pseudo labels by minimizing a classification loss and a quantization loss. Experiments on two datasets demonstrate that our unsupervised deep discriminative hashing method outperforms state-of-the-art unsupervised hashing methods.
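The training objective described above can be sketched directly: k-means cluster ids act as pseudo labels, and a small network with a tanh hash layer is trained with a cross-entropy loss on those labels plus a quantization loss pushing activations toward ±1. Everything below (the random stand-in features, the 48-bit code length, the 0.1 loss weight) is illustrative, not the paper's configuration.

```python
# Minimal sketch of pseudo-label hashing: k-means labels + classification
# loss + quantization loss. Feature extraction and data loading are omitted.
import torch
import torch.nn as nn

n, feat_dim, n_bits, n_clusters = 1000, 128, 48, 10
feats = torch.randn(n, feat_dim)               # stand-in for deep features

# Step 1: pseudo labels from k-means (a few Lloyd iterations, for brevity).
centers = feats[torch.randperm(n)[:n_clusters]].clone()
for _ in range(10):
    labels = torch.cdist(feats, centers).argmin(1)
    for j in range(n_clusters):
        if (labels == j).any():
            centers[j] = feats[labels == j].mean(0)

# Step 2: hashing network trained on the pseudo labels.
hash_layer = nn.Sequential(nn.Linear(feat_dim, n_bits), nn.Tanh())
classifier = nn.Linear(n_bits, n_clusters)
opt = torch.optim.Adam([*hash_layer.parameters(), *classifier.parameters()], lr=1e-3)

for _ in range(100):
    h = hash_layer(feats)                       # relaxed hash codes in (-1, 1)
    cls_loss = nn.functional.cross_entropy(classifier(h), labels)
    quant_loss = ((h.abs() - 1) ** 2).mean()    # push activations toward +/-1
    loss = cls_loss + 0.1 * quant_loss          # 0.1 is an illustrative weight
    opt.zero_grad(); loss.backward(); opt.step()

codes = hash_layer(feats).sign()                # final binary codes
```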


Conference on Information and Knowledge Management | 2017

Fast K-means for Large Scale Clustering

Qinghao Hu; Jiaxiang Wu; Lu Bai; Yifan Zhang; Jian Cheng

The k-means algorithm has been widely used in machine learning and data mining due to its simplicity and good performance. However, the standard k-means algorithm is quite slow when clustering millions of data points into thousands or even tens of thousands of clusters. In this paper, we propose a fast k-means algorithm named multi-stage k-means (MKM), which uses a multi-stage filtering approach. The multi-stage filtering approach greatly accelerates the k-means algorithm via a coarse-to-fine search strategy. To further speed up the algorithm, hashing is introduced to accelerate the assignment step, which is the most time-consuming part of k-means. Extensive experiments on several massive datasets show that the proposed algorithm can obtain up to a 600× speed-up over the standard k-means algorithm with comparable accuracy.
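A minimal sketch of the coarse-to-fine idea follows (not the paper's full multi-stage filter or its hashing step): group the K centers with a small k-means over the centers themselves, then assign each point by first picking the nearest group and searching only the centers inside it. All sizes are illustrative.

```python
# Coarse-to-fine assignment: roughly G + K/G distance computations per point
# instead of K. The result is approximate, since the true nearest center may
# sit in a neighboring group.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((10000, 32))           # points to cluster
K, G = 1000, 32                                # many centers, few groups
centers = X[rng.choice(len(X), K, replace=False)]

# Coarse level: group the K centers with k-means on the centers themselves.
group_centers = centers[rng.choice(K, G, replace=False)]
for _ in range(10):
    g = ((centers[:, None] - group_centers[None]) ** 2).sum(-1).argmin(1)
    for j in range(G):
        if (g == j).any():
            group_centers[j] = centers[g == j].mean(0)

def assign(x):
    """Coarse-to-fine nearest-center search for one point."""
    gid = ((group_centers - x) ** 2).sum(-1).argmin()     # G distances
    cand = np.flatnonzero(g == gid)                       # centers in that group
    if cand.size == 0:                                    # empty group: full search
        return ((centers - x) ** 2).sum(-1).argmin()
    local = ((centers[cand] - x) ** 2).sum(-1).argmin()   # |group| distances
    return cand[local]

print("assigned center:", assign(X[0]))
```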


National Conference on Artificial Intelligence | 2018

From Hashing to CNNs: Training Binary Weight Networks via Hashing

Qinghao Hu; Peisong Wang; Jian Cheng


IEEE Transactions on Neural Networks | 2018

Quantized CNN: A Unified Approach to Accelerate and Compress Convolutional Networks

Jian Cheng; Jiaxiang Wu; Cong Leng; Yuhang Wang; Qinghao Hu


Computer Vision and Pattern Recognition | 2018

Two-Step Quantization for Low-Bit Neural Networks

Peisong Wang; Qinghao Hu; Yifan Zhang; Chunjie Zhang; Yang Liu; Jian Cheng


European Conference on Computer Vision | 2018

Training Binary Weight Networks via Semi-Binary Decomposition

Qinghao Hu; Gang Li; Peisong Wang; Yifan Zhang; Jian Cheng

Collaboration


Dive into Qinghao Hu's collaborations.

Top Co-Authors

Jian Cheng, Chinese Academy of Sciences
Jiaxiang Wu, Chinese Academy of Sciences
Peisong Wang, Chinese Academy of Sciences
Cong Leng, Chinese Academy of Sciences
Gang Li, Chinese Academy of Sciences
Hanqing Lu, Chinese Academy of Sciences
Yifan Zhang, Chinese Academy of Sciences
Yuhang Wang, Chinese Academy of Sciences
Chaoyang Zhao, Chinese Academy of Sciences
Chunjie Zhang, Chinese Academy of Sciences