
Publication


Featured research published by Hongliang Bai.


International Conference on Multimedia and Expo | 2012

Content-Based Large-Scale Web Audio Copy Detection

Lezi Wang; Yuan Dong; Hongliang Bai; Jiwei Zhang; Chong Huang; Wei Liu

The exponential growth of web videos makes content-based copy detection a crucial issue. Besides image information, audio also plays an important role in copy detection. In this paper, an audio-based copy detection framework is introduced. Three contributions are presented: (1) the band-energy-difference feature is improved by adding multiscale information, which extends the candidate feature set; (2) a conditional-entropy-based method is used to select 16 ordinal relations from the roughly C(91, 16) ≈ 2.6 × 10^17 possible combinations, yielding a more compact and robust feature combination; (3) a result-based fusion strategy is introduced to recall missed true positives. Experiments conducted on the TRECVID 2011 Content-Based Copy Detection (CCD) database show that the proposed algorithm outperforms traditional coarse fingerprints.
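
As a rough illustration of contribution (2), the Python sketch below (not the authors' code) verifies the C(91, 16) count and greedily ranks ordinal band-energy relations by conditional entropy on synthetic data; the 14 energy bands, the random labels, and the greedy rather than joint selection are illustrative assumptions.

    from itertools import combinations
    from math import comb, log2
    import numpy as np

    # 14 bands give C(14, 2) = 91 ordinal relations; choosing 16 of them yields ~2.6e17 subsets.
    print(f"{comb(91, 16):.2e} candidate 16-relation subsets")

    rng = np.random.default_rng(0)
    energies = rng.random((1000, 14))               # hypothetical per-frame band energies
    labels = rng.integers(0, 2, 1000)               # hypothetical copy / non-copy labels
    relations = list(combinations(range(14), 2))    # ordinal relation = "band i > band j" (91 total)

    def conditional_entropy(bit, y):
        """H(y | bit) for one binary ordinal relation."""
        h = 0.0
        for b in (0, 1):
            mask = bit == b
            if not mask.any():
                continue
            p_b = mask.mean()
            for c in (0, 1):
                p = (y[mask] == c).mean()
                if p > 0:
                    h -= p_b * p * log2(p)
        return h

    # Greedily keep the 16 relations whose bits leave the least uncertainty about the labels.
    scored = sorted(relations, key=lambda ij: conditional_entropy(
        (energies[:, ij[0]] > energies[:, ij[1]]).astype(int), labels))
    print(scored[:16])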


International Conference on Acoustics, Speech, and Signal Processing | 2013

An efficient graph-based visual reranking

Chong Huang; Yuan Dong; Hongliang Bai; Lezi Wang; Nan Zhao; Shusheng Cen; Jian Zhao

The state of the art in query expansion relies mainly on spatial information. These methods achieve high performance but suffer from heavy computation and memory costs. The objective of this paper is to perform visual reranking in near-real time without using spatial information. We adopt a previously proposed graph-based method as our confident-sample detection baseline, which has proved successful in achieving high precision. In addition, a novel maximum-kernel-based metric function is introduced to rerank the images in the initial result. We evaluate the method on the standard Paris dataset and a new France landmark dataset. Our experiments demonstrate that the algorithm has great practical value because of its good performance, easy implementation, and high computational efficiency.
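
The sketch below is a hedged reading of the reranking idea: confident samples are approximated by the densest nodes of a k-NN cosine-similarity graph, and the remaining images are rescored by their maximum similarity to that set. Both choices are simplifications, not the paper's exact detector or metric function.

    import numpy as np

    def rerank(features, top_m=10, k=5):
        """features: (n, d) descriptors of the initial result list, row 0 is rank 1."""
        f = features / np.linalg.norm(features, axis=1, keepdims=True)
        sim = f @ f.T                                      # cosine similarity matrix
        np.fill_diagonal(sim, 0.0)
        knn = np.sort(sim, axis=1)[:, -k:]                 # k strongest edges per node
        confident = np.argsort(knn.sum(axis=1))[-top_m:]   # densest nodes as the confident set
        score = sim[:, confident].max(axis=1)              # max-kernel score to the confident set
        return np.argsort(-score)                          # new ranking (indices into the list)

    # toy usage with random descriptors
    feats = np.random.default_rng(1).random((100, 128))
    print(rerank(feats)[:10])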


International Conference on Multimedia and Expo | 2014

Efficient image reranking by leveraging click data

Shusheng Cen; Lezi Wang; Yanchao Feng; Hongliang Bai; Yuan Dong

This paper introduces our system competing in the MSR-Bing Image Retrieval Challenge at ICME 2014. The task of the challenge is to rank images by their relevance to a given topic, leveraging cues hidden in search engine click logs. Following the successful trial in last year's challenge, the search-based method has been shown to be effective for this task. We retain the basic idea of the search-based method in our new system and make several improvements. The first is an adjustment of the textual search algorithm for related clicked images in the database: we simplify the previous scheme and make it more straightforward and efficient. The second innovation is using support vector machines to predict the relevance of each query-image pair.
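
A minimal sketch of the second improvement, assuming a support vector regressor over two illustrative features, a click count from the log and a textual-match score; the features, the synthetic relevance labels, and the use of SVR rather than the authors' exact formulation are all assumptions.

    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    clicks = rng.poisson(3, 500)                       # hypothetical click counts from the log
    text_sim = rng.random(500)                         # hypothetical query/caption similarity
    X = np.column_stack([np.log1p(clicks), text_sim])
    # synthetic relevance labels, only for demonstrating the fit/predict loop
    y = 0.7 * text_sim + 0.3 * np.log1p(clicks) / 3 + rng.normal(0, 0.05, 500)

    model = SVR(kernel="rbf").fit(X, y)
    pairs = np.array([[np.log1p(10), 0.9], [np.log1p(0), 0.2]])
    print(model.predict(pairs))                        # higher score = more relevant query-image pair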


Conference on Multimedia Modeling | 2013

Interactive Video Retrieval Using Combination of Semantic Index and Instance Search

Hongliang Bai; Lezi Wang; Yuan Dong; Kun Tao

We present an efficient implementation of an interactive video search tool for Known Item Search (KIS) using a combination of Semantic Indexing (SIN) and Instance Search (INS). The interaction allows users to locate a video clip via their knowledge of its visual content. Our system offers users a set of concepts, and the SIN module returns candidate keyframes based on the user's selection of concepts. Users then choose keyframes that contain the items of interest, and the INS module recommends frames with content similar to the target clip. Finally, the precise time stamps of the clip are given by Temporal Refinement (TR).
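
One possible shape of the SIN/INS pipeline described above, with placeholder scoring and data structures that are not the authors' implementation; the temporal refinement step is omitted.

    import numpy as np

    def sin_candidates(concept_scores, selected, top_k=50):
        """concept_scores: dict keyframe_id -> {concept: score}. Return the keyframes that
        best match the user-selected concepts (Semantic Indexing step)."""
        ranked = sorted(concept_scores,
                        key=lambda kf: -sum(concept_scores[kf].get(c, 0.0) for c in selected))
        return ranked[:top_k]

    def ins_recommend(descriptors, chosen_ids, top_k=20):
        """descriptors: dict keyframe_id -> feature vector. Recommend keyframes visually
        similar to the frames the user chose (Instance Search step)."""
        target = np.mean([descriptors[i] for i in chosen_ids], axis=0)
        ranked = sorted(descriptors, key=lambda kf: np.linalg.norm(descriptors[kf] - target))
        return [kf for kf in ranked if kf not in chosen_ids][:top_k]

    # toy usage on synthetic keyframes
    frames = {i: np.random.default_rng(i).random(16) for i in range(200)}
    concepts = {i: {"outdoor": float(i % 2), "car": float(i % 3 == 0)} for i in frames}
    shortlist = sin_candidates(concepts, ["car"])
    print(ins_recommend(frames, shortlist[:3])[:5])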


Broadband Communications, Networks and Systems | 2011

A word-based approach for duplicate picture in picture sequence detection

Lezi Wang; Yuan Dong; Hongliang Bai; Wei Liu; Kun Tao

A novel word-based algorithm is presented to detect duplicate Picture-in-Picture (PiP) video sequences. Conventional edge-based methods used to extract PiP regions are not robust to noisy or blurred images, and the Bag of Words (BoW) model suffers from word ambiguity and ignores spatial information. Without detecting PiP regions, and unlike traditional word-based approaches, the algorithm captures visual spatial transformations by exploring the attributes of locally matched keypoint pairs, where the pairs are generated by directly comparing visual words. Finally, the impact of the word representation is discussed thoroughly, including vocabulary size, diversity, and weighting. Experiments conducted on the TRECVID 2010 content-based copy detection development database achieve an F-measure of up to 94%, showing that the algorithm is effective and efficient for PiP video sequence detection.
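
An illustrative sketch of the word-based matching idea: keypoints are matched by identical visual-word ids, and the scale ratios of the matched pairs are inspected for the consistent shrink factor a PiP copy produces. The keypoint format, the use of scale ratio alone, and the thresholds are assumptions rather than the paper's attribute set.

    import numpy as np

    def pip_score(query_kps, ref_kps, ratio_tol=0.1):
        """Each keypoint: (word_id, x, y, scale). Returns the largest fraction of word
        matches agreeing on one scale ratio, a crude piece of PiP evidence."""
        ref_by_word = {}
        for w, _x, _y, s in ref_kps:
            ref_by_word.setdefault(w, []).append(s)
        ratios = [s_q / s_r
                  for w, _x, _y, s_q in query_kps
                  for s_r in ref_by_word.get(w, [])]
        if not ratios:
            return 0.0
        hist, _ = np.histogram(np.array(ratios), bins=np.arange(0.1, 1.2, ratio_tol))
        return hist.max() / len(ratios)

    q = [(7, 10, 10, 2.0), (9, 20, 15, 1.0), (7, 30, 25, 2.2)]
    r = [(7, 100, 80, 6.0), (9, 120, 90, 3.1), (4, 50, 40, 1.0)]
    print(pip_score(q, r))   # ratios cluster near 1/3, hinting at a shrunken PiP copy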


International Conference on Acoustics, Speech, and Signal Processing | 2013

A semantic graph-based algorithm for image search reranking

Nan Zhao; Yuan Dong; Hongliang Bai; Lezi Wang; Chong Huang; Shusheng Cen; Jian Zhao

Image search reranking has become a widely used approach to significantly boost retrieval performance in state-of-the-art content-based image retrieval systems. Most methods merely rely on matching visual distances between the query and initial results, or among initial results, to detect confident samples relevant to the query. However, they may fail to rerank because of the large gap between low-level visual features and high-level semantic concepts. In this paper, we propose to detect reliable relevant samples based on a semantic image graph built from a labeled auxiliary dataset and a Markov random walk algorithm. A graph-based reranking method is then presented to propagate the scores of the detected confident samples to the rest. Our method is evaluated on the standard Paris dataset and a France dataset we introduce. Its performance is demonstrated to match or exceed the state of the art.
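
A hedged sketch of the score-propagation step: confident samples receive the initial mass and a normalized similarity graph spreads it to the rest via a random walk with restart. The graph here is a plain visual-similarity matrix, not the semantic graph built from the labeled auxiliary dataset in the paper.

    import numpy as np

    def propagate(sim, confident, alpha=0.85, iters=50):
        """sim: (n, n) nonnegative similarity matrix; confident: indices of trusted images."""
        P = sim / sim.sum(axis=1, keepdims=True)       # row-stochastic transition matrix
        r0 = np.zeros(sim.shape[0])
        r0[confident] = 1.0 / len(confident)           # restart distribution on the confident set
        r = r0.copy()
        for _ in range(iters):                         # random walk with restart
            r = alpha * P.T @ r + (1 - alpha) * r0
        return np.argsort(-r)                          # reranked order

    sim = np.random.default_rng(2).random((50, 50))
    print(propagate(sim, confident=[0, 1, 2])[:10])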


IEEE International Conference on Network Infrastructure and Digital Content | 2012

A fast color feature for real-time image retrieval

Chong Huang; Yuan Dong; Shusheng Cen; Hongliang Bai; Wei Liu; Jiwei Zhang; Jian Zhao

In this paper, a fast color feature is presented for real-time image retrieval. The feature is based on Dense SIFT (DSIFT) in multi-scale RGB space. A new sum function is proposed to accelerate feature extraction, replacing the Gaussian weighting function. In addition, a novel randomized segment-based sampling algorithm is introduced to filter out superfluous features. In the image retrieval stage, a similarity metric is provided to measure the match between query and reference images. Experiments show that RGB-DSIFT is more resistant to common image deformations than the original DSIFT, and more efficient than the SIFT, CSIFT, and GLOH features in processing time.
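
A toy illustration of the pooling change mentioned above, contrasting a flat sum over a descriptor cell with Gaussian-weighted pooling; the 16x16 cell size and the sigma are illustrative values, not the paper's settings.

    import numpy as np

    def cell_pool(grad_mag, weighting="sum", sigma=8.0):
        """grad_mag: (16, 16) gradient magnitudes of one descriptor cell."""
        if weighting == "gaussian":
            yy, xx = np.mgrid[:16, :16] - 7.5
            w = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
            return (grad_mag * w).sum()                # per-pixel weights, extra multiplies
        return grad_mag.sum()                          # flat sum: no weights to compute

    patch = np.abs(np.random.default_rng(4).normal(size=(16, 16)))
    print(cell_pool(patch), cell_pool(patch, "gaussian"))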


5th IEEE International Conference on Broadband Network & Multimedia Technology | 2013

Fast and compact visual codebook for large-scale object retrieval

Shusheng Cen; Yuan Dong; Hongliang Bai; Chong Huang

In this paper, we propose a novel method for learning a compact codebook on large-scale image datasets. In the past few years, the bag-of-visual-words model has proven effective and efficient in multiple multimedia tasks, including object retrieval, object detection, and scene classification. Existing codebook construction methods, such as k-means or approximate k-means, suffer from information loss in vector quantization, which limits retrieval performance. We improve on the existing methods in both time efficiency and retrieval accuracy. By performing principal component analysis during initialization, clustering can start from a quasi-optimal solution. A leader clustering scheme is also proposed to reduce quantization loss, which leads to a compact and discriminative codebook. Our experiments show that the proposed method requires less training time and yields better performance in large-scale object retrieval.
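
One plausible reading of the PCA-based initialization, sketched below: descriptors are split recursively along their principal direction and the resulting group means seed k-means, so clustering starts near a good solution. This is an assumption about the initialization step, not the authors' exact scheme, and the leader clustering stage is omitted.

    import numpy as np
    from sklearn.cluster import KMeans

    def pca_init(X, k):
        """Recursively split X along its principal direction until k groups exist."""
        groups = [X]
        while len(groups) < k:
            g = groups.pop(0)
            centered = g - g.mean(0)
            d = np.linalg.svd(centered, full_matrices=False)[2][0]   # principal direction
            proj = centered @ d
            groups += [g[proj <= 0], g[proj > 0]]
        return np.array([g.mean(0) for g in groups])

    X = np.random.default_rng(3).random((2000, 64)).astype(np.float32)
    codebook = KMeans(n_clusters=32, init=pca_init(X, 32), n_init=1).fit(X).cluster_centers_
    print(codebook.shape)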


Advances in Multimedia | 2012

Audio-Based Copy Detection in the Large-Scale Internet Videos

Hongliang Bai; Lezi Wang; Chong Huang; Wei Liu; Chengbin Zeng; Yuan Dong

With the explosion of large-scale internet video data, applications and research related to content-based copy detection (CCD) are significant and necessary. Besides image-based CCD, audio-based methods have the advantage of simplicity and efficiency. This article improves on recent audio-based copy detection methods through three contributions. First, a CEPS-like feature is proposed to satisfy the different audio-scale requirements in feature extraction. Second, a flexible hash-based searching algorithm is presented to strengthen query robustness. Finally, result-based fusion is introduced to take advantage of the different features. The actual NDCR performance of the balanced profile varies from 0.223 to 0.460 on the TRECVID 2011 copy detection database, outperforming any single feature.
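
A minimal sketch of the hash-based searching idea, assuming precomputed integer frame fingerprints: an inverted index maps each fingerprint to (track, time) entries, and candidate copies are found by voting on consistent time offsets. Fingerprint extraction itself is out of scope here, and the index layout is an assumption, not the authors' implementation.

    from collections import defaultdict, Counter

    def build_index(reference_tracks):
        """reference_tracks: dict track_id -> list of per-frame fingerprint ints."""
        index = defaultdict(list)
        for track, fps in reference_tracks.items():
            for t, fp in enumerate(fps):
                index[fp].append((track, t))
        return index

    def query(index, query_fps, min_votes=5):
        votes = Counter()
        for t_q, fp in enumerate(query_fps):
            for track, t_r in index.get(fp, []):
                votes[(track, t_r - t_q)] += 1          # consistent offset => likely copy
        return [(track, off, v) for (track, off), v in votes.most_common() if v >= min_votes]

    idx = build_index({"ref_a": [11, 22, 33, 44, 55, 66, 77, 88]})
    print(query(idx, [33, 44, 55, 66, 77], min_votes=3))   # -> [('ref_a', 2, 5)]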


TRECVID | 2010

The France Telecom Orange Labs (Beijing) Video Semantic Indexing Systems - TRECVID 2010 Notebook Paper

Yuan Dong; Kun Tao; Hongliang Bai; Xiaofu Chang; Chengyu Dong; Jiqing Liu; Shan Gao; Jiwei Zhang; Tianxiang Zhou; Guorui Xiao

Collaboration


Dive into Hongliang Bai's collaborations.

Top Co-Authors

Yuan Dong, Beijing University of Posts and Telecommunications
Lezi Wang, Beijing University of Posts and Telecommunications
Chong Huang, Beijing University of Posts and Telecommunications
Shusheng Cen, Beijing University of Posts and Telecommunications
Gang Qin, Beijing University of Posts and Telecommunications
Jiqing Liu, Beijing University of Posts and Telecommunications
Shan Gao, Beijing University of Posts and Telecommunications
Yanchao Feng, Beijing University of Posts and Telecommunications