Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Xingsong Hou is active.

Publication


Featured research published by Xingsong Hou.


Multimedia Tools and Applications | 2012

HMM based soccer video event detection using enhanced mid-level semantic

Xueming Qian; Huan Wang; Guizhong Liu; Xingsong Hou

Highlight detection is a fundamental step in semantics-based video retrieval and personalized sports video browsing. In this paper, an effective hidden Markov model (HMM) based soccer video event detection method built on a hierarchical video analysis framework is proposed. Soccer video shots are classified into four coarse mid-level semantics: global, median, close-up, and audience. Global and local motion information is used to refine the coarse mid-level semantics. The sequential soccer video is segmented into event clips. Both the temporal transitions of the mid-level semantics and the overall features of an event clip are fused using HMMs to determine the event type. The highlight detection performance of dynamic Bayesian networks (DBN), conditional random fields (CRF), and the proposed HMM-based approach is compared. The average F-score of our highlight (goal, shoot, foul, and placed kick) detection approach is 82.92%, outperforming DBN and CRF by 9.85% and 11.12%, respectively. The effects of the number of hidden states, the overall features, and the refinement of mid-level semantics on event detection performance are also discussed.
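The core classification step above can be sketched with the standard HMM forward algorithm: score the clip's sequence of mid-level semantic labels under one HMM per event type and pick the most likely model. All states and parameters below are invented toy values, not the paper's trained models.

```python
import numpy as np

# Toy sketch: classify an event clip by evaluating its mid-level-semantic
# label sequence under one HMM per event type. Parameters are hypothetical.

SEMANTICS = {"global": 0, "median": 1, "close-up": 2, "audience": 3}

def forward_loglik(obs, start, trans, emit):
    """Log-likelihood of an observation sequence under an HMM (forward algorithm)."""
    alpha = start * emit[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
    return np.log(alpha.sum())

# Two hypothetical 2-state event models with different emission tendencies.
models = {
    "goal": (np.array([0.6, 0.4]),
             np.array([[0.7, 0.3], [0.4, 0.6]]),
             np.array([[0.1, 0.1, 0.4, 0.4],   # state favouring close-up/audience
                       [0.4, 0.3, 0.2, 0.1]])),
    "foul": (np.array([0.5, 0.5]),
             np.array([[0.8, 0.2], [0.3, 0.7]]),
             np.array([[0.5, 0.3, 0.1, 0.1],   # state favouring global/median views
                       [0.2, 0.4, 0.3, 0.1]])),
}

def classify(labels):
    """Return the event type whose HMM best explains the label sequence."""
    obs = [SEMANTICS[l] for l in labels]
    return max(models, key=lambda m: forward_loglik(obs, *models[m]))
```

The paper additionally fuses overall clip features with these temporal transitions; this sketch shows only the sequence-likelihood part.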


IEEE Transactions on Circuits and Systems for Video Technology | 2015

Landmark Summarization With Diverse Viewpoints

Xueming Qian; Yao Xue; Xiyu Yang; Yuan Yan Tang; Xingsong Hou; Tao Mei

Landmark summarization with diverse viewpoints is very important in landmark retrieval, as it can create a comprehensive description of a landmark for users. In this paper, we present an approach for summarizing a collection of landmark images from diverse viewpoints. First, we group landmark images with overlapping content via viewpoint album (VA) generation. Second, we model the relative viewpoint of each image within a VA based on the spatial layout of the landmark's distinctive descriptors. Third, we express the relative viewpoint of an image as a 4-D viewpoint vector comprising horizontal, vertical, scale, and rotation components. Finally, we summarize the landmark in terms of viewpoints. Experimental results show the effectiveness of the proposed landmark summarization approach.
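A 4-D viewpoint vector of this kind (horizontal shift, vertical shift, scale, rotation) can be recovered from matched keypoints with a least-squares similarity transform. This is my own illustrative construction, not the paper's exact estimator.

```python
import numpy as np

# Illustrative sketch: recover a 4-D relative viewpoint (dx, dy, scale, theta)
# between two images from matched keypoint coordinates by fitting a
# similarity transform dst ~ s*R(theta)@src + t in closed form.

def relative_viewpoint(src, dst):
    """src, dst: matched (x, y) point lists. Returns (dx, dy, scale, theta)."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    s_c, d_c = src - mu_s, dst - mu_d
    # Solve for a = s*cos(theta), b = s*sin(theta) via the normal equations.
    denom = (s_c ** 2).sum()
    a = (s_c * d_c).sum() / denom
    b = (s_c[:, 0] * d_c[:, 1] - s_c[:, 1] * d_c[:, 0]).sum() / denom
    scale = np.hypot(a, b)
    theta = np.arctan2(b, a)
    R = np.array([[a, -b], [b, a]])
    t = mu_d - R @ mu_s
    return t[0], t[1], scale, theta
```

With exact correspondences the fit is exact; with noisy SIFT matches it returns the least-squares viewpoint.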


international conference on multimedia and expo | 2013

Generating representative images for landmark by discovering high frequency shooting locations from community-contributed photos

Shuhui Jiang; Xueming Qian; Yao Xue; Fan Li; Xingsong Hou

Representative image generation offers comprehensive knowledge of a landmark and has been an active research area in recent years. This paper presents a representative image generation system that discovers high-frequency shooting locations from geo-tagged community-contributed photos. We observe that the views (e.g., far and near, front, back, and side) of photos taken at the same location are usually similar but differ across shooting locations. Our system works in three steps: 1) a landmark dataset is filtered from social media using a combination of tags and geo-tags; 2) high-frequency shooting locations are mined by geo-tag clustering; 3) visual features are then used to remove irrelevant images and to rank the remainder by intra- and inter-SIFT matching. This work is the first attempt to generate representative images by mining high-frequency shooting locations. Evaluation on ten landmarks shows its effectiveness.
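The geo-tag clustering step can be approximated very simply: snap each photo's geo-tag to a coarse grid cell and keep cells with many photos. The cell size and photo threshold below are invented placeholders, not the paper's parameters.

```python
from collections import Counter

# Toy sketch of high-frequency shooting-location mining: quantize geo-tags to
# a grid and keep cells exceeding a photo-count threshold. A real system
# would use a proper clustering of (lat, lon) points instead of a fixed grid.

def shooting_hotspots(geotags, cell=0.001, min_photos=3):
    """geotags: list of (lat, lon). Returns {grid_cell: photo_count} for
    cells holding at least min_photos photos (candidate shooting locations)."""
    cells = Counter((round(lat / cell), round(lon / cell)) for lat, lon in geotags)
    return {c: n for c, n in cells.items() if n >= min_photos}
```

Photos in a surviving cell would then be matched visually (step 3) to pick the representative image for that viewpoint.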


IEEE Transactions on Multimedia | 2017

Image Location Inference by Multisaliency Enhancement

Xueming Qian; Huan Wang; Yisi Zhao; Xingsong Hou; Meng Wang; Yuan Yan Tang

Locations of images have been widely used in many application scenarios for large geotagged image corpora. For images that are not geographically tagged, we estimate their locations with the help of a large geotagged image set via content-based image retrieval. Bag-of-words image representation has been widely utilized. However, retrieval based on individual visual words is not effective at expressing the salient relationships among image regions. In this paper, we present an image location estimation approach based on multisaliency enhancement. We first extract regions of interest (ROIs) by mean-shift clustering on the visual words and the salient map of the image, based on which we further determine the importance of each ROI. Then, we describe each ROI by the spatial descriptors of its visual words. Finally, region-based visual phrases are generated to further enhance saliency in image location estimation. Experiments show the effectiveness of the proposed approach.
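The ROI-importance idea can be illustrated with a minimal scoring rule: weight each candidate ROI by the fraction of the image's total saliency it covers. The scoring rule is my assumption, standing in for the paper's importance measure.

```python
import numpy as np

# Minimal sketch: rank candidate ROIs by the saliency mass they contain,
# mimicking the use of a salient map to decide which regions matter most
# for location inference. Boxes are (x0, y0, x1, y1) in pixel coordinates.

def roi_importance(saliency, rois):
    """saliency: 2-D array of per-pixel saliency; rois: list of boxes.
    Returns (box, score) pairs sorted by descending saliency fraction."""
    total = saliency.sum()
    scored = [((x0, y0, x1, y1), saliency[y0:y1, x0:x1].sum() / total)
              for x0, y0, x1, y1 in rois]
    return sorted(scored, key=lambda r: r[1], reverse=True)
```

High-scoring ROIs would then contribute their visual-word spatial descriptors to the retrieval stage.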


Neurocomputing | 2015

SAR complex image data compression based on quadtree and zerotree coding in discrete wavelet transform domain: a comparative study

Xingsong Hou; Min Han; Chen Gong; Xueming Qian

A SAR complex image data compression algorithm based on quadtree coding (QC) in the discrete wavelet transform (DWT) domain (QC-DWT) is proposed. We show that QC-DWT achieves the best performance for SAR complex image compression. Moreover, we observe that QC-DWT outperforms zerotree-based wavelet coding algorithms such as Consultative Committee for Space Data Systems Image Data Compression (CCSDS-IDC) and the Set Partitioning in Hierarchical Trees (SPIHT) algorithm on SAR complex image data, and that CCSDS-IDC is deficient for SAR complex image data compression. This is because the DWT coefficients of SAR complex image data exhibit an intrascale clustering characteristic but no interscale attenuation characteristic, unlike those of SAR amplitude images and optical images.
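The quadtree-coding idea exploits exactly that intrascale clustering: a block of DWT coefficients that is entirely insignificant at the current threshold costs a single bit. This is a simplified one-bit-plane sketch of the quadtree significance pass only; real QC-DWT also codes sign and refinement bits.

```python
import numpy as np

# Simplified quadtree significance coding for one threshold: a block whose
# coefficients are all below the threshold is coded with one 0 bit; otherwise
# a 1 bit is emitted and the block is split into four quadrants, recursively.

def quadtree_bits(block, threshold):
    """Return the significance bitstream for a square coefficient block."""
    bits = []
    def visit(b):
        if np.abs(b).max() < threshold:
            bits.append(0)                     # whole block insignificant
        elif b.shape == (1, 1):
            bits.append(1)                     # significant single coefficient
        else:
            bits.append(1)                     # significant: split into quadrants
            h, w = b.shape[0] // 2, b.shape[1] // 2
            for sub in (b[:h, :w], b[:h, w:], b[h:, :w], b[h:, w:]):
                visit(sub)
    visit(np.asarray(block, float))
    return bits
```

When significant coefficients cluster within a scale, as the abstract describes for SAR complex data, most quadrants die early and the bitstream stays short.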


Multimedia Tools and Applications | 2014

Video text detection and localization in intra-frames of H.264/AVC compressed video

Xueming Qian; Huan Wang; Xingsong Hou

Video texts are closely related to video content, and text information can facilitate content-based video analysis, indexing, and retrieval. Video sequences are usually compressed before storage and transmission, and a basic step of text-based applications is text detection and localization. In this paper, an overlaid text detection and localization method is proposed for H.264/AVC compressed video using the integer discrete cosine transform (DCT) coefficients of intra-frames. The main contributions are twofold: 1) coarse text block detection using thresholds adaptive to block sizes and quantization parameters; 2) text line localization according to the characteristics of text in intra-frames in the H.264/AVC compressed domain. Comparisons are made with a pixel-domain text detection method for H.264/AVC compressed video. Text detection results on five H.264/AVC video sequences at various qualities show the effectiveness of the proposed method.
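The coarse detection step can be sketched as an AC-energy test on transform blocks, with a threshold that scales with the quantizer. The threshold rule below is my own stand-in for the paper's block-size and QP-adaptive thresholds.

```python
import numpy as np

# Rough sketch: flag a 4x4 integer-DCT block as a text candidate when its
# AC-coefficient energy exceeds a threshold scaled by the quantization
# parameter (overlaid text produces strong high-frequency coefficients).

def text_candidate_blocks(blocks, qp, base_thresh=100.0):
    """blocks: list of 4x4 DCT coefficient arrays. Returns indices of blocks
    whose AC energy exceeds a QP-scaled threshold."""
    thresh = base_thresh * (1 + qp / 51.0)    # H.264 QP ranges over 0..51
    hits = []
    for i, b in enumerate(blocks):
        b = np.asarray(b, float)
        ac_energy = (b ** 2).sum() - b[0, 0] ** 2   # exclude the DC term
        if ac_energy > thresh:
            hits.append(i)
    return hits
```

The surviving candidate blocks would then be grouped into text lines in the localization stage.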


international conference on image processing | 2012

Tag filtering based on similar compatible principle

Xueming Qian; Xian-Sheng Hua; Xingsong Hou

In social image sharing websites, users provide several descriptive tags to annotate their shared images. The raw tags are usually noisy, biased, and incomplete, so filtering them is important for tag-based applications. In this paper, a tag filtering approach based on the similar compatible principle is proposed. We classify tags into two sets: one relevant to the image content and the other irrelevant to it. We filter the tags by ranking highly relevant tags ahead of tags with low relevance using the similar compatible principle, which determines the ranks of user-annotated tags by maximizing, at each step, the compatible value of changing a tag's label from irrelevant to relevant. Experiments on a crawled Flickr dataset demonstrate the effectiveness of the proposed approach.
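The greedy step-by-step promotion described above can be sketched as follows: at each step, move the tag whose compatibility with the current relevant set is largest. The similarity matrix and the choice of seed tag are my assumptions for illustration.

```python
# Illustrative greedy sketch: repeatedly promote the pooled tag whose summed
# similarity to the already-relevant tags is largest; the promotion order
# becomes the relevance ranking of the tags.

def rank_tags(tags, sim, seed):
    """tags: tag names; sim: dict[(a, b)] -> similarity (symmetric lookup);
    seed: a tag assumed relevant. Returns tags in relevance order."""
    s = lambda a, b: sim.get((a, b), sim.get((b, a), 0.0))
    relevant, pool = [seed], [t for t in tags if t != seed]
    while pool:
        best = max(pool, key=lambda t: sum(s(t, r) for r in relevant))
        pool.remove(best)
        relevant.append(best)
    return relevant
```

In practice the similarities would come from tag co-occurrence or visual-content statistics rather than a hand-written table.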


IEEE Transactions on Multimedia | 2018

Efficient and Robust Image Coding and Transmission Based on Scrambled Block Compressive Sensing

Zan Chen; Xingsong Hou; Xueming Qian; Chen Gong

Image transmission in a wireless visual sensor network (WVSN) with limited resources over an unreliable, bandwidth-limited wireless channel is challenging. This paper presents a highly efficient and robust image coding and transmission scheme with a simple encoder based on compressive sensing (CS) for WVSNs. First, an image measurement based on scrambled block compressive sampling with a separable sensing operator is proposed to simplify the encoder. Second, a progressive nonuniform quantization, which exploits the measurement distribution at the encoder side and the measurement dependencies at the decoder side, is designed to improve the rate-distortion (R-D) performance while maintaining low complexity at the encoder. Third, to further improve the R-D performance, a progressive non-local low-rank reconstruction is designed at the decoder. The experimental results show that the proposed scheme achieves higher R-D performance than the benchmark CS-based image coding and transmission schemes, and higher robustness than traditional source-channel coding such as Consultative Committee for Space Data Systems Image Data Compression (CCSDS-IDC) with Raptor codes under a time-varying packet loss channel, while the encoding time can be significantly reduced compared with traditional image coding schemes. The results also show that the proposed scheme achieves state-of-the-art coding efficiency with lower computational complexity at the encoder while still supporting error resilience.
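The encoder side of scrambled block compressive sampling can be sketched in a few lines: scramble the pixels with a key-driven global permutation, then measure each block with one shared Gaussian sensing matrix. Block size, sampling rate, and the Gaussian operator below are illustrative choices, not the paper's exact separable operator.

```python
import numpy as np

# Encoder-side sketch of scrambled block compressive sampling: a key-driven
# permutation spreads image energy across blocks, then each block is measured
# with a single shared sensing matrix. A decoder with the same key and matrix
# would invert the process via CS reconstruction.

def sbcs_measure(image, block=8, rate=0.5, key=0):
    """image: 2-D array with sides divisible by block. Returns per-block
    measurements, the permutation, and the sensing matrix."""
    rng = np.random.default_rng(key)
    flat = image.astype(float).ravel()
    perm = rng.permutation(flat.size)          # key-driven pixel scrambling
    scrambled = flat[perm].reshape(image.shape)
    m = int(rate * block * block)
    phi = rng.normal(size=(m, block * block))  # shared per-block sensing matrix
    h, w = image.shape
    meas = []
    for y in range(0, h, block):
        for x in range(0, w, block):
            meas.append(phi @ scrambled[y:y+block, x:x+block].ravel())
    return np.stack(meas), perm, phi
```

Sharing one sensing matrix across blocks is what keeps the encoder simple; the scrambling equalizes the measurement statistics the quantizer later exploits.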


Multimedia Tools and Applications | 2016

Compressive sensing reconstruction for compressible signal based on projection replacement

Zan Chen; Xingsong Hou; Chen Gong; Xueming Qian

-


international conference on internet multimedia computing and service | 2015

Hyperspectral image compression based on DLWT and PCA

Qiuyan Shi; Xingsong Hou; Xueming Qian

-
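Since only the title is available here, the following is a title-level sketch: "DLWT and PCA" suggests the common pairing of PCA along the spectral axis to decorrelate hyperspectral bands before a spatial wavelet transform. The wavelet (DLWT) stage is omitted; this shows just a spectral PCA step, with all details assumed.

```python
import numpy as np

# Title-level sketch: decorrelate hyperspectral bands with PCA along the
# spectral axis, keeping the top-k principal "bands" for later spatial
# transform coding. The real method's DLWT stage is not shown.

def spectral_pca(cube, k):
    """cube: (bands, h, w) hyperspectral cube. Returns the k principal
    spectral components as a (k, h, w) cube."""
    bands, h, w = cube.shape
    X = cube.reshape(bands, -1).astype(float)
    X -= X.mean(axis=1, keepdims=True)         # center each band
    cov = X @ X.T / X.shape[1]                 # spectral covariance
    vals, vecs = np.linalg.eigh(cov)           # eigenvalues ascending
    top = vecs[:, ::-1][:, :k]                 # top-k spectral components
    return (top.T @ X).reshape(k, h, w)
```

Most hyperspectral energy typically concentrates in the first few principal bands, which is what makes the subsequent wavelet coding efficient.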

Collaboration


Dive into Xingsong Hou's collaborations.

Top Co-Authors

Xueming Qian (Xi'an Jiaotong University)
Chen Gong (University of Science and Technology of China)
Huan Wang (Xi'an Jiaotong University)
Zan Chen (Xi'an Jiaotong University)
Guizhong Liu (Xi'an Jiaotong University)
Guoshuai Zhao (Xi'an Jiaotong University)
Jinqiang Sun (Xi'an Jiaotong University)
Lan Zhang (Xi'an Jiaotong University)
Tianlei Liu (Xi'an Jiaotong University)
Xiaoxiao Liu (Xi'an Jiaotong University)