Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Xiushan Nie is active.

Publication


Featured research published by Xiushan Nie.


International Conference on Intelligent Information Hiding and Multimedia Signal Processing | 2009

A Blind Video Watermarking Scheme Based on DWT

Chun-Xing Wang; Xiushan Nie; Xianqing Wan; Wen Bo Wan; Feng Chao

In this paper, a novel blind video watermarking scheme based on the Discrete Wavelet Transform (DWT) is proposed. In this scheme, the DWT is applied to each frame of the original video, and the watermark is embedded into high-frequency coefficients selected according to certain rules so as to preserve the perceptual quality of the watermarked video. The Quantization Index Modulation (QIM) algorithm is used for embedding to achieve good robustness. Simulations show that the proposed scheme resists Gaussian noise, salt & pepper noise, cutting, resizing, MPEG-2 compression, frame dropping, and frame changing.
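As a rough illustration of the embedding step described in this abstract (not the authors' exact implementation), the sketch below applies a one-level 2-D DWT to a frame with PyWavelets, quantizes a key-selected subset of diagonal-detail coefficients with QIM, and reconstructs the frame; blind extraction re-derives the same positions from the key. The wavelet, the step size delta, and the key-based coefficient selection are all assumptions standing in for the paper's unspecified selection rules.

```python
# Hypothetical sketch of DWT-domain QIM watermark embedding for one grayscale frame.
import numpy as np
import pywt

def embed_bits_qim(frame, bits, key=7, delta=8.0, wavelet='haar'):
    """Embed watermark bits into key-selected diagonal-detail DWT coefficients via QIM."""
    cA, (cH, cV, cD) = pywt.dwt2(frame.astype(float), wavelet)
    flat = cD.ravel().copy()
    idx = np.random.default_rng(key).choice(flat.size, size=len(bits), replace=False)
    for i, b in zip(idx, bits):
        dither = b * delta / 2.0   # two dithered quantizer lattices, one per bit value
        flat[i] = delta * np.round((flat[i] - dither) / delta) + dither
    return pywt.idwt2((cA, (cH, cV, flat.reshape(cD.shape))), wavelet)

def extract_bits_qim(frame, n_bits, key=7, delta=8.0, wavelet='haar'):
    """Blind extraction: choose the quantizer lattice nearest each selected coefficient."""
    _, (_, _, cD) = pywt.dwt2(frame.astype(float), wavelet)
    flat = cD.ravel()
    idx = np.random.default_rng(key).choice(flat.size, size=n_bits, replace=False)
    bits = []
    for i in idx:
        d0 = abs(flat[i] - delta * np.round(flat[i] / delta))
        d1 = abs(flat[i] - delta / 2 - delta * np.round((flat[i] - delta / 2) / delta))
        bits.append(0 if d0 <= d1 else 1)
    return bits
```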


IEEE Signal Processing Letters | 2012

Video Hashing Algorithm With Weighted Matching Based on Visual Saliency

Jiande Sun; Jing Wang; Jie Zhang; Xiushan Nie; Ju Liu

In this letter, a novel video hashing algorithm is proposed in which weighted hash matching is introduced to video hashing for the first time. In the proposed algorithm, the video hash is generated from the ordinal feature derived from the temporally informative representative image (TIRI). At the same time, a representative saliency map (RSM) is constructed from the visual saliency maps of the video segments, and it provides the hash weights for matching. During matching, the traditional bit error rate (BER) is weighted with these hash weights to form the weighted error rate (WER), which is used to measure the similarity between hashes. Experiments on different kinds of videos under various attacks verify the robustness and discrimination of the proposed algorithm.
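The weighted matching itself reduces to a few lines. The following is a minimal sketch of the WER described above, assuming the per-bit weights derived from the RSM are already given:

```python
import numpy as np

def weighted_error_rate(h1, h2, weights):
    """Weighted error rate (WER): saliency-weighted fraction of mismatched hash bits.
    h1, h2: binary hash arrays; weights: per-bit weights (assumed derived from the RSM)."""
    h1, h2, w = np.asarray(h1), np.asarray(h2), np.asarray(weights, float)
    return np.sum(w * (h1 != h2)) / np.sum(w)

# Mismatches at salient (high-weight) positions cost more than at non-salient ones.
print(weighted_error_rate([1, 0, 1, 1], [1, 1, 1, 0], [0.9, 0.9, 0.1, 0.1]))
```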


Science in China Series F: Information Sciences | 2013

Robust video hashing based on representative-dispersive frames

Xiushan Nie; Ju Liu; Jiande Sun; LianQi Wang; Xiaohui Yang

This study proposes a robust video hashing method for video copy detection. The proposed method, which is based on representative-dispersive frames (R-D frames), can reveal both the global and the local information of a video. In this method, a video is represented as a graph with frames as vertices, and a similarity measure is proposed to compute the edge weights. To select the R-D frames, the adjacency matrix of the generated graph is constructed, the adjacency number of each vertex is calculated, and the vertices that represent the R-D frames of the video are then selected. To reveal the temporal and spatial information of the video, all R-D frames are scanned to constitute an image called the video tomography image, whose fourth-order cumulant is calculated to generate a hash sequence that inherently describes the corresponding video. Experimental results show that the proposed video hashing is resistant to geometric attacks on frames and to channel impairments during transmission.
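A minimal sketch of the frame-selection step, under assumptions the abstract does not confirm: cosine similarity between per-frame feature vectors as the similarity measure, a fixed threshold to build the adjacency matrix, and a greedy rule that repeatedly picks the vertex with the largest adjacency number and removes its neighbors to keep the selection dispersive.

```python
import numpy as np

def select_rd_frames(features, sim_thresh=0.8, n_frames=10):
    """Greedy sketch of representative-dispersive frame selection.
    features: (num_frames, dim) per-frame feature vectors (assumed given)."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    adj = (f @ f.T > sim_thresh).astype(int)   # threshold cosine similarity
    np.fill_diagonal(adj, 0)                   # adjacency matrix of the frame graph
    selected, active = [], np.ones(len(f), bool)
    for _ in range(n_frames):
        degree = (adj * active).sum(1) * active   # adjacency number of live vertices
        if degree.max() == 0:
            break                              # remaining vertices are isolated
        v = int(degree.argmax())
        selected.append(v)                     # representative: covers many similar frames
        active[v] = False
        active[adj[v].astype(bool)] = False    # drop its neighbors -> dispersion
    return selected
```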


IEEE Signal Processing Letters | 2011

Robust Video Hashing Based on Double-Layer Embedding

Xiushan Nie; Ju Liu; Jiande Sun; Wei Liu

A robust video hashing scheme for video content identification and authentication, called the double-layer embedding scheme, is proposed. Intra-cluster locally linear embedding (LLE) and inter-cluster multidimensional scaling (MDS) are used in the scheme. Dispersive frames of the video are first selected through a graph model, and the video is partitioned into clusters based on these frames using the k-nearest-neighbor method. Then, intra-cluster LLE and inter-cluster MDS are used to generate local and global hash sequences that inherently describe the corresponding video. Experimental results show that the video hashing is resistant to geometric attacks on frames and to channel impairments during transmission.
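A loose sketch of the double-layer idea using scikit-learn, assuming clusters of frame features have already been formed; the binarization by comparing embedded coordinates to their mean is an illustrative stand-in for the paper's actual hash-generation step.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding, MDS

def double_layer_hash(clusters, n_neighbors=5):
    """Sketch: intra-cluster LLE -> local hashes; inter-cluster MDS -> global hash.
    clusters: list of (n_i, dim) arrays of frame features, one per cluster."""
    local_hashes = []
    for c in clusters:
        lle = LocallyLinearEmbedding(n_neighbors=min(n_neighbors, len(c) - 1),
                                     n_components=2)
        y = lle.fit_transform(c)                   # low-dimensional intra-cluster layout
        local_hashes.append((y > y.mean(0)).astype(int).ravel())  # crude binarization
    centroids = np.array([c.mean(0) for c in clusters])
    g = MDS(n_components=2).fit_transform(centroids)   # inter-cluster geometry
    global_hash = (g > g.mean(0)).astype(int).ravel()
    return local_hashes, global_hash
```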


International Conference on Image Processing | 2012

A visual saliency based video hashing algorithm

Jing Wang; Jiande Sun; Ju Liu; Xiushan Nie; Hua Yan

A novel video hashing algorithm is proposed that takes visual saliency into account during hash generation. In the proposed algorithm, the video hash is fused from two hashes: a spatio-temporal hash (ST-Hash) and a visual hash (V-Hash). The ST-Hash is generated from the ordinal feature, which is formed from the intensity differences between adjacent blocks of the temporally informative representative image (TIRI). At the same time, a representative saliency map (RSM) is constructed from the visual saliency maps of the video segments. The V-Hash is formed from the intensity differences between adjacent blocks of the RSM and is used to modulate the ST-Hash to form the final video hash. Experiments on different kinds of videos under various attacks verify that the proposed algorithm achieves better robustness and discrimination.
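A minimal sketch of the two building blocks, assuming grayscale frames, an exponentially weighted temporal average for the TIRI, and a 16x16 block grid; the modulation step that fuses the two hashes is omitted.

```python
import numpy as np

def tiri(frames, gamma=0.65):
    """Temporally informative representative image: weighted mean of segment frames.
    frames: (num_frames, H, W) grayscale segment; gamma is an assumed weight base."""
    w = gamma ** np.arange(len(frames))[:, None, None]
    return (w * frames).sum(0) / w.sum()

def block_diff_hash(image, block=16):
    """Hash bits from intensity differences between horizontally adjacent blocks."""
    h, w = image.shape[0] // block, image.shape[1] // block
    means = image[:h * block, :w * block].reshape(h, block, w, block).mean((1, 3))
    return (np.diff(means, axis=1) > 0).astype(int).ravel()

# ST-Hash from the TIRI; V-Hash from a representative saliency map (rsm), assumed given:
# st_hash = block_diff_hash(tiri(frames)); v_hash = block_diff_hash(rsm)
```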


IEEE Transactions on Multimedia | 2017

Comprehensive Feature-Based Robust Video Fingerprinting Using Tensor Model

Xiushan Nie; Yilong Yin; Jiande Sun; Ju Liu; Chaoran Cui

Content-based near-duplicate video detection (NDVD) is essential for effective search and retrieval, and robust video fingerprinting is a good solution for NDVD. Most existing video fingerprinting methods use a single feature or concatenate different features to generate video fingerprints, and they perform well under single-mode modifications such as noise addition and blurring. Under combined modifications, however, their performance degrades because such features cannot characterize the video content completely. By contrast, the assistance and consensus among different features can improve the performance of video fingerprinting. Therefore, in the present study, we mine the assistance and consensus among different features based on a tensor model, and we present a new comprehensive feature that fully exploits them in the proposed video fingerprinting framework. We also analyze what the comprehensive feature actually represents with respect to the original video. In this framework, the video is initially set as a high-order tensor consisting of different features, and the video tensor is decomposed via the Tucker model with a solution that determines the number of components. Subsequently, the comprehensive feature is generated from the low-order tensor obtained from the tensor decomposition. Finally, the video fingerprint is computed using this feature. A matching strategy for narrowing the search is also proposed based on the core tensor. The resulting video fingerprinting framework is resistant not only to single-mode modifications but also to their combinations.
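A toy sketch of the decomposition step using the tensorly library: several per-frame feature maps are stacked into a third-order tensor, Tucker-decomposed, and the core tensor is binarized into a fingerprint. The ranks and the median-threshold binarization are assumptions; the paper's rank-selection solution and core-tensor matching strategy are not reproduced here.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

def tensor_fingerprint(feature_maps, ranks=(8, 8, 4)):
    """Sketch: fuse multiple feature maps of a video via Tucker decomposition.
    feature_maps: (H, W, n_features) tensor stacking different features."""
    X = tl.tensor(np.asarray(feature_maps, dtype=float))
    core, factors = tucker(X, rank=list(ranks))    # low-order core captures the consensus
    fp = tl.to_numpy(core).ravel()
    return (fp > np.median(fp)).astype(int)        # binarized fingerprint
```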


International Conference on Acoustics, Speech, and Signal Processing | 2010

Robust video hashing for identification based on MDS

Xiushan Nie; Ju Liu; Jiande Sun

Video identification is extremely important in video browsing, database search, and security. In this paper, we present a video hashing method based on multidimensional scaling (MDS) that works under variable video transmission impairments and is resistant to signal processing. In this method, each frame of the video is divided into blocks, and the low- and middle-frequency DCT coefficients of the luminance component are computed as a disparity measure for MDS. The video is then mapped to a two-dimensional space using MDS, and a robust hash is generated as a video signature from the distances between the points mapped from the frames. We find that this video hashing is resistant to frame geometric attacks (rotation, shifting), random noise, lossy compression, and other video transmission impairments. It can be instrumental in database search, video copy detection, and watermarking applications for video.
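A hypothetical end-to-end sketch of this pipeline, assuming a 16x16 low/middle-frequency DCT patch per frame as the feature, Euclidean disparities between frames, and a median-threshold hash over the distances between consecutive mapped points; none of these specifics come from the paper.

```python
import numpy as np
from scipy.fft import dctn
from sklearn.manifold import MDS

def mds_video_hash(frames, n_coeffs=16):
    """Sketch of MDS-based hashing: frames -> DCT features -> 2-D MDS -> distance hash.
    frames: (num_frames, H, W) grayscale luminance frames."""
    feats = np.array([dctn(f, norm='ortho')[:n_coeffs, :n_coeffs].ravel()
                      for f in frames])            # low/middle-frequency DCT coefficients
    d = np.linalg.norm(feats[:, None] - feats[None, :], axis=-1)  # disparity matrix
    pts = MDS(n_components=2, dissimilarity='precomputed').fit_transform(d)
    steps = np.linalg.norm(np.diff(pts, axis=0), axis=1)  # distances between frame points
    return (steps > np.median(steps)).astype(int)
```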


International Conference on Image Processing | 2013

Logarithmic Spread-Transform Dither Modulation Watermarking Based on Perceptual Model

Wenbo Wan; Ju Liu; Jiande Sun; Xiaohui Yang; Xiushan Nie; Feng Wang

Logarithmic quantization index modulation (LQIM) is an important extension of the original quantization-based watermarking method. However, it is sensitive to valumetric scaling attacks and prone to sign errors after quantization and attacks. To address this, we propose a new method, logarithmic spread-transform dither modulation based on a perceptual model (LSTDM-WM). In this scheme, the host signal is first projected onto a random vector and transformed using a novel logarithmic quantization function. The transformed signal is then quantized according to the watermark data, and the watermarked signal is obtained by applying the inverse transform to the quantized signal. A perceptual model is further exploited to adaptively adjust the quantization step for watermark embedding. Experimental results indicate that our proposed scheme overcomes the two challenges cited above and outperforms conventional LQIM and previously proposed STDM schemes.
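The sketch below illustrates the general shape of such a scheme, using the mu-law-style logarithmic function common in LQIM-type methods. The constants MU and XS, the step size, and the omission of the perceptual quantization-step adaptation are all simplifications, not the paper's parameters.

```python
import numpy as np

MU, XS = 50.0, 1000.0   # illustrative compression constants (assumed, not from the paper)

def log_compress(x):
    """mu-law-style logarithmic transform used in LQIM-type schemes."""
    return np.sign(x) * XS * np.log1p(MU * np.abs(x) / XS) / np.log1p(MU)

def log_expand(c):
    """Exact inverse of log_compress."""
    return np.sign(c) * (XS / MU) * np.expm1(np.abs(c) * np.log1p(MU) / XS)

def lstdm_embed(host, bit, u, delta=4.0):
    """Sketch of logarithmic STDM: project, log-transform, dither-quantize, invert.
    host: 1-D coefficient block; u: random unit projection vector; bit: 0 or 1."""
    p = host @ u                                   # spread-transform projection
    dither = bit * delta / 2.0
    c = log_compress(p)
    cq = delta * np.round((c - dither) / delta) + dither
    return host + (log_expand(cq) - p) * u         # move host along u to hit quantizer

def lstdm_detect(sig, u, delta=4.0):
    """Minimum-distance decoding between the two dithered lattices."""
    c = log_compress(sig @ u)
    d0 = abs(c - delta * np.round(c / delta))
    d1 = abs(c - delta / 2 - delta * np.round((c - delta / 2) / delta))
    return 0 if d0 <= d1 else 1
```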


International Conference on Signal Processing | 2010

LLE-based video hashing for video identification

Xiushan Nie; Jianping Qiao; Ju Liu; Jiande Sun; Xinchao Li; Wei Liu

With the explosive growth of online videos, web video databases tend to contain immense numbers of copies, so effective and efficient copy-identification techniques are required for content management and copyright protection. To this end, this paper presents a novel video hashing method for video copy identification based on locally linear embedding (LLE). It maps the video to a low-dimensional space through LLE, which is invariant to translation, rotation, and rescaling, so the points mapped from the video can serve as a robust hash. Meanwhile, to detect copies that are parts of original videos or contain a clip from an original, a dynamic sliding window is applied for matching. Experimental results show that the video hashing offers good robustness and discrimination.
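The sliding-window matching mentioned at the end can be sketched independently of the LLE stage (see the LLE/MDS sketches above for embedding). Here the "dynamic" aspect is reduced to a fixed-length scan, which is an assumption:

```python
import numpy as np

def sliding_window_match(query_hash, ref_hash, threshold=0.2):
    """Slide the query hash over a longer reference hash; report the best alignment.
    A match is declared when the minimum normalized Hamming distance is below threshold."""
    q, r = np.asarray(query_hash), np.asarray(ref_hash)
    n = len(q)
    dists = [np.mean(q != r[i:i + n]) for i in range(len(r) - n + 1)]
    best = int(np.argmin(dists))
    return dists[best] < threshold, best, dists[best]
```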


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2017

Distribution-oriented Aesthetics Assessment for Image Search

Chaoran Cui; Huidi Fang; Xiang Deng; Xiushan Nie; Hongshuai Dai; Yilong Yin

Aesthetics has become increasingly prominent in image search for enhancing user satisfaction, and image aesthetics assessment has emerged as a promising research topic in recent years. In this paper, in contrast to existing studies that rely on a single label, we propose to quantify image aesthetics by a distribution over quality levels. The distribution representation can effectively characterize the disagreement among users' aesthetic perceptions of the same image. Our framework is developed on the foundation of label distribution learning, in which the reliability of training examples and the correlations between quality levels are fully taken into account. Extensive experiments on two benchmark datasets verify the potential of our approach for aesthetics assessment. The role of aesthetics in image search is also rigorously investigated.
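A minimal numpy sketch of the label-distribution-learning core: softmax regression trained to match each image's distribution over quality levels by minimizing cross-entropy (equivalent to KL divergence up to a constant). The paper's reliability weighting of training examples and its modeling of correlations between quality levels are not reproduced here.

```python
import numpy as np

def train_ldl(X, D, lr=0.1, epochs=200):
    """Label-distribution-learning sketch: softmax regression fit to distributions.
    X: (n, dim) image features; D: (n, levels) ground-truth score distributions."""
    W = np.zeros((X.shape[1], D.shape[1]))
    for _ in range(epochs):
        z = X @ W
        P = np.exp(z - z.max(1, keepdims=True))
        P /= P.sum(1, keepdims=True)               # predicted distributions
        W -= lr * X.T @ (P - D) / len(X)           # gradient of KL(D || P) w.r.t. W
    return W

# A scalar aesthetic score is the distribution-weighted mean of the quality levels:
# levels = np.arange(1, 6); score = (predicted_distribution * levels).sum()
```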

Collaboration


Dive into Xiushan Nie's collaborations.

Top Co-Authors

Jiande Sun (Shandong Normal University)
Chaoran Cui (Shandong University of Finance and Economics)
Ju Liu (Shandong University)
Xiaoming Xi (Shandong University of Finance and Economics)
Hua Yan (Shandong University of Finance and Economics)
Jianping Qiao (Shandong Normal University)