Jiande Sun
Shandong Normal University
Publication
Featured research published by Jiande Sun.
3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video | 2011
Xiaohui Yang; Ju Liu; Jiande Sun; Xinchao Li; Wei Liu; Yuling Gao
We propose an effective virtual view synthesis approach based on depth-image-based rendering (DIBR). In our scheme, two reference color images and their associated depth maps are used to generate an arbitrary virtual viewpoint. First, the main and auxiliary viewpoint images are warped to the virtual viewpoint. After that, cracks and error points are removed to enhance image quality. Then, the disocclusions in the virtual viewpoint image warped from the main viewpoint are complemented with the help of the auxiliary viewpoint. To reduce color discontinuity in the virtual view, the brightness of the two reference viewpoint images is adjusted. Finally, the remaining holes are filled by a depth-assisted asymmetric dilation inpainting method. Simulations show that the proposed view synthesis approach is effective and reliable in both subjective and objective evaluations.
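A minimal Python sketch of the general DIBR idea described above: forward warping by depth-derived disparity, then naive hole filling. The rectified-camera assumption, the `baseline`, `focal`, and `alpha` parameters, and the left-neighbor fill are illustrative stand-ins, not the paper's crack removal or depth-assisted dilation inpainting.

```python
import numpy as np

def warp_to_virtual(color, depth, baseline=0.05, focal=1000.0, alpha=0.5):
    """Forward-warp a reference view toward a virtual viewpoint.

    Assumes rectified cameras, so warping reduces to a horizontal disparity
    shift proportional to inverse depth; alpha in [0, 1] places the virtual
    camera between the two reference views.
    """
    h, w = depth.shape
    virtual = np.zeros_like(color)
    filled = np.zeros((h, w), dtype=bool)
    disparity = (alpha * baseline * focal / np.maximum(depth, 1e-6)).astype(int)
    for y in range(h):
        for x in range(w):
            xv = x + disparity[y, x]
            if 0 <= xv < w:
                virtual[y, xv] = color[y, x]
                filled[y, xv] = True
    return virtual, ~filled  # warped image plus disocclusion/hole mask

def fill_holes(virtual, holes):
    """Naive stand-in for depth-assisted inpainting: copy the nearest
    valid pixel to the left of each hole pixel."""
    out = virtual.copy()
    for y, x in zip(*np.nonzero(holes)):
        xs = np.nonzero(~holes[y, :x])[0]
        if xs.size:
            out[y, x] = out[y, xs[-1]]
    return out
```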
Intelligent Information Hiding and Multimedia Signal Processing | 2008
Xiangyang Sun; Ju Liu; Jiande Sun; Qiang Zhang; Wei Ji
In this paper, we propose a novel invisible digital watermarking scheme based on singular value decomposition (SVD). In the proposed method, the image is first divided into 8×8 sub-blocks, and each block is then decomposed by SVD. The second and third singular values of a block are exchanged if the watermark bit is 1, so that the singular values (SVs) no longer follow their original descending order. This change in the order relationship is detected during extraction. Experimental results show that the proposed scheme is robust against common image processing operations such as JPEG compression, additive Gaussian noise, and median filtering.
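The SV-swapping step can be sketched directly. Note that recomputing an SVD on the marked block would re-sort the singular values into descending order, so this toy extractor keeps the embedding-time singular vectors; the paper's detector, which works on the order relationship without that side information, is necessarily more involved.

```python
import numpy as np

def embed_bit(block, bit):
    """Embed one bit in an 8x8 block by optionally swapping sigma_2 and sigma_3."""
    u, s, vt = np.linalg.svd(block, full_matrices=False)
    if bit == 1:
        s[1], s[2] = s[2], s[1]  # disturb the natural descending order
    return u @ np.diag(s) @ vt, (u, vt)

def extract_bit(marked_block, u, vt):
    """Recover the bit by projecting onto the embedding-time singular vectors."""
    s = np.diag(u.T @ marked_block @ vt)  # recovers the (possibly swapped) diagonal
    return 1 if s[1] < s[2] else 0
```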
IEEE Signal Processing Letters | 2013
Xiaocui Liu; Jiande Sun; Ju Liu
Video hashes derived from a temporally representative frame (TRF) have attracted increasing interest recently. In this paper, a temporally visual weighting (TVW) method based on visual attention is proposed for TRF generation. In the proposed TVW method, the visual attention regions of each frame are obtained by combining dynamic and static attention models. The temporal weight of each frame is defined as the strength of the temporal variation of its visual attention regions, and the TRF of a video segment is generated by accumulating the frames weighted by the proposed TVW method. The advantage of the TVW method is demonstrated by comparison experiments, in which the video hashes are derived from TRFs generated by the proposed TVW method and by other existing weighting methods, respectively. The experimental results show that the TVW method enhances the robustness and discrimination of the video hash.
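A rough sketch of the weighted accumulation, assuming a placeholder `saliency_of` function in place of the paper's combined dynamic/static attention models:

```python
import numpy as np

def trf(frames, saliency_of):
    """Accumulate a video segment into one representative frame, weighting
    each frame by how strongly its saliency map changes over time."""
    sal = [saliency_of(f) for f in frames]  # per-frame saliency maps
    weights = np.array(
        [0.0] + [np.abs(sal[i] - sal[i - 1]).mean() for i in range(1, len(sal))]
    )
    if weights.sum() == 0:
        weights[:] = 1.0  # static segment: fall back to plain averaging
    weights /= weights.sum()
    stack = np.stack([w * f.astype(float) for w, f in zip(weights, frames)])
    return stack.sum(axis=0)
```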
Neurocomputing | 2016
Jiande Sun; Xiaocui Liu; Wenbo Wan; Jing Li; Dong Zhao; Huaxiang Zhang
Video hashing has attracted increasing attention in the field of large-scale video retrieval. However, most existing video hashing algorithms generate the video hash only from low-level features or their combinations, referred to as appearance features. In this paper, a visual attention model is used to extract visual attention features, and the video hash is generated from a fusion of visual-appearance and visual-attention features via a deep belief network (DBN) to obtain representative video features. In addition, the hash distance is treated as a vector to measure the similarity between hashes: the bit error rate (BER) serves as its amplitude and the vector cosine similarity as its angle. Experimental results demonstrate that fusing visual appearance and attention features improves the recall and precision of the video hash, and that the angle of the hash distance improves the accuracy of hash matching.
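The vector-valued hash distance is easy to sketch: the BER supplies the amplitude and the arccosine of the cosine similarity supplies the angle. The DBN feature fusion itself is omitted here.

```python
import numpy as np

def hash_distance(h1, h2):
    """Return (amplitude, angle) for two binary hash vectors of equal length."""
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    ber = np.mean(h1 != h2)  # bit error rate: the amplitude component
    cos = h1 @ h2 / (np.linalg.norm(h1) * np.linalg.norm(h2) + 1e-12)
    angle = np.arccos(np.clip(cos, -1.0, 1.0))  # the angle component
    return ber, angle
```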
IEEE Signal Processing Letters | 2012
Jiande Sun; Jing Wang; Jie Zhang; Xiushan Nie; Ju Liu
In this letter, a novel video hashing algorithm is proposed in which weighted hash matching is defined for video hashing for the first time. In the proposed algorithm, the video hash is generated from the ordinal feature derived from the temporally informative representative image (TIRI). Meanwhile, a representative saliency map (RSM) is constructed from the visual saliency maps of the frames in each video segment, and it provides the hash weights for matching. During hash matching, the traditional bit error rate (BER) is weighted by these hash weights to form the weighted error rate (WER), which is used to measure the similarity between hashes. Experiments on different kinds of videos under various attacks verify the robustness and discrimination of the proposed algorithm.
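The WER itself reduces to a few lines, assuming the RSM-derived weights are already given:

```python
import numpy as np

def wer(h1, h2, weights):
    """Weighted error rate: a BER in which each bit mismatch is scaled
    by its saliency-derived weight."""
    h1, h2 = np.asarray(h1), np.asarray(h2)
    w = np.asarray(weights, dtype=float)
    return np.sum(w * (h1 != h2)) / np.sum(w)
```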
IET Information Security | 2011
Xinchao Li; Ju Liu; Jiande Sun; Xiaohui Yang; Wei Liu
Quantisation index modulation (QIM) is an important class of watermarking methods that has been widely used in blind watermarking applications. It is well known that spread transform dither modulation (STDM), as an extension of QIM, is robust against random noise and re-quantisation. However, the quantisation step sizes used in STDM are random numbers that do not take the features of the image into account. The authors present a step-projection-based approach to incorporate a perceptual model into the STDM framework. Four implementations of the proposed algorithm are further presented according to different modified versions of the perceptual model. Experimental results indicate that the step-projection-based approach incorporates the perceptual model into the STDM framework effectively, thereby providing a significant improvement in image fidelity. Compared with previously proposed modified STDM schemes, the authors' best-performing implementation provides strong resistance against common attacks, especially Gaussian noise, salt-and-pepper noise, and JPEG compression.
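For reference, a generic STDM embed/detect pair is sketched below; the scalar step `delta` and the dither offsets are illustrative textbook choices, not the authors' step-projection construction of perceptually derived step sizes.

```python
import numpy as np

def stdm_embed(x, bit, p, delta):
    """Quantise the projection of host vector x onto spread vector p,
    using dithered quantisers offset by +/- delta/4 for bits 1/0."""
    p = p / np.linalg.norm(p)
    proj = x @ p
    d = delta / 4 if bit else -delta / 4
    q = delta * np.round((proj - d) / delta) + d  # nearest dithered lattice point
    return x + (q - proj) * p

def stdm_detect(y, p, delta):
    """Decide the bit by which dithered lattice is closer to the projection."""
    p = p / np.linalg.norm(p)
    proj = y @ p
    err1 = np.abs(proj - (delta * np.round((proj - delta / 4) / delta) + delta / 4))
    err0 = np.abs(proj - (delta * np.round((proj + delta / 4) / delta) - delta / 4))
    return 1 if err1 < err0 else 0
```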
World Congress on Intelligent Control and Automation | 2010
Xiaohui Yang; Jiande Sun; Ju Liu; Jinyu Chu; Wei Liu; Yuling Gao
In this paper, an intelligent control scheme based on remote gaze tracking is proposed. First, eye-movement video of the user is captured by an ordinary-resolution camera under the illumination of near-infrared light sources, and the images of the eye region and the pupil region are extracted by processing the video in real time. The image of the pupil region is then processed to obtain the coordinates of the pupil center and of the corneal glints produced by the infrared light sources. The coordinates of the point on the screen that the user is observing are computed by a gaze-tracking algorithm based on the cross-ratio invariant, and a calibration procedure is needed to eliminate the error produced by the deviation between the optical and visual axes of the eyeball. Finally, the gaze is tracked in real time. The results show that the accuracy of the gaze-tracking system is about 0.327 degrees horizontally and 0.300 degrees vertically, which is better than most gaze-tracking systems reported in other papers.
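One way to sketch the glint-to-screen mapping: treat the four corneal glints as images of four screen-corner IR sources, fit a homography between them, and map the pupil centre through it, with the optical/visual axis calibration reduced to a constant bias. This is a simplified stand-in related to, but not identical to, the paper's cross-ratio formulation.

```python
import numpy as np

def fit_homography(src, dst):
    """DLT estimate of the 3x3 homography mapping src points to dst points."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)  # null-space vector gives H up to scale

def gaze_point(glints, screen_corners, pupil, bias=(0.0, 0.0)):
    """Map the pupil centre to screen coordinates; bias is a calibration offset."""
    H = fit_homography(glints, screen_corners)
    p = H @ np.array([pupil[0], pupil[1], 1.0])
    return p[0] / p[2] + bias[0], p[1] / p[2] + bias[1]
```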
International Conference on Acoustics, Speech, and Signal Processing | 2007
Wei Zhang; Ju Liu; Jiande Sun; Shuzhong Bai
In this paper, we focus on two-stage underdetermined blind source separation (BSS), which consists of a mixing-matrix estimation stage followed by a source estimation stage. In the first stage, both the mixing matrix and the number of sources are estimated by a new potential-function-based clustering method, using a potential function constructed from a Laplacian-like window function. In the second stage, to overcome the disadvantages of the l1-norm solution, a new sparse representation based on higher-order statistics in the transformed domain, called statistically sparse component analysis (SSCA), is proposed to recover the sources. Compared with existing two-stage methods, the proposed approach achieves higher reconstructed signal-to-noise ratios (SNRs).
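A toy version of the two-stage idea: angular clustering of normalised mixture samples to estimate the mixing columns, then single-active-source assignment per sample. Both stages are crude stand-ins for the paper's potential-function clustering and higher-order-statistics SSCA recovery.

```python
import numpy as np

def estimate_mixing(X, n_sources, iters=50, seed=0):
    """Cluster unit-normalised columns of the 2xT mixture X to estimate
    the mixing directions (spherical k-means on fold-over angles)."""
    U = X / (np.linalg.norm(X, axis=0, keepdims=True) + 1e-12)
    U = U * np.sign(U[0] + 1e-12)  # fold antipodal points together
    rng = np.random.default_rng(seed)
    A = U[:, rng.choice(U.shape[1], n_sources, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmax(A.T @ U, axis=0)
        for k in range(n_sources):
            if np.any(labels == k):
                m = U[:, labels == k].mean(axis=1)
                A[:, k] = m / np.linalg.norm(m)
    return A

def recover_sources(X, A):
    """Sparsest (one-active-source) recovery: project each sample onto
    its best-matching mixing column."""
    S = np.zeros((A.shape[1], X.shape[1]))
    labels = np.argmax(np.abs(A.T @ X), axis=0)
    S[labels, np.arange(X.shape[1])] = np.einsum('ij,ij->j', A[:, labels], X)
    return S
```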
Science in China Series F: Information Sciences | 2013
Xiushan Nie; Ju Liu; Jiande Sun; LianQi Wang; Xiaohui Yang
This study proposes a robust video hashing method for video copy detection. The proposed method, which is based on representative-dispersive frames (R-D frames), can reveal both the global and local information of a video. In this method, a video is represented as a graph with frames as vertices, and a similarity measure is proposed to calculate the edge weights. To select the R-D frames, the adjacency matrix of the generated graph is constructed, the adjacency number of each vertex is calculated, and the vertices that represent the R-D frames of the video are then selected. To reveal the temporal and spatial information of the video, all R-D frames are scanned to form an image called the video tomography image, whose fourth-order cumulant is calculated to generate a hash sequence that inherently describes the corresponding video. Experimental results show that the proposed video hash is resistant to geometric attacks on frames and to channel impairments during transmission.
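Assuming the R-D frames are already selected, the tomography image and cumulant-based hash can be sketched as follows; the single-row scan pattern and the median binarisation are illustrative choices, not the paper's exact construction.

```python
import numpy as np

def tomography_image(rd_frames, row=None):
    """Stack the same scan line (default: middle row) from every R-D frame."""
    row = rd_frames[0].shape[0] // 2 if row is None else row
    return np.stack([f[row, :].astype(float) for f in rd_frames])

def fourth_cumulant(x):
    """Fourth-order cumulant of a 1-D sample: E[x^4] - 3 E[x^2]^2, zero-mean."""
    x = x - x.mean()
    return np.mean(x ** 4) - 3 * np.mean(x ** 2) ** 2

def tomography_hash(rd_frames):
    img = tomography_image(rd_frames)
    c4 = np.array([fourth_cumulant(img[:, j]) for j in range(img.shape[1])])
    return (c4 > np.median(c4)).astype(np.uint8)  # binarise around the median
```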
IEEE Signal Processing Letters | 2011
Xiushan Nie; Ju Liu; Jiande Sun; Wei Liu
A robust video hashing scheme for video content identification and authentication, called the double-layer embedding scheme, is proposed. Intra-cluster locally linear embedding (LLE) and inter-cluster multi-dimensional scaling (MDS) are used in the scheme. During hashing, some dispersive frames of the video are first selected through a graph model, and the video is partitioned into clusters based on these dispersive frames using the K-nearest-neighbor method. Then, intra-cluster LLE and inter-cluster MDS are used to generate local and global hash sequences that inherently describe the corresponding video. Experimental results show that the video hash is resistant to geometric attacks on frames and to channel impairments during transmission.
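A coarse sketch of the double-layer idea using scikit-learn's LLE and MDS as stand-ins: LLE summarises the frames within each cluster (local hash) and MDS summarises the cluster centroids (global hash). The clustering, feature extraction, and binarisation details here are hypothetical simplifications of the paper's scheme.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding, MDS

def double_layer_hash(clusters, n_components=2, n_neighbors=4):
    """clusters: list of (n_frames_i, n_features) arrays of frame features."""
    # Intra-cluster layer: low-dimensional LLE embedding per cluster.
    local = [
        LocallyLinearEmbedding(n_neighbors=min(n_neighbors, len(c) - 1),
                               n_components=n_components).fit_transform(c)
        for c in clusters
    ]
    # Inter-cluster layer: MDS embedding of the cluster centroids.
    centroids = np.stack([c.mean(axis=0) for c in clusters])
    global_ = MDS(n_components=n_components).fit_transform(centroids)
    # Binarise each embedding around its per-dimension median.
    to_bits = lambda z: (z > np.median(z, axis=0)).astype(np.uint8).ravel()
    return np.concatenate([to_bits(l) for l in local] + [to_bits(global_)])
```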