Leida Li
China University of Mining and Technology
Publications
Featured research published by Leida Li.
IEEE Transactions on Systems, Man, and Cybernetics | 2016
Leida Li; Weisi Lin; Xuesong Wang; Gaobo Yang; Khosro Bahrami; Alex C. Kot
Blur is a key determinant in the perception of image quality. Generally, blur causes edges to spread, which leads to shape changes in images. Discrete orthogonal moments have been widely studied as effective shape descriptors. Intuitively, blur can be represented using discrete moments, since noticeable blur affects the magnitudes of the moments of an image. With this consideration, this paper presents a blind image blur evaluation algorithm based on discrete Tchebichef moments. The gradient of a blurred image is first computed to account for shape, which is more effective for blur representation. The gradient image is then divided into equal-size blocks, and the Tchebichef moments are calculated to characterize image shape. The energy of a block is computed as the sum of squared non-DC moment values. Finally, the proposed image blur score is defined as the variance-normalized moment energy, computed with the guidance of a visual saliency model to adapt to the characteristics of the human visual system. The performance of the proposed method is evaluated on four public image quality databases. The experimental results demonstrate that our method produces blur scores highly consistent with subjective evaluations. It also outperforms state-of-the-art image blur metrics and several general-purpose no-reference quality metrics.
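To make the moment-energy computation concrete, here is a minimal Python sketch. It assumes the orthonormal discrete Tchebichef basis can be obtained, up to sign, by QR-orthonormalizing a Vandermonde basis on {0, ..., N-1} (the sign does not affect squared moments); the paper's saliency-guided weighting is omitted.

```python
# Sketch of a Tchebichef-moment blur score (saliency weighting omitted).
import numpy as np

def tchebichef_basis(n):
    """Rows are discrete orthonormal polynomials of degree 0..n-1 (up to sign)."""
    x = np.arange(n, dtype=float)
    v = np.vander(x, n, increasing=True)      # columns: 1, x, x^2, ...
    q, _ = np.linalg.qr(v)                    # orthonormal columns
    return q.T

def blur_score(img, block=8):
    gy, gx = np.gradient(img.astype(float))
    grad = np.hypot(gx, gy)                   # gradient magnitude image
    t = tchebichef_basis(block)
    energies, variances = [], []
    h, w = grad.shape
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            b = grad[i:i + block, j:j + block]
            m = t @ b @ t.T                   # 2-D Tchebichef moments
            energies.append((m ** 2).sum() - m[0, 0] ** 2)  # drop the DC term
            variances.append(b.var())
    # Variance-normalized moment energy; larger means sharper here.
    return sum(energies) / (sum(variances) + 1e-12)

img = np.random.rand(64, 64)
print(blur_score(img))
```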
Information Sciences | 2012
Leida Li; Shushang Li; Ajith Abraham; Jeng-Shyang Pan
This paper presents an invariant image watermarking scheme based on the Polar Harmonic Transform (PHT), a recently developed orthogonal moment method. Like the Zernike moment (ZM) and pseudo-Zernike moment (PZM) approaches, PHT is defined on a circular domain. The magnitudes of PHTs are invariant to image rotation and scaling. Furthermore, PHTs are free of numerical instability, which makes them more suitable for watermarking. In this paper, the invariance properties of PHTs are investigated. During embedding, a subset of the accurately computed PHTs is modified according to the binary watermark sequence. A compensation image is then formed by reconstructing the modified PHT vector. The final watermarked image is obtained by adding the compensation image to the original image. In the decoder, the watermark can be retrieved directly from the magnitudes of the PHTs. Experimental results show that the proposed scheme outperforms ZM/PZM-based schemes in terms of embedding capacity and watermark robustness, and is robust to both geometric and signal processing attacks.
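The sketch below computes magnitudes of the Polar Complex Exponential Transform (PCET, one member of the PHT family) by direct grid summation over the unit disk and checks their rotation invariance with scipy.ndimage.rotate; the paper's accurate PHT computation and the embedding itself are not reproduced.

```python
# Sketch: PCET magnitudes via direct summation over the unit disk; rotation
# changes only the phase of each PCET, so magnitudes should stay close.
import numpy as np
from scipy.ndimage import rotate

def pcet_magnitudes(img, order=3):
    n = img.shape[0]
    c = (n - 1) / 2.0
    y, x = np.mgrid[0:n, 0:n]
    xr, yr = (x - c) / c, (y - c) / c         # normalized coordinates in [-1, 1]
    r = np.hypot(xr, yr)
    theta = np.arctan2(yr, xr)
    inside = r <= 1.0
    mags = {}
    for p in range(-order, order + 1):
        for q in range(-order, order + 1):
            kern = np.exp(-2j * np.pi * p * r ** 2) * np.exp(-1j * q * theta)
            m = (img * kern * inside).sum() * (2.0 / (n - 1)) ** 2 / np.pi
            mags[(p, q)] = abs(m)
    return mags

img = np.fromfunction(lambda i, j: np.sin(i / 9.0) + np.cos(j / 7.0), (65, 65))
rot = rotate(img, 30, reshape=False, order=1)
m1, m2 = pcet_magnitudes(img), pcet_magnitudes(rot)
print(max(abs(m1[k] - m2[k]) for k in m1))    # small up to interpolation error
```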
IEEE Signal Processing Letters | 2014
Leida Li; Hancheng Zhu; Gaobo Yang; Jiansheng Qian
This letter presents a Referenceless quality Measure of Blocking artifacts (RMB) using Tchebichef moments. It is based on the observation that Tchebichef kernels of different orders have varying abilities to capture blockiness. High-odd-order moments are computed block by block to score the blocking artifacts. The blockiness scores are then weighted to incorporate the characteristics of the Human Visual System (HVS), which is achieved by classifying the blocks into smooth and textured ones. Experimental results and comparisons demonstrate the advantage of the proposed method.
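A minimal sketch of scoring one block by its high-odd-order Tchebichef moment energy follows; the specific orders used here are illustrative, and the letter's block classification and HVS weighting are omitted.

```python
# Sketch: blockiness energy of one 8x8 block from high-odd-order Tchebichef
# moments. A coding-grid discontinuity centered in the block is antisymmetric,
# so it projects strongly onto high-odd-order kernels along that axis.
import numpy as np

def tchebichef_basis(n):
    v = np.vander(np.arange(n, dtype=float), n, increasing=True)
    return np.linalg.qr(v)[0].T

def block_blockiness(block):
    n = block.shape[0]
    t = tchebichef_basis(n)
    m = t @ block @ t.T
    odd_high = [o for o in range(n // 2, n) if o % 2 == 1]  # 5, 7 for n = 8
    return sum(m[o, 0] ** 2 + m[0, o] ** 2 for o in odd_high)

blk = np.tile(np.concatenate([np.zeros(4), np.ones(4)]), (8, 1))  # mid-block step
print(block_blockiness(blk))
```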
Neurocomputing | 2016
Leida Li; Yu Zhou; Weisi Lin; Jinjian Wu; Xinfeng Zhang; Beijing Chen
JPEG is the most commonly used image compression standard. In practice, JPEG images easily suffer from blocking artifacts at low bit rates. To reduce the blocking artifacts, many deblocking algorithms have been proposed. However, they also introduce a certain degree of blur, so deblocked images contain multiple distortions. Unfortunately, current quality metrics are not designed for multiply distorted images, so they are limited in evaluating the quality of deblocked images. To solve this problem, this paper presents a no-reference (NR) quality metric for deblocked images. A DeBlocked Image Database (DBID) is first built, with subjective Mean Opinion Scores (MOS) as ground truth. Then an NR DeBlocked Image Quality (DBIQ) metric is proposed that simultaneously evaluates blocking artifacts in smooth regions and blur in textured regions. Experimental results on the DBID database demonstrate that the proposed metric is effective in evaluating the quality of deblocked images and significantly outperforms existing metrics. As an application, the proposed metric is further used for automatic parameter selection in image deblocking algorithms.
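The two-branch idea can be sketched as below: classify blocks by local variance, then score blockiness on smooth blocks and blur on textured ones. The sub-measures and the pooling here are illustrative stand-ins, not the paper's DBIQ formulas.

```python
# Sketch of the two-branch evaluation (stand-in sub-measures, not DBIQ itself).
import numpy as np

def classify_blocks(img, block=8, var_thresh=20.0):
    smooth, textured = [], []
    h, w = img.shape
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            b = img[i:i + block, j:j + block].astype(float)
            (smooth if b.var() < var_thresh else textured).append(b)
    return smooth, textured

def boundary_step(b):
    # Crude blockiness proxy: mean absolute step across the block's mid lines.
    return abs(b[:, 3] - b[:, 4]).mean() + abs(b[3, :] - b[4, :]).mean()

def gradient_energy(b):
    gy, gx = np.gradient(b)
    return np.hypot(gx, gy).mean()            # low gradient energy suggests blur

def dbiq_sketch(img):
    smooth, textured = classify_blocks(img)
    blockiness = np.mean([boundary_step(b) for b in smooth]) if smooth else 0.0
    sharpness = np.mean([gradient_energy(b) for b in textured]) if textured else 0.0
    return blockiness, sharpness              # the paper pools these into one score

img = np.random.rand(64, 64) * 255
print(dbiq_sketch(img))
```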
Computers & Electrical Engineering | 2014
Leida Li; Shushang Li; Hancheng Zhu; Xiaoyue Wu
Highlights: Polar Harmonic Transform is used to detect image copy-move forgery under affine transforms. Circular-domain feature extraction improves detection accuracy. Marking the innermost pixels produces a fine boundary of the detected region. A post-processing filter eliminates false detections.
In copy-move forgery, the copied region may be rotated and/or scaled to fit the scene better. Most existing methods fail when the region has undergone affine transforms. This paper presents a method for detecting this kind of image tampering based on circular pattern matching. The image is first filtered and divided into circular blocks. A rotation- and scaling-invariant feature is then extracted from each block using the Polar Harmonic Transform (PHT). The feature vectors are lexicographically sorted, and the forged regions are detected by finding similar block pairs after proper post-processing. Experimental results demonstrate the efficiency of the method.
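A simplified sketch of the matching pipeline follows, assuming low-order PCET magnitudes as the rotation-invariant block feature and an exact duplication (no affine transform or post-processing filtering):

```python
# Sketch: circular-pattern copy-move matching -- rotation-invariant features
# per block, lexicographic sort, then near-duplicate pairs with a minimum
# spatial offset so a block never "matches" its own neighborhood.
import numpy as np

def pcet_block_feature(block, order=2):
    n = block.shape[0]
    c = (n - 1) / 2.0
    y, x = np.mgrid[0:n, 0:n]
    r = np.hypot((x - c) / c, (y - c) / c)
    theta = np.arctan2(y - c, x - c)
    inside = r <= 1.0
    feats = []
    for p in range(order + 1):
        for q in range(order + 1):
            kern = np.exp(-2j * np.pi * p * r ** 2 - 1j * q * theta)
            feats.append(abs((block * kern * inside).sum()))
    return np.array(feats)

def find_copy_move(img, block=16, step=4, feat_tol=1.0, min_offset=24):
    entries = []
    h, w = img.shape
    for i in range(0, h - block + 1, step):
        for j in range(0, w - block + 1, step):
            f = pcet_block_feature(img[i:i + block, j:j + block].astype(float))
            entries.append((tuple(np.round(f, 1)), (i, j)))
    entries.sort()                            # lexicographic sort of features
    pairs = []
    for (f1, p1), (f2, p2) in zip(entries, entries[1:]):
        far = np.hypot(p1[0] - p2[0], p1[1] - p2[1]) >= min_offset
        if far and np.allclose(f1, f2, atol=feat_tol):
            pairs.append((p1, p2))
    return pairs

img = np.random.rand(96, 96) * 255
img[8:40, 8:40] = img[48:80, 48:80]           # simulate a copied region
print(find_copy_move(img)[:5])
```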
IEEE Transactions on Information Forensics and Security | 2015
Khosro Bahrami; Alex C. Kot; Leida Li; Haoliang Li
In a tampered blurred image generated by splicing, the spliced region and the original image may have different blur types. Splicing localization in such an image is challenging when a forger applies post-processing operations as anti-forensics, removing splicing trace anomalies by resizing the tampered image or blurring the boundary of the spliced region. Such operations remove the telltale artifacts, making splicing detection difficult. In this paper, we overcome this problem by proposing a novel framework for blurred image splicing localization based on partial blur type inconsistency. In this framework, after block-based image partitioning, a local blur type detection feature is extracted from the estimated local blur kernels. The image blocks are classified into out-of-focus or motion blur based on this feature to generate regions of uniform blur type. Finally, a fine splicing localization is applied to increase the precision of the region boundaries. The blur type differences between regions can then be used to trace the inconsistency for splicing localization. Our experimental results show the efficiency of the proposed method in detecting and classifying out-of-focus and motion blur types. For splicing localization, the results demonstrate that our method works well in detecting the inconsistency in the partial blur types of tampered images. However, our method applies to blurred images only.
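The paper's feature comes from estimated local blur kernels, which are not reproduced here. As a crude, clearly hypothetical stand-in, the sketch below separates the two blur types by the anisotropy of the local gradient structure tensor: motion blur is directional while defocus blur is roughly isotropic.

```python
# Stand-in blur-type feature: motion blur suppresses gradients along one
# direction, so the local structure tensor becomes anisotropic, while
# out-of-focus blur leaves it roughly isotropic.
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter1d

def anisotropy(block):
    gy, gx = np.gradient(block.astype(float))
    jxx, jyy, jxy = (gx * gx).mean(), (gy * gy).mean(), (gx * gy).mean()
    tr = jxx + jyy
    det = jxx * jyy - jxy * jxy
    disc = max(tr * tr / 4 - det, 0.0) ** 0.5
    l1, l2 = tr / 2 + disc, tr / 2 - disc      # structure tensor eigenvalues
    return (l1 - l2) / (l1 + l2 + 1e-12)       # ~0 isotropic, ~1 directional

img = np.random.rand(64, 64)
defocus = gaussian_filter(img, 2.0)            # isotropic blur
motion = uniform_filter1d(img, 15, axis=1)     # horizontal motion blur
print(anisotropy(defocus), anisotropy(motion)) # second value should be larger
```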
Information Sciences | 2010
Leida Li; Xiaoping Yuan; Zhaolin Lu; Jeng-Shyang Pan
Rotation invariance is one of the most challenging issues in robust image watermarking. This paper presents two rotation-invariant watermark embedding schemes in the non-subsampled contourlet transform (NSCT) domain based on scale-adapted local regions. Watermark synchronization is achieved using local characteristic regions built around scale-space feature points. The first method embeds a binary watermark sequence by partitioning the local regions in a rotation-invariant pattern. The second method embeds a binary watermark image in a content-based manner, with the watermark signal adapted to the orientation of the image. Both methods achieve rotation-invariant embedding using only the rotation-normalizing angle, so no interpolation is performed. Extensive simulation results and comparisons show that the proposed schemes can efficiently resist both signal processing attacks and geometric attacks.
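The NSCT embedding and the scale-space feature detector are not reproduced here; the sketch below only illustrates estimating a dominant gradient orientation of a local region, the kind of quantity that could serve as a rotation-normalizing angle.

```python
# Sketch: dominant-orientation estimate for a local region (a stand-in for the
# paper's rotation-normalizing angle; NSCT and feature detection omitted).
import numpy as np

def dominant_orientation(region, bins=36):
    gy, gx = np.gradient(region.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    hist, edges = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
    k = hist.argmax()
    return 0.5 * (edges[k] + edges[k + 1])     # peak bin center, in radians

region = np.random.rand(32, 32)
print(np.degrees(dominant_orientation(region)))
```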
Information Sciences | 2016
Jinjian Wu; Weisi Lin; Guangming Shi; Leida Li; Yuming Fang
Image quality assessment (IQA) is in great demand for high-quality image selection in the big data era. The challenge of reduced-reference (RR) IQA is how to use limited data to effectively represent the visual content of an image in the context of IQA. Research in neuroscience indicates that the human visual system (HVS) exhibits a pronounced orientation selectivity (OS) mechanism for visual content extraction. Inspired by this, an OS-based visual pattern (OSVP) is proposed in this paper to extract visual content for RR IQA. OS arises from the arrangement of excitatory and inhibitory interactions among connected cortical neurons in a local receptive field. According to the OS mechanism, the similarity of preferred orientations between two nearby pixels is first analyzed. Then the orientation similarities of pixels in a local neighborhood are arranged, and the OSVP is built for visual information representation. With the help of the OSVP, the visual content of an image is extracted and mapped into a histogram. A quality score is produced by computing the change between the histograms of the reference and distorted images. Experimental results on five public databases demonstrate that the proposed RR IQA method performs consistently with human perception while using only a small amount of reference data (just 9 values).
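One reading of the abstract is sketched below: quantize gradient orientations, count for each pixel how many of its 8 neighbors share a similar orientation (0 to 8), and pool the counts into a 9-bin histogram, which matches the "9 values" of reference data. This is an interpretation, not necessarily the paper's exact OSVP definition.

```python
# Sketch of an OSVP-style descriptor and RR score (interpretive, see above).
import numpy as np

def osvp_histogram(img, n_orient=6):
    gy, gx = np.gradient(img.astype(float))
    orient = np.floor(np.mod(np.arctan2(gy, gx), np.pi) / np.pi * n_orient)
    orient = np.clip(orient.astype(int), 0, n_orient - 1)
    counts = np.zeros_like(orient)
    for di in (-1, 0, 1):                      # compare with the 8 neighbors
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            shifted = np.roll(np.roll(orient, di, axis=0), dj, axis=1)
            counts += (shifted == orient).astype(int)
    # Crop the border to avoid np.roll wraparound, then pool into 9 bins.
    hist = np.bincount(counts[1:-1, 1:-1].ravel(), minlength=9).astype(float)
    return hist / hist.sum()

def rr_score(ref, dist):
    # City-block distance between the two 9-bin histograms.
    return np.abs(osvp_histogram(ref) - osvp_histogram(dist)).sum()

ref = np.random.rand(64, 64)
print(rr_score(ref, ref + 0.1 * np.random.rand(64, 64)))
```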
Journal of Visual Communication and Image Representation | 2015
Zaoshan Liang; Gaobo Yang; Xiangling Ding; Leida Li
Highlights: CPM is proposed to replace full search and other mapping schemes to speed up detection. GZCL is adopted to obtain a smoother edge and a lower false positive rate. FSD is presented to pick out forged regions, improving detection precision.
As a popular image manipulation technique, object removal can be achieved by image inpainting without leaving noticeable traces, which poses huge challenges to passive image forensics. The existing detection approach uses full search for block matching, resulting in high computational complexity. This paper presents an efficient forgery detection algorithm for object removal by exemplar-based inpainting, which integrates central pixel mapping (CPM), greatest zero-connectivity component labeling (GZCL) and fragment splicing detection (FSD). CPM speeds up the search for suspicious blocks by efficiently matching blocks with similar hash values and then finding the suspicious pairs. To improve detection precision, GZCL is used to mark the tampered pixels in suspected block pairs. FSD is adopted to distinguish and locate tampered regions from their best-match regions. Experimental results show that the proposed algorithm reduces processing time by up to 90% while maintaining a detection precision above 85% on different kinds of object-removed images.
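A simplified stand-in for the CPM idea is sketched below: bucket blocks by a compact hash so candidate pairs are found without a full pairwise search, then keep far-apart near-duplicates as suspicious pairs. GZCL and FSD refinement are not reproduced.

```python
# Hash-bucketed block matching (simplified stand-in for CPM).
import numpy as np
from collections import defaultdict

def block_hash(block):
    # Bits: 4x4 grid of downsampled means compared against the block mean.
    n = block.shape[0]
    small = block.reshape(4, n // 4, 4, n // 4).mean(axis=(1, 3))
    return tuple((small > small.mean()).astype(int).ravel())

def suspicious_pairs(img, block=16, step=4, min_offset=24, tol=2.0):
    buckets = defaultdict(list)
    h, w = img.shape
    for i in range(0, h - block + 1, step):
        for j in range(0, w - block + 1, step):
            b = img[i:i + block, j:j + block].astype(float)
            buckets[block_hash(b)].append(((i, j), b))
    pairs = []
    for group in buckets.values():             # pairwise only within a bucket
        for a in range(len(group)):
            for c in range(a + 1, len(group)):
                (p1, b1), (p2, b2) = group[a], group[c]
                far = np.hypot(p1[0] - p2[0], p1[1] - p2[1]) >= min_offset
                if far and np.abs(b1 - b2).mean() < tol:
                    pairs.append((p1, p2))
    return pairs

img = np.random.rand(96, 96) * 255
img[8:40, 8:40] = img[56:88, 56:88]            # simulate object-removal duplication
print(suspicious_pairs(img)[:5])
```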
IEEE Transactions on Multimedia | 2016
Leida Li; Dong Wu; Jinjian Wu; Haoliang Li; Weisi Lin; Alex C. Kot
Recent advances in sparse representation show that overcomplete dictionaries learned from natural images can capture high-level features for image analysis. Since atoms in the dictionaries are typically edge patterns and image blur is characterized by the spread of edges, an overcomplete dictionary can be used to measure the extent of blur. Motivated by this, this paper presents a no-reference sparse representation-based image sharpness index. An overcomplete dictionary is first learned using natural images. The blurred image is then represented using the dictionary in a block manner, and block energy is computed using the sparse coefficients. The sharpness score is defined as the variance-normalized energy over a set of selected high-variance blocks, which is achieved by normalizing the total block energy using the sum of block variances. The proposed method is not sensitive to training images, so a universal dictionary can be used to evaluate the sharpness of images. Experiments on six public image quality databases demonstrate the advantages of the proposed method.
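A sketch of the index follows, with a minimal orthogonal matching pursuit coder and a complete 2-D DCT dictionary standing in for the paper's learned overcomplete dictionary; the block selection fraction is an illustrative parameter.

```python
# Sketch of the sparse-coding sharpness index (stand-in dictionary and coder).
import numpy as np

def dct_dictionary(n=8):
    x = np.arange(n)
    basis = np.cos(np.pi * (2 * x[None, :] + 1) * x[:, None] / (2 * n))
    atoms = [np.outer(basis[p], basis[q]).ravel()
             for p in range(n) for q in range(n)]
    d = np.array(atoms).T                      # columns are atoms
    return d / np.linalg.norm(d, axis=0)       # unit-norm atoms

def omp(D, x, k=4):
    """Minimal orthogonal matching pursuit over unit-norm columns of D."""
    residual, idx, coef = x.copy(), [], np.zeros(0)
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)
        residual = x - D[:, idx] @ coef
    return coef

def sharpness(img, block=8, top_frac=0.5):
    D = dct_dictionary(block)
    blocks = [img[i:i + block, j:j + block].astype(float)
              for i in range(0, img.shape[0] - block + 1, block)
              for j in range(0, img.shape[1] - block + 1, block)]
    variances = np.array([b.var() for b in blocks])
    keep = variances.argsort()[::-1][:max(1, int(top_frac * len(blocks)))]
    energy = sum((omp(D, blocks[i].ravel() - blocks[i].mean()) ** 2).sum()
                 for i in keep)                # DC removed before coding
    return energy / (variances[keep].sum() + 1e-12)  # variance-normalized

img = np.random.rand(64, 64)
print(sharpness(img))
```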