Publication


Featured research published by Weisi Lin.


Journal of Visual Communication and Image Representation | 2011

Perceptual visual quality metrics: A survey

Weisi Lin; C.-C. Jay Kuo

Visual quality evaluation has numerous uses in practice, and also plays a central role in shaping many visual processing algorithms and systems, as well as their implementation, optimization and testing. In this paper, we give a systematic, comprehensive and up-to-date review of perceptual visual quality metrics (PVQMs) to predict picture quality according to human perception. Several frequently used computational modules (building blocks of PVQMs) are discussed. These include signal decomposition, just-noticeable distortion, visual attention, and common feature and artifact detection. Afterwards, different types of existing PVQMs are presented, and further discussion is given toward feature pooling, viewing conditions, computer-generated signals and visual attention. Six often-used image metrics (namely SSIM, VSNR, IFC, VIF, MSVD and PSNR) are also compared across seven public image databases (3832 test images in total). We highlight the most significant research work for each topic and provide links to the extensive relevant literature.
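
Among the six compared metrics, PSNR is the simplest baseline; a minimal sketch (not the survey's own code):

```python
import numpy as np

def psnr(ref, dist, peak=255.0):
    # Peak signal-to-noise ratio in dB; infinite for identical images
    mse = np.mean((ref.astype(float) - dist.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)
```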


IEEE Transactions on Image Processing | 2012

Image Quality Assessment Based on Gradient Similarity

Anmin Liu; Weisi Lin; Manish Narwaria

In this paper, we propose a new image quality assessment (IQA) scheme, with emphasis on gradient similarity. Gradients convey important visual information and are crucial to scene understanding. Using such information, structural and contrast changes can be effectively captured. Therefore, we use the gradient similarity to measure the change in contrast and structure in images. Apart from structural/contrast changes, image quality is also affected by luminance changes, which must also be accounted for to achieve complete and more robust IQA. Hence, the proposed scheme considers both luminance and contrast-structural changes to effectively assess image quality. Furthermore, the proposed scheme is designed to follow the masking effect and visibility threshold more closely, i.e., the case when both the masked and masking signals are small is more effectively tackled by the proposed scheme. Finally, the effects of the changes in luminance and contrast-structure are integrated via an adaptive method to obtain the overall image quality score. Extensive experiments conducted with six publicly available subject-rated databases (comprising diverse images and distortion types) have confirmed the effectiveness, robustness, and efficiency of the proposed scheme in comparison with the relevant state-of-the-art schemes.
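
The core idea, similarity of gradient fields, can be illustrated with a minimal sketch; the constant c and the use of np.gradient (rather than the paper's exact kernels, luminance term, and adaptive pooling) are illustrative assumptions:

```python
import numpy as np

def gradient_similarity(ref, dist, c=170.0):
    # Gradient fields of reference and distorted images
    gy_r, gx_r = np.gradient(ref.astype(float))
    gy_d, gx_d = np.gradient(dist.astype(float))
    g_ref = np.hypot(gx_r, gy_r)   # gradient magnitude
    g_dis = np.hypot(gx_d, gy_d)
    # SSIM-style similarity of the two gradient magnitudes;
    # c stabilises the ratio where gradients are near zero
    sim_map = (2.0 * g_ref * g_dis + c) / (g_ref**2 + g_dis**2 + c)
    return float(sim_map.mean())   # 1.0 means identical gradient fields
```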


IEEE Transactions on Image Processing | 2012

Saliency Detection in the Compressed Domain for Adaptive Image Retargeting

Yuming Fang; Zhenzhong Chen; Weisi Lin; Chia-Wen Lin

Saliency detection plays an important role in many image processing applications, such as region-of-interest extraction and image resizing. Existing saliency detection models are built in the uncompressed domain. Since most images on the Internet are stored in a compressed format such as JPEG (Joint Photographic Experts Group), we propose a novel saliency detection model in the compressed domain in this paper. The intensity, color, and texture features of the image are extracted from discrete cosine transform (DCT) coefficients in the JPEG bit-stream. The saliency value of each DCT block is obtained through Hausdorff distance calculation and feature map fusion. Based on the proposed saliency detection model, we further design an adaptive image retargeting algorithm in the compressed domain. The proposed image retargeting algorithm uses a multi-operator approach, comprising block-based seam carving and image scaling, to resize images. A new definition of texture homogeneity is given to determine the number of block-based seams to remove. Thanks to the accurate saliency information derived directly from the compressed domain, the proposed image retargeting algorithm effectively preserves the visually important regions of images, efficiently removes the less crucial regions, and therefore significantly outperforms the relevant state-of-the-art algorithms, as demonstrated by the in-depth analysis in the extensive experiments.
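
The block-level saliency relies on Hausdorff distances between feature sets. A minimal sketch of the symmetric Hausdorff distance for scalar feature values (the DCT feature extraction and fusion steps are omitted):

```python
import numpy as np

def hausdorff(a, b):
    # Symmetric Hausdorff distance between two 1-D feature sets:
    # the worst-case nearest-neighbour distance in either direction
    d = np.abs(a[:, None] - b[None, :])        # pairwise distances
    return max(d.min(axis=1).max(),            # a -> b direction
               d.min(axis=0).max())            # b -> a direction
```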


IEEE Transactions on Image Processing | 2005

Modeling visual attention's modulatory aftereffects on visual sensitivity and quality evaluation

Zhongkang Lu; Weisi Lin; Xiaokang Yang; Ee Ping Ong; Susu Yao

With the fast development of visual noise-shaping related applications (visual compression, error resilience, watermarking, encryption, and display), there is an increasingly significant demand for incorporating perceptual characteristics into these applications for improved performance. In this paper, a very important mechanism of the human brain, visual attention, is introduced for visual sensitivity and visual quality evaluation. Based upon the analysis, a new numerical measure for visual attention's modulatory aftereffects, the perceptual quality significance map (PQSM), is proposed. To a certain extent, the PQSM reflects the processing ability of the human brain on local visual content statistically. The PQSM is generated by integrating local perceptual stimuli from color contrast, texture contrast, and motion, as well as cognitive features (skin color and faces in this study). Experimental results with subjective viewing demonstrate the performance improvement on two PQSM-modulated visual sensitivity models and two PQSM-based visual quality metrics.


IEEE Transactions on Circuits and Systems for Video Technology | 2005

Motion-compensated residue preprocessing in video coding based on just-noticeable-distortion profile

Xiaokang Yang; Weisi Lin; Zhongkang Lu; Ee Ping Ong; Susu Yao

We present a motion-compensated residue signal preprocessing scheme for video coding based on the just-noticeable-distortion (JND) profile. Human eyes cannot sense changes below the JND threshold around a pixel due to underlying spatial/temporal masking properties. An appropriate (even imperfect) JND model can significantly help to improve the performance of video coding algorithms. From the viewpoint of signal compression, a smaller signal variance results in less objective distortion of the reconstructed signal at a given bit rate. In this paper, a new JND estimator for color video is devised in the image domain with the nonlinear additivity model for masking (NAMM) and is incorporated into a motion-compensated residue signal preprocessor for variance reduction toward coding quality enhancement. As a result, both perceptual quality and objective quality are enhanced in coded video at a given bit rate. A solution for adaptively determining the parameter of the residue preprocessor is also proposed. The devised technique can be applied to any standardized video coding scheme based on motion-compensated prediction. It provides an extra design option for quality control, besides quantization, in contrast with most existing perceptually adaptive schemes, which have so far focused on determining proper quantization steps. As a demonstration, the proposed scheme has been implemented in the MPEG-2 TM5 coder and achieved an average peak signal-to-noise ratio (PSNR) increment of 0.505 dB over the twenty video sequences tested. The perceptual quality improvement has been confirmed by the subjective viewing tests conducted.
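
The variance-reduction idea can be sketched as a simple shrinkage rule: residue components below the JND threshold are imperceptible and can be suppressed. This is a plausible illustration only; the paper's NAMM-based preprocessor and its adaptively determined parameter are more elaborate:

```python
import numpy as np

def preprocess_residue(residue, jnd):
    # Zero out residue values within the imperceptible band [-jnd, jnd];
    # shrink larger values by jnd, reducing residue energy before coding
    residue = np.asarray(residue, dtype=float)
    return np.where(np.abs(residue) <= jnd,
                    0.0,
                    residue - np.sign(residue) * jnd)
```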


IEEE Transactions on Image Processing | 2013

Perceptual Quality Metric With Internal Generative Mechanism

Jinjian Wu; Weisi Lin; Guangming Shi; Anmin Liu

Objective image quality assessment (IQA) aims to evaluate image quality consistently with human perception. Most existing perceptual IQA metrics cannot accurately represent degradations from different types of distortion; e.g., existing structural similarity metrics perform well on content-dependent distortions but not as well as peak signal-to-noise ratio (PSNR) on content-independent distortions. In this paper, we integrate the merits of the existing IQA metrics, guided by the recently revealed internal generative mechanism (IGM). The IGM indicates that the human visual system actively predicts sensory information and tries to avoid residual uncertainty for image perception and understanding. Inspired by the IGM theory, we adopt an autoregressive prediction algorithm to decompose an input scene into two portions: the predicted portion with the predicted visual content, and the disorderly portion with the residual content. Distortions on the predicted portion degrade the primary visual information, and structural similarity procedures are employed to measure its degradation; distortions on the disorderly portion mainly change the uncertain information, and the PSNR is employed for it. Finally, according to the noise energy deployment on the two portions, we combine the two evaluation results to acquire the overall quality score. Experimental results on six publicly available databases demonstrate that the proposed metric is comparable with the state-of-the-art quality metrics.
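
The decomposition step can be sketched with a crude neighbourhood-mean predictor standing in for the paper's autoregressive model; the exact predictor and the energy-based pooling of the two scores are not reproduced here:

```python
import numpy as np

def decompose(img, k=3):
    # Predict each pixel from the mean of its k x k neighbourhood
    # (a stand-in for the autoregressive predictor of the IGM metric)
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    pred = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            pred += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    pred /= k * k
    disorder = img - pred       # residual, "disorderly" portion
    return pred, disorder       # pred + disorder reconstructs img
```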


Signal Processing | 2005

Improved estimation for just-noticeable visual distortion

Xiaohui Zhang; Weisi Lin; Ping Xue

Perceptual visibility threshold estimation, based upon characteristics of the human visual system (HVS), has wide applications in digital image/video processing. An improved scheme for estimating just-noticeable distortion (JND) is proposed in this paper. It is shown to outperform the DCTune model, with the major contributions being a new formula for luminance adaptation adjustment and the incorporation of block classification for contrast masking. The HVS visibility threshold for digital images exhibits an approximately parabolic curve versus gray level, and this has been formulated to yield a more accurate base threshold. Moreover, edge regions are differentiated via block classification to effectively avoid over-estimation of JND in those regions. Experiments with different images and the associated subjective tests show improved performance of the proposed scheme over the DCTune model for luminance adaptation (especially in dark regions) and the masking effect in edge regions. Our model has further been demonstrated to achieve favorable results in perceptual visual distortion gauging and image compression. The improvement in JND estimation facilitates better visual distortion measurement and visual signal compression.
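
The luminance-adaptation behaviour (high thresholds in dark regions, minimal near mid-grey, rising slowly toward white) is commonly modelled with a piecewise curve such as the one below, after Chou and Li; the paper's improved formula differs in its exact coefficients:

```python
import numpy as np

def luminance_jnd(bg):
    # Visibility threshold vs. background luminance bg in [0, 255]:
    # root-law rise toward black, gentle linear rise toward white
    bg = np.asarray(bg, dtype=float)
    dark = 17.0 * (1.0 - np.sqrt(bg / 127.0)) + 3.0
    bright = 3.0 / 128.0 * (bg - 127.0) + 3.0
    return np.where(bg < 127, dark, bright)
```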


IEEE Transactions on Image Processing | 2013

Perceptual Full-Reference Quality Assessment of Stereoscopic Images by Considering Binocular Visual Characteristics

Feng Shao; Weisi Lin; Shanbo Gu; Gangyi Jiang; Thambipillai Srikanthan

Perceptual quality assessment is a challenging issue in 3D signal processing research. It is important to study 3D signals directly instead of simply extending 2D metrics to the 3D case, as was done in some previous studies. In this paper, we propose a new perceptual full-reference quality assessment metric for stereoscopic images that considers binocular visual characteristics. The major technical contribution of this paper is that the binocular perception and combination properties are considered in quality assessment. To be more specific, we first perform left-right consistency checks and compare matching errors between corresponding pixels in binocular disparity calculation, and classify the stereoscopic images into non-corresponding, binocular fusion, and binocular suppression regions. Also, local phase and local amplitude maps are extracted from the original and distorted stereoscopic images as features for quality assessment. Then, each region is evaluated independently by considering its binocular perception property, and all evaluation results are integrated into an overall score. Besides, a binocular just-noticeable-difference model is used to reflect the visual sensitivity of the binocular fusion and suppression regions. Experimental results show that, compared with the relevant existing metrics, the proposed metric achieves higher consistency with subjective assessment of stereoscopic images.


IEEE Transactions on Circuits and Systems for Video Technology | 2006

Estimating Just-Noticeable Distortion for Video

Yuting Jia; Weisi Lin; Ashraf A. Kassim

Just-noticeable distortion (JND), which refers to the maximum distortion that the human visual system (HVS) cannot perceive, plays an important role in perceptual image and video processing. In comparison with JND estimation for images, estimating the JND profile for video needs to take into account the temporal properties of the HVS in addition to the spatial ones. In this paper, we develop a spatio-temporal model estimating JND in the discrete cosine transform domain. The proposed model incorporates the spatio-temporal contrast sensitivity function, the influence of eye movements, luminance adaptation, and contrast masking to be more consistent with human perception. It is capable of yielding JNDs for both still images and video with significant motion. The experiments conducted in this study demonstrate that the JND values the model estimates for video sequences with moving objects are in line with HVS perception. Accurate JND estimation of video toward the actual visibility bounds can be translated into resource savings (e.g., for bandwidth/storage or computation) and performance improvement in video coding and other visual processing tasks (such as perceptual quality evaluation, visual signal restoration/enhancement, watermarking, authentication, and error protection).


IEEE Transactions on Multimedia | 2013

A Saliency Detection Model Using Low-Level Features Based on Wavelet Transform

Nevrez Imamoglu; Weisi Lin; Yuming Fang

Researchers have been taking advantage of visual attention in various image processing applications such as image retargeting, video coding, etc. Recently, many saliency detection algorithms have been proposed that extract features in spatial or transform domains. In this paper, a novel saliency detection model is introduced that utilizes low-level features obtained from the wavelet transform domain. First, the wavelet transform is employed to create multi-scale feature maps that can represent different features, from edges to texture. Then, we propose a computational model for the saliency map from these features. The proposed model modulates local contrast at a location with its global saliency, computed from the likelihood of the features, and considers both local center-surround differences and global contrast in the final saliency map. Experimental evaluation demonstrates promising results, with the proposed model outperforming the relevant state-of-the-art saliency detection models.
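
A one-level Haar decomposition gives a flavour of the wavelet feature maps involved; the actual model uses multiple scales and a likelihood-based global modulation, which this sketch omits:

```python
import numpy as np

def haar_saliency(img):
    # One level of a 2-D Haar transform; the three detail bands
    # capture edge/texture energy, used here as a crude feature map
    x = np.asarray(img, dtype=float)
    lo, hi = (x[0::2] + x[1::2]) / 2, (x[0::2] - x[1::2]) / 2
    lh = (lo[:, 0::2] - lo[:, 1::2]) / 2   # horizontal detail
    hl = (hi[:, 0::2] + hi[:, 1::2]) / 2   # vertical detail
    hh = (hi[:, 0::2] - hi[:, 1::2]) / 2   # diagonal detail
    energy = np.sqrt(lh**2 + hl**2 + hh**2)
    sal = energy / (energy.max() + 1e-12)       # normalise to [0, 1]
    return np.kron(sal, np.ones((2, 2)))        # back to input resolution
```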

Collaboration


Top co-authors of Weisi Lin:

Yuming Fang (Jiangxi University of Finance and Economics)
Xiaokang Yang (Shanghai Jiao Tong University)
Ke Gu (Beijing University of Technology)
Guangtao Zhai (Shanghai Jiao Tong University)
Wenjun Zhang (Shanghai Jiao Tong University)
Chenwei Deng (Beijing Institute of Technology)