Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Xiaokang Yang is active.

Publication


Featured research published by Xiaokang Yang.


international conference on acoustics, speech, and signal processing | 2003

Just-noticeable-distortion profile with nonlinear additivity model for perceptual masking in color images

Xiaokang Yang; Weisi Lin; Zhongkang Lu; Ee Ping Ong; Susu Yao

We propose a new spatial just noticeable distortion (JND) profile for color image processing. The JND threshold depends on various masking effects in the human visual system (HVS). How to efficiently integrate different masking effects together is the key issue in modelling the JND profile. Based on recent vision research results, we model the masking effects in different stimulus dimensions as a nonlinear additivity model for masking (NAMM). It applies to all color components and accounts for the compound impact of luminance masking and texture masking to estimate the JND threshold in images. In PSNR and subjective comparisons with related work, the proposed NAMM scheme provides a JND profile closer to the actual JND bound of the HVS.
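The sub-additive combination of two masking effects can be sketched as follows; this is a minimal illustration of the nonlinear additivity idea, and the gain-reduction factor `c_lt` and the function name are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def namm_jnd(t_lum, t_tex, c_lt=0.3):
    # Nonlinear-additivity sketch: add the luminance-masking and
    # texture-masking thresholds, then subtract a fraction of their
    # overlap so the combined effect is sub-additive rather than a
    # plain sum. c_lt (0 < c_lt <= 1) is an illustrative factor.
    return t_lum + t_tex - c_lt * np.minimum(t_lum, t_tex)
```

With `c_lt = 0` this degenerates to simple addition, which overestimates the threshold when both effects act on the same pixel; the overlap term corrects for that.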


international symposium on circuits and systems | 2004

Local visual perceptual clues and its use in videophone rate control

Xiaokang Yang; Weisi Lin; Zhongkang Lu; Xiao Lin; Susanto Rahardja; Ee Ping Ong; Susu Yao

We present a method for extracting local visual perceptual clues and its application to videophone rate control, so that scarce bits are assigned for maximum perceptual coding quality. The optimum quantization step is determined with a rate-distortion model that takes the local perceptual clues in the visual signal into account. For extraction of the perceptual clues, luminance adaptation and texture masking are used as the stimulus-driven factors, while skin color serves as the cognition-driven factor in the current implementation. Both objective and subjective quality evaluations of the proposed perceptual rate control (PRC) scheme on the H.263 platform show that it achieves significant quality improvement in block-based coding for bandwidth-hungry applications.
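One way to picture the idea is a per-macroblock weight that scales the quantization step; the weight shapes, the multiplicative combination, and `skin_gain` below are hypothetical illustrations of combining stimulus-driven and cognition-driven clues, not the paper's model.

```python
def mb_weight(lum_adapt, tex_mask, is_skin, skin_gain=0.5):
    # Stimulus-driven part: strong luminance adaptation and texture
    # masking mean errors are less visible, so a larger weight
    # (coarser quantization) is tolerable there.
    w = lum_adapt * tex_mask
    # Cognition-driven part: skin regions draw attention on a
    # videophone, so pull their weight down for finer quantization.
    if is_skin:
        w *= skin_gain
    return w

def perceptual_qstep(base_q, weight):
    # Scale the rate-controlled quantization step by the local weight.
    return base_q * weight
```

A skin-colored, low-texture macroblock thus ends up with a smaller quantization step than a busy background block at the same target bit-rate.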


international conference on acoustics, speech, and signal processing | 2003

Perceptual-quality significance map (PQSM) and its application on video quality distortion metrics

Zhongkang Lu; Weisi Lin; Ee Ping Ong; Susu Yao; Xiaokang Yang

The paper presents a new and general concept, the PQSM (perceptual quality significance map), to be used in measuring visual distortion. It makes use of the mechanism by which the HVS (human visual system) pays more attention to certain areas of visual signals due to one or more of the following factors: salient features in image/video; cues from domain knowledge; association with other media (e.g., speech or audio). The PQSM is a 3D/4D array whose elements represent the relative perceptual-quality significance levels of the corresponding pixels/regions of an image or video. Due to its generality, the PQSM can be incorporated into any visual distortion metric; it can improve the effectiveness and/or efficiency of perceptual metrics, or even enhance a PSNR-based metric. A three-stage PQSM generation method is also proposed, with an implementation of motion, luminance, skin-color and face mapping. Experimental results show that the scheme can significantly improve the performance of current image/video distortion metrics.
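Incorporating such a map into a PSNR-style metric can be sketched as a significance-weighted error pooling; the function below is an assumed minimal form, not the paper's three-stage method.

```python
import numpy as np

def pqsm_weighted_mse(ref, dist, pqsm):
    # Weight each pixel's squared error by its perceptual-quality
    # significance before pooling. A uniform PQSM reduces this to
    # plain MSE, which is how it can upgrade a PSNR-based metric.
    err = (ref - dist) ** 2.0
    return float(np.sum(pqsm * err) / np.sum(pqsm))
```

Errors in high-significance regions (e.g., faces) then dominate the score, while equal errors in ignored regions are discounted.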


international conference on acoustics, speech, and signal processing | 2004

Spatial selectivity modulated just-noticeable-distortion profile for video

Zhongkang Lu; Weisi Lin; Xiaokang Yang; Ee Ping Ong; Susu Yao

Both visual sensitivity and spatial selectivity determine the overall visibility threshold at each pixel in an image, according to physiological and psychological evidence on the human visual system (HVS). Visual sensitivity can be determined by an existing estimator for just-noticeable-distortion (JND). A computational model is proposed for incorporating a selectivity measure into the JND profile so that more effective noise shaping is possible in various applications. Experimental results with noise-embedded video sequences confirm that introducing spatial selectivity enhances the performance of the JND profile used in noise shaping.
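A minimal sketch of modulating a sensitivity-based threshold by selectivity, assuming selectivity is normalized to [0, 1]; the linear form and the strength `alpha` are illustrative, not the paper's model.

```python
def modulated_jnd(jnd, selectivity, alpha=0.5):
    # Fully attended regions (selectivity = 1) keep the base JND
    # threshold; less-attended regions tolerate proportionally more
    # noise, raising the threshold for noise shaping.
    # alpha is an illustrative modulation strength.
    return jnd * (1.0 + alpha * (1.0 - selectivity))
```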


international conference on image processing | 2003

On incorporating just-noticeable-distortion profile into motion-compensated prediction for video compression

Xiaokang Yang; Weisi Lin; Zhongkang Lu; EePing Ong; Susu Yao

We explore a new perceptually-adaptive video coding (PVC) scheme in the motion-compensated prediction loop for hybrid video compression, aiming at better perceptual coding quality and operational efficiency. A new just noticeable distortion (JND) estimator for color images/video is first devised in the image domain, based on our nonlinear additivity model for masking (NAMM). Secondly, the image-domain JND profile is incorporated into hybrid video encoding via JND-adaptive motion estimation (ME) and residue filtering. The scheme works with prevalent video coding standards and various ME strategies. As an example implementation, it is applied to the MPEG-2 TM5 coder and demonstrated to achieve an average improvement of over 18% in ME efficiency, 0.6 dB in perceptual coding quality and, most remarkably, 0.17 dB in the objective coding quality measure (PSNR).
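One way JND can enter the ME loop is through the block-matching cost; the dead-zone SAD below is an illustrative sketch of that idea under the assumption that sub-threshold differences are imperceptible, not the paper's exact criterion.

```python
import numpy as np

def jnd_sad(block, candidate, jnd):
    # JND-adaptive matching cost: per-pixel absolute differences that
    # fall below the JND threshold are imperceptible and cost nothing,
    # steering motion estimation toward perceptually better matches.
    d = np.abs(block - candidate)
    return float(np.sum(np.maximum(d - jnd, 0.0)))
```

With `jnd = 0` this reduces to the conventional SAD used by standard encoders.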


international conference on acoustics, speech, and signal processing | 2004

An effective perceptual weighting model for videophone coding

Xiaokang Yang; Weisi Lin; Zhongkang Lu; Ee Ping Ong; Susu Yao

A perceptual weighting model is proposed for effective rate control, so as to enhance the perceptual coding quality of videophone video. We exploit two categories of factors affecting the perception of the human visual system: stimulus-driven factors and cognition-driven factors. To achieve a simple but effective perceptual weighting model, we use luminance adaptation and texture masking as the stimulus-driven factors, while skin color serves as the cognition-driven factor in the videophone application. Both objective and subjective quality evaluations of videophone-like sequences on the H.263 platform validate the effectiveness of our perceptual weighting model.


international conference on image processing | 2004

Perceptually-adaptive pre-processing for motion-compensated residue in video coding

Xiaokang Yang; Weisi Lin; Zhongkang Lu; Ee Ping Ong; Susu Yao

We present a perceptually-adaptive pre-processing scheme for the motion-compensated residue, based on the just-noticeable-distortion (JND) profile. Human eyes cannot sense changes below the JND threshold around a pixel, due to underlying spatial/temporal masking properties. From the viewpoint of signal compression, a smaller signal variance results in less objective distortion of the reconstructed signal at a given bit-rate. In this paper, the JND profile is incorporated into a motion-compensated residue pre-processor for variance reduction, aimed at coding quality enhancement. A method for adaptively determining the pre-processor's parameter is also proposed. Experimental results show that both perceptual quality and objective quality are enhanced in coded video at a given bit-rate.
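The variance-reduction step can be pictured as a JND-sized dead-zone shrinkage of the residue; this is a sketch, and `lam` merely stands in for the adaptively determined parameter described in the abstract.

```python
import numpy as np

def preprocess_residue(residue, jnd, lam=1.0):
    # Dead-zone shrinkage of the motion-compensated residue: values
    # within lam * JND are perceptually invisible and set to zero;
    # larger values are pulled toward zero by the same amount.
    # lam is a placeholder for the paper's adaptive parameter.
    t = lam * jnd
    return np.sign(residue) * np.maximum(np.abs(residue) - t, 0.0)
```

Because every sample moves toward zero without crossing it, the processed residue has a smaller variance than the original, which is exactly the property the coder exploits.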


international conference on image processing | 2004

Modelling visual attention and motion effect for visual quality evaluation

Zhongkang Lu; Weisi Lin; Xiaokang Yang; EePing Ong; Susu Yao

Perceptual visual quality evaluation by the human visual system (HVS) is very complex. It concerns almost all aspects of visual processing along the visual pathway, from low-level neuron activities to high-level visual perception. Existing perceptual Visual Quality Metrics (VQMs) consider only a few HVS mechanisms and ignore many others. In this paper, two global modulatory factors, visual attention and motion suppression, are modelled and combined to form a mathematical expression, the Perceptual Quality Significance Level (PQSL). To a certain extent, the PQSL value is believed to reflect the processing ability of the human brain on local visual contents. To evaluate their effects on visual quality evaluation, two VQMs are proposed. One is an MSE-like VQM based on a PQSL-modulated JND profile, proposed in Z.K. Lu et al. (2004); the other is based on the visual quality assessment of Zhou Wang et al. (2004), where PQSL values adjust the weights of the structural similarity index. Experimental results show that introducing the global modulatory factors can improve the performance of current visual quality metrics.
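The overall flow can be sketched in two steps: combine the modulatory factors into a significance map, then use it to pool a local quality map; the product-form combination and the function names below are illustrative assumptions, not the paper's formulas.

```python
import numpy as np

def pqsl_map(attention, motion_suppression):
    # Combine the two global modulatory factors into a per-region
    # significance map; the product form is an illustrative choice.
    return attention * motion_suppression

def pqsl_pooled_quality(quality_map, pqsl):
    # Pool a local quality map (e.g., a structural-similarity map)
    # with PQSL weights instead of taking a plain mean, so salient,
    # unsuppressed regions dominate the final score.
    return float(np.sum(pqsl * quality_map) / np.sum(pqsl))
```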


Archive | 2005

Scalable Video Coding With Grid Motion Estimation and Compensation

Zhengguo Li; Xiaokang Yang; Keng Pang Lim; Xiao Lin; Susanto Rahardja; Feng Pan


Archive | 2004

Method and system for video quality measurements

Ee Ping Ong; Xiaokang Yang; Weisi Lin; Zhongkang Lu; Susu Yao

Collaboration


Dive into Xiaokang Yang's collaboration.

Top Co-Authors

Weisi Lin

Nanyang Technological University
