Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Junle Wang is active.

Publications


Featured research published by Junle Wang.


IEEE Transactions on Image Processing | 2013

Computational Model of Stereoscopic 3D Visual Saliency

Junle Wang; Matthieu Perreira Da Silva; Patrick Le Callet; Vincent Ricordel

Many computational models of visual attention that perform well in predicting salient areas of 2D images have been proposed in the literature. The emerging applications of stereoscopic 3D displays bring additional depth information that affects human viewing behavior, and require extending the efforts made in 2D visual modeling. In this paper, we propose a new computational model of visual attention for stereoscopic 3D still images. Apart from detecting salient areas based on 2D visual features, the proposed model takes depth as an additional visual dimension. The measure of depth saliency is derived from eye movement data obtained in an eye-tracking experiment using synthetic stimuli. Two different ways of integrating depth information into the modeling of 3D visual attention are then proposed and examined. For the performance evaluation of 3D visual attention models, we have created an eye-tracking database, which contains stereoscopic images of natural content and is made publicly available along with this paper. The proposed model performs well compared with state-of-the-art 2D models on 2D images. The results also suggest that better performance is obtained when depth information is taken into account through the creation of a depth saliency map, rather than when it is integrated by a weighting method.
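The two depth-integration schemes compared in the abstract can be sketched as follows. This is a minimal illustration assuming normalized saliency maps; the function name and the pooling operators (average pooling, per-pixel weighting) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def fuse_saliency(sal_2d, sal_depth, method="map"):
    """Combine a 2D saliency map with a depth saliency map.

    Two integration schemes, as discussed in the abstract:
    - "map":    treat depth as its own saliency map and pool it with
                the 2D map (the better-performing variant per the paper).
    - "weight": use depth directly as a per-pixel weight on the 2D map.
    The pooling operators here are illustrative assumptions.
    """
    sal_2d = sal_2d / (sal_2d.max() + 1e-8)
    sal_depth = sal_depth / (sal_depth.max() + 1e-8)
    if method == "map":
        fused = 0.5 * (sal_2d + sal_depth)  # simple average pooling
    else:
        fused = sal_2d * sal_depth          # depth-weighted 2D saliency
    return fused / (fused.max() + 1e-8)

# toy example on random 4x4 maps
s2d = np.random.rand(4, 4)
sdep = np.random.rand(4, 4)
fused = fuse_saliency(s2d, sdep, method="map")
```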


IEEE Transactions on Image Processing | 2014

Saliency Detection for Stereoscopic Images

Yuming Fang; Junle Wang; Manish Narwaria; Patrick Le Callet; Weisi Lin

Saliency detection techniques have been widely used in various 2D multimedia processing applications. Currently, the emerging applications of stereoscopic display require new saliency detection models for stereoscopic images. Different from saliency detection for 2D images, depth features have to be taken into account in saliency detection for stereoscopic images. In this paper, we propose a new stereoscopic saliency detection framework based on the feature contrast of color, intensity, texture, and depth. Four types of features, including color, luminance, texture, and depth, are extracted from DCT coefficients to represent the energy of image patches. A Gaussian model of the spatial distance between image patches is adopted to account for both local and global contrast. A new fusion method is designed to combine the feature maps into the final saliency map for stereoscopic images. Experimental results on a recent eye-tracking database show the superior performance of the proposed method over existing ones in saliency estimation for 3D images.
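The patch-based feature-contrast idea can be sketched as follows. The DCT-based feature extraction is abstracted away to precomputed per-patch feature vectors, and the `sigma` value and normalization are assumed parameters, not the paper's.

```python
import numpy as np

def patch_contrast_saliency(features, positions, sigma=0.25):
    """Patch-level saliency from feature contrast (sketch).

    features  : (N, D) array, one feature vector per patch (e.g. color,
                luminance, texture, depth energies from DCT coefficients).
    positions : (N, 2) array of normalized patch centers in [0, 1].

    A Gaussian model of the spatial distance between patches weights the
    pairwise feature differences, so that both local contrast (nearby
    patches weigh more) and global contrast contribute to saliency.
    """
    diff = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    dist2 = np.sum((positions[:, None, :] - positions[None, :, :]) ** 2, axis=-1)
    w = np.exp(-dist2 / (2 * sigma ** 2))   # Gaussian spatial weighting
    sal = (w * diff).sum(axis=1)
    return sal / (sal.max() + 1e-8)
```

A patch whose features differ from its (especially nearby) neighbors receives high saliency; uniform regions receive low saliency.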


International Conference on Acoustics, Speech, and Signal Processing | 2014

Stereoscopic image retargeting based on 3D saliency detection

Junle Wang; Yuming Fang; Manish Narwaria; Weisi Lin; Patrick Le Callet

In this paper, we propose a novel stereoscopic image retargeting algorithm based on 3D visual saliency detection. A new 3D visual attention model is designed based on 2D visual feature detection, depth feature detection, and the modeling of various viewing biases in stereo vision. A geometrically consistent seam carving technique is adopted for retargeting the stereo image pair. Experimental results demonstrate that both the proposed visual attention model and the proposed retargeting method outperform state-of-the-art approaches.


European Workshop on Visual Information Processing | 2011

An efficient no-reference metric for perceived blur

Hantao Liu; Junle Wang; Judith Redi; Patrick Le Callet; Ingrid Heynderickx

This paper presents an efficient no-reference metric that quantifies perceived image quality induced by blur. Instead of explicitly simulating the human visual perception of blur, it calculates the local edge blur in a cost-effective way, and applies an adaptive neural network to empirically learn the highly nonlinear relationship between the local values and the overall image quality. Evaluation of the proposed metric using the LIVE blur database shows its high prediction accuracy at a largely reduced computational cost. To further validate the performance of the blur metric and its robustness against different image content, two additional quality perception experiments were conducted: one with highly textured natural images and one with images with an intentionally blurred background. Experimental results demonstrate that the proposed blur metric is promising for real-world applications both in terms of computational efficiency and practical reliability.
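The local edge-blur measure can be illustrated on a 1-D intensity profile: the width of an edge, taken between the local extrema on either side of it, grows as the edge blurs. This is the classic edge-width idea; the adaptive neural network that maps such local values to an overall quality score is omitted here.

```python
def edge_width(profile, i):
    """Blur width of a rising edge at index i of a 1-D intensity profile.

    Walks left to the local minimum and right to the local maximum
    bounding the edge; the distance between them is the edge width.
    Sharper edges yield smaller widths, blurred edges larger ones.
    """
    l = i
    while l > 0 and profile[l - 1] < profile[l]:
        l -= 1
    r = i
    while r < len(profile) - 1 and profile[r + 1] > profile[r]:
        r += 1
    return r - l

# a sharp step vs. a blurred ramp
sharp = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
blurred = [0.0, 0.2, 0.5, 0.8, 1.0, 1.0]
```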


Visual Communications and Image Processing | 2013

Saliency detection for stereoscopic images

Yuming Fang; Junle Wang; Manish Narwaria; Patrick Le Callet; Weisi Lin



Quality of Multimedia Experience | 2014

An eye tracking database for stereoscopic video

Yuming Fang; Junle Wang; Jing Li; Romuald Pépion; Patrick Le Callet

We present a large-scale eye tracking database for stereoscopic video. A set of participants took part in the eye tracking experiment, and human fixation maps were created from their gaze data as the ground truth for stereoscopic video. To the best of our knowledge, this is the first large-scale eye tracking database for visual attention modeling of stereoscopic video. The details of the processing operations and the properties of the database are described in this paper.
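Fixation maps of the kind described are commonly built by accumulating a 2-D Gaussian at each recorded fixation point; the sketch below follows that common construction, with an assumed `sigma`, since the database's actual parameters are not given in this summary.

```python
import numpy as np

def fixation_map(fixations, shape, sigma=2.0):
    """Build a fixation density map from gaze data.

    fixations : iterable of (row, col) fixation coordinates.
    shape     : (height, width) of the output map.
    Each fixation contributes a 2-D Gaussian blob; the accumulated map
    is normalized to [0, 1] to serve as a ground-truth saliency map.
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    fmap = np.zeros(shape)
    for fy, fx in fixations:
        fmap += np.exp(-((ys - fy) ** 2 + (xs - fx) ** 2) / (2 * sigma ** 2))
    return fmap / (fmap.max() + 1e-8)
```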


International Conference on Multimedia and Expo | 2013

Visual saliency driven error protection for 3D video

Chaminda T. E. R. Hewage; Junle Wang; Maria G. Martini; Patrick Le Callet

Viewers tend to focus on specific regions of interest in an image. Visual attention is therefore one of the major aspects in understanding overall Quality of Experience (QoE) and user perception. Visual attention models have emerged in the recent past to predict user attention in images, videos, and 3D video. However, the use of these models in quality assessment and quality improvement has not been thoroughly investigated to date. This paper investigates quality assessment and improvement methods for 3D video services driven by a 3D visual attention model. Moreover, a visual saliency driven error protection mechanism is proposed and evaluated. Both objective and subjective results show that the proposed method has significant potential to provide improved 3D QoE for end users.


Journal of Eye Movement Research | 2012

Study of depth bias of observers in free viewing of still stereoscopic synthetic stimuli

Junle Wang; Patrick Le Callet; Sylvain Tourancheau; Vincent Ricordel; Matthieu Perreira Da Silva


16th European Conference on Eye Movements, Marseille, 21-25 August 2011 | 2011

Quantifying depth bias in free viewing of still stereoscopic synthetic stimuli

Junle Wang; Patrick Le Callet; Vincent Ricordel; Sylvain Tourancheau


Archive | 2017

2D and 3D Visual Attention for Computer Vision: Concepts, Measurement, and Modeling

Vincent Ricordel; Junle Wang; Matthieu Perreira Da Silva; Patrick Le Callet

Collaboration


Dive into Junle Wang's collaboration.

Top Co-Authors

Emilie Bosc

Centre national de la recherche scientifique


Yuming Fang

Jiangxi University of Finance and Economics


Manish Narwaria

Nanyang Technological University


Weisi Lin

Nanyang Technological University
