Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Junyong You is active.

Publication


Featured research published by Junyong You.


Signal Processing: Image Communication | 2010

Perceptual-based quality assessment for audio-visual services: A survey

Junyong You; Ulrich Reiter; Miska Hannuksela; Moncef Gabbouj; Andrew Perkis

Accurate measurement of the perceived quality of audio-visual services at the end-user is becoming a crucial issue in digital applications due to the growing demand for compression and transmission of audio-visual services over communication networks. Content providers strive to offer the best quality of experience to customers linked to their different quality of service (QoS) solutions. Therefore, developing accurate, perceptual-based quality metrics is a key requirement in multimedia services. In this paper, we survey state-of-the-art signal-driven perceptual audio and video quality assessment methods independently, and investigate relevant issues in developing joint audio-visual quality metrics. Experiments with respect to subjective quality results have been conducted to analyze and compare the performance of the quality metrics. We consider emerging trends in audio-visual quality assessment, and propose feasible solutions for future work on perceptual-based audio-visual quality metrics.
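
The joint audio-visual metrics discussed in this survey are typically built by fusing separately estimated audio and video quality scores. As a rough illustration only, the sketch below uses a linear combination with a multiplicative interaction term, a model form commonly reported in this literature; the coefficients and the function name are illustrative assumptions, not values or code from the paper.

    # Illustrative joint audio-visual quality fusion (not the paper's model).
    # Assumes audio and video quality are already estimated on a common scale,
    # e.g. a 1-5 mean opinion score (MOS); the coefficients are placeholders.
    def audiovisual_quality(q_audio, q_video,
                            a=0.25, b=0.15, c=0.15, d=0.45):
        """Linear fusion with a multiplicative audio-video interaction term."""
        return a + b * q_audio + c * q_video + d * q_audio * q_video

    # Example: mediocre audio paired with good video.
    print(audiovisual_quality(q_audio=3.0, q_video=4.2))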


ACM Multimedia | 2009

Perceptual quality assessment based on visual attention analysis

Junyong You; Andrew Perkis; Miska Hannuksela; Moncef Gabbouj

Most existing quality metrics do not take human attention analysis into account. Attention to particular objects or regions is an important attribute of the human vision and perception system when measuring perceived image and video quality. This paper presents an approach for extracting visual attention regions based on a combination of a bottom-up saliency model and semantic image analysis. The use of PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural SIMilarity) in the extracted attention regions is analyzed for image/video quality assessment, and a novel quality metric is proposed that adequately exploits the attributes of visual attention information. The experimental results with respect to the subjective measurements demonstrate that the proposed metric outperforms current methods.
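
To illustrate how an attention map can modulate a conventional metric such as SSIM, the sketch below weights a per-pixel SSIM map by a saliency map before pooling. The weighting scheme is an assumption made for illustration; the paper's own combination of a bottom-up saliency model and semantic analysis is more elaborate and is not reproduced here.

    import numpy as np
    from skimage.metrics import structural_similarity

    def attention_weighted_ssim(ref, dist, saliency):
        """Pool a per-pixel SSIM map with saliency-derived weights.

        ref, dist : grayscale images in [0, 255], same shape
        saliency  : non-negative attention map of the same shape
        """
        # full=True returns the mean SSIM and the per-pixel SSIM map.
        _, ssim_map = structural_similarity(ref, dist, data_range=255, full=True)
        weights = saliency / (saliency.sum() + 1e-12)
        return float((ssim_map * weights).sum())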


Quality of Experience: Advanced Concepts, Applications and Methods | 2014

Factors Influencing Quality of Experience

Ulrich Reiter; Kjell Brunnström; Katrien De Moor; Mohamed-Chaker Larabi; Manuela Pereira; António M. G. Pinheiro; Junyong You; Andrej Zgank

In this chapter, different factors that may influence Quality of Experience (QoE) in the context of media consumption, networked services, and other electronic communication services and applications are discussed. QoE can be subject to a range of complex and strongly interrelated factors, falling into three categories: human, system and context influence factors (IFs). With respect to Human IFs, we discuss variant and stable factors that may potentially influence QoE, either through low-level (bottom-up) or higher-level (top-down) cognitive processing. System IFs are classified into four distinct categories, namely content-, media-, network- and device-related IFs. Finally, the broad category of possible Context IFs is decomposed into factors linked to the physical, temporal, social, economic, task and technical information context. The overview given here illustrates the complexity of QoE and the broad range of aspects that potentially have a major influence on it.


IEEE Transactions on Multimedia | 2012

Assessment of Stereoscopic Crosstalk Perception

Liyuan Xing; Junyong You; Touradj Ebrahimi; Andrew Perkis

Stereoscopic three-dimensional (3-D) services do not always prevail when compared with their two-dimensional (2-D) counterparts, even though the former can provide a more immersive experience with the help of binocular depth. Various specific 3-D artefacts might cause discomfort and severely degrade the Quality of Experience (QoE). In this paper, we analyze one of the most annoying artefacts in the visualization stage of stereoscopic imaging, namely crosstalk, by conducting extensive subjective quality tests. A statistical analysis of the subjective scores reveals that both scene content and camera baseline have significant impacts on crosstalk perception, in addition to the crosstalk level itself. Based on the visual variations observed while varying these significant factors, three perceptual attributes of crosstalk are summarized as the sensorial results of the human visual system (HVS): shadow degree, separation distance, and spatial position of crosstalk. They are classified into two categories, 2-D and 3-D perceptual attributes, which can be described by a Structural SIMilarity (SSIM) map and a filtered depth map, respectively. An objective quality metric for predicting crosstalk perception is then proposed by combining the two maps. The experimental results demonstrate that the proposed metric has a high correlation (over 88%) with subjective quality scores in a wide variety of situations.
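
A minimal sketch of the kind of map combination described above is given below: a per-pixel SSIM map between the intended view and its crosstalk-distorted version stands in for the 2-D attributes, and a low-pass filtered depth map stands in for the 3-D attribute. The element-wise fusion and the Gaussian filter are assumptions made for illustration; the paper's actual combination is not reproduced.

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage.metrics import structural_similarity

    def crosstalk_score(view, crosstalked, depth, sigma=3.0):
        """Illustrative crosstalk predictor from an SSIM map and a depth map."""
        _, ssim_map = structural_similarity(view, crosstalked,
                                            data_range=255, full=True)
        depth_f = gaussian_filter(depth.astype(float), sigma=sigma)
        depth_f = depth_f / (depth_f.max() + 1e-12)          # normalize to [0, 1]
        # Stronger structural change (lower SSIM) at larger depth weighs more.
        return float(((1.0 - ssim_map) * depth_f).mean())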


International Conference on Multimedia and Expo | 2010

Attention modeling for video quality assessment: Balancing global quality and local quality

Junyong You; Jari Korhonen; Andrew Perkis

This paper proposes to evaluate video quality by balancing two quality components: global quality and local quality. The global quality results from subjects allocating their attention equally to all regions in a frame and all frames in a video. It is evaluated by image quality metrics (IQMs) with averaged spatiotemporal pooling. The local quality is derived from visual attention modeling and quality variations over frames. Saliency, motion, and contrast information are taken into account in modeling visual attention, which is then integrated into the IQMs to calculate the local quality of a video frame. The local quality of a video sequence is calculated by pooling local quality values over all frames with a temporal pooling scheme derived from the known relationship between perceived video quality and the frequency of temporal quality variations. The overall quality of a distorted video is a weighted average of the global quality and the local quality. Experimental results demonstrate that the combination of global and local quality outperforms either component alone, as well as other quality models, in video quality assessment. In addition, the proposed video quality modeling algorithm can improve the performance of image quality metrics on video quality assessment compared to the conventional averaged spatiotemporal pooling scheme.
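
The final combination step described above amounts to a weighted average of a globally pooled score and a locally pooled, attention-driven score. The sketch below is a simplified illustration; the weight and the plain temporal average are assumptions, whereas the paper uses a temporal pooling scheme tied to the frequency of quality variations.

    import numpy as np

    def overall_video_quality(global_per_frame, local_per_frame, alpha=0.5):
        """Blend per-frame global and local quality, then average over time.

        alpha is the assumed weight on the global component (not a fitted value).
        """
        g = np.asarray(global_per_frame, dtype=float)
        l = np.asarray(local_per_frame, dtype=float)
        per_frame = alpha * g + (1.0 - alpha) * l
        return float(per_frame.mean())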


IEEE Transactions on Image Processing | 2014

Attention Driven Foveated Video Quality Assessment

Junyong You; Touradj Ebrahimi; Andrew Perkis

Contrast sensitivity of the human visual system to visual stimuli can be significantly affected by several mechanisms, e.g., vision foveation and attention. Existing studies on foveation-based video quality assessment take only the static foveation mechanism into account. This paper first proposes an advanced foveal imaging model to generate the perceived representation of video by integrating visual attention into the foveation mechanism. To accurately simulate the dynamic foveation mechanism, a novel approach to predicting video fixations is proposed by mimicking the essential functionality of eye movement. Consequently, an advanced contrast sensitivity function, derived from the attention-driven foveation mechanism, is modeled and then integrated into a wavelet-based distortion visibility measure to build a full-reference attention-driven foveated video quality (AFViQ) metric. AFViQ adequately exploits perceptual visual mechanisms in video quality assessment. Extensive evaluation results with respect to several publicly available eye-tracking and video quality databases demonstrate the promising performance of the proposed video attention model, fixation prediction approach, and quality metric.
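
To give a feel for the role of foveation, the sketch below builds an eccentricity map around a predicted fixation point and weights a per-pixel error map by a simple sensitivity falloff. The falloff constant and the pixel-domain weighting are assumptions for illustration; the paper's wavelet-based distortion visibility measure and its calibrated contrast sensitivity function are not reproduced.

    import numpy as np

    def foveated_distortion(err_map, fixation, half_res_ecc=50.0):
        """Weight a per-pixel error map by distance from the fixation point.

        err_map      : per-pixel distortion, e.g. squared error
        fixation     : (row, col) of the predicted fixation
        half_res_ecc : eccentricity (in pixels here) at which the assumed
                       sensitivity drops to half its foveal value
        """
        rows, cols = np.indices(err_map.shape)
        ecc = np.hypot(rows - fixation[0], cols - fixation[1])
        sensitivity = 1.0 / (1.0 + ecc / half_res_ecc)   # monotone falloff
        return float((err_map * sensitivity).sum() / sensitivity.sum())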


IEEE Transactions on Multimedia | 2011

Balancing Attended and Global Stimuli in Perceived Video Quality Assessment

Junyong You; Jari Korhonen; Andrew Perkis; Touradj Ebrahimi

The visual attention mechanism plays a key role in the human perception system and has a significant impact on our assessment of perceived video quality. In spite of receiving less attention from viewers, unattended stimuli can still contribute to the understanding of the visual content. This paper proposes a quality model based on the late attention selection theory, assuming that video quality is perceived via two mechanisms: global and local quality assessment. First, we model several visual features influencing visual attention in quality assessment scenarios to derive an attention map using appropriate fusion techniques. The global quality assessment, based on the assumption that viewers allocate their attention equally to the entire visual scene, is modeled by four carefully designed quality features. Employing the same quality features, the local quality model, tuned by the attention map, considers the degradations on the significantly attended stimuli. To generate the overall video quality score, global and local quality features are combined by a content-adaptive linear fusion method and pooled over time, taking the temporal quality variation into consideration. The experimental results have been compared to results from appropriate eye-tracking and video quality assessment experiments, demonstrating promising performance.
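
The temporal pooling step mentioned above, which takes quality variation over time into account, can be illustrated with a simple mean-minus-fluctuation rule: viewers tend to judge fluctuating quality more harshly than its plain average. The penalty weight below is an assumption, and the paper's content-adaptive fusion is not reproduced.

    import numpy as np

    def pool_over_time(frame_quality, penalty=0.5):
        """Average per-frame quality while penalizing temporal fluctuation."""
        q = np.asarray(frame_quality, dtype=float)
        return float(q.mean() - penalty * q.std())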


International Conference on Image Processing | 2010

A perceptual quality metric for stereoscopic crosstalk perception

Liyuan Xing; Junyong You; Touradj Ebrahimi; Andrew Perkis

Compared to the metrics proposed to assess the quality of two-dimensional (2D) images, there are very few metrics devoted to quality assessment of stereoscopic presentations. Crosstalk is one of the most annoying distortions in the visualization stage of stereoscopic imaging technology. This paper proposes a perceptual quality metric that takes characteristics of stereoscopic images into account to predict quality levels of crosstalk perception in stereoscopic images, based on an understanding of three main factors: crosstalk level, camera baseline and scene content. The experimental results demonstrate that the proposed metric has a Pearson correlation of 87.7% with the ground-truth results from the subjective experiments on crosstalk perception, which is much better than traditional 2D metrics that do not integrate 3D depth information.


Archive | 2010

Quality of Visual Experience for 3D Presentation - Stereoscopic Image

Junyong You; Gangyi Jiang; Liyuan Xing; Andrew Perkis

Three-dimensional television (3DTV) technology is becoming increasingly popular, as it can provide a high-quality and immersive experience to end users. Stereoscopic imaging is a technique capable of recording 3D visual information or creating the illusion of depth. Most 3D compression schemes are developed for stereoscopic images, applying traditional two-dimensional (2D) compression techniques and also considering theories of binocular suppression. The compressed stereoscopic content is delivered to customers through communication channels. However, both compression and transmission errors may degrade the quality of stereoscopic images. Subjective quality assessment is the most accurate way to evaluate the quality of visual presentations in either 2D or 3D modality, even though it is time-consuming. This chapter offers an introduction to related issues in perceptual quality assessment for stereoscopic images. We present a subjective quality experiment on stereoscopic images focusing on four typical distortion types: Gaussian blurring, JPEG compression, JPEG2000 compression, and white noise. Furthermore, although many 2D image quality metrics have been proposed that work well on 2D images, developing quality metrics for 3D visual content remains an almost unexplored issue. Therefore, this chapter further introduces some well-known 2D image quality metrics and investigates their capabilities in stereoscopic image quality assessment. As an important attribute of stereoscopic images, disparity refers to the difference in image location of an object seen by the left and right eyes, and it has a significant impact on stereoscopic image quality assessment. Thus, a study on integrating disparity information into quality assessment is presented. The experimental results demonstrate that better performance can be achieved if the disparity information and the original images are combined appropriately in stereoscopic image quality assessment.
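
The integration of disparity information discussed above can be sketched as follows: the 2-D quality of the left and right views is averaged and then combined with a quality estimate computed on the disparity maps. The SSIM choice, the normalization assumption and the combination weight are illustrative; the chapter evaluates several integration strategies that are not reproduced here.

    from skimage.metrics import structural_similarity

    def stereo_quality(ref_l, dist_l, ref_r, dist_r,
                       ref_disp, dist_disp, beta=0.3):
        """Combine 2D view quality with disparity-map quality (illustrative).

        All inputs are assumed to be grayscale arrays scaled to [0, 255].
        beta is the assumed weight on the disparity component.
        """
        q_views = 0.5 * (structural_similarity(ref_l, dist_l, data_range=255) +
                         structural_similarity(ref_r, dist_r, data_range=255))
        q_disp = structural_similarity(ref_disp, dist_disp, data_range=255)
        return (1.0 - beta) * q_views + beta * q_disp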


International Conference on Acoustics, Speech, and Signal Processing | 2010

Spatial and temporal pooling of image quality metrics for perceptual video quality assessment on packet loss streams

Junyong You; Jari Korhonen; Andrew Perkis

Video streaming through bandwidth-limited channels often suffers from packet losses. Therefore, perceptual quality assessment of video sequences with packet losses is a critical issue in digital video communications. This paper analyzes several image quality metrics and evaluates their application with spatial and temporal pooling schemes for perceptual video quality assessment of video streams with packet losses. Several approaches using Minkowski summation and averages over different distorted spatial regions and temporal frames to pool the spatial and temporal qualities are evaluated. The experimental results with respect to the subjective video quality measurements demonstrate that the subjects are more sensitive to the most annoying spatial regions and temporal segments when assessing the video quality of lossy streams.
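
Minkowski summation, used here as a spatial and temporal pooling scheme, is sketched below; a larger exponent emphasizes the worst regions or frames, which matches the observation that the most annoying parts of a lossy stream dominate perceived quality. The exponent value is an assumption, not the one used in the paper.

    import numpy as np

    def minkowski_pool(values, p=4.0):
        """Minkowski summation of per-region or per-frame distortion values.

        Larger p puts more weight on the worst elements; p = 1 is the plain mean.
        """
        v = np.abs(np.asarray(values, dtype=float))
        return float((v ** p).mean() ** (1.0 / p))

    # Example: a single badly distorted frame dominates more as p grows.
    frame_err = np.array([0.1, 0.1, 0.9, 0.1])
    print(minkowski_pool(frame_err, p=1.0), minkowski_pool(frame_err, p=4.0))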

Collaboration


Dive into Junyong You's collaborations.

Top Co-Authors

Andrew Perkis, Norwegian University of Science and Technology
Touradj Ebrahimi, École Polytechnique Fédérale de Lausanne
Liyuan Xing, Norwegian University of Science and Technology
Ulrich Reiter, Norwegian University of Science and Technology
Jari Korhonen, Technical University of Denmark
Moncef Gabbouj, Tampere University of Technology
Fitri N. Rahayu, Norwegian University of Science and Technology
Mark R. Pickering, University of New South Wales