Publication


Featured research published by Quan Huynh-Thu.


IEEE Transactions on Broadcasting | 2011

Study of Rating Scales for Subjective Quality Assessment of High-Definition Video

Quan Huynh-Thu; Marie-Neige Garcia; Filippo Speranza; Philip J. Corriveau; Alexander Raake

With the constant evolution of video technology and the deployment of new video services, content providers and broadcasters always face the challenge of delivering an adequate video quality that meets end-users' expectations. The development of reliable quality testing and quality monitoring tools that can be used by broadcasters ultimately requires reliable objective video quality metrics. In turn, the validation of these objective models requires reliable subjective assessment, the most accurate representation of the quality perceived by end-users. Many different subjective assessment methodologies exist, and each has its advantages and drawbacks. One important element in a subjective testing methodology is the choice of the rating scale. In this paper, we make a direct comparison between four scales, which are either included in existing international standards or proposed for use in future standardization activities related to video quality. We examine the subjective data in terms of participants' response behavior and the similarity and variability of subjective scores. We discuss these results within the context of the subjective quality assessment of high-definition video compressed and transmitted over error-prone networks. Our experimental data show no overall statistical differences between the different scales. Results also show that the single-stimulus presentation provides highly repeatable results even if different scales or groups of participants are used.
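
As an illustration of the kind of per-scale analysis described above, the sketch below (not the paper's actual pipeline; the ratings and scale names are made up) computes a mean opinion score and a 95% confidence interval for one sequence on two rating scales after mapping both onto a common 0-100 range.

# Minimal sketch (not the paper's analysis pipeline): compute per-scale MOS and
# 95% confidence intervals for one processed video sequence, after mapping each
# rating scale onto a common [0, 100] range so scores can be compared directly.
import numpy as np
from scipy import stats

# Hypothetical ratings of the same sequence on two scales (values are made up).
ratings = {
    "ACR 5-point": np.array([4, 3, 4, 5, 3, 4, 4, 3, 5, 4], dtype=float),
    "Continuous 0-100": np.array([72, 61, 75, 88, 60, 70, 74, 65, 90, 77], dtype=float),
}
scale_range = {"ACR 5-point": (1, 5), "Continuous 0-100": (0, 100)}

for name, scores in ratings.items():
    lo, hi = scale_range[name]
    norm = 100.0 * (scores - lo) / (hi - lo)                   # map onto [0, 100]
    mos = norm.mean()
    ci = stats.t.ppf(0.975, len(norm) - 1) * stats.sem(norm)   # 95% CI half-width
    print(f"{name}: MOS = {mos:.1f} +/- {ci:.1f}")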


IEEE Transactions on Broadcasting | 2011

The Importance of Visual Attention in Improving the 3D-TV Viewing Experience: Overview and New Perspectives

Quan Huynh-Thu; Marcus Barkowsky; Patrick Le Callet

Three-dimensional video content has attracted much attention in both the cinema and television industries, because 3D is considered to be the next key feature that can significantly enhance the visual experience of viewers. However, one of the major challenges is the difficulty in providing high quality images that are comfortable to view and that also meet signal transmission requirements over a limited bandwidth for display on television screens. The different processing steps that are necessary in a 3D-TV delivery chain can all introduce artifacts that may create problems in terms of human visual perception. In this paper, we highlight the importance of considering 3D visual attention when addressing 3D human factors issues. We provide a review of the field of 3D visual attention, discuss the challenges in both the understanding and modeling of 3D visual attention, and provide guidance to researchers in this field. Finally, we identify perceptual issues generated during the various steps in a typical 3D-TV broadcasting delivery chain, review them and explain how consideration of 3D visual attention modeling can help improve the overall 3D viewing experience.


International Conference on Image Processing | 2010

Video quality assessment: From 2D to 3D — Challenges and future trends

Quan Huynh-Thu; Patrick Le Callet; Marcus Barkowsky

Three-dimensional (3D) video is gaining strong momentum in both the cinema and broadcasting industries, as it is seen as a technology that will substantially enhance users' visual experience. One of the major concerns for the wide adoption of such technology is the ability to provide sufficient visual quality, especially if 3D video is to be transmitted over a limited bandwidth for home viewing (i.e. 3DTV). Means to measure perceptual video quality in an accurate and practical way are therefore of the highest importance for content providers, service providers, and display manufacturers. This paper discusses recent advances in video quality assessment and the challenges foreseen for 3D video. Both subjective and objective aspects are examined. An outline of ongoing efforts in standards-related bodies is also provided.
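
Since the paper contrasts subjective tests with objective quality metrics, the following sketch shows the simplest full-reference objective measure, PSNR, computed on a pair of hypothetical frames; it is only a baseline illustration and not a method proposed in the paper.

# Illustrative sketch only: PSNR, the simplest full-reference objective metric,
# computed between a reference frame and a processed frame of the same size.
# (The paper surveys far more perceptual approaches; this is just a baseline.)
import numpy as np

def psnr(reference: np.ndarray, processed: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two frames of equal shape."""
    mse = np.mean((reference.astype(np.float64) - processed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

# Hypothetical 8-bit luma frames: a random reference and a noisy version of it.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (1080, 1920), dtype=np.uint8)
deg = np.clip(ref + rng.normal(0, 4, ref.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(ref, deg):.2f} dB")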


IEEE Journal of Selected Topics in Signal Processing | 2012

The Influence of Subjects and Environment on Audiovisual Subjective Tests: An International Study

Margaret H. Pinson; Lucjan Janowski; Romuald Pépion; Quan Huynh-Thu; Christian Schmidmer; Philip J. Corriveau; Audrey C. Younkin; Patrick Le Callet; Marcus Barkowsky; William Ingram

Traditionally, audio quality and video quality are evaluated separately in subjective tests. Best practices within the quality assessment community were developed before many modern mobile audiovisual devices and services came into use, such as internet video, smartphones, tablets and connected televisions. These devices and services raise unique questions that require jointly evaluating both the audio and the video within a subjective test. However, audiovisual subjective testing is a relatively under-explored field. In this paper, we address the question of determining the most suitable way to conduct audiovisual subjective testing over a wide range of audiovisual quality. Six laboratories from four countries conducted a systematic study of audiovisual subjective testing. The stimuli and scale were held constant across experiments and labs; only the environment of the subjective test was varied. Some subjective tests were conducted in controlled environments and some in public environments (a cafeteria, patio or hallway). The audiovisual stimuli spanned a wide range of quality. Results show that these audiovisual subjective tests were highly repeatable from one laboratory and environment to the next. The number of subjects was the most important factor. Based on this experiment, 24 or more subjects are recommended for Absolute Category Rating (ACR) tests. In public environments, 35 subjects were required to obtain the same Student's t-test sensitivity. The second most important variable was individual differences between subjects. Other factors, such as language, country, lighting, background noise, wall color, and monitor calibration, had minimal impact. Analyses indicate that Mean Opinion Scores (MOS) are relative rather than absolute. Our analyses show that the results of experiments conducted in pristine laboratory environments are highly representative of those obtained with devices in actual use in a typical user environment.
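
To illustrate why the number of subjects dominates, the sketch below simulates ACR ratings (synthetic data, not the study's ratings; the MOS values and standard deviation are assumptions) and estimates how often a two-sample Student's t-test detects a fixed MOS difference as the panel size grows.

# Sketch with simulated data (not the study's ratings): how the sensitivity of a
# two-sample Student's t-test to a fixed MOS difference depends on the number of
# subjects in an ACR test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def detection_rate(n_subjects: int, mos_a: float = 3.2, mos_b: float = 3.7,
                   sigma: float = 0.9, trials: int = 2000) -> float:
    """Fraction of simulated experiments where the MOS difference is significant."""
    hits = 0
    for _ in range(trials):
        a = np.clip(rng.normal(mos_a, sigma, n_subjects), 1, 5)
        b = np.clip(rng.normal(mos_b, sigma, n_subjects), 1, 5)
        if stats.ttest_ind(a, b).pvalue < 0.05:
            hits += 1
    return hits / trials

for n in (12, 24, 35):
    print(f"{n} subjects: difference detected in {100 * detection_rate(n):.0f}% of runs")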


IEEE Transactions on Affective Computing | 2012

Physiological-Based Affect Event Detector for Entertainment Video Applications

Julien Fleureau; Philippe Guillotel; Quan Huynh-Thu

In this paper, we propose a methodology to build a real-time affect detector dedicated to video viewing and entertainment applications. This detector combines the acquisition of traditional physiological signals, namely galvanic skin response, heart rate, and electromyogram, with supervised classification techniques based on Gaussian processes. It aims at detecting the emotional impact of a video clip in a new way, by first identifying emotional events in the affective stream (a fast increase in the subject's excitation) and then assigning the associated binary valence (positive or negative) to each detected event. The study was conducted to be as close as possible to realistic conditions, in particular by minimizing the use of active calibrations and considering on-the-fly detection. Furthermore, the influence of each physiological modality is evaluated through three different key scenarios (mono-user, multi-user and extended multi-user) that may be relevant for consumer applications. A complete description of the experimental protocol and processing steps is given. The performance of the detector is evaluated on manually labeled sequences, and its robustness is discussed for the different single-user and multi-user contexts.
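
The following sketch illustrates the classification stage in the spirit of the detector described above, using scikit-learn's Gaussian-process classifier on synthetic window-level features; the feature set, labels and parameters are assumptions, not the authors' exact pipeline.

# Illustrative sketch (not the authors' exact pipeline): Gaussian-process
# classification of window-level physiological features into positive/negative
# valence, in the spirit of the affect event detector described above.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

# Hypothetical features per detected event: [mean GSR rise, mean heart rate, EMG energy]
X_train = rng.normal(size=(80, 3))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 2] > 0).astype(int)  # toy valence labels

clf = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0), random_state=0)
clf.fit(X_train, y_train)

X_new = rng.normal(size=(5, 3))
print("Predicted valence:", clf.predict(X_new))          # 0 = negative, 1 = positive
print("P(positive):", clf.predict_proba(X_new)[:, 1])    # class probabilities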


IEEE Signal Processing Magazine | 2011

Multimedia Quality Assessment [DSP Forum]

Fatih Porikli; Al Bovik; Christopher J. Plack; Ghassan AlRegib; Joyce E. Farrell; Patrick Le Callet; Quan Huynh-Thu; Sebastian Möller; Stefan Winkler

This IEEE Signal Processing Magazine forum discusses the latest advances and challenges in multimedia quality assessment. The forum members bring their expert insights into issues such as perceptual models and quality measures for future applications such as three-dimensional (3-D) video and interactive media.
The invited forum members are Al Bovik (University of Texas), Chris Plack (University of Manchester), Ghassan AlRegib (Georgia Institute of Technology), Joyce Farrell (Stanford University), Patrick Le Callet (University of Nantes), Quan Huynh-Thu (Technicolor), Sebastian Möller (Deutsche Telekom Labs, TU Berlin), and Stefan Winkler (Advanced Digital Sciences Center). The moderator of this forum is Dr. Fatih Porikli (MERL, Cambridge).


Quality of Multimedia Experience | 2013

Subjective and objective evaluation of an audiovisual subjective dataset for research and development

Margaret H. Pinson; Christian Schmidmer; Lucjan Janowski; Romuald Pépion; Quan Huynh-Thu; Philip J. Corriveau; Audrey C. Younkin; Patrick Le Callet; Marcus Barkowsky; William Ingram

In 2011, the Video Quality Experts Group (VQEG) ran subjects through the same audiovisual subjective test at six different international laboratories. That small dataset is now publicly available for research and development purposes.


Vision Research | 2014

Effect of the accommodation-vergence conflict on vergence eye movements.

Cyril Vienne; Laurent Sorin; Laurent Blonde; Quan Huynh-Thu; Pascal Mamassian

With the broader use of stereoscopic displays, a flurry of research activity on the accommodation-vergence conflict has emerged, highlighting its implications for the human visual system. In stereoscopic displays, the introduction of binocular disparities requires the eyes to make vergence movements. In this study, we examined vergence dynamics with regard to the conflict between the stimulus to accommodation and the stimulus to vergence. In a first experiment, we evaluated the immediate effect of the conflict on vergence responses by presenting stimuli with conflicting disparity and focus on a stereoscopic display (i.e. increasing the stereoscopic demand) or by presenting stimuli with matched disparity and focus using an arrangement of displays and a beam splitter (i.e. focus and disparity specifying the same locations). We found that the dynamics of vergence responses were slower overall in the first case, due to the conflict between accommodation and vergence. In a second experiment, we examined the effect of prolonged exposure to the accommodation-vergence conflict on vergence responses, in which participants judged whether an oscillating depth pattern was in front of or behind the fixation plane. An increase in peak velocity was observed, suggesting that the vergence system had adapted to the stereoscopic demand. A slight increase in vergence latency was also observed, indicating a small decline in vergence performance. These findings document how the vergence system behaves in stereoscopic displays and offer a better understanding of its adaptation. We describe which stimuli in stereoscopic movies might produce these oculomotor effects, and discuss potential application perspectives.
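
The sketch below illustrates the geometry behind the conflict using the standard textbook relation between screen disparity, perceived depth and vergence angle; the interpupillary distance, viewing distance and disparity values are assumptions, and the formulas are not taken from the paper.

# Geometry sketch (standard relation, not taken from the paper): the vergence
# angle demanded by an on-screen horizontal disparity versus the vergence angle
# matching the screen plane. The gap between the two is the accommodation-
# vergence conflict for that stimulus, since accommodation stays at the screen.
import math

def vergence_angle_deg(ipd_m: float, distance_m: float) -> float:
    """Vergence angle (degrees) when fixating a point at the given distance."""
    return math.degrees(2.0 * math.atan(ipd_m / (2.0 * distance_m)))

ipd = 0.063          # assumed interpupillary distance (m)
screen = 1.0         # assumed viewing distance to the screen (m)
disparity = -0.01    # crossed screen disparity of 1 cm (object appears in front)

perceived = ipd * screen / (ipd - disparity)   # depth of the simulated point (m)
demand = vergence_angle_deg(ipd, perceived)    # vergence driven by the disparity
at_screen = vergence_angle_deg(ipd, screen)    # vergence matching the screen plane
print(f"Vergence demand {demand:.2f} deg vs. screen {at_screen:.2f} deg "
      f"-> conflict of {demand - at_screen:.2f} deg")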


SID Symposium Digest of Technical Papers | 2011

55.1: Diversity and Coherence of 3D Crosstalk Measurements

Laurent Blonde; Jean-Jacques Sacre; Didier Doyen; Quan Huynh-Thu; Cedric Thebault

3D crosstalk is a major contributor to 3D quality loss and visual fatigue on stereoscopic displays. This paper presents several 3D crosstalk measurement methods and discusses the coherence between them, towards the derivation of meaningful quality indicators. It also identifies the need for synthetic indicators that capture complex crosstalk effects.
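
As a point of reference for the methods compared in the paper, the sketch below computes the basic system-level crosstalk ratio (leakage from the unintended view divided by the intended view, after black-level subtraction); the luminance readings are hypothetical, and the paper itself discusses several alternative definitions.

# Sketch of one common system-level 3D crosstalk definition (the paper compares
# several methods; this is only the basic leakage ratio, with made-up luminance
# readings measured through the left-eye glass of active shutter glasses).
def crosstalk_percent(l_leak: float, l_signal: float, l_black: float) -> float:
    """Leakage from the unintended view as a percentage of the intended view,
    both corrected for the display's black level."""
    return 100.0 * (l_leak - l_black) / (l_signal - l_black)

# Hypothetical photometer readings in cd/m^2, measured at the left eye:
L_black = 0.4    # black shown to both views
L_signal = 98.0  # white shown to the left view, black to the right
L_leak = 4.1     # black shown to the left view, white to the right

print(f"Left-eye crosstalk: {crosstalk_percent(L_leak, L_signal, L_black):.1f}%")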


Proceedings of SPIE | 2013

Visual storytelling in 2D and stereoscopic 3D video: effect of blur on visual attention

Quan Huynh-Thu; Cyril Vienne; Laurent Blonde

Visual attention is an inherent mechanism that plays an important role in human visual perception. As our visual system has limited capacity and cannot efficiently process the information from the entire visual field, we focus our attention on specific areas of interest in the image for detailed analysis of these areas. In the context of media entertainment, viewers' visual attention deployment is also influenced by the art of visual storytelling. To date, visual editing and composition of scenes in stereoscopic 3D content creation still mostly follow the conventions used in 2D. In particular, out-of-focus blur is often used in 2D motion pictures and photography to drive the viewer's attention towards a sharp area of the image. In this paper, we specifically study the impact of defocused foreground objects on visual attention deployment in stereoscopic 3D content. For that purpose, we conducted a subjective experiment using an eye tracker. Our results provide further insight into the deployment of visual attention during stereoscopic 3D content viewing and further understanding of the differences in visual attention behavior between 2D and 3D. They show that a traditional 2D scene compositing approach such as the use of foreground blur does not necessarily produce the same effect on visual attention deployment in 2D and 3D. Implications for stereoscopic content creation and visual fatigue are discussed.
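
A typical first step when comparing gaze data across 2D and 3D conditions is to turn fixations into a fixation density map; the sketch below does this with synthetic fixations and an assumed smoothing kernel, and is not the experiment's actual analysis.

# Sketch (synthetic data, not the experiment's recordings): turn a list of
# fixation points into a fixation density map by Gaussian smoothing, the usual
# first step before comparing attention deployment across viewing conditions.
import numpy as np
from scipy.ndimage import gaussian_filter

height, width = 1080, 1920
rng = np.random.default_rng(2)

# Hypothetical fixations (x, y) clustered around a sharp foreground object.
fixations = rng.normal(loc=(960, 540), scale=(120, 80), size=(200, 2)).astype(int)

density = np.zeros((height, width))
for x, y in fixations:
    if 0 <= y < height and 0 <= x < width:
        density[y, x] += 1.0

# Smooth with a kernel roughly matching one degree of visual angle (assumption).
heatmap = gaussian_filter(density, sigma=45)
heatmap /= heatmap.max()   # normalize so maps from different conditions compare
print("Peak attention at (row, col):", np.unravel_index(heatmap.argmax(), heatmap.shape))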

Collaboration


Dive into Quan Huynh-Thu's collaborations.

Top Co-Authors

Margaret H. Pinson

National Telecommunications and Information Administration
