Wide Color Gamut Image Content Characterization: Method, Evaluation, and Applications
Junghyuk Lee, Toinon Vigier, Patrick Le Callet, Fellow, IEEE, and Jong-Seok Lee, Senior Member, IEEE
Abstract—In this paper, we propose a novel framework to characterize a wide color gamut image content based on perceived quality due to the processes that change color gamut, and demonstrate two practical use cases where the framework can be applied. We first introduce the main framework and implementation details. Then, we provide analysis for understanding of existing wide color gamut datasets with quantitative characterization criteria on their characteristics, where four criteria, i.e., coverage, total coverage, uniformity, and total uniformity, are proposed. Finally, the framework is applied to content selection in a gamut mapping evaluation scenario in order to enhance the reliability and robustness of the evaluation results. As a result, the framework fulfils content characterization for studies where quality of experience of wide color gamut stimuli is involved.
Index Terms—Wide color gamut, color gamut mapping, content characterization, content selection, quality of experience.
This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the "ICT Consilience Creative Program" (IITP-2019-2017-0-01015) supervised by the IITP (Institute for Information & communications Technology Promotion), the International Research & Development Program of the National Research Foundation of Korea (NRF) funded by the Korea government (MSIT) (NRF-2016K1A3A1A21005710), and the Science and Technology Amicable Research (STAR) Program funded by the Partenariats Hubert Curien (PHC) and the Campus France (PHC-STAR 36664WK). A preliminary version of this work was presented at the International Conference on Image Processing (ICIP) in 2018 [1]. Our code is publicly available at https://github.com/junghyuk-lee/WCG-content-characterization

I. INTRODUCTION

In order to provide more realistic and higher visual quality of experience (QoE) of multimedia contents to viewers, technologies related to wide color gamut (WCG) have emerged. Since the HDTV standard ITU-R Rec.709 [2], several WCGs have been proposed. The International Telecommunication Union (ITU) approved Rec.2020 [3] as the standard color gamut for UHDTV, which covers the widest area of the CIE 1931 space [4] (see Fig. 1). Recently, many devices, including mobile devices, have come to support WCGs as part of the transition to Rec.2020 [5]. Considering the various environments of multimedia content consumption, gamut mapping is often inevitable in order to match the original color to displaying devices.

In this situation, several gamut mapping algorithms (GMAs) have been proposed as well as the standard algorithms in the CIE guideline [6]. Among them, gamut reduction aims to reproduce details and color quality of WCG images in smaller gamuts, and maps colors from a large source gamut to a smaller target gamut. For instance, a gamut reduction algorithm proposed in [7] iteratively modifies the color of each pixel based on adaptive local contrast according to the Retinex theory [8]. In developing and evaluating methods related to color representation of visual contents, including WCG and GMAs, it is important to assess how the result will be perceived by human observers. In order to assess perceived QoE, subjective and/or objective studies are usually conducted [9], [10], [11], [12], [13].

When subjective or objective QoE evaluation is conducted, one of the primary steps is to select a representative and compact set of source contents, whose processed versions are assessed. This step, equipped with a proper content characterization method, is important not only to conduct an experiment efficiently with limited resources (especially for subjective evaluation) but also to draw reliable and reproducible conclusions. If the contents for an experiment are biased and not representative in their characteristics, the results may be biased and not generalizable to other types of contents. Thus, it is important to select representative contents according to the purpose of a specific experiment. Towards this, it is necessary to objectively measure the representativeness and suitability of a set of contents.

In this paper, we propose a novel framework to characterize WCG contents and its applications. We note that WCG contents are frequently exposed to gamut mapping processes targeting diverse displaying environments. Therefore, it is important to consider the perceptual difference caused by gamut reduction for WCG contents. Thus, our main idea is to measure the perceptual difference due to successive gamut reduction in order to characterize a WCG content. We also validate the framework by applying it to two applications involving content characterization and selection in practical WCG-related studies.

Our main contributions are summarized as follows:
1) We propose an objective framework for WCG content characterization based on perceptual properties related to differences due to color gamut change. We obtain the perceptual difference due to gamut reduction by predicting the subjective score with an objective metric.
2) In order to demonstrate its effectiveness, we apply the framework to practical applications related to WCG. As one of the applications, we propose multiple criteria characterizing WCG datasets quantitatively based on the proposed perceptual difference. Using them, we conduct analysis of existing WCG datasets.
3) In addition, we apply the framework in a scenario of benchmarking GMAs. We demonstrate that the reliability of the benchmarking is maximized by content selection using the proposed framework.

Note that this paper has distinguished contributions compared to our preliminary work [1] in various respects. While the preliminary work only introduces the basic idea of the proposed framework, this paper provides its detailed description and further analysis along with the shared source code. In addition, we present the two practical applications involving WCG contents and demonstrate the effectiveness of our framework for content characterization.

The rest of this paper is organized as follows. In Section II, we briefly survey the related works. In Section III, we present the proposed framework for WCG content characterization and provide its implementation details. In Section IV, we describe how the proposed framework is applied to quantify characteristics of WCG datasets and provide analysis of existing WCG datasets. In Section V, we describe another use case of the proposed framework for comparison of GMAs. Finally, Section VI provides concluding remarks.

II. RELATED WORKS
A. Gamut Mapping
In order to reproduce the original color of contents in devices having smaller color gamuts, several GMAs have been proposed. They can be categorized into global and local strategies. The former changes all colors of out-of-gamut pixels towards the inside of the target gamut by gamut compression or clipping [14], [15], [16], [17], [18]. QoE of the gamut-reduced image often decreases since the color may become blurred around the pixels where the color changes. The latter considers the spatial relationship between pixels at the expense of increased computational complexity in order to enhance the perceived quality of the gamut-reduced images [7], [19], [20], [21], [22], [23], [24], [25].
B. QoE Assessment of Gamut Mapping
QoE of gamut-mapped contents is usually assessed by conducting subjective or objective studies. In [26], a psychophysical experiment is conducted to evaluate four GMAs, where the subjective quality of the gamut-reduced images is assessed. In [27], [28], [29], [30], [31], various color image difference metrics are proposed to measure the objective quality of gamut-mapped images. In [32], subjective scores of gamut-reduced images using different GMAs are obtained by a psychophysical experiment, and are used to evaluate four objective metrics.

However, in [33], it is concluded that the color difference measured by objective metrics and the perceived image difference between original and gamut-reduced images do not correlate well. There are attempts to improve objective metrics by employing spatial filtering that simulates the human visual system [34] and by extracting features based on perceptually important distortions [35]. On the other hand, studies that consider measuring QoE of WCG contents are rare. In [36], a physiological experiment is conducted to measure electroencephalography during watching WCG video contents.
C. Content Characterization
Winkler [37] quantifies the characteristics of the contents in existing image and video datasets, including spatial information and colorfulness for color images, and motion vectors for video contents, based on which the representativeness of a set of contents can be evaluated [38], [39]. In [40], it is suggested to consider attributes of the test material such as brightness, colorfulness, amount of motion, scene cuts, types of the content, etc. for subjective video quality assessment. In [41], contrast, colorfulness, and naturalness are considered to characterize tone-mapped images for HDR contents. In [42], a content selection procedure for light field images is proposed using high-level features consisting of depth properties, disparity range of pixels, refocusing features, etc. as well as general image quality features.

In [43], however, it is argued that those simple characteristics do not sufficiently cover the perceptual aspects of visual contents when processing steps (i.e., tone mapping) are involved. Therefore, an approach is proposed to characterize HDR contents from the viewpoint of whether an HDR content is challenging for tone mapping operators. It focuses on the perceptual change due to the dynamic range reduction that is frequently applied to HDR contents. Using this characterization method, a framework to build a representative HDR dataset is proposed in [44]. In a similar spirit, we propose a novel characterization framework for WCG contents.

III. PROPOSED FRAMEWORK
A. General Algorithm
We propose a framework for WCG content characterization based on the perceptual change caused by gamut mapping. We define WCG content characteristics as degrees of the perceptual differences due to successive gamut reduction. The overall procedure of the proposed method is summarized in Algorithm 1.

The framework in Algorithm 1 produces an N-dimensional feature vector of perceptual difference for each WCG source content. First, we obtain N gamut-reduced images by applying a gamut reduction operator that converts the color gamut of the reference image into a target gamut G_n (n = 1, ..., N). For each gamut-reduced image I_n, we apply an objective metric that measures the perceptual difference from the reference image I. Finally, we obtain a feature vector D describing the behavior of the WCG content in terms of the perceptual difference due to gamut reduction. We can utilize this feature in various applications such as WCG dataset analysis, content clustering, and selection, which will be presented in Sections IV and V.

B. Obtaining Ground Truth of Perceptual Difference
Hereafter, we provide implementation details of the proposed framework. In Algorithm 1, we use an objective metric PD to measure the perceptual difference due to gamut reduction. Although various image quality metrics have been proposed in the literature, metrics specifically designed to measure the perceptual difference of images exposed to color gamut change do not exist. Therefore, we conduct a subjective test in which the mean opinion score (MOS) of the perceptual difference between gamut-mapped images is measured. The MOS is then used to benchmark existing color metrics and optimize the best one via a nonlinear transformation.

Algorithm 1 General framework for WCG content characterization

Input: I — WCG source image
Output: D — vector of perceptual differences of I
N: number of target gamut spaces for gamut reduction
G_0: reference gamut space that covers all colors of I
G_n: n-th gamut space smaller than G_{n-1} (n = 1, ..., N)
f_GR(I, G): function that generates a gamut-reduced image with all colors in gamut G from image I
PD(I, I'): function that measures the perceptual difference between images I and I'

for n = 1 : N do
    Generate I_n = f_GR(I_{n-1}, G_n), with I_0 = I
    Calculate d_n = PD(I_n, I)
end for
Obtain D = [d_1, d_2, ..., d_N]^T
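To make the procedure concrete, the loop of Algorithm 1 can be written in a few lines of Python. This is a minimal sketch rather than the released implementation; gamut_reduce and perceptual_difference are hypothetical stand-ins for f_GR and PD.

import numpy as np

def characterize_wcg_content(image, target_gamuts, gamut_reduce, perceptual_difference):
    # image: reference WCG source image I (H x W x 3 array)
    # target_gamuts: successively smaller gamuts [G_1, ..., G_N]
    # gamut_reduce: hypothetical f_GR(I, G), maps all colors of I into gamut G
    # perceptual_difference: hypothetical PD(I_n, I), predicted MOS of color difference
    reduced = image  # I_0 = I
    d = []
    for gamut in target_gamuts:
        reduced = gamut_reduce(reduced, gamut)           # I_n = f_GR(I_{n-1}, G_n)
        d.append(perceptual_difference(reduced, image))  # d_n = PD(I_n, I)
    return np.asarray(d)  # D = [d_1, ..., d_N]^T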
1) Data:
We collect 54 images consisting of scenes from HdM-HDR-2014 [45] and the Arri Alexa sample footage, in short, HdM and Arri, respectively. HdM contains videos filmed in a professional cinematography environment with dynamic ranges up to 18 stops and a color gamut close to Rec.2020. In particular, it focuses on WCG by containing videos with highly saturated colors and lights. The perceptual difference of the videos is large when the gamut is reduced. Arri is video sample footage provided by the ARRI company. Contents of the dataset cover various natural topics with up to the Rec.2020 color gamut. Compared to HdM, color differences are not large when the gamut is not much reduced. The collected image set is divided into training and validation sets of 30 and 24 images, respectively. The HDR images from HdM are converted to the standard dynamic range with a fixed value of exposure.

We use DCI-P3 as the reference WCG, which originates from the cinema industry, and two target gamuts for gamut reduction (i.e., G_1 and G_2): Rec.709 and Toy. With widespread displays abiding by the HDTV standard, gamut reduction from P3 to Rec.709 frequently happens to WCG contents. In addition, to cover a high degree of gamut reduction, we employ an artificially created gamut, called Toy, which has been used in state-of-the-art WCG studies [7], [46]. It is smaller than Rec.709 and produces a large perceptual difference when the gamut of a WCG image is reduced to it. The choice of these two gamuts is based on our preliminary experiments, where for the gamuts between P3 and Rec.709, the gamut-reduced images are not visually distinguishable from those in P3 nor Rec.709; in addition, gamuts smaller than Toy give rise to too much color distortion in the gamut-reduced images and thus are not practically meaningful. For gamut reduction, we consider a simple gamut mapping algorithm because complex and time-consuming algorithms are not preferred in the content characterization process. Hence, we use the gamut clipping method that maps colors outside the target gamut to the nearest boundary of the target gamut.

TABLE I
Primary Colors in the CIE 1931 Color Space

Gamuts    Red primaries    Green primaries    Blue primaries
            x       y        x       y          x       y
DCI-P3    0.680   0.320    0.265   0.690      0.150   0.060
Rec.709   0.640   0.330    0.300   0.600      0.150   0.060
Toy       0.570   0.320    0.300   0.530      0.190   0.130

Fig. 1. Color spaces of color gamuts considered in this work on the CIE 1931 chromaticity diagram.
2) Subjective Test:
We adopt the paired comparison test methodology [47] for the subjective test, because the difference due to gamut reduction is mostly a subtle perceptual difference rather than a large quality distortion. The reference image in the P3 gamut and one of the gamut-reduced images produced in Section III-B1 are shown in a side-by-side manner. The images are compared in terms of color difference on a three-point scale: no difference (0), slight difference (1), and clear difference (2).

The test is conducted under the standardized test room condition complying with the laboratory condition described in ITU-R BT.500 regarding luminance of the monitor, room illumination, observers, etc. [48]. We use an EIZO ColorEdge monitor that can display up to the P3 color gamut. We heuristically crop each image to half-width ( × pixels) to show both images side-by-side on a single monitor. Participants are 51 healthy non-expert volunteer subjects, 26 males and 25 females, who are screened by a color and vision test. We obtain the MOS for each of the 60 images (30 source images × two target gamuts) by taking the average value of the ratings over the subjects.

The test consists of an exercise session and a test session. During the exercise session, the test methodology is described to the subjects with five exercise stimuli that are different from the test stimuli. The test session proceeds sequentially for each pair of images as follows. First, a reference image and one of its gamut-reduced versions are displayed on the monitor for up to five seconds. Then, the monitor turns into a gray screen. At any time during these steps, the subjects can enter their rating using a keyboard. Finally, the monitor turns (or stays) gray for one second for a break and then the next pair is shown. The viewing order of the stimuli is set randomly for each subject. The arrangement of the reference image and the gamut-reduced image (i.e., left or right side) is also randomized for each pair. At the beginning of the test session, three dummy pairs are shown for stabilization, which are also different from the test stimuli.
C. Fitting Objective Metric
In order to approximate the subjective score of the color difference due to gamut reduction in an objective manner, we employ the color extension of the structural similarity index (cssim) [49], [50], which can effectively measure perceptually significant structural differences between two color images due to gamut reduction. The preliminary study [1] shows that it performs best with the highest accuracy among eight commonly used objective color difference metrics [51], [52], [53], [54], [55], [56], [57].

For each pair of the reference and gamut-reduced images, we measure the cssim score. The score is further fitted to the MOS by a monotonic nonlinear (sigmoid) function as described in [58]:

f(x) = α / (1 + e^{β(γ−x)}),    (1)

where the fitted values of the parameters are α = 2, β = −…, and γ = 1.…. The result of fitting for the training dataset is shown in Fig. 2. In order to evaluate the prediction performance, we obtain MOS for the validation dataset from 20 subjects by following the same procedure described in Section III-B2. The Pearson correlation coefficients (PCCs) between the ground truth MOS and the predicted MOS using the fitting function are 0.92 and 0.80 for the training and validation sets, respectively. Therefore, we calculate the perceptual difference in Algorithm 1 as

d_n = PD(I_n, I) = f(cssim(I_n, I)).    (2)
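The fitting step can be reproduced with standard tools. Below is a small sketch using SciPy under the logistic form of Eq. (1); the cssim_scores and mos arrays are hypothetical placeholders for the 60 training measurements, not the actual data.

import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr

def f(x, alpha, beta, gamma):
    # Monotonic logistic mapping from cssim to predicted MOS, as in Eq. (1)
    return alpha / (1.0 + np.exp(beta * (gamma - x)))

# Hypothetical placeholder data: lower cssim means a larger perceived difference
cssim_scores = np.array([0.999, 0.995, 0.98, 0.95, 0.90, 0.85])
mos = np.array([0.1, 0.3, 0.8, 1.2, 1.6, 1.9])

params, _ = curve_fit(f, cssim_scores, mos, p0=[2.0, -50.0, 1.0], maxfev=10000)
predicted = f(cssim_scores, *params)
pcc, _ = pearsonr(mos, predicted)
print(f"alpha={params[0]:.2f}, beta={params[1]:.2f}, gamma={params[2]:.3f}, PCC={pcc:.2f}")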
D. Validation

We validate the framework by applying it to a simple content selection task. As mentioned in Section I, using representative contents is crucial to draw reliable conclusions in studies on QoE of WCG images. In this task, the main objective is to select representative images that have diverse behaviors in terms of the perceptual difference due to successive gamut reduction. We use the framework to obtain the predicted perceptual differences due to gamut reduction to the two target gamuts (Rec.709 and Toy) as two-dimensional features characterizing the 24 candidate images in the validation dataset.
Fig. 2. Fitted sigmoid function to predict MOS of perceptual difference using cssim. The ground truth and predicted MOS are shown as dots and a red line, respectively.

Then, the k-means clustering algorithm is applied to the predicted perceptual differences (a minimal sketch of this selection step is given at the end of this subsection). The value of k determines the number of representative clusters for content selection, which should be chosen by the user according to the purpose of content selection. In this experiment, we set the value of k to five based on the distribution of the images in terms of the predicted perceptual differences. One image from each cluster is randomly selected to construct a representative image set, which maximizes the coverage of the feature space. For comparison, we also apply a random selection method where five images are selected randomly from the same dataset.

The result for each selection method is shown in Fig. 3. It can be seen that the images selected by our framework in Fig. 3a are more spread out than the randomly selected images. In Fig. 3b, however, the selected images are biased to the upper side of the feature space. In this case, images having small perceptual difference under severe gamut reduction are not considered, and the obtained image set cannot be said to be representative. Fig. 4 shows two example images (marked in Fig. 3a) in different gamuts. In Fig. 4a, as predicted, large perceptual differences are observed for both gamut-reduced images compared to the reference P3 image, i.e., in the overall color of the scene and the green laser lights in the top area. On the contrary, Fig. 4b hardly shows any difference between the gamut-reduced images, which is also predicted in Fig. 3a. By selecting images with diverse characteristics, a representative dataset can be constructed by our framework.

We also evaluate the robustness of content selection with our framework. For each of the two methods (random selection and our framework), the selection task is repeated two times to obtain two sets of selected images, and the PCC between the MOSs of the two sets is measured. We consider that a high value of PCC for a selection method represents a high level of robustness of the method, because it means that the characteristics of the selected images are consistent regardless of repetition or random effects. We repeat the procedure 100 times. Much higher PCC values are obtained by our framework than by random selection (0.83 vs. 0.15 on average), which is found to be statistically significant via a t-test, t(137.1) = …, p < … (the statistical significance of higher PCC values by our framework is obtained in all cases with k from 2 to 10).

Fig. 3. Example of content selection with (a) our framework and (b) random selection. The x- and y-axes are predicted perceptual differences of images from the P3 gamut to the Rec.709 and Toy gamuts, respectively. Among the data points shown as blue dots, the selected images are marked with red circles. Note that the lower-right area of each figure is empty because as the gamut is reduced more, the perceptual difference becomes larger; thus the value on the y-axis is always bigger than that on the x-axis.

Fig. 4. Selected images corresponding to (a) ① and (b) ② of Fig. 3a in the P3, Rec.709, and Toy gamuts (left, middle, and right panels, respectively).
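The clustering-based selection used above reduces to a few lines of Python. The sketch below uses a hypothetical features array standing in for the predicted perceptual differences of the 24 candidate images produced by Algorithm 1; with k = 5, one image is drawn at random from each cluster.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical 24 x 2 feature array (columns: P3 -> Rec.709, P3 -> Toy);
# sorting each row enforces d_Toy >= d_Rec.709, as observed in Fig. 3
features = np.sort(rng.uniform(0.0, 2.0, size=(24, 2)), axis=1)

k = 5
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)

# Pick one image at random from each cluster to span the feature space
selected = [int(rng.choice(np.where(labels == c)[0])) for c in range(k)]
print("selected image indices:", selected)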
IV. APPLICATION TO WCG DATASET CHARACTERIZATION
In this section, we apply the proposed framework to the characterization of WCG image datasets. We describe dataset characterization criteria and analyze existing WCG datasets based on them. Characterizing datasets helps an experimenter to determine or construct a suitable dataset for studies related to QoE of WCG contents.

A. Dataset Characterization Criteria
By extending the dataset characterization criteria presented in [37], we propose to measure four statistics of the perceptual difference measured by the framework as follows. In [37], three statistics of various characteristics extracted from images or videos in the dataset are proposed: two criteria measuring the coverage and uniformity in each dimension, and a multidimensional coverage criterion. In addition to these, we also consider multidimensional uniformity. Note that we normalize the perceptual differences in each dimension to scale the span of the criteria within [0, 1], i.e., d̃_i = d_i / s_i, where s_i is a normalization factor that is equal to the maximum possible value of MOS (s_i = 2 in our case) because the minimum value of MOS is zero.
1) Coverage:
To quantify how wide a range of perceptual differences is covered by the images of a dataset, we measure the difference between the smallest and largest perceptual difference values of the images. Specifically, the coverage C_i for gamut space i is calculated as

C_i = max(z_i) − min(z_i),    (3)

where z_i is the set of the normalized perceptual differences d̃_i of all images in the dataset when the gamut is reduced to target gamut i from the reference gamut (i = 1, ..., N). The maximum value of C_i is obtained when the dataset contains images corresponding to both no difference (MOS = 0) and clear difference (MOS = 2) for the i-th target gamut. In other words, one image has few or no colors outside the i-th gamut space so that it does not cause perceptual difference by gamut reduction, while the other image contains many colors outside the space and thus its perceptual difference can be clearly observed after gamut reduction.
2) Total Coverage:
This is the relative area occupied by the data points in the space of perceptual differences. It is similar to C_i, but considers the interaction of the different dimensions in Z = {z_1, z_2, ..., z_N}. It is calculated as

C_total = (∫_{convex(Z)} dz)^{1/N},    (4)

where convex(Z) returns the convex hull of the N-dimensional vectors in Z, i.e., C_total is the N-th root of the volume of the convex hull. C_total becomes the largest when the dataset consists of images having the maximum coverage of perceptual difference for all target gamuts. Using a dataset having a large value of C_total in an experiment implies that images having extreme perceptual characteristics (i.e., both severe and little perceptual differences) under gamut change are employed.
3) Uniformity:
While the above coverage measures consider the range of perceptual differences observed in the images, uniformity measures how evenly the perceptual differences are distributed within the range. For this, we use the information entropy, which is popularly used to measure the uniformity of a distribution. In other words, we construct the histogram of z_i and then compute its entropy in order to quantify the uniformity of the distribution of the perceptual differences:

U_i = − Σ_{k=1}^{B} p_{i,k} log_B p_{i,k},    (5)

where B is the number of bins of the histogram and p_{i,k} is the ratio of the images whose perceptual differences are in the range of the k-th bin. The uniformity has the largest value of 1 when the perceptual differences of the dataset are uniformly distributed. It becomes low when the dataset contains images having similar perceptual differences, and reaches 0 when the perceptual differences are the same for all images.
4) Total Uniformity:
This measures the uniformity of perceptual differences over the whole dimensions of reduced target gamuts. In this case, we compute the N-dimensional histogram of Z and its entropy, i.e.,

U_total = −(1/N) Σ_{i=1}^{N} Σ_{k=1}^{B} q_{i,k} log_B q_{i,k},    (6)

where B is the number of bins for each dimension of the histogram and q_{i,k} is the normalized count in the k-th bin (normalized over the whole dimension). It takes the largest value (i.e., 1) when a dataset contains diverse images in terms of perceptual differences and the perceptual differences are uniformly distributed over all target gamuts. On the other hand, it has the lowest value of 0 when the dataset contains images that show the same amount of perceptual difference for all target gamuts. A dataset having a large value of U_total is beneficial for conducting experiments with images having diverse perceptual characteristics under gamut change.
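All four criteria can be computed directly from the feature vectors produced by Algorithm 1. The sketch below assumes Z holds the normalized perceptual differences (one row per image, one column per target gamut); total_uniformity follows the N-dimensional-histogram reading of Eq. (6), normalized so that a perfectly uniform distribution yields 1.

import numpy as np
from scipy.spatial import ConvexHull

def coverage(z):
    # Eq. (3): range of normalized perceptual differences in one dimension
    return float(z.max() - z.min())

def total_coverage(Z):
    # Eq. (4): N-th root of the volume (area for N = 2) of the convex hull
    return ConvexHull(Z).volume ** (1.0 / Z.shape[1])

def uniformity(z, bins=10):
    # Eq. (5): base-B entropy of the one-dimensional histogram over [0, 1]
    p, _ = np.histogram(z, bins=bins, range=(0.0, 1.0))
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum() / np.log(bins))

def total_uniformity(Z, bins=10):
    # Eq. (6), read as the entropy of the N-dimensional histogram,
    # normalized so that a perfectly uniform distribution gives 1
    H, _ = np.histogramdd(Z, bins=bins, range=[(0.0, 1.0)] * Z.shape[1])
    q = H.ravel()
    q = q[q > 0] / q.sum()
    return float(-(q * np.log(q)).sum() / (Z.shape[1] * np.log(bins)))

# Z: hypothetical normalized perceptual differences (rows: images,
# columns: target gamuts), standing in for the measured features
Z = np.sort(np.random.default_rng(1).uniform(size=(38, 2)), axis=1)
print(coverage(Z[:, 0]), total_coverage(Z), uniformity(Z[:, 0]), total_uniformity(Z))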
B. Analysis of Existing Datasets

We analyze the two existing WCG datasets, HdM and Arri, in terms of the four criteria described above (these are the only publicly available datasets that support Rec.2020). In this experiment, we collect 38 and 11 images from the two datasets, respectively. We use the perceptual differences of the 49 images due to successive gamut reduction from the reference P3 gamut to the Rec.709 and Toy gamuts as in Section III-C. We then measure the four criteria of the two WCG datasets. For (total) uniformity, we use 10 bins for each dimension of the histograms (i.e., B = 10). The measured criteria are summarized in TABLE II. In addition, the distributions of the perceptual difference for the two datasets are shown in Fig. 5.

TABLE II
Results of WCG Dataset Characterization

              |           HdM           |           Arri
Criteria      |  Toy   Rec.709   Total  |  Toy   Rec.709   Total
Coverage      | 0.512   0.647    0.412  | 0.933   0.017    0.093
Uniformity    | 0.707   0.509    0.550  | 0.713   0.000    0.357

First, the coverages of the two datasets show different behaviors depending on the target gamut. The perceptual differences of the images in the HdM dataset cover about half of the scale for both target gamuts, as shown in Fig. 5a. For the case of gamut reduction to Toy, the perceptual difference is biased to large values because most images of HdM contain many pixels with highly saturated colors, which produce a large perceptual difference when the gamut is reduced. On the contrary, pixels with highly saturated colors are few in the images of the Arri dataset, so the coverage criterion for Rec.709 is low while that for Toy is high, as shown in Fig. 5b.

Similarly to the results of the dimension-wise coverage criterion, the HdM dataset has a medium level of total coverage of perceptual differences, showing a convex hull covering almost the upper-half area in Fig. 5a. On the other hand, although the coverage value for the Toy gamut is large as shown in Fig. 5b, the total coverage of the Arri dataset is small due to the extremely low coverage for Rec.709. Note that z_Toy is always higher than z_Rec.709 for the same image because the details of color are more distorted in Toy, so the practical maximum possible value of the total coverage is 0.707 (= √0.5).

In terms of uniformity, the perceptual differences caused by the large gamut difference (i.e., the case of Toy) are quite uniformly distributed for both datasets. For the small gamut reduction (to Rec.709), the perceptual differences of the HdM dataset are slightly biased to low values. The perceptual differences of the Arri dataset are extremely biased, so all data points fall in a single bin and the uniformity is zero.

In the case of total uniformity, there are differences between the two datasets. The perceptual differences of the HdM dataset are quite uniformly distributed in the two-dimensional space in Fig. 5a, although the data points are slightly biased to the upper region (where large perceptual differences occur due to large gamut reduction). For the Arri dataset, the perceptual differences are biased to the left side in Fig. 5b, so the total uniformity becomes low.

Overall, each of the two datasets has its own strengths and limitations in a complementary manner. HdM has a relatively small coverage of z_Toy, while Arri has limited characteristics in the Rec.709 gamut. For example, if the Arri dataset is used for an experiment involving gamut changes, the experiment would draw biased conclusions for small gamut differences. Based on this understanding, one can choose either of the two datasets for particular research problems; for instance, the Arri dataset could be more effective for experiments that focus on large gamut differences. Furthermore, one can obtain an enhanced dataset by supplementing one of the two datasets with particular contents having characteristics desired for the given objective.

V. APPLICATION TO EVALUATION OF GAMUT MAPPING ALGORITHMS
In this section, we present another practical application of the proposed framework, namely the problem of evaluating GMAs. In this scenario, the proposed framework plays the role of selecting the image contents used for performance comparison of different GMAs. We demonstrate the reliability of the framework for selection of representative contents for fair comparison.
A. Scenario
The main goal of the scenario is to benchmark the performance of GMAs. Each GMA is applied to a set of source image contents having wide gamuts, and its performance is measured by an objective quality metric in terms of perceptual color information loss in the gamut-reduced images in comparison to the original ones. Here, which image dataset is used is an important issue. For instance, if the images do not have color profiles challenging enough to reveal differences in gamut mapping performance, the GMAs may be evaluated to perform similarly, which may not be the case if challenging images are included. Therefore, careful selection of the images is required to obtain unbiased benchmarking results, for which the proposed framework can be used. Accordingly, our objective is to evaluate the reliability of the benchmarking results obtained with different source content selection methods.

We limit the number of GMAs for comparison to two in order to validate the effectiveness of the proposed framework clearly rather than to present extensive benchmarking of many GMAs. One is the state-of-the-art gamut reduction algorithm [7] that adaptively modifies the local contrast of pixels residing outside the target gamut based on the Retinex theory [8]. For the other, we use the gamut compression algorithm [6] that maps the entire color of the source image inside the target gamut in the CIE 1931 space.

To evaluate the performance of gamut mapping, we use the color image difference (CID) [35], which predicts the perceptual color difference between the reference and gamut-reduced images and was used to evaluate the performance of the gamut reduction algorithm in [7]. As the main objective of the scenario, we focus on the reliability and robustness of the test results with representative contents selected by our framework. First, the selected dataset should sufficiently cover diverse gamut characteristics so that it is representative. Second, in terms of robustness, experiments with content selection following the same procedure should produce consistent results and conclusions regardless of repetition.
B. Content Selection
The pool of candidate source images consists of half-HD ( × pixels) WCG images from both the HdM and Arri datasets. After excluding images containing no or too few pixels in WCG (outside the Rec.709 gamut) from the data used in Section IV-B, 35 candidate images are used. The reference gamut is Rec.2020, and we use three target gamuts for gamut mapping: P3, Rec.709, and Toy.

The proposed framework is applied to select representative images from the pool. As described in Section III-C, each candidate image is represented by a two-dimensional perceptual feature vector. Then, the k-means clustering algorithm with k = 3 is used to group the images into three clusters, from each of which three images are randomly selected. For comparison, content selection using an existing content feature, colorfulness [53], is also conducted; it measures the variety and intensity of colors in an image (a compact implementation is sketched below). The colorfulness features computed for the candidate images are also clustered into three groups, and three images are randomly chosen from each group. These content selection procedures are repeated 100 times with different random seeds.

Fig. 5. Measured perceptual differences and corresponding convex hulls for the (a) HdM and (b) Arri datasets.
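For reference, the colorfulness feature of Hasler and Süsstrunk [53] used as the baseline can be computed as follows. This is a compact sketch; the exact preprocessing used in the experiments is not specified here.

import numpy as np

def colorfulness(img):
    # Hasler-Suesstrunk colorfulness [53]; img is an H x W x 3 RGB array in [0, 255]
    r, g, b = (img[..., c].astype(float) for c in range(3))
    rg = r - g                  # red-green opponent channel
    yb = 0.5 * (r + g) - b      # yellow-blue opponent channel
    sigma = np.hypot(rg.std(), yb.std())
    mu = np.hypot(rg.mean(), yb.mean())
    return sigma + 0.3 * mu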
C. Evaluation

In order to compare the two GMAs, we define the CID gain g_t for target gamut t and a given source image as

g_t = CID(I, GC(I, t)) − CID(I, GR(I, t)),    (7)

where I is the reference image, and GC(I, t) and GR(I, t) are the gamut-compressed and gamut-reduced versions of I, respectively. g_t becomes positive when the gamut reduction algorithm performs better than the gamut compression algorithm, and its absolute value indicates the degree of the performance difference.

Using the CID gains for the 100 repetitions, the two content selection methods are compared with respect to two aspects: robustness and representativeness. First, a content selection method is considered robust when the CID gains remain consistent, i.e., the averages and standard deviations of the CID gains over the selected images are similar across the repetitions. Second, a dataset of images chosen by a content selection method is regarded as representative if the images have diverse color characteristics; in that case, the CID gains lie in a wide range, resulting in a large average and standard deviation over the images.
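A sketch of Eq. (7) is given below; cid, gamut_compress, and gamut_reduce are hypothetical stand-ins for the CID metric [35], the gamut compression algorithm [6], and the gamut reduction algorithm [7], none of which are standard library functions.

def cid_gain(image, target_gamut, cid, gamut_compress, gamut_reduce):
    # Eq. (7): positive when the gamut reduction algorithm outperforms
    # gamut compression for this image and target gamut
    compressed = gamut_compress(image, target_gamut)  # GC(I, t)
    reduced = gamut_reduce(image, target_gamut)       # GR(I, t)
    return cid(image, compressed) - cid(image, reduced)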
D. Results

Fig. 6 shows the average and standard deviation of the CID gains for the selected images with respect to the target gamut and selection method. In all cases, the average CID gains are positive, which indicates that the gamut reduction algorithm produces gamut-reduced images with smaller differences from the reference ones compared to the gamut compression algorithm. When the three target gamuts are compared, a smaller gamut yields larger CID gains because more color distortion is introduced by the gamut compression algorithm than by the gamut reduction algorithm as the gamut difference becomes larger.

The two selection methods show clearly distinct results. First, the average and standard deviation of the CID gains appear more similar across the 100 trials when the proposed framework is used, particularly when the target gamut is small. In order to statistically assess this, we conduct one-sided F-tests under the null hypothesis that the two populations (one for the proposed framework and the other for the method using colorfulness) of the average (or standard deviation) values of the CID gains have the same variance. The results are shown in TABLE III, which confirms that the cases involving large gamut changes show statistically significant differences (i.e., Rec.709 and Toy for the average and Toy for the standard deviation). Note that for P3, the gamut difference from Rec.2020 is small, so the average and standard deviation of the CID gains are also small. These results demonstrate that the selection method has an impact on the results of GMA comparison, where content selection using the proposed framework provides improved robustness.

Second, on average, the average and standard deviation values are larger for the case using the proposed framework than for the case using colorfulness. Since many images in the pool are not challenging for GMAs, as shown in Section IV-B, for which the CID gain is small, a larger average or standard deviation value indicates a more representative dataset. We perform one-sided t-tests under the null hypothesis that the two populations of the average (or standard deviation) values of the CID gains have the same mean. As shown in TABLE III, the null hypothesis is rejected in all cases, indicating that the average and standard deviation values are significantly larger for our method. This confirms the representativeness of the dataset obtained using our method and, consequently, the reliability of the benchmarking results.

Fig. 6. CID gains for the images selected by the method using the proposed framework or the one using colorfulness: (a) Framework (P3), (b) Framework (Rec.709), (c) Framework (Toy), (d) Colorfulness (P3), (e) Colorfulness (Rec.709), (f) Colorfulness (Toy). The average and standard deviation values of the CID gains over the selected images are represented by the bars with dark color and the shaded area, respectively.

TABLE III
Results of the Statistical Tests Comparing the Selection Method Using the Proposed Framework and the One Using Colorfulness. The Degrees of Freedom of t-tests on Average and Standard Deviation Using the Welch–Satterthwaite Equation Are … and …, Respectively. Statistical Significance (Bonferroni-corrected for Multiple Comparisons) Is Marked in Bold.

         Target gamut |      Average       | Standard deviation
                      | Statistic  p-value | Statistic  p-value
F-tests  P3           | F = 0.72   0.…     | F = 0.94   0.…
         Rec.709      | F = 0.…    0.…     | F = 0.72   0.…
         Toy          | F = 0.…    0.…     | F = 0.…    < 0.…
t-tests  P3           | t = 7.…    < 0.…   | t = 6.…    < 0.…
         Rec.709      | t = 8.…    < 0.…   | t = 7.…    < 0.…
         Toy          | t = 9.…    < 0.…   | t = 9.…    < 0.…

For comparison, we provide further results using selection features other than colorfulness. We use two no-reference color quality metrics: the contrast enhancement based contrast-changed image quality measure (CEIQ) [59] and the accelerated screen image quality evaluator (ASIQE) [60]. The former is a metric based on a learned support vector machine using multiple features estimating contrast distortion, while the latter assesses image quality considering four types of quality features consisting of picture complexity, screen content statistics, global brightness quality, and sharpness of details. We conduct statistical tests comparing the CID gains obtained by our framework and the method using either CEIQ or ASIQE. The results are shown in TABLE IV. Similar to the results in TABLE III using colorfulness, statistical significance is observed for F-tests in the cases of the large gamut change (i.e., Rec.709 and Toy) and for t-tests in all cases. Thus, compared to the methods using these image quality metrics, our framework can select representative contents effectively and reliably.
TABLE IV
Results of the Statistical Tests Comparing the Selection Method Using the Proposed Framework and the Ones Using CEIQ and ASIQE. Statistical Significance Is Marked in Bold.

                 Target gamut |      Average       | Standard deviation
                              | Statistic  p-value | Statistic  p-value
F-tests (CEIQ)   P3           | F = 0.78   0.…     | F = 0.83   0.…
                 Rec.709      | F = 0.…    0.…     | F = 0.…    < 0.…
                 Toy          | F = 0.…    < 0.…   | F = 0.…    < 0.…
t-tests          P3           | t = 10.…   < 0.…   | t = 10.…   < 0.…
                 Rec.709      | t = 12.…   < 0.…   | t = 11.…   < 0.…
                 Toy          | t = 12.…   < 0.…   | t = 12.…   < 0.…
F-tests (ASIQE)  P3           | F = 0.76   0.…     | F = 0.92   0.…
                 Rec.709      | F = 0.…    0.…     | F = 0.…    0.…
                 Toy          | F = 0.…    < 0.…   | F = 0.…    < 0.…
t-tests          P3           | t = 9.…    < 0.…   | t = 9.…    < 0.…
                 Rec.709      | t = 10.…   < 0.…   | t = 11.…   < 0.…
                 Toy          | t = 10.…   < 0.…   | t = 13.…   < 0.…
VI. CONCLUSION
We proposed a content characterization method for WCG image content and evaluated it in practical applications. The main idea was to obtain perceptual color differences due to successive gamut reduction as content characteristics for the WCG content. As one of the practical use cases of the framework, we analyzed existing datasets by measuring dataset characterization criteria on the WCG characteristics. Four criteria consisting of coverage, total coverage, uniformity, and total uniformity effectively characterized WCG datasets. In addition, we validated WCG content characteristics as a content selection feature in a GMA benchmarking scenario. Using the framework, we were able to select representative WCG contents and draw robust and reliable benchmarking results.

In the future, the proposed framework can be improved in several ways. First, we employed cssim for objective quality assessment due to its superiority. If metrics that perform better than cssim are developed in the future, e.g., deep learning-based methods, our framework could benefit from employing such improved metrics. Second, the scope of the framework could be extended to video contents by considering the temporal dimension of color perception.

REFERENCES

[1] J. Lee, T. Vigier, P. Le Callet, and J.-S. Lee, "A perception-based framework for wide color gamut content selection," in Proceedings of the IEEE International Conference on Image Processing, Oct. 2018, pp. 709–713.
[2] ITU, "ITU-R BT.709-6. Parameter values for the HDTV standards for production and international programme exchange," Tech. Rep., 2015.
[3] ——, "ITU-R BT.2020-2. Parameter values for ultra-high definition television systems for production and international programme exchange," Tech. Rep., 2015.
[4] T. Smith and J. Guild, "The C.I.E. colorimetric standards and their use," Transactions of the Optical Society, vol. 33, no. 3, pp. 73–134, 1931.
[5] C. Chinnock, "The status of wide color gamut UHD-TVs," Tech. Rep., 2016.
[6] CIE, "Guidelines for the evaluation of gamut mapping algorithms," Tech. Rep., 2004.
[7] S. W. Zamir, J. Vazquez-Corral, and M. Bertalmio, "Gamut mapping in cinematography through perceptually-based contrast modification," IEEE Journal of Selected Topics in Signal Processing, vol. 8, no. 3, pp. 490–503, 2014.
[8] E. H. Land and J. J. McCann, "Lightness and retinex theory," Journal of the Optical Society of America, vol. 61, no. 1, pp. 1–11, 1971.
[9] K. Gu, S. Wang, H. Yang, W. Lin, G. Zhai, X. Yang, and W. Zhang, "Saliency-guided quality assessment of screen content images," IEEE Transactions on Multimedia, vol. 18, no. 6, pp. 1098–1110, Jun. 2016.
[10] K. Gu, S. Wang, G. Zhai, S. Ma, X. Yang, W. Lin, W. Zhang, and W. Gao, "Blind quality assessment of tone-mapped images via analysis of information, naturalness, and structure," IEEE Transactions on Multimedia, vol. 18, no. 3, pp. 432–443, Mar. 2016.
[11] Z. Fan, T. Jiang, and T. Huang, "Active sampling exploiting reliable informativeness for subjective image quality assessment based on pairwise comparison," IEEE Transactions on Multimedia, vol. 19, no. 12, pp. 2720–2735, Dec. 2017.
[12] X. Min, K. Gu, G. Zhai, J. Liu, X. Yang, and C. W. Chen, "Blind quality assessment based on pseudo-reference image," IEEE Transactions on Multimedia, vol. 20, no. 8, pp. 2049–2062, Aug. 2018.
[13] P. G. Freitas, W. Y. L. Akamine, and M. C. Q. Farias, "No-reference image quality assessment using orthogonal color planes patterns," IEEE Transactions on Multimedia, vol. 20, no. 12, pp. 3353–3360, Dec. 2018.
[14] M. C. Stone, W. B. Cowan, and J. C. Beatty, "Color gamut mapping and the printing of digital color images," ACM Transactions on Graphics, vol. 7, no. 4, pp. 249–292, Oct. 1988.
[15] G. M. Murch and J. M. Taylor, "Color in computer graphics: Manipulating and matching color," in Advances in Computer Graphics V. Springer, 1989, pp. 19–47.
[16] F. Ebner and M. D. Fairchild, "Gamut mapping from below: Finding minimum perceptual distances for colors outside the gamut volume," Color Research & Application, vol. 22, no. 6, pp. 402–413, 1997.
[17] J. Morovic and M. R. Luo, "Gamut mapping algorithms based on psychophysical experiment," in Proceedings of the Color and Imaging Conference, vol. 1997, no. 1, 1997, pp. 44–49.
[18] N. Katoh, M. Ito, and S. Ohno, "Three-dimensional gamut mapping using various color difference formulae and color spaces," Journal of Electronic Imaging, vol. 8, no. 4, pp. 365–379, 1999.
[19] J. Morovic and Y. Wang, "A multi-resolution, full-colour spatial gamut mapping algorithm," in Proceedings of the Color and Imaging Conference, vol. 2003, no. 1, 2003, pp. 282–287.
[20] P. Zolliker and K. Simon, "Adding local contrast to global gamut mapping algorithms," in Proceedings of the Conference on Colour in Graphics, Imaging, and Vision, vol. 2006, no. 1, 2006, pp. 257–261.
[21] I. Farup, C. Gatta, and A. Rizzi, "A multiscale framework for spatial gamut mapping," IEEE Transactions on Image Processing, vol. 16, no. 10, pp. 2423–2435, 2007.
[22] P. Zolliker and K. Simon, "Retaining local image information in gamut mapping algorithms," IEEE Transactions on Image Processing, vol. 16, no. 3, pp. 664–672, Mar. 2007.
[23] Ø. Kolås and I. Farup, "Efficient hue-preserving and edge-preserving spatial color gamut mapping," in Proceedings of the Color and Imaging Conference, 2007, pp. 207–212.
[24] A. Alsam and I. Farup, "Spatial colour gamut mapping by orthogonal projection of gradients onto constant hue lines," in Advances in Visual Computing, 2012, pp. 556–565.
[25] C. Gatta and I. Farup, "Gamut mapping in RGB colour spaces with the iterative ratios diffusion algorithm," in Proceedings of the IS&T International Symposium on Electronic Imaging, 2017, pp. 12–20.
[26] F. Dugay, I. Farup, and J. Y. Hardeberg, "Perceptual evaluation of color gamut mapping algorithms," Color Research & Application, vol. 33, no. 6, pp. 470–476, 2008.
[27] CIE, "Recommendations on uniform color spaces, color-difference equations, psychometric color terms," Tech. Rep., 1978.
[28] X. Zhang and B. A. Wandell, "A spatial extension of CIELAB for digital color image reproduction," in SID International Symposium Digest of Technical Papers, vol. 27, 1996, pp. 731–734.
[29] M. D. Fairchild and G. M. Johnson, "The iCAM framework for image appearance, image differences, and image quality," Journal of Electronic Imaging, vol. 13, pp. 126–138, Jan. 2004.
[30] G. Hong and M. R. Luo, "Perceptually-based color difference for complex images," in Proceedings of the Congress of the International Colour Association, vol. 4421, 2002, pp. 618–622.
[31] Z. Wang and A. C. Bovik, "A universal image quality index," IEEE Signal Processing Letters, vol. 9, no. 3, pp. 81–84, Mar. 2002.
[32] N. Bonnier, F. Schmitt, H. Brettel, and S. Berche, "Evaluation of spatial gamut mapping algorithms," in Proceedings of the Color and Imaging Conference, 2006, pp. 56–61.
[33] J. Y. Hardeberg, E. Bando, and M. Pedersen, "Evaluating colour image difference metrics for gamut-mapped images," Coloration Technology, vol. 124, no. 4, pp. 243–253, 2008.
[34] M. Pedersen and J. Y. Hardeberg, "A new spatial hue angle metric for perceptual image difference," in Proceedings of the Computational Color Imaging Workshop, 2009, pp. 81–90.
[35] I. Lissner, J. Preiss, P. Urban, M. S. Lichtenauer, and P. Zolliker, "Image-difference prediction: From grayscale to color," IEEE Transactions on Image Processing, vol. 22, no. 2, pp. 435–446, 2013.
[36] D. Darcy, E. Gitterman, A. Brandmeyer, S. Daly, and P. Crum, "Physiological capture of augmented viewing states: objective measures of high-dynamic-range and wide-color-gamut viewing experiences," in Proceedings of the IS&T International Symposium on Human Vision and Electronic Imaging, 2016, pp. HVEI126:1–9.
[37] S. Winkler, "Analysis of public image and video databases for quality assessment," IEEE Journal of Selected Topics in Signal Processing, vol. 6, no. 6, pp. 616–625, 2012.
[38] F. Zhang, F. M. Moss, R. Baddeley, and D. R. Bull, "BVI-HD: A video quality database for HEVC compressed and texture synthesized content," IEEE Transactions on Multimedia, vol. 20, no. 10, pp. 2620–2630, Oct. 2018.
[39] A. Mackin, F. Zhang, and D. R. Bull, "A study of high frame rate video formats," IEEE Transactions on Multimedia, vol. 21, no. 6, pp. 1499–1512, Jun. 2019.
[40] M. H. Pinson, M. Barkowsky, and P. Le Callet, "Selecting scenes for 2D and 3D subjective video quality tests," EURASIP Journal on Image and Video Processing, vol. 2013, no. 1, pp. 50:1–12, Aug. 2013.
[41] L. Krasula, K. Fliegel, P. Le Callet, and M. Klíma, "Objective evaluation of naturalness, contrast, and colorfulness of tone-mapped images," in Proceedings of the Applications of Digital Image Processing XXXVII, vol. 9217, 2014, pp. 92172D:1–10.
[42] P. Paudyal, J. Gutiérrez, P. Le Callet, M. Carli, and F. Battisti, "Characterization and selection of light field content for perceptual assessment," in Proceedings of the International Conference on Quality of Multimedia Experience, 2017, pp. 1–6.
[43] M. Narwaria, C. Mantel, M. Perreira Da Silva, P. Le Callet, and S. Forchhammer, "An objective method for high dynamic range source content selection," in Proceedings of the International Workshop on Quality of Multimedia Experience, 2014, pp. 13–18.
[44] L. Krasula, M. Narwaria, K. Fliegel, and P. Le Callet, "Preference of experience in image tone-mapping: dataset and framework for objective measures comparison," IEEE Journal of Selected Topics in Signal Processing, vol. 11, no. 1, pp. 64–74, 2017.
[45] J. Froehlich, S. Grandinetti, B. Eberhardt, S. Walter, A. Schilling, and H. Brendel, "Creating cinematic wide gamut HDR-video for the evaluation of tone mapping operators and HDR-displays," in Proceedings of the Digital Photography X, vol. 9023, 2014, pp. 90230X:1–10.
[46] S. W. Zamir, J. Vazquez-Corral, and M. Bertalmío, "Gamut extension for cinema," IEEE Transactions on Image Processing, vol. 26, no. 4, pp. 1595–1606, 2017.
[47] J.-S. Lee, "On designing paired comparison experiments for subjective multimedia quality assessment," IEEE Transactions on Multimedia, vol. 16, no. 2, pp. 564–571, 2014.
[48] ITU, "ITU-R BT.500-13. Methodology for the subjective assessment of the quality of television pictures," Tech. Rep., 2012.
[49] B. Ortiz-Jaramillo, A. Kumcu, and W. Philips, "Evaluating color difference measures in images," in Proceedings of the International Conference on Quality of Multimedia Experience, 2016, pp. 1–6.
[50] A. Toet and M. P. Lucassen, "A new universal colour image fidelity metric," Displays, vol. 24, no. 4, pp. 197–207, 2003.
[51] G. Sharma, W. Wu, and E. N. Dalal, "The CIEDE2000 color-difference formula: Implementation notes, supplementary test data, and mathematical observations," Color Research & Application, vol. 30, pp. 21–30, 2004.
[52] X. Zhang and B. A. Wandell, "A spatial extension of CIELAB for digital color-image reproduction," Journal of the Society for Information Display, vol. 5, no. 1, pp. 61–63, 1997.
[53] D. Hasler and S. Süsstrunk, "Measuring colourfulness in natural images," in Proceedings of the IS&T/SPIE Electronic Imaging 2003: Human Vision and Electronic Imaging VIII, vol. 5007, no. 19, 2003, pp. 87–95.
[54] M. H. Pinson and S. Wolf, "A new standardized method for objectively measuring video quality," IEEE Transactions on Broadcasting, vol. 50, no. 3, pp. 312–322, 2004.
[55] G. M. Johnson, "Using color appearance in image quality metrics," in Proceedings of the Second International Workshop on Video Processing and Quality Metrics for Consumer Electronics, 2006.
[56] C. H. Chou and K. C. Liu, "A fidelity metric for assessing visual quality of color images," in Proceedings of the 16th International Conference on Computer Communications and Networks, 2007, pp. 1154–1159.
[57] U. Rajashekar, Z. Wang, and E. P. Simoncelli, "Quantifying color image distortions based on adaptive spatio-chromatic signal decompositions," in Proceedings of the 16th IEEE International Conference on Image Processing, 2009, pp. 2213–2216.
[58] ITU, "ITU-T J.149. Method for specifying accuracy and cross-calibration of Video Quality Metrics (VQM)," Tech. Rep., 2004.
[59] J. Yan, J. Li, and X. Fu, "No-reference quality assessment of contrast-distorted images using contrast enhancement," arXiv preprint arXiv:1904.08879, pp. 1–15, 2019.
[60] K. Gu, J. Zhou, J. Qiao, G. Zhai, W. Lin, and A. C. Bovik, "No-reference quality assessment of screen content pictures," IEEE Transactions on Image Processing, vol. 26, no. 8, pp. 4005–4018, 2017.
Junghyuk Lee received his B.S. degree from the School of Integrated Technology at Yonsei University, Korea, in 2015, where he is currently working toward the Ph.D. degree. His research interests include multimedia signal processing and wide color gamut imaging.
Toinon Vigier obtained a PhD in July 2015 from the Ecole Centrale de Nantes in the Ambiances Architectures and Urbanity lab, where she focused on virtual reality for urban studies. She specifically studied the impact of rendering and color effects on the perception of urban atmospheres through VR subjective tests. She was then a postdoctoral fellow in the Image Video and Communication team at Université de Nantes. She worked mainly on video quality and eye-tracking studies in the European CATRENE project UltraHD-4U, which aimed at studying and implementing a complete chain for the broadcasting of UHD-4K videos. Since September 2016, she has been an Associate Professor at Université de Nantes in the Image Perception Interaction research team of the Laboratory of Digital Sciences in Nantes (LS2N). Her research mainly focuses on the study, analysis, and prediction of the quality of experience for immersive and interactive multimedia through subjective and objective measures. She is currently involved in various national and international interdisciplinary projects focusing on user experience in immersive VR media for various applications (health, cinema, architecture, design, etc.). She is also active in the standardization working group IEEE 3333.1, and she has served as a reviewer for many international conferences and journals (IEEE TIP, IEEE TCSVT, SPIE JEI, IEEE VR, IEEE QoMEX, ACM TVX, IEEE MMSP).
Patrick Le Callet (IEEE Fellow) is a full professor at University of Nantes, in the Electrical Engineering and Computer Science departments of Polytech Nantes. He is one of the steering directors of the CNRS LS2N lab (450 researchers). He is also the scientific director of the cluster "Ouest Industries Créatives", gathering more than 10 institutions (including 3 universities). "Ouest Industries Créatives" aims to strengthen Research, Education & Innovation of the Région Pays de la Loire in the field of Creative Industries. He is mostly engaged in research dealing with cognitive computing and the application of human vision modeling in image and video processing. His current centers of interest are AI-boosted Quality of Experience (QoE) assessment and Visual Attention modeling and its applications. He is co-author of more than 300 publications and communications and co-inventor of 16 international patents on these topics. He serves or has served as associate editor or guest editor for several journals such as IEEE TIP, IEEE STSP, IEEE TCSVT, Springer EURASIP Journal on Image and Video Processing, and SPIE JEI. He is serving in the IEEE IVMSP-TC (2015 to present) and the IEEE MMSP-TC (2015 to present), and is one of the founding members of the EURASIP TAC (Technical Areas Committee) on Visual Information Processing.