Exploratory Study on User's Dynamic Visual Acuity and Quality Perception of Impaired Images

Jolien De Letter, Anissa All, Lieven De Marez
Media, Innovation and Communication Technologies, Ghent University
Ghent, Belgium

Vasileios Avramelos, Peter Lambert, Glenn Van Wallendael
Internet and Data Science Lab (IDLab), Ghent University
Ghent, Belgium
Abstract—In this paper we assess the impact of head movement on users' visual acuity and their quality perception of impaired images. There are physical limitations on the amount of visual information a person can perceive, and physical limitations on the speed at which our body, and as a consequence our head, can explore a scene. In these limitations lie fundamental solutions for the communication of multimedia systems. As such, subjects were asked to evaluate the perceptual quality of static images presented on a TV screen while their head was in a dynamic (moving) state. The idea is potentially applicable to virtual reality applications and therefore, we also measured the image quality perception of each subject on a head-mounted display. Experiments show a significant decrease in visual acuity and quality perception when the user's head is not static, and give an indication of how much the quality can be reduced without the user noticing any impairments.
Index Terms—Dynamic visual acuity, image impairment perception, subjective evaluation, virtual reality
I. INTRODUCTION
Improving the resolution of head-mounted devices is currently one of the main areas of focus in the field of virtual reality (VR). This is not surprising, especially since the amount of visual information displayed by VR displays is much smaller than the human viewing capacity. VR displays deliver only a limited number of pixels per degree, with a restricted field of view and a fixed depth of focus of two meters [1], [2]. This is in large contrast to TV screens, where the resolution starts to match human perception [3]. Humans are capable of perceiving far more pixels per degree, a considerably wider field of view, and a depth of focus which can vary. Although it will take some time to fully exploit the human visual system, VR experiences will certainly head towards improving in that area.

Such poor visual system performance not only yields lower-quality immersive experiences, but it is also linked to greater VR sickness, which poses a comfort problem to users [4]. VR sickness or simulator sickness, also known as Virtual Reality Induced Symptoms and Effects (VRISE) or cybersickness, refers to a constellation of oculomotor and nausea-related symptoms that users experience during and after participating in a simulated environment [5], [6]. Similar to motion sickness, simulator sickness is caused by a conflict between the perceived visual information and the bodily senses. Amongst the several factors which have been found to contribute to VRISE, there is also the issue of lag. Lag occurs when there is a delay or latency between the actions of a user, for instance head motion, and the system's response [4].

Considering that bad visual performance induces simulator sickness, pursuing a better view in VR will become crucial in the future [4]. Such a view expansion with a higher resolution and a wider field of view would, consequently, demand a substantial amount of bandwidth and storage. For this reason, blurring the points outside of the human vision range as a way to minimize streaming data will become a more evident solution.
This blurring in VR could occur through foveated rendering if the VR installation is equipped with eye-tracking technology, or through system feedback on the user's position and movement [7]. In the current study, we have explored the possibilities of the latter in order to minimize streaming data. As users' bodies are frequently moving around in VR, we want to learn how substantially their head movement affects their visual acuity, in such a way that they see little or no detail. The goal of the current study was to learn users' changes in (a) visual acuity performance and (b) perception of impaired image quality when their heads are stationary (static) versus when their heads are in motion (dynamic). To our knowledge, there is no other study on dynamic visual acuity targeting the improvement of multimedia applications.

From the literature we learn that visual acuity declines when horizontal movement of the head increases, due to an imperfect pursuit of eye movements resulting in continued image motion on the retina [8]–[10]. For television (TV) and VR, we expected to find a similar pattern. Consequently, as users in motion will see less detail, we anticipated that they would rate the impairment of images as less perceptible than when observing statically. The main contribution of this work is the quantitative measurement of the subjects' perception loss when their head is in a dynamic state. This is crucial information for being able to control the delivery of high-quality video at lower bitrates in the case of a dynamic viewing environment (e.g., in VR). Additionally, the results give us an indication for potentially studying cybersickness in VR, as well as useful insights for providing quality VR experiences at lower data rates.

Fig. 1. Typical Sloan visual acuity test [11] used for clustering the subjects and setting weights on opinion scores. Scores range between 0 and 2, where 2 is the best score, achieved for reading the bottom set of characters.

II. DYNAMIC VISUAL ACUITY
A. Introduction
Visual acuity is the ability of the eye to perceive details. It depends on optical factors and neural factors. Visual acuity can be described as static or dynamic. Static visual acuity, the common measure of visual acuity, is defined as the smallest detail that the human eye can distinguish in a stationary, high-contrast target (typically an eye chart with black letters on a white background). The standard Sloan chart [11] for measuring visual acuity is shown in Fig. 1.

Dynamic visual acuity is the ability of the eye to visually discern fine detail either in a moving object, or in a stationary object while the subject is in motion, or both. In other words, dynamic visual acuity is the ability to distinguish fine detail when there is relative motion between the object and the observer. While static visual acuity is the common measure, for scenarios where motion is present, such as driving licensing, dynamic visual acuity should also be considered. Similarly, dynamic visual acuity would seem to be more relevant when subjectively evaluating VR content, or any other content presented in a dynamic viewing environment.

So far, practical dynamic visual acuity test methods are very limited and serve only medical purposes [12], [13]. In the next subsection we elaborate on the current status of the relevant research and literature. First, we focus on visual acuity from a vision research point of view, and then on the research as perceived from a Quality of Experience (QoE) point of view.
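To illustrate how a chart reading maps to an acuity score, the sketch below converts the smallest correctly read line of a Sloan-style chart into decimal acuity and logMAR. The line sizes and scoring convention are illustrative assumptions, not the exact geometry of the ETDRS charts used in this study.

```python
import math

# Illustrative Sloan-style chart: visual angle (in arcminutes) of the letter
# detail on each line, from largest (top) to smallest (bottom).
# These values are hypothetical, not the exact ETDRS chart geometry.
LINE_DETAIL_ARCMIN = [10.0, 8.0, 6.3, 5.0, 4.0, 3.2, 2.5, 2.0, 1.6, 1.25, 1.0, 0.8, 0.5]

def acuity_from_last_correct_line(line_index: int) -> tuple[float, float]:
    """Return (decimal acuity, logMAR) for the last line read correctly.

    Decimal acuity is 1 / detail size in arcminutes (1.0 corresponds to
    "20/20" vision); logMAR is log10 of the detail size in arcminutes.
    """
    detail = LINE_DETAIL_ARCMIN[line_index]
    decimal_acuity = 1.0 / detail
    logmar = math.log10(detail)
    return decimal_acuity, logmar

dec, logmar = acuity_from_last_correct_line(10)  # line with 1.0 arcmin detail
print(dec, logmar)  # 1.0 0.0
```

In the experiment described below, a participant who misreads a line is assigned the score of the previous line, which corresponds to indexing this table with the last fully correct line.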
B. Related work
There have been attempts in the past to increase our knowledge on perception aspects under movement, both from a vision research point of view and from a QoE point of view.

With respect to vision research, work has been expanded towards movement starting at the basis: our eyes. The eye is limited in its visual capacity, and these limits become more stringent with increased freedom of movement. When considering the eye by itself, only the foveola and the fovea, which together cover only a few degrees of vision, provide us with high acuity of vision. When moving further from our gaze direction, our visual acuity decreases almost linearly up to the mid-peripheral vision, from where visual acuity drops even faster [14]. In addition to that, lateral masking phenomena make our peripheral visual acuity decline when more information is present [15]. When the eye starts moving and exploring the scene, one talks about saccades. Express saccades take around 100 ms and regular saccades around 150 ms, corresponding to almost 14 displayed pictures at a high screen refresh rate [16]. Because the eye is in motion, perception loss is inevitable [17].

When moving our head, visual perception starts to interact with vestibular compensation. The vestibular system in the inner ear senses body motion (mainly of the head), and uses this to control movement of the eyes. The vestibulo-ocular reflex (VOR) moves the eyes contrary to the head, enabling gaze stabilization for both linear and angular head motion. To measure visual acuity loss between a stationary head and a rotating head, different dynamic visual acuity tests have been created [18], [19]. However, in this area (vision research), the state of the art has a strict focus on visual acuity rather than error visibility of visual-coding-related quality loss. In this research domain, most tests involve unnatural structures such as letters and symbols to measure visual acuity, and discard any realistically looking scenes. Such assumptions limit the possibility to translate such research findings explicitly to the domain of visual representation and interaction.

With respect to QoE for visual representation and coding, multiple studies have tried to push the state of the art in perception in the direction of plenoptic VR content. These studies can be classified into QoE tests purely based on graphics and 3D meshes [20], and those based on camera-captured content [21]. For most works, the camera-captured VR content is transformed back to regular 2D video, which eliminates the interaction possibilities and the physical movements one could make [22]. First efforts to investigate VR quality of experience are made in ITU recommendations under development, such as ITU-T SG12, Q13, G.QoE-VR and P.360-VR, but these are mainly limited to 360° video. Irrespective of the fact that these works are limited to only a subset of the freedom to move (only rotation of the head), they are mainly focused on overall quality of experience rather than perception capabilities on the fundamental level.

In addition to the perception restrictions of moving eyes, there are the physical limitations involving movement of our body. Kinematics of the human body has been extensively investigated and modelled, but never used for the investigation of dynamic visual acuity and the actual latency to achieve a gaze direction, for all possible directions. The efforts needed for predicting that latency, which is associated with a different gaze direction, can be complemented by the research on visual saliency. Saliency research tries to model the region of interest for the observer and tries to answer the question of where the observer will be looking in certain content. The state of the art in this area seems promising, since it is already expanding from 2D video towards 360° VR experiences [23], [24].

III. METHODOLOGY
A. Experimental setup
Dynamic visual acuity is the acuity which is obtained during relative motion of either the optotypes (standardized symbols for testing vision) or the observer [9]. Contrary to the state of the art, the optotypes in this experiment remained stationary in both the dynamic and static tests. The dynamic aspect in our experiment was introduced by the subject's head movement. We chose to do this as we are interested in the visual performance of users when they move their heads, and not when the images are moving around them, because this is how viewers in VR typically interact with content. To ensure reliability, one test supervisor stood behind the subjects and turned the participants' heads at a pre-determined pace and angle, similar to a method described in [25]. To facilitate this, participants remained seated during the entire duration of the experiment (see Fig. 2). One subject who felt nauseous due to VR sickness was unable to perform the dynamic tests and was, thus, excluded from further testing. In total 22 people participated. Each visual acuity test took approximately 3 to 4 minutes, while the entire test session was kept under the maximum testing limit of 30 minutes, as recommended by [26].

This study used the consumer version of the Oculus Rift, together with the Virtual Desktop application. The Oculus Rift delivers a resolution of 1080 × 1200 pixels per eye and a frame rate of 90 fps, with low latency. The TV monitor, which was used for comparison, was a Philips 9000 series Smart-LED TV (46PFL9705k/02) with a resolution of 1920 × 1080 pixels. Its 46-inch diagonal satisfies the minimum screen-size requirement for visual quality assessment tests [27].

B. Subjective test
The recruitment and the study occurred at a knowledge and innovation center in Ghent, Belgium. All participants were recruited through voluntary participation. As compensation for their time and effort, participants were given one consumption. In total, 22 people participated in the experiments, which exceeds the minimum norm of 15 observers as recommended by ITU-T [27]. We solely recruited non-experts who were not involved in the study process, and who thus had no preconceptions about the goal of the experiment. If respondents normally wore sight corrections, these were also worn during the test.

Fig. 2. The dynamic aspect of the experiment was introduced by a supervisor standing behind the subject. The participant's head was turned horizontally at a pre-determined pace and angle [25].

Before the actual experiments took place, a pilot study was organized. It consisted of a small-scale trial run of the actual experiment in order to assess the efficacy of the experiment design, the clarity of instructions, the instruments, and the total testing time. At the start of the experiment, the subjects were informed about the procedure and filled in a consent form. All subjects participated in a visual acuity test and an impaired image quality rating (IIQR) under four conditions: on a TV and on a head-mounted display (HMD), while the user's head was either moving or static. All subjects viewed the same set of images with counterbalanced quality levels, environment (HMD or TV), and state of the user's head (static or dynamic). Before starting the VR test, each participant was given enough time to ensure the headset was tightly fitted around their head, and to adjust the lens slider on the bottom of the headset.

Participants were seated three meters from the TV screen. To obtain a similar distance-to-chart on the HMD screen, the display settings within Virtual Desktop were changed to a fish-eye view. Subjects were given instructions to read each letter from the displayed Sloan chart (see Fig. 1). When participants read a line incorrectly, their visual acuity value was the score of the previous line. In case participants read the last line correctly, they received the highest score possible. The Sloan charts used for this experiment were the 2000 Series Revised ETDRS Charts 1, 2 and 3 published by Precision Vision [11].

Each impaired image quality test consisted of two phases: the training test and the main test. The training phase introduced the subjects to the test setup and allowed them to practice the assessment tasks. The content used for the training was similar to the main test content. To keep the total testing time under 30 minutes, we only selected two images of the same resolution. The first image consisted of a woman's head and upper body sitting in front of a textured background, while the second image showed the landscape of a big city, consisting of buildings and a blue sky as a background. The introduced impairment was a result of 1) downscaling to different resolutions and 2) JPEG coding using different quantization parameters. The evaluation technique was a Double Stimulus Impairment Scale (DSIS) method, where the reference images and the impaired images were shown side by side (see Fig. 3).

Fig. 3. Example of the evaluation method (double stimulus impairment scale) used in the actual tests. The viewer sees the original picture on the left side and the impaired picture on the right side (top: test set woman; bottom: test set city).

Subjects were aware of the position of the reference image, and they were asked to answer the following question: "How do you rate the impairment of the image on the right in comparison to the image on the left?" on a six-point impairment scale [28]:

1 - imperceptible
2 - just perceptible
3 - definitely perceptible but not annoying
4 - somewhat objectionable
5 - definitely objectionable
6 - very objectionable

This is similar to the latest standardized ITU 5-grade impairment scale for picture assessment [29].
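Per-condition votes on this scale are later aggregated into a mean opinion score with its standard deviation, as reported in Table II. A minimal sketch of that aggregation, using made-up votes rather than data from this study:

```python
from statistics import mean, stdev

# Hypothetical DSIS votes (1 = imperceptible ... 6 = very objectionable)
# for one impairment level under two viewing conditions.
votes = {
    "static_tv": [4, 5, 4, 3, 4, 5, 4],
    "dynamic_tv": [3, 2, 3, 4, 3, 2, 3],
}

for condition, scores in votes.items():
    mos = mean(scores)   # mean opinion score for this condition
    sd = stdev(scores)   # sample standard deviation
    print(f"{condition}: MOS = {mos:.2f}, SD = {sd:.2f}")
```

The same per-condition averaging, applied to every impairment level and test set, produces the mean and standard deviation columns of Table II.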
Each subject viewed a total of ten images per test condition: five reference images, and five impaired images in various resolutions and compression sizes, of either test set woman or test set city. Every participant received a fixed amount of time to view each pair of images and assess the quality by voting from 1 to 6 according to the above-described impairment scale. The average score for each case is the mean opinion score (MOS) [30] used for visualizing the results of this work.

TABLE I
VISUAL ACUITY RESULTS OF THE TEST SUBJECTS – MAXIMUM POSSIBLE ACUITY SCORE: 2 – FORMAT: MEAN (STANDARD DEVIATION).
Static on TV | Dynamic on TV | Static on HMD | Dynamic on HMD

IV. RESULTS
The visual acuity test scores are presented in Table I. As expected, subjects scored significantly higher on the static acuity test than on the dynamic visual acuity test, both on TV and in VR. Table II summarizes our results. For each level of impairment we introduced, for every scenario (static/dynamic, TV/HMD), and for each test set, we calculated the mean opinion score (MOS) and its standard deviation. For evaluating relevant correlations, we calculated the effect sizes between static and dynamic scenarios for each test set. Pearson's r is often used as effect size when paired quantitative data are available, and it is calculated as follows. Given pairs of data (x_1, y_1), ..., (x_n, y_n), the Pearson's r coefficient is defined as:

r = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2 \sum_{i=1}^{n} (y_i - \bar{y})^2}},

where n is the number of paired samples, x_i, y_i are the sample points, and \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i is the mean of the samples (and similarly for \bar{y}). A larger absolute value indicates a stronger effect (larger correlation).

Dynamic visual acuity scores correlated well with the dynamic IIQR results, both on TV and on HMD, with correlations that were statistically significant at the conventional p < 0.05 level. In other words, during the dynamic IIQR, respondents with better dynamic sight rated the impaired images as more annoying than participants with lower dynamic visual acuity test scores. For the static and dynamic conditions on HMD, we see a medium effect size (r = 0.33) at the lowest impairment level (level 1) for the test set woman, whereas for the test set city there is a medium effect size (r = 0.27) at impairment level 3. When comparing the static and dynamic conditions on TV, we find that impairment levels 2–4 provide medium and large effect sizes for the woman test set.
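The coefficient above can be computed directly; a minimal sketch in pure Python, where the sample lists are hypothetical and not data from this study:

```python
import math

def pearson_r(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient for paired samples."""
    if len(xs) != len(ys) or len(xs) < 2:
        raise ValueError("need two equally sized samples of length >= 2")
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(var_x * var_y)

# Hypothetical paired scores: dynamic visual acuity vs. dynamic IIQR rating.
acuity = [1.2, 1.4, 1.6, 1.8, 2.0]
rating = [2.0, 2.5, 3.0, 3.5, 4.0]
print(round(pearson_r(acuity, rating), 3))  # perfectly linear pair -> 1.0
```

Since the two example lists are related by an exact linear map, the sketch returns r = 1.0; real subjective data would of course yield intermediate values such as those in Table II.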
Similarly, for the city test set, the range of impairment levels 2–5 yields medium and large effect sizes.

In Fig. 4 and Fig. 5, those effect sizes for the different impairment levels are presented in terms of the corresponding impairment scale. More explicitly, for the TV test scenario, stronger compression results in subjects perceiving more impairments. However, when the subjects are in a dynamic state, their vision is less sensitive and the impairments are less perceptible. In that way, we can get an indication of how much the quality could be reduced without being noticeable by the majority of the subjects. For the HMD test scenario, it can be seen that only level 4 compression and higher results in impairment that becomes definitely noticeable and objectionable. This might be caused by the inadequate resolution of the HMD, which does not allow the user to perceive the scene in high detail. On the other hand, we can draw a similar conclusion about how much the quality can be reduced (for the specific setup) without noticeable compression artifacts.

Fig. 4. Perceived impairment scale vs. level of impairment (compression/downscaling) for the test image woman. Levels 1–5 denote different impairment levels, where level 1 is high resolution and light compression while level 5 is low resolution and heavy compression.

Fig. 5. Perceived impairment scale vs. level of impairment (compression/downscaling) for the test image city. Levels 1–5 denote different impairment levels, where level 1 is high resolution and light compression while level 5 is low resolution and heavy compression.

V. DISCUSSION
From this experiment we can draw the following three conclusions, which are applicable to both the TV and VR scenarios. Firstly, users performed better at the static visual acuity test than at the dynamic test, which is in line with past studies [8], [31]. Secondly, dynamic acuity correlated with perception of impairment: people who scored higher in dynamic sight tests were better able to see the differences between the reference image and the impaired images during the dynamic impairment rating. Finally, although the results were
statistically insignificant in the impairment ratings, we found promising effect sizes (Pearson's correlation) between static and dynamic conditions for the different impairment levels. Unlike significance tests, effect size is independent of sample size. Statistical significance, on the other hand, depends upon both sample size and effect size. For this reason, p-values are considered to be confounded because of their dependence on sample size; sometimes a statistically significant result means only that a huge sample size was used [32], [33].

TABLE II
SUMMARIZED RESULTS OF THE SUBJECTIVE TEST, INCLUDING MEANS, STANDARD DEVIATIONS, AND EFFECT SIZES (PEARSON'S CORRELATION) OF THE MEAN OPINION SCORES FOR DIFFERENT LEVELS OF IMPAIRMENT AND DIFFERENT CONDITIONS.

Impairment level | Image content, file size | Condition | Mean | Std. dev. | Effect size
Level 1 (616p) | Woman, 30.7 KB | Static TV | 2.53 | 0.96 | 0.18 (a)
 | | Dynamic TV | 2.12 | 1.23 |
 | | Static HMD | 2.29 | 1.29 | 0.33 (b)
 | | Dynamic HMD | 1.47 | 0.86 |
 | City, 21.7 KB | Static TV | 1.94 | 0.61 | 0.06 (a)
 | | Dynamic TV | 1.76 | 0.78 |
 | | Static HMD | 1.82 | 0.99 | 0.10 (a)
 | | Dynamic HMD | 1.59 | 0.74 |
Level 2 (308p) | Woman, 18 KB | Static TV | 4.00 | 1.10 | 0.37 (b)
 | | Dynamic TV | 2.59 | 1.17 |
 | | Static HMD | 2.41 | 1.18 | 0.11 (a)
 | | Dynamic HMD | 2.18 | 1.25 |
 | City, 13 KB | Static TV | 3.41 | 1.10 | 0.51 (c)
 | | Dynamic TV | 2.06 | 1.03 |
 | | Static HMD | 1.88 | 0.87 | 0.10 (a)
 | | Dynamic HMD | 1.71 | 0.91 |
Level 3 (308p) | Woman, 11.7 KB | Static TV | 4.00 | 1.17 | 0.36 (b)
 | | Dynamic TV | 3.06 | 1.08 |
 | | Static HMD | 2.41 | 1.16 | 0.14 (a)
 | | Dynamic HMD | 1.76 | 1.05 |
 | City, 8.85 KB | Static TV | 3.59 | 1.03 | 0.33 (b)
 | | Dynamic TV | 2.76 | 1.07 |
 | | Static HMD | 2.24 | 1.01 | 0.27 (b)
 | | Dynamic HMD | 1.71 | 0.78 |
Level 4 (308p) | Woman, 8.7 KB | Static TV | 4.94 | 1.02 | 0.45 (c)
 | | Dynamic TV | 3.71 | 1.30 |
 | | Static HMD | 2.53 | 1.47 | 0.21 (a)
 | | Dynamic HMD | 2.00 | 1.00 |
 | City, 6.74 KB | Static TV | 3.59 | 1.07 | 0.29 (b)
 | | Dynamic TV | 2.82 | 1.29 |
 | | Static HMD | 2.59 | 1.00 | 0.05 (a)
 | | Dynamic HMD | 2.18 | 0.94 |
Level 5 (154p) | Woman, 5.19 KB | Static TV | 5.82 | 0.35 | 0.24 (a)
 | | Dynamic TV | 5.41 | 0.80 |
 | | Static HMD | 3.76 | 1.19 | -0.02 (a)
 | | Dynamic HMD | 3.76 | 1.30 |
 | City, 4.24 KB | Static TV | 5.35 | 0.74 | 0.31 (b)
 | | Dynamic TV | 4.47 | 1.31 |
 | | Static HMD | 3.59 | 1.44 | 0.06 (a)
 | | Dynamic HMD | 3.06 | 1.56 |

a: small effect size (r = 0.10), b: medium effect size (r = 0.30), c: large effect size (r = 0.50)
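The a/b/c labels in Table II follow conventional benchmarks for Pearson's r (small 0.10, medium 0.30, large 0.50). A small helper illustrating one possible classification rule: here each coefficient is assigned to the nearest benchmark, which is an assumption of this sketch, so borderline values may be labeled differently under a strict-threshold convention.

```python
def effect_size_label(r: float) -> str:
    """Classify Pearson's r against the benchmarks used in Table II
    (a: small, r = 0.10; b: medium, r = 0.30; c: large, r = 0.50).
    Values are assigned to the nearest benchmark by absolute value.
    """
    benchmarks = {"a (small)": 0.10, "b (medium)": 0.30, "c (large)": 0.50}
    return min(benchmarks, key=lambda label: abs(abs(r) - benchmarks[label]))

for r in (0.06, 0.18, 0.33, 0.51):
    print(r, "->", effect_size_label(r))
```

Using the absolute value means that a negative coefficient such as -0.02 (static vs. dynamic HMD at level 5) is classified by its magnitude.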
From all the above, we can assume that the impairment perception of the human visual system in a static and a dynamic environment is linearly correlated, something that can be exploited to create practical QoE bandwidth-management solutions.

Future work on this research consists of extending the tests to regular video, 360° video, and light field video. Additionally, a more realistic motion scheme for the user is a work in progress, where the dynamic visual acuity and content evaluation can be measured during actual movement of the subject. That movement can be directly dependent on the viewed content, which will lead the subject throughout the process. In other words, the subject will evaluate various VR content while freely roaming in space, which is a more realistic scenario than forced horizontal head movement. Finally, while a link between poor visual system performance and cybersickness was confirmed in the past [4], future extensions of this work could probe whether or not there is a connection between users' visual acuity and cybersickness.

VI. CONCLUSIONS
We investigated the correlation of (dynamic) visual acuity and impairment rating of 2D images on TV and on HMD. Results showed that visual acuity and impairment ratings are indeed correlated, while users' scores on TV and on HMD for the same content are not necessarily correlated. Although using distorted images and maintaining a good user experience might seem mutually exclusive, our experiment shows that, when in motion, users rate certain impairment levels of images as less perceptible than static users do. To the best of our knowledge, this study is the first to combine and compare static and dynamic visual acuity and impaired image quality tests both on TV and on HMD. The identified range of quality tipping points can be used as a baseline for subsequent research using a dataset consisting of various types of video content used on HMDs.

REFERENCES

[1] H. Langley, "This is what virtual reality will (probably) look like in 2021".
[2] "Oculus Connect 3 Opening Keynote".
[3] "Perceptual quality of 4K-resolution video content compared to HD", in 2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX), Lisbon, 2016, pp. 1-6.
[4] S. Davis, K. Nesbitt and E. Nalivaiko, "A systematic review of cybersickness", in Proceedings of the Conference on Interactive Entertainment, pp. 1-9, 2014.
[5] S. V. G. Cobb, S. Nichols, A. Ramsey and J. R. Wilson, "Virtual Reality-Induced Symptoms and Effects (VRISE)", Presence: Teleoperators and Virtual Environments, vol. 8, no. 2, pp. 169-186, 1999.
[6] R. S. Kennedy, N. E. Lane, K. S. Berbaum and M. G. Lilienthal, "Simulator Sickness Questionnaire: An enhanced method for quantifying simulator sickness", International Journal of Aviation Psychology, vol. 3, no. 3, pp. 203-220, 1993.
[7] A. Patney, M. Salvi, J. Kim, A. Kaplanyan, C. Wyman, N. Benty, D. Luebke and A. Lefohn, "Towards foveated rendering for gaze-tracked virtual reality", in SIGGRAPH Asia, Macao, 2016.
[8] C. T. Horng, Y. S. Hsieh, M. L. Tsai, W. K. Chang, T. H. Yang, C. H. Yauan, C. H. Wang, W. H. Kuo and Y. C. Wu, "Effects of horizontal acceleration on human visual acuity and stereopsis", International Journal of Environmental Research and Public Health, vol. 12, no. 1, pp. 910-926, 2015.
[9] J. L. Demer and F. Amjadi, "Dynamic Visual Acuity of Normal Subjects During Vertical Optotype and Head Motion", Investigative Ophthalmology & Visual Science, vol. 34, no. 6, pp. 1894-1906, 1993.
[10] J. W. Miller and E. Ludvigh, "Study of Visual Acuity during Ocular Pursuit of Moving Test Objects. I. Introduction", Journal of the Optical Society of America, vol. 48, no. 11, pp. 799-802, 1958.
[11] "Sloan Letter Revised Series ETDRS Charts (3 Meter)", Precision Vision.
[12] "Dynamic visual acuity testing for screening patients with vestibular impairments", Journal of Vestibular Research, vol. 22, no. 2, pp. 145-151, 2012.
[13] C. Li, J. L. Beaumont, R. M. Rine, J. Slotkin and M. C. Schubert, "Normative Scores for the NIH Toolbox Dynamic Visual Acuity Test from 3 to 85 Years", Frontiers in Neurology, vol. 5, p. 223, 2014.
[14] S. M. Anstis, "A chart demonstrating variations in acuity with retinal position", Vision Research, vol. 14, no. 7, pp. 589-592, 1974.
[15] P. W. Monti, "Lateral Masking of End Elements by Inner Elements in Tachistoscopic Pattern Perception", Perceptual and Motor Skills, vol. 36, no. 3, pp. 777-778, 1973.
[16] B. Fischer and E. Ramsperger, "Human express saccades: extremely short reaction times of goal directed eye movements", Experimental Brain Research, vol. 57, no. 1, pp. 191-195, 1984.
[17] A. C. Schutz, D. I. Braun and K. R. Gegenfurtner, "Object recognition during foveating eye movements", Vision Research, vol. 49, no. 18, pp. 2241-2253, 2009.
[18] B. Brown, "Dynamic visual acuity, eye movements and peripheral acuity for moving targets", Vision Research, vol. 12, no. 2, pp. 305-321, 1972.
[19] J. L. Demer, V. Honrubia and R. Baloh, "Dynamic visual acuity: a test for oscillopsia and vestibulo-ocular reflex function", American Journal of Otology, vol. 15, no. 3, pp. 340-347, 1994.
[20] A. Doumanoglou et al., "Quality of Experience for 3-D Immersive Media Streaming", IEEE Transactions on Broadcasting, vol. 64, no. 2, pp. 379-391, 2018.
[21] R. Recio, P. Carballeira, J. Gutierrez and N. Garcia, "Subjective Assessment of Super Multiview Video with Coding Artifacts", IEEE Signal Processing Letters, vol. 24, no. 6, pp. 868-871, 2017.
[22] S. Ling, J. Gutierrez, G. Ke and P. Le Callet, "Prediction of the Influence of Navigation Scan-path on Perceived Quality of Free-Viewpoint Videos", arXiv:1810.04409, 2018.
[23] J. Gutierrez, E. J. David, A. Coutrot, M. P. Da Silva and P. Le Callet, "Introducing UN Salient360! Benchmark: A platform for evaluating visual attention models for 360 contents", in Tenth International Conference on Quality of Multimedia Experience (QoMEX), pp. 1-3, 2018.
[24] V. Sitzmann et al., "Saliency in VR: How Do People Explore Virtual Environments?", IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 4, pp. 1633-1642, 2018.
[25] R. M. Rine and J. Braswell, "A clinical test of dynamic visual acuity for children", International Journal of Pediatric Otorhinolaryngology, vol. 67, no. 11, pp. 1195-1201, 2003.
[26] C. Keimel, "Design of Video Quality Metrics with Multi-Way Data Analysis", Springer Singapore, 2016.
[27] ITU-T, "Subjective video quality assessment methods for multimedia applications", 2009.
[28] ITU, "Studies toward the unification of picture assessment methodology", 1990.
[29] ITU-R BT.500-13, "Methodology for the subjective assessment of the quality of television pictures", 2012.
[30] ITU-T P.800.1, "Mean opinion score (MOS) terminology", 2016.
[31] J. W. Miller, "Study of Visual Acuity during the Ocular Pursuit of Moving Test Objects. II. Effects of Direction of Movement, Relative Movement, and Illumination", Journal of the Optical Society of America, vol. 48, no. 11, pp. 803-808, 1958.
[32] P. D. Ellis, "The Essential Guide to Effect Sizes: Statistical Power, Meta-Analysis, and the Interpretation of Research Results", Cambridge University Press, 2010.
[33] G. M. Sullivan and R. Feinn, "Using Effect Size - or Why the P Value Is Not Enough", Journal of Graduate Medical Education, vol. 4, no. 3, pp. 279-282, 2012.