No-Reference Light Field Image Quality Assessment Based on Micro-Lens Image
Ziyuan Luo*, Wei Zhou*, Likun Shi, and Zhibo Chen
CAS Key Laboratory of Technology in Geo-spatial Information Processing and Application System,
University of Science and Technology of China, Hefei 230027, China
[email protected]
Abstract—Light field image quality assessment (LF-IQA) plays a significant role due to its guidance to Light Field (LF) content acquisition, processing and application. The LF can be represented as a 4-D signal, and its quality depends on both angular consistency and spatial quality. However, few existing LF-IQA methods concentrate on effects caused by angular inconsistency. In particular, no-reference methods lack effective utilization of 2-D angular information. In this paper, we focus on measuring 2-D angular consistency for LF-IQA. The Micro-Lens Image (MLI) refers to the angular domain of the LF image and can simultaneously record the angular information in both horizontal and vertical directions. Since the MLI contains 2-D angular information, we propose a No-Reference Light Field image Quality assessment model based on the MLI (LF-QMLI). Specifically, we first utilize the Global Entropy Distribution (GED) and the Uniform Local Binary Pattern descriptor (ULBP) to extract features from the MLI, and then pool them together to measure angular consistency. In addition, the information entropy of the Sub-Aperture Image (SAI) is adopted to measure spatial quality. Extensive experimental results show that LF-QMLI achieves state-of-the-art performance.
Index Terms—Light field, image quality assessment, objective model, micro-lens image, angular consistency

I. INTRODUCTION
As a representative of attractive techniques for immersive multimedia data, Light Field (LF) images have attracted widespread attention [1]. Unlike traditional 2D images, LF images record radiance information in both spatial and angular dimensions [2], leading to a better immersive experience. In order to provide satisfactory viewing quality of experience (QoE), light field image quality assessment (LF-IQA) plays a crucial role in LF content acquisition, processing and application.

The LF image is a 4-D signal containing spatial and angular information. In Fig. 1, we show the LF image in different formats. Fig. 1(a) shows an LF image captured by a lenslet LF camera called Lytro Illum [3]. The parameters u and v refer to angular dimensions while s and t represent spatial dimensions. We can obtain the Sub-Aperture Image (SAI) by fixing u and v [4], while the Micro-Lens Image (MLI) is obtained by fixing s and t [5], as shown in Fig. 1(b)-(c). The bottom and right of Fig. 1(b) are Epipolar-Plane Images (EPIs) [1], which are produced by fixing (u, s) and (v, t). The SAI only contains spatial information of the LF image, while the EPI includes both spatial and angular dimensions. However, the EPI only contains the horizontal or vertical angular direction. Unlike the SAI and EPI, the MLI includes 2-D angular information.

* Equal contribution

Fig. 1. LF image in different formats. (a) Lenslet image captured by Lytro Illum; (b) SAI array with EPIs at the bottom and right; (c) 9 MLIs in the red bounding box of (a) at high magnification.

On account of the 4-D characteristic, the perceptual quality of an LF image mainly depends on spatio-angular resolution, angular consistency and spatial quality [1]. Concretely, spatio-angular resolution refers to the LF image resolution (i.e., the values of u, v, s and t). Angular consistency measures the visual coherence of LF images while spatial quality indicates the SAI quality.
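The SAI, MLI and EPI described above can be read directly as slices of a 4-D array. A minimal sketch (the array shapes are hypothetical, chosen only for illustration):

```python
import numpy as np

# A 4-D light field L[u, v, s, t]: (u, v) index the angular
# dimensions and (s, t) the spatial dimensions. The shapes below
# (9x9 angular views, 64x64 spatial samples) are illustrative only.
U, V, S, T = 9, 9, 64, 64
lf = np.random.rand(U, V, S, T)

# Sub-Aperture Image (SAI): fix the angular coordinates (u, v),
# keeping all spatial samples -> one S x T view of the scene.
sai = lf[4, 4, :, :]          # central view
assert sai.shape == (S, T)

# Micro-Lens Image (MLI): fix the spatial coordinates (s, t),
# keeping all angular samples -> one U x V patch that records the
# same scene point seen from every direction.
mli = lf[:, :, 10, 20]
assert mli.shape == (U, V)

# Epipolar-Plane Image (EPI): fix one angular and one spatial
# coordinate, e.g. (v, t) -> a U x S slice mixing the two domains.
epi = lf[:, 4, :, 32]
assert epi.shape == (U, S)
```

The EPI slice mixes one angular with one spatial axis, which is why it captures only horizontal or vertical parallax, whereas the MLI slice keeps both angular axes.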
Since spatio-angular resolution is an inherent factor of the LF image, we consider the effects of angular consistency and spatial quality in this paper.

Although the subjective evaluation of LF-IQA [6-8] is precise and reliable, it is resource- and time-consuming. Therefore, an effective objective LF-IQA model is urgently required. In general, image quality assessment (IQA) methods can be classified into three categories: full-reference (FR), reduced-reference (RR) and no-reference (NR) [9]. FR methods need the intact information of original images. Structural similarity between original and distorted images is measured by the structural similarity index (SSIM) [10], with several variants, e.g. MS-SSIM [11] and FSIM [12]. MP-PSNR Full [13] and MP-PSNR Reduc [14], based on morphological pyramid decomposition, are proposed to evaluate multi-view image quality. RR methods only require part of the information from original images. NR methods only utilize distorted images, and can thus be applied to most applications where reference images are hardly available; e.g. Mittal et al. [15] use scene statistics in the spatial domain, and binocular fusion and rivalry are considered in BSVQE [16].

There exist only a few objective LF-IQA models. Fang et al. [17] propose a FR LF-IQA method to compute the

Fig. 2. Flow diagram of the proposed LF-QMLI model. (a) LF image (lenslet format); (b) MLI array; (c) SAI array.

gradient magnitude similarity between original and distorted EPIs. Paudyal et al. [18] predict the LF image quality with the structural similarity between the original and distorted depth maps. However, neither of these methods considers the spatial quality degradation on the SAI. In addition, the EPIs only contain the horizontal or vertical angular dimension, leading to insufficient measurement of angular consistency for LF image applications.
Therefore, an LF-IQA method that considers spatial quality and 2-D angular consistency is necessary for practical application.

In this paper, we propose a novel NR Light Field image Quality assessment model based on the Micro-Lens Image (LF-QMLI) to evaluate both angular consistency and spatial quality for LF images. As shown in Fig. 1(c), each pixel in the MLI comes from the same point in the spatial domain, but is captured from various directions. Hence, there exists quite strong dependence between MLI pixels for 2-D angular consistency. To the best of our knowledge, we are the first to utilize the MLI to evaluate the angular consistency of LF images.

In this work, we first obtain the MLI by fixing s and t, while the SAI is generated by fixing u and v. Second, the Global Entropy Distribution (GED) and Uniform Local Binary Pattern descriptor (ULBP) are proposed to measure the angular consistency on each MLI, followed by content pooling. Third, the information entropy of the SAI is utilized to evaluate spatial quality. Finally, we train a regression model that predicts the perceptual quality of distorted LF images. Our experimental results demonstrate that the proposed LF-QMLI model achieves state-of-the-art performance.

The rest of the paper is organized as follows: Section II introduces our LF-QMLI method. Experimental results are shown in Section III and we conclude the paper in Section IV.

II. PROPOSED METHOD
The flow diagram of our proposed LF-QMLI model is shown in Fig. 2. We first convert LF images from the lenslet format into MLI and SAI arrays. Since entropy can measure the angular dependence between adjacent pixels [19], the GED is adopted on the MLI to measure angular consistency. Given the textural variation shown in Fig. 3, the ULBP is selected to measure the textural features of original and distorted MLIs. In addition, the information entropy of the SAI is utilized to measure spatial quality [20]. After content pooling, a regression model is used to predict the perceptual quality of LF images.

Fig. 3. MLIs with various types of distortion at different quality levels. Note that distortions at a higher level represent worse visual quality. (a) Original image; (b) LN lv.2 distorted image; (c) LN lv.5 distorted image; (d) NN lv.2 distorted image; (e) NN lv.5 distorted image.

A. Angular Consistency Based on MLI
The distortion caused by angular inconsistency affects LF image quality. An LF camera captures the same object in the spatial domain from various angles of view, and the MLI is composed of light rays from both horizontal and vertical directions, leading to a 2-D angular domain.

1) Global Entropy Distribution (GED): In previous works, information entropy has proved to be an efficient measure of spatial image quality [21]. However, without considering the angular dimension, it cannot work well for LF images.

The entropy of undistorted images possesses certain statistical properties, owing to the dependence between adjacent pixels [19]. As shown in Fig. 3(a), since the variation between pixels is piecewise linear [22], the MLI without angular distortion is regular and gradually varied. With increased distortion, the dependence between adjacent pixels is destroyed, leading to a change of the global entropy. In Fig. 3(b)-(e), the distortion caused by angular inconsistency primarily affects the MLI entropy.

Our proposed GED includes the global image entropy distribution and the global frequency entropy distribution of the MLI. The image entropy is

E_I = − Σ_x P_x log P_x,  (1)

where x is the pixel value within an MLI, ranging from 0 to 255, with empirical probability density P_x. The frequency entropy is

E_F = − Σ_i Σ_j P_{i,j} log P_{i,j},  (2)

where P_{i,j} is the value of the probability map at location (i, j) in the DCT domain of the MLI. Finally, the MLI global entropy E_MLI includes the image entropy and the frequency entropy:

E_MLI = {E_I, E_F}.  (3)

We conducted validation experiments on various levels of distortion caused by angular inconsistency. We considered two kinds of angular distortion {linear interpolation (LN), nearest neighbor interpolation (NN)} and two levels of distortion {Level 2, Level 5} [8]. The higher level represents higher distortion.

TABLE I
E_MLI ON VARIOUS ANGULAR DISTORTIONS

      Ori. Img   LN lv.2   LN lv.5   NN lv.2   NN lv.5
IE
FE

The image entropy (IE) and frequency entropy (FE) of the 5 MLIs in Fig. 3 were computed to demonstrate the validity of E_MLI. Table I shows that angular distortion affects E_MLI in a conspicuous and predictable way. The distortion causes a loss of image details, leading to a reduction of the image entropy. Generally, angular distortion of a higher level yields a smaller IE and a greater FE. At the same distortion level, NN destroys angular consistency more severely than LN, which is verified in subjective experiments [8]. However, there exist a large number of MLIs in an LF image, so we utilize the GED considering all MLIs in the LF image. We present our content pooling method in Section II-A3.

2) Uniform Local Binary Pattern (ULBP): Although the image entropy and frequency entropy can measure the distortion caused by angular inconsistency, E_MLI is a global characteristic of the whole MLI. As shown in Fig. 3, increased angular distortion changes the local texture of the MLI. Thus, we utilize the ULBP to measure local textural features in the MLI. The local binary pattern (LBP) has been proved to be an efficient operator to extract local distribution information [23, 24]. LBP is a very simple but efficient texture descriptor, with rotation invariance, position invariance and robustness under various illuminations. Since LBP can efficiently represent local distributions of joint pixels, we adopt a modified ULBP descriptor to describe the inconsistency of local adjacent pixels. The LBP operator can be given as [25]
LBP_{P,R} = Σ_{p=0}^{P−1} s(g_p − g_c) 2^p.  (4)

Here we set a neighborhood of P = 4 members on a circle of radius R = 1. g_c is the gray value of the central pixel, g_p is the gray value of a neighboring pixel, and s represents the sign function. To reduce the number of pattern types, we utilize the modified uniform LBP operator on each MLI_i. There exist P + 2 types of uniform pattern classes without regard to the upright property. We then combine the probability of every type of binary pattern (PT) into the vector PT_i:

PT_i = {PT_1, PT_2, ..., PT_{P+2}}.  (5)

3) Content Pooling: In order to evaluate LF image quality, we pool the characteristics E_MLI and PT_i of each MLI together into LF image angular features.

Fig. 4. Histograms of GED for different types and levels of angular distortion. (a) IE histogram; (b) FE histogram.
The pooling method used on E_MLI is percentile pooling [26]: the central elements of E_MLI are retained while extremely large or small elements are discarded. Our experimental results verify that percentile pooling improves our proposed model. We keep 60% of the central elements of E_MLI here and show the global distribution histograms of IE and FE in Fig. 4. In accordance with our analysis of Table I, with increasing angular distortion the IE histogram shifts left, while the FE histogram shifts right. The higher the distortion level, the larger the shift becomes and the steeper the curve appears. The mean and skewness values of the IE and FE distribution histograms are selected as the GED features:

f_GED = {mean(IE), skew(IE), mean(FE), skew(FE)}.  (6)

The angular inconsistency of distorted LF images appears clearly at edges between the foreground and background, while it is inconspicuous in the background or mild areas. Such mild MLIs contain less information, and their ULBP features might be misleading. Therefore, in order to exclude the MLIs in mild areas, we introduce a selector on PT_i. The ULBP features of a whole LF image are extracted as:

f_ULBP = avg{PT_i}  ∀ i : R(i) > threshold,  (7)

where R(i) = Max(MLI_i) − Min(MLI_i) is the range of a single MLI_i. The threshold is set to a gray value of 20 here.

B. Spatial Quality
Spatial quality plays an important part in LF image perceptual quality. Specifically, we utilize the information entropy distribution of the SAI to measure changes in spatial quality. In an undistorted SAI, there exists spatial dependence between adjacent pixels [19]. With increased spatial distortion such as compression, this dependence is destroyed, resulting in changes of the information entropy. We divide the SAI into blocks, then compute the SAI Image Entropy (SIE) and SAI Frequency Entropy (SFE) of each block. Finally, we pool all the blocks together as in Section II-A3 and obtain the spatial quality features:

f_SQ = {mean(SIE), skew(SIE), mean(SFE), skew(SFE)}.  (8)

III. EXPERIMENTAL RESULTS

A. Light Field Image Databases
To test the performance of our proposed LF-QMLI model, comparison experiments were conducted on the Win5-LID [8] and VALID [27] databases. The Win5-LID database contains 220 distorted LF images with various distortion types and levels. The distortion types consist of JPEG2000, HEVC, linear interpolation (LN), nearest neighbor interpolation (NN) and two CNN models. An overall Mean Opinion Score (MOS) value is provided for each LF image. The VALID database includes 5 reference LF images captured by a Lytro Illum together with a number of distorted LF images produced by several types of compression methods. In this experiment, we utilize the distorted LF images obtained through the interactive methodology, comprising 40 LF images with two types of distortion: HEVC and VP9.
TABLE II
PERFORMANCE COMPARISON

                          Win5-LID                   VALID
Metrics             SROCC   LCC     RMSE     SROCC   LCC     RMSE
PSNR
SSIM [10]
MS-SSIM [11]
FSIM [12]
IW-SSIM [28]
IFC [29]
NQM [31]
VSNR [32]
BRISQUE [15]
NIQE [33]
FRIQUEE [34]
Chen [35]
SINQ [36]
BSVQE [16]
MP-PSNR Full [13]
MP-PSNR Reduc [14]
MW-PSNR Full [37]
MW-PSNR Reduc [37]
APT [39]
LF-IQM [18]
LF-QMLI             0.8802  0.9038  0.4147

B. Comparison with Previous Objective Metrics
We conducted comparison experiments between our proposed model and several FR, RR and NR metrics, including nine 2D-FR metrics [10-12, 28-32], three 2D-NR metrics [15, 33, 34], one 3D-FR metric [35], two 3D-NR metrics [16, 36], five multi-view FR metrics [13, 14, 37, 38], one multi-view NR metric [39] and one LF-RR metric [18]. Three evaluation criteria are selected to measure the correlation between MOS and predicted results: the Spearman Rank Order Correlation Coefficient (SROCC), the Linear Correlation Coefficient (LCC) and the Root Mean Squared Error (RMSE). The SROCC measures monotonicity while the LCC evaluates the linear relationship between the predicted score and MOS. The RMSE computes the deviation of the prediction. Better consistency with human perception is reflected in SROCC and LCC closer to 1 as well as RMSE closer to 0.

Then, we use SVR for regression [40]. The LIBSVM package [41] is utilized to implement the SVR with a radial basis function (RBF) kernel. We randomly select 80% of the database as the training set while the remaining 20% constitute the test set. The median of the correlation coefficients across 1000 random trials is reported as the final result.

The results of all metrics are shown in Table II. We find that almost all 2D-FR metrics perform well on the VALID database and LF-QMLI is competitive among NR metrics. This situation may be caused by its limited types of distortion: VALID only introduces two compression distortions, which degrade the spatial quality of LF images but do not affect angular consistency, so previous 2D-FR metrics can measure the distortion almost perfectly. The results indicate that VALID is not challenging for the quality assessment of LF images. Therefore, we mainly analyze how our proposed LF-QMLI model performs on the Win5-LID database.

On the Win5-LID database, LF-QMLI outperforms all previous metrics.
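The evaluation protocol described above can be sketched as follows. This is a hedged illustration: scikit-learn's SVR is used as a stand-in for LIBSVM, the features X and MOS values y are synthetic placeholders, and fewer trials are run than the paper's 1000.

```python
import numpy as np
from scipy.stats import spearmanr, pearsonr
from sklearn.svm import SVR

def evaluate(pred, mos):
    """SROCC / LCC / RMSE between predicted scores and MOS."""
    srocc = spearmanr(pred, mos).correlation
    lcc = pearsonr(pred, mos)[0]
    rmse = float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(mos)) ** 2)))
    return srocc, lcc, rmse

# Protocol sketch: RBF-kernel SVR, random 80/20 split per trial,
# median correlation over trials reported as the final result.
rng = np.random.default_rng(0)
X = rng.random((220, 10))                       # placeholder features
y = X @ rng.random(10) + 0.1 * rng.random(220)  # placeholder MOS
scores = []
for trial in range(50):                         # paper uses 1000 trials
    idx = rng.permutation(len(y))
    tr, te = idx[:int(0.8 * len(y))], idx[int(0.8 * len(y)):]
    model = SVR(kernel='rbf').fit(X[tr], y[tr])
    scores.append(evaluate(model.predict(X[te]), y[te]))
median_srocc = float(np.median([s[0] for s in scores]))
```

SROCC close to 1 on the held-out split indicates the predicted scores preserve the quality ranking; RMSE close to 0 indicates small absolute prediction error.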
In general, the existing 2D and 3D metrics only consider the degradation of spatial quality, ignoring the degradation of angular consistency. Although multi-view metrics can measure angular distortion, they do not take compression distortion and similar spatial distortions into account. Therefore, we can conclude that the proposed LF-QMLI model can evaluate both angular consistency and spatial quality.

C. Ablation Study
In order to verify the validity of our proposed MLI-based model, we conducted an ablation study on the Win5-LID database; the results are shown in Table III. The features extracted from the MLI, f_MLI, obviously improve the model performance. One possible reason is that f_MLI provides a measurement of the 2-D angular consistency.
TABLE III
ABLATION STUDY

                SROCC   LCC     RMSE
Model − f_MLI
Model
IV. CONCLUSION
In this paper, we proposed a No-Reference Light Field image Quality assessment model based on the Micro-Lens Image (LF-QMLI). We theoretically analyzed the significance of the MLI in LF-IQA and extracted features for our evaluator. The model can effectively measure angular consistency and spatial quality. The results show that LF-QMLI achieves state-of-the-art performance. In the future, we will consider more advanced features on the MLI to improve our model.
ACKNOWLEDGMENT
This work was supported in part by NSFC under Grants 61571413 and 61632001.

REFERENCES

[1] G. Wu, B. Masia, A. Jarabo, Y. Zhang, L. Wang, Q. Dai, T. Chai, and Y. Liu, "Light field image processing: An overview," IEEE Journal of Selected Topics in Signal Processing, vol. 11, no. 7, pp. 926–954, 2017.
[2] M. Levoy, Z. Zhang, and I. McDowall, "Recording and controlling the 4D light field in a microscope using microlens arrays," Journal of Microscopy, vol. 235, no. 2, pp. 144–162, 2009.
[3] R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, P. Hanrahan et al., "Light field photography with a hand-held plenoptic camera," Computer Science Technical Report CSTR, vol. 2, no. 11, pp. 1–11, 2005.
[4] V. Van Duong, T. N. Canh, and B. Jeon, "Light field image coding for efficient refocusing," IEEE, 2018, pp. 74–78.
[5] D. Cho, M. Lee, S. Kim, and Y.-W. Tai, "Modeling the calibration pipeline of the Lytro camera for high quality light-field image reconstruction," in Proceedings of the IEEE International Conference on Computer Vision, 2013, pp. 3280–3287.
[6] I. Viola, M. Řeřábek, T. Bruylants, P. Schelkens, F. Pereira, and T. Ebrahimi, "Objective and subjective evaluation of light field image compression algorithms," IEEE, 2016, pp. 1–5.
[7] V. Kiran Adhikarla, M. Vinkler, D. Sumin, R. K. Mantiuk, K. Myszkowski, H.-P. Seidel, and P. Didyk, "Towards a quality metric for dense light fields," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 58–67.
[8] L. Shi, S. Zhao, W. Zhou, and Z. Chen, "Perceptual evaluation of light field image," May 2018.
[9] W. Zhou and L. Yu, "Binocular responses for no-reference 3D image quality assessment," IEEE Transactions on Multimedia, vol. 18, no. 6, pp. 1077–1084, 2016.
[10] Z. Wang, A. C. Bovik, H. R. Sheikh, E. P. Simoncelli et al., "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.
[11] Z. Wang, E. P. Simoncelli, and A. C. Bovik, "Multiscale structural similarity for image quality assessment," in The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, vol. 2. IEEE, 2003, pp. 1398–1402.
[12] L. Zhang, L. Zhang, X. Mou, and D. Zhang, "FSIM: A feature similarity index for image quality assessment," IEEE Transactions on Image Processing, vol. 20, no. 8, pp. 2378–2386, 2011.
[13] D. Sandić-Stanković, D. Kukolj, and P. Le Callet, "DIBR synthesized image quality assessment based on morphological wavelets," IEEE, 2015, pp. 1–6.
[14] D. Sandić-Stanković, D. Kukolj, and P. Le Callet, "Multi-scale synthesized view assessment based on morphological pyramids," Journal of Electrical Engineering, vol. 67, no. 1, pp. 3–11, 2016.
[15] A. Mittal, A. K. Moorthy, and A. C. Bovik, "No-reference image quality assessment in the spatial domain," IEEE Transactions on Image Processing, vol. 21, no. 12, pp. 4695–4708, 2012.
[16] Z. Chen, W. Zhou, and W. Li, "Blind stereoscopic video quality assessment: From depth perception to overall experience," IEEE Transactions on Image Processing, vol. 27, no. 2, pp. 721–734, 2017.
[17] Y. Fang, K. Wei, J. Hou, W. Wen, and N. Imamoglu, "Light field image quality assessment by local and global features of epipolar plane image," IEEE, 2018, pp. 1–6.
[18] P. Paudyal, F. Battisti, and M. Carli, "Reduced reference quality assessment of light field images," IEEE Transactions on Broadcasting, vol. 65, no. 1, pp. 152–165, 2019.
[19] L. Liu, B. Liu, H. Huang, and A. C. Bovik, "No-reference image quality assessment based on spatial and spectral entropies," Signal Processing: Image Communication, vol. 29, no. 8, pp. 856–863, 2014.
[20] Q. Hu, Z. X. Xie, Z. F. Wang, and Y. H. Liu, "Constructing NR-IQA function based on product of information entropy and contrast," vol. 2. IEEE, 2008, pp. 548–550.
[21] J. Sporring, "The entropy of scale-space," in Proceedings of the 13th International Conference on Pattern Recognition, vol. 1. IEEE, 1996, pp. 900–904.
[22] S. Heber, R. Ranftl, and T. Pock, "Variational shape from light field," in International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition. Springer, 2013, pp. 66–79.
[23] T. Ojala, M. Pietikäinen, and D. Harwood, "Performance evaluation of texture measures with classification based on Kullback discrimination of distributions," in Proceedings of the 12th International Conference on Pattern Recognition, vol. 1. IEEE, 1994, pp. 582–585.
[24] T. Ojala, M. Pietikäinen, and D. Harwood, "A comparative study of texture measures with classification based on featured distributions," Pattern Recognition, vol. 29, no. 1, pp. 51–59, 1996.
[25] T. Ojala, M. Pietikäinen, and T. Mäenpää, "Multiresolution gray-scale and rotation invariant texture classification with local binary patterns," IEEE Transactions on Pattern Analysis & Machine Intelligence, no. 7, pp. 971–987, 2002.
[26] A. K. Moorthy and A. C. Bovik, "Visual importance pooling for image quality assessment," IEEE Journal of Selected Topics in Signal Processing, vol. 3, no. 2, pp. 193–201, 2009.
[27] I. Viola and T. Ebrahimi, "VALID: Visual quality assessment for light field images dataset," IEEE, 2018, pp. 1–3.
[28] Z. Wang and Q. Li, "Information content weighting for perceptual image quality assessment," IEEE Transactions on Image Processing, vol. 20, no. 5, pp. 1185–1198, 2010.
[29] H. R. Sheikh, A. C. Bovik, and G. De Veciana, "An information fidelity criterion for image quality assessment using natural scene statistics," IEEE Transactions on Image Processing, vol. 14, no. 12, pp. 2117–2128, 2005.
[30] H. R. Sheikh and A. C. Bovik, "Image information and visual quality," vol. 3. IEEE, 2004, pp. iii–709.
[31] N. Damera-Venkata, T. D. Kite, W. S. Geisler, B. L. Evans, and A. C. Bovik, "Image quality assessment based on a degradation model," IEEE Transactions on Image Processing, vol. 9, no. 4, pp. 636–650, 2000.
[32] D. M. Chandler and S. S. Hemami, "VSNR: A wavelet-based visual signal-to-noise ratio for natural images," IEEE Transactions on Image Processing, vol. 16, no. 9, pp. 2284–2298, 2007.
[33] L. Zhang, L. Zhang, and A. C. Bovik, "A feature-enriched completely blind image quality evaluator," IEEE Transactions on Image Processing, vol. 24, no. 8, pp. 2579–2591, 2015.
[34] D. Ghadiyaram and A. C. Bovik, "Perceptual quality prediction on authentically distorted images using a bag of features approach," Journal of Vision, vol. 17, no. 1, pp. 32–32, 2017.
[35] M.-J. Chen, C.-C. Su, D.-K. Kwon, L. K. Cormack, and A. C. Bovik, "Full-reference quality assessment of stereopairs accounting for rivalry," Signal Processing: Image Communication, vol. 28, no. 9, pp. 1143–1155, 2013.
[36] L. Liu, B. Liu, C.-C. Su, H. Huang, and A. C. Bovik, "Binocular spatial activity and reverse saliency driven no-reference stereopair quality assessment," Signal Processing: Image Communication, vol. 58, pp. 287–299, 2017.
[37] D. Sandić-Stanković, D. Kukolj, and P. Le Callet, "DIBR synthesized image quality assessment based on morphological wavelets," IEEE, 2015, pp. 1–6.
[38] F. Battisti, E. Bosc, M. Carli, P. Le Callet, and S. Perugia, "Objective image quality assessment of 3D synthesized views," Signal Processing: Image Communication, vol. 30, pp. 78–88, 2015.
[39] K. Gu, V. Jakhetiya, J.-F. Qiao, X. Li, W. Lin, and D. Thalmann, "Model-based referenceless quality metric of 3D synthesized images using local image description," IEEE Transactions on Image Processing, vol. 27, no. 1, pp. 394–405, 2017.
[40] A. J. Smola and B. Schölkopf, "A tutorial on support vector regression," Statistics and Computing, vol. 14, no. 3, pp. 199–222, 2004.
[41] C.-C. Chang and C.-J. Lin, "LIBSVM: A library for support vector machines,"