Multi-color balancing for correctly adjusting the intensity of target colors
Teruaki Akazawa
Tokyo Metropolitan University
Tokyo, [email protected]

Yuma Kinoshita
Tokyo Metropolitan University
Tokyo, [email protected]

Hitoshi Kiya
Tokyo Metropolitan University
Tokyo, [email protected]
Abstract—In this paper, we propose a novel multi-color balance method for reducing color distortions caused by lighting effects. The proposed method allows us to adjust three target colors chosen by a user in an input image so that each target color becomes the same as the corresponding destination (benchmark) one. White balancing is a typical technique for reducing such color distortions; however, it cannot remove lighting effects on colors other than white. In an experiment, the proposed method is demonstrated to be able to remove lighting effects on three selected colors, and it is compared with existing white balance adjustments.
Index Terms—Color Image Processing, Color Consistency, White Balance Adjustment, Color Balance Adjustment, Color Correction
I. INTRODUCTION
In human perception, the color appearance of objects in a scene does not change even if illumination conditions change [1]. White balance adjustment is a technique for reproducing this ability as a computer vision task [2]–[4]. It is generally performed to remove the effects of illumination conditions, i.e., lighting effects. To apply white-balancing to images, we first need to estimate a white region with remaining lighting effects, called the "source white point." There are many studies on estimating source white points [2], [3], [5]–[9]. After the estimation, a color transform matrix, which maps the source white point into a desired destination white point, is designed, and the matrix is then applied to each pixel in the image. Once a source white point is correctly estimated and white-balancing is applied to an image, lighting effects on white are perfectly removed. However, even when a white balance adjustment is applied, some lighting effects remain on other colors. This is because colors other than white are not considered in designing the matrix [10]. For this reason, in some computer vision tasks such as detecting objects with specific colors, the specific colors still suffer from lighting effects [11]–[17].

Accordingly, in this paper, we propose a target-color correction method based on a novel multi-color balance adjustment. In the proposed multi-color balancing, a color transform matrix is designed from three target colors. By applying the proposed method to an input image, each target color in the image is perfectly adjusted to the corresponding desired color. In an experiment, the effectiveness of the proposed method is shown in comparison with state-of-the-art white-balance adjustments.

II. RELATED WORK
Pixel values captured by an RGB digital camera are determined by three elements: the spectrum of the illumination, the spectral reflectance of objects, and the camera spectral sensitivity [18]. For this reason, illumination changes affect the colors of captured images. Hence, white-balancing has generally been applied to images to remove such lighting effects [4].
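The image formation described above can be sketched numerically: a pixel value is an inner product of the illumination spectrum, the surface reflectance, and the sensor sensitivity sampled at discrete wavelengths. All spectra below are toy values for illustration, not measured data.

```python
import numpy as np

# Wavelengths sampled at 10 nm steps from 400-700 nm (31 samples).
wavelengths = np.linspace(400, 700, 31)

# Two hypothetical illuminants: flat "white" light and a reddish light
# with more energy at long wavelengths.
E_white = np.ones_like(wavelengths)
E_reddish = np.linspace(0.5, 1.5, 31)

# Spectral reflectance of a hypothetical greenish surface.
S = 0.2 + 0.6 * np.exp(-((wavelengths - 550) ** 2) / (2 * 40 ** 2))

# Toy camera sensitivities: Gaussian-shaped R, G, B channels.
def gaussian(mu, sigma):
    return np.exp(-((wavelengths - mu) ** 2) / (2 * sigma ** 2))

C = np.stack([gaussian(600, 40), gaussian(540, 40), gaussian(450, 40)])

def render(E):
    """Pixel value per channel: sum over wavelength of E * S * C."""
    return C @ (E * S)

print("under white light:  ", render(E_white))
print("under reddish light:", render(E_reddish))
```

Changing only the illuminant changes the rendered pixel value, which is exactly the distortion that white-balancing tries to undo.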
A. White balance adjustment
Let P_XYZ = (X_P, Y_P, Z_P)^T and P'_XYZ = (X'_P, Y'_P, Z'_P)^T be a pixel value of an input image I_XYZ in the XYZ color space [19] and that of the corresponding white-balanced image I'_XYZ, respectively. A white balance adjustment is performed by the following equation [4]:

  P'_XYZ = M_WB P_XYZ.                                          (1)

Matrix M_WB in (1) is given as

  M_WB = M_A^{-1} diag(ρ_D/ρ_S, γ_D/γ_S, β_D/β_S) M_A,          (2)

where (ρ_S, γ_S, β_S)^T and (ρ_D, γ_D, β_D)^T are calculated from a source white point (X_S, Y_S, Z_S)^T of image I_XYZ and a desired destination (benchmark) white point (X_D, Y_D, Z_D)^T as

  (ρ_S, γ_S, β_S)^T = M_A (X_S, Y_S, Z_S)^T,
  (ρ_D, γ_D, β_D)^T = M_A (X_D, Y_D, Z_D)^T.                    (3)

Note that some automatic algorithms are available for estimating the source white point (X_S, Y_S, Z_S)^T [2], [3], [5]–[9]. Matrix M_A, with a size of 3 × 3, maps XYZ values into a cone response domain. When M_A is the identity matrix, white-balancing corresponds to simple XYZ scaling; in contrast, cone response models such as Bradford's [20] and von Kries's [21] are used for high-quality white-balancing. For example, under the use of Bradford's model, M_A is given as

  M_A = [  0.8951   0.2664  -0.1614
          -0.7502   1.7135   0.0367
           0.0389  -0.0685   1.0296 ].                          (4)

[Fig. 1. Overview of proposed method.]

B. Problem with white balancing
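A minimal numerical sketch of (1)–(4) illustrates this limitation. Using the Bradford cone response matrix of (4) and hypothetical white points, the source white point is mapped exactly onto the destination white point, while a non-white color is not guaranteed to reach its own benchmark.

```python
import numpy as np

# Bradford cone response matrix, M_A in (4).
M_A = np.array([[ 0.8951,  0.2664, -0.1614],
                [-0.7502,  1.7135,  0.0367],
                [ 0.0389, -0.0685,  1.0296]])

def white_balance_matrix(src_white, dst_white):
    """Build M_WB of (2) from source/destination white points (XYZ)."""
    rho_s, gamma_s, beta_s = M_A @ src_white      # (3), source side
    rho_d, gamma_d, beta_d = M_A @ dst_white      # (3), destination side
    gain = np.diag([rho_d / rho_s, gamma_d / gamma_s, beta_d / beta_s])
    return np.linalg.inv(M_A) @ gain @ M_A        # (2)

# Hypothetical source white point under a warm light, and D65 as destination.
src_white = np.array([1.10, 1.00, 0.70])
d65_white = np.array([0.9505, 1.0000, 1.0890])

M_WB = white_balance_matrix(src_white, d65_white)

# The source white point is mapped exactly onto the destination white point...
print(M_WB @ src_white)          # ~ d65_white

# ...but for a non-white color (illustrative values), there is no guarantee
# that the result equals the color's own benchmark under D65.
pink_under_src = np.array([0.45, 0.30, 0.20])
print(M_WB @ pink_under_src)
```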
By using white-balancing, lighting effects on white can be perfectly removed if the white point of an input image is estimated correctly [4]. However, lighting effects on other colors cannot be corrected, because colors other than white are not considered in the calculation of M_WB (see (2)). Accordingly, in this paper, we propose a novel multi-color balance method that enables us to perfectly remove lighting effects on three target colors.

III. PROPOSED MULTI-COLOR BALANCING
A. Overview
As shown in Fig. 1, we assume that objects having the target colors are in an input image (I_XYZ) and that their locations are known. These locations are called the "known target-color region." In addition, we suppose that there are other regions in which objects having the same target colors as those in the known region are located. These regions are called the "unknown target-color region." By using a color transform matrix calculated from three target colors in the known target-color region, the proposed multi-color balancing can adjust target colors not only in the known target-color region but also in the unknown regions to the desired destination (benchmark) colors.
B. Proposed multi-color balancing
As with white balance, a multi-color balanced pixel P''_XYZ is given as

  P''_XYZ = M_MCB P_XYZ.                                        (5)

Let (X_S1, Y_S1, Z_S1)^T, (X_S2, Y_S2, Z_S2)^T, and (X_S3, Y_S3, Z_S3)^T be three target colors in the XYZ color space. Also, let (X_D1, Y_D1, Z_D1)^T, (X_D2, Y_D2, Z_D2)^T, and (X_D3, Y_D3, Z_D3)^T be the corresponding benchmark colors, respectively. Then, M_MCB satisfies

  D = M_MCB S,                                                  (6)

where

  S = [ X_S1 X_S2 X_S3        D = [ X_D1 X_D2 X_D3
        Y_S1 Y_S2 Y_S3              Y_D1 Y_D2 Y_D3
        Z_S1 Z_S2 Z_S3 ],           Z_D1 Z_D2 Z_D3 ].           (7)

When both S and D have full rank, M_MCB is designed as

  M_MCB = D S^{-1}.                                             (8)

When M_MCB in (8) is applied to I_XYZ, a multi-color balanced image I''_XYZ is obtained in which the target colors in the known target-color region are the same as the benchmark colors. Color differences of target colors in the unknown target-color region are reduced so that they become close to the benchmark colors.

[Fig. 2. Example of applying the proposed method to two images under two different light sources. Yellow rectangle: known target-color region, blue rectangle: unknown target-color region.]
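Equations (5)–(8) can be sketched directly: stack the three target colors and the three benchmark colors column-wise, invert, and apply the resulting 3 × 3 matrix to every pixel. The color values below are hypothetical.

```python
import numpy as np

# Columns are the three target colors in the known region (XYZ), as in (7).
S = np.array([[0.45, 0.60, 0.20],
              [0.30, 0.55, 0.25],
              [0.20, 0.15, 0.50]])

# Columns are the corresponding benchmark colors, as in (7).
D = np.array([[0.55, 0.70, 0.18],
              [0.40, 0.65, 0.22],
              [0.30, 0.20, 0.60]])

# Eq. (8): requires S (and D) to be full rank.
M_MCB = D @ np.linalg.inv(S)

def multi_color_balance(image_xyz):
    """Eq. (5): apply M_MCB to every pixel of an (H, W, 3) XYZ image."""
    return np.einsum('ij,hwj->hwi', M_MCB, image_xyz)

# By construction, each target color lands exactly on its benchmark color:
print(M_MCB @ S[:, 0])   # ~ first benchmark column of D
```

Note the design choice: unlike M_WB in (2), which constrains only one color (white), M_MCB uses all nine degrees of freedom of the 3 × 3 matrix, so all three target colors are matched exactly.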
C. Application to two images under two different light sources
To correct target colors in two images (I_XYZ,1 and I_XYZ,2) taken under two different light sources, the proposed method is applied to the images in the following steps.
(i) Decide three benchmark colors (X_D1, Y_D1, Z_D1)^T, (X_D2, Y_D2, Z_D2)^T, and (X_D3, Y_D3, Z_D3)^T in the XYZ color space, and then design D by using the benchmark colors, as in (7).
(ii) Select three target colors from the known target-color region in I_XYZ,1: (X_S1,1, Y_S1,1, Z_S1,1)^T, (X_S2,1, Y_S2,1, Z_S2,1)^T, and (X_S3,1, Y_S3,1, Z_S3,1)^T, and then define S_1 as in (7).
(iii) Calculate M_MCB,1 with S_1 and D, as in (8).
(iv) Transform every pixel value P_XYZ,1 in I_XYZ,1 to obtain the multi-color balanced image I''_XYZ,1, as in (5).
(v) Similarly to I_XYZ,1, I_XYZ,2 is transformed in accordance with steps (ii)–(iv). Note that if the same D as that of I_XYZ,1 is used, target colors in I_XYZ,2 are reduced to the same colors as in I''_XYZ,1.

[Fig. 3. Two sets of images used in this experiment. Yellow rectangle: known target-color region, blue rectangle: unknown target-color region, green rectangle: white region.]

TABLE I
REPRODUCTION ANGULAR ERROR (POST-IT NOTE)

Method   White    Known region              Unknown region
                  Pink    Yellow  Blue      Pink    Yellow  Blue
Input    0.3716   0.3223  0.1391  0.3987    0.3508  0.1422  0.3993

IV. EXPERIMENT
We conducted an experiment to confirm that the proposed method can reduce lighting effects.
A. Experimental conditions
Two sets of images, "Post-it note" and "Color checker," were prepared for the experiment (see Fig. 3). Each set consists of two images captured in a scene under different lighting conditions, where the two images have the same target colors in a known target-color region and, moreover, contain the same objects (post-it notes or a color rendition chart), as shown in Fig. 3.
The color difference between each benchmark color and the corresponding color in an adjusted image was evaluated by using the two metrics below.
(i) Reproduction angular error [22].
(ii) Hue difference ΔH of the CIEDE2000 [23], [24].
The proposed method was compared with white-balancing with XYZ scaling and white-balancing with Bradford's cone response model. For (X_D, Y_D, Z_D)^T and (X_S, Y_S, Z_S)^T in (3), we used the CIE standard illuminant D65 [25] and the average pixel value of a white region selected manually from each input image (see the green rectangle in Fig. 3), respectively. The three target colors were also selected from the known target-color region, where the average pixel value of each object was used as a target color.

TABLE II
CIEDE2000 HUE DIFFERENCE ΔH (POST-IT NOTE)

Method   White    Known region                 Unknown region
                  Pink     Yellow   Blue       Pink     Yellow   Blue
Input    9.4615   31.7843  31.0816  33.1196    30.7124  30.0463  30.3340
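The angular error of metric (i) treats a color and its benchmark as 3-D vectors and measures the angle between them; zero error means the adjusted color points in exactly the same chromaticity direction as the benchmark. A minimal sketch of this angular form (returning degrees; the color values below are illustrative):

```python
import numpy as np

def angular_error_deg(c_adjusted, c_benchmark):
    """Angle in degrees between two colors treated as 3-D vectors."""
    a = np.asarray(c_adjusted, dtype=float)
    b = np.asarray(c_benchmark, dtype=float)
    cos_t = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    # Clip to guard against floating-point values slightly outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

# Proportional vectors (same chromaticity, different intensity) -> ~0 error.
print(angular_error_deg([0.4, 0.3, 0.2], [0.8, 0.6, 0.4]))
```

Because the metric is scale-invariant, it ignores pure intensity differences and penalizes only chromaticity shifts, which is why it is a common choice for illuminant-estimation evaluation [22].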
TABLE III
REPRODUCTION ANGULAR ERROR (COLOR CHECKER)

Method   White    Blue     Green    Red
Input    0.0798   0.0149   0.0917   0.0796
B. Experimental results
Figure 4 shows the color correction results for "Post-it note." Note that we selected pink, yellow, and blue post-it notes as target colors and assigned a fixed benchmark color to each of them. In Table I, the proposed method is compared with two white-balance adjustments in terms of reproduction angular errors. From the table, the existing white-balance adjustments did not correct colors other than white. In contrast, the proposed method perfectly corrected the three target colors in the known target-color region and reduced the color differences in the unknown target-color region. From Table II, the hue difference was also confirmed to show a similar trend to Table I.
Figure 5 shows the color correction results for "Color checker," where the images do not have an unknown target-color region. We prepared two sets of target colors: {red, green, blue} and {white, red, green}, and assigned fixed benchmark colors to red, green, and blue. We also selected the CIE standard illuminant D65 as the benchmark color of white. As shown in Table III, the proposed method perfectly adjusted the target colors to their benchmark colors; in comparison, the two white balance adjustments corrected only white. Furthermore, when white is selected as one of the target colors, the proposed method has the same correction performance on white as white-balancing, as confirmed in Table IV.

V. CONCLUSION
In this paper, we proposed a multi-color balance adjustment for reducing lighting effects. In the proposed method, a color transform matrix is designed by using three target colors and the corresponding destination (benchmark) colors. By applying the matrix to an image, target colors in the "known target-color region" are perfectly mapped into their benchmark colors.
TABLE IV
CIEDE2000 HUE DIFFERENCE ΔH (COLOR CHECKER)

Method   White    Blue     Green    Red
Input    2.7132   0.9592   4.9048   3.2240
Additionally, the color differences between the benchmark colors and the target colors in the "unknown target-color region" are reduced. The experimental results showed that the proposed method accurately adjusted three target colors in the known target-color region; in comparison, the conventional white-balance adjustments corrected only white. The proposed method also reduced the color differences between target colors in unknown target-color regions and the corresponding benchmark colors.

REFERENCES
[1] The Institute of Image Electronics Engineers of Japan, Color Management Technology: Extended Color Space and Color Appearance. Tokyo Denki University Press, 2008.
[2] M. Afifi, B. Price, S. Cohen, and M. S. Brown, "When color constancy goes wrong: Correcting improperly white-balanced images," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2019, pp. 1535–1544.
[3] M. Afifi and M. S. Brown, "Deep white-balance editing," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2020.
[5] E. H. Land and J. J. McCann, "Lightness and retinex theory," Journal of the Optical Society of America, vol. 61, no. 1, pp. 1–11, Jan. 1971.
[6] G. Buchsbaum, "A spatial processor model for object colour perception," Journal of the Franklin Institute, vol. 310, no. 1, pp. 1–26, Jul. 1980.
[7] J. van de Weijer, T. Gevers, and A. Gijsenij, "Edge-based color constancy," IEEE Transactions on Image Processing, vol. 16, no. 9, pp. 2207–2214, Aug. 2007.
[8] D. Cheng, D. K. Prasad, and M. S. Brown, "Illuminant estimation for color constancy: why spatial-domain methods work and the role of the color distribution," Journal of the Optical Society of America A, vol. 31, no. 5, pp. 1049–1058, May 2014.
[9] M. Afifi, A. Punnappurath, G. D. Finlayson, and M. S. Brown, "As-projective-as-possible bias correction for illumination estimation algorithms," Journal of the Optical Society of America A, vol. 36, no. 1, pp. 71–78, Jan. 2019.
[10] D. Cheng, B. Price, S. Cohen, and M. S. Brown, "Beyond white: Ground truth colors for color constancy correction," in Proceedings of the IEEE International Conference on Computer Vision (ICCV), Dec. 2015, pp. 298–306.
[11] N. Akimoto, H. Zhu, Y. Jin, and Y. Aoki, "Fast soft color segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2020, pp. 8274–8283.
[12] K. Seo, C. Go, Y. Kinoshita, and H. Kiya, "Hue-correction scheme considering non-linear camera response for multi-exposure image fusion," IEICE Transactions, vol. E103-A, no. 12, pp. 1562–1570, Dec. 2020.
[13] Y. Kinoshita and H. Kiya, "Hue-correction scheme considering CIEDE2000 for color-image enhancement including deep-learning-based algorithms," APSIPA Transactions on Signal and Information Processing, vol. 9, no. e19, pp. 1–10, Sep. 2020.
[14] Y. Kinoshita and H. Kiya, "Hue-correction scheme based on constant-hue plane for deep-learning-based color-image enhancement," IEEE Access, vol. 8, pp. 9540–9550, Jan. 2020.
[15] Y. Kinoshita and H. Kiya, "Scene segmentation-based luminance adjustment for multi-exposure image fusion," IEEE Transactions on Image Processing, vol. 28, no. 8, pp. 4101–4116, Aug. 2019.
[16] M. Afifi and M. S. Brown, "What else can fool deep learning? Addressing color constancy errors on deep neural network performance," in Proceedings of the IEEE International Conference on Computer Vision (ICCV), Oct. 2019, pp. 243–252.
[17] Y. Kinoshita, S. Shiota, and H. Kiya, "Automatic exposure compensation for multi-exposure image fusion," in Proceedings of the IEEE International Conference on Image Processing (ICIP), 2018, pp. 883–887.
[18] K. Hirai, "Image processing and color space," The Journal of the Institute of Image Information and Television Engineers, vol. 71, no. 5, pp. 306–314, May 2017 (in Japanese).
[19] CIE, Commission Internationale de l'Eclairage Proceedings. Cambridge University Press, 1932.
[20] K. M. Lam, "Metamerism and colour constancy," Ph.D. dissertation, University of Bradford, 1985.
[21] J. von Kries, "Beitrag zur Physiologie der Gesichtsempfindung," Archives of Anatomy and Physiology, pp. 503–524, 1878.
[22] G. D. Finlayson and R. Zakizadeh, "Reproduction angular error: An improved performance metric for illuminant estimation," in Proceedings of the British Machine Vision Conference (BMVC), Sep. 2014.
[23] ISO/CIE, "ISO/CIE 11664-6:2014 Colorimetry - Part 6: CIEDE2000 colour-difference formula," 2014.
[24] G. Sharma, W. Wu, and E. N. Dalal, "The CIEDE2000 color-difference formula: Implementation notes, supplementary test data, and mathematical observations," Color Research & Application, vol. 30, no. 1, pp. 21–30, Dec. 2004.
[25] ISO/CIE, "ISO 11664-2:2007 Colorimetry - Part 2: CIE standard illuminants," 2007.

[Fig. 4. Experimental results for "Post-it note." Zoom-ins of the boxed regions are shown at the bottom of each image.]
[Fig. 5. Experimental results for "Color checker": Input, White Balance (XYZ Scaling), White Balance (Bradford's Model), Proposed (Select Red, Green, Blue), and Proposed (Select White, Red, Green).]