Color Contrast Enhanced Rendering for Optical See-through Head-mounted Displays
Yunjin Zhang, Rui Wang, Yifan (Evan) Peng, Wei Hua, and Hujun Bao
Fig. 1: Representative results of color contrast enhanced rendering for optical see-through head-mounted displays (OST-HMDs). Left: The original blending scene perceived from a commercially available OST-HMD in front of a typical background. Middle: The original background and virtual objects. Right: Our method improves the visual distinctions between rendered images and the physical environment by performing a constrained color optimization regarding the perception in chromaticity and luminance of the displayed color.
Abstract—Most commercially available optical see-through head-mounted displays (OST-HMDs) utilize optical combiners to simultaneously deliver the physical background and virtual objects to users. Given that the displayed images perceived by users are a blend of rendered pixels and background colors, high-fidelity color perception in mixed reality (MR) scenarios using OST-HMDs is a challenge. We propose a real-time rendering scheme to specifically enhance the color contrast between virtual objects and the surrounding background for OST-HMDs. Inspired by the discovery of color perception in psychophysics, we first formulate the color contrast enhancement as a constrained optimization problem. We then design an end-to-end algorithm to search for the optimal complementary shift in both chromaticity and luminance of the displayed color. This aims at enhancing the contrast between virtual objects and the real background as well as keeping consistency with the original color. We assess the performance of our approach in a simulated environment and with a commercially available OST-HMD. Experimental results from objective evaluations and subjective user studies demonstrate that the proposed method makes rendered virtual objects more distinguishable from the surrounding background, thus bringing a better visual experience.
Index Terms—simultaneous color induction, color perception, human visual system, mixed reality, post-processing effect, real-time rendering.
1 INTRODUCTION

In recent years, innovations in optical see-through head-mounted displays (OST-HMDs) have led to the rapid development of mixed reality (MR) technologies. In contrast to virtual reality (VR) HMDs or video see-through HMDs, OST-HMDs mainly enable users to perceive the real, non-rendered environment through the optics, as well as the virtual content through displays.

• Yunjin Zhang, Rui Wang, Wei Hua, and Hujun Bao are with the State Key Laboratory of CAD&CG, Zhejiang University, China. Yifan (Evan) Peng is with Electrical Engineering, Stanford University, USA. Corresponding author: Rui Wang, [email protected]

This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
This design scheme dramatically relieves the discomfort of wearing common video-based see-through HMDs. However, the optical combiner blends the rendered pixels with the physical background, so the virtual objects cannot be presented independently. This essential property of combiners causes a color-blending problem, which hinders the ability to clearly observe the virtual content in MR scenarios, especially when the virtual and real scenes show low color contrast.

Existing solutions to the color-blending problem can be divided into two categories, namely, hardware- and software-based solutions. Hardware-based solutions physically adjust each pixel's transparency on displays to avoid the color blending of rendered images and the background [1], [2]. Although certain solutions exert every effort toward miniaturization [3], [4], their scalability and flexibility remain very limited on account of the additional hardware. Software-based methods focus on color correction, seeking to modify the colors of rendered pixels to minimize the color blending of the virtual content and the background [5]. These approaches attempt to mitigate the color-blending effect via subtraction compensation. Researchers have developed high-precision, pixel-precise color correction algorithms [6], [7] and accurate colorimetric background estimation [8]. However, for commercial OST-HMDs, such as Google Glass, Microsoft HoloLens, and Magic Leap, using the subtraction compensation may decrease the brightness of the virtual content, resulting in low visual distinctions between rendered images and the physical environment.

In certain MR scenarios, we note that color correctness may not be the only purpose of displaying virtual objects. Instead, the color contrast of virtual objects against the real background should sometimes be pursued to allow users to better distinguish the perceived virtual objects from the background. To this point, intuitively increasing the brightness of the virtual objects may lead to a decreased contrast within their surfaces [9]. Therefore, this work proposes a novel real-time color contrast enhancement for OST-HMDs, aiming to improve the distinction between the rendered image and the real background while considering the consistency of enhanced colors with the original displayed colors. In particular, our work builds on the characteristic of the human visual system (HVS) that the color of one region induces the complementary color of the neighboring region in perception [10]. We perform a constrained color optimization in the CIE L*a*b* color space to search for the optimal complementary color under constraints regarding the perception in chromaticity and luminance. Results show that the proposed method can enhance the perceived distinction between the virtual content and the surrounding background in typical environments.

In particular, our contributions are as follows:
• We exploit a novel insight into the color-blending problem that enhances color contrast to improve users' visual experience;
• We develop an end-to-end, real-time rendering algorithm to find optimal colors, improving the distinction between displayed images and physical environments on OST-HMDs;
• We demonstrate the effectiveness of our approach in a simulated environment and on a commercial OST-HMD.
2 RELATED WORK
MR devices and corresponding algorithms are becoming prolific research areas. Numerous studies have explored how to better present the colors of virtual scenes, such as harmonization [11], defocus correction [12], [13], color reproduction [14], [15], color balancing [16], light filtering [17], [18], [19], color correction [6], [7], and visibility improvement [9], [20]. In this section, we introduce the most relevant research topics, color correction and visibility improvement, in more detail, and then describe highly relevant work on perceived color contrast in the HVS.
Color Correction.
Solutions to color blending are commonly categorized into hardware- or software-based approaches. Hardware-based approaches commonly refer to occlusion support, which physically avoids color blending by cutting off rays between the background light and the user's eyes at the pixel level [2]. Spatial light modulators (SLMs) provide the possibility of achieving this goal [1], [3], [21]. By creating a transparency mask pattern, OST-HMDs generate a black background in the mask area, blocking environment light and providing the occlusion effect. Certain methods use a high-speed switcher to frequently alternate between the virtual content and the real scene, achieving full or partial occlusion [22], [23], [24]. These approaches, however, usually make the entire solution impractical for daily use. An effort at compactness was made recently [4], but its flexibility is restricted owing to additional optical components.

By contrast, software-based methods attempt to mitigate color blending by changing the color of display pixels. Typical methods are known as color correction or compensation, which start by capturing the background and then accurately mapping the background image to the corresponding virtual content. Thereafter, the background color is subtracted from the rendered color of the virtual scene [5]. Certain research pays more attention to colorimetric estimation [8], [25] or radiometric measurement [7] of the background to obtain background information with higher accuracy.

However, the aforementioned methods of subtraction compensation result in a decrease in the brightness of displayed images, thereby reducing the visibility of virtual content in the daily environment. One work manages to increase visibility with high contrast [6] but is demonstrated mainly on textual content. In general, our goal in the current work is to improve the distinctions between virtual objects and physical backgrounds rather than just keeping the perceived image consistent with the input, which is the primary difference between our method and color correction.
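To make the brightness loss concrete, here is a minimal per-pixel sketch of subtraction compensation in the spirit of [5]. The function name and the transmittance parameter k_b are illustrative assumptions of ours, not the authors' code:

```python
import numpy as np

def subtraction_compensation(target_rgb, background_rgb, k_b=0.6):
    """Subtract the (attenuated) background from the intended color.

    k_b models lens transparency (hypothetical value). Wherever the
    background is brighter than the target in some channel, the result
    clips at zero -- the brightness loss discussed in the text.
    """
    out = np.asarray(target_rgb, float) - k_b * np.asarray(background_rgb, float)
    return np.clip(out, 0.0, 1.0)
```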
Visibility Improvement.
Complementary to color correction, a few recent works focus on improving the visibility of the virtual content. Fukiage et al. [9] proposed an adaptive blending method on the basis of a subjective metric of visibility. However, on common OST-HMDs, this method increases the brightness but reduces the contrast of virtual objects, leading to a washed-out effect in the texture details. Lee and Kim [20] used tone mapping to enhance the visibility of low gray-level regions under ambient light, but they demonstrated it only on grayscale images. The main problem of these two approaches is that they only consider the luminance of rendered images, limiting their effectiveness for virtual objects with colorful, intricate textures.
Color Contrast.
Color contrast is the perceptual difference in color between one region and its adjacent region. In the HVS, this difference is influenced by the chromaticity and luminance of a test stimulus and its surrounding area when presented simultaneously [30], [31], [32]. This effect is called simultaneous color induction. For example, a red patch looks redder on a green background than on a red background (see Figure 2). Many studies verify the color induction and related phenomena through various experiments [26], [31].
Fig. 2: Demonstration of the simultaneous color induction. Top: The two red patches in the center are displayed with the same RGB value (255, 0, 0), yet they appear to be different colors in perception. Bottom: Two grey patches are displayed with the same RGB value (64, 64, 64) but have different perceived colors. More patterns can be found in related studies [26], [27], [28], [29].
Two grey patches are displayed inthe same RGB value (64 , , but have different perceivedcolors. More patterns can be found in related studies [26],[27], [28], [29].induction and related phenomenon through various exper-iments [26], [31]. Some researchers systematically quantifythe effects of simultaneous color induction along the dimen-sion of hue [27], [29].It is generally accepted that two contents with differentcolors displayed simultaneously induce a complementarycolor shift to each other (the complementarity law). In otherwords, changes in the color appearance of the inducedstimulus are directed away from the appearance of inducingsurrounds. Recently, some hypotheses, such as the directionlaw [28], [33], [34], challenge the traditional complementar-ity law, showing that the mechanism of simultaneous colorinduction is still not understood completely. Additionally,verifying these hypotheses is beyond the scope of this work.Nevertheless, these valuable advances provide a theoreticaland experimental foundation for our approach, promptingus to stand on a novel perspective for color blending. ROBLEM S TATEMENT
In this section, we describe the color-blending problem illustrated in Figure 3. We formulate our method, color contrast enhancement, as a constrained optimization of color blending. According to the definition of color blending on OST-HMDs by Gabbard et al. [35], the blending procedure can be formulated as follows:

c = H(l_bl),   (1a)
l_bl = D_MR(l_d, l_bg),   (1b)
l_bg = R(l_s, b),   (1c)

where c represents the perceived color, and H is the operation of the HVS on the blended light l_bl that reaches the user's eyes. D_MR is an abstraction of the entire OST-HMD system, which contains multiple parameters, such as lens opacity and display brightness. l_d denotes the display light, and l_bg refers to the background light that enters the front of the OST-HMD. The reflectance function R depicts how the light source l_s in the background b interacts with the object surface and finally enters the OST-HMD.
Fig. 3: Illustration of the color-blending problem on OST-HMDs. The light source l_s interacts with the object surface in the background b through the reflectance function R, and the reflected light (i.e., the background light l_bg) enters the OST-HMD. Subsequently, the entire display system D_MR blends the background light in displays, l_b, with the display light l_d to generate the blended light l_bl. Finally, the user's eyes receive l_bl, which forms the perceived color c through the operation H of the HVS.

Previous software-based color correction methods devote themselves to accurately measuring and estimating l_bg and the parameters of D_MR (e.g., lens opacity), allowing them to subtract the background light in displays (l_b) from the display light (l_d). As a result, these methods remove the background color from the display pixels to mitigate the color-blending effect. Unlike color correction approaches, we are not seeking to handle the distortions [6] or to measure hardware parameters of OST-HMDs [7]. Instead, we focus on the HVS's operation H and the perceived color c. Therefore, the function D_MR(l_d, l_bg) can be approximated as l_d + l_b:

l_bl ≈ l_d + l_b.   (2)

On the basis of the opponent-colors theory proposed by Jameson and Hurvich [30], the perceived color c of a test stimulus can be expressed as follows:

c = f[(r − g)_t + (r − g)_i, (y − b)_t + (y − b)_i, (w − bk)_t + (w − bk)_i],   (3)

where f is a function of the sums. (r − g)_t, (y − b)_t, and (w − bk)_t are the responses of three paired and opponent neural systems to the test stimulus. (r − g)_i, (y − b)_i, and (w − bk)_i denote the responses of the corresponding systems from the surrounding area induced by the simultaneous color induction. Vice versa, this induction also affects the perceived color of the surrounding area.

The color difference ΔE*_ab defined in the CIE L*a*b* color space (abbreviated as the LAB color space) between color x and color y can be calculated as follows:

ΔE*_ab(x, y) = ||x − y|| = sqrt((L*_x − L*_y)² + (a*_x − a*_y)² + (b*_x − b*_y)²),   (4)

where L*, a*, and b* are three orthonormal bases of the LAB color space used to describe the luminance and chromaticity of colors.

The key of our approach is to shift l_d to increase the color difference ΔE*_ab(l_d, l_b) in (4). In addition, shifting l_d leads to an increase in the corresponding responses (r − g)_i, (y − b)_i, and (w − bk)_i in (3), which further enhances the perceived color difference between l_d and the surrounding l_b.
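For reference, Eq. (4) can be evaluated directly once colors are in LAB. The sketch below uses the standard sRGB-to-CIELAB conversion with a D65 white point; the conversion constants are textbook values, not taken from the paper, which does not specify its conversion:

```python
import numpy as np

def srgb_to_lab(rgb, white=(0.95047, 1.0, 1.08883)):
    """Convert an sRGB color in [0, 1] to CIE L*a*b* (D65 white point)."""
    rgb = np.asarray(rgb, dtype=float)
    # Undo the sRGB gamma (linearize).
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> XYZ (sRGB primaries, D65).
    m = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = m @ lin
    # XYZ -> L*a*b*.
    t = xyz / np.asarray(white)
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[1] - 16
    a = 500 * (f[0] - f[1])
    b = 200 * (f[1] - f[2])
    return np.array([L, a, b])

def delta_e_ab(x, y):
    """Color difference of Eq. (4): Euclidean distance in L*a*b*."""
    return float(np.linalg.norm(np.asarray(x) - np.asarray(y)))
```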
Fig. 4: Comparison between using the exact complementary color and our optimization. We use a landscape photo as the test background and some objects from the Hand Interaction Examples scene as the virtual content. The rendering result on the left uses the per-pixel complementary color of the background color. The image on the right is rendered with our method. Please refer to the supplementary video for an example in the real environment captured by the HoloLens.
In this manner, we improve the distinction between the virtual content and the surrounding background. We have also tested enhancing ΔE*_ab(l_d, l_d + l_b) and found that l_d + l_b usually has a large luminance component but a relatively small chromaticity component. Applying our color contrast enhancement to such a color usually produces brighter colors than those produced from l_d but decreases the contrast within the surfaces of virtual objects. Therefore, we choose to enhance ΔE*_ab(l_d, l_b) instead of ΔE*_ab(l_d, l_d + l_b).

When the background light in displays l_b and the display light l_d exhibit the most significant color difference, that is, when l_b and l_d are complementary colors, the HVS perceives a maximum color contrast. However, considering only the color contrast may lead to an unintended color alteration of displayed objects. Figure 4 shows an example of such a case: the virtual objects at the bottom left lack texture details and can hardly be recognized. This most straightforward optimization rule can almost only be used for textual content. Therefore, we introduce several constraints in the optimization of the optimal displayed color l_opt to maintain the color consistency between the enhanced color and the original color:

l_opt = arg max_{l_opt} ΔE*_ab(l_opt, l_b)   subject to constraints.   (5)

Given the aforementioned issue, the color difference ΔE*_ab between l_b and l_d cannot be unlimited; one should keep the consistency with the original color of virtual objects. On this basis, we introduce the first constraint, named the Color Difference Constraint, to restrict the range of l_opt:

ΔE*_ab(l_opt, l_d) ≤ λ_E,   (6)

where λ_E represents a non-negative color difference threshold. As such, the optimal displayed color l_opt and the original displayed color l_d are kept within a certain range in color.

Further, if the hues of the background color l_b and the display color l_d are similar, the optimized l_opt would shift along the complementary direction of l_b, resulting in a decrease in the chroma of l_d (see Figure 5 for a specific example).
4. https://github.com/microsoft/MixedRealityToolkit-Unity
Fig. 5: Demonstration of the Chroma Constraint. Two landscape photos serve as the test background and virtual content. The left side is rendered with the optimal colors that meet only the Color Difference Constraint, and the right side is the optimized result of our method. Please refer to the supplementary video for an example in the real environment captured by the HoloLens.

We introduce the second constraint, named the Chroma Constraint, to tackle the chroma reduction:

ch_opt − ch_d ≥ 0,   (7)

where ch_opt and ch_d represent the chroma of l_opt and l_d, respectively. Note that although we have restricted the reduction in chroma, this constraint places no bound on chroma increments.

In addition to the constraint on chroma, the luminance can benefit from a corresponding constraint. For example, if l_b is a bright, whitish color, such as a wall or clear sky, the complementary color of l_b is close to a dark grey. In this case, the optimized color l_opt moves toward the direction of l_b's complementary color, resulting in a decrease in luminance (see Figure 6 for an example). Reducing the luminance of the displayed color on common OST-HMDs often leads to increased transparency. Therefore, displaying l_opt with no constraint on luminance reduces the visibility of virtual objects. Moreover, the experimental results of Fukiage et al. [9] showed that if the visibility of virtual objects in OST-HMDs is enhanced by increasing the luminance of l_d without constraint, the perceptual contrast within the surfaces of these objects decreases. These situations lead to the third constraint, which we call the Luminance Constraint:

ΔL*(l_opt, l_d) ≤ λ_L,   (8)

where ΔL* is the luminance difference, and λ_L denotes a non-negative luminance threshold. This constraint alleviates the reduced visibility of displayed content caused by a bright background, as well as the contrast reduction of virtual objects in a dim environment.

Finally, we introduce an evident constraint between l_opt and l_b, named the Just Noticeable Difference Constraint:

ΔE*_ab(l_opt, l_b) ≥ λ_JND,   (9)

where λ_JND represents the just noticeable difference (JND). This constraint indicates that the optimal color l_opt and the background color l_b require a minimum color difference that the HVS can distinguish. Note that this constraint is generally satisfied automatically on account of the objective of our optimization. However, for certain extreme cases where users exhibit an above-average JND, applying this constraint is mandatory.
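Collecting the objective (5) and the constraints (6)–(9) in one place, the full program reads as follows; this is only a restatement of the equations above, not new content:

```latex
\begin{aligned}
l_{opt} = \arg\max_{l}\;& \Delta E^{*}_{ab}(l,\, l_b) \\
\text{s.t.}\;\;& \Delta E^{*}_{ab}(l,\, l_d) \le \lambda_E
  && \text{(Color Difference)} \\
& ch(l) \ge ch(l_d) && \text{(Chroma)} \\
& \Delta L^{*}(l,\, l_d) \le \lambda_L && \text{(Luminance)} \\
& \Delta E^{*}_{ab}(l,\, l_b) \ge \lambda_{JND} && \text{(JND)}
\end{aligned}
```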
Only the 1st & 2nd Constraints Ours
Fig. 6:
Demonstration of the Luminance Constraint. We usea bright gray and a dark gray as the test background anda landscape photo and other objects as the virtual contents.The left side is rendered with the optimal colors that followthe first and second constraints but not the third, and theright side is the optimized result of our algorithm. Pleaserefer to the supplementary video for an example in the realenvironment captured by the HoloLens.
4 ALGORITHM
Given these four constraints of color contrast enhancement, the next step is to solve the optimization problem in (5). Accordingly, we design a real-time algorithm to enable color contrast enhancement in a variety of real environments. Figure 7 shows an overview of the proposed method, which mainly includes three procedures:
I. Preprocessing:
We perform a Gaussian blur and field-of-view (FoV, see Subsection 4.1) calibration on the streaming video of the background.
II. Conversion:
We convert the display and background colors from RGB to the LAB color space and vice versa.
III. Optimization:
Utilizing the aforementioned four constraints, we optimize the displayed colors on the basis of the background colors.
4.1 Preprocessing

It is generally believed that multiple individual pattern analyzers contribute to the contrast sensitivity of humans [36]. These pattern analyzers are often called spatial-frequency channels, which filter the perceived image into spatially localized receptive fields with a limited range of spatial frequencies. That is, different analyzers are tuned to various types of information; for example, the low-spatial-frequency channels receive the color and outlines of objects, whereas the high-spatial-frequency channels perceive details. Considering the low-pass nature of color vision [37] and the blurring characteristic of the non-focal field in the HVS, our method does not need a pixel-precise camera-to-display calibration. Instead, we use a Gaussian blur to extract the spatial color information and filter out details of the background video:

G(x, y) = (1 / (2πσ²)) · e^(−(x² + y²) / (2σ²)),

where x and y are the horizontal and vertical distances from a pixel to its center pixel, respectively, and σ is the standard deviation of the normal distribution. Such blurring simulates the non-focal effect of the HVS. In this manner, the displayed color is paired with the weighted average of multiple background colors in the corresponding region. This filter also reduces the flicker in optimized colors caused by high-spatial-frequency details in the background (see Figure 8 for examples).

Given the low-frequency characteristics of the blurred background video and the difference in FoV between captured videos and OST-HMDs, we subsequently use a screen-space coordinate mapping, called FoV calibration, between the background video and the frame buffer of the rendering system in order to approximate a pixel-precise calibration (such as homography). On this basis, we apply the following coordinate mapping to map the background video to the frame buffer:

u = s_u · i + b_u,
v = s_v · j + b_v,   (10)

where (u, v) and (i, j) represent the 2D texture coordinates of the frame buffer and the background video, respectively, (s_u, s_v) denotes the scale factors in the horizontal and vertical directions of the background video coordinates, and (b_u, b_v) denotes the corresponding offsets. Figure 9 shows the illustration. We assume that the FoV of captured videos is greater than that of OST-HMDs. With this calibration, the low-frequency information of the background video matches, as closely as possible, that of the real scene seen through the displays of OST-HMDs.

4.2 Conversion

After preprocessing, each displayed color can be paired with an average background color of the corresponding area. These colors are all stored and represented in the RGB color space. However, the widely used RGB color space is not perceptually uniform: the same amount of numerical change does not correspond to the same amount of color difference in visual perception.

To achieve a better optimization, we convert the background color and the displayed color from RGB to the LAB color space, taking full advantage of its perceptual uniformity and its separation of luminance and chromaticity. After performing the color contrast enhancement, we transform LAB colors back to RGB for appropriate display on OST-HMDs.

Additionally, the gamut of the LAB color space is more extensive than that of displays and even the HVS, indicating that many coordinates in the LAB color space, especially those located in the edge area, cannot be reproduced on typical displays. For simplicity, we scale the range of the original LAB color space to [−1, 1] and take the inscribed sphere as the solution space of our method.
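The two preprocessing operations of Subsection 4.1 can be sketched in a few lines of NumPy. σ, the scale factors, the offsets, and the output shape below are placeholders, since the calibrated values are elided in the text:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess_background(frame, sigma=4.0, s=(0.8, 0.8), b=(0.1, 0.1),
                          out_shape=(720, 1280)):
    """Gaussian blur plus the FoV mapping of Eq. (10), inverted for sampling."""
    # Low-pass the captured background frame, per color channel.
    blurred = gaussian_filter(frame.astype(float), sigma=(sigma, sigma, 0))
    h, w = frame.shape[:2]
    out_h, out_w = out_shape
    # Frame-buffer texture coordinates (u, v) in [0, 1].
    v, u = np.meshgrid(np.linspace(0, 1, out_h), np.linspace(0, 1, out_w),
                       indexing="ij")
    # Invert u = s_u * i + b_u to find the source coordinates (i, j).
    i = (u - b[0]) / s[0]
    j = (v - b[1]) / s[1]
    # Nearest-neighbor lookup; a GPU implementation would sample bilinearly.
    xi = np.clip((i * (w - 1)).round().astype(int), 0, w - 1)
    yj = np.clip((j * (h - 1)).round().astype(int), 0, h - 1)
    return blurred[yj, xi]
```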
4.3 Optimization

Given a blurred background color l_b and an original displayed color l_d in the scaled LAB color space, the objective of our optimization is to find the optimal displayed color l_opt. In this work, we denote points by capital italic letters. First, we calculate the coordinates I of the ideally optimal displayed color corresponding to the blurred background color, without considering any constraint:

I = −B / dist(B, O)   (11a)
  = −norm(→OB),   (11b)

where B represents the coordinates of the blurred background color l_b, and dist(B, O) denotes the distance between B and the center O of the unit sphere. This formula can also be rewritten in the latter form, where norm(→OB) means normalizing the vector →OB. That is, the farthest point from B in the unit sphere is the intersection of the extension of line BO with the sphere (Figure 10a).
Fig. 7: Algorithm overview. Our algorithm takes the rendered virtual scene and the streaming background video as input. First, a Gaussian blur and FoV calibration are applied to the background video. Second, we transform the blurred video and the virtual scene from RGB to the LAB color space and then optimize the displayed color for all pixels. Finally, the pixel color of the virtual scene is converted back to the RGB color space and output to displays.
Fig. 8: Demonstration of the necessity of image blur. We use a landscape photo as the test background. The left side is the optimized result of our algorithm with background blurring enabled; the right side shows the optimal displayed colors without background blurring. The zoomed regions emphasize details.
Fig. 9: Illustration of FoV calibration. This process is performed when the FoVs of the captured background video and the frame buffer are different. We crop and scale the captured video to match the position and size of the background seen through OST-HMDs.

Given the ideally optimal displayed color I, we then incorporate the four aforementioned constraints, the Color Difference Constraint, the Chroma Constraint, the Luminance Constraint, and the Just Noticeable Difference Constraint, into the color optimization in (5). First, we calculate the coordinates E of the initially optimal display color by applying the Color Difference Constraint to I:

→DE = min(dist(D, I), λ'_E) · norm(→DI),   (12)

where →DI is the ideal change vector starting from the coordinates D of the original displayed color l_d, and λ'_E is the scaled color difference threshold specified by users. We now have the change vector →DE of the displayed color constrained by the color difference; Figure 10a presents a two-dimensional illustration of this step.

Let v' denote the projection of a vector v onto the plane a*Ob* of the LAB color space. The change in chroma →DE'_ch of the vector →DE' can then be obtained by calculating the projection of →DE' onto the vector →OD' (see Figure 10b for a two-dimensional example). For the Chroma Constraint, we discard any chroma reduction of →DE':

→DC = t_ch · →DE'_ch + →DE'_h,   (13a)
t_ch = 1 if θ_ch ≤ 90°, and 0 if θ_ch > 90°,   (13b)

where →DE'_h = →DE' − →DE'_ch refers to the component of →DE' perpendicular to →DE'_ch, and θ_ch represents the angle (0°–180°) from →OD' to the vector →DI'. This angle describes the deviation in hue and chroma between the optimal displayed color and the original displayed color. In this manner, the adaptive parameter t_ch provides a smooth visual effect when hue and chroma change.
Algorithm 1 Finding the coordinates P of the optimal displayed color l_opt
Input: D, B, λ'_E, λ'_JND
Output: P
Components x, y, and z denote L*, a*, and b*, respectively.
dir = Vector3(0, 0, 0)
Color Difference Constraint:
  I = −norm(B)
  E = min(dist(D, I), λ'_E) · norm(I − D)
Chroma Constraint:
  E' = Vector3(0, E.yz)
  E'_ch = change in chroma of E'
  E'_h = E' − E'_ch
  θ_ch = angle from Vector3(0, D.yz) to E'
  dir.yz = (((cos θ_ch ≥ 0) ? 1 : 0) · E'_ch + E'_h).yz
Luminance Constraint:
  θ_l = angle from Vector3(1, 0, 0) to E
  dir.x = ((cos θ_l ≥ 0) ? (1 − cos θ_l) : (1 + cos θ_l)) · E.x
Optimal display color:
  P = D + dir
Just Noticeable Difference Constraint:
  if dist(P, B) < λ'_JND then
    P = intersection of line DP and circle B of radius λ'_JND
  end if
return P
Fig. 10:
We illustrate our algorithm on the 2D plane formed by two of the three axes of the LAB color space for simplicity. (a) Illustration of calculating the coordinates I and E. All points in this subfigure are plotted on the plane a*Ob*. In the unit circle, I is the point farthest from B. All points with the same distance λ'_E from D form a circle, which intersects the line DI at point E. In this subfigure, all points are colored with the same L* value, equal to neutral gray. (b) Illustration of determining the coordinates C. All points are plotted on the plane a*Ob*. If θ_ch is an obtuse angle, the change in chroma →DE_ch of vector →DE is discarded; thus, the chroma-constrained vector →DC only contains the component →DE_h of →DE. Again, all points are colored with the same L* value, equal to neutral gray. (c) Illustration of determining the coordinates L. All points are plotted on the plane b*OL*. The change in luminance →DE_L* of vector →DE is attenuated with cos θ_l; thus, →DE_L* is shortened to →DL. In this case, all points are colored with the same a* value, equal to zero. (d) Illustration of finding the new optimal displayed color P' under the Just Noticeable Difference Constraint. All points are plotted on the plane a*Ob*. Sometimes the coordinates P of the optimal displayed color l_opt fall into circle B on account of a large radius λ'_JND, meaning that the HVS cannot distinguish between l_opt and l_b. Therefore, we find the intersection of line DP and circle B as P'. All points are colored with the same L* value, equal to neutral gray.

As for the Luminance Constraint, we also use an adaptive parameter to scale the alterations in luminance:

→DL = (1 − |cos θ_l|) · →DE_L*,   (14)

where →DE_L* refers to →DE's component on the L* axis of the LAB color space, and θ_l represents the angle (0°–180°) from the positive L* axis to the vector →DE (see Figure 10c for a two-dimensional example). Correspondingly, this angle indicates the difference in luminance between the optimal displayed color and the original displayed color. Once again, the adaptive coefficient (1 − |cos θ_l|) smooths the results and makes it unnecessary for users to specify the luminance threshold λ_L.

According to the above steps, the coordinates P of the optimal displayed color l_opt can be obtained by:

P = D + →DC + →DL.   (15)

In contrast to the other three constraints, the Just Noticeable Difference Constraint is only applied in some extreme cases (Figure 10d), wherein we reduce the length of the vector →DP so that the distance from the coordinates P' of the new optimal displayed color to the coordinates B of the blurred background color is equal to a scaled JND λ'_JND in the unit sphere. Such extreme cases are found in users who have an above-average JND, leading to a larger radius (i.e., λ'_JND) of circle B in Figure 10d. Algorithm 1 gives the pseudocode of the entire optimization process above, where Vector3(0, E.yz) generates a three-dimensional vector whose first component is 0 and whose last two components are E.y and E.z.
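As a concrete reference, the following Python sketch mirrors Algorithm 1 end to end. The function name and numerical guards are ours, and it assumes B ≠ O and that D itself lies outside the JND ball; it is a sketch, not the authors' shader code:

```python
import numpy as np

def optimal_display_color(D, B, lam_e, lam_jnd):
    """Sketch of Algorithm 1 in the scaled LAB unit sphere.

    D and B are (L*, a*, b*) coordinates of the original displayed color
    and the blurred background color; lam_e and lam_jnd are the scaled
    thresholds lambda'_E and lambda'_JND.
    """
    D, B = np.asarray(D, float), np.asarray(B, float)
    eps = 1e-12
    # Ideal target I, Eq. (11): the point of the unit sphere farthest from B.
    I = -B / (np.linalg.norm(B) + eps)
    # Color Difference Constraint, Eq. (12): step toward I by at most lam_e.
    to_I = I - D
    dE = min(np.linalg.norm(to_I), lam_e) * to_I / (np.linalg.norm(to_I) + eps)
    # Chroma Constraint, Eq. (13): split the a*b*-plane part of dE into a
    # radial (chroma) component and a tangential (hue) component.
    radial = np.array([0.0, D[1], D[2]])
    radial /= (np.linalg.norm(radial) + eps)
    dE_plane = np.array([0.0, dE[1], dE[2]])
    dE_ch = np.dot(dE_plane, radial) * radial      # chroma change
    dE_h = dE_plane - dE_ch                        # hue change
    t_ch = 1.0 if np.dot(dE_plane, radial) >= 0.0 else 0.0  # drop reductions
    direction = t_ch * dE_ch + dE_h
    # Luminance Constraint, Eq. (14): attenuate the L* change by 1 - |cos|.
    cos_l = dE[0] / (np.linalg.norm(dE) + eps)
    direction[0] = (1.0 - abs(cos_l)) * dE[0]
    P = D + direction                              # Eq. (15)
    # Just Noticeable Difference Constraint, Eq. (9): if P fell inside the
    # JND ball around B, pull it back to the ball's surface along line DP.
    if np.linalg.norm(P - B) < lam_jnd:
        d, f = P - D, D - B
        a = np.dot(d, d) + eps
        b = 2.0 * np.dot(f, d)
        c = np.dot(f, f) - lam_jnd ** 2
        t = (-b - np.sqrt(max(b * b - 4.0 * a * c, 0.0))) / (2.0 * a)
        P = D + t * d
    return P
```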
5 IMPLEMENTATION
We implemented our algorithm as a full-screen post-processing effect using the Unity 2019 rendering engine and validated it in a software-simulated environment and on a commercially available OST-HMD. The simulated environment is based on the Unity Editor with a resolution of  × , using a series of still images as the simulated backgrounds. We used the color difference ΔE*_ab in (4) for objective evaluation in the simulated environment. For the real environment, we used the first generation of the Microsoft HoloLens MR headset [38] to evaluate the performance and quality of our algorithm. Both environments use the modified Hand Interaction Examples scene as the virtual content, which includes text, photographs, user interfaces, and 3D models with plain or intricate materials. We attenuated the luminance of background images in the simulated environment and of streaming videos in the HoloLens by  to simulate the real background perceived through the translucent lens of the HoloLens; this is a tweakable parameter for other types of OST-HMDs with different transparency. In both environments, the kernel size and σ of our Gaussian filter are  and  . , respectively. These values are used for all participants in our user studies (Subsections 6.2 and 6.3). We captured streaming videos at  frames per second (FPS) in real time with the built-in RGB camera located in the center of the HoloLens. The screen brightness of the HoloLens was fixed at .

The data of our FoV calibration (Subsection 4.1) was determined by manual calibration; specifically, (s_u, s_v) = (0. , 0. ) and (b_u, b_v) = (0. , 0. ).
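Tying the stages together, here is a per-pixel sketch of the Fig. 7 pipeline in Python. It reuses srgb_to_lab (from the sketch after Eq. (4)) and optimal_display_color (from the Algorithm 1 sketch); lab_to_srgb is the standard inverse conversion, and the 1/100 scaling of LAB coordinates toward the unit sphere is our own stand-in for the paper's unspecified scaling:

```python
import numpy as np

def lab_to_srgb(lab, white=(0.95047, 1.0, 1.08883)):
    """Standard CIE L*a*b* -> sRGB inverse (D65), clipped to [0, 1]."""
    L, a, b = np.asarray(lab, dtype=float)
    fy = (L + 16.0) / 116.0
    f = np.array([fy + a / 500.0, fy, fy - b / 200.0])
    t = np.where(f > 6 / 29, f ** 3, 3 * (6 / 29) ** 2 * (f - 4 / 29))
    xyz = t * np.asarray(white)
    m_inv = np.array([[ 3.2404542, -1.5371385, -0.4985314],
                      [-0.9692660,  1.8760108,  0.0415560],
                      [ 0.0556434, -0.2040259,  1.0572252]])
    lin = np.clip(m_inv @ xyz, 0.0, None)
    srgb = np.where(lin <= 0.0031308, 12.92 * lin,
                    1.055 * lin ** (1 / 2.4) - 0.055)
    return np.clip(srgb, 0.0, 1.0)

def enhance_pixel(virtual_rgb, blurred_bg_rgb, lam_e=0.4, lam_jnd=0.02):
    """One pixel of the Fig. 7 pipeline: RGB -> scaled LAB -> optimize -> RGB.

    The thresholds are in the scaled (assumed 1/100) LAB units.
    """
    D = srgb_to_lab(virtual_rgb) / 100.0
    B = srgb_to_lab(blurred_bg_rgb) / 100.0
    P = optimal_display_color(D, B, lam_e, lam_jnd)
    return lab_to_srgb(100.0 * P)
```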
5. https://unity.com/
Fig. 11: Some of the result images in the simulated environment, generated by the Unity Editor. Each group with a different λ'_E (0.2, 0.4, and 0.6) shows two background images. The first row shows the original blending images, and the second row shows the results of our method. In the last row, pixels colored in cyan indicate a perceptually increased ΔE*_ab between the background color and the displayed color; numbers in the figures represent the corresponding percentage of these pixels among all foreground pixels. Please refer to the supplementary video for an example with different background colors captured by the HoloLens.

6 RESULTS
In our context, we take the scaled color difference (λ'_E) to regulate our color contrast enhancement, where λ'_E = 0 means our algorithm has no effect. To demonstrate the effectiveness of our method, we performed a series of experiments in simulated and real environments, including objective evaluations, algorithm comparisons, performance analysis, and subjective user studies. All experiments on the HoloLens were conducted in indoor scenes. In real environments, we found that physical environmental conditions (such as scene illuminance) have no significant impact on our experimental results, on account of the auto exposure and auto white balance of the built-in RGB camera of the HoloLens.

We used different types of images as the background of the simulated environment to evaluate our method in various practical scenarios. Figure 11 shows some of the results. Virtual objects and the dimmed background are blended directly as in (2), instead of being mixed by alpha blending, to simulate the optical properties of OST-HMDs. Pixels that meet the following condition are marked in cyan in the last row of Figure 11, meaning that these pixels have a perceptually increased color difference ΔE*_ab from the corresponding background:

(ΔE*_ab(l_b, l_opt) > ΔE*_ab(l_b, l_d)) ∩ (ΔE*_ab(l_opt, l_d) ≥ λ_JND).   (16)

Here, λ_JND is about  in the LAB color space [39]. Note that not all virtual content can gain perceptual color contrast enhancement. Given the constraints mentioned above, the color difference between certain optimal colors and their original displayed colors is less than the JND. Moreover, some displayed colors are initially the complementary colors of the background colors and do not require further enhancement. To better demonstrate the effectiveness of our approach, we validated it with varying λ'_E on the same background, as shown in Figure 12. Generally, a larger color difference leads to an increase in the number of enhanced pixels and makes the displayed color more complementary to the background color.
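The simulated evaluation can be reproduced in outline with two small helpers: the additive blend of Eq. (2) and the per-pixel test of Eq. (16). The dimming factor and the JND value are stand-ins for numbers elided in the text; 2.3 is a commonly cited LAB JND, used here in place of the value the paper cites from [39]:

```python
import numpy as np

def simulate_blend(display_rgb, background_rgb, dim=0.6):
    """Simulated OST-HMD blending per Eq. (2): l_bl ~= l_d + l_b.

    `dim` stands in for the paper's (elided) background attenuation.
    """
    l_d = np.asarray(display_rgb, float)
    l_b = dim * np.asarray(background_rgb, float)
    return np.clip(l_d + l_b, 0.0, 1.0)

def enhanced_pixel_mask(lab_bg, lab_orig, lab_opt, lam_jnd=2.3):
    """Per-pixel test of Eq. (16) for the cyan maps in Figs. 11-14.

    Inputs are H x W x 3 LAB arrays for the blurred background, the
    original displayed pixels, and the optimized pixels.
    """
    d_bg_opt = np.linalg.norm(lab_bg - lab_opt, axis=-1)
    d_bg_orig = np.linalg.norm(lab_bg - lab_orig, axis=-1)
    d_opt_orig = np.linalg.norm(lab_opt - lab_orig, axis=-1)
    # The reported percentage is mask.mean() over foreground pixels.
    return (d_bg_opt > d_bg_orig) & (d_opt_orig >= lam_jnd)
```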
Fig. 12: Results of different λ'_E for the same background in the simulated environment. Numbers indicate the percentage of enhanced pixels among all foreground pixels. One can see that the white text in front of yellow backgrounds looks bluish, whereas that in front of blue backgrounds looks yellowish. Please refer to the supplementary video for an example of different λ'_E captured by the HoloLens.

To evaluate the enhancement contributed by hues, we compared the results (Figure 13) produced by our approach with those produced by just increasing luminance and chroma, with the same λ'_E of  . . Figure 15a shows a two-dimensional illustration of the latter enhancement. The changes in color difference of each pixel are identical between the two enhancements. Pixels that have an increased perceptual color difference are marked in cyan, as in Figure 11. In most cases, the approach considering hues produces more enhancement than increasing luminance and chroma only. Some virtual objects, like the coffee cup in Figure 13, are indeed more visible with the latter approach, but texture details like shadows are distorted.
Fig. 13: Results of different enhancements in the simulated environment, with the same change in color difference. Left: Our color contrast enhancement. Right: Enhancing color contrast by increasing luminance and chroma only. The numbers in the figures indicate the corresponding percentage of enhanced pixels.
Fig. 14: Results of different hue shifts in the simulated environment, with the same control conditions (i.e., iso-color-difference, iso-luminance, and iso-chroma). Left: Color contrast enhancement produced only by hue shifts of our method. Right: Another enhancement produced only by hue shifts but in a different direction. Numbers in the figures denote the corresponding percentage of enhanced pixels among all foreground pixels.

The effects of color contrast enhancement produced by two directions of the hue shift were also evaluated. Figure 14 shows a comparison result, where λ'_E was set to  . for both. Figure 15b presents an illustration of the two directions. The first direction is based on our method, and the other direction has the same control conditions. The optimized colors of the two enhancements are the same as the original colors in luminance, and the changes in chroma and color difference between the two enhancements are also identical. In fact, there is only one other direction (→DC'') that is iso-color-difference, iso-luminance, and iso-chroma compared to →DC. Pixels that have an increased perceptual color difference are marked in cyan. Normally, a considerable degree of color contrast can be enhanced by hue shifts only. Moreover, the enhancement along the hue shift direction we propose is better than that along the other hue shift direction under the same conditions.

We also evaluated our algorithm on the HoloLens.
Fig. 15: (a) A 2D illustration of increasing luminance and chroma only. C' is the coordinates of the color optimized by this enhancement. The length of →DC' is equal to that of →DE_h, and its direction is along →OD. (b) A 2D illustration of enhancing color contrast in the other direction of the hue shift. C'' is the coordinates of the color optimized by this enhancement. The length of →DC'' is equal to that of →DE_h, while its direction is opposite to →DE_h.

Figure 1 shows one of the experimental results, where the scaled color difference threshold λ'_E we used was  . . In front of a yellow background, our method shifts the displayed color in the complementary direction (blue) of the background color to enhance the color difference for better visual distinction. For example, the sky and the ground of the landscape photograph look bluish, as does the text in the scene. However, subject to the aforementioned constraints (Subsection 3.1), the chroma of the yellow cheese and mantle does not decrease. Overall, our method is able to keep consistency with the original displayed color.

Although our goal is to improve the distinctions between virtual contents and surrounding backgrounds rather than to correct colors, we compared our color contrast enhancement with a typical compensation method [5]. Figure 16 presents the experimental results. Note that this result of our enhancement shows the maximum color contrast under a given λ'_E; if one needs less contrast but more consistency with the original color, the threshold λ'_E is tweakable. As noted previously, subtraction compensation reduces the luminance of rendered pixels, leading to low visual distinctions between virtual objects and the physical environment. Additionally, we compared our method with the visibility-based blending approach [9], as shown in Figure 17. This blending method increases the brightness at the expense of contrast within surfaces, resulting in a washed-out visual effect of the virtual content.

We conducted three user studies to evaluate our color contrast enhanced rendering subjectively. We first performed a two-alternative forced choice (2AFC) subjective experiment, wherein the participants were asked to compare the results of color-contrast-enhanced rendering based on our approach with those of the original rendering. We provided five levels of λ'_E, ranging from 0.2 to 1.0 with a step of 0.2. During the experiment, participants wore the HoloLens and explored the modified Hand Interaction Examples scene freely in various environments (see Figure 18 for an example). In this scene, dozens of virtual objects were placed surrounding the user. Owing to the small FoV of the HoloLens, participants needed to rotate their heads (change the camera viewport) to see different virtual objects with different backgrounds.
Fig. 16: Rendering results of the two methods on the HoloLens, in front of different backgrounds. Left: Our color contrast enhancement (λ'_E = 0. ). Right: Subtraction compensation [5] (k_v = 1. and k_b = 0. , where k_v = 1. means the same display color in both methods, and k_b = 0. denotes lens transparency, i.e., attenuation of background luminance). When the background is light gray (such as a white wall), our method applies limited optimization to rendered pixels. By contrast, the subtraction compensation causes degradation, making the virtual contents more transparent. Please refer to the supplementary video for another comparison between the two methods.
Fig. 17: Rendering results of the two methods on the HoloLens, in front of different backgrounds. Left: Our color contrast enhancement (λ'_E = 0. ). Right: Visibility-enhanced blending [9] (V_t = 1. , the value given by the authors in their paper). When the background is an achromatic pattern, our method applies little optimization to rendered pixels. By contrast, visibility-enhanced blending improves the distinctions between virtual contents and the physical background but decreases contrast within the surfaces of objects and leads to possibly unintended changes in color. Please refer to the supplementary video for another comparison between the two methods.

Then, participants performed full comparisons by freely toggling between the two results shown in the HoloLens and were asked two questions: first, which of the results is more distinguishable from the surrounding backgrounds; second, which result looks more natural. We asked each participant to look at three randomly picked objects, where each object is associated with one camera viewport. Thus, each participant compared results in three random camera viewports under five color differences and gave a total of 15 choices for each question.
Fig. 18: Scene perceived from the HoloLens by participants in a daily environment, with our color contrast enhancement enabled (λ'_E = 0. ). One can see that the photograph in front of the green background looks reddish.

A total of 15 participants, 12 males and 3 females, with a mean age of  . years (range  – ), took part in the experiments. All participants had normal or corrected-to-normal vision without any form of color blindness. Participants gave informed consent to take part in this study. Before beginning the study, we required each participant to perform an eye-to-display calibration through a built-in calibration application of the HoloLens. Finally, we received 45 choices for each question under each color difference (λ'_E). The results show that our method is more distinguishable from the surrounding backgrounds in  of comparisons when λ'_E ≥ 0.6 (p < 0.01, Binom. test). In addition, when λ'_E ≥ 0.8, the original rendering images are statistically more natural ( of , p < 0.01, Binom. test).

Figure 19 shows the preferences of the participants. For all color differences tested on the HoloLens, our method is preferred more often than the original rendering in distinction. On the other hand, the naturalness of color-contrast-enhanced images significantly decreases as λ'_E increases when λ'_E ≥ 0.6. These results show that the value of λ'_E is positively correlated with distinction but negatively correlated with naturalness, which means that there is a trade-off between contrast and consistency. However, when λ'_E = 0.4, the optimized virtual content is more distinguishable from the background ( of , p = 0.02, Binom. test), whereas the difference in naturalness between the optimized and the original virtual content is not statistically significant ( of , p = 0.77, Binom. test). We believe that our approach has successfully found a trade-off between contrast and consistency.

To subjectively evaluate the effect of hues, we performed another 2AFC experiment. Participants were asked to make two comparisons between the results of 1) our color contrast enhancement and enhancing color contrast by increasing luminance and chroma only; and 2) two different directions of hue shift (see Section 6.1). Participants wore the HoloLens and freely explored the same scene as in User Study I in various environments. For each comparison, participants freely toggled between the two rendering results and were asked which of the results is more distinguishable from the surrounding background. Each participant compared the results of three random viewports and gave a choice for each viewport of each comparison. We fixed λ'_E to  . .
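The reported statistics are binomial tests against the 50% chance level. A sketch with hypothetical counts (the paper's exact counts are elided, and whether its tests were one- or two-sided is not stated):

```python
from scipy.stats import binomtest

# Hypothetical example: if 38 of 45 choices under one lambda'_E favored the
# enhanced rendering, a binomial test against chance (p0 = 0.5) gives the
# "Binom. test" p-value reported in the text. binomtest is two-sided by default.
result = binomtest(38, n=45, p=0.5)
print(f"p = {result.pvalue:.3f}")
```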
Fig. 19: Results of our first subjective experiment, in which the participants compared our color-contrast-enhanced rendering with the original rendering. Participants' preferences (the ratio who prefer the color contrast enhancement) are shown as percentages for λ'_E = 0.2, 0.4, 0.6, 0.8, and 1.0, where λ'_E represents the scaled color difference threshold. The p-value (Binom. test) is shown above each column. The error bars represent the standard error.
Fig. 20:
Results of our second subjective experiment, in which the participants compared the results of different enhancements. Participants' preferences are shown as percentages. The p-value (Binom. test) is shown above each column. The error bars represent the standard error.

Sixteen participants, including 4 females and 12 males with an average age of  . years (range  – ), volunteered. All participants had normal or corrected-to-normal vision without any form of color blindness. Seven of them had participated in the previous study. Participants gave informed consent to take part in this study. Before starting, we required each participant to perform the eye-to-display calibration.

For each comparison, a total of 48 choices was reported by the participants. Figure 20 shows the preferences of the participants. For both comparisons, the first enhancement was preferred more often than the second one. Specifically, our color contrast enhancement is more distinguishable than increasing luminance and chroma only ( of , p = 0.03, Binom. test). Also, the hue shift direction we propose is more distinguishable than the other hue shift direction under the same conditions ( of , p = 0.01, Binom. test). These results indicate that hue shifts in an appropriate direction can further improve the visual distinctions between rendered images and the surrounding environment.

We conducted another 2AFC experiment in which the participants were asked to compare the results of our color contrast enhanced rendering with those of 1) subtraction compensation [5]; and 2) visibility-enhanced blending [9]. Similar to the previous user studies, participants wore the HoloLens, freely perceived virtual objects taken from the same scene, and then fully compared the rendered images of the two methods by freely toggling between them. For each related method, participants were asked three questions: first, which of the results is more distinguishable from the surrounding backgrounds; second, which result has higher contrast; third, which result looks more natural. Each participant compared rendered images of three random viewports and gave six choices for each question. We fixed the λ'_E of our method to  . .

A total of 12 participants, consisting of 5 females and 7 males, volunteered, aged  –  (mean  . ). None of the participants exhibited signs of any form of color blindness, and all had normal or corrected-to-normal vision. No participant took part in User Studies I and II. Participants gave informed consent to participate in this study. We required each participant to perform the eye-to-display calibration before starting the study. To further ensure the accuracy of the camera-to-display calibration, we also required the participants to be at least  cm away from surrounding environments.

Finally, we collected 36 choices for every question of each compared method. Figure 21 shows the preferences of the participants. For the first question, participants preferred our method more often than subtraction compensation ( of , p < 0.01, Binom. test). However, our algorithm is less preferred than visibility-enhanced blending ( of , p = 0.03, Binom. test). For the second question, our results are considered to have higher contrast compared with visibility-enhanced blending ( of , p < 0.01, Binom. test). For the third question, the rendered images generated by our algorithm are statistically more natural than those of the other two methods ( of , p < 0.01, Binom. test).
These results mean that, in most cases, our approach is regarded as more distinguishable and more natural than subtraction compensation, whereas the contrast and naturalness of our rendered images are significantly higher than those of visibility-enhanced blending. We compared with the subtraction compensation approach [5]; we note that more sophisticated methods based on subtraction compensation (e.g., [7], [25]) also fail to keep the original visibility of virtual content on common OST-HMDs. In addition, the results show that participants regarded the visibility-enhanced blending [9] as more distinguishable than our approach. This is because their method increases the luminance of virtual contents extensively; for applications that pay more attention to texture details, it may not be an optimal solution.

Our method does not rely on any precomputation. Therefore, it supports real-time rendering under various scenarios with dynamic camera viewports. Parameters such as the color difference can be tuned in real time. We measured the runtime performance on the HoloLens, which has a display area of  ×  pixels for each eye. We rendered displayed contents at different viewports in the Hand Interaction Examples scene, covering percentages of the display area ranging from  – , with a step of . Different surrounding backgrounds were also used in our measurement. For each percentage, we sampled ten times and calculated the average FPS.
Fig. 21: Results of our third subjective experiment, in which the participants compared our color-contrast-enhanced rendering with subtraction compensation [5] (k_v = 1 and k_b = 0. ) and visibility-enhanced blending [9] (V_t = 1. ). Participants' preferences are shown as percentages. The p-value (Binom. test) is shown above each column. The error bars represent the standard error.
Fig. 22:
Performance of our color contrast enhancement (λ'_E = 0.4) as a full-screen post-processing effect on the HoloLens, measured in FPS against the percentage of display area covered. The error bars represent the maximum and minimum FPS.

Generally, the reported FPS from the HoloLens varies from  – , depending on the number of rendered pixels, as shown in Figure 22.

7 LIMITATION
Our approach, though not free from limitations, constitutes a promising avenue for future work. Although the experimental results indicate that our color contrast enhanced rendering for OST-HMDs works well with a proper threshold of color difference, finding an adaptive threshold corresponding to the current physical environment remains an unexplored, valuable topic.

Several aspects of our implementation require elaboration. First, our method currently does not consider achromaticity. Although uncommon, it is reasonable to assume that the physical environment is completely achromatic (see the bottom of Figures 16 and 17). One possibility for incorporating this achromatic situation into our optimization would be to increase the chroma of displayed colors. Second, the luminance of some optimal colors may be higher than that of the original displayed colors, resulting in a potential increase in the power consumption of displays. In battery-powered OST-HMDs, battery life is one of the critical factors; appropriate solutions that take the display brightness into account are worth exploring in future work. Note that for applications that weight geometry over color and texture, the scope of the current work does not fit well. We consider this an orthogonal problem and save it for future work.
CONCLUSION
On the basis of a novel insight into color blending, we present an end-to-end, real-time color contrast enhancement algorithm for OST-HMDs that takes both the chromaticity and luminance of displayed colors into account. Existing methods focus solely on changing the luminance of displays or environments to improve the distinction between virtual objects and the real background, or on compensating displayed colors to approximate the color originally intended. In this work, we further consider the impact of chromaticity to improve the color perception of rendered images from the perspective of color contrast. Specifically, we use the complementary color of the background as the search direction in the LAB color space to determine the optimal color of each display pixel under various constraints.

To summarize, we strive to inspire future studies on color contrast enhanced rendering. Our current implementation achieves several key technical characteristics. First, it adaptively enhances the visibility of virtual objects. Second, it runs in real time. Third, it requires no hardware changes. Our algorithm is implemented on the GPU using pixel shaders, allowing users to tweak parameters, such as the color difference, in real time. We present results in simulated and real environments. In addition, both the objective evaluation and the user study confirm that our approach can improve the perceptual contrast between displayed content and the physical background in OST-HMDs, making virtual objects more distinguishable from surrounding backgrounds while achieving the maximum level of consistency with the original displayed color.
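For readers who want the gist of this search in code, the following is a schematic sketch under strong simplifications: colors are CIELAB triples with L* held fixed, the complementary direction is taken as the negation of the background's (a*, b*) components, the consistency constraint is a single CIE76 ΔE budget (the illustrative `max_delta_e`), and candidates are scored only by their ΔE contrast to the background. The actual method optimizes under our full chromaticity and luminance constraints and runs as a GPU pixel shader.

```python
from math import sqrt

def delta_e(c1, c2):
    """CIE76 color difference: Euclidean distance between CIELAB triples."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(c1, c2)))

def enhance_color(display_lab, background_lab, max_delta_e=20.0, steps=32):
    """Move the displayed color along the direction complementary to the
    background hue, staying within `max_delta_e` of the original color and
    keeping the candidate with the largest contrast to the background."""
    L0, a0, b0 = display_lab
    # Complementary direction: opposite the background's chromatic components.
    da, db = -background_lab[1], -background_lab[2]
    norm = sqrt(da * da + db * db)
    if norm == 0.0:
        return display_lab  # achromatic background: no complementary hue (see Limitations)
    da, db = da / norm, db / norm
    best, best_contrast = display_lab, delta_e(display_lab, background_lab)
    for i in range(1, steps + 1):
        t = (i / steps) * max_delta_e              # step outward along the direction,
        candidate = (L0, a0 + t * da, b0 + t * db)  # so delta_e to the original is t
        contrast = delta_e(candidate, background_lab)
        if contrast > best_contrast:               # keep the most contrasting candidate
            best, best_contrast = candidate, contrast
    return best
```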
REFERENCES

[1] O. Cakmakci, Y. Ha, and J. P. Rolland, “A compact optical see-through head-worn display with occlusion support,” in Proceedings of the 3rd IEEE/ACM International Symposium on Mixed and Augmented Reality, ser. ISMAR ’04. Washington, DC, USA: IEEE Computer Society, 2004, pp. 16–25.
[2] K. Kiyokawa, Y. Kurata, and H. Ohno, “An optical see-through display for mutual occlusion of real and virtual environments,” in Proceedings IEEE and ACM International Symposium on Augmented Reality (ISAR), Oct 2000, pp. 60–67.
[3] C. Gao, Y. Lin, and H. Hua, “Occlusion capable optical see-through head-mounted display using freeform optics,” in Proc. IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Nov 2012, pp. 281–282.
[4] A. Wilson and H. Hua, “Design and prototype of an augmented reality display with per-pixel mutual occlusion capability,” Opt. Express, vol. 25, no. 24, pp. 30539–30549, 2017.
[5] C. Weiland, A.-K. Braun, and W. Heiden, “Colorimetric and photometric compensation for optical see-through displays,” in Universal Access in Human-Computer Interaction. Intelligent and Ubiquitous Interaction Environments, C. Stephanidis, Ed. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009, pp. 603–612.
[6] J. D. Hincapié-Ramos, L. Ivanchuk, S. K. Sridharan, and P. P. Irani, “SmartColor: Real-time color and contrast correction for optical see-through head-mounted displays,” IEEE Transactions on Visualization and Computer Graphics, vol. 21, no. 12, pp. 1336–1348, 2015.
[7] T. Langlotz, M. Cook, and H. Regenbrecht, “Real-time radiometric compensation for optical see-through head-mounted displays,” IEEE Transactions on Visualization and Computer Graphics, vol. 22, no. 11, pp. 2385–2394, 2016.
[8] J. Kim, J. Ryu, S. Ryu, K. Lee, and J. Kim, “[POSTER] Optimizing background subtraction for OST-HMD,” in Proc. IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct), Oct 2017, pp. 95–96.
[9] T. Fukiage, T. Oishi, and K. Ikeuchi, “Visibility-based blending for real-time applications,” in Proc. IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Sep 2014, pp. 63–72.
[10] J. M. Wolfe, K. R. Kluender, D. M. Levi, L. M. Bartoshuk, R. S. Herz, R. L. Klatzky, S. J. Lederman, and D. M. Merfeld, Sensation and Perception, 4th ed. Sunderland, MA: Sinauer Associates, 2015.
[11] L. Gruber, D. Kalkofen, and D. Schmalstieg, “Color harmonization for augmented reality,” in Proc. IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Oct 2010, pp. 227–228.
[12] Y. Itoh and G. Klinker, “Vision enhancement: Defocus correction via optical see-through head-mounted displays,” in Proceedings of the 6th Augmented Human International Conference, ser. AH ’15. New York, NY, USA: ACM, 2015, pp. 1–8.
[13] K. Oshima, K. R. Moser, D. C. Rompapas, J. E. Swan, S. Ikeda, G. Yamamoto, T. Taketomi, C. Sandor, and H. Kato, “SharpView: Improved clarity of defocussed content on optical see-through head-mounted displays,” in , Mar 2016, pp. 253–254.
[14] C. Menk and R. Koch, “Truthful color reproduction in spatial augmented reality applications,” IEEE Transactions on Visualization and Computer Graphics, vol. 19, no. 2, pp. 236–248, 2013.
[15] Y. Itoh, M. Dzitsiuk, T. Amano, and G. Klinker, “Semi-parametric color reproduction method for optical see-through head-mounted displays,” IEEE Transactions on Visualization and Computer Graphics, vol. 21, no. 11, pp. 1269–1278, 2015.
[16] T. Oskam, A. Hornung, R. W. Sumner, and M. Gross, “Fast and stable color balancing for images and augmented reality,” in , Oct 2012, pp. 49–56.
[17] G. Wetzstein, W. Heidrich, and D. Luebke, “Optical image processing using light modulation displays,” Computer Graphics Forum, vol. 29, no. 6, pp. 1934–1944, 2010.
[18] S. Mori, S. Ikeda, A. Plopski, and C. Sandor, “BrightView: Increasing perceived brightness of optical see-through head-mounted displays through unnoticeable incident light reduction,” in Proc. IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2018, pp. 251–258.
[19] Y. Itoh, T. Langlotz, D. Iwai, K. Kiyokawa, and T. Amano, “Light attenuation display: Subtractive see-through near-eye display via spatial color filtering,” IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 5, pp. 1951–1960, 2019.
[20] K.-H. Lee and J.-O. Kim, “Visibility enhancement via optimal two-piece gamma tone mapping for optical see-through displays under ambient light,” Optical Engineering, vol. 57, no. 12, pp. 1–13, 2018.
[21] K. Rathinavel, G. Wetzstein, and H. Fuchs, “Varifocal occlusion-capable optical see-through augmented reality display based on focus-tunable optics,” IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 11, pp. 3125–3134, 2019.
[22] A. Maimone and H. Fuchs, “Computational augmented reality eyeglasses,” in Proc. IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Oct 2013, pp. 29–38.
[23] Q. Y. J. Smithwick, D. Reetz, and L. Smoot, “LCD masks for spatial augmented reality,” in Stereoscopic Displays and Applications XXV, A. J. Woods, N. S. Holliman, and G. E. Favalora, Eds., vol. 9011, Mar 2014, p. 90110O.
[24] T. Rhodes, G. Miller, Q. Sun, D. Ito, and L.-Y. Wei, “A transparent display with per-pixel color and opacity control,” in ACM SIGGRAPH 2019 Emerging Technologies, ser. SIGGRAPH ’19. New York, NY, USA: ACM, 2019, pp. 5:1–5:2.
[25] J. Ryu, J. Kim, K. Lee, and J. Kim, “Colorimetric background estimation for color blending reduction of OST-HMD,” in , Dec 2016, pp. 1–4.
[26] R. O. Brown and D. I. MacLeod, “Color appearance depends on the variance of surround colors,” Current Biology, vol. 7, no. 11, pp. 844–849, Nov 1997.
[27] M. A. Webster, G. Malkoc, A. C. Bilson, and S. M. Webster, “Color contrast and contextual influences on color appearance,” Journal of Vision, vol. 2, no. 6, pp. 7–7, 2002.
[28] V. Ekroll and F. Faul, “Transparency perception: the key to understanding simultaneous color contrast,” J. Opt. Soc. Am. A, vol. 30, no. 3, pp. 342–352, Mar 2013.
[29] S. Klauke and T. Wachtler, ““Tilt” in color space: Hue changes induced by chromatic surrounds,” Journal of Vision, vol. 15, no. 13, pp. 17–17, 2015.
[30] D. Jameson and L. M. Hurvich, “Perceived color and its dependence on focal, surrounding, and preceding stimulus variables,” J. Opt. Soc. Am., vol. 49, no. 9, pp. 890–898, Sep 1959.
[31] J. Krauskopf, Q. Zaidi, and M. B. Mandler, “Mechanisms of simultaneous color induction,” J. Opt. Soc. Am. A, vol. 3, no. 10, pp. 1752–1757, Oct 1986.
[32] D. Jameson and L. M. Hurvich, “Opponent chromatic induction: Experimental evaluation and theoretical account,” J. Opt. Soc. Am., vol. 51, no. 1, pp. 46–53, Jan 1961.
[33] V. Ekroll and F. Faul, “New laws of simultaneous contrast?” Seeing and Perceiving, vol. 25, no. 2, pp. 107–141, 2012.
[34] S. Ratnasingam and B. L. Anderson, “What predicts the strength of simultaneous color contrast?” Journal of Vision, vol. 17, no. 2, pp. 13–13, 2017.
[35] J. L. Gabbard, J. E. Swan, J. Zedlitz, and W. W. Winchester, “More than meets the eye: An engineering study to empirically examine the blending of real and virtual color spaces,” in Proc. IEEE Virtual Reality Conference (VR), Mar 2010, pp. 79–86.
[36] F. W. Campbell and J. G. Robson, “Application of Fourier analysis to the visibility of gratings,” The Journal of Physiology, vol. 197, no. 3, pp. 551–566, Aug 1968.
[37] S. J. Anderson, K. T. Mullen, and R. F. Hess, “Human peripheral spatial resolution for achromatic and chromatic stimuli: limits imposed by optical and retinal factors,” The Journal of Physiology, vol. 442, no. 1, pp. 47–64, 1991.
[38] B. C. Kress and W. J. Cummings, “11-1: Invited Paper: Towards the ultimate mixed reality experience: HoloLens display architecture choices,” SID Symposium Digest of Technical Papers, vol. 48, no. 1, pp. 127–131, 2017.
[39] M. Mahy, L. Van Eycken, and A. Oosterlinck, “Evaluation of uniform color spaces developed after the adoption of CIELAB and CIELUV,”