SmartNight: Turning Off the Lights on Android
Andrew Banman
University of Minnesota
100 Church St. S.E.
Minneapolis, Minnesota
[email protected]
ABSTRACT
Smartphone users benefit from content with dark color schemes: increasingly common OLED displays are more power efficient the darker the display, and many users prefer a dark display for night time use. Despite these benefits, many applications and the majority of web content are drawn with white backgrounds. There are many partial solutions to darken the displayed content, but none work in all situations. Enter SmartNight, a content-aware solution to dynamically darken content on Android. By trading off content fidelity, Android with SmartNight displays content with nearly 90% lower average picture level. It is implemented in the Android framework and requires no external support. It seamlessly incorporates existing solutions, making it a bridge between the state of the art and future solutions.
1. INTRODUCTION
Dark color schemes on mobile phones are beneficial to users in at least three ways. First, organic light-emitting diode (OLED) displays consume less power the darker the display. Each pixel in an OLED display is individually lit, meaning a pixel presenting black color in theory draws no power. A smartphone's display is the biggest individual energy consumer on the device (for uses other than phone calls) [2], so the improved efficiency of OLED is highly desirable. OLED displays are now commonplace, such as in the new Samsung Galaxy S10 smartphone [8]. Dark content gives the user the most energy benefit from this technology.

Second, bright screens during nighttime smartphone use cause mild health problems. Medical research has shown white light suppresses an individual's melatonin levels, interrupting their circadian rhythm and leading to irregular sleep. Some studies and preliminary measurements have even linked inhibited melatonin with diabetes, heart disease, and some types of cancer. It has also been shown that blue light has a larger effect on melatonin levels than red or green light [6], which explains the recent popularity of display-warming solutions [4], where each pixel is shifted towards red while preserving relative color differences.

Third, these health concerns are related to the general discomfort caused by bright lights in a dark room. Many professional programmers choose light-on-dark color schemes to reduce eye strain. Others simply prefer the appearance of these themes. Whatever the motivation, there appears to be a large demand for more dark content.
The most familiar solutions to the white screen problem are application "dark" (or "night") modes. Typically these modes swap the default UI color scheme with a dark grey or black background and light grey or white text. Dark modes are gaining popularity, and it would be very interesting to quantify this trend with a survey of applications providing dark modes. This trend towards dark schemes is exemplified by Google's decision to include a global dark mode in Android Q. According to recent rumors, the user can set the UI globally to a dark theme while automatically switching applications to their internal dark modes [9].

In an ideal world all application developers would ship a dark mode, but even this hypothetical does not account for the fact that the majority of web content is dominated by white pixels. One study found that 80% of website content is white [5]. Web browsing has traditionally been problematic on mobile phones for many reasons, but it is too important a use case to ignore. This specific challenge is addressed by the Chameleon browser [3], a custom browser solution that uses a proxy server to dynamically change the color of websites to produce darker content. Chameleon gives good results, but its eventual implementation seems unlikely since it requires extensive proxy support. A local solution that doesn't depend on external services is more likely to be used.

In this paper I explore a global, content-aware solution that dynamically transforms bright content into dark content. The goal is to balance the tradeoff between content fidelity and dark screens, with a bias towards the latter. My solution, called SmartNight, fills the gaps between existing dark-mode solutions with minimal adoption barriers. It is implemented in Android 9.0.

The rest of the paper proceeds as follows: § 2 describes the design of SmartNight, § 3 presents the results, and finally there is discussion and recommendations for future work in § 4.

Application views are composed of multiple windows, typically three at any time: the status bar, the application UI, and the navigation bar. Each window is backed by an independent Layer provided by the SurfaceFlinger service. Note that there are different names for the drawing surface in different contexts: window in the context of the WindowManager, or layer in SurfaceFlinger's parlance. SurfaceFlinger is responsible for creating the independent Layers, compositing them, and sending them to the display hardware for the final rendering.

Each Layer holds a BufferQueue that synchronously shares graphical buffers between the producer (i.e. the application) and its consumers (e.g. SurfaceFlinger). Importantly, buffers are shared by reference via a synchronous latching mechanism; they are never copied. BufferQueue producers are implemented by the Surface class, part of the public Java API. Most Surfaces are eventually consumed by SurfaceFlinger, but others are rendered directly by the application.
2. SMARTNIGHT DESIGN
At a high level, SmartNight consists of two major steps: an augmented SurfaceFlinger 1) performs some straightforward content analysis to decide whether or not to transform each visible Layer, and 2) transforms the color of the Layers identified in step 1. The transformation is accomplished by applying a color transformation matrix. These steps are covered in the following subsections.

SmartNight's goal is to transform bright-dominant screens to dark-dominant screens, while preserving those that are already dark-dominant. To do this it must determine the dominant colors of each currently visible Layer. Luckily, SurfaceFlinger can do this analysis more-or-less asynchronously with respect to the actual rendering. When SurfaceFlinger receives an asynchronous invalidation message from one of its Surface producers, it inspects the Layer for changes. If the Layer has been latched, we infer the content was updated and a refresh is needed. SmartNight inserts the content analysis step at this point in the inspection.

Content analysis is done in three phases: sampling the Layer's graphical buffer, computing the approximate overall luminance of the image, and comparing the luminance against threshold values to determine whether the content is too bright and thus should be modified. Additional steps can (and should) be added to improve the judgement. For example, video buffers should be identified and judged not to be transformed, so as not to disrupt playback.
This initial version of SmartNight uses a very simple sampling method to determine whether the image is comprised of more bright or dark pixels. Roughly 2500 pixels are sampled by incrementing through vertical and horizontal strides. The horizontal stride is periodically offset to avoid sampling along vertical lines through the 2-dimensional image. This offset prevents oversampling vertically aligned colors, such as a narrow vertical bar or justified text, and improves the accuracy of smaller sample sizes.

Each pixel is extracted according to the buffer's pixel format, as specified in the containing Layer. The pixel format specifies the byte layout of the red, green, blue, and sometimes alpha values. In this version the alpha channel is discarded, but it may prove useful to other methods. I also record the maximum values of each channel for later comparison.

An alternative method could be to search for the background color instead of the overall dominant color. Assuming the background color is constant (or nearly so), rather than sample scattered individual pixels, look at contiguous groups of pixels. If each pixel in the group has identical color then it very likely matches the background color. This can be part of an optimization: keeping a similar stride pattern as before, inspect each pixel in a cache-aligned block and only include it in the overall count if its neighbors have the same (or very nearly the same) RGB values. In this manner one can exclude pixels making up foreground images in trivial additional time.
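As an illustration, the stride pattern described above can be sketched as follows. This is a simplified model, not the actual SurfaceFlinger patch: the class name SampleGrid is hypothetical, and the buffer is represented as a flat ARGB_8888 int array rather than a native graphics buffer.

```java
// Hypothetical sketch of SmartNight's stride-based sampling.
// The real implementation walks a native graphics buffer inside
// SurfaceFlinger; here the buffer is a flat ARGB_8888 int array.
import java.util.ArrayList;
import java.util.List;

public class SampleGrid {
    /**
     * Sample roughly `target` pixels from a width x height buffer by
     * striding vertically and horizontally. The starting column is
     * periodically offset so samples do not all fall on the same
     * vertical lines (e.g. a narrow vertical bar or justified text).
     */
    public static List<Integer> samplePixels(int[] buffer, int width, int height, int target) {
        List<Integer> samples = new ArrayList<>();
        // Choose strides so that (width/xStride) * (height/yStride) ~ target.
        int grid = Math.max(1, (int) Math.sqrt(target));
        int xStride = Math.max(1, width / grid);
        int yStride = Math.max(1, height / grid);
        int offset = 0;
        for (int y = 0; y < height; y += yStride) {
            for (int x = offset; x < width; x += xStride) {
                samples.add(buffer[y * width + x]);
            }
            // Shift the starting column each row to break vertical alignment.
            offset = (offset + xStride / 3) % xStride;
        }
        return samples;
    }
}
```

With a 300x300 buffer and a target of 2500 samples this yields a 50x50 grid whose starting column shifts every row.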
There are various formulas for computing luminance from RGB color channels. In this implementation I use a simplified formula that nevertheless accounts for the different brightness contributions of each RGB channel: l = 3r + b + 4g. The simplification is suitable because 1) computation should be as fast as possible, and 2) we don't need precise results to make a good judgement.

Once the average luminance of the layer is computed we must decide whether to perform the inversion. Setting this threshold too high will permit more white displays to be shown, while setting it too close to the center will be less stable. I arrived, through trial and error, at 60% of full white as the floor for a bright pixel and 40% as the ceiling for a dark pixel. Pixels within the range are not counted. If the bright pixels outnumber the dark then a flag is set to direct SurfaceFlinger to invert the entire Layer.
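In code, the luminance formula and the two-threshold judgement might look like the following sketch. The class and constant names are my own, not from the actual patch; the real logic lives inside SurfaceFlinger.

```java
// Sketch of the luminance judgement, assuming ARGB_8888 pixels.
// Class and constant names are illustrative, not from the actual patch.
public class LuminanceJudge {
    // Full white under l = 3r + b + 4g is 8 * 255 = 2040.
    static final int FULL_WHITE = 8 * 255;
    static final int BRIGHT_FLOOR = (int) (0.6 * FULL_WHITE); // floor for a "bright" pixel
    static final int DARK_CEILING = (int) (0.4 * FULL_WHITE); // ceiling for a "dark" pixel

    /** Fast approximate luminance: l = 3r + b + 4g (alpha discarded). */
    public static int luminance(int argb) {
        int r = (argb >> 16) & 0xFF;
        int g = (argb >> 8) & 0xFF;
        int b = argb & 0xFF;
        return 3 * r + b + 4 * g;
    }

    /**
     * Decide whether a Layer should be inverted: count bright and dark
     * pixels among the samples (ignoring the middle band) and invert
     * only when bright pixels outnumber dark ones.
     */
    public static boolean shouldInvert(int[] sampledPixels) {
        int bright = 0, dark = 0;
        for (int px : sampledPixels) {
            int l = luminance(px);
            if (l >= BRIGHT_FLOOR) bright++;
            else if (l <= DARK_CEILING) dark++;
            // pixels between the thresholds are not counted
        }
        return bright > dark;
    }
}
```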
As discussed, almost every rendered object passes through the SurfaceFlinger service, and each of these is backed by a Layer. During the composition step, SurfaceFlinger renders each visible Layer sequentially (according to its z-position) into earlier-computed cropped dimensions. It turns out one can set the color transform for the renderer at the time SurfaceFlinger draws each layer. This permits the simple logic: if the layer should be transformed, then set the renderer's color matrix and draw, else clear the renderer's color matrix and draw.
But is this an efficient method? Since Android 3.0, all images associated with an Application's windows are rendered using hardware acceleration. In other words, the GPU is the default renderer for all applications. GPUs are extremely efficient at performing vector computations, such as multiplying by a color transformation matrix. Thus, by default we can swap color matrices virtually for free. The contradicting case occurs when applications explicitly opt to use CPU rendering, but rendering of this kind escapes SurfaceFlinger anyway, as discussed earlier.
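The inversion itself can be written as a 4x4 color matrix acting on homogeneous color coordinates: each channel c maps to 1 - c. The sketch below is illustrative plain-Java matrix math; the actual patch sets an equivalent matrix on SurfaceFlinger's GPU renderer (which is C++), and the matrix convention there may place the translation terms differently.

```java
// Illustrative 4x4 color matrices for per-layer inversion. The real
// patch hands an equivalent matrix to SurfaceFlinger's renderer.
public class InvertMatrix {
    // Row-major 4x4 matrix mapping [r, g, b, 1] -> [1-r, 1-g, 1-b, 1].
    public static final float[] INVERT = {
        -1,  0,  0,  1,
         0, -1,  0,  1,
         0,  0, -1,  1,
         0,  0,  0,  1,
    };

    public static final float[] IDENTITY = {
        1, 0, 0, 0,
        0, 1, 0, 0,
        0, 0, 1, 0,
        0, 0, 0, 1,
    };

    /** Apply a row-major 4x4 color matrix to an RGB color in [0, 1]. */
    public static float[] apply(float[] m, float r, float g, float b) {
        float[] in = { r, g, b, 1f };
        float[] out = new float[4];
        for (int row = 0; row < 4; row++) {
            for (int col = 0; col < 4; col++) {
                out[row] += m[row * 4 + col] * in[col];
            }
        }
        return out; // out[3] remains 1
    }
}
```

The per-layer logic from the text then reduces to choosing INVERT or IDENTITY before each draw, which a GPU applies essentially for free.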
3. RESULTS
There are two main dimensions on which to evaluate SmartNight and its competitors. The first dimension is display power reduction. One would like to know by how much SmartNight reduces battery consumption for OLED devices. In the absence of real power measurements, one can measure how dark, on average, the same images are when rendered with SmartNight enabled versus the competition. A common OLED display metric is the average picture level (APL). I compare SmartNight to the default display as well as to a red-shift strategy below.

Each OLED device has its own power characteristic, but they all share the property that power increases the closer the display is to full white. APL measures the overall distance from black (0) to full white (1). Thus, APL serves as a proxy measure of the power required to display an image. I compare the APL for common Android screens and for popular websites in Figures 1 and 2. For each image I calculate the APL of the default display, SmartNight, and red-shifted versions of each where only the red pixels are counted. This is an extreme flavor of red-shift that gives a low bound on the APL for such display-warming schemes.

In every case, the default display has the highest APL (as expected) and SmartNight had lower APL, a mean reduction of 72%. The mean default APL was 0.75, corroborating the result from [5] that smartphone screens are dominated by bright pixels. SmartNight also lowered APL by more than the default red-shift transformation in 17 cases, while there were 11 cases where red-shift had lower APL than SmartNight. These images were near the bright content cutoff, meaning the inversion has a smaller effect on APL. However, SmartNight combined with red-shift did better than red-shift alone in 92% of cases, was equivalent in the remaining cases, and reduced APL by 89% relative to default.
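For concreteness, APL can be computed as the mean normalized pixel intensity. The sketch below assumes ARGB_8888 input and equal channel weights (the exact weighting behind the reported numbers is not specified in the text), plus the red-only variant used as the lower bound for display-warming schemes.

```java
// Sketch of the APL metric. Equal channel weighting is an assumption;
// the paper does not state the exact formula behind its measurements.
public class AveragePictureLevel {
    /**
     * Mean normalized intensity over all pixels: 0 = full black,
     * 1 = full white.
     */
    public static double apl(int[] argbPixels) {
        double sum = 0;
        for (int px : argbPixels) {
            int r = (px >> 16) & 0xFF;
            int g = (px >> 8) & 0xFF;
            int b = px & 0xFF;
            sum += (r + g + b) / (3.0 * 255.0);
        }
        return sum / argbPixels.length;
    }

    /**
     * Lower bound on the APL of any display-warming (red-shift) scheme:
     * count only the red channel, as if green and blue were removed.
     */
    public static double redOnlyApl(int[] argbPixels) {
        double sum = 0;
        for (int px : argbPixels) {
            sum += ((px >> 16) & 0xFF) / (3.0 * 255.0);
        }
        return sum / argbPixels.length;
    }
}
```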
Figure 3 shows the darkening of each scheme relative to the default display, and conclusively shows SmartNight's superior darkening effect, especially when combined with red-shift.

Figure 1: Average Picture Level of pre-installed applications and common Android screens. Larger APL values indicate an image closer to full white.

Figure 2: Average Picture Level of the top 15 most popular webpages [1]. Larger APL values indicate an image closer to full white.

Figure 3: Average Picture Level reduction relative to the default image of common Android screens and popular webpages. Larger values indicate larger APL reductions, i.e. a darker overall effect.

The second dimension is the quality of experience (QoE). The QoE is further divided into quality based on latency – i.e. how smooth the display appears – and the consistency of the color representation.
SmartNight was evaluated on Android emulation software shipped with the Android Open Source Project (AOSP) source code. Various graphics indicators were measured with the Android debugger (adb) tool, specifically with the dumpsys subcommand.

As discussed previously, there should be no degradation in video latency because the bulk of the work is done asynchronously with layer rendering and applying color transformations is very cheap. There was no perceived slowdown or framerate drop. Frames per second did not drop below the maximum 60fps throughout video playback on the VLC application.

However, SmartNight performed poorly in terms of the number of "janky" frames (those deemed too slow) and rendering speed distribution. Table 1 summarizes these results. The default color inversion feature also incurred a small performance hit. This is most likely due to the added cost of applying the color transformation to each layer. These results need to be corroborated by real hardware measurements, but they do show what appears to be real performance loss. However, extra time spent rendering frames will not impact power significantly since the CPU is not a major consumer. Furthermore, the rendering speed is not the final word on display health. More study is needed to assess whether the performance is in acceptable bounds relative to the benefits of darkening.

                      SmartNight   Default   Inverted
    Frames rendered:        1048      1189       1079
    Janky frames:         79.48%    27.17%     33.92%
    50th percentile:        19ms       9ms       10ms
    90th percentile:        27ms      22ms       24ms
    95th percentile:        34ms      25ms       28ms
    99th percentile:        61ms      44ms       57ms

Table 1: Framerate statistics gathered with adb while navigating the VLC application by hand. About 1000 frames were sampled in each case over the course of a similar input sequence. Janky frames were identified as too slow. Various percentiles in the distribution of rendering speeds are also shown. The default display and the default color inversion feature were measured as well as SmartNight.

A final note on latency measurements: it seems like a contradiction that framerate is not affected while the average rendering time increased for SmartNight. This may be due to peculiarities with how the frames per second are counted in the emulation environment. Another peculiarity I observed while testing was that video frames did not seem to be counted where I expected them under the VLC package. It could be they were not factored into the reported framerate by adb. Nevertheless, I observed no video slowdown or jitter.
SmartNight has some problems with visual artifacts. The first one a user would notice is white flickering during some transitional animations, such as minimizing an application or swiping from the right. These are probably caused by incorrect content analysis placement. In this implementation, the content analysis is only done when SurfaceFlinger receives an invalidate message from a client producer. I suspect this code path is not exercised for the first drawn frame of a layer. A likely fix is to duplicate the content analysis at layer creation. In fact, this may be a workaround for the latency problem: rather than re-analyze a layer every time it changes, it may be sufficient to analyze the above-the-fold content and infer that the rest follows a similar color scheme. More tinkering is called for.

Video also remains a problem for SmartNight. When viewing video in portrait mode there are often large black bars above and below the video – as a consequence of maintaining wide aspect ratios – and in that case most frames will be categorized as dark and therefore won't be transformed. Unfortunately, in the general case it's more than likely a bright frame from the video will trip the transformation. This looks like a flickering video, alternating between inverted colors and real colors.

I envision various potential solutions to the video problem. First, one could identify the type of producer for each Layer's BufferQueue; if the buffer came from a video stream then we can skip the content analysis step and decide not to transform the colors. In the best case, the identification would follow from metadata belonging to the BufferQueue, Layer, or some other encapsulating object. Initial experiments along this line failed, but I wouldn't rule out the option. The Layer's window type was tested (specifically TYPE_APPLICATION_MEDIA), as well as GraphicBuffer's usage flags. These were early shots in the dark but, with more experience and investigation, I have a hunch the answer is hiding somewhere in the SurfaceFlinger stack. Alternatively, perhaps the necessary information could be added to the Layer at the time of creation in another patch. Failing the simple solution, one could experimentally guess video content by taking buffer update timings – presumably frequently updated layers are displaying video playback. Finally, the content analysis algorithm could be smart enough to ignore most video content. Previously I suggested an algorithm for determining the background color by looking for contiguous, identical pixels. With clever shape selection of these regions one could rule out all manner of natural images typically seen in video.
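The buffer-update-timing heuristic could be sketched like this. The class, the window size, and the roughly-24fps cutoff are all illustrative guesses rather than anything from the actual implementation.

```java
import java.util.ArrayDeque;

// Hypothetical heuristic: treat a layer as "probably video" when its
// buffer is latched at a sustained, playback-like rate. The names and
// the ~24fps cutoff are illustrative guesses, not part of the patch.
public class VideoGuess {
    private static final int WINDOW = 8;            // recent latches to remember
    private static final long MAX_INTERVAL_MS = 42; // ~24fps or faster

    private final ArrayDeque<Long> latchTimesMs = new ArrayDeque<>();

    /** Record a buffer latch at time t (milliseconds). */
    public void onLatch(long tMs) {
        latchTimesMs.addLast(tMs);
        if (latchTimesMs.size() > WINDOW) latchTimesMs.removeFirst();
    }

    /** True if every recent inter-latch gap looks like video playback. */
    public boolean probablyVideo() {
        if (latchTimesMs.size() < WINDOW) return false;
        Long prev = null;
        for (long t : latchTimesMs) {
            if (prev != null && t - prev > MAX_INTERVAL_MS) return false;
            prev = t;
        }
        return true;
    }
}
```

A layer judged "probably video" would simply skip the content analysis and keep its real colors, avoiding the inversion flicker described above.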
4. CONCLUSIONS AND FUTURE WORK
SmartNight is a novel step towards a global night mode on Android smartphones. It works well with existing night-optimized content, like dark websites and application-specific night modes. Initial measurements show a simple content-aware color inverter algorithm reduces APL by 72%, and that this can be pushed as high as 89% with a color warming transformation. These gains come at no perceived framerate drop, although there are some artifacts that negatively affect the QoE.

There are a number of areas for improvement that make this an interesting project to continue. More work needs to be done to make the experience smoother; in particular, the color flickering problem must be solved. Research should be done on how to incorporate non-SurfaceFlinger-rendered buffers, and videos should be more robustly excluded from the darkening color transformation.

If and when this work is complete, a thorough technical evaluation is needed to definitively measure any energy savings on OLED devices. Next, a user study must quantitatively assess the QoE, which ultimately comes down to a tradeoff between content fidelity and night time comfort. I predict that a significant portion of users will appreciate the benefits of SmartNight enough to forgive the occasional inverted picture.
5. REFERENCES

[1] Alexa top 500 global sites, 2015.
[2] A. Carroll and G. Heiser. An analysis of power consumption in a smartphone. In Proceedings of the 2010 USENIX Conference on USENIX Annual Technical Conference, USENIXATC'10, pages 21–21, Berkeley, CA, USA, 2010. USENIX Association.
[3] M. Dong and L. Zhong. Chameleon: A color-adaptive web browser for mobile OLED displays. In Proceedings of the 9th International Conference on Mobile Systems, Applications, and Services, MobiSys '11, pages 85–98, New York, NY, USA, 2011. ACM.
[4] f.lux: software to make your life better, 2019.
[5] A. Laaperi.