Deadeye: A Novel Preattentive Visualization Technique Based on Dichoptic Presentation
Andrey Krekhov, Student Member, IEEE, and Jens Krüger, Member, IEEE
Fig. 1. Deadeye is a preattentive visualization technique that can guide attention in various visualization scenarios. In contrast to existing methods such as flickering, shape, or color, our technique does not modify any visual property of the target object. Possible application scenarios include, e.g., highlighting of lines in a line chart (left), comparative visualization including its entertainment derivatives (middle: spot-the-difference puzzle), and scientific visualization of chemical reactions (right).
Abstract—Preattentive visual features such as hue or flickering can effectively draw attention to an object of interest – for instance, an important feature in a scientific visualization. These features appear to pop out and can be recognized by our visual system independently of the number of distractors. Most cues do not take advantage of the fact that most humans have two eyes. In cases where binocular vision is applied, it is almost exclusively used to convey depth by exposing stereo pairs. We present Deadeye, a novel preattentive visualization technique based on presenting different stimuli to each eye. The target object is rendered for one eye only and is instantly detected by our visual system. In contrast to existing cues, Deadeye does not modify any visual properties of the target and, thus, is particularly suited for visualization applications. Our evaluation confirms that Deadeye is indeed perceived preattentively. We also explore a conjunction search based on our technique and show that, in contrast to 3D depth, the task cannot be processed in parallel.
Index Terms—Popout, preattentive vision, comparative visualization, dichoptic presentation
INTRODUCTION
• Andrey Krekhov and Jens Krüger are with the Center of Visual Data Analysis and Computer Graphics (COVIDAG), University of Duisburg-Essen. E-mail: {andrey.krekhov, jens.krueger}@uni-due.de.
• Jens Krüger is also with the SCI Institute, University of Utah. E-mail: [email protected].

Designing comprehensive visualizations requires a deep understanding of how perception actually works. After all, the visual system is one of our most important tools for acquiring and parsing the information that surrounds us. Making efficient use of certain visual characteristics helps us to create visualizations that excel in their usability and user performance.

In particular, drawing the attention of users to certain elements is a research subject that is continuously being worked on in various fields, including psychology, computer science, psychophysics, and biology. Researchers have discovered several visual cues that can capture and guide our attention to the object(s) of interest. The most prominent example is probably the search function of a PDF viewer, web browser, or text editor: it uses color to highlight the occurrences of a query, which allows us to instantly locate the results. A more sophisticated example is a medical visualization that utilizes flickering to highlight suspicious cells or tissue and helps doctors with exploring the data.

Cues such as color, flickering, shape, size, and motion are examples of so-called preattentive visual features. Such features make an object pop out and allow us to grasp the object's presence in usually less than 250 ms. One important property of preattentive cues is that they perform equally well with an increasing number of distractors – the search time for our visual system remains constant.
In other words, those features are processed in parallel by our visual system and are not searched in a serial fashion. That property is crucial when we revise our example with a full-page text search or the exploration of a huge medical dataset.

This paper summarizes Deadeye, a novel preattentive visualization technique that is based on the dichoptic presentation phenomenon. We render two different images for the two eyes. The images have one difference: the object that should pop out is presented to only one eye. Surprisingly, previous research stated that binocular rivalry, an effect based on showing two different images, is not processed in parallel, with the exception of luster effects. Since then, those dichoptic techniques have lost popularity as possible popout effects.

Fig. 2. Preattentive visual cues and conjunction search. (a) Color as cue: The target object is a red circle among blue distractors and can be recognized preattentively. (b) Conjunction of color and shape: The target object is either a blue square or a red circle. We have to search each object in a serial fashion to find the target. (c) Conjunction of stereo and color: The target object is either in the back plane and red or in the front plane and blue. Images redrawn from [20] and [37].

Our contribution is twofold. First, our finding opens the way for further research on that technique and encourages derived applications in visualization (cf. Figure 1). Second, Deadeye is the first preattentive technique that does not modify any visual properties of the target object. All existing cues have to alter the target in one way or another – be it reshaping, recoloring, or introducing motion. These changes can result in data misinterpretation, usually a highly undesired side effect. Furthermore, especially in visualization, most properties have a certain purpose and meaning, and reserving a whole dimension such as color or position for the visual popout is an expensive tradeoff.
In contrast, Deadeye solves these issues by preserving all visual properties of the target.

The paper shows that Deadeye is indeed perceived preattentively by conducting a state-of-the-art study that is common for such cues. Our evaluation also underlines the general applicability of the technique, as not a single participant reported headache or other similar physical strains. Three additional explorative experiments illustrate the usage of our technique in different real-world visualization scenarios.

In addition, we examine the phenomenon of a conjunction search based on Deadeye. Although preattentive features are processed in parallel, a conjunction of multiple cues usually leads to a serial search (cf. Figure 2). In most cases, we have to inspect each object individually and check for its properties. Hence, the time needed for the task increases linearly with the number of distractors. However, there exist a few exceptions, one of which is 3D depth. Previous research shows that the depth cue can be combined with, e.g., color or motion, and still be processed in parallel. As our technique also makes use of the binocular visual system, one might assume that Deadeye shares that property. To investigate that claim, we conducted a study combining our technique with hue variations. The results clearly show a significant increase in the time needed to accomplish the task and a decrease in overall accuracy, exposing the serial character of that task.
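The contrast between parallel and serial search can be made concrete with a toy response-time model. The sketch below is illustrative only – the base time and per-item inspection cost are hypothetical values, not measurements from our study.

```python
# Toy response-time model for visual search (illustrative parameters only):
# a preattentive feature yields a flat response time, whereas a serial
# conjunction search grows linearly with the number of distractors.

def expected_search_time_ms(set_size, serial, base_ms=200.0, per_item_ms=40.0):
    """Expected time to find a target among `set_size` items."""
    if serial:
        # On average, half of the items are inspected before the target is found.
        return base_ms + per_item_ms * set_size / 2.0
    return base_ms  # parallel processing: independent of the set size

for n in (4, 8, 16, 30):
    print(f"{n:2d} items: parallel {expected_search_time_ms(n, False):.0f} ms, "
          f"serial {expected_search_time_ms(n, True):.0f} ms")
```

The flat versus linear trend of this model is exactly what the set-size experiments in Section 4 are designed to distinguish.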
BACKGROUND AND RELATED WORK
The human visual system works in a rather dynamic way. Our eyes are able to perceive detailed information in a very limited area only. That area is determined by the fovea, the central part of our retina. The fovea provides high-resolution images but covers only between one and two degrees of vision. In order to gather all the data, our eyes perform rapid saccadic movements between states of fixed steadiness, i.e., our eyes scan for interesting objects and determine the next location for gathering a high-resolution fragment. Humans are not aware of the process, since the interchange between saccades and fixations happens three to four times per second. For a deeper insight into the basics of human vision, please refer to the research by Yarbus [57, 58], Noton et al. [38], and Itti et al. [24].

Early research on the interplay between eye movement and data processing discovered several visual features that can be detected in a split second. That speed led to the assumption that these features can be detected preattentively. As pointed out by Healey et al. [20], the term preattentive is not entirely correct, since a brief period of focus is still required to perceive these cues. However, we will stick to that term as it is used throughout the literature.

Such preattentive visual features can be detected within one focus period before the saccadic eye movement is triggered. Since the saccade has an initiation time of 200-250 ms [20], researchers apply that time threshold to verify whether a visual variable is preattentive. In general, experiments are conducted as follows: an image with a number of distractors and possibly a target object is exposed to the participants, as shown in Figure 2. The participants have to decide whether a target object was present. The image is shown for either less than 250 ms (e.g., [16]) or until the participants make the decision via a button press (e.g., [5]).
In the first case, the error rate is considered the primary indicator, whereas in the second case, one takes both the error rate and the reaction time into account. The experiments are conducted with varying set sizes in order to prove that the error rate and/or reaction time remain constant with an increasing number of distractor objects.

The work by Healey et al. [20] summarizes and provides references for the following 16 preattentive visual cues: orientation, length, closure, size, curvature, density, number, hue, luminance, intersections, terminators, 3D depth, flicker, direction of motion, velocity of motion, and lighting direction. A prominent example is hue: consider a red dot that pops out in a set of blue distractor dots. For instance, Nagy et al. [35] conducted experiments to show how differences in color between distractors and the target object affect our search efficiency. As expected, a rather small color difference leads to an increased search time and diminishes our ability to process that visual feature in a preattentive way.

A more recent summary suited for a broader audience is provided by Wolfe et al. [56]. Amongst other things, the article includes a classification of discovered features based on their likeliness to perform as guidance attributes. In addition, the authors summarize how the prior history of an observer influences the guidance of attention.

The visual feature most closely related to our research is the luster effect discovered by Wolfe et al. [55]. Hereby, the target is rendered dimmer than the background for one eye and brighter than the background for the other eye. The result is often said to pop out because of a metallic sheen. The authors also experimented with other dichoptic presentation techniques and concluded that they cannot be perceived preattentively and that luster is an exception. Our paper partially challenges these assumptions, as Deadeye also works preattentively.

Another binocularity-based and closely related visual feature is 3D depth.
Although previous research suggested that preattentive detection is solely based on 2D image features, Enns et al. [12, 13] found that our visual system can also access visual features related to the 3D scene information. The authors concluded that both the lighting direction and three-dimensionality can be detected preattentively.

Different theories and models aim to explain the underlying nature of preattentive processing. Among the prominent ones is the feature integration theory by Treisman [50]. The model decomposes the low-level visual system into feature maps for each specific visual cue. Each map is encoded in parallel, which leads to an almost instantaneous detection of the feature. Other examples of such models include the texton theory [27] and the Boolean map theory [22]. A detailed description of such models is beyond the scope of our paper; we point to the work by Healey et al. [20] for a detailed state-of-the-art summary.

Clearly, each variable has its advantages and drawbacks. Certain features, such as shape or 3D depth, have to alter the underlying visualization and can lead to misinterpretation. Other features suffer in terms of accuracy when the target object is located in the peripheral area. A detailed study was conducted by Gutwin et al. [16] to determine how different variables behave with varying distance from the focus point. This is a rather critical attribute, as multi-monitor setups gain more and more popularity. For these use cases, Gutwin et al. suggest applying motion or flickering, as these variables perform well even at large angles. In the process, the authors applied the NASA-TLX survey, establishing comparisons among several variables. Therefore, we decided to rely on the same questionnaire to compare our outcome to the well-established visual cues.
Preattentive visual cues play an important role in visualization (e.g., [53]) and other areas such as human-computer interaction (e.g., [29]). One prominent use case is directing the attention of users to a certain object of interest. Several emphasis methods have recently been summarized by Hall et al. [17], and methods for attention modeling in general can be found in the state-of-the-art report by Borji et al. [7] and in the summary paper by Healey et al. [20]. Not surprisingly, none of these summaries mention binocular disparities as a means for guiding attention or annotating objects.

In the following paragraphs, we present a brief selection of established attention-guidance techniques. Hoffmann et al. [21] utilized visual cues in order to highlight the active window in a multi-monitor setup. The authors evaluated five types of window frames and masks and finally suggested a combination of color and tapered trails to guide the users' attention. Cole et al. [10] introduced the Stylized Focus, a variation of shading effects in order to draw the viewers' attention to a specific 3D object. The Popout Prism presented by Suh et al. [45] is an overview+detail-based system that enhances the representation of documents. The authors used the two popout effects color and size in order to visually emphasize elements of interest and also reduce the cognitive load.

Alper et al. [2] utilized the stereoscopic depth cue for highlighting purposes in 2D and 3D graph visualizations by projecting objects of interest onto a plane closer to the user. Additionally, the authors introduced a juxtaposing mechanism to allow focus and context views. The authors also emphasized the benefits of using highlighting mechanisms that do not reserve important visual attributes such as motion or color.

Possible applications and design guidelines for the rather dynamic popout techniques flicker, direction, and velocity were studied by Huber et al. [23]. The adoption of flickering for dynamic narrative visualizations, the so-called Attractive Flicker, was discussed in detail by Waldner et al. [52]. In a first stage, user attention is attracted by a short and intensive flickering of the target object. The engagement stage relies on a less disturbing luminance oscillation and helps to keep track of the target. In contrast to cues such as color and size, the flickering does not distort the scene elements. Our technique goes a step further, since we do not alter any visible property of the object of interest.
The term dichoptic presentation is applied when each eye is exposed to different stimuli, i.e., two different images. One of the resulting phenomena is called binocular rivalry (e.g., [1, 6, 15, 28, 41]). Instead of a stable single image, our vision switches to a mode with alternating periods of monocular dominance. One explanation is that the monocular neurons compete in the primary visual cortex and lead to the mentioned rivalry with regard to the interpretation of the image. Interestingly, as shown by Logothetis et al. [28], the rivalry does not depend on the eye, i.e., the dominant eye does not behave differently. One reason is that, in most cases, we are not able to determine which eye has detected a certain distinct stimulus. The research by Baker [4] addresses that possible loss of information by utilizing EEG and multivariate pattern analysis. Baker concludes that such eye-of-origin knowledge is lost to our perception and consciousness. We also evaluate our technique for both eyes separately to explore the interplay between factors such as eye dominance and the preattentive processing of Deadeye.

Wolfe et al. [55] claimed that binocular rivalry cannot be processed preattentively, the only exception being the luster effect that we mentioned previously. Zou et al. [62] revisited that research and concluded that though interocular differences can guide attention in some cases (e.g., luster), the effects are rather weak and overridden by stronger features such as orientation or luminance. However, we show that Deadeye is another valid exception and propose to reconsider binocular rivalry in terms of its preattentiveness. Similarly to our suggestion, Friedenberg [15] also pointed out that it is still not clear whether binocular rivalry falls under late-stage voluntary attentional control or is processed by preattentive mechanisms. Further evidence for reconsidering rivalry for attention guidance can be found in the work of Paffen et al. [40].
The authors conducted a study with ten participants to see how transparent, monocular, and binocular changes in images are perceived. They concluded that change blindness is attenuated in cases where the change is monocular. The work by Zhaoping [61] also aligns with our findings by indicating that ocular discontinuities have the potential to automatically capture our attention. In her studies, Zhaoping compared ocular singletons to orientation singletons and focused on the role of our primary visual cortex during the creation of bottom-up saliency maps for attention guidance.

Dichoptic presentation was extensively studied regarding luminance variation, i.e., exposing an object with different brightness to the left and right eye. How we perceive such a phenomenon was described in the work by de Weert et al. [11] and Teller et al. [46]. Anstis et al. [3] explored luminance effects in a more diversified way by adding other presentation techniques such as flicker to their comparisons. The research by Formankiewicz et al. [14] also contributed to that field by examining how such luminance disparities are detected, revealing similarities to the detection of surface properties.

The use of dichoptic presentation for creating novel visual experiences has gained little attention in science. Most closely related to our proposed method is the work by Zhang et al. [60]. The authors briefly discuss several ideas for “Unconventional Binocular Presentation”, with highlighting as one of those proposed applications. Unfortunately, the short note does not evaluate any of those thoughts in more detail. Later, Zhang [59] revisited some of the ideas and focused on the luster effects.

Apart from rivalry, dichoptic presentation also takes place in the case of binocular disparity. The term is associated with our ability to extract depth information based on the fact that our eyes have a slight horizontal offset and perceive two slightly different images.
For further details on the basics of our stereo vision, we refer to the work of Julesz et al. [25, 26] and Marr et al. [30, 31]. Caziot and Backus [8] performed a thorough study on the effects and parameters of stereoscopic offsets to improve object recognition. We, however, avoid spatial offsets, as the position of a data point cannot generally be changed without changing its value, e.g., in plots or graphs. For an overview and classification of the numerous other uses of stereoscopic 3D, we refer the reader to the formalization paper by Schild et al. [44] and Schild's dissertation [43]. In contrast to those rather common depth perception approaches, the so-called da Vinci stereopsis [36] refers to cases where we are able to extract depth information based on monocular occlusions. We point the readers to the state-of-the-art report by Harris et al. [18] for an overview. In addition, the more recent findings by Tsirlin et al. [51] indicate that depth perception in such cases is most likely due to occlusion geometry.

Fig. 3. An extract from the data about elections to the Australian House of Representatives, 1949-2007, represented as a line chart. In three trials of our prestudy, we enhanced one or multiple lines using Deadeye and asked the subjects to name the corresponding data.

THE DEADEYE EFFECT
We make use of dichoptic presentation and render the target object for one eye only. The other eye sees the plain background at the target location. In contrast to most binocular rivalry experiments, we do not show different objects at the target location. We simply create one image with and one image without the object. Hence, we claim that the conflict produced by the monocular neurons can be easily resolved. Our high-level visual system does not have to decide between two different objects. Instead, we get an image that is not unusual in our daily life. Think of a distant object that you look at through a small hole – the visual system does not report any conflicts, although the object is seen with one eye only (the dominant one).

Despite that naturalness, we can recognize the object enhanced with Deadeye in a split second. The object is best described as “eye-catching” or “somehow wrong”, although we can focus on it and perceive all details without trouble. We attribute that popping out to our stereo vision ability. Extracting depth information and fusing two slightly offset images happens nearly instantly. However, depth calculations for the target object result in an error, i.e., a single scene element cannot be assigned a depth relative to the other objects, and the visual system preattentively recognizes that something went wrong.

An additional explanation to be considered is self-preservation mechanisms: we instantly react to a visual stimulus that is placed right in front of our eye, e.g., by closing our lid and moving back from the possible danger. In such cases, the stimulus is too near and visible with one eye only. Hence, we encourage more in-depth research including brain monitoring via fMRI to determine the exact cause of the observed effect. Our contribution focuses in the first place on establishing Deadeye as a preattentive visualization technique.
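As a minimal sketch, generating such a stimulus pair can look as follows. The image resolution, circle radius, and the `draw_circle` helper are our own illustrative choices, not the implementation used in the study.

```python
# Minimal sketch of a Deadeye stimulus pair: both eyes see the same
# distractor circles, but the target circle is rendered for one eye only.
import numpy as np

H, W = 240, 320  # assumed image resolution

def draw_circle(img, cx, cy, r, value=255):
    yy, xx = np.ogrid[:H, :W]
    img[(yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2] = value

def deadeye_pair(positions, target_index, target_eye="left"):
    """Return (left, right) images; the circle at positions[target_index]
    appears in target_eye only, all other circles appear in both."""
    left = np.zeros((H, W), dtype=np.uint8)   # plain background
    right = np.zeros((H, W), dtype=np.uint8)
    for i, (cx, cy) in enumerate(positions):
        if i == target_index:
            draw_circle(left if target_eye == "left" else right, cx, cy, 10)
        else:
            draw_circle(left, cx, cy, 10)
            draw_circle(right, cx, cy, 10)
    return left, right

left, right = deadeye_pair([(60, 60), (160, 120), (260, 180)], target_index=1)
# The pair differs exactly at the target location and nowhere else.
assert (left != right).any() and (left[:, :100] == right[:, :100]).all()
```

Presented through any stereo channel (shutter glasses, anaglyph, HMD), such a pair yields the popout without altering the target's color, shape, or position.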
Though Deadeye can be applied to a broad variety of visualization purposes, we cover a few basic examples to motivate the research on that technique. One possible scenario is the visualization of high-dimensional data via line charts where we want to attract the observers' attention to one or multiple lines of interest (cf. Figure 3).

Applying Deadeye offers a unique advantage that cannot be achieved with any of the existing popout cues: highlighting the line without “wasting” any visual dimensions such as color or motion. The technique can be regarded as an additional degree of freedom for visual encoding that is orthogonal to the existing methods. This is essential, as real-world data visualizations tend to utilize as many visual properties (color, stroke width, etc.) as available to cover all data dimensions. Deadeye-enhanced line charts do not have to reserve a visual attribute for highlighting and, thus, can include more data dimensions.

The same argumentation is also valid for other representations such as bar charts, pie charts, or spider diagrams. Note that applying the “default” stereo vision to make a line stand out from the chart is often not an option for such cases, as depth alters the coordinates of the underlying data and might result in wrong interpretations.

Fig. 4. Visualization of a transferase (Nicotinic Acid Mononucleotide Adenylyltransferase). As part of our exploratory prestudy, we applied Deadeye on a small number of atoms (between 4 and 8) and asked the participants to count the elements that pop out.

Another mentionable area is comparative visualization. Hereby, one of the common methods is to expose side-by-side views of the data. Such comparison tasks can be quite difficult and are even used as a challenge in so-called spot-the-difference puzzles (cf. Figure 1). Our research on Deadeye demonstrates that there could be a much more effective way: the two images can be jointly exposed to our left and right eye. Hereby, the Deadeye effect allows a rapid recognition and localization of missing or differing objects, be it for scientific or entertaining purposes.

Note that though Deadeye requires stereo equipment, the corresponding hardware has become a commodity in recent years. 3D glasses, stereo projectors, and 3D TVs are widespread but rarely used in day-to-day visualization applications. Hence, exploiting these devices for information highlighting is an alternative way of “recycling” such barely used equipment.
Prior to our main study, we conducted three exploratory experiments regarding the general applicability of Deadeye in different visualization scenarios. Our goals were twofold. First, we verified that subjects are able to exactly locate the popout targets and not only perceive a certain visual mismatch. Second, we determined whether the presence of other cues, such as color, limits the applicability of our technique.

We executed the following three prestudies: naming the Deadeye-enhanced line in a line chart (seven subjects, three trials, task cf. Figure 3), counting popping-out atoms in a 3D visualization (four subjects, two trials, task cf. Figure 4), and naming the eye-catching image elements in a spot-the-difference puzzle scenario (five subjects, one trial, task cf. Figure 1). We did not impose any time constraints and only recorded the correctness of the answers.

In all cases, participants were able to locate and report the targets without any side effects such as headache. Additionally, in the line chart case, we verified that subjects were able to read the y-axis values aloud for several data points to make sure that information extraction is not affected. We conclude that Deadeye-enhanced objects can indeed be spotted and followed in a variety of visualization scenarios. Moreover, the technique was not affected by the presence of different colors or by a 3D context (atom counting). Note, however, that the exact interplay of the technique with other preattentive cues is a subject of future research.
EVALUATION
We conducted a user study to examine whether Deadeye can be perceived preattentively. Our experiment design is based on existing best practices for determining such effects. The participants are exposed to a series of images composed of distractors and possibly a target object, as shown in Figure 5. The participants have to decide whether the target object is present or not. As described in the Related Work section, two approaches exist. One either displays the images for a fixed amount of time (100-250 ms) and measures the error rate, or the image is shown until the participants make a decision. In the latter case, one also has to consider the reaction time. In our case, we applied the fixed-time option and displayed each image for 250 ms.

Fig. 5. Our experimental setup. We equipped participants with active shutter glasses and presented a series of images on a 3D TV. After each image, the participants decided whether there was a target object or not. The images contained a varying number of circles jittered on a 5x6 grid. The two screenshots are a stereo pair example from our largest set with 30 objects. Readers may try to stereo-fuse the images to perceive the popout effect.

To prove the preattentive nature of Deadeye, the setup has to be repeated with varying set sizes. The error rate must remain nearly constant, no matter how many distractor objects are presented. Therefore, we considered four sets with the following sizes: 4, 8, 16, and 30. Additionally, we evaluated the conjunction search property of Deadeye by combining our technique with color. Since the common 3D depth is a visual cue that can be combined in parallel with other variables, one might assume the same property for our technique.
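One set of trials in this design can be sketched as follows. The counterbalancing (48 trials per set, half target-present, present trials split evenly between the eyes) follows the procedure described in this section; the jitter magnitude, seed, and data layout are our own illustrative assumptions.

```python
# Sketch of one experimental set: 48 trials, half containing a Deadeye
# target, target-present trials split evenly between the eyes, and circle
# positions jittered on a 5x6 grid.
import random

def jittered_grid(rows=5, cols=6, cell=1.0, jitter=0.2, rng=random):
    """One jittered position per grid cell (unit-less coordinates)."""
    return [((c + 0.5 + rng.uniform(-jitter, jitter)) * cell,
             (r + 0.5 + rng.uniform(-jitter, jitter)) * cell)
            for r in range(rows) for c in range(cols)]

def make_set(set_size, n_trials=48, seed=0):
    rng = random.Random(seed)
    trials = ([{"target": True, "eye": "left"} for _ in range(n_trials // 4)] +
              [{"target": True, "eye": "right"} for _ in range(n_trials // 4)] +
              [{"target": False, "eye": None} for _ in range(n_trials // 2)])
    rng.shuffle(trials)
    for trial in trials:
        positions = jittered_grid(rng=rng)
        rng.shuffle(positions)
        trial["positions"] = positions[:set_size]  # set size: 4, 8, 16, or 30
    return trials

trials = make_set(set_size=30)
assert sum(t["target"] for t in trials) == 24   # half of the trials
assert sum(t["eye"] == "left" for t in trials) == 12
```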
Our main hypothesis H1 is that Deadeye is indeed a popout technique that can be perceived preattentively.
Therefore, the error rate has to be sufficiently low and remain constant across the different set sizes. Most of the related work considers an error rate below 10% as acceptable.

Readers might suppose that Deadeye feels uncomfortable for our visual system. Therefore, we formulate a second hypothesis H2 as follows: Deadeye does not lead to headache or other physical strain.
Otherwise, the application possibilities of that visual cue would be rather limited.
The study took place in a virtual reality laboratory at our university. After informing participants about the study's procedure, we administered a first questionnaire to assess general demographic data. In addition, we asked whether the participants have any visual impairment and conducted a hole-in-the-card test for eye dominance (Dolman method, e.g., [9, 42]). Participants had to hold a DIN A4 sheet of paper at arm's length and fixate a distant object through a hole in the middle of the paper. They closed the left and right eye in turn and reported whether they still saw the object. If the object disappeared, the closed eye was marked as the dominant eye.

The main part of our study took place in front of a 3D TV (W / H / D 122.40 x 74.10 x 30.60 cm, 1080p) with active shutter glasses and a refresh rate of 60 Hz. The room light was switched off and the curtains were shut in order to minimize the flickering of the shutter glasses. The participants were placed on a chair 280 cm in front of the screen, as depicted in Figure 5. The distance resulted in a horizontal viewing angle of 12.63° from the focus point (vertical: 7.54°). We gave the participants a keyboard with explicitly marked yes and no keys and told them to use their thumbs or index fingers for executing the input. The keys were chosen to have the maximum possible distance.

We told the participants that a series of images would be presented. Each image has a blue background and contains a number of yellow circles. Possibly, one of the circles pops out. The image remains visible for a split second. After the inspection, the participants have to press yes if they think that the target object was present, and no otherwise. Between the images, a white crosshair on the same blue background was displayed in the middle of the screen. We explicitly advised the participants to focus on the crosshair.
This is important as it prevents saccadic eye movements, i.e., the participants' eyes remain in the fixation stage when they see the image, because the saccade has an initiation time of approximately 200-250 ms. The crosshair image was always presented for 2500 ms and disappeared when the screen switched to the actual image. We also informed the participants that they did not have to rush with their answer; during that answer period, the same blue background without the crosshair was shown.

In addition, we explained that there would be a training stage before each set and that audio feedback would indicate whether the given answer was correct or not. During the real experiment, the two sounds would be replaced with a third, rather neutral sound. This prevented participants from becoming distracted by thinking about wrong answers and, thus, making subsequent errors due to a lack of concentration.

Overall, that part of the experiment consisted of four set sizes: 4, 8, 16, and 30 circles. For each set, participants had the chance to practice until they felt comfortable and told the examiner to begin with the real test. After each set, the participants were asked if they needed a pause or wanted to continue. Each set consisted of 48 images, half of them with a target object, in a randomized order. Each participant experienced the same configuration. For the 24 images with a target object, 12 were rendered for the right eye and 12 for the left eye. The order was again randomly chosen. We applied that variation mainly to see whether there is a dependency between the error rate for an eye and its dominance.

The positions for the circles were randomly generated on a 5x6 grid with a jittering/offset function, as can be seen in Figure 5. We also left a vertical margin of 11.48 cm and a horizontal margin of 17.44 cm, limiting the overall horizontal viewing angle to about 8.88° from the focus point (vertical: 5.22°). Each circle had a size of 4.59 cm, or approximately 0.94°.
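The reported visual angles follow from simple trigonometry: an object of size s viewed at distance d subtends 2·arctan(s/2d). A quick check for the circle size, assuming the 280 cm viewing distance given above:

```python
# Visual angle of a stimulus: angle = 2 * atan(size / (2 * distance)).
# Checking the reported values: a 4.59 cm circle at 280 cm viewing distance.
import math

def visual_angle_deg(size_cm, distance_cm):
    return math.degrees(2.0 * math.atan(size_cm / (2.0 * distance_cm)))

print(round(visual_angle_deg(4.59, 280.0), 2))  # → 0.94, matching the reported value
```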
This setup led to the image being located in the focal, paracentral, and near-peripheral vision areas.

After the four sets were completed, we administered a web-based effort questionnaire mainly based on the NASA-TLX survey [19]. The main reason for choosing NASA-TLX is that the work by Gutwin et al. [16] used the same questionnaire for a variety of preattentive cues; hence, we strived to enable a meaningful comparison to other techniques.

The NASA-TLX survey contains six subscales; each scale ranges from 0 to 100 in increments of 5. The following aspects are measured: mental demand (low/high), physical demand (low/high), temporal demand (low/high), performance (good/poor), effort (low/high), and frustration level (low/high).

We included three additional questions on a seven-point Likert scale ranging from 0 to 6, with larger numbers indicating a more positive outcome: how well have you perceived the popout object?, how sure were you that you made the right decisions?, and how well were you able to focus the crosshair?. We refer to these custom questions as clearness, decision-making, and focus, respectively. We also included a binary question about whether the participants experienced any headache or related physical strains.

Fig. 6. The average accuracies for the first experiment are nearly constant and confirm that Deadeye is a preattentive cue. As is often the case for such features, false negatives (M = .) significantly dominate over false positives (M = .). In the case of the conjunction search, the images were shown until the participants made a decision. The average accuracies and reaction times significantly decrease with an increasing number of distractors, exposing the serial character of the search.

In sum, 21 persons (9 females, 12 males), aged 18 to 42 (M = ., SD = .), took part in the study; most of them were students (N = 15) or employees (N = .).

The average accuracies (4 objects: M = ., SD = .09; 8 objects: M = ., SD = .09; 16 objects: M = ., SD = .06; 30 objects: M = ., SD = .09) are similar to those of other preattentive visual cues. We applied a repeated-measures ANOVA with the set size as within-subject variable to investigate whether the number of distractors influences the accuracy. The result, F(,) = ., p = ., confirms that the accuracy remains nearly constant across set sizes. False negatives (M = ., SD = .) significantly dominated over false positives (M = ., SD = .), t() = −., p = . We also compared the accuracies for targets rendered to the dominant eye (M = ., SD = .13) and to the non-dominant eye (M = ., SD = .), t() = −., p = ., with no significant difference.

After the preattentive experiment, we administered a questionnaire based on the NASA-TLX survey (see Figure 7) and our four custom questions. All variables were normally distributed according to Kolmogorov-Smirnov tests.

The NASA-TLX results for this experiment are rather homogeneous: mental demand (M = ., SD = .) and the remaining five subscales (M = ., SD = .; M = ., SD = .; M = ., SD = .; M = ., SD = .; M = ., SD = .17) were rated quite similarly. However, the replies were widely spread, and the reported min/max values contained both 0 and 100 for each subscale; this is also reflected in the rather large standard deviations.

The custom questions regarding clearness (M = ., SD = .) and focus (M = ., SD = .49) are rather above average, whereas decision-making (M = ., SD = .53) is slightly below, i.e., the participants were not very sure whether they made the correct decisions. All participants gave a negative answer to the question regarding headache or similar strains.
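Comparisons such as the dominant vs. non-dominant eye accuracies above rely on paired-samples t-tests. A minimal sketch of the underlying computation; the per-participant accuracy values are hypothetical, not the study's data:

```python
import math

def paired_t(xs, ys):
    """Paired-samples t statistic and degrees of freedom for two
    matched lists of per-participant accuracies."""
    assert len(xs) == len(ys) and len(xs) > 1
    n = len(xs)
    diffs = [x - y for x, y in zip(xs, ys)]
    mean_d = sum(diffs) / n
    # Sample variance of the differences (n - 1 in the denominator).
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    t = mean_d / math.sqrt(var_d / n)
    return t, n - 1

# Hypothetical accuracies (dominant vs. non-dominant eye):
dominant = [0.92, 0.88, 0.95, 0.90, 0.85]
non_dominant = [0.90, 0.89, 0.93, 0.91, 0.86]
t, df = paired_t(dominant, non_dominant)
```

The resulting t value is then compared against the t distribution with df degrees of freedom (e.g., via a statistics package) to obtain the p value.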
An important question for preattentive processing is the possibility of combining multiple features. In their extensive research, Treisman et al. [48-50] pointed out that searching for a single visual stimulus can be conducted in parallel, whereas we can perform only a serial search for an item defined by the conjunction of two visual variables. An example conjunction of hue and form is depicted in Figure 2: the target object could be either a red circle among blue circles or a blue square among red squares. Wolfe et al. [54] experimented with different conjunctions of color, motion, size, and orientation, even considering a combination of three cues.

Treisman's assumptions were partially disproved by McLeod et al. [32] and, later, Müller et al. [34]: the authors reported that a conjunction of form and motion can be processed in parallel. Furthermore, as pointed out by Townsend [47], the general question of whether a vision process is serial or parallel is still widely discussed.

Our second experiment was inspired by the findings on conjunction search by Nakayama et al. [37]. The authors discovered that depth can be combined in parallel with other variables such as color and motion. The conducted experiments grouped objects in a front and a back plane. In the case of color, objects in the front plane were red and objects in the back plane were blue; hence, a target object was either a blue one in the front or a red one in the back, as depicted in Figure 2. The authors claim that our visual system processes the planes in an alternating way, and that the search in each plane is executed in parallel.

Fig. 7. Results of the NASA-TLX survey for both experiments. Lower values are preferable.

Since stereo vision is also based on binocular disparities, one might assume that our technique could lead to similar effects with regard to a conjunction search. Therefore, we conduct a similar experiment by combining Deadeye with color to evaluate that assumption.
Although our technique does not produce 3D depth, Deadeye also relies on a binocular disparity. Hence, the similarity in the internal processing, i.e., the merging of two images, might lead to a similar property for our method. Therefore, our last hypothesis H3 suggests that Deadeye can be combined in parallel with color as a second visual cue.

Subsequent to the questionnaire of the preattentive experiment, we explained our conjunction search experiment to the participants: now, half of the circles would be magenta. All yellow circles pop out, i.e., have Deadeye applied, whereas all magenta circles do not pop out. A target object has one of two properties: either it is a yellow circle that does not pop out, or it is a magenta circle that pops out. The main difference from the first experiment is that each image would be displayed until the participants made a decision. We asked the subjects to perform as quickly and as accurately as possible. We changed the timing strategy for exploratory reasons, as it allowed us to gather more information about the behavior of our technique in such a conjunction search.

Since the time was not fixed for this experiment, we decided to limit it to three set sizes: 4, 8, and 16 circles. Again, we offered a training phase for each set size, and each set was composed of 48 images. Also, 24 images had a target object: 12 with a popping-out magenta circle and 12 with an unmodified yellow circle. In each subgroup, 6 of the target objects were rendered for the left eye and 6 for the right eye. The ordering was randomized in the same fashion as in the first experiment. After completion of the three sets, we asked the participants to fill out the same questionnaire as after the first experiment. Overall, the study took about one hour.
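The counterbalancing described above can be sketched as a small trial-list generator. This is an illustrative reconstruction, not the authors' code; the names and the seed are ours:

```python
import random

def build_conjunction_set(seed: int = 42):
    """One conjunction-search set: 48 trials, half without a target.
    Targets split evenly between a popping-out magenta circle and an
    unmodified yellow circle; within each subgroup, the one-eyed
    rendering is assigned to the left eye in half of the trials and
    to the right eye in the other half."""
    trials = [("absent", None, None)] * 24
    for color in ("magenta", "yellow"):
        for eye in ("left", "right"):
            trials += [("present", color, eye)] * 6
    random.Random(seed).shuffle(trials)  # randomized trial order
    return trials

trials = build_conjunction_set()
```

Using a seeded generator makes the randomized order reproducible across participants, matching the paper's statement that each participant experienced the same configuration.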
For this experiment, we analyzed the accuracy and the reaction time. All variables were normally distributed according to Kolmogorov-Smirnov tests. The average accuracies decrease with increasing object count (4 objects: M = ., SD = .09; 8 objects: M = ., SD = .12; 16 objects: M = ., SD = .); the repeated-measures ANOVA result, F(,) = ., p = ., exposes a significant effect of the set size (pairwise: p = ., p = ., p = .). Similarly, the reaction times increase with the object count (4 objects: M = . s, SD = .84; 8 objects: M = . s, SD = .88; 16 objects: M = . s, SD = .), F(,) = ., p < . (pairwise: p = ., p < ., p < .). We also compared the accuracies for the popping-out magenta target (M = ., SD = .24) and the non-modified yellow target (M = ., SD = .), t() = ., p = .

Again, we administered a questionnaire based on the NASA-TLX survey (see Figure 7) and our four custom questions. All variables were normally distributed according to Kolmogorov-Smirnov tests. The NASA-TLX survey exposes rather high scores on the subscales mental demand (M = ., SD = .84) and effort (M = ., SD = .); the remaining subscales were rated as follows: M = ., SD = .; M = ., SD = .; M = ., SD = .; M = ., SD = . The custom questions regarding clearness (M = ., SD = .), focus (M = ., SD = .54), and especially decision-making (M = ., SD = .65) were all rather below the average. Nevertheless, similar to the first experiment, no participant reported a headache or similar strains.

We conducted paired-samples t-tests to compare the outcomes of the questionnaires for the two experiments. There is a significant difference for the following subscales: mental demand (t() = −., p = .), as well as physical demand, performance, clearness, and focus (t() = −., p = .; t() = −., p = .; t() = ., p = .; t() = ., p = .).

DISCUSSION
The performance evaluation of the first experiment supports our main hypothesis H1 that Deadeye is indeed a preattentive technique. The participants recognized the feature within a 250 ms time frame with an average accuracy of ~90%, independently of the number of distractors. This accuracy is similar to the outcome of most preattentive experiments described in the Related Work section and is considered sufficient proof.

With regard to wrong answers, ~70% of the errors were false negatives. In other words, it is more likely to overlook a popout target than to wrongly imagine its presence, which is also common behavior for preattentive techniques.

We can also confirm our second hypothesis H2 that our technique does not lead to physical strains such as headache. All participants replied no to the corresponding question, both for the preattentive and for the conjunction experiment.

Hence, our finding is indeed an important contribution to fundamental research, since Deadeye offers several advantages over other preattentive techniques. It is simple to implement (the object has to be removed for one eye) and, in comparison to most other methods, it does not alter any visible properties of the object such as color, position, size, or motion.

Our third hypothesis H3 stated that Deadeye can be processed in parallel when combined with color as a second visual cue. Clearly, this is not the case, and the hypothesis has to be rejected. The accuracy and the reaction time are both significantly worse when the set size increases. The reaction time for only 4 objects is already greater than 2 seconds (i.e., not preattentive) and increases up to 3.5 seconds for 16 objects. Similarly, such an addition of 12 distractors results in an accuracy drop from 87% to 77%. Nevertheless, the accuracies show that a conjunction is still possible, although the search has to be executed serially.

Hence, Deadeye does not share the properties of 3D depth as a visual cue regarding conjunction search. Nakayama et al. [37] suggest that 3D depth works in parallel because our visual system is able to separate the objects into the near and the far plane and to analyze each plane in turn. Hereby, the objects on each plane are processed in parallel. Although Deadeye eliminates any depth information of the target objects and puts them into a zero-depth plane, our experiment resulted in serial processing.

We hypothesize that the distance between the two planes does matter. Deadeye enhancements are rather subtle compared to two planes with a significant depth offset. Hence, we propose to repeat the 3D depth experiment and to gradually reduce the distance between the planes. We assume that there is a certain minimum threshold for parallel processing. If this is not the case, either the two-plane explanation is not fully correct, or our visual system is not able to group Deadeye-enhanced objects into a single depth plane. More aligned with our results, the work by O'Toole et al. [39] also reported that the question of parallel vs. serial search has several nuances when it comes to 3D depth as a cue. In particular, it matters whether targets and distractors have crossed or uncrossed disparities, and whether targets are behind or in front of distractors.

To conclude, if a conjunction search is required, 3D depth has an advantage. However, preattentive cues are rarely used for conjunctions. For the majority of popout applications, 3D depth has the drawback of modifying the position of the target object, which is often a significant issue. Deadeye, in contrast, preserves the correct object position.

An interesting aspect of preattentive cues is the subjective user perception. The first thing we notice is that users tend to underestimate their performance. This can be seen in the rather average results on the performance scale of the NASA-TLX (how successful do you think you were in accomplishing the goals?) and our custom question on decision-making (how sure were you that you made the right decisions?). Both results contradict the achieved accuracy of ~90%. On the other hand, this is not surprising when we recall that the preattentive analysis happens unconsciously. The images were shown for a split second, and subjects had to rely on a rather unchecked, intuitive result provided by the visual system.

Fig. 8. Average success rates for the recognition of a Deadeye-enhanced object at each screen position. The matrix indicates an increase of the error rate with increasing distance from the focus point.

Our second experiment was rated significantly worse regarding mental and physical demand, performance, and clearness. Overall, we can conclude that the task was more challenging and less straightforward compared to the first experiment. The target object could only be discovered by a conscious and serial search, which explains these scores. The participants also reported a significant decrease in their ability to focus on the crosshair before each image. We attribute this outcome to the fact that a serial search does not benefit from a stationary period, and an immediate saccadic eye movement appeared more efficient to most subjects.

When comparing the NASA-TLX results of our first experiment to the outcomes reported by Gutwin et al. [16], we can observe that Deadeye performed slightly better than existing cues (note the different scales). Especially the subscales performance and effort appear to be significantly better and indicate that Deadeye can compete even against canonical cues such as color.

However, that comparison should be taken with a grain of salt. The authors explored the cues with a significantly larger viewing angle to study the behavior regarding peripheral vision. Hence, the difference in results might be due to the viewing angle. Unfortunately, there is not much work on the comparison of different features, and we encourage further steps towards a comprehensive overview.

Our analysis shows an additional relation to the peripheral experiments of Gutwin et al., as the authors discovered that certain cues were less efficient when applied far from the focus point.
Similarly, our accuracy matrix in Figure 8 exposes a drop-off in performance for the outer columns. This indicates that Deadeye suffers from the same reduced peripheral efficiency as, e.g., shape and color. In our experiment, the image area covered a total visual angle of ∼ ° and included the central, paracentral, and near-peripheral regions. Our depth perception decreases with the distance from the focus point (e.g., [33]) and nearly disappears in the peripheral region. Since we assume that the depth computation process is at least partially responsible for discovering Deadeye-enhanced objects, such a drop-off, or even a complete inapplicability in far-peripheral regions, appears plausible.

Another observation from our experiment is that it does not matter whether Deadeye is rendered for the dominant or the non-dominant eye. This behavior is in line with the findings of, e.g., Logothetis et al. [28], stating that binocular rivalry is not eye-dependent.

Certainly, Deadeye has limitations that need to be discussed. First of all, Deadeye requires a stereoscopic environment, since a different image needs to be exposed to each eye. This fact renders the technique less convenient for everyday usage and requires additional hardware.

In addition, we suggest evaluating Deadeye in a 3D environment such as virtual reality. Our technique eliminates the depth cues of the target object; thus, we assume that Deadeye might perform differently in such a scenario. Another disadvantage is that Deadeye is not applicable for one-eyed users. On the other hand, this kind of visual restriction is significantly less frequent than, e.g., color vision deficiency.

A mentionable difference between Deadeye and most other preattentive cues is its binary character. Our current method has no graduation, i.e., the object is either popping out or not. In contrast, cues such as hue can be applied at varying intensity levels, which is an additional degree of freedom and, e.g., allows a differentiation between target objects.

CONCLUSION AND FUTURE WORK
Our contribution is Deadeye, a novel preattentive visualization technique. Preattentive cues are used in a plethora of visualization approaches and interaction paradigms and allow us to enhance objects of interest such that they pop out independently of the number of distractors. Our technique contributes to fundamental research in visualization in two ways. First, the discovery of a preattentive cue opens the door for further research and several possible applications. Second, Deadeye is the first preattentive technique that does not alter any visible properties of the target object. In contrast to existing cues, our method does not displace, recolor, reshape, or animate the target. Hence, the probability of misinterpreting the data is minimized, which is a significant benefit compared to existing methods.

Deadeye is based on presenting two slightly different images to the human visual system: the target object that should pop out is rendered for one eye only. We evaluated the method with a state-of-the-art study design commonly applied to popout variables and demonstrated that Deadeye can indeed be perceived preattentively. Three smaller, explorative experiments illustrate real-world applications of our technique in different visualization setups. In addition, we conducted a conjunction search experiment by combining Deadeye with color as a second cue. Our results showed that, in contrast to common 3D depth, Deadeye conjunctions cannot be processed in parallel.

Our initial findings raise additional research questions that we will address in future work. Our preliminary lab experiments indicate that Deadeye delivers robust performance even if distractors of different kinds are present. Our first tests with objects of different shapes and colors support that assumption and will be extended in the future. This would be a significant advantage over many other preattentive cues, since the data to be visualized is often composed of heterogeneous objects.

Another interesting experiment would be to apply our technique to moving objects to evaluate how the effect performs in dynamic scenes. Furthermore, we suggest integrating and evaluating Deadeye in sophisticated attention guidance setups and in existing application-level tools that already rely on popout techniques. In summary, we believe that dichoptic presentation has the potential to become a useful ingredient of the visualization toolbox beyond just stereoscopic 3D.

ACKNOWLEDGMENTS
We are immensely grateful to Christine Pickett for her comments that greatly improved the manuscript. We also wish to thank Rolf Rehe for reminding us how we used the wall-eyed vision trick to solve spot-the-difference puzzles when we were kids. We would also like to show our gratitude to the anonymous reviewers for their detailed feedback and all the valuable suggestions they made.

REFERENCES

[1] D. Alais and R. Blake. Binocular Rivalry. MIT Press, 2005.
[2] B. Alper, T. Hollerer, J. Kuchera-Morin, and A. Forbes. Stereoscopic highlighting: 2D graph visualization on stereo displays. IEEE Transactions on Visualization and Computer Graphics, 17(12):2325-2333, Dec. 2011. doi: 10.1109/TVCG.2011.234
[3] S. Anstis and A. Ho. Nonlinear combination of luminance excursions during flicker, simultaneous contrast, afterimages and binocular fusion. Vision Research, 38(4):523-539, 1998.
[4] D. H. Baker. Decoding eye-of-origin outside of awareness. NeuroImage, 147:89-96, 2017.
[5] B. Bauer, P. Jolicoeur, and W. B. Cowan. Visual search for targets that are or are not linearly separable from distracters. Vision Research, 36:1439-1465, 1996.
[6] R. Blake. A neural theory of binocular rivalry. Psychological Review, 96(1):145, 1989.
[7] A. Borji and L. Itti. State-of-the-art in visual attention modeling. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1):185-207, Jan. 2013. doi: 10.1109/TPAMI.2012.89
[8] B. Caziot and B. T. Backus. Stereoscopic offset makes objects easier to recognize. PLoS ONE, 10(6):e0129101, 2015. doi: 10.1371/journal.pone.0129101
[9] C.-Y. Cheng, M.-Y. Yen, H.-Y. Lin, W.-W. Hsia, and W.-M. Hsu. Association of ocular dominance and anisometropic myopia. Investigative Ophthalmology & Visual Science, 45(8):2856-2860, 2004.
[10] F. Cole, D. DeCarlo, A. Finkelstein, K. Kin, K. Morley, and A. Santella. Directing gaze in 3D models with stylized focus. In Proceedings of the 17th Eurographics Conference on Rendering Techniques, EGSR '06, pp. 377-387. Eurographics Association, Aire-la-Ville, Switzerland, 2006. doi: 10.2312/EGWR/EGSR06/377-387
[11] C. M. De Weert and W. J. M. Levelt. Binocular brightness combinations: Additive and nonadditive aspects. Perception & Psychophysics, 15(3):551-562, 1974.
[12] J. T. Enns and R. A. Rensink. Influence of scene-based properties on visual search. Science, 247(4943):721, 1990.
[13] J. T. Enns and R. A. Rensink. Sensitivity to three-dimensional orientation in visual search. Psychological Science, 1(5):323-326, 1990.
[14] M. A. Formankiewicz and J. Mollon. The psychophysics of detecting binocular discrepancies of luminance. Vision Research, 49(15):1929-1938, 2009.
[15] J. Friedenberg. Visual Attention and Consciousness. Psychology Press, 2012.
[16] C. Gutwin, A. Cockburn, and A. Coveney. Peripheral popout: The influence of visual angle and stimulus intensity on popout effects. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, CHI '17, pp. 208-219. ACM, New York, NY, USA, 2017. doi: 10.1145/3025453.3025984
[17] K. Hall, C. Perin, P. Kusalik, and C. Gutwin. Formalizing emphasis in information visualization. In Computer Graphics Forum, vol. 35, pp. 717-737, 2016.
[18] J. M. Harris and L. M. Wilcox. The role of monocularly visible regions in depth and surface perception. Vision Research, 49(22):2666-2685, 2009.
[19] S. G. Hart and L. E. Staveland. Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. Advances in Psychology, 52:139-183, 1988.
[20] C. Healey and J. Enns. Attention and visual memory in visualization and computer graphics. IEEE Transactions on Visualization and Computer Graphics, 18(7):1170-1188, July 2012. doi: 10.1109/TVCG.2011.127
[21] R. Hoffmann, P. Baudisch, and D. S. Weld. Evaluating visual cues for window switching on large screens. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '08, pp. 929-938. ACM, New York, NY, USA, 2008. doi: 10.1145/1357054.1357199
[22] L. Huang and H. Pashler. A boolean map theory of visual attention. Psychological Review, 114(3):599, 2007.
[23] D. E. Huber and C. G. Healey. Visualizing data with motion. In Visualization, 2005 (VIS 05), pp. 527-534. IEEE, 2005.
[24] L. Itti and C. Koch. Computational modelling of visual attention. Nature Reviews Neuroscience, 2(3):194, 2001.
[25] B. Julesz. Binocular depth perception of computer-generated patterns. Bell Labs Technical Journal, 39(5):1125-1162, 1960.
[26] B. Julesz. Foundations of Cyclopean Perception. 1971.
[27] B. Julesz et al. Textons, the elements of texture perception, and their interactions. Nature, 290(5802):91-97, 1981.
[28] N. K. Logothetis, D. A. Leopold, and D. L. Sheinberg. What is rivalling during binocular rivalry? Nature, 380(6575):621, 1996.
[29] C. Malamed. Visual Language for Designers: Principles for Creating Graphics that People Understand. Rockport Publishers, 2009.
[30] D. Marr and T. Poggio. A computational theory of human stereo vision. Proceedings of the Royal Society of London B: Biological Sciences, 204(1156):301-328, 1979.
[31] D. Marr, T. Poggio, et al. Cooperative computation of stereo disparity. From the Retina to the Neocortex, pp. 239-243, 1976.
[32] P. McLeod, J. Driver, and J. Crisp. Visual search for a conjunction of movement and form is parallel. Nature, 332(6160):154-155, 1988.
[33] H. Mochizuki, N. Shoji, E. Ando, M. Otsuka, K. Takahashi, and T. Handa. The magnitude of stereopsis in peripheral visual fields. Kitasato Med J, 41:1-5, 2012.
[34] H. J. Müller and A. von Muhlenen. Visual search for conjunctions of motion and form: The efficiency of attention to static versus moving items depends on practice. Visual Cognition, 6(3-4):385-408, 1999.
[35] A. L. Nagy and R. R. Sanchez. Critical color differences determined with a visual search task. JOSA A, 7(7):1209-1217, 1990.
[36] K. Nakayama and S. Shimojo. Da Vinci stereopsis: Depth and subjective occluding contours from unpaired image points. Vision Research, 30(11):1811-1825, 1990. doi: 10.1016/0042-6989(90)90161-D
[37] K. Nakayama and G. H. Silverman. Serial and parallel processing of visual feature conjunctions. Nature, 320(6059):264-265, 1986.
[38] D. Noton and L. Stark. Scanpaths in saccadic eye movements while viewing and recognizing patterns. Vision Research, 11(9):929-IN8, 1971. doi: 10.1016/0042-6989(71)90213-6
[39] A. J. O'Toole and C. L. Walker. On the preattentive accessibility of stereoscopic disparity: Evidence from visual search. Perception & Psychophysics, 59(2):202-218, 1997.
[40] C. L. Paffen, R. S. Hessels, and S. Van der Stigchel. Interocular conflict attracts attention. Attention, Perception, & Psychophysics, 74(2):251-256, 2012.
[41] C. L. E. Paffen, I. T. C. Hooge, J. S. Benjamins, and H. Hogendoorn. A search asymmetry for interocular conflict. Attention, Perception, & Psychophysics, 73(4):1042-1053, May 2011. doi: 10.3758/s13414-011-0100-3
[42] C. Porac and S. Coren. The dominant eye. Psychological Bulletin, 83(5):880, 1976.
[43] J. Schild. Deep Gaming: The Creative and Technological Potential of Stereoscopic 3D Vision for Interactive Entertainment. PhD thesis, Charleston, SC, 2014.
[44] J. Schild and M. Masuch. Formalizing the potential of stereoscopic 3D user experience in interactive entertainment. vol. 9391, pp. 93911D-1-93911D-12, 2015.
[45] B. Suh, A. Woodruff, R. Rosenholtz, and A. Glass. Popout prism: Adding perceptual principles to overview+detail document interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '02, pp. 251-258. ACM, New York, NY, USA, 2002. doi: 10.1145/503376.503422
[46] D. Y. Teller and E. Galanter. Brightnesses, luminances, and Fechner's paradox. Perception & Psychophysics, 2(7):297-300, 1967.
[47] J. T. Townsend. Serial vs. parallel processing: Sometimes they look like Tweedledum and Tweedledee but they can (and should) be distinguished. Psychological Science, 1(1):46-54, 1990.
[48] A. Treisman and S. Gormican. Feature analysis in early vision: Evidence from search asymmetries. Psychological Review, 95(1):15, 1988.
[49] A. Treisman and J. Souther. Illusory words: The roles of attention and of top-down constraints in conjoining letters to form words. Journal of Experimental Psychology: Human Perception and Performance, 12(1):3, 1986.
[50] A. M. Treisman and G. Gelade. A feature-integration theory of attention. Cognitive Psychology, 12(1):97-136, 1980.
[51] I. Tsirlin, L. M. Wilcox, and R. S. Allison. Da Vinci decoded: Does da Vinci stereopsis rely on disparity? Journal of Vision, 12(12):2-2, 2012.
[52] M. Waldner, M. L. Muzic, M. Bernhard, W. Purgathofer, and I. Viola. Attractive flicker: Guiding attention in dynamic narrative visualizations. IEEE Transactions on Visualization and Computer Graphics, 20(12):2456-2465, Dec. 2014.
[53] C. Ware. Information Visualization: Perception for Design. Elsevier, 2012.
[54] J. M. Wolfe, K. R. Cave, and S. L. Franzel. Guided search: An alternative to the feature integration model for visual search. Journal of Experimental Psychology: Human Perception and Performance, 15(3):419, 1989.
[55] J. M. Wolfe and S. L. Franzel. Binocularity and visual search. Attention, Perception, & Psychophysics, 44(1):81-93, 1988.
[56] J. M. Wolfe and T. S. Horowitz. Five factors that guide attention in visual search. Nature Human Behaviour, 1(3):0058, 2017.
[57] A. Yarbus. Eye Movements and Vision. Plenum Press, New York, 1967.
[58] A. L. Yarbus. Eye movements during perception of complex objects. In Eye Movements and Vision, pp. 171-211. Springer, 1967.
[59] H. Zhang. Spectacularly Binocular: Exploiting Binocular Luster Effects for HCI Applications. PhD thesis, School of Computing, National University of Singapore, 2014.
[60] H. Zhang, X. Cao, and S. Zhao. Beyond stereo: An exploration of unconventional binocular presentation for novel visual experience. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '12, pp. 2523-2526. ACM, New York, NY, USA, 2012. doi: 10.1145/2207676.2208638
[61] L. Zhaoping. Attention capture by eye of origin singletons even without awareness—a hallmark of a bottom-up saliency map in the primary visual cortex. Journal of Vision, 8(5):1-1, 2008.
[62] B. Zou, I. S. Utochkin, Y. Liu, and J. M. Wolfe. Binocularity and visual search—revisited.